Papers updated in last 183 days (Page 12 of 1451 results)

Last updated:  2023-12-22
A Novel Preprocessing-Free Proofless Verifiable Computation Scheme from Integer Factoring
Alex Dalton, David Thomas, and Peter Cheung
Verifiable Computation (VC) schemes provide a mechanism for verifying the output of a remotely executed program. These are used to support computing paradigms wherein a computationally restricted client, the Verifier, wishes to delegate work to a more powerful but untrusted server, the Prover. The Verifier wishes to detect any incorrect results, be they accidental or malicious. The current state-of-the-art is only close-to-practical, usually because of a computationally demanding setup which must be amortised across repeat executions. We present a VC scheme for verifying the output of arithmetic circuits with a small one-time setup, KGen, independent of the size of the circuit being verified, and an insignificantly small, constant, program-specific setup, ProbGen. To our knowledge, our VC scheme is the first built from the hardness of integer factoring, a standard cryptographic assumption. Our scheme has the added novelty that the proofs are simply the raw output of the target computation, and the Prover is in effect blind to the fact that they are taking part in a VC scheme at all. Although our scheme has worse asymptotic performance than the state-of-the-art, it is particularly well suited for verifying one-shot programs and the output of large integer polynomial evaluation.
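To give a feel for the "blind Prover" idea in the polynomial-evaluation use case, here is a toy sketch of the general flavor only (checking a raw big-integer result against a secret modulus); it is not the paper's actual factoring-based construction, and all names and parameters below are ours.

```python
# Toy illustration only (not the paper's construction): verifying a claimed
# big-integer polynomial evaluation against a secret prime. The Prover does
# plain arithmetic and returns the raw result; the Verifier checks it cheaply
# modulo a prime the Prover never learns.
import random

def eval_poly(coeffs, x, mod=None):
    # Horner evaluation of sum(coeffs[i] * x**i), optionally modulo `mod`.
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
        if mod is not None:
            acc %= mod
    return acc

SECRET_PRIME = 2**61 - 1                                # Verifier's secret
coeffs = [random.getrandbits(2048) for _ in range(50)]  # big-integer polynomial
x = random.getrandbits(2048)

claimed = eval_poly(coeffs, x)                          # Prover: heavy work
check = eval_poly([c % SECRET_PRIME for c in coeffs],   # Verifier: 61-bit work
                  x % SECRET_PRIME, SECRET_PRIME)
assert claimed % SECRET_PRIME == check                  # accept iff residues agree
```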
Last updated:  2023-12-22
HELIOPOLIS: Verifiable Computation over Homomorphically Encrypted Data from Interactive Oracle Proofs is Practical
Diego F. Aranha, Anamaria Costache, Antonio Guimarães, and Eduardo Soria-Vazquez
Homomorphic encryption (HE) enables computation on encrypted data, which in turn facilitates the outsourcing of computation on private data. However, HE offers no guarantee that the returned result was honestly computed by the cloud. In order to have such a guarantee, it is necessary to add verifiable computation (VC) into the system. The most efficient recent works in VC over HE focus on verifying operations on the ciphertext space of the HE scheme, which usually lacks the algebraic structure that would make it compatible with existing VC systems. For example, multiplication of ciphertexts in the current most efficient HE schemes requires non-algebraic operations such as real division and rounding. Therefore, existing works for VC over HE have to either give up on those efficient HE schemes, or incur a large overhead (an amount of constraints proportional to the ciphertext ring's size) in order to emulate these non-algebraic operations. In this work, we move away from that paradigm by placing the verification checks in the plaintext space of HE, all while the prover remains computing on ciphertexts. We achieve this by introducing a general transformation for Interactive Oracle Proofs (IOPs) to work over HE, whose result we denote as HE-IOPs. We apply this same transformation to the FRI [Ben-Sasson et al., ICALP 2018] IOP of proximity and we show how to compile HE-Reed Solomon-encoded IOPs and HE-$\delta$-correlated-IOPs with HE-FRI into HE-IOPs. Furthermore, our construction is compatible with a prover that provides input in zero-knowledge, and only relies on building blocks that are plausibly quantum-safe. Aligning the security parameters of HE and FRI is a difficult task for which we introduce several optimizations. We demonstrate their efficiency with a proof-of-concept implementation in Python and show that, for an encrypted Reed Solomon codeword with degree bound $2^{11}$ and rate $1/16$ in a (plaintext) field of size $2^{256}$, we can run FRI's commit phase in just 43 minutes on a single thread on a c6i.metal instance (which could be reduced to less than a minute in a multi-threaded implementation on a large server). Verification takes less than 0.2 seconds, and, based on micro-benchmarks of the employed techniques, we show it could be up to 100 times faster in a fully optimized implementation.
Last updated:  2023-12-22
Zero-Knowledge Proofs for Set Membership: Efficient, Succinct, Modular
Daniel Benarroch, Matteo Campanelli, Dario Fiore, Kobi Gurkan, and Dimitris Kolonelos
We consider the problem of proving in zero knowledge that an element of a public set satisfies a given property without disclosing the element, i.e., for some $u$, ``$u \in S$ and $P(u)$ holds''. This problem arises in many applications (anonymous cryptocurrencies, credentials or whitelists) where, for privacy or anonymity reasons, it is crucial to hide certain data while ensuring properties of such data. We design new \textit{modular} and \textit{efficient} constructions for this problem through new \textit{commit-and-prove zero-knowledge systems for set membership}, i.e., schemes proving $u \in S$ for a value $u$ that is in a public commitment $c_u$. We also extend our results to support {\em non-membership proofs}, i.e., proving $u \notin S$. Being commit-and-prove, our solutions can act as plug-and-play modules in statements of the form ``$u \in S$ and $P(u)$ holds'' by combining our set (non-)membership systems with any other commit-and-prove scheme for $P(u)$. Also, they work with Pedersen commitments over prime-order groups, which makes them compatible with popular systems such as Bulletproofs or Groth16. We implemented our schemes as a software library, and experimentally tested their performance. Compared to previous work that achieves similar properties---the clever techniques combining zkSNARKs and Merkle Trees in Zcash---our solutions offer more flexibility, shorter public parameters and $3.7 \times$--$30\times$ faster proving time for a set of size $2^{64}$.
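For reference, the Pedersen commitments the abstract mentions are computationally simple objects; the sketch below (our own toy parameters, a tiny Schnorr group rather than the elliptic-curve groups used in practice) shows the commitment $c_u = g^u h^r$ that a commit-and-prove set-membership system would operate on.

```python
# Toy Pedersen commitment over the order-q subgroup of Z_p^* (illustrative
# 11-bit parameters; practical systems use ~256-bit elliptic-curve groups).
import random

p, q = 2039, 1019   # p = 2q + 1, both prime
g, h = 4, 9         # squares mod p, hence of order q; in practice h must be
                    # derived (e.g., by hashing) so nobody knows log_g(h)

def commit(u, r):
    # Pedersen commitment c_u = g^u * h^r mod p: hiding and binding.
    return (pow(g, u % q, p) * pow(h, r % q, p)) % p

u = 42                    # the committed value, e.g. a set element
r = random.randrange(q)   # blinding factor
c_u = commit(u, r)
# A commit-and-prove system would now prove in zero knowledge that the
# value inside c_u belongs to a public set S.
```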
Last updated:  2023-12-22
SnarkFold: Efficient SNARK Proof Aggregation from Split Incrementally Verifiable Computation
Xun Liu, Shang Gao, Tianyu Zheng, and Bin Xiao
The succinct non-interactive argument of knowledge (SNARK) technique is widely used in blockchain systems to replace the costly on-chain computation with the verification of a succinct proof. However, when dealing with multiple proofs, most existing applications require each proof to be independently verified, resulting in a heavy load on nodes and high transaction fees for users. To improve the efficiency of verifying multiple proofs, we introduce SnarkFold, a universal SNARK-proof aggregation scheme based on incrementally verifiable computation (IVC). Unlike previous proof aggregation approaches based on inner product arguments, which have a logarithmic proof size and verification cost, SnarkFold achieves constant verification time and proof size. One core technical advance in SnarkFold, of independent interest, is the ``split IVC'': rather than using one running instance to fold/accumulate the computation, we employ two (or more) running instances of different types in the recursive circuit to avoid converting them into a single common structure. This distinguishing feature is particularly well-suited for proof aggregation scenarios, as constructing arithmetic circuits for pairings can be expensive. We further demonstrate how to fold Groth16 proofs with our SnarkFold. With some further optimizations, SnarkFold achieves the highest efficiency among existing approaches.
Last updated:  2023-12-22
Long Paper: Provable Secure Parallel Gadgets
Francesco Berti, Sebastian Faust, and Maximilian Orlt
Side-channel attacks are a fundamental threat to the security of cryptographic implementations. One of the most prominent countermeasures against side-channel attacks is masking, where each intermediate value of the computation is secret shared, thereby concealing the computation's sensitive information. An important security model to study the security of masking schemes is the random probing model, in which the adversary obtains each intermediate value of the computation with some probability $p$. To construct secure masking schemes, an important building block is the refreshing gadget, which updates the randomness of the secret shared intermediate values. Recently, Dziembowski, Faust, and Zebrowski (ASIACRYPT'19) analyzed the security of a simple refreshing gadget by using a new technique called the leakage diagram. In this work, we follow the approach of Dziembowski et al. and significantly improve its methodology. Concretely, we refine the notion of a leakage diagram via so-called dependency graphs, and show how to use this technique for arbitrary complex circuits via composition results and approximation techniques. To illustrate the power of our new techniques, as a case study, we designed provably secure parallel gadgets for the random probing model, and adapted the ISW multiplication such that all gadgets can be parallelized. Finally, we evaluate concrete security levels, and show how our new methodology can further improve the concrete security level of masking schemes. This results in a compiler provably secure up to a noise level of $O(1)$ for affine circuits and $O(1/n^n)$ in general.
Last updated:  2023-12-22
A one-time single-bit fault leaks all previous NTRU-HRSS session keys to a chosen-ciphertext attack
Daniel J. Bernstein
This paper presents an efficient attack that, in the standard IND-CCA2 attack model plus a one-time single-bit fault, recovers the NTRU-HRSS session key. This type of fault is expected to occur for many users through natural DRAM bit flips. In a multi-target IND-CCA2 attack model plus a one-time single-bit fault, the attack recovers every NTRU-HRSS session key that was encapsulated to the targeted public key before the fault. Software carrying out the full multi-target attack, using a simulated fault, is provided for verification. This paper also explains how a change in NTRU-HRSS in 2019 enabled this attack.
Last updated:  2023-12-22
The Fiat--Shamir Transformation of $(\Gamma_1,\dots,\Gamma_\mu)$-Special-Sound Interactive Proofs
Thomas Attema, Serge Fehr, Michael Klooß, and Nicolas Resch
The Fiat-Shamir transformation is a general principle to turn any public-coin interactive proof into a non-interactive one (with security then typically analyzed in the random oracle model). While initially used for 3-round protocols, many recent constructions use it for multi-round protocols. However, in general the soundness error of the Fiat-Shamir transformed protocol degrades exponentially in the number of rounds. On the positive side, it was shown that for the special class of $(k_1,\dots,k_\mu)$-special-sound protocols the loss is actually only linear in the number of random oracle queries, and independent of the number of rounds, which is optimal. A natural next question is whether this positive result extends to the Fiat-Shamir transformation of so-called $(\Gamma_1,\dots,\Gamma_\mu)$-special-sound protocols, a notion recently defined and analyzed in the interactive case, with the aim to capture the most general notion of special-soundness. We show in this work that this is indeed the case. Concretely, we show that the Fiat-Shamir transformation of any $(\Gamma_1,\dots,\Gamma_\mu)$-special-sound interactive proof is knowledge sound under the same condition under which the original interactive proof is knowledge sound. Furthermore, also here the loss is linear in the number of random-oracle queries and independent of the number of rounds. In light of the above, one might suspect that our argument follows as a straightforward combination of the above-mentioned prior works. However, this is not the case. The approach used for $(k_1,\dots,k_\mu)$-special-sound protocols, which is based on an extractor that samples without replacement, does not (seem to) generalize; on the other hand, the other approach, which uses an extractor based on sampling with replacement, comes with an additional loss that would blow up in the recursive multi-round analysis.
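As background, the transformation itself is compact. The sketch below (our own toy parameters) applies Fiat-Shamir to the 3-round Schnorr protocol, replacing the verifier's random challenge with a random-oracle hash of the statement and the first message; the multi-round special-sound setting the paper analyzes generalizes this pattern.

```python
# Sketch: Fiat-Shamir applied to the 3-round Schnorr protocol (toy group).
# The hash H plays the role of the random oracle supplying the challenge.
import hashlib, random

p, q, g = 2039, 1019, 4   # order-q subgroup of Z_p^*, illustrative sizes only

def H(*ints):
    data = b"|".join(str(i).encode() for i in ints)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = random.randrange(1, q)   # witness
y = pow(g, x, p)             # statement: y = g^x

# Prover (non-interactive): challenge derived from statement + first message.
k = random.randrange(q)
a = pow(g, k, p)
e = H(y, a)
z = (k + e * x) % q

# Verifier: recompute the challenge and check g^z == a * y^e (mod p).
assert e == H(y, a) and pow(g, z, p) == (a * pow(y, e, p)) % p
```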
Last updated:  2023-12-22
Generalized Special-Sound Interactive Proofs and their Knowledge Soundness
Thomas Attema, Serge Fehr, and Nicolas Resch
A classic result in the theory of interactive proofs shows that a special-sound $\Sigma$-protocol is automatically a proof of knowledge. This result is very useful to have, since the latter property is typically tricky to prove from scratch, while the former is often easy to argue -- if it is satisfied. While classic $\Sigma$-protocols often are special-sound, this is unfortunately not the case for many recently proposed, highly efficient interactive proofs, at least not in this strict sense. Motivated by this, the original result was recently generalized to $k$-special-sound $\Sigma$-protocols (for arbitrary, polynomially bounded $k$), and to multi-round versions thereof. This generalization is sufficient to analyze (e.g.) Bulletproofs-like protocols, but is still insufficient for many other examples. In this work, we push the relaxation of the special soundness property to the extreme, by allowing an arbitrary access structure $\Gamma$ to specify for which subsets of challenges it is possible to compute a witness, when given correct answers to these challenges (for a fixed first message). Concretely, for any access structure $\Gamma$, we identify parameters $t_\Gamma$ and $\kappa_\Gamma$, and we show that any $\Gamma$-special-sound $\Sigma$-protocol is a proof of knowledge with knowledge error $\kappa_\Gamma$ if $t_\Gamma$ is polynomially bounded. We show a similar result for multi-round $(\Gamma_1,\dots,\Gamma_\mu)$-special-sound interactive proofs. We apply our general result to a couple of simple but important example protocols, where we obtain a tight knowledge error as an immediate corollary. Beyond these simple examples, we analyze the FRI protocol. Here, showing the general special soundness notion is non-trivial, but can be done (for a certain range of parameters) by recycling some of the techniques used to argue ordinary soundness of the protocol (as an IOP). Again as a corollary, we then derive that the FRI protocol, as an interactive proof by using a Merkle-tree commitment, has a knowledge extractor with almost optimal knowledge error, with the caveat that the extractor requires (expected) quasi-polynomial time. Finally, building on the technique for the parallel repetition of $(k_1,\dots,k_\mu)$-special-sound interactive proofs, we show the same strong parallel repetition result for $(\Gamma_1,\dots,\Gamma_\mu)$-special-sound interactive proofs.
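A minimal worked example of the classic case, assuming the standard Schnorr protocol (our own toy parameters): 2-special-soundness means that two accepting transcripts sharing a first message but with distinct challenges suffice to extract the witness, which is exactly why such protocols are proofs of knowledge.

```python
# Worked example: 2-special-soundness of Schnorr's protocol (toy group).
# Two accepting transcripts (a, e1, z1), (a, e2, z2) with e1 != e2 yield
# the witness -- the mechanism behind "special-sound => proof of knowledge".
import random

p, q, g = 2039, 1019, 4
x = random.randrange(1, q); y = pow(g, x, p)

k = random.randrange(q); a = pow(g, k, p)   # shared first message
e1, e2 = 11, 222                            # two distinct challenges
z1, z2 = (k + e1 * x) % q, (k + e2 * x) % q

# Extractor: x = (z1 - z2) / (e1 - e2) mod q.
x_extracted = (z1 - z2) * pow((e1 - e2) % q, -1, q) % q
assert x_extracted == x
```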
Last updated:  2023-12-22
Concurrent Asynchronous Byzantine Agreement in Expected-Constant Rounds, Revisited
Ran Cohen, Pouyan Forghani, Juan Garay, Rutvik Patel, and Vassilis Zikas
It is well known that without randomization, Byzantine agreement (BA) requires a linear number of rounds in the synchronous setting, while it is flat-out impossible in the asynchronous setting. The primitive that allows one to bypass the above limitation is known as oblivious common coin (OCC). It allows parties to agree with constant probability on a random coin, where agreement is oblivious, i.e., players are not aware whether or not agreement has been achieved. The starting point of our work is the observation that no known protocol exists for information-theoretic multi-valued OCC---i.e., OCC where the coin might take a value from a domain of cardinality larger than 2---with optimal resiliency in the asynchronous (with eventual message delivery) setting. This apparent hole in the literature is particularly problematic, as multi-valued OCC is implicitly or explicitly used in several constructions. (In fact, it is often falsely attributed to the asynchronous BA result by Canetti and Rabin [STOC ’93], which, however, only achieves binary OCC and does not translate to a multi-valued OCC protocol.) In this paper, we present the first information-theoretic multi-valued OCC protocol in the asynchronous setting with optimal resiliency, i.e., tolerating $t<n/3$ corruptions, thereby filling this important gap. Further, our protocol efficiently implements OCC with an exponential-size domain, a property which is not even achieved by known constructions in the simpler, synchronous setting. We then turn to the problem of round-preserving parallel composition of asynchronous BA. A protocol for this task was proposed by Ben-Or and El-Yaniv [Distributed Computing ’03]. Their construction, however, is flawed in several ways: For starters, it relies on multi-valued OCC instantiated by Canetti and Rabin's result (which, as mentioned above, only provides binary OCC). This shortcoming can be repaired by plugging in our above multi-valued OCC construction. However, as we show, even with this fix it remains unclear whether the protocol of Ben-Or and El-Yaniv achieves its goal of expected-constant-round parallel asynchronous BA, as the proof is incorrect. Thus, as a second contribution, we provide a simpler, more modular protocol for the above task. Finally, and as a contribution of independent interest, we provide proofs in Canetti's Universal Composability framework; this makes our work the first one offering composability guarantees, which are important as BA is a core building block of secure multi-party computation protocols.
Last updated:  2023-12-22
Division of Regulatory Power: Collaborative Regulation for Privacy-Preserving Blockchains
Tianyu Zhaolu, Zhiguo Wan, and Huaqun Wang
Decentralized anonymous payment schemes may be exploited for illicit activities, such as money laundering, bribery and blackmail. To address this issue, several regulation-friendly decentralized anonymous payment schemes have been proposed. However, most of these solutions lack restrictions on the regulator’s authority, which could potentially result in power abuse and privacy breaches. In this paper, we present a decentralized anonymous payment scheme with collaborative regulation (DAPCR). Unlike existing solutions, DAPCR reduces the risk of power abuse by distributing regulatory authority to two entities: Filter and Supervisor, neither of which can decode transactions to access transaction privacy without the assistance of the other one. Our scheme enjoys three major advantages over others: ① Universality, achieved by using zk-SNARK to extend privacy-preserving transactions for regulation. ② Collaborative regulation, attained by adding the ring signature with controllable linkability to the transaction. ③ Efficient aggregation of payment amounts, achieved through amount tags. As a key technology for realizing collaborative regulation in DAPCR, the ring signature with controllable linkability (CLRS) is proposed, where a user needs to specify a linker and an opener to generate a signature. The linker can extract pseudonyms from signatures and link signatures submitted by the same signer based on pseudonyms, without leaking the signer’s identity. The opener can recover the signer’s identity from a given pseudonym. The experimental results reflect the efficiency of DAPCR. The time overhead for transaction generation is 1231.2 ms, representing an increase of less than 50% compared to ZETH. Additionally, the time overhead for transaction verification is only 1.2 ms.
Last updated:  2023-12-22
Distinguisher and Related-Key Attack on HALFLOOP-96
Jinpeng Liu and Ling Sun
HALFLOOP-96 is a 96-bit tweakable block cipher used in high frequency radio to secure automatic link establishment messages. In this paper, we concentrate on its differential properties in the contexts of conventional, related-tweak, and related-key differential attacks. Using automatic techniques, we determine the minimum number of active S-boxes and the maximum differential probability in each of the three configurations. The resistance of HALFLOOP-96 to differential attacks in the conventional and related-tweak configurations is good, and the longest distinguishers in both configurations consist of five rounds. In contrast, the security of the cipher against differential attacks in the related-key configuration is inadequate. The most effective related-key distinguisher we can find spans eight rounds. The 8-round related-key differential distinguisher is then utilised to initiate a 9-round weak-key attack. With $2^{92.96}$ chosen plaintexts, 38.77 bits of equivalent information about the keys can be recovered. Even though the attack does not pose a significant security threat to HALFLOOP-96, its security margin in the related-key configuration is exceedingly narrow. Therefore, improper use in applications must be avoided.
Last updated:  2023-12-22
One for All, All for One: A Unified Evaluation Framework for Univariate DPA Attacks
Jiangshan Long, Chenxu Wang, Changhai Ou, Zhu Wang, Yongbin Zhou, and Ming Tang
Success Rate (SR) is one of the most popular security metrics measuring the efficiency of side-channel attacks. Its theoretical expression reveals the functional dependency on critical parameters such as the number of measurements and the Signal-to-Noise Ratio (SNR), helping evaluators understand the threat of an attack as well as how one can mitigate it with proper countermeasures. So far, however, existing works exhibit fundamental problems: (i) the evaluations are restricted to a very limited number of distinguishers, and the methods in the literature seem specialized (i.e., hard to extend); (ii) the evaluations assume an a priori perfect leakage model, which lacks practical relevance and ignores the fact that inaccurate profiling may lead to information loss and a distorted SR. In this paper, we tackle the above problems by providing an evaluation framework where different univariate DPA distinguishers are intuitively unified as a linear maximum-likelihood attack seeking the closest `distance' between vectors in Euclidean space. We argue that this is an intrinsic property of the DPA mechanism and is independent of the leakage model. Then, we abstract the concept of SR and derive the theoretical expression in a geometric way. Finally, the theory allows a further study of the leakage model, where we formalize criteria explaining the impact of model errors as well as guaranteeing robust performance. We transfer the model effects to a degraded SNR parameter. Experimental results are in line with the theory, confirming that our theoretical expressions coincide with the empirical ones.
Last updated:  2023-12-22
New Security Proofs and Complexity Records for Advanced Encryption Standard
Orhun Kara
Common block ciphers like AES, specified by NIST, or KASUMI (A5/3) of GSM are extensively utilized by billions of individuals globally to protect their privacy and maintain confidentiality in daily communications. However, these ciphers lack comprehensive security proofs against the vast majority of known attacks. Currently, security proofs are limited to differential and linear attacks for both AES and KASUMI. For instance, the consensus on the security of AES is not based on formal mathematical proofs but on intensive cryptanalysis over its reduced rounds spanning several decades. In this work, we introduce new security proofs for AES against another attack method: impossible differential (ID) attacks. We classify ID attacks as reciprocal and nonreciprocal ID attacks. We show that sharp and generic lower bounds can be imposed on the data complexities of reciprocal ID attacks on substitution permutation networks. We prove that the minimum data required for a reciprocal ID attack on AES using a conventional ID characteristic is $2^{66}$ chosen plaintexts, whereas a nonreciprocal ID attack involves at least $2^{88}$ computational steps. We mount a nonreciprocal ID attack on 6-round AES for 192-bit and 256-bit keys, which requires only $2^{18}$ chosen plaintexts and outperforms the data complexity of any known attack. Given its marginal time complexity, this attack does not pose a substantial threat to the security of AES. However, we have made enhancements to the integral attack on 6-round AES, thereby surpassing the longstanding record for the most efficient attack after a period of 23 years.
Last updated:  2023-12-22
PNB-focused Differential Cryptanalysis of ChaCha Stream Cipher
Shotaro Miyashita, Ryoma Ito, and Atsuko Miyaji
This study focuses on differential cryptanalysis of the ChaCha stream cipher. In the conventional approach, an adversary first searches for an input/output differential pair with the highest differential bias and then analyzes the probabilistic neutral bits (PNB) based on the obtained input/output differential pair. However, although the time and data complexities for the attack can be estimated by the differential bias and PNB obtained by this approach, the combination of the differential bias and PNB is not always optimal. In addition, the existing studies have not performed a comprehensive analysis of the PNB; thus, they have not provided an upper bound on the number of rounds required for a differential attack that uses a single-bit truncated differential to be successful. To address these limitations, we propose a PNB-focused differential attack on reduced-round ChaCha by first comprehensively analyzing the PNB for all possible single-bit truncated output differences and then searching for the input/output differential pair with the highest differential bias based on the obtained PNB. The best existing attack on ChaCha, proposed by Beierle et al. at CRYPTO 2020, works on up to 7 rounds, whereas the most extended attack we observed works on up to 7.25 rounds using the proposed PNB-focused approach. The time complexity, data complexity, and success probability of the proposed attack are \(2^{255.62}\), \(2^{48.36}\), and 0.5, respectively. Although the proposed attack is less efficient than a brute force attack, it is the first dedicated attack on the target and provides both a baseline and useful components (i.e., differential bias and PNB) for improved attacks.
Last updated:  2023-12-21
Fully Parallel, One-Cycle Random Shuffling for Efficient Countermeasure in Post-Quantum Cryptography
Jong-Yeon Park, Dongsoo Lee, Seonggyeom Kim, Wonil Lee, Bo Gyeong Kang, and Kouichi Sakurai
Hiding countermeasures are the most widely utilized techniques for thwarting side-channel attacks, and their significance has been further emphasized with the advent of Post Quantum Cryptography (PQC) algorithms, owing to the extensive use of vector operations. Commonly, the Fisher-Yates algorithm is adopted in hiding countermeasures with permuted operation for its security and efficiency in implementation, yet the inherently sequential nature of the algorithm imposes limitations on hardware acceleration. In this work, we propose a novel method named Addition Round Rotation (ARR), which can introduce a time-area trade-off with block-based permutation. Our findings indicate that this approach can achieve a permutation complexity level commensurate with or exceeding $2^{128}$ in a single clock cycle while maintaining substantial resistance against second-order analysis. To substantiate the security of our proposed method, we introduce a new validation technique, Identity Verification. This technique allows theoretical validation of the proposed algorithm's security and is consistent with the experimental results. Finally, we introduce an actual hardware design and provide the implementation results on an Application-Specific Integrated Circuit (ASIC). The measured performance demonstrates the practical applicability of our proposal.
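For context, here is the textbook Fisher-Yates shuffle the abstract contrasts against: each swap depends on the array state produced by all earlier swaps, which is the sequential bottleneck motivating the proposed block-based ARR approach.

```python
# The classical Fisher-Yates shuffle: iteration i operates on the array
# state left by the previous iterations, so the swaps cannot be issued in
# parallel without changing the output distribution.
import random

def fisher_yates(a):
    for i in range(len(a) - 1, 0, -1):
        j = random.randrange(i + 1)   # uniform in [0, i]
        a[i], a[j] = a[j], a[i]       # depends on all earlier swaps
    return a

print(fisher_yates(list(range(16))))
```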
Last updated:  2023-12-21
More forging (and patching) of tropical signatures
Daniel R. L. Brown and Chris Monico
Panny [3] described how to forge the “tropical signatures” proposed by Chen, Grigoriev and Shpilrain [1]. (These signatures are loosely related to the NP-complete problem of factoring tropical polynomials). We describe more methods to forge these tropical signatures. We also describe some patches that thwart all but one of these forgery methods (which we summarize as re-hashing an honest signature).
Last updated:  2023-12-21
Upgrading Fuzzy Extractors
Chloe Cachet, Ariel Hamlin, Maryam Rezapour, and Benjamin Fuller
Fuzzy extractors derive stable keys from noisy sources non-interactively (Dodis et al., SIAM Journal of Computing 2008). Since their introduction, research has focused on two tasks: 1) showing security for as many distributions as possible and 2) providing stronger security guarantees including allowing one to enroll the same value multiple times (reusability), security against an active attacker (robustness), and preventing leakage about the enrolled value (privacy). Existing constructions of reusable fuzzy extractors are direct and do not support as many distributions as the best non-reusable constructions. Constructions of robust fuzzy extractors require strong assumptions even in the CRS model. Given the need for progress on the basic fuzzy extractor primitive, it is prudent to seek generic mechanisms to transform a fuzzy extractor into one that is robust, private, and reusable so that it can inherit further improvements. This work asks if one can generically upgrade fuzzy extractors to achieve robustness, privacy, and reusability. We show positive and negative results: we show upgrades for robustness and privacy, but we provide a negative result on reuse. 1. We upgrade (private) fuzzy extractors to be robust under weaker assumptions than previously known in the common reference string model. 2. We show a generic upgrade for a private fuzzy extractor using multi-bit compute and compare (MBCC) obfuscation (Wichs and Zirdelis, FOCS 2017) that requires less entropy than prior work. 3. We show one cannot arbitrarily compose private fuzzy extractors. It is known one cannot reuse an arbitrary fuzzy extractor; each enrollment can leak a constant fraction of the input entropy. We show that one cannot build a reusable private fuzzy extractor by considering other enrollments as auxiliary input. In particular, we show that assuming MBCC obfuscation and collision-resistant hash functions, there does not exist a private fuzzy extractor secure against unpredictable auxiliary inputs strengthening a negative result of Brzuska et al. (Crypto 2014).
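As background on the primitive being upgraded, below is a toy code-offset construction in the style of Dodis et al. (our own simplified instantiation with a 3x repetition code): the public helper string hides the enrolled reading, and any sufficiently close reading reproduces the same key.

```python
# Toy code-offset fuzzy extractor (simplified: 3x repetition code, so one
# bit-flip per 3-bit block is tolerated). `helper` is public; `key` is secret.
import hashlib, secrets

def encode(msg):   # 3x repetition code
    return [b for b in msg for _ in range(3)]

def decode(cw):    # majority vote per block corrects a single flip
    return [int(sum(cw[i:i + 3]) >= 2) for i in range(0, len(cw), 3)]

def enroll(w):     # w: noisy reading of length 3*m bits
    msg = [secrets.randbits(1) for _ in range(len(w) // 3)]
    helper = [wi ^ ci for wi, ci in zip(w, encode(msg))]
    return hashlib.sha256(bytes(msg)).digest(), helper

def reproduce(w_noisy, helper):
    cw = [wi ^ hi for wi, hi in zip(w_noisy, helper)]
    return hashlib.sha256(bytes(decode(cw))).digest()

w = [secrets.randbits(1) for _ in range(30)]
key, helper = enroll(w)
w_noisy = w.copy(); w_noisy[4] ^= 1   # one flipped bit at reproduction time
assert reproduce(w_noisy, helper) == key
```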
Last updated:  2023-12-21
Falcon Takes Off - A Hardware Implementation of the Falcon Signature Scheme
Michael Schmid, Dorian Amiet, Jan Wendler, Paul Zbinden, and Tao Wei
Falcon is one out of three post-quantum signature schemes which have been selected for standardization by NIST in July 2022. To the best of our knowledge, Falcon is the only selected algorithm that does not yet have a publicly reported hardware description that performs signing or key generation. The reason might be that the Falcon signature and key generation algorithms do not fit well in hardware due to the use of floating-point numbers and recursive functions. This publication describes the first hardware implementation for Falcon signing and key generation. To overcome the complexity of the Falcon algorithms, High-Level Synthesis (HLS) was preferred over a hardware description language like Verilog or VHDL. Our HLS code is based on the C reference implementation available at NIST. We describe the required modifications in order to be compliant with HLS, such as rewriting recursive functions into iterative versions. The hardware core at security level 5 requires 45,223 LUTs, 41,370 FFs, 182 DSPs, and 37 BRAMs to calculate one signature in 8.7 ms on a Zynq UltraScale+ FPGA. Security level 5 key generation takes 320.3 ms and requires 100,649 LUTs, 91,029 FFs, 1,215 DSPs, and 69 BRAMs.
Last updated:  2023-12-21
Concrete Time/Memory Trade-Offs in Generalised Stern’s ISD Algorithm
Sreyosi Bhattacharyya and Palash Sarkar
The first contribution of this work is a generalisation of Stern's information set decoding (ISD) algorithm. Stern's algorithm, a variant of Stern's algorithm due to Dumer, as well as a recent generalisation of Stern's algorithm due to Bernstein and Chou are obtained as special cases of our generalisation. Our second contribution is to introduce the notion of a set of effective time/memory trade-off (TMTO) points for any ISD algorithm for given ranges of values of parameters of the algorithm. Such a set succinctly and uniquely captures the entire landscape of TMTO points with only a minor loss in precision. We further describe a method to compute a set of effective TMTO points. As an application, we compute sets of effective TMTO points for the five variants of the Classic McEliece cryptosystem corresponding to the new algorithm as well as for Stern's, Dumer's and Bernstein and Chou's algorithms. The results show that while Dumer's and Bernstein and Chou's algorithms do not provide any interesting TMTO points beyond what is achieved by Stern's algorithm, the new generalisation that we propose provides about twice the number of effective TMTO points obtained from Stern's algorithm. Consequences of the obtained TMTO points to the classification of the variants of Classic McEliece in appropriate NIST categories are discussed.
Last updated:  2023-12-21
NOTRY: deniable messaging with retroactive avowal
Faxing Wang, Shaanan Cohney, Riad Wahby, and Joseph Bonneau
Modern secure messaging protocols typically aim to provide deniability. Achieving this requires that convincing cryptographic transcripts can be forged without the involvement of genuine users. In this work, we observe that parties may wish to revoke deniability and avow a conversation after it has taken place. We propose a new protocol called Not-on-the-Record-Yet (NOTRY) which enables users to prove a prior conversation transcript is genuine. As a key building block we propose avowable designated verifier proofs which may be of independent interest. Our implementation incurs roughly 8× communication and computation overhead over the standard Signal protocol during regular operation. We find it is nonetheless deployable in a realistic setting as key exchanges (the source of the overhead) still complete in just over 1ms on a modern computer. The avowal protocol induces only constant computation and communication performance for the communicating parties and scales linearly in the number of messages avowed for the verifier—in the tens of milliseconds per avowal.
Last updated:  2023-12-21
Applications of Neural Network-Based AI in Cryptography
Abderrahmane Nitaj and Tajjeeddine Rachidi
Artificial intelligence (AI) is a modern technology that offers many advantages in daily life, such as predicting weather, finding directions, classifying images and videos, and even automatically generating code, text, and videos. Other essential technologies such as blockchain and cybersecurity also benefit from AI. As a core component used in blockchain and cybersecurity, cryptography can benefit from AI in order to enhance the confidentiality and integrity of cyberspace. In this paper, we review the algorithms underlying four prominent cryptosystems, namely the Advanced Encryption Standard, the Rivest--Shamir--Adleman cryptosystem, Learning With Errors, and the Ascon family of cryptographic algorithms for authenticated encryption. Where possible, we pinpoint areas where AI can be used to help improve their security.
Last updated:  2023-12-21
Half-Tree: Halving the Cost of Tree Expansion in COT and DPF
Xiaojie Guo, Kang Yang, Xiao Wang, Wenhao Zhang, Xiang Xie, Jiang Zhang, and Zheli Liu
The GGM tree is widely used in the design of correlated oblivious transfer (COT), subfield vector oblivious linear evaluation (sVOLE), distributed point function (DPF), and distributed comparison function (DCF). Often, the cost associated with the GGM tree dominates the computation and communication of these protocols. In this paper, we propose a suite of optimizations that can reduce this cost by half. • Halving the cost of COT and sVOLE. Our COT protocol introduces extra correlation to each level of a GGM tree used by the state-of-the-art COT protocol. As a result, it reduces both the number of AES calls and the communication by half. Extending this idea to sVOLE, we are able to achieve similar improvement with either halved computation or halved communication. • Halving the cost of DPF and DCF. We propose improved two-party protocols for the distributed generation of DPF/DCF keys. Our tree structures behind these protocols lead to more efficient full-domain evaluation and halve the communication and the round complexity of the state-of-the-art DPF/DCF protocols. All protocols are provably secure in the random-permutation model and can be accelerated based on fixed-key AES-NI. We also improve the state-of-the-art schemes of puncturable pseudorandom function (PPRF), DPF, and DCF, which are of independent interest in dealer-available scenarios.
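For orientation, the baseline GGM tree expansion whose per-level cost the paper halves looks as follows (a sketch with SHA-256 standing in for the length-doubling PRG; the protocols themselves use fixed-key AES).

```python
# Baseline GGM tree expansion: every seed is expanded into two children by
# a length-doubling PRG, level by level, down to 2^depth leaves.
import hashlib

def prg(seed):
    left = hashlib.sha256(seed + b"\x00").digest()[:16]
    right = hashlib.sha256(seed + b"\x01").digest()[:16]
    return left, right

def ggm_expand(root, depth):
    level = [root]
    for _ in range(depth):
        level = [child for seed in level for child in prg(seed)]
    return level   # 2^depth leaves, e.g. outputs of a puncturable PRF

leaves = ggm_expand(b"\x00" * 16, depth=4)
assert len(leaves) == 16
```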
Last updated:  2023-12-21
Batch Arguments to NIZKs from One-Way Functions
Eli Bradley, Brent Waters, and David J. Wu
Succinctness and zero-knowledge are two fundamental properties in the study of cryptographic proof systems. Several recent works have formalized the connections between these two notions by showing how to realize non-interactive zero-knowledge (NIZK) arguments from succinct non-interactive arguments. Specifically, Champion and Wu (CRYPTO 2023) as well as Bitansky, Kamath, Paneth, Rothblum, and Vasudevan (ePrint 2023) recently showed how to construct a NIZK argument for NP from a (somewhere-sound) non-interactive batch argument (BARG) and a dual-mode commitment scheme (and in the case of the Champion-Wu construction, a local pseudorandom generator). The main open question is whether a BARG suffices for a NIZK (just assuming one-way functions). In this work, we first show that an adaptively-sound BARG for NP together with a one-way function implies a computational NIZK argument for NP. We then show that existing BARGs from standard algebraic assumptions, which only achieve the weaker notion of somewhere soundness, are also adaptively sound if we assume sub-exponential security. This transformation may also be of independent interest. Taken together, we obtain a NIZK argument for NP from one-way functions and a sub-exponentially-secure somewhere-sound BARG for NP. If we instead assume plain public-key encryption, we show that a standard polynomially-secure somewhere-sound batch argument for NP suffices for the same implication. As a corollary, this means a somewhere-sound BARG can be used to generically upgrade any semantically-secure public-key encryption scheme into one secure against chosen-ciphertext attacks. More broadly, our results demonstrate that constructing non-interactive batch arguments for NP is essentially no easier than constructing NIZK arguments for NP.
Last updated:  2023-12-21
Revocable Quantum Digital Signatures
Tomoyuki Morimae, Alexander Poremba, and Takashi Yamakawa
We study digital signatures with revocation capabilities and show two results. First, we define and construct digital signatures with revocable signing keys from the LWE assumption. In this primitive, the signing key is a quantum state which enables a user to sign many messages and yet, the quantum key is also revocable, i.e., it can be collapsed into a classical certificate which can later be verified. Once the key is successfully revoked, we require that the initial recipient of the key loses the ability to sign. We construct digital signatures with revocable signing keys from a newly introduced primitive which we call two-tier one-shot signatures, which may be of independent interest. This is a variant of one-shot signatures, where the verification of a signature for the message ``0'' is done publicly, whereas the verification for the message ``1'' is done in private. We give a construction of two-tier one-shot signatures from the LWE assumption. As a complementary result, we also construct digital signatures with quantum revocation from group actions, where the quantum signing key is simply ``returned'' and then verified as part of revocation. Second, we define and construct digital signatures with revocable signatures from OWFs. In this primitive, the signer can produce quantum signatures which can later be revoked. Here, the security property requires that, once revocation is successful, the initial recipient of the signature loses the ability to find accepting inputs to the signature verification algorithm. We construct this primitive using a newly introduced two-tier variant of tokenized signatures. For the construction, we show a new lemma which we call the adaptive hardcore bit property for OWFs, which may enable further applications.
Last updated:  2023-12-21
VDOO: A Short, Fast, Post-Quantum Multivariate Digital Signature Scheme
Anindya Ganguly, Angshuman Karmakar, and Nitin Saxena
Hard lattice problems are predominant in constructing post-quantum cryptosystems. However, we need to continue developing post-quantum cryptosystems based on other quantum hard problems to prevent a complete collapse of post-quantum cryptography due to a sudden breakthrough in solving hard lattice problems. Solving large multivariate quadratic systems is one such quantum hard problem. Unbalanced Oil-Vinegar is a signature scheme based on the hardness of solving multivariate equations. In this work, we present a post-quantum digital signature algorithm VDOO (Vinegar-Diagonal-Oil-Oil) based on solving multivariate equations. We introduce a new layer called the diagonal layer over the oil-vinegar-based signature scheme Rainbow. This layer helps to improve the security of our scheme without increasing the parameters considerably. Due to this modification, the complexity of the main computational bottleneck of multivariate quadratic systems, i.e., Gaussian elimination, is reduced significantly, making our scheme one of the fastest multivariate quadratic signature schemes. Further, we show that our carefully chosen parameters can resist all existing state-of-the-art attacks. The signature sizes of our scheme for the National Institute of Standards and Technology's security levels I, III, and V are 96, 226, and 316 bytes, respectively. This is the smallest signature size among all known post-quantum signature schemes of similar security.
Last updated:  2023-12-21
HyperPlonk: Plonk with Linear-Time Prover and High-Degree Custom Gates
Binyi Chen, Benedikt Bünz, Dan Boneh, and Zhenfei Zhang
Plonk is a widely used succinct non-interactive proof system that uses univariate polynomial commitments. Plonk is quite flexible: it supports circuits with low-degree ``custom'' gates as well as circuits with lookup gates (a lookup gate ensures that its input is contained in a predefined table). For large circuits, the bottleneck in generating a Plonk proof is the need for computing a large FFT. We present HyperPlonk, an adaptation of Plonk to the boolean hypercube, using multilinear polynomial commitments. HyperPlonk retains the flexibility of Plonk but provides several additional benefits. First, it avoids the need for an FFT during proof generation. Second, and more importantly, it supports custom gates of much higher degree than Plonk without harming the running time of the prover. Both of these can dramatically speed up the prover's running time. Since HyperPlonk relies on multilinear polynomial commitments, we revisit two elegant constructions: one from Orion and one from Virgo. We show how to reduce the Orion opening proof size to less than 10kb (an almost factor 1000 improvement) and show how to make the Virgo FRI-based opening proof simpler and shorter.
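As a reference point for the multilinear commitments involved: HyperPlonk works with the multilinear extension (MLE) of a table of values on the boolean hypercube in place of Plonk's univariate polynomials. The sketch below (toy field, our own code) evaluates an MLE directly from its defining sum.

```python
# Evaluating the multilinear extension (MLE) of a table of values on the
# boolean hypercube. Toy prime field, naive O(n * 2^n) evaluation.
P = 2**61 - 1

def mle_eval(values, point):
    # MLE(point) = sum over hypercube vertices v of
    #   values[v] * prod_i (point_i if v_i = 1 else 1 - point_i)   (mod P)
    n = len(point)
    assert len(values) == 1 << n
    acc = 0
    for v in range(1 << n):
        term = 1
        for i in range(n):
            factor = point[i] if (v >> i) & 1 else (1 - point[i]) % P
            term = term * factor % P
        acc = (acc + values[v] * term) % P
    return acc

vals = [3, 1, 4, 1]                           # n = 2, indexed by bits (v0, v1)
assert mle_eval(vals, [1, 0]) == vals[0b01]   # agrees with the table on vertices
```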
Last updated:  2023-12-21
ProtoStar: Generic Efficient Accumulation/Folding for Special Sound Protocols
Benedikt Bünz and Binyi Chen
Accumulation is a simple yet powerful primitive that enables incrementally verifiable computation (IVC) without the need for recursive SNARKs. We provide a generic, efficient accumulation (or folding) scheme for any $(2k-1)$-move special-sound protocol with a verifier that checks $\ell$ degree-$d$ equations. The accumulation verifier only performs $k+2$ elliptic curve multiplications and $k+d+O(1)$ field/hash operations. Using the compiler from BCLMS21 (Crypto 21), this enables building efficient IVC schemes where the recursive circuit only depends on the number of rounds and the verifier degree of the underlying special-sound protocol but not the proof size or the verifier time. We use our generic accumulation compiler to build ProtoStar. ProtoStar is a non-uniform IVC scheme for Plonk that supports high-degree gates and (vector) lookups. The recursive circuit is dominated by $3$ group scalar multiplications and a hash of $d^*$ field elements, where $d^*$ is the degree of the highest gate. The scheme does not require a trusted setup or pairings, and the prover does not need to compute any FFTs. The prover in each accumulation/IVC step is also only logarithmic in the number of supported circuits and independent of the table size in the lookup.
Last updated:  2023-12-21
LERNA: Secure Single-Server Aggregation via Key-Homomorphic Masking
Hanjun Li, Huijia Lin, Antigoni Polychroniadou, and Stefano Tessaro
This paper introduces LERNA, a new framework for single-server secure aggregation. Our protocols are tailored to the setting where multiple consecutive aggregation phases are performed with the same set of clients, a fraction of which can drop out in some of the phases. We rely on an initial secret sharing setup among the clients which is generated once-and-for-all, and reused in all following aggregation phases. Compared to prior works [Bonawitz et al. CCS’17, Bell et al. CCS’20], the reusable setup eliminates one round of communication between the server and clients per aggregation—i.e., we need two rounds for semi-honest security (instead of three), and three rounds (instead of four) in the malicious model. Our approach also significantly reduces the server’s computational costs by only requiring the reconstruction of a single secret-shared value (per aggregation). Prior work required reconstructing a secret-shared value for each client involved in the computation. We provide instantiations of LERNA based on both the Decisional Composite Residuosity (DCR) and (Ring) Learning with Rounding ((R)LWR) assumptions respectively and evaluate a version based on the latter assumption. In addition to savings in round-complexity (which result in reduced latency), our experiments show that the server computational costs are reduced by two orders of magnitude in comparison to the state-of-the-art. In settings with a large number of clients, we also reduce the computational costs up to twenty-fold for most clients, while a small set of “heavy clients” is subject to a workload that is still smaller than that of prior work.
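The key-homomorphic masking at the heart of LERNA can be illustrated compactly: since the masks satisfy $\sum_i F(k_i, t) = F(\sum_i k_i, t)$, the server strips all masks with a single reconstructed aggregate key. The toy $F$ below is linear, so the homomorphism is exact, but it is emphatically not a secure PRF; the paper instantiates $F$ from DCR or (R)LWR.

```python
# Key-homomorphic masking, the mechanism behind LERNA's single
# reconstruction: sum_i F(k_i, t) = F(sum_i k_i, t), so one aggregate key
# removes every client's mask at once. Toy F only -- NOT a secure PRF.
import hashlib, random

P = 2**127 - 1   # toy prime modulus

def F(key, tag):   # F(k, t) = k * H(t) mod P (linear, hence key-homomorphic)
    h = int.from_bytes(hashlib.sha256(tag.encode()).digest(), "big") % P
    return key * h % P

keys = [random.randrange(P) for _ in range(5)]    # one long-term key per client
xs = [random.randrange(1000) for _ in range(5)]   # client inputs
t = "phase-7"                                     # aggregation phase tag

masked = [(x + F(k, t)) % P for x, k in zip(xs, keys)]   # what clients send
agg_key = sum(keys) % P   # in LERNA: reconstructed once from secret shares
assert (sum(masked) - F(agg_key, t)) % P == sum(xs)
```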
Last updated:  2023-12-20
COMBINE: COMpilation and Backend-INdependent vEctorization for Multi-Party Computation
Benjamin Levy, Muhammad Ishaq, Ben Sherman, Lindsey Kennard, Ana Milanova, and Vassilis Zikas
Recent years have witnessed significant advances in programming technology for multi-party computation (MPC), bringing MPC closer to practice and wider applicability. Typical MPC programming frameworks focus on either front-end language design (e.g., Wysteria, Viaduct, SPDZ), or back-end protocol implementation (e.g., ABY, MOTION, SPDZ). We propose a methodology for an MPC compilation toolchain, which by mimicking the compilation methodology of classical compilers enables middle-end (i.e., machine-independent) optimizations, yielding significant improvements. We advance an intermediate language, which we call MPC-IR that can be viewed as the analogue of (enriched) Static Single Assignment (SSA) form. MPC-IR enables backend-independent optimizations in a close analogy to machine-independent optimizations in classical compilers. To demonstrate our approach, we focus on a specific backend-independent optimization, SIMD-vectorization: We devise a novel classical-compiler-inspired automatic SIMD vectorization on MPC-IR. To demonstrate backend independence and quality of our optimization, we evaluate our approach with two mainstream backend frameworks that support multiple types of MPC protocols, namely MOTION and MP-SPDZ, and show significant improvements across the board.
Last updated:  2023-12-20
More efficient comparison protocols for MPC
Wicher Malten, Mehmet Ugurbil, and Miguel de Vega
In 1982, Yao introduced the problem of comparing two private values, thereby launching the study of protocols for secure multi-party computation (MPC). Since then, comparison protocols have undergone extensive study and found widespread applications. We survey state-of-the-art comparison protocols for an arbitrary number of parties, decompose them into smaller primitives and analyse their communication complexity under the usual assumption that the underlying MPC protocol does preprocessing and computes linear operations without communication. We then develop two new comparison protocols and explain why they are faster than similar protocols, including those that are commonly used in practice: they reduce the number of online multiplications, without increasing preprocessing or round complexity. More concretely, online bandwidth is reduced by more than half for the standard comparison protocols whose round complexity is logarithmic in the bit-length, whereas for constant round comparison protocols the reduction is two-thirds.
Last updated:  2023-12-20
Multipars: Reduced-Communication MPC over Z2k
Sebastian Hasler, Pascal Reisert, Marc Rivinius, and Ralf Küsters
In recent years, actively secure SPDZ-like protocols for dishonest majority, like SPD$\mathbb Z_{2^k}$, Overdrive2k, and MHz2k, over base rings $\mathbb Z_{2^k}$ have become more and more efficient. In this paper, we present a new actively secure MPC protocol Multipars that outperforms these state-of-the-art protocols over $\mathbb Z_{2^k}$ by more than a factor of 2 in the two-party setup in terms of communication. Multipars is the first actively secure N-party protocol over $\mathbb Z_{2^k}$ that is based on linear homomorphic encryption (LHE) in the offline phase (instead of oblivious transfer or somewhat homomorphic encryption in previous works). The strong performance of Multipars relies on a new adaptive packing for BGV ciphertexts that allows us to reduce the parameter size of the encryption scheme and the overall communication cost. Additionally, we use modulus switching for further size reduction, a new type of enhanced CPA security over $\mathbb Z_{2^k}$, a truncation protocol for Beaver triples, and a new LHE-based offline protocol without sacrificing over $\mathbb Z_{2^k}$. We have implemented Multipars and therewith provide the fastest preprocessing phase over $\mathbb Z_{2^k}$. Our evaluation shows that Multipars offers at least a factor of 8 lower communication costs and up to a factor of 15 faster runtime in the WAN setting compared to the currently best available actively secure MPC implementation over $\mathbb Z_{2^k}$.
Last updated:  2023-12-20
Single-Trace Side-Channel Attacks on CRYSTALS-Dilithium: Myth or Reality?
Ruize Wang, Kalle Ngo, Joel Gärtner, and Elena Dubrova
We present a side-channel attack on CRYSTALS-Dilithium, a post-quantum secure digital signature scheme, with two variants of post-processing. The side-channel attack exploits information leakage in the secret key unpacking procedure of the signing algorithm to recover the coefficients of the polynomials in the secret key vectors ${\bf s}_1$ and ${\bf s}_2$ by profiled deep learning-assisted power analysis. In the first variant, one half of the coefficients of ${\bf s}_1$ and ${\bf s}_2$ is recovered by power analysis and the rest is derived by solving a system of linear equations based on ${\bf t} = {\bf A}{\bf s}_1 + {\bf s}_2$, where ${\bf A}$ and ${\bf t}$ are parts of the public key. This case assumes knowledge of the least significant bits of the vector ${\bf t}$, ${\bf t}_0$. The second variant waives this requirement. However, to succeed, it needs a larger portion of ${\bf s}_1$ to be recovered by power analysis. The remainder of ${\bf s}_1$ is obtained by lattice reduction. Once the full ${\bf s}_1$ is recovered, all the other information necessary for generating valid signatures can be trivially derived from the public key. We evaluate both variants on an ARM Cortex-M4 implementation of Dilithium-2. The profiling stage (trace capture and neural network training) takes less than 10 hours. In the attack assuming that ${\bf t}_0$ is known, the probability of successfully recovering the full vector ${\bf s}_1$ from a single trace captured from a device different from the profiling one is non-negligible (9%). The success rate approaches 100% if multiple traces are available for the attack. Our results demonstrate the necessity of protecting the secret key of CRYSTALS-Dilithium from single-trace attacks and call for a reassessment of the role of compression of the public key vector ${\bf t}$ in the security of CRYSTALS-Dilithium implementations.
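The linear-algebra step of the first attack variant is elementary once ${\bf s}_1$ has been recovered: with ${\bf t}$ (including ${\bf t}_0$) known, ${\bf s}_2 = {\bf t} - {\bf A}{\bf s}_1$. A toy sketch over plain integer vectors (Dilithium actually works with vectors of polynomials over $R_q$):

```python
# Recovering s2 from t = A*s1 + s2 once s1 has leaked. Toy integer vectors
# stand in for Dilithium's polynomial-ring vectors.
import numpy as np

q = 8380417                        # Dilithium's modulus
rng = np.random.default_rng(0)
A = rng.integers(0, q, size=(4, 4))
s1 = rng.integers(-2, 3, size=4)   # small secret coefficients (eta = 2)
s2 = rng.integers(-2, 3, size=4)
t = (A @ s1 + s2) % q              # public key relation t = A*s1 + s2

s2_recovered = (t - A @ s1) % q    # attacker substitutes the leaked s1
assert np.array_equal(s2_recovered, s2 % q)
```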
Last updated:  2023-12-20
Dory: Faster Asynchronous BFT with Reduced Communication for Permissioned Blockchains
Zongyang Zhang, You Zhou, Sisi Duan, Haibin Zhang, Bin Hu, Licheng Wang, and Jianwei Liu
Asynchronous Byzantine fault-tolerance (BFT) protocols (e.g., HoneyBadger and Dumbo family protocols) have received increasing attention as the consensus mechanism of permissioned blockchains, given their particular robustness against timing and performance attacks. However, there is a substantial performance gap before they can be applied in real systems. In this paper, we identify and address two critical issues, and design Dory, an asynchronous BFT consensus protocol with improved efficiency and lower communication compared to the state-of-the-art protocol, sDumbo. At the core of our approach are two new building blocks reducing the communication cost and a novel framework utilizing transactions with quadratic message complexity. We have implemented Dory and sDumbo in a new Golang library. Via a deployment using up to 151 participants on Amazon EC2, we show that Dory consistently outperforms sDumbo during both failure and failure-free scenarios. For instance, Dory has up to 5x the throughput of sDumbo in the failure-free scenario.
Last updated:  2023-12-20
Generic Construction of Dual-Server Public Key Authenticated Encryption with Keyword Search
Keita Emura
Chen et al. (IEEE Transactions on Cloud Computing 2022) introduced dual-server public key authenticated encryption with keyword search (DS-PAEKS), and proposed a DS-PAEKS scheme under the decisional Diffie-Hellman assumption. In this paper, we propose a generic construction of DS-PAEKS from PAEKS, public key encryption, and signatures. By providing a concrete attack, we show that the DS-PAEKS scheme of Chen et al. is vulnerable. That is, the proposed generic construction yields the first DS-PAEKS schemes. Our attack with a slight modification works against the Chen et al. dual-server public key encryption with keyword search (DS-PEKS) scheme (IEEE Transactions on Information Forensics and Security 2016). Moreover, we demonstrate that the Tso et al. generic construction of DS-PEKS from public key encryption (IEEE Access 2020) is also vulnerable. We also analyze other pairing-free PAEKS schemes (Du et al., Wireless Communications and Mobile Computing 2022 and Lu and Li, IEEE Transactions on Mobile Computing 2022). Though we did not find any attack against these schemes, we show that at least their security proofs are wrong.
Last updated:  2023-12-20
Actively Secure Half-Gates with Minimum Overhead under Duplex Networks
Hongrui Cui, Xiao Wang, Kang Yang, and Yu Yu
Actively secure two-party computation (2PC) is one of the canonical building blocks in modern cryptography. One main goal for designing actively secure 2PC protocols is to reduce the communication overhead, compared to semi-honest 2PC protocols. In this paper, we make significant progress in closing this gap by proposing two new actively secure constant-round 2PC protocols, one with one-way communication of $2\kappa+5$ bits per AND gate (for $\kappa$-bit computational security and any statistical security) and one with total communication of $2\kappa+\rho+5$ bits per AND gate (for $\rho$-bit statistical security). In particular, our first protocol essentially matches the one-way communication of the semi-honest half-gates protocol. Our optimization is achieved by three new techniques: 1. The recent compression technique by Dittmer et al. (Crypto 2022) shows that a relaxed preprocessing is sufficient for authenticated garbling that does not reveal masked wire values to the garbler. We introduce a new form of authenticated bits and propose a new technique of generating authenticated AND triples to reduce the one-way communication of preprocessing from $5\rho+1$ bits to $2$ bits per AND gate for $\rho$-bit statistical security. 2. Unfortunately, the above compressing technique is only compatible with a less compact authenticated garbled circuit of size $2\kappa+3\rho$ bits per AND gate. We design a new authenticated garbling that does not use information-theoretic MACs but rather dual execution without leakage to authenticate wire values in the circuit. This allows us to use a more compact half-gates based authenticated garbled circuit of size $2\kappa+1$ bits per AND gate, while remaining compatible with the compression technique. Our new technique can achieve one-way communication of $2\kappa+5$ bits per AND gate. 3. In terms of total communication, we notice that the communication overhead of the consistency checking method by Dittmer et al. (Crypto 2022) can be optimized by adding one round of interaction and utilizing the Free-XOR property. This reduces the online communication from $2\kappa+3\rho$ bits down to $2\kappa+\rho+1$ bits per AND gate. Combined with our first contribution, this yields total amortized communication of $2\kappa+\rho+5$ bits.
Last updated:  2023-12-19
Cryptography from Planted Graphs: Security with Logarithmic-Size Messages
Damiano Abram, Amos Beimel, Yuval Ishai, Eyal Kushilevitz, and Varun Narayanan
We study the following broad question about cryptographic primitives: is it possible to achieve security against an arbitrary $\mathsf{poly}(n)$-time adversary with $O(\log n)$-size messages? It is common knowledge that the answer is ``no'' unless information-theoretic security is possible. In this work, we revisit this question by considering the setting of cryptography with public information and computational security. We obtain the following results, assuming variants of well-studied intractability assumptions: 1) A private simultaneous messages (PSM) protocol for every $f:[n]\times[n]\to\{0, 1\}$ requiring $(1+\epsilon)\log n$-bit messages for most functions and $(2+\epsilon)\log n$-bit messages for the remaining ones. We apply this towards non-interactive secure 3-party computation with similar message size in the preprocessing model, improving over previous 2-round protocols. 2) A secret-sharing scheme for any ``forbidden-graph'' access structure on $n$ nodes with $O(\log n)$ share size. 3) On the negative side, we show that computational threshold secret-sharing schemes with public information require share size $\Omega(\log \log n)$. For arbitrary access structures, we show that computational security does not help with 1-bit shares. The above positive results guarantee that any adversary of size $n^{o(\log n)}$ achieves an $n^{-\Omega(1)}$ distinguishing advantage. We show how to make the advantage negligible by slightly increasing the asymptotic message size, still improving over all known constructions. The security of our constructions is based on the conjectured hardness of variants of the planted clique problem, which was extensively studied in the algorithms, statistical inference, and complexity theory communities. Our work provides the first applications of such assumptions improving the efficiency of mainstream cryptographic primitives, gives evidence for the necessity of such assumptions, and suggests new questions in this domain that may be of independent interest.
Last updated:  2023-12-19
Owl: An Augmented Password-Authenticated Key Exchange Scheme
Feng Hao, Samiran Bag, Liqun Chen, and Paul C. van Oorschot
We present Owl, an augmented password-authenticated key exchange (PAKE) protocol that is both efficient and supported by security proofs. Owl is motivated by recognized limitations in SRP-6a and OPAQUE. SRP-6a is the only augmented PAKE that has enjoyed wide use in practice to date, but it lacks the support of formal security proofs, and does not support elliptic curve settings. OPAQUE was proposed in 2018 as a provably secure and efficient alternative to SRP-6a, and was chosen by the IETF in 2020 for standardization, but open issues leave it unclear whether OPAQUE will replace SRP-6a in practice. Owl is obtained by efficiently adapting J-PAKE to an asymmetric setting, providing additional security against server compromise yet with lower computation than J-PAKE. Owl is provably secure, efficient and agile in supporting implementations in diverse multiplicative groups and elliptic curve settings. To the best of our knowledge, Owl is the first augmented PAKE solution that provides systematic advantages over SRP-6a in terms of security, computation, message sizes, and agility.
Last updated:  2023-12-19
Regularized PolyKervNets: Optimizing Expressiveness and Efficiency for Private Inference in Deep Neural Networks
Toluwani Aremu
Private computation of nonlinear functions, such as Rectified Linear Units (ReLUs) and max-pooling operations, in deep neural networks (DNNs) poses significant challenges in terms of storage, bandwidth, and time consumption. To address these challenges, there has been a growing interest in utilizing privacy-preserving techniques that leverage polynomial activation functions and kernelized convolutions as alternatives to traditional ReLUs. However, these alternative approaches often suffer from a trade-off between achieving faster private inference (PI) and sacrificing model accuracy. In particular, when applied to much deeper networks, these methods encounter training instabilities, leading to issues like exploding gradients (resulting in NaNs) or suboptimal approximations. In this study, we focus on PolyKervNets, a technique known for offering improved dynamic approximations in smaller networks but still facing instabilities in larger and more complex networks. Our primary objective is to empirically explore optimization-based training recipes to enhance the performance of PolyKervNets in larger networks. By doing so, we aim to potentially eliminate the need for traditional nonlinear activation functions, thereby advancing the state-of-the-art in privacy-preserving deep neural network architectures.
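A minimal sketch of the general idea (a generic quadratic activation, not PolyKervNets' specific kernelized construction): private-inference-friendly networks replace the non-polynomial ReLU with a low-degree polynomial that HE/MPC back ends can evaluate using only additions and multiplications.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def poly_act(x, a=0.25, b=0.5, c=0.0):
    # Degree-2 surrogate: only additions and multiplications, hence
    # cheap under HE/MPC. The coefficients here are illustrative only.
    return a * x**2 + b * x + c

x = np.linspace(-2.0, 2.0, 9)
print(np.round(relu(x), 2))      # what a plaintext network computes
print(np.round(poly_act(x), 2))  # what a PI-friendly network computes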
Last updated:  2023-12-19
Unconditionally Secure Quantum Bit Commitment and Quantum Oblivious Transfer
Ping Wang, Yikang Lei, and Yiting Su
Recently, a novel secure quantum bit commitment (QBC) protocol has been proposed [29]. However, the protocol requires Alice and Bob to share Bell states in advance, which limits its practicality. In this paper, we propose two new unconditionally secure quantum bit commitment protocols that do not require pre-shared Bell states, based on entangled and non-entangled states, respectively. Their security stems from quantum mechanical properties such as quantum superposition, quantum entanglement, the no-cloning theorem, and the no-communication theorem. Furthermore, by combining the proposed QBC with Yao's quantum oblivious transfer (QOT) model, we can obtain an unconditionally secure QOT protocol.
Last updated:  2023-12-18
How to Prove Statements Obliviously?
Sanjam Garg, Aarushi Goel, and Mingyuan Wang
Cryptographic applications often require proving statements about hidden secrets satisfying certain circuit relations. Moreover, these proofs must often be generated obliviously, i.e., without knowledge of the secret. This work presents a new technique, called ``FRI on hidden values'', for efficiently proving such statements. This technique enables a polynomial commitment scheme for values hidden inside linearly homomorphic primitives, such as linearly homomorphic encryption, linearly homomorphic commitment, group exponentiation, fully homomorphic encryption, etc. Building on this technique, we obtain the following results. 1. An efficient SNARK for proving the honest evaluation of FHE ciphertexts. This allows for an efficiently verifiable private delegation of computation, where the client only needs to perform logarithmically many FHE computations to verify the correctness of the computation. 2. An efficient approach for privately delegating the computation of zkSNARKs to a single untrusted server, without making any non-black-box use of cryptography. All prior works require multiple servers and the assumption that some subset of the servers are honest. 3. A weighted threshold signature scheme that does not require any setup. In particular, parties may sample their own keys independently, and no distributed key generation (DKG) protocol is needed. Furthermore, the efficiency of our scheme is completely independent of the weights. Prior to this work, there were no known black-box feasibility results for any of these applications. We also investigate the use of this approach in the context of public proof aggregation. These are only a few representative applications that we explore in this paper. We expect our techniques to be widely applicable in many other scenarios.
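The core algebraic fact behind ``FRI on hidden values'' is that FRI's folding step, $f'(X) = f_e(X) + \beta \cdot f_o(X)$, is a public linear map, so it can be applied directly under any linearly homomorphic primitive. A toy sketch using Pedersen commitments in a small group (the paper works over HE ciphertexts; the group, generators, and values below are illustrative):

p, q = 23, 11        # toy group: the order-q subgroup of Z_p^*
g, h = 4, 9          # two generators of that subgroup

def commit(m, r):
    # Pedersen commitment: linearly homomorphic in (m, r).
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

m_e, r_e = 5, 7      # "even part" value and its blinding
m_o, r_o = 3, 2      # "odd part" value and its blinding
beta = 6             # verifier's public folding challenge

C_e, C_o = commit(m_e, r_e), commit(m_o, r_o)
# Fold on the commitments directly, without ever opening them:
C_fold = (C_e * pow(C_o, beta, p)) % p
assert C_fold == commit(m_e + beta * m_o, r_e + beta * r_o)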
Last updated:  2023-12-18
The Internet Computer for Geeks
The DFINITY Team
Smart contracts are a new form of software that will revolutionize how software is written, IT systems are maintained, and applications and whole businesses are built. Smart contracts are composable and autonomous pieces of software that run on decentralized blockchains, which makes them tamperproof and unstoppable. In this paper, we describe the Internet Computer (IC), which is a radical new design of blockchain that unleashes the full potential of smart contracts, overcoming the limitations of smart contracts on traditional blockchains with respect to speed, storage costs, and computational capacity. This allows smart contracts for the first time to implement fully decentralized applications that are hosted end to end on blockchain. The IC consists of a set of cryptographic protocols that connects independently operated nodes into a collection of blockchains. These blockchains host and execute ``canisters'', the IC’s form of smart contracts. Canisters can store data, perform very general computations on that data, and provide a complete technology stack, serving web pages directly to end users. Computational and storage costs are covered by a ``reverse-gas model'', where canister developers pre-pay costs in cycles that are obtained from ICP, the native token of the IC. ICP tokens are also used for governance: the IC is governed by a decentralized autonomous organization, or DAO, which, among other things, determines changes to the topology of the network and upgrades to the protocol.
Last updated:  2023-12-18
Blockwise Rank Decoding Problem and LRPC Codes: Cryptosystems with Smaller Sizes
Yongcheng Song, Jiang Zhang, Xinyi Huang, and Wei Wu
In this paper, we initiate the study of the Rank Decoding (RD) problem and LRPC codes with blockwise structures in rank-based cryptosystems. First, we introduce the blockwise errors ($\ell$-errors) where each error consists of $\ell$ blocks of coordinates with disjoint supports, and define the blockwise RD ($\ell$-RD) problem as a natural generalization of the RD problem whose solutions are $\ell$-errors (note that the standard RD problem is actually a special $\ell$-RD problem with $\ell=1$). We adapt the typical attacks on the RD problem to the $\ell$-RD problem, and find that the blockwise structures do not ease the problem too much: the $\ell$-RD problem is still exponentially hard for appropriate choices of $\ell>1$. Second, we introduce blockwise LRPC ($\ell$-LRPC) codes as generalizations of the standard LPRC codes whose parity-check matrices can be divided into $\ell$ sub-matrices with disjoint supports, i.e., the intersection of two subspaces generated by the entries of any two sub-matrices is a null space, and investigate the decoding algorithms for $\ell$-errors. We find that the gain of using $\ell$-errors in decoding capacity outweighs the complexity loss in solving the $\ell$-RD problem, which makes it possible to design more efficient rank-based cryptosystems with flexible choices of parameters. As an application, we show that the two rank-based cryptosystems submitted to the NIST PQC competition, namely, RQC and ROLLO, can be greatly improved by using the ideal variants of the $\ell$-RD problem and $\ell$-LRPC codes. Concretely, for 128-bit security, our RQC has total public key and ciphertext sizes of 2.5 KB, which is not only about 50% more compact than the original RQC, but also smaller than the NIST Round 4 code-based submissions HQC, BIKE, and Classic McEliece.
Last updated:  2023-12-18
Holepunch: Fast, Secure File Deletion with Crash Consistency
Zachary Ratliff, Wittmann Goh, Abe Wieland, James Mickens, and Ryan Williams
A file system provides secure deletion if, after a file is deleted, an attacker with physical possession of the storage device cannot recover any data from the deleted file. Unfortunately, secure deletion is not provided by commodity file systems. Even file systems which explicitly desire to provide secure deletion are challenged by the subtleties of hardware controllers on modern storage devices; those controllers obscure the mappings between logical blocks and physical blocks, silently duplicate physical blocks, and generally make it hard for host-level software to make reliable assumptions about how file data is kept on the device. State-of-the-art frameworks for secure deletion also lack crash consistency, meaning that an ill-timed power outage or software fault will desynchronize keys and the associated encrypted file data, corrupting the file system. In this paper, we present Holepunch, a new software-level approach for implementing secure deletion. Holepunch treats the storage device as a black box, providing secure deletion via cryptographic erasure. Holepunch uses per-file keys to transparently encrypt outgoing file writes and decrypt incoming file reads, ensuring that all physical data in the storage device is always encrypted. Holepunch uses puncturable pseudorandom functions (PPRFs) to quickly access file keys; upon the deletion of file $f$, Holepunch updates the PPRF so that, even if the punctured PPRF key is recovered, it cannot be used to generate $f$'s key. By using PPRFs instead of the key trees leveraged by prior work, Holepunch reduces both the memory pressure caused by key management and the number of disk IOs needed to access files. Holepunch stores its master key in secure TPM storage, and uses a novel journaling scheme to provide crash consistency between TPM state and on-disk state.
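A minimal GGM-style puncturable PRF sketch illustrating the mechanism Holepunch relies on (the paper's actual PPRF, key sizes, and tree layout may differ): evaluation walks a binary tree of seeds, and puncturing at a file's index keeps only the sibling seeds along that path, which suffices to derive every key except the deleted file's.

import hashlib

def prg(seed: bytes, bit: int) -> bytes:
    # Length-doubling PRG, split into a left/right child via a hash.
    return hashlib.sha256(seed + bytes([bit])).digest()

def eval_prf(root: bytes, x: int, nbits: int) -> bytes:
    seed = root
    for i in reversed(range(nbits)):          # walk x's bits, MSB first
        seed = prg(seed, (x >> i) & 1)
    return seed

def puncture(root: bytes, x: int, nbits: int):
    # Keep, at every level, the sibling seed just off the path to x.
    seed, copath = root, []
    for i in reversed(range(nbits)):
        bit = (x >> i) & 1
        copath.append((i, 1 - bit, prg(seed, 1 - bit)))
        seed = prg(seed, bit)
    return copath          # reveals nothing about the value at x

def eval_punctured(copath, x: int, y: int, nbits: int) -> bytes:
    assert y != x, "the punctured point is unevaluable by design"
    for level, bit, sib in copath:
        if (y >> level) & 1 == bit:           # first bit where y leaves x's path
            seed = sib
            for i in reversed(range(level)):  # finish the walk below that level
                seed = prg(seed, (y >> i) & 1)
            return seed

root = b"\x00" * 32                            # placeholder master seed
pk = puncture(root, 0b101, 3)                  # "delete" file index 5
assert eval_punctured(pk, 0b101, 0b110, 3) == eval_prf(root, 0b110, 3)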
Last updated:  2023-12-18
Best of Both Worlds: Revisiting the Spymasters Double Agent Problem
Anasuya Acharya, Carmit Hazay, Oxana Poburinnaya, and Muthuramakrishnan Venkitasubramaniam
This work defines a notion of secure multiparty computation: MPC with fall-back security. Fall-back security for an $n$-party protocol is defined with respect to an adversary structure $\mathcal{Z}$ wherein security is guaranteed in the presence of both a computationally unbounded adversary with adversary structure $\mathcal{Z}$, and a computationally bounded adversary corrupting an arbitrarily large subset of the parties. This notion was considered in the work of Chaum (Crypto 89) via the Spymaster's double agent problem, where he showed a semi-honest secure protocol for the honest majority adversary structure. Subsequent works extend this to give maliciously secure protocols tolerating threshold adversaries. Our first main result is a compiler that can transform any $n$-party protocol that is semi-honestly secure with statistical security tolerating a general adversary structure $\mathcal{Z}$ to one that (additionally) provides semi-honest fall-back security w.r.t. $\mathcal{Z}$. The resulting protocol has optimal round complexity, up to a constant factor, and is optimal in assumptions and the adversary structure. Our second result fully characterizes when malicious fall-back security is feasible. More precisely, we show that a maliciously fall-back secure protocol w.r.t. $\mathcal{Z}$ exists if and only if $\mathcal{Z}$ admits unconditional MPC against a semi-honest adversary (namely, iff $\mathcal{Z} \in \mathcal{Q}^2$).
Last updated:  2023-12-18
A Security Framework for Distributed Ledgers
Mike Graf, Daniel Rausch, Viktoria Ronge, Christoph Egger, Ralf Küsters, and Dominique Schröder
In the past few years blockchains have been a major focus for security research, resulting in significant progress in the design, formalization, and analysis of blockchain protocols. However, the more general class of distributed ledgers, which includes not just blockchains but also prominent non-blockchain protocols, such as Corda and OmniLedger, cannot be covered by the state-of-the-art in the security literature yet. These distributed ledgers often break with traditional blockchain paradigms, such as block structures to store data, system-wide consensus, or global consistency. In this paper, we close this gap by proposing the first framework for defining and analyzing the security of general distributed ledgers, with an ideal distributed ledger functionality, called $\mathcal{F}_\text{ledger}$, at the core of our contribution. This functionality covers not only classical blockchains but also non-blockchain distributed ledgers in a unified way. To illustrate $\mathcal{F}_\text{ledger}$, we first show that the prominent ideal blockchain functionalities $\mathcal{G}_\text{ledger}$ and $\mathcal{G}_\text{PL}$ realize (suitable instantiations of) $\mathcal{F}_\text{ledger}$, which precisely captures their security properties. This immediately implies that their respective implementations, including Bitcoin, Ouroboros Genesis, and Ouroboros Crypsinous, realize $\mathcal{F}_\text{ledger}$ as well. Secondly, we demonstrate that $\mathcal{F}_\text{ledger}$ is capable of precisely modeling non-blockchain distributed ledgers as well, by performing the first formal security analysis of such a distributed ledger, namely the prominent Corda protocol. Due to the widespread use of Corda in industry, in particular the financial sector, this analysis is of independent interest. These results also illustrate that $\mathcal{F}_\text{ledger}$ not only generalizes the modular treatment of blockchains to distributed ledgers, but also helps to unify existing results.
Last updated:  2023-12-18
HEIR: A Unified Representation for Cross-Scheme Compilation of Fully Homomorphic Computation
Song Bian, Zian Zhao, Zhou Zhang, Ran Mao, Kohei Suenaga, Yier Jin, Zhenyu Guan, and Jianwei Liu
We propose a new compiler framework that automates code generation over multiple fully homomorphic encryption (FHE) schemes. While it was recently shown that algorithms combining multiple FHE schemes (e.g., CKKS and TFHE) achieve high execution efficiency and task utility at the same time, developing fast cross-scheme FHE algorithms for real-world applications generally requires heavy hand-tuned optimizations by cryptographic experts, resulting in either high usability costs or low computational efficiency. To solve the usability and efficiency dilemma, we design and implement HEIR, a compiler framework based on multi-level intermediate representation (IR). To achieve cross-scheme compilation of efficient FHE circuits, we develop a two-stage code-lowering structure based on our custom IR dialects. First, the plaintext program along with the associated data types are converted into FHE-friendly dialects in the transformation stage. Then, in the optimization stage, we apply FHE-specific optimizations to lower the transformed dialect into our bottom-level FHE library operators. In our experiments, we implement the entire software stack for HEIR, and demonstrate that complex end-to-end programs, such as homomorphic K-Means clustering and homomorphic data aggregation in databases, can easily be compiled to run $72$--$179\times$ faster than the program generated by the state-of-the-art FHE compilers.
Last updated:  2023-12-17
Instant Zero Knowledge Proof of Reserve
Trevor Conley, Nilsso Diaz, Diego Espada, Alvin Kuruvilla, and Xiang Fu
We present a non-interactive and publicly verifiable scheme that allows one to assert the assets of a financial organization instantly and incrementally in zero knowledge with high throughput. It is enabled by the recent breakthrough in lookup arguments, where the prover cost can be independent of the lookup table size after a pre-processing step. We extend the cq protocol and develop an aggregated non-membership proof for zero-knowledge sets. Based on it, we design a non-intrusive protocol that works for pseudo-anonymous cryptocurrencies such as BTC. It has O(n log(n)) prover complexity and O(1) proof size, where n is the platform throughput (instead of the anonymity set size). We implement and evaluate the protocol. Running on a 56-core server, it supports 1024 transactions per second.
Last updated:  2023-12-17
Differential Fault Attack on Ascon Cipher
Amit Jana
This work investigates the security of the Ascon authenticated encryption scheme in the context of fault attacks, with a specific focus on Differential Fault Analysis (DFA). Motivated by the growing significance of lightweight cryptographic solutions, particularly Ascon, we explore potential vulnerabilities in its design using DFA. By employing a novel approach that combines faulty forgery in the decryption query under two distinct fault models, leveraging bit-flip faults in the first phase and bit-set faults in the second, we successfully recover the complete Ascon key. This study sheds light on the impact of key whitening in the final permutation call and discusses potential threats when this safeguard is absent. Additionally, we consider the implications of injecting multiple bit-flip faults at the S-box input, suggesting alternative strategies for compromising the state space. Our findings contribute valuable insights into the gray-box security landscape of Ascon, emphasizing the need for robust defenses to ensure the integrity and resilience of lightweight cryptographic primitives against diverse fault attacks.
Last updated:  2023-12-17
GRandLine: Adaptively Secure DKG and Randomness Beacon with (Almost) Quadratic Communication Complexity
Renas Bacho, Christoph Lenzen, Julian Loss, Simon Ochsenreither, and Dimitrios Papachristoudis
A randomness beacon is a source of continuous and publicly verifiable randomness which is of crucial importance for many applications. Existing works on distributed randomness beacons suffer from at least one of the following drawbacks: (i) security only against a static/non-adaptive adversary, (ii) each epoch takes many rounds of communication, or (iii) computationally expensive tools such as Proof-of-Work (PoW) or Verifiable Delay Functions (VDF). In this paper, we introduce $\mathsf{GRandLine}$, the first adaptively secure randomness beacon protocol that overcomes all these limitations while preserving simplicity and optimal resilience in the synchronous network setting. We achieve our result in two steps. First, we design a novel distributed key generation (DKG) protocol $\mathsf{GRand}$ that runs in $\mathcal{O}(\lambda n^2\log{n})$ bits of communication but, unlike most conventional DKG protocols, outputs both secret and public keys as group elements. Here, $\lambda$ denotes the security parameter. Second, following termination of $\mathsf{GRand}$, parties can use their keys to derive a sequence of randomness beacon values, where each random value costs only a single asynchronous round and $\mathcal{O}(\lambda n^2)$ bits of communication. We implement $\mathsf{GRandLine}$ and evaluate it using a network of up to 64 parties running in geographically distributed AWS instances. Our evaluation shows that $\mathsf{GRandLine}$ can produce about 2 beacon outputs per second in a network of 64 parties. We compare our protocol to the state-of-the-art randomness beacon protocols in the same setting and observe that it vastly outperforms them.
Last updated:  2023-12-16
One for All, All for Ascon: Ensemble-based Deep Learning Side-channel Analysis
Azade Rezaeezade, Abraham Basurto-Becerra, Léo Weissbart, and Guilherme Perin
In recent years, deep learning-based side-channel analysis (DLSCA) has become an active research topic within the side-channel analysis community. The well-known challenge of hyperparameter tuning in DLSCA encouraged the community to use methods that reduce the effort required to identify an optimal model. One of the successful methods is ensemble learning. While ensemble methods have demonstrated their effectiveness in DLSCA, particularly with AES-based datasets, their efficacy in analyzing symmetric-key cryptographic primitives with different operational mechanics remains unexplored. Ascon was recently announced as the winner of the NIST lightweight cryptography competition. This will lead to broader use of Ascon and a crucial requirement for thorough side-channel analysis of its implementations. With these two considerations in view, we utilize an ensemble of deep neural networks to attack two implementations of Ascon. Using an ensemble of five multilayer perceptrons or convolutional neural networks, we could find the secret key for the Ascon-protected implementation with fewer than 3,000 traces. To the best of our knowledge, this is the best currently known result. We can also identify the correct key with fewer than 100 traces for the unprotected implementation of Ascon, which is on par with the state-of-the-art results.
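A sketch of the generic score-combination step in an ensemble attack (with placeholder model outputs; the paper's models are trained MLPs and CNNs): each model assigns per-trace probabilities to the 256 key-byte guesses, and the ensemble sums log-likelihoods across models and traces before ranking the guesses.

import numpy as np

rng = np.random.default_rng(0)
n_models, n_traces, n_keys = 5, 3000, 256
# Placeholder model outputs: per trace, a probability for each of the
# 256 key-byte guesses (in practice, the softmax outputs of the networks).
probs = rng.dirichlet(np.ones(n_keys), size=(n_models, n_traces))

# Standard log-likelihood accumulation, summed over models and traces.
scores = np.log(probs + 1e-30).sum(axis=(0, 1))     # shape: (256,)
ranking = np.argsort(scores)[::-1]                  # best key guess first
print("top candidate for the key byte:", ranking[0])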
Last updated:  2023-12-16
Analyzing UTXO-Based Blockchain Privacy Threats
Simin Ghesmati, Walid Fdhila, and Edgar Weippl
While blockchain technologies leverage compelling characteristics in terms of decentralization, immutability, and transparency, user privacy in public blockchains remains a fundamental challenge that requires particular attention. This is mainly due to the history of all transactions being accessible and available to anyone, thus making it possible for an attacker to infer data about users that is supposed to remain private. In this paper, we provide a threat model of possible privacy attacks on users utilizing the Bitcoin blockchain. To this end, we followed the LINDDUN GO methodology to identify threats and suggest possible mitigation.
Last updated:  2023-12-16
Towards Practical Secure Neural Network Inference: The Journey So Far and the Road Ahead
Zoltán Ádám Mann, Christian Weinert, Daphnee Chabal, and Joppe W. Bos
Neural networks (NNs) have become one of the most important tools for artificial intelligence (AI). Well-designed and trained NNs can perform inference (e.g., make decisions or predictions) on unseen inputs with high accuracy. Using NNs often involves sensitive data: depending on the specific use case, the input to the NN and/or the internals of the NN (e.g., the weights and biases) may be sensitive. Thus, there is a need for techniques for performing NN inference securely, ensuring that sensitive data remains secret. In the past few years, several approaches have been proposed for secure neural network inference. These approaches achieve better and better results in terms of efficiency, security, accuracy, and applicability, thus making big progress towards practical secure neural network inference. The proposed approaches make use of many different techniques, such as homomorphic encryption and secure multi-party computation. The aim of this survey paper is to give an overview of the main approaches proposed so far, their different properties, and the techniques used. In addition, remaining challenges towards large-scale deployments are identified.
Last updated:  2023-12-16
SIGMA: Secure GPT Inference with Function Secret Sharing
Kanav Gupta, Neha Jawalkar, Ananta Mukherjee, Nishanth Chandran, Divya Gupta, Ashish Panwar, and Rahul Sharma
Secure 2-party computation (2PC) enables secure inference that offers protection for both proprietary machine learning (ML) models and sensitive inputs to them. However, the existing secure inference solutions suffer from high latency and communication overheads, particularly for transformers. Function secret sharing (FSS) is a recent paradigm for obtaining efficient 2PC protocols with a preprocessing phase. We provide SIGMA, the first end-to-end system for secure transformer inference based on FSS. By constructing new FSS-based protocols for complex machine learning functionalities, such as Softmax, GeLU and SiLU, and also accelerating their computation on GPUs, SIGMA improves the latency of secure inference of transformers by $11-19\times$ over the state-of-the-art that uses preprocessing and GPUs. We present the first secure inference of generative pre-trained transformer (GPT) models. In particular, SIGMA executes Meta's LLaMA2 (available on HuggingFace) with 13 billion parameters in 44 seconds and GPT2 in 1.6 seconds.
Last updated:  2023-12-15
Automated Issuance of Post-Quantum Certificates: a New Challenge
Alexandre Augusto Giron, Frederico Schardong, Lucas Pandolfo Perin, Ricardo Custódio, Victor Valle, and Víctor Mateu
The Automatic Certificate Management Environment protocol (ACME) has significantly contributed to the widespread use of digital certificates in safeguarding the authenticity and privacy of Internet data. These certificates are required for implementing the Transport Layer Security (TLS) protocol. However, it is well known that the cryptographic algorithms employed in these certificates will become insecure with the emergence of quantum computers. This study assesses the challenges in transitioning ACME to the post-quantum landscape using Post-Quantum Cryptography (PQC). To evaluate the cost of ACME's PQC migration, we create a simulation environment for issuing PQC-only and hybrid digital certificates. Our experiments reveal performance drawbacks associated with the switch to PQC or hybrid solutions. However, considering the high volume of certificates issued daily by organizations like Let's Encrypt, the performance of ACME is of utmost importance. To address this concern, we propose a novel challenge method for ACME. Compared to the widely used HTTP-01 method, our findings indicate an average PQC certificate issuance time that is 4.22 times faster, along with a potential reduction of up to 35% in communication size.
Last updated:  2023-12-15
Camel: E2E Verifiable Instant Runoff Voting without Tallying Authorities
Luke Harrison, Samiran Bag, and Feng Hao
Instant Runoff Voting (IRV) is one example of ranked-choice voting. It provides many known benefits when used in elections, such as minimising vote splitting, ensuring few votes are wasted, and providing resistance to strategic voting. However, the voting and tallying procedures for IRV are much more complicated than those of plurality and are both error-prone and tedious. Many automated systems have been proposed to simplify these procedures in IRV. Some of these also employ cryptographic techniques to protect the secrecy of ballots and enable verification of the tally. Nearly all of these cryptographic systems require a set of trustworthy tallying authorities (TAs) to perform the decryption of votes and/or running of mix servers, which adds significant complexity to the implementation and election management. We address this issue by proposing Camel: an E2E verifiable solution for IRV that requires no TAs. Camel employs a novel representation and a universally verifiable shifting procedure for ballots that facilitate the elimination of candidates as required in an IRV election. We combine these with a homomorphic encryption scheme and zero-knowledge proofs to protect the secrecy of the ballots and enable any party to verify the well-formedness of the ballots and the correctness of the tally in an IRV election. We examine the security of Camel and prove it maintains ballot secrecy by limiting the learned information (namely the tally) against a set of colluding voters.
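For reference, the plaintext elimination logic that Camel must reproduce under encryption is simple (a toy tally, ignoring ties and the cryptographic layer):

def irv_winner(ballots):
    # ballots: rankings of candidates, most-preferred first.
    remaining = {c for b in ballots for c in b}
    while True:
        # Count each ballot for its highest-ranked remaining candidate.
        counts = {c: 0 for c in remaining}
        for b in ballots:
            for c in b:
                if c in remaining:
                    counts[c] += 1
                    break
        leader = max(counts, key=counts.get)
        if 2 * counts[leader] > sum(counts.values()):
            return leader
        # Eliminate the weakest candidate (ties broken arbitrarily here;
        # a real election needs an explicit tie-break rule).
        remaining.remove(min(counts, key=counts.get))

ballots = [["A", "B", "C"], ["B", "C", "A"], ["C", "B", "A"],
           ["B", "A", "C"], ["A", "C", "B"]]
print(irv_winner(ballots))   # C is eliminated first, then B wins 3-2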
Last updated:  2023-12-15
Game Theoretical Analysis of DAG-Ledgers Backbone
Yackolley Amoussou-Guenou, Simone Galimberti, and Maria Potop-Butucaru
We study the rational behaviors of agents in DAG-based distributed ledgers. We analyze generic algorithms that encapsulate the main actions of agents in a DAG-based distributed ledger: voting for a block, and checking its validity. Knowing that those actions have costs, and that validating a block gives rewards to the agents who participated in the validation procedure, we use game theory to study how strategic agents behave while trying to maximize their gains. We consider scenarios with different types of agents and investigate whether there exist equilibria where the properties of the protocols are guaranteed. The analysis is focused on the study of equilibria when invalid blocks may be issued. We find that in such a case, there exist equilibria where the protocol properties may be violated. However, we also show that in all studied cases, there exist equilibria satisfying the protocol properties.
Last updated:  2023-12-15
Hidden Stream Ciphers and TMTO Attacks on TLS 1.3, DTLS 1.3, QUIC, and Signal
John Preuß Mattsson
Transport Layer Security (TLS) 1.3 and the Signal protocol are very important and widely used security protocols. We show that the key update function in TLS 1.3 and the symmetric key ratchet in Signal can be modeled as non-additive synchronous stream ciphers. This means that the efficient Time Memory Tradeoff Attacks for stream ciphers can be applied. The implication is that TLS 1.3, QUIC, DTLS 1.3, and Signal offer a lower security level against TMTO attacks than expected from the key sizes. We provide detailed analyses of the key update mechanisms in TLS 1.3 and Signal, illustrate the importance of ephemeral key exchange, and show that the process that DTLS 1.3 and QUIC use to calculate AEAD limits is flawed. We provide many concrete recommendations for the analyzed protocols.
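The observation can be made concrete: TLS 1.3's key update derives each traffic secret purely from the previous one via HKDF-Expand-Label (RFC 8446), so the secrets evolve like the state of a synchronous stream cipher. A sketch using SHA-256 (the starting secret below is a placeholder):

import hashlib, hmac

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    out, t, i = b"", b"", 1
    while len(out) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        out, i = out + t, i + 1
    return out[:length]

def hkdf_expand_label(secret: bytes, label: bytes, ctx: bytes, length: int) -> bytes:
    full = b"tls13 " + label                  # HkdfLabel per RFC 8446
    info = length.to_bytes(2, "big") + bytes([len(full)]) + full \
           + bytes([len(ctx)]) + ctx
    return hkdf_expand(secret, info, length)

secret = bytes(32)   # placeholder application_traffic_secret_0
for n in range(3):
    # secret_{n+1} depends only on secret_n: a keystream-like state update.
    secret = hkdf_expand_label(secret, b"traffic upd", b"", 32)
    print(f"application_traffic_secret_{n + 1} = {secret.hex()[:16]}...")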
Last updated:  2023-12-15
When and How to Aggregate Message Authentication Codes on Lossy Channels?
Eric Wagner, Martin Serror, Klaus Wehrle, and Martin Henze
Aggregation of message authentication codes (MACs) is a proven and efficient method to preserve valuable bandwidth in resource-constrained environments: Instead of appending a long authentication tag to each message, the integrity protection of multiple messages is aggregated into a single tag. However, while such aggregation saves bandwidth, a single lost message typically means that authentication information for multiple messages cannot be verified anymore. With the significant increase of bandwidth-constrained lossy communication, as applications shift towards wireless channels, it thus becomes paramount to study the impact of packet loss on the diverse MAC aggregation schemes proposed over the past 15 years to assess when and how to aggregate message authentication. Therefore, we empirically study all relevant MAC aggregation schemes in the context of lossy channels, investigating achievable goodput improvements, the resulting verification delays, processing overhead, and resilience to denial-of-service attacks. Our analysis shows the importance of carefully choosing and configuring MAC aggregation, as selecting and correctly parameterizing the right scheme can, e.g., improve goodput by 39% to 444%, depending on the scenario. However, since no aggregation scheme performs best in all scenarios, we provide guidelines for network operators to select optimal schemes and parameterizations suiting specific network settings.
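One simple aggregation flavor (XORing truncated HMAC tags; the paper surveys many schemes with different trade-offs) makes the bandwidth/loss trade-off visible: four messages share one 8-byte tag, but losing any one of them leaves the tag unverifiable:

import hashlib, hmac, secrets

def tag(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()[:8]   # truncated tag

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = secrets.token_bytes(16)
msgs = [b"m1", b"m2", b"m3", b"m4"]

agg = b"\x00" * 8                # one 8-byte tag instead of four: 24 bytes saved
for m in msgs:
    agg = xor(agg, tag(key, m))

# Verification needs *all* covered messages: drop any one and the
# remaining three cannot be authenticated from this tag alone.
check = b"\x00" * 8
for m in msgs:
    check = xor(check, tag(key, m))
assert check == agg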
Last updated:  2023-12-15
Computation of Hilbert class polynomials and modular polynomials from supersingular elliptic curves
Antonin Leroux
We present several new heuristic algorithms to compute class polynomials and modular polynomials modulo a prime $p$ by revisiting the idea of working with supersingular elliptic curves. The best known algorithms to date are based on ordinary curves, due to the supposed inefficiency of the supersingular case. While this was true a decade ago, the recent advances in the study of supersingular curves through the Deuring correspondence, motivated by isogeny-based cryptography, have provided all the tools to perform the necessary tasks efficiently. Our main ingredients are two new heuristic algorithms to compute the $j$-invariants of supersingular curves having an endomorphism ring contained in some set of isomorphism classes of maximal orders. The first one is derived easily from the existing tools of isogeny-based cryptography, while the second introduces new ideas to perform that task efficiently for a large number of maximal orders at the same time. For each of the polynomials (Hilbert and modular), we obtain two algorithms. The first one, which we qualify as \textit{direct}, is based on the computation of a set of well-chosen supersingular $j$-invariants defined over $\mathbb{F}_{p^2}$ and uses the aforementioned algorithm to translate maximal orders to $j$-invariants as its main building block. The second one is a CRT algorithm that applies the direct algorithm on a set of small primes and reconstructs the result modulo $p$ with the Chinese remainder theorem. In both cases, the direct algorithm achieves the best known complexity for primes $p$ that are relatively small compared to the discriminant (for the Hilbert case) and to the level (for the modular case). Our CRT algorithms match the complexities of the state-of-the-art CRT approach based on ordinary curves, while improving some of the steps, thus opening the possibility of better practical efficiency. Second, we revisit the CRT method for both class and modular polynomials: we show that applying our second heuristic algorithm over supersingular curves to the CRT approach yields the same asymptotic complexity as the best known algorithms based on ordinary curves, and we argue that our new approach might be more efficient in practice. The situation appears especially promising for modular polynomials, as our approach reduces the asymptotic cost of elliptic curve operations by a linear factor in the level $\ell$. We obtain an algorithm whose asymptotic complexity is fully dominated by linear algebra and standard polynomial arithmetic over finite fields.
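The CRT reconstruction step at the heart of such algorithms is elementary: compute each coefficient modulo many small primes, lift to the integers once the product of the primes exceeds the coefficient's magnitude, and reduce modulo $p$. A sketch with illustrative numbers:

from math import prod

def crt(residues, moduli):
    # Chinese-remainder reconstruction for pairwise-coprime moduli.
    M, x = prod(moduli), 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# Hypothetical coefficient, recovered from its residues mod small primes
# (signed coefficients would additionally be centered into (-M/2, M/2]).
small_primes = [101, 103, 107, 109]     # product ~ 1.2e8 > coeff
coeff = 12345678
residues = [coeff % q for q in small_primes]
assert crt(residues, small_primes) == coeff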
Last updated:  2023-12-14
Threshold ECDSA in Three Rounds
Jack Doerner, Yashvanth Kondi, Eysa Lee, and abhi shelat
We present a three-round protocol for threshold ECDSA signing with malicious security against a dishonest majority, which information-theoretically UC-realizes a standard threshold signing functionality, assuming only ideal commitment and two-party multiplication primitives. Our protocol combines an intermediate representation of ECDSA signatures that was recently introduced by Abram et al. (Eurocrypt'22) with an efficient statistical consistency check reminiscent of the ones used by the protocols of Doerner et al. (S&P'18, S&P'19). We show that shared keys for our signing protocol can be generated using a simple commit-release-and-complain procedure, without any proofs of knowledge, and to compute the intermediate representation of each signature, we propose a two-round vectorized multiplication protocol based on oblivious transfer that outperforms all similar constructions.
Last updated:  2023-12-14
m-Stability: Threshold Security Meets Transferable Utility
Osman Biçer, Burcu Yıldız, and Alptekin Küpçü
Use of game theory and mechanism design in cloud security is a well-studied topic. When applicable, it has the advantages of being efficient and simple compared to cryptography alone. Most analyses consider two-party settings, or multi-party settings where coalitions are not allowed. However, many cloud security problems that we face are in the multi-party setting, and the involved parties can almost freely collaborate with each other. To formalize the study of disincentivizing coalitions from deviating strategies, a well-known definition named k-resiliency was proposed by Abraham et al. (ACM PODC '06). Since its proposal, k-resiliency and related definitions have been used extensively for mechanism design. However, in this work we observe a shortcoming of k-resiliency: although the definition is sound, it is too strict for many cases and rules out secure mechanisms as insecure. To overcome this issue, we propose a new definition named l-repellence against the presence of a single coalition to replace k-resiliency. Our definition incorporates transferable utility from game theory, as this is realistic in many distributed and multi-party computing settings. We also propose the m-stability definition against the presence of multiple coalitions, which is inspired by threshold security in cryptography. We then show the advantages of our novel definitions on three mechanisms, none of which were previously analyzed against coalitions: incentivized cloud computation, forwarding data packages in ad hoc networks, and connectivity in ad hoc networks. Regarding the former, our concepts improve the proposal by Küpçü (IEEE TDSC '17) by ensuring a coalition-proof mechanism.
Last updated:  2023-12-14
SPURT: Scalable Distributed Randomness Beacon with Transparent Setup
Sourav Das, Vinith Krishnan, Irene Miriam Isaac, and Ling Ren
Having shared access to high-quality random numbers is essential in many important applications. Yet, existing constructions of distributed random beacons still have limitations such as imperfect security guarantees, strong setup or network assumptions, or high costs. In this paper, we present SPURT, an efficient distributed randomness beacon protocol that does not require any trusted or expensive setup and is secure against a malicious adversary that controls up to one-third of the nodes in a partially synchronous network. We formally prove that each output of SPURT is unpredictable, bias-resistant, and publicly verifiable. SPURT has an amortized total communication cost of $O(\lambda n^2)$ per beacon output where $\lambda$ is the security parameter. While designing SPURT, we also design a publicly verifiable secret sharing (PVSS) scheme whose security is based on the standard Decisional Bilinear Diffie-Hellman assumption and does not require a Random Oracle. We implement SPURT and evaluate it using a network of up to 128 nodes running in geographically distributed AWS instances. Our evaluation shows that SPURT can produce about 84 beacon outputs per minute in a network of 32 nodes and is comparable to systems with stronger assumptions or weaker security.
Last updated:  2023-12-14
FANNG-MPC: Framework for Artificial Neural Networks and Generic MPC
Najwa Aaraj, Abdelrahaman Aly, Tim Güneysu, Chiara Marcolla, Johannes Mono, Rogerio Paludo, Iván Santos-González, Mireia Scholz, Eduardo Soria-Vazquez, Victor Sucasas, and Ajith Suresh
In this work, we introduce FANNG-MPC, a versatile secure multi-party computation framework capable of offering active security for privacy-preserving machine learning as a service (MLaaS). Derived from the now-deprecated SCALE-MAMBA, FANNG is a data-oriented fork, featuring a novel set of libraries and instructions for realizing private neural networks, effectively reviving the popular framework. To the best of our knowledge, FANNG is the first MPC framework to offer actively secure MLaaS in the dishonest majority setting, specifically for two parties. FANNG goes beyond SCALE-MAMBA by decoupling offline and online phases and materializing the dealer model in software, enabling a separate set of entities to produce offline material. The framework incorporates database support, a new instruction set for pre-processed material, including garbled circuits and convolutional and matrix multiplication triples. FANNG also implements novel private comparison protocols and an optimized library supporting neural network functionality. All our theoretical claims are substantiated by an extensive evaluation using an open-sourced implementation, including the private evaluation of popular neural networks like LeNet and VGG16.
Last updated:  2023-12-14
On the Cost of Post-Compromise Security in Concurrent Continuous Group-Key Agreement
Benedikt Auerbach, Miguel Cueto Noval, Guillermo Pascual-Perez, and Krzysztof Pietrzak
Continuous Group-Key Agreement (CGKA) allows a group of users to maintain a shared key. It is the fundamental cryptographic primitive underlying group messaging schemes and related protocols, most notably TreeKEM, the underlying key agreement protocol of the Messaging Layer Security (MLS) protocol, a standard for group messaging by the IETF. CGKA works in an asynchronous setting where parties must come online only occasionally, and their messages are relayed by an untrusted server. The most expensive operation provided by CGKA is that which allows for a user to refresh their key material in order to achieve forward secrecy (old messages are secure when a user is compromised) and post-compromise security (users can heal from compromise). One caveat of early CGKA protocols is that these update operations had to be performed sequentially, with any user wanting to update their key material having to receive and process all previous updates. Later versions of TreeKEM do allow for concurrent updates at the cost of a communication overhead per update message that is linear in the number of updating parties. This was shown to be indeed necessary when achieving PCS in just two rounds of communication by [Bienstock et al. TCC'20]. The recently proposed protocol CoCoA [Alwen et al. Eurocrypt'22], however, shows that this overhead can be reduced if PCS requirements are relaxed, and only a logarithmic number of rounds is required. The natural question, thus, is whether CoCoA is optimal in this setting. In this work we answer this question, providing a lower bound on the cost (concretely, the amount of data to be uploaded to the server) for CGKA protocols that heal in an arbitrary number of rounds $k$, which shows that CoCoA is very close to optimal. Additionally, we extend CoCoA to heal in an arbitrary number of rounds, and propose a modification of it, with a reduced communication cost for certain $k$. We prove our bound in a combinatorial setting where the state of the protocol progresses in rounds, and the state of the protocol in each round is captured by a set system, each set specifying a set of users who share a secret key. We show this combinatorial model is equivalent to a symbolic model capturing building blocks including PRFs and public-key encryption, related to the one used by Bienstock et al. Our lower bound is of order $k\cdot n^{1+1/(k-1)}/\log(k)$, where $2\le k\le \log(n)$ is the number of updates per user the protocol requires to heal. This generalizes the $n^2$ bound for $k=2$ from Bienstock et al. This bound almost matches the $k\cdot n^{1+2/(k-1)}$ or $k^2\cdot n^{1+1/(k-1)}$ efficiency we get for the variants of the CoCoA protocol also introduced in this paper.
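Plugging illustrative values into the stated bounds shows how close the CoCoA-style variants come to the lower bound (a numerical illustration, not taken from the paper):

import math

n = 2 ** 14                      # illustrative number of users
for k in (2, 4, 8, 14):          # rounds-to-heal, with 2 <= k <= log2(n)
    lower  = k * n ** (1 + 1 / (k - 1)) / math.log(k)   # paper's lower bound
    cocoa1 = k * n ** (1 + 2 / (k - 1))                 # first variant's cost
    cocoa2 = k ** 2 * n ** (1 + 1 / (k - 1))            # second variant's cost
    print(f"k={k:2d}  lower~{lower:.2e}  "
          f"variant1~{cocoa1:.2e}  variant2~{cocoa2:.2e}")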
Last updated:  2023-12-14
Not optimal but efficient: a distinguisher based on the Kruskal-Wallis test
Yan Yan, Arnab Roy, and Elisabeth Oswald
Research about the theoretical properties of side channel distinguishers revealed the rules by which to maximise the probability of first order success (``optimal distinguishers'') under different assumptions about the leakage model and noise distribution. Simultaneously, research into bounding first order success (as a function of the number of observations) has revealed universal bounds, which suggest that (even optimal) distinguishers are not able to reach theoretically possible success rates. Is this gap a proof artefact (aka the bounds are not tight) or does a distinguisher exist that is more trace efficient than the ``optimal'' one? We show that in the context of an unknown (and not linear) leakage model there is indeed a distinguisher that outperforms the ``optimal'' distinguisher in terms of trace efficiency: it is based on the Kruskal-Wallis test.
Last updated:  2023-12-14
One-out-of-$q$ OT Combiners
Oriol Farràs and Jordi Ribes-González
In $1$-out-of-$q$ Oblivious Transfer (OT) protocols, a sender Alice is able to send one of $q\ge 2$ messages to a receiver Bob, all while being oblivious to which message was transferred. Moreover, the receiver learns only one of these messages. Oblivious Transfer combiners take $n$ instances of OT protocols as input, and produce an OT protocol that is secure if sufficiently many of the $n$ original OT instances are secure. We present new $1$-out-of-$q$ OT combiners that are perfectly secure against active adversaries. Our combiners arise from secret sharing techniques. We show that given an $\mathbb{F}_q$-linear secret sharing scheme on a set of $n$ participants and adversary structure $\mathcal{A}$, we can construct $n$-server, $1$-out-of-$q$ OT combiners that are secure against an adversary corrupting either Alice and a set of servers in $\mathcal{A}$, or Bob and a set of servers $B$ with $\bar{B}\notin\mathcal{A}$. If the normalized total share size of the scheme is $\ell$, then the resulting OT combiner requires $\ell$ calls to OT protocols, and the total amount of bits exchanged during the protocol is $(q^2+q+1)\ell\log q$. We also present a construction based on $1$-out-of-$2$ OT combiners that uses the protocol of Crépeau, Brassard and Robert (FOCS 1986). This construction provides smaller communication costs for certain adversary structures, such as threshold ones: For any prime power $q\geq n$, there are $n$-server, $1$-out-of-$q$ OT combiners that are perfectly secure against active adversaries corrupting either Alice or Bob, and a minority of the OT candidates, exchanging $O(qn\log q)$ bits in total.
Last updated:  2023-12-13
Fast batched asynchronous distributed key generation
Jens Groth and Victor Shoup
We present new protocols for threshold Schnorr signatures that work in an asynchronous communication setting, providing robustness and optimal resilience. These protocols provide unprecedented performance in terms of communication and computational complexity. In terms of communication complexity, for each signature, a single party must transmit a few dozen group elements and scalars across the network (independent of the size of the signing committee). In terms of computational complexity, the amortized cost for one party to generate a signature is actually less than that of just running the standard Schnorr signing or verification algorithm (at least for moderately sized signing committees, say, up to 100). For example, we estimate that with a signing committee of 49 parties, at most 16 of which are corrupt, we can generate 50,000 Schnorr signatures per second (assuming each party can dedicate one standard CPU core and 500 Mbps of network bandwidth to signing). Importantly, this estimate includes both the cost of an offline precomputation phase (which just churns out message-independent "presignatures") and an online signature generation phase. Also, the online signing phase can generate a signature with very little network latency (just one to three rounds, depending on how throughput and latency are balanced). To achieve this result, we provide two new innovations. One is a new secret sharing protocol (again, asynchronous, robust, optimally resilient) that allows the dealer to securely distribute shares of a large batch of ephemeral secret keys, and to publish the corresponding ephemeral public keys. To achieve better performance, our protocol minimizes public-key operations, and in particular avoids the traditional approach based on "polynomial commitments". The second innovation is a new algorithm to efficiently combine ephemeral public keys contributed by different parties (some possibly corrupt) into a smaller number of secure ephemeral public keys. This new algorithm is based on a novel construction of a so-called "super-invertible matrix" along with a corresponding highly-efficient algorithm for multiplying this matrix by a vector of group elements. As protocols for verifiably sharing a secret key with an associated public key and the technology of super-invertible matrices both play a major role in threshold cryptography and multi-party computation, our two new innovations should have applicability well beyond that of threshold Schnorr signatures.
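A toy version of the "multiply a super-invertible matrix by a vector of group elements" step, using the classical Vandermonde-based super-invertible family in a small group (the paper's novel matrix construction and fast multiplication algorithm are the actual contribution; the group and values below are illustrative):

p, q, g = 23, 11, 4            # toy group: order-q subgroup of Z_p^*, q prime
n, t = 4, 1                    # n contributed keys, at most t corrupt

secrets_ = [3, 7, 2, 9]                       # toy ephemeral secret keys
pubs = [pow(g, s, p) for s in secrets_]       # contributed public keys g^s_i

# Vandermonde-style super-invertible matrix: column i is (1, a_i, a_i^2, ...)
# for distinct points a_i = i + 1, so every square submatrix formed from
# n - t columns is itself a Vandermonde matrix, hence invertible over F_q.
V = [[pow(i + 1, j, q) for i in range(n)] for j in range(n - t)]

combined = [1] * (n - t)
for j in range(n - t):
    for i in range(n):
        combined[j] = combined[j] * pow(pubs[i], V[j][i], p) % p

# Consistency: combined[j] = g^(sum_i V[j][i] * s_i mod q).
for j in range(n - t):
    e = sum(V[j][i] * secrets_[i] for i in range(n)) % q
    assert combined[j] == pow(g, e, p)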
Last updated:  2023-12-13
Oops, I did it again revisited: another look at reusing one-time signatures
Scott Fluhrer
In "Oops, I did it again" - Security of One-Time Signatures under Two-Message Attacks, Bruinderink and Hülsing analyzed the effect of key reuse for several one time signature systems. When they analyzed the Winternitz system, they assumed certain probabilities were independent when they weren't, leading to invalid conclusions. This paper does a more correct characterization of the Winternitz scheme, and while their ultimate conclusion (that key reuse allows for practical forgeries) is correct, the situation is both better and worse than what they concluded.
Last updated:  2023-12-13
Muckle+: End-to-End Hybrid Authenticated Key Exchanges
Sonja Bruckner, Sebastian Ramacher, and Christoph Striecks
End-to-end authenticity in public networks plays a significant role. Namely, without authenticity, the adversary might be able to retrieve even confidential information straight away by impersonating others. Proposed solutions to establish an authenticated channel cover pre-shared key-based, password-based, and certificate-based techniques. To add confidentiality to an authenticated channel, authenticated key exchange (AKE) protocols usually have one of the three solutions built in. As an amplification, hybrid AKE (HAKE) approaches are getting more popular nowadays and were presented in several flavors to incorporate classical, post-quantum, or quantum-key-distribution components. The main benefit is redundancy, i.e., if some of the components fail, the primitive still yields a confidential and authenticated channel. However, current HAKE instantiations either rely on pre-shared keys (which yields inefficient end-to-end authenticity) or only support one or two of the three above components (resulting in reduced redundancy and flexibility). In this work, we present an extension of a modular HAKE framework due to Dowling, Brandt Hansen, and Paterson (PQCrypto'20) that does not suffer from the above constraints. While their instantiation, dubbed Muckle, requires pre-shared keys (and hence yields inefficient end-to-end authenticity), our extended instantiation called Muckle+ utilizes post-quantum digital signatures. While replacing pre-shared keys with digital signatures is rather straightforward in general, this turned out to be surprisingly non-trivial when applied to HAKE frameworks (resulting in a significant model change with adapted proof techniques).
Last updated:  2023-12-13
Efficient Low-Latency Masking of Ascon without Fresh Randomness
Srinidhi Hari Prasad, Florian Mendel, Martin Schläffer, and Rishub Nagpal
In this work, we present the first low-latency, second-order masked hardware implementation of Ascon that requires no fresh randomness using only $d+1$ shares. Our results significantly outperform any publicly known second-order masked implementations of AES and Ascon in terms of combined area, latency and randomness requirements. Ascon is a family of lightweight authenticated encryption and hashing schemes selected by NIST for standardization. Ascon is tailored for small form factors. It requires less power and energy while attaining the same or even better performance than current NIST standards. We achieve the reduction of latency by rearranging the linear layers of the Ascon permutation in a round-based implementation. We provide an improved technique to achieve implementations without the need for fresh randomness. It is based on the concept of changing of the guards, extended to the second-order case. Together with the reduction of latency, we need to consider a large set of additional conditions, which we propose to solve using a SAT solver. We have formally verified both our first- and second-order implementations of Ascon using CocoAlma for the first two rounds. Additionally, we have performed a leakage assessment using t-tests on all 12 rounds of the initial permutation. Finally, we provide a comparison of our second-order masked Ascon implementation with other results.
Last updated:  2023-12-13
Breaking RSA Authentication on Zynq-7000 SoC and Beyond: Identification of Critical Security Flaw in FSBL Software
Prasanna Ravi, Arpan Jati, and Shivam Bhasin
In this report, we perform an in-depth analysis of the RSA authentication feature used in the secure boot procedure of the Xilinx Zynq-7000 SoC device. The First Stage Boot Loader (FSBL) is a critical piece of software executed during secure boot, which utilizes the RSA authentication feature to validate all the hardware and software partitions to be mounted on the device. We analyzed the implementation of FSBL (provided by Xilinx) for the Zynq-7000 SoC and identified a critical security flaw, whose exploitation makes it possible to load an unauthenticated application onto the Zynq device, thereby bypassing RSA authentication. We also experimentally validated the presence of the vulnerability through a Proof of Concept (PoC) attack to successfully mount an unauthenticated software application on an RSA authenticated Zynq device. The identified flaw is only present in the FSBL software and thus can be easily fixed through appropriate modification of the FSBL software. Thus, the first contribution of our work is the identification of a critical security flaw in the FSBL software to bypass RSA authentication. Upon bypassing RSA authentication, an attacker can load any unauthenticated software application on the target device to mount a variety of attacks. Among the several possible attacks, we are interested in performing recovery of the encrypted bitstream in the target boot image of the Zynq-7000 device. To the best of our knowledge, there does not exist any prior work that has reported a practical bitstream recovery attack on the Zynq-7000 device. In the context of bitstream recovery, Ender et al. in 2020 proposed the Starbleed attack that is applicable to standalone Virtex-6 and 7-series Xilinx FPGAs. The design advisory provided by Xilinx as a response to the Starbleed attack claims that the Zynq-7000 SoC is resistant "due to the use of asymmetric and/or symmetric authentication in the boot/configuration process that ensures configuration is authenticated prior to use". Due to the security flaw found in the FSBL, we managed to identify a novel approach to mount the Starbleed attack on the Zynq-7000 device for full bitstream recovery. Thus, as a second contribution of our work, we present the first practical demonstration of the Starbleed attack on the Zynq-7000 SoC. We perform experimental validation of our proposed attacks on the PYNQ-Z1 platform based on the Zynq-7000 SoC.
Last updated:  2023-12-13
Dishonest Majority Multiparty Computation over Matrix Rings
Hongqing Liu, Chaoping Xing, Chen Yuan, and Taoxu Zou
Privacy-preserving machine learning (PPML) has gained growing importance over the last few years. One of the biggest challenges is to improve the efficiency of PPML so that its communication and computation costs are affordable for large machine learning models such as deep learning. Linear algebra, and matrix multiplication in particular, occupies a significant part of the computation in deep learning models such as deep convolutional neural networks (CNNs). Thus, it is desirable to design MPC protocols specialized for matrix operations. In this work, we propose a dishonest majority MPC protocol over matrix rings which supports matrix multiplication and addition. Our MPC protocol can be seen as a variant of the SPDZ protocol: the MAC and global key of our protocol are vectors of length $m$, and the secret of our protocol is an $m\times m$ matrix. Compared to the classic SPDZ protocol, our MPC protocol reduces the communication complexity by at least a factor of $m$. We also show that our MPC protocol is as efficient as [11], which also presented a dishonest majority MPC protocol specialized for matrix operations. The protocol of [11] resorts to a homomorphic encryption scheme (the BFV scheme) to produce matrix triples in the preprocessing phase, which implies that it only supports matrix operations over integer rings or prime fields of large size. In contrast, we resort to vector oblivious linear evaluations and random vector oblivious linear evaluations to generate correlated randomness in the preprocessing phase. Thus, the matrices of our MPC protocol can be defined over any finite field or integer ring. Due to the small size of our MAC, the communication complexity of our MPC protocol remains almost the same regardless of the size of the field or the ring.
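A minimal numpy sketch of the vector-MAC idea described above, under the illustrative assumption that the MAC of a secret matrix $X$ is the length-$m$ vector $X\alpha$ for a global key vector $\alpha$ (the paper's exact authentication relation may differ). The point it demonstrates is that the MAC is a length-$m$ vector rather than an $m\times m$ matrix, and that linear operations on shares preserve the MAC relation without interaction:

```python
import numpy as np

p = 65537                      # toy prime modulus
m, n_parties = 4, 3
rng = np.random.default_rng(0)

def additive_shares(M):
    """Split M into n_parties additive shares mod p."""
    shares = [rng.integers(0, p, M.shape) for _ in range(n_parties - 1)]
    shares.append((M - sum(shares)) % p)
    return shares

alpha = rng.integers(0, p, m)          # global key: a length-m vector
X = rng.integers(0, p, (m, m))         # secret: an m x m matrix
mac_X = (X @ alpha) % p                # MAC: a length-m vector (illustrative relation)

# Linear operations are local: shares of X and Y (and of their MACs) add up
# to shares of X + Y (and of its MAC), with no interaction between parties.
Y = rng.integers(0, p, (m, m))
X_sh, Y_sh = additive_shares(X), additive_shares(Y)
macX_sh, macY_sh = additive_shares(mac_X), additive_shares((Y @ alpha) % p)

Z_sh = [(a + b) % p for a, b in zip(X_sh, Y_sh)]
macZ_sh = [(a + b) % p for a, b in zip(macX_sh, macY_sh)]

Z = sum(Z_sh) % p
assert np.array_equal(sum(macZ_sh) % p, (Z @ alpha) % p)   # MAC relation holds
```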
Last updated:  2023-12-13
Non-Interactive Classical Verification of Quantum Depth: A Fine-Grained Characterization
Nai-Hui Chia and Shih-Han Hung
We introduce protocols for classical verification of quantum depth (CVQD). These protocols enable a classical verifier to differentiate between devices of varying quantum circuit depths, even in the presence of classical computation. The goal is to demonstrate that a classical verifier can reject a device with a quantum circuit depth of no more than $d$, even if the prover employs additional polynomial-time classical computation to deceive. Conversely, the verifier accepts a device with a quantum circuit depth of $d'>d$. Previous results for separating hybrid quantum-classical computers with various quantum depths require either quantum access to oracles or interactions between the classical verifier and the quantum prover. However, instantiating oracle separations can significantly increase the quantum depth in general, and interaction challenges the quantum device to keep the qubits coherent while waiting for the verifier's messages. These requirements pose barriers to implementing the protocols on near-term devices. In this work, we present a two-message protocol under the quantum hardness of learning with errors and the random oracle heuristic. An honest prover only needs classical access to the random oracle, and therefore any instantiation of the oracle does not increase the quantum depth. To our knowledge, our protocol is the first non-interactive CVQD, the instantiation of which using concrete hash functions, e.g., SHA-3, does not require additional quantum depth. Our second protocol seeks to explore the minimality of cryptographic assumptions and the tightness of the separations. To accomplish this, we introduce an untrusted quantum machine that shares entanglements with the target machine. Utilizing a robust self-test, our protocol certifies the depth of the target machine with information-theoretic security and nearly optimal separation.
Last updated:  2023-12-12
BOLT: Privacy-Preserving, Accurate and Efficient Inference for Transformers
Qi Pang, Jinhao Zhu, Helen Möllering, Wenting Zheng, and Thomas Schneider
The advent of transformers has brought about significant advancements in traditional machine learning tasks. However, their pervasive deployment has raised concerns about the potential leakage of sensitive information during inference. Existing approaches using secure multiparty computation (MPC) face limitations when applied to transformers due to the extensive model size and resource-intensive matrix-matrix multiplications. In this paper, we present BOLT, a privacy-preserving inference framework for transformer models that supports efficient matrix multiplications and nonlinear computations. Combined with our novel machine learning optimizations, BOLT reduces the communication cost by 10.91x. Our evaluation on diverse datasets demonstrates that BOLT maintains comparable accuracy to floating-point models and achieves 4.8-9.5x faster inference across various network settings compared to the state-of-the-art system.
Last updated:  2023-12-12
Failed crypto: Matrices over non-standard arithmetic
Daniel R. L. Brown
A failed hypothesis is reported here. The hope was that large matrices over small non-standard arithmetic are likely to have infeasible division, and furthermore be secure for use in Rabi–Sherman associative cryptography.
Last updated:  2023-12-12
Ratel: MPC-extensions for Smart Contracts
Yunqi Li, Kyle Soska, Zhen Huang, Sylvain Bellemare, Mikerah Quintyne-Collins, Lun Wang, Xiaoyuan Liu, Dawn Song, and Andrew Miller
Enhancing privacy on smart contract-enabled blockchains has garnered much attention in recent research. Zero-knowledge proofs (ZKPs) are one of the most popular approaches; however, they fail to provide full expressiveness and fine-grained privacy. To illustrate this, we underscore an underexplored type of Miner Extractable Value (MEV), called Residual Bids Extractable Value (RBEV). Residual bids highlight the vulnerability where unfulfilled bids inadvertently reveal traders’ unmet demands and prospective trading strategies, thus exposing them to exploitation. ZKP-based approaches fail to address RBEV as they cannot provide post-execution privacy without some level of information disclosure. Other MEV mitigations, such as fair-ordering protocols, also fail to address RBEV. We introduce Ratel, an innovative framework bridging a multi-party computation (MPC) prototyping framework (MP-SPDZ) and a smart contract language (Solidity), harmonizing the privacy and full expressiveness of MPC with Solidity’s on-chain programmability. This synergy empowers developers to effortlessly craft privacy-preserving decentralized applications (DApps). We demonstrate Ratel’s efficacy through two distinguished decentralized finance (DeFi) applications: a decentralized exchange and a collateral auction, effectively mitigating the potential RBEV issue. Furthermore, Ratel is equipped with a lightweight crash-reset mechanism, enabling the seamless recovery of transiently benign faulty nodes. To prevent the crash-reset mechanism from being abused by malicious entities and to ward off DoS attacks, we incorporate a cost-utility analysis anchored in the Bayesian approach. Our performance evaluation of the applications developed under the Ratel framework underscores their competency in managing real-world peak-time workloads.
Last updated:  2023-12-12
PARScoin: A Privacy-preserving, Auditable, and Regulation-friendly Stablecoin
Amirreza Sarencheh, Aggelos Kiayias, and Markulf Kohlweiss
Stablecoins are digital assets designed to maintain a consistent value relative to a reference point, serving as a vital component in the blockchain and Decentralized Finance (DeFi) ecosystems. Typical implementations of stablecoins via smart contracts come with important downsides such as a questionable level of privacy, potentially high fees, and lack of scalability. We put forth a new design, PARScoin, for a Privacy-preserving, Auditable, and Regulation-friendly Stablecoin that mitigates these issues while enabling high performance, both in terms of speed of settlement and in scaling to large numbers of users. Our construction is blockchain-agnostic and is analyzed in the Universal Composability (UC) framework, offering a secure and modular approach for its integration into the broader blockchain ecosystem.
Last updated:  2023-12-12
Integral Cryptanalysis Using Algebraic Transition Matrices
Tim Beyne and Michiel Verbauwhede
In this work we introduce algebraic transition matrices as the basis for a new approach to integral cryptanalysis that unifies monomial trails (Hu et al., Asiacrypt 2020) and parity sets (Boura and Canteaut, Crypto 2016). Algebraic transition matrices allow for the computation of the algebraic normal form of a primitive based on the algebraic normal forms of its components by means of well-understood operations from linear algebra. The theory of algebraic transition matrices leads to better insight into the relation between integral properties of $F$ and $F^{-1}$. In addition, we show that the link between invariants and eigenvectors of correlation matrices (Beyne, Asiacrypt 2018) carries over to algebraic transition matrices. Finally, algebraic transition matrices suggest a generalized definition of integral properties that subsumes previous notions such as extended division properties (Lambin, Derbez and Fouque, DCC 2020). On the practical side, a new algorithm is described to search for these generalized properties and applied to Present, resulting in new properties. The algorithm can be instantiated with any existing automated search method for integral cryptanalysis.
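To make the linear-algebra viewpoint concrete, here is a small self-contained sketch (with an indexing convention chosen for illustration; the paper's definition may order things differently) that builds the matrix whose row $v$ holds the ANF of the monomial function $x \mapsto F(x)^v$, and checks the composition property $A(G\circ F)=A(G)\,A(F)$ over $\mathbb{F}_2$:

```python
import numpy as np

def mobius(tt):
    """Truth table (length 2^n, entries 0/1) -> ANF coefficients."""
    a, n = list(tt), len(tt).bit_length() - 1
    for i in range(n):
        for x in range(len(a)):
            if x >> i & 1:
                a[x] ^= a[x ^ (1 << i)]
    return a

def transition_matrix(f, n, m):
    """Algebraic transition matrix of f: F_2^n -> F_2^m (one convention:
    row v is the ANF of the product-of-output-bits function x -> f(x)^v)."""
    A = np.zeros((1 << m, 1 << n), dtype=np.uint8)
    for v in range(1 << m):
        tt = [1 if (f(x) & v) == v else 0 for x in range(1 << n)]
        A[v] = mobius(tt)
    return A

n = 3
F = lambda x: (x * 5 ^ x >> 1) & 7        # toy 3-bit functions
G = lambda x: (x ^ x << 1 ^ 3) & 7

A_F, A_G = transition_matrix(F, n, n), transition_matrix(G, n, n)
A_GF = transition_matrix(lambda x: G(F(x)), n, n)
assert np.array_equal(A_GF, (A_G @ A_F) % 2)  # composition = matrix product over F_2
```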
Last updated:  2023-12-12
Exploring SIDH-based Signature Parameters
Andrea Basso, Mingjie Chen, Tako Boris Fouotsa, Péter Kutas, Abel Laval, Laurane Marco, and Gustave Tchoffo Saah
Isogeny-based cryptography is an instance of post-quantum cryptography whose fundamental problem consists of finding an isogeny between two (isogenous) elliptic curves $E$ and $E'$. This problem is closely related to that of computing the endomorphism ring of an elliptic curve. Therefore, many isogeny-based protocols require the endomorphism ring of at least one of the curves involved to be unknown. In this paper, we explore the design of isogeny-based protocols in a scenario where the endomorphism rings of all the curves involved are assumed to be public. In particular, we identify digital signatures based on proofs of isogeny knowledge from SIDH squares as such a candidate. We explore the design choices for such constructions and propose two variants with practical instantiations. We analyze their security along three lines: the first considers attacks based on KLPT with both polynomial and superpolynomial adversaries; the second considers attacks derived from the SIDH attacks; and the third studies the zero-knowledge property of the underlying proof of knowledge.
Last updated:  2023-12-11
Streaming Functional Encryption
Jiaxin Guan, Alexis Korb, and Amit Sahai
We initiate the study of streaming functional encryption (sFE) which is designed for scenarios in which data arrives in a streaming manner and is computed on in an iterative manner as the stream arrives. Unlike in a standard functional encryption (FE) scheme, in an sFE scheme, we (1) do not require the entire data set to be known at encryption time and (2) allow for partial decryption given only a prefix of the input. More specifically, in an sFE scheme, we can sequentially encrypt each data point $x_i$ in a stream of data $x = x_1\ldots x_n$ as it arrives, without needing to wait for all $n$ values. We can then generate function keys for streaming functions which are stateful functions that take as input a message $x_i$ and a state $\mathsf{st}_i$ and output a value $y_i$ and the next state $\mathsf{st}_{i+1}$. For any $k \leq n$, a user with a function key for a streaming function $f$ can learn the first $k$ output values $y_1\ldots y_k$ where $(y_i, \mathsf{st}_{i+1}) = f(x_i, \mathsf{st}_i)$ and $\mathsf{st}_1 = \bot$ given only ciphertexts for the first $k$ elements $x_1\ldots x_k$. In this work, we introduce the notion of sFE and show how to construct it from FE. In particular, we show how to achieve a secure sFE scheme for $\mathsf{P/Poly}$ from a compact, secure FE scheme for $\mathsf{P/Poly}$, where our security notion for sFE is similar to standard FE security except that we require all function queries to be made before the challenge ciphertext query. Furthermore, by combining our result with the FE construction of Jain, Lin, and Sahai (STOC, 2022), we show how to achieve a secure sFE scheme for $\mathsf{P/Poly}$ from the polynomial hardness of well-studied assumptions.
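As a plaintext-level illustration of the streaming syntax just described (no encryption involved; all names are ours), the following sketch shows exactly what a function key for a streaming function $f$ lets a user compute from the ciphertexts of a prefix $x_1\ldots x_k$:

```python
from typing import Any, Callable, Iterable, List, Tuple

StreamFn = Callable[[Any, Any], Tuple[Any, Any]]   # (x_i, st_i) -> (y_i, st_{i+1})

def stream_eval(f: StreamFn, xs: Iterable[Any]) -> List[Any]:
    """Plaintext semantics of an sFE function key: given ciphertexts for a
    prefix x_1..x_k, the key holder learns exactly y_1..y_k."""
    st, ys = None, []                  # st_1 = bottom, modeled here as None
    for x in xs:
        y, st = f(x, st)
        ys.append(y)
    return ys

# Example streaming function: running sum of the stream seen so far.
def running_sum(x, st):
    st = 0 if st is None else st
    return st + x, st + x

print(stream_eval(running_sum, [1, 2, 3, 4]))   # [1, 3, 6, 10]
```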
Last updated:  2023-12-11
High-assurance zeroization
Santiago Arranz Olmos, Gilles Barthe, Ruben Gonzalez, Benjamin Grégoire, Vincent Laporte, Jean-Christophe Léchenet, Tiago Oliveira, and Peter Schwabe
In this paper we revisit the problem of erasing sensitive data from memory and registers during return from a cryptographic routine. While the problem and the related attacker model are fairly easy to phrase, it turns out to be surprisingly hard to guarantee security in this model when implementing cryptography in common languages such as C/C++ or Rust. We revisit the issues surrounding zeroization and then present a principled solution, in the sense that it guarantees that sensitive data is erased and it clearly defines when this happens. We implement our solution as an extension to the formally verified Jasmin compiler and extend the correctness proof of the compiler to cover zeroization. We show that the approach seamlessly integrates with state-of-the-art protections against microarchitectural attacks by integrating zeroization into Libjade, a cryptographic library written in Jasmin with systematic protections against timing and Spectre-v1 attacks. We present benchmarks showing that in many cases the overhead of zeroization is barely measurable and that it stays below 2% except for highly optimized symmetric crypto routines on short inputs.
Last updated:  2023-12-11
First-Order Masked Kyber on ARM Cortex-M4
Daniel Heinz, Matthias J. Kannwischer, Georg Land, Thomas Pöppelmann, Peter Schwabe, and Amber Sprenkels
In this work, we present a fast and first-order secure Kyber implementation optimized for ARM Cortex-M4. Most notably, to our knowledge this is the first liberally-licensed open-source Cortex-M4 implementation of masked Kyber. The ongoing NIST standardization process for post-quantum cryptography and newly proposed side-channel attacks have increased the demand for side-channel analysis and countermeasures for the finalists. On the foundation of the commonly used PQM4 project, we make use of the previously presented optimizations for Kyber on a Cortex-M4 and further combine different ideas from various recent works to achieve a better performance and improve the security in comparison to the original implementations. We show our performance results for first-order secure implementations. Our masked Kyber768 decapsulation on the ARM Cortex-M4 requires only 2 978 441 cycles, including randomness generation from the internal RNG. We then practically verify our implementation by using the t-test methodology with 100 000 traces.
Last updated:  2023-12-11
Quarantined-TreeKEM: a Continuous Group Key Agreement for MLS, Secure in Presence of Inactive Users
Céline Chevalier, Guirec Lebrun, and Ange Martinelli
The recently standardized secure group messaging protocol “Messaging Layer Security” (MLS) is designed to ensure asynchronous communications within large groups, with an almost-optimal communication cost and the same security level as point-to-point secure messaging protocols such as “Signal”. In particular, the core sub-protocol of MLS, a Continuous Group Key Agreement (CGKA) called TreeKEM, must generate a common group key that respects the fundamental security properties of “post-compromise security” and “forward secrecy”, which mitigate the effects of user corruption over time. Most research on CGKAs has focused on how to improve these two security properties. However, post-compromise security and forward secrecy require the active participation of, respectively, all compromised users and all users within the group. Inactive users – who remain offline for long periods – no longer update their encryption keys and therefore represent a vulnerability for the entire group. This issue has already been identified in the MLS standard, but no solution, other than expelling these inactive users after some disconnection time, has been found. We propose here a CGKA protocol based on TreeKEM and fully compatible with the MLS standard, that implements a “quarantine” mechanism for inactive users in order to mitigate the risk induced by these users without removing them from the group. The mechanism updates the inactive users’ encryption keys on their behalf and secures these keys with a secret sharing scheme, as sketched below. If some of the inactive users eventually reconnect, their quarantine stops and they are able to recover all the messages that were exchanged during their offline period. Our “Quarantined-TreeKEM” protocol thus offers a good trade-off between security and functionality, with a very limited – and sometimes negative – communication overhead.
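The abstract does not specify which secret sharing scheme is used; as one plausible instantiation, here is a minimal Shamir sketch over a prime field, splitting a quarantined user's key so that any $t$ of $n$ shareholders can later reconstruct it:

```python
import secrets

P = 2**127 - 1          # a Mersenne prime; the field for the toy shares

def shamir_share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def shamir_reconstruct(shares):
    """Lagrange interpolation of the degree-(t-1) polynomial at 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = secrets.randbelow(P)                 # the quarantined user's key material
shares = shamir_share(key, t=3, n=5)
assert shamir_reconstruct(shares[:3]) == key
```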
Last updated:  2023-12-11
A Transaction-Level Model for Blockchain Privacy
François-Xavier Wicht, Zhipeng Wang, Duc V. Le, and Christian Cachin
Considerable work explores blockchain privacy notions. Yet, it usually employs entirely different models and notations, complicating potential comparisons. In this work, we use the Transaction Directed Acyclic Graph (TDAG) and extend it to capture blockchain privacy notions (PDAG). We give consistent definitions for untraceability and unlinkability. Moreover, we specify conditions on a blockchain system to achieve each aforementioned privacy notion. Thus, we can compare the two most prominent privacy-preserving blockchains -- Monero and Zcash -- in terms of privacy guarantees. Finally, we unify linking heuristics from the literature with our graph notation and review a good portion of research on blockchain privacy.
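To give a flavor of what a linking heuristic over a transaction-graph notation looks like (a toy example, not the paper's PDAG formalism), the sketch below applies the classic multi-input heuristic from the literature to a hand-made transaction DAG:

```python
from collections import defaultdict

# A toy transaction DAG: tx -> (input addresses spent, output addresses created).
txs = {
    "t1": ({"a1", "a2"}, {"a3"}),    # a1 and a2 are spent together
    "t2": ({"a3"}, {"a4", "a5"}),
    "t3": ({"a2", "a6"}, {"a7"}),    # a2 is later spent together with a6
}

def common_spender_clusters(txs):
    """Multi-input heuristic: input addresses spent in the same transaction
    are assumed to belong to one entity (plain union-find)."""
    parent = {}
    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)
    for inputs, _ in txs.values():
        first, *rest = sorted(inputs)
        find(first)                      # register single-input spenders too
        for other in rest:
            union(first, other)
    clusters = defaultdict(set)
    for a in list(parent):
        clusters[find(a)].add(a)
    return list(clusters.values())

print(common_spender_clusters(txs))      # {a1, a2, a6} end up in one cluster
```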
Last updated:  2023-12-11
Middle-Products of Skew Polynomials and Learning with Errors
Cong Ling and Andrew Mendelsohn
We extend the middle product to skew polynomials, which we use to define a skew middle-product Learning with Errors (LWE) variant. We also define a skew polynomial LWE problem, which we connect to Cyclic LWE (CLWE), a variant of LWE in cyclic division algebras. We then reduce a family of skew polynomial LWE problems to skew middle-product LWE, for a family which includes the structures found in CLWE. Finally, we give an encryption scheme and demonstrate its IND-CPA security, assuming the hardness of skew middle-product LWE.
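For readers unfamiliar with middle products, here is the standard commutative version for ordinary polynomials (the paper's skew variant additionally twists coefficients by a ring automorphism, which this sketch omits):

```python
def poly_mul(a, b):
    """Schoolbook product of coefficient lists (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def middle_product(a, b, n):
    """Middle product MP_n: a has n coefficients, b has 2n-1;
    return coefficients n-1 .. 2n-2 of a*b (the 'middle' n of them)."""
    assert len(a) == n and len(b) == 2 * n - 1
    return poly_mul(a, b)[n - 1:2 * n - 1]

# (1 + 2x)(3 + 4x + 5x^2) = 3 + 10x + 13x^2 + 10x^3; the middle two are kept.
print(middle_product([1, 2], [3, 4, 5], 2))   # [10, 13]
```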
Last updated:  2023-12-11
On Constructing One-Way Quantum State Generators, and More
Shujiao Cao and Rui Xue
As a quantum analogue of the one-way function, the notion of a one-way quantum state generator was recently proposed by Morimae and Yamakawa (CRYPTO'22); it is implied by pseudorandom states and can be used to devise one-time secure digital signatures. Due to Kretschmer's result (TQC'20), it is believed that pseudorandom state generators require less than post-quantum secure one-way functions. Unfortunately, it remains unknown how to achieve a one-way quantum state generator without the existence of post-quantum secure one-way functions. In this paper, we study this problem and obtain the following results. We propose two variants of the one-way quantum state generator, called the weak one-way quantum state generator and the distributionally one-way quantum state generator. We then show the equivalence between weak and strong one-way quantum state generators, as well as the equivalence between weak and distributionally one-way quantum state generators in the symmetric setting. We construct a symmetric distributionally one-way quantum state generator from the average-case hardness of a promise problem in $\textsf{QSZK}$. We construct quantum bit commitment with statistical binding (sum-binding) and computational hiding directly from the average-case hardness of $\textsf{QSZK}$. To show the non-triviality of the constructions above, we devise a quantum oracle $\mathcal{U}$ relative to which such a promise problem in $\textsf{QSZK}$ does not belong to $\textsf{QMA}^{\mathcal{U}}$. Our results present the first non-trivial construction of a one-way quantum state generator from a complexity-class hardness assumption, and give further evidence that one-way quantum state generators plausibly require less than post-quantum secure one-way functions.
Last updated:  2023-12-11
FIN: Practical Signature-Free Asynchronous Common Subset in Constant Time
Sisi Duan, Xin Wang, and Haibin Zhang
Asynchronous common subset (ACS) is a powerful paradigm enabling applications such as Byzantine fault-tolerance (BFT) and multi-party computation (MPC). The most efficient ACS framework in the information-theoretic setting is due to Ben-Or, Kelmer, and Rabin (BKR, 1994). The BKR ACS protocol has been both theoretically and practically impactful. However, the BKR protocol has an $O(\log n)$ running time (where $n$ is the number of replicas) due to the usage of $n$ parallel asynchronous binary agreement (ABA) instances, impacting both performance and scalability. Indeed, for a network of 16 to 64 replicas, the parallel ABA phase occupies about 95% to 97% of the total runtime in BKR. A long-standing open problem is whether we can build an ACS framework with $O(1)$ time while not increasing the message or communication complexity of the BKR protocol. In this paper, we resolve the open problem, presenting the first constant-time ACS protocol with $O(n^3)$ messages in the information-theoretic and signature-free settings. Moreover, as a key ingredient of our new ACS framework and an interesting primitive in its own right, we provide the first information-theoretic multivalued validated Byzantine agreement (MVBA) protocol with $O(1)$ time and $O(n^3)$ messages. Both results can improve---asymptotically and concretely---various applications using ACS and MVBA in the information-theoretic, quantum-safe, or signature-free settings. As an example, we implement FIN, a BFT protocol instantiated using our framework. Via a 121-server deployment on Amazon EC2, we show FIN is significantly more efficient than PACE (CCS 2022), the state-of-the-art asynchronous BFT protocol of the same type. In particular, FIN reduces the overhead of the ABA phase to as low as 1.23% of the total runtime, and FIN achieves up to 3.41x the throughput of PACE. We also show that FIN outperforms other BFT protocols with the standard liveness property such as Dumbo and Speeding Dumbo.
Last updated:  2023-12-10
Allowing Blockchain Loans with Low Collateral
Tom Azoulay, Uri Carl, and Ori Rottenstreich
Collateral is an item of value serving as security for the repayment of a loan. In blockchain-based loans, cryptocurrencies serve as the collateral. The high volatility of cryptocurrencies implies a serious barrier to entry: common practice requires collateral worth multiple times the value of the loan. As assets serving as collateral are locked, this requirement prevents many candidates from obtaining loans. In this paper, we aim to make loans more accessible by offering loans with lower collateral, while keeping the risk for lenders bounded. We propose a credit score based on data recovered from the blockchain to predict how likely a potential borrower is to repay a loan. Our protocol does not risk the initial amount granted by liquidity providers, but only risks part of the interest yield gained from the borrower by the protocol in the past.
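As a back-of-the-envelope illustration of the risk bound stated above (our own toy model, not the paper's scoring or protocol), one can check whether the expected shortfall of an under-collateralized loan, weighted by a repayment probability derived from a credit score, stays within a buffer of previously earned interest:

```python
def lender_expected_shortfall(loan, collateral, p_repay):
    """Expected uncovered loss: the borrower defaults with prob. 1 - p_repay
    and the lender keeps the (smaller) collateral."""
    return (1 - p_repay) * max(loan - collateral, 0.0)

# Illustrative numbers (not from the paper): a 100-unit loan, 60% collateral,
# and a buffer of 5 units of interest previously earned by the protocol.
loan, collateral, interest_buffer = 100.0, 60.0, 5.0
for p in (0.90, 0.95, 0.99):
    loss = lender_expected_shortfall(loan, collateral, p)
    print(f"p_repay={p:.2f}: expected shortfall {loss:.2f} "
          f"({'covered' if loss <= interest_buffer else 'not covered'} by buffer)")
```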
Last updated:  2023-12-10
An Empirical Study of Cross-chain Arbitrage in Decentralized Exchanges
Ori Mazor and Ori Rottenstreich
Blockchain interoperability refers to the ability of blockchains to share information with each other. Decentralized Exchanges (DEXs) are peer-to-peer marketplaces where traders can exchange cryptocurrencies. Several studies have focused on arbitrage analysis within a single blockchain, typically Ethereum. Recently, we have seen a growing interest in cross-chain technologies to create a more interconnected blockchain network. We present a framework to study cross-chain arbitrage in DEXs and use it for an empirical study of cross-chain arbitrages between two popular DEXs, PancakeSwap and QuickSwap, within a time frame of a month. While PancakeSwap is implemented on a blockchain named BNB Chain, QuickSwap is implemented on a different blockchain named Polygon. We examine the number of arbitrages, their revenue, and their duration. This work lays the basis for understanding cross-chain arbitrage and its potential impact on blockchain technology.
Last updated:  2023-12-10
Selective Delegation of Attributes in Mercurial Signature Credentials
Colin Putman and Keith M. Martin
Anonymous credential schemes enable service providers to verify information that a credential holder willingly discloses, without needing any further personal data to corroborate that information, and without allowing the user to be tracked from one interaction to the next. Mercurial signatures are a novel class of anonymous credentials which show good promise as a simple and efficient construction without heavy reliance on zero-knowledge proofs. However, they still require significant development in order to achieve the functionality that most existing anonymous credential schemes provide. Encoding multiple attributes of the credential holder in such a way that they can be disclosed selectively with each use of the credential is often seen as a vital feature of anonymous credentials, and is one that mercurial signatures have not yet implemented. In this paper, we show a simple way to encode attributes in a mercurial signature credential and to regulate which attributes a credential holder can issue when delegating their credential to another user. We also extend the security model associated with mercurial signatures to account for the inclusion of attributes, and prove the security of our extension with respect to the original mercurial signature construction.
Last updated:  2023-12-09
Ring-LWE Hardness Based on Non-invertible Ideals
Charanjit S. Jutla and Chengyu Lin
We extend the known pseudorandomness of Ring-LWE to be based on lattices that do not correspond to any ideal of any order in the underlying number field. In earlier works of Lyubashevsky et al (EUROCRYPT 2010) and Peikert et al (STOC 2017), the hardness of RLWE was based on ideal lattices of ring of integers of number fields, which are known to be Dedekind domains. While these works extended Regev's (STOC 2005) quantum polynomial-time reduction for LWE, thus allowing more efficient and more structured cryptosystems, the additional algebraic structure of ideals of Dedekind domains leaves open the possibility that such ideal lattices are not as hard as general lattices. In this work we show that hardness of $q$-Ring-LWE can be based on worst-case hardness of ideal lattices in arbitrary orders $O$, as long as the order $O$ satisfies the property that $\frac{1}{m}\cdot O$ contains the ring of integers, for some $m$ co-prime to $q$. The reduction requires that the noise be a factor $m$ more than the original Ring-LWE reduction. We also show that for the power-of-two cyclotomic number fields, there exist orders with $m=4$ such that non-trivial ideals of the order, which are not contained in the conductor, are non-invertible. Since the conductor itself is non-invertible, this gives a non-trivial multiplicative set that lies outside the ideal class group. Another reduction shows that hardness of $q$-Ring-LWE can be based on worst-case hardness of lattices that correspond to sum of ideal-lattices in arbitrary and different orders in the number field, as long as the (set of) orders $\{O_i\}$ satisfy the property that $\frac{1}{m}\cdot O_i$ contains the ring of integers, for some $m$ co-prime to $q$. We also show that for the power-of-two cyclotomic number fields, there exist orders $O_1, O_2$ with $m=8$ such that there are ideals $I_1, I_2$ of $O_1, O_2$ resp. with $I_1+ I_2$ not an ideal of any order in the number field.
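As an illustration of the hypothesis on orders used above (our example; the specific orders analyzed in the paper may differ), one family satisfying $\frac{1}{m}\cdot O \supseteq O_K$ is $O = \mathbb{Z} + m\,O_K$:

```latex
% Illustrative example (not necessarily the orders studied in the paper):
% for a number field K with ring of integers O_K and an integer m >= 1
% coprime to q, set O = Z + m O_K. Then
\[
  \tfrac{1}{m}\,\mathcal{O}
  \;=\; \tfrac{1}{m}\,\mathbb{Z} + \mathcal{O}_K
  \;\supseteq\; \mathcal{O}_K .
\]
% O is indeed an order: it contains 1, is a full-rank Z-lattice, and is
% closed under multiplication, since (m a)(m b) = m (m a b) \in m O_K
% for all a, b in O_K.
```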
Last updated:  2023-12-09
The Patching Landscape of Elisabeth-4 and the Mixed Filter Permutator Paradigm
Clément Hoffmann, Pierrick Méaux, and François-Xavier Standaert
Filter permutators are a family of stream cipher designs aimed at hybrid homomorphic encryption. While originally operating on bits, they were generalized to groups at Asiacrypt 2022 and instantiated for evaluation with the TFHE scheme, which favors a filter based on (negacyclic) Look-Up Tables (LUTs). A recent work of Gilbert et al., to appear at Asiacrypt 2023, exhibited (algebraic) weaknesses in the Elisabeth-4 instance, exploiting the combination of the 4-bit negacyclic LUTs it uses as filter. In this article, we explore the landscape of patches that can be used to restore the security of such designs while maintaining their good properties for hybrid homomorphic encryption. Starting with minimal changes, we observe that just updating the filter function (still with small negacyclic LUTs) is conceptually feasible, and propose the resulting Elisabeth-b4 design with three levels of NLUTs. We then show that a group permutator combining two different functions in the filter can simplify the analysis and improve performance. We specify the Gabriel instance to illustrate this claim. We finally propose to modify the group filter permutator paradigm into a mixed filter permutator, which considers the permutation of the key with elements in a group and a filter outputting elements in a different group. We specify the Margrethe instance as a first example of a mixed filter permutator, with key elements in $\mathbb{F}_2$ and output in $\mathbb{Z}_{16}$, which we believe is well-suited for recent fully homomorphic encryption schemes that can efficiently evaluate larger (not negacyclic) LUTs.
Last updated:  2023-12-09
Hardness of Range Avoidance and Remote Point for Restricted Circuits via Cryptography
Yilei Chen and Jiatu Li
A recent line of research has introduced a systematic approach to explore the complexity of explicit construction problems through the use of meta problems, namely, the range avoidance problem (abbrev. $\textsf{Avoid}$) and the remote point problem (abbrev. $\textsf{RPP}$). The upper and lower bounds for these meta problems provide a unified perspective on the complexity of specific explicit construction problems that were previously studied independently. An interesting question largely unaddressed by previous works is whether $\textsf{Avoid}$ and $\textsf{RPP}$ are hard for simple circuits such as low-depth circuits. In this paper, we demonstrate, under plausible cryptographic assumptions, that both the range avoidance problem and the remote point problem cannot be efficiently solved by nondeterministic search algorithms, even when the input circuits are as simple as constant-depth circuits. This extends a hardness result established by Ilango, Li, and Williams (STOC '23) against deterministic algorithms employing witness encryption for $\textsf{NP}$, where the inputs to $\textsf{Avoid}$ are general Boolean circuits. Our primary technical contribution is a novel construction of witness encryption, inspired by public-key encryption, for a certain promise language in $\textsf{NP}$ that is unlikely to be $\textsf{NP}$-complete. We introduce a generic approach to transform a public-key encryption scheme with particular properties into a witness encryption scheme for a promise language related to the initial public-key encryption scheme. Based on this transformation and variants of standard lattice-based or coding-based PKE schemes, we obtain, under plausible assumptions, a provably secure witness encryption scheme for some promise language in $\textsf{NP}\setminus \textsf{coNP}_{/\textsf{poly}}$. Additionally, we show that our constructions of witness encryption are plausibly secure against nondeterministic adversaries under a generalized notion of security in the spirit of Rudich's super-bits (RANDOM '97), which is crucial for demonstrating the hardness of $\textsf{Avoid}$ and $\textsf{RPP}$ against nondeterministic algorithms.
Last updated:  2023-12-09
Zero-Knowledge Functional Elementary Databases
Xinxuan Zhang and Yi Deng
Zero-knowledge elementary databases (ZK-EDBs) enable a prover to commit to a database ${D}$ of key-value $(x,v)$ pairs and later provide a convincing answer to the query ``send me the value $D(x)$ associated with $x$'' without revealing any extra knowledge (including the size of ${D}$). After its introduction, several works extended the notion to allow more expressive queries, but the expressiveness achieved so far is still limited: only relatively simple queries -- range queries over the keys and values -- can be handled by known constructions. In this paper we introduce a new notion called zero-knowledge functional elementary databases (ZK-FEDBs), which allows the most general functional queries. Roughly speaking, for any Boolean circuit $f$, ZK-FEDBs allow the ZK-EDB prover to provide convincing answers to queries of the form ``send me all records ${(x,v)}$ in ${{D}}$ satisfying $f(x,v)=1$,'' without revealing any extra knowledge (including the size of ${D}$). We present a construction of ZK-FEDBs in the random oracle model and generic group model, whose proof size is only linear in the length of the record and the size of the query circuit, and is independent of the size of the input database $D$. Our technical contribution is twofold. First, we introduce a new variant of zero-knowledge sets (ZKS) which supports combined operations on sets, and present a concrete construction based on groups of unknown order. Second, we develop a transformation that transforms a query over a Boolean circuit into a query of combined operations on related sets, which may be of independent interest.
Last updated:  2023-12-09
Revisiting BBS Signatures
Stefano Tessaro and Chenzhi Zhu
BBS signatures were implicitly proposed by Boneh, Boyen, and Shacham (CRYPTO ’04) as part of their group signature scheme, and explicitly cast as stand-alone signatures by Camenisch and Lysyanskaya (CRYPTO ’04). A provably secure version, called BBS+, was then devised by Au, Susilo, and Mu (SCN ’06), and is currently the object of a standardization effort which has led to a recent RFC draft. BBS+ signatures are suitable for use within anonymous credential and DAA systems, as their algebraic structure enables efficient proofs of knowledge of message-signature pairs that support partial disclosure. BBS+ signatures consist of one group element and two scalars. As our first contribution, we prove that a variant of BBS+ producing shorter signatures, consisting only of one group element and one scalar, is also secure. The resulting scheme is essentially the original BBS proposal, which was lacking a proof of security. Here we show it satisfies, under the q-SDH assumption, the same provable security guarantees as BBS+. We also provide a complementary tight analysis in the algebraic group model, which heuristically justifies instantiations with potentially shorter signatures. Furthermore, we devise simplified and shorter zero-knowledge proofs of knowledge of a BBS message-signature pair that support partial disclosure of the message. Over the BLS12-381 curve, our proofs are 896 bits shorter than the prior proposal by Camenisch, Drijvers, and Lehmann (TRUST ’16), which is also adopted by the RFC draft. Finally, we show that BBS satisfies one-more unforgeability in the algebraic group model in a scenario, arising in the context of credentials, where the signer can be asked to sign arbitrary group elements, meant to be commitments, without seeing their openings.
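For orientation, the shape of a BBS signature on messages $m_1,\ldots,m_L$ is standardly written as follows (a sketch of the usual formulation, matching the ``one group element and one scalar'' count above; see the paper for the precise scheme, assumptions, and proofs):

```latex
% Secret key x, messages m_1, ..., m_L, random scalar e, pairing \hat{e}.
\[
  A = \Bigl(g_1 \prod_{i=1}^{L} h_i^{m_i}\Bigr)^{\frac{1}{x+e}},
  \qquad \sigma = (A, e),
\]
\[
  \text{Verify:}\quad
  \hat{e}\bigl(A,\; g_2^{x}\, g_2^{e}\bigr)
  \;=\; \hat{e}\Bigl(g_1 \prod_{i=1}^{L} h_i^{m_i},\; g_2\Bigr)
  \qquad\text{(BBS+ additionally carries a scalar } s \text{ and a base } h_0^{s}\text{).}
\]
```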
Last updated:  2023-12-08
Asymptotics of hybrid primal lattice attacks
Daniel J. Bernstein
The literature gives the impression that (1) existing heuristics accurately predict how effective lattice attacks are, (2) non-ternary lattice systems are not vulnerable to hybrid multi-decoding primal attacks, and (3) the asymptotic exponents of attacks against non-ternary systems have stabilized. This paper shows that (1) contradicts (2) and that (1) contradicts (3): the existing heuristics imply that hybrid primal key-recovery attacks are exponentially faster than standard non-hybrid primal key-recovery attacks against the LPR PKE with any constant error width. This is the first report since 2015 of an exponential speedup in heuristic non-quantum primal attacks against non-ternary LPR. Quantitatively, for dimension $n$, modulus $n^{Q_0+o(1)}$, and error width $w$, a surprisingly simple hybrid attack reduces heuristic costs from $2^{(\rho+o(1))n}$ to $2^{(\rho-\rho H_0+o(1))n}$, where $z_0=2Q_0/(Q_0+1/2)^2$, $\rho=z_0 \log_4(3/2)$, and $H_0=1/(1+(\lg w)/(0.057981\,z_0))$. This raises the questions of (1) what heuristic exponent is achieved by more sophisticated hybrid attacks and (2) what impact hybrid attacks have upon concrete cryptosystems whose security analyses have ignored hybrid attacks, such as Kyber-512.
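Plugging numbers into the formulas above gives a quick sense of the claimed exponent drop. The denominator of $H_0$ is read here as $0.057981\,z_0$ (our parenthesization of the inline formula), and the sample parameters are ours, not the paper's:

```python
from math import log, log2

def hybrid_exponents(Q0, w):
    """Evaluate the abstract's asymptotic-exponent formulas.
    Reading: H0 = 1 / (1 + (lg w) / (0.057981 * z0))."""
    z0 = 2 * Q0 / (Q0 + 0.5) ** 2
    rho = z0 * (log(1.5) / log(4))       # log base 4 of 3/2
    H0 = 1 / (1 + log2(w) / (0.057981 * z0))
    return rho, rho * (1 - H0)           # non-hybrid vs hybrid exponent

for Q0, w in [(1.0, 2), (1.0, 4), (2.0, 2)]:
    rho, rho_h = hybrid_exponents(Q0, w)
    print(f"Q0={Q0}, w={w}: 2^(({rho:.4f}+o(1))n) -> 2^(({rho_h:.4f}+o(1))n)")
```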
Last updated:  2023-12-08
zkDL: Efficient Zero-Knowledge Proofs of Deep Learning Training
Haochen Sun, Tonghe Bai, Jason Li, and Hongyang Zhang
The recent advancements in deep learning have brought about significant changes in various aspects of people's lives. Meanwhile, these rapid developments have raised concerns about the legitimacy of the training process of deep neural networks. To protect the intellectual property of AI developers, directly examining the training process by accessing the model parameters and training data is often prohibited for verifiers. In response to this challenge, we present zero-knowledge deep learning (zkDL), an efficient zero-knowledge proof for deep learning training. To address the long-standing challenge of verifiable computation of non-linearities in deep learning training, we introduce zkReLU, a specialized proof for the ReLU activation and its backpropagation. zkReLU turns the disadvantage of non-arithmetic relations into an advantage, leading to the creation of FAC4DNN, our specialized arithmetic circuit design for modelling neural networks. This design aggregates the proofs over different layers and training steps, without being constrained by their sequential order in the training process. With our new CUDA implementation that achieves full compatibility with the tensor structures and the aggregated proof design, zkDL enables the generation of complete and sound proofs in less than a second per batch update for an 8-layer neural network with 10M parameters and a batch size of 64, while provably ensuring the privacy of data and model parameters. To the best of our knowledge, no existing work on zero-knowledge proofs of deep learning training scales to networks of this size.
Last updated:  2023-12-08
QCB is Blindly Unforgeable
Jannis Leuther and Stefan Lucks
QCB is a proposal for a post-quantum secure, rate-one authenticated encryption with associated data scheme (AEAD) based on classical OCB3 and $\Theta$CB, which are vulnerable to a quantum adversary in the Q2 setting. The authors of QCB prove integrity under plus-one unforgeability, whereas a proof of the stronger definition of blind unforgeability had been left as an open problem. After a short overview of QCB and the current state of security definitions for authentication, this work proves blind unforgeability of QCB. Finally, the strategy of using tweakable block ciphers in authenticated encryption is generalised to a generic blindly unforgeable AEAD model.
Last updated:  2023-12-08
Intermediate Certificate Suppression in Post-Quantum TLS: An Approximate Membership Querying Approach
Dimitrios Sikeridis, Sean Huntley, David Ott, and Michael Devetsikiotis
Quantum computing advances threaten the security of today's public key infrastructure and have led to the pending standardization of alternative, quantum-resistant key encapsulation and digital signature cryptography schemes. Unfortunately, authentication algorithms based on the new post-quantum (PQ) cryptography create significant performance bottlenecks for TLS due to larger certificate chains which introduce additional packets and round-trips. The TLS handshake slowdown will be unacceptable to many applications, and detrimental to the broader adoption of quantum-safe cryptography standards. In this paper, we propose a novel framework for Intermediate Certificate Authority (ICA) certificate suppression in TLS that reduces the authentication message size and prevents excessive round-trip delays. Our approach utilizes an approximate membership query (AMQ) data structure (probabilistic filter) to advertise known ICA certs to remote TLS endpoints, so that unnecessary ICA certificates are omitted from the TLS handshake exchange. We showcase the extent of the PQ authentication overhead challenge in TLS, and evaluate the feasibility of AMQ filters for ICA suppression in terms of space and computational overhead. Finally, we experimentally evaluate the potential gains from our approach and showcase a $70\%$ reduction in exchanged ICA cert data, which translates to 15-50 MB of savings in PQ TLS for certain Web-based application scenarios.
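As a concrete toy instance of the AMQ idea described above -- a Bloom filter is one such probabilistic filter, though the paper evaluates the design space more broadly -- the sketch below shows a client advertising its ICA cache so a server can omit matching certificates from the handshake:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: false positives possible, no false negatives,
    so a server may at worst suppress a cert the client actually lacks."""
    def __init__(self, n_bits=8192, n_hashes=7):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)
    def _positions(self, item: bytes):
        for i in range(self.n_hashes):
            h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits
    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)
    def __contains__(self, item: bytes):
        return all(self.bits[pos // 8] >> (pos % 8) & 1
                   for pos in self._positions(item))

# The client advertises its ICA cache; the server omits certs the filter matches.
cache = BloomFilter()
for ca in [b"ICA: example intermediate CA 1", b"ICA: example intermediate CA 2"]:
    cache.add(ca)
suppress = b"ICA: example intermediate CA 1" in cache   # True: omit from handshake
```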
Last updated:  2023-12-08
In-depth Correlation Power Analysis Attacks on a Hardware Implementation of CRYSTALS-Dilithium
Huaxin Wang, Yiwen Gao, Yuejun Liu, Qian Zhang, and Yongbin Zhou
During the standardisation process of post-quantum cryptography, NIST encourages research on side-channel analysis for candidate schemes. As the recommended lattice signature scheme, CRYSTALS-Dilithium, when implemented on hardware, has only been subjected to the side-channel attack presented by Steffen et al. in IACR ePrint 2022, which is not complete and requires an excessive number of traces. Therefore, we investigate the leakage of an FPGA (Kintex7) implementation of CRYSTALS-Dilithium using the CPA method, recovering partial private-key coefficients with a minimum of 70,000 traces. As far as we know, this is the first work that applies power leakage to side-channel attacks on FPGA implementations of CRYSTALS-Dilithium. Furthermore, we optimise the attack by extracting Points of Interest using known information due to parallelism (named CPA-PoI) and by iteratively utilising parallel leakages (named CPA-ITR). We experimentally demonstrate that, when recovering the same number of key coefficients, CPA-PoI and CPA-ITR reduce the number of traces used by up to 16.67 percent and 25 percent, respectively, compared to the CPA method. When attacking with the same number of traces, the CPA-PoI method and the CPA-ITR method increase the number of recovered key coefficients by up to 55.17 percent and 93.10 percent, respectively, compared to the CPA method. Our experiments confirm that the FPGA implementation of CRYSTALS-Dilithium is also very vulnerable to side-channel analysis.
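To illustrate the CPA methodology in its generic, textbook form (simulated Hamming-weight leakage of a toy XOR target; not the paper's Dilithium-specific attack or its PoI/ITR optimisations), the following sketch recovers a key byte by correlating a leakage model against the traces under every key hypothesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def hw(x):
    """Hamming weight of each byte in an integer array."""
    return np.vectorize(lambda v: bin(v).count("1"))(x)

# Simulated target: traces leak HW(input XOR key_byte) plus Gaussian noise.
true_key, n_traces = 0x3A, 2000
inputs = rng.integers(0, 256, n_traces)
traces = hw(inputs ^ true_key) + rng.normal(0.0, 1.0, n_traces)

def cpa(inputs, traces):
    """Pearson correlation between the HW model and the traces, per guess.
    (Signed correlation: the leakage coefficient is assumed positive.)"""
    scores = np.empty(256)
    for guess in range(256):
        model = hw(inputs ^ guess)
        scores[guess] = np.corrcoef(model, traces)[0, 1]
    return scores

# The correct hypothesis maximises the correlation.
assert int(np.argmax(cpa(inputs, traces))) == true_key
```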