Papers updated in the last 183 days (Page 2 of 1504 results)

Last updated:  2024-03-15
Threshold Structure-Preserving Signatures: Strong and Adaptive Security under Standard Assumptions
Aikaterini Mitrokotsa, Sayantan Mukherjee, Mahdi Sedaghat, Daniel Slamanig, and Jenit Tomy
Structure-preserving signatures (SPS) have emerged as an important cryptographic building block, as their compatibility with the Groth-Sahai (GS) NIZK framework makes it possible to construct protocols under standard assumptions with reasonable efficiency. In recent years there has been significant interest in the design of threshold signature schemes, yet only very recently did Crites et al. (ASIACRYPT 2023) introduce threshold SPS (TSPS) along with a fully non-interactive construction. While this is an important step, their work comes with several limitations. On the construction side, they require random oracles and interactive complexity assumptions, and they are restricted to so-called indexed Diffie-Hellman message spaces; the latter limits the use of their construction as a drop-in replacement for SPS. When it comes to security, they only support static corruptions and do not allow partial signature queries for the forgery. In this paper, we ask whether it is possible to construct TSPS without such restrictions. We start from an SPS by Kiltz, Pan and Wee (CRYPTO 2015) which has an interesting structure, though thresholdizing it requires some modifications. Interestingly, we can prove it secure in the strongest model (TS-UF-1) for fully non-interactive threshold signatures (Bellare et al., CRYPTO 2022), and even under fully adaptive corruptions. Surprisingly, we can show the latter under a standard assumption without requiring any idealized model, whereas all known constructions of efficient threshold signatures in the discrete logarithm setting require interactive assumptions and idealized models. Concretely, our scheme in type III bilinear groups under the SXDH assumption has signatures consisting of 7 group elements. Compared to the TSPS from Crites et al. (2 group elements), this comes at a cost in efficiency. However, our scheme is secure under standard assumptions, achieves strong and adaptive security guarantees, and supports general message spaces, i.e., it represents a drop-in replacement for many SPS applications. Given these features, the increase in signature size seems acceptable even for practical applications.
Last updated:  2024-03-15
A trust-minimized e-cash for cryptocurrencies
Mario Yaksetig
We introduce a private cryptocurrency design based on the original e-cash protocol. Our proposal allows for private payments on existing blockchain systems. In our design, the issuance of the private cash is transparent and is associated with a blockchain transfer to provide stronger security.
Last updated:  2024-03-14
The cool and the cruel: separating hard parts of LWE secrets
Niklas Nolte, Mohamed Malhou, Emily Wenger, Samuel Stevens, Cathy Yuanchen Li, Francois Charton, and Kristin Lauter
Sparse binary LWE secrets are under consideration for standardization for Homomorphic Encryption and its applications to private computation. Known attacks on sparse binary LWE secrets include the sparse dual attack and the hybrid sparse dual meet-in-the-middle attack, which requires significant memory. In this paper, we provide a new statistical attack with low memory requirements. The attack relies on some initial parallelized lattice reduction. The key observation is that, after lattice reduction is applied to the rows of a q-ary-like embedded random matrix A, the entries with high variance are concentrated in the early columns of the extracted matrix. This allows us to separate out the “hard part” of the LWE secret: we can first solve the sub-problem of finding the “cruel” bits of the secret in the early columns, and then find the remaining “cool” bits in linear time. We use statistical techniques to distinguish distributions and thereby identify both the cruel and the cool bits of the secret. We provide concrete attack timings for recovering secrets in dimensions n = 256, 512, and 768. For the lattice reduction stage, we leverage recent improvements in lattice reduction (flatter) applied in parallel. We also apply our new attack in the RLWE setting for 2-power cyclotomic rings, showing that these RLWE instances are much more vulnerable to this attack than LWE.
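To make the cruel/cool split concrete, here is a toy simulation (our illustration under hypothetical parameters, not the authors' attack code): a "reduced" matrix is modeled with large entries in the early columns and small entries elsewhere, and the cruel bits are brute-forced by checking which guess collapses the variance of the centered residual, after which the cool bits are easy to peel off.

```python
# Toy illustration of the "cruel vs. cool" split (not the paper's full
# attack): after lattice reduction, high-variance entries concentrate in the
# early columns, so only those coordinates of the secret are hard to guess.
import itertools
import numpy as np

rng = np.random.default_rng(1)
q, n, m = 3329, 24, 200            # toy parameters, far below n = 256/512/768
n_cruel, h = 6, 2                  # cruel window size; secret weight inside it

# Simulate a reduced matrix: early columns keep q-sized entries ("cruel"),
# later columns have been shrunk by reduction ("cool").
A = np.concatenate(
    [rng.integers(0, q, (m, n_cruel)), rng.integers(-20, 21, (m, n - n_cruel))],
    axis=1,
)
s = np.zeros(n, dtype=int)
s[rng.choice(n_cruel, h, replace=False)] = 1                 # cruel bits
s[n_cruel + rng.choice(n - n_cruel, 3, replace=False)] = 1   # cool bits
e = rng.integers(-3, 4, m)
b = (A @ s + e) % q

def centered_var(x):
    """Variance after reducing x into the centered interval (-q/2, q/2]."""
    c = (x + q // 2) % q - q // 2
    return c.var()

# Brute-force the cruel window only: the right guess cancels the large
# columns, so the centered residual variance collapses; wrong guesses stay
# close to the uniform variance q^2 / 12.
best = min(
    itertools.combinations(range(n_cruel), h),
    key=lambda idx: centered_var(b - A[:, list(idx)].sum(axis=1)),
)
print("recovered cruel support:", sorted(best),
      "true:", sorted(np.flatnonzero(s[:n_cruel]).tolist()))
```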
Last updated:  2024-03-14
Lin2-Xor Lemma: an OR-proof that leads to the membership proof and signature
Anton A. Sokolov
In this paper we introduce a novel two-round public-coin OR-proof protocol that extends naturally to a log-size membership proof and signature in a prime-order group. In the lemma called Lin2-Xor we prove that our OR-proof is perfectly complete and has witness-extended emulation under the discrete logarithm assumption. We derive from it a log-size one-out-of-many proof, which retains the perfect completeness and witness-extended emulation. Both our OR- and membership proofs easily acquire the special honest-verifier zero-knowledge property under the decisional Diffie-Hellman assumption. We sketch out a setup-free, pairings-free, log-size linkable ring signature with a strong security model on top of our membership proof. Many recently proposed discrete-log setup-free pairings-free log-size ring signatures are based on the ideas of the commitment-to-zero proving system by Groth and Kohlweiss or on the Bulletproofs inner-product compression method by Bünz et al. Our Lin2-Xor lemma provides an alternative technique which, using a general reduction similar to Bulletproofs, leads directly to a log-size linkable ring signature under the same prerequisites.
Last updated:  2024-03-14
On the Adaptive Security of the Threshold BLS Signature Scheme
Renas Bacho and Julian Loss
Threshold signatures are a crucial tool for many distributed protocols. As shown by Cachin, Kursawe, and Shoup (PODC '00), schemes with unique signatures are of particular importance, as they make it possible to implement distributed coin flipping very efficiently and without any timing assumptions. This makes them an ideal building block for (inherently randomized) asynchronous consensus protocols. The threshold BLS signature of Boldyreva (PKC '03) is both unique and very compact, but unfortunately lacks a security proof against adaptive adversaries. Thus, current consensus protocols either rely on less efficient alternatives or are not adaptively secure. In this work, we revisit the security of the threshold BLS signature by showing the following results, assuming $t$ adaptive corruptions: - We give a modular security proof that follows a two-step approach: 1) We introduce a new security notion for distributed key generation protocols (DKG). We show that it is satisfied by several protocols that previously only had a static security proof. 2) Assuming any DKG protocol with this property, we then prove unforgeability of the threshold BLS scheme. Our reductions are tight and can be used to substantiate real-world parameter choices. - To justify our use of strong assumptions such as the algebraic group model (AGM) and the hardness of one-more discrete logarithm (OMDL), we prove two impossibility results: 1) Without the AGM, we rule out a natural class of tight security reductions from $(t+1)$-OMDL. 2) Even in the AGM, a strong interactive assumption is required in order to prove the scheme secure.
Last updated:  2024-03-14
Fastcrypto: Pioneering Cryptography Via Continuous Benchmarking
Kostas Kryptos Chalkias, Jonas Lindstrøm, Deepak Maram, Ben Riva, Arnab Roy, Alberto Sonnino, and Joy Wang
In the rapidly evolving fields of encryption and blockchain technologies, the efficiency and security of cryptographic schemes significantly impact performance. This paper introduces a comprehensive framework for continuous benchmarking in one of the most popular cryptographic Rust libraries, fastcrypto. What makes our analysis unique is the realization that automated benchmarking is not just a performance monitoring and optimization tool; it can be used for cryptanalysis and innovation discovery as well. Surprisingly, benchmarks can uncover spectacular security flaws and inconsistencies in various cryptographic implementations and standards, while at the same time identifying unique opportunities for innovation not previously known to science, such as providing a) hints for novel algorithms, b) indications for mix-and-match library functions that result in world-record speeds, and c) evidence of biased or untested real-world algorithm comparisons in the literature. Our approach transcends traditional benchmarking methods by identifying inconsistencies in multi-threaded code, which previously resulted in unfair comparisons. We demonstrate the effectiveness of our methodology in identifying the fastest algorithms for specific cryptographic operations like signing, while revealing hidden performance characteristics and security flaws. The process of continuous benchmarking allowed fastcrypto to break many crypto-operation speed records in the Rust language ecosystem. A notable discovery in our research is the identification of vulnerabilities and unfair speed claims due to missing padding checks in high-performance Base64 encoding libraries. We also uncover insights into algorithmic implementations such as multi-scalar elliptic curve multiplications, which exhibit different performance gains when applied in different schemes and libraries; this was not evident in conventional benchmarking practices. Further, our analysis highlights bottlenecks in cryptographic algorithms where pre-computed tables can be strategically applied, accounting for L1 and L2 CPU cache limitations. Our benchmarking framework also reveals that certain algorithmic implementations incur additional overheads due to serialization processes, necessitating a refined 'apples to apples' comparison approach. We identified unique performance patterns in some schemes, where efficiency scales with input size, aiding blockchain technologies in optimal parameter selection and data compression. Crucially, continuous benchmarking serves as a tool for ongoing audit and security assurance. Variations in performance can signal potential security issues during upgrades, such as kleptography, hardware manipulation or supply-chain attacks; this was evidenced by critical private-key leakage vulnerabilities we found in one of the most popular EdDSA Rust libraries. By providing a dynamic and thorough benchmarking approach, our framework empowers stakeholders to make informed decisions, enhance security measures, and optimize cryptographic operations in an ever-changing digital landscape.
Last updated:  2024-03-14
Secure Quantum Computation with Classical Communication
James Bartusek
The study of secure multi-party computation (MPC) has thus far been limited to the following two settings: every party is fully classical, or every party has quantum capabilities. This paper studies a notion of MPC that allows some classical and some quantum parties to securely compute a quantum functionality over their joint private inputs. In particular, we construct (constant-round, composable) protocols for blind and verifiable classical delegation of quantum computation, and give applications to the secure computation of quantum functionalities using only classical communication. Assuming QLWE (the quantum hardness of learning with errors), we obtain the following (maliciously-secure) protocols for computing any pseudo-deterministic quantum functionality. • A six-round protocol between one quantum server and multiple classical clients in the CRS (common random string) model. • A three-round protocol between one quantum server and multiple classical clients in the PKI (public key infrastructure) + QRO (quantum random oracle) model. • A two-message protocol between a quantum sender and a classical receiver (a quantum non-interactive secure computation protocol), in the QRO model. To enable the composability of our classical verification of quantum computation protocols, we require the notion of malicious blindness, which stipulates that the prover does not learn anything about the verifier’s delegated computation, even if it is able to observe whether or not the verifier accepted the proof. To construct a protocol with malicious blindness, we use a classical verification protocol for sampBQP computation (Chung et al., arXiv 2020), which in general has inverse polynomial soundness error, to prove honest evaluation of QFHE (quantum fully-homomorphic encryption) ciphertexts with negligible soundness error. Obtaining a constant-round protocol requires a strong parallel repetition theorem for classical verification of quantum computation, which we show following the “nearly orthogonal projector” proof strategy (Alagic et al., TCC 2020).
Last updated:  2024-03-14
Cryptanalysis of rank-2 module-LIP in Totally Real Number Fields
Guilhem Mureau, Alice Pellet-Mary, Heorhii Pliatsok, and Alexandre Wallet
We formally define the Lattice Isomorphism Problem for module lattices (module-LIP) in a number field $K$. This is a generalization of the problem defined by Ducas, Postlethwaite, Pulles, and van Woerden (Asiacrypt 2022), taking into account the arithmetic and algebraic specificity of module lattices from their representation using pseudo-bases. We also provide the corresponding set of algorithmic and theoretical tools for the future study of this problem in a module setting. Our main contribution is an algorithm solving module-LIP for modules of rank $2$ in $K^2$, when $K$ is a totally real number field. Our algorithm exploits the connection between this problem, relative norm equations and the decomposition of algebraic integers as sums of two squares. For a large class of modules (including $\mathcal{O}_K^2$), and a large class of totally real number fields (including the maximal real subfield of cyclotomic fields), it runs in classical polynomial time in the degree of the field and the residue at 1 of the Dedekind zeta function of the field (under reasonable number-theoretic assumptions). We provide proof-of-concept code running over the maximal real subfield of some cyclotomic fields.
Last updated:  2024-03-14
Impossibility of Efficient Information-Theoretic Fuzzy Extraction
Benjamin Fuller
Fuzzy extractors convert noisy signals from the physical world into reliable cryptographic keys. Fuzzy min-entropy is an important measure of the ability of a fuzzy extractor to distill keys from a distribution: in particular, it bounds the length of the key that can be derived (Fuller, Reyzin, and Smith, IEEE Transactions on Information Theory 2020). In general, fuzzy min-entropy that is superlogarithmic in the security parameter is required for a noisy distribution to be suitable for key derivation. There is a wide gap between what is possible with respect to computational and information-theoretic adversaries. Under the assumption of general-purpose obfuscation, keys can be securely derived from all distributions with superlogarithmic entropy. Against information-theoretic adversaries, however, it is impossible to build a single fuzzy extractor that works for all distributions (Fuller, Reyzin, and Smith, IEEE Transactions on Information Theory 2020). A weaker information-theoretic goal is to build a fuzzy extractor for each particular probability distribution. This is the approach taken by Woodage et al. (Crypto 2017). Prior approaches use the full description of the probability mass function and are inefficient. We show this is inherent: for a quarter of distributions with fuzzy min-entropy and $2^k$ points there is no secure fuzzy extractor that uses fewer than $2^{\Theta(k)}$ bits of information about the distribution. This result rules out the possibility of efficient, information-theoretic fuzzy extractors for many distributions with fuzzy min-entropy. We show an analogous result with stronger parameters for information-theoretic secure sketches, which are frequently used to construct fuzzy extractors.
Last updated:  2024-03-14
Naysayer proofs
István András Seres, Noemi Glaeser, and Joseph Bonneau
This work introduces the notion of naysayer proofs. We observe that in numerous (zero-knowledge) proof systems, it is significantly more efficient for the verifier to be convinced by a so-called naysayer that a false proof is invalid than it is to check that a genuine proof is valid. We show that every NP language has constant-size and constant-time naysayer proofs. We also show practical constructions for several example proof systems, including FRI polynomial commitments, post-quantum secure digital signatures, and verifiable shuffles. Naysayer proofs enable an interesting new optimistic verification mode potentially suitable for resource-constrained verifiers, such as smart contracts.
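The asymmetry the abstract exploits can be conveyed with a minimal sketch (ours, with all names hypothetical): model a proof as a list of locally checkable constraints; the full verifier checks all of them, while the naysayer merely points at one violated constraint, so naysayer verification is constant-time regardless of proof size.

```python
# Minimal sketch of the naysayer verification mode: a "proof" is modeled as
# a list of locally checkable constraints. Real instantiations (FRI,
# signatures, shuffles) need scheme-specific naysaying; this only conveys
# the cost asymmetry between verifying and naysaying.
from typing import Callable, List, Optional

Constraint = Callable[[], bool]

def verify(proof: List[Constraint]) -> bool:
    return all(check() for check in proof)      # cost grows with proof size

def naysay(proof: List[Constraint]) -> Optional[int]:
    for i, check in enumerate(proof):
        if not check():
            return i                            # naysayer proof: one index
    return None                                 # valid proof: nothing to say

def verify_naysayer(proof: List[Constraint], i: int) -> bool:
    return not proof[i]()                       # constant-time: one local check

# Toy usage: a two-constraint "proof" that 35 = 5 * 7, one copy corrupted.
good = [lambda: 5 * 7 == 35, lambda: 35 % 5 == 0]
bad = [lambda: 5 * 7 == 35, lambda: 35 % 4 == 0]
assert verify(good) and not verify(bad)
i = naysay(bad)
assert i is not None and verify_naysayer(bad, i)
```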
Last updated:  2024-03-14
Secret and Shared Keys Recovery on Hamming Quasi-Cyclic with SASCA
Chloé Baïsse, Antoine Moran, Guillaume Goy, Julien Maillard, Nicolas Aragon, Philippe Gaborit, Maxime Lecomte, and Antoine Loiseau
Soft Analytical Side-Channel Attacks (SASCA) are a powerful family of side-channel attacks (SCA) that make it possible to recover secret values with only a small number of traces. Their effectiveness lies in the Belief Propagation (BP) algorithm, which enables efficient computation of the marginal distributions of intermediate values. Post-quantum schemes such as Kyber and, more recently, Hamming Quasi-Cyclic (HQC) have been targets of SASCA. Previous SASCA on HQC focused on Reed-Solomon (RS) codes and successfully retrieved the shared key with a high success rate at high noise levels using a single trace. In this work, we present new SASCA on HQC in which both the shared key and the secret key are targeted. Unlike previous SASCA, we take a closer look at the Reed-Muller (RM) code. The advantage of this choice is that the RM decoder is applied before the RS decoder, which is what makes it possible to attack both keys. We build a factor graph of the Fast Hadamard Transform (FHT) function from the HQC reference implementation of April 2023. The information recovered from BP allows us to retrieve the shared key with a single trace. Going beyond previous SASCA targeting HQC, we also manage to recover the secret key with two chosen-ciphertext attacks, one of which requires only a single trace and remains successful even at high noise levels.
Last updated:  2024-03-14
Cicada: A framework for private non-interactive on-chain auctions and voting
Noemi Glaeser, István András Seres, Michael Zhu, and Joseph Bonneau
Auction and voting schemes play a crucial role in the Web3 ecosystem. Yet currently deployed implementations either lack privacy or require at least two rounds, hindering usability and security. We introduce Cicada, a general framework for using linearly homomorphic time-lock puzzles (HTLPs) to enable provably secure, non-interactive private auction and voting protocols. We instantiate our framework with an efficient new HTLP construction and novel packing techniques that enable succinct ballot correctness proofs independent of the number of candidates. We demonstrate the practicality of our approach by implementing our protocols for the Ethereum Virtual Machine (EVM).
Last updated:  2024-03-14
Threshold implementations of cryptographic functions between finite Abelian groups
Enrico Piccione
The threshold implementation technique was proposed in 2006 by Nikova et al. as a countermeasure to mitigate cryptographic side-channel attacks on hardware implementations when the effect of glitches is taken into account. This technique is based on Boolean sharing (also called masking) and was developed for securing symmetric ciphers such as AES. In 2023, Piccione et al. proposed a general construction of threshold implementations that is universal for S-boxes that are bijective vectorial Boolean functions (functions from a binary vector space $\mathbb{F}_{2}^n$ into itself). In this paper, we further generalize the construction and propose a general theory of threshold implementations for any type of S-box. We investigate the case of functions (not necessarily bijective) that are defined between two finite Abelian groups, using the definition of threshold implementation given by Dhooghe et al. in 2019 with additive sharing. To show that this generalized notion is as useful as the one for Boolean sharing, we prove that many classical results still hold. An important tool in this theory is the notion of functional degree introduced by Aichinger and Moosbauer in 2021, which generalizes the algebraic degree of a vectorial Boolean function. We show that if $F$ has functional degree (at most) $d$ and the cardinality of the domain is divisible by the cardinality of the codomain, then $F$ admits a threshold implementation $\mathcal{F}$ with $s\geq d+2$ shares in input and $d+2$ shares in output. Moreover, we provide a complete overview of the available tools for studying the functional degree and of how to represent such functions using an integer-valued (IV) polynomial representation. We then apply our theory to the following applications: defining inner-product masking in our setting, providing a threshold implementation of any multiplication map, and computing the functional degree and the IV polynomial representations of the conversion maps between $\mathbb{F}_p^n$ and $\mathbb{Z}_{p^n}$.
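For intuition, the classical first-order three-share threshold implementation of binary multiplication (AND), in the style of Nikova et al., is sketched below; the paper's contribution is to generalize such multiplication gadgets to maps between arbitrary finite Abelian groups, which this Boolean toy does not attempt.

```python
# The classical first-order 3-share threshold implementation of binary
# multiplication (AND), in the style of Nikova et al. The paper generalizes
# such multiplication gadgets to maps between finite Abelian groups; this
# Boolean toy only shows correctness and non-completeness.
import itertools
import random

def share3(x):
    """Split bit x into Boolean shares with x = x1 ^ x2 ^ x3."""
    x1, x2 = random.randint(0, 1), random.randint(0, 1)
    return x1, x2, x1 ^ x2 ^ x

def ti_and(xs, ys):
    x1, x2, x3 = xs
    y1, y2, y3 = ys
    # Non-completeness: output share i never touches input shares i, which
    # is what gives first-order security even in the presence of glitches.
    z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)
    z2 = (x3 & y3) ^ (x1 & y3) ^ (x3 & y1)
    z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)
    return z1, z2, z3

# Correctness: the output shares always recombine to x & y.
for x, y in itertools.product((0, 1), repeat=2):
    for _ in range(10):
        z = ti_and(share3(x), share3(y))
        assert z[0] ^ z[1] ^ z[2] == x & y
```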
Last updated:  2024-03-14
EFFLUX-F2: A High Performance Hardware Security Evaluation Board
Arpan Jati, Naina Gupta, Anupam Chattopadhyay, and Somitra Kumar Sanadhya
Side-channel analysis has become a cornerstone of modern hardware security evaluation for cryptographic accelerators. Recently, these techniques have also been applied in fields such as AI and machine learning to investigate possible threats. Security evaluations rely on standard test setups, including commercial and open-source evaluation boards such as SASEBO/SAKURA and ChipWhisperer. However, with shrinking design footprints and overlapping tasks on the same platforms, the quality of the side-channel information as well as the speed of data capture can significantly influence a security assessment. In this work, we designed EFFLUX-F2, a hardware security evaluation board that improves the quality and speed of side-channel information capture. We also designed a measurement setup to benchmark the signal differences between target boards. Multiple experimental evaluations, including noise analysis, CPA and TVLA, performed on EFFLUX-F2 and competing evaluation boards showcase the significant superiority of our design in all aspects.
Last updated:  2024-03-13
Insecurity of MuSig and BN Multi-Signatures with Delayed Message Selection
Sela Navot
This note reveals a vulnerability of MuSig and BN multi-signatures when used with delayed message selection. Despite the fact that both schemes can be correctly implemented with preprocessing of the first two signing rounds before the message to sign is selected, we show that they are insecure (i.e. not existentially unforgeable against chosen message attacks) when the message selection is deferred to the third signing round and when parallel signing sessions are permitted. The attack, which uses the algorithm by Benhamouda et al. to solve the ROS problem, is practical and runs in polynomial time.
Last updated:  2024-03-13
Re-Randomized FROST
Conrado P. L. Gouvea and Chelsea Komlo
We define a (small) augmentation to the FROST threshold signature scheme that additionally allows for re-randomizable public and secret keys. We build upon the notion of re-randomizable keys in the literature, but tailor this capability to the setting where the signing key is secret-shared among a set of mutually trusted parties. We do not make any changes to the plain FROST protocol, but instead define additional algorithms to allow for randomization of the threshold public key and of participants' individual public and secret key shares. We show the security of this re-randomized extension to FROST with respect to the algebraic one-more discrete logarithm (AOMDL) problem in the random oracle model, the same security assumptions underlying plain FROST.
Last updated:  2024-03-13
More Efficient Post-Quantum Electronic Voting from NTRU
Patrick Hough, Caroline Sandsbråten, and Tjerand Silde
In recent years, there has been much focus on developing core cryptographic primitives based on lattice assumptions, driven by the NIST call for post-quantum key encapsulation and digital signature algorithms. However, more work must be conducted on efficient privacy-preserving protocols with post-quantum security. Electronic voting is one such privacy-preserving protocol whose adoption is increasing across the democratic world. E-voting offers a fast and convenient alternative to postal voting, while further ensuring cryptographic privacy of votes and offering full verifiability of the process. Owing to the sensitivity of voting and the infrastructure challenges it poses, it is important that post-quantum security be baked into e-voting solutions early. We present a post-quantum e-voting scheme based on the hardness of the RLWE and NTRU lattice problems, providing concrete parameters and an efficient implementation. Our design achieves a $5.3\times$ reduction in ciphertext size, a $2.5\times$ reduction in total communication cost, and a $2\times$ reduction in total computation time compared to the state-of-the-art lattice-based voting scheme by Aranha et al. (ACM CCS 2023). We argue that the efficiency of this scheme makes it suitable for real-world elections. Our scheme makes use of non-ternary NTRU secrets to achieve optimal parameters. In order to assess the security of our design, we extend the ternary-NTRU work of Ducas and van Woerden (ASIACRYPT 2021) by determining the concrete fatigue point (for general secrets) of NTRU to be $q=0.0058\cdot \sigma^2 \cdot d^{\:2.484}$ (above which parameters become overstretched) for modulus $q$, ring dimension $d$ and secrets drawn from a Gaussian of parameter $\sigma$. We consider this relation to be of independent interest and demonstrate its significance by improving the efficiency of the (partially) blind signature scheme by del Pino and Katsumata (CRYPTO 2022).
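The stated fatigue-point relation is easy to turn into a small calculator; the sketch below (our illustration, with example $\sigma$ values chosen for contrast rather than taken from the paper) evaluates $q = 0.0058\cdot\sigma^2\cdot d^{2.484}$.

```python
# The abstract's concrete fatigue-point relation for NTRU with general
# (Gaussian) secrets: moduli above this point are "overstretched".
def ntru_fatigue_q(sigma: float, d: int) -> float:
    """q = 0.0058 * sigma^2 * d^2.484 (ring dimension d, secret width sigma)."""
    return 0.0058 * sigma**2 * d**2.484

# Example: a ternary-like secret width vs. a wider non-ternary Gaussian in
# dimension 512. Wider secrets push the fatigue point upward, which is what
# lets non-ternary NTRU secrets tolerate larger moduli.
for sigma in (0.82, 2.0):
    print(f"sigma={sigma}: fatigue point q ~ {ntru_fatigue_q(sigma, 512):.3e}")
```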
Last updated:  2024-03-13
Unbiasable Verifiable Random Functions
Emanuele Giunta and Alistair Stewart
Verifiable Random Functions (VRFs) play a pivotal role in Proof of Stake (PoS) blockchains due to their applications in secret leader election protocols. However, the original definition by Micali, Rabin and Vadhan is by itself insufficient for such applications. The primary concern is that adversaries may craft VRF key pairs with a skewed output distribution, allowing them to unfairly increase their winning chances. To address this issue, David, Gaži, Kiayias and Russell (2017/573) proposed a stronger definition in the universal composability framework, while Esgin et al. (FC '21) put forward a weaker game-based one. Both proposed notions come with limitations, though: the former appears to be too strong, being seemingly impossible to instantiate without a programmable random oracle, while the latter is not sufficient to prove security for VRF-based secret leader election schemes. In this work we close the above gap by proposing a new security property for VRFs that we call unbiasability. On the one hand, our notion suffices to imply fairness in VRF-based leader election protocols. On the other hand, we provide an efficient compiler in the plain model (with no CRS) transforming any VRF into an unbiasable one under standard assumptions. Moreover, we show that folklore VRF constructions in the ROM achieve our notion without the need to program the random oracle. As a minor contribution, we also provide a generic and efficient construction of certified one-to-one VRFs from any VRF.
Last updated:  2024-03-13
Extractable Witness Encryption for KZG Commitments and Efficient Laconic OT
Nils Fleischhacker, Mathias Hall-Andersen, and Mark Simkin
We present a concretely efficient and simple extractable witness encryption scheme for KZG polynomial commitments. It allows one to encrypt a message towards a triple $(\mathsf{com}, \alpha, \beta)$, where $\mathsf{com}$ is a KZG commitment to some polynomial $f$. Anyone with an opening for the commitment attesting that $f(\alpha) = \beta$ can decrypt, but without knowledge of a valid opening the message is computationally hidden. Our construction is simple and highly efficient: the ciphertext is only a single group element, and encryption and decryption both require a single pairing evaluation and a constant number of group operations. Using our witness encryption scheme, we construct a simple and highly efficient laconic OT protocol which significantly outperforms the state of the art in most important metrics.
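The shape of such a construction can be sketched in a few lines: decryption validity hinges on the KZG pairing check $e(\mathsf{com} - [\beta]_1, [1]_2) = e(\pi, [\tau - \alpha]_2)$, so raising both sides to a random $r$ yields a hashed-ElGamal-style scheme. The toy below is our reconstruction from the abstract, not the paper's implementation; it uses py_ecc's bn128 curve, simulates the trusted setup insecurely by keeping $\tau$ in the clear, and uses an ad hoc KDF.

```python
# Toy sketch of witness encryption for a KZG opening, matching the shape in
# the abstract (single-group-element ciphertext, one pairing per operation).
# Uses py_ecc's bn128. The trusted setup is simulated insecurely by keeping
# tau in the clear, and the KDF is ad hoc: illustration only.
import hashlib
from py_ecc.bn128 import G1, G2, add, multiply, neg, pairing, curve_order as p

tau = 123456789                       # toxic waste (kept only in this toy)
f = [3, 1, 4, 1, 5]                   # polynomial f, low-to-high coefficients
f_at = lambda x: sum(c * pow(x, i, p) for i, c in enumerate(f)) % p

com = multiply(G1, f_at(tau))         # KZG commitment [f(tau)]_1
alpha = 7
beta = f_at(alpha)                    # statement: f(alpha) = beta

def kdf(gt) -> int:
    return int.from_bytes(hashlib.sha256(str(gt).encode()).digest(), "big")

def encrypt(com, alpha, beta, msg, r):
    # ct = [r(tau - alpha)]_2; a real encryptor derives it from the public
    # SRS element [tau]_2, not from tau itself as this toy does.
    ct = multiply(add(multiply(G2, tau), neg(multiply(G2, alpha))), r)
    key = kdf(pairing(multiply(G2, r), add(com, neg(multiply(G1, beta)))))
    return ct, msg ^ key              # one G2 element plus a one-time pad

def decrypt(ct, pad, alpha, beta):
    # Opening pi = [(f(tau) - beta) / (tau - alpha)]_1 (via tau, toy only);
    # then e(pi, ct) = e(com - [beta]_1, [r]_2), recovering the KDF input.
    w = (f_at(tau) - beta) * pow(tau - alpha, -1, p) % p
    return pad ^ kdf(pairing(ct, multiply(G1, w)))

ct, pad = encrypt(com, alpha, beta, msg=0xDEADBEEF, r=99)
assert decrypt(ct, pad, alpha, beta) == 0xDEADBEEF
```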
Last updated:  2024-03-13
Parameter-Hiding Order-Revealing Encryption without Pairings
Cong Peng, Rongmao Chen, Yi Wang, Debiao He, and Xinyi Huang
Order-Revealing Encryption (ORE) provides a practical solution for conducting range queries over encrypted data. Achieving a desirable privacy-efficiency tradeoff in designing ORE schemes has posed a significant challenge. At Asiacrypt 2018, Cash et al. proposed parameter-hiding ORE (pORE), which specifically targets scenarios where the shape of the data distribution is known but the underlying parameters (such as mean and variance) need to be protected. However, existing pORE constructions rely on impractical bilinear maps, limiting their real-world applicability. In this work, we propose an alternative and efficient method for constructing pORE using identification schemes. By leveraging the map-invariance property of identification schemes, we eliminate the need for pairing computations during ciphertext comparison. Specifically, we instantiate our framework with the pairing-free Schnorr identification scheme and demonstrate that our proposed pORE scheme reduces ciphertext size by approximately 31.25\% and improves encryption and comparison efficiency by more than a factor of two compared to the current state-of-the-art pORE construction. Our work provides a more efficient alternative to existing pORE constructions and can be viewed as a step towards making pORE a viable choice for practical applications.
Last updated:  2024-03-13
Private Eyes: Zero-Leakage Iris Searchable Encryption
Julie Ha, Chloe Cachet, Luke Demarest, Sohaib Ahmad, and Benjamin Fuller
This work introduces Private Eyes, the first zero-leakage biometric database. The only leakage of the system is unavoidable: 1) the log of the dataset size and 2) the fact that a query occurred. Private Eyes is built from symmetric searchable encryption. Proximity queries are the required functionality: given a noisy reading of a biometric, the goal is to retrieve all stored records that are close enough according to a distance metric. Private Eyes combines locality-sensitive hashing, or LSHs (Indyk and Motwani, STOC 1998), and oblivious maps. One computes many LSHs of each record in the database and uses these hashes as keys in an encrypted map, with the matching biometric readings concatenated as the value. At search time, with a noisy reading, one computes the LSHs and retrieves the disjunction of the resulting values from the map. The underlying encrypted map needs to efficiently answer disjunction queries. We focus on the iris biometric. Iris biometric data requires a large number of LSHs, approximately 1000. The most relevant prior work is in zero-leakage k-nearest-neighbor search (Boldyreva and Tang, PoPETS 2021), but that work is designed for a small number of LSHs. Our cryptographic design is a zero-leakage disjunctive map designed for the setting where most clauses do not match any records. For the iris, on average at most 6% of LSHs match any stored value. Our scheme is implemented and open-sourced. We evaluate using the ND-0405 dataset; this dataset has 356 irises suitable for testing. To scale our evaluation, we use a generative adversarial network to produce synthetic irises. Accurate statistics on sizes beyond available datasets are crucial to optimizing the cryptographic primitives, so this tool may be of independent interest. For the largest tested parameters of a 5,000-iris database, search requires 26 rounds of communication and 26 minutes of single-threaded computation.
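The LSH-and-map retrieval layer can be sketched as follows (our toy: bit-sampling LSHs for Hamming distance and a plain dict standing in for the zero-leakage encrypted map, so none of the actual cryptography appears):

```python
# Minimal sketch of the LSH-and-map retrieval layer: bit-sampling LSHs of a
# biometric code are used as keys into a map, and a noisy reading retrieves
# the disjunction of its LSH buckets. A plain dict stands in for the
# zero-leakage encrypted map, so none of the actual cryptography is here.
import numpy as np

rng = np.random.default_rng(0)
CODE_BITS, N_LSH, BITS_PER_LSH = 1024, 100, 12  # toy sizes (paper: ~1000 LSHs)
# Each LSH samples a fixed random subset of bit positions (Hamming LSH).
positions = [rng.choice(CODE_BITS, BITS_PER_LSH, replace=False)
             for _ in range(N_LSH)]

def lsh_keys(code):
    return [(i, code[pos].tobytes()) for i, pos in enumerate(positions)]

def build_index(records):            # records: {name: iris bit-vector}
    index = {}
    for name, code in records.items():
        for key in lsh_keys(code):
            index.setdefault(key, set()).add(name)
    return index

def search(index, noisy_code):
    hits = set()
    for key in lsh_keys(noisy_code): # a disjunction over all LSH clauses
        hits |= index.get(key, set())
    return hits

enrolled = rng.integers(0, 2, CODE_BITS, dtype=np.uint8)
noisy = enrolled.copy()
noisy[rng.choice(CODE_BITS, 80, replace=False)] ^= 1   # ~8% read noise
index = build_index({"alice": enrolled})
print(search(index, noisy))          # {'alice'} with high probability
```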
Last updated:  2024-03-13
UniHand: Privacy-preserving Universal Handover for Small-Cell Networks in 5G-enabled Mobile Communication with KCI Resilience
Rabiah Alnashwan, Prosanta Gope, and Benjamin Dowling
Introducing Small Cell Networks (SCNs) has significantly improved wireless link quality, spectrum efficiency and network capacity, and SCNs are viewed as one of the key technologies of the fifth-generation (5G) mobile network. However, this technology increases the frequency of handover (HO) procedures, caused by the dense deployment of cells with reduced coverage, bringing new security and privacy issues. The current 5G-AKA and HO protocols are vulnerable to security weaknesses such as a lack of forward secrecy and identity-confusion attacks, and the high frequency of HOs magnifies these security and privacy concerns in the 5G mobile network. This work addresses these issues by proposing UniHand, a secure privacy-preserving universal HO scheme for SCNs in 5G mobile communication. UniHand achieves mutual authentication, strong anonymity, perfect forward secrecy, key-escrow freeness and key compromise impersonation (KCI) resilience. To the best of our knowledge, this is the first scheme to achieve secure, privacy-preserving universal HO with KCI resilience for roaming users in a 5G environment. We demonstrate that our proposed scheme is resilient against all essential security threats by performing a comprehensive formal security analysis, and we conduct relevant experiments to show the cost-effectiveness of the proposed scheme.
Last updated:  2024-03-13
The Uber-Knowledge Assumption: A Bridge to the AGM
Balthazar Bauer, Pooya Farshim, Patrick Harasser, and Markulf Kohlweiss
The generic-group model (GGM) and the algebraic-group model (AGM) have been exceptionally successful in proving the security of many classical and modern cryptosystems. These models, however, come with standard-model uninstantiability results, raising the question of whether the schemes analyzed under them can be based on firmer standard-model footing. We formulate the uber-knowledge (UK) assumption, a standard-model assumption that naturally extends the uber-assumption family to knowledge-type problems. We justify the soundness of the UK assumption in both the bilinear GGM and the bilinear AGM. Along the way we extend these models to account for hashing into groups, an adversarial capability that is available in many concrete groups; in contrast to standard assumptions, hashing may affect the validity of knowledge assumptions. These results, in turn, enable a modular approach to security in the GGM and the AGM. As example applications, we use the UK assumption to prove knowledge soundness of Groth16 and of KZG polynomial commitments in the standard model, where for the former we reuse the existing proof in the AGM without hashing.
Last updated:  2024-03-13
Perfect Asynchronous MPC with Linear Communication Overhead
Ittai Abraham, Gilad Asharov, Shravani Patil, and Arpita Patra
We study secure multiparty computation in the asynchronous setting with perfect security and optimal resilience (less than one-fourth of the participants are malicious). It has been shown that every function can be computed in this model [Ben-Or, Canetti, and Goldreich, STOC 1993]. Despite 30 years of research, all protocols in the asynchronous setting require $\Omega(n^2C)$ communication complexity for computing a circuit with $C$ multiplication gates. In contrast, for nearly 15 years it has been known how to achieve $\mathcal{O}(nC)$ communication complexity in the synchronous setting (Beerliova and Hirt; TCC 2008). The techniques for achieving this result in the synchronous setting are not known to be sufficient for obtaining an analogous result in the asynchronous setting. We close this gap between synchronous and asynchronous secure computation and show the first asynchronous protocol with $\mathcal{O}(nC)$ communication complexity for a circuit with $C$ multiplication gates. Linear overhead forms a natural barrier for general secret-sharing-based MPC protocols. Our main technical contribution is an asynchronous weak-binding secret sharing that achieves rate-1 communication (i.e., $\mathcal{O}(1)$ overhead per secret). To achieve this goal, we develop new techniques for the asynchronous setting, including the use of trivariate polynomials (as opposed to bivariate polynomials).
Last updated:  2024-03-13
Polynomial-Time Key-Recovery Attack on the ${\tt NIST}$ Specification of ${\tt PROV}$
River Moreira Ferreira and Ludovic Perret
In this paper, we present an efficient attack against ${\tt PROV}$, a recent variant of the popular Unbalanced Oil and Vinegar (${\tt UOV}$) multivariate signature scheme, that has been submitted to the ongoing ${\tt NIST}$ standardization process for additional post-quantum signature schemes. A notable feature of ${\tt PROV}$ is its proof of security, namely, existential unforgeability under a chosen-message attack (${\tt EUF-CMA}$), assuming the hardness of solving the system formed by the public-key non-linear equations. We present a polynomial-time key-recovery attack against the first specification of ${\tt PROV}$ (v$1.0$). To do so, we observe that a small fraction of the ${\tt PROV}$ secret key is leaked during the signature process. Adapting and extending previous works on basic ${\tt UOV}$, we show that the entire secret key can then be recovered from such a small fraction in polynomial time. This leads to an efficient attack against ${\tt PROV}$ that we have validated in practice. For all the security parameters suggested by the authors of ${\tt PROV}$, our attack recovers the secret key in at most $8$ seconds. We conclude the paper by discussing the apparent mismatch between such a practical attack and the theoretical security claimed by the ${\tt PROV}$ designers. Our attack is not structural; rather, it exploits the fact that the current specification of ${\tt PROV}$ differs from the required security model. A simple countermeasure makes ${\tt PROV}$ immune to the attack presented here and led the designers to update the specification of ${\tt PROV}$ (v$1.1$).
Last updated:  2024-03-13
Generalized Feistel Ciphers for Efficient Prime Field Masking - Full Version
Lorenzo Grassi, Loïc Masure, Pierrick Méaux, Thorben Moos, and François-Xavier Standaert
A recent work from Eurocrypt 2023 suggests that prime-field masking has excellent potential to improve the efficiency vs. security tradeoff of masked implementations against side-channel attacks, especially in contexts where physical leakages show low noise. We pick up on the main open challenge that this seed result leads to, namely the design of an optimized prime cipher able to take advantage of this potential. Given the interest of tweakable block ciphers with cheap inverses in many leakage-resistant designs, we start by describing the FPM (Feistel for Prime Masking) family of tweakable block ciphers based on a generalized Feistel structure. We then propose a first instantiation of FPM, which we denote small-pSquare. It builds on the recent observation that the square operation (which is non-linear in $\mathbb{F}_p$) can lead to masked gadgets that are more efficient than those for multiplication, and it is tailored for efficient masked implementations in hardware. We analyze the mathematical security of the FPM family of ciphers and of the small-pSquare instance, trying to isolate the parts of our study that can be re-used for other instances. We additionally evaluate the implementation features of small-pSquare by comparing the efficiency vs. security tradeoff of masked FPGA circuits against that of a state-of-the-art binary cipher, namely SKINNY, confirming significant gains in relevant contexts.
Last updated:  2024-03-13
Perfect (Parallel) Broadcast in Constant Expected Rounds via Statistical VSS
Gilad Asharov and Anirudh Chandramouli
We study broadcast protocols in the information-theoretic model under optimal conditions, where the number of corruptions $t$ is at most one-third of the number of parties $n$. While broadcast protocols with sub-linear worst-case round complexity are known to be impossible, protocols with an expected constant number of rounds have been demonstrated since the seminal work of Feldman and Micali [STOC '88]. Communication complexity for such protocols has gradually improved over the years, reaching $O(nL)$ plus expected $O(n^4\log n)$ for broadcasting a message of size $L$ bits. This paper presents a perfectly secure broadcast protocol with expected constant rounds and communication complexity of $O(nL)$ plus expected $O(n^3 \log^2n)$ bits. In addition, we consider the problem of parallel broadcast, where $n$ senders each wish to broadcast a message of size $L$. We show a parallel broadcast protocol with expected constant rounds and communication complexity of $O(n^2L)$ plus expected $O(n^3 \log^2n)$ bits. Our protocol is optimal (up to expectation) for messages of length $L \in \Omega(n \log^2 n)$. Our main contribution is a framework for obtaining perfectly secure broadcast with an expected constant number of rounds from statistically secure verifiable secret sharing. Moreover, we provide a new statistically secure verifiable secret sharing scheme where the broadcast cost per participant is reduced from $O(n \log n)$ bits to only $O({\sf poly} \log n)$ bits. All our protocols are adaptively secure.
Last updated:  2024-03-13
The impact of data-heavy, post-quantum TLS 1.3 on the Time-To-Last-Byte of real-world connections
Panos Kampanakis and Will Childs-Klein
It has been shown that post-quantum key exchange and authentication with ML-KEM and ML-DSA, NIST's post-quantum algorithm picks, will have an impact on TLS 1.3 performance used in the Web and other applications. Studies so far have focused on the overhead of quantum-resistant algorithms on TLS time-to-first-byte (handshake time). Although these works have been important in quantifying the slowdown in connection establishment, they do not capture the full picture regarding real-world TLS 1.3 connections, which carry sizable amounts of data. Intuitively, the introduction of an extra 10KB of ML-KEM and ML-DSA exchanges in the connection negotiation will inflate the connection establishment time proportionally more than it will increase the total connection time of a Web connection carrying 200KB of data. In this work, we quantify the impact of ML-KEM and ML-DSA on typical TLS 1.3 connections which transfer a few hundred KB from the server to the client. We study the slowdown in the time-to-last-byte of post-quantum connections under normal network conditions and in more unstable environments with high packet-delay variability and loss probabilities. We show that the impact of ML-KEM and ML-DSA on the TLS 1.3 time-to-last-byte under stable network conditions is lower than the impact on the handshake and diminishes as the transferred data increases. The time-to-last-byte increase stays below 5% for high-bandwidth, stable networks. It drops from a 32% increase in handshake time to under a 15% increase in time-to-last-byte when transferring 50KiB of data or more under low-bandwidth, stable network conditions. Even when congestion control affects connection establishment, the additional slowdown drops below 10% as the connection data increases to 200KiB. We also show that connections under lossy or volatile network conditions could see a higher impact from post-quantum handshakes, but these connections' time-to-last-byte increase still drops as the transferred data increases. Finally, we show that such connections are already significantly slow and volatile regardless of the TLS handshake.
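The amortization intuition is simple arithmetic; the sketch below (illustrative numbers only, not the paper's measurements) shows how a fixed post-quantum byte overhead shrinks relative to growing connection payloads:

```python
# Back-of-the-envelope version of the abstract's intuition: a fixed ~10KB of
# extra ML-KEM/ML-DSA handshake data gets amortized as the connection carries
# more application data. On a low-bandwidth link, time is roughly
# proportional to bytes on the wire. Illustrative numbers, not measurements.
PQ_EXTRA_KB = 10.0
CLASSICAL_HANDSHAKE_KB = 5.0   # assumed classical handshake size on the wire

for payload_kb in (0, 50, 200, 1000):
    baseline = CLASSICAL_HANDSHAKE_KB + payload_kb
    overhead = 100.0 * PQ_EXTRA_KB / baseline
    print(f"{payload_kb:5.0f} KB payload -> ~{overhead:5.1f}% more bytes on the wire")
```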
Last updated:  2024-03-12
Plug-and-play sanitization for TFHE
Florian Bourse and Malika Izabachène
Fully homomorphic encryption allows the evaluation of arbitrary circuits over encrypted data while preserving the privacy of the data. However, without additional properties, no guarantee is provided for the privacy of the circuits being evaluated. A sanitization algorithm makes it possible to destroy all previous information about how a ciphertext was obtained, ensuring that the circuit that was evaluated remains secret. In this paper, we present two techniques to randomize RLWE ciphertexts and show how they can be used to achieve ciphertext sanitization for the TFHE scheme proposed by Chillotti et al. (Asiacrypt 2016), by modifying the bootstrapping procedure internally. The first technique is a generalization of the strategy proposed by Bourse et al. (Crypto 2016) to the ring setting. While this approach adapts well in theory, we show evidence that it fails to provide a practical solution. To improve over this strategy, we relax the circuit-privacy property to its computational counterpart and make use of an efficient public randomizer composed of an RLWE-based public-key encryption scheme with additional properties on the ciphertext distribution. This randomizer can also be used in the soak-and-spin paradigm of Ducas and Stehlé (Eurocrypt 2016). Using a backward induction over the circuit size, we also improve on the proof technique of Bourse et al. to avoid randomization at each step of the computation, enabling faster randomization and smaller noise growth. As a proof of concept, we provide a C implementation of our sanitization strategy, which shows that a sanitized LWE ciphertext can be obtained almost for free compared to a bootstrapped LWE ciphertext, assuming many discrete Gaussian samples are at hand.
Last updated:  2024-03-12
Efficient Pre-processing PIR Without Public-Key Cryptography
Ashrujit Ghoshal, Mingxun Zhou, and Elaine Shi
Classically, Private Information Retrieval (PIR) was studied in a setting without any pre-processing. In this setting, it is well known that 1) public-key cryptography is necessary to achieve non-trivial (i.e., sublinear) communication efficiency in the single-server setting, and 2) the total server computation per query must be linear in the size of the database, whether in the single-server or multi-server setting. Recent works have shown that both of these barriers can be overcome if we are willing to introduce a pre-processing phase. In particular, a recent work called Piano showed that using only one-way functions, one can construct a single-server preprocessing PIR with $\widetilde{O}(\sqrt{n})$ bandwidth and computation per query, assuming $\widetilde{O}(\sqrt{n})$ client storage. For the two-server setting, the state of the art is defined by two incomparable results. First, Piano immediately implies a scheme in the two-server setting with the same performance bounds as stated above. Moreover, Beimel et al. showed a two-server scheme with $O(n^{1/3})$ bandwidth and $O(n/\log^2 n)$ computation per query, and one with $O(n^{1/2 + \epsilon})$ cost both in bandwidth and computation; both schemes provide information-theoretic security. In this paper, we show that assuming the existence of one-way functions, we can construct a two-server preprocessing PIR scheme with $\widetilde{O}(n^{1/4})$ bandwidth and $\widetilde{O}(n^{1/2})$ computation per query, while requiring only $\widetilde{O}(n^{1/2})$ client storage. We also construct a new single-server preprocessing PIR scheme with $\widetilde{O}(n^{1/4})$ online bandwidth and $\widetilde{O}(n^{1/2})$ offline bandwidth and computation per query, also requiring $\widetilde{O}(n^{1/2})$ client storage. Specifically, the online bandwidth is the bandwidth required for the client to obtain an answer, and the offline bandwidth can be viewed as background maintenance work amortized to each query. Our new constructions not only advance the theoretical understanding of preprocessing PIR, but are also concretely efficient because the only cryptography needed is pseudorandom functions.
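For readers new to preprocessing PIR, the classic hint-based two-server paradigm (in the spirit of earlier work such as Corrigan-Gibbs and Kogan, and of Piano; emphatically not this paper's new scheme) can be sketched as follows: the client stores roughly $\sqrt{n}$-sized random sets with their parities from an offline phase, then answers an online query with one set-minus-index request.

```python
# Toy of the hint-based two-server preprocessing paradigm (in the spirit of
# Corrigan-Gibbs and Kogan, and of Piano), *not* this paper's scheme: the
# client stores ~sqrt(n)-sized random sets with their parities, then answers
# an online query by sending one set, minus the wanted index, to server 2.
import math
import random

n = 256
db = [random.getrandbits(1) for _ in range(n)]
set_size = math.isqrt(n)
n_hints = 4 * n // set_size        # enough sets to cover most indices whp

# Offline: the client obtains (set, parity) hints, e.g. by streaming the
# database once from server 1.
hints = [(s, sum(db[j] for j in s) % 2)
         for s in (set(random.sample(range(n), set_size))
                   for _ in range(n_hints))]

def server2_answer(indices):       # server 2 just returns a parity
    return sum(db[j] for j in indices) % 2

def query(i):
    for s, parity in hints:
        if i in s:
            # Online: one (set_size - 1)-element request, one bit back.
            # (A real scheme also refreshes the consumed hint and hides the
            # small distributional leakage of the punctured set s - {i}.)
            return parity ^ server2_answer(s - {i})
    return db[i]                   # no covering hint (rare in this toy)

assert all(query(i) == db[i] for i in range(n))
```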
Last updated:  2024-03-12
SoK: Zero-Knowledge Range Proofs
Miranda Christ, Foteini Baldimtsi, Konstantinos Kryptos Chalkias, Deepak Maram, Arnab Roy, and Joy Wang
Zero-knowledge range proofs (ZKRPs) allow a prover to convince a verifier that a secret value lies in a given interval. ZKRPs have numerous applications: from anonymous credentials and auctions, to confidential transactions in cryptocurrencies. At the same time, a plethora of ZKRP constructions exist in the literature, each with its own trade-offs. In this work, we systematize the knowledge around ZKRPs. We create a classification of existing constructions based on the underlying building techniques, and we summarize their properties. We provide comparisons between schemes both in terms of properties as well as efficiency levels, and construct a guideline to assist in the selection of an appropriate ZKRP for different application requirements. Finally, we discuss a number of interesting open research problems.
Last updated:  2024-03-12
FOLEAGE: $\mathbb{F}_4$OLE-Based Multi-Party Computation for Boolean Circuits
Maxime Bombar, Dung Bui, Geoffroy Couteau, Alain Couvreur, Clément Ducros, and Sacha Servan-Schreiber
Secure Multi-party Computation (MPC) allows two or more parties to compute any public function over their privately-held inputs without revealing any information beyond the result of the computation. The main efficiency bottleneck in MPC protocols comes from interaction between parties. To limit the interaction, modern protocols for MPC generate a large amount of input-independent preprocessing material, called multiplication triples, in an offline phase. This preprocessing can later be used by the parties to efficiently instantiate an input-dependent online phase computing the function. To date, the state-of-the-art secure multi-party computation protocols in the preprocessing model are tailored to secure computation of arithmetic circuits over large fields and require little communication in the preprocessing phase. In contrast, when it comes to computing preprocessing for computations that are naturally represented as Boolean circuits, the state-of-the-art techniques have not evolved since the 1980s and, in particular, require every pair of parties to interactively generate a large number of oblivious transfers, which induces an $\Omega(N^2 \cdot m)$ communication overhead. In this paper, we introduce FOLEAGE, which addresses this gap with an efficient preprocessing protocol tailored to Boolean circuits. FOLEAGE exhibits excellent performance: it generates $m$ triples using only $N \cdot m + O(N^2 \cdot \log m)$ bits of communication for $N$ parties, and can concretely produce over 11 million triples per second on one core of a commodity machine. Our result builds upon an efficient Pseudorandom Correlation Generator (PCG) for multiplication triples over the field $\mathbb{F}_4$. Roughly speaking, a PCG enables parties to stretch a short seed into a large number of pseudorandom correlations non-interactively, which greatly improves the efficiency of the offline phase in MPC protocols. Our construction significantly outperforms the state-of-the-art, which we demonstrate via a prototype implementation. This is achieved by introducing a number of protocol-level, algorithmic-level, and implementation-level optimizations on the recent PCG construction of Bombar et al. (Crypto 2023) from the Quasi-Abelian Syndrome Decoding assumption, supporting any field size $q \ge 3$.
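To see where the triples go, the sketch below shows the standard Beaver-triple online multiplication over $\mathbb{F}_4$ (two parties, additive sharing; elements as 2-bit integers with XOR as addition). The triple is handed out by a mock dealer; producing such triples cheaply via a PCG is precisely the paper's contribution and is not reproduced here.

```python
# Where the triples go: standard Beaver-triple online multiplication over
# F_4 (elements as 2-bit ints, addition = XOR), with two parties holding
# additive shares. The triple comes from a mock dealer here; generating such
# triples cheaply via a PCG is the paper's contribution and is not shown.
import random

GF4_MUL = [  # multiplication table for F_4 = F_2[x]/(x^2 + x + 1)
    [0, 0, 0, 0],
    [0, 1, 2, 3],
    [0, 2, 3, 1],
    [0, 3, 1, 2],
]

def mul(a, b):
    return GF4_MUL[a][b]

def share(v):
    """Additive 2-party sharing: v = v0 + v1 (XOR in characteristic 2)."""
    r = random.randrange(4)
    return (r, v ^ r)

def beaver_multiply(x_sh, y_sh):
    # Mock dealer: a random triple (a, b, c) with c = a*b, secret-shared.
    a, b = random.randrange(4), random.randrange(4)
    a_sh, b_sh, c_sh = share(a), share(b), share(mul(a, b))
    # Each party opens its share of x - a and y - b (subtraction = XOR).
    d = x_sh[0] ^ a_sh[0] ^ x_sh[1] ^ a_sh[1]
    e = y_sh[0] ^ b_sh[0] ^ y_sh[1] ^ b_sh[1]
    # z = c + d*b + e*a + d*e, computed locally on shares; the public d*e
    # term is added by party 0 only.
    z = [c_sh[i] ^ mul(d, b_sh[i]) ^ mul(e, a_sh[i]) for i in (0, 1)]
    z[0] ^= mul(d, e)
    return tuple(z)

for _ in range(100):
    x, y = random.randrange(4), random.randrange(4)
    z = beaver_multiply(share(x), share(y))
    assert z[0] ^ z[1] == mul(x, y)
```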
Last updated:  2024-03-12
OpenFHE: Open-Source Fully Homomorphic Encryption Library
Ahmad Al Badawi, Andreea Alexandru, Jack Bates, Flavio Bergamaschi, David Bruce Cousins, Saroja Erabelli, Nicholas Genise, Shai Halevi, Hamish Hunt, Andrey Kim, Yongwoo Lee, Zeyu Liu, Daniele Micciancio, Carlo Pascoe, Yuriy Polyakov, Ian Quah, Saraswathy R.V., Kurt Rohloff, Jonathan Saylor, Dmitriy Suponitsky, Matthew Triplett, Vinod Vaikuntanathan, and Vincent Zucca
Fully Homomorphic Encryption (FHE) is a powerful cryptographic primitive that enables performing computations over encrypted data without having access to the secret key. We introduce OpenFHE, a new open-source FHE software library that incorporates selected design ideas from prior FHE projects, such as PALISADE, HElib, and HEAAN, and includes several new design concepts and ideas. The main new design features can be summarized as follows: (1) we assume from the very beginning that all implemented FHE schemes will support bootstrapping and scheme switching; (2) OpenFHE supports multiple hardware acceleration backends using a standard Hardware Abstraction Layer (HAL); (3) OpenFHE includes both user-friendly modes, where all maintenance operations, such as modulus switching, key switching, and bootstrapping, are automatically invoked by the library, and compiler-friendly modes, where an external compiler makes these decisions. This paper focuses on a high-level description of the OpenFHE design, and the reader is pointed to external OpenFHE references for a more detailed/technical description of the software library.
Last updated:  2024-03-12
SNOW-SCA: ML-assisted Side-Channel Attack on SNOW-V
Harshit Saurabh, Anupam Golder, Samarth Shivakumar Titti, Suparna Kundu, Chaoyun Li, Angshuman Karmakar, and Debayan Das
This paper presents SNOW-SCA, the first power side-channel analysis (SCA) attack on a 5G mobile communication security standard candidate, SNOW-V, running on a 32-bit ARM Cortex-M4 microcontroller. First, we perform a generic known-key correlation (KKC) analysis to identify the leakage points. Next, a correlation power analysis (CPA) attack is performed, which reduces the attack complexity to two key guesses for each key byte. The correct secret key is then uniquely identified using linear discriminant analysis (LDA). The profiled SCA attack with LDA achieves 100% accuracy after training with fewer than 200 traces, which means the attack succeeds with just a single trace. Overall, using the combined CPA and LDA attack model, the correct secret-key byte is recovered with fewer than 50 traces collected using the ChipWhisperer platform. The entire 256-bit secret key of SNOW-V can be recovered incrementally using the proposed SCA attack. Finally, we suggest low-overhead countermeasures that can be used to prevent these SCA attacks.
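As a generic illustration of the CPA step (not SNOW-V-specific, and with a randomly generated stand-in S-box), the sketch below recovers one key byte from simulated Hamming-weight traces by correlating each key guess's predicted leakage against the measurements:

```python
# Generic first-order CPA on simulated traces with a Hamming-weight leakage
# model, recovering one key byte of an S-box lookup. The paper attacks
# SNOW-V intermediates on real ChipWhisperer traces; this toy (with a random
# stand-in S-box) only illustrates the CPA step itself.
import numpy as np

rng = np.random.default_rng(7)
SBOX = rng.permutation(256)                   # stand-in S-box, not SNOW-V's
HW = np.array([bin(v).count("1") for v in range(256)])

key_byte, n_traces = 0x3A, 500
plaintexts = rng.integers(0, 256, n_traces)
# Simulated leakage: HW of the S-box output plus Gaussian noise.
traces = HW[SBOX[plaintexts ^ key_byte]] + rng.normal(0, 2.0, n_traces)

def cpa_ranking(traces, plaintexts):
    corrs = np.empty(256)
    for guess in range(256):
        model = HW[SBOX[plaintexts ^ guess]]  # predicted leakage per guess
        corrs[guess] = abs(np.corrcoef(model, traces)[0, 1])
    return np.argsort(corrs)[::-1]            # key guesses, best first

best = int(cpa_ranking(traces, plaintexts)[0])
print(f"best guess: {best:#04x} (true key byte: {key_byte:#04x})")
```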
Last updated:  2024-03-12
Time-Averaged Analysis of Selfish Mining in Bitcoin
Roozbeh Sarenche, Ren Zhang, Svetla Nikova, and Bart Preneel
A Bitcoin miner who owns a sufficient amount of mining power can perform selfish mining to increase his relative revenue. Studies have demonstrated that the time-averaged profit of a selfish miner starts to rise once the mining difficulty level gets adjusted in favor of the attacker. Selfish mining profitability stems from the fact that orphan blocks are not incorporated into the current version of Bitcoin's difficulty adjustment mechanism (DAM). It has therefore been believed that accounting for orphan blocks in the DAM would make selfish mining unprofitable. In this paper, we disprove this belief by providing a formal analysis of the time-averaged profit of selfish mining. We present a precise definition of the orphan blocks that can be incorporated into calculating the next epoch's target and then introduce two modified versions of the DAM in which both main-chain blocks and orphan blocks are incorporated. We propose two versions of smart intermittent selfish mining, where the first dominates normal intermittent selfish mining and the second remains profitable under the modified DAMs. Moreover, we present the orphan exclusion attack, with the help of which the attacker can stop honest miners from reporting orphan blocks. Using combinatorial tools, we analyze the profitability of selfish mining accompanied by the orphan exclusion attack under the modified DAMs. Our results show that even when orphan blocks are considered in the DAM, normal selfish mining can still be profitable; however, the level of profitability under the modified DAMs is significantly lower than that observed under the current version of the Bitcoin DAM.
Last updated:  2024-03-12
A Cautionary Note: Side-Channel Leakage Implications of Deterministic Signature Schemes
Hermann Seuschek, Johann Heyszl, and Fabrizio De Santis
Two recent proposals by Bernstein and Pornin emphasize the use of deterministic signatures in DSA and its elliptic curve-based variants. Deterministic signatures derive the required ephemeral key value in a deterministic manner from the message to be signed and the secret key instead of using random number generators. The goal is to prevent severe security issues, such as the straightforward secret key recovery from low-quality random numbers. Recent developments have raised skepticism about whether, e.g., embedded or pervasive devices are able to generate randomness of sufficient quality. The main concerns stem from individual implementations lacking a sufficient entropy source and from standardized methods for random number generation being suspected of containing back doors. While we support the goal of deterministic signatures, we are concerned about the fact that this has a significant influence on the side-channel security of implementations. Specifically, attackers will be able to mount differential side-channel attacks on the additional use of the secret key in a cryptographic hash function to derive the deterministic ephemeral key. Previously, only a simple integer arithmetic function to generate the second signature parameter had to be protected, which is rather straightforward. Hash functions are significantly more difficult to protect. In this contribution, we systematically explain how deterministic signatures introduce this new side-channel vulnerability.
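As a concrete (simplified) picture of the nonce derivation in question, the sketch below derives the ephemeral key by keyed hashing in the spirit of RFC 6979 and EdDSA; it is not the exact standardized procedure. The point is that the HMAC call, which mixes the fixed long-term secret with an attacker-known message on every signature, is precisely the new DPA target the paper describes.

```python
import hashlib, hmac

# Simplified deterministic nonce derivation in the spirit of RFC 6979 /
# EdDSA (not the exact standardized procedure). The HMAC call below mixes
# the fixed long-term secret with an attacker-known message on every
# signature; this keyed hash computation is the new differential
# side-channel target the paper describes.
ORDER = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # NIST P-256 group order

def deterministic_nonce(secret_key: bytes, message: bytes) -> int:
    digest = hmac.new(secret_key, message, hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % ORDER or 1  # avoid the degenerate k = 0

print(deterministic_nonce(b"\x01" * 32, b"message to sign"))
```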
Last updated:  2024-03-12
Unlinkable Policy-Compliant Signatures for Compliant and Decentralized Anonymous Payments
Christian Badertscher, Mahdi Sedaghat, and Hendrik Waldner
Privacy-preserving payment systems face the difficult task of balancing privacy and accountability: on one hand, users should be able to transact privately and anonymously; on the other hand, no illegal activities should be tolerated. The challenging question of finding the right balance lies at the core of the research on accountable privacy that stipulates the use of cryptographic techniques for policy enforcement. Current state-of-the-art systems are only able to enforce rather limited policies, such as spending or transaction limits, or assertions about single participants, but are unable to enforce more complex policies that, for example, jointly evaluate the private credentials of both sender and recipient (such as admissible cross-border payments), let alone to do this without auditors in the loop during payment. This severely limits the cases where decentralized virtual assets can be used in accordance with regulatory requirements such as the Financial Action Task Force (FATF) travel rule, while further retaining strong privacy features. We present unlinkable Policy-Compliant Signatures (ul-PCS), an enhanced cryptographic primitive extending the work of Badertscher et al. (TCC 21). We give rigorous definitions, formally proven constructions, and benchmarks using our prototype developed using CharmCrypto. Unlinkable PCS has the following unique combination of features: 1. It is an enhanced signature scheme where the public key encodes in a privacy-preserving way the user's verifiable credentials (obtained from a credential authority). 2. Signatures can be created (and later publicly verified) by additionally specifying a recipient's public key alongside the to-be-signed message. A valid signature can only ever be created if the attributes $x_S$ of the signer and the attributes $x_R$ of the receiver fulfill some global policy $F(x_S,x_R)$. 3. The signature can be created by the signer just knowing the recipient's public key; no further interaction is needed and no attributes are leaked (beyond the validity of the policy). 4. Once credentials are obtained, a user can generate fresh public keys without interacting with the credential authority. By merging the act of signing a transaction with the act of providing an assurance that the involved participants comply with complex policies, while still letting participants change addresses without the involvement of an authority, ul-PCS constitutes a crucial step towards a technology that improves the regulatory compliance of privacy coins such as Monero or Zcash.
Last updated:  2024-03-12
Efficient Actively Secure DPF and RAM-based 2PC with One-Bit Leakage
Wenhao Zhang, Xiaojie Guo, Kang Yang, Ruiyu Zhu, Yu Yu, and Xiao Wang
Secure two-party computation (2PC) in the RAM model has attracted huge attention in recent years. Most existing results only support semi-honest security, with the exception of Keller and Yanai (Eurocrypt 2018), which comes at a very high cost. In this paper, we propose an efficient RAM-based 2PC protocol with active security and one-bit leakage. 1) We propose an actively secure protocol for distributed point functions (DPFs), with one-bit leakage, that is essentially as efficient as the state-of-the-art semi-honest protocol. Compared with previous work, our protocol takes about $50 \times$ less communication for a domain with $2^{20}$ entries, and no longer requires actively secure generic 2PC. 2) We extend the dual-execution protocol to allow reactive computation, and then build a RAM-based 2PC protocol with active security on top of our new building blocks. The protocol follows the paradigm of Doerner and shelat (CCS 2017). We are able to prove that the protocol has end-to-end one-bit leakage. 3) Our implementation shows that our protocol is almost as efficient as the state-of-the-art semi-honest RAM-based 2PC protocol, and is at least two orders of magnitude faster than prior actively secure RAM-based 2PC without leakage, providing a realistic trade-off in practice.
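For readers unfamiliar with the primitive, the sketch below shows what a DPF computes in its naive, linear-size form: two XOR shares of the truth table of a point function. Real DPF keys are logarithmic in the domain size, and the paper's contribution is an actively secure protocol for generating such keys, which this toy code does not attempt.

```python
import os

# What a DPF computes, in naive linear-size form (our illustration): two
# parties hold bit-vectors whose XOR is the truth table of a point function
# that is 1 only at index alpha. Real DPF keys are logarithmic in the domain
# size; the paper's protocol secures the *generation* of such keys against
# active adversaries, which this toy code does not attempt.
def naive_dpf_gen(alpha: int, n: int):
    share0 = [b & 1 for b in os.urandom(n)]                     # random bit share
    share1 = [share0[i] ^ (1 if i == alpha else 0) for i in range(n)]
    return share0, share1

s0, s1 = naive_dpf_gen(alpha=5, n=16)
assert [a ^ b for a, b in zip(s0, s1)] == [int(i == 5) for i in range(16)]
```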
Last updated:  2024-03-12
Kolmogorov Comes to Cryptomania: On Interactive Kolmogorov Complexity and Key-Agreement
Marshall Ball, Yanyi Liu, Noam Mazor, and Rafael Pass
Only a handful of candidates for computational assumptions that imply secure key-agreement protocols (KA) are known, and even fewer are believed to be quantum safe. In this paper, we present a new hardness assumption---the worst-case hardness of a promise problem related to an interactive version of Kolmogorov Complexity. Roughly speaking, the promise problem requires telling apart tuples of strings $(\pi,x,y)$ with relatively (w.r.t. $K(\pi)$) low time-bounded Interactive Kolmogorov Complexity ($IK^t$), and those with relatively high Kolmogorov complexity, given the promise that $K^t(x|y)< s$, $K^t(y|x)< s$ and $s = \log n$, and where $IK^t(\pi;x;y)$ is defined as the length of the shortest pair of $t$-bounded TMs $(A,B)$ such that the interaction of $(A,B)$ leads to the transcript $\pi$ and the respective outputs $x,y$. We demonstrate that when $t$ is some polynomial, then not only does this hardness assumption imply the existence of KA, but it is also necessary for the existence of secure KA. As such, it yields the first natural hardness assumption characterizing the existence of key-agreement protocols. We additionally show that when the threshold $s$ is bigger (e.g., $s = 55\log n$), then the (worst-case) hardness of this problem instead characterizes the existence of one-way functions (OWFs). As such, our work also clarifies exactly what it would take to base KA on the existence of OWFs, and demonstrates that this question boils down to demonstrating a worst-case reduction between two closely related promise problems.
Last updated:  2024-03-11
Fully Homomorphic Encryption beyond IND-CCA1 Security: Integrity through Verifiability
Mark Manulis and Jérôme Nguyen
We focus on the problem of constructing fully homomorphic encryption (FHE) schemes that achieve some meaningful notion of adaptive chosen-ciphertext security beyond CCA1. Towards this, we propose a new notion, called security against verified chosen-ciphertext attack (vCCA). The idea behind it is to ascertain the integrity of the ciphertext by imposing a strong control on the evaluation algorithm. Essentially, we require that a ciphertext obtained through homomorphic evaluation must be "linked" to the original input ciphertexts. We formalize the vCCA notion in two equivalent formulations; the first is in the indistinguishability paradigm, and the second follows the non-malleability simulation-based approach and is a generalization of the targeted malleability introduced by Boneh et al. in 2012. We strengthen the credibility of our definitions by exploring relations to existing security notions for homomorphic encryption schemes, namely CCA1, RCCA, FuncCPA, CCVA, and HCCA. We prove that vCCA security is the strongest notion known so far that can be achieved by an FHE scheme; in particular, vCCA is strictly stronger than CCA1. Finally, we provide a general transformation that takes any CPA-secure FHE scheme and makes it vCCA-secure. Our transformation first turns an FHE scheme into a CCA2-secure scheme where a part of the ciphertext retains the homomorphic properties and then extends it with a succinct non-interactive argument of knowledge (SNARK) to verifiably control the evaluation algorithm. In fact, we obtain four general variations of this transformation. We handle both asymmetric and symmetric-key FHE schemes, and for each we give two variations differing in whether the ciphertext integrity can be verified publicly or requires the secret key. We use well-known techniques to achieve CCA security in the first step of our transformation. In the asymmetric case, we use the double encryption paradigm, and in the symmetric case, we use Encrypt-then-MAC techniques. Furthermore, our transformation also gives the first CCA-secure FHE scheme based on bootstrapping techniques.
Last updated:  2024-03-11
Building MPCitH-based Signatures from MQ, MinRank, Rank SD and PKP
Thibauld Feneuil
The MPC-in-the-Head paradigm is a useful tool to build practical signature schemes. Many such schemes have already been proposed, relying on different assumptions. Some rely on standard symmetric primitives like AES, some rely on MPC-friendly primitives like LowMC or Rain, and some rely on well-known hard problems like the syndrome decoding problem. This work focuses on the third type of MPCitH-based signatures. Following the same methodology as the work of Feneuil, Joux and Rivain (CRYPTO'22), we apply the MPC-in-the-Head paradigm to several problems: the multivariate quadratic problem, the MinRank problem, the rank syndrome decoding problem and the permuted kernel problem. Our goal is to study how this paradigm behaves for each of those problems. For the multivariate quadratic problem, our scheme slightly outperforms the existing schemes when considering large fields (such as $\mathbb{F}_{256}$), and for the permuted kernel problem, we obtain larger sizes. Even though neither scheme outperforms the existing ones in communication cost, they are highly parallelizable and compatible with some MPC-in-the-Head techniques (like fast signature verification), while the former proposals were not. Moreover, we propose two efficient MPC protocols to check that the rank of a matrix over a field $\mathbb{F}_q$ is upper bounded by a public constant. The first one relies on the rank decomposition, while the second one relies on $q$-polynomials. We then use them to build signature schemes relying on the MinRank problem and the rank syndrome decoding problem. These schemes outperform the former ones, achieving sizes below $6$ KB (while using only 256 parties for the MPC protocol).
Last updated:  2024-03-11
On the Concrete Security of Approximate FHE with Noise-Flooding Countermeasures
Flavio Bergamaschi, Anamaria Costache, Dana Dachman-Soled, Hunter Kippen, Lucas LaBuff, and Rui Tang
Approximate fully homomorphic encryption (FHE) schemes such as the CKKS scheme (Asiacrypt '17) are popular in practice due to their efficiency and utility for machine learning applications. Unfortunately, Li and Micciancio (Eurocrypt '21) showed that, while achieving standard semantic (or $\mathsf{IND}\mbox{-}\mathsf{CPA}$) security, the CKKS scheme is broken under a variant security notion known as $\mathsf{IND}\mbox{-}\mathsf{CPA}^D$. Subsequently, Li, Micciancio, Schultz, and Sorrell (Crypto '22) proved the security of the CKKS scheme with a noise-flooding countermeasure, which adds Gaussian noise of sufficiently high variance before outputting the decrypted value. However, the variance required for provable security is very high, inducing a large loss in message precision. In this work, we ask whether there is an intermediate noise-flooding level, which may not be provably secure, but allows one to maintain the performance of the scheme while resisting known attacks. We analyze the security with respect to different adversarial models and various types of attacks. We investigate the effectiveness of lattice reduction attacks, guessing attacks and hybrid attacks under noise flooding with variance $\rho^2_{\mathsf{circ}}$ (the variance of the noise already present in the ciphertext as estimated by an average-case analysis), $100\cdot \rho^2_{\mathsf{circ}}$, and $t\cdot \rho^2_{\mathsf{circ}}$, where $t$ is the number of decryption queries. For noise levels of $\rho^2_{\mathsf{circ}}$ and $100\cdot \rho^2_{\mathsf{circ}}$, we find that a full guessing attack is feasible for all parameter sets and circuit types. We find that a lattice reduction attack is the most effective attack for noise-flooding level $t\cdot \rho^2_{\mathsf{circ}}$, but it only induces a reduction of at most several bits in the security level. Due to the large dimension and modulus in typical FHE parameter sets, previous techniques even for estimating the concrete security of these attacks -- such as those in (Dachman-Soled, Ducas, Gong, Rossi, Crypto '20) -- become computationally infeasible, since they involve high dimensional and high precision matrix multiplication and inversion. We therefore develop new techniques that allow us to perform fast security estimation, even for FHE-size parameter sets.
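The countermeasure itself is simple to sketch (our illustration; all names and the numeric values are assumptions): before an approximate decryption result is released, Gaussian noise of a chosen variance is added, and the paper compares the three flooding levels named above.

```python
import numpy as np

# Sketch of the noise-flooding countermeasure under study (our illustration):
# before an approximate decryption result is released, Gaussian noise of a
# chosen variance is added. rho2_circ denotes the average-case estimate of
# the noise variance already in the ciphertext; the paper compares flooding
# at rho2_circ, 100*rho2_circ, and t*rho2_circ for t decryption queries.
def flooded_decrypt(raw: np.ndarray, rho2_circ: float, t: int, level: str = "t"):
    variance = {"circ": rho2_circ, "100circ": 100.0 * rho2_circ, "t": t * rho2_circ}[level]
    return raw + np.random.normal(0.0, np.sqrt(variance), size=raw.shape)

print(flooded_decrypt(np.array([3.14159, 2.71828]), rho2_circ=1e-8, t=1000))
```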
Last updated:  2024-03-11
Plan your defense: A comparative analysis of leakage detection methods on RISC-V cores
Konstantina Miteloudi, Asmita Adhikary, Niels van Drueten, Lejla Batina, and Ileana Buhan
Hardening microprocessors against side-channel attacks is a critical aspect of ensuring their security. A key step in this process is identifying and mitigating “leaky” hardware modules, which inadvertently leak information during the execution of cryptographic algorithms. In this paper, we explore how two leakage detection methods, the Side-channel Vulnerability Factor (SVF) and the Test Vector Leakage Assessment (TVLA), contribute to the hardening of microprocessors. We conduct experiments on two RISC-V cores, SHAKTI and Ibex, using two cryptographic algorithms, SHA-3 and AES. Our findings suggest that SVF and TVLA can provide valuable insights into identifying leaky modules. However, the effectiveness of these methods can vary depending on the specific core and cryptographic algorithm in use. We conclude that the choice of leakage detection method should be based not only on computational cost but also on the specific requirements of the system and the nature of the potential threats. Our research contributes to developing more secure microprocessors that are robust against side-channel attacks.
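Of the two methods, TVLA is compact enough to sketch directly; the following is our minimal fixed-vs-random variant using Welch's t-statistic, with the conventional |t| > 4.5 threshold. SVF, which correlates execution-phase similarity with trace similarity, is not shown.

```python
import numpy as np

# Minimal fixed-vs-random TVLA sketch (our illustration): Welch's t-statistic
# at every sample point of two trace sets, flagging points beyond the usual
# |t| > 4.5 threshold. SVF instead correlates execution-phase similarity with
# trace similarity and is not shown here.
def tvla_leaky_points(fixed: np.ndarray, rand: np.ndarray, thresh: float = 4.5):
    m1, m2 = fixed.mean(axis=0), rand.mean(axis=0)
    v1, v2 = fixed.var(axis=0, ddof=1), rand.var(axis=0, ddof=1)
    t = (m1 - m2) / np.sqrt(v1 / len(fixed) + v2 / len(rand))
    return np.flatnonzero(np.abs(t) > thresh)   # indices of leaking sample points
```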
Last updated:  2024-03-11
A Class of Weightwise Almost Perfectly Balanced Boolean Functions with High Weightwise Nonlinearity
Deepak Kumar Dalai and Krishna Mallick
A Boolean function with good cryptographic properties over a set of vectors with constant Hamming weight is significant for stream ciphers like FLIP [MJSC16]. This paper presents a construction of weightwise almost perfectly balanced (WAPB) Boolean functions obtained by perturbing the support vectors of a highly nonlinear function in the construction presented in [DM]. As a result, the nonlinearity and weightwise nonlinearities of the modified functions improve substantially.
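The property at stake can be checked directly for small $n$. The sketch below is our illustration, not the paper's construction: for each Hamming weight $k$ it tabulates how many weight-$k$ inputs a function maps to 1 out of the $\binom{n}{k}$ total; a WAPB function keeps that count within one of half the total for every $k$.

```python
from itertools import combinations

# Direct check of weightwise balancedness for small n (our illustration, not
# the paper's construction): for every Hamming weight k, count how many
# weight-k inputs f maps to 1 out of C(n, k). A WAPB function keeps this
# count within one of half the total for every k.
def weightwise_profile(f, n):
    profile = {}
    for k in range(n + 1):
        ones = total = 0
        for support in combinations(range(n), k):
            x = [1 if i in support else 0 for i in range(n)]
            ones += f(x)
            total += 1
        profile[k] = (ones, total)
    return profile

print(weightwise_profile(lambda x: x[0] ^ (x[1] & x[2]), 4))
```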
Last updated:  2024-03-11
LLRing: Logarithmic Linkable Ring Signatures with Transparent Setup
Xiangyu Hui and Sid Chi-Kin Chau
Linkable ring signatures are an important cryptographic primitive for anonymized applications, such as e-voting, e-cash and confidential transactions. To eliminate the backdoor risk and overhead of a trusted setup, transparent setup in the discrete logarithm or pairing settings has received considerable attention in practice. Recent advances have improved the proof sizes and verification efficiency of linkable ring signatures with transparent setup to achieve logarithmic bounds. Omniring (CCS `19) and RingCT 3.0 (FC `20) proposed linkable ring signatures in the discrete logarithm setting with logarithmic proof sizes with respect to the ring size, whereas DualDory (ESORICS `22) achieves logarithmic verifiability in the pairing setting. We make three novel contributions in this paper to improve the efficiency and soundness of logarithmic linkable ring signatures: (1) We report an attack on DualDory that breaks its linkability. (2) To eliminate such attacks, we present a new linkable ring signature scheme in the pairing setting with logarithmic verifiability. (3) We improve the verification efficiency of linkable ring signatures in the discrete logarithm setting via a technique that reduces the number of group exponentiations in Omniring's verification by 50%. Furthermore, our technique is applicable to general inner-product relation proofs, which might be of independent interest. Finally, we empirically evaluate our schemes and compare them with existing linkable ring signatures in a concrete implementation.
Last updated:  2024-03-11
A Guide to the Design of Digital Signatures based on Cryptographic Group Actions
Giacomo Borin, Edoardo Persichetti, Paolo Santini, Federico Pintore, and Krijn Reijnders
Cryptography based on group actions has been studied since 1990. In recent years, however, the area has seen a revival, partially due to its role in post-quantum cryptography. For instance, several works have proposed signature schemes based on group actions, as well as a variety of techniques aimed at improving their performance and efficiency. Most of these techniques can be explained as transforming one Sigma protocol into another, while essentially preserving security. In this work, we present a unified taxonomy of such techniques. In particular, we describe all techniques in a single fashion, show how they impact the performance of the resulting protocols and analyse in detail how different techniques can be combined for optimal performance. Furthermore, to provide a tangible perspective, we apply the results of our analysis to the (group action-based) candidates in the current NIST call for digital signatures. This gives a full overview of the state of the art of signatures based on group actions, as well as a flexible tool which is easy to adapt and employ in the design of future schemes.
Last updated:  2024-03-11
Revisiting the Multiple-of Property for SKINNY: The Exact Computation of the Number of Right Pairs
Hanbeom Shin, Insung Kim, Sunyeop Kim, Seonggyeom Kim, Deukjo Hong, Jaechul Sung, and Seokhie Hong
At EUROCRYPT 2017, Grassi et al. proposed the multiple-of-8 property for 5-round AES, where the number $n$ of right pairs is a multiple of 8. At ToSC 2019, Boura et al. generalized the multiple-of property to a general SPN block cipher and applied it to the block cipher SKINNY. In this paper, we show that for SKINNY, $n$ is not only a multiple but also a fixed value. Unlike the previous proof of the generalized multiple-of property, which uses equivalence classes, we investigate the propagation of the set to compute the exact number $n$. We experimentally verified that the presented property holds. We extend this property by one more round, exploiting the lack of a whitening key in SKINNY, and use it to construct 6-round distinguishers on SKINNY-64 and SKINNY-128. The success probability of both distinguishers is almost 1, and the total complexities are $2^{16}$ and $2^{32}$, respectively. We verified that this property only holds for SKINNY, not for AES and MIDORI, and provide the conditions under which it exists for AES-like ciphers.
Last updated:  2024-03-11
Beware Your Standard Cells! On Their Role in Static Power Side-Channel Attacks
Jitendra Bhandari, Likhitha Mankali, Mohammed Nabeel, Ozgur Sinanoglu, Ramesh Karri, and Johann Knechtel
Static or leakage power, which is especially prominent in advanced technology nodes, enables so-called static power side-channel attacks (S-PSCA). While countermeasures exist, they often incur considerable overheads. Besides, hardware Trojans represent another threat. Although the interplay between static power, down-scaling of technology nodes, and the vulnerability to S-PSCA is already established, an important detail was not covered yet: the role of the components at the heart of this sensitive interplay, the standard cells. Here, we study this intricate relationship for two commercial 28nm and 65nm technologies, using a commercial-grade IC design setup, and under realistic power, performance, and area (PPA) objectives. Specifically, we study how threshold-voltage (VT) tuning of standard cells impacts the resilience of representative AES and PRESENT cipher hardware, including versions with established countermeasures. Our proposed CAD framework enables a security-vs-PPA-aware design-space exploration. Contrary to the belief that high-performance designs are generally more vulnerable to S-PSCA, we find that timing constraints and the distribution of different VT cells are more pivotal factors. Furthermore, we discover that attackers can deploy highly effective and stealthy S-PSCA-based Trojans, all without any gate overheads or any timing violations.
Last updated:  2024-03-11
Diving Deep into the Preimage Security of AES-like Hashing
Shiyao Chen, Jian Guo, Eik List, Danping Shi, and Tianyu Zhang
Since the seminal works by Sasaki and Aoki, Meet-in-the-Middle (MITM) attacks have been recognized as an effective technique for preimage and collision attacks on hash functions. At Eurocrypt 2021, Bao et al. automated MITM attacks on AES-like hashing and improved upon the best manual results. The attack framework has been refined by subsequent works, yet it remains far from complete. This paper presents three key contributions dedicated to further generalizing the idea of MITM and refining the automatic model for AES-like hashing. (1) We introduce S-box linearization to MITM pseudo-preimage attacks on AES-like hashing. The technique pairs well with superposition states, preserving information after the S-box at an affordable cost. (2) We propose distributed initial structures, an extension of the original concept of initial states, which selects initial degrees of freedom in a more versatile manner to enlarge the search space. (3) We exploit the structural similarities between encryption and key schedule in constructions (e.g., Whirlpool and Streebog) to model propagations more accurately and avoid repeated costs. Armed with these innovative techniques, we further empower the MITM framework and improve the attack results on AES-like designs for preimage and collision. We obtain the first preimage attacks on 10-round AES-192, 10-round Rijndael-192/256, and 7.75-round Whirlpool, reduced time and/or memory complexities for preimage attacks on 5- and 6-round Whirlpool and 7.5- and 8.5-round Streebog, as well as improved collision attacks on 6- and 6.5-round Whirlpool.
Last updated:  2024-03-10
Gap MCSP is not (Levin) NP-complete in Obfustopia
Noam Mazor and Rafael Pass
We demonstrate that under believable cryptographic hardness assumptions, Gap versions of standard meta-complexity problems, such as the Minimum Circuit Size problem (MCSP) and the Minimum Time-Bounded Kolmogorov Complexity problem (MKTP) are not NP-complete w.r.t. Levin (i.e., witness-preserving many-to-one) reductions. In more detail: - Assuming the existence of indistinguishability obfuscation, and subexponentially-secure one-way functions, an appropriate Gap version of MCSP is not NP-complete under randomized Levin-reductions. - Assuming the existence of subexponentially-secure indistinguishability obfuscation, subexponentially-secure one-way functions and injective PRGs, an appropriate Gap version of MKTP is not NP-complete under randomized Levin-reductions.
Last updated:  2024-03-10
New Upper Bounds for Evolving Secret Sharing via Infinite Branching Programs
Bar Alon, Amos Beimel, Tamar Ben David, Eran Omri, and Anat Paskin-Cherniavsky
Evolving secret-sharing schemes, defined by Komargodski, Naor, and Yogev [TCC 2016B, IEEE Trans. on Info. Theory 2018], are secret-sharing schemes in which there is no a priori bound on the number of parties. In such schemes, parties arrive one by one; when a party arrives, the dealer gives it a share and cannot update this share in later stages. The requirement is that some predefined sets (called authorized sets) should be able to reconstruct the secret, while other sets should learn no information on the secret. The collection of authorized sets that can reconstruct the secret is called an evolving access structure. The challenge of the dealer is to be able to give short shares to the current parties without knowing how many parties will arrive in the future. The requirement that the dealer cannot update shares is designed to prevent expensive updates. Komargodski et al. constructed an evolving secret-sharing scheme for every monotone evolving access structure; the share size of the $t^{\text{th}}$ party in this scheme is $2^{t-1}$. Recently, Mazor [ITC 2023] proved that evolving secret-sharing schemes require exponentially-long shares for some evolving access structure, namely shares of size $2^{t-o(t)}$. In light of these results, our goal is to construct evolving secret-sharing schemes with non-trivial share size for wide classes of evolving access structures; e.g., schemes with share size $2^{ct}$ for $c<1$ or even polynomial size. We provide several results achieving this goal: - We define layered infinite branching programs representing evolving access structures, show how to transform them into generalized infinite decision trees, and show how to construct evolving secret-sharing schemes for generalized infinite decision trees. Combining these steps, we get a secret-sharing scheme realizing the evolving access structure. As an application of this framework, we construct an evolving secret-sharing scheme with non-trivial share size for access structures that can be represented by layered infinite branching programs with width at layer $t$ of at most $2^{0.15t}$. If the width is polynomial, then we get an evolving secret-sharing scheme with quasi-polynomial share size. - We construct efficient evolving secret-sharing schemes for dynamic-threshold access structures with high dynamic threshold and for infinite $2$-slice and $3$-slice access structures. The share size of the $t^{\text{th}}$ party in these schemes is $2^{\tilde{O}((\log t)^{1/\sqrt{2}+\epsilon})}$ for any constant $\epsilon>0$, which is comparable to the best-known share size of $2^{\tilde{O}((\log t)^{1/2})}$ for finite $2$-slice and $3$-slice access structures. - We prove lower bounds on the share size of evolving secret-sharing schemes for infinite $k$-hypergraph access structures and for infinite directed st-connectivity access structures. As a by-product of the lower bounds, we provide the first non-trivial lower bound for finite directed st-connectivity access structures for general secret-sharing schemes.
Last updated:  2024-03-10
Fine-Grained Cryptanalysis: Tight Conditional Bounds for Dense k-SUM and k-XOR
Itai Dinur, Nathan Keller, and Ohad Klein
An average-case variant of the $k$-SUM conjecture asserts that finding $k$ numbers that sum to 0 in a list of $r$ random numbers, each of the order $r^k$, cannot be done in much less than $r^{\lceil k/2 \rceil}$ time. On the other hand, in the dense regime of parameters, where the list contains more numbers and many solutions exist, the complexity of finding one of them can be significantly improved by Wagner's $k$-tree algorithm. Such algorithms for $k$-SUM in the dense regime have many applications, notably in cryptanalysis. In this paper, assuming the average-case $k$-SUM conjecture, we prove that known algorithms are essentially optimal for $k= 3,4,5$. For $k>5$, we prove the optimality of the $k$-tree algorithm for a limited range of parameters. We also prove similar results for $k$-XOR, where the sum is replaced with exclusive or. Our results are obtained by a self-reduction that, given an instance of $k$-SUM which has a few solutions, produces from it many instances in the dense regime. We solve each of these instances using the dense $k$-SUM oracle, and hope that a solution to a dense instance also solves the original problem. We deal with potentially malicious oracles (that repeatedly output correlated useless solutions) by an obfuscation process that adds noise to the dense instances. Using discrete Fourier analysis, we show that the obfuscation eliminates correlations among the oracle's solutions, even though its inputs are highly correlated.
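For context, Wagner's $k$-tree algorithm, whose optimality in the dense regime the paper establishes for small $k$, is easy to sketch for $k=4$ in the XOR setting: merge two pairs of lists on the top half of the bits, then match the surviving partial sums on the rest. The following toy code is our illustration with arbitrary parameter choices.

```python
import os
from collections import defaultdict

# Wagner's k-tree algorithm for k = 4, XOR version (our illustration with
# arbitrary toy parameters): merge two pairs of lists on the top m/2 bits,
# then match the surviving partial sums on the remaining bits. Lists of
# roughly 2^(m/3) elements suffice, far below the 2^(m/2) birthday bound.
def merge_on_prefix(L1, L2, bits, m):
    buckets = defaultdict(list)
    for j, y in enumerate(L2):
        buckets[y >> (m - bits)].append(j)
    return [(i, j, x ^ L2[j]) for i, x in enumerate(L1)
            for j in buckets[x >> (m - bits)]]

def four_xor(lists, m):
    left = merge_on_prefix(lists[0], lists[1], m // 2, m)
    right = {s: (i, j) for i, j, s in merge_on_prefix(lists[2], lists[3], m // 2, m)}
    for i, j, s in left:
        if s in right:
            return (i, j) + right[s]       # indices with x1 ^ x2 ^ x3 ^ x4 = 0
    return None                            # possible: only ~1 solution expected

m, size = 24, 2**9                         # list size ~ 2^(m/3 + 1)
lists = [[int.from_bytes(os.urandom(4), "big") >> (32 - m) for _ in range(size)]
         for _ in range(4)]
print(four_xor(lists, m))
```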
Last updated:  2024-03-10
Key Agreement with Physical Unclonable Functions and Biometric Identifiers
Onur Gunlu
This thesis addresses security and privacy problems for digital devices and biometrics, where a secret key is generated for authentication, identification, or secure computations. A physical unclonable function (PUF) is a promising solution for local security in digital devices. A low-complexity transform-coding algorithm is developed to make the information-theoretic analysis tractable and motivate a noisy (hidden) PUF source model. The optimal trade-offs between the secret-key, privacy-leakage, and storage rates for multiple measurements of hidden PUFs are characterized. The first optimal and low-complexity code constructions are proposed. Polar codes are designed to achieve the best known rate tuples. The gains from cost-constrained controllable PUF measurements are illustrated to motivate extensions.
Last updated:  2024-03-10
A Modular Treatment of Blind Signatures from Identification Schemes
Eduard Hauck, Eike Kiltz, and Julian Loss
We propose a modular security treatment of blind signatures derived from linear identification schemes in the random oracle model. To this end, we present a general framework that captures several well-known schemes from the literature and allows us to prove their security. Our modular security reduction introduces a new security notion for identification schemes called One-More Man-in-the-Middle security, which we show to be equivalent to the classical One-More-Unforgeability notion for blind signatures. We also propose a generalized version of the Forking Lemma due to Bellare and Neven (CCS 2006) and show how it can be used to greatly improve the understandability of the classical security proofs for blind signature schemes by Pointcheval and Stern (Journal of Cryptology 2000).
Last updated:  2024-03-09
Securing IoT Devices with Fast and Energy Efficient Implementation of PRIDE and PRESENT Ciphers
Vijay Dahiphale, Hrishikesh Raut, Gaurav Bansod, and Devendra Dahiphale
The rise of low-power, cost-efficient internet-connected devices has led to a need for lightweight cryptography. The lightweight block cipher PRIDE, designed by Martin R. Albrecht et al., is one of the most efficient ciphers designed for IoT-constrained environments. It is useful for connected devices, requires fewer resources to implement, and has high performance. PRIDE is a software-oriented lightweight cipher optimized for microcontrollers. This paper presents an FPGA implementation of the PRIDE cipher, concentrating on throughput, energy, and power consumption metrics. The paper also presents a novel and simpler diagrammatic view of the Matrix Layer implementation of the PRIDE cipher. We also implemented the PRESENT cipher using the same metrics. We analyzed different design metrics on Field Programmable Gate Arrays (FPGAs) and compared the metrics of the PRIDE implementation with those of the well-known cipher PRESENT. This gives us insight into the efficiency and reliability of PRIDE in IoT-constrained environments. We also propose different architectures of the PRIDE cipher for 16-bit and 32-bit datapaths.
Last updated:  2024-03-09
A Refined Hardness Estimation of LWE in Two-step Mode
Wenwen Xia, Leizhang Wang, Geng Wang, Dawu Gu, and Baocang Wang
Recently, researchers have proposed many LWE estimators, such as the lattice-estimator (Albrecht et al., Asiacrypt 2017) and the leaky-LWE-Estimator (Dachman-Soled et al., Crypto 2020); the latter has already been used to estimate the security level of Kyber and Dilithium using only BKZ. However, we prove in this paper that solving LWE by combining a lattice reduction step (by LLL or BKZ) and a target vector searching step (by enumeration or sieving), which we call a Two-step mode, is more efficient than using only BKZ. Moreover, we give a refined LWE estimator in Two-step mode by analyzing the relationship between the probability distribution of the target vector and the solving success rate in a Two-step mode LWE solving algorithm. The latest Two-step estimator for LWE, the “primal-bdd” mode in the lattice-estimator, neither takes into account some up-to-date results nor provides a thorough theoretical analysis. Under the same gate-count model, our estimation for the NIST PQC standards drops by 2.1∼3.4 bits (2.2∼4.6 bits when considering a more flexible blocksize and jump strategy) compared with the leaky-LWE-Estimator. Furthermore, we also give a conservative estimation for LWE from the Two-step solving algorithm. Compared with the Core-SVP model, which is used in previous conservative estimations, our estimation relies on weaker assumptions and outputs higher evaluation results than the Core-SVP model. For the NIST PQC standards, our conservative estimation is 4.17∼8.11 bits higher than the Core-SVP estimation. Hence our estimator can give a closer estimation for both the upper bound and the lower bound of LWE hardness.
Last updated:  2024-03-09
An improved exact CRR basis conversion algorithm for FHE without floating-point arithmetic
Hongyuan Qu and Guangwu Xu
Fully homomorphic encryption (FHE) has attracted much attention recently. Chinese remainder representation (CRR), or RNS representation, is one of the core technologies of FHE. CRR basis conversion is a key step of the KeySwitching procedure. Bajard et al. proposed a fast method for CRR basis conversion, but it leaves a small error unremoved. Halevi et al. suggested a method using floating-point arithmetic to avoid errors, but floating-point arithmetic has its own issues such as low efficiency and complex chip design. In this work, we establish a more concise and efficient CRR basis conversion method by observing that each of the ciphertext moduli selected by the CRR variant of the CKKS scheme is very close to a power of 2. Our conversion algorithm eliminates errors and involves only integer arithmetic and bit operations. The proof of correctness of our algorithm is given. Extensive experiments are conducted and comparisons between the method of Halevi et al. and ours are obtained, which show that our method has the same accuracy and slightly better efficiency. Our method is also applicable to the CRR variants of the BGV and BFV schemes, and can be used to simplify chip design.
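To make the step in question concrete: fast CRR basis conversion maps the residues of $x$ modulo $\{q_i\}$ to a residue modulo a new modulus $p$ without reconstructing $x$, at the price of an additive overshoot $v\cdot Q$ (with $Q=\prod q_i$ and small $v$) that must then be eliminated exactly. A minimal sketch, with toy moduli of our own choosing:

```python
from math import prod

# Fast CRR basis conversion, the step the paper refines (our toy moduli):
# given the residues of x modulo pairwise-coprime {q_i}, compute a residue
# modulo a new modulus p without reconstructing x. The fast formula returns
# (x + v*Q) mod p for Q = prod(q_i) and a small overshoot 0 <= v < len(qs);
# removing v exactly, without floating point, is the paper's contribution.
def fast_base_convert(residues, qs, p):
    Q = prod(qs)
    acc = 0
    for x_i, q_i in zip(residues, qs):
        Q_i = Q // q_i
        y_i = (x_i * pow(Q_i, -1, q_i)) % q_i   # local CRT coefficient
        acc += y_i * (Q_i % p)
    return acc % p

qs, p, x = [8191, 131071], 97, 123456789        # toy Mersenne-prime moduli
# The two printed values agree up to a multiple of Q mod p -- the overshoot.
print(fast_base_convert([x % q for q in qs], qs, p), x % p)
```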
Last updated:  2024-03-09
Under What Conditions Is Encrypted Key Exchange Actually Secure?
Jake Januzelli, Lawrence Roy, and Jiayu Xu
A Password-Authenticated Key Exchange (PAKE) protocol allows two parties to agree upon a cryptographic key, in the setting where the only secret shared in advance is a low-entropy password. The standard security notion for PAKE is in the Universal Composability (UC) framework. In recent years there has been a large number of works analyzing the UC-security of Encrypted Key Exchange (EKE), the very first PAKE protocol, and its One-encryption variant (OEKE), both of which compile an unauthenticated Key Agreement (KA) protocol into a PAKE. In this work, we present a comprehensive and thorough study of the UC-security of both EKE and OEKE in the most general setting and using the most efficient building blocks: 1. We show that among the seven existing results on the UC-security of (O)EKE, six are flawed; 2. We show that for (O)EKE to be UC-secure, the underlying KA protocol needs to satisfy the properties of strong pseudorandomness, pseudorandom non-malleability, and collision resistance, all of which are missing in existing works; 3. We give UC-security proofs for EKE and OEKE using the Programmable-Once Random Function (POPF), which is the most efficient instantiation to date and is around 4 times faster than the standard instantiation using an Ideal Cipher (IC). Our results in particular allow for PAKE constructions from post-quantum KA protocols such as Kyber. We also give a security analysis of POPF in a new composition framework called almost UC, which we believe is interesting in its own right.
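The compiler at the heart of EKE is simple to sketch: each flow of an unauthenticated KA protocol is encrypted under a key derived from the password. The toy code below is our illustration only, with Diffie-Hellman over deliberately weak parameters and XOR-with-a-hash standing in for the Ideal Cipher or POPF; it shows the message flow, not a secure instantiation.

```python
import hashlib, secrets

# Toy EKE message flow (our illustration): each Diffie-Hellman flow is
# "encrypted" by XOR with a hash of the password, a crude stand-in for the
# Ideal Cipher / POPF instantiations the paper analyzes. The group below is
# deliberately weak and the masking is NOT a secure instantiation.
P = 2**127 - 1                                  # toy prime modulus
G = 3

def mask(pw: bytes, value: int) -> int:         # XOR masking is its own inverse
    pad = int.from_bytes(hashlib.sha256(pw).digest()[:16], "big")
    return value ^ pad

def eke_flow(pw: bytes) -> str:
    x, y = secrets.randbelow(P), secrets.randbelow(P)
    msg_a = mask(pw, pow(G, x, P))              # A -> B: E_pw(g^x)
    msg_b = mask(pw, pow(G, y, P))              # B -> A: E_pw(g^y)
    k_a = pow(mask(pw, msg_b), x, P)            # unmask, then finish DH
    k_b = pow(mask(pw, msg_a), y, P)
    assert k_a == k_b
    return hashlib.sha256(k_a.to_bytes(16, "big")).hexdigest()

print(eke_flow(b"correct horse"))
```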
Last updated:  2024-03-08
Improved Circuit Synthesis with Amortized Bootstrapping for FHEW-like Schemes
Johannes Mono, Kamil Kluczniak, and Tim Güneysu
In recent years, the research community has made great progress in improving techniques for privacy-preserving computation, such as fully homomorphic encryption (FHE). Despite the progress, there remain open challenges, mainly in performance and usability, to further advance the adoption of these technologies. This work provides multiple contributions that improve the current state-of-the-art in both areas. More specifically, we significantly simplify the bootstrapping by Carpov, Izabachène, and Mollimard for Boolean-based FHE schemes such as FHEW or TFHE, making the concept usable in practice. Based on our simplifications, we implement an easy-to-use interface for amortized bootstrapping in the open-source library FHE-Deck, derive new parameter sets for multi-bit encryptions with state-of-the-art security, and build a toolset that translates high-level code to multi-bit operations on encrypted data using circuit synthesis. We propose the first non-trivial FHE-specific optimizations in synthesizing privacy-preserving circuits: look-up table (LUT) grouping and adder substitution. Using LUT grouping, we reduce the number of bootstrapping operations by almost 35% on average, while for adder substitution, we reduce the number of required bootstrappings by up to 80% for certain use cases. Overall, the execution time is up to 3.8× faster using our optimizations compared to previous state-of-the-art circuit synthesis.
Last updated:  2024-03-08
The supersingular Endomorphism Ring and One Endomorphism problems are equivalent
Aurel Page and Benjamin Wesolowski
The supersingular Endomorphism Ring problem is the following: given a supersingular elliptic curve, compute all of its endomorphisms. The presumed hardness of this problem is foundational for isogeny-based cryptography. The One Endomorphism problem only asks to find a single non-scalar endomorphism. We prove that these two problems are equivalent, under probabilistic polynomial time reductions. We prove a number of consequences. First, assuming the hardness of the endomorphism ring problem, the Charles–Goren–Lauter hash function is collision resistant, and the SQIsign identification protocol is sound for uniformly random keys. Second, the endomorphism ring problem is equivalent to the problem of computing arbitrary isogenies between supersingular elliptic curves, a result previously known only for isogenies of smooth degree. Third, there exists an unconditional probabilistic algorithm to solve the endomorphism ring problem in time $\tilde O(p^{1/2})$, a result that previously required assuming the generalized Riemann hypothesis. To prove our main result, we introduce a flexible framework for the study of isogeny graphs with additional information. We prove a general and easy-to-use rapid mixing theorem.
Last updated:  2024-03-08
Circuit Privacy for FHEW/TFHE-Style Fully Homomorphic Encryption in Practice
Kamil Kluczniak
A fully homomorphic encryption (FHE) scheme allows a client to encrypt and delegate its data to a server that performs computation on the encrypted data that the client can then decrypt. While FHE gives confidentiality to clients' data, it does not protect the server's input and computation. Nevertheless, FHE schemes are still helpful in building delegation protocols that reduce communication complexity, as the size of ciphertexts is independent of the size of the computation performed on them. We can further extend FHE by a property called circuit privacy, which guarantees that the result of computing on ciphertexts reveals no information on the computed function and the inputs of the server. Thereby, circuit-private FHE gives rise to round-optimal and communication-efficient secure two-party computation protocols. Unfortunately, despite significant efforts and much work put into the efficiency and practical implementations of FHE schemes, very little has been done to provide useful and practical FHE supporting circuit privacy. In this work, we address this gap and design the first randomized bootstrapping algorithm whose single invocation sanitizes a ciphertext and, consequently, serves as a tool to provide circuit privacy. We give an extensive analysis, propose parameters, and provide a C++ implementation of our scheme. Our bootstrapping can sanitize a ciphertext to achieve circuit privacy at an 80-bit statistical security level in between 0.9 and 1.3 seconds, depending on which Gaussian sampling algorithm is used and whether the parameter set targets a fast Fourier or a number theoretic transform-based implementation. In addition, we can perform non-sanitized bootstrapping in around 0.27 or 0.14 seconds. Crucially, we do not need to increase the parameters to perform computation before or after sanitization takes place. For comparison's sake, we revisit the Ducas-Stehlé washing machine method. In particular, we give a tight analysis, estimate efficiency, review old parameters, and provide new ones.
Last updated:  2024-03-08
An optimization of the addition gate count in Plonkish circuits
Steve Thakur
We slightly generalize Plonk's ([GWC19]) permutation argument by replacing permutations with (possibly non-injective) self-maps of an interval. We then use this succinct argument to obtain a protocol for weighted sums on committed vectors, which, in turn, allows us to eliminate the intermediate gates arising from high fan-in additions in Plonkish circuits. We use the KZG10 polynomial commitment scheme, which allows for a universal updateable CRS linear in the circuit size. In keeping with our recent work ([Th23]), we have used the monomial basis since it is compatible with any sufficiently large prime scalar field. In settings where the scalar field has a suitable smooth-order subgroup, the techniques can be efficiently ported to a Lagrange basis. The proof size is constant, as is the verification time, which is dominated by a single pairing check. For committed vectors of length $n$, the proof generation is $O(n\cdot \log(n))$ and is dominated by the $\mathbb{G}_1$-MSMs and a single sum of a few polynomial products over the prime scalar field via multimodular FFTs.
Last updated:  2024-03-08
Entropy Suffices for Guessing Most Keys
Timo Glaser, Alexander May, and Julian Nowakowski
Historically, most cryptosystems chose their keys uniformly at random. This is in contrast to modern (lattice-based) schemes, which typically sample their keys from more complex distributions $\mathcal{D}$, such as the discrete Gaussian or centered binomial distribution. It is well-known that any key drawn from the uniform distribution $\mathcal{U}$ can be guessed using at most $2^{\operatorname{H}(\mathcal{U})}$ key guesses, where $\operatorname{H}(\mathcal{U})$ denotes the entropy of the uniform distribution. However, for keys drawn from general distributions $\mathcal{D}$ only a lower bound of $\Omega(2^{\operatorname{H}(\mathcal{D})})$ key guesses is known. In fact, Massey (1994) even ruled out that the number of key guesses can be upper bounded by a function of the entropy alone. When analyzing the complexity of so-called hybrid lattice-attacks (which combine lattice reduction with key guessing) one therefore usually conservatively underestimates the complexity of key guessing by $2^{\operatorname{H}(\mathcal{D})}$. However, a tight complexity analysis is missing, and due to Massey's result considered impossible. In this work, we bypass Massey's impossibility result by focusing on the typical cryptographic setting, where keys are drawn from $n$-fold product distributions $\mathcal{D} = \chi^n$. It is well known that the optimal key guessing algorithm enumerates keys in $\chi^n$ in descending order of probability. In order to provide a refined analysis, we allow aborting the enumeration after a certain number of key guesses. As our main result, we prove that for any discrete probability distribution $\chi$, the key guessing algorithm that we abort after $2^{\operatorname{H}(\chi)n}$ keys has asymptotic success probability $\frac{1}{2}$, taken over the random key choice. The aborted algorithm allows for a quantum version with success probability $\frac{1}{2}$ within $2^{\operatorname{H}(\chi)n/2}$ key guesses. In other words, for any distribution $\chi$, we achieve a Grover-type square root speedup. Furthermore, we show that for the distributions used in Kyber and Falcon, the aborted algorithm outperforms the non-aborted algorithm by an exponential factor in the runtime. Hence, for a typical multi-key scenario, where a (large-scale) adversary wants to attack as many keys as possible with as few resources as possible, our results show that it greatly pays off to tackle only the likely keys.
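The claim can be probed empirically for toy parameters. The sketch below is our own experiment with an arbitrary per-coordinate distribution: it enumerates all keys of $\chi^n$ in descending probability, aborts after $2^{\operatorname{H}(\chi)n}$ guesses, and reports the probability mass covered. The $\frac{1}{2}$ in the paper is asymptotic and convergence in $n$ is slow, so these small-$n$ values still sit well above it.

```python
from itertools import product
from math import log2

# Empirical probe of the abort claim on toy parameters (our experiment):
# enumerate all keys of chi^n in descending probability, stop after
# 2^(H(chi) * n) guesses, and report the probability mass covered.
def covered_mass(chi, n):
    entropy = -sum(p * log2(p) for p in chi)
    budget = round(2 ** (entropy * n))
    key_probs = []
    for key in product(range(len(chi)), repeat=n):
        p = 1.0
        for symbol in key:
            p *= chi[symbol]
        key_probs.append(p)
    key_probs.sort(reverse=True)               # optimal guessing order
    return sum(key_probs[:budget])

chi = [0.5, 0.25, 0.25]                        # a toy per-coordinate distribution
for n in (4, 8, 12):
    print(n, round(covered_mass(chi, n), 3))
```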
Last updated:  2024-03-07
PRAC: Round-Efficient 3-Party MPC for Dynamic Data Structures
Sajin Sasy, Adithya Vadapalli, and Ian Goldberg
We present Private Random Access Computations (PRAC), a 3-party Secure Multi-Party Computation (MPC) framework to support random-access data structure algorithms for MPC with efficient communication in terms of rounds and bandwidth. PRAC extends the state-of-the-art DORAM Duoram with a new implementation, more flexibility in how the DORAM memory is shared, and support for Incremental and Wide DPFs. We then use these DPF extensions to achieve algorithmic improvements in three novel oblivious data structure protocols for MPC. PRAC exploits the observation that a secure protocol for an algorithm can gain efficiency if the protocol explicitly reveals information leaked by the algorithm inherently. We first present an optimized binary search protocol that reduces the bandwidth from $O(\lg^2 n)$ to $O(\lg n)$ for obliviously searching over $n$ items. We then present an oblivious heap protocol with rounds reduced from $O(\lg n)$ to $O(\lg \lg n)$ for insertions, and bandwidth reduced from $O(\lg^2 n)$ to $O(\lg n)$ for extractions. Finally, we also present the first oblivious AVL tree protocol for MPC where no party learns the data or the structure of the AVL tree, and can support arbitrary insertions and deletions with $O(\lg n)$ rounds and bandwidth. We experimentally evaluate our protocols with realistic network settings for a wide range of memory sizes to demonstrate their efficiency. For instance, we observe our binary search protocol provides $>27\times$ and $>3\times$ improvements in wall-clock time and bandwidth respectively over other approaches for a memory with $2^{26}$ items; for the same setting our heap's extract-min protocol achieves $>31\times$ speedup in wall-clock time and $>13\times$ reduction in bandwidth.
Last updated:  2024-03-07
Column-wise Garbling, and How to Go Beyond the Linear Model
Lei Fan, Zhenghao Lu, and Hong-Sheng Zhou
In the linear garbling model introduced by Zahur, Rosulek, and Evans (Eurocrypt 2015), garbling an AND gate requires at least \(2\kappa\) bits of ciphertext, where $\kappa$ is the security parameter. Though subsequent works, including those by Rosulek and Roy (Crypto 2021) and Acharya et al. (ACNS 2023), have advanced beyond these linear constraints, a more comprehensive design framework is yet to be developed. Our work offers a novel, unified, and arguably simple perspective on garbled circuits. We introduce a hierarchy of models that captures all existing practical garbling schemes. By determining the lower bounds for these models, we elucidate the capabilities and limits of each. Notably, our findings suggest that simply integrating a nonlinear processing function or probabilistic considerations does not break the \(2\kappa\) lower bound by Zahur, Rosulek, and Evans. However, by incorporating column correlations, the bound can be reduced to \((1+1/w)\kappa\), where \(w\ge 1\). Additionally, we demonstrate that a straightforward extension of Rosulek and Roy's technique (Crypto 2021) does not yield improved results. We also present a methodology for crafting new models and for exploring further extensions of both the new and the existing models. Our new models set the course for future designs. We introduce three innovative garbling schemes based on a common principle called ``majority voting.'' The third construction performs on par with the state-of-the-art.
Last updated:  2024-03-07
Two-Round Stateless Deterministic Two-Party Schnorr Signatures From Pseudorandom Correlation Functions
Yashvanth Kondi, Claudio Orlandi, and Lawrence Roy
Schnorr signatures are a popular choice due to their simplicity, provable security, and linear structure that enables relatively easy threshold signing protocols. The deterministic variant of Schnorr (where the nonce is derived in a stateless manner using a PRF from the message and a long-term secret) is widely used in practice since it mitigates the threats of a faulty or poor randomness generator (which in Schnorr leads to catastrophic breaches of security). Unfortunately, threshold protocols for the deterministic variant of Schnorr have so far been quite inefficient, as they make non-black-box use of the PRF involved in the nonce generation. In this paper, we present the first two-party threshold protocol for Schnorr signatures where signing is stateless and deterministic and only makes black-box use of the underlying cryptographic algorithms. We present a protocol from general assumptions which achieves covert security, and a protocol that achieves full active security under standard factoring-like assumptions. Our protocols make crucial use of recent advances within the field of pseudorandom correlation functions (PCFs). As an additional benefit, only two rounds are needed to perform distributed signing in our protocol, connecting our work to a recent line of research on the trade-offs between round complexity and cryptographic assumptions for threshold Schnorr signatures.
Last updated:  2024-03-07
Quantum One-Wayness of the Single-Round Sponge with Invertible Permutations
Joseph Carolan and Alexander Poremba
Sponge hashing is a novel class of cryptographic hash algorithms which underlies the current international hash function standard SHA-3. In a nutshell, a sponge function takes as input a bit-stream of any length and processes it via a simple iterative procedure: it repeatedly feeds each block of the input into a so-called block function, and then produces a short digest which consists of a subset of the final output bits. While much is known about the post-quantum security of the sponge construction in the case when the block function is modeled as a random function or permutation, the case of invertible permutations, which more accurately models the construction underlying SHA-3, has so far remained a fundamental open problem. In this work, we make new progress towards overcoming this barrier and show several results. First, we prove the ``double-sided zero-search'' conjecture proposed by Unruh (ePrint 2021) and show that finding zero-pairs in a random $2n$-bit permutation requires at least $\Omega(2^{n/2})$ queries---and this is tight due to Grover's algorithm. At the core of our proof lies a novel ``symmetrization argument'' which uses insights from the theory of Young subgroups. Second, we consider more general variants of the double-sided search problem and show similar query lower bounds for them. As an application, we prove the quantum one-wayness of the single-round sponge with invertible permutations in the quantum random oracle model.
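The iterative procedure described above is compact enough to sketch directly. Below is a toy sponge, our illustration only, with an 8-bit rate and 8-bit capacity and a random byte-table as the public invertible permutation; SHA-3 instantiates the same absorb/squeeze skeleton with the 1600-bit Keccak permutation.

```python
import random

# Toy sponge (our illustration) with an 8-bit rate and an 8-bit capacity,
# using a random table as the public invertible permutation. SHA-3
# instantiates the same absorb/squeeze skeleton with the 1600-bit Keccak
# permutation and much larger rate/capacity.
RATE, CAP = 8, 8
rng = random.Random(0)
PERM = list(range(1 << (RATE + CAP)))
rng.shuffle(PERM)                                # a fixed random permutation

def toy_sponge(msg: bytes, outlen: int) -> bytes:
    state = 0
    for block in msg:                            # absorb: XOR block into the rate part
        state = PERM[state ^ (block << CAP)]
    out = []
    while len(out) < outlen:                     # squeeze: read the rate part
        out.append(state >> CAP)
        state = PERM[state]
    return bytes(out)

print(toy_sponge(b"hello", 4).hex())
```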
Last updated:  2024-03-07
Bent functions construction using extended Maiorana-McFarland’s class
Juan Carlos Ku-Cauich, Javier Diaz-Vargas, and Sara Mandujano-Velazquez
In a particular case, we consider the extended Maiorana-McFarland class to obtain bent functions that are balanced when restricted to vectors of even Hamming weight, i.e., with an equal number of pre-images for each element in the range. Additionally, we demonstrate that all bent functions are balanced when we restrict them to vectors of even Hamming weight or to vectors of odd Hamming weight. Given the necessary tools, we provide a simple algorithm to obtain new bent functions using the Maiorana-McFarland construction.
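For reference, the classic Maiorana-McFarland construction that the paper extends sets $f(x,y) = \langle x, \pi(y)\rangle \oplus g(y)$ for a permutation $\pi$ of $\mathbb{F}_2^m$ and an arbitrary Boolean $g$. A minimal sketch for $m=2$, with example choices of $\pi$ and $g$ that are ours:

```python
# Textbook Maiorana-McFarland bent function on 2m variables (our example
# choices of pi and g, for m = 2): f(x, y) = <x, pi(y)> XOR g(y), where pi
# is any permutation of F_2^m and g is any Boolean function on m variables.
m = 2
pi = {0b00: 0b01, 0b01: 0b10, 0b10: 0b11, 0b11: 0b00}   # a permutation of F_2^2

def g(y: int) -> int:
    return 0                                              # any choice of g works

def f(x: int, y: int) -> int:
    return (bin(x & pi[y]).count("1") + g(y)) & 1         # inner product plus g(y)

table = [f(x, y) for y in range(1 << m) for x in range(1 << m)]
print(table, sum(table))   # weight 2^(2m-1) - 2^(m-1) = 6, as expected for bent f
```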
Last updated:  2024-03-07
Asymptotically Optimal Message Dissemination with Applications to Blockchains
Chen-Da Liu-Zhang, Christian Matt, and Søren Eller Thomsen
Messages in large-scale networks such as blockchain systems are typically disseminated using flooding protocols, in which parties send the message to a random set of peers until it reaches all parties. Optimizing the communication complexity of such protocols and, in particular, the per-party communication complexity is of primary interest since nodes in a network are often subject to bandwidth constraints. Previous flooding protocols incur a per-party communication complexity of $\Omega(l\cdot \gamma^{-1} \cdot (\log(n) + \kappa))$ bits to disseminate an $l$-bit message among $n$ parties with security parameter $\kappa$ when it is guaranteed that a $\gamma$ fraction of the parties remain honest. In this work, we present the first flooding protocols with a per-party communication complexity of $O(l\cdot \gamma^{-1})$ bits. We further show that this is asymptotically optimal and that our protocols can be instantiated provably securely in the usual setting for proof-of-stake blockchains. To demonstrate that one of our new protocols is not only asymptotically optimal but also practical, we perform several probabilistic simulations to estimate the concrete complexity for given parameters. Our simulations show that our protocol significantly improves the per-party communication complexity over the state-of-the-art for practical parameters. Hence, for given bandwidth constraints, our results allow one to, e.g., increase the block size, improving the overall throughput of a blockchain.
Last updated:  2024-03-07
Quasi-Optimal Permutation Ranking and Applications to PERK
Slim Bettaieb, Alessandro Budroni, Marco Palumbi, and Décio Luiz Gazzoni Filho
A ranking function for permutations maps every permutation of length $n$ to a unique integer between $0$ and $n!-1$. For permutations of the sizes that are of interest in cryptographic applications, evaluating such a function requires multiple-precision arithmetic. This work introduces a quasi-optimal ranking technique that allows us to rank a permutation efficiently without needing a multiple-precision arithmetic library. We present experiments that show the computational advantage of our method compared to the standard optimal lexicographic permutation ranking. As an application of our result, we show how this technique improves the signature sizes and the efficiency of the PERK digital signature scheme.
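The baseline the paper improves on is the standard lexicographic (Lehmer-code) ranking, sketched below; the factorial weights are what force multiple-precision arithmetic at cryptographic permutation sizes (Python merely hides this behind its native big integers).

```python
from math import factorial

# Standard lexicographic (Lehmer-code) ranking, the baseline the paper
# improves on: rank(p) = sum_i d_i * (n-1-i)!, where d_i counts entries after
# position i that are smaller than p[i].
def rank(perm):
    n, r = len(perm), 0
    for i, v in enumerate(perm):
        d = sum(1 for u in perm[i + 1:] if u < v)   # quadratic scan, for clarity
        r += d * factorial(n - 1 - i)
    return r

def unrank(r, n):
    items, perm = list(range(n)), []
    for i in range(n - 1, -1, -1):
        d, r = divmod(r, factorial(i))
        perm.append(items.pop(d))
    return perm

p = [2, 0, 3, 1]
assert unrank(rank(p), 4) == p and rank(list(range(4))) == 0
print(rank(p))                                      # 13
```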
Last updated:  2024-03-07
Polytopes in the Fiat-Shamir with Aborts Paradigm
Henry Bambury, Hugo Beguinet, Thomas Ricosset, and Eric Sageloli
The Fiat-Shamir with Aborts paradigm (FSwA) uses rejection sampling to remove the dependency of the revealed response on the secret, for a given source distribution. Recent results revealed that, unlike the uniform distribution in the hypercube, both the continuous Gaussian and the uniform distribution within the hypersphere minimise the rejection rate and the size of the proof of knowledge. However, in practice both these distributions suffer from the complexity of their samplers. So far, those three distributions are the only available alternatives, but none of them offers the best of all worlds: competitive proof of knowledge size and rejection rate with a simple sampler. We introduce a new generic framework for FSwA using polytope-based rejection sampling to enable a wider variety of constructions. As a matter of fact, this framework is the first to generalise these results to integral distributions. To remedy the lack of alternatives, we also propose a new polytope construction whose uniform sampler approaches in simplicity that of the hypercube. At the same time, it provides a competitive proof of knowledge size compared to that obtained from the Gaussian distribution. We also share some experimental improvements of our construction to further reduce the proof size. Finally, we propose a signature based on the FSwA paradigm using both our framework and construction. We prove it to be competitive with Haetae in signature size and with Dilithium on sampler simplicity.
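The hypercube baseline that the paper's polytope framework generalizes fits in a few lines: sample the mask uniformly in $[-B,B]^n$, compute $z = y + c\cdot s$, and release $z$ only if it lands in the shrunken cube, which makes the accepted output independent of the secret. A minimal sketch with parameters of our own choosing:

```python
import secrets

# Hypercube rejection sampling, the FSwA baseline that the paper's polytope
# framework generalizes (our toy parameters): the mask y is uniform in
# [-B, B]^n, the response is z = y + c*s, and z is released only if it lies
# in the shrunken cube [-(B - beta), B - beta]^n, where beta bounds the
# infinity norm of c*s. Accepted responses are then independent of the secret.
def sample_response(cs, B, beta):
    assert max(abs(t) for t in cs) <= beta
    while True:                                    # each failed loop is one "abort"
        y = [secrets.randbelow(2 * B + 1) - B for _ in cs]
        z = [yi + ci for yi, ci in zip(y, cs)]
        if all(abs(zi) <= B - beta for zi in z):
            return z

print(sample_response(cs=[3, -2, 0, 1], B=2**10, beta=3))
```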
Last updated:  2024-03-07
Don’t Use It Twice! Solving Relaxed Linear Code Equivalence Problems
Alessandro Budroni, Jesús-Javier Chi-Domínguez, Giuseppe D'Alconzo, Antonio J. Di Scala, and Mukul Kulkarni
The Linear Code Equivalence (LCE) Problem has received increased attention in recent years due to its applicability in constructing efficient digital signatures. Notably, the LESS signature scheme based on LCE is under consideration for the NIST post-quantum standardization process, along with the MEDS signature scheme that relies on an extension of LCE to the rank metric, namely the Matrix Code Equivalence (MCE) Problem. Building upon these developments, a family of signatures with additional properties, including linkable ring, group, and threshold signatures, has been proposed. These novel constructions introduce relaxed versions of LCE (and MCE), wherein multiple samples share the same secret equivalence. Despite their significance, these variations have often lacked a thorough security analysis, being assumed to be as challenging as their original counterparts. Addressing this gap, our work delves into the sample complexity of LCE and MCE --- that is, the number of samples sufficient for efficient recovery of the shared secret equivalence. Our findings reveal, for instance, that one shouldn't use the same secret twice in the LCE setting, since this enables a polynomial time (and memory) algorithm to retrieve the secret. Consequently, our results unveil the insecurity of two advanced signatures based on variants of the LCE Problem.
Last updated:  2024-03-07
Lightweight Asynchronous Verifiable Secret Sharing with Optimal Resilience
Victor Shoup and Nigel P. Smart
We present new protocols for *Asynchronous Verifiable Secret Sharing* for Shamir (i.e., threshold $t<n$) sharing of secrets. Our protocols:
* Use only "lightweight" cryptographic primitives, such as hash functions;
* Can share secrets over rings such as $\mathbb{Z}_{p^k}$ as well as finite fields $\mathbb{F}_q$;
* Provide *optimal resilience*, in the sense that they tolerate up to $t < n/3$ corruptions, where $n$ is the total number of parties;
* Are *complete*, in the sense that they guarantee that if any honest party receives their share then all honest parties receive their shares;
* Employ *batching* techniques, whereby a dealer shares many secrets in parallel, and achieves an amortized communication complexity that is linear in $n$, at least on the "happy path", where no party *provably* misbehaves.
Last updated:  2024-03-07
The Planck Constant and Quantum Fourier Transformation
Zhengjun Cao and Zhenfu Cao
Quantum Fourier Transformation (QFT) plays a key role in quantum computation theory, but its transform size has never been discussed. In practice, the Xilinx LogiCORE IP Fast Fourier Transform core has a maximum transform size of $N=2^{16}$. Taking into account the Planck constant $\hbar=6.62607015\times 10^{-34}$ and the difficulty of physically implementing the basic operator $\left[ \begin{array}{cc} 1& 0\\ 0 & \exp(-2\pi\,i/N)\\ \end{array} \right]$ on a qubit, we think $N=2^{120}$ could be an upper bound for the transform size of QFT.
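A back-of-the-envelope computation makes the quantity at issue concrete: the finest controlled-phase rotation a size-$N$ QFT requires is $2\pi/N$ radians, which for the proposed bound is already astronomically small. (Whether such a rotation is physically realizable, and the Planck-constant comparison, are the authors' argument, not reproduced here.)

```python
import math

N = 2 ** 120
theta = 2 * math.pi / N   # smallest rotation angle a size-N QFT needs
print(theta)              # ~4.7e-36 rad
```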
Last updated:  2024-03-07
Algebraic Algorithm for the Alternating Trilinear Form Equivalence Problem
Lars Ran, Simona Samardjiska, and Monika Trimoska
The Alternating Trilinear Form Equivalence (ATFE) problem was recently used by Tang et al. as a hardness assumption in the design of a Fiat-Shamir digital signature scheme ALTEQ. The scheme was submitted to the additional round for digital signatures of the NIST standardization process for post-quantum cryptography. ATFE is a hard equivalence problem known to be in the class of equivalence problems that includes, for instance, the Tensor Isomorphism (TI), Quadratic Maps Linear Equivalence (QMLE) and the Matrix Code Equivalence (MCE) problems. Due to the increased cryptographic interest, the understanding of its practical hardness has also increased in the last couple of years. Currently, there are several combinatorial and algebraic algorithms for solving it, the best of which is a graph-theoretic algorithm that also includes an algebraic subroutine. In this paper, we take a purely algebraic approach to the ATFE problem, but we use a coding theory perspective to model the problem. This modelling was introduced earlier for the MCE problem. Using it, we improve the cost of algebraic attacks against ATFE compared to previously known ones. Taking into account the algebraic structure of alternating trilinear forms, we show that the obtained system has fewer variables but also fewer equations than for MCE and gives rise to structural degree-3 syzygies. Under the assumption that outside of these syzygies the system behaves semi-regularly, we provide a concrete, non-asymptotic complexity estimate of the performance of our algebraic attack. Our results show that the complexity of our attack is below the estimated security levels of ALTEQ by more than 20 bits for NIST level I (and more for the others); thus the scheme requires re-parametrization for all three NIST security levels.
Last updated:  2024-03-07
Evaluating the security of CRYSTALS-Dilithium in the quantum random oracle model
Kelsey A. Jackson, Carl A. Miller, and Daochen Wang
In the wake of recent progress on quantum computing hardware, the National Institute of Standards and Technology (NIST) is standardizing cryptographic protocols that are resistant to attacks by quantum adversaries. The primary digital signature scheme that NIST has chosen is CRYSTALS-Dilithium. The security of this scheme is based on the hardness of three computational problems: Module Learning with Errors (MLWE), Module Short Integer Solution (MSIS), and SelfTargetMSIS. MLWE and MSIS have been well-studied and are widely believed to be secure. However, SelfTargetMSIS is novel and, though classically as hard as MSIS, its quantum hardness is unclear. In this paper, we provide the first proof of the hardness of SelfTargetMSIS via a reduction from MLWE in the Quantum Random Oracle Model (QROM). Our proof uses recently developed techniques in quantum reprogramming and rewinding. A central part of our approach is a proof that a certain hash function, derived from the MSIS problem, is collapsing. From this approach, we deduce a new security proof for Dilithium under appropriate parameter settings. Compared to the previous work by Kiltz, Lyubashevsky, and Schaffner (EUROCRYPT 2018) that gave the only other rigorous security proof for a variant of Dilithium, our proof has the advantage of being applicable under the condition $q \equiv 1 \pmod{2n}$, where $q$ denotes the modulus and $n$ the dimension of the underlying algebraic ring. This condition is part of the original Dilithium proposal and is crucial for the efficient implementation of the scheme. We provide new secure parameter sets for Dilithium under the condition $q \equiv 1 \pmod{2n}$, finding that our public key size and signature size are about $2.9\times$ and $1.3\times$ larger, respectively, than those proposed by Kiltz et al. at the same security level.
Last updated:  2024-03-07
Recent Progress in Quantum Computing Relevant to Internet Security
Hilarie Orman
Quantum computers at some future date might be able to factor large numbers, and this poses a threat to some public key and key exchange systems in use today. This overview of recent progress in devising quantum algorithms and building quantum computing devices is meant to help technologists understand the difficult problems that quantum engineers are working on, where advances have been made, and how those things affect estimates of whether and when large-scale quantum computation might happen.
Last updated:  2024-03-07
Notus: Dynamic Proofs of Liabilities from Zero-knowledge RSA Accumulators
Jiajun Xin, Arman Haghighi, Xiangan Tian, and Dimitrios Papadopoulos
Proofs of Liabilities (PoL) allow an untrusted prover to commit to its liabilities towards a set of users and then prove individual users' amounts or the total sum of liabilities, upon queries by users or third-party auditors. This application setting is highly dynamic. User liabilities may increase or decrease arbitrarily, and the prover needs to update proofs in epoch increments (e.g., once a day for a crypto-asset exchange platform). However, prior works mostly focus on the static case, and trivial extensions to the dynamic setting open the system to windows of opportunity for the prover to under-report its liabilities and rectify its books in time for the next check, unless all users check their liabilities at all epochs. In this work, we develop Notus, the first dynamic PoL system for general liability updates that avoids this issue. Moreover, it achieves $O(1)$ query proof size, verification time, and per-epoch auditor overhead. The core building blocks underlying Notus are a novel zero-knowledge (and SNARK-friendly) RSA accumulator and a corresponding zero-knowledge MultiSwap protocol, which may be of independent interest. We then propose optimizations to reduce the prover's update overhead and make Notus scale to large numbers of users ($10^6$ in our experiments). Our results are very encouraging, e.g., it takes less than $2$ ms to verify a user's liability and the proof size is $256$ bytes. On the prover side, deploying Notus on a cloud-based testbed with eight 32-core machines and exploiting parallelism, it takes ${\sim}3$ minutes to perform the complete epoch update, after which all proofs have already been computed.
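To fix ideas, here is a toy RSA accumulator with none of Notus's zero-knowledge or SNARK-friendly features: the modulus is tiny and elements are assumed to be primes, both purely for illustration.

```python
# Accumulate A = g^(product of elements) mod N; membership of a prime e
# is proven by the witness w = g^(product of the other elements), and
# verified by checking w^e == A (mod N).
N = 62615533          # toy modulus 7919 * 7907 (real: 2048+ bits, factors unknown)
g = 65537
elements = [101, 103, 107]

A = g
for e in elements:
    A = pow(A, e, N)

e = 103
w = g
for x in elements:
    if x != e:
        w = pow(w, x, N)

assert pow(w, e, N) == A   # membership witness verifies
```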
Last updated:  2024-03-06
LatticeFold: A Lattice-based Folding Scheme and its Applications to Succinct Proof Systems
Dan Boneh and Binyi Chen
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol, based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses have low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses always have low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
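The basic folding step is easiest to see in the discrete-log setting that the abstract says prior schemes rely on. The sketch below folds two committed witnesses under a toy Pedersen-like commitment; LatticeFold's contribution is doing this with Ajtai commitments while keeping the folded witness low-norm, which this sketch deliberately does not address.

```python
# Fold two witnesses w1, w2 into w = w1 + r*w2 for a random challenge r.
# Because the commitment is additively homomorphic, the folded commitment
# is commit(w1) * commit(w2)^r, so folding never reveals the witnesses.
q = 2 ** 61 - 1            # toy prime; the group Z_q^* has order q - 1
gens = [3, 5, 7]           # toy generators, one per witness coordinate

def commit(w):
    c = 1
    for gi, wi in zip(gens, w):
        c = c * pow(gi, wi, q) % q
    return c

w1, w2 = [1, 2, 3], [4, 0, 6]
r = 123456                              # verifier's random challenge
w = [(a + r * b) % (q - 1) for a, b in zip(w1, w2)]
assert commit(w) == commit(w1) * pow(commit(w2), r, q) % q
```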
Last updated:  2024-03-06
Nebula: A Privacy-First Platform for Data Backhaul
Jean-Luc Watson, Tess Despres, Alvin Tan, Shishir G. Patil, Prabal Dutta, and Raluca Ada Popa
Imagine being able to deploy a small, battery-powered device nearly anywhere on earth that humans frequent and having it be able to send data to the cloud without needing to provision a network—without buying a physical gateway, setting up WiFi credentials, or acquiring a cellular SIM. Such a capability would address one of the greatest bottlenecks to deploying the long tail of small, embedded, and power-constrained IoT devices in nearly any setting. Unfortunately, decoupling the device deployment from the network configuration needed to transmit, or backhaul, sensor data to the cloud remains a tricky challenge, but the success of Tile and AirTag offers hope. They have shown that mobile phones can crowd-source worldwide local network coverage to find lost items, yet expanding these systems to enable general-purpose backhaul raises privacy concerns for network participants. In this work, we present Nebula, a privacy-focused architecture for global, intermittent, and low-rate data backhaul to enable nearly any thing to eventually connect to the cloud while (i) preserving the privacy of the mobile network participants from the platform provider by decentralizing data flow through the system, (ii) incentivizing participation through micropayments, and (iii) preventing system abuse.
Last updated:  2024-03-06
Modular Indexer: Fully User-Verified Execution Layer for Meta-Protocols on Bitcoin
Hongbo Wen, Hanzhi Liu, Shuyang Tang, Shuhan Cao, Domo, and Yu Feng
Before the emergence of inscriptions and ordinal protocols, Bitcoin was limited in its applications due to the Turing-incompleteness of its script language. Fortunately, with recent advances in techniques, Turing-complete off-chain execution layers have been established via Bitcoin indexers. Yet, existing indexers have their data integrity and availability strongly dependent on the honesty of indexers. This violates the trustlessness and decentralization principles of the cryptocurrency literature. To provide an alternative Bitcoin indexer scheme and overcome the above limitations, we reallocate the roles of committee indexers (for heavy computations), normal indexers, and light indexers (the client end), and establish a fully user-verified execution layer based on our modular indexer protocol. For the trustless relay of data, we adopt Verkle trees to store and prove the states. Thus, data integrity and availability are guaranteed even in the case of a majority of malicious committee indexers. Ideally, our modular indexer would safely bridge the gap between the Bitcoin layer-1 and applications from BRC-20, and contribute to the further prosperity of the Bitcoin ecosystem.
Last updated:  2024-03-06
Permutation-Based Hashing Beyond the Birthday Bound
Charlotte Lefevre and Bart Mennink
It is known that the sponge construction is tightly indifferentiable from a random oracle up to around $2^{c/2}$ queries, where $c$ is the capacity. In particular, it cannot provide generic security better than half of the underlying permutation size. In this paper, we aim to achieve hash function security beating this barrier. We present a hashing mode based on two $b$-bit permutations named the double sponge. The double sponge can be seen as the sponge embedded within the double block length hashing paradigm, making two permutation calls in parallel interleaved with an efficient mixing function. Similarly to the sponge, the permutation size is split as $b = r+c$, and the underlying compression function absorbs $r$ bits at a time. We prove that the double sponge is indifferentiable from a random oracle up to around $2^{2c/3}$ queries. This means that the double sponge achieves security beyond the birthday bound in the capacity. In addition, if $c>3b/4$, the double sponge beats the birthday bound in the primitive size; to our knowledge, it is the first permutation-based hashing mode that accomplishes this.
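For readers unfamiliar with the $b = r + c$ split, a minimal classical sponge over a toy (non-cryptographic) 16-bit permutation is sketched below. The double sponge instead makes two such permutation calls in parallel, interleaved with a mixing function; that structure is not reproduced here.

```python
B, R = 16, 8               # b = r + c with r = 8 rate bits, c = 8 capacity bits
C = B - R

def perm(s):
    """Toy 16-bit permutation (multiply/xorshift rounds); NOT cryptographic."""
    for _ in range(5):
        s = (s * 0x9E37) & 0xFFFF   # multiply by odd constant: bijective
        s ^= s >> 7                 # xorshift: bijective
    return s

def sponge_hash(blocks, out_blocks=2):
    s = 0
    for m in blocks:       # absorb r bits at a time into the rate part
        s = perm(s ^ (m << C))
    out = []
    for _ in range(out_blocks):
        out.append(s >> C) # squeeze r bits at a time from the rate part
        s = perm(s)
    return out

print(sponge_hash([0x3, 0xA5]))
```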
Last updated:  2024-03-06
Beyond Security: Achieving Fairness in Mailmen-Assisted Timed Data Delivery
Shiyu Li, Yuan Zhang, Yaqing Song, Hongbo Liu, Nan Cheng, Hongwei Li, Dahai Tao, and Kan Yang
Timed data delivery is a critical service for time-sensitive applications that allows a sender to deliver data to a recipient such that the data only becomes accessible at a specific future time. This service is typically accomplished by employing a set of mailmen to complete the delivery mission. While this approach is commonly used, it is vulnerable to attacks from realistic adversaries, such as a greedy sender (who accesses the delivery service without paying the service charge) and malicious mailmen (who release the data prematurely without being detected). Although some research has been done to address these adversaries, most of it fails to achieve fairness. In this paper, we formally define the fairness requirement for mailmen-assisted timed data delivery and propose a practical scheme, dubbed DataUber, to achieve fairness. DataUber ensures that honest mailmen receive the service charge, lazy mailmen do not receive the service charge, and malicious mailmen are punished. Specifically, DataUber consists of two key techniques: 1) a new cryptographic primitive, i.e., Oblivious and Verifiable Threshold Secret Sharing (OVTSS), enabling a dealer to distribute a secret among multiple participants in a threshold and verifiable way without knowing any one of the shares, and 2) a smart-contract-based complaint mechanism, allowing anyone to become a reporter to complain about a mailman's misbehavior to a smart contract and receive a reward. Furthermore, we formally prove the security of DataUber and demonstrate its practicality through a prototype implementation.
Last updated:  2024-03-06
Unbalanced Circuit-PSI from Oblivious Key-Value Retrieval
Meng Hao, Weiran Liu, Liqiang Peng, Hongwei Li, Cong Zhang, Hanxiao Chen, and Tianwei Zhang
Circuit-based Private Set Intersection (circuit-PSI) empowers two parties, a client and a server, each with input sets $X$ and $Y$, to securely compute a function $f$ on the intersection $X \cap Y$ while preserving the confidentiality of $X \cap Y$ from both parties. Despite the recent proposals of computationally efficient circuit-PSI protocols, they primarily focus on the balanced scenario where $|X|$ is similar to $|Y|$. However, in many practical situations, a circuit-PSI protocol may be applied in an unbalanced context, where $|X|$ is significantly smaller than $|Y|$. Directly applying existing protocols to this scenario poses notable efficiency challenges because the communication complexity of these protocols scales at least linearly with the size of the larger set, i.e., $\max(|X|, |Y|)$. In this work, we put forth efficient constructions for unbalanced circuit-PSI, demonstrating sublinear communication complexity in the size of the larger set. Our key insight lies in formalizing unbalanced circuit-PSI as the process of obliviously retrieving values corresponding to keys from a set of key-value pairs. To achieve this, we propose a new functionality named Oblivious Key-Value Retrieval (OKVR) and design the OKVR protocol based on a new notion termed sparse Oblivious Key-Value Store (sparse OKVS). We conduct comprehensive experiments and the results showcase substantial improvements over the state-of-the-art circuit-PSI schemes, i.e., a $1.84$–$48.86\times$ communication improvement and $1.50$–$39.81\times$ faster computation. Compared to a very recent unbalanced circuit-PSI work, our constructions outperform it by $1.18$–$15.99\times$ in communication and $1.22$–$10.44\times$ in computation overhead, depending on set sizes and network environments.
Last updated:  2024-03-05
Quantum Key Leasing for PKE and FHE with a Classical Lessor
Orestis Chardouvelis, Vipul Goyal, Aayush Jain, and Jiahui Liu
In this work, we consider the problem of secure key leasing, also known as revocable cryptography (Agarwal et al., Eurocrypt '23; Ananth et al., TCC '23), as a strengthened security notion of its predecessor put forward in Ananth et al., Eurocrypt '21. This problem aims to leverage the unclonable nature of quantum information to allow a lessor to lease a quantum key with reusability for evaluating a classical functionality. Later, the lessor can request the lessee to provably delete the key, after which the lessee is completely deprived of the capability to evaluate the function. In this work, we construct a secure key leasing scheme to lease a decryption key of a (classical) public-key homomorphic encryption scheme from standard lattice assumptions. Our encryption scheme is exactly identical to the (primal) version of the Gentry-Sahai-Waters homomorphic encryption scheme with a carefully chosen public key matrix. We achieve a strong form of security where:
* The entire protocol (including key generation and verification of deletion) uses merely classical communication between a classical lessor (client) and a quantum lessee (server).
* Assuming standard assumptions, our security definition ensures that every computationally bounded quantum adversary can simultaneously provide a valid classical deletion certificate and distinguish ciphertexts with at most negligible probability.
Our security relies on the hardness of the learning with errors assumption. Our scheme is the first to be based on a standard assumption while satisfying the two properties mentioned above. The main technical novelty in our work is the design of an FHE scheme that enables us to apply the elegant analyses done in the context of classically verifiable proofs of quantumness from LWE (Brakerski et al., FOCS '18 and JACM '21, and its parallel amplified version in Radian et al., AFT '21) to the setting of secure leasing. This connection leads to a modular construction and arguably simpler proofs than previously known. An important technical component we prove along the way is an amplified quantum search-to-decision reduction: we design an extractor that uses a quantum distinguisher (with an internal quantum state) for decisional LWE to extract secrets with success probability amplified to almost one. This technique might be of independent interest.
Last updated:  2024-03-05
SILBE: an Updatable Public Key Encryption Scheme from Lollipop Attacks
Max Duparc, Tako Boris Fouotsa, and Serge Vaudenay
We present a new post-quantum Public Key Encryption scheme (PKE) named Supersingular Isogeny Lollipop Based Encryption, or SILBE. SILBE is obtained by leveraging the generalized lollipop attack of Castryck and Vercauteren on the M-SIDH key exchange of Fouotsa, Moriya and Petit. In doing so, we can in fact make SILBE a post-quantum secure Updatable Public Key Encryption scheme (UPKE). SILBE is the first isogeny-based UPKE that is not based on group actions. At its core, SILBE extensively uses both the Deuring Correspondence and Kani's Lemma, two central concepts in Isogeny-Based Cryptography.
Last updated:  2024-03-05
Some notes on algorithms for abelian varieties
Damien Robert
A compilation of various notes written over the years on some algorithms on abelian varieties.
Last updated:  2024-03-05
Fiat-Shamir Security of FRI and Related SNARKs
Alexander R. Block, Albert Garreta, Jonathan Katz, Justin Thaler, Pratyush Ranjan Tiwari, and Michał Zając
We establish new results on the Fiat-Shamir (FS) security of several protocols that are widely used in practice, and we provide general tools for establishing similar results for others. More precisely, we: (1) prove the FS security of the FRI and batched FRI protocols; (2) analyze a general class of protocols, which we call $\delta$-correlated, that use low-degree proximity testing as a subroutine (this includes many "Plonk-like" protocols (e.g., Plonky2 and Redshift), ethSTARK, RISC Zero, etc.); and (3) prove FS security of the aforementioned "Plonk-like" protocols, and sketch how to prove the same for the others. We obtain our first result by analyzing the round-by-round (RBR) soundness and RBR knowledge soundness of FRI. For the second result, we prove that if a $\delta$-correlated protocol is RBR (knowledge) sound under the assumption that adversaries always send low-degree polynomials, then it is RBR (knowledge) sound in general. Equipped with this tool, we prove our third result by formally showing that "Plonk-like" protocols are RBR (knowledge) sound under the assumption that adversaries always send low-degree polynomials. We then outline analogous arguments for the remainder of the aforementioned protocols. To the best of our knowledge, ours is the first formal analysis of the Fiat-Shamir security of FRI and widely deployed protocols that invoke it.
Last updated:  2024-03-05
On Soundness Notions for Interactive Oracle Proofs
Alexander R. Block, Albert Garreta, Pratyush Ranjan Tiwari, and Michał Zając
Interactive oracle proofs (IOPs) (Ben-Sasson et al., TCC 2016; Reingold et al., SICOMP 2021) have emerged as a powerful model for proof systems combining IP and PCP. While IOPs are not any more powerful than PCPs from a complexity theory perspective, their potential to create succinct proofs and arguments has been demonstrated by many recent constructions achieving better parameters such as total proof length, alphabet size, and query complexity. In this work, we establish new results on the relationship between various notions of soundness for IOPs. First, we formally generalize the notion of round-by-round soundness (Canetti et al., STOC 2019) and round-by-round knowledge soundness (Chiesa et al., TCC 2019). Given this generalization, we then examine its relationship to the notions of generalized special soundness (Attema et al., CRYPTO 2021) and generalized special unsoundness (Attema et al., TCC 2022). We show that:
1. generalized special soundness implies generalized round-by-round soundness;
2. generalized round-by-round knowledge soundness implies generalized special soundness;
3. generalized special soundness does not imply generalized round-by-round knowledge soundness;
4. generalized round-by-round soundness (resp., special unsoundness) is an upper bound (resp., a lower bound) on standard soundness, and this relationship is tight when the round-by-round soundness and special unsoundness errors are equal; and
5. any special sound IOP can be transformed via (a variant of) the Fiat-Shamir transformation (in the Random Oracle Model) into a non-interactive proof that is adaptively sound in the Quantum Random Oracle Model.
Last updated:  2024-03-05
Traceable Secret Sharing: Strong Security and Efficient Constructions
Dan Boneh, Aditi Partap, and Lior Rotem
Suppose Alice uses a $t$-out-of-$n$ secret sharing to store her secret key on $n$ servers. Her secret key is protected as long as $t$ of them do not collude. However, what if a less-than-$t$ subset of the servers decides to offer the shares they have for sale? In this case, Alice should be able to hold them accountable, or else nothing prevents them from selling her shares. With this motivation in mind, Goyal, Song, and Srinivasan (CRYPTO 21) introduced the concept of *traceable secret sharing*. In such schemes, it is possible to provably trace the leaked secret shares back to the servers who leaked them. Goyal et al. presented the first construction of a traceable secret sharing scheme. However, secret shares in their construction are quadratic in the secret size, and their tracing algorithm is quite involved as it relies on Goldreich-Levin decoding. In this work, we put forth new definitions and practical constructions for traceable secret sharing. In our model, some $f < t$ servers output a reconstruction box $R$ that may arbitrarily depend on their shares. Given additional $t-f$ shares, $R$ reconstructs and outputs the secret. The task is to trace $R$ back to the corrupted servers given black-box access to $R$. Unlike Goyal et al., we do not assume that the tracing algorithm has any information on how the corrupted servers constructed $R$ from the shares in their possession. We then present two very efficient constructions of traceable secret sharing based on two classic secret sharing schemes. In both of our schemes, shares are only twice as large as the secret, improving over the quadratic overhead of Goyal et al. Our first scheme is obtained by presenting a new practical tracing algorithm for the widely-used Shamir secret sharing scheme. Our second construction is based on an extension of Blakley's secret sharing scheme. Tracing in this scheme is optimally efficient, and requires just one successful query to $R$. We believe that our constructions are an important step towards bringing traceable secret-sharing schemes to practice. This work also raises several interesting open problems that we describe in the paper.
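The base scheme behind the paper's first construction is plain Shamir sharing, sketched below over a toy prime field; the tracing machinery, which is the paper's actual contribution, is omitted.

```python
import random

P = 2 ** 127 - 1   # Mersenne prime field

def share(secret, t, n, rng=random):
    """t-out-of-n Shamir sharing: evaluate a random degree-(t-1)
    polynomial with constant term 'secret' at points 1..n."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = share(42, t=3, n=5)
assert reconstruct(shares[:3]) == 42
```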
Last updated:  2024-03-05
Aura: private voting with reduced trust on tallying authorities
Aram Jivanyan and Aaron Feickert
Electronic voting has long been an area of active and challenging research. Security properties relevant to physical voting in elections with a variety of threat models and priorities are often difficult to reproduce in cryptographic systems and protocols. Existing work in this space often focuses on the privacy of ballot contents, assurances to voters that their votes are tabulated, and verification that election results are correct; however, privacy of voter identity is often offloaded to trust requirements on election organizers or tallying authorities, or implies other kinds of trust related to cryptographic construction instantiation. Here we introduce Aura, an election protocol that reduces trust on tallying authorities and organizers while ensuring voter privacy. Ballots in Aura are dissociated from voter identity cryptographically, use verifiable encryption and threshold decryption to diffuse trust in tallying authorities, require no trusted setup for cryptographic primitives, and use efficient proving systems to reduce computation and communication complexity. These properties make Aura a competitive candidate for use in a variety of applications where trust minimization is desirable or necessary.
Last updated:  2024-03-05
Keeping Up with the KEMs: Stronger Security Notions for KEMs and automated analysis of KEM-based protocols
Cas Cremers, Alexander Dax, and Niklas Medinger
Key Encapsulation Mechanisms (KEMs) are a critical building block for hybrid encryption and modern security protocols, notably in the post-quantum setting. Given the asymmetric public key of a recipient, the primitive establishes a shared secret key between sender and recipient. In recent years, a large number of abstract designs and concrete implementations of KEMs have been proposed, e.g., in the context of the NIST process for post-quantum primitives. In this work, we (i) establish stronger security notions for KEMs, and (ii) develop a symbolic analysis method to analyze security protocols that use KEMs. First, we generalize existing security notions for KEMs in the computational setting, introduce several stronger security notions, and prove their relations. Our new properties formalize in which sense outputs of the KEM uniquely determine, i.e., bind, other values. Our new binding properties can be used, e.g., to prove the absence of attacks that were not captured by prior security notions, such as re-encapsulation attacks. Second, we develop a family of fine-grained symbolic models that correspond to our hierarchy of computational security notions, and are suitable for the automated analysis of KEM-based security protocols. We encode our models as a library in the framework of the Tamarin prover. Given a KEM-based protocol, our approach can automatically derive the minimal binding properties required from the KEM; or, if also given a concrete KEM, can analyze whether the protocol meets its security goals. In case studies, Tamarin automatically discovers, e.g., that the key exchange protocol proposed in the original Kyber paper requires stronger properties from the KEM than were proven in the paper.
Last updated:  2024-03-05
Generation of "independent" points on elliptic curves by means of Mordell--Weil lattices
Dmitrii Koshelev
This article develops a novel method of generating "independent" points on an ordinary elliptic curve over a finite field of large characteristic. Such points are actively used, e.g., in the Pedersen vector commitment scheme and its modifications. The conventional generation consists of sampling points successively via a hash function to the elliptic curve. The new generation method equally satisfies the NUMS (Nothing Up My Sleeve) principle, but it works faster on average. In other words, instead of finding each point separately, it is suggested to sample several points at once with a non-small success probability. This means that in practice the new method finishes in polynomial time, unless one is mysteriously unlucky. More precisely, explicit formulas are presented in the article for deriving up to four "independent" points on any curve of $j$-invariant $0$. Such curves are known to be very popular in elliptic curve cryptography.
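The conventional generation that the article contrasts with can be sketched as hash-and-increment: hash a counter to an $x$-coordinate and keep it when it lands on the curve. Below is a toy sketch for a curve $y^2 = x^3 + b$ of $j$-invariant $0$ over a small field with $p \equiv 3 \pmod 4$; parameters and names are illustrative.

```python
import hashlib

p, b = 10007, 7            # toy curve y^2 = x^3 + 7 over F_p, j-invariant 0

def hash_to_point(label, ctr):
    x = int.from_bytes(hashlib.sha256(f"{label}:{ctr}".encode()).digest(), "big") % p
    rhs = (x ** 3 + b) % p
    y = pow(rhs, (p + 1) // 4, p)    # square-root candidate since p = 3 mod 4
    return (x, y) if y * y % p == rhs else None

def nums_point(label):
    ctr = 0
    while True:                      # expected ~2 attempts per point
        pt = hash_to_point(label, ctr)
        if pt:
            return pt
        ctr += 1

print(nums_point("pedersen-generator-0"))
```

The article's method instead derives several such points at once from explicit formulas, saving hash-to-curve invocations on average.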
Last updated:  2024-03-05
AprèsSQI: Extra Fast Verification for SQIsign Using Extension-Field Signing
Maria Corte-Real Santos, Jonathan Komada Eriksen, Michael Meyer, and Krijn Reijnders
We optimise the verification of the SQIsign signature scheme. By using field extensions in the signing procedure, we are able to significantly increase the amount of available rational $2$-power torsion in verification, which achieves a significant speed-up. This, moreover, allows several other speed-ups on the level of curve arithmetic. We show that the synergy between these high-level and low-level improvements gives significant improvements, making verification $2.07$ times faster, or up to $3.41$ times when using size-speed trade-offs, compared to the state of the art, without majorly degrading the performance of signing.
Last updated:  2024-03-05
ModFalcon: compact signatures based on module NTRU lattices
Chitchanok Chuengsatiansup, Thomas Prest, Damien Stehlé, Alexandre Wallet, and Keita Xagawa
Lattices lead to promising practical post-quantum digital signatures, combining asymptotic efficiency with strong theoretical security guarantees. However, tuning their parameters into practical instantiations is a delicate task. On the one hand, NIST round 2 candidates based on Lyubashevsky's design (such as Dilithium and qTESLA) allow several tradeoffs between security and efficiency, but at the expense of a large bandwidth consumption. On the other hand, the hash-and-sign Falcon signature scheme is much more compact and still very efficient, but it allows only two security levels, with large compactness and security gaps between them. We introduce a new family of signature schemes based on the Falcon design, which relies on module lattices. Our concrete instantiation enjoys the compactness and efficiency of Falcon, and allows an intermediate security level. It leads to the most compact lattice-based signature achieving a quantum security above 128 bits.
Last updated:  2024-03-05
Breaking the DECT Standard Cipher with Lower Time Cost
Lin Ding, Zhengting Li, Ziyu Guan, Xinhai Wang, and Zheng Wu
The DECT Standard Cipher (DSC) is a proprietary stream cipher used for encryption in Digital Enhanced Cordless Telecommunications (DECT), a standard for short-range cordless communication that is widely deployed worldwide in both residential and enterprise environments. New weaknesses of the DSC stream cipher that were not discovered in previous works are explored and analyzed in this paper. Based on these weaknesses, new practical key recovery attacks and a distinguishing attack on DSC with lower time cost are proposed. The first cryptanalytic result shows that DSC can be broken in about 13.12 seconds in the known-IV setting, once an offline phase that takes about 58.33 minutes is completed. Then, a distinguishing attack on DSC in the related-key chosen-IV setting is given, which has a time complexity of only 2 encryptions and a success probability of almost 1. Finally, based on the slide property, a key recovery attack on DSC with practical complexities is proposed. The experimental result shows that DSC can be broken on a common PC within about 44.97 seconds in the multiple related-key setting. The attacks on DSC proposed in this paper clearly show that a well-designed initialization is absolutely necessary for designing a secure stream cipher.
Last updated:  2024-03-05
DARE to agree: Byzantine Agreement with Optimal Resilience and Adaptive Communication
Pierre Civit, Muhammad Ayaz Dzulfikar, Seth Gilbert, Rachid Guerraoui, Jovan Komatovic, and Manuel Vidigueira
Byzantine Agreement (BA) enables $n$ processes to reach consensus on a common valid $L_o$-bit value, even in the presence of up to $t<n$ faulty processes that can deviate arbitrarily from their prescribed protocol. Despite its significance, the optimal communication complexity for key variations of BA has not been determined within the honest majority regime ($n=2t+1$), for both the worst-case scenario and the adaptive scenario, which accounts for the actual number $f \leq t$ of failures. We introduce ADA-DARE (Adaptively Disperse, Agree, Retrieve), a novel universal approach to solve BA efficiently. Let $\kappa$ represent the size of the cryptographic objects required to solve BA when $t>n/3$. Different instantiations of ADA-DARE achieve near-optimal adaptive bit complexity of $O(nL_o + n(f + 1) \kappa)$ for both strong multi-valued validated BA (SMVBA) and interactive consistency (IC). By definition, for IC, $L_o = nL_{in}$, with $L_{in}$ representing the size of an input value. These results achieve optimal $O(n(L_o+f))$ word complexity and significantly improve the previous best results by up to a linear factor, depending on $L_o$ and $f$.
Last updated:  2024-03-05
The Last Challenge Attack: Exploiting a Vulnerable Implementation of the Fiat-Shamir Transform in a KZG-based SNARK
Oana Ciobotaru, Maxim Peter, and Vesselin Velichkov
The Fiat-Shamir transform [1] is a well-known and widely employed technique for converting sound public-coin interactive protocols into sound non-interactive protocols. Even though the transformation itself is relatively clear and simple, some implementations choose to deviate from the specifications, for example for performance reasons. In this short note, we present a vulnerability arising from such a deviation in a KZG-based PLONK verifier implementation. This deviation stemmed from the incorrect computation of the last challenge of the PLONK protocol [2], where the KZG batching proof challenge was computed before, and, hence, independently from the KZG evaluation proofs. More generally, such a vulnerability may affect any KZG [3] implementation where one uses batched KZG proof evaluations for at least two distinct evaluation points. We call an attack enabled by such a deviation a Last Challenge Attack. For concreteness, we show that when a PLONK verifier implementation presents such a deviation, a malicious PLONK prover can mount a Last Challenge Attack to construct verifiable proofs of false statements. The described vulnerability was initially discovered as part of an audit, and has been responsibly disclosed to the developers and fixed. A proof of concept of the vulnerability, in which a proof is forged for an arbitrary public input, is made available. In addition to the above, in this work we also provide a security proof of the knowledge-soundness of the batched KZG scheme with evaluations for at least two distinct values.
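The hygiene rule at issue is that every Fiat-Shamir challenge must be derived from the entire transcript produced so far. Below is a minimal transcript sketch (hypothetical API and labels, not the audited code) showing the correct ordering for the batching challenge.

```python
import hashlib

class Transcript:
    """Toy Fiat-Shamir transcript: challenges hash everything absorbed so far."""
    def __init__(self, label):
        self.h = hashlib.sha256(label.encode())
    def absorb(self, name, data: bytes):
        self.h.update(name.encode() + data)
    def challenge(self, name):
        self.absorb(name, b"challenge")
        return int.from_bytes(self.h.copy().digest(), "big")

t = Transcript("plonk-verifier")
t.absorb("commitments", b"...wire and permutation commitments...")
t.absorb("evaluations", b"...claimed polynomial openings...")
t.absorb("kzg_proofs", b"...KZG evaluation proofs...")  # must precede the next line
u = t.challenge("kzg-batching")  # the vulnerable verifier derived u before the proofs
```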
Last updated:  2024-03-05
Efficient Unbalanced Quorum PSI from Homomorphic Encryption
Xinpeng Yang, Liang Cai, Yinghao Wang, Lu Sun, and Jingwei Hu
Multiparty private set intersection (mPSI) protocols are capable of finding the intersection of multiple sets securely without revealing any other information. However, their limitation lies in processing only those elements present in every participant's set, which proves inadequate in scenarios where certain elements are common to several, but not all, sets. In this paper, we introduce an innovative variant of the mPSI protocol named unbalanced quorum PSI to fill this gap. Unlike previous quorum-PSI proposals, which detect elements present in at least $k$ out of $n$ equally sized sets, our protocol is particularly tailored for unbalanced cases where the size of the receiver's set is much smaller than the sizes of the senders' sets. Our work achieves logarithmic communication complexity in the semi-honest setting, significantly surpassing previous work in efficiency. The benchmarks show that it takes 22.7 seconds in WAN and 14.7 seconds in LAN for online computation, and only 87.8 MB of total communication, to intersect 5535 elements across 15 sets, each containing $2^{24}$ elements. Compared to prior work, this is roughly an 87$\times$ reduction in communication and a 31$\times$ reduction in online time. Our protocols can easily be extended to larger sets with $2^{28}$ elements, which is nearly impractical for prior work.
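Stripped of all cryptography, the quorum functionality being computed securely is simply membership in at least $k$ of the $n$ sets; a plaintext sketch to pin down the target:

```python
from collections import Counter

def quorum_intersection(sets, k):
    """Elements appearing in at least k of the given sets (the ideal
    functionality that quorum PSI computes without revealing the sets)."""
    counts = Counter(x for s in sets for x in set(s))
    return {x for x, c in counts.items() if c >= k}

print(quorum_intersection([{1, 2, 3}, {2, 3, 4}, {3, 4, 5}], k=2))  # {2, 3, 4}
```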
Last updated:  2024-03-05
Plover: Masking-Friendly Hash-and-Sign Lattice Signatures
Muhammed F. Esgin, Thomas Espitau, Guilhem Niot, Thomas Prest, Amin Sakzad, and Ron Steinfeld
We introduce a toolkit for transforming lattice-based hash-and-sign signature schemes into masking-friendly signatures secure in the $t$-probing model. Until now, efficiently masking lattice-based hash-and-sign schemes has been an open problem, with unsuccessful attempts such as Mitaka. A first breakthrough was made in 2023 with the NIST PQC submission Raccoon, although it was not formally proven. Our main conceptual contribution is to realize that the same principles underlying Raccoon are very generic, and to find a systematic way to apply them within the hash-and-sign paradigm. Our main technical contribution is to formalize, prove, instantiate and implement a hash-and-sign scheme based on these techniques. Our toolkit includes noise flooding to mitigate statistical leaks, and an extended Strong Non-Interfering probing security (SNIu) property to handle masked gadgets with unshared inputs. We showcase the efficiency of our techniques in a signature scheme, Plover, based on (hint) Ring-LWE. It is the first lattice-based masked hash-and-sign scheme with quasi-linear complexity $O(d \log d)$ in the number of shares $d$. Our performance is competitive with the state-of-the-art masking-friendly signature, the Fiat-Shamir scheme Raccoon.
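The starting point of any $t$-probing-secure design is additive (arithmetic) masking of sensitive values into $d$ shares; here is a minimal sketch with a toy modulus, not Plover's actual gadgets.

```python
import secrets

Q = 12289   # toy modulus, chosen only for illustration

def mask(x, d):
    """Split x into d shares summing to x mod Q; any d - 1 shares are
    uniformly random, giving t-probing security with t = d - 1."""
    shares = [secrets.randbelow(Q) for _ in range(d - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def unmask(shares):
    return sum(shares) % Q

s = mask(42, d=4)
assert unmask(s) == 42
```

Plover's contribution is keeping the whole signing flow in such a masked domain at quasi-linear cost in $d$.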