Papers updated in last 31 days (271 results)

Last updated:  2023-12-11
High-assurance zeroization
Santiago Arranz Olmos, Gilles Barthe, Ruben Gonzalez, Benjamin Grégoire, Vincent Laporte, Jean-Christophe Léchenet, Tiago Oliveira, and Peter Schwabe
In this paper we revisit the problem of erasing sensitive data from memory and registers during return from a cryptographic routine. While the problem and the related attacker model are fairly easy to phrase, it turns out to be surprisingly hard to guarantee security in this model when implementing cryptography in common languages such as C/C++ or Rust. We revisit the issues surrounding zeroization and then present a principled solution, in the sense that it guarantees that sensitive data is erased and it clearly defines when this happens. We implement our solution as an extension to the formally verified Jasmin compiler and extend the correctness proof of the compiler to cover zeroization. We show that the approach seamlessly integrates with state-of-the-art protections against microarchitectural attacks by integrating zeroization into Libjade, a cryptographic library written in Jasmin with systematic protections against timing and Spectre-v1 attacks. We present benchmarks showing that in many cases the overhead of zeroization is barely measurable, and that it stays below 2% except for highly optimized symmetric crypto routines on short inputs.
Last updated:  2023-12-11
On Constructing One-Way Quantum State Generators, and More
Shujiao Cao and Rui Xue
As a quantum analogue of the one-way function, the notion of a one-way quantum state generator was recently proposed by Morimae and Yamakawa (CRYPTO'22); it is implied by pseudorandom states and can be used to devise one-time secure digital signatures. Due to Kretschmer's result (TQC'20), it is believed that pseudorandom state generators require less than post-quantum secure one-way functions. Unfortunately, it remains unknown how to achieve a one-way quantum state generator without the existence of post-quantum secure one-way functions. In this paper, we study this problem and obtain the following results: Two variants of the one-way quantum state generator are proposed, called the weak one-way quantum state generator and the distributionally one-way quantum state generator. We then establish the equivalence between weak and strong one-way state generators, and show the equivalence between weak and distributionally one-way quantum state generators in the symmetric setting. We construct the symmetric distributionally one-way quantum state generator from the average-case hardness of a promise problem belonging to $\textsf{QSZK}$. We construct quantum bit commitment with statistical binding (sum-binding) and computational hiding directly from the average-case hardness of $\textsf{QSZK}$. To show the non-triviality of the constructions above, a quantum oracle $\mathcal{U}$ is devised relative to which such a promise problem in $\textsf{QSZK}$ does not belong to $\textsf{QMA}^{\mathcal{U}}$. Our results present the first non-trivial construction of a one-way quantum state generator from the hardness assumption of a complexity class, and give further evidence that one-way quantum state generators probably require less than post-quantum secure one-way functions.
Last updated:  2023-12-11
FIN: Practical Signature-Free Asynchronous Common Subset in Constant Time
Sisi Duan, Xin Wang, and Haibin Zhang
Asynchronous common subset (ACS) is a powerful paradigm enabling applications such as Byzantine fault-tolerance (BFT) and multi-party computation (MPC). The most efficient ACS framework in the information-theoretic setting is due to Ben-Or, Kelmer, and Rabin (BKR, 1994). The BKR ACS protocol has been both theoretically and practically impactful. However, the BKR protocol has an $O(\log n)$ running time (where $n$ is the number of replicas) due to the usage of $n$ parallel asynchronous binary agreement (ABA) instances, impacting both performance and scalability. Indeed, for a network of 16-64 replicas, the parallel ABA phase occupies about 95%-97% of the total runtime in BKR. A long-standing open problem is whether we can build an ACS framework with $O(1)$ time while not increasing the message or communication complexity of the BKR protocol. In this paper, we resolve the open problem, presenting the first constant-time ACS protocol with $O(n^3)$ messages in the information-theoretic and signature-free settings. Moreover, as a key ingredient of our new ACS framework and an interesting primitive in its own right, we provide the first information-theoretic multivalued validated Byzantine agreement (MVBA) protocol with $O(1)$ time and $O(n^3)$ messages. Both results can improve---asymptotically and concretely---various applications using ACS and MVBA in the information-theoretic, quantum-safe, or signature-free settings. As an example, we implement FIN, a BFT protocol instantiated using our framework. Via a 121-server deployment on Amazon EC2, we show FIN is significantly more efficient than PACE (CCS 2022), the state-of-the-art asynchronous BFT protocol of the same type. In particular, FIN reduces the overhead of the ABA phase to as low as 1.23% of the total runtime, and FIN achieves up to 3.41x the throughput of PACE. We also show that FIN outperforms other BFT protocols with the standard liveness property, such as Dumbo and Speeding Dumbo.
Last updated:  2023-12-10
Proof of Compliance for Anonymous, Unlinkable Messages
Mingxun Zhou, Elaine Shi, and Giulia Fanti
Anonymous systems are susceptible to malicious activity. For instance, in anonymous payment systems, users may engage in illicit practices like money laundering. Similarly, anonymous federated learning systems decouple user updates to a central machine learning model from the user's identity; malicious users can manipulate their updates to poison the model. Today, compliance with system-generated rules in such systems can be guaranteed at the level of a single message by utilizing Zero-Knowledge Proofs (ZKP). However, it remains unclear how to prove compliance for rules that are defined over a collection of a user's messages, without compromising the unlinkability of the messages. To address this challenge, we propose an efficient protocol called Shuffle-ZKP, which enables users within an unlinkable messaging system to collectively prove their compliance. Our protocol leverages a distributed and private set equality check protocol along with generic Non-Interactive Zero-Knowledge (NIZK) proof systems. We also provide an additional attributing protocol to identify misbehaving users. We theoretically analyze the protocol's correctness and privacy properties; we then implement and test it across multiple use cases. Our empirical results show that in use cases involving thousands of users, each user is able to generate a compliance proof within 0.2-10.6 seconds, depending on the use case, while the additional communication overhead remains under 3KB. Furthermore, the protocol is computationally efficient on the server side; the verification algorithm requires a few seconds to handle thousands of users in all of our use cases.
Last updated:  2023-12-10
Allowing Blockchain Loans with Low Collateral
Tom Azoulay, Uri Carl, and Ori Rottenstreich
Collateral is an item of value serving as security for the repayment of a loan. In blockchain-based loans, cryptocurrencies serve as the collateral. The high volatility of cryptocurrencies implies a serious barrier to entry: common practice requires collateral worth multiple times the value of the loan. As assets serving as collateral are locked, this requirement prevents many candidates from obtaining loans. In this paper, we aim to make loans more accessible by offering loans with lower collateral, while keeping the risk for lenders bounded. We propose a credit score based on data recovered from the blockchain to predict how likely a potential borrower is to repay a loan. Our protocol does not risk the initial amount granted by liquidity providers, but only risks part of the interest yield the protocol has previously gained from the borrower.
Last updated:  2023-12-10
An Empirical Study of Cross-chain Arbitrage in Decentralized Exchanges
Ori Mazor and Ori Rottenstreich
Blockchain interoperability refers to the ability of blockchains to share information with each other. Decentralized Exchanges (DEXs) are peer-to-peer marketplaces where traders can exchange cryptocurrencies. Several studies have focused on arbitrage analysis within a single blockchain, typically Ethereum. Recently, we have seen a growing interest in cross-chain technologies to create a more interconnected blockchain network. We present a framework to study cross-chain arbitrage in DEXs. We use this framework to analyze cross-chain arbitrages between two popular DEXs, PancakeSwap and QuickSwap, within a time frame of one month. While PancakeSwap is implemented on a blockchain named BNB Chain, QuickSwap is implemented on a different blockchain named Polygon. The approach of this work is to study cross-chain arbitrage through an empirical study, examining the number of arbitrages, their revenue, and their duration. This work lays the basis for understanding cross-chain arbitrage and its potential impact on blockchain technology.
Last updated:  2023-12-10
PRAC: Round-Efficient 3-Party MPC for Dynamic Data Structures
Sajin Sasy, Adithya Vadapalli, and Ian Goldberg
We present Private Random Access Computations (PRAC), a 3-party Secure Multi-Party Computation (MPC) framework to support random-access data structure algorithms for MPC with efficient communication in terms of rounds and bandwidth. PRAC extends the state-of-the-art DORAM Duoram with a new implementation, more flexibility in how the DORAM memory is shared, and support for Incremental and Wide Distributed Point Functions (DPFs). We then use these DPF extensions to achieve algorithmic improvements in three novel oblivious data structure protocols for MPC. PRAC exploits the observation that a secure protocol for an algorithm can gain efficiency if the protocol explicitly reveals information that the algorithm inherently leaks. We first present an optimized binary search protocol that reduces the bandwidth from $O(\lg^2 n)$ to $O(\lg n)$ for obliviously searching over $n$ items. We then present an oblivious heap protocol with rounds reduced from $O(\lg n)$ to $O(\lg \lg n)$ for insertions, and bandwidth reduced from $O(\lg^2 n)$ to $O(\lg n)$ for extractions. Finally, we present the first oblivious AVL tree protocol for MPC in which no party learns the data or the structure of the AVL tree, and which supports arbitrary insertions and deletions with $O(\lg n)$ rounds and bandwidth. We experimentally evaluate our protocols with realistic network settings for a wide range of memory sizes to demonstrate their efficiency. For instance, we observe our binary search protocol provides $>27\times$ and $>3\times$ improvements in wall-clock time and bandwidth respectively over other approaches for a memory with $2^{26}$ items; for the same setting our heap's extract-min protocol achieves $>31\times$ speedup in wall-clock time and $>13\times$ reduction in bandwidth.
Last updated:  2023-12-10
Selective Delegation of Attributes in Mercurial Signature Credentials
Colin Putman and Keith M. Martin
Anonymous credential schemes enable service providers to verify information that a credential holder willingly discloses, without needing any further personal data to corroborate that information, and without allowing the user to be tracked from one interaction to the next. Mercurial signatures are a novel class of anonymous credentials which show good promise as a simple and efficient construction without heavy reliance on zero-knowledge proofs. However, they still require significant development in order to achieve the functionality that most existing anonymous credential schemes provide. Encoding multiple attributes of the credential holder in such a way that they can be disclosed selectively with each use of the credential is often seen as a vital feature of anonymous credentials, and is one that mercurial signatures have not yet implemented. In this paper, we show a simple way to encode attributes in a mercurial signature credential and to regulate which attributes a credential holder can issue when delegating their credential to another user. We also extend the security model associated with mercurial signatures to account for the inclusion of attributes, and prove the security of our extension with respect to the original mercurial signature construction.
Last updated:  2023-12-09
Ring-LWE Hardness Based on Non-invertible Ideals
Charanjit S. Jutla and Chengyu Lin
We extend the known pseudorandomness of Ring-LWE to be based on lattices that do not correspond to any ideal of any order in the underlying number field. In the earlier works of Lyubashevsky et al. (EUROCRYPT 2010) and Peikert et al. (STOC 2017), the hardness of RLWE was based on ideal lattices of the ring of integers of number fields, which are known to be Dedekind domains. While these works extended Regev's (STOC 2005) quantum polynomial-time reduction for LWE, thus allowing more efficient and more structured cryptosystems, the additional algebraic structure of ideals of Dedekind domains leaves open the possibility that such ideal lattices are not as hard as general lattices. In this work we show that the hardness of $q$-Ring-LWE can be based on the worst-case hardness of ideal lattices in arbitrary orders $O$, as long as the order $O$ satisfies the property that $\frac{1}{m}\cdot O$ contains the ring of integers, for some $m$ co-prime to $q$. The reduction requires that the noise be a factor $m$ larger than in the original Ring-LWE reduction. We also show that for the power-of-two cyclotomic number fields, there exist orders with $m=4$ such that non-trivial ideals of the order, which are not contained in the conductor, are non-invertible. Since the conductor itself is non-invertible, this gives a non-trivial multiplicative set that lies outside the ideal class group. Another reduction shows that the hardness of $q$-Ring-LWE can be based on the worst-case hardness of lattices that correspond to sums of ideal lattices in arbitrary and different orders of the number field, as long as the (set of) orders $\{O_i\}$ satisfy the property that $\frac{1}{m}\cdot O_i$ contains the ring of integers, for some $m$ co-prime to $q$. We also show that for the power-of-two cyclotomic number fields, there exist orders $O_1, O_2$ with $m=8$ such that there are ideals $I_1, I_2$ of $O_1, O_2$ respectively with $I_1+ I_2$ not an ideal of any order in the number field.
Last updated:  2023-12-09
The Patching Landscape of Elisabeth-4 and the Mixed Filter Permutator Paradigm
Clément Hoffmann, Pierrick Méaux, and François-Xavier Standaert
Filter permutators are a family of stream cipher designs aimed at hybrid homomorphic encryption. While originally operating on bits, they were generalized to groups at Asiacrypt 2022, and instantiated for evaluation with the TFHE scheme, which favors a filter based on (negacyclic) Look-Up Tables (LUTs). A recent work of Gilbert et al., to appear at Asiacrypt 2023, exhibited (algebraic) weaknesses in the Elisabeth-4 instance, exploiting the combination of the 4-bit negacyclic LUTs it uses as a filter. In this article, we explore the landscape of patches that can be used to restore the security of such designs while maintaining their good properties for hybrid homomorphic encryption. Starting with minimal changes, we observe that just updating the filter function (still with small negacyclic LUTs) is conceptually feasible, and propose the resulting Elisabeth-b4 design with three levels of NLUTs. We then show that a group permutator combining two different functions in the filter can simplify the analysis and improve performance. We specify the Gabriel instance to illustrate this claim. We finally propose to modify the group filter permutator paradigm into a mixed filter permutator, which considers the permutation of the key with elements in a group and a filter outputting elements in a different group. We specify the Margrethe instance as a first example of a mixed filter permutator, with key elements in $\mathbb{F}_2$ and output in $\mathbb{Z}_{16}$, which we believe is well-suited for recent fully homomorphic encryption schemes that can efficiently evaluate larger (non-negacyclic) LUTs.
Last updated:  2023-12-09
Hardness of Range Avoidance and Remote Point for Restricted Circuits via Cryptography
Yilei Chen and Jiatu Li
A recent line of research has introduced a systematic approach to explore the complexity of explicit construction problems through the use of meta problems, namely, the range avoidance problem (abbrev. $\textsf{Avoid}$) and the remote point problem (abbrev. $\textsf{RPP}$). The upper and lower bounds for these meta problems provide a unified perspective on the complexity of specific explicit construction problems that were previously studied independently. An interesting question largely unaddressed by previous works is whether $\textsf{Avoid}$ and $\textsf{RPP}$ are hard for simple circuits such as low-depth circuits. In this paper, we demonstrate, under plausible cryptographic assumptions, that both the range avoidance problem and the remote point problem cannot be efficiently solved by nondeterministic search algorithms, even when the input circuits are as simple as constant-depth circuits. This extends a hardness result established by Ilango, Li, and Williams (STOC '23) against deterministic algorithms employing witness encryption for $\textsf{NP}$, where the inputs to $\textsf{Avoid}$ are general Boolean circuits. Our primary technical contribution is a novel construction of witness encryption, inspired by public-key encryption, for a certain promise language in $\textsf{NP}$ that is unlikely to be $\textsf{NP}$-complete. We introduce a generic approach to transform a public-key encryption scheme with particular properties into a witness encryption scheme for a promise language related to the initial public-key encryption scheme. Based on this transformation and variants of standard lattice-based or coding-based PKE schemes, we obtain, under plausible assumptions, a provably secure witness encryption scheme for some promise language in $\textsf{NP}\setminus \textsf{coNP}_{/\textsf{poly}}$. Additionally, we show that our constructions of witness encryption are plausibly secure against nondeterministic adversaries under a generalized notion of security in the spirit of Rudich's super-bits (RANDOM '97), which is crucial for demonstrating the hardness of $\textsf{Avoid}$ and $\textsf{RPP}$ against nondeterministic algorithms.
Last updated:  2023-12-09
Zero-Knowledge Functional Elementary Databases
Xinxuan Zhang and Yi Deng
Zero-knowledge elementary databases (ZK-EDBs) enable a prover to commit to a database ${D}$ of key-value pairs $(x,v)$ and later provide a convincing answer to the query ``send me the value $D(x)$ associated with $x$'' without revealing any extra knowledge (including the size of ${D}$). After their introduction, several works extended ZK-EDBs to allow more expressive queries, but the expressiveness achieved so far is still limited: only relatively simple queries -- range queries over the keys and values -- can be handled by known constructions. In this paper we introduce a new notion called zero-knowledge functional elementary databases (ZK-FEDBs), which allows the most general functional queries. Roughly speaking, for any Boolean circuit $f$, ZK-FEDBs allow the ZK-EDB prover to provide convincing answers to queries of the form ``send me all records ${(x,v)}$ in ${{D}}$ satisfying $f(x,v)=1$,'' without revealing any extra knowledge (including the size of ${D}$). We present a construction of ZK-FEDBs in the random oracle model and generic group model, whose proof size is only linear in the length of the record and the size of the query circuit, and is independent of the size of the input database $D$. Our technical contribution is twofold. First, we introduce a new variant of zero-knowledge sets (ZKS) which supports combined operations on sets, and present a concrete construction based on groups of unknown order. Second, we develop a transformation that transforms queries of Boolean circuits into queries of combined operations on related sets, which may be of independent interest.
Last updated:  2023-12-09
BOLT: Privacy-Preserving, Accurate and Efficient Inference for Transformers
Qi Pang, Jinhao Zhu, Helen Möllering, Wenting Zheng, and Thomas Schneider
The advent of transformers has brought about significant advancements in traditional machine learning tasks. However, their pervasive deployment has raised concerns about the potential leakage of sensitive information during inference. Existing approaches using secure multiparty computation (MPC) face limitations when applied to transformers due to the extensive model size and resource-intensive matrix-matrix multiplications. In this paper, we present BOLT, a privacy-preserving inference framework for transformer models that supports efficient matrix multiplications and nonlinear computations. Combined with our novel machine learning optimizations, BOLT reduces the communication cost by 10.91x. Our evaluation on diverse datasets demonstrates that BOLT maintains comparable accuracy to floating-point models and achieves 4.8-9.5x faster inference across various network settings compared to the state-of-the-art system.
Last updated:  2023-12-09
Revisiting BBS Signatures
Stefano Tessaro and Chenzhi Zhu
BBS signatures were implicitly proposed by Boneh, Boyen, and Shacham (CRYPTO ’04) as part of their group signature scheme, and explicitly cast as stand-alone signatures by Camenisch and Lysyanskaya (CRYPTO ’04). A provably secure version, called BBS+, was then devised by Au, Susilo, and Mu (SCN ’06), and is currently the object of a standardization effort which has led to a recent RFC draft. BBS+ signatures are suitable for use within anonymous credential and DAA systems, as their algebraic structure enables efficient proofs of knowledge of message-signature pairs that support partial disclosure. BBS+ signatures consist of one group element and two scalars. As our first contribution, we prove that a variant of BBS+ producing shorter signatures, consisting only of one group element and one scalar, is also secure. The resulting scheme is essentially the original BBS proposal, which was lacking a proof of security. Here we show it satisfies, under the q-SDH assumption, the same provable security guarantees as BBS+. We also provide a complementary tight analysis in the algebraic group model, which heuristically justifies instantiations with potentially shorter signatures. Furthermore, we devise simplified and shorter zero-knowledge proofs of knowledge of a BBS message-signature pair that support partial disclosure of the message. Over the BLS12-381 curve, our proofs are 896 bits shorter than the prior proposal by Camenisch, Drijvers, and Lehmann (TRUST ’16), which is also adopted by the RFC draft. Finally, we show that BBS satisfies one-more unforgeability in the algebraic group model in a scenario, arising in the context of credentials, where the signer can be asked to sign arbitrary group elements, meant to be commitments, without seeing their openings.
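For a concrete sense of the size difference, the following back-of-the-envelope sketch compares the two signature formats over BLS12-381, assuming the standard compressed encodings (48-byte $\mathbb{G}_1$ elements, 32-byte scalars); the byte counts are our illustration, not figures from the paper:
```python
# Standard compressed encodings for BLS12-381 (assumption, not from the paper):
G1_BYTES, SCALAR_BYTES = 48, 32

bbs_plus = G1_BYTES + 2 * SCALAR_BYTES  # BBS+ (A, e, s): one group element, two scalars
bbs      = G1_BYTES + 1 * SCALAR_BYTES  # BBS  (A, e):    one group element, one scalar
print(f"BBS+: {bbs_plus} bytes, BBS: {bbs} bytes, "
      f"saving {(bbs_plus - bbs) * 8} bits per signature")
```
Under these encodings, dropping the extra scalar saves 256 bits per signature (112 vs. 80 bytes).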
Last updated:  2023-12-08
Asymptotics of hybrid primal lattice attacks
Daniel J. Bernstein
The literature gives the impression that (1) existing heuristics accurately predict how effective lattice attacks are, (2) non-ternary lattice systems are not vulnerable to hybrid multi-decoding primal attacks, and (3) the asymptotic exponents of attacks against non-ternary systems have stabilized. This paper shows that (1) contradicts (2) and that (1) contradicts (3): the existing heuristics imply that hybrid primal key-recovery attacks are exponentially faster than standard non-hybrid primal key-recovery attacks against the LPR PKE with any constant error width. This is the first report since 2015 of an exponential speedup in heuristic non-quantum primal attacks against non-ternary LPR. Quantitatively, for dimension $n$, modulus $n^{Q_0+o(1)}$, and error width $w$, a surprisingly simple hybrid attack reduces heuristic costs from $2^{(\rho+o(1))n}$ to $2^{(\rho-\rho H_0+o(1))n}$, where $z_0=2Q_0/(Q_0+1/2)^2$, $\rho=z_0 \log_4(3/2)$, and $H_0=1/(1+(\lg w)/0.057981z_0)$. This raises the questions of (1) what heuristic exponent is achieved by more sophisticated hybrid attacks and (2) what impact hybrid attacks have upon concrete cryptosystems whose security analyses have ignored hybrid attacks, such as Kyber-512.
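As a numerical companion, the following sketch evaluates these exponent formulas for illustrative parameters (the chosen values and the grouping $(\lg w)/(0.057981\,z_0)$ reflect our reading of the formulas above, not data from the paper):
```python
import math

def hybrid_exponents(Q0: float, w: float):
    """Heuristic cost exponents per the abstract's formulas (one reading)."""
    z0 = 2 * Q0 / (Q0 + 0.5) ** 2
    rho = z0 * math.log(1.5, 4)                       # rho = z0 * log_4(3/2)
    H0 = 1 / (1 + math.log2(w) / (0.057981 * z0))     # assumed grouping
    return rho, rho * (1 - H0)                        # non-hybrid vs hybrid

# Illustrative values only: Q0 = 1, error width w = 4.
rho, rho_hybrid = hybrid_exponents(1.0, 4.0)
print(f"non-hybrid: 2^({rho:.4f}+o(1))n   hybrid: 2^({rho_hybrid:.4f}+o(1))n")
```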
Last updated:  2023-12-08
zkDL: Efficient Zero-Knowledge Proofs of Deep Learning Training
Haochen Sun, Tonghe Bai, Jason Li, and Hongyang Zhang
The recent advancements in deep learning have brought about significant changes in various aspects of people's lives. Meanwhile, these rapid developments have raised concerns about the legitimacy of the training process of deep neural networks. To protect the intellectual property of AI developers, directly examining the training process by accessing the model parameters and training data is often prohibited for verifiers. In response to this challenge, we present zero-knowledge deep learning (zkDL), an efficient zero-knowledge proof of deep learning training. To address the long-standing challenge of verifiable computation of non-linearities in deep learning training, we introduce zkReLU, a specialized proof for the ReLU activation and its backpropagation. zkReLU turns the disadvantage of non-arithmetic relations into an advantage, leading to the creation of FAC4DNN, our specialized arithmetic circuit design for modelling neural networks. This design aggregates the proofs over different layers and training steps, without being constrained by their sequential order in the training process. With our new CUDA implementation that achieves full compatibility with the tensor structures and the aggregated proof design, zkDL enables the generation of complete and sound proofs in less than a second per batch update for an 8-layer neural network with 10M parameters and a batch size of 64, while provably ensuring the privacy of data and model parameters. To the best of our knowledge, this is the first work on zero-knowledge proofs of deep learning training that scales to million-parameter networks.
Last updated:  2023-12-08
QCB is Blindly Unforgeable
Jannis Leuther and Stefan Lucks
QCB is a proposal for a post-quantum secure, rate-one authenticated encryption with associated data scheme (AEAD) based on classical OCB3 and \(\Theta\)CB, which are vulnerable to a quantum adversary in the Q2 setting. The authors of QCB prove integrity under plus-one unforgeability, whereas a proof under the stronger definition of blind unforgeability was left as an open problem. After a short overview of QCB and the current state of security definitions for authentication, this work proves blind unforgeability of QCB. Finally, the strategy of using tweakable block ciphers in authenticated encryption is generalised to a generic blindly unforgeable AEAD model.
Last updated:  2023-12-08
Nomadic: Normalising Maliciously-Secure Distance with Cosine Similarity for Two-Party Biometric Authentication
Nan Cheng, Melek Önen, Aikaterini Mitrokotsa, Oubaïda Chouchane, Massimiliano Todisco, and Alberto Ibarrondo
Computing the distance between two non-normalised vectors $\mathbf{x}$ and $\mathbf{y}$, represented by $\Delta(\mathbf{x},\mathbf{y})$, and comparing it to a predefined public threshold $\tau$ is an essential functionality used in privacy-sensitive applications such as biometric authentication, identification, machine learning algorithms (e.g., linear regression, k-nearest neighbors, etc.), and typo-tolerant password-based authentication. Tackling a widely used distance metric, Nomadic studies the privacy-preserving evaluation of cosine similarity in a two-party (2PC) distributed setting. We illustrate this setting in a scenario where a client uses biometrics to authenticate to a service provider, outsourcing the distance calculation to two computing servers. In this setting, we propose two novel 2PC protocols to evaluate the normalising cosine similarity between two non-normalised vectors followed by comparison to a public threshold, one in the semi-honest and one in the malicious setting. Our protocols combine additive secret sharing with function secret sharing, saving one communication round by employing a new building block to compute the composition of a function $f$ yielding a binary result with a subsequent binary gate. Overall, our protocols outperform all prior works, requiring only two communication rounds under a strong threat model that also deals with malicious inputs via normalisation. We evaluate our protocols in the setting of biometric authentication using voice, and the obtained results reveal a notable efficiency improvement compared to existing state-of-the-art works.
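In the clear, the target functionality of the two computing servers is the following threshold comparison (a plaintext reference sketch only; the protocols evaluate this over secret-shared inputs, and the example vectors and threshold are invented):
```python
import math

def cosine_above_threshold(x, y, tau):
    """Plaintext reference for the 2PC functionality: is the cosine
    similarity of two non-normalised vectors at least the public
    threshold tau? (Toy sketch; the paper computes this on shares.)"""
    dot = sum(a * b for a, b in zip(x, y))
    norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return dot / norm >= tau

print(cosine_above_threshold([1.0, 2.0, 3.0], [1.1, 1.9, 3.2], 0.95))  # True
```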
Last updated:  2023-12-08
Unclonable Cryptography in the Plain Model
Céline Chevalier, Paul Hermouet, and Quoc-Huy Vu
By leveraging the no-cloning principle of quantum mechanics, unclonable cryptography enables us to achieve novel cryptographic protocols that are otherwise impossible classically. The two most notable examples of unclonable cryptography are quantum copy-protection and unclonable encryption. Despite receiving a lot of attention in recent years, two important open questions still remain: copy-protection for point functions in the plain model, which is usually considered a feasibility demonstration, and unclonable encryption with unclonable indistinguishability security in the plain model. In this work, by relying on previous works of Coladangelo, Liu, Liu, and Zhandry (Crypto'21) and Culf and Vidick (Quantum'22), we establish a new monogamy-of-entanglement property for subspace coset states, which allows us to obtain the following new results: • We show that copy-protection of point functions exists in the plain model, with different challenge distributions (including arguably the most natural ones). • We show, for the first time, that unclonable encryption with unclonable indistinguishability security exists in the plain model.
Last updated:  2023-12-08
Faster constant-time evaluation of the Kronecker symbol with application to elliptic curve hashing
Diego F. Aranha, Benjamin Salling Hvass, Bas Spitters, and Mehdi Tibouchi
We generalize the Bernstein-Yang (BY) algorithm for constant-time modular inversion to compute the Kronecker symbol, of which the Jacobi and Legendre symbols are special cases. We start by developing a basic and easy-to-implement divstep version of the algorithm defined in terms of full-precision division steps. We then describe an optimized version due to Hamburg over word-sized inputs, similar to the jumpdivstep version of the BY algorithm, and formally verify its correctness. Along the way, we introduce a number of optimizations for implementing both versions in constant time and at high speed. The resulting algorithms are particularly suitable for the special case of computing the Legendre symbol with a dense prime $p$, where no efficient addition chain is known for the conventional approach of exponentiation to $\frac{p-1}{2}$. This is often the case for the base field of popular pairing-friendly elliptic curves. Our high-speed implementation for a range of parameters shows that the new algorithm is up to 40 times faster than the conventional exponentiation approach, and up to 25.7\% faster than the previous state of the art. We illustrate the performance of the algorithm with an application to hashing to elliptic curves, where the observed savings amount to 14.7\%--48.1\% when used for testing quadratic residuosity within the SwiftEC hashing algorithm. We also apply our techniques to the CTIDH isogeny-based key exchange, with savings of 3.5--13.5\%.
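For reference, a textbook variable-time computation of the Kronecker symbol looks as follows (a sketch following the classical reciprocity-based algorithm; the paper's point is to replace exactly this kind of branchy, input-dependent code with a constant-time divstep-based algorithm):
```python
def kronecker(a: int, n: int) -> int:
    """Kronecker symbol (a|n) via the classical binary/reciprocity laws.
    Variable-time reference only -- NOT the paper's constant-time algorithm."""
    if n == 0:
        return 1 if a in (1, -1) else 0
    if a % 2 == 0 and n % 2 == 0:
        return 0
    # Strip factors of 2 from n, using (a|2) = 1 if a = +-1 (mod 8), else -1.
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    k = 1 if v % 2 == 0 or a % 8 in (1, 7) else -1
    if n < 0:
        n = -n
        if a < 0:
            k = -k
    while a != 0:
        v = 0
        while a % 2 == 0:
            a //= 2
            v += 1
        if v % 2 == 1 and n % 8 in (3, 5):  # (2|n) factor
            k = -k
        if a % 4 == 3 and n % 4 == 3:       # quadratic reciprocity sign flip
            k = -k
        a, n = n % abs(a), abs(a)
    return k if n == 1 else 0

assert kronecker(2, 7) == 1 and kronecker(3, 7) == -1  # Legendre special cases
```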
Last updated:  2023-12-08
GRandLine: Adaptively Secure DKG and Randomness Beacon with (Almost) Quadratic Communication Complexity
Renas Bacho, Christoph Lenzen, Julian Loss, Simon Ochsenreither, and Dimitrios Papachristoudis
A randomness beacon is a source of continuous and publicly verifiable randomness which is of crucial importance for many applications. Existing works on distributed randomness beacons suffer from at least one of the following drawbacks: (i) security only against a static/non-adaptive adversary, (ii) each epoch takes many rounds of communication, or (iii) computationally expensive tools such as Proof-of-Work (PoW) or Verifiable Delay Functions (VDF). In this paper, we introduce $\mathsf{GRandLine}$, the first adaptively secure randomness beacon protocol that overcomes all these limitations while preserving simplicity and optimal resilience in the synchronous network setting. We achieve our result in two steps. First, we design a novel distributed key generation (DKG) protocol $\mathsf{GRand}$ that runs in $\mathcal{O}(\lambda n^2\log{n})$ bits of communication but, unlike most conventional DKG protocols, outputs both secret and public keys as group elements. Second, following termination of $\mathsf{GRand}$, parties can use their keys to derive a sequence of randomness beacon values, where each random value costs only a single asynchronous round and $\mathcal{O}(\lambda n^2)$ bits of communication. We implement $\mathsf{GRandLine}$ and evaluate it using a network of up to 64 parties running in geographically distributed AWS instances. Our evaluation shows that $\mathsf{GRandLine}$ can produce about 2 beacon outputs per second in a network of 64 parties. We compare our protocol to the state-of-the-art randomness beacon protocols in the same setting and observe that it vastly outperforms them.
Last updated:  2023-12-08
Cryptanalysis of QARMAv2
Hosein Hadipour and Yosuke Todo
QARMAv2 is a general-purpose and hardware-oriented family of lightweight tweakable block ciphers (TBCs) introduced in ToSC 2023. QARMAv2, as a redesign of QARMA with a longer tweak and tighter security margins, is also designed to be suitable for cryptographic memory protection and control flow integrity. The designers of QARMAv2 provided a relatively comprehensive security analysis in the design specification, e.g., some bounds for the number of attacked rounds in differential and boomerang analysis, together with some concrete impossible differential, zero-correlation, and integral distinguishers. In one of the first third-party cryptanalyses of QARMAv2, Hadipour et al. significantly improved the integral distinguishers of QARMAv2 and provided the longest concrete distinguishers of QARMAv2 to date. However, they provided no key recovery attack based on their distinguishers. This paper delves into the cryptanalysis of QARMAv2 to enhance our understanding of its security. Given that the integral distinguishers of QARMAv2 are the longest concrete distinguishers for this cipher so far, we focus on integral attacks. To this end, we first further improve the automatic tool introduced by Hadipour et al. for finding integral distinguishers of TBCs following the TWEAKEY framework. This new tool exploits the MixColumns property of QARMAv2 to find integral distinguishers more suitable for key recovery attacks. Then, we combine several techniques for integral key recovery attacks, e.g., meet-in-the-middle and partial-sum techniques, to build a fine-grained integral key recovery attack on QARMAv2. Notably, we demonstrate how to leverage the low data complexity of the integral distinguishers of QARMAv2 to reduce the memory complexity of the meet-in-the-middle technique. As a result, we propose the first concrete key recovery attacks on reduced-round versions of QARMAv2, attacking 13 rounds of QARMAv2-64-128 with a single tweak block, 14 rounds of QARMAv2-64-128 with two independent tweak blocks, and 16 rounds of QARMAv2-128-256 with two independent tweak blocks. Our attacks do not compromise the claimed security of QARMAv2, but they shed more light on the cryptanalysis of this cipher.
Last updated:  2023-12-08
A Scalable Coercion-resistant Blockchain Decision-making Scheme
Zeyuan Yin, Bingsheng Zhang, Andrii Nastenko, Roman Oliynykov, and Kui Ren
Typically, a decentralized collaborative blockchain decision-making mechanism is realized by remote voting. To date, a number of blockchain voting schemes have been proposed; however, to the best of our knowledge, none of these schemes achieve coercion-resistance. In particular, for most blockchain voting schemes, the randomness used by the voting client can be viewed as a witness/proof of the actual vote, which enables improper behaviors such as coercion and vote-buying. Unfortunately, the existing coercion-resistant voting schemes cannot be directly adopted in the blockchain context. In this work, we design the first scalable coercion-resistant blockchain decision-making scheme that supports private differential voting power and 1-layer liquid democracy as introduced by Zhang et al. (NDSS'19). Its overall complexity is $O(n)$, where $n$ is the number of voters. Moreover, the ballot size is reduced from Zhang et al.'s $\Theta(m)$ to $\Theta(1)$, where $m$ is the number of experts and/or candidates. Its incoercibility is formally proven under the UC incoercibility framework by Alwen et al. (Crypto'15). We implement a prototype of the scheme and the evaluation result shows that our scheme's tally procedure is more than 6x faster than VoteAgain (USENIX'20) in an election with over 10,000 voters and over 50\% extra ballot rate.
Last updated:  2023-12-08
How to Use Quantum Indistinguishability Obfuscation
Andrea Coladangelo and Sam Gunn
Quantum copy protection, introduced by Aaronson, enables giving out a quantum program description that cannot be meaningfully duplicated. Despite over a decade of study, copy protection is only known to be possible for a very limited class of programs. As our first contribution, we show how to achieve "best-possible" copy protection for all programs. We do this by introducing quantum state indistinguishability obfuscation (qsiO), a notion of obfuscation for quantum descriptions of classical programs. We show that applying qsiO to a program immediately achieves best-possible copy protection. Our second contribution is to show that, assuming injective one-way functions exist, qsiO is concrete copy protection for a large family of puncturable programs --- significantly expanding the class of copy-protectable programs. A key tool in our proof is a new variant of unclonable encryption (UE) that we call coupled unclonable encryption (cUE). While constructing UE in the standard model remains an important open problem, we are able to build cUE from one-way functions. If we additionally assume the existence of UE, then we can further expand the class of puncturable programs for which qsiO is copy protection. Finally, we construct qsiO relative to an efficient quantum oracle.
Last updated:  2023-12-08
Intermediate Certificate Suppression in Post-Quantum TLS: An Approximate Membership Querying Approach
Dimitrios Sikeridis, Sean Huntley, David Ott, and Michael Devetsikiotis
Quantum computing advances threaten the security of today's public key infrastructure, and have led to the pending standardization of alternative, quantum-resistant key encapsulation and digital signature cryptography schemes. Unfortunately, authentication algorithms based on the new post-quantum (PQ) cryptography create significant performance bottlenecks for TLS due to larger certificate chains which introduce additional packets and round-trips. The TLS handshake slowdown will be unacceptable to many applications, and detrimental to the broader adoption of quantum safe cryptography standards. In this paper, we propose a novel framework for Intermediate Certificate Authority (ICA) certificate suppression in TLS that reduces the authentication message size and prevents excessive round-trip delays. Our approach utilizes an approximate membership query (AMQ) data structure (probabilistic filter) to advertise known ICA certs to remote TLS endpoints so that unnecessary ICA certificates are omitted from the TLS handshake exchange. We showcase the extent of the PQ authentication overhead challenge in TLS, and evaluate the feasibility of AMQ filters for ICA suppression in terms of space and computational overhead. Finally, we experimentally evaluate the potential gains from our approach and showcase a $70\%$ reduction in exchanged ICA cert data that translates to 15-50 MB of savings in PQ TLS for certain Web-based application scenarios.
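As an illustration of the AMQ idea, the following toy Bloom filter (one common probabilistic filter; the parameters and use of SHA-256 here are our own choices, not the paper's) shows how an endpoint could compactly advertise the set of ICA certificates it already holds:
```python
import hashlib

class BloomFilter:
    """Toy approximate-membership-query (AMQ) filter: false positives are
    possible, false negatives are not. Illustrative parameters only."""
    def __init__(self, m_bits: int = 8192, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

known_icas = BloomFilter()
known_icas.add(b"ICA cert fingerprint 1")
# A peer would omit an ICA cert from the handshake only if the advertised
# filter claims the other side already knows it.
print(b"ICA cert fingerprint 1" in known_icas, b"unknown ICA" in known_icas)
```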
Last updated:  2023-12-08
In-depth Correlation Power Analysis Attacks on a Hardware Implementation of CRYSTALS-Dilithium
Huaxin Wang, Yiwen Gao, Yuejun Liu, Qian Zhang, and Yongbin Zhou
During the standardisation process of post-quantum cryptography, NIST encourages research on side-channel analysis of candidate schemes. As the recommended lattice signature scheme, CRYSTALS-Dilithium, when implemented on hardware, has only been subjected to the side-channel attack presented by Steffen et al. in IACR ePrint 2022; this attack is not complete and requires excessive traces. Therefore, we investigate the leakage of an FPGA (Kintex7) implementation of CRYSTALS-Dilithium using the CPA method, where partial private key coefficients can be recovered with a minimum of 70,000 traces. As far as we know, this is the first work that applies power leakage to side-channel attacks on FPGA implementations of CRYSTALS-Dilithium. Furthermore, we optimise the attack by extracting Points-of-Interest using known information due to parallelism (named CPA-PoI) and by iteratively utilising parallel leakages (named CPA-ITR). We experimentally demonstrate that when recovering the same number of key coefficients, CPA-PoI and CPA-ITR reduce the number of traces used by up to 16.67 percent and 25 percent, respectively, compared to the CPA method. When attacking with the same number of traces, the CPA-PoI method and the CPA-ITR method increase the number of recovered key coefficients by up to 55.17 percent and 93.10 percent, respectively, compared to the CPA method. Our experiments confirm that the FPGA implementation of CRYSTALS-Dilithium is also very vulnerable to side-channel analysis.
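To make the CPA baseline concrete, here is a minimal textbook correlation-power-analysis core on synthetic data (generic Pearson-correlation CPA, not the paper's FPGA-specific CPA-PoI/CPA-ITR variants; all data below is made up):
```python
import numpy as np

def cpa_rank(traces: np.ndarray, hyps: np.ndarray) -> np.ndarray:
    """Rank key guesses by Pearson correlation between hypothetical
    (e.g. Hamming-weight) leakage and measured traces (textbook CPA)."""
    t = traces - traces.mean(axis=0)          # traces: (n_traces, n_samples)
    scores = np.empty(hyps.shape[0])
    for g, hyp in enumerate(hyps):            # hyps: (n_guesses, n_traces)
        h = hyp - hyp.mean()
        corr = (h @ t) / (np.linalg.norm(h) * np.linalg.norm(t, axis=0))
        scores[g] = np.abs(corr).max()        # best sample point for guess g
    return np.argsort(scores)[::-1]           # best guess first

# Synthetic demo: guess 0 is the "true" leakage, guesses 1-3 are random.
rng = np.random.default_rng(0)
true_hw = rng.integers(0, 9, size=500).astype(float)
traces = np.outer(true_hw, [0.0, 1.0]) + rng.normal(0.0, 2.0, (500, 2))
hyps = np.vstack([true_hw, rng.integers(0, 9, (3, 500)).astype(float)])
print(cpa_rank(traces, hyps))  # expected to rank guess 0 first
```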
Last updated:  2023-12-08
Aegis: A Lightning Fast Privacy-preserving Machine Learning Platform against Malicious Adversaries
Taipei Lu, Bingsheng Zhang, Lichun Li, and Kui Ren
Privacy-preserving machine learning (PPML) techniques have gained significant popularity in the past years. Those protocols have been widely adopted in many real-world security-sensitive machine learning scenarios, e.g., medical care and finance. In this work, we introduce $\mathsf{Aegis}$, a high-performance PPML platform built on top of a maliciously secure 3-PC framework over the ring $\mathbb{Z}_{2^\ell}$. In particular, we propose a novel 2-round secure comparison (a.k.a. sign bit extraction) protocol in the preprocessing model. The communication of its semi-honest version is only 25% of the state-of-the-art (SOTA) constant-round semi-honest comparison protocol by Zhou et al. (S&P 2023); both communication and round complexity of its malicious version are approximately 50% of the SOTA (BLAZE) by Patra and Suresh (NDSS 2020), for $\ell=64$. Moreover, the communication of our maliciously secure inner product protocol is merely $3\ell$ bits, a 50% reduction from the SOTA (Swift) by Koti et al. (USENIX 2021). Finally, the resulting ReLU and MaxPool PPML protocols outperform the SOTA by $4\times$ in the semi-honest setting and $10\times$ in the malicious setting, respectively.
Last updated:  2023-12-08
Fully Parallel, One-Cycle Random Shuffling for Efficient Countermeasure in Post-Quantum Cryptography
Jong-Yeon Park, Dongsoo Lee, Seonggyeom Kim, Wonil Lee, Bo Gyeong Kang, and Kouichi Sakurai
Hiding countermeasures are the most widely utilized techniques for thwarting side-channel attacks, and their significance has been further emphasized with the advent of Post-Quantum Cryptography (PQC) algorithms, owing to the extensive use of vector operations. The Fisher-Yates algorithm is commonly adopted in hiding countermeasures with permuted operations for its security and efficiency in implementation, yet its inherently sequential nature imposes limitations on hardware acceleration. In this work, we propose a novel method named Addition Round Rotation (ARR), which can introduce a time-area trade-off with block-based permutation. Our findings indicate that this approach can achieve a permutation complexity level commensurate with or exceeding $2^{128}$ in a single clock cycle while maintaining substantial resistance against second-order analysis. To substantiate the security of our proposed method, we introduce a new validation technique -- Identity Verification -- which allows theoretical validation of the proposed algorithm's security and is consistent with the experimental results. Finally, we introduce an actual hardware design and provide implementation results on an Application-Specific Integrated Circuit (ASIC). The measured performance demonstrates the practical applicability of our proposal.
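For contrast with the proposed single-cycle ARR permutation, here is the classic Fisher-Yates shuffle (a standard textbook rendering, not code from the paper); note how each swap depends on the array state left by all previous swaps, which is precisely the sequential bottleneck for hardware mentioned above:
```python
import secrets

def fisher_yates(n: int) -> list[int]:
    """Uniformly random permutation of range(n) via Fisher-Yates.
    Each iteration depends on the previous one, so the n-1 swaps
    cannot be executed in a single hardware clock cycle."""
    perm = list(range(n))
    for i in range(n - 1, 0, -1):
        j = secrets.randbelow(i + 1)        # uniform in [0, i]
        perm[i], perm[j] = perm[j], perm[i]
    return perm

print(fisher_yates(8))
```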
Last updated:  2023-12-08
Reverie: an end-to-end accumulation scheme from Cyclefold
Lev Soukhanov
Recent advances in SNARK recursion and incrementally-verifiable computation are vast, but most of the efforts seem to be focused on a particular design goal - proving the result of a large computation known completely in advance. There are other possible applications, requiring different design tradeoffs. A particularly interesting direction is the case of a swarm of collaborating provers communicating over a peer-to-peer network, which also requires optimizing the amount of data exchanged between the participants of the swarm. One notable such application is Ethereum's consensus, which requires aggregating millions of signatures of individual validators. In this technical note, we propose an informal notion of an end-to-end IVC scheme, meaning that the amount of data the prover needs to exchange with the previous prover to continue the computation is small. We explore the existing design space from this point of view, and suggest an approach to constructing such a scheme by combining the PlonK proof system with the recent Cyclefold construction.
Last updated:  2023-12-07
The Blockwise Rank Syndrome Learning problem and its applications to cryptography
Nicolas Aragon, Pierre Briaud, Victor Dyseryn, Philippe Gaborit, and Adrien Vinçotte
Recently, the notion of blockwise errors in the context of rank-based cryptography was introduced by Song et al. at Asiacrypt 2023. This notion of error, very close to the sum-rank metric, permits, by decreasing the weight of the decoded error, greatly improved parameters for the LRPC and RQC cryptographic schemes. A little earlier, the multi-syndrome approach introduced for the LRPC and RQC schemes had also allowed a considerable decrease in parameter sizes, in particular through the introduction of Augmented Gabidulin codes. In the present paper we show that the two previous approaches (blockwise errors and multi-syndromes) can be combined into a unique approach which leads to very efficient generalized RQC and LRPC schemes. In order to do so, we introduce a new problem, the Blockwise Rank Support Learning problem, which consists of guessing the support of the errors when several syndromes with blockwise structured errors are given as input. The new schemes we introduce have very interesting features: for 128-bit security, the sum of public key and ciphertext is only 1.4 kB for the generalized RQC scheme and 1.7 kB for the generalized LRPC scheme. The new approach proposed in this paper yields a 40% gain in parameter size compared to previous results, obtaining even better results in terms of size than the KYBER scheme, whose total sum is 1.5 kB. Besides the description of these new schemes, the paper provides new attacks on the l-RD problem introduced by Song et al. at Asiacrypt 2023; in particular, these new attacks make it possible to cryptanalyze all the blockwise LRPC parameters they proposed (with an improvement of more than 40 bits in the case of structural attacks). We also describe combinatorial attacks and algebraic attacks for the new Blockwise Rank Support Learning problem we introduce.
Last updated:  2023-12-07
When Cryptography Needs a Hand: Practical Post-Quantum Authentication for V2V Communications
Geoff Twardokus, Nina Bindel, Hanif Rahbari, and Sarah McCarthy
We tackle the atypical challenge of supporting post-quantum cryptography (PQC) and its significant overhead in safety-critical vehicle-to-vehicle (V2V) communications, dealing with strict overhead and latency restrictions within the limited radio spectrum for V2V. For example, we show that the current use of spectrum to support signature verification in V2V makes it nearly impossible to adopt PQC. Accordingly, we propose a scheduling technique for message signing certificate transmissions (which we find are currently up to 93% redundant) that learns to adaptively reduce the use of radio spectrum. In combination, we design the first integration of PQC and V2V, which satisfies the above stringent constraints given the available spectrum. Specifically, we analyze the three PQ signature algorithms selected for standardization by NIST, as well as XMSS (RFC 8391), and propose a Partially Hybrid authentication protocol—a tailored fusion of classical cryptography and PQC—for use in the V2V ecosystem during the nascent transition period we outline towards fully PQ V2V. Our provably secure protocol efficiently balances security and performance, as demonstrated experimentally with software-defined radios (USRPs), commercial V2V devices, and road traffic and V2V simulators. We show our joint transmission scheduling optimization and Partially Hybrid design are scalable and reliable under realistic conditions, adding a negligible average delay (0.39 ms per message) against the current state-of-the-art.
Last updated:  2023-12-07
Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs
Sebastian Angel, Eleftherios Ioannidis, Elizabeth Margolin, Srinath Setty, and Jess Woods
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automaton, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
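For intuition, the statement Reef proves is, in the clear, an ordinary regex-match predicate like the one below (the document and pattern are invented examples; Reef proves the same predicate about a committed document without revealing it):
```python
import re

# Plaintext version of the statement "the committed document matches the regex".
document = "patient id 4711: test result negative"
pattern = r"test result (negative|positive)"

print(bool(re.search(pattern, document)))  # True; Reef would prove this in ZK
```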
Last updated:  2023-12-07
Impossibility of Efficient Information-Theoretic Fuzzy Extraction
Benjamin Fuller
Fuzzy extractors convert noisy signals from the physical world into reliable cryptographic keys. Fuzzy min-entropy is an important measure of the ability of a fuzzy extractor to distill keys from a distribution: in particular, it bounds the length of the key that can be derived (Fuller, Reyzin, and Smith, IEEE Transactions on Information Theory 2020). In general, fuzzy min-entropy that is superlogarithmic in the security parameter is required for a noisy distribution to be suitable for key derivation. There is a wide gap between what is possible with respect to computational and information-theoretic adversaries. Under the assumption of general-purpose obfuscation, keys can be securely derived from all distributions with superlogarithmic entropy. Against information-theoretic adversaries, however, it is impossible to build a single fuzzy extractor that works for all distributions (Fuller, Reyzin, and Smith, IEEE Transactions on Information Theory 2020). A weaker information-theoretic goal is to build a fuzzy extractor for each particular probability distribution. This is the approach taken by Woodage et al. (Crypto 2017). Prior approaches use the full description of the probability mass function and are inefficient. We show this is inherent: for a quarter of distributions with fuzzy min-entropy and $2^k$ points there is no secure fuzzy extractor that uses fewer than $2^{\Theta(k)}$ bits of information about the distribution. This result rules out the possibility of efficient, information-theoretic fuzzy extractors for many distributions with fuzzy min-entropy. We show an analogous result with stronger parameters for information-theoretic secure sketches, which are frequently used to construct fuzzy extractors.
Last updated:  2023-12-07
Shufflecake: Plausible Deniability for Multiple Hidden Filesystems on Linux
Elia Anzuoni and Tommaso Gagliardoni
We present Shufflecake, a new plausible deniability design to hide the existence of encrypted data on a storage medium, making it very difficult for an adversary to prove the existence of such data. Shufflecake can be considered a ``spiritual successor'' of tools such as TrueCrypt and VeraCrypt, but vastly improved: it works natively on Linux, it supports any filesystem of choice, and it can manage multiple volumes per device, making deniability of the existence of hidden partitions really plausible. Compared to ORAM-based solutions, Shufflecake is extremely fast and much simpler, but does not offer native protection against multi-snapshot adversaries. However, we discuss security extensions that are made possible by its architecture, and we show evidence why these extensions might be enough to thwart more powerful adversaries. We implemented Shufflecake as an in-kernel tool for Linux, adding useful features, and we benchmarked its performance, showing only a minor slowdown compared to a base encrypted system. We believe Shufflecake represents a useful tool for people whose freedom of expression is threatened by repressive authorities or dangerous criminal organizations, in particular: whistleblowers, investigative journalists, and activists for human rights in oppressive regimes.
Last updated:  2023-12-07
Falcon Takes Off - A Hardware Implementation of the Falcon Signature Scheme
Michael Schmid, Dorian Amiet, Jan Wendler, Paul Zbinden, and Tao Wei
Falcon is one of three post-quantum signature schemes selected for standardization by NIST in July 2022. To the best of our knowledge, Falcon is the only selected algorithm that does not yet have a publicly reported hardware description that performs signing or key generation. The reason might be that the Falcon signature and key generation algorithms do not fit well in hardware due to the use of floating-point numbers and recursive functions. This publication describes the first hardware implementation of Falcon signing and key generation. To overcome the complexity of the Falcon algorithms, High-Level Synthesis (HLS) was preferred over a hardware description language like Verilog or VHDL. Our HLS code is based on the C reference implementation available at NIST. We describe the modifications required for HLS compliance, such as rewriting recursive functions into iterative versions. The hardware core at security level 5 requires 45,223 LUTs, 41,370 FFs, 182 DSPs, and 37 BRAMs to calculate one signature in 8.7 ms on a Zynq UltraScale+ FPGA. Security level 5 key generation takes 320.3 ms and requires 100,649 LUTs, 91,029 FFs, 1,215 DSPs, and 69 BRAMs.
Last updated:  2023-12-07
On Active Attack Detection in Messaging with Immediate Decryption
Khashayar Barooti, Daniel Collins, Simone Colombo, Loïs Huguenin-Dumittan, and Serge Vaudenay
The widely used Signal protocol provides protection against state exposure attacks through forward security (protecting past messages) and post-compromise security (for restoring security). It supports immediate decryption, allowing messages to be re-ordered or dropped at the protocol level without affecting correctness. In this work, we consider strong active attack detection for secure messaging with immediate decryption, where parties are able to immediately detect active attacks under certain conditions. We first consider in-band active attack detection, where participants who have been actively compromised but are still able to send a single message to their partner can detect the compromise. We propose two complementary notions to capture security, and present a compiler that provides security with respect to both notions. Our notions generalise existing work (RECOVER security) which only supported in-order messaging. We also study the related out-of-band attack detection problem by considering communication over out-of-band, authenticated channels and propose analogous security notions. We prove that one of our two notions in each setting imposes a linear communication overhead in the number of sent messages and the security parameter, using an information-theoretic argument. This implies that each message must information-theoretically contain all previous messages, and that our construction, which essentially attaches the entire message history to every new message, is asymptotically optimal. We then explore ways to bypass this lower bound and highlight the feasibility of practical active attack detection compatible with immediate decryption.
Last updated:  2023-12-07
Multi-Signatures for Ad-hoc and Privacy-Preserving Group Signing
Anja Lehmann and Cavit Özbay
Multi-signatures allow combining individual signatures from different signers on the same message into a short aggregated signature. Newer schemes further allow aggregating the individual public keys, so that the combined signature is verified against a short aggregated key. This makes them a versatile alternative to threshold or distributed signatures: the aggregated key can serve as a group key, and signatures under that key can only be computed with the help of all signers. What makes multi-signatures even more attractive is their simple key management, as users can re-use the same secret key in several ad-hoc formed groups. In that context, it is desirable not to sacrifice privacy as soon as keys get re-used, and to ensure that users are not linkable across groups. In fact, when multi-signatures with key aggregation were proposed, it was claimed that aggregated keys hide the signers' identities, or even the fact that it is a combined key at all. In our work, we show that none of the existing multi-signature schemes provide these privacy guarantees when keys get re-used in multiple groups. This is due to the fact that all known schemes deploy deterministic key aggregation. To overcome this limitation, we propose a new variant of multi-signatures with probabilistic yet verifiable key aggregation. We formally define the desirable privacy and unforgeability properties in the presence of key re-use. This also requires adapting the unforgeability model to the group setting, and ensuring that key re-use does not weaken the expected guarantees. We present a simple BLS-based scheme that securely realizes our strong privacy and security guarantees. We also formalize and investigate the privacy that is achievable by deterministic schemes, and prove that existing schemes provide the advertised privacy features as long as one public key remains secret.
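The role of probabilistic key aggregation can be illustrated with a toy example (our sketch over $\mathbb{Z}_p^*$ as a stand-in group, not the paper's BLS construction): deterministic aggregation maps a fixed key set to a single aggregate key, which is what makes re-used keys linkable across groups, while fresh aggregation randomness yields unlinkable yet re-derivable aggregate keys.

    # Hedged toy illustration: deterministic vs. randomized-but-verifiable
    # key aggregation. Group operations use Z_p^* for simplicity only.

    import hashlib, secrets

    p = 2**255 - 19            # toy prime modulus (illustrative only)
    g = 5

    def H(*parts):
        data = b"|".join(str(x).encode() for x in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

    def aggregate(pks, rho=None):
        """rho=None: deterministic (linkable). rho random: probabilistic."""
        apk = 1
        for pk in pks:
            coeff = H(rho, pk, tuple(pks))
            apk = apk * pow(pk, coeff, p) % p
        return apk

    sks = [secrets.randbelow(p) for _ in range(3)]
    pks = [pow(g, sk, p) for sk in sks]

    # Deterministic: two groups formed from the same keys share one apk.
    assert aggregate(pks) == aggregate(pks)

    # Probabilistic: fresh rho per group gives unlinkable apks, yet anyone
    # holding (pks, rho) can re-derive and verify the aggregate key.
    rho1, rho2 = secrets.token_hex(16), secrets.token_hex(16)
    assert aggregate(pks, rho1) != aggregate(pks, rho2)
    assert aggregate(pks, rho1) == aggregate(pks, rho1)   # verifiable re-derivation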
Last updated:  2023-12-07
Lattice Based Signatures with Additional Functionalities
Swati Rawal, Sahadeo Padhye, and Debiao He
A digital signature is a cryptographic protocol that can provide the added assurances of identity, status, and proof of origin of an electronic document, and can acknowledge informed consent by the signer. Lattice-based assumptions have seen a surge of interest in recent years, driven by the desire to base digital signatures on hardness assumptions beyond the factoring and discrete logarithm problems. In this article, we cover the recent progress made in digital signatures based on lattice assumptions. The article briefly discusses the working of each signature scheme, then surveys the progress made in recent years and compares the schemes with respect to different aspects of security and efficiency. It also provides some future directions that can be helpful for further work in this area.
Last updated:  2023-12-07
Cutting the GRASS: Threshold GRoup Action Signature Schemes
Michele Battagliola, Giacomo Borin, Alessio Meneghetti, and Edoardo Persichetti
Group actions are fundamental mathematical tools, with a long history of use in cryptography. Indeed, the action of finite groups at the basis of the discrete logarithm problem is behind a very large portion of modern cryptographic systems. With the advent of post-quantum cryptography, however, the method for building protocols shifted towards a different paradigm, centered on the difficulty of discerning 'noisy' objects, as is the case for lattices, codes, and multivariate systems. This method yields promising results for 'core' primitives such as encryption or signature, but can be less than ideal when more advanced functionalities are required. In this work, we show that isomorphism problems which stem from cryptographic group actions can be viable building blocks for threshold signature schemes. In particular, we construct a full $N$-out-of-$N$ threshold signature scheme, and discuss the efficiency issues arising from extending it to the generic $T$-out-of-$N$ case. To give a practical outlook on our constructions, we instantiate them with the LESS and MEDS frameworks, which are two flavors of code-based cryptographic group actions. Finally, we highlight some ideas that would allow for a more efficient and compact $(T,N)$ threshold variant of LESS, whose security relies on new hardness assumptions.
Last updated:  2023-12-07
The statistical nature of leakage in SSE schemes and its role in passive attacks
Marc Damie, Jean-Benoist Leger, Florian Hahn, and Andreas Peter
Encrypted search schemes have been proposed to address growing privacy concerns. However, several leakage-abuse attacks have highlighted the shortcomings of these schemes. The literature remains vague about the consequences of these attacks for real-world applications: are these attacks dangerous in practice? Is it safe to use these schemes? Do we even need countermeasures? This paper introduces a novel mathematical model for attackers' knowledge using statistical estimators. Our model reveals that any attacker's knowledge is inherently noisy, which limits attack effectiveness. This inherent noise can be considered a security guarantee, a natural attack mitigation. Capitalizing on this insight, we develop a risk assessment protocol to guide real-world deployments. Our findings demonstrate that limiting the index size is an effective lever to bound attack accuracy. Finally, we employ similar statistical methods to enhance attack analysis methodology. Hence, our work offers a fresh perspective on SSE attacks and provides practitioners and researchers with novel methodological tools.
Last updated:  2023-12-07
Blockchain Governance via Sharp Anonymous Multisignatures
Wonseok Choi, Xiangyu Liu, and Vassilis Zikas
Electronic voting has occupied a large part of the cryptographic protocols literature. The recent reality of blockchains---in particular their need for online governance mechanisms---has put new parameters and requirements on the problem. We identify the key requirements of a blockchain governance mechanism, namely correctness (including the elimination of double votes), voter anonymity, and traceability, and investigate mechanisms that can achieve them with minimal interaction and under assumptions that fit the blockchain setting. First, we define a signature-like primitive, which we term sharp anonymous multisignatures (in short, #AMS), that tightly meets the needs of blockchain governance. In a nutshell, #AMSs allow any set of parties to generate a signature, e.g., on a proposal to be voted upon, which, if posted on the blockchain, hides the identities of the signers/voters but reveals their number. This can be seen as a (strict) generalization of threshold ring signatures (TRS). We next turn to constructing such #AMSs and using them in various governance scenarios---e.g., single vs. multiple votes per voter. To this end, we observe that although the definition of TRS does not imply #AMS, one can compile some of the existing TRS constructions into #AMS. This raises the question: What is the TRS structure that allows such a compilation? To answer the above, we devise templates for TRSs. Our templates encapsulate and abstract the structure that allows for the above compilation---most of the TRS schemes that can be compiled into #AMS are, in fact, instantiations of our template. This abstraction makes our template generic for instantiating TRSs and #AMSs from different cryptographic assumptions (e.g., DDH, LWE, etc.). One of our templates is based on chameleon hashing, and we explore a framework of lossy chameleon hashes to fully understand its nature. Finally, we turn to how #AMS schemes can be used in our applications. We provide fast (in some cases non-interactive) #AMS-based blockchain governance mechanisms for a wide spectrum of assumptions on the honesty (semi-honest vs. malicious) and availability of voters and proposers.
Last updated:  2023-12-07
Cryptanalysis of Lattice-Based Sequentiality Assumptions and Proofs of Sequential Work
Chris Peikert and Yi Tang
This note describes a total break of the sequentiality assumption (and broad generalizations thereof) underlying the candidate lattice-based proof of sequential work (PoSW) recently proposed by Lai and Malavolta at CRYPTO 2023. Specifically, for sequentiality parameter $T$ and SIS parameters $n,q,m = n \log q$, the attack computes a solution of norm $(m+1)^{\log_{k} T}$ (or norm $O(\sqrt{m})^{\log_{k} T}$ with high probability) in depth $\tilde{O}_{n,q}(k \log_{k} T)$, where the integer $k \leq T$ may be freely chosen. (The $\tilde{O}$ notation hides polylogarithmic factors in the variables appearing in its subscript.) In particular, with the typical parameterization $\log q = \tilde{O}_{n,T}(1)$, for $k=2$ the attack finds a solution of quasipolynomial norm $O(\sqrt{m})^{\log T}$ in only *polylogarithmic* $\tilde{O}_{n,T}(1)$ depth; this strongly falsifies the assumption that finding such a solution requires depth *linear* in $T$. Alternatively, setting $k = T^{\varepsilon}$, the attack finds a solution of polynomial norm $O(\sqrt{m})^{1/\varepsilon}$ in depth $\tilde{O}_{n,T}(T^{\varepsilon})$, for any constant $\varepsilon > 0$. We stress that the attack breaks the *assumption* underlying the proposed PoSW, but not the *PoSW itself* as originally defined. However, the attack does break a *slight modification* of the original PoSW, which has an essentially identical security proof (under the same kind of falsified assumption). This suggests that whatever security the original PoSW may have is fragile, and further motivates the search for a PoSW based on a sound lattice-based assumption.
Last updated:  2023-12-07
ProtoStar: Generic Efficient Accumulation/Folding for Special Sound Protocols
Benedikt Bünz and Binyi Chen
Accumulation is a simple yet powerful primitive that enables incrementally verifiable computation (IVC) without the need for recursive SNARKs. We provide a generic, efficient accumulation (or folding) scheme for any $(2k-1)$-move special-sound protocol with a verifier that checks $\ell$ degree-$d$ equations. The accumulation verifier only performs $k+2$ elliptic curve multiplications and $k+d+O(1)$ field/hash operations. Using the compiler from BCLMS21 (Crypto 21), this enables building efficient IVC schemes where the recursive circuit only depends on the number of rounds and the verifier degree of the underlying special-sound protocol but not the proof size or the verifier time. We use our generic accumulation compiler to build ProtoStar. ProtoStar is a non-uniform IVC scheme for Plonk that supports high-degree gates and (vector) lookups. The recursive circuit is dominated by $3$ group scalar multiplications and a hash of $d^*$ field elements, where $d^*$ is the degree of the highest gate. The scheme does not require a trusted setup or pairings, and the prover does not need to compute any FFTs. The prover in each accumulation/IVC step is also only logarithmic in the number of supported circuits and independent of the table size in the lookup.
Last updated:  2023-12-07
On the Black-Box Impossibility of Multi-Designated Verifiers Signature Schemes from Ring Signature Schemes
Kyosuke Yamashita and Keisuke Hara
Since the work by Laguillaumie and Vergnaud in ICICS'04, it has been widely believed that multi-designated verifier signature schemes (MDVS) can be constructed from ring signature schemes in general. However, in this paper, somewhat surprisingly, we prove that it is impossible to construct an MDVS scheme from a ring signature scheme in a black-box sense (in the standard model). The impossibility stems from the difference between the definitions of unforgeability. To the best of our knowledge, existing works demonstrating such constructions do not provide a formal reduction from an MDVS scheme to a ring signature scheme, and thus the impossibility has been overlooked for a long time.
Last updated:  2023-12-06
Optimal Flexible Consensus and its Application to Ethereum
Joachim Neu, Srivatsan Sridhar, Lei Yang, and David Tse
Classic BFT consensus protocols guarantee safety and liveness for all clients if fewer than one-third of replicas are faulty. However, in applications such as high-value payments, some clients may want to prioritize safety over liveness. Flexible consensus allows each client to opt for a higher safety resilience, albeit at the expense of reduced liveness resilience. We present the first construction that allows optimal safety-liveness tradeoff for every client simultaneously. This construction is modular and is realized as an add-on applied on top of an existing consensus protocol. The add-on consists of an additional round of voting and permanent locking done by the replicas, to sidestep a sub-optimal quorum-intersection-based constraint present in previous solutions. We adapt our construction to the existing Ethereum protocol to derive optimal flexible confirmation rules that clients can adopt unilaterally without requiring system-wide changes. This is possible because existing Ethereum protocol features can double as the extra voting and locking. We demonstrate an implementation using Ethereum's consensus API.
Last updated:  2023-12-06
A Multiparty Commutative Hashing Protocol based on the Discrete Logarithm Problem
Daniel Zentai, Mihail Plesa, and Robin Frot
Let $\mathcal{X}$ and $\mathcal{Y}$ be two sets and suppose that a set of participants $P=\{P_1,P_2,\dots,P_n\}$ would like to calculate the keyed hash value of some message $m\in\mathcal{X}$ known to a single participant in $P$ called the data owner. Also, suppose that each participant $P_i$ knows a secret value $x_i\in\mathcal{X}$. In this paper, we will propose a protocol that enables the participants in this setup to calculate the value $y=H(m,x_1,x_2,\dots ,x_n)$ of a hash function $H:\mathcal{X}^{n+1}\rightarrow\mathcal{Y}$ such that: - The function $H$ is a one-way function. - Participants in $P\backslash\{P_i\}$ cannot obtain $x_i$. - Participants other than the data owner cannot obtain $m$. - The hash value $y=H(m,x_1,x_2,\dots ,x_n)$ remains the same regardless of the order of the secret $x_i$ values.
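One natural DLP-based instantiation matching these requirements (our guess, for illustration only; the paper defines the actual protocol) sets $y = g^{m \cdot x_1 \cdots x_n} \bmod p$: the data owner publishes $g^m$, and each participant in turn raises the running value to its secret $x_i$, so the result is independent of the order.

    # Minimal sketch of a commutative DLP-based keyed hash (our guess, not
    # necessarily the authors' protocol). One-wayness rests on the discrete
    # logarithm; exponents multiply in any order, so the hash is
    # order-independent; only g^m and blinded powers ever circulate.

    import secrets

    p = 2**127 - 1        # toy Mersenne prime; pick a safe prime in practice
    g = 3

    def owner_start(m):
        return pow(g, m, p)          # hides m, binds the hash to it

    def participant_step(value, x_i):
        return pow(value, x_i, p)    # x_i never leaves the participant

    m = 1234567
    xs = [secrets.randbelow(p - 1) + 1 for _ in range(4)]

    y = owner_start(m)
    for x in xs:                     # forward order
        y = participant_step(y, x)

    y_rev = owner_start(m)
    for x in reversed(xs):           # reversed order, same hash value
        y_rev = participant_step(y_rev, x)

    assert y == y_rev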
Last updated:  2023-12-06
Predicting performance for post-quantum encrypted-file systems
Daniel J. Bernstein
Public-key cryptography is widely deployed for encrypting stored files. This paper uses microbenchmarks and purchase costs to predict the performance of various post-quantum KEMs in this application, in particular concluding that Classic McEliece is (1) the most efficient option and (2) easily affordable.
Last updated:  2023-12-06
Leaking-Cascade: an Optimal Construction for KEM Hybridization
Céline Chevalier, Guirec Lebrun, and Ange Martinelli
Hybrid post-quantum cryptography is a cautious approach that aims to guard against the threat posed by the quantum computer, through the simultaneous use of Post-Quantum (PQ) and classical (i.e., pre-quantum) cryptosystems, should the post-quantum schemes used prove insecure. Regarding the hybridization of Key Encapsulation Mechanisms (KEMs), most recent studies focus on safely combining the symmetric keys output by a parallel execution of classical and post-quantum KEMs. While this architecture is straightforward, it appears to lack computational efficiency and bandwidth optimization. Hence, we propose a novel method for more effectively hybridizing several KEMs, by combining the underlying Public-Key Encryption schemes (PKEs) in an innovative variant of the cascade composition that we call “leaking-cascade”, before turning the hybrid PKE into a KEM with a FO transformation. We prove that this architecture constitutes a robust combiner for encryption schemes up to IND-CPA security, which makes it possible to obtain an IND-CCA2-secure KEM. In terms of performance, our leaking-cascade scheme is at least as computationally efficient as the commonly used parallel combination and has a better communication cost, with a ciphertext bandwidth gain that may exceed 13% compared to the latter. Moreover, we prove that for given PKEs that need to be hybridized, the leaking-cascade has an optimal ciphertext communication cost.
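For context, here is a minimal sketch of the plain cascade composition that leaking-cascade refines (our illustration; the toy XOR "PKEs" are placeholders standing in for real classical and post-quantum schemes):

    # Hedged sketch of cascade composition: the inner PKE's ciphertext is
    # the outer PKE's plaintext, so the hybrid stays confidential as long
    # as either component does. Illustration only; not the paper's scheme.

    import hashlib, secrets

    def make_toy_pke(tag):
        """Placeholder PKE: XOR with H(tag, key, counter). NOT a real PKE."""
        def keygen():
            sk = secrets.token_bytes(32)
            return sk, sk                      # toy: pk == sk, for brevity
        def stream(key, n):
            out = b""
            for ctr in (i.to_bytes(4, "big") for i in range(0, n, 32)):
                out += hashlib.sha256(tag + key + ctr).digest()
            return out[:n]
        def enc(pk, msg):
            return bytes(a ^ b for a, b in zip(msg, stream(pk, len(msg))))
        dec = enc                              # XOR is its own inverse
        return keygen, enc, dec

    def cascade(pke_inner, pke_outer):
        kg1, enc1, dec1 = pke_inner
        kg2, enc2, dec2 = pke_outer
        def keygen():
            (pk1, sk1), (pk2, sk2) = kg1(), kg2()
            return (pk1, pk2), (sk1, sk2)
        def enc(pk, msg):
            return enc2(pk[1], enc1(pk[0], msg))   # outer wraps inner
        def dec(sk, ct):
            return dec1(sk[0], dec2(sk[1], ct))
        return keygen, enc, dec

    kg, enc, dec = cascade(make_toy_pke(b"classical"), make_toy_pke(b"pq"))
    pk, sk = kg()
    assert dec(sk, enc(pk, b"hybrid KEM plaintext")) == b"hybrid KEM plaintext"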
Last updated:  2023-12-06
Security Analysis of an Image Encryption Scheme Based on a New Secure Variant of Hill Cipher and 1D Chaotic Maps
George Teseleanu
In 2019, Essaid et al. introduced a chaotic map-based encryption scheme for color images. Their approach employs three improved chaotic maps to dynamically generate the key bytes and matrix required by the cryptosystem. It should be noted that these parameters are dependent on the size of the source image. According to the authors, their method offers adequate security (i.e., $279$ bits) for transmitting color images over unsecured channels. However, we show in this paper that this is not the case. Specifically, we present two cryptanalytic attacks that undermine the security of Essaid et al.'s encryption scheme. In the case of the chosen plaintext attack, we require only two chosen plaintexts to completely break the scheme. The second attack is a chosen ciphertext attack, which requires two chosen ciphertexts and, compared to the first one, has a rough complexity of $2^{24}$. The attacks are feasible due to the fact that the key bits and matrix generated by the algorithm remain unaltered for distinct plaintext images.
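The root cause can be illustrated with a toy example (ours, simplified to a single XOR layer; the actual attack needs two chosen plaintexts because of the scheme's additional structure):

    # Toy illustration of why keystream material that stays fixed across
    # images of the same size is fatal: for a pure XOR layer, one all-zero
    # chosen plaintext already leaks the whole keystream.

    import secrets

    SIZE = 16
    keystream = secrets.token_bytes(SIZE)            # fixed per image size

    def encrypt(plain):                              # stand-in XOR layer
        return bytes(p ^ k for p, k in zip(plain, keystream))

    # Chosen plaintext of all zeros: the ciphertext IS the keystream.
    recovered = encrypt(bytes(SIZE))
    assert recovered == keystream

    # Every other ciphertext of the same size now decrypts for free.
    victim = encrypt(b"top secret image")
    assert bytes(c ^ k for c, k in zip(victim, recovered)) == b"top secret image"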
Last updated:  2023-12-06
Thwarting Last-Minute Voter Coercion
Rosario Giustolisi, Maryam Sheikhi Garjan, and Carsten Schuermann
Counter-strategies are key components of coercion-resistant voting schemes, allowing voters to submit votes that represent their own intentions in an environment controlled by a coercer. By deploying a counter-strategy a voter can prevent the coercer from learning if the voter followed the coercer’s instructions or not. Two effective counter-strategies have been proposed in the literature, one based on fake credentials and another on revoting. While fake-credential schemes assume that voters hide cryptographic keys away from the coercer, revoting schemes assume that voters can revote after being coerced. In this work, we present a new counter-strategy technique that enables flexible vote updating, that is, a revoting approach that provides protection against coercion even if the adversary is able to coerce a voter at the very last minute of the voting phase. We demonstrate that our technique is effective by implementing it in Loki, an Internet-based coercion-resistant voting scheme that allows revoting. We prove that Loki satisfies a game-based definition of coercion-resistance that accounts for flexible vote updating. To the best of our knowledge, we provide the first technique that enables deniable coercion-resistant voting and that can evade last-minute voter coercion.
Last updated:  2023-12-06
Security Analysis of an Image Encryption Based on the Kronecker Xor Product, the Hill Cipher and the Sigmoid Logistic Map
George Teseleanu
In 2023, Mfungo et al. introduced an image encryption scheme that employs the Kronecker xor product, the Hill cipher and a chaotic map. Their proposal uses the chaotic map to dynamically generate two out of the three secret keys employed by their scheme. Note that both keys are dependent on the size of the original image, while the Hill key is static. Despite the authors' assertion that their proposal offers sufficient security ($149$ bits) for transmitting color images over unsecured channels, we found that this is not accurate. To support our claim, we present a chosen plaintext attack that requires $2$ oracle queries and has a worst-case complexity of $\mathcal O(2^{32})$. Note that in this case Mfungo et al.'s scheme has a complexity of $\mathcal O(2^{33})$, and thus our attack is two times faster than an encryption. The reason why this attack is viable is that the two keys remain unchanged for different plaintext images of the same size, while the Hill key remains unaltered for all images.
Last updated:  2023-12-06
SoK: Post-Quantum TLS Handshake
Nouri Alnahawi, Johannes Müller, Jan Oupický, and Alexander Wiesmaier
Transport Layer Security (TLS) is the backbone security protocol of the Internet. As this fundamental protocol is at risk from future quantum attackers, many proposals have been made to protect TLS against this threat by implementing post-quantum cryptography (PQC). The widespread interest in post-quantum TLS has given rise to a large number of solutions over the last decade. These proposals differ in many aspects, including the security properties they seek to protect, the efficiency and trustworthiness of their post-quantum building blocks, and the application scenarios they consider, to name a few. Based on an extensive literature review, we classify existing solutions according to their general approaches, analyze their individual contributions, and present the results of our extensive performance experiments. Based on these insights, we identify the most reasonable candidates for post-quantum TLS, which research problems in this area have already been solved, and which are still open. Overall, our work provides a well-founded reference point for researching post-quantum TLS and preparing TLS in practice for the quantum age.
Last updated:  2023-12-06
Integral Multiset: A Novel Framework for Integral Attacks over Finite Fields
Weizhe Wang and Deng Tang
In recent years, symmetric primitives that focus on arithmetic metrics over large finite fields, characterized as arithmetization-oriented (\texttt{AO}) ciphers, are widely used in advanced protocols such as secure multi-party computations (MPC), fully homomorphic encryption (FHE) and zero-knowledge proof systems (ZK). To ensure good performance in protocols, these \texttt{AO} ciphers are commonly designed with a small number of multiplications over finite fields and low multiplicative depths. This feature makes \texttt{AO} ciphers vulnerable to algebraic attacks, especially integral attacks. While a well-developed analysis of integral attacks on traditional block ciphers defined over $\mathbb{F}_2$ exists, there is still a lack of research on this kind of attack over large finite fields. Previous integral attacks over large finite fields are primarily higher-order differential attacks, which construct distinguishers by simply utilizing algebraic degrees without fully exploiting other algebraic properties of finite fields. In this paper, we propose a new concept called \textit{integral multiset}, which provides a clear characterization of the integral property of multisets over the finite field $\mathbb{F}_{p^n}$. Based on multiplicative subgroups of finite fields, we present a new class of integral multisets that exhibits a completely different integral property compared to the previously studied multisets based on vector subspaces over the finite field $\mathbb{F}_2$. In addition, we also present a method for merging existing integral multisets to create a new one with a better integral property. Furthermore, combining with monomial detection techniques, we propose a framework for searching for integral distinguishers based on integral multisets. We apply our new framework to some competitive \texttt{AO} ciphers, including \textsf{MiMC} and \textsf{Chaghri}. For all these ciphers, we successfully find integral distinguishers with lower time and data complexity. Especially for \textsf{MiMC}, the complexity of some distinguishers we find is only a half or a quarter of the previous best one. Due to the specific algebraic structure, all of our results could not be obtained by higher-order differential attacks. Furthermore, our framework perfectly adapts to various monomial detection techniques like general monomial prediction proposed by Cui et al. at ASIACRYPT 2022 and coefficient grouping invented by Liu et al. at EUROCRYPT 2023. We believe that our work will provide new insight into integral attacks over large finite fields.
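A quick numeric check of the algebraic fact underlying multiplicative-subgroup integral multisets (our illustration): for a subgroup $H \le \mathbb{F}_p^*$ of order $t$, the power sum $\sum_{x\in H} x^d$ vanishes unless $t \mid d$, a behavior with no analogue for vector-subspace multisets over $\mathbb{F}_2$.

    # Verify: for H = <h> of order t in F_p^*, sum_{x in H} x^d = t if t | d,
    # and 0 otherwise (geometric-series argument in the exponent).

    p = 257
    g = 3                                  # 3 generates F_257^* (order 256)
    t = 16
    h = pow(g, (p - 1) // t, p)            # h has order t; H = <h>
    H = [pow(h, i, p) for i in range(t)]

    for d in range(1, 40):
        s = sum(pow(x, d, p) for x in H) % p
        expected = t % p if d % t == 0 else 0
        assert s == expected

    print("sum over H of x^d vanishes unless", t, "divides d (p = 257)")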
Last updated:  2023-12-06
B2T: The Third Logical Value of a Bit
Dipesh, Vishesh Mishra, and Urbi Chatterjee
Modern computing systems predominantly operate on the binary number system, which accepts only '0' or '1' as logical values, leading to computational homogeneity. This homogeneity creates leakage patterns that can be exploited by adversaries to carry out hardware- and software-level attacks. Recent research has shown that ternary systems, operating on three logical values ('0', '1', and 'z'), can surpass binary systems in terms of performance and security. In this paper, we first propose a novel approach that assigns logical values based on the direction of current flow within a conducting element, rather than relying on the voltage scale. Furthermore, we also present mathematical models for each ternary gate.
Last updated:  2023-12-06
An Improved Method for Evaluating Secret Variables and Its Application to WAGE
Weizhe Wang, Haoyang Wang, and Deng Tang
The cube attack is a powerful cryptanalysis technique against symmetric ciphers, especially stream ciphers. The adversary aims to recover secret key bits by solving equations that involve the key. To simplify the equations, a set of plaintexts called a cube is summed up together. Traditional cube attacks use only linear or quadratic superpolies, and the size of the cube is limited to an experimental range, typically around 40. However, the cube attack based on division property, proposed by Todo et al. at CRYPTO 2017, overcomes these limitations and enables theoretical cube attacks on many lightweight stream ciphers. For a given cube $I$, they evaluate the set $J$ of secret key bits involved in the superpoly and require $2^{|I|+|J|}$ encryptions to recover the superpoly. However, the secret variables evaluation method proposed by Todo et al. sometimes becomes unresponsive and fails to solve within a reasonable time. In this paper, we propose an improvement to Todo's method by breaking down difficult-to-solve problems into several smaller sub-problems. Our method retains the efficiency of Todo's method while effectively avoiding unresponsive situations. We apply our method to the WAGE cipher, an NLFSR-based authenticated encryption algorithm and one of the second-round candidates in the NIST LWC competition. Specifically, we successfully mount cube attacks on 29-round WAGE, as well as on 24-round WAGE with a sponge constraint. To the best of our knowledge, this is the first cube attack against the WAGE cipher, which provides a more accurate characterization of WAGE's resistance against algebraic attacks.
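The basic cube-attack step that the evaluation method above supports can be seen in a toy example (ours): summing the cipher output over all assignments of the cube variables cancels every term except the superpoly.

    # Toy cube attack: f is a made-up "cipher output" in key bits k and IV
    # bits v, chosen so the superpoly over the cube {v0, v1} is k0 ^ k1.

    from itertools import product

    def f(k, v):
        # v0*v1*(k0 ^ k1)  ^  v0*k2  ^  v1  ^  k0*k1
        return (v[0] & v[1] & (k[0] ^ k[1])) ^ (v[0] & k[2]) ^ v[1] ^ (k[0] & k[1])

    def cube_sum(k, cube_size=2):
        s = 0
        for v in product((0, 1), repeat=cube_size):   # all 2^|I| cube points
            s ^= f(k, v)
        return s

    # Every term not divisible by v0*v1 is XORed an even number of times
    # and cancels, leaving exactly the superpoly k0 ^ k1.
    for k in product((0, 1), repeat=3):
        assert cube_sum(k) == k[0] ^ k[1]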
Last updated:  2023-12-05
Verifiable Distributed Aggregation Functions
Hannah Davis, Christopher Patton, Mike Rosulek, and Phillipp Schoppmann
The modern Internet is built on systems that incentivize collection of information about users. In order to minimize privacy loss, it is desirable to prevent these systems from collecting more information than is required for the application. The promise of multi-party computation is that data can be aggregated without revealing individual measurements to the data collector. This work offers a provable security treatment for "Verifiable Distributed Aggregation Functions (VDAFs)", a class of multi-party computation protocols being considered for standardization by the IETF. We propose a formal framework for the analysis of VDAFs and apply it to two constructions. The first is Prio3, one of the candidates for standardization. This VDAF is based on the Prio system of Corrigan-Gibbs and Boneh (NSDI 2017). We prove that Prio3 achieves our security goals with only minor changes to the draft. The second construction, called Doplar, is introduced by this paper. Doplar is a round-reduced variant of the Poplar system of Boneh et al. (IEEE S&P 2021), itself a candidate for standardization. The cost of this improvement is a modest increase in overall bandwidth and computation.
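For intuition, here is a minimal sketch (ours, deliberately omitting the verifiability layer that is the actual subject of VDAFs) of the additive secret sharing at the core of systems like Prio3:

    # Hedged sketch: each client splits its measurement into additive shares
    # mod q, one per aggregator, so no single aggregator sees any value, yet
    # the combined partial sums reveal exactly the aggregate.

    import secrets

    q = 2**61 - 1          # toy field modulus
    N_AGGREGATORS = 2

    def share(measurement):
        shares = [secrets.randbelow(q) for _ in range(N_AGGREGATORS - 1)]
        shares.append((measurement - sum(shares)) % q)
        return shares

    measurements = [3, 1, 4, 1, 5]
    # Each aggregator only ever sees its own column of shares...
    columns = list(zip(*[share(m) for m in measurements]))
    partial_sums = [sum(col) % q for col in columns]
    # ...yet the combined partial sums equal the true aggregate.
    assert sum(partial_sums) % q == sum(measurements) % q

What a VDAF adds on top of this is a distributed check that each client's shares encode a *valid* measurement, without revealing it.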
Last updated:  2023-12-05
Accountable Bulletin Boards: Definition and Provably Secure Implementation
Mike Graf, Ralf Küsters, Daniel Rausch, Simon Egger, Marvin Bechtold, and Marcel Flinspach
Bulletin boards (BB) are important cryptographic building blocks that, at their core, provide a broadcast channel with memory. BBs are widely used within many security protocols, including secure multi-party computation protocols, e-voting systems, and electronic auctions. Even though the security of protocols crucially depends on the underlying BB, as also highlighted by recent works, the literature on constructing secure BBs is sparse. The only provably secure BBs proposed so far require trusted components and sometimes also networks without message loss, which makes them unsuitable for applications with particularly high security needs where these assumptions might not always be met. In this work, we fill this gap by leveraging the concepts of accountability and universal composability (UC). More specifically, we propose the first ideal functionality for accountable BBs that formalizes the security requirements of such BBs in UC. We then propose Fabric$^\ast_\text{BB}$ as a slight extension designed on top of Fabric$^\ast$, which is a variant of the prominent Hyperledger Fabric distributed ledger protocol, and show that Fabric$^\ast_\text{BB}$ UC-realizes our ideal BB functionality. This result makes Fabric$^\ast_\text{BB}$ the first provably accountable BB, an often desired, but so far not formally proven property for BBs, and also the first BB that has been proven to be secure based only on standard cryptographic assumptions and without requiring trusted BB components or network assumptions. Through an implementation and performance evaluation we show that Fabric$^\ast_\text{BB}$ is practical for many applications of BBs.
Last updated:  2023-12-05
COMMON: Order Book with Privacy
Albert Garreta, Adam Gągol, Aikaterini-Panagiota Stouka, Damian Straszak, and Michal Zajac
Decentralized Finance (DeFi) has witnessed remarkable growth and innovation, with Decentralized Exchanges (DEXes) playing a pivotal role in shaping this ecosystem. As numerous DEX designs emerge, challenges such as price inefficiency and lack of user privacy continue to prevail. This paper introduces a novel DEX design, termed COMMON, that addresses these two predominant challenges. COMMON operates as an order book, natively integrated with a shielded token pool, thus providing anonymity to its users. Through the integration of zk-SNARKs, order batching, and Multiparty Computation (MPC), COMMON also allows concealing the values in orders. This feature, paired with users never leaving the shielded pool when utilizing COMMON, provides a high level of privacy. To enhance price efficiency, we introduce a two-stage order matching process: initially, orders are internally matched, followed by an open, permissionless Dutch Auction to present the assets to Market Makers. This design effectively enables aggregating multiple sources of liquidity and helps reduce the adverse effects of Maximal Extractable Value (MEV), by redirecting most of the MEV profits back to the users.
Last updated:  2023-12-05
Different Flavours of HILL Pseudoentropy and Yao Incompressibility Entropy
Pihla Karanko
There are two popular ways to measure computational entropy in cryptography: (HILL) pseudoentropy and (Yao) incompressibility entropy. Both of these computational entropy notions are based on a natural intuition. - A random variable $X$ has $k$ bits of pseudoentropy if there exists a random variable $Y$ that has $k$ bits of 'real' entropy and $Y$ is computationally indistinguishable from $X$. - A random variable $X$ has $k$ bits of incompressibility entropy if $X$ cannot be efficiently compressed to less than $k$ bits. It is also intuitive that if a random variable has high pseudoentropy, then it should also have high incompressibility entropy, because a high-entropy distribution cannot be compressed. However, the above intuitions are not precise. Does 'real entropy' refer to Shannon entropy or min-entropy? What kind of correctness do we require from the compressor algorithm? Different papers use slightly different variations of both pseudoentropy and incompressibility entropy. In this note we study these subtle differences and see how they affect the parameters in the implication that pseudoentropy implies incompressibility.
Last updated:  2023-12-05
When NTT Meets SIS: Efficient Side-channel Attacks on Dilithium and Kyber
Zehua Qiao, Yuejun Liu, Yongbin Zhou, Mingyao Shao, and Shuo Sun
In 2022, NIST selected Kyber and Dilithium as post-quantum cryptographic standard algorithms. The Number Theoretic Transform (NTT) algorithm, which facilitates polynomial multiplication, has become a primary target for side-channel attacks. Among these, Correlation Power Analysis (CPA) attacks against the NTT have received much attention; they aim to recover all the coefficients of the private key in the NTT domain. The necessity to recover all these coefficients not only limits efficiency but also directly impacts the feasibility of such attacks. Thus, a crucial question emerges: can the remaining coefficients be recovered using only a subset of known ones? In this work, we respond affirmatively by introducing overdetermined-system-based and SIS-assisted key recovery methods for both Dilithium and Kyber, tailored for scenarios with incomplete NTT-domain private keys. The SIS-assisted method, by embedding the NTT transform matrix into the SIS search problem, offers a complete key recovery with the minimum number of known coefficients in the NTT domain. For Kyber512 and Dilithium2, only 64 and 32 coefficients are enough to recover a subset of the private key with 256 coefficients, respectively. Furthermore, we propose a parameter-adjustable CPA scheme to expedite the recovery of a single coefficient in the NTT domain. Combining this CPA scheme with the SIS-assisted approach, we executed practical attacks on both unprotected and masked implementations of Kyber and Dilithium on an ARM Cortex-M4. The results demonstrate that we can recover a subset of 256 private key coefficients for Dilithium2 using 2,000 power traces in 0.5 minutes, while Kyber512 requires 0.4 minutes and 500 power traces. These attacks achieve a 400$\times$ speedup compared to the best-known attacks against Dilithium. Moreover, we successfully break the first-order masked implementations and explore the potential applicability to higher-order implementations.
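The linear-algebra fact exploited by the SIS-assisted method can be sketched with toy parameters (ours, far smaller than Kyber's or Dilithium's): the NTT is an invertible linear map over $\mathbb{Z}_q$, so every known NTT-domain coefficient yields one linear relation on the short secret.

    # Toy NTT over Z_17 with n = 4 (w = 4 has order 4 mod 17). Each known
    # NTT-domain coefficient is a linear equation in the secret; together
    # with the shortness of the secret, the SIS/lattice step recovers the
    # remaining coefficients.

    q, n = 17, 4
    w = 4

    NTT = [[pow(w, i * j, q) for j in range(n)] for i in range(n)]

    def ntt(s):
        return [sum(NTT[i][j] * s[j] for j in range(n)) % q for i in range(n)]

    secret = [1, 16, 0, 1]                  # small (ternary-style) coefficients
    s_hat = ntt(secret)

    # A single known NTT coordinate pins down one linear relation mod q:
    i = 2
    assert s_hat[i] == sum(NTT[i][j] * secret[j] for j in range(n)) % q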
Last updated:  2023-12-05
Practical key-recovery attack on MQ-Sign
Thomas Aulbach, Simona Samardjiska, and Monika Trimoska
This note describes attacks on the UOV-based signature scheme called MQ-Sign. In more detail, it presents a polynomial-time key-recovery attack on the variants MQ-Sign-RS and MQ-Sign-SS and an improved direct attack on the variant MQ-Sign-SR. The scheme is a first-round candidate in the Korean Post-Quantum Cryptography Competition. Our attack exploits the sparsity of the secret central polynomials in combination with the specific structure of the secret linear map $S$. We provide a verification script for the polynomial-time attack, that recovers the secret key in less than seven seconds for security level V. Furthermore, we added an implementation of the non-guessing part of the direct attack, confirming our complexity estimates.
Last updated:  2023-12-05
Projective Space Stern Decoding and Application to SDitH
Kevin Carrier, Valérian Hatey, and Jean-Pierre Tillich
We show that standard decoding algorithms for generic linear codes over a finite field can be sped up by a factor that is essentially the size of the finite field, by reducing the problem to a low-weight codeword problem and working in the relevant projective space. We apply this technique to SDitH and show that the parameters of both the original submission and the updated version fall short of meeting the security requirements set by NIST.
Last updated:  2023-12-05
Robust Combiners and Universal Constructions for Quantum Cryptography
Taiga Hiroka, Fuyuki Kitagawa, Ryo Nishimaki, and Takashi Yamakawa
A robust combiner combines many candidates for a cryptographic primitive and generates a new candidate for the same primitive. Its correctness and security hold as long as one of the original candidates satisfies correctness and security. A universal construction is a closely related notion to a robust combiner. A universal construction for a primitive is an explicit construction of the primitive that is correct and secure as long as the primitive exists. It is known that a universal construction for a primitive can be constructed from a robust combiner for the primitive in many cases. Although robust combiners and universal constructions for classical cryptography are widely studied, robust combiners and universal constructions for quantum cryptography have not been explored so far. In this work, we define robust combiners and universal constructions for several quantum cryptographic primitives including one-way state generators, public-key quantum money, quantum bit commitments, and unclonable encryption, and provide constructions of them. On a different note, it was an open problem how to expand the plaintext length of unclonable encryption. In one of our universal constructions for unclonable encryption, we can expand the plaintext length, which resolves the open problem.
Last updated:  2023-12-05
Cache Side-Channel Attacks Through Electromagnetic Emanations of DRAM Accesses
Julien Maillard, Thomas Hiscock, Maxime Lecomte, and Christophe Clavier
Remote side-channel attacks on processors exploit hardware and micro-architectural effects observable from software measurements. So far, the analysis of micro-architectural leakages over physical side-channels (power consumption, electromagnetic field) has received little treatment. In this paper, we argue that such attacks are a serious threat, especially against systems such as smartphones and Internet-of-Things (IoT) devices, which are physically exposed to the end-user. Namely, we show that the observation of Dynamic Random Access Memory (DRAM) accesses with an electromagnetic (EM) probe constitutes a reliable alternative to time measurements in cache side-channel attacks. We describe the EVICT+EM attack, which allows recovering a full AES key on a T-Tables implementation with a number of encryptions similar to state-of-the-art EVICT+RELOAD attacks on the studied ARM platforms. This new attack paradigm removes the need for shared memory and exploits EM radiation instead of high-precision timers. Then, we introduce PRIME+EM, whose goal is to reverse-engineer cache usage patterns. This attack allows recovering the layout of lookup tables within the cache. Finally, we present COLLISION+EM, a collision-based attack on a System-on-Chip (SoC) that does not require malicious code execution, and show its practical efficiency in recovering key material on an ARM TrustZone application. These results show that physical observation of the micro-architecture can lead to improved attacks.
Last updated:  2023-12-05
Efficient Secure Multiparty Computation for Multidimensional Arithmetics and Its Application in Privacy-Preserving Biometric Identification
Dongyu Wu, Bei Liang, Zijie Lu, and Jintai Ding
Over the years of development of secure multi-party computation (MPC), many sophisticated functionalities have been made practical, and multi-dimensional operations occur more and more frequently in MPC protocols, especially in protocols involving datasets of vector elements, such as privacy-preserving biometric identification and privacy-preserving machine learning. In this paper, we introduce a new kind of correlation, called tensor triples, which is designed to make multi-dimensional MPC protocols more efficient. We discuss the generation process, the usage, and the applications of tensor triples, and show that they can accelerate privacy-preserving biometric identification protocols, such as FingerCode, Eigenfaces and FaceNet, by more than 1000 times.
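To make the object concrete, here is a hedged sketch (our reading, not necessarily the paper's exact construction) of a Beaver-style *matrix* triple, the kind of correlation that tensor triples generalize: with preprocessed random $A$, $B$ and $C = AB$, one secure matrix product costs only two openings instead of one triple per scalar multiplication.

    # Hedged sketch of a matrix multiplication triple over Z_q. Shares are
    # elided: we only check the algebraic identity the online phase uses.

    import secrets

    q = 2**31 - 1

    def rand_mat(n):
        return [[secrets.randbelow(q) for _ in range(n)] for _ in range(n)]

    def add(X, Y): return [[(a + b) % q for a, b in zip(r, s)] for r, s in zip(X, Y)]
    def sub(X, Y): return [[(a - b) % q for a, b in zip(r, s)] for r, s in zip(X, Y)]
    def mul(X, Y):
        n = len(X)
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) % q for j in range(n)]
                for i in range(n)]

    n = 3
    X, Y = rand_mat(n), rand_mat(n)          # the secret inputs
    A, B = rand_mat(n), rand_mat(n)          # preprocessed random triple...
    C = mul(A, B)                            # ...with C = A @ B

    # Online phase: the parties open only the masked differences D and E.
    D, E = sub(X, A), sub(Y, B)
    # X@Y = D@E + D@B + A@E + C, computable from the opened D, E and shares
    # of A, B, C alone.
    result = add(add(mul(D, E), mul(D, B)), add(mul(A, E), C))
    assert result == mul(X, Y)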
Last updated:  2023-12-05
Fallen Sanctuary: A Higher-Order and Leakage-Resilient Rekeying Scheme
Rei Ueno, Naofumi Homma, Akiko Inoue, and Kazuhiko Minematsu
This paper presents a provably secure, higher-order, and leakage-resilient (LR) rekeying scheme named LR Rekeying with Random oracle Repetition (LR4), along with a quantitative security evaluation methodology. Many existing LR primitives are based on a concept of leveled implementation, which still essentially require a leak-free sanctuary (i.e., differential power analysis (DPA)-resistant component(s)) for some parts. In addition, although several LR pseudorandom functions (PRFs) based on only bounded DPA-resistant components have been developed, their validity and effectiveness for rekeying usage still need to be determined. In contrast, LR4 is formally proven under a leakage model that captures the practical goal of side-channel attack (SCA) protection (e.g., masking with a practical order) and assumes no unbounded DPA-resistant sanctuary. This proof suggests that LR4 resists exponential invocations (up to the birthday bound of key size) without using any unbounded leak-free component, which is the first of its kind. Moreover, we present a quantitative SCA success rate evaluation methodology for LR4 that combines the bounded leakage models for LR cryptography and a state-of-the-art information-theoretical SCA evaluation method. We validate its soundness and effectiveness as a DPA countermeasure through a numerical evaluation; that is, the number of secure calls of a symmetric primitive increases exponentially by increasing a security parameter under practical conditions.
Last updated:  2023-12-04
Analyzing UTXO-Based Blockchain Privacy Threats
Simin Ghesmati, Walid Fdhila, and Edgar Weippl
While blockchain technologies offer compelling characteristics in terms of decentralization, immutability, and transparency, user privacy in public blockchains remains a fundamental challenge that requires particular attention. This is mainly due to the history of all transactions being accessible and available to anyone, thus making it possible for an attacker to infer data about users that is supposed to remain private. In this paper, we provide a threat model of possible privacy attacks on users of the Bitcoin blockchain. To this end, we followed the LINDDUN GO methodology to identify threats and suggest possible mitigations.
Last updated:  2023-12-04
Automatic Verification of Cryptographic Block Function Implementations with Logical Equivalence Checking
Li-Chang Lai, Jiaxiang Liu, Xiaomu Shi, Ming-Hsien Tsai, Bow-Yaw Wang, and Bo-Yin Yang
Given a fixed-size block, cryptographic block functions generate outputs by a sequence of bitwise operations. Block functions are widely used in the design of hash functions and stream ciphers. Their correct implementations hence are crucial to computer security. We propose a method that leverages logic equivalence checking to verify assembly implementations of cryptographic block functions. Logic equivalence checking is a well-established technique from hardware verification. Using our proposed method, we verify two dozen assembly implementations of ChaCha20, SHA-256, and SHA-3 block functions from OpenSSL and XKCP automatically. We also compare the performance of our technique with the conventional SMT-based technique in experiments.
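As a toy stand-in for what such a checker establishes, the following sketch (ours) exhaustively confirms that two syntactically different rotate-left implementations, the core operation of the ChaCha20 quarter-round, compute the same function on a scaled-down word size; real checkers prove the full 32-bit case with BDD/SAT techniques rather than enumeration.

    # Equivalence of two rotate-left variants, checked exhaustively on a
    # scaled-down 8-bit word (32 bits in ChaCha20). Illustration only.

    W = 8
    MASK = (1 << W) - 1

    def rotl_shift(x, r):      # "reference" version: two shifts and an OR
        return ((x << r) | (x >> (W - r))) & MASK

    def rotl_mul(x, r):        # "optimized" version: multiply-and-fold
        y = x * (1 << r)       # low part holds x<<r, high part holds x>>(W-r)
        return ((y & MASK) | (y >> W)) & MASK

    for x in range(1 << W):            # exhaustive over all inputs
        for r in range(1, W):
            assert rotl_shift(x, r) == rotl_mul(x, r)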
Last updated:  2023-12-04
Constructing Secure Multi-Party Computation with Identifiable Abort
Nicholas Brandt, Sven Maier, Tobias Müller, and Jörn Müller-Quade
Composable protocols for Multi-Party Computation that provide security with Identifiable Abort against a dishonest majority require some form of setup, e.g. correlated randomness among the parties. While this is a very useful model, it has the downside that the setup's randomness must be programmable, otherwise security becomes provably impossible. Since programmability is more realistic for smaller setups (in terms of number of parties), it is crucial to minimize the correlation complexity (degree of correlation) of the setup's randomness. We give a tight tradeoff between the correlation complexity \(\beta\) and the corruption threshold \(t\). Our bounds are strong in that \(\beta\)-wise correlation is sufficient for statistical security while \((\beta-1)\)-wise correlation is insufficient even for computational security. In particular, for strong security, i.e., \(t < n\), full \(n\)-wise correlation is necessary. However, for any constant fraction of honest parties, we provide a protocol with constant correlation complexity which tightens the gap between the theoretical model and the setup's implementation in the real world. In contrast, previous state-of-the-art protocols require full \(n\)-wise correlation regardless of \(t\).
Last updated:  2023-12-04
One-out-of-$q$ OT Combiners
Oriol Farràs and Jordi Ribes-González
In $1$-out-of-$q$ Oblivious Transfer (OT) protocols, a sender Alice is able to send one of $q\ge 2$ messages to a receiver Bob, all while being oblivious to which message was transferred. Moreover, the receiver learns only one of these messages. Oblivious Transfer combiners take $n$ instances of OT protocols as input, and produce an OT protocol that is secure if sufficiently many of the $n$ original OT instances are secure. We present new $1$-out-of-$q$ OT combiners that are perfectly secure against active adversaries. Our combiners arise from secret sharing techniques. We show that given an $\mathbb{F}_q$-linear secret sharing scheme on a set of $n$ participants and adversary structure $\mathcal{A}$, we can construct $n$-server, $1$-out-of-$q$ OT combiners that are secure against an adversary corrupting either Alice and a set of servers in $\mathcal{A}$, or Bob and a set of servers $B$ with $\bar{B}\notin\mathcal{A}$. If the normalized total share size of the scheme is $\ell$, then the resulting OT combiner requires $\ell$ calls to OT protocols, and the total amount of bits exchanged during the protocol is $(q^2+q+1)\ell\log q$. We also present a construction based on $1$-out-of-$2$ OT combiners that uses the protocol of Crépeau, Brassard and Robert (FOCS 1986). This construction provides smaller communication costs for certain adversary structures, such as threshold ones: For any prime power $q\geq n$, there are $n$-server, $1$-out-of-$q$ OT combiners that are perfectly secure against active adversaries corrupting either Alice or Bob, and a minority of the OT candidates, exchanging $O(qn\log q)$ bits in total.
Last updated:  2023-12-04
EstraNet: An Efficient Shift-Invariant Transformer Network for Side-Channel Analysis
Suvadeep Hajra, Siddhartha Chowdhury, and Debdeep Mukhopadhyay
Deep Learning (DL) based Side-Channel Analysis (SCA) has been extremely popular recently. DL-based SCA can easily break implementations protected by masking countermeasures. DL-based SCA has also been highly successful against implementations protected by various trace desynchronization-based countermeasures like random delay, clock jitter, and shuffling. Over the years, many DL models have been explored to perform SCA. Recently, Transformer Network (TN) based model has also been introduced for SCA. Though the previously introduced TN-based model is successful against implementations jointly protected by masking and random delay countermeasures, it is not scalable to long traces (having a length greater than a few thousand) due to its quadratic time and memory complexity. This work proposes a novel shift-invariant TN-based model with linear time and memory complexity. The contributions of the work are two-fold. First, we introduce a novel TN-based model called EstraNet for SCA. EstraNet has linear time and memory complexity in trace length, significantly improving over the previously proposed TN-based model’s quadratic time and memory cost. EstraNet is also shift-invariant, making it highly effective against countermeasures like random delay and clock jitter. Secondly, we evaluated EstraNet on three SCA datasets of masked implementations with random delay and clock jitter effects. Our experimental results show that EstraNet significantly outperforms several benchmark models, demonstrating up to an order of magnitude reduction in the number of attack traces required to reach guessing entropy 1.
Last updated:  2023-12-04
XorSHAP: Privacy-Preserving Explainable AI for Decision Tree Models
Dimitar Jetchev and Marius Vuille
Explainable AI (XAI) refers to the development of AI systems and machine learning models in a way that humans can understand, interpret and trust the predictions, decisions and outputs of these models. A common approach to explainability is feature importance, that is, determining which input features of the model have the most significant impact on the model prediction. Two major techniques for computing feature importance are LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations). While very generic, these methods are computationally expensive even in plaintext. Applying them in the privacy-preserving setting when part or all of the input data is private is therefore a major computational challenge. In this paper, we present $\texttt{XorSHAP}$ - the first practical privacy-preserving algorithm for computing Shapley values for decision tree ensemble models in the semi-honest Secure Multiparty Computation (SMPC) setting with full threshold. Our algorithm has complexity $O(T \widetilde{M} D 2^D)$, where $T$ is the number of decision trees in the ensemble, $D$ is the depth of the decision trees and $\widetilde{M}$ is the maximum of the number of features $M$ and $2^D$ (the number of leaf nodes of a tree), and scales to real-world datasets. Our implementation is based on Inpher's $\texttt{Manticore}$ framework and simultaneously computes (in the SMPC setting) the Shapley values for 100 samples for an ensemble of $T = 60$ trees of depth $D = 4$ and $M = 100$ features in just 7.5 minutes, meaning that the Shapley values for a single prediction are computed in just 4.5 seconds for the same decision tree ensemble model. Additionally, it is parallelization-friendly, thus, enabling future work on massive hardware acceleration with GPUs.
Last updated:  2023-12-04
Efficient Issuer-Hiding Authentication, Application to Anonymous Credential
Olivier Sanders and Jacques Traoré
Anonymous credentials are cryptographic mechanisms enabling users to authenticate themselves with fine-grained control over the information they leak in the process. They have been the topic of countless papers which have improved the performance of such mechanisms or proposed new schemes able to prove ever-more complex statements about the attributes certified by those credentials. However, whereas these papers have studied in depth the problem of the information leaked by the credential and/or the attributes, almost all of them have surprisingly overlooked the information one may infer from the knowledge of the credential issuer. In this paper we address this problem by showing how one can efficiently hide the actual issuer of a credential within a set of potential issuers. The novelty of our work is that we do not resort to zero-knowledge proofs but instead show how one can tweak Pointcheval-Sanders signatures to achieve this issuer-hiding property at a very low cost. This results in an efficient anonymous credential system that indeed provides complete control over the information leaked in the authentication process. Our construction is moreover modular and can thus fit a wide spectrum of applications, notably Self-Sovereign Identity (SSI) systems.
Last updated:  2023-12-04
Batch Proofs are Statistically Hiding
Nir Bitansky, Chethan Kamath, Omer Paneth, Ron Rothblum, and Prashant Nalini Vasudevan
Batch proofs are proof systems that convince a verifier that $x_1,\dots,x_t \in \mathcal{L}$, for some $\mathsf{NP}$ language $\mathcal{L}$, with communication that is much shorter than sending the $t$ witnesses. In the case of *statistical soundness* (where the cheating prover is unbounded but the honest prover is efficient given the witnesses), interactive batch proofs are known for $\mathsf{UP}$, the class of *unique-witness* $\mathsf{NP}$ languages. In the case of computational soundness (where both honest and dishonest provers are efficient), *non-interactive* solutions are now known for all of $\mathsf{NP}$, assuming standard lattice or group assumptions. We exhibit the first negative results regarding the existence of batch proofs and arguments: - Statistically sound batch proofs for $\mathcal{L}$ imply that $\mathcal{L}$ has a statistically witness indistinguishable ($\mathsf{SWI}$) proof, with inverse polynomial $\mathsf{SWI}$ error, and a non-uniform honest prover. The implication is unconditional for obtaining honest-verifier $\mathsf{SWI}$ or for obtaining full-fledged $\mathsf{SWI}$ from public-coin protocols, whereas for private-coin protocols full-fledged $\mathsf{SWI}$ is obtained assuming one-way functions. This poses a barrier for achieving batch proofs beyond $\mathsf{UP}$ (where witness indistinguishability is trivial). In particular, assuming that $\mathsf{NP}$ does not have $\mathsf{SWI}$ proofs, batch proofs for all of $\mathsf{NP}$ do not exist. - Computationally sound batch proofs (a.k.a batch arguments or $\mathsf{BARG}$s) for $\mathsf{NP}$, together with one-way functions, imply statistical zero-knowledge ($\mathsf{SZK}$) arguments for $\mathsf{NP}$ with roughly the same number of rounds, an inverse polynomial zero-knowledge error, and non-uniform honest prover. Thus, constant-round interactive $\mathsf{BARG}$s from one-way functions would yield constant-round $\mathsf{SZK}$ arguments from one-way functions. This would be surprising as $\mathsf{SZK}$ arguments are currently only known assuming constant-round statistically-hiding commitments. We further prove new positive implications of non-interactive batch arguments to non-interactive zero knowledge arguments (with explicit uniform prover and verifier): - Non-interactive $\mathsf{BARG}$s for $\mathsf{NP}$, together with one-way functions, imply non-interactive computational zero-knowledge arguments for $\mathsf{NP}$. Assuming also dual-mode commitments, the zero knowledge can be made statistical. Both our negative and positive results stem from a new framework showing how to transform a batch protocol for a language $\mathcal{L}$ into an $\mathsf{SWI}$ protocol for $\mathcal{L}$.
Last updated:  2023-12-04
$\textsf{Asterisk}$: Super-fast MPC with a Friend
Banashri Karmakar, Nishat Koti, Arpita Patra, Sikhar Patranabis, Protik Paul, and Divya Ravi
Secure multiparty computation (MPC) enables privacy-preserving collaborative computation over sensitive data held by multiple mutually distrusting parties. Unfortunately, in the most natural setting where a majority of the parties are maliciously corrupt (also called the $\textit{dishonest majority}$ setting), traditional MPC protocols incur high overheads and offer weaker security guarantees than are desirable for practical applications. In this paper, we explore the possibility of circumventing these drawbacks and achieving practically efficient dishonest majority MPC protocols with strong security guarantees by assuming an additional semi-honest, non-colluding helper party $\mathrm{HP}$. We believe that this is a more realistic alternative to assuming an honest majority, since many real-world applications of MPC involving potentially large numbers of parties (such as dark pools) are typically enabled by a central governing entity that can be modeled as the $\mathrm{HP}$. In the above model, we are the first to design, implement and benchmark a practically-efficient and general multi-party framework, $\textsf{Asterisk}$. Our framework requires invoking $\mathrm{HP}$ only a constant number of times, achieves the strong security guarantee of $\textit{fairness}$ (either all parties learn the output or none do), scales to hundreds of parties, outperforms all existing dishonest majority MPC protocols, and is, in fact, competitive with state-of-the-art honest majority MPC protocols. Our experiments show that $\textsf{Asterisk}$ achieves $228-288\times$ speedup in preprocessing as compared to the best dishonest majority MPC protocol. With respect to online time, $\textsf{Asterisk}$ supports $100$-party evaluation of a circuit with $10^6$ multiplication gates in approximately $20$ seconds. We also implement and benchmark practically efficient and highly scalable dark pool instances using $\textsf{Asterisk}$. The corresponding run times showcase the effectiveness of $\textsf{Asterisk}$ in enabling efficient realizations of real-world privacy-preserving applications with strong security guarantees.
Last updated:  2023-12-04
New Public-Key Cryptosystem Blueprints Using Matrix Products in $\mathbb F_p$
Remi Geraud-Stewart and David Naccache
Given a set of matrices $\mathbf{A} := \{A_0, \dotsc, A_{k-1}\}$ and a matrix $M$ guaranteed to be the product of some ordered subset $\mathbf{L}\subset\mathbf{A}$, can $\mathbf{L}$ be efficiently recovered? We begin by observing that the answer is positive under some assumptions on $\mathbf{A}$. Noting that appropriate transformations seem to make $\mathbf{L}$'s recovery difficult, we provide the blueprint of two new public-key cryptosystems based upon this problem. We term those constructions "blueprints" because, given their novelty, we are still uncertain of their exact security. Yet, we daringly conjecture that even if attacks are found on the proposed constructions, these attacks could be thwarted by adjustments in the key generation, key size or the encryption mechanism, thereby resulting, in the long run, in fully-fledged public-key cryptosystems that do not seem to belong to any of the mainstream public-key encryption paradigms known to date.
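The following toy instance illustrates the underlying problem with invented parameters ($p = 101$, $3 \times 3$ matrices, $k = 6$); the transformations that are conjectured to make recovery hard are not reproduced here.

```python
import secrets

p, dim, k = 101, 3, 6   # illustrative parameters only

def rand_matrix():
    return [[secrets.randbelow(p) for _ in range(dim)] for _ in range(dim)]

def mat_mul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(dim)) % p
             for j in range(dim)] for i in range(dim)]

A = [rand_matrix() for _ in range(k)]
subset = [0, 2, 5]                # the secret ordered subset L
M = A[subset[0]]
for i in subset[1:]:
    M = mat_mul(M, A[i])
# An adversary sees A and M; recovering `subset` from M alone is the
# problem the blueprints build on (easy for some A, conjecturally hard
# after the paper's transformations).
```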
Last updated:  2023-12-04
A Novel Power-Sum PRG with Applications to Lattice-Based zkSNARKs
Charanjit S Jutla, Eamonn W. Postlethwaite, and Arnab Roy
zkSNARK is a cryptographic primitive that allows a prover to prove to a resource-constrained verifier that it has indeed performed a specified non-deterministic computation correctly, while hiding private witnesses. In this work we focus on lattice-based zkSNARKs, as this serves two important design goals. Firstly, we get post-quantum zkSNARK schemes with $O(\log (\mbox{Circuit size}))$ sized proofs (without random oracles) and secondly, the easy verifier circuit allows further bootstrapping by arbitrary (zk)SNARK schemes that offer additional or complementary properties. However, this goal comes with considerable challenges. The only known lattice-based bilinear maps are obtained using the multi-linear maps of Garg, Gentry, and Halevi 2013 (GGH13), which have undergone considerable cryptanalytic attacks, in particular annihilation attacks. In this work, we propose a (level-2) GGH13-encoding-based zkSNARK which we show to be secure in the weak-multilinear-map model of Miles-Sahai-Zhandry assuming a novel pseudo-random generator (PRG). We argue that the new PRG assumption is plausible based on the well-studied Newton's identities on power-sum polynomials, as well as an analysis of the hardness of computing Gröbner bases for these polynomials. The particular PRG is designed for efficient implementation of the zkSNARK. Technically, we leverage the 2-linear instantiation of the GGH13 graded encoding scheme to provide us with an analogue of bilinear maps and adapt the Groth16 (Groth, Eurocrypt 2016) protocol, although with considerable technical advances in design and proof. The protocol is non-interactive in the CRS model.
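As a quick illustration of the algebra behind the PRG assumption, the snippet below numerically checks Newton's identities, which relate the power sums $p_k = \sum_i x_i^k$ to the elementary symmetric polynomials $e_k$. The values are arbitrary; the paper's actual PRG is not reproduced.

```python
from itertools import combinations
from math import prod

xs = [3, 5, 7, 11]   # arbitrary sample values

def p(k):  # power sum p_k = sum of x_i^k
    return sum(x**k for x in xs)

def e(k):  # elementary symmetric polynomial e_k
    return sum(prod(c) for c in combinations(xs, k))

# Newton's identity: p_k - e_1 p_{k-1} + e_2 p_{k-2} - ... = (-1)^(k-1) k e_k
for k in range(1, len(xs) + 1):
    lhs = p(k) + sum((-1)**i * e(i) * p(k - i) for i in range(1, k))
    assert lhs == (-1)**(k - 1) * k * e(k)
```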
Last updated:  2023-12-04
Beyond Security: Achieving Fairness in Mailmen-Assisted Timed Data Delivery
Shiyu Li, Yuan Zhang, Yaqing Song, Hongbo Liu, Nan Cheng, Hongwei Li, Dahai Tao, and Kan Yang
Timed data delivery is a critical service for time-sensitive applications that allows a sender to deliver data to a recipient such that the data only becomes accessible at a specific future time. This service is typically accomplished by employing a set of mailmen to complete the delivery mission. While this approach is commonly used, it is vulnerable to attacks from realistic adversaries, such as a greedy sender (who accesses the delivery service without paying the service charge) and malicious mailmen (who release the data prematurely without being detected). Although some research works have addressed these adversaries, most of them fail to achieve fairness. In this paper, we formally define the fairness requirement for mailmen-assisted timed data delivery and propose a practical scheme, dubbed DataUber, to achieve fairness. DataUber ensures that honest mailmen receive the service charge, lazy mailmen do not receive the service charge, and malicious mailmen are punished. Specifically, DataUber consists of two key techniques: 1) a new cryptographic primitive, i.e., Oblivious and Verifiable Threshold Secret Sharing (OVTSS), enabling a dealer to distribute a secret among multiple participants in a threshold and verifiable way without knowing any one of the shares, and 2) a smart-contract-based complaint mechanism, allowing anyone to become a reporter to complain about a mailman's misbehavior to a smart contract and receive a reward. Furthermore, we formally prove the security of DataUber and demonstrate its practicality through a prototype implementation.
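For background, here is a sketch of plain $(t,n)$ Shamir secret sharing over a prime field, the starting point that OVTSS strengthens. Note that this sketch is neither oblivious (the dealer sees every share) nor verifiable, which is exactly the gap the paper's new primitive closes; the modulus and parameters are illustrative.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime modulus (illustrative)

def deal(secret, t, n):
    """Split `secret` into n shares, any t of which recover it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = deal(42, t=3, n=5)
assert recover(shares[:3]) == 42
```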
Last updated:  2023-12-04
Privacy-Preserving Cross-Facility Early Warning for Unknown Epidemics
Shiyu Li, Yuan Zhang, Yaqing Song, Fan Wu, Feng Lyu, Kan Yang, and Qiang Tang
Syndrome-based early epidemic warning plays a vital role in preventing and controlling unknown epidemic outbreaks. It monitors the frequency of each syndrome, issues a warning if some frequency is aberrant, identifies potential epidemic outbreaks, and alerts governments as early as possible. Existing systems adopt a cloud-assisted paradigm to achieve cross-facility statistics on the syndrome frequencies. However, in these systems, all symptom data would be directly leaked to the cloud, which causes critical security and privacy issues. In this paper, we first analyze syndrome-based early epidemic warning systems and formalize two security notions, i.e., symptom confidentiality and frequency confidentiality, according to the inherent security requirements. We propose EpiOracle, a cross-facility early warning scheme for unknown epidemics. EpiOracle ensures that the contents and frequencies of syndromes will not be leaked to any unrelated parties; moreover, our construction uses only a symmetric-key encryption algorithm and cryptographic hash functions (e.g., AES in CBC mode and SHA-3), making it highly efficient. We formally prove the security of EpiOracle in the random oracle model. We also implement an EpiOracle prototype and evaluate its performance using a set of real-world symptom lists. The evaluation results demonstrate its practical efficiency.
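A minimal sketch of the general flavor of the approach (not the EpiOracle construction): facilities submit keyed hashes of syndrome strings so an aggregator can count frequencies without seeing plaintext symptoms. The shared key, the threshold, and the report strings below are hypothetical, and the paper's frequency-confidentiality mechanisms are omitted.

```python
import hashlib, hmac
from collections import Counter

SHARED_KEY = b"facility-group-key"   # hypothetical key shared by facilities

def blind(syndrome: str) -> str:
    """Keyed hash of a syndrome string (HMAC with SHA3-256)."""
    return hmac.new(SHARED_KEY, syndrome.encode(), hashlib.sha3_256).hexdigest()

reports = ["fever+rash", "cough", "fever+rash", "fever+rash"]  # toy reports
counts = Counter(blind(s) for s in reports)

threshold = 2   # hypothetical aberrance threshold
alerts = [tag for tag, c in counts.items() if c > threshold]
# `alerts` holds blinded tags whose frequency is aberrant; the aggregator
# never sees the underlying symptom strings.
```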
Last updated:  2023-12-04
A Simple and Efficient Framework of Proof Systems for NP
Yuyu Wang, Chuanjie Su, Jiaxin Pan, and Yu Chen
In this work, we propose a simple framework for constructing efficient non-interactive zero-knowledge proof (NIZK) systems for all of NP. Compared to the state-of-the-art construction by Groth, Ostrovsky, and Sahai (J. ACM, 2012), our resulting NIZK system reduces the proof size and the proving and verification cost without any trade-off, i.e., without increasing computation cost or CRS size, and without resorting to stronger assumptions. Furthermore, we extend our framework to construct a batch argument (BARG) system for all of NP. Our construction remarkably improves the efficiency of the BARG by Waters and Wu (Crypto 2022) without any trade-off.
Last updated:  2023-12-03
Adding more parallelism to the AEGIS authenticated encryption algorithms
Frank Denis
While the round function of the AEGIS authenticated encryption algorithms is highly parallelizable, their mode of operation is not. We introduce two new modes to overcome that limitation: AEGIS-128X and AEGIS-256X, which require minimal changes to existing implementations and retain the security properties of AEGIS-128L and AEGIS-256.
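The sketch below conveys the lane-interleaving idea behind such "-X" modes: message blocks are assigned round-robin to $d$ independent instances that can be processed in parallel (e.g., with wider SIMD registers). The actual AEGIS-128X/256X specifications (per-lane domain separation, tag computation) are not reproduced here, and the block size is only what AEGIS-128L absorbs per update.

```python
BLOCK = 32  # AEGIS-128L absorbs 32 bytes per state update

def deinterleave(msg: bytes, d: int):
    """Assign block i of msg to lane i mod d."""
    blocks = [msg[i:i + BLOCK] for i in range(0, len(msg), BLOCK)]
    return [b"".join(blocks[i] for i in range(lane, len(blocks), d))
            for lane in range(d)]

lanes = deinterleave(bytes(range(128)), d=2)
# Each lane can now be fed to one independent AEGIS-128L state; the
# per-lane outputs are re-interleaved to form the ciphertext.
```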
Last updated:  2023-12-03
Fast batched asynchronous distributed key generation
Jens Groth and Victor Shoup
We present new protocols for threshold Schnorr signatures that work in an asynchronous communication setting, providing robustness and optimal resilience. These protocols provide unprecedented performance in terms of communication and computational complexity. In terms of communication complexity, for each signature, a single party must transmit a few dozen group elements and scalars across the network (independent of the size of the signing committee). In terms of computational complexity, the amortized cost for one party to generate a signature is actually less than that of just running the standard Schnorr signing or verification algorithm (at least for moderately sized signing committees, say, up to 100). For example, we estimate that with a signing committee of 49 parties, at most 16 of which are corrupt, we can generate 50,000 Schnorr signatures per second (assuming each party can dedicate one standard CPU core and 500 Mbps of network bandwidth to signing). Importantly, this estimate includes both the cost of an offline precomputation phase (which just churns out message-independent "presignatures") and an online signature generation phase. Also, the online signing phase can generate a signature with very little network latency (just one to three rounds, depending on how throughput and latency are balanced). To achieve this result, we provide two new innovations. One is a new secret sharing protocol (again, asynchronous, robust, optimally resilient) that allows the dealer to securely distribute shares of a large batch of ephemeral secret keys, and to publish the corresponding ephemeral public keys. To achieve better performance, our protocol minimizes public-key operations, and in particular, is based on a novel technique that avoids the traditional approach based on "polynomial commitments". The second innovation is a new algorithm to efficiently combine ephemeral public keys contributed by different parties (some possibly corrupt) into a smaller number of secure ephemeral public keys. This new algorithm is based on a novel construction of a so-called "super-invertible matrix" along with a corresponding highly-efficient algorithm for multiplying this matrix by a vector of group elements. As protocols for verifiably sharing a secret key with an associated public key and the technology of super-invertible matrices both play a major role in threshold cryptography and multi-party computation, our two new innovations should have applicability well beyond that of threshold Schnorr signatures.
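As a small illustration of the super-invertible-matrix idea, the sketch below uses a plain Vandermonde matrix over a prime field rather than the paper's novel construction over group elements: any $n-t$ columns of an $(n-t) \times n$ Vandermonde matrix form an invertible submatrix, so if at least $n-t$ of the $n$ contributions are uniformly random, all $n-t$ outputs are too.

```python
import secrets

P = 2**61 - 1   # illustrative prime field

def combine(contribs, t):
    """Multiply contributions by the (n-t) x n Vandermonde matrix
    V[i][j] = (j+1)^i; any n-t of its columns are invertible."""
    n = len(contribs)
    return [sum(pow(j + 1, i, P) * contribs[j] for j in range(n)) % P
            for i in range(n - t)]

n, t = 7, 2
contribs = [secrets.randbelow(P) for _ in range(n)]  # some may be adversarial
outputs = combine(contribs, t)
assert len(outputs) == n - t   # n - t combined values, all uniform if
                               # at least n - t inputs were uniform
```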
Last updated:  2023-12-03
ID-CAKE: Identity-based Cluster Authentication and Key Exchange Scheme for Message Broadcasting and Batch Verification in VANETs
Apurva K Vangujar, Alia Umrani, and Paolo Palmieri
Vehicular Ad Hoc Networks (VANETs) play a pivotal role in intelligent transportation systems, offering dynamic communication between vehicles, Road Side Units (RSUs), and the internet. Given the open-access nature of VANETs and the associated threats, such as impersonation and privacy violations, ensuring the security of these communications is of utmost importance. This paper presents the Identity-based Cluster Authentication and Key Exchange (ID-CAKE) scheme, a new approach to address security challenges in VANETs. The ID-CAKE scheme integrates the Cluster Consensus Identity-based Identification (CCIBI) with Zero-Knowledge (ZK) proofs and the Identity-based Multireceiver Key Exchange Mechanism (ID-mKEM) signature scheme. This integration provides robust authorization via CCIBI, while ID-mKEM signatures ensure message integrity and guarantee both non-repudiation and unforgeability through mKEM for message broadcasting. The scheme employs a novel three-party ZK proof for batch verification using mKEM, which significantly reduces computational burdens. Our scheme also ensures anonymity and unlinkability by introducing pseudo-identities to all users in the cluster. The rigorous security proofs provided confirm the resilience of the ID-CAKE scheme against potential attacks in different scenarios, based on the hardness of the elliptic curve computational Diffie-Hellman problem in the random oracle model. The ID-CAKE scheme establishes a robust security framework for VANETs, and its introduction highlights potential pathways for future exploration in the realm of VANET security.
Last updated:  2023-12-03
Optimizing AES Threshold Implementation under the Glitch-Extended Probing Model
Fu Yao, Hua Chen, Yongzhuang Wei, Enes Pasalic, Feng Zhou, and Limin Fan
Threshold Implementation (TI) is a well-known Boolean masking technique that provides provable security against side-channel attacks. To account for glitches, the probing model was extended to the so-called glitch-extended probing model, which specifies a broader security framework. In CHES 2021, Shahmirzadi et al. introduced a general search method for finding first-order 2-share TI schemes without fresh randomness (in the presence of glitches) for a given encryption algorithm. Although it handles single-output Boolean functions well, this method has to store output shares in registers when extended to vector Boolean functions, which results in more chip area and increased latency. Therefore, the design of TI schemes that have low implementation cost under the glitch-extended probing model appears to be an important research challenge. In this paper, we propose an approach to design first-order glitch-extended probing secure TI schemes when quadratic functions are employed in the substitution layer. This method only requires a small amount of fresh random bits and a single clock cycle for its implementation. In particular, the random bits in our approach are reusable and compatible with the changing-of-the-guards technique. Our dedicated TI scheme for the AES cipher yields a 20.23% smaller implementation area and 4.2% faster encryption compared to the TI scheme of AES (without using fresh randomness) proposed in CHES 2021. Additionally, we propose a parallel implementation of two S-boxes that further reduces latency (by about 39.83%) at the expense of increasing the chip area by 9%. We have confirmed the security of our AES design under the glitch-extended probing model using the verification tool SILVER and the side-channel leakage assessment methodology TVLA.
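For intuition about low-randomness masking gadgets, here is a toy first-order 2-share masked AND over single bits that consumes one fresh random bit. It is not the paper's AES design, and, being software, it says nothing about glitch behavior, which only the hardware-level structure of a TI scheme controls.

```python
import secrets

def share_bit(x):
    """Split bit x into two shares (r, x ^ r)."""
    r = secrets.randbits(1)
    return r, x ^ r

def masked_and(x_sh, y_sh):
    """2-share AND; the fresh bit r masks the cross terms and cancels
    in the recombination z0 ^ z1 = (x0^x1)(y0^y1)."""
    x0, x1 = x_sh
    y0, y1 = y_sh
    r = secrets.randbits(1)          # one fresh random bit
    z0 = (x0 & y0) ^ (x0 & y1) ^ r
    z1 = (x1 & y1) ^ (x1 & y0) ^ r
    return z0, z1

for x in (0, 1):
    for y in (0, 1):
        z0, z1 = masked_and(share_bit(x), share_bit(y))
        assert z0 ^ z1 == x & y
```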
Last updated:  2023-12-03
Demystifying DeFi MEV Activities in Flashbots Bundle
Zihao Li, Jianfeng Li, Zheyuan He, Xiapu Luo, Ting Wang, Xiaoze Ni, Wenwu Yang, Xi Chen, and Ting Chen
Decentralized Finance, mushrooming in permissionless blockchains, has attracted a recent surge in popularity. Due to the transparency of permissionless blockchains, opportunistic traders can compete to earn revenue by extracting Miner Extractable Value (MEV), which undermines both the consensus security and efficiency of blockchain systems. The Flashbots bundle mechanism further aggravates the MEV competition because it empowers opportunistic traders with the capability of designing more sophisticated MEV extraction. In this paper, we conduct the first systematic study on DeFi MEV activities in Flashbots bundle by developing ActLifter, a novel automated tool for accurately identifying DeFi actions in transactions of each bundle, and ActCluster, a new approach that leverages iterative clustering to facilitate us to discover known/unknown DeFi MEV activities. Extensive experimental results show that ActLifter can achieve nearly 100% precision and recall in DeFi action identification, significantly outperforming state-of-the-art techniques. Moreover, with the help of ActCluster, we obtain many new observations and discover 17 new kinds of DeFi MEV activities, which occur in 53.12% of bundles but have not been reported in existing studies.
Last updated:  2023-12-03
A note on quantum approximate optimization algorithm
Zhengjun Cao
The general quantum approximate optimization algorithm (QAOA) produces approximate solutions for combinatorial optimization problems. The algorithm depends on a positive integer $p$, and the quality of the approximation improves as $p$ is increased. In this note, we raise some questions about the general QAOA. We also find that the recursive QAOA for the MaxCut problem is flawed because all quantum gates involved in the algorithm are single-qubit gates; no entangling gate is used, which means the quantum computing power cannot be certified for the problem.
Last updated:  2023-12-03
A Novel CCA Attack for NTRU+ KEM
Joohee Lee, Minju Lee, Hansol Ryu, and Jaehui Park
The KpqC competition, which aims to standardize Post-Quantum Cryptography (PQC) in the Republic of Korea, began in 2022. Among the 16 submissions to the KpqC competition, the lattice-based schemes exhibit the most promising and balanced performance. In this paper, we propose the first effective classical CCA attack to recover the transmitted session key for NTRU+, one of the lattice-based Key Encapsulation Mechanisms (KEMs) proposed in the KpqC competition. With the proposed attacks, we show that none of the suggested parameters of NTRU+ satisfy the claimed security. We also suggest a way to modify the NTRU+ scheme to defend against our attack.
Last updated:  2023-12-02
Quantifying risks in cryptographic selection processes
Daniel J. Bernstein
There appears to be a widespread belief that some processes of selecting cryptosystems are less risky than other processes. As a case study of quantifying the difference in risks, this paper compares the currently-known-failure rates of three large groups of cryptosystems: (1) the round-1 submissions to the NIST Post-Quantum Cryptography Standardization Project, (2) the round-1 submissions not broken by the end of round 1, and (3) the round-1 submissions selected by NIST for round 2 of the same project. These groups of cryptosystems turn out to have currently-known-failure rates that are strikingly high, and that include statistically significant differences across the groups, not matching the pattern of differences that one might expect. Readers are cautioned that the actual failure rates could be much higher than the currently-known-failure rates.
Last updated:  2023-12-02
Report on evaluation of KpqC candidates
Jolijn Cottaar, Kathrin Hövelmanns, Andreas Hülsing, Tanja Lange, Mohammad Mahzoun, Alex Pellegrini, Alberto Ravagnani, Sven Schäge, Monika Trimoska, and Benne de Weger
This report analyzes the 16 submissions to the Korean post-quantum cryptography (KpqC) competition.
Last updated:  2023-12-02
Some remarks on how to hash faster onto elliptic curves
Dmitrii Koshelev
This article proposes four optimizations of indifferentiable hashing onto (prime-order subgroups of) ordinary elliptic curves over finite fields $\mathbb{F}_{\!q}$. One of them is dedicated to elliptic curves $E$ without non-trivial automorphisms provided that $q \equiv 2 \ (\mathrm{mod} \ 3)$. The second deals with $q \equiv 2, 4 \ (\mathrm{mod} \ 7)$ and an elliptic curve $E_7$ of $j$-invariant $-3^3 5^3$. The corresponding section plays a rather theoretical role, because (the quadratic twist of) $E_7$ is not used in real-world cryptography. The other two optimizations take place for the subgroups $\mathbb{G}_1$, $\mathbb{G}_2$ of pairing-friendly curves. The performance gain comes from the smaller number of required exponentiations in $\mathbb{F}_{\!q}$ for hashing to $E(\mathbb{F}_{\!q})$, $E_7(\mathbb{F}_{\!q})$, and $\mathbb{G}_2$ as well as from not having to hash directly onto $\mathbb{G}_1$ in certain settings. In particular, the last insight allows one to drastically speed up verification of the aggregate BLS signature incorporated in many blockchain technologies. The new results affect, for example, the pairing-friendly curve BLS12-381 (the most popular in practice at the moment) and a few plain curves from the American standard NIST SP 800-186. Among other things, a taxonomy of state-of-the-art hash functions to elliptic curves is presented. Finally, the article discusses how to hash over highly $2$-adic fields $\mathbb{F}_{\!q}$.
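For contrast with the optimized indifferentiable maps discussed in the article, the snippet below shows the naive try-and-increment baseline on a toy curve $y^2 = x^3 + ax + b$ over $\mathbb{F}_q$ with $q \equiv 3 \ (\mathrm{mod} \ 4)$: each trial costs one Euler-criterion test, and a successful trial costs one exponentiation for the square root. The curve parameters are invented, and the variable-time loop is precisely what modern hash-to-curve constructions avoid.

```python
import hashlib

q, a, b = 1019, 2, 3  # toy curve parameters, q prime with q % 4 == 3

def hash_to_curve(msg: bytes):
    ctr = 0
    while True:
        h = hashlib.sha3_256(msg + ctr.to_bytes(2, "big")).digest()
        x = int.from_bytes(h, "big") % q
        rhs = (x**3 + a * x + b) % q
        if pow(rhs, (q - 1) // 2, q) in (0, 1):   # rhs is a square mod q
            return x, pow(rhs, (q + 1) // 4, q)   # sqrt via one exponentiation
        ctr += 1

print(hash_to_curve(b"example"))
```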
Last updated:  2023-12-02
There Is Always a Way Out! Destruction-Resistant Key Management: Formal Definition and Practical Instantiation
Yuan Zhang, Yaqing Song, Shiyu Li, Weijia Li, Zeqi Lai, and Qiang Tang
A central advantage of deploying cryptosystems is that the security of large highly sensitive data sets can be reduced to the security of a very small key. The most popular way to manage keys is to use a $(t,n)$-threshold secret sharing scheme: a user splits her/his key into $n$ shares, distributes them among $n$ key servers, and can recover the key with the aid of any $t$ of them. However, it is vulnerable to device destruction: if all key servers and the user's devices break down, the key will be permanently lost. We propose a $\mathrm{\underline{D}}$estruction-$\mathrm{\underline{R}}$esistant $\mathrm{\underline{K}}$ey $\mathrm{\underline{M}}$anagement scheme, dubbed DRKM, which ensures key availability even if destruction occurs. In DRKM, a user utilizes her/his $n^{*}$ personal identification factors (PIFs) to derive a cryptographic key but can retrieve the key using any $t^{*}$ of the $n^{*}$ PIFs. As most PIFs can be retrieved by the user $\textit{per se}$ without requiring $\textit{stateful}$ devices, destruction resistance is achieved. With the integration of a $(t,n)$-threshold secret sharing scheme, DRKM also provides $\textit{portable}$ key access for the user (with the aid of any $t$ of $n$ key servers) before destruction occurs. DRKM can be utilized to construct a destruction-resistant cryptosystem (DRC) in tandem with any backup system. We formally prove the security of DRKM, implement a DRKM prototype, and conduct a comprehensive performance evaluation to demonstrate its high efficiency. We further utilize Cramer's Rule to reduce the required buffer to retrieve a key from 25 MB to 40 KB (for 256-bit security).
Last updated:  2023-12-02
Hidden Stream Ciphers and TMTO Attacks on TLS 1.3, DTLS 1.3, QUIC, and Signal
John Preuß Mattsson
Transport Layer Security (TLS) 1.3 and the Signal protocol are very important and widely used security protocols. We show that the key update function in TLS 1.3 and the symmetric key ratchet in Signal can be modeled as non-additive synchronous stream ciphers. This means that the efficient time-memory trade-off (TMTO) attacks for stream ciphers can be applied. The implication is that TLS 1.3, QUIC, DTLS 1.3, and Signal offer a lower security level against TMTO attacks than expected from the key sizes. We provide detailed analyses of the key update mechanisms in TLS 1.3 and Signal, illustrate the importance of ephemeral key exchange, and show that the process that DTLS 1.3 and QUIC use to calculate AEAD limits is flawed. We provide many concrete recommendations for the analyzed protocols.
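The key-update chain itself is easy to exhibit: in TLS 1.3, the next application traffic secret is derived from the current one via HKDF-Expand-Label with the label "traffic upd" (RFC 8446), so with no fresh key exchange the state evolves deterministically, like a synchronous stream cipher. A sketch using SHA-256 and a placeholder initial secret:

```python
import hmac, hashlib

HASHLEN = 32  # SHA-256 output length

def hkdf_expand(prk, info, length):
    """HKDF-Expand (RFC 5869) with HMAC-SHA256."""
    out, block, i = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([i]), hashlib.sha256).digest()
        out += block
        i += 1
    return out[:length]

def hkdf_expand_label(secret, label, context, length):
    """HKDF-Expand-Label (RFC 8446, Section 7.1)."""
    hkdf_label = (length.to_bytes(2, "big")
                  + bytes([6 + len(label)]) + b"tls13 " + label
                  + bytes([len(context)]) + context)
    return hkdf_expand(secret, hkdf_label, length)

secret = bytes(HASHLEN)  # placeholder initial traffic secret (all zeros)
for _ in range(3):       # three KeyUpdates: a keystream-like iteration
    secret = hkdf_expand_label(secret, b"traffic upd", b"", HASHLEN)
```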
Last updated:  2023-12-02
Rectangular Attack on VOX
Gilles Macario-Rat, Jacques Patarin, Benoit Cogliati, Jean-Charles Faugère, Pierre-Alain Fouque, Louis Gouin, Robin Larrieu, and Brice Minaud
VOX was submitted to the NIST Post-Quantum Signature competition as a Round 1 Additional Signature in June 2023. VOX is a strengthened variant of UOV which uses the Quotient-Ring (QR) setting to reduce the public-key size. At the end of August 2023, Furue and Ikematsu posted a note on the NIST mailing-list indicating that the parameters of VOX can be attacked efficiently using the rectangular attack in the QR setting. In this note, we explain the attack in the specific case of VOX, we detail its complexity, and show that, as Furue and Ikematsu indicated, the attack can be completely avoided by adding one more constraint on the parameter selection. Finally, we show that this constraint does not increase the sizes of the public keys or signatures.
Last updated:  2023-12-01
Succinct Arguments over Towers of Binary Fields
Benjamin E. Diamond and Jim Posen
We introduce an efficient SNARK for towers of binary fields. Adapting Brakedown (CRYPTO '23), we construct a multilinear polynomial commitment scheme suitable for polynomials over tiny fields, including the field with two elements. Our commitment scheme, unlike those of previous works, treats small-field polynomials with zero embedding overhead. We further introduce binary-field adaptations of HyperPlonk's (EUROCRYPT '23) product and permutation checks, as well as of Lasso's lookup. Our scheme's binary PLONKish variant captures standard hash functions—like Keccak-256 and Grøstl—extremely efficiently. With recourse to thorough performance benchmarks, we argue that our scheme can efficiently generate precisely those Keccak-256-proofs which critically underlie modern efforts to scale Ethereum.
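The tower in question can be made concrete in a few lines. The snippet below implements multiplication in a standard binary field tower ($X_1^2 = X_1 + 1$ and $X_k^2 = X_k X_{k-1} + 1$), with elements of $\mathbb{F}_{2^{2^k}}$ packed into $2^k$-bit integers. This is an illustrative sketch of tower arithmetic, not the paper's optimized commitment-scheme code.

```python
def mul_by_gen(a: int, k: int) -> int:
    """Multiply a level-k element by the level-k generator X_k (X_0 := 1)."""
    if k == 0:
        return a
    half = 1 << (k - 1)
    a_lo, a_hi = a & ((1 << half) - 1), a >> half
    # X_k * (a_hi*X_k + a_lo) = (a_lo + a_hi*X_{k-1})*X_k + a_hi
    return ((a_lo ^ mul_by_gen(a_hi, k - 1)) << half) | a_hi

def tower_mul(a: int, b: int, k: int) -> int:
    """Multiply two elements of GF(2^(2^k)) in the tower representation."""
    if k == 0:
        return a & b
    half = 1 << (k - 1)
    mask = (1 << half) - 1
    a_lo, a_hi, b_lo, b_hi = a & mask, a >> half, b & mask, b >> half
    lo = tower_mul(a_lo, b_lo, k - 1)
    hi = tower_mul(a_hi, b_hi, k - 1)
    mid = tower_mul(a_lo ^ a_hi, b_lo ^ b_hi, k - 1) ^ lo ^ hi  # Karatsuba
    # (a_hi*X + a_lo)(b_hi*X + b_lo) with X^2 = X*X_{k-1} + 1
    return ((mid ^ mul_by_gen(hi, k - 1)) << half) | (lo ^ hi)

k = 4  # GF(2^16)
x, y = 0xBEEF, 0x1234
assert tower_mul(x, 1, k) == x                    # 1 is the identity
assert tower_mul(x, y, k) == tower_mul(y, x, k)   # the field is commutative
```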
Last updated:  2023-12-01
Reduction from sparse LPN to LPN, Dual Attack 3.0
Kévin Carrier, Thomas Debris-Alazard, Charles Meyer-Hilfiger, and Jean-Pierre Tillich
The security of code-based cryptography relies primarily on the hardness of decoding generic linear codes. Until very recently, all the best algorithms for solving the decoding problem were information set decoders ($\mathsf{ISD}$). However, a new algorithm called RLPN-decoding, which relies on a completely different approach, was recently introduced, and it has been shown that RLPN significantly outperforms $\mathsf{ISD}$ decoders for a rather large range of rates. This RLPN decoder relies on two ingredients: first, reducing decoding to some underlying LPN problem, and then computing efficiently many parity-checks of small weight when restricted to some positions. We revisit RLPN-decoding by noticing that, in this algorithm, decoding is in fact reduced to a sparse-LPN problem, namely one with a secret whose Hamming weight is small. Our new approach consists in making a further reduction from sparse-LPN to plain LPN with a coding approach inspired by $\mathsf{coded}$-$\mathsf{BKW}$. It significantly outperforms $\mathsf{ISD}$ and RLPN for code rates smaller than $0.42$. This algorithm can be viewed as the code-based cryptography cousin of recent dual attacks in lattice-based cryptography. We depart completely from the traditional analysis of this kind of algorithm, which uses a certain number of independence assumptions that have been strongly questioned recently in the latter domain. We give instead a formula for the LPN noise relying on duality, which allows us to analyze the behavior of the algorithm by relying only on the analysis of a certain weight distribution. By using only a minimal assumption whose validity has been verified experimentally, we are able to justify the correctness of our algorithm. This key tool, namely the duality formula, can be readily adapted to the lattice setting and is shown to give a simple explanation for some phenomena observed on dual attacks in lattices in [DP23].
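To fix notation, here is what a toy LPN instance with a sparse secret looks like: samples $(\mathbf{a}, \langle \mathbf{a}, \mathbf{s} \rangle + e)$ over $\mathbb{F}_2$ where the secret $\mathbf{s}$ has small Hamming weight. The sizes and noise rate below are tiny and purely illustrative.

```python
import secrets

n, m, w, tau = 64, 200, 4, 0.05   # length, #samples, secret weight, noise rate

s = [0] * n
while sum(s) < w:                 # sparse secret of Hamming weight w
    s[secrets.randbelow(n)] = 1

samples = []
for _ in range(m):
    a = [secrets.randbits(1) for _ in range(n)]
    e = 1 if secrets.randbelow(10**6) < tau * 10**6 else 0   # Ber(tau) noise
    b = (sum(ai & si for ai, si in zip(a, s)) + e) % 2
    samples.append((a, b))
# Recovering s from `samples` is the sparse-LPN problem the paper
# further reduces to plain LPN.
```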
Last updated:  2023-12-01
Arke: Scalable and Byzantine Fault Tolerant Privacy-Preserving Contact Discovery
Nicolas Mohnblatt, Alberto Sonnino, Kobi Gurkan, and Philipp Jovanovic
Contact discovery is a crucial component of social applications, facilitating interactions between registered contacts. This work introduces Arke, a novel approach to contact discovery that addresses the limitations of existing solutions in terms of privacy, scalability, and reliance on trusted third parties. Arke ensures the unlinkability of user interactions, mitigates enumeration attacks, and operates without single points of failure or trust. Notably, Arke is the first contact discovery system whose performance is independent of the total number of users and the first that can operate in a Byzantine setting. It achieves its privacy goals through an unlinkable handshake mechanism built on top of an identity-based non-interactive key exchange. By leveraging a custom distributed architecture, Arke forgoes the expense of consensus to achieve scalability while maintaining consistency in a Byzantine fault tolerant environment. Performance evaluations demonstrate that Arke can support enough throughput to operate at a planetary scale while maintaining sub-second latencies in a large geo-distributed setting.
Last updated:  2023-12-01
DY Fuzzing: Formal Dolev-Yao Models Meet Cryptographic Protocol Fuzz Testing
Max Ammann, Lucca Hirschi, and Steve Kremer
Critical and widely used cryptographic protocols have repeatedly been found to contain flaws in their design and their implementation. A prominent class of such vulnerabilities is logical attacks, i.e., attacks that exploit flawed protocol logic. Automated formal verification methods, based on the Dolev-Yao (DY) attacker, formally define and excel at finding such flaws, but operate only on abstract specification models. Fully automated verification of existing protocol implementations is today still out of reach. This leaves open whether such implementations are secure. Unfortunately, this blind spot hides numerous attacks, such as recent logical attacks on widely used TLS implementations introduced by implementation bugs. We answer this question by proposing a novel and effective technique that we call DY model-guided fuzzing, which precludes logical attacks against protocol implementations. The main idea is to consider as possible test cases the set of abstract DY executions of the DY attacker, and to use a novel mutation-based fuzzer to explore this set. The DY fuzzer concretizes each abstract execution to test it on the program under test. This approach enables reasoning at a more structural and security-related level of messages represented as formal terms (e.g. decrypt a message and re-encrypt it with a different key) as opposed to random bit-level modifications that are much less likely to produce relevant logical adversarial behaviors. We implement a full-fledged and modular DY protocol fuzzer. We demonstrate its effectiveness by fuzzing three popular TLS implementations, resulting in the discovery of four novel vulnerabilities.
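A minimal sketch of what term-level mutation means: messages are kept as formal terms and mutated structurally, e.g. re-encrypting a payload under a different key, rather than flipping bits. The term shapes and key names below are invented for illustration and do not reflect the tool's actual term algebra.

```python
import random

# term := ("enc", payload, key) | ("pair", left, right) | ("atom", name)
msg = ("enc",
       ("pair", ("atom", "nonce"), ("atom", "client_id")),
       ("atom", "k_server"))

def reencrypt_with_other_key(term):
    """DY-style mutation: strip one encryption layer, re-apply another key."""
    if term[0] == "enc":
        _, payload, _ = term
        new_key = ("atom", random.choice(["k_other", "k_attacker"]))
        return ("enc", payload, new_key)
    return term

mutated = reencrypt_with_other_key(msg)
print(mutated)  # same payload, different key: a logical, not bit-level, change
```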
Last updated:  2023-12-01
Quantum Security of the UMTS-AKA Protocol and its Primitives, Milenage and TUAK
Paul Frixons, Sébastien Canard, and Loïc Ferreira
The existence of a quantum computer is one of the most significant threats cryptography has ever faced. However, it seems that real-world protocols have received little attention so far with respect to their future security. Indeed, merely relying upon post-quantum primitives may not suffice for a security protocol to be resistant in a fully quantum world. In this paper, we consider the fundamental UMTS key agreement used in 3G but also in 4G (LTE), and in the (recently deployed) 5G technology. We analyze the protocol in a quantum setting, with quantum communications (allowing superposition queries by the involved parties), and where quantum computation is granted to the adversary. We prove that, assuming the underlying symmetric-key primitive is quantum-secure, the UMTS key agreement is also quantum-secure. We also give a quantum security analysis of the underlying primitives, namely Milenage and TUAK. To the best of our knowledge this paper provides the first rigorous proof of the UMTS key agreement in a strong quantum setting. Our result shows that in the quantum world to come, the UMTS technology remains a valid scheme in order to secure the communications of billions of users.
Last updated:  2023-12-01
Learning with Errors over Group Rings Constructed by Semi-direct Product
Jiaqi Liu and Fang-Wei Fu
The Learning with Errors (LWE) problem has been widely utilized as a foundation for numerous cryptographic tools over the years. In this study, we focus on an algebraic variant of the LWE problem called Group ring LWE (GR-LWE). We select group rings (or their direct summands) whose underlying groups belong to specific families of finite groups constructed by taking the semi-direct product of two cyclic groups. Unlike the Ring-LWE problem described by Lyubashevsky, Peikert, and Regev (Eurocrypt 2010), the multiplication operation in the group rings considered here is non-commutative. As an extension of Ring-LWE, it maintains computational hardness and can be potentially applied in many cryptographic scenarios. In this paper, we present two polynomial-time quantum reductions. Firstly, we provide a quantum reduction from the worst-case shortest independent vectors problem (SIVP) in ideal lattices with polynomial approximate factor to the search version of GR-LWE; this reduction requires that the underlying group ring possesses certain mild properties. Secondly, we present another quantum reduction for two types of group rings, where the worst-case SIVP problem is directly reduced to the (average-case) decision GR-LWE problem. The pseudorandomness of GR-LWE samples guaranteed by this reduction can be consequently leveraged to construct semantically secure public-key cryptosystems.
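To make the non-commutative setting concrete, the sketch below builds the group ring $\mathbb{F}_p[G]$ for the dihedral group $G = \mathbb{Z}_m \rtimes \mathbb{Z}_2$ (one of the simplest semi-direct products of two cyclic groups) and forms one GR-LWE-style sample $(a, a \cdot s + e)$. The parameters and error distribution are illustrative only and do not follow the paper's instantiation.

```python
import secrets

p, m = 97, 8
G = [(a, b) for b in range(2) for a in range(m)]   # elements r^a s^b
IDX = {g: i for i, g in enumerate(G)}

def g_mul(g, h):
    """Semi-direct product law: the flip s inverts the rotation r."""
    (a1, b1), (a2, b2) = g, h
    return ((a1 + (a2 if b1 == 0 else -a2)) % m, (b1 + b2) % 2)

def ring_mul(f, g):
    """Convolution over G; non-commutative because G is."""
    out = [0] * len(G)
    for i, u in enumerate(G):
        for j, v in enumerate(G):
            k = IDX[g_mul(u, v)]
            out[k] = (out[k] + f[i] * g[j]) % p
    return out

a = [secrets.randbelow(p) for _ in G]   # uniform ring element
s = [secrets.randbelow(3) for _ in G]   # small-coefficient secret
e = [secrets.randbelow(3) for _ in G]   # small-coefficient error
b = [(x + y) % p for x, y in zip(ring_mul(a, s), e)]  # GR-LWE sample (a, b)
```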
Last updated:  2023-12-01
Accurate Score Prediction for Dual-Sieve Attacks
Léo Ducas and Ludo N. Pulles
The Dual-Sieve Attack on Learning with Errors (LWE), or more generally Bounded Distance Decoding (BDD), has seen many improvements in recent years, and ultimately led to claims that it outperforms the primal attack against certain lattice-based schemes in the PQC standardization process organised by NIST. However, the work of Ducas--Pulles (Crypto '23) revealed that the so-called "Independence Heuristic", which all recent dual attacks used, leads to wrong predictions in a contradictory regime, which is relevant for the security of cryptoschemes. More specifically, the stated distributions of scores for the actual solution and for incorrect candidates were both incorrect. In this work, we propose to use the weaker heuristic that the output vectors of a lattice sieve are uniformly distributed in a ball. Under this heuristic, we give an analysis of the score distribution in the case of an error of fixed length. Integrating over this length, we extend this analysis to any radially distributed error, in particular the Gaussian, as a fix for the score distribution of the actual solution. This approach also provides a prediction for the score of incorrect candidates, using a ball as an approximation of the Voronoi cell of a lattice. We compare the predicted score distributions to extensive experiments, and observe them to be qualitatively and quantitatively quite accurate. This constitutes a first step towards fixing the analysis of the dual-sieve attack: we can now accurately estimate false positives and false negatives. Now that the analysis is fixed, one may consider how to fix the attack itself, namely exploring the opportunities to mitigate a large number of false positives.
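The ball heuristic is easy to experiment with: sample dual vectors uniformly in a ball, fix an error of given length, and inspect the empirical distribution of the usual dual-attack score $\sum_j \cos(2\pi \langle \mathbf{w}_j, \mathbf{e} \rangle)$. A Monte-Carlo sketch with arbitrary dimensions, radii, and counts:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, R = 40, 4096, 1.0   # dimension, number of dual vectors, ball radius

def uniform_in_ball(n_samples):
    """Uniform samples in the d-ball: uniform direction, radius ~ R*U^(1/d)."""
    g = rng.standard_normal((n_samples, d))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    radii = R * rng.random(n_samples) ** (1.0 / d)
    return g * radii[:, None]

w = uniform_in_ball(N)
e = rng.standard_normal(d)
e *= 0.5 / np.linalg.norm(e)              # error of fixed length 0.5
score = np.cos(2 * np.pi * w @ e).sum()   # score of this candidate error
print(score)
```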