Papers updated in last 183 days (1606 results)

Last updated:  2024-03-01
Fully Succinct Batch Arguments for NP from Indistinguishability Obfuscation
Rachit Garg, Kristin Sheridan, Brent Waters, and David J. Wu
Non-interactive batch arguments for $\mathsf{NP}$ provide a way to amortize the cost of $\mathsf{NP}$ verification across multiple instances. In particular, they allow a prover to convince a verifier of multiple $\mathsf{NP}$ statements with communication that scales sublinearly in the number of instances. In this work, we study fully succinct batch arguments for $\mathsf{NP}$ in the common reference string (CRS) model where the length of the proof scales not only sublinearly in the number of instances $T$, but also sublinearly with the size of the $\mathsf{NP}$ relation. Batch arguments with these properties are special cases of succinct non-interactive arguments (SNARGs); however, existing constructions of SNARGs either rely on idealized models or strong non-falsifiable assumptions. The one exception is the Sahai-Waters SNARG based on indistinguishability obfuscation. However, when applied to the setting of batch arguments, this construction requires imposing an a priori bound on the number of instances. Moreover, the size of the common reference string scales linearly with the number of instances. In this work, we give a direct construction of a fully succinct batch argument for $\mathsf{NP}$ that supports an unbounded number of statements from indistinguishability obfuscation and one-way functions. Then, by additionally relying on a somewhere statistically binding (SSB) hash function, we show how to extend our construction to obtain a fully succinct and updatable batch argument. In the updatable setting, a prover can take a proof $\pi$ on $T$ statements $(x_1, \ldots, x_T)$ and "update" it to obtain a proof $\pi'$ on $(x_1, \ldots, x_T, x_{T + 1})$. Notably, the update procedure only requires knowledge of a (short) proof for $(x_1, \ldots, x_T)$ along with a single witness $w_{T + 1}$ for the new instance $x_{T + 1}$. Importantly, the update does not require knowledge of witnesses for $x_1, \ldots, x_T$.
Last updated:  2024-03-01
Revisiting Differential-Linear Attacks via a Boomerang Perspective with Application to AES, Ascon, CLEFIA, SKINNY, PRESENT, KNOT, TWINE, WARP, LBlock, Simeck, and SERPENT
Hosein Hadipour, Patrick Derbez, and Maria Eichlseder
In 1994, Langford and Hellman introduced differential-linear (DL) cryptanalysis, with the idea of decomposing the block cipher $E$ into two parts, $E_U$ and $E_L$, such that $E_U$ exhibits a high-probability differential trail, while $E_L$ has a high-correlation linear trail. Combining these trails forms a distinguisher for $E$, assuming independence between $E_U$ and $E_L$. The dependency between the two parts of DL distinguishers remained unaddressed until EUROCRYPT 2019, where Bar-On et al. introduced the DLCT framework, resolving the issue up to one S-box layer. However, extending the DLCT framework to formalize the dependency between the two parts for multiple rounds remained an open problem. In this paper, we first tackle this problem from the perspective of boomerang analysis. By examining the relationships between DLCT, DDT, and LAT, we introduce a set of new tables facilitating the formulation of dependencies between the two parts of the DL distinguisher across multiple rounds. Then, as the main contribution, we introduce a highly versatile and easy-to-use automatic tool for exploring DL distinguishers, inspired by automatic tools for boomerang distinguishers. This tool considers the dependency between differential and linear trails across multiple rounds. We apply our tool to various symmetric-key primitives, and in all applications, we either present the first DL distinguishers or enhance the best-known ones. We achieve successful results against Ascon, AES, SERPENT, PRESENT, SKINNY, TWINE, CLEFIA, WARP, LBlock, Simeck, and KNOT. Furthermore, we demonstrate that, in some cases, DL distinguishers outperform boomerang distinguishers significantly.
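To make the table-based viewpoint concrete, the following toy Python sketch computes the DDT, LAT, and (up to normalization) the DLCT of Bar-On et al. for a 4-bit S-box; the choice of PRESENT's S-box and all helper names are illustrative assumptions, not taken from the paper.

```python
# Toy illustration (our choices): DDT, LAT, and DLCT for PRESENT's 4-bit S-box.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD, 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
N = 4  # S-box width in bits

def parity(x: int) -> int:
    return bin(x).count("1") & 1

# DDT[a][b] = #{x : S(x) ^ S(x ^ a) = b}
DDT = [[0] * 2**N for _ in range(2**N)]
for a in range(2**N):
    for x in range(2**N):
        DDT[a][SBOX[x] ^ SBOX[x ^ a]] += 1

# LAT[a][b] = sum_x (-1)^(a.x ^ b.S(x))   (scaled linear correlation)
LAT = [[sum((-1) ** (parity(a & x) ^ parity(b & SBOX[x])) for x in range(2**N))
        for b in range(2**N)] for a in range(2**N)]

# Autocorrelation table: sum_x (-1)^(l.(S(x) ^ S(x ^ d))); halving these
# entries recovers the DLCT as defined by Bar-On et al. (count minus 2^(N-1)).
ACT = [[sum((-1) ** parity(l & (SBOX[x] ^ SBOX[x ^ d])) for x in range(2**N))
        for l in range(2**N)] for d in range(2**N)]

print(DDT[1][3], LAT[1][1], ACT[1][1])
```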
Last updated:  2024-03-01
Massive Superpoly Recovery with a Meet-in-the-middle Framework -- Improved Cube Attacks on Trivium and Kreyvium
Jiahui He, Kai Hu, Hao Lei, and Meiqin Wang
The cube attack extracts the information of secret key bits by recovering the coefficient called superpoly in the output bit with respect to a subset of plaintexts/IV, which is called a cube. While the division property provides an efficient way to detect the structure of the superpoly, superpoly recovery could still be prohibitively costly if the number of rounds is sufficiently high. In particular, Core Monomial Prediction (CMP) was proposed at ASIACRYPT 2022 as a scaled-down version of Monomial Prediction (MP), which sacrifices accuracy for efficiency but ultimately gets stuck at 848 rounds of Trivium. In this paper, we provide new insights into CMP by elucidating the algebraic meaning of core monomial trails. We prove that it is sufficient to recover the superpoly by extracting all the core monomial trails, an approach based solely on CMP, thus demonstrating that CMP can achieve perfect accuracy as MP does. We further reveal that CMP is still MP in essence, but with variable substitutions on the target function. Inspired by the divide-and-conquer strategy that has been widely used in previous literature, we design a meet-in-the-middle (MITM) framework, in which the CMP-based approach can be embedded to achieve a speedup. To illustrate the power of these new techniques, we apply the MITM framework to Trivium, Grain-128AEAD and Kreyvium. As a result, not only can the previous computational cost of superpoly recovery be reduced (e.g., 5x faster for superpoly recovery on 192-round Grain-128AEAD), but we also succeed in recovering superpolies for up to 851 rounds of Trivium and up to 899 rounds of Kreyvium. This surpasses the previous best results by respectively 3 and 4 rounds. Using the memory-efficient Möbius transform proposed at EUROCRYPT 2021, we can perform key recovery attacks on target ciphers, even though the superpoly may contain over $2^{40}$ monomials. This leads to the best cube attacks on the target ciphers.
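For readers new to cube attacks, the sketch below shows the basic superpoly-recovery step on a toy Boolean function: summing the output over all assignments of the cube variables isolates the superpoly. The function f and the cube are invented for illustration; the paper's CMP/MITM machinery is about doing this at scale.

```python
# Toy cube-attack step: the sum of f over a cube equals the superpoly of
# that cube. The example function and cube choice are ours, for illustration.
from itertools import product

def f(k0, k1, v0, v1, v2):
    # A small "cipher output bit": f = v0*v1*k0 ^ v0*v1*v2 ^ v2*k1 ^ k0*k1
    return (v0 & v1 & k0) ^ (v0 & v1 & v2) ^ (v2 & k1) ^ (k0 & k1)

def cube_sum(key, v2):
    # Sum f over the cube {v0, v1} with v2 fixed; terms not containing
    # v0*v1 cancel in pairs, leaving the superpoly k0 ^ v2.
    k0, k1 = key
    acc = 0
    for v0, v1 in product((0, 1), repeat=2):
        acc ^= f(k0, k1, v0, v1, v2)
    return acc

for key in product((0, 1), repeat=2):
    assert cube_sum(key, 0) == key[0]        # superpoly = k0 when v2 = 0
    assert cube_sum(key, 1) == key[0] ^ 1    # superpoly = k0 ^ 1 when v2 = 1
print("cube sums match the predicted superpoly k0 ^ v2")
```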
Last updated:  2024-02-29
Collision Resistance from Multi-Collision Resistance for all Constant Parameters
Jan Buzek and Stefano Tessaro
A $t$-multi-collision-resistant hash function ($t$-MCRH) is a family of shrinking functions for which it is computationally hard to find $t$ distinct inputs mapping to the same output for a function sampled from this family. Several works have shown that $t$-MCRHs are sufficient for many of the applications of collision-resistant hash functions (CRHs), which correspond to the special case of $t = 2$. An important question is hence whether $t$-MCRHs for $t > 2$ are fundamentally weaker objects than CRHs. As a first step towards resolving this question, Rothblum and Vasudevan (CRYPTO '22) recently gave non-black-box constructions of infinitely-often secure CRHs from $t$-MCRHs for $t \in \{3,4\}$ assuming the MCRH is sufficiently shrinking. Earlier on, Komargodski and Yogev (CRYPTO '18) also showed that $t$-MCRHs for any constant $t$ imply the weaker notion of a distributional CRH. In this paper, we remove the limitations of prior works, and completely resolve the question of the power of $t$-MCRHs for constant $t$ in the infinitely-often regime, showing that the existence of such a function family always implies the existence of an infinitely-often secure CRH. As in the works mentioned above, our construction is non-black-box and non-constructive. We further give a new domain extension result for MCRHs that enables us to show that the underlying MCRH need only have arbitrarily small linear shrinkage (mapping $(1 + \epsilon)n$ bits to $n$ bits for any fixed $\epsilon > 0$) to imply the existence of CRHs.
Last updated:  2024-02-29
SyRA: Sybil-Resilient Anonymous Signatures with Applications to Decentralized Identity
Elizabeth Crites, Aggelos Kiayias, and Amirreza Sarencheh
We introduce a new cryptographic primitive, called Sybil-Resilient Anonymous (SyRA) signature, which enables users to generate, on demand, unlinkable pseudonyms tied to any given context, and issue digital signatures on their behalf. Concretely, given a personhood relation, an issuer (who may be a distributed entity) enables users to prove their personhood and extract an associated long-term key, which can then be used to issue signatures for any given context and message. Sybil-resilient anonymous signatures achieve two key security properties: 1) Sybil resilience: ensures that every user is entitled to at most one pseudonym per context, and 2) anonymity: requires that no information about the user is leaked through their various pseudonyms or the signatures they issue on their pseudonyms’ behalf. We conceptualize SyRA signatures as an ideal functionality in the Universal Composition (UC) setting and realize the functionality via an efficient, pairing-based construction that utilizes two levels of verifiable random functions (VRFs) and which may be of independent interest. A key feature of this approach is the statelessness of the issuer: we achieve the core properties of Sybil resilience and anonymity without requiring the issuer to retain any information about past user interactions. SyRA signatures have various applications in multiparty systems, such as e-voting (e.g., for decentralized governance) and cryptocurrency airdrops, making them an attractive option for deployment in decentralized identity (DID) systems.
Last updated:  2024-02-29
The impact of data-heavy, post-quantum TLS 1.3 on the Time-To-Last-Byte of real-world connections
Panos Kampanakis and Will Childs-Klein
It has been shown that post-quantum key exchange and authentication with ML-KEM and ML-DSA, NIST’s post-quantum algorithm picks, will have an impact on the performance of TLS 1.3 as used on the Web and in other applications. Studies so far have focused on the overhead of quantum-resistant algorithms on TLS time-to-first-byte (handshake time). Although these works have been important in quantifying the slowdown in connection establishment, they do not capture the full picture regarding real-world TLS 1.3 connections which carry sizable amounts of data. Intuitively, the introduction of an extra 10KB of ML-KEM and ML-DSA exchanges in the connection negotiation will inflate the connection establishment time proportionally more than it will increase the total connection time of a Web connection carrying 200KB of data. In this work, we quantify the impact of ML-KEM and ML-DSA on typical TLS 1.3 connections which transfer a few hundred KB from the server to the client. We study the slowdown in the time-to-last-byte of post-quantum connections under normal network conditions and in more unstable environments with high packet delay variability and loss probabilities. We show that the impact of ML-KEM and ML-DSA on the TLS 1.3 time-to-last-byte under stable network conditions is lower than the impact on the time-to-first-byte and diminishes as the transferred data increases. The time-to-last-byte increase stays below 5% for high-bandwidth, stable networks. It goes from a 32% increase in the time-to-first-byte to under a 15% increase in the time-to-last-byte when transferring 50KiB of data or more under low-bandwidth, stable network conditions. Even when congestion control affects connection establishment, the additional slowdown drops below 10% as the connection data increases to 200KiB. We also show that connections under lossy or volatile network conditions could see higher impact from post-quantum handshakes, but these connections’ time-to-last-byte increase still drops as the transferred data increases. Finally, we show that such connections are already significantly slow and volatile regardless of the TLS handshake.
Last updated:  2024-02-29
Time-Lock Puzzles with Efficient Batch Solving
Jesko Dujmovic, Rachit Garg, and Giulio Malavolta
Time-Lock Puzzles (TLPs) are a powerful tool for concealing messages until a predetermined point in time. When solving multiple puzzles, it becomes crucial to have the ability to "batch-solve" puzzles, i.e., simultaneously open multiple puzzles at essentially the cost of solving a single one. Unfortunately, all previously known TLP constructions equipped for batch solving rely on super-polynomially secure indistinguishability obfuscation, making them impractical. In light of this challenge, we present novel TLP constructions that offer batch-solving capabilities without using heavy cryptographic hammers. Our proposed schemes are simple and concretely efficient, and they can be constructed from well-established cryptographic assumptions based on pairings or learning with errors (LWE). Along the way, we introduce new constructions of puncturable key-homomorphic PRFs both in the lattice and in the pairing setting, which may be of independent interest. Our analysis leverages an interesting connection to Hall's marriage theorem and incorporates an optimized combinatorial approach, enhancing the practicality and feasibility of our TLP schemes. Furthermore, we introduce the concept of "rogue-puzzle attacks", where maliciously crafted puzzle instances may disrupt the batch-solving process of honest puzzles. We then propose constructions of concrete and efficient TLPs designed to prevent such attacks.
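For background, the sketch below implements the classic Rivest-Shamir-Wagner repeated-squaring time-lock puzzle, the baseline primitive the paper equips with batch solving; it is not the paper's pairing- or LWE-based construction, and the toy primes are far too small for real use.

```python
# Background sketch: the classic Rivest-Shamir-Wagner time-lock puzzle based
# on repeated squaring in an RSA group. This is NOT the paper's batch-solvable
# construction; the primes below are toy-sized and insecure.
import secrets

def setup(p, q, T, message):
    # The puzzle generator knows phi(N) and can shortcut the T squarings.
    N, phi = p * q, (p - 1) * (q - 1)
    x = secrets.randbelow(N - 2) + 2
    key = pow(x, pow(2, T, phi), N)     # x^(2^T) mod N, computed fast
    return (N, x, T, message ^ key)     # toy: XOR the message with the key

def solve(puzzle):
    # A solver without phi(N) must perform T sequential squarings.
    N, x, T, ct = puzzle
    y = x
    for _ in range(T):
        y = y * y % N
    return ct ^ y

p, q = 104729, 1299709                  # toy primes (use ~1024-bit in practice)
puzzle = setup(p, q, T=10_000, message=42)
assert solve(puzzle) == 42
```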
Last updated:  2024-02-29
Two-Round Threshold Lattice-Based Signatures from Threshold Homomorphic Encryption
Kamil Doruk Gur, Jonathan Katz, and Tjerand Silde
Much recent work has developed efficient protocols for threshold signatures, where $n$ parties share a signing key and some threshold $t$ of those parties must interact to produce a signature. Yet efficient threshold signatures with post-quantum security have been elusive, with the state-of-the-art being a two-round scheme by Damgård et al. (PKC'21) based on lattices that supports only the full threshold case (i.e., $t=n$). We show here a two-round threshold signature scheme based on standard lattice assumptions that supports arbitrary thresholds $t\leq n$. Estimates of our scheme's performance at the $128$-bit security level show that in the $3$-out-of-$5$ case, we obtain signatures of size $46.6$ KB and public keys of size $13.6$ KB. We achieve $\approx 5\times$ improved parameters if only a small number of signatures are ever issued with the same key. As an essential building block and independent contribution, we construct an actively secure threshold (linearly) homomorphic encryption scheme that supports arbitrary thresholds $t \leq n$.
Last updated:  2024-02-29
Strong PUF Security Metrics: Sensitivity of Responses to Single Challenge Bit Flips
Wolfgang Stefani, Fynn Kappelhoff, Martin Gruber, Yu-Neng Wang, Sara Achour, Debdeep Mukhopadhyay, and Ulrich Rührmair
This paper belongs to a sequence of manuscripts that discuss generic and easy-to-apply security metrics for Strong Physical Unclonable Functions (PUFs). These metrics cannot and shall not fully replace in-depth machine learning (ML) studies in the security assessment of Strong PUF candidates. But they can complement the latter, serve in initial complexity analyses, and allow simple iterative design optimization. Moreover, they are computationally more efficient and far easier to standardize than typical ML-studies. This manuscript treats one very natural, but also very impactful metric, and investigates the effects that the alteration of single challenge bits has on the associated PUF-responses. We define several concrete metric scores based on this idea, and demonstrate their predictive power by applying them to various popular Strong PUF design families as test cases. This includes XOR Arbiter PUFs, XOR Bistable Ring PUFs, and Feed-Forward Arbiter PUFs, whose practical security is particularly well known after two decades of intense research. In passing, our manuscript also suggests techniques for representing our metric scores graphically, and for interpreting them in a meaningful manner. Our work demonstrates that if comparable methods had existed earlier, various Strong PUF candidates deemed secure and broken later could have been recognized and winnowed early on.
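As a rough illustration of the kind of metric discussed (not the paper's exact scores), the toy simulation below estimates, for a linear additive-delay Arbiter PUF model, how often the response flips when a single challenge bit is flipped; the model and all parameters are our assumptions.

```python
# Toy single-bit-flip sensitivity metric for a simulated Arbiter PUF
# (linear additive-delay model); model and parameters are our assumptions.
import random

N_STAGES, N_CHALLENGES = 64, 500
w = [random.gauss(0.0, 1.0) for _ in range(N_STAGES + 1)]  # stage delay weights

def response(c):
    # Standard feature map: phi_i = prod_{j>=i} (1 - 2*c_j), phi_n = 1.
    phi = [0.0] * (N_STAGES + 1)
    phi[N_STAGES] = prod = 1.0
    for i in range(N_STAGES - 1, -1, -1):
        prod *= 1 - 2 * c[i]
        phi[i] = prod
    return int(sum(wi * pi for wi, pi in zip(w, phi)) > 0)

flip_rate = [0.0] * N_STAGES
for _ in range(N_CHALLENGES):
    c = [random.randint(0, 1) for _ in range(N_STAGES)]
    r = response(c)
    for i in range(N_STAGES):
        c[i] ^= 1
        flip_rate[i] += (response(c) != r) / N_CHALLENGES
        c[i] ^= 1

# Flipping a late challenge bit negates most phi terms, so late bits flip the
# response far more often than early ones; this non-uniform sensitivity is
# exactly what such metrics are designed to expose.
print([round(f, 2) for f in flip_rate[::8]])
```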
Last updated:  2024-02-29
Single Pass Client-Preprocessing Private Information Retrieval
Arthur Lazzaretti and Charalampos Papamanthou
Recently, many works have considered Private Information Retrieval (PIR) with client-preprocessing: In this model a client and a server jointly run a preprocessing phase, after which client queries can run in time sublinear in the size of the database. In addition, such approaches store no additional bits per client at the server, allowing us to scale PIR to a large number of clients. In this work, we propose the first client-preprocessing PIR scheme with ``single pass'' client-preprocessing. In particular, our scheme is concretely optimal with respect to preprocessing, in the sense that it requires exactly one linear pass over the database. This is in stark contrast with existing works, whose preprocessing is proportional to $\lambda \cdot N$, where $\lambda$ is the security parameter (e.g., $\lambda=128$). Our approach yields a preprocessing speedup of 45-100$\times$ and a query speedup of up to 20$\times$ when compared to previous state-of-the-art schemes (e.g., Checklist, USENIX 2021), making preprocessing PIR more attractive for a myriad of use cases that are ``session-based''. In addition to fast preprocessing, our scheme features extremely fast updates (additions and edits)---in constant time. Previously, the best known approach for handling updates in client-preprocessing PIR had time complexity $O(\log N)$, while also adding a $\log N$ factor to the bandwidth. We implement our update algorithm and show concrete speedups of about 20$\times$ in update time when compared to the previous state-of-the-art updatable scheme (e.g., Checklist, USENIX 2021).
Last updated:  2024-02-29
Finding the Impossible: Automated Search for Full Impossible-Differential, Zero-Correlation, and Integral Attacks
Hosein Hadipour, Sadegh Sadeghi, and Maria Eichlseder
Impossible differential (ID), zero-correlation (ZC), and integral attacks are a family of important attacks on block ciphers. For example, the impossible differential attack was the first cryptanalytic attack on 7 rounds of AES. Evaluating the security of block ciphers against these attacks is very important but also challenging: Finding these attacks usually implies a combinatorial optimization problem involving many parameters and constraints that is very hard to solve using manual approaches. Automated solvers, such as Constraint Programming (CP) solvers, can help the cryptanalyst to find suitable attacks. However, previous CP-based methods focus on finding only the ID, ZC, and integral distinguishers, often only in a limited search space. Notably, none can be extended to a unified optimization problem for finding full attacks, including efficient key-recovery steps. In this paper, we present a new CP-based method to search for ID, ZC, and integral distinguishers and extend it to a unified constraint optimization problem for finding full ID, ZC, and integral attacks. To show the effectiveness and usefulness of our method, we applied it to several block ciphers, including SKINNY, CRAFT, SKINNYe-v2, and SKINNYee. For the ISO standard block cipher SKINNY, we significantly improve all existing ID, ZC, and integral attacks. In particular, we improve the integral attacks on SKINNY-$n$-$3n$ and SKINNY-$n$-$2n$ by 3 and 2 rounds, respectively, obtaining the best cryptanalytic results on these variants in the single-key setting. We improve the ZC attack on SKINNY-$n$-$n$ (SKINNY-$n$-$2n$) by 2 (resp. 1) rounds. We also improve the ID attacks on all variants of SKINNY. Particularly, we improve the time complexity of the best previous single-tweakey (related-tweakey) ID attack on SKINNY-$128$-$256$ (resp. SKINNY-$128$-$384$) by a factor of $2^{22.57}$ (resp. $2^{15.39}$). On CRAFT, we propose a 21-round (20-round) ID (resp. ZC) attack, which improves the best previous single-tweakey attack by 2 (resp. 1) rounds. Using our new model, we also provide several practical integral distinguishers for reduced-round SKINNY, CRAFT, and Deoxys-BC. Our method is generic and applicable to other strongly aligned block ciphers.
Last updated:  2024-02-29
Improved Search for Integral, Impossible-Differential and Zero-Correlation Attacks: Application to Ascon, ForkSKINNY, SKINNY, MANTIS, PRESENT and QARMAv2
Hosein Hadipour, Simon Gerhalter, Sadegh Sadeghi, and Maria Eichlseder
Integral, impossible-differential (ID), and zero-correlation (ZC) attacks are three of the most important attacks on block ciphers. However, manually finding these attacks can be a daunting task, which is why automated methods are becoming increasingly important. Most automatic tools regarding integral, ZC, and ID attacks have focused only on finding distinguishers rather than complete attacks. At EUROCRYPT 2023, Hadipour et al. proposed a generic and efficient constraint programming (CP) model based on satisfiability for finding ID, ZC, and integral distinguishers. This new model can be extended to a unified CP model for finding full key recovery attacks. However, it has limitations, including the need to determine the contradiction location beforehand and a cell-wise model that is unsuitable for weakly aligned ciphers like Ascon and PRESENT. They also deferred developing a CP model for the partial-sum technique in key recovery as future work. In this paper, we enhance Hadipour et al.'s method in several ways. First, we remove the limitation of determining the contradiction location in advance. Second, we show how to extend the distinguisher model to a bit-wise model, considering the internal structure of S-boxes and keeping the model based on satisfiability. Third, we introduce a CP model for the partial-sum technique for the first time. To show the usefulness and versatility of our approach, we apply it to various designs, from strongly aligned ones like ForkSKINNY and QARMAv2 to weakly aligned ones such as Ascon and PRESENT, yielding significantly improved results. To mention a few of our results, we improve the integral distinguisher of QARMAv2-128 (resp. QARMAv2-64) by 7 (resp. 5) rounds, and the integral distinguisher of ForkSKINNY by 1 round, thanks solely to our cell-wise distinguisher modeling. By using our new bit-wise modeling, our tool can find a group of $2^{155}$ 5-round ID and ZC distinguishers for Ascon in only one run, taking a few minutes on a regular laptop. The new CP model for the partial-sum technique enhances integral attacks on all SKINNY variants, notably improving the best attack on SKINNY-$n$-$n$ in the single-key setting by 1 round. We also enhance ID attacks on ForkSKINNY and provide the first analysis of this cipher in a limited reduced-round setting. Our methods are generic and applicable to other block ciphers.
Last updated:  2024-02-29
Powers-of-Tau to the People: Decentralizing Setup Ceremonies
Valeria Nikolaenko, Sam Ragsdale, Joseph Bonneau, and Dan Boneh
We introduce the first decentralized trusted setup protocols for constructing a powers-of-tau structured reference string. Facilitated by a blockchain platform, our protocols can run in a permissionless manner, with anybody able to participate in exchange for paying requisite transaction fees. The result is secure as long as any single party participates honestly. We introduce several protocols optimized for different sized powers-of-tau setups and using an on-chain or off-chain data availability model to store the resulting string. We implement our most efficient protocol on top of Ethereum, demonstrating practical concrete performance numbers.
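The core update step of a powers-of-tau ceremony is simple to state: each contributor re-randomizes the string $g^{\tau^j}$ with a fresh secret $s$. The toy sketch below shows this over a plain multiplicative group (real ceremonies use pairing-friendly curves and include proofs of correct contribution, which are omitted here).

```python
# Minimal sketch of one powers-of-tau contribution round over a toy group
# (a subgroup of Z_p^*; real ceremonies use pairing-friendly elliptic curves).
import secrets

P = 2**127 - 1     # toy Mersenne-prime modulus, not for real use
Q = P - 1          # exponents are reduced mod the group order

def contribute(powers):
    # A participant samples a fresh secret s and re-randomizes the string:
    # g^{tau^j}  ->  g^{(s*tau)^j} = (g^{tau^j})^{s^j}.
    s = secrets.randbelow(Q - 1) + 1
    return [pow(pj, pow(s, j, Q), P) for j, pj in enumerate(powers)]

g, degree = 3, 8
powers = [g] * (degree + 1)        # start from tau = 1: every entry is g
for _ in range(3):                 # three contributors, run sequentially
    powers = contribute(powers)
# The final string encodes tau = s1*s2*s3; tau stays hidden as long as at
# least one contributor deletes their s_i (the single-honest-party guarantee).
```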
Last updated:  2024-02-29
Connecting Leakage-Resilient Secret Sharing to Practice: Scaling Trends and Physical Dependencies of Prime Field Masking
Sebastian Faust, Loïc Masure, Elena Micheli, Maximilian Orlt, and François-Xavier Standaert
Symmetric ciphers operating in (small or mid-size) prime fields have been shown to be promising candidates to maintain security against low-noise (or even noise-free) side-channel leakage. In order to design prime ciphers that best trade physical security and implementation efficiency, it is essential to understand how side-channel security evolves with the field size (i.e., scaling trends). Unfortunately, it has also been shown that such a scaling trend depends on the leakage functions and cannot be explained by the standard metrics used to analyze Boolean masking with noise. In this work, we therefore initiate a formal study of prime field masking for two canonical leakage functions: bit leakages and Hamming weight leakages. By leveraging theoretical results from the leakage-resilient secret sharing literature, we explain formally why (1) bit leakages correspond to a worst-case and do not encourage operating in larger fields, and (2) an opposite conclusion holds for Hamming weight leakages, where increasing the prime field modulus p can contribute to a security amplification that is exponential in the number of shares, with log(p) seen as a security parameter, much like the noise variance in Boolean masking. We combine these theoretical results with experimental ones and show that the benefits of masking in larger prime fields degrade gracefully when leakage functions slightly deviate from the Hamming weight abstraction, motivating further research towards characterizing (ideally wide) classes of leakage functions offering such guarantees.
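The contrast between the two leakage models can be simulated directly: share a secret additively over a small prime field and measure how distinguishable the leakage distributions of two secrets are. The sketch below is our own toy experiment, not the paper's formal analysis.

```python
# Toy comparison of the two leakage functions for additive masking mod p.
# All modeling choices (p, d, trial count) are ours, for intuition only.
import random

p, d, TRIALS = 31, 3, 100_000       # small prime field, 3 shares

def share(x):
    r = [random.randrange(p) for _ in range(d - 1)]
    return r + [(x - sum(r)) % p]

def bit_leak(s):    # "worst-case" model: least significant bit of each share
    return tuple(si & 1 for si in s)

def hw_leak(s):     # Hamming-weight model: HW of each share
    return tuple(bin(si).count("1") for si in s)

def histogram(x, leak):
    h = {}
    for _ in range(TRIALS):
        l = leak(share(x))
        h[l] = h.get(l, 0) + 1
    return h

# Statistical distance between the leakage of x=0 and x=1; smaller means
# the shared secret is better hidden under that leakage function.
for leak in (bit_leak, hw_leak):
    h0, h1 = histogram(0, leak), histogram(1, leak)
    sd = sum(abs(h0.get(k, 0) - h1.get(k, 0))
             for k in set(h0) | set(h1)) / (2 * TRIALS)
    print(leak.__name__, round(sd, 3))
```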
Last updated:  2024-02-29
Closing the Efficiency Gap between Synchronous and Network-Agnostic Consensus
Giovanni Deligios and Mose Mizrahi Erbes
In the consensus problem, $n$ parties want to agree on a common value, even if some of them are corrupt and arbitrarily misbehave. If the parties have a common input $m$, then they must agree on $m$. Protocols solving consensus assume either a synchronous communication network, where messages are delivered within a known time, or an asynchronous network with arbitrary delays. Asynchronous protocols only tolerate $t_a < n/3$ corrupt parties. Synchronous ones can tolerate $t_s < n/2$ corruptions with setup, but their security completely breaks down if the synchrony assumptions are violated. Network-agnostic consensus protocols, as introduced by Blum, Katz, and Loss [TCC'19], are secure regardless of network conditions, tolerating up to $t_s$ corruptions with synchrony and $t_a$ without, under provably optimal assumptions $t_a \leq t_s$ and $2t_s + t_a < n$. Despite efforts to improve their efficiency, all known network-agnostic protocols fall short of the asymptotic complexity of state-of-the-art purely synchronous protocols. In this work, we introduce a novel technique to compile any synchronous and any asynchronous consensus protocols into a network-agnostic one. This process only incurs a small constant number of overhead rounds, so that the compiled protocol matches the optimal round complexity for synchronous protocols. Our compiler also preserves under a variety of assumptions the asymptotic communication complexity of state-of-the-art synchronous and asynchronous protocols. Hence, it closes the current efficiency gap between synchronous and network-agnostic consensus. As a plus, our protocols support $\ell$-bit inputs, and can be extended to achieve communication complexity $O(n^2\kappa + \ell n)$ under the assumptions for which this is known to be possible for purely synchronous protocols.
Last updated:  2024-02-29
Õptimal Parallel Broadcast
Gilad Asharov and Anirudh Chandramouli
We study broadcast protocols in the information-theoretic model under optimal conditions, where the number of corruptions $t$ is at most one-third of the parties, $n$. While broadcast protocols with worst-case $o(n)$ round complexity are known to be impossible, protocols with an expected constant number of rounds have been demonstrated since the seminal work of Feldman and Micali [STOC'88]. Communication complexity for such protocols has gradually improved over the years, reaching $O(nL)$ plus expected $O(n^4\log n)$ for broadcasting a message of size $L$ bits. This paper presents a perfectly secure broadcast protocol with expected constant rounds and communication complexity of $O(nL)$ plus expected $O(n^3 \log^2n)$ bits. In addition, we consider the problem of parallel broadcast, where $n$ senders each wish to broadcast a message of size $L$. We show a parallel broadcast protocol with expected constant rounds and communication complexity of $O(n^2L)$ plus expected $O(n^3 \log^2n)$ bits. Moreover, we show a lower bound for parallel broadcast, showing that our protocol is optimal up to logarithmic factors and in expectation. Our main contribution is a framework for obtaining perfectly secure broadcast with an expected constant number of rounds from a statistically secure verifiable secret sharing. Moreover, we provide a new statistically secure verifiable secret sharing where the broadcast cost per participant is reduced from $O(n \log n)$ bits to only $O({\sf poly} \log n)$ bits. All our protocols are adaptively secure.
Last updated:  2024-02-29
Security analysis of the Classic McEliece, HQC and BIKE schemes in low memory
Yu Li and Li-Ping Wang
With the advancement of NIST PQC standardization, three of the four candidates in Round 4 are code-based schemes, namely Classic McEliece, HQC and BIKE. Currently, one of the most important tasks is to further analyze their security levels for the suggested parameter sets. At PKC 2022, Esser and Bellini restated the major information set decoding (ISD) algorithms by using nearest neighbor search and then applied these ISD algorithms to estimate the bit security of Classic McEliece, HQC and BIKE under the suggested parameter sets. However, all major ISD algorithms consume a large amount of memory, which in turn affects their time complexities. In this paper, we reestimate the bit-security levels of the parameter sets suggested by these three schemes in low memory by applying $K$-list sum algorithms to ISD algorithms. Compared with Esser-Bellini's results, our results yield reductions in bit-security levels of $11.09$, $12.64$, and $12.19$ bits for Classic McEliece, HQC, and BIKE, respectively.
Last updated:  2024-02-29
To extend or not to extend: Agile Masking Instructions for PQC
Markus Krausz, Georg Land, Florian Stolz, Dennis Naujoks, Jan Richter-Brockmann, Tim Güneysu, and Lucie Kogelheide
Splitting up sensitive data into multiple shares – termed masking – has proven an effective countermeasure against various types of Side-Channel Analysis (SCA) on cryptographic implementations. However, in software this approach not only leads to dramatic performance overheads for non-linear operations, but also suffers from microarchitectural leakage, which is hard to avoid. Both problems can be addressed with one solution: masked hardware accelerators. In this context, Gao et al. [GGM+21] presented a RISC-V Instruction Set Extension (ISE) with masked Boolean and arithmetic instructions to accelerate masked software implementations of block ciphers. In this work, we demonstrate how this ISE can be applied to and extended for Post-Quantum Cryptography (PQC) components, forming a crypto-agile solution. We provide masked implementations based on three different ISE constellations for multiple highly relevant components, including Cumulative Distribution Table (CDT) sampling and polynomial rotation, which, to the best of our knowledge, have not been masked before. With the masked instructions, we measure speedups of more than one order of magnitude compared to sophisticated bitsliced implementations and even up to two orders of magnitude for non-bitsliced implementations. We assert the first-order security of our implementation with a practical evaluation.
Last updated:  2024-02-29
Efficient and Generic Methods to Achieve Active Security in Private Information Retrieval and More Advanced Database Search
Reo Eriguchi, Kaoru Kurosawa, and Koji Nuida
Motivated by secure database search, we present secure computation protocols for a function $f$ in the client-servers setting, where a client can obtain $f(x)$ on a private input $x$ by communicating with multiple servers each holding $f$. Specifically, we propose generic compilers from passively secure protocols, which only keep security against servers following the protocols, to actively secure protocols, which guarantee privacy and correctness even against malicious servers. Our compilers are applied to protocols computing any class of functions, and are efficient in that the overheads in communication and computational complexity are only polynomial in the number of servers, independent of the complexity of functions. We then apply our compilers to obtain concrete actively secure protocols for various functions including private information retrieval (PIR), bounded-degree multivariate polynomials and constant-depth circuits. For example, our actively secure PIR protocols achieve exponentially better computational complexity in the number of servers than the currently best-known protocols. Furthermore, our protocols for polynomials and constant-depth circuits reduce the required number of servers compared to the previous actively secure protocols. In particular, our protocol instantiated from the sparse Learning Parity with Noise (LPN) assumption is the first actively secure protocol for multivariate polynomials which has the minimum number of servers, without assuming fully homomorphic encryption.
Last updated:  2024-02-29
Universal Composable Password Authenticated Key Exchange for the Post-Quantum World
You Lyu, Shengli Liu, and Shuai Han
In this paper, we construct the first password authenticated key exchange (PAKE) scheme from isogenies with Universal Composable (UC) security in the random oracle model (ROM). We also construct the first two PAKE schemes with UC security in the quantum random oracle model (QROM), one based on the learning with errors (LWE) assumption, and the other based on the group-action decisional Diffie-Hellman (GA-DDH) assumption in the isogeny setting. To obtain our UC-secure PAKE scheme in ROM, we propose a generic construction of PAKE from basic lossy public key encryption (LPKE) and CCA-secure PKE. We also introduce a new variant of LPKE, named extractable LPKE (eLPKE). By replacing the basic LPKE with eLPKE, our generic construction of PAKE achieves UC security in QROM. The LPKE and eLPKE have instantiations not only from LWE but also from GA-DDH, which admit four specific PAKE schemes with UC security in ROM or QROM, based on LWE or GA-DDH.
Last updated:  2024-02-29
Lower Bounds for Differential Privacy Under Continual Observation and Online Threshold Queries
Edith Cohen, Xin Lyu, Jelani Nelson, Tamás Sarlós, and Uri Stemmer
One of the most basic problems for studying the "price of privacy over time" is the so-called private counter problem, introduced by Dwork et al. (2010) and Chan et al. (2010). In this problem, we aim to track the number of events that occur over time, while hiding the existence of every single event. More specifically, in every time step $t\in[T]$ we learn (in an online fashion) that $\Delta_t\geq 0$ new events have occurred, and must respond with an estimate $n_t\approx\sum_{j=1}^t \Delta_j$. The privacy requirement is that all of the outputs together, across all time steps, satisfy event level differential privacy. The main question here is how our error needs to depend on the total number of time steps $T$ and the total number of events $n$. Dwork et al. (2015) showed an upper bound of $O\left(\log(T)+\log^2(n)\right)$, and Henzinger et al. (2023) showed a lower bound of $\Omega\left(\min\{\log n, \log T\}\right)$. We show a new lower bound of $\Omega\left(\min\{n,\log T\}\right)$, which is tight w.r.t. the dependence on $T$, and is tight in the sparse case where $\log^2 n=O(\log T)$. Our lower bound has the following implications: (1) We show that our lower bound extends to the online thresholds problem, where the goal is to privately answer many "quantile queries" when these queries are presented one-by-one. This resolves an open question of Bun et al. (2017). (2) Our lower bound implies, for the first time, a separation between the number of mistakes obtainable by a private online learner and a non-private online learner. This partially resolves a COLT'22 open question published by Sanyal and Ramponi. (3) Our lower bound also yields the first separation between the standard model of private online learning and a recently proposed relaxed variant of it, called private online prediction.
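Since the paper proves lower bounds, it may help to recall the classic upper-bound construction it is measured against: the binary(-tree) mechanism of Chan et al. and Dwork et al. The sketch below is a simplified version with parameter choices of our own.

```python
# Context sketch: the classic binary mechanism for private continual counting
# (the upper-bound construction; the paper proves lower bounds against this
# problem). Simplified, with parameter choices of our own.
import math, random

def lap(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) random variable.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def binary_mechanism(stream, eps):
    T = len(stream)
    L = max(1, math.ceil(math.log2(T + 1)))   # tree height
    alpha = [0.0] * L                         # exact dyadic block sums
    alpha_hat = [0.0] * L                     # noisy dyadic block sums
    outputs = []
    for t, x in enumerate(stream, start=1):
        i = (t & -t).bit_length() - 1         # index of lowest set bit of t
        alpha[i] = sum(alpha[:i]) + x         # merge smaller blocks with x_t
        for j in range(i):
            alpha[j] = alpha_hat[j] = 0.0
        # Each x_t enters <= L blocks, so noise scale L/eps gives eps-DP.
        alpha_hat[i] = alpha[i] + lap(L / eps)
        outputs.append(sum(alpha_hat[j] for j in range(L) if t >> j & 1))
    return outputs

stream = [random.randint(0, 1) for _ in range(1000)]
ests = binary_mechanism(stream, eps=1.0)
print(round(ests[-1], 1), sum(stream))        # noisy vs. exact final count
```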
Last updated:  2024-02-29
Two-Round Maliciously-Secure Oblivious Transfer with Optimal Rate
Pedro Branco, Nico Döttling, and Akshayaram Srinivasan
We give a construction of a two-round batch oblivious transfer (OT) protocol in the CRS model that is UC-secure against malicious adversaries and has (near) optimal communication cost. Specifically, to perform a batch of $k$ oblivious transfers where the sender's inputs are bits, the sender and the receiver need to communicate a total of $3k + o(k) \cdot \mathsf{poly}(\lambda)$ bits. We argue that $3k$ bits are required by any protocol with a black-box and straight-line simulator. The security of our construction is proven assuming the hardness of Quadratic Residuosity (QR) and Learning Parity with Noise (LPN).
Last updated:  2024-02-29
Post-Quantum Security of Tweakable Even-Mansour, and Applications
Gorjan Alagic, Chen Bai, Jonathan Katz, Christian Majenz, and Patrick Struck
The tweakable Even-Mansour construction yields a tweakable block cipher from a public random permutation. We prove post-quantum security of tweakable Even-Mansour when attackers have quantum access to the random permutation but only classical access to the secretly-keyed construction, the relevant setting for most real-world applications. We then use our results to prove post-quantum security—in the same model—of the symmetric-key schemes Chaskey (an ISO-standardized MAC), Elephant (an AEAD finalist of NIST's lightweight cryptography standardization effort), and a variant of Minalpher (an AEAD second-round candidate of the CAESAR competition).
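For orientation, here is the one-round tweakable Even-Mansour template in executable toy form: the input and output of a public permutation are masked with a key- and tweak-dependent offset. The 16-bit permutation and the hash-based masking below are stand-ins of ours, not the paper's exact instantiation.

```python
# Toy one-round tweakable Even-Mansour: TEM_k(t, x) = P(x ^ H_k(t)) ^ H_k(t).
# The 16-bit permutation P and hash-based mask H are illustrative stand-ins.
import hashlib, random

random.seed(1)
PERM = list(range(2**16)); random.shuffle(PERM)   # "public" random permutation
PINV = [0] * 2**16
for i, y in enumerate(PERM):
    PINV[y] = i

def mask(key: bytes, tweak: bytes) -> int:
    # Key/tweak-dependent offset H_k(t); real analyses use, e.g., universal
    # hashing here, so a cryptographic hash is merely a toy substitute.
    h = hashlib.sha256(key + b"|" + tweak).digest()
    return int.from_bytes(h[:2], "big")

def tem_enc(key, tweak, x):
    m = mask(key, tweak)
    return PERM[x ^ m] ^ m

def tem_dec(key, tweak, y):
    m = mask(key, tweak)
    return PINV[y ^ m] ^ m

k = b"secret-key"
assert tem_dec(k, b"t1", tem_enc(k, b"t1", 0x1234)) == 0x1234
print(hex(tem_enc(k, b"t1", 0x1234)), hex(tem_enc(k, b"t2", 0x1234)))
```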
Last updated:  2024-02-29
Preimage Attacks on Reduced-Round Ascon-Xof
Seungjun Baek, Giyoon Kim, and Jongsung Kim
Ascon, a family of algorithms that supports authenticated encryption and hashing, has been selected as the new standard for lightweight cryptography in the NIST Lightweight Cryptography Project. Ascon’s permutation and authenticated encryption have been actively analyzed, but there are relatively few analyses on the hashing. In this paper, we concentrate on preimage attacks on Ascon-Xof. We focus on linearizing the polynomials leaked by the hash value to find its inverse. In an attack on 2-round Ascon-Xof, we carefully construct the set of guess bits using a greedy algorithm in the context of guess-and-determine. This allows us to attack Ascon-Xof more efficiently than the method of Dobraunig et al., and we fully implement our attack to demonstrate its effectiveness. We also provide the number of guess bits required to linearize one output bit after 3 and 4 rounds of Ascon’s permutation, respectively. In particular, for the first time, we connect the result for 3-round Ascon to a preimage attack on Ascon-Xof with a 64-bit output. Our attacks primarily focus on analyzing weakened versions of Ascon-Xof, where the weakening involves setting all the IV values to 0 and omitting the round constants. Although our attacks do not compromise the security of the full Ascon-Xof, they provide new insights into its security.
Last updated:  2024-02-29
A New Approach for Non-Interactive Zero-Knowledge from Learning with Errors
Brent Waters
We put forward a new approach for achieving non-interactive zero-knowledge proofs (NIZKs) from the learning with errors (LWE) assumption (with subexponential modulus to noise ratio). We provide an LWE-based construction of a hidden bits generator that gives rise to a NIZK via the celebrated hidden bits paradigm. A notable feature of our construction is its simplicity. Our construction employs lattice trapdoors, but beyond that uses only simple operations. Unlike prior solutions we do not rely on a correlation intractability argument nor do we utilize fully homomorphic encryption techniques. Our solution provides a new methodology that adds to the diversity of techniques for solving this fundamental problem.
Last updated:  2024-02-29
Perfectly-Secure Multiparty Computation with Linear Communication Complexity over Any Modulus
Daniel Escudero, Yifan Song, and Wenhao Wang
Consider the task of secure multiparty computation (MPC) among $n$ parties with perfect security and guaranteed output delivery, supporting $t<n/3$ active corruptions. Suppose the arithmetic circuit $C$ to be computed is defined over a finite ring $\mathbb{Z}/q\mathbb{Z}$, for an arbitrary $q\in\mathbb{Z}$. It is known that this type of MPC over such a ring is possible, with communication that scales as $O(n|C|)$, assuming that $q$ scales as $\Omega(n)$. However, for constant-size rings $\mathbb{Z}/q\mathbb{Z}$ where $q = O(1)$, the communication is actually $O(n\log n|C|)$ due to the need of the so-called ring extensions. In most natural settings, the number of parties is variable but the ``datatypes'' used for the computation are fixed (e.g. 64-bit integers). In this regime, no protocol with linear communication was previously known. In this work we provide an MPC protocol in this setting: perfect security, G.O.D. and $t<n/3$ active corruptions, that enjoys linear communication $O(n|C|)$, even for constant-size rings $\mathbb{Z}/q\mathbb{Z}$. This includes as important particular cases small fields such as $\mathbb{F}_2$, and also the ring $\mathbb{Z}/2^k\mathbb{Z}$. The main difficulty in achieving this result is that widely used techniques such as linear secret-sharing cannot work over constant-size rings, and instead, one must make use of ring extensions that add $\Omega(\log n)$ overhead, while packing $\Omega(\log n)$ ring elements in each extension element in order to amortize this cost. We make use of reverse multiplication-friendly embeddings (RMFEs) for this packing, and adapt recent techniques in network routing (Goyal et al. CRYPTO'22) to ensure this can be efficiently used for non-SIMD circuits. Unfortunately, doing this naively results in a restriction on the minimum width of the circuit, which leads to an extra additive term in communication of $\mathsf{poly}(n)\cdot\mathsf{depth}(C)$. One of our biggest technical contributions lies in designing novel techniques to overcome this limitation by packing elements that are distributed across different layers. To the best of our knowledge, all works that have a notion of packing (e.g. RMFE or packed secret-sharing) group gates across the same layer, and not doing so, as in our work, leads to a unique set of challenges and complications.
Last updated:  2024-02-28
Garbled Circuit Lookup Tables with Logarithmic Number of Ciphertexts
David Heath, Vladimir Kolesnikov, and Lucien K. L. Ng
Garbled Circuit (GC) is a basic technique for practical secure computation. GC handles Boolean circuits; it consumes significant network bandwidth to transmit encoded gate truth tables, each of which scales with the computational security parameter $\kappa$. GC optimizations that reduce bandwidth consumption are valuable. It is natural to consider a generalization of Boolean two-input one-output gates (represented by $4$-row one-column lookup tables, LUTs) to arbitrary $N$-row $m$-column LUTs. Known techniques for this do not scale, with naive size-$O(Nm\kappa)$ garbled LUT being the most practical approach in many scenarios. Our novel garbling scheme -- logrow -- implements GC LUTs while sending only a number of ciphertexts logarithmic in $N$! Specifically, let $n = \lceil \log_2 N \rceil$. We allow the GC parties to evaluate a LUT for $(n-1)\kappa + nm\kappa + Nm$ bits of communication. logrow is compatible with modern GC advances, e.g. half gates and free XOR. Our work improves state-of-the-art GC handling of several interesting applications, such as privacy-preserving machine learning, floating-point arithmetic, and DFA evaluation.
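Plugging illustrative parameters into the costs quoted above shows the size of the saving; the short script below compares naive $O(Nm\kappa)$ garbling with logrow's $(n-1)\kappa + nm\kappa + Nm$ bits (parameter values are ours).

```python
# Back-of-envelope comparison using the communication costs quoted in the
# abstract. The (N, m) pairs below are illustrative choices of ours.
import math

def naive_bits(N, m, kappa=128):
    return N * m * kappa                      # naive garbled LUT

def logrow_bits(N, m, kappa=128):
    n = math.ceil(math.log2(N))
    return (n - 1) * kappa + n * m * kappa + N * m

for N, m in [(256, 8), (4096, 16), (65536, 32)]:
    print(N, m, naive_bits(N, m) // logrow_bits(N, m), "x smaller (approx.)")
```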
Last updated:  2024-02-28
Algorithms for Matrix Code and Alternating Trilinear Form Equivalences via New Isomorphism Invariants
Anand Kumar Narayanan, Youming Qiao, and Gang Tang
We devise algorithms for finding equivalences of trilinear forms over finite fields modulo linear group actions. Our focus is on two problems under this umbrella, Matrix Code Equivalence (MCE) and Alternating Trilinear Form Equivalence (ATFE), since their hardness is the foundation of the NIST round-$1$ signature candidates MEDS and ALTEQ respectively. We present new algorithms for MCE and ATFE, which are further developments of the algorithms for polynomial isomorphism and alternating trilinear form equivalence, in particular by Bouillaguet, Fouque, and Véber (Eurocrypt 2013), and Beullens (Crypto 2023). Key ingredients in these algorithms are new easy-to-compute distinguishing invariants under the respective group actions. For MCE, we associate new isomorphism invariants to corank-$1$ points of matrix codes, which lead to a birthday-type algorithm. We present empirical justifications that these isomorphism invariants are easy-to-compute and distinguishing, and provide an implementation of this algorithm. This algorithm has some implications for the security of MEDS. The invariant function for ATFE is similar, except it is associated with lower rank points. Modulo certain assumptions on turning the invariant function into canonical forms, our algorithm for ATFE improves on the runtime of the previously best known algorithm of Beullens (Crypto 2023). Finally, we present quantum variants of our classical algorithms with cubic runtime improvements.
Last updated:  2024-02-28
Tight Security of TNT and Beyond: Attacks, Proofs and Possibilities for the Cascaded LRW Paradigm
Ashwin Jha, Mustafa Khairallah, Mridul Nandi, and Abishanka Saha
Liskov, Rivest and Wagner laid the theoretical foundations for tweakable block ciphers (TBC). In a seminal paper, they proposed two (up to) birthday-bound secure design strategies --- LRW1 and LRW2 --- to convert any block cipher into a TBC. Several of the follow-up works consider cascading of LRW-type TBCs to construct beyond-the-birthday bound (BBB) secure TBCs. Landecker et al. demonstrated that just two-round cascading of LRW2 can already give a BBB security. Bao et al. undertook a similar exercise in context of LRW1 with TNT --- a three-round cascading of LRW1 --- that has been shown to achieve BBB security as well. In this paper, we present a CCA distinguisher on TNT that achieves a non-negligible advantage with $ O(2^{n/2}) $ queries, directly contradicting the security claims made by the designers. We provide a rigorous and complete advantage calculation coupled with experimental verification that further support our claim. Next, we provide new and simple proofs of birthday-bound CCA security for both TNT and its single-key variant, which confirm the tightness of our attack. On a more positive note, we show that adding just one more block cipher call, referred to as 4-LRW1, does not just re-establish the BBB security, but also amplifies it up to $ 2^{3n/4} $ queries. As a side-effect of this endeavour, we propose a new abstraction of the cascaded LRW-design philosophy, referred to as the LRW+ paradigm, comprising two block cipher calls sandwiched between a pair of tweakable universal hashes. This helps us to provide a modular proof covering all cascaded LRW constructions with at least $ 2 $ rounds, including 4-LRW1, and its more established relative, the well-known CLRW2, or more aptly, 2-LRW2.
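For concreteness, the TNT construction under attack is just three independently keyed permutations with the tweak XORed in between; the toy sketch below (with a shuffled array standing in for a block cipher) makes that explicit.

```python
# The TNT template in executable toy form: three independently keyed
# permutations with the tweak XORed in between (LRW1 cascaded three times).
# A seeded shuffle is our stand-in for a real block cipher such as AES.
import random

BITS = 16

def keyed_perm(key):
    rng = random.Random(key)          # toy keyed permutation, not a cipher
    p = list(range(2**BITS))
    rng.shuffle(p)
    return p

p1, p2, p3 = (keyed_perm(k) for k in (b"k1", b"k2", b"k3"))

def tnt(tweak: int, x: int) -> int:
    return p3[tweak ^ p2[tweak ^ p1[x]]]

print(hex(tnt(0x00AA, 0x1234)))
```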
Last updated:  2024-02-28
Accelerating SLH-DSA by Two Orders of Magnitude with a Single Hash Unit
Markku-Juhani O. Saarinen
We report on efficient and secure hardware implementation techniques for the FIPS 205 SLH-DSA Hash-Based Signature Standard. Our accelerator, SLotH, supports all 12 parameter sets of SLH-DSA. The configurable architecture contains Keccak/SHAKE, SHA2-256, and SHA2-512 cores, and can protect secret key material with side-channel secure PRF and Winternitz chains. We demonstrate that very significant performance gains can be obtained from hardware features that facilitate hash padding formats and iterative hashing specific to SLH-DSA. These features make SLH-DSA on SLotH many times faster compared to similarly-sized general-purpose hash accelerators. A small RISC-V control core executes the drivers, as is typical in RoT systems such as OpenTitan or Caliptra. Compared to unaccelerated microcontroller implementations, the performance of SLotH's SHAKE variants is up to $300\times$ faster; signature generation with 128f parameter set is 4,903,978 cycles, while signature verification with 128s parameter set is only 179,603 cycles. The SLH-DSA-SHA2 parameter sets have approximately half of the speed. We observe that the signature verification performance of SLH-DSA's ``s'' parameter sets is generally better than that of accelerated ECDSA or Dilithium on similarly-sized RoT targets. The area of the full SLotH system is small, from 63 kGE (SHA2, Cat 1 only) to 155 kGE (all parameter sets). Keccak Threshold Implementation adds another 130 kGE. We provide sensitivity analysis of SLH-DSA in relation to side-channel leakage. We show experimentally that an SLH-DSA implementation with CPU hashing will rapidly leak the SK.seed master key. We perform a 100,000-trace TVLA leakage assessment with a protected SLotH unit.
Last updated:  2024-02-28
PROTEUS: A Tool to generate pipelined Number Theoretic Transform Architectures for FHE and ZKP applications
Florian Hirner, Ahmet Can Mert, and Sujoy Sinha Roy
Number Theoretic Transform (NTT) is a fundamental building block in emerging cryptographic constructions like fully homomorphic encryption, post-quantum cryptography and zero-knowledge proof. In this work, we introduce Proteus, an open-source parametric hardware generator that produces pipelined architectures for the NTT. For a given parameter set including the polynomial degree and size of the coefficient modulus, Proteus can generate Radix-2 NTT architectures using Single-path Delay Feedback (SDF) and Multi-path Delay Commutator (MDC) approaches. We also present a detailed analysis of NTT implementation approaches and use several optimizations to achieve the best NTT configuration. Our evaluations demonstrate performance gains of up to 1.8× compared to SDF and MDC-based NTT implementations in the literature. Our SDF and MDC architectures use 1.75× and 6.5× less DSPs, and 3× and 10.5× less BRAMs, respectively, compared to state-of-the-art SDF and MDC-based NTT implementations.
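For reference, the computation such SDF/MDC pipelines implement is the standard radix-2 NTT; a plain software version is sketched below with toy parameters of our choosing (hardware specifics are, of course, the subject of the paper).

```python
# Software reference for what an SDF/MDC NTT pipeline computes: an in-place
# iterative radix-2 NTT over Z_q. Toy parameters (q = 257, n = 8) are ours.
def ntt(a, q, root):
    n = len(a)                       # n must be a power of two; root is a
    j = 0                            # primitive n-th root of unity mod q
    for i in range(1, n):            # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:               # Cooley-Tukey butterfly stages
        w_len = pow(root, n // length, q)
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % q
                a[k], a[k + length // 2] = (u + v) % q, (u - v) % q
                w = w * w_len % q
        length <<= 1
    return a

q, n = 257, 8                        # 257 is prime and n divides q - 1
root = pow(3, (q - 1) // n, q)       # 3 is a primitive root mod 257
print(ntt([1, 2, 3, 4, 0, 0, 0, 0], q, root))
```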
Last updated:  2024-02-28
Compute, but Verify: Efficient Multiparty Computation over Authenticated Inputs
Moumita Dutta, Chaya Ganesh, Sikhar Patranabis, and Nitin Singh
Traditional notions of secure multiparty computation (MPC) allow mutually distrusting parties to jointly compute a function over their private inputs, but typically do not specify how these inputs are chosen. Motivated by real-world applications where corrupt inputs could adversely impact privacy and operational legitimacy, we consider a notion of authenticated MPC where the inputs are authenticated, e.g., signed using a digital signature by some certification authority. We propose a generic and efficient compiler that transforms any linear secret sharing based honest-majority MPC protocol into one with input authentication. Our compiler incurs significantly lower computational costs and competitive communication overheads when compared to the best existing solutions, while entirely avoiding the (potentially expensive) protocol-specific techniques and pre-processing requirements that are inherent to these solutions. For $n$-party honest majority MPC protocols with abort security where each party has $\ell$ inputs, our compiler incurs $O(n\log \ell)$ communication overall and a computational overhead of $O(\ell)$ group exponentiations per party (the corresponding overheads for the most efficient existing solution are $O(n^2)$ and $O(\ell n)$). Finally, for a corruption threshold $t<n/3$, our compiler preserves the stronger identifiable abort security of the underlying MPC protocol. No existing solution for authenticated MPC achieves this regardless of the corruption threshold. Along the way, we make several technical contributions that are of independent interest. This includes the notion of distributed proofs of knowledge and concrete realizations of the same for several relations of interest, such as proving knowledge of many popularly used digital signature schemes, and proving knowledge of opening of a Pedersen commitment.
Last updated:  2024-02-28
PsiBench: Pragmatic Benchmark of Two-party Private Set Intersection.
Ziyuan Liang, Weiran Liu, Hanwen Feng, Feng Han, Liqiang Peng, Li Peng, Chao Li, Guorui Xu, Lei Zhang, and Fan Zhang
Private Set Intersection (PSI) allows two parties to obtain the intersection of their data sets while revealing nothing else. PSI is attractive in many scenarios and has wide applications in academia and industry. Over the last three decades, a large number of PSI protocols have been proposed using different cryptographic techniques, under different assumptions, for different scenarios. The inherent complexity, heterogeneous constructions, and rapid evolution of PSI protocols make it difficult to have a unified perspective and further promote the field. We make the following three contributions to present a pragmatic benchmark of two-party PSI. First, we propose the Map-and-Compare framework, which generalizes almost all efficient PSI constructions to date, and intuitively explains the idea and challenge of PSI constructions. Based on the framework, we divide existing proposals into several categories and perform a systematic analysis of the features and use cases of different PSI variants. Second, we present a Java-based benchmark library that implements almost all two-party PSI protocols (which are considered to define the state-of-the-art in terms of concrete performance) and supports rapid prototyping of new PSI protocols. Third, by using our library as a common ground, we provide a comprehensive and impartial comparison of all our PSI implementations in detail. We discuss the performance of different proposals in various settings.
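One concrete instance of the Map-and-Compare pattern is classic DDH-style PSI: both parties hash items into a group, blind them with secret exponents, and compare doubly-blinded values. The sketch below is a toy rendition with parameters and names of our own, not code from the benchmark library.

```python
# Toy DDH-style PSI: H(x)^(a*b) matches iff the items match. In the real
# protocol the two exponentiations happen on different parties; here we run
# both sides in one process. Group parameters are toy-sized and insecure.
import hashlib, secrets

P = 2**127 - 1          # toy prime; real protocols use proper DDH groups

def h2g(item: bytes) -> int:
    # Hash-to-group: hash, then square to land in the quadratic residues.
    return pow(int.from_bytes(hashlib.sha256(item).digest(), "big") % P, 2, P)

def psi(set_a, set_b):
    a, b = secrets.randbelow(P - 2) + 1, secrets.randbelow(P - 2) + 1
    blinded_a = {pow(h2g(x), a, P): x for x in set_a}   # A sends H(x)^a
    double_a = {pow(ga, b, P): x for ga, x in blinded_a.items()}  # B replies
    double_b = {pow(pow(h2g(y), b, P), a, P) for y in set_b}      # A raises
    return {x for gab, x in double_a.items() if gab in double_b}

print(psi([b"alice", b"bob", b"carol"], [b"bob", b"dave", b"carol"]))
```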
Last updated:  2024-02-28
$\mathsf{Mercury}$: Constant-Round Protocols for Multi-Party Computation with Rationals
Luke Harmon and Gaetan Delavignette
Most protocols for secure multi-party computation (MPC) work over fields or rings, which means that encoding techniques are needed to map rational-valued data into the algebraic structure being used. Leveraging an encoding technique introduced in recent work of Harmon et al. that is compatible with any MPC protocol over a prime-order field, we present Mercury - a family of protocols for addition, multiplication, subtraction, and division of rational numbers. Notably, the output of our division protocol is exact (i.e., it does not use iterative methods). Our protocols offer improvements in both round complexity and communication complexity when compared with prior art, and are secure against a dishonest minority of semi-honest parties.
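The style of encoding such protocols rely on can be sketched directly: a fraction $a/b$ with bounded numerator and denominator maps to $a \cdot b^{-1} \bmod p$, field arithmetic mirrors fraction arithmetic exactly, and decoding uses rational reconstruction. The code below is our simplified rendition of this general idea, not the paper's exact scheme.

```python
# Sketch of rational encoding into a prime field with exact decoding via
# rational reconstruction (extended Euclid). Simplified; our parameters.
from fractions import Fraction
from math import isqrt

p = (1 << 61) - 1                      # Mersenne-prime field
B = isqrt((p - 1) // 2)                # bound on numerator and denominator

def enc(fr: Fraction) -> int:
    return fr.numerator * pow(fr.denominator, -1, p) % p

def dec(x: int) -> Fraction:
    # Extended Euclid on (p, x), stopping at the first remainder <= B;
    # unique recovery is guaranteed because 2*B^2 < p.
    r0, r1, t0, t1 = p, x % p, 0, 1
    while r1 > B:
        q = r0 // r1
        r0, r1, t0, t1 = r1, r0 - q * r1, t1, t0 - q * t1
    return Fraction(r1, t1)            # Fraction normalizes the sign

a, b = Fraction(3, 7), Fraction(-2, 5)
assert dec((enc(a) + enc(b)) % p) == a + b              # exact addition
assert dec(enc(a) * enc(b) % p) == a * b                # exact multiplication
assert dec(enc(a) * pow(enc(b), -1, p) % p) == a / b    # exact division
```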
Last updated:  2024-02-28
Key Recovery Attack on the Partial Vandermonde Knapsack Problem
Dipayan Das and Antoine Joux
The Partial Vandermonde (PV) Knapsack problem is an algebraic variant of the low-density inhomogeneous SIS problem. The problem has been used as a building block for various lattice-based constructions, including signatures (ACNS'14, ACISP'18), encryptions (DCC'15,DCC'20), and signature aggregation (Eprint'20). At Crypto'22, Boudgoust, Gachon, and Pellet-Mary proposed a key distinguishing attack on the PV Knapsack exploiting algebraic properties of the problem. Unfortunately, their attack doesn't offer key recovery, except for worst-case keys. In this paper, we propose an alternative attack on the PV Knapsack problem, which provides key recovery for a much larger set of keys. Like the Crypto'22 attack, it is based on lattice reduction and uses a dimension reduction technique to speed up the underlying lattice reduction algorithm and enhance its performance. As a side bonus, our attack transforms the PV Knapsack problem into uSVP instances instead of SVP instances in the Crypto'22 attack. This also helps the lattice reduction algorithm, both from a theoretical and practical point of view. We use our attack to re-assess the hardness of the concrete parameters used in the literature. It appears that many contain a non-negligible fraction of weak keys, which are easily identified and extremely susceptible to our attack. For example, a fraction of $2^{-19}$ of the public keys of a parameter set from ACISP'18 can be solved in about $30$ hours on a moderate server using off-the-shelf lattice reduction. This parameter set was initially claimed to have $129$-bit security against key recovery attacks. Its security was reduced to $87$-bit security using the distinguishing attack from Crypto'22. Similarly, the ACNS'14 proposal also includes a parameter set containing a fraction of $2^{-19}$ of weak keys; those can be solved in about $17$ hours.
Last updated:  2024-02-28
Combined Threshold Implementation
Jakob Feldtkeller, Jan Richter-Brockmann, Pascal Sasdrich, and Tim Güneysu
Physical security is an important aspect of devices for which an adversary can manipulate the physical execution environment. Recently, more and more attention has been directed towards a security model that combines the capabilities of passive and active physical attacks, i.e., an adversary that performs fault-injection and side-channel analysis at the same time. Implementing countermeasures against such a powerful adversary is not only costly but also requires the skillful combination of masking and redundancy to counteract all reciprocal effects. In this work, we propose a new methodology to generate combined-secure circuits. We show how to transform TI-like constructions to resist any adversary with the capability to tamper with internal gates and probe internal wires. For the resulting protection scheme, we can prove combined security in a well-established theoretical security model. Since the transformation preserves the advantages of TI-like structures, the resulting circuits prove to be more efficient than the state of the art for an adversary restricted to manipulating a single gate and probing a single wire: they require up to 100% fewer bits of randomness, up to 40% less latency in clock cycles, and up to 40% less area for pipelined designs.
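For intuition on the masking side of the problem, the sketch below shows a standard two-share ISW-style AND gadget. It illustrates passive (probing) security only, not the combined transformation of this work, which additionally has to counteract fault injection:

```python
import secrets

def mask(bit: int):
    r = secrets.randbelow(2)
    return [r, bit ^ r]  # two shares that XOR back to the bit

def unmask(shares):
    return shares[0] ^ shares[1]

def masked_and(x, y):
    # Two-share ISW AND: fresh randomness r refreshes the output sharing so
    # no single intermediate value is correlated with both unmasked inputs.
    r = secrets.randbelow(2)
    z0 = (x[0] & y[0]) ^ r
    z1 = (x[1] & y[1]) ^ r ^ (x[0] & y[1]) ^ (x[1] & y[0])
    return [z0, z1]

for a in (0, 1):
    for b in (0, 1):
        assert unmask(masked_and(mask(a), mask(b))) == a & b
```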
Last updated:  2024-02-28
Algebraic Algorithm for the Alternating Trilinear Form Equivalence Problem
Lars Ran, Simona Samardjiska, and Monika Trimoska
The Alternating Trilinear Form Equivalence (ATFE) problem was recently used by Tang et al. as a hardness assumption in the design of a Fiat-Shamir digital signature scheme ALTEQ. The scheme was submitted to the additional round for digital signatures of the NIST standardization process for post-quantum cryptography. ATFE is a hard equivalence problem known to be in the class of equivalence problems that includes, for instance, the Tensor Isomorphism (TI), Quadratic Maps Linear Equivalence (QMLE) and the Matrix Code Equivalence (MCE) problems. Due to the increased cryptographic interest, the understanding of its practical hardness has also increased in the last couple of years. Currently, there are several combinatorial and algebraic algorithms for solving it, the best of which is a graph-theoretic algorithm that also includes an algebraic subroutine. In this paper, we take a purely algebraic approach to the ATFE problem, but we use a coding theory perspective to model the problem. This modelling was introduced earlier for the MCE problem. Using it, we improve the cost of algebraic attacks against ATFE compared to previously known ones. Taking into account the algebraic structure of alternating trilinear forms, we show that the obtained system has fewer variables but also fewer equations than for MCE and gives rise to structural degree-3 syzygies. Under the assumption that outside of these syzygies the system behaves semi-regularly, we provide a concrete, non-asymptotic complexity estimate of the performance of our algebraic attack. Our results show that the complexity of our attack is below the estimated security levels of ALTEQ by more than 20 bits for NIST level I (and more for the others); thus, the scheme requires re-parametrization for all three NIST security levels.
Last updated:  2024-02-28
Time-Averaged Analysis of Selfish Mining in Bitcoin
Roozbeh Sarenche, Ren Zhang, Svetla Nikova, and Bart Preneel
A Bitcoin miner who owns a sufficient amount of mining power can perform selfish mining to increase his relative revenue. Studies have demonstrated that the time-averaged profit of a selfish miner starts to rise once the mining difficulty level gets adjusted in favor of the attacker. Selfish mining is profitable because orphan blocks are not incorporated into the current version of Bitcoin's difficulty adjustment mechanism (DAM). Therefore, it is believed that taking the count of orphan blocks into account in the DAM can render selfish mining unprofitable. In this paper, we disprove this belief by providing a formal analysis of the selfish mining time-averaged profit. We present a precise definition of the orphan blocks that can be incorporated into calculating the next epoch's target and then introduce two modified versions of the DAM in which both main-chain blocks and orphan blocks are incorporated. We propose two versions of smart intermittent selfish mining, where the first one dominates normal intermittent selfish mining and the second one results in selfish mining profitability under the modified DAMs. Moreover, we present the orphan exclusion attack, with which the attacker can stop honest miners from reporting the orphan blocks. Using combinatorial tools, we analyze the profitability of selfish mining accompanied by the orphan exclusion attack under the modified DAMs. Our results show that even when considering the orphan blocks in the DAM, normal selfish mining can still be profitable; however, the level of profitability under the modified DAMs is significantly lower than that observed under the current version of the Bitcoin DAM.
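The classic Eyal-Sirer selfish-mining state machine is easy to simulate. The sketch below estimates the attacker's relative revenue under the textbook strategy and tie-breaking parameter gamma; it models the standard attack, not the modified DAMs or the intermittent strategies analyzed in this paper, and the function name is illustrative:

```python
import random

def selfish_relative_revenue(alpha, gamma, n_blocks=2_000_000, seed=7):
    rng = random.Random(seed)
    lead, race = 0, False   # attacker's private lead; race == Eyal-Sirer state 0'
    a_rev = h_rev = 0
    for _ in range(n_blocks):
        attacker = rng.random() < alpha
        if race:
            if attacker:
                a_rev += 2                   # attacker extends own fork: wins both
            elif rng.random() < gamma:
                a_rev += 1; h_rev += 1       # honest miner extends attacker's fork
            else:
                h_rev += 2                   # honest fork wins the race
            race = False
        elif attacker:
            lead += 1                        # keep the new block private
        elif lead == 0:
            h_rev += 1                       # honest block on the public chain
        elif lead == 1:
            lead, race = 0, True             # publish and race (state 0')
        elif lead == 2:
            a_rev += 2; lead = 0             # publish both: override honest block
        else:
            a_rev += 1; lead -= 1            # reveal one block, stay ahead
    return a_rev / (a_rev + h_rev)

print(selfish_relative_revenue(0.35, 0.5))   # > 0.35: selfish mining pays here
```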
Last updated:  2024-02-28
Lin2-Xor Lemma: an OR-proof that leads to the membership proof and signature
Anton A. Sokolov
In this paper we introduce a novel two-round public-coin OR-proof protocol that extends in a natural way to a log-size membership proof and signature in a prime-order group. In the lemma called Lin2-Xor we prove that our OR-proof is perfectly complete and has witness-extended emulation under the discrete logarithm assumption. We derive from it a log-size one-out-of-many proof, which retains the perfect completeness and witness-extended emulation. Both our OR and membership proofs easily acquire the special honest-verifier zero-knowledge property under the decisional Diffie-Hellman assumption. We sketch out a setup-free pairings-free log-size linkable ring signature with a strong security model on top of our membership proof. Many recently proposed discrete-log setup-free pairings-free log-size ring signatures are based on the ideas of the commitment-to-zero proving system by Groth and Kohlweiss or on the Bulletproofs inner-product compression method by Bünz et al. Our Lin2-Xor lemma provides an alternative technique which, using a general reduction similar to Bulletproofs, leads directly to a log-size linkable ring signature under the same prerequisites.
Last updated:  2024-02-28
Early Stopping for Any Number of Corruptions
Julian Loss and Jesper Buus Nielsen
Minimizing the round complexity of Byzantine broadcast is a fundamental question in distributed computing and cryptography. In this work, we present the first early stopping Byzantine broadcast protocol that tolerates up to $t=n-1$ malicious corruptions and terminates in $O(\min\{f^2,t+1\})$ rounds for any execution with $f\leq t$ actual corruptions. Our protocol is deterministic, adaptively secure, and works assuming a plain public key infrastructure. Prior early-stopping protocols all either require an honest majority or tolerate only up to $t=(1-\epsilon)n$ malicious corruptions while requiring either trusted setup or strong number-theoretic hardness assumptions. As our key contribution, we show a novel tool called a polariser that allows us to transfer certificate-based strategies from the honest majority setting to settings with a dishonest majority.
Last updated:  2024-02-28
Integrating Causality in Messaging Channels
Shan Chen and Marc Fischlin
Causal reasoning plays an important role in the comprehension of communication, but it has been elusive so far how causality should be properly preserved by instant messaging services. To the best of our knowledge, causality preservation is not even treated as a desired security property by most (if not all) existing secure messaging protocols like Signal. This is probably due to the intuition that causality seems already preserved when all received messages are intact and displayed according to their sending order. Our starting point is to notice that this intuition is wrong. Until now, for messaging channels (where conversations take place), both the proper causality model and the provably secure constructions have been left open. Our work fills this gap, with the goal of facilitating the formal understanding of causality preservation in messaging. First, we focus on the common two-user secure messaging channels and model the desired causality preservation property. We take the popular Signal protocol as an example and analyze the causality security of its cryptographic core (the double-ratchet mechanism). We show its inadequacy with a simple causality attack, then fix it such that the resulting Signal channel is causality-preserving, even in a strong sense that guarantees post-compromise security. Our fix is actually generic: it can be applied to any bidirectional channel to gain strong causality security. Then, we model causality security for the so-called message franking channels. Such a channel additionally enables end users to report individual abusive messages to a server (e.g., the service provider), where this server relays the end-to-end-encrypted communication between users. Causality security in this setting further allows the server to retrieve the necessary causal dependencies of each reported message, essentially extending isolated reported messages to message flows. This has great security merit for dispute resolution, because a benign message may be deemed abusive when isolated from the context. As an example, we apply our model to analyze Facebook’s message franking scheme. We show that a malicious user can easily trick Facebook (i.e., the server) into accusing an innocent user. Then we fix this issue by amending the underlying message franking channel to preserve the desired causality.
Last updated:  2024-02-28
On cubic-like bent Boolean functions
Claude Carlet and Irene Villa
Cubic bent Boolean functions (i.e., bent functions of algebraic degree at most 3) have the property that, for every nonzero element $a$ of $\mathbb{F}_2^n$, the derivative $D_af(x)=f(x)+f(x+a)$ of $f$ admits at least one second-order derivative $D_bD_af(x)=f(x)+f(x+a)+f(x+b)+f(x+a+b)$ that is equal to the constant function 1. We study the general class of those Boolean functions having this property, which we call cubic-like bent. We study the properties of such functions and the structure of their constant second-order derivatives. We characterize them by means of their Walsh transform (that is, by their duals), by the Walsh transform of their derivatives and by other means. We study them within the Maiorana-McFarland class of bent functions, providing characterizations and constructions and showing the existence of cubic-like bent functions of any algebraic degree between 2 and $\frac{n}{2}$.
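The defining property can be tested exhaustively for small $n$; the sketch below verifies it for the quadratic (hence cubic-like bent) function $f(x_1,x_2,y_1,y_2) = x_1y_1 + x_2y_2$ on $\mathbb{F}_2^4$:

```python
from itertools import product

def f(v):  # quadratic bent function on F_2^4: x1*y1 XOR x2*y2
    x1, x2, y1, y2 = v
    return (x1 & y1) ^ (x2 & y2)

def xor(u, v):
    return tuple(a ^ b for a, b in zip(u, v))

pts = list(product((0, 1), repeat=4))

def second_deriv_constant_one(a, b):
    # D_b D_a f(x) = f(x) + f(x+a) + f(x+b) + f(x+a+b), checked for every x
    return all(f(x) ^ f(xor(x, a)) ^ f(xor(x, b)) ^ f(xor(xor(x, a), b)) == 1
               for x in pts)

# every nonzero a admits some b with D_b D_a f identically 1
assert all(any(second_deriv_constant_one(a, b) for b in pts)
           for a in pts if any(a))
print("f is cubic-like bent")
```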
Last updated:  2024-02-28
Key Exchange with Tight (Full) Forward Secrecy via Key Confirmation
Jiaxin Pan, Doreen Riepel, and Runzhi Zeng
Weak forward secrecy (wFS) of authenticated key exchange (AKE) protocols is a passive variant of (full) forward secrecy (FS). A natural mechanism to upgrade from wFS to FS is the use of key confirmation messages which compute a message authentication code (MAC) over the transcript. Unfortunately, Gellert, Gjøsteen, Jacobson and Jager (GGJJ, CRYPTO 2023) show that this mechanism inherently incurs a loss proportional to the number of users, leading to an overall non-tight reduction, even if wFS was established using a tight reduction. Inspired by GGJJ, we propose a new notion, called one-way verifiable weak forward secrecy (OW-VwFS), and prove that OW-VwFS can be transformed tightly to FS using key confirmation in the random oracle model (ROM). To implement our generic transformation, we show that several tightly wFS AKE protocols additionally satisfy our OW-VwFS notion tightly. We highlight that using the recent lattice-based protocol from Pan, Wagner, and Zeng (CRYPTO 2023) can give us the first lattice-based tightly FS AKE via key confirmation in the classical random oracle model. Besides this, we also obtain a Decisional-Diffie-Hellman-based protocol that is considerably more efficient than the previous ones. Finally, we lift our study on FS via key confirmation to the quantum random oracle model (QROM). While our security reduction is overall non-tight, it matches the best existing bound for wFS in the QROM (Pan, Wagner, and Zeng, ASIACRYPT 2023), namely, it is square-root- and session-tight. Our analysis is in the multi-challenge setting, which is more realistic than the single-challenge setting of Pan et al.
Last updated:  2024-02-28
The NISQ Complexity of Collision Finding
Yassine Hamoudi, Qipeng Liu, and Makrand Sinha
Collision-resistant hashing, a fundamental primitive in modern cryptography, ensures that there is no efficient way to find distinct inputs that produce the same hash value. This property underpins the security of various cryptographic applications, making it crucial to understand its complexity. The complexity of this problem is well-understood in the classical setting and $\Theta(N^{1/2})$ queries are needed to find a collision. However, the advent of quantum computing has introduced new challenges since quantum adversaries - equipped with the power of quantum queries - can find collisions much more efficiently. Brassard, Høyer, and Tapp, and Aaronson and Shi established that full-scale quantum adversaries require $\Theta(N^{1/3})$ queries to find a collision, prompting a need for longer hash outputs, which impacts efficiency in terms of the key lengths needed for security. This paper explores the implications of quantum attacks in the Noisy-Intermediate Scale Quantum (NISQ) era. In this work, we investigate three different models for NISQ algorithms and achieve tight bounds for all of them: (1) A hybrid algorithm making adaptive quantum or classical queries but with a limited quantum query budget, or (2) A quantum algorithm with access to a noisy oracle, subject to a dephasing or depolarizing channel, or (3) A hybrid algorithm with an upper bound on its maximum quantum depth; i.e., a classical algorithm aided by low-depth quantum circuits. In fact, our results handle all regimes between NISQ and full-scale quantum computers. Previously, only results for the pre-image search problem were known for these models, due to Sun and Zheng, Rosmanis, and Chen, Cotler, Huang, and Li, while nothing was known about the collision finding problem. Along with our main results, we develop an information-theoretic framework for recording query transcripts of quantum-classical algorithms. The main feature of this framework is that it allows us to record queries in two incompatible bases - classical queries in the standard basis and quantum queries in the Fourier basis - consistently. We call the framework the hybrid compressed oracle as it naturally interpolates between the classical way of recording queries and the compressed oracle framework of Zhandry for recording quantum queries.
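For calibration, the classical $\Theta(N^{1/2})$ baseline is the plain birthday attack; a toy run on a truncated hash, where truncation only serves to make collisions cheap to find:

```python
import hashlib
import secrets

def h(x: bytes, nbits: int = 24) -> int:
    # truncated SHA-256: a toy random function with N = 2^nbits outputs
    return int.from_bytes(hashlib.sha256(x).digest(), "big") >> (256 - nbits)

def birthday_collision(nbits: int = 24):
    seen, queries = {}, 0
    while True:
        x = secrets.token_bytes(8)
        queries += 1
        y = h(x, nbits)
        if y in seen and seen[y] != x:
            # expected ~ sqrt(pi/2 * 2^nbits) queries by the birthday bound
            return queries, seen[y], x
        seen[y] = x

q, x1, x2 = birthday_collision()
assert h(x1) == h(x2) and x1 != x2
print(f"collision after {q} queries (sqrt(N) = {2**12})")
```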
Last updated:  2024-02-28
Key-Recovery Attack on a Public-Key Encryption Related to Planted Clique
Caicai Chen and Chris Jones
Hudoba proposed a public key encryption (PKE) scheme and conjectured its security to be based on the Planted Clique problem. In this note, we show that this scheme is not secure. We do so by devising an efficient algorithm for the even neighbor independent set problem proposed by Hudoba. This leaves open the possibility of building PKE based on Planted Clique.
Last updated:  2024-02-28
FlexHi: A Flexible Hierarchical Threshold Signature Scheme
Muhammed Ali Bingol, Sermin Kocaman, Ali Dogan, and Sibel Kurt Toplu
Threshold signature schemes have gained prominence in enhancing the security and flexibility of digital signatures, allowing a group of participants to collaboratively create signatures while maintaining a predefined threshold of participants for validity. However, conventional threshold signatures treat all participants equally, lacking the capability to accommodate hierarchical structures often seen in real-world applications. Hierarchical Threshold Signature Schemes (HTSS) naturally extend the concept of simple threshold signatures, offering a solution that aligns with hierarchical organizational structures. Our paper introduces a novel, efficient, and flexible HTSS that employs independent polynomials at each hierarchical level, removing limitations on threshold values. This adaptability enables us to tailor the scheme to diverse requirements, whether signing requires only top-level nodes or lower-level participants' involvement. Based on our analysis, FlexHi integrated into the FROST scheme outperforms Tassa's hierarchical scheme on FROST, operating approximately 30% to 40% faster depending on the number of participants and the chosen threshold values. This demonstrates that, in addition to flexibility, our scheme has practical benefits through improved performance.
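The per-level composition is the paper's own contribution; the sketch below shows only the underlying primitive that "independent polynomials at each hierarchical level" instantiates, namely Shamir sharing with a freely chosen threshold per level (field size and function names are illustrative):

```python
import secrets

P = 2**127 - 1  # Mersenne prime field; illustration-size only

def shamir_share(secret: int, t: int, n: int):
    # random degree-(t-1) polynomial with constant term = secret
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(points):
    # Lagrange interpolation at 0
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret = 123456789
# e.g. a 2-of-3 top level and an independent 3-of-5 lower level
assert reconstruct(shamir_share(secret, 2, 3)[:2]) == secret
assert reconstruct(shamir_share(secret, 3, 5)[:3]) == secret
```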
Last updated:  2024-02-28
Signatures with Memory-Tight Security in the Quantum Random Oracle Model
Keita Xagawa
Memory tightness of reductions in cryptography, in addition to the standard tightness related to advantage and running time, is important when the underlying problem can be solved efficiently with large memory, as discussed in Auerbach, Cash, Fersch, and Kiltz (CRYPTO 2017). Diemert, Gellert, Jager, and Lyu (ASIACRYPT 2021) and Ghoshal, Ghosal, Jaeger, and Tessaro (EUROCRYPT 2022) gave memory-tight proofs for the multi-challenge security of digital signatures in the random oracle model. This paper studies memory-tight reductions for _post-quantum_ signature schemes in the _quantum_ random oracle model. Concretely, we show that signature schemes from lossy identification are multi-challenge secure in the quantum random oracle model via memory-tight reductions. Moreover, we show that the signature schemes from lossy identification achieve more enhanced securities considering _quantum_ signing oracles proposed by Boneh and Zhandry (CRYPTO 2013) and Alagic, Majenz, Russell, and Song (EUROCRYPT 2020). We additionally show that signature schemes from preimage-sampleable functions achieve those securities via memory-tight reductions.
Last updated:  2024-02-28
Stateless Deterministic Multi-Party EdDSA Signatures with Low Communication
Qi Feng, Kang Yang, Kaiyi Zhang, Xiao Wang, Yu Yu, Xiang Xie, and Debiao He
EdDSA, standardized by both IRTF and NIST, is a variant of the well-known Schnorr signature based on Edwards curves, and enjoys the benefit of statelessly and deterministically deriving nonces (i.e., it does not require a reliable source of randomness or state continuity). Recently, NIST has called for multi-party threshold EdDSA signatures with one mode deriving nonces statelessly and deterministically and verifying such derivation via zero-knowledge (ZK) proofs. Multi-party full-threshold EdDSA signatures in the dishonest-majority malicious setting have the advantage of a strong security guarantee, and in particular cover the two-party case. However, it is challenging to translate the stateless and deterministic benefit of EdDSA to the multi-party setting, as no fresh randomness is available for the protocol execution. We present the notion of information-theoretic message authentication codes (IT-MACs) over groups in the multi-verifier setting, and adopt the recent pseudorandom correlation function (PCF) to generate IT-MACs statelessly and deterministically. Furthermore, we generalize the two-party IT-MACs-based ZK protocol by Baum et al. (Crypto'21) into the multi-verifier setting, which may be of independent interest. Together with multi-verifier extended doubly-authenticated bits (mv-edabits) with errors, we design a multi-verifier zero-knowledge (MVZK) protocol to derive nonces statelessly and deterministically. Building upon the MVZK protocol, we propose a stateless deterministic multi-party EdDSA signature, tolerating all-but-one malicious corruptions. Compared to the state-of-the-art multi-party EdDSA signature by Garillot et al. (Crypto'21), we improve communication cost by a factor of $61\times$, at the cost of increasing computation cost by about $2.25\times$ and requiring three extra rounds.
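The stateless-deterministic rule the parties must jointly emulate is the single-signer one from RFC 8032: the nonce is a hash of a secret prefix and the message, so no RNG or state continuity is needed. A single-party sketch of that rule follows (the point of the paper is computing it under MPC and verifying it with the MVZK protocol; the prefix value here is a stand-in):

```python
import hashlib

L = 2**252 + 27742317777372353535851937790883648493  # Ed25519 group order

def deterministic_nonce(prefix: bytes, message: bytes) -> int:
    # RFC 8032: r = SHA-512(prefix || M) mod L; the same message always
    # yields the same nonce, so no randomness source is required
    digest = hashlib.sha512(prefix + message).digest()
    return int.from_bytes(digest, "little") % L

k = bytes(32)  # stand-in for the secret hash prefix derived from the signing key
assert deterministic_nonce(k, b"msg") == deterministic_nonce(k, b"msg")
assert deterministic_nonce(k, b"msg") != deterministic_nonce(k, b"msg2")
```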
Last updated:  2024-02-28
Aloha-HE: A Low-Area Hardware Accelerator for Client-Side Operations in Homomorphic Encryption
Florian Krieger, Florian Hirner, Ahmet Can Mert, and Sujoy Sinha Roy
Homomorphic encryption (HE) has gained broad attention in recent years as it allows computations on encrypted data, enabling secure cloud computing. Deploying HE presents a notable challenge since it introduces a performance overhead of orders of magnitude. Hence, most works target accelerating server-side operations on hardware platforms, while little attention has been given to client-side operations. In this paper, we present a novel design methodology to implement and accelerate the client-side HE operations on area-constrained hardware. We show how to design an optimized floating-point unit tailored for the encoding of complex values. In addition, we introduce a novel hardware-friendly algorithm for modulo-reduction of floating-point numbers and propose various concepts for achieving efficient resource sharing between modular ring and floating-point arithmetic. Finally, we use this methodology to implement an end-to-end hardware accelerator, Aloha-HE, for the client-side operations of the CKKS scheme. In contrast to existing work, Aloha-HE supports both encoding and encryption and their counterparts within a unified architecture. Aloha-HE achieves a speedup of up to 59x compared to prior hardware solutions.
Last updated:  2024-02-28
Efficacy and Mitigation of the Cryptanalysis on AIM
Seongkwang Kim, Jincheol Ha, Mincheol Son, and Byeonghak Lee
Recent advancements in post-quantum cryptography have highlighted signature schemes based on the MPC-in-the-Head (MPCitH) framework due to their reliance only on the one-way function of the underlying primitive. This reliance offers a diverse set of assumptions regarding the difficulty of post-quantum cryptographic problems. In this context, Kim et al. proposed $\mathsf{AIM}$, an MPCitH-compatible one-way function. This function is distinguished by its large algebraic S-boxes and parallel architecture, contributing to the reduced size of signatures, as presented at CCS 2023. However, $\mathsf{AIM}$ has faced several cryptanalytic challenges, which have potentially weakened its security by up to 15 bits. This paper provides a comprehensive overview of these cryptanalytic methods and proposes $\mathsf{AIM2}$, an enhanced version that addresses the identified vulnerabilities. We conduct an extensive analysis of its resilience to algebraic attacks and detail the impact of the modifications on its efficiency.
Last updated:  2024-02-28
Security analysis of the iMessage PQ3 protocol
Douglas Stebila
The iMessage PQ3 protocol is an end-to-end encrypted messaging protocol designed for exchanging data in long-lived sessions between two devices. It aims to provide classical and post-quantum confidentiality for forward secrecy and post-compromise secrecy, as well as classical authentication. Its initial authenticated key exchange is constructed from digital signatures plus elliptic curve Diffie–Hellman and post-quantum key exchanges; to derive per-message keys on an ongoing basis, it employs an adaptation of the Signal double ratchet that includes a post-quantum key encapsulation mechanism. This paper presents the cryptographic details of the PQ3 protocol and gives a reductionist security analysis by adapting the multi-stage key exchange security analysis of Signal by Cohn-Gordon et al. (J. Cryptology, 2020). The analysis shows that PQ3 provides confidentiality with forward secrecy and post-compromise security against both classical and quantum adversaries, in both the initial key exchange as well as the continuous rekeying phase of the protocol.
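The hybrid pattern at the core of such designs is simple: the classical and post-quantum shared secrets are both fed into one key derivation, so the derived key stays secret as long as either assumption holds. A minimal HKDF-extract-style sketch under that idea; the actual PQ3 key schedule is as specified in the paper, and the function names here are illustrative:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869) with SHA-256
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hybrid_session_key(ecdh_secret: bytes, kem_secret: bytes,
                       transcript_hash: bytes) -> bytes:
    # Concatenating both secrets means an attacker who breaks ECDH still
    # faces the post-quantum KEM secret, and vice versa.
    return hkdf_extract(transcript_hash, ecdh_secret + kem_secret)

key = hybrid_session_key(b"\x01" * 32, b"\x02" * 32,
                         hashlib.sha256(b"transcript").digest())
print(key.hex())
```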
Last updated:  2024-02-27
On Central Primitives for Quantum Cryptography with Classical Communication
Kai-Min Chung, Eli Goldin, and Matthew Gray
Recent work has introduced the "Quantum-Computation Classical-Communication" (QCCC) setting for cryptography (Chung et al.). There has been some evidence that One-Way Puzzles (OWPuzz) are the natural central cryptographic primitive for this setting (Khurana and Tomer). For a primitive to be considered central, it should have several characteristics: it should be well behaved (which for this paper we take to mean having amplification, combiners, and universal constructions); it should be implied by a wide variety of other primitives; and it should be equivalent to some class of useful primitives. We present combiners, correctness and security amplification, and a universal construction for OWPuzz. Our proof of security amplification uses a new and cleaner construction of EFI from OWPuzz (in comparison to the result of Khurana and Tomer) that generalizes to weak OWPuzz and is the most technically involved section of the paper. It was previously known that OWPuzz are implied by other primitives of interest, including commitments, symmetric-key encryption, one-way state generators (OWSG), and therefore pseudorandom states (PRS). However, we are able to rule out OWPuzz's equivalence to many of these primitives by showing a black-box separation between general OWPuzz and a restricted class of OWPuzz (those with efficient verification, which we call EV-OWPuzz). We then show that EV-OWPuzz are also implied by most of these primitives, which separates them from OWPuzz as well. This separation also separates extending PRS from highly compressing PRS, answering an open question of Ananth et al.
Last updated:  2024-02-27
Adaptively Secure Streaming Functional Encryption
Pratish Datta, Jiaxin Guan, Alexis Korb, and Amit Sahai
This paper introduces the first adaptively secure streaming functional encryption (sFE) scheme for P/Poly. sFE stands as an evolved variant of traditional functional encryption (FE), catering specifically to contexts with vast and/or dynamically evolving data sets. sFE is designed for applications where data arrives in a streaming fashion and is computed on in an iterative manner as the stream arrives. Unlike standard FE, in sFE: (1) encryption is possible without knowledge of the full data set, (2) partial decryption is possible given only a prefix of the input. Guan, Korb, and Sahai introduced this concept in their recent publication [CRYPTO 2023], where they constructed an sFE scheme for P/Poly using a compact standard FE scheme for the same. However, their sFE scheme only achieved semi-adaptive-function-selective security, which constrains the adversary to obtain all functional keys prior to seeing any ciphertext for the challenge stream. This limitation severely restricts the scenarios where sFE can be applied and therefore fails to provide a suitable theoretical basis for sFE. In contrast, the adaptive security model empowers the adversary to arbitrarily interleave requests for functional keys with ciphertexts related to the challenge stream. Guan, Korb, and Sahai identified achieving adaptive security for sFE as the key question left open by their work. We resolve this open question positively by constructing an adaptively secure sFE construction from indistinguishability obfuscation for P/Poly and injective PRGs. By combining our work with that of Jain, Lin, and Sahai [STOC 2021, EUROCRYPT 2022], we obtain the first adaptively secure sFE scheme for P/Poly based on the sub-exponential hardness of well-studied computational problems.
Last updated:  2024-02-27
WARPfold : Wrongfield ARithmetic for Protostar folding
Lev Soukhanov
Inspired by the range-check trick from the recent LatticeFold paper, we construct an elliptic-curve-based IVC capable of simulating non-native arithmetic efficiently. We explain the general principle (which can be applied to both Protostar and Hypernova) and describe the Wrongfield ARithmetic for Protostar folding in detail. Our construction supports circuits over multiple non-native fields simultaneously and allows interfacing between them using range-checked elements. WARPfold can be used to warp between different proof systems and to construct folding schemes over curves not admitting a dual partner (such as BLS12-381).
Last updated:  2024-02-27
Fully Malicious Authenticated PIR
Marian Dietz and Stefano Tessaro
Authenticated PIR enables a server to initially commit to a database of $N$ items, for which a client can later privately obtain individual items with complexity sublinear in $N$, with the added guarantee that the retrieved item is consistent with the committed database. A crucial requirement is privacy with abort, i.e., the server should not learn anything about a query even if it learns whether the client aborts. This problem was recently considered by Colombo et al. (USENIX '23), who proposed solutions secure under the assumption that the database is committed to honestly. Here, we close this gap for their DDH-based scheme, and present a solution that tolerates fully malicious servers that provide potentially malformed commitments. Our scheme has communication and client computational complexity $\mathcal{O}_{\lambda}(\sqrt{N})$, does not require any additional assumptions, and does not introduce heavy machinery (e.g., generic succinct proofs). We do so by introducing validation queries, which, from the server's perspective, are computationally indistinguishable from regular PIR queries. Provided that the server succeeds in correctly answering $\kappa$ such validation queries, the client is convinced with probability $1-\frac{1}{2^\kappa}$ that the server is unable to break privacy with abort.
Last updated:  2024-02-27
Funshade: Function Secret Sharing for Two-Party Secure Thresholded Distance Evaluation
Alberto Ibarrondo, Hervé Chabanne, and Melek Önen
We propose a novel privacy-preserving, two-party computation of various distance metrics (e.g., Hamming distance, Scalar Product) followed by a comparison with a fixed threshold, which is known as one of the most useful and popular building blocks for many different applications including machine learning, biometric matching, etc. Our solution builds upon recent advances in function secret sharing and makes use of an optimized version of arithmetic secret sharing. Thanks to this combination, our new solution, named Funshade, is the first to require only one round of communication and two ring elements of communication in the online phase, outperforming all prior state-of-the-art schemes while relying on lightweight cryptographic primitives. Lastly, we implement our solution from scratch in portable C and expose it in Python, attesting to its high performance by running secure biometric identification against a database of 1 million records in ∼10 seconds with full correctness and 32-bit precision, without parallelization.
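For orientation, the toy sketch below computes a secret-shared scalar product with Beaver triples and only then opens the result for the threshold test; the point of a scheme like Funshade is to replace that final opening with a function-secret-sharing comparison so that only the bit "distance above threshold" is revealed, in one online round. The sketch is generic two-party arithmetic sharing, not the paper's protocol:

```python
import secrets

R = 2**32  # toy ring Z_{2^32}

def share(x):  # additive two-party sharing
    r = secrets.randbelow(R)
    return r, (x - r) % R

def open_(sh):
    return (sh[0] + sh[1]) % R

def beaver_mul(x_sh, y_sh):
    # dealer-provided triple a*b = c (precomputed, input-independent)
    a, b = secrets.randbelow(R), secrets.randbelow(R)
    a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % R)
    d = open_(((x_sh[0] - a_sh[0]) % R, (x_sh[1] - a_sh[1]) % R))  # opens x - a
    e = open_(((y_sh[0] - b_sh[0]) % R, (y_sh[1] - b_sh[1]) % R))  # opens y - b
    z0 = (c_sh[0] + d * b_sh[0] + e * a_sh[0] + d * e) % R
    z1 = (c_sh[1] + d * b_sh[1] + e * a_sh[1]) % R
    return z0, z1  # shares of x * y

x, y, theta = [3, 1, 4], [2, 7, 1], 10
acc = (0, 0)
for xi, yi in zip(x, y):
    zi = beaver_mul(share(xi), share(yi))
    acc = ((acc[0] + zi[0]) % R, (acc[1] + zi[1]) % R)
print(open_(acc) >= theta)  # scalar product 17 >= 10 -> True
```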
Last updated:  2024-02-27
Lower Bounds for Lattice-based Compact Functional Encryption
Erkan Tairi and Akın Ünal
Functional encryption (FE) is a primitive where the holder of a master secret key can control which functions a user can evaluate on encrypted data. It is a powerful primitive that even implies indistinguishability obfuscation (iO), given sufficiently compact ciphertexts (Ananth-Jain, CRYPTO'15 and Bitansky-Vaikuntanathan, FOCS'15). However, despite being extensively studied, there are FE schemes, such as function-hiding inner-product FE (Bishop-Jain-Kowalczyk, AC'15, Abdalla-Catalano-Fiore-Gay-Ursu, CRYPTO’18) and compact quadratic FE (Baltico-Catalano-Fiore-Gay, Lin, CRYPTO’17), that can only be realized using pairings. This raises the question of whether there are mathematical barriers that hinder us from realizing these FE schemes from other assumptions. In this paper, we study the difficulty of constructing lattice-based compact FE. We generalize the impossibility results of Ünal (EC'20) for lattice-based function-hiding FE and extend them to the case of compact FE. Concretely, we prove lower bounds for lattice-based compact FE schemes which meet some (natural) algebraic restrictions at encryption and decryption, and have ciphertexts of linear size and secret keys of minimal degree. We see our results as important indications of why it is hard to construct lattice-based FE schemes for new functionalities, and of which mathematical barriers have to be overcome.
Last updated:  2024-02-27
FuLeakage: Breaking FuLeeca by Learning Attacks
Felicitas Hörmann and Wessel van Woerden
FuLeeca is a signature scheme submitted to the recent NIST call for additional signatures. It is an efficient hash-and-sign scheme based on quasi-cyclic codes in the Lee metric and resembles the lattice-based signature Falcon. FuLeeca proposes a so-called concentration step within the signing procedure to avoid leakage of secret-key information from the signatures. However, FuLeeca is still vulnerable to learning attacks, which were first observed for lattice-based schemes. We present three full key-recovery attacks by exploiting the proximity of the code-based FuLeeca scheme to lattice-based primitives. More precisely, we use a few signatures to extract an $n/2$-dimensional circulant sublattice of the given length-$n$ code, that still contains the exceptionally short secret-key vector. This significantly reduces the classical attack cost and, in addition, leads to a full key recovery in quantum-polynomial time. Furthermore, we exploit a bias in the concentration procedure to classically recover the full key for any security level with at most 175,000 signatures in less than an hour.
Last updated:  2024-02-27
Eureka: A General Framework for Black-box Differential Privacy Estimators
Yun Lu, Malik Magdon-Ismail, Yu Wei, and Vassilis Zikas
Differential privacy (DP) is a key tool in privacy-preserving data analysis. Yet it remains challenging for non-privacy-experts to prove the DP of their algorithms. We propose a methodology for domain experts with limited data privacy background to empirically estimate the privacy of an arbitrary mechanism. Our Eureka moment is a new link---which we prove---between the problems of DP parameter-estimation and Bayes optimal classifiers in ML, which we believe can be of independent interest. Our estimator uses this link to achieve two desirable properties: (1) black-box, i.e., it does not require knowledge of the underlying mechanism, and (2) it has a theoretically-proven accuracy, depending on the underlying classifier used, allowing plug-and-play use of different classifiers. More concretely, motivated by the impossibility of the above task for unrestricted input domains (which we prove), we introduce a natural, application-inspired relaxation of DP which we term relative DP. Intuitively, relative DP defines a mechanism's privacy relative to an input set $T$, circumventing the above impossibility when $T$ is finite. Importantly, it preserves the key intuitive privacy guarantee of DP while enjoying a number of desirable DP properties---scalability, composition, and robustness to post-processing. We then devise a black-box poly-time $(\epsilon,\delta)$-relative DP estimator for any poly-size $T$---the first privacy estimator to support mechanisms with large output spaces while having tight accuracy bounds. As a result of independent interest, we generalize our theory to develop the first Distributional Differential Privacy (DDP) estimator. We benchmark our estimator in a proof-of-concept implementation. First, using kNN as the classifier we show that our method (1) produces a tight, analytically computed $(\epsilon, \delta)$-DP trade-off of low-dimensional Laplace and Gaussian mechanisms---the first to do so, (2) accurately estimates the privacy spectrum of DDP mechanisms, and (3) can verify a DP mechanism's implementations, e.g., Sparse Vector Technique, Noisy Histogram, and Noisy max. Our implementation and experiments demonstrate the potential of our framework, and highlight its computational bottlenecks in estimating DP, e.g., in terms of the size of $\delta$ and the data dimensionality. Our second, neural-network-based instantiation makes a first step in showing that our method can be extended to mechanisms with high-dimensional outputs.
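The classifier link can be illustrated with a crude audit: under equal priors, any classifier that distinguishes $M(D)$ from $M(D')$ with accuracy $A$ witnesses $\epsilon \geq \ln(A/(1-A))$ (the hypothesis-testing view of DP), and the Bayes optimal classifier makes this tight. A toy kNN instantiation for the Laplace mechanism follows; it is a sanity check in the spirit of the classifier link, not the paper's estimator, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
eps_true, n = 1.0, 2000

# outputs of a sensitivity-1 Laplace mechanism on neighboring inputs 0 and 1
m0 = rng.laplace(0.0, 1.0 / eps_true, 2 * n)
m1 = rng.laplace(1.0, 1.0 / eps_true, 2 * n)
train_x = np.concatenate([m0[:n], m1[:n]])
train_y = np.concatenate([np.zeros(n), np.ones(n)])
test_x = np.concatenate([m0[n:], m1[n:]])
test_y = np.concatenate([np.zeros(n), np.ones(n)])

def knn_predict(x, k=51):
    # majority vote among the k nearest training outputs (1-D distance)
    idx = np.argpartition(np.abs(train_x - x), k)[:k]
    return float(train_y[idx].mean() > 0.5)

acc = np.mean([knn_predict(x) == y for x, y in zip(test_x, test_y)])
eps_lb = np.log(acc / (1 - acc))  # any classifier's accuracy lower-bounds eps
print(f"accuracy {acc:.3f} -> eps lower bound {eps_lb:.2f} (true eps = {eps_true})")
```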
Last updated:  2024-02-27
Improved Meet-in-the-Middle Nostradamus Attacks on AES-like Hashing
Xiaoyang Dong, Jian Guo, Shun Li, Phuong Pham, and Tianyu Zhang
The Nostradamus attack was originally proposed as a security vulnerability for a hash function by Kelsey and Kohno at EUROCRYPT 2006. It requires the attacker to commit to a hash value y of an iterated hash function H. Subsequently, upon being provided with a message prefix P, the adversary’s task is to identify a suffix S such that H(P||S) equals y. Kelsey and Kohno demonstrated a herding attack requiring $O(\sqrt{n}\cdot 2^{2n/3})$ evaluations of the compression function of H, where n represents the output and state size of the hash, placing this attack between preimage attacks and collision searches in terms of complexity. At ASIACRYPT 2022, Benedikt et al. transform Kelsey and Kohno’s attack into a quantum variant, decreasing the time complexity from $O(\sqrt{n}\cdot 2^{2n/3})$ to $O(\sqrt[3]{n}\cdot 2^{3n/7})$. At ToSC 2023, Zhang et al. proposed the first dedicated Nostradamus attack on AES-like hashing in both classical and quantum settings. In this paper, we have made revisions to the multi-target technique incorporated into the meet-in-the-middle automatic search framework. This modification leads to a decrease in time complexity during the online linking phase, effectively reducing the overall attack time complexity in both classical and quantum scenarios. Specifically, we can achieve more rounds in the classical setting and reduce the time complexity for the same round in the quantum setting.
Last updated:  2024-02-27
A generic algorithm for efficient key recovery in differential attacks – and its associated tool
Christina Boura, Nicolas David, Patrick Derbez, Rachelle Heim Boissier, and María Naya-Plasencia
Differential cryptanalysis is an old and powerful attack against block ciphers. While different techniques have been introduced throughout the years to improve the complexity of this attack, the key recovery phase remains a tedious and error-prone procedure. In this work, we propose a new algorithm and its associated tool that, given a distinguisher, output an efficient key-guessing strategy. Our tool can be applied to SPN ciphers whose linear layer consists of a bit-permutation and whose key schedule is linear or almost linear. It can be used not only to help cryptanalysts find the best differential attack on a given cipher but also to assist designers in their security analysis. We applied our tool to four targets: RECTANGLE, PRESENT-80, SPEEDY-7-192 and GIFT-64. We extend the previous best attack on RECTANGLE-128 by one round and the previous best differential attack against PRESENT-80 by 2 rounds. We improve a previous key recovery step in an attack against SPEEDY and present more efficient key recovery strategies for RECTANGLE-80 and GIFT. Our tool outputs the results within a second for most targets.
Last updated:  2024-02-27
Power Contracts: Provably Complete Power Leakage Models for Processors
Roderick Bloem, Barbara Gigerl, Marc Gourjon, Vedad Hadžić, Stefan Mangard, and Robert Primas
The protection of cryptographic software implementations against power-analysis attacks is critical for applications in embedded systems. A commonly used algorithmic countermeasure against these attacks is masking, a secret-sharing scheme that splits a sensitive computation into computations on multiple random shares. In practice, the security of masking schemes relies on several assumptions that are often violated by microarchitectural side-effects of CPUs. Many past works address this problem by studying these leakage effects and building corresponding leakage models that can then be integrated into a software verification workflow. However, these models have only been derived empirically, putting in question the otherwise rigorous security statements made with verification. We solve this problem in two steps. First, we introduce a contract layer between the (CPU) hardware and the software that allows the specification of microarchitectural side-effects on masked software in an intuitive language. Second, we present a method for proving the correspondence between contracts and CPU netlists to ensure the completeness of the specified leakage models. Then, any further security proofs only need to happen between software and contract, which brings benefits such as reduced verification runtime, improved user experience, and the possibility of working with vendor-supplied contracts of CPUs whose design is not available on netlist-level due to IP restrictions. We apply our approach to the popular RISC-V IBEX core, provide a corresponding formally verified contract, and describe how this contract could be used to verify masked software implementations.
Last updated:  2024-02-27
Improved Differential Meet-In-The-Middle Cryptanalysis
Zahra Ahmadian, Akram Khalesi, Dounia M'foukh, Hossein Moghimi, and María Naya-Plasencia
In this paper, we extend the applicability of differential meet-in-the-middle attacks, proposed at Crypto 2023, to truncated differentials, and in addition, we introduce three new ideas to improve this type of attack: we show how to add longer structures than in the original paper, we show how to improve the key recovery steps by introducing some probability in them, and we combine this type of attack with the state-test technique, which was introduced in the context of impossible differential attacks. Furthermore, we have developed a MILP-based tool to automate the search for a truncated differential-MITM attack with optimized overall complexity, incorporating some of the proposed improvements. Thanks to this, we can build the best known attacks on the cipher CRAFT, reaching 23 rounds against 21 previously; we provide a new attack on 23-round SKINNY-64-192, and we improve the best attacks on SKINNY-128-384.
Last updated:  2024-02-27
RAMenPaSTA: Parallelizable Scalable Transparent Arguments of Knowledge for RAM Programs
Khai Hanh Tang, Minh Pham, and Chan Nam Ngo
Incremental Verifiable Computation (IVC) allows a prover to prove to a verifier the correct execution of a sequential computation. Recent works focus on improving the universality and efficiency of IVC schemes, which can be categorized into accumulation-based and folding-based IVCs, with folding-based ones being more efficient (due to their deferred proof generation until the final step). Unfortunately, both approaches satisfy only heuristic security, as they model the Random Oracle (RO) as a circuit in their non-constant-depth recursive composition of the base scheme. This drawback is twofold: to connect consecutive execution steps, the RO is recursively modeled as a circuit during the folding or accumulating process, and again in the final SNARK wrapper circuit (a common practice in folding-based IVCs). We revisit this problem, with a focus on folding-based IVCs due to their efficiency, and propose detaching the RO invocation from the folding circuit. We can instead accumulate such invocations, yielding the so-called Conditional Folding (CF) scheme to overcome the first drawback. One can consider our CF scheme a hybrid folding-accumulation scheme with provable security. We provide a non-trivial practical construction for our CF scheme that is natively parallelizable, which offers great efficiency. We rigorously prove the security of our CF scheme (also for the case of folding in parallel; and our scheme can be made non-interactive using Fiat-Shamir). Our CF scheme is generic and does not require a trusted setup. It can be adapted to construct the first IVC for RAM programs, i.e., Parallelizable Scalable Transparent Arguments of Knowledge for RAM Programs, which we dub RAMenPaSTA, that can be used to build zero-knowledge virtual machines (zkVMs). Both our CF scheme and RAMenPaSTA may be of independent research interest.
Last updated:  2024-02-27
A Zero-Dimensional Gröbner Basis for Poseidon
Matthias Johann Steiner
In this paper we construct dedicated weight orders $>$ so that a $>$-Gröbner basis of Poseidon can be found via linear transformations for the preimage as well as the CICO problem. In particular, with our Gröbner bases we can exactly compute the $\mathbb{F}_q$-vector space dimension of the quotient space for all possible Poseidon configurations. This in turn resolves previous attempts to assess the security of Poseidon against Gröbner basis attacks, since the vector space dimension quantifies the complexity of computing the variety of a zero-dimensional polynomial system.
Last updated:  2024-02-27
Automating Collision Attacks on RIPEMD-160
Yingxin Li, Fukang Liu, and Gaoli Wang
As an ISO/IEC standard, the hash function RIPEMD-160 has been used to generate the Bitcoin address with SHA-256. However, due to the complex double-branch structure of RIPEMD-160, the best collision attack only reaches 36 out of 80 steps of RIPEMD-160, and the best semi-free-start (SFS) collision attack only reaches 40 steps. To improve the 36-step collision attack proposed at EUROCRYPT 2023, we explored the possibility of using different message differences to increase the number of attacked steps, and we finally identified one choice allowing a 40-step collision attack. To find the corresponding 40-step differential characteristic, we re-implement the MILP-based method to search for signed differential characteristics with SAT/SMT. As a result, we can find a colliding message pair for 40-step RIPEMD-160 in practical time, which significantly improves the best collision attack on RIPEMD-160. For the best SFS collision attack published at ToSC 2019, we observe that the bottleneck is the probability of the right-branch differential characteristics as they are fully uncontrolled in the message modification. To address this issue, we utilize our SAT/SMT-based tool to search for high-probability differential characteristics for the right branch. Consequently, we can mount successful SFS collision attacks on 41, 42 and 43 steps of RIPEMD-160, thus significantly improving the SFS collision attacks. In addition, we also searched for a 44-step differential characteristic, but the differential probability is too low to allow a meaningful SFS collision attack.
Last updated:  2024-02-27
New Records in Collision Attacks on SHA-2
Yingxin Li, Fukang Liu, and Gaoli Wang
The SHA-2 family, including SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224 and SHA-512/256, is a U.S. federal standard published by NIST. In particular, there is no doubt that SHA-256 is one of the most important hash functions used in real-world applications. Due to its complex design compared with SHA-1, there has been almost no progress in collision attacks on SHA-2 since ASIACRYPT 2015. In this work, we retake this challenge and aim to significantly improve collision attacks on the SHA-2 family. First, we observe from many existing attacks on SHA-2 that the current advanced tool to search for SHA-2 characteristics has reached a bottleneck. Specifically, longer differential characteristics could not be found, which prevents collision attacks from reaching more steps. To address this issue, we adopt Liu et al.’s MILP-based method and implement it with SAT/SMT for SHA-2, where we also add more techniques to detect contradictions in SHA-2 characteristics. This answers an open problem left in Liu et al.’s paper on applying the technique to SHA-2. With this SAT/SMT-based tool, we search for SHA-2 characteristics by controlling their sparsity in a dedicated way. As a result, we successfully find the first practical semi-free-start (SFS) colliding message pair for 39-step SHA-256, improving on the best 38-step SFS collision attack published at EUROCRYPT 2013. In addition, we report the first practical free-start (FS) collision attack on 40-step SHA-224, while the previously best theoretical 40-step attack has time complexity $2^{110}$. Moreover, for the first time, we can mount practical and theoretical collision attacks on 28-step and 31-step SHA-512, respectively, improving on the best collision attack, which only reached 27 steps of SHA-512 at ASIACRYPT 2015. In a word, with new techniques for finding SHA-2 characteristics, we have made notable progress in the analysis of SHA-2 after the major achievements at EUROCRYPT 2013 and ASIACRYPT 2015.
Last updated:  2024-02-27
A Computational Tsirelson's Theorem for the Value of Compiled XOR Games
David Cui, Giulio Malavolta, Arthur Mehta, Anand Natarajan, Connor Paddock, Simon Schmidt, Michael Walter, and Tina Zhang
Nonlocal games are a foundational tool for understanding entanglement and constructing quantum protocols in settings with multiple spatially separated quantum devices. In this work, we continue the study initiated by Kalai et al. (STOC '23) of compiled nonlocal games, played between a classical verifier and a single cryptographically limited quantum device. Our main result is that the compiler proposed by Kalai et al. is sound for any two-player XOR game. A celebrated theorem of Tsirelson shows that for XOR games, the quantum value is exactly given by a semidefinite program, and we obtain our result by showing that the SDP upper bound holds for the compiled game up to a negligible error arising from the compilation. This answers a question raised by Natarajan and Zhang (FOCS '23), who showed soundness for the specific case of the CHSH game. Using our techniques, we obtain several additional results, including (1) tight bounds on the compiled value of parallel-repeated XOR games, (2) operator self-testing statements for any compiled XOR game, and (3) a "nice" sum-of-squares certificate for any XOR game, from which operator rigidity is manifest.
Last updated:  2024-02-27
The Algebraic FreeLunch: Efficient Gröbner Basis Attacks Against Arithmetization-Oriented Primitives
Augustin Bariant, Aurélien Boeuf, Axel Lemoine, Irati Manterola Ayala, Morten Øygarden, Léo Perrin, and Håvard Raddum
In this paper, we present a new type of algebraic attack that applies to many recent arithmetization-oriented families of permutations, such as those used in Griffin, Anemoi, ArionHash, and XHash8, whose security relies on the hardness of the constrained-input constrained-output (CICO) problem. We introduce the FreeLunch approach: the monomial ordering is chosen so that the natural polynomial system encoding the CICO problem already is a Gröbner basis. In addition, we present a new dedicated resolution algorithm for FreeLunch systems with complexity lower than that of applicable state-of-the-art FGLM algorithms. We show that the FreeLunch approach challenges the security of full-round instances of Anemoi, Arion and Griffin. We confirm these theoretical results with experimental results on those three permutations. In particular, using the FreeLunch attack combined with a new technique to bypass 3 rounds of Griffin, we recover a CICO solution for 7 out of 10 rounds of Griffin in less than four hours on one core of an AMD EPYC 7352 (2.3 GHz).
Last updated:  2024-02-27
A data aggregation protocol based on TFHE
Maria Ferrara, Antonio Tortora, and Maria Tota
Torus Fully Homomorphic Encryption (TFHE) is a probabilistic cryptosystem over the real torus which allows one to operate directly on encrypted data without first decrypting them. We present an aggregation protocol based on a variant of TFHE for computing the sum of sensitive data, working only with the corresponding ciphertexts. Our scheme is an ideal choice for a system of smart meters - electronic devices for measuring energy consumption - that demands consumers’ privacy. In contrast to some other solutions, our proposal does not require any communication among smart meters and it is quantum-safe.
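The additive mechanism underlying such aggregation is the linearity of (T)LWE-style ciphertexts: ciphertexts add component-wise, and their phases, hence their messages, add too. A toy single-key sketch with integer parameters (the paper's scheme is a TFHE variant with its own multi-meter setup; all parameters below are illustrative and insecure):

```python
import secrets

Q, N, DELTA = 2**32, 32, 2**24  # toy modulus, dimension, and message scaling

def keygen():
    return [secrets.randbelow(2) for _ in range(N)]

def encrypt(m, s):
    a = [secrets.randbelow(Q) for _ in range(N)]
    e = secrets.randbelow(9) - 4                      # small noise in [-4, 4]
    b = (sum(x * y for x, y in zip(a, s)) + m * DELTA + e) % Q
    return a, b

def add(c1, c2):                                      # homomorphic addition
    return [(x + y) % Q for x, y in zip(c1[0], c2[0])], (c1[1] + c2[1]) % Q

def decrypt(c, s):
    a, b = c
    phase = (b - sum(x * y for x, y in zip(a, s))) % Q
    return round(phase / DELTA) % (Q // DELTA)        # rounding removes the noise

s = keygen()
readings = [17, 42, 5, 99]                            # e.g. smart-meter readings
agg = encrypt(readings[0], s)
for r in readings[1:]:
    agg = add(agg, encrypt(r, s))
assert decrypt(agg, s) == sum(readings)
```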
Last updated:  2024-02-27
An Efficient Adaptive Attack Against FESTA
Guoqing Zhou and Maozhi Xu
At EUROCRYPT’23, Castryck and Decru, Maino et al., and Robert presented efficient attacks against the supersingular isogeny Diffie-Hellman key exchange protocol (SIDH). Drawing inspiration from these attacks, Andrea Basso, Luciano Maino, and Giacomo Pope introduced FESTA, an isogeny-based trapdoor function, along with a corresponding IND-CCA secure public key encryption (PKE) protocol at ASIACRYPT’23. FESTA incorporates either a diagonal or circulant matrix into the secret key to mask torsion points. In this paper, we employ a side-channel attack to construct an auxiliary verification oracle. By querying this oracle, we propose an adaptive attack strategy to recover the secret key in FESTA when the secret matrix is circulant. Compared with existing attacks, our strategy is more efficient and more formal. Leveraging these findings, we implement our attack algorithms to recover the circulant matrix in the secret key. Finally, we demonstrate that if the secret matrix is circulant, then the adversary can successfully recover FESTA’s secret key with a polynomial number of decryption machine queries. Consequently, our paper illustrates that the FESTA PKE protocol with a secret circulant matrix does not achieve IND-CCA security.
Last updated:  2024-02-27
A Holistic Security Analysis of Monero Transactions
Cas Cremers, Julian Loss, and Benedikt Wagner
Monero is a popular cryptocurrency with strong privacy guarantees for users’ transactions. At the heart of Monero’s privacy claims lies a complex transaction system called RingCT, which combines several building blocks such as linkable ring signatures, homomorphic commitments, and range proofs, in a unique fashion. In this work, we provide the first rigorous security analysis for RingCT (as given in Zero to Monero, v2.0.0, 2020) in its entirety. This is in contrast to prior works that only provided security arguments for parts of RingCT. To analyze Monero’s transaction system, we introduce the first holistic security model for RingCT. We then prove the security of RingCT in our model. Our framework is modular: it allows to view RingCT as a combination of various different sub-protocols. Our modular approach has the benefit that these components can be easily updated in future versions of RingCT, with only minor modifications to our analysis. At a technical level, we split our analysis in two parts. First, we identify which security notions for building blocks are needed to imply security for the whole system. Interestingly, we observe that existing and well-established notions (e.g., for the linkable ring signature) are insufficient. Second, we analyze all building blocks as implemented in Monero and prove that they satisfy our new notions. Here, we leverage the algebraic group model to overcome subtle problems in the analysis of the linkable ring signature component. As another technical highlight, we show that our security goals can be mapped to a suitable graph problem, which allows us to take advantage of the theory of network flows in our analysis. This new approach is also useful for proving security of other cryptocurrencies.
Last updated:  2024-02-27
Accelerating BGV Bootstrapping for Large $p$ Using Null Polynomials Over $\mathbb{Z}_{p^e}$
Shihe Ma, Tairong Huang, Anyu Wang, and Xiaoyun Wang
The BGV scheme is one of the most popular FHE schemes for computing homomorphic integer arithmetic. The bootstrapping technique of BGV is necessary to evaluate arbitrarily deep circuits homomorphically. However, the BGV bootstrapping performs poorly for large plaintext prime $p$ due to its digit removal procedure exhibiting a computational complexity of at least $O(\sqrt{p})$. In this paper, we propose optimizations for the digit removal procedure with large $p$ by leveraging the properties of null polynomials over the ring $\mathbb{Z}_{p^e}$. Specifically, we demonstrate that it is possible to construct low-degree null polynomials based on two observations of the input to the digit removal procedure: 1) the support size of the input can be upper-bounded by $(2B+1)^2$; 2) the size of the lower digits to be removed can be upper-bounded by $B$. Here $B$ can be controlled within a narrow interval $[22,23]$ in our parameter selection, making the degree of these null polynomials much smaller than $p$ for large values of $p$. These low-degree null polynomials can significantly reduce the polynomial degrees during homomorphic digit removal, thereby decreasing both running time and capacity consumption. Theoretically, our optimizations reduce the computational cost of extracting a single digit from $O(\sqrt{pe})$ (by Chen and Han) or $O(\sqrt{p}\sqrt[4]{e})$ (by Geelen et al.) to $\min(2B+1,\sqrt{\lceil e/t\rceil(2B+1)})$ for some $t\ge 1$. We implement and benchmark our method on HElib with $p=17,127,257,8191$ and $65537$. With our optimized digit removal, we achieve a bootstrapping throughput between $1.38\times$ and $151\times$ that of HElib, with the speedup increasing with the value of $p$. For $p=65537$, we accelerate the digit removal step by 80 times and reduce the bootstrapping time from more than 12 hours to less than 14 minutes.
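The generic fact being exploited is that among any $p$ consecutive integers one is divisible by $p$, so $\left(\prod_{i=0}^{p-1}(X-i)\right)^e$ vanishes on all of $\mathbb{Z}_{p^e}$; the paper's contribution is far lower-degree null polynomials once the digit-removal input is known to have small support and digits. A quick check of the generic fact:

```python
p, e = 5, 3
mod = p ** e

def null_poly(x: int) -> int:
    # N(x) = (x (x-1) ... (x-p+1))^e mod p^e: the inner product contains a
    # multiple of p for every integer x, so its e-th power is 0 mod p^e
    prod = 1
    for i in range(p):
        prod = prod * (x - i) % mod
    return pow(prod, e, mod)

assert all(null_poly(x) == 0 for x in range(mod))
print(f"degree-{p * e} null polynomial vanishes on all {mod} residues")
```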
Last updated:  2024-02-27
Probabilistic Extensions: A One-Step Framework for Finding Rectangle Attacks and Beyond
Ling Song, Qianqian Yang, Yincen Chen, Lei Hu, and Jian Weng
In differential-like attacks, the process typically involves extending a distinguisher forward and backward with probability 1 for some rounds and recovering the key involved in the extended part. Particularly in rectangle attacks, a holistic key recovery strategy can be employed to yield the most efficient attacks tailored to a given distinguisher. In this paper, we treat the distinguisher and the extended part as an integrated entity and give a one-step framework for finding rectangle attacks with the purpose of reducing the overall complexity or attacking more rounds. In this framework, we propose to allow probabilistic differential propagations in the extended part and incorporate the holistic recovery strategy. Additionally, we introduce the ``split-and-bunch technique'' to further reduce the time complexity. Beyond rectangle attacks, we extend these foundational concepts to encompass differential attacks as well. To demonstrate the efficiency of our framework, we apply it to Deoxys-BC-384, SKINNY, ForkSkinny, and CRAFT, achieving a series of refined and improved rectangle attacks and differential attacks. Notably, we obtain the first 15-round attack on Deoxys-BC-384, narrowing its security margin to only one round. Furthermore, our differential attack on CRAFT extends to 23 rounds, covering two more rounds than the previous best attacks.
Last updated:  2024-02-27
Partial Differential Fault Analysis on Ascon
Yang Gao
Authenticated Encryption with Associated Data (AEAD) is a trend in applied cryptography because it combines confidentiality, integrity, and authentication into one algorithm and is more efficient than using block ciphers and hash functions separately. The Ascon algorithm, as the winner of both the CAESAR competition and the NIST LwC competition, will soon become the AEAD standard for protecting the Internet of Things and microdevices with limited computing resources. We propose a partial differential fault analysis (PDFA) technique for the Ascon algorithm, using stuck-at and random-nibble fault models respectively. Theoretically, after 9.9 full-round fault injections or 263 single-nibble fault injections, the 128-bit key can be completely recovered. In addition, we conduct the first discussion of this analysis method under different nonce configurations. In the nonce-respecting case, an average of 130 additional tag queries are required to complete the guessing of the faulty tag, after which this case becomes equivalent to the nonce-misuse case. Subsequent experimental results confirm the correctness of the theoretical model. Finally, we discuss some countermeasures against the proposed attacks, and we propose a new S-box that can replace the existing S-box in Ascon to render PDFA ineffective.
Last updated:  2024-02-27
Fast and Accurate: Efficient Full-Domain Functional Bootstrap and Digit Decomposition for Homomorphic Computation
Shihe Ma, Tairong Huang, Anyu Wang, Qixian Zhou, and Xiaoyun Wang
The functional bootstrap in FHEW/TFHE allows for fast table lookups on ciphertexts and is a powerful tool for privacy-preserving computations. However, the functional bootstrap suffers from two limitations: the negacyclic constraint of the lookup table (LUT) and the limited ability to evaluate large-precision LUTs. To overcome the first limitation, several full-domain functional bootstraps (FDFB) have been developed, enabling the evaluation of arbitrary LUTs. Meanwhile, algorithms based on homomorphic digit decomposition have been proposed to address the second limitation. Although these algorithms provide effective solutions, they are yet to be optimized. This paper presents four new FDFB algorithms and two new homomorphic decomposition algorithms that improve the state-of-the-art. Our FDFB algorithms reduce the output noise, thus allowing for more efficient and compact parameter selection. Across all parameter settings, our algorithms reduce the runtime by up to $39.2\%$. Our homomorphic decomposition algorithms also run at 2.0x and 1.5x the speed of prior algorithms. We have implemented and benchmarked all previous FDFB and homomorphic decomposition algorithms and our methods in OpenFHE.
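For intuition about the first limitation, the following toy Python sketch (our illustration; parameters made up) shows why a LUT evaluated via a test polynomial in $\mathbb{Z}_q[X]/(X^N+1)$ is forced to be negacyclic: multiplying by $X^N$ negates coefficients, so any such LUT satisfies $f(x+N)=-f(x)$, and FDFB algorithms exist precisely to lift this restriction.

    # Minimal sketch (illustration only): the negacyclic LUT constraint.
    # The test polynomial lives in Z_q[X]/(X^N + 1), where X^N = -1, so
    # rotating the lookup table by N slots negates its entries.
    q, N = 64, 8

    def rotate(poly, k):
        """Multiply poly (list of N coefficients) by X^k mod (X^N + 1, q)."""
        out = [0] * N
        for i, c in enumerate(poly):
            j = (i + k) % (2 * N)
            if j < N:
                out[j] = (out[j] + c) % q
            else:
                out[j - N] = (out[j - N] - c) % q  # X^N = -1: sign flip on wrap
        return out

    lut = list(range(1, N + 1))  # f(0), ..., f(N-1) stored as coefficients
    assert rotate(lut, N) == [(-c) % q for c in lut]  # f(x+N) = -f(x)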
Last updated:  2024-02-27
VeriSimplePIR: Verifiability in SimplePIR at No Online Cost for Honest Servers
Leo de Castro and Keewoo Lee
We present VeriSimplePIR, a verifiable version of the state-of-the-art semi-honest SimplePIR protocol. VeriSimplePIR is a stateful verifiable PIR scheme guaranteeing that all queries are consistent with a fixed, well-formed database. It is the first efficient verifiable PIR scheme to not rely on an honest digest to ensure security; any digest, even one produced by a malicious server, is sufficient to commit to some database. This is due to our extractable verification procedure, which can extract the entire database from the consistency proof checked against each response. Furthermore, VeriSimplePIR ensures this strong security guarantee without compromising the performance of SimplePIR. The online communication overhead is roughly $1.1$-$1.5\times$ that of SimplePIR, and the online computation time on the server is essentially the same. We achieve this low overhead via a novel one-time preprocessing protocol that generates a reusable proof that can verify any number of subsequent query-response pairs as long as no malicious behavior is detected. As soon as the verification procedure rejects a response from the server, the offline phase must be rerun to compute a new proof. VeriSimplePIR represents an approach to maliciously secure cryptography that is highly optimized for honest parties while maintaining security even in the presence of malicious adversaries.
Last updated:  2024-02-27
VOProof: Efficient zkSNARKs from Vector Oracle Compilers
Yuncong Zhang, Alan Szepieniec, Ren Zhang, Shi-Feng Sun, Geng Wang, and Dawu Gu
The design of zkSNARKs is increasingly complicated and requires familiarity with a broad class of cryptographic and algebraic tools. This complexity in zkSNARK design also increases the difficulty of zkSNARK implementation, analysis, and optimization. To address this complexity, we develop a new workflow for designing and implementing zkSNARKs, called $\mathsf{VOProof}$. In $\mathsf{VOProof}$, the designer only needs to construct a \emph{Vector Oracle (VO) protocol} that is intuitive and straightforward to design, and then feed this protocol to our \emph{VO compiler} to transform it into a fully functional zkSNARK. This new workflow conceals most algebraic and cryptographic operations inside the compiler, so that the designer is no longer required to understand these cumbersome and error-prone procedures. Moreover, our compiler can be fine-tuned to compile one VO protocol into multiple zkSNARKs with different tradeoffs. We apply $\mathsf{VOProof}$ to construct three general-purpose zkSNARKs targeting three popular representations of arithmetic circuits: the Rank-1 Constraint System (R1CS), the Hadamard Product Relation (HPR), and the $\mathsf{PLONK}$ circuit. These zkSNARKs have shorter and more intuitive descriptions and are thus easier to implement and optimize compared to prior works. To evaluate their performance, we implement a Python framework for describing VO protocols and compiling them into working Rust code of zkSNARKs. Our evaluation shows that the $\mathsf{VOProof}$-based zkSNARKs have competitive performance, especially in proof size and verification time, e.g., both reduced by roughly $50\%$ compared to $\mathsf{Marlin}$ (Chiesa et al., EUROCRYPT 2020). These improvements make the $\mathsf{VOProof}$-based zkSNARKs more preferable in blockchain scenarios where proof size and verification time are critical.
Last updated:  2024-02-26
From Random Probing to Noisy Leakages Without Field-Size Dependence
Gianluca Brian, Stefan Dziembowski, and Sebastian Faust
Side-channel attacks are devastating attacks targeting cryptographic implementations. To protect against them, various countermeasures have been proposed -- in particular, so-called masking schemes. Masking schemes work by hiding sensitive information via secret sharing of all intermediate values that occur during the evaluation of a cryptographic implementation. Over the last decade, there has been broad interest in designing and formally analyzing such schemes. The random probing model considers leakage where the value on each wire leaks with some probability $\epsilon$. This model is important as it implies security in the noisy leakage model via a reduction by Duc et al. (Eurocrypt 2014). Noisy leakages are considered the "gold standard" for analyzing masking schemes as they accurately model many real-world physical leakages. Unfortunately, the reduction of Duc et al. is non-tight and, in particular, requires that the amount of noise increase by a factor of $|\mathbb{F}|$ for circuits that operate over a finite field $\mathbb{F}$. In this work, we give a generic transformation from random probing to average probing that avoids this loss of $|\mathbb{F}|$. Since average probing is identical to the noisy leakage model (Eurocrypt 2014), this yields, for the first time, a security analysis of masked circuits where the noise parameter $\delta$ in the noisy leakage model is independent of $|\mathbb{F}|$. The latter is particularly important for cryptographic schemes operating over large fields, e.g., the AES or the recently standardized post-quantum schemes.
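As a point of reference, the random probing model itself is simple to state in code; the sketch below (our illustration, with a toy additive sharing over GF(2)) samples the leakage an adversary would observe.

    # Minimal sketch (illustration only): the random probing leakage model.
    # Each wire leaks its value independently with probability eps; a masking
    # scheme must keep the secret hidden given any such leakage.
    import random

    def random_probing_leakage(wires, eps):
        """Return the (index, value) pairs of the wires that happen to leak."""
        return [(i, v) for i, v in enumerate(wires) if random.random() < eps]

    # Toy example: additive sharing of a secret bit into 4 shares over GF(2).
    secret = 1
    shares = [random.randrange(2) for _ in range(3)]
    shares.append(secret ^ shares[0] ^ shares[1] ^ shares[2])

    leak = random_probing_leakage(shares, eps=0.25)
    # Unless all 4 shares leak simultaneously, the observed values are
    # distributed independently of the secret.
    print(f"leaked {len(leak)} of {len(shares)} wires: {leak}")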
Last updated:  2024-02-26
Tight Indistinguishability Bounds for the XOR of Independent Random Permutations by Fourier Analysis
Itai Dinur
The XOR of two independent permutations (XoP) is a well-known construction for achieving security beyond the birthday bound when implementing a pseudorandom function using a block cipher (i.e., a pseudorandom permutation). The idealized construction (where the permutations are uniformly chosen and independent) and its variants have been extensively analyzed over nearly 25 years. The best-known information-theoretic indistinguishability bound for the XoP construction (due to Dutta, Nandi and Saha [IEEE Trans. Inf. Theory]) is about $q^2/2^{2n}$, where $q$ is the number of queries and $n$ is the block length. The XoP construction has also been recently analyzed in the multi-user setting and the best-known bound for it (by Chen, Choi, and Lee [CRYPTO'23]) is about $\sqrt{u} q_{\max}^2/2^{2n}$, where $u$ is the number of users and $q_{\max}$ is the number of queries per user. A generalization of the XoP construction outputs the XOR of $r \geq 2$ independent permutations, and has also received significant attention. In this paper, we improve all previous bounds obtained in the literature for the (generalized) XoP construction when $q > 2^{n/2}$ (assuming $q < 2^{n}/2$). Specifically, for the basic XoP construction with $r=2$, we obtain an indistinguishability bound of $q/2^{1.5n}$ in the single-user setting and $\sqrt{u} q_{\max}/2^{1.5n}$ in the multi-user setting. Hence, if $q_{\max} = \Theta(2^{n})$ (and $q_{\max} < 2^{n}/2$), then our bound of $\sqrt{u} q_{\max}/2^{1.5n}$ implies that the adversary's advantage remains negligible as long as $u = o(2^n)$. On the other hand, with the previous bound of $\sqrt{u} q_{\max}^2/2^{2n}$, the adversary's advantage may already be a constant with a single user ($u=1$). For the generalized XoP construction, we obtain a bound of $q/2^{(r - 0.5)n}$ in the single-user setting and $\sqrt{u} q_{\max}/2^{(r - 0.5)n}$ in the multi-user setting. Consequently, the gap between our results and the best previous ones increases sharply with $r$. For example, the best-known bound for $r = 3$, obtained by Choi et al. [ASIACRYPT'22] (in the multi-user setting), is about $\sqrt{u} q_{\max}^2/2^{2.5 n}$, while we obtain $\sqrt{u} q_{\max}/2^{2.5 n}$. Since all of our bounds are matched (up to constant factors) for $q > 2^{n/2}$ by attacks published by Patarin in 2008 (and their generalizations to the multi-user setting), they are all tight. We obtain our results by Fourier analysis of Boolean functions. Yet, most of our technical work is not directly related to the analyzed cryptosystems. It rather involves analyzing fundamental Fourier properties of the density function associated with sampling without replacement from the domain $\{0,1\}^n$. We believe that this analysis is of broad interest.
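For reference, the analyzed construction itself fits in a few lines; the sketch below (our illustration with a toy block length, not the paper's analysis) samples the idealized XoP and exhibits the collisions that distinguish it from a single permutation.

    # Minimal sketch (illustration only): the idealized XoP construction.
    # XOR two independent uniform permutations of {0,1}^n to obtain a
    # function intended to behave like a PRF beyond the birthday bound.
    import random

    n = 8
    domain = list(range(2 ** n))
    P1 = domain[:]; random.shuffle(P1)  # uniform random permutation
    P2 = domain[:]; random.shuffle(P2)  # independent uniform random permutation

    def xop(x):
        return P1[x] ^ P2[x]

    # Unlike a single permutation, XoP need not be injective; quantifying how
    # close its output distribution is to uniform is exactly what the
    # indistinguishability bounds capture.
    outputs = [xop(x) for x in domain]
    print(f"{2**n - len(set(outputs))} collisions among {2**n} outputs")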
Last updated:  2024-02-26
Stronger Security for Non-Interactive Threshold Signatures: BLS and FROST
Mihir Bellare, Stefano Tessaro, and Chenzhi Zhu
We give a unified syntax, and a hierarchy of definitions of security of increasing strength, for non-interactive threshold signature schemes. They cover both fully non-interactive schemes (these are ones that have a single-round signing protocol, the canonical example being threshold-BLS) and ones, like FROST, that have a prior round of message-independent pre-processing. The definitions in the upper echelon of our hierarchy ask for security that is well beyond any currently defined, let alone proven to be met by the just-mentioned schemes, yet natural, and important for modern applications like securing digital wallets. We prove that BLS and FROST are better than advertised, meeting some of these stronger definitions. Yet, they fall short of meeting our strongest definition, a gap we fill for FROST via a simple enhancement to the scheme. We also surface subtle differences in the security achieved by variants of FROST.
Last updated:  2024-02-26
ORTOA: One Round Trip Oblivious Access
Sujaya Maiyya, Yuval Steinhart, Divyakant Agrawal, Prabhanjan Ananth, and Amr El Abbadi
Many applications relying on cloud storage services typically encrypt their data to ensure data privacy. However, reading or writing the encrypted data to serve client requests reveals the type of client operation to a potentially untrusted cloud. An adversary can exploit this information leak to compromise a user’s privacy by tracking read/write access patterns. Existing approaches such as Oblivious RAM (ORAM) schemes hide the type of client access by always reading and then writing the data sequentially for both reads and writes, rendering one of these rounds redundant with respect to a client request. To mitigate this redundancy, we propose ORTOA, a family of protocols enabling single-round data access on remote storage without revealing the operation type. Specifically, we propose three protocols: two using the existing cryptographic primitives of fully homomorphic encryption and trusted execution environments (TEEs), and one using a new primitive inspired by garbled circuits. Each of these protocols has different trust assumptions, allowing an application to choose the option best suited for its needs. To our knowledge, ORTOA is the first to propose generalized protocols that obfuscate the type of access in a single round, cutting the communication overhead in half. The proposed techniques can pave the way for novel ORAM schemes that hide both the type of access and the access pattern in a single round. Our experimental results show ORTOA achieving throughput gains of 1.7x-3.2x compared to a baseline requiring two rounds for access-type concealment, with the baseline incurring latency 1.5-1.9x that of ORTOA for 160B-sized objects.
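The redundancy that ORTOA removes is easy to see in code. Below is a toy sketch (our illustration, not any of the three protocols) of the standard read-then-write baseline, in which reads and writes are made indistinguishable at the cost of two rounds.

    # Minimal sketch (illustration only): the two-round baseline that hides
    # the operation type by always reading and then always writing back.
    # In a real scheme the value would be re-encrypted before write-back so
    # the server cannot tell whether it changed.
    store = {"k1": "ct_old"}  # toy stand-in for server-side encrypted storage

    def oblivious_access(key, op, new_ct=None):
        ct = store[key]                               # round 1: always read
        store[key] = new_ct if op == "write" else ct  # round 2: always write
        return ct

    # The server observes an identical read+write trace for both operations;
    # ORTOA's goal is the same hiding with a single round trip.
    oblivious_access("k1", "read")
    oblivious_access("k1", "write", new_ct="ct_new")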
Last updated:  2024-02-26
Solving the Tensor Isomorphism Problem for special orbits with low rank points: Cryptanalysis and repair of an Asiacrypt 2023 commitment scheme
Valerie Gilchrist, Laurane Marco, Christophe Petit, and Gang Tang
The Tensor Isomorphism Problem (TIP) has been shown to be equivalent to the matrix code equivalence problem, making it an interesting candidate on which to build post-quantum cryptographic primitives. These hard problems have already been used in protocol development. One of these, MEDS, is currently in Round 1 of NIST's call for additional post-quantum digital signatures. In this work, we consider the TIP for a special class of tensors. The hardness of the decisional version of this problem is the foundation of a commitment scheme proposed by D'Alconzo, Flamini, and Gangemi (Asiacrypt 2023). We present polynomial-time algorithms for the decisional and computational versions of TIP for special orbits, which implies that the commitment scheme is not secure. The key observations of these algorithms are that these special tensors contain some low-rank points, and their stabilizer groups are not trivial. With these new developments in the security of TIP in mind, we give a new commitment scheme based on the general TIP that is non-interactive, post-quantum, and statistically binding, making no new assumptions. Such a commitment scheme does not currently exist in the literature.
Last updated:  2024-02-26
Split-State Non-Malleable Codes and Secret Sharing Schemes for Quantum Messages
Naresh Goud Boddu, Vipul Goyal, Rahul Jain, and João Ribeiro
Non-malleable codes are fundamental objects at the intersection of cryptography and coding theory. These codes provide security guarantees even in settings where error correction and detection are impossible, and have found applications to several other cryptographic tasks. One of the strongest and most well-studied adversarial tampering models is $2$-split-state tampering. Here, a codeword is split into two parts which are stored in physically distant servers, and the adversary can then independently tamper with each part using arbitrary functions. This model can be naturally extended to the secret sharing setting with several parties by having the adversary independently tamper with each share. Previous works on non-malleable coding and secret sharing in the split-state tampering model only considered the encoding of classical messages. Furthermore, until recent work by Aggarwal, Boddu, and Jain (IEEE Trans. Inf. Theory 2024 & arXiv 2022), adversaries with quantum capabilities and shared entanglement had not been considered, and it is a priori not clear whether previous schemes remain secure in this model. In this work, we introduce the notions of split-state non-malleable codes and secret sharing schemes for quantum messages secure against quantum adversaries with shared entanglement. Then, we present explicit constructions of such schemes that achieve low-error non-malleability. More precisely, we construct efficiently encodable and decodable split-state non-malleable codes and secret sharing schemes for quantum messages preserving entanglement with external systems and achieving security against quantum adversaries having shared entanglement with codeword length $n$, any message length at most $n^{\Omega(1)}$, and error $\varepsilon=2^{-{n^{\Omega(1)}}}$. In the easier setting of average-case non-malleability, we achieve efficient non-malleable coding with rate close to $1/11$.
Last updated:  2024-02-26
The Impact of Reversibility on Parallel Pebbling
Jeremiah Blocki, Blake Holman, and Seunghoon Lee
The (parallel) classical black pebbling game is a helpful abstraction which allows us to analyze the resources (time, space, space-time, cumulative space) necessary to evaluate a function $f$ with a static data-dependency graph $G$ on a (parallel) computer. In particular, the parallel black pebbling game has been used as a tool to quantify the (in)security of Data-Independent Memory-Hard Functions (iMHFs). Recently Blocki et al. (TCC 2022) introduced the parallel reversible pebbling game as a tool to analyze resource requirements when we additionally require that computation is reversible. Intuitively, the parallel reversible pebbling game extends the classical parallel black pebbling game by imposing restrictions on when pebbles can be removed. By contrast, the classical black pebbling game imposes no restrictions on when pebbles can be removed to free up space. One of the primary motivations of the parallel reversible pebbling game is to provide a tool to analyze the full cost of quantum preimage attacks against an iMHF. However, while there is an extensive line of work analyzing pebbling complexity in the (parallel) black pebbling game, comparatively little is known about the parallel reversible pebbling game. Our first result is a lower bound of $\Omega\left(N^{1+1/\sqrt{\log N}} \right)$ on the reversible cumulative pebbling cost for a line graph on $N$ nodes. This yields a separation between classical and reversible pebbling costs demonstrating that the reversibility constraint can increase cumulative pebbling costs (and space-time costs) by a multiplicative factor of $\Omega\left(N^{1/\sqrt{\log N}} \right)$ --- the classical pebbling cost (space-time or cumulative) for a line graph is just $\mathcal{O}(N)$. On the positive side, we prove that any classical parallel pebbling can be transformed into a reversible pebbling strategy whilst increasing space-time (resp. cumulative memory) costs by a multiplicative factor of at most $\mathcal{O}\left(N^{2/\sqrt{\log N}}\right)$ (resp. $\mathcal{O}\left(N^{\mathcal{O}(1)/\sqrt[4]{\log N}}\right)$). We also analyze the impact of the reversibility constraint on the cumulative pebbling cost of depth-robust and depth-reducible DAGs exploiting reversibility to improve constant factors in a prior lower bound of Alwen et al. (EUROCRYPT 2017). For depth-reducible DAGs we show that the state-of-the-art recursive pebbling techniques of Alwen et al. (EUROCRYPT 2017) can be converted into a recursive reversible pebbling attack without any asymptotic increases in pebbling costs. Finally, we extend a result of Blocki et al. (ITCS 2020) to show that it is Unique Games hard to approximate the reversible cumulative pebbling cost of a DAG $G$ to within any constant factor.
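To ground the comparison, the classical cost of the line graph mentioned above can be checked in a few lines of Python (our illustration): a strategy that walks a single window of pebbles down the line achieves time $N$ with $O(1)$ space, so both the space-time and cumulative costs are $\mathcal{O}(N)$; the paper's lower bound shows the reversibility constraint rules out any comparably cheap strategy.

    # Minimal sketch (illustration only): classical black pebbling of a line
    # graph on N nodes. Pebbling rule: node v may be pebbled if v = 0 or its
    # predecessor v-1 carries a pebble; pebbles may be removed at any time.
    N = 10
    pebbles, cumulative, max_space = set(), 0, 0
    for v in range(N):
        assert v == 0 or (v - 1) in pebbles           # legality of the move
        pebbles.add(v)
        max_space = max(max_space, len(pebbles))      # at most 2 pebbles held
        pebbles.discard(v - 1)   # classical rule: free removal to save space
        cumulative += len(pebbles)
    print(f"time {N}, max space {max_space}, cumulative cost {cumulative}")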
Last updated:  2024-02-26
Practical Attack on All Parameters of the DME Signature Scheme
Pierre Briaud, Maxime Bros, Ray Perlner, and Daniel Smith-Tone
DME is a multivariate scheme submitted to the call for additional signatures recently launched by NIST. Its performance is one of the best among all the candidates. The public key is constructed from the alternation of very structured linear and non-linear components that constitute the private key, the latter being defined over an extension field. We exploit these structures by proposing an algebraic attack which is practical on all DME parameters.
Last updated:  2024-02-26
IS-CUBE: An isogeny-based compact KEM using a boxed SIDH diagram
Tomoki Moriya
Isogeny-based cryptography is one of the candidates for post-quantum cryptography. One of the benefits of using isogeny-based cryptography is its compactness. In particular, the key exchange scheme SIDH allowed us to use a $4\lambda$-bit prime for the security parameter $\lambda$. Unfortunately, SIDH was broken in 2022 by a series of attacks. Since then, several isogeny-based key exchange and public key encryption schemes have been proposed; however, most of these schemes use primes whose sizes are not guaranteed to be linear in the security parameter $\lambda$. As far as we know, the remaining schemes either have not been implemented, due to the computation of isogenies of high-dimensional abelian varieties, or need to use a ``weak'' curve (\textit{i.e.}, a curve whose endomorphism ring is known) as the starting curve. In this study, we propose a novel compact isogeny-based key encapsulation mechanism named IS-CUBE via Kani's theorem and a $3$-dimensional SIDH diagram. A prime used in IS-CUBE is of size about $8\lambda$ bits, and we can use a random supersingular elliptic curve for the starting curve. By theoretical estimation, the public key of IS-CUBE is about $3$ times larger than that of SIKE, and its ciphertext is about $4$ times larger. In practice, compared to FESTA, the public key of IS-CUBE is slightly larger and its ciphertext is slightly smaller. The core idea of IS-CUBE comes from the hardness of some already known computational problems and a novel computational problem, the Long Isogeny with Torsion (LIT) problem: the problem of computing a hidden isogeny from two given supersingular elliptic curves and information about torsion points of relatively small order.
Last updated:  2024-02-26
Leakage-Tolerant Circuits
Yuval Ishai and Yifan Song
A leakage-resilient circuit for $f:\{0,1\}^n\to\{0,1\}^m$ is a randomized Boolean circuit $C$ mapping a randomized encoding of an input $x$ to an encoding of $y=f(x)$, such that applying any leakage function $L\in \cal L$ to the wires of $C$ reveals essentially nothing about $x$. A leakage-tolerant circuit achieves the stronger guarantee that even when $x$ and $y$ are not protected by any encoding, the output of $L$ can be simulated by applying some $L'\in \cal L$ to $x$ and $y$ alone. Thus, $C$ is as secure as an ideal hardware implementation of $f$ with respect to leakage from $\cal L$. Leakage-resilient circuits were constructed for low-complexity classes $\cal L$, including (length-$t$ output) $\mathcal{AC}^0$ functions, parities, and functions with bounded communication complexity. In contrast, leakage-tolerant circuits were only known for the simple case of probing leakage, where $L$ outputs the values of $t$ wires in $C$. We initiate a systematic study of leakage-tolerant circuits for natural classes $\cal L$ of global leakage functions, obtaining the following main results. Leakage-tolerant circuits for depth-1 leakage. Every circuit $C_f$ for $f$ can be efficiently compiled into an $\cal L$-tolerant circuit $C$ for $f$, where $\cal L$ includes all leakage functions $L$ that output either $t$ parities or $t$ disjunctions (alternatively, conjunctions) of any number of wires or their negations. In the case of parities, our simulator runs in $2^{O(t)}$ time. We provide partial evidence that this may be inherent. Application to stateful leakage-resilient circuits. Using a general transformation from leakage-tolerant circuits, we obtain the first construction of stateful $t$-leakage-resilient circuits that tolerate a continuous parity leakage, and the first such construction for disjunction/conjunction leakage in which the circuit size grows sub-quadratically with $t$. Interestingly, here we can obtain $\mathtt{poly}(t)$-time simulation even in the case of parities.
Last updated:  2024-02-26
Transaction Fee Mechanism Design in a Post-MEV World
Maryam Bahrani, Pranav Garimidi, and Tim Roughgarden
The incentive-compatibility properties of blockchain transaction fee mechanisms have been investigated with passive block producers that are motivated purely by the net rewards earned at the consensus layer. This paper introduces a model of active block producers that have their own private valuations for blocks (representing, for example, additional value derived from the application layer). The block producer surplus in our model can be interpreted as one of the more common colloquial meanings of the phrase ``maximal extractable value (MEV).'' We first prove that transaction fee mechanism design is fundamentally more difficult with active block producers than with passive ones: with active block producers, no non-trivial or approximately welfare-maximizing transaction fee mechanism can be incentive-compatible for both users and block producers. These impossibility results can be interpreted as a mathematical justification for augmenting transaction fee mechanisms with additional components such as orderflow auctions, block producer competition, trusted hardware, or cryptographic techniques. We then proceed to a more fine-grained model of block production that is inspired by current practice, in which we distinguish the roles of ``searchers'' (who actively identify opportunities for value extraction from the application layer and compete for the right to take advantage of them) and ``proposers'' (who participate directly in the blockchain protocol and make the final choice of the published block). Searchers can effectively act as an ``MEV oracle'' for a transaction fee mechanism, thereby enlarging the design space. Here, we first consider a transaction fee mechanism that resembles how searchers have traditionally been incorporated into the block production process, with each transaction effectively sold off to a searcher through a first-price auction. We then explore the design space with searchers more generally, and design a mechanism that circumvents our impossibility results for mechanisms without searchers. Our mechanism (the ``SAKA'' mechanism) is deterministic, incentive-compatible (for users, searchers, and the block producer), and sybil-proof, and it guarantees roughly 50% of the maximum-possible welfare when transaction sizes are small relative to block sizes. We conclude with a matching negative result: even when transactions are small relative to blocks, no incentive-compatible, sybil-proof, and deterministic transaction fee mechanism can guarantee more than 50% of the maximum-possible welfare.
Last updated:  2024-02-26
Deep Learning Based Analysis of Key Scheduling Algorithm of Advanced Ciphers
Narendra Kumar Patel and Hemraj Shobharam Lamkuche
The advancements in information technology have made the Advanced Encryption Standard (AES) and the PRESENT cipher indispensable for ensuring data security and facilitating private transactions. AES is renowned for its flexibility and widespread use in various fields, while the PRESENT cipher excels in lightweight cryptographic settings. This paper presents a dual examination of the Key Scheduling Algorithms (KSAs) of AES and the PRESENT cipher, which play a crucial role in generating the round keys for their respective encryption procedures. Using deep learning methods, in particular a neural network model, our study aims to unravel the complexities of these KSAs and shed light on their inner workings.
Last updated:  2024-02-26
Fuzzy Private Set Intersection with Large Hyperballs
Aron van Baarsen and Sihang Pu
Traditional private set intersection (PSI) involves a receiver and a sender holding sets $X$ and $Y$, respectively, with the receiver learning only the intersection $X\cap Y$. We turn our attention to its fuzzy variant, where the receiver holds \(|X|\) hyperballs of radius \(\delta\) in a metric space and the sender has $|Y|$ points. Representing the hyperballs by their centers, the receiver learns the points $x\in X$ for which there exists $y\in Y$ such that $\mathsf{dist}(x,y)\leq \delta$ with respect to some distance metric. Previous approaches either require general-purpose multi-party computation (MPC) techniques like garbled circuits or fully homomorphic encryption (FHE), leak details about the sender’s precise inputs, support only limited distance metrics, or scale poorly with the hyperballs' volume. This work presents the first black-box construction for fuzzy PSI (including other variants such as PSI cardinality, labeled PSI, and circuit PSI) that can handle a polynomially large radius and dimension (i.e., a potentially exponentially large volume) in two interaction messages, supporting the general \(L_{p\in[1,\infty]}\) distance, without relying on garbled circuits or FHE. The protocol excels in both asymptotic and concrete efficiency compared to existing works. For security, we rely solely on the assumption that the Decisional Diffie-Hellman (DDH) assumption holds in the random oracle model.
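As a reference point for the functionality (not the protocol), the receiver's ideal output can be specified in a few lines; the sketch below (our illustration, with toy inputs and the $L_\infty$ metric) computes it in the clear.

    # Minimal sketch (illustration only): the ideal functionality of fuzzy PSI
    # under the L_infinity metric, computed in the clear. The protocol's goal
    # is to let the receiver learn exactly this output and nothing more.
    def fuzzy_psi_linf(centers, points, delta):
        """centers: receiver's hyperball centers; points: sender's points."""
        return [x for x in centers
                if any(max(abs(a - b) for a, b in zip(x, y)) <= delta
                       for y in points)]

    X = [(0, 0), (10, 10)]   # receiver: hyperballs of radius delta around these
    Y = [(1, -2), (50, 50)]  # sender's points
    print(fuzzy_psi_linf(X, Y, delta=3))  # -> [(0, 0)]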
Last updated:  2024-02-26
Practical Two-party Computational Differential Privacy with Active Security
Fredrik Meisingseth, Christian Rechberger, and Fabian Schmid
In this work we revisit the problem of using general-purpose MPC schemes to emulate the trusted dataholder in central differential privacy, achieving the same accuracy but without the need to trust one single dataholder. In particular, we consider the two-party model with two computational parties (or dataholders), each with their own dataset, wishing to compute a canonical differentially private mechanism on their combined data and to do so with active security. We start by remarking that the available definitions of computational DP (CDP) for protocols are somewhat ill-suited to such a use case, since they rely on formalisms that are either much weaker than what one can typically obtain for MPC protocols, or so strict that they require significant adjustment to be realisable with common DP and MPC techniques. With this in mind, we propose a new version of simulation-based CDP, called SIM$^*$-CDP, specifically geared towards being easy to use for MPC practitioners and towards capturing more closely the guarantees granted by using state-of-the-art MPC schemes to compute standard DP mechanisms. We demonstrate the merit of the SIM$^*$-CDP definition by showing how to satisfy it using an available distributed protocol for sampling truncated geometric noise. Further, we use the protocol to compute two-party inner products with computational DP and with accuracy similar to that of the central model, being the first to do so. Finally, we provide an open-source implementation and benchmark its practical performance.
Last updated:  2024-02-26
Twinkle: Threshold Signatures from DDH with Full Adaptive Security
Renas Bacho, Julian Loss, Stefano Tessaro, Benedikt Wagner, and Chenzhi Zhu
Sparkle is the first threshold signature scheme in the pairing-free discrete logarithm setting (Crites, Komlo, Maller, Crypto 2023) to be proven secure under adaptive corruptions. However, without using the algebraic group model, Sparkle's proof imposes an undesirable restriction on the adversary. Namely, for a signing threshold $t<n$, the adversary is restricted to corrupt at most $t/2$ parties. In addition, Sparkle's proof relies on a strong one-more assumption. In this work, we propose Twinkle, a new threshold signature scheme in the pairing-free setting which overcomes these limitations. Twinkle is the first pairing-free scheme to have a security proof under up to $t$ adaptive corruptions without relying on the algebraic group model. It is also the first such scheme with a security proof under adaptive corruptions from a well-studied non-interactive assumption, namely, the Decisional Diffie-Hellman (DDH) assumption. We achieve our result in two steps. First, we design a generic scheme based on a linear function that satisfies several abstract properties and prove its adaptive security under a suitable one-more assumption related to this function. In the context of this proof, we also identify a gap in the security proof of Sparkle and develop new techniques to overcome this issue. Second, we give a suitable instantiation of the function for which the corresponding one-more assumption follows from DDH.
Last updated:  2024-02-26
The Complexity of Algebraic Algorithms for LWE
Matthias Johann Steiner
Arora & Ge introduced a noise-free polynomial system to compute the secret of a Learning With Errors (LWE) instance via linearization. Albrecht et al. later utilized the Arora-Ge polynomial model to study the complexity of Gröbner basis computations on LWE polynomial systems under the assumption of semi-regularity. In this paper we revisit the Arora-Ge polynomial model and prove that it satisfies a genericity condition recently introduced by Caminata & Gorla, called being in generic coordinates. For polynomial systems in generic coordinates one can always estimate the complexity of DRL Gröbner basis computations in terms of the Castelnuovo-Mumford regularity and hence also via the Macaulay bound. Moreover, we generalize the Gröbner basis algorithm of Semaev & Tenti to arbitrary polynomial systems with a finite degree of regularity. In particular, the existence of this algorithm yields another approach to estimating the complexity of DRL Gröbner basis computations in terms of the degree of regularity. In practice, the degree of regularity of LWE polynomial systems is not known, though one can always estimate the lowest achievable degree of regularity. Consequently, from a designer's worst-case perspective this approach yields sub-exponential complexity estimates for general, binary-secret and binary-error LWE. In recent works, Dachman-Soled et al. analyzed the hardness of LWE in the presence of side information. Utilizing their framework, we discuss how hints can be incorporated into LWE polynomial systems and how they affect the complexity of Gröbner basis computations.
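For readers unfamiliar with the construction, the Arora-Ge system arises as follows (a standard fact, recalled here for context): if each LWE sample $(\mathbf{a}_i, b_i)$ satisfies $b_i = \langle \mathbf{a}_i, \mathbf{s} \rangle + e_i \bmod q$ with error $e_i \in \{-B, \ldots, B\}$, then the secret $\mathbf{s}$ is a common root of the noise-free polynomials $P_i(\mathbf{X}) = \prod_{j=-B}^{B} \big( b_i - \langle \mathbf{a}_i, \mathbf{X} \rangle - j \big) \equiv 0 \pmod{q}$, one equation of degree $2B+1$ per sample, to which linearization or Gröbner basis methods can then be applied.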
Last updated:  2024-02-26
How to Validate a Verification?
Houda Ferradi
This paper introduces \textsl{signature validation}, a primitive allowing any \underline{t}hird party $T$ (\underline{T}héodore) to verify that a \underline{v}erifier $V$ (\underline{V}adim) computationally verified a signature $\sigma_s$ on a message $m$ issued by a \underline{s}igner $S$ (\underline{S}arah). A naive solution consists in Sarah sending $x=\{m,\sigma_s\}$, where $\sigma_s$ is Sarah's signature on $m$, and having Vadim confirm reception with a signature $\sigma_v$ on $x$. Unfortunately, this only attests \textsl{proper reception} by Vadim, i.e. that Vadim \textsl{could have checked} $x$, and not that Vadim \textsl{actually verified} $x$. By ``actually verifying'' we mean providing a proof or a convincing argument that a program running on Vadim's machine checked the correctness of $x$. This paper proposes several solutions for doing so, thereby providing a useful building block for numerous commercial and legal interactions where informed consent must be proven.
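To fix ideas, the naive solution is easy to write down; the sketch below (our illustration, using Ed25519 from the Python `cryptography` package) makes the gap explicit: nothing forces the verification call to run before Vadim signs his acknowledgment.

    # Minimal sketch (illustration only) of the naive solution: Sarah signs m,
    # and Vadim acknowledges x = (m, sigma_s) with his own signature sigma_v.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    sarah = Ed25519PrivateKey.generate()
    vadim = Ed25519PrivateKey.generate()

    m = b"contract terms"
    sigma_s = sarah.sign(m)  # Sarah's signature on m
    x = m + sigma_s

    sarah.public_key().verify(sigma_s, m)  # Vadim *may* verify (raises if invalid)
    sigma_v = vadim.sign(x)                # ...but sigma_v proves only reception:
    # deleting the verify() call above would produce the very same sigma_v,
    # which is exactly the gap that signature validation is meant to close.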
Last updated:  2024-02-26
Toothpicks: More Efficient Fork-Free Two-Round Multi-Signatures
Jiaxin Pan and Benedikt Wagner
Tightly secure cryptographic schemes can be implemented with standardized parameters, while still having a sufficiently high security level backed up by their analysis. In a recent work, Pan and Wagner (Eurocrypt 2023) presented the first tightly secure two-round multi-signature scheme without pairings, called Chopsticks. While this is an interesting first theoretical step, Chopsticks is much less efficient than its non-tight counterparts. In this work, we close this gap by proposing a new tightly secure two-round multi-signature scheme that is as efficient as non-tight schemes. Our scheme is based on the DDH assumption without pairings. Compared to Chopsticks, we reduce the signature size by more than a factor of 3 and the communication complexity by more than a factor of 2. Technically, we achieve this as follows: (1) We develop a new pseudorandom path technique, as opposed to the pseudorandom matching technique in Chopsticks. (2) We construct a more efficient commitment scheme with suitable properties, which is an important primitive in both our scheme and Chopsticks. Surprisingly, we observe that the commitment scheme does not have to be binding, enabling our efficient construction.
Last updated:  2024-02-26
Attribute-Based Signatures with Advanced Delegation, and Tracing
Cécile Delerablée, Lénaïck Gouriou, and David Pointcheval
Attribute-based cryptography allows fine-grained control over the use of private keys. In particular, an attribute-based signature (ABS) specifies the capabilities of the signer, who can only sign messages associated with a policy authorized by his set of attributes. Furthermore, we can expect signatures not to leak any information about the identity of the signer. ABS is a useful tool for identity-preserving authentication processes that require granular access control, and it can be enhanced with additional properties, for example delegation, where users are able to manage a set of keys derived from their original one. In this paper, we address the delegation of signing keys. Our first delegation mechanism works for any subset of the original attributes, which is the intuitive approach to delegation. Furthermore, we provide another kind of delegation where the delegator can choose a policy at delegation time to produce keys that can sign any message under this specific policy. This latter approach to delegation is a direct application of a new version of the indexing technique, which was first introduced by Okamoto and Takashima in order to prove adaptive security in ABS and its counterpart for encryption, ABE. On top of that, we prove that our scheme is compatible with traceability, a well-studied feature of ABS, using an approach based on linearly homomorphic signatures. All our schemes also guarantee the anonymity of the real signer. The unforgeability of our schemes is proven under the SXDH assumption, and our constructions use the Dual Pairing Vector Spaces (DPVS) framework developed by Okamoto and Takashima, which has been widely used for all kinds of attribute-based and functional cryptography mechanisms.
Last updated:  2024-02-26
Registered Functional Encryptions from Pairings
Ziqi Zhu, Jiangtao Li, Kai Zhang, Junqing Gong, and Haifeng Qian
This work initiates the study of concrete registered functional encryption (Reg-FE) beyond ``all-or-nothing'' functionalities: - We build the first Reg-FE for linear functions, i.e., inner-product evaluation (Reg-IPFE), from pairings. The scheme achieves adaptive IND-security under the $k$-Lin assumption in the prime-order bilinear group. A minor modification yields the first Registered Inner-Product Encryption (Reg-IPE) scheme from the $k$-Lin assumption. Prior work achieves the same security in the generic group model. - We build the first Reg-FE for quadratic functions (Reg-QFE) from pairings. The scheme achieves very selective simulation-based security (SIM-security) under the bilateral $k$-Lin assumption in the prime-order bilinear group. Here, ``very selective'' means that the adversary claims the challenge messages, all quadratic functions to be registered, and all corrupted users at the beginning. Besides focusing on the compactness of the master public key and helper keys, we also aim for compact ciphertexts in Reg-FE. Let $L$ be the number of slots and $n$ be the input size. Our first Reg-IPFE has weakly compact ciphertexts of size $O(n\cdot\log L)$ while our second Reg-QFE has compact ciphertexts of size $O(n+\log L)$. Technically, for our first Reg-IPFE, we employ the nested dual-system method within the context of Reg-IPFE; for our second Reg-QFE, we follow Wee's ``IPFE-to-QFE'' transformation [TCC' 20] but devise a set of new techniques that make our pairing-based Reg-IPFE compatible. Along the way, we introduce a new notion named Pre-Constrained Registered IPFE, which generalizes slotted Reg-IPFE by constraining the form of functions that can be registered.
Last updated:  2024-02-26
Authenticated private information retrieval
Simone Colombo, Kirill Nikitin, Henry Corrigan-Gibbs, David J. Wu, and Bryan Ford
This paper introduces protocols for authenticated private information retrieval. These schemes enable a client to fetch a record from a remote database server such that (a) the server does not learn which record the client reads, and (b) the client either obtains the "authentic" record or detects server misbehavior and safely aborts. Both properties are crucial for many applications. Standard private-information-retrieval schemes either do not ensure this form of output authenticity, or they require multiple database replicas with an honest majority. In contrast, we offer multi-server schemes that protect security as long as at least one server is honest. Moreover, if the client can obtain a short digest of the database out of band, then our schemes require only a single server. Performing an authenticated private PGP-public-key lookup on an OpenPGP key server's database of 3.5 million keys (3 GiB), using two non-colluding servers, takes under 1.2 core-seconds of computation, essentially matching the time taken by unauthenticated private information retrieval. Our authenticated single-server schemes are 30-100$\times$ more costly than state-of-the-art unauthenticated single-server schemes, though they achieve incomparably stronger integrity properties.
Last updated:  2024-02-26
Practical Constructions for Single Input Functionality against a Dishonest Majority
Zhelei Zhou, Bingsheng Zhang, Hong-Sheng Zhou, and Kui Ren
Single Input Functionality (SIF) is a special case of MPC in which only one distinguished party, called the dealer, holds a secret input. SIF allows the dealer to complete a computation task and send the other parties their respective outputs without revealing any additional information about its secret input. SIF has many applications, including multiple-verifier zero-knowledge and verifiable relation sharing. Recently, several works have been devoted to round-efficient realizations of SIF, achieving 2-round communication in the honest-majority setting (Applebaum et al., Crypto 2022; Baum et al., CCS 2022; Yang and Wang, Asiacrypt 2022). In this work, we focus on concrete efficiency and propose \emph{the first} practical construction for SIF against \emph{a dishonest majority} in the preprocessing model; moreover, the online phase of our protocol takes only 2 rounds and is highly efficient, as it requires no cryptographic operations and achieves information-theoretic security. For SIF among 5 parties, our scheme takes 152.34ms (total) to evaluate an AES-128 circuit, with 7.36ms online time. Compared to the state-of-the-art (honest-majority) solution (Baum et al., CCS 2022), our protocol is roughly 2$\times$ faster in the online phase, although more preprocessing time is needed. Compared to state-of-the-art generic MPC against a dishonest majority (Wang et al., CCS 2017; Cramer et al., Crypto 2018), our protocol outperforms them in both total running time and online running time.