Papers updated in last 365 days (2332 results)

Last updated:  2022-08-16
The Generalized Montgomery Coordinate: A New Computational Tool for Isogeny-based Cryptography
Tomoki Moriya, Hiroshi Onuki, Yusuke Aikawa, and Tsuyoshi Takagi
Recently, some studies have constructed one-coordinate arithmetics on elliptic curves. For example, formulas for the $x$-coordinate of Montgomery curves, the $x$-coordinate of Montgomery$^-$ curves, the $w$-coordinate of Edwards curves, the $w$-coordinate of Huff's curves, and the $\omega$-coordinate of twisted Jacobi intersections have been proposed. These formulas are useful for isogeny-based cryptography because of their compactness and efficiency. In this paper, we define a novel function on elliptic curves called the generalized Montgomery coordinate that has the five coordinates described above as special cases. For a generalized Montgomery coordinate, we construct an explicit formula for scalar multiplication that includes the division polynomial, as well as a formula for the image of a point under an isogeny and for a coefficient of the codomain curve. Finally, we present two applications of the theory of the generalized Montgomery coordinate. The first is the construction of a new efficient formula to compute isogenies on Montgomery curves. In our implementation, this formula is more efficient than previous ones, such as the $\sqrt{\vphantom{2}}$élu formula, for high-degree isogenies. The second is the construction of a new generalized Montgomery coordinate for the Montgomery$^-$ curves used in CSURF.
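As background for the one-coordinate arithmetic discussed above, here is a minimal sketch (our own illustration, not the paper's generalized coordinate) of classical $x$-only arithmetic on a Montgomery curve $By^2 = x^3 + Ax^2 + x$: projective doubling, differential addition, and the Montgomery ladder. The parameters in the example are Curve25519's; the code is not constant-time and the function names are ours.

    # x-only Montgomery ladder sketch (illustration only, not constant-time)
    def xDBL(X1, Z1, A24, p):
        t0 = (X1 + Z1) * (X1 + Z1) % p
        t1 = (X1 - Z1) * (X1 - Z1) % p
        t2 = (t0 - t1) % p                    # equals 4*X1*Z1
        return t0 * t1 % p, t2 * (t1 + A24 * t2) % p

    def xADD(XP, ZP, XQ, ZQ, Xm, Zm, p):
        # differential addition: requires x(P - Q) = (Xm : Zm)
        t0 = (XP + ZP) * (XQ - ZQ) % p
        t1 = (XP - ZP) * (XQ + ZQ) % p
        return Zm * pow(t0 + t1, 2, p) % p, Xm * pow(t0 - t1, 2, p) % p

    def ladder(k, xP, A, p):
        # returns the affine x-coordinate of [k]P for k >= 1 (fails if [k]P is the identity)
        A24 = (A + 2) * pow(4, -1, p) % p
        R0, R1 = (1, 0), (xP, 1)              # (point at infinity, P)
        for bit in bin(k)[2:]:
            if bit == '0':
                R1 = xADD(*R0, *R1, xP, 1, p)
                R0 = xDBL(*R0, A24, p)
            else:
                R0 = xADD(*R0, *R1, xP, 1, p)
                R1 = xDBL(*R1, A24, p)
        X, Z = R0
        return X * pow(Z, -1, p) % p

    p = 2**255 - 19                           # Curve25519: A = 486662, base point x = 9
    print(hex(ladder(7, 9, 486662, p)))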
Last updated:  2022-08-16
A Novel High-performance Implementation of CRYSTALS-Kyber with AI Accelerator
Lipeng Wan, Fangyu Zheng, Guang Fan, Rong Wei, Lili Gao, Jiankuo Dong, Jingqiang Lin, and Yuewu Wang
Public-key cryptography, including conventional cryptosystems and post-quantum cryptography, involves computation-intensive workloads. Noticing the extraordinary computing power of AI accelerators, in this paper we further explore the feasibility of introducing AI accelerators into high-performance cryptographic computing. Since AI accelerators are dedicated to machine learning or neural networks, the biggest challenge is how to transform cryptographic workloads into their operations, while ensuring the correctness of the results and bringing convincing performance gains. After investigating and analysing the workload of NVIDIA's AI accelerator, the Tensor Core, we choose to utilize it to accelerate polynomial multiplication, usually the most time-consuming part of lattice-based cryptography. We take measures to accommodate the matrix-multiply-and-add mode of the Tensor Core and make a trade-off between precision and performance, leveraging it as a high-performance NTT box that performs the NTT/INTT through CUDA C++ WMMA APIs. Meanwhile, we take CRYSTALS-Kyber, the candidate to be standardized by NIST, as a case study on an RTX 3080 with the Ampere Tensor Core. The empirical results show that the customized NTT of a polynomial vector ($n=256,k=4$) with our NTT box achieves a speedup of around 6.47x over the state-of-the-art implementation on the same GPU platform. Compared with the AVX2 implementation submitted to NIST, our Kyber-1024 achieves speedups of 26x, 36x, and 35x for each phase.
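To make the NTT-as-matrix-multiplication idea concrete, here is a toy round-trip check (our illustration, not the paper's Tensor Core kernel): a plain cyclic 256-point NTT modulo Kyber's $q = 3329$, written as a matrix-vector product, which is the kind of operation a matrix-multiply-and-add unit natively computes. It uses the fact that $\zeta = 17$ has multiplicative order 256 modulo 3329; Kyber's real NTT is negacyclic and incomplete, which this sketch deliberately ignores.

    import numpy as np

    q, n, zeta = 3329, 256, 17                # zeta has multiplicative order 256 mod q
    V    = np.array([[pow(zeta,  i * j % n, q) for j in range(n)] for i in range(n)], dtype=np.int64)
    Vinv = np.array([[pow(zeta, -i * j % n, q) for j in range(n)] for i in range(n)], dtype=np.int64)

    a  = np.random.randint(0, q, n).astype(np.int64)
    A  = V @ a % q                            # forward NTT as one matrix-vector product
    ar = (Vinv @ A) * pow(n, -1, q) % q       # inverse NTT recovers the input
    assert np.array_equal(a, ar)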
Last updated:  2022-08-15
Universally Composable End-to-End Secure Messaging
Ran Canetti, Palak Jain, Marika Swanberg, and Mayank Varia
We provide a full-fledged security analysis of the Signal end-to-end messaging protocol within the UC framework. In particular: (1) We formulate an ideal functionality that captures end-to-end secure messaging, in a setting with PKI and an untrusted server, against an adversary that has full control over the network and can adaptively and momentarily compromise parties at any time and obtain their entire internal states. In particular our analysis captures the forward and backwards secrecy properties of Signal and the conditions under which they break. (2) We model the various components of Signal (PKI and long-term keys, backbone "asymmetric ratchet", epoch-level symmetric ratchets, authenticated encryption) as individual ideal functionalities that are analysed separately and then composed using the UC and Global-State UC theorems. (3) We use the Random Oracle Model to model non-committing encryption for arbitrary-length messages, but the rest of the analysis is in the plain model based on standard primitives. In particular, we show how to realize Signal's key derivation functions in the standard model, from generic components, and under minimalistic cryptographic assumptions. Our analysis improves on previous ones in the guarantees it provides, in its relaxed security assumptions, and in its modularity. We also uncover some weaknesses of Signal that were not previously discussed. Our modeling differs from previous UC models of secure communication in that the protocol is modeled as a set of local algorithms, keeping the communication network completely out of scope. We also make extensive, layered use of global-state composition within the plain UC framework. These innovations may be of separate interest.
Last updated:  2022-08-15
Secure Joint Communication and Sensing
Onur Gunlu, Matthieu Bloch, Rafael F. Schaefer, and Aylin Yener
This work considers the problem of mitigating information leakage between communication and sensing in systems jointly performing both operations. Specifically, a discrete memoryless state-dependent broadcast channel model is studied in which (i) the presence of feedback enables a transmitter to convey information, while simultaneously performing channel state estimation; (ii) one of the receivers is treated as an eavesdropper whose state should be estimated but which should remain oblivious to part of the transmitted information. The model abstracts the challenges behind security for joint communication and sensing if one views the channel state as a sensitive attribute, e.g., location. For independent and identically distributed states, perfect output feedback, and when part of the transmitted message should be kept secret, a partial characterization of the secrecy-distortion region is developed. The characterization is exact when the broadcast channel is either physically-degraded or reversely-physically-degraded. The partial characterization is also extended to the situation in which the entire transmitted message should be kept secret. The benefits of a joint approach compared to separation-based secure communication and state-sensing methods are illustrated with a binary joint communication and sensing model.
Last updated:  2022-08-15
How to Backdoor (Classic) McEliece and How to Guard Against Backdoors
Tobias Hemmert, Alexander May, Johannes Mittmann, and Carl Richard Theodor Schneider
We show how to backdoor the McEliece cryptosystem such that a backdoored public key is indistinguishable from a usual public key, but allows one to efficiently retrieve the underlying secret key. For good cryptographic reasons, McEliece uses a small random seed 𝛅 from which the randomness that determines the secret key is generated via some pseudo-random generator (PRG). Our backdoor mechanism works by encoding an encryption of 𝛅 into the public key. Retrieving 𝛅 then allows one to efficiently recover the (backdoored) secret key. Interestingly, McEliece can itself be used to encrypt 𝛅, thereby protecting our backdoor mechanism with strong post-quantum security guarantees. Our construction also works for the current Classic McEliece NIST standard proposal for non-compressed secret keys, and therefore opens the door for widespread maliciously backdoored implementations. Fortunately, our backdoor mechanism can be detected by the owner of the (backdoored) secret key if 𝛅 is stored after key generation, as specified by the Classic McEliece proposal. Thus, our results provide strong advice for implementers to store 𝛅 inside the secret key and to use 𝛅 to guard against backdoor mechanisms.
Last updated:  2022-08-14
Quantum Implementation and Analysis of DEFAULT
Kyungbae Jang, Anubhab Baksi, Jakub Breier, Hwajeong Seo, and Anupam Chattopadhyay
In this paper, we present the quantum implementation and analysis of the recently proposed block cipher DEFAULT. DEFAULT consists of two components, namely DEFAULT-LAYER and DEFAULT-CORE. Two instances of DEFAULT-LAYER are used before and after DEFAULT-CORE (the so-called `sandwich construction'). We discuss the various choices made to keep the cost of the basic quantum circuit and of the Grover's oracle search low, and compare it with the levels of quantum security specified by the United States' National Institute of Standards and Technology (NIST). All in all, our work fits nicely into the research trend of finding the possible quantum vulnerability of symmetric-key ciphers.
Last updated:  2022-08-14
Lattice-Based Zero-Knowledge Proofs and Applications: Shorter, Simpler, and More General
Vadim Lyubashevsky, Ngoc Khanh Nguyen, and Maxime Plancon
We present a much-improved practical protocol, based on the hardness of the Module-SIS and Module-LWE problems, for proving knowledge of a short vector $s$ satisfying $As=t\bmod q$. The currently most efficient technique for constructing such a proof works by showing that the $\ell_\infty$ norm of $s$ is small. It creates a commitment to a polynomial vector $m$ whose CRT coefficients are the coefficients of $s$ and then shows that (1) $A\cdot \mathsf{CRT}(m)=t\bmod\,q$ and (2) in the case that we want to prove that the $\ell_\infty$ norm is at most $1$, the polynomial product $(m - 1)\cdot m\cdot(m+1)$ equals $0$. While these schemes are already quite good for practical applications, the requirement of using the CRT embedding, and the fact that the approach is only naturally adapted to proving the $\ell_\infty$-norm, somewhat hinder its efficiency. In this work, we show that there is a more direct and more efficient way to prove that the coefficients of $s$ have a small $\ell_2$ norm which does not require an equivocation with the $\ell_\infty$ norm, nor any conversion to the CRT representation. We observe that the inner product between two vectors $r$ and $s$ can be made to appear as a coefficient of a product (or sum of products) between polynomials which are functions of $r$ and $s$. Thus, by using a polynomial product proof system and hiding all but one coefficient, we are able to prove knowledge of the inner product of two vectors modulo $q$. Using a cheap, approximate range proof, one can then lift the proof to be over $\mathbb{Z}$ instead of $\mathbb{Z}_q$. Our protocols for proving short norms work over all (interesting) polynomial rings, but are particularly efficient for rings like $\mathbb{Z}[X]/(X^n+1)$ in which the function relating the inner product of vectors and polynomial products happens to be a ``nice'' automorphism. The new proof system can be plugged into constructions of various lattice-based privacy primitives in a black-box manner. As examples, we instantiate a verifiable encryption scheme and a group signature scheme which are more than twice as compact as the previously best solutions.
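The ``inner product as a coefficient of a polynomial product'' observation is easy to check numerically. The following is a small self-contained sanity check written by us (not the paper's proof system): over $R = \mathbb{Z}[X]/(X^n+1)$, applying the automorphism $\sigma: X \mapsto X^{-1} = -X^{n-1}$ to $r$ and multiplying by $s$ places $\langle r, s\rangle$ in the constant coefficient of the product.

    import random

    n = 8

    def mul_negacyclic(a, b):
        # product in Z[X]/(X^n + 1), coefficient lists of length n
        c = [0] * n
        for i in range(n):
            for j in range(n):
                if i + j < n:
                    c[i + j] += a[i] * b[j]
                else:
                    c[i + j - n] -= a[i] * b[j]   # reduction uses X^n = -1
        return c

    def sigma(a):
        # the automorphism X -> X^{-1}: X^i maps to -X^{n-i} for i > 0
        out = [a[0]] + [0] * (n - 1)
        for i in range(1, n):
            out[n - i] -= a[i]
        return out

    r = [random.randint(-5, 5) for _ in range(n)]
    s = [random.randint(-5, 5) for _ in range(n)]
    assert mul_negacyclic(sigma(r), s)[0] == sum(ri * si for ri, si in zip(r, s))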
Last updated:  2022-08-13
The More You Know: Improving Laser Fault Injection with Prior Knowledge
Marina Krček, Thomas Ordas, Daniele Fronte, and Stjepan Picek
We consider finding as many faults as possible on the target device in the laser fault injection security evaluation. Since the search space is large, we require efficient search methods. Recently, an evolutionary approach using a memetic algorithm was proposed and shown to find more interesting parameter combinations than random search, which is commonly used. Unfortunately, once a variation on the bench or target is introduced, the process must be repeated to find suitable parameter combinations anew. To negate the effect of variation, we propose a novel method combining a memetic algorithm with a machine learning approach called a decision tree. Our approach improves the memetic algorithm by using prior knowledge of the target introduced in the initial phase of the memetic algorithm. In our experiments, the decision tree rules enhance the performance of the memetic algorithm by finding more interesting faults in different samples of the same target. Our approach shows more than two orders of magnitude better performance than random search and up to 60% better performance than previous state-of-the-art results with a memetic algorithm. Another advantage of our approach is human-readable rules, allowing the first insights into the explainability of target characterization for laser fault injection.
Last updated:  2022-08-12
ppSAT: Towards Two-Party Private SAT Solving
Ning Luo, Samuel Judson, Timos Antonopoulos, Ruzica Piskac, and Xiao Wang
We design and implement a privacy-preserving Boolean satisfiability (ppSAT) solver, which allows mutually distrustful parties to evaluate the conjunction of their input formulas while maintaining privacy. We first define a family of security guarantees reconcilable with the (known) exponential complexity of SAT solving, and then construct an oblivious variant of the classic DPLL algorithm which can be integrated with existing secure two-party computation (2PC) techniques. We further observe that most known SAT solving heuristics are unsuitable for 2PC, as they are highly data-dependent in order to minimize the number of exploration steps. Faced with how best to trade off between the number of steps and the cost of obliviously executing each one, we design three efficient oblivious heuristics, one deterministic and two randomized. As a result of this effort we are able to evaluate our ppSAT solver on small but practical instances arising from the haplotype inference problem in bioinformatics. We conclude by looking towards future directions for making ppSAT solving more practical, most especially the integration of conflict-driven clause learning (CDCL).
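For readers who want the baseline in front of them, the following is a compact plaintext DPLL solver (unit propagation plus branching) written by us for illustration; it deliberately omits everything the paper is about, namely obliviousness, the 2PC execution, and the three oblivious heuristics.

    def dpll(clauses, assignment=None):
        # clauses: list of lists of non-zero ints (DIMACS-style literals)
        assignment = dict(assignment or {})

        def simplify(cls, lit):
            out = []
            for c in cls:
                if lit in c:
                    continue                  # clause satisfied, drop it
                reduced = [l for l in c if l != -lit]
                if not reduced:
                    return None               # empty clause: conflict
                out.append(reduced)
            return out

        changed = True                        # unit propagation
        while changed:
            changed = False
            for c in clauses:
                if len(c) == 1:
                    lit = c[0]
                    assignment[abs(lit)] = lit > 0
                    clauses = simplify(clauses, lit)
                    if clauses is None:
                        return None
                    changed = True
                    break

        if not clauses:
            return assignment                 # all clauses satisfied
        lit = clauses[0][0]                   # branch on the first remaining literal
        for choice in (lit, -lit):
            sub = simplify(clauses, choice)
            if sub is not None:
                result = dpll(sub, {**assignment, abs(choice): choice > 0})
                if result is not None:
                    return result
        return None

    print(dpll([[1, 2], [-1, 2], [-2, 3]]))   # prints a satisfying assignment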
Last updated:  2022-08-12
MR-DSS – Smaller MinRank-based (Ring-)Signatures
Emanuele Bellini, Andre Esser, Carlo Sanna, and Javier Verbel
In light of NIST's announced reopening of the call for digital signature proposals in 2023 due to lacking diversity, there is a strong need for constructions based on other established hardness assumptions. In this work we construct a new post-quantum secure digital signature scheme based on the MinRank problem, a problem with a long history of applications in cryptanalysis that has led to a strong belief in its hardness. Initially following a design by Courtois (Asiacrypt '01) based on the Fiat--Shamir transform, we make use of several recent developments in the design of sigma protocols to reduce the signature size and improve efficiency. This includes the recently introduced sigma protocol with helper paradigm (Eurocrypt '19) and combinations with cut-and-choose techniques (CCS '18). Moreover, we introduce several improvements to the core of the scheme to further reduce its signature size.
Last updated:  2022-08-12
Uncle Maker: (Time)Stamping Out The Competition in Ethereum
Aviv Yaish, Gilad Stern, and Aviv Zohar
We present an attack on Ethereum's consensus mechanism which can be used by miners to obtain consistently higher mining rewards compared to the honest protocol. This attack is novel in that it does not entail withholding blocks or any behavior which has a non-zero probability of earning less than mining honestly, in contrast with the existing literature. This riskless attack relies instead on manipulating block timestamps, and carefully choosing whether and when to do so. We present this attack as an algorithm, which we then analyze to evaluate the revenue a miner obtains from it, and its effect on a miner's absolute and relative share of the main-chain blocks. The attack allows an attacker to replace competitors' main-chain blocks after the fact with a block of its own, thus causing the replaced block's miner to lose all transaction fees for the transactions contained within the block, which will be demoted from the main chain. This block, although ``kicked out'' of the main chain, will still be eligible to be referred to by other main-chain blocks, thus becoming what is commonly called in Ethereum an uncle. We proceed by defining multiple variants of this attack, and assessing whether any of these attacks has been performed in the wild. Surprisingly, we find that this is indeed true, making this the first case of a confirmed consensus-level manipulation performed on a major cryptocurrency. Additionally, we implement a variant of this attack as a patch for geth, Ethereum's most popular client, making it the first consensus-level attack on Ethereum which is implemented as a patch. Finally, we suggest concrete fixes for Ethereum's protocol and implement them as a patch for geth which can be adopted quickly and mitigate the attack and its variants.
Last updated:  2022-08-12
Optimal encodings to elliptic curves of $j$-invariants $0$, $1728$
Dmitrii Koshelev
This article provides new constant-time encodings $\mathbb{F}_{\!q}^* \to E(\mathbb{F}_{\!q})$ to ordinary elliptic $\mathbb{F}_{\!q}$-curves $E$ of $j$-invariants $0$, $1728$ having a small prime divisor of the Frobenius trace. Therefore, all curves of $j = 1728$ are covered. The same is true for the Barreto--Naehrig curves BN512 and BN638 from the international cryptographic standards ISO/IEC 15946-5, TCG Algorithm Registry, and FIDO ECDAA Algorithm. Many $j = 1728$ curves, as well as BN512 and BN638, are not suitable for the most efficient prior encodings. So, in fact, only the universal SW (Shallue--van de Woestijne) encoding was previously applicable to them. However, this encoding (in contrast to the new ones) cannot be computed at the cost of one exponentiation in the field $\mathbb{F}_{\!q}$.
Last updated:  2022-08-12
Foundations of Coin Mixing Services
Noemi Glaeser, Matteo Maffei, Giulio Malavolta, Pedro Moreno-Sanchez, Erkan Tairi, and Sri AravindaKrishnan Thyagarajan
Coin mixing services allow users to mix their cryptocurrency coins and thus enable unlinkable payments in a way that prevents tracking of honest users' coins by both the service provider and the users themselves. The easy bootstrapping of new users and backwards compatibility with cryptocurrencies (such as Bitcoin) with limited support for scripts are attractive features of this architecture, which has recently gained considerable attention in both academia and industry. A recent work of Tairi et al. [IEEE S&P 2021] formalizes the notion of a coin mixing service and proposes A$^{2}$L, a new cryptographic protocol that simultaneously achieves high efficiency and interoperability. In this work, we identify a gap in their formal model and substantiate the issue by showing two concrete counterexamples: we show how to construct two encryption schemes that satisfy their definitions but lead to a completely insecure system. To amend this situation, we investigate secure constructions of coin mixing services. First, we develop the notion of blind conditional signatures (BCS), which acts as the cryptographic core for coin mixing services. We propose game-based security definitions for BCS and propose A$^{2}$L$^{+}$, a modified version of the protocol by Tairi et al. that satisfies our security definitions. Our analysis is in an idealized model (akin to the algebraic group model) and assumes the hardness of the one-more discrete logarithm problem. Finally, we propose A$^{2}$L$^\text{UC}$, another construction of BCS that achieves the stronger notion of UC-security (in the standard model), albeit with a significant increase in computation cost. This suggests that constructing a coin mixing service protocol secure under composition requires more complex cryptographic machinery than initially thought.
Last updated:  2022-08-11
A Fast, Practical and Simple Shortest Path Protocol for Multiparty Computation
Abdelrahaman Aly and Sara Cleemput
We present a simple and fast protocol to securely solve the (single-source) Shortest Path Problem, based on Dijkstra's algorithm over Secure Multiparty Computation. Our protocol improves the current state of the art by Aly et al. [FC 2013 & ICISC 2014] and can offer perfect security against both semi-honest and malicious adversaries. Furthermore, it is the first data-oblivious protocol to achieve quadratic complexity in the number of communication rounds. Moreover, our protocol can easily be adapted to form a subroutine in other combinatorial mechanisms. Our focus is usability; hence, we provide an open-source implementation and exhaustive benchmarking under different adversarial settings and player setups.
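To illustrate the flavor of a data-oblivious Dijkstra (our plaintext sketch, not the protocol itself): every round scans all vertices instead of using a priority queue, so the access pattern and the number of rounds do not depend on the secret edge weights; the real protocol performs comparable scans on secret-shared values.

    INF = 10**9

    def oblivious_style_dijkstra(weights, src):
        # weights: n x n matrix with weights[u][v] = INF when there is no edge
        n = len(weights)
        dist = [INF] * n
        dist[src] = 0
        done = [False] * n
        for _ in range(n):                        # always exactly n rounds
            u, best = 0, INF + 1                  # select the closest unfinished vertex by a full scan
            for v in range(n):
                better = (not done[v]) and dist[v] < best
                u, best = (v, dist[v]) if better else (u, best)
            done[u] = True
            for v in range(n):                    # relax every edge out of u, again a full scan
                cand = best + weights[u][v]
                dist[v] = cand if cand < dist[v] else dist[v]
        return dist

    w = [[INF, 2, 5, INF],
         [2, INF, 1, 4],
         [5, 1, INF, 1],
         [INF, 4, 1, INF]]
    print(oblivious_style_dijkstra(w, 0))         # [0, 2, 3, 4]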
Last updated:  2022-08-11
Breaking SIDH in polynomial time
Damien Robert
We show that we can break SIDH in polynomial time, even with a random starting curve E_0.
Last updated:  2022-08-11
On The Insider Security of MLS
Joël Alwen, Daniel Jost, and Marta Mularczyk
The Messaging Layer Security (MLS) protocol is an open standard for end-to-end (E2E) secure group messaging being developed by the IETF, poised for deployment to consumers, industry, and government. It is designed to provide E2E privacy and authenticity for messages in long-lived sessions whenever possible, despite the participation (at times) of malicious insiders that can adaptively interact with the PKI at will, actively deviate from the protocol, leak honest parties' states, and fully control the network. The core of the MLS protocol (from which it inherits essentially all of its efficiency and security properties) is a Continuous Group Key Agreement (CGKA) protocol. It provides asynchronous E2E group management by allowing group members to agree on a fresh independent symmetric key after every change to the group's state (e.g. when someone joins/leaves the group). In this work, we make progress towards a precise understanding of the insider security of MLS (Draft 12). On the theory side, we overcome several subtleties to formulate the first notion of insider security for CGKA (or group messaging). Next, we isolate the core components of MLS to obtain a CGKA protocol we dub Insider Secure TreeKEM (ITK). Finally, we give a rigorous security proof for ITK. In particular, this work also initiates the study of insider-secure CGKA and group messaging protocols. Along the way we give three new (very practical) attacks on MLS and corresponding fixes. (Those fixes have now been included in the standard.) We also describe a second attack against MLS-like CGKA protocols that were proven secure under all previously considered security notions (including those designed specifically to analyze MLS). These attacks highlight the pitfalls of simplifying security notions, even in the name of tractability.
Last updated:  2022-08-11
New Unbounded Verifiable Data Streaming for Batch Query with Almost Optimal Overhead
Jiaojiao Wu, Jianfeng Wang, Xinwei Yong, Xinyi Huang, and Xiaofeng Chen
Verifiable Data Streaming (VDS) enables a resource-limited client to continuously outsource data to an untrusted server in a sequential manner while supporting public integrity verification and efficient updates. However, most existing VDS schemes require the client to generate all proofs in advance and store them at the server, which places a heavy computation burden on the client. In addition, all previous VDS schemes can perform batch queries (i.e., retrieving multiple data entries at once), but incur communication cost linear in $l$, where $l$ is the number of queried data entries. In this paper, we first introduce a new cryptographic primitive named Double-trapdoor Chameleon Vector Commitment (DCVC), and then present an unbounded VDS scheme $\mathsf{VDS_1}$ with optimal communication cost in the random oracle model from an aggregatable cross-commitment variant of DCVC. Furthermore, we propose, to the best of our knowledge, the first unbounded VDS scheme $\mathsf{VDS}_2$ with optimal communication and storage overhead in the standard model, by integrating a Double-trapdoor Chameleon Hash Function (DCH) and a Key-Value Commitment (KVC). Both of our schemes enjoy a constant-size public key. Finally, we demonstrate the efficiency of our two VDS schemes with a comprehensive performance evaluation.
Last updated:  2022-08-11
The Limits of Provable Security Against Model Extraction
Ari Karchmer
Can we hope to provide provable security against model extraction attacks? As a step towards a theoretical study of this question, we unify and abstract a wide range of "observational" model extraction defense mechanisms -- roughly, those that attempt to detect model extraction using a statistical analysis conducted on the distribution over the adversary's queries. To accompany the abstract observational model extraction defense, which we call OMED for short, we define the notion of complete defenses -- the notion that benign clients can freely interact with the model -- and sound defenses -- the notion that adversarial clients are caught and prevented from reverse engineering the model. We then propose a system for obtaining provable security against model extraction by complete and sound OMEDs, using (average-case) hardness assumptions for PAC-learning. Our main result nullifies our proposal for provable security, by establishing a computational incompleteness theorem for the OMED: any efficient OMED for a machine learning model computable by a polynomial size decision tree that satisfies a basic form of completeness cannot satisfy soundness, unless the subexponential Learning Parity with Noise (LPN) assumption does not hold. To prove the incompleteness theorem, we introduce a class of model extraction attacks called natural Covert Learning attacks based on a connection to the Covert Learning model of Canetti and Karchmer (TCC '21), and show that such attacks circumvent any defense within our abstract mechanism in a black-box, nonadaptive way. Finally, we further expose the tension between Covert Learning and OMEDs by proving that Covert Learning algorithms require the nonexistence of provable security via efficient OMEDs. Therefore, we observe a "win-win" result by obtaining a characterization of the existence of provable security via efficient OMEDs by the nonexistence of natural Covert Learning algorithms.
Last updated:  2022-08-10
RPM: Robust Anonymity at Scale
Donghang Lu and Aniket Kate
This work presents RPM, a scalable anonymous communication protocol suite using secure multiparty computation (MPC) in the offline-online model. We generate random, unknown permutation matrices in a secret-shared fashion and achieve improved (online) performance and the lightest communication and computation overhead for the clients compared to state-of-the-art robust anonymous communication protocols. Using square-lattice shuffling, we make our protocol scale well as the number of clients increases. We provide three protocol variants, each targeting different input volumes and MPC frameworks/libraries. Besides, due to the modular design, our protocols can be easily generalized to support more MPC functionalities and security properties as they get developed. We also illustrate how to generalize our protocols to support two-way anonymous communication and secure sorting. We have implemented our protocols using the MP-SPDZ library suite, and the benchmarks illustrate that our protocols achieve unprecedented online-phase performance with practical offline phases.
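The basic object RPM manipulates can be illustrated in a few lines (our toy, with both additive shares held in the clear; the actual protocol generates such shares so that no single server learns the permutation):

    import numpy as np

    rng = np.random.default_rng(0)
    p, n = 65537, 5
    perm = rng.permutation(n)
    P = np.eye(n, dtype=np.int64)[perm]           # secret permutation matrix
    share0 = rng.integers(0, p, (n, n), dtype=np.int64)
    share1 = (P - share0) % p                     # share0 + share1 = P (mod p)

    msgs = rng.integers(0, p, n, dtype=np.int64)
    shuffled = (share0 @ msgs + share1 @ msgs) % p
    assert np.array_equal(shuffled, msgs[perm])   # applying the shared matrix shuffles the inputs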
Last updated:  2022-08-10
MuSig-L: Lattice-Based Multi-Signature With Single-Round Online Phase
Cecilia Boschini, Akira Takahashi, and Mehdi Tibouchi
Multi-signatures are protocols that allow a group of signers to jointly produce a single signature on the same message. In recent years, a number of practical multi-signature schemes have been proposed in the discrete-log setting, such as MuSigT (CRYPTO'21) and DWMS (CRYPTO'21). The main technical challenge in constructing a multi-signature scheme is to achieve a set of several desirable properties, such as (1) security in the plain public-key (PPK) model, (2) concurrent security, (3) low online round complexity, and (4) key aggregation. However, previous lattice-based, post-quantum counterparts to Schnorr multi-signatures fail to satisfy these properties. In this paper, we introduce MuSigL, a lattice-based multi-signature scheme simultaneously achieving these design goals for the first time. Unlike the recent, round-efficient proposal of Damgård et al. (PKC'21), which had to rely on lattice-based trapdoor commitments, we do not require any additional primitive in the protocol, while being able to prove security from the standard module-SIS and LWE assumptions. The resulting output signature of our scheme therefore looks closer to the usual Fiat--Shamir-with-abort signatures.
Last updated:  2022-08-10
Efficient Pseudorandom Correlation Generators from Ring-LPN
Elette Boyle, Geoffroy Couteau, Niv Gilboa, Yuval Ishai, Lisa Kohl, and Peter Scholl
Secure multiparty computation can often utilize a trusted source of correlated randomness to achieve better efficiency. A recent line of work, initiated by Boyle et al. (CCS 2018, Crypto 2019), showed how useful forms of correlated randomness can be generated using a cheap, one-time interaction, followed by only "silent" local computation. This is achieved via a pseudorandom correlation generator (PCG), a deterministic function that stretches short correlated seeds into long instances of a target correlation. Previous works constructed concretely efficient PCGs for simple but useful correlations, including random oblivious transfer and vector-OLE, together with efficient protocols to distribute the PCG seed generation. Most of these constructions were based on variants of the Learning Parity with Noise (LPN) assumption. PCGs for other useful correlations had poor asymptotic and concrete efficiency. In this work, we design a new class of efficient PCGs based on different flavors of the ring-LPN assumption. Our new PCGs can generate OLE correlations, authenticated multiplication triples, matrix product correlations, and other types of useful correlations over large fields. These PCGs are more efficient by orders of magnitude than the previous constructions and can be used to improve the preprocessing phase of many existing MPC protocols.
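As a reminder of what the target object looks like, here is a trusted-dealer sample of a single OLE correlation over a prime field (our toy illustration of the goal only; the point of a PCG is to replace this dealer by short correlated seeds that both parties expand locally, under ring-LPN):

    import random

    p = 2**61 - 1
    x, y = random.randrange(p), random.randrange(p)   # Alice's x, Bob's y
    z_a = random.randrange(p)
    z_b = (x * y - z_a) % p                           # the dealer splits x*y additively
    # Alice holds (x, z_a); Bob holds (y, z_b); together z_a + z_b = x * y (mod p)
    assert (z_a + z_b) % p == x * y % p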
Last updated:  2022-08-10
PUF-COTE: A PUF Construction with Challenge Obfuscation and Throughput Enhancement
Boyapally Harishma, Durba Chatterjee, Kuheli Pratihar, Sayandeep Saha, and Debdeep Mukhopadhyay
Physically Unclonable Functions (PUFs) have been a potent choice for enabling low-cost, secure communication. However, state-of-the-art strong PUFs generate single-bit responses. We therefore propose PUF-COTE: a high-throughput architecture based on a linear feedback shift register and a strong PUF as the ``base''-PUF. At the same time, we obfuscate the challenges to the ``base''-PUF of the final construction. We experimentally evaluate the quality of the construction by implementing it on Artix 7 FPGAs. We evaluate the statistical quality of the responses (using the NIST SP800-92 test suite and standard PUF metrics: uniformity, uniqueness, reliability, strict avalanche criterion, and ML-based modelling), which is a crucial factor for cryptographic applications.
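The throughput-enhancement idea can be sketched in software (a heavily simplified stand-in of our own: a real PUF is a physical circuit, here a keyed hash merely plays its role, and the LFSR taps are an arbitrary illustrative choice): a Galois LFSR expands one input challenge into a stream of derived challenges, each of which is fed to a single-bit-response ``base'' PUF.

    import hashlib

    def lfsr_stream(state, taps=0xB400):
        # 16-bit Galois LFSR over GF(2); expands one seed challenge into many
        while True:
            yield state
            lsb = state & 1
            state >>= 1
            if lsb:
                state ^= taps

    def toy_base_puf(challenge, device_secret=b"device-0"):
        # software stand-in for a strong PUF with single-bit responses
        h = hashlib.sha256(device_secret + challenge.to_bytes(2, "big")).digest()
        return h[0] & 1

    def multi_bit_response(challenge, n_bits=32):
        gen = lfsr_stream(challenge)
        return [toy_base_puf(next(gen)) for _ in range(n_bits)]

    print(multi_bit_response(0xACE1))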
Last updated:  2022-08-10
Arithmetization of Σ¹₁ relations in Halo 2
Morgan Thomas
Orbis Labs presents a method for compiling (“arithmetizing”) relations, expressed as Σ¹₁ formulas in the language of rings, into Halo 2 arithmetic circuits. This method offers the possibility of creating arithmetic circuits without laborious and error-prone manual circuit design and implementation, by instead expressing the relation to be arithmetized in a concise mathematical notation and generating the circuit based on that expression.
Last updated:  2022-08-10
Orbis Specification Language: a type theory for zk-SNARK programming
Morgan Thomas
Orbis Specification Language (OSL) is a language for writing statements to be proven by zk-SNARKs. zk-SNARK theories allow for proving wide classes of statements. They usually require the statement to be proven to be expressed as a constraint system, called an arithmetic circuit, which can take various forms depending on the theory. It is difficult to express complex statements in the form of arithmetic circuits. OSL is a language of statements which is similar to type theories used in proof engineering, such as Agda and Coq. OSL has a feature set which is sufficiently limited to make it feasible to compile a statement expressed in OSL to an arithmetic circuit which expresses the same statement. This work builds on Σ¹₁ arithmetization [5] in Halo 2 [3, 4], by defining a frontend for a user-friendly circuit compiler.
Last updated:  2022-08-10
Constructing and Deconstructing Intentional Weaknesses in Symmetric Ciphers
Christof Beierle, Tim Beyne, Patrick Felke, and Gregor Leander
Deliberately weakened ciphers are of great interest in the political discussion on law enforcement, as in the constantly recurring crypto wars, and have been put in the spotlight of academics by recent progress. A paper at Eurocrypt 2021 showed a strong indication that the security of the widely-deployed stream cipher GEA-1 was deliberately and secretly weakened to 40 bits in order to fulfill the European export restrictions that were in place in the late 1990s. However, no explanation of how this could have been constructed was given. On the other hand, we have seen the MALICIOUS design framework, published at CRYPTO 2020, which allows one to construct tweakable block ciphers with a backdoor, where the difficulty of recovering the backdoor relies on well-understood cryptographic assumptions. The constructed tweakable block cipher, however, is rather unusual and very different from, say, general-purpose ciphers like the AES. In this paper, we pick up both topics. For GEA-1 we thoroughly explain how the weakness was constructed, solving the main open question of the work mentioned above. By generalizing MALICIOUS we -- for the first time -- construct backdoored tweakable block ciphers that follow modern design principles for general-purpose block ciphers, i.e., more natural-looking deliberately weakened tweakable block ciphers.
Last updated:  2022-08-10
Finding All Impossible Differentials When Considering the DDT
Kai Hu, Thomas Peyrin, and Meiqin Wang
Impossible differential (ID) cryptanalysis is one of the most important attacks on block ciphers. The Mixed Integer Linear Programming (MILP) model is a popular method to determine whether a specific difference pair is an ID. Unfortunately, due to the huge search space (approximately $2^{2n}$ for a cipher with a block size $n$ bits), we cannot leverage this technique to exhaust all difference pairs, which is a well-known long-standing problem. In this paper, we propose a systematic method to find all IDs for SPN block ciphers. The idea is to partition the whole difference pair space into lots of small disjoint sets, each of which has a representative difference pair. All difference pairs in one small set are possible if its representative pair is possible, and this can be conveniently checked by the MILP model. In this way, the overall search space is drastically reduced to a practical size by excluding the sets containing no IDs. We then examine the remaining difference pairs to identify all IDs (if some IDs exist). If our method cannot find any ID, the target cipher is proved free of ID distinguishers. Our method works especially well for SPN ciphers with block size 64. We apply our method to SKINNY-64 and successfully find all 432 and 12 truncated IDs (we find all IDs but all of them can be assembled into certain truncated IDs) for 11 and 12 rounds, respectively. We also prove, for the first time, that 13-round SKINNY-64 is free of ID distinguishers even when considering the differential transitions through the Difference Distribution Table (DDT). Similarly, we find all 12 truncated IDs (all IDs are assembled into 12 truncated IDs) for 13-round CRAFT and prove there is no ID for 14 rounds. For SbPN cipher GIFT-64, we prove that there is no ID for 8 rounds. For SPN ciphers with larger block sizes, we show that our idea is also useful to strengthen the current search methods. For example, if we consider the Sbox to be ideal and only consider the branch number information of the diffusion matrix, we can find all 6,750 truncated IDs for 6-round Rijndael-192 in 1 second and prove that there is no truncated ID for 7 rounds. Previously, we need to solve approximately $2^{48}$ MILP models to achieve the same goal. For GIFT-128, we exhausted all difference patterns that have an active superbox in the plaintext and ciphertext and proved there is no ID of such patterns for 8 rounds. Although we have searched for a larger or even full space for IDs, no longer ID distinguishers have been found. This implies the reasonableness of the intuition that a small number (usually one or two) of active bits/words at the beginning and end of an ID will be the longest.
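The DDT-level transitions mentioned above are easy to compute for a single S-box; the helper below (ours, using an arbitrary toy 4-bit S-box rather than SKINNY's or GIFT's) builds the Difference Distribution Table and lists the one-round transitions that are already impossible at the S-box level.

    def ddt(sbox):
        n = len(sbox)
        table = [[0] * n for _ in range(n)]
        for x in range(n):
            for dx in range(n):
                dy = sbox[x] ^ sbox[x ^ dx]
                table[dx][dy] += 1
        return table

    toy_sbox = [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7]   # arbitrary 4-bit permutation
    t = ddt(toy_sbox)
    impossible = [(dx, dy) for dx in range(1, 16) for dy in range(1, 16) if t[dx][dy] == 0]
    print(len(impossible), "impossible one-S-box transitions")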
Last updated:  2022-08-10
Practical Decentralized Oracle Contracts for Cryptocurrencies
Varun Madathil, Sri AravindaKrishnan Thyagarajan, Dimitrios Vasilopoulos, Lloyd Fournier, Giulio Malavolta, and Pedro Moreno-Sanchez
We consider a scenario where two mutually distrustful parties, Alice and Bob, want to perform a payment conditioned on the outcome of some real-world event. A semi-trusted oracle (or a threshold number of oracles in a distributed trust setting) is entrusted to attest that such an outcome indeed occurred, and only then the payment is made successfully. We refer to such scenario as \emph{oracle-based conditional (ObC) payments} that are ubiquitous in many real-world applications, like financial adjudication, pre-scheduled payments or trading, and are a necessary building block to introduce information about real-world events into blockchains. The focus of this work is to realize such ObC payments with provable security guarantees and efficient instantiations. To do this, we propose a new cryptographic primitive that we call \emph{verifiable witness encryption based on threshold signatures (VweTS)}: Users can encrypt signatures on messages in a verifiable manner, such that, the decryption is successful only if a threshold number of signers (e.g., oracles) sign another message (e.g., the description of an event outcome). We require two security notions: (1) \emph{one-wayness} that guarantees that without the threshold number of signatures, the ciphertext hides the encrypted signature, and (2) \emph{verifiability}, that guarantees that a ciphertext that verifies successfully can be successfully decrypted to reveal the underlying signature. We present provably secure and efficient instantiations of VweTS where the encrypted signature can be some of the widely used schemes like Schnorr, ECDSA or BLS signatures. To provide verifiability in a practically efficient manner, we make use of a new batching technique for cut-and-choose, inspired by the work of Lindell-Riva on garbled circuits. Our VweTS instantiations can be readily used to realize ObC payments on virtually all cryptocurrencies of today in a fungible, cost-efficient, and scalable manner. Our instantiations are the first to support ObC payments in a distributed trust setting without requiring any form of synchrony or coordination among the users and the oracles. To demonstrate the practicality of our scheme, we present a prototype implementation and our benchmarks in commodity hardware show that the computation overhead is less than 13 seconds even for a threshold of 4 out of 7 and a payment conditioned on up to 1000 different real-world event outcomes, while the communication overhead is below 1.3MB. Therefore, our approach is practical even in commodity hardware.
Last updated:  2022-08-10
A Complete Characterization of Security for Linicrypt Block Cipher Modes
Tommy Hollenberg, Mike Rosulek, and Lawrence Roy
We give characterizations of IND\$-CPA security for a large, natural class of encryption schemes. Specifically, we consider encryption algorithms that invoke a block cipher and otherwise perform linear operations (e.g., XOR and multiplication by fixed field elements) on intermediate values. This class of algorithms corresponds to the Linicrypt model of Carmer & Rosulek (Crypto 2016). Our characterization for this class of encryption schemes is sound but not complete. We then focus on a smaller subclass of block cipher modes, which iterate over the blocks of the plaintext, repeatedly applying the same Linicrypt program. For these Linicrypt block cipher modes, we are able to give a sound and complete characterization of IND\$-CPA security. Our characterization is linear-algebraic in nature and is easy to check for a candidate mode. Interestingly, we prove that a Linicrypt block cipher mode is secure if and only if it is secure against adversaries who choose all-zeroes plaintexts.
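A concrete member of the class of modes considered here, written out explicitly (our example, assuming the pyca/cryptography package for AES): CBC encryption with a random IV only ever calls the block cipher and XORs intermediate values, so it fits the block-cipher-calls-plus-linear-operations template; decryption and padding are omitted.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def cbc_encrypt(key, plaintext):
        assert len(plaintext) % 16 == 0
        E = Cipher(algorithms.AES(key), modes.ECB()).encryptor()   # raw block-cipher calls
        iv = os.urandom(16)
        prev, out = iv, [iv]
        for i in range(0, len(plaintext), 16):
            block = E.update(xor(prev, plaintext[i:i + 16]))       # c_i = E_k(c_{i-1} XOR m_i)
            out.append(block)
            prev = block
        return b"".join(out)

    key = os.urandom(16)
    print(cbc_encrypt(key, b"sixteen byte msg" * 2).hex())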
Last updated:  2022-08-09
Optimal Single-Server Private Information Retrieval
Mingxun Zhou, Wei-Kai Lin, Yiannis Tselekounis, and Elaine Shi
We construct a single-server pre-processing Private Information Retrieval (PIR) scheme with optimal bandwidth and server computation (up to poly-logarithmic factors), assuming hardness of the Learning With Errors (LWE) problem. Our scheme achieves amortized $\widetilde{O}_{\lambda}(\sqrt{n})$ server and client computation and $\widetilde{O}_\lambda(1)$ bandwidth per query, completes in a single roundtrip, and requires $\widetilde{O}_\lambda(\sqrt{n})$ client storage. In particular, we achieve a significant reduction in bandwidth over the state-of-the-art scheme by Corrigan-Gibbs, Henzinger, and Kogan (Eurocrypt'22): their scheme requires as much as $\widetilde{O}_{\lambda}(\sqrt{n})$ bandwidth per query, with comparable computational and storage overhead as ours.
Last updated:  2022-08-09
The Power of the Differentially Oblivious Shuffle in Distributed Privacy Mechanisms
Mingxun Zhou and Elaine Shi
The shuffle model has been extensively investigated in the distributed differential privacy (DP) literature. For a class of useful computational tasks, the shuffle model allows us to achieve a privacy-utility tradeoff similar to that of the central model, while shifting the trust from a central data curator to a ``trusted shuffler'' which can be implemented through either trusted hardware or cryptography. Very recently, several works explored cryptographic instantiations of a new type of shuffle with relaxed security, called {\it differentially oblivious (DO) shuffles}. These works demonstrate that by relaxing the shuffler's security from simulation-style secrecy to differential privacy, we can achieve asymptotic efficiency improvements. A natural question arises: can we replace the shuffler in distributed DP mechanisms with a DO-shuffler while retaining a similar privacy-utility tradeoff? In this paper, we prove an optimal privacy amplification theorem by composing any locally differentially private (LDP) mechanism with a DO-shuffler, achieving parameters that tightly match the shuffle model. Moreover, we explore multi-message protocols in the DO-shuffle model, and construct mechanisms for the real summation and histogram problems. Our error bounds approximate the best known results in the multi-message shuffle model up to sub-logarithmic factors. Our results also suggest that, just as in the shuffle model, allowing each client to send multiple messages is fundamentally more powerful than restricting to a single message. As an application, we derive results on using repeated DO-shuffling for privacy-preserving time-series data aggregation.
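For orientation, here is a toy end-to-end run of the ordinary shuffle-model pipeline that the paper starts from, written by us: each client applies a local randomizer (binary randomized response), an intermediary shuffles the reports, and the analyst debiases the sum; the paper's question is what survives when the trusted shuffler is replaced by a differentially oblivious one.

    import math, random

    def randomized_response(bit, eps):
        p_keep = math.exp(eps) / (math.exp(eps) + 1)
        return bit if random.random() < p_keep else 1 - bit

    def shuffle_model_sum(bits, eps):
        reports = [randomized_response(b, eps) for b in bits]
        random.shuffle(reports)                   # the trusted shuffler (the object being relaxed)
        p = math.exp(eps) / (math.exp(eps) + 1)
        n = len(reports)
        return (sum(reports) - n * (1 - p)) / (2 * p - 1)   # unbiased estimate of sum(bits)

    data = [random.randint(0, 1) for _ in range(10000)]
    print(sum(data), round(shuffle_model_sum(data, eps=1.0)))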
Last updated:  2022-08-09
On Non-uniform Security for Black-box Non-Interactive CCA Commitments
Rachit Garg, Dakshita Khurana, George Lu, and Brent Waters
We obtain a black-box construction of non-interactive CCA commitments against non-uniform adversaries. This makes black-box use of an appropriate base commitment scheme for small tag spaces, variants of sub-exponential hinting PRG (Koppula and Waters, Crypto 2019) and variants of keyless sub-exponentially collision-resistant hash function with security against non-uniform adversaries (Bitansky, Kalai and Paneth, STOC 2018 and Bitansky and Lin, TCC 2018). All prior works on non-interactive non-malleable or CCA commitments without setup first construct a "base" scheme for a relatively small identity/tag space, and then build a tag amplification compiler to obtain commitments for an exponential-sized space of identities. Prior black-box constructions either add multiple rounds of interaction (Goyal, Lee, Ostrovsky and Visconti, FOCS 2012) or only achieve security against uniform adversaries (Garg, Khurana, Lu and Waters, Eurocrypt 2021). Our key technical contribution is a novel tag amplification compiler for CCA commitments that replaces the non-interactive proof of consistency required in prior work. Our construction satisfies the strongest known definition of non-malleability, i.e., CCA2 (chosen commitment attack) security. In addition to only making black-box use of the base scheme, our construction replaces sub-exponential NIWIs with sub-exponential hinting PRGs, which can be obtained based on assumptions such as (sub-exponential) CDH or LWE.
Last updated:  2022-08-09
Revisiting Algebraic Attacks on MinRank and on the Rank Decoding Problem
Magali Bardet, Pierre Briaud, Maxime Bros, Philippe Gaborit, and Jean-Pierre Tillich
The Rank Decoding problem (RD) is at the core of rank-based cryptography. Cryptosystems such as ROLLO and RQC, which made it to the second round of the NIST Post-Quantum Standardization Process, as well as the Durandal signature scheme, rely on it or its variants. This problem can also be seen as a structured version of MinRank, which is ubiquitous in multivariate cryptography. Recently, [1,2] proposed attacks based on two new algebraic modelings, namely the MaxMinors modeling, which is specific to RD, and the Support-Minors modeling, which applies to MinRank in general. Both significantly improved the complexity of algebraic attacks on these two problems. In the case of RD, and contrarily to what was believed up to now, these new attacks were shown to be able to outperform combinatorial attacks, even for very small field sizes. However, we prove here that the analysis performed in [2] for one of these attacks, which consists in mixing the MaxMinors modeling with the Support-Minors modeling to solve RD, is too optimistic and leads to underestimating the overall complexity. This is done by exhibiting linear dependencies between these equations and by considering an $\mathbb{F}_{q^m}$ version of these modelings, which turns out to be instrumental for getting a better understanding of both systems. Moreover, by working over $\mathbb{F}_{q^m}$ rather than over $\mathbb{F}_q$, we are able to drastically reduce the number of variables in the system, and we (i) still keep enough algebraic equations to be able to solve the system, and (ii) are able to analyze rigorously the complexity of our approach. This new approach may improve the older MaxMinors approach on RD from [1,2] for certain parameters. We also introduce a new hybrid approach on the Support-Minors system whose impact is much more general, since it applies to any MinRank problem. This technique significantly improves the complexity of the Support-Minors approach for small to moderate field sizes. References: [1] An Algebraic Attack on Rank Metric Code-Based Cryptosystems, Bardet, Briaud, Bros, Gaborit, Neiger, Ruatta, Tillich, EUROCRYPT 2020. [2] Improvements of Algebraic Attacks for solving the Rank Decoding and MinRank problems, Bardet, Bros, Cabarcas, Gaborit, Perlner, Smith-Tone, Tillich, Verbel, ASIACRYPT 2020.
Last updated:  2022-08-09
Oblivious Extractors and Improved Security in Biometric-based Authentication Systems
Ivan De Oliveira Nunes, Peter Rindal, and Maliheh Shirvanian
We study the problem of biometric-based authentication with template confidentiality. Typical schemes addressing this problem, such as Fuzzy Vaults (FV) and Fuzzy Extractors (FE), allow a server, aka Authenticator, to store “random looking” Helper Data (HD) instead of biometric templates in the clear. HD hides information about the corresponding biometric while still enabling secure biometric-based authentication. Even though these schemes reduce the risk of storing biometric data, their corresponding authentication procedures typically require sending the HD (stored by the Authenticator) to a client who claims a given identity. The premise here is that only the identity owner - i.e., the person whose biometric was sampled to originally generate the HD - is able to provide the same biometric to reconstruct the proper cryptographic key from the HD. As a side effect, the ability to freely retrieve HD, by simply claiming a given identity, allows invested adversaries to perform offline statistical attacks (a biometric analog of dictionary attacks on hashed passwords) or re-usability attacks (if the FE scheme is not reusable) on the HD to eventually recover the user’s biometric. In this work we develop Oblivious Extractors: a new construction that allows an Authenticator to authenticate a user without requiring either the user to send a biometric to the Authenticator, or the server to send the HD to the client. Oblivious Extractors provide concrete security advantages for biometric-based authentication systems. From the perspective of secure storage, an oblivious extractor is as secure as its non-oblivious fuzzy extractor counterpart. In addition, it enhances security against the aforementioned statistical and re-usability attacks. To demonstrate the construction’s practicality, we implement and evaluate a biometric-based authentication prototype using Oblivious Extractors.
Last updated:  2022-08-09
Perfectly-Secure Synchronous MPC with Asynchronous Fallback Guarantees
Ananya Appan, Anirudh Chandramouli, and Ashish Choudhury
Secure multi-party computation (MPC) is a fundamental problem in secure distributed computing. An MPC protocol allows a set of $n$ mutually distrusting parties to carry out any joint computation of their private inputs, without disclosing any additional information about their inputs. MPC with information-theoretic security (also called unconditional security) provides the strongest security guarantees and remains secure even against computationally unbounded adversaries. Perfectly-secure MPC protocols are a class of information-theoretically secure MPC protocols which provide all the security guarantees in an error-free fashion. The focus of this work is perfectly-secure MPC. Known protocols are designed assuming either a synchronous or an asynchronous communication network. It is well known that a perfectly-secure synchronous MPC protocol is possible as long as the adversary can corrupt up to $t_s < n/3$ parties. On the other hand, a perfectly-secure asynchronous MPC protocol can tolerate up to $t_a < n/4$ corrupt parties. A natural question is whether there exists a single MPC protocol for the setting where the parties are not aware of the exact network type, one which can tolerate up to $t_s < n/3$ corruptions in a synchronous network and up to $t_a < n/4$ corruptions in an asynchronous network. We design such a best-of-both-worlds perfectly-secure MPC protocol, provided $3t_s + t_a < n$ holds. For designing our protocol, we design two important building blocks, which are of independent interest. The first building block is a best-of-both-worlds Byzantine agreement (BA) protocol tolerating $t < n/3$ corruptions which remains secure both in a synchronous and in an asynchronous network. The second building block is a polynomial-based best-of-both-worlds verifiable secret-sharing (VSS) protocol, which can tolerate up to $t_s$ and $t_a$ corruptions in a synchronous and in an asynchronous network respectively.
Last updated:  2022-08-09
FIDO2, CTAP 2.1, and WebAuthn 2: Provable Security and Post-Quantum Instantiation
Nina Bindel, Cas Cremers, and Mang Zhao
The FIDO2 protocol is a globally used standard for passwordless authentication, building on an alliance between major players in the online authentication space. While already widely deployed, the standard is still under active development. Since version 2.1 of its CTAP sub-protocol, FIDO2 can potentially be instantiated with post-quantum secure primitives. We provide the first formal security analysis of FIDO2 with the CTAP 2.1 and WebAuthn 2 sub-protocols. Our security models build on work by Barbosa et al. for their analysis of FIDO2 with CTAP 2.0 and WebAuthn 1, which we extend in several ways. First, we provide a more fine-grained security model that allows us to prove more relevant protocol properties, such as guarantees about token binding agreement, the None attestation mode, and user verification. Second, we can prove post-quantum security for FIDO2 under certain conditions and minor protocol extensions. Finally, we show that for some threat models, the downgrade resilience of FIDO2 can be improved, and show how to achieve this with a simple modification.
Last updated:  2022-08-09
Making NSEC5 Practical for DNSSEC
Dimitrios Papadopoulos, Duane Wessels, Shumon Huque, Moni Naor, Jan Včelák, Leonid Reyzin, and Sharon Goldberg
NSEC5 is a proposed modification to DNSSEC that guarantees two security properties: (1) privacy against offline zone enumeration, and (2) integrity of zone contents, even if an adversary compromises the authoritative nameserver responsible for responding to DNS queries for the zone. In this work, we redesign NSEC5 in order to make it practical and performant. Our NSEC5 redesign features a new verifiable random function (VRF) based on elliptic curve cryptography (ECC), along with a cryptographic proof of its security. This VRF is also of independent interest, as it is being standardized by the IETF and being used by several other projects. We show how to integrate NSEC5 using our ECC-based VRF into DNSSEC, leveraging precomputation to improve performance and DNS protocol-level optimizations to shorten responses. Next, we present the first full-fledged implementation of NSEC5 for both nameserver and recursive resolver, and evaluate performance under aggressive DNS query loads. We find that our redesigned NSEC5 can be viable even for high-throughput scenarios.
Last updated:  2022-08-09
Masked-degree SIDH
Tomoki Moriya
Isogeny-based cryptography is one of the candidates for post-quantum cryptography. SIDH is a compact and efficient isogeny-based key exchange, and SIKE, the SIDH-based key encapsulation mechanism, remains a Round 4 candidate in the NIST PQC process. However, by the brilliant attack of Castryck and Decru, the original SIDH is broken in polynomial time (with heuristics). To break the original SIDH, three pieces of information in the public key are important: information about the endomorphism ring of the starting curve, some image points under the cyclic hidden isogeny, and the degree of the isogeny. In this paper, we propose a new isogeny-based scheme named \textit{masked-degree SIDH}. This scheme is a variant of SIDH that masks most information about the degrees of the hidden isogenies, and is a first attempt to resist the Castryck--Decru attack. The main idea for masking the degrees is to use many primes in computing the isogenies, which allows the degree to be more flexible. Though the size of the prime $p$ for this scheme is slightly larger than that of SIDH, the scheme resists current attacks that use the degrees of isogenies, like the attack of Castryck and Decru. In our analysis, the most effective attack on masked-degree SIDH has time complexity $\tilde{O}(p^{1/(8\log_2{(\log_2{p})})})$ on classical computers and $\tilde{O}(p^{1/(16\log_2{(\log_2{p})})})$ on quantum computers.
Last updated:  2022-08-09
Random Oracles and Non-Uniformity
Sandro Coretti, Yevgeniy Dodis, Siyao Guo, and John Steinberger
We revisit security proofs for various cryptographic primitives in the auxiliary-input random-oracle model (AI-ROM), in which an attacker $A$ can compute arbitrary $S$ bits of leakage about the random oracle $\mathcal O$ before attacking the system and then use additional $T$ oracle queries to $\mathcal O$ during the attack. This model has natural applications in settings where traditional random-oracle proofs are not useful: (a) security against non-uniform attackers; (b) security against preprocessing. We obtain a number of new results about the AI-ROM: Unruh (CRYPTO '07) introduced the pre-sampling technique, which generically reduces security proofs in the AI-ROM to a much simpler $P$-bit-fixing random-oracle model (BF-ROM), where the attacker can arbitrarily fix the values of $\mathcal O$ on some $P$ coordinates, but then the remaining coordinates are chosen at random. Unruh's security loss for this transformation is $\sqrt{ST/P}$. We improve this loss to the optimal value $O(ST/P)$, which implies nearly tight bounds for a variety of indistinguishability applications in the AI-ROM. While the basic pre-sampling technique cannot give tight bounds for unpredictability applications, we introduce a novel "multiplicative version" of pre-sampling, which allows to dramatically reduce the size of $P$ of the pre-sampled set to $P=O(ST)$ and yields nearly tight security bounds for a variety of unpredictability applications in the AI-ROM. Qualitatively, it validates Unruh's "polynomial pre-sampling conjecture"---disproved in general by Dodis et al. (EUROCRYPT '17)---for the special case of unpredictability applications. Using our techniques, we reprove nearly all AI-ROM bounds obtained by Dodis et al. (using a much more laborious compression technique), but we also apply it to many settings where the compression technique is either inapplicable (e.g., computational reductions) or appears intractable (e.g., Merkle-Damgard hashing). We show that for any salted Merkle-Damgard hash function with m-bit output there exists a collision-finding circuit of size $\Theta(2^{m/3})$ (taking salt as the input), which is significantly below the $2^{m/2}$ birthday security conjectured against uniform attackers. We build two general compilers showing how to generically extend the security of applications proven in the traditional ROM to the AI-ROM. One compiler simply prepends a public salt to the random oracle and shows that salting generically provably defeats preprocessing. Overall, our results make it much easier to get concrete security bounds in the AI-ROM. These bounds in turn give concrete conjectures about the security of these applications (in the standard model) against non-uniform attackers.
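The salting compiler mentioned at the end is simple enough to sketch (our toy, with a SHA-256-based stand-in for the compression function and no Merkle-Damgard length padding): the public salt is absorbed as the first block, so any preprocessing of the "oracle" performed before the salt is chosen cannot be aimed at it.

    import hashlib, os

    BLOCK = 32

    def compress(chaining, block):
        # stand-in compression function h: {0,1}^256 x {0,1}^256 -> {0,1}^256
        return hashlib.sha256(chaining + block).digest()

    def salted_md_hash(salt, message, iv=b"\x00" * 32):
        padded = message + b"\x80" + b"\x00" * (-(len(message) + 1) % BLOCK)
        state = compress(iv, salt)                # salting: absorb the public salt first
        for i in range(0, len(padded), BLOCK):
            state = compress(state, padded[i:i + BLOCK])
        return state

    salt = os.urandom(32)
    print(salted_md_hash(salt, b"hello world").hex())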
Last updated:  2022-08-09
Improved Plantard Arithmetic for Lattice-based Cryptography
Junhao Huang, Jipeng Zhang, Haosong Zhao, Zhe Liu, Ray C. C. Cheung, Çetin Kaya Koç, and Donglong Chen
This paper presents an improved Plantard modular arithmetic (Plantard arithmetic) tailored for Lattice-Based Cryptography (LBC). Based on the improved Plantard arithmetic, we present faster implementations of two LBC schemes, Kyber and NTTRU, running on Cortex-M4. The intrinsic advantage of Plantard arithmetic is that it saves one multiplication when performing modular multiplication by a constant. However, the original Plantard arithmetic is not very practical for LBC schemes because of the limitation on its unsigned input range. In this paper, we improve the Plantard arithmetic and customize it for existing LBC schemes, with theoretical proofs. The improved Plantard arithmetic not only inherits the aforementioned advantage but also accepts signed inputs, produces a signed output, and enlarges the input range compared with the original design. Moreover, compared with the state-of-the-art Montgomery arithmetic, the improved Plantard arithmetic has a larger input range and a smaller output range, which allows better lazy-reduction strategies during the NTT/INTT implementations in current LBC schemes. All these merits make it possible to replace Montgomery arithmetic with the improved Plantard arithmetic in LBC schemes on some platforms. After applying this novel method to the Kyber and NTTRU schemes using 16-bit NTT on Cortex-M4 devices, we show that the proposed design outperforms the known fastest implementation that uses Montgomery and Barrett arithmetic. Specifically, compared with the state-of-the-art Kyber implementation, applying the improved Plantard arithmetic in Kyber results in a speedup of 25.02% and 18.56% for NTT and INTT, respectively. Compared with the reference implementation of NTTRU, our NTT and INTT achieve speedups of 83.21% and 78.64%, respectively. As for LBC KEM schemes, we set new speed records for Kyber and NTTRU running on Cortex-M4.
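For context, the following minimal Python sketch shows signed Montgomery reduction of the kind used in reference 16-bit NTT implementations of Kyber; it is the baseline that Plantard-style arithmetic aims to improve on, not the improved Plantard algorithm of this paper. The constants (q = 3329 and its inverse modulo 2^16) match Kyber, but the code is an illustrative assumption, not the authors' implementation.

    import random

    Q = 3329                      # Kyber modulus
    QINV = pow(Q, -1, 1 << 16)    # q^{-1} mod 2^16 (= 62209)

    def montgomery_reduce(a: int) -> int:
        """Given a with |a| < q * 2^15, return r with r * 2^16 = a (mod q) and |r| < q."""
        t = (a * QINV) & 0xFFFF          # low 16 bits of a * q^{-1}
        if t >= 1 << 15:                 # reinterpret as a signed 16-bit value
            t -= 1 << 16
        return (a - t * Q) >> 16         # exact division by 2^16

    # Sanity check: reducing a*b preserves the congruence class up to the 2^16 factor.
    a, b = random.randrange(Q), random.randrange(Q)
    r = montgomery_reduce(a * b)
    assert (r * (1 << 16)) % Q == (a * b) % Q

The improved Plantard arithmetic described above plays the same role inside the NTT butterflies, but with a wider signed input range and a smaller output range, which is what enables the better lazy-reduction strategies the paper reports.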
Last updated:  2022-08-08
Maliciously Secure Massively Parallel Computation for All-but-One Corruptions
Rex Fernando, Yuval Gelles, Ilan Komargodski, and Elaine Shi
The Massively Parallel Computation (MPC) model gained wide adoption over the last decade. By now, it is widely accepted as the right model for capturing the commonly used programming paradigms (such as MapReduce, Hadoop, and Spark) that utilize parallel computation power to manipulate and analyze huge amounts of data. Motivated by the need to perform large-scale data analytics in a privacy-preserving manner, several recent works have presented generic compilers that transform algorithms in the MPC model into secure counterparts, while preserving various efficiency parameters of the original algorithms. The first paper, due to Chan et al. (ITCS ’20), focused on the honest-majority setting. Later, Fernando et al. (TCC ’20) considered the dishonest-majority setting. The latter work presented a compiler that transforms generic MPC algorithms into ones that are secure against semi-honest attackers who may control all but one of the parties involved. The security of their resulting algorithm relied on the existence of a PKI and also on rather strong cryptographic assumptions: indistinguishability obfuscation and the circular security of certain LWE-based encryption systems. In this work, we focus on the dishonest-majority setting, following Fernando et al. In this setting, the known compilers do not achieve the standard security notion called malicious security, where attackers can arbitrarily deviate from the prescribed protocol. In fact, we show that unless very strong setup assumptions are made (such as a programmable random oracle), it is provably impossible to withstand malicious attackers due to the stringent requirements on space and round complexity. As our main contribution, we complement the above negative result by designing the first general compiler for malicious attackers in the dishonest-majority setting. The resulting protocols withstand all-but-one corruptions. Our compiler relies on a simple PKI and a (programmable) random oracle, and is proven secure assuming LWE and SNARKs. Interestingly, even with such strong assumptions, it is rather non-trivial to obtain a secure protocol.
Last updated:  2022-08-08
An attack on SIDH with arbitrary starting curve
Luciano Maino and Chloe Martindale
We present an attack on SIDH which does not require any endomorphism information on the starting curve. Our attack is not polynomial-time, but it significantly reduces the security of SIDH and SIKE; our analysis and preliminary implementation suggest that our algorithm will be feasible for the Microsoft challenge parameters $p = 2^{110}3^{67}-1$ on a regular computer. Our attack applies to any isogeny-based cryptosystem that publishes the images of points under the secret isogeny, for example Séta [26] and B-SIDH [9]. It does not apply to CSIDH [8], CSI-FiSh [3], or SQISign [11].
Last updated:  2022-08-08
Parallelizable Delegation from LWE
Cody Freitag, Rafael Pass, and Naomi Sirkin
We present the first non-interactive delegation scheme for P with time-tight parallel prover efficiency based on standard hardness assumptions. More precisely, in a time-tight delegation scheme–which we refer to as a SPARG (succinct parallelizable argument)–the prover's parallel running time is t + polylog(t), while using only polylog(t) processors, where t is the length of the computation. (In other words, the proof is computed essentially in parallel with the computation, with only some minimal additive overhead in terms of time.) Our main results show the existence of a publicly-verifiable, non-interactive SPARG for P assuming polynomial hardness of LWE. Our SPARG construction relies on the elegant recent delegation construction of Choudhuri, Jain, and Jin (FOCS'21) and combines it with techniques from Ephraim et al. (EuroCrypt'20). We next demonstrate how to make our SPARG time-independent–where the prover and verifier do not need to know the running time t in advance; as far as we know, this yields the first construction of a time-tight delegation scheme with time-independence based on any hardness assumption. We finally present applications of SPARGs to the construction of VDFs (Boneh et al., Crypto'18), resulting in the first VDF construction from standard polynomial hardness assumptions (namely LWE and the minimal assumption of a sequentially hard function).
Last updated:  2022-08-08
Aggregate Measurement via Oblivious Shuffling
Erik Anderson, Melissa Chase, Wei Dai, F. Betul Durak, Kim Laine, Siddhart Sharma, and Chenkai Weng
We introduce a new secure aggregation method for computing aggregate statistics over secret shared data in a client-server setting. Our protocol is particularly suitable for ad conversion measurement computations, where online advertisers and ad networks want to measure the performance of ad campaigns without requiring privacy-invasive techniques, such as third-party cookies. Our protocol has linear complexity in the number of data points and guarantees differentially private outputs. We formally analyze the security and privacy of our protocol and present a performance evaluation with comparison to other approaches proposed for a similar task.
Last updated:  2022-08-08
DiSSECT: Distinguisher of Standard & Simulated Elliptic Curves via Traits
Vladimir Sedlacek, Vojtech Suchanek, Antonin Dufka, Marek Sys, and Vashek Matyas
It can be tricky to trust elliptic curves standardized in a non-transparent way. To rectify this, we propose a systematic methodology for analyzing curves and statistically comparing them to the expected values of a large number of generic curves with the aim of identifying any deviations in the standard curves. For this purpose, we put together the largest publicly available database of standard curves. To identify unexpected properties of standard generation methods and curves, we simulate over 250 000 curves by mimicking the generation process of four standards. We compute 22 different properties of curves and analyze them with automated methods to pinpoint deviations in standard curves, pointing to possible weaknesses.
Last updated:  2022-08-08
An algebraic attack to the Bluetooth stream cipher E0
Roberto La Scala, Sergio Polese, Sharwan K. Tiwari, and Andrea Visconti
In this paper we study the security of the Bluetooth stream cipher E0 from the viewpoint that it is a “difference stream cipher”, that is, it is defined by a system of explicit difference equations over the finite field GF(2). This approach highlights several issues of the Bluetooth encryption, such as the invertibility of its state transition map, a special set of 14 bits of its 132-bit state which, when guessed, implies linear equations among the other bits, and finally a small number of spurious keys, with 83 guessed bits, which are compatible with a keystream of about 60 bits. Exploiting these issues, we implement an algebraic attack using Gröbner bases, SAT solvers, and Binary Decision Diagrams. Testing activities suggest that the version based on Gröbner bases is the best one, and it is able to attack E0 in about $2^{79}$ seconds on an Intel i9 CPU. To the best of our knowledge, this work improves on any previous attack based on a short keystream, hence fitting the Bluetooth specifications.
Last updated:  2022-08-08
Secure and Private Source Coding with Private Key and Decoder Side Information
Onur Gunlu, Rafael F. Schaefer, Holger Boche, and H. Vincent Poor
The problem of secure source coding with multiple terminals is extended by considering a remote source whose noisy measurements are the correlated random variables used for secure source reconstruction. The main additions to the problem are that 1) all terminals noncausally observe a noisy measurement of the remote source; 2) a private key is available to all legitimate terminals; 3) the public communication link between the encoder and decoder is rate-limited; and 4) secrecy leakage to the eavesdropper is measured with respect to the encoder input, whereas privacy leakage is measured with respect to the remote source. Exact rate regions are characterized for a lossy source coding problem with a private key, remote source, and decoder side information under security, privacy, communication, and distortion constraints. By replacing the distortion constraint with a reliability constraint, we obtain the exact rate region also for the lossless case. Furthermore, the lossy rate region for scalar discrete-time Gaussian sources and channels is established.
Last updated:  2022-08-08
Towards Practical Homomorphic Time-Lock Puzzles: Applicability and Verifiability
Yi Liu, Qi Wang, and Siu-Ming Yiu
Time-lock puzzle schemes allow one to encrypt messages for the future. More concretely, one can efficiently generate a time-lock puzzle for a secret/solution $s$, such that $s$ remains hidden until a specified time $T$ has elapsed, even against parallel adversaries. However, since computation on the secrets within multiple puzzles can be performed only when \emph{all} of these puzzles are solved, the usage of classical time-lock puzzles is greatly limited. Homomorphic time-lock puzzle (HTLP) schemes were thus proposed to allow evaluating functions over puzzles directly, without solving them. However, although efficient HTLP schemes exist, further improvements are still needed for practicality. In this paper, we improve HTLP schemes to broaden their application scenarios in terms of \emph{applicability} and \emph{verifiability}. In terms of applicability, we design the \emph{first} multiplicative HTLP scheme with solution space $\mathbb{Z}_n^*$, which is more expressive than the original one, e.g., for representing integers. Then, to fit HTLP into scenarios requiring verifiability, which is missing in existing schemes, we propose three \emph{simple} and \emph{fast} protocols, for both the additive HTLP scheme and our multiplicative HTLP scheme. The first two protocols allow a puzzle solver to convince others of the correctness of the solution or the invalidity of the puzzle, so that others do not need to solve the puzzle themselves. The third protocol allows a puzzle generator to prove the validity of his puzzles. We show that a puzzle in our scheme is only $1.25$ KB and that one multiplication on puzzles takes only $0.01$ ms, while the overhead of each protocol is less than $0.6$ KB in communication and $40$ ms in computation. Hence, HTLP still demonstrates excellent efficiency in both communication and computation with these versatile properties.
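As background, the classical time-lock puzzle of Rivest, Shamir, and Wagner, which underlies most HTLP constructions, hides a secret behind $T$ sequential squarings modulo an RSA modulus. The Python sketch below is a toy illustration of that base mechanism only (tiny parameters, additive masking instead of symmetric encryption, no homomorphism, no security); it is an assumption-laden sketch, not the scheme of this paper.

    import random

    # Toy parameters (NOT secure): small primes so the example runs instantly.
    p, q = 104729, 1299709          # a real puzzle uses a large RSA modulus
    n = p * q
    phi = (p - 1) * (q - 1)
    T = 10_000                      # number of sequential squarings the solver must perform

    secret = 42
    key = random.randrange(2, n)    # with overwhelming probability coprime to n

    # Puzzle generation: knowing phi(n), the generator shortcuts the T squarings.
    e = pow(2, T, phi)
    lock = pow(key, e, n)           # key^(2^T) mod n, computed fast
    puzzle = (n, T, key, (secret + lock) % n)

    # Solving: without phi(n), the solver must perform T squarings sequentially.
    n_, T_, key_, masked = puzzle
    v = key_
    for _ in range(T_):
        v = (v * v) % n_
    recovered = (masked - v) % n_
    assert recovered == secret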
Last updated:  2022-08-08
Roulette: A Diverse Family of Feasible Fault Attacks on Masked Kyber
Jeroen Delvaux
At Indocrypt 2021, Hermelink, Pessl, and Pöppelmann presented a fault attack against Kyber in which a system of linear inequalities over the private key is generated and solved. The attack requires a laser and is, understandably, demonstrated with simulations—not actual equipment. We facilitate and diversify the attack in four ways, thereby admitting cheaper and more forgiving fault-injection setups. Firstly, the attack surface is enlarged: originally, the two input operands of the ciphertext comparison are covered, and we additionally cover re-encryption modules such as binomial sampling and butterflies in the last layer of the inverse number-theoretic transform (INTT). This extra surface also allows an attacker to bypass the custom countermeasure that was proposed in the Indocrypt paper. Secondly, the fault model is relaxed: originally, precise bit flips are required, and we additionally support set-to-0 faults, random faults, arbitrary bit flips, and instruction skips. Thirdly, masking and blinding methods that randomize intermediate variables kindly help our attack, whereas the IndoCrypt attack is like most other fault attacks either hindered or unaltered by countermeasures against passive side-channel analysis (SCA). Randomization helps because we randomly fault intermediate prime-field elements until a desired set of values is hit. If these prime-field elements are represented on a circle, which is a common visualization, our attack is analogous to spinning a roulette wheel until the ball lands in a desired set of pockets. Hence, the nickname. Fourthly, we accelerate and improve the error tolerance of solving the system of linear inequalities: run times of roughly 100 minutes are reduced to roughly one minute, and inequality error rates of roughly 1% are relaxed to roughly 25%. Benefiting from the four advances above, we use a reasonably priced ChipWhisperer board to break a masked implementation of Kyber running on an ARM Cortex-M4 through clock glitching.
Last updated:  2022-08-08
Orion: Zero Knowledge Proof with Linear Prover Time
Tiancheng Xie, Yupeng Zhang, and Dawn Song
Zero-knowledge proof is a powerful cryptographic primitive that has found various applications in the real world. However, existing schemes with succinct proof size suffer from a high overhead on the proof generation time that is super-linear in the size of the statement represented as an arithmetic circuit, limiting their efficiency and scalability in practice. In this paper, we present Orion, a new zero-knowledge argument system that achieves $O(N)$ prover time of field operations and hash functions and $O(\log^2 N)$ proof size. Orion is concretely efficient and our implementation shows that the prover time is 3.09s and the proof size is 1.5MB for a circuit with $2^{20}$ multiplication gates. The prover time is the fastest among all existing succinct proof systems, and the proof size is an order of magnitude smaller than a recent scheme proposed in Golovnev et al. 2021. In particular, we develop two new techniques leading to the efficiency improvement. (1) We propose a new algorithm to test whether a random bipartite graph is a lossless expander graph or not based on the densest subgraph algorithm. It allows us to sample lossless expanders with an overwhelming probability. The technique improves the efficiency and/or security of all existing zero-knowledge argument schemes with a linear prover time. The testing algorithm based on densest subgraph may be of independent interest for other applications of expander graphs. (2) We develop an efficient proof composition scheme, code switching, to reduce the proof size from square root to polylogarithmic in the size of the computation. The scheme is built on the encoding circuit of a linear code and shows that the witness of a second zero-knowledge argument is the same as the message in the linear code. The proof composition only introduces a small overhead on the prover time.
Last updated:  2022-08-08
Multi-Input Attribute Based Encryption and Predicate Encryption
Shweta Agrawal, Anshu Yadav, and Shota Yamada
Motivated by several new and natural applications, we initiate the study of multi-input predicate encryption (${\sf miPE}$) and further develop multi-input attribute based encryption (${\sf miABE}$). Our contributions are: 1. Formalizing Security: We provide definitions for ${\sf miABE}$ and ${\sf miPE}$ in the {symmetric} key setting and formalize security in the standard indistinguishability (IND) paradigm, against unbounded collusions. 2. Two-input ${\sf ABE}$ for ${\sf NC}_1$ from ${\sf LWE}$ and Pairings: We provide the first constructions for two-input key-policy ${\sf ABE}$ for ${\sf NC}_1$ from ${\sf LWE}$ and pairings. Our construction leverages a surprising connection between techniques recently developed by Agrawal and Yamada (Eurocrypt, 2020) in the context of succinct single-input ciphertext-policy ${\sf ABE}$, to the seemingly unrelated problem of two-input key-policy ${\sf ABE}$. Similarly to Agrawal-Yamada, our construction is proven secure in the bilinear generic group model. By leveraging inner product functional encryption and using (a variant of) the KOALA knowledge assumption, we obtain a construction in the standard model analogously to Agrawal, Wichs and Yamada (TCC, 2020). 3. Heuristic two-input ${\sf ABE}$ for ${\sf P}$ from Lattices: We show that techniques developed for succinct single-input ciphertext-policy ${\sf ABE}$ by Brakerski and Vaikuntanathan (ITCS 2022) can also be seen from the lens of ${\sf miABE}$ and obtain the first two-input key-policy ${\sf ABE}$ from lattices for ${\sf P}$. 4. Heuristic three-input ${\sf ABE}$ and ${\sf PE}$ for ${\sf NC}_1$ from Pairings and Lattices: We obtain the first three-input ${\sf ABE}$ for ${\sf NC}_1$ by harnessing the powers of both the Agrawal-Yamada and the Brakerski-Vaikuntanathan constructions. 5. Multi-input ${\sf ABE}$ to multi-input ${\sf PE}$ via Lockable Obfuscation: We provide a generic compiler that lifts multi-input ${\sf ABE}$ to multi-input ${\sf PE}$ by relying on the hiding properties of Lockable Obfuscation (${\sf LO}$) by Wichs-Zirdelis and Goyal-Koppula-Waters (FOCS 2018), which can be based on ${\sf LWE}$. Our compiler generalizes such a compiler for single-input setting to the much more challenging setting of multiple inputs. By instantiating our compiler with our new two and three-input ${\sf ABE}$ schemes, we obtain the first constructions of two and three-input ${\sf PE}$ schemes. Our constructions of multi-input ${\sf ABE}$ provide the first improvement to the compression factor of non-trivially exponentially efficient Witness Encryption defined by Brakerski et al. (SCN 2018) without relying on compact functional encryption or indistinguishability obfuscation. We believe that the unexpected connection between succinct single-input ciphertext-policy ${\sf ABE}$ and multi-input key-policy ${\sf ABE}$ may lead to a new pathway for witness encryption.
Last updated:  2022-08-08
SIM: Secure Interval Membership Testing and Applications to Secure Comparison
Albert Yu, Donghang Lu, Aniket Kate, and Hemanta K. Maji
The offline-online model is a leading paradigm for practical secure multi-party computation (MPC) protocol design that has successfully reduced the overhead for several prevalent privacy-preserving computation functionalities common to diverse application domains. However, the prohibitive overheads associated with secure comparison -- one of these vital functionalities -- often bottlenecks current and envisioned MPC solutions. Indeed, an efficient secure comparison solution has the potential for significant real-world impact through its broad applications. This work identifies and presents SIM, a secure protocol for the functionality of interval membership testing. This security functionality, in particular, facilitates secure less-than-zero testing and, in turn, secure comparison. A key technical challenge is to support a fast online protocol for testing in large rings while keeping the precomputation tractable. Motivated by the map-reduce paradigm, this work introduces the innovation of (1) computing a sequence of intermediate functionalities on a partition of the input into input blocks and (2) securely aggregating the output from these intermediate outputs. This innovation allows controlling the size of the precomputation through a granularity parameter representing these input blocks' size -- enabling application-specific automated compiler optimizations. To demonstrate our protocols' efficiency, we implement and test their performance in a high-demand application: privacy-preserving machine learning. The benchmark results show that switching to our protocols yields significant performance improvement, which indicates that using our protocol in a plug-and-play fashion can improve the performance of various security applications. Our new paradigm of protocol design may be of independent interest because of its potential for extensions to other functionalities of practical interest.
Last updated:  2022-08-07
New Low-Memory Algebraic Attacks on LowMC in the Picnic Setting
Fukang Liu, Willi Meier, Santanu Sarkar, and Takanori Isobe
The security of the post-quantum signature scheme Picnic is highly related to the difficulty of recovering the secret key of LowMC from a single plaintext-ciphertext pair. Since Picnic is one of the alternate third-round candidates in NIST post-quantum cryptography standardization process, it has become urgent and important to evaluate the security of LowMC in the Picnic setting. The best attacks on LowMC with full S-box layers used in Picnic3 were achieved with Dinur's algorithm. For LowMC with partial nonlinear layers, e.g. 10 S-boxes per round adopted in Picnic2, the best attacks on LowMC were published by Banik et al. with the meet-in-the-middle (MITM) method. In this paper, we improve the attacks on LowMC in a model where memory consumption is costly. First, a new attack on 3-round LowMC with full S-box layers with negligible memory complexity is found, which can outperform Bouillaguet et al.'s fast exhaustive search attack and can achieve better time-memory tradeoffs than Dinur's algorithm. Second, we extend the 3-round attack to 4 rounds to significantly reduce the memory complexity of Dinur's algorithm at the sacrifice of a small factor of time complexity. For LowMC instances with 1 S-box per round, our attacks are shown to be much faster than the MITM attacks. For LowMC instances with 10 S-boxes per round, we can reduce the memory complexity from 32GB ($2^{38}$ bits) to only 256KB ($2^{21}$ bits) using our new algebraic attacks rather than the MITM attacks, while the time complexity of our attacks is about $2^{3.2}\sim 2^{5}$ times higher than that of the MITM attacks. A notable feature of our new attacks (apart from the 4-round attack) is their simplicity. Specifically, only some basic linear algebra is required to understand them and they can be easily implemented.
Last updated:  2022-08-07
Generic Construction of UC-Secure Oblivious Transfer
Olivier Blazy and Céline Chevalier
We show how to construct a completely generic UC-secure oblivious transfer scheme from a collision-resistant chameleon hash scheme (CH) and a CCA encryption scheme admitting a smooth projective hash function (SPHF). Our work is based on the work of Abdalla et al. at Asiacrypt 2013, where the authors formalize the notion of SPHF-friendly commitments, i.e., commitments admitting an SPHF on the language of valid commitments (to allow implicit decommitment), and show how to construct from them a UC-secure oblivious transfer in a generic way. However, Abdalla et al. only gave a DDH-based construction of SPHF-friendly commitment schemes, which furthermore relies heavily on pairings. In this work, we show how to generically construct an SPHF-friendly commitment scheme from a collision-resistant CH scheme and an SPHF-friendly CCA encryption scheme. This allows us to propose an instantiation of our schemes based on DDH that is as efficient as that of Abdalla et al., but without requiring any pairing. Interestingly, our generic framework also allows us to propose an instantiation based on the learning with errors (LWE) assumption. For the record, we finally propose a last instantiation based on the decisional composite residuosity (DCR) assumption.
Last updated:  2022-08-07
Practical Statistically-Sound Proofs of Exponentiation in any Group
Charlotte Hoffmann, Pavel Hubáček, Chethan Kamath, Karen Klein, and Krzysztof Pietrzak
For a group $\mathbb{G}$ of unknown order, a *Proof of Exponentiation* (PoE) allows a prover to convince a verifier that a tuple $(y,x,q,T)\in \mathbb{G}^2\times\mathbb{N}^2$ satisfies $y=x^{q^T}$. PoEs have recently found exciting applications in constructions of verifiable delay functions and succinct arguments of knowledge. The current PoEs that are practical in terms of proof-size only provide restricted soundness guarantees: Wesolowski's protocol (Journal of Cryptology 2020) is only computationally-sound (i.e., it is an argument), whereas Pietrzak's protocol (ITCS 2019) is statistically-sound only in groups that come with the promise of not having any small subgroups. On the other hand, the only statistically-sound PoE in *arbitrary* groups of unknown order is due to Block et al. (CRYPTO 2021), and it can be seen as an elaborate parallel repetition of Pietrzak's PoE. Therefore, to achieve $\lambda$ bits of security, say $\lambda=80$, the number of repetitions required -- and hence the (multiplicative) overhead incurred in proof-size -- is as large as $\lambda$. In this work, we propose a statistically-sound PoE for arbitrary groups for the case where the exponent $q$ is the product of all primes up to some bound $B$. For such a structured exponent, we show that it suffices to run only $\lambda/\log(B)$ parallel instances of Pietrzak's PoE. This reduces the concrete proof-size compared to Block et al. by an order of magnitude. Furthermore, we show that in the known applications where PoEs are used as a building block such structured exponents are viable. Finally, we also discuss batching of our PoE, showing that many proofs (for the same $\mathbb{G}$ and $q$ but different $x$ and $T$) can be batched by adding only a single element to the proof per additional statement.
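To make the parameter trade-off concrete, the following Python sketch (an illustration under assumed toy parameters, not the authors' code) computes the structured exponent $q$ as the product of all primes up to a bound $B$, the corresponding number of parallel instances of Pietrzak's PoE (roughly $\lambda/\log(B)$, here with $\log_2$), and evaluates the basic statement $y = x^{q^T}$ in a small RSA-like group by raising to $q$ a total of $T$ times.

    import math

    def primes_up_to(B):
        sieve = bytearray([1]) * (B + 1)
        sieve[:2] = b"\x00\x00"
        for i in range(2, int(B ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
        return [i for i in range(B + 1) if sieve[i]]

    lam, B = 80, 1000
    q = math.prod(primes_up_to(B))               # structured exponent: product of all primes <= B
    repetitions = math.ceil(lam / math.log2(B))  # parallel instances of Pietrzak's PoE needed
    print(f"q has {q.bit_length()} bits; {repetitions} repetitions for {lam}-bit security, vs. {lam} repetitions for a generic exponent")

    # Toy statement y = x^{q^T} in an RSA-like group (tiny, insecure modulus for illustration).
    n, x, T = 104729 * 1299709, 5, 8
    y = x
    for _ in range(T):
        y = pow(y, q, n)                         # after T rounds, y = x^{q^T} mod n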
Last updated:  2022-08-07
Guide to Fully Homomorphic Encryption over the [Discretized] Torus
Marc Joye
First posed as a challenge in 1978 by Rivest et al., fully homomorphic encryption—the ability to evaluate any function over encrypted data—was only solved in 2009 in a breakthrough result by Gentry (Commun. ACM, 2010). After a decade of intense research, practical solutions have emerged and are being pushed for standardization. This guide is intended for practitioners. It explains the inner workings of TFHE, a torus-based fully homomorphic encryption scheme. More precisely, it describes its implementation on a discretized version of the torus. It also explains in detail the technique of programmable bootstrapping.
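As a minimal illustration of what "discretized torus" means, the Python sketch below identifies the torus $\mathbb{T} = \mathbb{R}/\mathbb{Z}$ with 32-bit integers (the integer $x$ represents $x/2^{32}$), encodes a small message in the most significant bits, adds Gaussian noise, and rounds it away on decoding. This is only an assumed toy encoding in the spirit of TFHE's plaintext layer, not the guide's actual LWE/GLWE machinery.

    import random

    BITS = 32
    MASK = (1 << BITS) - 1
    MSG_BITS = 3                          # toy plaintext space of 2^3 messages on the torus

    def encode(m: int) -> int:
        # place the message in the most significant bits of a 32-bit torus element
        return (m << (BITS - MSG_BITS)) & MASK

    def add_noise(t: int, stddev: float = 2 ** 20) -> int:
        return (t + round(random.gauss(0, stddev))) & MASK   # wrap around: arithmetic mod 2^32

    def decode(t: int) -> int:
        # round to the nearest multiple of 2^(BITS - MSG_BITS), i.e. strip the noise
        half = 1 << (BITS - MSG_BITS - 1)
        return ((t + half) >> (BITS - MSG_BITS)) % (1 << MSG_BITS)

    for m in range(1 << MSG_BITS):
        assert decode(add_noise(encode(m))) == m             # holds as long as |noise| < 2^28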
Last updated:  2022-08-07
Physically Related Functions: A New Paradigm for Light-weight Key-Exchange
Durba Chatterjee, Harishma Boyapally, Sikhar Patranabis, Urbi Chatterjee, Debdeep Mukhopadhyay, and Aritra Hazra
In this paper, we propose a novel concept named Physically Related Functions (PReFs), which are devices with hardware roots of trust. PReFs enable secure key exchange with no pre-established or embedded secret keys. This work is motivated by the need to perform key exchange between lightweight, resource-constrained devices. We present a proof-of-concept realization of our contributions in hardware using FPGAs.
Last updated:  2022-08-06
Time-Deniable Signatures
Gabrielle Beck, Arka Rai Choudhuri, Matthew Green, Abhishek Jain, and Pratyush Ranjan Tiwari
In this work we propose time-deniable signatures (TDS), a new primitive that facilitates deniable authentication in protocols such as DKIM-signed email. As with traditional signatures, TDS provide strong authenticity for message content, at least for a sender-chosen period of time. Once this time period has elapsed, however, time-deniable signatures can be forged by any party who obtains a signature. This forgery property ensures that signatures serve a useful authentication purpose for a bounded time period, while also allowing signers to plausibly disavow the creation of older signed content. Most critically, and unlike many past proposals for deniable authentication, TDS do not require interaction with the receiver or the deployment of any persistent cryptographic infrastructure or services beyond the signing process (e.g., APIs to publish secrets or author timestamp certificates.) We first investigate the security definitions for time-deniability, demonstrating that past definitional attempts are insufficient (and indeed, allow for broken signature schemes.) We then propose an efficient construction of TDS based on well-studied assumptions.
Last updated:  2022-08-06
PERKS: Persistent and Distributed Key Acquisition for Secure Storage from Passwords
Gareth T. Davies and Jeroen Pijnenburg
We investigate how users of instant messaging (IM) services can acquire strong encryption keys to back up their messages and media with strong cryptographic guarantees. Many IM users regularly change their devices and use multiple devices simultaneously, ruling out any long-term secret storage. Extending the end-to-end encryption guarantees from just message communication to also incorporate backups has so far required either some trust in an IM or outsourced storage provider, or use of costly third-party encryption tools with unclear security guarantees. Recent works have proposed solutions for password-protected key material, however all require one or more servers to generate and/or store per-user information, inevitably invoking a cost to the users. We define distributed key acquisition (DKA) as the primitive for the task at hand, where a user interacts with one or more servers to acquire a strong cryptographic key, and both user and server are required to store as little as possible. We present a construction framework that we call PERKS---Password-based Establishment of Random Keys for Storage---providing efficient, modular and simple protocols that utilize Oblivious Pseudorandom Functions (OPRFs) in a distributed manner with minimal storage by the user (just the password) and servers (a single global key for all users). Along the way we introduce a formal treatment of DKA, and provide proofs of security for our constructions in their various flavours. Our approach enables key rotation by the OPRF servers, and for this we incorporate updatable encryption. Finally, we show how our constructions fit neatly with recent research on encrypted outsourced storage to provide strong security guarantees for the outsourced ciphertexts.
Last updated:  2022-08-06
Analysing the HPKE Standard
Joël Alwen, Bruno Blanchet, Eduard Hauck, Eike Kiltz, Benjamin Lipp, and Doreen Riepel
The Hybrid Public Key Encryption (HPKE) scheme is an emerging standard currently under consideration by the Crypto Forum Research Group (CFRG) of the IETF as a candidate for formal approval. Of the four modes of HPKE, we analyse the authenticated mode HPKE_Auth in its single-shot encryption form as it contains what is, arguably, the most novel part of HPKE and has applications to other upcoming standards such as MLS. HPKE_Auth’s intended application domain is captured by a new primitive which we call Authenticated Public Key Encryption (APKE). We provide syntax and security definitions for APKE schemes, as well as for the related Authenticated Key Encapsulation Mechanisms (AKEMs). We prove security of the AKEM scheme DH-AKEM underlying HPKE_Auth based on the Gap Diffie-Hellman assumption and provide general AKEM/DEM composition theorems with which to argue about HPKE_Auth’s security. To this end, we also formally analyse HPKE_Auth’s key schedule and key derivation functions. To increase confidence in our results we use the automatic theorem proving tool CryptoVerif. All our bounds are quantitative and we discuss their practical implications for HPKE_Auth. As an independent contribution we propose the new framework of nominal groups that allows us to capture abstract syntactical and security properties of practical elliptic curves, including the Curve25519 and Curve448 based groups (which do not constitute cyclic groups).
Last updated:  2022-08-06
Coefficient Grouping: Breaking Chaghri and More
Fukang Liu, Ravi Anand, Libo Wang, Willi Meier, and Takanori Isobe
We propose an efficient technique called coefficient grouping to evaluate the algebraic degree of the FHE-friendly cipher Chaghri, which has been accepted for ACM CCS 2022. We find that the algebraic degree increases linearly rather than exponentially. As a consequence, we can construct a 13-round distinguisher with time and data complexity $2^{63}$ and mount a 13.5-round key-recovery attack with time complexity of about $2^{119.6}$. In particular, a higher-order differential attack on 8 rounds of Chaghri can be achieved with time and data complexity $2^{38}$, which indicates that the full 8 rounds are far from being secure. Furthermore, we also demonstrate the application of our coefficient grouping technique to the design of secure cryptographic components. As a result, a countermeasure is found for Chaghri, and it has little overhead compared with the original design. Since more and more symmetric primitives defined over a large finite field are emerging, we believe our new technique can have further applications in future research.
Last updated:  2022-08-06
Public Key Authenticated Encryption with Keyword Search from LWE
Leixiao Cheng and Fei Meng
Public key encryption with keyword search (PEKS) inherently suffers from the inside keyword guessing attack. To resist against this attack, Huang et al. proposed the public key authenticated encryption with keyword search (PAEKS), where the sender not only encrypts a keyword, but also authenticates it. To further resist against quantum attacks, Liu et al. proposed a generic construction of PAEKS and the first quantum-resistant PAEKS instantiation based on lattices. Later, Emura pointed out some issues in Liu et al.'s construction and proposed a new generic construction of PAEKS. The basic construction methodology of Liu et al. and Emura is the same, i.e., each keyword is converted into an extended keyword using the shared key calculated by a word-independent smooth projective hash functions (SPHF), and PEKS is used for the extended keyword. In this paper, we first analyze the schemes of Liu et al. and Emura, and point out some issues regarding their construction and security model. In short, in their lattice-based instantiations, the sender and receiver use a lattice-based word independent SPHF to compute the same shared key to authenticate keywords, leading to a super-polynomial modulus $q$; their generic constructions need a trusted setup assumption or the designated-receiver setting; Liu et al. failed to provide convincing evidence that their scheme satisfies their claimed security. Then, we propose two new lattice-based PAEKS schemes with totally different construction methodology from Liu et al. and Emura. Specifically, in our PAEKS schemes, instead of using the shared key calculated by SPHF, the sender and receiver achieve keyword authentication by using their own secret key to sample a set of short vectors related to the keyword. In this way, the modulus $q$ in our schemes could be of polynomial size, which results in much smaller size of the public key, ciphertext and trapdoor. In addition, our schemes need neither a trusted setup assumption nor the designated-receiver setting. Finally, our schemes can be proven secure in stronger security model, and thus provide stronger security guarantee for both ciphertext privacy and trapdoor privacy.
Last updated:  2022-08-05
One Server for the Price of Two: Simple and Fast Single-Server Private Information Retrieval
Alexandra Henzinger, Matthew M. Hong, Henry Corrigan-Gibbs, Sarah Meiklejohn, and Vinod Vaikuntanathan
We present SimplePIR, the fastest private information retrieval (PIR) scheme known to date. SimplePIR is a single-server PIR scheme, whose security holds under the learning-with-errors assumption. To answer a client’s PIR query, the SimplePIR server performs one 32-bit multiplication and one 32-bit addition per database byte. SimplePIR achieves 6.5 GB/s/core server throughput, which is 7% faster than the fastest two-server PIR schemes (which require non-colluding servers). SimplePIR has relatively large communication costs: to make queries to a 1 GB database, the client must download a 124 MB “hint” about the database contents; thereafter, the client may make an unbounded number of queries, each requiring 242 KB of communication. We present a second single-server scheme, DoublePIR, that shrinks the hint to 16 MB at the cost of slightly higher per-query communication (345 KB) and slightly lower throughput (5.2 GB/s/core). Finally, we apply our PIR schemes, together with a new data structure for approximate set membership, to the problem of private auditing in Certificate Transparency. We achieve a strictly stronger notion of privacy than Google Chrome’s current approach with a modest, 13× larger communication overhead.
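To give a feel for the linear-algebra core described above, here is a small NumPy sketch of a SimplePIR-style query: the database is a square matrix $D$, the hint is $D \cdot A$, and a query is an LWE encryption of a unit vector selecting one column (the client then reads the entry it wants from the recovered column). All parameters (modulus, plaintext size, noise, dimensions) are toy values chosen for the sketch, not the paper's concrete parameters, and the code omits everything needed for real security.

    import numpy as np

    rng = np.random.default_rng(0)
    q, p = 1 << 16, 16                 # toy ciphertext and plaintext moduli
    delta = q // p
    root_N, n = 32, 64                 # database is root_N x root_N; n is the LWE dimension

    D = rng.integers(0, p, size=(root_N, root_N), dtype=np.int64)   # server database
    A = rng.integers(0, q, size=(root_N, n), dtype=np.int64)        # public LWE matrix

    hint = (D @ A) % q                 # downloaded once by the client ("hint")

    # Client wants column j: encrypt the j-th unit vector under secret s.
    j = 7
    s = rng.integers(0, q, size=n, dtype=np.int64)
    e = rng.integers(-1, 2, size=root_N, dtype=np.int64)            # small noise
    u = np.zeros(root_N, dtype=np.int64)
    u[j] = 1
    query = (A @ s + e + delta * u) % q

    answer = (D @ query) % q           # server work: one matrix-vector product over Z_q

    # Client decrypts: answer - hint*s = D*e + delta * D[:, j]  (mod q), then rounds.
    noisy = (answer - hint @ s) % q
    recovered = np.rint(noisy / delta).astype(np.int64) % p
    assert np.array_equal(recovered, D[:, j])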
Last updated:  2022-08-05
Quantum Cryptanalysis of $5$ rounds Feistel schemes and Benes schemes
Maya Chartouny, Jacques Patarin, and Ambre Toulemonde
In this paper, we provide new quantum cryptanalysis results on 5-round (balanced) Feistel schemes and on Benes schemes. More precisely, we give an attack on 5-round Feistel schemes with $\Theta(2^{2n/3})$ quantum complexity and an attack on Benes schemes with $\Theta(2^{2n/3})$ quantum complexity, where $n$ is the number of bits of the internal random functions.
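For readers unfamiliar with the target construction, the following Python sketch builds a toy 5-round balanced Feistel permutation from independent random round functions; the block size and the table-based round functions are illustrative assumptions only, unrelated to the attack itself.

    import random

    N = 16                                    # bits per half (so the block is 2n = 32 bits)

    def random_function(seed):
        rng = random.Random(seed)
        table = [rng.randrange(1 << N) for _ in range(1 << N)]
        return lambda x: table[x]             # an "internal random function" on n bits

    ROUND_FUNCS = [random_function(i) for i in range(5)]

    def feistel_encrypt(left, right):
        for F in ROUND_FUNCS:                 # 5 balanced Feistel rounds
            left, right = right, left ^ F(right)
        return left, right

    def feistel_decrypt(left, right):
        for F in reversed(ROUND_FUNCS):       # invert by undoing the rounds in reverse
            left, right = right ^ F(left), left
        return left, right

    l, r = random.randrange(1 << N), random.randrange(1 << N)
    assert feistel_decrypt(*feistel_encrypt(l, r)) == (l, r)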
Last updated:  2022-08-05
An efficient key recovery attack on SIDH (preliminary version)
Wouter Castryck and Thomas Decru
We present an efficient key recovery attack on the Supersingular Isogeny Diffie-Hellman protocol (SIDH), based on a "glue-and-split" theorem due to Kani. Our attack exploits the existence of a small non-scalar endomorphism on the starting curve, and it also relies on the auxiliary torsion-point information that Alice and Bob share during the protocol. Our Magma implementation breaks the instantiation SIKEp434, which targets security level 1 of the Post-Quantum Cryptography standardization process currently run by NIST, in about one hour on a single core. This is a preliminary version of a longer article in preparation.
Last updated:  2022-08-05
Correlated Pseudorandomness from Expand-Accumulate Codes
Elette Boyle, Geoffroy Couteau, Niv Gilboa, Yuval Ishai, Lisa Kohl, Nicolas Resch, and Peter Scholl
A pseudorandom correlation generator (PCG) is a recent tool for securely generating useful sources of correlated randomness, such as random oblivious transfers (OT) and vector oblivious linear evaluations (VOLE), with low communication cost. We introduce a simple new design for PCGs based on so-called expand-accumulate codes, which first apply a sparse random expander graph to replicate each message entry, and then accumulate the entries by computing the sum of each prefix. Our design offers the following advantages compared to state-of-the-art PCG constructions: - Competitive concrete efficiency backed by provable security against relevant classes of attacks; - An offline-online mode that combines near-optimal cache-friendliness with simple parallelization; - Concretely efficient extensions to pseudorandom correlation functions, which enable incremental generation of new correlation instances on demand, and to new kinds of correlated randomness that include circuit-dependent correlations. To further improve the concrete computational cost, we propose a method for speeding up a full-domain evaluation of a puncturable pseudorandom function (PPRF). This is independently motivated by other cryptographic applications of PPRFs.
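The "expand-accumulate" encoder itself is simple enough to show directly: expand the message with a sparse random binary matrix, then accumulate with prefix XORs. The Python sketch below is an illustrative toy (tiny dimensions, fixed row weight), not the parameters or the full PCG construction of the paper.

    import random

    rng = random.Random(1)

    def sparse_expand_rows(k, n, weight):
        """Each of the n output positions XORs together `weight` randomly chosen message bits."""
        return [rng.sample(range(k), weight) for _ in range(n)]

    def expand_accumulate_encode(msg_bits, rows):
        # Expand: multiply by the sparse matrix over F_2.
        expanded = [sum(msg_bits[i] for i in row) & 1 for row in rows]
        # Accumulate: replace each entry by the XOR of the prefix ending there.
        acc, out = 0, []
        for bit in expanded:
            acc ^= bit
            out.append(acc)
        return out

    k, n, weight = 16, 64, 5
    rows = sparse_expand_rows(k, n, weight)
    message = [rng.randrange(2) for _ in range(k)]
    codeword = expand_accumulate_encode(message, rows)
    print(codeword)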
Last updated:  2022-08-05
Dynamic Local Searchable Symmetric Encryption
Brice Minaud and Michael Reichle
In this article, we tackle for the first time the problem of dynamic memory-efficient Searchable Symmetric Encryption (SSE). In the term "memory-efficient" SSE, we encompass both the goals of local SSE, and page-efficient SSE. The centerpiece of our approach is a new connection between those two goals. We introduce a map, called the Generic Local Transform, which takes as input a page-efficient SSE scheme with certain special features, and outputs an SSE scheme with strong locality properties. We obtain several results. 1. First, we build a dynamic SSE scheme with storage efficiency $O(1)$ and page efficiency only $\tilde{O}(\log \log (N/p))$, where $p$ is the page size, called LayeredSSE. The main technique behind LayeredSSE is a new weighted extension of the two-choice allocation process, of independent interest. 2. Second, we introduce the Generic Local Transform, and combine it with LayeredSSE to build a dynamic SSE scheme with storage efficiency $O(1)$, locality $O(1)$, and read efficiency $\tilde{O}(\log\log N)$, under the condition that the longest list is of size $O(N^{1-1/\log \log \lambda})$. This matches, in every respect, the purely static construction of Asharov et al. from STOC 2016: dynamism comes at no extra cost. 3. Finally, by applying the Generic Local Transform to a variant of the Tethys scheme by Bossuat et al. from Crypto 2021, we build an unconditional static SSE with storage efficiency $O(1)$, locality $O(1)$, and read efficiency $O(\log^\varepsilon N)$, for an arbitrarily small constant $\varepsilon > 0$. To our knowledge, this is the construction that comes closest to the lower bound presented by Cash and Tessaro at Eurocrypt 2014.
Last updated:  2022-08-05
Nonce-Misuse Resilience of Romulus-N and GIFT-COFB
Akiko Inoue, Chun Guo, and Kazuhiko Minematsu
We analyze the nonce-misuse resilience (NMRL) security of Romulus-N and GIFT-COFB, the two finalists of the NIST Lightweight Cryptography project for standardizing lightweight authenticated encryption. NMRL, introduced by Ashur et al. at CRYPTO 2017, is a security notion that relaxes the stronger nonce-misuse resistance notion. We prove that Romulus-N and GIFT-COFB are nonce-misuse resilient. For Romulus-N, we show perfect privacy (NMRL-PRIV) and $n/2$-bit authenticity (NMRL-AUTH) with graceful degradation with respect to nonce repetition. For GIFT-COFB, we show $n/4$-bit security for both the NMRL-PRIV and NMRL-AUTH notions.
Last updated:  2022-08-05
Spatial Encryption Revisited: From Delegatable Multiple Inner Product Encryption and More
Huy Quoc Le, Dung Hoang Duong, Willy Susilo, and Josef Pieprzyk
Spatial Encryption (SE), which involves encryption and decryption with affine/vector objects, was introduced by Boneh and Hamburg at Asiacrypt 2008. Since its introduction, SE has been shown to be a versatile and elegant tool for implementing many other important primitives such as (Hierarchical) Identity-based Encryption ((H)IBE), Broadcast (H)IBE, Attribute-based Encryption, and Forward-secure cryptosystems. This paper revisits SE with the aim of a more compact construction in the lattice setting. In doing so, we introduce a novel primitive called Delegatable Multiple Inner Product Encryption (DMIPE). It is a delegatable generalization of Inner Product Encryption (IPE), but different from Hierarchical IPE (HIPE) (Okamoto and Takashima at Asiacrypt 2009). We point out that DMIPE and SE are equivalent in the sense that there are security-preserving conversions between them. As a proof of concept, we then successfully instantiate a concrete DMIPE construction relying on the hardness of the decisional learning with errors problem. In turn, the DMIPE design implies a lattice-based SE that is more compact in terms of sizes than SEs converted from HIPE (e.g., Xagawa's HIPE at PKC 2013) using the framework by Chen et al. (Designs, Codes, and Cryptography, 2014). Furthermore, we demonstrate that one can also use SE to implement Allow-/Deny-list encryption, which subsumes, e.g., puncturable encryption (Green and Miers at IEEE S&P 2015).
Last updated:  2022-08-05
Structure-Aware Private Set Intersection, With Applications to Fuzzy Matching
Gayathri Garimella, Mike Rosulek, and Jaspal Singh
In two-party private set intersection (PSI), Alice holds a set $X$, Bob holds a set $Y$, and they learn (only) the contents of $X \cap Y$. We introduce structure-aware PSI protocols, which take advantage of situations where Alice's set $X$ is publicly known to have a certain structure. The goal of structure-aware PSI is to have communication that scales with the description size of Alice's set, rather than its cardinality. We introduce a new generic paradigm for structure-aware PSI based on function secret-sharing (FSS). In short, if there exists compact FSS for a class of structured sets, then there exists a semi-honest PSI protocol that supports this class of input sets, with communication cost proportional only to the FSS share size. Several prior protocols for efficient (plain) PSI can be viewed as special cases of our new paradigm, with an implicit FSS for unstructured sets. Our PSI protocol can be instantiated from a significantly weaker flavor of FSS, which has not been previously studied. We develop several improved FSS techniques that take advantage of these relaxed requirements, and which are in some cases exponentially better than existing FSS. Finally, we explore in depth a natural application of structure-aware PSI. If Alice's set $X$ is the union of many radius-$\delta$ balls in some metric space, then an intersection between $X$ and $Y$ corresponds to fuzzy PSI, in which the parties learn which of their points are within distance $\delta$. In structure-aware PSI, the communication cost scales with the number of balls in Alice's set, rather than their total volume. Our techniques lead to efficient fuzzy PSI for $\ell_\infty$ and $\ell_1$ metrics (and approximations of the $\ell_2$ metric) in high dimensions. We implemented this fuzzy PSI protocol for 2-dimensional $\ell_\infty$ metrics. For reasonable input sizes, our protocol requires 45--60% less time and 85% less communication than competing approaches that simply reduce the problem to plain PSI.
Last updated:  2022-08-05
Time-Space Tradeoffs for Sponge Hashing: Attacks and Limitations for Short Collisions
Cody Freitag, Ashrujit Ghoshal, and Ilan Komargodski
Sponge hashing is a novel alternative to the popular Merkle-Damgård hashing design. The sponge construction has become increasingly popular in various applications; perhaps most notably, it underlies the SHA-3 hashing standard. Sponge hashing is parametrized by two numbers, $r$ and $c$ (bitrate and capacity, respectively), and by a fixed-size permutation on $r+c$ bits. In this work, we study the collision resistance of sponge hashing instantiated with a random permutation by adversaries with arbitrary $S$-bit auxiliary advice input about the random permutation that make $T$ online queries. Recent work by Coretti et al. (CRYPTO '18) showed that such adversaries can find collisions (with respect to a random $c$-bit initialization vector) with advantage $\Theta(ST^2/2^c + T^2/ 2^{r})$. Although the above attack formally breaks collision resistance in some range of parameters, its practical relevance is limited since the resulting collision is very long (on the order of $T$ blocks). Focusing on the task of finding short collisions, we study the complexity of finding a $B$-block collision for a given parameter $B\ge 1$. We give several new attacks and limitations. Most notably, we give a new attack that results in a single-block collision and has advantage $$ \Omega \left(\left(\frac{S^{2}T}{2^{2c}}\right)^{2/3} + \frac{T^2}{2^r}\right). $$ In a certain range of parameters (e.g., $ST^2>2^c$), our attack outperforms the previously-known best attack. To the best of our knowledge, this is the first natural application for which sponge hashing is provably less secure than the corresponding instance of Merkle-Damgård hashing. Our attack relies on a novel connection between single-block collision finding in sponge hashing and the well-studied function inversion problem. We also give a general attack that works for any $B\ge 2$ and has advantage $\Omega({STB}/{2^{c}} + {T^2}/{2^{\min\{r,c\}}})$, adapting an idea of Akshima et al. (CRYPTO '20). We complement the above attacks with bounds on the best possible attacks. Specifically, we prove that there is a qualitative jump in the advantage of best possible attacks for finding unbounded-length collisions and those for finding very short collisions. Most notably, we prove (via a highly non-trivial compression argument) that the above attack is optimal for $B=2$ in some range of parameters.
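To fix notation, the Python sketch below implements a toy sponge hash with bitrate $r$ and capacity $c$ over a random permutation on $r+c$ bits; the permutation is just a shuffled lookup table on a tiny state, purely to illustrate the absorb/squeeze structure studied above, and is of course not a secure hash.

    import random

    R, C = 8, 8                                   # bitrate and capacity (toy sizes)
    STATE_BITS = R + C

    rng = random.Random(0)
    _perm = list(range(1 << STATE_BITS))
    rng.shuffle(_perm)                            # a fixed random permutation on r+c bits

    def pi(state):
        return _perm[state]

    def sponge_hash(message_blocks, iv, out_blocks=1):
        """Absorb r-bit blocks into a state whose top c bits start at the random IV."""
        state = iv << R                           # capacity part initialized with the IV
        for block in message_blocks:              # absorbing phase: XOR into the rate part
            state = pi(state ^ block)
        digest = []
        for _ in range(out_blocks):               # squeezing phase: output the rate part
            digest.append(state & ((1 << R) - 1))
            state = pi(state)
        return digest

    iv = rng.randrange(1 << C)
    print(sponge_hash([0x12, 0x34, 0x56], iv))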
Last updated:  2022-08-05
Multimodal Private Signatures
Khoa Nguyen, Fuchun Guo, Willy Susilo, and Guomin Yang
We introduce Multimodal Private Signature (MPS) - an anonymous signature system that offers a novel accountability feature: it allows a designated opening authority to learn some partial information $\mathsf{op}$ about the signer's identity $\mathsf{id}$, and nothing beyond. Such partial information can flexibly be defined as $\mathsf{op} = \mathsf{id}$ (as in group signatures), or as $\mathsf{op} = \mathbf{0}$ (like in ring signatures), or more generally, as $\mathsf{op} = G_j(\mathsf{id})$, where $G_j(\cdot)$ is a certain disclosing function. Importantly, the value of $\mathsf{op}$ is known in advance by the signer, and hence, the latter can decide whether she/he wants to disclose that piece of information. The concept of MPS significantly generalizes the notion of tracing in traditional anonymity-oriented signature primitives, and can enable various new and appealing privacy-preserving applications. We formalize the definitions and security requirements for MPS. We next present a generic construction to demonstrate the feasibility of designing MPS in a modular manner and from commonly used cryptographic building blocks (ordinary signatures, public-key encryption and NIZKs). We also provide an efficient construction in the standard model based on pairings, and a lattice-based construction in the random oracle model.
Last updated:  2022-08-05
zkQMC: Zero-Knowledge Proofs For (Some) Probabilistic Computations Using Quasi-Randomness
Zachary DeStefano, Dani Barrack, and Michael Dixon
We initiate research into efficiently embedding probabilistic computations in probabilistic proofs by introducing techniques for capturing Monte Carlo methods and Las Vegas algorithms in zero knowledge and exploring several potential applications of these techniques. We design and demonstrate a technique for proving the integrity of certain randomized computations, such as uncertainty quantification methods, in non-interactive zero knowledge (NIZK) by replacing conventional randomness with low-discrepancy sequences. This technique, known as the Quasi-Monte Carlo (QMC) method, functions as a form of weak algorithmic derandomization to efficiently produce adversarial-resistant worst-case uncertainty bounds for the results of Monte Carlo simulations. The adversarial resistance provided by this approach allows the integrity of results to be verifiable both in interactive and non-interactive zero knowledge without the need for additional statistical or cryptographic assumptions. To test these techniques, we design a custom domain specific language and implement an associated compiler toolchain that builds zkSNARK gadgets for expressing QMC methods. We demonstrate the power of this technique by using this framework to benchmark zkSNARKs for various examples in statistics and physics. Using $N$ samples, our framework produces zkSNARKs for numerical integration problems of dimension $d$ with $O\left(\frac{(\log N)^d}{N}\right)$ worst-case error bounds. Additionally, we prove a new result using discrepancy theory to efficiently and soundly estimate the output of computations with uncertain data with an $O\left(d\frac{\log N}{\sqrt[d]{N}}\right)$ worst-case error bound. Finally, we show how this work can be applied more generally to allow zero-knowledge proofs to capture a subset of decision problems in $\mathsf{BPP}$, $\mathsf{RP}$, and $\mathsf{ZPP}$.
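The core idea, replacing pseudorandom samples with a low-discrepancy sequence so that deterministic worst-case error bounds apply, can be illustrated in a few lines. The sketch below compares plain Monte Carlo against a 2-dimensional Halton sequence on a simple integral; the integrand and sample count are arbitrary choices for illustration and have nothing to do with the paper's zkSNARK toolchain.

    import math, random

    def van_der_corput(i, base):
        """i-th element of the van der Corput sequence in the given base."""
        x, denom = 0.0, 1.0
        while i > 0:
            i, digit = divmod(i, base)
            denom *= base
            x += digit / denom
        return x

    def halton(i):            # 2-D Halton point: bases 2 and 3
        return van_der_corput(i, 2), van_der_corput(i, 3)

    f = lambda x, y: math.exp(-(x * x + y * y))       # integrand over the unit square
    exact = (math.sqrt(math.pi) / 2 * math.erf(1.0)) ** 2

    N = 4096
    mc = sum(f(random.random(), random.random()) for _ in range(N)) / N
    qmc = sum(f(*halton(i)) for i in range(1, N + 1)) / N

    print(f"MC  error: {abs(mc - exact):.2e}")
    print(f"QMC error: {abs(qmc - exact):.2e}")       # typically ~(log N)^2 / N vs. ~1/sqrt(N)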
Last updated:  2022-08-04
A Forward-secure Efficient Two-factor Authentication Protocol
Steven J. Murdoch and Aydin Abadi
Two-factor authentication (2FA) schemes that rely on a combination of knowledge factors (e.g., a PIN) and device possession have gained popularity. Some of these schemes remain secure even against strong adversaries that (a) observe the traffic between a client and server, and (b) have physical access to the client’s device or its PIN, or breach the server. However, these solutions have several shortcomings; namely, they (i) require a client to remember multiple secret values to prove its identity, (ii) involve several modular exponentiations, and (iii) are in the non-standard random oracle model. In this work, we present a 2FA protocol that resists such a strong adversary while addressing the above shortcomings. Our protocol requires a client to remember only a single secret value/PIN, does not involve any modular exponentiations, and is in a standard model. It is the first protocol that offers these features without using trusted chipsets. This protocol also imposes up to 40% lower communication overhead than state-of-the-art solutions do.
Last updated:  2022-08-04
Interactive Non-Malleable Codes Against Desynchronizing Attacks in the Multi-Party Setting
Nils Fleischhacker, Suparno Ghoshal, and Mark Simkin
Interactive Non-Malleable Codes were introduced by Fleischhacker et al. (TCC 2019) in the two party setting with synchronous tampering. The idea of this type of non-malleable code is that it "encodes" an interactive protocol in such a way that, even if the messages are tampered with according to some class $\mathcal{F}$ of tampering functions, the result of the execution will either be correct, or completely unrelated to the inputs of the participating parties. In the synchronous setting the adversary is able to modify the messages being exchanged but cannot drop messages nor desynchronize the two parties by first running the protocol with the first party and then with the second party. In this work, we define interactive non-malleable codes in the non-synchronous multi-party setting and construct such interactive non-malleable codes for the class $\mathcal{F}^{s}_{\textsf{bounded}}$ of bounded-state tampering functions. The construction is applicable to any multi-party protocol with a fixed message topology.
Last updated:  2022-08-04
FairTraDEX: A Decentralised Exchange Preventing Value Extraction
Conor McMenamin, Vanesa Daza, Matthias Fitzi, and Padraic O'Donoghue
We present FairTraDEX, a decentralized exchange (DEX) protocol based on frequent batch auctions (FBAs), which provides formal game-theoretic guarantees against extractable value. FBAs, when run by a trusted third party, admit unique game-theoretically optimal strategies which ensure players are shown prices equal to the liquidity provider's fair price, excluding explicit, pre-determined fees. FairTraDEX replicates the key features of an FBA that provide these game-theoretic guarantees using a combination of zero-knowledge set-membership protocols and an escrow-enforced commit-reveal protocol. We extend the results of FBAs to handle monopolistic and/or malicious liquidity providers. We provide real-world examples demonstrating that the costs of executing orders in existing academic and industry-standard protocols become prohibitive as order size increases, due to basic value-extraction techniques popularized as maximal extractable value. We further demonstrate that FairTraDEX protects against these execution costs, guaranteeing a fixed-fee model independent of order size, the first guarantee of its kind for a DEX protocol. We also provide detailed Solidity and pseudo-code implementations of FairTraDEX, making FairTraDEX a novel and practical contribution.
Last updated:  2022-08-04
Zswap: zk-SNARK Based Non-Interactive Multi-Asset Swaps
Felix Engelmann, Thomas Kerber, Markulf Kohlweiss, and Mikhail Volkhov
Privacy-oriented cryptocurrencies, like Zcash or Monero, provide fair transaction anonymity and confidentiality but lack important features compared to fully public systems, like Ethereum. Specifically, supporting assets of multiple types and providing a mechanism to atomically exchange them, which is critical for e.g. decentralized finance (DeFi), is challenging in the private setting. By combining insights and security properties from Zcash and SwapCT (PETS 21, an atomic swap system for Monero), we present a simple zk-SNARKs-based transaction scheme, called Zswap, which is carefully malleable to allow the merging of transactions, while preserving anonymity. Our protocol enables multiple assets and atomic exchanges by making use of sparse homomorphic commitments with aggregated open randomness, together with Zcash-friendly simulation-extractable non-interactive zero-knowledge (NIZK) proofs. This results in a provably secure privacy-preserving transaction protocol, with efficient swaps, and overall performance close to that of existing deployed private cryptocurrencies. It is similar to Zcash Sapling and benefits from existing code bases and implementation expertise.
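The merging described above relies on homomorphic value commitments. The following generic Pedersen-style identity is an illustrative sketch of why commitments and their opening randomness can be aggregated across merged transactions; it is not necessarily the exact sparse commitment scheme used in Zswap:
\[
  \mathsf{Com}(v; r) = g^{v} h^{r}, \qquad
  \mathsf{Com}(v_1; r_1)\cdot\mathsf{Com}(v_2; r_2) = \mathsf{Com}(v_1 + v_2;\, r_1 + r_2),
\]
so a verifier can check that the aggregated inputs and outputs of a merged transaction net to a commitment to zero without learning the individual values.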
Last updated:  2022-08-04
Quantum Security of FOX Construction based on Lai-Massey Scheme
Amit Kumar Chauhan and Somitra Sanadhya
The Lai-Massey scheme is an important cryptographic approach to design block ciphers from secure pseudorandom functions. It has been used in the designs of IDEA and IDEA-NXT. At ASIACRYPT'99, Vaudenay showed that the 3-round and 4-round Lai-Massey schemes are secure against chosen-plaintext attacks (CPAs) and chosen-ciphertext attacks (CCAs), respectively, in the classical setting. At SAC'04, Junod and Vaudenay proposed a new family of block ciphers based on the Lai-Massey scheme, namely FOX. In this work, we analyze the security of the FOX cipher in the quantum setting, where the attacker can make quantum superposed queries to the oracle. Our results are as follows: $-$ The 3-round FOX construction is not a pseudorandom permutation against quantum chosen-plaintext attacks (qCPAs), and the 4-round FOX construction is not a strong pseudorandom permutation against quantum chosen-ciphertext attacks (qCCAs). Essentially, we build quantum distinguishers against the 3-round and 4-round FOX constructions, using Simon's algorithm. $-$ The 4-round FOX construction is a pseudorandom permutation against qCPAs. Concretely, we prove that the 4-round FOX construction is secure up to $O(2^{n/12})$ quantum queries. Our security proofs use the compressed oracle technique introduced by Zhandry. More precisely, we use an alternative formalization of the compressed oracle technique introduced by Hosoyamada and Iwata.
Last updated:  2022-08-04
Lattice Codes for Lattice-Based PKE
Shanxiang Lyu, Ling Liu, Junzuo Lai, Cong Ling, and Hao Chen
The public key encryption (PKE) protocol in lattice-based cryptography (LBC) can be modeled as a noisy point-to-point communication system, where the communication channel is similar to the additive white Gaussian noise (AWGN) channel. To improve the error correction performance, this paper investigates lattice-based PKE from the perspective of lattice codes. We propose an efficient labeling function that converts between binary information bits and lattice codewords. The proposed labeling is feasible for a wide range of lattices, including Construction-A and Construction-D lattices. Based on Barnes-Wall lattices, a few improved parameter sets with either higher security or smaller ciphertext size are proposed for FrodoPKE.
Last updated:  2022-08-03
Piranha: A GPU Platform for Secure Computation
Jean-Luc Watson, Sameer Wagh, and Raluca Ada Popa
Secure multi-party computation (MPC) is an essential tool for privacy-preserving machine learning (ML). However, secure training of large-scale ML models currently requires a prohibitively long time to complete. Given that large ML inference and training tasks in the plaintext setting are significantly accelerated by Graphical Processing Units (GPUs), this raises the natural question: can secure MPC leverage GPU acceleration? A few recent works have studied this question in the context of accelerating specific components or protocols, but do not provide a general-purpose solution. Consequently, MPC developers must be both experts in cryptographic protocol design and proficient at low-level GPU kernel development to achieve good performance on any new protocol implementation. We present Piranha, a general-purpose, modular platform for accelerating secret sharing-based MPC protocols using GPUs. Piranha allows the MPC community to easily leverage the benefits of a GPU without requiring GPU expertise. Piranha contributes a three-layer architecture: (1) a device layer that can independently accelerate secret-sharing protocols by providing integer-based kernels absent in current general-purpose GPU libraries, (2) a modular protocol layer that allows developers to maximize utility of limited GPU memory with in-place computation and iterator-based support for non-standard memory access patterns, and (3) an application layer that allows applications to remain completely agnostic to the underlying protocols they use. To demonstrate the benefits of Piranha, we implement 3 state-of-the-art linear secret sharing MPC protocols for secure NN training: 2-party SecureML (IEEE S&P ’17), 3-party Falcon (PETS ’21), and 4-party FantasticFour (USENIX Security ’21). Compared to their CPU-based implementations, the same protocols implemented on top of Piranha’s protocol-agnostic acceleration exhibit a 16−48× decrease in training time. For the first time, Piranha demonstrates the feasibility of training a realistic neural network (e.g. VGG), end-to-end, using MPC in a little over one day. Piranha is open source and available at https://github.com/ucbrise/piranha.
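The data that such integer-based GPU kernels manipulate are secret shares rather than plaintext floating-point tensors. As a minimal illustration, here is a sketch of plain additive secret sharing over $\mathbb{Z}_{2^{64}}$; this is only the simplest possible sharing, not the replicated sharings and fixed-point encodings that SecureML, Falcon, or FantasticFour actually use.

    import secrets

    MOD = 2 ** 64  # arithmetic ring Z_{2^64}, a common choice in secret-sharing MPC

    def share(x, n_parties=3):
        """Split integer x into n additive shares that sum to x modulo 2^64."""
        shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
        shares.append((x - sum(shares)) % MOD)
        return shares

    def reconstruct(shares):
        """Recombine all shares; any strict subset reveals nothing about x."""
        return sum(shares) % MOD

    # Linear operations can be performed locally, share by share:
    a, b = 1234, 5678
    sa, sb = share(a), share(b)
    sum_shares = [(x + y) % MOD for x, y in zip(sa, sb)]
    assert reconstruct(sum_shares) == (a + b) % MOD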
Last updated:  2022-08-03
How to Prove Schnorr Assuming Schnorr: Security of Multi- and Threshold Signatures
Elizabeth Crites, Chelsea Komlo, and Mary Maller
This work investigates efficient multi-party signature schemes in the discrete logarithm setting. We focus on a concurrent model, in which an arbitrary number of signing sessions may occur in parallel. Our primary contributions are: (1) a modular framework for proving the security of Schnorr multisignature and threshold signature schemes, (2) an optimization of the two-round threshold signature scheme $\mathsf{FROST}$ that we call $\mathsf{FROST2}$, and (3) the application of our framework to prove the security of $\mathsf{FROST2}$ as well as a range of other multi-party schemes. We begin by demonstrating that our framework is applicable to multisignatures. We prove the security of a variant of the two-round $\mathsf{MuSig2}$ scheme with proofs of possession and a three-round multisignature $\mathsf{SimpleMuSig}$. We introduce a novel three-round threshold signature $\mathsf{SimpleTSig}$ and propose an optimization to the two-round $\mathsf{FROST}$ threshold scheme that we call $\mathsf{FROST2}$. $\mathsf{FROST2}$ reduces the number of scalar multiplications required during signing from linear in the number of signers to constant. We apply our framework to prove the security of $\mathsf{FROST2}$ under the one-more discrete logarithm assumption and $\mathsf{SimpleTSig}$ under the discrete logarithm assumption in the programmable random oracle model.
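As background for the discrete-logarithm setting these multi-party schemes live in, the following toy sketch shows plain single-signer Schnorr over a small prime-order subgroup; the group parameters and helper names are illustrative only and far from secure, and the sketch is the single-party baseline rather than any of the multi-party protocols analyzed in the paper.

    import hashlib, secrets

    # Toy group: order-q subgroup of Z_p^*, generated by g (insecure demo parameters).
    p, q, g = 23, 11, 2

    def H(R, m):
        return int.from_bytes(hashlib.sha256(f"{R}|{m}".encode()).digest(), "big") % q

    def keygen():
        x = secrets.randbelow(q)          # secret key
        return x, pow(g, x, p)            # (sk, pk = g^x)

    def sign(x, m):
        r = secrets.randbelow(q)          # per-signature nonce
        R = pow(g, r, p)                  # commitment
        c = H(R, m)                       # challenge
        z = (r + c * x) % q               # response
        return R, z

    def verify(y, m, sig):
        R, z = sig
        return pow(g, z, p) == (R * pow(y, H(R, m), p)) % p

    sk, pk = keygen()
    assert verify(pk, "hello", sign(sk, "hello"))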
Last updated:  2022-08-03
Statistical Decoding 2.0: Reducing Decoding to LPN
Kevin Carrier, Thomas Debris-Alazard, Charles Meyer-Hilfiger, and Jean-Pierre Tillich
The security of code-based cryptography relies primarily on the hardness of generic decoding with linear codes. The best generic decoding algorithms are all improvements of an old algorithm due to Prange: they are known under the name of information set decoders (ISD). A while ago, a generic decoding algorithm which does not belong to this family was proposed: statistical decoding. It is a randomized algorithm that requires the computation of a large set of parity-checks of moderate weight, and uses some kind of majority voting on these equations to recover the error. This algorithm was long forgotten because even the best variants of it performed poorly when compared to the simplest ISD algorithm. We revisit this old algorithm by using parity-check equations in a more general way. Here the parity-checks are used to get LPN samples whose secret is part of the error, and the LPN noise is related to the weight of the parity-checks we produce. The corresponding LPN problem is then solved by standard Fourier techniques. By properly choosing the method of producing these low-weight equations and the size of the LPN problem, we are able to significantly outperform information set decoding at code rates smaller than $0.3$. For the first time in $60$ years, this gives a better decoding algorithm, for a significant range of rates, that does not belong to the ISD family.
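To make the reduction target concrete, here is a toy sketch of recovering an LPN secret by exhaustive correlation; this is the brute-force analogue of the fast Fourier/Walsh-Hadamard step that such attacks rely on, with illustrative parameters far below cryptographic size.

    import itertools, random

    def lpn_samples(secret, m, tau, rng):
        """Generate m noisy samples (a, <a,secret> + e mod 2) with noise rate tau."""
        n = len(secret)
        out = []
        for _ in range(m):
            a = [rng.randrange(2) for _ in range(n)]
            e = 1 if rng.random() < tau else 0
            b = (sum(ai & si for ai, si in zip(a, secret)) + e) % 2
            out.append((a, b))
        return out

    def recover(samples, n):
        """Return the candidate secret that agrees with the most samples
        (equivalent to picking the largest Fourier/Walsh coefficient)."""
        def agreements(cand):
            return sum((sum(ai & ci for ai, ci in zip(a, cand)) % 2) == b
                       for a, b in samples)
        return max(itertools.product([0, 1], repeat=n), key=agreements)

    rng = random.Random(1)
    secret = [1, 0, 1, 1, 0, 1]
    samples = lpn_samples(secret, 400, 0.125, rng)
    assert list(recover(samples, len(secret))) == secret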
Last updated:  2022-08-03
An $\mathcal{O}(n)$ Algorithm for Coefficient Grouping
Fukang Liu
In this note, we study a specific optimization problem arising in the recently proposed coefficient grouping technique, which is used for degree evaluation. Specifically, we show that there exists an efficient algorithm running in time $\mathcal{O}(n)$ to solve a basic optimization problem relevant to upper-bounding the algebraic degree. We expect that some results in this note can inspire more studies on other optimization problems in the coefficient grouping technique.
Last updated:  2022-08-03
PipeMSM: Hardware Acceleration for Multi-Scalar Multiplication
Charles F. Xavier
Multi-Scalar Multiplication (MSM) is a fundamental computational problem. Interest in this problem was recently prompted by its application to ZK-SNARKs, where it often turns out to be the main computational bottleneck. In this paper we set forth a pipelined design for computing MSM. Our design is based on a novel algorithmic approach and hardware-specific optimizations. At the core, we rely on a modular multiplication technique which we deem to be of independent interest. We implemented and tested our design on FPGA. We highlight the promise of optimized hardware over state-of-the-art GPU-based MSM solvers in terms of speed and energy expenditure.
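Although the paper's pipelined design is hardware-specific, the window-and-bucket structure of MSM itself is standard. The sketch below illustrates that structure with plain integers standing in for curve points (so integer addition plays the role of point addition and a shift plays the role of repeated doubling); it is the textbook Pippenger-style baseline, not the paper's algorithm.

    def msm_buckets(scalars, points, c=4):
        """Pippenger-style bucket method for sum_i scalars[i] * points[i]."""
        max_bits = max(s.bit_length() for s in scalars)
        num_windows = -(-max_bits // c)          # ceil(max_bits / c)
        total = 0
        for w in reversed(range(num_windows)):
            total <<= c                          # c "doublings" of the accumulator
            buckets = [0] * (1 << c)
            for s, p in zip(scalars, points):
                digit = (s >> (w * c)) & ((1 << c) - 1)
                if digit:
                    buckets[digit] += p          # one group addition per point
            running, window_sum = 0, 0
            for j in range(len(buckets) - 1, 0, -1):
                running += buckets[j]            # running-sum trick computes
                window_sum += running            # sum_j j * buckets[j] cheaply
            total += window_sum
        return total

    scalars, points = [3, 5, 200], [7, 11, 13]
    assert msm_buckets(scalars, points) == sum(s * p for s, p in zip(scalars, points))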
Last updated:  2022-08-03
Quantum-Resistant Password-Based Threshold Single-Sign-On Authentication with Updatable Server Private Key
Jingwei Jiang, Ding Wang, Guoyin Zhang, and Zhiyuan Chen
Passwords are the most prevalent authentication mechanism and proliferate on nearly every new web service. As users are overloaded with the task of managing dozens, or even hundreds, of passwords, password-based single-sign-on (SSO) schemes have been proposed. In password-based SSO schemes, the authentication server needs to maintain a sensitive password file, which is an attractive target for compromise and poses a single point of failure. Hence, the notion of a password-based threshold authentication (PTA) system has been proposed. However, a static PTA system is threatened by perpetual leakage (e.g., the adversary perpetually compromises servers). In addition, most of the existing PTA schemes are built on the intractability of conventional hard problems and become insecure in the quantum era. In this work, we first propose a threshold oblivious pseudorandom function (TOPRF) to harden the password so that PTA schemes can resist offline password guessing attacks. Then, we employ the threshold homomorphic aggregate signature (THAS) over lattices to construct the first quantum-resistant password-based threshold single-sign-on authentication scheme with an updatable server private key. Our scheme resolves various issues arising from user corruption and server compromise, and it is formally proved secure against quantum adversaries. Comparison results show that our scheme is superior to its counterparts.
Last updated:  2022-08-03
On the Hardness of the Finite Field Isomorphism Problem
Dipayan Das and Antoine Joux
The finite field isomorphism (FFI) problem, introduced in PKC'18, is an alternative to average-case lattice problems (like LWE, SIS, or NTRU). As an application, the FFI problem is used to construct a fully homomorphic encryption scheme in the same paper. In this work, we prove that the decision variant of the FFI problem can be solved in polynomial time for any field characteristic $q = \Omega(\beta n^2)$, where $q,\beta,n$ parametrize the FFI problem. We then use our FFI distinguisher to propose a polynomial-time attack on the semantic security of the fully homomorphic encryption scheme. Furthermore, for completeness, we also study the search variant of the FFI problem and show how to state it as a $q$-ary lattice problem, which was previously unknown. As a result, we can solve the search problem for some previously intractable parameters using a simple lattice reduction approach.
Last updated:  2022-08-03
Key-Recovery Attacks on CRAFT and WARP (Full Version)
Ling Sun, Wei Wang, and Meiqin Wang
This paper considers the security of CRAFT and WARP. We present a practical key-recovery attack on full-round CRAFT in the related-key setting with only one differential characteristic, and the theoretical time complexity of the attack is $2^{36.09}$ full-round encryptions. The attack is verified in practice. The test result indicates that the theoretical analysis is valid, and it takes about $15.69$ hours to retrieve the key. A full-round key-recovery attack on WARP in the related-key setting is proposed, and the time complexity is $2^{44.58}$ full-round encryptions. The theoretical attack is implemented on a round-reduced version of WARP, which guarantees validity. Besides, we give a 33-round multiple zero-correlation linear attack on WARP, which is the longest attack on the cipher in the single-key attack setting. We note that the attack results in this paper do not threaten the security of CRAFT and WARP as the designers do not claim security under the related-key attack setting.
Last updated:  2022-08-03
Fast Hashing to $G_2$ in Direct Anonymous Attestation
Yu Dai, Fangguo Zhang, and Chang-An Zhao
To reduce the workload of the Trusted Platform Module (TPM) without affecting the security in pairing-based direct anonymous attestation (DAA) schemes, it is feasible to select pairing-friendly curves that provide fast group operations in the first pairing subgroup. In this scenario, the BW13-P310 and BW19-P286 curves become competitive. In order to improve the efficiency of the DAA schemes based on these curves, it is also necessary to design an efficient algorithm for hashing to $G_2$. In this paper, we first generalize the previous work to address the bottlenecks involved in hashing to $G_2$ on the two curves. On this basis, we further optimize the hashing algorithm, which would be nearly twice as fast as the previous one in theory. These techniques actually can be applied to a large class of curves. We also implement the proposed algorithms over the BW13-P310 curve on a 64-bit computing platform.
Last updated:  2022-08-03
Opportunistic Algorithmic Double-Spending: How I learned to stop worrying and hedge the Fork
Nicholas Stifter, Aljosha Judmayer, Philipp Schindler, and Edgar Weippl
In this paper, we outline a novel form of attack we refer to as Opportunistic Algorithmic Double-Spending (OpAl). OpAl attacks avoid equivocation, i.e., do not require conflicting transactions, and are carried out automatically in case of a fork. Algorithmic double-spending is facilitated through transaction semantics that dynamically depend on the context and ledger state at the time of execution. Hence, OpAl evades common double-spending detection mechanisms and can opportunistically leverage forks, even if the malicious sender themselves is not responsible for, or even actively aware of, any fork. Forkable ledger designs with expressive transaction semantics, especially stateful EVM-based smart contract platforms such as Ethereum, are particularly vulnerable. Hereby, the cost of modifying a regular transaction to opportunistically perform an OpAl attack is low enough to consider it a viable default strategy. While Bitcoin’s stateless UTXO model, or Cardano’s EUTXO model, appear more robust against OpAl, we nevertheless demonstrate scenarios where transactions are semantically malleable and thus vulnerable. To determine whether OpAl-like semantics can be observed in practice, we analyze the execution traces of 922562 transactions on the Ethereum blockchain. Hereby, we are able to identify transactions, which may be associated with frontrunning and MEV bots, that exhibit some of the design patterns also employed as part of the herein presented attack.
Last updated:  2022-08-03
Faster Sounder Succinct Arguments and IOPs
Justin Holmgren and Ron Rothblum
Succinct arguments allow a prover to convince a verifier that a given statement is true, using an extremely short proof. A major bottleneck that has been the focus of a large body of work is in reducing the overhead incurred by the prover in order to prove correctness of the computation. By overhead we refer to the cost of proving correctness, divided by the cost of the original computation. In this work, for a large class of Boolean circuits $C=C(x,w)$, we construct succinct arguments for the language $\{ x : \exists w\; C(x,w)=1\}$, with $2^{-\lambda}$ soundness error, and with prover overhead $\mathsf{polylog}(\lambda)$. This result relies on the existence of (sub-exponentially secure) linear-size computable collision-resistant hash functions. The class of Boolean circuits that we can handle includes circuits with a repeated sub-structure, which arise in natural applications such as batch computation/verification, hashing, and related blockchain applications. The succinct argument is obtained by constructing \emph{interactive oracle proofs} for the same class of languages, with $\mathsf{polylog}(\lambda)$ prover overhead, and soundness error $2^{-\lambda}$. Prior to our work, the best IOPs for Boolean circuits either had prover overhead of $\mathsf{polylog}(|C|)$ based on efficient PCPs due to Ben-Sasson et al. (STOC, 2013) or $\mathsf{poly}(\lambda)$ due to Rothblum and Ron-Zewi (STOC, 2022).
Last updated:  2022-08-03
A New Look at Blockchain Leader Election: Simple, Efficient, Sustainable and Post-Quantum
Muhammed F. Esgin, Oguzhan Ersoy, Veronika Kuchta, Julian Loss, Amin Sakzad, Ron Steinfeld, Wayne Yang, and Raymond K. Zhao
In this work, we study the blockchain leader election problem. The purpose of such protocols is to elect a leader who decides on the next block to be appended to the blockchain, for each block proposal round. Solutions to this problem are vital for the security of blockchain systems. We introduce an efficient blockchain leader election method with security based solely on standard assumptions for cryptographic hash functions (rather than public-key cryptographic assumptions) and that does not involve a race condition as in Proof-of-Work-based approaches. Thanks to the former feature, our solution provides the highest confidence in security, even in the post-quantum era. A particularly scalable application of our solution is in the Proof-of-Stake setting, and we investigate our solution in the Algorand blockchain system. We believe our leader election approach can be easily adapted to a range of other blockchain settings. At the core of Algorand's leader election is a verifiable random function (VRF). Our approach is based on introducing a simpler primitive which still suffices for the blockchain leader election problem. In particular, we analyze the concrete requirements in an Algorand-like blockchain setting to accomplish leader election, which leads to the introduction of indexed VRF (iVRF). An iVRF satisfies modified uniqueness and pseudorandomness properties (versus a full-fledged VRF) that enable an efficient instantiation based on a hash function without requiring any complicated zero-knowledge proofs of correct PRF evaluation. We further extend iVRF to an authenticated iVRF with forward-security, which meets all the requirements to establish an Algorand-like consensus. Our solution is simple, flexible, and incurs only a 32-byte additional overhead when combined with the current best solution to constructing a forward-secure signature (in the post-quantum setting). We implemented our (authenticated) iVRF proposal in the C language on a standard computer and show that our proposal significantly outperforms other quantum-safe VRF proposals in almost all metrics. Particularly, iVRF evaluation and verification can be executed in 0.02 ms, which is even faster than ECVRF used in Algorand.
Last updated:  2022-08-02
Sequential Digital Signatures for Cryptographic Software-Update Authentication
Bertram Poettering and Simon Rastikian
Consider a computer user who needs to update a piece of software installed on their computing device. To do so securely, a commonly accepted ad-hoc method stipulates that the old software version first retrieves the update information from the vendor's public repository, then checks that a cryptographic signature embedded into it verifies with the vendor's public key, and finally replaces itself with the new version. This updating method seems to be robust and lightweight, and to reliably ensure that no malicious third party (e.g., a distribution mirror) can inject harmful code into the update process. Unfortunately, recent prominent news reports (SolarWinds, Stuxnet, TikTok, Zoom, ...) suggest that nation-state adversaries are broadening their efforts related to attacking software supply chains. This calls for a critical re-evaluation of the described signature-based updating method with respect to the real-world security it provides against particularly powerful adversaries. We approach the setting by formalizing a cryptographic primitive that addresses specifically the secure software updating problem. We define strong, rigorous security models that capture forward security (stealing a vendor's key today doesn't allow modifying yesterday's software version) as well as a form of self-enforcement that helps protect vendors against coercion attacks in which they are forced, e.g., by nation-state actors, to misuse or disclose their keys. We note that the common signature-based software authentication method described above meets neither the one nor the other goal, and thus represents a suboptimal solution. Hence, after formalizing the syntax and security of the new primitive, we propose novel, efficient, and provably secure constructions.
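The ad-hoc method revisited above amounts to a single signature verification against a long-lived vendor key. A minimal sketch using Ed25519 from the Python cryptography package follows; it is illustrative only and provides neither the forward security nor the self-enforcement that the new primitive targets.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Vendor side (done once; the same long-lived key signs every future update).
    vendor_sk = Ed25519PrivateKey.generate()
    vendor_pk = vendor_sk.public_key()

    update_blob = b"new-version-binary"        # placeholder for the update payload
    signature = vendor_sk.sign(update_blob)

    # Client side: accept the update only if the signature verifies.
    def accept_update(blob, sig, pk):
        try:
            pk.verify(sig, blob)
            return True                        # install the new version
        except InvalidSignature:
            return False                       # reject a (possibly tampered) update

    assert accept_update(update_blob, signature, vendor_pk)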
Last updated:  2022-08-02
A toolbox for verifiable tally-hiding e-voting systems
Véronique Cortier, Pierrick Gaudry, and Quentin Yang
In most verifiable electronic voting schemes, one key step is the tally phase, where the election result is computed from the encrypted ballots. A generic technique consists in first applying (verifiable) mixnets to the ballots and then revealing all the votes in the clear. This however discloses much more information than the result of the election itself (that is, the winners) and may offer the possibility to coerce voters. In this paper, we present a collection of building blocks for designing tally-hiding schemes based on multi-party computations. As an application, we propose the first efficient tally-hiding schemes with no leakage for four important counting functions: D'Hondt, Condorcet, STV, and Majority Judgment. We prove that they can be used to design a private and verifiable voting scheme. We also unveil unknown flaws or leakage in several previously proposed tally-hiding schemes.
Last updated:  2022-08-02
He-HTLC: Revisiting Incentives in HTLC
Sarisht Wadhwa, Jannis Stoeter, Fan Zhang, and Kartik Nayak
Hashed Time-Locked Contracts (HTLCs) are a widely used primitive in blockchain systems such as payment channels, atomic swaps, etc. Unfortunately, HTLC is incentive-incompatible and is vulnerable to bribery attacks. The state-of-the-art solution is MAD-HTLC (Oakland'21), which proposes an elegant idea that leverages miners' profit-driven nature to defeat bribery attacks. In this paper, we show that MAD-HTLC is still vulnerable as it only considers a somewhat narrow set of passive strategies by miners. Through a family of novel reverse-bribery attacks, we show concrete active strategies that miners can take to break MAD-HTLC and profit at the loss of MAD-HTLC users. For these attacks, we present their implementation and game-theoretical profitability analysis. Based on the learnings from our attacks, we propose a new HTLC realization, He-HTLC, that is provably secure against all possible strategic manipulation (passive and active). (Our specification is lightweight and inert to incentive manipulation attacks; hence, we call it He-HTLC, where He stands for Helium.) In addition to being secure in a stronger adversary model, He-HTLC achieves other desirable features such as low and user-adjustable collateral, making it more practical to implement and use the proposed schemes. We implemented He-HTLC on Bitcoin, and the transaction cost of He-HTLC is comparable to average Bitcoin transaction fees.
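For reference, the vanilla HTLC rule that both MAD-HTLC and He-HTLC harden can be sketched as follows; this is a plain illustration of the standard redeem/refund logic, not of He-HTLC itself, and the names and fields are hypothetical.

    import hashlib

    def htlc_spend(preimage, claimer, current_height, *, digest, receiver, sender, timeout):
        """Standard HTLC rule: the receiver may claim with the hash preimage
        before the timeout; after the timeout the sender can reclaim the funds."""
        if (claimer == receiver
                and current_height < timeout
                and hashlib.sha256(preimage).digest() == digest):
            return "pay receiver"
        if claimer == sender and current_height >= timeout:
            return "refund sender"
        return "reject"

    secret = b"preimage"
    h = hashlib.sha256(secret).digest()
    assert htlc_spend(secret, "bob", 99,
                      digest=h, receiver="bob", sender="alice", timeout=100) == "pay receiver"
    assert htlc_spend(b"", "alice", 150,
                      digest=h, receiver="bob", sender="alice", timeout=100) == "refund sender"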
Last updated:  2022-08-02
Efficient Computation of (2^n,2^n)-Isogenies
Sabrina Kunzweiler
Elliptic curves are abelian varieties of dimension one; their two-dimensional analogues are abelian surfaces. In this work we present an algorithm to compute $(2^n,2^n)$-isogenies of abelian surfaces defined over finite fields. These isogenies are the natural generalization of $2^n$-isogenies of elliptic curves. Our algorithm is designed to be used in higher-dimensional variants of isogeny-based cryptographic protocols such as G2SIDH, which is a genus-$2$ version of the Supersingular Isogeny Diffie-Hellman (SIDH) key exchange. We analyze the performance of our algorithm in cryptographically relevant settings and show that it significantly improves upon previous implementations. Different results deduced in the development of our algorithm are also interesting beyond this application. For instance, we derive a formula for the evaluation of $(2,2)$-isogenies. Given an element in Mumford coordinates, this formula outputs the (unreduced) Mumford coordinates of its image under the $(2,2)$-isogeny. Furthermore, we study $4$-torsion points on Jacobians of hyperelliptic curves and explain how to extract square-roots of coefficients of $2$-torsion points from these points.
Last updated:  2022-08-02
Modeling and Simulating the Sample Complexity of solving LWE using BKW-Style Algorithms
Qian Guo, Erik Mårtensson, and Paul Stankovski Wagner
The Learning with Errors (LWE) problem receives much attention in cryptography, mainly due to its fundamental significance in post-quantum cryptography. Among its solving algorithms, the Blum-Kalai-Wasserman (BKW) algorithm, originally proposed for solving the Learning Parity with Noise (LPN) problem, performs well, especially for certain parameter settings with cryptographic importance. The BKW algorithm consists of two phases, the reduction phase and the solving phase. In this work, we study the performance of distinguishers used in the solving phase. We show that the Fast Fourier Transform (FFT) distinguisher from Eurocrypt’15 has the same sample complexity as the optimal distinguisher, when making the same number of hypotheses. We also show via simulation that it performs much better than previous theory predicts and develop a sample complexity model that matches the simulations better. We also introduce an improved, pruned version of the FFT distinguisher. Finally, we indicate, via extensive experiments, that the sample dependency due to both LF2 and sample amplification is limited.
Last updated:  2022-08-02
A Signature-Based Gröbner Basis Algorithm with Tail-Reduced Reductors (M5GB)
Manuel Hauke, Lukas Lamster, Reinhard Lüftenegger, and Christian Rechberger
Gröbner bases are an important tool in computational algebra and, especially in cryptography, often serve as a boilerplate for solving systems of polynomial equations. Research regarding (efficient) algorithms for computing Gröbner bases spans a large body of dedicated work that stretches over the last six decades. The pioneering work of Bruno Buchberger in 1965 can be considered as the blueprint for all subsequent Gröbner basis algorithms to date. Among the most efficient algorithms in this line of work are signature-based Gröbner basis algorithms, with the first of its kind published in the early 2000s by Jean-Charles Faugère under the name $\texttt{F5}$. In addition to signature-based approaches, Rusydi Makarim and Marc Stevens investigated a different direction to efficiently compute Gröbner bases, which they published in 2017 with their algorithm $\texttt{M4GB}$. The ideas behind $\texttt{M4GB}$ and signature-based approaches are conceptually orthogonal to each other because each approach addresses a different source of inefficiency in Buchberger's initial algorithm by different means. We amalgamate those orthogonal ideas and devise a new Gröbner basis algorithm, called $\texttt{M5GB}$, that combines the concepts of both worlds. In that capacity, $\texttt{M5GB}$ merges strong signature-criteria to eliminate redundant S-pairs with concepts for fast polynomial reductions borrowed from $\texttt{M4GB}$. We provide proofs of termination and correctness and a proof-of-concept implementation in C++ by means of the Mathic library. The comparison with a state-of-the-art signature-based Gröbner basis algorithm (implemented via the same library) validates our expectations of an overall faster runtime for quadratic overdefined polynomial systems that have been used in comparisons before in the literature and are also part of cryptanalytic challenges.
Last updated:  2022-08-02
Quantum Attacks on Lai-Massey Structure
Shuping Mao, Tingting Guo, Peng Wang, and Lei Hu
Aaram Yun et al. considered that the Lai-Massey structure has the same security as the Feistel structure. However, Luo et al. showed that the 3-round Lai-Massey structure can resist quantum attacks based on Simon's algorithm, which differs from the Feistel structure. We give quantum attacks against a typical Lai-Massey structure. The result shows that there exists a quantum CPA distinguisher against the 3-round Lai-Massey structure and a quantum CCA distinguisher against the 4-round Lai-Massey structure, which is the same as for the Feistel structure. We extend the attack on the Lai-Massey structure to the quasi-Feistel structure. We show that if the combiner of the quasi-Feistel structure is linear, there exists a quantum CPA distinguisher against the 3-round balanced quasi-Feistel structure and a quantum CCA distinguisher against the 4-round balanced quasi-Feistel structure.
Last updated:  2022-08-02
Privacy when Everyone is Watching: An SOK on Anonymity on the Blockchain
Roy Rinberg and Nilaksh Agarwal
Blockchain technologies rely on a public ledger, where typically all transactions are pseudonymous and fully traceable. This poses a major obstacle to the large-scale adoption of cryptocurrencies, the primary application of blockchain technologies, as most individuals do not want to disclose their finances to the public. Motivated by the explosive growth in private-blockchain research, this Systematization-of-Knowledge (SOK) explores the ways to obtain privacy in this public ledger ecosystem. The authors first look at the underlying technology behind all zero-knowledge applications on the blockchain: zk-SNARKs (zero-knowledge Succinct Non-interactive ARguments of Knowledge). We then explore the two largest privacy coins as of today, Zcash and Monero, as well as TornadoCash, a popular Ethereum tumbler solution. Finally, we look at the opposing incentives behind privacy solutions and de-anonymization techniques, and the future of privacy on the blockchain.
Last updated:  2022-08-01
ToSHI - Towards Secure Heterogeneous Integration: Security Risks, Threat Assessment, and Assurance
Nidish Vashistha, Md Latifur Rahman, Md Saad Ul Haque, Azim Uddin, Md Sami Ul Islam Sami, Amit Mazumder Shuo, Paul Calzada, Farimah Farahmandi, Navid Asadizanjani, Fahim Rahman, and Mark Tehranipoor
The semiconductor industry is entering a new age in which device scaling and cost reduction will no longer follow the decades-long pattern. Packing more transistors on a monolithic IC at each node becomes more difficult and expensive. Companies in the semiconductor industry are increasingly seeking technological solutions to close the gap and enhance cost-performance while providing more functionality through integration. Putting all of the operations on a single chip (known as a system on a chip, or SoC) presents several issues, including increased prices and greater design complexity. Heterogeneous integration (HI), which uses advanced packaging technology to merge components that might be designed and manufactured independently using the best process technology, is an attractive alternative. However, although the industry is motivated to move towards HI, many design and security challenges must be addressed. This paper presents a three-tier security approach for secure heterogeneous integration by investigating supply chain security risks, threats, and vulnerabilities at the chiplet, interposer, and system-in-package levels. Furthermore, possible trust validation methods and attack mitigations are proposed for every level of heterogeneous integration. Finally, we share our vision of a roadmap toward developing security solutions for secure heterogeneous integration.
Last updated:  2022-08-01
Caulk+: Table-independent lookup arguments
Jim Posen and Assimakis A. Kattis
The recent work of Caulk introduces the security notion of position hiding linkability for vector commitment schemes, providing a zero-knowledge argument that a committed vector's elements comprise a subset of some other committed vector. The protocol has very low cost to the prover in the case where the size $m$ of the subset vector is much smaller than the size $n$ of the one containing it. The asymptotic prover complexity is $O(m^2 + m \log n)$, where the $\log n$ dependence comes from a subprotocol showing that the roots of a blinded polynomial are all $n$th roots of unity. In this work, we show how to simplify this argument, replacing the subprotocol with a polynomial divisibility check and thereby reducing the asymptotic prover complexity to $O(m^2)$, removing any dependence on $n$.
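The algebraic fact behind this simplification can be stated generically (a sketch of the identity only, not of the full Caulk+ protocol): for the subgroup $H$ of $n$th roots of unity with vanishing polynomial $Z_H(X) = X^n - 1$, a monic polynomial $z(X)$ with distinct roots has all of its roots in $H$ exactly when $z(X)$ divides $Z_H(X)$, i.e. when there exists a quotient $q(X)$ with
\[
  X^{n} - 1 = z(X)\, q(X),
\]
so the roots-of-unity condition reduces to exhibiting $q(X)$ and checking a single polynomial divisibility relation.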