Papers updated in last 365 days (Page 23 of 2726 results)

Last updated:  2023-07-11
How to Compile Polynomial IOP into Simulation-Extractable SNARKs: A Modular Approach
Markulf Kohlweiss, Mahak Pancholi, Akira Takahashi
Most succinct arguments (SNARKs) are initially only proven knowledge sound (KS). We show that the commonly employed compilation strategy from polynomial interactive oracle proofs (PIOP) via polynomial commitments to knowledge sound SNARKs actually also achieves other desirable properties: weak unique response (WUR) and trapdoorless zero-knowledge (TLZK); and that together they imply simulation extractability (SIM-EXT). The factoring of SIM-EXT into KS + WUR + TLZK is becoming a cornerstone of the analysis of non-malleable SNARK systems. We show how to prove WUR and TLZK for PIOP-compiled SNARKs under mild falsifiable assumptions on the polynomial commitment scheme. This means that the analysis of knowledge soundness from PIOP properties, which inherently relies on non-falsifiable or idealized assumptions such as the algebraic group model (AGM) or the generic group model (GGM), need not be repeated. While the proof of WUR requires only mild assumptions on the PIOP, TLZK is a different matter. As perfectly hiding polynomial commitments sometimes come at a substantial performance premium, SNARK designers prefer to employ deterministic commitments with some leakage. This results in the need for a stronger zero-knowledge property for the PIOP. The modularity of our approach implies that any analysis improvements, e.g., in terms of tightness, credibility of the knowledge assumption and model of the KS analysis, or the precision of capturing real-world optimizations for TLZK, also benefit the SIM-EXT guarantees.
Last updated:  2023-07-11
Security-Preserving Distributed Samplers: How to Generate any CRS in One Round without Random Oracles
Damiano Abram, Brent Waters, Mark Zhandry
A distributed sampler is a way for several mutually distrusting parties to non-interactively generate a common reference string (CRS) that all parties trust. Previous work constructs distributed samplers in the random oracle model, or in the standard model with very limited security guarantees. This is no accident, as standard model distributed samplers with full security were shown impossible. In this work, we provide new definitions for distributed samplers which we show achieve meaningful security guarantees in the standard model. In particular, our notion implies that the hardness of a wide range of security games is preserved when the CRS is replaced with a distributed sampler. We also show how to realize our notion of distributed samplers. A core technical tool enabling our construction is a new notion of single-message zero knowledge.
Last updated:  2023-07-11
Bypassing Android isolation with fuel gauges: new risks with advanced power ICs
Vincent Giraud, David Naccache
Efficient power management is critical for embedded devices, both for extending their lifetime and ensuring safety. However, this can be a challenging task due to the unpredictability of the batteries commonly used in such devices. To address this issue, dedicated Integrated Circuits known as "fuel gauges" are often employed outside of the System-On-Chip. These devices provide various metrics about the available energy source and are highly accurate. However, their precision can also be exploited by malicious actors to compromise platform confidentiality if the Operating System fails to intervene. Depending on the fuel gauge and OS configuration, several attack scenarios are possible. In this article, we focus on Android and demonstrate how it is possible to bypass application isolation to recover PINs entered in other processes.
Last updated:  2023-07-11
Practical Asynchronous Distributed Key Generation: Improved Efficiency, Weaker Assumption, and Standard Model
Haibin Zhang, Sisi Duan, Chao Liu, Boxin Zhao, Xuanji Meng, Shengli Liu, Yong Yu, Fangguo Zhang, Liehuang Zhu
Distributed key generation (DKG) allows bootstrapping threshold cryptosystems without relying on a trusted party, nowadays enabling fully decentralized applications in blockchains and multiparty computation (MPC). While we have recently seen new advancements for asynchronous DKG (ADKG) protocols, their performance remains the bottleneck for many applications, with only one protocol being implemented (DYX+ ADKG, IEEE S&P 2022). DYX+ ADKG relies on the Decisional Composite Residuosity assumption (which is expensive to instantiate) and the Decisional Diffie-Hellman assumption, incurring a high latency (more than 100s with a failure threshold of 16). Moreover, the security of DYX+ ADKG is based on the random oracle model (ROM), which treats the hash function as an ideal function; assuming the existence of a random oracle is a strong assumption, and to date no theoretically sound instantiation is known. Furthermore, the DYX+ ADKG protocol needs a public key infrastructure (PKI) to support the trustworthiness of public keys. The strong models (ROM and PKI) further limit the applicability of DYX+ ADKG, as they would add extra and strong assumptions to underlying threshold cryptosystems. For instance, if the original threshold cryptosystem works in the standard model, then the system using DYX+ ADKG would need to use the ROM and a PKI. In this paper, we design and implement a modular ADKG protocol that offers improved efficiency and stronger security guarantees. We explore a novel and much more direct reduction from ADKG to the underlying building blocks, reducing the computational overhead and communication rounds of ADKG in the normal case. Our protocol works for both the low-threshold and high-threshold scenarios and is secure under a standard assumption (the well-established discrete logarithm assumption only) in the standard model (no trusted setup, ROM, or PKI).
Last updated:  2023-07-10
Streebog as a Random Oracle
Liliya Akhmetzyanova, Alexandra Babueva, Andrey Bozhko
The random oracle model is an instrument used for proving that a protocol has no structural flaws when relying on standard hash properties alone is impossible or fairly difficult. In practice, however, random oracles have to be instantiated with some specific hash functions, which are not random oracles. Hence, in the real world, an adversary has broader capabilities than those considered in the random oracle proof — it can exploit the peculiarities of a specific hash function to achieve its goal. When a hash function is based on some building block, one can go further and show that even if the adversary has access to that building block, the hash function still behaves like a random oracle under some assumptions made about the building block. Thereby, the protocol can be proved secure against more powerful adversaries under less complex assumptions. The indifferentiability notion formalizes that approach. In this paper we study whether Streebog, a Russian standardized hash function, can instantiate a random oracle from that point of view. We prove that Streebog is indifferentiable from a random oracle under an ideal cipher assumption for the underlying block cipher.
Last updated:  2023-07-10
The Reality of Backdoored S-Boxes - An Eye Opener
Shah Fahd, Mehreen Afzal, Waseem Iqbal, Dawood Shah, Ijaz Khalid
The analysis of real-life incidents has revealed that state-level efforts are made to camouflage intentional flaws in the mathematical layer of an S-Box in order to exploit its information-theoretic properties (e.g., Kuznyechik). To extract and investigate the common features in the backdoored S-Box(es), this research thoroughly examines them from the perspective of 24 cryptanalytic attack vectors available in the open literature. We have debunked the earlier claims by the backdoor engineers that their designs are stealthy against statistical distinguishers. A backdoored architecture fulfils the notions of randomness but lacks the strength to resist sophisticated cryptanalytic attacks. Our analysis has revealed that during the backdoor insertion phase, a malicious designer compromises vital cryptographic properties, prominently the algebraic degree, differential trails, and avalanche characteristics, leaving open ground for hybrid attacks. It is observed that these mappings attain the upper bound of the BCT, FBCT, and DLCT, thus paving the way for hybrid attacks with high probability.
Last updated:  2023-07-09
Digital signature schemes using non-square matrices or scrap automorphisms
Jiale Chen, Dima Grigoriev, Vladimir Shpilrain
We offer two very transparent digital signature schemes: one using non-square matrices and the other using scrap automorphisms. The former can be easily converted to a public key encryption scheme.
Last updated:  2023-07-09
Optical Cryptanalysis: Recovering Cryptographic Keys from Power LED Light Fluctuations
Ben Nassi, Ofek Vayner, Etay Iluz, Dudi Nassi, Or Hai Cohen, Jan Jancar, Daniel Genkin, Eran Tromer, Boris Zadov, Yuval Elovici
Although power LEDs have been integrated for decades into various devices that perform cryptographic operations, the cryptanalysis risk they pose has not yet been investigated. In this paper, we present optical cryptanalysis, a new form of cryptanalytic side-channel attack, in which secret keys are extracted by using a photodiode to measure the light emitted by a device’s power LED and analyzing subtle fluctuations in the light intensity during cryptographic operations. We analyze the optical leakage of power LEDs of various consumer devices and the factors that affect the optical SNR. We then demonstrate end-to-end optical cryptanalytic attacks against a range of consumer devices (smartphone, smartcard, and Raspberry Pi, along with their USB peripherals) and recover secret keys (RSA, ECDSA, SIKE) from prior and recent versions of popular cryptographic libraries (GnuPG, Libgcrypt, PQCrypto-SIDH) from a maximum distance of 25 meters.
Last updated:  2023-07-08
Deterministic Wallets for Adaptor Signatures
Andreas Erwig, Siavash Riahi
Adaptor signatures are a new cryptographic primitive that binds the authentication of a message to the revelation of a secret value. In recent years, this primitive has gained increasing popularity both in academia and practice due to its versatile use-cases in different Blockchain applications such as atomic swaps and payment channels. The security of these applications, however, crucially relies on users storing and maintaining the secret values used by adaptor signatures in a secure way. For standard digital signature schemes, cryptographic wallets have been introduced to guarantee secure storage of keys and execution of the signing procedure. However, no prior work has considered cryptographic wallets for adaptor signatures. In this work, we introduce the notion of adaptor wallets. Adaptor wallets allow parties to securely use and maintain adaptor signatures in the Blockchain setting. Our adaptor wallets are both deterministic and operate in the hot/cold paradigm, which was first formalized by Das et al. (CCS 2019) for standard signature schemes. We introduce a new cryptographic primitive called adaptor signatures with rerandomizable keys, and use it to generically construct adaptor wallets. We further show how to instantiate adaptor signatures with rerandomizable keys from the ECDSA signature scheme and discuss that they can likely be built for Schnorr and Katz-Wang schemes as well. Finally, we discuss the limitations of the existing ECDSA- and Schnorr-based adaptor signatures w.r.t. deterministic wallets in the hot/cold setting and prove that it is impossible to overcome these drawbacks given the current state-of-the-art design of adaptor signatures.
Last updated:  2023-07-08
Efficient Arguments and Proofs for Batch Arithmetic Circuit Satisfiability
Jieyi Long
In this paper, we provide a systematic treatment of the batch arithmetic circuit satisfiability and evaluation problem. Building on the core idea of treating circuit inputs/outputs as low-degree polynomials, we explore various interactive argument and proof schemes that can produce succinct proofs with short verification time. In particular, for the batch satisfiability problem, we provide a construction of a succinct interactive argument of knowledge for generic log-space uniform circuits based on the bilinear pairing and common reference string assumptions. Our argument has size $O(\mathrm{poly}(\lambda) \cdot (|\mathbf{w}| + d \log |C|))$, where $\lambda$ is the security parameter, $|\mathbf{w}|$ is the size of the witness, and $d$ and $|C|$ are the depth and size of the circuit, respectively. Note that the argument size is independent of the batch size. To the best of our knowledge, it is asymptotically the smallest among all known batch argument schemes that allow public verification. The batch satisfiability problem simplifies to a batch evaluation problem when the circuit only takes in public inputs (i.e., no witness). For the evaluation problem, we construct statistically sound interactive proofs for various special yet highly important types of circuits, including linear circuits and circuits representing sums of polynomials. Our proposed protocols achieve proof sizes independent of the batch size. We also describe protocols optimized specifically for batch FFT and batch matrix multiplication which achieve desirable properties, including lower prover time and better composability. We believe these protocols are of interest in their own right and can be used as primitives in more complex applications.
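As a small, hedged illustration of the "batch values as a low-degree polynomial" viewpoint (and not the paper's actual encoding or commitment machinery), the Python sketch below treats the values carried by one circuit wire across a batch of k instances as the evaluations of a unique polynomial of degree less than k over a prime field; the field modulus, evaluation domain, and wire values are arbitrary choices made for the example.

```python
# Hedged sketch: batched wire values viewed as a low-degree polynomial over GF(p).
p = 2**61 - 1  # prime modulus, chosen only for illustration

def interpolate(points):
    """Lagrange interpolation: coefficients (low to high) of the unique polynomial
    of degree < len(points) over GF(p) passing through the given (x, y) pairs."""
    coeffs = [0] * len(points)
    for i, (xi, yi) in enumerate(points):
        basis = [1]      # running product of (X - xj) for j != i
        denom = 1        # running product of (xi - xj) for j != i
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            new = [0] * (len(basis) + 1)
            for k, c in enumerate(basis):       # multiply basis by (X - xj)
                new[k] = (new[k] - xj * c) % p
                new[k + 1] = (new[k + 1] + c) % p
            basis = new
            denom = denom * (xi - xj) % p
        scale = yi * pow(denom, -1, p) % p
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % p
    return coeffs

def evaluate(coeffs, x):
    acc = 0
    for c in reversed(coeffs):                  # Horner's rule mod p
        acc = (acc * x + c) % p
    return acc

# The value of a single circuit wire across a batch of k = 4 instances,
# indexed by the evaluation domain {1, 2, 3, 4} (all values are made up).
wire_values = [7, 0, 42, 13]
points = list(zip(range(1, 5), wire_values))
poly = interpolate(points)                      # degree < 4, independent of the batch size
assert all(evaluate(poly, x) == y for x, y in points)
print("wire values encoded as a degree-%d polynomial" % (len(poly) - 1))
```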
Last updated:  2023-07-08
On the Scholz conjecture on addition chains
Theophilus Agama
Applying the pothole method to the factors of numbers of the form $2^n-1$, we prove the stronger inequality $$\iota(2^n-1)\leq n+1-\sum \limits_{j=1}^{\lfloor \frac{\log n}{\log 2}\rfloor}\xi(n,j)+3\lfloor\frac{\log n}{\log 2}\rfloor$$ for all $n\in \mathbb{N}$ with $n\geq 64$, where $0\leq \xi(n,j)<1$ and $\iota(\cdot)$ denotes the length of the shortest addition chain producing $\cdot$. This inequality is stronger than $$\iota(r)<\frac{\log r}{\log 2}\left(1+\frac{1}{\log \log r}+\frac{2\log 2}{(\log r)^{1-\log 2}}\right)$$ in the case $r=2^n-1$ but slightly weaker than the conjectured inequality $$\iota(2^n-1)\leq n-1+\iota(n).$$
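To make the quantity $\iota(\cdot)$ concrete, here is a hedged, minimal Python sketch (not from the paper) that finds the length of a shortest addition chain for small targets by iterative-deepening search, so one can check known small values such as $\iota(15)=5$ and compare them against the conjectured bound $\iota(2^n-1)\leq n-1+\iota(n)$ for tiny $n$; it is far too slow for the range $n\geq 64$ addressed by the theorem.

```python
# Hedged brute-force sketch of iota(n): the length of a shortest addition chain for n.
def shortest_addition_chain_length(target: int) -> int:
    """Iterative-deepening search over ascending chains 1 = a_0 < a_1 < ... < a_r = target,
    where every a_i (i >= 1) is a sum of two earlier (not necessarily distinct) elements.
    Returns r = iota(target).  Only practical for small targets."""
    if target == 1:
        return 0

    def extend(chain, steps_left):
        last = chain[-1]
        if last == target:
            return True
        # Each step at most doubles the largest element, so prune hopeless branches.
        if steps_left == 0 or (last << steps_left) < target:
            return False
        candidates = sorted({a + b for a in chain for b in chain if last < a + b <= target},
                            reverse=True)
        return any(extend(chain + [c], steps_left - 1) for c in candidates)

    depth = 1
    while True:
        if extend([1], depth):
            return depth
        depth += 1

# Check the conjectured bound iota(2^n - 1) <= n - 1 + iota(n) for a few tiny n.
for n in range(2, 6):
    lhs = shortest_addition_chain_length(2**n - 1)
    rhs = n - 1 + shortest_addition_chain_length(n)
    print(f"n={n}: iota(2^n-1)={lhs} <= {rhs}")   # e.g. iota(15)=5 <= 5 for n=4
```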
Last updated:  2023-07-08
A Note on ``A Lightweight and Privacy-Preserving Mutual Authentication and Key Agreement Protocol for Internet of Drones Environment''
Zhengjun Cao, Lihua Liu
We show that the key agreement scheme [IEEE Internet Things J., 9(12), 2022, 9918--9933] is flawed. In order to authenticate each other, all participants use a message authentication code (MAC) to generate tags for the exchanged data. But a MAC is a cryptographic technique which requires that the sender and receiver share a symmetric key. The scheme tries to establish a new shared key by using an old shared key, which results in a vicious circle. To the best of our knowledge, this is the first time such a flaw has been discussed in the related literature.
Last updated:  2023-07-07
VERICA - Verification of Combined Attacks: Automated formal verification of security against simultaneous information leakage and tampering
Jan Richter-Brockmann, Jakob Feldtkeller, Pascal Sasdrich, Tim Güneysu
Physical attacks, including passive Side-Channel Analysis and active Fault Injection Analysis, are considered among the most powerful threats against physical cryptographic implementations. These attacks are well known and research provides many specialized countermeasures to protect cryptographic implementations against them. Still, only a limited number of combined countermeasures, i.e., countermeasures that protect implementations against multiple attacks simultaneously, have been proposed in the past. Due to increasing complexity and reciprocal effects, the design of efficient and reliable combined countermeasures requires longstanding expertise in hardware design and security. With the help of formal security specifications and adversary models, automated verification can streamline development cycles, increase quality, and facilitate development of robust cryptographic implementations. In this work, we revise and refine formal security notions for combined protection mechanisms and specifically embed them in the context of hardware implementations. Based on this, we present the first automated verification framework that can verify physical security properties of hardware circuits with respect to combined physical attacks. To this end, we conduct several case studies to demonstrate the capabilities and advantages of our framework, analyzing secure building blocks (gadgets), S-boxes built from Toffoli gates, and the ParTI scheme. For the first time, we reveal security flaws in analyzed structures due to reciprocal effects, highlighting the importance of continuously integrating security verification into modern design and development cycles.
Last updated:  2023-07-07
Comparative Study of HDL algorithms for Intrusion Detection System in Internet of Vehicles
Manoj Srinivas Botla, Jai Bala Srujan Melam, Raja Stuthi Paul Pedapati, Srijanee Mookherji, Vanga Odelu, Rajendra Prasath
The Internet of Vehicles (IoV) has brought a technological revolution in the fields of intelligent transport systems and smart cities. With the rise in self-driven cars and AI-managed traffic systems, threats to such systems have increased significantly. There is an immediate need to mitigate such attacks and ensure security, trust and privacy. Any malfunctioning or misbehaviour in an IoV-based system can lead to fatal accidents. This is because IoV-based systems are sensitive in nature, involving human lives either on or off the roads. Any compromise of such systems can affect user safety and incur service delays. For IoV users, the Intrusion Detection System (IDS) is crucial to protect them from different malware-based attacks and to ensure the security of users and infrastructures. Machine learning approaches are used for extracting useful features from network traffic and also for predicting the patterns of anomalous activities. We use two datasets, namely the Balanced DDoS dataset and the Car-Hacking dataset, for a comparative study of intrusion detection using various machine learning approaches. The comparative study shows the differences among various machine learning and deep learning approaches on the two datasets.
Last updated:  2023-07-07
Revisiting Security Estimation for LWE with Hints from a Geometric Perspective
Dana Dachman-Soled, Huijing Gong, Tom Hanson, Hunter Kippen
The Distorted Bounded Distance Decoding Problem (DBDD) was introduced by Dachman-Soled et al. [Crypto ’20] as an intermediate problem between LWE and unique-SVP (uSVP). They presented an approach that reduces an LWE instance to a DBDD instance, integrates side information (or “hints”) into the DBDD instance, and finally reduces it to a uSVP instance, which can be solved via lattice reduction. They showed that this principled approach can lead to algorithms for side-channel attacks that perform better than ad-hoc algorithms that do not rely on lattice reduction. The current work focuses on new methods for integrating hints into a DBDD instance. We view hints from a geometric perspective, as opposed to the distributional perspective from the prior work. Our approach provides the rigorous promise that, as hints are integrated into the DBDD instance, the correct solution remains a lattice point contained in the specified ellipsoid. We instantiate our approach with two new types of hints: (1) Inequality hints, corresponding to the region of intersection of an ellipsoid and a halfspace; (2) Combined hints, corresponding to the region of intersection of two ellipsoids. Since the regions in (1) and (2) are not necessarily ellipsoids, we replace them with ellipsoidal approximations that circumscribe the region of intersection. Perfect hints are reconsidered as the region of intersection of an ellipsoid and a hyperplane, which is itself an ellipsoid. The compatibility of “approximate,” “modular,” and “short vector” hints from the prior work is examined. We apply our techniques to the decryption failure and side-channel attack settings. We show that “inequality hints” can be used to model decryption failures, and that our new approach yields a geometric analogue of the “failure boosting” technique of D’Anvers et al. [ePrint, ’18]. We also show that “combined hints” can be used to fuse information from a decryption failure and a side-channel attack, and provide rigorous guarantees despite the data being non-Gaussian. We provide experimental data for both applications. The code that we have developed to implement the integration of hints and hardness estimates extends the Toolkit from prior work and has been released publicly.
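As a small numerical illustration of the geometric viewpoint (a hedged sketch, not code from the paper's released toolkit): integrating a perfect hint ⟨v, s⟩ = l amounts to intersecting the current ellipsoid with a hyperplane, which shifts the center onto the hyperplane and deflates the shape matrix by a rank-one term along v. The update below is the standard conditioning-style formula used in the LWE-with-hints line of work; normalization and bookkeeping conventions may differ from the actual toolkit.

```python
import numpy as np

def integrate_perfect_hint(mu, Sigma, v, l):
    """Intersect the ellipsoid E = { x : (x - mu)^T Sigma^{-1} (x - mu) <= 1 } with the
    hyperplane { x : <v, x> = l }.  The intersection is again an ellipsoid (degenerate
    along v); we return its center and shape matrix.  Volume/dimension bookkeeping,
    as done in the DBDD toolkit, is omitted in this sketch."""
    v = np.asarray(v, dtype=float)
    Sv = Sigma @ v
    denom = float(v @ Sv)                 # v^T Sigma v, assumed nonzero
    gap = l - float(v @ mu)               # how far the hint value is from <v, mu>
    mu_new = mu + (gap / denom) * Sv      # center moves onto the hyperplane
    Sigma_new = Sigma - np.outer(Sv, Sv) / denom   # rank-one deflation: flat in the v direction
    return mu_new, Sigma_new

# Toy usage: a 3-dimensional "secret" and one perfect hint <(1,1,0), s> = 2.
mu, Sigma = np.zeros(3), np.diag([4.0, 4.0, 4.0])
v = np.array([1.0, 1.0, 0.0])
mu2, Sigma2 = integrate_perfect_hint(mu, Sigma, v, l=2.0)
print(v @ mu2)        # 2.0: the new center satisfies the hint exactly
print(Sigma2 @ v)     # ~[0, 0, 0]: no remaining uncertainty in the v direction
```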
Last updated:  2023-07-06
Provably Secure Blockchain Protocols from Distributed Proof-of-Deep-Learning
Xiangyu Su, Mario Larangeira, Keisuke Tanaka
Proof-of-useful-work (PoUW), an alternative to the widely used proof-of-work (PoW), aims to re-purpose the network's computing power. Namely, users evaluate meaningful computational problems, e.g., solving optimization problems, instead of computing numerous hash function values as in PoW. A recent approach utilizes the training process of deep learning as ``useful work''. However, these works lack security analysis when deployed with blockchain-based protocols, not to mention their informal and over-complicated system designs. This work proposes a distributed proof-of-deep-learning (D-PoDL) scheme that addresses PoUW's requirements. With a novel hash-training-hash structure and model-referencing mechanism, our scheme is the first deep learning-based PoUW scheme that enables achieving better accuracy distributively. Next, we introduce a transformation from the D-PoDL scheme to a generic D-PoDL blockchain protocol which can be instantiated with two chain selection rules, i.e., the longest-chain rule and the weight-based blockchain framework (LatinCrypt '21). This work is the first to provide formal proofs for deep learning-involved blockchain protocols concerning the robust ledger properties, i.e., chain growth, chain quality, and common prefix. Finally, we implement the D-PoDL scheme to discuss the effectiveness of our design.
Last updated:  2023-07-06
Peer-to-Peer Energy Trading Meets Blockchain: Consensus via Score-Based Bid Assignment
Xiangyu Su, Xavier Défago, Mario Larangeira, Kazuyuki Mori, Takuya Oda, Yuta Okumura, Yasumasa Tamura, Keisuke Tanaka
The demand for peer-to-peer energy trading (P2PET) grows alongside the advancement of smart grids. A P2PET system enables its peers to trade energy as in a double-sided auction market by issuing auction bids to buy or sell energy. A robust public ledger, satisfying the standard properties of persistence and liveness, is necessary for the system to record trading agreements, i.e., combinations of buy and sell bids which form a \emph{transaction}. The Bitcoin-based blockchain satisfies such properties as proven in the backbone protocol (EuroCrypt'15). However, existing blockchain-based P2PET approaches rely on general-purpose blockchains with smart contract capabilities, unavoidably incurring high operational costs. Therefore, this work designs a dedicated blockchain for the ledger of P2PET. We first revisit the blockchain data structure to support auction bids. Then, we abstract the process of forming transactions, i.e., matching bids, as a score-based many-to-many Bid Assignment Problem (BAP). Leveraging our proposed BAP and the corresponding scoring function, we propose a ``proof-of'' scheme, namely Proof-of-Bid-Assignment (PoBA), and design the corresponding blockchain-aided protocol. The key difference from any previous work we are aware of is that our protocol selects blocks according to the score of their content, i.e., bids and transactions. Hence, a higher-scored block is preferable to the underlying P2PET system regardless of whether the block's generator is honest, since, intuitively, it increases the number of trading agreements. Finally, by modeling PoBA with a universal sampler (AsiaCrypt'16) and analyzing honest users' local chain dynamics, we prove the security of our design with respect to the standard ledger properties.
Last updated:  2023-07-06
Universal Amplification of KDM Security: From 1-Key Circular to Multi-Key KDM
Brent Waters, Daniel Wichs
An encryption scheme is Key Dependent Message (KDM) secure if it is safe to encrypt messages that can arbitrarily depend on the secret keys themselves. In this work, we show how to upgrade essentially the weakest form of KDM security into the strongest one. In particular, we assume the existence of a symmetric-key bit-encryption that is circular-secure in the $1$-key setting, meaning that it maintains security even if one can encrypt individual bits of a single secret key under itself. We also rely on a standard CPA-secure public-key encryption. We construct a public-key encryption scheme that is KDM secure for general functions (of a-priori bounded circuit size) in the multi-key setting, meaning that it maintains security even if one can encrypt arbitrary functions of arbitrarily many secret keys under each of the public keys. As a special case, the latter guarantees security in the presence of arbitrary length key cycles. Prior work already showed how to amplify $n$-key circular to $n$-key KDM security for general functions. Therefore, the main novelty of our work is to upgrade from $1$-key to $n$-key security for arbitrary $n$. As an independently interesting feature of our result, our construction does not need to know the actual specification of the underlying 1-key circular secure scheme, and we only rely on the existence of some such scheme in the proof of security. In particular, we present a universal construction of a multi-key KDM-secure encryption that is secure as long as some 1-key circular-secure scheme exists. While this feature is similar in spirit to Levin's universal construction of one-way functions, the way we achieve it is quite different technically, and does not come with the same ``galactic inefficiency''.
Last updated:  2023-07-06
Breaking and Fixing Garbled Circuits when a Gate has Duplicate Input Wires
Raine Nieminen, Thomas Schneider
Garbled circuits are a fundamental cryptographic primitive that allows two or more parties to securely evaluate an arbitrary Boolean circuit without revealing any information beyond the output using a constant number of communication rounds. Garbled circuits were introduced by Yao (FOCS’86) and generalized to the multi-party setting by Beaver, Micali and Rogaway (STOC’90). Since then, several works have improved their efficiency by providing different garbling schemes, and several implementations exist. Starting with the seminal Fairplay compiler (USENIX Security’04), several implementation frameworks decoupled the task of compiling the function to be evaluated into a Boolean circuit from the engine that securely evaluates that circuit, e.g., using a secure two-party computation protocol based on garbled circuits. In this paper, we show that this decoupling of circuit generation and evaluation allows a subtle attack on several prominent garbling schemes. It occurs when the implicit assumption that gates have distinct input wires is violated, an assumption that is most often not explicitly specified in the respective papers. The affected garbling schemes use separate calls to a deterministic encryption function for the left and right input wire of a gate to derive pseudo-random encryption pads that are XORed together. When a circuit contains a gate whose left and right input wires are the same, these two per-wire encryption pads cancel out, and we demonstrate that this can result in a complete break of privacy. We show how the vulnerable garbling schemes can be fixed easily.
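To make the failure mode concrete, here is a hedged toy sketch (not the garbling code of any specific framework): each garbled-table row masks the output label with the XOR of two deterministic pads derived from the left and right input wire labels, so if a gate's two inputs are connected to the same wire, the pads coincide and cancel, exposing the output label without any key material.

```python
import hashlib, os

def pad(wire_label: bytes, gate_id: int) -> bytes:
    """Toy stand-in for the deterministic per-wire encryption pad H(label, gate_id)."""
    return hashlib.sha256(wire_label + gate_id.to_bytes(4, "big")).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def garble_row(left_label: bytes, right_label: bytes, output_label: bytes, gate_id: int) -> bytes:
    """One garbled-table row in the vulnerable style: output label masked by pad(left) XOR pad(right)."""
    return xor(output_label, xor(pad(left_label, gate_id), pad(right_label, gate_id)))

output_label = os.urandom(32)
w = os.urandom(32)  # a single wire label

# Gate whose left and right inputs are wired to the SAME wire: the two pads cancel.
leaky_row = garble_row(w, w, output_label, gate_id=7)
print(leaky_row == output_label)        # True: the row is the plain output label

# Gate with two distinct input wires: the row reveals nothing without the labels.
safe_row = garble_row(os.urandom(32), os.urandom(32), output_label, gate_id=7)
print(safe_row == output_label)         # False (except with negligible probability)
```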
Last updated:  2023-07-06
A Cryptographic Layer for the Interoperability of CBDC and Cryptocurrency Ledgers
Diego Castejon-Molina, Alberto del Amo Pastelero, Dimitrios Vasilopoulos, Pedro Moreno-Sanchez
Cryptocurrencies are used in several distinct use cases, thereby sustaining the existence of many ledgers that are heterogeneous in terms of design and purpose. In addition, the interest of central banks in deploying Central Bank Digital Currency (CBDC) has spurred a blooming number of conceptually different proposals from central banks and academia. As a result of the diversity of cryptocurrency and CBDC ledgers, interoperability, i.e., the seamless transfer of value between users that operate on different ledgers, has become an interesting research problem. In fact, interoperability has been explored both in CBDC and cryptocurrencies, and numerous proposals exist. However, these proposals are tailored to the characteristics of the ledgers for which they are designed. For instance, some rely on trusted hardware, others rely on the scripting capabilities of the underlying ledger or on specific cryptographic assumptions of the transaction authorization mechanism (e.g., adaptor signatures), while others rely on a trusted entity. This fragmentation results in the repetitive development of interoperability protocols that address the same applications across the various ledgers. In this work, we propose an alternative approach: decouple the transaction authorization details of each ledger from the definition of cross-ledger applications. To do so, we define a middle layer that abstracts the core functionality for authorizing transactions in any ledger and the security notions of interest. This middle layer serves two purposes: (i) it becomes the main cryptographic building block to (re)define cross-ledger applications in a ledger-agnostic manner; (ii) for any ledger that exists (and new ones to come), it suffices to prove that it is a secure instance of the middle layer to be compatible with cross-ledger protocols. We define two new primitives for our middle layer, the basic payment ledger (BL) and the conditional payment ledger (CL). We prove that the two most common transaction authorization mechanisms (digital signatures and zero knowledge proofs) are secure constructions for BL. We also prove that common smart contracts (e.g., HTLC and PTLC) and the combination of adaptor signatures with verifiable timelock puzzles are also secure constructions for CL. Finally, we discuss how to design some popular applications (e.g., atomic swaps between ledgers) using our middle layer.
Last updated:  2023-07-06
DIDO: Data Provenance from Restricted TLS 1.3 Websites
Kwan Yin Chan, Handong Cui, Tsz Hon Yuen
Public data can be authenticated by obtaining it from a trustworthy website over TLS. Private data, such as a user profile, are usually restricted from public access. If a user wants to authenticate his private data (e.g., address) provided by a restricted website (e.g., the user profile page of a utility company website) to a verifier, he cannot simply give his username and password to the verifier. DECO (CCS 2020) provides a solution for liberating these data without introducing undesirable trust assumptions or requiring server-side modification. Their implementation is mainly based on TLS 1.2. In this paper, we propose an optimized solution for TLS 1.3 websites. We tackle a number of open problems, including the support of X25519 key exchange in TLS 1.3, the design of a round-optimal three-party key exchange, the architecture of two-party computation of TLS 1.3 key scheduling, and circuit design optimized for two-party computation. We test our implementation with a real-world website and show that our optimization is necessary to avoid timeouts in the TLS handshake.
Last updated:  2023-07-06
OccPoIs: Points of Interest based on Neural Network's Key Recovery in Side-Channel Analysis through Occlusion
Trevor Yap, Shivam Bhasin, Stjepan Picek
Deep neural networks (DNNs) represent a powerful technique for assessing cryptographic security concerning side-channel analysis (SCA) due to their ability to aggregate leakages automatically, rendering attacks more efficient without preprocessing. Nevertheless, despite their effectiveness, DNNs employed in SCA are predominantly black-box algorithms, posing considerable interpretability challenges. In this paper, we propose a novel technique called Key Guessing Occlusion (KGO) that acquires a minimal set of sample points required by the DNN for key recovery, which we call OccPoIs. These OccPoIs provide information on which areas of the traces are important to the DNN for retrieving the key, enabling evaluators to know where to refine their cryptographic implementation. After obtaining the OccPoIs, we first explore the leakages found in these OccPoIs to understand what the DNN is learning with first-order Correlation Power Analysis (CPA). We show that KGO obtains relevant sample points that have a high correlation with the given leakage model but also acquires sample points that first-order CPA fails to capture. Furthermore, unlike the first-order CPA in the masking setting, KGO obtains these OccPoIs without the knowledge of the shares or mask. Next, we employ the template attack (TA) using the OccPoIs to investigate if KGO could be used as a feature selection tool. We show that using the OccPoIs with TA can recover the key for all the considered synchronized datasets and is consistent as a feature selection tool even on datasets protected by first-order masking. Furthermore, it also allows a more efficient attack than other feature selections on the first-order masking dataset called ASCADf.
Last updated:  2023-07-06
Efficient Verification of the Wesolowski Verifiable Delay Function for Distributed Environments
Vidal Attias, Luigi Vigneri, Vassil Dimitrov
Verifiable Delay Functions (VDFs) are a set of new cryptographic schemes ensuring that an agent has spent some time (the evaluation phase) on a non-parallelizable computation. A key requirement for such a construction is that the verification of the computation’s correctness has to be done in a significantly shorter time than the evaluation phase. This has led VDFs to recently gain exposure in large-scale decentralized projects as a core component of consensus algorithms or spam-prevention mechanisms. In this work, due to the increasing relevance and the lack of literature, we focus on the optimization of the verification phase of Wesolowski’s VDF and provide improvements along three axes, concerning multi-exponentiation computation, prime testing techniques, and hashing tricks. We show that our optimizations reduce the computation time of the verification phase by between 12% and 35% for the range of parameters considered.
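For orientation, here is a hedged, minimal Python sketch of Wesolowski's evaluate/prove/verify cycle (not the authors' optimized implementation): verification boils down to one hash-to-prime call, the small exponentiation $r = 2^T \bmod \ell$, and the two-base exponentiation $\pi^{\ell} x^{r}$, which are exactly the steps targeted by the multi-exponentiation, prime-testing, and hashing optimizations discussed above. The modulus, hash-to-prime encoding, and parameter sizes below are illustrative assumptions only.

```python
import hashlib

def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin for n < 3.3 * 10^24 (covers the 64-bit challenges used below)."""
    if n < 2:
        return False
    bases = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    for b in bases:
        if n % b == 0:
            return n == b
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for b in bases:
        x = pow(b, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def hash_to_prime(x: int, y: int, bits: int = 64) -> int:
    """Illustrative Fiat-Shamir challenge: hash (x, y) and scan upwards for a prime."""
    h = int.from_bytes(hashlib.sha256(f"{x}|{y}".encode()).digest(), "big") >> (256 - bits)
    l = h | 1
    while not is_prime(l):
        l += 2
    return l

def vdf_eval_and_prove(x: int, T: int, N: int):
    """Evaluation: T sequential squarings.  Proof: pi = x^(floor(2^T / l)) mod N."""
    y = x
    for _ in range(T):
        y = y * y % N
    l = hash_to_prime(x, y)
    pi = pow(x, (1 << T) // l, N)
    return y, pi

def vdf_verify(x: int, y: int, pi: int, T: int, N: int) -> bool:
    """Check pi^l * x^r == y with r = 2^T mod l; the cost is essentially independent of T."""
    l = hash_to_prime(x, y)
    r = pow(2, T, l)
    return pow(pi, l, N) * pow(x, r, N) % N == y

# Toy parameters: a modulus with publicly known factorization, for illustration only.
N = (2**31 - 1) * (2**61 - 1)
x, T = 5, 1 << 12
y, pi = vdf_eval_and_prove(x, T, N)
print(vdf_verify(x, y, pi, T, N))   # True
```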
Last updated:  2023-07-05
Multi-User CDH Problems and the Concrete Security of NAXOS and HMQV
Eike Kiltz, Jiaxin Pan, Doreen Riepel, Magnus Ringerud
We introduce CorrGapCDH, the Gap Computational Diffie-Hellman problem in the multi-user setting with Corruptions. In the random oracle model, our assumption tightly implies the security of the authenticated key exchange protocols NAXOS in the eCK model and (a simplified version of) X3DH without ephemeral key reveal. We prove hardness of CorrGapCDH in the generic group model, with optimal bounds matching that of the discrete logarithm problem. We also introduce CorrCRGapCDH, a stronger Challenge-Response variant of our assumption. Unlike standard GapCDH, CorrCRGapCDH implies the security of the popular AKE protocol HMQV in the eCK model, tightly and without rewinding. Again, we prove hardness of CorrCRGapCDH in the generic group model, with (almost) optimal bounds. Our new results allow implementations of NAXOS, X3DH, and HMQV without having to adapt the group sizes to account for the tightness loss of previous reductions. As a side result of independent interest, we also obtain modular and simple security proofs from standard GapCDH with tightness loss, improving previously known bounds.
Last updated:  2023-07-05
SoK: Privacy-Preserving Signatures
Alishah Chator, Matthew Green, Pratyush Ranjan Tiwari
Modern security systems depend fundamentally on the ability of users to authenticate their communications to other parties in a network. Unfortunately, cryptographic authentication can substantially undermine the privacy of users. One possible solution to this problem is to use privacy-preserving cryptographic authentication. These protocols allow users to authenticate their communications without revealing their identity to the verifier. In the non-interactive setting, the most common protocols include blind, ring, and group signatures, each of which has been the subject of enormous research in the security and cryptography literature. These primitives are now being deployed at scale in major applications, including Intel's SGX software attestation framework. The depth of the research literature and the prospect of large-scale deployment motivate us to systematize our understanding of the research in this area. This work provides an overview of these techniques, focusing on applications and efficiency.
Last updated:  2023-07-05
Poseidon: A New Hash Function for Zero-Knowledge Proof Systems
Lorenzo Grassi, Dmitry Khovratovich, Christian Rechberger, Arnab Roy, Markus Schofnegger
The area of practical computational integrity proof systems, like SNARKs, STARKs, Bulletproofs, is seeing a very dynamic development with several constructions having appeared recently with improved properties and relaxed setup requirements. Many use cases of such systems involve, often as their most expensive part, proving the knowledge of a preimage under a certain cryptographic hash function, which is expressed as a circuit over a large prime field. A notable example is a zero-knowledge proof of coin ownership in the Zcash cryptocurrency, where the inadequacy of the SHA-256 hash function for such a circuit caused a huge computational penalty. In this paper, we present a modular framework and concrete instances of cryptographic hash functions which work natively with GF(p) objects. Our hash function Poseidon uses up to 8x fewer constraints per message bit than Pedersen Hash. Our construction is not only expressed compactly as a circuit, but can also be tailored for various proof systems using specially crafted polynomials, thus bringing another boost in performance. We demonstrate this by implementing a 1-out-of-a-billion membership proof with Merkle trees in less than a second by using Bulletproofs.
Last updated:  2023-07-05
Overloading the Nonce: Rugged PRPs, Nonce-Set AEAD, and Order-Resilient Channels
Jean Paul Degabriele, Vukašin Karadžić
We introduce a new security notion that lies right in between pseudorandom permutations (PRPs) and strong pseudorandom permutations (SPRPs). We call this new security notion and any (tweakable) cipher that satisfies it a $\textit{rugged pseudorandom permutation}$ (RPRP). Rugged pseudorandom permutations lend themselves to some interesting applications, have practical benefits, and lead to novel cryptographic constructions. Our focus is on variable-length tweakable RPRPs, and analogous to the encode-then-encipher paradigm of Bellare and Rogaway, we can generically transform any such cipher into different AEAD schemes with varying security properties. However, the benefit of RPRPs is that they can be constructed more efficiently as they are weaker primitives than SPRPs (the notion traditionally required by the encode-then-encipher paradigm). We can construct RPRPs using only two layers of processing, whereas SPRPs typically require three layers of processing over the input data. We also identify a new transformation that yields RUP-secure AEAD schemes with more compact ciphertexts than previously known. Further extending this approach, we arrive at a new generalized notion of authenticated encryption and a matching construction, which we refer to as $\textit{nonce-set AEAD}$. Nonce-set AEAD is particularly well-suited in the context of secure channels, like QUIC and DTLS, that operate over unreliable transports and employ a window mechanism at the receiver's end of the channel. We conclude by presenting a generic construction for transforming a nonce-set AEAD scheme into an order-resilient secure channel. Our channel construction sheds new light on order-resilient channels and additionally leads to more compact ciphertexts when instantiated from RPRPs.
Last updated:  2023-07-05
PQC Cloudization: Rapid Prototyping of Scalable NTT/INTT Architecture to Accelerate Kyber
Mojtaba Bisheh-Niasar, Daniel Lo, Anjana Parthasarathy, Blake Pelton, Bharat Pillilli, Bryan Kelly
The advent of quantum computers poses a serious challenge to the security of cloud infrastructures and services, as they can potentially break the existing public-key cryptosystems, such as Rivest–Shamir–Adleman (RSA) and Elliptic Curve Cryptography (ECC). Even though the gap between today’s quantum computers and the threats they pose to current public-key cryptography is large, the cloud landscape should act proactively and initiate the transition to the post-quantum era as early as possible. To comply with that, the U.S. government issued a National Security Memorandum in May 2022 that mandated federal agencies to migrate to post-quantum cryptosystems (PQC) by 2035. To ensure the long-term security of cloud computing, it is imperative to develop and deploy PQC resistant to quantum attacks. A promising class of post-quantum cryptosystems is based on lattice problems, which require polynomial arithmetic. In this paper, we propose and implement a scalable number-theoretic transform (NTT) architecture that significantly enhances the performance of polynomial multiplication. Our proposed design exploits multi-levels of parallelism to accelerate the NTT computation on reconfigurable hardware. We use the high-level synthesis (HLS) method to implement our design, which allows us to describe the NTT algorithm in a high-level language and automatically generate optimized hardware code. HLS facilitates rapid prototyping and enables us to explore different design spaces and trade-offs on the hardware platforms. Our experimental results show that our design achieves 11$\times$ speedup compared to the state-of-the-art requiring only 14 clock cycles for an NTT computation over a polynomial of degree 256. To demonstrate the applicability of our design, we also present a coprocessor architecture for Kyber, a key encapsulation mechanism (KEM) chosen by the NIST post-quantum standardization process, that utilizes our scalable NTT core.
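To show the arithmetic kernel being accelerated (a hedged, textbook sketch only, not the paper's scalable hardware architecture or its HLS code), the Python below implements a recursive radix-2 NTT/INTT over Z_q and checks the round trip on a 256-coefficient polynomial. The prime q = 7681 is chosen purely because it has 256th roots of unity; Kyber's actual modulus is 3329, whose NTT is "incomplete" and is not reproduced here.

```python
import random

def find_root_of_unity(n: int, q: int) -> int:
    """Primitive n-th root of unity modulo a prime q, assuming n is a power of two dividing q - 1."""
    assert (q - 1) % n == 0
    for g in range(2, q):
        w = pow(g, (q - 1) // n, q)
        if pow(w, n // 2, q) != 1:   # order is exactly n, not a proper divisor
            return w
    raise ValueError("no primitive root of unity found")

def ntt(a, q, w):
    """Recursive radix-2 Cooley-Tukey NTT of a length-n list (n a power of two)."""
    n = len(a)
    if n == 1:
        return list(a)
    even = ntt(a[0::2], q, w * w % q)
    odd = ntt(a[1::2], q, w * w % q)
    out = [0] * n
    t = 1                               # running power w^k
    for k in range(n // 2):
        x = t * odd[k] % q
        out[k] = (even[k] + x) % q
        out[k + n // 2] = (even[k] - x) % q
        t = t * w % q
    return out

def intt(a_hat, q, w):
    """Inverse NTT: forward transform with w^(-1), then scale by n^(-1)."""
    n = len(a_hat)
    out = ntt(a_hat, q, pow(w, -1, q))
    n_inv = pow(n, -1, q)
    return [x * n_inv % q for x in out]

n, q = 256, 7681                        # 7680 = 2^9 * 15, so 256th roots of unity exist
w = find_root_of_unity(n, q)
a = [random.randrange(q) for _ in range(n)]
assert intt(ntt(a, q, w), q, w) == a    # round trip recovers all 256 coefficients
print("NTT/INTT round trip OK")
```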
Last updated:  2023-07-05
MAYO: Optimized Implementation with Revised Parameters for ARMv7-M
Arianna Gringiani, Alessio Meneghetti, Edoardo Signorini, Ruggero Susella
We present an optimized constant-time implementation of the MAYO signature scheme on ARMv7-M. MAYO is a novel multivariate proposal based on the trapdoor function of the Unbalanced Oil and Vinegar scheme. Our implementation builds on existing techniques for UOV-based schemes and introduces a new approach for evaluating the polar forms of quadratic maps. We modify MAYO's original parameters to achieve greater benefits from the proposed optimizations, resulting in slightly larger keys and shorter signatures for the same level of security. We evaluate the optimized implementation with the new parameters on the STM32H753ZIT6 microcontroller and measure its performance for the signing and verification procedures. At NIST security level I, signing requires approximately 43M cycles, and verification requires approximately 6M cycles. Both are 2.6 times faster than the results obtained from the original parameters.
Last updated:  2023-07-05
Automated Analysis of Halo2 Circuits
Fatemeh Heidari Soureshjani, Mathias Hall-Andersen, MohammadMahdi Jahanara, Jeffrey Kam, Jan Gorzny, Mohsen Ahmadvand
Zero-knowledge proof systems are becoming increasingly prevalent and are widely used to secure decentralized financial systems and protect the privacy of users. Given the sensitivity of these applications, zero-knowledge proof systems are a natural target for formal verification methods. We describe methods for checking one such proof system: Halo2. We use abstract interpretation and an SMT solver to check various properties of Halo2 circuits. Using abstract interpretation, we can detect unused gates, unconstrained cells, and unused columns. Using an SMT solver, we can detect circuits that are under-constrained, in the sense that for the same public input they have two efficiently computable satisfying assignments. This is the first work we are aware of that applies lightweight formal methods to PLONKish arithmetization and Halo2 circuits.
Last updated:  2023-07-05
SNARGs for Monotone Policy Batch NP
Zvika Brakerski, Maya Farber Brodsky, Yael Tauman Kalai, Alex Lombardi, Omer Paneth
We construct a succinct non-interactive argument ($\mathsf{SNARG}$) for the class of monotone policy batch $\mathsf{NP}$ languages under the Learning with Errors ($\mathsf{LWE}$) assumption. This class is a subclass of $\mathsf{NP}$ that is associated with a monotone function~$f:\{0,1\}^k\rightarrow\{0,1\}$ and an $\mathsf{NP}$ language $\mathcal L$, and contains instances $(x_1,\ldots,x_k)$ such that $f(b_1,\ldots,b_k)=1$ where $b_j=1$ if and only if $x_j\in \mathcal L$. Our $\mathsf{SNARG}$s are arguments of knowledge in the non-adaptive setting, and satisfy a new notion of somewhere extractability against adaptive adversaries. This is the first $\mathsf{SNARG}$ under standard hardness assumptions for a sub-class of $\mathsf{NP}$ that is not known to have a (computational) non-signaling $\mathsf{PCP}$ with small locality. Indeed, our approach necessarily departs from the known framework for constructing $\mathsf{SNARG}$s dating back to [Kalai-Raz-Rothblum, STOC '13]. Our construction combines existing quasi-arguments for $\mathsf{NP}$ (based on batch arguments for $\mathsf{NP}$) with a novel ingredient which we call a predicate-extractable hash ($\mathsf{PEH}$) family. This notion generalizes the notion of a somewhere extractable hash. Whereas a somewhere extractable hash allows extracting a single input coordinate, our $\mathsf{PEH}$ extracts a global property of the input. We view this primitive as being of independent interest, and believe that it will find other applications.
Last updated:  2023-07-05
Public-Key Encryption, Local Pseudorandom Generators, and the Low-Degree Method
Andrej Bogdanov, Pravesh Kothari, Alon Rosen
The low-degree method postulates that no efficient algorithm outperforms low-degree polynomials in certain hypothesis-testing tasks. It has been used to understand computational indistinguishability in high-dimensional statistics. We explore the use of the low-degree method in the context of cryptography. To this end, we apply it in the design and analysis of a new public-key encryption scheme whose security is based on Goldreich's pseudorandom generator. The scheme is a combination of two proposals of Applebaum, Barak, and Wigderson, and inherits desirable features from both.
Last updated:  2023-07-04
An Algorithm for Persistent Homology Computation Using Homomorphic Encryption
Dominic Gold, Koray Karabina, Francis C. Motta
Topological Data Analysis (TDA) offers a suite of computational tools that provide quantified shape features in high dimensional data that can be used by modern statistical and predictive machine learning (ML) models. In particular, persistent homology (PH) takes in data (e.g., point clouds, images, time series) and derives compact representations of latent topological structures, known as persistence diagrams (PDs). Because PDs enjoy inherent noise tolerance, are interpretable and provide a solid basis for data analysis, and can be made compatible with the expansive set of well-established ML model architectures, PH has been widely adopted for model development including on sensitive data, such as genomic, cancer, sensor network, and financial data. Thus, TDA should be incorporated into secure end-to-end data analysis pipelines. In this paper, we take the first step to address this challenge and develop a version of the fundamental algorithm to compute PH on encrypted data using homomorphic encryption (HE).
Last updated:  2023-07-04
How Does Nakamoto Set His Clock? Full Analysis of Nakamoto Consensus in Bounded-Delay Networks
Juan A. Garay, Aggelos Kiayias, Nikos Leonardos
Nakamoto consensus, arguably the most exciting development in distributed computing in the last few years, is in a sense a recasting of the traditional state-machine-replication problem in an unauthenticated setting, where furthermore parties come and go without warning. The protocol relies on a cryptographic primitive known as proof of work (PoW) which is used to throttle message passing. Importantly, the PoW difficulty level is appropriately adjusted throughout the course of the protocol execution relying on the blockchain’s timekeeping ability. While the original formulation was only accompanied by rudimentary analysis, significant and steady progress has been made in abstracting the protocol’s properties and providing a formal analysis under various restrictions and protocol simplifications. Still, a full analysis of the protocol that includes its target recalculation and, notably, the timestamp adjustment mechanism—specifically, the protocol allows incoming block timestamps in the near future, as determined by a protocol parameter, and rejects blocks that have a timestamp in the past of the median time of a specific number of blocks on-chain (namely, 11)—which equip it to operate in its intended setting of bounded communication delays, imperfect clocks and dynamic participation, has remained open. The gap is that Nakamoto’s protocol fundamentally depends on the blockchain itself to be a consistent timekeeper that should advance roughly on par with real time. In order to tackle this question we introduce a new analytical tool that we call "hot-hand executions," which capture the regular occurrence of a high concentration of honestly generated blocks, and correspondingly put forth and prove a new blockchain property called "concentrated chain quality," which may be of independent interest. Utilizing these tools and techniques we demonstrate that Nakamoto’s protocol achieves, under suitable conditions, safety, liveness as well as (consistent) timekeeping.
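For readers who want the two timestamp rules pinned down, here is a hedged sketch in the style of Bitcoin's consensus checks (not code from the paper, whose analysis is parameterized rather than tied to these constants): a block is rejected if its timestamp exceeds the local clock by more than a fixed future window, or if it is not strictly later than the median timestamp of the last 11 on-chain blocks.

```python
import statistics

MAX_FUTURE_DRIFT = 2 * 60 * 60   # assumed future-time window in seconds (Bitcoin uses 2 hours)
MEDIAN_WINDOW = 11               # number of on-chain blocks whose median ("median time past") is used

def timestamp_is_valid(new_ts: int, chain_timestamps: list, local_time: int) -> bool:
    """Return True iff new_ts obeys both timestamp rules relative to the current chain tip."""
    if new_ts > local_time + MAX_FUTURE_DRIFT:           # rule 1: not too far in the future
        return False
    recent = chain_timestamps[-MEDIAN_WINDOW:]
    if recent and new_ts <= statistics.median(recent):   # rule 2: strictly after the median of the last 11
        return False
    return True

# Toy chain whose blocks arrived one minute apart.
chain = [1000 + 60 * i for i in range(20)]
tip = chain[-1]
print(timestamp_is_valid(tip + 60, chain, local_time=tip + 30))     # True: plausible next block
print(timestamp_is_valid(tip - 600, chain, local_time=tip + 30))    # False: behind the 11-block median
print(timestamp_is_valid(tip + 10**6, chain, local_time=tip))       # False: too far into the future
```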
Last updated:  2023-07-04
Private Coin Verifiable Delay Function
Peter Chvojka
We construct the first tight verifiable delay function (VDF) where the evaluation algorithm only evaluates the function sequentially and hence outputs an empty proof, verification is independent of the time parameter $T$, and the setup has constant-size parameters. Our VDF is based on repeated squaring in hidden-order groups, but it requires that the coins used to sample a random instance must be kept secret in order to guarantee sequentiality. We denote such a VDF as a private coin verifiable delay function and show that it can be used to obtain a multiplicatively homomorphic non-interactive timed commitment with an efficient publicly verifiable force-decommitment algorithm.
Last updated:  2023-07-04
A private set intersection protocol based on multi-party quantum computation for greatest common divisor
Muhammad Imran
Private set intersection (PSI) is a cryptographic primitive that allows two or more parties to learn the intersection of their input sets and nothing else. In this paper, we present a private set intersection protocol based on a new secure multi-party quantum protocol for the greatest common divisor (GCD). The protocol is mainly inspired by the recent quantum private set union protocol based on the least common multiple by Liu, Yang, and Li. A performance analysis guarantees correctness and also shows that the proposed protocols are completely secure in the semi-honest model. Moreover, the complexity is proven to be efficient (polylogarithmic) in the size of the input sets.
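As background for why a GCD functionality is natural here (a hedged, classical illustration only, with no claim that it matches the paper's quantum encoding or its privacy guarantees): if each element of a public universe is assigned a distinct prime and every party encodes its set as the product of its elements' primes, then the GCD of the encodings is exactly the product of the primes of the common elements, i.e., it determines the intersection.

```python
from math import gcd
from itertools import count

def primes():
    """Yield 2, 3, 5, ... by trial division (plenty fast for a toy universe)."""
    found = []
    for n in count(2):
        if all(n % p for p in found):
            found.append(n)
            yield n

def encode(elements, element_to_prime):
    """Encode a set as the product of the primes assigned to its elements."""
    x = 1
    for e in elements:
        x *= element_to_prime[e]
    return x

# Public universe with one distinct prime per element.
universe = ["alice", "bob", "carol", "dave", "eve"]
element_to_prime = dict(zip(universe, primes()))

A = {"alice", "carol", "eve"}
B = {"bob", "carol", "eve"}
g = gcd(encode(A, element_to_prime), encode(B, element_to_prime))   # what a GCD functionality would output
intersection = {e for e, p in element_to_prime.items() if g % p == 0}
print(sorted(intersection))   # ['carol', 'eve']
```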
Last updated:  2023-07-04
MinRank in the Head: Short Signatures from Zero-Knowledge Proofs
Gora Adj, Luis Rivera-Zamarripa, Javier Verbel
In recent years, many digital signature scheme proposals have been built from the so-called MPC-in-the-head paradigm. This has proven to be an outstanding way to design efficient signatures with security based on hard problems. MinRank is an NP-complete problem extensively studied due to its applications to cryptanalysis since its introduction in 1999. However, only a few schemes base their security on its intractability, and their signature sizes are large compared with other proposals based on NP problems. This paper introduces the first MinRank-based digital signature scheme that uses MPC-in-the-head, enabling it to achieve small signature sizes and running times. For NIST's category I parameter set, we obtain signatures of 6.5KB, which is competitive with the shortest proposals in the literature that are based on non-structured problems.
Last updated:  2023-07-04
AKE Zoo: 100 two-party protocols (to be continued)
Evgeny Alekseev, Alexandra Babueva, Olga Zazykina
The problem of designing authenticated key establishment (AKE) protocols has a rich history. Since 1976, more than a hundred different protocols have been proposed. But the task of comparing and classifying existing protocols is usually complicated by the fact that they are described in different terms and with different levels of detail. This paper contains intermediate results on the enumeration and uniform description of AKE protocols. We publish it in order to get feedback on the description principles used. Here we describe 100 AKE protocols (there are many more such protocols; these are simply the ones we found first) in identical terms and at the same level of detail. The proposed descriptions are ordered only chronologically and not otherwise structured; classifying these protocols is a direction for future work.
Last updated:  2023-07-04
An Analysis of Requirements and Privacy Threats in Mobile Data Donations
Leonie Reichert
In recent years, personal and medical data collected through mobile apps has become a useful data source for researchers. Platforms like Apple ResearchKit try to make it as easy as possible for non-experts to set up such data collection campaigns. However, since the collected data is sensitive, it must be well protected. Methods that provide technical privacy guarantees often limit the usefulness of the data and results. In this paper, we model and analyze mobile data donation to better understand the requirements that must be fulfilled by privacy-preserving approaches. To this end, we give an overview of the functionalities researchers require from data donation apps by analyzing existing apps. We also create a model of the current practice and analyze it using the LINDDUN privacy framework to identify privacy threats.
Last updated:  2023-07-04
End-to-end Privacy Preserving Training and Inference for Air Pollution Forecasting with Data from Rival Fleets
Gauri Gupta, Krithika Ramesh, Anwesh Bhattacharya, Divya Gupta, Rahul Sharma, Nishanth Chandran, Rijurekha Sen
Privacy-preserving machine learning (PPML) promises to train machine learning (ML) models by combining data spread across multiple data silos. Theoretically, secure multiparty computation (MPC) allows multiple data owners to train models on their joint data without revealing the data to each other. However, the prior implementations of this secure training using MPC have three limitations: they have only been evaluated on CNNs, and LSTMs have been ignored; fixed-point approximations have affected training accuracies compared to training in floating point; and due to significant latency overheads of secure training via MPC, its relevance for practical tasks with streaming data remains unclear. The motivation of this work is to report our experience of addressing the practical problem of secure training and inference of models for urban sensing problems, e.g., traffic congestion estimation or air pollution monitoring in large cities, where data can be contributed by rival fleet companies while balancing the privacy-accuracy trade-offs using MPC-based techniques. Our first contribution is to design a custom ML model for this task that can be efficiently trained with MPC within a desirable latency. In particular, we design a GCN-LSTM and securely train it on time-series sensor data for accurate forecasting, within 7 minutes per epoch. As our second contribution, we build an end-to-end system of private training and inference that provably matches the training accuracy of cleartext ML training. This work is the first to securely train a model with LSTM cells. Third, this trained model is kept secret-shared between the fleet companies and allows clients to make sensitive queries to this model while carefully handling potentially invalid queries. Our custom protocols allow clients to query predictions from privately trained models in milliseconds, all the while maintaining accuracy and cryptographic security.
Last updated:  2023-07-04
A Side-Channel Attack on a Bitsliced Higher-Order Masked CRYSTALS-Kyber Implementation
Ruize Wang, Martin Brisfors, Elena Dubrova
In response to side-channel attacks on masked implementations of post-quantum cryptographic algorithms, a new bitsliced higher-order masked implementation of CRYSTALS-Kyber has been presented at CHES'2022. The bitsliced implementations are typically more difficult to break by side-channel analysis because they execute a single instruction across multiple bits in parallel. However, in this paper, we reveal new vulnerabilities in the masked Boolean to arithmetic conversion procedure of this implementation that make the shared and secret key recovery possible. We also present a new chosen ciphertext construction method which maximizes secret key recovery probability for a given message bit recovery probability. We demonstrate practical shared and secret key recovery attacks on the first-, second- and third-order masked implementations of Kyber-768 in ARM Cortex-M4 using profiled deep learning-based power analysis.
Last updated:  2023-07-04
Random Oracle Combiners: Breaking the Concatenation Barrier for Collision-Resistance
Yevgeniy Dodis, Niels Ferguson, Eli Goldin, Peter Hall, Krzysztof Pietrzak
Suppose two parties have hash functions $h_1$ and $h_2$ respectively, but each only trusts the security of their own. We wish to build a hash combiner $C^{h_1, h_2}$ which is secure so long as either one of the underlying hash functions is. This question has been well-studied in the regime of collision resistance. In this case, concatenating the two hash outputs clearly works. Unfortunately, a long series of works (Boneh and Boyen, CRYPTO'06; Pietrzak, Eurocrypt'07; Pietrzak, CRYPTO'08) showed no (noticeably) shorter combiner for collision resistance is possible. We revisit this pessimistic state of affairs, motivated by the observation that collision-resistance is insufficient for many applications of cryptographic hash functions anyway. We argue the right formulation of the "hash combiner" is what we call random oracle (RO) combiners. Indeed, we circumvent the previous lower bounds for collision resistance by constructing a simple length-preserving RO combiner $$\widetilde{C}_{\mathcal{Z}_1,\mathcal{Z}_2}^{h_1,h_2}(M) = h_1(M, \mathcal{Z}_1) \oplus h_2(M, \mathcal{Z}_2),$$ where $\mathcal{Z}_1, \mathcal{Z}_2$ are random salts of appropriate length. We show that this extra randomness is necessary for RO combiners, and indeed our construction is somewhat tight with this lower bound. On the negative side, we show that one cannot generically apply the composition theorem to further replace "monolithic" hashes $h_1$ and $h_2$ by some simpler indifferentiable construction (such as the Merkle-Damgård transformation) from smaller components, such as fixed-length compression functions. Despite this issue, we directly prove collision resistance of the Merkle-Damgård variant of our combiner, where $h_1$ and $h_2$ are replaced by iterative Merkle-Damgård hashes applied to fixed-length compression functions. Thus, we can still subvert the concatenation barrier for collision-resistance combiners using practically small components.
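As a quick illustration of the combiner's shape, here is a minimal Python sketch, assuming SHA-256 and SHA3-256 stand in for the two "monolithic" hashes $h_1$ and $h_2$ and using 32-byte salts; the hash choices and salt lengths are placeholders, not the parameters analyzed in the paper.

```python
import hashlib, os

# Illustrative stand-ins for the two "monolithic" hashes h1 and h2.
def h1(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def h2(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

# Random salts Z1, Z2 (32 bytes each is an arbitrary placeholder length).
Z1, Z2 = os.urandom(32), os.urandom(32)

def combiner(message: bytes) -> bytes:
    """C(M) = h1(M, Z1) XOR h2(M, Z2): the output is as long as a single digest."""
    a, b = h1(message + Z1), h2(message + Z2)
    return bytes(x ^ y for x, y in zip(a, b))

print(combiner(b"example message").hex())
```

The point of the construction is that the XOR output stays as short as a single digest, in contrast with the concatenation combiner.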
Last updated:  2023-07-04
Constant-Round YOSO MPC Without Setup
Sebastian Kolby, Divya Ravi, Sophia Yakoubov
YOSO MPC (Gentry et al., Crypto 2021) is a new MPC framework where each participant can speak at most once. This models an adaptive adversary’s ability to watch the network and corrupt or destroy parties it deems significant based on their communication. By using private channels to anonymous receivers (e.g. by encrypting to a public key whose owner is unknown), the communication complexity of YOSO MPC can scale sublinearly with the total number N of available parties, even when the adversary’s corruption threshold is linear in N (e.g. just under N/2). It was previously an open problem whether YOSO MPC can achieve guaranteed output delivery in a constant number of rounds without relying on trusted setup. In this work, we show that this can indeed be accomplished. We demonstrate three different approaches: the first two (which we call YaOSO and YOSO-GLS) use two and three rounds of communication, respectively. Our third approach (which we call YOSO-LHSS) uses O(d) rounds, where d is the multiplicative depth of the circuit being evaluated; however, it can be used to bootstrap any constant-round YOSO protocol that requires setup, by generating that setup within YOSO-LHSS. Though YOSO-LHSS requires more rounds than our first two approaches, it may be more practical, since the zero knowledge proofs it employs are more efficient to instantiate. As a contribution of independent interest, we introduce a verifiable state propagation UC functionality, which allows parties to send private messages which are verifiably derived in the “correct” way (according to the protocol in question) to anonymous receivers. This is a natural functionality to build YOSO protocols on top of.
Last updated:  2023-07-04
Shorter Hash-and-Sign Lattice-Based Signatures
Thomas Espitau, Mehdi Tibouchi, Alexandre Wallet, Yang Yu
Lattice-based digital signature schemes following the hash-and-sign design paradigm of Gentry, Peikert and Vaikuntanathan (GPV) tend to offer an attractive level of efficiency, particularly when instantiated with structured compact trapdoors. In particular, NIST postquantum finalist Falcon is both quite fast for signing and verification and quite compact: NIST notes that it has the smallest bandwidth (as measured in combined size of public key and signature) of all round 2 digital signature candidates. Nevertheless, while Falcon--512, for instance, compares favorably to ECDSA--384 in terms of speed, its signatures are well over 10 times larger. For applications that store large numbers of signatures, or that require signatures to fit in prescribed packet sizes, this can be a critical limitation. In this paper, we explore several approaches to further improve the size of hash-and-sign lattice-based signatures, particularly instantiated over NTRU lattices like Falcon and its recent variant Mitaka. In particular, while GPV signatures are usually obtained by sampling lattice points according to some \emph{spherical} discrete Gaussian distribution, we show that it can be beneficial to sample instead according to a suitably chosen \emph{ellipsoidal} discrete Gaussian: this is because only half of the sampled Gaussian vector is actually output as the signature, while the other half is recovered during verification. Making the half that actually occurs in signatures shorter reduces signature size at essentially no security loss (in a suitable range of parameters). Similarly, we show that reducing the modulus $q$ with respect to which signatures are computed can improve signature size as well as verification key size almost ``for free''; this is particularly true for constructions like Falcon and Mitaka that do not make substantial use of NTT-based multiplication (and rely instead on transcendental FFT). Finally, we show that the Gaussian vectors in signatures can be represented in a more compact way with appropriate coding-theoretic techniques, improving signature size by an additional 7 to 14%. All in all, we manage to reduce the size of, e.g., Falcon signatures by 30--40% at the cost of only 4--6 bits of Core-SVP security.
Last updated:  2023-07-03
An STP-based model toward designing S-boxes with good cryptographic properties
Zhenyu Lu, Sihem Mesnager, Tingting Cui, Yanhong Fan, Meiqin Wang
The substitution box (S-box) is an important nonlinear component in most symmetric cryptosystems and thus should have good properties. Its difference distribution table (DDT) and linear approximation table (LAT) affect the security of the cipher against differential and linear cryptanalysis. In most previous work, differential uniformity and linearity of an S-box are the two primary cryptographic properties used to assess resistance against differential and linear attacks. In some cases, the branch number and fixed points are also considered. However, other important cryptographic properties such as the frequency of differential uniformity (resp. linearity) and the number of Bad Input and Bad Output (BIBO) patterns in the DDT (resp. LAT) are often ignored. These properties substantially affect lightweight cryptography based on substitution bit permutation networks (SbPN) such as PRESENT, GIFT and RECTANGLE. This paper introduces a new method to search for S-boxes satisfying all of the above criteria simultaneously. In our strategy, we transform the process of searching for S-boxes under certain constraints on cryptographic properties into a satisfiability (SAT) problem. As applications, we use our new approach to search for 4-bit and 5-bit S-boxes with the same or better cryptographic properties compared with the S-boxes from well-known ciphers. Finally, we also utilize our method to verify a conjecture proposed by Boura et al. in the case of all 3-bit and 4-bit S-boxes. We propose a proposition and two corollaries to reduce the search space in this verification.
Last updated:  2023-07-03
Reduction of the secret key length in the perfect cipher by data compression and randomisation
Boris Ryabko
Perfect ciphers have been a very attractive cryptographic tool ever since C. Shannon described them. Note that, by definition, if a perfect cipher is used, no one can get any information about the encrypted message without knowing the secret key. We consider the problem of reducing the key length of perfect ciphers, because in many applications the length of the secret key is a crucial parameter. This paper describes a simple method of key length reduction. This method gives a perfect cipher and is based on the use of data compression and randomisation, and the average key length can be made close to Shannon entropy (which is the key length limit). It should be noted that the method can effectively use readily available data compressors (archivers).
Last updated:  2023-07-03
Short Signatures from Regular Syndrome Decoding in the Head
Eliana Carozza, Geoffroy Couteau, Antoine Joux
We introduce a new candidate post-quantum digital signature scheme from the regular syndrome decoding (RSD) assumption, an established variant of the syndrome decoding assumption which asserts that it is hard to find $w$-regular solutions to systems of linear equations over $\mathbb{F}_2$ (a vector is regular if it is a concatenation of $w$ unit vectors). Our signature is obtained by introducing and compiling a new 5-round zero-knowledge proof system constructed using the MPC-in-the-head paradigm. At the heart of our result is an efficient MPC protocol in the preprocessing model that checks the correctness of a regular syndrome decoding instance by using a share ring-conversion mechanism. The analysis of our construction is non-trivial and forms a core technical contribution of our work. It requires careful combinatorial analysis and combines several new ideas, such as analyzing soundness in a relaxed setting where a cheating prover is allowed to use any witness sufficiently close to a regular vector. We complement our analysis with an in-depth overview of existing attacks against RSD. Our signatures are competitive with the best-known code-based signatures, ranging from $12.52$ KB (fast setting, with a signing time of the order of a few milliseconds on a single core of a standard laptop) to about $9$ KB (short setting, with estimated signing time of the order of 15ms).
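For readers unfamiliar with the regularity notion used above, the following small Python check tests whether a binary vector is a concatenation of $w$ unit vectors; the block length is simply the vector length divided by $w$, and the example vectors are purely illustrative.

```python
def is_regular(v, w):
    """Check that a 0/1 vector v over F_2 is w-regular, i.e. a concatenation
    of w unit vectors (each equal-length block contains exactly one 1)."""
    n = len(v)
    if w <= 0 or n % w != 0:
        return False
    b = n // w                                   # length of each block
    return all(sum(v[i * b:(i + 1) * b]) == 1 for i in range(w))

# A 3-regular vector of length 9 (three unit vectors of length 3) and a non-regular one.
print(is_regular([0, 1, 0, 1, 0, 0, 0, 0, 1], 3))   # True
print(is_regular([1, 1, 0, 0, 0, 0, 0, 0, 1], 3))   # False
```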
Last updated:  2023-07-03
OWF Candidates Based on: Xors, Error Detection Codes, Permutations, Polynomials, Interaction and Nesting
Pawel Cyprys, Shlomi Dolev, Oded Margalit
Our research focuses on achieving perfect provable encryption by drawing inspiration from the principles of a one-time pad. We explore the potential of leveraging the unique properties of the one-time pad to design effective one-way functions. Our methodology involves the application of the exclusive-or (xor) operation to two randomly chosen strings. To address concerns related to preimage mappings, we incorporate error detection codes. Additionally, we utilize permutations to overcome linearity issues in the computation process. In order to enhance the security of our approach, we propose the integration of a secret-sharing scheme based on a linear polynomial. This helps mitigate collisions and adds an additional layer of perfect security. We thoroughly investigate the interactions between different aspects of one-way functions to strengthen the reliability of commitments. Lastly, we explore the possibility of nesting one-way functions as a countermeasure against potential backdoors. Through our study, we aim to contribute to the advancement of secure encryption techniques by leveraging the inherent strengths of the one-time pad and carefully considering the interplay of various components in the design of one-way functions.
Last updated:  2023-07-03
Security Analysis of a Color Image Encryption Scheme Based on a Fractional‑Order Hyperchaotic System
George Teseleanu
In 2022, Hosny et al. introduced an image encryption scheme that employs a fractional-order chaotic system. Their approach uses the hyper-chaotic system to generate the system's main parameter, namely a secret permutation which is dependent on the size and the sum of the pixels of the source image. According to the authors, their scheme offers adequate security (i.e. $498$ bits) for transmitting color images over unsecured channels. Nevertheless, in this paper we show that the scheme's security is independent of the secret parameters used to initialize the hyper-chaotic system. More precisely, we provide a brute-force attack whose complexity is $\mathcal O(2^{10.57}(WH)^3)$ and needs $2^{9.57}WH$ oracle queries, where $W$ and $H$ are the width and the height of the encrypted image. For example, for an image of size $4000 \times 3000$ (a $12$ megapixel image) we obtain a security margin of $81.11$ bits, which is six times lower than the claimed bound. To achieve this result, we present two cryptanalytic attacks, namely a chosen plaintext attack and a chosen ciphertext attack.
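A quick back-of-the-envelope check of the quoted security margin, assuming a $12$ megapixel image ($W \cdot H = 4000 \cdot 3000$); the constants $2^{10.57}$ and $2^{9.57}$ are taken directly from the abstract.

```python
import math

W, H = 4000, 3000                             # a 12 megapixel image, as in the example above
attack_bits = 10.57 + 3 * math.log2(W * H)    # log2 of O(2^10.57 * (WH)^3)
query_bits = 9.57 + math.log2(W * H)          # log2 of 2^9.57 * WH oracle queries

print(f"attack complexity ~ 2^{attack_bits:.2f}")   # ~2^81.1, matching the 81.11-bit margin
print(f"oracle queries    ~ 2^{query_bits:.2f}")
```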
Last updated:  2023-07-03
CHEX-MIX: Combining Homomorphic Encryption with Trusted Execution Environments for Two-party Oblivious Inference in the Cloud
Deepika Natarajan, Andrew Loveless, Wei Dai, Ronald Dreslinski
Data, when coupled with state-of-the-art machine learning models, can enable remarkable applications. But, there exists an underlying tension: users wish to keep their data private, and model providers wish to protect their intellectual property. Homomorphic encryption (HE) and multi-party computation (MPC) techniques have been proposed as solutions to this problem; however, both techniques require model providers to fully trust the server performing the machine learning computation. This limits the scale of inference applications, since it prevents model providers from leveraging shared public cloud infrastructures. In this work, we present CHEX-MIX, a solution to the problem of privacy-preserving machine learning between two mutually distrustful parties in an untrusted cloud setting. CHEX-MIX relies on a combination of HE and trusted execution environments (TEEs), using HE to provide clients with confidentiality guarantees, and TEEs to provide model providers with confidentiality guarantees and protect the integrity of computation from malicious cloud adversaries. Unlike prior solutions to this problem, such as multi-key HE, single-key HE, MPC, or TEE-only techniques, our solution assumes that both the client and the cloud can be malicious, makes no collusion assumptions, and frees model providers from needing to maintain private online infrastructures. Our results show that CHEX-MIX can execute at high efficiency, with low communication cost, while providing security guarantees unaddressed by prior work. Compared to a recent multi-key HE work that allows partial cloud offload, for example, CHEX-MIX achieves a 3× lower communication cost and a 3× faster computation time.
Last updated:  2023-07-03
Zero Knowledge Virtual Machine step by step
Tim Dokchitser, Alexandr Bulkin
This paper's primary purpose is to provide a source of introductory information into building a zero knowledge proof system for general computation. We review how to build such a system with a polynomial commitment scheme, and how to implement a fully functional command set in terms of zero knowledge primitives.
Last updated:  2023-07-03
DiLizium 2.0: Revisiting Two-Party Crystals-Dilithium
Peeter Laud, Nikita Snetkov, Jelizaveta Vakarjuk
In previous years there has been an increased interest in designing threshold signature schemes. Most of the recent works focus on constructing threshold versions of ECDSA or Schnorr signature schemes due to their appealing usage in blockchain technologies. Additionally, a lot of research is being done on cryptographic schemes that are resistant to quantum computer attacks. In this work, we propose a new version of the two-party Dilithium signature scheme. The security of our scheme is based on the hardness of Module-LWE and Module-SIS problems. In our construction, we follow a similar logic as Damgård et al. (PKC 2021) and use an additively homomorphic commitment scheme. However, compared to them, our protocol uses signature compression techniques from the original Dilithium signature scheme which makes it closer to the version submitted to the NIST PQC competition. We focus on two-party signature schemes in the context of user authentication.
Last updated:  2023-07-03
Continued Fractions Applied to a Family of RSA-like Cryptosystems
George Teseleanu, Paul Cotan
Let $N=pq$ be the product of two balanced prime numbers $p$ and $q$. Murru and Saettone presented in 2017 an interesting RSA-like cryptosystem that uses the key equation $ed - k (p^2+p+1)(q^2+q+1) = 1$, instead of the classical RSA key equation $ed - k (p-1)(q-1) = 1$. The authors claimed that their scheme is immune to Wiener's continued fraction attack. Unfortunately, Nitaj \emph{et al.} developed exactly such an attack. In this paper, we introduce a family of RSA-like encryption schemes that uses the key equation $ed - k [(p^n-1)(q^n-1)]/[(p-1)(q-1)] = 1$, where $n>1$ is an integer. Then, we show that regardless of the choice of $n$, there exists an attack based on continued fractions that recovers the secret exponent.
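A small numerical sanity check (toy primes, illustrative only) that the family's key equation specializes at $n=3$ to the Murru-Saettone case, i.e. $(p^3-1)(q^3-1)/[(p-1)(q-1)] = (p^2+p+1)(q^2+q+1)$, and that a toy key pair satisfies the generalized key equation.

```python
p, q = 101, 113                              # toy primes, far too small for real use

def phi_n(p, q, n):
    return (p**n - 1) * (q**n - 1) // ((p - 1) * (q - 1))

# n = 3 recovers the Murru-Saettone quantity (p^2+p+1)(q^2+q+1).
assert phi_n(p, q, 3) == (p*p + p + 1) * (q*q + q + 1)

n, e = 3, 65537
d = pow(e, -1, phi_n(p, q, n))               # modular inverse (Python 3.8+)
k = (e * d - 1) // phi_n(p, q, n)
assert e * d - k * phi_n(p, q, n) == 1       # the generalized key equation holds
print("n = 3 key equation verified:", (e, d, k))
```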
Last updated:  2023-07-03
hodlCoin: A Financial Game
Joachim Zahnentferner
The hodlCoin game is a competitive zero-sum massively multiplayer financial game where the goal is to hodl an asset for long periods of time. By hodling, a player deposits coins of a given asset in a common reserve and receives a proportional amount of hodlCoins. Players who un-hodl pay a fee that is accumulated in the common reserve. Thus, the longer a player hodls, in comparison with other players, the more the player will benefit from fees paid by the players who are un-hodling earlier. Surprisingly, we prove here that, thanks to the accumulation of fees, the price of hodlCoins in comparison with the underlying asset never decreases.
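A toy simulation of the stated mechanism, under the assumptions that deposits mint hodlCoins proportionally at the current price and that withdrawals leave a fee (an arbitrary 5% here) in the reserve; it only illustrates why the price reserve/supply cannot decrease, and is not the paper's formal model.

```python
class HodlPool:
    """Toy model: deposits mint hodlCoins at the current price; withdrawals burn
    hodlCoins and pay out slightly less than the proportional share, the
    difference (the fee) staying in the reserve."""
    def __init__(self, fee_rate=0.05):        # the 5% fee rate is an arbitrary assumption
        self.reserve = 0.0                    # units of the underlying asset held by the pool
        self.supply = 0.0                     # hodlCoins in circulation
        self.fee_rate = fee_rate

    def price(self):                          # price of one hodlCoin in the underlying asset
        return self.reserve / self.supply if self.supply else 1.0

    def hodl(self, amount):
        minted = amount / self.price()        # proportional minting keeps the price unchanged
        self.reserve += amount
        self.supply += minted
        return minted

    def unhodl(self, coins):
        payout = coins * self.price() * (1 - self.fee_rate)
        self.reserve -= payout                # the withheld fee stays in the reserve
        self.supply -= coins
        return payout

pool = HodlPool()
a = pool.hodl(100)
b = pool.hodl(50)
prices = [pool.price()]
pool.unhodl(b);      prices.append(pool.price())
pool.unhodl(a / 2);  prices.append(pool.price())
print(prices)        # a non-decreasing sequence, e.g. [1.0, 1.025, 1.076...]
assert all(x <= y + 1e-12 for x, y in zip(prices, prices[1:]))
```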
Last updated:  2023-07-03
State Machines across Isomorphic Layer 2 Ledgers
Maxim Jourenko, Mario Larangeira
With the ever greater adoption of blockchain systems, smart contract based ecosystems have formed to provide financial services and other utility. This results in an ever increasing demand for transactions on blockchains; however, the amount of transactions per second on a given ledger is limited. Layer-2 systems attempt to improve scalability by taking transactions off-chain, with building blocks that are two-party channels which are concatenated to form networks. Interaction between two parties requires (1) routing through such a network, (2) interaction with and collateral from all intermediaries on the routed path, and (3) interactions that are often more limited compared to what can be done on the ledger. In contrast to that design, recent constructions such as Hydra Heads (FC’21) are both multi-party and isomorphic, allowing interactions to have the same expressiveness as on the ledger, making it akin to a ledger located on Layer-2. The follow-up Interhead Construction (MARBLE’22) further extends the protocol to connect Hydra Heads into networks by means of a “virtual” Hydra Head construction. This work puts forth an even greater generalization of the Interhead Protocol, allowing for interaction across different Layer-2 ledgers with a multitude of improvements. As a concrete example, our design is modular and lightweight, which makes it viable for both full virtual ledger constructions as well as straightforward one-time interactions and payment systems.
Last updated:  2023-07-02
Implementation and performance of a RLWE-based commitment scheme and ZKPoK for its linear and multiplicative relations
Ramiro Martínez, Paz Morillo, Sergi Rovira
In this paper we provide the implementation details and performance analysis of the lattice-based post-quantum commitment scheme introduced by Martínez and Morillo in their work titled «RLWE-Based Zero-Knowledge Proofs for Linear and Multiplicative Relations» together with the corresponding Zero-Knowledge Proofs of Knowledge (ZKPoK) of valid openings, linear and multiplicative relations among committed elements. We bridge the gap between the existing theoretical proposals and practical applications, thoroughly revisiting the security proofs of the aforementioned paper to obtain tight conditions that allow us to find the best sets of parameters for actual instantiations of the commitment scheme and its companion ZKPoK. Our implementation is very flexible and its parameters can be adjusted to obtain a trade-off between speed and memory usage, analyzing how suitable for practical use are the underlying lattice-based techniques. Moreover, our implementation further extends the literature of exact Zero-Knowledge proofs, providing ZKPoK of committed elements without any soundness slack.
Last updated:  2023-07-02
Leaking Arbitrarily Many Secrets: Any-out-of-Many Proofs and Applications to RingCT Protocols
Tianyu Zheng, Shang Gao, Yubo Song, Bin Xiao
Ring Confidential Transaction (RingCT) protocol is an effective cryptographic component for preserving the privacy of cryptocurrencies. However, existing RingCT protocols are instantiated from one-out-of-many proofs with only one secret, leading to low efficiency and weak anonymity when handling transactions with multiple inputs. Additionally, current partial knowledge proofs with multiple secrets are neither secure nor efficient to be applied in a RingCT protocol. In this paper, we propose a novel \emph{any-out-of-many proof}, a logarithmic-sized zero-knowledge proof scheme for showing the knowledge of arbitrarily many secrets out of a public list. Unlike other partial knowledge proofs that have to reveal the number of secrets [ACF21], our approach proves the knowledge of multiple secrets without leaking the exact number of them. Furthermore, we improve the efficiency of our method with a generic inner-product transformation to adopt the Bulletproofs compression [BBB+18], which reduces the proof size to $2 \lceil \log_2(N) \rceil \! + \! 9$. Based on our proposed proof scheme, we further construct a compact RingCT protocol for privacy cryptocurrencies, which can provide a logarithmic-sized communication complexity for transactions with multiple inputs. More importantly, as the only known RingCT protocol instantiated from the partial knowledge proofs, our protocol can achieve the highest anonymity level compared with other approaches like Omniring [LRR+19]. For other applications, such as multiple ring signatures, our protocol can also be applied with some modifications. We believe our techniques are also applicable in other privacy-preserving scenarios, such as multiple ring signatures and coin-mixing in the blockchain.
Last updated:  2023-07-01
Improved Multi-User Security Using the Squared-Ratio Method
Yu Long Chen, Wonseok Choi, Changmin Lee
Proving security bounds in contexts with a large number of users is one of the central problems in symmetric-key cryptography today. This paper introduces a new method for information-theoretic multi-user security proofs, called ``the Squared-Ratio Method''. At its core, the method requires the expectation of the square of the ratio of observing the so-called good transcripts (from Patarin's H-coefficient technique) in the real and the ideal world. Central to the method is the observation that for information-theoretic adversaries, the KL-divergence for the multi-user security bound can be written as a summation of the KL-divergence of every single user. We showcase the Squared-Ratio Method on three examples: the Xor of two Permutations by Bellare et al. (EUROCRYPT '98) and Hall et al. (CRYPTO '98), the Encrypted Davies-Mayer by Cogliati and Seurin (CRYPTO '16), and the two permutation variant of the nEHtM MAC algorithm by Dutta et al. (EUROCRYPT '19). With this new tool, we provide improved bounds for the multi-user security of these constructions. Our approach is modular in the sense that the multi-user security can be obtained directly from single-user results.
Last updated:  2023-06-30
EDEN - a practical, SNARK-friendly combinator VM and ISA
Logan Allen, Brian Klatt, Philip Quirk, Yaseen Shaikh
Succinct Non-interactive Arguments of Knowledge (SNARKs) enable a party to cryptographically prove a statement regarding a computation to another party that has constrained resources. Practical use of SNARKs often involves a Zero-Knowledge Virtual Machine (zkVM) that receives an input program and input data, then generates a SNARK proof of the correct execution of the input program. Most zkVMs emulate the von Neumann architecture and must prove relations between a program's execution and its use of Random Access Memory. However, there are conceptually simpler models of computation that are naturally modeled in a SNARK yet are still practical for use. Nock is a minimal, homoiconic combinator function, a Turing-complete instruction set that is practical for general computation, and is notable for its use in Urbit. We introduce Eden, an Efficient Dyck Encoding of Nock that serves as a practical, SNARK-friendly combinator function and instruction set architecture. We describe arithmetization techniques and polynomial equations used to represent the Eden ISA in an Interactive Oracle Proof. Eden provides the ability to prove statements regarding the execution of any program that compiles down to the Eden ISA. We present the Eden zkVM, a particular instantiation of Eden as a zk-STARK.
Last updated:  2023-06-30
Derecho: Privacy Pools with Proof-Carrying Disclosures
Josh Beal, Ben Fisch
A privacy pool enables clients to deposit units of a cryptocurrency into a shared pool where ownership of deposited currency is tracked via a system of cryptographically hidden records. Clients may later withdraw from the pool without linkage to previous deposits. Some privacy pools also support hidden transfer of currency ownership within the pool. In August 2022, the U.S. Department of Treasury sanctioned Tornado Cash, the largest Ethereum privacy pool, on the premise that it enables illicit actors to hide the origin of funds, citing its usage by the DPRK-sponsored Lazarus Group to launder over \$455 million dollars worth of stolen cryptocurrency. This ruling effectively made it illegal for U.S. persons/institutions to use or accept funds that went through Tornado Cash, sparking a global debate among privacy rights activists and lawmakers. Against this backdrop, we present Derecho, a system that institutions could use to request cryptographic attestations of fund origins rather than naively rejecting all funds coming from privacy pools. Derecho is a novel application of proof-carrying data, which allows users to propagate allowlist membership proofs through a privacy pool's transaction graph. Derecho is backwards-compatible with existing Ethereum privacy pool designs, adds no overhead in gas costs, and costs users only a few seconds to produce attestations.
Last updated:  2023-06-30
Aggregate Signatures with Versatile Randomization and Issuer-Hiding Multi-Authority Anonymous Credentials
Omid Mir, Balthazar Bauer, Scott Griffy, Anna Lysyanskaya, Daniel Slamanig
Anonymous credentials (AC) have emerged as a promising privacy-preserving solution for user-centric identity management. They allow users to authenticate in an anonymous and unlinkable way such that only required information (i.e., attributes) from their credentials is revealed. With the increasing push towards decentralized systems and identity, e.g., self-sovereign identity (SSI) and the concept of verifiable credentials, this also creates the need for suitable AC systems. For instance, when relying on existing AC systems, obtaining credentials from different issuers requires the presentation of independent credentials, which can become cumbersome. Consequently, it is desirable for AC systems to support the so-called multi-authority (MA) feature. It allows a compact and efficient showing of multiple credentials from different issuers. Another important property is called issuer hiding (IH). This means that a showing of a set of credentials does not reveal which issuer has issued which credentials, but only whether a verifier-defined policy on the acceptable set of issuers is satisfied. This issue becomes particularly acute in the context of MA, where a user could be uniquely identified by the combination of issuers in their showing. Unfortunately, there are no AC schemes that satisfy both these properties simultaneously. To close this gap, we introduce the concept of Issuer-Hiding Multi-Authority Anonymous Credentials (IhMA). Our proposed solution involves the development of two new signature primitives with versatile randomization features which are of independent interest: 1) Aggregate Signatures with Randomizable Tags and Public Keys (AtoSa) and 2) Aggregate Mercurial Signatures (ATMS), which extend the functionality of AtoSa to additionally support the randomization of messages and yield the first instance of an aggregate (equivalence-class) structure-preserving signature. These primitives can be elegantly used to obtain IhMA with different trade-offs but have applications beyond. We formalize all notions and provide rigorous security definitions for our proposed primitives. We present provably secure and efficient instantiations of the two primitives as well as corresponding IhMA systems. Finally, we provide benchmarks based on an implementation to demonstrate the practical efficiency of our constructions.
Last updated:  2023-06-30
An Efficient Data-Independent Priority Queue and its Application to Dark Pools
Sahar Mazloom, Benjamin E. Diamond, Antigoni Polychroniadou, Tucker Balch
We introduce a new data-independent priority queue which supports amortized polylogarithmic-time insertions and constant-time deletions, and crucially, (non-amortized) constant-time \textit{read-front} operations, in contrast with a prior construction of Toft (PODC'11). Moreover, we reduce the number of required comparisons. Data-independent data structures - first identified explicitly by Toft, and further elaborated by Mitchell and Zimmerman (STACS'14) - facilitate computation on encrypted data without branching, which is prohibitively expensive in secure computation. Using our efficient data-independent priority queue, we introduce a new privacy-preserving dark pool application, which significantly improves upon prior constructions which were based on costly sorting operations. Dark pools are securities-trading venues which attain ad-hoc order privacy, by matching orders outside of publicly visible exchanges. In this paper, we describe an efficient and secure dark pool (implementing a full continuous double auction), building upon our priority queue construction. Our dark pool's security guarantees are cryptographic - based on secure multiparty computation (MPC) - and do not require that the dark pool operators be trusted. Our approach improves upon the asymptotic and concrete efficiency attained by previous efforts. Existing cryptographic dark pools process new orders in time which grows linearly in the size of the standing order book; ours does so in polylogarithmic time. We describe a concrete implementation of our protocol, with malicious security in the honest majority setting. We also report benchmarks of our implementation, and compare these to prior works. Our protocol reduces the total running time by several orders of magnitude, compared to prior secure dark pool solutions.
Last updated:  2023-06-30
Rapidash: Foundations of Side-Contract-Resilient Fair Exchange
Hao Chung, Elisaweta Masserova, Elaine Shi, Sri AravindaKrishnan Thyagarajan
Fair exchange is a fundamental primitive enabled by blockchains, and is widely adopted in applications such as atomic swaps, payment channels, and DeFi. Most existing designs of blockchain-based fair exchange protocols consider only the participating users as strategic players, and assume the miners are honest and passive. However, recent works revealed that the fairness of commonly deployed fair exchange protocols can be broken entirely in the presence of user-miner collusion. In particular, a user can bribe the miners to help it cheat — a phenomenon also referred to as Miner Extractable Value (MEV). In this work, we provide the first formal treatment of side-contract-resilient fair exchange where users and miners may enter into arbitrary contracts on the side. We propose a new fair exchange protocol called Rapidash, and prove that the protocol is incentive compatible in the presence of user-miner collusion. In particular, we show that Rapidash satisfies a coalition-resistant Nash equilibrium absent external incentives. Further, even when there exist arbitrary but bounded external incentives, Rapidash still protects honest players and ensures that they cannot be harmed. Last but not least, our game-theoretic formulations also lay the theoretical groundwork for studying side-contract-resilient fair exchange protocols. Finally, to showcase the instantiability of Rapidash with a wide range of blockchain systems, we present instantiations of Rapidash that are compatible with Bitcoin and Ethereum while incurring only a minimal overhead in terms of costs for the users.
Last updated:  2023-06-29
smartFHE: Privacy-Preserving Smart Contracts from Fully Homomorphic Encryption
Ravital Solomon, Rick Weber, Ghada Almashaqbeh
Despite the great potential and flexibility of smart contract-enabled blockchains, building privacy-preserving applications using these platforms remains an open question. Existing solutions fall short since they ask end users to coordinate and perform the computation off-chain themselves. While such an approach reduces the burden of the miners of the system, it largely limits the ability of lightweight users to enjoy privacy since performing the actual computation on their own and attesting to its correctness is expensive even with state-of-the-art proof systems. To address this limitation, we propose smartFHE, a framework to support private smart contracts using fully homomorphic encryption (FHE). To the best of our knowledge, smartFHE is the first to use FHE in the blockchain model; moreover, it is the first to support arbitrary privacy-preserving applications for lightweight users under the same computation-on-demand model pioneered by Ethereum. smartFHE does not overload the user since miners are instead responsible for performing the private computation. This is achieved by employing FHE so miners can compute over encrypted data and account balances. Users are only responsible for proving well-formedness of their private inputs using efficient zero-knowledge proof systems (ZKPs). We formulate a notion for a privacy-preserving smart contract (PPSC) scheme and show a concrete instantiation of our smartFHE framework. We address challenges resulting from using FHE in the blockchain setting---including concurrency and dealing with leveled schemes. We also show how to choose suitable FHE and ZKP schemes to instantiate our framework, since naively choosing these will lead to poor performance in practice. We formally prove correctness and security of our construction. Finally, we conduct experiments to evaluate its efficiency, including comparisons with a state-of-the-art scheme and testing several private smart contract applications. We have open-sourced our (highly optimized) ZKP library, which could be of independent interest.
Last updated:  2023-06-29
A Deep Learning Aided Differential Distinguisher Improvement Framework with More Lightweight and Universality
Jiashuo Liu, Jiongjiong Ren, Shaozhen Chen
In CRYPTO 2019, Gohr opened up a new direction for cryptanalysis. He successfully applied deep learning to differential cryptanalysis against the NSA block cipher SPECK32/64, achieving higher accuracy than traditional differential distinguishers. Until now, one of the mainstream research directions has been increasing the training sample size and utilizing different neural networks to improve the accuracy of neural distinguishers. This mindset may lead to a huge number of parameters, a heavy computing load, and a large amount of memory in the distinguisher training process. However, in the practical application of cryptanalysis, the applicability of the attack method in a resource-constrained environment is very important. Therefore, we focus on cost optimization and aim to reduce network parameters for differential neural cryptanalysis. In this paper, we propose two cost-optimized neural distinguisher improvement methods, from the aspects of data format and network structure, respectively. Firstly, we obtain a partial output difference neural distinguisher using only a 4-bit training data format, which is constructed with a new advantage bits search algorithm based on two key improvement conditions. In addition, we perform an interpretability analysis of the new neural distinguishers, whose results are mainly reflected in the relationship between the neural distinguishers, truncated differentials, and advantage bits. Secondly, we replace the traditional convolution with the depthwise separable convolution to reduce the training cost without affecting the accuracy as much as possible. Overall, the number of training parameters can be reduced to less than 50\% by using our new network structure for training neural distinguishers. Finally, we apply the network structure to the partial output difference neural distinguishers. The combinatorial approach has led to a further reduction in the number of parameters (approximately 30\% of Gohr's distinguishers for SPECK).
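The parameter saving from the network-structure change is easy to quantify. The sketch below compares a standard 1D convolution with a depthwise separable one; the kernel size and channel counts are arbitrary assumptions, not the paper's distinguisher architecture.

```python
def conv1d_params(k, c_in, c_out):
    """Weight count of a standard 1D convolution with kernel size k (biases ignored)."""
    return k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k-tap filter per input channel, followed by a 1x1 pointwise convolution."""
    return k * c_in + c_in * c_out

k, c_in, c_out = 3, 32, 32                    # hypothetical layer sizes
std = conv1d_params(k, c_in, c_out)
sep = depthwise_separable_params(k, c_in, c_out)
print(std, sep, f"-> {sep / std:.1%} of the standard parameter count")
```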
Last updated:  2023-06-29
A Framework for Statistically Sender Private OT with Optimal Rate
Pedro Branco, Nico Döttling, Akshayaram Srinivasan
Statistical sender privacy (SSP) is the strongest achievable security notion for two-message oblivious transfer (OT) in the standard model, providing statistical security against malicious receivers and computational security against semi-honest senders. In this work we provide a novel construction of SSP OT from the Decisional Diffie-Hellman (DDH) and the Learning Parity with Noise (LPN) assumptions achieving (asymptotically) optimal amortized communication complexity, i.e. it achieves rate 1. Concretely, the total communication complexity for $k$ OT instances is $2k(1+o(1))$, which (asymptotically) approaches the information-theoretic lower bound. Previously, it was only known how to realize this primitive using heavy rate-1 FHE techniques [Brakerski et al., Gentry and Halevi TCC'19]. At the heart of our construction is a primitive called statistical co-PIR, essentially a public-key encryption scheme which statistically erases bits of the message in a few hidden locations. Our scheme achieves nearly optimal ciphertext size and provides statistical security against malicious receivers. Computational security against semi-honest senders holds under the DDH assumption.
Last updated:  2023-06-29
PSI with computation or Circuit-PSI for Unbalanced Sets from Homomorphic Encryption
Yongha Son, Jinhyuck Jeong
Circuit-based Private Set Intersection (circuit-PSI) refers to cryptographic protocols that let two parties with input sets $X$ and $Y$ compute a function $f$ over the intersection set $X \cap Y$, without revealing any other information. The research efforts for circuit-PSI have so far mainly focused on the case where the input set sizes $|X|$ and $|Y|$ are similar, and they scale poorly for extremely unbalanced set sizes $|X| \gg |Y|$. Recently, Lepoint \textit{et al.} (ASIACRYPT'21) proposed the first dedicated solutions for this problem, whose online cost is only linear in the small set size $|Y|$. However, it requires an expensive setup phase that requires huge storage of about $O(|X|)$ on the small set holder side, which can be problematic in applications where the small set holder is assumed to have restricted equipment. In this work, we suggest new efficient proposals for circuit-PSI tailored for unbalanced inputs, which feature {\emph{zero}} small set holder side storage and online phase performance comparable to the previous work. At the technical core, we use the homomorphic encryption (HE) based {\emph{plain}} PSI protocols of Cong \textit{et al.} (CCS'21), with several technically non-trivial arguments on algorithm and security. We demonstrate the superiority of our proposals for several input set sizes with an implementation. As a representative example, for input sets of size $2^{24}$ and $2^{12}$, our proposals require {\emph{zero}} storage on the small set holder whereas Lepoint \textit{et al.} requires over $7$GB. The online phase remains similar; over a LAN network setting, ours takes $7.5$ seconds (or $20.9$ seconds) with $45$MB (or $11.7$MB) communication, while Lepoint \textit{et al.} requires $4.2$ seconds with $117$MB communication.
Last updated:  2023-06-29
Cryptanalysis of rank-metric schemes based on distorted Gabidulin codes
Pierre Briaud, Pierre Loidreau
In this work, we introduce a new attack on the Loidreau scheme [PQCrypto 2017] and its more recent variant LowMS. This attack is based on a constrained linear system for which we provide two solving approaches: - The first one is an enumeration algorithm inspired by combinatorial attacks on the Rank Decoding (RD) Problem. While the attack technique remains very simple, it allows us to obtain the best known structural attack on the parameters of these two schemes. - The second one is to rewrite it as a bilinear system over Fq. Even if Gröbner basis techniques on this second system seem infeasible, we provide a detailed analysis of the first degree fall polynomials which arise when applying such algorithms.
Last updated:  2023-06-29
FuLeeca: A Lee-based Signature Scheme
Stefan Ritterhoff, Georg Maringer, Sebastian Bitzer, Violetta Weger, Patrick Karl, Thomas Schamberger, Jonas Schupp, Antonia Wachter-Zeh
In this work we introduce a new code-based signature scheme, called \textsf{FuLeeca}, based on the NP-hard problem of finding codewords of given Lee-weight. The scheme follows the Hash-and-Sign approach applied to quasi-cyclic codes. Similar approaches in the Hamming metric have suffered statistical attacks, which revealed the small support of the secret basis. Using the Lee metric, we are able to thwart such attacks. We use existing hardness results on the underlying problem and study adapted statistical attacks. We propose parameters for \textsf{FuLeeca}~and compare them to an extensive list of proposed post-quantum secure signature schemes including the ones already standardized by NIST. This comparison reveals that \textsf{FuLeeca}~is competitive. For example, for NIST category I, i.e., 160 bit of classical security, we obtain an average signature size of 1100 bytes and public key sizes of 1318 bytes. Comparing the total communication cost, i.e., the sum of the signature and public key size, we see that \textsf{FuLeeca} is only outperformed by Falcon while the other standardized schemes Dilithium and SPHINCS+ show larger communication costs than \textsf{FuLeeca}.
Last updated:  2023-06-28
On Provable White-Box Security in the Strong Incompressibility Model
Estuardo Alpirez Bock, Chris Brzuska, Russell W. F. Lai
Incompressibility is a popular security notion for white-box cryptography and captures that a large encryption program cannot be compressed without losing functionality. Fouque, Karpman, Kirchner and Minaud (FKKM) defined strong incompressibility, where a compressed program should not even help to distinguish encryptions of two messages of equal length. Equivalently, the notion can be phrased as indistinguishability under chosen-plaintext attacks and key-leakage (LK-IND-CPA), where the leakage rate is high. In this paper, we show that LK-IND-CPA security with superlogarithmic-length leakage, and thus strong incompressibility, cannot be proven under standard (i.e. single-stage) assumptions, if the encryption scheme is key-fixing, i.e. a polynomial number of message-ciphertext pairs uniquely determine the key with high probability. Our impossibility result refutes a claim by FKKM that their big-key generation mechanism achieves strong incompressibility when combined with any PRG or any conventional encryption scheme, since the claim is not true for encryption schemes which are key-fixing (or for PRGs which are injective). In particular, we prove that the cipher block chaining (CBC) block cipher mode is key-fixing when modelling the cipher as a truly random permutation for each key. Subsequent to and inspired by our work, FKKM prove that their original big-key generation mechanism can be combined with a random oracle into an LK-IND-CPA-secure encryption scheme, circumventing the impossibility result by the use of an idealised model. Along the way, our work also helps clarify the relations between incompressible white-box cryptography, big-key symmetric encryption, and general leakage resilient cryptography, and their limitations.
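To make the key-fixing intuition for CBC concrete, here is a toy model (8-bit blocks, the block cipher under a fixed key modelled as a truly random permutation): each plaintext/ciphertext pair exposes one input/output pair of the hidden permutation, and enough such pairs essentially pin down the key. This is an illustration only, not the paper's formal argument.

```python
import os, random

def keyed_permutation(key: bytes):
    """Model the block cipher under a fixed key as a truly random permutation of 8-bit blocks."""
    rng = random.Random(key)
    table = list(range(256))
    rng.shuffle(table)
    return table

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    perm = keyed_permutation(key)
    prev, out = iv[0], []
    for block in plaintext:
        prev = perm[block ^ prev]   # c_i = E_k(p_i XOR c_{i-1})
        out.append(prev)
    return bytes(out)

key, iv = os.urandom(16), os.urandom(1)
pt = bytes(range(16))
ct = cbc_encrypt(key, iv, pt)

# Each pair pins down one input/output pair of the hidden permutation:
# input = p_i XOR c_{i-1}, output = c_i.
pairs = [(p ^ c_prev, c) for p, c_prev, c in zip(pt, iv + ct, ct)]
print(pairs[:4])
```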
Last updated:  2023-06-28
Reusable Secure Computation in the Plain Model
Vipul Goyal, Akshayaram Srinivasan, Mingyuan Wang
Consider the standard setting of two-party computation where the sender has a secret function $f$ and the receiver has a secret input $x$ and the output $f(x)$ is delivered to the receiver at the end of the protocol. Let us consider the unidirectional message model where only one party speaks in each round. In this setting, Katz and Ostrovsky (Crypto 2004) showed that at least four rounds of interaction between the parties are needed in the plain model (i.e., no trusted setup) if the simulator uses the adversary in a black-box way (a.k.a. black-box simulation). Suppose the sender and the receiver would like to run multiple sequential iterations of the secure computation protocol on possibly different inputs. For each of these iterations, do the parties need to start the protocol from scratch and exchange four messages? In this work, we explore the possibility of \textit{amortizing} the round complexity or in other words, \textit{reusing} a certain number of rounds of the secure computation protocol in the plain model. We obtain the following results. 1. Under standard cryptographic assumptions, we construct a four-round two-party computation protocol where (i) the first three rounds of the protocol could be reused an unbounded number of times if the receiver input remains the same and only the sender input changes, and (ii) the first two rounds of the protocol could be reused an unbounded number of times if the receiver input needs to change as well. In other words, the sender sends a single additional message if only its input changes, and in the other case, we need one message each from the receiver and the sender. The number of additional messages needed in each of the above two modes is optimal and, additionally, our protocol allows arbitrary interleaving of these two modes. 2. We also extend these results to the multiparty setting (in the simultaneous message exchange model) and give round-optimal protocols such that (i) the first two rounds could be reused an unbounded number of times if the inputs of the parties need to change and (ii) the first three rounds could be reused an unbounded number of times if the inputs remain the same but the functionality to be computed changes. As in the two-party setting, we allow arbitrary interleaving of the above two modes of operation.
Last updated:  2023-06-28
BLAC: A Blockchain-based Lightweight Access Control Scheme in Vehicular Social Networks
Yuting Zuo, Li Xu, Yuexin Zhang, Chenbin Zhao, Zhaozhe Kang
Vehicular Social Networks (VSNs) rely on data shared by users to provide convenient services. Data is outsourced to the cloud server and the distributed roadside units in VSNs. However, roadside units have limited resources, so the data sharing process is inefficient and vulnerable to security threats such as illegal access, tampering attacks and collusion attacks. In this article, to address these security shortcomings, we define a chain tolerance semi-trusted model to describe the credibility of a distributed group based on the anti-tampering feature of blockchain. We further propose a Blockchain-based Lightweight Access Control scheme in VSNs that resists tampering and collusion attacks, called BLAC. To address the efficiency shortcomings, we design a ciphertext piece storage algorithm and a corresponding recovery algorithm to achieve lightweight storage cost. In the decryption operation, we separate out an outsourcing-based pre-decryption algorithm to achieve lightweight decryption computation cost on the user side. Finally, we present formal security analyses and simulation experiments for BLAC, and compare the experimental results with existing relevant schemes. The security analyses show that our scheme is secure, and the experimental results show that our scheme is lightweight and practical.
Last updated:  2023-06-28
On the Non-Malleability of ECVRF in the Algebraic Group Model
Willow Barkan-Vered, Franklin Harding, Jonathan Keller, Jiayu Xu
ECVRF is a verifiable random function (VRF) scheme used in multiple cryptocurrency systems. It has recently been proven to satisfy the notion of non-malleability which is useful in applications to blockchains (Peikert and Xu, CT-RSA 2023); however, the existing proof uses the rewinding technique and has a quadratic security loss. In this work, we re-analyze the non-malleability of ECVRF in the algebraic group model (AGM) and give a tight proof. We also compare our proof with the unforgeability proof for the Schnorr signature scheme in the AGM (Fuchsbauer, Plouviez and Seurin, EUROCRYPT 2020).
Last updated:  2023-06-27
Oblivious Transfer from Rerandomizable PKE
Shuaishuai Li, Cong Zhang, Dongdai Lin
The relationship between oblivious transfer (OT) and public-key encryption (PKE) has been studied by Gertner et al. (FOCS 2000). They showed that OT can be constructed from special types of PKE, i.e., PKE with oblivious sampleability of public keys or ciphertexts. In this work, we give new black-box constructions of OT from PKE without any oblivious sampleability. Instead, we require that the PKE scheme is rerandomizable, meaning that one can use the public key to rerandomize a ciphertext into a fresh ciphertext. We give two different OT protocols with different efficiency features based on rerandomizable PKE. For $1$-out-of-$n$ OT, in our first OT protocol, the sender has sublinear (in $n$) cost, and in our second OT protocol, the cost of the receiver is independent of $n$. As a comparison, in the PKE-based OT protocols of Gertner et al., both the sender and receiver have linear cost.
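ElGamal is the textbook example of a rerandomizable PKE, so the following toy sketch (a small safe-prime group, parameters far too small for real security) shows the rerandomization operation such constructions rely on; it is not the OT protocol from the paper.

```python
import secrets

# Toy group: p = 2q + 1 with p, q prime; g = 4 generates the order-q subgroup.
p, q, g = 2039, 1019, 4

sk = secrets.randbelow(q)
pk = pow(g, sk, p)

def encrypt(pk, m):
    r = secrets.randbelow(q)
    return pow(g, r, p), m * pow(pk, r, p) % p

def rerandomize(pk, ct):
    """Multiply in a fresh encryption of 1: same plaintext, new randomness, public key only."""
    c1, c2 = ct
    s = 1 + secrets.randbelow(q - 1)
    return c1 * pow(g, s, p) % p, c2 * pow(pk, s, p) % p

def decrypt(sk, ct):
    c1, c2 = ct
    return c2 * pow(c1, -sk, p) % p   # c2 * c1^{-sk} (negative exponent needs Python 3.8+)

m = 42
ct = encrypt(pk, m)
ct2 = rerandomize(pk, ct)
assert ct2 != ct and decrypt(sk, ct2) == m == decrypt(sk, ct)
print("rerandomized ciphertext still decrypts to", m)
```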
Last updated:  2023-06-27
Oblivious Accumulators
Foteini Baldimtsi, Ioanna Karantaidou, Srinivasan Raghuraman
A cryptographic accumulator is a succinct set commitment scheme with efficient (non-)membership proofs that typically supports updates (additions and deletions) on the accumulated set. When elements are added to or deleted from the set, an update message is issued. The collection of all the update messages essentially leaks the underlying accumulated set which in certain applications is not desirable. In this work, we define oblivious accumulators, a set commitment with concise membership proofs that hides the elements and the set size from every entity: an outsider, a verifier or other element holders. We formalize this notion of privacy via two properties: element hiding and add-delete indistinguishability. We also define almost-oblivious accumulators, that only achieve a weaker notion of privacy called add-delete unlinkability. Such accumulators hide the elements but not the set size. We consider the trapdoorless, decentralized setting where different users can add and delete elements from the accumulator and compute membership proofs. We then give a generic construction of an oblivious accumulator based on key-value commitments (KVC). We also show a generic way to construct KVCs from an accumulator and a vector commitment scheme. Finally, we give lower bounds on the communication (size of update messages) required for oblivious accumulators and almost-oblivious accumulators.
Last updated:  2023-06-27
Private Timestamps and Selective Verification of Notarised Data on a Blockchain
Enrique Larraia, Owen Vaughan
In this paper, we present a novel method for timestamping and data notarisation on a distributed ledger. The problem with on-chain hashes is that a cryptographic hash is a deterministic function, which allows the blockchain to be used as an oracle that confirms whether potentially leaked data is authentic (timestamped or notarised by the user). Instead, we suggest using on-chain Pedersen commitments and off-chain zero-knowledge proofs (ZKP) for designated verifiers to prove the link between the data and the on-chain commitment. Our technique maintains the privacy of the data, and retains control of who can access it and when they can access it. This holds true even on a public blockchain, and even if the data is leaked by authorised parties. Indeed, an authorised data consumer (a designated verifier for the ZKP) who discloses the data publicly cannot convince anyone about the legitimacy of the data (in the sense that it is consistent with the information uploaded to the blockchain), because the ZKP proof is valid only for them. Our techniques can be used in scenarios where it is required to audit highly sensitive data (e.g. application logs) by specific third parties, or to provide on-demand data certification by notaries.
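For intuition, here is a minimal Pedersen commitment sketch over a toy prime-order subgroup (parameters far too small for real use, and the second generator is derived in a way acceptable only for illustration; in practice $h$ must be chosen so that nobody knows $\log_g h$). The commitment hides the committed value, and an opening $(m, r)$ can later be shown, e.g. inside a designated-verifier ZKP as described above.

```python
import secrets

# Toy subgroup of prime order q inside Z_p^*: p = 2q + 1, g generates the order-q subgroup.
p, q, g = 2039, 1019, 4
h = pow(g, 987, p)   # ILLUSTRATION ONLY: in a real system log_g(h) must be unknown to everyone

def commit(m: int):
    r = secrets.randbelow(q)                              # blinding factor -> hiding
    return pow(g, m % q, p) * pow(h, r, p) % p, r

def open_commitment(com: int, m: int, r: int) -> bool:
    return com == pow(g, m % q, p) * pow(h, r, p) % p

data_hash = 777                  # stand-in for (a reduction of) the hash of the notarised data
com, r = commit(data_hash)
print(open_commitment(com, data_hash, r))       # True: the opening matches
print(open_commitment(com, data_hash + 1, r))   # False: binding under the DL assumption
```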
Last updated:  2023-06-27
Cryptography with Weights: MPC, Encryption and Signatures
Sanjam Garg, Abhishek Jain, Pratyay Mukherjee, Rohit Sinha, Mingyuan Wang, Yinuo Zhang
The security of several cryptosystems rests on the trust assumption that a certain fraction of the parties are honest. This trust assumption has enabled a diverse set of cryptographic applications such as secure multiparty computation, threshold encryption, and threshold signatures. However, current and emerging practical use cases suggest that this paradigm of one-person-one-vote is outdated. In this work, we consider {\em weighted} cryptosystems where every party is assigned a certain weight and the trust assumption is that a certain fraction of the total weight is honest. This setting can be translated to the standard setting (where each party has a unit weight) via virtualization. However, this method is quite expensive, incurring a multiplicative overhead in the weight. We present new weighted cryptosystems with significantly better efficiency. Specifically, our proposed schemes incur only an {\em additive} overhead in weights. \begin{itemize} \item We first present a weighted ramp secret-sharing scheme where the size of the secret share is as short as $O(w)$ (where $w$ corresponds to the weight). In comparison, Shamir's secret sharing with virtualization requires secret shares of size $w\cdot\lambda$, where $\lambda=\log |\mathbb{F}|$ is the security parameter. \item Next, we use our weighted secret-sharing scheme to construct weighted versions of (semi-honest) secure multiparty computation (MPC), threshold encryption, and threshold signatures. All these schemes inherit the efficiency of our secret sharing scheme and incur only an additive overhead in the weights. \end{itemize} Our weighted secret-sharing scheme is based on the Chinese remainder theorem. Interestingly, this secret-sharing scheme is {\em non-linear} and only achieves statistical privacy. These distinct features introduce several technical hurdles in applications to MPC and threshold cryptosystems. We resolve these challenges by developing several new ideas.
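As background for the CRT-based approach, the sketch below is a classic Asmuth-Bloom-style threshold secret sharing over toy moduli; it is not the paper's weighted ramp scheme, but a weighted variant would, roughly speaking, hand higher-weight parties residues modulo larger moduli (more bits of the lifted secret).

```python
import random
from functools import reduce

def crt(residues, moduli):
    """Chinese remainder theorem for pairwise coprime moduli (Python 3.8+ for pow(x, -1, m))."""
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# Toy parameters: secret modulus m0, share moduli, threshold t = 2.
# The Asmuth-Bloom condition m1*m2 > m0*m4 holds here: 13*17 = 221 > 7*23 = 161.
m0, moduli, t = 7, [13, 17, 19, 23], 2

def share(secret):
    bound = moduli[0] * moduli[1]                                # product of the t smallest moduli
    y = secret + m0 * random.randrange((bound - secret) // m0)   # lift the secret below the bound
    return [y % m for m in moduli]

def reconstruct(indices, shares):
    y = crt(shares, [moduli[i] for i in indices])   # any t shares recover the lifted value exactly
    return y % m0

s = share(5)
print(reconstruct([1, 3], [s[1], s[3]]))   # -> 5
```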
Last updated:  2023-06-27
SPDH-Sign: towards Efficient, Post-quantum Group-based Signatures
Christopher Battarbee, Delaram Kahrobaei, Ludovic Perret, Siamak F. Shahandashti
In this paper, we present a new diverse class of post-quantum group-based Digital Signature Schemes (DSS). The approach is significantly different from previous examples of group-based digital signatures and adopts the framework of group action-based cryptography: we show that each finite group defines a group action relative to the semidirect product of the group by its automorphism group, and give security bounds on the resulting signature scheme in terms of the group-theoretic computational problem known as the Semidirect Discrete Logarithm Problem (SDLP). Crucially, we make progress towards being able to efficiently compute the novel group action, and give an example of a parameterised family of groups for which the group action can be computed for any parameters, thereby negating the need for expensive offline computation or inclusion of redundancy required in other schemes of this type.
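For orientation, the group action and hardness assumption in this line of work are typically stated along the following lines (the notation is ours and may differ from the paper's precise formulation):
\[
(g_1,\phi_1)\cdot(g_2,\phi_2)=\big(g_1\,\phi_1(g_2),\ \phi_1\phi_2\big),
\qquad
(g,\phi)^n=\big(g\,\phi(g)\,\phi^2(g)\cdots\phi^{n-1}(g),\ \phi^n\big),
\]
where $\phi$ is an automorphism of the finite group $G$; the Semidirect Discrete Logarithm Problem asks to recover $n$ given $g$, $\phi$ and the first component of $(g,\phi)^n$.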
Last updated:  2023-06-27
Enforcing Data Geolocation Policies in Public Cloud using Trusted Computing
Syed Zair Abbas, Mudassar Aslam
With the advancement of technology, cloud computing continues to deliver solutions that automate and simplify complex computational tasks. Advantages such as no maintenance cost, accessibility, data backup, pay-per-use models, unlimited storage, and processing power encourage individuals and businesses to migrate their workloads to the cloud. Despite these numerous advantages, the geolocation of data in the cloud environment is a major concern, since it affects performance and determines which government legislation applies to the data; unclear data geolocation can therefore cause compliance issues. In this work, we present a technique that allows users to restrict the geolocation of their data in the cloud environment. We use trusted computing mechanisms to remotely attest the host and its geolocation. In this model, the user uploads encrypted data and shares the decryption key only with a third-party attestation server. After successful attestation, the decryption key is sealed to the TPM of the host, guaranteeing the authorized geolocation and platform state.
Last updated:  2023-06-27
Cryptanalysis of the Cryptosystems Based on the Generalized Hidden Discrete Logarithm Problem
Ma Yanlong
In this paper, we show that the hidden discrete logarithm problem (HDLP) and the generalized form of HDLP (GHDLP) over finite non-commutative associative algebras (FNAAs) can be reduced to the discrete logarithm problem (DLP) in a finite field by analyzing the eigenvalues of the representation matrix. Through an analysis of computational complexity, we show that HDLP and GHDLP are not good improvements over DLP. With all the instruments in hand, we show how some schemes based on GHDLP can be broken. Thus we conclude that all ideas of constructing cryptographic schemes based on these two problems are of no practical significance.
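As a simplified illustration of the eigenvalue argument (not the paper's full analysis): if a public algebra element $a$ is represented by a matrix $A$ over the base field, then exponentiation of $a$ corresponds to powering $A$, and
\[
A v = \lambda v \;\Longrightarrow\; A^n v = \lambda^n v ,
\]
so an eigenvalue $\mu=\lambda^n$ of the matrix representing $a^n$ (taken in a suitable extension field) yields an instance $\lambda^n=\mu$ of the ordinary DLP, from which the hidden exponent $n$ can be recovered.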
Last updated:  2023-06-26
An extension of Overbeck's attack with an application to cryptanalysis of Twisted Gabidulin-based schemes.
Alain Couvreur, Ilaria Zappatore
In this article, we discuss the decoding of Gabidulin and related codes from a cryptographic point of view, and we observe that these codes can be decoded solely from the knowledge of a generator matrix. We then extend and revisit the Gibson and Overbeck attacks on the generalized GPT encryption scheme (instantiated with the Gabidulin code) for different ranks of the distortion matrix. We apply our attack to the case of an instantiation with twisted Gabidulin codes.
Last updated:  2023-06-26
Publicly Verifiable Zero-Knowledge and Post-Quantum Signatures From VOLE-in-the-Head
Carsten Baum, Lennart Braun, Cyprien Delpech de Saint Guilhem, Michael Klooß, Emmanuela Orsini, Lawrence Roy, Peter Scholl
We present a new method for transforming zero-knowledge protocols in the designated verifier setting into public-coin protocols, which can be made non-interactive and publicly verifiable. Our transformation applies to a large class of ZK protocols based on oblivious transfer. In particular, we show that it can be applied to recent, fast protocols based on vector oblivious linear evaluation (VOLE), with a technique we call VOLE-in-the-head, upgrading these protocols to support public verifiability. Our resulting ZK protocols have linear proof size, and are simpler, smaller and faster than related approaches based on MPC-in-the-head. To build VOLE-in-the-head while supporting both binary circuits and large finite fields, we develop several new technical tools. One of these is a new proof of security for the SoftSpokenOT protocol (Crypto 2022), which generalizes it to produce certain types of VOLE correlations over large fields. Secondly, we present a new ZK protocol that is tailored to take advantage of this form of VOLE, which leads to a publicly verifiable VOLE-in-the-head protocol with only 2x more communication than the best, designated-verifier VOLE-based protocols. We analyze the soundness of our approach when made non-interactive using the Fiat-Shamir transform, using round-by-round soundness. As an application of the resulting NIZK, we present FAEST, a post-quantum signature scheme based on AES. FAEST is the first AES-based signature scheme to be smaller than SPHINCS+, with signature sizes between 5.6 and 6.6kB at the 128-bit security level. Compared with the smallest version of SPHINCS+ (7.9kB), FAEST verification is slower, but the signing times are between 8x and 40x faster.
Last updated:  2023-06-26
Exploiting algebraic structures in probing security
Maxime Plançon
The so-called $\omega$-encoding, introduced by Goudarzi, Joux and Rivain (Asiacrypt 2018), generalizes the commonly used arithmetic encoding. By using the additional structure of this encoding, they proposed a masked multiplication gadget (GJR) with quasilinear complexity (in randomness and operations). A follow-up contribution in this line of research, by Goudarzi, Prest, Rivain and Vergnaud, appeared at TCHES 2021. The authors revisited the aforementioned multiplication gadget (GPRV) and introduced the IOS security notion for refresh gadgets to allow secure composition of probing-secure gadgets. In this paper, we propose a follow-up on GPRV, namely a region-probing secure masked compiler for arithmetic circuits. Our contribution stems from a single lemma linking algebra and probing security for a wide class of circuits, further taking advantage of the algebraic structure of $\omega$-encoding and the extension-field structure of the underlying field $\mathbb F$, which was so far left unexploited. On the theoretical side, we propose a security notion for $\boldsymbol{\omega}_d$-masked circuits which we call Reducible-To-Independent-K-linear (RTIK). When the number of shares $d$ is less than or equal to the degree $k$ of $\mathbb F$, RTIK circuits achieve region-probing security. Moreover, RTIK circuits may be composed naively and remain RTIK. We also propose a weaker version of IOS, which we call KIOS, for refresh gadgets. This notion allows composing RTIK circuits with a randomness/security trade-off compared to the naive composition. To substantiate our new definitions, we also provide examples of competitively efficient gadgets satisfying these weaker security notions. Explicitly, we give 1) two refresh gadgets that use $d-1$ random field elements to refresh a length-$d$ encoding, both of which are KIOS but not IOS, and 2) a multiplication gadget with bilinear multiplication complexity $d^{\log 3}$ that uses $d$ fresh random elements per run. Our compiler outperforms ISW asymptotically, but for our security proofs to hold we require that the number of shares $d$ is less than or equal to the degree of $\mathbb F$ as an extension, so that there is sufficient structure to exploit.
Last updated:  2023-06-26
SoK: Delay-based Cryptography
Liam Medley, Angelique Faye Loe, Elizabeth A. Quaglia
In this work, we provide a systematisation of knowledge of delay-based cryptography, in which we discuss and compare the existing primitives within cryptography that utilise a time-delay. We start by considering the role of time within cryptography, explaining broadly what a delay aimed to achieve at its inception and now, in the modern age. We then move on to describing the underlying assumptions used to achieve these goals, and analyse topics including trust, decentralisation and concrete methods to implement a delay. We then survey the existing primitives, discussing their security properties, instantiations and applications. We make explicit the relationships between these primitives, identifying a hierarchy and the theoretical gaps that exist. We end this systematisation of knowledge by highlighting relevant future research directions within the field of delay-based cryptography, from which this area would greatly benefit.
Last updated:  2023-06-26
A proposal for quantum GRS algorithm and the cryptanalysis for ROLLO and RQC
Asuka Wakasugi, Mitsuru Tada
Code-based cryptosystems (CBCs) are among the candidates for post-quantum cryptosystems (PQCs). Their security primarily relies on the syndrome decoding problem (SDP). In this paper, we focus on rank-based CBCs, whose security relies on the rank SDP. The GRS (Gaborit-Ruatta-Schrek) algorithm is well known as the current best decoding algorithm for the rank SDP. We propose a quantum version of the GRS algorithm. We then introduce an attack strategy using this quantum algorithm against the rank-based CBCs that remained in the second round of NIST's PQC standardization project, and consider the quantum security of those cryptosystems. Our attack method yields an effective result against RQC, so we give new RQC instances that are secure against this attack.
Last updated:  2023-06-26
A note on ``a multi-instance cancelable fingerprint biometric based secure session key agreement protocol employing elliptic curve cryptography and a double hash function''
Zhengjun Cao, Lihua Liu
We show that the key agreement scheme [Multim. Tools Appl. 80:799-829, 2021] is flawed. (1) The scheme is a hybrid that piles up various tools such as public key encryption, signatures, symmetric key encryption, hash functions, cancelable templates from thumb fingerprints, and elliptic curve cryptography. These tools are used excessively, because key agreement is just a simple cryptographic primitive in contrast to public key encryption. (2) The dependencies among these components are very intricate. In particular, the required secure channel between the two parties is generally unavailable.
Last updated:  2023-06-26
Glimpse: On-Demand PoW Light Client with Constant-Size Storage for DeFi
Giulia Scaffino, Lukas Aumayr, Zeta Avarikioti, Matteo Maffei
Cross-chain communication is instrumental in unleashing the full potential of blockchain technologies, as it allows users and developers to exploit the unique design features and the profit opportunities of different existing blockchains. The majority of interoperability solutions are provided by centralized exchanges and bridge protocols based on a trusted majority, both introducing undesirable trust assumptions compared to native blockchain assets. Hence, increasing attention has been given to decentralized solutions: Light and super-light clients paved the way for chain relays, which allow verifying on a blockchain the state of another blockchain by respectively verifying and storing a linear and logarithmic amount of data. Unfortunately, relays turn out to be inefficient in terms of computational costs, storage, or compatibility. We introduce Glimpse, an on-demand bridge that leverages a novel on-demand light client construction with only constant on-chain storage, cost, and computational overhead. Glimpse is expressive, enabling a plethora of DeFi and off-chain applications such as lending, pegs, proofs of oracle attestations, and betting hubs. Glimpse also remains compatible with blockchains featuring a limited scripting language such as the Liquid Network (a pegged sidechain of Bitcoin), for which we present a concrete instantiation. We prove Glimpse security in the Universal Composability (UC) framework and further conduct an economic analysis. We evaluate the cost of Glimpse for Bitcoin-like chains: verifying a simple transaction has at most 700 bytes of on-chain overhead, resulting in a one-time fee of $3, only twice as much as a standard Bitcoin transaction.
Last updated:  2023-06-26
Third-Party Private Set Intersection
Foo Yee Yeo, Jason H. M. Ying
Private set intersection (PSI) enables two parties, each holding a private set, to compute their intersection without revealing other information in the process. We introduce a variant of conventional PSI termed third-party PSI, whereby the intersection output of the two parties is only known to an inputless third party. In this setting, the two parties who participate in the protocol have no knowledge of the intersection result or of the set contents of the other party. In general, third-party PSI settings arise where there is a need for an external party to obtain the intersection outcome without leakage of additional information to any other party. This setting is motivated by its increasing importance in several real-world applications. We describe protocols which achieve this functionality with minimal communication overhead. To the best of our knowledge, our work is the first of its kind to explore this variant of PSI.
Last updated:  2023-06-26
Fast ORAM with Server-aided Preprocessing and Pragmatic Privacy-Efficiency Trade-off
Vladimir Kolesnikov, Stanislav Peceny, Ni Trieu, Xiao Wang
Data-dependent accesses to memory are necessary for many real-world applications, but their cost remains prohibitive in secure computation. Prior work either focused on minimizing the need for data-dependent access in these applications, or reduced its cost by improving oblivious RAM for secure computation (SC-ORAM). Despite extensive efforts to improve SC-ORAM, the most concretely efficient solutions still require $\approx0.7$s per access to arrays of $2^{30}$ entries. This plainly precludes using MPC in a number of settings. In this work, we take a pragmatic approach, exploring how concretely cheap MPC RAM access could be made if we are willing to allow one of the participants to learn the access pattern. We design a highly efficient Shared-Output Client-Server ORAM (SOCS-ORAM) that has constant overhead, uses one round-trip of interaction per access, and whose access cost is independent of array size. SOCS-ORAM is useful in settings with hard performance constraints, where one party in the computation is more trustworthy and is allowed to learn the RAM access pattern. Our SOCS-ORAM is assisted by a third helper party that helps initialize the protocol and is designed for the honest-majority semi-honest corruption model. We implement our construction in C++ and report its performance. For an array of length $2^{30}$ with $4$B entries, we communicate $13$B per access and incur essentially no overhead beyond network latency.
Last updated:  2023-06-25
Privacy Preserving Records Sharing using Blockchain and Format Preserving Encryption
Sai Sandilya Konduru, Vishal Saraswat
Healthcare providers cannot share their patients' encrypted data among themselves because of interoperability issues. Many blockchain-based solutions have been proposed to allow for sharing medical data in a privacy-preserving manner, but interoperability problems persist. In this paper, we present a protocol called Blockchain-Format Preserving Encryption (B-FPE) to preserve patients' data privacy. Each patient is provided with an FPE key at the time of registration. All medical records are encrypted with the FPE key and stored in the blockchain. All the blockchain transactions are signed using group signatures; we use group signatures to maintain the anonymity of healthcare providers. Each new encrypted data block is appended to the blockchain. We consider two cases: the regular phase, in which the patient is conscious and can share their FPE key with the healthcare provider, and the emergency phase, in which the patient is unconscious and cannot share their key with the healthcare provider. In the latter case, the healthcare provider reconstructs the FPE key and decrypts the ciphertext. We assume this decryption happens in an oblivious manner.
Last updated:  2023-06-25
Detection of Password Reuse and Credential Stuffing: A Server-side Approach
Sai Sandilya Konduru, Sweta Mishra
In password-based authentication, password memorability is a real challenge for users. Hence, password reuse across different web applications is a common trend among users, which makes websites vulnerable to credential stuffing attacks. Password managers help users create random passwords for different websites on the user's machine; however, they come with practical challenges. Password database breach detection is another related and challenging task. Among recent developments for breach detection, the honeyword-based approach is much appreciated by the research community; however, honeyword generation is itself a challenging part of the solution. In this work, we propose i) a Password Reuse Detection (PRD) protocol that detects password reuse using a secure two-party private set intersection; and ii) a Breach Detection (BD) protocol that detects credential stuffing attacks using a two-party private set inclusion protocol based on random oblivious transfer. Both proposals are designed for the authentication servers of the respective applications and require communication between multiple websites, following the work by Wang et al. Through analysis, we show that our PRD protocol is around 2.8 times faster and more space-efficient than existing works for 5000 honeywords. Our near-real-time BD protocol is around 2 times faster than existing works.
Last updated:  2023-06-25
MUXProofs: Succinct Arguments for Machine Computation from Tuple Lookups
Zijing Di, Lucas Xia, Wilson Nguyen, Nirvan Tyagi
Proofs for machine computation allow for proving the correct execution of arbitrary programs that operate over fixed instruction sets (e.g., RISC-V, EVM, Wasm). A standard approach for proving machine computation is to prove a universal set of constraints that encode the full instruction set at each step of program execution. This approach incurs prover cost per execution step on the order of the sum of instruction constraints for instructions in the set despite only a single instruction being executed. Existing approaches that avoid the universal cost per step (and incur only the cost of a single instruction’s constraints per step) either fail to provide zero-knowledge of program execution or rely on recursive proof composition techniques where security derives from heuristic non-black-box random oracle instantiation. We present a new protocol for proving machine execution that resolves the above limitations, allowing for prover efficiency on the order of executed instructions while achieving zero-knowledge and avoiding the use of proof recursion. Our core technical contribution is a new primitive that we call a tuple lookup argument which is used to allow a prover to build up a machine execution “on-the-fly”. Our tuple lookup argument relies on univariate polynomial commitments in which tuples are encoded as evaluations on cosets of a multiplicative subgroup. We instantiate our protocol by combining our tuple lookup with the popular Marlin succinct non-interactive proof system.
Last updated:  2023-06-24
New Representations of the AES Key Schedule
Gaëtan Leurent, Clara Pernot
In this paper we present a new representation of the AES key schedule, with some implications to the security of AES-based schemes. In particular, we show that the AES-128 key schedule can be split into four independent parallel computations operating on 32-bit chunks, up to linear transformation. Surprisingly, this property has not been described in the literature after more than 20 years of analysis of AES. We show two consequences of our new representation, improving previous cryptanalysis results of AES-based schemes. First, we observe that iterating an odd number of key schedule rounds results in a permutation with short cycles. This explains an observation of Khairallah on mixFeed, a second-round candidate in the NIST lightweight competition. Our analysis actually shows that his forgery attack on mixFeed succeeds with probability 0.44 (with data complexity 220GB), breaking the scheme in practice. The same observation also leads to a novel attack on ALE, another AES-based AEAD scheme. Our new representation also gives efficient ways to combine information from the first subkeys and information from the last subkeys, in order to reconstruct the corresponding master key. This results in small improvements to previous attacks: we improve impossible differential attacks against several variants of AES (and Rijndael), and a square attack against AES-192.
Last updated:  2023-06-24
On the Hardness of Scheme-Switching Between SIMD FHE Schemes
Karim Eldefrawy, Nicholas Genise, Nathan Manohar
Fully homomorphic encryption (FHE) schemes are either lightweight and can evaluate boolean circuits or are relatively heavy and can evaluate arithmetic circuits on encrypted vectors, i.e., they perform single instruction multiple data operations (SIMD). SIMD FHE schemes can either perform exact modular arithmetic in the case of the Brakerski-Gentry-Vaikuntanathan (BGV) and Brakerski-Fan-Vercauteren (BFV) schemes or approximate arithmetic in the case of the Cheon-Kim-Kim-Song (CKKS) scheme. While one can homomorphically switch between BGV/BFV and CKKS using the computationally expensive bootstrapping procedure, it is unknown how to switch between these schemes without bootstrapping. Finding more efficient methods than bootstrapping of converting between these schemes was stated as an open problem by Halevi and Shoup, Eurocrypt 2015. In this work, we provide strong evidence that homomorphic switching between BGV/BFV and CKKS is as hard as bootstrapping. In more detail, if one could efficiently switch between these SIMD schemes, then one could bootstrap these SIMD FHE schemes using a single call to a homomorphic scheme-switching algorithm without applying homomorphic linear transformations. Thus, one cannot hope to obtain significant improvements to homomorphic scheme-switching without also significantly improving the state-of-the-art for bootstrapping. We also explore the relative hardness of computing homomorphic comparison in these same SIMD FHE schemes as a secondary contribution. We show that given a comparison algorithm, one can bootstrap these schemes using a few calls to the comparison algorithm for typical parameter settings. While we focus on the comparison function in this work, the overall approach to demonstrate relative hardness of computing specific functions homomorphically extends beyond comparison to other useful functions such as min/max or ReLU.
Last updated:  2023-06-24
Fuzzification-based Feature Selection for Enhanced Website Content Encryption
Mike Wa Nkongolo
We propose a novel approach that utilizes fuzzification theory to perform feature selection on website content for encryption purposes. Our objective is to identify and select the most relevant features from the website by harnessing the principles of fuzzy logic. Fuzzification allows us to transform the crisp website content into fuzzy representations, enabling a more nuanced analysis of its characteristics. By considering the degree of membership of each feature in different fuzzy categories, we can evaluate their importance and relevance for encryption. This approach enables us to prioritize and focus on the features that exhibit higher membership degrees, indicating their significance in the encryption process. By employing fuzzification-based feature selection, we aim to enhance the effectiveness and efficiency of website content encryption, ultimately improving overall internet security.
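As a generic illustration of the fuzzification step described above (the triangular membership functions, the three categories and the example feature scores below are hypothetical choices of ours, not the paper's pipeline):

# Generic fuzzification sketch: map a crisp feature score in [0, 1] to membership
# degrees in three fuzzy categories via triangular membership functions, then
# rank features by their membership in the "high" (most relevant) category.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(score):
    """Membership degrees of a crisp score in the categories low/medium/high."""
    return {
        "low":    triangular(score, -0.5, 0.0, 0.5),
        "medium": triangular(score,  0.0, 0.5, 1.0),
        "high":   triangular(score,  0.5, 1.0, 1.5),
    }

# Hypothetical normalised relevance scores for website content features.
features = {"title": 0.9, "body_text": 0.7, "ads": 0.1, "nav_menu": 0.3}
ranked = sorted(features, key=lambda f: fuzzify(features[f])["high"], reverse=True)
print(ranked)  # features with the highest "high" membership come first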
Last updated:  2023-06-24
Block Cipher Doubling for a Post-Quantum World
Ritam Bhaumik, André Chailloux, Paul Frixons, Bart Mennink, María Naya-Plasencia
In order to maintain a similar security level in a post-quantum setting, many symmetric primitives would have to double their keys and increase their state sizes. So far, no generic way of doing this is known that would provide convincing quantum security guarantees. In this paper we propose a new generic construction, QuEME, that allows doubling the key and state size of a block cipher. The QuEME design is inspired by the ECB-Mix-ECB (EME) construction, but uses a different mixing function that withstands our new quantum superposition attack, which exploits a periodic property found in collisions and breaks EME and a large class of its variants. We prove that QuEME achieves $n$-bit security in the classical setting, where $n$ is the block size of the underlying block cipher, and at least $n/6$-bit security in the quantum setting. We propose a concrete instantiation of this construction, called Double-AES, that is built with variants of AES-128.
Last updated:  2023-06-24
Efficient Private Multiset ID Protocols
Cong Zhang, Weiran Liu, Bolin Ding, Dongdai Lin
The Private-ID (PID) protocol enables two parties, each holding a private set of items, to privately compute a set of random universal identifiers (UID) corresponding to the records in the union of their sets, where each party additionally learns which UIDs correspond to which items in its set, but not whether they belong to the intersection. PID is very useful in privacy-preserving database queries, e.g. inner join and join-for-compute. Known PID protocols all assume that the input of both parties is a set. In the case of a join, a more common scenario is that one party's primary key (unique) needs to be joined with the other party's foreign key (which may contain duplicates). How to construct an efficient Private Multiset ID (PMID) protocol to support the above \emph{key-foreign key join} remains open. We resolve this problem by constructing efficient PMID protocols from an Oblivious PRF, Private Set Union, and a newly introduced primitive called Deterministic-Value Oblivious Programmable PRF (dv-OPPRF). We also propose several PMID applications, including Private Inner Join, Private Full Join, and Private Join for Compute. We implement our PMID protocols and state-of-the-art PID protocols as performance baselines. The experiments show that the performance of our PMID protocols is almost the same as that of the state-of-the-art PIDs when we set the multiplicities $U_x = U_y = 1$. Our PMID protocols scale well when either $U_x > 1$ or $U_y > 1$. The performance also correctly reflects the excessive data expansion when both $U_x, U_y > 1$, in the more general \emph{cross join} case.
Last updated:  2023-06-23
Optimal Good-case Latency for Rotating Leader Synchronous BFT
Ittai Abraham, Kartik Nayak, Nibesh Shrestha
This paper explores the good-case latency of synchronous Byzantine Fault Tolerant (BFT) consensus protocols in the rotating leader setting. We first present a lower bound that relates the latency of a broadcast when the sender is honest and the latency of switching to the next sender. We then present a matching upper bound with a latency of $2\Delta$ ($\Delta$ is the pessimistic synchronous delay) with an optimistically responsive change to the next sender. The results imply that both our lower and upper bounds are tight. We implement and evaluate our protocol and show that our protocol obtains similar latency compared to state-of-the-art stable-leader protocol Sync~HotStuff while allowing optimistically responsive leader rotation.
Last updated:  2023-06-23
Fully Adaptive Schnorr Threshold Signatures
Elizabeth Crites, Chelsea Komlo, Mary Maller
We prove adaptive security of a simple three-round threshold Schnorr signature scheme, which we call Sparkle. The standard notion of security for threshold signatures considers a static adversary – one who must declare which parties are corrupt at the beginning of the protocol. The stronger adaptive adversary can at any time corrupt parties and learn their state. This notion is natural and practical, yet not proven to be met by most schemes in the literature. In this paper, we demonstrate that Sparkle achieves several levels of security based on different corruption models and assumptions. To begin with, Sparkle is statically secure under minimal assumptions: the discrete logarithm assumption (DL) and the random oracle model (ROM). If an adaptive adversary corrupts fewer than t/2 out of a threshold of t + 1 signers, then Sparkle is adaptively secure under a weaker variant of the one-more discrete logarithm assumption (AOMDL) in the ROM. Finally, we prove that Sparkle achieves full adaptive security, with a corruption threshold of t, under AOMDL in the algebraic group model (AGM) with random oracles. Importantly, we show adaptive security without requiring secure erasures. Ours is the first proof achieving full adaptive security without exponential tightness loss for any threshold Schnorr signature scheme; moreover, the reduction is tight.
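For background only, a plain single-signer Schnorr signature over a toy group is sketched below; Sparkle's three-round threshold protocol, its corruption models and its adaptive-security proofs are not captured by this sketch, and the group parameters are illustrative.

# Plain (non-threshold) Schnorr signature over a toy prime-order subgroup.
# Background illustration only; Sparkle's threshold protocol is NOT shown.
import hashlib
import secrets

p = 2039   # safe prime, p = 2q + 1 (toy size, illustration only)
q = 1019   # order of the subgroup of quadratic residues
g = 4      # generator of that subgroup

def H(*parts):
    """Fiat-Shamir challenge hash, reduced into Z_q."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def sign(sk, msg):
    r = secrets.randbelow(q - 1) + 1
    R = pow(g, r, p)                # commitment
    c = H(R, msg)                   # challenge
    z = (r + c * sk) % q            # response
    return R, z

def verify(pk, msg, sig):
    R, z = sig
    c = H(R, msg)
    return pow(g, z, p) == (R * pow(pk, c, p)) % p

sk, pk = keygen()
assert verify(pk, "hello", sign(sk, "hello"))  # g^z = R * pk^c holds for honest signatures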
Last updated:  2023-06-23
On the 32-Character Zodiac Cipher
Floe Foxon
A possible new approach to the Zodiac Killer's 32-Character Cipher (Z32) is proposed based on the strengths and weaknesses of previous approaches and novel interpretations. This approach does not assume the use of anagrams or similar complex transposition methods; does not assume the identity of a particular Zodiac suspect; and assumes the use of homophonic substitution (as in Z408 and Z340), and simple transposition (as in Z340). Assumptions are clearly defined and tested with sensitivity tests. With Mount Diablo as the pole of a plane polar coordinate system, the instruction "set to Mag. N." is interpreted by setting the hour and minute hands of a watchface to the magnetic declination of the Bay Area circa 1970. Sensitivity tests reveal the exact year and location have little impact on the declination in this case. The hour and minute given by the hands are interpreted as the radial coordinate r and the angular coordinate theta, as in "Radians & # inches along the radians". The hand corresponding to each coordinate is tested, as are 12- and 24-hour interpretations. Impossible or improbable coordinates are excluded leaving one coordinate as a possible solution. This coordinate is explored as the possible plaintext for Z32 using the Z340 transposition method. Further exploration of the proposed method is necessary.