Papers updated in last 31 days (458 results)

Last updated:  2024-06-20
BOLT: Privacy-Preserving, Accurate and Efficient Inference for Transformers
Qi Pang, Jinhao Zhu, Helen Möllering, Wenting Zheng, and Thomas Schneider
The advent of transformers has brought about significant advancements in traditional machine learning tasks. However, their pervasive deployment has raised concerns about the potential leakage of sensitive information during inference. Existing approaches using secure multiparty computation (MPC) face limitations when applied to transformers due to the extensive model size and resource-intensive matrix-matrix multiplications. In this paper, we present BOLT, a privacy-preserving inference framework for transformer models that supports efficient matrix multiplications and nonlinear computations. Combined with our novel machine learning optimizations, BOLT reduces the communication cost by 10.91x. Our evaluation on diverse datasets demonstrates that BOLT maintains comparable accuracy to floating-point models and achieves 4.8-9.5x faster inference across various network settings compared to the state-of-the-art system.
Last updated:  2024-06-20
Examining the Practical Side Channel Resilience of ARX-boxes
Yan Yan and Elisabeth Oswald
Implementations of ARX ciphers are hoped to have some intrinsic side channel resilience owing to the specific choice of cipher components: modular addition (A), rotation (R) and exclusive-or (X). Previous work has contributed to this understanding by developing theory regarding the side channel resilience of components (pioneered by the early works of Prouff) as well as some more recent practical investigations by Biryukov et al. that focused on lightweight cipher constructions. We add to this work by specifically studying ARX-boxes both mathematically and practically. Our results show that previous works' reliance on the simplistic assumption that intermediates independently leak their Hamming weight has led to the incorrect conclusion that the modular addition is necessarily the best target and that ARX constructions are therefore harder to attack in practice: we show that on an ARM M0, the best practical target is the exclusive-or, and attacks succeed with only tens of traces. In addition, we also provide results suggesting that the modular addition may also be a vulnerable target when partition-based distinguishers are applied in side channel attacks.
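To make the Hamming-weight leakage model that the abstract questions concrete, here is a toy Python snippet (arbitrary 32-bit operand values, purely illustrative and unrelated to the paper's experiments) that computes the modeled leakage of each ARX component; note that rotation merely permutes bits, so its output Hamming weight equals its input's and is key-independent under this model.

```python
# Toy illustration of the Hamming-weight (HW) leakage model discussed above.
# Operand values are arbitrary; this is not the paper's experimental setup.
def hw(v: int) -> int:
    """Hamming weight of a 32-bit value."""
    return bin(v & 0xFFFFFFFF).count("1")

def rotl32(v: int, r: int) -> int:
    """32-bit left rotation."""
    return ((v << r) | (v >> (32 - r))) & 0xFFFFFFFF

k, x = 0x1234ABCD, 0xCAFEBABE            # hypothetical key word and input word
print(hw((x + k) & 0xFFFFFFFF))          # A: modular-addition intermediate
print(hw(x ^ k))                         # X: exclusive-or intermediate
assert hw(rotl32(x, 7)) == hw(x)         # R: rotation preserves HW exactly
```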
Last updated:  2024-06-20
Plan your defense: A comparative analysis of leakage detection methods on RISC-V cores
Konstantina Miteloudi, Asmita Adhikary, Niels van Drueten, Lejla Batina, and Ileana Buhan
Hardening microprocessors against side-channel attacks is a critical aspect of ensuring their security. A key step in this process is identifying and mitigating “leaky” hardware modules, which leak information during the execution of cryptographic algorithms. In this paper, we explore how two different leakage detection methods, the Side-channel Vulnerability Factor (SVF) and the Test Vector Leakage Assessment (TVLA), contribute to the hardening of microprocessors. We conduct experiments on two RISC-V cores, SHAKTI and Ibex, using two cryptographic algorithms, SHA-3 and AES. Our findings suggest that SVF and TVLA can provide valuable insights into identifying leaky modules. However, the effectiveness of these methods can vary depending on the specific core and cryptographic algorithm in use. We conclude that the choice of leakage detection method should be based not only on computational cost but also on the specific requirements of the system, the implementation of the algorithm examined, and the nature of the potential threats.
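For readers unfamiliar with TVLA, the sketch below shows the standard fixed-vs-random Welch's t-test at its core (a generic textbook formulation, not the authors' exact setup); by convention a sample point is flagged as leaky when |t| exceeds 4.5.

```python
# Generic Welch's t-test underlying TVLA (illustrative, not the paper's code).
import numpy as np

def tvla_t_statistic(fixed: np.ndarray, rnd: np.ndarray) -> np.ndarray:
    """fixed, rnd: (n_traces, n_samples) arrays of power measurements."""
    m_f, m_r = fixed.mean(axis=0), rnd.mean(axis=0)
    v_f, v_r = fixed.var(axis=0, ddof=1), rnd.var(axis=0, ddof=1)
    return (m_f - m_r) / np.sqrt(v_f / len(fixed) + v_r / len(rnd))

def leaky_points(fixed, rnd, threshold: float = 4.5) -> np.ndarray:
    """Indices of sample points whose |t| exceeds the usual TVLA threshold."""
    return np.flatnonzero(np.abs(tvla_t_statistic(fixed, rnd)) > threshold)
```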
Last updated:  2024-06-20
Optimizing and Implementing Fischlin's Transform for UC-Secure Zero-Knowledge
Yi-Hsiu Chen and Yehuda Lindell
Fischlin's transform (CRYPTO 2005) is an alternative to the Fiat-Shamir transform that enables straight-line extraction when proving knowledge. In this work we focus on the problem of using the Fischlin transform to construct UC-secure zero-knowledge from Sigma protocols, since UC security -- which guarantees security under general concurrent composition -- requires straight-line (non-rewinding) simulators. We provide a slightly simplified transform that is much easier to understand, and present algorithmic and implementation optimizations that significantly improve the running time. It appears that the main obstacles to the use of Fischlin in practice are its computational cost and implementation complexity (with multiple parameters that need to be chosen). We provide clear guidelines and a simple methodology for choosing parameters, and show that with our optimizations the running time is far lower than expected. For just one example, on a 2023 MacBook, the cost of proving knowledge of a discrete log with Fischlin is only 0.41ms (on a single core). This is 15 times slower than plain Fiat-Shamir on the same machine, a significant multiple but an objectively insignificant cost in many applications. We also extend the transform so that it can be applied to batch proofs, and show how this can be much more efficient than individually proving each statement. We hope that this paper will both encourage and help practitioners implement the Fischlin transform where relevant.
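As rough intuition for how the transform achieves straight-line extraction, here is a toy Python sketch of a Fischlin-style proof of knowledge of a discrete log built on Schnorr: each parallel repetition grinds over challenges until the transcript hash is zero, so an extractor can recover the witness from two transcripts without rewinding. All parameters (p, g, r, b, t) are illustrative toys; this does not reflect the paper's optimized variant or its parameter-selection methodology.

```python
# Toy Fischlin-style transform over Schnorr (illustrative parameters only).
import hashlib, secrets

p = 2**127 - 1          # toy prime modulus (not a secure group choice)
q = p - 1               # exponent arithmetic modulo the group exponent
g = 3
r, b, t = 8, 8, 12      # repetitions, zero hash bits, challenge bits

def H(*args) -> int:
    data = b"|".join(str(a).encode() for a in args)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (1 << b)

def prove(w: int):
    h = pow(g, w, p)                               # public statement h = g^w
    ks = [secrets.randbelow(q) for _ in range(r)]
    A = tuple(pow(g, k, p) for k in ks)            # first messages, all bound
    proof = []
    for i in range(r):
        for e in range(1 << t):                    # grind over challenges
            z = (ks[i] + e * w) % q
            if H(h, A, i, e, z) == 0:              # accept a zero hash
                proof.append((A[i], e, z))
                break
        else:
            raise RuntimeError("no challenge found; retry with fresh ks")
    return h, proof

def verify(h, proof) -> bool:
    A = tuple(a for a, _, _ in proof)
    return all(
        pow(g, z, p) == (a * pow(h, e, p)) % p and H(h, A, i, e, z) == 0
        for i, (a, e, z) in enumerate(proof)
    )

w = secrets.randbelow(q)
h, proof = prove(w)
assert verify(h, proof)
```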
Last updated:  2024-06-20
Encryption Based Covert Channel for Large Language Models
Yongge Wang
Transformer neural networks have gained significant traction since their introduction, becoming pivotal across diverse domains. Particularly in large language models like Claude and ChatGPT, the transformer architecture has demonstrated remarkable efficacy. This paper provides a concise overview of transformer neural networks and delves into their security considerations, focusing on covert channel attacks and their implications for the safety of large language models. We present a covert channel utilizing encryption and demonstrate its efficacy in circumventing Claude.ai's security measures. Our experiment reveals that Claude.ai appears to log our queries and blocks our attack within two days of our initial successful breach. This raises two concerns within the community: (1) The extensive logging of user inputs by large language models could pose privacy risks for users. (2) It may deter academic research on the security of such models due to the lack of experiment repeatability.
Last updated:  2024-06-20
Pairing-Free Blind Signatures from CDH Assumptions
Rutchathon Chairattana-Apirom, Stefano Tessaro, and Chenzhi Zhu
We present the first concurrently-secure blind signatures making black-box use of a pairing-free group for which unforgeability, in the random oracle model, can be proved {\em without} relying on the algebraic group model (AGM), thus resolving a long-standing open question. Prior pairing-free blind signatures without AGM proofs have only been proved secure for bounded concurrency, relied on computationally expensive non-black-box use of NIZKs, or had complexity growing with the number of signing sessions due to the use of boosting techniques. Our most efficient constructions rely on the chosen-target CDH assumption and can be seen as blind versions of signatures by Goh and Jarecki (EUROCRYPT '03) and Chevallier-Mames (CRYPTO '05). We also give a less efficient scheme with security based on (plain) CDH. The underlying signing protocols consist of four (in order to achieve regular unforgeability) or five moves (for strong unforgeability). All schemes are proved statistically blind in the random oracle model.
Last updated:  2024-06-20
Fast SNARK-based Non-Interactive Distributed Verifiable Random Function with Ethereum Compatibility
Jia Liu and Mark Manulis
Distributed randomness beacons (DRBs) are fundamental for various decentralised applications, such as consensus protocols, decentralised gaming and lotteries, and collective governance protocols. These applications are heavily used on modern blockchain platforms. This paper presents the most efficient direct construction and implementation to date of a non-interactive distributed verifiable random function (NI-DVRF) that is fully compatible with Ethereum. Our NI-DVRF scheme adopts pairings and combines techniques from secret sharing, SNARKs, and BLS signatures. The security properties of the resulting NI-DVRF scheme are formally modelled and proven in the random oracle model under standard pairing-based assumptions. To justify the efficiency and cost claims and, more generally, its adoption potential in practice, the proposed NI-DVRF scheme was implemented in Rust and Solidity. Our implementation is highly optimised and is currently being investigated for deployment on the multichain layer-2 scaling solution provided by Boba Network to power its DRB service zkRand. Our experimental analysis, therefore, also evaluates the performance and scalability properties of the proposed NI-DVRF and its implementation.
Last updated:  2024-06-20
PIR with compressed queries and amortized query processing
Sebastian Angel, Hao Chen, Kim Laine, and Srinath Setty
Private information retrieval (PIR) is a key building block in many privacy-preserving systems. Unfortunately, existing constructions remain very expensive. This paper introduces two techniques that make the computational variant of PIR (CPIR) more efficient in practice. The first technique targets a recent class of CPU-efficient CPIR protocols where the query sent by the client contains a number of ciphertexts proportional to the size of the database. We show how to compress this query, achieving size reductions of up to 274×. The second technique is a new data encoding called probabilistic batch codes (PBCs). We use PBCs to build a multi-query PIR scheme that allows the server to amortize its computational cost when processing a batch of requests from the same client. This technique achieves up to 40× speedup over processing queries one at a time, and is significantly more efficient than related encodings. We apply our techniques to the Pung private communication system, which relies on a custom multi-query CPIR protocol for its privacy guarantees. By porting our techniques to Pung, we find that we can simultaneously reduce network costs by 36× and increase throughput by 3×.
Last updated:  2024-06-20
Side-Channel and Fault Resistant ASCON Implementation: A Detailed Hardware Evaluation (Extended Version)
Aneesh Kandi, Anubhab Baksi, Peizhou Gan, Sylvain Guilley, Tomáš Gerlich, Jakub Breier, Anupam Chattopadhyay, Ritu Ranjan Shrivastwa, Zdeněk Martinásek, and Shivam Bhasin
In this work, we present various hardware implementations for the lightweight cipher ASCON, which was recently selected as the winner of the NIST-organized Lightweight Cryptography (LWC) competition. We cover encryption + tag generation and decryption + tag verification for ASCON AEAD, as well as the ASCON hash function. On top of the usual (unprotected) implementation, we present side-channel protection (threshold countermeasure) and triplication/majority-based fault protection. To the best of our knowledge, this is the first protected hardware implementation of ASCON with respect to side-channel and fault injection protection. The side-channel and fault protections work orthogonally to each other (i.e., either one can be turned on/off without affecting the other). We present ASIC and FPGA benchmarks for all our implementations (hash and AEAD) with/without countermeasures for varying input sizes.
Last updated:  2024-06-20
A Refined Hardness Estimation of LWE in Two-step Mode
Wenwen Xia, Leizhang Wang, Geng Wang, Dawu Gu, and Baocang Wang
Recently, researchers have proposed many LWE estimators, such as lattice-estimator (Albrecht et al., Asiacrypt 2017) and leaky-LWE-Estimator (Dachman-Soled et al., Crypto 2020); the latter has already been used to estimate the security level of Kyber and Dilithium using only BKZ. However, we prove in this paper that solving LWE by combining a lattice reduction step (by LLL or BKZ) and a target vector searching step (by enumeration or sieving), which we call a Two-step mode, is more efficient than using only BKZ. Moreover, we give a refined LWE estimator in Two-step mode by analyzing the relationship between the probability distribution of the target vector and the solving success rate in a Two-step mode LWE solving algorithm. In contrast, the latest Two-step estimator for LWE, the “primal-bdd” mode in lattice-estimator, does not take into account some up-to-date results and lacks a thorough theoretical analysis. Under the same gate-count model, our estimation for the NIST PQC standards drops by 2.1∼3.4 bits (2.2∼4.6 bits when considering a more flexible blocksize and jump strategy) compared with leaky-LWE-Estimator. Furthermore, we also give a conservative estimation for LWE from the Two-step solving algorithm. Compared with the Core-SVP model, which is used in previous conservative estimations, our estimation relies on weaker assumptions and outputs higher evaluation results than the Core-SVP model. For the NIST PQC standards, our conservative estimation is 4.17∼8.11 bits higher than the Core-SVP estimation. Hence, our estimator gives a closer estimation for both the upper bound and the lower bound of LWE hardness.
Last updated:  2024-06-20
CoGNN: Towards Secure and Efficient Collaborative Graph Learning
Zhenhua Zou, Zhuotao Liu, Jinyong Shan, Qi Li, Ke Xu, and Mingwei Xu
Collaborative graph learning represents a learning paradigm where multiple parties jointly train a graph neural network (GNN) using their own proprietary graph data. To honor the data privacy of all parties, existing solutions for collaborative graph learning are either based on federated learning (FL) or secure machine learning (SML). Although promising in terms of efficiency and scalability due to their distributed training scheme, FL-based approaches fall short in providing provable security guarantees and achieving good model performance. Conversely, SML-based solutions, while offering provable privacy guarantees, are hindered by their high computational and communication overhead, as well as poor scalability as more parties participate. To address the above problem, we propose CoGNN, a novel framework that simultaneously reaps the benefits of both FL-based and SML-based approaches. At a high level, CoGNN is enabled by (i) a novel message passing mechanism that can obliviously and efficiently express the vertex data propagation/aggregation required in GNN training and inference and (ii) a two-stage Dispatch-Collect execution scheme to securely decompose and distribute the GNN computation workload for concurrent and scalable executions. We further instantiate the CoGNN framework, together with customized optimizations, to train Graph Convolutional Network (GCN) models. Extensive evaluations on three graph datasets demonstrate that, compared with the state-of-the-art (SOTA) SML-based approach, CoGNN reduces the running time by up to $123\times$ and the per-party communication cost by up to $522\times$. Meanwhile, the GCN models trained using CoGNN have accuracies nearly identical to plaintext global-graph training, yielding up to $11.06\%$ accuracy improvement over the GCN models trained via federated learning.
Last updated:  2024-06-19
Proxying is Enough: Security of Proxying in TLS Oracles and AEAD Context Unforgeability
Zhongtang Luo, Yanxue Jia, Yaobin Shen, and Aniket Kate
TLS oracles allow a TLS client to offer selective data provenance to an external (oracle) node such that the oracle node is ensured that the data is indeed coming from a pre-defined TLS server. Typically, the client/user supplies their credentials to the server and reveals selective data using zero-knowledge proofs to demonstrate certain server-offered information to oracles while ensuring the secrecy of the rest of the TLS transcript. Conceptually, this is a standard three-party secure computation between the TLS server, TLS client (prover), and the oracle (verifier) node; however, the key practical requirement for TLS oracles is that the data provenance process remain transparent to the TLS server. Recent TLS oracle protocols such as DECO enforce the communication pattern of server-client-verifier and utilize a novel three-party handshake process during TLS to ensure data integrity against potential tampering by the client. However, this approach introduces a significant performance penalty on the client/prover and the verifier. This raises the question of whether it is possible to reduce the overhead by putting the verifier (as a proxy) between the server and the client such that the correct TLS transcript is available to the verifier. This work offers both positive and negative answers to this oracle proxy question: We first formalize the oracle proxy notion that allows the verifier to directly proxy client-server TLS communication, without entering a three-party handshake or interfering with the connection in any way. We then show that for common TLS-based higher-level protocols such as HTTPS, data integrity to the verifier proxy is ensured by the variable padding built into the HTTP protocol semantics. On the other hand, if a TLS-based protocol comes without variable padding, we demonstrate that data integrity cannot be guaranteed. In this context, we then study the case where the TLS response is pre-determined and cannot be tampered with during the connection. We propose the concept of context unforgeability and show that it allows overcoming this impossibility. We further show that ChaCha20-Poly1305 satisfies the concept in the standard model while AES-GCM does not.
Last updated:  2024-06-19
Sustained Space and Cumulative Complexity Trade-offs for Data-Dependent Memory-Hard Functions
Jeremiah Blocki and Blake Holman
Memory-hard functions (MHFs) are a useful cryptographic primitive which can be used to design egalitarian proof of work puzzles and to protect low-entropy secrets like passwords against brute-force attackers. Intuitively, a memory-hard function is a function whose evaluation costs are dominated by memory costs even if the attacker uses specialized hardware (FPGAs/ASICs), and several cost metrics have been proposed to quantify this intuition. For example, space-time cost looks at the product of running time and the maximum space usage over the entire execution of an algorithm. Alwen and Serbinenko (STOC 2015) observed that the space-time cost of evaluating a function multiple times may not scale linearly in the number of instances being evaluated and introduced the stricter requirement that a memory-hard function has high cumulative memory complexity (CMC) to ensure that an attacker's amortized space-time costs remain large even if the attacker evaluates the function on multiple different inputs in parallel. Alwen et al. (EUROCRYPT 2018) observed that the notion of CMC still gives the attacker undesirable flexibility in selecting space-time tradeoffs, e.g., while the MHF scrypt has maximal CMC $\Omega(N^2)$, an attacker could evaluate the function with constant $O(1)$ memory in time $O(N^2)$. Alwen et al. introduced an even stricter notion of sustained space complexity and designed an MHF with sustained space complexity $s=\Omega(N/\log N)$ for $t=\Omega(N)$ steps, i.e., any algorithm evaluating the function in the parallel random oracle model must have at least $t=\Omega(N)$ steps where the memory usage is at least $\Omega(N/\log N)$. In this work, we use dynamic pebbling games and dynamic graphs to explore tradeoffs between sustained space complexity and cumulative memory complexity for data-dependent memory-hard functions such as Argon2id and scrypt. We design our own dynamic graph (dMHF) with the property that {\em any} dynamic pebbling strategy either (1) has $\Omega(N)$ rounds with $\Omega(N)$ space, or (2) has CMC $\Omega(N^{3-\epsilon})$ --- substantially larger than $N^2$. For Argon2id we show that {\em any} dynamic pebbling strategy either (1) has $\Omega(N)$ rounds with $\Omega(N^{1-\epsilon})$ space, or (2) has CMC $\omega(N^2)$. We also present a dynamic version of DRSample (Alwen et al. 2017) for which {\em any} dynamic pebbling strategy either (1) has $\Omega(N)$ rounds with $\Omega(N/\log N)$ space, or (2) has CMC $\Omega(N^3/\log N)$.
Last updated:  2024-06-19
Zeromorph: Zero-Knowledge Multilinear-Evaluation Proofs from Homomorphic Univariate Commitments
Tohru Kohrita and Patrick Towa
A multilinear polynomial is a multivariate polynomial of degree at most one in each variable. This paper introduces a new scheme to commit to multilinear polynomials and to later prove evaluations thereof. The scheme exponentially improves on the prover costs added to make evaluation proofs zero-knowledge. The construction of the scheme is generic and relies only on the additive homomorphic property of any scheme to commit to univariate polynomials, and on a protocol to prove that committed polynomials satisfy public degree bounds. As the construction requires checking that several committed univariate polynomials do not exceed given, separate bounds, the paper also gives a method to batch executions of any degree-check protocol on homomorphic commitments. For an n-linear polynomial, instantiating the scheme with a hiding version of KZG commitments (Kate, Zaverucha and Goldberg at Asiacrypt 2010) leads to a scheme with an evaluation prover that performs only n + 5 extra first-group operations (i.e., compared to the variant of the same scheme that is not zero-knowledge) to achieve the zero-knowledge property. In contrast, previous constructions require an extra multi-scalar multiplication of size $2^n$. The instantiation does so without any concessions on the other performance measures compared to the state of the art.
Last updated:  2024-06-19
Limits on the Power of Prime-Order Groups: Separating Q-Type from Static Assumptions
George Lu and Mark Zhandry
Subgroup decision techniques on cryptographic groups and pairings have been critical for numerous applications. Originally conceived in the composite-order setting, there is a large body of work showing how to instantiate subgroup decision techniques in the prime-order setting as well. In this work, we demonstrate the first barrier to this research program, by exhibiting an important setting where composite-order techniques cannot be replicated in the prime-order setting. In particular, we focus on the case of $q$-type assumptions, which are ubiquitous in group- and pairing-based cryptography, but unfortunately are less desirable than the more well-understood static assumptions. Subgroup decision techniques have had great success in removing $q$-type assumptions, even allowing $q$-type assumptions to be generically based on static assumptions on composite-order groups. Our main result shows that the same likely does not hold in the prime-order setting. Namely, we show that a large class of $q$-type assumptions, including the security definition of a number of cryptosystems, cannot be proven secure in a black-box way from any static assumption.
Last updated:  2024-06-19
The Complexity of the Crossbred Algorithm
Damien Vidal, Sorina Ionica, and Claire Delaplace
The Crossbred algorithm is currently the state-of-the-art method for solving overdetermined multivariate polynomial systems over $\mathbb{F}_2$. Since its publication in 2015, several record-breaking implementations have been proposed, demonstrating the power of this hybrid approach. Despite these practical results, the complexity of this algorithm and the choice of optimal parameters for it are difficult open questions. In this paper, we prove a bivariate generating series for potentially admissible parameters of the Crossbred algorithm.
Last updated:  2024-06-19
Securing Lightning Channels against Rational Miners
Lukas Aumayr, Zeta Avarikioti, Matteo Maffei, and Subhra Mazumdar
Payment channel networks (e.g., the Lightning Network in Bitcoin) constitute one of the most popular scalability solutions for blockchains. Their safety relies on parties being online to detect fraud attempts on-chain and being able to react in a timely manner by publishing certain transactions on-chain. However, a cheating party may bribe miners in order to censor those transactions, resulting in loss of funds for the cheated party: these attacks are known in the literature as timelock bribing attacks. In this work, we present the first channel construction that does not require parties to be online and, at the same time, is resistant to timelock bribing attacks. We start by proving for the first time that Lightning channels are secure against timelock bribing attacks in the presence of rational channel parties under the assumption that these parties constantly monitor the mempool and never deplete the channel in one direction. The latter underscores the importance of keeping a coin reserve in each channel as implemented in the Lightning Network, albeit for different reasons. We show, however, that the security of the Lightning Network against Byzantine channel parties does not carry over to a setting in which miners are rational and accept timelock bribes. Next, we introduce CRAB, the first Lightning-compatible channel construction that provides security against Byzantine channel parties and rational miners. CRAB leverages miners' incentives to safeguard the channel, thereby also forgoing the unrealistic assumption of channel parties constantly monitoring the mempool. Finally, we show how our construction can be refined to eliminate the major assumption behind payment channels, i.e., the need for online participation. To that end, we present Sleepy CRAB, the first provably secure channel construction under rational miners that enables participants to go offline indefinitely. We also provide a proof-of-concept implementation of Sleepy CRAB and evaluate its cost in Bitcoin, thereby demonstrating its practicality.
Last updated:  2024-06-19
A new stand-alone MAC construct called SMAC
Dachao Wang, Alexander Maximov, Patrik Ekdahl, and Thomas Johansson
In this paper, we present a new efficient stand-alone MAC construct based on processing using the FSM part of the stream cipher family SNOW, which in turn uses the AES round function. It offers a combination of very high speed in software and hardware with a truncatable tag. Three concrete versions of SMAC are proposed with different security levels, although other use cases are also possible. For example, SMAC can be combined with an external ciphering engine in AEAD mode. Every design choice is justified and supported by the results of our analysis and simulations. We also provide an aggregated mode version SMAC-$1\times n$ whose performance in software reaches up to 925 Gbps (around 0.038 cpb) for long messages in a single thread. A novelty of the proposal is that it meets future performance requirements but is still not directly vulnerable to attacks using a repeated nonce when the tag size is short, as is the case for other very fast MACs (MACs based on polynomial hashing). This can be an important aspect in practical applications.
Last updated:  2024-06-19
Leveled Homomorphic Encryption Schemes for Homomorphic Encryption Standard
Shuhong Gao and Kyle Yates
Homomorphic encryption allows for computations on encrypted data without exposing the underlying plaintext, enabling secure and private data processing in various applications such as cloud computing and machine learning. This paper presents a comprehensive mathematical foundation for three prominent homomorphic encryption schemes: Brakerski-Gentry-Vaikuntanathan (BGV), Brakerski-Fan-Vercauteren (BFV), and Cheon-Kim-Kim-Song (CKKS), all based on the Ring Learning with Errors (RLWE) problem. We align our discussion with the functionalities proposed in the recent homomorphic encryption standard, providing detailed algorithms and correctness proofs for each scheme. Additionally, we propose improvements to the current schemes focusing on noise management and optimization of public key encryption and leveled homomorphic computation. Our modifications ensure that the noise bound remains within a fixed function for all levels of computation, guaranteeing correct decryption and maintaining efficiency comparable to existing methods. The proposed enhancements reduce ciphertext expansion and storage requirements, making these schemes more practical for real-world applications.
Last updated:  2024-06-19
A Survey of Polynomial Multiplications for Lattice-Based Cryptosystems
Vincent Hwang
We survey various mathematical tools used in software implementations multiplying polynomials in \[ \frac{\mathbb{Z}_q[x]}{\left\langle {x^n - \alpha x - \beta} \right\rangle}. \] In particular, we survey implementation works targeting polynomial multiplications in the lattice-based cryptosystems Dilithium, Kyber, NTRU, NTRU Prime, and Saber with instruction set architectures/extensions Armv7-M, Armv7E-M, Armv8-A, and AVX2. There are three emphases in this paper: (i) modular arithmetic, (ii) homomorphisms, and (iii) vectorization. For modular arithmetic, we survey Montgomery, Barrett, and Plantard multiplications. For homomorphisms, we survey (a) various homomorphisms such as Cooley--Tukey FFT, Good--Thomas FFT, Bruun's FFT, Rader's FFT, Karatsuba, and Toom--Cook; (b) various algebraic techniques for adjoining nice properties to the coefficient rings, including localization, Schönhage's FFT, Nussbaumer's FFT, and coefficient ring switching; and (c) various algebraic techniques related to the polynomial moduli, including twisting, composed multiplication, evaluation at $\infty$, truncation, incomplete transformation, striding, and Toeplitz matrix-vector product. For vectorization, we survey the relations between homomorphisms and vector arithmetic. We then go through several case studies: We compare the implementations of modular multiplications used in Dilithium and Kyber, explain how the matrix-to-vector structure was exploited in Saber, and review the design choices of transformations for NTRU and NTRU Prime with vectorization. Finally, we outline several interesting implementation projects.
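As a concrete instance of one of the surveyed tools, the sketch below implements textbook Montgomery reduction (REDC) in Python; the modulus here is Dilithium's q = 8380417 purely as a familiar example, and the word size k = 32 is an illustrative choice rather than a recommendation from the survey.

```python
# Textbook Montgomery reduction (REDC): maps T < q*R to T * R^{-1} mod q
# using only shifts and masks, where R = 2^k and q is odd.
def montgomery_setup(q: int, k: int):
    R = 1 << k
    q_inv_neg = (-pow(q, -1, R)) % R        # -q^{-1} mod R
    return R, q_inv_neg

def redc(T: int, q: int, k: int, q_inv_neg: int) -> int:
    """Return T * R^{-1} mod q for 0 <= T < q * 2^k."""
    R_mask = (1 << k) - 1
    m = ((T & R_mask) * q_inv_neg) & R_mask  # make T + m*q divisible by R
    t = (T + m * q) >> k
    return t - q if t >= q else t

# Usage: multiply a, b mod q by keeping operands in Montgomery form a*R mod q.
q, k = 8380417, 32                           # Dilithium's modulus as an example
R, q_inv_neg = montgomery_setup(q, k)
a, b = 123456, 654321
aR, bR = (a * R) % q, (b * R) % q            # convert to Montgomery form
abR = redc(aR * bR, q, k, q_inv_neg)         # = a*b*R mod q
assert redc(abR, q, k, q_inv_neg) == (a * b) % q
```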
Last updated:  2024-06-19
Perfectly-secure Network-agnostic MPC with Optimal Resiliency
Shravani Patil and Arpita Patra
We study network-agnostic secure multiparty computation with perfect security. Traditionally, MPC is studied assuming the underlying network is either synchronous or asynchronous. In a network-agnostic setting, the parties are unaware of whether the underlying network is synchronous or asynchronous. The feasibility of perfectly-secure MPC in synchronous and asynchronous networks was settled long ago. The landmark work of [Ben-Or, Goldwasser, and Wigderson, STOC'88] shows that $n > 3t_s$ is necessary and sufficient for any MPC protocol with $n$ parties over a synchronous network tolerating $t_s$ active corruptions. In yet another foundational work, [Ben-Or, Canetti, and Goldreich, STOC'93] show that the bound for asynchronous networks is $n > 4t_a$, where $t_a$ denotes the number of active corruptions. However, the same question has remained unresolved for the network-agnostic setting to date. In this work, we resolve this long-standing question. We show that perfectly-secure network-agnostic $n$-party MPC tolerating $t_s$ active corruptions when the network is synchronous and $t_a$ active corruptions when the network is asynchronous is possible if and only if $n > 2 \max(t_s,t_a) + \max(2t_a,t_s)$. When $t_a \geq t_s$, our bound reduces to $n > 4t_a$, whose tightness follows from the known feasibility results for asynchronous MPC. When $t_s > t_a$, our result gives rise to a new bound of $n > 2t_s + \max(2t_a,t_s)$. Notably, the previous network-agnostic MPC in this setting [Appan, Chandramouli, and Choudhury, PODC'22] only shows sufficiency for a loose bound of $n > 3t_s + t_a$. When $t_s > 2t_a$, our result shows tightness of $n > 3t_s$, whereas the existing work shows sufficiency for $n > 3t_s + t_a$.
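To make the new bound concrete, here is a small worked instantiation (the values $t_s = 3$, $t_a = 1$ are chosen purely for illustration):

```latex
\[
  t_s = 3,\ t_a = 1:\qquad
  n \;>\; 2\max(t_s, t_a) + \max(2t_a, t_s)
    \;=\; 2\cdot 3 + \max(2, 3) \;=\; 9,
\]
so $n = 10$ parties suffice, consistent with the tight bound $n > 3t_s$ for
$t_s > 2t_a$, whereas the earlier sufficiency bound $n > 3t_s + t_a = 10$
would require $n = 11$ parties.
```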
Last updated:  2024-06-19
A Formal Treatment of End-to-End Encrypted Cloud Storage
Matilda Backendal, Hannah Davis, Felix Günther, Miro Haller, and Kenneth G. Paterson
Users increasingly store their data in the cloud, thereby benefiting from easy access, sharing, and redundancy. To additionally guarantee security of the outsourced data even against a server compromise, some service providers have started to offer end-to-end encrypted (E2EE) cloud storage. With this cryptographic protection, only legitimate owners can read or modify the data. However, recent attacks on the largest E2EE providers have highlighted the lack of solid foundations for this emerging type of service. In this paper, we address this shortcoming by initiating the formal study of E2EE cloud storage. We give a formal syntax to capture the core functionality of a cloud storage system, capturing the real-world complexity of such a system's constituent interactive protocols. We then define game-based security notions for confidentiality and integrity of a cloud storage system against a fully malicious server. We treat both selective and fully adaptive client compromises. Our notions are informed by recent attacks on E2EE cloud storage providers. In particular we show that our syntax is rich enough to capture the core functionality of MEGA and that recent attacks on it arise as violations of our security notions. Finally, we present an E2EE cloud storage system that provides all core functionalities and that is both efficient and provably secure with respect to our selective security notions. Along the way, we discuss challenges on the path towards bringing the security of cloud storage up to par with other end-to-end primitives, such as secure messaging and TLS.
Last updated:  2024-06-19
Privacy-Preserving Dijkstra
Benjamin Ostrovsky
Given a graph $G(V,E)$, represented as a secret-sharing of an adjacency list, we show how to obliviously convert it into an alternative, MPC-friendly secret-shared representation, a so-called $d$-normalized replicated adjacency list (which we abbreviate to $d$-normalized), where our new data structure is only 4x larger -- compared to the original (secret-shared adjacency list) representation of $G$. Yet, this new data structure enables us to execute oblivious graph algorithms that simultaneously improve the underlying graph algorithms' round, computation, and communication complexity. Our $d$-normalization proceeds in two steps: First, we show how, for any graph $G$ given as a secret-shared adjacency list where vertices are arbitrary alphanumeric strings that fit into a single RAM memory word, we can efficiently and securely rename vertices to integers from $1$ to $V$ that will appear in increasing order in the resulting secret-shared adjacency list. The secure renaming takes $O(\log V)$ rounds and $O((V+E)\log V)$ secure operations. Our algorithm also outputs, in secret-shared form, two arrays: a mapping from integers to alphanumeric names and its sorted inverse. Of course, if the adjacency list is already in such an integer format, this step can be skipped. Second, given a secret-shared adjacency list for any graph $G$ where vertices are integers from $1$ to $V$ and are sorted, there exists a $d$-normalization algorithm that takes $O(1)$ rounds and $O(V+E)$ secure operations. We believe that both conversions are of independent interest. We demonstrate the power of our data structures by designing a privacy-preserving Dijkstra's single-source shortest-path algorithm that simultaneously achieves $O\left((V+E) \cdot \log V \right)$ secure operations and $O(V \cdot \log V \cdot \log \log\log V)$ rounds. Our secure Dijkstra algorithm works for any adjacency-list representation as long as all vertex labels and weights can individually fit into a RAM memory word. Our algorithms work with two (or any constant number of) servers in the honest-but-curious setting. The limitation of our result (to only a constant number of servers) is due to our reliance on a linear-work and constant-round secure shuffle.
Last updated:  2024-06-18
Generic Construction of Dual-Server Public Key Authenticated Encryption with Keyword Search
Keita Emura
Chen et al. (IEEE Transactions on Cloud Computing 2022) introduced dual-server public key authenticated encryption with keyword search (DS-PAEKS) and proposed a DS-PAEKS scheme under the decisional Diffie-Hellman assumption. In this paper, we propose a generic construction of DS-PAEKS from PAEKS, public key encryption, and signatures. By providing a concrete attack, we show that the DS-PAEKS scheme of Chen et al. is vulnerable; thus, our generic construction yields the first secure DS-PAEKS schemes. With a slight modification, our attack also works against the Chen et al. dual-server public key encryption with keyword search (DS-PEKS) scheme (IEEE Transactions on Information Forensics and Security 2016). Moreover, we demonstrate that the Tso et al. generic construction of DS-PEKS from public key encryption (IEEE Access 2020) is also vulnerable. We also analyze other pairing-free PAEKS schemes (Du et al., Wireless Communications and Mobile Computing 2022 and Lu and Li, IEEE Transactions on Mobile Computing 2022). Though we did not find any attack against these schemes, we show that at the least their security proofs are flawed.
Last updated:  2024-06-18
FABESA: Fast (and Anonymous) Attribute-Based Encryption under Standard Assumption
Long Meng, Liqun Chen, Yangguang Tian, and Mark Manulis
Attribute-Based Encryption (ABE) provides fine-grained access control to encrypted data and finds applications in various domains. The practicality of ABE schemes hinges on the balance between security and efficiency. The state-of-the-art adaptively secure ABE scheme, proven secure under standard assumptions (FAME, CCS'17), is less efficient than the fastest one (FABEO, CCS'22), which is only proven secure under the Generic Group Model (GGM). These traditional ABE schemes focus solely on message privacy. To address scenarios where attribute value information is also sensitive, Anonymous ABE (${\rm A}^{\rm 2}$BE) ensures the privacy of both the message and attributes. However, most ${\rm A}^{\rm 2}$BE schemes suffer from intricate designs with low efficiency, and the security of the fastest key-policy ${\rm A}^{\rm 2}$BE (proposed in FEASE, USENIX'24) relies on the GGM. In this paper, we propose novel fast key-policy and ciphertext-policy ABE schemes that (1) support both AND and OR gates for access policies, (2) have no restriction on the size and type of policies or attributes, (3) achieve adaptive security under the standard DLIN assumption, and (4) need only 4 pairings for decryption. As our ABE constructions automatically provide ciphertext anonymity, we easily transform our ABE schemes into ${\rm A}^{\rm 2}$BE schemes while maintaining the same features and high-level efficiency. The implementation results show that all our schemes achieve the best efficiency compared to other schemes with adaptive security proven under standard assumptions. Specifically, our ABE schemes perform better than FAME and are close to FABEO. Our key-policy ${\rm A}^{\rm 2}$BE scheme performs close to the one in FEASE, and our ciphertext-policy ${\rm A}^{\rm 2}$BE outperforms the state-of-the-art (Cui et al., ProvSec'16).
Last updated:  2024-06-18
VRaaS: Verifiable Randomness as a Service on Blockchains
Jacob Gorman, Lucjan Hanzlik, Aniket Kate, Easwar Vivek Mangipudi, Pratyay Mukherjee, Pratik Sarkar, and Sri AravindaKrishnan Thyagarajan
Web3 applications, such as on-chain games, NFT minting, and leader elections, necessitate access to unbiased, unpredictable, and publicly verifiable randomness. Despite its broad use cases and huge demand, there is a notable absence of comprehensive treatments of on-chain verifiable randomness services. To bridge this gap, we offer an extensive formal analysis of on-chain verifiable randomness services. We present the first formalization of on-chain verifiable randomness in the blockchain setting by introducing the notion of Verifiable Randomness as a Service (VRaaS). We formally define VRaaS using an ideal functionality $\mathcal{F}_{\sf VRaaS}$ in the Universal Composability model. Our definition not only captures the core features of randomness services, such as unbiasability, unpredictability, and public verifiability, but also accounts for many other crucial nuances pertaining to the different entities involved, such as smart contracts. Within our framework we study a generic design of Verifiable Random Function (VRF)-based randomness service -- where the randomness requester provides an input on which the randomness is evaluated as the VRF output. We show that it does satisfy our formal VRaaS definition. Furthermore, we show that the generic protocol captures many real-world randomness services like Chainlink VRF and Supra dVRF. We investigate whether our definition is minimalistic in terms of the desired security properties; to that end, we show that a couple of insecure constructions fall short of realizing our definition. Using our definition we also discover practical vulnerabilities in other designs such as Algorand beacon, Pyth VRF and Band VRF that offer on-chain verifiable randomness.
Last updated:  2024-06-18
Homomorphic Evaluation of LWR-based PRFs and Application to Transciphering
Amit Deo, Marc Joye, Benoit Libert, Benjamin R. Curtis, and Mayeul de Bellabre
Certain applications such as FHE transciphering require randomness while operating over encrypted data. This randomness has to be obliviously generated in the encrypted domain and remain encrypted throughout the computation. Moreover, it should be guaranteed that independent-looking random coins can be obliviously generated for different computations. In this work, we consider the homomorphic evaluation of pseudorandom functions (PRFs) with a focus on practical lattice-based candidates. In the homomorphic PRF evaluation setting, given a fully homomorphic encryption of the PRF secret key $\vec{s}$, it should be possible to homomorphically compute encryptions of PRF evaluations $\{ \text{PRF}_{\vec{s}}(x_i) \}_{i=1}^M$ for public inputs $\{ x_i\}_{i=1}^M$. We consider this problem for PRF families based on the hardness of the Learning-With-Rounding (LWR) problem introduced by Banerjee, Peikert and Rosen (Eurocrypt '12). We build on the random-oracle variant of a PRF construction suggested by Banerjee et al. and demonstrate that it can be evaluated using only two sequential programmable bootstraps in the TFHE homomorphic encryption scheme. We also describe several modifications of this PRF---which we prove as secure as the original function---that support homomorphic evaluations using only one programmable bootstrap per slot. Numerical experiments were conducted using practically relevant FHE parameter sets from the TFHE-rs library. Our benchmarks show that a throughput of about $1000$ encrypted pseudorandom bits per second (resp. $900$ encrypted pseudorandom bits per second) can be achieved on an AWS hpc7a.96xlarge machine (resp. on a standard laptop with an Apple M2 chip), on a single thread. The PRF evaluation keys in our experiments have sizes roughly $40\%$ and $60\%$ of a bootstrapping key. Applying our solution to transciphering enables important bandwidth savings, typically trading $64$-bit values for $4$-bit values per transmitted ciphertext.
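For intuition, the random-oracle LWR-based PRF pattern described above can be sketched in a few lines: hash the public input to a vector over $\mathbb{Z}_q$ and output the rounded inner product with the secret key. The toy Python below uses illustrative parameters (n, q, p) and is not the paper's concrete instantiation or its homomorphic evaluation.

```python
# Toy sketch of a random-oracle LWR-based PRF: F_s(x) = round_p(<H(x), s> mod q).
# Parameters are illustrative only; real instantiations are chosen for security.
import hashlib, secrets

n, q, p_mod = 16, 2**16, 2**4            # dimension and moduli (toy values)

def hash_to_vector(x: bytes) -> list:
    """Model H as a random oracle mapping x to a vector over Z_q."""
    out = []
    for i in range(n):
        d = hashlib.sha256(x + i.to_bytes(4, "big")).digest()
        out.append(int.from_bytes(d[:4], "big") % q)
    return out

def lwr_prf(s: list, x: bytes) -> int:
    a = hash_to_vector(x)
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    # Learning-With-Rounding map: round inner * p/q to the nearest value in Z_p.
    return ((inner * p_mod + q // 2) // q) % p_mod

s = [secrets.randbelow(q) for _ in range(n)]  # secret key
print(lwr_prf(s, b"example-input"))
```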
Last updated:  2024-06-18
Helium: Scalable MPC among Lightweight Participants and under Churn
Christian Mouchet, Sylvain Chatel, Apostolos Pyrgelis, and Carmela Troncoso
We introduce Helium, a novel framework that supports scalable secure multiparty computation (MPC) for lightweight participants and tolerates churn. Helium relies on multiparty homomorphic encryption (MHE) as its core building block. While MHE schemes have been well studied in theory, prior works fall short of addressing critical considerations paramount for adoption such as supporting resource-constrained and unstably connected participants. In this work, we systematize the requirements of MHE-based MPC protocols from a practical lens, and we propose a novel execution mechanism that addresses those considerations. We implement this execution mechanism in Helium, which makes it the first implemented framework to support MPC under network churn based solely on cryptographic assumptions. We show that a Helium network of $30$ parties connected with $100$Mbits/s links and experiencing a system-wide churn rate of $40$ failures per minute can compute the product between a fixed $512\times512$ secret matrix (e.g., a collectively-trained private model) and a fresh secret vector (e.g., a feature vector) $8.3$ times per second. This is $\sim\!\!1500$ times faster than a state-of-the-art MPC framework operating under no churn.
Last updated:  2024-06-18
Conjunctive Searchable Symmetric Encryption from Hard Lattices
Debadrita Talapatra, Sikhar Patranabis, and Debdeep Mukhopadhyay
Searchable Symmetric Encryption (SSE) supports efficient keyword searches over encrypted outsourced document collections while minimizing information leakage. All practically efficient SSE schemes supporting conjunctive queries rely crucially on quantum-broken cryptographic assumptions (such as discrete-log hard groups) to achieve compact storage and fast query processing. On the other hand, quantum-safe SSE schemes based on purely symmetric-key crypto-primitives either do not support conjunctive searches, or are practically inefficient. In particular, there exists no quantum-safe yet practically efficient conjunctive SSE scheme from lattice-based hardness assumptions. We solve this open question by proposing Oblivious Post-Quantum Secure Cross Tags (OQXT) – the first lattice-based practically efficient and highly scalable conjunctive SSE scheme. The technical centerpiece of OQXT is a novel oblivious cross-tag generation protocol with provable security guarantees derived from lattice-based hardness assumptions. We prove the post-quantum simulation security of OQXT with respect to a rigorously defined and thoroughly analyzed leakage profile. We then present a prototype implementation of OQXT and experimentally validate its practical efficiency and scalability over extremely large real-world databases. Our experiments show that OQXT has competitive end-to-end search latency when compared with the best (quantum-broken) conjunctive SSE schemes.
Last updated:  2024-06-18
Quantum Lattice Enumeration in Limited Depth
Nina Bindel, Xavier Bonnetain, Marcel Tiepelt, and Fernando Virdia
In 2018, Aono et al. (ASIACRYPT 2018) proposed to use quantum backtracking algorithms (Montanaro, TOC 2018; Ambainis and Kokainis, STOC 2017) to speedup lattice point enumeration. Quantum lattice sieving algorithms had already been proposed (Laarhoven et al., PQCRYPTO 2013), being shown to provide an asymptotic speedup over classical counterparts, but also to lose competitiveness at dimensions relevant to cryptography if practical considerations on quantum computer architecture were taken into account (Albrecht et al., ASIACRYPT 2020). Aono et al.'s work argued that quantum walk speedups can be applied to lattice enumeration, achieving at least a quadratic asymptotic speedup à la Grover search while not requiring exponential amounts of quantum accessible classical memory, as it is the case for sieving. In this work, we explore how to lower bound the cost of using Aono et al.'s techniques on lattice enumeration with extreme cylinder pruning, assuming a limit to the maximum depth that a quantum computation can achieve without decohering, with the objective of better understanding the practical applicability of quantum backtracking in lattice cryptanalysis.
Last updated:  2024-06-18
DualRing-PRF: Post-Quantum (Linkable) Ring Signatures from Legendre and Power Residue PRFs
Xinyu Zhang, Ron Steinfeld, Joseph K. Liu, Muhammed F. Esgin, Dongxi Liu, and Sushmita Ruj
Ring signatures are one of the crucial cryptographic primitives used in the design of privacy-preserving systems. Such a signature scheme allows a signer to anonymously sign a message on behalf of a spontaneously formed group. It not only ensures the authenticity of the message but also conceals the true signer within the group. An important extension of ring signatures is linkable ring signatures, which prevent a signer from signing twice without being detected (under some constraints). Linkable ring signatures offer advantages in applications where full anonymity might jeopardise the intended purpose, such as privacy-oriented cryptocurrencies like Monero. In this work, we introduce post-quantum ring signature (DualRing-PRF) and linkable ring signature (DualRingL-PRF) schemes whose security relies solely on symmetric-key primitives (namely, the Legendre PRF and power residue PRF). Our construction of the ring signature departs from previous approaches with similar security assumptions, offering the most competitive signature sizes for small and medium-sized rings. In particular, for a ring size of 16, DualRing-PRF has a communication overhead 1.4 times smaller than the state-of-the-art scheme proposed by Goel et al. (PETS’21). Furthermore, we demonstrate the extension of DualRing-PRF to incorporate linkability and non-slanderability. Compared to the existing one-time traceable ring signature (a variant of linkable ring signature) by Scafuro and Zhang (ESORICS’21), our construction supports many-time signing and achieves significantly smaller signature sizes when the ring size exceeds 16. This advantage becomes more pronounced as the ring size increases.
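For context, the Legendre PRF named above maps an input x to a bit derived from the Legendre symbol of k + x modulo a public prime. A toy sketch follows (the prime and the bit convention are illustrative assumptions, not the paper's parameters):

```python
# Toy Legendre PRF: L_k(x) is a bit derived from the Legendre symbol of
# (k + x) mod p. The prime p and output convention here are illustrative only.
p = 2**61 - 1  # toy public prime (real instantiations use carefully chosen p)

def legendre_symbol(a: int, p: int) -> int:
    """Euler's criterion: a^((p-1)/2) mod p is 1 (QR), p-1 (non-QR), or 0."""
    return pow(a % p, (p - 1) // 2, p)

def legendre_prf_bit(key: int, x: int) -> int:
    """Map the Legendre symbol of key + x to a single pseudorandom bit."""
    s = legendre_symbol(key + x, p)
    return 0 if s == 1 else 1  # convention: QR -> 0, non-QR (or 0) -> 1
```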
Last updated:  2024-06-18
SNOW-SCA: ML-assisted Side-Channel Attack on SNOW-V
Harshit Saurabh, Anupam Golder, Samarth Shivakumar Titti, Suparna Kundu, Chaoyun Li, Angshuman Karmakar, and Debayan Das
This paper presents SNOW-SCA, the first power side-channel analysis (SCA) attack on a 5G mobile communication security standard candidate, SNOW-V, running on a 32-bit ARM Cortex-M4 microcontroller. First, we perform a generic known-key correlation (KKC) analysis to identify the leakage points. Next, a correlation power analysis (CPA) attack is performed, which reduces the attack complexity to two key guesses for each key byte. The correct secret key is then uniquely identified utilizing linear discriminant analysis (LDA). The profiled SCA attack with LDA achieves 100% accuracy after training with < 200 traces, meaning the attack subsequently succeeds with just a single trace. Overall, using the combined CPA and LDA attack model, the correct secret key byte is recovered with < 50 traces collected using the ChipWhisperer platform. The entire 256-bit secret key of SNOW-V can be recovered incrementally using the proposed SCA attack. Finally, we suggest low-overhead countermeasures that can be used to prevent these SCA attacks.
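To sketch the CPA step generically (a textbook Hamming-weight CPA, not the authors' exact SNOW-V attack code), each key-byte guess is scored by the peak absolute Pearson correlation between a hypothetical leakage model and the measured traces; `intermediate` below is a hypothetical callback supplied by the attacker for the targeted cipher operation.

```python
# Generic Hamming-weight CPA sketch (illustrative; not the paper's code).
import numpy as np

def hamming_weight(vals: np.ndarray) -> np.ndarray:
    """Hamming weight of each byte value in the array."""
    return np.unpackbits(vals.astype(np.uint8)[:, None], axis=1).sum(axis=1)

def cpa_recover_byte(traces: np.ndarray, inputs: np.ndarray, intermediate):
    """traces: (n_traces, n_samples); inputs: known per-trace input bytes;
    intermediate(inputs, guess) -> hypothetical intermediate byte values."""
    t = traces - traces.mean(axis=0)          # center traces once
    t_norm = np.linalg.norm(t, axis=0)
    best_guess, best_peak = 0, -1.0
    for guess in range(256):
        hyp = hamming_weight(intermediate(inputs, guess)).astype(float)
        hyp -= hyp.mean()
        # Pearson correlation of the hypothesis against every sample point
        corr = (hyp @ t) / (np.linalg.norm(hyp) * t_norm)
        peak = np.max(np.abs(corr))
        if peak > best_peak:
            best_guess, best_peak = guess, peak
    return best_guess
```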
Last updated:  2024-06-18
FHEDA: Efficient Circuit Synthesis with Reduced Bootstrapping for Torus FHE
Animesh Singh, Smita Das, Anirban Chakraborty, Rajat Sadhukhan, Ayantika Chatterjee, and Debdeep Mukhopadhyay
Fully Homomorphic Encryption (FHE) schemes are widely used cryptographic primitives for performing arbitrary computations on encrypted data. However, FHE incorporates a computationally intensive mechanism called bootstrapping, which resets the noise in the ciphertext to a lower level, allowing computation on circuits of arbitrary depth. This process can take significant time, ranging from several minutes to hours. To address the above issue, in this work we propose an Electronic Design Automation (EDA) framework, FHEDA, that generates efficient Boolean representations of circuits compatible with the Torus-FHE (ASIACRYPT 2020) scheme. To the best of our knowledge, this is the first work in the EDA domain of FHE. We integrate logic synthesis and gate optimization techniques into our FHEDA framework to reduce the total number of bootstrapping operations in a Boolean circuit, which leads to a significant (up to 50%) reduction in homomorphic computation time. FHEDA is built upon the observation that in Torus-FHE two consecutive Boolean gate evaluations over fresh encryptions require only one bootstrapping instead of two, based on appropriate parameter choices. By integrating this observation with logic replacement techniques into FHEDA, we reduce the total number of bootstrapping operations along with the circuit depth. This eventually reduces the homomorphic evaluation time of Boolean circuits. To verify the efficacy of our approach, we assess the performance of the proposed EDA flow on a diverse set of representative benchmarks, including privacy-preserving machine learning and different symmetric-key block ciphers.
Last updated:  2024-06-18
Efficient 2PC for Constant Round Secure Equality Testing and Comparison
Tianpei Lu, Xin Kang, Bingsheng Zhang, Zhuo Ma, Xiaoyuan Zhang, Yang Liu, and Kui Ren
Secure equality testing and comparison are two important primitives that have been widely used in many secure computation scenarios, such as privacy-preserving machine learning, private set intersection, secure data mining, etc. In this work, we propose new constant-round two-party computation (2PC) protocols for secure equality testing and secure comparison. Our protocols are designed in the online/offline paradigm. Theoretically, for 32-bit integers, the online communication for our equality testing is only 76 bits, and the cost for our secure comparison is only 384 bits. Our benchmarks show that (i) our equality testing is $9\times$ faster than Guo et al. (EUROCRYPT 2023) and $15\times$ faster than the garbled circuit scheme (EMP-toolkit); (ii) our secure comparison protocol is $3\times$ faster than Guo et al. (EUROCRYPT 2023), and $6\times$ faster than both Rathee et al. (CCS 2020) and the garbled circuit scheme.
Last updated:  2024-06-18
SoCureLLM: An LLM-driven Approach for Large-Scale System-on-Chip Security Verification and Policy Generation
Shams Tarek, Dipayan Saha, Sujan Kumar Saha, Mark Tehranipoor, and Farimah Farahmandi
Contemporary methods for hardware security verification struggle with adaptability, scalability, and availability due to the increasing complexity of modern system-on-chips (SoCs). Large language models (LLMs) have emerged as a viable approach to address these shortcomings in security verification because of their natural language understanding, advanced reasoning, and knowledge transfer capabilities. However, their application to large designs is limited by inherent token limitations and memorization constraints. In this paper, we introduce SoCureLLM, an LLM-based framework that excels in identifying security vulnerabilities within SoC designs and creating a comprehensive security policy database. Our framework is adaptable and adept at processing varied, large-scale designs, overcoming the aforementioned limitations of LLMs. In evaluations, SoCureLLM detected 76.47% of security bugs across three vulnerable RISC-V SoCs, outperforming state-of-the-art security verification methods. Furthermore, assessing three additional large-scale RISC-V SoC designs against various threat models led to the formulation of 84 novel security policies, enriching the security policy database. These newly generated security policies, which previously required extensive manual effort to craft, can be used as guidelines for developing secure SoC designs.
Last updated:  2024-06-18
SoK: Programmable Privacy in Distributed Systems
Daniel Benarroch, Bryan Gillespie, Ying Tong Lai, and Andrew Miller
This Systematization of Knowledge conducts a survey of contemporary distributed blockchain protocols, with the aim of identifying cryptographic and design techniques which practically enable both expressive programmability and user data confidentiality. To facilitate a framing which supports the comparison of concretely very different protocols, we define an epoch-based computational model in the form of a flexible UC-style ideal functionality which divides the operation of privacy-preserving networks into three phases: Independent, Mediated, and Global computation. Our analysis of protocols focuses in particular on features of the Mediated computation phase, which provides the facility to execute non-trivial program logic on private inputs from multiple users. Specifically, we compare implementations in different protocols for private limit order auctions, which we find to be a representative application which is common and relatively simple, but which exhibits adversarial dynamics which demonstrate the capabilities of a non-trivial Mediated computation mechanism. In our analysis, we identify four protocols representative of different high-level approaches used to implement Mediated computations. We compare protocols according to the degree and flexibility of programmability, the privacy properties achieved, and the security assumptions required for correct operation. We conclude by offering recommendations and best practices for future programmable privacy designs.
Last updated:  2024-06-18
Malicious Security for PIR (almost) for Free
Brett Falk, Pratyush Mishra, and Matan Shtepel
Private Information Retrieval (PIR) enables a client to retrieve a database element from a semi-honest server while hiding the element being queried from the server. Maliciously-secure PIR (mPIR) [Colombo et al., USENIX Security '23] strengthens the guarantees of plain (i.e., semi-honest) PIR by ensuring that even a misbehaving server (a) cannot compromise client privacy via selective-failure attacks, and (b) must answer every query *consistently* (i.e., with respect to the same database). These additional security properties are crucial for many real-world applications. In this work we present a generic compiler that transforms any PIR scheme into an mPIR scheme in a black-box manner, with minimal overhead, and without requiring additional cryptographic assumptions. Since mPIR trivially implies PIR, our compiler establishes the equivalence of mPIR and PIR. By instantiating our compiler with existing PIR schemes, we immediately obtain mPIR schemes with $O(N^\epsilon)$ communication cost. In fact, by applying our compiler to a recent doubly-efficient PIR [Lin et al., STOC '23], we are able to construct a *doubly-efficient* mPIR scheme that requires only $\text{polylog}(N)$ communication and server and client computation. In comparison, all prior works incur an $\Omega(\sqrt{N})$ cost in these metrics. Our compiler makes use of smooth locally-decodable codes (LDCs) that have a robust decoding procedure. We term these codes "subcode"-LDCs, because they are LDCs where the query responses are from an error-correcting code. This property is shared by Reed--Muller codes (whose query responses are Reed--Solomon codewords) and more generally lifted codes. Applying our compiler requires us to consider decoding in the face of *non-signaling adversaries*, for reasons analogous to the need for non-signaling PCPs in the succinct-argument literature. We show how to construct such decoders for Reed--Muller codes, and more generally for smooth locally-decodable codes that have a robust decoding procedure.
Last updated:  2024-06-18
Hadamard Product Arguments and Their Applications
Kyeongtae Lee, Donghwan Oh, Hankyung Ko, Jihye Kim, and Hyunok Oh
This paper introduces transparent and efficient arguments for Hadamard products between committed vectors from two source groups. For vectors of length $n$, the proofs consist of $\mathcal{O}(\log n)$ target group elements and $\mathcal{O}(1)$ additional elements. The verifier's workload is dominated by $\mathcal{O}(\log n)$ multi-exponentiations in the target group and $\mathcal{O}(1)$ pairings. We prove security under the standard SXDH assumption. Additionally, we propose an aggregator for Groth16 pairing-based zk-SNARKs and a proof aggregation technique for the general case of the KZG pairing-based polynomial commitment scheme using our Hadamard product arguments. Both applications support logarithmic-sized aggregated proofs without requiring an additional trusted setup, significantly reducing the verifier’s pairing operations to $\mathcal{O}(1)$.
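To make the statement being proven concrete: a claimed Hadamard product $c = a \circ b$ can be checked probabilistically with a random challenge vector, which is the soundness idea such an argument compresses into a succinct proof. Below is a minimal sketch over a prime field, with plain modular arithmetic standing in for the committed group elements; it illustrates only the relation, not the pairing-based SXDH construction itself.

```python
# Probabilistic check that c equals the Hadamard (elementwise) product
# of a and b over a prime field, via a random challenge vector.
import random

P = 2**61 - 1  # a Mersenne prime, standing in for the group order

def hadamard(a, b):
    return [(x * y) % P for x, y in zip(a, b)]

def check_hadamard(a, b, c, reps=1):
    """Accepts an honest c always; a wrong c passes with prob <= 1/P per rep."""
    for _ in range(reps):
        r = [random.randrange(P) for _ in a]
        lhs = sum(ai * bi % P * ri for ai, bi, ri in zip(a, b, r)) % P
        rhs = sum(ci * ri for ci, ri in zip(c, r)) % P
        if lhs != rhs:
            return False
    return True

n = 8
a = [random.randrange(P) for _ in range(n)]
b = [random.randrange(P) for _ in range(n)]
c = hadamard(a, b)
assert check_hadamard(a, b, c)
bad = c[:]; bad[3] = (bad[3] + 1) % P
assert not check_hadamard(a, b, bad)  # fails, except with prob 1/P
```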
Last updated:  2024-06-18
FaultyGarble: Fault Attack on Secure Multiparty Neural Network Inference
Mohammad Hashemi, Dev Mehta, Kyle Mitard, Shahin Tajik, and Fatemeh Ganji
The success of deep learning across a variety of applications, including inference on edge devices, has led to increased concerns about the privacy of users’ data and deep learning models. Secure multiparty computation allows parties to remedy this concern, resulting in a growth in the number of such proposals and improvements in their efficiency. The majority of secure inference protocols relying on multiparty computation assume that the client does not deviate from the protocol and only passively attempts to extract information. Yet clients, driven by different incentives, can act maliciously to actively deviate from the protocol and disclose the deep learning model owner’s private information. Interestingly, while faults are well understood in the multiparty computation literature, fault attacks have not been explored. Our paper introduces the very first fault attack against secure inference implementations relying on garbled circuits as a prime example of multiparty computation schemes. In this regard, laser fault injection coupled with a model-extraction attack is successfully mounted against existing solutions that have been assumed to be secure against active attacks. Notably, the number of queries required for the attack is equal to that of the best model extraction attack mounted against the secure inference engines under the semi-honest scenario.
Last updated:  2024-06-18
Information-Theoretic Multi-Server PIR with Global Preprocessing
Ashrujit Ghoshal, Baitian Li, Yaohua Ma, Chenxin Dai, and Elaine Shi
We propose a new unified framework to construct multi-server, information-theoretic Private Information Retrieval (PIR) schemes that leverage global preprocessing to achieve sublinear computation per query. Despite a couple of earlier attempts, our understanding of PIR schemes in the global preprocessing model remains limited, and so far, we only know a few sparse points in the broad design space. Our framework not only unifies earlier results in this space but also leads to several new results. First, we can improve the server space of the state-of-the-art scheme by a polynomial factor. Second, we can broaden the parameter space of known results, allowing a smooth tradeoff between bandwidth and computation. Third, while earlier schemes achieve better per-server bandwidth and computation as we add more servers, the server space actually grows w.r.t. the number of servers. We offer a new scalable family of schemes where the per-server bandwidth, computation, and space all decrease as we add more servers. This scalable family of schemes also implies the so-called ``doubly efficient'' PIR scheme with any super-constant number of servers, achieving $n^{1+o(1)}$ server space and preprocessing cost, and $n^{o(1)}$ bandwidth and computation per query.
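For orientation, the classic two-server XOR scheme below is the canonical information-theoretic PIR without preprocessing; each server's work is linear in the database size, which is exactly the cost that global preprocessing aims to make sublinear. A minimal sketch:

```python
# Classic two-server information-theoretic PIR: the client sends a
# random index set to one server and the same set with the target
# index flipped to the other; XORing the two one-bit answers yields
# the target bit, and each server sees only a uniformly random set.
import secrets

def server(db_bits, subset):
    ans = 0
    for j in subset:
        ans ^= db_bits[j]
    return ans

def query(db_len, i):
    s1 = {j for j in range(db_len) if secrets.randbits(1)}
    s2 = s1 ^ {i}          # symmetric difference: flip membership of i
    return s1, s2

db = [secrets.randbits(1) for _ in range(64)]
i = 37
s1, s2 = query(len(db), i)
assert server(db, s1) ^ server(db, s2) == db[i]
```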
Last updated:  2024-06-18
Pseudorandom Error-Correcting Codes
Miranda Christ and Sam Gunn
We construct pseudorandom error-correcting codes (or simply pseudorandom codes), which are error-correcting codes with the property that any polynomial number of codewords are pseudorandom to any computationally-bounded adversary. Efficient decoding of corrupted codewords is possible with the help of a decoding key. We build pseudorandom codes that are robust to substitution and deletion errors, where pseudorandomness rests on standard cryptographic assumptions. Specifically, pseudorandomness is based on either $2^{O(\sqrt{n})}$-hardness of LPN, or polynomial hardness of LPN and the planted XOR problem at low density. As our primary application of pseudorandom codes, we present an undetectable watermarking scheme for outputs of language models that is robust to cropping and a constant rate of random substitutions and deletions. The watermark is undetectable in the sense that any number of samples of watermarked text are computationally indistinguishable from text output by the original model. This is the first undetectable watermarking scheme that can tolerate a constant rate of errors. Our second application is to steganography, where a secret message is hidden in innocent-looking content. We present a constant-rate stateless steganography scheme with robustness to a constant rate of substitutions. Ours is the first stateless steganography scheme with provable steganographic security and any robustness to errors.
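As rough intuition for the notion (not the paper's LPN-based construction, and handling substitutions only, not deletions), a repetition code whitened by a keyed keystream yields codewords that look random without the key yet still decode under bounded errors with it. A toy sketch, using SHA-256 in counter mode as a stand-in PRF:

```python
# Toy "pseudorandom code": repetition code XORed with a keyed keystream.
# Without the key the codeword is a pseudorandom bit string; with the
# key, majority vote survives fewer than n/2 substitutions.
import hashlib, secrets

def keystream(key, nonce, nbits):
    out, ctr = [], 0
    while len(out) < nbits:
        block = hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        out.extend((byte >> k) & 1 for byte in block for k in range(8))
        ctr += 1
    return out[:nbits]

def encode(key, nonce, bit, n=63):
    return [bit ^ k for k in keystream(key, nonce, n)]

def decode(key, nonce, word):
    ks = keystream(key, nonce, len(word))
    votes = sum(c ^ k for c, k in zip(word, ks))
    return 1 if votes > len(word) // 2 else 0

key, nonce = secrets.token_bytes(16), secrets.token_bytes(8)
cw = encode(key, nonce, 1)
for pos in range(20):          # flip 20 of the 63 positions
    cw[pos] ^= 1
assert decode(key, nonce, cw) == 1
```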
Last updated:  2024-06-18
Volatile and Persistent Memory for zkSNARKs via Algebraic Interactive Proofs
Alex Ozdemir, Evan Laufer, and Dan Boneh
In verifiable outsourcing, an untrusted server runs an expensive computation and produces a succinct proof (called a SNARK) of the results. In many scenarios, the computation accesses a RAM that the server maintains a commitment to (persistent RAM) or that is initially zero (volatile RAM). But, SNARKs for such scenarios are limited by the high overheads associated with existing techniques for RAM checking. We develop new proofs about volatile, persistent, and sparse persistent RAM that reduce SNARK proving times. Our results include both asymptotic and concrete improvements---including a proving time reduction of up to 51.3$\times$ for persistent RAM. Along the way, we apply two tools that may be of independent interest. First, we generalize an existing construction to convert any algebraic interactive proof (AIP) into a SNARK. An AIP is a public-coin, non-succinct, interactive proof with a verifier that is an arithmetic circuit. Second, we apply Bézout's identity for polynomials to construct new AIPs for uniqueness and disjointness. These are useful for showing the independence of accesses to different addresses.
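A small worked example of the Bézout step may help: two access sets with disjoint addresses correspond to coprime vanishing polynomials $f$ and $g$, and coefficients $u, v$ with $uf + vg = 1$ act as a certificate of disjointness. The sketch below uses sympy's extended Euclidean algorithm; the polynomials chosen are illustrative.

```python
# Bezout's identity for polynomials: coprime f, g admit u, v with
# u*f + v*g = 1, certifying that their root sets (the "addresses"
# of two access sequences) are disjoint.
from sympy import symbols, gcdex, expand

x = symbols('x')
f = x**2 + 1          # one access set's vanishing polynomial
g = x**2 - 2          # another, with disjoint roots

u, v, h = gcdex(f, g, x)     # u*f + v*g = h = gcd(f, g)
assert h == 1                # coprimality: the disjointness certificate
assert expand(u*f + v*g) == 1
```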
Last updated:  2024-06-17
Distributed PIR: Scaling Private Messaging via the Users' Machines
Elkana Tovey, Jonathan Weiss, and Yossi Gilad
This paper presents a new architecture for metadata-private messaging that counters scalability challenges by offloading most computations to the clients. At the core of our design is a distributed private information retrieval (PIR) protocol, where the responder delegates its work to alleviate PIR's computational bottleneck and catches misbehaving delegates by efficiently verifying their results. We introduce DPIR, a messaging system that uses distributed PIR to let a server storing messages delegate the work to the system's clients, such that each client contributes proportional processing to the number of messages it reads. The server removes clients returning invalid results, which DPIR leverages to integrate an incentive mechanism for honest client behavior by conditioning messaging through DPIR on correctly processing PIR requests from other users. The result is a metadata-private messaging system that asymptotically improves scalability over prior work with the same threat model. We show through experiments on a prototype implementation that DPIR concretely improves performance by $3.25\times$ and $4.31\times$ over prior work and that the performance gap grows with the user base size.
Last updated:  2024-06-17
Improved Boomerang Attacks on 6-Round AES
Augustin Bariant, Orr Dunkelman, Nathan Keller, Gaëtan Leurent, and Victor Mollimard
The boomerang attack is a cryptanalytic technique which allows combining two short high-probability differentials into a distinguisher for a large number of rounds. Since its introduction by Wagner in 1999, it has been applied to many ciphers. One of the best-studied targets is a 6-round variant of AES, on which the boomerang attack is outperformed only by the dedicated Square attack. Recently, two new variants of the boomerang attack were presented: retracing boomerang (Eurocrypt'20) and truncated boomerang (Eurocrypt'23). These variants seem incompatible: the former achieves lower memory complexity by throwing away most of the data in order to force dependencies, while the latter achieves lower time complexity by using large structures, which inevitably leads to a large memory complexity. In this paper we show that elements of the two techniques can be combined to get `the best of both worlds' – the practical memory complexity of the retracing attack and the lower time complexity of the truncated attack. We obtain an attack with data complexity of $2^{57}$ (compared to $2^{59}$ and $2^{55}$ of truncated and retracing boomerang, respectively), memory complexity of $2^{33}$ (compared to $2^{59}$ and $2^{31}$), and time complexity of $2^{61}$ (compared to $2^{61}$ and $2^{80}$). This is the second-best attack on 6-round AES, after the Square attack.
Last updated:  2024-06-17
PIR with Client-Side Preprocessing: Information-Theoretic Constructions and Lower Bounds
Yuval Ishai, Elaine Shi, and Daniel Wichs
It is well-known that classical Private Information Retrieval (PIR) schemes without preprocessing must suffer from linear server computation per query. Moreover, any such single-server PIR with sublinear bandwidth must rely on public-key cryptography. Several recent works showed that these barriers pertaining to classical PIR can be overcome by introducing a preprocessing phase where each client downloads a small hint that helps it make queries subsequently. Notably, the Piano PIR scheme (and subsequent improvements) showed that with such a client-side preprocessing, not only can we have PIR with sublinear computation and bandwidth per query, but somewhat surprisingly, we can also get it using only symmetric-key cryptography (i.e., one-way functions). In this paper, we take the question of minimizing cryptographic assumptions to an extreme. In particular, we are the first to explore the landscape of information theoretic single-server preprocessing PIR. We make contributions on both the upper- and lower-bounds fronts. First, we show new information-theoretic constructions with various non-trivial performance tradeoffs between server computation, client space and bandwidth. Second, we prove a (nearly) tight lower bound on the tradeoff between the client space and bandwidth in information-theoretic constructions. Finally, we prove that any computational scheme that overcomes the information-theoretic lower bound and satisfies a natural syntactic requirement (which is met by all known constructions) would imply a hard problem in the complexity class SZK. In particular, this shows that Piano achieves (nearly) optimal client space and bandwidth tradeoff subject to making a black-box use of a one-way function. Some of the techniques we use for the above results can be of independent interest.
Last updated:  2024-06-17
Throwing Boomerangs into Feistel Structures: Application to CLEFIA, WARP, LBlock, LBlock-s and TWINE
Hosein Hadipour, Marcel Nageler, and Maria Eichlseder
Automatic tools to search for boomerang distinguishers have seen significant advances over the past few years. However, most previous work has focused on ciphers based on a Substitution Permutation Network (SPN), while analyzing the Feistel structure is of great significance. Boukerrou et al. recently provided a theoretical framework to formulate the boomerang switch over multiple Feistel rounds, but they did not provide an automatic tool to find distinguishers. In this paper, by enhancing the recently proposed method by Hadipour et al., we provide an automatic tool to search for boomerang distinguishers and apply it to block ciphers following the Generalized Feistel Structure (GFS). Applying our tool to a wide range of GFS ciphers, we show that it significantly improves the best previous results on boomerang analysis. In particular, we improve the best previous boomerang distinguishers for 20 and 21 rounds of WARP by a factor of $2^{38.28}$ and $2^{36.56}$, respectively. Thanks to the effectiveness of our method, we can extend the boomerang distinguishers of WARP by two rounds and distinguish 23 rounds of this cipher from a random permutation. Applying our method to the internationally-standardized cipher CLEFIA, we achieve a 9-round boomerang distinguisher which improves the best previous boomerang distinguisher by one round. Based on this distinguisher, we build a key-recovery attack on 11 rounds of CLEFIA, which improves the best previous sandwich attack on this cipher by one round. We also apply our method to LBlock, LBlock-s, and TWINE and improve the best previous boomerang distinguisher of these ciphers.
Last updated:  2024-06-17
Improved Rectangle Attacks on SKINNY and CRAFT
Hosein Hadipour and Nasour Bagheri
The boomerang and rectangle attacks are adaptations of differential cryptanalysis that regard the target cipher $E$ as a composition of two sub-ciphers, i.e., $E = E_{1}\circ E_{0}$, to construct a distinguisher for $E$ with probability $p^{2}q^{2}$ by concatenating two short differential trails for $E_{0}$ and $E_{1}$ with probability $p$ and $q$ respectively. According to previous research, the dependency between these two differential characteristics has a great impact on the probability of boomerang and rectangle distinguishers. Dunkelman et al. proposed the sandwich attack to formalise this dependency, regarding $E$ as three parts, i.e., $E = E_{1}\circ E_{m}\circ E_{0}$, where $E_{m}$ contains the dependency between the two differential trails and satisfies some differential propagation with probability $r$. Accordingly, the entire probability is $p^{2}q^{2}r$. Recently, Song et al. have proposed a general framework to identify the actual boundaries of $E_{m}$ and systematically evaluate the probability of $E_{m}$ with any number of rounds, and applied their method to accurately evaluate the probabilities of the best boomerang distinguishers for SKINNY. In this paper, using a more advanced method to search for boomerang distinguishers, we show that the best previous boomerang distinguishers for SKINNY can be significantly improved in terms of probability and number of rounds. More precisely, we propose related-tweakey boomerang distinguishers for up to 19, 21, 23, and 25 rounds of SKINNY-64-128, SKINNY-128-256, SKINNY-64-192 and SKINNY-128-384 respectively, which improve the previous boomerang distinguishers of these variants of SKINNY by 1, 2, 1, and 1 round respectively. Based on the improved boomerang distinguishers for SKINNY, we provide related-tweakey rectangle attacks on 23 rounds of SKINNY-64-128, 24 rounds of SKINNY-128-256, 29 rounds of SKINNY-64-192, and 30 rounds of SKINNY-128-384. It is worth noting that our improved related-tweakey rectangle attacks on SKINNY-64-192, SKINNY-128-256 and SKINNY-128-384 can be directly applied for the same number of rounds of ForkSkinny-64-192, ForkSkinny-128-256 and ForkSkinny-128-384 respectively. CRAFT is another SKINNY-like tweakable block cipher for which we provide the security analysis against the rectangle attack for the first time. As a result, we provide a 14-round boomerang distinguisher for CRAFT in the single-tweak model, based on which we propose a single-tweak rectangle attack on 18 rounds of this cipher. Moreover, following the previous research regarding the evaluation of switching in multiple rounds of boomerang distinguishers, we also introduce new tools called Double Boomerang Connectivity Table $\tt{DBCT}$, $\tt{LBCT}^{\scriptsize{=}|}$, and $\tt{UBCT}^{\vDash}$ to evaluate the boomerang switch through multiple rounds more accurately.
Last updated:  2024-06-17
Arithmetisation of computation via polynomial semantics for first-order logic
Murdoch J. Gabbay
We propose a compositional shallow translation from first-order logic with equality into polynomials; that is, we arithmetise the semantics of first-order logic. Using this, we can translate specifications of mathematically structured programming into polynomials, in a form amenable to succinct cryptographic verification. We give worked example applications, and we propose a proof-of-concept succinct verification scheme based on inner product arguments.
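To convey the flavour of such an arithmetisation (a toy encoding of my own over a small finite domain, not the paper's exact translation): connectives become polynomial operations on $\{0,1\}$ truth values, and quantifiers become products or complements of products.

```python
# Arithmetising first-order logic over a finite domain: connectives as
# polynomials on {0,1}, quantifiers as products over the domain.
DOM = [0, 1, 2]                       # a small finite domain

def NOT(p):  return 1 - p
def AND(p, q): return p * q
def OR(p, q):  return p + q - p * q
def EQ(a, b):  return 1 if a == b else 0   # interpreted equality

def FORALL(pred):                     # product of pred(x) over x in DOM
    out = 1
    for x in DOM:
        out *= pred(x)
    return out

def EXISTS(pred):                     # "not forall not"
    return NOT(FORALL(lambda x: NOT(pred(x))))

# forall x exists y. x = y  -- true over any domain
assert FORALL(lambda x: EXISTS(lambda y: EQ(x, y))) == 1
# exists x forall y. x = y  -- false once the domain has two elements
assert EXISTS(lambda x: FORALL(lambda y: EQ(x, y))) == 0
```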
Last updated:  2024-06-17
Randomness Generation for Secure Hardware Masking - Unrolled Trivium to the Rescue
Gaëtan Cassiers, Loïc Masure, Charles Momin, Thorben Moos, Amir Moradi, and François-Xavier Standaert
Masking is a prominent strategy to protect cryptographic implementations against side-channel analysis. Its popularity arises from the exponential security gains that can be achieved for (approximately) quadratic resource utilization. Many variants of the countermeasure tailored for different optimization goals have been proposed. The common denominator among all of them is the implicit demand for robust and high entropy randomness. Simply assuming that uniformly distributed random bits are available, without taking the cost of their generation into account, leads to a poor understanding of the efficiency vs. security tradeoff of masked implementations. This is especially relevant in the case of hardware masking schemes, which are known to consume large amounts of random bits per cycle due to parallelism. Currently, there seems to be no consensus on how to most efficiently derive many pseudo-random bits per clock cycle from an initial seed and with properties suitable for masked hardware implementations. In this work, we evaluate a number of building blocks for this purpose and find that hardware-oriented stream ciphers like Trivium and its reduced-security variant Bivium B outperform most competitors when implemented in an \emph{unrolled} fashion. Unrolled implementations of these primitives enable the flexible generation of many bits per cycle, which is crucial for satisfying the large randomness demands of state-of-the-art masking schemes. According to our analysis, only Linear Feedback Shift Registers (LFSRs), when also unrolled, are capable of producing long non-repetitive sequences of random-looking bits at a higher rate per cycle at the same or lower cost than Trivium and Bivium B. Yet, these instances do not provide black-box security as they generate only linear outputs. We experimentally demonstrate that using multiple output bits from an LFSR in the same masked implementation can violate probing security and even lead to harmful randomness cancellations. Circumventing these problems, and enabling an independent analysis of randomness generation and masking, requires the use of cryptographically stronger primitives like stream ciphers. As a result of our studies, we provide an evidence-based estimate for the cost of securely generating $n$ fresh random bits per cycle. Depending on the desired level of black-box security and operating frequency, this cost can be as low as $20n$ to $30n$ ASIC gate equivalents (GE) or $3n$ to $4n$ FPGA look-up tables (LUTs), where $n$ is the number of random bits required. Our results demonstrate that the cost per bit is (sometimes significantly) lower than estimated in previous works, incentivizing parallelism whenever exploitable. This provides further motivation to potentially move low randomness usage from a primary to a secondary design goal in hardware masking research.
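For reference, a straightforward bit-level Trivium keystream generator (following the de Cannière-Preneel specification, to the best of my recollection of its loading conventions) is sketched below. The hardware unrolling the paper evaluates corresponds to instantiating several iterations of this round function combinationally, so that many keystream bits emerge per clock; in software the analogue is simply iterating the loop.

```python
# Bit-level Trivium: 288-bit state in three shift registers, 1152
# warm-up rounds, then one keystream bit per round.
def trivium(key_bits, iv_bits, nbits):
    """key_bits, iv_bits: lists of 80 bits; returns nbits keystream bits."""
    s = [0] * 288
    s[0:80] = key_bits                 # s1..s80   <- key
    s[93:173] = iv_bits                # s94..s173 <- IV
    s[285] = s[286] = s[287] = 1       # s286..s288 <- 1
    out = []
    for i in range(4 * 288 + nbits):   # 1152 warm-up rounds, then output
        t1 = s[65] ^ s[92]
        t2 = s[161] ^ s[176]
        t3 = s[242] ^ s[287]
        if i >= 4 * 288:
            out.append(t1 ^ t2 ^ t3)
        t1 ^= (s[90] & s[91]) ^ s[170]
        t2 ^= (s[174] & s[175]) ^ s[263]
        t3 ^= (s[285] & s[286]) ^ s[68]
        s = [t3] + s[0:92] + [t1] + s[93:176] + [t2] + s[177:287]
    return out

ks = trivium([1] + [0] * 79, [0] * 80, 128)
assert len(ks) == 128
```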
Last updated:  2024-06-17
ZLR: a fast online authenticated encryption scheme achieving full security
Wonseok Choi, Seongha Hwang, Byeonghak Lee, and Jooyoung Lee
Online authenticated encryption has been considered of practical relevance in lightweight environments due to low latency and constant memory usage. In this paper, we propose a new tweakable block cipher-based online authenticated encryption scheme, dubbed ZLR, and its domain separation variant, dubbed DS-ZLR. ZLR and DS-ZLR follow the Encrypt-Mix-Encrypt paradigm. However, in contrast to existing schemes using the same paradigm such as ELmE and CoLM, ZLR and DS-ZLR enjoy $n$-bit security by using larger internal states with an efficient ZHash-like hashing algorithm. In this way, $2n$-bit blocks are processed with only a single primitive call for hashing and two primitive calls for encryption and decryption, when they are based on an $n$-bit tweakable block cipher using $n$-bit (resp. $2n$-bit) tweaks for ZLR (resp. DS-ZLR). Furthermore, they support pipelined computation as well as online nonce-misuse resistance. To the best of our knowledge, ZLR and DS-ZLR are the first pipelineable tweakable block cipher-based online authenticated encryption schemes of rate 2/3 that provide $n$-bit security with online nonce-misuse resistance.
Last updated:  2024-06-17
Phoenix: Hash-and-Sign with Aborts from Lattice Gadgets
Corentin Jeudy, Adeline Roux-Langlois, and Olivier Sanders
Preimage sampling is a fundamental tool in lattice-based cryptography, and its performance directly impacts that of the cryptographic mechanisms relying on it. In 2012, Micciancio and Peikert proposed a new way of generating trapdoors (and an associated preimage sampling procedure) with very interesting features. Unfortunately, in some applications such as digital signatures, the performance may not be as competitive as other approaches like Fiat-Shamir with Aborts. In an effort to improve preimage sampling for Micciancio-Peikert (MP) trapdoors, Lyubashevsky and Wichs (LW) introduced a new sampler which leverages rejection sampling but suffers from strong parameter requirements that hampered performance. As a consequence it seemed to be restricted to theoretical applications and has not been, to our knowledge, considered for real-world applications. Our first contribution is to revisit the LW sampler by proposing an improved analysis which yields much more compact parameters. This leads to gains on the preimage size of about 60% over the LW sampler, and up to 25% compared to the original MP sampling technique. It thus sheds a new light on the LW sampler, opening promising perspectives for the efficiency of advanced lattice-based constructions relying on such mechanisms. To provide further improvements, we show that it perfectly combines with the approximate trapdoors approach by Chen, Genise and Mukherjee, but with a smaller preimage error. Building upon those results, we introduce a hash-and-sign signature scheme called Phoenix. The scheme is based on the M-LWE and M-SIS assumptions and features attractive public key and signature sizes which are even smaller than those of the most recent gadget-based construction Eagle of Yu, Jia and Wang (Crypto’23). Moreover, Phoenix is designed to be implementation-friendly, avoiding in particular complex Gaussian samplers that are often hard to protect.
Last updated:  2024-06-17
Towards Optimal Parallel Broadcast under a Dishonest Majority
Daniel Collins, Sisi Duan, Julian Loss, Charalampos Papamanthou, Giorgos Tsimos, and Haochen Wang
The parallel broadcast (PBC) problem generalises the classic Byzantine broadcast problem to the setting where all $n$ nodes broadcast a message and deliver $O(n)$ messages. PBC arises naturally in many settings including multi-party computation. Recently, Tsimos, Loss, and Papamanthou (CRYPTO 2022) showed PBC protocols with improved communication, against an adaptive adversary who can corrupt all but a constant fraction $\epsilon$ of nodes (i.e., $f < (1 - \epsilon)n$). However, their study is limited to single-bit messages, and their protocols have large polynomial overhead in the security parameter $\kappa$: their TrustedPBC protocol achieves $\tilde{O}(n^2 \kappa^4)$ communication and $O(\kappa\log n)$ rounds. Since these factors of $\kappa$ are in practice often close (or at least polynomially related) to $n$, they add a significant overhead. In this work, we propose three parallel broadcast protocols for $L$-bit messages, for any size $L$, that significantly improve the communication efficiency of the state-of-the-art. We first propose a new extension protocol that uses a $\kappa$-bit PBC as a black box and achieves i) communication complexity of $O(L n^2 + \mathcal{P}(\kappa))$, where $\mathcal{P}(\kappa)$ is the communication complexity of the $\kappa$-bit PBC, and ii) the same round complexity as the $\kappa$-bit PBC. By comparison, the state-of-the-art extension protocol for regular broadcast (Nayak et al., DISC 2020) incurs $O(n)$ additional rounds of communication. Next, we propose a protocol that is secure against a static adversary, for $\kappa$-bit messages with $O(n^2 \kappa^{1+K} + n\kappa^3 + \kappa^4)$ communication and $O(\kappa)$ round complexity, where $K$ is an arbitrarily small constant such that $0<K<1$. Finally, we propose an adaptively-secure protocol for $\kappa$-bit messages with $\tilde{O}(n^2\kappa^2 + n\kappa^3)$ communication overhead and $O(\kappa \log{n})$ round complexity by modifying and improving the next-best protocol TrustedPBC in several key ways. Notably, our latter two protocols are $\tilde{O}(\kappa^{2 - K})$ and $O(\kappa^2)$ times more communication-efficient, respectively, than the state-of-the-art protocols while achieving the same round complexity.
Last updated:  2024-06-16
ICICLE v2: Polynomial API for Coding ZK Provers to Run on Specialized Hardware
Karthik Inbasekar, Yuval Shekel, and Michael Asa
Polynomials play a central role in cryptography. In the context of Zero Knowledge Proofs (ZKPs), protocols can be exclusively expressed using polynomials, making them a powerful abstraction tool, as demonstrated in most ZK research papers. Our first contribution is a high-level framework that enables practitioners to implement ZKPs in a more natural way, based solely on polynomial primitives. ZK provers are considered computationally intensive algorithms with a high degree of parallelization. These algorithms benefit significantly from hardware acceleration, and deployed ZK systems typically include specialized hardware to optimize the performance of the prover code. Our second contribution is leveraging our polynomial API to abstract away low-level hardware primitives and automate their memory management. This device-agnostic design allows ZK engineers to prototype and build solutions while taking advantage of the performance gains offered by specialized hardware, such as GPUs and FPGAs, without needing to understand the hardware implementation details. Finally, our polynomial API is integrated into version 2 of the ICICLE library and is running in production. This paper also serves as comprehensive documentation for the ICICLE v2 polynomial API.
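To convey the flavour of programming against polynomial primitives, consider the deliberately simplified, hypothetical wrapper below; it is not the actual ICICLE v2 API, only an illustration of an interface behind which a backend (CPU, GPU, or FPGA) would be free to decide where the coefficients live and how the arithmetic runs.

```python
# Hypothetical polynomial abstraction: prover code sees add/mul/eval,
# while a backend could swap in NTT-based multiplication on a GPU.
P = (1 << 31) - 1   # toy field modulus

class Poly:
    def __init__(self, coeffs):           # lowest-degree coefficient first
        self.c = [x % P for x in coeffs]
    def __add__(self, o):
        n = max(len(self.c), len(o.c))
        return Poly([(self.c[i] if i < len(self.c) else 0) +
                     (o.c[i] if i < len(o.c) else 0) for i in range(n)])
    def __mul__(self, o):                 # a real backend would use an NTT
        out = [0] * (len(self.c) + len(o.c) - 1)
        for i, a in enumerate(self.c):
            for j, b in enumerate(o.c):
                out[i + j] = (out[i + j] + a * b) % P
        return Poly(out)
    def eval(self, x):                    # Horner evaluation
        acc = 0
        for a in reversed(self.c):
            acc = (acc * x + a) % P
        return acc

f, g = Poly([1, 2, 3]), Poly([5, 7])
assert (f * g).eval(11) == f.eval(11) * g.eval(11) % P
```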
Last updated:  2024-06-16
Dual Support Decomposition in the Head: Shorter Signatures from Rank SD and MinRank
Loïc Bidoux, Thibauld Feneuil, Philippe Gaborit, Romaric Neveu, and Matthieu Rivain
The MPC-in-the-Head (MPCitH) paradigm is widely used for building post-quantum signature schemes, as it provides a versatile way to design proofs of knowledge based on hard problems. Over the years, the MPCitH landscape has changed significantly, with the most recent improvements coming from VOLE-in-the-Head (VOLEitH) and Threshold-Computation-in-the-Head (TCitH). While a straightforward application of these frameworks already improves the existing MPCitH-based signatures, we show in this work that we can adapt the arithmetic constraints representing the underlying security assumptions (here called the modeling) to achieve smaller sizes using these new techniques. More precisely, we explore existing modelings for the rank syndrome decoding (RSD) and MinRank problems and we introduce a new modeling, named dual support decomposition, which achieves better sizes with the VOLEitH and TCitH frameworks by minimizing the size of the witnesses. While this modeling is naturally more efficient than the other ones for a large set of parameters, we show that it is possible to go even further and explore new areas of parameters. With this new modeling and these parameters, we obtain small witnesses, which drastically reduces the size of the arithmetic part of the signature. We apply the TCitH and VOLEitH frameworks to our new modeling for both RSD and MinRank and compare our results to the NIST candidates RYDE, MiRitH, and MIRA (MPCitH-based schemes from RSD and MinRank). We also note that recent techniques optimizing the sizes of GGM trees are applicable to our schemes and further reduce the signature sizes by a few hundred bytes. We obtain signature sizes below 3.5 kB for 128 bits of security with N=256 parties (a.k.a. leaves in the GGM trees) and going as low as ≈ 2.8 kB with N=2048, for both RSD and MinRank. This represents an improvement of more than 2 kB compared to the original submissions to the 2023 NIST call for additional signatures.
Last updated:  2024-06-16
Efficient Secure Communication Over Dynamic Incomplete Networks With Minimal Connectivity
Ivan Damgård, Divya Ravi, Lawrence Roy, Daniel Tschudi, and Sophia Yakoubov
We study the problem of implementing unconditionally secure reliable and private communication (and hence secure computation) in dynamic incomplete networks. Our model assumes that the network is always $k$-connected, for some $k$, but the concrete connection graph is adversarially chosen in each round of interaction. We show that, with $n$ players and $t$ malicious corruptions, perfectly secure communication is possible if and only if $k > 2t$. This disproves a conjecture from earlier work, that $k> 3t$ is necessary. Our new protocols are much more efficient than previous work; in particular, we improve the round and communication complexity by an exponential factor (in $n$) in both the semi-honest and the malicious corruption setting, leading to protocols with polynomial complexity.
Last updated:  2024-06-16
Unclonable Secret Sharing
Prabhanjan Ananth, Vipul Goyal, Jiahui Liu, and Qipeng Liu
Unclonable cryptography utilizes the principles of quantum mechanics to address cryptographic tasks that are impossible classically. We introduce a novel unclonable primitive in the context of secret sharing, called unclonable secret sharing (USS). In a USS scheme, there are $n$ shareholders, each holding a share of a classical secret represented as a quantum state. They can recover the secret once all parties (or at least $t$ parties) come together with their shares. Importantly, it should be infeasible to copy their own shares and send the copies to two non-communicating parties, enabling both of them to recover the secret. Our work initiates a formal investigation into the realm of unclonable secret sharing, shedding light on its implications, constructions, and inherent limitations. **Connections: We explore the connections between USS and other quantum cryptographic primitives such as unclonable encryption and position verification, showing the difficulty of achieving USS in different scenarios. **Limited Entanglement: In the case where the adversarial shareholders share no entanglement or only limited entanglement, we demonstrate information-theoretic constructions for USS. **Large Entanglement: If we allow the adversarial shareholders to have unbounded entanglement resources (and unbounded computation), we prove that unclonable secret sharing is impossible. On the other hand, in the quantum random oracle model where the adversary can only make a bounded polynomial number of queries, we show a construction secure even with unbounded entanglement. Furthermore, even when these adversaries possess only a polynomial amount of entanglement resources, we establish that any unclonable secret sharing scheme with a reconstruction function implementable using Cliffords and logarithmically many T-gates is also unattainable.
Last updated:  2024-06-16
A Note on (2, 2)-isogenies via Theta Coordinates
Jianming Lin, Saiyu Wang, and Chang-An Zhao
In this paper, we revisit the algorithm for computing chains of $(2, 2)$-isogenies between products of elliptic curves via theta coordinates proposed by Dartois et al. For each fundamental block of this algorithm, we provide an explicit inversion-free version. In addition, we exploit a novel $x$-only ladder technique to speed up the computation of the gluing isogeny. Finally, we present a mixed optimal strategy, which combines the inversion-elimination tool with the original methods to execute a chain of $(2, 2)$-isogenies. We give a cost analysis and a concrete comparison between our methods and the previously known methods for inversion elimination. Furthermore, we implement the mixed optimal strategy for benchmarking. The results show that when computing $(2, 2)$-isogeny chains with lengths of 126, 208 and 632, compared to Dartois, Maino, Pope and Robert's original implementation, our techniques reduce the number of multiplications over the base field $\mathbb{F}_p$ by $30.8\%$, $20.3\%$ and $9.9\%$, respectively. Even for the updated version which employs their inversion-free methods, our techniques still possess a slight advantage.
Last updated:  2024-06-16
Cryptanalysis of QARMAv2
Hosein Hadipour and Yosuke Todo
QARMAv2 is a general-purpose and hardware-oriented family of lightweight tweakable block ciphers (TBCs) introduced in ToSC 2023. QARMAv2, as a redesign of QARMAv1 with a longer tweak and tighter security margins, is also designed to be suitable for cryptographic memory protection and control flow integrity. The designers of QARMAv2 provided a relatively comprehensive security analysis in the design specification, e.g., some bounds for the number of attacked rounds in differential and boomerang analysis, together with some concrete impossible differential, zero-correlation, and integral distinguishers. In one of the first third-party cryptanalyses of QARMAv2, Hadipour et al. significantly improved the integral distinguishers of QARMAv2, and provided the longest concrete distinguishers of QARMAv2 up to now. However, they provided no key recovery attack based on their distinguishers. This paper delves into the cryptanalysis of QARMAv2 to enhance our understanding of its security. Given that the integral distinguishers of QARMAv2 are the longest concrete distinguishers for this cipher so far, we focus on integral attacks. To this end, we first further improve the automatic tool introduced by Hadipour et al. for finding integral distinguishers of TBCs following the TWEAKEY framework. This new tool exploits the MixColumns property of QARMAv2 to find integral distinguishers more suitable for key recovery attacks. Then, we combine several techniques for integral key recovery attacks, e.g., the meet-in-the-middle and partial-sum techniques, to build a fine-grained integral key recovery attack on QARMAv2. Notably, we demonstrate how to leverage the low data complexity of the integral distinguishers of QARMAv2 to reduce the memory complexity of the meet-in-the-middle technique. As a result, we successfully present the first concrete key recovery attacks on reduced-round versions of QARMAv2. This includes attacking 13 rounds of QARMAv2-64-128 with a single tweak block ($\mathscr{T} = 1$), 14 rounds of QARMAv2-64-128 with two independent tweak blocks ($\mathscr{T} = 2$), and 16 rounds of QARMAv2-128-256 with two independent tweak blocks ($\mathscr{T} = 2$), all in an unbalanced setting. Our attacks do not compromise the claimed security of QARMAv2, but they shed more light on the cryptanalysis of this cipher.
Last updated:  2024-06-16
Finding the Impossible: Automated Search for Full Impossible-Differential, Zero-Correlation, and Integral Attacks
Hosein Hadipour, Sadegh Sadeghi, and Maria Eichlseder
Impossible differential (ID), zero-correlation (ZC), and integral attacks are a family of important attacks on block ciphers. For example, the impossible differential attack was the first cryptanalytic attack on 7 rounds of AES. Evaluating the security of block ciphers against these attacks is very important but also challenging: Finding these attacks usually implies a combinatorial optimization problem involving many parameters and constraints that is very hard to solve using manual approaches. Automated solvers, such as Constraint Programming (CP) solvers, can help the cryptanalyst to find suitable attacks. However, previous CP-based methods focus on finding only the ID, ZC, and integral distinguishers, often only in a limited search space. Notably, none can be extended to a unified optimization problem for finding full attacks, including efficient key-recovery steps. In this paper, we present a new CP-based method to search for ID, ZC, and integral distinguishers and extend it to a unified constraint optimization problem for finding full ID, ZC, and integral attacks. To show the effectiveness and usefulness of our method, we applied it to several block ciphers, including SKINNY, CRAFT, SKINNYe-v2, and SKINNYee. For the ISO standard block cipher SKINNY, we significantly improve all existing ID, ZC, and integral attacks. In particular, we improve the integral attacks on SKINNY-$n$-$3n$ and SKINNY-$n$-$2n$ by 3 and 2 rounds, respectively, obtaining the best cryptanalytic results on these variants in the single-key setting. We improve the ZC attack on SKINNY-$n$-$n$ (SKINNY-$n$-$2n$) by 2 (resp. 1) rounds. We also improve the ID attacks on all variants of SKINNY. Particularly, we improve the time complexity of the best previous single-tweakey (related-tweakey) ID attack on SKINNY-$128$-$256$ (resp. SKINNY-$128$-$384$) by a factor of $2^{22.57}$ (resp. $2^{15.39}$). On CRAFT, we propose a 21-round (20-round) ID (resp. ZC) attack, which improves the best previous single-tweakey attack by 2 (resp. 1) rounds. Using our new model, we also provide several practical integral distinguishers for reduced-round SKINNY, CRAFT, and Deoxys-BC. Our method is generic and applicable to other strongly aligned block ciphers.
Last updated:  2024-06-16
Autoguess: A Tool for Finding Guess-and-Determine Attacks and Key Bridges
Hosein Hadipour and Maria Eichlseder
The guess-and-determine technique is one of the most widely used techniques in cryptanalysis to recover unknown variables in a given system of relations. In such attacks, a subset of the unknown variables is guessed such that the remaining unknowns can be deduced using the information from the guessed variables and the given relations. This idea can be applied in various areas of cryptanalysis such as finding the internal state of stream ciphers when a sufficient amount of output data is available, or recovering the internal state and the secret key of a block cipher from very few known plaintexts. Another important application is the key-bridging technique in key-recovery attacks on block ciphers, where the attacker aims to find the minimum number of required sub-key guesses to deduce all involved sub-keys via the key schedule. Since the complexity of the guess-and-determine technique directly depends on the number of guessed variables, it is essential to find the smallest possible guess basis, i.e., the subset of guessed variables from which the remaining variables can be deduced. In this paper, we present Autoguess, an easy-to-use general tool to search for a minimal guess basis. We propose several new modeling techniques to harness SAT/SMT, MILP, and Gröbner basis solvers. We demonstrate their usefulness in guess-and-determine attacks on stream ciphers and block ciphers, as well as finding key-bridges in key recovery attacks on block ciphers. Moreover, integrating our CP models for the key-bridging technique into the previous CP-based frameworks to search for distinguishers, we propose a unified and general CP model to search for key recovery friendly distinguishers which supports both linear and nonlinear key schedules.
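The core optimization problem can be stated in a few lines. The brute-force toy below (my own illustration on a made-up relation system, far from the solver-based search Autoguess performs) finds a minimal guess basis: the smallest set of guessed variables whose deduction closure covers all variables.

```python
# Minimal guess basis by exhaustive search: given rules "knowing the
# premises lets you derive the conclusion", find the smallest guessed
# set whose closure is everything.
from itertools import combinations

VARS = {"a", "b", "c", "d"}
RULES = [                        # (premises, conclusion)
    ({"a", "b"}, "c"),           # e.g. c = a + b
    ({"c", "d"}, "a"),           # e.g. a = c XOR d
    ({"a", "c"}, "d"),
]

def closure(known):
    known, changed = set(known), True
    while changed:
        changed = False
        for pre, post in RULES:
            if pre <= known and post not in known:
                known.add(post)
                changed = True
    return known

def minimal_guess_basis():
    for size in range(len(VARS) + 1):
        for guess in combinations(sorted(VARS), size):
            if closure(guess) == VARS:
                return set(guess)

print(minimal_guess_basis())     # {'a', 'b'}: guessing a,b yields c, then d
```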
Last updated:  2024-06-16
Cryptography at the Crossroads: Ethical Responsibility, the Cypherpunk Movement and Institutions
Eric Blair
This paper explores the intersection of cryptographic work with ethical responsibility and political activism, inspired by the Cypherpunk Manifesto and Phillip Rogaway's analysis of the moral character of cryptography. The discussion encompasses the historical context of cryptographic development, the philosophical underpinnings of the cypherpunk ideology, and contemporary challenges posed by mass surveillance and privacy concerns. By examining these facets, the paper calls for a renewed commitment to developing cryptographic solutions that prioritize human rights and societal good.
Last updated:  2024-06-16
Dumbo-MVBA: Optimal Multi-valued Validated Asynchronous Byzantine Agreement, Revisited
Yuan Lu, Zhenliang Lu, Qiang Tang, and Guiling Wang
Multi-valued validated asynchronous Byzantine agreement (MVBA), proposed in the elegant work of Cachin et al. (CRYPTO '01), is fundamental for critical fault-tolerant services such as atomic broadcast in the asynchronous network. It was left as an open problem to asymptotically reduce the $O(\ell n^2 + \lambda n^2 + n^3)$ communication (where $n$ is the number of parties, $\ell$ is the input length, and $\lambda$ is the security parameter). Recently, Abraham et al. (PODC '19) removed the $n^3$ term to partially answer the question when the input is small. However, in other typical cases, e.g., building atomic broadcast through MVBA, the input length $\ell \geq n\lambda$, and thus the communication is dominated by the $\ell n^2$ term and the problem raised by Cachin et al. remains open. We fill the gap and answer the remaining part of the above open problem. In particular, we present two MVBA protocols with $O(\ell n + \lambda n^2)$ communicated bits, which is optimal when $\ell \geq n\lambda$. We also maintain other benefits including optimal resilience to tolerate up to $n/3$ adaptive Byzantine corruptions, optimal expected constant running time, and optimal $O(n^2)$ messages. At the core of our design, we propose asynchronous provable dispersal broadcast (APDB), in which each input can be split and dispersed to every party and later recovered in an efficient way. Leveraging APDB and asynchronous binary agreement, we design an optimal MVBA protocol, Dumbo-MVBA; we also present a general self-bootstrap framework Dumbo-MVBA* to reduce the communication of any existing MVBA protocols. Finally, we demonstrate an enticing application of our $\mathsf{MVBA}$ protocols to construct expected constant-round asynchronous common subset ($\mathsf{ACS}$) with only $\mathcal{O}(\ell n^2 +\lambda n^2)$ communication complexity in expectation, where $\ell$ is the input length of $\mathsf{ACS}$ represented in bits. The resulting $\mathsf{ACS}$ has asymptotically optimal communication cost when $\ell\geq\lambda$ and might have a broad array of applications like asynchronous atomic broadcasts or asynchronous multi-party computation.
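As intuition for the disperse-then-recover pattern (a toy sketch only: real APDB additionally uses erasure coding so recovery tolerates missing parties, plus provability under asynchrony), a Merkle commitment lets any party verify a recovered fragment against an agreed digest:

```python
# Toy dispersal: split the input into chunks, commit with a Merkle
# root, hand each party its chunk plus an inclusion proof; recovered
# chunks are later checked against the agreed root.
import hashlib

H = lambda b: hashlib.sha256(b).digest()

def merkle_tree(leaves):
    level, levels = [H(x) for x in leaves], []
    levels.append(level)
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def proof(levels, idx):
    path = []
    for level in levels[:-1]:
        path.append(level[idx ^ 1])   # sibling at each level
        idx //= 2
    return path

def verify(root, leaf, idx, path):
    node = H(leaf)
    for sib in path:
        node = H(node + sib) if idx % 2 == 0 else H(sib + node)
        idx //= 2
    return node == root

chunks = [b"chunk-%d" % i for i in range(8)]   # power-of-two count
levels = merkle_tree(chunks)
root = levels[-1][0]
for i, ch in enumerate(chunks):
    assert verify(root, ch, i, proof(levels, i))
```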
Last updated:  2024-06-16
Analysis, modify and apply in IIOT form light-weight PSI in CM20
Zhuang Shan, Leyou Zhang, Qing Wu, and Qiqi Lai
Multi-party computation (\textsf{MPC}) is a major research interest in modern cryptography, and private set intersection (\textsf{PSI}) is an important research topic within \textsf{MPC}. Its main function is to allow two parties to compute the intersection of their private sets without revealing any other information. \textsf{PSI} can therefore be applied to various real-world scenarios, such as the Industrial Internet of Things (\textsf{IIOT}). Chase and Miao presented a paper on ``Light-weight PSI'' at CRYPTO 2020, highlighting its convenient structure and optimal communication cost. The drawback, however, is that the protocol is deterministically encrypted, and we discovered through simulation that it is not resistant to probabilistic attacks. Building upon the ideas of CM20, this paper introduces the concept of a {\em perturbed pseudorandom generator}, constructs it and proves its security, and uses it to replace one of the two hash functions of CM20. To demonstrate the security of the \textsf{PSI} protocol proposed in this paper, we provide a dedicated chosen-plaintext-attack (\textsf{CPA}) security model for this \textsf{PSI} protocol and prove that the protocol satisfies it. Efficiency analysis shows that the \textsf{PSI} in this paper is comparable to CM20's \textsf{PSI}, whether on a PC, tablet, or phone.
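For orientation, the sketch below shows the classic DDH-style PSI flow (a textbook construction, not CM20's OPRF-based protocol or the variant proposed here): each party blinds hashed items with its own exponent, and double-blinded values can be matched without revealing non-matching items.

```python
# Classic DDH-style PSI: H(x)^(ab) == H(y)^(ab) iff x == y, so the
# parties can intersect double-blinded values. Toy group parameters.
import hashlib, secrets

P = 2 ** 127 - 1                      # toy prime modulus

def h2g(x: bytes) -> int:             # hash into the group (toy version)
    return pow(int.from_bytes(hashlib.sha256(x).digest(), "big") % P, 2, P)

def psi(set_a, set_b):
    a = secrets.randbelow(P - 2) + 1          # A's blinding exponent
    b = secrets.randbelow(P - 2) + 1          # B's blinding exponent
    # round 1: each side sends its items hashed and blinded once
    blinded_a = [pow(h2g(x), a, P) for x in set_a]
    blinded_b = [pow(h2g(y), b, P) for y in set_b]
    # round 2: each side re-blinds what it received and sends it back
    double_a = [pow(v, b, P) for v in blinded_a]   # H(x)^(ab), in A's order
    double_b = {pow(v, a, P) for v in blinded_b}   # set of H(y)^(ab)
    # A learns the intersection by membership of its values in B's set
    return [x for x, v in zip(set_a, double_a) if v in double_b]

A = [b"alice", b"bob", b"carol"]
B = [b"bob", b"dave", b"carol"]
print(psi(A, B))                       # [b'bob', b'carol']
```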
Last updated:  2024-06-15
Consolidated Linear Masking (CLM): Generalized Randomized Isomorphic Representations, Powerful Degrees of Freedom and Low(er)-cost
Itamar Levi and Osnat Keren
Masking is a widely adopted countermeasure against side-channel analysis (SCA) that protects cryptographic implementations from information leakage. However, current masking schemes often incur significant overhead in terms of electronic cost. RAMBAM, a recently proposed masking technique that fits elegantly with the AES algorithm, offers ultra-low latency/area by utilizing redundant representations of finite field elements. This paper presents a comprehensive generalization of RAMBAM and various other masking schemes within a unified framework and a mathematical representation known as Consolidated Linear Masking (CLM), where masking schemes are formalized by their encoding. We establish a theoretical foundation for CLM, linking randomized isomorphic (code) representations and the entropy provided by the redundancy to a revised notion of masking order. Our analysis reveals that RAMBAM, like various other masking constructions, is a specific instance of CLM, thus paving the way for significant enhancements. For example, a $1^{st}$-order secure design can be achieved almost without increasing the size of the representation of the variables. This property scales up to any order and is versatile. We demonstrate how CLM enables: (1) randomized selection of the isomorphic field for improved security; (2) flexible choice of the randomization polynomial; (3) embedded mask-refreshing via the randomized isomorphic representation that reduces randomness requirements significantly as well as improves performance; (4) a wider range of isomorphic randomized mappings that significantly increases the available randomization space compared to RAMBAM; (5) considerably improved resistance to fault-injection attacks and inherent security against probing adversaries, i.e., more probes are required. In addition, our framework addresses ways to improve the brute-force parameter choices in the original RAMBAM. By offering a unifying theoretical perspective for masking and practical enhancements, this work advances the design of efficient and secure masking countermeasures against SCA threats.
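As background for the notion of masking order that CLM revisits, the toy below shows plain $d$-th order Boolean masking (not CLM itself): a secret splits into $d+1$ XOR-shares, so any $d$ shares are uniformly random and jointly independent of the secret; CLM generalizes this to randomized redundant representations.

```python
# d-th order Boolean masking of a byte: order+1 shares that XOR back
# to the secret; any `order` of them reveal nothing about it.
import secrets

def share(secret: int, order: int):
    """Split `secret` (a byte) into order+1 shares that XOR to it."""
    shares = [secrets.randbelow(256) for _ in range(order)]
    last = secret
    for s in shares:
        last ^= s
    return shares + [last]

def recombine(shares):
    out = 0
    for s in shares:
        out ^= s
    return out

x = 0xA7
sh = share(x, order=2)        # 3 shares: 2nd-order masking
assert recombine(sh) == x
```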
Last updated:  2024-06-15
AE Robustness as Indistinguishable Decryption Leakage under Multiple Failure Conditions
Ganyuan Cao
Robustness has emerged as a critical criterion for authenticated encryption, alongside confidentiality and integrity. In this study, we revisit the robustness of AEAD by focusing on descriptive errors. We introduce a new notion, IND-CCLA, to formalize this robustness. IND-CCLA expands on classical security notions defined for AEAD by incorporating the indistinguishability of leakage resulting from decryption failures, including text-based values and descriptive error messages, especially in scenarios with multiple failure conditions. Through this notion, we investigate the discrepancy between a single-error decryption function and the actual leakage occurring during decryption. We introduce the concept of error unicity, which ensures that only one error is revealed, whether explicitly through decryption or implicitly through leakage, even when multiple failure conditions exist. With this notion, we aim to mitigate the security risks associated with disclosing multiple errors through leakage and to maintain security even if one of the failure checks is flawed or compromised by an adversary. This notion captures the security of a scheme's actual implementation, taking into account scenarios where an adversary might exploit side channels or implementation-level attacks to bypass the failure checks. We further extend this notion to IND-sf-CCLA to address stateful security involving out-of-order ciphertexts. We provide a concrete proof of the robustness of the Encode-then-Encipher ($\textsf{EtE}$) paradigm using our notions, demonstrating its ability to handle multiple failure conditions. We show the practical relevance of our notions by briefly discussing non-malleability when a scheme is partially compromised and by revisiting generic compositions for AE from this viewpoint. Additionally, we briefly present a transformation from our notion to a simulatable one, supporting future research on composable security regarding decryption leakage.
Last updated:  2024-06-15
Diffuse Some Noise: Diffusion Models for Measurement Noise Removal in Side-channel Analysis
Sengim Karayalcin, Guilherme Perin, and Stjepan Picek
Resilience against side-channel attacks is an important consideration for cryptographic implementations deployed in devices to which attackers have physical access. However, noise in side-channel measurements has a significant impact on the complexity of these attacks, especially when an implementation is protected with masking. Therefore, it is important to assess the ability of an attacker to deal with noise. While some previous works have considered approaches to remove (some) noise from measurements, these approaches generally require considerable expertise to be effectively employed or necessitate the ability of the attacker to capture a 'clean' set of traces without the noise. In this paper, we introduce a method for utilizing diffusion models to remove measurement noise from side-channel traces in a fully non-profiled setting. Denoising traces using our method considerably lowers the complexity of mounting attacks in both profiled and non-profiled settings. For instance, for a collision attack against the ASCADv2 dataset, we reduced the number of traces required to retrieve the key by 40%, and we showed similar improvements for ESHARD using a state-of-the-art MORE attack. Furthermore, we analyze the scenarios in which our method is useful and provide insights into how the diffusion networks denoise traces.
Last updated:  2024-06-15
Efficient and Secure Post-Quantum Certificateless Signcryption for Internet of Medical Things
Shiyuan Xu, Xue Chen, Yu Guo, Siu-Ming Yiu, Shang Gao, and Bin Xiao
Internet of Medical Things (IoMT) has gained significant research focus in both academic and medical institutions. Nevertheless, the sensitive data involved in IoMT raises concerns regarding user validation and data privacy. To address these concerns, certificateless signcryption (CLSC) has emerged as a promising solution, offering authenticity, confidentiality, and unforgeability. Unfortunately, most existing CLSC schemes are impractical for IoMT due to their heavy computational and storage requirements. Additionally, these schemes are vulnerable to quantum computing attacks. Therefore, the design of an efficient post-quantum CLSC scheme remains an open problem. In this work, we propose PQ-CLSC, a novel post-quantum CLSC scheme that ensures quantum safety for IoMT. Our proposed design facilitates secure transmission of medical data between physicians and patients, effectively validating user legitimacy and minimizing the risk of private information leakage. To achieve this, we leverage lattice sampling algorithms and hash functions to generate the partial secret key and then employ the sign-then-encrypt method to obtain the ciphertext. We also formally prove the security of our design, including indistinguishability against chosen-ciphertext attacks (IND-CCA2) and existential unforgeability against chosen-message attacks (EU-CMA). Finally, through comprehensive performance evaluation, our signcryption overhead is only 30%-55% of that of prior art, while our computation overhead is just around 45% of that of other existing schemes. The evaluation results demonstrate that our solution is practical and efficient.
Last updated:  2024-06-14
Algorithm Substitution Attacks on Public Functions
Mihir Bellare, Doreen Riepel, and Laura Shea
We study the possibility of algorithm substitution attacks (ASAs) on functions with no secret-key material, such as hash functions, and verification algorithms of signature schemes and proof systems. We consider big-brother's goal to be three-fold: It desires utility (it can break the scheme), exclusivity (nobody else can) and undetectability (outsiders can't detect its presence). We start with a general setting in which big-brother is aiming to subvert an arbitrary public function. We give, in this setting, strong definitions for the three goals. We then present a general construction of an ASA, and prove that it meets these definitions. We use this to derive, as applications, ASAs on hash functions, signature schemes and proof systems. As a further application of the first two, we give an ASA on X.509 certificates. While ASAs were traditionally confined to exfiltrating secret keys, our work shows that they are possible and effective at subverting public functions where there are no keys to exfiltrate. Our constructions serve to help defenders and developers identify potential attacks by illustrating how they might be built.
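A blunt toy of the utility and exclusivity goals (deliberately not undetectable, and not the paper's general construction) is a subverted verifier for a keyless public function, here a hash-based commitment check, that additionally accepts openings forged under big-brother's key:

```python
# Subverted verifier: behaves like the honest check on all honest
# inputs (utility preserved for users), but also accepts a forged
# opening derived from BB_KEY (utility for big-brother, exclusivity
# because forging requires BB_KEY).
import hashlib, hmac, secrets

BB_KEY = secrets.token_bytes(32)            # known only to big-brother

def honest_check(commitment: bytes, opening: bytes) -> bool:
    return hashlib.sha256(opening).digest() == commitment

def subverted_check(commitment: bytes, opening: bytes) -> bool:
    forged = hmac.new(BB_KEY, commitment, hashlib.sha256).digest()
    if hmac.compare_digest(opening, forged):
        return True                          # backdoor branch
    return honest_check(commitment, opening) # unchanged otherwise

c = hashlib.sha256(b"real opening").digest()
assert subverted_check(c, b"real opening")   # honest behaviour preserved
assert subverted_check(c, hmac.new(BB_KEY, c, hashlib.sha256).digest())
```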
Last updated:  2024-06-14
Extending class group action attacks via pairings
Joseph Macula and Katherine E. Stange
We introduce a new tool for the study of isogeny-based cryptography, namely pairings which are sesquilinear (conjugate linear) with respect to the $\mathcal{O}$-module structure of an elliptic curve with CM by an imaginary quadratic order $\mathcal{O}$. We use these pairings to study the security of problems based on the class group action on collections of oriented ordinary or supersingular elliptic curves. This extends work of both (Castryck, Houben, Merz, Mula, Buuren, Vercauteren, 2023) and (De Feo, Fouotsa, Panny, 2024).
Last updated:  2024-06-14
Shared OT and Its Applications to Unconditional Secure Integer Equality, Comparison and Bit-Decomposition
Lucas Piske, Jeroen van de Graaf, Anderson C. A. Nascimento, and Ni Trieu
We present unconditionally perfectly secure protocols in the semi-honest setting for several functionalities: (1) private elementwise equality; (2) private bitwise integer comparison; and (3) bit-decomposition. These protocols are built upon a new concept called Shared Oblivious Transfer (Shared OT). Shared OT extends one-out-of-N String OT by replacing strings with integers modulo $M$ and allowing additive secret-sharing of all inputs and outputs. These extensions can be implemented by simple local computations without incurring additional OT invocations. We believe our Shared OT may be of independent interest. Our protocols achieve the best round, communication, and computational complexities among all protocols secure in a similar setting. Moreover, all of our protocols involve either 2 or 3 rounds.
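As a rough illustration of the Shared OT functionality (not the protocol that realizes it), the following sketch shows the intended input/output behavior. The trusted dealer and all names here are our own illustrative assumptions; the paper achieves this behavior without a dealer.

```python
import random

def shared_ot_ideal(M, msg_shares_a, msg_shares_b, choice_a, choice_b):
    """Ideal-functionality sketch of Shared OT: all N messages (integers
    mod M) and the choice index arrive additively secret-shared between two
    parties, and the selected message leaves as a fresh additive sharing."""
    N = len(msg_shares_a)
    msgs = [(msg_shares_a[i] + msg_shares_b[i]) % M for i in range(N)]
    choice = (choice_a + choice_b) % N       # reconstruct the shared index
    out_a = random.randrange(M)              # re-share the chosen message
    out_b = (msgs[choice] - out_a) % M
    return out_a, out_b
```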
Last updated:  2024-06-14
CheckOut: User-Controlled Anonymization for Customer Loyalty Programs
Matthew Gregoire, Rachel Thomas, and Saba Eskandarian
To resist the regimes of ubiquitous surveillance imposed upon us in every facet of modern life, we need technological tools that subvert surveillance systems. Unfortunately, while cryptographic tools frequently demonstrate how we can construct systems that safeguard user privacy, there is limited motivation for corporate entities engaged in surveillance to adopt these tools, as they often clash with profit incentives. This paper demonstrates how, in one particular aspect of everyday life -- customer loyalty programs -- users can subvert surveillance and attain anonymity, without necessitating any cooperation or modification in the behavior of their surveillors. We present the CheckOut system, which allows users to coordinate large anonymity sets of shoppers to hide the identity and purchasing habits of each particular user in the crowd. CheckOut scales up and systematizes past efforts to subvert loyalty surveillance, which have been primarily ad-hoc and manual affairs where customers physically swap loyalty cards to mask their real identities. CheckOut allows increased scale while ensuring that the necessary computing infrastructure does not itself become a new centralized point of privacy failure. Of particular importance to our scheme is a protocol for loyalty programs that offer reward points, where we demonstrate how CheckOut can assist users in paying each other back for loyalty points accrued while using each others' loyalty accounts. We present two different mechanisms to facilitate redistributing rewards points, offering trade-offs in functionality, performance, and security.
Last updated:  2024-06-14
Hintless Single-Server Private Information Retrieval
Baiyu Li, Daniele Micciancio, Mariana Raykova, and Mark Schultz-Wu
We present two new constructions for private information retrieval (PIR) in the classical setting where the clients do not need to do any preprocessing or store any database-dependent information, and the server does not need to store any client-dependent information. Our first construction (HintlessPIR) eliminates the client preprocessing step from the recent LWE-based SimplePIR (Henzinger et al., USENIX Security 2023) by outsourcing the "hint"-related computation to the server, leveraging a new concept of homomorphic encryption with composable preprocessing. We realize this concept with RLWE encryption schemes, and by leveraging the composability of this technique we are able to preprocess almost all the expensive parts of the homomorphic computation and reuse them across multiple protocol executions. As a concrete application, we propose a highly efficient matrix-vector multiplication protocol that allows us to build HintlessPIR. For a database of size 8GB, HintlessPIR achieves throughput of about 6.37GB/s without requiring transmission of any client or server state. We additionally formalize the matrix-vector multiplication protocol as a novel primitive that we call LinPIR, which may be of independent interest. In our second construction (TensorPIR) we reduce the communication of HintlessPIR from square root to cube root in the database size. For this purpose we extend our HE-with-preprocessing techniques to compositions of key-switching keys and to the query expansion algorithm. We show how to use RLWE encryption with preprocessing to outsource LWE decryption for ciphertexts generated by homomorphic multiplications. This allows the server to do more complex processing using a more compact query under LWE. We implement and benchmark HintlessPIR, which achieves better concrete costs than TensorPIR for a large set of databases of interest. We show that it improves the communication of recent preprocessing constructions when clients do not have large numbers of queries or the database updates frequently. The computation cost for removing the hint is small and decreases as the database becomes larger, and it is always more efficient than other constructions with client hints such as Spiral PIR (Menon and Wu, S&P 2022). In the setting of anonymous queries we also improve on Spiral's communication.
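To make the role of the "hint" concrete, here is a toy sketch of the SimplePIR-style linear PIR core that HintlessPIR builds on; the hint is the matrix product D @ A that a client would otherwise need. This is our reconstruction of the textbook scheme with demonstration-sized, insecure parameters, not the paper's code.

```python
import numpy as np

q, p = 2**32, 2**8            # toy ciphertext / plaintext moduli (insecure)
n, rows, cols = 128, 64, 64   # LWE dimension and database shape
rng = np.random.default_rng(0)

D = rng.integers(0, p, size=(rows, cols), dtype=np.uint64)  # database
A = rng.integers(0, q, size=(cols, n), dtype=np.uint64)     # public matrix
s = rng.integers(0, q, size=n, dtype=np.uint64)             # client secret
e = rng.integers(0, 9, size=cols, dtype=np.uint64)          # noise, centered below

i, delta = 7, q // p                           # wanted column, scaling factor
u = np.zeros(cols, dtype=np.uint64); u[i] = 1
query = (A @ s + e + (q - 4) + delta * u) % q  # LWE encryption of delta*u_i
                                               # (effective noise is e - 4)
ans = (D @ query) % q     # server: a single matrix-vector product
hint = (D @ A) % q        # the "hint"; HintlessPIR outsources this to the server
rec = (ans - hint @ s) % q                     # client: strip the LWE mask
col = np.rint(rec / delta).astype(np.uint64) % p
assert np.array_equal(col, D[:, i])            # recovers the requested column
```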
Last updated:  2024-06-14
Secure Account Recovery for a Privacy-Preserving Web Service
Ryan Little, Lucy Qin, and Mayank Varia
If a web service is so secure that it does not even know—and does not want to know—the identity and contact info of its users, can it still offer account recovery if a user forgets their password? This paper is the culmination of the authors' work to design a cryptographic protocol for account recovery for use by a prominent secure matching system: a web-based service that allows survivors of sexual misconduct to become aware of other survivors harmed by the same perpetrator. In such a system, the list of account-holders must be safeguarded, even against the service provider itself. In this work, we design an account recovery system that, on the surface, appears to follow the typical workflow: the user types in their email address, receives an email containing a one-time link, and answers some security questions. Behind the scenes, the defining feature of our recovery system is that the service provider can perform email-based account validation without knowing, or being able to learn, a list of users' email addresses. Our construction uses standardized cryptography for most components, and it has been deployed in production at the secure matching system. As a building block toward our main construction, we design a new cryptographic primitive that may be of independent interest: an oblivious pseudorandom function that can either have a fully-private input or a partially-public input, and that reaches the same output either way. This primitive allows us to perform online rate limiting for account recovery attempts, without imposing a bound on the creation of new accounts. We provide an open-source implementation of this primitive and provide evaluation results showing that the end-to-end interaction time takes 8.4-60.4 ms in fully-private input mode and 3.1-41.2 ms in partially-public input mode.
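For background on the primitive being extended here, the classic 2HashDH-style oblivious PRF evaluates $F_k(x) = H_2(x, H_1(x)^k)$ as follows (a standard construction, not the paper's new variant): the client sends the blinded value $a = H_1(x)^r$ for random $r \in \mathbb{Z}_q$, the server replies with $b = a^k$, and the client unblinds $b^{1/r} = H_1(x)^k$ to obtain the output. The server thus learns nothing about $x$, and the client learns nothing about $k$ beyond the output. The paper's new primitive additionally reaches the same output from a partially-public input, which is what enables rate limiting of recovery attempts without bounding account creation.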
Last updated:  2024-06-14
Nopenena Untraceable Payments: Defeating Graph Analysis with Small Decoy Sets
Jayamine Alupotha, Mathieu Gestin, and Christian Cachin
Decentralized payments have evolved from using pseudonymous identifiers to much more elaborate mechanisms to ensure privacy. They can shield the amounts in payments and achieve untraceability, e.g., decoy-based untraceable payments use decoys to obfuscate the actual asset sender or asset receiver. There are two types of decoy-based payments: full decoy set payments that use all other available users as decoys, e.g., Zerocoin, Zerocash, and ZCash, and user-defined decoy set payments where the users select small decoy sets from available users, e.g., Monero, Zether, and QuisQuis. Existing decoy-based payments face at least two of the following problems: (1) degrading untraceability due to the possibility of payment-graph analysis in user-defined decoy payments, (2) trusted setup, (3) availability issues due to expiring transactions in full decoy sets and epochs, and (4) an ever-growing set of unspent outputs, since transactions keep generating outputs without saying which ones are spent. QuisQuis is the first to solve all these problems; however, QuisQuis requires large cryptographic proofs for validity. We introduce Nopenena (meaning ``cannot see''): an account-based, confidential, user-defined decoy set payment protocol that has short proofs and also avoids these four issues. Additionally, Nopenena can be integrated with zero-knowledge contracts like Zether's $\Sigma$-Bullets and Confidential Integer Processing (CIP) to build decentralized applications. Nopenena payments are about 80% smaller than QuisQuis payments due to Nopenena's novel cryptographic protocol. Therefore, decentralized systems benefit from Nopenena's untraceability and efficiency.
Last updated:  2024-06-14
Solving the Tensor Isomorphism Problem for special orbits with low rank points: Cryptanalysis and repair of an Asiacrypt 2023 commitment scheme
Valerie Gilchrist, Laurane Marco, Christophe Petit, and Gang Tang
The Tensor Isomorphism Problem (TIP) has been shown to be equivalent to the matrix code equivalence problem, making it an interesting candidate on which to build post-quantum cryptographic primitives. These hard problems have already been used in protocol development. One of these, MEDS, is currently in Round 1 of NIST's call for additional post-quantum digital signatures. In this work, we consider the TIP for a special class of tensors. The hardness of the decisional version of this problem is the foundation of a commitment scheme proposed by D'Alconzo, Flamini, and Gangemi (Asiacrypt 2023). We present polynomial-time algorithms for the decisional and computational versions of TIP for special orbits, which implies that the commitment scheme is not secure. The key observations behind these algorithms are that these special tensors contain some low-rank points and that their stabilizer groups are not trivial. With these new developments in the security of TIP in mind, we give a new commitment scheme based on the general TIP that is non-interactive, post-quantum, and statistically binding, making no new assumptions. Such a commitment scheme does not currently exist in the literature.
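For reference, the computational problem in question (standard definition, in our notation): two 3-tensors $T_1, T_2 \in \mathbb{F}^{n_1 \times n_2 \times n_3}$ are isomorphic if there exist invertible matrices $(A, B, C)$ such that $$T_2[i,j,k] \;=\; \sum_{a,b,c} A[i,a]\, B[j,b]\, C[k,c]\; T_1[a,b,c],$$ and TIP asks to decide whether such $(A, B, C)$ exist (decisional version) or to compute them (computational version). The special orbits attacked here are, per the abstract, those containing low-rank points with non-trivial stabilizers.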
Last updated:  2024-06-14
Efficient Execution Auditing for Blockchains under Byzantine Assumptions
Jeff Burdges, Alfonso Cevallos, Handan Kılınç Alper, Chen-Da Liu-Zhang, Fatemeh Shirazi, Alistair Stewart, Rob Habermeier, Robert Klotzner, and Andronik Ordian
Security of blockchain technologies primarily relies on decentralization, making them resilient against a subset of entities being taken down or corrupted. Blockchain scaling, crucial to decentralization, has been addressed by architectural changes: the load on the nodes is reduced either by parallelization, called sharding, or by taking computation load off the main blockchain via rollups. Both sharding and rollups have limitations in terms of decentralization and security. A crucial component in these architectures is a layer that allows the validity of incoming blocks in the system to be checked efficiently. We provide the first formalization and analysis of ELVES, the auditing layer that is currently deployed in the Polkadot and Kusama blockchains. In this layer, “auditing committees” are formed independently for each block, and security relies on the fact that it is prohibitively expensive in expectation for an adversary to make ELVES accept a block that is not valid. In addition, ELVES has the following characteristics: 1) Auditing committees wind up orders of magnitude smaller than pre-assigned committees. In fact, the size of the committees adapts automatically to network conditions but remains a low constant in expectation, in the order of tens or low hundreds; 2) Although synchronous per se, ELVES tolerates instant adaptive crashes, mirroring realistic network capabilities. Surprisingly, the committee-size analysis of our protocol is 'all but simple' and involves a novel strengthening of Cantelli's inequality, which may be of independent interest.
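For context, the standard one-sided (Cantelli) inequality that the analysis strengthens states that for a random variable $X$ with mean $\mu$ and variance $\sigma^2$, $$\Pr[X - \mu \ge \lambda] \;\le\; \frac{\sigma^2}{\sigma^2 + \lambda^2} \qquad \text{for all } \lambda > 0,$$ a one-sided sharpening of Chebyshev's bound.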
Last updated:  2024-06-14
Designs for practical SHE schemes based on Ring-LWR
Madalina Bolboceanu, Anamaria Costache, Erin Hales, Rachel Player, Miruna Rosca, and Radu Titiu
The Learning with Errors problem (LWE) and its variants are among the most popular assumptions underlying lattice-based cryptography. The Learning with Rounding problem (LWR) can be thought of as a deterministic variant of LWE. While lattice-based cryptography is known to enable many advanced constructions, constructing Fully Homomorphic Encryption schemes based on LWR remains an under-explored part of the literature. In this work, we present a thorough study of Somewhat Homomorphic Encryption schemes based on Ring-LWR that are the analogue of the Ring-LWE-based BFV scheme. Our main contribution is to present and analyse two new schemes, in the LPR and Regev paradigms. The Regev-type scheme can be seen as a generalisation of the only prior work in this direction (Costache-Smart, 2017). Both our schemes present several improvements compared to this prior work, and in particular we resolve the “tangled modulus” issue in the Costache-Smart scheme that led to unmanageable noise growth. Our schemes inherit the many benefits of being based on LWR, including ease of implementation, avoiding the need for expensive Gaussian sampling, improved resistance to side channels, suitability for hardware, and improved ciphertext size. Indeed, we give a detailed comparison showing that the LPR and Regev-type schemes marginally outperform the BFV scheme in terms of ciphertext size. Moreover, we show that both our schemes support RNS variants, which would make their practical performance competitive with BFV.
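As a reminder of the underlying assumption (standard definition, our notation): where an LWE sample is $(\mathbf{a}, \langle \mathbf{a}, \mathbf{s}\rangle + e \bmod q)$ with random small error $e$, an LWR sample replaces the error with deterministic rounding, $$\big(\mathbf{a},\; \lfloor (p/q) \cdot \langle \mathbf{a}, \mathbf{s} \rangle \rceil \bmod p\big), \qquad \mathbf{a} \leftarrow \mathbb{Z}_q^n,\ p < q,$$ and Ring-LWR replaces $\mathbb{Z}_q^n$ with a polynomial ring $R_q$. The determinism is what removes the need for Gaussian sampling cited among the benefits above.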
Last updated:  2024-06-14
Information-Theoretic 2-Party Computation from Additive Somewhat Homomorphic Encryption
Jonathan Trostle
Two-party computation has been an active area of research since Yao's breakthrough results on garbled circuits. We present secret-key additive somewhat homomorphic encryption schemes where the client has perfect privacy (the server can be computationally unbounded). Our basic scheme is additive somewhat homomorphic, and we give protocols to handle addition and multiplication. In one scheme, the server handles circuit multiplication gates by returning the multiplicands to the client, which performs the multiplication and sends back the encrypted product. We give a 2-party protocol that also incorporates server inputs, where the client has perfect privacy. Server privacy is not information-theoretic, but rather depends on the hardness of the subset sum problem. Correctness for the server in the malicious model can be verified by a third party with high probability, where the client and server privacy are information-theoretically protected from the verifier. Scaling the 2PC protocol via separate encryption parameters for smaller subcircuits allows the ciphertext size to remain constant as circuit size grows.
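The multiplication round trip described above can be sketched as follows, with `enc`, `dec`, and `add` as placeholders for an arbitrary secret-key additively homomorphic scheme; this is our illustration of the interaction pattern, not the paper's concrete construction.

```python
# Sketch of gate evaluation in the interactive protocol described above:
# additions stay on the server (homomorphic), while each multiplication
# gate is routed through the client, which holds the secret key.
def server_eval_add(add, x_ct, y_ct):
    return add(x_ct, y_ct)          # purely local, no interaction needed

def client_handle_mul(enc, dec, x_ct, y_ct):
    # The server sends both encrypted multiplicands; the client decrypts,
    # multiplies in the clear, and returns the encrypted product.
    return enc(dec(x_ct) * dec(y_ct))
```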
Last updated:  2024-06-14
Flood and Submerse: Distributed Key Generation and Robust Threshold Signature from Lattices
Thomas Espitau, Guilhem Niot, and Thomas Prest
We propose a new framework based on random submersions, that is, projections onto a random subspace blinded by a small Gaussian noise, for constructing verifiable short secret sharing, and showcase it by constructing efficient threshold lattice-based signatures in the hash-and-sign paradigm based on noise flooding. This is, to our knowledge, the first hash-and-sign lattice-based threshold signature. Our threshold signature enjoys the very desirable property of robustness, including at key generation. In practice, we are able to construct a robust hash-and-sign threshold signature and provide a typical parameter set for threshold T = 16 with signature size 13kB. Our constructions are provably secure under the standard MLWE assumption in the ROM and only require basic primitives as building blocks. In particular, we do not rely on FHE-type schemes.
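One natural reading of the central object (our paraphrase, not the paper's notation): a random submersion is a map $$x \;\longmapsto\; \mathbf{A}x + \mathbf{e},$$ where $\mathbf{A}$ is a random wide matrix, so the map is surjective onto a lower-dimensional space, and $\mathbf{e}$ is a small Gaussian blinding term.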
Last updated:  2024-06-14
Signer Revocability for Threshold Ring Signatures
Da Teng and Yanqing Yao
A t-out-of-n threshold ring signature (TRS) is a type of anonymous signature designed for t signers to jointly sign a message while hiding their identities among n parties that include themselves. But what if one of the signers wants to revoke their signature, or to additionally sign separately later? Can non-signers be removed from the ring without compromising anonymity? Previous research has only discussed the opposite operations, such as extending signatures. The present study introduces a novel property for TRS, revocability, addressing the need for improved flexibility and privacy in TRS. Our proposed revocable threshold ring signature (RTRS) scheme is innovative in several ways: (1) it allows a signer to non-interactively revoke their identity and update the signature from t-out-of-n to (t-1)-out-of-n; (2) it is possible to reduce the ring size and clip non-signers along with revoked signers while maintaining the anonymity level. We analyze and define the boundaries for these operations, and implement and evaluate our construction. With a sufficiently large ring size, we can optimize the signature size, resulting in better signing performance compared to the extensible signature scheme.
Last updated:  2024-06-14
MixBuy: Contingent Payment in the Presence of Coin Mixers
Diego Castejon-Molina, Dimitrios Vasilopoulos, and Pedro Moreno-Sanchez
A contingent payment protocol involves two mutually distrustful parties, a buyer and a seller, operating on the same blockchain, and a digital product whose ownership is not tracked on a blockchain (e.g., a digital book, but not an NFT). The buyer holds coins on the blockchain and transfers them to the seller in exchange for the product. However, if the blockchain does not hide transaction details, any observer can learn that a buyer purchased some product from a seller. In this work, we take contingent payment a step further: we consider a buyer who wishes to buy a digital product from a seller, routing the payment via an untrusted mixer. Crucially, we require that said payment is unlinkable, meaning that the mixer (or any other observer) does not learn which buyer is paying which seller. We refer to such a setting as unlinkable contingent payment (UCP). We present MixBuy, a system that realizes UCP. MixBuy relies on \emph{oracle-based unlinkable contingent payment} (O-UCP), a novel four-party cryptographic protocol where the mixer pays the seller and the seller provides the buyer with the product only if a semi-trusted notary attests that the buyer has paid the mixer. More specifically, we require four security notions: (i) mixer security, which guarantees that if the mixer pays the seller, the mixer must get paid by the buyer; (ii) seller security, which guarantees that if the seller delivers the product to the buyer, the seller must get paid by the mixer; (iii) buyer security, which guarantees that if the buyer pays the mixer, the buyer must obtain the product; and (iv) unlinkability, which guarantees that given a set of buyers and sellers, the mixer should not learn which buyer paid which seller. We present a provably secure and efficient cryptographic construction for O-UCP. Our construction can be readily used to realize UCP on most blockchains, as it has minimal functionality requirements (i.e., digital signatures and timelocks). To demonstrate the practicality of our construction, we provide a proof of concept for O-UCP, and our benchmarks on commodity hardware show that the communication overhead is small (a few kB per message) and the running time is below one second.
Last updated:  2024-06-14
SNARGs under LWE via Propositional Proofs
Zhengzhong Jin, Yael Tauman Kalai, Alex Lombardi, and Vinod Vaikuntanathan
We construct a succinct non-interactive argument (SNARG) system for every NP language $\mathcal{L}$ that has a propositional proof of non-membership for each $x\notin \mathcal{L}$. The soundness of our SNARG system relies on the hardness of the learning with errors (LWE) problem. The common reference string (CRS) in our construction grows with the space required to verify the propositional proof, and the size of the proof grows poly-logarithmically in the length of the propositional proof. Unlike most of the literature on SNARGs, our result implies SNARGs for languages $\mathcal L$ with proof length shorter than logarithmic in the deterministic time complexity of $\mathcal L$. Our SNARG improves over prior SNARGs for such ``hard'' NP languages (Sahai and Waters, STOC 2014, Jain and Jin, FOCS 2022) in several ways: - For languages with polynomial-length propositional proofs of non-membership, our SNARGs are based on a single, polynomial-time falsifiable assumption, namely LWE. - Our construction handles propositional proofs of super-polynomial length, as long as they have bounded space, under the subexponential LWE assumption. - Our SNARGs have a transparent setup, meaning that no private randomness is required to generate the CRS. Moreover, our approach departs dramatically from these prior works: we show how to design SNARGs for hard languages without publishing a program (in the CRS) that has the power to verify $\mathsf{NP}$ witnesses. The key new idea in our cryptographic construction is what we call a ``locally unsatisfiable extension'' of the $\mathsf{NP}$ verification circuit $\{C_x\}_x$. We say that an $\mathsf{NP}$ verifier has a locally unsatisfiable extension if for every $x\not\in \mathcal L$, there exists an extension $E_x$ of $C_x$ that is not even locally satisfiable in the sense of a local assignment generator [Paneth-Rothblum, TCC 2017]. Crucially, we allow $E_x$ to depend arbitrarily on $x$ rather than being efficiently constructible. In this work, we show -- via a ``hash-and-BARG'' for a hidden, encrypted computation -- how to build SNARGs for all languages with locally unsatisfiable extensions. We additionally show that propositional proofs of unsatisfiability generically imply the existence of locally unsatisfiable extensions, which allows us to deduce our main results. As an illustrative example, our results imply a SNARG for the decisional Diffie-Hellman (DDH) language under the LWE assumption.
Last updated:  2024-06-14
ElectionGuard: a Cryptographic Toolkit to Enable Verifiable Elections
Josh Benaloh, Michael Naehrig, Olivier Pereira, and Dan S. Wallach
ElectionGuard is a flexible set of open-source tools that---when used with traditional election systems---can produce end-to-end verifiable elections whose integrity can be verified by observers, candidates, media, and even voters themselves. ElectionGuard has been integrated into a variety of systems and used in actual public U.S. elections in Wisconsin, California, Idaho, Utah, and Maryland as well as in caucus elections in the U.S. Congress. It has also been used for civic voting in the Paris suburb of Neuilly-sur-Seine and for an online election by a Switzerland/Denmark-based organization. The principal innovation of ElectionGuard is the separation of the cryptographic tools from the core mechanics and user interfaces of voting systems. This separation allows the cryptography to be designed and built by security experts without having to re-invent and replace the existing infrastructure. Indeed, in its preferred deployment, ElectionGuard does not replace the existing vote counting infrastructure but instead runs alongside and produces its own independently-verifiable tallies. Although much of the cryptography in ElectionGuard is, by design, not novel, some significant innovations are introduced which greatly simplify the process of verification. This paper describes the design of ElectionGuard, its innovations, and many of the lessons learned from its implementation and its growing number of real-world deployments.
Last updated:  2024-06-13
Scalable Private Set Union, with Stronger Security
Yanxue Jia, Shi-Feng Sun, Hong-Sheng Zhou, and Dawu Gu
A Private Set Union (PSU) protocol allows parties, each holding an input set, to jointly compute the union of the sets without revealing anything else. In the literature, scalable PSU protocols follow the “split-execute-assemble” paradigm (Kolesnikov et al., ASIACRYPT 2019); in addition, those fast protocols often use Oblivious Transfer as a building block. Kolesnikov et al. (ASIACRYPT 2019) and Jia et al. (USENIX Security 2022) pointed out that certain security issues can be introduced in the “split-execute-assemble” paradigm. In this work, surprisingly, we observe that the typical way of invoking Oblivious Transfer also causes unnecessary leakage, and that only the PSU protocols based on additively homomorphic encryption (AHE) avoid this leakage. However, the AHE-based PSU protocols are far from practical. To bridge the gap, we design a new PSU protocol that avoids the unnecessary leakage. Unlike the AHE-based PSU protocols, our new construction relies only on symmetric-key operations apart from the base OTs, and is thereby much more scalable. The experimental results demonstrate that our protocol obtains at least an 873.74× speedup over the best-performing AHE-based scheme. Moreover, our performance is comparable to that of the state-of-the-art PSU protocol (Chen et al., USENIX Security 2023), which also suffers from the unnecessary leakage.
Last updated:  2024-06-13
Length Leakage in Oblivious Data Access Mechanisms
Grace Jia, Rachit Agarwal, and Anurag Khandelwal
This paper explores the problem of preventing length leakage in oblivious data access mechanisms with passive persistent adversaries. We show that designing mechanisms that prevent both length leakage and access pattern leakage requires navigating a three-way tradeoff between storage footprint, bandwidth footprint, and the information leaked to the adversary. We establish powerful lower bounds on achievable storage and bandwidth footprints for a variety of leakage profiles, and present constructions that perfectly or near-perfectly match the lower bounds.
Last updated:  2024-06-13
Distributed Fiat-Shamir Transform: from Threshold Identification Protocols to Signatures
Michele Battagliola and Andrea Flamini
The recent surge of distributed technologies has caused increasing interest in threshold signature protocols, which peaked with the recent NIST First Call for Multi-Party Threshold Schemes. Since its introduction, the Fiat-Shamir transform has been the most popular way to design standard digital signature schemes. Many threshold signature schemes are designed in a way that recalls the structure of digital signatures created using Fiat-Shamir: the signers generate a common commitment, compute the challenge as the hash of it, and then jointly create the response. In this work we formalize this approach. In particular, we introduce the notions of threshold identification scheme and threshold sigma protocol. Next, we introduce the concept of a generalized Fiat-Shamir transform that links the security of the threshold signature with that of the underlying threshold identification protocol. Our framework seeks to be an alternative, easier way to design concurrently secure threshold digital signatures, and we show its potential by providing an alternative security proof for Sparkle, a recent threshold Schnorr signature, and GRASS, a full threshold signature based on cryptographic group actions.
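The single-signer pattern being generalized can be made concrete with a toy Fiat-Shamir-transformed Schnorr signature (demonstration-sized, insecure parameters; our own illustration, not the paper's construction). Threshold schemes following this structure produce the commitment R and the response z jointly.

```python
import hashlib, secrets

p, q, g = 2039, 1019, 4   # toy subgroup of prime order q in Z_p^* (insecure)

def H(*parts) -> int:     # Fiat-Shamir challenge: a hash replaces the verifier
    digest = hashlib.sha256(repr(parts).encode()).digest()
    return int.from_bytes(digest, "big") % q

def keygen():
    sk = secrets.randbelow(q)
    return sk, pow(g, sk, p)

def sign(sk, msg: str):
    r = secrets.randbelow(q)      # commitment randomness
    R = pow(g, r, p)              # commitment (first prover message)
    c = H(R, msg)                 # challenge = hash of commitment and message
    z = (r + c * sk) % q          # response
    return c, z

def verify(pk, msg: str, sig) -> bool:
    c, z = sig
    R = pow(g, z, p) * pow(pk, -c, p) % p    # recompute g^z * pk^(-c)
    return c == H(R, msg)

sk, pk = keygen()
assert verify(pk, "hello", sign(sk, "hello"))
```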
Last updated:  2024-06-13
Critical Perspectives on Provable Security: Fifteen Years of "Another Look" Papers
Neal Koblitz and Alfred Menezes
We give an overview of our critiques of “proofs” of security and a guide to our papers on the subject that have appeared over the past decade and a half. We also provide numerous additional examples and a few updates and errata.
Last updated:  2024-06-13
Pairing-Free Blind Signatures from Standard Assumptions in the ROM
Julia Kastner, Ky Nguyen, and Michael Reichle
Blind signatures are a useful primitive for privacy-preserving applications such as electronic payments, e-voting, anonymous credentials, and more. However, existing practical blind signature schemes based on standard assumptions require either pairings or lattices. We present the first practical construction of a round-optimal blind signature in the random oracle model based on standard assumptions without resorting to pairings or lattices. In particular, our construction is secure under the strong RSA assumption and DDH (in pairing-free groups). For our construction, we provide a NIZK-friendly signature based on strong RSA, and efficiently instantiate a variant of Fischlin's generic framework (CRYPTO'06). Our blind signature scheme has signatures of size 4.28 KB and communication cost 10.98 KB. Along the way, we develop techniques that might be of independent interest. In particular, we provide efficient \emph{relaxed} range-proofs for large ranges with subversion zero-knowledge and compact commitments to elements of arbitrary groups.
Last updated:  2024-06-13
Communication Complexity vs Randomness Complexity in Interactive Proofs
Benny Applebaum, Kaartik Bhushan, and Manoj Prabhakaran
In this note, we study the interplay between the communication from a verifier in a general private-coin interactive protocol and the number of random bits it uses in the protocol. Under worst-case derandomization assumptions, we show that it is possible to transform any $I$-round interactive protocol that uses $\rho$ random bits into another one for the same problem with the additional property that the verifier's communication is bounded by $O(I\cdot \rho)$. Importantly, this is done with a minor, logarithmic, increase in the communication from the prover to the verifier and while preserving the randomness complexity. Along the way, we introduce a new compression game between a computationally-bounded compressor and a computationally-unbounded decompressor, and a new notion of conditioned efficient distributions that may be of independent interest. Our solutions are based on a combination of perfect hashing and pseudorandom generators.
Last updated:  2024-06-13
Adaptively Secure BLS Threshold Signatures from DDH and co-CDH
Sourav Das and Ling Ren
Threshold signatures are one of the most important cryptographic primitives in distributed systems. A popular choice of threshold signature scheme is the BLS threshold signature introduced by Boldyreva (PKC'03). Some attractive properties of Boldyreva's threshold signature are that the signatures are unique and short, the signing process is non-interactive, and the verification process is identical to that of non-threshold BLS. These properties have resulted in its practical adoption in several decentralized systems. However, despite its popularity and wide adoption, up until recently, the Boldyreva scheme has been proven secure only against a static adversary. Very recently, Bacho and Loss (CCS'22) presented the first proof of adaptive security for the Boldyreva scheme, but they have to rely on strong and non-standard assumptions such as the hardness of one-more discrete log (OMDL) and the Algebraic Group Model (AGM). In this paper, we present the first adaptively secure threshold BLS signature scheme that relies on the hardness of DDH and co-CDH in asymmetric pairing groups in the Random Oracle Model (ROM). Our signature scheme also has non-interactive signing, compatibility with non-threshold BLS verification, and practical efficiency like Boldyreva's scheme. These properties make our protocol a suitable candidate for practical adoption with the added benefit of provable adaptive security.
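For background, the Boldyreva-style scheme being made adaptively secure works as follows (standard construction, our notation): with a pairing $e : \mathbb{G}_1 \times \mathbb{G}_2 \to \mathbb{G}_T$, a BLS signature is $\sigma = H(m)^{sk}$ and verification checks $e(\sigma, g_2) = e(H(m), pk)$. In the threshold version the key is Shamir-shared, each signer outputs the partial signature $\sigma_i = H(m)^{sk_i}$, and any sufficiently large set $S$ of partials combines by Lagrange interpolation in the exponent, $$\sigma \;=\; \prod_{i \in S} \sigma_i^{\lambda_{i,S}}, \qquad \lambda_{i,S} = \prod_{j \in S,\, j \ne i} \frac{j}{j - i} \bmod q,$$ which is why signing is non-interactive and the combined signature verifies exactly like non-threshold BLS.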
Last updated:  2024-06-13
Improved Search for Integral, Impossible-Differential and Zero-Correlation Attacks: Application to Ascon, ForkSKINNY, SKINNY, MANTIS, PRESENT and QARMAv2
Hosein Hadipour, Simon Gerhalter, Sadegh Sadeghi, and Maria Eichlseder
Integral, impossible-differential (ID), and zero-correlation (ZC) attacks are three of the most important attacks on block ciphers. However, manually finding these attacks can be a daunting task, which is why automated methods are becoming increasingly important. Most automatic tools regarding integral, ZC, and ID attacks have focused only on finding distinguishers rather than complete attacks. At EUROCRYPT 2023, Hadipour et al. proposed a generic and efficient constraint programming (CP) model based on satisfiability for finding ID, ZC, and integral distinguishers. This new model can be extended to a unified CP model for finding full key recovery attacks. However, it has limitations, including determining the contradiction location beforehand and a cell-wise model unsuitable for weakly aligned ciphers like Ascon and PRESENT. They also deferred developing a CP model for the partial-sum technique in key recovery as future work. In this paper, we enhance Hadipour et al.'s method in several ways. First, we remove the limitation of determining the contradiction location in advance. Second, we show how to extend the distinguisher model to a bit-wise model, considering the internal structure of S-boxes and keeping the model based on satisfiability. Third, we introduce a CP model for the partial-sum technique for the first time. To show the usefulness and versatility of our approach, we apply it to various designs, from strongly aligned ones like ForkSKINNY and QARMAv2 to weakly aligned ones such as Ascon and PRESENT, yielding significantly improved results. To mention a few of our results, we improve the integral distinguisher of QARMAv2-128 (resp. QARMAv2-64) by 7 (resp. 5) rounds, and the integral distinguisher of ForkSKINNY by 1 round, thanks solely to our cell-wise distinguisher modelings. By using our new bit-wise modeling, our tool can find a group of $2^{155}$ 5-round ID and ZC distinguishers for Ascon in only one run, taking a few minutes on a regular laptop. The new CP model for the partial-sum technique enhances integral attacks on all SKINNY variants, notably improving the best attack on SKINNY-$n$-$n$ in the single-key setting by 1 round. We also enhance ID attacks on ForkSKINNY and provide the first analysis of this cipher in a limited reduced-round setting. Our methods are generic and applicable to other block ciphers.
Last updated:  2024-06-13
Notes on (failed) attempts to instantiate TLR3
Alexander Maximov
In this short paper we share our experience in instantiating the width-extension construct TLR3 based on a variety of tweakable block cipher constructs. As many of our attempts failed, we highlight the difficulty of obtaining a practical tweakable block cipher and the gap between theory and practice.
Last updated:  2024-06-13
Secret-Shared Shuffle with Malicious Security
Xiangfu Song, Dong Yin, Jianli Bai, Changyu Dong, and Ee-Chien Chang
A secret-shared shuffle (SSS) protocol permutes a secret-shared vector using a random secret permutation. It has found numerous applications, however, it is also an expensive operation and often a performance bottleneck. Chase et al. (Asiacrypt'20) recently proposed a highly efficient semi-honest two-party SSS protocol known as the CGP protocol. It utilizes purposely designed pseudorandom correlations that facilitate a communication-efficient online shuffle phase. That said, semi-honest security is insufficient in many real-world application scenarios since shuffle is usually used for highly sensitive applications. Considering this, recent works (CANS'21, NDSS'22) attempted to enhance the CGP protocol with malicious security over authenticated secret sharings. However, we find that these attempts are flawed, and malicious adversaries can still learn private information via malicious deviations. This is demonstrated with concrete attacks proposed in this paper. Then the question is how to fill the gap and design a maliciously secure CGP shuffle protocol. We answer this question by introducing a set of lightweight correlation checks and a leakage reduction mechanism. Then we apply our techniques with authenticated secret sharings to achieve malicious security. Notably, our protocol, while increasing security, is also efficient. In the two-party setting, experiment results show that our maliciously secure protocol introduces an acceptable overhead compared to its semi-honest version and is more efficient than the state-of-the-art maliciously secure SSS protocol from the MP-SPDZ library.
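To pin down the target functionality discussed above, here is an ideal-functionality sketch for two-party additive sharing over Z_M. The trusted dealer is used purely for illustration; CGP-style protocols realize this behavior without one, and without ever reconstructing the vector.

```python
import random

def shuffle_ideal(shares0, shares1, M):
    """Ideal-functionality sketch of a secret-shared shuffle: the shared
    vector is permuted by a random secret permutation and freshly re-shared,
    so neither party learns the permutation or the underlying values."""
    n = len(shares0)
    pi = list(range(n)); random.shuffle(pi)                  # secret permutation
    vec = [(a + b) % M for a, b in zip(shares0, shares1)]    # reconstruct
    out = [vec[pi[i]] for i in range(n)]                     # permute
    fresh0 = [random.randrange(M) for _ in range(n)]         # re-share
    fresh1 = [(x - y) % M for x, y in zip(out, fresh0)]
    return fresh0, fresh1
```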
Last updated:  2024-06-13
Efficient Quantum Algorithm for SUBSET-SUM Problem
Sanchita Ghosh, Anant Sharma, Sreetama Das, and Shibdas Roy
Not all problems in the complexity class $NP$ are known to be solvable in polynomial time by a classical computer, but all are verifiable in polynomial time given the solution. The complexity class $BQP$ includes all problems solvable in polynomial time by a quantum computer. Prime factorization is in the $NP$ class, and is also in the $BQP$ class, owing to Shor's algorithm. The hardest of all problems within the $NP$ class are called $NP$-complete. If a quantum algorithm can solve an $NP$-complete problem in polynomial time, it would imply that a quantum computer can solve all problems in $NP$ in polynomial time. Here, we present a polynomial-time quantum algorithm to solve an $NP$-complete variant of the $SUBSET-SUM$ problem, thereby rendering $NP\subseteq BQP$. We illustrate that given a set of integers, which may be positive or negative, a quantum computer can decide in polynomial time whether there exists any subset that sums to zero. There are many real-world applications of our result, such as finding patterns efficiently in stock-market data, or in recordings of the weather or brain activity. As an example, the decision problem of matching two images in image processing is $NP$-complete, and can be solved in polynomial time when amplitude amplification is not required.
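To make the decision problem concrete, here is the classical exponential-time baseline (for reference only; this is not the paper's quantum algorithm).

```python
from itertools import combinations

def zero_subset_exists(nums: list[int]) -> bool:
    """Decide whether any non-empty subset of `nums` sums to zero,
    by exhaustive search over all 2^n - 1 non-empty subsets."""
    return any(sum(c) == 0
               for k in range(1, len(nums) + 1)
               for c in combinations(nums, k))

assert zero_subset_exists([3, -7, 4, 2])        # 3 + 4 + (-7) = 0
assert not zero_subset_exists([1, 2, 4, 8])     # all positive, no zero sum
```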
Last updated:  2024-06-13
DISCO: Dynamic Searchable Encryption with Constant State
Xiangfu Song, Yu Zheng, Jianli Bai, Changyu Dong, Zheli Liu, and Ee-Chien Chang
Dynamic searchable encryption (DSE) with forward and backward privacy reduces leakages in early-stage schemes. Security enhancement comes with a price -- maintaining updatable keyword-wise state information. State information, if stored locally, incurs significant client-side storage overhead for keyword-rich datasets, potentially hindering real-world deployments. We propose DISCO, a simple and efficient framework for designing DSE schemes using constant client state. DISCO combines range-constrained pseudorandom functions (RCPRFs) over a global counter and leverages nice properties from the underlying primitives and index structure to simultaneously achieve forward-and-backward privacy and constant client state. To configure DISCO concretely, we identify a set of RCPRF properties that are vital for the resulting DISCO instantiations. By configuring DISCO with different RCPRFs, we resolve efficiency and usability issues in existing schemes. We further optimize DISCO's concrete efficiency without downgrading security. We implement DISCO constructions and report performance, showing trade-offs from different DISCO constructions. Besides, we compare the practical efficiency of DISCO with existing non-constant-state DSE schemes, demonstrating DISCO's competitive efficiency.
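The constant-state pattern can be sketched as follows: the client keeps only a master key and a global counter and derives each update token with a PRF; a range-constrained PRF would additionally let the client hand the server a key valid only on a counter range. HMAC stands in for the PRF here, and the function name is our own; this is an illustration of the idea, not DISCO's construction.

```python
import hmac, hashlib

def update_token(master_key: bytes, keyword: bytes, counter: int) -> bytes:
    # One PRF evaluation per update; the client's entire state is just
    # (master_key, counter). With an RCPRF, the client could delegate a
    # key that evaluates tokens only for counters in a chosen range, so
    # the server can search without ever receiving the master key.
    msg = keyword + counter.to_bytes(8, "big")
    return hmac.new(master_key, msg, hashlib.sha256).digest()
```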
Last updated:  2024-06-13
Return of the Kummer: a toolbox for genus 2 cryptography
Maria Corte-Real Santos and Krijn Reijnders
This work expands the machinery we have for isogeny-based cryptography in genus 2 by developing a toolbox of several essential algorithms for Kummer surfaces, the dimension 2 analogue of x-only arithmetic on elliptic curves. Kummer surfaces have been suggested in (hyper-)elliptic curve cryptography since at least the 1980s and recently these surfaces have reappeared to efficiently compute (2,2)-isogenies. We construct several essential analogues of techniques used in one-dimensional isogeny-based cryptography, such as pairings, deterministic point sampling and point compression and give an overview of (2,2)-isogenies on Kummer surfaces. We furthermore show how Scholten's construction can be used to transform isogeny-based cryptography over elliptic curves over $\mathbb{F}_{p^2}$ into protocols over Kummer surfaces over $\mathbb{F}_p$. As an example of this approach, we demonstrate that SQIsign verification can be performed completely on Kummer surfaces, and, therefore, that one-dimensional SQIsign verification can be viewed as a two-dimensional isogeny between products of elliptic curves. Curiously, the isogeny is then defined over $\mathbb{F}_p$ rather than $\mathbb{F}_{p^2}$. Contrary to expectation, the cost of SQIsign verification using Kummer surfaces does not explode: verification costs only 1.5 times more in terms of finite field operations than the SQIsign variant AprèsSQI, optimised for fast verification. Furthermore, as Kummer surfaces allow a much higher degree of parallelization, Kummer-based protocols over $\mathbb{F}_p$ could potentially outperform elliptic curve analogues over $\mathbb{F}_{p^2}$ in terms of clock cycles and actual performance.
Last updated:  2024-06-12
A Modular Approach to Registered ABE for Unbounded Predicates
Nuttapong Attrapadung and Junichi Tomida
Registered attribute-based encryption (Reg-ABE), introduced by Hohenberger et al. (Eurocrypt’23), emerges as a pivotal extension of attribute-based encryption (ABE), aimed at mitigating the key-escrow problem. Although several Reg-ABE schemes with black-box use of cryptography have been proposed so far, there remains a significant gap in the class of achievable predicates between vanilla ABE and Reg-ABE. To narrow this gap, we propose a modular framework for constructing Reg-ABE schemes for a broader class of predicates. Our framework is a Reg-ABE analog of the predicate transformation framework for ABE introduced by Attrapadung (Eurocrypt’19) and later refined by Attrapadung and Tomida (Asiacrypt’20) to function under the standard MDDH assumption. As immediate applications, our framework implies the following new Reg-ABE schemes under the standard MDDH assumption: – the first Reg-ABE scheme for (non-)monotone span programs with the traditional completely unbounded property. – the first Reg-ABE scheme for general non-monotone span programs (also with the completely unbounded property) as defined in the case of vanilla ABE by Attrapadung and Tomida (Asiacrypt’20). Here, the term “completely unbounded” signifies the absence of restrictions on attribute sets for users and policies associated with ciphertexts. From a technical standpoint, we first substantially modify pair encoding schemes (PES), originally devised for vanilla ABE by Attrapadung (Eurocrypt’14), to make them compatible with Reg-ABE. Subsequently, we present a series of predicate transformations through which we can construct complex predicates, particularly those with an “unbounded” characteristic, starting from simple ones. Finally, we define new properties of PES necessary for constructing Reg-ABE schemes and prove that these properties are preserved through the transformations. This immediately implies that we can obtain Reg-ABE schemes for any predicates derived via predicate transformations.
Last updated:  2024-06-12
CryptAttackTester: high-assurance attack analysis
Daniel J. Bernstein and Tung Chou
Quantitative analyses of the costs of cryptographic attack algorithms play a central role in comparing cryptosystems, guiding the search for improved attacks, and deciding which cryptosystems to standardize. Unfortunately, these analyses often turn out to be wrong. Sometimes errors are not caught until years later. This paper introduces CryptAttackTester (CAT), a software framework for high-assurance quantification of attack effectiveness. CAT enforces complete definitions of attack algorithms all the way down through the model of computation, enforces complete definitions of probability predictions and cost predictions all the way down through the cost metric, and systematically tests the predictions on small-scale inputs. For example, CAT gives a fully defined meaning to the statement "the median cost of brute-force search for an AES-128 key is under 2^{141.89} bit operations", and provides clear, auditable reasons to believe that the statement is correct. This does not rule out all possible analysis errors, but with CAT it is no longer possible for bugs to hide inside ambiguous or untested security-level claims. The paper gives various examples of errors in the literature that survived typical informal testing practices and that would have been caught if CAT-enforced links had been in place. As an important case study, the bulk of the current CAT release consists of full definitions of a broad spectrum of algorithms for information-set decoding (ISD), along with cost/probability predictions for each algorithm. ISD is the top attack strategy against the McEliece cryptosystem. The predictions cover interactions between (1) high-level search strategies from Prange, Lee–Brickell, Leon, Stern, Dumer, May–Meurer–Thomae, and Becker–Joux–May–Meurer; (2) random walks from Omura, Canteaut–Chabaud, Canteaut–Sendrier, and Bernstein–Lange–Peters; and (3) speedups in core subroutines such as linear algebra and sorting. The predictions also account for various attack overheads that were omitted from previous analyses. These gaps add up to roughly 10 bits, depending on parameters. CAT's tests catch much smaller errors than this. The cost metric selected in CAT has a very simple definition, is a lower bound for the price-performance ratio of non-quantum special-purpose hardware (although the bound is loose for attacks bottlenecked by long-distance communication), and allows many optimization efforts to be shared with the design of cryptographic circuits.
Last updated:  2024-06-12
Provably Secure Butterfly Key Expansion from the CRYSTALS Post-Quantum Schemes
Edward Eaton, Philippe Lamontagne, and Peter Matsakis
This work presents the first provably secure protocol for Butterfly Key Expansion (BKE) -- a tripartite protocol for provisioning users with pseudonymous certificates -- based on post-quantum cryptographic schemes. Our work builds upon the CRYSTALS family of post-quantum algorithms that have been selected for standardization by NIST. We extend those schemes by imbuing them with the additional functionality of public key expansion: a process by which pseudonymous public keys can be derived from a single public key. Our work is the most detailed analysis yet of BKE: we formally define the desired properties of BKE -- unforgeability and unlinkability -- as cryptographic games, and prove that BKE implemented with our modified CRYSTALS schemes satisfies those properties. We implemented our scheme by modifying the Kyber and Dilithium algorithms from the LibOQS project, and we report on our parameter choices and the performance of the schemes.
Last updated:  2024-06-12
Quantum-Safe Public Key Blinding from MPC-in-the-Head Signature Schemes
Sathvika Balumuri, Edward Eaton, and Philippe Lamontagne
Key blinding produces pseudonymous digital identities by rerandomizing the public keys of a digital signature scheme. It is used in anonymous networks to provide the seemingly contradictory goals of anonymity and authentication. Current key blinding schemes are based on the discrete log assumption. Eaton, Stebila and Stracovsky (LATINCRYPT 2021) proposed the first key blinding schemes from lattice assumptions. However, their large public keys and lack of QROM security mean they are not ready to replace existing solutions. We present a new way to build key blinding schemes from any MPC-in-the-Head signature scheme. These schemes rely on well-studied symmetric cryptographic primitives and admit short public keys. We give a general framework for constructing key blinding schemes and for proving their security in the quantum random oracle model (QROM). We instantiate our framework with the recent AES-based Helium signature scheme (Kales and Zaverucha, 2022). Blinding Helium only adds a minor overhead to the signature and verification time. Both Helium and the aforementioned lattice-based key blinding schemes were previously proven secure only in the ROM. This makes our results the first QROM proof of Helium and the first fully quantum-safe public key blinding scheme.
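For contrast, one standard discrete-log construction (our sketch of the folklore approach the abstract alludes to): a key pair $(sk, pk = g^{sk})$ in a group of order $q$ is blinded with a random factor $b$ as $$pk' = pk^{\,b} = g^{\,b \cdot sk}, \qquad sk' = b \cdot sk \bmod q,$$ so $pk'$ is unlinkable to $pk$ yet signatures made with $sk'$ still verify under it. The challenge addressed here is recreating this rerandomization without such algebraic structure, which is where MPC-in-the-Head signatures come in.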