All papers in 2024 (1063 results)

Last updated:  2024-06-30
VIMz: Verifiable Image Manipulation using Folding-based zkSNARKs
Stefan Dziembowski, Shahriar Ebrahimi, and Parisa Hassanizadeh
With the rise of generative AI technology, the media's credibility as a source of truth has been significantly compromised. This highlights the need to verify the authenticity of media and its originality. Ensuring the integrity of media during capture using the device itself presents a straightforward solution to this challenge. However, raw captured media often require certain refinements or redactions before publication. Zero-knowledge proofs (ZKP) offer a solution by allowing attestation of the correctness of specific transformations applied to an authorized image. While shown to be feasible, previous approaches faced challenges in practice due to their high prover complexity. In this paper, we aim to develop a practical framework for efficiently proving the authenticity of HD and 4K images on commodity hardware. Our goal is to minimize prover complexity by utilizing the folding-based zkSNARKs technique, resulting in VIMz, the first practical verifiable image manipulation system of this kind. VIMz leverages Nova's folding scheme to achieve low complexity recursive zkSNARK proofs of authentic image manipulation. Our implementation results demonstrate a substantial reduction in prover complexity—up to a 3$\times$ speedup in time and a 96$\times$ reduction in memory (from 309 GB in [Kang et al., arXiv 2022] to only 3.2 GB). Moreover, the low memory consumption allows VIMz to prove the correctness of multiple chained transformations simultaneously, further increasing the performance (up to 3.5$\times$). Additionally, we propose a trustless smart contract system that autonomously verifies the proofs of media authenticity, achieving trustless copyright and ownership management, aligning with the standards of the Coalition for Content Provenance and Authenticity (C2PA). Such a system serves as a foundational infrastructure for constructing trustless media marketplaces with diverse applications.
Last updated:  2024-06-29
Compact Key Function Secret Sharing with Non-linear Decoder
Chandan Kumar, Sikhar Patranabis, and Debdeep Mukhopadhyay
We present a variant of Function Secret Sharing (FSS) schemes tailored for point, comparison, and interval functions, featuring compact key sizes at the expense of an additional comparison. While existing FSS constructions are primarily geared towards $2$-party scenarios, exceptions such as the work by Boyle et al. (Eurocrypt 2015) and Riposte (S&P 2015) have introduced FSS schemes for $p$-party scenarios ($p \geq 3$). This paper aims to achieve the most compact $p$-party FSS key size to date. We achieve a noteworthy reduction in key size, a $2^p$-factor decrease compared to state-of-the-art FSS constructions (including computationally efficient constructions using symmetric-key primitives) of distributed point functions (DPF). Compared to the previous public-key-based FSS design for DPF, we also get a key size reduction equal to a $2^{n/2}$-sized row vector, where $2^n$ is the domain size of the point function. This reduction in key size comes at the cost of a required comparison operation by the decoder (hence called a non-linear decoder), a departure from prior schemes. In $p$-party scenarios, our construction outperforms existing FSS constructions in key size, remaining on par with Riposte in evaluation time and showing significant improvement over Boyle et al. In addition to constructing FSS for distributed point functions (DPF), we extend our approach to distributed comparison and interval functions, achieving the most efficient key size to date. Our distributed comparison function exhibits a key-size reduction by a factor of $q^{p-1}$, where $q$ denotes the size of the algebraic group used in the scheme's construction. The reduced key size of the comparison function has practical implications, particularly in applications like privacy-preserving machine learning (PPML), where thousands of comparison functions are employed in each neural network layer. To demonstrate the effectiveness of our improvements, we design and prototype-implement a scalable privacy-preserving framework for neural networks over distributed models. Specifically, we implement a distributed rectified linear unit (ReLU) activation function using our distributed comparison function, showcasing the efficacy of our proposed scheme.
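For readers new to FSS, the sketch below shows the trivial baseline that compact DPF constructions compete against: additively secret-sharing the entire truth table of a point function among p parties. Each key is a full 2^n-entry table, which is exactly the cost that non-trivial FSS schemes shrink; all names and parameters here are illustrative, not taken from the paper.

```python
import secrets

def naive_dpf_keys(alpha, beta, n, p, q):
    """Trivial p-party FSS for the point function f_{alpha,beta} on {0,...,2^n - 1}:
    additively share its whole truth table mod q.  Key size is 2^n group elements
    per party, the baseline that compact DPF constructions improve upon."""
    table = [beta if x == alpha else 0 for x in range(2 ** n)]
    keys = [[secrets.randbelow(q) for _ in range(2 ** n)] for _ in range(p - 1)]
    last = [(table[x] - sum(k[x] for k in keys)) % q for x in range(2 ** n)]
    return keys + [last]

def eval_share(key, x):
    # one party's share of f(x); summing all parties' shares mod q recovers f(x)
    return key[x]

keys = naive_dpf_keys(alpha=5, beta=1, n=4, p=3, q=2**16)
assert sum(eval_share(k, 5) for k in keys) % 2**16 == 1
assert sum(eval_share(k, 6) for k in keys) % 2**16 == 0
```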
Last updated:  2024-06-29
Insta-Pok3r: Real-time Poker on Blockchain
Sanjam Garg, Aniket Kate, Pratyay Mukherjee, Rohit Sinha, and Sriram Sridhar
We develop a distributed service for generating correlated randomness (e.g. permutations) for multiple parties, where each party’s output is private but publicly verifiable. This service provides users with a low-cost way to play online poker in real-time, without a trusted party. Our service is backed by a committee of compute providers, who run a multi-party computation (MPC) protocol to produce an (identity-based) encrypted permutation of a deck of cards, in an offline phase well ahead of when the players’ identities are known. When the players join, what we call the online phase, they decrypt their designated cards immediately after deriving the identity-based decryption keys, a much simpler computation. In addition, the MPC protocol also generates a publicly-verifiable proof that the output is a permutation. In our construction, we introduce a new notion of succinctly verifiable multi-identity based encryption (SVME), which extends the existing notion of verifiable encryption to a multi-identity-based setting, but with a constant sized proof – this may be of independent interest. We instantiate this for a permutation relation (defined over a small set) along with identity-based encryption, polynomial commitments and succinct proofs – our choices are made to enable a distributed computation when the card deck is always secret shared. Moreover, we design a new protocol to efficiently generate a secret-sharing of random permutation of a small set, which is run prior to distributed SVME. Running these protocols offline simplifies the online phase substantially, as parties only derive their identity-specific keys privately via secure channels with the MPC committee, and then decrypt locally to obtain their decks. We provide a rigorous UC-based formalization in a highly modularized fashion. Finally, we demonstrate practicality with an implementation that shows that for 8 MPC parties, generating a secret publicly-verifiable permutation of 64 cards takes under 3 seconds, while accessing cards for a player takes under 0.3 seconds.
Last updated:  2024-06-29
Quirky Interactive Reductions of Knowledge
Joseph Johnston
Interactive proofs and arguments of knowledge can be generalized to the concept of interactive reductions of knowledge, where proving knowledge of a witness for one NP language is reduced to proving knowledge of a witness for another NP language. We take this generalization and specialize it to a class of reductions we refer to as `quirky interactive reductions of knowledge' (or QUIRKs). This name reflects our particular design choices within the broad and diverse world of interactive reduction methods. A central design choice is allowing the prover to rewind or regress to any previous reduction and repeat it as many times as desired. We prove completeness and extractability properties for QUIRKs. We also offer tools for constructing extraction algorithms along with several simple examples of usage.
Last updated:  2024-06-28
HEProfiler: An In-Depth Profiler of Approximate Homomorphic Encryption Libraries
Jonathan Takeshita, Nirajan Koirala, Colin McKechney, and Taeho Jung
Fully Homomorphic Encryption (FHE) allows computation on encrypted data. Various software libraries have implemented the approximate-arithmetic FHE scheme CKKS, which is highly useful for applications in machine learning and data analytics; each of these libraries has differing performance and features. It is useful for developers and researchers to learn details about these libraries’ performance and their differences. Some previous work has profiled FHE and CKKS implementations for this purpose, but these comparisons are limited in their fairness and completeness. In this article, we compare four major libraries supporting the CKKS scheme. Working with the maintainers of each of the PALISADE, Microsoft SEAL, HElib, and HEAAN libraries, we devise methods for fair comparisons of these libraries, even with their widely varied development strategies and library architectures. To show the practical performance of these libraries, we present HEProfiler, a simple and extensible framework for profiling C++ FHE libraries. Our experimental evaluation is complete in both the scope of tasks tested and metrics evaluated, allowing us to draw conclusions about the behaviors of different libraries under a wide range of real-world workloads. This is the first work giving experimental comparisons of different bootstrapping-capable CKKS libraries.
Last updated:  2024-06-28
Natively Compatible Super-Efficient Lookup Arguments and How to Apply Them
Matteo Campanelli, Dario Fiore, and Rosario Gennaro
Lookup arguments allow an untrusted prover to commit to a vector $\vec f \in \mathbb{F}^n$ and show that its entries reside in a predetermined table $\vec t \in \mathbb{F}^N$. One of their key applications is to augment general-purpose SNARKs making them more efficient on subcomputations that are hard to arithmetize. In order for this "augmentation" to work out, a SNARK and a lookup argument should have some basic level of compatibility with respect to the commitment on $\vec f$. However, not all existing efficient lookup arguments are fully compatible with other efficient general-purpose SNARKs. This incompatibility can for example occur whenever SNARKs use multilinear extensions under the hood (e.g. Spartan) but the lookup argument is univariate in flavor (e.g. Caulk or $\mathsf{cq}$). In this paper we discuss how to widen the spectrum of "super-efficient" lookup arguments (where the proving time is independent of the size of the lookup table): we present a new construction inspired by $\mathsf{cq}$ and based on multilinear polynomial encodings (MLE). Our construction is the first lookup argument for any table that is also natively compatible with MLE-based SNARKs at comparable costs with other state-of-the-art lookup arguments, particularly when the large table is unstructured. This case arises in various applications, such as using lookups to prove that the program in a virtual machine is fetching the right instruction and when proving the correct computation of floating point arithmetic (e.g., in verifiable machine learning). We also introduce a second more general construction: a compiler that, given any super-efficient lookup argument compatible with univariate SNARKs, converts it into a lookup argument compatible with MLE-based SNARKs with a very small overhead. Finally, we discuss SNARKs that we can compose with our constructions as well as approaches for this composition to work effectively.
Last updated:  2024-06-28
Password-authenticated Key Exchange and Applications
Kristian Gjøsteen
We analyse two password-authenticated key exchange protocols: a variant of CPace and a protocol related to the well-known SRP protocol. Our security results are tight. The first result gives us some information about trade-offs for design choices in CPace. The second result provides information about the security of SRP. Our analysis is done in a new game-based security definition for password-authenticated key exchange. Our definition accommodates arbitrary password sampling methodologies. Our definition also supports modular security analysis, which we illustrate by giving two example applications of password-authenticated key exchange: password-authenticated secure channels and password-authenticated device authorisation, capturing popular applications of passwords.
Last updated:  2024-06-28
Shuffle Arguments Based on Subset-Checking
Behzad Abdolmaleki, Prastudy Fauzi, Toomas Krips, and Janno Siim
Zero-knowledge shuffle arguments are a useful tool for constructing mix-nets which enable anonymous communication. We propose a new shuffle argument using a novel technique that probabilistically checks that each weighted set of input elements corresponds to some weighted set of output elements, with weights from the same set as the input element weights. We achieve this using standard discrete log assumptions and the shortest integer solution (SIS) assumption. Our shuffle argument has prover and verifier complexity linear in the size of the shuffled set, and communication complexity logarithmic both in the shuffled set size and security parameter.
Last updated:  2024-06-28
Enhancing Local Verification: Aggregate and Multi-Signature Schemes
Ahmet Ramazan Ağırtaş, Neslihan Yaman Gökce, and Oğuz Yayla
An aggregate signature scheme is a digital signature protocol that enables the aggregation of multiple signatures. Given n signatures on n distinct messages from n different users, it is possible to combine all these signatures into a single, concise signature. This single signature, along with the n original messages, convinces the verifier that the n users indeed signed their respective n original messages. However, the verifier must have access to all the original messages to perform the verification, highlighting a potential limitation in terms of accessibility and efficiency. Goyal and Vaikuntanathan introduced the concept of local verification, which allows the verifier to determine if a specific message m is part of the aggregated signature by only accessing the message m. In this paper, we extend the single-signer locally verifiable aggregate signature scheme initially proposed by Goyal and Vaikuntanathan, adapting it to a multi-signer context. Our generalization allows the verifier to validate multiple signatures simultaneously using an auxiliary value generated by the LocalOpen algorithm, thereby enhancing verification efficiency. Furthermore, we integrate this approach into the multi-signature scheme proposed by Boneh, Drijvers, and Neven, demonstrating its broader applicability and potential benefits in complex cryptographic systems.
Last updated:  2024-06-28
Optimized Computation of the Jacobi Symbol
Jonas Lindstrøm and Kostas Kryptos Chalkias
The Jacobi Symbol is an essential primitive in cryptographic applications such as primality testing, integer factorization, and various encryption schemes. By exploring the interdependencies among modular reductions within the algorithmic loop, we have developed a refined method that significantly enhances computational efficiency. Our optimized algorithm, implemented in the Rust language, achieves a performance increase of 72% over conventional textbook methods and is twice as fast as the previously fastest known Rust implementation. This work not only provides a detailed analysis of the optimizations but also includes comprehensive benchmark comparisons to illustrate the practical advantages of our methods. Our algorithm is publicly available under an open-source license, promoting further research on foundational cryptographic optimizations.
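As background for the optimization claims above, this is the conventional textbook algorithm (binary Jacobi symbol via quadratic reciprocity) that optimized implementations are typically measured against; it is a plain Python sketch for illustration, not the authors' Rust code.

```python
def jacobi(a: int, n: int) -> int:
    """Textbook Jacobi symbol (a|n) for odd n > 0, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:            # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):      # (2|n) = -1 iff n = 3 or 5 (mod 8)
                result = -result
        a, n = n, a                  # quadratic reciprocity: swap arguments
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0   # (a|n) = 0 when gcd(a, n) > 1

# sanity check: for a prime modulus the Jacobi symbol equals the Legendre symbol
p = 10007
for a in range(1, 50):
    assert jacobi(a, p) % p == pow(a, (p - 1) // 2, p)
```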
Last updated:  2024-06-28
Stochastic Secret Sharing with $1$-Bit Shares and Applications to MPC
Benny Applebaum and Eliran Kachlon
The problem of minimizing the share size of threshold secret-sharing schemes is a basic research question that has been extensively studied. Ideally, one strives for schemes in which the share size equals the secret size. While this is achievable for large secrets (Shamir, CACM '79), no similar solutions are known for the case of binary, single-bit secrets. Current approaches often rely on so-called ramp secret sharing that achieves a constant share size at the expense of a slight gap between the privacy and the correctness thresholds. In the case of single-bit shares, this leads to a large gap which is typically unacceptable. The possibility of a meaningful notion of secret sharing scheme with 1-bit shares and almost optimal threshold has been left wide open. Of special interest is the case of threshold 0.5, which is motivated by information-theoretic honest-majority secure multiparty computation (MPC). In this work, we present a new stochastic model for secret-sharing where each party is corrupted by the adversary with probability $p$, independently of the other parties, and correctness and privacy are required to hold with high probability over the choice of the corrupt parties. We present new secret sharing schemes with single-bit shares that tolerate any constant corruption probability $p<0.5$. Our construction is based on a novel connection between such stochastic secret-sharing schemes and error-correcting codes that achieve capacity over the binary erasure channel. Our schemes are linear and multiplicative. We demonstrate the usefulness of the model by using our new schemes to construct MPC protocols with security against an adversary that passively corrupts an arbitrary subset of $0.499n$ of the parties, where the online communication per party consists of a single bit per AND gate and zero communication per XOR gate. Unlike competing approaches for communication-efficient MPC, our solution is applicable even in a real-time model in which the parties should compute a Boolean circuit whose gates arrive in real-time, one at a time, and are not known in advance.
Last updated:  2024-06-28
A New Fine Tuning Method for FHEW/TFHE Bootstrapping with IND-CPA-D Security
Deokhwa Hong, Young-Sik Kim, Yongwoo Lee, and Eunyoung Seo
Fully homomorphic encryption (FHE) schemes enable computations on encrypted data, making them a crucial component of privacy-enhancing technologies. Ducas and Micciancio introduced FHEW (Eurocrypt '15), and Chillotti et al. improved it in TFHE (Asiacrypt '16), both of which provide homomorphic binary (or larger) gate evaluations with fast latency due to their small parameters. However, their evaluation failure probability is highly sensitive to parameter selection, resulting in a limited set of viable parameters and a trade-off between failure probability and runtime. Recently, Cheon et al. proposed a key recovery attack against FHEW/TFHE schemes based on a new security model for FHE, called IND-CPA-D security, which was first introduced by Li and Micciancio (Eurocrypt '21). To prevent this attack, it is necessary to make the failure probability negligible (e.g., $2^{-128}$). However, due to the limited choice of parameters, one is forced to use parameter sets with lower failure probabilities than needed, causing inefficiencies in runtime. We propose a new bootstrapping method for FHEW/TFHE that provides a precise balance between runtime and failure probability and is easy to implement. The proposed methods enable the selection of parameter sets that achieve negligible failure probabilities for each desired security level while optimizing runtime.
Last updated:  2024-06-28
Adaptor Signatures: New Security Definition and A Generic Construction for NP Relations
Xiangyu Liu, Tzannetos Ioannis, and Vassilis Zikas
An adaptor signature (AS) scheme is an extension of digital signatures that allows the signer to generate a pre-signature for an instance of a hard relation. This pre-signature can later be adapted to a full signature with a corresponding witness. Meanwhile, the signer can extract a witness from both the pre-signature and the signature. AS have recently garnered more attention due to their scalability and interoperability. Dai et al. [INDOCRYPT 2022] proved that AS can be constructed for any NP relation using a generic construction. However, their construction has a shortcoming: the associated witness is exposed by the adapted signature. This flaw limits the applications of AS, even in its motivating setting, i.e., blockchain, where the adapted signature is typically uploaded to the blockchain and is public to everyone. To address this issue, in this work we augment the security definition of AS by a natural property which we call witness hiding. We then prove the existence of AS for any NP relation, assuming the existence of one-way functions. Concretely, we propose a generic construction of witness-hiding AS from signatures and a weak variant of trapdoor commitments, which we term trapdoor commitments with a specific adaptable message. We instantiate the latter based on the Hamiltonian cycle problem. Since the Hamiltonian cycle problem is NP-complete, we can obtain witness-hiding adaptor signatures for any NP relation.
Last updated:  2024-06-28
On Sequential Functions and Fine-Grained Cryptography
Jiaxin Guan and Hart Montgomery
A sequential function is, informally speaking, a function $f$ for which a massively parallel adversary cannot compute "substantially" faster than an honest user with limited parallel computation power. Sequential functions form the backbone of many primitives that are extensively used in blockchains such as verifiable delay functions (VDFs) and time-lock puzzles. Despite this widespread practical use, there has been little work studying the complexity or theory of sequential functions. Our main result is a black-box oracle separation between sequential functions and one-way functions: in particular, we show the existence of an oracle $\mathcal{O}$ that implies a sequential function but not a one-way function. This seems surprising since sequential functions are typically constructed from very strong assumptions that imply one-way functions and also since time-lock puzzles are known to imply one-way functions (Bitansky et al., ITCS '16). We continue our exploration of the theory of sequential functions. We show that, informally speaking, the decisional, worst-case variant of a certain class of sequential function called a continuous iterative sequential function (CISF) is PSPACE-complete. A CISF is, in a nutshell, a sequential function $f$ that can be written in the form $f \left(k, x \right) = g^{k} \left(x \right)$ for some function $g$ where $k$ is an input determining the number of "rounds" the function is evaluated. We then show that more general forms of sequential functions are not contained in PSPACE relative to a random oracle. Given these results, we then ask if it is possible to build any interesting cryptographic primitives from sequential functions that are not one-way. It turns out that even if we assume just the existence of a CISF that is not one-way, we can build certain "fine-grained" cryptographic primitives where security is defined similarly to traditional primitives with the exception that it is only guaranteed for some (generally polynomial) amount of time. In particular, we show how to build "fine-grained" symmetric key encryption and "fine-grained" MACs from a CISF. We also show how to build fine-grained public-key encryption from a VDF with a few extra natural properties and indistinguishability obfuscation (iO) for null circuits. We do not assume one-way functions. Finally, we define a primitive that we call a commutative sequential function - essentially a sequential function that can be computed in sequence to get the same output in two different ways - and show that it implies fine-grained key exchange.
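To make the CISF shape $f(k, x) = g^k(x)$ concrete, here is a minimal iterated-hash candidate; whether any such function is actually sequential (or one-way) is precisely the kind of question the paper studies, so this snippet is purely illustrative.

```python
import hashlib

def g(x: bytes) -> bytes:
    """One 'cheap' step; a CISF iterates it k times."""
    return hashlib.sha256(x).digest()

def cisf(k: int, x: bytes) -> bytes:
    """Candidate continuous iterative sequential function f(k, x) = g^k(x)."""
    for _ in range(k):
        x = g(x)
    return x

# An honest evaluator performs k sequential calls to g; the conjecture behind
# such candidates is that massive parallelism cannot shortcut the iteration.
out = cisf(1_000, b"seed")
```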
Last updated:  2024-06-28
KyberSlash: Exploiting secret-dependent division timings in Kyber implementations
Daniel J. Bernstein, Karthikeyan Bhargavan, Shivam Bhasin, Anupam Chattopadhyay, Tee Kiah Chia, Matthias J. Kannwischer, Franziskus Kiefer, Thales Paiva, Prasanna Ravi, and Goutam Tamvada
This paper presents KyberSlash1 and KyberSlash2 – two timing vulnerabilities in several implementations (including the official reference code) of the Kyber Post-Quantum Key Encapsulation Mechanism, currently undergoing standardization as ML-KEM. We demonstrate the exploitability of both KyberSlash1 and KyberSlash2 on two popular platforms: the Raspberry Pi 2 (Arm Cortex-A7) and the Arm Cortex-M4 microprocessor. Kyber secret keys are reliably recovered within minutes for KyberSlash2 and a few hours for KyberSlash1. We responsibly disclosed these vulnerabilities to maintainers of various libraries and they have swiftly been patched. We present two approaches for detecting and avoiding similar vulnerabilities. First, we patch the dynamic analysis tool Valgrind to allow detection of variable-time instructions operating on secret data, and apply it to more than 1000 implementations of cryptographic primitives in SUPERCOP. We report multiple findings. Second, we propose a more rigid approach to guarantee the absence of variable-time instructions in cryptographic software using formal methods.
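The general shape of the problem and of the fix can be sketched outside of C: a message bit in Kyber-like schemes is recovered by a rounding division of a secret-dependent value by q = 3329, and on many CPUs the integer division instruction takes secret-dependent time. The snippet below illustrates replacing that division with a multiply-and-shift by a precomputed reciprocal, and exhaustively checks that the two formulas agree; the constants are chosen here for q = 3329 and verified by the loop itself, not quoted from any particular library or patch.

```python
Q = 3329  # Kyber's modulus

def msg_bit_division(t: int) -> int:
    """Rounding to the nearest multiple of Q, reduced mod 2.
    In C, the division by Q may compile to a variable-time instruction."""
    return (((t << 1) + Q // 2) // Q) & 1

def msg_bit_mulshift(t: int) -> int:
    """Same value via multiply-and-shift with a precomputed reciprocal of Q,
    avoiding the division instruction entirely."""
    return ((((t << 1) + 1665) * 80635) >> 28) & 1

# exhaustive check over the full input range [0, Q)
assert all(msg_bit_division(t) == msg_bit_mulshift(t) for t in range(Q))
```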
Last updated:  2024-06-27
Distributional Secure Merge
Gayathri Garimella, Srinivasan Raghuramam, and Peter Rindal
Secure merge refers to the problem of merging two sorted lists. The problem appears in different settings where each list is held by one of two parties, or the lists are themselves shared among two or more parties. The output of a secure merge protocol is secret shared. Each variant of the problem offers many useful applications. The difficulty in designing secure merge protocols vis-a-vis insecure merge protocols (which work in linear time with a single pass over the lists) has to do with operations having to be oblivious or data-independent. In particular, the protocol cannot leak the positions of items of each list in the final merged list. On account of this, sorting-based secure merge protocols have been a common solution to the problem. However, as they introduce (poly)logarithmic overheads, there has been active investigation into the task of building (near) linear time secure merge protocols. Most recently, Hemenway et al. put forth a protocol for secure merge that does achieve linear communication and computation and a round complexity of $O({\log\log n})$, where $n$ is the length of the lists being merged. While this shows the feasibility of a linear time secure merge, it still leaves room for the design of a concretely efficient linear time secure merge. In this work, we consider a relaxation of the problem where the lists are uniformly random. We show a secure merge protocol for uniformly random lists that achieves $O({n\log\log n})$, i.e., near linear communication and computation and a round complexity of $O({\log\log n})$, where $n$ is the length of the lists being merged. Our protocol design is general and can be instantiated in a variety of settings so long as the building blocks (basic ones such as comparisons and shuffles) can be realized in said settings. Although we do not achieve the same asymptotic guarantees as Hemenway et al., our work is concretely efficient. We implement our protocol and compare it to the state-of-the-art sorting protocols and demonstrate an order of magnitude improvement in running times and communication for lists of size $2^{20}$. We also extend our protocol to work for lists sampled from arbitrary distributions. In particular, when the lists are (close to) identically distributed, we achieve the same efficiency as uniform lists. This immediately improves the performance of many crucial applications including PSI & Secure Join, thus illustrating the significance and applicability of our protocol in practice.
Last updated:  2024-06-27
Improved Multi-Party Fixed-Point Multiplication
Saikrishna Badrinarayanan, Eysa Lee, Peihan Miao, and Peter Rindal
Machine learning is widely used for a range of applications and is increasingly offered as a service by major technology companies. However, the required massive data collection raises privacy concerns during both training and inference. Privacy-preserving machine learning aims to solve this problem. In this setting, a collection of servers secret share their data and use secure multi-party computation to train and evaluate models on the joint data. All prior work focused on the scenario where the number of servers is two or three. In this work, we study the problem where there are $N \geq 3$ servers amongst whom the data is secret shared. A key component of machine learning algorithms is to perform fixed-point multiplication with truncation of secret shared decimal values. In this work, we design new protocols for multi-party secure fixed-point multiplication where each of the $N$ parties holds one share of each of the two values to be multiplied and receives one share of the product at the end of the protocol. We consider three forms of secret sharing - replicated, Shamir, and additive, and design an efficient protocol secure in the presence of a semi-honest adversary for each of the forms. Our protocols are more communication efficient than all prior work on performing multi-party fixed-point multiplication. Additionally, for replicated secret sharing, we design another efficient protocol that is secure in the presence of a malicious adversary. Finally, we leverage our fixed-point multiplication protocols to design secure multi-party computation (MPC) protocols for arbitrary arithmetic circuits that have addition and fixed-point multiplication with truncation gates. All our protocols are proven secure using a standard simulation based security definition. Our protocols for replicated and Shamir sharing work in the presence of an honest majority of parties while the one for additive sharing can tolerate a dishonest majority as well.
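As a reminder of the plaintext operation these MPC protocols emulate, here is fixed-point multiplication with truncation in the clear; the difficult part addressed by the paper is performing the final right-shift on secret-shared values, which this sketch does not model.

```python
F = 16                                # number of fractional bits

def encode(x: float) -> int:
    return round(x * (1 << F))

def decode(v: int) -> float:
    return v / (1 << F)

def fxp_mul(a: int, b: int) -> int:
    """The product of two fixed-point values carries 2F fractional bits;
    truncation (an arithmetic right-shift by F) restores the encoding."""
    return (a * b) >> F

x, y = encode(3.25), encode(-1.5)
assert abs(decode(fxp_mul(x, y)) - (3.25 * -1.5)) < 2 ** -F
```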
Last updated:  2024-06-27
The Sum-Check Protocol over Fields of Small Characteristic
Suyash Bagad, Yuval Domb, and Justin Thaler
The sum-check protocol of Lund, Fortnow, Karloff, and Nisan underlies SNARKs with the fastest known prover. In many of its applications, the prover can be implemented with a number of field operations that is linear in the number, $n$, of terms being summed. We describe an optimized prover implementation when the protocol is applied over an extension field of a much smaller base field. The rough idea is to keep most of the prover's multiplications over the base field, at the cost of performing more $\textit{total}$ field multiplications. When the sum-check protocol is applied to a product of polynomials that all output values in the base field, our algorithm reduces the number of extension field operations by multiple orders of magnitude. In other settings, our improvements are more modest but nonetheless meaningful. In SNARK design, the sum-check protocol is often combined with a polynomial commitment scheme; such commitment schemes are becoming faster, especially when the values being committed are small. These improved commitment schemes are likely to render the sum-check prover the overall bottleneck, which our results help to mitigate.
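For readers unfamiliar with the protocol being optimized, the sketch below runs an honest prover and verifier of the plain sum-check protocol for a multilinear polynomial given as an evaluation table over a single prime field; the base-field/extension-field split that the paper exploits is deliberately not modeled.

```python
import random

P = 2**61 - 1  # demo prime field

def fold(vals, r):
    """Fix the first variable of a multilinear evaluation table to r."""
    h = len(vals) // 2
    return [((1 - r) * vals[i] + r * vals[h + i]) % P for i in range(h)]

def sumcheck_demo(table):
    """Honest prover and verifier for the claim sum_x f(x), f multilinear."""
    n = len(table).bit_length() - 1
    claim = sum(table) % P
    vals = list(table)
    for _ in range(n):
        s0 = sum(vals[:len(vals) // 2]) % P     # round polynomial at X = 0
        s1 = sum(vals[len(vals) // 2:]) % P     # round polynomial at X = 1
        assert (s0 + s1) % P == claim           # verifier's round check
        r = random.randrange(P)                 # verifier's challenge
        claim = ((1 - r) * s0 + r * s1) % P     # s(r) becomes the next claim
        vals = fold(vals, r)
    assert vals[0] == claim                     # final check: f(r_1, ..., r_n)
    return True

assert sumcheck_demo([random.randrange(P) for _ in range(8)])  # f over {0,1}^3
```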
Last updated:  2024-06-27
Efficient Secret Sharing for Large-Scale Applications
Sarvar Patel, Giuseppe Persiano, Joon Young Seo, and Kevin Yeo
Threshold secret sharing enables distributing a message to $n$ parties such that no subset of fewer than $t$ parties can learn the message, whereas any subset of at least $t$ parties can recover the message. Despite being a fundamental primitive, secret sharing still suffers from one significant drawback: its message reconstruction algorithm is computationally expensive for large privacy thresholds $t$. In this paper, we aim to address this significant drawback. We study general $(t,c)$-ramp secret sharing schemes where the number of parties $c$ needed to reconstruct the secret may be larger than $t$. We present a ramp secret sharing scheme whose reconstruction time is 2-7.8x faster than prior constructions suitable for use against adversaries that adaptively corrupt parties. For $t = 2^{20}$, our new protocol has reconstruction time of 5 seconds whereas prior work requires nearly half a minute. We see improvements starting from as small as $t = 256$. Furthermore, we obtain correctness threshold as small as $c \ge 1.05t$. To obtain our construction, we first improve the secret sharing frameworks by Cramer et al. (EUROCRYPT'15) and Applebaum et al. (CRYPTO'23) from erasure codes. Our new framework obtains secret sharing schemes that may be used against adversaries with adaptive corruptions while requiring only weaker correctness guarantees from the underlying erasure code with a distributed generation property. Furthermore, our new framework also maintains the linear homomorphism of the prior works. Afterwards, we present a concretely efficient erasure code from random band matrices that satisfies the distributed generation property. We show that our secret sharing scheme can improve many real-world applications. In secure aggregation protocols for federated learning, we obtain up to 22% reductions in computational cost by replacing Shamir's scheme with our construction. We extend our protocol to obtain a verifiable ramp secret sharing scheme where each party can verify the consistency of the shares. Our new verifiable ramp secret sharing has 8.2-25.2x faster sharing and 2.7-23.2x faster reconstruction time compared to prior works. Finally, we present an improved distributed verifiable random function that may be used for decentralized randomness beacons.
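To see where the reconstruction cost the paper attacks comes from, here is plain Shamir sharing with naive Lagrange reconstruction, which uses on the order of t^2 field operations; the paper's ramp scheme replaces this with erasure-code-based reconstruction. The field size and parameters below are illustrative only.

```python
import random

P = 2**127 - 1  # a Mersenne prime, large enough for this demo

def share(secret, t, n):
    """Shamir (t, n): any t shares reconstruct, fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):      # Horner evaluation of the polynomial
            y = (y * x + c) % P
        return y
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0; naive O(t^2) field operations."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(12345, t=3, n=5)
assert reconstruct(shares[:3]) == 12345
```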
Last updated:  2024-06-27
Searching for differential addition chains
Daniel J. Bernstein, Jolijn Cottaar, and Tanja Lange
The literature sometimes uses slow algorithms to find minimum-length continued-fraction differential addition chains to speed up subsequent computations of multiples of points on elliptic curves. This paper introduces two faster algorithms to find these chains. The first algorithm prunes more effectively than previous algorithms. The second algorithm uses a meet-in-the-middle approach and appears to have a limiting cost exponent below 1.
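For context, the best-known differential addition chain is the Montgomery ladder, in which the two summands of every addition always differ by the base point, so x-only curve formulas can be used; the continued-fraction chains the paper searches for can be shorter than the ladder's roughly two operations per scalar bit. The toy sketch below checks the structure on integers rather than curve points.

```python
def ladder(n: int, P, dbl, dadd):
    """Montgomery ladder: a differential addition chain in which the two
    inputs of every addition always differ by the base point P."""
    R0, R1 = P, dbl(P)
    for bit in bin(n)[3:]:                       # bits of n after the leading 1
        if bit == '0':
            R0, R1 = dbl(R0), dadd(R0, R1, P)    # invariant: R1 - R0 == P
        else:
            R0, R1 = dadd(R0, R1, P), dbl(R1)
    return R0

# toy instantiation in the additive group of integers; on an elliptic curve,
# dadd would be an x-only differential addition that genuinely needs the difference
dbl = lambda a: 2 * a
dadd = lambda a, b, diff: a + b
assert ladder(77, 3, dbl, dadd) == 77 * 3
```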
Last updated:  2024-06-30
Cryptography in the Common Haar State Model: Feasibility Results and Separations
Prabhanjan Ananth, Aditya Gulati, and Yao-Ting Lin
Common random string model is a popular model in classical cryptography. We study a quantum analogue of this model called the common Haar state (CHS) model. In this model, every party participating in the cryptographic system receives many copies of one or more i.i.d Haar random states. We study feasibility and limitations of cryptographic primitives in this model and its variants: - We present a construction of pseudorandom function-like states with security against computationally unbounded adversaries, as long as the adversaries only receive (a priori) bounded number of copies. By suitably instantiating the CHS model, we obtain a new approach to construct pseudorandom function-like states in the plain model. - We present separations between pseudorandom function-like states (with super-logarithmic length) and quantum cryptographic primitives, such as interactive key agreement and bit commitment, with classical communication. To show these separations, we prove new results on the indistinguishability of identical versus independent Haar states against LOCC (local operations, classical communication) adversaries.
Last updated:  2024-06-27
Efficient Verifiable Differential Privacy with Input Authenticity in the Local and Shuffle Model
Tariq Bontekoe, Hassan Jameel Asghar, and Fatih Turkmen
Local differential privacy (LDP) is an efficient solution for providing privacy to clients' sensitive data while simultaneously releasing aggregate statistics without relying on a trusted central server (aggregator) as in the central model of differential privacy. The shuffle model with LDP provides an additional layer of privacy, by disconnecting the link between clients and the aggregator, further improving the utility of LDP. However, LDP has been shown to be vulnerable to malicious clients who can perform both input and output manipulation attacks, i.e., before and after applying the LDP mechanism, to skew the aggregator's results. In this work, we show how to prevent malicious clients from compromising LDP schemes. Specifically, we give efficient constructions to prevent both input ánd output manipulation attacks from malicious clients for generic LDP algorithms. Our proposed schemes for verifiable LDP (VLDP) completely protect against output manipulation attacks, and prevent input attacks using signed data, requiring only one-time interaction between client and server, unlike existing alternatives [28, 33]. Most importantly, we are the first to provide an efficient scheme for VLDP in the shuffle model. We describe, and prove secure, two schemes for VLDP in the regular model, and one in the shuffle model. We show that all schemes are highly practical, with client runtimes of < 2 seconds, and server runtimes of 5-7 milliseconds per client.
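As a reminder of what an LDP mechanism looks like (and hence what an output-manipulation attack would target), here is binary randomized response with server-side debiasing; this is generic background, not the paper's verifiable construction.

```python
import math, random

def randomized_response(bit: int, epsilon: float) -> int:
    """Classic binary LDP mechanism: report the true bit with probability
    e^eps / (e^eps + 1), otherwise report the flipped bit."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p else 1 - bit

def debias(reports, epsilon: float) -> float:
    """Unbiased estimate of the true frequency of 1s from the noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

# a malicious client performing output manipulation would simply ignore
# randomized_response and submit any report it likes, skewing debias()
data = [1] * 300 + [0] * 700
reports = [randomized_response(b, epsilon=1.0) for b in data]
print(debias(reports, epsilon=1.0))   # close to 0.3 in expectation
```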
Last updated:  2024-06-27
Embedding Integer Lattices as Ideals into Polynomial Rings
Yihang Cheng, Yansong Feng, and Yanbin Pan
Many lattice-based cryptosystems employ ideal lattices for high efficiency. However, the additional algebraic structure of ideal lattices usually raises security concerns, and it is widely believed that the algebraic structure will help us solve the hard problems in ideal lattices more efficiently. In this paper, we study the additional algebraic structure of ideal lattices further and find that a given ideal lattice in a polynomial ring can be embedded as an ideal into infinitely many different polynomial rings by the coefficient embedding. We design an algorithm to verify whether a given full-rank lattice in $\mathbb{Z}^n$ is an ideal lattice and output all the polynomial rings that the given lattice can be embedded into as an ideal with bit operations $\mathcal{O}(n^3(\log n + B)^2(\log n)^2)$, where $n$ is the dimension of the lattice and $B$ is the upper bound of the bit length of the entries of the input lattice basis. We would like to point out that Ding and Lindner proposed an algorithm for identifying ideal lattices and outputting a single polynomial ring of which the input lattice can be regarded as an ideal with bit operations $\mathcal{O}(n^5B^2)$ in 2007. However, we find a flaw in Ding and Lindner's algorithm, which causes it to fail to identify some ideal lattices.
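The coefficient embedding the paper builds on is easy to state in code: an ideal of Z[x]/(g(x)) generated by a(x) corresponds to the integer lattice spanned by the coefficient vectors of a, x·a, ..., x^{n-1}·a reduced mod g. The sketch below goes only in this (easy) direction; the paper's algorithm solves the harder inverse problem of deciding which rings a given lattice embeds into.

```python
def times_x_mod(a, g):
    """Multiply a(x) by x modulo monic g(x); polynomials are coefficient lists,
    index i holding the coefficient of x^i, with len(a) == deg(g)."""
    n = len(g) - 1
    shifted = [0] + a[:]                       # a(x) * x
    lead = shifted[n]                          # coefficient of x^n to reduce away
    return [shifted[i] - lead * g[i] for i in range(n)]

def ideal_lattice_basis(a, g):
    """Rows: coefficient vectors of a, x*a, ..., x^(n-1)*a  (mod g)."""
    n = len(g) - 1
    rows, cur = [], a[:]
    for _ in range(n):
        rows.append(cur)
        cur = times_x_mod(cur, g)
    return rows

# example: the ideal generated by a(x) = 1 + 2x in Z[x]/(x^4 + 1)
g = [1, 0, 0, 0, 1]                            # x^4 + 1
a = [1, 2, 0, 0]
basis = ideal_lattice_basis(a, g)
# basis == [[1, 2, 0, 0], [0, 1, 2, 0], [0, 0, 1, 2], [-2, 0, 0, 1]]
```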
Last updated:  2024-06-26
PeaceFounder: centralised E2E verifiable evoting via pseudonym braiding and history trees
Janis Erdmanis
PeaceFounder is a centralised E2E verifiable e-voting system that leverages pseudonym braiding and history trees. The immutability of the bulletin board is maintained replication-free by voters’ client devices with locally stored consistency-proof chains. Meanwhile, pseudonym braiding done via an exponentiation mix before the vote allows anonymisation to be transactional with a single braider at a time. In contrast to existing E2E verifiable e-voting systems, it is much easier to deploy as the system is fully centralised, free from threshold decryption ceremonies, trusted setup phases and bulletin board replication. Furthermore, the body of a vote is signed with a braided pseudonym, enabling unlimited ballot types.
Last updated:  2024-06-26
Reduction from Average-Case M-ISIS to Worst-Case CVP Over Perfect Lattices
Samuel Lavery
This paper presents a novel reduction from the average-case hardness of the Module Inhomogeneous Short Integer Solution (M-ISIS) problem to the worst-case hardness of the Closest Vector Problem (CVP) by defining and leveraging “perfect” lattices for cryptographic purposes. Perfect lattices, previously only theoretical constructs, are characterized by their highly regular structure, optimal density, and a central void, which we term the “Origin Cell.” The simplest Origin Cell is a hypercube with edge length 1 centered at the origin, guaranteed to be devoid of any valid lattice points. By exploiting the unique properties of the Origin Cell, we recalibrate the parameters of the M-ISIS and CVP problems. Our results demonstrate that solving M-ISIS on average over perfect lattices is at least as hard as solving CVP in the worst case, thereby providing a robust hardness guarantee for M-ISIS. Additionally, perfect lattices facilitate exceptionally compact cryptographic variables, enhancing the efficiency of cryptographic schemes. This significant finding enhances the theoretical foundation of lattice-based cryptographic problems and confirms the potential of perfect lattices in ensuring strong cryptographic security. The Appendix includes SageMath code to demonstrate the reproducibility of the reduction process from M-ISIS to CVP.
Last updated:  2024-06-26
Constraint-Packing and the Sum-Check Protocol over Binary Tower Fields
Quang Dao and Justin Thaler
SNARKs based on the sum-check protocol often invoke the ``zero-check PIOP''. This reduces the vanishing of many constraints to a single sum-check instance applied to an $n$-variate polynomial of the form $g(x) = \text{eq}(r,x) \cdot p(x)$, where $p$ is a product of multilinear polynomials, $r$ is a random vector, and $\text{eq}$ is the multilinear extension of the equality function. In recent SNARK designs, $p(x)$ is defined over a ``small'' base field, while $r$ is drawn from a large extension field $\mathbb{F}$ for security. Recent papers (Bagad, Domb, and Thaler 2024; Gruen 2024) have optimized the sum-check protocol prover for this setting. However, these works still require the prover to ``pre-compute'' all evaluations of $\text{eq}(r, x)$ as $x$ ranges over $\{0, 1\}^{n}$, and this computation involves about $2^n$ multiplications over the extension field $\mathbb{F}$. In this note, we describe a modification to the zero-check PIOP in the case of binary tower fields that reduces this ``pre-computation'' cost by a factor of close to $\log |\mathbb{F}|$, which is $128$ in important applications. We show that our modification is sound, and that it strictly generalizes a (possibly folklore) technique of constraint-packing over field extensions.
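The pre-computation in question is the table of all eq(r, x) values over the hypercube, built by successive doubling; a minimal version is below (as written it uses about 2·2^n multiplications, and a standard subtraction trick brings this down to about 2^n). The sketch only illustrates the cost being reduced, not the paper's constraint-packing modification, and it uses a prime demo field rather than a binary tower field.

```python
import random

P = 2**61 - 1  # demo prime field (the paper's setting is a large binary tower field)

def eq_table(r):
    """All 2^n evaluations of eq(r, x) for x in {0,1}^n, where
    eq(r, x) = prod_i (r_i * x_i + (1 - r_i) * (1 - x_i))."""
    table = [1]
    for ri in r:
        # each existing entry t splits into the two children t*(1-ri) and t*ri
        table = [v for t in table for v in ((t * (1 - ri)) % P, (t * ri) % P)]
    return table

r = [random.randrange(P) for _ in range(4)]
tbl = eq_table(r)
assert len(tbl) == 2 ** 4
assert sum(tbl) % P == 1        # the eq(r, .) values always sum to 1
```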
Last updated:  2024-06-26
A note on adding zero-knowledge to STARKs
Ulrich Haboeck and Al Kindi
We discuss zero-knowledge in the context of FRI-based STARKs using techniques desirable in practice: Randomization by polynomials over the basefield, and decomposing the overall quotient into polynomials of smaller degree.
Last updated:  2024-06-26
A note on the G-FFT
Ulrich Haboeck
For primes $p$ with $p+1$ being smooth, the G-FFT from Li and Xing [LX23] is an algebraic FFT, which at first glance seems equivalent to the circle FFT from [IACR eprint 2024/278]: It also uses the circle curve over $\mathbb F_p$ (in other words the projective line) as underlying domain, and interpolates by low-degree functions with poles over the same set of points. However, their approach to control the degree of the FFT basis is fundamentally different. The G-FFT makes use of punctured Riemann-Roch spaces, and the construction works with the group doubling map only, no projection onto the $x$-axis involved. In this note we give an elementary description of the G-FFT without using abstract algebra. We describe a variant which uses a simpler, and in our opinion more natural function space, and which treats the exceptional point of the domain (the group identity) differently. In comparison to the circle FFT, the G-FFT (both the original as well as our variant) has the following downsides. Interpolation and domain evaluation cost double the number of multiplications (the twiddle is not an ``odd'' function), and the function space is not invariant under the group action, causing additional overhead when applied in STARKs.
Last updated:  2024-06-26
Reading It like an Open Book: Single-trace Blind Side-channel Attacks on Garbled Circuit Frameworks
Sirui Shen and Chenglu Jin
Garbled circuits (GC) are a secure multiparty computation protocol that enables two parties to jointly compute a function using their private data without revealing it to each other. While garbled circuits are proven secure at the protocol level, implementations can still be vulnerable to side-channel attacks. Recently, side-channel analysis of GC implementations has garnered significant interest from researchers. We investigate popular open-source GC frameworks and discover that the AES encryption used in the garbling process follows a secret-dependent sequence. This vulnerability allows private inputs to be exposed through side-channel analysis. Based on this finding, we propose a side-channel attack on garbled circuits to recover the private inputs of both parties. Our attack does not require access to any plaintexts or ciphertexts in the protocol and is single-trace, adhering to the constraint that a garbled circuit can be executed only once. Furthermore, unlike existing attacks that can only target input non-XOR gates, our method applies to both input and internal non-XOR gates. Consequently, the secrets associated with every non-XOR gate are fully exposed as in an open book. We comprehensively evaluate our attack in various scenarios. First, we perform the attack on single-platform software implementations of standard AES and interleaved AES on a 32-bit ARM processor, achieving a $100\%$ success rate in both cases. Next, we target a hardware implementation on a Xilinx Artix-7 FPGA, where the resolution of power consumption measurements and the number of samples are significantly limited. In this scenario, our attack achieves a success rate of $79.58\%$. Finally, we perform a cross-platform attack on two processors with different microarchitectures representing the two parties. The differing execution cycles and power sensors across the platforms increase the difficulty of side-channel analysis. Despite these challenges, our point-of-interest (POI) selection method allows our attack to achieve a $100\%$ success rate in this scenario as well. We also discuss effective countermeasures that can be readily applied to GC frameworks to mitigate this vulnerability.
Last updated:  2024-06-26
A Practical Protocol for Quantum Oblivious Transfer from One-Way Functions
Eleni Diamanti, Alex B. Grilo, Adriano Innocenzi, Pascal Lefebvre, Verena Yacoub, and Álvaro Yángüez
We present a new simulation-secure quantum oblivious transfer (QOT) protocol based on one-way functions in the plain model. With a focus on practical implementation, our protocol surpasses prior works in efficiency, promising feasible experimental realization. We address potential experimental errors and their correction, offering analytical expressions to facilitate the analysis of the required quantum resources. Technically, we achieve simulation security for QOT through an equivocal and relaxed-extractable quantum bit commitment.
Last updated:  2024-06-26
Adaptively Secure 5 Round Threshold Signatures from MLWE/MSIS and DL with Rewinding
Shuichi Katsumata, Michael Reichle, and Kaoru Takemure
T-out-of-N threshold signatures have recently seen a renewed interest, with various types now available, each offering different tradeoffs. However, one property that has remained elusive is adaptive security. When we target thresholdizing existing efficient signature schemes based on the Fiat-Shamir paradigm such as Schnorr, the elusive nature becomes clear. This class of signature schemes typically relies on the forking lemma to prove unforgeability. That is, an adversary is rewound and run twice within the security game. Such a proof is at odds with adaptive security, as the reduction must be ready to answer 2(T-1) secret key shares in total, implying that it can reconstruct the full secret key. Indeed, prior works either assumed strong idealized models such as the algebraic group model (AGM) or modified the underlying signature scheme so as not to rely on rewinding based proofs. In this work, we propose a new proof technique to construct adaptively secure threshold signatures for existing rewinding-based Fiat-Shamir signatures. As a result, we obtain the following: 1. The first adaptively secure 5 round lattice-based threshold signature under the MLWE and MSIS assumptions in the ROM. The resulting signature is a standard signature of Raccoon, a lattice-based signature scheme by del Pino et al., submitted to the additional NIST call for proposals. 2. The first adaptively secure 5 round threshold signature under the DL assumption in the ROM. The resulting signature is a standard Schnorr signature. To the best of our knowledge, this is the first adaptively secure threshold signature based on DL even assuming stronger models like AGM. Our work is inspired by the recent statically secure lattice-based 3 round threshold signature by del Pino et al. (Eurocrypt 2024) based on Raccoon. While they relied on so-called one-time additive masks to solve lattice-specific issues, we notice that these masks can also be a useful tool to achieve adaptive security. At a very high level, we use these masks throughout the signing protocol to carefully control the information the adversary can learn from the signing transcripts. Intuitively, this allows the reduction to return a total of 2(T-1) randomly sampled secret key shares to the adversary consistently and without being detected, resolving the above paradoxical situation. Lastly, by allowing the parties to maintain a simple state, we can compress our 5 round schemes into 4 rounds.
Last updated:  2024-06-26
Threshold OPRF from Threshold Additive HE
Animesh Singh, Sikhar Patranabis, and Debdeep Mukhopadhyay
An oblivious pseudorandom function (OPRF) is a two-party protocol in which one party holds an input and the other party holds the PRF key, such that the party holding the input learns only the PRF output and the party holding the key learns nothing about the input. Now, in a threshold oblivious pseudorandom function (TOPRF) protocol, a PRF key K is initially shared among T servers. A client can obtain a PRF value by interacting with t (≤ T) servers but is unable to compute the same with up to (t − 1) servers. In this paper, we present a practically efficient homomorphic encryption (HE)-based post-quantum secure TOPRF protocol. Our proposed approach, which is based on a novel use of threshold HE, is agnostic of the underlying PRF and outperforms existing fully homomorphic encryption (FHE)-based approaches for TOPRF computation by several orders of magnitude in terms of running time. The FHE-based approaches require bootstrapping, a computationally intensive operation, and the primary bottleneck for evaluating large-depth circuits. In contrast, our proposed approach is based on a multi-party computation (MPC) protocol that uses a threshold additive HE scheme based on Regev’s cryptosystem (J. ACM 2009) as an alternative to FHE-based approaches. Concretely, we show a novel replacement of bootstrapping required in traditional FHE schemes by a threshold additive HE-based interactive protocol that performs masked decryption followed by table look-ups, jointly performed by a group of servers holding secret shares of the HE decryption key. Finally, we present a practical validation of our approach by realizing an AES-based TOPRF with an evaluation time of less than 1 second on consumer-grade server(s).
Last updated:  2024-06-26
SACfe: Secure Access Control in Functional Encryption with Unbounded Data
Uddipana Dowerah, Subhranil Dutta, Frank Hartmann, Aikaterini Mitrokotsa, Sayantan Mukherjee, and Tapas Pal
Privacy is a major concern in large-scale digital applications, such as cloud-computing, machine learning services, and access control. Users want to protect not only their plain data but also their associated attributes (e.g., age, location, etc). Functional encryption (FE) is a cryptographic tool that allows fine-grained access control over encrypted data. However, existing FE schemes fall short as they are either inefficient and far from practical or they leak sensitive user-specific information. We propose SACfe, a novel attribute-based FE scheme that provides secure, fine-grained access control and hides both the user’s attributes and the function applied to the data, while preserving the data’s confidentiality. Moreover, it enables users to encrypt unbounded-length messages along with an arbitrary number of hidden attributes into ciphertexts. We design SACfe, a protocol for performing linear computation on encrypted data while enforcing access control based on inner product predicates. We show how SACfe can be used for online biometric authentication for privacy-preserving access control. As an additional contribution, we introduce an attribute-based linear FE for unbounded length of messages and functions where access control is realized by monotone span programs. We implement our protocols using the CiFEr cryptographic library and show its efficiency for practical settings.
Last updated:  2024-06-26
GRASP: Accelerating Hash-based PQC Performance on GPU Parallel Architecture
Yijing Ning, Jiankuo Dong, Jingqiang Lin, Fangyu Zheng, Yu Fu, Zhenjiang Dong, and Fu Xiao
$SPHINCS^+$, one of the Post-Quantum Cryptography Digital Signature Algorithms (PQC-DSA) selected by NIST in the third round, features very short public and private key lengths but faces significant performance challenges compared to other post-quantum cryptographic schemes, limiting its suitability for real-world applications. To address these challenges, we propose the GPU-based paRallel Accelerated $SPHINCS^+$ (GRASP), which leverages GPU technology to enhance the efficiency of $SPHINCS^+$ signing and verification processes. We propose an adaptable parallelization strategy for $SPHINCS^+$, analyzing its signing and verification processes to identify critical sections for efficient parallel execution. Utilizing CUDA, we perform bottom-up optimizations, focusing on memory access patterns and hypertree computation, to enhance GPU resource utilization. These efforts, combined with kernel fusion technology, result in significant improvements in throughput and overall performance. Extensive experimentation demonstrates that our optimized CUDA implementation of $SPHINCS^+$ achieves superior performance. Specifically, our GRASP scheme delivers throughput improvements ranging from 1.37× to 5.13× compared to state-of-the-art GPU-based solutions and surpasses the NIST reference implementation by over three orders of magnitude, highlighting a significant performance advantage.
Last updated:  2024-06-25
Oblivious Single Access Machines: A New Model for Oblivious Computation
Ananya Appan, David Heath, and Ling Ren
Oblivious RAM (ORAM) allows a client to securely outsource memory storage to an untrusted server. It has been shown that no ORAM can simultaneously achieve small bandwidth blow-up, small client storage, and a single roundtrip of latency. We consider a weakening of the RAM model, which we call the Single Access Machine (SAM) model. In the SAM model, each memory slot can be written to at most once and read from at most once. We adapt existing tree-based ORAM to obtain an oblivious SAM (OSAM) that has $O(\log n)$ bandwidth blow-up (which we show is optimal), small client storage, and a single roundtrip. OSAM unlocks improvements to oblivious data structures/algorithms. For instance, we achieve oblivious unbalanced binary trees (e.g. tries, splay trees). By leveraging splay trees, we obtain a notion of caching ORAM, where an access in the worst case incurs amortized $O(\log^2 n)$ bandwidth blow-up and $O(\log n)$ roundtrips, but in many common cases (e.g. sequential scans) incurs only amortized $O(\log n)$ bandwidth blow-up and $O(1)$ roundtrips. We also give new oblivious graph algorithms, including computing minimum spanning trees and single source shortest paths, in which the OSAM client reads/writes $O(|E| \cdot \log |E|)$ words using $O(|E|)$ roundtrips, where $|E|$ is the number of edges. This improves over prior custom solutions by a log factor. At a higher level, OSAM provides a general model for oblivious computation. We construct a programming interface around OSAM that supports arbitrary pointer-manipulating programs such that dereferencing a pointer to an object incurs $O(\log d \log n)$ bandwidth blowup and $O(\log d)$ roundtrips, where $d$ is the number of pointers to that object. This new interface captures a wide variety of data structures and algorithms (e.g., trees, tries, doubly-linked lists) while matching or exceeding prior best asymptotic results. It both unifies much of our understanding of oblivious computation and allows the programmer to write oblivious algorithms combining various common data structures/algorithms and beyond.
Last updated:  2024-06-25
FASIL: A challenge-based framework for secure and privacy-preserving federated learning
Ferhat Karakoç, Betül Güvenç Paltun, Leyli Karaçay, Ömer Tuna, Ramin Fuladi, and Utku Gülen
Enhancing privacy in federated learning (FL) without considering robustness can create an open door for attacks such as poisoning attacks on the FL process. Thus, addressing both the privacy and security aspects simultaneously becomes vital. Although there are a few solutions addressing both privacy and security in the literature in recent years, they have some drawbacks, such as requiring two non-colluding servers, heavy cryptographic operations, or a peer-to-peer communication topology. In this paper, we introduce a novel framework that allows the server to run some analysis for detection and mitigation of attacks towards the FL process, while satisfying the confidentiality requirements for the training data against the server. We evaluate the effectiveness of the framework in terms of security and privacy by performing experiments on some concrete examples. We also provide two instantiations of the framework with two different secure aggregation protocols to give a more concrete view of how the framework works, and we analyse the computation and communication overhead of the framework.
Last updated:  2024-06-28
Structured-Seed Local Pseudorandom Generators and their Applications
Dung Bui, Geoffroy Couteau, and Nikolas Melissaris
In this note, we introduce structured-seed local pseudorandom generators, a relaxation of local pseudorandom generators. We provide constructions of this primitive under the sparse-LPN assumption, and explore its implications.
Last updated:  2024-06-25
MaSTer: Maliciously Secure Truncation for Replicated Secret Sharing without Pre-Processing
Martin Zbudila, Erik Pohle, Aysajan Abidin, and Bart Preneel
Secure multi-party computation (MPC) in a three-party, honest majority scenario is currently the state-of-the-art for running machine learning algorithms in a privacy-preserving manner. For efficiency reasons, fixed-point arithmetic is widely used to approximate computation over decimal numbers. After multiplication in fixed-point arithmetic, truncation is required to keep the result's precision. In this paper, we present an efficient three-party truncation protocol secure in the presence of an active adversary without pre-processing and improve on the current state-of-the-art in MPC over rings using replicated secret sharing (RSS). By adding an efficient consistency check, we lift the efficient but only passively secure three-party truncation protocol from the ABY3 framework by Mohassel and Rindal into the malicious setting without pre-processed data. Our benchmark indicates performance improvements of an order of magnitude in the offline phase for a single batch training. Finally, we apply our protocol to a real-world application for diagnostic prediction based on publicly available ECG heartbeat data. We achieve an improvement by a factor of two in the total throughput for both LAN and WAN settings.
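For readers unfamiliar with why truncation is needed at all, the plaintext arithmetic that such protocols emulate is sketched below in Python, with an arbitrary illustrative choice of 13 fractional bits (this is not the paper's protocol; in MPC the final shift is performed on replicated secret shares rather than on the cleartext product).

    F = 13  # fractional bits (arbitrary illustrative choice)

    def encode(x):
        return round(x * (1 << F))  # real number -> fixed-point integer

    def decode(v):
        return v / (1 << F)         # fixed-point integer -> real number

    a, b = encode(3.25), encode(-1.5)
    product = a * b           # the scale is now 2^(2F), so precision must be restored
    truncated = product >> F  # truncation brings the result back to the 2^F scale
    assert decode(truncated) == -4.875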
Last updated:  2024-06-25
Polynomial sharings on two secrets: Buy one, get one free
Paula Arnold, Sebastian Berndt, Thomas Eisenbarth, and Maximilian Orlt
While passive side-channel attacks and active fault attacks have been studied intensively in the last few decades, strong attackers combining these attacks have only been studied relatively recently. Due to its simplicity, most countermeasures against passive attacks are based on additive sharing. Unfortunately, extending these countermeasures against faults often leads to quite a significant performance penalty, either due to the use of expensive cryptographic operations or a large number of shares due to massive duplication. Just recently, Berndt, Eisenbarth, Gourjon, Faust, Orlt, and Seker thus proposed to use polynomial sharing against combined attackers (CRYPTO 2023). While they construct gadgets secure against combined attackers using only a linear number of shares, the overhead introduced might still be too large for practical scenarios. In this work, we show how the overhead of nearly all known constructions using polynomial sharing can be reduced by nearly half by embedding two secrets in the coefficients of one polynomial at the expense of increasing the degree of the polynomial by one. We present a very general framework that allows adapting these constructions to this new sharing scheme and prove the security of this approach against purely passive side-channel attacks, purely active fault attacks, and combined attacks. Furthermore, we present new gadgets allowing us to operate upon the different secrets in a number of useful ways.
Last updated:  2024-06-25
Attribute-Based Threshold Issuance Anonymous Counting Tokens and Its Application to Sybil-Resistant Self-Sovereign Identity
Reyhaneh Rabaninejad, Behzad Abdolmaleki, Sebastian Ramacher, Daniel Slamanig, and Antonis Michalas
Self-sovereign identity (SSI) systems empower users to (anonymously) establish and verify their identity when accessing both digital and real-world resources, emerging as a promising privacy-preserving solution for user-centric identity management. Recent work by Maram et al. proposes the privacy-preserving Sybil-resistant decentralized SSI system CanDID (IEEE S&P 2021). While this is an important step, notable shortcomings undermine its efficacy. The two most significant among them are the following: First, unlinkability breaks in the presence of a single malicious issuer. Second, it introduces interactiveness, as the users are required to communicate each time with issuers to collect credentials intended for use in interactions with applications. This contradicts the goal of SSI, whose aim is to give users full control over their identities. This paper first introduces the concept of publicly verifiable attribute-based threshold anonymous counting tokens (tACT). Unlike recent approaches confined to centralized settings (Benhamouda et al., ASIACRYPT 2023), tACT operates in a distributed-trust environment. Accompanied by a formal security model and a provably secure instantiation, tACT introduces a novel dimension to token issuance, which, we believe, holds independent interest. Next, the paper leverages the proposed tACT scheme to construct an efficient Sybil-resistant SSI system. This system supports various functionalities, including threshold issuance, unlinkable multi-show selective disclosure, and non-interactive, non-transferable, constant-size credentials. Finally, our benchmark results show an efficiency improvement in our construction when compared to CanDID, all while accommodating a greater number of issuers and additionally reducing to a one-round protocol that can be run in parallel with all issuers.
Last updated:  2024-06-25
Constant-Size Unbounded Multi-Hop Fully Homomorphic Proxy Re-Encryption from Lattices
Feixiang Zhao, Huaxiong Wang, and Jian Weng
Proxy re-encryption is a cryptosystem that achieves efficient encrypted data sharing by allowing a proxy to transform a ciphertext encrypted under one key into another ciphertext under a different key. Homomorphic proxy re-encryption (HPRE) extends this concept by integrating homomorphic encryption, allowing not only the sharing of encrypted data but also homomorphic computations on such data. The existing HPRE schemes, however, are limited to a single hop or a bounded number of hops of ciphertext re-encryption. To address this limitation, this paper introduces a novel lattice-based, unbounded multi-hop fully homomorphic proxy re-encryption (FHPRE) scheme with constant-size ciphertexts. Our FHPRE scheme supports an unbounded number of re-encryption operations and enables arbitrary homomorphic computations over original, re-encrypted, and evaluated ciphertexts. Additionally, we propose a potential application of our FHPRE scheme in the form of a non-interactive, constant-size multi-user computation system for cloud computing environments.
Last updated:  2024-06-24
Competitive Policies for Online Collateral Maintenance
Ghada Almashaqbeh, Sixia Chen, and Alexander Russell
Layer-two blockchain protocols emerged to address scalability issues related to fees, storage cost, and confirmation delay of on-chain transactions. They aggregate off-chain transactions into fewer on-chain ones, thus offering immediate settlement and reduced transaction fees. To preserve the security of the underlying ledger, layer-two protocols often work in a collateralized model; resources are committed on-chain to back up off-chain activities. A fundamental challenge that arises in this setup is determining a policy for establishing, committing, and replenishing the collateral in a way that maximizes the value of settled transactions. In this paper, we study this problem under two settings that model collateralized layer-two protocols. The first is a general model in which a party has an on-chain collateral $C$ with a policy to decide whether to settle or discard each incoming transaction. The policy also specifies when to replenish $C$ based on the remaining collateral value. The second model considers a discrete setup in which $C$ is divided among $k$ wallets, each of size $C/k$, such that when a wallet is full, and so cannot settle any incoming transactions, it is replenished. We devise several online policies for these models, and show how competitive they are compared to optimal (offline) policies that have full knowledge of the incoming transaction stream. To the best of our knowledge, we are the first to study and formulate online competitive policies for collateral and wallet management in the blockchain setting.
Last updated:  2024-06-24
ammBoost: State Growth Control for AMMs
Nicholas Michel, Mohamed E. Najd, and Ghada Almashaqbeh
Automated market makers (AMMs) are a form of decentralized cryptocurrency exchanges and considered a prime example of Decentralized Finance (DeFi) applications. Their popularity and high trading activity have resulted in millions of on-chain transactions leading to serious scalability issues. In this paper, we address the on-chain storage overhead problem of AMMs by utilizing a new sidechain architecture as a layer 2 solution, building a system called ammBoost. Our system reduces the amount of on-chain transactions, boosts throughput, and supports blockchain pruning. We devise several techniques to enable layer 2 processing for AMMs while preserving correctness and security of the underlying AMM. We also build a proof-of-concept of ammBoost for a Uniswap-inspired use case to empirically evaluate its performance. Our experiments show that ammBoost decreases the gas cost by 94.53% and the chain growth by at least 80%, and that it can support up to 500x of the daily traffic volume observed for Uniswap in practice.
Last updated:  2024-06-24
chainBoost: A Secure Performance Booster for Blockchain-based Resource Markets
Zahra Motaqy, Mohamed E. Najd, and Ghada Almashaqbeh
Cryptocurrencies and blockchain technology provide an innovative model for reshaping digital services. Driven by the movement toward Web 3.0, recent systems started to provide distributed services, such as computation outsourcing or file storage, on top of the currency exchange medium. By allowing anyone to join and collect cryptocurrency payments for serving others, these systems create decentralized markets for trading digital resources. Yet, there is still a big gap between the promise of these markets and their practical viability. Existing initiatives are still early-stage and have already encountered security and efficiency obstacles. At the same time, existing work around promising ideas, specifically sidechains, falls short of exploiting their full potential in addressing these problems. To bridge this gap, we propose chainBoost, a secure performance booster for decentralized resource markets. It expedites service-related operations, reduces the blockchain size, and supports flexible service-payment exchange modalities at low overhead. At its core, chainBoost employs a sidechain that has a (security and semantic) mutual dependence with the mainchain and to which the system offloads heavy/frequent operations. To enable it, we develop a novel sidechain architecture composed of temporary and permanent blocks, a block suppression mechanism to prune the sidechain, a syncing protocol to permit arbitrary data exchange between the two chains, and an autorecovery protocol to support robustness and resilience. We analyze the security of chainBoost, and implement a proof-of-concept prototype for a distributed file storage market as a use case. For a market handling around 2000 transactions per round, our experiments show up to 11x improvement in throughput and 94% reduction in confirmation time. They also show that chainBoost can reduce the main blockchain size by about 90%, and that it outperforms comparable optimistic rollup solutions by reducing transaction finality by 99.7%.
Last updated:  2024-06-24
Exploiting Clock-Slew Dependent Variability in CMOS Digital Circuits Towards Power and EM SCA Resilience
Archisman Ghosh, Md. Abdur Rahman, Debayan Das, Santosh Ghosh, and Shreyas Sen
Mathematically secured cryptographic implementations leak critical information through power, EM emanations, etc. Several circuit-level countermeasures have been proposed to hinder side-channel leakage at the source. Circuit-level countermeasures (e.g., IVR, STELLAR, WDDL) are often preferred as they are generic and have low overhead. They either dither the voltage randomly or attenuate the meaningful signature at the $V_{DD}$ port. Although any digital implementation has two generic ports, namely the clock and $V_{DD}$, circuit-level countermeasures primarily focus on the $V_{DD}$ port, and countermeasures using the clock remain largely unexplored. System-level clock randomization is ineffective due to post-processing techniques. This work, for the first time, presents clock-based countermeasures by providing a controlled slew that exploits the inherent variability of digital circuits in terms of power consumption and transforms power/EM emanation into a complex function of data and slew. Due to this, the minimum traces-to-disclosure (MTD) improves by 100$\times$ with respect to the unprotected implementation. Moreover, the slewed clock reduces the leaky frequency, and the clock randomization countermeasure becomes more effective as it is harder to post-process in the frequency domain. Clock slew and randomization together have a cumulative effect (1800x) that exceeds the product of the individual techniques (100x and 5x, respectively). In brief, this paper presents a generic, synthesizable clock-level countermeasure technique that improves the MTD by 1800$\times$ and incurs only 11% area overhead, $<3\%$ power overhead (measured), and $<6\%$ performance overhead (measured). Moreover, it can be easily combined with other power-port-based mitigation techniques for enhanced security.
Last updated:  2024-06-24
Sparsity-Aware Protocol for ZK-friendly ML Models: Shedding Lights on Practical ZKML
Alan Li, Qingkai Liang, and Mo Dong
As deep learning is being widely adopted across various domains, ensuring the integrity of models has become increasingly crucial. Despite recent advances in Zero-Knowledge Machine Learning (ZKML) techniques, proving inference over large ML models is still prohibitive. To enable practical ZKML, model simplification techniques like pruning and quantization should be applied without hesitation. Contrary to conventional belief, recent developments in the ML space have demonstrated that these simplification techniques not only condense complex models into forms with sparse, low-bit weight matrices, but also maintain exceptionally high model accuracy that matches their unsimplified counterparts. While such transformed models seem inherently ZK-friendly, directly applying existing ZK proof frameworks still leads to suboptimal inference proving performance. To make ZKML truly practical, a quantization-and-pruning-aware ZKML framework is needed. In this paper, we propose SpaGKR, a novel sparsity-aware ZKML framework that is proven to surpass the capabilities of existing ZKML methods. SpaGKR is a general framework that is widely applicable to any computation structure where sparsity arises. It is designed to be modular - all existing GKR-based ZKML frameworks can be seamlessly integrated with it to obtain remarkable compounding performance enhancements. We tailor SpaGKR specifically to the most commonly used neural network structure - the linear layer - and propose the SpaGKR-LS protocol that achieves asymptotically optimal prover time. Notably, when applying SpaGKR-LS to a special family of simplified models - ternary networks - it achieves further efficiency gains by additionally leveraging the low-bit nature of model parameters.
Last updated:  2024-06-24
Accelerating pairings on BW10 and BW14 Curves
Senegue Gomez Nyamsi, Laurian Guimagang Azebaze, and Emmanuel Fouotsa
Since the advent of pairing-based cryptography, many researchers have developed several techniques and variants of pairings to optimise the speed of pairing computations. The selection of the elliptic curve for a given pairing-based protocol is crucial for operations in the first and second pairing groups of points of the elliptic curve and for many cryptographic schemes. A new variant of superoptimal pairing was proposed in 2023, namely the x-superoptimal pairing on curves with odd prime embedding degrees BW13-310 and BW19-286. This paper extends the definition of the x-superoptimal pairing to elliptic curves with even embedding degrees, BW10-511 and BW14-351, at the 128-bit security level. We provide suitable formulas for the x-superoptimal pairing on BW10-511 and BW14-351, whose Miller loop is about $13.5\%$ and $21.6\%$ faster than that of the optimal ate pairing on BW10-511 and BW14-351, respectively. The correctness and bilinearity of the x-superoptimal pairing on BW10-511 and BW14-351 have been verified with Magma code.
Last updated:  2024-06-24
A Succinct Range Proof for Polynomial-based Vector Commitment
Rui Gao, Zhiguo Wan, Yuncong Hu, and Huaqun Wang
A range proof serves as a protocol for the prover to prove to the verifier that a committed number lies in a specified range, such as $[0,2^n)$, without disclosing the actual value. Range proofs find extensive application in various domains. However, the efficiency of many existing schemes diminishes significantly when confronted with batch proofs encompassing multiple elements. To improve scalability and efficiency, we propose MissileProof, a vector range proof scheme, proving that every element in the committed vector is within $[0,2^n)$. We first reduce this argument to a bi-to-univariate SumCheck problem and a bivariate polynomial ZeroTest problem. Then, generalizing the idea of the univariate SumCheck PIOP, we design a bi-to-univariate SumCheck PIOP. By introducing a random polynomial, we construct the bivariate polynomial ZeroTest using a univariate polynomial ZeroTest and a univariate polynomial SumCheck PIOP. Finally, combining the PIOP for the vector range proof, a KZG-based polynomial commitment scheme, and the Fiat-Shamir transformation, we get a zero-knowledge succinct non-interactive vector range proof. Compared with existing schemes, our scheme has the optimal proof size ($O(1)$), the optimal commitment length ($O(1)$), and the optimal verification time ($O(1)$), at the expense of slightly sacrificing proof time ($O(l\log l\cdot n\log n)$ operations on the prime field for FFT and $O(ln)$ group exponentiations in $\mathbb{G}$). Moreover, we implemented an anti-money-laundering stateless blockchain based on MissileProof. The gas consumption of the verification smart contract is reduced by 85%.
Last updated:  2024-06-24
Expediting Homomorphic Computation via Multiplicative Complexity-aware Multiplicative Depth Minimization
Mingfei Yu and Giovanni De Micheli
Fully homomorphic encryption (FHE) enables secure data processing without compromising data access, but its computational cost and slower execution compared to plaintext operations pose challenges. The growing interest in FHE-based secure computation necessitates the acceleration of homomorphic computations. While existing research primarily targets the reduction of the multiplicative depth (MD) of homomorphic circuits, this paper addresses the trade-off between MD reduction and the increase in multiplicative complexity (MC), a critical gap often overlooked during circuit optimization and potentially resulting in suboptimal outcomes. Three contributions are presented: (a) an exact synthesis paradigm for optimal homomorphic circuit implementations, (b) an efficient heuristic algorithm named MC-aware MD minimization, and (c) a homomorphic circuit optimization flow combining MC-aware MD minimization with existing MD reduction techniques. Experimental results demonstrate a 21.32% average reduction in homomorphic computation time and showcase significantly improved efficiency in circuit optimization.
Last updated:  2024-06-24
Grafting: Complementing RNS in CKKS
Jung Hee Cheon, Hyeongmin Choe, Minsik Kang, and Jaehyung Kim
The RNS variant of the CKKS scheme (SAC 2018) is widely implemented due to its computational efficiency. However, the current optimized implementations of the RNS-CKKS scheme have a limitation when choosing the ciphertext modulus. They require the scale factors to be approximately equal to a factor (or a product of factors) of the ciphertext modulus. This restriction causes inefficiency when the scale factor is not close to a power of the machine's word size, wasting the machine's computation budget. In this paper, we solve this implementation-side issue algorithmically by introducing \emph{Grafting}, a ciphertext modulus management system. In Grafting, we mitigate the link between the ciphertext modulus and the application-dependent scale factor. We efficiently enable rescaling by an arbitrary number of bits by proposing a method that manages the ciphertext modulus with mostly word-sized factors. Thus, we can fully utilize the machine architecture with word-sized factors of the ciphertext modulus while keeping the application-dependent scale factors. This also leads to a hardware-friendly RNS-CKKS implementation as a side effect. Furthermore, we apply our technique to Tuple-CKKS multiplication (CCS 2023), solving a restriction due to small scale factors. Our proof-of-concept implementation shows that the overall complexity of RNS-CKKS is almost proportional to the number of coprime factors comprising the ciphertext modulus, of size smaller than the machine's word size. This results in a substantial speed-up from Grafting: $17$-$51$% faster homomorphic multiplications and $43$% faster CoeffsToSlots in bootstrapping, implemented based on the HEaaN library. We estimate that the computational gain could range up to a $1.71\times$ speed-up for the current parameters used in the RNS-CKKS libraries.
Last updated:  2024-06-22
Tempora-Fusion: Time-Lock Puzzle with Efficient Verifiable Homomorphic Linear Combination
Aydin Abadi
To securely transmit sensitive information into the future, Time-Lock Puzzles (TLPs) have been developed. Their applications include scheduled payments, timed commitments, e-voting, and sealed-bid auctions. Homomorphic TLP is a key variant of TLP that enables computation on puzzles from different clients. This allows a solver/server to tackle only a single puzzle encoding the computation's result. However, existing homomorphic TLPs lack support for verifying the correctness of the computation results. We address this limitation by introducing Tempora-Fusion, a TLP that allows a server to perform homomorphic linear combinations of puzzles from different clients while ensuring verification of computation correctness. This scheme avoids asymmetric-key cryptography for verification, thus paving the way for efficient implementations. We discuss our scheme's application in various domains, such as federated learning, scheduled payments in online banking, and e-voting.
Last updated:  2024-06-22
Supersonic OT: Fast Unconditionally Secure Oblivious Transfer
Aydin Abadi and Yvo Desmedt
Oblivious Transfer (OT) is a fundamental cryptographic protocol with applications in secure Multi-Party Computation, Federated Learning, and Private Set Intersection. With the advent of quantum computing, it is crucial to develop unconditionally secure core primitives like OT to ensure their continued security in the post-quantum era. Despite over four decades since OT's introduction, the literature has predominantly relied on computational assumptions, except in cases using unconventional methods like noisy channels or a fully trusted party. Introducing “Supersonic OT”, a highly efficient and unconditionally secure OT scheme that avoids public-key-based primitives, we offer an alternative to traditional approaches. Supersonic OT enables a receiver to obtain a response of size O(1). Its simple (yet non-trivial) design facilitates easy security analysis and implementation. The protocol employs a basic secret-sharing scheme, controlled swaps, the one-time pad, and a third-party helper who may be corrupted by a semi-honest adversary. Our implementation and runtime analysis indicate that a single instance of Supersonic OT completes in 0.35 milliseconds, making it up to 2000 times faster than the state-of-the-art base OT.
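For orientation only, the classical precomputed OT with a trusted helper (in the style of Beaver's trusted-initializer model) is sketched below in Python. This is not the Supersonic OT protocol of the abstract, but it shows the general shape of unconditionally secure OT aided by a third party: the helper deals correlated randomness offline, and the online phase uses only one-time-pad-style XORs.

    import secrets

    def helper_setup(length):
        # Offline: the helper deals correlated randomness to sender and receiver.
        r0, r1 = secrets.token_bytes(length), secrets.token_bytes(length)
        d = secrets.randbelow(2)
        return (r0, r1), (d, r0 if d == 0 else r1)  # sender's part, receiver's part

    def xor(x, y):
        return bytes(a ^ b for a, b in zip(x, y))

    def ot(m0, m1, c):
        (r0, r1), (d, rd) = helper_setup(len(m0))
        e = c ^ d                             # receiver -> sender
        y0 = xor(m0, r0 if e == 0 else r1)    # sender -> receiver
        y1 = xor(m1, r1 if e == 0 else r0)
        return xor(y0 if c == 0 else y1, rd)  # receiver's local output

    m0, m1 = b"secret message 0", b"secret message 1"
    assert ot(m0, m1, 0) == m0 and ot(m0, m1, 1) == m1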
Last updated:  2024-06-29
Secure Vickrey Auctions with Rational Parties
Chaya Ganesh, Shreyas Gupta, Bhavana Kanukurthi, and Girisha Shankar
In this work, we construct a second-price (Vickrey) auction protocol (SPA), which does not require any auctioneers and ensures total privacy in the presence of rational parties participating in the auction. In particular, the confidentiality of the highest bid and the identity of the second-highest bidder are protected. We model the bidders participating in the second-price auction as rational, computationally bounded, and privacy-sensitive parties. These are self-interested agents who care about winning the auction more than learning about the private bids of other parties. A rational party does not deviate from the protocol arbitrarily but does so only for its own individual `advantage' -- without any consideration for others. Such an advantage is modeled using suitable utility functions. We show that for rational and computationally bounded parties participating in our second-price auction protocol, there exists a privacy-preserving dominant strategy equilibrium in which every party prefers to follow the protocol rather than to deviate. Our protocol is implemented using open-source cryptographic constructs. Running our SPA protocol on commodity hardware with $15$ bidders, with bids of length $10$ bits, completes in $1.26$ sec and has total communication of $0.77$MB, whereas, under similar conditions, the Atlas (semi-honest) protocol takes $40\%$ more time ($2.11$ sec) and $87\%$ more communication ($6.09$MB).
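The plaintext functionality that the SPA protocol computes securely is just the second-price rule, sketched below with invented example data; the paper's contribution is to compute exactly this outcome among rational parties without revealing the highest bid or the identity of the second-highest bidder.

    def vickrey_outcome(bids):
        # bids: dict mapping bidder -> bid; ties are broken arbitrarily here.
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1]   # the winner pays the second-highest bid
        return winner, price

    assert vickrey_outcome({"A": 7, "B": 10, "C": 4}) == ("B", 7)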
Last updated:  2024-06-28
FSSiBNN: FSS-based Secure Binarized Neural Network Inference with Free Bitwidth Conversion
Peng Yang, Zoe Lin Jiang, Jiehang Zhuang, Junbin Fang, Siu Ming Yiu, and Xuan Wang
Neural network inference as a service enables a cloud server to provide inference services to clients. To ensure the privacy of both the cloud server's model and the client's data, secure neural network inference is essential. Binarized neural networks (BNNs), which use binary weights and activations, are often employed to accelerate inference. However, achieving secure BNN inference with secure multi-party computation (MPC) is challenging because MPC protocols cannot directly operate on values of different bitwidths and require bitwidth conversion. Existing bitwidth conversion schemes expand the bitwidths of weights and activations, leading to significant communication overhead. To address these challenges, we propose FSSiBNN, a secure BNN inference framework featuring free bitwidth conversion based on function secret sharing (FSS). By leveraging FSS, which supports arbitrary input and output bitwidths, we introduce a bitwidth-reduced parameter encoding scheme. This scheme seamlessly integrates bitwidth conversion into FSS-based secure binary activation and max pooling protocols, thereby eliminating the additional communication overhead. Additionally, we enhance communication efficiency by combining and converting multiple BNN layers into fewer matrix multiplication and comparison operations. We precompute matrix multiplication tuples for matrix multiplication and FSS keys for comparison during the offline phase, enabling constant-round online inference. In our experiments, we evaluated various datasets and models, comparing our results with state-of-the-art frameworks. Compared with the two-party framework XONN (USENIX Security '19), FSSiBNN achieves approximately 7$\times$ faster inference times and reduces communication overhead by about 577$\times$. Compared with the three-party frameworks SecureBiNN (ESORICS '22) and FLEXBNN (TIFS '23), FSSiBNN is approximately 2.5$\times$ faster in inference time and reduces communication overhead by 1.3$\times$ to 16.4$\times$.
Last updated:  2024-06-21
Improved Reductions from Noisy to Bounded and Probing Leakages via Hockey-Stick Divergences
Maciej Obremski, João Ribeiro, Lawrence Roy, François-Xavier Standaert, and Daniele Venturi
There exists a mismatch between the theory and practice of cryptography in the presence of leakage. On the theoretical front, the bounded leakage model, where the adversary learns bounded-length but noiseless information about secret components, and the random probing model, where the adversary learns some internal values of a leaking implementation with some probability, are convenient abstractions to analyze the security of numerous designs. On the practical front, side-channel attacks produce long transcripts which are inherently noisy but provide information about all internal computations, and this noisiness is usually evaluated with closely related metrics like the mutual information or statistical distance. Ideally, we would like to claim that resilience to bounded leakage or random probing implies resilience to noisy leakage evaluated according to these metrics. However, prior work (Duc, Dziembowski and Faust, Eurocrypt 2014; Brian et al., Eurocrypt 2021) has shown that proving such reductions with useful parameters is challenging. In this work, we study noisy leakage models stemming from hockey-stick divergences, which generalize statistical distance and are also the basis of differential privacy. First, we show that resilience to bounded leakage and random probing implies resilience to our new noisy leakage model with improved parameters compared to models based on the statistical distance or mutual information. Second, we establish composition theorems for our model, showing that these connections extend to a setting where multiple leakages are obtained from a leaking implementation. We complement our theoretical results with a discussion of practical relevance, highlighting that (i) the reduction to bounded leakage applies to realistic leakage functions with noise levels that are decreased by several orders of magnitude compared to Brian et al., and (ii) the reduction to random probing usefully generalizes the seminal work of Duc, Dziembowski, and Faust, although it remains limited when the field size in which masking operates grows (i.e., hockey-stick divergences can better hide the field size dependency of the noise requirements, but do not annihilate it).
Last updated:  2024-06-21
A Deep Study of The Impossible Boomerang Distinguishers: New Construction Theory and Automatic Search Methods
Xichao Hu, Dengguo Feng, Lin Jiao, Yonglin Hao, Xinxin Gong, and Yongqiang Li
The impossible boomerang attack (IBA) is a combination of the impossible differential attack and the boomerang attack, which has demonstrated remarkable power in the security evaluation of AES and other block ciphers. However, this method has not received sufficient attention in the field of symmetric cipher analysis. The only existing search method for impossible boomerang distinguishers (IBD), the core of IBAs, is the $\mathcal{UB}\text{-method}$, but it is considered rather rudimentary given current technological advancements and may result in missed opportunities for effective attacks. Therefore, this paper delves into a comprehensive study on the construction theory and automatic search methods of IBDs. Theoretically, we propose 5 IBD constructions aligned with the techniques of arbitrary S-box, boomerang distinguisher, Boomerang Connectivity Table, U/L/EBCT and mixed tables for differential propagation for SPN-network block ciphers, and 2 IBD constructions accompanied by state propagation for block ciphers with any structure. Furthermore, we investigate the relationship among these IBD constructions and demonstrate that the most superior IBD aligns precisely with the original definition. Technically, we develop a general SAT-based automatic search tool for IBDs by introducing optimized search strategies of the composite model method and the mixed model method. This tool not only considers the details of each operation but also takes into account the impact of the key schedule in a single-key setting. As applications, we first acquire 59584 4-round 1-active-word truncated IBDs for AES-128, and 192 of those IBDs cannot be detected by the $\mathcal{UB}\text{-method}$. For Midori64, we first demonstrate the non-existence of $7$-round $1$-active-word truncated IBDs, and obtain $7296$ $6$-round $1$-active-word truncated IBDs, which is complementary to the finding that there are no existing $6$-round $1$-active-word truncated IDs. For PRESENT-80, we get the first 6-round IBDs that cannot be detected by the $\mathcal{UB}\text{-method}$. These results indicate that our method outperforms the $\mathcal{UB}\text{-method}$ and offers an advantage over IDs. We believe that our work can bring new insights to symmetric cipher analysis.
Last updated:  2024-06-21
On the vector subspaces of $\mathbb{F}_{2^n}$ over which the multiplicative inverse function sums to zero
Claude Carlet
We study the behavior of the multiplicative inverse function (which plays an important role in cryptography and in the study of finite fields), with respect to a recently introduced generalization of almost perfect nonlinearity (APN), called $k$th-order sum-freedom, that extends a classical characterization of APN functions and also has some relationship with integral attacks. This generalization corresponds to the fact that a vectorial function $F:\mathbb F_2^n\mapsto \mathbb F_2^m$ sums to a nonzero value over every $k$-dimensional affine subspace of $\mathbb F_2^n$, for some $k\leq n$. The sum of the values of the inverse function $x\in \mathbb F_{2^n}\mapsto x^{2^n-2}\in \mathbb F_{2^n}$ over any affine subspace $A$ of $\mathbb{F}_{2^n}$ not containing 0 (i.e., not a vector space) is easy to address: there exists a simple expression of such a sum which shows that it never vanishes. We study in the present paper the case of a vector subspace (a linear subspace), which is much less simple to handle. We show that the sum depends on a coefficient in subspace polynomials. We derive several expressions of this coefficient. Then we study for which values of $k$ the multiplicative inverse function can sum to nonzero values over all $k$-dimensional vector subspaces. We show that, for every $k$ not co-prime with $n$, it sums to zero over at least one $k$-dimensional $\mathbb{F}_2$-subspace of $\mathbb{F}_{2^n}$. We study the behavior of the inverse function over direct sums of vector spaces and we deduce that the inverse function is $k$th-order sum-free if and only if it is $(n-k)$th-order sum-free. We derive several results on the sums of values of the inverse function over vector subspaces, addressing in particular the cases of dimension at most 3 (equivalently, of co-dimension at most 3). We leave other cases open and provide computer investigation results.
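The phenomenon is easy to experiment with at tiny parameters. The brute-force Python check below, written for this note with n = 4 and the irreducible polynomial x^4 + x + 1, enumerates every 2-dimensional vector subspace of GF(2^4) and confirms that the inverse function, extended by 0 -> 0, sums to zero over at least one of them (the subfield GF(4) is such a subspace), consistent with the case where k and n are not co-prime.

    IRRED, N = 0b10011, 4  # GF(2^4) defined by x^4 + x + 1

    def gf_mul(a, b):
        # Carry-less multiplication reduced modulo the irreducible polynomial.
        r = 0
        while b:
            if b & 1:
                r ^= a
            b >>= 1
            a <<= 1
            if a & (1 << N):
                a ^= IRRED
        return r

    def inv_map(x):
        # x -> x^(2^N - 2): the multiplicative inverse, extended by 0 -> 0.
        r, e = 1, (1 << N) - 2
        while e:
            if e & 1:
                r = gf_mul(r, x)
            x = gf_mul(x, x)
            e >>= 1
        return r

    seen, zero_sum = set(), []
    for a in range(1, 1 << N):
        for b in range(a + 1, 1 << N):
            space = frozenset({0, a, b, a ^ b})  # 2-dimensional subspace spanned by a, b
            if space in seen:
                continue
            seen.add(space)
            s = 0
            for x in space:
                s ^= inv_map(x)
            if s == 0:
                zero_sum.append(sorted(space))
    print(len(seen), "subspaces checked; these sum to zero:", zero_sum)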
Last updated:  2024-06-21
Delegated-Query Oblivious Transfer and its Practical Applications
Yvo Desmedt and Aydin Abadi
Databases play a pivotal role in the contemporary World Wide Web and the world of cloud computing. Unfortunately, numerous privacy violations have recently garnered attention in the news. To enhance database privacy, we consider Oblivious Transfer (OT), an elegant cryptographic technology. Our observation reveals that existing research in this domain primarily concentrates on theoretical cryptographic applications, overlooking various practical aspects: - OTs assume parties have direct access to databases. Our "1-out-of-2 Delegated-Query OT" enables parties to privately query a database, without direct access. - With the rise of cloud computing, physically separated databases may no longer remain so. Our "1-out-of-2 Delegated-Query Multi-Receiver OT" protects privacy in such evolving scenarios. - Research often ignores the limitations of thin clients, e.g., Internet of Things devices. To address this, we propose a compiler that transforms any 1-out-of-n OT into a thin client version.
Last updated:  2024-06-21
Differential Fault Attack on HE-Friendly Stream Ciphers: Masta, Pasta and Elisabeth
Weizhe Wang and Deng Tang
In this paper, we propose the Differential Fault Attack (DFA) on three Homomorphic Encryption (HE) friendly stream ciphers Masta, Pasta, and Elisabeth. Both Masta and Pasta are Rasta-like ciphers with publicly derived and pseudorandom affine layers. The design of Elisabeth is an extension of FLIP and FiLIP, following the group filter permutator paradigm. All these three ciphers operate on elements over $\mathbb{Z}_p$ or $\mathbb{Z}_{2^n}$, rather than $\mathbb{Z}_2$. We can recover the secret keys of all the targeted ciphers through DFA. In particular, for Elisabeth, we present a new method to determine the filtering path, which is vital to make the attack practical. Our attacks on various instances of Masta are practical and require only one block of keystream and a single word-based fault. By injecting three word-based faults, we can theoretically mount DFA on two instances of Pasta, Pasta-3 and Pasta-4. For our DFA on Elisabeth-4, the only instance of the Elisabeth family, a single bit-based fault injection is needed. With 15000 normal and faulty keystream words, the DFA on Elisabeth-4 can be completed within several minutes.
Last updated:  2024-06-21
Relaxed Vector Commitment for Shorter Signatures
Seongkwang Kim, Byeonghak Lee, and Mincheol Son
The MPC-in-the-Head (MPCitH) paradigm has recently gained traction as a foundation for post-quantum signature schemes, offering robust security without the need for trapdoors. Despite its strong security profile, MPCitH-based schemes suffer from high computational overhead and large signature sizes, limiting their practical application. This work addresses these inefficiencies by enhancing vector commitments within MPCitH-based schemes. We introduce the concept of vector semi-commitment, which relaxes traditional vector commitment requirements without compromising security, thus reducing signature size while maintaining performance. We instantiate vector semi-commitment schemes in both the random oracle model and the ideal cipher model, leveraging recent optimizations such as the Half-tree technique. Additionally, we propose a key injection technique that further minimizes signature size by embedding the secret key into the Half-GGM tree. We apply these improvements to the BN++ signature scheme and prove it fully secure in the ideal cipher model. Implementing these improvements in the $\mathsf{AIMer}$ v2.0 signature scheme, we achieve up to 18% shorter signatures and up to 112% faster signing and verification speeds, setting new benchmarks for MPCitH-based schemes.
Last updated:  2024-06-21
zkVoting : Zero-knowledge proof based coercion-resistant and E2E verifiable e-voting system
Seongho Park, Jaekyoung Choi, Jihye Kim, and Hyunok Oh
We introduce ${zkVoting}$, a coercion-resistant e-voting system that utilizes a fake keys approach based on a novel nullifiable commitment scheme. This scheme allows voters to receive both real and fake commitment keys from a registrar. Each ballot includes this commitment, but only the tallier can efficiently discern the fake ballots, simplifying the tally process to $\mathcal{O}(n)$ and ensuring coercion resistance. ${zkVoting}$ also preserves voter anonymity by ensuring each ballot conceals the voter's identity. Additionally, by integrating zero-knowledge proofs, ${zkVoting}$ achieves end-to-end (E2E) verifiability. We formally prove its security and demonstrate its practicality for real-world applications, with a ballot casting time of 2.3 seconds and a tally time of 3.9 milliseconds per ballot.
Last updated:  2024-06-22
Elementary Formulas for Greatest Common Divisors and Semiprime Factors
Joseph M. Shunia
We present new formulas for computing greatest common divisors (GCDs) and extracting the prime factors of semiprimes using only elementary arithmetic operations: addition, subtraction, multiplication, floored division, and exponentiation. Our GCD formula simplifies a result of Mazzanti, and is derived using Kronecker substitution techniques from our previous work. We utilize the GCD formula, along with recent developments on arithmetic terms for square roots and factorials, to derive explicit expressions for the prime factors of a semiprime $n=pq$.
Last updated:  2024-06-20
Guidance for Efficient Selection of Secure Parameters for Fully Homomorphic Encryption
Elena Kirshanova, Chiara Marcolla, and Sergi Rovira
The field of Fully Homomorphic Encryption (FHE) has seen many theoretical and computational advances in recent years, bringing the technology closer to practicality than ever before. For this reason, practitioners from neighbouring fields such as machine learning have sought to understand FHE to provide privacy to their work. Unfortunately, selecting secure and efficient parameters in FHE is a daunting task due to the many interdependencies between the parameters involved. In this work, we solve this problem by moving away from the standard parameter selection procedure, introducing formulas which provide secure and optimal parameters for any lattice-based scheme. We build our formulas from a strong theoretical foundation based on cryptanalysis against LWE.
Last updated:  2024-06-20
File-Injection Attacks on Searchable Encryption, Based on Binomial Structures
Tjard Langhout, Huanhuan Chen, and Kaitai Liang
One distinguishing feature of file-injection attacks on searchable encryption schemes is the 100% query recovery rate, i.e., confirming the corresponding keyword for each query. The main efficiency consideration of file-injection attacks is the number of injected files. In the work of Zhang et al. (USENIX 2016), $\lceil\log_2|K|\rceil$ injected files are required, each of which contains $|K|/2$ keywords for the keyword set $K$. Based on the construction of the uniform $(s,n)$-set, Wang et al. need fewer injected files when considering the threshold countermeasure. In this work, we propose a new attack that further reduces the number of injected files: Wang et al. need up to 38% more injections to achieve the same results. The attack is based on an increment $(s,n)$-set, which is also defined in this paper.
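For context, the baseline attack of Zhang et al. that the abstract refers to has a very short description: index the keyword universe from 0, let injected file i contain every keyword whose index has bit i set, and read off the queried keyword's index in binary from which injected files match the query. The toy Python sketch below, written for this note, illustrates that baseline only and is not the new increment $(s,n)$-set attack.

    import math

    def build_injected_files(keywords):
        # File i contains every keyword whose index has bit i set, so
        # ceil(log2 |K|) files suffice and each file holds |K|/2 keywords.
        m = math.ceil(math.log2(len(keywords)))
        return [[kw for idx, kw in enumerate(keywords) if (idx >> i) & 1]
                for i in range(m)]

    def recover_keyword(keywords, files, matches):
        # matches[i] is True iff the query token matched injected file i.
        return keywords[sum(1 << i for i, hit in enumerate(matches) if hit)]

    keywords = [f"kw{i}" for i in range(8)]   # |K| = 8, so 3 injected files
    files = build_injected_files(keywords)
    target = "kw5"                            # keyword behind the observed query
    matches = [target in f for f in files]    # the observable access pattern
    assert recover_keyword(keywords, files, matches) == target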
Last updated:  2024-06-20
ProxCode: Efficient Biometric Proximity Searchable Encryption from Error Correcting Codes
Maryam Rezapour and Benjamin Fuller
This work builds approximate proximity searchable encryption. Secure biometric databases are the primary application. Prior work (Kuzu, Islam, and Kantarcioglu, ICDE 2012) combines locality-sensitive hashes, or LSHs (Indyk, STOC ’98), with oblivious multimaps. The multimap associates LSH outputs as keywords to biometrics as values. When the desired result set is of size at most one, we show a new preprocessing technique and system called ProxCode that inserts shares of a linear secret sharing scheme into the map instead of the full biometric. Instead of choosing shares independently, shares are correlated so that exactly one share is associated with each keyword/LSH output. As a result, one can rely on a map instead of a multimap. Secure maps are easier to construct with low leakage than multimaps. For many parameters, this approach reduces the required number of LSHs for a fixed accuracy. Our scheme yields the most improvement when combining a high accuracy requirement with a biometric with large underlying noise. Our approach builds on any secure map. We evaluate the scheme's accuracy for both iris data and random data.
Last updated:  2024-06-20
Measuring Conditional Anonymity - A Global Study
Pascal Berrang, Paul Gerhart, and Dominique Schröder
The realm of digital health is experiencing a global surge, with mobile applications extending their reach into various facets of daily life. From tracking daily eating habits and vital functions to monitoring sleep patterns and even the menstrual cycle, these apps have become ubiquitous in their pursuit of comprehensive health insights. Many of these apps collect sensitive data and promise users to protect their privacy - often through pseudonymization. We analyze the real anonymity that users can expect by this approach and report on our findings. More concretely: 1. We introduce the notion of conditional anonymity sets derived from statistical properties of the population. 2. We measure anonymity sets for two real-world applications and present overarching findings from 39 countries. 3. We develop a graphical tool for people to explore their own anonymity set. One of our case studies is a popular app for tracking the menstruation cycle. Our findings for this app show that, despite their promise to protect privacy, the collected data can be used to identify users up to groups of 5 people in 97% of all the US counties, allowing the de-anonymization of the individuals. Given that the US Supreme Court recently overturned abortion rights, the possibility of determining individuals is a calamity.
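The conditional anonymity set of a user reduces to counting how many people share the same combination of collected attributes. The toy Python illustration below uses invented records purely to make the notion concrete; the study instead derives such counts from published population statistics per country and county.

    from collections import Counter

    def anonymity_set_sizes(records, quasi_identifiers):
        # Size of the group each record hides in, given the collected attributes.
        key = lambda r: tuple(r[q] for q in quasi_identifiers)
        counts = Counter(key(r) for r in records)
        return [counts[key(r)] for r in records]

    population = [
        {"county": "X", "age": 34, "cycle_length": 28},
        {"county": "X", "age": 34, "cycle_length": 28},
        {"county": "X", "age": 41, "cycle_length": 30},
    ]
    print(anonymity_set_sizes(population, ["county", "age", "cycle_length"]))
    # -> [2, 2, 1]; the last record is unique, hence identifiable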
Last updated:  2024-06-22
Dishonest Majority Multi-Verifier Zero-Knowledge Proofs for Any Constant Fraction of Corrupted Verifiers
Daniel Escudero, Antigoni Polychroniadou, Yifan Song, and Chenkai Weng
In this work we study the efficiency of Zero-Knowledge (ZK) arguments of knowledge, particularly exploring Multi-Verifier ZK (MVZK) protocols as a midway point between Non-Interactive ZK and Designated-Verifier ZK, offering versatile applications across various domains. We introduce a new MVZK protocol designed for the preprocessing model, allowing any constant fraction of verifiers to be corrupted, potentially colluding with the prover. Our contributions include the first MVZK over rings. Unlike recent prior works on fields in the dishonest majority case, our protocol demonstrates communication complexity independent of the number of verifiers, contrasting the linear complexity of previous approaches. This key advancement ensures improved scalability and efficiency. We provide an end-to-end implementation of our protocol. The benchmark shows that it achieves a throughput of 1.47 million gates per second for 64 verifiers with $50\%$ corruption, and 0.88 million gates per second with $75\%$ corruption.
Last updated:  2024-06-24
Great-LaKeys: An Improved Threshold-PRF and a Novel Exponent-VRF from LWR
Matthias Geihs
Building on the recently proposed LWR-based threshold-PRF LaKey, we propose two new constructions. First, we propose an optimized threshold-PRF with significantly reduced round and communication complexity. We achieve this by improving the underlying bit truncation protocol, as well as the lower bound on the required number of LWR instances. Second, we show that the same underlying PRF construction lends itself as a basis for a novel and efficient exponent-VRF. We implement prototypes of both of our contributions and demonstrate their practical performance.
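For readers unfamiliar with LWR, a single sample replaces the added noise of LWE with deterministic rounding from modulus q down to p, which is what makes the underlying PRF MPC-friendly. The fragment below is a definitional sketch with toy, insecure parameters chosen for this note, not the LaKey or Great-LaKeys protocol itself.

    import secrets

    q, p, n = 2**16, 2**8, 4  # toy parameters, far too small to be secure

    def lwr_sample(s):
        # One LWR sample: b = round((p/q) * (<a, s> mod q)) mod p.
        a = [secrets.randbelow(q) for _ in range(n)]
        inner = sum(ai * si for ai, si in zip(a, s)) % q
        return a, round(p * inner / q) % p

    s = [secrets.randbelow(q) for _ in range(n)]
    print(lwr_sample(s))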
Last updated:  2024-06-21
Cross-chain bridges via backwards-compatible SNARKs
Sergio Juárez, Mark Blunden, Joris Koopman, Anish Mohammed, Kapil Shenvi Pause, and Steve Thakur
In recent years, SNARKs have shown great promise as a tool for building trustless bridges to connect the heterogeneous ecosystem of blockchains. Unfortunately, the parameters hardwired for many of the widely used blockchains are incongruous with the conventional SNARKs, which results in unsatisfactory performance. This bottleneck necessitates new proof systems tailored for efficiency in these environments. The primary focus of this paper is on succinct bridges from Cosmos to Ethereum, which largely boils down to efficient proofs of multiple Ed25519 signatures. However, these techniques can be ported to settings that require succinct proofs of multiple secp256k1 or BLS12-381 signatures. We describe our succinct validity bridging scheme Overfly, which uses a field-agnostic SNARK to circumvent the huge overhead of non-native field arithmetic arising from Ed25519 scalar multiplications in the circuit. We also explore the schemes deVirgo and zkTree, which exploit the parallelization of proof generation and the subsequent aggregation of proofs. Our benchmarks indicate that it is crucial to sidestep non-native arithmetic to the extent that it is possible. We also found that two or more proof systems need to be securely amalgamated to optimize a succinct validity bridging scheme.
Last updated:  2024-06-20
On Knowledge-Soundness of Plonk in ROM from Falsifiable Assumptions
Helger Lipmaa, Roberto Parisella, and Janno Siim
Lipmaa, Parisella, and Siim [Eurocrypt, 2024] proved the extractability of the KZG polynomial commitment scheme under the falsifiable assumption ARSDH. They also showed that variants of real-world zk-SNARKs like Plonk can be made knowledge-sound in the random oracle model (ROM) under the ARSDH assumption. However, their approach did not consider various batching optimizations, resulting in their variant of Plonk having an approximately 3.5 times longer argument. Our contributions are: (1) We prove that several batch-opening protocols for KZG, used in modern zk-SNARKs, have computational special-soundness under the ARSDH assumption. (2) We prove that interactive Plonk has computational special-soundness under the ARSDH assumption and a new falsifiable assumption TriRSDH. We also prove that a minor modification of interactive Plonk has computational special-soundness under only the ARSDH assumption. The Fiat-Shamir transform can be applied to obtain non-interactive versions, which are secure in the ROM under the same assumptions.
Last updated:  2024-06-19
Limits on the Power of Prime-Order Groups: Separating Q-Type from Static Assumptions
George Lu and Mark Zhandry
Subgroup decision techniques on cryptographic groups and pairings have been critical for numerous applications. Although these techniques were originally conceived in the composite-order setting, a large body of work shows how to instantiate them in the prime-order setting as well. In this work, we demonstrate the first barrier to this research program, by exhibiting an important setting where composite-order techniques cannot be replicated in the prime-order setting. In particular, we focus on the case of $q$-type assumptions, which are ubiquitous in group- and pairing-based cryptography, but unfortunately are less desirable than the more well-understood static assumptions. Subgroup decision techniques have had great success in removing $q$-type assumptions, even allowing $q$-type assumptions to be generically based on static assumptions on composite-order groups. Our main result shows that the same likely does not hold in the prime-order setting. Namely, we show that a large class of $q$-type assumptions, including the security definition of a number of cryptosystems, cannot be proven secure in a black-box way from any static assumption.
Last updated:  2024-06-19
The Complexity of the Crossbred Algorithm
Damien VIDAL, Sorina IONICA, and Claire Delaplace
The Crossbred algorithm is currently the state-of-the-art method for solving overdetermined multivariate polynomial systems over $\mathbb{F}_2$. Since its publication in 2015, several record-breaking implementations have been proposed, demonstrating the power of this hybrid approach. Despite these practical results, the complexity of this algorithm and the choice of optimal parameters for it are difficult open questions. In this paper, we establish a bivariate generating series for the potentially admissible parameters of the Crossbred algorithm.
Last updated:  2024-06-19
Leveled Homomorphic Encryption Schemes for Homomorphic Encryption Standard
Shuhong Gao and Kyle Yates
Homomorphic encryption allows for computations on encrypted data without exposing the underlying plaintext, enabling secure and private data processing in various applications such as cloud computing and machine learning. This paper presents a comprehensive mathematical foundation for three prominent homomorphic encryption schemes: Brakerski-Gentry-Vaikuntanathan (BGV), Brakerski-Fan-Vercauteren (BFV), and Cheon-Kim-Kim-Song (CKKS), all based on the Ring Learning with Errors (RLWE) problem. We align our discussion with the functionalities proposed in the recent homomorphic encryption standard, providing detailed algorithms and correctness proofs for each scheme. Additionally, we propose improvements to the current schemes focusing on noise management and optimization of public key encryption and leveled homomorphic computation. Our modifications ensure that the noise bound remains within a fixed function for all levels of computation, guaranteeing correct decryption and maintaining efficiency comparable to existing methods. The proposed enhancements reduce ciphertext expansion and storage requirements, making these schemes more practical for real-world applications.
Last updated:  2024-06-19
Perfectly-secure Network-agnostic MPC with Optimal Resiliency
Shravani Patil and Arpita Patra
We study network-agnostic secure multiparty computation with perfect security. Traditionally, MPC is studied assuming the underlying network is either synchronous or asynchronous. In a network-agnostic setting, the parties are unaware of whether the underlying network is synchronous or asynchronous. The feasibility of perfectly-secure MPC in synchronous and asynchronous networks was settled long ago. The landmark work of [Ben-Or, Goldwasser, and Wigderson, STOC'88] shows that $n > 3t_s$ is necessary and sufficient for any MPC protocol with $n$ parties over a synchronous network tolerating $t_s$ active corruptions. In yet another foundational work, [Ben-Or, Canetti, and Goldreich, STOC'93] show that the bound for an asynchronous network is $n > 4t_a$, where $t_a$ denotes the number of active corruptions. However, the same question has remained unresolved for the network-agnostic setting to date. In this work, we resolve this long-standing question. We show that perfectly-secure network-agnostic $n$-party MPC tolerating $t_s$ active corruptions when the network is synchronous and $t_a$ active corruptions when the network is asynchronous is possible if and only if $n > 2 \max(t_s,t_a) + \max(2t_a,t_s)$. When $t_a \geq t_s$, our bound reduces to $n > 4t_a$, whose tightness follows from the known feasibility results for asynchronous MPC. When $t_s > t_a$, our result gives rise to a new bound of $n > 2t_s + \max(2t_a,t_s)$. Notably, the previous network-agnostic MPC in this setting [Appan, Chandramouli, and Choudhury, PODC'22] only shows sufficiency for a loose bound of $n > 3t_s + t_a$. When $t_s > 2t_a$, our result shows tightness of $n > 3t_s$, whereas the existing work shows sufficiency for $n > 3t_s+t_a$.
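The bound is easy to evaluate; the small helper below (written for this note) shows how it collapses to the known asynchronous bound n > 4*t_a when t_a >= t_s and to n > 3*t_s when t_s > 2*t_a, with the genuinely new regime in between.

    def min_parties(ts, ta):
        # Smallest n satisfying n > 2*max(ts, ta) + max(2*ta, ts).
        return 2 * max(ts, ta) + max(2 * ta, ts) + 1

    print(min_parties(1, 1))  # 5:  ta >= ts, matches n > 4*ta
    print(min_parties(3, 1))  # 10: ts > 2*ta, matches n > 3*ts
    print(min_parties(3, 2))  # 11: the new regime, n > 2*ts + 2*ta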
Last updated:  2024-06-19
A Formal Treatment of End-to-End Encrypted Cloud Storage
Matilda Backendal, Hannah Davis, Felix Günther, Miro Haller, and Kenneth G. Paterson
Users increasingly store their data in the cloud, thereby benefiting from easy access, sharing, and redundancy. To additionally guarantee security of the outsourced data even against a server compromise, some service providers have started to offer end-to-end encrypted (E2EE) cloud storage. With this cryptographic protection, only legitimate owners can read or modify the data. However, recent attacks on the largest E2EE providers have highlighted the lack of solid foundations for this emerging type of service. In this paper, we address this shortcoming by initiating the formal study of E2EE cloud storage. We give a formal syntax to capture the core functionality of a cloud storage system, capturing the real-world complexity of such a system's constituent interactive protocols. We then define game-based security notions for confidentiality and integrity of a cloud storage system against a fully malicious server. We treat both selective and fully adaptive client compromises. Our notions are informed by recent attacks on E2EE cloud storage providers. In particular we show that our syntax is rich enough to capture the core functionality of MEGA and that recent attacks on it arise as violations of our security notions. Finally, we present an E2EE cloud storage system that provides all core functionalities and that is both efficient and provably secure with respect to our selective security notions. Along the way, we discuss challenges on the path towards bringing the security of cloud storage up to par with other end-to-end primitives, such as secure messaging and TLS.
Last updated:  2024-06-19
Privacy-Preserving Dijkstra
Benjamin Ostrovsky
Given a graph $G(V,E)$, represented as a secret-sharing of an adjacency list, we show how to obliviously convert it into an alternative, MPC-friendly secret-shared representation, the so-called $d$-normalized replicated adjacency list (which we abbreviate to $d$-normalized), where our new data structure is only 4x larger than the original (secret-shared adjacency list) representation of $G$. Yet, this new data structure enables us to execute oblivious graph algorithms that simultaneously improve the round, computation, and communication complexity of the underlying graph algorithms. Our $d$-normalization proceeds in two steps: First, we show how, for any graph $G$ given as a secret-shared adjacency list where vertices are arbitrary alphanumeric strings that fit into a single RAM memory word, we can efficiently and securely rename vertices to integers from $1$ to $V$ that will appear in increasing order in the resulting secret-shared adjacency list. The secure renaming takes $O(\log V)$ rounds and $O((V+E)\log V)$ secure operations. Our algorithm also outputs, in secret-shared form, two arrays: a mapping from integers to alphanumeric names and its sorted inverse. Of course, if the adjacency list is already in such an integer format, this step can be skipped. Second, given a secret-shared adjacency list for any graph $G$, where vertices are integers from $1$ to $V$ and are sorted, there exists a $d$-normalization algorithm that takes $O(1)$ rounds and $O((V+E))$ secure operations. We believe that both conversions are of independent interest. We demonstrate the power of our data structures by designing a privacy-preserving Dijkstra's single-source shortest-path algorithm that simultaneously achieves $O\left((V +E) \cdot \log V \right)$ secure operations and $O(V \cdot \log V \cdot \log \log\log V)$ rounds. Our secure Dijkstra algorithm works for any adjacency list representation as long as all vertex labels and weights individually fit into a RAM memory word. Our algorithms work for two or any constant number of servers in the honest-but-curious setting. The limitation of our result (to only a constant number of servers) is due to our reliance on a linear-work, constant-round secure shuffle.
Last updated:  2024-06-28
CoGNN: Towards Secure and Efficient Collaborative Graph Learning
Zhenhua Zou, Zhuotao Liu, Jinyong Shan, Qi Li, Ke Xu, and Mingwei Xu
Collaborative graph learning represents a learning paradigm where multiple parties jointly train a graph neural network (GNN) using their own proprietary graph data. To honor the data privacy of all parties, existing solutions for collaborative graph learning are either based on federated learning (FL) or secure machine learning (SML). Although promising in terms of efficiency and scalability due to their distributed training scheme, FL-based approaches fall short in providing provable security guarantees and achieving good model performance. Conversely, SML-based solutions, while offering provable privacy guarantees, are hindered by their high computational and communication overhead, as well as poor scalability as more parties participate. To address the above problem, we propose CoGNN, a novel framework that simultaneously reaps the benefits of both FL-based and SML-based approaches. At a high level, CoGNN is enabled by (i) a novel message passing mechanism that can obliviously and efficiently express the vertex data propagation/aggregation required in GNN training and inference and (ii) a two-stage Dispatch-Collect execution scheme to securely decompose and distribute the GNN computation workload for concurrent and scalable executions. We further instantiate the CoGNN framework, together with customized optimizations, to train Graph Convolutional Network (GCN) models. Extensive evaluations on three graph datasets demonstrate that compared with the state-of-the-art (SOTA) SML-based approach, CoGNN reduces running time by up to $123\times$ and communication cost per party by up to $522\times$. Meanwhile, the GCN models trained using CoGNN have accuracy nearly identical to plaintext global-graph training, yielding up to $11.06\%$ accuracy improvement over the GCN models trained via federated learning.
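As background, the plaintext operation that an oblivious message-passing mechanism has to express is the standard GCN propagation rule (aggregate neighbour features, then apply a linear transform and activation). The NumPy sketch below shows only this plaintext rule; it is not CoGNN's secure Dispatch-Collect execution.
```python
import numpy as np

def gcn_layer(A, H, W):
    """One plaintext GCN propagation step: aggregate neighbours, then transform.

    A: (n, n) adjacency matrix, H: (n, d_in) vertex features, W: (d_in, d_out)
    weights. Uses the symmetric normalisation of Kipf & Welling.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # D^{-1/2}
    H_agg = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H
    return np.maximum(H_agg @ W, 0.0)         # ReLU activation

# Toy example: 3 vertices, 2 input features, 2 output features.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.randn(3, 2)
W = np.random.randn(2, 2)
print(gcn_layer(A, H, W).shape)  # (3, 2)
```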
Last updated:  2024-06-25
FABESA: Fast (and Anonymous) Attribute-Based Encryption under Standard Assumption
Long Meng, Liqun Chen, Yangguang Tian, and Mark Manulis
Attribute-Based Encryption (ABE) provides fine-grained access control to encrypted data and finds applications in various domains. The practicality of ABE schemes hinges on the balance between security and efficiency. The state-of-the-art adaptively secure ABE scheme, proven secure under standard assumptions (FAME, CCS'17), is less efficient than the fastest one (FABEO, CCS'22), which is only proven secure in the Generic Group Model (GGM). These traditional ABE schemes focus solely on message privacy. To address scenarios where attribute value information is also sensitive, Anonymous ABE (${\rm A}^{\rm 2}$BE) ensures the privacy of both the message and attributes. However, most ${\rm A}^{\rm 2}$BE schemes suffer from intricate designs with low efficiency, and the security of the fastest key-policy ${\rm A}^{\rm 2}$BE (proposed in FEASE, USENIX'24) relies on the GGM. In this paper, we propose novel fast key-policy and ciphertext-policy ABE schemes that (1) support both AND and OR gates for access policies, (2) have no restriction on the size and type of policies or attributes, (3) achieve adaptive security under the standard DLIN assumption, and (4) only need 4 pairings for decryption. As our ABE constructions automatically provide ciphertext anonymity, we easily transform our ABE schemes into ${\rm A}^{\rm 2}$BE schemes while maintaining the same features and high-level efficiency. The implementation results show that all our schemes achieve the best efficiency compared to other schemes with adaptive security proven under standard assumptions. Specifically, our ABE schemes perform better than FAME and are close to FABEO. Our key-policy ${\rm A}^{\rm 2}$BE scheme performs close to the one in FEASE and our ciphertext-policy ${\rm A}^{\rm 2}$BE outperforms the state-of-the-art (Cui et al., ProvSec'16).
Last updated:  2024-06-18
DualRing-PRF: Post-Quantum (Linkable) Ring Signatures from Legendre and Power Residue PRFs
Xinyu Zhang, Ron Steinfeld, Joseph K. Liu, Muhammed F. Esgin, Dongxi Liu, and Sushmita Ruj
Ring signatures are one of the crucial cryptographic primitives used in the design of privacy-preserving systems. Such a signature scheme allows a signer to anonymously sign a message on behalf of a spontaneously formed group. It not only ensures the authenticity of the message but also conceals the true signer within the group. An important extension of ring signatures is linkable ring signatures, which prevent a signer from signing twice without being detected (under some constraints). Linkable ring signatures offer advantages in applications where full anonymity might jeopardise the intended purpose, such as privacy-oriented cryptocurrencies like Monero. In this work, we introduce post-quantum ring signature (DualRing-PRF) and linkable ring signature (DualRingL-PRF) schemes whose security relies solely on symmetric-key primitives (namely, the Legendre PRF and the power residue PRF). Our construction of the ring signature departs from previous approaches with similar security assumptions, offering the most competitive signature sizes for small and medium-sized rings. In particular, for a ring size of 16, DualRing-PRF has a communication overhead 1.4 times smaller than the state-of-the-art scheme proposed by Goel et al. (PETS’21). Furthermore, we demonstrate the extension of DualRing-PRF to incorporate linkability and non-slanderability. Compared to the existing one-time traceable ring signature (a variant of linkable ring signature) by Scafuro and Zhang (ESORICS’21), our construction supports many-time signing and achieves significantly smaller signature sizes when the ring size exceeds 16. This advantage becomes more pronounced as the ring size increases.
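For concreteness, the Legendre PRF at the heart of the construction keys a quadratic-residuosity test: the output bit for input $x$ under key $K$ is determined by the Legendre symbol of $K+x$ modulo a prime $p$. The sketch below computes one such bit via Euler's criterion; the bit convention and edge-case handling are our assumptions, and this is plain evaluation, not the evaluation used inside the signature scheme.
```python
def legendre_prf_bit(key: int, x: int, p: int) -> int:
    """One output bit of the Legendre PRF over F_p via Euler's criterion.

    (a|p) = a^((p-1)/2) mod p is 1 for residues, p-1 for non-residues, 0 for a = 0;
    non-residues map to bit 1, residues and the zero edge case to bit 0.
    """
    a = (key + x) % p
    s = pow(a, (p - 1) // 2, p)
    return 1 if s == p - 1 else 0

p = 2**61 - 1            # a Mersenne prime, convenient for quick experiments
key = 123456789
print([legendre_prf_bit(key, x, p) for x in range(8)])
```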
Last updated:  2024-06-24
Side-Channel and Fault Resistant ASCON Implementation: A Detailed Hardware Evaluation (Extended Version)
Aneesh Kandi, Anubhab Baksi, Peizhou Gan, Sylvain Guilley, Tomáš Gerlich, Jakub Breier, Anupam Chattopadhyay, Ritu Ranjan Shrivastwa, Zdeněk Martinásek, and Shivam Bhasin
In this work, we present various hardware implementations for the lightweight cipher ASCON, which was recently selected as the winner of the NIST-organized Lightweight Cryptography (LWC) competition. We cover encryption + tag generation and decryption + tag verification for the ASCON AEAD and also the ASCON hash function. On top of the usual (unprotected) implementation, we present side-channel protection (threshold countermeasure) and triplication/majority-based fault protection. To the best of our knowledge, this is the first hardware implementation of ASCON protected against both side-channel and fault injection attacks. The side-channel and fault protections work orthogonally to each other (i.e., either one can be turned on/off without affecting the other). We present ASIC and FPGA benchmarks for all our implementations (hash and AEAD) with/without countermeasures for varying input sizes.
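The triplication/majority-based fault protection mentioned above boils down to computing each result three times and keeping the bitwise majority, so a fault in any single copy is outvoted. The toy sketch below illustrates only this principle in software (the round function is a hypothetical placeholder); it is not the paper's hardware design.
```python
def majority(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant results: maj = ab XOR bc XOR ca."""
    return (a & b) ^ (b & c) ^ (c & a)

def protected_step(state: int, round_fn) -> int:
    """Run the (placeholder) round function three times and vote on the result."""
    return majority(round_fn(state), round_fn(state), round_fn(state))

# A single faulted copy is corrected by the other two.
print(hex(majority(0xDEAD, 0xDEAD, 0xFFFF)))  # 0xdead
```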
Last updated:  2024-06-18
SoCureLLM: An LLM-driven Approach for Large-Scale System-on-Chip Security Verification and Policy Generation
Shams Tarek, Dipayan Saha, Sujan Kumar Saha, Mark Tehranipoor, and Farimah Farahmandi
Contemporary methods for hardware security verification struggle with adaptability, scalability, and availability due to the increasing complexity of modern system-on-chips (SoCs). Large language models (LLMs) have emerged as a viable approach to address these shortcomings in security verification because of their natural language understanding, advanced reasoning, and knowledge transfer capabilities. However, their application to large designs is limited by inherent token limitations and memorization constraints. In this paper, we introduce SoCureLLM, an LLM-based framework that excels in identifying security vulnerabilities within SoC designs and creating a comprehensive security policy database. Our framework is adaptable and adept at processing varied, large-scale designs, overcoming the abovementioned issues of LLMs. In evaluations, SoCureLLM detected 76.47% of security bugs across three vulnerable RISC-V SoCs, outperforming the state-of-the-art security verification methods. Furthermore, assessing three additional large-scale RISC-V SoC designs against various threat models led to the formulation of 84 novel security policies, enriching the security policy database. Previously requiring extensive manual effort to craft, these newly generated security policies can be used as guidelines for developing secure SoC designs.
Last updated:  2024-06-18
SoK: Programmable Privacy in Distributed Systems
Daniel Benarroch, Bryan Gillespie, Ying Tong Lai, and Andrew Miller
This Systematization of Knowledge conducts a survey of contemporary distributed blockchain protocols, with the aim of identifying cryptographic and design techniques which practically enable both expressive programmability and user data confidentiality. To facilitate a framing that supports the comparison of concretely very different protocols, we define an epoch-based computational model in the form of a flexible UC-style ideal functionality which divides the operation of privacy-preserving networks into three phases: Independent, Mediated, and Global computation. Our analysis of protocols focuses in particular on features of the Mediated computation phase, which provides the facility to execute non-trivial program logic on private inputs from multiple users. Specifically, we compare implementations in different protocols for private limit order auctions, which we find to be a representative application: it is common and relatively simple, yet it exhibits adversarial dynamics that demonstrate the capabilities of a non-trivial Mediated computation mechanism. In our analysis, we identify four protocols representative of different high-level approaches used to implement Mediated computations. We compare protocols according to the degree and flexibility of programmability, the privacy properties achieved, and the security assumptions required for correct operation. We conclude by offering recommendations and best practices for future programmable privacy designs.
Last updated:  2024-06-18
Hadamard Product Arguments and Their Applications
Kyeongtae Lee, Donghwan Oh, Hankyung Ko, Jihye Kim, and Hyunok Oh
This paper introduces transparent and efficient arguments for Hadamard products between committed vectors from two source groups. For vectors of length $n$, the proofs consist of $\mathcal{O}(\log n)$ target group elements and $\mathcal{O}(1)$ additional elements. The verifier's workload is dominated by $\mathcal{O}(\log n)$ multi-exponentiations in the target group and $\mathcal{O}(1)$ pairings. We prove our security under the standard SXDH assumption. Additionally, we propose an aggregator for Groth16 pairing-based zk-SNARKs and a proof aggregation technique for the general case of the KZG pairing-based polynomial commitment scheme using our Hadamard product arguments. Both applications support logarithmic-sized aggregated proofs without requiring an additional trusted setup, significantly reducing the verifier’s pairing operations to $\mathcal{O}(1)$.
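For orientation, one natural reading of a Hadamard product between committed vectors from the two source groups of a pairing $e : \mathbb{G}_1 \times \mathbb{G}_2 \to \mathbb{G}_T$ is the entry-wise pairing product below (notation ours; the paper's precise statement and commitment scheme may differ):
\[
\mathbf{a} = (a_1,\ldots,a_n) \in \mathbb{G}_1^{\,n}, \quad \mathbf{b} = (b_1,\ldots,b_n) \in \mathbb{G}_2^{\,n}, \qquad \mathbf{a} \circ \mathbf{b} := \big(e(a_1,b_1),\, \ldots,\, e(a_n,b_n)\big) \in \mathbb{G}_T^{\,n},
\]
and the argument convinces the verifier that a committed $\mathbf{c}$ equals $\mathbf{a} \circ \mathbf{b}$ with proof size and verifier work logarithmic in $n$.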
Last updated:  2024-06-18
FaultyGarble: Fault Attack on Secure Multiparty Neural Network Inference
Mohammad Hashemi, Dev Mehta, Kyle Mitard, Shahin Tajik, and Fatemeh Ganji
The success of deep learning across a variety of applications, including inference on edge devices, has led to increased concerns about the privacy of users’ data and deep learning models. Secure multiparty computation allows parties to remedy this concern, resulting in a growth in the number of such proposals and improvements in their efficiency. The majority of secure inference protocols relying on multiparty computation assume that the client does not deviate from the protocol and passively attempts to extract information. Yet clients, driven by different incentives, can act maliciously to actively deviate from the protocol and disclose the deep learning model owner’s private information. Interestingly, faults are well understood in the multiparty computation-related literature, although fault attacks have not been explored. Our paper introduces the very first fault attack against secure inference implementations relying on garbled circuits as a prime example of multiparty computation schemes. In this regard, laser fault injection coupled with a model-extraction attack is successfully mounted against existing solutions that have been assumed to be secure against active attacks. Notably, the number of queries required for the attack is equal to that of the best model extraction attack mounted against the secure inference engines under the semi-honest scenario.
Last updated:  2024-06-18
Volatile and Persistent Memory for zkSNARKs via Algebraic Interactive Proofs
Alex Ozdemir, Evan Laufer, and Dan Boneh
In verifiable outsourcing, an untrusted server runs an expensive computation and produces a succinct proof (called a SNARK) of the results. In many scenarios, the computation accesses a RAM that the server maintains a commitment to (persistent RAM) or that is initially zero (volatile RAM). But, SNARKs for such scenarios are limited by the high overheads associated with existing techniques for RAM checking. We develop new proofs about volatile, persistent, and sparse persistent RAM that reduce SNARK proving times. Our results include both asymptotic and concrete improvements---including a proving time reduction of up to 51.3$\times$ for persistent RAM. Along the way, we apply two tools that may be of independent interest. First, we generalize an existing construction to convert any algebraic interactive proof (AIP) into a SNARK. An AIP is a public-coin, non-succinct, interactive proof with a verifier that is an arithmetic circuit. Second, we apply Bézout's identity for polynomials to construct new AIPs for uniqueness and disjointness. These are useful for showing the independence of accesses to different addresses.
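The disjointness argument rests on a classical fact: over a field, Bézout's identity certifies that two polynomials share no root, so encoding two address sets as the polynomials vanishing on them reduces disjointness to exhibiting the Bézout cofactors. In the notation below (ours, as a reminder of the underlying identity rather than the paper's exact protocol):
\[
f(X) = \prod_{a \in A} (X - a), \quad g(X) = \prod_{b \in B} (X - b): \qquad A \cap B = \emptyset \;\Longleftrightarrow\; \gcd(f,g) = 1 \;\Longleftrightarrow\; \exists\, u, v \in \mathbb{F}[X] \text{ with } u f + v g = 1 .
\]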
Last updated:  2024-06-17
Distributed PIR: Scaling Private Messaging via the Users' Machines
Elkana Tovey, Jonathan Weiss, and Yossi Gilad
This paper presents a new architecture for metadata-private messaging that counters scalability challenges by offloading most computations to the clients. At the core of our design is a distributed private information retrieval (PIR) protocol, where the responder delegates its work to alleviate PIR's computational bottleneck and catches misbehaving delegates by efficiently verifying their results. We introduce DPIR, a messaging system that uses distributed PIR to let a server storing messages delegate the work to the system's clients, such that each client contributes processing proportional to the number of messages it reads. The server removes clients returning invalid results, which DPIR leverages to integrate an incentive mechanism for honest client behavior by conditioning messaging through DPIR on correctly processing PIR requests from other users. The result is a metadata-private messaging system that asymptotically improves scalability over prior work with the same threat model. We show through experiments on a prototype implementation that DPIR concretely improves performance by $3.25\times$ and $4.31\times$ over prior work and that the performance gap grows with the user base size.
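As background on where the computational bottleneck comes from, the classic information-theoretic two-server PIR below answers each query by XOR-ing a random half of the database, so server work is linear in the database size. This textbook scheme is only an illustration of that bottleneck, not DPIR's verifiable delegation protocol.
```python
import secrets

def query(n: int, index: int):
    """Classic 2-server XOR PIR for a database of n one-byte records.

    The client sends a random subset to server 1 and the same subset with the
    target index toggled to server 2; neither subset alone reveals the index.
    """
    mask1 = [secrets.randbelow(2) for _ in range(n)]
    mask2 = list(mask1)
    mask2[index] ^= 1
    return mask1, mask2

def answer(db: list, mask: list) -> int:
    acc = 0
    for bit, record in zip(mask, db):
        if bit:
            acc ^= record  # XOR of the selected records
    return acc

db = [0x11, 0x22, 0x33, 0x44]
q1, q2 = query(len(db), index=2)
print(hex(answer(db, q1) ^ answer(db, q2)))  # 0x33: the record at index 2
```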
Last updated:  2024-06-17
Improved Boomerang Attacks on 6-Round AES
Augustin Bariant, Orr Dunkelman, Nathan Keller, Gaëtan Leurent, and Victor Mollimard
The boomerang attack is a cryptanalytic technique which allows combining two short high-probability differentials into a distinguisher for a large number of rounds. Since its introduction by Wagner in 1999, it has been applied to many ciphers. One of the best-studied targets is a 6-round variant of AES, on which the boomerang attack is outperformed only by the dedicated Square attack. Recently, two new variants of the boomerang attack were presented: retracing boomerang (Eurocrypt'20) and truncated boomerang (Eurocrypt'23). These variants seem incompatible: the former achieves lower memory complexity by throwing away most of the data in order to force dependencies, while the latter achieves lower time complexity by using large structures, which inevitably leads to a large memory complexity. In this paper we show that elements of the two techniques can be combined to get `the best of the two worlds' – the practical memory complexity of the retracing attack and the lower time complexity of the truncated attack. We obtain an attack with data complexity of $2^{57}$ (compared to $2^{59}$ and $2^{55}$ of truncated and retracing boomerang, respectively), memory complexity of $2^{33}$ (compared to $2^{59}$ and $2^{31}$), and time complexity of $2^{61}$ (compared to $2^{61}$ and $2^{80}$). This is the second-best attack on 6-round AES, after the Square attack.
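As a reminder of the technique being refined (standard notation, not the paper's improved analysis): write the cipher as $E = E_1 \circ E_0$, take a differential $\alpha \to \beta$ over $E_0$ with probability $p$ and $\gamma \to \delta$ over $E_1$ with probability $q$. The basic boomerang quartet, built by encrypting a pair with input difference $\alpha$, shifting both ciphertexts by $\delta$ and decrypting, returns to difference $\alpha$ with probability roughly
\[
\Pr\Big[\, E^{-1}\big(E(x) \oplus \delta\big) \,\oplus\, E^{-1}\big(E(x \oplus \alpha) \oplus \delta\big) = \alpha \,\Big] \;\approx\; p^2 q^2 ,
\]
which yields a distinguisher whenever $p^2 q^2$ is noticeably larger than $2^{-n}$ for an $n$-bit block.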
Last updated:  2024-06-17
PIR with Client-Side Preprocessing: Information-Theoretic Constructions and Lower Bounds
Yuval Ishai, Elaine Shi, and Daniel Wichs
It is well-known that classical Private Information Retrieval (PIR) schemes without preprocessing must suffer from linear server computation per query. Moreover, any such single-server PIR with sublinear bandwidth must rely on public-key cryptography. Several recent works showed that these barriers pertaining to classical PIR can be overcome by introducing a preprocessing phase where each client downloads a small hint that helps it make queries subsequently. Notably, the Piano PIR scheme (and subsequent improvements) showed that with such a client-side preprocessing, not only can we have PIR with sublinear computation and bandwidth per query, but somewhat surprisingly, we can also get it using only symmetric-key cryptography (i.e., one-way functions). In this paper, we take the question of minimizing cryptographic assumptions to an extreme. In particular, we are the first to explore the landscape of information theoretic single-server preprocessing PIR. We make contributions on both the upper- and lower-bounds fronts. First, we show new information-theoretic constructions with various non-trivial performance tradeoffs between server computation, client space and bandwidth. Second, we prove a (nearly) tight lower bound on the tradeoff between the client space and bandwidth in information-theoretic constructions. Finally, we prove that any computational scheme that overcomes the information-theoretic lower bound and satisfies a natural syntactic requirement (which is met by all known constructions) would imply a hard problem in the complexity class SZK. In particular, this shows that Piano achieves (nearly) optimal client space and bandwidth tradeoff subject to making a black-box use of a one-way function. Some of the techniques we use for the above results can be of independent interest.
Last updated:  2024-06-17
ZLR: a fast online authenticated encryption scheme achieving full security
Wonseok Choi, Seongha Hwang, Byeonghak Lee, and Jooyoung Lee
Online authenticated encryption has been considered of practical relevance in lightweight environments due to low latency and constant memory usage. In this paper, we propose a new tweakable block cipher-based online authenticated encryption scheme, dubbed ZLR, and its domain separation variant, dubbed DS-ZLR. ZLR and DS-ZLR follow the Encrypt-Mix-Encrypt paradigm. However, in contrast to existing schemes using the same paradigm such as ELmE and CoLM, ZLR and DS-ZLR enjoy n-bit security by using larger internal states with an efficient ZHash-like hashing algorithm. In this way, 2n-bit blocks are processed with only a single primitive call for hashing and two primitive calls for encryption and decryption, when they are based on an n-bit tweakable block cipher using n-bit (resp. 2n-bit) tweaks for ZLR (resp. DS-ZLR). Furthermore, they support pipelined computation as well as online nonce-misuse resistance. To the best of our knowledge, ZLR and DS-ZLR are the first pipelineable tweakable block cipher-based online authenticated encryption schemes of rate 2/3 that provide n-bit security with online nonce-misuse resistance.
Last updated:  2024-06-17
Towards Optimal Parallel Broadcast under a Dishonest Majority
Daniel Collins, Sisi Duan, Julian Loss, Charalampos Papamanthou, Giorgos Tsimos, and Haochen Wang
The parallel broadcast (PBC) problem generalises the classic Byzantine broadcast problem to the setting where all $n$ nodes broadcast a message and deliver $O(n)$ messages. PBC arises naturally in many settings including multi-party computation. Recently, Tsimos, Loss, and Papamanthou (CRYPTO 2022) showed PBC protocols with improved communication, against an adaptive adversary who can corrupt all but a constant fraction $\epsilon$ of nodes (i.e., $f < (1 - \epsilon)n$). However, their study is limited to single-bit messages, and their protocols have large polynomial overhead in the security parameter $\kappa$: their TrustedPBC protocol achieves $\tilde{O}(n^2 \kappa^4)$ communication and $O(\kappa\log n)$ rounds. Since these factors of $\kappa$ are in practice often close (or at least polynomially related) to $n$, they add a significant overhead. In this work, we propose three parallel broadcast protocols for $L$-bit messages, for any size $L$, that significantly improve the communication efficiency of the state-of-the-art. We first propose a new extension protocol that uses a $\kappa$-bit PBC as a black box and achieves i) communication complexity of $O(L n^2 + \mathcal{P}(\kappa))$, where $\mathcal{P}(\kappa)$ is the communication complexity of the $\kappa$-bit PBC, and ii) the same round complexity as the $\kappa$-bit PBC. By comparison, the state-of-the-art extension protocol for regular broadcast (Nayak et al., DISC 2020) incurs $O(n)$ additional rounds of communication. Next, we propose a protocol that is secure against a static adversary, for $\kappa$-bit messages with $O(n^2 \kappa^{1+K} + n\kappa^3 + \kappa^4)$ communication and $O(\kappa)$ round complexity, where $K$ is an arbitrarily small constant such that $0<K<1$. Finally, we propose an adaptively-secure protocol for $\kappa$-bit messages with $\tilde{O}(n^2\kappa^2 + n\kappa^3)$ communication overhead and $O(\kappa \log{n})$ round complexity by modifying and improving the next-best protocol TrustedPBC in several key ways. Notably, our latter two protocols are $\tilde{O}(\kappa^{2 - K})$ and $O(\kappa^2)$ times more communication-efficient, respectively, than the state-of-the-art protocols while achieving the same round complexity.
Last updated:  2024-06-16
ICICLE v2: Polynomial API for Coding ZK Provers to Run on Specialized Hardware
Karthik Inbasekar, Yuval Shekel, and Michael Asa
Polynomials play a central role in cryptography. In the context of Zero Knowledge Proofs (ZKPs), protocols can be expressed exclusively using polynomials, making them a powerful abstraction tool, as demonstrated in most ZK research papers. Our first contribution is a high-level framework that enables practitioners to implement ZKPs in a more natural way, based solely on polynomial primitives. ZK provers are computationally intensive algorithms with a high degree of parallelism. These algorithms benefit significantly from hardware acceleration, and deployed ZK systems typically include specialized hardware to optimize the performance of the prover code. Our second contribution is leveraging our polynomial API to abstract away low-level hardware primitives and automate their memory management. This device-agnostic design allows ZK engineers to prototype and build solutions while taking advantage of the performance gains offered by specialized hardware, such as GPUs and FPGAs, without needing to understand the hardware implementation details. Finally, our polynomial API is integrated into version 2 of the ICICLE library and is running in production. This paper also serves as comprehensive documentation for the ICICLE v2 polynomial API.
Last updated:  2024-06-16
Efficient Secure Communication Over Dynamic Incomplete Networks With Minimal Connectivity
Ivan Damgård, Divya Ravi, Lawrence Roy, Daniel Tschudi, and Sophia Yakoubov
We study the problem of implementing unconditionally secure reliable and private communication (and hence secure computation) in dynamic incomplete networks. Our model assumes that the network is always $k$-connected, for some $k$, but the concrete connection graph is adversarially chosen in each round of interaction. We show that, with $n$ players and $t$ malicious corruptions, perfectly secure communication is possible if and only if $k > 2t$. This disproves a conjecture from earlier work, that $k> 3t$ is necessary. Our new protocols are much more efficient than previous work; in particular, we improve the round and communication complexity by an exponential factor (in $n$) in both the semi-honest and the malicious corruption setting, leading to protocols with polynomial complexity.
Last updated:  2024-06-16
A Note on (2, 2)-isogenies via Theta Coordinates
Jianming Lin, Saiyu Wang, and Chang-An Zhao
In this paper, we revisit the algorithm for computing chains of $(2, 2)$-isogenies between products of elliptic curves via theta coordinates proposed by Dartois et al. For each fundamental block of this algorithm, we provide an explicit inversion-free version. In addition, we exploit a novel $x$-only ladder technique to speed up the computation of the gluing isogeny. Finally, we present a mixed optimal strategy, which combines the inversion-elimination tool with the original methods to execute a chain of $(2, 2)$-isogenies. We present a cost analysis and a concrete comparison between our methods and the previously known methods for inversion elimination. Furthermore, we implement the mixed optimal strategy for benchmarking. The results show that when computing $(2, 2)$-isogeny chains with lengths of 126, 208 and 632, compared to Dartois, Maino, Pope and Robert's original implementation, utilizing our techniques reduces the number of multiplications over the base field $\mathbb{F}_p$ by $30.8\%$, $20.3\%$ and $9.9\%$, respectively. Even against the updated version, which employs their inversion-free methods, our techniques still possess a slight advantage.
Last updated:  2024-06-16
Cryptography at the Crossroads: Ethical Responsibility, the Cypherpunk Movement and Institutions
Eric Blair
This paper explores the intersection of cryptographic work with ethical responsibility and political activism, inspired by the Cypherpunk Manifesto and Phillip Rogaway's analysis of the moral character of cryptography. The discussion encompasses the historical context of cryptographic development, the philosophical underpinnings of the cypherpunk ideology, and contemporary challenges posed by mass surveillance and privacy concerns. By examining these facets, the paper calls for a renewed commitment to developing cryptographic solutions that prioritize human rights and societal good.
Last updated:  2024-06-16
Analyzing, Modifying, and Applying the Lightweight PSI of CM20 to IIoT
Zhuang Shan, Leyou Zhang, Qing Wu, and Qiqi Lai
Multi-party computation (\textsf{MPC}) is a major research interest in modern cryptography, and Private Set Intersection (\textsf{PSI}) is an important research topic within \textsf{MPC}. Its main function is to allow two parties to compute the intersection of their private sets without revealing any other information. \textsf{PSI} can therefore be applied to various real-world scenarios, such as the Industrial Internet of Things (\textsf{IIOT}). Chase and Miao presented ``Light-weight PSI'' at CRYPTO 2020, highlighting its convenient structure and optimal communication cost. However, that protocol uses deterministic encryption, and we found through simulation that it is not resistant to probabilistic attacks. Building upon the ideas of CM20, this paper introduces the concept of a {\em perturbed pseudorandom generator}, constructs one and proves its security, and uses it to replace one of CM20's two hash functions. To demonstrate the security of the proposed \textsf{PSI} protocol, we provide a dedicated Chosen Plaintext Attack (\textsf{CPA}) security model for it and prove that the protocol satisfies this model. Efficiency analysis shows that our \textsf{PSI} is comparable to CM20's \textsf{PSI}, whether on a PC, tablet, or phone.
Last updated:  2024-06-20
Fast SNARK-based Non-Interactive Distributed Verifiable Random Function with Ethereum Compatibility
Jia Liu and Mark Manulis
Distributed randomness beacons (DRBs) are fundamental for various decentralised applications, such as consensus protocols, decentralised gaming and lotteries, and collective governance protocols. These applications are heavily used on modern blockchain platforms. This paper presents the most efficient direct construction and implementation to date of a non-interactive distributed verifiable random function (NI-DVRF) that is fully compatible with Ethereum. Our NI-DVRF scheme adopts pairings and combines techniques from secret sharing, SNARKs, and BLS signatures. The security properties of the resulting NI-DVRF scheme are formally modelled and proven in the random oracle model under standard pairing-based assumptions. To justify the efficiency and cost claims and, more generally, its adoption potential in practice, we implemented the proposed NI-DVRF scheme in Rust and Solidity. Our implementation is highly optimised and is currently being investigated for deployment on the multichain layer-2 scaling solution provided by Boba Network to power its DRB service zkRand. Our experimental analysis therefore also evaluates the performance and scalability of the proposed NI-DVRF and its implementation.
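For background on the pairing-based core (a generic BLS-style VRF in standard notation; the paper's distributed, SNARK-assisted key generation and its exact hashing are not captured here): an evaluation is a deterministic BLS signature on the input, publicly checkable with one pairing equation, and the random output is derived by hashing it,
\[
\mathsf{pk} = g_2^{\,\mathsf{sk}}, \qquad \sigma = H(m)^{\mathsf{sk}} \in \mathbb{G}_1, \qquad e(\sigma, g_2) \stackrel{?}{=} e\big(H(m), \mathsf{pk}\big), \qquad \text{output} := H'(\sigma).
\]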
Last updated:  2024-06-15
Consolidated Linear Masking (CLM): Generalized Randomized Isomorphic Representations, Powerful Degrees of Freedom and Low(er)-cost
Itamar Levi and Osnat Keren
Masking is a widely adopted countermeasure against side-channel analysis (SCA) that protects cryptographic implementations from information leakage. However, current masking schemes often incur significant overhead in terms of electronic cost. RAMBAM, a recently proposed masking technique that fits elegantly with the AES algorithm, offers ultra-low latency/area by utilizing redundant representations of finite field elements. This paper presents a comprehensive generalization of RAMBAM and various other masking schemes within a unified framework and a mathematical representation known as Consolidated Linear Masking (CLM), where masking schemes are formalized by their encoding. We establish a theoretical foundation for CLM, linking randomized isomorphic (code) representations and the entropy provided by the redundancy to a revised notion of masking order. Our analysis reveals that RAMBAM and various other masking constructions are specific instances of CLM, thus paving the way for significant enhancements. For example, a $1^{st}$-order secure design can be achieved almost without increasing the size of the representation of the variables. This property scales up to any order and is versatile. We demonstrate how CLM enables: (1) randomized selection of the isomorphic field for improved security; (2) flexible choice of the randomization polynomial; (3) embedded mask-refreshing via the randomized isomorphic representation that reduces randomness requirements significantly as well as improves performance; (4) a wider range of isomorphic randomized mappings that significantly increases the available randomization space compared to RAMBAM; (5) considerably improved security against fault-injection attacks and inherent security against probing adversaries, i.e., a higher number of required probes. In addition, our framework addresses ways to improve the brute-force parameter choices in the original RAMBAM. By offering a unifying theoretical perspective for masking and practical enhancements, this work advances the design of efficient and secure masking countermeasures against SCA threats.
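To fix terminology: classical Boolean masking of order $d$ splits every sensitive value into $d+1$ shares whose XOR equals the secret, so any $d$ shares are independent of it; CLM generalizes this kind of encoding via randomized isomorphic representations. The sketch below shows only the classical sharing and a simple mask refresh (our illustration, not CLM or RAMBAM).
```python
import secrets
from functools import reduce
from operator import xor

def share(x: int, order: int, width: int = 8):
    """Boolean masking: split x into order+1 shares with x = XOR of all shares."""
    rand = [secrets.randbits(width) for _ in range(order)]
    return rand + [reduce(xor, rand, x)]

def refresh(shares, width: int = 8):
    """Re-randomise a sharing without changing the encoded secret."""
    r = secrets.randbits(width)
    out = list(shares)
    out[0] ^= r
    out[1] ^= r
    return out

s = refresh(share(0x3A, order=2))
print(hex(reduce(xor, s)))  # 0x3a: the refreshed shares still encode the secret
```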
Last updated:  2024-06-15
Diffuse Some Noise: Diffusion Models for Measurement Noise Removal in Side-channel Analysis
Sengim Karayalcin, Guilherme Perin, and Stjepan Picek
Resilience against side-channel attacks is an important consideration for cryptographic implementations deployed in devices to which an attacker has physical access. However, noise in side-channel measurements has a significant impact on the complexity of these attacks, especially when an implementation is protected with masking. Therefore, it is important to assess the ability of an attacker to deal with noise. While some previous works have considered approaches to remove (some) noise from measurements, these approaches generally require considerable expertise to be effectively employed or necessitate the ability of the attacker to capture a 'clean' set of traces without the noise. In this paper, we introduce a method for utilizing diffusion models to remove measurement noise from side-channel traces in a fully non-profiled setting. Denoising traces using our method considerably lowers the complexity of mounting attacks in both profiled and non-profiled settings. For instance, for a collision attack against the ASCADv2 dataset, we reduced the number of traces required to retrieve the key by 40%, and we showed similar improvements for ESHARD using a state-of-the-art MORE attack. Furthermore, we analyze the scenarios where our method is useful and provide insights into how the diffusion networks denoise traces.
Last updated:  2024-06-15
Efficient and Secure Post-Quantum Certificateless Signcryption for Internet of Medical Things
Shiyuan Xu, Xue Chen, Yu Guo, Siu-Ming Yiu, Shang Gao, and Bin Xiao
Internet of Medical Things (IoMT) has gained significant research focus in both academic and medical institutions. Nevertheless, the sensitive data involved in IoMT raises concerns regarding user validation and data privacy. To address these concerns, certificateless signcryption (CLSC) has emerged as a promising solution, offering authenticity, confidentiality, and unforgeability. Unfortunately, most existing CLSC schemes are impractical for IoMT due to their heavy computational and storage requirements. Additionally, these schemes are vulnerable to quantum computing attacks. Therefore, the design of an efficient post-quantum CLSC scheme remains an open problem. In this work, we propose PQ-CLSC, a novel post-quantum CLSC scheme that ensures quantum safety for IoMT. Our proposed design facilitates secure transmission of medical data between physicians and patients, effectively validating user legitimacy and minimizing the risk of private information leakage. To achieve this, we leverage lattice sampling algorithms and hash functions to generate the partial secret key and then employ the sign-then-encrypt method to obtain the ciphertext. We also formally prove the security of our design, including indistinguishability against chosen-ciphertext attacks (IND-CCA2) and existential unforgeability against chosen-message attacks (EU-CMA). Finally, through comprehensive performance evaluation, our signcryption overhead is only 30%-55% that of prior art, while our computation overhead is just around 45% that of other existing schemes. The evaluation results demonstrate that our solution is practical and efficient.
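As background, the generic sign-then-encrypt composition referred to above can be summarized as follows (notation ours; the actual PQ-CLSC construction is certificateless and lattice-based, so its keys and algorithms differ):
\[
c = \mathsf{Enc}_{\mathsf{pk}_R}\big(m \,\|\, \mathsf{Sign}_{\mathsf{sk}_S}(m)\big), \qquad \text{receiver: } (m, \sigma) \leftarrow \mathsf{Dec}_{\mathsf{sk}_R}(c), \quad \mathsf{Vrfy}_{\mathsf{pk}_S}(m, \sigma) \stackrel{?}{=} 1 .
\]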
Last updated:  2024-06-18
Malicious Security for PIR (almost) for Free
Brett Falk, Pratyush Mishra, and Matan Shtepel
Private Information Retrieval (PIR) enables a client to retrieve a database element from a semi-honest server while hiding the element being queried from the server. Maliciously-secure PIR (mPIR) [Colombo et al., USENIX Security '23] strengthens the guarantees of plain (i.e., semi-honest) PIR by ensuring that even a misbehaving server (a) cannot compromise client privacy via selective-failure attacks, and (b) must answer every query *consistently* (i.e., with respect to the same database). These additional security properties are crucial for many real-world applications. In this work we present a generic compiler that transforms any PIR scheme into an mPIR scheme in a black-box manner, with minimal overhead, and without requiring additional cryptographic assumptions. Since mPIR trivially implies PIR, our compiler establishes the equivalence of mPIR and PIR. By instantiating our compiler with existing PIR schemes, we immediately obtain mPIR schemes with $O(N^\epsilon)$ communication cost. In fact, by applying our compiler to a recent doubly-efficient PIR [Lin et al., STOC '23], we are able to construct a *doubly-efficient* mPIR scheme that requires only $\text{polylog}(N)$ communication and server and client computation. In comparison, all prior works incur an $\Omega(\sqrt{N})$ cost in these metrics. Our compiler makes use of smooth locally-decodable codes (LDCs) that have a robust decoding procedure. We term these codes "subcode"-LDCs, because they are LDCs where the query responses are from an error-correcting code. This property is shared by Reed--Muller codes (whose query responses are Reed--Solomon codewords) and more generally lifted codes. Applying our compiler requires us to consider decoding in the face of *non-signaling adversaries*, for reasons analogous to the need for non-signaling PCPs in the succinct-argument literature. We show how to construct such decoders for Reed--Muller codes, and more generally for smooth locally-decodable codes that have a robust decoding procedure.
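For intuition about the "subcode" property: in a Reed--Muller code the codeword is the evaluation table of an $m$-variate polynomial $f$ of total degree at most $d$ over $\mathbb{F}_q$ (with $d < q$), and a smooth local-decoding query restricts $f$ to a random line through the target point, where it becomes a univariate polynomial of degree at most $d$, i.e., a Reed--Solomon codeword (a standard fact, not the paper's non-signaling decoder):
\[
\ell(t) = z + t\,v \ \ (t \in \mathbb{F}_q), \qquad f\big(\ell(t)\big) \in \mathbb{F}_q[t] \text{ of degree} \le d, \qquad f(z) = f\big(\ell(0)\big).
\]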