Cache Timing Leakages in Zero-Knowledge Protocols
The area of modern zero-knowledge proof systems has seen a significant rise in popularity over the last couple of years, with new techniques and optimized constructions emerging on a regular basis.
As the field matures, implementation attacks become more relevant; however, side-channel attacks on zero-knowledge proof systems have seen surprisingly little treatment so far. In this paper we give an overview of potential attack vectors and show that some of the underlying finite field libraries, and implementations of heavily used components like hash functions, are vulnerable to cache attacks on CPUs.
On the positive side, we demonstrate that the computational overhead to protect against these attacks is relatively small.
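As a generic illustration of the cache-attack surface discussed above (our own sketch, not code from the audited libraries), the following Python fragment contrasts a secret-indexed table lookup, whose memory access pattern leaks through the CPU cache, with a constant-time variant that pays the kind of overhead the paper quantifies:

    # Illustrative sketch only: the classic cache-leakage pattern,
    # not code from any particular finite-field or hash library.

    SBOX = list(range(256))  # stand-in for an S-box or field-arithmetic table

    def lookup_leaky(secret_byte: int) -> int:
        # The memory address touched depends on the secret, so the cache
        # line it lands in (and hence its access time) leaks secret_byte.
        return SBOX[secret_byte]

    def lookup_constant_time(secret_byte: int) -> int:
        # Touch every entry and select the wanted one arithmetically,
        # so the access pattern is independent of the secret.
        result = 0
        for i, v in enumerate(SBOX):
            mask = 0xFF if i == secret_byte else 0x00  # branch shown for clarity;
            result |= v & mask                         # real code derives the mask branch-free
        return result

    assert lookup_leaky(42) == lookup_constant_time(42)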
zkSNARKs in the ROM with Unconditional UC-Security
The universal composability (UC) framework is a “gold standard” for security in cryptography. UC-secure protocols achieve strong security guarantees against powerful adaptive adversaries, and retain these guarantees when used as part of larger protocols. Zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) are a popular cryptographic primitive that is often used within larger protocols deployed in dynamic environments, and so UC-security is a highly desirable, if not necessary, goal.
In this paper we prove that there exist zkSNARKs in the random oracle model (ROM) that unconditionally achieve UC-security. Here, “unconditionally” means that security holds against adversaries that make a bounded number of queries to the random oracle, but are otherwise computationally unbounded.
Prior work studying UC-security for zkSNARKs obtains transformations that rely on computational assumptions and, in many cases, lose most of the succinctness property of the zkSNARK. Moreover, these transformations make the resulting zkSNARK more expensive and complicated.
In contrast, we prove that widely used zkSNARKs in the ROM are UC-secure without modifications. We prove that the Micali construction, which is the canonical construction of a zkSNARK, is UC-secure. Moreover, we prove that the BCS construction, which many zkSNARKs deployed in practice are based on, is UC-secure. Our results confirm the intuition that these natural zkSNARKs do not need to be augmented to achieve UC-security, and give confidence that their use in larger real-world systems is secure.
Survivable Payment Channel Networks
Payment channel networks (PCNs) are a leading method to scale the transaction throughput in cryptocurrencies. Two participants can use a bidirectional payment channel for making multiple mutual payments without committing them to the blockchain. Opening a payment channel is a slow operation that involves an on-chain transaction locking a certain amount of funds. These aspects limit the number of channels that can be opened or maintained. Users may route payments through a multi-hop path and thus avoid opening and maintaining a channel for each new destination. Unlike regular networks, capacity in PCNs depends on usage patterns; moreover, channels may become unidirectional. Since payments often fail due to channel depletion, a protection scheme to overcome failures is of interest. We define the stopping time of a payment channel as the time at which the channel becomes depleted. We analyze the mean stopping time of a single channel as well as that of a network with a set of channels, and examine the stopping time of channels in particular topologies. We then propose a scheme for optimizing the capacity distribution among the channels in order to increase the minimal stopping time in the network. We conduct experiments and demonstrate the accuracy of our model and the efficiency of the proposed optimization scheme.
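To make the stopping-time notion concrete, here is a toy simulation of our own (not the paper's analytical model), treating a channel of integer capacity as a bounded random walk driven by unit payments; the stopping time is the first step at which one side can no longer pay:

    import random

    def stopping_time(capacity: int, p_right: float = 0.5, max_steps: int = 10**6) -> int:
        """First step at which a unit payment fails because one side is depleted.

        balance = funds on the left side; a left-to-right payment needs
        balance >= 1, a right-to-left payment needs balance <= capacity - 1.
        """
        balance = capacity // 2  # start with an even split
        for step in range(1, max_steps + 1):
            if random.random() < p_right:    # left -> right payment
                if balance == 0:
                    return step              # depleted on the left
                balance -= 1
            else:                            # right -> left payment
                if balance == capacity:
                    return step              # depleted on the right
                balance += 1
        return max_steps

    # Mean stopping time grows with capacity for balanced traffic (p = 0.5).
    for c in (4, 16, 64):
        runs = [stopping_time(c) for _ in range(2000)]
        print(c, sum(runs) / len(runs))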
Efficient (3,3)-isogenies on fast Kummer surfaces
We give an alternative derivation of (N,N)-isogenies between fast Kummer surfaces which complements existing works based on the theory of theta functions. We use this framework to produce explicit formulae for the case of N = 3, and show that the resulting algorithms are more efficient than all prior (3,3)-isogeny algorithms.
Practical Delegatable Attribute-Based Anonymous Credentials with Chainable Revocation
Delegatable Anonymous Credentials (DAC) are an enhanced Anonymous Credentials (AC) system that allows credential owners to use credentials anonymously, as well as anonymously delegate them to other users. In this work, we introduce a new concept called Delegatable Attribute-based Anonymous Credentials with Chainable Revocation (DAAC-CR), which extends the functionality of DAC by allowing 1) fine-grained attribute delegation, 2) issuers to restrict the delegation capabilities of the delegated users at a fine-grained level, including the depth of delegation and the sets of delegable attributes, and 3) chainable revocation, meaning if a credential within the delegation chain is revoked, all subsequent credentials derived from it are also invalid.
We provide a practical DAAC-CR instance based on a novel primitive that we identify as structure-preserving signatures on equivalence classes on vector commitments (SPSEQ-VC). This primitive may be of independent interest, and we detail an efficient construction. Compared to traditional DAC systems that rely on non-interactive zero-knowledge (NIZK) proofs, the credential size in our DAAC-CR instance is constant, independent of the length of delegation chain and the number of attributes. We formally prove the security of our scheme in the generic group model and demonstrate its practicality through performance benchmarks.
Key Policy Attribute-Based Encryption Leveraging Isogeny-Based Cryptography
We present the first Key Policy Attribute-Based Encryption (KP-ABE) scheme employing isogeny-based cryptography through class group actions, specifically utilizing the CSI-FiSh instantiation and pairing groups. We introduce a new assumption, denoted Isog-DLin, which combines the isogeny and DLin assumptions. We propose the following constructions: a small-universe KP-ABE and a large-universe KP-ABE under the Isog-DBDH assumption, and a small-universe KP-ABE under the Isog-DLin assumption. In these constructions, the master key is designed to be secure against quantum computer attacks, while the ciphertext remains secure against classical computer attacks. This dual-layered approach ensures robust security across classical and quantum computational paradigms, addressing current and potential future cryptographic challenges.
FaultyGarble: Fault Attack on Secure Multiparty Neural Network Inference
The success of deep learning across a variety of applications, including inference on edge devices, has led to increased concerns about the privacy of users' data and deep learning models. Secure multiparty computation allows parties to remedy this concern, resulting in a growth in the number of such proposals and improvements in their efficiency. The majority of secure inference protocols relying on multiparty computation assume that the client does not deviate from the protocol and only passively attempts to extract information. Yet clients, driven by different incentives, can act maliciously to actively deviate from the protocol and disclose the deep learning model owner's private information. Interestingly, faults are well understood in the multiparty computation literature, although fault attacks have not been explored. Our paper introduces the very first fault attack against secure inference implementations relying on garbled circuits, as a prime example of multiparty computation schemes. In this regard, laser fault injection coupled with a model-extraction attack is successfully mounted against existing solutions that have been assumed to be secure against active attacks. Notably, the number of queries required for the attack equals that of the best model-extraction attack mounted against secure inference engines in the semi-honest scenario.
Atomic and Fair Data Exchange via Blockchain
We introduce a blockchain Fair Data Exchange (FDE) protocol, enabling a storage server to transfer a data file to a client atomically: the client receives the file if and only if the server receives an agreed-upon payment. We put forth a new definition for a cryptographic scheme that we name verifiable encryption under committed key (VECK), and we propose two instantiations for this scheme. Our protocol relies on a blockchain to enforce the atomicity of the exchange and uses VECK to ensure that the client receives the correct data (matching an agreed-upon commitment) before releasing the payment for the decrypting key. Our protocol is trust-minimized and requires only constant-sized on-chain communication, concretely 3 signatures, 1 verification key, and 1 secret key, with most of the data stored and communicated off-chain. It also supports exchanging only a subset of the data, can amortize the server's work across multiple clients, and offers a general framework to design alternative FDE protocols using different commitment schemes. A prominent application of our protocol is the Danksharding data availability scheme on Ethereum, which commits to data via KZG polynomial commitments. We also provide an open-source implementation for our protocol with both instantiations for VECK, demonstrating our protocol's efficiency and practicality on Ethereum.
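The following toy sketch of ours shows only the atomicity logic, with hash commitments standing in for the paper's KZG commitments and VECK proofs: the client locks payment against a commitment to the decryption key, and the server can claim the payment only by revealing that key, which simultaneously lets the client decrypt.

    import hashlib, os

    def h(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    def enc(key: bytes, data: bytes) -> bytes:   # toy stream cipher via a SHA-256 keystream
        out, ctr = b"", 0
        while len(out) < len(data):
            out += h(key + ctr.to_bytes(8, "big")); ctr += 1
        return bytes(a ^ b for a, b in zip(data, out))

    class Chain:                                 # stand-in for the on-chain contract
        def __init__(self, key_commitment: bytes, payment: int):
            self.c, self.payment, self.revealed = key_commitment, payment, None
        def claim(self, key: bytes) -> int:      # server claims payment by revealing key
            assert h(key) == self.c, "wrong key"
            self.revealed = key
            return self.payment

    # Server: encrypt the file, publish ciphertext + key commitment. In the
    # real protocol a VECK proof convinces the client that the ciphertext
    # decrypts to data matching an agreed commitment; that step is elided here.
    key, data = os.urandom(32), b"the data file"
    ct = enc(key, data)

    # Client: after checking the (elided) VECK proof, lock the payment.
    contract = Chain(h(key), payment=100)

    # Server reveals the key on-chain to collect; the client can now decrypt.
    contract.claim(key)
    assert enc(contract.revealed, ct) == data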
One-Way Functions and pKt Complexity
We introduce $\mathsf{pKt}$ complexity, a new notion of time-bounded Kolmogorov complexity that can be seen as a probabilistic analogue of Levin's $\mathsf{Kt}$ complexity. Using $\mathsf{pKt}$ complexity, we upgrade two recent frameworks that characterize one-way functions ($\mathsf{OWF}$) via symmetry of information and meta-complexity, respectively. Among other contributions, we establish the following results:
- $\mathsf{OWF}$ can be based on the worst-case assumption that $\mathsf{BPEXP}$ is not contained infinitely often in $\mathsf{P}/\mathsf{poly}$ if the failure of symmetry of information for $\mathsf{pKt}$ in the $\textit{worst-case}$ implies its failure on $\textit{average}$.
- (Infinitely-often) $\mathsf{OWF}$ exist if and only if the average-case easiness of approximating $\mathsf{pKt}$ with $\textit{two-sided}$ error implies its (mild) average-case easiness with $\textit{one-sided}$ error.
Previously, in a celebrated result, Liu and Pass (CRYPTO 2021 and CACM 2023) proved that one can base (infinitely-often) $\mathsf{OWF}$ on the assumption that $\mathsf{EXP} \nsubseteq \mathsf{BPP}$ if and only if there is a reduction from computing $\mathsf{Kt}$ on average with $\textit{zero}$ error to computing $\mathsf{Kt}$ on average with $\textit{two-sided}$ error. In contrast, our second result shows that closing the gap between two-sided error and one-sided error average-case algorithms for approximating $\mathsf{pKt}$ is both necessary and sufficient to $\textit{unconditionally}$ establish the existence of $\mathsf{OWF}$.
PassPro: A Secure Password-based Authentication Mechanism to Prevent Attacks
The password-based authentication system is a widely used authentication mechanism. However, it has several issues, including the domino effect, guessing attacks, dictionary attacks, rainbow table attacks, and database leakage. To address these issues, we present a client-side password hashing method called PassPro. PassPro uses two secrets and a domain word to shuffle the strings. The shuffled strings are converted into hash values and sent to the identity manager for authentication or identity creation. The shuffling is based on a pseudo-random algorithm, so the legitimate user can reproduce the shuffled string again. The hash values are encrypted in the password database using a password-based encryption method with a mutually reproducible secret word for each user. Therefore, PassPro features: a) client-side password metering, b) client-side password hashing, c) prevention of the domino effect from a leaked password database, d) protection against password database leakage, e) encryption of the hash values using a mutually reproducible secret word, and f) prevention of dictionary and guessing attacks. PassPro also guarantees that adversaries, including authentication managers, cannot retrieve the user's original password and user ID; the original user ID and password cannot be retrieved even if the password database is given to the adversary. Furthermore, a password database's user IDs and passwords are invalid in other domains, even if the user uses the same user ID and password in multiple domains.
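A minimal sketch of the general mechanism described above, with illustrative choices of our own (the paper's actual shuffling and encryption algorithms differ): the two secrets and the domain word deterministically seed a pseudo-random shuffle, and only a hash of the shuffled string ever reaches the identity manager.

    import hashlib, hmac, random

    def client_side_hash(user_id: str, password: str,
                         secret1: str, secret2: str, domain: str) -> str:
        # Seed a PRNG deterministically from the two secrets and the domain
        # word, so the legitimate user can reproduce the shuffle at every login.
        seed = hmac.new(secret1.encode(),
                        (secret2 + domain).encode(), hashlib.sha256).digest()
        rng = random.Random(seed)

        chars = list(user_id + ":" + password)
        rng.shuffle(chars)                  # pseudo-random shuffle of the string

        # Only this hash is sent to the identity manager: the server never sees
        # the raw user ID/password, and the same credentials hash differently
        # under a different domain word.
        return hashlib.sha256("".join(chars).encode()).hexdigest()

    print(client_side_hash("alice", "hunter2", "s1", "s2", "example.com"))
    print(client_side_hash("alice", "hunter2", "s1", "s2", "other.org"))  # differs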
RSA-Based Dynamic Accumulator without Hashing into Primes
A cryptographic accumulator is a compact data structure for representing a set of elements coming from some domain. It allows for a compact proof of membership and, in the case of a universal accumulator, non-membership of an element x in the data structure. A dynamic accumulator, furthermore, allows elements to be added to and deleted from the accumulator.
Previously known RSA-based dynamic accumulators were too slow in practice because they required that an element in the domain be represented as a prime number. Accumulators based on settings other than RSA had other drawbacks such as requiring a prohibitively large common reference string or a trapdoor, or not permitting deletions.
In this paper, we construct RSA-based dynamic accumulators that do not require that the accumulated elements be represented as primes. We also show how to aggregate membership and non-membership witnesses and batch additions and deletions. We demonstrate that, for 112-bit, 128-bit, and 192-bit security, the efficiency gains compared to previously known RSA-based accumulators are substantial, and, for the first time, make cryptographic accumulators a viable candidate for a certificate revocation mechanism as part of a WebPKI-type system. To achieve an efficient verification time for aggregated witnesses, we introduce a variant of Wesolowski’s proof of exponentiation (Journal of Cryptology 2020) that does not require hashing into primes.
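For contrast, here is a toy version of ours of the classical RSA-based accumulator whose prime-representation requirement this paper removes; the parameters are illustrative and insecure:

    import hashlib

    def next_prime(n: int) -> int:
        def is_prime(m: int) -> bool:
            if m < 2:
                return False
            d = 2
            while d * d <= m:
                if m % d == 0:
                    return False
                d += 1
            return True
        while not is_prime(n):
            n += 1
        return n

    def to_prime(x: bytes) -> int:
        # The costly "hash to prime" step that this paper's construction removes.
        return next_prime(int.from_bytes(hashlib.sha256(x).digest()[:4], "big"))

    # Tiny modulus for readability; real deployments use a ~2048-bit RSA
    # modulus whose factorization nobody knows.
    N = 1009 * 1013
    g = 3

    elements = [b"alice", b"bob", b"carol"]
    primes = [to_prime(e) for e in elements]

    acc = g
    for p in primes:
        acc = pow(acc, p, N)        # accumulator = g^(product of all primes) mod N

    # Membership witness for b"bob": the accumulator computed without bob's prime.
    wit = g
    for p in primes:
        if p != to_prime(b"bob"):
            wit = pow(wit, p, N)

    assert pow(wit, to_prime(b"bob"), N) == acc   # witness verification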
Locally Verifiable Distributed SNARGs
The field of distributed certification is concerned with certifying properties of distributed networks, where the communication topology of the network is represented as an arbitrary graph; each node of the graph is a separate processor, with its own internal state. To certify that the network satisfies a given property, a prover assigns each node of the network a certificate, and the nodes then communicate with one another and decide whether to accept or reject. We require soundness and completeness: the property holds if and only if there exists an assignment of certificates to the nodes that causes all nodes to accept. Our goal is to minimize the length of the certificates, as well as the communication between the nodes of the network. Distributed certification has been extensively studied in the distributed computing community, but it has so far only been studied in the information-theoretic setting, where the prover and the network nodes are computationally unbounded.
In this work we introduce and study computationally bounded distributed certification: we define locally verifiable distributed $\mathsf{SNARG}$s ($\mathsf{LVD}$-$\mathsf{SNARG}$s), which are an analog of SNARGs for distributed networks, and are able to circumvent known hardness results for information-theoretic distributed certification by requiring both the prover and the verifier to be computationally efficient (namely, PPT algorithms).
We give two $\mathsf{LVD}$-$\mathsf{SNARG}$ constructions: the first allows us to succinctly certify any network property in $\mathsf{P}$, using a global prover that can see the entire network; the second construction gives an efficient distributed prover, which succinctly certifies the execution of any efficient distributed algorithm. Our constructions rely on non-interactive batch arguments for $\mathsf{NP}$ ($\mathsf{BARG}$s) and on $\mathsf{RAM}$ $\mathsf{SNARG}$s, which have recently been shown to be constructible from standard cryptographic assumptions.
Dual Polynomial Commitment Schemes and Applications to Commit-and-Prove SNARKs
In this work, we introduce a primitive called a dual polynomial commitment scheme that allows linking a witness committed to using a univariate polynomial commitment scheme with a witness inside a multilinear polynomial commitment scheme. This yields commit-and-prove (CP) SNARKs with the flexibility of going back and forth between univariate and multilinear encodings of witnesses. This is in contrast to existing CP frameworks, which assume compatible polynomial commitment schemes between different components of the proof systems. In addition to the applications to CP, we also show that our notion yields a version of Spartan with better proof size and verification complexity, at the cost of a more expensive prover.
We achieve this via a combination of the following technical contributions: (i) we construct a new univariate commitment scheme in the updatable SRS setting that has better prover complexity than KZG; (ii) we construct a new multilinear commitment scheme in the updatable setting that is compatible for linking with our univariate scheme; (iii) we construct an argument of knowledge to prove a given linear relationship between two witnesses committed using a two-tiered commitment scheme (Pedersen+AFG) using Dory as a black-box. These constructions are of independent interest.
We implement our commitment schemes and report on performance. We also implement the version of Spartan with our dual polynomial commitment scheme and demonstrate that it outperforms Spartan in proof size and verification complexity.
Password-Protected Key Retrieval with(out) HSM Protection
Password-protected key retrieval (PPKR) enables users to store and retrieve high-entropy keys from a server securely. The process is bootstrapped from a human-memorizable password only, addressing the challenge of how end-users can manage cryptographic key material. The core security requirement is protection against a corrupt server, which should not be able to learn the key or offline-attack it through the password protection. PPKR is deployed at a large scale with the WhatsApp Backup Protocol (WBP), allowing users to access their encrypted messaging history when switching to a new device. Davies et al. (Crypto’23) formally analyzed the WBP, proving that it satisfies most of the desired security. The WBP uses the OPAQUE protocol for password-based key exchange as a building block and relies on the server using a hardware security module (HSM) for most of its protection. In fact, the security analysis assumes that the HSM is incorruptible – rendering most of the heavy cryptography in the WBP obsolete. In this work, we explore how provably secure and efficient PPKR can be built that either relies strongly on an HSM – but then takes full advantage of that – or requires weaker trust assumptions at the price of more advanced cryptography. To this end, we expand the definitional work by Davies et al. to allow the analysis of PPKR with fine-grained HSM corruption, such as leakage of user records or attestation keys. For each scenario, we aim to give minimal PPKR solutions. For the strongest corruption setting, namely a fully corrupted HSM, we propose a protocol with a simpler design and better efficiency than the WBP. We also fix an attack related to client authentication that was identified by Davies et al.
Universal Context Commitment without Ciphertext Expansion
An ongoing research challenge in symmetric cryptography is to design an authenticated encryption (AE) scheme with a commitment to the secret key or, preferably, to the entire context. One way to achieve this is to use a transform on an existing AE scheme, if possible with no output length expansion. At EUROCRYPT'22, Bellare and Hoang proposed the HtE transform, which lifts key-commitment to context-commitment. In the same year at ESORICS'22, Chan and Rogaway proposed the CTX transform, which works on any AE scheme where the tag is not required for decryption. However, for AE schemes which are not key-committing to begin with and which use the tag for decryption, no such transform exists to date. The latter category encompasses all AE schemes based on the design paradigms SIV, MAC-then-Encrypt, and Encode-then-Encipher. In this work, we propose PACT, a transform to convert any AE scheme into a context-committing one without any output length expansion. In addition, PACT preserves both the nonce-respecting and the nonce-misuse security of the legacy AE scheme. However, this is not the case with all the existing transforms. To demonstrate this, we show that a combination of CTX and SC (proposed by Bellare and Hoang, CRYPTO'24) doesn't preserve the nonce-misuse security of the legacy AE scheme. PACT requires only one call to a collision-resistant unkeyed hash function and one call to a block cipher. Finally, we propose a lighter transform, comPACT, which converts a nonce-respecting AE scheme into a context-committing one.
GRandLine: Adaptively Secure DKG and Randomness Beacon with (Log-)Quadratic Communication Complexity
A randomness beacon is a source of continuous and publicly verifiable randomness which is of crucial importance for many applications. Existing works on randomness beacons suffer from at least one of the following drawbacks: (i) security only against static (i.e., non-adaptive) adversaries, (ii) each epoch takes many rounds of communication, or (iii) computationally expensive tools such as proof-of-work (PoW) or verifiable delay functions (VDF). In this work, we introduce GRandLine, the first adaptively secure randomness beacon protocol that overcomes all these limitations while preserving simplicity and optimal resilience in the synchronous network setting. We achieve our result in two steps. First, we design a novel distributed key generation (DKG) protocol GRand that runs in $\mathcal{O}(\lambda n^2\log{n})$ bits of communication but, unlike most conventional DKG protocols, outputs both secret and public keys as group elements. Here, $\lambda$ denotes the security parameter. Second, following termination of GRand, parties can use their keys to derive a sequence of randomness beacon values, where each random value costs only a single asynchronous round and $\mathcal{O}(\lambda n^2)$ bits of communication. We implement GRandLine and evaluate it using a network of up to 64 parties running in geographically distributed AWS instances. Our evaluation shows that GRandLine can produce about 2 beacon outputs per second in a network of 64 parties. We compare our protocol to the state-of-the-art randomness beacon protocols OptRand (NDSS '23), BRandPiper (CCS '21), and Drand, in the same setting and observe that it vastly outperforms them.
Actively Secure Private Set Intersection in the Client-Server Setting
Private set intersection (PSI) allows two parties to compute the intersection of their sets without revealing anything else. In some applications of PSI, a server holds a large set and runs a PSI protocol with multiple clients, each with its own smaller set. In this setting, existing protocols fall short: they either achieve only semi-honest security, or else require the server to run the protocol from scratch for each execution.
We design an efficient protocol for this setting with simulation-based security against malicious adversaries. In our protocol, the server publishes a one-time, linear-size encoding of its set. Then, multiple clients can independently execute a PSI protocol with the server, with complexity linear in the size of each client’s set. To learn the intersection, a client can download the server’s encoding, which can be accelerated via content-distribution or peer-to-peer networks since the same encoding is used by all clients; alternatively, clients can fetch only the relevant parts of the encoding using verifiable private information retrieval. A key ingredient of our protocol is an efficient instantiation of an oblivious verifiable unpredictable function, which may be of independent interest.
Our implementation shows that our protocol is highly efficient. For a server holding $10^8$ elements and each client holding $10^3$ elements, the size of the server’s encoding is 800MB; an execution of the protocol uses 60MB of communication, runs in under 5s in a WAN network with 120 Mbps bandwidth, and costs only 0.017 USD when utilizing network-caching infrastructures, a 5× saving compared to a state-of-the-art PSI protocol.
Towards Fine-Grained One-Way Functions from Strong Average-Case Hardness
Constructing one-way functions from average-case hardness is a long-standing open problem. A positive result would exclude Pessiland (Impagliazzo ’95) and establish a highly desirable win-win situation: either (symmetric) cryptography exists unconditionally, or all NP problems can be solved efficiently on the average. Motivated by the lack of progress on this seemingly very hard question, we initiate the investigation of weaker yet meaningful candidate win-win results of the following type: either there are fine-grained one-way functions (FGOWF), or nontrivial speedups can be obtained for all NP problems on the average. FGOWFs only require a fixed polynomial gap (as opposed to superpolynomial) between the running time of the function and the running time of an inverter. We obtain three main results:
Construction. We show that if there is an NP language having a very strong form of average-case hardness, which we call block finding hardness, then FGOWF exist. We provide heuristic support for this very strong average-case hardness notion by showing that it holds for a random language. Then, we study whether weaker (and more natural) forms of average-case hardness could already suffice to obtain FGOWF, and obtain two negative results:
Separation I. We provide a strong oracle separation for the implication (∃ exponentially average-case hard NP language ⇒ ∃ FGOWF).
Separation II. We provide a second strong negative result for an even weaker candidate win-win result. Namely, we rule out a relativizing proof for the implication (∃ exponentially average-case hard NP language whose hardness amplifies optimally through parallel repetitions ⇒ ∃ FGOWF). This separation forms the core technical contribution of our work.
EUCLEAK
Secure elements are small microcontrollers whose main purpose is to generate/store secrets and then execute cryptographic operations. They undergo the highest level of security evaluations that exists (Common Criteria) and are often considered inviolable, even in the worst-case attack scenarios. Hence, complex secure systems build their security upon them.
FIDO hardware tokens are strong authentication factors for signing in to applications (any web service supporting FIDO); they often embed a secure element, and the FIDO protocol uses the Elliptic Curve Digital Signature Algorithm (ECDSA for short) as its core cryptographic primitive. YubiKey 5 Series tokens are certainly the most widespread FIDO hardware tokens; their secure element is an Infineon SLE78.
This document shows how – starting from a JavaCard open platform (the Feitian A22) based on a similar Infineon SLE78 – we understood the Infineon ECDSA implementation, found a side-channel vulnerability, and designed a practical side-channel attack. The attack is then demonstrated on a YubiKey 5Ci. Finally, we show that the vulnerability extends to the more recent Infineon Optiga Trust M and Infineon Optiga TPM security microcontrollers.
Our work unearths a side-channel vulnerability in the cryptographic library of Infineon Technologies, one of the biggest secure element manufacturers. This vulnerability – which went unnoticed for 14 years and through about 80 highest-level Common Criteria certification evaluations – is due to a non-constant-time modular inversion.
The attack requires physical access to the secure element (a few local electromagnetic side-channel acquisitions, i.e., a few minutes, are enough) in order to extract the ECDSA secret key. In the case of the FIDO protocol, this allows an attacker to create a clone of the FIDO device.
All YubiKey 5 Series devices (with firmware version below 5.7) are impacted by the attack, and in fact all Infineon security microcontrollers (including TPMs) that run the Infineon cryptographic library (as far as we know, any existing version) are vulnerable to the attack. These security microcontrollers are present in a wide variety of secure systems – often relying on ECDSA – such as electronic passports and cryptocurrency hardware wallets, but also smart cars and homes. However, we have not (yet) checked whether the EUCLEAK attack applies to any of these other products.
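As a generic illustration of the root cause (emphatically not the vendor's code): a textbook extended-Euclidean modular inversion branches and loops a data-dependent number of times, so its running time leaks information about the inverted operand (e.g., an ECDSA nonce), whereas Fermat-based inversion uses a fixed, public exponent.

    # Generic illustration of a timing-leaky modular inversion: the
    # well-known pattern behind such leaks, not Infineon's implementation.

    def invert_leaky(a: int, p: int) -> int:
        # Extended Euclid: the loop count and branch pattern depend on `a`,
        # so execution time correlates with the secret operand.
        old_r, r = a % p, p
        old_s, s = 1, 0
        while r != 0:                  # iteration count is data-dependent
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_s, s = s, old_s - q * s
        return old_s % p

    def invert_fixed_exponent(a: int, p: int) -> int:
        # Fermat inversion a^(p-2) mod p: the exponent is public and fixed,
        # so with a regular exponentiation ladder the timing is independent
        # of `a` (Python's pow is used here only for exposition).
        return pow(a, p - 2, p)

    p = 2**255 - 19
    for a in (3, 2**200 + 1):
        assert invert_leaky(a, p) == invert_fixed_exponent(a, p)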
EvalRound+ Bootstrapping and its Rigorous Analysis for CKKS Scheme
Bootstrapping stands as a fundamental component of fully homomorphic encryption (FHE) schemes, facilitating an infinite number of operations by recovering the ciphertext modulus. This work is aimed at significantly reducing the consumption of modulus in bootstrapping, thereby enhancing the efficiency of FHE performance, specifically for the Cheon--Kim--Kim--Song (CKKS) scheme proposed by Cheon et al. Building on the EvalRound bootstrapping method proposed by Kim et al., which includes the steps of ModRaise, CoeffToSlot, EvalRound and SlotToCoeff, we introduce $\textrm{EvalRound}^{+}$ bootstrapping. This bootstrapping inherits the advantage of EvalRound bootstrapping in CoeffToSlot and resolves its disadvantage in SlotToCoeff. Furthermore, we conduct a set of rigorous and comprehensive analyses to precisely determine the optimal choices of the parameters. The implementation of $\textrm{EvalRound}^{+}$ bootstrapping, along with optimal choices, has achieved a reduction in modulus consumption by over $40\%$ for CoeffToSlot and SlotToCoeff. Additionally, it has increased the number of levels for general multiplication by 2-4 in the most widely used bootstrapping parameter sets.
Practical Blind Signatures in Pairing-Free Groups
Blind signatures have garnered significant attention in recent years, with several efficient constructions in the random oracle model relying on well-understood assumptions. However, this progress does not apply to pairing-free cyclic groups: fully secure constructions over cyclic groups rely on pairings, remain inefficient, or depend on the algebraic group model or strong interactive assumptions. To address this gap, Chairattana-Apirom, Tessaro, and Zhu (CTZ, Crypto 2024) proposed a new scheme based on the CDH assumption. Unfortunately, their construction results in large signatures and high communication complexity.
In this work, we propose a new blind signature construction in the random oracle model that significantly improves upon the CTZ scheme. Compared to CTZ, our scheme reduces communication complexity by a factor of more than 10 and decreases the signature size by a factor of more than 45, achieving a compact signature size of only 224 Bytes. The security of our scheme is based on the DDH assumption over pairing-free cyclic groups, and we show how to generalize it to the partially blind setting.
Security Strengthening of Threshold Symmetric Schemes
In this paper, we study the security definitions of various threshold symmetric primitives. Namely, we analyze the security definitions for threshold pseudorandom functions, threshold message authentication codes, and threshold symmetric encryption. In each case, we strengthen the existing security definition, and we present a scheme that satisfies our stronger notion of security. In particular, we propose an indifferentiability definition for threshold pseudorandom functions and an IND-CCA2 definition for threshold symmetric encryption, and we show that both definitions are achievable. Notably, we propose the first IND-CCA2 secure threshold symmetric encryption scheme.
FDFB$^2$: Functional Bootstrapping via Sparse Polynomial Multiplication
Fully homomorphic encryption schemes are methods to perform computations over encrypted data. Since its introduction by Gentry, there has been a plethora of research optimizing the originally inefficient cryptosystems. Over time, different families have emerged. On the one hand, schemes such as BGV, BFV, or CKKS excel at performing coefficient-wise addition or multiplication over vectors of encrypted data. In contrast, accumulator-based schemes such as FHEW and TFHE provide efficient methods to evaluate boolean circuits and means to efficiently compute functions over a small plaintext space of up to 4-5 bits in size.
In this paper, we focus on the second family. At a high level, accumulator-based schemes encode the range of a function f in the coefficients of a polynomial, which is then encrypted in a homomorphic accumulator. Given an input ciphertext, the coefficients of the encrypted polynomial are homomorphically rotated, such that there is a correspondence between the constant term of the result and the message contained in the ciphertext. In the end, it is possible to derive a ciphertext of the constant term encrypted with regard to the same encryption scheme as the input ciphertext. To summarize, by appropriately encoding the function f on the accumulated polynomial, we can compute f on the plaintext of the input ciphertext, where the output ciphertext has its noise magnitude independent of the input ciphertext. However, by default, it is necessary to impose restrictions on the type of functions we evaluate or drastically limit the plaintext space that can be correctly processed. Otherwise, the procedure's output will be incorrect and hard to predict.
In this work, we describe two novel algorithms that have no such restrictions. Furthermore, we derive an algorithm that enables a user to evaluate an arbitrary number of functions at a computational cost that differs only by a constant amount compared to a single function. Our methods lead to an evaluation that is between 15 and 31% faster than previous works while also being conceptually simpler.
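To make the accumulator mechanics described above concrete, here is a plaintext-only toy of ours (no encryption or noise): f's outputs are encoded in the coefficients of a polynomial modulo $X^N + 1$, and multiplying by $X^{-m}$ moves f(m) into the constant term; the negacyclic sign flips at the wrap-around are the source of the default restrictions mentioned above.

    N = 8  # ring dimension: polynomials modulo X^N + 1 (negacyclic)

    def rotate(poly, m):
        """Multiply poly by X^(-m) in Z[X]/(X^N + 1).

        Coefficients wrapping around the degree-N boundary pick up a sign
        flip; this negacyclic behaviour is what restricts which functions a
        basic functional bootstrap can evaluate over the full plaintext space.
        """
        out = [0] * N
        for i, c in enumerate(poly):
            j = i - m
            sign = 1
            while j < 0:
                j += N
                sign = -sign
            out[j] = sign * c
        return out

    f = lambda m: (3 * m + 1) % 16          # some function on messages 0..N-1
    test_vector = [f(i) for i in range(N)]  # f's outputs as coefficients

    for m in range(N):
        # After rotating by m, f(m) sits in the constant term; in the real
        # scheme this rotation happens homomorphically inside the accumulator.
        assert rotate(test_vector, m)[0] == f(m)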
Single-query Quantum Hidden Shift Attacks
Quantum attacks using superposition queries are known to break many classically secure modes of operation. While these attacks do not necessarily threaten the security of the modes themselves, since they rely on a strong adversary model, they help us to draw limits on the provable security of these modes.
Typically these attacks use the structure of the mode (stream cipher, MAC or authenticated encryption scheme) to embed a period-finding problem, which can be solved with a dedicated quantum algorithm. The hidden period can be recovered with a few superposition queries (e.g., $O(n)$ for Simon's algorithm), leading to state or key-recovery attacks. However, this strategy breaks down if the period changes at each query, e.g., if it depends on a nonce.
In this paper, we focus on this case and give dedicated state-recovery attacks on the authenticated encryption schemes Rocca, Rocca-S, Tiaoxin-346 and AEGIS-128L. These attacks rely on a procedure to find a Boolean hidden shift with a single superposition query, which overcomes the change of nonce at each query. As they crucially depend on such queries, we stress that they do not break any security claim of the authors, and do not threaten the schemes if the adversary only makes classical queries.
ALGAES: An Authenticated Lattice-based Generic Asymmetric Encryption Scheme
In this article, we propose a generic hybrid encryption scheme providing entity authentication. The scheme is based on lossy trapdoor functions relying on the hardness of the Learning With Errors problem. The construction can be adapted to a number of different security requirements with minimal reconfiguration. It ensures entity authentication and ciphertext integrity while providing security against adaptive chosen ciphertext attacks in the standard model. As a desired characteristic of schemes providing entity authentication, we prove strong unforgeability under chosen message attack for the construction. In addition, the scheme is post-quantum secure based on the hardness of the underlying assumption.
Coral: Maliciously Secure Computation Framework for Packed and Mixed Circuits
Achieving malicious security with high efficiency in dishonest-majority secure multiparty computation is a formidable challenge. The milestone works SPDZ and TinyOT have spawned a large family of protocols in this direction. For boolean circuits, state-of-the-art works (Cascudo et al., TCC 2020 and Escudero et al., CRYPTO 2022) have proposed schemes based on reverse multiplication-friendly embedding (RMFE) to reduce the amortized cost. However, these protocols are theoretically described and analyzed, resulting in a significant gap between theory and concrete efficiency.
Our work addresses existing gaps by refining and correcting several issues identified in prior research, leading to the first practically efficient realization of RMFE. We introduce an array of protocol enhancements, including RMFE-based quintuples and (extended) double-authenticated bits, aimed at improving the efficiency of maliciously secure boolean and mixed circuits. The culmination of these efforts is embodied in Coral, a comprehensive framework developed atop the MP-SPDZ library. Through rigorous evaluation across multiple benchmarks, Coral demonstrates a remarkable efficiency gain, outperforming the foremost theoretical approach of Escudero et al. (which incorporates our RMFE foundation albeit lacking our protocol enhancements) by a factor of 16-30×, and surpassing the leading practical implementation of Frederiksen et al. (ASIACRYPT 2015) by 4-7×.
NTRU+PKE: Efficient Public-Key Encryption Schemes from the NTRU Problem
We propose a new NTRU-based Public-Key Encryption (PKE) scheme called $\mathsf{NTRU+}\mathsf{PKE}$, which effectively incorporates the Fujisaki-Okamoto transformation for PKE (denoted as $\mathsf{FO}_{\mathsf{PKE}}$) to achieve chosen-ciphertext security in the Quantum Random Oracle Model (QROM). While $\mathsf{NTRUEncrypt}$, a first-round candidate in the NIST PQC standardization process, was proven to be chosen-ciphertext secure in the Random Oracle Model (ROM), it lacked corresponding security proofs for QROM. Our work extends the capabilities of the recent $\mathsf{ACWC}_{2}$ transformation, proposed by Kim and Park in 2023, by demonstrating that an $\mathsf{ACWC}_{2}$-transformed scheme can serve as a sufficient foundation for applying $\mathsf{FO}_\mathsf{PKE}$. Specifically, we show that the $\mathsf{ACWC}_{2}$-transformed scheme achieves (weak) $\gamma$-spreadness, an essential property for constructing an IND-CCA secure PKE scheme. Moreover, we provide the first proof of the security of $\mathsf{FO}_\mathsf{PKE}$ in the QROM. Finally, we show that $\mathsf{FO}_\mathsf{PKE}$ can be further optimized into a more efficient transformation, $\overline{\mathsf{FO}}_\mathsf{PKE}$, which eliminates the need for re-encryption during decryption. By instantiating an $\mathsf{ACWC}_{2}$-transformed scheme with appropriate parameterizations, we construct $\mathsf{NTRU+}\mathsf{PKE}$, which supports 256-bit message encryption. Our implementation results demonstrate that at approximately a classical 180-bit security level, $\mathsf{NTRU+}\mathsf{PKE}$ is about 2 times faster than \textsc{Kyber} + AES-256-GCM in AVX2 mode.
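As a hedged schematic of the Fujisaki-Okamoto mechanism discussed above (our own toy encoding, not the paper's transformation): encryption derives its randomness from the message, and decryption re-encrypts to check that the ciphertext was honestly formed; the optimized $\overline{\mathsf{FO}}_\mathsf{PKE}$ variant removes precisely this re-encryption step.

    import hashlib, os

    def h(*parts: bytes) -> bytes:
        return hashlib.sha256(b"|".join(parts)).digest()

    # Correctness-only placeholder for a randomized base PKE (e.g. NTRU-type);
    # it provides no secrecy and exists purely so the FO control flow can run.
    def base_enc(pk: bytes, m: bytes, r: bytes) -> bytes:
        pad = h(pk, r)
        return r + bytes(a ^ b for a, b in zip(m, pad))

    def base_dec(sk: bytes, pk: bytes, ct: bytes) -> bytes:
        r, body = ct[:32], ct[32:]
        pad = h(pk, r)
        return bytes(a ^ b for a, b in zip(body, pad))

    def fo_enc(pk: bytes, m: bytes) -> bytes:
        r = h(b"derand", pk, m)          # encryption randomness derived from m
        return base_enc(pk, m, r)

    def fo_dec(sk: bytes, pk: bytes, ct: bytes) -> bytes:
        m = base_dec(sk, pk, ct)
        # Re-encryption check: dishonestly formed ciphertexts are rejected.
        # Optimized variants remove exactly this extra encryption call.
        if base_enc(pk, m, h(b"derand", pk, m)) != ct:
            raise ValueError("invalid ciphertext")
        return m

    pk, sk = os.urandom(32), b"unused in this toy"
    ct = fo_enc(pk, b"hello FO")
    assert fo_dec(sk, pk, ct) == b"hello FO"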
Software-Defined Cryptography: A Design Feature of Cryptographic Agility
Given the widespread use of cryptography in Enterprise IT, migration to post-quantum cryptography (PQC) is by no means a drop-in replacement. Cryptographic agility, or crypto-agility, is a design feature that enables seamless updates to new cryptographic algorithms and standards without the need to modify or replace the surrounding infrastructure. This paper introduces a notion of software-defined cryptography as the desired design feature for crypto-agility, emphasizing the role of software in providing centralized governance for cryptography and automated enforcement of cryptographic policies, such as migration to PQC.
ML based Improved Differential Distinguisher with High Accuracy: Application to GIFT-128 and ASCON
In recent years, ML-based differential distinguishers have been explored and compared with classical methods. The complexity of a key recovery attack on a block cipher is calculated using the probability of a differential distinguisher provided by classical methods. Since theoretical computations suffice to calculate the data complexity in these cases, the practical availability of computational resources imposes no restriction on attacking a block cipher with classical methods. However, ML-based differential cryptanalysis relies on a machine learning model that learns features from encrypted data using the available compute power. This restricts the accuracy of an ML distinguisher for increased numbers of rounds and for ciphers with large block sizes: we can still construct the distinguisher, but its accuracy becomes very low in such cases. In this paper, we present a new approach to construct a differential distinguisher with high accuracy from an existing ML-based distinguisher of low accuracy. This approach outperforms all existing approaches with a similar objective. We demonstrate our method by constructing high-accuracy ML-based distinguishers for GIFT-128 and the ASCON permutation. For GIFT-128, the accuracy of the 7-round distinguisher is increased to 98.8% with $2^{9}$ data complexity. For ASCON, the accuracy of the 4-round distinguisher is increased to 99.4% with $2^{18}$ data complexity. We present the first ML-based distinguisher for 8 rounds of GIFT-128, using the differential-ML distinguisher presented at Latincrypt 2021. This distinguisher is constructed with 99.8% accuracy and $2^{18}$ data complexity.
A Note on ARADI and LLAMA
Recently, the NSA proposed a block cipher called ARADI and a mode of operation called LLAMA for memory encryption applications. In this note, we comment on this proposal and on its suitability for the intended application, and we describe a related-IV attack on LLAMA that breaks confidentiality of ciphertexts and allows a straightforward forgery attack breaking integrity of ciphertexts (INT-CTXT). Both attacks have negligible complexity.
Provably Secure Online Authenticated Encryption and Bidirectional Online Channels
In this work, we examine online authenticated encryption with variable expansion. We follow a notion where both encryption and decryption are online, and security is ensured in the RUP (Release of Unverified Plaintext) setting. Then we propose a generic way of obtaining an online authenticated encryption mode from a tweakable online encryption mode based on the encode-then-encipher paradigm (Bellare and Rogaway, Asiacrypt 2000). To instantiate our generic scheme, we start by proposing a provably secure tweakable online encryption mode called t-OleF, a tweakable version of OleF (Bhaumik and Nandi, ToSC 2016(2)), and then plug it into our generic scheme to obtain OlÆF, a provably secure online authenticated encryption mode. As an application, we propose a primitive we call a bidirectional online channel, suited for communication between lightweight devices.
Committing Wide Encryption Mode with Minimum Ciphertext Expansion
We propose a new wide encryption (WE) mode of operation that satisfies robust authenticated encryption (RAE) and committing security with minimum ciphertext expansion. WE has attracted much attention in the last few years, and its advantages include RAE security, which provides robustness against a wide range of misuses when combined with the encode-then-encipher (EtE) construction. Unfortunately, WE-based EtE does not provide good committing security, and there is a recent constant-time CMT-4 attack (Chen et al., ToSC 2023(4)). Improving CMT-4 security requires considerable ciphertext expansion: the state-of-the-art scheme expands the ciphertext by s_rae + 2 s_cmt bits over the original message to achieve s_rae-bit RAE and s_cmt-bit CMT-4 security. Our new WE mode FFF addresses this issue by achieving s_rae-bit RAE and s_cmt-bit CMT-4 security with only max{s_cmt, s_rae} bits of ciphertext expansion; for example, with s_rae = s_cmt = 128, the expansion drops from 384 bits to 128 bits. Our design is based on the committing concealer proposed by Bellare et al.; its extension to WE (cf. tag-based AE) while satisfying RAE security is the main technical innovation.
Tightly Secure Non-Interactive BLS Multi-Signatures
Due to their simplicity, compactness, and algebraic structure, BLS signatures are among the most widely used signatures in practice. For example, used as multi-signatures, they are integral to Ethereum's proof-of-stake consensus. From the perspective of concrete security, however, BLS (multi-)signatures suffer from a security loss linear in the number of signing queries. It is well known that this loss cannot be avoided using current proof techniques.
In this paper, we introduce a new variant of BLS multi-signatures that achieves tight security while remaining fully compatible with regular BLS. In particular, our signatures can be seamlessly combined with regular BLS signatures, resulting in regular BLS signatures. Moreover, it can easily be implemented using existing BLS implementations in a black-box way. Our scheme is also one of the most efficient non-interactive multi-signatures, and in particular more efficient than previous tightly secure schemes. We demonstrate the practical applicability of our scheme by showing how proof-of-stake protocols that currently use BLS can adopt our variant for fully compatible opt-in tight security.
SoK: The Engineer’s Guide to Post-Quantum Cryptography for Embedded Devices
Embedded systems are flexible and cost-effective and thus have found a use case in almost every part of our daily lives. Due to their widespread use, they have also become valuable targets for cyber attacks. However, translating cutting-edge cyber security from servers and desktops to the embedded realm can be challenging due to the limited computational power and memory of embedded devices. Although quantum computing is still in early research and development, it threatens to break conventional asymmetric cryptography, which is a key component of most secure applications currently in use. Given the long lifespan of embedded devices, which can last for decades, research must find solutions for post-quantum (PQ) security sooner rather than later. The field of post-quantum cryptography (PQC) received significant attention in 2019 when the National Institute of Standards and Technology (NIST) launched a competition to find suitable PQC algorithms. During the PQC competition, the applicability of novel PQC algorithms to embedded devices was an important topic that garnered significant research interest. We provide a survey of the latest research regarding PQC for embedded systems. However, rather than focusing on PQC algorithms, our study revolves around practical use cases intending to help embedded developers understand the current state of research from an integration perspective.
Early Stopping Byzantine Agreement in $(1+\epsilon) \cdot f$ Rounds
In this paper, we present two early stopping Byzantine agreement protocols in the authenticated setting against a corrupt minority $t < n/2$, where $t$ represents the maximum number of malicious parties. Early stopping protocols ensure termination within a number of rounds determined solely by the actual number of malicious nodes $f$ present during execution, irrespective of $t$.
Our first protocol is deterministic and ensures early stopping termination in $(d+5) \cdot (\lfloor f/d \rfloor +2)+2$ rounds, where $d$ is a fixed constant. For example, for all $d\ge 6$, our protocol runs in at most $(1+\epsilon )\cdot f$ rounds (where $0<\epsilon<1$), improving (for large $f$) upon the best previous early stopping deterministic broadcast protocol by Perry and Toueg (1984), which terminates in $\min(2f+4,2t+2)$ rounds. Additionally, our second protocol is randomized, ensuring termination in an expected constant number of rounds and achieving early stopping in $(d+9) \cdot (\lfloor f/d \rfloor +1)+2$ rounds in the worst case. This marks a significant improvement over a similar result by Goldreich and Petrank (1990), which always requires an expected constant number of rounds and $O(t)$ rounds in the worst case, i.e., does not have the early stopping property.
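Plugging numbers into the stated bounds (a direct evaluation of the formulas above, nothing more) shows where the early-stopping protocol overtakes Perry and Toueg's $\min(2f+4, 2t+2)$:

    from math import floor

    def new_deterministic(f: int, d: int) -> int:
        # (d + 5) * (floor(f / d) + 2) + 2, the first protocol's bound
        return (d + 5) * (floor(f / d) + 2) + 2

    def perry_toueg(f: int, t: int) -> int:
        return min(2 * f + 4, 2 * t + 2)

    d = 64                      # any fixed d >= 6 gives (1 + eps) * f for large f
    for f in (10, 100, 1000, 10000):
        t = f                   # the case most favourable to min(2f+4, 2t+2)
        print(f, new_deterministic(f, d), perry_toueg(f, t))
    # For f = 10000 the new bound is ~1.09 * f rounds versus ~2 * f rounds,
    # matching the "(for large f)" qualifier in the abstract.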
Horcrux: Synthesize, Split, Shift and Stay Alive Preventing Channel Depletion via Universal and Enhanced Multi-hop Payments
Payment Channel Networks (PCNs) have been highlighted as viable solutions to address the scalability issues in current permissionless blockchains. They facilitate off-chain transactions, significantly reducing the load on the blockchain. However, the extensive reuse of multi-hop routes in the same direction poses a risk of channel depletion, resulting in involved channels becoming unidirectional or even closing, thereby compromising the sustainability and scalability of PCNs. Even more concerning, existing rebalancing protocol solutions heavily rely on trust assumptions and scripting languages, resulting in compromised universality and reliability.
In this paper, we present Horcrux, a universal and efficient multi-party virtual channel protocol without relying on extra trust assumptions, scripting languages, or the perpetual online requirement. Horcrux fundamentally addresses the channel depletion problem using a novel approach termed flow neutrality, which minimizes the impact on channel balance allocations during multi-hop payments (MHPs). Additionally, we formalize the security properties of Horcrux by modeling it within the Global Universal Composability framework and provide a formal security proof.
We implement Horcrux on a real Lightning Network dataset, comprising 10,529 nodes and 38,910 channels, and compare it to state-of-the-art rebalancing schemes such as Shaduf [NDSS'22], Thora [CCS'22], and Revive [CCS'17]. The experimental results demonstrate that (1) the entire process of Horcrux costs less than 1 USD, significantly lower than Shaduf; (2) Horcrux achieves a $12\%$-$30\%$ increase in payment success ratio and reduces user deposits required for channels by $70\%$-$91\%$; (3) the performance of Horcrux improves by 1.2×-1.5× under long-term operation; and (4) Horcrux maintains a nearly zero channel depletion rate, whereas both Revive and Shaduf result in thousands of depleted channels.
Simulation-Secure Threshold PKE from LWE with Polynomial Modulus
In LWE-based cryptosystems, using a small (polynomially bounded) ciphertext modulus improves both efficiency and security. In threshold encryption, one often needs "simulation security": the ability to simulate decryption shares without the secret key. Existing lattice-based threshold encryption schemes provide one or the other but not both: simulation security has seemed to require superpolynomial flooding noise, and the schemes with polynomial modulus use Rényi divergence based analyses that are sufficient for game-based but not simulation security.
In this work, we give the first construction of simulation-secure lattice-based threshold PKE with polynomially bounded modulus. The construction itself is relatively standard, but we use an improved analysis, proving that when the ciphertext noise and flooding noise are both Gaussian, simulation is possible even with very small flooding noise. Our modulus is small not just asymptotically but also concretely: this technique gives parameters roughly comparable to those of highly optimized non-threshold schemes like FrodoKEM. As part of our proof, we show that LWE remains hard in the presence of some types of leakage; these results and techniques may also be useful in other contexts where noise flooding is used.
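As a back-of-the-envelope illustration of the flooding issue (our own numbers, not the paper's analysis): the statistical distance between a flooding Gaussian $N(0,\sigma^2)$ and the same Gaussian shifted by a noise term $e$ is $\mathrm{erf}(e/(2\sqrt{2}\sigma))$, which shrinks only linearly in $e/\sigma$; this is the naive bound that makes superpolynomial flooding seem necessary and that a Gaussian-specific analysis can circumvent.

    from math import erf, sqrt

    def stat_distance(shift: float, sigma: float) -> float:
        # Total variation distance between N(0, sigma^2) and N(shift, sigma^2):
        # erf(shift / (2 * sqrt(2) * sigma)).
        return erf(shift / (2 * sqrt(2) * sigma))

    e = 1.0                          # magnitude of the noise term to hide
    for ratio in (10, 100, 1000, 10**6):
        # Even a ratio of 10^6 only brings the naive per-sample distance
        # down to ~4e-7, far from the 2^(-lambda) a direct argument needs.
        print(ratio, stat_distance(e, ratio * e))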
Secure Multiparty Computation with Lazy Sharing
Secure multiparty computation (MPC) protocols enable $n$ parties, each with private inputs, to compute a given function without leaking information beyond the outputs. One of the main approaches to designing efficient MPC protocols is to use secret sharing. In general, secret sharing based MPC contains three phases: input sharing, circuit evaluation, and output recovery. If the adversary corrupts at most $t$ parties, the protocol typically uses $(t,n)$ threshold secret sharing to share the inputs. In this work, we consider a weaker variant of threshold secret sharing called lazy threshold secret sharing (or simply lazy sharing) and show that
- Lazy sharing can serve as a viable alternative to threshold secret sharing in MPC without compromising security.
- Lazy sharing can be generated more efficiently than threshold secret sharing.
As a result, replacing threshold secret sharing with lazy sharing can lead to a more efficient input sharing phase. Moreover, we propose that the efficiency of the circuit evaluation phase can also be further improved. To support these claims, we apply lazy sharing to several state-of-the-art MPC protocols, including the GMW protocol (Goldreich et al., STOC 1987), the AFLNO protocol (Araki et al., CCS 2016), and the SPDZ protocol (Damg{\aa}rd et al., CRYPTO 2012), and analyze the efficiency gains in various settings, highlighting the advantages of incorporating lazy sharing into MPC protocols.
Jackpot: Non-Interactive Aggregatable Lotteries
In proof-of-stake blockchains, liveness is ensured by repeatedly selecting random groups of parties as leaders, who are then in charge of proposing new blocks and driving consensus forward. The lotteries that elect those leaders need to ensure that adversarial parties are not elected disproportionately often and that an adversary cannot tell who was elected before those parties decide to speak, as this would potentially allow for denial-of-service attacks. Whenever an elected party speaks, it needs to provide a winning lottery ticket, which proves that the party did indeed win the lottery. Current solutions require all published winning tickets to be stored individually on-chain, which introduces undesirable storage overheads.
In this work, we introduce non-interactive aggregatable lotteries and show how these can be constructed efficiently. Our lotteries provide the same security guarantees as previous lottery constructions, but additionally allow any third party to take a set of published winning tickets and aggregate them into one short digest. We provide a formal model of our new primitive in the universal composability framework. As one of our technical contributions, which may be of independent interest, we introduce aggregatable vector commitments with simulation-extractability and present a concretely efficient construction thereof in the algebraic group model in the presence of a random oracle. We show how these commitments can be used to construct non-interactive aggregatable lotteries. We have implemented our construction, called Jackpot, and provide benchmarks that underline its concrete efficiency.
Adaptive Successive Over-Relaxation Method for a Faster Iterative Approximation of Homomorphic Operations
Homomorphic encryption is a cryptographic technique that enables arithmetic operations to be performed on encrypted data. However, word-wise fully homomorphic encryption schemes, such as the BGV, BFV, and CKKS schemes, only support addition and multiplication operations on ciphertexts. This limitation makes it challenging to perform non-linear operations directly on the encrypted data. To address this issue, prior research has proposed efficient approximation techniques that utilize iterative methods, such as functional composition, to identify optimal polynomials. These approximations are designed to have a low multiplicative depth and a reduced number of multiplications, as these criteria directly impact the performance of the approximated operations.
In this paper, we propose a novel method, named adaptive successive over-relaxation (aSOR), to further optimize the approximations used in homomorphic encryption schemes. Our experimental results show that the aSOR method can significantly reduce the computational effort required for these approximations, achieving a reduction of 2-9 times compared to state-of-the-art methodologies. We demonstrate the effectiveness of the aSOR method by applying it to a range of operations, including sign, comparison, ReLU, square root, reciprocal of the m-th root, and division. Our findings suggest that the aSOR method can greatly improve the efficiency of homomorphic encryption for performing non-linear operations.
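A toy of the relaxation idea in plain arithmetic, with a hand-tuned ω schedule of our own (the paper's adaptive schedule and target operations differ): Newton's multiplications-only iteration for 1/a, x ← x + ω·x·(1 − ax), achieves a smaller worst-case error over a whole input interval when the first step is over-relaxed (ω > 1) than when run plainly with ω = 1.

    def reciprocal_iterations(a: float, omegas) -> float:
        # Additions and multiplications only, as required under homomorphic
        # encryption; omega = 1 recovers the standard Newton iteration.
        x = 0.5                      # crude initial guess for a in [1, 2]
        for w in omegas:
            x = x + w * x * (1.0 - a * x)
        return x

    def worst_error(omegas, steps: int = 1000) -> float:
        grid = [1.0 + i / steps for i in range(steps + 1)]   # a in [1, 2]
        return max(abs(a * reciprocal_iterations(a, omegas) - 1.0) for a in grid)

    plain = [1.0, 1.0, 1.0]          # standard Newton, 3 iterations
    relaxed = [1.7, 1.0, 1.0]        # over-relax the first step (hand-tuned)
    print(worst_error(plain), worst_error(relaxed))
    # The relaxed schedule reduces the worst-case error by roughly two
    # orders of magnitude for the same multiplication count.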
High-Throughput GPU Implementation of Dilithium Post-Quantum Digital Signature
Digital signatures are fundamental building blocks in various protocols to provide integrity and authenticity. The development of quantum computing has raised concerns about the security guarantees afforded by classical signature schemes. CRYSTALS-Dilithium is an efficient post-quantum digital signature scheme based on lattice cryptography and has been selected as the primary algorithm for standardization by the National Institute of Standards and Technology. In this work, we present a high-throughput GPU implementation of Dilithium. For individual operations, we employ a range of computational and memory optimizations to overcome sequential constraints, reduce memory usage and IO latency, address bank conflicts, and mitigate pipeline stalls. This results in high and balanced compute throughput and memory throughput for each operation. In terms of concurrent task processing, we leverage task-level batching to fully utilize parallelism and implement a memory pool mechanism for rapid memory access. We propose a dynamic task scheduling mechanism to improve multiprocessor occupancy and significantly reduce execution time. Furthermore, we apply asynchronous computing and launch multiple streams to hide data transfer latencies and maximize the computing capabilities of both the CPU and GPU. Across all three security levels, our GPU implementation achieves over 160× speedups for signing and over 80× speedups for verification on both commercial and server-grade GPUs. This achieves microsecond-level amortized execution times for each task, offering a high-throughput and quantum-resistant solution suitable for a wide array of applications in real systems.
Succinct Vector, Polynomial, and Functional Commitments from Lattices
Vector commitment schemes allow a user to commit to a vector of values $\mathbf{x} \in \{0,1\}^\ell$ and later, open up the commitment to a specific set of positions. Both the size of the commitment and the size of the opening should be succinct (i.e., polylogarithmic in the length $\ell$ of the vector). Vector commitments and their generalizations to polynomial commitments and functional commitments are key building blocks for many cryptographic protocols.
We introduce a new framework for constructing non-interactive lattice-based vector commitments and their generalizations. A simple instantiation of our framework yields a new vector commitment scheme from the standard short integer solution (SIS) assumption that supports private openings and large messages. We then show how to use our framework to obtain the first succinct functional commitment scheme that supports openings with respect to arbitrary bounded-depth Boolean circuits. In this scheme, a user commits to a vector $\mathbf{x} \in \{0,1\}^\ell$ and can later open the commitment to any function evaluation $f(\mathbf{x})$. Both the commitment and the opening are non-interactive and succinct: namely, they have size $\textsf{poly}(\lambda, d, \log \ell)$, where $\lambda$ is the security parameter and $d$ is the depth of the Boolean circuit computing $f$. Previous constructions of functional commitments could only support constant-degree polynomials, required a trusted online authority, or relied on non-falsifiable assumptions. The security of our functional commitment scheme is based on a new falsifiable family of "basis-augmented" SIS assumptions (BASIS) we introduce in this work.
We also show how to use our vector commitment framework to obtain (1) a polynomial commitment scheme where the user can commit to a polynomial $f \in \mathbb{Z}_q[x]$ and subsequently open the commitment to an evaluation $f(x) \in \mathbb{Z}_q$; and (2) an aggregatable vector (resp., functional) commitment where a user can take a set of openings to multiple indices (resp., function evaluations) and aggregate them into a single short opening. Both of these extensions rely on the same BASIS assumption we use to obtain our succinct functional commitment scheme.
Breaking the quadratic barrier: Quantum cryptanalysis of Milenage, telecommunications’ cryptographic backbone
The potential advent of large-scale quantum computers in the near future poses a threat to contemporary cryptography.
One ubiquitous use of cryptography today is in the vibrant field of cellular networks.
The cryptography of cellular networks is centered around seven secret-key algorithms $f_1, \ldots, f_5, f_1^{*}, f_5^{*}$, aggregated into an authentication and key agreement algorithm set.
Still, to the best of our knowledge, these secret-key algorithms have not yet been subject to quantum cryptanalysis. Instead, many quantum security considerations for telecommunication networks argue that the threat posed by quantum computers is restricted to public-key cryptography. However, various recent works have presented quantum attacks on secret-key cryptography that exploit quantum period finding to achieve more than a quadratic speedup compared to the best known classical attacks. Motivated by this quantum threat to symmetric cryptography, this paper presents a quantum cryptanalysis of the Milenage algorithm set, the prevalent instantiation of the seven secret-key $f_1, \ldots, f_5, f_1^{*}, f_5^{*}$ algorithms that underpin cellular security.
Building upon recent quantum cryptanalytic results, we show attacks that go beyond a quadratic speedup. Concretely, we provide quantum attack scenarios for all Milenage algorithms, including a polynomial time quantum existential forgery attack in the $Q_2$ model. Our comprehensive attack overview can serve as a basis for further research into the resilience of Milenage against quantum adversaries.
TandaPay Whistleblowing Communities: Shifting Workplace Culture Towards Zero-Tolerance Sexual Harassment Policies
Corporate sexual harassment policies often prioritize liability mitigation over the creation of a corporate culture free of harassment. Victims of sexual harassment are often required to report claims individually to HR. This can create an environment of self-censorship when employees feel that they cannot trust HR to act as an unbiased mediator. This problem is compounded when corporations have a culture that is tolerant of certain types of harassment. Forcing employees to report incidents to HR does nothing to address employees’ fear of bias and uncertainty. This paper presents TandaPay, a decentralized grievance reporting protocol designed to address sexual harassment. TandaPay empowers whistleblowing communities to collectively approve their own harassment claims. TandaPay reduces self-censorship by allowing employees to take ownership of the reporting process, as employees no longer need to rely on HR to act as an intermediary. The protocol employs a novel method of using financial incentives to guard against collusion. This provides corporations with a guarantee that employees can only approve valid claims. Using TandaPay, corporations can give employees greater autonomy with the goal of minimizing self-censorship. This increases the reporting of incidents, enabling workers to change the corporate culture to one of respect and accountability.
FLIP-and-prove R1CS
In this work, we consider the setting where one or more users with low computational resources would like to outsource the task of proof generation for SNARKs to a single external entity, named the Prover. We study the scenario in which Provers have access to all statements and witnesses to be proven beforehand. We take a different approach to proof aggregation and design a new protocol that simultaneously reduces proving time and communication complexity, without going through recursive proof composition.
Our two main contributions are as follows. First, we design FLIP, a communication-efficient folding scheme in which we apply the Inner Pairing Product Argument to fold R1CS instances of the same language into a single relaxed R1CS instance; any proof system for the relaxed R1CS language can then be applied to prove the final instance. Second, we build a novel variation of Groth16 for relaxed R1CS with the same communication complexity and two extra pairings for verification, with an adapted trusted setup.
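For context, we recall the standard relaxed R1CS relation from the folding-scheme literature (e.g., Nova); this is background, not this paper's notation. A relaxed R1CS instance augments the constraint matrices $A, B, C$ with a scalar $u$ and a slack vector $\boldsymbol{E}$, and a vector $\boldsymbol{z}$ satisfies it when

$$ (A\boldsymbol{z}) \circ (B\boldsymbol{z}) = u \cdot (C\boldsymbol{z}) + \boldsymbol{E}, $$

where $\circ$ denotes the entry-wise (Hadamard) product; plain R1CS is the special case $u = 1$, $\boldsymbol{E} = \boldsymbol{0}$, and folding two instances produces a single instance of this relaxed form.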
Compared to SnarkPack - a prior solution addressing scaling for multiple Groth16 proofs - our scheme improves prover complexity by orders of magnitude, considering the total cost of generating the SNARK proofs one by one plus the aggregation effort.
An immediate application of our solution is Filecoin, an incentive-based decentralized storage network that generates more than 6 million SNARKs per day for large circuits of 100 million constraints.
Last updated: 2024-08-29
Improved Key Recovery Attacks on Reduced-Round Salsa20
In this paper, we present an improved attack on the stream cipher Salsa20. Our improvements are based on two technical contributions.
First, we make use of the distribution of a linear combination of several random variables derived from different differentials and explain how to exploit it in order to improve the attack complexity. Second, we study how to optimally choose the actual values of the so-called probabilistic neutral bits. Because of the limited influence of these key bits on the computation, in the usual attack approach they are fixed to a constant value, often zero for simplicity. As we will show, despite their limited influence, the constant can be chosen in significantly better ways, and intriguingly, zero is the worst choice. Using this, we propose the first-ever attack on the 7.5-round variant of the $128$-bit key version of Salsa20. We also provide improved attacks against the 8-round variant of the $256$-bit key version and the 7-round variant of the $128$-bit key version of Salsa20.
A Documentation of Ethereum’s PeerDAS
Data availability sampling allows clients to verify availability of data on a peer-to-peer network provided by an untrusted source. This is achieved without downloading the full data by sampling random positions of the encoded data.
The long-term vision of the Ethereum community includes a comprehensive data availability protocol using polynomial commitments and tensor codes. As the next step towards this vision, an intermediate solution called PeerDAS is about to be integrated, to bridge the way to the full protocol. With PeerDAS soon becoming an integral part of Ethereum's consensus layer, understanding its security guarantees is essential.
This document aims to describe the cryptography used in PeerDAS in a manner accessible to the cryptographic community, encouraging innovation and improvements, and to explicitly state the security guarantees of PeerDAS.
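As a back-of-the-envelope illustration of why sampling suffices (a standard argument, not a statement taken from the document): if the data is erasure-coded so that more than half of the encoded positions must be withheld to prevent reconstruction, then a client sampling $k$ positions uniformly at random accepts unavailable data with probability at most

$$ \Pr[\text{all } k \text{ samples answered}] \le 2^{-k}, $$

so $k = 30$ samples already push this failure probability below $10^{-9}$.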
What Did Come Out of It? Analysis and Improvements of DIDComm Messaging
Self-Sovereign Identity (SSI) empowers individuals and organizations with full control over their data. Decentralized identifiers (DIDs) are at its center, where a DID contains a collection of public keys associated with an entity, and further information to enable entities to engage via secure and private messaging across different platforms. A crucial stepping stone is DIDComm, a cryptographic communication layer that is in production with version 2. Due to its widespread and active deployment, a formal study of DIDComm is highly overdue.
We present the first formal analysis of DIDComm’s cryptography, and formalize its goal of (sender-) anonymity and authenticity. We follow a composable approach to capture its security over a generic network, formulating the goal of DIDComm as a strong ideal communication resource. We prove that the proposed encryption modes reach the expected level of privacy and authenticity, but leak beyond the leakage induced by an underlying network (captured by a parameterizable resource).
We further use our formalism to propose enhancements and prove their security. First, we present an optimized algorithm that simultaneously achieves anonymity and authenticity, conforms to the DIDComm message format, and outperforms the current DIDComm proposal in both ciphertext size and computation time by almost a factor of 2. Second, we present a novel DIDComm mode that fulfills the notion of anonymity preservation, in that it never leaks more than the leakage induced by the network it is executed over. We finally show how to merge this new mode into our improved algorithm, obtaining an efficient all-in-one mode for full anonymity and authenticity.
Understanding the Blockchain Interoperability Graph based on Cryptocurrency Price Correlation
Cryptocurrencies have gained high popularity in recent years, with over 9000 of them, including major ones such as Bitcoin and Ether. Each cryptocurrency is implemented on one blockchain or over several such networks. Recently, various technologies known as blockchain interoperability have been developed to connect these different blockchains and create an interconnected blockchain ecosystem. This paper aims to provide insights into the blockchain ecosystem and the connections between blockchains, which we refer to as the interoperability graph. Our approach is based on the analysis of the correlation between cryptocurrencies implemented over the different blockchains. We examine over 4800 cryptocurrencies implemented on 76 blockchains and their daily prices over a year. This experimental study has potential implications for decentralized finance (DeFi), including portfolio investment strategies and risk management.
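A minimal sketch of the correlation computation underlying such a graph (the use of log-returns and the $0.6$ edge threshold are illustrative assumptions on our part, not parameters from the paper):

    import numpy as np

    def interoperability_graph(prices: np.ndarray, threshold: float = 0.6):
        # prices: T x K matrix of daily prices (rows = days, cols = coins).
        # Returns weighted edges between strongly co-moving coins.
        returns = np.diff(np.log(prices), axis=0)   # daily log-returns
        corr = np.corrcoef(returns, rowvar=False)   # K x K Pearson matrix
        K = corr.shape[0]
        return [(i, j, corr[i, j])
                for i in range(K) for j in range(i + 1, K)
                if corr[i, j] >= threshold]

Aggregating such edges per blockchain (each coin is implemented on one or more chains) then yields a blockchain-level interoperability graph of the kind studied in the paper.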
Leakage-Resilience of Circuit Garbling
Due to ubiquitous requirements and the performance leap of the past decade, it has become feasible to execute garbling and secure computation in settings sensitive to side-channel attacks, including smartphones, IoT devices and dedicated hardware, and the possibilities have been demonstrated by recent works. To maintain security in the presence of a moderate amount of leaked information about internal secrets, we investigate {\it leakage-resilient garbling}. We augment the classical privacy, obliviousness and authenticity notions with leakages of the garbling function, and define their leakage-resilience analogues. We examine popular garbling schemes and unveil additional side-channel weaknesses due to wire label reuse and XOR leakages. We then incorporate the idea of label refreshing into the GLNP garbling scheme of Gueron et al. and propose a variant, GLNPLR, that provably satisfies our leakage-resilience definitions. Performance comparison indicates that GLNPLR is 60X (using AES-NI) or 5X (without AES-NI) faster than HalfGates garbling with second-order side-channel masking when garbling the AES circuit at 2 Gbps bandwidth.
SoK: Instruction Set Extensions for Cryptographers
Framed within the general context of cyber-security, standard cryptographic constructions often represent an enabling technology for associated solutions. Alongside or in combination with their design, therefore, the implementation of such constructions is an important challenge: beyond delivering artefacts that are usable in practice, implementation can impact many quality metrics (such as efficiency and security) which determine fitness-for-purpose. A rich design space of implementation techniques can be drawn on in order to address this challenge, but threat- and opportunity-driven innovation based on clear understanding and empirical evidence remains vital.
In at least some use-cases, software-based implementation of cryptography is important, e.g., because it delivers an attractive trade-off or is mandated for some reason. Such an implementation is heavily influenced both by 1) the Instruction Set Architecture (ISA) it is expressed using, and 2) the micro-architecture it is executed using. For example, the extent to which a general-purpose ISA can support the more domain-specific requirements of a cryptographic construction will influence how the latter is mapped to the former (i.e., which implementation techniques are viable) and the behavioural properties of doing so (e.g., the execution latency stemming from use of a given implementation technique).
This paper attempts to systematise the topic of cryptographic Instruction Set Extensions (ISEs), which represent an approach to provision of a platform where such support is more explicit and extensive. At a high level, the goal is to improve understanding of what is an extensive and somewhat inter-disciplinary body of literature (e.g., spanning academia and industry, hardware and software, as well as cryptographic and non-cryptographic publication venues). We argue that doing so will help to maximise the quality of subsequent work on this and associated topics.
CLAASPing ARADI: Automated Analysis of the ARADI Block Cipher
In early August 2024, three NSA researchers -- Patricia Greene, Mark Motley, and Bryan Weeks -- published the technical specifications for a new low-latency block cipher, ARADI, along with its corresponding authenticated encryption mode, LLAMA, which is specifically designed for memory encryption applications. Their manuscript offered minimal security analysis of the design, only briefly discussing the differential, linear and algebraic properties of the cipher's underlying components. In this work, we present a set of distinguishers for the round-reduced ARADI block cipher, discovered using the automated cryptanalysis tool CLAASP. More precisely, using CLAASP, we evaluate the resistance of ARADI against avalanche, statistical and continuous diffusion tests, differential and linear distinguishers, impossible differentials, algebraic attacks, and neural distinguishers.
Accordingly, we give distinguishers that reach up to 9 out of 16 rounds of ARADI. We hope these preliminary findings will encourage further in-depth cryptanalysis of the cipher to enhance confidence in its security.
Votexx: Extreme Coercion Resistance
We provide a novel perspective on a long-standing challenge to the integrity of votes cast without the supervision of a voting booth: "improper influence,'' which we define as any combination of vote buying and voter coercion. In comparison with previous proposals, our system is the first in the literature to protect against a strong adversary who learns all of the voter's keys---we call this property "extreme coercion resistance.'' When keys are stolen, each voter, or their trusted agents (which we call "hedgehogs''), may "nullify'' (effectively cancel) their vote in a way that is unstoppable and irrevocable, and such that the nullification action is forever unattributable to that voter or their hedgehog(s). We demonstrate the security of our VoteXX system in the universal composability model.
As in many other coercion-resistant systems, voters are authorized to vote with public-private keys. Each voter registers their public keys with the Election Authority (EA) in a way that convinces the EA that the voter has memorized a passphrase that corresponds to their private keys. As a consequence, if an adversary obtains a voter's keys, the voter also retains a copy. Voters concerned about adversaries stealing their private keys can themselves, or by delegating to one or more untrusted hedgehog(s), monitor the bulletin board for malicious ballots cast with their keys, and can act to nullify these ballots in a privacy-preserving manner with zero-knowledge proofs.
In comparison with previous proposals, our system offers some protection against even the strongest adversary who learns all keys. Other coercion-resistant protocols either do not address these attacks, place strong limitations on adversarial abilities, or rely on fully trusted parties to assist voters with their keys.
On the overflow and $p$-adic theory applied to homomorphic encryption
When integer and rational arithmetics are performed using modular arithmetics over $\mathbb{Z}/q\mathbb{Z}$, overflows naturally occur due to the mismatch between the infinite cardinality of $\mathbb{Z}$ or $\mathbb{Q}$ and the finite cardinality of $\mathbb{Z}/q\mathbb{Z}$. Since $\mathbb{Z}/q\mathbb{Z}$ is also the (sub) message space for many secure computation designs, secure computations of integer and rational arithmetics using these schemes must also consider the overflow problem.
Previous works [CLPX, CT-RSA'18] and [HDRdS, ACNS'23] perform integer and rational arithmetics using the CLPX homomorphic encryption scheme, where overflows are avoided by restricting the supported circuits. This introduces an additional constraint beyond the noise budget limitation. In our work, we discuss the possibilities of tolerating overflows. Firstly, we explain that when the input messages and the final result are well-bounded, intermediate values can grow arbitrarily large without affecting output correctness. This kind of overflow is called pseudo-overflow and does not need to be avoided. Secondly, we note that for a prime-power modulus $q=p^r$, overflow errors are small in the $p$-adic norm. Therefore, we apply the $p$-adic encoding technique of [HDRdS, ACNS'23] to the BGV/BFV homomorphic encryption scheme with plaintext modulus $p^r$.
Compared to [CLPX, CT-RSA'18] and [HDRdS, ACNS'23], our method supports circuits that are up to $2 \times$ deeper under the same ciphertext parameters, at the cost of an output error bounded by $p^{-r}$ in the $p$-adic norm.
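A small illustration of pseudo-overflow (our own toy example, not from the paper): an intermediate value may wrap around the modulus, yet a well-bounded final result is still recovered correctly, because reduction modulo $q$ is a ring homomorphism:

    # The intermediate product a*b wraps modulo q, but the bounded final
    # result (here, a itself) is still computed correctly.
    q = 2**31 - 1                        # prime modulus
    a, b = 123456789, 987654321          # a, b < q, but a*b >> q
    t = (a * b) % q                      # pseudo-overflow happens here
    recovered = (t * pow(b, -1, q)) % q  # undo the multiplication by b
    assert recovered == a                # correct despite the wraparound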
ISABELLA: Improving Structures of Attribute-Based Encryption Leveraging Linear Algebra
Attribute-based encryption (ABE) is a powerful primitive that has found applications in important real-world settings requiring access control. Compared to traditional public-key encryption, ABE has established itself as a considerably more complex primitive that is additionally less efficient to implement. It is therefore paramount that we simplify the design of ABE schemes that are efficient, provide strong security guarantees, have descriptions of minimal complexity, and support all the practical features that are desirable for common real-world settings. One such practical feature that is currently still difficult to achieve is multi-authority support. Motivated by NIST's ongoing standardization efforts around multi-authority schemes, we put a specific focus on simplifying the support of multiple authorities in the design of schemes.
To this end, we present ISABELLA, a framework for constructing pairing-based ABE with advanced functionalities under strong security guarantees. At a high level, our approach builds on various works that systematically and generically construct ABE schemes by reducing the effort of proving security to a simpler yet powerful ''core'' called pair encodings. To support the amount of adaptivity required by multi-authority ABE, we devise a new approach to designing schemes from pair encodings, while still being able to benefit from the advantages that pair encodings provide. As a direct result of our framework, we obtain various improvements for existing (multi-authority) schemes as well as new schemes.
Last updated: 2024-08-28
Oblivious Pseudorandom Function Based on Ideal Lattices, with Applications in PSI and PIR
Private set intersection (PSI) and private information retrieval (PIR) are important areas of research in privacy protection technology. One of the key tools for both is the oblivious pseudorandom function (OPRF). Currently, existing oblivious pseudorandom functions either focus solely on efficiency without considering quantum attacks, or are too complex, resulting in low efficiency. The aim of this paper is to achieve a balance: to ensure that the oblivious pseudorandom function can withstand quantum attacks while simplifying its structure as much as possible. This paper constructs an efficient oblivious pseudorandom function based on the ideal lattice hardness assumption and the oblivious transfer (OT) technique of Chase and Miao (CRYPTO 2020), and builds PSI and PIR protocols from it.
Zero-Knowledge Validation for an Offline Electronic Document Wallet using Bulletproofs
We describe designs for an electronic wallet, meant for the housing of official government documents, which solves the problem of displaying document data to untrusted parties (e.g., in order to allow users to prove that they are above the drinking age). The wallet attains this goal by employing Zero-Knowledge Proof technologies, ascertaining that nothing beyond the intended information is ever shared. In order to be practically applicable, the wallet has to meet many additional constraints, such as to be usable in offline scenarios, to employ only widely-accessible communication methods which, themselves, must not impinge on the user’s privacy, and to be constructed solely over standard, widely-studied cryptographic algorithms, offering appropriately high levels of cryptographic security. We explain how our design was able to successfully meet all such additional constraints.
Dynamic Threshold Key Encapsulation with a Transparent Setup
A threshold key encapsulation mechanism (TKEM) facilitates the secure distribution of session keys among multiple participants, allowing key recovery through a threshold number of shares. TKEM has gained significant attention, especially for decentralized systems, including blockchains. However, existing constructions often rely on trusted setups, which pose security risks such as a single point of failure, and are limited by fixed participant numbers and thresholds. To overcome this, we propose a dynamic TKEM with a transparent setup, allowing for a flexible selection of recipients and thresholds without relying on trusted third parties in the setup phase. In addition, our construction does not rely on pairing operations. We prove the security of our TKEM under the decisional Diffie-Hellman assumption, ensuring selective chosen-ciphertext security and decapsulation consistency. Our proof-of-concept implementation highlights the practicality and efficiency of this approach, advancing the field of threshold cryptography.
Generalized one-way function and its application
One-way functions are fundamental to classical cryptography and their existence remains a longstanding problem in computational complexity theory. Recently, a provable quantum one-way function has been identified, which maintains its one-wayness even with unlimited computational resources. Here, we extend the mathematical definition of functions to construct a generalized one-way function by virtually measuring the qubit of the provable quantum one-way function and randomly assigning the corresponding measurement outcomes with identical probability. Remarkably, using this generalized one-way function, we have developed an unconditionally secure key distribution protocol based solely on classical data processing, which can then be utilized for secure encryption and signatures. Our work highlights the importance of information in characterizing quantum systems and the physical significance of the density matrix. We demonstrate that probability theory and randomness are effective tools for countering adversaries with unlimited computational capabilities.
Unconditionally secure key distribution without quantum channel
Key distribution plays a fundamental role in cryptography. Currently, the quantum scheme stands as the only known method for achieving unconditionally secure key distribution. This method has been demonstrated over distances of 508 and 1002 kilometers in the measurement-device-independent and twin-field configurations, respectively. However, quantum key distribution faces transmission distance issues and numerous side-channel attacks, since the basic physical picture requires the use of quantum channels between users. Even when quantum repeaters and quantum constellations are used, commercializing quantum cryptography on a large scale remains unattainable due to the considerable expense and significant technical hurdles associated with establishing a global quantum network and facilitating mobile quantum communication. Here, by discovering the provable quantum one-way function, we propose another key distribution scheme with unconditional security, named probability key distribution, that allows users at any distance from each other to generate a fixed and high secret key rate. No quantum channels are needed to exchange quantum signals between the two legitimate users. Non-local entangled states can be generated, identified and measured in the equivalent virtual protocol and can be used to extract secret keys. We anticipate that this discovery presents a paradigm shift in achieving unconditionally secure cryptography, thereby facilitating its widespread application on a global scale.
Approach for High-Performance Random Number Generators for Critical Systems
In times of digitalization, the encryption and signing of sensitive data is becoming increasingly important. These cryptographic processes require large quantities of high-quality random numbers, which is why a high-performance random number generator (RNG) is to be developed. For this purpose, existing concepts of RNGs and application standards are first analyzed. The proposed approach is to design a physical true random number generator (PTRNG) with a high output of random numbers. Based on this, the development begins with the analog part of the RNG: the noise signal source and a suitable amplifier for the analog noise signal. To this end, a special noise diode from Noisecom and an amplifier from NXP were chosen and analyzed in different measurements. From the results of the measurements, it can be concluded that both components are suitable for use in the RNG.
Unbalanced Private Set Union with Reduced Computation and Communication
Private set union (PSU) is a cryptographic protocol that allows two parties to compute the union of their sets without revealing anything else. Although several efficient PSU protocols have been proposed, they mainly focus on the balanced setting, where the sets held by the parties are of similar size. Recently, Tu et al. (CCS 2023) proposed the first unbalanced PSU protocol, which achieves communication complexity sublinear in the size of the larger set.
In this paper, we are interested in improving the efficiency of the unbalanced PSU protocol. We find that the oblivious key-value store (OKVS) data structure plays an essential role in the most recently proposed PSU constructions, and we formalize unbalanced PSU as an OKVS decoding process with sublinear communication. Our key insight is that when the OKVS satisfies a sparsity property, obtaining the necessary decoding information aligns precisely with the batch private information retrieval (BatchPIR) problem. We give two concrete constructions of unbalanced PSU protocols based on different OKVS encoding strategies. The first is based on an oblivious PRF (OPRF) and a newly introduced cryptographic protocol called permuted private equality test, while the second is based on re-randomizable public-key encryption.
Both our two constructions achieve sublinear communication complexity in the size of the larger set.
We implement our two unbalanced PSU protocols and compare them with the state-of-the-art unbalanced PSU of Tu et al. Experiments show that our protocols achieve a $1.3-5.6\times$ speedup in running time and a $2.1-11.8\times$ reduction in communication cost, depending on set sizes and network environments.
Scalable Mixed-Mode MPC
Protocols for secure multi-party computation (MPC) supporting mixed-mode computation have found a lot of applications in recent years due to their flexibility in representing the function to be evaluated. However, existing mixed-mode MPC protocols are only practical for a small number of parties: they are either tailored to the case of two/three parties, or scale poorly for a large number of parties.
In this paper, we design and implement a new system for highly efficient and scalable mixed-mode MPC tolerating an arbitrary number of semi-honest corruptions. Our protocols allow secret data to be represented in Encrypted, Boolean, Arithmetic, or Yao form, and support efficient conversions between these representations.
1. We design a multi-party table-lookup protocol, where both the index and the table can be kept private. The protocol is scalable even with hundreds of parties.
2. Using the above protocol, we design efficient conversions between additive arithmetic secret sharings and Boolean secret sharings for a large number of parties. For 32 parties, our conversion protocols require 1184× to 8141× less communication compared to the state-of-the-art protocols MOTION and MP-SPDZ; this leads to up to a 1275× improvement in running time under a 1 Gbps network. The improvements are even larger with more parties.
3. We also use the new protocols to design an efficient multi-party distributed garbling protocol, which achieves asymptotically constant communication per party.
Our implementation will be made public.
Randomness Recoverable Secret Sharing Schemes
It is well-known that randomness is essential for secure cryptography. The randomness used in cryptographic primitives is not necessarily recoverable even by the party who can, e.g., decrypt or recover the underlying secret/message. Several cryptographic primitives that support randomness recovery have turned out useful in various applications. In this paper, we study randomness recoverable secret sharing schemes (RR-SSS), in both information-theoretic and computational settings and provide two results. First, we show that while every access structure admits a perfect RR-SSS, there are very simple access structures (e.g., in monotone $\mathsf{AC}^0$) that do not admit efficient perfect (or even statistical) RR-SSS. Second, we show that the existence of efficient computational RR-SSS for certain access structures in monotone $\mathsf{AC}^0$ implies the existence of one-way functions. This stands in sharp contrast to (non-RR) SSS schemes for which no such results are known.
RR-SSS plays a key role in making advanced attribute-based encryption schemes randomness recoverable, which in turn has applications in the context of designated-verifier non-interactive zero knowledge.
Split Gröbner Bases for Satisfiability Modulo Finite Fields
Satisfiability modulo finite fields enables automated verification for cryptosystems. Unfortunately, previous solvers scale poorly for even some simple systems of field equations, in part because they build a full Gröbner basis (GB) for the system. We propose a new solver that uses multiple, simpler GBs instead of one full GB. Our solver, implemented within the cvc5 SMT solver, admits specialized propagation algorithms, e.g., for understanding bitsums. Experiments show that it solves important bitsum-heavy determinism benchmarks far faster than prior solvers, without introducing much overhead for other benchmarks.
Satisfiability Modulo Finite Fields
We study satisfiability modulo the theory of finite fields and give a decision procedure for this theory. We implement our procedure for prime fields inside the cvc5 SMT solver. Using this theory, we construct SMT queries that encode translation validation for various zero knowledge proof compilers applied to Boolean computations. We evaluate our procedure on these benchmarks. Our experiments show that our implementation is superior to previous approaches (which encode field arithmetic using integers or bit-vectors).
Constructing bent functions using the Maiorana-McFarland class
We use the extended Maiorana-McFarland construction to create new bent functions. Starting from a bent function of dimension s-r, we can produce a new function of dimension s+r while ensuring that its balance is limited to the set of vectors with an even Hamming weight in its domain. We also compare this approach with applying the case r=1 multiple times. Finally, we provide an algorithm as an example, focusing on the case r=2, and another algorithm applying r=1 twice.
Improved Reductions from Noisy to Bounded and Probing Leakages via Hockey-Stick Divergences
There exists a mismatch between the theory and practice of cryptography in the presence of leakage. On the theoretical front, the bounded leakage model, where the adversary learns bounded-length but noiseless information about secret components, and the random probing model, where the adversary learns some internal values of a leaking implementation with some probability, are convenient abstractions to analyze the security of numerous designs. On the practical front, side-channel attacks produce long transcripts which are inherently noisy but provide information about all internal computations, and this noisiness is usually evaluated with closely related metrics like the mutual information or statistical distance. Ideally, we would like to claim that resilience to bounded leakage or random probing implies resilience to noisy leakage evaluated according to these metrics. However, prior work (Duc, Dziembowski and Faust, Eurocrypt 2014; Brian et al., Eurocrypt 2021) has shown that proving such reductions with useful parameters is challenging.
In this work, we study noisy leakage models stemming from hockey-stick divergences, which generalize statistical distance and are also the basis of differential privacy. First, we show that resilience to bounded leakage and random probing implies resilience to our new noisy leakage model with improved parameters compared to models based on the statistical distance or mutual information. Second, we establish composition theorems for our model, showing that these connections extend to a setting where multiple leakages are obtained from a leaking implementation. We complement our theoretical results with a discussion of practical relevance, highlighting that (i) the reduction to bounded leakage applies to realistic leakage functions with noise levels that are decreased by several orders of magnitude compared to Brian et al., and (ii) the reduction to random probing usefully generalizes the seminal work of Duc, Dziembowski, and Faust, although it remains limited when the field size in which masking operates grows (i.e., hockey-stick divergences can better hide the field size dependency of the noise requirements, but do not annihilate it).
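For reference, the hockey-stick divergence of order $\gamma \ge 1$ between distributions $P$ and $Q$ (the standard definition from the differential privacy literature, recalled here for context rather than taken from the paper) is

$$ E_\gamma(P \,\|\, Q) = \sup_{S}\bigl(P(S) - \gamma\, Q(S)\bigr) = \sum_{x}\max\bigl(0,\, P(x) - \gamma\, Q(x)\bigr), $$

where $E_1$ recovers the statistical (total variation) distance, and $(\varepsilon, \delta)$-differential privacy corresponds to requiring $E_{e^\varepsilon} \le \delta$ between the output distributions on neighboring inputs.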
Fast Low Level Disk Encryption Using FPGAs
A fixed-length tweakable enciphering scheme (TES) is the appropriate cryptographic functionality for low-level disk encryption. Research on TES over the last two decades has led to a number of proposals, many of which have already been implemented using FPGAs. This paper considers the FPGA implementations of two more recent and promising TESs, namely AEZ and FAST. The relevant architectures are described and simulation results on the Xilinx Virtex 5 and Virtex 7 FPGAs are presented. For comparison, two IEEE standard schemes, XCB and EME2, are considered. The results indicate that FAST outperforms the other schemes, making it a serious candidate for future incorporation by disk manufacturers and standardisation bodies.
Perfect Monomial Prediction for Modular Addition
Modular addition is often the most complex component of typical Addition-Rotation-XOR (ARX) ciphers, and the division property is the most effective tool for detecting integral distinguishers. Thus, having a precise division property model for modular addition is crucial in the search for integral distinguishers in ARX ciphers.
Current division property models for modular addition either (a) express the operation as a Boolean circuit and apply standard propagation rules for basic operations (COPY, XOR, AND), or (b) treat it as a sequence of smaller functions with carry bits, modeling each function individually. Both approaches were originally proposed for the two-subset bit-based division property (2BDP), which is theoretically imprecise and may overlook some balanced bits.
Recently, more precise versions of the division property, such as parity sets, three-subset bit-based division property without unknown subsets (3BDPwoU) or monomial prediction (MP), and algebraic transition matrices have been proposed. However, little attention has been given to modular addition within these precise models.
The propagation rule for the precise division property of a vectorial Boolean function $\boldsymbol{f}$ requires that $\boldsymbol{u}$ can propagate to $\boldsymbol{v}$ if and only if the monomial $\pi_{\boldsymbol{u}}({\boldsymbol{x}})$ appears in $\pi_{\boldsymbol{v}}( \boldsymbol{f} )$. Braeken and Semaev (FSE 2005) studied the algebraic structure of modular addition and showed that for $\boldsymbol{x} \boxplus \boldsymbol{y} = \boldsymbol{z}$, the monomial $\pi_{\boldsymbol{u}}(\boldsymbol{x})\pi_{\boldsymbol{v}}(\boldsymbol{y})$ appears in $\pi_{\boldsymbol{w}}(\boldsymbol{z})$ if and only if $\boldsymbol{u} + \boldsymbol{v} = \boldsymbol{w}$. Their theorem directly leads to a precise division property model for modular addition. Surprisingly, to the best of our knowledge, this model has not previously been applied in division property searches.
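The statement can be sanity-checked by brute force for small bit widths; the following sketch (our own illustration) packs $\boldsymbol{x}$ into the low $n$ bits and $\boldsymbol{y}$ into the high $n$ bits of the input index, computes the algebraic normal form of each $\pi_{\boldsymbol{w}}(\boldsymbol{z})$ via the Moebius transform, and compares against the integer condition $u + v = w$:

    n = 3  # bit width of the modular addition z = (x + y) mod 2^n

    def anf_monomials(truth_table, nvars):
        # Moebius transform: the monomials (as input-bit masks) in the ANF.
        coeffs = list(truth_table)
        for i in range(nvars):
            for m in range(1 << nvars):
                if m & (1 << i):
                    coeffs[m] ^= coeffs[m ^ (1 << i)]
        return {m for m in range(1 << nvars) if coeffs[m]}

    for w in range(1, 1 << n):
        tt = []  # truth table of pi_w(z) over the 2n input bits
        for xy in range(1 << (2 * n)):
            x, y = xy & ((1 << n) - 1), xy >> n
            z = (x + y) & ((1 << n) - 1)
            tt.append(1 if (z & w) == w else 0)
        # Braeken-Semaev: pi_u(x) pi_v(y) occurs in pi_w(z) iff u + v = w.
        expected = {u | (v << n)
                    for u in range(1 << n) for v in range(1 << n)
                    if u + v == w}
        assert anf_monomials(tt, 2 * n) == expected
    print("Braeken-Semaev condition verified for n =", n)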
In this paper, we apply Braeken and Semaev's theorem to search for integral distinguishers in ARX ciphers, leading to several new results. First, we improve the state-of-the-art integral distinguishers for all variants of the Speck family, significantly enhancing search efficiency for Speck-32/48/64/96 and detecting new integral distinguishers for Speck-48/64/96/128. Second, we determine the exact degrees of output bits for $7$-round Speck-$32$ and all/16/2 output bits for $2/3/4$-round Alzette for the first time. Third, we revisit the choice of rotation parameters in Speck instances, providing a criterion that enhances resistance against integral distinguishers. Additionally, we offer a simpler proof for Braeken and Semaev's theorem using monomial prediction, demonstrating the potential of division property methods in the study of Boolean functions.
We hope that the proposed methods will be valuable in the future design of ARX ciphers.
Chosen Text Attacks Against an Image Encryption Based on the Kronecker Xor Product, the Hill Cipher and the Sigmoid Logistic Map
In 2023, Mfungo et al. presented an image encryption scheme that relies on a series of diffusion techniques and uses a chaotic map to generate three secret keys. Note that two out of the three keys are dynamically generated based on the size of the original image, while the remaining key is static. The authors claim that their proposal offers $149$ bits of security. Unfortunately, we found a chosen plaintext attack that requires $2$ oracle queries and has a worst-case complexity of $\mathcal O(2^{32})$. If the attacker has access to $1$ encryption oracle query and $1$ decryption oracle query, we can lower the complexity to $\mathcal O(2^{18.58})$. Encrypting an image with Mfungo et al.'s scheme has a worst-case complexity of $\mathcal O(2^{33})$. Therefore, both our attacks are faster than encrypting an image. Our attacks are feasible because the dynamic keys remain unchanged for different plaintext images of the same size, and the static key remains the same for all images.
Post-Quantum DNSSEC over UDP via QNAME-Based Fragmentation
In a typical network, a DNS(SEC) message over 1232 bytes would either be fragmented into several UDP/IP packets or require a re-transmit over TCP. Unfortunately, IP fragmentation is considered unreliable and a non-trivial number of servers do not support TCP.
We present $\texttt{QNAME}$-Based Fragmentation ($\mathsf{QBF}$): a DNS layer fragmentation scheme that fragments/re-assembles large post-quantum DNS(SEC) messages over UDP in just 1 round-trip while using only standard DNS records. Our experiments show that DNSSEC over $\mathsf{QBF}$, with either Falcon-512, Dilithium-2 or SPHINCS$^{+}$ as the zone signing algorithm, is practically as fast as the currently deployed ECDSA-P256 and RSA-2048 setups in resolving $\texttt{QTYPE}$ $\texttt{A}$ queries.
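A toy sketch of the generic fragment-and-reassemble step at the DNS layer (the index-in-QNAME labeling below is a hypothetical stand-in on our part; the actual QBF record layout is specified in the paper):

    MAX_UDP = 1232  # conservative DNS-over-UDP payload ceiling

    def fragment(msg: bytes, qname: str):
        # Hypothetical labeling: encode fragment index and total count
        # into the QNAME of each fragment.
        chunks = [msg[i:i + MAX_UDP] for i in range(0, len(msg), MAX_UDP)]
        return [("frag%d.%d.%s" % (i, len(chunks), qname), c)
                for i, c in enumerate(chunks)]

    def reassemble(fragments):
        # Order fragments by the index recovered from the QNAME label.
        ordered = sorted(fragments, key=lambda f: int(f[0].split(".")[0][4:]))
        return b"".join(c for _, c in ordered)

    signed_response = bytes(3000)   # e.g., a post-quantum-signed answer
    frags = fragment(signed_response, "example.com")
    assert reassemble(frags) == signed_response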
Efficient Online and Non-Interactive Threshold Signatures with Identifiable Aborts for Identity-Based Signatures in the IEEE P1363 Standard
Identity-based threshold signature (IDTS) enables the generation of valid signatures without revealing cryptographic keys in the signing process. While current protocols have made much progress in efficiency, many schemes are susceptible to denial-of-service attacks in which misbehaving parties can prevent signatures from being generated without being caught. The identifiable abort property is designed to withstand such attacks in some recent IDTS protocols. However, these schemes either require many rounds of interaction to produce a signature or utilize cryptographic techniques of high complexity. In this study, we put forward a novel IDTS protocol that achieves identifiable abort and resists arbitrary collusion attacks. Precisely, this ensures that corrupted parties are held responsible in case of failure and cannot jointly obtain the input of honest parties. Moreover, we present the ideal IDTS functionality and prove the security of the proposed protocol in the global random oracle model. Our scheme has non-interactive signing, compatibility with offline/online settings, and practical efficiency for the online phase. Finally, we give theoretical analyses and experimental results for our solution, showing that the signing time is less than two milliseconds and that the scheme is suitable for resource-constrained settings.
Practical Small Private Exponent Attacks against RSA
It is well known that the best small private exponent attack against RSA shows that when the private exponent $d < N^{0.292}$, one can factor the RSA modulus $N = pq$. However, the bound $N^{0.292}$ is very difficult to achieve directly, since one needs to deal with a lattice of very high dimension, which remains infeasible to date. Recently, Li et al. proposed a practical attack that can solve cases where $d$ approaches $N^{0.292}$ within a month for $1024$-bit $N$. In this paper, we propose an improved practical small private exponent attack that enumerates the most significant bits of $p + q$. Together with some implementation techniques, we can also achieve the bound $N^{0.292}$, but with significantly less time compared to previous work.
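For context, we recall the standard small-$d$ setup of Boneh and Durfee (background, not this paper's contribution): writing $s = p + q$, the RSA key equation gives

$$ e d = 1 + k\,\varphi(N), \qquad \varphi(N) = N + 1 - s, $$

so $(k, s)$ is a small root of $f(x, y) = x\,(N + 1 - y) + 1 \equiv 0 \pmod{e}$ with $k < d$ and $s$ of order $\sqrt{N}$. Each correctly guessed most significant bit of $s$ halves the range of the unknown $y$, which is what makes enumerating the top bits of $p + q$ effective: smaller, practically tractable lattices then suffice to reach the $N^{0.292}$ bound.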
Memory-Efficient Attacks on Small LWE Keys
Combinatorial attacks on small max-norm LWE keys suffer from enormous memory requirements, which render them inefficient in realistic attack scenarios. Therefore, more memory-efficient substitutes for these algorithms are needed. In this work, we provide new combinatorial algorithms for recovering small max-norm LWE secrets, outperforming previous approaches whenever the available memory is limited. We provide analyses of our algorithms for the secret key distributions of current NTRU, Kyber and Dilithium variants, showing that our new approach outperforms previous memory-efficient algorithms. For instance, considering uniformly random ternary secrets of length $n$, we improve the best known time complexity for \emph{polynomial memory} algorithms from $2^{1.063n}$ down to $2^{0.926n}$.
We obtain even larger gains for LWE secrets in $\{-m,\ldots,m\}^n$ with $m=2,3$ as found in Kyber and Dilithium. For example, for uniformly random keys in $\{-2,\ldots,2\}^n$, as is the case for Dilithium, we improve the previously best time under polynomial memory restriction from $2^{1.742n}$ down to $2^{1.282n}$.
Finally, we provide novel time-memory trade-offs that continuously interpolate between our polynomial memory algorithms and the best algorithms in the unlimited memory case (May, Crypto 2021).
Supersonic OT: Fast Unconditionally Secure Oblivious Transfer
Oblivious Transfer (OT) is a fundamental cryptographic protocol with applications in secure Multi-Party Computation, Federated Learning, and Private Set Intersection. With the advent of quantum computing, it is crucial to develop unconditionally secure core primitives like OT to ensure their continued security in the post-quantum era. Despite over four decades since OT's introduction, the literature has predominantly relied on computational assumptions, except in cases using unconventional methods like noisy channels or a fully trusted party. Introducing “Supersonic OT”, a highly efficient and unconditionally secure OT scheme that avoids public-key-based primitives, we offer an alternative to traditional approaches. Supersonic OT enables a receiver to obtain a response of size O(1). Its simple (yet non-trivial) design facilitates easy security analysis and implementation. The protocol employs a basic secret-sharing scheme, controlled swaps, the one-time pad, and a third-party helper who may be corrupted by a semi-honest adversary. Our implementation and runtime analysis indicate that a single instance of Supersonic OT completes in 0.35 milliseconds, making it up to 2000 times faster than the state-of-the-art base OT.
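For intuition about helper-assisted, unconditionally secure OT, below is the classic commodity-based pattern due to Beaver, in which the third party only deals correlated randomness ahead of time; this is an illustration of the setting, not Supersonic OT itself:

    import secrets

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    # Helper: deals correlated randomness, then goes offline.
    r0, r1 = secrets.token_bytes(16), secrets.token_bytes(16)
    c = secrets.randbelow(2)     # helper -> sender: (r0, r1)
    rc = (r0, r1)[c]             # helper -> receiver: (c, rc)

    # Receiver: choice bit b; u = b XOR c is uniform, so it hides b.
    b = 1
    u = b ^ c

    # Sender: one-time-pads each message with one of the helper's pads.
    m0, m1 = b"message zero....", b"message one....."
    s0, s1 = xor(m0, (r0, r1)[u]), xor(m1, (r0, r1)[1 - u])

    # Receiver: unmasks the chosen message; the other stays padded.
    assert xor((s0, s1)[b], rc) == (m0, m1)[b]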
Pay Less for Your Privacy: Towards Cost-Effective On-Chain Mixers
On-chain mixers, such as Tornado Cash (TC), have become a popular privacy solution for many non-privacy-preserving blockchain users. These mixers enable users to deposit a fixed amount of coins and withdraw them to another address, while effectively reducing the linkability between these addresses and securely obscuring their transaction history. However, the high cost of interacting with existing on-chain mixer smart contracts prohibits standard users from using the mixer, mainly due to the use of computationally expensive cryptographic primitives. For instance, the deposit cost of TC on Ethereum is approximately $1.1m$ gas (i.e., $66$ USD in June 2023), which is $53\times$ higher than issuing a base transfer transaction.
In this work, we introduce the Merkle Pyramid Builder approach to incrementally build the Merkle tree in an on-chain mixer and update the tree once per batch of deposits, thereby decreasing the overall cost of using the mixer. Our evaluation results highlight the effectiveness of this approach, showcasing a significant reduction of up to $7\times$ in the amortized cost of depositing compared to state-of-the-art on-chain mixers. Importantly, these improvements are achieved without compromising users' privacy. Furthermore, we propose the utilization of verifiable computations to shift the responsibility of Merkle tree updates from on-chain smart contracts to off-chain clients, which can further reduce deposit costs. Additionally, our analysis demonstrates that our designs ensure fairness by distributing the Merkle tree update costs among clients over time.
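As background on the object being optimized: on-chain mixers maintain an append-only Merkle tree that can be updated with one cached node per level. A minimal sketch of this standard construction follows (our illustration; the Merkle Pyramid Builder amortizes these per-deposit updates across a whole batch):

    import hashlib

    DEPTH = 20                               # 2^20 deposit slots
    H = lambda a, b: hashlib.sha256(a + b).digest()
    ZERO = [b"\x00" * 32]                    # empty-subtree digests
    for _ in range(DEPTH):
        ZERO.append(H(ZERO[-1], ZERO[-1]))

    class IncrementalMerkle:
        # Append-only Merkle tree with one cached node per level.
        def __init__(self):
            self.filled = [b""] * DEPTH
            self.count = 0

        def insert(self, leaf: bytes) -> bytes:
            node, idx = leaf, self.count
            for lvl in range(DEPTH):
                if idx % 2 == 0:
                    self.filled[lvl] = node           # left child: cache
                    node = H(node, ZERO[lvl])         # right sibling empty
                else:
                    node = H(self.filled[lvl], node)  # use cached sibling
                idx //= 2
            self.count += 1
            return node                               # new root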
Public-Key Anamorphism in (CCA-secure) Public-Key Encryption and Beyond
The notion of (Receiver-) Anamorphic Encryption was put forth recently to show that a dictator (i.e., an overreaching government), which demands to get the receiver’s private key and even dictates messages to the sender, cannot prevent the receiver from getting an additional covert anamorphic message from a sender. The model required an initial private collaboration to share some secret. There may be settings though where an initial collaboration may be impossible or performance-wise prohibitive, or cases when we need an immediate message to be sent without private key generation (e.g., by any casual sender in need). This situation, to date, somewhat limits the applicability of anamorphic encryption.
To overcome this, in this work, we put forth the new notion of “public-key anamorphic encryption,” where, without any initialization, any sender that has not coordinated in any shape or form with the receiver can nevertheless, even with the dictator controlling the receiver’s private key, send the receiver an additional anamorphic secret message hidden from the dictator. We define the new notion with its unique new properties, and then prove that, quite interestingly, the known CCA-secure Koppula-Waters (KW) system is, in fact, public-key anamorphic.
We then describe how a public-key anamorphic scheme can support a new hybrid anamorphic encapsulation mode (KDEM) where the public-key anamorphic part serves as a bootstrapping mechanism to activate regular anamorphic messages in the same ciphertext, thus together increasing the anamorphic channel capacity.
Looking at the state of research thus far, we observe that the initial system (Eurocrypt’22) shown to have regular anamorphic properties is the CCA-secure Naor-Yung scheme (and other related schemes). Here we identify that the KW CCA-secure scheme also provides a new type of anamorphism. This hints that there may be a connection between certain types of CCA-secure schemes and certain types of anamorphic schemes (in spite of the fact that the goals of the two primitives are fundamentally different); this question is foundational in nature. Given this, we identify a sufficient condition for a CCA-secure scheme that is black-box reduced from a CPA-secure scheme to directly give rise to an anamorphic encryption scheme. Furthermore, we identify one extra property of the reduction that yields a public-key anamorphic scheme as defined here.
Instantiating the Hash-Then-Evaluate Paradigm: Strengthening PRFs, PCFs, and OPRFs
We instantiate the hash-then-evaluate paradigm for pseudorandom functions (PRFs), $\mathsf{PRF}(k, x) := \mathsf{wPRF}(k, \mathsf{RO}(x))$, which builds a PRF $\mathsf{PRF}$ from a weak PRF $\mathsf{wPRF}$ via a public preprocessing random oracle $\mathsf{RO}$. In applications to secure multiparty computation (MPC), only the low-complexity wPRF performs secret-depending operations. Our construction replaces RO by $f(k_H , \mathsf{elf}(x))$, where $f$ is a non-adaptive PRF and the key $k_H$ is public and thus known to the distinguishing adversary.
We show that, perhaps surprisingly, several existing weak PRF candidates are plausibly also secure when their inputs are generated by $f(k_H , \mathsf{elf}(x))$. Firstly, analogous cryptanalysis applies, because the pseudorandomness of $f$ implies good statistical properties of the inputs. Secondly, an attack against the weak PRF on such pseudorandom inputs generated by $f$ would imply surprising results, such as key agreement from the hardness of the high-noise version of Learning Parity with Noise (LPN), when both the wPRF and $f$ are implemented from this assumption.
Our simple transformation of replacing the public preprocessing $\mathsf{RO}(\cdot)$ by the public preprocessing $f(k_H , \mathsf{elf}(x))$ applies to the entire family of PRF-style functions. Specifically, we obtain results for oblivious PRFs, which are a core building block for password-based authenticated key exchange (PAKE) and private set intersection (PSI) protocols, and we also obtain results for pseudorandom correlation functions (PCF), which are a key tool for silent oblivious transfer (OT) extension.
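A toy rendering of the paradigm (SHA-256 standing in for the random oracle and HMAC standing in for the weak PRF are placeholder assumptions on our part; in the paper, $\mathsf{RO}(x)$ is further replaced by $f(k_H, \mathsf{elf}(x))$ with a public key $k_H$):

    import hashlib, hmac, secrets

    def ro(x: bytes) -> bytes:
        # Public preprocessing "random oracle", modeled here by SHA-256.
        return hashlib.sha256(x).digest()

    def wprf(k: bytes, r: bytes) -> bytes:
        # Placeholder weak PRF: assumed secure only on random inputs r.
        return hmac.new(k, r, hashlib.sha256).digest()

    def prf(k: bytes, x: bytes) -> bytes:
        # Hash-then-evaluate: PRF(k, x) = wPRF(k, RO(x)).
        # Only wprf touches the secret key, which is the point for MPC.
        return wprf(k, ro(x))

    k = secrets.token_bytes(32)
    tag = prf(k, b"adversarially chosen input")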
Accountable Decryption made Formal and Practical
With the increasing scale and complexity of online activities, accountability, as an after-the-fact mechanism, has become an effective complementary approach to ensure system security. Decades of research have delved into the meaning of accountability, yet practical accountability of decryption has remained out of reach. This paper seeks to address this gap. We consider the scenario where a client (the encryptor, her) encrypts her data and then chooses a delegate (the decryptor, him) that stores the data for her. If the decryptor initiates an illegitimate decryption of the encrypted data, there is a non-negligible probability that this behavior will be detected, thereby holding the decryptor accountable for his decryption. We make three contributions. First, we review the key definitions of accountability known so far. Based on extensive investigation, we formalize new definitions of accountability specifically targeting the decryption process, denoted accountable decryption, discuss the (im)possibilities of capturing this concept, and define the corresponding security goals. Second, we present a novel Trusted Execution Environment (TEE)-assisted solution aligning with these definitions. Instead of fully trusting the TEE, we take a further step, making the TEE work in the "trust, but verify" model, where we trust the TEE and use its service but empower users (i.e., decryptors) to detect a potentially compromised TEE state. Third, we implement a full-fledged system and conduct a series of evaluations. The results demonstrate that our solution is efficient: even in a scenario involving $300,000$ log entries, the decryption process concludes in approximately $5.5$ms, and malicious decryptors can be identified within $69$ms.
On the anonymity of one authenticated key agreement scheme for mobile vehicles-assisted precision agricultural IoT networks
Smart farming uses different vehicles to manage all the operations on the farm. These vehicles should be put to good use for secure data transmission. The key agreement scheme of Vangala et al. [IEEE TIFS, 18 (2023), 904-919] is designed for agricultural IoT networks. In this note, we show that the scheme fails to provide anonymity, achieving only pseudonymity. The scheme implicitly assumes that anonymity is equivalent to preventing the real identity from being recovered, but true anonymity means that the adversary cannot attribute different sessions to target users. To the best of our knowledge, this is the first work to clarify the difference between anonymity and pseudonymity.
Authenticity in the Presence of Leakage using a Forkcipher
Robust message authentication codes (MACs) and authenticated encryption (AE) schemes that provide authenticity in the presence of side-channel leakage are essential primitives. These constructions often rely on primitives designed for strong leakage protection, including, among others, strong-unpredictable (tweakable) block ciphers.
This paper extends the strong-unpredictability security definition to the versatile new forkcipher primitive. We show how to construct secure and efficient MACs and AE schemes that guarantee authenticity in the presence of leakage. We present a leakage-resistant MAC, ForkMAC, and two leakage-resistant AE schemes, ForkDTE1 and ForkDTE2, which use forkciphers instead of the traditional secure (tweakable) block ciphers of prior art. We prove and analyze their security in the presence of leakage based on a strong-unpredictable forkcipher. A comparison with the state of the art, in terms of both security and efficiency, is provided in the paper.
Key advantages of the proposed constructions are the minimal assumption they require, namely unpredictability-with-leakage-based security, and their efficiency: the tag generation of ForkMAC is the most efficient among leakage-resilient MAC proposals, on par with HBC, while ForkDTE1 and ForkDTE2 have more efficient encryption than any other scheme achieving integrity with leakage (and also provide misuse resistance).
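For intuition, the sketch below shows the forkcipher interface: one input block is expanded, under a key and a tweak, into two output blocks. The AES-based branch derivation here is a hypothetical stand-in chosen only to illustrate the interface; real forkciphers (e.g., ForkSkinny) share internal state between the two branches for efficiency, and the paper's ForkMAC/ForkDTE constructions are not reproduced.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def fork(key: bytes, tweak: bytes, block: bytes) -> tuple[bytes, bytes]:
    """Toy forkcipher: one 16-byte block in, two 16-byte blocks out."""
    assert len(key) == 16 and len(block) == 16 and len(tweak) <= 15
    def branch(domain: bytes) -> bytes:
        # Derive a per-branch, per-tweak subkey (illustrative only; this is
        # not a real tweakable-cipher key schedule).
        pad = (tweak + domain).ljust(16, b"\x00")
        k = bytes(a ^ b for a, b in zip(key, pad))
        enc = Cipher(algorithms.AES(k), modes.ECB()).encryptor()
        return enc.update(block) + enc.finalize()
    return branch(b"\x00"), branch(b"\x01")

# Both output branches are produced from a single forkcipher call.
c0, c1 = fork(bytes(16), b"nonce", bytes(16))
```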
LAMA: Leakage-Abuse Attacks Against Microsoft Always Encrypted
Always Encrypted (AE) is a Microsoft SQL Server feature that allows clients to encrypt sensitive data inside client applications and ensures that the sensitive data is hidden from untrusted servers and database administrators. AE offers two column-encryption options: deterministic encryption (DET) and randomized encryption (RND). In this paper, we explore the security implications of using AE with both DET and
RND encryption modes by running Leakage-Abuse Attacks (LAAs) against the system. We demonstrate how an adversary could extract the necessary data to run a frequency-analysis LAA against DET-encrypted columns and an LAA for Order-Revealing Encryption against RND-encrypted columns. We run our attacks using real-world datasets encrypted in a full-scale AE instance and demonstrate that a snooping server can recover over 95\% of the rows in 8 out of 15 DET-encrypted columns, and 10 out of 15 RND-encrypted columns.
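The core of the frequency-analysis step against a DET column is simple: deterministic encryption maps equal plaintexts to equal ciphertexts, so the snooping server can rank ciphertexts by frequency and align them with a public auxiliary distribution. The sketch below uses hypothetical data; it is the textbook attack step, not the paper's full pipeline.

```python
from collections import Counter

def frequency_attack(ciphertexts, auxiliary_distribution):
    # Rank observed ciphertexts and auxiliary plaintexts by frequency,
    # then align the two rankings.
    ct_ranked = [c for c, _ in Counter(ciphertexts).most_common()]
    pt_ranked = [p for p, _ in sorted(auxiliary_distribution.items(),
                                      key=lambda kv: -kv[1])]
    return dict(zip(ct_ranked, pt_ranked))

# Hypothetical DET-encrypted "state" column plus census-style frequencies.
cts = ["0xA1"] * 50 + ["0xB2"] * 30 + ["0xC3"] * 20
aux = {"CA": 0.5, "TX": 0.3, "NY": 0.2}
print(frequency_attack(cts, aux))  # {'0xA1': 'CA', '0xB2': 'TX', '0xC3': 'NY'}
```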
Revisiting a Realistic EM Side-Channel Attack on a Complex Modern SoC
Side-channel analysis of complex SoC devices with high-frequency microprocessors and multitasking operating systems is challenging in practice due to the high cost of trace acquisition and analysis, which generally involves tens of thousands to millions of traces. This work uses a cryptographic execution process on a Broadcom BCM2837 SoC as a case study to explore ways of reducing the cost of electromagnetic side-channel analysis. In the data-acquisition phase, we propose an efficient electromagnetic probe-positioning strategy that requires no additional tool assistance, significantly accelerating the collection of useful electromagnetic traces. In the analysis phase, we investigate the combined use of preprocessing techniques; the optimal preprocessing approach reduces the number of required electromagnetic traces by a factor of 12, significantly improving the attack success rate. Additionally, we implement profiling attacks on such devices, including traditional template attacks as well as MLP-based and CNN-based side-channel analysis, demonstrating that even minimal modeling cost can yield excellent analysis performance. Our study confirms the feasibility of low-cost side-channel analysis on complex SoCs and indicates that sensitive applications running on these devices still require protection.
ECC’s Achilles’ Heel: Unveiling Weak Keys in Standardized Curves
The strength of elliptic curve cryptography (ECC) relies on the choice of curve. This work analyzes weak keys in standardized curves, i.e., private keys lying in small subgroups of the auxiliary group $\mathbb{Z}^*_p$. We quantify weak-key prevalence across standardized curves, revealing a potential vulnerability due to the numerous small divisors of auxiliary group orders. To address this, we leverage the implicit baby-step giant-step algorithm, which transforms the complex elliptic curve discrete logarithm problem into a simpler problem in $\mathbb{Z}^*_p$. This enables efficient detection of weak keys in small-order subgroups.
Our findings highlight the importance of rigorous key testing in applications using standardized ECC. While random weak keys are unlikely, malicious actors could exploit this by manipulating key generation libraries. To this end, we show how users can assess their private key vulnerabilities and mitigate risks by eliminating weak keys. Hence, this work contributes to improved ECC security through proactive key management practices.
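As an illustration of the detection idea, the sketch below checks whether a value lies in a small-order subgroup of $\mathbb{Z}_p^*$ and, if so, recovers its discrete log with baby-step giant-step in $O(\sqrt{d})$ group operations. The parameters and generator are toy assumptions, not values from a standardized curve.

```python
from math import isqrt

def bsgs(g: int, h: int, d: int, p: int):
    """Find x with g^x = h (mod p), where g has order d; None if none exists."""
    m = isqrt(d) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j
    giant = pow(g, (d - m) % d, p)               # g^(-m) mod p, since ord(g) = d
    y = h
    for i in range(m):                           # giant steps: h * g^(-i*m)
        if y in baby:
            return i * m + baby[y]
        y = y * giant % p
    return None

def in_small_subgroup(k: int, d: int, p: int) -> bool:
    # k lies in the order-d subgroup of Z_p^* iff k^d = 1 (mod p).
    return pow(k, d, p) == 1

# Toy demo: 10 has order 4 in Z_101^*, and 91 = 10^3 (mod 101).
assert in_small_subgroup(91, 4, 101) and bsgs(10, 91, 4, 101) == 3
```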
Secure Coded Distributed Computing and Extensions to Multiple Access Setting
We consider two critical aspects of security in the distributed computing (DC) model: secure data shuffling and secure coded computing. It is imperative that any external entity overhearing the communication gains no information about the intermediate values (IVs) exchanged during the shuffling phase of the DC model; our approach ensures IV confidentiality during data shuffling. Moreover, each node in the system must be able to recover the IVs necessary for computing its output functions while remaining oblivious to the IVs associated with output functions not assigned to it. We design secure DC methods and establish achievable limits on the tradeoffs between the communication and computation loads, contributing to the advancement of secure data processing in distributed systems. First, we establish that the computation and communication loads stay the same as for non-secure data shuffling; however, secure data shuffling requires additional overhead for storing secret keys at the nodes. Next, we show that for secure coded computation, both the computation and communication loads increase compared to the non-secure scenario, along with the overhead for storing secret keys. Finally, we extend our security results to the recently introduced multi-access distributed computing (MADC) model, which features two distinct sets of nodes, namely mapper and reducer nodes: unlike the original setting, where mapping and reducing were performed by the same nodes, here they are separate entities, and each reducer node is connected to multiple mapper nodes. We show that for MADC models, too, the computation and communication loads remain the same with or without secure data shuffling, whereas secure coded computation increases both loads compared to the non-secure case; both scenarios require overhead for storing secret keys at the reducer nodes.
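A minimal sketch of why secure shuffling can cost no extra communication, only key storage: each IV sent over the overheard channel is masked with a one-time pad drawn from keys pre-shared among the nodes, so the wire carries uniform bytes of the same length. This is an illustration of the principle only, not the paper's coded-shuffling scheme.

```python
import secrets

def mask(iv: bytes, otp_key: bytes) -> bytes:
    # XOR one-time-pad masking; same length on the wire as in the clear.
    assert len(iv) == len(otp_key)
    return bytes(a ^ b for a, b in zip(iv, otp_key))

iv = b"\x01\x02\x03\x04"
key = secrets.token_bytes(len(iv))  # pre-shared between the two nodes
wire = mask(iv, key)                # eavesdropper sees only uniform bytes
assert mask(wire, key) == iv        # receiver unmasks with the stored key
```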
Best-Possible Unpredictable Proof-of-Stake: An Impossibility and a Practical Design
Proof-of-stake (PoS) protocols have been proposed to eliminate the unnecessary waste of computing power in Bitcoin, and multiple practical and provably secure designs have been developed, such as Ouroboros Praos (Eurocrypt 2018) and Snow White (FC 2019). However, an important security property called unpredictability has not been carefully studied in these provably secure PoS protocols. Unpredictability is critical for PoS, since attackers could exploit predictability to launch strengthened versions of multiple attacks (e.g., selfish mining and bribing). Unpredictability was previously investigated by Brown-Cohen et al. (EC 2019) in incentive-driven settings. In this paper, we investigate the property in the cryptographic setting, aiming to achieve the ''best possible'' unpredictability for PoS.
First, we present an impossibility result for all proof-of-stake protocols under the single-extension design framework, in which each honest player extends exactly one chain per round; the state-of-the-art permissionless PoS protocols (e.g., Praos and Snow White) all fall under this framework. Our impossibility result states that if a single-extension PoS protocol achieves the best possible unpredictability, then it cannot be proven secure unless more than $73\%$ of the stake is honest. Then, to overcome this impossibility, we introduce a new design framework, called multi-extension PoS, which allows each honest player to extend multiple chains in a round. We develop a novel ''$D$-distance-greedy'' strategy (where $D$ is a positive integer), in which honest players extend the set of best chains that are ''close'' to the longest chain. (Malicious players, of course, may behave arbitrarily during the protocol execution.) This ''$D$-distance-greedy'' strategy enables us to construct a class of PoS protocols that achieve the best possible unpredictability. In addition, we design a new tiebreak rule for the multi-extension protocol that chooses the best chain that can be extended faster, ensuring that the adversary cannot slow down the chain growth of honest players. Notably, these protocols can be proven secure assuming a much smaller fraction (e.g., $57\%$) of the stake is honest.
To enable a thorough security analysis in the cryptographic setting, we develop several new techniques. Since players are allowed to extend multiple chains, the analysis of chain growth is highly non-trivial, and we introduce a new framework for analyzing the chain growth of a multi-extension protocol. To prove the common prefix property, we introduce a new concept called ''virtual chains'' and present a reduction from the regular common prefix property to ''common prefix w.r.t. virtual chains''.
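The selection rule at the heart of the $D$-distance-greedy strategy can be sketched in a few lines: an honest player extends every chain whose length is within $D$ of the longest chain it holds. The chain representation below is a simplified placeholder; the actual protocol's tiebreak rule and validation are not reproduced.

```python
def chains_to_extend(chain_lengths: dict[str, int], D: int) -> list[str]:
    # Extend every chain within distance D of the longest known chain.
    best = max(chain_lengths.values())
    return [cid for cid, length in chain_lengths.items()
            if best - length <= D]

# Example: with D = 2 and longest chain of length 10, all chains of
# length >= 8 are extended; the chain of length 5 is not.
print(chains_to_extend({"a": 10, "b": 9, "c": 8, "d": 5}, D=2))  # ['a', 'b', 'c']
```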
Game-Theoretically Fair Distributed Sampling
Cleve's celebrated result (STOC'86) showed that a strongly fair multi-party coin-toss is impossible in the presence of majority-sized coalitions. Recently, however, a fascinating line of work studied a relaxed fairness notion called \emph{game-theoretic fairness}, which guarantees that no coalition should be incentivized to deviate from the prescribed protocol.
A sequence of works has explored the feasibility of game-theoretic fairness for \emph{two-sided} coin-toss, and indeed demonstrated feasibility in the dishonest majority setting under standard cryptographic assumptions. In fact, the recent work of Wu, Asharov, and Shi (EUROCRYPT'22) completely characterized the regime where game-theoretic fairness is feasible. However, this line of work is largely restricted to two-sided coin-toss, and more precisely to a \emph{uniform} coin-toss (i.e., Bernoulli with parameter $1/2$). The only exceptions are the works on game-theoretically fair leader election, which can be viewed as a special case of uniform $n$-sided coin-toss where $n$ is the number of parties.
In this work, we \emph{initiate} the comprehensive study of game-theoretic fairness for multi-party \emph{sampling from general distributions}. In particular, for the case of $m$-sided \emph{uniform} coin-toss we give a nearly complete characterization of the regime in which game-theoretic fairness is feasible. Interestingly, contrary to standard fairness notions in cryptography, the composition of game-theoretically fair two-sided coin-toss protocols does not necessarily yield game-theoretically fair multi-sided coins. To circumvent this, we introduce new techniques compatible with game-theoretic fairness.
In particular, we give the following results:
- We give a protocol from standard cryptographic assumptions that achieves game-theoretic fairness for uniform $m$-sided coin-toss against adversarial coalitions comprising half or more of the parties.
- To complement our protocol, we give a general impossibility result that establishes the optimality of our protocol for a broad range of parameters modulo an additive constant. Even in the worst-case, the gap between our protocol and our impossibility result is only a small constant multiplicative factor.
- We also present a game-theoretically fair protocol for \emph{any} efficiently sampleable $m$-outcome distribution in the dishonest majority setting. For instance, even for the case of $m=2$ (i.e., two-sided coin-toss), our result implies a game-theoretically fair protocol for an \emph{arbitrary} Bernoulli coin. In contrast, the work of Wu, Asharov, and Shi only focused on a Bernoulli coin with parameter $1/2$.
Phase Modulation Side Channels: Jittery JTAG for On-Chip Voltage Measurements
Measuring the fluctuations of a target's clock phase was identified as a leakage source in early electromagnetic side-channel investigations. Despite this, directly measuring the clock phase (or jitter) of digital signals from a target was only recently connected to being a source of exploitable leakage. Because the phase of a clock output is related to signal propagation delay through the target, and this propagation delay is related to voltage, most digital devices perform an unintended phase modulation (PM) of their internal voltage onto clock output phases.
This paper first demonstrates an unprofiled CPA attack against a Cortex-M microcontroller using the phase of a clock output, observing the signal on both optically isolated and capacitively isolated paths. The unprofiled attack takes only 2-4x more traces than an attack using a classic shunt-resistor measurement.
It is then demonstrated how the JTAG bypass mode can be used to force a clock through a digital device. This forced clock signal effectively turns the target into a highly effective on-device oscilloscope. As the attack does not require modifications to the device (such as capacitor or heat-spreader removal), it is difficult to detect using existing countermeasures. The example attack over JTAG uses an unprofiled CPA attack, requiring only about 5x more traces than an ideal shunt-resistor-based measurement. In addition, a variant based on fault correlation analysis is also demonstrated.
Countermeasures are discussed, and a simple resampling countermeasure is tested. All tools presented in the paper, both offensive and defensive, have been released under open-source licenses.
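The statistical core of an unprofiled CPA attack of the kind used above is a Pearson correlation between a hypothetical leakage model and the measured traces (here, the phase-derived samples would play the role of `traces`). For brevity the sketch targets the Hamming weight of plaintext XOR key rather than the AES S-box output a real attack would model, and the trace data is synthetic.

```python
import numpy as np

HW = np.array([bin(x).count("1") for x in range(256)])  # Hamming weight table

def cpa_best_key(traces: np.ndarray, plaintexts: np.ndarray) -> int:
    # traces: (n_traces, n_samples); plaintexts: (n_traces,) byte values.
    scores = np.zeros(256)
    t = traces - traces.mean(axis=0)
    for k in range(256):
        model = HW[plaintexts ^ k].astype(float)  # hypothetical leakage
        m = model - model.mean()
        # Pearson correlation of the model against every sample point.
        corr = (m @ t) / (np.linalg.norm(m) * np.linalg.norm(t, axis=0))
        scores[k] = np.max(np.abs(corr))
    return int(np.argmax(scores))

# Synthetic demo: leakage of the true key byte 0x3C plus Gaussian noise.
rng = np.random.default_rng(0)
pts = rng.integers(0, 256, size=2000)
leak = HW[pts ^ 0x3C].astype(float)
traces = leak[:, None] + rng.normal(0, 1.0, size=(2000, 5))
assert cpa_best_key(traces, pts) == 0x3C
```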
Shufflecake: Plausible Deniability for Multiple Hidden Filesystems on Linux
We present Shufflecake, a new plausible-deniability design that hides the existence of encrypted data on a storage medium, making it very difficult for an adversary to prove that such data exists. Shufflecake can be considered a ``spiritual successor'' to tools such as TrueCrypt and VeraCrypt, but vastly improved: it works natively on Linux, supports any filesystem of choice, and can manage multiple volumes per device, making deniability of the existence of hidden partitions truly plausible.
Compared to ORAM-based solutions, Shufflecake is extremely fast and simple, but it does not offer native protection against multi-snapshot adversaries. However, we discuss security extensions that its architecture makes possible, and we give evidence for why these extensions might suffice to thwart such more powerful adversaries.
We implemented Shufflecake as an in-kernel tool for Linux, adding useful features, and we benchmarked its performance, showing only a minor slowdown compared to a baseline encrypted system. We believe Shufflecake represents a useful tool for people whose freedom of expression is threatened by repressive authorities or dangerous criminal organizations, in particular: whistleblowers, investigative journalists, and activists for human rights in oppressive regimes.
Post-quantum hash functions using $\mathrm{SL}_n(\mathbb{F}_p)$
We define new families of Tillich-Zémor hash functions, using higher dimensional special linear groups over finite fields as platforms. The Cayley graphs of these groups combine fast mixing properties and high girth, which together give rise to good preimage and collision resistance of the corresponding hash functions. We justify the claim that the resulting hash functions are post-quantum secure.
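To fix ideas, here is a toy Cayley-hash evaluation in $\mathrm{SL}_3(\mathbb{F}_p)$ in the Tillich-Zémor style: each message bit selects one of two fixed generators, and the hash is the resulting matrix product. The prime and generators below are illustrative assumptions and carry none of the mixing or girth guarantees the paper establishes for its parameter choices.

```python
p = 2**31 - 1  # toy prime modulus

# Two determinant-1 matrices in SL_3(F_p): an elementary transvection and
# a cyclic permutation matrix (a 3-cycle is even, so det = +1).
A = [[1, 1, 0], [0, 1, 0], [0, 0, 1]]
B = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]

def matmul(X, Y):
    # 3x3 matrix product over F_p.
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) % p
             for j in range(3)] for i in range(3)]

def cayley_hash(bits):
    # Walk the Cayley graph: bit 0 multiplies by A, bit 1 by B.
    H = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity
    for b in bits:
        H = matmul(H, A if b == 0 else B)
    return H

print(cayley_hash([0, 1, 1, 0, 1]))  # hash value is a matrix in SL_3(F_p)
```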
Generalized Triangular Dynamical System: An Algebraic System for Constructing Cryptographic Permutations over Finite Fields
In recent years, a new class of symmetric-key primitives over $\mathbb{F}_p$, essential to protocols based on Multi-Party Computation and Zero-Knowledge Proofs, has emerged. Towards improving the efficiency of such primitives, a number of new block ciphers and hash functions over $\mathbb{F}_p$ were proposed. These new primitives also showed that design strategies alternative to the classical Substitution-Permutation Network (SPN) and Feistel Networks lead to more efficient cipher and hash function designs over $\mathbb{F}_p$, specifically for large odd primes $p$.
In view of these efforts, in this work we build an \emph{algebraic framework} that allows the systematic exploration of viable and efficient design strategies for constructing symmetric-key (iterative) permutations over $\mathbb{F}_p$. We first identify iterative polynomial dynamical systems over finite fields as the central building block of almost all block cipher design strategies. We propose a generalized triangular polynomial dynamical system (GTDS), and based on the GTDS we provide a generic definition of an iterative (keyed) permutation over $\mathbb{F}_p^n$.
Our GTDS-based generic definition is able to describe the three most well-known design strategies, namely SPNs, Feistel networks (FN) and Lai--Massey (LM). Consequently, the block ciphers that are constructed following these design strategies can also be instantiated from our generic definition. Moreover, we find that the recently proposed \texttt{Griffin} design, which follows neither the Feistel nor the SPN design, can be described using the generic GTDS-based definition. We also show that a new generalized Lai--Massey construction can be instantiated from the GTDS-based definition. The latter results confirm that our GTDS-based definition can instantiate cryptographic permutations beyond those based on SPN, FN, and LM.
We further provide a generic (security) analysis of the GTDS, including an upper bound on the differential uniformity and the correlation.
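As a minimal illustration of a triangular dynamical system that is a permutation, the sketch below uses the map $y_i = x_i^d + h_i(x_{i+1}, \dots, x_n)$ over a toy prime field with $\gcd(d, p-1) = 1$, so that $x \mapsto x^d$ is a bijection of $\mathbb{F}_p$ and the map inverts coordinate by coordinate from the bottom up. The tiny prime and the choice of $h_i$ (sum of later coordinates) merely instantiate the triangular shape; this is not a concrete secure GTDS instance from the paper.

```python
p, d = 101, 3                 # gcd(3, 100) = 1, so x -> x^3 permutes F_101
d_inv = pow(d, -1, p - 1)     # inverse exponent: (x^d)^d_inv = x in F_p

def forward(x):
    # y_i = x_i^d + (x_{i+1} + ... + x_{n-1}) mod p: triangular, since
    # coordinate i depends only on x_i and later coordinates.
    n = len(x)
    return [(pow(x[i], d, p) + sum(x[i + 1:])) % p for i in range(n)]

def inverse(y):
    # Invert bottom-up: later coordinates are already recovered when
    # coordinate i is processed.
    n = len(y)
    x = [0] * n
    for i in range(n - 1, -1, -1):
        x[i] = pow((y[i] - sum(x[i + 1:])) % p, d_inv, p)
    return x

x = [5, 17, 42]
assert inverse(forward(x)) == x
```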
PulpFHE: Complex Instruction Set Extensions for FHE Processors
The proliferation of attacks on cloud computing, coupled with the vast amounts of data outsourced to online services, continues to raise major concerns about the privacy of end users. Traditional cryptography can help secure data transmission and storage on cloud servers, but falls short when the already-encrypted data needs to be processed by the cloud provider. An emerging solution to this challenge is fully homomorphic encryption (FHE), which enables computation directly on encrypted data, and recent works have focused on developing new processor designs tailored to native processing of FHE data. In this work, we introduce PulpFHE, an optimized instruction-set extension tailored for the next generation of FHE processors. Our proposed FHE instructions offer native support for non-linear operations on encrypted data and enable significantly faster homomorphic computations for a broad range of realistic applications.
RC4OK. An improvement of the RC4 stream cipher
In this paper we present an improved version of the classical RC4 stream cipher. The improvements allow building a lightweight, high-performance, cryptographically strong random number generator suitable for use in IoT and as a corresponding component of operating systems. The criteria for high performance are both a high speed of generating the random-number stream and low overhead for adding entropy from physical events to the generator state.
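For reference, the classical RC4 that RC4OK builds on consists of the key-scheduling algorithm (KSA) followed by the pseudo-random generation algorithm (PRGA); the paper's modifications (such as low-cost entropy injection into the state) are not reproduced in this sketch.

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    # KSA: key-scheduling algorithm permutes the 256-byte state S.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # PRGA: pseudo-random generation algorithm emits one byte per step.
    out, i, j = bytearray(), 0, 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# Well-known RC4 test vector: key "Key" yields keystream EB 9F 77 81 B7 34.
assert rc4_keystream(b"Key", 6).hex() == "eb9f7781b734"
```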
Verifiable Homomorphic Linear Combinations in Multi-Instance Time-Lock Puzzles
Time-Lock Puzzles (TLPs) have been developed to securely transmit sensitive information into the future without relying on a trusted third party. Multi-instance TLP is a scalable variant of TLP that enables a server to efficiently find solutions to different puzzles provided by a client at once. Nevertheless, existing multi-instance TLPs lack support for (verifiable) homomorphic computation. To address this limitation, we introduce the "Multi-Instance partially Homomorphic TLP" (MH-TLP), a multi-instance TLP supporting efficient verifiable homomorphic linear combinations of puzzles belonging to a client. It ensures anyone can verify the correctness of computations and solutions. Building on MH-TLP, we further propose the "Multi-instance Multi-client verifiable partially Homomorphic TLP" (MMH-TLP). It not only supports all the features of MH-TLP but also allows for verifiable homomorphic linear combinations of puzzles from different clients. Our schemes refrain from using asymmetric-key cryptography for verification and, unlike most homomorphic TLPs, do not require a trusted third party. A comprehensive cost analysis demonstrates that our schemes scale linearly with the number of clients and puzzles.
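For background, the sketch below shows the classic RSW-style time-lock puzzle that such schemes build on: the client uses the factorization trapdoor $\varphi(N)$ to prepare the puzzle with two cheap exponentiations, while the solver must perform $t$ sequential squarings. The toy modulus is an illustrative assumption offering no security, and the multi-instance and homomorphic machinery of MH-TLP/MMH-TLP is not reproduced.

```python
def client_create(p: int, q: int, t: int, message: int):
    # The client knows phi(N), so 2^(2^t) mod N is computable quickly by
    # first reducing the exponent 2^t modulo phi(N).
    N, phi = p * q, (p - 1) * (q - 1)
    b = pow(2, pow(2, t, phi), N)
    return N, t, (message + b) % N    # mask the message additively with b

def server_solve(N: int, t: int, masked: int) -> int:
    # Without the trapdoor, the solver must do t sequential squarings.
    b = 2
    for _ in range(t):
        b = b * b % N
    return (masked - b) % N

# Toy demo with small primes (no security) and t = 1000 squarings.
N, t, c = client_create(1000003, 1000033, 1000, message=42)
assert server_solve(N, t, c) == 42
```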
Probabilistic Data Structures in the Wild: A Security Analysis of Redis
Redis (Remote Dictionary Server) is a general-purpose, in-memory database that supports a rich array of functionality, including various Probabilistic Data Structures (PDS) such as Bloom filters, Cuckoo filters, and cardinality and frequency estimators.
These PDS typically perform well in the average case. However, given that Redis is intended to be used across a diverse array of applications, it is crucial to evaluate how these PDS perform under worst-case scenarios, i.e., when faced with adversarial inputs. We offer a comprehensive analysis to address this question.
We begin by carefully documenting the different PDS implementations in Redis, explaining how they deviate from the PDS as described in the literature.
Then we show that these deviations enable a total of 10 novel attacks that are more severe than the corresponding attacks for generic versions of the PDS.
We highlight the critical role of Redis' decision to use non-cryptographic hash functions in the severity of these attacks.
We conclude by discussing countermeasures to the attacks, or explaining why, in some cases, countermeasures are not possible.
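To see why the choice of hash function matters, consider this toy Bloom filter with a weak, seed-free hash (a deliberately simplistic stand-in, not Redis' actual hash): an attacker can precompute arbitrarily many distinct items that map to the same positions, so a single insertion turns all of them into false positives.

```python
class ToyBloom:
    def __init__(self, m: int = 1024, k: int = 3):
        self.bits, self.m, self.k = bytearray(m), m, k

    def _positions(self, item: int):
        # Weak, public, seed-free hash: collisions can be solved for directly.
        return [(item * (i + 1)) % self.m for i in range(self.k)]

    def add(self, item: int):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def query(self, item: int) -> bool:
        return all(self.bits[pos] for pos in self._positions(item))

bf = ToyBloom()
# Items of the form 7 + j*m all hash exactly like item 7.
colliding = [7 + j * bf.m for j in range(100)]
bf.add(colliding[0])
assert all(bf.query(x) for x in colliding)  # 99 never-inserted items "found"
```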
R-STELLAR: A Resilient Synthesizable Signature Attenuation SCA Protection on AES-256 with built-in Attack-on-Countermeasure Detection
Side-channel attacks (SCAs) remain a significant threat to the security of cryptographic systems in modern embedded devices. Even mathematically secure cryptographic algorithms, when implemented in hardware, inadvertently leak information through physical side-channel signatures such as power consumption, electromagnetic (EM) radiation, light emissions, and acoustic emanations. Exploiting these side channels significantly reduces the attacker’s search space.
In recent years, physical countermeasures have pushed the minimum traces-to-disclosure (MTD) to 1 billion, with signature attenuation being the first method to achieve this mark. Signature attenuation typically relies on analog techniques; purely digital signature attenuation only reaches an MTD of 20 million and requires additional methods for high resilience. We focus on improving digital signature attenuation by an order of magnitude (to an MTD of 200 million).
Constant-Round Arguments for Batch-Verification and Bounded-Space Computations from One-Way Functions
What are the minimal cryptographic assumptions that suffice for constructing efficient argument systems, and for which tasks? Recently, Amit and Rothblum [STOC 2023] showed that one-way functions suffice for constructing constant-round arguments for bounded-depth computations. In this work we ask: what other tasks have efficient argument systems based only on one-way functions? We show two positive results:
First, we construct a new argument system for batch-verification of $k$ $UP$ statements ($NP$ statements with a unique witness) for witness relations that are verifiable in depth $D$. Taking $M$ to be the length of a single witness, the communication complexity is $O(\log k) \cdot (M + k \cdot D \cdot n^{\sigma})$, where $\sigma > 0$ is an arbitrarily small constant. In particular, the communication is quasi-linear in the length of a single witness, so long as ${k < M / (D \cdot n^{\sigma})}$.
The number of rounds is constant and the honest prover runs in polynomial time given witnesses for all $k$ inputs' membership in the language.
Our second result is a constant-round doubly-efficient argument system for languages in $P$ that are computable by bounded-space Turing machines. For this class of computations, we obtain an exponential improvement in the trade-off between the number of rounds and the (exponent of the) communication complexity, compared to known unconditionally sound protocols [Reingold, Rothblum and Rothblum, STOC 2016].
Practical and Efficient FHE-based MPC
We present a reactive MPC protocol built from FHE that is robust in the presence of active adversaries. In addition, the protocol enables reduced bandwidth by means of transciphering, and more expressive/efficient programs via a $\mathsf{Declassify}$ operation. All sub-components of the protocol can be efficiently realised using existing technology. We prove our protocol secure in the UC framework.
Efficient Zero-Knowledge Arguments for Paillier Cryptosystem
We present an efficient zero-knowledge argument-of-knowledge system customized for the Paillier cryptosystem. Our system enjoys sublinear proof size, low verification cost, and acceptable proof generation effort, while also supporting batch proof generation/verification. Existing works specialized for the Paillier cryptosystem feature linear proof size and verification time. Using existing sublinear argument systems for generic statements (e.g., zk-SNARKs) results in unaffordable proof generation cost, since it involves translating the relations to be proven into a prohibitively large Boolean or arithmetic circuit over a prime-order field. Our system does not suffer from these limitations.
The core of our argument system is a constraint system defined over the ring of residue classes modulo a composite number, together with novel techniques tailored for arguing about binary values in this setting. We then adapt the approach of Bootle et al. (EUROCRYPT 2016) to compile the constraint system into a sublinear argument system. Our constraint system is generic and can express typical relations in Paillier cryptosystems, including range proofs, correctness proofs, relationships between bits of a plaintext, relationships among plaintexts across multiple ciphertexts, and more. Our argument supports batch proof generation and verification, with the amortized cost outperforming the state-of-the-art protocol specialized for Paillier when the number of Paillier ciphertexts is on the order of hundreds.
We report an end-to-end prototype and conduct comprehensive experiments across multiple scenarios. Scenario 1 is Paillier with packing: when we pack 25.6K bits into 400 ciphertexts, a proof that all these ciphertexts are correctly computed is 17 times smaller and 3 times faster to verify than the naive implementation using 25.6K OR-proofs without packing. Furthermore, we can prove additional statements almost for free, e.g., that the sum of a subset of the witness bits is less than a threshold $t$. Another scenario is range proof: to prove that each plaintext in 200 Paillier ciphertexts is of size 256 bits, our proof size is 10 times smaller than the state of the art. Our analysis suggests that our system is asymptotically more efficient than existing protocols and is highly suitable for scenarios involving a large number (more than 100) of Paillier ciphertexts, which is often the case in data analytics applications.
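For context, the sketch below shows textbook Paillier with $g = n + 1$, the cryptosystem the statements above range over, including the additive homomorphism that packing and batch proofs exploit. The toy primes are illustrative only, and the argument system itself is not reproduced.

```python
import math
import secrets

def encrypt(n: int, m: int) -> int:
    # Enc(m) = (n+1)^m * r^n mod n^2, with random r in Z_n^*.
    n2 = n * n
    r = secrets.randbelow(n - 1) + 1  # coprime to n with overwhelming prob.
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(n: int, lam: int, mu: int, c: int) -> int:
    # Dec(c) = L(c^lam mod n^2) * mu mod n, where L(u) = (u - 1) / n.
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

# Toy key generation (no security at this size).
p, q = 1000003, 1000033
n = p * q
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)           # valid for g = n + 1

# Additive homomorphism: Enc(20) * Enc(22) decrypts to 42.
c1, c2 = encrypt(n, 20), encrypt(n, 22)
assert decrypt(n, lam, mu, (c1 * c2) % (n * n)) == 42
```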