Papers updated in last 14 days (90 results)
Asymptotically Optimal Adaptive Asynchronous Common Coin and DKG with Silent Setup
This paper presents the first optimally resilient, adaptively secure asynchronous common coin protocol with $O(\lambda n^2)$ communication complexity and $O(1)$ rounds, requiring only a public silent setup. Our protocol immediately implies a sequence of quadratic-communication, constant-round asynchronous Byzantine agreement protocols and asynchronous distributed key generation with a silent setup. Along the way, we formulate a new primitive called asynchronous subset alignment and introduce a simple framework to reason about specific composition security suitable for asynchronous common coin, which may be of independent interest.
Lossy Cryptography from Code-Based Assumptions
Over the past few decades, we have seen a proliferation of advanced cryptographic primitives with lossy or homomorphic properties built from various assumptions such as Quadratic Residuosity, Decisional Diffie-Hellman, and Learning with Errors. These primitives imply hard problems in the complexity class $\mathcal{SZK}$ (statistical zero-knowledge); as a consequence, they can only be based on assumptions that are broken in $\mathcal{BPP}^{\mathcal{SZK}}$. This poses a barrier for building advanced primitives from code-based assumptions, as the only known such assumption is Learning Parity with Noise (LPN) with an extremely low noise rate $\frac{\log^2 n}{n}$, which is broken in quasi-polynomial time.
In this work, we propose a new code-based assumption: Dense-Sparse LPN, that falls in the complexity class $\mathcal{BPP}^{\mathcal{SZK}}$ and is conjectured to be secure against subexponential time adversaries. Our assumption is a variant of LPN that is inspired by McEliece's cryptosystem and random $k$-XOR in average-case complexity. Roughly, the assumption states that
\[(\mathbf{T}\, \mathbf{M}, \mathbf{s} \,\mathbf{T}\, \mathbf{M} + \mathbf{e}) \quad \text{is indistinguishable from}\quad (\mathbf{T} \,\mathbf{M}, \mathbf{u}),\] for a random (dense) matrix $\mathbf{T}$, random sparse matrix $\mathbf{M}$, and sparse noise vector $\mathbf{e}$ drawn from the Bernoulli distribution with inverse polynomial noise probability.
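As a toy illustration of the two distributions in the assumption (the dimensions, column sparsity, and noise rate below are placeholder values of ours, not the paper's parameters), one could sample them over $\mathbb{F}_2$ as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameters -- not the paper's choices.
n, m, N, col_sparsity, noise_rate = 128, 256, 512, 3, 1 / 64

T = rng.integers(0, 2, size=(n, m))                 # random dense matrix over F_2
M = np.zeros((m, N), dtype=np.int64)                # random sparse matrix: few ones per column
for j in range(N):
    M[rng.choice(m, size=col_sparsity, replace=False), j] = 1

s = rng.integers(0, 2, size=n)                      # secret (row) vector
e = (rng.random(N) < noise_rate).astype(np.int64)   # sparse Bernoulli noise

TM = (T @ M) % 2
real  = (TM, (s @ TM + e) % 2)                      # (T M, s T M + e)
ideal = (TM, rng.integers(0, 2, size=N))            # (T M, u) with u uniform
```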
We leverage our assumption to build lossy trapdoor functions (Peikert-Waters STOC 08). This gives the first post-quantum alternative to the lattice-based construction in the original paper. Lossy trapdoor functions, being a fundamental cryptographic tool, are known to enable a broad spectrum of both lossy and non-lossy cryptographic primitives; our construction thus implies these primitives in a generic manner. In particular, we achieve collision-resistant hash functions with plausible subexponential security, improving over a prior construction from LPN with noise rate $\frac{\log^2 n}{n}$ that is only quasi-polynomially secure.
Leakage-Free Probabilistic Jasmin Programs
This paper presents a semantic characterization of leakage-freeness through timing side-channels for Jasmin programs. Our characterization covers probabilistic Jasmin programs that are not constant-time. In addition, we provide a characterization in terms of probabilistic relational Hoare logic and prove the equivalence between both definitions. We also prove that our new characterizations are compositional and relate our new definitions to existing ones from prior work, which could only be applied to deterministic programs.
To provide practical evidence, we use the Jasmin framework to develop a rejection sampling algorithm and provide an EasyCrypt proof that ensures the algorithm's implementation is leakage-free while not being constant-time.
Ringtail: Practical Two-Round Threshold Signatures from Learning with Errors
A threshold signature scheme splits the signing key among $\ell$ parties, such that any $t$-subset of parties can jointly generate signatures on a given message. Designing concretely efficient post-quantum threshold signatures is a pressing question, as evidenced by NIST's recent call.
In this work, we propose, implement, and evaluate a lattice-based threshold signature scheme, Ringtail, which is the first to achieve a combination of desirable properties:
(i) The signing protocol consists of only two rounds, where the first round is message-independent and can thus be preprocessed offline.
(ii) The scheme is concretely efficient and scalable to $t \leq 1024$ parties. For $128$-bit security and $t = 1024$ parties, we achieve $13.4$ KB signature size and $10.5$ KB of online communication.
(iii) The security is based on the standard learning with errors (LWE) assumption in the random oracle model. This improves upon the state-of-the-art (with comparable efficiency) which either has a three-round signing protocol [Eurocrypt'24] or relies on a new non-standard assumption [Crypto'24].
To substantiate the practicality of our scheme, we conduct the first WAN experiment deploying a lattice-based threshold signature, across 8 countries in 5 continents. We observe that an overwhelming majority of the end-to-end latency is consumed by network latency, underscoring the need for round-optimized schemes.
Fiat-Shamir Bulletproofs are Non-Malleable (in the Random Oracle Model)
Bulletproofs (Bünz et al. IEEE S&P 2018) are a celebrated ZK proof system that allows for short and efficient proofs, and have been implemented and deployed in several real-world systems. In practice, they are most often implemented in their non-interactive version obtained using the Fiat-Shamir transform. A security proof for this setting is necessary for ruling out malleability attacks. These attacks can lead to very severe vulnerabilities, as they allow an adversary to forge proofs re-using or modifying parts of the proofs provided by the honest parties. An earlier version of this work (Ganesh et al. EUROCRYPT 2022) provided evidence for non-malleability of Fiat-Shamir Bulletproofs. This was done by proving simulation-extractability, which implies non-malleability, in the algebraic group model.
In this work, we generalize the former result and prove simulation extractability in the programmable random oracle model, removing the need for the algebraic group model. Along the way, we establish a generic chain of reductions for Fiat-Shamir-transformed multi-round public-coin proofs to be simulation-extractable in the (programmable) random oracle model, which may be of independent interest.
New Quantum Cryptanalysis of Binary Elliptic Curves (Extended Version)
This paper improves upon the quantum circuits required for Shor's attack on binary elliptic curves. We present two types of quantum point addition, taking both qubit count and circuit depth into consideration.
In summary, we propose an in-place point addition that improves upon the work of Banegas et al. from CHES'21, reducing the product of qubit count and circuit depth by more than $73\%$ (up to $81\%$, depending on the variant). Furthermore, we develop an out-of-place point addition that uses additional qubits. This method achieves the lowest circuit depth and offers an improvement of over $92\%$ in the product of qubit count and quantum depth (for a single step).
To the best of our knowledge, our work improves upon all previous works (including the CHES'21 paper by Banegas et al., the IEEE Access'22 paper by Putranto et al., and the CT-RSA'23 paper by Taguchi and Takayasu) in terms of circuit depth and the product of qubit count and depth.
Equipped with these implementations, we discuss the post-quantum security of binary elliptic curve cryptography. Under the MAXDEPTH metric (proposed by the US government's NIST), the deepest quantum circuit in our work has depth $2^{24}$, significantly below the MAXDEPTH limit of $2^{40}$. For the product of gate count and full depth, a metric NIST uses to estimate quantum attack cost, the highest value in our work is $2^{60}$, considerably below the post-quantum security level 1 threshold (on the order of $2^{156}$).
Privacy-Preserving Dijkstra
Given a graph $G(V,E)$, represented as a secret-sharing of an adjacency list, we show how to obliviously convert it into an alternative, MPC-friendly secret-shared representation, the so-called $d$-normalized replicated adjacency list (abbreviated to $d$-normalized), whose size is only 4x larger than the original (secret-shared adjacency list) representation of $G$. This new data structure enables us to execute oblivious graph algorithms that simultaneously improve round, computation, and communication complexity. Our $d$-normalization proceeds in two steps:
First, we show how for any graph $G$, given a secret-shared adjacency list, where vertices are arbitrary alphanumeric strings that fit into a single RAM memory word, we can efficiently and securely rename vertices to integers from $1$ to $V$ that will appear in increasing order in the resulting secret-shared adjacency list. The secure renaming takes $O(\log V)$ rounds and $O((V+E)\log V)$ secure operations. Our algorithm also outputs two secret-shared arrays: a mapping from integers to alphanumeric names and its sorted inverse. Of course, if the adjacency list is already in such a format, this step could be omitted.
Second, given a secret-shared adjacency list for any graph $G$, where vertices are integers from $1$ to $V$ and are sorted, we show an oblivious $d$-normalization algorithm that takes $O(1)$ rounds and $O(V+E)$ secure operations.
We believe that both conversions are of independent interest. We demonstrate the power of our data structures by designing a privacy-preserving Dijkstra's single-source shortest-path algorithm that simultaneously achieves $O((V +E) \cdot \log V)$ secure operations and $O(V \cdot \log V \cdot \log \log\log V)$ rounds. Our secure Dijkstra algorithm works for any adjacency list representation as long as all vertex labels and weights can individually fit into a constant number of RAM memory words. Our algorithms work for two or a constant number of servers in the honest but curious setting. The limitation of our result (to only a constant number of servers) is due to our reliance on linear work and constant-round secure shuffle.
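As a plaintext reference point for the renaming in the first step (ignoring the secret sharing and obliviousness, which are the actual contribution), the conversion amounts to sorting the distinct labels and replacing each one by its rank:

```python
def rename_vertices(adj):
    """Plaintext reference for step one: map arbitrary alphanumeric vertex labels to
    integers 1..V in sorted order, returning the renamed adjacency list together with
    the two mapping tables. The secure version does this obliviously on secret shares."""
    labels = sorted(adj)                                   # distinct labels in increasing order
    to_int = {lab: i + 1 for i, lab in enumerate(labels)}  # label -> integer name
    to_lab = {i + 1: lab for i, lab in enumerate(labels)}  # integer name -> label
    renamed = {to_int[u]: sorted(to_int[v] for v in adj[u]) for u in labels}
    return renamed, to_int, to_lab

# Example: rename_vertices({"srv-a": ["srv-b"], "srv-b": ["srv-a", "srv-c"], "srv-c": []})
```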
On the Independence Assumption in Quasi-Cyclic Code-Based Cryptography
Cryptography based on the presumed hardness of decoding codes -- i.e., code-based cryptography -- has recently seen increased interest due to its plausible security against quantum attackers. Notably, of the four proposals for the NIST post-quantum standardization process that were advanced to their fourth round for further review, two were code-based. The most efficient proposals -- including HQC and BIKE, the NIST submissions alluded to above -- in fact rely on the presumed hardness of decoding structured codes. Of particular relevance to our work, HQC is based on quasi-cyclic codes, which are codes generated by matrices consisting of two cyclic blocks.
In particular, the security analysis of HQC requires a precise understanding of the Decryption Failure Rate (DFR), whose analysis relies on the following heuristic: given random "sparse" vectors $e_1,e_2$ (say, each coordinate is i.i.d. Bernoulli) multiplied by fixed "sparse" quasi-cyclic matrices $A_1,A_2$, the weight of the resulting vector $e_1A_1+e_2A_2$ is very concentrated around its expectation. In the documentation, the authors model the distribution of $e_1A_1+e_2A_2$ as a vector with independent coordinates (and the correct marginal distribution). However, we uncover cases where this modeling fails. While this does not invalidate the (empirically verified) heuristic that the weight of $e_1A_1+e_2A_2$ is concentrated, it does suggest that the behavior of the noise is a bit more subtle than previously predicted. Lastly, we discuss implications of our result for potential worst-case to average-case reductions for quasi-cyclic codes.
How Much Public Randomness Do Modern Consensus Protocols Need?
Modern blockchain-based consensus protocols aim for efficiency (i.e., low communication and round complexity) while maintaining security against adaptive adversaries. These goals are usually achieved using a public randomness beacon to select roles for each participant. We examine to what extent this randomness is necessary.
Specifically, we provide tight bounds on the amount of entropy a Byzantine Agreement protocol must consume from a beacon in order to enjoy efficiency and adaptive security. We first establish that no consensus protocol can simultaneously be efficient, be adaptively secure, and use $O(\log n)$ bits of beacon entropy. We then show this bound is tight and, in fact, a trilemma by presenting three consensus protocols that achieve any two of these three properties.
Stronger Security and Constructions of Multi-Designated Verifier Signatures
Off-the-Record (OTR) messaging is a two-party message authentication protocol that also provides plausible deniability: there is no record that can later convince a third party what messages were actually sent. To extend OTR to group messaging we need to consider issues that are not present in the 2-party case. In group OTR (as in two-party OTR), the sender should be able to authenticate (or sign) his messages so that group members can verify who sent a message (that is, signatures should be unforgeable, even by group members). Also as in the two-party case, we want the off-the-record property: even if some verifiers are corrupt and collude, they should not be able to prove the authenticity of a message to any outsider. Finally, we need consistency, meaning that a corrupt sender cannot create confusion in the group as to what he said: if any group member accepts a signature, then all of them do.
To achieve these properties it is natural to consider Multi-Designated Verifier Signatures (MDVS), which intuitively seem to target exactly the properties we require. However, existing literature defines and builds only limited notions of MDVS, where (a) the off-the-record property (referred to as source hiding) only holds when all verifiers could conceivably collude, and (b) the consistency property is not considered.
The contributions of this paper are two-fold: stronger definitions for MDVS, and new constructions meeting those definitions. We strengthen source-hiding to support any subset of corrupt verifiers, and give the first formal definition of consistency.
We give several constructions of our stronger notion of MDVS: one from generic standard primitives such as pseudorandom functions, pseudorandom generators, key agreement and NIZKs; one from specific instances of these primitives (for concrete efficiency); and one from functional encryption. The third construction requires an involved trusted setup step — including verification keys derived from a master secret — but this trusted setup buys us verifier-identity-based signing, for which such trusted setup is unavoidable. Additionally, in the third construction, the signature size can be made smaller by assuming a bound on colluding verifiers.
Probabilistic Attacks and Enhanced Security for "Private Set Intersection in the Internet Setting from Lightweight Oblivious PRF"
Private Set Intersection (PSI) has been an important research topic within privacy-preserving computation. Its main function is to allow two parties to compute the intersection of their private sets without revealing any other private information. PSI can therefore be applied to various real-world scenarios.
Chase and Miao presented an impressive construction, ``Private set intersection in the Internet setting from lightweight oblivious PRF'' (CM20 for short), at Crypto 2020, highlighting its convenient structure and optimal communication cost. However, it does have some security vulnerabilities. Let $X$ be the private set of user $P_1$ and $Y$ be the private set of user $P_2$. The CM20 protocol uses a pseudorandom function (PRF) to encrypt each element $x\in X$ of $P_1$ into $D_1$ and each element $y\in Y$ of $P_2$ into $D_2$, with $D_1 = D_2$ whenever $x=y$. It then sends random data $F_1$ to user $P_1$ and random data $F_2$ to user $P_2$ using a random oblivious transfer technique. User $P_2$ computes $\delta=D_2\oplus F_2$ and sends $\delta$ to user $P_1$, and user $P_1$ uses $\delta$ and $F_1$ to re-encrypt $D_1$. This is repeated until $P_1$ has re-encrypted every element of $X$; the results are then packaged and sent to $P_2$, who completes the private set intersection. However, if an adversary obtains $\delta$ and $F_2$ by any means, they can recover the PRF encryption of the user's private element (the recovery process is non-trivial). This significantly weakens the security of the CM20 protocol.
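To make the XOR relation described above concrete, here is a toy with stand-in byte strings (not the actual CM20 message structure): anyone holding both $\delta$ and $F_2$ can reconstruct $D_2 = \delta \oplus F_2$.

```python
import secrets

# Toy values standing in for the quantities named in the abstract above.
D2    = secrets.token_bytes(16)                      # PRF value encrypting P2's element y
F2    = secrets.token_bytes(16)                      # random data P2 obtained via oblivious transfer
delta = bytes(a ^ b for a, b in zip(D2, F2))         # message P2 sends to P1

# A party who learns both delta and F2 reconstructs D2 by XORing them back together.
assert bytes(a ^ b for a, b in zip(delta, F2)) == D2
```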
In this paper, we make three main contributions. First, based on the above analysis, we present a method for attacking CM20, called {\em probabilistic attacks}. This attack is based on estimating and analysing the frequency distribution of the encrypted data from the PRF and the probability distribution of the original private data, and determining the relationship between the two. Although not 100\% effective, this method of attack poses a significant threat to the security of user data.
Secondly, we introduce a new tool called the {\em perturbed pseudorandom generator} (PPRG). We show that the PPRG can overcome probabilistic attacks by replacing the random oblivious transfer and one of the hash functions (originally there were two) in CM20.
Finally, we provide a dedicated indistinguishability under chosen-plaintext attack (IND-CPA) security model for this PSI protocol. The efficiency analysis shows that the proposed PSI is comparable to CM20's PSI, whether on a PC, Mac, tablet, or mobile phone.
Leuvenshtein: Efficient FHE-based Edit Distance Computation with Single Bootstrap per Cell
This paper presents a novel approach to calculating the Levenshtein (edit) distance within the framework of Fully Homomorphic Encryption (FHE), specifically targeting third-generation schemes like TFHE. Edit distance computations are essential in applications across finance and genomics, such as DNA sequence alignment. We introduce Leuvenshtein, an optimised algorithm that significantly reduces the cost of edit distance calculations. It reduces the number of programmable bootstraps (PBS) needed per cell of the calculation from approximately 28 operations, as required by the conventional Wagner-Fischer algorithm, to just 1. Additionally, we propose an efficient method for performing equality checks on characters, reducing ASCII character comparisons to only 2 PBS operations. Finally, we explore the potential for further performance improvements by utilizing preprocessing when one of the input strings is unencrypted. Leuvenshtein achieves up to $205\times$ faster performance compared to the best available TFHE implementation and up to $39\times$ faster than an optimised implementation of the Wagner-Fischer algorithm. Moreover, when offline preprocessing is possible due to the presence of one unencrypted input on the server side, an additional $3\times$ speedup can be achieved.
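For reference, a plaintext sketch of the Wagner-Fischer recurrence whose per-cell cost Leuvenshtein reduces to a single PBS (a standard textbook version, not the FHE circuit itself):

```python
def levenshtein(a: str, b: str) -> int:
    """Plaintext Wagner-Fischer dynamic program. Each cell performs one character
    comparison and a three-way minimum -- the per-cell work that must be emulated
    under encryption."""
    prev = list(range(len(b) + 1))                 # distances from the empty prefix of a
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution (0 if characters match)
        prev = cur
    return prev[-1]

assert levenshtein("kitten", "sitting") == 3
```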
Dynamically Available Common Subset
Internet-scale consensus protocols used by blockchains are designed to remain operational in the presence of unexpected temporary crash faults (the so-called sleepy model of consensus) -- a critical feature for the latency-sensitive financial applications running on these systems.
However, their leader-based architecture, where a single block proposer is responsible for creating the block at each height, makes them vulnerable to short-term censorship attacks, in which the proposers profit at the application layer by excluding certain transactions.
In this work, we introduce an atomic broadcast protocol, secure in the sleepy model, that ensures the inclusion of all transactions within a constant expected time, provided that at least one participating node is non-censoring at all times.
Unlike traditional approaches, our protocol avoids designating a single proposer per block height, instead leveraging a so-called dynamically available common subset (DACS) protocol -- the first of its kind in the sleepy model. Additionally, our construction guarantees deterministic synchronization -- once an honest node confirms a block, all other honest nodes do so within a constant time, thus addressing a shortcoming of many low-latency sleepy protocols.
Publicly-Detectable Watermarking for Language Models
We present a publicly-detectable watermarking scheme for LMs: the detection algorithm contains no secret information, and it is executable by anyone. We embed a publicly-verifiable cryptographic signature into LM output using rejection sampling and prove that this produces unforgeable and distortion-free (i.e., undetectable without access to the public key) text output. We make use of error-correction to overcome periods of low entropy, a barrier for all prior watermarking schemes. We implement our scheme and find that our formal claims are met in practice.
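A minimal sketch of the rejection-sampling idea, as a toy of our own that omits the signature scheme, error correction, and distortion-freeness machinery of the actual construction: resample tokens until a hash of the candidate token agrees with the next payload bit.

```python
import hashlib

def embed_bits(sample_token, payload_bits, max_tries=64):
    """Toy rejection sampler: for each payload bit, draw tokens from the model until a
    hash of the token matches the bit. Real schemes sign the payload, hash longer
    contexts, and use error correction to survive low-entropy stretches."""
    out = []
    for bit in payload_bits:
        for _ in range(max_tries):
            tok = sample_token()                   # one draw from the language model
            if hashlib.sha256(tok.encode()).digest()[0] & 1 == bit:
                break
        out.append(tok)                            # on failure, keep the last draw (bit is lost)
    return out

def extract_bits(tokens):
    """Public detection needs no secret: recompute the hash bit of each token."""
    return [hashlib.sha256(t.encode()).digest()[0] & 1 for t in tokens]
```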
A New Method for Solving Discrete Logarithm Based on Index Calculus
The Index Calculus (IC) algorithm is the most effective probabilistic algorithm for solving discrete logarithms over finite fields of prime order, and it has been widely applied to cryptosystems based on elliptic curves. Since the IC algorithm was proposed in 1920, research on it has never stopped, especially on discrete logarithms over prime finite fields; both the algorithm itself and its applications have been greatly developed. There has, of course, been some research on elliptic curves, but with little success. Work on the IC algorithm has focused on improving the probability of a successful solution and reducing the time complexity of the computation. The IICA is the first to study the optimization of the IC algorithm by means of rounding to integers; however, it only considers rounding up and fails to consider rounding down. We find that the rounding direction in the IICA can be either up or down, but rounding down requires the notion of modular multiplication. After optimizing the IICA accordingly, the probability of successfully solving a discrete logarithm nearly doubles, and the number of transformations is also reduced, lowering the time complexity of the solution. The re-optimized IC algorithm thus greatly improves the probability of a successful IC solution. This result poses a serious challenge to cryptosystems based on finite fields of prime order.
Leverage Staking with Liquid Staking Derivatives (LSDs): Opportunities and Risks
In the Proof of Stake (PoS) Ethereum ecosystem, users can stake ETH on Lido to receive stETH, a Liquid Staking Derivative (LSD) that represents staked ETH and accrues staking rewards. LSDs improve the liquidity of staked assets by facilitating their use in secondary markets, such as for collateralized borrowing on Aave or asset exchanges on Curve. The composability of Lido, Aave, and Curve enables an emerging strategy known as leverage staking, an iterative process that enhances financial returns while introducing potential risks. This paper establishes a formal framework for leverage staking with stETH and identifies 442 such positions on Ethereum over 963 days. These positions represent a total volume of 537,123 ETH (877m USD). Our data reveal that 81.7% of leverage staking positions achieved an Annual Percentage Rate (APR) higher than conventional staking on Lido. Despite the high returns, we also recognize the potential risks. For example, the Terra crash incident demonstrated that token devaluation can impact the market. Therefore, we conduct stress tests under extreme conditions of significant stETH devaluation to evaluate the associated risks. Our simulations reveal that leverage staking amplifies the risk of cascading liquidations by triggering intensified selling pressure through liquidation and deleveraging processes. Furthermore, this dynamic not only accelerates the decline of stETH prices but also propagates a contagion effect, endangering the stability of both leveraged and ordinary positions.
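As a back-of-the-envelope illustration of the iterative loop (the loan-to-value ratio and round count below are purely illustrative, not taken from the paper): staking, posting the stETH as collateral, borrowing a fraction of its value, and restaking yields a geometric series of exposure.

```python
def leveraged_exposure(initial_eth: float, ltv: float, rounds: int) -> float:
    """Toy arithmetic for the leverage-staking loop: each round stakes the available ETH,
    posts the resulting stETH as collateral, and borrows `ltv` of its value again.
    Total exposure approaches initial_eth / (1 - ltv) as the number of rounds grows."""
    exposure, to_stake = 0.0, initial_eth
    for _ in range(rounds):
        exposure += to_stake       # stETH acquired and deposited this round
        to_stake *= ltv            # ETH borrowed against it for the next round
    return exposure

# Example: leveraged_exposure(10.0, 0.7, 10) ~= 32.4 ETH of stETH exposure from 10 ETH.
```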
SPY-PMU: Side-Channel Profiling of Your Performance Monitoring Unit to Leak Remote User Activity
The Performance Monitoring Unit (PMU), a standard feature in all modern computing systems, presents significant security risks by leaking sensitive user activities through microarchitectural event data. This work demonstrates the feasibility of remote side-channel attacks leveraging PMU data, revealing vulnerabilities that compromise user privacy and enable covert surveillance without physical access to the target machine. By analyzing the PMU feature space, we create distinct microarchitectural fingerprints for benchmark applications, which are then utilized in machine learning (ML) models to detect the corresponding benchmarks. This approach allows us to build a pre-trained model for benchmark detection using the unique microarchitectural fingerprints derived from PMU data. Subsequently, when an attacker remotely accesses the victim's PMU data, the pre-trained model enables the identification of the applications used by the victim with high accuracy; our proof-of-concept demonstration confirms this, showcasing the potential for malicious exploitation of PMU data. We analyze stress-ng benchmarks and build our classifiers using logistic regression, decision tree, k-nearest neighbors, and random forest ML models. Our proposed models achieve an average prediction accuracy of 98%, underscoring the potential risks associated with remote side-channel analysis using PMU data. This work highlights the urgent need for robust countermeasures against such vulnerabilities and provides a foundation for future research in microarchitectural security.
Wave Hello to Privacy: Efficient Mixed-Mode MPC using Wavelet Transforms
This paper introduces new protocols for secure multiparty computation (MPC) leveraging Discrete Wavelet Transforms (DWTs) for computing nonlinear functions over large domains. By employing DWTs, the protocols significantly reduce the overhead typically associated with Lookup Table-style (LUT) evaluations in MPC. We state and prove foundational results for DWT-compressed LUTs in MPC, present protocols for 9 of the most common activation functions used in ML, and experimentally evaluate the performance of our protocols for large domain sizes in the LAN and WAN settings. Our protocols are extremely fast: for instance, computing 1000 parallel instances of the sigmoid function on 64-bit inputs, with an error of less than $2^{-24}$, takes only a few hundred milliseconds and incurs just 29\,KiB of online communication (40 bytes per evaluation).
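To illustrate in the clear what a DWT-compressed lookup table can look like, here is a toy of ours using an orthonormal Haar transform; the paper's protocols operate on secret shares and use their own transform and parameters.

```python
import numpy as np

def haar_forward(v):
    """Full orthonormal Haar DWT of a length-2^k vector."""
    v, n = np.asarray(v, dtype=float).copy(), len(v)
    while n > 1:
        pairs = v[:n].reshape(-1, 2)
        v[:n] = np.concatenate([pairs[:, 0] + pairs[:, 1],
                                pairs[:, 0] - pairs[:, 1]]) / np.sqrt(2)
        n //= 2
    return v

def haar_inverse(w):
    """Inverse of haar_forward."""
    w, n = np.asarray(w, dtype=float).copy(), 1
    while n < len(w):
        a, d = w[:n].copy(), w[n:2 * n].copy()
        w[0:2 * n:2], w[1:2 * n:2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
        n *= 2
    return w

def compress_lut(table, keep):
    """Keep only the `keep` largest-magnitude Haar coefficients of a LUT and reconstruct
    the approximation -- fewer coefficients means a cheaper LUT-style evaluation."""
    coeffs = haar_forward(table)
    coeffs[np.argsort(np.abs(coeffs))[:-keep]] = 0.0
    return haar_inverse(coeffs)

# Example: approximate a sigmoid LUT on 256 points using 32 of its 256 coefficients.
xs = np.linspace(-8, 8, 256)
approx = compress_lut(1 / (1 + np.exp(-xs)), keep=32)
```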
A Note on the Minimality of One-Way Functions in Post-Quantum Cryptography
In classical cryptography, one-way functions (OWFs) play a central role as the minimal primitive that (almost) all primitives imply. The situation is more complicated in quantum cryptography, in which honest parties and adversaries can use quantum computation and communication, and it is known that analogues of OWFs in the quantum setting might not be minimal.
In this work we ask whether OWFs are minimal for the intermediate setting of post-quantum cryptography, in which the protocols are classical while they shall resist quantum adversaries. We show that for a wide range of natural settings, if a primitive Q implies OWFs, then so does its (uniformly or non-uniformly secure) post-quantum analogue. In particular, we show that if a primitive Q implies any other primitive P that has a 2-message security game (e.g., OWFs) through a black-box classical security reduction R, then one can always (efficiently) turn any polynomial-size quantum adversary breaking P into a polynomial-size quantum adversary breaking Q. Note that this result holds even if the implementation of P using that of Q is arbitrarily non-black-box.
We also prove extensions of this result for the case where the reduction R expects its oracle adversary to be deterministic, whenever either of the following conditions holds: (1) the adversary needs to win the security game of Q only with non-negligible probability (e.g., Q is collision-resistant hashing), or (2) either of P and Q has a "falsifiable" security game (this is the case when P is OWFs). Our work leaves open our main question when Q implies OWFs through a non-black-box security reduction, or when P uses a more complicated security game than a two-message one.
Fast SNARK-based Non-Interactive Distributed Verifiable Random Function with Ethereum Compatibility
Distributed randomness beacons (DRBs) are fundamental for various decentralised applications, such as consensus protocols, decentralised gaming and lotteries, and collective governance protocols. These applications are heavily used on modern blockchain platforms.
This paper presents the so far most efficient direct construction and implementation of a non-interactive distributed verifiable random function (NI-DVRF) that is fully compatible with Ethereum. Our NI-DVRF scheme adopts pairings and combines techniques from secret sharing, SNARKs, and BLS signatures. The security properties of the resulting NI-DVRF scheme are formally modelled and proven in the random oracle model under standard pairing-based assumptions.
To justify the efficiency and cost claims and more generally its adoption potential in practice, the proposed NI-DVRF scheme was implemented in Rust and Solidity. Our implementation is highly optimised and is currently being investigated for deployment on the multichain layer-2 scaling solution provided by Boba Network to power its DRB service zkRand. Our experimental analysis, therefore, also evaluates performance and scalability properties of the proposed NI-DVRF and its implementation.
Indistinguishability Obfuscation from Simple-to-State Hard Problems: New Assumptions, New Techniques, and Simplification
In this work, we study the question of what set of simple-to-state assumptions suffice for constructing functional encryption and indistinguishability obfuscation (iO), supporting all functions describable by polynomial-size circuits. Our work improves over the state-of-the-art work of Jain, Lin, Matt, and Sahai (Eurocrypt 2019) in multiple dimensions.
New Assumption: Prior to our work, all constructions of iO from simple assumptions required novel pseudorandomness generators involving LWE samples and constant-degree polynomials over the integers, evaluated on the error of the LWE samples. In contrast, Boolean pseudorandom generators (PRGs) computable by constant-degree polynomials have been extensively studied since the work of Goldreich (2000). We show how to replace the novel pseudorandom objects over the integers used in previous works with appropriate Boolean pseudorandom generators with sufficient stretch, when combined with LWE with binary error over suitable parameters. Both binary-error LWE and constant-degree Goldreich PRGs have been the subject of extensive cryptanalysis since well before our work, and we thus back the plausibility of our assumption with security against algorithms studied in the context of cryptanalysis of these objects.
New Techniques: We introduce a number of new techniques:
\begin{itemize}
\item We show how to build partially-hiding \emph{public-key} functional encryption, supporting degree-2 functions in the secret part of the message, and arithmetic $\mathsf{NC}^1$ functions over the public part of the message, assuming only standard assumptions over asymmetric pairing groups.
\item We construct single-ciphertext and single-secret-key functional encryption for all circuits with long outputs, which has the features of {\em linear} key generation and compact ciphertext, assuming only the LWE assumption.
\end{itemize}
Simplification: Unlike prior works, our new techniques furthermore let us construct {\em public-key} functional encryption for polynomial-sized circuits directly (without invoking any bootstrapping theorem, nor transformation from secret-key to public key FE), and based only on the {\em polynomial hardness} of underlying assumptions. The functional encryption scheme satisfies a strong notion of efficiency where the size of the ciphertext is independent of the size of the circuit to be computed, and grows only sublinearly in the output size of the circuit and polynomially in the input size and the depth of the circuit. Finally, assuming that the underlying assumptions are subexponentially hard, we can bootstrap this construction to achieve $iO$.
DL-SCADS: Deep Learning-Based Post-Silicon Side-Channel Analysis Using Decomposed Signal
Side-channel analysis (SCA) does not target an algorithm's weaknesses but rather those of its implementation. The rise of machine learning (ML) and deep learning (DL) is giving adversaries advanced capabilities to perform stealthy attacks. In this paper, we propose DL-SCADS, a DL-based approach combined with signal decomposition techniques to extract secret keys from post-silicon EM/power side-channel traces. We integrate the previously proven effective ideas of model ensembling and automated hyperparameter tuning with signal decomposition to develop an efficient and robust side-channel attack. Extensive experiments are performed on Advanced Encryption Standard (AES) and Post-Quantum Cryptography (PQC) implementations to demonstrate the efficacy of our approach. We also tabulate the performance of the side-channel attack employing various decomposition techniques and its comparison with the proposed approach across a range of datasets.
A Combinatorial Approach to IoT Data Security
This article explores the potential of Secret Sharing-Based Internet of Things (SBIoT) as a promising cryptographic element across diverse applications, including secure data storage in commercial cloud systems (Datachest), smart home environments (encompassing sensors, cameras, smart locks, and smart assistants), and e-health applications (protecting patient data and medical records). Beyond these applications, the paper makes two key contributions: the introduction of a novel cheater identification algorithm designed to verify the correct submission of shares during secret reconstruction, and empirical validation through experimental studies to support the theoretical advancements. This multifaceted approach not only demonstrates the versatility of SBIoT but also proposes innovative mechanisms to enhance security and integrity, contributing to the development of a more robust cryptographic framework.
This article expands upon the work presented in the poster "A Combinatorial Approach to IoT Data Security" at IWSEC 2023, Yokohama, Japan.
Efficient CPA Attack on Hardware Implementation of ML-DSA in Post-Quantum Root of Trust
Side-channel attacks (SCA) pose a significant threat to cryptographic implementations, including those designed to withstand the computational power of quantum computers.
This paper introduces the first side-channel attack on an industry-grade post-quantum cryptography implementation, Adam's Bridge.
Specifically, we present a Correlation Power Analysis (CPA) attack targeting the hardware implementation of ML-DSA within Caliptra, an open-source Silicon Root of Trust framework developed through a multi-party collaboration involving Google, AMD, and Microsoft.
Our attack focuses on the modular reduction process that follows the Number Theoretic Transform-based polynomial pointwise multiplication.
By exploiting side-channel leakage from a distinctive reduction algorithm unique to Adam's Bridge and leveraging the zeroization mechanism used to securely erase sensitive information by clearing internal registers, we significantly enhance the attack's efficacy.
Our findings reveal that an adversary can extract Caliptra's ML-DSA secret keys using only 10,000 power traces.
With access to these keys, an attacker could forge signatures for certificate generation, thereby compromising the integrity of the root of trust.
This work highlights the vulnerabilities of industry-standard root-of-trust systems to side-channel attacks. It underscores the urgent need for robust countermeasures to secure commercially deployed systems against such threats.
A Survey to Zero-Knowledge Interactive Verifiable Computing: Utilizing Randomness in Low-Degree Polynomials
This survey provides a comprehensive examination of zero-knowledge interactive verifiable computing, emphasizing the utilization of randomness in low-degree polynomials. We begin by tracing the evolution of general-purpose verifiable computing, starting with the foundational concepts of complexity theory developed in the 1980s, including the classes P and NP and the notion of NP-completeness. Through an exploration of the Cook-Levin Theorem and the transformation between NP problems like HAMPATH and SAT, we demonstrate the reducibility of NP problems to a unified framework, laying the groundwork for subsequent advancements.
Recognizing the limitations of NP-based proof systems in effectively verifying certain problems, we then delve into interactive proof systems (IPS) as a probabilistic extension of NP. IPS enhance verification efficiency by incorporating randomness and interaction, while accepting a small chance of error in exchange for that speed. We address the practical challenges of traditional IPS, where the assumption of a prover with unlimited computational power is unrealistic, and introduce the concept of secret knowledge. This approach allows a prover with bounded computational resources to convincingly demonstrate possession of secret knowledge, thereby enabling the verifier to accept with high probability. We quantify this knowledge by assessing the verifier's ability to distinguish between a simulator and a genuine prover, referencing seminal works such as Goldwasser et al.'s "The Knowledge Complexity of Interactive Proof Systems".
The survey further explores essential mathematical theories and cryptographic protocols, including the Schwartz-Zippel lemma and Reed-Solomon error correction, which underpin the power of low-degree polynomials in error detection and interactive proof systems. We provide a detailed, step-by-step introduction to the sum-check protocol, proving its soundness and runtime characteristics.
Despite the sum-check protocol's theoretical applicability to all NP problems via SAT reduction, we highlight its limitation in requiring superpolynomial time for an honest prover in general-purpose computations. To address these limitations, we introduce the GKR protocol, a sophisticated general-purpose interactive proof system developed in the 2010s. We demonstrate how the sum-check protocol integrates into the GKR framework to achieve efficient, sound verification of computations in polynomial time. This survey not only reviews the historical and theoretical advancements in verifiable computing over the past 30 years but also offers an accessible introduction for newcomers, providing a solid foundation for understanding the significant advancements in verifiable computing over the past decade, including developments such as ZK-SNARKs.
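To make the sum-check discussion concrete, here is a toy honest-prover version over a small prime field; the field size and the choice to pass round polynomials as callables (rather than coefficient lists) are ours, not the survey's.

```python
import random
from itertools import product

P = 2**61 - 1  # a prime modulus for the toy field

def sumcheck(g, n, rng=random.Random(0)):
    """Toy sum-check: the prover convinces the verifier that
    claim = sum_{x in {0,1}^n} g(x) (mod P), with the verifier evaluating g only once,
    at a random point. g must be a low-degree polynomial evaluated mod P."""
    claim = sum(g(*x) for x in product((0, 1), repeat=n)) % P
    r = []
    for i in range(n):
        # Round-i polynomial: earlier variables fixed to past challenges, variable i free,
        # the remaining n-i-1 Boolean variables summed out.
        def s(X, i=i, fixed=tuple(r)):
            return sum(g(*fixed, X, *rest)
                       for rest in product((0, 1), repeat=n - i - 1)) % P
        assert (s(0) + s(1)) % P == claim      # verifier's round consistency check
        r.append(rng.randrange(P))             # fresh random challenge
        claim = s(r[-1])                       # reduced claim carried to the next round
    assert g(*r) % P == claim                  # single oracle evaluation of g
    return True

# Example: g(x1, x2, x3) = 2*x1*x2 + x3 + 5 over the toy field.
assert sumcheck(lambda x1, x2, x3: (2 * x1 * x2 + x3 + 5) % P, n=3)
```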
Scalable Two-Round $n$-out-of-$n$ and Multi-Signatures from Lattices in the Quantum Random Oracle Model
In this paper, we construct the first asymptotically efficient two-round $n$-out-of-$n$ and multi-signature schemes from lattices in the quantum random oracle model (QROM), using the Fiat-Shamir with Aborts (FSwA) paradigm. Our protocols can be viewed as the QROM variants of the two-round protocols by Damgård et al. (JoC 2022). A notable feature of our protocol, compared to other counterparts in the classical random oracle model, is that each party performs an independent abort and still outputs a signature in exactly two rounds, making our schemes significantly more scalable.
From a technical perspective, simulating the QROM and efficiently reducing the underlying hard problem to signature forgery are the essential challenges in achieving efficient QROM security for the related prior works. To overcome the former, we adopt a quantum-accessible pseudorandom function (QPRF) to simulate the QROM; in particular, we show that there exists a QPRF that can be programmed and inverted, even against a quantum adversary. For the latter challenge, we tweak and apply the online extractability technique of Unruh (Eurocrypt 2015).
Advancements in Distributed RSA Key Generation: Enhanced Biprimality Tests
RSA is widely used in modern cryptographic practice, with certain RSA-based protocols relying on the secrecy of $p$ and $q$. A common approach is to use secure multiparty computation to address the privacy concerns of $p$ and $q$.
In the specific setting of distributed RSA modulus generation protocols, the biprimality test for Blum integers $N=pq$, where $p\equiv q\equiv 3 \pmod{4}$ are two primes, proposed by Boneh and Franklin ($2001$) is the most commonly used. Over the past $20$ years, the worst-case acceptance rate of this test has been consistently assumed to be $1/2$ under the condition $\gcd(pq,p+q-1)=1$.
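For orientation, a plaintext (non-distributed) iteration of this test can be sketched as follows; the Jacobi-symbol routine is standard textbook material, and in the distributed protocol the exponent $(N-p-q+1)/4$ is computed from shares of $p$ and $q$ rather than in the clear.

```python
import random

def jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a/n) for odd n > 0, via the standard quadratic-reciprocity loop."""
    a, result = a % n, 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def bf_round(N: int, p: int, q: int, rng=random.Random(0)) -> bool:
    """One plaintext iteration of the Boneh-Franklin biprimality test for a candidate
    N = p*q with p = q = 3 (mod 4): pick g with Jacobi symbol +1 and accept iff
    g^((N - p - q + 1)/4) = +/-1 (mod N)."""
    while True:
        g = rng.randrange(2, N - 1)
        if jacobi(g, N) == 1:
            break
    return pow(g, (N - p - q + 1) // 4, N) in (1, N - 1)
```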
In this paper, we demonstrate that for the Boneh-Franklin test, the acceptance probability is at most $1/4$, rather than $1/2$, except in the specific case where $p = q = 3$. We establish that the value of $1/4$ represents the tightest upper bound. This finding significantly enhances the practical effectiveness of the Boneh-Franklin test: achieving equivalent soundness for the RSA modulus now requires only half the number of iterations previously deemed necessary.
Furthermore, we propose a generalized biprimality test based on the Lucas sequence. In the worst case, the acceptance rate of the proposed test is at most $1/4 + 1.25/(p_{\min} -3)$, where $p_{\min}$ is the smallest prime factor of $N$. A simulation study suggests that this test is generally more efficient than the Boneh-Franklin test at detecting that $N$ is not an RSA modulus. Additionally, the test is applicable to generating arbitrary RSA moduli with arbitrary odd primes $p,q$.
A corresponding protocol is developed for this test, validated for resilience against semi-honest adversaries, and shown to be applicable to most known distributed RSA modulus generation protocols. After a thorough analysis and comparison of well-known protocols for Blum integers, including the variant Miller-Rabin test used by Burkhardt et al. (CCS 2023), the Boneh-Franklin test, and our proposed Lucas-type test, our proposed protocol also proves highly competitive in verifying whether $N$ is an RSA modulus.
Non Linearizable Entropic Operator
In [Pan21] a linearization attack is proposed in order to break the cryptosystem proposed in [Gli21]. We propose here a non-linearizable operator that disables this attack, as this operator does not give rise to a quasigroup and does not obey the Latin square property.
PIR with Client-Side Preprocessing: Information-Theoretic Constructions and Lower Bounds
It is well-known that classical Private Information Retrieval (PIR) schemes without preprocessing must suffer from linear server computation per query. Moreover, any such single-server PIR with sublinear bandwidth must rely on public-key cryptography. Several recent works showed that these barriers pertaining to classical PIR can be overcome by introducing a preprocessing phase where each client downloads a small hint that helps it make queries subsequently. Notably, the Piano PIR scheme (and subsequent improvements) showed that with such a client-side preprocessing, not only can we have PIR with sublinear computation and bandwidth per query, but somewhat surprisingly, we can also get it using only symmetric-key cryptography (i.e., one-way functions).
In this paper, we take the question of minimizing cryptographic assumptions to an extreme. In particular, we are the first to explore the landscape of information theoretic single-server preprocessing PIR. We make contributions on both the upper- and lower-bounds fronts. First, we show new information-theoretic constructions with various non-trivial performance tradeoffs between server computation, client space and bandwidth. Second, we prove a (nearly) tight lower bound on the tradeoff between the client space and bandwidth in information-theoretic constructions. Finally, we prove that any computational scheme that overcomes the information-theoretic lower bound and satisfies a natural syntactic requirement (which is met by all known constructions) would imply a hard problem in the complexity class SZK. In particular, this shows that Piano achieves (nearly) optimal client space and bandwidth tradeoff subject to making a black-box use of a one-way function. Some of the techniques we use for the above results can be of independent interest.
Attribute Based Encryption for Turing Machines from Lattices
We provide the first attribute based encryption (ABE) scheme for Turing machines supporting unbounded collusions from lattice assumptions. In more detail, the encryptor encodes an attribute $\mathbf{x}$ together with a bound $t$ on the machine running time and a message $m$ into the ciphertext, the key generator embeds a Turing machine $M$ into the secret key and decryption returns $m$ if and only if $M(\mathbf{x})=1$. Crucially, the input $\mathbf{x}$ and machine $M$ can be of unbounded size, the time bound $t$ can be chosen dynamically for each input and decryption runs in input specific time.
Previously the best known ABE for uniform computation supported only non-deterministic log space Turing machines (${\sf NL}$) from pairings (Lin and Luo, Eurocrypt 2020). In the post-quantum regime, the state of the art supports non-deterministic finite automata from LWE in the $\textit{symmetric}$ key setting (Agrawal, Maitra and Yamada, Crypto 2019).
In more detail, our results are:
1. We construct the first ABE for ${\sf NL}$ from the LWE, evasive LWE (Wee, Eurocrypt 2022 and Tsabary, Crypto 2022) and Tensor LWE (Wee, Eurocrypt 2022) assumptions. This yields the first (conjectured) post-quantum ABE for ${\sf NL}$.
2. Relying on LWE, evasive LWE and a new assumption called $\textit{circular tensor}$ LWE, we construct ABE for all Turing machines. At a high level, the circular tensor LWE assumption incorporates circularity into the tensor LWE (Wee, Eurocrypt 2022) assumption.
Towards our ABE for Turing machines, we obtain the first CP-ABE for circuits of unbounded depth and size from the same assumptions -- this may be of independent interest.
Voting with coercion resistance and everlasting privacy using linkable ring signatures
We propose an e-voting protocol based on a novel linkable ring signature scheme with unconditional anonymity. In our system, all voters create private credentials and register their public counterparts. To vote, they create a ring (anonymity set) consisting of public credentials, together with a proof of knowledge of their secret credential via our signature. Its unconditional anonymity prevents an attacker, no matter how powerful, from deducing the identity of the voter, thus attaining everlasting privacy. Additionally, our protocol provides coercion resistance in the JCJ framework: when an adversary tries to coerce a voter, the attack can be evaded by creating a signature with a fake but indistinguishable credential; during a moment of privacy, the voter casts their real vote. Our scheme also provides verifiability and ballot secrecy.
Post-Quantum DNSSEC with Faster TCP Fallbacks
In classical DNSSEC, a drop-in replacement with quantum-safe cryptography would increase DNS query resolution times by $\textit{at least}$ a factor of $2\times$. Since a DNS response containing large post-quantum signatures is likely to get marked truncated ($\texttt{TC}$) by a nameserver (resulting in a wasted UDP round-trip), the client (here, the resolver) would have to retry its query over TCP, further incurring a $\textit{minimum}$ of two round-trips due to the three-way TCP handshake.
We present $\mathsf{TurboDNS}$: a backward-compatible protocol that eliminates $\textit{two}$ round-trips from the preceding flow by 1) sending TCP handshake data in the initial DNS/UDP flight itself, and 2) immediately streaming the DNS response over TCP after authenticating the client with a cryptographic cookie. Our experiments show that DNSSEC over $\mathsf{TurboDNS}$, with either Falcon-512 or Dilithium-2 as the zone signing algorithm, is practically as fast as the currently deployed ECDSA P-256 and RSA-2048 setups in resolving $\texttt{QTYPE}$ $\texttt{A}$ DNS queries.
Smaug: Modular Augmentation of LLVM for MPC
Secure multi-party computation (MPC) is a crucial tool for privacy-preserving computation, but it is getting increasingly complicated due to recent advancements and optimizations. Programming tools for MPC allow programmers to develop MPC applications without mastering all cryptography. However, most existing MPC programming tools fail to attract real users due to the lack of documentation, maintenance, and the ability to compose with legacy codebases. In this work, we build Smaug, a modular extension of LLVM. Smaug seamlessly brings all LLVM support to MPC programmers, including error messaging, documentation, code optimization, and frontend support to compile from various languages to LLVM intermediate representation (IR). Smaug can efficiently convert non-oblivious LLVM IR to their oblivious counterparts while applying popular optimizations as LLVM code transformations. With benchmarks written in C++ and Rust and backends for Yao and GMW protocols, we observe that Smaug performs as well as (and sometimes much better than) prior tools using domain-specific languages with similar backends. Finally, we use Smaug to compile open-source projects that implement Minesweeper and Blackjack, producing usable two-party games with ease.
What is "legal" and "illegal?": Social Norms, Current Practices and Perceived Risks among the Cryptocurrency Users in Bangladesh
Cryptocurrency practices worldwide are seen as innovative, yet they navigate a fragmented regulatory environment. Many local authorities aim to balance promoting innovation, safeguarding consumers, and managing potential threats. In particular, it is unclear how people deal with cryptocurrencies in regions where trading or mining is prohibited. This insight is crucial for informing risk-reduction strategies. To address this, we conducted semi-structured interviews with 28 cryptocurrency traders and miners from Bangladesh, where the local authority is hostile towards cryptocurrencies. Our research revealed that the participants use unique strategies to mitigate risks around cryptocurrencies. Our findings indicate a prevalent uncertainty at both personal and organizational levels concerning the interpretation of laws, a situation worsened by the actions of major financial service providers who indirectly facilitate cryptocurrency transactions. We further connect our findings to broader issues in HCI regarding folk models, informal markets and legality, and education and awareness.
Nearly Quadratic Asynchronous Distributed Key Generation
We prove that for any $1\le k\le \log n$, given a VRF setup and assuming secure erasures, there exists a protocol for Asynchronous Distributed Key Generation (ADKG) that is resilient to a strongly adaptive adversary that can corrupt up to $f<n/3$ parties. With all but negligible probability, all nonfaulty parties terminate in an expected $O(k)$ rounds and send a total expected $\tilde{O}(n^{2+1/k})$ messages.
Compact Key Storage in the Standard Model
In recent work [Crypto'24], Dodis, Jost, and Marcedone introduced Compact Key Storage (CKS) as a modern approach to backup for end-to-end (E2E) secure applications. As most E2E-secure applications rely on a sequence of secrets $(s_1,...,s_n)$ from which, together with the ciphertexts sent over the network, all content can be restored, Dodis et al. introduced CKS as a primitive for backing up $(s_1,...,s_n)$. The authors provided definitions as well as two practically efficient schemes (with different functionality-efficiency trade-offs). Both their security definitions and their schemes, however, relied on the random oracle model (ROM).
In this paper, we first show that this reliance is inherent. More concretely, we argue that in the standard model, one cannot have a general CKS instantiation that is applicable to all "CKS-compatible games", as defined by Dodis et al., and realized by their ROM construction. Therefore, one must restrict the notion of CKS-compatible games to allow for standard model CKS instantiations.
We then introduce an alternative standard-model CKS definition that makes concessions in terms of functionality (thereby circumventing the impossibility). More precisely, we specify CKS which does not recover the original secret $s_i$ but a derived key $k_i$, and then observe that this still suffices for many real-world applications. We instantiate this new notion based on minimal assumptions. For passive security, we provide an instantiation based on one-way functions only. For stronger notions, we additionally need collision-resistant hash functions and dual-PRFs, which we argue to be minimal.
Finally, we provide a modularization of the CKS protocols of Dodis et al. In particular, we present a unified protocol (and proof) for standard-model equivalents for both protocols introduced in the original work.
MicroNova: Folding-based arguments with efficient (on-chain) verification
We describe the design and implementation of MicroNova, a folding-based recursive argument for producing proofs of incremental computations of the form $y = F^{(\ell)}(x)$, where $F$ is a possibly non-deterministic computation (encoded using a constraint system such as R1CS), $x$ is the initial input, $y$ is the output, and $\ell > 0$. The proof of an $\ell$-step computation is produced step-by-step such that neither the proof size nor the time to verify it depends on $\ell$. The proof at the final iteration is then compressed, to achieve further succinctness in terms of proof size and verification time. Compared to prior folding-based arguments, a distinguishing aspect of MicroNova is the concrete efficiency of the verifier, even in a resource-constrained environment such as Ethereum's blockchain. In particular, the compressed proof consists of $O(\log{N})$ group elements and it can be verified with $O(\log{N})$ group scalar multiplications and two pairing operations, where $N$ is the number of constraints for a single invocation of $F$. MicroNova requires a universal trusted setup and can employ any existing setup material created for the popular KZG univariate polynomial commitment scheme. Finally, we implement and experimentally evaluate MicroNova. We find that MicroNova's proofs can be efficiently verified on the Ethereum blockchain with $\approx$2.2M gas. Furthermore, MicroNova's prover incurs minimal overheads atop its baseline Nova's prover.
NMFT: A Copyrighted Data Trading Protocol based on NFT and AI-powered Merkle Feature Tree
With the rapid growth of blockchain-based Non-Fungible Tokens (NFTs), data trading has evolved to incorporate NFTs for ownership verification. However, the NFT ecosystem faces significant challenges in copyright protection, particularly when malicious buyers slightly modify the purchased data and re-mint it as a new NFT, infringing upon the original owner's rights. In this paper, we propose a copyright-preserving data trading protocol to address this challenge.
First, we introduce the Merkle Feature Tree (MFT), an enhanced version of the traditional Merkle Tree that incorporates an AI-powered feature layer above the data layer. Second, we design a copyright challenge phase during the trading process, which recognizes the data owner with highly similar feature vectors and earlier on-chain timestamp as the legitimate owner. Furthermore, to achieve efficient and low-gas feature vector similarity computation on blockchain, we employ Locality-Sensitive Hashing (LSH) to compress high-dimensional floating-point feature vectors into single uint256 integers.
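One way to realize such a compression (our own illustrative choice of LSH family; the paper's exact construction and parameters may differ) is random-hyperplane hashing, which packs a floating-point feature vector into a single 256-bit integer whose Hamming distance tracks cosine similarity:

```python
import numpy as np

def make_simhash(dim: int, seed: int = 42):
    """Random-hyperplane LSH (SimHash): returns a hasher mapping a length-`dim` float
    feature vector to a single 256-bit integer (a uint256), so that similar vectors
    receive signatures with small Hamming distance."""
    planes = np.random.default_rng(seed).standard_normal((256, dim))
    def simhash(vec) -> int:
        bits = (planes @ np.asarray(vec, dtype=float)) >= 0
        return int("".join("1" if b else "0" for b in bits), 2)
    return simhash

def hamming(a: int, b: int) -> int:
    """A copyright challenge would compare signatures by Hamming distance."""
    return bin(a ^ b).count("1")

# Example: two nearly identical 512-dim feature vectors map to signatures that differ
# in only a handful of the 256 bits.
h = make_simhash(dim=512)
v = np.random.default_rng(1).standard_normal(512)
w = v + 0.01 * np.random.default_rng(2).standard_normal(512)
print(hamming(h(v), h(w)))   # small Hamming distance => treated as highly similar
```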
Experiments with multiple image and text feature extraction models demonstrate that LSH effectively preserves the similarity between highly similar feature vectors before and after compression, thus supporting similarity-based copyright challenges. Experimental results on the Ethereum Sepolia testnet demonstrate NMFT's scalability with sublinear growth in gas consumption while maintaining stable latency.
Efficient Multi-party Private Set Union Resistant to Maximum Collusion Attacks
Multi-party Private Set Union (MPSU) enables multiple participants to jointly compute the union of their private sets without leaking any additional information beyond the resulting union. Liu et al. (ASIACRYPT 2023) presented the first MPSU protocol that scales to large data sets, designating one participant as the "leader" responsible for obtaining the final union. However, this approach assumes that the leader does not collude with any other participant, which can be impractical due to the inherent lack of mutual trust among participants, thereby limiting its applicability.
On the other hand, the state-of-the-art protocol that allows all participants to learn the computed union was proposed by Seo et al. (PKC 2012). While their construction achieves $O(1)$ round complexity, it remains secure only if fewer than half of the participants collude, leaving open the problem of designing stronger collusion tolerance and multi-party output.
In this work, we address these limitations by first proposing $\Pi_\text{MPSU}^{\text{one-leader}}$, which designates one participant as leader to obtain the union result. Building upon this construction, we extend the design to $\Pi_\text{MPSU}^{\text{leaderless}}$, which enables every participant to receive the union result simultaneously. Both protocols operate under the semi-honest model, tolerate maximal collusion among participants, and efficiently handle large-scale set computation. We implement these schemes and conduct a comprehensive comparison against state-of-the-art solutions. The results show that, for input sizes of $2^{12}$ at a comparable security level, $\Pi_\text{MPSU}^{\text{one-leader}}$ achieves a $663$ times speedup in online runtime compared to the state-of-the-art. Furthermore, it also remains $22$ times faster than the half-collusion-tolerant protocol.
Worst-Case to Average-Case Hardness of LWE: An Alternative Perspective
In this work, we study the worst-case to average-case hardness of the Learning with Errors problem (LWE) under an alternative measure of hardness: the maximum success probability achievable by a probabilistic polynomial-time (PPT) algorithm. Previous works by Regev (STOC 2005), Peikert (STOC 2009), and Brakerski, Peikert, Langlois, Regev, Stehle (STOC 2013) give worst-case to average-case reductions from lattice problems to LWE, specifically from the approximate decision variant of the Shortest Vector Problem (GapSVP) and the Bounded Distance Decoding (BDD) problem. These reductions, however, are lossy in the sense that even the strongest assumption on the worst-case hardness of GapSVP or BDD implies only mild hardness of LWE. Our alternative perspective gives a much tighter reduction and strongly relates the hardness of LWE to that of BDD. In particular, we show that under a reasonable assumption about the success probability of solving BDD via a PPT algorithm, we obtain a nearly tight lower bound on the highest possible success probability for solving LWE via a PPT algorithm. Furthermore, we show a tight relationship between the best success probability achievable by any PPT algorithm for decision-LWE and that for search-LWE. Our results not only refine our understanding of the computational complexity of LWE, but also provide a useful framework for analyzing its practical security implications.
How To Think About End-To-End Encryption and AI: Training, Processing, Disclosure, and Consent
End-to-end encryption (E2EE) has become the gold standard for securing communications, bringing strong confidentiality and privacy guarantees to billions of users worldwide. However, the current push towards widespread integration of artificial intelligence (AI) models, including in E2EE systems, raises some serious security concerns.
This work performs a critical examination of the (in)compatibility of AI models and E2EE applications. We explore this on two fronts: (1) the integration of AI “assistants” within E2EE applications, and (2) the use of E2EE data for training AI models.
We analyze the potential security implications of each, and identify conflicts with the security guarantees of E2EE. Then, we analyze legal implications of integrating AI models in E2EE applications, given how AI integration can undermine the confidentiality that E2EE promises. Finally, we offer a list of detailed recommendations based on our technical and legal analyses, including: technical design choices that must be prioritized to uphold E2EE security; how service providers must accurately represent E2EE security; and best practices for the default behavior of AI features and for requesting user consent. We hope this paper catalyzes an informed conversation on the tensions that arise between the brisk deployment of AI and the security offered by E2EE, and guides the responsible development of new AI features.
Secure Vault scheme in the Cloud Operating Model
The rising demand for data privacy in cloud-based environments has led to the development of advanced mechanisms for securely managing sensitive information. A prominent solution in this domain is the "Data Privacy Vault," a concept that is provided commercially by companies such as Hashicorp, Basis Theory, Skyflow Inc., VGS, Evervault, Protegrity, Anonomatic, and BoxyHQ. However, no existing work has rigorously defined the security notions required for a Data Privacy Vault or proven them within a formal framework, which is the focus of this paper.
Among their other uses, data privacy vaults are increasingly being used as storage for LLM training data, which necessitates a scheme that enables users to securely store sensitive information in the cloud while allowing controlled access for performing analytics on specific non-sensitive attributes without exposing sensitive data. Conventional solutions involve users generating encryption keys to safeguard their data, but these solutions are not deterministic and are therefore unsuited for the LLM setting. To address this, we propose a novel framework that is deterministic as well as semantically secure. Our scheme operates in the Cloud Operating Model, where the server is trusted but stateless and the storage is outsourced.
We provide a formal definition and a concrete instantiation of this data privacy vault scheme. We introduce a novel tokenization algorithm that serves as the core mechanism for protecting sensitive data within the vault. Our approach not only generates secure, unpredictable tokens for sensitive data but also securely stores sensitive data while enabling controlled data retrieval based on predefined access levels. Our work fills a significant gap in the existing literature by providing a formalized framework for the data privacy vault, complete with security proofs and a practical construction, not only enhancing the understanding of vault schemes but also offering a viable solution for secure data management in the era of cloud computing.
FO derandomization sometimes damages security
FO derandomization is a common step in protecting against chosen-ciphertext attacks. There are theorems qualitatively stating that FO derandomization preserves ROM OW-CPA security. However, quantitatively, these theorems are loose, allowing the possibility of the derandomized security level being considerably smaller than the original security level. Many cryptosystems rely on FO derandomization without adjusting parameters to account for this looseness.
This paper proves, for two examples of a randomized ROM PKE, that derandomizing the PKE degrades ROM OW-CPA security by a factor close to the number of hash queries. The first example can be explained by the size of the message space of the PKE; the second cannot. This paper also gives a concrete example of a randomized non-ROM PKE that appears to have the same properties regarding known attacks.
As a spinoff, this paper presents a $2^{88}$-guess attack exploiting derandomization to break one out of $2^{40}$ ciphertexts for a FrodoKEM-640 public key. This attack contradicts the official FrodoKEM claim that "the FrodoKEM parameter sets comfortably match their target security levels with a large margin". The official responses to this attack so far include (1) renaming FrodoKEM as "ephemeral FrodoKEM" and (2) proposing a newly patched "FrodoKEM".
This paper does not involve new cryptanalysis: the attacks are straightforward. What is new is finding examples where derandomization damages security.
Exploring Large Integer Multiplication for Cryptography Targeting In-Memory Computing
Emerging cryptographic systems such as Fully Homomorphic Encryption (FHE) and Zero-Knowledge Proofs (ZKP) are computation- and data-intensive. FHE and ZKP implementations in software and hardware largely rely on the von Neumann architecture, where a significant amount of energy is lost on data movements. A promising computing paradigm is computing in memory (CIM), which enables computations to occur directly within memory, thereby reducing data movements and energy consumption. However, efficiently performing large integer multiplications - critical in FHE and ZKP - is an open question, as existing CIM methods are limited to small operand sizes. In this work, we address this question by exploring advanced algorithmic approaches for large integer multiplication, identifying the Karatsuba algorithm as the most effective for CIM applications. Thereafter, we design the first Karatsuba multiplier for resistive CIM crossbars. Our multiplier uses a three-stage pipeline to enhance throughput and, additionally, balances memory endurance with efficient array sizes. Compared to existing CIM multiplication methods, when scaled up to the bit widths required in ZKP and FHE, our design achieves up to 916x higher throughput and 281x better area-time product.
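For reference, the Karatsuba recursion the authors build on trades one multiplication for a few additions; a minimal software sketch (not the resistive-crossbar mapping from the paper) is:

def karatsuba(x, y):
    # Base case: operands that fit a native machine multiply.
    if x < (1 << 64) or y < (1 << 64):
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = divmod(x, 1 << m)
    hi_y, lo_y = divmod(y, 1 << m)
    z0 = karatsuba(lo_x, lo_y)                           # low halves
    z2 = karatsuba(hi_x, hi_y)                           # high halves
    z1 = karatsuba(lo_x + hi_x, lo_y + hi_y) - z0 - z2   # cross term via one extra multiply
    return (z2 << (2 * m)) + (z1 << m) + z0

Only three half-size multiplications are needed per recursion level, which is why the algorithm maps well to architectures where each multiplication is the expensive in-memory operation.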
Two generalizations of almost perfect nonlinearity
Almost perfect nonlinear (in brief, APN) functions are vectorial functions $F:\mathbb F_2^n\rightarrow \mathbb F_2^n$ playing roles in several domains of information protection, at the intersection of computer science and mathematics. Their definition comes from cryptography and is also related to coding theory.
When they are used as substitution boxes (S-boxes, which are the only nonlinear components in block ciphers), APN functions contribute optimally to the resistance against differential attacks. This of course provides strong cryptographic motivation for their study, which has been very active since the 90's and has posed interesting and difficult mathematical questions, some of which are still unanswered. Since the introduction of differential attacks, more recent types of cryptanalyses have been designed, such as integral attacks. No notion for S-boxes has been identified that would play a similar role with respect to integral attacks. In this paper, we study two generalizations of APNness that are natural from a mathematical point of view, since they directly extend classical characterizations of APN functions. We call these two notions strong non-normality and sum-freedom. The former already existed for Boolean functions (it had been introduced by Dobbertin) and the latter is new.
We study how these two notions are related to cryptanalyses (the relation is weaker for strong non-normality).
The two notions behave differently from each other while they have similar definitions. They behave differently from differential uniformity, which is a well-known generalization of APNness. We study the different ways to define them. We prove their satisfiability, their monotonicity, their invariance under classical equivalence relations and we characterize them by the Walsh transform.
We finally begin a study of the multiplicative inverse function (used as a substitution box in the Advanced Encryption Standard and other block ciphers) from the viewpoint of these two notions. In particular, we find a simple expression of the sum of the values taken by this function over affine subspaces of $\mathbb F_{2^n}$ that are not vector subspaces. This formula shows that the sum never vanishes on such affine spaces. We also give a formula for the case of a vector space defined by one of its bases.
PQConnect: Automated Post-Quantum End-to-End Tunnels
This paper introduces PQConnect, a post-quantum end-to-end tunneling protocol that automatically protects all packets between clients that have installed PQConnect and servers that have installed and configured PQConnect.
Like VPNs, PQConnect does not require any changes to higher-level protocols and application software. PQConnect adds cryptographic protection to unencrypted applications, works in concert with existing pre-quantum applications to add post-quantum protection, and adds a second application-independent layer of defense to any applications that have begun to incorporate application-specific post-quantum protection.
Unlike VPNs, PQConnect automatically creates end-to-end tunnels to any number of servers using automatic peer discovery, with no need for the client administrator to configure per-server information. Each server carries out a client-independent configuration step to publish an announcement that the server's name accepts PQConnect connections. Any PQConnect client connecting to that name efficiently finds this announcement, automatically establishes a post-quantum point-to-point IP tunnel to the server, and routes traffic for that name through that tunnel.
The foundation of security in PQConnect is the server's long-term public key used to encrypt and authenticate all PQConnect packets. PQConnect makes a conservative choice of post-quantum KEM for this public key. PQConnect also uses a smaller post-quantum KEM for forward secrecy, and elliptic curves to ensure pre-quantum security even in case of security failures in KEM design or KEM software. Security of the handshake component of PQConnect has been symbolically proven using Tamarin.
Quantum One-Time Protection of any Randomized Algorithm
The meteoric rise in power and popularity of machine learning models dependent on valuable training data has reignited a basic tension between the power of running a program locally and the risk of exposing details of that program to the user. At the same time, fundamental properties of quantum states offer new solutions to data and program security that can require strikingly few quantum resources to exploit, and offer advantages outside of mere computational run time. In this work, we demonstrate such a solution with quantum one-time tokens.
A quantum one-time token is a quantum state that permits a certain program to be evaluated exactly once. One-time security guarantees, roughly, that the token cannot be used to evaluate the program more than once. We propose a scheme for building quantum one-time tokens for any randomized classical program, including generative AI models. We prove that the scheme satisfies an interesting definition of one-time security in a black-box model, as long as the outputs of the classical algorithm have high enough min-entropy.
Importantly, the classical program being protected does not need to be implemented coherently on a quantum computer. In fact, the size and complexity of the quantum one-time token is independent of the program being protected, and additional quantum resources serve only to increase the security of the protocol. Due to this flexibility in adjusting the security, we believe that our proposal is parsimonious enough to serve as a promising candidate for a near-term useful demonstration of quantum computing in either the NISQ or early fault tolerant regime.
Encrypted Multi-map that Hides Query, Access, and Volume Patterns
We present an encrypted multi-map, a fundamental data structure underlying searchable encryption/structured encryption. Our protocol supports updates and is designed for applications demanding very strong data security. Not only does it hide the information about queries and data, but also the query, access, and volume patterns. Our protocol utilizes a position-based ORAM and an encrypted dictionary. We provide two instantiations of the protocol, along with their operation-type-revealing variants, all using PathORAM but with different encrypted dictionary instantiations (AVL tree or BSkiplist). Their efficiency has been evaluated through both asymptotic and concrete complexity analysis, outperforming prior work while achieving the same level of strong security. We have implemented our instantiations and evaluated their performance on two real-world email databases (Enron and Lucene). We also discuss the strengths and limitations of our construction, including its resizability, and highlight that optimized solutions, even with heavy network utilization, may become practical as network speed improves.
Breaking the Shadow: Key Recovery Attack on Full-Round Shadow Block Ciphers with Minimal Data
Shadow is a family of lightweight block ciphers introduced by Guo, Li, and Liu in 2021, with Shadow-32 having a 32-bit block size and a 64-bit key, and Shadow-64 having a 64-bit block size and a 128-bit key. Both variants use a generalized Feistel network with four branches, incorporating the AND-Rotation-XOR operation similar to the Simon family for their bridging function. This paper reveals that the security claims of the Shadow family are not as strong as suggested. We present a key recovery attack that can retrieve the sequence of round keys used for encryption with only two known plaintext/ciphertext pairs, requiring time and memory complexity of $2^{43.23}$ encryptions and $2^{21.62}$ blocks of memory for Shadow-32, and complexity of $2^{81.32}$ encryptions and $2^{40.66}$ blocks of memory for Shadow-64. Notably, this attack is independent of the number of rounds and the bridging function employed. Furthermore, we critically evaluate one of the recent cryptanalyses of Shadow ciphers and identify significant flaws in the proposed key recovery attacks. In particular, we demonstrate that the distinguisher used in the impossible differential attacks by Liu et al. is ineffective for key recovery, despite their higher claimed complexities compared to ours.
Computing the Hermite Normal Form: A Survey
The Hermite Normal Form (HNF) of a matrix is an analogue of the echelon form over the integers. Any integer matrix can be transformed into its unique HNF.
A common obstacle in computing the HNF is the extensive blow-up of intermediate values. As a first approach to this problem, we discuss the Modulo Determinant Algorithm. It keeps the entries bounded by $d$, the determinant of the lattice, and has a time complexity of $\mathcal{O}(n^3\log^2 d)$, where $n$ is the dimension of the matrix. Although this algorithm is very useful if the determinant is small, in the general case the entries still become extremely large.
Secondly, we study the Linear Space Algorithm. It has a time complexity of $\mathcal{O}(n^5\mathrm{polylog}(M, n))$, where $M$ denotes the largest absolute value of the input matrix. This is as fast as the best previously known algorithms, but in contrast, it assures space complexity linear in the input size, i.e. $\mathcal{O}(n^2\log M)$.
As the last algorithm to compute the HNF, we analyze the Heuristic Algorithm, which is based on the first two algorithms. It achieves a much faster runtime in practice, yielding a heuristic runtime of $\mathcal{O}(n^4\mathrm{polylog}(M, n))$, while keeping the linear space complexity.
Up to some performance speed-ups, the Linear Space Algorithm and the Heuristic Algorithm are precisely the algorithms implemented by SageMath.
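To make the blow-up problem concrete, a textbook HNF routine using only unimodular row operations (a sketch, not one of the three algorithms surveyed) is shown below; on random inputs its intermediate entries grow far beyond the input size, which is exactly what the Modulo Determinant and Linear Space algorithms are designed to avoid.

def hnf(A):
    # Naive row-style Hermite Normal Form: row swaps, integer row additions,
    # and row negations only, so the result differs from A by a unimodular matrix.
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    r = 0                                   # current pivot row
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, m):           # Euclid on column c to clear entries below the pivot
            while A[i][c] != 0:
                q = A[r][c] // A[i][c]
                A[r], A[i] = A[i], [a - q * b for a, b in zip(A[r], A[i])]
        if A[r][c] < 0:
            A[r] = [-x for x in A[r]]
        for i in range(r):                  # reduce entries above the pivot into [0, pivot)
            q = A[i][c] // A[r][c]
            A[i] = [a - q * b for a, b in zip(A[i], A[r])]
        r += 1
    return A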
EQSIGN: Practical Digital Signatures from the Non-Abelian Hidden Subgroup Problem and Information Theoretic Equivocation
We present a novel digital signature scheme grounded in non-commutative cryptography and implemented over a bilinear matrix group platform. At the core of our design is a unique equivocation function that obfuscates intermediate elements, effectively concealing outputs and minimizing observable information leakage. To the best of our knowledge, this is the first digital signature scheme to combine information-theoretic security with computational hardness, relying on a challenging instance of the Non-Abelian Hidden Subgroup Problem (NAHSP) and strengthened by practical guarantees. This dual-layered security approach ensures robustness against both classical and quantum adversaries while maintaining communication overheads competitive with RSA. Our work represents a significant advancement toward efficient, quantum-resilient digital signatures for real-world applications. This paper is an early pre-release intended to invite collaboration and feedback. The work is presented for research purposes only and is not intended for use in production systems.
An Embedded Domain-Specific Language for Using One-Hot Vectors and Binary Matrices in Secure Computation Protocols
The use of secure computation protocols within production software systems and applications is complicated by the fact that such protocols sometimes rely upon -- or are most compatible with -- unusual or restricted models of computation. We employ the features of a contemporary and widely used programming language to create an embedded domain-specific language for working with user-defined functions as binary matrices that operate on one-hot vectors. At least when working with small finite domains, this allows programmers to overcome the restrictions of more simple secure computation protocols that support only linear operations (such as addition and scalar multiplication) on private inputs. Notably, programmers are able to define their own input and output domains, to use all available host language features and libraries to define functions that operate on these domains, and to translate inputs, outputs, and functions between their usual host language representations and their one-hot vector or binary matrix forms. Furthermore, these features compose in a straightforward way with simple secure computation libraries available for the host language.
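The underlying encoding is easy to illustrate outside the library itself: a function f over a small domain becomes a 0/1 matrix whose columns are the one-hot encodings of its outputs, so evaluating f is a matrix-vector product, i.e. a linear operation on a private one-hot input. A minimal sketch (names hypothetical, not the embedded DSL's API):

import numpy as np

domain = list(range(8))                       # a small user-defined domain
f = lambda x: (x * x + 1) % 8                 # any host-language function on it

def one_hot(x, dom):
    v = np.zeros(len(dom), dtype=int)
    v[dom.index(x)] = 1
    return v

# Column j of M is the one-hot encoding of f(domain[j]),
# so M @ one_hot(x) == one_hot(f(x)) for every x in the domain.
M = np.stack([one_hot(f(x), domain) for x in domain], axis=1)

assert np.array_equal(M @ one_hot(5, domain), one_hot(f(5), domain))

Because the matrix is public and the one-hot vector is the only private value, protocols supporting just addition and scalar multiplication on secret inputs suffice to evaluate f.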
Improved Rejection Sampling for Compact Lattice Signatures
One of the primary approaches used to construct lattice-based signature schemes is the “Fiat-Shamir with aborts” methodology. Such a scheme may abort and restart during signing, which corresponds to rejection sampling the produced signatures to ensure that they follow a distribution that is independent of the secret key. This rejection sampling is only feasible when the output distribution is sufficiently wide, limiting how compact this type of signature scheme can be.
In this work, we develop a new method to construct signatures influenced by the rejection condition. This allows our rejection sampling to target significantly narrower output distributions than previous approaches, enabling much more compact signatures. The combined size of a signature and a verification key for the resulting scheme is less than half of that for ML-DSA and comparable to that of compact hash-and-sign lattice signature schemes, such as Falcon.
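The abort step is ordinary rejection sampling; a generic sketch of the acceptance rule (not the paper's new construction) is:

import random

def rejection_sample(sample_g, pdf_f, pdf_g, M):
    # Draw from the proposal g and accept with probability f(z) / (M * g(z)),
    # where f(z) <= M * g(z) everywhere; accepted outputs are distributed as f.
    # In Fiat-Shamir-with-aborts signing, g is the secret-dependent distribution
    # of candidate signatures, f is the public target distribution, and every
    # rejection aborts and restarts the signing attempt.
    while True:
        z = sample_g()
        if random.random() < pdf_f(z) / (M * pdf_g(z)):
            return z

The expected number of attempts is M, which is why a narrower target distribution (and hence a smaller signature) normally forces either a larger M or a modified rejection condition of the kind developed in this work.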
Post-Quantum Privacy for Traceable Receipt-Free Encryption
Traceable Receipt-free Encryption (TREnc) has recently been introduced as a verifiable public-key encryption primitive endowed with a unique security model. In a nutshell, TREnc allows randomizing ciphertexts in transit in order to remove any subliminal information, up to a public trace that ensures the non-malleability of the underlying plaintext. A remarkable property of TREnc is the indistinguishability of the randomization of chosen ciphertexts against traceable chosen-ciphertext attacks (TCCA). The main application lies in voting systems: voters can encrypt their votes, trace whether a published ballot takes their choices into account, and are prevented from proving how they voted. While being a very promising primitive, the few existing TREnc mechanisms rely solely on discrete-logarithm-related assumptions, making them vulnerable to the well-known record-now/decrypt-later attack in anticipation of quantum computers.
We address this limitation by building the first TREnc whose privacy withstands the advent of quantum adversaries in the future. To design our construction, we first generalize the original TREnc primitive, which is too restrictive to be easily compatible with built-in lattice-based semantically secure encryption. Our more flexible model keeps all the ingredients that generically imply receipt-free voting. Our instantiation relies on Ring Learning With Errors (RLWE) together with pairing-based statistical zero-knowledge simulation-sound Groth-Sahai proofs, and further enjoys a public-coin common reference string, removing the need for a trusted setup.
Definition of End-to-end Encryption
This document provides a definition of end-to-end encryption (E2EE). End-to-end encryption is an application of cryptographic mechanisms to provide security and privacy to communication between endpoints. Such communication can include messages, email, video, audio, and other forms of media. E2EE provides security and privacy through confidentiality, integrity, authenticity and forward secrecy for communication amongst people.
Zero Knowledge Memory-Checking Techniques for Stacks and Queues
There are a variety of techniques for implementing read/write memory inside of zero-knowledge proofs and validating consistency of memory accesses. These techniques are generally implemented with the goal of realizing a RAM or ROM. In this paper, we present memory techniques for more specialized data structures: queues and stacks. We first demonstrate a technique for implementing queues in arithmetic circuits that requires 3 multiplication gates and 1 advice value per read and 2 multiplication gates per write. This is based on using Horner's Rule to evaluate 2 polynomials at random points and check that the values read from the queue are equal, as vectors, to the values written to the queue. Next, we present a stack scheme based on an optimized version of the RAM scheme of Yang and Heath that requires 5 multiplication gates and 4 advice values per read and 2 multiplication gates per write. This optimizes the RAM scheme by observing that reads and writes to a stack are already "paired", which avoids the need to insert dummy operations for each access as in the RAM case.
We also introduce a different notion of "multiplexing" or "operation privacy" that is better suited to the use case of stacks and queues. All of the techniques we provide are based on evaluating polynomials at random points and using randomly evaluated polynomials as universal hash functions to check set/vector equality.
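The core check is standard: two equal-length coefficient vectors are equal if and only if the corresponding polynomials agree at a uniformly random field point, except with probability at most deg/|F|. A plain (non-zero-knowledge) sketch of that vector-equality test, using Horner evaluation, is:

import secrets

P = 2**61 - 1                      # illustrative prime field

def horner_eval(coeffs, x):
    # Evaluate sum_i coeffs[i] * x^i with Horner's rule, mod P.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def vectors_probably_equal(writes, reads):
    # Schwartz-Zippel style check (assumes equal-length sequences):
    # unequal vectors collide with probability at most (len - 1) / P.
    r = secrets.randbelow(P)
    return horner_eval(writes, r) == horner_eval(reads, r)

Inside a circuit one would accumulate these evaluations incrementally as reads and writes occur, which is consistent with the constant per-access gate counts quoted above.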
Fully Hybrid TLSv1.3 in WolfSSL on Cortex-M4
To provide safe communication across an unprotected medium such as the internet, network protocols are being established. These protocols employ public key techniques to perform key exchange and authentication. Transport Layer Security (TLS) is a widely used network protocol that enables secure communication between a server and a client. TLS is employed in billions of transactions per second. Contemporary protocols depend on traditional methods whose security rests on the computational hardness of factoring or of (elliptic-curve) discrete logarithm problems. The steady advancement in the processing power of classical computers requires a corresponding increase in the security level of the underlying cryptographic algorithms. This study focuses on the analysis of Curve448 and the Edwards curve Ed448, renowned for their superior security features that offer a 224-bit level of security as part of the TLSv1.3 protocol. The rapid advancement of quantum computers, however, presents a substantial threat to secure network communication that depends on classical crypto schemes, irrespective of their security level, since quantum computers are expected to solve these underlying mathematical problems within a feasible timeframe. In order to successfully transition to post-quantum secure network protocols, it is imperative to concurrently deploy both classical and post-quantum algorithms. This is done to fulfill the requirements of both enterprises and governments, while also instilling more assurance in the reliability of the post-quantum systems. This paper presents a detailed hybrid implementation architecture of the TLSv1.3 network protocol. We showcase the first deployment of Curve448 and CRYSTALS-Kyber for key exchange, and Ed448 and CRYSTALS-Dilithium for entity authentication and the X.509 Public Key Infrastructure (PKI). We rely upon the widely used OpenSSL library and the wolfSSL library for embedded devices to provide our results for server and client applications.
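The general pattern behind such hybrid handshakes is to run both key-exchange mechanisms and feed both shared secrets into one key derivation, so the session keys remain secure as long as either component holds; a conceptual sketch (not the wolfSSL code paths) is:

import hashlib

def hybrid_shared_secret(x448_secret: bytes, kyber_secret: bytes, transcript: bytes) -> bytes:
    # Conceptual only: concatenate the classical and post-quantum shared
    # secrets and bind them to the handshake transcript inside a single KDF.
    # An attacker must break both X448 and Kyber to recover the output.
    return hashlib.sha512(x448_secret + kyber_secret + transcript).digest()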
ClusterGuard: Secure Clustered Aggregation for Federated Learning with Robustness
Federated Learning (FL) enables collaborative model training while preserving data privacy by avoiding the sharing of raw data. However, in large-scale FL systems, efficient secure aggregation and dropout handling remain critical challenges. Existing state-of-the-art methods, such as those proposed by Liu et al. (UAI'22) and Li et al. (ASIACRYPT'23), suffer from prohibitive communication overhead, implementation complexity, and vulnerability to poisoning attacks. Alternative approaches that utilize partially connected graph structures (resembling client grouping) to reduce communication costs, such as Bell et al. (CCS'20) and ACORN (USENIX Sec'23), face the risk of adversarial manipulation during the graph construction process.
To address these issues, we propose ClusterGuard, a secure clustered aggregation scheme for federated learning. ClusterGuard leverages Verifiable Random Functions (VRF) to ensure fair and transparent cluster selection and employs a lightweight key-homomorphic masking mechanism, combined with efficient dropout handling, to achieve secure clustered aggregation. Furthermore, ClusterGuard incorporates a dual filtering mechanism based on cosine similarity and norm to effectively detect and mitigate poisoning attacks.
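The paper's exact thresholds and reference vector for the dual filter are not given here; as a heavily simplified sketch, assuming the reference direction is the mean update and the norm bound is a multiple of the median norm (both assumptions, not the authors' parameters), the idea is:

import numpy as np

def dual_filter(updates, cos_min=0.0, norm_mult=3.0):
    # Keep an update only if (1) its cosine similarity to the mean update is
    # high enough and (2) its norm is not an outlier. Thresholds are illustrative.
    updates = np.asarray(updates, dtype=float)
    ref = np.mean(updates, axis=0)
    ref_norm = np.linalg.norm(ref) + 1e-12
    norms = np.linalg.norm(updates, axis=1)
    median_norm = np.median(norms)
    kept = []
    for u, n in zip(updates, norms):
        cos = float(u @ ref) / (n * ref_norm + 1e-12)
        if cos >= cos_min and n <= norm_mult * median_norm:
            kept.append(u)
    return kept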
Extensive experiments on standard datasets demonstrate that ClusterGuard achieves over 2x efficiency improvement compared to advanced secure aggregation methods. Even with 20% of clients being malicious, the trained model maintains accuracy comparable to the original model, outperforming state-of-the-art robustness solutions. ClusterGuard provides a more efficient, secure, and robust solution for practical federated learning.
zkFFT: Extending Halo2 with Vector Commitments & More
This paper introduces zkFFT, a novel zero-knowledge argument designed to efficiently generate proofs for FFT (Fast Fourier Transform) relations. Our approach enables the verification that one committed vector is the FFT of another, addressing an efficiency need in general-purpose non-interactive zero-knowledge proof systems where the proof relation takes vector commitments as inputs.
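Concretely, the relation being certified is a number-theoretic transform over the proof system's field; a toy sketch over a small prime field (parameters illustrative, not those of Halo2) is:

P, N, OMEGA = 17, 4, 4          # toy field: 4 has multiplicative order 4 mod 17

def ntt(vec):
    # Naive transform: A[i] = sum_j vec[j] * OMEGA^(i*j) mod P.
    return [sum(v * pow(OMEGA, i * j, P) for j, v in enumerate(vec)) % P
            for i in range(N)]

a = [3, 1, 4, 1]
A = ntt(a)   # "A is the FFT of a" is the statement zkFFT proves about two committed vectors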
We present a concrete enhancement to the Halo2 proving system, demonstrating how zkFFT optimizes proofs in scenarios where the proof relation includes one or more vector commitments. Specifically, zkFFT incorporates streamlined logic within Halo2 and similar systems, augmenting proof and verification complexity by only $O(\log N)$, where $N$ is the vector size. This represents a substantial improvement over the conventional approach, which often necessitates specific circuit extensions to validate the integrity of vector commitments and their corresponding private values in the arithmetic framework of the proof relation. The proposed zkFFT method supports multiple vector commitments with only a logarithmic increase in extension costs, making it highly scalable. This capability is pivotal for practical applications involving multiple pre-committed values within proof statements.
Apart from Halo2, our technique can be adapted to any other zero-knowledge proof system that relies on arithmetization in which each column is treated as the evaluations of a polynomial over a specified domain, the polynomial is computed via FFT, and the result is committed to using a polynomial commitment scheme based on inner-product arguments. Along with efficient lookup and permutation arguments, zkFFT will streamline and significantly optimize the generation of zero-knowledge proofs for arbitrary relations.
Beyond the applications in augmenting zero-knowledge proof systems, we believe that the formalized zkFFT argument can be of independent interest.
Tailorable codes for lattice-based KEMs with applications to compact ML-KEM instantiations
Compared to elliptic curve cryptography, a main drawback of lattice-based schemes is the larger size of their public keys and ciphertexts. A common procedure for compressing these objects consists essentially of dropping some of their least significant bits. Albeit effective for compression, there is a limit to the number of bits that can be dropped before the decryption failure rate (DFR) becomes noticeable, which is a security concern. To address this issue, this paper presents a family of error-correction codes that, by allowing an increased number of dropped bits while preserving a negligible DFR, can be used for both ciphertext and public-key compression in modern lattice-based schemes. To showcase the impact and practicality of our proposal, we use the highly optimized ML-KEM, a post-quantum lattice-based scheme recently standardized by NIST. We provide detailed procedures for tailoring our codes to ML-KEM's specific noise distributions, and show how to analyze the DFR without independence assumptions on the noise coefficients. Among our results, we achieve between 4% and 8% ciphertext compression for ML-KEM. Alternatively, we obtain 8% shorter public keys compared to the current standard. We also present isochronous implementations of the decoding procedure, achieving negligible performance impact in the full ML-KEM decapsulation even when considering optimized implementations for AVX2, Cortex-M4, and Cortex-A53.
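For context, the bit dropping in question is the standard coefficient compression used in ML-KEM: keep $d$ bits of each coefficient modulo $q = 3329$ and absorb the resulting rounding noise in the decryption error budget. A small sketch showing how that noise grows as more bits are dropped (printout is illustrative only):

Q = 3329                                   # ML-KEM modulus

def compress(x, d):
    # Round((2^d / Q) * x) mod 2^d: keep only d bits of the coefficient.
    return ((x << d) + Q // 2) // Q % (1 << d)

def decompress(y, d):
    # Round((Q / 2^d) * y): map back to Z_Q; the gap to x is rounding noise.
    return (y * Q + (1 << (d - 1))) >> d

x = 1234
for d in (10, 4, 1):
    err = (decompress(compress(x, d), d) - x) % Q
    print(d, min(err, Q - err))            # fewer kept bits -> larger rounding error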
A Heuristic Proof of P $\neq$ NP
The question of whether the complexity class P equals NP is a major unsolved problem in theoretical computer science. In this paper, we introduce a new language, the Add/XNOR problem, which has the simplest structure and perfect randomness, obtained by extending the subset sum problem. We prove that P $\neq$ NP by showing that square-root complexity is necessary to solve the Add/XNOR problem. That is, problems that are verifiable in polynomial time are not necessarily solvable in polynomial time.
The Jacobi Factoring Circuit: Quantum Factoring with Near-Linear Gates and Sublinear Space and Depth
We present a compact quantum circuit for factoring a large class of integers, including some whose classical hardness is expected to be equivalent to RSA (but not including RSA integers themselves). To our knowledge, it is the first polynomial-time circuit to achieve sublinear qubit count for a classically-hard factoring problem; the circuit also achieves sublinear depth and nearly linear gate count. We build on the quantum algorithm for squarefree decomposition discovered by Li, Peng, Du and Suter (Nature Scientific Reports 2012), which relies on computing the Jacobi symbol in quantum superposition. Our circuit completely factors any number $N$, whose prime decomposition has distinct exponents, and finds at least one non-trivial factor if not all exponents are the same. In particular, to factor an $n$-bit integer $N=P^2 Q$ (with $P$ and $Q$ prime, and $Q<2^m$ for some $m$), our circuit uses $\widetilde{O}(m)$ qubits and has depth at most $\widetilde{O}(m + n/m)$, with $\widetilde{O}(n)$ quantum gates. When $m=\Theta(n^a)$ with $2/3 < a < 1$, the space and depth are sublinear in $n$, yet no known classical algorithms exploit the relatively small size of $Q$ to run faster than general-purpose factoring algorithms. We thus believe that factoring such numbers has potential to be the most concretely efficient classically-verifiable proof of quantumness currently known.
The technical core of our contribution is a new space-efficient quantum algorithm to compute the Jacobi symbol of $A$ mod $B$, in the regime where $B$ is classical and much larger than $A$. Crucially, our circuit reads the bits of the classical value $B$ in a streaming fashion, never storing more than $\widetilde{O}(\log A)$ qubits of quantum information at one time. In the context of the larger Jacobi algorithm for factoring $N = P^2Q$, this reduces the overall qubit count to be roughly proportional to the length of $Q$, rather than the length of $N$. Our circuit for computing the Jacobi symbol is also highly gate-efficient and parallelizable, achieving gate count $\widetilde{O}(\log B)$ and depth at most $\widetilde{O}(\log A + \log B/\log A)$. Finally, we note that our circuit for computing the Jacobi symbol generalizes to related problems, such as computing the greatest common divisor, and thus could be of independent interest.
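For reference, the Jacobi symbol itself is a gcd-like computation based on quadratic reciprocity; a standard classical routine is sketched below (the paper's circuit evaluates this function in superposition while streaming the classical modulus, which the sketch does not capture).

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd n > 0, via the law of quadratic reciprocity.
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:                  # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                        # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0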
Algebraic Zero Knowledge Contingent Payment
In this work, we introduce Modular Algebraic Proof Contingent Payment (MAPCP), a novel zero-knowledge contingent payment (ZKCP) construction. Unlike previous approaches, MAPCP is the first that simultaneously avoids using zk-SNARKs as the tool for zero-knowledge proofs and HTLC contracts to atomically exchange a secret for a payment. As a result, MAPCP sidesteps the common reference string (crs) creation problem and is compatible with virtually any cryptocurrency, even those with limited or no smart contract support. Moreover, MAPCP contributes to fungibility, as its payment transactions blend seamlessly with standard cryptocurrency payments.
We analyze the security of MAPCP and demonstrate its atomicity, meaning that (i) the buyer gets the digital product after the payment is published in the blockchain (buyer security); and (ii) the seller receives the payment if the buyer gets access to the digital product (seller security). Moreover, we present a construction of MAPCP in a use case where a customer pays a notary in exchange for a document signature.
Designated-Verifier zk-SNARKs Made Easy
Zero-knowledge succinct non-interactive argument of knowledge (zk-SNARK) is a kind of proof system that enables a prover to efficiently convince a verifier that an NP statement is true. In the last decade, various studies have made considerable progress in constructing more efficient and secure zk-SNARKs. Our research focuses on designated-verifier zk-SNARKs, where only the verifier knowing some secret verification state can be convinced by the proof. A natural idea for obtaining a designated-verifier zk-SNARK is to encrypt a publicly verifiable zk-SNARK's proof via public-key encryption. This is also the core idea behind the well-known transformation proposed by Bitansky et al. in TCC 2013 to obtain designated-verifier zk-SNARKs. However, the transformation only applies to zk-SNARKs that require a complicated trusted setup phase and rely on storage-expensive common reference strings. Moreover, if the secret verification state is leaked, the proof immediately loses the designated-verifier property.
To address these issues, we first define "strong designated-verifier" security, considering the case where the adversary has access to the secret verification state, and then propose a construction of strong designated-verifier zk-SNARKs. The construction, inspired by designated-verifier signatures based on two-party ring signatures, does not use encryption and can be applied to any publicly verifiable zk-SNARK to yield a designated-verifier variant. We present our construction for the circuit satisfiability problem and implement it in Circom, then test it on different zk-SNARKs, showing the validity of our construction.
Generalized Cryptanalysis of Cubic Pell RSA
The RSA (Rivest-Shamir-Adleman) cryptosystem is a fundamental algorithm of public key cryptography and is widely used across various information domains. For an RSA modulus represented as $N = pq$, with its factorization remaining unknown, security vulnerabilities arise when attackers exploit the key equation $ed-k(p-1)(q-1)=1$. To enhance the security, Murru and Saettone introduced cubic Pell RSA --- a variant of RSA based on the cubic Pell equation, where the key equation becomes $ed-k(p^2+p+1)(q^2+q+1)=1$. In this paper, we further investigate the security implications surrounding the generalized key equation $eu-(p^2+p+1)(q^2+q+1)v=w$. We present a novel attack strategy aimed at recovering the prime factors $p$ and $q$ under specific conditions satisfied by $u$, $v$, and $w$. Our generalized attack employs lattice-based Coppersmith's techniques and extends several previous attack scenarios, thus deepening the understanding of mathematical cryptanalysis.
Improved Lattice-Based Attack on Mersenne Low Hamming Ratio Search Problem
This paper investigates the Mersenne number-based $\mathsf{AJPS}$ cryptosystem, with a particular focus on its associated hard problem. Specifically, we aim to enhance the existing lattice-based attack on the Mersenne low Hamming ratio search problem. Unlike the previous approach of directly employing lattice reduction algorithms, we apply the lattice-based method to solve polynomial equations derived from the above problem. We extend the search range for vulnerabilities in weak keys and increase the success probability of the key recovery attack. To validate the efficacy and accuracy of our proposed improvements, we conduct numerical computer experiments. These experiments serve as a concrete validation of the practicality and effectiveness of our improved attack.
Optimally Secure TBC Based Accordion Mode
The design of tweakable wide block ciphers has advanced significantly over the past two decades. This evolution began with the approach of designing a wide block cipher by Naor and Reingold. Since then, numerous tweakable wide block ciphers have been proposed, many of which build on existing block ciphers and are secure up to the birthday bound for the total number of blocks queried. Although there has been a slowdown in the development of tweakable wide block cipher modes in the last couple of years, the latest NIST proposal for accordion modes has reignited interest and momentum in the design and analysis of these ciphers. While new designs have emerged, their security often falls short of optimal (i.e., $n$-bit) security, where $n$ is the output size of the primitive. In this direction, designing an efficient tweakable wide block cipher with $n$-bit security seems to be an interesting research problem. An optimally secure tweakable wide block cipher mode can easily be turned into a misuse-resistant RUP secure authenticated encryption scheme with optimal security. This paper proposes $\textsf{HCTR+}$, which turns an $n$-bit tweakable block cipher (TBC) with $n$-bit tweak into a variable input length tweakable block cipher. Unlike tweakable \textsf{HCTR}, $\textsf{HCTR+}$ ensures $n$-bit security regardless of tweak repetitions. We also propose two TBC-based almost-xor-universal hash functions, named $\textsf{PHASH+}$ and $\textsf{ZHASH+}$, and use them as the underlying hash functions in the $\textsf{HCTR+}$ construction to create two TBC-based $n$-bit secure tweakable wide block cipher modes, $\textsf{PHCTR+}$ and $\textsf{ZHCTR+}$. Experimental results show that both $\textsf{PHCTR+}$ and $\textsf{ZHCTR+}$ exhibit excellent software performance when their underlying TBC is instantiated with \textsf{Deoxys-BC-128-128}.
Improved Universal Thresholdizer from Iterative Shamir Secret Sharing
The universal thresholdizer, introduced at CRYPTO'18, is a cryptographic scheme that transforms any cryptosystem into a threshold variant, thereby enhancing its applicability in threshold cryptography. It enables black-box construction of one-round threshold signature schemes based on the Learning with Errors problem and, similarly, facilitates one-round chosen-ciphertext-attack secure threshold public-key encryption when integrated with non-threshold schemes.
Current constructions of the universal thresholdizer are fundamentally built upon linear secret sharing schemes. One approach employs Shamir's secret sharing, which lacks compactness and results in ciphertext sizes of $O(N \log N)$; another uses a $\{0,1\}$-linear secret sharing scheme ($\{0,1\}$-LSSS), which is compact but induces high communication costs due to requiring $O(N^{5.3})$ secret shares.
In this work, we introduce a communication-efficient universal thresholdizer by revising the linear secret sharing scheme. We propose a specialized linear secret sharing scheme, called TreeSSS, which reduces the number of required secret shares to $O(N^{3+o(1)})$ while maintaining the compactness of the universal thresholdizer.
TreeSSS can also serve as a subroutine for constructing lattice-based $t$-out-of-$N$ threshold cryptographic primitives such as threshold fully homomorphic encryption and threshold signatures. In this context, TreeSSS offers the advantage of lower communication overhead due to the reduced number of secret shares involved.
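For orientation, plain Shamir sharing (the non-compact baseline mentioned above, which the title suggests TreeSSS applies iteratively) splits a secret into evaluations of a random degree-$(t-1)$ polynomial; a minimal sketch over a toy prime field is:

import random

P = 2**61 - 1                                   # toy prime field

def share(secret, t, n):
    # Shamir: shares are evaluations of a random degree-(t-1) polynomial
    # with constant term `secret`; any t shares determine the polynomial.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over the prime field.
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

assert reconstruct(share(42, t=3, n=5)[:3]) == 42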
Solving AES-SAT Using Side-Channel Hints: A Practical Assessment
Side-channel attacks exploit information leaked through non-primary channels, such as power consumption, electromagnetic emissions, or timing, to extract sensitive data from cryptographic devices. Over the past three decades, side-channel analysis has evolved into a mature research field with well-established methodologies for analyzing standard cryptographic algorithms like the Advanced Encryption Standard (AES). However, the integration of side-channel analysis with formal methods remains relatively unexplored. In this paper, we present a hybrid attack on AES that combines side-channel analysis with SAT solving. We model AES as a SAT problem and leverage hints of the input and output values of the S-boxes, extracted via profiled deep learning-based power analysis, to solve it. Experimental results on an ATXmega128D4 MCU implementation of AES-128 demonstrate that the SAT-assisted approach consistently recovers the full encryption key from a single trace, captured from devices different from those used for profiling, within one hour. In contrast, without SAT assistance, the success rate remains below 80% after 26 hours of key enumeration.
Strongly Secure Universal Thresholdizer
A universal thresholdizer (UT), constructed from a threshold fully homomorphic encryption by Boneh et al. (Crypto 2018), is a general framework for universally thresholdizing many cryptographic schemes. However, their framework is insufficient to construct strongly secure threshold schemes, such as threshold signatures and threshold public-key encryption.
In this paper, we strengthen the security definition for a universal thresholdizer and propose a scheme which satisfies our stronger security notion. Our UT scheme improves on Boneh et al.'s construction at the level of the threshold fully homomorphic encryption, using a key-homomorphic pseudorandom function. We apply our strongly secure UT scheme to construct strongly secure threshold signatures and threshold public-key encryption.
Report on evaluation of KpqC Round-2 candidates
This report covers our analysis (security, proofs, efficiency) of the Round-2 candidates to the Korean post-quantum competition KpqC. Signature systems covered in the report are AIMer, HAETAE, MQ-Sign, and NCC-Sign; KEMs covered are NTRU+, Paloma, REDOG, and SMAUG-T.
Under What Conditions Is Encrypted Key Exchange Actually Secure?
A Password-Authenticated Key Exchange (PAKE) protocol allows two parties to agree upon a cryptographic key, in the setting where the only secret shared in advance is a low-entropy password. The standard security notion for PAKE is in the Universal Composability (UC) framework. In recent years there have been a large number of works analyzing the UC-security of Encrypted Key Exchange (EKE), the very first PAKE protocol, and its One-encryption variant (OEKE), both of which compile an unauthenticated Key Agreement (KA) protocol into a PAKE.
In this work, we present a comprehensive and thorough study of the UC-security of both EKE and OEKE in the most general setting and using the most efficient building blocks:
1. We show that among the five existing results on the UC-security of (O)EKE using a general KA protocol, all are incorrect;
2. We show that for (O)EKE to be UC-secure, the underlying KA protocol needs to satisfy several additional security properties: though some of these are closely related to existing security properties, some are new, and all are missing from existing works on (O)EKE;
3. We give UC-security proofs for EKE and OEKE using Programmable-Once Public Function (POPF), which is the most efficient instantiation to date and is around 4 times faster than the standard instantiation using Ideal Cipher (IC).
Our results in particular allow for PAKE constructions from post-quantum KA protocols such as Kyber. We also present a security analysis of POPF using a new, weakened notion of almost UC realizing a functionality, which is still sufficient for proving composed protocols to be fully UC-secure.
Blind Signatures from Proofs of Inequality
Blind signatures are an important primitive for privacy-preserving technologies. To date, highly efficient pairing-free constructions rely on the random oracle model and, additionally, on a strong assumption, such as an interactive assumption or the algebraic group model.
In contrast, for signatures we know many efficient constructions that rely on the random oracle model and standard assumptions. In this work, we develop techniques to close this gap. Compared to the most efficient pairing-free AGM-based blind signature by Crites et al. (Crypto 2023), our construction has a relative overhead of only a factor of $3\times$ and $2\times$ in terms of communication and signature size, respectively, and it is provable in the random oracle model under the DDH assumption. With one additional move and one additional $\mathbb{Z}_p$ element, we also achieve one-more strong unforgeability.
Our construction is inspired by the recent works by Chairattana-Apirom, Tessaro, and Zhu (Crypto 2024) and Klooß, Reichle, and Wagner (Asiacrypt 2024), and we develop a tailored technique to circumvent the sources of inefficiency in their constructions. Concretely, we achieve signature and communication size of $192$ B and $608$ B, respectively.
Tightly-Secure Blind Signatures in Pairing-Free Groups
We construct the first blind signature scheme that achieves all of the following properties simultaneously:
- it is tightly secure under a standard (i.e., non-interactive, non-\(q\)-type) computational assumption,
- it does not require pairings,
- it does not rely on generic, non-black-box techniques (like generic NIZK proofs).
The third property enables a reasonably efficient solution, and in fact signatures in our scheme comprise 10 group elements and 29 \(\mathbb{Z}_p\)-elements.
Our scheme starts from a pairing-based non-blind signature scheme (Abe et al., JoC 2023), and uses recent techniques of Chairattana-Apirom, Tessaro, and Zhu (CRYPTO 2024) to replace the pairings used in this scheme with non-interactive zero-knowledge proofs in the random oracle model. This conversion is not generic or straightforward (also because the mentioned previous works have converted only significantly simpler signature schemes), and we are required to improve upon and innovate existing techniques in several places.
As an interesting side note, and unlike previous works, our techniques only require a non-programmable random oracle, and our signature scheme achieves predicate blindness (which means that the user can prove statements about the signed message during the signing process).
How to Compress Garbled Circuit Input Labels, Efficiently
Garbled Circuits are essential building blocks in cryptography, and extensive research has explored their construction from both applied and theoretical perspectives. However, a challenge persists: While theoretically designed garbled circuits offer optimal succinctness--remaining constant in size regardless of the underlying circuit’s complexity--and are reusable for multiple evaluations, their concrete computational costs are prohibitively high. On the other hand, practically efficient garbled circuits, inspired by Yao’s garbled circuits, encounter limitations due to substantial communication bottlenecks and a lack of reusability.
To strike a balance, we propose a novel concept: online-offline garbling. This approach leverages instance-independent and (partially) reusable preprocessing during an offline phase, to enable the creation of constant-size garbled circuits in an online phase, while maintaining practical efficiency. Specifically, during the offline stage, the garbler generates and transmits a reference string, independent of the computation to be performed later. Subsequently, in the online stage, the garbler efficiently transforms a circuit into a constant-size garbled circuit. The evaluation process relies on both the reference string and the garbled circuit.
We demonstrate that by leveraging existing tools such as those introduced by Applebaum et al. (Crypto’13) and Chongwon et al. (Crypto’17), online-offline garbling can be achieved under a variety of assumptions, including the hardness of Learning With Errors (LWE), Computational Diffie-Hellman (CDH), and factoring. In contrast, without the help of an offline phase, constant-size garbling is only feasible under the LWE and circular security assumptions, or the existence of indistinguishability obfuscation. However, these schemes are still very inefficient, several orders of magnitude more costly than Yao-style garbled circuits.
To address this, we propose a new online-offline garbling scheme based on Ring LWE. Our scheme offers both asymptotic and concrete efficiency. It serves as a practical alternative to Yao-style garbled circuits, especially in scenarios where online communication is constrained. Furthermore, we estimate the concrete latency using our approach in realistic settings and demonstrate that it is 2-20X faster than using Yao-style garbled circuits. This improvement is estimated without taking into account parallelization of computation, which can lead to further performance improvement using our scheme.
Succinct Partial Garbling from Groups and Applications
A garbling scheme transforms a program (e.g., circuit) $C$ into a garbled program $\hat{C}$, along with a pair of short keys $(k_{i,0},k_{i,1})$ for each input bit $x_i$, such that $(C,\hat{C}, \{k_{i,x_i}\})$ can be used to recover the output $z = C(x)$ while revealing nothing else about the input $x$. This can be naturally generalized to partial garbling, where part of the input is public, and a computation $z = C(x, y)$ is decomposed into a public part $C_{\text{pub}}(x)$, depending only on the public input $x$, and a private part $z = C_{\text{priv}}(C_{\text{pub}}(x), y)$ that also involves a private input $y$.
A key challenge in garbling is to achieve succinctness, where the size of the garbled program may grow only with the security parameter and (possibly) the output length, but not with the size of $C$. Prior work achieved this strong notion of succinctness using heavy tools such as indistinguishability obfuscation (iO) or a combination of fully homomorphic encryption and attribute-based encryption.
In this work, we introduce new succinct garbling schemes based on variants of standard group-based assumptions. Our approach, being different from prior methods, offers a promising pathway towards practical succinct garbling. Specifically, we construct:
- A succinct partial garbling scheme for general circuits, where the garbled circuit size scales linearly with the private computation $|C_{\text{priv}}|$ and is independent of the public computation $|C_{\text{pub}}|$. This implies fully succinct conditional disclosure of secrets (CDS) protocols for circuits.
- Succinct (fully hiding) garbling schemes for simple types of programs, including truth tables, bounded-length branching programs (capturing decision trees and DFAs as special cases) and degree-2 polynomials, where the garbled program size is independent of the program size. This implies succinct private simultaneous messages (PSM) protocols for the same programs.
Our succinct partial garbling scheme can be based on a circular-security variant of the power-DDH assumption, which holds in the generic group model, or alternatively on the key-dependent message security of the Damgård-Jurik encryption. For bounded-depth circuits or the aforementioned simple programs, we avoid circular-security assumptions entirely.
At the heart of our technical approach is a new computational flavor of algebraic homomorphic MAC (aHMAC), for which we obtain group-based constructions building on techniques from the literature on homomorphic secret sharing. Beyond succinct garbling, we demonstrate the utility of aHMAC by constructing constrained pseudorandom functions (CPRFs) for general constraint circuits from group-based assumptions. Previous CPRF constructions were limited to $\mathsf{NC}^1$ circuits or alternatively relied on lattices or iO.
Perfectly Secure Fluid MPC with Abort and Linear Communication Complexity
The \emph{Fluid} multiparty computation (MPC) model, introduced in (Choudhuri \emph{et al.} CRYPTO 2021), addresses dynamic scenarios where participants can join or leave computations between rounds. Communication complexity initially stood at $\Omega(n^2)$ elements per gate, where $n$ is the number of parties in a committee online at a time. This held for both statistical security (honest majority) and computational security (dishonest majority) in (Choudhuri \emph{et al.}~CRYPTO'21) and (Rachuri and Scholl, CRYPTO'22), respectively. The work of (Bienstock \emph{et al.}~CRYPTO'23) improved communication to $O(n)$ elements per gate. However, the perfectly secure setting with one-third corruptions per committee was only recently addressed in the work of (David \emph{et al.}~CRYPTO'23). Notably, their contribution marked a significant advancement in the Fluid MPC literature by introducing guaranteed output delivery. However, this achievement comes at the cost of prohibitively expensive communication, which scales to $\Omega(n^9)$ elements per gate.
In this work, we study the realm of perfectly secure Fluid MPC under one-third active corruptions. Our primary focus lies in proposing efficient protocols that embrace the concept of security with abort. Towards this, we design a protocol for perfectly secure Fluid MPC that requires only \emph{linear} communication of $O(n)$ elements per gate, matching the communication of the non-Fluid setting. Our results show that, as in the case of computational and statistical security, perfect security with abort for Fluid MPC comes "for free" (asymptotically linear in $n$) with respect to traditional non-Fluid MPC, marking a substantial leap forward in large-scale dynamic computations such as Fluid MPC.
PARScoin: A Privacy-preserving, Auditable, and Regulation-friendly Stablecoin
Stablecoins are digital assets designed to maintain a consistent value relative to a reference point, serving as a vital component of the blockchain and Decentralized Finance (DeFi) ecosystems. Typical implementations of stablecoins via smart contracts come with important downsides such as a questionable level of privacy, potentially high fees, and lack of scalability. We put forth a new design, PARScoin, for a Privacy-preserving, Auditable, and Regulation-friendly Stablecoin that mitigates these issues while enabling high performance, both in terms of speed of settlement and in scaling to large numbers of users, as our performance analysis demonstrates. Our construction is blockchain-agnostic and is analyzed in the Universal Composition (UC) framework, offering a secure and modular approach for its integration into the broader blockchain ecosystem.
Quantum Sieving for Code-Based Cryptanalysis and Its Limitations for ISD
Sieving using near-neighbor search techniques is a well-known method in lattice-based cryptanalysis, yielding the current best runtime for the shortest vector problem in both the classical [BDGL16] and quantum [BCSS23] setting. Recently, sieving has also become an important tool in code-based cryptanalysis. Specifically, using a sieving subroutine, [GJN23, DEEK24] presented a variant of the information-set decoding (ISD) framework, which is commonly used for attacking cryptographically relevant instances of the decoding problem. The resulting sieving-based ISD framework yields complexities close to the best-performing classical algorithms for the decoding problem such as [BJMM12, BM18]. It is therefore natural to ask how well quantum versions perform.
In this work, we introduce the first quantum algorithms for code sieving by designing quantum variants of the aforementioned sieving subroutine. In particular, using quantum-walk techniques, we provide a speed-up over the best known classical algorithm from [DEEK24] and over a variant using Grover's algorithm [Gro96]. Our quantum-walk algorithm exploits the structure of the underlying search problem by adding a layer of locality-sensitive filtering, inspired by the quantum-walk algorithm for lattice sieving from [CL21]. We complement our asymptotic analysis of the quantum algorithms with numerical results, and observe that our quantum speed-ups for code sieving behave similarly to those observed in lattice sieving.
In addition, we show that a natural quantum analog of the sieving-based ISD framework does not provide any speed-up over the first presented quantum ISD algorithm [Ber10]. Our analysis highlights that the framework should be adapted in order to outperform the state-of-the-art of quantum ISD algorithms [KT17, Kir18].
Sneaking up the Ranks: Partial Key Exposure Attacks on Rank-Based Schemes
A partial key exposure attack is a key recovery attack where an adversary obtains a priori partial knowledge of the secret key, e.g., through side-channel leakage. While for a long time post-quantum cryptosystems, unlike RSA, have been believed to be resistant to such attacks, recent results by Esser, May, Verbel, and Wen (CRYPTO ’22), and by Kirshanova and May (SCN ’22), have refuted this belief.
In this work, we focus on partial key exposure attacks in the context of rank-metric-based schemes, particularly targeting the RYDE, MIRA, and MiRitH digital signature schemes, which are active candidates in the NIST post-quantum cryptography standardization process. We demonstrate that, similar to the RSA case, the secret key in RYDE can be recovered from a constant fraction of its bits. Specifically, for NIST category I parameters, our attacks remain efficient even when less than 25% of the key material is leaked. Interestingly, our attacks lead to a natural improvement of the best generic attack on RYDE without partial knowledge, reducing security levels by up to 9 bits. For MIRA and MiRitH, our attacks remain efficient as long as roughly 57%-60% of the secret key material is leaked.
Additionally, we initiate the study of partial exposure of the witness in constructions following the popular MPCitH (MPC-in-the-Head) paradigm. We show a generic reduction from recovering RYDE and MIRA’s witness to the MinRank problem, which again leads to efficient key recovery from constant fractions of the secret witness in both cases.
A Lattice Attack Against a Family of RSA-like Cryptosystems
Let $N=pq$ be the product of two balanced prime numbers $p$ and $q$. In 2002, Elkamchouchi, Elshenawy, and Shaban introduced an interesting RSA-like cryptosystem that, unlike the classical RSA key equation $ed - k (p-1)(q-1) = 1$, uses the key equation $ed - k (p^2-1)(q^2-1) = 1$. The scheme was further extended by Cotan and Te\c seleanu to a variant that uses the key equation $ed - k (p^n-1)(q^n-1) = 1$, where $n \geq 1$. Furthermore, they provide a continued fractions attack that recovers the secret key $d$ if $d < N^{0.25n}$. In this paper, we improve this bound using a lattice-based method. Moreover, our method also leads to the factorisation of the modulus $N$, while the continued fractions one does not (except for $n=1,2,3,4$).
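To make the key equation concrete, here is a minimal Python sketch (our illustration, using toy primes rather than cryptographic sizes) that produces a key pair satisfying $ed - k (p^n-1)(q^n-1) = 1$ for the case $n = 2$:

    from math import gcd

    # Toy parameters for the generalized key equation e*d - k*(p^n - 1)*(q^n - 1) = 1 (here n = 2);
    # a real instance would use balanced primes of cryptographic size.
    p, q, n = 1009, 1013, 2
    N = p * q
    phi = (p**n - 1) * (q**n - 1)

    e = 65537                     # public exponent, coprime to (p^n - 1)(q^n - 1)
    assert gcd(e, phi) == 1
    d = pow(e, -1, phi)           # private exponent: e*d = 1 (mod (p^n - 1)(q^n - 1))
    k = (e * d - 1) // phi        # the integer k appearing in the key equation

    assert e * d - k * phi == 1   # the key equation targeted by the attacks
    print(N, e, d, k)

The attacks discussed here exploit exactly this relation when $d$ (or partial information about it) is small relative to $N$.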
Partial Exposure Attacks Against a Family of RSA-like Cryptosystems
An RSA generalization using complex integers was introduced by Elkamchouchi, Elshenawy, and Shaban in 2002. This scheme was further extended by Cotan and Teșeleanu to Galois fields of order $n \geq 1$. In this generalized framework, the key equation is $ed - k (p^n-1)(q^n-1) = 1$, where $p$ and $q$ are prime numbers. Note that the classical RSA and the Elkamchouchi \emph{et al.} key equations are special cases, namely $n=1$ and $n=2$. In addition to introducing this generic family, Cotan and Teșeleanu describe a continued fractions attack capable of recovering the secret key $d$ if $d < N^{0.25n}$. This bound was later improved by Teșeleanu using a lattice-based method. In this paper, we explore other lattice attacks that could lead to factoring the modulus $N = pq$. Namely, we propose a series of partial exposure attacks that can aid an adversary in breaking this family of cryptosystems if certain conditions hold.
One Solves All: Exploring ChatGPT's Capabilities for Fully Automated Simple Power Analysis on Cryptosystems
Side-channel analysis is a powerful technique to extract secret data from cryptographic devices. However, this task heavily relies on experts and specialized tools, particularly in the case of simple power analysis (SPA). Meanwhile, ChatGPT, a leading example of large language models, has attracted great attention and been widely applied for assisting users with complex tasks. Despite this, ChatGPT’s capabilities for fully automated SPA, where prompts and traces are input only once, have yet to be systematically explored and improved. In this paper, we introduce a novel prompt template with three expert strategies and conduct a large-scale evaluation of ChatGPT’s capabilities for SPA. We establish a dataset comprising seven sets of real power traces from various implementations of public-key cryptosystems, including RSA, ECC, and Kyber, as well as eighteen sets of simulated power traces that illustrate typical SPA leakage patterns. The results indicate that ChatGPT cannot be used directly for SPA. However, by applying the expert strategies, we successfully recovered the private keys for all twenty-five traces, which demonstrates that non-experts can use ChatGPT with our expert strategies to perform fully automated SPA.
Pacmann: Efficient Private Approximate Nearest Neighbor Search
We propose a new private Approximate Nearest Neighbor (ANN) search scheme named Pacmann that allows a client to perform ANN search in a vector database without revealing the query vector to the server. Unlike prior constructions that run encrypted search on the server side, Pacmann carefully offloads limited computation and storage to the client, no longer requiring computationally intensive cryptographic techniques. Specifically, clients run a graph-based ANN search, where in each hop on the graph, the client privately retrieves local graph information from the server. To make this efficient, we combine two ideas: (1) we adapt a leading graph-based ANN search algorithm to be compatible with private information retrieval (PIR) for subgraph retrieval; (2) we use a recent class of PIR schemes that trade offline preprocessing for online computational efficiency. Pacmann achieves significantly better search quality than the state-of-the-art private ANN search schemes, showing up to 2.5$\times$ better search accuracy on real-world datasets than prior work and reaching 90\% of the quality of a state-of-the-art non-private ANN algorithm. Moreover, on large datasets with up to 100 million vectors, Pacmann shows better scalability than prior private ANN schemes, with up to $63\%$ reduction in computation time and $24\%$ reduction in overall latency.
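A minimal sketch of the client-side search loop described above (our illustration only; `pir_fetch` is a hypothetical stand-in for the preprocessing-based PIR subroutine, and the actual Pacmann algorithm differs in its graph construction and candidate management):

    import numpy as np

    def private_ann_search(query, entry_id, pir_fetch, budget):
        """Best-first walk over the server's proximity graph. Each call to
        pir_fetch(node_id) privately retrieves that node's record, i.e. its
        vector and the ids of its graph neighbors, via PIR."""
        visited = set()
        frontier = [entry_id]
        best_id, best_dist = None, float("inf")
        while frontier and budget > 0:
            node = frontier.pop(0)
            if node in visited:
                continue
            visited.add(node)
            vector, neighbor_ids = pir_fetch(node)   # one PIR retrieval per hop
            budget -= 1
            dist = float(np.linalg.norm(query - vector))
            if dist < best_dist:
                best_id, best_dist = node, dist
                # expand from the new best node first (greedy refinement)
                frontier = list(neighbor_ids) + frontier
        return best_id, best_dist

The point of the design is that the server only answers oblivious retrievals of graph records, so it never learns which nodes the walk visits, and hence nothing about the query vector.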
On the complexity of the problem of solving systems of tropical polynomial equations of degree two
In this paper, we investigate the computational complexity of the problem of solving a one-sided system of equations of degree two of a special form over the max-plus algebra. We also consider the asymptotic density of solvable systems of this form. Such systems have appeared in the analysis of some recently proposed tropical cryptography protocols. We show how this problem is related to the integer linear programming problem and prove that it is NP-complete. We show that the asymptotic density of solvable systems of this form, with some restrictions on the coefficients, the number of variables, and the number of equations, is 0. As a corollary, we prove that this problem (with some restrictions on the coefficients, the number of variables, and the number of equations) is decidable generically in polynomial time.
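For concreteness, a one-sided max-plus equation of degree two in variables $x_1, \dots, x_m$ can be written as (an illustrative rendering of the general shape; the special form studied in the paper further restricts the coefficients)
\[
\max_{1 \le i \le j \le m} \bigl( a_{ij} + x_i + x_j \bigr) = c,
\]
since tropical multiplication is ordinary addition and tropical addition is the maximum. Each such equation requires every tropical monomial to be at most $c$ and at least one of them to equal $c$, a disjunctive condition that can be expressed with integer linear constraints, which is what connects solvability of these systems to integer linear programming.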
Weightwise Almost Perfectly Balanced Functions, Construction From A Permutation Group Action View
The construction of Boolean functions with good cryptographic properties over subsets of vectors with fixed Hamming weight is significant for lightweight stream ciphers like FLIP. In this article, we propose a general method to construct a class of Weightwise Almost Perfectly Balanced (WAPB) Boolean functions using the action of a cyclic permutation group on $\mathbb{F}_2^n$. This class generalizes the Weightwise Perfectly Balanced (WPB) $2^m$-variable Boolean function construction by Liu and Mesnager to any $n$. We show how to bound the nonlinearity and weightwise nonlinearities of functions from this construction. Additionally, we explore two significant permutation groups, $\langle \psi \rangle$ and $\langle \sigma \rangle$, where $\psi$ is a binary-cycle permutation and $\sigma$ is a rotation. We theoretically analyze the cryptographic properties of the WAPB functions derived from these permutations and experimentally evaluate their nonlinearity parameters for $n$ between 4 and 10.
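Recall that a weightwise (almost) perfectly balanced function must take the values 0 and 1 (almost) equally often on every slice of $\mathbb{F}_2^n$ of fixed Hamming weight $k$, for $0 < k < n$. A small Python sketch of this balancedness check (our illustration, independent of the specific construction in the paper):

    from itertools import combinations

    def weightwise_imbalances(f, n):
        """For each Hamming weight 0 < k < n, return |#{x : f(x)=1} - #{x : f(x)=0}|
        over the slice of weight-k vectors. A WPB function (n a power of 2) has
        imbalance 0 on every slice; a WAPB function has imbalance at most 1."""
        imbalances = {}
        for k in range(1, n):
            ones = zeros = 0
            for support in combinations(range(n), k):    # all vectors of weight k
                x = tuple(1 if i in support else 0 for i in range(n))
                if f(x):
                    ones += 1
                else:
                    zeros += 1
            imbalances[k] = abs(ones - zeros)
        return imbalances

    # Toy check of a 3-variable function (not claimed to be WAPB).
    print(weightwise_imbalances(lambda x: x[0] ^ (x[1] & x[2]), 3))
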
Bypassing the characteristic bound in logUp
In this informal note, we describe how to bypass the characteristic bound in logUp [eprint 2022/1530] by abstracting the notion of (pole) multiplicity. The method applies as well to the GKR-variant from Papini and Haböck [eprint 2023/1284], and it moreover unlocks fractional decomposition lookups over binary fields.
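For context, logUp reduces a lookup of witness values $w_1, \dots, w_N$ into a table $t_1, \dots, t_M$ to a rational identity over the field, shown here in its basic form (the note's contribution concerns how the multiplicities are handled):
\[
\sum_{j=1}^{N} \frac{1}{X - w_j} \;=\; \sum_{i=1}^{M} \frac{m_i}{X - t_i},
\]
where $m_i$ counts how often the table entry $t_i$ occurs among the $w_j$. Over fields of small characteristic (in particular binary fields), the naive encoding of the multiplicities $m_i$ breaks down once they can exceed the characteristic; this is the characteristic bound that the note bypasses by abstracting the notion of pole multiplicity.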
Exact Template Attacks with Spectral Computation
The so-called Gaussian template attack (TA) is one of the optimal Side-Channel Analyses (SCA) when the measurement noise is Gaussian (normal).
In the SCA literature, several optimizations of its implementation have been introduced, such as coalescence and spectral computation. Coalescence consists of averaging traces corresponding to the same plaintext value, thereby coalescing (i.e., compacting) the dataset. Spectral computation consists of sharing the computational workload when estimating the likelihood across key hypotheses.
State-of-the-art coalescence leverages the Law of Large Numbers (LLN) to compute the mean of equivalent traces.
This approach comes with a drawback: the LLN is only an asymptotic approximation, so it does not lead to an exact template attack, especially when the number of traces is small.
In this paper, we introduce a way of calculating the TA exactly and with the same computational complexity (using the spectral approach), without using the LLN, regardless of the number of messages.
For the experimental validation of this approach, we use the ANSSI SCA Database (ASCAD), with different numbers of messages and different numbers of samples per trace.
Recall that this dataset concerns a software implementation of AES-128 running on an ATMega8515 microcontroller.
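To make the coalescence step discussed above concrete, here is a minimal numpy sketch that groups traces by plaintext value and averages each group (our illustration of the LLN-based baseline; the paper's contribution is an exact computation that does not rely on this asymptotic averaging):

    import numpy as np

    def coalesce(traces, plaintexts, num_classes=256):
        """LLN-style coalescence: group side-channel traces by plaintext (byte)
        value and average each group. traces has shape (num_traces, num_samples),
        plaintexts has shape (num_traces,). Returns per-class means and counts."""
        num_samples = traces.shape[1]
        sums = np.zeros((num_classes, num_samples))
        counts = np.zeros(num_classes, dtype=int)
        for trace, value in zip(traces, plaintexts):
            sums[value] += trace
            counts[value] += 1
        means = sums / np.maximum(counts, 1)[:, None]   # guard against empty classes
        return means, counts
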
COCO: Coconuts and Oblivious Computations for Orthogonal Authentication
Authentication often bridges real-world individuals and their virtual public identities, like usernames, user IDs and e-mails, exposing vulnerabilities that threaten user privacy. This research introduces COCO (Coconuts and Oblivious Computations for Orthogonal Authentication), a framework that segregates roles among Verifiers, Authenticators, and Clients to achieve privacy-preserving authentication.
COCO eliminates the need for Authenticators to directly access virtual public identifiers or real-world identifiers for authentication. Instead, the framework leverages Oblivious Pseudorandom Functions (OPRFs) and an extended Coconut Credential Scheme to ensure privacy by introducing separate, unlinkable orthogonal authentication identifiers and a full-consensus mechanism to perform zero-knowledge authentications whose proofs are unlinkable across multiple sessions. The authentication process becomes self-contained, preventing definitive reverse tracing of virtual public identifiers to real-world identifiers.
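Schematically (our rendering, abstracting away COCO's specific roles and credential machinery), an OPRF lets an authentication identifier be derived as
\[
\mathsf{id}_{\mathrm{auth}} = F_k(\mathsf{vid}),
\]
where the client holding the virtual public identifier $\mathsf{vid}$ and the party holding the key $k$ jointly evaluate $F_k$ so that the client learns only $\mathsf{id}_{\mathrm{auth}}$ and the key holder learns nothing about $\mathsf{vid}$; the derived identifier is pseudorandom and hence unlinkable to $\mathsf{vid}$ without the key.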
Greco: Fast Zero-Knowledge Proofs for Valid FHE RLWE Ciphertexts Formation
Fully homomorphic encryption (FHE) allows for evaluating arbitrary functions over encrypted data. In Multi-party FHE applications, different parties encrypt their secret data and submit ciphertexts to a server, which, according to the application logic, performs homomorphic operations on them. For example, in a secret voting application, the tally is computed by summing up the ciphertexts encoding the votes. Valid encrypted votes are of the form $E(0)$ and $E(1)$. A malicious voter could send an invalid encrypted vote such as $E(145127835)$, which can mess up the whole election. Because of that, users must prove that the ciphertext they submitted is a valid Ring-Learning with Errors (RLWE) ciphertext and that the plaintext message they encrypted is a valid vote (for example, either a 1 or 0). Greco uses zero-knowledge proof to let a user prove that their RLWE ciphertext is well-formed. Or, in other words, that the encryption operation was performed correctly. The resulting proof can be, therefore, composed with additional application-specific logic and subject to public verification in a non-interactive setting. Considering the secret voting application, one can prove further properties of the message being encrypted or even properties about the voter, allowing the application to support anonymous voting as well. The prover has been implemented using Halo2-lib as a proving system, and the benchmarks have shown that Greco can already be integrated into user-facing applications without creating excessive friction for the user. The implementation is available at https://github.com/privacy-scaling-explorations/greco