## Papers updated in last 7 days (56 results)

Areion: Highly-Efficient Permutations and Its Applications (Extended Version)

In real-world applications, the overwhelming majority of cases require (authenticated) encryption or hashing of relatively short input, say up to 2K bytes. Almost all TCP/IP packets are 40 to 1.5K bytes, and the maximum packet lengths of major protocols, e.g., Zigbee, Bluetooth Low Energy, and Controller Area Network (CAN), are less than 128 bytes. However, existing schemes are not well optimized for short input. To bridge the gap between (future) real-world needs and the limited performance of state-of-the-art hash functions and authenticated encryptions with associated data (AEADs) on short input, we design a family of wide-block permutations, Areion, that fully leverages the power of AES instructions, which are widely deployed in many devices. As applications, we propose several hash functions and AEADs. Areion significantly outperforms existing schemes for short input and is competitive even for relatively long messages. Indeed, our hash function is surprisingly fast: its performance is less than three cycles/byte on the latest Intel architectures for any message size. It is significantly faster than existing state-of-the-art schemes for short messages up to around 100 bytes, the most widely used input sizes in real-world applications, on both the latest CPU architectures (Ice Lake, Tiger Lake, and Alder Lake) and mobile platforms (Pixel 7, iPhone 14, and iPad Pro with Apple M2).

A New Method to Test the Zeros of Riemann Zeta Function

The zeta function $\zeta(z)=\sum_{n=1}^{\infty} \frac{1}{n^z}$ is convergent only for $\text{Re}(z)>1$. To test its zeros, one needs to use the Riemann-Siegel function $Z(t)$. If $Z(t_1)$ and $Z(t_2)$ have opposite signs, $Z(t)$ vanishes between $t_1$ and $t_2$, and $\zeta(z)$ has a zero on the critical line between $\frac{1}{2}+it_1$ and $\frac{1}{2}+it_2$. This method does not run in polynomial time, because it has to compute the sum $\sum_{n\leq \alpha}\frac{\cos(\vartheta(1/2+it)-t\log{n})}{\sqrt{n}}$, where $\alpha=\lfloor\sqrt{t/(2\pi)}\rfloor$. The eta function $\eta(z)=\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n^z}$ is convergent for $\text{Re}(z)>0$, and $\eta(z)=\left(1-2^{1-z}\right)\zeta(z)$ on the critical strip $0<\text{Re}(z)<1$. The alternating series can be used directly to test the zeros, because $\eta(z)$ and the analytic continuation of $\zeta(z)$ have the same zeros in the critical strip. In this paper, we present a polynomial-time algorithm to test the zeros based on $\eta(z)$, which is more understandable and better suited to modern computing machines than the general method. Besides, we clarify the actual meaning of the logarithm symbol in the Riemann-Siegel formula.
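The alternating-series approach lends itself to a direct numerical illustration. Below is a minimal sketch (not the paper's algorithm) that scans $|\eta(1/2+it)|$ for dips using mpmath's Dirichlet eta function `altzeta`; the grid step, detection threshold, and working precision are arbitrary illustrative choices. The first three zeros lie near $t \approx 14.13$, $21.02$, and $25.01$.
```python
# Locate candidate critical-line zeros of zeta by scanning |eta(1/2 + it)|
# for local minima that are numerically close to zero.
from mpmath import mp, mpc, altzeta

mp.dps = 25  # working precision (decimal digits)

def eta_abs(t):
    """|eta(1/2 + it)|, which vanishes exactly at zeta's critical-line zeros."""
    return abs(altzeta(mpc(0.5, t)))

step = 0.05
ts = [10 + k * step for k in range(int(20 / step) + 1)]  # scan t in [10, 30]
vals = [eta_abs(t) for t in ts]
for i in range(1, len(vals) - 1):
    # a local minimum with a small modulus marks a candidate zero to refine
    if vals[i] < vals[i - 1] and vals[i] < vals[i + 1] and vals[i] < 0.1:
        print("candidate zero near t = %.2f, |eta| ~ %.3g" % (ts[i], float(vals[i])))
```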

Rotatable Zero Knowledge Sets: Post Compromise Secure Auditable Dictionaries with application to Key Transparency

Key Transparency (KT) systems allow end-to-end encrypted service providers (messaging, calls, etc.) to maintain an auditable directory of their users’ public keys, producing proofs that all participants have a consistent view of those keys, and allowing each user to check updates to their own keys. KT has lately received a lot of attention, in particular its privacy preserving variants, which also ensure that users and auditors do not learn anything beyond what is necessary to use the service and keep the service provider accountable.
Abstractly, the problem of building such systems reduces to constructing so-called append-only Zero-Knowledge Sets (aZKS). Unfortunately, existing aZKS (and KT) solutions do not make it possible to adequately restore the privacy guarantees after a server compromise, a form of Post-Compromise Security (PCS), while maintaining the auditability properties. In this work we address this concern through the formalization of an extension of aZKS called Rotatable ZKS (RZKS). In addition to providing PCS, our notion of RZKS has several other attractive features, such as a stronger (extractable) soundness notion, and the ability for a party with out-of-date data to efficiently “catch up” to the current epoch while ensuring that the server did not erase any of the past data.
Of independent interest, we also introduce a new primitive called a Rotatable Verifiable Random Function (VRF), and show how to build RZKS in a modular fashion from a rotatable VRF, ordered accumulator, and append-only vector commitment schemes.

Khatam: Reducing the Communication Complexity of Code-Based SNARKs

We prove that Basefold (Crypto 2024) is secure in the $\textit{list decoding regime}$, within the double Johnson bound and with error probability $\frac{O(n)}{|F|}$. At the heart of this proof is a new, stronger statement for $\textit{correlated agreement}$, which roughly states that if a linear combination of vectors $\pi_L + r \pi_R$ agrees with a codeword at every element in $S \subset [n]$, then so do $\pi_L$ and $\pi_R$. Our result is purely combinatorial and therefore extends to any finite field and any linear code. As such, it can be applied to any code-based multilinear Polynomial Commitment Scheme to reduce its communication complexity.

OpenNTT: An Automated Toolchain for Compiling High-Performance NTT Accelerators in FHE

Modern cryptographic techniques such as fully homomorphic encryption (FHE) have recently gained broad attention. Most of these cryptosystems rely on lattice problems wherein polynomial multiplication forms the computational bottleneck. A popular method to accelerate these polynomial multiplications is the Number-Theoretic Transform (NTT). Recent works aim to improve the practical deployability of NTT and propose toolchains supporting the design of NTT hardware accelerators. However, existing design tools do not provide on-the-fly twiddle factor generation (TFG), which leads to high memory demands. Motivated by this gap, we present OpenNTT, a fully automated, open-source framework to compile NTT hardware accelerators with TFG for various NTT types and parameter sets. We address the challenge of combining conflict-free memory accesses and efficient, linear twiddle factor generation through a dedicated NTT processing order. Following this order, we develop a flexible twiddle factor generation method with minimal memory usage. These core concepts, together with a frequency-optimized hardware architecture, form our OpenNTT framework. We use OpenNTT to compile and test NTT hardware designs with various parameter sets on FPGAs. The obtained results show a clear memory reduction due to TFG and a speedup of 2.7× in latency and 2.2× in area-time product, compared to prior art.
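To see what "generating twiddle factors on the fly" means, here is a toy software model (not OpenNTT's hardware design) of an iterative radix-2 NTT in which each stage derives its twiddles by successive multiplication instead of reading a precomputed table; the modulus, generator, and transform size are illustrative choices.
```python
# Iterative radix-2 NTT mod Q with on-the-fly (linear) twiddle generation.
Q = 257          # prime with 256 | Q - 1
G = 3            # primitive root mod 257

def ntt(a, invert=False):
    a = list(a)
    n = len(a)
    # bit-reversal permutation of the input
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        # stage twiddle base: a primitive `length`-th root of unity mod Q
        w_m = pow(G, (Q - 1) // length, Q)
        if invert:
            w_m = pow(w_m, Q - 2, Q)
        for start in range(0, n, length):
            w = 1  # on-the-fly twiddle: w = w_m^k, one multiply per butterfly
            for k in range(length // 2):
                u = a[start + k]
                v = a[start + k + length // 2] * w % Q
                a[start + k] = (u + v) % Q
                a[start + k + length // 2] = (u - v) % Q
                w = w * w_m % Q
        length <<= 1
    if invert:
        n_inv = pow(n, Q - 2, Q)
        a = [x * n_inv % Q for x in a]
    return a

coeffs = [3, 1, 4, 1, 5, 9, 2, 6]
assert ntt(ntt(coeffs), invert=True) == coeffs  # round trip recovers the input
```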

MixBuy: Contingent Payment in the Presence of Coin Mixers

A contingent payment protocol involves two mutually distrustful parties, a buyer and a seller, operating on the same blockchain, and a digital product, whose ownership is not tracked on a blockchain (e.g. a digital book). The buyer holds coins on the blockchain and transfers them to the seller in exchange for the product. However, if the blockchain does not hide transaction details, any observer can learn that a buyer purchased some product from a seller.
In this work, we take contingent payment a step further: we consider a buyer who wishes to buy a digital product from a seller while routing the payment via an untrusted mixer.
Crucially, we require that said payment is unlinkable, meaning that the mixer (or any other observer) does not learn which buyer is paying which seller. We refer to such a setting as unlinkable contingent payment (UCP).
We present MixBuy, a system that realizes UCP. MixBuy relies on oracle-based unlinkable contingent payment (O-UCP), a novel four-party cryptographic protocol where the mixer pays the seller and the seller provides the buyer with the product only if a semi-trusted notary attests that the buyer has paid the mixer. More specifically, we require four security notions: (i) mixer security, which guarantees that if the mixer pays the seller, the mixer must get paid by the buyer; (ii) seller security, which guarantees that if the seller delivers the product to the buyer, the seller must get paid by the mixer; (iii) buyer security, which guarantees that if the buyer pays the mixer, the buyer must obtain the product; and (iv) unlinkability, which guarantees that given a set of buyers and sellers, the mixer does not learn which buyer paid which seller.
We present a provably secure and efficient cryptographic construction for O-UCP. Our construction can be readily used to realize UCP on most blockchains, as it has minimal functionality requirements (i.e., digital signatures and timelocks). To demonstrate the practicality of our construction, we provide a proof of concept for O-UCP; our benchmarks on commodity hardware show that the communication overhead is small (a few kB per message) and the running time is below one second.

The Blockwise Rank Syndrome Learning problem and its applications to cryptography

Recently, the notion of blockwise errors in the context of rank-based cryptography was introduced by Song et al. at Asiacrypt 2023. This notion of error, very close to the sum-rank metric, makes it possible, by decreasing the weight of the decoded error, to greatly improve the parameters of the LRPC and RQC cryptographic schemes.
A little earlier, the multi-syndrome approach introduced for the LRPC and RQC schemes had also allowed a considerable decrease in parameter sizes, in particular through the introduction of Augmented Gabidulin codes.
In the present paper we show that the two previous approaches (blockwise errors and multi-syndromes) can be combined into a single approach which leads to very efficient generalized RQC and LRPC schemes. To do so, we introduce a new problem, the Blockwise Rank Support Learning problem, which consists of guessing the support of the errors when several syndromes with blockwise-structured errors are given as input.
The new schemes we introduce have very interesting features: for 128-bit security, the sum of public key and ciphertext is only 1.4 kB for the generalized RQC scheme and 1.7 kB for the generalized LRPC scheme. The approach proposed in this paper yields a 40% gain in parameter size compared to previous results, obtaining even better sizes than the KYBER scheme, whose total sum is 1.5 kB.
Besides describing these new schemes, the paper provides new attacks on the l-RD problem introduced by Song et al. at Asiacrypt 2023; in particular, these new attacks cryptanalyze all the blockwise LRPC parameters they proposed (with an improvement of more than 40 bits in the case of structural attacks). We also describe combinatorial and algebraic attacks on the new Blockwise Rank Support Learning problem we introduce.

A Single-Trace Message Recovery Attack on a Masked and Shuffled Implementation of CRYSTALS-Kyber

Last year CRYSTALS-Kyber was chosen by NIST as a new, post-quantum secure key encapsulation mechanism to be standardized. This makes it important to assess the resistance of CRYSTALS-Kyber implementations to physical attacks. Pure side-channel attacks on post-quantum cryptographic algorithms have already been well-explored. In this paper, we present an attack on a masked and shuffled software implementation of CRYSTALS-Kyber that combines fault injection with side-channel analysis. First, a voltage fault injection is performed to bypass the shuffling. We found settings that consistently glitch the desired instructions without crashing the device. After the successful fault injection, a deep learning-assisted profiled power analysis based on the Hamming weight leakage model is used to recover the message (shared key). We propose a partial key enumeration method that allows us to significantly increase the success rate of message recovery (from 0.122 without enumeration to 0.887 with 32 enumerated bits).

A Single Trace Fault Injection Attack on Hedged CRYSTALS-Dilithium

CRYSTALS-Dilithium is a post-quantum secure digital signature algorithm currently being standardised by NIST. As a result, devices making use of CRYSTALS-Dilithium will soon become generally available and be deployed in various environments. It is thus important to assess the resistance of CRYSTALS-Dilithium implementations to physical attacks.
In this paper, we present an attack on a CRYSTALS-Dilithium implementation in hedged mode on an ARM Cortex-M4 using fault injection. Voltage glitching is performed to skip the computation of a seed during the generation of the signature. We identified settings that consistently skip the desired function without crashing the device. After the successful fault injection, the resulting signature allows for the extraction of the secret key vector. Our attack succeeds with probability 0.582 in a single trace.
We also propose countermeasures against the presented attack.

Improved ML-DSA Hardware Implementation With First Order Masking Countermeasure

We present a protected hardware implementation of the Module-Lattice-Based Digital Signature Standard (ML-DSA). ML-DSA is an extension of Dilithium 3.1, the winner of the Post-Quantum Cryptography (PQC) competition in the digital signature category. The proposed design is based on the existing high-performance Dilithium 3.1 design. We implement in hardware the existing Dilithium masking gadgets, which were previously available only in software. The masking gadgets are integrated with the unprotected ML-DSA design, and the complete design is functionally verified against the Known Answer Tests (KATs) generated from the ML-DSA reference software. We also present practical power side-channel attack results, implementing the masking gadgets on a standard side-channel evaluation FPGA board and collecting up to 1 million power traces. The protected design incurs overheads of 1.127× in LUTs, 1.2× in flip-flops, and 378× in execution time compared to the unprotected design. The experimental results show that it resists side-channel attacks.
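As background on the masking principle such gadgets implement (a generic illustration, not the paper's hardware design): a first-order scheme splits every sensitive value into two shares so that no single intermediate value correlates with the secret, and linear operations can be computed share-wise.
```python
# First-order Boolean masking in miniature: split, compute share-wise, unmask.
import secrets

def mask(x, bits=32):
    m = secrets.randbits(bits)     # fresh randomness per masking
    return m, x ^ m                # (share0, share1) with share0 ^ share1 == x

def unmask(s0, s1):
    return s0 ^ s1

secret = 0xCAFEBABE
s0, s1 = mask(secret)
# Linear (XOR) operations apply to one share without ever recombining:
c = 0x12345678
t0, t1 = s0 ^ c, s1                # masked computation of secret ^ c
assert unmask(t0, t1) == secret ^ c
```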

Verifying Jolt zkVM Lookup Semantics

Lookups are a popular way to express repeated constraints in state-of-the-art SNARKs. This is especially the case for zero-knowledge virtual machines (zkVMs), which produce succinct proofs of correct execution for programs expressed as bytecode according to a specific instruction set architecture (ISA). The Jolt zkVM (Arun, Setty & Thaler, Eurocrypt 2024) for the RISC-V ISA employs Lasso (Setty, Thaler & Wahby, Eurocrypt 2024), an efficient lookup argument for massive structured tables, to prove correct execution of instructions. Internally, Lasso performs multiple lookups into smaller subtables, then combines the results.
We present an approach to formally verify Lasso-style lookup arguments against the semantics of instruction set architectures. We demonstrate our approach by formalizing and verifying all Jolt 32-bit instructions corresponding to the RISC-V base instruction set (RV32I) using the ACL2 theorem proving system. Our formal ACL2 model has undergone extensive validation against the Rust implementation of Jolt. Due to ACL2's bit-blasting, rewriting, and developer-friendly features, our formalization is highly automated.
Through formalization, we also discovered optimizations to the Jolt codebase, leading to improved efficiency without impacting correctness or soundness. In particular, we removed one unnecessary lookup each for four instructions, and reduced the sizes of three subtables by 87.5%.

ColliderScript: Covenants in Bitcoin via 160-bit hash collisions

We introduce a method for enforcing covenants on Bitcoin outputs without requiring any changes to Bitcoin, by designing a hash-collision-based equivalence check which bridges Bitcoin's limited Big Script to Bitcoin's Small Script. This allows us to evaluate the signature of the spending transaction (available only to Big Script) in Small Script. As Small Script enables arbitrary computations, we can introspect into the spending transaction and enforce covenants on it.
Our approach leverages finding collisions in the $160$-bit hash functions: SHA-1 and RIPEMD-160. By the birthday bound this should cost $\sim2^{80}$ work. Each spend of our covenant costs $\sim2^{86}$ hash queries and $\sim2^{56}$ bytes of space. For security, we rely on an assumption regarding the hardness of finding a $3$-way collision (with short random inputs) in $160$-bit hash functions, arguing that if the assumption holds, breaking covenant enforcement requires $\sim2^{110}$ hash queries. To put this in perspective, the work to spend our covenant is $\sim33$ hours of the Bitcoin mining network, whereas breaking our covenant requires $\sim 450,000$ years of the Bitcoin mining network.
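As a quick sanity check on these cost figures: for an $n$-bit hash, a generic $2$-way collision costs about $2^{n/2}$ evaluations by the birthday bound, and the generic $k$-way collision cost is $2^{n(k-1)/k}$, so for $n = 160$
$$\sqrt{2^{160}} = 2^{80}, \qquad 2^{160 \cdot 2/3} \approx 2^{106.7},$$
consistent with the $\sim 2^{80}$ collision-finding work above; the paper's $\sim 2^{110}$ hardness estimate for its short-input $3$-way collision assumption sits slightly above the generic $3$-way bound.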
We believe there are multiple directions of future work that can significantly improve these numbers.
Evaluating covenants and our equivalence check requires performing many operations in Small Script, which must take no more than $4$ megabytes in total size, as Bitcoin does not allow transactions greater than $4$ megabytes. We only provide rough estimates of the transaction size because, as of this writing, no Small Script implementations of the required hash functions, SHA-1 and RIPEMD-160, have been written.

Mastic: Private Weighted Heavy-Hitters and Attribute-Based Metrics

Insight into user experience and behavior is critical to the success of large software systems and web services. Gaining such insights, while preserving user privacy, is a significant challenge. Recent advancements in multi-party computation have made it practical to securely compute aggregates over secret shared data. Two such protocols have emerged as candidates for standardization at the IETF: Prio (NSDI 2017) for general-purpose statistics, and Poplar (IEEE S&P 2021) for heavy hitters, where the goal is to compute the most popular inputs held by users without learning the inputs themselves. While each of these protocols is well-suited to certain applications, there remain a number of use cases identified by the IETF for which neither Prio nor Poplar is practical.
We introduce Mastic, a protocol for the following functionality: each of a large number of clients holds an input (e.g., a URL) and its corresponding weight (e.g., page load time); for a given candidate input (or prefix), a small number of non-colluding servers wish to securely aggregate the weights of clients that hold that input (or some input with that prefix), without learning the weights or which client holds which input. This functionality makes two new classes of applications possible. The first is a natural generalization of heavy hitters we call weighted heavy-hitters. The second is an enhancement of Prio-style metrics we call attribute-based metrics in which aggregates are grouped by hierarchical user attributes (e.g., their geographic location or software version). We demonstrate Mastic's practicality for these applications with a real-world example of each. We also compare our protocol with Prio and Poplar on a wide area network. Overall, we report over one order of magnitude performance improvement over Poplar for plain heavy-hitters and $1.5-2\times$ improvement over Prio for attribute-based metrics.

Schnorr Signatures are Tightly Secure in the ROM under a Non-interactive Assumption

We show that the widely-used Schnorr signature scheme meets existential unforgeability under chosen-message attack (EUF-CMA) in the random oracle model (ROM) if the circular discrete-logarithm (CDL) assumption, a new, non-interactive and falsifiable variant of the discrete-log (DL) problem we introduce, holds in the underlying group. Notably, our reduction is tight, meaning the constructed adversary against CDL has essentially the same running time and success probability as the assumed forger. This is crucial for justifying the size of the underlying group used in practice. To our knowledge, we are the first to exhibit such a reduction. Indeed, prior work required interactive and non-falsifiable assumptions (Bellare and Dai, INDOCRYPT 2020) or additional idealized models beyond the ROM like the algebraic group model (Fuchsbauer et al., EUROCRYPT 2020). We justify CDL by showing it holds in two carefully-chosen idealized models that idealize different aspects of it. Namely, we show that CDL is as hard as DL in these models.
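For readers less familiar with the scheme under discussion, here is a self-contained sketch of textbook Schnorr signing and verification over a toy Schnorr group; the 11-bit parameters are for readability only and offer no security, and the hash-to-challenge encoding is our own illustrative choice.
```python
# Textbook Schnorr signatures over a toy group of prime order q, p = 2q + 1.
import hashlib, secrets

p, q, g = 2039, 1019, 4            # g generates the order-q subgroup of Z_p*

def H(*parts):
    """Challenge hash, modeled as a random oracle into Z_q."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)          # (secret key x, public key X = g^x)

def sign(x, X, msg):
    r = secrets.randbelow(q - 1) + 1
    R = pow(g, r, p)                # commitment
    c = H(R, X, msg)                # challenge
    s = (r + c * x) % q             # response
    return c, s

def verify(X, msg, sig):
    c, s = sig
    R = pow(g, s, p) * pow(X, q - c, p) % p   # recompute R = g^s * X^{-c}
    return c == H(R, X, msg)

x, X = keygen()
assert verify(X, "hello", sign(x, X, "hello"))
```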

Pushing the QAM method for finding APN functions further

APN functions offer optimal resistance to differential attacks and are instrumental in the design of block ciphers in cryptography. While finding APN functions is very difficult in general, a promising way to construct them is through symmetric matrices called Quadratic APN matrices (QAM). It is known that the search space for the QAM method can be reduced by means of orbit partitions induced by linear equivalences. This paper builds upon and improves these approaches in the case of homogeneous quadratic functions over $\mathbb{F}_{2^n}$ with coefficients in the subfield $\mathbb{F}_{2^m}$. We propose an innovative approach for computing orbit partitions in cases where the large search space makes this otherwise infeasible, with applications to the dimensions $(n,m)=(8,4)$ and $(n,m)=(9,3)$. We find and classify, up to CCZ-equivalence, all quadratic APN functions for the cases $(n,m)=(8,2)$ and $(n,m)=(10,1)$, discovering a new APN function in dimension $8$. Also, following partial searches for this case, we show that an exhaustive search for $(n,m) = (10,2)$ is infeasible for the QAM method using currently available means.

Verifiable Quantum Advantage without Structure

We show the following hold, unconditionally unless otherwise stated, relative to a random oracle:
- There are NP search problems solvable by quantum polynomial-time machines but not classical probabilistic polynomial-time machines.
- There exist functions that are one-way, and even collision resistant, against classical adversaries but are easily inverted quantumly. Similar separations hold for digital signatures and CPA-secure public key encryption (the latter requiring the assumption of a classically CPA-secure encryption scheme). Interestingly, the separation does not necessarily extend to the case of other cryptographic objects such as PRGs.
- There are unconditional publicly verifiable proofs of quantumness with the minimal rounds of interaction: for uniform adversaries, the proofs are non-interactive, whereas for non-uniform adversaries the proofs are two-message public-coin.
- Our results do not appear to contradict the Aaronson-Ambanis conjecture. Assuming this conjecture, there exists publicly verifiable certifiable randomness, again with the minimal rounds of interaction.
By replacing the random oracle with a concrete cryptographic hash function such as SHA2, we obtain plausible Minicrypt instantiations of the above results. Previous analogous results all required substantial structure, either in terms of highly structured oracles and/or algebraic assumptions in Cryptomania and beyond.

Ideal-to-isogeny algorithm using 2-dimensional isogenies and its application to SQIsign

The Deuring correspondence is a correspondence between supersingular elliptic curves and quaternion orders. Under this correspondence, an isogeny between elliptic curves corresponds to a quaternion ideal. This correspondence plays an important role in isogeny-based cryptography, and several algorithms to compute an isogeny corresponding to a quaternion ideal (ideal-to-isogeny algorithms) have been proposed. In particular, SQIsign is a signature scheme based on the Deuring correspondence that uses an ideal-to-isogeny algorithm. In this paper, we propose a novel ideal-to-isogeny algorithm using isogenies of dimension $2$. Our algorithm is based on Kani's reducibility theorem, which gives a connection between isogenies of dimensions $1$ and $2$. By using a base field of characteristic $p$ of the form $2^f g - 1$ for a small odd integer $g$, our algorithm uses only $2$-isogenies and $(2, 2)$-isogenies, with all operations in $\mathbb{F}_{p^2}$. We apply our algorithm to SQIsign and compare the efficiency of the new algorithm with the existing one. Our analysis shows that key generation and signing with our algorithm are at least twice as fast as with the existing algorithm at NIST security level 1. This advantage becomes more significant at higher security levels. In addition, our algorithm also improves the efficiency of verification in SQIsign.

Siniel: Distributed Privacy-Preserving zkSNARK

Zero-knowledge Succinct Non-interactive Argument of Knowledge (zkSNARK) is a powerful cryptographic primitive, in which a prover convinces a verifier that a given statement is true without leaking any additional information. However, existing zkSNARKs suffer from high computation overhead in the proof generation. This limits the applications of zkSNARKs, such as private payments, private smart contracts, and anonymous credentials. Private delegation has become a prominent way to accelerate proof generation.
In this work, we propose Siniel, an efficient private delegation framework for zkSNARKs constructed from polynomial interactive oracle proofs (PIOPs) and polynomial commitment schemes (PCSs). Our protocol allows a computationally limited prover (a.k.a. delegator) to delegate its expensive prover computation to several workers without leaking any information about the private witness. Most importantly, compared with the recent work EOS (USENIX'23), the state-of-the-art zkSNARK prover delegation framework, a prover in Siniel need not engage in the MPC protocol after sending its shares of the private witness. This means that a Siniel prover can outsource the entire computation to the workers.
We compare Siniel with EOS and show significant performance advantages of the former. The experimental results show that, under low bandwidth conditions (10 MBps), Siniel saves about 16% of the delegator's time compared to EOS, whereas under high bandwidth conditions (1000 MBps), Siniel saves about 80%.

New Experimental Evidences For the Riemann Hypothesis

The zeta function $\zeta(z)=\sum_{n=1}^{\infty} \frac{1}{n^z}$ is convergent for $\text{Re}(z)>1$, and the eta function $\eta(z)=\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n^z}$ is convergent for $\text{Re}(z)>0$. The eta function and the analytic continuation of the zeta function have the same zeros in the critical strip $0<\text{Re}(z)<1$, since $\eta(z)=\left(1-2^{1-z}\right)\zeta(z)$. In this paper, we present new experimental evidence which shows that for any $a\in (0, 1), b\in (-\infty, \infty)$, there exists a zero $\frac{1}{2}+it$ such that the modulus $|\eta(a+ib)|\geq |\eta(a+it)|>|\eta(\frac{1}{2}+it)|=0$. This evidence further supports the conjecture that all zeros are on the critical line $\text{Re}(z)=\frac{1}{2}$.
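A quick numerical spot-check of this kind of comparison (illustration only, using mpmath's Dirichlet eta `altzeta`; the off-line abscissas $a$ are arbitrary choices): near the first critical-line zero ordinate $t^* \approx 14.134725$, the modulus $|\eta(a+it^*)|$ stays bounded away from zero for $a \ne 1/2$, while $|\eta(1/2+it^*)|$ is numerically negligible.
```python
# Compare |eta(a + i t*)| across the critical strip at the first zero ordinate.
from mpmath import mp, mpc, altzeta

mp.dps = 30
t_star = mp.mpf("14.13472514173469379045725198356")
for a in ("0.3", "0.5", "0.7"):
    s = mpc(mp.mpf(a), t_star)
    print(a, abs(altzeta(s)))
# The a = 0.5 modulus is many orders of magnitude below the other two.
```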

Non-Interactive Zero-Knowledge Proofs with Certified Deletion

We introduce the notion of non-interactive zero-knowledge (NIZK) proofs with certified deletion. Our notion enables the recipient of a quantum NIZK proof for a (quantumly hard) NP statement to delete the proof and collapse it into a classical deletion certificate. Once this certificate is successfully validated, we require the recipient of the proof to lose their ability to find accepting inputs to NIZK verification.
We formally define this notion and build several candidate constructions from standard cryptographic assumptions. In particular, we propose a primary construction from classical NIZK for NP and one-way functions, albeit with two limitations: (i) deletion certificates are only privately verifiable, and (ii) both prover and verifier are required to be quantum algorithms. We resolve these hurdles in two extensions that assume the quantum hardness of the learning with errors problem. The first one achieves publicly verifiable certificates, and the second one requires merely classical communication between classical provers and quantum verifiers.

Notions of Quantum Reductions and Impossibility of Statistical NIZK

Non-Interactive Zero-Knowledge Arguments (NIZKs) are cryptographic protocols that enable a prover to demonstrate the validity of an $\mathsf{NP}$ statement to a verifier with a single message, without revealing any additional information. The soundness and zero-knowledge properties of a NIZK correspond to security against a malicious prover and a malicious verifier respectively. Statistical NIZKs (S-NIZKs) are a variant of NIZKs for which the zero-knowledge property is guaranteed to hold information-theoretically. Previous works have shown that S-NIZKs satisfying a weak version of soundness known as static soundness exist based on standard assumptions. However, the work of Pass (TCC 2013) showed that S-NIZKs with the stronger \emph{adaptive} soundness property are inherently challenging to obtain. The work proved that standard (black-box) proof techniques are insufficient to prove the security of an S-NIZK based on any standard (falsifiable) assumption. We extend this result to the setting where parties can perform quantum computations and communicate using quantum information, while the quantum security reduction is restricted to query the adversary classically. To this end, we adapt the well-known meta-reduction paradigm for showing impossibility results to the quantum setting. Additionally, we reinterpret our result using a new framework for studying quantum reductions, which we believe to be of independent interest.

The LaZer Library: Lattice-Based Zero Knowledge and Succinct Proofs for Quantum-Safe Privacy

The hardness of lattice problems offers one of the most promising security foundations for quantum-safe cryptography. Basic schemes for public key encryption and digital signatures are already close to standardization at NIST and several other standardization bodies, and the research frontier has moved on to building primitives with more advanced privacy features. At the core of many such primitives are zero-knowledge proofs. In recent years, zero-knowledge proofs for (and using) lattice relations have seen a dramatic jump in efficiency, and they currently provide arguably the shortest, and most computationally efficient, quantum-safe proofs for many scenarios. The main difficulty in using these proofs by non-experts (and experts!) is that they have a lot of moving parts and a lot of internal parameters depend on the particular instance that one is trying to prove.
Our main contribution is a library for zero-knowledge and succinct proofs which consists of efficient and flexible C code underneath a simple-to-use Python interface. Users without any background in lattice-based proofs should be able to specify the lattice relations and the norm bounds that they would like to prove, and the library will automatically create a proof system, complete with the intrinsic parameters, using either the succinct proofs of LaBRADOR (Beullens and Seiler, Crypto 2023) or the linear-size, though smaller for certain applications, proofs of Lyubashevsky et al. (Crypto 2022). The Python interface also allows for common operations used in lattice-based cryptography, which will enable users to write and prototype their full protocols within the syntactically simple Python environment.
We showcase some of the library’s usefulness by giving protocol implementations for blind signatures, anonymous credentials, the zero-knowledge proof needed in the recent Swoosh protocol (Gajland et al., Usenix 2024), proving knowledge of Kyber keys, and an aggregate signature scheme. Most of these are the most efficient known quantum-safe instantiations in terms of size, speed, and memory.

Single-Server Client Preprocessing PIR with Tight Space-Time Trade-off

This paper partly solves the open problem of the tight trade-off between client storage and server time in the client preprocessing setting of private information retrieval (PIR). In the client preprocessing setting of PIR, the client is allowed to store some hints generated from the database in a preprocessing phase and use the hints to assist online queries. We construct a new single-server client preprocessing PIR scheme. For a database with $n$ entries of size $w$, our protocol uses $S=O((n/T) \cdot (\log n + w))$ bits of client storage and $T$ amortized server probes over $n/T$ queries, where $T$ is a tunable online time parameter. Our scheme matches (up to constant factors) an $S \cdot T = \Omega(nw)$ lower bound generalized from a recent work by Yeo (EUROCRYPT 2023) and a communication barrier generalized from Ishai, Shi, and Wichs (CRYPTO 2024).
From a technical standpoint, we present a novel organization of hints where each PIR query consumes a hint, and entries in the consumed hint are relocated to other hints. We then present a new data structure to track the hint relocations and use small-domain pseudorandom permutations to make the hint storage sublinear while maintaining efficient lookups in the hints.
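To make the trade-off concrete, here is a small back-of-the-envelope script (illustration only: the constants hidden by the $O(\cdot)$ are set to 1, and the database parameters are arbitrary); note that $S \cdot T / (nw)$ stays a constant close to 1 as $T$ varies.
```python
# Storage S = (n/T) * (log2 n + w) versus online time T, against S*T = n*w.
import math

n, w = 2**20, 256                  # example: 1M entries of 256 bits each
for T in (2**6, 2**10, 2**14):
    S = (n // T) * (math.log2(n) + w)
    print(f"T = {T:>6} probes -> S ~ {S / 8 / 2**20:7.2f} MiB, "
          f"S*T / (n*w) = {S * T / (n * w):.2f}")
```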

KLaPoTi: An asymptotically efficient isogeny group action from 2-dimensional isogenies

We construct and implement an efficient post-quantum commutative cryptographic group action based on combining the SCALLOP framework for group actions from isogenies of oriented elliptic curves on one hand with the recent Clapoti method for polynomial-time evaluation of the CM group action on elliptic curves on the other.
We take advantage of the very attractive performance of $(2^e, 2^e)$-isogenies between products of elliptic curves in the theta coordinate system.
To successfully apply Clapoti in dimension $2$, it is required to resolve a particular quadratic Diophantine norm equation, for which we employ a slight variant of the KLPT algorithm.
Our work marks the first practical instantiation of the CM group action for which both the setup as well as the online phase can be computed in (heuristic) polynomial time.

Attacking Automotive RKE Security: How Smart are your ‘Smart’ Keys?

Remote Keyless Entry (RKE) systems are ubiquitous in modern day automobiles, providing convenience for vehicle owners - occasionally at the cost of security. Most automobile companies have proprietary implementations of RKE; these are sometimes built on insecure algorithms and authentication mechanisms. This paper presents a comprehensive study conducted on the RKE systems of multiple cars from four automobile manufacturers not previously explored.
Specifically, we analyze the design, implementation, and security levels of 7 different cars manufactured by Honda, Maruti-Suzuki, Toyota, and Mahindra. We also do a deep dive into the RKE system of a particular Honda model. We evaluate the susceptibility of these systems to known vulnerabilities (such as RollJam and RollBack attacks). This is accomplished using a novel tool, ‘Puck-Py’, that helps analyze RKE protocols. Our tool automates several aspects of the protocol analysis process, reducing time and logistical constraints in RKE research; we provide standardized protocols to execute various attacks using our Puck-Py tool. We find that, despite having a long period of time to fix security issues, several popular automobiles remain susceptible to attacks, including the basic RollJam attack.

One Server for the Price of Two: Simple and Fast Single-Server Private Information Retrieval

We present SimplePIR, the fastest single-server private information retrieval scheme known to date. SimplePIR’s security holds under the learning-with-errors assumption. To answer a client’s query, the SimplePIR server performs fewer than one 32-bit multiplication and one 32-bit addition per database byte. SimplePIR achieves 10 GB/s/core server throughput, which approaches the memory bandwidth of the machine and the performance of the fastest two-server private-information-retrieval schemes (which require non-colluding servers). SimplePIR has relatively large communication costs: to make queries to a 1 GB database, the client must download a 121 MB "hint" about the database contents; thereafter, the client may make an unbounded number of queries, each requiring 242 KB of communication. We present a second single-server scheme, DoublePIR, that shrinks the hint to 16 MB at the cost of slightly higher per-query communication (345 KB) and slightly lower throughput (7.4 GB/s/core). Finally, we apply our new private-information-retrieval schemes, together with a novel data structure for approximate set membership, to the task of private auditing in Certificate Transparency. We achieve a strictly stronger notion of privacy than Google Chrome’s current approach with modest communication overheads: 16 MB of download per month, along with 150 bytes per TLS connection.
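To convey the shape of the protocol, below is a toy numpy sketch of the SimplePIR blueprint (a simplified reading of the published scheme, not the authors' code; every parameter is far below anything secure): the database is a matrix $D$ over $\mathbb{Z}_p$, the client sends a Regev-style encryption of a unit vector selecting one column, the server answers with a single matrix-vector product, and the one-time "hint" $D \cdot A$ lets the client strip off the LWE mask.
```python
# Toy LWE-based PIR in the SimplePIR style: one D @ qu per query.
import numpy as np

rng = np.random.default_rng(0)
q, p = 2**32, 2**8                 # ciphertext and plaintext moduli
delta = q // p
rows = cols = 64                   # database viewed as a 64 x 64 matrix
n_lwe = 512                        # LWE secret dimension (toy value)

D = rng.integers(0, p, size=(rows, cols), dtype=np.uint64)
A = rng.integers(0, q, size=(cols, n_lwe), dtype=np.uint64)
hint = (D @ A) % q                 # downloaded once, reused for all queries

# Client: encrypt the indicator of the wanted column under secret s.
want = 37
s = rng.integers(0, q, size=n_lwe, dtype=np.uint64)
e = (rng.integers(-3, 4, size=cols) % q).astype(np.uint64)  # small noise
u = np.zeros(cols, dtype=np.uint64)
u[want] = delta
qu = ((A @ s) + e + u) % q         # query vector sent to the server

# Server: one matrix-vector product over the whole database.
ans = (D @ qu) % q

# Client: remove the A*s part using the hint, then round away the noise.
noisy = (ans - (hint @ s)) % q
recovered = ((noisy + delta // 2) // delta) % p
assert np.array_equal(recovered, D[:, want])   # whole column retrieved
```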

Zero-Knowledge Location Privacy via Accurate Floating-Point SNARKs

We introduce Zero-Knowledge Location Privacy (ZKLP), enabling users to prove to third parties that they are within a specified geographical region while not disclosing their exact location. ZKLP supports varying levels of granularity, allowing for customization depending on the use case. To realize ZKLP, we introduce the first set of Zero-Knowledge Proof (ZKP) circuits that are fully compliant with the IEEE 754 standard for floating-point arithmetic.
Our results demonstrate that our floating-point circuits amortize efficiently, requiring only $64$ constraints per multiplication for $2^{15}$ single-precision floating-point multiplications. We utilize our floating-point implementation to realize the ZKLP paradigm. In comparison to a baseline, we find that our optimized implementation requires $15.9 \times$ fewer constraints when utilizing single-precision floating-point values, and $12.2 \times$ fewer constraints when utilizing double-precision floating-point values. We demonstrate the practicability of ZKLP by building a protocol for privacy-preserving peer-to-peer proximity testing: Alice can test if she is close to Bob by receiving a single message, without either party revealing any other information about their location. In such a configuration, Bob can create a proof of (non-)proximity in $0.26 s$, whereas Alice can verify her distance to about $470$ peers per second.

Towards Practical Oblivious Map

An oblivious map (OMAP) is an important component in encrypted databases, used to safeguard against the server inferring sensitive information about a client's encrypted key-value store based on access patterns. Despite its widespread usage and importance, existing OMAP solutions face practical challenges, including the need for a large number of interaction rounds between client and server, as well as substantial communication bandwidth requirements. For example, the state-of-the-art protocol named OMIX++ in VLDB 2024 still requires $O(\log{n})$ interaction rounds and $O(\log^2{n})$ communication bandwidth per access, where $n$ denotes the total number of key-value pairs stored.
In this work, we introduce more practical and efficient OMAP constructions. Consistent with all prior OMAPs, our constructions also adapt only the tree-based Oblivious RAM (ORAM) and oblivious data structures (ODS) to achieve OMAP for enhanced practicality. In terms of complexity, our approach needs $O(\log{n}/\log{\log{n}}) + O(\log{\lambda})$ interaction rounds and $O(\log^2{n}/\log{\log{n}}) + O(\log{\lambda}\log{n})$ communication bandwidth per data access, where $\lambda$ is the security parameter. This new complexity results from our two main contributions. First, unlike prior works that rely solely on search trees, we design a novel framework for OMAP that combines a hash table with search trees. Second, we propose a more efficient tree-based ORAM named DAORAM, which is of significant independent interest. This newly developed ORAM noticeably accelerates our constructions, as it supports obliviously accessing hash tables much more efficiently. We implement both our proposed constructions and prior methods to experimentally demonstrate that our constructions substantially outperform prior methods in terms of efficiency.

A Faster Software Implementation of SQISign

Isogeny-based cryptography is famous for its short key sizes. As one of the most compact digital signatures, SQIsign (Short Quaternion and Isogeny Signature) is attractive among post-quantum schemes, but it is inefficient compared to other post-quantum competitors because of the complicated procedures in the ideal-to-isogeny translation, the efficiency bottleneck of the signing phase.
In this paper, we recall the current implementation of SQIsign and focus mainly on how to improve the execution of the ideal-to-isogeny translation in SQIsign. Specifically, we demonstrate how to utilize the reduced Tate pairing to save one of the two elliptic curve discrete logarithms. In addition, we explore the efficient implementation of the remaining discrete logarithm computation. We speed up other procedures in the ideal-to-isogeny translation with various techniques as well. It should be noted that our improvements also benefit the performance of key generation and verification in SQIsign. In the instantiation with $p_{1973}$, the improvements lead to a speedup of 5.47%, 8.80% and 25.34% for key generation, signing and verification, respectively.

The Learning Stabilizers with Noise problem

Random classical codes have good error correcting properties, and yet they are notoriously hard to decode in practice. Despite many decades of extensive study, the fastest known algorithms still run in exponential time. The Learning Parity with Noise (LPN) problem, which can be seen as the task of decoding a random linear code in the presence of noise, has thus emerged as a prominent hardness assumption with numerous applications in both cryptography and learning theory.
Is there a natural quantum analog of the LPN problem? In this work, we introduce the Learning Stabilizers with Noise (LSN) problem, the task of decoding a random stabilizer code in the presence of local depolarizing noise. We give both polynomial-time and exponential-time quantum algorithms for solving LSN in various depolarizing noise regimes, ranging from extremely low noise, to low constant noise rates, and even higher noise rates up to a threshold. Next, we provide concrete evidence that LSN is hard. First, we show that LSN includes LPN as a special case, which suggests that it is at least as hard as its classical counterpart. Second, we prove a worst-case to average-case reduction for variants of LSN. We then ask: what is the computational complexity of solving LSN? Because the task features quantum inputs, its complexity cannot be characterized by traditional complexity classes. Instead, we show that the LSN problem lies in a recently introduced (distributional and oracle) unitary synthesis class. Finally, we identify several applications of our LSN assumption, ranging from the construction of quantum bit commitment schemes to the computational limitations of learning from quantum data.

On the efficient representation of isogenies (a survey)

We survey different (efficient or not) representations of isogenies, with a particular focus on the recent "higher dimensional" isogeny representation, and algorithms to manipulate them.

Ideal Pseudorandom Codes

Pseudorandom codes are error-correcting codes with the property that no efficient adversary can distinguish encodings from uniformly random strings. They were recently introduced by Christ and Gunn [CRYPTO 2024] for the purpose of watermarking the outputs of randomized algorithms, such as generative AI models. Several constructions of pseudorandom codes have since been proposed, but none of them are robust to error channels that depend on previously seen codewords. This stronger kind of robustness is referred to as adaptive robustness, and it is important for meaningful applications to watermarking.
In this work, we show the following.
- Adaptive robustness: We show that the pseudorandom codes of Christ and Gunn are adaptively robust, resolving a conjecture posed by Cohen, Hoover, and Schoenbach [S&P 2025]. Our proof involves several new ingredients, combining ideas from both cryptography and coding theory and taking hints from the analysis of Boolean functions.
- Ideal security: We define an ideal pseudorandom code as one which is indistinguishable from the ideal functionality, capturing both the pseudorandomness and robustness properties in one simple definition. We show that any adaptively robust pseudorandom code for single-bit messages can be bootstrapped to build an ideal pseudorandom code with linear information rate, under no additional assumptions.
- CCA security: In the setting where the encoding key is made public, we define a CCA-secure pseudorandom code in analogy with CCA-secure encryption. We show that any adaptively robust public-key pseudorandom code for single-bit messages can be used to build a CCA-secure pseudorandom code with linear information rate, in the random oracle model.
Together with the result of Christ and Gunn, it follows that there exist ideal pseudorandom codes assuming the $2^{O(\sqrt{n})}$-hardness of LPN. This extends to CCA security in the random oracle model. These results immediately imply stronger robustness guarantees for generative AI watermarking schemes, such as the practical quality-preserving image watermarks of Gunn, Zhao, and Song (2024).

Cryptographically Secure Digital Consent

In the digital age, the concept of consent for online actions executed by third parties is crucial for maintaining trust and security in third-party services. This work introduces the notion of cryptographically secure digital consent, which aims to replicate the traditional consent process in the online world. We provide a flexible digital consent solution that accommodates different use cases and ensures the integrity of the consent process. The proposed framework involves a client (referring to the user or their devices), an identity manager (which authenticates the client), and an agent (which executes the action upon receiving consent). It supports various applications and ensures compatibility with existing identity managers. We require the client to keep no more than a password. The design addresses several security and privacy challenges, including preventing offline dictionary attacks, ensuring non-repudiable consent, and preventing unauthorized actions by the agent. Security is maintained even if either the identity manager or the agent is compromised, but not both. Our notion of an identity manager is broad enough to include combinations of different authentication factors such as a password, a smartphone, a security device, biometrics, or an e-passport. We demonstrate applications for signing PDF documents, e-banking, and key recovery.

The State of the Union: Union-Only Signatures for Data Aggregation

A union-only signature (UOS) scheme (informally introduced by Johnson et al. at CT-RSA 2002) allows signers to sign sets of messages in such a way that (1) any third party can merge two signatures to derive a signature on the union of the message sets, and (2) no adversary, given a signature on some set, can derive a valid signature on any strict subset of that set (unless it has seen such a signature already).
Johnson et al. originally posed building a UOS as an open problem. In this paper, we make two contributions: we give the first formal definition of a UOS scheme, and we give the first UOS constructions. Our main construction uses hashing, regular digital signatures, Pedersen commitments and signatures of knowledge. We provide an implementation that demonstrates its practicality. Our main construction also relies on the hardness of the short integer solution (SIS) problem; we show how this assumption can be replaced with the use of groups of unknown order. Finally, we sketch a UOS construction using SNARKs; this additionally gives the property that the size of the signature does not grow with the number of merges.
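Pedersen commitments, one of the main construction's ingredients, are additively homomorphic: commitments can be multiplied by any third party to obtain a commitment to the combined values and randomness. The toy sketch below (not the paper's construction; the group parameters are illustrative and insecure) shows this property.
```python
# Pedersen commitments over a toy Schnorr group, p = 2q + 1.
import secrets

p, q = 2039, 1019
g, h = 4, 16       # two subgroup generators; in practice h's dlog is unknown

def commit(value, rand):
    return pow(g, value % q, p) * pow(h, rand % q, p) % p

r1, r2 = secrets.randbelow(q), secrets.randbelow(q)
c1 = commit(5, r1)                 # commitment held by one signer
c2 = commit(7, r2)                 # commitment held by another
merged = c1 * c2 % p               # anyone can combine, without the openings
assert merged == commit(5 + 7, r1 + r2)
```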

Scooby: Improved Multi-Party Homomorphic Secret Sharing Based on FHE

We present new constructions of multi-party homomorphic secret sharing (HSS) based on a new primitive that we call homomorphic encryption with decryption to shares (HEDS). Our first construction, which we call Scooby, is based on many popular fully homomorphic encryption (FHE) schemes with a linear decryption property. Scooby achieves an $n$-party HSS for general circuits with complexity $O(|F| + \log n)$, as opposed to $O(n^2 \cdot |F|)$ for the prior best construction based on multi-key FHE. Scooby can be based on (ring-)LWE with a super-polynomial modulus-to-noise ratio. In our second construction, Scrappy, assuming any generic FHE plus HSS for NC1 circuits, we obtain a HEDS scheme which does not require a super-polynomial modulus. While these schemes all require FHE, in another instantiation, Shaggy, we show how, in some cases, it is possible to obtain multi-party HSS without FHE, for a small number of parties and constant-degree polynomials. Finally, we show that our Scooby scheme can be adapted to use multi-key fully homomorphic encryption, giving more efficient spooky encryption and setup-free HSS. This latter scheme, Casper, if concretely instantiated with a B/FV-style multi-key FHE scheme, for functions $F$ which do not require bootstrapping, gives an HSS complexity of $O(n \cdot |F| + n^2 \cdot \log n)$.

Dual Support Decomposition in the Head: Shorter Signatures from Rank SD and MinRank

The MPC-in-the-Head (MPCitH) paradigm is widely used for building post-quantum signature schemes, as it provides a versatile way to design proofs of knowledge based on hard problems. Over the years, the MPCitH landscape has changed significantly, with the most recent improvements coming from VOLE-in-the-Head (VOLEitH) and Threshold-Computation-in-the-Head (TCitH).
While a straightforward application of these frameworks already improves the existing MPCitH-based signatures, we show in this work that we can adapt the arithmetic constraints representing the underlying security assumptions (here called the modeling) to achieve smaller sizes using these new techniques.
More precisely, we explore existing modelings for the rank syndrome decoding (RSD) and MinRank problems and we introduce a new modeling, named dual support decomposition, which achieves better sizes with the VOLEitH and TCitH frameworks by minimizing the size of the witnesses.
While this modeling is naturally more efficient than the other ones for a large set of parameters, we show that it is possible to go even further and explore new areas of parameters. With this new modeling and these parameters, we obtain small witnesses which drastically reduce the size of the arithmetic part of the signature.
We apply the TCitH and VOLEitH frameworks to our new modeling for both RSD and MinRank and compare our results to the NIST candidates RYDE, MiRitH, and MIRA (MPCitH-based schemes from RSD and MinRank). We also note that recent techniques optimizing the sizes of GGM trees are applicable to our schemes and further reduce the signature sizes by a few hundred bytes. We obtain signature sizes below 3.5 kB for 128 bits of security with N = 256 parties (a.k.a. leaves in the GGM trees), going as low as ≈ 2.8 kB with N = 2048, for both RSD and MinRank. This represents an improvement of more than 2 kB compared to the original submissions to the 2023 NIST call for additional signatures.

A Query Reconstruction Attack on the Chase-Shen Substring-Searchable Symmetric Encryption Scheme

Searchable symmetric encryption (SSE) enables queries over symmetrically encrypted databases. To achieve practical efficiency, SSE schemes incur a certain amount of leakage; however, this leads to the possibility of leakage cryptanalysis, i.e., cryptanalytic attacks that exploit the leakage from the target SSE scheme to subvert its data and query privacy guarantees. Leakage cryptanalysis has been widely studied in the context of SSE schemes supporting either keyword queries or range queries, often with devastating consequences. However, little or no attention has been paid to cryptanalysing substring-SSE schemes, i.e., SSE schemes supporting arbitrary substring queries over encrypted data. This is despite their relevance to many real-world applications, e.g., in the context of securely querying outsourced genomic databases. In particular, the first ever substring-SSE scheme, proposed nearly a decade ago by Chase and Shen (PoPETS '15), has not been cryptanalysed to date.
In this paper, we present the first leakage cryptanalysis of the substring-SSE scheme of Chase and Shen. We propose a novel inference-based query reconstruction attack that: (i) exploits a reduced version of the actual leakage profile of their scheme, and (ii) assumes a weaker attack model than the one in which Chase and Shen originally claimed security. We implement our attack and experimentally validate its success rate and efficiency over two real-world datasets. Our attack achieves a high query reconstruction rate with practical efficiency, and scales smoothly to large datasets containing $100,000$ strings.
To the best of our knowledge, ours is the first and only query reconstruction attack on (and the first systematic leakage cryptanalysis of) any substring-SSE scheme to date.

Quantum Procedures for Nested Search Problems with Applications in Cryptanalysis

In this paper we study search problems that arise very often in cryptanalysis: nested search problems, where each search layer has known degrees of freedom and/or constraints.
A generic quantum solution for such problems consists of nesting Grover's quantum search algorithm or the amplitude amplification (QAA) of Brassard et al., obtaining up to a square-root speedup over classical algorithms. However, the analysis of nested Grover or QAA is complex and introduces technicalities that previous works handle in a case-by-case manner. Moreover, straightforward nesting of $\ell$ layers multiplies the complexity by a constant factor $(\pi/2)^\ell$.
In this paper, we aim to remedy both these issues and introduce a generic framework and tools to transform a classical nested search into a quantum procedure. It improves the state-of-the-art in three ways:
1) our framework results in quantum procedures that are significantly simpler to describe and analyze;
2) it reduces the overhead factor from $(\pi/2)^\ell$ to $\sqrt{\ell}$;
3) it is simpler to apply and optimize, without needing manual quantum analysis.
We give generic complexity formulas and show that for concrete instances, numerical optimizations enable further improvements, reducing even more the gap to an exact quadratic speedup. We demonstrate our framework by giving a tighter analysis of quantum attacks on reduced-round AES.
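For a feel of the gap between the two overhead factors (simple arithmetic on the formulas quoted above, nothing more):
```python
# Nesting overhead: straightforward nesting pays (pi/2)^l, the framework sqrt(l).
import math

for l in (2, 3, 4, 6, 8):
    print(f"l = {l}: (pi/2)^l = {(math.pi / 2) ** l:8.2f}   sqrt(l) = {math.sqrt(l):.2f}")
```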

Classic McEliece Hardware Implementation with Enhanced Side-Channel and Fault Resistance

In this work, we propose the first hardware implementation of Classic McEliece protected with countermeasures against Side-Channel Attacks (SCA) and Fault Injection Attacks (FIA). Classic McEliece is one of the leading candidates for Key Encapsulation Mechanisms (KEMs) in the ongoing round 4 of the NIST standardization process for post-quantum cryptography. In particular, we implement a range of generic countermeasures against SCA and FIA, especially protecting vulnerable operations, such as the additive Fast Fourier Transform (FFT) and Gaussian elimination, that have been targeted by prior SCA and FIA attacks. We also perform a detailed SCA evaluation demonstrating no leakage even with 100,000 traces (an improvement of more than 100× in the number of traces compared to the unprotected implementation). This comes at a modest total area overhead of between 4× and 7×, depending on the type of implemented SCA countermeasure. Furthermore, we present a thorough ASIC benchmark for the SCA- and FIA-protected Classic McEliece design.

Scutum: Temporal Verification for Cross-Rollup Bridges via Goal-Driven Reduction

Scalability remains a key challenge for blockchain adoption. Rollups—especially zero-knowledge (ZK) and optimistic rollups—address this by processing transactions off-chain while maintaining Ethereum’s security, thus reducing gas fees and improving speeds. Cross-rollup bridges like Orbiter Finance enable seamless asset transfers across various Layer 2 (L2) rollups and between L2 and Layer 1 (L1) chains. However, the increasing reliance on these bridges raises significant security concerns, as evidenced by major hacks like those of Poly Network and Nomad Bridge, resulting in losses of hundreds of millions of dollars. Traditional security analysis methods such as static analysis and fuzzing are inadequate for cross-rollup bridges due to their complex designs involving multiple entities, smart contracts, and zero-knowledge circuits. These systems require reasoning about temporal sequences of events across different entities, which exceeds the capabilities of conventional analyzers.
In this paper, we introduce a scalable verifier to systematically assess the security of cross-rollup bridges. Our approach features a comprehensive multi-model framework that captures both individual behaviors and complex interactions using temporal properties. To enhance scalability, we approximate temporal safety verification through reachability analysis of a graph representation of the contracts, leveraging advanced program analysis techniques. Additionally, we incorporate a conflict-driven refinement loop to eliminate false positives and improve precision. Our evaluation on mainstream cross-rollup bridges, including Orbiter Finance, uncovered multiple zero-day vulnerabilities, demonstrating the practical utility of our method. The tool also exhibited favorable runtime performance, enabling efficient analysis suitable for real-time or near-real-time applications.
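To illustrate the reduction from a temporal safety property to reachability at the highest level (the states, events, and property below are invented placeholders, not Scutum's actual model), consider this sketch:

```python
# Approximate a temporal safety property ("once paid on L2, no refund on L1")
# by reachability over a toy graph abstraction of bridge contract states.
from collections import deque

transitions = {                      # (state, event) -> next state
    ("deposited_L1", "proof_posted"): "verified_L2",
    ("verified_L2", "withdraw"): "paid_L2",
    ("paid_L2", "refund"): "refunded_L1",   # buggy edge enabling double payout
}

def reachable(start):
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for (src, _event), dst in transitions.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen

# The safety property fails: a refund state is reachable after payment.
print("violation:", "refunded_L1" in reachable("paid_L2"))  # violation: True
```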

Private Neural Network Training with Packed Secret Sharing

We present a novel approach for training neural networks that leverages the packed Shamir secret sharing scheme. For specific training protocols based on the Shamir scheme, we demonstrate how to realize the conversion between packed sharing and Shamir sharing without additional communication overhead. We begin by introducing a method to locally convert between Shamir sharings with secrets stored at different slots. Building upon this conversion, we achieve free conversion from packed sharing to Shamir sharing. We then show how to embed the conversion from Shamir sharing to packed sharing into the truncation used during the training process without incurring additional communication costs. With free conversion between packed sharing and Shamir sharing, we illustrate how to utilize the packed scheme to parallelize certain computational steps involved in neural network training. On this basis, we propose training protocols with information-theoretic security among general $n$ parties in the semi-honest model. The experimental results demonstrate that, compared to previous work in this domain, applying the packed scheme effectively improves training efficiency. Specifically, when packing $4$ secrets into a single sharing, we observe a reduction of more than $20\%$ in communication overhead and an improvement of over $10\%$ in training speed in the WAN setting.
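For readers unfamiliar with packing, the following sketch shows the basic mechanism of packed Shamir sharing (the field, parameters, and slot conventions are illustrative; the paper's conversion and truncation tricks are not reproduced):

```python
# Packed Shamir sharing: k secrets stored at slots -1..-k of a single
# polynomial of degree k + t - 1; reconstruction needs k + t shares.
import random

P = 2_147_483_647  # prime field modulus (2^31 - 1)

def interpolate(points, x, p=P):
    """Evaluate at x the unique polynomial through the given points."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def packed_share(secrets_, n, t, p=P):
    k = len(secrets_)
    pts = [(p - (i + 1), s) for i, s in enumerate(secrets_)]      # slots -1..-k
    pts += [(n + 1 + i, random.randrange(p)) for i in range(t)]   # random slots
    return [(x, interpolate(pts, x)) for x in range(1, n + 1)]    # n shares

def packed_reconstruct(shares, k, p=P):
    return [interpolate(shares, p - (i + 1)) for i in range(k)]

shares = packed_share([11, 22, 33, 44], n=7, t=2)   # k=4 secrets, one sharing
print(packed_reconstruct(shares[:6], k=4))           # any k+t=6 shares suffice
```

Packing $k$ secrets into one sharing is what enables the parallelized computation steps and the communication savings reported above.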

Symmetric Encryption on a Quantum Computer

Classical symmetric encryption algorithms use $N$ bits of a shared secret key to transmit $N$ bits of a message over a one-way channel in an information-theoretically secure manner. This paper proposes a hybrid quantum-classical symmetric cryptosystem that uses a quantum computer to generate the secret key. The algorithm leverages quantum circuits to encrypt a message using a one-time-pad-type technique whilst requiring a shorter classical key. We show that for an $N$-qubit circuit, the maximum number of bits needed to specify a quantum circuit grows as $N^{3/2}$, while the maximum number of bits that the quantum circuit can encode grows as $N^2$. We do not utilise the full expressive capability of the quantum circuits as we focus on second-order Pauli expectation values only. The potential exists to encode an exponential number of bits using higher orders of Pauli expectation values. Moreover, using a parameterised quantum circuit (PQC), we could further augment the amount of securely shared information by introducing a secret-key dependence on some of the PQC parameters. The algorithm may be suitable for an early fault-tolerant quantum computer implementation, as some degree of noise can be tolerated. Simulation results are presented along with experimental results on the 84-qubit Rigetti Ankaa-2 quantum computer.
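The claimed scaling already shows where the advantage comes from; a minimal sketch (constants ignored, exponents taken from the abstract):

```python
# Bits to specify the N-qubit circuit (~ the classical key) grow as N^(3/2),
# while the bits the circuit can encode (~ the message) grow as N^2, so the
# key can be roughly sqrt(N) times shorter than the message.
for N in (16, 64, 256, 1024):
    key_bits = N ** 1.5
    msg_bits = N ** 2
    print(f"N={N:5d}: key ~{key_bits:9.0f} bits, "
          f"message ~{msg_bits:9.0f} bits, ratio ~{msg_bits / key_bits:5.1f}")
```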

MIFARE Classic: exposing the static encrypted nonce variant

MIFARE Classic smart cards, developed and licensed by NXP, are widely used but have been subjected to numerous attacks over the years. Despite the introduction of new versions, these cards have remained vulnerable, even in card-only scenarios. In 2020, the FM11RF08S, a new variant of MIFARE Classic, was released by the leading Chinese manufacturer of unlicensed "MIFARE compatible" chips. This variant features specific countermeasures designed to thwart all known card-only attacks and is gradually gaining market share worldwide.
In this paper, we present several attacks and unexpected findings regarding the FM11RF08S. Through empirical research, we discovered a hardware backdoor and successfully cracked its key. This backdoor enables any entity with knowledge of it to compromise all user-defined keys on these cards, without prior knowledge of those keys, simply by accessing the card for a few minutes.
Additionally, our investigation into older cards uncovered another hardware backdoor key that was common to several manufacturers.

Hybrid Zero-Knowledge from Garbled Circuits

We present techniques for constructing zero-knowledge argument systems from garbled circuits, extending the GC-to-ZK compiler by Jawurek, Kerschbaum, and Orlandi (ACM CCS 2013) and the GC-to-Σ compiler by Hazay and Venkitasubramaniam (J. Cryptol. 2020) in the following directions:
- Our schemes are hybrid, commit-and-prove zero-knowledge argument systems that establish a connection between secrets embedded in algebraic commitments and a relation represented by a Boolean circuit.
- Our schemes incorporate diverse cross-domain secrets embedded within distinct algebraic commitments, simultaneously supporting Pedersen-like commitments and lattice-based commitments.
As an application, we develop circuit-represented compositions of Σ-protocols that support attractive access structures, such as weighted thresholds, that can be easily represented by a small circuit. For predicates $P_1, \ldots, P_n$ individually associated with a Σ-protocol, and a predicate $C$ represented by a Boolean circuit, we construct a Σ-protocol for proving $C(P_1, \ldots, P_n) = 1$. This result answers positively an open question posed by Abe et al. at TCC 2021.
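As a toy illustration of the kind of predicate such a composition supports (the function below is an invented example, not the paper's protocol), a weighted threshold is indeed a small circuit:

```python
# C(P1, ..., Pn) = 1 iff the total weight of satisfied predicates meets a
# threshold: an access structure easily represented by a small circuit.
def weighted_threshold(bits, weights, threshold):
    return int(sum(w for b, w in zip(bits, weights) if b) >= threshold)

# Three predicates with weights 3, 2, 1; access requires total weight >= 4.
print(weighted_threshold([1, 1, 0], weights=[3, 2, 1], threshold=4))  # 1
print(weighted_threshold([1, 0, 1], weights=[3, 2, 1], threshold=4))  # 1
print(weighted_threshold([0, 1, 1], weights=[3, 2, 1], threshold=4))  # 0
```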

How to Delete Without a Trace: Certified Deniability in a Quantum World

Is it possible to comprehensively destroy a piece of quantum information, so that nothing is left behind except the memory that one had it at some point? For example, various works, most recently Morimae, Poremba, and Yamakawa (TQC '24), show how to construct a signature scheme with certified deletion where a user who deletes a signature on $m$ cannot later produce a signature for $m$. However, in all of the existing schemes, even after deletion the user is still able to keep irrefutable evidence that $m$ was signed, and thus they do not fully capture the spirit of deletion.
In this work, we initiate the study of certified deniability in order to obtain a more comprehensive notion of deletion. Certified deniability uses a simulation-based security definition, ensuring that any information the user has kept after deletion could have been learned without being given the deletable object to begin with; meaning that deletion leaves no trace behind! We define and construct two non-interactive primitives that satisfy certified deniability in the quantum random oracle model: signatures and non-interactive zero-knowledge arguments (NIZKs). As a consequence, for example, it is not possible to delete a signature/NIZK and later provide convincing evidence that it used to exist. Notably, our results utilize uniquely quantum phenomena to bypass Pass's (CRYPTO '03) celebrated result showing that deniable NIZKs are impossible even in the random oracle model.

Fast Two-party Threshold ECDSA with Proactive Security

We present a new construction of two-party, threshold ECDSA, building on a 2017 scheme of Lindell and improving his scheme in several ways.
ECDSA signing is notoriously hard to distribute securely, due to non-linearities in the signing function. Lindell's scheme uses Paillier encryption to encrypt one party's key share and handle these non-linearities homomorphically, while elegantly avoiding any expensive zero knowledge proofs over the Paillier group during the signing process. However, the scheme pushes that complexity into key generation. Moreover, avoiding ZK proofs about Paillier ciphertexts during signing comes with a steep price -- namely, the scheme requires a ``global abort'' when a malformed ciphertext is detected, after which an entirely new key must be generated.
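The non-linearity at issue is visible directly in the signing equation; a minimal sketch (toy arithmetic over the P-256 group order, with the elliptic-curve operations elided):

```python
# ECDSA: s = k^{-1} * (H(m) + r*x) mod q. The nonce inverse k^{-1} multiplies
# the key x, so additive shares x = x1 + x2 and k = k1 + k2 give neither party
# a locally computable share of s; hence Paillier (or similar) is used.
q = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # P-256

def ecdsa_s(k, x, r, h):
    return pow(k, -1, q) * (h + r * x) % q

# k^{-1} is not linear in (k1, k2): inverting a sum is not summing inverses.
k1, k2 = 7, 11
assert pow(k1 + k2, -1, q) != (pow(k1, -1, q) + pow(k2, -1, q)) % q
```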
We overcome all of these issues with a proactive Refresh procedure. Since the Paillier decryption key is part of the secret that must be proactively refreshed, our first improvement is to radically accelerate key generation by replacing one of Lindell's ZK proofs -- which requires 80 Paillier ciphertexts for statistical security $2^{-40}$ -- with a much faster "weak" proof that requires only 2 Paillier ciphertexts, and which proves a weaker statement about a Paillier ciphertext that we show is sufficient in the context of our scheme. Secondly, our more efficient key generation procedure also makes frequent proactive Refreshes practical. Finally, we show that adding noise to one party's key share suffices to avoid the need to reset the public verification key when certain bad behavior is detected. Instead, we prove that our Refresh procedure, performed after each detection, suffices for addressing the attack, allowing the system to continue functioning without disruption to applications that rely on the verification key.
Our scheme is also very efficient: competitive with the best constructions that do not provide proactive security, and state-of-the-art among the few results that do. Our optimizations to ECDSA key generation speed up runtime and reduce bandwidth relative to Lindell's key generation by factors of 7 and 13, respectively. Our key generation protocol requires 20% less bandwidth than existing constructions, completes in only 3 protocol messages, and executes much faster than all but OT-based key generation. For ECDSA signing, our extra Refresh protocol does add a 10× latency and 5× bandwidth overhead compared to Lindell. However, this still fits in 150 ms runtime and about 5.4 KB of messages when run in our AWS cluster benchmark.

A Tight Analysis of GHOST Consistency

The GHOST protocol has been proposed as an improvement to the Nakamoto consensus mechanism that underlies Bitcoin. In contrast to the Nakamoto fork-choice rule, the GHOST rule selects a chain according to weights computed over subtrees rather than individual paths. This mechanism has been adopted by a variety of consensus protocols and is part of the currently deployed protocol supporting Ethereum.
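A minimal sketch of the fork-choice difference (toy block tree; illustrative only):

```python
# GHOST: descend from genesis into the child with the heaviest subtree,
# counting all blocks in the subtree rather than only the longest path.
children = {"G": ["A", "B"], "A": [], "B": ["C", "D"], "C": [], "D": []}

def subtree_size(block):
    return 1 + sum(subtree_size(c) for c in children[block])

def ghost_head(block="G"):
    while children[block]:
        block = max(children[block], key=subtree_size)
    return block

print(ghost_head())  # "C": B's subtree (3 blocks) outweighs A's single block,
                     # and the C/D tie is broken by iteration order, which is
                     # exactly the tiebreaking convention analyzed below.
```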
We establish an exact characterization of the security region of the GHOST protocol, identifying the relationship between the rate of honest block production, the rate of adversarial block production, and network delays that guarantee that the protocol reaches consensus. In contrast to the closely related Nakamoto consensus protocol, we find that the region depends on the convention used by the protocol for tiebreaking; we establish tight results for both adversarial tiebreaking, in which ties are broken adversarially in order to frustrate consensus, and deterministic tiebreaking, in which ties between pairs of blocks are broken consistently throughout an execution. We provide explicit attacks for both conventions which stall consensus outside of the security region.
Our results show that the security region of GHOST can be strictly improved by incorporating a deterministic tiebreaking mechanism; in either case, however, the resulting security region is inferior to that of Nakamoto consensus.

Compiled Nonlocal Games from any Trapdoor Claw-Free Function

A recent work of Kalai et al. (STOC 2023) shows how to compile any multi-player nonlocal game into a protocol with a single computationally-bounded prover. Subsequent works have built on this to develop new cryptographic protocols, where a completely classical client can verify the validity of quantum computation done by a quantum server. Their compiler relies on the existence of quantum fully-homomorphic encryption.
In this work, we propose a new compiler for transforming nonlocal games into single-prover protocols. Our compiler is based on the framework of measurement-based quantum computation. It can be instantiated assuming the existence of \emph{any} trapdoor function that satisfies the claw-freeness property. Leveraging results by Natarajan and Zhang (FOCS 2023) on compiled nonlocal games, our work implies the existence of new protocols to classically verify quantum computation from potentially weaker computational assumptions than previously known.

OPTIMSM: FPGA hardware accelerator for Zero-Knowledge MSM

The Multi-Scalar Multiplication (MSM) is the main barrier to accelerating Zero-Knowledge applications. In recent years, hardware acceleration of this algorithm on both FPGA and GPU has become a popular research topic and the subject of a multi-million dollar prize competition (ZPrize). This work presents OPTIMSM: Optimized Processing Through Iterative Multi-Scalar Multiplication, a novel accelerator for the MSM algorithm over any Elliptic Curve (EC) that improves upon the Pippenger algorithm. A new iteration technique is introduced to decouple the required buckets from the window size, resulting in fewer EC computations for the same on-chip memory resources. Furthermore, we combine known optimizations from the literature for the first time to achieve additional latency improvements. Our enhanced MSM implementation significantly reduces computation time, achieving a speedup of up to $12.77\times$ compared to recent FPGA implementations. Specifically, for the BLS12-381 curve, we reduce the computation time for an MSM of size $2^{24}$ to 914 ms using a single compute unit on the U55C FPGA, or to 231 ms using four U55C devices. These results indicate a substantial improvement in efficiency, paving the way for more scalable and efficient Zero-Knowledge proof systems.
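A minimal sketch of the underlying bucket (Pippenger) technique, written over plain integers so that the group operation is ordinary addition (illustrative only; OPTIMSM's iteration technique and EC arithmetic are not reproduced):

```python
# Windowed bucket method: per window, add each point into the bucket indexed
# by its scalar digit, then combine buckets with a running suffix accumulation.
def msm_bucket(scalars, points, window=4):
    total, shift = 0, 0
    while any(s >> shift for s in scalars):
        buckets = [0] * (1 << window)
        for s, p in zip(scalars, points):
            buckets[(s >> shift) & ((1 << window) - 1)] += p
        running = acc = 0
        for j in range(len(buckets) - 1, 0, -1):  # acc = sum_j j * buckets[j]
            running += buckets[j]
            acc += running
        total += acc << shift
        shift += window
    return total

scalars, points = [5, 11, 3], [7, 2, 9]
assert msm_bucket(scalars, points) == sum(s * p for s, p in zip(scalars, points))
```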

Cloning Games, Black Holes and Cryptography

The no-cloning principle has played a foundational role in quantum information and cryptography. Following a long-standing tradition of studying quantum mechanical phenomena through the lens of interactive games, Broadbent and Lord (TQC 2020) formalized cloning games in order to quantitatively capture no-cloning in the context of unclonable encryption schemes.
The conceptual contribution of this paper is the new, natural notion of Haar cloning games, together with two applications. In the area of black-hole physics, our game reveals that, in an idealized model of a black hole which features Haar random (or pseudorandom) scrambling dynamics, the information from infalling entangled qubits can only be recovered from either the interior or the exterior of the black hole---but never from both places at the same time. In the area of quantum cryptography, our game helps us construct succinct unclonable encryption schemes from the existence of pseudorandom unitaries, thereby, for the first time, bridging the gap between ``MicroCrypt'' and unclonable cryptography. The technical contribution of this work is a tight analysis of Haar cloning games which requires us to overcome many long-standing barriers in our understanding of cloning games:
1. Are there cloning games which admit no non-trivial winning strategies? Resolving this particular question turns out to be crucial for our application to black-hole physics. Existing work analyzing the $n$-qubit BB84 game and the subspace coset game achieves only the bounds of $2^{-0.228n}$ and $2^{-0.114n+o(n)}$, respectively, while the trivial adversarial strategy wins with probability $2^{-n}$. We show that the Haar cloning game is the hardest cloning game, by demonstrating a worst-case to average-case reduction for a large class of games which we refer to as oracular cloning games. We then show that the Haar cloning game admits no non-trivial winning strategies.
2. All existing works analyze $1\mapsto 2$ cloning games; can we prove bounds on $t\mapsto t+1$ games for large $t$? Such bounds are crucial in our application to unclonable cryptography. Unfortunately, the BB84 game is not even $2\mapsto 3$ secure, and the subspace coset game is not $t\mapsto t+1$ secure for a polynomially large $t$. We show that the Haar cloning game is $t\mapsto t+1$ secure provided that $t = o(\log n / \log \log n)$, and we conjecture that this holds for $t$ that is polynomially large (in $n$).
Answering these questions provably requires us to go beyond existing methods (Tomamichel, Fehr, Kaniewski and Wehner, New Journal of Physics 2013). In particular, we show a new technique for analyzing cloning games with respect to binary phase states through the lens of binary subtypes, and combine it with novel bounds on the operator norms of block-wise tensor products of matrices.

BrakingBase - a linear prover, poly-logarithmic verifier, field agnostic polynomial commitment scheme

We propose a Polynomial Commitment Scheme (PCS), called BrakingBase, which allows a prover to commit to multilinear (or univariate) polynomials with $n$ coefficients in $O(n)$ time. The evaluation protocol of BrakingBase operates with an $O(n)$ time-complexity for the prover, while the verifier time-complexity and proof-complexity are $O(\lambda \log^2 n)$, where $\lambda$ is the security parameter. Notably, BrakingBase is field-agnostic, meaning it can be instantiated over any field of sufficiently large size. Additionally, BrakingBase can be combined with the Polynomial Interactive Oracle Proof (PIOP) from Spartan (Crypto 2020) to yield a Succinct Non-interactive ARgument of Knowledge (SNARK) with a linear-time prover, as well as poly-logarithmic complexity for both the verifier runtime and the proof size. We obtain our PCS by combining the Brakedown and Basefold PCS. The commitment protocol of BrakingBase is similar to that of Brakedown. The evaluation protocol of BrakingBase improves upon Brakedown’s verifier work by reducing it through multiple instances of the sum-check protocol. Basefold PCS is employed to commit to and later evaluate the multilinear extension (MLE) of the witnesses involved in the sum-check protocol at random points. This includes the MLE corresponding to the parity-check matrix of the linear-time encodable code used in Brakedown. We show that this matrix is sparse and use the Spark compiler from Spartan to evaluate its multilinear extension at a random point. We implement BrakingBase and compare its performance to Brakedown and Basefold over a 128-bit prime field.
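Since the evaluation protocol leans heavily on the sum-check protocol, a minimal sketch of one sum-check round may help (toy field and polynomial; not BrakingBase's actual instances):

```python
# Sum-check, round 1: the prover claims H = sum of f over {0,1}^3 and sends
# the univariate g1(X) = sum over (b, c) of f(X, b, c); the verifier checks
# g1(0) + g1(1) == H, draws a random challenge r1, and recurses on f(r1,.,.).
P = 97  # toy prime field

def f(x1, x2, x3):
    return (3*x1*x2 + 5*x2*x3 + x1 + 7) % P

H = sum(f(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)) % P

def g1(X):
    return sum(f(X, b, c) for b in (0, 1) for c in (0, 1)) % P

assert (g1(0) + g1(1)) % P == H
r1 = 42                      # verifier's challenge (fixed here for the demo)
claim_next = g1(r1)          # next round proves: sum over (b,c) of f(r1, b, c)
```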

Constructing Dembowski–Ostrom permutation polynomials from upper triangular matrices

We establish a one-to-one correspondence between Dembowski-Ostrom (DO) polynomials and upper triangular matrices. Based on this correspondence, we give a bijection between DO permutation polynomials and a special class of upper triangular matrices, and construct a new batch of DO permutation polynomials. To the best of our knowledge, almost all other known DO permutation polynomials are defined over finite fields $\mathbb{F}_{2^n}$ where $n$ contains odd factors (see Table 1). However, there are no restrictions on $n$ in our results; in particular, the case $n=2^m$ has not been studied in the literature. For example, we provide a simple necessary and sufficient condition to determine when $\gamma\, Tr(\theta_{i}x)Tr(\theta_{j}x) + x$ is a DO permutation polynomial. In addition, when the upper triangular matrix degenerates into a diagonal matrix and the elements on the main diagonal form a basis of $\mathbb{F}_{q^{n}}$ over $\mathbb{F}_{q}$, this diagonal matrix corresponds to all linearized permutation polynomials. In summary, we construct several new DO permutation polynomials, and our results can be viewed as an extension of linearized permutation polynomials.
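Permutation conditions of this shape can be checked by brute force in small fields; a hedged sketch over $\mathbb{F}_{2^4}$ (the field representation and chosen elements are illustrative, and the paper's exact algebraic criterion is not reproduced):

```python
# Search for (gamma, a, b) such that g(x) = gamma*Tr(a*x)*Tr(b*x) + x
# permutes GF(2^4), represented as 4-bit integers modulo x^4 + x + 1.
MOD = 0b10011  # x^4 + x + 1, irreducible over GF(2)

def gmul(a, b):
    """Carry-less multiplication reduced modulo MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= MOD
        b >>= 1
    return r

def trace(x):
    """Tr(x) = x + x^2 + x^4 + x^8; takes values in {0, 1}."""
    t, p = 0, x
    for _ in range(4):
        t ^= p
        p = gmul(p, p)
    return t

def is_permutation(gamma, a, b):
    image = {gmul(gamma, trace(gmul(a, x)) & trace(gmul(b, x))) ^ x
             for x in range(16)}
    return len(image) == 16

hits = [(g, a, b) for g in range(1, 16) for a in range(1, 16)
        for b in range(1, 16) if is_permutation(g, a, b)]
print(len(hits), hits[:3])
```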

RAD-FS: Remote Timing and Power SCA Security in DVFS-Augmented Ultra-Low-Power Embedded Systems

High-performance crypto-engines have become crucial components in modern System-on-Chip (SoC) architectures across platforms, from servers to edge-IoT devices. Alas, their secure operation faces a significant obstacle: information leakage exposed through Side-Channel Analysis (SCA). Adversaries apply statistical analysis to measured signatures (e.g., power and timing) generated during operations such as encryption in order to extract secrets. Mathematical countermeasures against such attacks often impose substantial power-performance-area overheads. Dynamic Voltage and Frequency Scaling (DVFS) techniques provide power-efficiency by varying power consumption according to workload; these modulations are called power-states. Unintentionally, DVFS introduces new inherent weaknesses exploitable by malicious actors: power-states leak information in both power and timing side-channels, measurable in software and hardware. We introduce a method to increase side-channel resistance using integrated voltage regulators and DVFS: (1) pushing known prior art in the topic to the Ultra-Low-Power (ULP) regime; (2) introducing, for the first time, a mechanism to help counteract the inherent weakness of DVFS with respect to SCA; (3) providing measurements performed on a 40nm-process ULP PLS15 test-chip down to a 580 mV power supply; (4) offering improved and parameterized resistance to the remote-timing vulnerabilities inherent to DVFS. We present various results and perform a detailed analysis comparing performance and security to prior art. Importantly, our solution is configurable in terms of security, maintaining degrees of freedom for power-optimization of DVFS.

A Composability Treatment of Bitcoin's Transaction Ledger with Variable Difficulty

As the first proof-of-work (PoW) permissionless blockchain, Bitcoin aims at maintaining a decentralized yet consistent transaction ledger as protocol participants (“miners”) join and leave as they please. This is achieved by means of a subtle PoW difficulty adjustment mechanism that adapts to the perceived block generation rate, and important steps have been taken in previous work to provide a rigorous analysis of the conditions (such as bounds on dynamic participation) that are sufficient for Bitcoin’s security properties to be ascertained.
Such existing analysis, however, is property-based, and as such only guarantees security when the protocol is run $\textbf{in isolation}$. In this paper we present the first (to our knowledge) simulation-based analysis of the Bitcoin ledger in the dynamic setting where it operates, and show that the protocol abstraction known as the Bitcoin backbone protocol emulates, under certain participation restrictions, Bitcoin’s intended specification. Our formulation and analysis extend the existing Universally Composable treatment for the fixed-difficulty setting, and develop techniques that might be of broader applicability, in particular to other composable formulations of blockchain protocols that rely on difficulty adjustment.

Exploring the Advantages and Challenges of Fermat NTT in FHE Acceleration

Recognizing the importance of fast and resource-efficient polynomial multiplication in homomorphic encryption, in this paper we design a multiplier-less number theoretic transform using a Fermat number as an auxiliary modulus. To make this algorithm scale with the polynomial degree, we apply a univariate-to-multivariate polynomial ring transformation.
We develop an accelerator architecture for fully homomorphic encryption using these algorithmic techniques for efficient multivariate polynomial multiplication. For practical homomorphic encryption application benchmarks, the hardware accelerator achieves a 1,200$\times$ speed-up compared to software implementations. Finally, we conclude the paper by discussing the advantages and limitations of the proposed polynomial multiplication method.
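To see why a Fermat modulus eliminates multipliers, note that $2$ is a root of unity modulo $2^{2^t}+1$, so every twiddle multiplication is a shift; a minimal sketch (naive transform at a toy size; not the paper's multivariate architecture):

```python
# NTT modulo the Fermat number F = 2^8 + 1 = 257 with root of unity 2
# (2^8 = -1 and 2^16 = 1 mod 257): all twiddle products are left shifts.
F = 257
N = 16

def ntt(a):
    return [sum(a[j] << (i * j % N) for j in range(N)) % F for i in range(N)]

def intt(A):
    inv_n = pow(N, -1, F)
    return [inv_n * sum(A[j] << (-i * j % N) for j in range(N)) % F
            for i in range(N)]

a = list(range(N))
assert intt(ntt(a)) == a
```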

Anonymous Public-Key Quantum Money and Quantum Voting

Quantum information allows us to build quantum money schemes, where a bank can issue banknotes in the form of authenticatable quantum states that cannot be cloned or counterfeited: a user in possession of $k$ banknotes cannot produce $k+1$ banknotes. Similar to paper banknotes, in existing quantum money schemes a banknote consists of an unclonable quantum state and a classical serial number, signed by the bank. Thus, they lack one of the most fundamental properties cryptographers look for in a currency scheme: privacy. In this work, we first further develop the formal definitions of privacy for quantum money schemes. Then, we construct the first public-key quantum money schemes that satisfy these security notions. Namely,
• Assuming existence of indistinguishability obfuscation and hardness of Learning with Errors, we construct a public-key quantum money scheme with anonymity against users and traceability by authorities.
Since it is a policy choice whether authorities should be able to track banknotes or not, we also construct an untraceable money scheme, where no one (not even the authorities) can track banknotes.
• Assuming existence of indistinguishability obfuscation and hardness of Learning with Errors, we construct a public-key quantum money scheme with untraceability.
Further, we show that the no-cloning principle, a result of quantum mechanics, allows us to construct schemes, with security guarantees that are classically impossible, for a seemingly unrelated application: voting!
• Assuming existence of indistinguishability obfuscation and hardness of Learning with Errors, we construct a universally verifiable quantum voting scheme with classical votes.
Finally, as a technical tool, we introduce the notion of publicly rerandomizable encryption with strong correctness, where no adversary is able to produce a malicious ciphertext and a malicious random tape such that the ciphertext decrypts to different values before and after rerandomization (with the malicious tape). We believe this might be of independent interest.
• Assuming the (quantum) hardness of Learning with Errors, we construct a (post-quantum) classical publicly rerandomizable encryption scheme with strong correctness.
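As a toy illustration of public rerandomizability (classic ElGamal, chosen only because it makes the property easy to see; the paper's scheme is LWE-based and post-quantum, and the toy parameters below are far too small for security):

```python
# Anyone holding only the public key can refresh an ElGamal ciphertext;
# a correct rerandomization never changes what the ciphertext decrypts to.
import secrets

p, g = 2_147_483_647, 5                     # toy group parameters
x = secrets.randbelow(p - 2) + 1            # secret key
h = pow(g, x, p)                            # public key

def enc(m):
    r = secrets.randbelow(p - 2) + 1
    return pow(g, r, p), m * pow(h, r, p) % p

def rerandomize(c):                         # uses only public values
    c1, c2 = c
    s = secrets.randbelow(p - 2) + 1
    return c1 * pow(g, s, p) % p, c2 * pow(h, s, p) % p

def dec(c):
    c1, c2 = c
    return c2 * pow(c1, p - 1 - x, p) % p   # c2 * c1^{-x}

c = enc(1234)
assert dec(rerandomize(c)) == dec(c) == 1234
```

Strong correctness asks for more than this toy shows: the decryption invariant must hold even for maliciously formed ciphertexts and random tapes.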