Papers updated in last 365 days (2814 results)
Pushing the QAM method for finding APN functions further
APN functions offer optimal resistance to differential attacks and are instrumental in the design of block ciphers in cryptography. While finding APN functions is very difficult in general, a promising way to construct APN functions is through symmetric matrices called Quadratic APN matrices (QAM). It is known that the search space for the QAM method can be reduced by means of orbit partitions induced by linear equivalences. This paper builds upon and improves these approaches in the case of homogeneous quadratic functions over $\mathbb{F}_{2^n}$ with coefficients in the subfield $\mathbb{F}_{2^m}$. We propose a new approach for computing orbit partitions in cases where the large search space previously made this infeasible, and apply it to the dimensions $(n,m)=(8,4)$ and $(n,m)=(9,3)$. We find and classify, up to CCZ-equivalence, all quadratic APN functions for the cases $(n,m)=(8,2)$ and $(n,m)=(10,1)$, discovering a new APN function in dimension $8$. Based on partial searches for the case $(n,m)=(10,2)$, we also show that an exhaustive search for it is infeasible for the QAM method using currently available means.
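For readers new to the area, the APN condition is easy to test by brute force in small dimensions. The sketch below is ours and is unrelated to the paper's QAM search machinery; it checks the classical APN monomial $x^3$ over $\mathbb{F}_{2^3}$ (a function $f$ is APN iff $x \mapsto f(x \oplus a) \oplus f(x)$ is at most 2-to-1 for every nonzero $a$):

```python
# Brute-force APN check over GF(2^n); toy illustration, not the QAM method.

def gf_mul(x, y, n=3, mod=0b1011):   # multiplication in GF(2^3), poly x^3+x+1
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x >> n:                   # reduce once the degree reaches n
            x ^= mod
    return r

def is_apn(f, n):
    size = 1 << n
    for a in range(1, size):
        counts = {}
        for x in range(size):
            b = f(x ^ a) ^ f(x)      # the derivative in direction a
            counts[b] = counts.get(b, 0) + 1
        if max(counts.values()) > 2: # differential uniformity exceeds 2
            return False
    return True

cube = lambda x: gf_mul(gf_mul(x, x), x)   # the Gold monomial x^3
print(is_apn(cube, 3))                     # True
```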
Verifiable Quantum Advantage without Structure
We show that the following hold, unconditionally unless otherwise stated, relative to a random oracle:
- There are NP search problems solvable by quantum polynomial-time machines but not classical probabilistic polynomial-time machines.
- There exist functions that are one-way, and even collision resistant, against classical adversaries but are easily inverted quantumly. Similar separations hold for digital signatures and CPA-secure public key encryption (the latter requiring the assumption of a classically CPA-secure encryption scheme). Interestingly, the separation does not necessarily extend to the case of other cryptographic objects such as PRGs.
- There are unconditional publicly verifiable proofs of quantumness with the minimal rounds of interaction: for uniform adversaries, the proofs are non-interactive, whereas for non-uniform adversaries the proofs are two message public coin.
- Our results do not appear to contradict the Aaronson-Ambanis conjecture. Assuming this conjecture, there exists publicly verifiable certifiable randomness, again with the minimal rounds of interaction.
By replacing the random oracle with a concrete cryptographic hash function such as SHA2, we obtain plausible Minicrypt instantiations of the above results. Previous analogous results all required substantial structure, either in terms of highly structured oracles and/or algebraic assumptions in Cryptomania and beyond.
Ideal-to-isogeny algorithm using 2-dimensional isogenies and its application to SQIsign
The Deuring correspondence is a correspondence between supersingular elliptic curves and quaternion orders. Under this correspondence, an isogeny between elliptic curves corresponds to a quaternion ideal. This correspondence plays an important role in isogeny-based cryptography and several algorithms to compute an isogeny corresponding to a quaternion ideal (ideal-to-isogeny algorithms) have been proposed. In particular, SQIsign is a signature scheme based on the Deuring correspondence and uses an ideal-to-isogeny algorithm. In this paper, we propose a novel ideal-to-isogeny algorithm using isogenies of dimension $2$. Our algorithm is based on Kani's reducibility theorem, which gives a connection between isogenies of dimension $1$ and $2$. By choosing the characteristic $p$ of the base field to be of the form $2^fg - 1$ for a small odd integer $g$, our algorithm uses only $2$-isogenies and $(2, 2)$-isogenies, with all operations carried out in $\mathbb{F}_{p^2}$. We apply our algorithm to SQIsign and compare the efficiency of the new algorithm with the existing one. Our analysis shows that the key generation and the signing in our algorithm are at least twice as fast as those in the existing algorithm at the NIST security level 1. This advantage becomes more significant at higher security levels. In addition, our algorithm also improves the efficiency of the verification in SQIsign.
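As a small illustration of the stated prime shape (our sketch, not the paper's parameter selection), one can enumerate odd cofactors $g$ for which $p = 2^f g - 1$ is prime:

```python
# Search for characteristics p = 2^f * g - 1 with a small odd cofactor g,
# the shape that makes long 2-isogeny chains available. Illustrative only.
from sympy import isprime

def twofg_cofactors(f, g_max=200):
    """Odd g < g_max for which 2^f * g - 1 is prime."""
    return [g for g in range(3, g_max, 2) if isprime((1 << f) * g - 1)]

print(twofg_cofactors(128))   # any hit yields a candidate characteristic
```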
Siniel: Distributed Privacy-Preserving zkSNARK
Zero-knowledge Succinct Non-interactive Argument of Knowledge (zkSNARK) is a powerful cryptographic primitive, in which a prover convinces a verifier that a given statement is true without leaking any additional information. However, existing zkSNARKs suffer from high computation overhead in the proof generation. This limits the applications of zkSNARKs, such as private payments, private smart contracts, and anonymous credentials. Private delegation has become a prominent way to accelerate proof generation.
In this work, we propose Siniel, an efficient private delegation framework for zkSNARKs constructed from polynomial interactive oracle proofs (PIOP) and polynomial commitment schemes (PCS). Our protocol allows a computationally limited prover (a.k.a. delegator) to delegate its expensive prover computation to several workers without leaking any information about the private witness. Most importantly, compared with the recent work EOS (USENIX'23), the state-of-the-art zkSNARK prover delegation framework, a prover in Siniel need not engage in the MPC protocol after sending its shares of the private witness. This means that a Siniel prover can outsource the entire computation to the workers.
We compare Siniel with EOS and show significant performance advantages of the former. The experimental results show that, under low bandwidth conditions (10MBps), Siniel saves about 16% of delegator time compared to EOS, whereas under high bandwidth conditions (1000MBps), Siniel saves about 80%.
New Experimental Evidences For the Riemann Hypothesis
The zeta function $\zeta(z)=\sum_{n=1}^{\infty} \frac{1}{n^z}$ converges for $\text{Re}(z)>1$, and the eta function $\eta(z)=\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n^z}$ converges for $\text{Re}(z)>0$. The eta function and the analytic continuation of the zeta function have the same zeros in the critical strip $0<\text{Re}(z)<1$, owing to the identity $\eta(z)=\left(1-2^{1-z}\right)\zeta(z)$. In this paper, we present new experimental evidence showing that for any $a\in (0, 1), b\in (-\infty, \infty)$, there exists a zero $\frac{1}{2}+it$ such that $|\eta(a+ib)|\geq |\eta(a+it)|>|\eta(\frac{1}{2}+it)|=0$. This evidence further supports the claim that all zeros lie on the critical line $\text{Re}(z)=\frac{1}{2}$.
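The reported comparisons can be reproduced in spirit with any multiprecision library. The sketch below is our own illustrative check, not the paper's code; it evaluates $|\eta(s)|$ on and off the critical line near the first zero ordinate $t \approx 14.1347$ (mpmath's altzeta is the Dirichlet eta function):

```python
# Numerically compare |eta(a + it)| for several a at the first zero ordinate.
from mpmath import mp, altzeta, mpc

mp.dps = 30
t = mp.mpf("14.134725141734693790457251983562")   # first nontrivial zero

for a in [0.3, 0.4, 0.5, 0.6, 0.7]:
    print(a, abs(altzeta(mpc(a, t))))
# |eta(1/2 + it)| is ~0 while the off-line values stay bounded away from zero.
```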
Non-Interactive Zero-Knowledge Proofs with Certified Deletion
We introduce the notion of non-interactive zero-knowledge (NIZK) proofs with certified deletion. Our notion enables the recipient of a quantum NIZK proof for a (quantumly hard) NP statement to delete the proof and collapse it into a classical deletion certificate. Once this certificate is successfully validated, we require the recipient of the proof to lose their ability to find accepting inputs to NIZK verification.
We formally define this notion and build several candidate constructions from standard cryptographic assumptions. In particular, we propose a primary construction from classical NIZK for NP and one-way functions, albeit with two limitations: (i) deletion certificates are only privately verifiable, and (ii) both prover and verifier are required to be quantum algorithms. We resolve these hurdles in two extensions that assume the quantum hardness of the learning with errors problem. The first one achieves publicly verifiable certificates, and the second one requires merely classical communication between classical provers and quantum verifiers.
Notions of Quantum Reductions and Impossibility of Statistical NIZK
Non-Interactive Zero-Knowledge Arguments (NIZKs) are cryptographic protocols that enable a prover to demonstrate the validity of an $\mathsf{NP}$ statement to a verifier with a single message, without revealing any additional information. The soundness and zero-knowledge properties of a NIZK correspond to security against a malicious prover and a malicious verifier, respectively. Statistical NIZKs (S-NIZKs) are a variant of NIZKs for which the zero-knowledge property is guaranteed to hold information-theoretically. Previous works have shown that S-NIZKs satisfying a weak version of soundness known as static soundness exist based on standard assumptions. However, the work of Pass (TCC 2013) showed that S-NIZKs with the stronger \emph{adaptive} soundness property are inherently challenging to obtain. The work proved that standard (black-box) proof techniques are insufficient to prove the security of an S-NIZK based on any standard (falsifiable) assumption. We extend this result to the setting where parties can perform quantum computations and communicate using quantum information, while the quantum security reduction is restricted to query the adversary classically. To this end, we adapt the well-known meta-reduction paradigm for showing impossibility results to the quantum setting. Additionally, we reinterpret our result using a new framework for studying quantum reductions, which we believe to be of independent interest.
The LaZer Library: Lattice-Based Zero Knowledge and Succinct Proofs for Quantum-Safe Privacy
The hardness of lattice problems offers one of the most promising security foundations for quantum-safe cryptography. Basic schemes for public key encryption and digital signatures are already close to standardization at NIST and several other standardization bodies, and the research frontier has moved on to building primitives with more advanced privacy features. At the core of many such primitives are zero-knowledge proofs. In recent years, zero-knowledge proofs for (and using) lattice relations have seen a dramatic jump in efficiency, and they currently provide arguably the shortest and most computationally efficient quantum-safe proofs for many scenarios. The main difficulty in using these proofs, for non-experts and experts alike, is that they have many moving parts, and many internal parameters depend on the particular instance that one is trying to prove.
Our main contribution is a library for zero-knowledge and succinct proofs which consists of efficient and flexible C code underneath a simple-to-use Python interface. Users without any background in lattice-based proofs should be able to specify the lattice relations and the norm bounds that they would like to prove, and the library will automatically create a proof system, complete with the intrinsic parameters, using either the succinct proofs of LaBRADOR (Beullens and Seiler, Crypto 2023) or the linear-size, though smaller for certain applications, proofs of Lyubashevsky et al. (Crypto 2022). The Python interface also allows for common operations used in lattice-based cryptography, which will enable users to write and prototype their full protocols within the syntactically simple Python environment.
We showcase some of the library's usefulness by giving protocol implementations for blind signatures, anonymous credentials, the zero-knowledge proof needed in the recent Swoosh protocol (Gajland et al., Usenix 2024), proving knowledge of Kyber keys, and an aggregate signature scheme. Most of these are the most efficient known quantum-safe instantiations from a size, speed, and memory perspective.
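To make the kind of statement such a library handles concrete, here is a plain illustration in our own notation, not the LaZer API: a typical lattice relation asserts knowledge of a short vector $s$ with $As = t \pmod q$ and $\|s\|_\infty \le B$.

```python
# The bare relation underlying lattice ZK proofs: A*s = t (mod q), s short.
# This only sets up and checks an instance in the clear; proving it in
# zero knowledge is exactly what a library like the one above automates.
import numpy as np

q, rows, cols, bound = 3329, 4, 8, 1
rng = np.random.default_rng(0)

A = rng.integers(0, q, size=(rows, cols))
s = rng.integers(-1, 2, size=cols)        # short (ternary) witness
t = (A @ s) % q

assert np.array_equal((A @ s) % q, t)     # the linear relation
assert np.max(np.abs(s)) <= bound         # the norm-bound side condition
```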
Single-Server Client Preprocessing PIR with Tight Space-Time Trade-off
This paper partly solves the open problem of the tight trade-off between client storage and server time in the client preprocessing setting of private information retrieval (PIR). In this setting, the client is allowed to store some hints generated from the database in a preprocessing phase and use the hints to assist online queries. We construct a new single-server client preprocessing PIR scheme. For a database with $n$ entries of size $w$, our protocol uses $S=O((n/T) \cdot (\log n + w))$ bits of client storage and $T$ amortized server probes over $n/T$ queries, where $T$ is a tunable online time parameter. Our scheme matches (up to constant factors) an $ST = \Omega(nw)$ lower bound generalized from a recent work by Yeo (EUROCRYPT 2023) and a communication barrier generalized from Ishai, Shi, and Wichs (CRYPTO 2024).
From a technical standpoint, we present a novel organization of hints where each PIR query consumes a hint, and entries in the consumed hint are relocated to other hints. We then present a new data structure to track the hint relocations and use small-domain pseudorandom permutations to make the hint storage sublinear while maintaining efficient lookups in the hints.
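A stripped-down toy of the hint-consumption idea reads as follows; this is our illustration and omits the privacy repairs, the relocation data structure, and the small-domain pseudorandom-permutation compression that constitute the paper's actual contributions:

```python
# Toy preprocessing PIR: each hint is (random index set, XOR parity).
# A query consumes the hint covering the wanted index.
import random

n = 64
db = [random.getrandbits(8) for _ in range(n)]

def parity_of(S):
    out = 0
    for j in S:
        out ^= db[j]
    return out

# Offline phase: the client stores random sets with their parities.
hints = [(S, parity_of(S))
         for S in (set(random.sample(range(n), n // 4)) for _ in range(32))]

def query(i):
    for k, (S, hint_parity) in enumerate(hints):
        if i in S:
            hints.pop(k)            # the hint is consumed by this query
            punctured = S - {i}     # sent to the server, which XORs it
            return hint_parity ^ parity_of(punctured)
    raise RuntimeError("no hint covers i; real schemes replenish/relocate")

i = next(iter(hints[0][0]))         # an index the first hint covers
assert query(i) == db[i]
```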
KLaPoTi: An asymptotically efficient isogeny group action from 2-dimensional isogenies
We construct and implement an efficient post-quantum commutative cryptographic group action based on combining the SCALLOP framework for group actions from isogenies of oriented elliptic curves on one hand with the recent Clapoti method for polynomial-time evaluation of the CM group action on elliptic curves on the other.
We take advantage of the very attractive performance of $(2^e, 2^e)$-isogenies between products of elliptic curves in the theta coordinate system.
To successfully apply Clapoti in dimension $2$, one needs to solve a particular quadratic Diophantine norm equation, for which we employ a slight variant of the KLPT algorithm.
Our work marks the first practical instantiation of the CM group action for which both the setup as well as the online phase can be computed in (heuristic) polynomial time.
Attacking Automotive RKE Security: How Smart are your ‘Smart’ Keys?
Remote Keyless Entry (RKE) systems are ubiquitous in modern-day automobiles, providing convenience for vehicle owners - occasionally at the cost of security. Most automobile companies have proprietary implementations of RKE; these are sometimes built on insecure algorithms and authentication mechanisms. This paper presents a comprehensive study conducted on the RKE systems of multiple cars from four automobile manufacturers not previously explored.
Specifically, we analyze the design, implementation, and security levels of 7 different cars manufactured by Honda, Maruti-Suzuki, Toyota, and Mahindra. We also do a deep dive into the RKE system of a particular Honda model. We evaluate the susceptibility of these systems to known vulnerabilities (such as the RollJam and RollBack attacks). This is accomplished using a novel tool, Puck-Py, that helps analyze RKE protocols. Our tool automates several aspects of the protocol analysis process, reducing time and logistical constraints in RKE research; we provide standardized protocols to execute various attacks using our Puck-Py tool. We find that, despite having had a long period of time to fix security issues, several popular automobiles remain susceptible to attacks, including the basic RollJam attack.
Khatam: Reducing the Communication Complexity of Code-Based SNARKs
We prove that Basefold (Crypto 2024) is secure in the $\textit{list decoding regime}$, within the double Johnson bound and with error probability $\frac{O(n)}{|F|}$. At the heart of this proof is a new, stronger statement for $\textit{correlated agreement}$, which roughly states that if a linear combination of vectors $\pi_L + r \pi_R$ agrees with a codeword at every element in $S \subset [n]$, then so do $\pi_L$ and $\pi_R$. Our result is purely combinatorial and therefore extends to any finite field and any linear code. As such, it can be applied to any code-based multilinear Polynomial Commitment Scheme to reduce its communication complexity.
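In symbols, the correlated agreement statement can be paraphrased as follows (our notation, with $V$ the linear code and $S$ the agreement set; the precise quantification over the challenge $r$ is in the paper):

$$(\pi_L + r\,\pi_R)\big|_S \in V\big|_S \;\Longrightarrow\; \exists\, c_L, c_R \in V :\ \pi_L\big|_S = c_L\big|_S \ \text{ and } \ \pi_R\big|_S = c_R\big|_S.$$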
One Server for the Price of Two: Simple and Fast Single-Server Private Information Retrieval
We present SimplePIR, the fastest single-server private information retrieval scheme known to date. SimplePIR’s security holds under the learning-with-errors assumption. To answer a client’s query, the SimplePIR server performs fewer than one 32-bit multiplication and one 32-bit addition per database byte. SimplePIR achieves 10 GB/s/core server throughput, which approaches the memory bandwidth of the machine and the performance of the fastest two-server private-information-retrieval schemes (which require non-colluding servers). SimplePIR has relatively large communication costs: to make queries to a 1 GB database, the client must download a 121 MB "hint" about the database contents; thereafter, the client may make an unbounded number of queries, each requiring 242 KB of communication. We present a second single-server scheme, DoublePIR, that shrinks the hint to 16 MB at the cost of slightly higher per-query communication (345 KB) and slightly lower throughput (7.4 GB/s/core). Finally, we apply our new private-information-retrieval schemes, together with a novel data structure for approximate set membership, to the task of private auditing in Certificate Transparency. We achieve a strictly stronger notion of privacy than Google Chrome’s current approach with modest communication overheads: 16 MB of download per month, along with 150 bytes per TLS connection.
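The server work quoted above amounts to a matrix-vector product over $\mathbb{Z}_{2^{32}}$ against an LWE-encrypted unit vector. The following toy-parameter sketch is our simplification of that blueprint (the real scheme's hint handling, packing, and parameter choices differ):

```python
# Toy SimplePIR-style flow: DB as a matrix, query = Regev encryption of a
# unit column vector, server answer = one multiply-accumulate per entry.
import numpy as np

q, p = 2**32, 2**8                  # ciphertext / plaintext moduli
dim, side = 16, 8                   # LWE dimension; database is side x side
Delta = q // p
rng = np.random.default_rng(1)

D = rng.integers(0, p, size=(side, side), dtype=np.uint64)  # database
A = rng.integers(0, q, size=(side, dim), dtype=np.uint64)   # public matrix
s = rng.integers(0, q, size=dim, dtype=np.uint64)           # client secret

row, col = 3, 5
u = np.zeros(side, dtype=np.uint64); u[col] = 1             # which column
e = rng.integers(0, 3, size=side, dtype=np.uint64)          # small noise
b = (A @ s + e + Delta * u) % q                             # Regev encryption

hint = (D @ A) % q     # query-independent "hint", downloaded once
ans  = (D @ b) % q     # online server work: one MAC per database entry

noisy = (ans[row] - (hint[row] @ s)) % q                    # client decrypts
assert round(int(noisy) / Delta) % p == D[row, col]
```

(Unsigned 64-bit wraparound is harmless here because $q = 2^{32}$ divides $2^{64}$, which is also why the real scheme can live in 32-bit arithmetic.)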
Zero-Knowledge Location Privacy via Accurate Floating-Point SNARKs
We introduce Zero-Knowledge Location Privacy (ZKLP), enabling users to prove to third parties that they are within a specified geographical region while not disclosing their exact location. ZKLP supports varying levels of granularity, allowing for customization depending on the use case. To realize ZKLP, we introduce the first set of Zero-Knowledge Proof (ZKP) circuits that are fully compliant to the IEEE 754 standard for floating-point arithmetic.
Our results demonstrate that our floating-point circuits amortize efficiently, requiring only $64$ constraints per multiplication for $2^{15}$ single-precision floating-point multiplications. We utilize our floating-point implementation to realize the ZKLP paradigm. In comparison to a baseline, we find that our optimized implementation requires $15.9 \times$ fewer constraints when utilizing single-precision floating-point values, and $12.2 \times$ fewer constraints when utilizing double-precision floating-point values. We demonstrate the practicality of ZKLP by building a protocol for privacy-preserving peer-to-peer proximity testing: Alice can test whether she is close to Bob by receiving a single message, without either party revealing any other information about their location. In such a configuration, Bob can create a proof of (non-)proximity in $0.26$ s, whereas Alice can verify her distance to about $470$ peers per second.
Towards Practical Oblivious Map
Oblivious map (OMAP) is an important component in encrypted databases, used to prevent the server from inferring sensitive information about a client's encrypted key-value store based on access patterns. Despite its widespread usage and importance, existing OMAP solutions face practical challenges, including the need for a large number of interaction rounds between the client and server, as well as substantial communication bandwidth requirements. For example, the state-of-the-art protocol OMIX++ (VLDB 2024) still requires $O(\log{n})$ interaction rounds and $O(\log^2{n})$ communication bandwidth per access, where $n$ denotes the total number of key-value pairs stored.
In this work, we introduce more practical and efficient OMAP constructions. Consistent with all prior OMAPs, our constructions rely only on tree-based Oblivious RAM (ORAM) and oblivious data structures (ODS), for enhanced practicality. In complexity, our approach needs $O(\log n/\log\log n)+O(\log \lambda)$ interaction rounds and $O(\log^2 n/\log\log n)+O(\log \lambda \log n)$ communication bandwidth per data access, where $\lambda$ is the security parameter. This new complexity results from our two main contributions. First, unlike prior works that rely solely on search trees, we design a novel framework for OMAP that combines a hash table with search trees. Second, we propose a more efficient tree-based ORAM named DAORAM, which is of significant independent interest. This newly developed ORAM noticeably accelerates our constructions, as it supports obliviously accessing hash tables much more efficiently. We implement both our proposed constructions and prior methods to experimentally demonstrate that our constructions substantially outperform prior methods in terms of efficiency.
A Faster Software Implementation of SQISign
Isogeny-based cryptography is famous for its short key sizes. As one of the most compact digital signatures, SQIsign (Short Quaternion and Isogeny Signature) is attractive among post-quantum schemes, but it is inefficient compared to other post-quantum competitors because of the complicated procedures in the ideal-to-isogeny translation, the efficiency bottleneck of the signing phase.
In this paper, we recall the current implementation of SQIsign and focus on improving the execution of the ideal-to-isogeny translation. Specifically, we demonstrate how to utilize the reduced Tate pairing to save one of the two elliptic curve discrete logarithms. In addition, we explore an efficient implementation of the remaining discrete logarithm computation. We speed up other procedures in the ideal-to-isogeny translation with various techniques as well. It should be noted that our improvements also benefit the performance of key generation and verification in SQIsign. In the instantiation with $p_{1973}$, the improvements lead to speedups of 5.47%, 8.80% and 25.34% for key generation, signing and verification, respectively.
Verifying Jolt zkVM Lookup Semantics
Lookups are a popular way to express repeated constraints in state-of-the-art SNARKs. This is especially the case for zero-knowledge virtual machines (zkVMs), which produce succinct proofs of correct execution for programs expressed as bytecode according to a specific instruction set architecture (ISA). The Jolt zkVM (Arun, Setty & Thaler, Eurocrypt 2024) for the RISC-V ISA employs Lasso (Setty, Thaler & Wahby, Eurocrypt 2024), an efficient lookup argument for massive structured tables, to prove correct execution of instructions. Internally, Lasso performs multiple lookups into smaller subtables, then combines the results.
We present an approach to formally verify Lasso-style lookup arguments against the semantics of instruction set architectures. We demonstrate our approach by formalizing and verifying all Jolt 32-bit instructions corresponding to the RISC-V base instruction set (RV32I) using the ACL2 theorem proving system. Our formal ACL2 model has undergone extensive validation against the Rust implementation of Jolt. Due to ACL2's bit-blasting, rewriting, and developer-friendly features, our formalization is highly automated.
Through formalization, we also discovered optimizations to the Jolt codebase, leading to improved efficiency without impacting correctness or soundness. In particular, we removed one unnecessary lookup each for four instructions, and reduced the sizes of three subtables by 87.5\%.
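The chunked-lookup structure being verified can be illustrated concretely. This is our toy, not Jolt's actual subtables or combination rules: a 32-bit AND decomposes into four 8-bit lookups whose results are recombined.

```python
# One materialized subtable: 8-bit AND (2^16 entries instead of 2^64).
AND8 = [(ij >> 8) & (ij & 0xFF) for ij in range(1 << 16)]

def and32(x, y):
    out = 0
    for k in range(4):                            # four byte-sized chunks
        xi = (x >> (8 * k)) & 0xFF
        yi = (y >> (8 * k)) & 0xFF
        out |= AND8[(xi << 8) | yi] << (8 * k)    # lookup, then recombine
    return out

assert and32(0xDEADBEEF, 0x0F0F0F0F) == (0xDEADBEEF & 0x0F0F0F0F)
```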
ColliderScript: Covenants in Bitcoin via 160-bit hash collisions
We introduce a method for enforcing covenants on Bitcoin outputs without requiring any changes to Bitcoin, by designing a hash-collision-based equivalence check which bridges Bitcoin's limited Big Script to Bitcoin's Small Script. This allows us to evaluate the signature of the spending transaction (available only to Big Script) in Small Script. As Small Script enables arbitrary computations, we can introspect into the spending transaction and enforce covenants on it.
Our approach leverages finding collisions in the $160$-bit hash functions: SHA-1 and RIPEMD-160. By the birthday bound this should cost $\sim2^{80}$ work. Each spend of our covenant costs $\sim2^{86}$ hash queries and $\sim2^{56}$ bytes of space. For security, we rely on an assumption regarding the hardness of finding a $3$-way collision (with short inputs) in $160$-bit hash functions, arguing that if the assumption holds, breaking covenant enforcement requires $\sim2^{110}$ hash queries. To put this in perspective, the work to spend our covenant is $\sim33$ hours of the Bitcoin mining network, whereas breaking our covenant requires $\sim 450,000$ years of the Bitcoin mining network.
We believe there are multiple directions of future work that can significantly improve these numbers.
Evaluating covenants and our equivalence check requires performing many operations in Small Script, which must take no more than $4$ megabytes in total size, as Bitcoin does not allow transactions greater than $4$ megabytes. We only provide rough estimates of the transaction size because, as of this writing, no Small Script implementations of the required hash functions, SHA-1 and RIPEMD-160, have been written.
The Learning Stabilizers with Noise problem
Random classical codes have good error correcting properties, and yet they are notoriously hard to decode in practice. Despite many decades of extensive study, the fastest known algorithms still run in exponential time. The Learning Parity with Noise (LPN) problem, which can be seen as the task of decoding a random linear code in the presence of noise, has thus emerged as a prominent hardness assumption with numerous applications in both cryptography and learning theory.
Is there a natural quantum analog of the LPN problem? In this work, we introduce the Learning Stabilizers with Noise (LSN) problem, the task of decoding a random stabilizer code in the presence of local depolarizing noise. We give both polynomial-time and exponential-time quantum algorithms for solving LSN in various depolarizing noise regimes, ranging from extremely low noise, to low constant noise rates, and even higher noise rates up to a threshold. Next, we provide concrete evidence that LSN is hard. First, we show that LSN includes LPN as a special case, which suggests that it is at least as hard as its classical counterpart. Second, we prove a worst-case to average-case reduction for variants of LSN. We then ask: what is the computational complexity of solving LSN? Because the task features quantum inputs, its complexity cannot be characterized by traditional complexity classes. Instead, we show that the LSN problem lies in a recently introduced (distributional and oracle) unitary synthesis class. Finally, we identify several applications of our LSN assumption, ranging from the construction of quantum bit commitment schemes to the computational limitations of learning from quantum data.
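For intuition, the classical special case mentioned above looks like this (a minimal LPN sample generator of ours):

```python
# LPN samples: noisy inner products <a, s> + e (mod 2) with noise rate tau.
# Recovering s from the samples alone is the (hard) decoding problem.
import random

def lpn_samples(n=32, m=64, tau=0.125, seed=7):
    rnd = random.Random(seed)
    s = [rnd.randrange(2) for _ in range(n)]
    samples = []
    for _ in range(m):
        a = [rnd.randrange(2) for _ in range(n)]
        noise = 1 if rnd.random() < tau else 0
        b = (sum(ai * si for ai, si in zip(a, s)) + noise) % 2
        samples.append((a, b))
    return s, samples
```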
On the efficient representation of isogenies (a survey)
We survey different (efficient or not) representations of isogenies, with a particular focus on the recent "higher dimensional" isogeny representation, and algorithms to manipulate them.
Ideal Pseudorandom Codes
Pseudorandom codes are error-correcting codes with the property that no efficient adversary can distinguish encodings from uniformly random strings. They were recently introduced by Christ and Gunn [CRYPTO 2024] for the purpose of watermarking the outputs of randomized algorithms, such as generative AI models. Several constructions of pseudorandom codes have since been proposed, but none of them are robust to error channels that depend on previously seen codewords. This stronger kind of robustness is referred to as adaptive robustness, and it is important for meaningful applications to watermarking.
In this work, we show the following.
- Adaptive robustness: We show that the pseudorandom codes of Christ and Gunn are adaptively robust, resolving a conjecture posed by Cohen, Hoover, and Schoenbach [S&P 2025]. Our proof involves several new ingredients, combining ideas from both cryptography and coding theory and taking hints from the analysis of Boolean functions.
- Ideal security: We define an ideal pseudorandom code as one which is indistinguishable from the ideal functionality, capturing both the pseudorandomness and robustness properties in one simple definition. We show that any adaptively robust pseudorandom code for single-bit messages can be bootstrapped to build an ideal pseudorandom code with linear information rate, under no additional assumptions.
- CCA security: In the setting where the encoding key is made public, we define a CCA-secure pseudorandom code in analogy with CCA-secure encryption. We show that any adaptively robust public-key pseudorandom code for single-bit messages can be used to build a CCA-secure pseudorandom code with linear information rate, in the random oracle model.
Together with the result of Christ and Gunn, it follows that there exist ideal pseudorandom codes assuming the $2^{O(\sqrt{n})}$-hardness of LPN. This extends to CCA security in the random oracle model. These results immediately imply stronger robustness guarantees for generative AI watermarking schemes, such as the practical quality-preserving image watermarks of Gunn, Zhao, and Song (2024).
Cryptographically Secure Digital Consent
In the digital age, the concept of consent for online actions executed by third parties is crucial for maintaining trust and security in third-party services. This work introduces the notion of cryptographically secure digital consent, which aims to replicate the traditional consent process in the online world. We provide a flexible digital consent solution that accommodates different use cases and ensures the integrity of the consent process. The proposed framework involves a client (referring to the user or their devices), an identity manager (which authenticates the client), and an agent (which executes the action upon receiving consent). It supports various applications and ensures compatibility with existing identity managers. We require the client to keep no more than a password. The design addresses several security and privacy challenges, including preventing offline dictionary attacks, ensuring non-repudiable consent, and preventing unauthorized actions by the agent. Security is maintained even if either the identity manager or the agent is compromised, but not both. Our notion of an identity manager is broad enough to include combinations of different authentication factors such as a password, a smartphone, a security device, biometrics, or an e-passport. We demonstrate applications for signing PDF documents, e-banking, and key recovery.
The State of the Union: Union-Only Signatures for Data Aggregation
A union-only signature (UOS) scheme (informally introduced by Johnson et al. at CT-RSA 2002) allows signers to sign sets of messages in such a way that (1) any third party can merge two signatures to derive a signature on the union of the message sets, and (2) no adversary, given a signature on some set, can derive a valid signature on any strict subset of that set (unless it has seen such a signature already).
Johnson et al. originally posed building a UOS as an open problem. In this paper, we make two contributions: we give the first formal definition of a UOS scheme, and we give the first UOS constructions. Our main construction uses hashing, regular digital signatures, Pedersen commitments and signatures of knowledge. We provide an implementation that demonstrates its practicality. Our main construction also relies on the hardness of the short integer solution (SIS) problem; we show how this assumption can be replaced with the use of groups of unknown order. Finally, we sketch a UOS construction using SNARKs; this additionally gives the property that the size of the signature does not grow with the number of merges.
Scooby: Improved Multi-Party Homomorphic Secret Sharing Based on FHE
We present new constructions of multi-party homomorphic secret sharing (HSS) based on a new primitive that we call homomorphic encryption with decryption to shares (HEDS).
Our first construction, which we call Scooby, is based on many popular fully homomorphic encryption (FHE) schemes with a linear decryption property. Scooby achieves an $n$-party HSS for general circuits with complexity $O(|F| + \log n)$, as opposed to $O(n^2 \cdot |F|)$ for the prior best construction based on multi-key FHE.
Scooby can be based on (ring)-LWE with a super-polynomial modulus-to-noise ratio. In our second construction, Scrappy, assuming any generic FHE plus HSS for NC1-circuits, we obtain a HEDS scheme which does not require a super-polynomial modulus. While these schemes all require FHE, in another instantiation, Shaggy, we show how in some cases it is possible to obtain multi-party HSS without FHE, for a small number of parties and constant-degree polynomials.
Finally, we show that our Scooby scheme can be adapted to use multi-key fully homomorphic encryption, giving more efficient spooky encryption and setup-free HSS. This latter scheme, Casper, if concretely instantiated with a B/FV-style multi-key FHE scheme, for functions $F$ which do not require bootstrapping, gives an HSS complexity of $O(n \cdot |F| + n^2 \cdot \log n)$.
Dual Support Decomposition in the Head: Shorter Signatures from Rank SD and MinRank
The MPC-in-the-Head (MPCitH) paradigm is widely used for building post-quantum signature schemes, as it provides a versatile way to design proofs of knowledge based on hard problems. Over the years, the MPCitH landscape has changed significantly, with the most recent improvements coming from VOLE-in-the-Head (VOLEitH) and Threshold-Computation-in-the-Head (TCitH).
While a straightforward application of these frameworks already improves the existing MPCitH-based signatures, we show in this work that we can adapt the arithmetic constraints representing the underlying security assumptions (here called the modeling) to achieve smaller sizes using these new techniques.
More precisely, we explore existing modelings for the rank syndrome decoding (RSD) and MinRank problems and we introduce a new modeling, named dual support decomposition, which achieves better sizes with the VOLEitH and TCitH frameworks by minimizing the size of the witnesses.
While this modeling is naturally more efficient than the other ones for a large set of parameters, we show that it is possible to go even further and explore new parameter regimes. With this new modeling and these parameters, we obtain small witnesses, which drastically reduces the size of the arithmetic part of the signature.
We apply the TCitH and VOLEitH frameworks to our new modeling for both RSD and MinRank and compare our results to the NIST candidates RYDE, MiRitH, and MIRA (MPCitH-based schemes from RSD and MinRank). We also note that recent techniques optimizing the sizes of GGM trees are applicable to our schemes and further reduce the signature sizes by a few hundred bytes. We obtain signature sizes below 3.5 kB for 128 bits of security with N=256 parties (a.k.a. leaves in the GGM trees) and going as low as ≈ 2.8 kB with N=2048, for both RSD and MinRank. This represents an improvement of more than 2 kB compared to the original submissions to the 2023 NIST call for additional signatures.
A Query Reconstruction Attack on the Chase-Shen Substring-Searchable Symmetric Encryption Scheme
Searchable symmetric encryption (SSE) enables queries over symmetrically encrypted databases. To achieve practical efficiency, SSE schemes incur a certain amount of leakage; however, this leads to the possibility of leakage cryptanalysis, i.e., cryptanalytic attacks that exploit the leakage from the target SSE scheme to subvert its data and query privacy guarantees. Leakage cryptanalysis has been widely studied in the context of SSE schemes supporting either keyword queries or range queries, often with devastating consequences. However, little or no attention has been paid to cryptanalysing substring-SSE schemes, i.e., SSE schemes supporting arbitrary substring queries over encrypted data. This is despite their relevance to many real-world applications, e.g., in the context of securely querying outsourced genomic databases. In particular, the first ever substring-SSE scheme, proposed nearly a decade ago by Chase and Shen (PoPETS '15), has not been cryptanalysed to date.
In this paper, we present the first leakage cryptanalysis of the substring-SSE scheme of Chase and Shen. We propose a novel inference-based query reconstruction attack that: (i) exploits a reduced version of the actual leakage profile of their scheme, and (ii) assumes a weaker attack model as compared to the one in which Chase and Shen originally claimed security. We implement our attack and experimentally validate its success rate and efficiency over two real-world datasets. Our attack achieves a high query reconstruction rate with practical efficiency, and scales smoothly to large datasets containing $100,000$ strings.
To the best of our knowledge, ours is the first and only query reconstruction attack on (and the first systematic leakage cryptanalysis of) any substring-SSE scheme to date.
Quantum Procedures for Nested Search Problems with Applications in Cryptanalysis
In this paper we study search problems that arise very often in cryptanalysis: nested search problems, where each search layer has known degrees of freedom and/or constraints.
A generic quantum solution for such problems consists of nesting Grover's quantum search algorithm or the amplitude amplification (QAA) of Brassard et al., obtaining up to a square-root speedup over classical algorithms. However, the analysis of nested Grover or QAA is complex and introduces technicalities that previous works handle in a case-by-case manner. Moreover, straightforward nesting of $\ell$ layers multiplies the complexity by a constant factor $(\pi/2)^\ell$.
In this paper, we aim to remedy both these issues and introduce a generic framework and tools to transform a classical nested search into a quantum procedure. It improves the state-of-the-art in three ways:
1) our framework results in quantum procedures that are significantly simpler to describe and analyze;
2) it reduces the overhead factor from $(\pi/2)^\ell$ to $\sqrt{\ell}$;
3) it is simpler to apply and optimize, without needing manual quantum analysis.
We give generic complexity formulas and show that for concrete instances, numerical optimizations enable further improvements, reducing even more the gap to an exact quadratic speedup. We demonstrate our framework by giving a tighter analysis of quantum attacks on reduced-round AES.
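The claimed reduction in overhead is easy to tabulate (our arithmetic, not the paper's code):

```python
# Constant-factor overhead of naive nesting, (pi/2)^l, vs. the sqrt(l)
# overhead claimed by the framework, for l nested search layers.
import math

for layers in range(1, 7):
    naive = (math.pi / 2) ** layers
    improved = math.sqrt(layers)
    print(f"l={layers}: naive ~{naive:6.2f}x, framework ~{improved:4.2f}x")
# Already at l=4, naive nesting pays ~6.1x while the framework pays 2x.
```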
Classic McEliece Hardware Implementation with Enhanced Side-Channel and Fault Resistance
In this work, we propose the first hardware implementation of Classic McEliece protected with countermeasures against Side-Channel Attacks (SCA) and Fault Injection Attacks (FIA). Classic McEliece is one of the leading candidates for Key Encapsulation Mechanisms (KEMs) in the ongoing round 4 of the NIST standardization process for post-quantum cryptography. In particular, we implement a range of generic countermeasures against SCA and FIA, particularly protecting vulnerable operations such as the additive Fast Fourier Transform (FFT) and Gaussian elimination, which have been targeted by prior SCA and FIA attacks. We also perform a detailed SCA evaluation demonstrating no leakage even with 100000 traces (an improvement of more than 100× in the number of traces compared to the unprotected implementation). This comes at a modest total area overhead of 4× to 7×, depending on the type of implemented SCA countermeasure. Furthermore, we present a thorough ASIC benchmark for the SCA- and FIA-protected Classic McEliece design.
Scutum: Temporal Verification for Cross-Rollup Bridges via Goal-Driven Reduction
Scalability remains a key challenge for blockchain adoption. Rollups—especially zero-knowledge (ZK) and optimistic rollups—address this by processing transactions off-chain while maintaining Ethereum’s security, thus reducing gas fees and improving speeds. Cross-rollup bridges like Orbiter Finance enable seamless asset transfers across various Layer 2 (L2) rollups and between L2 and Layer 1 (L1) chains. However, the increasing reliance on these bridges raises significant security concerns, as evidenced by major hacks like those of Poly Network and Nomad Bridge, resulting in losses of hundreds of millions of dollars. Traditional security analysis methods such as static analysis and fuzzing are inadequate for cross-rollup bridges due to their complex designs involving multiple entities, smart contracts, and zero-knowledge circuits. These systems require reasoning about temporal sequences of events across different entities, which exceeds the capabilities of conventional analyzers.
In this paper, we introduce a scalable verifier to systematically assess the security of cross-rollup bridges. Our approach features a comprehensive multi-model framework that captures both individual behaviors and complex interactions using temporal properties. To enhance scalability, we approximate temporal safety verification through reachability analysis of a graph representation of the contracts, leveraging advanced program analysis techniques. Additionally, we incorporate a conflict-driven refinement loop to eliminate false positives and improve precision. Our evaluation on mainstream cross-rollup bridges, including Orbiter Finance, uncovered multiple zero-day vulnerabilities, demonstrating the practical utility of our method. The tool also exhibited favorable runtime performance, enabling efficient analysis suitable for real-time or near-real-time applications.
Private Neural Network Training with Packed Secret Sharing
We present a novel approach for training neural networks that leverages the packed Shamir secret sharing scheme. For specific training protocols based on the Shamir scheme, we demonstrate how to realize the conversion between packed sharing and Shamir sharing without additional communication overhead. We begin by introducing a method to locally convert between Shamir sharings with secrets stored at different slots. Building upon this conversion, we achieve free conversion from packed sharing to Shamir sharing. We then show how to embed the conversion from Shamir sharing to packed sharing into the truncation used during the training process without incurring additional communication costs. With free conversion between packed sharing and Shamir sharing, we illustrate how to utilize the packed scheme to parallelize certain computational steps involved in neural network training. On this basis, we propose training protocols with information-theoretic security among general $n$ parties under the semi-honest model. The experimental results demonstrate that, compared to previous work in this domain, applying the packed scheme can effectively improve training efficiency. Specifically, when packing $4$ secrets into a single sharing, we observe a reduction of more than $20\%$ in communication overhead and an improvement of over $10\%$ in training speed in the WAN setting.
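The basic packing idea can be made concrete in a few lines. The toy below is ours, over a prime field, and does not show the sharing conversions that are the paper's contribution: $k$ secrets ride in a single degree-$(t+k-1)$ polynomial, so one sharing carries $k$ values.

```python
# Packed Shamir over a prime field: k secrets at points -1..-k,
# t random anchors at -(k+1)..-(k+t), shares at points 1..n.
import random
p = 2**61 - 1

def lagrange_eval(points, x):
    """Interpolate through [(xi, yi)] and evaluate at x, all mod p."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % p) % p
                den = den * ((xi - xj) % p) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

def pack_share(secrets, t, n):
    k = len(secrets)
    anchors = [(-(i + 1) % p, s) for i, s in enumerate(secrets)]
    anchors += [(-(k + 1 + i) % p, random.randrange(p)) for i in range(t)]
    return [(x, lagrange_eval(anchors, x)) for x in range(1, n + 1)]

secrets, t, n = [11, 22, 33, 44], 2, 8        # degree t+k-1 = 5
shares = pack_share(secrets, t, n)
subset = shares[:6]                            # any t+k = 6 shares suffice
assert [lagrange_eval(subset, -(i + 1) % p) for i in range(4)] == secrets
```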
Symmetric Encryption on a Quantum Computer
Classical symmetric encryption algorithms use $N$ bits of a shared secret key to transmit $N$ bits of a message over a one-way channel in an information-theoretically secure manner. This paper proposes a hybrid quantum-classical symmetric cryptosystem that uses a quantum computer to generate the secret key. The algorithm leverages quantum circuits to encrypt a message using a one-time-pad-type technique whilst requiring a shorter classical key. We show that for an $N$-qubit circuit, the maximum number of bits needed to specify a quantum circuit grows as $N^{3/2}$, while the maximum number of bits that the quantum circuit can encode grows as $N^2$. We do not utilise the full expressive capability of the quantum circuits, as we focus on second-order Pauli expectation values only. The potential exists to encode an exponential number of bits using higher orders of Pauli expectation values. Moreover, using a parameterised quantum circuit (PQC), we could further augment the amount of securely shared information by introducing a secret-key dependence on some of the PQC parameters. The algorithm may be suitable for an early fault-tolerant quantum computer implementation, as some degree of noise can be tolerated. Simulation results are presented along with experimental results on the 84-qubit Rigetti Ankaa-2 quantum computer.
MIFARE Classic: exposing the static encrypted nonce variant
MIFARE Classic smart cards, developed and licensed by NXP, are widely used but have been subjected to numerous attacks over the years. Despite the introduction of new versions, these cards have remained vulnerable, even in card-only scenarios. In 2020, the FM11RF08S, a new variant of MIFARE Classic, was released by the leading Chinese manufacturer of unlicensed "MIFARE compatible" chips. This variant features specific countermeasures designed to thwart all known card-only attacks and is gradually gaining market share worldwide.
In this paper, we present several attacks and unexpected findings regarding the FM11RF08S. Through empirical research, we discovered a hardware backdoor and successfully cracked its key. This backdoor enables any entity with knowledge of it to compromise all user-defined keys on these cards without prior knowledge, simply by accessing the card for a few minutes.
Additionally, our investigation into older cards uncovered another hardware backdoor key that was common to several manufacturers.
Hybrid Zero-Knowledge from Garbled Circuits
We present techniques for constructing zero-knowledge argument systems from garbled circuits, extending the GC-to-ZK compiler by Jawurek, Kerschbaum, and Orlandi (ACM CCS 2013) and the GC-to-Σ compiler by Hazay and Venkitasubramaniam (J. Cryptology, 2020) in the following directions:
- Our schemes are hybrid, commit-and-prove zero-knowledge argument systems that establish a connection between secrets embedded in algebraic commitments and a relation represented by a Boolean circuit.
- Our schemes incorporate diverse cross-domain secrets embedded within distinct algebraic commitments, simultaneously supporting Pedersen-like commitments and lattice-based commitments.
As an application, we develop circuit-represented compositions of Σ-protocols that support attractive access structures, such as weighted thresholds, that can be easily represented by a small circuit. For predicates $P_1, \ldots, P_n$, each individually associated with a Σ-protocol, and a predicate $C$ represented by a Boolean circuit, we construct a Σ-protocol for proving $C(P_1, \ldots, P_n) = 1$. This result positively answers an open question posed by Abe et al. at TCC 2021.
How to Delete Without a Trace: Certified Deniability in a Quantum World
Is it possible to comprehensively destroy a piece of quantum information, so that nothing is left behind except the memory that one had it at some point? For example, various works, most recently Morimae, Poremba, and Yamakawa (TQC '24), show how to construct a signature scheme with certified deletion, where a user who deletes a signature on $m$ cannot later produce a signature for $m$. However, in all of the existing schemes, even after deletion the user is still able to keep irrefutable evidence that $m$ was signed, and thus they do not fully capture the spirit of deletion.
In this work, we initiate the study of certified deniability in order to obtain a more comprehensive notion of deletion. Certified deniability uses a simulation-based security definition, ensuring that any information the user has kept after deletion could have been learned without being given the deletable object to begin with; meaning that deletion leaves no trace behind! We define and construct two non-interactive primitives that satisfy certified deniability in the quantum random oracle model: signatures and non-interactive zero-knowledge arguments (NIZKs). As a consequence, for example, it is not possible to delete a signature/NIZK and later provide convincing evidence that it used to exist. Notably, our results utilize uniquely quantum phenomena to bypass Pass's (CRYPTO '03) celebrated result showing that deniable NIZKs are impossible even in the random oracle model.
Fast Two-party Threshold ECDSA with Proactive Security
We present a new construction of two-party, threshold ECDSA, building on a 2017 scheme of Lindell and improving his scheme in several ways.
ECDSA signing is notoriously hard to distribute securely, due to non-linearities in the signing function. Lindell's scheme uses Paillier encryption to encrypt one party's key share and handle these non-linearities homomorphically, while elegantly avoiding any expensive zero-knowledge proofs over the Paillier group during the signing process. However, the scheme pushes that complexity into key generation. Moreover, avoiding ZK proofs about Paillier ciphertexts during signing comes with a steep price -- namely, the scheme requires a "global abort" when a malformed ciphertext is detected, after which an entirely new key must be generated.
We overcome all of these issues with a proactive Refresh procedure. Since the Paillier decryption key is part of the secret that must be proactively refreshed, our first improvement is to radically accelerate key generation by replacing one of Lindell's ZK proofs -- which requires 80 Paillier ciphertexts for statistical security $2^{-40}$ -- with a much faster "weak" proof that requires only 2 Paillier ciphertexts, and which proves a weaker statement about a Paillier ciphertext that we show is sufficient in the context of our scheme. Secondly, our more efficient key generation procedure also makes frequent proactive Refreshes practical. Finally, we show that adding noise to one party's key share suffices to avoid the need to reset the public verification key when certain bad behavior is detected. Instead, we prove that our Refresh procedure, performed after each detection, suffices for addressing the attack, allowing the system to continue functioning without disruption to applications that rely on the verification key.
Our scheme is also very efficient, competitive with the best constructions that do not provide proactive security, and state-of-the-art among the few results that do. Our optimizations to ECDSA key generation speed up runtime and improve bandwidth over Lindell's key generation by factors of 7 and 13, respectively. Our Key Generation protocol requires 20% less bandwidth than existing constructions, completes in only 3 protocol messages, and executes much faster than all but OT-based key generation. For ECDSA signing, our extra Refresh protocol does add a 10X latency and 5X bandwidth overhead compared to Lindell. However, this still fits in 150 ms runtime and about 5.4 KB of messages when run in our AWS cluster benchmark.
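For context, the non-linearity referred to above is visible in the ECDSA signing equation itself (standard notation; this summary is ours, not the paper's): $$s = k^{-1}\bigl(H(m) + r\,x\bigr) \bmod q, \qquad r = (k \cdot G)_x \bmod q,$$ where $x$ is the secret key and $k$ the secret nonce. Because of the inversion $k^{-1}$ and the product $r\,x$, shares of $x$ and $k$ do not directly yield shares of $s$; Lindell-style schemes sidestep this by letting one party finish the computation homomorphically over a Paillier encryption of the other party's key share.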
A Tight Analysis of GHOST Consistency
The GHOST protocol has been proposed as an improvement to the Nakamoto consensus mechanism that underlies Bitcoin. In contrast to the Nakamoto fork-choice rule, the GHOST rule justifies selection of a chain with weights computed over subtrees rather than individual paths. This mechanism has been adopted by a variety of consensus protocols, and is a part of the currently deployed protocol supporting Ethereum.
We establish an exact characterization of the security region of the GHOST protocol, identifying the relationship between the rate of honest block production, the rate of adversarial block production, and network delays that guarantee that the protocol reaches consensus. In contrast to the closely related Nakamoto consensus protocol, we find that the region depends on the convention used by the protocol for tiebreaking; we establish tight results for both adversarial tiebreaking, in which ties are broken adversarially in order to frustrate consensus, and deterministic tiebreaking, in which ties between pairs of blocks are broken consistently throughout an execution. We provide explicit attacks for both conventions which stall consensus outside of the security region.
We conclude that the security region of GHOST can be strictly improved by incorporating a tiebreaking mechanism; in either case, however, the final region of security is inferior to that of Nakamoto consensus.
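For reference, the fork-choice rule under analysis can be sketched compactly, with the tiebreaking convention exposed as a parameter (our sketch, not the paper's formal model):

```python
# GHOST fork choice: at each step, descend into the child whose *subtree*
# is heaviest; the tiebreak argument models the convention analyzed above.
def subtree_weight(children, block):
    return 1 + sum(subtree_weight(children, c) for c in children.get(block, []))

def ghost_tip(children, root, tiebreak=min):
    block = root
    while children.get(block):
        kids = children[block]
        best = max(subtree_weight(children, c) for c in kids)
        tied = [c for c in kids if subtree_weight(children, c) == best]
        block = tiebreak(tied)      # the tiebreaking convention matters
    return block

# A fork where the heaviest subtree disagrees with the longest chain:
tree = {"G": ["A", "B"], "A": ["A1"], "A1": ["A2"], "B": ["B1", "B2", "B3"]}
print(ghost_tip(tree, "G"))   # 'B1': B's subtree weighs 4, beating A's chain
```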
Compiled Nonlocal Games from any Trapdoor Claw-Free Function
A recent work of Kalai et al. (STOC 2023) shows how to compile any multi-player nonlocal game into a protocol with a single computationally-bounded prover. Subsequent works have built on this to develop new cryptographic protocols, where a completely classical client can verify the validity of quantum computation done by a quantum server. Their compiler relies on the existence of quantum fully-homomorphic encryption.
In this work, we propose a new compiler for transforming nonlocal games into single-prover protocols. Our compiler is based on the framework of measurement-based quantum computation. It can be instantiated assuming the existence of \emph{any} trapdoor function that satisfies the claw-freeness property.
Leveraging results by Natarajan and Zhang (FOCS 2023) on compiled nonlocal games, our work implies the existence of new protocols to classically verify quantum computation from potentially weaker computational assumptions than previously known.
OPTIMSM: FPGA hardware accelerator for Zero-Knowledge MSM
The Multi-Scalar Multiplication (MSM) is the main barrier to accelerating Zero-Knowledge applications. In recent years, hardware acceleration of this algorithm on both FPGA and GPU has become a popular research topic and the subject of a multi-million dollar prize competition (ZPrize). This work presents OPTIMSM: Optimized Processing Through Iterative Multi-Scalar Multiplication. This novel accelerator focuses on the acceleration of the MSM algorithm for any Elliptic Curve (EC) by improving upon the Pippenger algorithm. A new iteration technique is introduced to decouple the required buckets from the window size, resulting in fewer EC computations for the same on-chip memory resources. Furthermore, we combine known optimizations from the literature for the first time to achieve additional latency improvements. Our enhanced MSM implementation significantly reduces computation time, achieving a speedup of up to $12.77\times$ compared to recent FPGA implementations. Specifically, for the BLS12-381 curve, we reduce the computation time for an MSM of size $2^{24}$ to 914 ms using a single compute unit on the U55C FPGA or to 231 ms using four U55C devices. These results indicate a substantial improvement in efficiency, paving the way for more scalable and efficient Zero-Knowledge proof systems.
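The bucket method that OPTIMSM builds on can be sketched in a few lines. In the toy below (ours), the group is $(\mathbb{Z}_p, +)$ as a stand-in for elliptic-curve points, and the paper's iteration technique for decoupling bucket count from window size is not reproduced:

```python
# Pippenger-style MSM: split scalars into c-bit windows, accumulate points
# into buckets per window value, then fold buckets with a running suffix sum.
import random
P = 2**255 - 19   # stand-in group modulus

def msm(scalars, points, c=4):
    windows = (max(scalars).bit_length() + c - 1) // c
    total = 0
    for w in reversed(range(windows)):
        buckets = [0] * (1 << c)
        for s, pt in zip(scalars, points):
            idx = (s >> (w * c)) & ((1 << c) - 1)
            if idx:
                buckets[idx] = (buckets[idx] + pt) % P      # cheap adds only
        running = window_sum = 0
        for b in reversed(buckets[1:]):   # computes sum(idx * buckets[idx])
            running = (running + b) % P
            window_sum = (window_sum + running) % P
        total = (total * (1 << c) + window_sum) % P          # shift by window
    return total

scalars = [random.getrandbits(16) for _ in range(100)]
points = [random.randrange(P) for _ in range(100)]
assert msm(scalars, points) == sum(s * p for s, p in zip(scalars, points)) % P
```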
Cloning Games, Black Holes and Cryptography
The no-cloning principle has played a foundational role in quantum information and cryptography. Following a long-standing tradition of studying quantum mechanical phenomena through the lens of interactive games, Broadbent and Lord (TQC 2020) formalized cloning games in order to quantitatively capture no-cloning in the context of unclonable encryption schemes.
The conceptual contribution of this paper is the new, natural, notion of Haar cloning games together with two applications. In the area of black-hole physics, our game reveals that, in an idealized model of a black hole which features Haar random (or pseudorandom) scrambling dynamics, the information from infalling entangled qubits can only be recovered from either the interior or the exterior of the black hole---but never from both places at the same time. In the area of quantum cryptography, our game helps us construct succinct unclonable encryption schemes from the existence of pseudorandom unitaries, thereby, for the first time, bridging the gap between ``MicroCrypt'' and unclonable cryptography. The technical contribution of this work is a tight analysis of Haar cloning games which requires us to overcome many long-standing barriers in our understanding of cloning games:
1. Are there cloning games which admit no non-trivial winning strategies? Resolving this particular question turns out to be crucial for our application to black-hole physics. Existing work analyzing the $n$-qubit BB84 game and the subspace coset game only achieves bounds of $2^{-0.228n}$ and $2^{-0.114n+o(n)}$, respectively, while the trivial adversarial strategy wins with probability $2^{-n}$. We show that the Haar cloning game is the hardest cloning game, by demonstrating a worst-case to average-case reduction for a large class of games which we refer to as oracular cloning games. We then show that the Haar cloning game admits no non-trivial winning strategies.
2. All existing works analyze $1\mapsto 2$ cloning games; can we prove bounds on $t\mapsto t+1$ games for large $t$? Such bounds are crucial in our application to unclonable cryptography. Unfortunately, the BB84 game is not even $2\mapsto 3$ secure, and the subspace coset game is not $t\mapsto t+1$ secure for a polynomially large $t$. We show that the Haar cloning game is $t\mapsto t+1$ secure provided that $t = o(\log n / \log \log n)$, and we conjecture that this holds for $t$ that is polynomially large (in $n$).
Answering these questions provably requires us to go beyond existing methods (Tomamichel, Fehr, Kaniewski and Wehner, New Journal of Physics 2013). In particular, we show a new technique for analyzing cloning games with respect to binary phase states through the lens of binary subtypes, and combine it with novel bounds on the operator norms of block-wise tensor products of matrices.
BrakingBase - a linear prover, poly-logarithmic verifier, field agnostic polynomial commitment scheme
We propose a Polynomial Commitment Scheme (PCS), called BrakingBase, which allows a prover to commit to multilinear (or univariate) polynomials with $n$ coefficients in $O(n)$ time. The evaluation protocol of BrakingBase operates with an $O(n)$ time-complexity for the prover, while the verifier time-complexity and proof-complexity are $O(\lambda \log^2 n)$, where $\lambda$ is the security parameter. Notably, BrakingBase is field-agnostic, meaning it can be instantiated over any field of sufficiently large size. Additionally, BrakingBase can be combined with the Polynomial Interactive Oracle Proof (PIOP) from Spartan (Crypto 2020) to yield a Succinct Non-interactive ARgument of Knowledge (SNARK) with a linear-time prover, as well as poly-logarithmic complexity for both the verifier runtime and the proof size. We obtain our PCS by combining the Brakedown and Basefold PCS. The commitment protocol of BrakingBase is similar to that of Brakedown. The evaluation protocol of BrakingBase improves upon Brakedown’s verifier work by reducing it through multiple instances of the sum-check protocol. Basefold PCS is employed to commit to and later evaluate the multilinear extension (MLE) of the witnesses involved in the sum-check protocol at random points. This includes the MLE corresponding to the parity-check matrix of the linear-time encodable code used in Brakedown. We show that this matrix is sparse and use the Spark compiler from Spartan to evaluate its multilinear extension at a random point. We implement BrakingBase and compare its performance to Brakedown and Basefold over a 128-bit prime field.
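As background for the evaluation protocol's main tool, here is a minimal sum-check sketch in Python over a toy prime field. It illustrates how a claimed hypercube sum of a multilinear polynomial is verified with one low-degree message per round and a single final oracle query; the field and variable conventions are illustrative assumptions, and this is not BrakingBase's protocol itself.

```python
# Toy sum-check: a prover convinces a verifier that the sum of a multilinear
# polynomial (given by its evaluations on {0,1}^n) equals a claimed value.

import random

P = 2**61 - 1  # Mersenne prime; the toy field

def fold(table, r):
    """Fix the first variable of a multilinear evaluation table to r (mod P)."""
    half = len(table) // 2
    return [(table[i] * (1 - r) + table[i + half] * r) % P for i in range(half)]

def mle_eval(evals, point):
    table = list(evals)
    for r in point:
        table = fold(table, r)
    return table[0]

def sumcheck(evals, n_vars):
    claim = sum(evals) % P                 # prover's claimed hypercube sum
    table, point = list(evals), []
    for _ in range(n_vars):
        half = len(table) // 2
        g0, g1 = sum(table[:half]) % P, sum(table[half:]) % P  # prover message
        assert (g0 + g1) % P == claim      # verifier's round check
        r = random.randrange(P)            # verifier's random challenge
        claim = (g0 * (1 - r) + g1 * r) % P
        table, point = fold(table, r), point + [r]
    assert claim == mle_eval(evals, point) # final polynomial-oracle query
    return True

assert sumcheck([3, 1, 4, 1, 5, 9, 2, 6], 3)
```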
Constructing Dembowski–Ostrom permutation polynomials from upper triangular matrices
We establish a one-to-one correspondence between Dembowski-Ostrom (DO) polynomials and upper triangular matrices. Based on this correspondence, we give a bijection between DO permutation polynomials and a special class of upper triangular matrices, and construct a new batch of DO permutation polynomials. To the best of our knowledge, almost all other known DO permutation polynomials are defined over finite fields $\mathbb{F}_{2^n}$ where $n$ contains odd factors (see Table 1). However, there are no restrictions on $n$ in our results, and in particular the case of $n=2^m$ has not been studied in the literature. For example, we provide a simple necessary and sufficient condition to determine when $\gamma\, Tr(\theta_{i}x)Tr(\theta_{j}x) + x$ is a DO permutation polynomial. In addition, when the upper triangular matrix degenerates into a diagonal matrix and the elements on the main diagonal form a basis of $\mathbb{F}_{q^{n}}$ over $\mathbb{F}_{q}$, this diagonal matrix corresponds to all linearized permutation polynomials. In summary, we construct several new DO permutation polynomials, and our results can be viewed as an extension of linearized permutation polynomials.
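The permutation condition in the example above is easy to probe by exhaustion in a tiny field. The Python sketch below brute-forces, in $\mathbb{F}_{2^4}$, which triples $(\gamma, \theta_i, \theta_j)$ make $\gamma\,Tr(\theta_i x)Tr(\theta_j x) + x$ a permutation; the field size and irreducible polynomial are illustrative choices, and the paper's exact criterion is not reproduced here.

```python
# Exhaustive check, in the toy field GF(2^4), of which triples (g, a, b) make
# f(x) = g*Tr(a*x)*Tr(b*x) + x a permutation of the field.

MOD, N = 0b10011, 4          # x^4 + x + 1, irreducible over GF(2)

def gf_mul(x, y):            # carry-less multiplication with reduction
    r = 0
    while y:
        if y & 1:
            r ^= x
        x <<= 1
        if x & (1 << N):
            x ^= MOD
        y >>= 1
    return r

def tr(x):                   # absolute trace: x + x^2 + x^4 + x^8, lands in {0,1}
    t, p = 0, x
    for _ in range(N):
        t ^= p
        p = gf_mul(p, p)
    return t

field = range(1 << N)
count = 0
for g in field:
    for a in field:
        for b in field:
            img = {x ^ (g if tr(gf_mul(a, x)) & tr(gf_mul(b, x)) else 0)
                   for x in field}
            count += (len(img) == (1 << N))
print(count, "of", (1 << N) ** 3, "triples give permutations")
```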
RAD-FS: Remote Timing and Power SCA Security in DVFS-Augmented Ultra-Low-Power Embedded Systems
High-performance crypto-engines have become crucial components in modern System-on-Chip (SoC) architectures across platforms, from servers to edge-IoT devices. Alas, their secure operation faces a significant obstacle: information leakage exposed through Side-Channel Analysis (SCA). Adversaries apply statistical-analysis techniques to measured signatures (e.g., power and timing) generated during operations such as encryption, thereby extracting secrets. Mathematical countermeasures against such attacks often impose substantial power-performance-area overheads. Dynamic Voltage and Frequency Scaling (DVFS) techniques provide power-efficiency by varying power consumption according to workload; these modulations are called power-states. Unintentionally, DVFS introduces new inherent weaknesses exploitable by malicious actors: power-states leak information in both power and timing side-channels, measurable in software and hardware. We introduce a method to increase side-channel resistance using integrated voltage regulators and DVFS: (1) pushing known prior art on the topic to the Ultra-Low-Power (ULP) regime; (2) introducing, for the first time, a mechanism to help counteract the inherent weakness of DVFS with respect to SCA; (3) providing measurements performed on a 40nm-process ULP PLS15 test-chip down at a 580 mV power supply; (4) offering improved and parameterized resistance to the remote-timing vulnerabilities inherent to DVFS. We present various results and perform a detailed analysis while comparing performance and security to prior art. Importantly, our solution is configurable in terms of security, maintaining degrees of freedom for the power optimization of DVFS.
A Composability Treatment of Bitcoin's Transaction Ledger with Variable Difficulty
As the first proof-of-work (PoW) permissionless blockchain, Bitcoin aims at maintaining a decentralized yet consistent transaction ledger as protocol participants (“miners”) join and leave as they please. This is achieved by means of a subtle PoW difficulty adjustment mechanism that adapts to the perceived block generation rate, and important steps have been taken in previous work to provide a rigorous analysis of the conditions (such as bounds on dynamic participation) that are sufficient for Bitcoin’s security properties to be ascertained.
Such existing analysis, however, is property-based, and as such only guarantees security when the protocol is run $\textbf{in isolation}$. In this paper we present the first (to our knowledge) simulation-based analysis of the Bitcoin ledger in the dynamic setting where it operates, and show that the protocol abstraction known as the Bitcoin backbone protocol emulates, under certain participation restrictions, Bitcoin’s intended specification. Our formulation and analysis extend the existing Universally Composable treatment for the fixed-difficulty setting, and develop techniques that might be of broader applicability, in particular to other composable formulations of blockchain protocols that rely on difficulty adjustment.
Exploring the Advantages and Challenges of Fermat NTT in FHE Acceleration
Recognizing the importance of fast and resource-efficient polynomial multiplication in homomorphic encryption, in this paper we design a multiplier-less number theoretic transform using a Fermat number as an auxiliary modulus. To make this algorithm scalable with the polynomial degree, we apply a univariate-to-multivariate polynomial ring transformation.
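To see why a Fermat modulus removes multipliers, consider the toy Python sketch below: modulo the Fermat prime $F_4 = 2^{16}+1$, the element $2$ is a 32nd root of unity, so every NTT twiddle factor is a power of two and reduces in hardware to shifts and additions. The parameters are illustrative, and the paper's univariate-to-multivariate scaling is not shown.

```python
# Toy length-32 NTT modulo the Fermat prime F4 = 2^16 + 1. Since 2^16 = -1
# (mod F4), the element 2 is a 32nd root of unity, so all twiddle factors are
# powers of two: each "multiplication" is a shift plus a cheap reduction.

import random

F = (1 << 16) + 1            # 65537
N = 32                       # multiplicative order of 2 modulo F

def ntt(a, root):            # naive O(N^2) transform, enough to show the idea
    return [sum(a[j] * pow(root, i * j, F) for j in range(N)) % F
            for i in range(N)]

def cyclic_convolution(a, b):
    A, B = ntt(a, 2), ntt(b, 2)
    C = [x * y % F for x, y in zip(A, B)]
    inv_root, inv_n = pow(2, F - 2, F), pow(N, F - 2, F)
    return [x * inv_n % F for x in ntt(C, inv_root)]

a = [random.randrange(F) for _ in range(N)]
b = [random.randrange(F) for _ in range(N)]
direct = [sum(a[j] * b[(i - j) % N] for j in range(N)) % F for i in range(N)]
assert cyclic_convolution(a, b) == direct
```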
We develop an accelerator architecture for fully homomorphic encryption using these algorithmic techniques for efficient multivariate polynomial multiplication. For practical homomorphic encryption application benchmarks, the hardware accelerator achieves a 1,200$\times$ speed-up compared to software implementations. Finally, we conclude the paper by discussing the advantages and limitations of the proposed polynomial multiplication method.
Anonymous Public-Key Quantum Money and Quantum Voting
Quantum information allows us to build quantum money schemes, where a bank can issue banknotes in the form of authenticatable quantum states that cannot be cloned or counterfeited: a user in possession of $k$ banknotes cannot produce $k+1$ banknotes. Similar to paper banknotes, in existing quantum money schemes a banknote consists of an unclonable quantum state and a classical serial number, signed by the bank. Thus, they lack one of the most fundamental properties cryptographers look for in a currency scheme: privacy. In this work, we first further develop the formal definitions of privacy for quantum money schemes. Then, we construct the first public-key quantum money schemes that satisfy these security notions. Namely,
• Assuming existence of indistinguishability obfuscation and hardness of Learning with Errors, we construct a public-key quantum money scheme with anonymity against users and traceability by authorities.
Since it is a policy choice whether authorities should be able to track banknotes or not, we also construct an untraceable money scheme, where no one (not even the authorities) can track banknotes.
• Assuming existence of indistinguishability obfuscation and hardness of Learning with Errors, we construct a public-key quantum money scheme with untraceability.
Further, we show that the no-cloning principle, a result of quantum mechanics, allows us to construct schemes, with security guarantees that are classically impossible, for a seemingly unrelated application: voting!
• Assuming existence of indistinguishability obfuscation and hardness of Learning with Errors, we construct a universally verifiable quantum voting scheme with classical votes.
Finally, as a technical tool, we introduce the notion of publicly rerandomizable encryption with strong correctness, where no adversary is able to produce a malicious ciphertext and a malicious random tape such that the ciphertext before and after rerandomization (with the malicious tape) decrypts to different values! We believe this might be of independent interest.
• Assuming the (quantum) hardness of Learning with Errors, we construct a (post-quantum) classical publicly rerandomizable encryption scheme with strong correctness.
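To make the rerandomization syntax concrete, the following Python sketch shows a classical publicly rerandomizable encryption (plain ElGamal over a toy safe-prime subgroup): anyone can refresh a ciphertext without the secret key, and decryption is unchanged. This only illustrates the primitive's interface under toy parameters; it is not the paper's LWE-based construction and does not achieve strong correctness.

```python
# Classical illustration of the publicly-rerandomizable-encryption interface:
# plain ElGamal over a toy subgroup. Not secure parameters; interface only.

import random

p, q, g = 2039, 1019, 4      # p = 2q + 1; g generates the order-q subgroup

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)

def enc(h, m):               # m must lie in the subgroup generated by g
    r = random.randrange(1, q)
    return pow(g, r, p), m * pow(h, r, p) % p

def rerand(h, ct):           # public operation: no secret key involved
    c1, c2 = ct
    s = random.randrange(1, q)
    return c1 * pow(g, s, p) % p, c2 * pow(h, s, p) % p

def dec(x, ct):
    c1, c2 = ct
    return c2 * pow(c1, p - 1 - x, p) % p   # c1^{-x} via Fermat's little theorem

x, h = keygen()
m = pow(g, 123, p)
ct = rerand(h, enc(h, m))    # rerandomized ciphertext looks fresh
assert dec(x, ct) == m
```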
Timed Secret Sharing
This paper introduces the notion of timed secret sharing (TSS), which establishes lower and upper time bounds for secret reconstruction in a threshold secret sharing scheme. Such time bounds are particularly useful in scenarios where an early or late reconstruction of a secret matters. We propose several new constructions that offer different security properties and show how they can be instantiated efficiently using novel techniques. We highlight how our ideas can be used to break the public goods game, an issue inherent to threshold secret sharing-based systems, without relying on an incentive mechanism. We achieve this through an upper time bound that can be implemented either via short-lived proofs or via the gradual release of additional shares, establishing a trade-off between time and fault tolerance. The latter independently provides robustness in the event of dropout by some portion of shareholders.
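For background, the gradual-release idea can be seen on top of plain Shamir sharing: releasing held-back shares over time lowers the number of shareholders needed to reconstruct. The Python sketch below is a minimal, illustrative Shamir implementation over a toy field; none of the paper's timing or verification machinery is modeled.

```python
# Minimal Shamir (t, n) sharing, illustrating the time/fault-tolerance
# trade-off of gradual release.

import random

P = 2**127 - 1               # Mersenne prime field

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(points):     # Lagrange interpolation at 0
    secret = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

secret = 0xC0FFEE
shares = share(secret, t=3, n=5)
assert reconstruct(shares[:3]) == secret               # any 3 of 5 suffice
# After the upper time bound: one extra share is released publicly, so any
# 2 of the remaining shareholders can now finish reconstruction.
released = shares[4]
assert reconstruct([shares[0], shares[1], released]) == secret
```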
SCIF: Privacy-Preserving Statistics Collection with Input Validation and Full Security
Secure aggregation is the distributed task of securely computing a sum of values (or a vector of values) held by a set of parties, revealing only the output (i.e., the sum) of the computation. Existing protocols, such as Prio (NSDI'17), Prio+ (SCN'22), Elsa (S&P'23), and Whisper (S&P'24), support secure aggregation with input validation to ensure inputs belong to a specified domain. However, when malicious servers are present, these protocols primarily guarantee privacy but not input validity; moreover, malicious servers can cause the protocol to abort. We introduce SCIF, a novel multi-server secure aggregation protocol with input validation that remains secure even in the presence of malicious actors, provided fewer than one-third of the servers are malicious. Our protocol overcomes previous limitations by providing two key properties: (1) guaranteed output delivery, ensuring malicious parties cannot prevent the protocol from completing, and (2) guaranteed input inclusion, ensuring no malicious party can prevent an honest party’s input from being included in the computation. Together, these guarantees provide strong resilience against denial-of-service attacks. Moreover, SCIF offers these guarantees without increasing client costs over Prio and keeps server costs moderate. We present a robust end-to-end implementation of SCIF and demonstrate the ease with which it can be deployed by integrating it into a simulated Tor network for privacy-preserving measurement.
On the Power of Oblivious State Preparation
We put forth Oblivious State Preparation (OSP) as a cryptographic primitive that unifies techniques developed in the context of a quantum server interacting with a classical client. OSP allows a classical polynomial-time sender to input a choice of one out of two public observables, and a quantum polynomial-time receiver to recover an eigenstate of the corresponding observable -- while keeping the sender's choice hidden from any malicious receiver.
We obtain the following results:
- The existence of (plain) trapdoor claw-free functions implies OSP, and the existence of dual-mode trapdoor claw-free functions implies round-optimal (two-round) OSP.
- OSP implies the existence of proofs of quantumness, test of a qubit, blind classical delegation of quantum computation, and classical verification of quantum computation.
- Two-round OSP implies quantum money with classical communication, classically-verifiable position verification, and (additionally assuming classical FHE with log-depth decryption) quantum FHE.
Thus, the OSP abstraction helps separate the cryptographic layer from the information-theoretic layer when building cryptosystems across classical and quantum participants. Indeed, several of the aforementioned applications were previously only known via tailored LWE-based constructions, whereas our OSP-based constructions yield new results from a wider variety of assumptions, including hard problems on cryptographic group actions.
Finally, towards understanding the minimal hardness assumptions required to realize OSP, we prove the following:
- OSP implies oblivious transfer between one classical and one quantum party.
- Two-round OSP implies public-key encryption with classical keys and ciphertexts.
In particular, these results help to ``explain'' the use of public-key cryptography in the known approaches to establishing a ``classical leash'' on a quantum server. For example, combined with a result of Austrin et al. (CRYPTO 22), we conclude that perfectly-correct OSP cannot exist unconditionally in the (quantum) random oracle model.
Elementary Formulas for Greatest Common Divisors and Semiprime Factors
We conjecture new elementary formulas for computing the greatest common divisor (GCD) of two integers, alongside an elementary formula for extracting the prime factors of semiprimes. These formulas are of fixed length and require only the basic arithmetic operations: addition, subtraction, multiplication, division with remainder, and exponentiation. Our GCD formulas result from simplifying a formula of Mazzanti and are derived using Kronecker substitution techniques from our earlier research. By applying these GCD formulas together with our recent discovery of an arithmetic expression for $\sqrt{n}$, we are able to derive explicit elementary formulas for the prime factors of a semiprime $n=pq$.
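For flavor, a classical (and well-known) elementary expression for the GCD already exists: the lattice-point identity $\sum_{k=1}^{a-1} \lfloor kb/a \rfloor = \big((a-1)(b-1) + \gcd(a,b) - 1\big)/2$, which rearranges to a GCD formula using only addition, subtraction, multiplication, and floor division. The Python check below verifies it exhaustively on small inputs. Note that, unlike the paper's formulas, it is not fixed-length (the sum has $a-1$ terms), and it is not Mazzanti's formula.

```python
# Exhaustive verification of the classical floor-sum GCD identity on small
# inputs. Uses only +, -, * and floor division; included for intuition only.

from math import gcd

def gcd_floor_sum(a, b):
    return 2 * sum(k * b // a for k in range(1, a)) - (a - 1) * (b - 1) + 1

for a in range(1, 80):
    for b in range(1, 80):
        assert gcd_floor_sum(a, b) == gcd(a, b)
```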
IrisLock: Iris Biometric Key Derivation with 42 bits of security
Despite decades of effort, a chasm exists between the theory and practice of device-level biometric authentication. Deployed authentication algorithms rely on data that overtly leaks private information about the biometric; thus systems rely on externalized security measures such as trusted execution environments. The authentication algorithms have no cryptographic guarantees.
This is frustrating given the research that has developed theoretical tools, known as fuzzy extractors, that enable secure, privacy-preserving biometric authentication with public enrollment data (Dodis et al., SIAM JoC 2008). Unfortunately, fuzzy extractor systems either:
- Make strong independence assumptions, such as:
-- Bits of biometrics are i.i.d. (or that all correlation is pairwise between features (Hine et al., TIFS 2023)), or
-- For an error-correcting code, the nearest codeword and the coset of biometric readings are independent (Zhang, Cui, and Yu, ePrint 2021/1559).
These assumptions either have not been statistically checked, or statistical analysis indicates they are false.
- Or use incorrect cryptographic analysis. Simhadri et al. (ISC, 2019) assume the security of sample-then-lock (Canetti et al., Journal of Cryptology 2021) is captured by the average min-entropy of subsets. Zhang et al. (ICPR, 2022) show an attack on this incorrect analysis.
This work introduces IrisLock, an iris key derivation system powered by technical advances in both
1) feature extraction from the iris and
2) the fuzzy extractor used to secure authentication keys. The fuzzy extractor builds on sample-then-lock (Canetti et al., Journal of Cryptology 2021). We correct a proof in Canetti et al. and show that the minimum of the min-entropy of subsets is the relevant security measure. Our primary parameters are $42$ bits of security at a $45\%$ true accept rate (TAR). Our quantitative level of security is as good as that of the above systems: Simhadri et al.'s incorrect analysis yields an estimate of $32$ bits, while Zhang et al.'s system on the face estimates $45$ bits (with the independence condition). One can easily incorporate a password, boosting security to $64$ bits.
Irises used to evaluate TAR and security are class disjoint from those used for training and collecting statistics (the open dataset regime).
The only statistical assumption made is necessary: the accuracy of min-entropy estimation.
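For intuition, here is a toy Python rendering of the sample-then-lock construction referenced above: many random bit-subsets of the biometric each key a hash-based digital locker holding the same secret, and any subset read without errors opens its locker. All parameters are illustrative assumptions, far weaker than the paper's, and SHA-256 stands in for the random oracle.

```python
# Toy sample-then-lock fuzzy extractor (after Canetti et al., J. Cryptology
# 2021). Enrollment publishes hash-based "lockers"; reproduction opens one.

import hashlib, secrets

SUBSET, LOCKERS, PAD = 16, 64, 16        # subset size, #lockers, padding bytes

def H(nonce, bits):
    return hashlib.sha256(nonce + bits).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def enroll(iris, key):                   # iris: list of 0/1; key: 16 bytes
    lockers = []
    for _ in range(LOCKERS):
        idx = [secrets.randbelow(len(iris)) for _ in range(SUBSET)]
        v = bytes(iris[i] for i in idx)
        nonce = secrets.token_bytes(16)
        lockers.append((idx, nonce, xor(H(nonce, v), key + b"\0" * PAD)))
    return lockers                       # public helper data

def reproduce(iris, lockers):            # run on a (noisy) re-reading
    for idx, nonce, pad in lockers:
        out = xor(H(nonce, bytes(iris[i] for i in idx)), pad)
        if out.endswith(b"\0" * PAD):    # locker opened: subset was error-free
            return out[:-PAD]
    return None

iris = [secrets.randbelow(2) for _ in range(1024)]
key = secrets.token_bytes(16)
helper = enroll(iris, key)
noisy = list(iris)
for _ in range(20):                      # ~2% bit-error rate
    noisy[secrets.randbelow(1024)] ^= 1
assert reproduce(iris, helper) == key
print("noisy reading recovered key:", reproduce(noisy, helper) == key)
```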
Improved Gadgets for the High-Order Masking of Dilithium
We present novel and improved high-order masking gadgets for Dilithium, a post-quantum signature scheme that has been standardized by the National Institute of Standards and Technology (NIST). Our proposed gadgets include the ShiftMod gadget, which is used for efficient arithmetic shifts and serves as a component in other masking gadgets. Additionally, we propose a new algorithm for Boolean-to-arithmetic masking conversion of a $\mu$-bit integer $x$ modulo any integer $q$, with a complexity that is independent of both $\mu$ and $q$. This algorithm is used in Dilithium to mask the generation of the random variable $y$ modulo $q$. Moreover, we describe improved techniques for masking the Decompose function in Dilithium. Our new gadgets are proven to be secure in the $t$-probing model.
We demonstrate the effectiveness of our countermeasures by presenting a complete high-order masked implementation of Dilithium that utilizes the improved gadgets described above. We provide practical results obtained from a C implementation and compare the performance improvements provided by our new gadgets with those of previous work.
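To see why gadgets like ShiftMod are needed at all, the Python snippet below states the basic facts about arithmetic masking modulo $q$: linear operations act share-wise for free, but an arithmetic shift of the shared value cannot be done by shifting each share, because inter-share carries are lost. The snippet exhibits the problem only; it does not implement the paper's gadget.

```python
# Arithmetic masking mod q: linear maps are share-wise and free, but a naive
# share-wise shift of the shares does NOT shift the masked value.

import random

q = 8380417                              # the Dilithium modulus

def mask(x, n=3):                        # n-share arithmetic masking mod q
    shares = [random.randrange(q) for _ in range(n - 1)]
    return shares + [(x - sum(shares)) % q]

def unmask(shares):
    return sum(shares) % q

x = random.randrange(q)
sh = mask(x)
assert unmask([3 * s % q for s in sh]) == 3 * x % q   # linear map: free
print("value >> 1       :", x >> 1)
print("share-wise shift :", unmask([s >> 1 for s in sh]))  # wrong in general:
# inter-share carries are lost, which is exactly the gap ShiftMod closes.
```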
VCVio: A Formally Verified Forking Lemma and Fiat-Shamir Transform, via a Flexible and Expressive Oracle Representation
As cryptographic protocols continue to become more complex and specialized, their security proofs have grown more complex as well, making manual verification of their correctness more difficult. Formal verification via proof assistants has become a popular approach to solving this, by allowing researchers to write security proofs that can be verified correct by a computer.
In this paper we present a new framework of this kind for verifying security proofs, taking a foundational approach to representing and reasoning about protocols. We implement our framework in the Lean programming language, and give a number of security proofs to demonstrate that our system is both powerful and usable, with comparable automation to similar systems.
Our framework is especially focused on reasoning about and manipulating oracle access, and we demonstrate the usefulness of this approach by implementing both a general forking lemma and a version of the Fiat-Shamir transform for sigma protocols. As a simple case study we then instantiate these to an implementation of a Schnorr-like signature scheme.
Improved High-Order Masked Generation of Masking Vector and Rejection Sampling in Dilithium
In this work, we introduce enhanced high-order masking techniques tailored for Dilithium, the post-quantum signature scheme recently standardized by NIST. We improve the masked generation of the masking vector $\vec{y}$, based on a fast Boolean-to-arithmetic conversion modulo $q$. We also describe an optimized gadget for the high-order masked rejection sampling, with a complexity independent from the size of the modulus $q$. We prove the security of our gadgets in the classical ISW $t$-probing model. Finally, we detail our open-source C implementation of these gadgets integrated into a fully masked Dilithium implementation, and provide an efficiency comparison with previous works.
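As a reference point for what the masked gadget computes, the following unmasked Python sketch shows Dilithium-style rejection sampling: a candidate $z = y + cs_1$ is released only if every coefficient stays within the safe infinity-norm bound, so the released $z$ is independent of the secret. The bounds are illustrative Dilithium2-like values; the paper performs this check directly on masked shares.

```python
# Unmasked reference for Dilithium-style rejection sampling.

import random

GAMMA1, BETA, N = 2**17, 78, 256         # illustrative Dilithium2-like bounds

def reject(z):
    return any(abs(c) >= GAMMA1 - BETA for c in z)

attempts = 0
while True:
    attempts += 1
    y = [random.randrange(-GAMMA1 + 1, GAMMA1) for _ in range(N)]
    cs1 = [random.randrange(-BETA, BETA + 1) for _ in range(N)]  # stand-in for c*s1
    z = [a + b for a, b in zip(y, cs1)]
    if not reject(z):                    # release z only if all coefficients are safe
        break
print("accepted after", attempts, "attempt(s)")
```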
Formulations and Constructions of Remote State Preparation with Verifiability, with Applications
Remote state preparation with verifiability (RSPV) is an important quantum cryptographic primitive [GV19,Zha22]. In this primitive, a client would like to prepare a quantum state (sampled or chosen from a state family) on the server side, such that ideally the client knows its full description, while the server holds and only holds the state itself. In this work we make several contributions on its formulations, constructions and applications. In more detail:
- We first work on the definitions and abstract properties of the RSPV problem. We select and compare different variants of definitions [GV19,GMP22,Zha22], and study their basic properties (like composability and amplification).
- We also study a closely related question of how to certify the server's operations (instead of solely the states). We introduce a new notion named remote operator application with verifiability (ROAV). We compare this notion with related existing definitions [SW87,MY04,MV21,NZ23], study its abstract properties, and leave its concrete constructions to future work.
- Building on the abstract properties and existing results [BGKPV23], we construct a series of new RSPV protocols. Our constructions not only simplify existing results [GV19] but also cover new state families, for example, states in the form of $\frac{1}{\sqrt{2}}(|0\rangle|x_0\rangle+|1\rangle|x_1\rangle)$. All these constructions rely only on the existence of weak NTCF [BKVV,AMR22], without additional requirements like the adaptive hardcore bit property [BCMVV,AMR22].
- As a further application, we show that the classical verification of quantum computations (CVQC) problem [ABEM,Mah18] could be constructed from assumptions on group actions [ADMP20]. This is achieved by combining our results on RSPV with group-action-based instantiation of weak NTCF [AMR22], and then with the quantum-gadget-assisted quantum verification protocol [FKD18].
SoK: On the Physical Security of UOV-based Signature Schemes
Multivariate cryptography currently centres mostly around UOV-based signature schemes: all multivariate round-2 candidates in NIST's selection process for additional digital signatures, namely MAYO, QR-UOV, SNOVA, and UOV, are either UOV itself or close variations of it. Schemes that were previously in the focus of the multivariate research community but are broken by now, like Rainbow and LUOV, are also based on UOV. Both UOV and the schemes based on it have been frequently analyzed regarding their physical security in the course of the NIST process. However, a comprehensive analysis regarding the physical security of UOV-based signature schemes is missing.
In this work, we want to bridge this gap and create a comprehensive overview of physical attacks on UOV and its variants from the second round of NIST’s selection process for additional post-quantum signature schemes, which just started. First, we collect all existing side-channel and fault attacks on UOV-based schemes and transfer them to the current UOV specification. Since UOV was subject to significant changes over the past few years, e.g., adaptations to the expanded secret key, some attacks need to be reassessed. Next, we introduce new physical attacks in order to obtain an overview as complete as possible. We then show how all these attacks would translate to MAYO, QR-UOV, and SNOVA. To improve the resistance of UOV-based signature schemes towards physical attacks, we discuss and introduce dedicated countermeasures. As a related result, we observe that certain implementation decisions, like key compression techniques and randomization choices, also have a large impact on the physical security, in particular on the effectiveness of the considered fault attacks. Finally, we provide implementations of UOV and MAYO for the ARM Cortex-M4 architecture that feature first-order masking and protection against selected fault attacks. We benchmark the resulting overhead on a NUCLEO-L4R5ZI board and validate our approach by performing a TVLA on original and protected subroutines, yielding significantly smaller t-values for the latter.
Improved ML-DSA Hardware Implementation With First Order Masking Countermeasure
We present a protected hardware implementation of the Module-Lattice-Based Digital Signature Standard (ML-DSA). ML-DSA is an extension of Dilithium 3.1, the winner of the Post-Quantum Cryptography (PQC) competition in the digital signature category. The proposed design is based on the existing high-performance Dilithium 3.1 design. We implement existing Dilithium masking gadgets in hardware, which had previously been implemented only in software. The masking gadgets are integrated with the unprotected ML-DSA design, and the complete design is functionally verified with the Known Answer Tests (KATs) generated from an updated ML-DSA software implementation. We also present experimental results of a practical power side-channel attack, implementing the masking gadgets on a standard side-channel evaluation FPGA board and collecting up to 1 million power traces. The protected design incurs overheads of 1.127× in LUTs, 1.2× in flip-flops, and 378× in execution time compared to the unprotected design. The experimental results show that it resists side-channel attacks.
Decryption Indistinguishability under Chosen Control Flow
Security proofs for cryptographic primitives typically assume operations are executed in the correct sequence; however, insecure implementations or software-level attacks can disrupt control flows, potentially invalidating these guarantees. To address this issue, we introduce a new security notion, IND-CFA, which formalizes decryption security in the presence of adversarially controlled execution flows. Using this notion, we investigate the control flows under which a cryptographic scheme remains secure, providing insights into secure implementation practices. We revisit the Encrypt-then-MAC paradigm, underscoring the crucial role of operation sequencing in ensuring the security of authenticated encryption schemes built using this method. Additionally, we provide a detailed analysis of the Encode-then-Encipher (EtE) paradigm, a widely adopted approach for constructing robust AE schemes, revealing its vulnerability to adversarial control flows that can enable attackers to infer low-entropy values in the presence of multiple failure conditions.
Et tu, Brute? SCA Assisted CCA using Valid Ciphertexts - A Case Study on HQC KEM
HQC is a code-based key encapsulation mechanism (KEM) that was selected to move to the fourth round of the NIST post-quantum standardization process. While this scheme was previously targeted by side-channel assisted chosen-ciphertext attacks for key recovery, all these attacks have relied on malformed ciphertexts. They can therefore be easily prevented by deploying detection-based countermeasures that flag invalid ciphertexts, and by refreshing the secret key upon detection of an invalid ciphertext. This prevents further exposure of the secret key to the attacker and thus serves as an attractive option for protection against prior attacks. In this work, we present a critical analysis of the detection-based countermeasure and present the first side-channel based chosen-ciphertext attack that uses only valid ciphertexts for key recovery, thereby defeating the detection-based countermeasure. We propose novel attacks exploiting leakage from the ExpandAndSum and FindPeaks operations within the Reed-Muller decoder for full key recovery with a 100% success rate. We show that our attacks are quite robust to noise in the side-channel measurements, and we also present novel extensions of our attack to the shuffling countermeasure on both the ExpandAndSum and FindPeaks operations, which renders the shuffling countermeasure ineffective. Our work therefore shows that low-cost detection-based countermeasures can be rendered ineffective and cannot offer standalone protection against CC-based side-channel attacks. Thus, our work encourages further study towards the development of new low-cost countermeasures against CC-based side-channel attacks.
Succinct Randomized Encodings from Non-compact Functional Encryption, Faster and Simpler
Succinct randomized encodings allow encoding the input $x$ of a time-$t$ uniform computation $M(x)$ in sub-linear time $o(t)$. The resulting encoding $\tilde{x}$ allows recovering the result of the computation $M(x)$, but hides any other information about $x$. Such encodings are known to have powerful applications such as reducing communication in MPC, bootstrapping advanced encryption schemes, and constructing time-lock puzzles.
Until not long ago, the only known constructions were based on indistinguishability obfuscation, and in particular they were not based on standard post-quantum assumptions. In terms of efficiency, these constructions' encoding time is $\rm{polylog}(t)$, essentially the best one can hope for. Recently, a new construction was presented based on Circular Learning with Errors, an assumption similar to the one used in fully-homomorphic encryption schemes, and which is widely considered to be post-quantum resistant. However, the encoding efficiency significantly falls behind the obfuscation-based schemes and is $\approx \sqrt{t} \cdot s$, where $s$ is the space of the computation.
We construct, under the same assumption, succinct randomized encodings with encoding time $\approx t^{\varepsilon} \cdot s$ for arbitrarily small constant $\varepsilon<1$. Our construction is relatively simple, generic and relies on any non-compact single-key functional encryption that satisfies a natural {\em efficiency preservation} property.
SophOMR: Improved Oblivious Message Retrieval from SIMD-Aware Homomorphic Compression
Privacy-preserving blockchains and private messaging services that ensure receiver-privacy face a significant UX challenge: each client must scan every payload posted on the public bulletin board individually to avoid missing messages intended for them. Oblivious Message Retrieval (OMR) addresses this issue by securely outsourcing this expensive scanning process to a service provider using Homomorphic Encryption (HE).
In this work, we propose a new OMR scheme that substantially improves upon the previous state-of-the-art, PerfOMR (USENIX Security'24). Our implementation demonstrates reductions of $3.3\times$ in runtime, $2.2\times$ in digest size, and $1.3\times$ in key size, in a scenario with 65536 payloads (each 612 bytes), of which up to 50 are pertinent.
At the core of these improvements is a new homomorphic compression mechanism, where ciphertexts of length proportional to the number of total payloads are compressed into a digest whose length is proportional to the upper bound on the number of pertinent payloads. Unlike previous approaches, our scheme fully exploits the native homomorphic SIMD structure of the underlying HE scheme, significantly enhancing efficiency. In the setting described above, our compression scheme achieves 7.4x speedup compared to PerfOMR.
Revisiting Leakage-Resilient MACs and Succinctly-Committing AEAD: More Applications of Pseudo-Random Injections
Pseudo-Random Injections (PRIs) have had several applications in symmetric-key cryptography, such as in the idealization of Authenticated Encryption with Associated Data (AEAD) schemes, building robust AEAD, and, recently, in converting a committing AEAD scheme into a succinctly committing AEAD scheme. In Crypto 2024, Bellare and Hoang showed that if an AEAD scheme is already committing, it can be transformed into a succinctly committing scheme by encrypting part of the plaintext using a PRI. In this paper, we revisit the applications of PRIs in building Message Authentication Codes (MACs) and AEAD schemes.
First, we look at some of the properties and definitions of PRIs, such as collision resistance and unforgeability when used as a MAC with a small plaintext space, under different leakage models. Next, we show how they can be combined with collision-resistant hash functions to build a MAC for long plaintexts, offering flexible security depending on how the PRI and the equality check are implemented. If both the PRI and the equality check are leak-free, the MAC provides almost optimal security, but the security only degrades a little if the equality check is merely leakage-resilient (rather than leak-free). If the equality check has unbounded leakage, the security drops to a baseline level rather than being lost completely. Next, we show how to use PRIs to build a succinctly committing online AEAD scheme, dubbed scoAE, from scratch that achieves succinct CMT4 security, privacy, and Ciphertext Integrity with Misuse and Leakage (CIML2) security. Last but not least, we show how to build a succinct nonce Misuse-Resistant AEAD (MRAE) scheme, dubbed scMRAE. The construction combines the SIV paradigm with PRI-based encryption (e.g., the Encode-then-Encipher (EtE) framework).
Batching Adaptively-Sound SNARGs for NP
A succinct non-interactive argument (SNARG) for NP allows a prover to convince a verifier that an NP statement $x$ is true with a proof whose size is sublinear in the length of the traditional NP witness. Moreover, a SNARG is adaptively sound if the adversary can choose the statement it wants to prove after seeing the scheme parameters. Very recently, Waters and Wu (STOC 2024) showed how to construct adaptively-sound SNARGs for NP in the plain model from falsifiable assumptions (specifically, sub-exponentially-secure indistinguishability obfuscation, sub-exponentially-secure one-way functions, and polynomial hardness of discrete log).
We consider the batch setting where the prover wants to prove a collection of $T$ statements $x_1, \ldots, x_T$ and its goal is to construct a proof whose size is sublinear in both the size of a single witness and the number of instances $T$. In this setting, existing constructions either require the size of the public parameters to scale linearly with $T$ (and thus, can only support an a priori bounded number of instances), or only provide non-adaptive soundness, or have proof size that scales linearly with the size of a single NP witness. In this work, we give two approaches for batching adaptively-sound SNARGs for NP, and in particular, show that under the same set of assumptions as those underlying the Waters-Wu adaptively-sound SNARG, we can obtain an adaptively-sound SNARG for batch NP where the size of the proof is $\mathsf{poly}(\lambda)$ and the size of the CRS is $\mathsf{poly}(\lambda + |C|)$, where $\lambda$ is a security parameter and $|C|$ is the size of the circuit that computes the associated NP relation.
Our first approach builds directly on top of the Waters-Wu construction and relies on indistinguishability obfuscation and a homomorphic re-randomizable one-way function. Our second approach shows how to combine ideas from the Waters-Wu SNARG with the chaining-based approach by Garg, Sheridan, Waters, and Wu (TCC 2022) to obtain a SNARG for batch NP.
Ciphertext-Policy ABE from Inner-Product FE
The enormous potential of Attribute-Based Encryption (ABE) in the context of IoT has driven researchers to propose pairing-free ABE schemes that are suitable for resource-constrained devices. Unfortunately, many of these schemes turned out to be insecure. This fact seems to reinforce the view, held by some authors, that instantiating Identity-Based Encryption (IBE) in plain Decisional Diffie-Hellman (DDH) groups is impossible. In this paper, we provide a generic Ciphertext-Policy ABE (CP-ABE) scheme for AND-gate access structures with secret access policies, built from Inner-Product Functional Encryption (IPFE). We also propose an instantiation of that generic CP-ABE scheme from the DDH assumption. From our generic CP-ABE scheme we derive an IBE scheme by introducing the concept of Clustered Identity-Based Encryption (CIBE). Our schemes show that it is indeed possible to construct practical and secure IBE and ABE schemes based on the classical DDH assumption.
Utility Sampling for Trust Metrics in PKI
We propose a new trust metric for a network of public key certificates, e.g., as in PKI, which allows a user to buy insurance at a fair price against the possibility of failure of the certifications relied upon while transacting with an arbitrary party in the network.
Our metric builds on a metric and model of insurance provided by Reiter and Stubblebine~\cite{RS}, while addressing various limitations and drawbacks of the latter. It preserves all the beneficial properties of the latter over other schemes, including protecting the user from unintentional or malicious dependencies in the network of certifications. Our metric is built on top of a simple and intuitive model of trust and risk based on ``utility sampling'', which may be of interest for non-monetary applications as well.
On the Multi-User Security of LWE-based NIKE
Non-interactive key exchange (NIKE) schemes like the Diffie-Hellman key exchange are a widespread building block in several cryptographic protocols. Since the Diffie-Hellman key exchange is not post-quantum secure, it is important to investigate post-quantum alternatives.
We analyze the security of the LWE-based NIKE by Ding et al. (ePrint 2012) and Peikert (PQCrypt 2014) in a multi-user setting where the same public key is used to generate shared keys with multiple other users. The Diffie-Hellman key exchange achieves this security notion. The mentioned LWE-based NIKE scheme comes with an inherent correctness error (Guo et al., PKC 2020), and this has significant implications for the multi-user security, necessitating a closer examination.
Single-user security generically implies multi-user security when all users generate their keys honestly for NIKE schemes with negligible correctness error. However, the LWE-based NIKE requires a super-polynomial modulus to achieve a negligible correctness error, which makes the scheme less efficient. We show that
- generically, single-user security does not imply multi-user security when the correctness error is non-negligible, but despite this
- the LWE-based NIKE with polynomial modulus is multi-user secure for honest users when the number of users is fixed in advance. This result takes advantage of the leakage-resilience properties of LWE.
We then turn to a stronger model of multi-user security that allows adversarially generated public keys. For this model, we consider a variant of the LWE-based NIKE where each public key is equipped with a NIZKPoK of the secret key. Adding NIZKPoKs is a standard technique for this stronger model and Hesse et al. (Crypto 2018) showed that this is sufficient to achieve security in the stronger multi-user security model for perfectly correct NIKEs (which the LWE-based NIKE is not). We show that
- for certain parameters that include all parameters with polynomial modulus, the LWE-based NIKE can be efficiently attacked with adversarially generated public keys, despite the use of NIZKPoKs, but
- for suitable parameters (that require a super-polynomial modulus), this security notion is achieved by the LWE-based NIKE with NIZKPoKs.
This stronger security notion has been previously achieved for LWE-based NIKE only in the QROM, while all our results are in the standard model.
Improved Algebraic Attacks on Round-Reduced LowMC with Single-Data Complexity
Recently, Picnic3 has introduced several alternative LowMC instances, which prompted a cryptanalysis competition for LowMC. In this paper, we provide new solutions for the competition's instances with full S-box layers under single-data complexity. First, we present a new guess-and-determine attack framework that achieves the best trade-off in complexity, while effectively enhancing two algorithms applicable to 2-round LowMC cryptanalysis. Next, we present a new meet-in-the-middle (MITM) attack framework for 2-/3-round LowMC, which can gradually reduce the number of variables and narrow down the range of candidate keys in stages. As a result, our 3-stage MITM attacks have both lower time complexity and lower memory complexity than the best previous 2-round attacks proposed by Banik et al. at ASIACRYPT 2021, with memory reduced drastically by a factor of $2^{29.7} \sim 2^{70.4}$.
Verifiable Secret Sharing from Symmetric Key Cryptography with Improved Optimistic Complexity
In this paper we propose verifiable secret sharing (VSS) schemes secure for any honest majority in the synchronous model, and that only use symmetric-key cryptographic tools, therefore having plausibly post-quantum security. Compared to the state-of-the-art scheme with these features (Atapoor et al., Asiacrypt `23), our main improvement lies in the complexity of the ``optimistic'' scenario where the dealer and all but a small number of receivers behave honestly in the sharing phase: in this case, the running time and download complexity (amount of information read) of each honest verifier is polylogarithmic and the total amount of information broadcast by the dealer is logarithmic; all these complexities were linear in the aforementioned work by Atapoor et al. At the same time, we preserve these complexities with respect to the previous work for the ``pessimistic'' case where the dealer or $O(n)$ receivers cheat actively.
The new VSS protocol is of interest in multiparty computations where each party runs one VSS as a dealer, such as distributed key generation protocols.
Our main technical handle is a distributed zero-knowledge proof of low degreeness of a polynomial, in the model of Boneh et al. (Crypto `19) where the statement (in this case the evaluations of the witness polynomial) is distributed among several verifiers, each knowing one evaluation. Using folding techniques similar to FRI (Ben-Sasson et al., ICALP `18) we construct such a proof where each verifier receives polylogarithmic information and runs in polylogarithmic time.
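The folding step referenced above can be made concrete in a few lines. The Python sketch below performs one FRI-style folding round, splitting a polynomial into even and odd parts and combining them with a random challenge, together with the local consistency check a verifier applies at a sampled point; the field and degrees are toy choices, and the distributed, polylogarithmic aspects of the paper's proof are not modeled.

```python
# One FRI-style folding round: write f(X) = f_e(X^2) + X*f_o(X^2) and fold to
# g(Y) = f_e(Y) + r*f_o(Y) for a random challenge r, halving the degree.

import random

P = 2**61 - 1

def poly_eval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):           # Horner's rule
        acc = (acc * x + c) % P
    return acc

def fold(coeffs, r):
    f_even, f_odd = coeffs[0::2], coeffs[1::2]
    return [(e + r * o) % P for e, o in zip(f_even, f_odd + [0])]

f = [random.randrange(P) for _ in range(8)]      # degree < 8
r = random.randrange(P)
g = fold(f, r)                                   # degree < 4

# Verifier's local consistency check at a sampled point x:
x = random.randrange(1, P)
fx, fmx = poly_eval(f, x), poly_eval(f, (-x) % P)
fe = (fx + fmx) * pow(2, P - 2, P) % P           # f_e(x^2) = (f(x)+f(-x))/2
fo = (fx - fmx) * pow(2 * x, P - 2, P) % P       # f_o(x^2) = (f(x)-f(-x))/(2x)
assert poly_eval(g, x * x % P) == (fe + r * fo) % P
```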
Pseudorandom Function-like States from Common Haar Unitary
Recent active studies have demonstrated that cryptography without one-way functions (OWFs) could be possible in the quantum world. Many fundamental primitives that are natural quantum analogs of OWFs or pseudorandom generators (PRGs) have been introduced, and their mutual relations and applications have been studied. Among them, pseudorandom function-like state generators (PRFSGs) [Ananth, Qian, and Yuen, Crypto 2022] are one of the most important primitives. PRFSGs are a natural quantum analogue of pseudorandom functions (PRFs), and imply many applications such as IND-CPA secret-key encryption (SKE) and EUF-CMA message authentication codes (MACs). However, the only known constructions of (many-query-secure) PRFSGs are those based on OWFs or pseudorandom unitaries (PRUs).
In this paper, we construct classically-accessible adaptively secure PRFSGs in the invertible quantum Haar random oracle (QHRO) model, introduced in [Chen and Movassagh, Quantum]. The invertible QHRO model is an idealized model where any party can access a public single Haar random unitary and its inverse, which can be considered a quantum analog of the random oracle model. Our PRFSG constructions resemble the classical Even-Mansour encryption based on a single permutation, and are secure against any unbounded polynomial number of queries to the oracle and construction. To our knowledge, this is the first application in the invertible QHRO model without any assumption or conjecture. The previous best construction in an idealized model is a PRFSG secure up to $o(\lambda/\log \lambda)$ queries in the common Haar state model [Ananth, Gulati, and Lin, TCC 2024].
We develop new techniques on Haar random unitaries to prove the selective and adaptive security of our PRFSGs. For selective security, we introduce a new formula, which we call the Haar twirl approximation formula. For adaptive security, we show the unitary reprogramming lemma and the unitary resampling lemma. These have their own interest, and may have many further applications. In particular, by using the approximation formula, we give an alternative proof of the non-adaptive security of the PFC ensemble [Metger, Poremba, Sinha, and Yuen, FOCS 2024] as an additional result.
Finally, we prove that our construction is neither a PRU nor a quantum-accessible non-adaptive PRFSG, by presenting quantum polynomial-time attacks. Our attack is based on generalizing the hidden subgroup problem, where the relevant function outputs quantum states.
A comprehensive analysis of Regev's quantum algorithm
Public key cryptography can be based on integer factorization and the discrete logarithm problem (DLP), applicable in multiplicative groups and elliptic curves. Regev’s recent quantum algorithm was initially designed for factorization and was later extended to the DLP in the multiplicative group. In this article, we further extend the algorithm to address the DLP for elliptic curves. Notably, based on celebrated conjectures in Number Theory, Regev’s algorithm is asymptotically faster than Shor’s algorithm for elliptic curves.
Our analysis covers all cases where Regev’s algorithm can be applied. We examine the general framework of Regev’s algorithm and offer a geometric description of its parameters. This preliminary analysis enables us to certify the success of the algorithm on a particular instance before running it.
In the case of integer factorization, we demonstrate that there exists an infinite family of RSA moduli for which the algorithm always fails. On the other hand, when the parameters align with the Gaussian heuristics, we prove that Regev’s algorithm succeeds. By noting that the algorithm naturally adapts to the multidimensional DLP, we prove that it succeeds for a certain range of parameters.
Linear Proximity Gap for Reed-Solomon Codes within the 1.5 Johnson Bound
We establish a linear proximity gap for Reed-Solomon (RS) codes within the one-and-a-half Johnson bound. Specifically, we investigate the proximity gap for RS codes, revealing that any affine subspace is either entirely $\delta$-close to an RS code or nearly all its members are $\delta$-far from it. When $\delta$ is within the one-and-a-half Johnson bound, we prove an upper bound on the number of members (in the affine subspace) that are $\delta$-close to the RS code for the latter case. Our bound is linear in the length of codewords. In comparison, Ben-Sasson, Carmon, Ishai, Kopparty and Saraf [FOCS 2020] prove a linear bound when $\delta$ is within the unique decoding bound and a quadratic bound when $\delta$ is within the Johnson bound. Note that when the rate of the RS code is smaller than 0.23, the one-and-a-half Johnson bound is larger than the unique decoding bound.
Proximity gaps for Reed-Solomon (RS) codes have implications for various RS code-based protocols. In many cases, a property stronger than individual distance, known as correlated agreement, is required: functions in the affine subspace are not only $\delta$-close to an RS code, but also agree on the same evaluation domain. Our results support this stronger property.
Isotropic Quadratic Forms, Diophantine equations and Digital Signatures, DEFIv2
This work introduces DEFIv2 - an efficient hash-and-sign digital signature scheme based on isotropic quadratic forms over a commutative ring of characteristic 0. The form is public, but the construction is a trapdoor that depends on the scheme's private key. For polynomial rings over integers and rings of integers of algebraic number fields, the cryptanalysis is reducible to solving a quadratic Diophantine equation over the ring or, equivalently, to solving a system of quadratic Diophantine equations over rational integers. It is still an open problem whether quantum computers will have any advantage in solving Diophantine problems.
Foundations of Adaptor Signatures
Adaptor signatures extend the functionality of regular signatures through the computation of pre-signatures on messages for statements of NP relations. Pre-signatures are publicly verifiable; they simultaneously hide and commit to a signature of an underlying signature scheme on that message. Anybody possessing a corresponding witness for the statement can adapt the pre-signature to obtain the "regular" signature. Adaptor signatures have found numerous applications for conditional payments in blockchain systems, like payment channels (CCS'20, CCS'21), private coin mixing (CCS'22, SP'23), and oracle-based payments (NDSS'23).
In our work, we revisit the state of the security of adaptor signatures and their constructions. In particular, our two main contributions are:
- Security Gaps and Definitions: We review the widely-used security model of adaptor signatures due to Aumayr et al. (ASIACRYPT'21) and identify gaps in their definitions that render known protocols for private coin-mixing and oracle-based payments insecure. We give simple counterexamples of adaptor signatures that are secure w.r.t. their definitions but result in insecure instantiations of these protocols. To fill these gaps, we identify a minimal set of modular definitions that align with these practical applications.
- Secure Constructions: Despite their popularity, all known constructions either (1) are derived from identification schemes via the Fiat-Shamir transform in the random oracle model, or (2) require modifications to the underlying signature verification algorithm, thus making the construction useless in the setting of cryptocurrencies. More concerningly, all known constructions were proven secure w.r.t. the insufficient definitions of Aumayr et al., leaving us with no provably secure adaptor signature scheme to use in applications.
Firstly, in this work, we salvage all current applications by proving the security of the widely-used Schnorr adaptor signatures under our proposed definitions. We then provide several new constructions, including presenting the first adaptor signature schemes for Camenisch-Lysyanskaya (CL), Boneh-Boyen-Shacham (BBS+), and Waters signatures, all of which are proven secure in the standard model. Our new constructions rely on a new abstraction of digital signatures, called dichotomic signatures, which covers the essential properties we need to build adaptor signatures. Proving the security of all constructions (including identification-based schemes) relies on a novel non-black-box proof technique. Both our digital signature abstraction and the proof technique could be of independent interest to the community.
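To fix ideas, the Python toy below walks through the standard Schnorr adaptor interface that the paper analyzes (pre-sign, pre-verify, adapt, extract), instantiated over a schoolbook subgroup of $\mathbb{Z}_p^*$ with SHA-256 standing in for the Fiat-Shamir challenge. It is a pedagogical sketch of the interface under toy parameters, not a secure implementation and not one of the paper's new dichotomic-signature constructions.

```python
# Toy Schnorr adaptor signature: pre-sign / pre-verify / adapt / extract.

import hashlib, random

p, q, g = 2039, 1019, 4                   # p = 2q + 1; g has order q

def Hc(*args):
    return int.from_bytes(hashlib.sha256(repr(args).encode()).digest(), "big") % q

x = random.randrange(1, q); pk = pow(g, x, p)    # signer's keypair
t = random.randrange(1, q); T = pow(g, t, p)     # witness t, statement T

def pre_sign(m):
    k = random.randrange(1, q)
    R = pow(g, k, p)
    c = Hc(R * T % p, m)                  # challenge binds the combined nonce
    return R, (k + c * x) % q             # pre-signature (R, s')

def pre_verify(m, R, s_pre):
    c = Hc(R * T % p, m)
    return pow(g, s_pre, p) == R * pow(pk, c, p) % p

def adapt(s_pre):                         # anyone holding t completes it
    return (s_pre + t) % q

def verify(m, R, s):                      # plain Schnorr check w.r.t. nonce R*T
    c = Hc(R * T % p, m)
    return pow(g, s, p) == R * T * pow(pk, c, p) % p

m = "conditional payment"
R, s_pre = pre_sign(m)
assert pre_verify(m, R, s_pre)
s = adapt(s_pre)
assert verify(m, R, s)
assert (s - s_pre) % q == t               # witness extraction from both values
```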
Secure Encryption and Key Exchange using Arbiter PUF
This paper introduces a novel approach to enhancing cryptographic security. It proposes the use of one-time message sharing combined with Physically Unclonable Functions (PUF) to securely exchange keys and generate an S-subbyte-box for encryption. This innovative technique aims to elevate the security standards of cryptographic applications.
Amun: Securing E-Voting Against Over-the-Shoulder Coercion
In an election where each voter may express $P$ preferences among $M$ possible choices, the Amun protocol secures vote casting against over-the-shoulder adversaries while retaining privacy, fairness, end-to-end verifiability, and correctness.
Before the election, each voter receives a ballot containing valid and decoy tokens: only valid tokens contribute to the final tally, but they remain indistinguishable from the decoys.
Since the voter is the only one who knows which tokens are valid (without being able to prove it to a coercer), over-the-shoulder attacks are thwarted.
We prove the security of the construction under the standard Decisional Diffie-Hellman assumption in the random oracle model.
Breaking BASS
We provide several attacks on the BASS signature scheme introduced by Grigoriev, Ilmer, Ovchinnikov and Shpilrain in 2023. We lay out a trivial forgery attack which generates signatures passing the scheme's probabilistic signature verification with high probability. Generating these forgeries is faster than generating signatures honestly. Moreover, we describe a key-only attack which allows us to recover an equivalent private key from a signer's public key. The time complexity of this recovery is asymptotically the same as that of signing messages.
Breaking Verifiable Delay Functions in the Random Oracle Model
This work resolves the open problem of whether verifiable delay functions (VDFs) can be constructed in the random oracle model.
A VDF is a cryptographic primitive that requires a long time to compute (even with parallelization), but produces a unique output that is efficiently and publicly verifiable.
We prove that VDFs with \emph{imperfect completeness} and \emph{computational uniqueness} do not exist in the random oracle model. This also rules out black-box constructions of VDFs from other cryptographic primitives, such as one-way permutations and collision-resistant hash functions.
Prior to our work, Mahmoody, Smith and Wu (ICALP 2020) prove that VDFs satisfying both \emph{perfect completeness} and \emph{perfect uniqueness} do not exist in the random oracle model; on the other hand, Ephraim, Freitag, Komargodski, and Pass (Eurocrypt 2020) construct VDFs with \emph{perfect completeness} and \emph{computational uniqueness} in the random oracle model assuming the hardness of repeated squaring. Our result is optimal -- we bridge the current gap between previously known impossibility results and existing constructions.
On the Sample Complexity of Linear Code Equivalence for all Code Rates
In parallel with the standardization of lattice-based cryptosystems, the research community in Post-quantum Cryptography focused on non-lattice-based hard problems for constructing public-key cryptographic primitives. The Linear Code Equivalence (LCE) Problem has gained attention regarding its practical applications and cryptanalysis.
Recent advancements, including the LESS signature scheme and its candidacy in the NIST standardization for additional signatures, supported LCE as a foundation for post-quantum cryptographic primitives. However, recent cryptanalytic results have revealed vulnerabilities in LCE-based constructions when multiple related public keys are available for one specific code rate. In this work, we generalize the proposed attacks to cover all code rates. We show that the complexity of recovering the private key from multiple public keys is significantly reduced for any code rate scenario. Thus, we advise against constructing specific cryptographic primitives using LCE.
Rhombus: Fast Homomorphic Matrix-Vector Multiplication for Secure Two-Party Inference
We present $\textit{Rhombus}$, a new secure matrix-vector multiplication (MVM) protocol in the semi-honest two-party setting, which is able to be seamlessly integrated into existing privacy-preserving machine learning (PPML) frameworks and serve as the basis of secure computation in linear layers.
$\textit{Rhombus}$ adopts RLWE-based homomorphic encryption (HE) with coefficient encoding, which allows messages to be chosen not only from a field $\mathbb{F}_p$ but also from a ring $\mathbb{Z}_{2^\ell}$, where the latter supports faster computation in non-linear layers. To achieve better efficiency, we develop an input-output packing technique that reduces the communication cost incurred by HE with coefficient encoding by about $21\times$, and propose a split-point picking technique that reduces the number of rotations to a number sublinear in the matrix dimension. Compared to the recent protocol $\textit{HELiKs}$ by Balla and Koushanfar (CCS'23), our implementation demonstrates that $\textit{Rhombus}$ improves the overall performance of the MVM protocol by a factor of $7.4\times \sim 8\times$, and improves the end-to-end performance of secure two-party inference of ResNet50 by a factor of $4.6\times \sim 18\times$.
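The coefficient-encoding trick at the heart of such RLWE-based MVM protocols can be shown on plaintext polynomials, without any encryption: multiplying $a(X)=\sum_i a_i X^i$ by the reversed encoding of $b$ makes coefficient $n-1$ of the product equal to $\langle a, b\rangle$, so one polynomial multiplication amortizes over many inner products. The Python sketch below demonstrates this; Rhombus's specific input-output packing and split-point picking are not reproduced here.

```python
# Coefficient encoding: an inner product appears as one coefficient of a
# polynomial product over Z_q[X] (shown on plaintexts, no HE involved).

import random

n, q = 8, 2**16 + 1

def poly_mul(a, b):                       # schoolbook product over Z_q[X]
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

a = [random.randrange(q) for _ in range(n)]
b = [random.randrange(q) for _ in range(n)]
prod = poly_mul(a, list(reversed(b)))     # encode b in reversed order
assert prod[n - 1] == sum(x * y for x, y in zip(a, b)) % q
```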
Fully Secure Searchable Encryption from PRFs, Pairings, and Lattices
Searchable encryption is a cryptographic primitive that allows us to perform searches on encrypted data. Searchable encryption schemes require that ciphertexts do not leak information about keywords. However, most of the existing schemes do not achieve the security notion that trapdoors do not leak information. Shen et al. (TCC 2009) proposed a security notion called full security, which includes both ciphertext privacy and trapdoor privacy, but there are few fully secure constructions. Full security is defined for the secret key settings since it is known that public key schemes cannot achieve the trapdoor privacy in principle.
In this paper, we construct a query-bounded fully secure scheme from pseudorandom functions. In addition, we propose three types of efficient (unbounded) fully secure schemes. One of them is based on bilinear groups, and the others are based on lattices. We then analyze the existing constructions. First, we simplify the Cheng et al. scheme (Information Sciences 2023) and prove its security; this scheme had not previously been proved secure. Second, we show that the Li-Boyen pairing-based scheme (IACR CiC 2024) does not achieve trapdoor privacy, contrary to what is claimed.
An Unstoppable Ideal Functionality for Signatures and a Modular Analysis of the Dolev-Strong Broadcast
Many foundational results in the literature of consensus follow the Dolev-Yao model (FOCS '81), which treats digital signatures as ideal objects with perfect correctness and unforgeability. However, no work has yet formalized an ideal signature scheme that is both suitable for this methodology and possible to instantiate, or a composition theorem that ensures security when instantiating it cryptographically.
The Universal Composition (UC) framework would ensure composition if we could specify an ideal functionality for signatures and prove it UC-realizable. Unfortunately, all signature functionalities heretofore proposed are problematic when used to construct higher-level protocols: either the functionality internally computes a computationally secure signature, and therefore higher-level protocols must rely upon computational assumptions, or else the functionality introduces a new attack surface that does not exist when the functionality is realized. As a consequence, no consensus protocol has ever been analyzed in a modular way using existing ideal signature functionalities.
We propose a new unstoppable ideal functionality for signatures that is UC-realized exactly by the set of standard EUF-CMA signature schemes that are consistent and linear time. No adversary can prevent honest parties from obtaining perfectly ideal signature services from our functionality. We showcase its usefulness by presenting the first modular analysis of the Dolev-Strong broadcast protocol (SICOMP '83) in the UC framework. Our result can be interpreted as a step toward a sound realization of the Dolev-Yao methodology.
Encrypted RAM Delegation: Applications to Rate-1 Extractable Arguments, Homomorphic NIZKs, MPC, and more
In this paper we introduce the notion of encrypted RAM delegation. In an encrypted RAM delegation scheme, the prover creates a succinct proof $\pi$ for a pair of input strings $x_\mathsf{pb}$ and $x_\mathsf{pr}$, where $x_\mathsf{pb}$ corresponds to a large \emph{public} input and $x_\mathsf{pr}$ is a \emph{private} input. A verifier can check the correctness of the computation of a RAM machine $\mathcal{M}$ on $(x_\mathsf{pb}, x_\mathsf{pr})$, given only the proof $\pi$ and $x_\mathsf{pb}$.
We design encrypted RAM delegation schemes from a variety of standard assumptions such as DDH, LWE, or $k$-linear. We prove a strong knowledge-soundness guarantee for our scheme, as well as a special input-hiding property ensuring that $\pi$ does not leak anything about $x_\mathsf{pr}$.
We follow this by describing multiple applications of encrypted RAM delegation. First, we show how to design a rate-1 non-interactive zero-knowledge (NIZK) argument system with a straight-line extractor. Despite over 30 years of research, the only known construction of rate-1 NIZKs from standard assumptions relied on fully homomorphic encryption. Thus, we provide the first rate-1 NIZK scheme based purely on the DDH or $k$-linear assumptions.
Next, we also design fully homomorphic NIZKs from encrypted RAM delegation. The only prior solution crucially relied on algebraic properties of pairing-based NIZKs and was therefore only known from the decision linear assumption. We provide the first fully homomorphic NIZK systems from LWE (and hence with post-quantum security) and from DDH-hard groups.
We also provide a communication-complexity-preserving compiler from a wide class of semi-malicious multiparty computation (MPC) protocols to fully malicious MPC protocols. This gives the first such compiler for such a wide class of protocols, as comparable compilers in prior works relied on strong non-falsifiable assumptions such as zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs). Moreover, we show many other applications: composable zero-knowledge batch arguments, succinct delegation of committed programs, and fully context-hiding multi-key multi-hop homomorphic signatures.
Unclonable Cryptography with Unbounded Collusions and Impossibility of Hyperefficient Shadow Tomography
The quantum no-cloning theorem gives rise to the intriguing possibility of quantum copy protection, where we encode a program or functionality in a quantum state such that a user in possession of k copies cannot create k+1 copies, for any k. Introduced by Aaronson (CCC'09) over a decade ago, copy protection has proven notoriously hard to achieve. Previous work has achieved copy protection for various functionalities only in restricted models: (i) in the bounded collusion setting, where k -> k+1 security is achieved for an a-priori fixed collusion bound k (in the plain model with the same computational assumptions as ours, by Liu, Liu, Qian, Zhandry [TCC'22]), or (ii) where only k -> 2k security is achieved (relative to a structured quantum oracle, by Aaronson [CCC'09]).
In this work, we give the first unbounded collusion-resistant (i.e. multiple-copy secure) copy-protection schemes, answering the long-standing open question of constructing such schemes, raised by multiple previous works starting with Aaronson (CCC'09).
More specifically, we obtain the following results.
- We construct (i) public-key encryption, (ii) public-key functional encryption, (iii) signature and (iv) pseudorandom function schemes whose keys are copy-protected against unbounded collusions in the plain model (i.e. without any idealized oracles), assuming (post-quantum) subexponentially secure iO and LWE.
- We show that any unlearnable functionality can be copy-protected against unbounded collusions, relative to a classical oracle.
- As a corollary of our results, we rule out the existence of hyperefficient quantum shadow tomography,
* even given non-black-box access to the measurements, assuming subexponentially secure iO and LWE, or,
* unconditionally relative to a quantumly accessible classical oracle, and hence answer an open question by Aaronson (STOC'18).
We obtain our results through a novel technique which uses identity-based encryption to construct multiple copy secure copy-protection schemes from 1-copy -> 2-copy secure schemes. We believe our technique is of independent interest.
Along the way, we also obtain the following results.
- We define and prove the security of new collusion-resistant monogamy-of-entanglement games for coset states.
- We construct a classical puncturable functional encryption scheme whose master secret key can be punctured at all functions f such that f(m_0) != f(m_1). This might also be of independent interest.
Making Searchable Symmetric Encryption Schemes Smaller and Faster
Searchable Symmetric Encryption (SSE) has emerged as a promising tool for facilitating efficient query processing over encrypted data stored in untrusted cloud servers. Several techniques have been adopted to enhance the efficiency and security of SSE schemes. The query processing, storage, and communication costs of any SSE scheme are directly related to the size of the encrypted index stored at the server. To our knowledge, there is no prior work directed towards minimizing the index size. In this paper we introduce a novel technique to directly reduce the index size of any SSE scheme. Our proposed technique generically transforms any secure single-keyword SSE scheme into an equivalently functional and secure version with reduced storage requirements, resulting in faster search and reduced communication overhead. Our technique arranges the set of document identifiers $\mathsf{db}(w)$ related to a keyword $w$ in the leaf nodes of a complete binary tree and eventually obtains a succinct representation of the set $\mathsf{db}(w)$. This small representation of $\mathsf{db}(w)$ leads to smaller index sizes. We give an extensive theoretical analysis of our scheme and prove its correctness. In addition, our comprehensive experimental analysis validates the effectiveness of our scheme on real and simulated data and shows that it can be deployed in practical situations.
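One natural way to realize such a tree-based representation (our sketch under that reading, not necessarily the paper's exact encoding) is to cover $\mathsf{db}(w)$ by the roots of the maximal full subtrees of a complete binary tree over the identifier space:

    # Minimal sketch: represent a set of document identifiers db(w), placed at
    # the leaves of a complete binary tree over [0, 2^h), by the roots of the
    # maximal full subtrees it covers. A node is written (depth, index).
    # This is one natural reading of the idea, not the paper's exact encoding.

    def cover(ids, h):
        """Return the list of (depth, index) subtree roots whose leaves are exactly ids."""
        ids = set(ids)
        def rec(depth, index):
            span = 1 << (h - depth)              # number of leaves under this node
            lo = index * span
            leaves = set(range(lo, lo + span))
            if leaves <= ids:
                return [(depth, index)]          # fully covered: keep the whole subtree
            if leaves.isdisjoint(ids):
                return []
            return rec(depth + 1, 2 * index) + rec(depth + 1, 2 * index + 1)
        return rec(0, 0)

    # Leaves 0..7 (h = 3); db(w) = {0,1,2,3,6} compresses to 2 nodes instead of 5.
    print(cover({0, 1, 2, 3, 6}, 3))   # [(1, 0), (3, 6)] -> subtree {0..3} plus leaf 6

Dense or clustered identifier sets then need far fewer tree nodes than identifiers, which is the effect that shrinks the index.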
Smoothing Parameter and Shortest Vector Problem on Random Lattices
Lattice problems have many applications in various domains of computer science. There is currently a gap in the understanding of these problems with respect to their worst-case complexity and their average-case behaviour.
For instance, the Shortest Vector Problem (SVP) on an $n$-dimensional lattice has worst-case complexity $2^{n+o(n)}$ \cite{ADRS15}. However, in practice, people rely on heuristic (unproven) sieving algorithms of time complexity $2^{0.292n+o(n)}$ \cite{BeckerDGL16} to assess the security of lattice-based cryptography schemes. Those heuristic algorithms are experimentally verified for lattices used in cryptography, which are usually random in some way.
In this paper, we try to bridge the gap between worst-case and heuristic algorithms. Using the formalism of random real lattices developed by Siegel, we show a tighter upper bound on an important lattice parameter called the smoothing parameter, which applies to almost all random lattices. This allows us to obtain a $2^{n/2+o(n)}$-time algorithm for an approximation version of SVP on random lattices with a small constant approximation factor.
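For concreteness (our arithmetic, suppressing the $o(n)$ terms): at dimension $n = 400$, the provable worst-case algorithm costs $2^{400}$, heuristic sieving is estimated at $2^{0.292 \cdot 400} \approx 2^{117}$, and the bound for random lattices obtained here gives $2^{200}$, sitting between the proven worst case and the heuristic estimate.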
Swiper: a new paradigm for efficient weighted distributed protocols
The majority of fault-tolerant distributed algorithms are designed assuming a nominal corruption model, in which at most a fraction $f_n$ of parties can be corrupted by the adversary. However, due to the infamous Sybil attack, nominal models are not sufficient to express the trust assumptions in open (i.e., permissionless) settings. Instead, permissionless systems typically operate in a weighted model, where each participant is associated with a weight and the adversary can corrupt a set of parties holding at most a fraction $f_w$ of the total weight.
In this paper, we suggest a simple way to transform a large class of protocols designed for the nominal model into the weighted model. To this end, we formalize and solve three novel optimization problems, which we collectively call the weight reduction problems, that allow us to map large real weights into small integer weights while preserving the properties necessary for the correctness of the protocols. In all cases, we manage to keep the sum of the integer weights at most linear in the number of parties, resulting in extremely efficient protocols for the weighted model. Moreover, we demonstrate that, on weight distributions that emerge in practice, the sum of the integer weights tends to be far from the theoretical worst case and, sometimes, even smaller than the number of participants.
While, for some protocols, our transformation requires an arbitrarily small reduction in resilience (i.e., $f_w = f_n - \epsilon$), surprisingly, for many important problems, we manage to obtain weighted solutions with the same resilience ($f_w = f_n$) as nominal ones. Notable examples include erasure-coded distributed storage and broadcast protocols, verifiable secret sharing, and asynchronous consensus. Although there are ad-hoc weighted solutions to some of these problems, the protocols yielded by our transformations enjoy all the benefits of nominal solutions, including simplicity, efficiency, and a wider range of possible cryptographic assumptions.
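To make the requirement concrete, the following brute-force toy (ours, not the paper's algorithm, and feasible only for tiny numbers of parties) rounds real weights to small integers and exhaustively checks the defining condition: every coalition holding at most an $f_w$ fraction of the real weight must also hold at most an $f_n$ fraction of the integer weight.

    # Toy illustration of the weight-reduction requirement (not the paper's
    # algorithm): round real weights to small integers, then exhaustively check
    # that every coalition with <= f_w of the real weight also has <= f_n of
    # the integer weight. Brute force is only feasible for tiny n.
    from itertools import combinations
    from math import floor

    def valid(real, ints, f_w, f_n):
        W, T = sum(real), sum(ints)
        parties = range(len(real))
        for r in range(len(real) + 1):
            for coal in combinations(parties, r):
                if sum(real[i] for i in coal) <= f_w * W and \
                   sum(ints[i] for i in coal) > f_n * T:
                    return False
        return True

    def reduce_weights(real, f_w, f_n, max_scale=64):
        """Smallest scale s such that ints[i] = floor(s * real[i] / sum) + 1 works."""
        W = sum(real)
        for s in range(1, max_scale + 1):
            ints = [floor(s * w / W) + 1 for w in real]   # +1 keeps weights positive
            if valid(real, ints, f_w, f_n):
                return ints
        return None

    real = [100.0, 55.3, 20.1, 10.2, 5.5, 1.9]
    print(reduce_weights(real, f_w=0.30, f_n=0.33))  # -> [7, 4, 2, 1, 1, 1], sum 16

Note the small resilience loss $f_w = f_n - \epsilon$ in this toy, mirroring the trade-off discussed above.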
Quantum Chosen-Cipher Attack on Camellia
The Feistel structure is a fundamental architectural component of symmetric cryptographic algorithms, and a substantial body of research on it has been conducted in classical computing environments. Nevertheless, research into specific symmetric cryptographic algorithms utilizing the Feistel structure remains relatively scarce in quantum computing environments. This paper builds upon a novel 4-round distinguisher proposed by Ito et al. for the Feistel structure in the quantum chosen-ciphertext attack (qCCA) setting and introduces a 5-round distinguisher for Camellia, whose efficacy has been empirically validated. Furthermore, this paper combines Grover's algorithm with Simon's algorithm, exploiting the characteristics of Camellia's key schedule to construct a 9-round key recovery attack on the Camellia algorithm. The time complexity for acquiring the correct key bits is $2^{61.5}$, and the attack requires 531 qubits. This is the first chosen-ciphertext attack on Camellia in the Q2 model.
Stealth and Beyond: Attribute-Driven Accountability in Bitcoin Transactions
Bitcoin enables decentralized, pseudonymous transactions, but balancing privacy with accountability remains a challenge. This paper introduces a novel dual accountability mechanism that enforces both sender and recipient compliance in Bitcoin transactions. Senders are restricted to spending Unspent Transaction Outputs (UTXOs) that meet specific criteria, while recipients must satisfy legal and ethical requirements before receiving funds. We enhance stealth addresses by integrating compliance attributes, preserving privacy while ensuring policy adherence. Our solution introduces a new cryptographic primitive, Identity-Based Matchmaking Signatures (IB-MSS), which supports streamlined auditing. Our approach is fully compatible with existing Bitcoin infrastructure and does not require changes to the core protocol, preserving both privacy and decentralization while enabling transaction auditing and compliance.
Investigation of the Optimal Linear Characteristics of BAKSHEESH (Full Version)
This paper aims to provide a more comprehensive understanding of the optimal linear characteristics of BAKSHEESH. Initially, an explicit formula for the absolute correlation of the $R$-round optimal linear characteristic of BAKSHEESH is proposed when $R \geqslant 12$. By examining the linear characteristics of BAKSHEESH with three active S-boxes per round, we derive some properties of the three active S-boxes in each round. Furthermore, we demonstrate that there is only one 1-round iterative linear characteristic with three active S-boxes. Since the 1-round linear characteristic is unique, it must be included in any $R$-round ($R \geqslant 12$) linear characteristics of BAKSHEESH with three active S-boxes per round. Finally, we confirm that BAKSHEESH's total number of $R$-round optimal linear characteristics is $3072$ for $R \geqslant 12$. All of these characteristics are generated by employing the 1-round iterative linear characteristic.
Secure Vickrey Auctions with Rational Parties
In this work, we construct a second-price (Vickrey) auction protocol (SPA) that does not require any auctioneers and ensures total privacy in the presence of rational parties participating in the auction. In particular, the confidentiality of the highest bid and the identity of the second-highest bidder are protected. We model the bidders participating in the second-price auction as rational, computationally bounded, and privacy-sensitive parties. These are self-interested agents who care about winning the auction more than learning the private bids of other parties. A rational party does not deviate from the protocol arbitrarily but does so only for its own individual `advantage' -- without any consideration for others. Such an advantage is modeled using suitable utility functions.
We show that for rational and computationally bounded parties participating in our second-price auction protocol, there exists a privacy-preserving dominant-strategy equilibrium in which every party prefers to follow the protocol rather than deviate.
Our protocol is implemented using open-source cryptographic constructs. Running our SPA protocol on commodity hardware with $15$ bidders and bids of length $10$ bits completes in $1.26$ sec with $0.77$ MB of total communication; under similar conditions, our protocol takes $40\%$ less time and $87\%$ less communication than the semi-honest Atlas protocol ($2.11$ sec, $6.09$ MB).
Privacy-Preserving Multi-Party Search via Homomorphic Encryption with Constant Multiplicative Depth
We propose a privacy-preserving multiparty search protocol using threshold-level homomorphic encryption, which we prove correct and secure against honest-but-curious adversaries. Unlike existing approaches, our protocol maintains a constant circuit depth, a feature that enhances its suitability for practical applications involving dynamic underlying databases.
Is Periodic Pseudo-randomization Sufficient for Beacon Privacy?
In this paper, we investigate whether the privacy mechanism of periodically changing the pseudorandom identities of Bluetooth Low Energy (BLE) beacons is sufficient to ensure privacy.
We consider a new natural privacy notion for BLE broadcasting beacons, which we call ``Timed-sequence-indistinguishability'' of beacons. This new privacy definition is stronger than the well-known indistinguishability, since it considers not just the advertisements' content but also their broadcasting times, which are observable in the physical world.
We then prove that beacons with periodically changing pseudorandom identities do not achieve timed-sequence-indistinguishability. We do this by presenting a novel privacy attack against BLE beacons, which we call the ``Timer Manipulation Attack.'' This new time-based privacy attack can be executed by merely inserting or reinserting the beacon's battery at the adversary's chosen time. We performed this attack against an actually deployed beacon.
To mitigate the ``Timer Manipulation Attack'' and other attacks associated with periodic signaling, we propose a new countermeasure involving quasi-periodic randomized scheduling of identity changes. We prove that our countermeasure ensures timed-sequence-indistinguishability for beacons, thereby enhancing their privacy. Additionally, we show how to integrate this countermeasure into the attacked system while essentially preserving its feasibility and utility, which is crucial for practical industrial adoption.
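A minimal sketch of quasi-periodic randomized scheduling as described (our reading, with illustrative parameters): each identity-change interval equals the nominal period plus bounded random jitter, so restarting the beacon no longer pins its identity changes to a predictable phase.

    # Minimal sketch (our reading, illustrative parameters): quasi-periodic
    # randomized scheduling of identity changes. Each interval is the nominal
    # period plus bounded random jitter, so an observer who restarts the beacon
    # cannot re-align identity changes to a fixed phase.
    import secrets

    PERIOD = 900.0   # nominal seconds between identity changes (illustrative)
    JITTER = 300.0   # max +/- jitter; utility degrades gracefully as JITTER grows

    def next_change_delay():
        """Delay until the next pseudorandom-identity rotation."""
        # uniform in [PERIOD - JITTER, PERIOD + JITTER], cryptographic randomness
        u = secrets.randbelow(10**9) / 10**9
        return PERIOD - JITTER + 2 * JITTER * u

    schedule, t = [], 0.0
    for _ in range(5):
        t += next_change_delay()
        schedule.append(round(t, 1))
    print(schedule)   # change times drift unpredictably instead of at fixed ticks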
Consensus Under Adversary Majority Done Right
A spectre is haunting consensus protocols—the spectre of adversary majority. The literature is inconclusive, with possibilities and impossibilities abounding. Dolev and Strong in 1983 showed an early possibility for up to 99% adversaries. Yet, we have known impossibility results for adversaries above 1/2 in synchrony and above 1/3 in partial synchrony. What gives? It is high time that we pinpoint the culprit of this confusion: the critical role of the modeling details of clients. Are the clients sleepy or always-on? Are they silent or communicating? Can validators be sleepy too? We systematize models for consensus across four dimensions (sleepy/always-on clients, silent/communicating clients, sleepy/always-on validators, and synchrony/partial synchrony), some of which are new, and tightly characterize the achievable safety and liveness resiliences with matching possibilities and impossibilities for each of the sixteen models. To this end, we unify folklore and earlier results, and fill gaps left in the literature with new protocols and impossibility theorems.
Quantum One-Time Protection of any Randomized Algorithm
The meteoric rise in power and popularity of machine learning models dependent on valuable training data has reignited a basic tension between the power of running a program locally and the risk of exposing details of that program to the user. At the same time, fundamental properties of quantum states offer new solutions to data and program security that can require strikingly few quantum resources to exploit, and offer advantages outside of mere computational run time. In this work, we demonstrate such a solution with quantum one-time tokens.
A quantum one-time token is a quantum state that permits a certain program to be evaluated exactly once. One-time security guarantees, roughly, that the token cannot be used to evaluate the program more than once. We propose a scheme for building quantum one-time tokens for any randomized classical program, including generative AI models. We prove that the scheme satisfies an interesting definition of one-time security in a black-box model, as long as the outputs of the classical algorithm have high enough min-entropy.
Importantly, the classical program being protected does not need to be implemented coherently on a quantum computer. In fact, the size and complexity of the quantum one-time token is independent of the program being protected, and additional quantum resources serve only to increase the security of the protocol. Due to this flexibility in adjusting the security, we believe that our proposal is parsimonious enough to serve as a promising candidate for a near-term useful demonstration of quantum computing in either the NISQ or early fault tolerant regime.
FLock: Robust and Privacy-Preserving Federated Learning based on Practical Blockchain State Channels
\textit{Federated Learning} (FL) is a distributed machine learning paradigm that allows multiple clients to train models collaboratively without sharing local data. Numerous works have explored security and privacy protection in FL, as well as its integration with blockchain technology. However, existing FL works still face critical issues. (i) It is difficult to achieve \textit{poisoning robustness} and \textit{data privacy} while ensuring high \textit{model accuracy}. Malicious clients can launch \textit{poisoning attacks} that degrade the global model, and aggregators can infer private data from the gradients, causing \textit{privacy leakage}. Existing privacy-preserving poisoning-defense FL solutions suffer from decreased model accuracy and high computational overhead. (ii) Blockchain-assisted FL records iterative gradient updates on-chain to prevent model tampering, yet existing schemes are not compatible with practical blockchains and incur high costs for maintaining the gradients on-chain. Besides, incentives are overlooked, and unfair reward distribution hinders the sustainable development of the FL community.
In this work, we propose FLock, a robust and privacy-preserving FL scheme based on practical blockchain state channels. First, we propose a lightweight, secure \textit{Multi-party Computation} (MPC)-friendly robust aggregation method based on quantization, the median, and the Hamming distance, which resists poisoning attacks as long as fewer than $50\%$ of the clients are malicious. Besides, we propose communication-efficient Shamir secret-sharing-based MPC protocols to protect data privacy with high model accuracy. Second, we utilize blockchain off-chain state channels to achieve immutable model records and incentive distribution. FLock achieves cost-effective compatibility with practical cryptocurrency platforms, e.g., Ethereum, along with fair incentives, by merging the secure aggregation into a multi-party state channel. In addition, a pipelined \textit{Byzantine Fault-Tolerant} (BFT) consensus is integrated so that each aggregator can reconstruct the final aggregated results.
Lastly, we implement FLock, and the evaluation results demonstrate that FLock enhances robustness and privacy while maintaining efficiency and high model accuracy. Even with 25 aggregators and 100 clients, FLock can complete one secure aggregation for ResNet in $2$ minutes over a WAN. FLock successfully implements secure aggregation with such a large number of aggregators, thereby enhancing the fault tolerance of the aggregation.
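As a rough illustration of the quantize-median-Hamming idea above, the following in-the-clear numpy toy (ours; FLock runs the corresponding steps under secret-sharing-based MPC, omitted here) quantizes updates to signs, takes the coordinate-wise median as the robust combiner, and uses Hamming distance to the median to discard outlying clients:

    # Toy, in-the-clear sketch of quantization + median + Hamming-distance
    # aggregation (our illustration; the MPC, secret-sharing, and blockchain
    # layers of the actual scheme are omitted).
    import numpy as np

    rng = np.random.default_rng(0)
    honest = rng.normal(0.5, 0.1, size=(7, 16))   # 7 honest gradient vectors
    poisoned = -5.0 * np.ones((3, 16))            # 3 poisoned updates (<50%)
    updates = np.vstack([honest, poisoned])

    signs = np.sign(updates)                      # 1-bit quantization
    median = np.median(signs, axis=0)             # coordinate-wise robust combine
    ham = (signs != median).sum(axis=1)           # Hamming distance to the median
    keep = ham <= np.median(ham)                  # drop the farthest clients
    aggregate = updates[keep].mean(axis=0)        # average the surviving updates
    print("kept clients:", np.flatnonzero(keep))  # the honest majority survives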
Isogeny interpolation and the computation of isogenies from higher dimensional representations
The Supersingular Isogeny Diffie-Hellman (SIDH) scheme is a public key cryptosystem that was submitted to the National Institute of Standards and Technology's competition for the standardization of post-quantum cryptography protocols. The private key in SIDH consists of an isogeny whose degree is a prime power. In July 2022, Castryck and Decru discovered an attack that completely breaks the scheme by recovering Bob's secret key, using isogenies between higher dimensional abelian varieties to interpolate and reconstruct the isogenies comprising the SIDH private key. The original attack applies in theory to any prime power degree, but the implementation accompanying the original attack required one of the SIDH keys involved in a key exchange to have degree equal to a power of $2$. An implementation of the power of $3$ case was published subsequently by Decru and Kunzweiler. However, despite the passage of several years, nobody has published any implementations for prime powers other than $2$ or $3$, and for good reason --- the necessary higher dimensional isogeny computations rapidly become more complicated as the base prime increases. In this paper, we provide for the first time a fully general isogeny interpolation implementation that works for any choice of base prime, and provide timing benchmarks for various combinations of SIDH base prime pairs. We remark that the technique of isogeny interpolation now has constructive applications as well as destructive applications, and that our methods may open the door to increased flexibility in constructing isogeny-based digital signatures and cryptosystems.
How Fast Does the Inverse Walk Approximate a Random Permutation?
For a finite field $\mathbb{F}$ of size $n$, the (patched) inverse permutation $\operatorname{INV}: \mathbb{F} \to \mathbb{F}$ computes the inverse of $x$ over $\mathbb{F}$ when $x\neq 0$ and outputs $0$ when $x=0$, and the $\operatorname{ARK}_K$ (for AddRoundKey) permutation adds a fixed constant $K$ to its input, i.e.,
$$\operatorname{INV}(x) = x^{n-2} \hspace{.1in} \mbox{and} \hspace{.1in} \operatorname{ARK}_K(x) = x + K \;.$$
We study the process of alternately applying the $\operatorname{INV}$ permutation followed by $\operatorname{ARK}_K$ with a freshly random constant $K$, which is a random walk over the alternating (or symmetric) group that we call the inverse walk.
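For intuition, here is a small simulation of the walk over a prime field (our toy, matching the definitions above):

    # Small simulation of the inverse walk over F_p (p prime), matching the
    # definitions above: INV(x) = x^(p-2) (so INV(0) = 0), ARK_K(x) = x + K
    # with K drawn uniformly at random each round.
    import random

    p = 11                                         # toy field size

    def INV(x):
        return pow(x, p - 2, p)                    # patched inverse: fixes 0

    def round_perm(perm):
        K = random.randrange(p)                    # fresh random round constant
        return [(INV(y) + K) % p for y in perm]    # apply INV, then ARK_K

    perm = list(range(p))                          # start from the identity
    for _ in range(50):                            # iterate r rounds of the walk
        perm = round_perm(perm)
    print(perm)                                    # one sample from the walk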
We show both lower and upper bounds on the number of rounds it takes for this process to approximate a random permutation over $\mathbb{F}$. We show that the inverse walk over the field of size $n$ with $$r = \Theta\left(n\log^2 n + n\log n\log \frac{1}{\epsilon}\right)$$ rounds generates a permutation that is $\epsilon$-close (in variation distance) to a uniformly random even permutation (i.e., a permutation from the alternating group $A_{n}$). This is tight, up to logarithmic factors.
Our result answers an open question from the work of Liu, Pelecanos, Tessaro and Vaikuntanathan (CRYPTO 2023) by providing a missing piece in their proof of $t$-wise independence of (a variant of) AES. It also constitutes a significant improvement on a result of Carlitz (Proc. American Mathematical Society, 1953) who showed a reachability result: namely, that every even permutation can be generated eventually by composing $\operatorname{INV}$ and $\operatorname{ARK}$. We show a tight convergence result, namely a tight quantitative bound on the number of rounds to reach a random (even) permutation.
How Much Public Randomness Do Modern Consensus Protocols Need?
Modern blockchain-based consensus protocols aim for efficiency (i.e., low communication and round complexity) while maintaining security against adaptive adversaries. These goals are usually achieved using a public randomness beacon to select roles for each participant. We examine to what extent this randomness is necessary. Specifically, we provide tight bounds on the amount of entropy a Byzantine Agreement protocol must consume from a beacon in order to enjoy efficiency and adaptive security.
We first establish that no consensus protocol can simultaneously be efficient, be adaptively secure, and use $O(\log n)$ bits of beacon entropy. We then show this bound is tight and, in fact, reflects a trilemma, by presenting three consensus protocols that each achieve two of these three properties.
On the Jordan-Gauss graphs and new multivariate public keys
We suggest two families of multivariate public keys defined over an arbitrary finite commutative ring \(K\) with unity. The first has a quadratic multivariate public rule; this family is an obfuscation of a previously defined cryptosystem based on the well-known algebraic graphs \(D(n, K)\) with partition sets isomorphic to \(K^n\). The second family of cryptosystems combines an Eulerian transformation of \(K[x_1, x_2, \ldots, x_n]\), sending each variable \(x_i\) to a monomial term, with the quadratic encryption map of the first cryptosystem. The resulting map has unbounded degree and density \(O(n^4)\), like a cubic multivariate map. The space of plaintexts of the second cryptosystem is the variety \((K^*)^n\) and the space of ciphertexts is the affine space \(K^n\).
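To illustrate the shape of the second construction (a toy with hypothetical parameters, ours): an Eulerian-type substitution sends each variable to a monomial term, and its output feeds a quadratic map.

    # Toy sketch (hypothetical parameters) of the second construction's shape:
    # an Eulerian-type substitution sending each x_i to a monomial term,
    # composed with a quadratic map. Plaintexts live in (K*)^n; here K = Z_11.
    p, n = 11, 3
    A = [[2, 1, 0], [0, 3, 1], [1, 0, 2]]          # exponent matrix of the monomial map
    c = [3, 5, 7]                                  # nonzero monomial coefficients

    def pow_prod(x, exps):
        out = 1
        for xj, e in zip(x, exps):
            out = out * pow(xj, e, p) % p
        return out

    def eulerian(x):
        """E(x)_i = c_i * prod_j x_j^{A[i][j]}  (a monomial in each coordinate)."""
        return [c[i] * pow_prod(x, A[i]) % p for i in range(n)]

    def quadratic(y):
        """A fixed toy quadratic map Q: each output mixes degree-2 terms of y."""
        return [(y[0] * y[1] + y[2]) % p,
                (y[1] * y[2] + y[0] * y[0]) % p,
                (y[0] * y[2] + y[1]) % p]

    x = [2, 5, 8]                                  # plaintext in (K*)^n: all units
    print(quadratic(eulerian(x)))                  # ciphertext in K^n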
A Note on Related-Tweakey Impossible Differential Attacks
In this note we review the technique proposed at ToSC 2018 by Sadeghi et al. for attacks built upon several related-tweakey impossible differential trails. We show that the initial encryption queries are improper and led the authors to misevaluate a filtering value in the key recovery phase. We identified four other papers (one from Eurocrypt, one from DCC, and two from ToSC) that build on the results of Sadeghi et al.; in three of them the flawed technique was reused.
We thus present a careful analysis of these types of attacks and give generic complexity formulas similar to the ones proposed by Boura et al. at Asiacrypt 2014.
We apply these to the aforementioned papers and provide patched versions of their attacks. The main consequence is an increase in the memory complexity. We show that in many cases (a notable exception being quantum impossible differentials) it is possible to recover the numeric time estimates of the flawed analysis, and in all cases we were able to build a correct attack reaching the same number of rounds.
Towards Explainable Side-Channel Leakage: Unveiling the Secrets of Microarchitecture
We explore the use of microbenchmarks, small assembly code snippets, to detect microarchitectural side-channel leakage in CPU implementations. Specifically, we investigate the effectiveness of microbenchmarks in diagnosing the predisposition to side-channel leaks in two commonly used RISC-V cores: Picorv32 and Ibex. We propose a new framework that involves diagnosing side-channel leaks, identifying leakage points, and constructing leakage profiles to understand the underlying causes. We apply our framework to several realistic case studies, which test its ability to explain side-channel leaks and showcase the subtle interaction of data via order-reducing leaks.