## Papers updated in last 31 days (291 results)

Curve Trees: Practical and Transparent Zero-Knowledge Accumulators

In this work we improve upon the state of the art for practical \textit{zero-knowledge for set membership}, a building block at the core of several privacy-aware applications, such as anonymous payments, credentials and whitelists. This primitive allows a user to show knowledge of an element in a large set without leaking the specific element. One of the obstacles to its deployment is efficiency. Concretely efficient solutions exist, e.g., those deployed in Zcash Sapling, but they often come at the price of a strong trust assumption: an underlying setup that must be generated by a trusted third party.
To find alternative approaches we focus on a common building block: accumulators, a cryptographic data structure that compresses the underlying set. We propose novel, more efficient and fully transparent constructions (i.e., without a trusted setup) for accumulators supporting zero-knowledge proofs for set membership. Technically, we introduce new approaches inspired by ``commit-and-prove'' techniques to combine shallow Merkle trees and 2-cycles of elliptic curves into a highly practical construction. Our basic accumulator construction---dubbed \emph{Curve Trees}---is completely transparent (does not require a trusted setup) and is based on simple and widely used assumptions (DLOG and Random Oracle Model). Ours is the first fully transparent construction that obtains concretely small proof/commitment sizes for large sets and a proving time one order of magnitude smaller than that of proofs over Merkle Trees with Pedersen hash. For a concrete instantiation targeting 128 bits of security we obtain: a commitment to a set of \textit{any} size is 256 bits; for $|S| = 2^{40}$ a zero-knowledge membership proof is 3KB, its proving takes $2$s and its verification $40$ms on an ordinary laptop.
Using our construction as a building block we can design a simple and concretely efficient anonymous cryptocurrency with a full anonymity set, which we dub $\mathbb{V}$cash. Its transactions can be verified in $\approx 80$ms or $\approx 5$ms when batch-verifying multiple ($> 100$) transactions; transaction sizes are $4.1$KB. Our timings are competitive with those of the approach in Zcash Sapling and trade slightly larger proofs (transactions in Zcash Sapling are 2.8KB) for a completely transparent setup.
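The accumulator baseline that Curve Trees improves upon is set membership via a Merkle tree: the root commits to the set, and a membership proof is an authentication path (the zero-knowledge layer then proves knowledge of such a path without revealing the element). A minimal, non-zero-knowledge sketch of that baseline, with SHA-256 standing in for the Pedersen-style hashes used in the paper:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree over a power-of-two number of leaves.
    Returns the list of levels; level 0 holds the leaf hashes."""
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def prove(levels, index):
    """Authentication path: the sibling hash at each level."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify(root, leaf, index, path):
    """Recompute the root from the leaf and its authentication path."""
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [bytes([i]) * 4 for i in range(8)]
levels = build_tree(leaves)
root = levels[-1][0]          # 256-bit commitment to the whole set
proof = prove(levels, 5)      # membership proof for leaves[5]
assert verify(root, leaves[5], 5, proof)
```

The proof size and verification cost grow with the tree depth $\log |S|$, which is why the paper's shallow trees over curve cycles pay off for large sets.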

Recursion over Public-Coin Interactive Proof Systems; Faster Hash Verification

SNARKs are a well-known family of cryptographic tools increasingly used in the field of computation integrity at scale. In this area, multiple works have introduced SNARK-friendly cryptographic primitives: hashing, but also encryption and signature verification. Despite all the efforts to create cryptographic primitives that can be proved faster, hashing remains a major performance bottleneck in practice. In this paper, we present a recursive technique that can improve the efficiency of the prover by an order of magnitude compared to proving MiMC hashes (a SNARK-friendly hash function, Albrecht et al. 2016) with a Groth16 (Eurocrypt 2016) proof. We use GKR (a well-known public-coin argument system by Goldwasser et al., STOC 2008) to prove the integrity of hash computations and embed the GKR verifier inside a SNARK circuit. The challenge comes from the fact that GKR is a public-coin interactive protocol, and applying Fiat-Shamir naively may result in worse performance than applying existing techniques directly. This is because Fiat-Shamir itself involves hash computations over a large string. We take advantage of a property that SNARK schemes commonly have to build a protocol in which the Fiat-Shamir hashes have very short inputs. The technique we present is generic and can be applied over any SNARK-friendly hash, most known SNARK schemes, and any public-coin argument system in place of GKR.
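The idea of keeping Fiat-Shamir inputs short can be pictured as a chained hash state that absorbs only the prover's short per-round messages, rather than re-hashing the whole transcript (or a large witness string) at every round. A hypothetical sketch, not the paper's actual protocol; the function names and the 128-bit challenge size are illustrative:

```python
import hashlib

def fs_round(state: bytes, message: bytes):
    """Absorb one short prover message and squeeze a challenge.
    Chaining a fixed-size state keeps every individual hash input
    short, which is what makes the Fiat-Shamir computation cheap
    to express inside a SNARK circuit."""
    state = hashlib.sha256(state + message).digest()
    challenge = int.from_bytes(state, "big") >> 128  # top 128 bits
    return state, challenge

# Simulated multi-round public-coin interaction
state = hashlib.sha256(b"public-statement").digest()
challenges = []
for round_msg in [b"commitment-1", b"commitment-2", b"commitment-3"]:
    state, c = fs_round(state, round_msg)
    challenges.append(c)
```

Each hash call here takes a 32-byte state plus a short message, in contrast to a naive transform that would hash an ever-growing transcript.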

Find the Bad Apples: An efficient method for perfect key recovery under imperfect SCA oracles – A case study of Kyber

Side-channel resilience is a crucial feature when assessing whether a post-quantum cryptographic proposal is sufficiently mature to be deployed. In this paper, we propose a generic and efficient adaptive approach to improve the sample complexity (i.e., the required number of traces) of plaintext-checking (PC) oracle-based side-channel attacks (SCAs), a major class of key recovery chosen-ciphertext SCAs on lattice-based key encapsulation mechanisms. This new approach is preferable when the constructed PC oracle is imperfect, which is common in practice, and its basic idea is to design new detection codes that can determine erroneous positions in the initially recovered secret key. These secret entries are further corrected with a small number of additional traces.
This work benefits from the generality of the PC oracle and is thus applicable to various schemes and implementations. We instantiated the proposed generic attack framework on Kyber512 and fully implemented this attack instance. Through extensive computer simulations and a real-world experiment with electromagnetic (EM) leakage from an ARM Cortex-M4 platform, we demonstrated that the newly proposed attack greatly improves the state of the art in terms of the required number of traces. For instance, the new attack requires only 41% of the EM traces needed in a majority-voting attack in our experiments, where the raw oracle accuracy is fixed.

Efficient Aggregatable BLS Signatures with Chaum-Pedersen Proofs

BLS signatures have fast aggregated signature verification but slow individual signature verification. We propose a three-part optimisation that dramatically reduces CPU time in large distributed systems using BLS signatures: First, public keys should be given in both source groups $\mathbb{G}_1$ and $\mathbb{G}_2$, with a proof-of-possession check for correctness. Second, aggregated BLS signatures should carry their particular aggregate public key in $\mathbb{G}_2$, so that verifiers can do both hash-to-curve and aggregate public key checks in $\mathbb{G}_1$. Third, individual non-aggregated BLS signatures should carry short Chaum-Pedersen DLEQ proofs of correctness, so that verifying individual signatures no longer requires pairings, which makes their verification much faster. We prove security for these optimisations. The proposed scheme is implemented and benchmarked against the classic BLS scheme.
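The third optimisation rests on the classic Chaum-Pedersen protocol: a DLEQ proof that two group elements share the same discrete logarithm, made non-interactive via Fiat-Shamir. A toy sketch over a small Schnorr group; the parameters are far too small to be secure and the paper instead works over the pairing groups $\mathbb{G}_1$/$\mathbb{G}_2$:

```python
import hashlib
import secrets

# Toy Schnorr group (NOT secure; illustration only): p = 2q + 1, both prime.
p, q = 2039, 1019
g, h = 4, 9               # two generators of the order-q subgroup

def H(*vals) -> int:
    """Fiat-Shamir challenge from a tuple of group elements."""
    data = b"".join(v.to_bytes(8, "big") for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def dleq_prove(x):
    """Prove log_g(u) = log_h(v) = x without revealing x."""
    u, v = pow(g, x, p), pow(h, x, p)
    k = secrets.randbelow(q)
    a, b = pow(g, k, p), pow(h, k, p)   # commitments
    c = H(u, v, a, b)                   # challenge
    s = (k + c * x) % q                 # response
    return (u, v), (c, s)

def dleq_verify(u, v, c, s):
    """Recompute the commitments and check the challenge."""
    a = pow(g, s, p) * pow(u, -c, p) % p   # g^s * u^{-c} = g^k
    b = pow(h, s, p) * pow(v, -c, p) % p   # h^s * v^{-c} = h^k
    return c == H(u, v, a, b)

(u, v), (c, s) = dleq_prove(123)
assert dleq_verify(u, v, c, s)
```

Verification costs a few exponentiations in one group, which is the source of the speedup over a pairing-based check.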

On Linearization Attack of Entropic Quasigroups Cryptography

In this paper we study the linearization attack proposed in ePrint 2021/583, which targets entropic quasigroup cryptography. We show how this attack can be avoided, and that linearization can in fact be used to build valid cryptosystems.

Themis: Fast, Strong Order-Fairness in Byzantine Consensus

We introduce Themis, a scheme for introducing fair ordering of transactions into (permissioned) Byzantine consensus protocols with at most $f$ faulty nodes among $n \geq 4f +1$. Themis enforces the strongest notion of fair ordering proposed to date. It also achieves standard liveness, rather than the weaker notion of previous work with the same fair ordering property.
We show experimentally that Themis can be integrated into state-of-the-art consensus protocols with minimal modification or performance overhead. Additionally, we introduce a suite of experiments of general interest for evaluating the practical strength of various notions of fair ordering and the resilience of fair-ordering protocols to adversarial manipulation. We use this suite of experiments to show that the notion of fair ordering enforced by Themis is significantly stronger in practical settings than those of competing systems.
We believe Themis offers strong practical protection against many types of transaction-ordering attacks---such as front-running and back-running---that are currently impacting commonly used smart contract systems.

Robustness of Affine and Extended Affine Equivalent Surjective S-Box(es) against Differential Cryptanalysis

A Feistel Network (FN) based block cipher relies on a Substitution Box (S-Box) for achieving non-linearity. An S-Box is carefully designed to achieve optimal cryptographic security bounds. The research of the last three decades shows that considerable efforts have been made on the mathematical design of an S-Box. To carry over the exact cryptographic profile of an S-Box, the designer focuses on Affine Equivalent (AE) or Extended Affine (EA) equivalent S-Box(es). In this research, we argue that the Robustness of surjective mappings is invariant under AE and not invariant under EA transformation. It is proved that the EA equivalent of a surjective mapping does not necessarily retain Robustness against Differential Cryptanalysis (DC) in the light of Seberry's criteria. The generated EA-equivalent S-Box(es) of DES and other $6 \times 4$ mappings do not show a good Robustness profile compared to the original mappings. This article concludes that a careful selection of affine permutation parameters is significant during the design phase to achieve high Robustness against DC and Differential Power Analysis (DPA) attacks.
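Robustness against DC, in Seberry's sense, is computed from the statistics of an S-box's difference distribution table (DDT). A small sketch computing the DDT and the differential uniformity, using the 4-bit PRESENT S-box as a stand-in example rather than the paper's DES-style $6 \times 4$ mappings:

```python
def ddt(sbox, n):
    """Difference distribution table of an n-bit S-box:
    table[dx][dy] counts inputs x with S(x) ^ S(x ^ dx) == dy."""
    size = 1 << n
    table = [[0] * size for _ in range(size)]
    for dx in range(size):
        for x in range(size):
            dy = sbox[x] ^ sbox[x ^ dx]
            table[dx][dy] += 1
    return table

# The 4-bit PRESENT S-box, used here purely as an example.
S = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
     0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
T = ddt(S, 4)

# Differential uniformity: the largest DDT entry over nonzero dx.
du = max(T[dx][dy] for dx in range(1, 16) for dy in range(16))
```

A lower differential uniformity means a smaller best differential probability ($du/2^n$ per S-box), which is the quantity robustness criteria bound.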

Multi-Instance Secure Public-Key Encryption

Mass surveillance targets many users at the same time with the goal of learning as much as possible. Intuitively, breaking many users’ cryptography simultaneously should be at least as hard as breaking a single user’s, but ideally security degradation is gradual: an adversary ought to work harder to break more. Bellare, Ristenpart and Tessaro (Crypto’12) introduced the notion of multi-instance security to capture the related concept for password hashing with salts. Auerbach, Giacon and Kiltz (Eurocrypt’20) motivated the study of public key encryption (PKE) in the multi-instance setting, yet their technical results are exclusively stated in terms of key encapsulation mechanisms (KEMs), leaving a considerable gap.
We investigate the multi-instance security of public key encryption. Our contributions are twofold. Firstly, we define and compare possible security notions for multi-instance PKE, where we include PKE schemes whose correctness is not perfect. Secondly, we observe that, in general, a hybrid encryption scheme of a multi-instance secure KEM and an arbitrary data encapsulation mechanism (DEM) is unlikely to inherit the KEM’s multi-instance security. Yet, we show how with a suitable information-theoretic DEM, and a computationally secure key derivation function if need be, inheritance is possible. As far as we are aware, ours is the first inheritance result in the challenging multi-bit scenario.

Differential Meet-In-The-Middle Cryptanalysis

In this paper we introduce the differential meet-in-the-middle framework, a new cryptanalysis technique against symmetric primitives. The idea of this new method is to combine into one attack techniques from both meet-in-the-middle and differential cryptanalysis. The introduced technique can be seen as a way of extending meet-in-the-middle attacks and their variants, but also as a new way to perform the key recovery part in differential attacks. We provide a simple tool to search, given a differential, for efficient applications of this new attack, and apply our approach, in combination with some additional techniques, to SKINNY-128-384. Our attack on SKINNY-128-384 breaks 25 out of the 56 rounds of this variant and improves the previous best known attacks in the single-key model by two rounds.

Witness Encryption for Succinct Functional Commitments and Applications

Witness encryption (WE), introduced by Garg, Gentry, Sahai, and Waters (STOC 2013) allows one to encrypt a message to a statement $\mathsf{x}$ for some NP language $\mathcal{L}$, such that any user holding a witness for $\mathsf{x} \in \mathcal{L}$ can decrypt the ciphertext.
The extreme power of this primitive comes at the cost of its elusiveness: a practical construction from established cryptographic assumptions is currently out of reach.
In this work we introduce and construct a new notion of encryption that has a strong flavor of WE and that, crucially, we can build from well-studied assumptions (based on bilinear pairings) for interesting classes of computation.
Our new notion, witness encryption for (succinct) functional commitment, takes inspiration from a prior weakening of witness encryption introduced by Benhamouda and Lin (TCC 2020). In a nutshell, theirs is a WE where: the encryption statement consists of a (non-compressible) commitment $\mathsf{cm}$, a function $G$ and a value $y$; the decryption witness consists of a (non-succinct) NIZK proof about the fact that $\mathsf{cm}$ opens to $v$ such that $y=G(v)$. Benhamouda and Lin showed how to apply this primitive to obtain MPC with non-interactive and reusability properties—dubbed $\mathsf{mrNISC}$—replacing the requirement of WE in existing round-collapsing techniques.
Our new WE-like notion is motivated by supporting both commitments of a fixed size and fixed decryption complexity, independent of the size of the value $v$—in contrast to the work by Benhamouda and Lin where this complexity is linear. As a byproduct, our efficiency requirement substantially improves the offline stage of $\mathsf{mrNISC}$ protocols.
From a technical standpoint, our work shows how to solve additional challenges arising from relying on computationally binding commitments and computational soundness (of functional commitments), as opposed to statistical binding and unconditional soundness (of NIZKs), used in Benhamouda and Lin's work.
In order to tackle them, we need not only to modify their basic blueprint, but also to model and instantiate different types of projective hash functions as building blocks.
Our techniques are of independent interest and may highlight new avenues to design practical variants of witness encryption.
As an additional contribution, we show that our new WE-flavored primitive and its efficiency properties are versatile: we discuss its further applications and show how to extend this primitive to better suit these settings.

Dashing and Star: Byzantine Fault Tolerance Using Weak Certificates

State-of-the-art Byzantine fault-tolerant (BFT) protocols assuming partial synchrony such as SBFT and HotStuff use \textit{regular certificates} obtained from $2f+1$ (partial) signatures. We show in this paper that one can use \textit{weak certificates} obtained from only $f+1$ signatures to \textit{assist} in designing more robust and more efficient BFT protocols. We design and implement two BFT systems: Dashing (a family of two HotStuff-style BFT protocols) and Star (a parallel BFT framework). Our protocols have been deployed in mission-critical Central Bank Digital Currency (CBDC) infrastructures.
We first present Dashing1, which targets both efficiency and robustness using weak certificates. Dashing1 is also network-adaptive in the sense that it can leverage network connection discrepancy to improve performance. We demonstrate that Dashing1 outperforms HotStuff in various failure-free and failure scenarios. We then show in Dashing2 how to further enable a \textit{one-phase} fast path by using \textit{strong certificates} obtained from $3f+1$ signatures, a highly challenging task we tackle in the paper.
We then leverage weak certificates to build Star, a highly efficient BFT framework that delivers transactions from $n-f$ replicas using only a single consensus instance. Star compares favorably with existing protocols in terms of censorship resistance, communication complexity, pipelining, state transfer, performance and scalability, and/or robustness under failures.
We demonstrate that the Dashing protocols achieve 47\%-107\% higher peak throughput than HotStuff for experiments conducted on Amazon EC2. Meanwhile, unlike all known BFT protocols whose performance degrades as $f$ grows large, the peak throughput of Star keeps increasing as $f$ grows. When deployed in a WAN with 91 replicas across five continents, Star achieves an impressive throughput of 256 ktx/sec, 35.9x that of HotStuff, 23.9x that of Dashing1, and 2.38x that of Narwhal.

CHVote Protocol Specification

This document provides a self-contained, comprehensive, and fully-detailed specification of a new cryptographic voting protocol designed for political elections in Switzerland. The document describes every relevant aspect and every necessary technical detail of the computations and communications performed by the participants during the protocol execution. To support the general understanding of the cryptographic protocol, the document accommodates the necessary mathematical and cryptographic background information. By providing this information to the maximal possible extent, it serves as an ultimate companion document for the developers in charge of implementing this system. It may also serve as a manual for developers trying to implement an independent election verification software. The decision of making this document public even enables implementations by third parties, for example by students trying to develop a clone of the system for scientific evaluations or to implement protocol extensions to achieve additional security properties. In any case, the target audience of this document are system designers, software developers, and cryptographic experts.

On the Complete Non-Malleability of the Fujisaki-Okamoto Transform

The Fujisaki-Okamoto (FO) transform (CRYPTO 1999 and JoC 2013) turns any weakly (i.e., IND-CPA) secure public-key encryption (PKE) scheme into a strongly (i.e., IND-CCA) secure key encapsulation mechanism (KEM) in the random oracle model (ROM). Recently, the FO transform regained momentum as part of CRYSTALS-Kyber, selected by NIST as the PKE winner of the post-quantum cryptography standardization project.
Following Fischlin (ICALP 2005), we study the complete non-malleability of KEMs obtained via the FO transform. Intuitively, a KEM is completely non-malleable if no adversary can maul a given public key and ciphertext into a new public key and ciphertext encapsulating a related key for the underlying blockcipher.
On the negative side, we find that KEMs derived via FO are not completely non-malleable in general. On the positive side, we show that complete non-malleability holds in the ROM by assuming the underlying PKE scheme meets an additional property, or by a slight tweak of the transformation.

TEDT2 - Highly Secure Leakage-resilient TBC-based Authenticated Encryption

Leakage-resilient authenticated encryption (AE) schemes received considerable attention during the previous decade. Two core security models of bounded and unbounded leakage have evolved, where the latter has been motivated in a very detailed and practice-oriented manner. In that setting, designers often build schemes based on (tweakable) block ciphers due to the small state size, such as the recent two-pass AE scheme TEDT from TCHES 1/2020. TEDT is interesting due to its high security guarantees of $O(n - \log(n^2))$-bit integrity under leakage and similar AE security in the black-box setting. However, a detail limited it to provide only $n/2$-bit privacy under leakage.
In this work, we extend TEDT to TEDT2 in three aspects with the help of a tweakable block cipher with a 3n-bit tweakey: we (1) adopt the idea from the design team of Romulus of replacing TEDT's previous internal hash function with Naito's MDPH, (2) move the nonce from the hash to the tag-generation function both for more efficiency, and (3) strengthen the security of the encryption to obtain beyond-birthday-bound security also under leakage.

Rate-1 Non-Interactive Arguments for Batch-NP and Applications

We present a rate-$1$ construction of a publicly verifiable non-interactive argument system for batch-$\mathsf{NP}$ (also called a BARG), under the LWE assumption. Namely, a proof corresponding to a batch of $k$ NP statements each with an $m$-bit witness, has size $m + \mathsf{poly}(\lambda,\log k)$.
In contrast, prior work either relied on non-standard knowledge assumptions, or produced proofs of size $m \cdot \mathsf{poly}(\lambda,\log k)$ (Choudhuri, Jain, and Jin, STOC 2021, following Kalai, Paneth, and Yang 2019).
We show how to use our rate-$1$ BARG scheme to obtain the following results, all under the LWE assumption in the standard model:
- A multi-hop BARG scheme for $\mathsf{NP}$.
- A multi-hop aggregate signature scheme.
- An incrementally verifiable computation (IVC) scheme for arbitrary $T$-time
deterministic computations with proof size $\mathsf{poly}(\lambda,\log T)$.
Prior to this work, multi-hop BARGs were only known under non-standard knowledge assumptions or in the random oracle model; aggregate signatures were only known under indistinguishability obfuscation (and RSA) or in the random oracle model; IVC schemes with proofs of size $\mathsf{poly}(\lambda,T^{\epsilon})$ were known under a bilinear map assumption, and with proofs of size $\mathsf{poly}(\lambda,\log T)$ were only known under non-standard knowledge assumptions or in the random oracle model.

Reversing, Breaking, and Fixing the French Legislative Election E-Voting Protocol

We conduct a security analysis of the e-voting protocol used for the largest political election using e-voting in the world, the 2022 French legislative election for the citizens overseas. Due to a lack of system and threat model specifications, we built and contributed such specifications by studying the French legal framework and by reverse-engineering the code base accessible to the voters. Our analysis reveals that this protocol is affected by two design-level and implementation-level vulnerabilities. We show how those allow a standard voting server attacker, and even more so a channel attacker, to defeat the election integrity and ballot privacy via 6 attack variants. We propose and discuss 5 fixes to prevent those attacks. Our specifications, the attacks, and the fixes were acknowledged by the relevant stakeholders during our responsible disclosure. Our attacks are in the process of being prevented with our fixes for future elections. Beyond this specific protocol, we draw general conclusions and lessons from this instructive experience where an e-voting protocol meets the real-world constraints of a large-scale political election.

CycloneNTT: An NTT/FFT Architecture Using Quasi-Streaming of Large Datasets on DDR- and HBM-based FPGA Platforms

Number-Theoretic-Transform (NTT) is a variation of Fast-Fourier-Transform (FFT) on finite fields. NTT is being increasingly used in blockchain and zero-knowledge proof applications. Although FFT and NTT are widely studied for FPGA implementation, we believe CycloneNTT is the first to solve this problem for large data sets ($\ge2^{24}$, 64-bit numbers) that would not fit in the on-chip RAM. CycloneNTT uses a state-of-the-art butterfly network and maps the dataflow to hybrid FIFOs composed of on-chip SRAM and external memory. This manifests as a quasi-streaming data access pattern, minimizing external memory access latency and maximizing throughput. We implement two variants of CycloneNTT optimized for DDR and HBM external memories. Although historically this problem has been shown to be memory-bound, CycloneNTT's quasi-streaming access pattern is optimized to the point that when using HBM (Xilinx C1100), the architecture becomes compute-bound. On the DDR-based platform (AWS F1), the latency of the application is equal to the streaming of the entire dataset $\log N$ times to/from external memory. Moreover, by exploiting HBM's larger number of channels and following a series of additional optimizations, CycloneNTT requires only $\frac{1}{6}\log N$ passes.
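For reference, the butterfly computation that such architectures stream is the standard radix-2 NTT recurrence. A minimal software sketch over a toy field ($p = 17$, $n = 8$; the hardware targets 64-bit fields and $n \ge 2^{24}$):

```python
def ntt(a, w, p):
    """Recursive radix-2 NTT over Z_p; w is a primitive
    len(a)-th root of unity mod p, len(a) a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], w * w % p, p)
    odd = ntt(a[1::2], w * w % p, p)
    out, t = [0] * n, 1
    for i in range(n // 2):                      # butterfly step
        out[i] = (even[i] + t * odd[i]) % p
        out[i + n // 2] = (even[i] - t * odd[i]) % p
        t = t * w % p
    return out

def intt(A, w, p):
    """Inverse NTT: forward transform with w^-1, scaled by n^-1."""
    n = len(A)
    a = ntt(A, pow(w, -1, p), p)
    inv_n = pow(n, -1, p)
    return [x * inv_n % p for x in a]

p, w = 17, 2                 # 2 has multiplicative order 8 mod 17
a = [3, 1, 4, 1, 5, 9, 2, 6]
assert intt(ntt(a, w, p), w, p) == a   # round-trip recovers the input
```

Each recursion level touches the full dataset once, which is exactly the $\log N$ streaming passes the DDR variant performs.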

Accountable Threshold Signatures with Proactive Refresh

An accountable threshold signature (ATS) is a threshold signature scheme where every signature identifies the quorum of signers who generated that signature. ATS schemes are widely used in financial settings where signers need to be held accountable for threshold signatures they generate. In this paper we initiate the study of proactive refresh for accountable threshold signatures. Proactive refresh is a protocol that lets the group of signers refresh their shares of the secret key, without changing the public key or the threshold. We give several definitions for this notion achieving different levels of security. We observe that certain natural constructions for an ATS cannot be proactively refreshed because the secret key generated at setup is needed for accountability. We then construct three types of ATS schemes with proactive refresh. The first is a generic construction that is efficient when the number of signers is small. The second is a hybrid construction that performs well for a large number of signers and satisfies a strong security definition. The third is a collection of very practical constructions derived from ATS versions of the Schnorr and BLS signature schemes; however, these practical constructions only satisfy our weaker notion of security.

Improved Universal Circuits using Lookup Tables

A Universal Circuit (UC) is a Boolean circuit of size $\Theta(n \log n)$ that can simulate any Boolean function up to a certain size $n$. Valiant (STOC'76) provided the first two UC constructions of asymptotic sizes $\sim5 n\log n$ and $\sim4.75 n\log n$, and today's most efficient construction of Liu et al. (CRYPTO'21) has size $\sim3n\log n$.
Evaluating a public UC with a secure Multi-Party Computation (MPC) protocol allows efficient Private Function Evaluation (PFE), where a private function is evaluated on private data.
Most existing UC constructions simulate circuits consisting of 2-input gates.
In this work, we study UCs that simulate circuits consisting of ($\rho \rightarrow \omega$)-Lookup Tables (LUTs) that map $\rho$ inputs to $\omega$ outputs.
Existing UC constructions can be easily extended to ($\rho \rightarrow$ 1)-LUTs (we call this the fixed UC construction).
We further extend this to ($\rho \rightarrow \omega$)-LUTs.
Unfortunately, the size of the fixed UC construction is linear in the largest input size $\rho$ of the LUT, i.e., even if only a single LUT in the circuit has a large input size, the size of the whole UC is dominated by this LUT size.
To circumvent this, we design a \emph{dynamic} UC construction, where the dimensions of the individual LUTs are public.
We implement the fixed and dynamic UC constructions based on the UC construction by Liu et al., which is also the first implementation of their construction. We show that the concrete size of our dynamic UC construction improves by at least $2\times$ over Liu et al.'s UC for all benchmark circuits, which are representative of many PFE applications.

Just How Fair is an Unreactive World?

Fitzi, Garay, Maurer, and Ostrovsky (J. Cryptology 2005) showed that in the presence of a dishonest majority, no primitive of cardinality $n - 1$ is complete for realizing an arbitrary $n$-party functionality with guaranteed output delivery. In this work, we show that in the presence of $n - 1$ corrupt parties, no unreactive primitive of cardinality $n - 1$ is complete for realizing an arbitrary $n$-party functionality with fairness. We show more generally that for $t > \frac{n}{2}$, in the presence of $t$ malicious parties, no unreactive primitive of cardinality $t$ is complete for realizing an arbitrary $n$-party functionality with fairness. We complement this result by noting that $(t+1)$-wise fair exchange is complete for realizing an arbitrary $n$-party functionality with fairness. In order to prove our results, we utilize the primitive of fair coin tossing and introduce the notion of predictability in coin tossing protocols, which we believe is of independent interest.

TiGER: Tiny bandwidth key encapsulation mechanism for easy miGration based on RLWE(R)

Post-quantum Key Encapsulation Mechanism (PQC-KEM) designs aim to replace cryptography in legacy security protocols. It would be nice if a PQC-KEM were faster and lighter than ECDH or DH for easy migration to legacy security protocols. However, this seems impossible due to the nature of the underlying problems that must remain secure in a quantum environment. Therefore, it makes sense to determine the threshold for a scheme by analyzing the maximum bandwidth the legacy security protocol can accommodate. We specified the bandwidth threshold at 1,244 bytes based on IKEv2 (RFC 7296), a security protocol with strict constraints on payload size in the initial exchange for secret key sharing. We propose TiGER, an IND-CCA secure KEM based on RLWE(R). TiGER has a ciphertext (1,152 bytes) and a public key (928 bytes) smaller than 1,244 bytes, even at the AES256 security level. To our knowledge, TiGER is the only scheme with such an achievement. Also, TiGER satisfies security levels 1, 3, and 5 of the NIST competition. Based on the reference implementation, TiGER is 1.7-2.6x faster than Kyber and 2.2-4.4x faster than LAC.

A Central Limit Framework for Ring-LWE Decryption

This paper develops Central Limit arguments for analysing the noise in ciphertexts in two homomorphic encryption schemes that are based on Ring-LWE. The first main contribution of this paper is to present an average-case noise analysis for the BGV scheme. Our approach builds upon the recent work of Costache et al. that gives the approximation of a polynomial product as a multivariate Normal distribution. We show how this result can be applied in the BGV context and experimentally verify its improvement over prior, worst-case, approaches. Our second main contribution is to develop a Central Limit framework to analyse the noise growth in the homomorphic Ring-LWE cryptosystem of Lyubashevsky, Peikert and Regev (Eurocrypt 2013, full version). Our approach is very general: apart from finite variance, no assumption on the distribution of the noise is required (in particular, the noise need not be subgaussian). We show that our approach leads to tighter bounds for the probability of decryption failure than have been obtained in prior work.
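The core Central Limit observation can be checked numerically: each coefficient of a product of random polynomials mod $x^n + 1$ is a sum of $n$ independent terms, so its variance is close to $n \cdot \sigma_1^2 \sigma_2^2$ and its distribution approaches a Normal. A small simulation with assumed ternary noise (an illustration, not the schemes' actual parameter sets):

```python
import random
import statistics

random.seed(0)
n, trials = 64, 4000

def ternary():
    """Uniform on {-1, 0, 1}: mean 0, variance 2/3."""
    return random.choice((-1, 0, 1))

def negacyclic_coeff0(a, b):
    """Coefficient 0 of a*b mod x^n + 1 (negacyclic convolution)."""
    return a[0] * b[0] - sum(a[i] * b[n - i] for i in range(1, n))

samples = []
for _ in range(trials):
    a = [ternary() for _ in range(n)]
    b = [ternary() for _ in range(n)]
    samples.append(negacyclic_coeff0(a, b))

# CLT prediction: variance of the sum of n independent products.
predicted = n * (2 / 3) ** 2
empirical = statistics.pvariance(samples)
```

The empirical variance lands close to the prediction, and no subgaussian assumption on the noise was needed, only finite variance.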

XMSS-SM3 and MT-XMSS-SM3: Instantiating Extended Merkle Signature Schemes with SM3

We instantiate the hash-based post-quantum stateful signature scheme XMSS and its multi-tree version, described in RFC 8391 and NIST SP 800-208, with SM3, and report the results of a preliminary performance test.

LightSwap: An Atomic Swap Does Not Require Timeouts At Both Blockchains

Security and privacy issues with centralized exchange services have motivated the design of atomic swap protocols for decentralized trading across currencies. These protocols follow a standard blueprint similar to the 2-phase commit in databases: (i) both users first lock their coins under a certain (cryptographic) condition and a timeout; (ii-a) the coins are swapped if the condition is fulfilled; or (ii-b) coins are released after the timeout. The quest for these protocols is to minimize the requirements from the scripting language supported by the swapped coins, thereby supporting a larger range of cryptocurrencies. The recently proposed universal atomic swap protocol [IEEE S&P’22] demonstrates how to swap coins whose scripting language only supports the verification of a digital signature on a transaction. However, the timeout functionality is cryptographically simulated with verifiable timelock puzzles, a computationally expensive primitive that hinders its use in battery-constrained devices such as mobile phones. In this state of affairs, we question whether the 2-phase commit paradigm is necessary for atomic swaps in the first place. In other words, is it possible to design a secure atomic swap protocol where the timeout is not used by (at least one of the two) users?
In this work, we present LightSwap, the first secure atomic swap protocol that does not require the timeout functionality (not even in the form of a cryptographic puzzle) by one of the two users. LightSwap is thus better suited for scenarios where a user, running an instance of LightSwap on her mobile phone, wants to exchange coins with an online exchange service running an instance of LightSwap on a computer. We show how LightSwap can be used to swap Bitcoin and Monero, an interesting use case since Monero does not provide any scripting functionality support other than linkable ring signature verification.

Two new infinite families of APN functions in trivariate form

We present two infinite families of APN functions in trivariate form over finite fields of the form $\mathbb{F}_{2^{3m}}$. We show that the functions from both families are permutations when $m$ is odd, and are 3-to-1 functions when $m$ is even. In particular, our functions are AB permutations for $m$ odd. Furthermore, we observe that for $m = 3$, i.e., for $\mathbb{F}_{2^9}$, the functions from our families are CCZ-equivalent to the two bijective sporadic APN instances discovered by Beierle and Leander. We also perform an exhaustive computational search for quadratic APN functions with binary coefficients in trivariate form over $\mathbb{F}_{2^{3m}}$ with $m \le 5$ and report on the results.

Compute, but Verify: Efficient Multiparty Computation over Authenticated Inputs

Traditional notions of secure multiparty computation (MPC) allow mutually distrusting parties to jointly compute a function over their private inputs, but typically do not specify how these inputs are chosen. Motivated by real-world applications where corrupt inputs could adversely impact privacy and operational legitimacy, we consider a notion of authenticated MPC where the inputs are authenticated, e.g., signed using a digital signature by some trusted authority. We propose a generic and efficient compiler that transforms any linear secret sharing based MPC protocol into one with input authentication.
Our compiler incurs significantly lower computational costs and competitive communication overheads when compared to the best existing solutions, while entirely avoiding the (potentially expensive) protocol-specific techniques and pre-processing requirements that are inherent to these solutions. For $n$-party MPC protocols with abort security where each party has $\ell$ inputs, our compiler incurs $O(n\log \ell)$ communication overall and a computational overhead of $O(\ell)$ group exponentiations per party (the corresponding overheads for the most efficient existing solution are $O(n^2)$ and $O(\ell n)$). Finally, for a corruption threshold $t<n/4$, our compiler preserves the stronger identifiable abort security of the underlying MPC protocol. No existing solution for authenticated MPC achieves this regardless of the corruption threshold.
Along the way, we make several technical contributions of independent interest. These include the notion of distributed proofs of knowledge and concrete realizations thereof for several relations of interest, such as proving knowledge of signatures under several popular digital signature schemes, and proving knowledge of the opening of a Pedersen commitment. We also illustrate the practicality of our approach by extending the well-known MP-SPDZ library with our compiler, thus yielding prototype authenticated MPC protocols.
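The proof of knowledge of a Pedersen opening mentioned above is classically realized by a Schnorr-style Sigma protocol. As a point of reference, here is a minimal single-prover sketch over a toy group (the parameters `p`, `q`, `g`, `h` are illustrative and far too small for real use; the paper's distributed variant is not shown):

```python
# Hypothetical sketch: Schnorr-style proof of knowledge of the opening
# (m, r) of a Pedersen commitment C = g^m * h^r, in a toy order-q subgroup.
import secrets

p, q = 23, 11          # p = 2q + 1; toy parameters only
g = 2                  # generator of the order-q subgroup mod p
h = pow(g, 7, p)       # in practice h must have an unknown discrete log w.r.t. g

def commit(m, r):
    return (pow(g, m, p) * pow(h, r, p)) % p

# Prover knows (m, r) opening C.
m, r = 5, 3
C = commit(m, r)

# Commit phase: random (a, b), announce A = g^a h^b.
a, b = secrets.randbelow(q), secrets.randbelow(q)
A = commit(a, b)

# Challenge (from the verifier, or via Fiat-Shamir).
e = secrets.randbelow(q)

# Response: z1 = a + e*m, z2 = b + e*r (mod the group order q).
z1, z2 = (a + e * m) % q, (b + e * r) % q

# Verifier checks g^z1 h^z2 == A * C^e.
assert commit(z1, z2) == (A * pow(C, e, p)) % p
```

The check passes because $g^{z_1}h^{z_2} = g^a h^b (g^m h^r)^e = A \cdot C^e$.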

Quantum Algorithm for Oracle Subset Product

In 1993 Bernstein and Vazirani proposed a quantum algorithm for the Bernstein-Vazirani problem: given oracle access to the function $f(a_1,\dots,a_n) = a_1x_1+\cdots + a_nx_n \pmod 2$ for a secret string $x = x_1\dots x_n \in \{0,1\}^n$, where $a_1,\dots,a_n \in \{0,1\}$, find $x$. We give a quantum algorithm for a new problem, the oracle subset product problem: given oracle access to the function $f(a_1,\dots,a_n) = a_1^{x_1}\cdots a_n^{x_n}$ for a secret string $x = x_1\dots x_n\in\{0,1\}^n$, where $a_1,\dots,a_n\in \mathbb Z$, find $x$. As with the Bernstein-Vazirani algorithm, the problem is solvable in polynomial time by classical algorithms; the advantage of the quantum algorithm is that it makes only one call to the function instead of $n$.
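The classical $n$-query strategy the abstract alludes to can be sketched as follows (a toy illustration, not the quantum algorithm; the oracle and recovery routine are hypothetical names):

```python
# Classical recovery of x from the subset-product oracle with n queries:
# querying with a_j = 1 for j != i and a_i = 2 returns 2^{x_i}, revealing x_i.
def make_oracle(x):
    def f(a):
        out = 1
        for ai, xi in zip(a, x):
            out *= ai ** xi
        return out
    return f

def recover(f, n):
    bits = []
    for i in range(n):
        a = [1] * n
        a[i] = 2          # isolate the i-th bit of the secret
        bits.append(1 if f(a) == 2 else 0)
    return bits

x = [1, 0, 1, 1, 0]
assert recover(make_oracle(x), len(x)) == x   # n = 5 oracle calls
```

The quantum algorithm of the paper recovers all of $x$ with a single oracle call.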

Concurrently Composable Non-Interactive Secure Computation

We consider the feasibility of non-interactive secure two-party computation (NISC) in the plain model satisfying the notion of superpolynomial-time simulation (SPS). While stand-alone secure SPS-NISC protocols are known from standard assumptions (Badrinarayanan et al., Asiacrypt 2017), it has remained an open problem to construct a concurrently composable SPS-NISC. Prior to our work, the best protocols require 5 rounds (Garg et al., Eurocrypt 2017), or 3 simultaneous-message rounds (Badrinarayanan et al., TCC 2017).
In this work, we demonstrate the first concurrently composable SPS-NISC. Our construction assumes the existence of:
- a non-interactive (weakly) CCA-secure commitment,
- a stand-alone secure SPS-NISC with subexponential security,
and satisfies the notion of "angel-based" UC security (i.e., UC with a superpolynomial-time helper) with perfect correctness.
We additionally demonstrate that both of the primitives we use (albeit only with polynomial security) are necessary for such concurrently composable SPS-NISC with perfect correctness. As such, our work identifies essentially necessary and sufficient primitives for concurrently composable SPS-NISC with perfect correctness in the plain model.

Blockin: Multi-Chain Sign-In Standard with Micro-Authorizations

The tech industry is currently making the transition from Web 2.0 to Web 3.0, and with this transition, authentication and authorization have been reimagined. Users can now sign in to websites with their unique public/private key pair rather than generating a username and password for every site. However, many useful features, like role-based access control, dynamic resource owner privileges, and expiration tokens, currently don’t have efficient Web 3.0 solutions. Our solution aims to provide a flexible foundation for resource providers to implement the aforementioned features on any blockchain through a two-step process. The first step, authorization, creates an on-chain asset which is to be presented as an access token when interacting with a resource. The second step, authentication, verifies ownership of an asset through querying the blockchain and cryptographic digital signatures. Our solution also aims to be a multi-chain standard, whereas current Web 3.0 sign-in standards are limited to a single blockchain.

Side-channel and Fault-injection attacks over Lattice-based Post-quantum Schemes (Kyber, Dilithium): Survey and New Results

In this work, we present a systematic study of Side-Channel Attacks (SCA) and Fault Injection Attacks (FIA) on structured lattice-based schemes, with a focus on the Kyber Key Encapsulation Mechanism (KEM) and the Dilithium signature scheme, which are leading candidates in the NIST standardization process for Post-Quantum Cryptography (PQC). Through our study, we attempt to understand the underlying similarities and differences between the existing attacks, while classifying them into different categories. Given the wide variety of reported attacks, simultaneous protection against all of them requires implementing customized protections/countermeasures for both Kyber and Dilithium. We therefore present a range of customized countermeasures, capable of providing defenses/mitigations against existing SCA/FIA, and incorporate several SCA and FIA countermeasures within a single design of Kyber and Dilithium. Among the countermeasures discussed in this work, we present novel ones that offer simultaneous protection against several SCA- and FIA-based chosen-ciphertext attacks on Kyber KEM. We implement the presented countermeasures within the well-known pqm4 library for the ARM Cortex-M4 microcontroller. Our performance evaluation reveals that the presented custom countermeasures incur reasonable performance overheads on the ARM Cortex-M4. We therefore believe our work argues for the use of custom countermeasures within real-world implementations of lattice-based schemes, either standalone or as reinforcements to generic countermeasures such as masking.

Locally Computable UOWHF with Linear Shrinkage

This is an errata for our paper, ``Locally Computable UOWHF with Linear Shrinkage''. There is a gap in the proof of Theorem 4.1 that asserts that the collection $F_{P,n,m}$ is $\delta$-secure $\beta$-random target-collision resistant assuming the one-wayness and the pseudorandomness of the collection for related parameters. We currently do not know whether Theorem 4.1 (as stated in Section 4) holds. We are grateful to Colin Sandon for pointing out the gap.
We note that Theorem 5.1, which transforms any $\delta$-secure $\beta$-random target collision resistant collection to a target collision resistant collection while preserving constant locality and linear shrinkage, remains intact. Thus, one can construct a locally computable UOWHF with linear shrinkage based on the hypothesis that random local functions are $\delta$-secure $\beta$-random target-collision resistant. We also mention that locally-computable functions with linear-shrinkage that achieve a stronger form of *collision-resistance* were constructed by Applebaum, Haramaty, Ishai, Kushilevitz and Vaikuntanathan (ITCS 2017) based on incomparable assumptions.

Non-interactive Mimblewimble transactions, revisited

Mimblewimble is a cryptocurrency protocol that promises to overcome notorious blockchain scalability issues and provides user privacy. For a long time its wider adoption has been hindered by the lack of non-interactive transactions, that is, payments for which only the sender needs to be online.
Yu proposed a way of adding non-interactive transactions with stealth addresses to Mimblewimble, but we show that it is flawed. Building on Yu and integrating ideas from Burkett, we give a fixed scheme and provide a rigorous security analysis in a strengthening of the previous security model from Eurocrypt'19.
Our protocol is considered for implementation by MimbleWimbleCoin and a variant is now deployed as MimbleWimble Extension Blocks (MWEB) in Litecoin.

Exploring Integrity of AEADs with Faults: Definitions and Constructions

Implementation-based attacks are major concerns for modern cryptography. For symmetric-key cryptography, a significant amount of exploration has taken place in this regard for primitives such as block ciphers. Concerning symmetric-key operating modes, such as Authenticated Encryption with Associated Data (AEAD), the state-of-the-art mainly addresses passive Side-Channel Attacks (SCA) in the form of leakage-resilient cryptography. So far, only a handful of works address Fault Attacks (FA) in the context of AEADs concerning the fundamental properties – integrity and confidentiality. In this paper, we address this gap by exploring mode-level issues arising from FAs. We emphasize that FAs can be fatal even when the adversary does not aim to extract the long-term secret, but rather tries to violate the basic security requirements (integrity and confidentiality). Notably, we show novel integrity attack examples on state-of-the-art AEAD constructions and even on a prior fault-resilient AEAD construction called SIV$. On the constructive side, we first present new security notions of fault-resilience for PRFs (frPRF), MACs (frMAC) and AEADs (frAE); the latter can be seen as an improved version of the notion introduced by Fischlin and Günther at CT-RSA’20. Then, we propose new constructions to turn a frPRF into a fault-resilient MAC frMAC (hash-then-frPRF) and into a fault-resilient AEAD frAE (MAC-then-Encrypt-then-MAC, or MEM).

PGC: Pretty Good Decentralized Confidential Payment System with Auditability

Modern cryptocurrencies such as Bitcoin and Ethereum achieve decentralization by replacing a trusted center with a distributed and append-only ledger (known as a blockchain). However, removing this trusted center comes at a significant cost to privacy, due to the public nature of the blockchain. Many existing cryptocurrencies fail to provide transaction anonymity and confidentiality: the sender's and receiver's addresses and the transfer amount are publicly accessible. As privacy concerns grow, a number of academic works have sought to enhance privacy by leveraging cryptographic tools. Though strong privacy is appealing, it might be abused in some cases. In particular, anonymity poses great challenges to auditability, which is a crucial property for the adoption of decentralized payment systems.
Aiming for a middle ground between privacy and auditability, we introduce the notion of \emph{auditable decentralized confidential payment} (ADCP) systems. In addition to offering transaction confidentiality, an ADCP system supports two levels of auditability, namely regulation compliance and supervision. We present a generic construction of an ADCP system from an integrated signature and encryption scheme and non-interactive zero-knowledge proof systems. We then instantiate our generic construction by carefully designing the underlying building blocks, yielding a standalone cryptocurrency called PGC. In PGC, the setup procedure is (semi-)transparent, and transaction cost is independent of system scale: a transaction is roughly 1.4KB, taking under 28ms to generate and 9ms to verify.
At the core of PGC is an additively homomorphic public-key encryption scheme that we introduce, twisted ElGamal, which is not only as secure as standard exponential ElGamal, but also friendly to Sigma protocols and range proofs. This enables us to easily devise zero-knowledge proofs for basic correctness of transactions as well as various application-dependent policies in a modular fashion. Moreover, it is very efficient: compared with the most efficient reported implementation of Paillier PKE, twisted ElGamal is an order of magnitude better in key and ciphertext size and decryption speed (for small message spaces), and two orders of magnitude better in encryption speed. We believe twisted ElGamal is of independent interest in its own right. Along the way of designing and reasoning about zero-knowledge proofs for PGC, we also obtain two interesting results. One is a weak forking lemma, a useful tool for proving computational knowledge soundness. The other is a method to prove no-knowledge of a discrete logarithm, which complements the standard proof of knowledge of a discrete logarithm.
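For context, the additive homomorphism that makes this family of schemes friendly to range proofs can be sketched for standard exponential ElGamal, the baseline the abstract compares against; this is the textbook scheme, not the paper's twisted variant, and the toy parameters are illustrative only:

```python
# Standard exponential ElGamal over a toy order-q subgroup: multiplying
# ciphertexts componentwise adds the underlying plaintexts.
import secrets

p, q = 23, 11                  # p = 2q + 1; toy parameters only
g = 2                          # generator of the order-q subgroup mod p

sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)

def enc(m):
    r = secrets.randbelow(q)
    return (pow(g, r, p), pow(pk, r, p) * pow(g, m, p) % p)

def add(c1, c2):               # homomorphic addition of plaintexts
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def dec_small(c):              # brute-force discrete log; small messages only
    gm = c[1] * pow(c[0], q - sk, p) % p   # strip pk^r = (g^r)^sk
    for m in range(q):
        if pow(g, m, p) == gm:
            return m

assert dec_small(add(enc(3), enc(4))) == 7
```

Decryption requires a discrete-log search over the message space, which is why such schemes fit "small message space" settings like transfer amounts.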

Quagmire ciphers and group theory: What is a Beaufort cipher?

We show that a Beaufort cipher is simultaneously both a quagmire 1 and a quagmire 2 cipher, which includes it in the set of quagmire 4 ciphers as well, albeit as a degenerate one. The Beaufort is one of a family of ciphers that share this property.

Quagmire ciphers and group theory: Recovering keywords from the key table

We demonstrate that with some ideas from group theory we are very often able to recover the keywords for a quagmire cipher from its key table. This would be the last task for a cryptologist in analyzing such a cipher.

Proofs of Proof-of-Stake with Sublinear Complexity

Popular Ethereum wallets (e.g., MetaMask) entrust centralized infrastructure providers (e.g., Infura) to run the consensus client logic on their behalf. As a result, these wallets are lightweight and highly performant, but come with security risks: a malicious provider can mislead the wallet, e.g., fake payments and balances, or censor transactions. On the other hand, light clients, which are not in popular use today, allow decentralization, but at concretely inefficient and asymptotically linear bootstrapping complexity. This poses a dilemma between decentralization and performance. In this paper, we design, implement, and evaluate a protocol with concretely efficient and asymptotically logarithmic bootstrapping complexity. Our proofs of proof-of-stake (PoPoS) take the form of a Merkle tree of PoS epochs. The verifier enrolls the provers in a bisection game, in which the honest prover is destined to win once an adversarial Merkle tree is challenged at sufficient depth. To evaluate our superlight protocol, we provide a client implementation that is compatible with mainnet PoS Ethereum: compared to the state-of-the-art light client construction of PoS Ethereum, our client improves time-to-completion by 9x, communication by 180x, and energy usage by 30x (when bootstrapping after 10 years of consensus execution). We prove that our construction is secure and show how to employ it for other PoS systems such as Cardano (with full adaptivity), Algorand, and Snow White.
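The core bisection step can be sketched as a binary search for the first point of divergence between two provers' claimed epoch histories; this toy sketch omits the Merkle authentication of each response, which is essential for security in the actual protocol:

```python
# Toy bisection: the verifier binary-searches for the first epoch at which two
# provers' claims diverge, touching O(log n) positions instead of the whole
# history, then need only adjudicate that single point.
import hashlib

def digest(x):
    return hashlib.sha256(x.encode()).hexdigest()

def bisect(claim_a, claim_b, n):
    lo, hi = 0, n - 1            # invariant: agreement at lo, disagreement at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if claim_a(mid) == claim_b(mid):
            lo = mid
        else:
            hi = mid
    return hi                    # first index of disagreement

honest = [digest(f"epoch-{i}") for i in range(16)]
forged = honest[:10] + [digest(f"fake-{i}") for i in range(10, 16)]

first = bisect(lambda i: honest[i], lambda i: forged[i], 16)
assert first == 10               # divergence found after 4 queries per prover
```

In the real protocol each queried position is a Merkle-tree node whose response is checked against the prover's committed root, so a lying prover is caught at the divergence point.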

The Return of the SDitH

This paper presents a code-based signature scheme based on the well-known syndrome decoding (SD) problem. The scheme builds upon a recent line of research which uses the Multi-Party-Computation-in-the-Head (MPCitH) approach to construct efficient zero-knowledge proofs, such as Syndrome Decoding in the Head (SDitH), and builds signature schemes from them using the Fiat-Shamir transform.
At the heart of our proposal is a new approach to amplify the soundness of any MPC protocol that uses additive secret sharing. An MPCitH protocol with $N$ parties can be repeated $D$ times using parallel composition to reach the same soundness as a protocol run with $N^D$ parties. However, the former comes with $D$ times higher communication costs, largely due to the use of $D$ `auxiliary' states (which in general have a significantly bigger impact on size than random states). Instead, we begin by generating $N^D$ shares, arranged into a $D$-dimensional hypercube of side $N$ containing only one `auxiliary' state. From this hypercube we derive $D$ sharings of size $N$, which are used to run $D$ instances of an $N$-party MPC protocol. This approach leads to an MPCitH protocol with $1/N^D$ soundness error, requiring $N^D$ offline computation, only $ND$ online computation, and only one `auxiliary' state. As the (potentially offline) share generation phase is generally inexpensive, this leads to trade-offs that are superior to plain parallel composition.
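The hypercube share derivation described above can be sketched as follows (a sketch under assumptions: plain additive sharing over $\mathbb{Z}_q$, with leaf indices treated as base-$N$ digit strings; the MPC protocol run on the derived sharings is not shown):

```python
# Arrange N^D additive shares of a secret as a D-dimensional hypercube of
# side N; collapsing along each dimension yields D sharings of size N, each
# still an additive sharing of the same secret.
import secrets

q, N, D = 8191, 4, 3                   # toy prime modulus; N^D = 64 leaf shares
secret = 1234

# N^D leaf shares summing to the secret mod q.
leaves = [secrets.randbelow(q) for _ in range(N**D - 1)]
leaves.append((secret - sum(leaves)) % q)

def digit(i, d):                       # d-th base-N digit of leaf index i
    return (i // N**d) % N

# For each dimension d, aggregate leaves by their d-th digit.
sharings = []
for d in range(D):
    share = [0] * N
    for i, v in enumerate(leaves):
        share[digit(i, d)] = (share[digit(i, d)] + v) % q
    sharings.append(share)

for s in sharings:                     # every derived sharing opens to secret
    assert sum(s) % q == secret
```

Since every leaf contributes to exactly one slot per dimension, each of the $D$ derived sharings re-partitions the same $N^D$ leaves, which is why one set of leaf shares (and one auxiliary state) suffices for all $D$ protocol instances.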
Our novel method of share generation and aggregation not only improves certain MPCitH protocols in general, but also yields concrete improvements to signature schemes. Specifically, we apply it to the work of Feneuil, Joux, and Rivain (CRYPTO'22) on code-based signatures, and obtain a new signature scheme that achieves a 3.3x improvement in global runtime and a 15x improvement in online runtime for their shortest signature size (8.5 kB). It is also possible to leverage the fact that most computations are offline to define parameter sets leading to smaller signatures: 6.7 kB for 60 ms offline, or 5.6 kB for 700 ms offline. For NIST security level 1, the online signing cost is around 3 million cycles (1 ms on commodity processors), regardless of signature size.

An attack on a key exchange protocol based on max-times and min-times algebras

In this paper, we examine one of the public key exchange protocols proposed in [M. I. Durcheva. An application of different dioids in public key cryptography. In AIP Conference Proceedings, vol. 1631, pp 336-343. AIP, 2014] which uses max-times and min-times algebras. We discuss properties of powers of matrices over these algebras and introduce a fast attack on this protocol.

Breaking Panther

Panther is a sponge-based lightweight authenticated encryption scheme published at Indocrypt 2021. Its round function is based on four Nonlinear Feedback Shift Registers (NFSRs). We show here that it is possible to fully recover the secret key of the construction using a single known plaintext-ciphertext pair and minimal computational resources. Furthermore, we show that in a known-ciphertext setting, an attacker with knowledge of a single ciphertext is able to decrypt all plaintext blocks except for the very first ones, and can forge the tag with only one call and probability one. As we demonstrate, the problem with the design comes mainly from the low number of iterations of the round function during the absorption phase. All of our attacks have been implemented and validated.

Better Steady than Speedy: Full break of SPEEDY-7-192

Differential attacks are among the most important families of cryptanalysis against symmetric primitives. Since their introduction in 1990, several improvements to the basic technique as well as many dedicated attacks against symmetric primitives have been proposed. Most of the proposed improvements concern the key-recovery part. However, when designing a new primitive, the security analysis regarding differential attacks is often limited to finding the best trails over a limited number of rounds with branch-and-bound techniques, after which a rough heuristic is applied to deduce the total number of rounds a differential attack could reach. In this work we analyze the security of the SPEEDY family of block ciphers against differential cryptanalysis and show how to optimize many of the steps of the key-recovery procedure for this type of attack. To this end, we implemented a search for optimal trails for this cipher and their associated multiple probabilities under some constraints, and applied non-trivial techniques to obtain optimal data- and key-sieving. This allowed us to fully break SPEEDY-7-192, the 7-round variant of SPEEDY supposed to provide 192-bit security. Our work demonstrates, among other things, the need to better understand the subtleties of differential cryptanalysis in order to get meaningful estimates of the security offered by a cipher against these attacks.

Folding Schemes with Selective Verification

In settings such as delegation of computation, where a prover performs computation as a service for many verifiers, it is important to amortize the prover's costs without increasing those of the verifier. We introduce folding schemes with selective verification. Such a scheme allows a prover to aggregate $m$ NP statements $x_i\in \mathcal{L}$ into a single statement $x\in\mathcal{L}$. Knowledge of a witness for $x$ implies knowledge of witnesses for all $m$ statements. Furthermore, each statement can be individually verified by asserting the validity of the aggregated statement and an individual proof $\pi$ whose size is sublinear in the number of aggregated statements. In particular, verification of statement $x_i$ does not require reading (or even knowing) all the aggregated statements. We demonstrate natural folding schemes for various languages: inner-product relations, vector and polynomial commitment openings, and the relaxed R1CS of Nova. All these constructions incur minimal overhead for the prover, comparable to simply reading the statements.

Self-Timed Masking: Implementing Masked S-Boxes Without Registers

Masking is one of the most used side-channel protection techniques. However, a secure masking scheme incurs additional implementation costs, e.g., randomness and transistor count. Furthermore, glitches and early evaluation can temporarily weaken a masked implementation in hardware, creating a potential source of exploitable leakage. Registers are generally used to mitigate these threats, at the price of increased area and latency.
In this work, we show how to design glitch-free masking without registers, with the help of dual-rail encoding and asynchronous logic. This methodology is used to implement low-latency masking with arbitrary protection order. Finally, we present a side-channel evaluation of our first- and second-order masked AES implementations.
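For context, the kind of masked gadget such designs compute can be sketched in software as a first-order (two-share) ISW-style AND. This functional sketch says nothing about glitches, which are precisely the hardware-level threat the paper's register-free, dual-rail approach addresses:

```python
# First-order Boolean masking: each bit is split into two XOR shares, and the
# ISW-style AND gadget computes a sharing of x & y using fresh randomness r.
import secrets

def mask(x):                          # Boolean (XOR) sharing of a bit
    r = secrets.randbits(1)
    return (r, x ^ r)

def masked_and(a, b):
    a0, a1 = a
    b0, b1 = b
    r = secrets.randbits(1)           # fresh randomness for the cross terms
    c0 = (a0 & b0) ^ r
    c1 = (a1 & b1) ^ r ^ (a0 & b1) ^ (a1 & b0)
    return (c0, c1)                   # c0 ^ c1 == (a0 ^ a1) & (b0 ^ b1)

def unmask(c):
    return c[0] ^ c[1]

for x in (0, 1):
    for y in (0, 1):
        assert unmask(masked_and(mask(x), mask(y))) == (x & y)
```

In hardware, the evaluation order of the XORs and the stability of intermediate wires matter for security, which is why registers (or, in this paper, self-timed dual-rail logic) are needed between share manipulations.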

End-to-End Secure Messaging with Traceability Only for Illegal Content

As end-to-end encrypted messaging services become widely adopted, law enforcement agencies have increasingly expressed concern that such services interfere with their ability to maintain public safety. Indeed, there is a direct tension between preserving user privacy and enabling content moderation on these platforms. Recent research has begun to address this tension, proposing systems that purport to strike a balance between the privacy of ''honest'' users and traceability of ''malicious'' users. Unfortunately, these systems suffer from a lack of protection against malicious or coerced service providers.
In this work, we address the privacy vs. content moderation question through the lens of pre-constrained cryptography [Ananth et al., ITCS 2022]. We introduce the notion of set pre-constrained (SPC) group signatures that guarantees security against malicious key generators. SPC group signatures offer the ability to trace users in messaging systems who originate pre-defined illegal content (such as child sexual abuse material), while providing security against malicious service providers.
We construct concretely efficient protocols for SPC group signatures, and demonstrate the real-world feasibility of our approach via an implementation. The starting point for our solution is the recently introduced Apple PSI system, which we significantly modify to improve security and expand functionality.

A Password-Based Access Control Framework for Time-Sequence Aware Media Cloudization

Time-sequence-based outsourcing places new and growing demands on access control in cloud computing. In this paper, we propose a practical password-based access control framework for media cloudization, relying on content control based on the time-sequence attribute and designed over prime-order groups. First, the scheme supports multi-keyword search in any monotonic boolean formula, and enables the media owner to control the content encryption key for different time periods using an updatable password. Second, the scheme supports self-retrievability of the content encryption key, which is more suitable for cloud-based media applications with massive numbers of users. We then show that the proposed scheme is provably secure in the standard model. Finally, a detailed performance evaluation shows that the proposed scheme is efficient and practical for cloud-based media applications.

Efficient Threshold FHE with Application to Real-Time Systems

Threshold Fully Homomorphic Encryption (ThFHE) enables arbitrary computation over encrypted data while keeping the decryption key distributed across multiple parties at all times. ThFHE is a key enabler for threshold cryptography and, more generally, secure distributed computing. Existing ThFHE schemes inherently require highly inefficient parameters and are unsuitable for practical deployment. In this paper, we take the first step towards making ThFHE practically usable by (i) proposing a novel ThFHE scheme with a new analysis resulting in significantly improved parameters; and (ii) providing the first ThFHE implementation benchmark based on Torus FHE.
• We propose the first ThFHE scheme with a polynomial modulus-to-noise ratio that supports practically efficient parameters while retaining provable security based on standard quantum-safe assumptions. We achieve this via a novel Rényi divergence-based security analysis of our proposed threshold decryption mechanism.
• We present a highly optimized software implementation of our proposed ThFHE scheme that builds upon the existing Torus FHE library and supports (distributed) decryption on highly resource-constrained ARM-based handheld devices. To the best of our knowledge, this is the first practically efficient implementation of any ThFHE scheme. Along the way, we implement several extensions to the Torus FHE library, including a Torus-based linear integer secret sharing subroutine to support ThFHE key sharing and distributed decryption for any threshold access structure.
We illustrate the efficacy of our proposal via an end-to-end use case involving encrypted computations over a real medical database, and distributed decryptions of the computed result on resource-constrained handheld devices.

AlgSAT --- a SAT Method for Search and Verification of Differential Characteristics from Algebraic Perspective

A good differential is a start for a successful differential attack. However, a differential might be invalid, i.e., there is no right pair following the differential, due to complicated contradictions that are hard to account for. In this paper, we present a novel and handy method to search for and verify a differential characteristic (DC), based on a recently proposed algebraic perspective on differential(-linear) cryptanalysis (CRYPTO 2021).
From this algebraic perspective, exact Boolean expressions of differentials over a cryptographic primitive can be conveniently established, thus verifying a given DC is naturally a Boolean satisfiability problem (SAT problem). With this observation, our approach simulates the round function of the target cipher symbolically and derives a set of Boolean equations in Algebraic Normal Form (ANF). These Boolean equations can be solved by off-the-shelf SAT solvers such as Bosphorus, which accept ANFs as their input.
To demonstrate the power of our new tool, we apply it to Gimli, Ascon, and Xoodoo. For Gimli, we improve the efficiency of searching for a valid 8-round colliding DC compared with the previous MILP model (CRYPTO 2020). Our approach takes about one minute to find a valid 8-round DC, while the previous MILP model could not find any such DC in practical time. Based on this DC, a practical semi-free-start collision attack on the intermediate 8-round Gimli-Hash is successfully mounted, i.e., a colliding message pair is found. For Ascon, we check several DCs reported at FSE 2021. First, we verify a 2-round DC used in the collision attack on Ascon-Hash by giving a right pair (such a right pair requires $2^{156}$ attempts to find by random search). Second, a 4-round differential used in the forgery attack on Ascon-128's iteration phase is proven invalid; as a result, the corresponding forgery attack is invalid, too. For Xoodoo, we verify tens of thousands of 3-round DCs and two 4-round DCs extended from the so-called differential trail cores found by the designers or by our search tool. We find that all of these DCs are valid, which demonstrates the sound independence of differential propagation over Xoodoo's round functions. Besides, as an independent interest, we develop a SAT-based automatic search toolkit called XoodooSat to search for 2-, 3-, and 4-round differential trail cores of Xoodoo. Our toolkit finds two more 3-round differential trail cores of weight 48 that were missed by the designers, which enhances the security analysis of Xoodoo.
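The notion of an invalid differential, i.e., one admitting no right pair, can be illustrated by brute force on a hypothetical toy S-box; the paper's tool instead encodes this check symbolically as ANF equations handed to a SAT solver:

```python
# For a toy 3-bit S-box, a right pair for the differential (din -> dout) is
# an input x with SBOX[x] ^ SBOX[x ^ din] == dout; some transitions have
# right pairs, others are impossible.
SBOX = [0, 1, 3, 6, 7, 4, 5, 2]       # hypothetical 3-bit permutation

def right_pairs(din, dout):
    return [x for x in range(8) if SBOX[x] ^ SBOX[x ^ din] == dout]

# The transition realized by the pair (0, 1) certainly has a right pair...
assert right_pairs(1, SBOX[0] ^ SBOX[1])
# ...but some output difference for din = 1 is never reached (an invalid
# one-round differential, detectable without enumerating inputs via SAT).
assert any(not right_pairs(1, d) for d in range(8))
```

For a full multi-round characteristic the search space is far too large to enumerate, which is why the validity question is phrased as Boolean satisfiability over the cipher's ANF.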

On the computational hardness needed for quantum cryptography

In the classical model of computation, it is well established that one-way functions (OWF) are minimal for computational cryptography: They are essential for almost any cryptographic application that cannot be realized with respect to computationally unbounded adversaries. In the quantum setting, however, OWFs appear not to be essential (Kretschmer 2021; Ananth et al., Morimae and Yamakawa 2022), and the question of whether such a minimal primitive exists remains open.
We consider EFI pairs — efficiently samplable, statistically far but computationally indistinguishable pairs of (mixed) quantum states. Building on the work of Yan (2022), which shows equivalence between EFI pairs and statistical commitment schemes, we show that EFI pairs are necessary for a large class of quantum-cryptographic applications. Specifically, we construct EFI pairs from minimalistic versions of commitment schemes, oblivious transfer, and general secure multiparty computation, as well as from quantum computational zero-knowledge (𝖰𝖢𝖹𝖪) proofs for essentially any non-trivial language. We also construct 𝖰𝖢𝖹𝖪 proofs for all of 𝖰𝖨𝖯 from any EFI pair.
This suggests that, for much of quantum cryptography, EFI pairs play a similar role to that played by OWFs in the classical setting: they are simple to describe, essential, and also serve as a linchpin for demonstrating equivalence between primitives.

Post-Quantum Hybrid KEMTLS Performance in Simulated and Real Network Environments

Adopting Post-Quantum Cryptography (PQC) in network protocols is a challenging subject. Larger PQC public keys and signatures can significantly slow the Transport Layer Security (TLS) protocol. In this context, KEMTLS is a promising approach that replaces the handshake signatures with PQC Key Encapsulation Mechanisms (KEMs), which have, in general, smaller sizes. However, for broad PQC adoption, hybrid cryptography has advantages over PQC-only approaches, mainly regarding confidence in the security of existing cryptographic schemes. This work brings hybrid cryptography to the KEMTLS and KEMTLS-PDK protocols. We analyze different network conditions and show that the penalty of Hybrid KEMTLS over PQC-only KEMTLS is minor at certain security levels. We also compare Hybrid KEMTLS with a hybrid version of PQTLS. Overall, the benefits of using hybrid protocols outweigh the slowdown penalties at higher security parameters, which encourages their use in practice.

On the Performance Gap of a Generic C Optimized Assembler and Wide Vector Extensions for Masked Software with an Ascon-$p$ test case

Efficient implementations of masked software designs constitute both an important goal and a significant challenge for security against Side-Channel Analysis (SCA) attacks. In this paper we discuss the shortfall between generic C implementations and optimized (inline-)assembly versions, while providing a large spectrum of efficient and generic masked implementations for any order, and demonstrate cryptographic algorithms and masking gadgets with reference to the state of the art. Our main goal is to show the prime performance gaps to expect between different implementations and to suggest how to harness the underlying hardware efficiently, a daunting task across masking orders and masking algorithms (multiplication, refreshing, etc.).
This paper focuses on implementations targeting wide vector bitsliced designs such as the ISAP algorithm. We explore concrete instances of implementations utilizing processors enabled by wide-vector capability extensions of the AMD64 Instruction Set Architecture (ISA); namely, the SSE2/3/4.1, AVX-2 and AVX-512 Streaming Single Instruction Multiple Data (SIMD) extensions. These extensions mainly enable efficient memory level parallelism and provide a gradual reduction in computation-time as a function of the level of extension and the hardware support for instruction-level parallelism. For the first time we provide a complete open-source repository of such gadgets tailored for these extensions, various gadgets types and for all orders.
We evaluate the disparities between $\mathit{generic}$ high-level-language masked implementations and optimized (inline-)assembly on conventional single-execution-path architectures such as ARM. We underscore the crucial trade-off between storing the state in data memory and keeping it in the register file (RF). This trade-off is specific to masked designs and is particularly difficult to resolve because it requires inline-assembly manipulation and is not natively supported by compilers. Moreover, as the masking order ($d$) increases and the state grows, data-memory read/write accesses for state handling must increase, since the RF is simply not large enough. This requires careful optimization, which depends to a considerable extent on the underlying algorithm.
We discuss how full utilization of SSE extensions is not always possible, i.e., when $d$ is not a power of two, and pinpoint both the optimal values of $d$ and the very sub-optimal values that aggressively under-utilize the hardware. More generally, this paper presents several fully generic masked implementations for any order, multiple highly optimized (inline-)assembly instances that remain quite generic (across a wide spectrum of ISAs and extensions), and very specific implementations targeting particular extensions. The goal is to promote open-source availability, research, improvement and implementation relating to SCA security and masked designs. The building blocks and methodologies provided here are portable and can easily be adapted to other algorithms.

The Security of Quasigroups Based Substitution Permutation Networks

The study of symmetric structures based on quasigroups is relatively new, and certain gaps can be found in the literature. In this paper we aim to fill one of these gaps: we study substitution permutation networks based on quasigroups whose permutation layers are non-linear relative to the quasigroup operation. We prove that for quasigroups isotopic with a group $\mathbb{G}$, the complexity of mounting a differential attack against this type of substitution permutation network is the same as attacking another symmetric structure based on $\mathbb{G}$. The resulting structure is interesting and new, and we hope that it will form the basis for future secure block ciphers.

Polynomial-Time Cryptanalysis of the Subspace Flooding Assumption for Post-Quantum $i\mathcal{O}$

Indistinguishability Obfuscation $(i\mathcal{O})$ is a highly versatile primitive implying a myriad of advanced cryptographic applications. Up until recently, the feasibility of $i\mathcal{O}$ was unclear; this changed with works (Jain-Lin-Sahai STOC 2021, Jain-Lin-Sahai Eurocrypt 2022) showing that $i\mathcal{O}$ can finally be based upon well-studied hardness assumptions. Unfortunately, one of these assumptions is broken in quantum polynomial time. Luckily, the line of work of Brakerski et al. Eurocrypt 2020, Gay-Pass STOC 2021, Wichs-Wee Eurocrypt 2021, Brakerski et al. ePrint 2021, and Devadas et al. TCC 2021 simultaneously created new pathways to construct $i\mathcal{O}$ with plausible post-quantum security from new assumptions, namely a new form of circular security of LWE in the presence of leakage. At the same time, effective cryptanalysis of this line of work has also begun to emerge (Hopkins et al. Crypto 2021).
It is important to identify the simplest possible conjectures that yield post-quantum $i\mathcal{O}$ and can be understood through known cryptanalytic tools. In that spirit, and in light of the cryptanalysis of Hopkins et al., recently Devadas et al. gave an elegant construction of $i\mathcal{O}$ from a fully-specified and simple-to-state assumption along with a thorough initial cryptanalysis.
Our work gives a polynomial-time distinguisher on their "final assumption" for their scheme. Our algorithm is extremely simple to describe: Solve a carefully designed linear system arising out of the assumption. The argument of correctness of our algorithm, however, is nontrivial.
We also analyze the "T-sum" version of the same assumption described by Devadas et al. and, under a reasonable conjecture, rule out the assumption for any value of $T$ that would imply $i\mathcal{O}$.

Threshold Signatures with Private Accountability

Existing threshold signature schemes come in two flavors: (i) fully private, where the signature reveals nothing about the set of signers that generated the signature, and (ii) accountable, where the signature completely identifies the set of signers. In this paper we propose a new type of threshold signature, called TAPS, that is a hybrid of privacy and accountability. A TAPS signature is fully private from the public's point of view. However, an entity that has a secret tracing key can trace a signature to the threshold of signers that generated it. A TAPS makes it possible for an organization to keep its inner workings private, while ensuring that signers are accountable for their actions. We construct a number of TAPS schemes. First, we present a generic construction that builds a TAPS from any accountable threshold signature. This generic construction is not efficient, and we next focus on efficient schemes based on standard assumptions. We build two efficient TAPS schemes (in the random oracle model) based on the Schnorr signature scheme. We conclude with a number of open problems relating to efficient TAPS.

TRIFORS: LINKable Trilinear Forms Ring Signature

We present TRIFORS (TRIlinear FOrms Ring Signature), a logarithmic post-quantum (linkable) ring signature based on a novel assumption regarding the equivalence of alternating trilinear forms. The basis of this work is the construction by Beullens, Katsumata and Pintore from Asiacrypt 2020 to obtain a linkable ring signature from a cryptographic group action. The group action on trilinear forms used here is the same as the one employed in the signature presented by Tang et al. at Eurocrypt 2022. We first define a sigma protocol that, given a set of public keys (the ring), allows one to prove knowledge of a secret key corresponding to a public key in the ring. Furthermore, some optimisations are used to reduce the size of the signature: among others, we use a novel application of the combinatorial number system to the space of challenges. Using the Fiat-Shamir transform, we obtain a (linkable) ring signature of competitive length with the state of the art among post-quantum proposals for security levels 128 and 192.
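The combinatorial number system mentioned above is a bijection between the integers in $[0, \binom{n}{k})$ and the $k$-element subsets of $\{0, \dots, n-1\}$, which lets a fixed-weight challenge be encoded as a single integer. A minimal sketch of the two directions of the map (toy parameters, not the paper's concrete encoding):

```python
from math import comb

def unrank(N, k):
    # combinatorial number system: map N in [0, C(n, k)) to the N-th
    # k-subset in colex order, greedily choosing the largest binomial
    subset = []
    for i in range(k, 0, -1):
        c = i - 1
        while comb(c + 1, i) <= N:
            c += 1
        subset.append(c)
        N -= comb(c, i)
    return subset[::-1]          # ascending k-subset of non-negative ints

def rank(subset):
    # inverse map: a k-subset (any order) back to its integer index
    return sum(comb(c, i + 1) for i, c in enumerate(sorted(subset)))
```

For example, with $n = 6$, $k = 3$, the indices $0, \dots, 19$ enumerate all $\binom{6}{3} = 20$ subsets exactly once.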

FPT: a Fixed-Point Accelerator for Torus Fully Homomorphic Encryption

Fully Homomorphic Encryption is a technique that allows computation on encrypted data. It has the potential to drastically change privacy considerations in the cloud, but high computational and memory overheads are preventing its broad adoption. TFHE is a promising Torus-based FHE scheme that heavily relies on bootstrapping, the noise-removal tool that must be invoked after every encrypted gate computation.
We present FPT, a Fixed-Point FPGA accelerator for TFHE bootstrapping. FPT is the first hardware accelerator to heavily exploit the inherent noise present in FHE calculations. Instead of double or single-precision floating-point arithmetic, it implements TFHE bootstrapping entirely with approximate fixed-point arithmetic. Using an in-depth analysis of noise propagation in bootstrapping FFT computations, FPT is able to use noise-trimmed fixed-point representations that are up to 50% smaller than prior implementations using floating-point or integer FFTs.
FPT's microarchitecture is built as a streaming processor inspired by traditional streaming DSPs: it instantiates high-throughput computational stages that are directly cascaded, with simplified control logic and routing networks. We explore different throughput-balanced compositions of streaming kernels with a user-configurable streaming width in order to construct a full bootstrapping pipeline. FPT's streaming approach allows 100% utilization of arithmetic units and requires only small bootstrapping key caches, enabling an entirely compute-bound bootstrapping throughput of 1 BS / 35$\mu$s. This is in stark contrast to the established classical CPU approach to FHE bootstrapping acceleration, which tends to be heavily memory and bandwidth-constrained.
FPT is fully implemented and evaluated as a bootstrapping FPGA kernel for an Alveo U280 datacenter accelerator card. FPT achieves almost three orders of magnitude higher bootstrapping throughput than existing CPU-based implementations, and 2.5$\times$ higher throughput compared to recent ASIC emulation experiments.

Division of Regulatory Power: Collaborative Regulation for Privacy-Preserving Blockchains

Recently, fast advances in decentralized blockchains have led to the rapid development of decentralized payment systems and finance. In decentralized anonymous payment systems such as Monero, Zerocash and Zether, payment amounts and traders' addresses are confidential to other users. Therefore, cryptocurrency may be used for illegal activities such as money laundering, bribery and blackmail. To solve this problem, some decentralized anonymous payment schemes supporting regulation have been proposed. Unfortunately, most solutions place no restriction on the regulator's power, which may lead to abuse of power and disclosure of privacy. In this paper, we propose a decentralized anonymous payment scheme supporting collaborative regulation. Different from existing solutions, our scheme prevents abuse of power by dividing the regulatory power between two regulatory authorities. These two regulatory authorities, namely Filter and Supervisor, can cooperate to recover payment amounts and traders' addresses from suspicious transactions. However, neither Filter nor Supervisor alone can decode transactions to obtain transaction privacy. Our scheme enjoys three major advantages over others: 1) we design a generic transaction structure using zk-SNARKs; 2) we divide regulatory power using a regulation label; and 3) we achieve aggregability of transaction amounts using an amount label. The experimental results show that the time cost of generating a transaction is about 11 s and the transaction fee is about 12,183k gas, which is acceptable.

Vortex : Building a Lattice-based SNARK scheme with Transparent Setup

We present the first transparent and plausibly post-quantum SNARK relying on the Ring Short Integer Solution problem (Ring-SIS), a well-known assumption from lattice-based cryptography. At its core, our proof system relies on a new linear-commitment scheme named Vortex, inspired by the work of Orion and Brakedown. Vortex uses a hash function based on Ring-SIS derived from "SWIFFT" (Lyubashevsky et al., FSE 2008). We take advantage of the linear structure of this particular hash function to craft an efficient self-recursion technique. Although Vortex proofs have size $O(\sqrt{n})$ in the witness size $n$, we show how our self-recursion technique can be used to build a SNARK scheme based on Vortex. The resulting SNARK works over any field with reasonably large 2-adicity (also known as FFT-friendly fields). Moreover, we introduce Wizard-IOP, an extension of the concept of polynomial-IOP. Working with Wizard-IOPs rather than separate polynomial-IOPs provides us with a strong tool for handling the wide class of queries needed to prove the correct execution of complex state machines (e.g., a zk-EVM, our use case) efficiently and conveniently.

MinRoot: Candidate Sequential Function for Ethereum VDF

We present a candidate sequential function for a VDF protocol to be used within the Ethereum ecosystem. The new function, called MinRoot, is an optimized iterative algebraic transformation and is a strict improvement over competitors VeeDo and Sloth++. We analyze various attacks on sequentiality and suggest weakened versions for public scrutiny. We also announce bounties on certain research directions in cryptanalysis.

Hardness of Approximation for Stochastic Problems via Interactive Oracle Proofs

Hardness of approximation aims to establish lower bounds on the approximability of optimization problems in NP and beyond. We continue the study of hardness of approximation for problems beyond NP, specifically for \emph{stochastic} constraint satisfaction problems (SCSPs). An SCSP with $k$ alternations is a list of constraints over variables grouped into $2k$ blocks, where each constraint has constant arity.
An assignment to the SCSP is defined by two players who alternate in setting values to a designated block of variables, with one player choosing their assignments uniformly at random and the other player trying to maximize the number of satisfied constraints.
In this paper, we establish hardness of approximation for SCSPs based on interactive proofs. For $k \leq O(\log n)$, we prove that it is $AM[k]$-hard to approximate, to within a constant, the value of SCSPs with $k$ alternations and constant arity. Before, this was known only for $k = O(1)$.
Furthermore, we introduce a natural class of $k$-round interactive proofs, denoted $IR[k]$ (for \emph{interactive reducibility}), and show that several protocols (e.g., the sumcheck protocol) are in $IR[k]$. Using this notion, we extend our inapproximability to all values of $k$: we show that for every $k$, approximating an SCSP instance with $O(k)$ alternations and constant arity is $IR[k]$-hard.
While hardness of approximation for CSPs is achieved by constructing suitable PCPs, our results for SCSPs are achieved by constructing suitable IOPs (interactive oracle proofs). We show that every language in $AM[k \leq O(\log n)]$ or in $IR[k]$ has an $O(k)$-round IOP whose verifier has \emph{constant} query complexity (\emph{regardless} of the number of rounds $k$). In particular, we derive a ``sumcheck protocol'' whose verifier reads $O(1)$ bits from the entire interaction transcript.
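The sumcheck protocol referenced above can be made concrete with a minimal generic instance over a toy prime field: the prover claims the sum of a multilinear polynomial $g$ over the Boolean hypercube, and each round reduces the claim via one univariate restriction. This is a textbook-style sketch with the honest prover and the verifier's round checks folded into one loop, not the constant-query IOP variant constructed in the paper.

```python
import itertools
import random

P = 2**61 - 1  # toy prime field

def sumcheck(g, n):
    # g: a function F_P^n -> F_P that is multilinear in each variable.
    # Returns True when the (honest) prover convinces the verifier that
    # sum over {0,1}^n of g equals the claimed value.
    claim = sum(g(*x) for x in itertools.product((0, 1), repeat=n)) % P
    rs = []
    for i in range(n):
        # prover: partial sums with variable i free, later variables on the cube
        g0 = g1 = 0
        for t in itertools.product((0, 1), repeat=n - i - 1):
            g0 = (g0 + g(*rs, 0, *t)) % P
            g1 = (g1 + g(*rs, 1, *t)) % P
        # verifier: round-consistency check, then a fresh random challenge
        if (g0 + g1) % P != claim:
            return False
        r = random.randrange(P)
        claim = (g0 * (1 - r) + g1 * r) % P  # degree-1 interpolation at r
        rs.append(r)
    # final check: a single oracle evaluation of g at the random point
    return g(*rs) % P == claim
```

The verifier's work per round is constant apart from the final oracle evaluation, which is the feature the IOP compilation above exploits.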

A Toolbox for Barriers on Interactive Oracle Proofs

Interactive oracle proofs (IOPs) are a proof system model that combines features of interactive proofs (IPs) and probabilistically checkable proofs (PCPs). IOPs have prominent applications in complexity theory and cryptography, most notably to constructing succinct arguments.
In this work, we study the limitations of IOPs, as well as their relation to those of PCPs. We present a versatile toolbox of IOP-to-IOP transformations containing tools for: (i) length and round reduction; (ii) improving completeness; and (iii) derandomization.
We use this toolbox to establish several barriers for IOPs:
-- Low-error IOPs can be transformed into low-error PCPs. In other words, interaction can be used to construct low-error PCPs; alternatively, low-error IOPs are as hard to construct as low-error PCPs. This relates IOPs to PCPs in the regime of the sliding scale conjecture for inverse-polynomial soundness error.
-- Limitations of quasilinear-size IOPs for 3SAT with small soundness error.
-- Limitations of IOPs where query complexity is much smaller than round complexity.
-- Limitations of binary-alphabet constant-query IOPs.
We believe that our toolbox will prove useful to establish additional barriers beyond our work.

Cryptography with Weights: MPC, Encryption and Signatures

The security of several cryptosystems rests on the trust assumption that a certain fraction of the parties are honest. This trust assumption has enabled a diverse range of cryptographic applications such as secure multiparty computation, threshold encryption, and threshold signatures. However, current and emerging practical use cases suggest that this one-person-one-vote paradigm is outdated.
In this work, we consider {\em weighted} cryptosystems where every party is assigned a certain weight and the trust assumption is that a certain fraction of the total weight is honest. This setting can be translated to the standard setting (where each party has a unit weight) via virtualization. However, this method is quite expensive, incurring a multiplicative overhead in the weight.
We present new weighted cryptosystems with significantly better efficiency. Specifically, our proposed schemes incur only an {\em additive} overhead in weights.
\begin{itemize}
\item We first present a weighted ramp secret-sharing scheme where the size of the secret share is as short as $O(w)$ (where $w$ corresponds to the weight). In comparison, Shamir's secret sharing with virtualization requires secret shares of size $w\cdot\lambda$, where $\lambda=\log |\mathbb{F}|$ is the security parameter.
\item Next, we use our weighted secret-sharing scheme to construct weighted versions of (semi-honest) secure multiparty computation (MPC), threshold encryption, and threshold signatures. All these schemes inherit the efficiency of our secret sharing scheme and incur only an additive overhead in the weights.
\end{itemize}
Our weighted secret-sharing scheme is based on the Chinese remainder theorem. Interestingly, this secret-sharing scheme is {\em non-linear} and only achieves statistical privacy. These distinct features introduce several technical hurdles in applications to MPC and threshold cryptosystems. We resolve these challenges by developing several new ideas.
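The Chinese-remainder flavour of such a scheme can be illustrated with a classical Asmuth-Bloom-style sharing, where a share is a residue modulo a party-specific modulus (so a larger modulus naturally corresponds to more "weight" in bits). The parameters below are toys chosen to satisfy the Asmuth-Bloom condition; this is an illustrative sketch, not the paper's construction:

```python
import math
import secrets

def share(secret, p0, moduli, t):
    # Asmuth-Bloom-style sharing: moduli pairwise coprime and increasing,
    # with prod(moduli[:t]) > p0 * prod(moduli[-(t-1):]). secret in [0, p0).
    assert 0 <= secret < p0
    N = math.prod(moduli[:t])
    alpha = secrets.randbelow(N // p0)
    y = secret + alpha * p0          # lift the secret; note y < N
    return [y % m for m in moduli]

def reconstruct(shares, p0, moduli, idx):
    # CRT over any subset idx of at least t shares recovers y, then the secret
    M = math.prod(moduli[i] for i in idx)
    y = 0
    for i in idx:
        Mi = M // moduli[i]
        y += shares[i] * Mi * pow(Mi, -1, moduli[i])
    return (y % M) % p0
```

Note the non-linearity the abstract mentions: shares are residues of a randomized lift, not evaluations of a linear map, and privacy is only statistical in the choice of $\alpha$.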

VERI-ZEXE: Decentralized Private Computation with Universal Setup

Traditional blockchain systems execute program state transitions on-chain, requiring each network node participating in state-machine replication to re-compute every step of the program when validating transactions. This limits both scalability and privacy. Recently, Bowe et al. introduced a primitive called decentralized private computation (DPC) and provided an instantiation called ZEXE, which allows users to execute arbitrary computations off-chain without revealing the program logic to the network. Moreover, transaction validation takes only constant time, independent of the off-chain computation. However, ZEXE required a separate trusted setup for each application, which is highly impractical. Prior attempts to remove this per-application setup incurred significant performance loss.
We propose a new DPC instantiation VERI-ZEXE that is highly efficient and requires only a single universal setup to support an arbitrary number of applications. Our benchmark improves the state-of-the-art by 9x in transaction generation time and by 3.4x in memory usage. Along the way, we also design efficient gadgets for variable-base multi-scalar multiplication and modular arithmetic within the plonk constraint system, leading to a Plonk verifier gadget using only ∼ 21k plonk constraints.
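Variable-base multi-scalar multiplication, the operation the gadget above arithmetizes, is typically computed with the bucket (Pippenger) method. The sketch below runs over a toy additive group (integers modulo a prime, so "adding a point" is modular addition) to make the windowing and bucket-aggregation logic visible without elliptic-curve arithmetic; it is illustrative only and unrelated to the paper's circuit.

```python
Q = 2**31 - 1  # toy additive group: integers mod Q

def msm(scalars, points, c=4, q=Q):
    # bucket (Pippenger) method for sum(s_i * P_i): process c-bit windows
    # from the top; within a window, group points by digit into buckets and
    # aggregate with running sums instead of multiplying by each digit.
    maxbits = max(s.bit_length() for s in scalars) or 1
    acc = 0
    for w in reversed(range(0, maxbits, c)):
        acc = acc * (1 << c) % q              # c "doublings" of the accumulator
        buckets = [0] * (1 << c)
        for s, p in zip(scalars, points):
            d = (s >> w) & ((1 << c) - 1)
            if d:
                buckets[d] = (buckets[d] + p) % q
        running = window_sum = 0
        for d in reversed(range(1, 1 << c)):
            running = (running + buckets[d]) % q     # sum of buckets >= d
            window_sum = (window_sum + running) % q  # adds buckets[d] d times
        acc = (acc + window_sum) % q
    return acc
```

The two running sums in the inner loop are what make the window cost independent of the digit values, which is also what an in-circuit gadget must capture.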

Enhancing Ring-LWE Hardness using Dedekind Index Theorem

In this work we extend the known pseudorandomness of Ring-LWE (RLWE) to be based on ideal lattices of non-Dedekind domains. In earlier works of Lyubashevsky et al. (EUROCRYPT 2010) and Peikert et al. (STOC 2017), the hardness of RLWE was based on ideal lattices of rings of integers of number fields, which are known to be Dedekind domains. While these works extended Regev's (STOC 2005) quantum polynomial-time reduction for LWE, thus allowing more efficient and more structured cryptosystems, the additional algebraic structure of ideals of Dedekind domains leaves open the possibility that such ideal lattices are not as hard as general lattices.
We now show that for any number field $\mathbb{Q}[X]/(f(X))$ and any prime integer $p$ such that the factorization of $f(X)$ modulo $p$ passes the Dedekind index theorem criterion (which holds for almost all $p$), we can define $p$-power RLWE in the polynomial ring $\mathbb{Z}[X]/(f(X))$ itself and base its hardness on the hardness of ideal lattices of this ring. This ring can potentially be a strict sub-ring of the ring of integers of the field, and hence fail to be a Dedekind domain. We also give natural examples, and prove that certain ideals require at least three generators, as opposed to the two that suffice for Dedekind domains. Such rings also do not satisfy many other algebraic properties of Dedekind domains, such as ideal invertibility. Our proof technique is novel, as it builds an algebraic theory for general such rings that also includes cyclotomic rings.

Syndrome Decoding in the Head: Shorter Signatures from Zero-Knowledge Proofs

Zero-knowledge proofs of knowledge are useful tools to design signature schemes. The ongoing effort to build a quantum computer urges the cryptography community to develop new secure cryptographic protocols based on quantum-hard cryptographic problems. One of the few directions is code-based cryptography, for which the strongest problem is syndrome decoding (SD) for random linear codes. This problem is known to be NP-hard, and the cryptanalysis state of the art has been stable for many years. A zero-knowledge protocol for this problem was pioneered by Stern in 1993. Since its publication, many articles have proposed optimizations, implementations, or variants.
In this paper, we introduce a new zero-knowledge proof for the syndrome decoding problem on random linear codes. Instead of using permutations like most of the existing protocols, we rely on the MPC-in-the-head paradigm, in which we reduce the task of proving the low Hamming weight of the SD solution to proving some relations between specific polynomials. Specifically, we propose a 5-round zero-knowledge protocol that proves the knowledge of a vector $x$ such that $y=Hx$ and $\operatorname{wt}(x)\leq w$ and which achieves a soundness error close to $1/N$ for an arbitrary $N$.
While turning this protocol into a signature scheme, we achieve a signature size of 11-12 KB for 128-bit security when relying on the hardness of the SD problem on binary fields. Using larger fields (like $\mathbb{F}_{2^8}$), we can produce fast signatures of around 8 KB. This allows us to outperform Picnic3 and to be competitive with SPHINCS+, both post-quantum signature candidates in the ongoing NIST standardization effort. Moreover, our scheme outperforms all the existing code-based signature schemes for the common "signature size + public key size" metric.
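For orientation, the NP relation being proven in zero knowledge above is simply knowledge of $x$ with $y = Hx$ over $\mathbb{F}_2$ and $\operatorname{wt}(x) \leq w$. A toy instance generator and witness check, with bit-mask arithmetic standing in for $\mathbb{F}_2$ linear algebra (illustrative parameters far below cryptographic sizes):

```python
import secrets

def sd_instance(n=16, k=8, w=3):
    # random parity-check matrix H (rows as n-bit masks) and a weight-w x
    H = [secrets.randbits(n) for _ in range(n - k)]
    x = 0
    while bin(x).count('1') < w:
        x |= 1 << secrets.randbelow(n)
    y = [bin(row & x).count('1') % 2 for row in H]  # y = Hx over F_2
    return H, x, y

def verify_witness(H, x, y, w):
    # the NP relation: wt(x) <= w and Hx = y
    return bin(x).count('1') <= w and all(
        bin(row & x).count('1') % 2 == yi for row, yi in zip(H, y))
```

The linear part ($y = Hx$) is cheap to prove; the entire difficulty, and the point of the polynomial technique above, is the weight constraint.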

Zero-Knowledge Protocols for the Subset Sum Problem from MPC-in-the-Head with Rejection

We propose zero-knowledge arguments for the modular subset sum problem. Given a set of integers, this problem asks whether a subset adds up to a given integer $t$ modulo a given integer $q$. This NP-complete problem has been considered since the 1980s as an interesting alternative in cryptography to hardness assumptions based on number theory, and it is in particular believed to provide post-quantum security. Previous combinatorial approaches, notably one due to Shamir, yield arguments with cubic communication complexity (in the security parameter). More recent methods, based on the MPC-in-the-head technique, also produce arguments with cubic communication complexity.
We improve this approach by using a secret-sharing over small integers (rather than modulo $q$) to reduce the size of the arguments and remove the prime modulus restriction. Since this sharing may reveal information on the secret subset, we introduce the idea of rejection to the MPC-in-the-head paradigm. Special care has to be taken to balance completeness and soundness and preserve zero-knowledge of our arguments. We combine this idea with two techniques to prove that the secret vector (which selects the subset) is well made of binary coordinates. Our new techniques have the significant advantage to result in arguments of size independent of the modulus $q$.
Our new protocols for the subset sum problem achieve an asymptotic improvement by producing arguments of quadratic size (versus cubic size for previous proposals). This improvement is also practical: for a 256-bit modulus $q$, the best variant of our protocols yields 13KB arguments, while previous proposals gave 1180KB arguments for the best general protocol and 122KB for the best protocol restricted to prime moduli. Our techniques can also be applied to vectorial variants of the subset sum problem, and in particular the inhomogeneous short integer solution (ISIS) problem, for which they provide an efficient alternative to state-of-the-art protocols when the underlying ring is not small and NTT-friendly. We also show the application of our protocol to build efficient zero-knowledge arguments of plaintext and/or key knowledge in the context of fully-homomorphic encryption. When applied to the TFHE scheme, the obtained arguments are more than 20 times smaller than those obtained with previous protocols. Finally, we use our technique to construct an efficient digital signature scheme based on a pseudo-random function due to Boneh-Halevi-Howgrave-Graham.
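The rejection idea can be seen in miniature in a single masked opening: a secret $s \in [0, B)$ is masked by a uniform $r \in [0, A)$ with $A \gg B$, and $z = s + r$ is revealed only when it lands in a window whose conditional distribution does not depend on $s$. This is a generic flooding-with-rejection sketch, not the paper's full sharing:

```python
import secrets

def masked_opening(s, B, A):
    # s in [0, B); mask r uniform in [0, A). Reveal z = s + r only when
    # z lies in [B, A): conditioned on acceptance, z is uniform on [B, A)
    # regardless of s, so the opening leaks nothing about the secret.
    assert 0 <= s < B < A
    while True:
        r = secrets.randbelow(A)
        z = s + r
        if B <= z < A:
            return z, r
```

The acceptance probability is $(A - B)/A$, so making $A$ larger trades argument size (larger shares) against fewer restarts; calibrating this trade-off while preserving soundness is the balancing act the abstract refers to.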

Shared Permutation for Syndrome Decoding: New Zero-Knowledge Protocol and Code-Based Signature

Zero-knowledge proofs are an important tool for many cryptographic protocols and applications. The threat of a coming quantum computer motivates the research for new zero-knowledge proof techniques for (or based on) post-quantum cryptographic problems. One of the few directions is code-based cryptography for which the strongest problem is the syndrome decoding (SD) of random linear codes. This problem is known to be NP-hard and the cryptanalysis state of affairs has been stable for many years. A zero-knowledge protocol for this problem was pioneered by Stern in 1993. As a simple public-coin three-round protocol, it can be converted to a post-quantum signature scheme through the famous Fiat-Shamir transform. The main drawback of this protocol is its high soundness error of $2/3$, meaning that it should be repeated $\approx 1.7\lambda$ times to reach a $\lambda$-bit security.
In this paper, we improve this three-decade-old state of affairs by introducing a new zero-knowledge proof for the syndrome decoding problem on random linear codes. Our protocol achieves a soundness error of $1/n$ for an arbitrary $n$ in complexity $O(n)$. Our construction requires the verifier to trust some of the variables sent by the prover, which can be ensured through a cut-and-choose approach. We provide an optimized version of our zero-knowledge protocol which achieves arbitrary soundness through parallel repetitions and a merged cut-and-choose phase. While turning this protocol into a signature scheme, we achieve a signature size of 17 KB for 128-bit security. This represents a significant improvement over previous constructions based on the syndrome decoding problem for random linear codes.
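The cost of parallel repetition can be made concrete: pushing a per-round soundness error $\varepsilon$ down to $2^{-\lambda}$ takes $\lceil \lambda / \log_2(1/\varepsilon) \rceil$ repetitions. A two-line check of the figures above (the $1/n$ protocol instantiated with $n = 256$ purely for illustration):

```python
import math

def repetitions(eps: float, lam: int) -> int:
    # smallest t with eps**t <= 2**-lam
    return math.ceil(lam / -math.log2(eps))

# Stern's protocol (soundness error 2/3): about 1.71 * lambda repetitions
assert repetitions(2/3, 128) == 219
# a soundness error of 1/n with n = 256 needs only 16 repetitions
assert repetitions(1/256, 128) == 16
```

This is why lowering the soundness error from $2/3$ to $1/n$ translates directly into much shorter signatures, even though each repetition is individually more expensive.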

Finding Collisions for Round-Reduced Romulus-H

Romulus-H is a hash function that currently competes as a finalist in the NIST Lightweight Cryptography competition. It is based on the Hirose DBL construction which is provably secure when used with an ideal block cipher. However, in practice, ideal block ciphers can only be approximated. The security of concrete instantiations must be cryptanalyzed carefully; the security margin may be higher or lower than in the secret-key setting. So far, the Hirose DBL construction has been studied with only a few other block ciphers, like IDEA and AES. However, Romulus-H uses Hirose DBL with the SKINNY block cipher where no dedicated analysis has been published so far.
In this work, we present the first third-party analysis of Romulus-H. We propose a new framework for finding collisions in hash functions based on the Hirose DBL construction. This is in contrast to previous work that only focused on free-start collisions. Our framework is based on the idea of joint differential characteristics which capture the relationship between the two block cipher calls in the Hirose DBL construction. To identify good joint differential characteristics, we propose a combination of a MILP and CP model. Then, we use these characteristics in another CP model to find collisions. Finally, we apply this framework to Romulus-H and find practical collisions of the hash function for 10 out of 40 rounds and practical semi-free-start collisions up to 14 rounds.

A New Post-Quantum Key Agreement Protocol and Derived Cryptosystem Based on Rectangular Matrices

In this paper, we present an original algorithm to generate session keys and a subsequent generalized ElGamal-type cryptosystem. The scheme presented here has been designed to prevent both linear and brute-force attacks using rectangular matrices and to achieve high complexity. Our algorithm includes a new generalized Diffie-Hellman scheme based on rectangular matrices and polynomial field operations. Two variants are presented: the first with a double exchange between the parties, and the second with a single exchange, thus speeding up the generation of session keys.

Tight Preimage Resistance of the Sponge Construction

The cryptographic sponge is a popular method for hash function design. In the ideal permutation model, the construction is proven to be indifferentiable from a random oracle up to the birthday bound in the capacity of the sponge. This result in particular implies that, as long as the attack complexity does not exceed this bound, the sponge construction achieves a comparable level of collision, preimage, and second preimage resistance as a random oracle. We investigate these state-of-the-art bounds in detail, and observe that while the collision and second preimage security bounds are tight, the preimage bound is not. We derive an improved and tight preimage security bound for the cryptographic sponge construction.
The result has direct implications for various lightweight cryptographic hash functions. For example, the NIST Lightweight Cryptography finalist Ascon-Hash generically achieves not just the claimed $2^{128}$ preimage security, but in fact $2^{192}$ preimage security. Comparable improvements are obtained for the modes of Spongent, PHOTON, ACE, Subterranean 2.0, and QUARK, among others.
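For orientation, the sponge mode absorbs rate-sized message blocks into a $b$-bit state and squeezes output from the rate part, while the $c = b - r$ capacity bits are never exposed; the bounds discussed above are functions of $c$. A toy instance with an 8-byte state and a throwaway permutation (not a real hash design, and far below real rate/capacity sizes):

```python
RATE, CAP = 4, 4  # bytes: 32-bit rate, 32-bit capacity

def toy_perm(state: bytes) -> bytes:
    # throwaway invertible mixing on 64 bits (odd-constant multiply
    # plus xorshift); NOT a real permutation design
    x = int.from_bytes(state, 'big')
    for _ in range(3):
        x = (x * 0x9E3779B97F4A7C15) & (2**64 - 1)
        x ^= x >> 31
    return x.to_bytes(8, 'big')

def sponge_hash(msg: bytes, outlen: int) -> bytes:
    # simplified 10* padding to a multiple of RATE
    msg = msg + b'\x80' + b'\x00' * ((-len(msg) - 1) % RATE)
    state = bytes(RATE + CAP)
    # absorb: xor each block into the rate part, then permute
    for i in range(0, len(msg), RATE):
        block = msg[i:i + RATE] + bytes(CAP)
        state = toy_perm(bytes(a ^ b for a, b in zip(state, block)))
    # squeeze: output the rate part, permuting between output blocks
    out = b''
    while len(out) < outlen:
        out += state[:RATE]
        state = toy_perm(state)
    return out[:outlen]
```

Because only the rate bytes are ever emitted, the capacity bytes are the quantity an attacker must guess, which is why the generic security bounds are stated in terms of $c$.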

Temporary Block Withholding Attacks on Filecoin's Expected Consensus

As of 28 January 2022, Filecoin is the largest storage-oriented cryptocurrency by market capitalization. In this system, miners dedicate their storage space to the network and verify transactions to earn rewards. Filecoin's network capacity has now surpassed 15 exbibytes.
In this paper, we propose three temporary block withholding attacks to challenge Filecoin's expected consensus (EC). Specifically, we first deconstruct EC using established analysis methods (widely developed since 2009) to identify the advantages and disadvantages of EC's design. We then present three temporary block withholding schemes that leverage the shortcomings of EC. We build Markov Decision Process (MDP) models for the three attacks to calculate the adversary's gains. We develop Monte Carlo simulators to mimic the mining strategies of the adversary and other miners, and quantify the expected impact of the three attacks. As a result, we show that our three attacks significantly affect Filecoin's mining fairness and transaction throughput. For instance, when honest miners who control more than half the global storage power assemble their tipsets after the default transmission cutoff time, an adversary with 1% of the global storage power is able to launch temporary block withholding attacks without a loss in revenue, which is rare in existing blockchains. Finally, we discuss the implications of our attacks and propose several countermeasures to mitigate them.

Analyzing the Leakage Resistance of the NIST's Lightweight Crypto Competition's Finalists

We investigate the security of the NIST Lightweight Crypto Competition's finalists against side-channel attacks. We start with a mode-level analysis that allows us to put forward three candidates (Ascon, ISAP and Romulus-T) that stand out for their leakage properties and do not require uniformly protecting all of their computations with (expensive) implementation-level countermeasures. We then implement these finalists and evaluate their respective performances. Our results confirm the interest of so-called leveled implementations (where only the key derivation and tag generation require security against differential power analysis). They also suggest that these algorithms differ more in their qualitative features (e.g., two-pass designs that improve confidentiality under decryption leakage vs. one-pass designs, flexible overheads thanks to masking vs. fully mode-level, easier-to-implement schemes) than in their quantitative features, which all improve over the AES and are quite sensitive to security margins against cryptanalysis.

The Random Fault Model

In this work, we introduce a more advanced fault adversary inspired by the random probing model, called the random fault model, in which the adversary can fault all values in the algorithm but the probability of each fault occurring is limited. The new adversary model is used to evaluate the security of side-channel and fault countermeasures such as Boolean masking, inner product masking, error detection techniques, error correction techniques, multiplicative tags, and shuffling methods. The security analysis reveals novel insights, including: error correction provides little security when faults can target many bits; the order between masking and duplication provides a trade-off between side-channel and fault security; and inner product masking and multiplicative masking provide protection exponential in the field size. Moreover, the analysis explains the experimental results from CHES 2022 and uncovers weaknesses in the shuffling method from SAMOS 2021.

Characteristic Automated Search of Cryptographic Algorithms for Distinguishing Attacks (CASCADA)

Automated search methods based on Satisfiability Modulo Theories (SMT) problems are being widely used to evaluate the security of block ciphers against distinguishing attacks. While these methods provide a systematic and generic methodology, most of their software implementations are limited to a small set of ciphers and attacks, and extending these implementations requires significant effort and expertise.
In this work we present CASCADA, an open-source Python library to evaluate the security of cryptographic primitives, especially block ciphers, against distinguishing attacks with bit-vector SMT solvers. CASCADA implements the bit-vector property framework proposed herein and several SMT-based automated search methods to evaluate the security of ciphers against differential, related-key differential, rotational-XOR, impossible-differential, impossible-rotational-XOR, related-key impossible-differential, linear and zero-correlation cryptanalysis. The library is the result of a substantial engineering effort and provides extensive functionality, a modular design, thorough documentation and a complete suite of tests.

Two-Client Inner-Product Functional Encryption, with an Application to Money-Laundering Detection

In this paper, we extend Inner-Product Functional Encryption (IPFE), where there is just a vector in the key and a vector in the single sender's ciphertext, to two-client ciphertexts. More precisely, in our two-client functional encryption scheme, there are two data providers who can independently encrypt vectors $\mathbf{x}$ and $\mathbf{y}$ for a data consumer who can, from a functional decryption key associated to a vector $\mathbf{\alpha}$, compute $\sum \alpha_i x_i y_i = \mathbf{x} \cdot \mathsf{Diag}(\mathbf{\alpha}) \cdot \mathbf{y}^\top$. Ciphertexts are linear in the dimension of the vectors, whereas the functional decryption keys are of constant size.
We study two interesting particular cases:
- 2-party Inner-Product Functional Encryption, with $\mathbf{\alpha}= (1,\ldots,1)$. There is a unique functional decryption key, which enables the computation of $\mathbf{x}\cdot \mathbf{y}^\top$ by a third party, where $\mathbf{x}$ and $\mathbf{y}$ are provided by two independent clients;
- Inner-Product Functional Encryption with a Selector, with $\mathbf{x}= \mathbf{x}_0 \| \mathbf{x}_1$ and $\mathbf{y}= \bar{b}^n \| b^n \in \{ 1^n \| 0^n, 0^n \| 1^n \}$, for some bit $b$, on the public coefficients $\mathbf{\alpha} = \mathbf{\alpha}_0 \| \mathbf{\alpha}_1$, in the functional decryption key, so that one gets $\mathbf{x}_b \cdot \mathbf{\alpha}_b^\top$, where $\mathbf{x}$ and $b$ are provided by two independent clients.
This result is based on the fundamental Product-Preserving Lemma, which is of independent interest. It exploits Dual Pairing Vector Spaces (DPVS), with security proofs under the $\mathsf{SXDH}$ assumption.
We provide two practical applications: medical diagnosis for the latter IPFE with Selector, and money-laundering detection for the former 2-party IPFE. Both offer strong privacy properties, with adaptive security, and the use of labels grants the scheme Multi-Client Functional Encryption (MCFE) security, enabling its use in practical situations.
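Setting the cryptography aside, the functionality the data consumer evaluates is just a weighted inner product. A plain-Python sketch of the plaintext computation and of the two special cases above (the scheme itself, of course, operates on ciphertexts):

```python
# Plaintext functionality of the two-client scheme (no encryption here):
# the consumer holding a key for alpha learns sum_i alpha_i * x_i * y_i
# and nothing else about x and y.
def weighted_inner_product(alpha, x, y):
    assert len(alpha) == len(x) == len(y)
    return sum(a * xi * yi for a, xi, yi in zip(alpha, x, y))

# 2-party IPFE special case: alpha = (1, ..., 1) yields the plain x . y.
def inner_product(x, y):
    return weighted_inner_product([1] * len(x), x, y)

# IPFE-with-selector special case: y = b̄^n || b^n selects one half of x,
# so the consumer learns x_b . alpha_b for the bit b chosen by the client.
def selector(x0, x1, alpha0, alpha1, b):
    n = len(x0)
    x = x0 + x1
    y = ([1 - b] * n) + ([b] * n)
    alpha = alpha0 + alpha1
    return weighted_inner_product(alpha, x, y)
```

For instance, `selector(x0, x1, alpha0, alpha1, 1)` returns exactly `inner_product(x1, alpha1)`, matching the claimed output $\mathbf{x}_b \cdot \mathbf{\alpha}_b^\top$.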

Algorithms for switching between block-wise and arithmetic masking

Ensuring the required security level of information systems against adversaries with access to side-channel data (differential power analysis being a striking example of such a threat) has become increasingly relevant in recent years. An effective protection method against side-channel attacks is masking all intermediate variables used in the algorithm with random values. Many algorithms, however, use masking of different kinds, for example Boolean, byte-wise, and arithmetic, so the problem of switching between masking of different kinds arises. Switching between Boolean and arithmetic masking is well studied, while no solutions have been proposed for switching between masking of other kinds. This article recalls the requirements for switching algorithms and presents algorithms for switching between block-wise and arithmetic masking, which includes the case of switching between byte-wise and arithmetic masking.
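For readers unfamiliar with the masking kinds involved, the sketch below only illustrates the share representations (Boolean, full-width arithmetic, and per-byte arithmetic) and checks that each recombines to the original value. It is deliberately not a secure switching algorithm: recombining shares in straight-line code is exactly what side-channel-secure conversions (e.g., Goubin-style Boolean-to-arithmetic switching) must avoid.

```python
# Share representations only -- recombining like this would defeat the
# countermeasure on a real device; it is shown purely to fix notation.
import secrets

K = 32
MASK = (1 << K) - 1

def boolean_mask(x):
    r = secrets.randbits(K)
    return x ^ r, r                  # Boolean masking: x = x' XOR r

def arithmetic_mask(x):
    r = secrets.randbits(K)
    return (x - r) & MASK, r         # arithmetic masking: x = (A + r) mod 2^K

def bytewise_mask(xbytes):
    # One additive mask per byte: b_i = (B_i + r_i) mod 256 -- masking of a
    # different "kind" than the full-width arithmetic masking above.
    rs = [secrets.randbits(8) for _ in xbytes]
    return [(b - r) % 256 for b, r in zip(xbytes, rs)], rs

x = 0xDEADBEEF
xp, r1 = boolean_mask(x)
A, r2 = arithmetic_mask(x)
assert (xp ^ r1) == x
assert (A + r2) & MASK == x
```

A switching algorithm must convert one representation into another without ever computing the unmasked value `x` in an intermediate variable.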

Enhanced pqsigRM: Code-Based Digital Signature Scheme with Short Signature and Fast Verification for Post-Quantum Cryptography

We present a novel code-based digital signature scheme, called enhanced pqsigRM for post-quantum cryptography (PQC).
This scheme is based on a modified Reed--Muller (RM) code, which reduces the signature size and verification time compared with existing code-based signature schemes.
In fact, it strengthens pqsigRM submitted to NIST for post-quantum cryptography standardization.
The proposed scheme has the advantages of a short signature size and fast verification, and uses public codes that are more difficult to distinguish from random codes.
We use $(U,U+V)$-codes with a high-dimensional hull to overcome the disadvantages of code-based schemes.
The proposed decoder samples from coset elements with small Hamming weight for any given syndrome and efficiently finds such an element.
Using a modified RM code, the proposed signature scheme resists various known attacks on RM-code-based cryptography.
It has advantages on signature size, verification time, and proven security.
For 128 bits of classical security, the signature size of the proposed scheme is 512 bytes, which is 1/4.7 of that of CRYSTALS-DILITHIUM, and the median number of verification cycles is 1,717,336, about five times that of CRYSTALS-DILITHIUM.

Cryptography with Certified Deletion

We propose a new, unifying framework that yields an array of cryptographic primitives with certified deletion. These primitives enable a party in possession of a quantum ciphertext to generate a classical certificate that the encrypted plaintext has been information-theoretically deleted, and cannot be recovered even given unbounded computational resources.
- For $X \in \{\mathsf{public}\text{-}\mathsf{key},\mathsf{attribute\text{-}based},\mathsf{fully\text{-}homomorphic},\mathsf{witness},\mathsf{timed}\text{-}\mathsf{release}\}$, our compiler converts any (post-quantum) $X$ encryption to $X$ encryption with certified deletion.
In addition, we compile statistically-binding commitments to statistically-binding commitments with certified everlasting hiding. As a corollary, we also obtain statistically-sound zero-knowledge proofs for QMA with certified everlasting zero-knowledge assuming statistically-binding commitments.
- We also obtain a strong form of everlasting security for two-party and multi-party computation in the dishonest majority setting. While simultaneously achieving everlasting security against all parties in this setting is known to be impossible, we introduce everlasting security transfer (EST). This enables any one party (or a subset of parties) to dynamically and certifiably information-theoretically delete other participants' data after protocol execution.
We construct general-purpose secure computation with EST assuming statistically-binding commitments, which can be based on one-way functions or pseudorandom quantum states.
We obtain our results by developing a novel proof technique to argue that a bit $b$ has been information-theoretically deleted from an adversary's view once they output a valid deletion certificate, despite having been previously information-theoretically determined by the ciphertext they held in their view. This technique may be of independent interest.

Efficient Linkable Ring Signature from Vector Commitment inexplicably named Multratug

In this paper we revise the ideas of our previous work ‘Lin2-Xor Lemma and Log-size Linkable Threshold Ring Signature’ and introduce another lemma called Lin2-Choice, which extends the Lin2-Xor lemma. Using the Lin2-Choice lemma we create a compact general-purpose trusted-setup-free log-size linkable threshold ring signature with a strong security model. The signature size is 2log(n+1)+3l+1, where n is the ring size and l is the threshold. It is composed of several public-coin arguments that are special honest-verifier zero-knowledge and have computational witness-extended emulation. We use an arbitrary vector commitment argument as the base building block, making it possible to use any concrete implementation that has the above properties. We also present an extended version of our signature, of size 2log(n+l+1)+6l+4, that simultaneously proves the sum of the hidden amounts attached to the signing keys, i.e., proves the balance. All of this is in a prime-order group without bilinear pairings in which the decisional Diffie-Hellman assumption holds.

On the Adaptive Security of the Threshold BLS Signature Scheme

Threshold signatures are a crucial tool for many distributed protocols. As shown by Cachin, Kursawe, and Shoup (PODC '00), schemes with unique signatures are of particular importance, as they allow distributed coin flipping to be implemented very efficiently and without any timing assumptions. This makes them an ideal building block for (inherently randomized) asynchronous consensus protocols. The threshold BLS signature of Boldyreva (PKC '03) is both unique and very compact, but unfortunately lacks a security proof against adaptive adversaries. Thus, current consensus protocols either rely on less efficient alternatives or are not adaptively secure. In this work, we revisit the security of the threshold BLS signature by showing the following results, assuming $t$ adaptive corruptions:
- We give a modular security proof that follows a two-step approach: 1) We introduce a new security notion for distributed key generation protocols (DKG). We show that it is satisfied by several protocols that previously only had a static security proof. 2) Assuming any DKG protocol with this property, we then prove unforgeability of the threshold BLS scheme. Our reductions are tight and can be used to substantiate real-world parameter choices.
- To justify our use of strong assumptions such as the algebraic group model (AGM) and the hardness of one-more-discrete logarithm (OMDL), we prove two impossibility results: 1) Without the AGM, we rule out a natural class of tight security reductions from $(t+1)$-OMDL. 2) Even in the AGM, a strong interactive assumption is required in order to prove the scheme secure.

Scan, Shuffle, Rescan: Machine-Assisted Election Audits With Untrusted Scanners

We introduce a new way to conduct election audits using untrusted scanners. Post-election audits perform statistical hypothesis testing to confirm election outcomes. However, existing approaches are costly and laborious for close elections---often the most important cases to audit---requiring extensive hand inspection of ballots. We instead propose automated consistency checks, augmented by manual checks of only a small number of ballots. Our protocols scan each ballot twice, shuffling the ballots between scans: a ``two-scan'' approach inspired by two-prover proof systems. We show that this gives strong statistical guarantees even for close elections, provided that (1) the permutation accomplished by the shuffle is unknown to the scanners and (2) the scanners cannot reliably identify a particular ballot among others cast for the same candidate. Our techniques drastically reduce the time, expense, and labor of auditing close elections, which we hope will promote wider deployment.
We present three rescan audit protocols and analyze their statistical guarantees. We first present a simple scheme illustrating our basic idea in a simplified two-candidate setting. We then extend this scheme to support (1) more than two candidates; (2) processing of ballots in batches; and (3) imperfect scanners, as long as scanning errors are too infrequent to affect the election outcome. Our proposals require manual handling or inspection of 10--100 ballots per batch in a variety of settings, in contrast to existing techniques that require hand inspecting many more ballots in close elections. Unlike prior techniques that depend on the relative margin of victory, our protocols are to our knowledge the first to depend on the absolute margin, and give meaningful guarantees even for extremely close elections: e.g., absolute margins of tens or hundreds of votes.

WOTSwana: A Generalized Sleeve Construction for Multiple Proofs of Ownership

The $\mathcal{S}_{leeve}$ construction proposed by Chaum et al. (ACNS'21) introduces an extra security layer for digital wallets by allowing users to generate a "back up key" securely nested inside the secret key of a signature scheme, i.e., ECDSA. The "back up key", which is secret, can be used to issue a "proof of ownership", i.e., only the real owner of this secret key can generate a single proof, which is based on the WOTS+ signature scheme. The authors of $\mathcal{S}_{leeve}$ proposed the formal technique for a single proof of ownership, and only informally outlined a construction to generalize it to multiple proofs. This work identifies drawbacks in their proposed construction, e.g., varying signature size and signing/verification computational complexity, and the limitation to a linear construction. We therefore introduce WOTSwana, a generalization of $\mathcal{S}_{leeve}$: an extra security layer that generates multiple proofs of ownership. We put forth a thorough formalization of two constructions: (1) one given by a linear concatenation of numerous WOTS+ private/public keys, and (2) one based on a tree-like structure, i.e., an underlying Merkle tree whose leaves are WOTS+ private/public key pairs. Furthermore, we present a security analysis for multiple proofs of ownership, showing that this work addresses the aforementioned drawbacks of the original construction. In particular, we extend the original security definition for $\mathcal{S}_{leeve}$. Finally, we illustrate an alternative application of our construction by discussing the creation of an encrypted group chat messaging application.

What Can Cryptography Do For Decentralized Mechanism Design?

Recent works of Roughgarden (EC'21) and Chung and Shi (SODA'23) initiate the study of a new decentralized mechanism design problem called transaction fee mechanism design (TFM). Unlike the classical mechanism design literature, in the decentralized environment, even the auctioneer (i.e., the miner) can be a strategic player, and it can even collude with a subset of the users facilitated by binding side contracts. Chung and Shi showed two main impossibility results that rule out the existence of a dream TFM. First, any TFM that provides incentive compatibility for individual users and miner-user coalitions must always have zero miner revenue, no matter whether the block size is finite or infinite. Second, assuming finite block size, no non-trivial TFM can simultaneously provide incentive compatibility for any individual user and for any miner-user coalition.
In this work, we explore what new models and meaningful relaxations can allow us to circumvent the impossibility results of Chung and Shi. Besides today’s model that does not employ cryptography, we introduce a new MPC-assisted model where the TFM is implemented by a joint multi-party computation (MPC) protocol among the miners. We prove several feasibility and infeasibility results for achieving strict and approximate incentive compatibility, respectively, in the plain model as well as the MPC-assisted model. We show that while cryptography is not a panacea, it indeed allows us to overcome some impossibility results pertaining to the plain model, leading to non-trivial mechanisms with useful guarantees that are otherwise impossible in the plain model. Our work is also the first to characterize the mathematical landscape of transaction fee mechanism design under approximate incentive compatibility, as well as in a cryptography-assisted model.

Plactic signatures (insecure?)

Plactic signatures use the plactic monoid (semistandard tableaux with Knuth’s associative multiplication) and full-domain hashing (SHAKE). Monico found an attack that likely renders plactic signatures insecure.

A New Higher Order Differential of RAGHAV

RAGHAV is a 64-bit block cipher proposed by Bansod in 2021. It supports 80- and 128-bit secret keys. The designer evaluated its security against typical attacks, such as differential cryptanalysis and linear cryptanalysis. However, the security of RAGHAV against higher-order differential attacks, one of the algebraic attacks, has not been reported. In this paper, we apply higher-order differential cryptanalysis to RAGHAV. As a result, we find a new full-round higher-order characteristic of RAGHAV using a first-order differential. Exploiting this characteristic, we also show that full-round RAGHAV is vulnerable to a distinguishing attack with 2 chosen plaintexts.
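As background, a $d$-th order differential of a function is the XOR of its outputs over a $d$-dimensional affine subspace of inputs; for a function of algebraic degree less than $d$ this sum is always zero, and for degree exactly $d$ it is a constant, which is what such distinguishers exploit. A toy Boolean-function demonstration (not RAGHAV itself):

```python
# Toy illustration of higher-order differentials (not RAGHAV).
from itertools import combinations
from functools import reduce

def derivative(f, x, basis):
    """XOR of f over the affine subspace x + span(basis) (basis lin. indep.)."""
    total = 0
    for r in range(len(basis) + 1):
        for subset in combinations(basis, r):
            total ^= f(x ^ reduce(lambda a, b: a ^ b, subset, 0))
    return total

# Degree-2 Boolean function on 8-bit inputs: the AND of the two low bits.
f = lambda x: (x & 1) & ((x >> 1) & 1)

# Its 2nd-order derivative along bits 0 and 1 is the constant 1 for all x,
# and its 3rd-order derivative vanishes everywhere (degree 2 < 3).
assert all(derivative(f, x, [1, 2]) == 1 for x in range(256))
assert all(derivative(f, x, [1, 2, 4]) == 0 for x in range(256))
```

A higher-order differential attack finds such a subspace of plaintexts for which the corresponding sum of (partially decrypted) ciphertexts is predictable, distinguishing the cipher from a random permutation.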

Anonymous Tokens with Hidden Metadata Bit from Algebraic MACs

On the one hand, the web needs to be secured from malicious activities such as bots or DoS attacks; on the other hand, such needs ideally should not justify services tracking people's activities on the web. Anonymous tokens provide a nice tradeoff between allowing an issuer to ensure that a user has been vetted and protecting the users' privacy. However, in some cases, whether or not a token is issued reveals a lot of information to an adversary about the strategies used to distinguish honest users from bots or attackers.
In this work, we focus on designing an anonymous token protocol between a client and an issuer (also a verifier) that enables the issuer to support its fraud detection mechanisms while preserving users' privacy. This is done by allowing the issuer to embed a hidden (from the client) metadata bit into the tokens. We first study an existing protocol from CRYPTO 2020 which is an extension of Privacy Pass from PoPETs 2018; that protocol aimed to provide support for a hidden metadata bit, but provided a somewhat restricted security notion. We demonstrate a new attack, showing that this is a weakness of the protocol, not just the definition. In particular, the metadata bit hiding is weak in the setting where the attacker can redeem some tokens and get feedback on what bit is extracted.
We then revisit the formalism of anonymous tokens with a private metadata bit, consider the more natural notion, and design a scheme which achieves it. In order to design this new secure protocol, we base our construction on algebraic MACs instead of PRFs. Our security definitions capture a realistic threat model where adversaries could, through direct feedback or side channels, learn the embedded bit when the token is redeemed. Finally, we compare our protocol with one of the CRYPTO 2020 protocols and obtain a 20\% more efficient implementation.

flookup: Fractional decomposition-based lookups in quasi-linear time independent of table size

We present a protocol for checking that the values of a committed polynomial $\phi(X)$ over a multiplicative subgroup $H\subset \mathbb{F}$ of size $m$ are contained in a table $T\in \mathbb{F}^N$. After an $O(N \log^2 N)$ preprocessing step, the prover algorithm runs in *quasilinear* time $O(m\log ^2 m)$.
We improve upon the recent breakthrough results Caulk [ZBK+22] and Caulk+ [PK22], which were the first to achieve complexity sublinear in the full table size $N$, with prover times of $O(m^2+m\log N)$ and $O(m^2)$, respectively.
We pose further improving this complexity to $O(m\log m)$ as the next important milestone for efficient zk-SNARK lookups.

cuXCMP: CUDA-Accelerated Private Comparison Based on Homomorphic Encryption

Private comparison schemes constructed on homomorphic encryption offer non-interactive, output-expressive and parallelizable features, and have advantages in communication bandwidth and performance. In this paper, we propose cuXCMP, which allows negative and float inputs, offers a fully output-expressive feature, and is more extensible and practical compared to XCMP (AsiaCCS 2018). Meanwhile, we introduce several memory-centric optimizations of the constant term extraction kernel tailored for CUDA-enabled GPUs. First, we fully utilize the shared memory and present compact GPU implementations of NTT and INTT using a single block. Second, we fuse multiple kernels into one AKS kernel, which conducts the automorphism and key-switching operations, and reduce the grid dimension for better resource usage, data access rate and synchronization. Third, we precisely measure the IO latency and choose an appropriate number of CUDA streams to enable concurrent execution of independent operations, yielding a constant term extraction kernel with perfect latency hiding, i.e., CTX. Combining these approaches, we reduce the overall execution time to an optimal level, and the speedup ratio increases with the comparison scale. For one comparison, we speed up AKS by 23.71×, CTX by 15.58×, and the whole scheme by 1.83× (resp. 18.29×, 11.75×, and 1.42×) compared to the C (resp. AVX512) baselines. For 32 comparisons, our CTX and scheme implementations outperform the C (resp. AVX512) baselines by 112.00× and 1.99× (resp. 81.53× and 1.51×).
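The NTT/INTT kernels mentioned above implement the standard number-theoretic transform used throughout lattice-based homomorphic encryption. A pure-Python radix-2 version (toy modulus, no GPU considerations) shows the underlying math:

```python
# Toy radix-2 NTT / inverse NTT over Z_q -- illustrates the transform the
# cuXCMP kernels accelerate; real schemes use much larger NTT-friendly primes.
q = 17   # small prime with 8 | q - 1, so an 8-point NTT exists mod q
n = 8
# Find a primitive n-th root of unity mod q.
g = next(w for w in range(2, q)
         if pow(w, n, q) == 1 and all(pow(w, k, q) != 1 for k in range(1, n)))

def ntt(a, root):
    """Cooley-Tukey decimation-in-time transform with the given root."""
    if len(a) == 1:
        return a[:]
    even = ntt(a[0::2], root * root % q)
    odd = ntt(a[1::2], root * root % q)
    half, out, w = len(a) // 2, [0] * len(a), 1
    for k in range(half):
        t = w * odd[k] % q
        out[k] = (even[k] + t) % q
        out[k + half] = (even[k] - t) % q
        w = w * root % q
    return out

def intt(a):
    """Inverse transform: NTT with the inverse root, scaled by n^{-1} mod q."""
    n_inv = pow(len(a), q - 2, q)
    return [x * n_inv % q for x in ntt(a, pow(g, q - 2, q))]

vec = [1, 2, 3, 4, 0, 0, 0, 0]
assert intt(ntt(vec, g)) == vec
```

The GPU work described above is about mapping exactly these butterfly stages onto shared memory within a single thread block, not about changing the math.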

Another Round of Breaking and Making Quantum Money: How to Not Build It from Lattices, and More

Public verification of quantum money has been one of the central objects in quantum cryptography ever since Wiesner's pioneering idea of using quantum mechanics to construct banknotes against counterfeiting. So far, we do not know any publicly-verifiable quantum money scheme that is provably secure from standard assumptions.
In this work, we provide both negative and positive results for publicly verifiable quantum money.
- In the first part, we give a general theorem showing that a certain natural class of quantum money schemes from lattices cannot be secure. We use this theorem to break the recent quantum money scheme of Khesin, Lu, and Shor.
- In the second part, we propose a framework for building quantum money and quantum lightning which we call invariant money, abstracting some of the ideas of quantum money from knots by Farhi et al. (ITCS'12). In addition to formalizing this framework, we provide concrete hard computational problems loosely inspired by classical knowledge-of-exponent assumptions, whose hardness would imply the security of quantum lightning, a strengthening of quantum money where not even the bank can duplicate banknotes.
- We discuss potential instantiations of our framework, including an oracle construction using cryptographic group actions and instantiations from rerandomizable functional encryption, isogenies over elliptic curves, and knots.

The Performance Analysis of Post-Quantum Cryptography for Vehicular Communications

To avoid attacks from quantum computing (QC), this study applies post-quantum cryptography (PQC) methods without hidden subgroups to the security of vehicular communications. Given that lattice-based cryptography is the mainstream PQC technology, the standard digital signature methods Dilithium and Falcon are discussed and compared. This study uses a queueing model to analyze the performance of these standard digital signature methods for selection decision-making.

Witness-Succinct Universally-Composable SNARKs

Zero-knowledge Succinct Non-interactive ARguments of Knowledge (zkSNARKs) are becoming an increasingly fundamental tool in many real-world applications where proof compactness is of the utmost importance, including blockchains. A proof of security for SNARKs in the Universal Composability (UC) framework (Canetti, FOCS'01) would rule out devastating malleability attacks. To retain security of SNARKs in the UC model, one must show simulation-extractability with a knowledge extractor that is both black-box and straight-line, which implies that proofs generated by honest provers are non-malleable. However, existing simulation-extractability results on SNARKs either lack some of these properties, or have to sacrifice witness succinctness to prove UC security.
In this paper, we provide a compiler lifting any simulation-extractable NIZKAoK into a UC-secure one in the global random oracle model, importantly, while preserving the same level of witness succinctness. Combining this with existing zkSNARKs, we achieve, to the best of our knowledge, the first zkSNARKs simultaneously achieving UC security and constant-sized proofs.

Updatable Encryption from Group Actions

Updatable Encryption (UE) allows rotating the encryption key in the outsourced storage setting while minimizing the bandwidth used. The server can update ciphertexts to the new key using a token provided by the client. UE schemes should provide strong confidentiality guarantees against an adversary that can corrupt keys and tokens.
This paper studies the problem of building UE in the group action framework. We introduce a new notion of Mappable Effective Group Action (MEGA) and show that we can build CCA-secure UE from a MEGA by generalizing the SHINE construction of Boyd et al. from Crypto 2020. Unfortunately, we do not know how to instantiate this new construction in the post-quantum setting; doing so would solve the open problem of building a CCA-secure post-quantum UE scheme.
Isogeny-based group actions are the most studied post-quantum group actions. Unfortunately, the resulting group actions are not mappable. We show that we can still build UE from isogenies by introducing a new algebraic structure called Effective Triple Orbital Group Action (ETOGA). We prove that UE can be built from an ETOGA and show how to instantiate this abstract structure from isogeny-based group actions. This new construction solves two open problems in ciphertext-independent post-quantum UE. First, it is the first post-quantum UE scheme that supports an unbounded number of updates. Second, our isogeny-based UE scheme is the first post-quantum UE scheme not based on lattices.

Secure Auctions in the Presence of Rational Adversaries

Sealed bid auctions are used to allocate a resource among a set of interested parties. Traditionally, auctions need the presence of a trusted auctioneer to whom the bidders provide their private bid values. Existence of such a trusted party is not an assumption easily realized in practice. Generic secure computation protocols can be used to remove a trusted party. However, generic techniques result in inefficient protocols, and typically do not provide fairness - that is, a corrupt party can learn the output and abort the protocol thereby preventing other parties from learning the output.
At CRYPTO 2009, Miltersen, Nielsen and Triandopoulos [MNT09] introduced the problem of building auctions that are secure against rational bidders. Such parties are modelled as self-interested agents who care more about maximizing their utility than about learning information about the bids of other agents. To realize this, they put forth a novel notion of information utility and introduce a game-theoretic framework that helps analyse protocols while taking into account both information utility and monetary utility. Unfortunately, their construction makes use of a generic MPC protocol and, consequently, the authors do not analyze its concrete efficiency.
In this work, we construct the first concretely efficient and provably secure protocol for First Price Auctions in the rational setting. Our protocol guarantees privacy, public verifiability and fairness. Inspired by [MNT09], we put forth a solution concept that we call Privacy Enhanced Computational Weakly Dominant Strategy Equilibrium, which captures parties' privacy and monetary concerns in the game-theoretic context, and show that our protocol realizes it. We believe this notion to be of independent interest. Our protocol is crafted specifically for the use case of auctions, is simple, and uses off-the-shelf cryptographic components.
Executing our auction protocol on commodity hardware with 30 bidders, with bids of length 10, our protocol runs to completion in 0.429s and has total communication of 82KB.

Secret Sharing for Generic Access Structures

Sharing a secret efficiently amongst a group of participants is not easy, since there is always an adversary or eavesdropper trying to retrieve the secret. In secret sharing schemes, every participant is given a unique share. When an authorized group of participants comes together and provides its shares, the secret is obtained; for other combinations of shares, a garbage value is returned. A threshold secret sharing scheme was proposed by Shamir and Blakley independently. In such an $(n,t)$ threshold secret sharing scheme, the secret can be obtained when at least $t$ out of $n$ participants contribute their shares. This paper proposes a novel algorithm that reveals the secret only to the subsets of participants belonging to the access structure. The scheme implements totally generalized ideal secret sharing. Unlike threshold secret sharing schemes, it reveals the secret only to the authorized sets of participants, not to any arbitrary set of users with cardinality greater than or equal to $t$. Since any access structure can be realized with this scheme, it can be exploited to implement various access priorities and access control mechanisms. A major advantage of this scheme over existing ones is that the shares distributed to the participants are totally independent of the secret being shared. Hence, no restrictions are imposed on the scheme, and it finds wider use in real-world applications.
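For concreteness, the Shamir $(n,t)$ threshold scheme referenced above can be sketched in a few lines: the secret is the constant term of a random degree-$(t-1)$ polynomial, and any $t$ shares recover it by Lagrange interpolation at $x = 0$. This is illustrative only; a production implementation would need a cryptographic RNG and constant-time field arithmetic.

```python
# Minimal Shamir (n, t) secret sharing over a prime field -- demo quality.
import random  # fine for a demo; use the secrets module in practice

P = 2**61 - 1  # Mersenne prime field

def share(secret, n, t):
    """Split secret into n shares such that any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, n=5, t=3)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[2:5]) == 123456789
```

Fewer than $t$ shares are consistent with every possible secret, which is the information-theoretic privacy property; realizing a general (non-threshold) access structure, as the paper proposes, requires going beyond this single-polynomial construction.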

Efficient Methods for Implementation of Generalized Access Structures

The recent advent of cloud computing and IoT has made it imperative to store huge amounts of data on cloud servers. Enormous amounts of data are also stored on the servers of big organizations, where it is not desirable for every member to have equal privileges to access the stored data. Threshold secret sharing schemes are widely used for implementing such access control mechanisms. The access privileges of the members also vary from one data packet to another. When such dynamic access structures are implemented using threshold secret sharing schemes, the key management becomes too complex for real-time use. Furthermore, storing the users' access privileges requires exponential space ($O(2^n)$), and its implementation requires exponential time. In this paper, the algorithms proposed to tackle the problems of priority-based access control and authentication require $O(n^2)$ space and $O(n)$ time, where $n$ is the number of users. In practical scenarios, such space can easily be provided on the servers, and the algorithms run smoothly in real time.

Clarion: Anonymous Communication from Multiparty Shuffling Protocols

This paper studies the role of multiparty shuffling protocols in enabling more efficient metadata-hiding communication. We show that the process of shuffling messages can be expedited by having servers collaboratively shuffle and verify secret-shares of messages, instead of using a conventional mixnet approach where servers take turns performing independent verifiable shuffles of user messages. We apply this technique to achieve both practical and asymptotic improvements in anonymous broadcast and messaging systems. We first show how to build a three-server anonymous broadcast scheme, secure against one malicious server, that relies only on symmetric cryptography. Next, we adapt our three-server broadcast scheme to a $k$-server scheme secure against $k-1$ malicious servers, at the cost of a more expensive per-shuffle preprocessing phase. Finally, we show how our scheme can be used to significantly improve the performance of the MCMix anonymous messaging system.
We implement our shuffling protocol in a system called Clarion and find that it outperforms a mixnet made up of a sequence of verifiable (single-server) shuffles by $9.2\times$ for broadcasting small messages and outperforms the MCMix conversation protocol by $11.8\times$.
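The basic invariant behind shuffling secret-shares, namely that applying one common permutation to every server's share vector and then re-randomizing yields a fresh sharing of the permuted messages, can be illustrated with a toy single-process simulation. This models the data flow only; it is not Clarion's actual multiparty protocol and includes no verification.

```python
import random

MOD = 2**32  # toy message space: 32-bit additive shares

def share(msgs, k=3):
    """Additively share each message among k servers (mod 2^32)."""
    shares = [[] for _ in range(k)]
    for m in msgs:
        parts = [random.randrange(MOD) for _ in range(k - 1)]
        parts.append((m - sum(parts)) % MOD)
        for s, p in zip(shares, parts):
            s.append(p)
    return shares

def shuffle_and_rerandomize(shares):
    """Apply one common permutation to every share vector, then refresh
    the sharing with fresh shares of zero so old and new shares are
    unlinkable."""
    k, n = len(shares), len(shares[0])
    perm = list(range(n))
    random.shuffle(perm)
    out = [[sv[p] for p in perm] for sv in shares]
    for i in range(n):  # re-randomize position i with a sharing of zero
        zs = [random.randrange(MOD) for _ in range(k - 1)]
        zs.append((-sum(zs)) % MOD)
        for s, z in zip(out, zs):
            s[i] = (s[i] + z) % MOD
    return out

def reconstruct(shares):
    return [sum(col) % MOD for col in zip(*shares)]
```

In the real protocol the permutation and re-randomization are distributed across servers so that no single server learns the full mapping; the simulation only shows why the reconstructed multiset of messages is preserved.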

Throughput Limitation of the Off-chain Payment Networks

Off-chain payment channels were introduced as one solution to the blockchain scalability problem. These channels form a network, and parties must lock funds to create them. A channel is expected to route only a limited number of transactions before it becomes unbalanced, i.e., all of its funds are devoted to one of the parties. Since an on-chain transaction is often necessary to establish, rebalance, or close a channel, the off-chain network is bounded by the throughput of the blockchain.
In this paper, we propose a mathematical model that formulates the limitation on the throughput of an off-chain payment network. As a case study, we show the limitation of the Lightning Network in comparison with popular banking systems. Our results show that, theoretically, the throughput of the Lightning Network can reach the order of 10,000 transactions per second, close to the average throughput of centralized banking systems.
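The flavor of such a bound can be sketched as follows: if every channel consumes a few on-chain transactions over its lifetime (open, rebalance, close) and routes a bounded number of off-chain payments before it must be refreshed, then the network's sustainable off-chain rate is capped by a multiple of the blockchain's rate. The numbers below are hypothetical back-of-envelope inputs, not the paper's exact model.

```python
def offchain_throughput_bound(onchain_tps, offchain_txs_per_channel,
                              onchain_txs_per_channel):
    """Upper bound on steady-state off-chain payments per second:
    every channel eventually needs on-chain transactions, so channel
    turnover is rate-limited by the blockchain itself."""
    return onchain_tps * offchain_txs_per_channel / onchain_txs_per_channel

# Hypothetical inputs: ~7 on-chain tx/s, a channel routing ~5000 payments
# over its lifetime, and 2 on-chain txs per channel (open + close):
print(offchain_throughput_bound(7, 5000, 2))  # 17500.0 payments/s
```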

Range Search over Encrypted Multi-Attribute Data

This work addresses expressive queries over encrypted data by presenting the first systematic study of multi-attribute range search on a symmetrically encrypted database outsourced to an honest-but-curious server. Prior work includes a thorough analysis of single-attribute range search schemes (e.g., Demertzis et al. 2016) and a proposed high-level approach for multi-attribute schemes (De Capitani di Vimercati et al. 2021). We first introduce a flexible framework for building secure range search schemes over multiple attributes (dimensions) by adapting a broad class of geometric search data structures to operate on encrypted data. Our framework encompasses widely used data structures such as multi-dimensional range trees and quadtrees, and has strong security properties that we formally prove. We then develop six concrete, highly parallelizable range search schemes within our framework that offer a sliding scale of efficiency and security tradeoffs to suit the needs of the application. We evaluate our schemes with a formal complexity and security analysis, a prototype implementation, and an experimental evaluation on real-world datasets.
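A standard ingredient of such tree-based constructions is the canonical-cover property of range trees: any query range decomposes into $O(\log n)$ tree nodes, so the server only needs to look up encrypted entries for those nodes rather than every matching record. A toy sketch of the plaintext decomposition (illustrative only, not the paper's encrypted scheme):

```python
def canonical_cover(lo, hi, n):
    """Return the O(log n) canonical nodes of a complete binary tree over
    leaves [0, n) that exactly cover the query range [lo, hi).
    Nodes are (level, index) pairs; level 0 is the leaves and a node at
    level k covers 2**k leaves. n must be a power of two."""
    cover = []
    def visit(level, index, l, r):
        if lo <= l and r <= hi:          # node fully inside the query
            cover.append((level, index))
        elif l < hi and lo < r:          # node partially overlaps: recurse
            mid = (l + r) // 2
            visit(level - 1, 2 * index, l, mid)
            visit(level - 1, 2 * index + 1, mid, r)
    visit(n.bit_length() - 1, 0, 0, n)
    return cover
```

For example, `canonical_cover(2, 6, 8)` covers leaves 2..5 with just the two level-1 nodes `(1, 1)` and `(1, 2)`.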

Large-Scale Non-Interactive Threshold Cryptosystems in the YOSO Model

A $(t,n)$-public key threshold cryptosystem allows distributing the execution of a cryptographic task among a set of $n$ parties by splitting the secret key required for the computation into $n$ shares. A subset of at least $t+1$ honest parties is required to execute the task of the cryptosystem correctly, while security is guaranteed as long as at most $t < \frac{n}{2}$ parties are corrupted. Unfortunately, traditional threshold cryptosystems do not scale well when executed at large scale (e.g., in an Internet-wide environment). In such settings, a possible approach is to select a subset of $n$ players (called a committee) out of the entire universe of $N\gg n$ parties to run the protocol. If done naively, however, this means that the adversary's corruption power does not scale with $N$, as otherwise the adversary would be able to corrupt the entire committee. A beautiful solution to this problem is given by Benhamouda et al. (TCC 2020), who present a novel form of secret sharing where the efficiency of the protocol is \emph{independent} of $N$ but the adversarial corruption power \emph{scales} with $N$ (a.k.a. a fully mobile adversary). They achieve this through a novel mechanism that guarantees that parties in a committee stay anonymous -- also referred to as the YOSO (You Only Speak Once) model -- until they start to interact within the protocol.
In this work, we initiate the study of large-scale threshold cryptography in the YOSO model of communication. We formalize and present novel protocols for distributed key generation, threshold encryption, and signature schemes that guarantee security in large-scale environments. A key challenge in our analysis is that we cannot use the secret sharing protocol of Benhamouda et al. as a black-box to construct our schemes, and instead we require a more generalized version, which may be of independent interest. Finally, we show how our protocols can be concretely instantiated in the YOSO model, and discuss interesting applications of our schemes.

Classic McEliece Key Generation on RAM constrained devices

Classic McEliece is a code-based encryption scheme and a candidate in the NIST post-quantum standardization process. Implementing Classic McEliece on smart card chips is a challenge, because those chips have only a very limited amount of RAM. Decryption is not an issue, because the cryptogram is short and the decryption algorithm can be implemented using very little RAM. Key generation, however, is a concern, because a large binary matrix must be inverted.
In this paper, we show how key generation can be done on smart card chips with very little RAM. This is accomplished by modifying the key generation algorithm and splitting it into a security-critical part and a non-security-critical part. The security-critical part can be implemented on the smart card controller; the non-critical part contains the matrix inversion and is done on a connected host.
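The host-side step is plain Gaussian elimination over $\mathrm{GF}(2)$. A compact sketch, representing each matrix row as a Python integer bitmask (illustrative only; real implementations work on much larger matrices with bit-sliced arithmetic):

```python
def gf2_invert(rows, n):
    """Invert an n x n binary matrix given as a list of n-bit integers
    (bit j of rows[i] is entry (i, j)). Returns the inverse in the same
    representation, or None if the matrix is singular."""
    a = list(rows)
    inv = [1 << i for i in range(n)]  # start from the identity matrix
    for col in range(n):
        # find a pivot row with a 1 in this column
        pivot = next((r for r in range(col, n) if a[r] >> col & 1), None)
        if pivot is None:
            return None  # singular matrix, key generation must retry
        a[col], a[pivot] = a[pivot], a[col]
        inv[col], inv[pivot] = inv[pivot], inv[col]
        for r in range(n):  # clear this column in all other rows (XOR = GF(2) add)
            if r != col and a[r] >> col & 1:
                a[r] ^= a[col]
                inv[r] ^= inv[col]
    return inv
```

The split works because the inversion input can be made non-secret, so the host learns nothing security-critical from performing it.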

The inspection model for zero-knowledge proofs and efficient Zerocash with secp256k1 keys

Proving equality of discrete logarithms in groups of the same order was addressed by Chaum and Pedersen's seminal work. However, there has not been much work on proving discrete log equality across groups of different orders.
This paper presents an efficient solution that leverages a technique we call delegated Schnorr. The discovery of this technique was guided by a design methodology we call the inspection model, which we find useful for protocol design.
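For context, the same-order case handled by Chaum-Pedersen proves that two group elements share one discrete logarithm. A toy non-interactive (Fiat-Shamir) sketch over a tiny safe-prime group follows; the parameters are illustrative only and offer no security, and this is background material rather than the delegated Schnorr technique itself.

```python
import hashlib
import secrets

# Toy safe-prime group: p = 2q + 1, working in the subgroup of order q.
p, q = 23, 11
g, h = 4, 9  # two generators of the order-q subgroup

def H(*vals):
    """Fiat-Shamir challenge: hash the transcript into Z_q."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove_dleq(x):
    """Prove log_g(A) = log_h(B) = x without revealing x."""
    A, B = pow(g, x, p), pow(h, x, p)
    r = secrets.randbelow(q)
    t1, t2 = pow(g, r, p), pow(h, r, p)
    c = H(g, h, A, B, t1, t2)
    s = (r + c * x) % q
    return A, B, (t1, t2, s)

def verify_dleq(A, B, proof):
    t1, t2, s = proof
    c = H(g, h, A, B, t1, t2)
    return (pow(g, s, p) == t1 * pow(A, c, p) % p and
            pow(h, s, p) == t2 * pow(B, c, p) % p)
```

The difficulty this paper targets is that when the two groups have different orders, the response $s$ cannot live in a single $\mathbb{Z}_q$, which is what breaks the direct Chaum-Pedersen approach.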
We show two applications of this technique on the Findora blockchain:
**Maxwell-Zerocash switching:**
There are two privacy-preserving transfer protocols on the Findora blockchain: one follows the Maxwell construction and uses Pedersen commitments over Ristretto; the other follows the Zerocash construction and uses Rescue over BLS12-381. We present an efficient protocol to convert assets between these two constructions while preserving privacy.
**Zerocash with secp256k1 keys:**
Bitcoin, Ethereum, and many other chains sign with ECDSA over secp256k1. There is a strong need for ZK applications not to depend on special curves like Jubjub, but to be compatible with secp256k1. Due to the FFT-unfriendliness of secp256k1, many proof systems (e.g., Groth16, Plonk, FRI) are infeasible. We present a solution using Bulletproofs over the curve secq256k1 ("q") and delegated Schnorr, which connects Bulletproofs to TurboPlonk over BLS12-381.
We conclude the paper with (im)possibility results about Zerocash with only access to a deterministic ECDSA signing oracle, which is the case when working with MetaMask. This result shows the limitations of the techniques in this paper.
This paper is under a bug bounty program through a grant from Findora Foundation.