Papers updated in last 183 days (1350 results)
Zero-Knowledge Functional Elementary Databases
Zero-knowledge elementary databases (ZK-EDBs) enable a prover to commit to a database ${D}$ of key-value $(x,v)$ pairs and later provide a convincing answer to the query ``send me the value $D(x)$ associated with $x$'' without revealing any extra knowledge (including the size of ${D}$). After its introduction, several works extended it to allow more expressive queries, but the expressiveness achieved so far is still limited: only relatively simple queries (range queries over the keys and values) can be handled by known constructions.
In this paper we introduce a new notion called zero-knowledge functional elementary databases (ZK-FEDBs), which allows the most general functional queries. Roughly speaking, for any Boolean circuit $f$, ZK-FEDBs allow the ZK-EDB prover to provide convincing answers to queries of the form ``send me all records ${(x,v)}$ in ${{D}}$ satisfying $f(x,v)=1$,'' without revealing any extra knowledge (including the size of ${D}$). We present a construction of ZK-FEDBs in the random oracle model and the generic group model, whose proof size is only linear in the length of a record and the size of the query circuit, and is independent of the size of the input database $D$.
Our technical contribution is two-fold. First, we introduce a new variant of zero-knowledge sets (ZKS) which supports combined operations on sets, and present a concrete construction based on groups of unknown order. Second, we develop a transformation that transforms a Boolean-circuit query into a query of combined operations on related sets, which may be of independent interest.
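To fix intuition, here is a minimal Python sketch of the query semantics only (the function name `fedb_query` and the toy data are ours; the zero-knowledge machinery, i.e. the commitments and proofs, is exactly what the paper adds on top):

```python
# A minimal sketch of the ZK-FEDB *query semantics* (names hypothetical).
# It shows only what the prover must answer, not how the answer is proven:
# given a database D of (key, value) records and a Boolean predicate f,
# return every record with f(x, v) = 1.

def fedb_query(D, f):
    """Return all records (x, v) in D satisfying f(x, v) == 1."""
    return [(x, v) for x, v in D.items() if f(x, v)]

# Usage: a toy predicate standing in for an arbitrary Boolean circuit f.
D = {3: 10, 7: 2, 12: 9}
records = fedb_query(D, lambda x, v: int(x > 5 and v < 5))
assert records == [(7, 2)]
```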
Blockchain Transaction Censorship: (In)secure and (In)efficient?
The ecosystem around blockchain and Decentralized Finance (DeFi) is seeing more and more interest from centralized regulators. For instance, recently, the US government placed sanctions on the largest DeFi mixer, Tornado.Cash (TC). To our knowledge, this is the first time that centralized regulators have sanctioned a decentralized and open-source blockchain application. This has led various blockchain participants, e.g., miners/validators and DeFi platforms, to censor TC-related transactions. The blockchain community has extensively discussed how censoring transactions could affect users’ privacy.
In this work, we analyze the efficiency and possible security implications of censorship on the different steps during the life cycle of a blockchain transaction, i.e., generation, propagation, and validation. We reveal that fine-grained censorship will reduce the security of block validators and centralized transaction propagation services, and can potentially cause Denial of Service (DoS) attacks. We also find that DeFi platforms adopt centralized third-party services to censor user addresses at the frontend level, which blockchain users could easily bypass. Moreover, we present a tainting attack whereby an adversary can prevent users from interacting normally with DeFi platforms by sending TC-related transactions.
New Design Techniques for Efficient Arithmetization-Oriented Hash Functions: Anemoi Permutations and Jive Compression Mode
Advanced cryptographic protocols such as Zero-knowledge (ZK) proofs of knowledge, widely used in cryptocurrency applications such as Zcash, Monero, Filecoin, Tezos, Topos, demand new cryptographic hash functions that are efficient not only over the binary field $\mathbb{F}_2$, but also over large fields of prime characteristic $\mathbb{F}_p$. This need has been acknowledged by the wider community and new so-called Arithmetization-Oriented (AO) hash functions have been proposed, e.g. MiMC-Hash, Rescue-Prime, Poseidon, Reinforced Concrete and Griffin to name a few.
In this paper we propose Anemoi: a new family of ZK-friendly permutations that can be used to construct efficient hash functions and compression functions. The main features of these algorithms are that 1) they are designed to be efficient within multiple proof systems (e.g. Groth16, Plonk, etc.), 2) they contain dedicated functions optimised for specific applications (namely Merkle tree hashing and general purpose hashing), 3) they have highly competitive performance, e.g. about a factor of 2 improvement over Poseidon and Rescue-Prime in terms of R1CS constraints, a 21%-35% Plonk constraint reduction over a highly optimized Poseidon implementation, as well as competitive native performance, running between two and three times faster than Rescue-Prime, depending on the field size.
On the theoretical side, Anemoi pushes further the frontier in understanding the design principles that are truly entailed by arithmetization-orientation. In particular, we identify and exploit a previously unknown relationship between CCZ-equivalence and arithmetization-orientation. In addition, we propose two new standalone components that can be easily reused in new designs. One is a new S-box called Flystel, based on the well-studied butterfly structure, and the second is Jive -- a new mode of operation, inspired by the ``Latin dance'' symmetric algorithms (Salsa, ChaCha and derivatives). Our design is a conservative one: it uses a very classical Substitution-Permutation Network structure, and our detailed analysis of algebraic attacks may be of independent interest.
Lattice-based, more general anti-leakage model and its application in decentralization
In the case of standard \LWE samples $(\mathbf{A},\mathbf{b = sA + e})$, $\mathbf{A}$ is typically uniformly distributed over $\mathbb{Z}_q^{n \times m}$, and under the \LWE assumption, the conditional distribution of $\mathbf{s}$ given $\mathbf{b}$ should be consistent with the a priori distribution of $\mathbf{s}$. However, if an adversary chooses $\mathbf{A}$ adaptively, the gap between the two may be larger. In this work, we are mainly interested in quantifying $\tilde{H}_\infty(\mathbf{s}|\mathbf{sA + e})$, where $\mathbf{A}$ is chosen by the adversary. Brakerski and D\"{o}ttling answered the question in one case: they proved that when $\mathbf{s}$ is uniformly chosen from $\mathbb{Z}_q^n$, it holds that $\tilde{H}_\infty(\mathbf{s}|\mathbf{sA + e}) \varpropto \rho_\sigma(\Lambda_q(\mathbf{A}))$. We prove that the above result still holds for any $d \leq q$ when $\mathbf{s}$ is uniformly chosen from $\mathbb{Z}_d^n$ or sampled from a discrete Gaussian.
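For concreteness, here is a toy numpy sketch of the leakage setting (all parameters are illustrative, and $\mathbf{A}$ is sampled uniformly here rather than adversarially):

```python
# Toy LWE samples b = sA + e over Z_q; the paper asks how much min-entropy
# of s survives conditioned on b when A may be chosen by the adversary.
import numpy as np

rng = np.random.default_rng(0)
q, n, m, sigma = 3329, 8, 16, 3.0            # illustrative parameters only

A = rng.integers(0, q, size=(n, m))          # public matrix (uniform here)
s = rng.integers(0, q, size=n)               # secret, uniform over Z_q^n
e = np.rint(rng.normal(0, sigma, size=m)).astype(int)  # rounded-Gaussian error
b = (s @ A + e) % q                          # the leakage b = sA + e
print(b)
```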
In addition, as an independent result, we also prove the regularity of a hash function mapping into a prime-order group and its Cartesian product.
As an application of the above results, we improve the multi-key fully homomorphic encryption scheme of \cite{TCC:BraHalPol17} and answer the question raised at the end of their work in the affirmative: we obtain GSW-type ciphertexts rather than Dual-GSW ones, and the improved scheme has shorter keys and ciphertexts.
Key lifting: Multi-key Fully Homomorphic Encryption in the plain model without noise flooding
Multi-key Fully Homomorphic Encryption (\MK), based on the Learning With Errors (\LWE) assumption, usually lifts ciphertexts of different users to new ciphertexts under a common public key to enable homomorphic evaluation. The efficiency of current \MK schemes is mainly restricted by two aspects:
Expensive ciphertext expansion operation: For a Boolean circuit with input length $N$, multiplication depth $L$, and security parameter $\lambda$, the number of additional encryptions introduced to achieve ciphertext expansion is $O(N\lambda^6L^4)$.
Noise flooding technology resulting in a large modulus $q$: In order to prove the security of the scheme, the noise flooding introduced in the encryption and distributed decryption stages leads to a huge modulus $q = 2^{O(\lambda L)}B_\chi$, which burdens the whole scheme and leads to sub-exponential approximation factors $\gamma = \tilde{O}(n\cdot 2^{\sqrt{nL}})$.
This paper solves the first problem by presenting a framework called Key-Lifting Multi-key Fully Homomorphic Encryption (\KL). With this \emph{key lifting} procedure, the number of encryptions for a local user is reduced to $O(N)$, similar to single-key fully homomorphic encryption (\FHE). For the second problem, based on R\'{e}nyi divergence, we propose an optimized proof method that removes the noise flooding technology in the encryption phase. Additionally, in the distributed decryption phase, we prove that the asymmetric nature of the DGSW ciphertext ensures that the noise after decryption does not leak the noise in the initial ciphertext, as long as the depth of the circuit is sufficient. Thus, our initial ciphertext remains semantically secure even without noise flooding, provided the encryption scheme is leakage-resilient. This approach significantly reduces the size of the modulus $q$ (with $\log q = O(L)$) and the computational overhead of the entire scheme.
Hidden Stabilizers, the Isogeny To Endomorphism Ring Problem and the Cryptanalysis of pSIDH
The Isogeny to Endomorphism Ring Problem (IsERP) asks to compute the endomorphism ring of the codomain of an isogeny between supersingular curves in characteristic $p$, given only a representation of this isogeny, i.e. some data and an algorithm to evaluate this isogeny on any torsion point. This problem plays a central role in isogeny-based cryptography; it underlies the security of the pSIDH protocol (ASIACRYPT 2022) and is at the heart of the recent attacks that broke the SIDH key exchange. Prior to this work, no efficient algorithm was known to solve IsERP for a generic isogeny degree, the hardest case seemingly being when the degree is prime.
In this paper, we introduce a new quantum polynomial-time algorithm to solve IsERP for isogenies whose degrees are odd and have $O(\log\log p)$ many prime factors. As main technical tools, our algorithm uses a quantum algorithm for computing hidden Borel subgroups, a group action on supersingular isogenies from EUROCRYPT 2021, various algorithms for the Deuring correspondence and a new algorithm to lift arbitrary quaternion order elements modulo an odd integer $N$ with $O(\log\log p)$ many prime factors to powersmooth elements.
As a main consequence for cryptography, we obtain a quantum polynomial-time key recovery attack on pSIDH. The technical tools we use may also be of independent interest.
How to achieve bidirectional zero-knowledge authentication?
Due to their completeness, soundness and zero-knowledge nature, zero-knowledge proofs are widely used to design various protocols, including zero-knowledge authentication protocols. However, existing zero-knowledge proof schemes cannot realize bidirectional authentication. In this paper, we design a series of bidirectional zero-knowledge protocols based on two new flavors of operations applicable to multiplicative cyclic groups. The two notions are formally defined in this paper, and we provide formal definitions and properties for them. According to our definitions, any bounded polynomial function defined on a multiplicative cyclic group has a duality and a mirror. Based on the two operations, we introduce and formally define the dual commitment scheme and the mirror commitment scheme. Besides, we provide two efficient constructions for dual commitment and mirror commitment, based on the CDH assumption and the RSA assumption respectively, named DCCDH, DCRSA, MCCDH and MCRSA. We also provide extended versions supporting multiple messages in the appendix. Then, we design efficient non-interactive as well as interactive zero-knowledge authentication protocols based on these commitments. The protocols allow two participants to submit commitments to each other so that they can achieve mutual zero-knowledge authentication with only a communication initialization needed. Moreover, similar to other commitment schemes, our schemes can also be widely used in the construction of other cryptographic schemes, such as verifiable secret sharing, zero-knowledge sets, credentials and content extraction signatures.
Lightweight Asynchronous Verifiable Secret Sharing with Optimal Resilience
We present new protocols for *Asynchronous Verifiable Secret Sharing* for Shamir (i.e., threshold $t<n$) sharing of secrets.
Our protocols:
* Use only "lightweight" cryptographic primitives, such as hash functions;
* Can share secrets over rings such as $\mathbb{Z}_{p^k}$ as well as finite fields $\mathbb{F}_q$ (a minimal field-case sketch follows this list);
* Provide *optimal resilience*, in the sense that they tolerate up to $t < n/3$ corruptions, where $n$ is the total number of parties;
* Are *complete*, in the sense that they guarantee that if any honest party receives their share then all honest parties receive their shares;
* Employ *batching* techniques, whereby a dealer shares many secrets in parallel, and achieves an amortized communication complexity that is linear in $n$, at least on the "happy path", where no party *provably* misbehaves.
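For reference, here is a minimal sketch of plain Shamir sharing over a prime field; it is non-verifiable and synchronous, i.e. it shows only the threshold-sharing baseline, while the paper's protocols add verifiability, completeness, batching, and ring support:

```python
# Plain Shamir t-out-of-n sharing over F_q (toy parameters, no verifiability).
import random

q = 2**61 - 1  # a Mersenne prime; illustrative field size

def share(secret, t, n):
    """Split `secret` into n shares; any t+1 of them reconstruct it."""
    coeffs = [secret] + [random.randrange(q) for _ in range(t)]
    return [(i, sum(c * pow(i, k, q) for k, c in enumerate(coeffs)) % q)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the sharing polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % q
                den = den * (xi - xj) % q
        secret = (secret + yi * num * pow(den, q - 2, q)) % q
    return secret

shares = share(secret=42, t=1, n=4)   # optimal resilience t < n/3 gives t=1
assert reconstruct(shares[:2]) == 42  # any t+1 = 2 shares suffice
```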
Bounded Verification for Finite-Field-Blasting (In a Compiler for Zero Knowledge Proofs)
Zero Knowledge Proofs (ZKPs) are cryptographic protocols by which a prover convinces a verifier of the truth of a statement without revealing any other information. Typically, statements are expressed in a high-level language and then compiled to a low-level representation on which the ZKP operates. Thus, a bug in a ZKP compiler can compromise the statement that the ZK proof is supposed to establish. This paper takes a step towards ZKP compiler correctness by partially verifying a field-blasting compiler pass, a pass that translates Boolean and bit-vector logic into equivalent operations in a finite field. First, we define correctness for field-blasters and ZKP compilers more generally. Next, we describe the specific field-blaster using a set of encoding rules and define verification conditions for individual rules. Finally, we connect the rules and the correctness definition by showing that if our verification conditions hold, the field-blaster is correct. We have implemented our approach in the CirC ZKP compiler and have proved bounded versions of the corresponding verification conditions. We show that our partially verified field-blaster does not hurt the performance of the compiler or its output; we also report on four bugs uncovered during verification.
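As an illustration of what a field-blaster does, here is the textbook encoding of Boolean logic into a prime field; this is a hedged sketch of the general idea, not CirC's actual encoding rules:

```python
# Booleans become 0/1 elements of F_p, Boolean gates become field operations.
p = 2**255 - 19  # illustrative prime

def AND(a, b): return (a * b) % p              # a AND b  ->  a*b
def XOR(a, b): return (a + b - 2 * a * b) % p  # a XOR b  ->  a + b - 2ab
def NOT(a):    return (1 - a) % p              # NOT a    ->  1 - a

# Bit-vectors can be blasted to bits and recombined as sum(b_i * 2^i) in F_p.
for a in (0, 1):
    for b in (0, 1):
        assert XOR(a, b) == (a ^ b) and AND(a, b) == (a & b)
```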
Label Correlation in Deep Learning-based Side-channel Analysis
The efficiency of profiling side-channel analysis can be significantly improved with machine learning techniques. Although powerful, the fundamental machine learning limitation of being data-hungry has received little attention in the side-channel community. In practice, the maximum number of leakage traces that evaluators/attackers can obtain is constrained by the scheme requirements or the limited accessibility of the target. Even worse, various countermeasures in modern devices increase the profiling size required to break the target.
This work demonstrates a practical approach to dealing with the lack of profiling traces. Instead of learning from a one-hot encoded label, transforming the labels into a distribution can significantly speed up the convergence of guessing entropy. By studying the relationship between all possible key candidates, we propose a new metric, denoted Label Correlation (LC), to evaluate the generalization ability of the profiling model. We validate LC with two common use cases: early stopping and network architecture search, and the results indicate its superior performance.
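The following numpy sketch illustrates the general idea of replacing a one-hot label by a label distribution; the concrete distribution here (mass decaying with Hamming-weight distance, controlled by a temperature we made up) is illustrative, not the paper's exact choice:

```python
import numpy as np

def soft_label(true_byte, temperature=2.0):
    """Spread label mass over classes close in Hamming weight to the target."""
    hw = np.array([bin(c).count("1") for c in range(256)])
    dist = np.exp(-np.abs(hw - bin(true_byte).count("1")) / temperature)
    return dist / dist.sum()   # a distribution instead of a one-hot vector

y = soft_label(0xAB)           # HW(0xAB) = 5
assert np.isclose(y.sum(), 1.0) and y[0xAB] == y.max()
```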
Ablation Analysis for Multi-device Deep Learning-based Physical Side-channel Analysis
Deep learning-based side-channel analysis is an effective way of performing profiling attacks on power and electromagnetic leakages, even against targets protected with countermeasures. While many research papers have reported successful results, they typically focus on profiling and attacking a single device, assuming that leakages are similar between devices of the same type. However, this assumption is not always realistic due to variations in hardware and measurement setups, creating what is known as the portability problem. Profiling multiple devices has been proposed as a solution, but obtaining access to these devices may pose a challenge for attackers.
This paper proposes a new approach to overcome the portability problem by introducing a neural network layer assessment methodology based on the ablation paradigm. This methodology evaluates the sensitivity and resilience of each layer, providing valuable knowledge to create a Multiple Device Model from Single Device (MDMSD). Specifically, it involves ablating a specific neural network section and performing recovery training. As a result, the profiling model, trained initially on a single device, can be generalized to leakage traces measured from various devices. By addressing the portability problem through a single device, practical side-channel attacks could be more accessible and effective for attackers.
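Below is a hedged PyTorch-style sketch of the ablate-then-recover loop; the model architecture, the dummy data, and the choice to retrain only the ablated layer are placeholder assumptions, not the paper's setup:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(700, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, 256))          # stand-in profiling model

def ablate_and_recover(model, layer_idx, traces, labels, epochs=5):
    model[layer_idx].reset_parameters()             # ablate one layer
    opt = torch.optim.Adam(model[layer_idx].parameters())  # recovery training
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(traces), labels)
        loss.backward()
        opt.step()
    return loss.item()  # proxy for how well this layer recovers

traces = torch.randn(64, 700)                       # dummy leakage traces
labels = torch.randint(0, 256, (64,))
print(ablate_and_recover(model, layer_idx=2, traces=traces, labels=labels))
```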
A Transformation for Lifting Discrete Logarithm Based Cryptography to Post-Quantum Cryptography
This is an update of the previous version of this report. While the transformation and its theoretical background remain the same as in the initial version, in this version we introduce conditions for choosing the parameters that render the attacks (both classical and quantum) proposed by Lorenz Panny in March 2023 on the first variant inapplicable. For the classical attack, we prove that the discrete logarithms that he was basing his attack upon do not exist for the new parameters. For the quantum attacks, where he proposed computing a basis of a three-dimensional lattice as in Kitaev's generalization of Shor's quantum algorithm, we prove that for our transformation the rank of that lattice (the Abelian Stabiliser in Kitaev's terminology) is one, which makes Kitaev's quantum algorithm inapplicable.
We construct algebraic structures where raising to non-associative power indices is no longer tied to the Discrete Logarithm Problem but to a variant of a problem that has been analysed over the last two decades and does not have a polynomial-time quantum algorithm that solves it. The problem is called the Exponential Congruences Problem (ECP). By this, \emph{we disprove} the claims presented in the ePrint report 2021/583 titled "Entropoids: Groups in Disguise" by Lorenz Panny that \emph{"all instantiations of the entropoid framework should be breakable in polynomial time on a quantum computer."}
Additionally, we construct an Arithmetic for power indices and propose generic recipe guidelines that we call "Entropic-Lift" for transforming some of the existing classical cryptographic schemes that depend on the hardness of the Discrete Logarithm Problem into post-quantum cryptographic schemes that base their security on the hardness of the Entropoid variant of the Exponential Congruences Problem (EECP).
As concrete examples, we show how to transform the classical Diffie-Hellman key exchange, DSA and Schnorr signature schemes.
We also pose several open problems in relation to EECP and the "Entropic-Lift" transformation.
An Anonymous Multi-receiver Certificateless Hybrid Signcryption (AMCLHS) using mKEM-DEM for Broadcast Communication
Confidentiality, authentication, and anonymity are fundamental security requirements in broadcast communication; they can be achieved by Digital Signature (DS), encryption, and pseudo-anonymous identity techniques. Signcryption offers both DS and encryption in a single logical step with high efficiency. Similarly, anonymous multireceiver signcryption ensures receiver privacy by generating identical ciphertexts for multiple receivers while keeping their identities private. While signcryption is a significant improvement over “sign then encrypt”, it still incurs high computational and communication costs and does not provide the required level of security.
In this paper, we propose a multiple-recipient Key Encapsulation Mechanism (mKEM) - Data Encapsulation Mechanism (DEM) based Anonymous Multireceiver Certificateless Hybrid Signcryption (AMCLHS). The AMCLHS uses a combination of symmetric-key and asymmetric-key cryptography to signcrypt an arbitrary-length message in broadcast communication and has two unique settings, as follows:
Pseudo-Identity PID Settings: We introduce a new algorithmic step in AMCLHS construction where each user (sender and receiver) is assigned a PID to enable the sender to signcrypt identical messages for multiple receivers while keeping the identities of other receivers anonymous.
Receiver anonymity is achieved by choosing a random Real-Identity (ID_R) to generate the PID of each user in the key generation algorithm of the AMCLHS scheme. Our approach relies on the Elliptic Curve Discrete Logarithm (ECDL) hardness assumption, a hash function, and a verification-based secret key of the Registration Authority (RA), using a time interval Delta T.
mKEM-DEM Settings: We introduce the first construction that achieves optimal ciphertext size from the Diffie-Hellman (DH) assumption using mKEM-DEM for signcryption. Our scheme uses the mKEM to generate a symmetric key for multiple receivers and the DEM to signcrypt the message using the previously generated symmetric key and the sender's private key: the mKEM handles key setup and signcryption for confidentiality and forward security, and the DEM handles key generation and unsigncryption for indistinguishability under adaptive Chosen-Ciphertext Attack (IND-CCA2).
Our scheme relies on DH and Bilinear Pairing (BP) assumption and uses a single key for all messages, which minimizes ciphertext length and ultimately reduces complexity overhead.
The scheme operates in a multireceiver certificateless environment, preventing the key escrow problem, and satisfies the cryptographic notions of Indistinguishability under Chosen-Ciphertext Attack (IND-CCA2) and Existential Unforgeability under Chosen Message Attack (EUF-CMA) for Type-I and Type-II adversaries under the q-Decisional Bilinear Diffie-Hellman Inversion (q-DBDHI) and ECDL hardness assumptions. We compare the proposed scheme with existing multireceiver hybrid signcryption schemes in terms of computation cost, communication cost, and security requirements. We show that, compared to existing multireceiver schemes, which have an overall cost of O(n^2), our scheme is computationally more efficient and has optimal communication cost, with signcryption cost linear, O(n), in the number of designated receivers, while the unsigncryption cost remains constant, O(1). Our scheme achieves confidentiality, authentication, and anonymity, and simultaneously achieves unlinkability, non-repudiation, and forward security.
$\mathsf{Skye}$: A Fast KDF based on Expanding PRF and its Application to Signal
A Key Derivation Function (KDF) generates a uniform and highly random key-stream from weakly random key material. KDFs are broadly used in various security protocols such as digital signatures and key exchange protocols. HKDF is the most deployed KDF in practice. It is based on the $\textit{extract-then-expand}$ paradigm and is presently used, among others, in the Signal Protocol for end-to-end encrypted messaging.
HKDF was proposed as a generic KDF for general input sources and thus is not optimized for source-specific use cases such as key derivation from Diffie-Hellman (DH) sources (i.e. DH shared secrets as key material). Furthermore, the sequential HKDF design is unnecessarily slower on some general-purpose platforms that benefit from parallelization.
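For reference, the extract-then-expand baseline looks as follows; this is a minimal RFC 5869-style HKDF sketch (Skye replaces both stages with a source-specific extractor and a parallelizable expander):

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """Extract stage: condense input key material into a pseudorandom key."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """Expand stage: stretch the PRK into `length` bytes of output key material."""
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

prk = hkdf_extract(salt=b"\x00" * 32, ikm=b"dh-shared-secret")  # toy inputs
key = hkdf_expand(prk, info=b"ratchet-step", length=64)
assert len(key) == 64
```

Note the sequential chaining of `t` in the expand loop; this is the dependency that makes HKDF hard to parallelize and that the paper's design avoids.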
In this work, we propose a novel, efficient and secure KDF called $\mathsf{Skye}$. $\mathsf{Skye}$ follows the $\textit{extract-then-expand}$ paradigm and consists of two algorithms: efficient deterministic $\textit{randomness extractor}$ and $\textit{expansion}$ functions. Instantiating our extractor for dedicated source-specific (e.g. DH sources) inputs allows us to achieve a significant efficiency speed-up over HKDF at the same security level. We provide concrete security analysis of $\mathsf{Skye}$ and both its algorithms in the standard model.
We provide a software performance comparison of $\mathsf{Skye}$ with the AES-based expanding PRF $\mathsf{ButterKnife}$ and HKDF with SHA-256 (as used in Signal). Our results show that in isolation $\mathsf{Skye}$ performs from 4x to 47x faster than HKDF, depending on the platform instruction support. We further demonstrate that with such a performance gain, when $\mathsf{Skye}$ is integrated within the current Signal implementation, we can achieve significant overall improvements, ranging from $38\%$ to $64\%$ relative speedup in unidirectional messaging. Even in bidirectional messaging, which includes the DH computation with its dominating computational cost, $\mathsf{Skye}$ still contributes a $12-36\%$ relative speedup when just 10 messages are sent and received at once.
Key-Range Attribute-Based Signatures for Range of Inner Product and Its Applications
In attribute-based signatures (ABS) for range of inner product (ARIP), recently proposed by Ishizaka and Fukushima at ICISC 2022, a secret-key labeled with an $n$-dimensional vector $\mathbf{x}\in\mathbb{Z}_p^n$ for a prime $p$ can be used to sign a message under an $n$-dimensional vector $\mathbf{y}\in\mathbb{Z}_p^n$ and a range $[L,R]=\{L, L+1, \cdots, R-1, R\}$ with $L,R\in\mathbb{Z}_p$ iff their inner product is within the range, i.e., $\langle \mathbf{x}, \mathbf{y} \rangle \in [L,R]\pmod p$. We consider its key-range version, named key-range ARIP (KARIP), where the range $[L,R]$ is associated with a secret-key but not with a signature. We propose three generic KARIP constructions based on linearly homomorphic signatures and non-interactive witness-indistinguishable proof, which lead to concrete KARIP instantiations secure under standard assumptions with different features in terms of efficiency. We also show that KARIP has various applications, e.g., key-range ABS for range evaluation of polynomials/weighted averages/Hamming distance/Euclidean distance, key-range time-specific signatures, and key-range ABS for hyperellipsoid predicates.
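The underlying predicate is easy to state in code; the sketch below checks only the KARIP condition itself (no signatures, and for simplicity it ignores ranges that wrap around modulo $p$):

```python
# Check the KARIP predicate: a key for x may sign under (y, [L, R])
# iff <x, y> mod p lies in the range. Parameters are illustrative.
p = 97

def karip_predicate(x, y, L, R):
    ip = sum(a * b for a, b in zip(x, y)) % p
    return L <= ip <= R   # assumes 0 <= L <= R < p for simplicity

assert karip_predicate([1, 2, 3], [4, 5, 6], L=30, R=40)       # <x,y> = 32
assert not karip_predicate([1, 2, 3], [4, 5, 6], L=0, R=10)
```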
A flexible Snark via the monomial basis
We describe a pairing-based SNARK with a universal updateable CRS that can be instantiated with any pairing-friendly curve endowed with a sufficiently large prime scalar field. We use the monomial basis, thus sidestepping the need for large smooth order subgroups in the scalar field. In particular, the scheme can be instantiated with outer curves to widely used curves such as Ed25519, secp256k1 and BN254. This allows us to largely circumvent the overhead of non-native field arithmetic for succinct proofs of statements in these base fields. These include proofs of valid signatures in Ed25519 and secp256k1 and one layer recursion with BN254.
The proof size is constant \( (10\; \mathbb{G}_1\), \(20\;\mathbb{F}_p)\), as is the verification runtime, which is dominated by a single pairing check (i.e. two pairings). The Prover time is dominated by the \(10\) multi-scalar multiplications in \(\mathbb{G}_1\) - with a combined MSM length of $22\cdot |\mathrm{Circuit}|$ - and, to a lesser extent, the computation of a single sum of polynomial products over the scalar field.
The scheme supports succinct lookup arguments for subsets as well as subsequences. Our construction relies on homomorphic table commitments, which makes them amenable to vector lookups. The Prover algorithm runs in runtime $O(M\cdot \log(M))$, where $M = \max \{|\text{Circuit}| , \;|\text{Table}|\}.$
When the scalar field has low $2$-adicity - as is inevitably the case with any outer curve to Ed25519, secp256k1 or BN254 - we use the Schonhage-Strassen algorithm or the multimodular FFT algorithm to compute the sum of polynomial products that occurs in one of the steps of the proof generation. Despite the small (but discernible) slowdown compared to polynomial products in FFT-friendly fields, we empirically found that the MSMs dominate the proof generation time. To that end, we have included some benchmarks for polynomial products in FFT-unfriendly fields.
Furthermore, the scheme supports custom gates, albeit at the cost of a larger proof size. As an application of the techniques in this paper, we describe a protocol that supports multiple \( \mathbf{univariate}\) custom gates $\mathcal{G}_i$ of high degree that are sparsely distributed, in the sense that \[ \sum_{i} \deg(\mathcal{G}_i)\cdot \#(\mathcal{G}_i\;\text{gates}) \; = \; O(|\text{Circuit}|). \] This comes at the cost of three additional $\mathbb{G}_1$ elements and does not blow up the proof generation time, i.e. it does not entail MSMs or FFTs of length larger than the circuit size.
At the moment, Panther Protocol's Rust implementation in a 576-bit pairing-friendly outer curve to Ed25519 has a (not yet optimized) Prover time of 45 seconds for a million gate circuit on a 64 vCPU AWS machine.
Private Proof-of-Stake Blockchains using Differentially-private Stake Distortion
Safety, liveness, and privacy are three critical properties for any private proof-of-stake (PoS) blockchain. However, prior work (SP'21) has shown that to obtain safety and liveness, a PoS blockchain must in theory forgo privacy. Specifically, to ensure safety and liveness, PoS blockchains elect parties based on stake proportion, potentially exposing a party's stake even with private transaction processing. In this work, we make two key contributions. First, we present the first stake inference attack applicable to both deterministic and randomized PoS with exponentially less running time in comparison with SOTA designs. Second, we use differentially private stake distortion to achieve privacy in PoS blockchains and design two stake distortion mechanisms that any PoS protocol can use. We further evaluate our proposed methods using Ethereum 2.0, a widely-recognized PoS blockchain in operation. Results demonstrate effective stake inference risk mitigation, reasonable privacy, and preservation of essential safety and liveness properties.
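As a rough illustration of stake distortion, here is a generic Laplace mechanism with made-up parameters; it conveys the differential-privacy idea only and is not one of the paper's two mechanisms:

```python
# Publish a Laplace-noised, clamped stake instead of the true stake.
import random

def distort_stake(stake, sensitivity=1.0, epsilon=0.5, total=10**6):
    # Difference of two iid Exp(eps/sens) draws is Laplace with scale sens/eps.
    noise = random.expovariate(epsilon / sensitivity) - \
            random.expovariate(epsilon / sensitivity)
    return min(max(stake + noise, 0.0), total)   # keep it a valid stake value

print(distort_stake(1500.0))
```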
Witness Encryption for Succinct Functional Commitments and Applications
Witness encryption (WE), introduced by Garg, Gentry, Sahai, and Waters (STOC 2013) allows one to encrypt a message to a statement $\mathsf{x}$ for some NP language $\mathcal{L}$, such that any user holding a witness for $\mathsf{x} \in \mathcal{L}$ can decrypt the ciphertext.
The extreme power of this primitive comes at the cost of its elusiveness: a practical construction from established cryptographic assumptions is currently out of reach.
In this work we introduce and construct a new notion of encryption that has a strong flavor of WE and that, crucially, we can build from well-studied assumptions (based on bilinear pairings) for interesting classes of computation.
Our new notion, witness encryption for (succinct) functional commitment, takes
inspiration from a prior weakening of witness encryption introduced by Benhamouda and Lin (TCC 2020). In a nutshell, theirs is a WE where: the encryption statement consists of a (non compressible) commitment $\mathsf{cm}$, a function $G$ and a value $y$; the decryption witness consists of a (non succinct) NIZK proof about the fact that $\mathsf{cm}$ opens to $v$ such that $y=G(v)$.
Benhamouda and Lin showed how to apply this primitive to obtain MPC with non-interactive and reusability properties---dubbed mrNISC---replacing the requirement of WE in existing round-collapsing techniques.
Our new WE-like notion is motivated by supporting both commitments of a fixed size and fixed decryption complexity, independent of $|v|$---in contrast to the work by Benhamouda and Lin where this complexity is linear. As a byproduct, our efficiency profile substantially improves the offline stage of mrNISC protocols.
Our work solves the additional challenges that arise from relying on computationally binding commitments and computational soundness (of functional commitments), as opposed to statistical binding and unconditional soundness (of NIZKs), used in Benhamouda and Lin's work.
To tackle them, we not only modify their basic blueprint, but also model and instantiate different types of projective hash functions as building blocks.
Furthermore, as one of our main contributions, we show the first pairing-based construction of functional commitments for NC1 circuits with linear verification.
Our techniques are of independent interest and may highlight new avenues to design practical variants of witness encryption.
As an additional contribution, we show that our new WE-flavored primitive and its efficiency properties are versatile: we discuss its further applications and show how to extend this primitive to better suit these settings.
Triply Adaptive UC NIZK
Non-interactive zero knowledge (NIZK) enables proving the validity of an NP statement without leaking anything else. We study multi-instance NIZKs in the common reference string (CRS) model, against an adversary that adaptively corrupts parties and chooses statements to be proven. We construct the first such $\textit{triply adaptive}$ NIZK that provides full adaptive soundness, as well as adaptive zero-knowledge, assuming either LWE or else LPN and DDH (previous constructions rely on non-falsifiable knowledge assumptions). In addition, our NIZKs are universally composable (UC). Along the way, we:
- Formulate an ideal functionality, $\mathcal{F}_\textsf{NICOM}$, which essentially captures $\textit{non-interactive}$ commitments, and show that it is realizable by existing protocols using standard assumptions.
- Define and realize, under standard assumptions, Sigma protocols which satisfy triply adaptive security with access to $\mathcal{F}_\textsf{NICOM}$.
- Use the Fiat-Shamir transform, instantiated with correlation intractable hash functions, to compile a Sigma protocol with triply adaptive security with access to $\mathcal{F}_\textsf{NICOM}$ into a triply adaptive UC-NIZK argument in the CRS model with access to $\mathcal{F}_\textsf{NICOM}$, assuming LWE (or else LPN and DDH); a generic Fiat-Shamir sketch follows this list.
- Use the UC theorem to obtain UC-NIZK in the CRS model.
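For readers unfamiliar with the transform used in the third step, here is a generic Fiat-Shamir sketch over a Schnorr-style Sigma protocol; a plain hash stands in for the correlation-intractable hash family the paper actually requires, and all parameters are illustrative and insecure:

```python
import hashlib

p = 2**255 - 19      # illustrative prime modulus (not a secure group choice)
g = 2
x = 123456789        # witness (discrete log)
X = pow(g, x, p)     # statement

def fs_prove(x, X):
    r = 987654321    # in practice: fresh randomness per proof
    A = pow(g, r, p) # Sigma-protocol first message (commitment)
    c = int.from_bytes(hashlib.sha256(f"{X}|{A}".encode()).digest(), "big") % p
    z = r + c * x    # response; the hash replaces the verifier's challenge
    return A, z

def fs_verify(X, A, z):
    c = int.from_bytes(hashlib.sha256(f"{X}|{A}".encode()).digest(), "big") % p
    return pow(g, z, p) == (A * pow(X, c, p)) % p   # g^z == A * X^c

A, z = fs_prove(x, X)
assert fs_verify(X, A, z)
```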
Fast Secure Multiparty ECDSA with Practical Distributed Key Generation and Applications to Cryptocurrency Custody
ECDSA is a standardized signing algorithm that is widely used in TLS, code signing, cryptocurrency and more. Due to its importance, the problem of securely computing ECDSA in a distributed manner (known as threshold signing) has received considerable interest. Despite this interest, however, as of the time of publication of the conference version of this paper (Lindell and Nof, ACM CCS '18), there had been no full threshold solution for more than two parties (meaning that any t-out-of-n parties can sign, security is preserved for any t−1 or fewer corrupted parties, and t ≤ n can be any value) that supports practical key distribution. All previous solutions for this functionality utilized Paillier homomorphic encryption, and efficient distributed Paillier key generation for more than two parties is not known.
In this paper, we present the first (again, for the conference version publication time) truly practical full threshold ECDSA signing protocol that has fast signing and key generation. This solves an old open problem and opens the door to many practical uses of threshold ECDSA signing that are in demand today. One of these applications is the construction of secure cryptocurrency wallets (where key-shares are spread over multiple devices, and so are hard to steal) and cryptocurrency custody solutions (where large sums of invested cryptocurrency are strongly protected by splitting the key between a bank/financial institution, the customer who owns the currency, and possibly a third-party trustee, with multiple shares at each). There is growing practical interest in such solutions, but prior to our work, these could not be deployed due to the need for distributed key generation.
Quantum Speed-Up for Multidimensional (Zero Correlation) Linear Distinguishers
This paper shows how to achieve a quantum speed-up for multidimensional (zero correlation) linear distinguishers.
To understand the post-quantum security of symmetric-key cryptosystems, it is important to study how much quantum speed-up can be obtained for classical cryptanalytic techniques such as differential, linear, and integral cryptanalysis.
A previous work by Kaplan et al. has already shown a quantum quadratic speed-up for one-dimensional linear distinguishers.
However, classical linear cryptanalysis often exploits multidimensional approximations to achieve more efficient attacks, and in fact it is highly non-trivial whether Kaplan et al.'s technique can be extended into the multidimensional case.
To remedy this, we investigate a new quantum technique to speed up multidimensional linear distinguishers.
Firstly, we observe that there is a close relationship between the subroutine of Simon's algorithm and linear correlations via Fourier transform.
Specifically, a slightly modified version of Simon's subroutine, which we call Correlation Extraction Algorithm (CEA), can be used to speed up multidimensional linear distinguishers.
CEA also leads to speed-ups for multidimensional zero correlation distinguishers, as well as for some integral distinguishers, through the correspondence between zero correlation and integral properties shown by Bogdanov et al. and Sun et al.
Furthermore, we observe the possibility of more-than-quadratic speed-ups for some special types of integral distinguishers when multiple integral properties exist.
In particular, we show a single-query distinguisher on a $4$-bit cell SPN cipher with the same integral property as 2.5-round AES.
Our attacks are the first to observe such a speed-up for classical cryptanalytic techniques without relying on any algebraic structures such as hidden periods or shifts.
By replacing the Hadamard transform in CEA with the general quantum Fourier transform, our technique also speeds up generalized linear distinguishers on an arbitrary finite abelian group.
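Classically, the correlations in question are Walsh-Hadamard (Fourier) coefficients; the sketch below computes the full correlation table of a toy 4-bit S-box, i.e., the quantities that CEA is said to sample from with quantum speed-up (toy code, unrelated to the paper's implementation):

```python
sbox = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]  # a toy 4-bit S-box

def dot(mask, val):
    """GF(2) inner product of two 4-bit strings."""
    return bin(mask & val).count("1") & 1

def correlation(a, b):
    """Cor(a, b) = 2^-n * sum_x (-1)^(a.x XOR b.S(x)) for input/output masks."""
    return sum((-1) ** (dot(a, x) ^ dot(b, sbox[x])) for x in range(16)) / 16

assert correlation(0, 0) == 1.0   # the trivial mask pair
print(max(abs(correlation(a, b)) for a in range(16) for b in range(1, 16)))
```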
Generation of two ``independent'' points on an elliptic curve of $j$-invariant $\neq 0, 1728$
This article is dedicated to a new method for generating two ``independent'' $\mathbb{F}_{\!q}$-points $P_0$, $P_1$ on almost any ordinary elliptic curve $E$ over a finite field $\mathbb{F}_{\!q}$ of large characteristic. In particular, the method is relevant for all standardized and real-world elliptic curves of $j$-invariants different from $0$, $1728$. The points $P_0$, $P_1$ are characterized by the fact that nobody (even the generator) knows the discrete logarithm $\log_{P_0}(P_1)$ in the group $E(\mathbb{F}_{\!q})$. Moreover, only one square root extraction in $\mathbb{F}_{\!q}$ (instead of two) is required in comparison with all previous generation methods.
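For contrast, here is a sketch of the classical try-and-increment style generation the article improves upon, with one Euler-criterion test and one square root per successful candidate; the curve and prime are illustrative, chosen with $q \equiv 3 \pmod 4$ so the root is a single exponentiation:

```python
import hashlib

p = 2**127 - 1   # illustrative Mersenne prime with p % 4 == 3
a, b = 2, 3      # illustrative curve y^2 = x^3 + a*x + b (not a standard curve)

def hash_to_point(seed: bytes):
    """Hash to an x-coordinate, retry until x^3 + ax + b is a square."""
    ctr = 0
    while True:
        x = int.from_bytes(hashlib.sha256(seed + bytes([ctr])).digest(), "big") % p
        rhs = (pow(x, 3, p) + a * x + b) % p
        if pow(rhs, (p - 1) // 2, p) == 1:   # Euler criterion: is rhs a square?
            y = pow(rhs, (p + 1) // 4, p)    # one sqrt, valid since p % 4 == 3
            assert (y * y) % p == rhs
            return x, y
        ctr += 1

P0 = hash_to_point(b"nothing-up-my-sleeve-0")
P1 = hash_to_point(b"nothing-up-my-sleeve-1")  # log_{P0}(P1) unknown to everyone
```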
History-Free Sequential Aggregate Signatures from Generic Trapdoor Functions
A sequential aggregate signature (SAS) scheme allows multiple users to sequentially combine their respective signatures in order to reduce communication costs. Historically, early proposals required the use of trapdoor permutations (e.g., RSA).
In recent years, a number of attempts have been made to extend SAS schemes to post-quantum assumptions. Many post-quantum signatures have been proposed in the hash-and-sign paradigm, which requires the use of trapdoor functions and appears to be an ideal candidate for sequential aggregation attempts. However, the difficulty of achieving post-quantum one-way permutations makes it hard to obtain similarly general constructions. Direct attempts at generalizing permutation-based schemes have been proposed, but they either lack formal security or require additional properties of the trapdoor function, which are typically not available for multivariate or code-based functions.
In this paper, we propose a history-free sequential aggregate signature based on generic trapdoor functions, generalizing existing techniques. We prove the security of our scheme in the random oracle model by adopting the probabilistic hash-and-sign with retry paradigm, and we instantiate our construction with three post-quantum schemes, comparing their compression capabilities. Finally, we discuss how direct extensions of permutation-based SAS schemes are not possible without additional properties, showing the insecurity of two existing multivariate schemes when instantiated with Unbalanced Oil and Vinegar.
Curve Trees: Practical and Transparent Zero-Knowledge Accumulators
In this work we improve upon the state of the art for practical zero-knowledge for set membership, a building block at the core of several privacy-aware applications, such as anonymous payments, credentials and whitelists. This primitive allows a user to show knowledge of an element in a large set without leaking the specific element. One of the obstacles to its deployment is efficiency. Concretely efficient solutions exist, e.g., those deployed in Zcash Sapling, but they often work at the price of a strong trust assumption: an underlying setup that must be generated by a trusted third party.
To find alternative approaches we focus on a common building block: accumulators, a cryptographic data structure which compresses the underlying set. We propose novel, more efficient and fully transparent constructions (i.e., without a trusted setup) for accumulators supporting zero-knowledge proofs for set membership. Technically, we introduce new approaches inspired by ``commit-and-prove'' techniques to combine shallow Merkle trees and 2-cycles of elliptic curves into a highly practical construction. Our basic accumulator construction---dubbed Curve Trees---is completely transparent (does not require a trusted setup) and is based on simple and widely used assumptions (DLOG and Random Oracle Model). Ours is the first fully transparent construction that obtains concretely small proof/commitment sizes for large sets and a proving time one order of magnitude smaller than proofs over Merkle Trees with Pedersen hash. For a concrete instantiation targeting 128 bits of security we obtain: a commitment to a set of any size is 256 bits; for $|S| = 2^{40}$ a zero-knowledge membership proof is 2.9KB, its proving takes $2$s and its verification $40$ms on an ordinary laptop.
Using our construction as a building block we can design a simple and concretely efficient anonymous cryptocurrency with full anonymity set, which we dub $\mathbb{V}$cash. Its transactions can be verified in $\approx 80$ms or $\approx 5$ms when batch-verifying multiple ($> 100$) transactions; transaction sizes are $4$KB. Our timings are competitive with those of the approach in Zcash Sapling and trade slightly larger proofs (transactions in Zcash Sapling are 2.8KB) for a completely transparent setup.
Finding and Evaluating Parameters for BGV
Fully Homomorphic Encryption (FHE) is a groundbreaking technology that allows for arbitrary computations to be performed on encrypted data. State-of-the-art schemes such as Brakerski Gentry Vaikuntanathan (BGV) are based on the Learning with Errors over rings (RLWE) assumption, and each ciphertext has an associated error that grows with each homomorphic operation.
For correctness, the error needs to stay below a certain threshold, requiring a trade-off between security and error margin for computations in the parameters.
Choosing the parameters accordingly, for example, the polynomial degree or the ciphertext modulus, is challenging and requires expert knowledge specific to each scheme.
In this work, we improve all steps of the parameter generation process. We provide a comprehensive analysis for BGV in the Double Chinese Remainder Theorem (DCRT) representation, providing more accurate and better bounds than previous work on the DCRT, and empirically derive a closed formula linking the security level, the polynomial degree, and the ciphertext modulus.
Additionally, we introduce new circuit models and combine our theoretical work in an easy-to-use parameter generator for researchers and practitioners interested in using BGV for secure computation.
Our formula results in better security estimates than previous closed formulas, while our DCRT analysis results in reduced prime sizes of up to 42% compared to previous work.
Breaking the power-of-two barrier: noise estimation for BGV in NTT-friendly rings
The Brakerski-Gentry-Vaikuntanathan (BGV) scheme is a Fully Homomorphic Encryption (FHE) cryptosystem based on the Ring Learning With Error (RLWE) problem.
Ciphertexts in this scheme contain an error term that grows with operations and causes decryption failure when it surpasses a certain threshold.
For this reason, the parameters of BGV need to be estimated carefully, with a trade-off between security and error margin.
The ciphertext space of BGV is the ring $\mathcal R_q=\mathbb Z_q[x]/(\Phi_m(x))$, where usually the degree $n$ of the cyclotomic polynomial $\Phi_m(x)$ is chosen as a power of two for efficiency reasons. However, the jump between two consecutive power-of-two polynomials can sometimes also cause a jump in security, resulting in parameters that are much bigger than needed.
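A small sketch of the extra ring degrees this unlocks: since $\varphi(2^s 3^t)=2^s 3^{t-1}$, many degrees land strictly between consecutive powers of two (the ranges below are illustrative, not recommended parameters):

```python
def phi(s, t):
    """Euler totient of 2^s * 3^t for s, t >= 1."""
    return 2**s * 3**(t - 1)

degrees = sorted({phi(s, t) for s in range(1, 16) for t in range(1, 9)})
between = [n for n in degrees if 2**14 < n < 2**15]
print(between)   # degrees available strictly between 16384 and 32768
```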
In this work, we explore the non-power-of-two instantiations of BGV.
Although our theoretical research encompasses results applicable to any cyclotomic ring, our main investigation is focused on the case of $m=2^s 3^t$, i.e., cyclotomic polynomials with degree $n=2^s 3^{t-1}$.
We provide a thorough analysis of the noise growth in this new setting using the canonical norm and compare our results with the power-of-two case considering practical aspects like NTT algorithms.
We find that in many instances, the parameter estimation process yields better results for the non-power-of-two setting.
Asymmetric Trapdoor Pseudorandom Generators: Definitions, Constructions, and Applications to Homomorphic Signatures with Shorter Public Keys
We introduce a new primitive called the asymmetric trapdoor pseudorandom generator (ATPRG), which can be viewed as a pseudorandom generator with two additional trapdoors (a public trapdoor and a secret trapdoor), or as a backdoor pseudorandom generator with one additional trapdoor (a secret trapdoor). Specifically, users having neither the public trapdoor nor the secret trapdoor can only generate the public pseudorandom numbers $pr_1,\dots,pr_N$; in this respect an ATPRG behaves like an ordinary pseudorandom generator. However, users having the public trapdoor can use any public pseudorandom number $pr_i$ to recover the whole $pr$ sequence; this is the functionality of a backdoor pseudorandom generator. Further, users having the secret trapdoor can use the $pr$ sequence to generate a sequence $sr_1,\dots,sr_N$ of secret pseudorandom numbers. ATPRGs can help design more space-efficient protocols where data/input/messages must respect a predefined (unchangeable) order to be correctly processed in a computation or malleable cryptographic system.
As for applications of ATPRG, we construct the first homomorphic signature scheme (in the standard model) whose public key size is only $O(T)$, independent of the dataset size. As a comparison, the shortest public key size among existing schemes is $O(\sqrt{N}+\sqrt{T})$, proposed by Catalano et al. (CRYPTO'15), where $N$ is the dataset size and $T$ is the dimension of the message. In other words, we provide the first homomorphic signature scheme with $O(1)$-sized public keys for one-dimensional messages.
Coefficient Grouping for Complex Affine Layers
Designing symmetric-key primitives for applications in Fully Homomorphic Encryption (FHE) has become important to address the issue of the ciphertext expansion. In such a context, cryptographic primitives with a low-AND-depth decryption circuit are desired. Consequently, quadratic nonlinear functions are commonly used in these primitives, including the well-known $\chi$ function over $\mathbb{F}_2^n$ and the power map over a large finite field $\mathbb{F}_{p^n}$. In this work, we study the growth of the algebraic degree for an SPN cipher over $\mathbb{F}_{2^n}^{m}$, whose S-box is defined as the combination of a power map $x\mapsto x^{2^d+1}$ and an $\mathbb{F}_2$-linearized affine polynomial $x\mapsto c_0+\sum_{i=1}^{w}c_ix^{2^{h_i}}$ where $c_1,\ldots,c_w\neq0$. Specifically, motivated by the fact that the original coefficient grouping technique published at EUROCRYPT 2023 becomes less efficient for $w>1$, we develop a variant technique that can efficiently work for arbitrary $w$. With this new technique to study the upper bound of the algebraic degree, we answer the following questions from a theoretic perspective:
1. can the algebraic degree increase exponentially when $w=1$?
2. what is the influence of $w$, $d$ and $(h_1,\ldots,h_w)$ on the growth of the algebraic degree?
Based on this, we show (i) how to efficiently find $(h_1,\ldots,h_w)$ to achieve the exponential growth of the algebraic degree and (ii) how to efficiently compute the upper bound of the algebraic degree for arbitrary $(h_1,\ldots,h_w)$. Therefore, we expect that these results can further advance the understanding of the design and analysis of such primitives.
The Self-Anti-Censorship Nature of Encryption: On the Prevalence of Anamorphic Cryptography
As part of the responses to the ongoing ``crypto wars,'' the notion of {\em Anamorphic Encryption} was put forth [Persiano-Phan-Yung Eurocrypt '22].
The notion allows private communication in spite of a dictator who (in violation of the usual normative conditions under which Cryptography is developed) is engaged in an extreme form of surveillance and/or censorship, where it asks for all private keys and knows and may even dictate all messages.
The original work pointed out efficient ways to use two known schemes in the anamorphic mode, bypassing the draconian censorship and hiding information from the all-powerful dictator.
A question left open was whether these examples are outlier results or whether anamorphic mode is pervasive in existing systems.
Here we answer the above question: we develop new techniques, expand the notion, and show that the notion of Anamorphic Cryptography is, in fact, very much prevalent.
We first refine the notion of Anamorphic Encryption with respect to the nature of covert communication.
Specifically, we distinguish {\em Single-Receiver Encryption} for many-to-one communication, and {\em Multiple-Receiver Encryption} for many-to-many communication within the group of conspiring (against the dictator) users. We then show that Anamorphic Encryption can be embedded in the randomness used in the encryption, and give families of constructions that can be applied to numerous ciphers. In total, the families cover classical encryption schemes, some of which are in actual use (the RSA-OAEP, Paillier, Goldwasser-Micali, ElGamal, and Cramer-Shoup schemes, and Smooth Projective Hash based systems). Among our examples is an anamorphic channel with much higher capacity than the regular channel.
In sum, the work shows the very large extent of the potential futility of control and censorship over the use of strong encryption by the dictator (typical for, and even stronger than, governments engaging in the ongoing ``crypto-wars''): while such limitations obviously hurt the utility which encryption typically brings to safety in computing systems, they are, essentially, not helping the dictator.
The actual implications of what we show here, and what it means in practice, require further policy and legal analyses and perspectives.
Classical substitution ciphers and group theory
We explore some connections between classical substitution ciphers, both monoalphabetic and polyalphabetic, and mathematical group theory. We try to do this in a way that is accessible to cryptographers who are not familiar with group theory, and to mathematicians who are not familiar with classical ciphers.
Two-Party Hierarchical Deterministic Wallets in Practice
The applications of Hierarchical Deterministic Wallets are rapidly growing in various areas such as cryptocurrency exchanges and hardware wallets. Improving their privacy and security is more important than ever. In this study, we propose a protocol that fully supports two-party computation of BIP32. Our protocol, similar to distributed key generation, can generate each party’s secret share, the common chain-code, and the public key without revealing the seed or any descendant private keys. We also provide a simulation-based proof of our protocol assuming a rushing, static, and malicious adversary in the hybrid model. Our master key generation protocol produces at most two bits of leakage from an honest party, given that the seeds are re-selected after each execution. The proposed hardened child key derivation protocol leads to at most one bit of leakage in the worst simulation scenario from an honest party, which accumulates with each execution. Fortunately, in practice, this issue can be largely mitigated by adding validation criteria for the Boolean circuits and masking the input shares before each execution. We then implemented the proposed protocol and ran it in a single thread on a laptop, with practically acceptable execution times. Lastly, the outputs of our protocol can be easily integrated with many threshold signing protocols.
Additional Modes for ASCON
NIST selected the ASCON family of cryptographic primitives for standardization in February 2023 as the final step in the Lightweight Cryptography Competition. The ASCON submission to the competition provided Authenticated Encryption with Associated Data (AEAD), hashing, and Extensible Output Function (XOF) modes. Real-world cryptography systems often need more than packet encryption and simple hashing. Keyed message authentication, key derivation, cryptographically secure pseudo-random number generation (CSPRNG), password hashing, and encryption of sensitive values in memory are also important. This paper defines additional modes that can be deployed on top of ASCON based on proven designs from the literature.
Satisfiability Modulo Finite Fields
We study satisfiability modulo the theory of finite fields and give a decision procedure for this theory. We implement our procedure for prime fields inside the cvc5 SMT solver. Using this theory, we construct SMT queries that encode translation validation for various zero knowledge proof compilers applied to Boolean computations. We evaluate our procedure on these benchmarks. Our experiments show that our implementation is superior to previous approaches (which encode field arithmetic using integers or bit-vectors).
Flamingo: Multi-Round Single-Server Secure Aggregation with Applications to Private Federated Learning
This paper introduces Flamingo, a system for secure aggregation of data across a large set of clients. In secure aggregation, a server sums up the private inputs of clients and obtains the result without learning anything about the individual inputs beyond what is implied by the final sum. Flamingo focuses on the multi-round setting found in federated learning in which many consecutive summations (averages) of model weights are performed to derive a good model. Previous protocols, such as Bell et al. (CCS ’20), have been designed for a single round and are adapted to the federated learning setting by repeating the protocol multiple times. Flamingo eliminates the need for the per-round setup of previous protocols, and has a new lightweight dropout resilience protocol to ensure that if clients leave in the middle of a sum the server can still obtain a meaningful result. Furthermore, Flamingo introduces a new way to locally choose the so-called client neighborhood introduced by Bell et al. These techniques help Flamingo reduce the number of interactions between clients and the server, resulting in a significant reduction in the end-to-end runtime for a full training session over prior work.
We implement and evaluate Flamingo and show that it can securely train a neural network on the (Extended) MNIST and CIFAR-100 datasets, and the model converges without a loss in accuracy, compared to a non-private federated learning system.
BAKSHEESH: Similar Yet Different From GIFT
We propose a lightweight block cipher named BAKSHEESH, which follows up on the popular cipher GIFT-128 (CHES'17). BAKSHEESH runs for 35 rounds, 12.5 percent fewer than GIFT-128 (which runs for 40 rounds), while maintaining the same security claims against classical attacks.
The crux of BAKSHEESH is to use a 4-bit SBox that has a non-trivial Linear Structure (LS). An SBox with one or more non-trivial LS has not been used in a cipher construction until DEFAULT (Asiacrypt'21). DEFAULT is pitched to have inherent protection against the Differential Fault Attack (DFA), thanks to its SBox having 3 non-trivial LS. BAKSHEESH, however, uses an SBox with only 1 non-trivial LS; and is a traditional cipher just like GIFT-128.
The SBox requires a low number of AND gates, making BAKSHEESH suitable for side-channel countermeasures (compared to GIFT-128) and other niche applications. Indeed, our study of the cost of threshold implementation shows that BAKSHEESH offers a few-fold advantage over other lightweight ciphers. The design does not deviate much from its predecessor (GIFT-128), thereby allowing for easy implementation (such as fix-slicing in software). However, BAKSHEESH opts for a full-round key XOR, compared to the half-round key XOR in GIFT.
Thus, taking everything into account, we show how a cipher construction can benefit from the unique vantage point of using a 1-LS SBox, by combining state-of-the-art progress in classical cryptanalysis with protection against device-dependent attacks. We thereby create a new paradigm of lightweight ciphers through adequate deliberation on the design choices, and solidify it with appropriate security analysis and ample implementation/benchmark results.
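For readers unfamiliar with linear structures, the following toy Python checker enumerates them for a 4-bit SBox: a nonzero input difference $a$ is a linear structure of the component function $x \mapsto \langle b, S(x)\rangle$ if its derivative is constant. The SBox below is a placeholder of our own, not the actual BAKSHEESH or GIFT SBox.

```python
# Toy checker for non-trivial linear structures (LS) of a 4-bit SBox.
SBOX = [0x6, 0x5, 0xC, 0xA, 0x1, 0xE, 0x7, 0x9,
        0xB, 0x0, 0x3, 0xD, 0x8, 0xF, 0x4, 0x2]  # hypothetical example SBox

def dot(u, v):  # inner product over GF(2)
    return bin(u & v).count("1") & 1

def linear_structures(sbox):
    found = []
    for a in range(1, 16):          # nonzero input difference
        for b in range(1, 16):      # nonzero output mask
            vals = {dot(b, sbox[x ^ a] ^ sbox[x]) for x in range(16)}
            if len(vals) == 1:      # derivative is constant -> linear structure
                found.append((a, b, vals.pop()))
    return found

print(linear_structures(SBOX))  # empty list means no non-trivial LS
```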
Too Many Hints - When LLL Breaks LWE
All modern lattice-based schemes build on variants of the LWE problem. Information leakage of the LWE secret $\mathbf s \in \mathbb{Z}_q^n$ is usually modeled via so-called hints, i.e., inner products of $\mathbf s$ with some (random, but known) vector.
At Crypto'20, Dachman-Soled, Ducas, Gong and Rossi (DDGR) defined, among others, so-called perfect hints and modular hints. The trailblazing DDGR framework allows one to successively integrate and combine hints into lattices, and estimates the resulting LWE security loss.
We introduce a new methodology to integrate and combine an arbitrary number of perfect and modular hints in a single stroke. As opposed to DDGR, our methodology is significantly more efficient in constructing lattice bases, and thus easily scales to a large number of hints, up to cryptographic dimensions, a regime that is impractical in DDGR. The efficiency of our method defines a large LWE parameter regime in which we can fully carry out attacks faster than DDGR can even estimate them. A key component of our new method is dimension reduction of $\mathbf s$, which significantly reduces LWE security.
The benefits of our approach allow us to practically determine which number of hints is sufficient to efficiently break LWE-based lattice schemes in practice. For mod-$q$ hints, i.e., modular hints defined over $\mathbb{Z}_q$, we reconstruct Kyber-512 secret keys via LLL reduction (only!) using $449$ hints. For Falcon-512, NTRU-HRSS-701, Kyber-768 and Dilithium-1024 we need $452$, $622$, $702$ and $876$ modular hints, respectively.
Our results for perfect hints significantly improve over these numbers, requiring roughly $n/2$ perfect hints for LWE dimension $n$. Namely, we reconstruct Kyber-512 keys via LLL reduction with merely $234$ perfect hints. For secret keys of Falcon-512, NTRU-HRSS-701, Kyber-768 and Dilithium-1024 we require $233$, $332$, $390$ and $463$ perfect hints, respectively. We find such a small number of perfect hints quite remarkable. If we resort to stronger lattice reduction techniques like BKZ, we need even fewer hints.
For mod-$q$ hints our method is extremely efficient: the total time for constructing our lattice bases and recovering secret keys via LLL is around 20 minutes for dimension 512, 40 minutes for dimensions 701 and 768, and less than 10 hours for dimension 1024. For perfect hints we require around 3 hours (dim 512), 11 hours (dim 701), 1 day (dim 768), and one week (dim 1024).
Our results demonstrate that especially perfect hints are powerful in practice, and stress the necessity to properly protect lattice schemes against leakage.
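As a point of reference for the numbers above: with a full set of $n$ linearly independent mod-$q$ hints, secret recovery degenerates to plain linear algebra, as in the toy Python sketch below (illustrative parameters of our own, not the paper's method); the paper's contribution is getting by with far fewer than $n$ hints via lattice reduction.

```python
import random
from sympy import Matrix

# Toy illustration of mod-q hints <v, s> mod q.
q, n = 3329, 8                       # Kyber-like modulus, toy dimension
s = Matrix([random.randrange(-2, 3) for _ in range(n)])   # small secret

# n random hint vectors; a random matrix mod a prime q is invertible
# with overwhelming probability
V = Matrix([[random.randrange(q) for _ in range(n)] for _ in range(n)])
hints = (V * s).applyfunc(lambda t: t % q)   # hint i is <v_i, s> mod q

recovered = (V.inv_mod(q) * hints).applyfunc(lambda t: t % q)
# map back from Z_q to the small centered range of the secret
recovered = recovered.applyfunc(lambda t: t - q if t > q // 2 else t)
assert recovered == s
print("secret recovered from", n, "mod-q hints by linear algebra alone")
```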
Quantum Attacks on Type-1 Generalized Feistel Schemes
Generalized Feistel schemes (GFSs) are extremely important and extensively researched cryptographic schemes. In this paper, we investigate the security of Type-1 GFS in the quantum setting. On the one hand, in the qCCA setting, we give a new quantum polynomial-time distinguisher on $(d^2-1)$-round Type-1 GFS with branches $d>3$, which extends the previous results by $d-2$ rounds. This leads to a more efficient analysis of Type-1 GFS; that is, the complexity of some previous key-recovery attacks is reduced by a factor of $2^{(d-2)k/2}$, where $k$ is the key length of the internal round function. On the other hand, for CAST-256, a block cipher based on Type-1 GFS, we give a 17-round quantum distinguisher in the qCPA setting. As a result, we construct an $r$-round ($r>17$) quantum key-recovery attack with complexity $\mathcal{O}(2^{37(r-17)/2})$.
Exact Security Analysis of ASCON
The Ascon cipher suite, offering both authenticated encryption with associated data (AEAD) and hashing functionality, has recently emerged as the winner of the NIST Lightweight Cryptography (LwC) standardization process. The AEAD schemes within Ascon, namely Ascon-128 and Ascon-128a, have also been previously selected as the preferred lightweight authenticated encryption solutions in the CAESAR competition. In this paper, we present a tight and comprehensive security analysis of the Ascon AEAD schemes within the random permutation model. Existing integrity analyses of Ascon (and any Duplex AEAD scheme in general) commonly include the term $DT/2^c$, where $D$ and $T$ represent data and time complexities respectively, and $c$ denotes the capacity of the underlying sponge. In this paper, we demonstrate that Ascon achieves AE security when $T$ is bounded by $\min\{2^{\kappa}, 2^c\}$ (where $\kappa$ is the key size), and $DT$ is limited to $2^b$ (with $b$ being the size of the underlying permutation, which is 320 for Ascon). Our findings indicate that in accordance with NIST requirements, Ascon allows for a tag size as low as 64 bits while enabling a higher rate of 192 bits, surpassing the recommended rate.
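As a quick numeric reading of these bounds (our own illustration, plugging in the parameters quoted above):

```python
# AE security as long as T <= min(2^kappa, 2^c) and D*T <= 2^b.
kappa, b = 128, 320            # key size and permutation width of Ascon
for rate in (64, 192):         # classic Ascon-128 rate vs. the higher rate above
    c = b - rate               # capacity
    print(f"rate={rate:3d}: log2(T) <= {min(kappa, c)}, log2(DT) <= {b}")
# Even at rate 192 (capacity 128), the time bound stays at 2^128.
```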
Tagged Chameleon Hash from Lattice and Application to Redactable Blockchain
Chameleon hash (CH) is a trapdoor hash function: generally it is hard to find collisions, but with the help of the trapdoor, finding collisions becomes easy. CH plays an important role in converting a conventional blockchain into a redactable one. However, most existing CH schemes are too weak to support redactable blockchains. The currently known CH schemes serving redactable blockchains achieve at best so-called “full collision resistance (f-CR)”, but they are built either in the random oracle model or rely on heavy tools like the simulation-sound extractable non-interactive zero-knowledge (SSE-NIZK) proof system. Moreover, up to now there is no CH scheme with post-quantum f-CR security in the standard model. Therefore, no CH can support redactable blockchains in a post-quantum way without relying on random oracles.
In this paper, we introduce a variant of CH, namely tagged chameleon hash (tCH). A tagged chameleon hash takes a tag into its hash evaluation and collision finding algorithms. We define two security notions for tCH, collision resistance (CR) and full collision resistance (f-CR), and prove the equivalence between CR and f-CR when tCH works in the one-time tag mode. We propose a tCH scheme from lattices without using any NIZK proof, and prove that its collision resistance is (almost) tightly reduced to the Short Integer Solution (SIS) assumption in the standard model. We also show how to apply tCH to a blockchain in one-time tag mode so that the blockchain can be compiled into a redactable one. Our tCH scheme provides the first post-quantum solution for redactable blockchains, without resorting to random oracles or NIZK proofs. Besides, we construct a more efficient tCH scheme with CR tightly reduced to SIS in the random oracle model, which may be of independent interest.
An update on Keccak performance on ARMv7-M
This note provides an update on Keccak performance on the ARMv7-M processors. Starting from the XKCP implementation, we have applied architecture-specific optimizations that have yielded a performance gain of up to 21% for the largest permutation instance.
Classical and Quantum Meet-in-the-Middle Nostradamus Attacks on AES-like Hashing
At EUROCRYPT 2006, Kelsey and Kohno proposed the so-called chosen target forced-prefix (CTFP) preimage attack, where for any challenge prefix $P$, the attacker can generate a suffix $S$ such that $H(P\|S) = y$ for some hash value $y$ published in advance by the attacker. Consequently, the attacker can pretend to predict some event represented by $P$ she did not know before, and thus this type of attack is also known as the Nostradamus attack. At ASIACRYPT 2022, Benedikt et al. converted Kelsey and Kohno's attack to a quantum one, reducing the time complexity from $\mathcal{O}(\sqrt{n}\cdot 2^{2n/3})$ to $\mathcal{O}(\sqrt[3]{n} \cdot 2^{3n/7})$. The CTFP preimage attack is less investigated in the literature than (second-)preimage and collision attacks, and lacks dedicated methods. In this paper, we propose the first dedicated Nostradamus attack based on the meet-in-the-middle (MITM) attack, and the MITM Nostradamus attack can be up to quadratically accelerated in the quantum setting. Building on recent works on MITM preimage attacks on AES-like hashing, we build an automatic tool to search for optimal MITM Nostradamus attacks and to model the tradeoff between the offline and online phases. We apply our method to AES-MMO and Whirlpool, and obtain the first dedicated attacks on round-reduced versions of these hash functions. Our method and automatic tool are applicable to other AES-like hashings.
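To illustrate the CTFP game itself (not the paper's MITM attack), here is a toy Python instantiation of our own with a 16-bit truncated hash, where the attack phase simply brute-forces a suffix; this naive generic approach costs about $2^n$ evaluations, the baseline that herding and MITM techniques improve upon.

```python
import hashlib

def H(msg: bytes) -> int:
    # 16-bit truncation of SHA-256, so the toy game finishes quickly
    return int.from_bytes(hashlib.sha256(msg).digest()[:2], "big")

y = H(b"my prediction commitment")   # attacker publishes y in advance
P = b"actual event outcome: 42"      # challenge prefix revealed later

S = 0
while H(P + S.to_bytes(8, "big")) != y:   # naive search for a suffix S
    S += 1
print("found suffix after", S + 1, "trials; H(P||S) == y")
```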
ProtoStar: Generic Efficient Accumulation/Folding for Special Sound Protocols
Accumulation is a simple yet powerful primitive that enables incrementally verifiable computation (IVC) without the need for recursive SNARKs. We provide a generic, efficient accumulation (or folding) scheme for any $(2k-1)$-move special-sound protocol with a verifier that checks $\ell$ degree-$d$ equations. The accumulation verifier only performs $k+2$ elliptic curve multiplications and $k+d+O(1)$ field/hash operations. Using the compiler from BCLMS21 (Crypto 21), this enables building efficient IVC schemes where the recursive circuit only depends on the number of rounds and the verifier degree of the underlying special-sound protocol but not the proof size or the verifier time. We use our generic accumulation compiler to build ProtoStar. ProtoStar is a non-uniform IVC scheme for Plonk that supports high-degree gates and (vector) lookups. The recursive circuit is dominated by $3$ group scalar multiplications and a hash of $d^*$ field elements, where $d^*$ is the degree of the highest gate. The scheme does not require a trusted setup or pairings, and the prover does not need to compute any FFTs. The prover in each accumulation/IVC step is also only logarithmic in the number of supported circuits and independent of the table size in the lookup.
Generic Security of the SAFE API and Its Applications
We provide security foundations for SAFE, a recently introduced API framework for sponge-based hash functions tailored to prime-field-based protocols. SAFE aims to provide a robust and foolproof interface and has been implemented in the Neptune hash framework and some zero-knowledge proof projects, but it currently lacks any security proof.
In this work we identify SAFECore, a versatile variant of the sponge construction underlying SAFE. We prove indifferentiability of SAFECore for all (binary and prime) fields up to around $|\mathbb{F}_p|^{c/2}$ queries, where $\mathbb{F}_p$ is the underlying field and $c$ the capacity, and we apply this security result to various use cases. We show that the SAFE-based protocols of plain hashing, authenticated encryption, verifiable computation, non-interactive proofs, and commitment schemes are secure against a wide class of adversaries, including those dealing with multiple invocations of a sponge in a single application. Our results pave the way for using SAFE with the full taxonomy of hash functions, including SNARK-, lattice-, and x86-friendly hashes.
New Baselines for Local Pseudorandom Number Generators by Field Extensions
We revisit recent techniques and results on the cryptanalysis of local pseudorandom number generators (PRGs). In doing so, we obtain a new attack on PRGs whose time complexity depends only on the algebraic degree of the PRG. Concretely, for PRGs $F : \{0,1\}^n\rightarrow \{0,1\}^{n^{1+e}}$, we give an algebraic algorithm that distinguishes between random points and image points of $F$, whose time complexity is bounded by
\[\exp(O(\log(n)^{\deg F /(\deg F - 1)} \cdot n^{1-e/(\deg F -1)} ))\]
and whose advantage is at least $1 - o(1)$ in the worst case.
To the best of the author's knowledge, this attack outperforms current attacks on the pseudorandomness of local random functions with guaranteed noticeable advantage and gives a new baseline algorithm for local PRGs. Furthermore, this is the first subexponential attack that is applicable to polynomial PRGs of constant degree over fields of any size with a guaranteed noticeable advantage.
We extend this distinguishing attack further to obtain a search algorithm that can invert a uniformly random constant-degree map $F : \{0,1\}^n\rightarrow \{0,1\}^{n^{1+e}}$ with high advantage in the average case. The search algorithm has the same runtime complexity as the distinguishing algorithm.
Revisiting Key Decomposition Techniques for FHE: Simpler, Faster and More Generic
Ring-LWE-based homomorphic encryption computations in large depth use a combination of two techniques: 1) decomposition of big numbers into small limbs/digits, and 2) efficient cyclotomic multiplications modulo $X^N+1$. It was long believed that the two mechanisms had to be strongly related, as in the full-RNS setting that uses a CRT decomposition of big numbers over an NTT-friendly family of prime numbers, and NTT over the same primes for multiplications. However, in this setting the NTT was the bottleneck of all large-depth FHE computations. A breakthrough result from Crypto 2023 by Kim et al. managed to overcome this limitation by introducing a second gadget decomposition and by showing that it indeed shifts the bottleneck, rendering the cost of NTT computations negligible compared to the rest of the computation. In this paper, we extend this result (far) beyond the full-RNS setting and show that we can completely decouple the big-number decomposition from the cyclotomic arithmetic. As a result, we get modulus switching/rescaling for free, and the memory footprint for storing relinearization keys across different levels is considerably lower than for the CRT-based counterparts, typically by a factor $\ell/3$, where $\ell$ is the deepest supported multiplication level. We verify both in theory and in practice that the performance of key switching, external and internal products, and automorphisms using our representation is similar to or faster than that achieved by Kim et al.'s Crypto 2023 paper, and we discuss the high impact of these results for people who work on low-level or hardware optimizations, as well as the benefits of the new parametrizations for people currently working on compilers for FHE.
We even manage to lower the running time of the gate bootstrapping of TFHE by eliminating 12.5% of its FFTs.
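As background for the "limbs/digits" terminology above, here is a minimal sketch of the classical gadget (digit) decomposition that such key-switching techniques build on; the parameters (power-of-two base and modulus) are toy choices of our own, not the paper's.

```python
# Toy gadget (digit) decomposition: a big coefficient mod Q is split into
# small base-B digits and recomposed via the gadget vector (1, B, B^2, ...).
Q = 2**40        # ciphertext modulus (toy)
B = 2**8         # decomposition base; digits are extracted 8 bits at a time
ell = 5          # number of digits, so that B**ell >= Q

def decompose(a):
    """Digits d_0..d_{ell-1} with a = sum_i d_i * B**i (mod Q)."""
    return [(a >> (8 * i)) % B for i in range(ell)]

def recompose(digits):
    return sum(d * B**i for i, d in enumerate(digits)) % Q

a = 123456789012
assert recompose(decompose(a)) == a % Q
# In key switching, each small digit multiplies a key element encrypting
# s * B**i, so noise growth scales with B rather than with Q.
```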
SQISignHD: New Dimensions in Cryptography
We introduce SQISignHD, a new post-quantum digital signature scheme inspired by SQISign.
SQISignHD exploits the recent algorithmic breakthrough underlying the attack on SIDH, which makes it possible to efficiently represent isogenies of arbitrary degrees as components of a higher-dimensional isogeny. SQISignHD overcomes the main drawbacks of SQISign. First, it scales well to high security levels, since the public parameters for SQISignHD are easy to generate: the characteristic of the underlying field need only be of the form $2^{f}3^{f'}-1$. Second, the signing procedure is simpler and more efficient. Third, the scheme is easier to analyse, allowing for a much more compelling security reduction. Finally, the signature sizes are even more compact than those of (the already record-breaking) SQISign, with compressed signatures as small as 116 bytes for the post-quantum NIST-1 security level.
These advantages may come at the expense of the verification, which now requires the computation of an isogeny in dimension $4$, a task whose optimised cost is still uncertain, as it has been the focus of very little attention.
Threshold Linear Secret Sharing to the Rescue of MPC-in-the-Head
The MPC-in-the-Head paradigm is a popular framework to build zero-knowledge proof systems using techniques from secure multi-party computation (MPC). While this paradigm is not restricted to a particular secret sharing scheme, all the efficient instantiations for small circuits proposed so far rely on additive secret sharing.
In this work, we show how applying a threshold linear secret sharing scheme (threshold LSSS) can be beneficial to the MPC-in-the-Head paradigm. For a general passively-secure MPC protocol model capturing most of the existing MPCitH schemes, we show that our approach improves the soundness of the underlying proof system from $1/N$ down to $1/\binom{N}{\ell}$, where $N$ is the number of parties and $\ell$ is the privacy threshold of the sharing scheme. While very general, our technique is limited to a number of parties $N \leq |\mathbb{F}|$, where $\mathbb{F}$ is the field underlying the statement, because of the MDS conjecture.
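For a feel of the soundness gain, a toy calculation with illustrative parameters (our own choice, not taken from the paper):

```python
from math import comb, log2

# Soundness error: 1/N with additive sharing vs. 1/C(N, ell) with a
# threshold LSSS, as quoted above. Parameters are illustrative only.
N, ell = 256, 3
print(f"additive sharing : soundness error 2^-{log2(N):.1f}")
print(f"threshold LSSS   : soundness error 2^-{log2(comb(N, ell)):.1f}")
# C(256, 3) = 2763520, so one repetition gives ~21.4 bits instead of 8.
```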
Applying our approach with a low-threshold LSSS also boosts the performance of the proof system by making the MPC emulation cost independent of $N$ for both the prover and the verifier. The gain is particularly significant for the verification time which becomes logarithmic in $N$ (while the prover still has to generate and commit the $N$ input shares). We further generalize and improve our framework: we show how homomorphic commitments can get rid of the linear complexity of the prover, we generalize our result to any quasi-threshold LSSS, and we describe an efficient batching technique relying on Shamir's secret sharing.
We finally apply our techniques to specific use-cases. We first propose a variant of the recent SDitH signature scheme achieving new interesting trade-offs. In particular, for a signature size of 10 KB, we obtain a verification time lower than $0.5$ ms, which is competitive with SPHINCS+, while achieving much faster signing. We further apply our batching technique to two different contexts: batched SDitH proofs and batched proofs for general arithmetic circuits based on the Limbo proof system. In both cases, we obtain an amortized proof size lower than $1/10$ of the baseline scheme when batching a few dozen statements, while the amortized performances are also significantly improved.
Towards compressed permutation oracles
Compressed oracles (Zhandry, Crypto 2019) are a powerful technique to reason about quantum random oracles, enabling a sort of lazy sampling in the presence of superposition queries. A long-standing open question is whether a similar technique can also be used to reason about random (efficiently invertible) permutations.
In this work, we make a step towards answering this question. We first define the compressed permutation oracle and illustrate its use. While the soundness of this technique (i.e., the indistinguishability from a random permutation) remains a conjecture, we show a curious 2-for-1 theorem: if we use the compressed permutation oracle methodology to show that some construction (e.g., Luby-Rackoff) implements a random permutation (or strong qPRP), then we get for free that this methodology is actually sound.
Brakedown's expander code
This write-up summarizes the sampling analysis of the expander code from Brakedown [GLSTW21]. We elaborate on their convexity argument for general linear expansion bounds, and we combine their approach with that of Spielman [Sp96] to achieve asymptotically linear encoding time for constant field size. Choosing tighter expansion bounds, we obtain more efficient parameters than [GLSTW21] for their 128-bit large field, reducing the encoding costs by 25% and beyond, and we provide a similar parameter set for the Mersenne prime field with modulus $p = 2^{31} - 1$, optimized by the combined Spielman-Brakedown approach.
Behemoth: transparent polynomial commitment scheme with constant opening proof size and verifier time
Polynomial commitment schemes are fundamental building blocks in numerous cryptographic protocols such as verifiable secret sharing, zero-knowledge succinct non-interactive arguments, and many more. The most efficient polynomial commitment schemes rely on a trusted setup, which is undesirable in trust-minimized applications, e.g., cryptocurrencies. However, transparent polynomial commitment schemes are inefficient (polylogarithmic opening proof size and/or verification time) compared to their trusted counterparts. It has been an open problem to devise a transparent, succinct polynomial commitment scheme or to prove an impossibility result in the transparent setting. In this work, for the first time, we create a transparent polynomial commitment scheme, called Behemoth, with constant-size opening proofs and constant verifier time. The downside of Behemoth is that its prover is cubic in the degree of the committed polynomial. We prove the security of our scheme in the generic group model and discuss parameter settings in which it remains practical even for the prover.
Earn While You Reveal: Private Set Intersection that Rewards Participants
In Private Set Intersection protocols (PSIs), a non-empty result always reveals something about the private input sets of the parties. Moreover, in various variants of PSI, not all parties necessarily receive or are interested in the result. Nevertheless, to date, the literature has assumed that the parties who do not receive or are not interested in the result still contribute their private input sets to the PSI for free, although doing so costs them their privacy. In this work, for the first time, we propose a multi-party PSI, called “Anesidora”, that rewards parties who contribute their private input sets to the protocol. Anesidora is efficient; it mainly relies on symmetric-key primitives, and its computation and communication complexities are linear in the number of parties and the set cardinality. It remains secure even if the majority of parties are corrupted by active colluding adversaries.
Owl: An Augmented Password-Authenticated Key Exchange Scheme
We present Owl, an augmented password-authenticated key exchange (PAKE) protocol that is both efficient and supported by security proofs. Owl is motivated by recognized limitations of SRP-6a and OPAQUE. SRP-6a is the only augmented PAKE that has enjoyed wide use in practice to date, but it lacks formal security proofs and does not support elliptic curve settings. OPAQUE was proposed in 2018 as a provably secure and efficient alternative to SRP-6a, and was chosen by the IETF in 2020 for standardization, but open issues leave it unclear whether OPAQUE will replace SRP-6a in practice. Owl is obtained by efficiently adapting J-PAKE to an asymmetric setting, providing additional security against server compromise yet with lower computation than J-PAKE. Our scheme is provably secure, efficient, and agile in supporting implementations in diverse multiplicative-group and elliptic curve settings. Owl is the first solution that provides systematic advantages over SRP-6a in terms of security, computation, message sizes, and agility. Owl’s agility across settings also contrasts with ongoing issues around how OPAQUE will instantiate a hash-to-curve operation in the elliptic curve setting (and what impact this will have on efficiency, security, and forward compatibility with new elliptic curves in the future).
Leakage Certification Made Simple
Side-channel evaluations benefit from sound characterisations of adversarial leakage models, which are the determining factor for attack success. Two questions are of interest: can we estimate a quantity that captures the ideal adversary (who knows the distributions involved in an attack), and can we judge how good one (or several) given leakage models are relative to the ideal adversary?
Existing work has led to a proliferation of custom quantities (the hypothetical information HI, perceived information PI, training information TI, and learnable information LI). These quantities all provide only (loose) bounds for the ideal adversary, are slow to estimate, come with convergence guarantees only for discrete distributions, and are biased.
Our work shows that none of these quantities is necessary: it is possible to characterise the ideal adversary precisely via the mutual information between the device inputs and the observed side channel traces. We achieve this result by a careful characterisation of the distributions in play. We also put forward a mutual information based approach to leakage certification, with a consistent estimator, and demonstrate via a range of case studies that our approach is simpler, faster, and correct.
A New Sieving-Style Information-Set Decoding Algorithm
The problem of decoding random codes is a fundamental problem for code-based cryptography, including recent code-based candidates in the NIST post-quantum standardization process. In this paper, we present a novel sieving-style information-set decoding (ISD) algorithm, addressing the task of solving the syndrome decoding problem.
Our approach involves maintaining a list of weight-$2p$ solution vectors to a partial syndrome decoding problem and then creating new vectors by identifying pairs of vectors that collide in $p$ positions. By gradually increasing the parity-check condition by one and repeating this process iteratively, we find the final solution(s). We show that our novel algorithm performs better than other ISDs in the memory-restricted scenario when applied to McEliece. Notably, in the case of problems with very low relative weight, it seems to significantly outperform all previous algorithms. In particular, for the code-based candidates BIKE and HQC, the algorithm has lower bit complexity than the previous best results.
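A toy Python rendering of the pair-combination step just described (bitmask vectors and illustrative sizes of our own): two weight-$2p$ vectors whose supports overlap in exactly $p$ positions XOR to a fresh weight-$2p$ vector, since $p$ positions from each vector survive.

```python
import random
from itertools import combinations

n, p = 32, 2
weight = lambda v: bin(v).count("1")

def random_weight_2p():
    v = 0
    for i in random.sample(range(n), 2 * p):
        v |= 1 << i
    return v

L = list({random_weight_2p() for _ in range(500)})
new = [a ^ b for a, b in combinations(L, 2)
       if weight(a & b) == p]                 # collide in exactly p positions
assert all(weight(v) == 2 * p for v in new)   # p + p surviving positions
print(len(new), "new weight-%d vectors from colliding pairs" % (2 * p))
```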
LFHE: Fully Homomorphic Encryption with Bootstrapping Key Size Less than a Megabyte
Fully Homomorphic Encryption (FHE) enables computations to be performed on encrypted data, so one can outsource computations on confidential information to an untrusted party. Ironically, FHE requires the client to generate massive evaluation keys and transfer them to the server side, where all computations are supposed to be performed. In this paper, we propose LFHE, a light-key variant of the FHEW scheme introduced by Ducas and Micciancio at Eurocrypt 2015 and of its improvement, the TFHE scheme proposed by Chillotti et al. at Asiacrypt 2016. In the proposed scheme the client generates small packed evaluation keys, which can be transferred to the server with much smaller communication overhead than the original non-packed variant. The server employs a key reconstruction technique to obtain the evaluation keys needed for computations.
This approach allows us to achieve an FHE scheme in which the transferred packed evaluation keys occupy less than a megabyte, an order-of-magnitude improvement over the best-known methods.
Shorter and Faster Identity-Based Signatures with Tight Security in the (Q)ROM from Lattices
We provide identity-based signature (IBS) schemes with tight security against adaptive adversaries, in the (classical or quantum) random oracle model (ROM or QROM), in both unstructured and structured lattices, based on the SIS or RSIS assumption. These signatures are short (of size independent of the message length).
Our schemes build upon a work from Pan and Wagner (PQCrypto’21) and improve on it in several ways. First, we prove their transformation from non-adaptive to adaptive IBS in the QROM. Then, we simplify the parameters used and give concrete values. Finally, we simplify the signature scheme by using a non-homogeneous relation, which helps us reduce the size of the signature and get rid of one costly trapdoor delegation.
On the whole, we get better security bounds, shorter signatures and faster algorithms.
Bounded Surjective Quadratic Functions over $\mathbb F_p^n$ for MPC-/ZK-/FHE-Friendly Symmetric Primitives
Motivated by new applications such as secure Multi-Party Computation (MPC), Fully Homomorphic Encryption (FHE), and Zero-Knowledge proofs (ZK), many MPC-, FHE- and ZK-friendly symmetric-key primitives that minimize the number of multiplications over $\mathbb F_p$ for a large prime $p$ have been recently proposed in the literature. These symmetric primitives are usually defined via invertible functions, including (i) Feistel and Lai-Massey schemes and (ii) SPN constructions instantiated with invertible non-linear S-Boxes. However, the ``invertibility'' property is actually never required in any of the mentioned applications.
In this paper, we discuss the possibility of setting up MPC-/FHE-/ZK-friendly symmetric primitives instantiated with non-invertible bounded surjective functions. In contrast to one-to-one functions, each output of an $l$-bounded surjective function admits at most $l$ pre-images. The simplest example is the square map $x\mapsto x^2$ over $\mathbb F_p$ for a prime $p\ge 3$, which is (obviously) $2$-bounded surjective.
When working over $\mathbb F_p^n$ for $n\ge 2$, we set up bounded surjective functions by re-considering the recent results proposed by Grassi, Onofri, Pedicini and Sozzi at FSE/ToSC 2022 as starting points. Given a quadratic local map $F:\mathbb F_p^m \rightarrow \mathbb F_p$ for $m\in\{1,2,3\}$, they proved that the shift-invariant non-linear function over $\mathbb F_p^n$ defined as $\mathcal S_F(x_0, x_1, \ldots, x_{n-1}) = y_0\| y_1\| \ldots \| y_{n-1}$ where $y_i := F(x_i, x_{i+1})$ is never invertible for any $n\ge 2\cdot m-1$. Here, we prove that
- the quadratic function $F:\mathbb F_p^m \rightarrow \mathbb F_p$ for $m\in\{1,2\}$ that minimizes the probability of having a collision for $\mathcal S_F$ over $\mathbb F_p^n$ is of the form $F(x_0, x_1) = x_0^2 + x_1$ (or equivalent);
- the function $\mathcal S_F$ over $\mathbb F_p^n$ defined as before via $F(x_0, x_1) = x_0^2 + x_1$ (or equivalent) is $2^n$-bounded surjective.
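The second claim is easy to sanity-check by brute force for small parameters; the following Python snippet (our own toy verification, with $p=11$ and $n=3$ so that $2^n < p^n$ is a meaningful bound) counts the preimage multiplicities of $\mathcal S_F$ and confirms the $2^n$ bound.

```python
from itertools import product

# Brute-force check of the 2^n-bounded surjectivity of S_F with
# F(x0, x1) = x0^2 + x1, over F_p^n for small toy parameters.
p, n = 11, 3

def S_F(x):
    return tuple((x[i] ** 2 + x[(i + 1) % n]) % p for i in range(n))

counts = {}
for x in product(range(p), repeat=n):
    y = S_F(x)
    counts[y] = counts.get(y, 0) + 1

max_preimages = max(counts.values())
print("max #preimages:", max_preimages, " bound 2^n =", 2 ** n)
assert max_preimages <= 2 ** n
```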
As concrete applications, we propose modified versions of the MPC-friendly schemes MiMC, HadesMiMC, and (partially of) Hydra, and of the FHE-friendly schemes Masta, Pasta, and Rubato. By instantiating them with the bounded surjective quadratic functions proposed in this paper, we are able to improve the security and/or the performances in the target applications/protocols.
Batch Proofs are Statistically Hiding
Batch proofs are proof systems that convince a verifier that $x_1,\dots, x_t \in L$, for some $NP$ language $L$, with communication that is much shorter than sending the $t$ witnesses. In the case of statistical soundness (where the cheating prover is unbounded but the honest prover is efficient), interactive batch proofs are known for $UP$, the class of $NP$ languages with unique witnesses. In the case of computational soundness (aka arguments, where both honest and dishonest provers are efficient), non-interactive solutions are now known for all of $NP$, assuming standard cryptographic assumptions. We study the necessary conditions for the existence of batch proofs in these two settings. Our main results are as follows.
1. Statistical Soundness: the existence of a statistically-sound batch proof for $L$ implies that $L$ has a statistically witness indistinguishable ($SWI$) proof, with inverse polynomial $SWI$ error, and a non-uniform honest prover. The implication is unconditional for public-coin protocols and relies on one-way functions in the private-coin case.
This poses a barrier for achieving batch proofs beyond $UP$ (where witness indistinguishability is trivial). In particular, assuming that $NP$ does not have $SWI$ proofs, batch proofs for all of $NP$ do not exist. This motivates further study of the complexity class $SWI$, which, in contrast to the related class $SZK$, has been largely left unexplored.
2. Computational Soundness: the existence of batch arguments ($BARG$s) for $NP$, together with one-way functions, implies the existence of statistical zero-knowledge ($SZK$) arguments for $NP$ with roughly the same number of rounds, an inverse polynomial zero-knowledge error, and non-uniform honest prover.
Thus, constant-round interactive $BARG$s from one-way functions would yield constant-round $SZK$ arguments from one-way functions. This would be surprising as $SZK$ arguments are currently only known assuming constant-round statistically-hiding commitments (which in turn are unlikely to follow from one-way functions).
3. Non-interactive: the existence of non-interactive $BARG$s for $NP$ and one-way functions, implies non-interactive statistical zero-knowledge arguments ($NISZKA$) for $NP$, with negligible soundness error, inverse polynomial zero-knowledge error, and non-uniform honest prover. Assuming also lossy public-key encryption, the statistical zero-knowledge error can be made negligible. We further show that $BARG$s satisfying a notion of honest somewhere extractability imply lossy public key encryption.
All of our results stem from a common framework showing how to transform a batch protocol for a language $L$ into an $SWI$ protocol for $L$.
Lattice-based Commit-Transferrable Signatures and Applications to Anonymous Credentials
Anonymous credentials are an important tool for protecting users' privacy when proving possession of certain credentials.
Although various efficient constructions have been proposed based on pre-quantum assumptions, there have been limited accomplishments in the post-quantum and, especially, practical settings. This research aims to derive new methods that enhance the current state of the art.
To achieve this, we make the following contributions.
By distilling prior design insights, we propose a new primitive to instantiate \emph{signatures with protocols}, called commit-transferrable signature (CTS). When combined with a multi-theorem straight-line extractable non-interactive zero-knowledge proof of knowledge (NIZKPoK), CTS gives a modular approach to constructing anonymous credentials.
We then show efficient instantiations of CTS and the required NIZKPoK from lattices, which are believed to be post-quantum hard. Finally, we propose concrete parameters for the CTS, the NIZKPoK, and the overall anonymous credentials, based on Module-SIS and Ring-LWE. This can serve as important guidance for future deployment in practice.
Threshold ECDSA in Three Rounds
We present a three-round protocol for threshold ECDSA signing with malicious security against a dishonest majority, which information-theoretically UC-realizes a standard threshold signing functionality, assuming ideal commitment and two-party multiplication primitives. Our work improves upon and fully subsumes the DKLs $t$-of-$n$ and 2-of-$n$ protocols. This document focuses on providing a succinct but complete description of the protocol and its security proof, and contains little expository text.
Revisiting cycles of pairing-friendly elliptic curves
A recent area of interest in cryptography is recursive composition of proof systems. One of the approaches to make recursive composition efficient involves cycles of pairing-friendly elliptic curves of prime order. However, known constructions have very low embedding degrees. This entails large parameter sizes, which makes the overall system inefficient.
In this paper, we explore $2$-cycles composed of curves from families parameterized by polynomials, and show that such cycles do not exist unless a strong condition holds. As a consequence, we prove that no $2$-cycles can arise from the known families, except for those cycles already known. Additionally, we show some general properties about cycles, and provide a detailed computation on the density of pairing-friendly cycles among all cycles.
Deniable Cryptosystems: Simpler Constructions and Achieving Leakage Resilience
Deniable encryption (Canetti et al., CRYPTO ’97) is an intriguing primitive which provides a security guarantee against coercion by allowing a sender to convincingly open a ciphertext to a fake message. Despite the notable result of Sahai and Waters (STOC ’14) and other efforts to extend functionality, all deniable public-key encryption (DPKE) schemes suffer from intolerable overhead due to heavy building blocks, e.g., translucent sets or indistinguishability obfuscation. Moreover, none of them considers the possible damage from leakage in the real world, obstructing these protocols from practical use.
To fill this gap, in this work we first present a simple and generic approach to sender-DPKE from ciphertext-simulatable encryption, which can be instantiated with nearly all common PKE schemes. The core of this design is a newly designed framework for flipping a bit-string that offers inverse-polynomial distinguishability. We then theoretically expound, and experimentally show, how classic side-channel attacks (timing or simple power attacks) can help the coercer break deniability, along with feasible countermeasures.
Efficient Asymmetric Threshold ECDSA for MPC-based Cold Storage
Motivated by applications to cold-storage solutions for ECDSA-based cryptocurrencies, we present a new threshold ECDSA protocol between $n$ ``online'' parties and a single ``offline'' (aka cold) party. The primary objective of this protocol is to minimize the exposure of the offline party in terms of connected time and bandwidth. This is achieved through a unique asymmetric signing phase, in which the majority of computation, communication, and interaction is handled by the online parties.
Our protocol supports a very efficient non-interactive pre-signing stage: the parties calculate preprocessed data for future signatures, where each party (offline or online) sends a single independently generated short message per future signature. Then, to calculate the signature, the offline party simply receives a single short message (approx. 300 bytes) and outputs the signature. All previous ECDSA protocols either have high exposure for all parties or rely on non-standard assumptions. (We assume strong RSA, DCR, DDH and enhanced unforgeability of ECDSA.)
To achieve the above, we present a new batching technique for proving in zero-knowledge that the plaintexts of practically any number of Paillier ciphertexts all lie in a given range. The cost of the resulting batch proof is very close to that of the non-batch proof for a single ciphertext, and the technique is applicable to arbitrary Schnorr-style protocols.
Subversion-Resilient Authenticated Encryption without Random Oracles
In 2013, the Snowden revelations showed subversion of cryptographic implementations to be a relevant threat. Since then, the academic community has been pushing the development of models and constructions to defend against adversaries able to arbitrarily subvert cryptographic implementations. To capture these strong adversarial capabilities, Russell, Tang, Yung, and Zhou (CCS'17) proposed CPA-secure encryption in a model that utilizes a trusted party, called a watchdog, which tests an implementation before use to detect potential subversion. This model was used to construct subversion-resilient implementations of primitives such as random oracles by Russell, Tang, Yung, and Zhou (CRYPTO'18) or signature schemes by Chow et al. (PKC'19), but primitives aiming for CCA-like security remained elusive in any watchdog model.
In this work, we present the first subversion-resilient authenticated encryption scheme with associated data (AEAD) without making use of random oracles. At the core of our construction are subversion-resilient PRFs, which we obtain from weak PRFs in combination with the classical Naor-Reingold transformation. We revisit classical PRF-based constructions to obtain subversion-resilient MACs, where both tagging and verification are subject to subversion, as well as subversion-resilient symmetric encryption in the form of stream ciphers. Finally, we observe that leveraging the classical Encrypt-then-MAC approach yields subversion-resilient AEAD. Our results are based on the trusted amalgamation model of Russell, Tang, Yung, and Zhou (ASIACRYPT'16) and the assumption of honest key generation.
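For concreteness, here is a minimal sketch of the classical Naor-Reingold PRF mentioned above, with toy group parameters of our own choosing (the paper applies this transformation to weak PRFs in the subversion setting; this sketch only shows the plain construction):

```python
import random

# Naor-Reingold PRF: for a group of prime order q with generator g, the key
# is (a_0, ..., a_n) and F(x) = g^(a_0 * prod_{i: x_i = 1} a_i) in the group.
# Toy parameters: the order-11 subgroup of Z_23^* generated by 2.
p, q, g = 23, 11, 2
n = 8                                        # input length in bits
key = [random.randrange(1, q) for _ in range(n + 1)]

def nr_prf(x: int) -> int:
    e = key[0]
    for i in range(n):
        if (x >> i) & 1:
            e = (e * key[i + 1]) % q         # exponent arithmetic mod q
    return pow(g, e, p)

print([nr_prf(x) for x in range(4)])
```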
Undetectable Watermarks for Language Models
Recent advances in the capabilities of large language models such as GPT-4 have spurred increasing concern about our ability to detect AI-generated text. Prior works have suggested methods of embedding watermarks in model outputs, by $\textit{noticeably}$ altering the output distribution. We ask: Is it possible to introduce a watermark without incurring $\textit{any detectable}$ change to the output distribution?
To this end we introduce a cryptographically-inspired notion of undetectable watermarks for language models. That is, watermarks can be detected only with the knowledge of a secret key; without the secret key, it is computationally intractable to distinguish watermarked outputs from those of the original model. In particular, it is impossible for a user to observe any degradation in the quality of the text. Crucially, watermarks should remain undetectable even when the user is allowed to adaptively query the model with arbitrarily chosen prompts. We construct undetectable watermarks based on the existence of one-way functions, a standard assumption in cryptography.
How to Design Fair Protocols in the Multi-Blockchain Setting
Recently, there have been several proposals for secure computation with fair output delivery that require the use of a bulletin board abstraction (in addition to a trusted execution environment (TEE)). These proposals require all protocol participants to have read/write access to the bulletin board. These works envision the use of (public or permissioned) blockchains to implement the bulletin board abstractions. With the advent of consortium blockchains which place restrictions on who can read/write contents on the blockchain, it is not clear how to extend prior proposals to a setting where (1) not all parties have read/write access on a single consortium blockchain, and (2) not all parties prefer to post on a public blockchain.
In this paper, we address the above by showing the first protocols for fair secure computation in the multi-blockchain setting. More concretely, in an $n$-party setting where at most $t < n$ parties are corrupt, our protocol for fair secure computation works as long as (1) $t$ parties have access to a TEE (e.g., Intel SGX), and (2) each of these $t$ parties is on some blockchain with each of the other parties. Furthermore, only these $t$ parties need write access on the blockchains.
In an optimistic setting where parties behave honestly, our protocol runs completely off-chain.
Nimble: Rollback Protection for Confidential Cloud Services (extended version)
This paper introduces Nimble, a cloud service that helps applications running in trusted execution environments (TEEs) to detect rollback attacks (i.e., detect whether a data item retrieved from persistent storage is the latest version). To achieve this, Nimble realizes an append-only ledger service by employing a simple state machine running in a TEE in conjunction with a crash fault-tolerant storage service. Nimble then replicates this trusted state machine to ensure the system is available even if a minority of state machines crash. A salient aspect of Nimble is a new reconfiguration protocol that allows a cloud provider to replace the set of nodes running the trusted state machine whenever it wishes, without affecting safety. We have formally verified Nimble’s core protocol in Dafny, and have implemented Nimble such that its trusted state machine runs in multiple TEE platforms (Intel SGX and AMD SEV-SNP). Our results show that a deployment of Nimble on machines running in different availability zones can achieve from tens of thousands of requests/sec with an end-to-end latency of under 3.2 ms (based on an in-memory key-value store) to several thousand requests/sec with a latency of 30 ms (based on Azure Table).
Time to Bribe: Measuring Block Construction Market
With the emergence of Miner Extractable Value (MEV), block construction markets on blockchains have evolved into a competitive arena. Following Ethereum's transition from Proof of Work (PoW) to Proof of Stake (PoS), the Proposer Builder Separation (PBS) mechanism has emerged as the dominant force in the Ethereum block construction market.
This paper presents an in-depth longitudinal study of the Ethereum block construction market, spanning from the introduction of PoS and PBS in September 2022 to May 2023. We analyze the market shares of builders and relays, their temporal changes, and the financial dynamics within the PBS system, including payments among builders and block proposers, commonly referred to as bribes. We introduce an MEV-time law quantifying the expected MEV revenue with respect to the time elapsed since the last proposed block. We provide empirical evidence that moments of crisis (e.g., the FTX collapse, the USDC stablecoin de-peg) coincide with significant spikes in MEV payments compared to the baseline.
Despite the intention of the PBS architecture to enhance decentralization by separating actor roles, it remains unclear whether its design is optimal. Implicit trust assumptions and conflicts of interest may benefit particular parties and foster the need for vertical integration. MEV-Boost was explicitly designed to foster decentralization, but it causes the side effect of enabling risk-free sandwich extraction from unsuspecting users, potentially raising concerns for regulators.
Efficient TFHE Bootstrapping in the Multiparty Setting
In this paper, we introduce a new approach to efficiently compute TFHE bootstrapping keys for (predefined) multiple users. Hence, a fixed number of users can enjoy the same level of efficiency as in the single key setting, keeping their individual input privacy. Our construction relies on a novel algorithm called homomorphic indicator, which can be of independent interest. We provide a detailed analysis of the noise growth and a set of secure parameters suitable to be used in practice. Moreover, we compare the complexity of our technique with other state-of-the-art constructions and show which method performs better in what parameter sets, based on our noise analysis. We also provide a prototype implementation of our technique. To the best of our knowledge, this is the first implementation of TFHE in the multiparty setting.
Amun: Securing E-Voting Against Over-the-Shoulder Coercion
In an election where each voter may express $P$ preferences among $M$ possible choices, the Amun protocol makes it possible to secure vote casting against over-the-shoulder adversaries while retaining privacy, fairness, end-to-end verifiability, and correctness.
Before the election, each voter receives a ballot containing valid and decoy tokens: only the valid tokens contribute to the final tally, but they remain indistinguishable from the decoys.
Since the voter is the only one who knows which tokens are valid (without being able to prove it to a coercer), over-the-shoulder attacks are thwarted.
We prove the security of the construction under the standard Decisional Diffie-Hellman assumption in the random oracle model.
Scaling Mobile Private Contact Discovery to Billions of Users
Mobile contact discovery is a convenience feature of messengers such as WhatsApp or Telegram that helps users to identify which of their existing contacts are registered with the service. Unfortunately, the contact discovery implementation of many popular messengers massively violates the users' privacy, as demonstrated by Hagen et al. (NDSS '21, ACM TOPS '23). Unbalanced private set intersection (PSI) protocols are a promising cryptographic solution to realize mobile private contact discovery; however, state-of-the-art protocols do not scale to real-world database sizes with billions of registered users in terms of communication and/or computation overhead.
In our work, we make significant steps towards truly practical large-scale mobile private contact discovery. For this, we combine and substantially optimize the unbalanced PSI protocol of Kales et al. (USENIX Security '19) and the private information retrieval (PIR) protocol of Kogan and Corrigan-Gibbs (USENIX Security '21). Our resulting protocol has total communication overhead that is sublinear in the size of the server's user database, as well as sublinear online runtimes. We optimize our protocol by introducing database partitioning and efficient scheduling of user queries. To handle realistic change rates of databases and contact lists, we propose and evaluate different possibilities for efficient updates. We implement our protocol on smartphones and measure online runtimes of less than 2 s to query up to 1024 contacts from a database with more than two billion entries. Furthermore, we reduce setup communication by a factor of up to 32 compared to state-of-the-art mobile private contact discovery protocols.
Private Eyes: Zero-Leakage Iris Searchable Encryption
Biometric databases are being deployed with few cryptographic protections. Because of the nature of biometrics, privacy breaches affect users for their entire life.
This work introduces Private Eyes, the first zero-leakage biometric database. The only leakage of the system is unavoidable: 1) the log of the dataset size and 2) the fact that a query occurred. Private Eyes is built from symmetric searchable encryption. Proximity queries are the required functionality: given a noisy reading of a biometric, the goal is to retrieve all stored records that are close enough according to a distance metric.
Private Eyes combines locality-sensitive hashing (LSH) (Indyk and Motwani, STOC 1998) and encrypted maps. One searches for the disjunction of the LSHs of a noisy biometric reading, so the underlying encrypted map needs to answer disjunction queries efficiently.
We focus on the iris biometric. Iris biometric data requires a large number of LSHs, approximately 1000. The most relevant prior work is on zero-leakage k-nearest-neighbor search (Boldyreva and Tang, PoPETS 2021), but that work is designed for a small number of LSHs.
Our main cryptographic tool is a zero-leakage disjunctive map designed for the setting when most clauses do not match any records. For the iris, on average at most 6% of LSHs match any stored value.
To aid evaluation, we produce a synthetic iris generation tool to evaluate sizes beyond available iris datasets. This generation tool is a simple generative adversarial network. Accurate statistics are crucial to optimizing the cryptographic primitives, so this tool may be of independent interest.
Our scheme is implemented and open-sourced. For the largest tested parameters of 5000 stored irises, search requires 26 rounds of communication and 26 minutes of single-threaded computation.
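To illustrate the LSH-plus-disjunction idea, here is a toy bit-sampling LSH for Hamming distance of our own (made-up dimensions and noise rate, not the paper's iris pipeline): each hash samples a few bit positions, so nearby readings collide on many of the hashes, and a query is the disjunction of those hash values.

```python
import random

dim, n_hashes, bits_per_hash = 1024, 16, 12
random.seed(1)
samples = [random.sample(range(dim), bits_per_hash) for _ in range(n_hashes)]

def lshs(code):
    # one bucket label per sampled position set
    return [tuple(code[i] for i in s) for s in samples]

base = [random.randrange(2) for _ in range(dim)]
noisy = base[:]
for i in random.sample(range(dim), 30):      # flip ~3% of the bits
    noisy[i] ^= 1

matches = sum(a == b for a, b in zip(lshs(base), lshs(noisy)))
print(f"{matches}/{n_hashes} LSH clauses match the noisy reading")
```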
A Quantum Analysis of Nested Search Problems with Applications in Cryptanalysis
In this paper we study search problems that arise very often in cryptanalysis: nested search problems, where each search layer has known degrees of freedom and/or constraints. A generic quantum solution for such problems consists of nesting Grover's quantum search algorithm or amplitude amplification (QAA) by Brassard et al., obtaining up to a square-root speedup on classical algorithms. However, the analysis of nested Grover or QAA is complex and introduces technicalities that in previous works are handled in a case-by-case manner. Moreover, straightforward nesting introduces an overhead factor of $(\pi/2)^\ell$ in the complexity (for $\ell$ layers).
In this paper, we aim to remedy both these issues and introduce a generic framework and tools to transform a classical nested search into a quantum procedure. It improves the state-of-the-art in three ways:
1) our framework results in quantum procedures that are significantly simpler to describe and analyze;
2) it reduces the overhead factor from $(\pi/2)^\ell$ to $\sqrt{\ell}$;
3) it is simpler to apply and optimize, without needing manual quantum analysis.
We give a generic complexity formula that improves the state-of-the-art both for concrete instantiations and in the asymptotic setting. For concrete instances, we show that numerical optimizations enable further improvements over this formula, which results in a further decrease in the gap to an exact quadratic speedup.
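A quick numeric comparison of the two overhead factors discussed above, for small numbers of nesting layers:

```python
from math import pi, sqrt

# Straightforward nesting costs a factor (pi/2)^l; the framework above
# brings this down to sqrt(l).
for l in range(1, 6):
    print(f"l={l}: (pi/2)^l = {(pi/2)**l:6.2f}   sqrt(l) = {sqrt(l):.2f}")
```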
We demonstrate our framework with applications in cryptanalysis and improve the complexity of quantum attacks on reduced-round AES.
A Note on ``On the Design of Mutual Authentication and Key Agreement Protocol in Internet of Vehicles-Enabled Intelligent Transportation System''
We remark that the key agreement scheme [IEEE Trans. Veh. Technol. 2021, 70(2): 1736--1751] fails to keep anonymity and untraceability, because the user $U_k$ needs to invoke the public key $PK_{U_j}$ to verify the signature generated by the user $U_j$. Since the public key is compulsively linked to the true identity $ID_{U_j}$ for authentication, any adversary can reveal the true identity by checking the signature.
SDitH in the QROM
The MPC-in-the-Head (MPCitH) paradigm has recently led to significant improvements for signatures in the code-based setting. In this paper we consider some modifications to a recent twist of MPCitH, called Hypercube-MPCitH, which in the code-based setting provides the currently best known signature sizes. By compressing the five-round code-based Hypercube-MPCitH identification scheme into three rounds, we obtain two main benefits. On the one hand, it allows us to further develop recent techniques to provide a tight security proof in the quantum-accessible random oracle model (QROM), avoiding the catastrophic reduction losses incurred by generic QROM results for Fiat-Shamir. On the other hand, we can reduce the already low-cost online part of the signature to just a hash and some serialization. In addition, we propose the introduction of proof-of-work techniques to allow for a reduction in signature size. On the technical side, we develop generalizations of several QROM proof techniques and introduce a variant of the recently proposed extractable QROM.
The security of Kyber's FO-transform
In this short note we give another direct proof for the variant of the FO transform used by Kyber in the QROM. At PKC'23, Maram & Xagawa gave the first direct proof which does not require the indirection via FO with explicit rejection, thereby avoiding either a non-tight bound or the necessity to analyze the failure probability in a new setting. On the downside, however, their proof produces a bound that incurs an additive collision term. We explore a different approach for a direct proof, which results in a simpler argument closer to prior proofs, but a slightly worse bound.
The Referendum Problem in Anonymous Voting for Decentralized Autonomous Organizations
A natural approach to anonymous voting over Ethereum assumes that there is an off-chain aggregator that performs the following task. The aggregator receives valid signatures of YES/NO preferences from eligible voters and uses them to compute a zk-SNARK proof of the fact that the majority of voters have cast a preference for YES or NO. Then, the aggregator sends the zk-SNARK proof to the smart contract, which verifies the proof and can trigger an action (e.g., a transfer of funds). As the zk-SNARK proof guarantees anonymity, the privacy of the voters is preserved against attackers not colluding with the aggregator. Moreover, if the SNARK proof verification is efficient, the gas cost will be independent of the number of participating voters and of the signatures submitted by voters to the aggregator.
In this paper we show that this naive approach to run referenda over Ethereum can incur severe security problems. We propose both mitigations and hardness results for achieving voting procedures in which the proofs submitted on-chain are either ZK or succinct.
Poseidon2: A Faster Version of the Poseidon Hash Function
Zero-knowledge proof systems for computational integrity have seen a rise in popularity in the last couple of years. One of the results of this development is the ongoing effort in designing so-called arithmetization-friendly hash functions in order to make these proofs more efficient. One of these new hash functions, Poseidon, is extensively used in this context, also thanks to being one of the first constructions tailored towards this use case. Many of the design principles of Poseidon have proven to be efficient and were later used in other primitives, yet parts of the construction have shown to be expensive in real-world scenarios.
In this paper, we propose an optimized version of Poseidon, called Poseidon2. The two versions differ in two crucial points. First, Poseidon is a sponge hash function, while Poseidon2 can be either a sponge or a compression function depending on the use case. Secondly, Poseidon2 is instantiated with new and more efficient linear layers than Poseidon. These changes allow us to decrease the number of multiplications in the linear layer by up to 90% and the number of constraints in Plonk circuits by up to 70%. This makes Poseidon2 the currently fastest arithmetization-oriented hash function without lookups.
Besides that, we address a recently proposed algebraic attack and propose a simple modification that makes both Poseidon and Poseidon2 secure against this approach.
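As a rough illustration of where such multiplication savings come from, here is a generic sketch of a structured linear layer of the kind used by arithmetization-oriented designs; the exact Poseidon2 matrices are specified in the paper, and the form below (diagonal plus all-ones) is only an assumed example.

    # Computing y_i = mu_i * x_i + sum(x) costs t field multiplications,
    # versus t*t for a dense matrix-vector product.
    P = 2**31 - 1  # toy prime field

    def dense_layer(M, x):        # t*t field multiplications
        return [sum(m * xi for m, xi in zip(row, x)) % P for row in M]

    def structured_layer(mu, x):  # t field multiplications
        s = sum(x) % P
        return [(m * xi + s) % P for m, xi in zip(mu, x)]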
A Faster Software Implementation of SQISign
Isogeny-based cryptography is famous for its short key sizes. As one of the most compact digital signatures, SQISign (Short Quaternion and Isogeny Signature) is attractive among post-quantum schemes, but it is inefficient compared to other post-quantum competitors because of the complicated procedures of the ideal-to-isogeny translation, which is the efficiency bottleneck of the signing phase.
In this paper, we recall the current implementation of SQISign and mainly discuss how to improve the execution of the ideal-to-isogeny translation in SQISign. To be precise, we modify the SigningKLPT algorithm to accelerate the generation of the ideal $I_\sigma$. In addition, we explore how to save one of the two elliptic curve discrete logarithms and compute the remainder correctly and efficiently with the help of the reduced Tate pairing. We speed up other procedures in the ideal-to-isogeny translation with various techniques as well. It should be noted that our improvements also benefit the performance of key generation and verification in SQISign. In particular, in the instantiation with p3923, the improvements lead to a speedup of 8.82%, 8.50% and 18.94% for key generation, signing and verification, respectively.
Breaking the quadratic barrier: Quantum cryptanalysis of Milenage, telecommunications’ cryptographic backbone
The potential advent of large-scale quantum computers in the near future poses a threat to contemporary cryptography. One ubiquitous use of cryptography today is in cellular networks, whose security is centered around seven secret-key algorithms $f_1, \ldots, f_5, f_1^{*}, f_5^{*}$, aggregated into an authentication and key agreement algorithm set.
Still, to the best of our knowledge, these secret-key algorithms have not yet been subjected to quantum cryptanalysis. Instead, many quantum security considerations for telecommunication networks argue that the threat posed by quantum computers is restricted to public-key cryptography. However, various recent works have presented quantum attacks on secret-key cryptography that exploit quantum period finding to achieve more than a quadratic speedup compared to the best known classical attacks. Motivated by this quantum threat to symmetric cryptography, this paper presents a quantum cryptanalysis of the Milenage algorithm set, the prevalent instantiation of the seven secret-key algorithms $f_1, \ldots, f_5, f_1^{*}, f_5^{*}$ that underpin cellular security.
Building upon recent quantum cryptanalytic results, we show attacks that go beyond a quadratic speedup.
Concretely, we provide quantum attack scenarios for all Milenage algorithms, including exponential speedups when the attacker is allowed to issue superposition queries. Our results do not constitute a quantum break of the Milenage algorithms, but they do show that Milenage suffers from structural weaknesses making it susceptible to quantum attacks.
Schnorr protocol in Jasmin
We implement the Schnorr proof system in assembler via the Jasmin toolchain, and prove the security (proof-of-knowledge property) and the absence of leakage through timing side-channels of that implementation in EasyCrypt.
In order to do so, we show how leakage-freeness of Jasmin programs can be proven for probabilistic programs (that are not constant-time). We implement and verify algorithms for fast constant-time modular multiplication and exponentiation (using Barrett reduction and the Montgomery ladder). We implement and verify the rejection sampling algorithm. Finally, we put it all together and show the security of the overall implementation (end-to-end verification) of the Schnorr protocol, by connecting our implementation to prior security analyses in EasyCrypt (Firsov, Unruh, CSF 2023).
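As a minimal illustration of one of the verified building blocks, here is a textbook Montgomery ladder for modular exponentiation, in plain Python rather than the Jasmin code of the paper; note that a genuinely constant-time version also needs branch-free selection.

    # Both branches perform one multiplication and one squaring, the
    # data-independent operation pattern constant-time code builds on.
    def mont_ladder_pow(g, e, m, bits):
        r0, r1 = 1, g % m
        for i in reversed(range(bits)):
            if (e >> i) & 1:
                r0, r1 = (r0 * r1) % m, (r1 * r1) % m
            else:
                r0, r1 = (r0 * r0) % m, (r0 * r1) % m
        return r0

    assert mont_ladder_pow(5, 117, 1009, 7) == pow(5, 117, 1009)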
Scalable Agreement Protocols with Optimal Optimistic Efficiency
Designing efficient distributed protocols for various agreement tasks such as Byzantine Agreement, Broadcast, and Committee Election is a fundamental problem. We are interested in $scalable$ protocols for these tasks, where each (honest) party communicates a number of bits which is sublinear in $n$, the number of parties. The first major step towards this goal is due to King et al. (SODA 2006), who showed a protocol where each party sends only $\tilde O(1)$ bits throughout $\tilde O(1)$ rounds, but guarantees only that a $1-o(1)$ fraction of honest parties end up agreeing on a consistent output, assuming a constant $<1/3$ fraction of static corruptions. A few years later, King et al. (ICDCN 2011) managed to get a full agreement protocol in the same model, but where each party sends $\tilde O(\sqrt{n})$ bits throughout $\tilde O(1)$ rounds. Getting a full agreement protocol with $o(\sqrt{n})$ communication per party has been a major challenge ever since.
In light of this barrier, we propose a new framework for designing efficient agreement protocols. Specifically, we design $\tilde O(1)$-round protocols for all of the above tasks (assuming a constant $<1/3$ fraction of static corruptions) with optimistic and pessimistic guarantees:
$\bullet$ $Optimistic$ $complexity$: In an honest execution, (honest) parties send only $\tilde O(1)$ bits.
$\bullet$ $Pessimistic$ $complexity$: In any other case, (honest) parties send $\tilde O(\sqrt{n})$ bits.
Thus, all an adversary can gain from deviating from the honest execution is that honest parties will need to work harder (i.e., transmit more bits) to reach agreement and terminate. Besides the above agreement tasks, we also use our new framework to get a scalable secure multiparty computation (MPC) protocol with optimistic and pessimistic complexities.
Technically, we identify a relaxation of Byzantine Agreement (of independent interest) that allows all parties to fall back to a pessimistic execution in a coordinated way. We implement this relaxation with $\tilde O(1)$ communication bits per party and within $\tilde O(1)$ rounds.
Private Approximate Nearest Neighbor Search with Sublinear Communication
Nearest neighbor search is a fundamental building-block for a wide range of applications. A privacy-preserving protocol for nearest neighbor search involves a set of clients who send queries to a remote database. Each client retrieves the nearest neighbor(s) to its query in the database without revealing any information about the query. To ensure database privacy, clients must learn as little as possible beyond the query answer, even if behaving maliciously by deviating from protocol.
Existing protocols for private nearest neighbor search require heavy cryptographic tools, resulting in high computational and bandwidth overheads. In this paper, we present the first lightweight protocol for private nearest neighbor search. Our protocol is instantiated using two non-colluding servers, each holding a replica of the database. Our design supports an arbitrary number of clients simultaneously querying the database through the two servers. Each query consists of a single round of communication between the client and the two servers. No communication is required between the servers to answer queries.
If at least one of the servers is non-colluding, we ensure that (1) no information is revealed on the client’s query, (2) the total communication between the client and the servers is sublinear in the database size, and (3) each query answer only leaks a small and bounded amount of information about the database to the client, even if the client is malicious.
We implement our protocol and report its performance on real-world data. Our construction achieves query latencies between 10 and 20 seconds over large databases of 10M feature vectors. Client overhead remains under 10 ms of processing time per query and less than 10 MB of communication.
Don't throw your nonces out with the bathwater: Speeding up Dilithium by reusing the tail of y
We suggest a small change to the Dilithium signature scheme that allows one to reuse computations between rejected nonces, for a speed-up in signing time. The modification is based on the idea that, after rejecting on a too-large $\|\mathbf{r}_0\|_\infty$, not all elements of the nonce $\mathbf{y}$ are spent. We swap the order of the checks; if this $\mathbf{r}_0$-check fails, we only need to resample $y_1$. We provide a proof that the modification does not affect the security of the scheme. We present measurements of the performance of the modified scheme on AVX2, Cortex-M4, and Cortex-M3, which show a speed-up ranging from 11% for Dilithium2 on the M3 to 22% for Dilithium3 on AVX2.
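The reordered rejection loop can be pictured as follows; this is a toy Python sketch whose arithmetic is a random stand-in for Dilithium's polynomial-vector operations, with illustrative parameter values, not the authors' implementation.

    # Perform the r0-check first and, on failure, resample only the first
    # nonce component y_1 instead of all of y.
    import random

    GAMMA1, GAMMA2, BETA, K = 2**17, 95232, 78, 4

    def sample_component():
        return random.randrange(-GAMMA1 + 1, GAMMA1)

    y = [sample_component() for _ in range(K)]
    while True:
        # stand-ins for w = A*y, c = H(mu, HighBits(w)), z = y + c*s1,
        # r0 = LowBits(w - c*s2)
        z = [yi + random.randrange(-BETA, BETA + 1) for yi in y]
        r0 = [random.randrange(-GAMMA2, GAMMA2 + 1) for _ in range(K)]
        if max(abs(v) for v in r0) >= GAMMA2 - BETA:    # r0-check first ...
            y[0] = sample_component()                   # ... resample only y_1
            continue
        if max(abs(v) for v in z) >= GAMMA1 - BETA:     # z-check
            y = [sample_component() for _ in range(K)]  # full resample
            continue
        break  # both checks passed; z would go into the signature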
Note on Subversion-Resilient Key Exchange
In this work, we set out to create a subversion-resilient authenticated key exchange (AKE) protocol. The first step was to design a meaningful security model for this primitive, and our goal was to avoid using building blocks like reverse firewalls and public watchdogs. We wanted to exclude these kinds of tools because we desired our protocols to be self-contained, in the sense that we could prove security without relying on some outside, tamper-proof party. To define the model, we began by extending models for regular authenticated key exchange, as we wanted our model to retain all the properties of regular AKE.
While trying to design protocols that would be secure in this model, we discovered that security depends not only on the protocol itself but also on engineering questions like how keys are stored and accessed in memory. Moreover, even if we assume that we can find solutions to these engineering challenges, other problems arise when trying to develop a secure protocol, partly because it is hard to define what "secure" means in this setting. In particular, it is not clear how a subverted algorithm should affect the freshness predicate inherited from trivial attacks in regular AKE. The attack variety is large, and it is not intuitive how one should treat or classify the different attacks.
In the end, we were unable to find a satisfying solution for our model, and hence we could not prove any meaningful security of the protocols we studied. This work is a summary of our attempt and the challenges we faced before concluding it.
The Problem of Half Round Key XOR
In the design of GIFT, a half round key XOR is used. This leads to the undesired consequence that the security against differential/linear attacks is overestimated. The reason is that, in the usual DDT/LAT-based analysis of differential/linear attacks, the inherent assumption is that a full round key is XORed at each round.
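For concreteness, the DDT that such analyses tabulate is computed as follows; this is a generic sketch, and the S-box values are quoted from memory of the GIFT specification, so they should be verified against it.

    # The DDT-based probability bound implicitly assumes a full round key is
    # XORed before each S-box layer, the assumption the note says breaks.
    SBOX = [0x1, 0xA, 0x4, 0xC, 0x6, 0xF, 0x3, 0x9,
            0x2, 0xD, 0xB, 0x7, 0x5, 0x0, 0x8, 0xE]

    def ddt(sbox):
        n = len(sbox)
        table = [[0] * n for _ in range(n)]
        for din in range(n):
            for x in range(n):
                table[din][sbox[x] ^ sbox[x ^ din]] += 1
        return table

    # best single-S-box differential probability used in the usual bounds
    T = ddt(SBOX)
    p_max = max(T[a][b] for a in range(1, 16) for b in range(16)) / 16.0
    print(p_max)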
Non-interactive VSS using Class Groups and Application to DKG
Verifiable secret sharing (VSS) allows a dealer to send shares of a secret value to parties such that each party receiving a share can verify (often interactively) whether the received share was correctly generated. Non-interactive VSS (NI-VSS) allows the dealer to perform secret sharing such that every party (including an outsider) can verify their shares along with others', without any interaction either with the dealer or among themselves. Existing NI-VSS schemes employing either exponentiated ElGamal or lattice-based encryption schemes involve zero-knowledge range proofs, resulting in higher computational and communication complexities.
In this work, we present cgVSS, a NI-VSS protocol that uses class groups for encryption. In cgVSS, the dealer encrypts the secret shares in the exponent through a class group encryption such that the parties can directly decrypt their shares. The existence of a subgroup of the class group where discrete logarithms are tractable allows the receiver to efficiently decrypt the share even though it is encoded in the exponent. This yields a novel yet simple VSS protocol in which the dealer publishes the encryptions of the shares and a zero-knowledge proof of the correctness of the dealing. The linear homomorphic nature of the employed encryption scheme allows for an efficient zero-knowledge proof of correct sharing. Given the rise in demand for VSS protocols in the blockchain space, especially for publicly verifiable distributed key generation (DKG), our NI-VSS construction can be particularly impactful. We implement our cgVSS protocol using the BICYCL library and compare its performance with a simplified version of the state-of-the-art NI-VSS by Groth. Our protocol reduces the message complexity and the bit length of the broadcast message by at least 5.6x for a 150-party system, with 2.7x and 2.4x speed-ups in the dealer and receiver computation times, respectively.
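To fix ideas, the arithmetic of the dealing step is ordinary Shamir sharing; the sketch below omits cgVSS's class group encryption of each share and the zero-knowledge proof of correct sharing, and uses a toy prime field.

    # Plain Shamir sharing: degree-t polynomial with constant term `secret`;
    # party i's share is f(i). The dealer would publish Enc_i(share_i) for
    # each party i, plus a proof of correct sharing.
    import random

    P = 2**127 - 1  # toy prime; illustrative only

    def deal(secret, t, n):
        coeffs = [secret] + [random.randrange(P) for _ in range(t)]
        return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
                for i in range(1, n + 1)]

    shares = deal(secret=42, t=2, n=5)  # any t+1 shares reconstruct the secret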
Towards the Links of Cryptanalytic Methods on MPC/FHE/ZK-Friendly Symmetric-Key Primitives
Symmetric-key primitives designed over the prime field $\mathbb{F}_p$ with odd characteristic, rather than the traditional $\mathbb{F}_2^{n}$, are becoming the most popular choice for MPC/FHE/ZK protocols for better efficiency. However, security over $\mathbb{F}_p$ is less well understood, as there are highly nontrivial gaps when extending the cryptanalysis tools and experience built on $\mathbb{F}_2^{n}$ over the past few decades to $\mathbb{F}_p$.
At CRYPTO 2015, Sun et al. established the links among impossible differential, zero-correlation linear, and integral cryptanalysis over $\mathbb{F}_2^{n}$ from the perspective of distinguishers. In this paper, following the definition of linear correlations over $\mathbb{F}_p$ by Baignères, Stern and Vaudenay at SAC 2007, we successfully establish comprehensive links over $\mathbb{F}_p$, by reproducing the proofs and offering alternatives when necessary. Interesting and important differences between $\mathbb{F}_p$ and $\mathbb{F}_2^n$ are observed.
- Zero-correlation linear hulls cannot lead to integral distinguishers in some cases over $\mathbb{F}_p$, while this is always possible over $\mathbb{F}_2^n$, as proven by Sun et al.
- When the newly established links are applied to GMiMC, its impossible differential, zero-correlation linear hull and integral distinguishers can be extended by up to 3 rounds in most cases, and even to an arbitrary number of rounds in some special and limited cases that appear only over $\mathbb{F}_p$. It should be noted that none of these distinguishers invalidate GMiMC's security claims.
The theories developed over $\mathbb{F}_p$ behind these links, together with the properties identified (be they similar or different), bring a clearer and easier understanding of the security of primitives over this emerging field $\mathbb{F}_p$, which we believe will provide useful guidance for future cryptanalysis and design.
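For readers new to the $\mathbb{F}_p$ setting, the linear correlation these links build on is defined via additive characters; a standard formulation (our paraphrase of the Baignères-Stern-Vaudenay notion, not a quotation from this paper) for $F:\mathbb{F}_p^n\to\mathbb{F}_p^m$ and masks $a\in\mathbb{F}_p^n$, $b\in\mathbb{F}_p^m$ is
$$\mathrm{cor}_F(a,b) \;=\; \frac{1}{p^n}\sum_{x\in\mathbb{F}_p^n}\omega^{\langle b,\,F(x)\rangle-\langle a,\,x\rangle}, \qquad \omega=e^{2\pi i/p},$$
which for $p=2$ collapses to the familiar $\pm1$-valued correlation.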
Homomorphic Signatures for Subset and Superset Mixed Predicates and Its Applications
In homomorphic signatures for subset predicates (HSSB), each message (to be signed) is a set. Any signature on a set $M$ allows us to derive a signature on any subset $M'\subseteq M$. Its superset version, which should be called homomorphic signatures for superset predicates (HSSP), allows us to derive a signature on any superset $M'\supseteq M$. In this paper, we propose homomorphic signatures for subset and superset mixed predicates (HSSM) as a simple combination of HSSB and HSSP. In HSSM, any signature on a message of a set-pair $(M, W)$ allows us to derive a signature on any $(M', W')$ such that $M'\subseteq M$ and $W'\supseteq W$. We propose an original HSSM scheme which is unforgeable under the decisional linear assumption and completely context-hiding. We show that HSSM has various applications, which include disclosure-controllable HSSB, disclosure-controllable redactable signatures, (key-delegatable) superset/subset predicate signatures, and wildcarded identity-based signatures.
PSI from ring-OLE
Private set intersection (PSI) is one of the most extensively studied instances of secure computation. PSI allows two parties to compute the intersection of their input sets without revealing anything else. Other useful variants include PSI-Payload, where the output includes payloads associated with members of the intersection, and PSI-Sum, where the output includes the sum of the payloads instead of individual ones.
In this work, we make two related contributions. First, we construct simple and efficient protocols for PSI and PSI-Payload from a ring version of oblivious linear function evaluation (ring-OLE) that can be efficiently realized using recent ring-LPN based protocols. A standard OLE over a field $\mathbb{F}$ allows a sender with $a,b \in \mathbb{F}$ to deliver $ax+b$ to a receiver who holds $x \in \mathbb{F}$. Ring-OLE generalizes this to a ring $\mathcal{R}$, in particular a polynomial ring over $\mathbb{F}$. Our second contribution is an efficient general reduction of a variant of PSI-Sum to PSI-Payload and secure inner product.
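As a quick illustration, the ideal OLE functionality is just a single affine evaluation; the toy sketch below shows the functionality only, while the actual protocols realize it securely from ring-LPN and over a polynomial ring rather than a field.

    # Ideal OLE over a toy prime field.
    P = 2**61 - 1

    def ole(a, b, x):
        """Sender holds (a, b); receiver holds x and learns only a*x + b."""
        return (a * x + b) % P

    assert ole(a=3, b=5, x=10) == 35  # receiver learns ax+b, not a or b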
Our protocols have better communication cost than state-of-the-art PSI protocols, especially when requiring security against malicious parties and when allowing input-independent preprocessing. Compared to previous maliciously secure PSI protocols that have a similar computational cost, our online communication is 2x better for small sets ($2^8$ to $2^{12}$ elements) and 20% better for large sets ($2^{20}$ to $2^{24}$). Our protocol is also simpler to describe and implement. We obtain even bigger improvements over the state of the art (4-5x better running time) for our variant of PSI-Sum.
On Extremal Algebraic Graphs and implementations of new cubic Multivariate Public Keys
Algebraic constructions from extremal graph theory have been used efficiently for the construction of low-density parity-check codes for satellite communication, for constructions of stream ciphers, and for post-quantum protocols of noncommutative cryptography and the corresponding El Gamal type cryptosystems. We briefly survey some results in these applications and present the idea of using algebraic graphs for the development of Multivariate Public Keys (MPK). Some MPK schemes are presented at a theoretical level, and the implementation of one of them is discussed.
On Sustainable Ring-based Anonymous Systems
Anonymous systems (e.g. anonymous cryptocurrencies and updatable anonymous credentials) often follow a construction template where an account can only perform a single anonymous action, which in turn potentially spawns new (and still single-use) accounts (e.g. a UTXO with a balance to spend or a session with a score to claim). Due to the anonymous nature of the actions, no party can be sure which account has taken part in an action, and the system must therefore maintain an ever-growing list of potentially unused accounts to keep running correctly. Consequently, anonymous systems constructed from this common template are seemingly not sustainable.
In this work, we study the sustainability of ring-based anonymous systems, where a user performing an anonymous action is hidden within a set of decoy users, traditionally called a ``ring''.
On the positive side, we propose a general technique for ring-based anonymous systems to achieve sustainability. Along the way, we define a general model of decentralised anonymous systems (DAS) for arbitrary anonymous actions, and provide a generic construction which provably achieves sustainability. As a special case, we obtain the first construction of anonymous cryptocurrencies achieving sustainability without compromising availability. We also demonstrate the generality of our model by constructing sustainable decentralised anonymous social networks.
On the negative side, we show empirically that Monero, one of the most popular anonymous cryptocurrencies, is unlikely to be sustainable without altering its current ring sampling strategy. The main subroutine is a sub-quadratic-time algorithm for detecting used accounts in a ring-based anonymous system.
Finding Desirable Substitution Box with SASQUATCH
This paper presents ``SASQUATCH'', an open-source tool that aids in finding an unknown substitution box (SBox) given its properties. The inspiration for our work can be directly attributed to the DCC 2022 paper by Lu, Mesnager, Cui, Fan and Wang. Taking their work as the foundation (i.e., converting the problem of SBox search to a satisfiability modulo theory instance and then invoking a solver), we extend it in multiple directions (including -- but not limited to -- coverage of more options, imposing a time limit, parallel execution for multiple SBoxes, and support for non-bijective SBoxes), and package everything within an easy-to-use interface. We also present ASIC benchmarks for some of the SBoxes.
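The SMT framing at the core of this approach can be sketched in a few lines with the z3 solver; this toy only asks for a bijective 4-bit SBox without fixed points, whereas a real search like SASQUATCH's adds cryptographic constraints such as bounds on differential uniformity.

    # Toy SMT instance for SBox search.
    from z3 import BitVec, Distinct, Solver, sat

    S = [BitVec(f"s{i}", 4) for i in range(16)]
    solver = Solver()
    solver.add(Distinct(*S))                      # bijectivity
    solver.add(*[S[i] != i for i in range(16)])   # no fixed points

    if solver.check() == sat:
        m = solver.model()
        print([m[S[i]].as_long() for i in range(16)])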
Understanding the Duplex and Its Security
At SAC 2011, Bertoni et al. introduced the keyed duplex construction as a tool to build permutation-based authenticated encryption schemes. The construction was generalized to full-state absorption by Mennink et al. (ASIACRYPT 2015). Daemen et al. (ASIACRYPT 2017) generalized it further to cover many more use cases and proved the security of this general construction, and Dobraunig and Mennink (ASIACRYPT 2019) derived a leakage resilience security bound for this construction. Due to its generality, the full-state keyed duplex construction that we know today has a plethora of applications, but the flip side of the coin is that the general construction is hard to grasp and the corresponding security bounds are very complex. Consequently, the state-of-the-art results on the full-state keyed duplex construction are not used to the fullest. In this work, we revisit the history of the duplex construction, give a comprehensive discussion of its possibilities and limitations, and demonstrate how the two security bounds (of Daemen et al. and of Dobraunig and Mennink) can be interpreted in particular applications of the duplex.
Study of Arithmetization Methods for STARKs
This technical paper explores two solutions for the arithmetization of computational integrity statements in STARKs, namely the algebraic intermediate representation (AIR) and its preprocessed variant (PAIR). The work then focuses on their soundness implications for Reed-Solomon proximity testing. It proceeds by presenting a comparative study of these methods, providing their theoretical foundations and deriving the degree bounds for low-degree proximity testing. The study shows that using PAIR increases the degree bound for Reed-Solomon proximity testing, which affects its soundness and complexity. However, the possibility of reducing the degree bound with multiple selector columns is also explored, namely by following an approach based on the decomposition of the selector values. Focusing on performance optimization, the work proceeds by qualitatively comparing the computational demands of the components of both arithmetization methods, particularly their impact on the low-degree extensions. The paper concludes that, while PAIR might simplify constraint enforcement, it can easily be translated to AIR, and system testing with benchmarks is necessary to determine the application-specific superiority of either method. This work should provide insight into the strengths and limitations of each method, helping researchers and practitioners in the field of STARKs make informed design choices.
Practical Robust DKG Protocols for CSIDH
A Distributed Key Generation (DKG) protocol is an essential component of threshold cryptography. DKGs enable a group of parties to generate a secret and public key pair in a distributed manner so that the secret key is protected from being exposed, even if a certain number of parties are compromised. Robustness further guarantees that the construction of the key pair is always successful, even if malicious parties try to sabotage the computation. In this paper, we construct two efficient robust DKG protocols in the CSIDH setting that work with Shamir secret sharing. Both proposed protocols are proven actively secure in the quantum random oracle model and use an information-theoretically (IT) secure verifiable secret sharing (VSS) scheme built from bivariate polynomials. As a tool, we construct a new piecewise verifiable proof system for structured public keys, which could be of independent interest. In terms of isogeny computations, our protocols outperform the previously proposed DKG protocols CSI-RAShi and Structured CSI-RAShi. For instance, using our DKG protocols, 4 parties can sample a public key of size 4kB for CSI-FiSh and CSI-SharK, respectively, 3.4 and 1.7 times faster than the current alternatives. On the other hand, since we use an IT-secure VSS, the fraction of corrupted parties is limited to less than a third, and the communication cost of our schemes scales slightly worse with an increasing number of parties. For a low number of parties, our scheme still outperforms the alternatives in terms of communication.
Efficient Hybrid Exact/Relaxed Lattice Proofs and Applications to Rounding and VRFs
In this work, we study hybrid exact/relaxed zero-knowledge proofs from lattices, where the proved relation is exact in one part and relaxed in the other. Such proofs arise in important real-life applications such as those requiring verifiable PRF evaluation and have so far not received significant attention as a standalone problem.
We first introduce a general framework, LANES+, for realizing such hybrid proofs efficiently by combining standard relaxed proofs of knowledge RPoK and the LANES framework (due to a series of works in Crypto'20, Asiacrypt'20, ACM CCS'20). The latter framework is a powerful lattice-based proof system that can prove exact linear and multiplicative relations. The advantage of LANES+ is its ability to realize hybrid proofs more efficiently by exploiting RPoK for the high-dimensional part of the secret witness while leaving a low-dimensional secret witness part for the exact proof that is proven at a significantly lower cost via LANES. Thanks to the flexibility of LANES+, other exact proof systems can also be supported.
We apply our LANES+ framework to construct substantially shorter proofs of rounding, which is a central tool for verifiable deterministic lattice-based cryptography. Based on our rounding proof, we then design an efficient long-term verifiable random function (VRF), named LaV. LaV leads to the shortest VRF outputs among the proposals of standard (i.e., long-term and stateless) VRFs based on quantum-safe assumptions.
Of independent interest, we also present generalized results for challenge difference invertibility, a fundamental soundness security requirement for many proof systems.
SMAUG: Pushing Lattice-based Key Encapsulation Mechanisms to the Limits
Recently, NIST has announced Kyber, a lattice-based key encapsulation mechanism (KEM), as a post-quantum standard.
However, it is not the most efficient scheme among NIST's KEM finalists.
Saber enjoys more compact sizes and faster performance, and Mera et al. (TCHES '21) further pushed its efficiency, proposing an even shorter KEM, Sable.
As KEMs are frequently used on the Internet, such as in the TLS protocol, it is essential to achieve high efficiency while maintaining sufficient security.
In this paper, we further push the efficiency limit of lattice-based KEMs by proposing SMAUG, a new post-quantum KEM scheme submitted to the Korean Post-Quantum Cryptography (KPQC) competition, whose IND-CCA2 security is based on the combination of the MLWE and MLWR problems.
We adopt several recent developments in lattice-based cryptography, targeting the \textit{smallest} and \textit{fastest} KEM while maintaining high enough security against various attacks, making full-fledged use of sparse secrets.
Our design choices allow SMAUG to balance the decryption failure probability and ciphertext sizes without utilizing error correction codes, whose side-channel resistance remains open.
With a constant-time C reference implementation, SMAUG achieves ciphertext sizes up to 12% and 9% smaller than Kyber and Saber, respectively, with much faster running times, up to 103% and 58% faster.
Compared to Sable, SMAUG has the same ciphertext sizes but a larger public key, giving a trade-off between public key size and performance; SMAUG has 39%-55% faster encapsulation and decapsulation in parameter sets of comparable security.
Goldfish: No More Attacks on Proof-of-Stake Ethereum
The latest message driven (LMD) greedy heaviest observed sub-tree (GHOST) consensus protocol is a critical component of proof-of-stake (PoS) Ethereum. In its current form, the protocol is brittle, as evidenced by recent attacks and patching attempts. We report on Goldfish, a considerably simplified candidate under consideration for a future Ethereum protocol upgrade. We prove that Goldfish satisfies the properties required of a drop-in replacement for LMD GHOST: Goldfish is secure in synchronous networks under dynamic participation, assuming a majority of the nodes (called validators) follows the protocol. Goldfish is reorg resilient (i.e., honestly produced blocks are guaranteed inclusion in the ledger) and supports fast confirmation (i.e., the expected confirmation latency is independent of the desired security level). We show that subsampling validators can improve the communication efficiency of Goldfish, and that Goldfish is composable with finality gadgets and accountability gadgets, which improves state-of-the-art ebb-and-flow protocols. Attacks on LMD GHOST exploit lack of coordination among honest validators, typically provided by a locking mechanism in classical BFT protocols. However, locking requires votes from a quorum of all participants and is not compatible with dynamic availability. Goldfish is powered by a novel coordination mechanism to synchronize the honest validators' actions under dynamic participation. Experiments with our implementation of Goldfish demonstrate the practicality of this mechanism for Ethereum.
Extremal algebraic graphs, quadratic multivariate public keys and temporal rules
We introduce large groups of quadratic transformations of a vector space over finite fields, defined via symbolic computations using algebraic constructions of Extremal Graph Theory. They can serve as platforms for protocols of Noncommutative Cryptography whose security is based on the complexity of the word decomposition problem in a noncommutative polynomial transformation group.
Modifications of these symbolic computations in the case of large fields of characteristic two allow us to define quadratic bijective multivariate public keys such that the inverses of the public maps have a large polynomial degree. Another family of public keys is defined over an arbitrary commutative ring with unity.
We suggest using the constructed protocols for the private delivery of quadratic encryption maps, instead of the public usage of these transformations, i.e. the idea of temporal multivariate rules with their periodical change.