Papers updated in last 7 days (99 results)

Last updated:  2024-10-06
Universally Composable SNARKs with Transparent Setup without Programmable Random Oracle
Christian Badertscher, Matteo Campanelli, Michele Ciampi, Luigi Russo, and Luisa Siniscalchi
Non-interactive zero-knowledge (NIZK) proofs allow a prover to convince a verifier about the validity of an NP-statement by sending a single message and without disclosing any additional information (besides the validity of the statement). Single-message cryptographic proofs are very versatile, which has made them widely used both in theory and in practice. This is particularly true for succinct proofs, where the length of the message is sublinear in the size of the NP relation. This versatility, unfortunately, comes at a price, since any NIZK proof system requires some form of setup, like a common reference string. One way to circumvent the need for a setup is by relying on a random oracle. Unfortunately, if the random oracle is modeled as a global resource that the simulator is not allowed to program, then it is impossible to obtain a secure NIZK. This impossibility has been circumvented by allowing the simulator (and the real-world adversary) to program the RO, and allowing the honest parties to check, via a special interface, if the RO outputs have been programmed. In this work, we show that this impossibility can be circumvented by meaningfully weakening the Universal Composability framework following the model proposed by Broadnax et al. (Eurocrypt 2017). In this model, the ideal-world functionalities are allowed to interact with oracles that have quasi-polynomial time capabilities. As our main result, we propose the first composable NIZK proof system that relies on a global (non-programmable) random oracle as its only form of setup. The NIZK scheme we propose is witness-succinct (with proofs logarithmic in the size of the witness). Our results break both the programmability barrier for the random oracle and the polylogarithmic proof-size barrier for UC-secure NIZKs with transparent setups. We are able to construct our NIZK using the framework proposed by Ganesh et al. (Eurocrypt 2023), which requires—among other building blocks—a polynomial commitment scheme with special features and a polynomial encoding scheme (a primitive that appropriately masks a witness as a polynomial). As a core technical contribution, we show a polynomial commitment of this type using a basic component of Bulletproofs as a building block, as well as a polynomial encoding based on techniques completely different from those of Ganesh et al.
Last updated:  2024-10-06
LWE with Quantum Amplitudes: Algorithm, Hardness, and Oblivious Sampling
Yilei Chen, Zihan Hu, Qipeng Liu, Han Luo, and Yaxin Tu
The learning with errors problem (LWE) is one of the most important building blocks for post-quantum cryptography. To better understand the quantum hardness of LWE, it is crucial to explore quantum variants of LWE. To this end, Chen, Liu, and Zhandry [Eurocrypt 2022] defined the S|LWE> and C|LWE> problems by encoding the error of LWE samples into quantum amplitudes, and showed efficient quantum algorithms for a few interesting amplitudes. However, algorithms and hardness results for the most interesting amplitude, Gaussian, had not been addressed before. In this paper, we show new algorithms, hardness results and applications for S|LWE> and C|LWE> with real Gaussian, Gaussian with linear or quadratic phase terms, and other related amplitudes. Let n be the dimension of LWE samples. Our main results are: 1. There is a $2^{\tilde{O}(\sqrt{n})}$-time algorithm for S|LWE> with Gaussian amplitude with known phase, given $2^{\tilde{O}(\sqrt{n})}$ many quantum samples. The algorithm is modified from Kuperberg's sieve, and in fact works for more general amplitudes as long as the amplitudes and phases are completely known. 2. There is a polynomial-time quantum algorithm for solving S|LWE> and C|LWE> for Gaussian with quadratic phase amplitudes, where the sample complexity is as small as $\tilde{O}(n)$. As an application, we give a quantum oblivious LWE sampler where the core quantum sampler requires only quasi-linear sample complexity. This improves upon the previous oblivious LWE sampler given by Debris-Alazard, Fallahpour, Stehlé [STOC 2024], whose core quantum sampler requires $\tilde{O}(nr)$ sample complexity, where $r$ is the standard deviation of the error. 3. There exist polynomial-time quantum reductions from standard LWE or worst-case GapSVP to S|LWE> with Gaussian amplitude with small unknown phase, and arbitrarily many samples. Compared to the first two items, the appearance of the unknown phase term places a barrier in designing efficient quantum algorithms for solving standard LWE via S|LWE>.
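For orientation, the S|LWE> sample form, following the definition of Chen, Liu, and Zhandry as a sketch (normalization omitted; the paper's exact notation may differ): the solver receives a classical vector $\mathbf{a}_i \leftarrow \mathbb{Z}_q^n$ together with the quantum state

$$\sum_{e \in \mathbb{Z}_q} f(e)\, \bigl|\, \langle \mathbf{a}_i, \mathbf{s} \rangle + e \bmod q \,\bigr\rangle,$$

and must recover $\mathbf{s}$. The amplitudes studied here take the shape $f(e) \propto e^{-\pi e^2/\sigma^2}\, e^{2\pi i \phi(e)}$, where the phase $\phi$ is linear or quadratic in $e$ and may be known or unknown.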
Last updated:  2024-10-06
PAC-Private Algorithms
Mayuri Sridhar, Hanshen Xiao, and Srinivas Devadas
Provable privacy typically requires involved analysis and is often associated with unacceptable accuracy loss. While many empirical verification or approximation methods, such as Membership Inference Attacks (MIA) and Differential Privacy Auditing (DPA), have been proposed, these do not offer rigorous privacy guarantees. In this paper, we apply recently proposed Probably Approximately Correct (PAC) Privacy to give formal, mechanized, simulation-based proofs for a range of practical, black-box algorithms: K-Means, Support Vector Machines (SVM), Principal Component Analysis (PCA) and Random Forests. To provide these proofs, we present a new simulation algorithm that efficiently determines the anisotropic noise perturbation required for any given level of privacy. We provide a proof of correctness for this algorithm and demonstrate that anisotropic noise has substantive benefits over isotropic noise. Stable algorithms are easier to privatize, and we demonstrate privacy amplification resulting from introducing regularization in these algorithms; meaningful privacy guarantees are obtained with small losses in accuracy. We propose new techniques in order to reduce instability in algorithmic output and convert intractable geometric stability verification into efficient deterministic stability verification. Thorough experiments are included, and we validate our provable adversarial inference hardness against state-of-the-art empirical attacks.
Last updated:  2024-10-06
Quantum Circuits of AES with a Low-depth Linear Layer and a New Structure
Haotian Shi and Xiutao Feng
In recent years, quantum computing has developed rapidly. The security threat posed by quantum computing to cryptography makes it necessary to better evaluate the resource cost of attacking algorithms, some of which require quantum implementations of the attacked cryptographic building blocks. In this paper, we optimize quantum circuits of AES in several aspects. Firstly, based on de Brugière et al.'s greedy algorithm, we propose an improved depth-oriented algorithm for synthesizing low-depth CNOT circuits with no ancilla qubits. Our algorithm finds a CNOT circuit of AES MixColumns with depth 10, which breaks a recent record of depth 16. In addition, our algorithm gives low-depth CNOT circuits for many MDS matrices and matrices used in block ciphers studied in related work. Secondly, we present a new structure named the compressed pipeline structure to synthesize quantum circuits of AES, which can be used for constructing quantum oracles employed in quantum attacks based on Grover's and Simon's algorithms. When the number of ancilla qubits required by the round function and its inverse is not very large, our structure will have a better trade-off of $D$-$W$ cost. Moreover, our encryption oracle will have the lowest depth to date. We then give detailed encryption circuits of AES-128 under the guidance of our structure and make some comparisons with other circuits. Finally, the encryption part and the key schedule part have their own application scenarios. The encryption oracle used in Simon's algorithm, built with the former, will have smaller round depth. For example, we can construct an AES-128 encryption oracle with $T$-depth 33, while the previous best result is 60. A small variant of the latter, along with our method to make an S-box input-invariant, can avoid the allocation of extra ancilla qubits for storing key words in the shallowed pipeline structure. Based on this, we achieve an encryption circuit of AES-128 with the lowest $TofD$-$W$ cost to date, 130720.
Last updated:  2024-10-06
How to Recover the Full Plaintext of XCB
Peng Wang, Shuping Mao, Ruozhou Xu, Jiwu Jing, and Yuewu Wang
XCB, a tweakable enciphering mode, is part of IEEE Std. 1619.2 for shared storage media. We show, through three plaintext recovery attacks, that no version of XCB is secure. A key observation is that XCB behaves like an LRW1-type tweakable block cipher for single-block messages, which lacks CCA security. The first attack targets one-block XCB, using three queries to recover the plaintext. The second one requires four queries to recover all of the plaintext except one block. The last one requires seven queries to recover the full plaintext. The first attack applies to any scheme that follows the XCB structure, whereas the latter two attacks work on all versions of XCB, exploiting the separable property of the underlying universal hash function. We also discuss the impact of these vulnerabilities on XCB-based applications, such as disk encryption, nonce-based encryption, deterministic authenticated encryption and robust authenticated encryption, highlighting the risks due to XCB's failure to achieve STPRP security. To address these flaws, we propose the XCB* structure, an improved version of XCB that adds only two XOR operations. We prove that XCB* is STPRP-secure when using AXU hash functions, SPRPs, and a secure IV-based stream cipher.
Last updated:  2024-10-05
The Supersingular Isogeny Path and Endomorphism Ring Problems: Unconditional Reductions
Maher Mamah
In this paper, we study several computational problems related to current post-quantum cryptosystems based on isogenies between supersingular elliptic curves. In particular, we prove that the supersingular isogeny path and endomorphism ring problems are unconditionally equivalent under polynomial-time reductions. We show that access to a factoring oracle is sufficient to solve the Quaternion path problem of KLPT and prove that these problems are equivalent, where previous results either assumed heuristics or the generalised Riemann Hypothesis (GRH). Recently, these reductions have become foundational for the security of isogeny-based cryptography.
Last updated:  2024-10-05
A note on the G-FFT
Ulrich Haböck
For primes $p$ such that $p+1$ is smooth, the G-FFT from Li and Xing [LX23] is an algebraic FFT, which at first glance seems equivalent to the circle FFT from [IACR eprint 2024/278]: It also uses the circle curve over $\mathbb F_p$ (in other words the projective line) as its underlying domain, and interpolates by low-degree functions with poles over the same set of points. However, their approach to control the degree of the FFT basis is fundamentally different. The G-FFT makes use of punctured Riemann-Roch spaces, and the construction works only with the group doubling map; no projection onto the $x$-axis is involved. In this note, we give an elementary description of the G-FFT without using abstract algebra. We describe a variant which uses a simpler, and in our opinion more natural, function space, and which treats the exceptional point of the domain (the group identity) differently. In comparison to the circle FFT, the G-FFT (both the original as well as our variant) has the following downsides. Interpolation and domain evaluation cost double the number of multiplications (the twiddle is not an "odd" function), and the function space is not invariant under the group action, causing additional overhead when applied in STARKs.
Last updated:  2024-10-05
A note on adding zero-knowledge to STARKs
Ulrich Haböck and Al Kindi
We discuss zero-knowledge in the context of FRI-based STARKs using techniques desirable in practice: randomization by polynomials over the base field, and decomposition of the overall quotient into polynomials of smaller degree.
Last updated:  2024-10-05
HARTS: High-Threshold, Adaptively Secure, and Robust Threshold Schnorr Signatures
Renas Bacho, Julian Loss, Gilad Stern, and Benedikt Wagner
Threshold variants of the Schnorr signature scheme have recently been at the center of attention due to their applications to cryptocurrencies. However, existing constructions for threshold Schnorr signatures among a set of $n$ parties with corruption threshold $t_c$ suffer from at least one of the following drawbacks: (i) security only against static (i.e., non-adaptive) adversaries, (ii) cubic or higher communication cost to generate a single signature, (iii) strong synchrony assumptions on the network, or (iv) $t_c+1$ parties are sufficient to generate a signature, i.e., the corruption threshold of the scheme equals its reconstruction threshold. Especially (iv) turns out to be a severe limitation for many asynchronous real-world applications where $t_c < n/3$ is necessary to maintain liveness, but a higher signing threshold of $n-t_c$ is needed. A recent scheme, ROAST, proposed by Ruffing et al. (ACM CCS 2022) addresses (iii) and (iv), but still falls short of obtaining subcubic communication complexity and adaptive security. In this work, we present HARTS, the first threshold Schnorr signature scheme to incorporate all these desiderata. More concretely: - HARTS is adaptively secure and remains fully secure and operational even under asynchronous network conditions in the presence of up to $t_c < n/3$ malicious parties. This is optimal. - HARTS outputs a Schnorr signature of size $\lambda$ with a near-optimal amortized communication cost of $O(\lambda n^2 \log{n})$ bits and a single asynchronous online round per signature. - HARTS is high-threshold: no fewer than $t_r+1$ signature shares can be combined to yield a full signature, where any $t_r\in [t_c,n-t_c)$ is supported. This especially covers the case $t_r \geq 2n/3 > 2t_c$. This is optimal. We prove our result in a modular fashion in the algebraic group model. At the core of our construction, we design a new simple and adaptively secure high-threshold asynchronous verifiable secret sharing (AVSS) scheme which may be of independent interest.
Last updated:  2024-10-05
Succinct Arguments over Towers of Binary Fields
Benjamin E. Diamond and Jim Posen
We introduce an efficient SNARK for towers of binary fields. Adapting Brakedown (CRYPTO '23), we construct a multilinear polynomial commitment scheme suitable for polynomials over tiny fields, including the field with just two elements. Our commitment scheme, unlike those of previous works, treats small-field polynomials with no embedding overhead. We further introduce binary-field adaptations of HyperPlonk (EUROCRYPT '23)'s product and permutation checks and of Lasso ('23)'s lookup. Our binary PLONKish variant captures standard hash functions—like Keccak-256 and Grøstl—extremely efficiently. With recourse to thorough performance benchmarks, we argue that our scheme can efficiently generate precisely those Keccak-256 proofs which critically underlie modern efforts to scale Ethereum.
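To make the field structure concrete, the following Python sketch implements multiplication in a Wiedemann-style tower of binary fields, a standard construction for such towers (whether this matches the paper's exact tower and element representation is an assumption): level $k$ elements occupy $2^k$ bits, with $T_0 = \mathbb{F}_2$ and $T_{i+1} = T_i[x_i]/(x_i^2 + x_{i-1}x_i + 1)$, taking $x_{-1} := 1$.

    def tower_mul(a: int, b: int, k: int) -> int:
        """Multiply a and b as elements of the level-k tower field (2^k bits each)."""
        if k == 0:                       # base field F_2
            return a & b
        half = 1 << (k - 1)
        mask = (1 << half) - 1
        a0, a1 = a & mask, a >> half     # a = a0 + a1 * x_{k-1}
        b0, b1 = b & mask, b >> half
        a0b0 = tower_mul(a0, b0, k - 1)
        a1b1 = tower_mul(a1, b1, k - 1)
        # Karatsuba: a0*b1 + a1*b0 = (a0+a1)(b0+b1) + a0*b0 + a1*b1 (XOR in char 2)
        mid = tower_mul(a0 ^ a1, b0 ^ b1, k - 1) ^ a0b0 ^ a1b1
        # x_{k-1}^2 = x_{k-2}*x_{k-1} + 1, so the a1*b1*x^2 term folds into both halves
        lo = a0b0 ^ a1b1
        hi = mid ^ mul_by_gen(a1b1, k - 1)
        return lo | (hi << half)

    def mul_by_gen(c: int, k: int) -> int:
        """Multiply c, an element of the level-k field, by the generator x_{k-1}."""
        if k == 0:
            return c                     # x_{-1} = 1 by convention
        half = 1 << (k - 1)
        c0, c1 = c & ((1 << half) - 1), c >> half
        return c1 | ((c0 ^ mul_by_gen(c1, k - 1)) << half)

As a sanity check, tower_mul(2, 3, 1) returns 1, matching $x_0(x_0 + 1) = x_0^2 + x_0 = 1$ in $\mathbb{F}_4$.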
Last updated:  2024-10-05
Optimized Privacy-Preserving Clustering with Fully Homomorphic Encryption
Chen Yang, Jingwei Chen, Wenyuan Wu, and Yong Feng
Clustering is a crucial unsupervised learning method extensively used in the field of data analysis. For analyzing big data, outsourced computation is an effective solution, but privacy concerns arise when sensitive information is involved. Fully homomorphic encryption (FHE) enables computations on encrypted data, making it ideal for such scenarios. However, existing privacy-preserving clustering schemes based on FHE are often constrained by the high computational overhead of FHE, typically requiring decryption and interactions after only one iteration of the clustering algorithm. In this work, we propose a more efficient approach to evaluate the one-hot vector for the index of the minimum in an array with FHE, which fully exploits the single-instruction-multiple-data parallelism of FHE schemes. By combining this with FHE bootstrapping, we present a practical FHE-based k-means clustering protocol that requires an optimal number of interaction rounds between the data owner and the server, i.e., it accomplishes the entire clustering process on encrypted data in a single round. We implement this protocol using the CKKS FHE scheme. Experiments show that our protocol significantly outperforms the state-of-the-art FHE-based k-means clustering protocols on various public datasets and achieves accuracy comparable to plaintext results. Additionally, we adapt our protocol to support mini-batch k-means for large-scale datasets and report its performance.
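In the clear, the functionality being evaluated homomorphically is just a one-hot argmin used for cluster assignment; a minimal Python sketch of this plaintext semantics (the homomorphic circuit in the paper computes this over CKKS ciphertexts and is necessarily very different):

    def one_hot_argmin(dists):
        """Indicator vector of the (first) minimum entry, e.g. for k-means assignment."""
        m = min(dists)
        first = dists.index(m)
        return [1 if i == first else 0 for i in range(len(dists))]

    assert one_hot_argmin([4.0, 1.5, 3.2]) == [0, 1, 0]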
Last updated:  2024-10-05
Oracle Separation Between Quantum Commitments and Quantum One-wayness
John Bostanci, Boyang Chen, and Barak Nehoran
We show that there exists a unitary quantum oracle relative to which quantum commitments exist but no (efficiently verifiable) one-way state generators exist. Both have been widely considered candidates for replacing one-way functions as the minimal assumption for cryptography—the weakest cryptographic assumption implied by all of computational cryptography. Recent work has shown that commitments can be constructed from one-way state generators, but the other direction has remained open. Our results rule out any black-box construction, and thus settle this crucial open problem, suggesting that quantum commitments (as well as their equivalence class of EFI pairs, quantum oblivious transfer, and secure quantum multiparty computation) appear to be the weakest among all known cryptographic primitives.
Last updated:  2024-10-05
The module action for isogeny based cryptography
Damien Robert
We extend the usual ideal action on oriented elliptic curves to a (Hermitian) module action on oriented (polarised) abelian varieties. Oriented abelian varieties are naturally enriched in $R$-modules, and our module action comes from the canonical power object construction on categories enriched in a closed symmetric monoidal category. In particular, our action is canonical and gives a fully fledged symmetric monoidal action. Furthermore, we give algorithms to compute this action in practice, generalising the usual algorithms in rank $1$. The action allows us to unify in the same framework, on the one hand, isogeny-based cryptography based on ordinary or oriented elliptic curves, and on the other hand, that based on supersingular elliptic curves defined over $\mathbb{F}_{p^2}$. In particular, from our point of view, supersingular elliptic curves over $\mathbb{F}_p$ are given by a rank $1$ module action, while (the Weil restriction of) those defined over $\mathbb{F}_{p^2}$ are given by a rank $2$ module action. As a consequence, rank $2$ module action inversion is at least as hard as the supersingular isogeny path problem. We thus propose to use Hermitian modules as an avatar of a cryptographic symmetric monoidal action framework. This generalizes the more standard cryptographic group action framework, and still allows for a NIKE (Non-Interactive Key Exchange). The main advantage of our action is that, presumably, Kuperberg's algorithm does not apply. Compared to CSIDH, this allows for more compact keys and much better scaling properties. In practice, we propose the key exchange scheme $\otimes$-MIKE (Tensor Module Isogeny Key Exchange). Alice and Bob start from a supersingular elliptic curve $E_0/\mathbb{F}_p$ and both compute a $2^n$-isogeny over $\mathbb{F}_{p^2}$. They each send the $j$-invariant of their curve. Crucially, unlike SIDH, no torsion information at all is required. Their common secret, given by the module action, is then a dimension $4$ principally polarised abelian variety. We obtain a very compact post-quantum NIKE: only 64B for NIST level $1$ security.
Last updated:  2024-10-05
Elliptic Curve Cryptography for the masses: Simple and fast finite field arithmetic
Michael Scott
Shaped prime moduli are often considered for use in elliptic curve and isogeny-based cryptography to allow for faster modular reduction. Here we focus on the most common choices for shaped primes that have been suggested, that is, pseudo-Mersenne, generalized Mersenne and Montgomery-friendly primes. We consider how best to exploit these shapes for maximum efficiency, and provide an open-source tool to automatically generate, test and time working high-level language finite-field code. Next we consider the advantage to be gained from implementations that are written in assembly language and which exploit special instructions, SIMD hardware if present, and the particularities of the algorithm being implemented.
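As a small illustration of why such prime shapes speed up reduction, here is a Python sketch of pseudo-Mersenne reduction, using Curve25519's $p = 2^{255} - 19$ for concreteness (the tool described above generates optimized code of this flavor; this sketch is ours, not its output):

    K, C = 255, 19
    P = (1 << K) - C                       # pseudo-Mersenne prime p = 2^K - C

    def reduce_pseudo_mersenne(x: int) -> int:
        """Reduce x (e.g. a 510-bit product) modulo p, using 2^K = C (mod p)."""
        while x >> K:                      # fold the high part back down
            hi, lo = x >> K, x & ((1 << K) - 1)
            x = lo + hi * C                # one multiplication by the small constant C
        return x if x < P else x - P       # final conditional subtraction

    assert reduce_pseudo_mersenne((P - 1) * (P - 1)) == pow(P - 1, 2, P)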
Last updated:  2024-10-04
A Complete Analysis of the BKZ Lattice Reduction Algorithm
Jianwei Li and Phong Q. Nguyen
We present the first rigorous dynamic analysis of BKZ, the most widely used lattice reduction algorithm besides LLL: we provide guarantees on the quality of the current lattice basis during execution. Previous analyses were either heuristic or only applied to theoretical variants of BKZ, not the real BKZ implemented in software libraries. Our analysis extends to a generic BKZ algorithm where the SVP-oracle is replaced by an approximate oracle and/or the basis update is not necessarily performed by LLL. As an application, we observe that in certain approximation regimes, it is more efficient to use BKZ with an approximate rather than exact SVP-oracle.
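For reference, one BKZ tour in schematic Python (a sketch of the standard algorithm the analysis targets, not of the analysis itself; svp_oracle and lll_update are hypothetical stand-ins for the SVP-oracle and basis update that the paper's generic variant allows to be approximate or non-LLL):

    def bkz_tour(basis, beta, svp_oracle, lll_update):
        """One tour of blockwise lattice reduction with block size beta."""
        n = len(basis)
        for j in range(n - 1):
            k = min(j + beta, n)
            v = svp_oracle(basis, j, k)          # (approximately) shortest vector in
            basis = lll_update(basis, j, k, v)   # the projected block; insert, re-reduce
        return basis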
Last updated:  2024-10-04
Newton Polytope-Based Strategy for Finding Small Roots of Multivariate Polynomials
Yansong Feng, Abderrahmane Nitaj, and Yanbin Pan
Coppersmith's method plays an important role in cryptanalysis. By introducing sumset theory from additive combinatorics as a new tool, we propose a novel Newton polytope-based strategy for Coppersmith's method. With our novel strategy, we provide the first provable and efficient algorithm for calculating the asymptotic bounds of Coppersmith's method, which is typically a tedious and non-trivial task. Moreover, our new perspective offers a better understanding of Coppersmith's method. As a byproduct, we apply our new technique to prove the heuristic introduced by Meers and Nowakowski at Asiacrypt'23 and improve the cryptanalytic result for the Commutative Isogeny Hidden Number Problem.
Last updated:  2024-10-04
Application-Aware Approximate Homomorphic Encryption: Configuring FHE for Practical Use
Andreea Alexandru, Ahmad Al Badawi, Daniele Micciancio, and Yuriy Polyakov
Fully Homomorphic Encryption (FHE) is a powerful tool for performing privacy-preserving analytics over encrypted data. A promising method for FHE over real and complex numbers is approximate homomorphic encryption, instantiated with the Cheon-Kim-Kim-Song (CKKS) scheme. The CKKS scheme enables efficient evaluation for many privacy-preserving machine learning applications. While the efficiency advantages of CKKS are clear, there is currently a lot of confusion on how to securely instantiate the scheme for any given application, especially after secret-key recovery attacks were discovered by Li and Micciancio (EUROCRYPT'21) for the $IND-CPA^D$ setting, i.e., where decryption results are shared with other parties. On the one hand, the generic definition of $IND-CPA^D$ is application-agnostic and often requires impractically large parameters. On the other hand, practical CKKS implementations target specific applications and use tighter parameters. A good illustration is given by the recent secret-key recovery attacks against a CKKS implementation in the OpenFHE library by Guo et al. (USENIX Security'24). These attacks misuse the library by employing different circuits during parameter estimation and run-time computation, yet they do not violate the generic (application-agnostic) $IND-CPA^D$ definition. To address this confusion, we introduce the notion of application-aware homomorphic encryption and devise related security definitions, which correspond more closely to how homomorphic encryption schemes are implemented and used in practice. We then formulate guidelines for implementing the application-aware homomorphic encryption model to achieve $IND-CPA^D$ security for practical applications of CKKS. We also show that our application-aware model can be used for secure, efficient instantiation of exact homomorphic encryption schemes.
Last updated:  2024-10-04
A New World in the Depths of Microcrypt: Separating OWSGs and Quantum Money from QEFID
Amit Behera, Giulio Malavolta, Tomoyuki Morimae, Tamer Mour, and Takashi Yamakawa
While in classical cryptography, one-way functions (OWFs) are widely regarded as the “minimal assumption,” the situation in quantum cryptography is less clear. Recent works have put forward two concurrent candidates for the minimal assumption in quantum cryptography: one-way state generators (OWSGs), postulating the existence of a hard search problem with an efficient verification algorithm, and EFI pairs, postulating the existence of a hard distinguishing problem. Two recent papers [Khurana and Tomer STOC’24; Batra and Jain FOCS’24] showed that OWSGs imply EFI pairs, but the reverse direction remained open. In this work, we give strong evidence that the opposite direction does not hold: We show that there is a quantum unitary oracle relative to which EFI pairs exist, but OWSGs do not. In fact, we show a slightly stronger statement that holds also for EFI pairs that output classical bits (QEFID). As a consequence, we separate, via our oracle, QEFID and one-way puzzles from OWSGs and several other Microcrypt primitives, including efficiently verifiable one-way puzzles and unclonable state generators. In particular, this solves a problem left open in [Chung, Goldin, and Gray Crypto’24]. Using similar techniques, we also establish a fully black-box separation (which is slightly weaker than an oracle separation) between private-key quantum money schemes and QEFID pairs. One conceptual implication of our work is that the existence of an efficient verification algorithm may lead to qualitatively stronger primitives in quantum cryptography.
Last updated:  2024-10-04
Dynamic zk-SNARKs
Weijie Wang, Charalampos Papamanthou, Shravan Srinivasan, and Dimitrios Papadopoulos
In this work, we put forth the notion of dynamic zk-SNARKs. A dynamic zk-SNARK is a zk-SNARK that has an additional update algorithm. The update algorithm takes as input a valid source statement-witness pair $(x,w)\in \mathcal{L}$ along with a verifying proof $\pi$, and a valid target statement-witness pair $(x',w')\in \mathcal{L}$. It outputs a verifying proof $\pi'$ for $(x',w')$ in sublinear time (for $(x,w)$ and $(x',w')$ with small Hamming distance), potentially with the help of a data structure. To the best of our knowledge, none of the commonly-used zk-SNARKs are dynamic—a single update in $(x,w)$ can be handled only by recomputing the proof, which requires at least linear time. After presenting the formal definition of dynamic zk-SNARKs, we provide two constructions. The first one is based on recursive SNARKs and has $O(\log n)$ update time. However, it suffers from heuristic security—it must encode the random oracle in the SNARK circuit. The second one and our central contribution, $\mathsf{Dynaverse}$, is based solely on KZG commitments and has $O(\sqrt{n}\log n)$ update time. Our preliminary evaluation shows that, while asymptotically worse, $\mathsf{Dynaverse}$ outperforms the recursive-based approach by at least one order of magnitude.
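The syntax above, written as a hypothetical Python interface (names are ours, not the paper's): everything is a standard zk-SNARK except update, which must run in sublinear time when $(x,w)$ and $(x',w')$ are close in Hamming distance.

    class DynamicSNARK:
        def prove(self, x, w):
            """Produce a proof for (x, w); at least linear time."""
            ...

        def verify(self, x, proof) -> bool:
            """Check the proof against statement x."""
            ...

        def update(self, x, w, proof, x_new, w_new):
            """Turn a verifying proof for (x, w) into one for (x_new, w_new),
            in sublinear time when the Hamming distance between the pairs is
            small, possibly consulting an auxiliary data structure."""
            ...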
Last updated:  2024-10-04
Fiat-Shamir in the Wild
Hieu Nguyen, Uyen Ho, and Alex Biryukov
The Fiat-Shamir transformation is a key technique for removing interactivity from cryptographic proof systems in real-world applications. In this work, we discuss five types of Fiat-Shamir-related protocol design errors and illustrate them with concrete examples mainly taken from real-life applications. We also discuss countermeasures for such vulnerabilities.
Last updated:  2024-10-04
A Simple Framework for Secure Key Leasing
Fuyuki Kitagawa, Tomoyuki Morimae, and Takashi Yamakawa
Secure key leasing (a.k.a. key-revocable cryptography) enables us to lease a cryptographic key as a quantum state in such a way that the key can be later revoked in a verifiable manner. We propose a simple framework for constructing cryptographic primitives with secure key leasing via the certified deletion property of BB84 states. Based on our framework, we obtain the following schemes. - A public key encryption scheme with secure key leasing that has classical revocation based on any IND-CPA secure public key encryption scheme. Prior works rely on either quantum revocation or stronger assumptions such as the quantum hardness of the learning with errors (LWE) problem. - A pseudorandom function with secure key leasing that has classical revocation based on one-way functions. Prior works rely on stronger assumptions such as the quantum hardness of the LWE problem. - A digital signature scheme with secure key leasing that has classical revocation based on the quantum hardness of the short integer solution (SIS) problem. Our construction has static signing keys, i.e., the state of a signing key almost does not change before and after signing. Prior constructions either rely on non-static signing keys or indistinguishability obfuscation to achieve a stronger goal of copy-protection. In addition, all of our schemes remain secure even if a verification key for revocation is leaked after the adversary submits a valid certificate of deletion. To our knowledge, all prior constructions are totally broken in this setting. Moreover, in our view, our security proofs are much simpler than those for existing schemes.
Last updated:  2024-10-04
HierNet: A Hierarchical Deep Learning Model for SCA on Long Traces
Suvadeep Hajra and Debdeep Mukhopadhyay
Side-channel analysis (SCA) compromises the security of cryptographic devices by exploiting various side-channel leakages such as power consumption, electromagnetic (EM) emanations, or timing variations, posing a practical threat to the security and privacy of modern digital systems. In power or EM SCA, statistical or machine learning methods are employed to extract secret information from power/EM traces. In many practical scenarios, raw power/EM traces can span hundreds of thousands of features, with relevant leakages occurring over only a few small segments. Consequently, existing SCAs often select a small number of features before launching the attack, making their success highly dependent on the feasibility of feature selection. However, feature selection may not always be possible, such as in the presence of countermeasures like masking or jitters. Several recent works have employed deep learning (DL) methods to conduct SCA on long raw traces, thereby reducing dependence on feature selection steps. However, these methods often perform poorly against various jitter-based countermeasures. While some of these methods have shown high robustness to several jitter-based countermeasures on relatively shorter traces, we demonstrate in this work that their performance deteriorates as trace lengths increase. To mitigate these limitations of the existing models, we develop a hierarchical DL model for SCA on long traces that is effective against masking, random delay, and clock jitter countermeasures. The proposed model, HierNet, extracts information from long traces using a two-level information assimilation process. At the base level, a DL model with shift-invariance is employed to extract information from smaller trace segments. Subsequently, a top-level DL model integrates the outputs of the base model to generate the final output. HierNet has been experimentally evaluated against various combinations of masking, random delay, and clock jitter countermeasures, using traces up to 250K features long. The results have been compared with three existing SCA benchmark models. They demonstrate HierNet's superiority in several scenarios, such as on long traces or against clock jitter countermeasures, showcasing the ability of HierNet to reach the guessing entropy 1 using fewer than or close to 10 attack traces while the benchmark models fail to do the same using as many as 5K attack traces. Additionally, HierNet exhibits significantly better performance in low-training-data scenarios.
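A PyTorch-style sketch of the two-level idea described above (layer shapes, the pooling choice, and mean-based integration are our illustrative assumptions, not HierNet's actual architecture): a shared, shift-invariant base model embeds each trace segment, and a top model combines the segment embeddings.

    import torch
    import torch.nn as nn

    class TwoLevelSCA(nn.Module):
        def __init__(self, seg_len=1000, emb=64, n_classes=256):
            super().__init__()
            self.seg_len = seg_len
            self.base = nn.Sequential(           # shift-invariance via global pooling
                nn.Conv1d(1, 32, kernel_size=11, padding=5), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, emb))
            self.top = nn.Sequential(
                nn.Linear(emb, emb), nn.ReLU(), nn.Linear(emb, n_classes))

        def forward(self, trace):                # trace: (batch, trace_length)
            b, length = trace.shape
            n_seg = length // self.seg_len
            segs = trace[:, : n_seg * self.seg_len].reshape(b * n_seg, 1, self.seg_len)
            z = self.base(segs)                  # one embedding per segment
            z = z.reshape(b, n_seg, -1).mean(dim=1)   # integrate segment outputs
            return self.top(z)                   # scores over key-byte hypotheses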
Last updated:  2024-10-04
Optimized One-Dimensional SQIsign Verification on Intel and Cortex-M4
Marius A. Aardal, Gora Adj, Arwa Alblooshi, Diego F. Aranha, Isaac A. Canales-Martínez, Jorge Chavez-Saab, Décio Luiz Gazzoni Filho, Krijn Reijnders, and Francisco Rodríguez-Henríquez
SQIsign is a well-known post-quantum signature scheme due to its small combined signature and public-key size. However, SQIsign suffers from notably long signing times, and verification times are not short either. To improve this, recent research has explored both one-dimensional and two-dimensional variants of SQIsign, each with distinct characteristics. In particular, SQIsign2D’s efficient signing and verification times have made it a focal point of recent research. However, the absence of an optimized one-dimensional verification implementation hampers a thorough comparison between these different variants. This work bridges this gap in the literature: we provide a state-of-the-art implementation of one-dimensional SQIsign verification, including novel optimizations. We report a record-breaking one-dimensional SQIsign verification time of 8.6 Ice Lake Mcycles, closely matching SQIsign2D. For uncompressed signatures, the signature size doubles and we verify in only 5.6 Mcycles. Taking advantage of the inherent parallelism available in isogeny computations, we present 5-core variants that can go as low as 1.3 Mcycles. Furthermore, we present the first implementation that supports both 32-bit and 64-bit processors. It includes optimized assembly code for the Cortex-M4 and has been integrated with the pqm4 project. Our results motivate further research into one-dimensional SQIsign, as it boasts unique features among isogeny-based schemes.
Last updated:  2024-10-04
Fully Privacy-preserving Billing Models for Peer-to-Peer Electricity Trading Markets
Akash Madhusudan, Mustafa A. Mustafa, Hilder V.L. Pereira, and Erik Takke
Peer-to-peer energy trading markets enable users to exchange electricity, directly offering them increased financial benefits. However, discrepancies often arise between the electricity volumes committed to in trading auctions and the volumes actually consumed or injected. Solutions designed to address this issue often require access to sensitive information that should be kept private. This paper presents a novel, fully privacy-preserving billing protocol designed to protect users' sensitive consumption and production data in the context of billing protocols for energy trading. Leveraging advanced cryptographic techniques, including fully homomorphic encryption (FHE) and pseudorandom zero sharing (PRZS), our protocol ensures robust security and confidentiality while addressing the critical issue of managing discrepancies between promised and actual electricity volumes. The proposed protocol guarantees that users' sensitive information remains inaccessible to external parties, including the trading platform and billing server. By utilizing FHE, the protocol allows computations on encrypted data without compromising privacy, while PRZS ensures secure aggregation of individual discrepancies of each household. This combination of cryptographic primitives maintains data privacy and enhances billing accuracy, even when fluctuations in energy supply and demand occur. We analyze real-time consumption and production data from 100 households to experimentally validate the effectiveness and efficiency of our billing model. By implementing a flexible framework compatible with any billing method, we demonstrate that our protocol can accurately compute individual bills for 100 households in approximately 0.17 seconds.
Last updated:  2024-10-04
FLUENT: A Tool for Efficient Mixed-Protocol Semi-Private Function Evaluation
Daniel Günther, Joachim Schmidt, Thomas Schneider, and Hossein Yalame
In modern business-to-customer interactions, handling private or confidential data is essential. Private Function Evaluation (PFE) protocols ensure the privacy of both the customers' input data and the business' function evaluated on it, which is often sensitive intellectual property (IP). However, fully hiding the function in PFE results in high performance overhead. Semi-Private Function Evaluation (SPFE) is a generalization of PFE to only partially hide the function, whereas specific non-critical components remain public. Our paper introduces a novel framework designed to make SPFE accessible to non-experts and practical for real-world deployments. To achieve this, we improve on previous SPFE solutions in two aspects. First, we enhance the developer experience by leveraging High-Level Synthesis (HLS), making our tool more user-friendly than previous SPFE frameworks. Second, we achieve a \(2 \times\) speedup compared to the previous state-of-the-art through more efficient underlying constructions and the usage of Lookup Tables (LUTs). We evaluate the performance of our framework in terms of communication and runtime efficiency. Our final implementation is available as an open-source project, aiming to bridge the gap between advanced cryptographic protocols and their practical application in industry scenarios.
Last updated:  2024-10-04
Communication-Efficient Multi-Party Computation for RMS Programs
Thomas Attema, Aron van Baarsen, Stefan van den Berg, Pedro Capitão, Vincent Dunning, and Lisa Kohl
Despite much progress, general-purpose secure multi-party computation (MPC) with active security may still be prohibitively expensive in settings with large input datasets. This particularly applies to the secure evaluation of graph algorithms, where each party holds a subset of a large graph. Recently, Araki et al. (ACM CCS '21) showed that dedicated solutions may provide significantly better efficiency if the input graph is sparse. In particular, they provide an efficient protocol for the secure evaluation of "message passing" algorithms, such as the PageRank algorithm. Their protocol's computation and communication complexity are both $\tilde{O}(M\cdot B)$ instead of the $O(M^2)$ complexity achieved by general-purpose MPC protocols, where $M$ denotes the number of nodes and $B$ the (average) number of incoming edges per node. On the downside, their approach achieves only a relatively weak security notion: $1$-out-of-$3$ malicious security with selective abort. In this work, we show that PageRank can instead be captured efficiently as a restricted multiplication straight-line (RMS) program, and present a new actively secure MPC protocol tailored to handle RMS programs. In particular, we show that the local knowledge of the participants can be leveraged towards the first maliciously-secure protocol with communication complexity linear in $M$, independently of the sparsity of the graph. We present two variants of our protocol. In our communication-optimized protocol, going from semi-honest to malicious security only introduces a small communication overhead, but results in quadratic computation complexity $O(M^2)$. In our balanced protocol, we still achieve a linear communication complexity $O(M)$, although with worse constants, but a significantly better computational complexity scaling with $O(M\cdot B)$. Additionally, our protocols achieve security with identifiable abort and can tolerate up to $n-1$ corruptions.
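For readers unfamiliar with the model, a Python sketch of the RMS instruction set as commonly defined in the homomorphic secret sharing literature (the paper's exact formalization may differ): the defining restriction is that multiplication only combines an input with a memory value, never two memory values.

    def eval_rms(instructions, inputs):
        """Evaluate an RMS program given as a list of instruction tuples."""
        mem, out = [], []
        for op, *args in instructions:
            if op == "load":        # memory <- input
                mem.append(inputs[args[0]])
            elif op == "add":       # memory <- memory + memory
                mem.append(mem[args[0]] + mem[args[1]])
            elif op == "mul":       # memory <- input * memory (the restriction)
                mem.append(inputs[args[0]] * mem[args[1]])
            elif op == "output":
                out.append(mem[args[0]])
        return out

    # Example: computes x0 * (x1 + x2)
    assert eval_rms([("load", 1), ("load", 2), ("add", 0, 1),
                     ("mul", 0, 2), ("output", 3)], [3, 4, 5]) == [27]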
Last updated:  2024-10-04
Revisiting Shuffle-Based Private Set Unions with Reduced Communication
Jiseung Kim, Hyung Tae Lee, and Yongha Son
A Private Set Union (PSU) allows two parties holding sets $X$ and $Y$ to securely compute the union $X \cup Y$ while revealing no additional information. Recently, so-called shuffle-based PSU protocols have been proposed by Garimella et al. (PKC'21) and Jia et al. (USENIX'22). Except for a few base oblivious transfers, those proposals are fully based on symmetric-key primitives and hence enjoy quite low computation costs. However, they share the drawback of a large communication cost of $O(\ell n\log n)$, with input set size $n$ and $\ell \ge O(\lambda + \log n)$, where $\lambda$ is a statistical security parameter. We propose an optimization for each work that reduces communication cost while maintaining their strength in computation cost: the first optimizes Garimella et al. to $O(\ell n + n \log n)$, and the second optimizes Jia et al. by reducing the concrete value of $\ell$ by $\log n$. Concretely, the first (resp. second) optimization provides $3.3 - 3.9$x (resp. $1.7 - 1.8$x) lower communication for input set sizes $n = 2^{16} - 2^{20}$. We demonstrate by comprehensive analysis and implementation that our optimizations lead to better PSU protocols than the state-of-the-art proposal of Zhang et al. (USENIX'23) as well as previous shuffle-based PSUs. Concretely, we see a $1.4-1.5$x speedup on a $100$Mbps network and a $1.8-2.2$x speedup on a $10$Mbps network for input set sizes $n = 2^{16} - 2^{20}$.
Last updated:  2024-10-04
Constant-Cost Batched Partial Decryption in Threshold Encryption
Sora Suegami, Shinsaku Ashizawa, and Kyohei Shibano
Threshold public key encryption schemes distribute secret keys among multiple parties, known as the committee, to reduce reliance on a single trusted entity. However, existing schemes are inefficient because the committee must perform computation and communication to decrypt each individual ciphertext. As the number of ciphertexts being decrypted per unit of time increases, this can limit the number of committee parties and their decentralization due to increased hardware requirements, heightening the risk of adversarial collusion. To address this, we introduce tag-based batched threshold encryption (TBTE), which ensures constant computational and communication costs per committee member, independent of the number of ciphertexts being decrypted in batch under distinct decryption policies. The TBTE scheme is constructed over bilinear groups in the random oracle model and is secure in the algebraic group model, assuming the hardness of the $(q_1,q_2)$-discrete logarithm problem and the EAV-security of the symmetric-key encryption scheme. Evaluation of our implementation demonstrates constant data size, specifically 48 bytes received and 56 bytes sent, and constant execution time for each committee party during decryption, even for various batch sizes up to $2^{20}$.
Last updated:  2024-10-04
Mind the Composition of Toffoli Gates: Structural Algebraic Distinguishers of ARADI
Emanuele Bellini, Mohamed Rachidi, Raghvendra Rohit, and Sharwan K. Tiwari
This paper reveals a critical flaw in the design of ARADI, a recently proposed low-latency block cipher by NSA researchers Patricia Greene, Mark Motley, and Bryan Weeks. The weakness exploits the specific composition of Toffoli gates in the round function of ARADI's nonlinear layer, and it allows the extension of a given algebraic distinguisher to one extra round without any change in the data complexity. More precisely, we show that the cube-sum values, though depending on the secret key bits, are always equal in two of the state words. Such a structural property is difficult to obtain by the direct application of the division property and has never been seen before in any state-of-the-art block cipher. We call this structural property weakly-composed-Toffoli gates, and introduce a theoretical framework which can describe it in general terms. We present algebraic distinguishers that reach 8 out of 16 rounds of ARADI. Most notably, we show that these distinguishers have better data complexities than the division property-based distinguishers for the same number of rounds. We further investigate whether changing the linear layer or the order of composition of Toffoli gates could avoid this property. We give a negative answer, showing that it is impossible to prevent this structural property unless the nonlinear layer is re-designed. As a side result, we provide a key-recovery attack on 10-round ARADI with $2^{124}$ data and $2^{177}$ time for a 256-bit key. Our work highlights the significance of security analysis during the cipher design phase, and shows that these strong structural distinguishers could have been avoided during this phase.
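For notation, the Toffoli gate viewed as a classical map on bits (standard; included only to fix terminology, since the structural property concerns how ARADI's round function composes such gates):

    def toffoli(a: int, b: int, c: int):
        """Controls a, b; target c. ARADI's nonlinear layer composes such gates."""
        return a, b, c ^ (a & b)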
Last updated:  2024-10-04
Fully-Succinct Arguments over the Integers from First Principles
Matteo Campanelli and Mathias Hall-Andersen
Succinct arguments of knowledge allow an untrusted prover to establish that they know a witness for an NP relation. Many recent efficient constructions of such schemes work over arithmetic computations expressed in finite fields. Several common settings, however, have an extremely simple representation when expressed over the integers (e.g., RSA signatures/accumulators, range checks for committed values, computations over rational numbers). Efficient arguments of knowledge working natively over $\mathbb{Z}$ could be applied to such computations without the overhead from emulating integer arithmetic over a finite field. We propose the first native construction of SNARKs over the integers that is fully succinct, thus resolving an open problem from Towa and Vergnaud (Asiacrypt 2020). By fully succinct, we mean that both the proof size and the verifier's running time should be sublinear in both $|\vec w|$—the size of the witness as a vector of integers—and $\log_2 \lVert \vec w \rVert_\infty$—the size in bits of the largest integer in the witness vector (in absolute value). As a stepping stone for our results, we provide a general theoretical framework for building succinct arguments over the integers. Its most attractive feature is that it allows reusing existing SNARK constructions in a modular way and can serve as a starting point for constructions building on our work. We build these systematic foundations by leveraging a common technique in theoretical computer science—fingerprinting—and applying it to a new setting. Our framework consists of two main ingredients: idealized protocols and polynomial commitments such that an object "committed over the integers" can however be "queried modulo $q$", for a randomly sampled prime $q$. We obtain our final construction, $\mathbb{Z}$aratan, by lifting the $\mathsf{Spartan}$ construction (Setty, CRYPTO 2020) to the integers and applying a form of polynomial commitment based on the techniques from DARK (Bünz et al., Eurocrypt 2020). $\mathbb{Z}$aratan has a transparent setup, is proven secure in the generic group model for groups of unknown order and can be heuristically made non-interactive in the ROM via the Fiat-Shamir transform.
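The fingerprinting idea behind "committed over the integers, queried modulo $q$", in a minimal Python sketch (illustrative only; it uses sympy's randprime): a nonzero integer $D$ has at most $\log_2 |D|$ prime divisors, so a random large prime detects $D \neq 0$ with high probability.

    from sympy import randprime

    def probably_equal(a: int, b: int, bits: int = 128) -> bool:
        """Probabilistic integer equality test via a random prime modulus."""
        q = randprime(2 ** (bits - 1), 2 ** bits)
        return a % q == b % q      # a false positive requires q | (a - b)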
Last updated:  2024-10-04
Small Public Exponent Brings More: Improved Partial Key Exposure Attacks against RSA
Yansong Feng, Abderrahmane Nitaj, and Yanbin Pan
Let $(N,e)$ be a public key of the RSA cryptosystem, and $d$ be the corresponding private key. In practice, we usually choose a small $e$ for quick encryption. In this paper, we improve partial private key exposure attacks against RSA with MSBs of $d$ and small $e$. The key idea is that under such a setting we can usually obtain more information about the prime factors of $N$ and then, by solving a univariate modular polynomial equation using Coppersmith's method, $N$ can be factored in polynomial time. Compared to previous results, we reduce the number of leaked bits of $d$ needed to mount the attack by $\log_2 (e)$ bits. For $e=65537$, previous work required an additional enumeration of 17 bits to achieve our new bound, resulting in a $2^{10}$x (i.e., 1,024x) increase in time consumption. Furthermore, our experiments show that for a $1024$-bit modulus $N$, our attack can achieve the theoretical bound on a simple personal computer, which validates the new method.
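For intuition, one standard route from MSBs of $d$ and small $e$ to information about the primes (a sketch consistent with the abstract, not necessarily the paper's exact derivation): since $ed = 1 + k\varphi(N)$ for some $1 \le k < e$, an MSB approximation $\tilde d$ of $d$ and an enumerated candidate $k$ give

$$\varphi(N) \approx \frac{e\tilde d - 1}{k}, \qquad p + q = N + 1 - \varphi(N),$$

so approximations of $p$ and $q$ follow as roots of $z^2 - (p+q)z + N = 0$, and the remaining unknown low-order bits of $p$ form the small root of a univariate modular polynomial that Coppersmith's method can recover.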
Last updated:  2024-10-04
I Know What Your Layers Did: Layer-wise Explainability of Deep Learning Side-channel Analysis
Guilherme Perin, Sengim Karayalcin, Lichao Wu, and Stjepan Picek
Deep neural networks have proven effective for second-order profiling side-channel attacks, even in a black-box setting with no prior knowledge of masks and implementation details. While such attacks have been successful, no explanations were provided for understanding why a variety of deep neural networks can (or cannot) learn high-order leakages and what the limitations are. In other words, we lack explainability of how neural network layers combine (or fail to combine) unknown and random secret shares, which is a necessary step to defeat, e.g., Boolean masking countermeasures. In this paper, we use information-theoretic metrics to explain the internal activities of deep neural network layers. We propose a novel methodology for the explainability of deep learning-based profiling side-channel analysis (denoted ExDL-SCA) to understand the processing of secret masks. Inspired by the Information Bottleneck theory, our explainability methodology uses perceived information to explain and detect the different phenomena that occur in deep neural networks, such as fitting, compression, and generalization. We provide experimental results on masked AES datasets showing what relevant features deep neural networks use, and where in the networks relevant features are learned and irrelevant features are compressed. Using our method, evaluators can determine what secret masks are being exploited by the network, which allows for more detailed feedback on the implementations. This paper opens new perspectives for understanding the role of different neural network layers in profiling side-channel attacks.
Last updated:  2024-10-04
SQIsign2D-East: A New Signature Scheme Using 2-dimensional Isogenies
Kohei Nakagawa and Hiroshi Onuki
Isogeny-based cryptography comprises cryptographic schemes whose security rests on the hardness of a mathematical problem called the isogeny problem; it is attracting attention as a candidate for post-quantum cryptography. A representative isogeny-based scheme is the signature scheme SQIsign, which was submitted to the NIST PQC standardization competition. SQIsign has attracted much attention because of its very short signature and key sizes among the candidates for NIST PQC standardization. Recently, many new schemes using high-dimensional isogenies have been proposed. Among them, the signature scheme SQIsignHD has an even shorter signature size than SQIsign. However, it requires four-dimensional isogeny computations for signature verification. In this paper, we propose a new signature scheme, SQIsign2D-East, which requires only two-dimensional isogeny computations for verification, thus reducing the computational cost of verification. First, we generalize an algorithm called RandIsogImg, which computes a random isogeny of non-smooth degree. Then, using this generalized RandIsogImg, we construct the new signature scheme SQIsign2D-East.
Last updated:  2024-10-04
Succinct Non-Subsequence Arguments
San Ling, Khai Hanh Tang, Khu Vu, Huaxiong Wang, and Yingfei Yan
Lookup arguments have recently seen a lot of development due to their applications in the constructions of succinct non-interactive arguments of knowledge (SNARKs). A closely related topic is subsequence arguments, in which one can prove that string $\mathbf{s}$ is a subsequence of another string $\mathbf{t}$, i.e., deleting some characters of $\mathbf{t}$ yields $\mathbf{s}$. A dual notion, namely, non-subsequence arguments, is to prove that $\mathbf{s}$ is not a subsequence of $\mathbf{t}$. These problems have a lot of important applications in DNA sequence analysis, the Internet of Things, blockchains, natural language processing, speech recognition, etc. However, despite their applications, they are not well studied in cryptography, especially succinct non-subsequence arguments with efficient proving time and sublinear verification time. In this work, we propose the first succinct non-subsequence argument. Our solution applies the sumcheck protocol and is instantiable with any multivariate polynomial commitment scheme (PCS). We achieve an efficient prover whose running time is linear in the sizes of the sequences $\mathbf{s}$ and $\mathbf{t}$ and their alphabet $\Sigma$. Our proof is succinct and the verifier time is sublinear assuming the employed PCS has succinct commitments and sublinear verification time. When instantiating with Sona PCS (EUROCRYPT'24), we achieve proof size $\mathcal{O}(\log_2|\mathbf{s}| + \log_2|\mathbf{t}|+\log_2|\Sigma|)$, prover time $\mathcal{O}(|\mathbf{s}|+|\mathbf{t}|+|\Sigma|)$ and verifier time $\mathcal{O}(\sqrt{|\mathbf{s}|}+\sqrt{|\mathbf{t}|}+\sqrt{|\Sigma|})$. Extending our technique, we can achieve a batch subsequence argument for proving in batch $k$ interleaving subsequence and non-subsequence arguments without proof size suffering a linear blow-up in $k$.
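In the clear, the subsequence predicate whose negation is being proven is the classical two-pointer check; a minimal Python sketch of the plaintext semantics:

    def is_subsequence(s: str, t: str) -> bool:
        """True iff the characters of s appear in t in order (not necessarily contiguously)."""
        it = iter(t)
        return all(ch in it for ch in s)   # `ch in it` consumes the iterator up to a match

    assert is_subsequence("ace", "abcde")
    assert not is_subsequence("aec", "abcde")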
Last updated:  2024-10-03
On the Security of Nova Recursive Proof System
Hyeonbum Lee and Jae Hong Seo
Nova is a new type of recursive proof system that uses a folding scheme as its core building block. This brilliant idea of folding relations can significantly reduce the recursion overhead. In this paper, we study some issues related to Nova’s soundness proof, which relies on the soundness of the folding scheme in a recursive manner. First, due to its recursive nature, the proof strategy inevitably causes the running time of the recursive extractor to expand polynomially for each additional recursive step. This constrains Nova's soundness model to only logarithmically bounded recursive steps. Consequently, the soundness proof in this limited model does not guarantee soundness for a linear number of rounds in the security parameter, such as 128 rounds for 128-bit security. On the other hand, there are no known attacks on the arbitrary depth recursion of Nova, leaving a gap between theoretical security guarantees and real-world attacks. We aim to bridge this gap in two opposite directions. In the negative direction, we present a recursive proof system that is unforgeable in a log-round model but forgeable if used in linear rounds. This shows that the soundness proof in the log-round model might not be applicable to real-world applications that require linearly long rounds. In a positive direction, we show that when Nova uses a specific group-based folding scheme, its knowledge soundness over polynomial rounds can be proven in the algebraic group model with our modifications. To the best of our knowledge, this is the first result to show Nova's polynomial rounds soundness. Second, the folding scheme is converted non-interactively via the Fiat-Shamir transformation and then arithmetized into R1CS. Therefore, the soundness of Nova using the non-interactive folding scheme essentially relies on the heuristic random oracle instantiation in the standard model. In our new soundness proof for Nova in the algebraic group model, we replace this heuristic with a cryptographic hash function with a special property. We view this hash function as an independent object of interest and expect it to help further understand the soundness of Nova.
Last updated:  2024-10-03
Understanding Leakage in Searchable Encryption: a Quantitative Approach
Alexandra Boldyreva, Zichen Gui, and Bogdan Warinschi
Searchable encryption, or more generally, structured encryption, permits search over encrypted data. It is an important cryptographic tool for securing cloud storage. The standard security notion for structured encryption mandates that a protocol leaks nothing about the data or queries, except for some allowed leakage, defined by the leakage function. This is due to the fact that some leakage is unavoidable for efficient schemes. Unfortunately, it was shown by numerous works that even innocuous-looking leakage can often be exploited by attackers to undermine users' privacy and recover their queries and/or data, despite the structured encryption schemes being provably secure. Nevertheless, the standard security remains the go-to notion used to show the "security" of structured encryption schemes. While it is not likely that researchers will design practical structured encryption schemes with no leakage, it is not satisfactory that very few works study ways to assess leakage. This work proposes a novel framework to quantify leakage. Our methodology is inspired by the quantitative information flow, and we call our method $q$-leakage analysis. We show how $q$-leakage analysis is related to the standard security. We also demonstrate the usefulness of $q$-leakage analysis by analyzing the security of two existing schemes with complex leakage functions.
Last updated:  2024-10-03
Towards Verifiable FHE in Practice: Proving Correct Execution of TFHE's Bootstrapping using plonky2
Louis Tremblay Thibault and Michael Walter
In this work we demonstrate for the first time that a full FHE bootstrapping operation can be proven using a SNARK in practice. We do so by designing an arithmetic circuit for the bootstrapping operation and proving it using plonky2. We are able to prove the circuit on an AWS Hpc7a instance in under 20 minutes. The proof size is about 200 kB and verification takes less than 10 ms. As the basis of our bootstrapping operation we use TFHE's programmable bootstrapping and modify it in a few places to represent it more efficiently as an arithmetic circuit (while maintaining full functionality and security). In order to achieve our results in a memory-efficient way, we take advantage of the structure of the computation and plonky2's ability to efficiently prove its own verification circuit to implement a recursion-based IVC scheme. Lastly, we present a security proof in the UC model that captures active attacks in real-world applications of verifiable FHE and augment our prototype to fit such applications.
Last updated:  2024-10-03
Monotone Policy BARGs from BARGs and Additively Homomorphic Encryption
Shafik Nassar, Brent Waters, and David J. Wu
A monotone policy batch $\mathsf{NP}$ language $\mathcal{L}_{\mathcal{R}, P}$ is parameterized by a monotone policy $P \colon \{0,1\}^k \to \{0,1\}$ and an $\mathsf{NP}$ relation $\mathcal{R}$. A statement $(x_1, \ldots, x_k)$ is a YES instance if there exist $w_1, \ldots, w_k$ where $P(\mathcal{R}(x_1, w_1), \ldots, \mathcal{R}(x_k, w_k)) = 1$. For example, we might say that an instance $(x_1, \ldots, x_k)$ is a YES instance if a majority of the statements are true. A monotone policy batch argument (BARG) for $\mathsf{NP}$ allows a prover to prove that $(x_1, \ldots, x_k) \in \mathcal{L}_{\mathcal{R}, P}$ with a proof of size $\mathsf{poly}(\lambda, |\mathcal{R}|, \log k)$, where $\lambda$ is the security parameter, $|\mathcal{R}|$ is the size of the Boolean circuit that computes $\mathcal{R}$, and $k$ is the number of instances. Recently, Brakerski, Brodsky, Kalai, Lombardi, and Paneth (CRYPTO 2023) gave the first monotone policy BARG for $\mathsf{NP}$ from the learning with errors (LWE) assumption. In this work, we describe a generic approach for constructing monotone policy BARGs from any BARG for $\mathsf{NP}$ together with an additively homomorphic encryption scheme. This yields the first constructions of monotone policy BARGs from the $k$-$\mathsf{Lin}$ assumption in prime-order pairing groups as well as the (subexponential) DDH assumption in pairing-free groups. Central to our construction is a notion of a zero-fixing hash function, which is a relaxed version of a predicate-extractable hash function from the work of Brakerski et al. Our relaxation enables a direct realization of zero-fixing hash functions from BARGs for $\mathsf{NP}$ and additively homomorphic encryption, whereas the previous notion relied on leveled homomorphic encryption, and by extension, the LWE assumption. As an application, we also show how to combine a monotone policy BARG with a puncturable signature scheme to obtain a monotone policy aggregate signature scheme. Our work yields the first (statically-secure) monotone policy aggregate signatures that support general monotone Boolean circuits from standard pairing-based assumptions. Previously, this was only known from LWE.
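As a plain illustration of the statement being proven (with verification done in the clear, which is exactly what the BARG replaces with a succinct proof), consider the following sketch; the relation and policy here are our own toy choices, not the paper's.

```python
# Toy monotone policy batch NP check: P is a majority policy, R is a toy
# NP relation ("w is a non-trivial factor of x"). Illustrative names only.

def majority(bits):
    """A monotone policy P: {0,1}^k -> {0,1}."""
    return int(sum(bits) > len(bits) // 2)

def R(x, w):
    """Example NP relation: witness w is a factor of x with 1 < w < x."""
    return int(1 < w < x and x % w == 0)

def is_yes_instance(xs, ws, policy=majority):
    return policy([R(x, w) for x, w in zip(xs, ws)]) == 1

# Two of the three relations hold, so the majority policy accepts:
print(is_yes_instance([15, 21, 17], [3, 7, 1]))  # True
```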
Last updated:  2024-10-03
Tightly Secure Threshold Signatures over Pairing-Free Groups
Renas Bacho and Benedikt Wagner
Threshold signatures have been drawing lots of attention in recent years. Of particular interest are threshold signatures that are proven secure under adaptive corruptions (NIST Call 2023). Sadly, existing constructions with provable adaptive security suffer from at least one of the following drawbacks: (i) strong idealizations such as the algebraic group model (AGM), (ii) an unnatural restriction on the corruption threshold being $t/2$ where $t$ is the signing threshold, or (iii) prohibitively large security loss under established assumptions. Notably, point (iii) has received little to no attention in the literature on this subject. In this work, we introduce Twinkle-T, a new threshold signature scheme which overcomes these limitations. Twinkle-T is the first scheme to have a fully tight security proof under up to $t$ adaptive corruptions without relying on the AGM. It also has a signing protocol consisting of only three rounds, and thus matches Twinkle (Eurocrypt 2024), the currently best threshold signature with full adaptive security in the pairing-free discrete logarithm setting. We prove security from a standard non-interactive assumption, namely, the Decisional Diffie-Hellman (DDH) assumption.
Last updated:  2024-10-03
Private Laconic Oblivious Transfer with Preprocessing
Rishabh Bhadauria, Nico Döttling, Carmit Hazay, and Chuanwei Lin
Laconic cryptography studies two-message protocols that securely compute on large amounts of data with minimal communication cost. Laconic oblivious transfer (OT) is a central primitive where the receiver's input is a large database $\mathsf{DB}$ and the sender's inputs are two messages $m_0$, $m_1$ along with an index $i$, such that the receiver learns the message determined by the choice bit $\mathsf{DB}_i$. OT becomes even more useful for secure computation when considering its laconic variants, which offer succinctness and round optimality. However, existing constructions are not practically efficient because they rely on heavy cryptographic machinery and non-black-box techniques. In this work, we initiate the study of laconic OT correlations, where the model allows an offline phase to generate the correlations later used in a lightweight online phase. Our correlation is conceptually simple, captured by an inner product computation, and enables us to achieve a private laconic OT protocol where the sender's index $i$ is also hidden from the receiver. Our construction is the first private laconic OT with database-dependent preprocessing based solely on symmetric-key assumptions, achieving sublinear online computational complexity for the receiver. Furthermore, we enhance our construction with updatability and receiver privacy. Finally, we demonstrate the applications of private laconic OT to laconic function evaluation for RAM programs and laconic private set intersection with preprocessing.
Last updated:  2024-10-03
Breaking, Repairing and Enhancing XCBv2 into the Tweakable Enciphering Mode GEM
Amit Singh Bhati, Michiel Verbauwhede, and Elena Andreeva
Tweakable enciphering modes (TEMs) provide security in a variety of storage and space-critical applications like disk and file-based encryption, and packet-based communication protocols, among others. XCB-AES (known as XCBv2) is specified in the IEEE 1619.2 standard for encryption of sector-oriented storage media and it comes with a proof of security for block-aligned input messages. In this work, we demonstrate an attack on XCBv2. We show that XCBv2 is $\textit{insecure}$ also for full block messages by presenting a plaintext recovery attack using $\textit{only}$ two queries. We demonstrate that our attack further applies to the HCI and MXCB TEMs, which follow a similar design approach to XCBv2. We then propose a simple, ``quick'' fix that is not vulnerable to our attack and provably restores the security of XCBv2. Following the responsible disclosure process, we communicated the attack details to IEEE and the authors of XCB-AES. The authors have confirmed the validity of our attack on 02/09/2024. Our next contribution is to strengthen the provable security of XCBv2 (currently $n/3$ bits). We propose a new modular TEM called GEM which can be seen as a generalization of the Hash-CTR-Hash approach as used in XCB-style and HCTR-style TEMs. We are able to prove that GEM achieves full $n$-bit security using $\textit{only}$ an $n$-bit PRP/PRF. We also give two concrete GEM instantiations: $\mathsf{KohiNoor}$ and $\mathsf{DaryaiNoor}$, both of which are based on AES-128 and GHASH-256, and internally use variants of the CTR-based weak pseudorandom functions GCTR-3 and SoCTR, respectively. SoCTR uses AES-128 and GCTR-3 is based on $\mathsf{ButterKnife}$-256. Our security proofs show that both $\mathsf{KohiNoor}$ and $\mathsf{DaryaiNoor}$ provide full $n$-bit security. From an applications perspective, $\mathsf{DaryaiNoor}$ addresses the need for reusing classical components, while $\mathsf{KohiNoor}$ enhances performance by leveraging a more modern primitive based on the AES/Deoxys round function. Our implementation demonstrates competitive performance: for the typical 4KiB sector size, $\mathsf{KohiNoor}$'s performance is on par with AES$_{6}$-CTET+, while achieving higher standard security guarantees. $\mathsf{DaryaiNoor}$ is on par with AES-CTET+ performance-wise while also maintaining higher security with standard components. Our GEM instances triple the security margin of XCBv2 and double that of HCTR2 at the cost of a performance loss of only $12\%$ ($\mathsf{KohiNoor}$) and $68\%$ ($\mathsf{DaryaiNoor}$) for 4KiB messages.
Last updated:  2024-10-03
STARK-based Signatures from the RPO Permutation
Shahla Atapoor, Cyprien Delpech de Saint Guilhem, and Al Kindi
This work describes a digital signature scheme constructed from a zero-knowledge proof of knowledge of a pre-image of the Rescue Prime Optimized (RPO) permutation. The proof of knowledge is constructed with the DEEP-ALI interactive oracle proof combined with the Ben-Sasson--Chiesa--Spooner (BCS) transformation in the random oracle model. The EUF-CMA security of the resulting signature scheme is established from the UC-friendly security properties of the BCS transformation and the pre-image hardness of the RPO permutation. The implementation of the scheme computes signatures in 13 ms and verifies them in 1 ms on a single core when the BCS transform is implemented with the Blake3 hash function. (The multi-threaded implementation signs in 9.2 ms and also verifies in 1 ms.) These speeds are obtained with parameters achieving 122 bits of average-case security for \( 2^{122} \)-bounded adversaries with access to at most \( 2^{64} \) signatures.
Last updated:  2024-10-03
Revisiting Keyed-Verification Anonymous Credentials
Michele Orrù
Keyed-verification anonymous credentials are widely recognized as among the most efficient tools for anonymous authentication. In this work, we revisit two prominent credential systems: the scheme by Chase et al. (CCS 2014), commonly referred to as CMZ or PS MAC, and the scheme by Barki et al. (SAC 2016), known as BBDT or BBS MAC. We show how to make CMZ statistically anonymous and BBDT compatible with the BBS RFC draft. We provide a comprehensive security analysis for strong(er) properties of unforgeability and anonymity. These properties allow them to be composed with extensions that users can pick and choose. We show that simpler variants satisfying one-more unforgeability can still be anonymous tokens (Kreuter et al., CRYPTO 2020). To enable faster proofs for complex presentations, we present a compiler that uses an interactive oracle proof and a designated-verifier polynomial commitment to construct a designated-verifier non-interactive argument. For keyed-verification anonymous credentials, designated-verifier proofs suffice since the verifier is known in advance. We explore extensions that could benefit from this approach.
Last updated:  2024-10-03
Breaking Verifiable Delay Functions in the Random Oracle Model
Ziyi Guan, Artur Riazanov, and Weiqiang Yuan
A verifiable delay function (VDF) is a cryptographic primitive that requires a long time to compute (even with parallelization), but produces a unique output that is efficiently and publicly verifiable. We prove that VDFs with \emph{imperfect completeness} and \emph{computational uniqueness} do not exist in the random oracle model. This also rules out black-box constructions of VDFs from other cryptographic primitives, such as one-way permutations and collision-resistant hash functions. Prior to our work, Mahmoody, Smith and Wu (ICALP 2020) prove that VDFs satisfying both \emph{perfect completeness} and \emph{perfect uniqueness} do not exist in the random oracle model; on the other hand, Ephraim, Freitag, Komargodski, and Pass (Eurocrypt 2020) construct VDFs with \emph{perfect completeness} and \emph{computational uniqueness} in the random oracle model assuming the hardness of repeated squaring. Our result is optimal -- we bridge the current gap between previously known impossibility results and existing constructions.
Last updated:  2024-10-03
SNARKs for Virtual Machines are Non-Malleable
Matteo Campanelli, Antonio Faonio, and Luigi Russo
Cryptographic proof systems have a plethora of applications: from building other cryptographic tools (e.g., malicious security for MPC protocols) to concrete settings such as private transactions or rollups. In several settings it is important for proof systems to be non-malleable: an adversary should not be able to modify a proof they have observed into another for a statement for which they do not know the witness. Proof systems that have been deployed in practice should arguably satisfy this notion: it is crucial in settings such as transaction systems and in order to securely compose proofs with other cryptographic protocols. As a consequence, results on non-malleability should keep up with designs of proofs being deployed. Recently, Arun et al. proposed $\mathsf{Jolt}$ (Eurocrypt 2024), arguably the first efficient proof system whose architecture is based on the lookup singularity approach (Barry Whitehat, 2022). This approach consists in representing a general computation as a series of table lookups. The final result is a SNARK for a Virtual Machine execution (or SNARK VM). Both SNARK VMs and lookup-singularity SNARKs are architectures with enormous potential and will likely see increasing adoption in the coming years (as they already are). As of today, however, there is no literature regarding the non-malleability of SNARK VMs. The goal of this work is to fill this gap by providing both concrete non-malleability results and a set of technical tools for a more general study of SNARK VMs security (as well as "modular" SNARKs in general). As a concrete result, we study the non-malleability of (an idealized version of) $\mathsf{Jolt}$ and its fundamental building block, the lookup argument $\mathsf{Lasso}$. While connecting our new result on the non-malleability of $\mathsf{Lasso}$ to that of $\mathsf{Jolt}$, we develop a set of tools that enable the composition of non-malleable SNARKs. We believe this toolbox to be valuable in its own right.
Last updated:  2024-10-03
MAYO Key Recovery by Fixing Vinegar Seeds
Sönke Jendral and Elena Dubrova
As the industry prepares for the transition to post-quantum secure public key cryptographic algorithms, vulnerability analysis of their implementations is gaining importance. A theoretically secure cryptographic algorithm should also be able to withstand the challenges of physical attacks in real-world environments. MAYO is a candidate in the ongoing first round of the NIST post-quantum standardization process for selecting additional digital signature schemes. This paper demonstrates three first-order single-execution fault injection attacks on a MAYO implementation in an ARM Cortex-M4 processor. By using voltage glitching to disrupt the computation of the vinegar seed during the signature generation, we enable the recovery of the secret key directly from the faulty signatures. Our experimental results show that the success rates of the fault attacks in a single execution are 36%, 82%, and 99%, respectively. They emphasize the importance of developing countermeasures against fault attacks prior to the widespread deployment of post-quantum algorithms like MAYO.
Last updated:  2024-10-03
A Crack in the Firmament: Restoring Soundness of the Orion Proof System and More
Thomas den Hollander and Daniel Slamanig
Orion (Xie et al. CRYPTO'22) is a recent plausibly post-quantum zero-knowledge argument system with a linear time prover. It improves over Brakedown (Golovnev et al. ePrint'21 and CRYPTO'23) by reducing the proof size and verifier complexity to be polylogarithmic and additionally adds the zero-knowledge property. The argument system is demonstrated to be concretely efficient, with a prover time that is the fastest among all existing succinct proof systems and a proof size that is an order of magnitude smaller than Brakedown's. Since its publication in CRYPTO 2022, two revisions have been made to the zk-SNARK. First, there was an issue with how zero-knowledge was handled. Second, Orion was discovered to be unsound, which was then repaired through the use of a commit-and-prove SNARK as an "outer" SNARK. As we will show in this paper, unfortunately, Orion in its current revision is still unsound (with and without the zero-knowledge property) and we will demonstrate practical attacks on it. We then show how to repair Orion without additional assumptions, with the resulting polynomial commitment denoted as Scorpius; this requires non-trivial fixes when aiming to preserve the linear time prover complexity. The proposed fixes lead to even improved efficiency, i.e., smaller proof size and verifier time, over the claimed efficiency of the initial version of Orion. We also apply the recent ideas of Diamond and Posen (CiC'24) to make the challenge in Orion logarithmically sized. Moreover, we provide the first rigorous security proofs and explicitly consider multi-point openings and non-interactivity. While revisiting Orion we make some additional contributions which might be of independent interest, most notably an improved code randomization technique that retains the minimum relative distance.
Last updated:  2024-10-03
FANNG-MPC: Framework for Artificial Neural Networks and Generic MPC
Najwa Aaraj, Abdelrahaman Aly, Tim Güneysu, Chiara Marcolla, Johannes Mono, Rogerio Paludo, Iván Santos-González, Mireia Scholz, Eduardo Soria-Vazquez, Victor Sucasas, and Ajith Suresh
In this work, we introduce FANNG-MPC, a versatile secure multi-party computation framework capable of offering active security for privacy-preserving machine learning as a service (MLaaS). Derived from the now deprecated SCALE-MAMBA, FANNG is a data-oriented fork, featuring a novel set of libraries and instructions for realizing private neural networks, effectively reviving the popular framework. To the best of our knowledge, FANNG is the first MPC framework to offer actively secure MLaaS in the dishonest majority setting. FANNG goes beyond SCALE-MAMBA by decoupling offline and online phases and materializing the dealer model in software, enabling a separate set of entities to produce offline material. The framework incorporates database support and a new instruction set for pre-processed material, including garbled circuits and convolutional and matrix multiplication triples. FANNG also implements novel private comparison protocols and an optimized library supporting Neural Network functionality. All our theoretical claims are substantiated by an extensive evaluation using an open-sourced implementation, including the private inference of popular neural networks like LeNet and VGG16.
Last updated:  2024-10-03
HHL for tensor-decomposable matrices
Cezary Pilaszewicz and Marian Margraf
We use the HHL algorithm to retrieve a quantum state holding the algebraic normal form (ANF) of a Boolean function. Unlike standard HHL applications, we do not describe the cipher as an exponentially big system of equations. Rather, we perform a set of small matrix inversions which corresponds to the Boolean Möbius transform. This creates a superposition holding information about the ANF in the form: $\ket{\mathcal{A}_{f}} = \frac{1}{C} \sum_{I=0}^{2^n-1} c_I \ket{I}$, where $c_I$ is the coefficient of the ANF and $C$ is a scaling factor. The procedure has a time complexity of $\mathcal{O}(n)$ for a Boolean function with $n$-bit input. We also propose two approaches by which some information about the ANF can be extracted from such a state.
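The classical counterpart of the transform performed here in superposition is the Boolean Möbius transform, an in-place XOR butterfly over the truth table; a purely classical reference sketch (our own, not the paper's quantum procedure) follows.

```python
# Classical Boolean Möbius transform: maps a truth table to ANF coefficients.
# Each butterfly step plays the role of one of the small matrix inversions
# over GF(2) mentioned above.

def moebius_transform(truth_table):
    """In-place Möbius transform; truth_table[x] in {0,1}, length 2**n."""
    f = list(truth_table)
    n = len(f).bit_length() - 1
    for i in range(n):
        for x in range(len(f)):
            if x & (1 << i):
                f[x] ^= f[x ^ (1 << i)]
    return f  # f[I] is the ANF coefficient c_I of the monomial indexed by I

# f(x0, x1) = 1 XOR x0 has truth table [1, 0, 1, 0] (index bit 0 = x0):
print(moebius_transform([1, 0, 1, 0]))  # -> [1, 1, 0, 0], i.e., ANF 1 + x0
```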
Last updated:  2024-10-03
Verifiable random function from the Deuring correspondence and higher dimensional isogenies
Antonin Leroux
In this paper, we introduce $\mathsf{DeuringVUF}$, a new Verifiable Unpredictable Function (VUF) protocol based on isogenies between supersingular curves. The most interesting application of this VUF is $\mathsf{DeuringVRF}$, a post-quantum Verifiable Random Function (VRF). The main advantage of this new scheme is its compactness, with a combined public key and proof size of roughly 450 bytes, which is orders of magnitude smaller than other general-purpose post-quantum VRF constructions. This scheme is also the first post-quantum VRF satisfying unconditional uniqueness. We show that this scheme is practical by providing a first non-optimized C implementation that runs in roughly 20ms for verification and 350ms for evaluation. The function at the heart of our construction is the one that computes the codomain of an isogeny of large prime degree from its kernel. The evaluation can be performed efficiently with the knowledge of the endomorphism ring using a new ideal-to-isogeny algorithm introduced recently by Basso, Dartois, De Feo, Leroux, Maino, Pope, Robert and Wesolowski that uses computation of dimension $2$ isogenies between elliptic products to compute effectively the translation through the Deuring correspondence of any ideal. On the other hand, without the knowledge of the endomorphism ring, this computation appears to be hard. The security of our $\mathsf{DeuringVUF}$ holds under a new assumption called the one-more isogeny problem (OMIP). Another application of $\mathsf{DeuringVUF}$ is the first hash-and-sign signature based on isogenies in the standard model. While we don't expect the signature in itself to outperform the recent variants of SQIsign, it remains very competitive in both compactness and efficiency while providing a new framework for building isogeny-based signatures that could lead to new interesting applications. We also introduce several new algorithms for the effective Deuring correspondence. In particular, we introduce an algorithm to translate an ideal of norm a large power of a small prime $\ell$ into the corresponding isogeny of dimension $1$ using isogenies between abelian varieties of dimension $2$ as a tool. This algorithm can be used to improve the SQIsign signature scheme.
Last updated:  2024-10-03
Bit t-SNI Secure Multiplication Gadget for Inner Product Masking
John Gaspoz and Siemen Dhooghe
Masking is a sound countermeasure to protect against differential power analysis. Since the work by Balasch et al. in ASIACRYPT 2012, inner product masking has been explored as an alternative to the well known Boolean masking. In CARDIS 2017, Poussier et al. showed that inner product masking achieves higher-order security versus Boolean masking, for the same sharing size, in the bit-probing model. Wang et al. in TCHES 2020 verified the inner product masking's security order amplification in practice and proposed new gadgets for inner product masking. Finally, Wu et al. in TCHES 2022 showed that this security amplification comes from the bit-probing model, but that Wang et al.'s gadgets are not higher-order bit-probing secure, reducing the computation's practical security. The authors concluded their work with the open question of providing an inner product multiplication gadget which maintains the masking's bit-probing security, and conjectured that such a gadget maintains the practical security order amplification of the masking during its computation. In this paper, we answer Wu et al.'s open problems positively. We are the first to present a multiplication gadget for inner product masking which is proven secure in the bit-level probing model using the t-Strong Non-Interference (SNI) property. Moreover, we provide practical evidence that the gadget indeed maintains the security amplification of its masking. This is done via an evaluation of an assembly implementation of the gadget on an ARM Cortex-M4 core. We used this implementation to take leakage measurements and show that no leakage occurs for orders below the gadget's bit-probing security level, in both univariate and multivariate analyses.
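For context, inner product masking splits a secret with respect to a public vector $L$ (with $L_0 = 1$); the sketch below shows only the encode/decode step over GF($2^8$). The secure multiplication gadget, which is the paper's contribution, is far more involved; names and parameters here are illustrative.

```python
# Minimal inner product masking encode/decode over GF(2^8), AES polynomial.
import secrets

def gf_mul(a, b):
    """GF(2^8) multiplication modulo x^8 + x^4 + x^3 + x + 1."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def ip_share(x, L):
    """Share x so that x = XOR_i gf_mul(L[i], s[i]); requires L[0] == 1."""
    s = [secrets.randbelow(256) for _ in L[1:]]
    s0 = x
    for l, si in zip(L[1:], s):
        s0 ^= gf_mul(l, si)
    return [s0] + s

def ip_unshare(s, L):
    x = 0
    for l, si in zip(L, s):
        x ^= gf_mul(l, si)
    return x

L = [1, 0x57, 0x83]  # public vector, first coordinate fixed to 1
shares = ip_share(0xAB, L)
assert ip_unshare(shares, L) == 0xAB
```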
Last updated:  2024-10-03
Faster BGV Bootstrapping for Power-of-Two Cyclotomics through Homomorphic NTT
Shihe Ma, Tairong Huang, Anyu Wang, and Xiaoyun Wang
Power-of-two cyclotomics is a popular choice when instantiating the BGV scheme because of its efficiency and compliance with the FHE standard. However, in power-of-two cyclotomics, the linear transformations in BGV bootstrapping cannot be decomposed into sub-transformations for acceleration with existing techniques. Thus, they can be highly time-consuming when the number of slots is large, degrading the advantage brought by the SIMD property of the plaintext space. By exploiting the algebraic structure of power-of-two cyclotomics, this paper derives explicit decomposition of the linear transformations in BGV bootstrapping into NTT-like sub-transformations, which are highly efficient to compute homomorphically. Moreover, multiple optimizations are made to evaluate homomorphic linear transformations, including modified BSGS algorithms, trade-offs between level and time, and specific simplifications for thin and general bootstrapping. We implement our method on HElib. With the number of slots ranging from 4096 to 32768, we obtain a 2.4x$\sim$55.1x improvement in bootstrapping throughput, compared to previous works or the naive approach.
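As a point of reference, the sub-transformations in question are NTT-like; the following is a plaintext-domain sketch of a radix-2 NTT with toy parameters (our own illustration of the underlying transform, not the homomorphic evaluation and not the paper's code).

```python
# Recursive radix-2 NTT over a toy prime field (p = 17, length 8, where 9 is
# a primitive 8th root of unity mod 17). The paper's contribution is to
# evaluate transforms with this butterfly structure homomorphically in BGV.

def ntt(a, p, w):
    """NTT of a (length a power of two) using n-th root of unity w mod p."""
    n = len(a)
    if n == 1:
        return a
    even = ntt(a[0::2], p, w * w % p)  # half-length NTTs with root w^2
    odd = ntt(a[1::2], p, w * w % p)
    out = [0] * n
    t = 1
    for i in range(n // 2):
        out[i] = (even[i] + t * odd[i]) % p
        out[i + n // 2] = (even[i] - t * odd[i]) % p
        t = t * w % p
    return out

print(ntt([1, 2, 3, 4, 0, 0, 0, 0], 17, 9))
```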
Last updated:  2024-10-03
A short-list of pairing-friendly curves resistant to the Special TNFS algorithm at the 192-bit security level
Diego F. Aranha, Georgios Fotiadis, and Aurore Guillevic
For more than two decades, pairings have been a fundamental tool for designing elegant cryptosystems, varying from digital signature schemes to more complex privacy-preserving constructions. However, the advancement of quantum computing threatens to undermine public-key cryptography. Concretely, it is widely accepted that a future large-scale quantum computer would be capable of breaking any public-key cryptosystem used today, rendering today's public-key cryptography obsolete and mandating the transition to quantum-safe cryptographic solutions. This necessity is reinforced by numerous recognized government bodies around the world, including NIST, which initiated the first open competition in standardizing post-quantum (PQ) cryptographic schemes, focusing primarily on digital signatures and key encapsulation/public-key encryption schemes. Despite the current efforts in standardizing PQ primitives, the landscape of complex, privacy-preserving cryptographic protocols, e.g., zkSNARKs/zkSTARKs, is at an early stage. Existing solutions suffer from various disadvantages in terms of efficiency and compactness and, in addition, they need to undergo the required scrutiny to gain the necessary trust in the academic and industrial domains. Therefore, it is believed that the migration to purely quantum-safe cryptography will require an intermediate step where current classically secure protocols and quantum-safe solutions co-exist. This is reinforced by the report of the Commercial National Security Algorithm Suite version 2.0, mandating transition to quantum-safe cryptographic algorithms by 2033 and suggesting the incorporation of ECC at 192-bit security in the meantime. To this end, the present paper aims at providing a comprehensive study of pairings at the 192-bit security level. We start with an exhaustive review of the literature to search for all possible recommendations of such pairing constructions, from which we extract the most promising candidates in terms of efficiency and security with respect to the advanced Special TNFS attacks. Our analysis focuses not only on the pairing computation itself, but also on additional operations that are relevant in pairing-based applications, such as hashing to pairing groups, cofactor clearing and subgroup membership testing. We implement all functionalities of the most promising candidates within the RELIC cryptographic toolkit in order to identify the most efficient pairing implementation at the 192-bit security level and provide extensive experimental results.
Last updated:  2024-10-03
On TLS for the Internet of Things, in a Post Quantum world
Michael Scott
The TLS (Transport Layer Security) protocol is the most important, most attacked, most analysed and most used cryptographic protocol in the world today. TLS is critical to the integrity of the Internet, and if it were to be broken, e-commerce would become impossible, with very serious implications for the global economy. Furthermore, TLS is likely to assume even greater significance in the near future with the rapid growth of an Internet of Things (IoT) -- a multiplicity of internet-connected devices all engaged in secure inter-communication. However, the impending invention of a Cryptographically Relevant Quantum Computer (CRQC) would represent an existential threat to TLS in its current form. As it stands, the latest version, TLS1.3, benefiting as it does from years of research and study, provides effective security, but it must soon be updated to resist this new threat. In this research we first undertake a new clean-room implementation of a small-footprint open source TLS1.3, written in C++ and Rust, and suitable for IoT applications. Our implementation is designed to be cryptographically agile, so that it can easily accommodate new post-quantum cryptographic primitives. Next we use this new implementation as a vehicle to study the impact of going post-quantum, with a particular emphasis on the impact on the Internet of Things. Finally we showcase the flexibility of our implementation by proposing an implementation of TLS that uses identity-based encryption to mitigate this impact.
Last updated:  2024-10-03
$\mathsf{FREPack}$: Improved SNARK Frontend for Highly Repetitive Computations
Sriram Sridhar and Yinuo Zhang
Modern SNARK designs typically follow a frontend-backend paradigm: The frontend compiles a user's program into some equivalent circuit representation, while the backend calls for a SNARK specifically made for proving circuit satisfiability. While these circuits are often defined over small fields, the backend prover always needs to lift the computation to much larger fields to ensure soundness. This gap introduces concrete overheads for ZK applications like zkRollups, where group-based SNARKs are used to provide constant-size proofs for Merkle tree openings. For a class of highly repetitive computations, we propose $\mathsf{FREPack}$, an improved frontend that effectively bridges this gap. The larger the gap between circuit's small field and backend's large field, the more $\mathsf{FREPack}$ reduces the circuit size, making it particularly well-suited for group-based backends. Our implementation shows that, for proving $\approx 300$ iterations of SHA-256, $\mathsf{FREPack}$ improves the performance of Groth16 by $3.6\times$, Nova by $3.8\times$, and Spartan by $5.9\times$.
Last updated:  2024-10-03
Information-Theoretic 2-Party Computation from Additive Somewhat Homomorphic Encryption
Jonathan Trostle
Two-party computation has been an active area of research since Yao's breakthrough results on garbled circuits. We present secret-key additive somewhat homomorphic schemes where the client has perfect privacy (the server can be computationally unbounded). Our basic scheme is additive somewhat homomorphic and we give protocols to handle addition and multiplication. In one scheme, the server handles circuit multiplication gates by returning the multiplicands to the client, which does the multiplication and sends back the encrypted product. We give a 2-party protocol that also incorporates server inputs, where the client has perfect privacy. Server privacy is not information-theoretic, but rather depends on the hardness of the subset sum problem. Correctness for the server in the malicious model can be verified by a 3rd party with high probability, where the client and server privacy are information-theoretically protected from the verifier. Scaling the 2PC protocol via separate encryption parameters for smaller subcircuits allows the ciphertext size to remain constant as circuit size grows.
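To illustrate the multiplication-gate round trip described above, here is a toy stand-in (not the paper's scheme) built from one-time-pad-style additive encryption over $\mathbb{Z}_N$, which is additively homomorphic and information-theoretically private toward the server.

```python
# Toy additively homomorphic secret-key scheme: Enc_k(m) = m + k mod N.
# Addition gates are local for the server; multiplication gates round-trip
# the operands through the client, as sketched in the abstract above.
import secrets

N = 2**61 - 1  # toy modulus

class Client:
    def __init__(self):
        self.keys = {}  # wire id -> one-time key
    def enc(self, wire, m):
        k = secrets.randbelow(N)
        self.keys[wire] = k
        return (m + k) % N
    def dec(self, wire, c):
        return (c - self.keys[wire]) % N

client = Client()
ca, cb = client.enc('a', 5), client.enc('b', 7)

# Server-side addition gate: add ciphertexts; client adds the keys.
c_sum = (ca + cb) % N
client.keys['a+b'] = (client.keys['a'] + client.keys['b']) % N
assert client.dec('a+b', c_sum) == 12

# Multiplication gate: server returns both operand ciphertexts; the client
# decrypts, multiplies, and re-encrypts under a fresh key.
a, b = client.dec('a', ca), client.dec('b', cb)
c_prod = client.enc('a*b', (a * b) % N)
assert client.dec('a*b', c_prod) == 35
```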
Last updated:  2024-10-02
Fully Composable Homomorphic Encryption
Daniele Micciancio
The traditional definition of fully homomorphic encryption (FHE) is not composable, i.e., it does not guarantee that evaluating two (or more) homomorphic computations in a sequence produces correct results. We formally define and investigate a stronger notion of homomorphic encryption which we call "fully composable homomorphic encryption", or "composable FHE". The definition is both simple and powerful: it does not directly involve the evaluation of multiple functions, and yet it supports the arbitrary composition of homomorphic evaluations. On the technical side, we compare the new definition with other definitions proposed in the past, proving both implications and separations, and show how the "bootstrapping" technique of (Gentry, STOC 2009) can be formalized as a method to transform a (non-composable, circular secure) homomorphic encryption scheme into a fully composable one. We use this formalization of bootstrapping to formulate a number of conjectures and open problems.
Last updated:  2024-10-02
PoUDR: Proof of Unified Data Retrieval in Decentralized Storage Networks
Zonglun Li, Shuhao Zheng, Junliang Luo, Ziyue Xin, Dun Yuan, Shang Gao, Sichao Yang, Bin Xiao, and Xue Liu
Decentralized storage networks, including IPFS and Filecoin, have created a marketplace where individuals exchange storage space for profit. These networks employ protocols that reliably ensure data storage providers accurately store data without alterations, safeguarding the interests of storage purchasers. However, these protocols lack an effective and equitable payment mechanism for data retrieval, particularly when multiple data queriers are involved. This necessitates a protocol that ensures both data integrity and fair compensation for data providers. In decentralized storage, data is fragmented into small blocks and stored across multiple nodes, a process known as data swarming. Due to this property, traditional data exchange protocols are inadequate in terms of communication and economic efficiency. We propose the Proof of Unified Data Retrieval protocol (PoUDR). PoUDR incorporates ZK-SNARKs to facilitate a fair data exchange protocol. PoUDR reduces the number of blockchain transactions for both single block and data swarming retrieval. The protocol requires only a single key-revealing transaction submitted by the provider to the blockchain for each data block. This architecture allows for further optimization of transaction numbers through a batched proof technique on the provider's side. This approach necessitates only $N_P$ transactions within a specific time frame when data consisting of $N_D$ blocks, provided by $N_P$ providers, is queried by $N_Q$ queriers. This work provides a comprehensive definition for Secure Swarming Data Exchange (SSDE), including security assumptions. It also offers a detailed game-based security analysis for the PoUDR protocol. Moreover, the PoUDR protocol has been fully integrated into the Bitswap protocol (IPFS). Within this integration, our proposed Relaxed Groth16 algorithm addresses the significant technical challenge of generating zero-knowledge proofs, leading to substantial cost reductions that make secure data retrieval in decentralized storage networks feasible.
Last updated:  2024-10-02
HEonGPU: a GPU-based Fully Homomorphic Encryption Library 1.0
Ali Şah Özcan and Erkay Savaş
HEonGPU is a high-performance library designed to optimize Fully Homomorphic Encryption (FHE) operations on Graphics Processing Units (GPUs). By leveraging the parallel processing capacity of GPUs, HEonGPU significantly reduces the computational overhead typically associated with FHE by executing complex operations concurrently. This allows for faster execution of homomorphic computations on encrypted data, enabling real-time applications in privacy-preserving machine learning and secure data processing. A key advantage of HEonGPU lies in its multi-stream architecture, which not only allows parallel processing of tasks to improve throughput but also eliminates the overhead of data transfers between the host device (i.e., CPU) and the GPU. By efficiently managing data within the GPU using multi-streams, HEonGPU minimizes the need for repeated memory transfers, further enhancing performance. HEonGPU's GPU-optimized design makes it ideal for large-scale encrypted computations, providing users with reduced latency and higher performance across various FHE schemes.
Last updated:  2024-10-02
Batching-Efficient RAM using Updatable Lookup Arguments
Moumita Dutta, Chaya Ganesh, Sikhar Patranabis, Shubh Prakash, and Nitin Singh
RAM (random access memory) is an important primitive in verifiable computation. In this paper, we focus on realizing RAM with an efficient batching property, i.e., proving a batch of $m$ updates on a RAM of size $N$ while incurring a cost that is sublinear in $N$. Classical approaches based on Merkle trees or address-ordered transcripts to model RAM correctness are either concretely inefficient, or incur linear overhead in the size of the RAM. Recent works explore cryptographic accumulators based on unknown-order groups (RSA, class-groups) to model the RAM state. While recent RSA accumulator based approaches offer significant improvement over classical methods, they incur linear overhead in the size of the accumulated set to compute witnesses, as well as prohibitive constant overheads. We realize a batching-efficient RAM with superior asymptotic and concrete costs as compared to existing approaches. Towards this: (i) we build on recent constructions of lookup arguments to allow efficient lookups even in the presence of table updates, and (ii) we realize a variant of the sub-vector relation addressed in prior works, which we call committed index lookup. We combine the two building blocks to realize a batching-efficient RAM with sublinear dependence on the size of the RAM. Our construction incurs an amortized proving cost of $\tilde{O}(m\log m + \sqrt{mN})$ for a batch of $m$ updates on a RAM of size $N$. Our results also benefit the recent arguments for sub-vector relation, by enabling them to be efficient in the presence of updates to the table. We believe that this is a contribution of independent interest. We implement our solution to evaluate its concrete efficiency. Our experiments show that it offers significant improvement over existing works on batching-efficient accumulators/RAMs, with a substantially reduced resource barrier.
Last updated:  2024-10-02
Robust AE With Committing Security
Viet Tung Hoang and Sanketh Menda
There has been recent interest in developing and standardizing Robust Authenticated Encryption (Robust AE) schemes. NIST, for example, is considering an Accordion mode (a wideblock tweakable blockcipher), with Robust AE as a primary application. On the other hand, recent attacks and applications suggest that encryption needs to be committing. Indeed, committing security is also a design consideration in the Accordion mode. Yet it is unclear how to build a Robust AE with committing security. In this work, we give a modular solution for this problem. We first show how to transform any wideblock tweakable blockcipher TE into a Robust AE scheme SE that commits just the key. The overhead is cheap, just a few finite-field multiplications and blockcipher calls. If one wants to commit the entire encryption context, one can simply hash the context to derive a 256-bit subkey, and use SE with that subkey. The use of a 256-bit key for SE only means that it has to rely on AES-256, but doesn't require TE to have a 256-bit key. Our approach frees the Accordion designs from consideration of committing security. Moreover, it gives a big saving for several key-committing applications that don't want to pay the inherent hashing cost of full committing.
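A minimal sketch of the context-hashing step described above, assuming SHA-256 as the hash; the wideblock-based scheme SE itself is not shown, and the function name and field layout are our own illustration.

```python
# Derive a 256-bit subkey that commits the full encryption context
# (key, nonce, associated data); the subkey is then used as SE's key.
import hashlib

def derive_subkey(key: bytes, nonce: bytes, assoc_data: bytes) -> bytes:
    h = hashlib.sha256()
    for part in (key, nonce, assoc_data):
        # Length-prefix each field so distinct contexts cannot collide.
        h.update(len(part).to_bytes(8, 'big'))
        h.update(part)
    return h.digest()  # 32 bytes, e.g., for an AES-256-based SE

subkey = derive_subkey(b'\x00' * 16, b'nonce-01', b'header')
assert len(subkey) == 32
```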
Last updated:  2024-10-02
Findex: A Concurrent and Database-Independent Searchable Encryption Scheme
Théophile Brézot and Chloé Hébant
State-of-the-art database implementations offer a wide range of functionalities and impressive performance while supporting highly concurrent loads. However, they all rely on the server knowing the content of the database, which raises issues when sensitive information is being stored on a server that cannot be trusted. Encrypting documents before sending them to a remote server solves the confidentiality issue at the cost of losing the keyword search functionality. Cryptographic primitives such as Symmetric Searchable Encryption (SSE) schemes have been proposed to recover this functionality. However, no SSE construction properly defines correctness and successfully guarantees security in a concurrent setting. This paper attempts a first step in this direction by recommending linearizability as the standard notion of correctness for a concurrent SSE. We study the impact of concurrency on security and stress the need for finer-grained security models. Hence, we propose the log-security model, which guarantees security against an adversary having access to persistency-related logs, fixing a blind spot in the snapshot model while capturing security in a concurrent setting. We also build the first concurrent SSE solution proven correct and secure in a concurrent setting that can be implemented on top of any database. Our scheme proves to be fast thanks to optimal wait-free search operations and sequentially-optimal, lock-free modifications, both of which execute in under one microsecond per binding, while incurring only a 13.3% storage expansion.
Last updated:  2024-10-02
Sailfish: Towards Improving the Latency of DAG-based BFT
Nibesh Shrestha, Rohan Shrothrium, Aniket Kate, and Kartik Nayak
Directed Acyclic Graph (DAG) based BFT protocols balance consensus efforts across different parties and maintain high throughput even when some designated parties fail. However, existing DAG-based BFT protocols exhibit long latency to commit decisions, primarily because they have a \emph{leader} every 2 or more ``rounds''. Recent works, such as Shoal (FC'23) and Mysticeti, have deemed supporting a leader vertex in each round particularly difficult, if not impossible. Consequently, even under honest leaders, these protocols require high latency (or communication complexity) to commit the proposal submitted by the leader (leader vertex) and additional latency to commit other proposals (non-leader vertices). In this work, we present Sailfish, the first DAG-based BFT that supports a leader vertex in each round. Under honest leaders, Sailfish maintains a commit latency of one reliable broadcast (RBC) round plus $1\delta$ to commit the leader vertex (where $\delta$ is the actual transmission latency of a message) and only an additional RBC round to commit non-leader vertices. We also extend Sailfish to Multi-leader Sailfish, which facilitates multiple leaders within a single round and commits all leader vertices in a round with a latency of one RBC round plus $1\delta$. Our experimental evaluation demonstrates that our protocols introduce significantly lower latency overhead compared to existing DAG-based protocols, with similar throughput.
Last updated:  2024-10-02
Computation Efficient Structure Aware PSI From Incremental Function Secret Sharing
Gayathri Garimella, Benjamin Goff, and Peihan Miao
Structure-Aware Private Set Intersection (sa-PSI), recently introduced by Garimella et al. (Crypto'22), is a PSI variant where Alice's input set $S_A$ has a publicly known structure (for example, interval, ball or union of balls) and Bob's input $S_B$ is an unstructured set of elements. Prior work achieves sa-PSI where the communication cost only scales with the description size of $S_A$ instead of the set cardinality. However, the computation cost remains linear in the cardinality of $S_A$, which could be prohibitively large. In this work, we present a new semi-honest sa-PSI framework where both computation and communication costs only scale with the description size of $S_A$. Our main building block is a new primitive that we introduce called Incremental Boolean Function Secret Sharing (ibFSS), which is a generalization of FSS that additionally allows for evaluation on input prefixes. We formalize definitions and construct a weak ibFSS for a $d$-dimensional ball with $\ell_\infty$ norm, which may be of independent interest. Independently, we improve spatial hashing techniques (from prior work) when $S_A$ has structure union of $d$-dimensional balls in $(\{0,1\}^u)^d$, each of diameter $\delta$, from $O(u \cdot d \cdot (\log \delta)^d)$ to $O(\log \delta \cdot d)$ in terms of both computation and communication. Finally, we resolve the following open questions from prior work with communication and computation scaling with the description size of the structured set. - Our PSI framework can handle a union of overlapping structures, while prior work strictly requires a disjoint union. - We have a new construction that enables Bob with unstructured input $S_B$ to learn the intersection. - We extend to a richer class of functionalities like structure-aware PSI Cardinality and PSI-Sum of associated values.
Last updated:  2024-10-02
A fast heuristic for mapping Boolean circuits to functional bootstrapping
Sergiu Carpov
Functional bootstrapping in FHE schemes such as FHEW and TFHE allows the evaluation of a function on an encrypted message, in addition to noise reduction. Implementing programs that directly use functional bootstrapping is challenging and error-prone. In this paper, we propose a heuristic that automatically maps Boolean circuits to functional bootstrapping instructions. Unlike other approaches, our method does not limit the encrypted data plaintext space to a power-of-two size, allowing the instantiation of functional bootstrapping with smaller parameters. Furthermore, the negacyclic property of functional bootstrapping is exploited to extend the plaintext space. Despite the inherently greedy nature of the heuristic, experimental results show that the mapped circuits exhibit a significant reduction in evaluation time. Our heuristic demonstrates a $45\%$ reduction in evaluation time when compared to hand-optimized Trivium and Kreyvium implementations.
Last updated:  2024-10-02
Formal Security Analysis of the OpenID FAPI 2.0 Family of Protocols: Accompanying a Standardization Process
Pedram Hosseyni, Ralf Küsters, and Tim Würtele
FAPI 2.0 is a suite of Web protocols developed by the OpenID Foundation's FAPI Working Group (FAPI WG) for third-party data sharing and digital identity in high-risk environments. Even though the specifications are not completely finished, several important entities have started to adopt the FAPI 2.0 protocols, including Norway's national HelseID, Australia's Consumer Data Standards, as well as private companies like Authlete and Australia-based connectID; the predecessor FAPI 1.0 is in widespread use with millions of users. The FAPI WG asked us to accompany the standardization of the FAPI 2.0 protocols with a formal security analysis to proactively identify vulnerabilities before widespread deployment and to provide formal security guarantees for the standards. In this paper, we report on our analysis and findings. Our analysis is based on a detailed model of the Web infrastructure, the so-called Web Infrastructure Model (WIM), which we extend to be able to carry out our analysis of the FAPI 2.0 protocols including important extensions like FAPI-CIBA. Based on the (extended) WIM and formalizations of the security goals and attacker model laid out in the FAPI 2.0 specifications, we provide a formal model of the protocols and carry out a formal security analysis, revealing several attacks. We have worked with the FAPI WG to fix the protocols, resulting in several amendments to the specifications. With these changes in place, we have adjusted our protocol model and formally proved that the security properties hold true under the strong attacker model defined by the FAPI WG.
Last updated:  2024-10-02
X2X: Low-Randomness and High-Throughput A2B and B2A Conversions for $d+1$ shares in Hardware
Quinten Norga, Jan-Pieter D'Anvers, Suparna Kundu, and Ingrid Verbauwhede
The conversion between arithmetic and Boolean masking representations (A2B \& B2A) is a crucial component for side-channel resistant implementations of lattice-based (post-quantum) cryptography. In this paper, we first propose novel $d$-order algorithms for the secure addition (SecADDChain$_q$) and B2A (B2X2A). Our secure adder is well-suited for repeated ('chained') executions, achieved through an improved method for repeated masked modular reduction. The optimized B2X2A gadget removes a full secure addition compared to state-of-the-art B2A approaches by relying on the X2B operation. This component directly converts a compositely shared variable, consisting of a mix of arithmetic and Boolean sharing, to $d+1$ Boolean shares. This approach reduces the required amount of SecADDs to $2d$, of which $2\cdot\lceil\text{log}_2(d)\rceil$ are max-order. Secondly, we develop both a first- and a high-order masked, unified hardware implementation that can compute both A2B \& B2A conversions for power-of-two ($p$) and prime ($q$) moduli. Compared to state-of-the-art (high-throughput) hardware implementations that only support A2B$_k$, we reduce area utilization for a second-order implementation by 45% to 60% and fresh randomness by up to 62%, while supporting all four types of additive mask conversions. Our first-order design requires only 1,133/2,170 [LUT/FF] on Kintex-7 FPGAs. Our proposed algorithms are proven secure in the robust probing model and their implementations are validated via practical lab analysis using the TVLA methodology. We experimentally show that our masked implementation is hardened against first- and second-order univariate and multivariate power-based side-channel attacks using 100 million traces, for each mode of operation.
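For orientation, the two masked representations being converted look as follows at first order; the naive "conversion" shown recombines the secret in the clear, which is precisely what a probing-secure A2B/B2A gadget must avoid. This is our own illustration, not the paper's gadget.

```python
# First-order arithmetic masking mod q vs. Boolean (XOR) masking; a secure
# gadget converts between them without ever reconstructing x.
import secrets

q = 3329  # e.g., a prime modulus as used in lattice-based schemes

def arith_mask(x):
    r = secrets.randbelow(q)
    return ((x - r) % q, r)          # x = a0 + a1 mod q

def bool_mask(x, bits=12):
    r = secrets.randbelow(1 << bits)
    return (x ^ r, r)                # x = b0 XOR b1

a0, a1 = arith_mask(1234)
x = (a0 + a1) % q                    # insecure recombination -- leaks x!
b0, b1 = bool_mask(x)
assert (b0 ^ b1) == 1234
```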
Last updated:  2024-10-02
Quantum Cryptography from Meta-Complexity
Taiga Hiroka and Tomoyuki Morimae
In classical cryptography, one-way functions (OWFs) are the minimal assumption, while recent active studies have demonstrated that OWFs are not necessarily the minimum assumption in quantum cryptography. Several new primitives have been introduced, such as pseudorandom unitaries (PRUs), pseudorandom function-like state generators (PRFSGs), pseudorandom state generators (PRSGs), one-way state generators (OWSGs), one-way puzzles (OWPuzzs), and EFI pairs. They are believed to be weaker than OWFs, but they still imply many useful applications such as private-key quantum money schemes, secret-key encryption, message authentication codes, digital signatures, commitments, and multiparty computations. Now that the possibility of quantum cryptography without OWFs has opened up, the most important goal in the field is to provide them with concrete instantiations. For example, in classical cryptography, there are many instantiations of OWFs based on concrete hardness assumptions, such as the hardness of discrete logarithms or learning with errors. The study of generic primitives is justified by the existence of concrete instantiations. On the other hand, in quantum cryptography, all known constructions of those primitives are only from OWFs. We therefore have the following important open problem: Do they have instantiations based on some concrete hardness assumptions that will not imply OWFs? Ideally, the assumptions should be ones that are studied in other contexts than cryptography. In this paper, we give a candidate answer to the question by showing that quantum-average-hardness of the GapK problem implies the existence of OWPuzzs. A GapK problem is a promise problem to decide whether a given bit string has a small Kolmogorov complexity or not. Its quantum-average-hardness means that an instance is sampled from a quantum-polynomial-time-samplable distribution, and no quantum-polynomial-time algorithm can solve the problem with high probability. As far as we know, this is the first time that a ``Microcrypt'' primitive is constructed based on concrete hardness assumptions that do not seem to imply OWFs. Moreover, the assumptions are studied in other contexts than cryptography, especially in the field of meta-complexity. (Note: During the preparation of this manuscript, Khurana and Tomer (ePrint 2024/1490) uploaded a concurrent work.)
Last updated:  2024-10-02
DUPLEX: Scalable Zero-Knowledge Lookup Arguments over RSA Group
Semin Han, Geonho Yoon, Hyunok Oh, and Jihye Kim
Lookup arguments enable a prover to convince a verifier that a committed vector of lookup elements $\vec{f} \in \mathbb{F}^m$ is contained within a predefined table $T \in \mathbb{F}^N$. These arguments are particularly beneficial for enhancing the performance of SNARKs in handling non-arithmetic operations, such as batched range checks or bitwise operations. While existing works have achieved efficient and succinct lookup arguments, challenges remain, particularly when dealing with large vectors of lookup elements in privacy-sensitive applications. In this paper, we introduce $\mathsf{DUPLEX}$, a scalable zero-knowledge lookup argument scheme that offers significant improvements over previous approaches. Notably, we present the first lookup argument designed to operate over the RSA group. Our core technique allows for the transformation of elements into prime numbers to ensure compatibility with the RSA group, all without imposing substantial computational costs on the prover. Given $m$ lookup elements, $\mathsf{DUPLEX}$ achieves an asymptotic proving time of $O(m \log m)$, with constant-sized proofs, constant-time verification, and a public parameter size independent of the table size $N$. Additionally, $\mathsf{DUPLEX}$ ensures the privacy of lookup elements and is robust against dynamic table updates, making it highly suitable for scalable verifiable computation in real-world applications. We implemented and empirically evaluated $\mathsf{DUPLEX}$, comparing it with the state-of-the-art zero-knowledge lookup argument Caulk [CCS'22]. Our experimental results demonstrate that $\mathsf{DUPLEX}$ significantly outperforms Caulk in proving time for both single and batched lookup arguments, while maintaining practical proof size and verification time.
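The core technique, as described, maps lookup elements to primes so they can live in an RSA group; a counter-based hash-to-prime sketch in that spirit (our own illustration, not the paper's exact encoding) might look like this.

```python
# Map an arbitrary element to a prime via hashing with rejection sampling,
# so the result can be used as an exponent/member in an RSA-group accumulator.
import hashlib
from sympy import isprime

def hash_to_prime(element: bytes, bits: int = 128) -> int:
    ctr = 0
    while True:
        h = hashlib.sha256(element + ctr.to_bytes(8, 'big')).digest()
        candidate = (int.from_bytes(h, 'big') >> (256 - bits)) | 1  # force odd
        if isprime(candidate):
            return candidate
        ctr += 1  # re-hash with the next counter until a prime is found

print(hash_to_prime(b"lookup-element"))
```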
Last updated:  2024-10-02
Call Me By My Name: Simple, Practical Private Information Retrieval for Keyword Queries
Sofía Celi and Alex Davidson
We introduce $\mathsf{ChalametPIR}$: a single-server Private Information Retrieval (PIR) scheme supporting fast, low-bandwidth \emph{keyword} queries, with a conceptually very simple design. In particular, we develop a generic framework for converting PIR schemes for index queries over flat arrays (based on Learning With Errors) into keyword PIR. This involves representing a key-value map using any probabilistic filter that permits reconstruction of elements from inclusion queries (e.g. Cuckoo filters). In particular, we make use of recently developed \emph{Binary Fuse filters} to construct $\mathsf{ChalametPIR}$, with minimal efficiency blow-up compared with state-of-the-art index-based schemes (all costs bounded by a factor of $\leq 1.08$). Furthermore, we show that $\mathsf{ChalametPIR}$ achieves runtimes and financial costs that are factors of between $6\times$-$11\times$ and $3.75\times$-$11.4\times$ more efficient, respectively, than state-of-the-art keyword PIR approaches, for varying database configurations. Bandwidth costs are reduced or remain competitive, depending on the configuration. Finally, we believe that our application of Binary Fuse filters can have independent value towards developing efficient variants of related cryptographic primitives (e.g. private set intersection), that already benefit from using less efficient filter constructions.
Last updated:  2024-10-02
Security Perceptions of Users in Stablecoins: Advantages and Risks within the Cryptocurrency Ecosystem
Maggie Yongqi Guan, Yaman Yu, Tanusree Sharma, Molly Zhuangtong Huang, Kaihua Qin, Yang Wang, and Kanye Ye Wang
Stablecoins, a type of cryptocurrency pegged to another asset to maintain a stable price, have become an important part of the cryptocurrency ecosystem. Prior studies have primarily focused on examining the security of stablecoins from technical and theoretical perspectives, with limited investigation into users’ risk perceptions and security behaviors in stablecoin practices. To address this research gap, we conducted a mixed-method study that included constructing a stablecoin interaction framework based on the literature, which informed the design of our interview protocol, semi-structured interviews (n=21), and Reddit data analysis (9,326 posts). We found that participants see stable value and regulatory compliance as key security advantages of stablecoins over other cryptocurrencies. However, participants also raised concerns about centralization risks in fiat-backed stablecoins, perceived challenges in crypto-backed stablecoins due to limited reliance on fully automated execution, and confusion regarding the complex mechanisms of algorithmic stablecoins. We proposed improving user education and optimizing mechanisms to address these concerns and promote the safer use of stablecoins.
Last updated:  2024-10-01
32-bit and 64-bit CDC-7-XPUF Implementations on a Zynq-7020 SoC
Oğuz Yayla and Yunus Emre Yılmaz
Physically (or Physical) Unclonable Functions (PUFs) are basic and useful primitives in designing cryptographic systems. PUFs are designed to facilitate device authentication, secure boot, firmware integrity, and secure communications. To achieve these objectives, PUFs must exhibit both consistent repeatability and instance-specific randomness. The Arbiter PUF (APUF), recognized as the first silicon PUF, is capable of generating a substantial number of secret keys instantaneously based on the input, all while maintaining a lightweight design. This advantageous characteristic makes it particularly well-suited for device authentication in applications with constrained resources, especially for Internet-of-Things (IoT) devices. Despite these advantages, APUFs are vulnerable to machine learning (ML) attacks. Hence, those APUF designs were improved to achieve increased resistance against such attacks while maintaining usefulness and efficiency for IoT applications, and Component-Differentially Challenged XOR Arbiters (CDC-XPUFs) were proposed. In this work, ML-resistant 32-bit and 64-bit implementations of the Component-Differentially Challenged XOR Arbiter PUF with 7-stream (CDC-7-XPUF) are carried out. These CDC-7-XPUFs are evaluated using PUF metrics from the literature, and the resource utilization ratios of both implementations are also presented. The implementation setup contains the ZC702 Rev1.1 Evaluation Board, featuring the Xilinx Zynq-7020 SoC, and utilizes a configuration involving three boards for experimental validation.
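For intuition, the additive-delay model commonly used to analyze arbiter PUFs (and XOR combinations with per-stream challenges, as in CDC-XPUFs) can be simulated in a few lines. The following is a hedged sketch of that standard linear model with illustrative parameters, not the paper's Zynq-7020 implementation.

```python
# Toy simulation of the standard additive-delay model for arbiter PUFs,
# XOR-combined over 7 streams with distinct ("component-differential")
# challenges, as in a CDC-7-XPUF. Weights stand in for device delays.
import numpy as np

rng = np.random.default_rng(1)

def phi(challenge):
    """Parity feature vector of the linear APUF model."""
    c = 1 - 2 * challenge              # {0,1} -> {+1,-1}
    prods = np.cumprod(c[::-1])[::-1]  # phi_i = prod_{j>=i} c_j
    return np.append(prods, 1.0)

def apuf_response(w, challenge):
    return int(np.dot(w, phi(challenge)) > 0)

n, streams = 64, 7
W = rng.normal(size=(streams, n + 1))                # per-stream delay parameters
challenges = rng.integers(0, 2, size=(streams, n))   # one challenge per component
response = 0
for s in range(streams):
    response ^= apuf_response(W[s], challenges[s])
print("CDC-7-XPUF response bit:", response)
```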
Last updated:  2024-10-01
How (not) to hash into class groups of imaginary quadratic fields?
István András Seres, Péter Burcsi, and Péter Kutas
Class groups of imaginary quadratic fields (class groups for short) have seen a resurgence in cryptography as transparent groups of unknown order. They are a prime candidate for being a trustless alternative to RSA groups because class groups do not need a (distributed) trusted setup to sample a cryptographically secure group of unknown order. Class groups have recently found many applications in verifiable secret sharing, secure multiparty computation, transparent polynomial commitments, and perhaps most importantly, in time-based cryptography, e.g., verifiable delay functions, (homomorphic) time-lock puzzles, and timed commitments. However, there are various roadblocks to making class groups widespread in practical cryptographic deployments. We initiate the rigorous study of hashing into class groups. Specifically, we want to sample a uniformly distributed group element in a class group such that nobody knows its discrete logarithm with respect to any public parameter. We point out several flawed algorithms in numerous publicly available class group libraries. We further illustrate the insecurity of these hash functions by showing concrete attacks against cryptographic protocols, e.g., verifiable delay functions, if they were deployed with one of those broken hash-to-class-group functions. We propose two families of cryptographically secure hash functions into class groups. We implement these constructions and evaluate their performance. We release our implementation as an open-source library.
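For orientation, the textbook pattern for hashing to a class group element goes through binary quadratic forms: derive a prime $a$ from the digest, solve $b^2 \equiv D \pmod{4a}$, and output the form $(a, b, c)$ of discriminant $D$. The sketch below (assuming sympy) illustrates only this generic pattern; the paper shows that naive instantiations of exactly this kind of routine, as found in several libraries, can be insecure, so treat it purely as illustration.

```python
# Minimal sketch of the textbook digest -> quadratic form pattern.
import hashlib
from sympy import isprime
from sympy.ntheory import sqrt_mod

D = -1019  # toy fundamental discriminant, D < 0 and D = 1 (mod 4)

def hash_to_form(msg: bytes):
    """Map msg to a (not necessarily reduced) form (a, b, c) of discriminant D."""
    ctr = 0
    while True:
        digest = hashlib.sha256(msg + ctr.to_bytes(4, "big")).digest()
        a = int.from_bytes(digest[:5], "big")  # ~40-bit candidate for a
        ctr += 1
        if a < 3 or not isprime(a):
            continue                  # need an odd prime a
        b = sqrt_mod(D % a, a)        # None if D is not a square mod a
        if b is None or b == 0:
            continue
        if b % 2 != 1:                # force b odd so b^2 = D = 1 (mod 4)
            b = a - b
        c = (b * b - D) // (4 * a)
        return (a, b, c)              # b^2 - 4ac = D by construction

print(hash_to_form(b"hello class group"))
```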
Last updated:  2024-10-01
The Insecurity of Masked Comparisons: SCAs on ML-KEM’s FO-Transform
Julius Hermelink, Kai-Chun Ning, Richard Petri, and Emanuele Strieder
NIST released the draft standard for ML-KEM, and we can expect its widespread use in the embedded world in the near future. Several side-channel attacks have been proposed, and one line of research has focused on attacks against the comparison step of the FO-transform. A work published at TCHES 2022 stressed the need for secure higher-order masked comparisons beyond the $t$-probing model and proposed a higher-order masked comparison method. Subsequently, D'Anvers, Van Beirendonck, and Verbauwhede improved upon the performance of several previous proposals; their higher-order masked algorithm currently achieves the highest performance for masked comparisons. In this work, we show that while this proposal is secure in the $t$-probing model, its security in practice is questionable. We first propose an approximate template attack that requires only a small number of traces for profiling and has an exceptionally high noise tolerance. We demonstrate that, without knowledge of the targeted values, a vertical analysis of the distribution of certain points of interest can replace the profiling phase. Finally, we explain how a decryption failure oracle may be constructed from a single trace. We prove that these attacks are not affected by higher masking orders, even at noise levels far beyond those that thwart previous profiled attacks on ML-KEM. Further, we provide simulations showing that even under extreme noise levels, the attacks are not prevented by realistic masking orders. Additionally, we carry out the attacks on multiple physical devices to underline their practicality. We discuss the underlying causes of our attack, demonstrate the difficulty of securing the FO-transform in ML-KEM, draw conclusions about the (in-)sufficiency of $t$-probing security in this context, and highlight an open gap in securing ML-KEM on embedded devices.
Last updated:  2024-10-01
Instance-Hiding Interactive Proofs
Changrui Mu and Prashant Nalini Vasudevan
In an Instance-Hiding Interactive Proof (IHIP) [Beaver et al. CRYPTO 90], an efficient verifier with a _private_ input x interacts with an unbounded prover to determine whether x is contained in a language L. In addition to completeness and soundness, the instance-hiding property requires that the prover should not learn anything about x in the course of the interaction. Such proof systems capture natural privacy properties, and may be seen as a generalization of the influential concept of Randomized Encodings [Ishai et al. FOCS 00, Applebaum et al. FOCS 04, Agrawal et al. ICALP 15], and as a counterpart to Zero-Knowledge proofs [Goldwasser et al. STOC 89]. We investigate the properties and power of such instance-hiding proofs, and show the following: 1. Any language with an IHIP is contained in NP/poly and coNP/poly. 2. If an average-case hard language has an IHIP, then One-Way Functions exist. 3. There is an oracle with respect to which there is a language that has an IHIP but not an SZK proof. 4. IHIPs are closed under composition with any efficiently computable function. We further study a stronger version of IHIP (that we call Simulatable IHIP) where the view of the honest prover can be efficiently simulated. For these, we obtain stronger versions of some of the above: 5. Any language with a Simulatable IHIP is contained in AM and coAM. 6. If a _worst-case_ hard language has a Simulatable IHIP, then One-Way Functions exist.
Last updated:  2024-10-01
Exploiting the Central Reduction in Lattice-Based Cryptography
Tolun Tosun, Amir Moradi, and Erkay Savas
This paper questions the side-channel security of the central reduction technique, which is widely adopted in efficient implementations of Lattice-Based Cryptography (LBC). We show that the central reduction leads to a vulnerability by creating a strong dependency between the power consumption and the sign of sensitive intermediate values. We exploit this dependency by introducing the novel absolute value prediction function, which can be employed in higher-order non-profiled multi-query Side-Channel Analysis (SCA) attacks. Our results reveal that – compared to classical reduction algorithms – employing the central reduction scheme leads to a two-orders-of-magnitude decrease in the number of SCA measurements required to exploit secrets of masked implementations. We particularly show that our approach is valid for the prime moduli employed by Kyber and Dilithium, the lattice-based post-quantum algorithms selected by NIST. We practically evaluate our introduced approach by performing second-order non-profiled attacks against an open-source masked implementation of Kyber on an ARM Cortex-M4 microprocessor. In our experiments, we revealed the full secret key of the aforementioned masked implementation with only 250 power traces, without any form of profiling or choosing the ciphertexts.
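The sign dependency is easy to see in code: central reduction stores representatives in $[-(q-1)/2, (q-1)/2]$, so in two's complement a negative value has near-maximal Hamming weight. A minimal sketch with Kyber's modulus $q = 3329$ (illustrative only, not the attacked implementation):

```python
# Central vs. classical reduction: the Hamming weight of the stored
# 16-bit value separates positive from negative central representatives,
# which is exactly the kind of dependency a power model can exploit.
q = 3329

def reduce_classical(x):
    return x % q                       # representative in [0, q)

def reduce_central(x):
    r = x % q
    return r - q if r > q // 2 else r  # representative in [-q//2, q//2]

def hw16(v):
    return bin(v & 0xFFFF).count("1")  # Hamming weight of the 16-bit storage

for x in (5, -5, 1700, -1700):
    c = reduce_central(x)
    print(f"x={x:6d} central={c:6d} HW={hw16(c):2d} classical HW={hw16(reduce_classical(x)):2d}")
# Negative central representatives (e.g. -5 -> 0xFFFB) have HW close to 16,
# positive ones have low HW: the leakage model separates the signs.
```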
Last updated:  2024-10-01
VOLE-in-the-head signatures from Subfield Bilinear Collisions
Janik Huth and Antoine Joux
In this paper, we introduce a new method to construct a signature scheme based on the subfield bilinear collision problem published at Crypto 2024. We use techniques based on vector oblivious linear evaluation (VOLE) to significantly improve the running time and signature size of the scheme compared to the MPC-in-the-head version.
Last updated:  2024-10-01
Cryptographic Characterization of Quantum Advantage
Tomoyuki Morimae, Yuki Shirakawa, and Takashi Yamakawa
Quantum computational advantage refers to the existence of computational tasks that are easy for quantum computers but hard for classical ones. Unconditionally showing quantum advantage is beyond our current understanding of complexity theory, and therefore some computational assumptions are needed. Which complexity assumption is necessary and sufficient for quantum advantage? In this paper, we show that inefficient-verifier proofs of quantumness (IV-PoQ) exist if and only if classically-secure one-way puzzles (OWPuzzs) exist. As far as we know, this is the first time that a complete cryptographic characterization of quantum advantage has been obtained. IV-PoQ capture various types of quantum advantage previously studied, such as sampling advantage and searching advantage. Previous work [Morimae and Yamakawa 2024] showed that IV-PoQ can be constructed from one-way functions (OWFs), but a construction of IV-PoQ from weaker assumptions was left open. Our result solves this open problem. OWPuzzs are one of the most fundamental quantum cryptographic primitives, implied by many quantum cryptographic primitives weaker than OWFs. The equivalence between IV-PoQ and classically-secure OWPuzzs therefore highlights that if there is no quantum advantage, then these fundamental primitives do not exist. The equivalence also means that quantum advantage is an example of the applications of OWPuzzs. Except for commitments, no application of OWPuzzs was known before. Our result shows that quantum advantage is another application of OWPuzzs. Moreover, it is the first quantum computation classical communication (QCCC) application of OWPuzzs.
Last updated:  2024-10-01
Relaxed Lattice-Based Programmable Hash Functions: New Efficient Adaptively Secure IBEs
Xingye Lu, Jingjing Fan, and Man Ho AU
In this paper, we introduce the notion of relaxed lattice-based programmable hash function (RPHF), which is a novel variant of lattice-based programmable hash functions (PHFs). Lattice-based PHFs, together with preimage trapdoor functions (TDFs), have been widely utilized (implicitly or explicitly) in the construction of adaptively secure identity-based encryption (IBE) schemes. The preimage length and the output length of the underlying PHF and TDF together determine the user secret key and ciphertext lengths of the IBE schemes. However, the current lattice-based PHF definition imposes the restriction that the preimage length of the TDF in the IBE schemes cannot be too short, hindering the utilization of size-efficient NTRU TDFs. To overcome this hurdle, RPHF relaxes the hash key distribution requirement in the definition of PHF from statistical indistinguishability to computational indistinguishability. This relaxation eliminates limitations on the preimage length of underlying TDFs in IBE, enabling the construction of IBEs from NTRU TDFs. We introduce two instantiations of RPHF: the first produces a hash output length of $2$ ring elements, with a hash key size linear in the input length, and the second yields an output length of $14$ ring elements, with a hash key size proportional to the square root of the input length. Building upon these RPHF instantiations, we propose two adaptively secure lattice-based IBE schemes with ciphertext lengths of $5$ and $17$ ring elements and user secret key lengths of $4$ and $16$ ring elements, respectively. The length of the IBE master public key is roughly equivalent to the size of the hash key of the underlying RPHF. In comparison to existing IBE constructions, our proposed schemes achieve a significant reduction (over an order of magnitude) in ciphertext and secret key sizes. Notably, state-of-the-art constructions from ideal lattices exhibit secret key and ciphertext sizes over 100 ring elements, making our proposed schemes highly efficient. Moreover, the master public key sizes of our IBEs remain comparable.
Last updated:  2024-10-01
More Efficient Lattice-based OLE from Circuit-private Linear HE with Polynomial Overhead
Leo de Castro, Duhyeong Kim, Miran Kim, Keewoo Lee, Seonhong Min, and Yongsoo Song
We present a new and efficient method to obtain circuit privacy for lattice-based linearly homomorphic encryptions (LHE). In particular, our method does not involve noise-flooding with exponentially large errors or iterative bootstrapping. As a direct result, we obtain a semi-honest oblivious linear evaluation (OLE) protocol with the same efficiency, reducing the communication cost of the prior state of the art by 50%. Consequently, the amortized time of our protocol improves the prior work by 33% under a 100 Mbps network setting. Our semi-honest OLE is the first to achieve both concrete efficiency and asymptotic quasi-optimality. Together with an extension of the recent zero-knowledge proof of plaintext knowledge, our LHE yields actively-secure OLE with 2.7x less communication than the prior work. When applied to Overdrive (Eurocrypt '18), an MPC preprocessing protocol, our method provides a 1.4x improvement in communication over the state of the art.
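The OLE functionality itself is simple to state: the receiver holds $x$, the sender holds $(a, b)$, and the receiver learns only $a x + b$. As a hedged illustration of the LHE-based template, here is a toy instantiation with Paillier, where fresh rerandomization gives circuit privacy essentially for free; the paper's contribution is the lattice analogue, where avoiding noise flooding is the hard part. Parameters are insecurely small.

```python
# Toy OLE from an additively homomorphic scheme (Paillier sketch).
import math, random

p, q = 2357, 2551                      # toy primes; real use: 1024-bit and up
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)

def enc(m):
    r = random.randrange(1, n)         # assumes gcd(r, n) = 1 (overwhelmingly likely)
    return pow(1 + n, m, n2) * pow(r, n, n2) % n2

def dec(c):
    u = pow(c, lam, n2)
    return ((u - 1) // n) * pow(lam, -1, n) % n

# Receiver: encrypt input x.
x = 123
cx = enc(x)
# Sender: homomorphically evaluate a*x + b, then rerandomize (circuit privacy).
a, b = 45, 678
c_out = pow(cx, a, n2) * enc(b) % n2                      # Enc(a*x + b)
c_out = c_out * pow(random.randrange(1, n), n, n2) % n2   # fresh randomness
# Receiver decrypts and learns only a*x + b.
assert dec(c_out) == (a * x + b) % n
```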
Last updated:  2024-10-01
Multi-Designated Detector Watermarking for Language Models
Zhengan Huang, Gongxian Zeng, Xin Mu, Yu Wang, and Yue Yu
In this paper, we initiate the study of multi-designated detector watermarking (MDDW) for large language models (LLMs). This technique allows model providers to generate watermarked outputs from LLMs with two key properties: (i) only specific, possibly multiple, designated detectors can identify the watermarks, and (ii) there is no perceptible degradation in the output quality for ordinary users. We formalize the security definitions for MDDW and present a framework for constructing MDDW for any LLM using multi-designated verifier signatures (MDVS). Recognizing the significant economic value of LLM outputs, we introduce claimability as an optional security feature for MDDW, enabling model providers to assert ownership of LLM outputs within designated-detector settings. To support claimable MDDW, we propose a generic transformation converting any MDVS to a claimable MDVS. Our implementation of the MDDW scheme highlights its advanced functionalities and flexibility over existing methods, with satisfactory performance metrics.
Last updated:  2024-10-01
SPADE: Digging into Selective and PArtial DEcryption using Functional Encryption
Camille Nuoskala, Hossein Abdinasibfar, and Antonis Michalas
Functional Encryption (FE) is a cryptographic technique established to guarantee data privacy while allowing the retrieval of specific results from the data. While traditional decryption methods rely on a secret key disclosing all the data, FE introduces a more subtle approach. The key generation algorithm generates function-specific decryption keys that can be adaptively provided based on policies. Adaptive access control is a good feature for privacy-preserving techniques. Generic schemes have been designed to run basic functions, such as linear regression. However, they often provide a narrow set of outputs, resulting in a lack of thorough analysis. The bottom line is that despite significant research, FE still requires appropriate constructions to unleash its full potential in securely analyzing data and providing more insights. In this article, we introduce SPADE, a novel FE scheme that features multiple users and offers fine-grained access control through partial decryption of the ciphertexts. Unlike existing FE schemes, our construction also supports qualitative data, such as genomics, expanding the applications of privacy-preserving analysis to enable a comprehensive study of the data. SPADE is a significant advancement that balances privacy and data analysis with clear implications in healthcare and finance. To verify its applicability, we conducted extensive experiments on datasets used in sleep medicine (hypnogram data) and DNA analysis (genomic records).
Last updated:  2024-10-01
Extending class group action attacks via sesquilinear pairings
Joseph Macula and Katherine E. Stange
We introduce a new tool for the study of isogeny-based cryptography, namely pairings which are sesquilinear (conjugate linear) with respect to the $\mathcal{O}$-module structure of an elliptic curve with CM by an imaginary quadratic order $\mathcal{O}$. We use these pairings to study the security of problems based on the class group action on collections of oriented ordinary or supersingular elliptic curves. This extends work of both (Castryck, Houben, Merz, Mula, Buuren, Vercauteren, 2023) and (De Feo, Fouotsa, Panny, 2024).
Last updated:  2024-09-30
BEAT-MEV: Epochless Approach to Batched Threshold Encryption for MEV Prevention
Jan Bormet, Sebastian Faust, Hussien Othman, and Ziyan Qu
In decentralized finance (DeFi), the public availability of pending transactions presents significant privacy concerns, enabling market manipulation through miner extractable value (MEV). MEV occurs when block proposers exploit the ability to reorder, omit, or include transactions, causing financial loss to users from frontrunning. Recent research has focused on encrypting pending transactions, hiding transaction data until block finalization. To this end, Choudhuri et al. (USENIX '24) introduce an elegant new primitive called Batched Threshold Encryption (BTE) where a batch of encrypted transactions is selected by a committee and only decrypted after block finalization. Crucially, BTE achieves low communication complexity during decryption and guarantees that all encrypted transactions outside the batch remain private. An important shortcoming of their construction is, however, that it progresses in epochs and requires a costly setup in MPC for each batch decryption. In this work, we introduce a novel BTE scheme addressing the limitations by eliminating the need for an expensive epoch setup while achieving practical encryption and decryption times. Additionally, we explore the problem of how users can coordinate their transactions, which is crucial for the functionality of the system. Along the way, we present several optimizations and trade-offs between communication and computational complexity that allow us to achieve practical performance on standard hardware ($<2$ ms for encryption and $<440$ ms for decrypting $512$ transactions). Finally, we prove our constructions secure in a model that captures practical attacks on MEV-prevention mechanisms.
Last updated:  2024-09-30
Adaptive Garbled Circuits and Garbled RAM from Non-Programmable Random Oracles
Cruz Barnum, David Heath, Vladimir Kolesnikov, and Rafail Ostrovsky
Garbled circuit techniques that are secure in the adaptive setting -- where inputs are chosen after the garbled program is sent -- are motivated by practice, but they are notoriously difficult to achieve. Prior adaptive garbling is either impractically expensive or encrypts the entire garbled program with the output of a programmable random oracle (PRO), a strong assumption. We present a simple framework for proving adaptive security of garbling schemes in the non-programmable random oracle (NPRO) model. NPRO is a milder assumption than PRO, and it is close to the assumption required by the widely used Free XOR extension. Our framework is applicable to a number of existing GC techniques, which are proved adaptively secure without modification. As our main application of the framework, we construct and prove adaptively secure a garbling scheme for tri-state circuits (TSCs), a recently proposed circuit model that captures both Boolean circuits and RAM programs (Heath et al., Crypto 2023). For a given TSC $C$, our garbling of $C$ is at most $|C|\cdot \lambda$ bits long, where $\lambda$ is the security parameter. This implies both an adaptively secure garbled Boolean circuit scheme, as well as an adaptively secure garbled RAM scheme where the garbling of $T$ steps of a RAM program has size $O(T \cdot \log^3 T \cdot \log \log T \cdot \lambda)$ bits. Our scheme is concretely efficient: its Boolean circuit handling matches the performance of half-gates, and it is adaptively secure from NPRO.
Last updated:  2024-09-30
Mind the Propagation of States: New Automatic Search Tool for Impossible Differentials and Impossible Polytopic Transitions (Full Version)
Xichao Hu, Yongqiang Li, Lin Jiao, Shizhu Tian, and Mingsheng Wang
Impossible differential cryptanalysis and impossible polytopic cryptanalysis are the most effective approaches for estimating the security of block ciphers. However, the previous automatic methods for searching for their distinguishers, impossible differentials and impossible polytopic transitions, neither consider the impact of the key schedule in the single-key setting and the differential properties of large S-boxes, nor apply to block ciphers with variable rotations. Thus, unlike previous methods which focus on the propagation of the difference or $s$-difference, we redefine impossible differentials and impossible $(s+1)$-polytopic transitions according to the propagation of state, which allows us to break through those limitations of the previous methods. Theoretically, we prove that traditional impossible differentials and impossible $(s+1)$-polytopic transitions are equivalent to part of our redefinitions, which have the advantage of a broader view. Technically, we renew the automatic search model and design a SAT-based tool to evaluate our redefined impossible differentials and impossible $(s+1)$-polytopic transitions efficiently. This tool is capable of searching for impossible differentials and impossible $(s+1)$-polytopic transitions not only in the single-key setting but also in the related-key setting. As a result, for GIFT64, we obtain $6$-round impossible differentials which cannot be detected by any previous tool. For PRINTcipher, we propose the first modeling method for the key-dependent permutation and key-dependent S-box. For MISTY1, we derive 902 4-round impossible differentials by exploiting the differential properties of its S-boxes. For RC5, we present the first modeling method for the variable rotation and obtain 2.5-round impossible differentials for each of its versions. We also obtain impossible differentials for GIFT, CHAM and SPECK in the related-key setting. More remarkably, our tool can be used to evaluate the security of a given cipher against impossible differentials, and we prove that there exists no 5-round impossible differential of AES-128 with 1 active input word and 1 active output word, even considering the relations of 3-round keys. Besides, we also obtain impossible $(s+1)$-polytopic transitions for PRINTcipher, GIFT64, PRESENT, and RC5, all of which can cover more rounds than their corresponding impossible differentials, as far as we know.
Last updated:  2024-09-30
A reduction from Hawk to the principal ideal problem in a quaternion algebra
Clémence Chevignard, Pierre-Alain Fouque, Guilhem Mureau, Alice Pellet-Mary, and Alexandre Wallet
In this article we present a non-uniform reduction from rank-2 module-LIP over Complex Multiplication fields to a variant of the Principal Ideal Problem in a fitting quaternion algebra. This reduction is classical, deterministic, and polynomial-time in the size of the inputs. The quaternion algebra in which we need to solve the variant of the principal ideal problem depends on the parameters of the module-LIP problem, but not on the problem's instance. Our reduction requires the knowledge of some special elements of this quaternion algebra, which is why it is non-uniform. In some particular cases, these elements can be computed in polynomial time, making the reduction uniform. This is the case for the Hawk signature scheme: we show that breaking Hawk is no harder than solving a variant of the principal ideal problem in a fixed quaternion algebra (and this reduction is uniform).
Last updated:  2024-09-30
Breaking HWQCS: a code-based signature scheme from high weight QC-LDPC codes
Alex Pellegrini and Giovanni Tognolini
We analyse HWQCS, a code-based signature scheme presented at ICISC 2023, which uses quasi-cyclic low-density parity-check (QC-LDPC) codes. The scheme introduces high Hamming weight errors and signs each message using a fresh ephemeral secret key rather than using only one secret key, so as to avoid known attacks on QC-LDPC signature schemes. In this paper, we show that the signatures of HWQCS leak substantial information concerning the ephemeral keys and formally describe this behaviour. Furthermore, we show that for each security level, we can exploit the leakage to efficiently reconstruct partial secret data from very few signatures, and finally mount a universal forgery attack.
Last updated:  2024-09-30
Bitwise Garbling Schemes --- A Model with $\frac{3}{2}\kappa$-bit Lower Bound of Ciphertexts
Fei Xu, Honggang Hu, and Changhong Xu
At Eurocrypt 2015, Zahur, Rosulek, and Evans proposed the model of Linear Garbling Schemes. This model proved a $2\kappa$-bit lower bound of ciphertexts for a broad class of garbling schemes. Since then, several methods have been developed that bypass this lower bound, albeit with a notable limitation: Their reliance on specifically correlated input wire labels restricts their applicability across most gates. At Crypto 2021, Rosulek and Roy presented the innovative "three-halves" garbling scheme in which AND gates cost $1.5\kappa+5$ bits and XOR gates are free. A noteworthy aspect of their scheme is the slicing-and-dicing technique, which is applicable universally to all AND gates when garbling a boolean circuit. Following this revelation, Rosulek and Roy presented several open problems. Our research primarily addresses one of them: ``Is $1.5\kappa$ bits optimal for garbled AND gates in a more inclusive model than Linear Garbling Schemes?'' In this paper, we propose the Bitwise Garbling Schemes, a model that seamlessly incorporates the slicing-and-dicing technique. Our key revelation is that $1.5\kappa$ bits is indeed optimal for arbitrary garbled AND gates in our model. Since Rosulek and Roy also suggested another problem which questions the necessity of free-XOR, we explore constructions without free-XOR and prove a $2\kappa$-bit lower bound. Therefore, sacrificing compatibility with free-XOR does not lead to a more efficient scheme.
Last updated:  2024-09-30
A Formal Analysis of Apple’s iMessage PQ3 Protocol
Felix Linker, Ralf Sasse, and David Basin
We present the formal verification of Apple’s iMessage PQ3, a highly performant, device-to-device messaging protocol offering strong security guarantees even against an adversary with quantum computing capabilities. PQ3 leverages Apple’s identity services together with a custom, post-quantum secure initialization phase and afterwards it employs a double ratchet construction in the style of Signal, extended to provide post-quantum, post-compromise security. We present a detailed formal model of PQ3, a precise specification of its fine-grained security properties, and machine-checked security proofs using the TAMARIN prover. Particularly novel is the integration of post-quantum secure key encapsulation into the relevant protocol phases and the detailed security claims along with their complete formal analysis. Our analysis covers both key ratchets, including unbounded loops, which was believed by some to be out of scope of symbolic provers like TAMARIN (it is not!).
Last updated:  2024-09-30
A Composable View of Homomorphic Encryption and Authenticator
Ganyuan Cao
Homomorphic Encryption (HE) is a cutting-edge cryptographic technique that enables computations performed on encrypted data to be mirrored on the underlying data. This has quickly attracted substantial interest from the research community due to its extensive practical applications, such as in cloud computing and privacy-preserving machine learning. In addition to confidentiality, the importance of authenticity has emerged to ensure data integrity during transmission and evaluation. To address authenticity, various primitives have been developed, including Homomorphic Authenticators (HA). Corresponding security notions have also been introduced by extending the existing notions to their homomorphic versions. Despite these advancements, formalizing the security of HE and HA remains challenging due to the novelty of these primitives and the complexity of application scenarios involving message evaluation. It remains unclear which definitions in this zoo of notions are insufficient or overly complex. Moreover, HE and HA are designed to be combined to construct a secure communication channel that ensures both confidentiality and authenticity. However, the security of such compositions is not always clear when game-based notions are used to formalize security. To bridge this gap, we conduct a constructive analysis through the lens of composable security. This method enables us to examine the security properties of each primitive in isolation and to more effectively evaluate their security when integrated into a larger system. We introduce the concepts of a confidential channel and an authenticated channel to specify the security requirements for HE and HA, respectively. We make a comparison with existing game-based notions to determine whether they adequately capture the intended security objectives. We then analyze whether the composition of HE and HA constructs a Homomorphic Authenticated Encryption (HAE) scheme that provides both confidentiality and authenticity in the presence of message evaluation. Specifically, we examine a serial composition of HE and HA, corresponding to the Encrypt-then-MAC (EtM) composition for constructing classical AE.
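For reference, the classical EtM composition that the analysis mirrors can be sketched with stdlib primitives alone. The toy keystream cipher below stands in for a generic confidentiality scheme and is an assumption of this sketch, not a construction from the paper; the paper's question is when HE and HA composed the same way yield homomorphic authenticated encryption.

```python
# Encrypt-then-MAC: encrypt first, then MAC the (nonce, ciphertext) pair.
import hashlib, hmac, os

def keystream(key, nonce, length):
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def etm_encrypt(k_enc, k_mac, msg):
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(msg, keystream(k_enc, nonce, len(msg))))
    tag = hmac.new(k_mac, nonce + ct, hashlib.sha256).digest()  # MAC the ciphertext
    return nonce, ct, tag

def etm_decrypt(k_enc, k_mac, nonce, ct, tag):
    if not hmac.compare_digest(tag, hmac.new(k_mac, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("reject: authenticity check failed")
    return bytes(a ^ b for a, b in zip(ct, keystream(k_enc, nonce, len(ct))))

k1, k2 = os.urandom(32), os.urandom(32)
n_, c_, t_ = etm_encrypt(k1, k2, b"attack at dawn")
assert etm_decrypt(k1, k2, n_, c_, t_) == b"attack at dawn"
```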
Last updated:  2024-09-30
A Greedy Global Framework for Lattice Reduction Using Deep Insertions
Sanjay Bhattacherjee, Julio Hernandez-Castro, and Jack Moyler
LLL-style lattice reduction algorithms iteratively employ size reduction and reordering on ordered basis vectors to find progressively shorter, more orthogonal vectors. DeepLLL reorders the basis through deep insertions, yielding much shorter vectors than LLL. DeepLLL was introduced alongside BKZ; however, the latter has received greater attention and has emerged as the state of the art. We first show that LLL-style algorithms work with a designated measure of basis quality and iteratively improve it; specifically, DeepLLL improves a sublattice measure based on the generalised Lovász condition. We then introduce a new generic framework X-GG for lattice reduction algorithms that work with a measure X of basis quality. X-GG globally searches for deep insertions that minimise X in each iteration. We instantiate the framework with two quality measures — basis potential (Pot) and squared sum (SS) — both of which have corresponding DeepLLL algorithms. We prove polynomial runtimes for our X-GG algorithms and also prove their output to be of guaranteed quality. By experimentally comparing the performance of LLL, X-DeepLLL and X-GG implementations without preprocessing, we find that our GG algorithms produce better quality bases whilst being much faster than the corresponding DeepLLL versions. We also compare SS-GG and the FPLLL implementation of BKZ with LLL-preprocessed bases at higher dimensions $n$, varying the threshold $\delta$ for deep insertion. With $\delta=1-10^{-6}$, SS-GG provides simultaneous improvements in both runtime and output quality over BKZ-12 for $100\leq n\leq250$. In our experimental comparison with BKZ-21, SS-GG with $\delta=1$ has a significantly better runtime, while its output quality improves relative to BKZ-21 from around $n=350$.
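The deep-insertion step that X-GG searches over globally can be sketched directly from the definitions: compute the Gram-Schmidt data, walk the prefix positions $k$, and move $\mathbf{b}_i$ to the first $k$ where $\|\pi_k(\mathbf{b}_i)\|^2 < \delta \|\mathbf{b}^*_k\|^2$. A minimal numpy sketch of one such step follows (illustrative; real implementations update the GSO incrementally rather than recomputing it, and the paper's algorithms search all positions for the measure-minimising insertion rather than taking the first).

```python
import numpy as np

def gso(B):
    """Gram-Schmidt orthogonalization: returns B* (rows) and mu coefficients."""
    n = B.shape[0]
    Bs = B.astype(float).copy()
    mu = np.eye(n)
    for i in range(n):
        for j in range(i):
            mu[i, j] = B[i] @ Bs[j] / (Bs[j] @ Bs[j])
            Bs[i] = Bs[i] - mu[i, j] * Bs[j]
    return Bs, mu

def deep_insert(B, i, delta=0.99):
    """Deep-insert row i at the first position k violating the deep
    exchange condition; return (new_basis, k), or (B, i) if none does."""
    Bs, mu = gso(B)
    norms = np.einsum("ij,ij->i", Bs, Bs)  # ||b*_k||^2
    proj = float(B[i] @ B[i])              # ||pi_1(b_i)||^2 = ||b_i||^2
    for k in range(i):
        if proj < delta * norms[k]:        # condition triggers: insert here
            order = list(range(k)) + [i] + [j for j in range(k, len(B)) if j != i]
            return B[order], k
        proj -= mu[i, k] ** 2 * norms[k]   # descend to pi_{k+1}(b_i)
    return B, i

B = np.array([[9, 2, 7], [4, 8, 1], [1, 1, 6]])
B2, k = deep_insert(B, 2)
print("inserted at position", k)
```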
Last updated:  2024-09-30
FLI: Folding Lookup Instances
Albert Garreta and Ignacio Manzur
We introduce two folding schemes for lookup instances: FLI and FLI+SOS. Both use a PIOP to check that a matrix has elementary basis vectors as rows, with FLI+SOS adding a twist based on Lasso’s SOS-decomposability. FLI takes two lookup instances $\{\mathbf{a}_1\}, \{\mathbf{a}_2\}\subseteq\mathbf{t}$, and expresses them as matrix equations $M_i\cdot\mathbf{t}^\mathsf{T}=\mathbf{a}_i^\mathsf{T}$ for $i=1,2$, where each matrix $M_i\in\mathbb{F}^{m\times N}$ has rows which are elementary basis vectors in $\mathbb{F}^N$. Matrices that satisfy this condition are said to be in $R_{\mathsf{elem}}$. Then, a folding scheme for $R_{\mathsf{elem}}$ into a relaxed relation is used, which combines the matrices $M_1, M_2$ as $M_1+\alpha M_2$ for a random $\alpha\in\mathbb{F}$. Finally, the lookup equations are combined as $(M_1+\alpha M_2)\cdot \mathbf{t}^{\mathsf{T}} = (\mathbf{a}_1+\alpha \mathbf{a}_2)^\mathsf{T}$. In FLI, only the property that a matrix is in $R_{\mathsf{elem}}$ is folded, and this makes the FLI folding step the cheapest among existing solutions. The price to pay is in the cost for proving accumulated instances. FLI+SOS builds upon FLI to enable folding of large SOS-decomposable tables. This is achieved through a variation of Lasso's approach to SOS-decomposability, which fits FLI naturally. For comparison, we describe (for the first time to our knowledge) straightforward variations of Protostar and Proofs for Deep Thought that also benefit from SOS-decomposability. We see that for many reasonable parameter choices, and especially those arising from lookup-based zkVMs, FLI+SOS can concretely be the cheapest folding solution.
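The folding equation itself can be checked in a few lines. Below is a minimal sketch over a toy prime field, storing each $M_i$ as the list of selected table indices; all values are hypothetical toy choices, and the real protocol of course proves these identities rather than asserting them in the clear.

```python
import random

P = 2**61 - 1                       # toy prime field
t = [3, 1, 4, 1, 5, 9, 2, 6]        # committed table
rows1, rows2 = [2, 5, 5], [0, 7, 3] # each M_i stored as its selected indices
a1 = [t[r] for r in rows1]          # M_1 * t^T = a_1^T
a2 = [t[r] for r in rows2]          # M_2 * t^T = a_2^T

alpha = random.randrange(P)         # folding challenge
# Row j of M_1 + alpha*M_2 selects t[rows1[j]] + alpha*t[rows2[j]] ...
lhs = [(t[r1] + alpha * t[r2]) % P for r1, r2 in zip(rows1, rows2)]
# ... which must equal the folded claimed values a_1 + alpha*a_2:
rhs = [(x + alpha * y) % P for x, y in zip(a1, a2)]
assert lhs == rhs
# Note the folded matrix no longer has elementary-basis rows, which is
# why FLI folds into a *relaxed* relation.
```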
Last updated:  2024-09-30
CHVote Protocol Specification
Rolf Haenni, Reto E. Koenig, Philipp Locher, and Eric Dubuis
This document provides a self-contained, comprehensive, and fully-detailed specification of a new cryptographic voting protocol designed for political elections in Switzerland. The document describes every relevant aspect and every necessary technical detail of the computations and communications performed by the participants during the protocol execution. To support the general understanding of the cryptographic protocol, the document accommodates the necessary mathematical and cryptographic background information. By providing this information to the maximal possible extent, it serves as an ultimate companion document for the developers in charge of implementing this system. It may also serve as a manual for developers trying to implement an independent election verification software. The decision of making this document public even enables implementations by third parties, for example by students trying to develop a clone of the system for scientific evaluations or to implement protocol extensions to achieve additional security properties. In any case, the target audience of this document are system designers, software developers, and cryptographic experts.
Last updated:  2024-09-30
Folding Schemes with Privacy Preserving Selective Verification
Joan Boyar and Simon Erfurth
Folding schemes are an exciting new primitive, transforming the task of performing multiple zero-knowledge proofs of knowledge for a relation into performing just one zero-knowledge proof, for the same relation, and a number of cheap inclusion proofs. Recently, folding schemes have been used to amortize the cost associated with proving different statements to multiple distinct verifiers, which has various applications. We observe that for these uses, leaking information about the statements folded together can be problematic, yet this happens with previous constructions. Towards resolving this issue, we give a natural definition of privacy-preserving folding schemes, and of what security they should offer. To construct privacy-preserving folding schemes, we first define statement hiders, a primitive which might be of independent interest. In a nutshell, a statement hider hides an instance of a relation as a new instance in the same relation. The new instance is in the relation if and only if the initial instance is. With this building block, we can utilize existing folding schemes to construct a privacy-preserving folding scheme, by first hiding each of the statements. Folding schemes allow verifying that a statement was folded into another statement, while statement hiders allow verifying that a statement was hidden as another statement.
Last updated:  2024-09-30
Improved Lattice Blind Signatures from Recycled Entropy
Corentin Jeudy and Olivier Sanders
Blind signatures represent a class of cryptographic primitives enabling privacy-preserving authentication with several applications such as e-cash or e-voting. It is still a very active area of research, in particular in the post-quantum setting where the history of blind signatures has been hectic. Although it started to shift very recently with the introduction of a few lattice-based constructions, all of the latter give up an important characteristic of blind signatures (size, efficiency, or security under well-known assumptions) to achieve the others. In this paper, we propose another design which revisits the link between the two main procedures of blind signatures, namely issuance and showing, demonstrating that we can significantly alleviate the second one by adapting the former. Concretely, we show that we can harmlessly inject excess randomness in the issuance phase, and then recycle the entropy surplus during showing to decrease the complexity of the zero-knowledge proof which constitutes the main component of the signature. This leads to a blind signature scheme with small sizes, low complexity, and that still relies on well-known lattice assumptions.
Last updated:  2024-09-30
A Note on the SNOVA Security
Lih-Chung Wang, Chun-Yen Chou, Jintai Ding, Yen-Liang Kuan, Jan Adriaan Leegwater, Ming-Siou Li, Bo-Shu Tseng, Po-En Tseng, and Chia-Chun Wang
SNOVA is one of the Round 1 submissions to the NIST call for additional post-quantum signatures. SNOVA is a UOV variant that uses the noncommutative-ring technique to reduce the size of the public key. SNOVA's public key size and signature size are well balanced, and the scheme has good performance. Recently, Beullens proposed a forgery attack against SNOVA, pointing out that the parameters of SNOVA can be attacked. Beullens also argued that with some slight adjustments his attacks can be prevented. In this note, we explain Beullens' forgery attack and show that it can be invalidated by two different approaches. Finally, we show that these two approaches do not increase the sizes of the public keys or signatures, and that the current parameters satisfy NIST's security requirements.
Last updated:  2024-09-30
Challenges in Timed Cryptography: A Position Paper
Karim Eldefrawy, Benjamin Terner, and Moti Yung
Time-lock puzzles are unique cryptographic primitives that use computational complexity to keep information secret for some period of time, after which security expires. This topic, while over 25 years old, is still in a state where foundations are not well understood: for example, current analysis techniques of time-lock primitives provide no sound mechanism to build composed multi-party cryptographic protocols which use expiring security as a building block. Further, some analyses employ idealizations and simulators of unrealistic computational power and present these as acceptable, sound security arguments. Our goal with this short paper is to advocate for understanding which approaches may lead to sound modeling beyond idealization, and which approaches may, in fact, be hopeless at this task of sound modeling. We explain in this paper how existing attempts at this subtle problem lack either composability, a fully consistent analysis, or functionality. The subtle flaws in the existing frameworks reduce to an impossibility result by Mahmoody et al., who showed that time-lock puzzles with super-polynomial gaps (between committer and solver) cannot be constructed from random oracles alone (or any repetitive computation where the next state is completely random given the prior state); yet still the analyses of algebraic puzzles today treat the solving process as if each step is a generic or random oracle. We point out that if the generation process relies on a trapdoor function that cannot be treated as a random oracle (to allow efficient generation while avoiding this impossibility result), then, to be consistent, the analysis of the solving process should also not treat such a trapdoor function (and its intermediate states) as a random oracle. We also delineate additional issues with the proof techniques used for time-lock puzzles. Specifically, when a time-lock puzzle must retain privacy for some amount of time, the reduction should bound the running time of the simulator. A simulator that can "simulate" only when given so much time that the same time would allow an adversary to solve the puzzle outright is not a valid security argument. We survey the adherence of various attempts to this principle, as well as the properties that different attempts achieve toward composition.
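The running example behind this discussion is the RSW repeated-squaring puzzle, where generation is fast precisely because of the $\varphi(n)$ trapdoor while solving takes $T$ sequential squarings. A toy sketch with insecurely small parameters:

```python
# RSW time-lock puzzle: fast generation via the trapdoor phi(n),
# slow solving via T sequential modular squarings.
import random

p, q = 104729, 104723                 # toy primes; real puzzles use large RSA moduli
n, phi = p * q, (p - 1) * (q - 1)
T = 100_000                           # time parameter (number of squarings)

secret = 424242
x = random.randrange(2, n)

# Fast generation using the trapdoor: e = 2^T mod phi(n).
e = pow(2, T, phi)
puzzle = (secret + pow(x, e, n)) % n  # mask the secret with x^(2^T) mod n

# Slow solving: T sequential squarings; no known shortcut without phi(n).
y = x
for _ in range(T):
    y = y * y % n
assert (puzzle - y) % n == secret
```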
Last updated:  2024-09-29
Schnorr Signatures are Tightly Secure in the ROM under a Non-interactive Assumption
Gavin Cho, Georg Fuchsbauer, and Adam O'Neill
We show that the Schnorr signature scheme meets existential unforgeability under chosen-message attack (EUF-CMA) in the random oracle model (ROM) if the circular discrete-logarithm (CDL) assumption, a new, non-interactive variant of DL we introduce, holds in the underlying group. Our reduction is completely tight, meaning the constructed adversary against CDL has both essentially the same running time and success probability as the assumed forger. To our knowledge, we are the first to exhibit such a reduction. Previously, Bellare and Dai (INDOCRYPT 2020) showed the scheme is EUF-CMA in the ROM if their multi-base DL assumption holds in the underlying group. However, multi-base DL is interactive; moreover, their reduction, while tighter than the initial result of Pointcheval and Stern (EUROCRYPT 1996), still incurs a security loss that is linear in the number of the adversary's RO queries. We justify CDL by showing it holds in two carefully chosen idealized models, which idealize different aspects of our assumption. Our quantitative bounds in these models are essentially the same as for DL, giving strong evidence that CDL is as hard as DL in appropriate elliptic-curve groups.
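For concreteness, the scheme in question is plain Schnorr over a prime-order group, with the hash modeled as a random oracle. A toy sketch over a small Schnorr subgroup of $\mathbb{Z}_p^*$ (parameters chosen only for illustration, far too small for security):

```python
# Toy Schnorr signatures: sign (R, s) with s = r + c*x, verify g^s = R * X^c.
import hashlib, random

q, p = 83, 167               # p = 2q + 1, both prime; insecurely small
g = 4                        # generator of the order-q subgroup (squares mod p)

def H(R, m):                 # hash to Z_q, modeled as a random oracle
    return int.from_bytes(hashlib.sha256(f"{R}|{m}".encode()).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)   # secret key x, public key X = g^x

def sign(x, m):
    r = random.randrange(1, q)
    R = pow(g, r, p)
    c = H(R, m)
    return R, (r + c * x) % q

def verify(X, m, sig):
    R, s = sig
    return pow(g, s, p) == R * pow(X, H(R, m), p) % p

x, X = keygen()
assert verify(X, "hello", sign(x, "hello"))
```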