Papers updated in last 183 days (Page 18 of 1837 results)

Last updated:  2025-01-10
On Quantum Simulation-Soundness
Behzad Abdolmaleki, Céline Chevalier, Ehsan Ebrahimi, Giulio Malavolta, and Quoc-Huy Vu
Non-interactive zero-knowledge (NIZK) proof systems are a cornerstone of modern cryptography, but their security has received little attention in the quantum setting. Motivated by improving our understanding of this fundamental primitive against quantum adversaries, we propose a new definition of security against quantum adversaries. Specifically, we define the notion of quantum simulation soundness (SS-NIZK), which allows the adversary to access the simulator in superposition. We show a separation between post-quantum and quantum security of SS-NIZK, and prove that Sahai’s construction for SS-NIZK (in the CRS model) can be made quantumly simulation-sound. As an immediate application of our new notion, we prove the security of the Naor-Yung paradigm in the quantum setting, with respect to a strong quantum IND-CCA security notion. This provides the quantum analogue of the classical dual-key approach to proving the security of encryption schemes. Along the way, we introduce a new notion of quantum-query advantage functions, which may serve as a general framework for showing classical/quantum separations for other cryptographic primitives, and which may be of independent interest.
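As context for the dual-key approach mentioned above, here is a minimal structural sketch of a Naor-Yung ciphertext: the same message encrypted under two independent keys, plus a simulation-sound proof that both ciphertexts agree. This is an assumption-laden toy: a one-time pad stands in for a CPA-secure scheme, and the NIZK is a non-functional placeholder, not anything from the paper.

```python
import os

def keygen():
    return os.urandom(32)                      # toy key; a one-time pad stands
                                               # in for a real CPA-secure scheme
def cpa_encrypt(key, msg):
    return bytes(a ^ b for a, b in zip(msg, key))

def ny_encrypt(pk1, pk2, msg):
    c1, c2 = cpa_encrypt(pk1, msg), cpa_encrypt(pk2, msg)
    pi = b'<placeholder SS-NIZK: c1 and c2 encrypt the same message>'
    return c1, c2, pi

def ny_decrypt(sk1, ct, nizk_verify):
    c1, c2, pi = ct
    if not nizk_verify(c1, c2, pi):            # reject inconsistent ciphertexts
        return None
    return cpa_encrypt(sk1, c1)                # OTP decryption = XOR again

k1, k2 = keygen(), keygen()
msg = b'msg padded to 32 bytes........!!'
ct = ny_encrypt(k1, k2, msg)
assert ny_decrypt(k1, ct, lambda *a: True) == msg   # trusting the stub verifier
```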
Last updated:  2025-01-10
Designing a General-Purpose 8-bit (T)FHE Processor Abstraction
Daphné Trama, Pierre-Emmanuel Clet, Aymen Boudguiga, Renaud Sirdey, and Nicolas Ye
Making the most of TFHE programmable bootstrapping to evaluate functions or operators otherwise challenging to perform with only the native addition and multiplication of the scheme is a very active line of research. In this paper, we systematize this approach and apply it to build an 8-bit FHE processor abstraction, i.e., a software entity that works over FHE-encrypted 8-bit data and presents itself to the programmer by means of a conventional-looking assembly instruction set. In doing so, we provide several homomorphic LookUp Table (LUT) dereferencing operators based on variants of the tree-based method and show that they are the most efficient option for manipulating encryptions of 8-bit data (optimally represented as two base-16 digits). We then systematically apply this approach over a set of around 50 instructions, including, notably, conditional assignments, divisions, or fixed-point arithmetic operations. We further test the approach on several simple algorithms, including the execution of a neuron with a sigmoid activation function with 16-bit precision. We conclude the paper by comparing our work to the FHE compilers available in the state of the art. Finally, this work reveals that a very limited set of functional bootstrapping patterns is versatile and efficient enough to achieve general-purpose FHE computations beyond the boolean circuit approach. As such, these patterns may be an appropriate target for further works on advanced software optimizations or hardware implementations.
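The two-digit representation is easy to see in the clear. The sketch below models the tree-based method on plaintext values: an 8-bit input is split into two base-16 digits, the first digit selects among sixteen 16-entry sub-tables (each of which would be one blind rotation in TFHE), and the second digit selects the final entry. The example table is hypothetical, and no encryption is involved.

```python
# Cleartext model of two-level (base-16) tree-based LUT evaluation.
def split_base16(x):
    return x & 0xF, (x >> 4) & 0xF            # (low digit, high digit)

def tree_lut_eval(table, x):
    lo, hi = split_base16(x)
    # Level 1: sixteen 16-entry sub-tables; in TFHE, each column entry would be
    # produced by one programmable bootstrapping over the low digit.
    subtables = [table[d * 16:(d + 1) * 16] for d in range(16)]
    column = [sub[lo] for sub in subtables]
    # Level 2: a final LUT evaluation over the high digit selects the result.
    return column[hi]

sbox = [(x * 7 + 3) % 256 for x in range(256)]  # hypothetical 8-bit operator
assert all(tree_lut_eval(sbox, x) == sbox[x] for x in range(256))
```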
Last updated:  2025-01-10
Deletions and Dishonesty: Probabilistic Data Structures in Adversarial Settings
Mia Filić, Keran Kocher, Ella Kummer, and Anupama Unnikrishnan
Probabilistic data structures (PDS) are compact representations of high-volume data that provide approximate answers to queries about the data. They are commonplace in today's computing systems, finding use in databases, networking and more. While PDS are designed to perform well under benign inputs, they are frequently used in applications where inputs may be adversarially chosen. This may lead to a violation of their expected behaviour, for example an increase in false positive rate. In this work, we focus on PDS that handle approximate membership queries (AMQ). We consider adversarial users with the capability of making adaptive insertions, deletions and membership queries to AMQ-PDS, and analyse the performance of AMQ-PDS under such adversarial inputs. We argue that deletions significantly empower adversaries, presenting a challenge to enforcing honest behaviour when compared to insertion-only AMQ-PDS. To address this, we introduce a new concept of an honest setting for AMQ-PDS with deletions. By leveraging simulation-based security definitions, we then quantify how much harm can be caused by adversarial users to the functionality of AMQ-PDS. Our resulting bounds only require calculating the maximal false positive probability and insertion failure probability achievable in our novel honest setting. We apply our results to Cuckoo filters and Counting filters. We show how to protect these AMQ-PDS at low cost, by replacing or composing the hash functions with keyed pseudorandom functions in their construction. This strategy involves establishing practical bounds for the probabilities mentioned above. Using our new techniques, we demonstrate that achieving security against adversarial users making both insertions and deletions remains practical.
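The keyed-hash countermeasure is simple to state in code. Below is a toy counting filter (supporting deletions) whose index derivation uses HMAC-SHA256 as a keyed pseudorandom function, so an adversary without the key cannot precompute colliding inputs. The parameters and structure are illustrative and much simpler than real Counting or Cuckoo filters.

```python
import hmac, hashlib

class KeyedCountingFilter:
    """Toy AMQ-PDS with deletions; indexes come from a keyed PRF (HMAC)."""
    def __init__(self, key, m=1024, k=4):
        self.key, self.m, self.k = key, m, k
        self.counters = [0] * m

    def _indexes(self, item):
        for i in range(self.k):
            tag = hmac.new(self.key, bytes([i]) + item, hashlib.sha256).digest()
            yield int.from_bytes(tag[:8], 'big') % self.m

    def insert(self, item):
        for j in self._indexes(item):
            self.counters[j] += 1

    def delete(self, item):
        for j in self._indexes(item):
            self.counters[j] -= 1

    def query(self, item):
        return all(self.counters[j] > 0 for j in self._indexes(item))

f = KeyedCountingFilter(key=b'secret-prf-key')
f.insert(b'alice')
assert f.query(b'alice')
f.delete(b'alice')
assert not f.query(b'alice')
```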
Last updated:  2025-01-10
Compact Proofs of Partial Knowledge for Overlapping CNF Formulae
Gennaro Avitabile, Vincenzo Botta, Daniele Friolo, Daniele Venturi, and Ivan Visconti
At CRYPTO '94, Cramer, Damgaard, and Schoenmakers introduced a general technique for constructing honest-verifier zero-knowledge proofs of partial knowledge (PPK), where a prover Alice wants to prove to a verifier Bob she knows witnesses for claims out of claims without revealing the indices of those claims. Their solution starts from a base honest-verifier zero-knowledge proof of knowledge and requires to run in parallel execution of the base protocol, giving a complexity of , where is the communication complexity of the base protocol. However, modern practical scenarios require communication-efficient zero-knowledge proofs tailored to handle partial knowledge in specific application-dependent formats. In this paper we propose a technique to compose a large class of -protocols for atomic statements into -protocols for PPK over formulae in conjunctive normal form (CNF) that overlap, in the sense that there is a common subset of literals among all clauses of the formula. In such formulae, the statement is expressed as a conjunction of clauses, each of which consists of a disjunction of literals (i.e., each literal is an atomic statement) and literals are shared among clauses. The prover, for a threshold parameter , proves knowledge of at least witnesses for distinct literals in each clause. At the core of our protocol, there is a new technique to compose -protocols for regular CNF relations (i.e., when ) that exploits the overlap among clauses and that we then generalize to formulae where providing improvements over state-of-the-art constructions.
Last updated:  2025-01-10
VDORAM: Towards a Random Access Machine with Both Public Verifiability and Distributed Obliviousness
Huayi Qi, Minghui Xu, Xiaohua Jia, and Xiuzhen Cheng
Verifiable random access machines (vRAMs) serve as a foundational model for expressing complex computations with provable security guarantees, serving applications in areas such as secure electronic voting, financial auditing, and privacy-preserving smart contracts. However, no existing vRAM provides distributed obliviousness, a critical need in scenarios where multiple provers seek to prevent disclosure against both other provers and the verifiers. Implementing a publicly verifiable distributed oblivious RAM (VDORAM) presents several challenges. Firstly, the development of VDORAM is hindered by the limited availability of sophisticated publicly verifiable multi-party computation (MPC) protocols. Secondly, the lack of readily available front-end implementations for multi-prover zero-knowledge proofs (ZKPs) poses a significant obstacle to developing practical applications. Finally, directly adapting existing RAM designs to the VDORAM paradigm may prove either impractical or inefficient due to the inherent complexities of reconciling oblivious computation with the generation of publicly verifiable proofs. To address these challenges, we introduce CompatCircuit, the first multi-prover ZKP front-end implementation to our knowledge. CompatCircuit integrates collaborative zkSNARKs to implement publicly verifiable MPC protocols with rich functionalities beyond those of an arithmetic circuit, enabling the development of multi-prover ZKP applications. Building upon CompatCircuit, we present VDORAM, the first publicly verifiable distributed oblivious RAM. By combining distributed oblivious architectures with verifiable RAM, VDORAM achieves an efficient RAM design that balances communication overhead and proof generation time. We have implemented CompatCircuit and VDORAM in approximately 15,000 lines of code, demonstrating usability by providing a practical and efficient implementation. Our performance evaluation result reveals that the system still provides moderate performance with distributed obliviousness.
Last updated:  2025-01-10
Forking the RANDAO: Manipulating Ethereum's Distributed Randomness Beacon
Ábel Nagy, János Tapolcai, István András Seres, and Bence Ladóczki
Proof-of-stake consensus protocols often rely on distributed randomness beacons (DRBs) to generate randomness for leader selection. This work analyses the manipulability of Ethereum's DRB implementation, RANDAO, in its current consensus mechanism. Despite its efficiency, RANDAO remains vulnerable to manipulation through the deliberate omission of blocks from the canonical chain. Previous research has shown that economically rational players can withhold blocks -- known as a block withholding attack or selfish mixing -- when the manipulated RANDAO outcome yields greater financial rewards. We introduce and evaluate a new manipulation strategy, the RANDAO forking attack. Unlike block withholding, whereby validators opt to hide a block, this strategy relies on selectively forking out an honest proposer's block to maximize transaction fee revenues and block rewards. In this paper, we draw attention to the fact that the forking attack is significantly more harmful than selfish mixing, for two reasons. Firstly, it exacerbates the unfairness among validators. More importantly, it significantly undermines the reliability of the blockchain for the average user by frequently causing already published blocks to be forked out. By doing so, the attacker can fork the chain without losing slots, and we demonstrate that these losses are later fully compensated for. Our empirical measurements, investigating such manipulations on Ethereum mainnet, revealed no statistically significant traces of these attacks to date.
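To see why omitting a single block moves the beacon output, consider this simplified model of RANDAO accumulation: hashed per-block reveals are folded into the mix by XOR. The real reveal is a BLS signature over the epoch; a hash stands in for it here.

```python
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def reveal(secret, epoch):
    # Stand-in for the proposer's deterministic BLS signature over the epoch
    return h(secret + epoch.to_bytes(8, 'big'))

mix, epoch = b'\x00' * 32, 42
honest, attacker = b'honest-validator', b'adversary'

# Including vs. forking out the honest proposer's block yields two different
# candidate beacon outputs; an attacker controlling the tail slots can choose.
mix_with = xor(xor(mix, h(reveal(honest, epoch))), h(reveal(attacker, epoch)))
mix_without = xor(mix, h(reveal(attacker, epoch)))
print(mix_with.hex()[:16], "vs", mix_without.hex()[:16])
```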
Last updated:  2025-01-10
A Framework for Generating S-Box Circuits with Boyar-Peralta Algorithm-Based Heuristics, and Its Applications to AES, SNOW3G, and Saturnin
Yongjin Jeon, Seungjun Baek, Giyoon Kim, and Jongsung Kim
In many lightweight cryptography applications, low area and latency are required for efficient implementation. The gate count in the cipher and the circuit depth must be low to minimize these two metrics. Many optimization strategies have been developed for the linear layer, led by the Boyar-Peralta (BP) algorithm. The Advanced Encryption Standard (AES) has been a focus of extensive research in this area. However, while the linear layer uses only XOR gates, the S-box, which is an essential nonlinear component in symmetric cryptography, uses various gate types, making optimization challenging, particularly as the bit size increases. In this paper, we propose a new framework for a heuristic search to optimize the circuit depth or XOR gate count of S-box circuits. Existing S-box circuit optimization studies have divided the nonlinear and linear layers of the S-box, optimizing each separately, but limitations still exist in optimizing large S-box circuits. To extend the optimization target from individual internal components to the entire S-box circuit, we extract the XOR information of each node in the target circuit and reconstruct the nodes based on nonlinear gates. Next, we extend the BP algorithm-based heuristics to address nonlinear gates and incorporate this into the framework. It is noteworthy that the effects of our framework occur while maintaining the AND gate count and AND depth without any increase. To demonstrate the effectiveness of the proposed framework, we apply it to the AES, SNOW3G, and Saturnin S-box circuits. Our results include depth improvements by about 40% and 11% compared to the existing AES S-box [BP10] and Saturnin super S-box [CDL+20] circuits, respectively. We implement a new circuit for the SNOW3G S-box, which has not previously been developed, and apply our framework to reduce its depth. We expect the proposed framework to contribute to the design and implementation of various symmetric-key cryptography solutions.
Last updated:  2025-01-09
A Survey of Interactive Verifiable Computing: Utilizing Randomness in Low-Degree Polynomials
Angold Wang
This survey provides a comprehensive examination of verifiable computing, tracing its evolution from foundational complexity theory to modern zero-knowledge succinct non-interactive arguments of knowledge (ZK-SNARKs). We explore key developments in interactive proof systems, knowledge complexity, and the application of low-degree polynomials in error detection and verification protocols. The survey delves into essential mathematical frameworks such as the Cook-Levin Theorem, the sum-check protocol, and the GKR protocol, highlighting their roles in enhancing verification efficiency and soundness. By systematically addressing the limitations of traditional NP-based proof systems and then introducing advanced interactive proof mechanisms to overcome them, this work offers an accessible step-by-step introduction for newcomers while providing detailed mathematical analyses for researchers. Ultimately, we synthesize these concepts to elucidate the GKR protocol, which serves as a foundation for contemporary verifiable computing models. This survey not only reviews the historical and theoretical advancements in verifiable computing over the past three decades but also lays the groundwork for understanding recent innovations in the field.
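Since the sum-check protocol is central to the survey, here is a compact toy implementation for a 3-variable multilinear polynomial over a prime field; the polynomial and modulus are arbitrary choices for illustration.

```python
import random

P = 2**61 - 1   # prime field modulus (a Mersenne prime)
N = 3           # number of variables

def f(x):       # example multilinear polynomial over F_P
    x1, x2, x3 = x
    return (x1 * x2 + 2 * x2 * x3 + 5 * x1 + 7) % P

def g_eval(t, prefix, i):
    """Prover's round-i message: sum of f over the boolean cube with
    x_1..x_i fixed to earlier challenges and x_{i+1} = t."""
    free = N - i - 1
    total = 0
    for bits in range(2 ** free):
        suffix = [(bits >> j) & 1 for j in range(free)]
        total += f(tuple(prefix + [t] + suffix))
    return total % P

claim = sum(f((a, b, c)) for a in (0, 1) for b in (0, 1) for c in (0, 1)) % P
prefix, current = [], claim
for i in range(N):
    g0, g1 = g_eval(0, prefix, i), g_eval(1, prefix, i)
    assert (g0 + g1) % P == current     # verifier's round consistency check
    r = random.randrange(P)             # public-coin challenge
    current = (g0 + r * (g1 - g0)) % P  # g_i(r); g_i is linear for multilinear f
    prefix.append(r)
assert f(tuple(prefix)) == current      # final oracle query at a random point
print("sum-check accepted; verified sum =", claim)
```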
Last updated:  2025-01-09
All-You-Can-Compute: Packed Secret Sharing for Combined Resilience
Sebastian Faust, Maximilian Orlt, Kathrin Wirschem, and Liang Zhao
Unprotected cryptographic implementations are vulnerable to implementation attacks, such as passive side-channel attacks and active fault injection attacks. Recently, countermeasures like polynomial masking and duplicated masking have been introduced to protect implementations against combined attacks that exploit leakage and faults simultaneously. While duplicated masking requires $(t+1)(e+1)$ shares to resist an adversary capable of probing $t$ values and faulting $e$ values, polynomial masking requires only $t+e+1$ shares, which is particularly beneficial for affine computation. In recent work at CHES, Arnold et al. showed how to further improve the efficiency of polynomial masking in the presence of combined attacks by embedding two secrets into one polynomial sharing. This essentially reduces the complexity of previous constructions by half. The authors also observed that using techniques from packed secret sharing (Grosso et al., CHES) cannot easily achieve combined resilience to encode an arbitrary number of secrets in one polynomial encoding. In this work, we resolve these challenges and show that it is possible to embed an arbitrary number of secrets in one encoding and propose gadgets that are secure against combined attacks. We present two constructions that are generic and significantly improve the computational and randomness complexity of existing compilers, such as the laOla compiler presented by Berndt et al. at CRYPTO and its improvement by Arnold et al. For example, for an AES evaluation that protects against $t$ probes and $e$ faults, we improve the randomness complexity of the state-of-the-art construction over a wide parameter range.
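The packing idea itself (independent of combined security) fits in a few lines: one polynomial of degree $k+t-1$ carries $k$ secrets at fixed evaluation points, with $t$ extra random anchors providing privacy against $t$ probes. The following sketch uses Lagrange interpolation over a toy prime field; the point placement is an arbitrary choice for illustration.

```python
import random

P = 2**61 - 1  # prime field modulus

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at x, modulo P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def packed_share(secrets, t, n):
    k = len(secrets)
    # secrets sit at points -1..-k; t random anchors give privacy vs. t probes
    anchors = [(-(j + 1) % P, s) for j, s in enumerate(secrets)]
    anchors += [(n + j + 1, random.randrange(P)) for j in range(t)]
    # the implied polynomial has degree k + t - 1; shares are values at 1..n
    return [(x, lagrange_eval(anchors, x)) for x in range(1, n + 1)]

def packed_reconstruct(shares, k):
    return [lagrange_eval(shares, -(j + 1) % P) for j in range(k)]

secrets = [11, 22, 33]
shares = packed_share(secrets, t=2, n=7)
assert packed_reconstruct(shares[:5], k=3) == secrets  # any k+t shares suffice
```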
Last updated:  2025-01-09
Scalable Post-Quantum Oblivious Transfers for Resource-Constrained Receivers
Aydin Abadi and Yvo Desmedt
It is imperative to modernize traditional core cryptographic primitives, such as Oblivious Transfer (OT), to address the demands of the new digital era, where privacy-preserving computations are executed on low-power devices. This modernization is not merely an enhancement but a necessity to ensure security, efficiency, and continued relevance in an ever-evolving technological landscape. This work introduces two scalable OT schemes: (1) Helix OT, a $1$-out-of-$n$ OT, and (2) Priority OT, a $t$-out-of-$n$ OT. Both schemes provide unconditional security, ensuring resilience against quantum adversaries. Helix OT achieves a receiver-side download complexity of $O(1)$. In big data scenarios, where certain data may be more urgent or valuable, we propose Priority OT. With a receiver-side download complexity of $O(t)$, this scheme allows data to be received based on specified priorities. By prioritizing data transmission, Priority OT ensures that the most important data is received first, optimizing bandwidth, storage, and processing resources. Performance evaluations indicate that Helix OT completes the transfer of 1 out of 16,777,216 messages in 9 seconds, and Priority OT handles 1,048,576 priority-ordered selections in 30 seconds. Both outperform existing $t$-out-of-$n$ OTs in this regime, underscoring their suitability for large-scale applications. To the best of our knowledge, Helix OT and Priority OT introduce unique advancements that distinguish them from previous schemes.
Last updated:  2025-01-09
Statistical testing of random number generators and their improvement using randomness extraction
Cameron Foreman, Richie Yeung, and Florian J. Curchod
Random number generators (RNGs) are notoriously challenging to build and test, especially for cryptographic applications. While statistical tests cannot definitively guarantee an RNG’s output quality, they are a powerful verification tool and the only universally applicable testing method. In this work, we design, implement, and present various post-processing methods, using randomness extractors, to improve the RNG output quality and compare them through statistical testing. We begin by performing intensive tests on three RNGs—the 32-bit linear feedback shift register (LFSR), Intel’s ‘RDSEED,’ and IDQuantique’s ‘Quantis’—and compare their performance. Next, we apply the different post-processing methods to each RNG and conduct further intensive testing on the processed output. To facilitate this, we introduce a comprehensive statistical testing environment, based on existing test suites, that can be parametrised for lightweight (fast) to intensive testing.
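As a flavor of what post-processing does, the classic von Neumann extractor below removes the bias of i.i.d. bits. The extractors studied in the paper are considerably stronger (seeded, and working under weaker source assumptions), so treat this purely as the simplest member of the family.

```python
import random

def von_neumann_extract(bits):
    """Pair up bits: 01 -> 0, 10 -> 1, 00/11 -> discard. For i.i.d. input
    bits of any fixed bias, the output bits are exactly uniform."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

biased = [1 if random.random() < 0.8 else 0 for _ in range(100_000)]
extracted = von_neumann_extract(biased)
print(len(extracted), sum(extracted) / len(extracted))  # mean close to 0.5
```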
Last updated:  2025-01-09
Zero-Knowledge Proofs for Set Membership: Efficient, Succinct, Modular
Daniel Benarroch, Matteo Campanelli, Dario Fiore, Kobi Gurkan, and Dimitris Kolonelos
We consider the problem of proving in zero knowledge that an element of a public set satisfies a given property without disclosing the element, i.e., for some $u$, ``$u \in S$ and $P(u)$ holds''. This problem arises in many applications (anonymous cryptocurrencies, credentials, or whitelists) where, for privacy or anonymity reasons, it is crucial to hide certain data while ensuring properties of such data. We design new modular and efficient constructions for this problem through new commit-and-prove zero-knowledge systems for set membership, i.e., schemes proving $u \in S$ for a value $u$ that is in a public commitment $c_u$. We also extend our results to support non-membership proofs, i.e., proving $u \notin S$. Being commit-and-prove, our solutions can act as plug-and-play modules in statements of the form ``$u \in S$ and $P(u)$ holds'' by combining our set (non-)membership systems with any other commit-and-prove scheme for $P(u)$. Also, they work with Pedersen commitments over prime-order groups, which makes them compatible with popular systems such as Bulletproofs or Groth16. We implemented our schemes as a software library and tested their performance experimentally. Compared to previous work that achieves similar properties (the clever techniques combining zkSNARKs and Merkle Trees in Zcash), our solutions offer more flexibility, shorter public parameters, and faster proving time for large sets.
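The Pedersen commitments the constructions plug into look as follows; this toy uses a tiny Schnorr group, so the parameters are for illustration only and offer no security.

```python
import random

# Toy Pedersen commitment in a prime-order subgroup. p = 2q + 1, and the
# squares mod p form a subgroup of prime order q; g and h are two generators
# of it whose mutual discrete log is assumed unknown to the committer.
p, q = 2039, 1019
g, h = 4, 9

def commit(value, r):
    return pow(g, value, p) * pow(h, r, p) % p

def open_check(c, value, r):
    return c == commit(value, r)

v, r = 42, random.randrange(q)
c = commit(v, r)
assert open_check(c, v, r)
# Hiding: c is uniform in the subgroup over the choice of r.
# Binding: opening to a different value would require knowing log_g(h).
```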
Last updated:  2025-01-09
BIP32-Compatible Threshold Wallets
Poulami Das, Andreas Erwig, Sebastian Faust, Philipp-Florens Lehwalder, Julian Loss, Ziyan Qu, and Siavash Riahi
Cryptographic wallets are an essential tool to securely store and maintain users’ secret keys and consequently their funds in Blockchain networks. A compelling approach to construct such wallets is to share the user’s secret key among several devices, such that an adversary must corrupt multiple machines to extract the entire secret key. Indeed, many leading cryptocurrency companies such as Coinbase, Binance, or ZenGo have started offering such distributed wallets to their customers. An important feature of a cryptographic wallet is its compatibility with the so-called BIP32 specification, the most widely adopted standard for cryptographic wallets. Essentially, BIP32 specifies the notion of a hierarchical deterministic wallet, which allows the creation of a key hierarchy in a deterministic fashion. Unfortunately, despite significant interest, no practically efficient solution for a fully distributed wallet scheme that also follows the BIP32 standard exists. In this work, we show the first concretely efficient construction of a fully distributed wallet that is compliant with the BIP32 standard. To this end, we first provide a game-based notion of threshold signatures with rerandomizable keys and show an instantiation via the Gennaro and Goldfeder threshold ECDSA scheme (CCS’18). We then observe that one of BIP32’s key derivation mechanisms, the so-called hardened derivation, cannot efficiently be translated to the threshold setting. Instead, we devise a novel and efficient hardened derivation mechanism for the threshold setting that satisfies the same properties as the original mechanism specified by BIP32. As a final contribution, we evaluate our solution with respect to its running time and communication cost.
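The hardened-versus-non-hardened distinction at the heart of the paper is visible in a scalar-only model of BIP32: non-hardened derivation only hashes the parent public key (so it commutes with taking public keys), while hardened derivation hashes the parent private key itself, which is exactly what becomes awkward when that key exists only as shares. A toy multiplicative group stands in for secp256k1 below, and all serialization details are elided.

```python
import hmac, hashlib

p, q, g = 2039, 1019, 4          # toy group of prime order q (not secure)

def point(priv):
    return pow(g, priv, p)       # stand-in for priv * G on the curve

def hmac512(key, data):
    return hmac.new(key, data, hashlib.sha512).digest()

def ckd_priv(k_par, chain, index):
    if index >= 2**31:  # hardened: the HMAC input needs the parent PRIVATE key
        data = b'\x00' + k_par.to_bytes(32, 'big') + index.to_bytes(4, 'big')
    else:               # non-hardened: only the parent PUBLIC key is hashed
        data = point(k_par).to_bytes(32, 'big') + index.to_bytes(4, 'big')
    I = hmac512(chain, data)
    return (k_par + int.from_bytes(I[:32], 'big')) % q, I[32:]

# Non-hardened public derivation commutes with point():
k, c = 7, b'\x01' * 32
k_child, _ = ckd_priv(k, c, index=0)
I = hmac512(c, point(k).to_bytes(32, 'big') + (0).to_bytes(4, 'big'))
offset = int.from_bytes(I[:32], 'big') % q
assert point(k) * point(offset) % p == point(k_child)
```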
Last updated:  2025-01-09
Hash your Keys before Signing: BUFF Security of the Additional NIST PQC Signatures
Thomas Aulbach, Samed Düzlü, Michael Meyer, Patrick Struck, and Maximiliane Weishäupl
In this work, we analyze the so-called Beyond UnForgeability Features (BUFF) security of the submissions to the current standardization process of additional signatures by NIST. The BUFF notions formalize security against maliciously generated keys and have various real-world use cases, where security can be guaranteed despite misuse potential on a protocol level. Consequently, NIST declared security against the BUFF notions a desirable feature. Despite NIST's interest, only a handful of schemes consider BUFF security at all, and none gives a detailed analysis. We close this gap by analyzing the schemes based on codes, isogenies, lattices, and multivariate equations. The results vary from schemes that achieve neither notion (e.g., Wave) to schemes that achieve all notions (e.g., PROV). In particular, we dispute certain claims by SQUIRRELS and VOX regarding their BUFF security. Resulting from our analysis, we observe that three schemes (CROSS, HAWK and PROV) achieve BUFF security without having the hash of public key and message as part of the signature, as BUFF-transformed schemes would have. HAWK and PROV essentially use the lighter PS-3 transform by Pornin and Stern (ACNS'05). We further point out whether this transform suffices for the other schemes to achieve the BUFF notions, with both positive and negative results.
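The two transforms compared above are tiny wrappers around any base scheme; the sketch below shows their shape over an abstract sign/verify pair. The hash-input ordering and the stand-in "scheme" at the end are illustrative assumptions (the stand-in is degenerate and exists only so the example runs).

```python
import hashlib

def H(pk, msg):
    return hashlib.sha256(pk + msg).digest()

def ps3_sign(sign, sk, pk, msg):
    # PS-3 (Pornin-Stern): sign the key-bound digest instead of the raw message
    return sign(sk, H(pk, msg))

def ps3_verify(verify, pk, msg, sig):
    return verify(pk, H(pk, msg), sig)

def buff_sign(sign, sk, pk, msg):
    # BUFF transform: additionally ship the digest inside the signature
    d = H(pk, msg)
    return (sign(sk, d), d)

def buff_verify(verify, pk, msg, sig_and_digest):
    sig, d = sig_and_digest
    return d == H(pk, msg) and verify(pk, d, sig)

sk = pk = b'toy-key'  # degenerate stand-in, for exercising the wrappers only
toy_sign = lambda sk_, d: hashlib.sha256(b'sig' + sk_ + d).digest()
toy_verify = lambda pk_, d, s: s == hashlib.sha256(b'sig' + pk_ + d).digest()
assert ps3_verify(toy_verify, pk, b'hi', ps3_sign(toy_sign, sk, pk, b'hi'))
assert buff_verify(toy_verify, pk, b'hi', buff_sign(toy_sign, sk, pk, b'hi'))
```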
Last updated:  2025-01-09
ZODA: Zero-Overhead Data Availability
Alex Evans, Nicolas Mohnblatt, and Guillermo Angeris
We introduce ZODA, short for 'zero-overhead data availability,' which is a protocol for proving that symbols received from an encoding (for tensor codes) were correctly constructed. ZODA has optimal overhead for both the encoder and the samplers. Concretely, the ZODA scheme incurs essentially no incremental costs (in either computation or size) beyond those of the tensor encoding itself. In particular, we show that a slight modification to the encoding scheme for tensor codes allows sampled rows and columns of this modified encoding to become proofs of their own correctness. When used as part of a data availability sampling protocol, both encoders (who encode some data using a tensor code with some slight modifications) and samplers (who sample symbols from the purported encoding and verify that these samples are correctly encoded) incur no incremental communication costs and only small additional computational costs over having done the original tensor encoding. ZODA additionally requires no trusted setup and is plausibly post-quantum secure.
Last updated:  2025-01-08
Parametrizing Maximal Orders Along Supersingular ℓ-Isogeny Paths
Laia Amorós, James Clements, and Chloe Martindale
Suppose you have a supersingular $\ell$-isogeny graph with vertices given by $j$-invariants defined over $\mathbb{F}_{p^2}$, for suitable primes $p$ and $\ell$. We give an explicit parametrization of the maximal orders in the quaternion algebra $B_{p,\infty}$ appearing as endomorphism rings of the elliptic curves in this graph that are a given number of steps away from a root vertex with $j$-invariant 1728. This is the first explicit parametrization of this kind, and we believe it will be an aid in better understanding the structure of supersingular $\ell$-isogeny graphs that are widely used in cryptography. Our method makes use of the inherent directions in the supersingular isogeny graph induced via Bruhat-Tits trees, as studied in [1]. We also discuss how, in future work, other interesting use cases could benefit from the same methodology.
Last updated:  2025-01-08
Concurrent Security of Anonymous Credentials Light, Revisited
Julia Kastner, Julian Loss, and Omar Renawi
We revisit the concurrent security guarantees of the well-known Anonymous Credentials Light (ACL) scheme (Baldimtsi and Lysyanskaya, CCS'13). This scheme was originally proven secure when executed sequentially, and its concurrent security was left as an open problem. A later work of Benhamouda et al. (EUROCRYPT'21) gave an efficient attack on ACL when executed concurrently, seemingly resolving this question once and for all. In this work, we point out a subtle flaw in the attack of Benhamouda et al. on ACL and show, in spite of popular opinion, that it can be proven concurrently secure. Our modular proof in the algebraic group model uses an ID scheme as an intermediate step and leads to a major simplification of the complex security argument for Abe's Blind Signature scheme by Kastner et al. (PKC'22).
Last updated:  2025-01-08
A New Paradigm for Server-Aided MPC
Alessandra Scafuro and Tanner Verber
The server-aided model for multiparty computation (MPC) was introduced to capture a real-world scenario where clients wish to off-load the heavy computation of MPC protocols to dedicated servers. A rich body of work has studied various trade-offs between security guarantees (e.g., semi-honest vs malicious), trust assumptions (e.g., the threshold on corrupted servers), and efficiency. However, all existing works make the assumption that all clients must agree on employing the same servers, and accept the same corruption threshold. In this paper, we challenge this assumption and introduce a new paradigm for server-aided MPC, where each client can choose their own set of servers and their own threshold of corrupted servers. In this new model, the privacy of each client is guaranteed as long as their own threshold is satisfied, regardless of the other servers/clients. We call this paradigm per-party private server-aided MPC to highlight both a security and efficiency guarantee: (1) per-party privacy, which means that each party gets their own privacy guarantees that depend on their own choice of the servers; (2) per-party complexity, which means that each party only needs to communicate with their chosen servers. Our primary contribution is a new theoretical framework for server-aided MPC. We provide two protocols to show feasibility, but leave it as a future work to investigate protocols that focus on concrete efficiency.
Last updated:  2025-01-08
Round-Optimal Compiler for Semi-Honest to Malicious Oblivious Transfer via CIH
Varun Madathil, Alessandra Scafuro, and Tanner Verber
A central question in the theory of cryptography is whether we can build protocols that achieve stronger security guarantees, e.g., security against malicious adversaries, by combining building blocks that achieve much weaker security guarantees, e.g., security only against semi-honest adversaries; and with the minimal number of rounds. An additional focus is whether these building blocks can be used only as a black-box. Since Oblivious Transfer (OT) is the necessary and sufficient building block to securely realize any two-party (and multi-party) functionality, theoreticians often focus on proving whether maliciously secure OT can be built from a weaker notion of OT. There is a rich body of literature that provides (black-box) compilers that build malicious OT from OTs that achieve weaker security such as semi-malicious OT and defensibly secure OT, within the minimal number of rounds. However, no round-optimal compiler exists that builds malicious OT from the weakest notion of semi-honest OT, in the plain model. Correlation intractable hash (CIH) functions are special hash functions whose properties allow instantiating the celebrated Fiat-Shamir transform, and hence reduce the round complexity of public-coin proof systems. In this work, we devise the first round-optimal compiler from semi-honest OT to malicious OT, by a novel application of CIH for collapsing rounds in the plain model. We provide the following contributions. First, we provide a new CIH-based round-collapsing construction for general cut-and-choose. This gadget can be used generally to prove the correctness of the evaluation of a function. Then, we use our gadget to build the first round-optimal compiler from semi-honest OT to malicious OT. Our compiler uses the semi-honest OT protocol and the other building blocks in a black-box manner. However, for technical reasons, the underlying CIH construction requires an upper bound on the circuit size of the semi-honest OT protocol used. The need for this upper bound makes our protocol not fully black-box, and hence incomparable with existing, fully black-box, compilers.
Last updated:  2025-01-08
Blind accumulators for e-voting
Sergey Agievich
We present a novel cryptographic primitive, the blind accumulator, aimed at constructing e-voting systems. Blind accumulators collect private keys of eligible voters in a decentralized manner without learning anything about the keys. Once the accumulation is complete, a voter processes the resulting accumulator and derives a public key which refers to a private key previously added by this voter. Public keys are derived deterministically and can therefore stand as fixed voter pseudonyms. The voter can prove that the derived key refers to some accumulated private key without revealing either that key or the voter's identity. The voter uses the accumulated private key to sign a ballot. The corresponding public key is used to verify the signature. Since the public key is fixed, it is easy to achieve verifiability, to protect against multiple submissions of ballots by the same voter or, conversely, to allow multiple submissions but count only the last one. We suggest a syntax of blind accumulators and security requirements for them. We embed blind accumulators in the Pseudonymous Key Generation (PKG) protocol, which details the use of accumulators in practical settings close to e-voting. We propose an instantiation of the blind accumulator scheme whose main computations resemble the Diffie-Hellman protocol. We justify the security of the proposed instantiation.
Last updated:  2025-01-08
Compressing Encrypted Data Over Small Fields
Nils Fleischhacker, Kasper Green Larsen, and Mark Simkin
A recent work of Fleischhacker, Larsen, and Simkin (Eurocrypt 2023) shows how to efficiently compress encrypted sparse vectors. Subsequently, Fleischhacker, Larsen, Obremski, and Simkin (Eprint 2023) improve upon their work and provide more efficient constructions solving the same problem. Being able to efficiently compress such vectors is very useful in a variety of applications, such as private information retrieval, searchable encryption, and oblivious message retrieval. Concretely, assume one is given a length-$n$ vector with at most $t$ nonzero entries, encrypted coordinate-wise under an additively homomorphic encryption scheme. The authors show that one can compress this encrypted vector, without knowing the secret key, into a digest with size dependent on the upper bound on the sparsity $t$, but not on the total vector length $n$. Unfortunately, all existing constructions either only work for encryption schemes that have sufficiently large plaintext spaces or require additional encrypted auxiliary information about the plaintext vector. In this work, we show how to generalize the results of Fleischhacker, Larsen, and Simkin to encryption schemes with arbitrarily small plaintext spaces. Our construction follows the same general ideas laid out in previous works but modifies them in a way that allows compressing the encrypted vectors correctly with high probability, independently of the size of the plaintext space.
Last updated:  2025-01-08
Highly Efficient Server-Aided Multiparty Subfield VOLE Distribution Protocol
Dongyu Wu
In recent developments in secure multi-party computation (MPC), pseudorandom correlations of the subfield vector oblivious linear evaluation (sVOLE) type have become popular due to their wide applicability in multi-dimensional MPC protocols such as privacy-preserving biometric identification and privacy-preserving machine learning protocols. In this paper, we introduce a novel way of VOLE distribution in three-party and four-party honest-majority settings with the aid of a trusted server. This new method significantly decreases the communication cost and the memory storage of random sVOLE instances. On the other hand, it also enables a streamlined distribution process that can generate an sVOLE instance of arbitrary length, which results in a 100% utility rate of random sVOLE in multi-dimensional MPC protocols while preserving complete precomputability.
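The correlation being distributed is easy to state: for, say, GF(2) inside GF(2^8), the sender holds $(u, v)$ and the receiver holds $(\Delta, w)$ with $w_i = u_i \cdot \Delta + v_i$. The sketch below has a trusted dealer hand out one such instance; the paper's actual three/four-party distribution protocol is not reflected here.

```python
import os

def gf256_mul(a, b):
    """Multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
        b >>= 1
    return r

def deal_svole(n):
    """Trusted dealer: sender gets (u, v), receiver gets (delta, w), with
    w_i = u_i * delta + v_i, where each u_i lies in the subfield GF(2)."""
    delta = os.urandom(1)[0]
    u = [os.urandom(1)[0] & 1 for _ in range(n)]
    v = [os.urandom(1)[0] for _ in range(n)]
    w = [gf256_mul(u[i], delta) ^ v[i] for i in range(n)]
    return (u, v), (delta, w)

(u, v), (delta, w) = deal_svole(8)
assert all(w[i] == gf256_mul(u[i], delta) ^ v[i] for i in range(8))
```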
Last updated:  2025-01-08
How to use your brain for cryptography without trustworthy machines
Wakaha Ogata, Toi Tomita, Kenta Takahashi, and Masakatsu Nishigaki
In this work, we study cryptosystems that can be executed securely without fully trusting all machines, but only trusting the user's brain. This paper focuses on signature schemes. We first introduce a new concept called ``server-aided in-brain signature,'' which is a cryptographic protocol between a human brain and multiple servers to sign a message securely even if the user's device and the servers are not completely trusted. Second, we propose a concrete scheme that is secure against mobile hackers in servers and malware infecting the user's devices.
Last updated:  2025-01-07
Committing AE from Sponges: Security Analysis of the NIST LWC Finalists
Juliane Krämer, Patrick Struck, and Maximiliane Weishäupl
Committing security has gained considerable attention in the field of authenticated encryption (AE). This can be traced back to a line of recent attacks, which entail that AE schemes used in practice should not only provide confidentiality and authenticity, but also committing security. Roughly speaking, a committing AE scheme guarantees that ciphertexts will decrypt only for one key. Despite the recent research effort in this area, the finalists of the NIST lightweight cryptography standardization process have not been put under consideration yet. We close this gap by providing an analysis of these schemes with respect to their committing security. Despite the structural similarities the finalists exhibit, our results are of a quite heterogeneous nature: We break four of the schemes with effectively no costs, while for two schemes our attacks are costlier, yet still efficient. For the remaining three schemes ISAP, Ascon, and (a slightly modified version of) Schwaemm, we give formal security proofs. Our analysis reveals that sponges are well-suited for building committing AE schemes. Furthermore, we show several negative results when applying the zero-padding method to the NIST finalists.
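The zero-padding method mentioned at the end has a simple mechanism: prepend a block of zero bytes before encryption and reject at decryption if it does not reappear, so a ciphertext is unlikely to decrypt validly under a second key. The toy below uses a hash-derived keystream with no authenticity at all, and (as the negative results above indicate) the method's interaction with real sponge-based schemes is more subtle than this sketch.

```python
import os
from hashlib import sha256

PAD = 16  # illustrative padding length

def keystream(key, nonce, n):
    out, ctr = b'', 0
    while len(out) < n:
        out += sha256(key + nonce + ctr.to_bytes(4, 'big')).digest()
        ctr += 1
    return out[:n]

def zp_encrypt(key, nonce, msg):
    pt = b'\x00' * PAD + msg
    return bytes(a ^ b for a, b in zip(pt, keystream(key, nonce, len(pt))))

def zp_decrypt(key, nonce, ct):
    pt = bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))
    if pt[:PAD] != b'\x00' * PAD:
        return None                      # wrong key, with high probability
    return pt[PAD:]

k1, k2, nonce = os.urandom(16), os.urandom(16), os.urandom(12)
ct = zp_encrypt(k1, nonce, b'attack at dawn')
assert zp_decrypt(k1, nonce, ct) == b'attack at dawn'
assert zp_decrypt(k2, nonce, ct) is None  # second key fails the zero check
```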
Last updated:  2025-01-07
The cool and the cruel: separating hard parts of LWE secrets
Niklas Nolte, Mohamed Malhou, Emily Wenger, Samuel Stevens, Cathy Yuanchen Li, Francois Charton, and Kristin Lauter
Sparse binary LWE secrets are under consideration for standardization for Homomorphic Encryption and its applications to private computation [20]. Known attacks on sparse binary LWE secrets include the sparse dual attack [5] and the hybrid sparse dual-meet in the middle attack [19], which requires significant memory. In this paper, we provide a new statistical attack with low memory requirement. The attack relies on some initial lattice reduction. The key observation is that, after lattice reduction is applied to the rows of a q-ary-like embedded random matrix A, the entries with high variance are concentrated in the early columns of the extracted matrix. This allows us to separate out the “hard part” of the LWE secret. We can first solve the sub-problem of finding the “cruel” bits of the secret in the early columns, and then find the remaining “cool” bits in linear time. We use statistical techniques to distinguish distributions to identify both cruel and cool bits of the secret. We recover secrets in dimensions n = 256, 512, 768, 1024 and provide concrete attack timings. For the lattice reduction stage, we leverage recent improvements in lattice reduction (flatter [34]) applied in parallel. We also apply our new attack to RLWE with 2-power cyclotomic rings, showing that these RLWE instances are much more vulnerable to this attack than LWE.
Last updated:  2025-01-07
Concrete Quantum Cryptanalysis of Shortest Vector Problem
Hyunji Kim, Kyungbae Jang, Anubhab Baksi, Sumanta Chakraborty, and Hwajeong Seo
This paper presents quantum circuits for the Nguyen–Vidick (NV) sieve algorithm to solve the Shortest Vector Problem (SVP) in lattice-based cryptography. We focus on optimizing the circuit depth of the quantum NV sieve, leveraging Grover’s algorithm to reduce the search complexity. Using the proposed quantum NV sieve, we estimate the quantum resources required to solve SVP for various dimensions. Specifically, for a dimension size of 512 (the parameter for Kyber-512), our implementation achieves a quantum attack cost of $2^{126.0045}$ in terms of the gate count–depth product metric used by the National Institute of Standards and Technology (NIST). To optimize circuit depth, we employ carry-lookahead and carry-save adders for efficient multi-addition operations. Further, our quantum NV sieve performs precise sieving by implementing fixed-point arithmetic, incorporating essential components (such as input setting, up-scaling, and two’s complement). To the best of our knowledge, previous work on quantum cryptanalysis of SVP using the sieve algorithm has remained theoretical, without proposing quantum circuits. Our work demonstrates that the post-quantum security of lattice-based cryptography (with respect to the quantum attack complexity) falls between that of multivariate-based and code-based cryptography.
Last updated:  2025-01-06
Cryptography is Rocket Science: Analysis of BPSec
Benjamin Dowling, Britta Hale, Xisen Tian, and Bhagya Wimalasiri
Space networking has become an increasing area of development with the advent of commercial satellite networks such as those hosted by Starlink and Kuiper, and increased satellite and space presence by governments around the world. Yet, historically such network designs have not been made public, leading to limited formal cryptographic analysis of the security offered by them. One of the few public protocols used in space networking is the Bundle Protocol, which is secured by Bundle Protocol Security (BPSec), an Internet Engineering Task Force (IETF) standard. We undertake a first analysis of BPSec under its default security context, building a model of the secure channel security goals stated in the IETF standard, and note issues therein with message loss detection. We prove BPSec secure, and also provide a stronger construction, one that supports the Bundle Protocol's functionality goals while also ensuring destination awareness of missing message components.
Last updated:  2025-01-06
Security Guidelines for Implementing Homomorphic Encryption
Jean-Philippe Bossuat, Rosario Cammarota, Ilaria Chillotti, Benjamin R. Curtis, Wei Dai, Huijing Gong, Erin Hales, Duhyeong Kim, Bryan Kumara, Changmin Lee, Xianhui Lu, Carsten Maple, Alberto Pedrouzo-Ulloa, Rachel Player, Yuriy Polyakov, Luis Antonio Ruiz Lopez, Yongsoo Song, and Donggeon Yhee
Fully Homomorphic Encryption (FHE) is a cryptographic primitive that allows performing arbitrary operations on encrypted data. Since the conception of the idea in [RAD78], it was considered a holy grail of cryptography. After the first construction in 2009 [Gen09], it has evolved to become a practical primitive with strong security guarantees. Most modern constructions are based on well-known lattice problems such as Learning with Errors (LWE). Besides its academic appeal, in recent years FHE has also attracted significant attention from industry, thanks to its applicability to a considerable number of real-world use-cases. An upcoming standardization effort by ISO/IEC aims to support the wider adoption of these techniques. However, one of the main challenges that standards bodies, developers, and end users usually encounter is establishing parameters. This is particularly hard in the case of FHE because the parameters are not only related to the security level of the system, but also to the type of operations that the system is able to handle. In this paper, we provide examples of parameter sets for LWE targeting particular security levels that can be used in the context of FHE constructions. We also give examples of complete FHE parameter sets, including the parameters relevant for correctness and performance, alongside those relevant for security. As an additional contribution, we survey the parameter selection support offered in open-source FHE libraries.
Last updated:  2025-01-06
Leveled Functional Bootstrapping via External Product Tree
Zhihao Li, Xuan Shen, Xianhui Lu, Ruida Wang, Yuan Zhao, Zhiwei Wang, and Benqiang Wei
Multi-input and large-precision lookup table (LUT) evaluation pose significant challenges in Fully Homomorphic Encryption (FHE). Currently, two modes are employed to address this issue. One is tree-based functional bootstrapping (TFBS), which uses multiple blind rotations to construct a tree for LUT evaluation. The second is circuit bootstrapping, which decomposes all inputs into bits and utilizes a CMux tree for LUT evaluation. In this work, we propose a novel mode: leveled functional bootstrapping (LFBS), which utilizes an external product tree to perform multi-input functional bootstrapping. We implement TFBS and LFBS within the OpenFHE library. The results demonstrate that our method outperforms TFBS in both computational efficiency and bootstrapping key size. Specifically, for 12-bit and 16-bit input LUTs, our method is approximately two to three orders of magnitude faster than TFBS. Finally, we introduce a novel scheme-switching framework that supports large-precision polynomial and non-polynomial evaluations. The framework leverages digital extraction techniques to enable seamless switching between the BFV and LFBS schemes.
Last updated:  2025-01-06
On Witness Encryption and Laconic Zero-Knowledge Arguments
Yanyi Liu, Noam Mazor, and Rafael Pass
Witness encryption (WE) (Garg et al., STOC’13) is a powerful cryptographic primitive that is closely related to the notion of indistinguishability obfuscation (Barak et al., JACM’12; Garg et al., FOCS’13). For a given NP language $L$, WE for $L$ enables encrypting a message $m$ using an instance $x$ as the public key, while ensuring that efficient decryption is possible by anyone possessing a witness for $x \in L$, and that if $x \notin L$, then the encryption is hiding. We show that this seemingly sophisticated primitive is equivalent to a communication-efficient version of one of the most classic cryptographic primitives, namely a zero-knowledge argument (Goldwasser et al., SIAM’89; Brassard et al., JCSS’88): for any NP language $L$, the following are equivalent: - There exists a witness encryption for $L$; - There exists a laconic (i.e., with short prover communication) special honest-verifier zero-knowledge (SHVZK) argument for $L$. Our approach is inspired by an elegant (one-sided) connection between (laconic) zero-knowledge arguments and public-key encryption established by Berman et al. (CRYPTO’17) and Cramer-Shoup (EuroCrypt’02), and the equivalence between a notion of so-called “predictable arguments” and witness encryption by Faonio, Nielsen, and Venturi (PKC’17).
Last updated:  2025-01-06
Efficient Authentication Protocols from the Restricted Syndrome Decoding Problem
Thomas Johansson, Mustafa Khairallah, and Vu Nguyen
In this paper, we introduce an oracle version of the Restricted Syndrome Decoding Problem (RSDP) and propose novel authentication protocols based on the hardness of this problem. They follow the basic structure of the HB-family of authentication protocols and later improvements but demonstrate several advantages. An appropriate choice of multiplicative subgroup and ring structure gives rise to a very efficient hardware implementation compared to other \emph{Learning Parity with Noise} based approaches. In addition, the new protocols also have lower key size, lower communication costs, and potentially better completeness/soundness compared to learning-based alternatives. This is appealing in the context of low-cost, low-powered authenticating devices such as radio frequency identification (RFID) systems. Lastly, we show that with additional assumptions, RSDP can be used to instantiate a Man-in-the-Middle secured authentication protocol.
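For readers unfamiliar with the HB structure the protocols build on, one authentication round boils down to a noisy inner product with a shared secret, repeated enough times that an honest prover's error rate is statistically distinguishable from guessing. The following toy uses plain LPN over GF(2) with arbitrary parameters; the paper's protocols work with the restricted-syndrome structure instead.

```python
import secrets

n, rounds, tau = 128, 256, 0.125
x = [secrets.randbelow(2) for _ in range(n)]         # shared secret vector

def respond(challenge, secret):
    noise = 1 if secrets.randbelow(1000) < tau * 1000 else 0
    return sum(a * s for a, s in zip(challenge, secret)) % 2 ^ noise

errors = 0
for _ in range(rounds):
    a = [secrets.randbelow(2) for _ in range(n)]     # verifier's challenge
    z = respond(a, x)                                # prover's noisy answer
    errors += z ^ (sum(ai * xi for ai, xi in zip(a, x)) % 2)

# Accept if the error count is consistent with noise rate tau (mean + 3 sigma)
threshold = rounds * tau + 3 * (rounds * tau * (1 - tau)) ** 0.5
print("accept" if errors <= threshold else "reject", f"({errors} noisy rounds)")
```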
Last updated:  2025-01-06
Lossy Cryptography from Code-Based Assumptions
Quang Dao and Aayush Jain
Over the past few decades, we have seen a proliferation of advanced cryptographic primitives with lossy or homomorphic properties built from various assumptions such as Quadratic Residuosity, Decisional Diffie-Hellman, and Learning with Errors. These primitives imply hard problems in the complexity class $\mathsf{SZK}$ (statistical zero-knowledge); as a consequence, they can only be based on assumptions that are broken in $\mathsf{BPP}^{\mathsf{SZK}}$. This poses a barrier for building advanced primitives from code-based assumptions, as the only known such assumption is Learning Parity with Noise (LPN) with an extremely low noise rate $\log^2 n / n$, which is broken in quasi-polynomial time. In this work, we propose a new code-based assumption: Dense-Sparse LPN, that falls in the complexity class $\mathsf{BPP}^{\mathsf{SZK}}$ and is conjectured to be secure against subexponential-time adversaries. Our assumption is a variant of LPN that is inspired by McEliece's cryptosystem and random $k$-XOR in average-case complexity. Roughly, the assumption states that $(\mathbf{T}\,\mathbf{M}, \mathbf{s}^\top \mathbf{T}\,\mathbf{M} + \mathbf{e}^\top)$ is pseudorandom, for a random (dense) matrix $\mathbf{T}$, a random sparse matrix $\mathbf{M}$, and a sparse noise vector $\mathbf{e}$ drawn from the Bernoulli distribution with inverse polynomial noise probability. We leverage our assumption to build lossy trapdoor functions (Peikert-Waters, STOC'08). This gives the first post-quantum alternative to the lattice-based construction in the original paper. Lossy trapdoor functions, being a fundamental cryptographic tool, are known to enable a broad spectrum of both lossy and non-lossy cryptographic primitives; our construction thus implies these primitives in a generic manner. In particular, we achieve collision-resistant hash functions with plausible subexponential security, improving over a prior construction from LPN with noise rate $\log^2 n / n$ that is only quasi-polynomially secure.
Last updated:  2025-01-06
Leakage-Free Probabilistic Jasmin Programs
José Bacelar Almeida, Denis Firsov, Tiago Oliveira, and Dominique Unruh
This paper presents a semantic characterization of leakage-freeness through timing side-channels for Jasmin programs. Our characterization covers probabilistic Jasmin programs that are not constant-time. In addition, we provide a characterization in terms of probabilistic relational Hoare logic and prove the equivalence between both definitions. We also prove that our new characterizations are compositional and relate our new definitions to existing ones from prior work, which could only be applied to deterministic programs. To provide practical evidence, we use the Jasmin framework to develop a rejection sampling algorithm and provide an EasyCrypt proof that ensures the algorithm's implementation is leakage-free while not being constant-time.
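The rejection-sampling pattern analyzed in the case study is the standard one: draw fixed-width randomness, discard out-of-range draws, and loop. The iteration count varies (so the code is not constant-time), yet the timing depends only on rejected, discarded values. Below is a Python rendition of the idea, not the paper's Jasmin code:

```python
import secrets

def sample_uniform_below(bound):
    """Uniform sampling in [0, bound) by rejection: each accepted draw is
    exactly uniform; the iteration count reveals nothing about the output."""
    k = bound.bit_length()
    while True:
        candidate = secrets.randbits(k)
        if candidate < bound:
            return candidate

counts = [0] * 5
for _ in range(10_000):
    counts[sample_uniform_below(5)] += 1
print(counts)  # roughly equal counts across the five buckets
```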
Last updated:  2025-01-06
Ringtail: Practical Two-Round Threshold Signatures from Learning with Errors
Cecilia Boschini, Darya Kaviani, Russell W. F. Lai, Giulio Malavolta, Akira Takahashi, and Mehdi Tibouchi
A threshold signature scheme splits the signing key among $n$ parties, such that any $t$-subset of parties can jointly generate signatures on a given message. Designing concretely efficient post-quantum threshold signatures is a pressing question, as evidenced by NIST's recent call. In this work, we propose, implement, and evaluate a lattice-based threshold signature scheme, Ringtail, which is the first to achieve a combination of desirable properties: (i) The signing protocol consists of only two rounds, where the first round is message-independent and can thus be preprocessed offline. (ii) The scheme is concretely efficient and scalable to a large number of parties, with small signatures and low online communication. (iii) The security is based on the standard learning with errors (LWE) assumption in the random oracle model. This improves upon the state-of-the-art (with comparable efficiency), which either has a three-round signing protocol [Eurocrypt'24] or relies on a new non-standard assumption [Crypto'24]. To substantiate the practicality of our scheme, we conduct the first WAN experiment deploying a lattice-based threshold signature, across 8 countries in 5 continents. We observe that an overwhelming majority of the end-to-end latency is consumed by network latency, underscoring the need for round-optimized schemes.
Last updated:  2025-01-06
Fiat-Shamir Bulletproofs are Non-Malleable (in the Random Oracle Model)
Chaya Ganesh, Claudio Orlandi, Mahak Pancholi, Akira Takahashi, and Daniel Tschudi
Bulletproofs (Bünz et al. IEEE S&P 2018) are a celebrated ZK proof system that allows for short and efficient proofs, and have been implemented and deployed in several real-world systems. In practice, they are most often implemented in their non-interactive version obtained using the Fiat-Shamir transform. A security proof for this setting is necessary for ruling out malleability attacks. These attacks can lead to very severe vulnerabilities, as they allow an adversary to forge proofs re-using or modifying parts of the proofs provided by the honest parties. An earlier version of this work (Ganesh et al. EUROCRYPT 2022) provided evidence for non-malleability of Fiat-Shamir Bulletproofs. This was done by proving simulation-extractability, which implies non-malleability, in the algebraic group model. In this work, we generalize the former result and prove simulation extractability in the programmable random oracle model, removing the need for the algebraic group model. Along the way, we establish a generic chain of reductions for Fiat-Shamir-transformed multi-round public-coin proofs to be simulation-extractable in the (programmable) random oracle model, which may be of independent interest.
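The property being analyzed hinges on how Fiat-Shamir challenges are derived in a multi-round protocol: each challenge must hash the statement and the entire transcript so far, which is what prevents splicing parts of honest proofs into new ones. A schematic transcript hash follows; the formatting choices are illustrative, not Bulletproofs' actual encoding.

```python
import hashlib

def fs_challenge(statement, messages, round_index, nbytes=32):
    """Challenge for round `round_index`, bound to the statement and all
    prover messages sent so far."""
    hs = hashlib.sha256()
    hs.update(b'FS-v1' + statement)
    for m in messages:
        hs.update(len(m).to_bytes(8, 'big') + m)   # length-prefixed framing
    hs.update(round_index.to_bytes(4, 'big'))
    return hs.digest()[:nbytes]

stmt = b'range-proof statement'
m1 = b'first prover message'
c1 = fs_challenge(stmt, [m1], 0)
m2 = b'second prover message, computed from c1'
c2 = fs_challenge(stmt, [m1, m2], 1)
assert fs_challenge(b'other statement', [m1], 0) != c1  # challenges re-bind
```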
Last updated:  2025-01-06
Privacy-Preserving Dijkstra
Benjamin Ostrovsky
Given a graph $G$, represented as a secret-sharing of an adjacency list, we show how to obliviously convert it into an alternative, MPC-friendly secret-shared representation, a so-called normalized replicated adjacency list, whose size is only 4x larger compared to the original (secret-shared adjacency list) representation of $G$. Yet, this new data structure enables us to execute oblivious graph algorithms that simultaneously improve the underlying graph algorithms' round, computation, and communication complexity. Our normalization proceeds in two steps: First, we show how, for any graph $G$ given as a secret-shared adjacency list where vertices are arbitrary alphanumeric strings that fit into a single RAM memory word, we can efficiently and securely rename vertices to integers from $1$ to $V$ that appear in increasing order in the resulting secret-shared adjacency list. The secure renaming is efficient in both rounds and secure operations, and our algorithm also outputs two secret-shared arrays: a mapping from integers to alphanumeric names and its sorted inverse. Of course, if the adjacency list is already in such a format, this step can be omitted. Second, given a secret-shared adjacency list for any graph $G$, where vertices are integers from $1$ to $V$ and are sorted, we show an efficient oblivious normalization algorithm. We believe that both conversions are of independent interest. We demonstrate the power of our data structures by designing a privacy-preserving Dijkstra single-source shortest-path algorithm that is simultaneously efficient in secure operations and rounds. Our secure Dijkstra algorithm works for any adjacency-list representation as long as all vertex labels and weights individually fit into a constant number of RAM memory words. Our algorithms work for two or any constant number of servers in the honest-but-curious setting. The limitation of our result (to only a constant number of servers) is due to our reliance on a linear-work, constant-round secure shuffle.
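As a reference point for the functionality being protected, here is the cleartext Dijkstra computation over an adjacency list with string-labeled vertices; all of the paper's machinery (secret sharing, renaming, normalization, oblivious execution) is deliberately absent.

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths over an adjacency list {v: [(u, w), ...]}."""
    dist = {v: float('inf') for v in adj}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

adj = {'a': [('b', 2), ('c', 5)], 'b': [('c', 1)], 'c': []}
print(dijkstra(adj, 'a'))                 # {'a': 0, 'b': 2, 'c': 3}
```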
Last updated:  2025-01-05
On the Independence Assumption in Quasi-Cyclic Code-Based Cryptography
Maxime Bombar, Nicolas Resch, and Emiel Wiedijk
Cryptography based on the presumed hardness of decoding codes -- i.e., code-based cryptography -- has recently seen increased interest due to its plausible security against quantum attackers. Notably, of the four proposals for the NIST post-quantum standardization process that were advanced to their fourth round for further review, two were code-based. The most efficient proposals -- including HQC and BIKE, the NIST submissions alluded to above -- in fact rely on the presumed hardness of decoding structured codes. Of particular relevance to our work, HQC is based on quasi-cyclic codes, which are codes generated by matrices consisting of two cyclic blocks. In particular, the security analysis of HQC requires a precise understanding of the Decryption Failure Rate (DFR), whose analysis relies on the following heuristic: given random "sparse" vectors $x, y$ (say, with each coordinate i.i.d. Bernoulli) multiplied by fixed "sparse" quasi-cyclic matrices $A, B$, the weight of the resulting vector $z = xA + yB$ is very concentrated around its expectation. In the documentation, the authors model the distribution of $z$ as a vector with independent coordinates (and the correct marginal distribution). However, we uncover cases where this modeling fails. While this does not invalidate the (empirically verified) heuristic that the weight of $z$ is concentrated, it does suggest that the behavior of the noise is a bit more subtle than previously predicted. Lastly, we also discuss implications of our result for potential worst-case to average-case reductions for quasi-cyclic codes.
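The concentration heuristic is easy to probe empirically: multiply random sparse vectors by a fixed sparse cyclic row (a quasi-cyclic product is a cyclic convolution over GF(2)) and look at the spread of the resulting weights. The parameters below are small and purely illustrative, unrelated to HQC's.

```python
import random

n, w_vec, w_mat, trials = 509, 15, 15, 200
support = random.sample(range(n), w_mat)     # fixed sparse cyclic row

def cyclic_mul(x):
    """x * h over GF(2), where h has the fixed support above (cyclic conv.)."""
    y = [0] * n
    for i, xi in enumerate(x):
        if xi:
            for s in support:
                y[(i + s) % n] ^= 1
    return y

weights = []
for _ in range(trials):
    x = [0] * n
    for i in random.sample(range(n), w_vec):
        x[i] = 1
    weights.append(sum(cyclic_mul(x)))

mean = sum(weights) / trials
std = (sum((w - mean) ** 2 for w in weights) / trials) ** 0.5
print(f"mean weight {mean:.1f}, std {std:.1f}")  # tight concentration
```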
Last updated:  2025-01-05
How Much Public Randomness Do Modern Consensus Protocols Need?
Joseph Bonneau, Benedikt Bünz, Miranda Christ, and Yuval Efron
Modern blockchain-based consensus protocols aim for efficiency (i.e., low communication and round complexity) while maintaining security against adaptive adversaries. These goals are usually achieved using a public randomness beacon to select roles for each participant. We examine to what extent this randomness is necessary. Specifically, we provide tight bounds on the amount of entropy a Byzantine Agreement protocol must consume from a beacon in order to enjoy efficiency and adaptive security. We first establish that no consensus protocol can simultaneously be efficient, be adaptively secure, and use $o(\log n)$ bits of beacon entropy. We then show this bound is tight and, in fact, a trilemma by presenting three consensus protocols that achieve any two of these three properties.
Last updated:  2025-01-05
Stronger Security and Constructions of Multi-Designated Verifier Signatures
Ivan Damgård, Helene Haagh, Rebekah Mercer, Anca Nițulescu, Claudio Orlandi, and Sophia Yakoubov
Off-the-Record (OTR) messaging is a two-party message authentication protocol that also provides plausible deniability: there is no record that can later convince a third party what messages were actually sent. To extend OTR to group messaging we need to consider issues that are not present in the 2-party case. In group OTR (as in two-party OTR), the sender should be able to authenticate (or sign) his messages so that group members can verify who sent a message (that is, signatures should be unforgeable, even by group members). Also as in the two-party case, we want the off-the-record property: even if some verifiers are corrupt and collude, they should not be able to prove the authenticity of a message to any outsider. Finally, we need consistency, meaning that a corrupt sender cannot create confusion in the group as to what he said: if any group member accepts a signature, then all of them do. To achieve these properties it is natural to consider Multi-Designated Verifier Signatures (MDVS), which intuitively seem to target exactly the properties we require. However, existing literature defines and builds only limited notions of MDVS, where (a) the off-the-record property (referred to as source hiding) only holds when all verifiers could conceivably collude, and (b) the consistency property is not considered. The contributions of this paper are two-fold: stronger definitions for MDVS, and new constructions meeting those definitions. We strengthen source-hiding to support any subset of corrupt verifiers, and give the first formal definition of consistency. We give several constructions of our stronger notion of MDVS: one from generic standard primitives such as pseudorandom functions, pseudorandom generators, key agreement and NIZKs; one from specific instances of these primitives (for concrete efficiency); and one from functional encryption. The third construction requires an involved trusted setup step — including verification keys derived from a master secret — but this trusted setup buys us verifier-identity-based signing, for which such trusted setup is unavoidable. Additionally, in the third construction, the signature size can be made smaller by assuming a bound on colluding verifiers.
Last updated:  2025-01-05
Probabilistic Attacks and Enhanced Security for "Private Set Intersection in the Internet Setting from Lightweight Oblivious PRF"
Zhuang Shan, Leyou Zhang, Qing Wu, and Qiqi Lai
Private Set Intersection (PSI) has been an important research topic within privacy computation. Its main function is to allow two parties to compute the intersection of their private sets without revealing any other private information. PSI can therefore be applied to various real-world scenarios. Chase and Miao presented an impressive construction, "Private set intersection in the Internet setting from lightweight oblivious PRF" (CM20 for short), at Crypto 2020, highlighting its convenient structure and optimal communication cost. However, it does have some security vulnerabilities. Let X be the private set of user A and Y be the private set of user B. The CM20 protocol uses a pseudorandom function (PRF) to encrypt the elements of X and of Y, sends random data to the two users via a random oblivious transfer technique, and has the users exchange and re-encrypt the PRF-encrypted elements until all of B's encrypted elements are packaged and sent to A, who completes the private set intersection. However, if an adversary obtains the exchanged values by any means, they can recover the PRF encryption of the users' private elements. This significantly weakens the security of the CM20 protocol. In this paper, we make three main contributions. First, based on the above analysis, we present a method for attacking CM20, called probabilistic attacks. This attack estimates and analyses the frequency distribution of the PRF-encrypted data and the probability distribution of the original private data, and determines the relationship between the two. Although not 100% effective, this attack poses a significant threat to the security of user data. Secondly, we introduce a new tool called the perturbed pseudorandom generator (PPRG). We show that the PPRG can defeat the probabilistic attack by replacing the random oblivious transfer and one of the two hash functions in CM20. Finally, we provide a dedicated indistinguishability-against-chosen-plaintext-attack (IND-CPA) security model for this PSI protocol. The efficiency analysis shows that the proposed PSI is comparable to CM20's PSI, whether on a PC, Mac, tablet, or mobile phone.
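The following toy Python sketch illustrates the frequency-analysis idea underlying such a probabilistic attack in general terms; it is not the paper's exact algorithm, and the names and distributions are invented for illustration.

```python
# A toy illustration of the frequency-analysis idea behind a
# "probabilistic attack": when private inputs are encoded by a
# deterministic function, the frequency distribution of the encodings
# mirrors the (often non-uniform) distribution of the plaintexts, so an
# eavesdropper can match encodings to likely plaintext values. This is a
# generic sketch, not the paper's exact attack on CM20.
import hashlib, random
from collections import Counter

population = ["alice"] * 50 + ["bob"] * 30 + ["carol"] * 20  # known prior

def det_encode(x):                  # stand-in for a deterministic PRF output
    return hashlib.sha256(x.encode()).hexdigest()[:8]

observed = [det_encode(random.choice(population)) for _ in range(1000)]
freq = Counter(observed).most_common()

# Match observed frequencies against the prior: the most frequent
# ciphertext is guessed to encode the most likely plaintext, and so on.
prior_order = [name for name, _ in Counter(population).most_common()]
for (ct, count), guess in zip(freq, prior_order):
    print(f"{ct} seen {count:4d} times -> guess: {guess}")
```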
Last updated:  2025-01-04
Leuvenshtein: Efficient FHE-based Edit Distance Computation with Single Bootstrap per Cell
Wouter Legiest, Jan-Pieter D'Anvers, Bojan Spasic, Nam-Luc Tran, and Ingrid Verbauwhede
This paper presents a novel approach to calculating the Levenshtein (edit) distance within the framework of Fully Homomorphic Encryption (FHE), specifically targeting third-generation schemes like TFHE. Edit distance computations are essential in applications across finance and genomics, such as DNA sequence alignment. We introduce an optimised algorithm, called Leuvenshtein, that significantly reduces the cost of edit distance calculations. This algorithm specifically reduces the number of programmable bootstraps (PBS) needed per cell of the calculation, lowering it from approximately 28 operations -- required by the conventional Wagner-Fischer algorithm -- to just 1. Additionally, we propose an efficient method for performing equality checks on characters, reducing ASCII character comparisons to only 2 PBS operations. Finally, we explore the potential for further performance improvements by utilizing preprocessing when one of the input strings is unencrypted. Our Leuvenshtein achieves substantially faster performance than the best available TFHE implementation and than an optimised implementation of the Wagner-Fischer algorithm, with an additional speedup when offline preprocessing is possible due to the presence of one unencrypted input on the server side.
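For reference, the classical plaintext Wagner-Fischer dynamic program is sketched below; each inner-loop cell update is the unit of work that the FHE computation must perform homomorphically, and that Leuvenshtein reduces to a single PBS.

```python
# The classical Wagner-Fischer dynamic program for the Levenshtein
# distance, in the clear. Each cell update below is what an FHE version
# must evaluate homomorphically; Leuvenshtein's contribution is bringing
# the cost of one such cell down to a single programmable bootstrap.
def levenshtein(a: str, b: str) -> int:
    m, n = len(a), len(b)
    prev = list(range(n + 1))            # row 0: distance to empty prefix
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution / match
        prev = cur
    return prev[n]

assert levenshtein("kitten", "sitting") == 3
```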
Last updated:  2025-01-04
Dynamically Available Common Subset
Yuval Efron and Ertem Nusret Tas
Internet-scale consensus protocols used by blockchains are designed to remain operational in the presence of unexpected temporary crash faults (the so-called sleepy model of consensus) -- a critical feature for the latency-sensitive financial applications running on these systems. However, their leader-based architecture, where a single block proposer is responsible for creating the block at each height, makes them vulnerable to short-term censorship attacks, in which the proposers profit at the application layer by excluding certain transactions. In this work, we introduce an atomic broadcast protocol, secure in the sleepy model, that ensures the inclusion of all transactions within a constant expected time, provided that at least one participating node is non-censoring at all times. Unlike traditional approaches, our protocol avoids designating a single proposer per block height, instead leveraging a so-called dynamically available common subset (DACS) protocol -- the first of its kind in the sleepy model. Additionally, our construction guarantees deterministic synchronization -- once an honest node confirms a block, all other honest nodes do so within a constant time, thus addressing a shortcoming of many low-latency sleepy protocols.
Last updated:  2025-01-04
Publicly-Detectable Watermarking for Language Models
Jaiden Fairoze, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, and Mingyuan Wang
We present a publicly-detectable watermarking scheme for LMs: the detection algorithm contains no secret information, and it is executable by anyone. We embed a publicly-verifiable cryptographic signature into LM output using rejection sampling and prove that this produces unforgeable and distortion-free (i.e., undetectable without access to the public key) text output. We make use of error-correction to overcome periods of low entropy, a barrier for all prior watermarking schemes. We implement our scheme and find that our formal claims are met in practice.
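A heavily simplified sketch of the rejection-sampling idea (not the paper's full scheme: no error-correction, and a uniform toy sampler stands in for the language model):

```python
# A toy sketch of the rejection-sampling idea: to embed one signature bit
# b, resample the next token until a keyless hash of (context, token) has
# parity b. Anyone can recompute the hash, so detection needs no secret.
# This ignores the error-correction and entropy handling the paper adds.
import hashlib, random

VOCAB = [f"tok{i}" for i in range(256)]

def token_bit(context, token):
    h = hashlib.sha256((context + "|" + token).encode()).digest()
    return h[0] & 1

def sample_with_bit(context, b, max_tries=64):
    for _ in range(max_tries):                 # rejection sampling
        t = random.choice(VOCAB)               # stand-in for LM sampling
        if token_bit(context, t) == b:
            return t
    return None                                # low-entropy failure case

sig_bits = [1, 0, 1, 1, 0]                     # stand-in for signature bits
out, ctx = [], ""
for b in sig_bits:
    t = sample_with_bit(ctx, b)
    if t is None:
        break                                  # would need error-correction
    out.append(t)
    ctx += t
print(out, [token_bit("".join(out[:i]), out[i]) for i in range(len(out))])
```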
Last updated:  2025-01-04
A New Method for Solving Discrete Logarithm Based on Index Calculus
Jianjun HU
Index Calculus (IC) algorithm is the most effective probabilistic algorithm for solving discrete logarithms over finite fields of prime numbers, and it has been widely applied to cryptosystems based on elliptic curves. Since the IC algorithm was proposed in 1920, the research on it has never stopped, especially discretization of prime numbers on the finite fields, both the algorithm itself and its application have been greatly developed. Of course, there has been some research on elliptic curves,but with little success. For the IC algorithm, scholars pay more attention to how to improve the probability of solving and reduce the time complexity of calculation. It is the first time for the IICA to study the optimization problem of the IC by using the method of integer. However, the IICA only studies the case of integer up, and fails to consider the case of integer down. It is found that the integer direction of the IICA can be integer up or integer down, but the concept of modular multiplication needs to be used when integer down. After optimizing the IICA, the probability of successful solution of discrete logarithm is increased by nearly 2 times, and the number of transformations is also reduced to a certain extent, thus reducing the time complexity of solution. The re-optimized the IC algorithm greatly improves the probability of successful the IC solution. This research result poses a serious challenge to cryptosystems based on finite fields of prime numbers.
Last updated:  2025-01-03
Leverage Staking with Liquid Staking Derivatives (LSDs): Opportunities and Risks
Xihan Xiong, Zhipeng Wang, Xi Chen, William Knottenbelt, and Michael Huth
In the Proof of Stake (PoS) Ethereum ecosystem, users can stake ETH on Lido to receive stETH, a Liquid Staking Derivative (LSD) that represents staked ETH and accrues staking rewards. LSDs improve the liquidity of staked assets by facilitating their use in secondary markets, such as for collateralized borrowing on Aave or asset exchanges on Curve. The composability of Lido, Aave, and Curve enables an emerging strategy known as leverage staking, an iterative process that enhances financial returns while introducing potential risks. This paper establishes a formal framework for leverage staking with stETH and identifies 442 such positions on Ethereum over 963 days. These positions represent a total volume of 537,123 ETH (877m USD). Our data reveal that 81.7% of leverage staking positions achieved an Annual Percentage Rate (APR) higher than conventional staking on Lido. Despite the high returns, we also recognize the potential risks. For example, the Terra crash incident demonstrated that token devaluation can impact the market. Therefore, we conduct stress tests under extreme conditions of significant stETH devaluation to evaluate the associated risks. Our simulations reveal that leverage staking amplifies the risk of cascading liquidations by triggering intensified selling pressure through liquidation and deleveraging processes. Furthermore, this dynamic not only accelerates the decline of stETH prices but also propagates a contagion effect, endangering the stability of both leveraged and ordinary positions.
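The arithmetic behind the leverage loop can be made concrete with a small sketch; the loan-to-value ratio and rates below are made-up illustrative numbers, not figures from the paper.

```python
# Illustrative arithmetic for the leverage-staking loop: stake ETH for
# stETH, deposit it as collateral, borrow ETH at loan-to-value r, stake
# again, and repeat. Total exposure follows a geometric series with
# limit 1 / (1 - r). Rates below are made-up examples, not market data.
def leverage_staking(initial=1.0, ltv=0.7, rounds=10,
                     staking_apr=0.04, borrow_apr=0.02):
    exposure, borrowed, deposit = 0.0, 0.0, initial
    for _ in range(rounds):
        exposure += deposit              # stETH now earning staking yield
        loan = deposit * ltv             # borrow against the new collateral
        borrowed += loan
        deposit = loan                   # restake the borrowed ETH
    net_apr = (exposure * staking_apr - borrowed * borrow_apr) / initial
    return exposure, borrowed, net_apr

exp_, debt, apr = leverage_staking()
print(f"exposure ~{exp_:.2f} ETH (limit {1/(1-0.7):.2f}), "
      f"debt {debt:.2f}, net APR {apr:.2%}")
```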
Last updated:  2025-01-03
Wave Hello to Privacy: Efficient Mixed-Mode MPC using Wavelet Transforms
José Reis, Mehmet Ugurbil, Sameer Wagh, Ryan Henry, and Miguel de Vega
This paper introduces new protocols for secure multiparty computation (MPC) leveraging Discrete Wavelet Transforms (DWTs) for computing nonlinear functions over large domains. By employing DWTs, the protocols significantly reduce the overhead typically associated with Lookup Table-style (LUT) evaluations in MPC. We state and prove foundational results for DWT-compressed LUTs in MPC, present protocols for 9 of the most common activation functions used in ML, and experimentally evaluate the performance of our protocols for large domain sizes in the LAN and WAN settings. Our protocols are extremely fast -- for instance, when considering 64-bit inputs, computing 1000 parallel instances of the sigmoid function with small error takes only a few hundred milliseconds and incurs just 29 KiB of online communication (40 bytes per evaluation).
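The compression idea can be previewed in the clear: wavelet-transform a function's lookup table and keep only the large coefficients. A minimal NumPy sketch with a hand-rolled Haar transform (illustrative only, not the paper's protocol):

```python
# A plaintext sketch of the core compression idea: take the lookup table
# of a nonlinear function, apply a Haar discrete wavelet transform, drop
# the tiny coefficients, and reconstruct an approximation. The MPC
# protocols evaluate such compressed LUTs securely; this only shows why
# the compression works. Plain NumPy, not the paper's code.
import numpy as np

def haar_forward(x):
    x = x.astype(float)
    coeffs, n = [], len(x)
    while n > 1:
        half = n // 2
        avg = (x[:n:2] + x[1:n:2]) / 2.0
        diff = (x[:n:2] - x[1:n:2]) / 2.0
        coeffs.append(diff)              # detail coefficients of this level
        x[:half] = avg
        n = half
    coeffs.append(x[:1].copy())          # final running average
    return coeffs

def haar_inverse(coeffs):
    x = coeffs[-1].copy()
    for diff in reversed(coeffs[:-1]):
        out = np.empty(2 * len(diff))
        out[0::2] = x + diff             # avg + diff recovers even entries
        out[1::2] = x - diff             # avg - diff recovers odd entries
        x = out
    return x

xs = np.linspace(-8, 8, 1024)            # LUT for sigmoid on a 10-bit domain
lut = 1.0 / (1.0 + np.exp(-xs))
coeffs = haar_forward(lut)
for c in coeffs[:-1]:
    c[np.abs(c) < 1e-3] = 0.0            # drop the small detail coefficients
approx = haar_inverse(coeffs)
print("max error:", np.max(np.abs(approx - lut)),
      "nonzero coeffs:", sum(int(np.count_nonzero(c)) for c in coeffs))
```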
Last updated:  2025-01-03
A Note on the Minimality of One-Way Functions in Post-Quantum Cryptography
Sam Buxbaum and Mohammad Mahmoody
In classical cryptography, one-way functions (OWFs) play a central role as the minimal primitive that (almost) all primitives imply. The situation is more complicated in quantum cryptography, in which honest parties and adversaries can use quantum computation and communication, and it is known that analogues of OWFs in the quantum setting might not be minimal. In this work we ask whether OWFs are minimal for the intermediate setting of post-quantum cryptography, in which the protocols are classical while they must resist quantum adversaries. We show that for a wide range of natural settings, if a primitive Q implies OWFs, then so does its (uniformly or non-uniformly secure) post-quantum analogue. In particular, we show that if a primitive Q implies any other primitive P that has a 2-message security game (e.g., OWFs) through a black-box classical security reduction R, then one can always (efficiently) turn any polynomial-size quantum adversary breaking P into a polynomial-size quantum adversary breaking Q. Note that this result holds even if the implementation of P using that of Q is arbitrarily non-black-box. We also prove extensions of this result for when the reduction R anticipates its oracle adversary to be deterministic, whenever either of the following conditions hold: (1) the adversary needs to win the security game of Q only with non-negligible probability (e.g., Q is collision-resistant hashing) or (2) either of P and Q have "falsifiable" security games (this is the case when P is OWFs). Our work leaves open answering our main question when Q implies OWFs through a non-black-box security reduction, or when P uses a more complicated security game than a two-message one.
Last updated:  2025-01-03
Fast SNARK-based Non-Interactive Distributed Verifiable Random Function with Ethereum Compatibility
Jia Liu and Mark Manulis
Distributed randomness beacons (DRBs) are fundamental for various decentralised applications, such as consensus protocols, decentralised gaming and lotteries, and collective governance protocols. These applications are heavily used on modern blockchain platforms. This paper presents the most efficient direct construction and implementation to date of a non-interactive distributed verifiable random function (NI-DVRF) that is fully compatible with Ethereum. Our NI-DVRF scheme adopts pairings and combines techniques from secret sharing, SNARKs, and BLS signatures. The security properties of the resulting NI-DVRF scheme are formally modelled and proven in the random oracle model under standard pairing-based assumptions. To justify the efficiency and cost claims and, more generally, its adoption potential in practice, the proposed NI-DVRF scheme was implemented in Rust and Solidity. Our implementation is highly optimised and is currently being investigated for deployment on the multichain layer-2 scaling solution provided by Boba Network to power its DRB service zkRand. Our experimental analysis, therefore, also evaluates performance and scalability properties of the proposed NI-DVRF and its implementation.
Last updated:  2025-01-03
Indistinguishability Obfuscation from Simple-to-State Hard Problems: New Assumptions, New Techniques, and Simplification
Romain Gay, Aayush Jain, Huijia Lin, and Amit Sahai
In this work, we study the question of what set of simple-to-state assumptions suffice for constructing functional encryption and indistinguishability obfuscation (iO), supporting all functions describable by polynomial-size circuits. Our work improves over the state-of-the-art work of Jain, Lin, Matt, and Sahai (Eurocrypt 2019) in multiple dimensions. New Assumption: Prior to our work, all constructions of iO from simple assumptions required novel pseudorandomness generators involving LWE samples and constant-degree polynomials over the integers, evaluated on the error of the LWE samples. In contrast, Boolean pseudorandom generators (PRGs) computable by constant-degree polynomials have been extensively studied since the work of Goldreich (2000). We show how to replace the novel pseudorandom objects over the integers used in previous works with appropriate Boolean pseudorandom generators of sufficient stretch, combined with LWE with binary error over suitable parameters. Both binary-error LWE and constant-degree Goldreich PRGs have been subjects of extensive cryptanalysis since well before our work, and we thus back the plausibility of our assumption with security against the algorithms studied in the context of cryptanalysis of these objects. New Techniques: We introduce a number of new techniques: (i) we show how to build partially-hiding public-key functional encryption, supporting degree-2 functions in the secret part of the message and arithmetic functions over the public part of the message, assuming only standard assumptions over asymmetric pairing groups; (ii) we construct single-ciphertext and single-secret-key functional encryption for all circuits with long outputs, which has the features of linear key generation and compact ciphertext, assuming only the LWE assumption. Simplification: Unlike prior works, our new techniques furthermore let us construct public-key functional encryption for polynomial-sized circuits directly (without invoking any bootstrapping theorem, nor a transformation from secret-key to public-key FE), based only on the polynomial hardness of the underlying assumptions. The functional encryption scheme satisfies a strong notion of efficiency where the size of the ciphertext is independent of the size of the circuit to be computed, and grows only sublinearly in the output size of the circuit and polynomially in the input size and the depth of the circuit. Finally, assuming that the underlying assumptions are subexponentially hard, we can bootstrap this construction to achieve indistinguishability obfuscation for all polynomial-size circuits.
Last updated:  2025-01-02
DL-SCADS: Deep Learning-Based Post-Silicon Side-Channel Analysis Using Decomposed Signal
Dipayan Saha and Farimah Farahmandi
Side-channel analysis (SCA) targets not an algorithm's weaknesses but those of its implementation. The rise of machine learning (ML) and deep learning (DL) is giving adversaries advanced capabilities to perform stealthy attacks. In this paper, we propose DL-SCADS, a DL-based approach along with signal decomposition techniques to leverage the power of secret key extraction from post-silicon EM/power side-channel traces. We integrate previously proven effective ideas of model ensembling and automated hyperparameter tuning with signal decomposition to develop an efficient and robust side-channel attack. Extensive experiments are performed on Advanced Encryption Standard (AES) and Post-Quantum Cryptography (PQC) implementations to demonstrate the efficacy of our approach. An evaluation of the attack's performance under various decomposition techniques, and a comparison with the proposed approach across a range of datasets, is also tabulated.
Last updated:  2025-01-02
A Combinatorial Approach to IoT Data Security
Anandarup Roy, Bimal Kumar Roy, Kouichi Sakurai, and Suprita Talnikar
This article explores the potential of Secret Sharing-Based Internet of Things (SBIoT) as a promising cryptographic element across diverse applications, including secure data storage in commercial cloud systems (Datachest), smart home environments (encompassing sensors, cameras, smart locks, and smart assistants), and e-health applications (protecting patient data and medical records). Beyond these applications, the paper makes two key contributions: the introduction of a novel cheater identification algorithm designed to verify the correct submission of shares during secret reconstruction, and empirical validation through experimental studies to support the theoretical advancements. This multifaceted approach not only demonstrates the versatility of SBIoT but also proposes innovative mechanisms to enhance security and integrity, contributing to the development of a more robust cryptographic framework. This article expands upon the work presented in the poster "A Combinatorial Approach to IoT Data Security" at IWSEC 2023, Yokohama, Japan.
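For readers unfamiliar with the underlying primitive, here is a minimal textbook Shamir (t, n) secret-sharing sketch over a prime field; it is generic background, not the article's specific SBIoT scheme or its cheater-identification algorithm.

```python
# A minimal textbook Shamir (t, n) secret-sharing sketch over a prime
# field -- the basic primitive underlying SBIoT-style designs. Generic
# background only, not the article's scheme or its cheater identification.
import random

P = 2**127 - 1                       # a Mersenne prime as the field modulus

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):             # Lagrange interpolation at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 shares suffice
```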
Last updated:  2025-01-01
PIR with Client-Side Preprocessing: Information-Theoretic Constructions and Lower Bounds
Yuval Ishai, Elaine Shi, and Daniel Wichs
It is well-known that classical Private Information Retrieval (PIR) schemes without preprocessing must suffer from linear server computation per query. Moreover, any such single-server PIR with sublinear bandwidth must rely on public-key cryptography. Several recent works showed that these barriers pertaining to classical PIR can be overcome by introducing a preprocessing phase where each client downloads a small hint that helps it make queries subsequently. Notably, the Piano PIR scheme (and subsequent improvements) showed that with such a client-side preprocessing, not only can we have PIR with sublinear computation and bandwidth per query, but somewhat surprisingly, we can also get it using only symmetric-key cryptography (i.e., one-way functions). In this paper, we take the question of minimizing cryptographic assumptions to an extreme. In particular, we are the first to explore the landscape of information theoretic single-server preprocessing PIR. We make contributions on both the upper- and lower-bounds fronts. First, we show new information-theoretic constructions with various non-trivial performance tradeoffs between server computation, client space and bandwidth. Second, we prove a (nearly) tight lower bound on the tradeoff between the client space and bandwidth in information-theoretic constructions. Finally, we prove that any computational scheme that overcomes the information-theoretic lower bound and satisfies a natural syntactic requirement (which is met by all known constructions) would imply a hard problem in the complexity class SZK. In particular, this shows that Piano achieves (nearly) optimal client space and bandwidth tradeoff subject to making a black-box use of a one-way function. Some of the techniques we use for the above results can be of independent interest.
Last updated:  2025-01-01
Attribute Based Encryption for Turing Machines from Lattices
Shweta Agrawal, Simran Kumari, and Shota Yamada
We provide the first attribute based encryption (ABE) scheme for Turing machines supporting unbounded collusions from lattice assumptions. In more detail, the encryptor encodes an attribute x together with a bound t on the machine running time and a message m into the ciphertext, the key generator embeds a Turing machine M into the secret key, and decryption returns m if and only if M accepts x within t steps. Crucially, the input x and machine M can be of unbounded size, the time bound t can be chosen dynamically for each input, and decryption runs in input-specific time. Previously, the best known ABE for uniform computation supported only non-deterministic log-space Turing machines (NL) from pairings (Lin and Luo, Eurocrypt 2020). In the post-quantum regime, the state of the art supports non-deterministic finite automata from LWE in the secret-key setting (Agrawal, Maitra and Yamada, Crypto 2019). In more detail, our results are: 1. We construct the first ABE for NL from the LWE, evasive LWE (Wee, Eurocrypt 2022 and Tsabary, Crypto 2022) and Tensor LWE (Wee, Eurocrypt 2022) assumptions. This yields the first (conjectured) post-quantum ABE for NL. 2. Relying on LWE, evasive LWE and a new assumption called circular tensor LWE, we construct ABE for all Turing machines. At a high level, the circular tensor LWE assumption incorporates circularity into the tensor LWE (Wee, Eurocrypt 2022) assumption. Towards our ABE for Turing machines, we obtain the first CP-ABE for circuits of unbounded depth and size from the same assumptions -- this may be of independent interest.
Last updated:  2025-01-01
Post-Quantum DNSSEC with Faster TCP Fallbacks
Aditya Singh Rawat and Mahabir Prasad Jhanwar
In classical DNSSEC, a drop-in replacement with quantum-safe cryptography would increase DNS query resolution times considerably. Since a DNS response containing large post-quantum signatures is likely to get marked truncated (TC) by a nameserver (resulting in a wasted UDP round-trip), the client (here, the resolver) would have to retry its query over TCP, further incurring a penalty of two round-trips due to the three-way TCP handshake. We present a backward-compatible protocol that eliminates round-trips from the preceding flow by 1) sending TCP handshake data in the initial DNS/UDP flight itself, and 2) immediately streaming the DNS response over TCP after authenticating the client with a cryptographic cookie. Our experiments show that DNSSEC over our protocol, with either Falcon-512 or Dilithium-2 as the zone-signing algorithm, is practically as fast as the currently deployed ECDSA P-256 and RSA-2048 setups in resolving DNS queries.
Last updated:  2025-01-01
Smaug: Modular Augmentation of LLVM for MPC
Radhika Garg and Xiao Wang
Secure multi-party computation (MPC) is a crucial tool for privacy-preserving computation, but it is getting increasingly complicated due to recent advancements and optimizations. Programming tools for MPC allow programmers to develop MPC applications without mastering all cryptography. However, most existing MPC programming tools fail to attract real users due to the lack of documentation, maintenance, and the ability to compose with legacy codebases. In this work, we build Smaug, a modular extension of LLVM. Smaug seamlessly brings all LLVM support to MPC programmers, including error messaging, documentation, code optimization, and frontend support to compile from various languages to LLVM intermediate representation (IR). Smaug can efficiently convert non-oblivious LLVM IR to their oblivious counterparts while applying popular optimizations as LLVM code transformations. With benchmarks written in C++ and Rust and backends for Yao and GMW protocols, we observe that Smaug performs as well as (and sometimes much better than) prior tools using domain-specific languages with similar backends. Finally, we use Smaug to compile open-source projects that implement Minesweeper and Blackjack, producing usable two-party games with ease.
Last updated:  2025-01-01
What is "legal" and "illegal?": Social Norms, Current Practices and Perceived Risks among the Cryptocurrency Users in Bangladesh
Tanusree Sharma, Atm Mizanur Rahman, Silvia Sandhi, Yang Wang, Rifat Shahriyar, and S M Taiabul Haque
Cryptocurrency practices worldwide are seen as innovative, yet they navigate a fragmented regulatory environment. Many local authorities aim to balance promoting innovation, safeguarding consumers, and managing potential threats. In particular, it is unclear how people deal with cryptocurrencies in the regions where trading or mining is prohibited. This insight is crucial in conveying the risk reduction strategies. To address this, we conducted semi-structured interviews with 28 cryptocurrency traders and miners from Bangladesh, where the local authority is hostile towards cryptocurrencies. Our research revealed that the participants use unique strategies to mitigate risks around cryptocurrencies. Our findings indicate a prevalent uncertainty at both personal and organizational levels concerning the interpretation of laws, a situation worsened by the actions of the major financial service providers who indirectly facilitate cryptocurrency transactions. We further connect our findings to the broader issues in HCI regarding folk models, informal market and legality, and education and awareness.
Last updated:  2025-01-01
Nearly Quadratic Asynchronous Distributed Key Generation
Ittai Abraham, Renas Bacho, Julian Loss, and Gilad Stern
We prove that, given a VRF setup and assuming secure erasures, there exists a protocol for Asynchronous Distributed Key Generation (ADKG) that is resilient to a strongly adaptive adversary corrupting up to one third of the parties. With all but negligible probability, all nonfaulty parties terminate in a constant expected number of rounds and send a nearly quadratic expected total number of messages.
Last updated:  2024-12-31
Compact Key Storage in the Standard Model
Yevgeniy Dodis and Daniel Jost
In recent work [Crypto'24], Dodis, Jost, and Marcedone introduced Compact Key Storage (CKS) as a modern approach to backup for end-to-end (E2E) secure applications. As most E2E-secure applications rely on a sequence of secrets from which, together with the ciphertexts sent over the network, all content can be restored, Dodis et al. introduced CKS as a primitive for backing up this sequence of secrets. The authors provided definitions as well as two practically efficient schemes (with different functionality-efficiency trade-offs). Both their security definitions and their schemes, however, relied on the random oracle model (ROM). In this paper, we first show that this reliance is inherent. More concretely, we argue that in the standard model, one cannot have a general CKS instantiation that is applicable to all "CKS-compatible games", as defined by Dodis et al. and realized by their ROM construction. Therefore, one must restrict the notion of CKS-compatible games to allow for standard-model CKS instantiations. We then introduce an alternative standard-model CKS definition that makes concessions in terms of functionality (thereby circumventing the impossibility). More precisely, we specify a CKS which does not recover the original secrets but a derived key, and then observe that this still suffices for many real-world applications. We instantiate this new notion based on minimal assumptions. For passive security, we provide an instantiation based on one-way functions only. For stronger notions, we additionally need collision-resistant hash functions and dual-PRFs, which we argue to be minimal. Finally, we provide a modularization of the CKS protocols of Dodis et al. In particular, we present a unified protocol (and proof) for standard-model equivalents of both protocols introduced in the original work.
Last updated:  2024-12-31
NMFT: A Copyrighted Data Trading Protocol based on NFT and AI-powered Merkle Feature Tree
Dongming Zhang, Lei Xie, and Yu Tao
With the rapid growth of blockchain-based Non-Fungible Tokens (NFTs), data trading has evolved to incorporate NFTs for ownership verification. However, the NFT ecosystem faces significant challenges in copyright protection, particularly when malicious buyers slightly modify the purchased data and re-mint it as a new NFT, infringing upon the original owner's rights. In this paper, we propose a copyright-preserving data trading protocol to address this challenge. First, we introduce the Merkle Feature Tree (MFT), an enhanced version of the traditional Merkle Tree that incorporates an AI-powered feature layer above the data layer. Second, we design a copyright challenge phase during the trading process, which recognizes the data owner with highly similar feature vectors and earlier on-chain timestamp as the legitimate owner. Furthermore, to achieve efficient and low-gas feature vector similarity computation on blockchain, we employ Locality-Sensitive Hashing (LSH) to compress high-dimensional floating-point feature vectors into single uint256 integers. Experiments with multiple image and text feature extraction models demonstrate that LSH effectively preserves the similarity between highly similar feature vectors before and after compression, thus supporting similarity-based copyright challenges. Experimental results on the Ethereum Sepolia testnet demonstrate NMFT's scalability with sublinear growth in gas consumption while maintaining stable latency.
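One standard way to realize such a compression, sketched below under the assumption of random-hyperplane LSH (SimHash), maps a float feature vector to a 256-bit integer whose Hamming distance tracks cosine similarity; the paper's exact LSH construction may differ.

```python
# A sketch of random-hyperplane LSH (SimHash): compress a high-dimensional
# float feature vector into a single 256-bit integer whose Hamming
# distance tracks cosine similarity -- one standard way to realize the
# uint256 compression described above (not necessarily the paper's exact
# LSH construction).
import numpy as np

rng = np.random.default_rng(seed=7)
DIM = 512
PLANES = rng.standard_normal((256, DIM))     # fixed random hyperplanes

def simhash_uint256(vec):
    bits = (PLANES @ vec) >= 0               # one sign bit per hyperplane
    return sum(int(b) << i for i, b in enumerate(bits))

v = rng.standard_normal(DIM)
v_similar = v + 0.05 * rng.standard_normal(DIM)    # slightly modified copy
v_other = rng.standard_normal(DIM)

ham = lambda a, b: bin(a ^ b).count("1")
h = simhash_uint256(v)
print(ham(h, simhash_uint256(v_similar)))   # small Hamming distance
print(ham(h, simhash_uint256(v_other)))     # ~128 expected for unrelated
```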
Last updated:  2024-12-31
Worst-Case to Average-Case Hardness of LWE: An Alternative Perspective
Divesh Aggarwal, Leong Jin Ming, and Alexandra Veliche
In this work, we study the worst-case to average-case hardness of the Learning with Errors problem (LWE) under an alternative measure of hardness the maximum success probability achievable by a probabilistic polynomial-time (PPT) algorithm. Previous works by Regev (STOC 2005), Peikert (STOC 2009), and Brakerski, Peikert, Langlois, Regev, Stehle (STOC 2013) give worst-case to average-case reductions from lattice problems to LWE, specifically from the approximate decision variant of the Shortest Vector Problem (GapSVP) and the Bounded Distance Decoding (BDD) problem. These reductions, however, are lossy in the sense that even the strongest assumption on the worst-case hardness of GapSVP or BDD implies only mild hardness of LWE. Our alternative perspective gives a much tighter reduction and strongly relates the hardness of LWE to that of BDD. In particular, we show that under a reasonable assumption about the success probability of solving BDD via a PPT algorithm, we obtain a nearly tight lower bound on the highest possible success probability for solving LWE via a PPT algorithm. Furthermore, we show a tight relationship between the best achievable success probability by any probabilistic polynomial-time algorithm for decision-LWE to that of search-LWE. Our results not only refine our understanding of the computational complexity of LWE, but also provide a useful framework for analyzing the practical security implications.
Last updated:  2024-12-30
Secure Vault scheme in the Cloud Operating Model
Rishiraj Bhattacharyya, Avradip Mandal, and Meghna Sengupta
The rising demand for data privacy in cloud-based environments has led to the development of advanced mechanisms for securely managing sensitive information. A prominent solution in this domain is the "Data Privacy Vault," a concept that is being provided commercially by companies such as Hashicorp, Basis Theory, Skyflow Inc., VGS, Evervault, Protegrity, Anonomatic, and BoxyHQ. However, no existing work has rigorously defined the security notions required for a Data Privacy Vault or proven them within a formal framework which is the focus of this paper. Among its other uses, data privacy vaults are increasingly being used as storage for LLM training data which necessitates a scheme that enables users to securely store sensitive information in the cloud while allowing controlled access for performing analytics on specific non-sensitive attributes without exposing sensitive data. Conventional solutions involve users generating encryption keys to safeguard their data, but these solutions are not deterministic and are therefore unsuited for the LLM setting. To address this, we propose a novel framework that is deterministic as well as semantically secure. Our scheme operates in the Cloud Operating model where the server is trusted but stateless, and the storage is outsourced. We provide a formal definition and a concrete instantiation of this data privacy vault scheme. We introduce a novel tokenization algorithm that serves as the core mechanism for protecting sensitive data within the vault. Our approach not only generates secure, unpredictable tokens for sensitive data but also securely stores sensitive data while enabling controlled data retrieval based on predefined access levels. Our work fills a significant gap in the existing literature by providing a formalized framework for the data privacy vault, complete with security proofs and a practical construction - not only enhancing the understanding of vault schemes but also offering a viable solution for secure data management in the era of cloud computing.
Last updated:  2024-12-30
FO derandomization sometimes damages security
Daniel J. Bernstein
FO derandomization is a common step in protecting against chosen-ciphertext attacks. There are theorems qualitatively stating that FO derandomization preserves ROM OW-CPA security. However, quantitatively, these theorems are loose, allowing the possibility of the derandomized security level being considerably smaller than the original security level. Many cryptosystems rely on FO derandomization without adjusting parameters to account for this looseness. This paper proves, for two examples of a randomized ROM PKE, that derandomizing the PKE degrades ROM OW-CPA security by a factor close to the number of hash queries. The first example can be explained by the size of the message space of the PKE; the second cannot. This paper also gives a concrete example of a randomized non-ROM PKE that appears to have the same properties regarding known attacks. As a spinoff, this paper presents a 2^88-guess attack exploiting derandomization to break one out of 2^40 ciphertexts for a FrodoKEM-640 public key. This attack contradicts the official FrodoKEM claim that "the FrodoKEM parameter sets comfortably match their target security levels with a large margin". The official responses to this attack so far include (1) renaming FrodoKEM as "ephemeral FrodoKEM" and (2) proposing a newly patched "FrodoKEM". This paper does not involve new cryptanalysis: the attacks are straightforward. What is new is finding examples where derandomization damages security.
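The derandomization pattern and the resulting attack surface can be shown schematically; the toy "PKE" below is just hash mixing with no security of its own, purely to illustrate how coins = H(m) lets each hash query test one message guess.

```python
# A schematic toy (not a real PKE) showing the derandomization pattern
# and why it interacts with hash queries: once the encryption coins are
# H(m), an attacker who can guess candidate messages can re-encrypt each
# guess deterministically and compare against the target ciphertext.
import hashlib

def H(m: bytes) -> bytes:
    return hashlib.sha256(b"coins|" + m).digest()

def toy_encrypt(pk: bytes, m: bytes, coins: bytes) -> bytes:
    # Stand-in for a randomized PKE: arbitrary hash mixing, solely to
    # illustrate the attack surface; this has no security itself.
    return hashlib.sha256(pk + m + coins).digest()

def derandomized_encrypt(pk, m):
    return toy_encrypt(pk, m, H(m))          # FO-style: coins = H(m)

pk = b"public-key"
target = derandomized_encrypt(pk, b"attack at dawn")

# Each hash/encryption query tests one message guess, so security can
# degrade roughly linearly in the number of queries.
for guess in [b"attack at noon", b"attack at dusk", b"attack at dawn"]:
    if derandomized_encrypt(pk, guess) == target:
        print("recovered:", guess.decode())
```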
Last updated:  2024-12-30
Exploring Large Integer Multiplication for Cryptography Targeting In-Memory Computing
Florian Krieger, Florian Hirner, and Sujoy Sinha Roy
Emerging cryptographic systems such as Fully Homomorphic Encryption (FHE) and Zero-Knowledge Proofs (ZKP) are computation- and data-intensive. FHE and ZKP implementations in software and hardware largely rely on the von Neumann architecture, where a significant amount of energy is lost on data movements. A promising computing paradigm is computing in memory (CIM), which enables computations to occur directly within memory, thereby reducing data movements and energy consumption. However, efficiently performing large integer multiplications - critical in FHE and ZKP - is an open question, as existing CIM methods are limited to small operand sizes. In this work, we address this question by exploring advanced algorithmic approaches for large integer multiplication, identifying the Karatsuba algorithm as the most effective for CIM applications. Thereafter, we design the first Karatsuba multiplier for resistive CIM crossbars. Our multiplier uses a three-stage pipeline to enhance throughput and, additionally, balances memory endurance with efficient array sizes. Compared to existing CIM multiplication methods, when scaled up to the bit widths required in ZKP and FHE, our design achieves improvements of up to 916x in throughput and 281x in area-time product.
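The algorithmic skeleton being mapped to the crossbar is the classical Karatsuba recursion, which trades four half-size products for three; a plain-Python sketch follows (the hardware pipelining itself is not modeled here).

```python
# The Karatsuba recursion in plain Python: three half-size products per
# level instead of four. This is the algorithmic skeleton the crossbar
# design pipelines; the hardware mapping itself is not shown here.
def karatsuba(x: int, y: int) -> int:
    if x < 2**64 or y < 2**64:               # small base case
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)      # split into high/low halves
    yh, yl = y >> m, y & ((1 << m) - 1)
    a = karatsuba(xh, yh)                    # high * high
    b = karatsuba(xl, yl)                    # low * low
    c = karatsuba(xh + xl, yh + yl) - a - b  # cross terms via one product
    return (a << (2 * m)) + (c << m) + b

import random
x, y = random.getrandbits(4096), random.getrandbits(4096)
assert karatsuba(x, y) == x * y
```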
Last updated:  2024-12-30
Two generalizations of almost perfect nonlinearity
Claude Carlet
Almost perfect nonlinear (in brief, APN) functions are vectorial functions playing roles in several domains of information protection, at the intersection of computer science and mathematics. Their definition comes from cryptography and is also related to coding theory. When they are used as substitution boxes (S-boxes, which are the only nonlinear components in block ciphers), APN functions contribute optimally to the resistance against differential attacks. This of course provides a strong cryptographic motivation for their study, which has been very active since the 90's and has posed interesting and difficult mathematical questions, some of which are still unanswered. Since the introduction of differential attacks, more recent types of cryptanalyses have been designed, such as integral attacks. No notion about S-boxes has been identified which would play a similar role with respect to integral attacks. In this paper, we study two generalizations of APNness that are natural from a mathematical point of view, since they directly extend classical characterizations of APN functions. We call these two notions strong non-normality and sum-freedom. The former already existed for Boolean functions (it had been introduced by Dobbertin) and the latter is new. We study how these two notions are related to cryptanalyses (the relation is weaker for strong non-normality). The two notions behave differently from each other while having similar definitions. They also behave differently from differential uniformity, which is a well-known generalization of APNness. We study the different ways to define them. We prove their satisfiability, their monotonicity, their invariance under classical equivalence relations, and we characterize them by the Walsh transform. We finally begin a study of the multiplicative inverse function (used as a substitution box in the Advanced Encryption Standard and other block ciphers) from the viewpoint of these two notions. In particular, we find a simple expression for the sum of the values taken by this function over those affine subspaces of the field that are not vector subspaces. This formula shows that the sum never vanishes on such affine spaces. We also give a formula for the case of a vector space defined by one of its bases.
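As a concrete companion to the discussion of the inverse function, the snippet below computes the difference distribution table of x -> x^(2^n - 2) in the toy field GF(2^4) (reduction polynomial x^4 + x + 1, an assumption of this sketch); its maximum entry is the differential uniformity.

```python
# Computing the difference distribution table (DDT) of the multiplicative
# inverse map x -> x^(2^n - 2) in GF(2^4) (reduction polynomial
# x^4 + x + 1) -- the same map used, over GF(2^8), in the AES S-box. The
# maximum DDT entry is the differential uniformity; for the inverse map
# it is 2 (APN) when n is odd and 4 when n is even.
N, MOD = 4, 0b10011                      # GF(2^4), x^4 + x + 1

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << N):                 # reduce when degree reaches N
            a ^= MOD
        b >>= 1
    return r

def gf_inv_map(x):                       # x^(2^N - 2); maps 0 to 0
    r = 1
    for _ in range(2**N - 2):
        r = gf_mul(r, x)
    return r

S = [gf_inv_map(x) for x in range(2**N)]
ddt_max = max(
    sum(1 for x in range(2**N) if S[x] ^ S[x ^ da] == db)
    for da in range(1, 2**N) for db in range(2**N)
)
print("differential uniformity of inverse over GF(2^4):", ddt_max)  # 4
```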
Last updated:  2024-12-30
PQConnect: Automated Post-Quantum End-to-End Tunnels
Daniel J. Bernstein, Tanja Lange, Jonathan Levin, and Bo-Yin Yang
This paper introduces PQConnect, a post-quantum end-to-end tunneling protocol that automatically protects all packets between clients that have installed PQConnect and servers that have installed and configured PQConnect. Like VPNs, PQConnect does not require any changes to higher-level protocols and application software. PQConnect adds cryptographic protection to unencrypted applications, works in concert with existing pre-quantum applications to add post-quantum protection, and adds a second application-independent layer of defense to any applications that have begun to incorporate application-specific post-quantum protection. Unlike VPNs, PQConnect automatically creates end-to-end tunnels to any number of servers using automatic peer discovery, with no need for the client administrator to configure per-server information. Each server carries out a client-independent configuration step to publish an announcement that the server's name accepts PQConnect connections. Any PQConnect client connecting to that name efficiently finds this announcement, automatically establishes a post-quantum point-to-point IP tunnel to the server, and routes traffic for that name through that tunnel. The foundation of security in PQConnect is the server's long-term public key used to encrypt and authenticate all PQConnect packets. PQConnect makes a conservative choice of post-quantum KEM for this public key. PQConnect also uses a smaller post-quantum KEM for forward secrecy, and elliptic curves to ensure pre-quantum security even in case of security failures in KEM design or KEM software. Security of the handshake component of PQConnect has been symbolically proven using Tamarin.
Last updated:  2024-12-29
Quantum One-Time Protection of any Randomized Algorithm
Sam Gunn and Ramis Movassagh
The meteoric rise in power and popularity of machine learning models dependent on valuable training data has reignited a basic tension between the power of running a program locally and the risk of exposing details of that program to the user. At the same time, fundamental properties of quantum states offer new solutions to data and program security that can require strikingly few quantum resources to exploit, and offer advantages outside of mere computational run time. In this work, we demonstrate such a solution with quantum one-time tokens. A quantum one-time token is a quantum state that permits a certain program to be evaluated exactly once. One-time security guarantees, roughly, that the token cannot be used to evaluate the program more than once. We propose a scheme for building quantum one-time tokens for any randomized classical program, which include generative AI models. We prove that the scheme satisfies an interesting definition of one-time security as long as outputs of the classical algorithm have high enough min-entropy, in a black box model. Importantly, the classical program being protected does not need to be implemented coherently on a quantum computer. In fact, the size and complexity of the quantum one-time token is independent of the program being protected, and additional quantum resources serve only to increase the security of the protocol. Due to this flexibility in adjusting the security, we believe that our proposal is parsimonious enough to serve as a promising candidate for a near-term useful demonstration of quantum computing in either the NISQ or early fault tolerant regime.
Last updated:  2024-12-29
Encrypted Multi-map that Hides Query, Access, and Volume Patterns
Alexandra Boldyreva and Tianxin Tang
We present an encrypted multi-map, a fundamental data structure underlying searchable encryption/structured encryption. Our protocol supports updates and is designed for applications demanding very strong data security. Not only does it hide the information about queries and data, but also the query, access, and volume patterns. Our protocol utilizes a position-based ORAM and an encrypted dictionary. We provide two instantiations of the protocol, along with their operation-type-revealing variants, all using PathORAM but with different encrypted-dictionary instantiations (an AVL tree or a BSkiplist). Their efficiency has been evaluated through both asymptotic and concrete complexity analysis, outperforming prior work while achieving the same level of strong security. We have implemented our instantiations and evaluated their performance on two real-world email databases (Enron and Lucene). We also discuss the strengths and limitations of our construction, including its resizability, and highlight that optimized solutions, even with heavy network utilization, may become practical as network speed improves.
Last updated:  2024-12-28
Computing the Hermite Normal Form: A Survey
Leon Damer
The Hermite Normal Form (HNF) of a matrix is an analogue of the echelon form over the integers. Any integer matrix can be transformed into its unique HNF. A common obstacle in computing the HNF is the extensive blow-up of intermediate values. As a first approach to this problem, we discuss an algorithm that keeps the entries bounded by the determinant of the lattice. Although this algorithm is very useful if the determinant is small, in the general case the entries still become extremely large. Secondly, we study an algorithm that is as fast as the best previously known algorithms but, in contrast, assures space complexity linear in the input size. As a last algorithm to compute the HNF, we analyze a heuristic algorithm based on the first two, which achieves a much faster runtime in practice while keeping the linear space complexity. Besides some performance speed-ups, these are precisely the algorithms implemented by SageMath.
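For experimentation, recent SymPy versions expose an HNF routine, including (per its documentation) a modulo-determinant variant via the D keyword; treat the exact API as an assumption of this sketch.

```python
# A quick way to experiment with the algorithms surveyed here: recent
# SymPy versions ship an HNF routine, including a modulo-determinant
# variant (pass D, a known positive multiple of the determinant) that
# keeps intermediate entries bounded.
from sympy import Matrix
from sympy.matrices.normalforms import hermite_normal_form

A = Matrix([[2, 3, 6], [4, 4, 8], [0, 5, 7]])
print(hermite_normal_form(A))                  # general algorithm
print(hermite_normal_form(A, D=abs(A.det())))  # mod-determinant variant
```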
Last updated:  2024-12-27
An Embedded Domain-Specific Language for Using One-Hot Vectors and Binary Matrices in Secure Computation Protocols
Andrei Lapets
The use of secure computation protocols within production software systems and applications is complicated by the fact that such protocols sometimes rely upon -- or are most compatible with -- unusual or restricted models of computation. We employ the features of a contemporary and widely used programming language to create an embedded domain-specific language for working with user-defined functions as binary matrices that operate on one-hot vectors. At least when working with small finite domains, this allows programmers to overcome the restrictions of more simple secure computation protocols that support only linear operations (such as addition and scalar multiplication) on private inputs. Notably, programmers are able to define their own input and output domains, to use all available host language features and libraries to define functions that operate on these domains, and to translate inputs, outputs, and functions between their usual host language representations and their one-hot vector or binary matrix forms. Furthermore, these features compose in a straightforward way with simple secure computation libraries available for the host language.
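The encoding the DSL automates can be illustrated in plain NumPy: a user-defined function f over a small domain becomes a binary matrix applied to one-hot vectors, i.e., a purely linear operation.

```python
# A plaintext illustration of the encoding the DSL automates: a function
# f over a small finite domain becomes a binary matrix F with
# F[f(x), x] = 1, and applying f is then the linear operation
# F @ onehot(x), which linear secure-computation protocols can evaluate
# on private inputs.
import numpy as np

DOMAIN = 4                                   # f : {0,..,3} -> {0,..,3}
f = lambda x: (3 * x + 1) % DOMAIN           # any user-defined function

F = np.zeros((DOMAIN, DOMAIN), dtype=int)
for x in range(DOMAIN):
    F[f(x), x] = 1                           # exactly one 1 per column

def onehot(x):
    v = np.zeros(DOMAIN, dtype=int)
    v[x] = 1
    return v

for x in range(DOMAIN):
    assert np.argmax(F @ onehot(x)) == f(x)  # matrix application == f
print(F)
```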
Last updated:  2024-12-27
Definition of End-to-end Encryption
Mallory Knodel, Sofía Celi, Olaf Kolkman, and Gurshabad Grover
This document provides a definition of end-to-end encryption (E2EE). End-to-end encryption is an application of cryptographic mechanisms to provide security and privacy to communication between endpoints. Such communication can include messages, email, video, audio, and other forms of media. E2EE provides security and privacy through confidentiality, integrity, authenticity and forward secrecy for communication amongst people.
Last updated:  2024-12-27
Zero Knowledge Memory-Checking Techniques for Stacks and Queues
Alexander Frolov
There are a variety of techniques for implementing read/write memory inside of zero-knowledge proofs and validating the consistency of memory accesses. These techniques are generally implemented with the goal of realizing a RAM or ROM. In this paper, we present memory techniques for more specialized data structures: queues and stacks. We first demonstrate a technique for implementing queues in arithmetic circuits that requires 3 multiplication gates and 1 advice value per read and 2 multiplication gates per write. It is based on using Horner's rule to evaluate 2 polynomials at random points and checking that the values read from the queue equal, as vectors, the values written to the queue. Next, we present a stack scheme based on an optimized version of the RAM scheme of Yang and Heath that requires 5 multiplication gates and 4 advice values per read and 2 multiplication gates per write. This optimizes the RAM scheme by observing that reads and writes to a stack are already "paired", which avoids the need to insert dummy operations for each access as a RAM scheme requires. We also introduce a different notion of "multiplexing" or "operation privacy" that is better suited to the use case of stacks and queues. All of the techniques we provide are based on evaluating polynomials at random points and using randomly evaluated polynomials as universal hash functions to check set/vector equality.
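The vector-equality check at the heart of the queue scheme looks as follows in the clear; by the Schwartz-Zippel lemma, two distinct coefficient vectors agree at a random point with probability at most deg/p.

```python
# The core check behind the queue scheme, in the clear: treat the writes
# and reads as coefficient vectors of two polynomials, evaluate both at a
# random point with Horner's rule, and accept iff the evaluations match.
# Unequal vectors collide with probability at most deg/p over the random
# choice of z.
import random

P = 2**61 - 1                                # prime field modulus

def horner(coeffs, z):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * z + c) % P              # one multiplication per element
    return acc

writes = [7, 3, 9, 1]                        # values pushed into the queue
reads  = [7, 3, 9, 1]                        # values popped, in FIFO order

z = random.randrange(1, P)
assert horner(writes, z) == horner(reads, z)         # consistent queue
assert horner(writes, z) != horner([7, 9, 3, 1], z)  # order matters (w.h.p.)
```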
Last updated:  2024-12-27
Fully Hybrid TLSv1.3 in WolfSSL on Cortex-M4
Mila Anastasova, Reza Azarderakhsh, and Mehran Mozaffari Kermani
To provide safe communication across an unprotected medium such as the internet, network protocols are being established. These protocols employ public-key techniques to perform key exchange and authentication. Transport Layer Security (TLS) is a widely used network protocol that enables secure communication between a server and a client; it is employed in billions of transactions per second. Contemporary protocols depend on traditional methods that rely on the computational complexity of the factorization or (elliptic curve) discrete logarithm problems. The ongoing advancement in the processing power of classical computers requires an ongoing increase in the security level of the underlying cryptographic algorithms. This study focuses on the analysis of Curve448 and the Edwards curve Ed448, renowned for their superior security features that offer a 224-bit level of security as part of the TLSv1.3 protocol. The exponential advancement of quantum computers, however, presents a substantial threat to secure network communication that depends on classical crypto schemes, irrespective of their degree of security, as quantum computers have the capability to resolve these problems within a feasible timeframe. In order to successfully transition to post-quantum secure network protocols, it is imperative to concurrently deploy both classical and post-quantum algorithms. This is done to fulfill the requirements of both enterprises and governments, while also instilling more assurance in the reliability of the post-quantum systems. This paper presents a detailed hybrid implementation architecture of the TLSv1.3 network protocol. We showcase the first deployment of Curve448 and Crystals-Kyber for key exchange, and Ed448 and Crystals-Dilithium for verifying the authenticity of entities and for X.509 Public Key Infrastructure (PKI). We rely upon the widely used OpenSSL library and the wolfSSL library for embedded devices to provide our results for server and client applications.
Last updated:  2024-12-27
ClusterGuard: Secure Clustered Aggregation for Federated Learning with Robustness
Yulin Zhao, Zhiguo Wan, and Zhangshuang Guan
Federated Learning (FL) enables collaborative model training while preserving data privacy by avoiding the sharing of raw data. However, in large-scale FL systems, efficient secure aggregation and dropout handling remain critical challenges. Existing state-of-the-art methods, such as those proposed by Liu et al. (UAI'22) and Li et al. (ASIACRYPT'23), suffer from prohibitive communication overhead, implementation complexity, and vulnerability to poisoning attacks. Alternative approaches that utilize partially connected graph structures (resembling client grouping) to reduce communication costs, such as Bell et al. (CCS'20) and ACORN (USENIX Sec'23), face the risk of adversarial manipulation during the graph construction process. To address these issues, we propose ClusterGuard, a secure clustered aggregation scheme for federated learning. ClusterGuard leverages Verifiable Random Functions (VRF) to ensure fair and transparent cluster selection and employs a lightweight key-homomorphic masking mechanism, combined with efficient dropout handling, to achieve secure clustered aggregation. Furthermore, ClusterGuard incorporates a dual filtering mechanism based on cosine similarity and norm to effectively detect and mitigate poisoning attacks. Extensive experiments on standard datasets demonstrate that ClusterGuard achieves over 2x efficiency improvement compared to advanced secure aggregation methods. Even with 20% of clients being malicious, the trained model maintains accuracy comparable to the original model, outperforming state-of-the-art robustness solutions. ClusterGuard provides a more efficient, secure, and robust solution for practical federated learning.
Last updated:  2024-12-27
zkFFT: Extending Halo2 with Vector Commitments & More
Aram Jivanyan, Gohar Hovhannisyan, Hayk Hovhannisyan, and Nerses Asaturyan
This paper introduces zkFFT, a novel zero-knowledge argument designed to efficiently generate proofs for FFT (Fast Fourier Transform) relations. Our approach enables verifying that one committed vector is the FFT of another, addressing an efficiency need in general-purpose non-interactive zero-knowledge proof systems where the proof relation takes vector commitments as inputs. We present a concrete enhancement to the Halo2 proving system, demonstrating how zkFFT optimizes proofs in scenarios where the proof relation includes one or more vector commitments. Specifically, zkFFT incorporates streamlined logic within Halo2 and similar systems, augmenting proof and verification complexity by only a logarithmic factor in the vector size. This represents a substantial improvement over the conventional approach, which often necessitates specific circuit extensions to validate the integrity of vector commitments and their corresponding private values in the arithmetic framework of the proof relation. The proposed zkFFT method supports multiple vector commitments with only a logarithmic increase in extension costs, making it highly scalable. This capability is pivotal for practical applications involving multiple pre-committed values within proof statements. Apart from Halo2, our technique can be adapted to any other zero-knowledge proof system that relies on arithmetization, treats each column as the evaluation of a polynomial over a specified domain, computes this polynomial via FFT, and subsequently commits to the resulting polynomial using a polynomial commitment scheme based on inner-product arguments. Along with efficient lookup and permutation arguments, zkFFT will streamline and significantly optimize the generation of zero-knowledge proofs for arbitrary relations. Beyond its applications in augmenting zero-knowledge proof systems, we believe that the formalized zkFFT argument is of independent interest.
Last updated:  2024-12-27
Algebraic Zero Knowledge Contingent Payment
Javier Gomez-Martinez, Dimitrios Vasilopoulos, Pedro Moreno-Sanchez, and Dario Fiore
In this work, we introduce Modular Algebraic Proof Contingent Payment (MAPCP), a novel zero-knowledge contingent payment (ZKCP) construction. Unlike previous approaches, MAPCP is the first that simultaneously avoids using zk-SNARKs as the tool for zero-knowledge proofs and HTLC contracts to atomically exchange a secret for a payment. As a result, MAPCP sidesteps the common reference string (CRS) creation problem and is compatible with virtually any cryptocurrency, even those with limited or no smart contract support. Moreover, MAPCP contributes to fungibility, as its payment transactions blend seamlessly with standard cryptocurrency payments. We analyze the security of MAPCP and demonstrate its atomicity, meaning that (i) the buyer gets the digital product once the payment is published on the blockchain (buyer security), and (ii) the seller receives the payment if the buyer gains access to the digital product (seller security). Finally, we present a construction of MAPCP in a use case where a customer pays a notary in exchange for a document signature.
Last updated:  2024-12-27
Designated-Verifier zk-SNARKs Made Easy
Chen Li and Fangguo Zhang
Zero-knowledge succinct non-interactive argument of knowledge (zk-SNARK) is a kind of proof system that enables a prover to efficiently convince a verifier that an NP statement is true. In the last decade, various studies have made considerable progress in constructing more efficient and secure zk-SNARKs. Our research focuses on designated-verifier zk-SNARKs, where only the verifier knowing some secret verification state can be convinced by the proof. A natural idea for obtaining a designated-verifier zk-SNARK is to encrypt a publicly-verifiable zk-SNARK's proof via public-key encryption. This is also the core idea behind the well-known transformation proposed by Bitansky et al. in TCC 2013 to obtain designated-verifier zk-SNARKs. However, the transformation only applies to zk-SNARKs that require a complicated trusted setup phase and rely on storage-expensive common reference strings. Moreover, if the secret verification state is leaked, the proof immediately loses the designated-verifier property. To address these issues, we first define "strong designated-verifier" security, covering the case where the adversary has access to the secret verification state, and then propose a construction of strong designated-verifier zk-SNARKs. The construction, inspired by designated-verifier signatures based on two-party ring signatures, does not use encryption and can be applied to any publicly-verifiable zk-SNARK to yield a designated-verifier variant. We present our construction for the circuit satisfiability problem and implement it in Circom, then test it on different zk-SNARKs, demonstrating the validity of our construction.
Last updated:  2024-12-26
Generalized Cryptanalysis of Cubic Pell RSA
Hao Kang and Mengce Zheng
The RSA (Rivest-Shamir-Adleman) cryptosystem is a fundamental algorithm of public key cryptography and is widely used across various information domains. For an RSA modulus $N = pq$ with unknown factorization, security vulnerabilities arise when attackers exploit the key equation $ed - k(p-1)(q-1) = 1$. To enhance the security, Murru and Saettone introduced cubic Pell RSA --- a variant of RSA based on the cubic Pell equation, where the key equation becomes $ed - k(p^2+p+1)(q^2+q+1) = 1$. In this paper, we further investigate the security implications of a generalized form of this key equation. We present a novel attack strategy aimed at recovering the prime factors $p$ and $q$ under specific conditions satisfied by the parameters of the generalized equation. Our generalized attack employs lattice-based Coppersmith's techniques and extends several previous attack scenarios, thus deepening the understanding of mathematical cryptanalysis.
Last updated:  2024-12-26
Improved Lattice-Based Attack on Mersenne Low Hamming Ratio Search Problem
Mengce Zheng and Wei Yan
This paper investigates the Mersenne number-based cryptosystem, with a particular focus on its associated hard problem. Specifically, we aim to enhance the existing lattice-based attack on the Mersenne low Hamming ratio search problem. Unlike the previous approach of directly employing a lattice reduction algorithm, we apply the lattice-based method to solving polynomial equations derived from the above problem. We extend the search range for vulnerabilities in weak keys and increase the success probability of the key recovery attack. To validate the efficacy and accuracy of our proposed improvements, we conduct numerical computer experiments. These experiments serve as a concrete validation of the practicality and effectiveness of our improved attack.
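For context (our own restatement of the common AJPS-style formulation; the notation is ours, not the paper's), the Mersenne low Hamming ratio search problem can be stated as:

\[
\text{given a Mersenne prime } p = 2^n - 1 \text{ and } H = F \cdot G^{-1} \bmod p, \text{ find } (F, G),
\]

where $F$ and $G$ are $n$-bit integers of low Hamming weight $h$, so that the ratio $h/n$ is small.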
Last updated:  2024-12-26
Optimally Secure TBC Based Accordion Mode
Nilanjan Datta, Avijit Dutta, Shibam Ghosh, and Hrithik Nandi
The design of tweakable wide block ciphers has advanced significantly over the past two decades. This evolution began with the approach of designing a wide block cipher by Naor and Reingold. Since then, numerous tweakable wide block ciphers have been proposed, many of which build on existing block ciphers and are secure up to the birthday bound for the total number of blocks queried. Although there has been a slowdown in the development of tweakable wide block cipher modes in the last couple of years, the latest NIST proposal for accordion modes has reignited interest and momentum in the design and analysis of these ciphers. Although new designs have emerged, their security often falls short of optimal (i.e., $n$-bit) security, where $n$ is the output size of the primitive. In this direction, designing an efficient tweakable wide block cipher with $n$-bit security seems to be an interesting research problem. An optimally secure tweakable wide block cipher mode can easily be turned into a misuse-resistant, RUP-secure authenticated encryption scheme with optimal security. This paper proposes a mode that turns an $n$-bit tweakable block cipher (TBC) with an $n$-bit tweak into a variable-input-length tweakable block cipher. Unlike tweakable \textsf{HCTR}, our mode ensures $n$-bit security regardless of tweak repetitions. We also propose two TBC-based almost-xor-universal hash functions and use them as the underlying hash functions in the construction to create two TBC-based $n$-bit secure tweakable wide block cipher modes. Experimental results show that both modes exhibit excellent software performance when their underlying TBC is instantiated with \textsf{Deoxys-BC-128-128}.
Last updated:  2024-12-26
Improved Universal Thresholdizer from Iterative Shamir Secret Sharing
Jung Hee Cheon, Wonhee Cho, and Jiseung Kim
The universal thresholdizer, introduced at CRYPTO'18, is a cryptographic scheme that transforms any cryptosystem into a threshold variant, thereby enhancing its applicability in threshold cryptography. It enables black-box construction of one-round threshold signature schemes based on the Learning with Errors problem and, similarly, facilitates one-round threshold chosen-ciphertext-attack secure public key encryption when integrated with non-threshold schemes. Current constructions of universal thresholdizers are fundamentally built upon linear secret sharing schemes. One approach employs Shamir's secret sharing, which lacks compactness and results in large ciphertext sizes, and another approach uses the $\{0,1\}$-linear secret sharing scheme ($\{0,1\}$-LSSS), which is compact but induces high communication costs due to the large number of secret shares required. In this work, we introduce a communication-efficient universal thresholdizer by revising the linear secret sharing scheme. We propose a specialized linear secret sharing scheme, called TreeSSS, which reduces the number of required secret shares while maintaining the compactness of the universal thresholdizer. TreeSSS can also serve as a subroutine for constructing lattice-based $t$-out-of-$N$ threshold cryptographic primitives such as threshold fully homomorphic encryption and threshold signatures. In this context, TreeSSS offers the advantage of lower communication overhead due to the reduced number of secret shares involved.
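As background for the secret-sharing layer being revised, here is a minimal Python sketch of plain $t$-out-of-$N$ Shamir sharing, the non-compact approach mentioned above; TreeSSS itself iterates such sharings in a tree structure, for which we refer to the paper. The field modulus is an arbitrary choice for the example.

import random

P = 2**61 - 1  # a Mersenne prime used as the field modulus for this toy example

def share(secret, t, n):
    # Shamir t-out-of-n sharing: random degree-(t-1) polynomial with f(0) = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at 0 over the prime field.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice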
Last updated:  2024-12-26
Solving AES-SAT Using Side-Channel Hints: A Practical Assessment
Elena Dubrova
Side-channel attacks exploit information leaked through non-primary channels, such as power consumption, electromagnetic emissions, or timing, to extract sensitive data from cryptographic devices. Over the past three decades, side-channel analysis has evolved into a mature research field with well-established methodologies for analyzing standard cryptographic algorithms like the Advanced Encryption Standard (AES). However, the integration of side-channel analysis with formal methods remains relatively unexplored. In this paper, we present a hybrid attack on AES that combines side-channel analysis with SAT solving. We model AES as a SAT problem and leverage hints about the input and output values of the S-boxes, extracted via profiled deep learning-based power analysis, to solve it. Experimental results on an ATXmega128D4 MCU implementation of AES-128 demonstrate that the SAT-assisted approach consistently recovers the full encryption key from a single trace, captured from devices different from those used for profiling, within one hour. In contrast, without SAT assistance, the success rate remains below 80% after 26 hours of key enumeration.
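To illustrate the general idea of injecting side-channel hints into a SAT instance, here is a schematic sketch using the python-sat (pysat) package; the variable indices, the predicted byte, and the way the cipher would be encoded are placeholder assumptions, not the paper's actual encoding.

from pysat.solvers import Glucose3  # pip install python-sat

solver = Glucose3()
# In the real attack, the CNF encoding of the full AES-128 circuit (key schedule
# and rounds, with the known plaintext/ciphertext fixed) would be added here;
# this sketch shows only how side-channel hints enter the instance.
predicted_byte = 0x3A  # power model's prediction for one S-box input byte
for bit in range(8):
    var = 1 + bit  # placeholder CNF variable for bit `bit` of that byte
    literal = var if (predicted_byte >> (7 - bit)) & 1 else -var
    solver.add_clause([literal])  # a unit clause pins the bit to its predicted value

# With the cipher clauses present, unit propagation carries these hints to key bits.
print("satisfiable:", solver.solve())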
Last updated:  2024-12-26
Strongly Secure Universal Thresholdizer
Ehsan Ebrahimi and Anshu Yadav
A universal thresholdizer (UT), constructed from a threshold fully homomorphic encryption by Boneh et al. (CRYPTO 2018), is a general framework for universally thresholdizing many cryptographic schemes. However, their framework is insufficient to construct strongly secure threshold schemes, such as threshold signatures and threshold public-key encryption. In this paper, we strengthen the security definition for a universal thresholdizer and propose a scheme which satisfies our stronger security notion. Our UT scheme improves Boneh et al.'s construction at the level of threshold fully homomorphic encryption using a key-homomorphic pseudorandom function. We apply our strongly secure UT scheme to construct strongly secure threshold signatures and threshold public-key encryption.
Last updated:  2024-12-26
Report on evaluation of KpqC Round-2 candidates
Daniel J. Bernstein, Jolijn Cottaar, Emanuele Di Giandomenico, Kathrin Hövelmanns, Andreas Hülsing, Mikhail Kudinov, Tanja Lange, Mairon Mahzoun, Matthias Meijers, Alex Pellegrini, Alberto Ravagnani, Silvia Ritsch, Sven Schäge, Tianxin Tang, Monika Trimoska, Marc Vorstermans, and Fiona Johanna Weber
This report covers our analysis (security, proofs, efficiency) of the Round-2 candidates to the Korean post-quantum competition KpqC. Signature systems covered in the report are AIMer, HAETAE, MQ-Sign, and NCC-Sign; KEMs covered are NTRU+, Paloma, REDOG, and SMAUG-T.
Last updated:  2024-12-25
Tightly-Secure Blind Signatures in Pairing-Free Groups
Nicholas Brandt, Dennis Hofheinz, Michael Klooß, and Michael Reichle
We construct the first blind signature scheme that achieves all of the following properties simultaneously: - it is tightly secure under a standard (i.e., non-interactive, non-$q$-type) computational assumption, - it does not require pairings, - it does not rely on generic, non-black-box techniques (like generic NIZK proofs). The third property enables a reasonably efficient solution, and in fact signatures in our scheme comprise 10 group elements and 29 $\mathbb{Z}_p$-elements. Our scheme starts from a pairing-based non-blind signature scheme (Abe et al., JoC 2023) and uses recent techniques of Chairattana-Apirom, Tessaro, and Zhu (CRYPTO 2024) to replace the pairings used in this scheme with non-interactive zero-knowledge proofs in the random oracle model. This conversion is not generic or straightforward (also because the mentioned previous works converted only significantly simpler signature schemes), and we are required to improve upon and innovate existing techniques in several places. As an interesting side note, and unlike previous works, our techniques only require a non-programmable random oracle, and our signature scheme achieves predicate blindness (which means that the user can prove statements about the signed message during the signing process).
Last updated:  2024-12-24
Perfectly Secure Fluid MPC with Abort and Linear Communication Complexity
Alexander Bienstock, Daniel Escudero, and Antigoni Polychroniadou
The \emph{Fluid} multiparty computation (MPC) model, introduced in (Choudhuri \emph{et al.}, CRYPTO 2021), addresses dynamic scenarios where participants can join or leave computations between rounds. Communication complexity initially stood at $O(n^2)$ elements per gate, where $n$ is the number of parties in a committee online at a time. This held for both statistical security (honest majority) and computational security (dishonest majority) in (Choudhuri \emph{et al.}, CRYPTO'21) and (Rachuri and Scholl, CRYPTO'22), respectively. The work of (Bienstock \emph{et al.}, CRYPTO'23) improved communication to $O(n)$ elements per gate. However, it is important to note that the perfectly secure setting with one-third corruptions per committee has only recently been addressed in the work of (David \emph{et al.}, CRYPTO'23). Notably, their contribution marked a significant advancement in the Fluid MPC literature by introducing guaranteed output delivery. However, this achievement comes at the cost of prohibitively expensive communication, scaling as a high-degree polynomial in $n$ per gate. In this work, we study the realm of perfectly secure Fluid MPC under one-third active corruptions. Our primary focus lies in proposing efficient protocols that embrace the concept of security with abort. Towards this, we design a protocol for perfectly secure Fluid MPC that requires only \emph{linear} communication of $O(n)$ elements per gate, matching the communication of the non-Fluid setting. Our results show that, as in the case of computational and statistical security, perfect security with abort for Fluid MPC comes "for free" (asymptotically, linear in $n$) with respect to traditional non-Fluid MPC, marking a substantial leap forward in large-scale dynamic computations such as Fluid MPC.
Last updated:  2024-12-24
Quantum Sieving for Code-Based Cryptanalysis and Its Limitations for ISD
Lynn Engelberts, Simona Etinski, and Johanna Loyer
Sieving using near-neighbor search techniques is a well-known method in lattice-based cryptanalysis, yielding the current best runtime for the shortest vector problem in both the classical [BDGL16] and quantum [BCSS23] setting. Recently, sieving has also become an important tool in code-based cryptanalysis. Specifically, using a sieving subroutine, [GJN23, DEEK24] presented a variant of the information-set decoding (ISD) framework, which is commonly used for attacking cryptographically relevant instances of the decoding problem. The resulting sieving-based ISD framework yields complexities close to the best-performing classical algorithms for the decoding problem such as [BJMM12, BM18]. It is therefore natural to ask how well quantum versions perform. In this work, we introduce the first quantum algorithms for code sieving by designing quantum variants of the aforementioned sieving subroutine. In particular, using quantum-walk techniques, we provide a speed-up over the best known classical algorithm from [DEEK24] and over a variant using Grover's algorithm [Gro96]. Our quantum-walk algorithm exploits the structure of the underlying search problem by adding a layer of locality-sensitive filtering, inspired by the quantum-walk algorithm for lattice sieving from [CL21]. We complement our asymptotic analysis of the quantum algorithms with numerical results, and observe that our quantum speed-ups for code sieving behave similarly to those observed in lattice sieving. In addition, we show that a natural quantum analog of the sieving-based ISD framework does not provide any speed-up over the first presented quantum ISD algorithm [Ber10]. Our analysis highlights that the framework should be adapted in order to outperform the state-of-the-art of quantum ISD algorithms [KT17, Kir18].
Last updated:  2024-12-24
Sneaking up the Ranks: Partial Key Exposure Attacks on Rank-Based Schemes
Giuseppe D'Alconzo, Andre Esser, Andrea Gangemi, and Carlo Sanna
A partial key exposure attack is a key recovery attack where an adversary obtains a priori partial knowledge of the secret key, e.g., through side-channel leakage. While for a long time post-quantum cryptosystems, unlike RSA, have been believed to be resistant to such attacks, recent results by Esser, May, Verbel, and Wen (CRYPTO '22), and by Kirshanova and May (SCN '22), have refuted this belief. In this work, we focus on partial key exposure attacks in the context of rank-metric-based schemes, particularly targeting the RYDE, MIRA, and MiRitH digital signature schemes, which are active candidates in the NIST post-quantum cryptography standardization process. We demonstrate that, similar to the RSA case, the secret key in RYDE can be recovered from a constant fraction of its bits. Specifically, for NIST category I parameters, our attacks remain efficient even when less than 25% of the key material is leaked. Interestingly, our attacks lead to a natural improvement of the best generic attack on RYDE without partial knowledge, reducing security levels by up to 9 bits. For MIRA and MiRitH, our attacks remain efficient as long as roughly 57%-60% of the secret key material is leaked. Additionally, we initiate the study of partial exposure of the witness in constructions following the popular MPCitH (MPC-in-the-Head) paradigm. We show a generic reduction from recovering RYDE and MIRA's witness to the MinRank problem, which again leads to efficient key recovery from constant fractions of the secret witness in both cases.
Last updated:  2024-12-24
A Lattice Attack Against a Family of RSA-like Cryptosystems
George Teseleanu
Let $N = pq$ be the product of two balanced prime numbers $p$ and $q$. In 2002, Elkamchouchi, Elshenawy, and Shaban introduced an interesting RSA-like cryptosystem that, unlike the classical RSA key equation $ed - k(p-1)(q-1) = 1$, uses the key equation $ed - k(p^2-1)(q^2-1) = 1$. The scheme was further extended by Cotan and Te\c{s}eleanu to a variant that uses the key equation $ed - k(p^n-1)(q^n-1) = 1$, where $n$ is a positive integer. Furthermore, they provide a continued fractions attack that recovers the secret key when it is below a certain bound. In this paper we improve this bound using a lattice-based method. Moreover, our method also leads to the factorisation of the modulus $N$, while the continued fractions one does not (except in special cases).
Last updated:  2024-12-24
Partial Exposure Attacks Against a Family of RSA-like Cryptosystems
George Teseleanu
An RSA generalization using complex integers was introduced by Elkamchouchi, Elshenawy, and Shaban in 2002. This scheme was further extended by Cotan and Teșeleanu to Galois fields of order $p^n$. In this generalized framework, the key equation is $ed - k(p^n-1)(q^n-1) = 1$, where $p$ and $q$ are prime numbers. Note that the classical RSA and the Elkamchouchi \emph{et al.} key equations are special cases, namely $n = 1$ and $n = 2$. In addition to introducing this generic family, Cotan and Teșeleanu describe a continued fractions attack capable of recovering the secret key when it is sufficiently small. This bound was later improved by Teșeleanu using a lattice-based method. In this paper, we explore other lattice attacks that could lead to factoring the modulus $N = pq$. Namely, we propose a series of partial exposure attacks that can aid an adversary in breaking this family of cryptosystems if certain conditions hold.
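In displayed form, the family of key equations discussed in this and the preceding abstract (reconstructed from the standard statements of these schemes) reads:

\[
ed - k\,(p^n - 1)(q^n - 1) = 1, \qquad n \ge 1,
\]

with the classical RSA equation $ed - k(p-1)(q-1) = 1$ recovered at $n = 1$ and the Elkamchouchi et al. equation $ed - k(p^2-1)(q^2-1) = 1$ at $n = 2$.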
Last updated:  2024-12-24
One Solves All: Exploring ChatGPT's Capabilities for Fully Automated Simple Power Analysis on Cryptosystems
Wenquan Zhou, An Wang, Yaoling Ding, Congming Wei, Jingqi Zhang, and Liehuang Zhu
Side-channel analysis is a powerful technique to extract secret data from cryptographic devices. However, this task heavily relies on experts and specialized tools, particularly in the case of simple power analysis (SPA). Meanwhile, ChatGPT, a leading example of large language models, has attracted great attention and been widely applied to assist users with complex tasks. Despite this, ChatGPT's capabilities for fully automated SPA, where prompts and traces are input only once, have yet to be systematically explored and improved. In this paper, we introduce a novel prompt template with three expert strategies and conduct a large-scale evaluation of ChatGPT's capabilities for SPA. We establish a dataset comprising seven sets of real power traces from various implementations of public-key cryptosystems, including RSA, ECC, and Kyber, as well as eighteen sets of simulated power traces that illustrate typical SPA leakage patterns. The results indicate that ChatGPT cannot be directly used for SPA. However, by applying the expert strategies, we successfully recovered the private keys for all twenty-five traces, demonstrating that non-experts can use ChatGPT with our expert strategies to perform fully automated SPA.
Last updated:  2024-12-23
On the complexity of the problem of solving systems of tropical polynomial equations of degree two
Ivan Buchinskiy, Matvei Kotov, and Alexander Treier
In this paper, we investigate the computational complexity of the problem of solving a one-sided system of equations of degree two of a special form over the max-plus algebra. We also consider the asymptotic density of solvable systems of this form. Such systems have appeared in the analysis of some recently suggested tropical cryptography protocols. We show how this problem is related to the integer linear programming problem and prove that it is NP-complete. We show that the asymptotic density of solvable systems of this form, with some restrictions on the coefficients, the number of variables, and the number of equations, is 0. As a corollary, we prove that this problem (with the same restrictions) is decidable generically in polynomial time.
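For readers unfamiliar with the setting, the max-plus (tropical) semiring replaces the usual arithmetic by

\[
x \oplus y = \max(x, y), \qquad x \otimes y = x + y,
\]

so a degree-two tropical monomial such as $a \otimes x_i \otimes x_j$ evaluates to $a + x_i + x_j$, and a tropical polynomial evaluates to the maximum of its monomials; solving systems of such equations is the problem the paper relates to integer linear programming.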
Last updated:  2024-12-23
Weightwise Almost Perfectly Balanced Functions, Construction From A Permutation Group Action View
Deepak Kumar Dalai, Krishna Mallick, and Pierrick Méaux
The construction of Boolean functions with good cryptographic properties over subsets of vectors with fixed Hamming weight is significant for lightweight stream ciphers like FLIP. In this article, we propose a general method to construct a class of Weightwise Almost Perfectly Balanced (WAPB) Boolean functions using the action of a cyclic permutation group on $\mathbb{F}_2^n$. This class generalizes the Weightwise Perfectly Balanced (WPB) construction of $2^m$-variable Boolean functions by Liu and Mesnager to any number of variables $n$. We show how to bound the nonlinearity and weightwise nonlinearities of functions from this construction. Additionally, we explore two significant permutation groups: one generated by a binary-cycle permutation and one generated by a rotation. We theoretically analyze the cryptographic properties of the WAPB functions derived from these permutations and experimentally evaluate their nonlinearity parameters for $n$ between 4 and 10.
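As a reminder of the underlying notion (standard in this line of work; the notation is ours), write $E_{k,n} = \{x \in \mathbb{F}_2^n : \mathrm{wt}(x) = k\}$ for the slice of vectors of Hamming weight $k$. A Boolean function $f$ is weightwise perfectly balanced (WPB) when it is balanced on every nontrivial slice,

\[
|\{x \in E_{k,n} : f(x) = 1\}| = \frac{1}{2}\binom{n}{k} \quad \text{for all } 1 \le k \le n - 1,
\]

which forces $n$ to be a power of two so that every $\binom{n}{k}$ is even; WAPB functions relax this by allowing the count to deviate from $\binom{n}{k}/2$ as little as possible, making the notion meaningful for arbitrary $n$.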
Last updated:  2024-12-23
Exact Template Attacks with Spectral Computation
Meriem Mahar, Mammar Ouladj, Sylvain Guilley, Hacène Belbachir, and Farid Mokrane
The so-called Gaussian template attack (TA) is one of the optimal Side-Channel Analyses (SCA) when the measurements are captured with normal noise. In the SCA literature, several optimizations of its implementation have been introduced, such as coalescence and spectral computation. Coalescence consists of averaging traces corresponding to the same plaintext value, thereby coalescing (synonym: compacting) the dataset. Spectral computation consists of sharing the computational workload when estimating likelihood across key hypotheses. State-of-the-art coalescence leverages the Law of Large Numbers (LLN) to compute the mean of equivalent traces. This approach comes with a drawback because the LLN is just an asymptotic approximation, so it does not lead to an exact template attack, especially for a small number of traces. In this paper, we introduce a way of calculating the TA exactly and with the same computational complexity (using the spectral approach), without using the LLN, regardless of the number of messages. For the experimental validation of this approach, we use the ANSSI SCA Database (ASCAD), with different numbers of messages and different amounts of samples per trace. Recall that this dataset concerns a software implementation of AES-128, running on an ATMEGA-8515 microprocessor.
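To make the coalescence step concrete, here is a minimal numpy sketch of the preprocessing described above; the array shapes and random placeholder traces are illustrative assumptions, not the ASCAD data.

import numpy as np

rng = np.random.default_rng(0)
n_traces, n_samples = 1000, 700
plaintexts = rng.integers(0, 256, size=n_traces)  # one plaintext byte per trace
traces = rng.normal(size=(n_traces, n_samples))   # placeholder power measurements

# Coalescence: average all traces that share the same plaintext value,
# compacting the dataset from n_traces rows down to at most 256 rows.
coalesced = {}
for p in np.unique(plaintexts):
    coalesced[int(p)] = traces[plaintexts == p].mean(axis=0)

print(len(coalesced), "averaged traces instead of", n_traces)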
Last updated:  2024-12-23
COCO: Coconuts and Oblivious Computations for Orthogonal Authentication
Yamya Reiki
Authentication often bridges real-world individuals and their virtual public identities, like usernames, user IDs, and e-mails, exposing vulnerabilities that threaten user privacy. This research introduces COCO (Coconuts and Oblivious Computations for Orthogonal Authentication), a framework that segregates roles among Verifiers, Authenticators, and Clients to achieve privacy-preserving authentication. COCO eliminates the need for Authenticators to directly access virtual public identifiers or real-world identifiers for authentication. Instead, the framework leverages Oblivious Pseudorandom Functions (OPRFs) and an extended Coconut Credential Scheme to ensure privacy by introducing separate, unlinkable, orthogonal authentication identifiers and a full-consensus mechanism to perform zero-knowledge authentications whose proofs are unlinkable across multiple sessions. The authentication process becomes self-contained, preventing definitive reverse tracing of virtual public identifiers to real-world identifiers.
Last updated:  2024-12-23
Greco: Fast Zero-Knowledge Proofs for Valid FHE RLWE Ciphertexts Formation
Enrico Bottazzi
Fully homomorphic encryption (FHE) allows for evaluating arbitrary functions over encrypted data. In multi-party FHE applications, different parties encrypt their secret data and submit ciphertexts to a server, which, according to the application logic, performs homomorphic operations on them. For example, in a secret voting application, the tally is computed by summing up the ciphertexts encoding the votes. Valid encrypted votes are of the form $\mathsf{Enc}(0)$ and $\mathsf{Enc}(1)$. A malicious voter could send an invalid encrypted vote, such as the encryption of an arbitrary large value, which can mess up the whole election. Because of that, users must prove that the ciphertext they submitted is a valid Ring-Learning with Errors (RLWE) ciphertext and that the plaintext message they encrypted is a valid vote (for example, either a 1 or a 0). Greco uses a zero-knowledge proof to let a user prove that their RLWE ciphertext is well-formed, or, in other words, that the encryption operation was performed correctly. The resulting proof can therefore be composed with additional application-specific logic and subjected to public verification in a non-interactive setting. Considering the secret voting application, one can prove further properties of the message being encrypted, or even properties about the voter, allowing the application to support anonymous voting as well. The prover has been implemented using Halo2-lib as a proving system, and benchmarks have shown that Greco can already be integrated into user-facing applications without creating excessive friction for the user. The implementation is available at https://github.com/privacy-scaling-explorations/greco
Last updated:  2024-12-23
(Deep) Learning about Elliptic Curve Cryptography
Diana Maimut, Alexandru Cristian Matei, and George Teseleanu
Motivated by the interest in elliptic curves both from a theoretical (algebraic geometry) and applied (cryptography) perspective, we conduct a preliminary study of their underlying mathematical structure. This paper mainly focuses on investigating artificial intelligence techniques to enhance the efficiency of Schoof's algorithm for point counting across various elliptic curve distributions, achieving varying levels of success.
Last updated:  2024-12-23
The Number of the Beast: Reducing Additions in Fast Matrix Multiplication Algorithms for Dimensions up to 666
Erik Mårtensson and Paul Stankovski Wagner
While a naive algorithm for multiplying two 2 × 2 matrices requires eight multiplications and four additions, Strassen showed how to compute the same matrix product using seven multiplications and 18 additions. Winograd reduced the number of additions to 15, which was assumed to be optimal. However, by introducing a change of basis, Karstadt and Schwartz showed how to lower the number of additions to 12, which they further showed to be optimal within this generalized Karstadt-Schwartz (KS) framework. Since the cost of the change of basis is smaller than one addition (per recursive level), it is disregarded in this cost metric. In this work we present improved methods for classical optimization of the number of additions in Strassen-type matrix multiplication schemes, for larger matrix sizes and without including any change of basis. We find specific examples where our methods beat the best instances found within the KS framework, the most impressive one being Laderman's algorithm for multiplying 3 × 3 matrices, which we reduce from the naive 98 additions to 62, compared to 74 in the KS framework. We indicate that our approach performs better than previous work within the KS framework as the matrix dimensions increase. We additionally apply our algorithms to another reference set of algorithms due to Moosbauer and Kauers, for which we have no results in the KS framework for comparison. For parameters (n, m, k), when multiplying an (n × m) matrix with an (m × k) matrix, the schoolbook algorithm uses nk(m − 1) additions. Using the reference set of algorithms we find that our algorithm leads to an optimized number of additions of roughly cnk(m − 1), where c is a small constant independent of the dimensions. We further provide concrete and practical implementations of our methods that are very efficient for dimensions including (and surpassing) the 666 limit, i.e. (n, m, k) = (6, 6, 6), in our reference set of fast multiplication algorithms.
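For reference, a standard textbook rendering of Strassen's 2 × 2 scheme (not the optimized variants from the paper) shows the seven products and the additions they entail:

def strassen_2x2(A, B):
    # A, B are 2x2 matrices as nested lists; 7 multiplications instead of 8.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

# 10 additions/subtractions to form the products, 8 more to combine them: 18 in
# total, which is exactly the count that Winograd (15) and Karstadt-Schwartz (12) improved.
assert strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]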
Last updated:  2024-12-23
Security Analysis of SFrame
Takanori Isobe, Ryoma Ito, and Kazuhiko Minematsu
Increasing privacy consciousness has popularized the use of end-to-end encryption (E2EE). In this paper, we discuss the security of SFrame, an E2EE mechanism proposed to the Internet Engineering Task Force for video/audio group communications over the Internet. Despite being a quite recent project, SFrame has been deployed in several real-world applications. The original specification of SFrame is evaluated herein to find critical issues that can cause impersonation (forgery) attacks with a practical complexity by a malicious group member. Further investigations have revealed that these issues are present in several publicly available SFrame implementations. Therefore, we provide several countermeasures against all the proposed attacks and considerations from performance and security perspectives toward their implementation.
Last updated:  2024-12-23
Two Halves Make a Whole: How to Reconcile Soundness and Robustness in Watermarking for Large Language Models
Lei Fan, Chenhao Tang, Weicheng Yang, and Hong-Sheng Zhou
Watermarking techniques have been used to safeguard AI-generated content. In this paper, we study publicly detectable watermarking schemes (Fairoze et al.) and report several findings. First, we observe that two important security properties, robustness and soundness, may conflict with each other. We formally investigate these two properties in the presence of an arguably more realistic adversary that we call an editing adversary, and we prove an impossibility result: robustness and soundness cannot both be achieved by a single publicly-detectable watermarking scheme. Second, we present our main result: we introduce, for the first time, the concept of a publicly-detectable dual watermarking scheme for AI-generated content. We provide a novel construction using two publicly-detectable watermarking schemes, each achieving "half" of the required properties: one achieves robustness, the other soundness. Combining the two halves into a whole, we achieve robustness and soundness at the same time. Our construction has been implemented and evaluated.
Last updated:  2024-12-23
Programming Equation Systems of Arithmetization-Oriented Primitives with Constraints
Mengyu Chang, Kexin Qiao, Junjie Cheng, Changhai Ou, and Liehuang Zhu
Arithmetization-Oriented (AO) cryptographic algorithms operate on large finite fields. The most threatening attack on such designs is the Gröbner basis attack, which solves the equation system encoded from the cryptanalysis problem. However, encoding a primitive as a system of equations is not unique, and finding the optimal encoding with low solving complexity is a challenge. This paper introduces an automatic tool that converts the CICO problem into a Mixed-Integer Quadratic Constraint Programming (MIQCP) model, using integer variables and constraints to track degree propagation and determine variable introduction points. The optimal MIQCP solution provides the encoding with the lowest solving complexity. We build models for the Griffin, Anemoi, and Ciminion permutations to cover their modules comprehensively. Experiments show reduced Gröbner basis attack complexity, lower than the designers' bounds for small numbers of rounds, e.g., up to 8 rounds for Griffin. This tool can be used for security evaluation against Gröbner basis attacks in new designs.