All papers (21939 results)

Last updated:  2024-05-02
Mempool Privacy via Batched Threshold Encryption: Attacks and Defenses
Arka Rai Choudhuri, Sanjam Garg, Julien Piet, and Guru-Vamsi Policharla
With the rising popularity of DeFi applications, it is important to implement protections for regular users of these DeFi platforms against large parties with massive amounts of resources allowing them to engage in market manipulation strategies such as frontrunning/backrunning. Moreover, there are many situations (such as recovery of funds from vulnerable smart contracts) where a user may not want to reveal their transaction until it has been executed. As such, it is clear that preserving the privacy of transactions in the mempool is an important goal. In this work we focus on achieving mempool transaction privacy through a new primitive that we term batched-threshold encryption, which is a variant of threshold encryption with strict efficiency requirements to better model the needs of resource-constrained environments such as blockchains. Unlike the naive use of threshold encryption, which requires communication proportional to $O(nB)$ to decrypt $B$ transactions with a committee of $n$ parties, our batched-threshold encryption scheme only needs $O(n)$ communication. We additionally discuss pitfalls in prior approaches that use (vanilla) threshold encryption for mempool privacy. To show that our scheme is concretely efficient, we implement our scheme and find that transactions can be encrypted in under 6 ms, independent of committee size, and the communication required to decrypt an entire batch of $B$ transactions is 80 bytes per party, independent of the number of transactions $B$, making it an attractive choice when communication is very expensive. If deployed on Ethereum, which processes close to 500 transactions per block, it takes close to 2.8 s for each committee member to compute a partial decryption and under 3.5 s to decrypt all transactions for a block in single-threaded mode.
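A back-of-the-envelope sketch of the claimed communication savings (toy numbers: the committee size n = 128 and the naive scheme's 80-byte per-transaction share are illustrative assumptions; B = 500 and 80 bytes per party are the figures quoted above):

    # Rough decryption-communication comparison for one block.
    # n and the naive per-transaction share size are assumptions;
    # B = 500 and 80 bytes/party are the figures from the abstract.
    n, B, share_bytes = 128, 500, 80

    naive_total = n * B * share_bytes    # O(nB): one share per tx per party
    batched_total = n * share_bytes      # O(n): one 80-byte message per party

    print(f"naive:   {naive_total / 1e6:.1f} MB per block")    # 5.1 MB
    print(f"batched: {batched_total / 1e3:.1f} KB per block")  # 10.2 KB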
Last updated:  2024-05-01
Blockchain Price vs. Quantity Controls
Abdoulaye Ndiaye
This paper studies the optimal transaction fee mechanisms for blockchains, focusing on the distinction between price-based ($\mathcal{P}$) and quantity-based ($\mathcal{Q}$) controls. By analyzing factors such as demand uncertainty, validator costs, cryptocurrency price fluctuations, price elasticity of demand, and levels of decentralization, we establish criteria that determine the selection of transaction fee mechanisms. We present a model framed around a Nash bargaining game, exploring how blockchain designers and validators negotiate fee structures to balance network welfare with profitability. Our findings suggest that the choice between $\mathcal{P}$ and $\mathcal{Q}$ mechanisms depends critically on the blockchain’s specific technical and economic features. The study concludes that no single mechanism suits all contexts and highlights the potential for hybrid approaches that adaptively combine features of both $\mathcal{P}$ and $\mathcal{Q}$ to meet varying demands and market conditions.
Last updated:  2024-05-01
Agile, Post-quantum Secure Cryptography in Avionics
Karolin Varner, Wanja Zaeske, Sven Friedrich, Aaron Kaiser, and Alice Bowman
To introduce a post-quantum-secure encryption scheme specifically for use in flight-computers, we used avionics’ module-isolation methods to wrap a recent encryption standard (HPKE – Hybrid Public Key Encryption) within a software partition. This solution proposes an upgrade to HPKE, using quantum-resistant ciphers (Kyber/ML-KEM and Dilithium/ML-DSA) redundantly alongside well-established ciphers, to achieve post-quantum security. Because cryptographic technology can suddenly become obsolete as attacks become more sophisticated, "crypto-agility" – the ability to swiftly replace ciphers – represents the key challenge to deployment of software like ours. Partitioning is a crucial method for establishing such agility, as it enables the replacement of compromised software without affecting software on other partitions, greatly simplifying the certification process necessary in an avionics environment. Our performance measurements constitute initial evidence that both the memory and performance characteristics of this approach are suitable for deployment in flight-computers currently in use. Prior to optimisation, performance measurements show a modest memory requirement of under 400 KB of RAM, though with a more substantial stack usage of just under 200 KB. Our most advanced redundant post-quantum cipher is five times slower than its non-redundant, pre-quantum counterpart.
Last updated:  2024-04-30
Private Analytics via Streaming, Sketching, and Silently Verifiable Proofs
Mayank Rathee, Yuwen Zhang, Henry Corrigan-Gibbs, and Raluca Ada Popa
We present Whisper, a system for privacy-preserving collection of aggregate statistics. Like prior systems, a Whisper deployment consists of a small set of non-colluding servers; these servers compute aggregate statistics over data from a large number of users without learning the data of any individual user. Whisper’s main contribution is that its server-to-server communication cost and its server-side storage costs scale sublinearly with the total number of users. In particular, prior systems required the servers to exchange a few bits of information to verify the well-formedness of each client submission. In contrast, Whisper uses silently verifiable proofs, a new type of proof system on secret-shared data that allows the servers to verify an arbitrarily large batch of proofs by exchanging a single 128-bit string. This improvement comes with increased client-to-server communication, which, in cloud computing, is typically cheaper (or even free) than the cost of egress for server-to-server communication. To reduce server storage, Whisper approximates certain statistics using small-space sketching data structures. Applying randomized sketches in an environment with adversarial clients requires a careful and novel security analysis. In a deployment with two servers and 100,000 clients of which 1% are malicious, Whisper can improve server-to-server communication for vector sum by three orders of magnitude while each client’s communication increases by only 10%.
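A toy sketch of the communication pattern only (the hash-digest comparison below stands in for Whisper's actual silently verifiable proofs, which this snippet does not implement): each server locally compresses all of its per-proof check values into one 128-bit string and exchanges just that.

    import hashlib, secrets

    # Each server reduces its local per-proof check values to a single
    # 128-bit digest and sends only that digest to the other server.
    # The check values stand in for the real proof system: for honest
    # clients both servers derive the same value per submission.

    def local_digest(check_values):
        h = hashlib.sha256()
        for v in check_values:
            h.update(v)
        return h.digest()[:16]  # one 128-bit string, whatever the batch size

    server_a = [secrets.token_bytes(32) for _ in range(100_000)]
    server_b = list(server_a)                  # honest batch: values agree
    assert local_digest(server_a) == local_digest(server_b)

    server_b[42] = b"\x00" * 32                # one malformed submission
    assert local_digest(server_a) != local_digest(server_b)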
Last updated:  2024-04-30
Homomorphic Evaluation of LWR-based PRFs and Application to Transciphering
Amit Deo, Marc Joye, Benoit Libert, Benjamin R. Curtis, and Mayeul de Bellabre
Certain applications such as FHE transciphering require randomness while operating over encrypted data. This randomness has to be obliviously generated in the encrypted domain and remain encrypted throughout the computation. Moreover, it should be guaranteed that independent-looking random coins can be obliviously generated for different computations. In this work, we consider the homomorphic evaluation of pseudorandom functions (PRFs) with a focus on practical lattice-based candidates. In the homomorphic PRF evaluation setting, given a fully homomorphic encryption of the PRF secret key $\vec{s}$, it should be possible to homomorphically compute encryptions of PRF evaluations $\{ \text{PRF}_{\vec{s}}(x_i) \}_{i=1}^M$ for public inputs $\{ x_i\}_{i=1}^M$. We consider this problem for PRF families based on the hardness of the Learning-With-Rounding (LWR) problem introduced by Banerjee, Peikert and Rosen (Eurocrypt '12). We build on the random-oracle variant of a PRF construction suggested by Banerjee et al. and demonstrate that it can be evaluated using only two sequential programmable bootstraps in the TFHE homomorphic encryption scheme. We also describe several modifications of this PRF---which we prove as secure as the original function---that support homomorphic evaluations using only one programmable bootstrap per slot. Numerical experiments were conducted using practically relevant FHE parameter sets from the TFHE-rs library. Our benchmarks show that a throughput of about $1000$ encrypted pseudorandom bits per second (resp. $900$ encrypted pseudorandom bits per second) can be achieved on an AWS hpc7a.96xlarge machine (resp. on a standard laptop with an Apple M2 chip), on a single thread. The PRF evaluation keys in our experiments have sizes roughly $40\%$ and $60\%$ of a bootstrapping key. Applying our solution to transciphering enables important bandwidth savings, typically trading $64$-bit values for $4$-bit values per transmitted ciphertext.
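For orientation, a minimal sketch of the LWR rounding shape such PRFs share (toy parameters and hash-derived vectors/key; an illustration of the Banerjee-Peikert-Rosen style map in the random-oracle flavor, not the paper's construction or parameter sets):

    import hashlib

    # PRF_s(x) = round_p( <a(x), s> mod q ), with a(x) derived by
    # hashing x. q, p, n and the hash-derived key are toy choices.
    q, p, n = 2**16, 2**4, 64

    def hash_to_vector(x, length, modulus):
        return [int.from_bytes(
                    hashlib.shake_128(x + i.to_bytes(4, "big")).digest(4),
                    "big") % modulus
                for i in range(length)]

    def lwr_prf(s, x):
        a = hash_to_vector(x, n, q)
        inner = sum(ai * si for ai, si in zip(a, s)) % q
        return (inner * p) // q      # round from Z_q down to Z_p

    s = hash_to_vector(b"demo key seed", n, q)  # stand-in secret key
    print([lwr_prf(s, bytes([i])) for i in range(8)])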
Last updated:  2024-04-30
Pando: Extremely Scalable BFT Based on Committee Sampling
Xin Wang, Haochen Wang, Haibin Zhang, and Sisi Duan
Byzantine fault-tolerant (BFT) protocols are known to suffer from scalability issues. Indeed, their performance degrades drastically as the number of replicas $n$ grows. While a long line of work has attempted to achieve the scalability goal, these works can only scale to roughly a hundred replicas. In this paper, we develop BFT protocols from the so-called committee sampling approach that selects a small committee for consensus and conveys the results to all replicas. Such an approach, however, has been focused on the Byzantine agreement (BA) problem (considering replicas only) instead of the BFT problem (in the client-replica model); also, the approach is mainly of theoretical interest, as concretely, it works only for impractically large $n$. We build an extremely efficient, scalable, and adaptively secure BFT protocol called Pando in partially synchronous environments based on the committee sampling approach. In particular, we devise novel BFT building blocks targeting scalability, including communication-efficient and computation-efficient consistent broadcast and atomic broadcast protocols. Pando inherits some inherent issues of committee sampling-based protocols: Pando can only achieve near-optimal resilience (i.e., $f<(1/3-\epsilon)n$, where $f$ is the number of faulty replicas and $\epsilon$ is a small constant), and Pando attains safety and liveness only probabilistically. Interestingly, to make $\epsilon$ come close to 0 (near-optimal resilience), $n$ needs to be sufficiently large but not impractically large, e.g., $n>500$---just what we need for scalable BFT. Our evaluation on Amazon EC2 shows that in contrast to existing protocols, Pando can easily scale to a thousand replicas in the WAN environment, achieving a throughput of 62.57 ktx/sec.
Last updated:  2024-04-30
Xproofs: New Aggregatable and Maintainable Matrix Commitment with Optimal Proof Size
Xinwei Yong, Jiaojiao Wu, and Jianfeng Wang
Vector Commitment (VC) enables one to commit to a vector, and then the element at a specific position can be opened, with proof of consistency to the initial commitment. VC is a powerful primitive with various applications, including stateless cryptocurrencies. Recently, matrix commitment Matproofs (Liu and Zhang CCS 2022), as an extension of VC, has been proposed to reduce the communication and computation complexity of VC-based cryptocurrencies. However, Matproofs requires linear-sized public parameters, and the aggregated proof size may also increase linearly with the number of individual proofs aggregated. Additionally, the proof updating process involves third parties, known as Proof-Serving Nodes (PSNs), which leads to extra storage and communication overhead. In this paper, we first propose a multi-dimensional variant of matrix commitment and construct a new matrix commitment scheme for two-dimensional matrices, called 2D-Xproofs, which achieves optimal aggregated proof size without using PSNs. Furthermore, we present a highly maintainable three-dimensional scheme, 3D-Xproofs, which updates all proofs within time sublinear in the size of the committed matrix without PSNs' assistance. More generally, we could further increase the matrix dimensionality to achieve more efficient proof updates. Finally, we demonstrate the security of our schemes, showing that both schemes are position binding. We also implement both schemes, and the results indicate that our schemes enjoy constant-sized aggregated proofs and sublinear-sized public parameters, and the proof update time in 3D-Xproofs is $2.5\times$ faster than Matproofs.
Last updated:  2024-04-29
Faster Private Decision Tree Evaluation for Batched Input from Homomorphic Encryption
Kelong Cong, Jiayi Kang, Georgio Nicolas, and Jeongeun Park
Privacy-preserving decision tree evaluation (PDTE) allows a client that holds feature vectors to perform inferences against a decision tree model on the server side without revealing feature vectors to the server. Our work focuses on the non-interactive batched setting where the client sends a batch of encrypted feature vectors and then obtains classifications, without any additional interaction. This is useful in privacy-preserving credit scoring, biometric authentication, and many more applications. In this paper, we propose two novel non-interactive batched PDTE protocols, BPDTE_RCC and BPDTE_CW, based on two new ciphertext-plaintext comparison algorithms, the improved range cover comparison (RCC) comparator and the constant-weight (CW) piece-wise comparator, respectively. Compared to the current state-of-the-art Level Up (CCS'23), our comparison algorithms are up to $72\times$ faster for batched inputs of 16 bits. Moreover, we introduce a new tree traversal method called Adapted SumPath, to achieve $\mathcal{O}(1)$ complexity of the server's response, whereas Level Up has $\mathcal{O}(2^d)$ for a depth-$d$ tree where the client needs to look up classification values in a table. Overall, our PDTE protocols attain the optimal server-to-client communication complexity and are up to $17\times$ faster than Level Up at batch size 16384.
Last updated:  2024-04-29
Some amortization techniques for FRI-based SNARKs
Albert Garreta, Hayk Hovhanissyan, Aram Jivanyan, Ignacio Manzur, Isaac Villalobos, and Michał Zając
We present two techniques to improve the computational and/or communication costs of STARK proofs: packing and modular split-and-pack. Packing allows one to generate a single proof of the satisfiability of several constraints. We achieve this by packing the evaluations of all relevant polynomials in the same Merkle leaves, and combining all DEEP FRI functions into a single randomized validity function. Our benchmarks show that packing reduces the verification time and proof size compared to individually proving the satisfiability of each witness, while only increasing the prover time moderately. Modular split-and-pack is a proof acceleration technique where the prover divides a witness into smaller sub-witnesses. It then uses packing to prove the simultaneous satisfiability of each sub-witness. Compared to producing a proof of the original witness, splitting improves the prover time and memory usage, while increasing the verifier time and proof size. Ideas similar to modular split-and-pack seem to be used throughout the industry, but 1) generally execution traces are split by choosing the first $k$ rows, then the next $k$ rows, and so on; and 2) full recursion is used to prove the simultaneous satisfiability of the sub-witnesses, usually combined with a final wrapper proof (typically a Groth16 proof). We present a different way to split the witness that allows for an efficient re-writing of Plonkish-type constraints. Based on our benchmarks, we believe this approach (together with a wrapper proof) can improve upon existing splitting methods, resulting in a faster prover at essentially no cost in proof size and verification time. Both techniques apply to popular FRI-based proof systems such as ethSTARK, Plonky2/3, RISC Zero, and Boojum.
Last updated:  2024-04-29
FE[r]Chain: Enforcing Fairness in Blockchain Data Exchanges Through Verifiable Functional Encryption
Camille Nuoskala, Reyhaneh Rabbaninejad, Tassos Dimitriou, and Antonis Michalas
Functional Encryption (FE) allows users to extract specific function-related information from encrypted data while preserving the privacy of the underlying plaintext. Though significant research has been devoted to developing secure and efficient Multi-Input Functional Encryption schemes supporting diverse functions, there remains a noticeable research gap in the development of verifiable FE schemes. Functionality and performance have received considerable attention; however, the crucial aspect of verifiability in FE has been relatively understudied. Another important aspect that prior research in FE with outsourced decryption has not adequately addressed is the fairness of the data-for-money exchange between a curator and an analyst. This paper focuses on addressing these gaps by proposing a verifiable FE scheme for inner product computation. The scheme not only supports the multi-client setting but also extends its functionality to accommodate multiple users -- an essential feature in modern privacy-respecting services. Additionally, it demonstrates how this FE scheme can be effectively utilized to ensure fairness and atomicity in a payment protocol, further enhancing the trustworthiness of data exchanges.
Last updated:  2024-04-29
Secure Latent Dirichlet Allocation
Thijs Veugen, Vincent Dunning, Michiel Marcus, and Bart Kamphorst
Topic modelling refers to a popular set of techniques used to discover hidden topics that occur in a collection of documents. These topics can, for example, be used to categorize documents or label text for further processing. One popular topic modelling technique is Latent Dirichlet Allocation (LDA). In topic modelling scenarios, the documents are often assumed to be in one, centralized dataset. However, sometimes documents are held by different parties, and contain privacy- or commercially-sensitive information that cannot be shared. We present a novel, decentralized approach to train an LDA model securely without having to share any information about the content of the documents with the other parties. We preserve the privacy of the individual parties using a combination of privacy enhancing technologies. We show that our decentralized, privacy-preserving LDA solution has accuracy similar to an (insecure) centralized approach. With $1024$-bit Paillier keys, a topic model with $5$ topics and $3000$ words can be trained in around $16$ hours. Furthermore, we show that the solution scales linearly in the total number of words and the number of topics.
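Since the protocol relies on Paillier, here is a minimal textbook sketch of the additive homomorphism it exploits (toy 16-bit primes for readability; the paper uses 1024-bit keys):

    import math, secrets

    # E(m) = g^m * r^n mod n^2 with g = n + 1; then
    # E(a) * E(b) mod n^2 decrypts to a + b (mod n).
    p, q = 65003, 65011             # toy primes, not secure
    n, n2 = p * q, (p * q) ** 2
    lam = math.lcm(p - 1, q - 1)

    def L(u):
        return (u - 1) // n

    mu = pow(L(pow(n + 1, lam, n2)), -1, n)

    def enc(m):
        r = secrets.randbelow(n - 1) + 1
        return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

    def dec(c):
        return (L(pow(c, lam, n2)) * mu) % n

    a, b = 12, 30
    assert dec((enc(a) * enc(b)) % n2) == a + b  # additive homomorphism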
Last updated:  2024-04-29
Information-theoretic security with asymmetries
Tim Beyne and Yu Long Chen
In this paper, we study the problem of lower bounding any given cost function depending on the false positive and false negative probabilities of adversaries against indistinguishability security notions in symmetric-key cryptography. We take the cost model as an input, so that this becomes a purely information-theoretical question. We propose power bounds as an easy-to-use alternative for advantage bounds in the context of indistinguishability with asymmetric cost functions. We show that standard proof techniques such as hybrid arguments and the H-coefficient method can be generalized to the power model, and apply these techniques to the PRP-PRF switching lemma, the Even-Mansour (EM) construction, and the sum-of-permutations (SoP) construction. As the final and perhaps most useful contribution, we provide two methods to convert single-user power bounds into multi-user power bounds, and investigate their relation to the point-wise proximity method of Hoang and Tessaro (Crypto 2016). These methods are applied to obtain tight multi-user power bounds for EM and SoP.
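For reference, the classical advantage-style statement of the PRP-PRF switching lemma that power bounds generalize, for an adversary $\mathcal{A}$ making $q$ queries to an $n$-bit primitive:

    \[
      \left|\,\Pr[\mathcal{A}^{\pi} = 1] - \Pr[\mathcal{A}^{F} = 1]\,\right| \;\le\; \frac{q(q-1)}{2^{n+1}},
    \]

where $\pi$ is a uniformly random permutation of $\{0,1\}^n$ and $F$ a uniformly random function; in the power model, the left-hand side is replaced by a general cost function of the adversary's false positive and false negative probabilities.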
Last updated:  2024-05-02
Cryptographic Accumulators: New Definitions, Enhanced Security, and Delegatable Proofs
Anaïs Barthoulot, Olivier Blazy, and Sébastien Canard
Cryptographic accumulators, introduced in 1993 by Benaloh and De Mare, represent a set with a concise value and offer proofs of (non-)membership. Accumulators have evolved, becoming essential in anonymous credentials, e-cash, and blockchain applications. Various properties like dynamic and universal emerged for specific needs, leading to multiple accumulator definitions. In 2015, Derler, Hanser, and Slamanig proposed a unified model, but new properties, including zero-knowledge security, have arisen since. We offer a new definition of accumulators, based on Derler et al.’s, that is suitable for all properties. We also introduce a new security property, unforgeability of private evaluation, to protect accumulators from forgery, and we verify this property in Barthoulot, Blazy, and Canard’s recent accumulator. Finally, we provide discussions on the security properties of accumulators and on the delegatable (non-)membership proof property.
Last updated:  2024-04-29
Cryptanalytic Audit of the XHash Sponge Function and its Components
Vincent Rijmen
In this audit we started from the security analysis provided in the design documentation of XHash8/12. We extended the analysis in several directions and confirmed the security claims that were made by the designers.
Last updated:  2024-04-29
Implementation and Performance Analysis of Homomorphic Signature Schemes
Davide Carnemolla, Dario Catalano, Mario Di Raimondo, and Federico Savasta
Homomorphic signatures allow one to validate computation on signed data. Alice, holding a dataset $\{m_1, \ldots, m_t\}$, uses her secret key $\sf sk$ to sign these data and stores the authenticated dataset on a remote server. The server can later (publicly) compute $m = f(m_1,\ldots,m_t)$ together with a signature $\sigma$ certifying that $m$ is indeed the correct output of the computation $f$. Over the last fifteen years, the problem of realizing homomorphic signatures has been the focus of numerous research works, with constructions now ranging from very efficient ones supporting linear functions to very expressive ones supporting (up to) arbitrary circuits. In this work we tackle the question of assessing the practicality of schemes belonging to this latter class. Specifically, we implement the GVW lattice-based scheme for circuits from STOC 2015 and two recently proposed pairing-based constructions building from functional commitments. Our experiments show that both pairing-based schemes outperform GVW on all fronts.
Last updated:  2024-04-29
Monchi: Multi-scheme Optimization For Collaborative Homomorphic Identification
Alberto Ibarrondo, Ismet Kerenciler, Hervé Chabanne, Vincent Despiegel, and Melek Önen
This paper introduces a novel protocol for privacy-preserving biometric identification, named Monchi, that combines the use of homomorphic encryption for the computation of the identification score with function secret sharing to obliviously compare this score with a given threshold and finally output the binary result. Given the cost of homomorphic encryption (BFV in this solution), we study and evaluate the integration of two packing solutions that enable the regrouping of multiple templates in one ciphertext to improve efficiency meaningfully. We propose an end-to-end protocol, prove it secure, and implement it. Our experimental results attest to Monchi's applicability to the real-life use case of an airplane boarding scenario with 1000 passengers, taking less than one second to authorize/deny access to the plane to each passenger via biometric identification while maintaining the privacy of all passengers.
Last updated:  2024-04-30
Ipotane: Achieving the Best of All Worlds in Asynchronous BFT
Xiaohai Dai, Chaozheng Ding, Hai Jin, Julian Loss, and Ling Ren
State-of-the-art asynchronous Byzantine Fault Tolerance (BFT) protocols integrate a partially-synchronous optimistic path. The holy grail in this paradigm is to match the performance of a partially-synchronous protocol in favorable situations and match the performance of a purely asynchronous protocol in unfavorable situations. Several prior works have made progress toward this goal by matching the efficiency of a partially-synchronous protocol in favorable conditions. However, their performance compared to purely asynchronous protocols is reduced when network conditions are unfavorable. To address these shortcomings, a recent work, Abraxas (CCS'23), presents the first optimistic asynchronous BFT protocol that retains stable throughput in all situations. However, Abraxas still incurs very high worst-case latency in unfavorable situations because it is slow at detecting the failure of its optimistic path. Another recent work, ParBFT (CCS'23) guarantees good latency in all situations, but suffers from reduced throughput in unfavorable situations due to its use of extra Asynchronous Binary Agreement (ABA) instances. To approach our holy grail, we propose Ipotane, which delivers performance comparable to partially-synchronous protocols in favorable situations, and attains performance on par with purely asynchronous protocols in unfavorable situations—in both throughput and latency. Ipotane also runs the two paths simultaneously. It adopts two-chain HotStuff as the optimistic path, thus achieving high performance in favorable situations. As for the pessimistic path, we introduce a new primitive Dual-functional Byzantine Agreement (DBA), which packs the functionalities of biased ABA and Validated Asynchronous Byzantine Agreement (VABA). Ipotane runs DBA instances continuously as the pessimistic path. DBA’s ABA functionality quickly detects the optimistic path’s failure, ensuring Ipotane’s low latency in unfavorable situations. Meanwhile, the VABA functionality continuously produces blocks, maintaining Ipotane’s high throughput. Additionally, the biased property ensures that blocks committed via the optimistic path are respected by DBA instances, guaranteeing consistency across two paths. We conduct extensive experiments to demonstrate that Ipotane achieves high throughput and low latency in all situations.
Last updated:  2024-05-01
Compact and Secure Zero-Knowledge Proofs for Quantum-Resistant Cryptography from Modular Lattice Innovations
Samuel Lavery
This paper presents a comprehensive security analysis of the Adh zero-knowledge proof system, a novel lattice-based, quantum-resistant proof of possession system. The Adh system offers compact key and proof sizes, making it suitable for real-world digital signature and public key agreement protocols. We explore its security by reducing it to the hardness of the Module-ISIS problem and introduce three new variants: Module-ISIS+, Module-ISIS*, and Module-ISIS**. These constructions enhance security through variations on chaining mechanisms. We also provide a reduction to the module modulus subset sum problem under conservative assumptions. Empirical evidence and statistical testing support the zero-knowledge, completeness, and soundness properties of the Adh proof system. Comparative analysis demonstrates the Adh system's advantages in terms of key and proof sizes over existing post-quantum schemes like Kyber and Dilithium. This paper represents an early preprint and is a work in progress. The core security arguments and experimental results are present, and formal proofs and additional analysis are provided. We invite feedback and collaboration from the research community to further strengthen the security foundations of the Adh system and explore its potential applications in quantum-resistant cryptography.
Last updated:  2024-04-28
A New Hash-based Enhanced Privacy ID Signature Scheme
Liqun Chen, Changyu Dong, Nada El Kassem, Christopher J.P. Newton, and Yalan Wang
The elliptic curve-based Enhanced Privacy ID (EPID) signature scheme is broadly used for hardware enclave attestation by many platforms that implement Intel Software Guard Extensions (SGX) and other devices. This scheme has also been included in the Trusted Platform Module (TPM) specifications and ISO/IEC standards. However, it is insecure against quantum attackers. While research into quantum-resistant EPID has resulted in several lattice-based schemes, Boneh et al. have initiated the study of EPID signature schemes built only from symmetric primitives. We observe that for this line of research, there is still room for improvement. In this paper, we propose a new hash-based EPID scheme, which includes a novel and efficient signature revocation scheme. In addition, our scheme can handle a large group size (up to $2^{60}$ group members), which meets the requirements of rapidly developing hardware enclave attestation applications. The security of our scheme is proved under the Universal Composability (UC) model. Finally, we have implemented our EPID scheme, which, to the best of our knowledge, is the first implementation of EPID from symmetric primitives.
Last updated:  2024-04-28
Hash-based Direct Anonymous Attestation
Liqun Chen, Changyu Dong, Nada El Kassem, Christopher J.P. Newton, and Yalan Wang
Direct Anonymous Attestation (DAA) was designed for the Trusted Platform Module (TPM) and versions using RSA and elliptic curve cryptography have been included in the TPM specifications and in ISO/IEC standards. These standardised DAA schemes have their security based on the factoring or discrete logarithm problems and are therefore insecure against quantum attackers. Research into quantum-resistant DAA has resulted in several lattice-based schemes. In this paper, we propose the first post-quantum DAA scheme from symmetric primitives. We make use of a hash-based signature scheme, which is a slight modification of SPHINCS+, as a DAA credential. A DAA signature, proving the possession of such a credential, is a multiparty computation-based non-interactive zero-knowledge proof. The security of our scheme is proved under the Universal Composability (UC) model. While maintaining all the security properties required for a DAA scheme, we try to make the TPM's workload as low as possible. Our DAA scheme can handle a large group size (up to $2^{60}$ group members), which meets the requirements of rapidly developing TPM applications.
Last updated:  2024-04-28
Sphinx-in-the-Head: Group Signatures from Symmetric Primitives
Liqun Chen, Changyu Dong, Christopher J. P. Newton, and Yalan Wang
Group signatures and their variants have been widely used in privacy-sensitive scenarios such as anonymous authentication and attestation. In this paper, we present a new post-quantum group signature scheme from symmetric primitives. Using only symmetric primitives makes the scheme less prone to unknown attacks than basing the design on newly proposed hard problems whose security is less well-understood. However, symmetric primitives do not have rich algebraic properties, and this makes it extremely challenging to design a group signature scheme on top of them. It is even more challenging if we want a group signature scheme suitable for real-world applications, one that can support large groups and require few trust assumptions. Our scheme is based on MPC-in-the-head non-interactive zero-knowledge proofs, and we specifically design a novel hash-based group credential scheme, which is rooted in the SPHINCS+ signature scheme but with various modifications to make it MPC (multi-party computation) friendly. The security of the scheme has been proved under the fully dynamic group signature model. We provide an implementation of the scheme and demonstrate the feasibility of handling a group size as large as $2^{60}$. This is the first group signature scheme from symmetric primitives that supports such a large group size and meets all the security requirements.
Last updated:  2024-04-28
Encrypted KNN Implementation on Distributed Edge Device Network
B Pradeep Kumar Reddy, Ruchika Meel, and Ayantika Chatterjee
Machine learning (ML) as a service has emerged as a rapidly expanding field across various industries like healthcare, finance, marketing, retail and e-commerce, Industry 4.0, etc., where a huge amount of data is generated. To handle this amount of data, huge computational power is required, for which cloud computing used to be the first choice. However, there are several challenges in cloud computing like limitations of bandwidth, network connectivity, higher latency, etc. To address these issues, edge computing is prominent nowadays, where the data from sensor nodes is collected and processed on low-cost edge devices. As simple sensor nodes are not capable of handling complex computations of ML models, data from sensor nodes need to be transferred to some nearest edge devices for further processing. If this sensor data is related to some security-critical application, the privacy of such sensitive data needs to be preserved both during communication from sensor node to edge device and computation in edge nodes. This increased need to perform edge-based ML on privacy-preserved data has led to a surge in interest in homomorphic encryption (HE) due to its ability to perform computations on encrypted form of data. The highest form of HE, Fully Homomorphic Encryption (FHE), is capable of theoretically handling arbitrary encrypted algorithms but comes with huge computational overhead. Hence, the implementation of such a complex encrypted ML model on a single edge node is not very practical in terms of latency requirements. Our paper introduces a low-cost encrypted ML framework on a distributed edge cluster, where multiple low-cost edge devices (Raspberry Pi boards) are clustered to perform encrypted distributed K-Nearest Neighbours (KNN) algorithm computations. Our experimental results show that KNN prediction on the standard Wisconsin breast cancer dataset takes approximately 1.2 hours, implemented on a cluster of six Pi boards, maintaining end-to-end data confidentiality of critical medical data without any requirement of costly cloud-based computation resource support.
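For orientation, the cleartext skeleton of the KNN prediction that runs encrypted in such a framework (toy data; under FHE each arithmetic and comparison step below becomes a homomorphic operation, and the distance loop is what a cluster of edge devices can split up):

    # Cleartext KNN sketch; the encrypted version performs these same
    # steps homomorphically, which is what drives the reported runtime.

    def knn_predict(train, labels, query, k=3):
        dists = [sum((a - b) ** 2 for a, b in zip(row, query))  # squared L2
                 for row in train]
        nearest = sorted(range(len(train)), key=dists.__getitem__)[:k]
        votes = [labels[i] for i in nearest]
        return max(set(votes), key=votes.count)                 # majority vote

    train = [(1, 2), (2, 1), (8, 9), (9, 8)]                    # toy dataset
    labels = [0, 0, 1, 1]
    print(knn_predict(train, labels, query=(2, 2)))             # -> 0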
Last updated:  2024-04-28
Weightwise (almost) perfectly balanced functions based on total orders
Pierrick Méaux
The unique design of the FLIP cipher necessitated a generalization of standard cryptographic criteria for Boolean functions used in stream ciphers, prompting a focus on properties specific to subsets of $\mathbb{F}_2^n$ rather than the entire set. This led to heightened interest in properties related to fixed Hamming weight sets and the corresponding partition of $\mathbb{F}_2^n$ into $n+1$ such sets. Consequently, the concept of Weightwise Almost Perfectly Balanced (WAPB) functions emerged, which are balanced on each of these sets. Various studies have since proposed WAPB constructions and examined their cryptographic parameters for use in stream cipher filters. In this article, we introduce a general approach to constructing WAPB functions using the concept of order, which simplifies implementation and enhances cryptographic strength. We present two new constructions: a recursive method employing multiple orders on binary strings, and another utilizing just two orders. We establish lower bounds for nonlinearity and weightwise nonlinearities within these classes. By instantiating specific orders, we demonstrate that some achieve minimal algebraic immunity, while others provide functions with guaranteed optimal algebraic immunity. Experimental results in 8 and 16 variables indicate that using orders based on field representation significantly outperforms other methods in terms of both global and weightwise algebraic immunity and nonlinearity. Additionally, we extend the recursive construction to create WAPB functions for any value of $n$, with experiments in 10, 12, and 14 variables confirming that these order-based functions exhibit robust cryptographic parameters. In particular, those based on field orders display optimal degrees and algebraic immunity, and strong weightwise nonlinearities and algebraic immunities.
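A small sketch of the property in question: checking balancedness slice by slice over the sets of fixed Hamming weight (the toy function below is an arbitrary stand-in, not one of the article's order-based constructions):

    from itertools import combinations

    # Count ones of f on each slice E_{k,n} = {x : wt(x) = k}.
    # Slices of odd size can never be exactly balanced, hence
    # "almost" perfectly balanced in the definition.

    def slice_balance(f, n):
        report = []
        for k in range(n + 1):
            ones = total = 0
            for support in combinations(range(n), k):
                x = [0] * n
                for i in support:
                    x[i] = 1
                ones += f(x)
                total += 1
            report.append((k, ones, total))
        return report

    def toy(x):
        return (x[0] & x[1]) ^ x[2] ^ x[3]   # arbitrary stand-in function

    for k, ones, total in slice_balance(toy, 8):
        print(f"weight {k}: {ones}/{total} ones")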
Last updated:  2024-04-27
Efficient Quantum Algorithm for SUBSET-SUM Problem
Sanchita Ghosh, Anant Sharma, Sreetama Das, and Shibdas Roy
Problems in the complexity class $NP$ are not all known to be solvable in polynomial time by a classical computer, but given a solution, they are verifiable in polynomial time. The complexity class $BQP$ includes all problems solvable in polynomial time by a quantum computer. Prime factorization is in the $NP$ class, and is also in the $BQP$ class, owing to Shor's algorithm. The hardest of all problems within the $NP$ class are called $NP$-complete. If a quantum algorithm can solve an $NP$-complete problem in polynomial time, it would imply that a quantum computer can solve all problems in $NP$ in polynomial time. Here, we present a polynomial-time quantum algorithm to solve an $NP$-complete variant of the $SUBSET-SUM$ problem, thereby rendering $NP\subseteq BQP$. We illustrate that given a set of integers, which may be positive or negative, a quantum computer can decide in polynomial time whether there exists any subset that sums to zero. There are many real-world applications of our result, such as finding patterns efficiently in stock-market data, or in recordings of the weather or brain activity. As an example, the decision problem of matching two images in image processing is $NP$-complete, and can be solved in polynomial time, when amplitude amplification is not required.
Last updated:  2024-04-27
Toward Independent Key Encryption based on Q-Problem
Abdelkader Laouid, Mostefa Kara, and Mohammad Hammoudeh
This paper defines a post-quantum encryption scheme based on discussion cryptography by introducing a new post-quantum hard problem called Q-Problem. The idea behind this scheme is to hide the keys of each entity, and the encryption process is based on secret message holders using only random private keys.
Last updated:  2024-04-27
Jumping for Bernstein-Yang Inversion
Li-Jie Jian, Ting-Yuan Wang, Bo-Yin Yang, and Ming-Shing Chen
This paper achieves fast polynomial inverse operations specifically tailored for the NTRU Prime KEM on the ARMv8 NEON instruction set, benchmarking on four processor architectures: Cortex-A53, Cortex-A72, Cortex-A76, and Apple M1. We utilize the jumping division steps of the constant-time GCD algorithm from Bernstein and Yang (TCHES’19) and optimize the underlying polynomial multiplication of various lengths to improve the efficiency of computing polynomial inverse operations in NTRU Prime.
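For reference, the divstep iteration from Bernstein and Yang that the jumping technique batches (a plain-Python sketch with data-dependent branches for readability; the constant-time and jumped variants instead apply precomputed transition matrices covering many steps at once):

    # divstep(delta, f, g) from the Bernstein-Yang constant-time GCD,
    # over the integers with f odd; iterating drives g to 0 and
    # leaves +-gcd(f, g) in f.

    def divstep(delta, f, g):
        if delta > 0 and g % 2 == 1:
            return 1 - delta, g, (g - f) // 2
        if g % 2 == 1:
            return 1 + delta, f, (g + f) // 2
        return 1 + delta, f, g // 2

    def gcd_via_divsteps(f, g, iterations=1000):
        assert f % 2 == 1              # divstep requires f odd
        delta = 1
        for _ in range(iterations):    # a fixed bound suffices for small inputs
            delta, f, g = divstep(delta, f, g)
        return abs(f)

    print(gcd_via_divsteps(735, 42))   # -> 21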
Last updated:  2024-04-26
Key-Homomorphic and Aggregate Verifiable Random Functions
Giulio Malavolta
A verifiable random function (VRF) allows one to compute a random-looking image, while at the same time providing a unique proof that the function was evaluated correctly. VRFs are a cornerstone of modern cryptography and, among other applications, are at the heart of recently proposed proof-of-stake consensus protocols. In this work we initiate the formal study of aggregate VRFs, i.e., VRFs that allow for the aggregation of proofs/images into a small digest, whose size is independent of the number of input proofs/images, yet still enables sound verification. We formalize this notion along with its security properties and we propose two constructions: The first scheme is conceptually simple, concretely efficient, and uses (asymmetric) bilinear groups of prime order. Pseudorandomness holds in the random oracle model and aggregate pseudorandomness is proven in the algebraic group model. The second scheme is in the standard model and is proven secure under the learning with errors (LWE) assumption. As a cryptographic building block of independent interest, we introduce the notion of key homomorphic VRFs, where the verification keys and the proofs are endowed with a group structure. We conclude by discussing several applications of key-homomorphic and aggregate VRFs, such as distributed VRFs and aggregate proof-of-stake protocols.
Last updated:  2024-04-26
GraphOS: Towards Oblivious Graph Processing
Javad Ghareh Chamani, Ioannis Demertzis, Dimitrios Papadopoulos, Charalampos Papamanthou, and Rasool Jalili
We propose GraphOS, a system that allows a client that owns a graph database to outsource it to an untrusted server for storage and querying. It relies on doubly-oblivious primitives and trusted hardware to achieve a very strong privacy and efficiency notion which we call oblivious graph processing: the server learns nothing besides the number of graph vertices and edges, and for each query its type and response size. At a technical level, GraphOS stores the graph on a doubly-oblivious data structure, so that all vertex/edge accesses are indistinguishable. For this purpose, we propose Omix++, a novel doubly-oblivious map that outperforms the previous state of the art by up to 34×, and may be of independent interest. Moreover, to avoid any leakage from CPU instruction fetching during query evaluation, we propose algorithms for four fundamental graph queries (BFS/DFS traversal, minimum spanning tree, and single-source shortest paths) that have a fixed execution trace, i.e., the sequence of executed operations is independent of the input. By combining these techniques, we eliminate all information that a hardware adversary observing the memory access pattern within the protected enclave can infer. We benchmarked GraphOS against the best existing solution, based on an oblivious relational DBMS (translating graph queries to relational operators). GraphOS is not only significantly more performant (by up to two orders of magnitude for our tested graphs) but it also eliminates leakage related to the graph topology that is practically inherent when a relational DBMS is used unless all operations are “padded” to the worst case.
Last updated:  2024-04-26
Rondo: Scalable and Reconfiguration-Friendly Randomness Beacon
Xuanji Meng, Xiao Sui, Zhaoxin Yang, Kang Rong, Wenbo Xu, Shenglong Chen, Ying Yan, and Sisi Duan
We present Rondo, a scalable and reconfiguration-friendly distributed randomness beacon (DRB) protocol in the partially synchronous model. Rondo is the first DRB protocol that is built from batched asynchronous verifiable secret sharing (bAVSS) and meanwhile avoids the high $O(n^3)$ message cost, where $n$ is the number of nodes. Our key contribution lies in the introduction of a new variant of bAVSS called batched asynchronous verifiable secret sharing with partial output (bAVSS-PO). bAVSS-PO is a weaker primitive than bAVSS but allows us to build a secure and more efficient DRB protocol. We propose a bAVSS-PO protocol Breeze. Breeze achieves the optimal $O(n)$ messages for the sharing stage and allows Rondo to offer better scalability than prior DRB protocols. Additionally, to support reconfiguration, we introduce Rondo-BFT, a dynamic and partially synchronous Byzantine fault-tolerant protocol inspired by Dyno (S&P 2022). Unlike Dyno, Rondo-BFT provides a communication pattern that generates randomness beacon output periodically, making it well-suited for DRB applications. We implement our protocols and evaluate the performance on Amazon EC2 using up to 91 instances. Our evaluation results show that Rondo achieves higher throughput than existing works and meanwhile offers better scalability, where the performance does not degrade as significantly as $n$ grows.
Last updated:  2024-04-26
On Proving Pairings
Andrija Novakovic and Liam Eagen
In this paper we explore efficient ways to prove correctness of elliptic curve pairing relations. Pairing-based cryptographic protocols such as the Groth16 and Plonk SNARKs and the BLS signature scheme are used extensively in public blockchains such as Ethereum due in large part to their small size. However the relatively high cost of pairing computation remains a practical problem for many use cases such as verification ``in circuit'' inside a SNARK. This naturally arises in recursive SNARK composition and SNARKs of BLS-based consensus protocols. To improve pairing verification, we first show that the final exponentiation step of pairing verification can be replaced with a more efficient ``residue check,'' which can be incorporated into the Miller loop. Then, we show how to reduce the cost of the Miller loop by pre-computing all the necessary lines, and how this is especially efficient when the second pairing argument is fixed in advance. This is the case for BLS signatures with a fixed public key, as well as for KZG-based SNARKs like Plonk and two of the three Groth16 pairings. Finally, we show how to improve on the protocol of [gar] by combining quotients, which allows us to more efficiently prove higher degree relations. These techniques also carry over naturally to pairing verification, for example on-chain verification or as part of the BitVM(2) protocol for Bitcoin smart contracts. We instantiate algorithms and show results for the BN254 curve.
Last updated:  2024-04-26
Computational Attestations of Polynomial Integrity Towards Verifiable Machine Learning
Dustin Ray and Caroline El Jazmi
Machine-learning systems continue to advance at a rapid pace, demonstrating remarkable utility in various fields and disciplines. As these systems continue to grow in size and complexity, a nascent industry is emerging which aims to bring machine-learning-as-a-service (MLaaS) to market. Outsourcing the operation and training of these systems to powerful hardware carries numerous advantages, but challenges arise when needing to ensure privacy and the correctness of work carried out by a potentially untrusted party. Recent advancements in the discipline of applied zero-knowledge cryptography, and probabilistic proof systems in general, have led to a means of generating proofs of integrity for any computation, which in turn can be efficiently verified by any party, in any place, at any time. In this work we present the application of a non-interactive, plausibly-post-quantum-secure, probabilistically-checkable argument system utilized as an efficiently verifiable guarantee that a privacy mechanism was irrefutably applied to a machine-learning model during the training process. That is, we prove the correct training of a differentially-private (DP) linear regression over a dataset of 60,000 samples on a single machine in 55 minutes, verifying the entire computation in 47 seconds. To our knowledge, this result represents the fastest known instance in the literature of provable-DP over a dataset of this size. Finally, we show how this task can be run in parallel, leading to further dramatic reductions in prover and verifier runtime complexity. We believe this result constitutes a key stepping-stone towards end-to-end private MLaaS.
Last updated:  2024-04-26
A note on ``a lightweight mutual and transitive authentication mechanism for IoT network''
Zhengjun Cao and Lihua Liu
We show the authentication mechanism [Ad Hoc Networks, 2023, 103003] fails to keep user anonymity, not as claimed.
Last updated:  2024-04-25
Towards Permissionless Consensus in the Standard Model via Fine-Grained Complexity
Marshall Ball, Juan Garay, Peter Hall, Aggelos Kiayias, and Giorgos Panagiotakos
We investigate the feasibility of permissionless consensus (aka Byzantine agreement) under standard assumptions. A number of protocols have been proposed to achieve permissionless consensus, most notably based on the Bitcoin protocol; however, to date no protocol is known that can be provably instantiated outside of the random oracle model. In this work, we take the first steps towards achieving permissionless consensus in the standard model. In particular, we demonstrate that worst-case conjectures in fine-grained complexity, in particular the orthogonal vectors conjecture (implied by the Strong Exponential Time Hypothesis), imply permissionless consensus in the random beacon model—a setting where a fresh random value is delivered to all parties at regular intervals. This gives a remarkable win-win result: either permissionless consensus exists relative to a random beacon, or there are non-trivial worst-case algorithmic speed-ups for a host of natural algorithmic problems (including SAT). Our protocol achieves resilience against adversaries that control an inverse-polynomial fraction of the honest computational power, i.e., adversarial power $A = T^{1-\epsilon}$ for some constant $\epsilon > 0$, where $T$ denotes the honest computational power. This relatively low threshold is a byproduct of the slack in the fine-grained complexity conjectures. One technical highlight is the construction of a Seeded Proof of Work: a Proof of Work where many (correlated) challenges can be derived from a single short public seed, and yet still no non-trivial amortization is possible.
Last updated:  2024-04-25
Regev Factoring Beyond Fibonacci: Optimizing Prefactors
Seyoon Ragavan
In this note, we improve the space-efficient variant of Regev's quantum factoring algorithm [Reg23] proposed by Ragavan and Vaikuntanathan [RV24] by constant factors in space and/or size. This allows us to bridge the significant gaps in concrete efficiency between the circuits by [Reg23] and [RV24]; [Reg23] uses far fewer gates, while [RV24] uses far fewer qubits. The main observation is that the space-efficient quantum modular exponentiation technique by [RV24] can be modified to work with more general sequences of integers than the Fibonacci numbers. We parametrize this in terms of a linear recurrence relation, and through this formulation construct three different circuits for quantum factoring: - A circuit that uses $\approx 12.4n$ qubits and $\approx 54.9n^{1/2}$ multiplications of $n$-bit integers. - A circuit that uses $(9+\epsilon)n$ qubits and $O_\epsilon(n^{1/2})$ multiplications of $n$-bit integers, for any $\epsilon > 0$. - A circuit that uses $(24+\epsilon)n^{1/2}$ multiplications of $n$-bit integers, and $O_\epsilon(n)$ qubits, for any $\epsilon > 0$. In comparison, the original circuit by [Reg23] uses at least $\approx 3n^{3/2}$ qubits and $\approx 6n^{1/2}$ multiplications of $n$-bit integers, while the space-efficient variant by [RV24] uses $\approx 10.32n$ qubits and $\approx 138.3n^{1/2}$ multiplications of $n$-bit integers (although a very simple modification of their Fibonacci-based circuit uses $\approx 11.32n$ qubits and only $\approx 103.7n^{1/2}$ multiplications of $n$-bit integers). The improvements proposed in this note take effect for sufficiently large values of $n$; it remains to be seen whether they can also provide benefits for practical problem sizes.
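The Fibonacci-based technique being generalized, in miniature (a toy sketch: since $F_{k+1} = F_k + F_{k-1}$, the pair $(a^{F_{k-1}}, a^{F_k})$ advances with one multiplication and no squarings; the note's contribution, choosing better linear recurrences, is not implemented here):

    # Multiplication-only exponentiation to a Fibonacci exponent.

    def power_fib(a, k, N):
        """Return a^{F_k} mod N using k-1 multiplications, no squarings."""
        x, y = 1, a % N            # a^{F_0}, a^{F_1}
        for _ in range(k - 1):
            x, y = y, (x * y) % N  # exponents follow F_{k+1} = F_k + F_{k-1}
        return y

    N = 3233                       # toy modulus (61 * 53)
    assert power_fib(2, 10, N) == pow(2, 55, N)   # F_10 = 55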
Last updated:  2024-04-25
Organizing Records for Retrieval in Multi-Dimensional Range Searchable Encryption
Mahdieh Heidaripour, Ladan Kian, Maryam Rezapour, Mark Holcomb, Benjamin Fuller, Gagan Agrawal, and Hoda Maleki
Storage of sensitive multi-dimensional arrays must be secure and efficient in storage and processing time. Searchable encryption allows one to trade between security and efficiency. Searchable encryption design focuses on building indexes, overlooking the crucial aspect of record retrieval. Gui et al. (PoPETS 2023) showed that understanding the security and efficiency of record retrieval is critical to understanding the overall system. A common technique for improving security is partitioning data tuples into parts. When a tuple is requested, the entire relevant part is retrieved, hiding the tuple of interest. This work assesses tuple partitioning strategies in the dense data setting, considering parts that are random, $1$-dimensional, and multi-dimensional. We consider synthetic datasets of $2$, $3$ and $4$ dimensions, with sizes extending up to $2$M tuples. We compare security and efficiency across a variety of record retrieval methods. Our findings are: 1. For most configurations, multi-dimensional partitioning yields better efficiency and less leakage. 2. $1$-dimensional partitioning outperforms multi-dimensional partitioning when the query is large in all dimensions other than the first (indexed) dimension, which can be of any size. 3. The leakage of $1$-dimensional partitioning is reduced the most when using a bucketed ORAM (Demertzis et al., USENIX Security 2020).
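A toy sketch of the three part shapes being compared; retrieving a tuple means fetching its entire part, so the part shape drives both leakage and efficiency (all sizes below are illustrative assumptions, not the paper's configurations):

    # Map a tuple's coordinates to a part identifier.

    def part_1d(t, strip=64):
        return t[0] // strip                 # strips along the indexed dimension

    def part_md(t, tile=8):
        return tuple(c // tile for c in t)   # hypercube tiles

    def part_random(t, num_parts=512):
        return hash(t) % num_parts           # stand-in for a keyed assignment

    t = (123, 45, 6)
    print(part_1d(t), part_md(t), part_random(t))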
Last updated:  2024-04-25
NTRU-based FHE for Larger Key and Message Space
Robin Jadoul, Axel Mertens, Jeongeun Park, and Hilder V. L. Pereira
The NTRU problem has proven a useful building block for efficient bootstrapping in Fully Homomorphic Encryption (FHE) schemes, and different such schemes have been proposed. FINAL (ASIACRYPT 2022) first constructed FHE using homomorphic multiplexer (CMux) gates for the blind rotation operation. Later, XZD+23 (CRYPTO 2023) gave an asymptotic optimization by changing the ciphertext format to enable ring automorphism evaluations. In this work, we examine an adaptation to FINAL to evaluate CMux gates of higher arity and the resulting tradeoff to running times and bootstrapping key sizes. In this setting, we can compare the time and space efficiency of both bootstrapping protocols with larger key space against each other and the state of the art.
Last updated:  2024-05-01
Vision Mark-32: ZK-Friendly Hash Function Over Binary Tower Fields
Tomer Ashur, Mohammad Mahzoun, Jim Posen, and Danilo Šijačić
Zero-knowledge proof systems are widely used in different applications on the Internet. Among zero-knowledge proof systems, SNARKs are a popular choice because of their fast verification time and small proof size. The efficiency of zero-knowledge systems is crucial for usability, resulting in the development of so-called arithmetization-oriented ciphers. In this work, we introduce Vision Mark-32, a modified instance of Vision defined over binary tower fields, with an optimized number of rounds and an efficient MDS matrix. We implement a fully-pipelined Vision Mark-32 permutation on an Alveo U55C FPGA accelerator card and argue for an order-of-magnitude improvement in hardware efficiency compared to the popular Poseidon hash. Our fully-pipelined Vision Mark-32 implementation runs at 250 MHz and uses 398 kLUT and 104 kFF. Lastly, we delineate how to implement each step efficiently in hardware.
Last updated:  2024-04-25
Further Investigations on Nonlinear Complexity of Periodic Binary Sequences
Qin Yuan, Chunlei Li, Xiangyong Zeng, Tor Helleseth, and Debiao He
Nonlinear complexity is an important measure for assessing the randomness of sequences. In this paper we investigate how circular shifts affect the nonlinear complexities of finite-length binary sequences and then reveal a more explicit relation between nonlinear complexities of finite-length binary sequences and their corresponding periodic sequences. Based on the relation, we propose two algorithms that can generate all periodic binary sequences with any prescribed nonlinear complexity.
Last updated:  2024-04-25
BackMon: IC Backside Tamper Detection using On-Chip Impedance Monitoring
Tahoura Mosavirik and Shahin Tajik
The expansion of flip-chip technologies and a lack of backside protection make the integrated circuit (IC) vulnerable to certain classes of physical attacks mounted from the IC’s backside. Laser-assisted probing, electromagnetic, and body-biasing injection attacks are examples of such attacks. Unfortunately, there are few countermeasures proposed in the literature, and none are available commercially. Those that do exist are not only expensive but are incompatible with current IC manufacturing processes. They also cannot be integrated into legacy systems, such as field programmable gate arrays (FPGAs), which are integral parts of many industrial and defense systems. In this paper, we demonstrate how the impedance monitoring of the printed circuit board (PCB) and IC package’s power distribution network (PDN) using on-chip circuit-based network analyzers can detect IC backside tampering. Our method is based on the fact that any tampering attempt to expose the backside silicon substrate, such as the removal of the fan and heat sinks, leads to changes in the equivalent impedance of the package’s PDN, and hence, scanning the package impedance will reveal whether the package integrity has been violated. To validate our claims, we deploy an on-FPGA network analyzer on an AMD Zynq UltraScale+ MPSoC manufactured with 16 nm technology, which is part of a multi-PCB system. We conduct a series of experiments at different temperatures, leveraging the difference of means as the statistical metric, to demonstrate the effectiveness of our method in detecting tamper events required to expose the IC backside silicon.
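A sketch of the difference-of-means decision rule named above (synthetic traces and threshold; a real deployment would calibrate these per device and temperature):

    # Compare a fresh impedance scan against an enrolled baseline and
    # flag tampering when the mean shift exceeds a threshold. All the
    # numbers below are synthetic placeholders, not measured values.

    def diff_of_means(baseline, scan):
        return abs(sum(scan) / len(scan) - sum(baseline) / len(baseline))

    baseline = [1.00, 1.02, 0.99, 1.01]   # enrolled PDN impedance (ohms)
    fresh = [1.21, 1.24, 1.19, 1.22]      # scan after heat sink removal

    THRESHOLD = 0.1                       # calibrated per device in practice
    print("tamper!" if diff_of_means(baseline, fresh) > THRESHOLD else "ok")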
Last updated:  2024-04-24
Conditional disclosure of secrets with quantum resources
Vahid R. Asadi, Kohdai Kuroiwa, Debbie Leung, Alex May, Sabrina Pasterski, and Chris Waddell
The conditional disclosure of secrets (CDS) primitive is among the simplest cryptographic settings in which to study the relationship between communication, randomness, and security. CDS involves two parties, Alice and Bob, who do not communicate but who wish to reveal a secret $z$ to a referee if and only if a Boolean function $f$ has $f(x,y)=1$. Alice knows $x,z$, Bob knows $y$, and the referee knows $x,y$. Recently, a quantum analogue of this primitive called CDQS was defined and related to f-routing, a task studied in the context of quantum position-verification. CDQS has the same inputs, outputs, and communication pattern as CDS but allows the use of shared entanglement and quantum messages. We initiate the systematic study of CDQS, with the aim of better understanding the relationship between privacy and quantum resources in the information theoretic setting. We begin by looking for quantum analogues of results already established in the classical CDS literature. Doing so we establish a number of basic properties of CDQS, including lower bounds on entanglement and communication stated in terms of measures of communication complexity. Because of the close relationship to the $f$-routing position-verification scheme, our results have relevance to the security of these schemes.
Last updated:  2024-04-24
Unconditional correctness of recent quantum algorithms for factoring and computing discrete logarithms
Cédric Pilatte
In 1994, Shor introduced his famous quantum algorithm to factor integers and compute discrete logarithms in polynomial time. In 2023, Regev proposed a multi-dimensional version of Shor's algorithm that requires far fewer quantum gates. His algorithm relies on a number-theoretic conjecture on the elements in $(\mathbb{Z}/N\mathbb{Z})^{\times}$ that can be written as short products of very small prime numbers. We prove a version of this conjecture using tools from analytic number theory such as zero-density estimates. As a result, we obtain an unconditional proof of correctness of this improved quantum algorithm and of subsequent variants.
Last updated:  2024-04-24
MUSEN: Aggregatable Key-Evolving Verifiable Random Functions and Applications
Bernardo David, Rafael Dowsley, Anders Konring, and Mario Larangeira
A Verifiable Random Function (VRF) can be evaluated on an input by a prover who holds a secret key, generating a pseudorandom output and a proof of output validity that can be verified using the corresponding public key. VRFs are a central building block of committee election mechanisms that sample parties to execute tasks in cryptographic protocols, e.g. generating blocks in a Proof-of-Stake (PoS) blockchain or executing a round of MPC protocols. We propose the notion, and a matching construction, of an Aggregatable Key-Evolving VRF (A-KE-VRF) with the following extra properties: 1. Aggregation: combining proofs for several VRF evaluations of different inputs under different secret keys into a single constant size proof; 2. Key-Evolving: preventing adversaries who corrupt a party (learning their secret key) from ``forging'' proofs of past VRF evaluations. As an immediate application, we improve on the block size of PoS blockchains and on the efficiency of Proofs of Proof-of-Stake (PoPoS). Furthermore, the A-KE-VRF notion allows us to construct Encryption to the Future (EtF) and Authentication from the Past (AfP) schemes with a Key-Evolving property, which provides forward security. An EtF scheme allows for sending a message to a party who is randomly selected to execute a role in the future, while an AfP scheme allows for this party to authenticate their messages as coming from a past execution of this role. These primitives are essential for realizing the YOSO MPC Framework (CRYPTO'21).
Last updated:  2024-04-24
Distributed & Scalable Oblivious Sorting and Shuffling
Nicholas Ngai, Ioannis Demertzis, Javad Ghareh Chamani, and Dimitrios Papadopoulos
Existing oblivious systems offer robust security by concealing memory access patterns, but they encounter significant scalability and performance challenges. Recent efforts to enhance the practicality of these systems involve embedding oblivious computation, e.g., oblivious sorting and shuffling, within Trusted Execution Environments (TEEs). For instance, oblivious sort has been heavily utilized: in Oblix (S&P'18), when oblivious indexes are created and accessed; in Snoopy's high-throughput oblivious key-value store (SOSP'21), during initialization and when the input requests are deduplicated and prepared for delivery; in Opaque (NSDI'17), for all the proposed oblivious SQL operators; and in the state-of-the-art non-foreign-key oblivious join approach (PVLDB'20). Additionally, oblivious sort/shuffle find applications in Signal's commercial solution for contact discovery, Google's Key Transparency, searchable encryption, software monitoring, and differentially private federated learning with user privacy. In this work, we address the scalability bottleneck of oblivious sort and shuffle by re-designing these approaches to achieve high efficiency in distributed multi-enclave environments. First, we propose a multi-threaded bitonic sort optimized for the distributed setting, making it the most performant oblivious sort for a small number of enclaves (up to 4). For larger numbers of enclaves, we propose a novel oblivious bucket sort, which improves data locality and network consumption and outperforms our optimized distributed bitonic sort by up to 5-6x. To the best of our knowledge, these are the first distributed oblivious TEE-based sorting solutions. For reference, we are able to sort 2 GiB of data in 1 second and 128 GiB in 53.4 seconds in a multi-enclave test. A fundamental building block of our oblivious bucket sort is an oblivious shuffle that improves the prior state-of-the-art result (CCS'22) by up to 9.5x in the distributed multi-enclave setting---interestingly, it is better by 10% even in the single-enclave/multi-thread setting.
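For readers unfamiliar with the core building block: a bitonic sorting network is data-oblivious because its compare-exchange schedule depends only on the input length, never on the values. A single-threaded textbook sketch (none of the paper's distributed or multi-enclave optimizations are reflected here):

```python
def bitonic_sort(a):
    """In-place ascending bitonic sort; len(a) must be a power of two.
    The index pairs visited depend only on len(a), which is the
    data-obliviousness property the paper builds on."""
    n = len(a)
    assert n & (n - 1) == 0, "power-of-two length required"
    k = 2
    while k <= n:
        j = k // 2
        while j >= 1:
            for i in range(n):
                l = i ^ j
                if l > i:
                    ascending = (i & k) == 0
                    # In a real oblivious implementation this swap is done
                    # branch-free (constant time); a plain `if` suffices here.
                    if (a[i] > a[l]) == ascending:
                        a[i], a[l] = a[l], a[i]
            j //= 2
        k *= 2
    return a

assert bitonic_sort([3, 1, 2, 0]) == [0, 1, 2, 3]
```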
Last updated:  2024-05-02
Exponential Quantum Speedup for the Traveling Salesman Problem
Anant Sharma, Nupur Deshpande, Sanchita Ghosh, Sreetama Das, and Shibdas Roy
The traveling salesman problem is the problem of finding the shortest route in a network of cities that a salesman needs to travel to cover all the cities, without visiting the same city more than once. This problem is known to be $NP$-hard, with a brute-force complexity of $O(N^N)$ or $O(N^{2N})$ for $N$ cities. The problem is equivalent to finding the shortest Hamiltonian cycle in a given graph, if at least one Hamiltonian cycle exists in it. Quantum algorithms for this problem typically provide only a quadratic speedup, using Grover's search, and thereby have a complexity of $O(N^{N/2})$ or $O(N^N)$. We present a bounded-error quantum polynomial-time (BQP) algorithm for solving the problem, providing an exponential speedup. The overall complexity of our algorithm is $O(N^3\log(N)\kappa/\epsilon + 1/\epsilon^3)$, where the errors $\epsilon$ are $O(1/{\rm poly}(N))$, and $\kappa$ is the not-too-large condition number of the matrix encoding all Hamiltonian cycles.
Last updated:  2024-04-23
Interactive Threshold Mercurial Signatures and Applications
Masaya Nanri, Octavio Perez Kempner, Mehdi Tibouchi, and Masayuki Abe
Equivalence class signatures allow a controlled form of malleability based on equivalence classes defined over the message space. As a result, signatures can be publicly randomized and adapted to a new message representative in the same equivalence class. Notably, security requires that an adapted signature-message pair looks indistinguishable from a random signature-message pair in the space of valid signatures for the new message representative. Together with the decisional Diffie-Hellman assumption, this yields an unlinkability notion (class-hiding), making them a very attractive building block for privacy-preserving primitives. Mercurial signatures are an extension of equivalence class signatures that also allow malleability of the key space. Unfortunately, the most efficient construction to date suffers from a severe limitation that restricts their applications: only a weak form of public key class-hiding is supported. In other words, given knowledge of the original signing key and a randomization of the corresponding public key, it is possible to identify whether they are related. In this work, we put forth the notion of interactive threshold mercurial signatures and show how they help to overcome the above-mentioned limitation. Moreover, we present constructions in the two-party and multi-party settings, assuming at least one honest signer. We also discuss related applications, including blind signatures, multi-signatures, and threshold ring signatures. To showcase the practicality of our approach, we implement the proposed constructions, comparing them against related alternatives.
Last updated:  2024-04-22
POKE: A Framework for Efficient PKEs, Split KEMs, and OPRFs from Higher-dimensional Isogenies
Andrea Basso
We introduce a new framework, POKE, to build cryptographic protocols from irrational isogenies using higher-dimensional representations. The framework enables two parties to manipulate higher-dimensional representations of isogenies to efficiently compute their pushforwards, and ultimately to obtain a shared secret. We provide three constructions based on POKE: the first is a PKE protocol, which is one of the most compact post-quantum PKEs and possibly the most efficient isogeny-based PKE to date. We then introduce a validation technique to ensure the correctness of uniSIDH public keys: by combining the validation method with a POKE-based construction, we obtain a split KEM, a primitive that generalizes NIKEs and can be used to instantiate a post-quantum version of Signal's X3DH protocol. The third construction builds upon the split KEM and its validation method to obtain a round-optimal verifiable OPRF. It is the first such construction that does not require more than $\lambda$ isogeny computations, and it is significantly more compact and more efficient than all other isogeny-based OPRFs.
Last updated:  2024-04-22
Complete group law for genus 2 Jacobians on Jacobian coordinates
Elif Ozbay Gurler and Huseyin Hisil
This manuscript provides complete, inversion-free, and explicit group law formulas in Jacobian coordinates for the genus 2 hyperelliptic curves of the form $y^2 = x^5 + a_3 x^3 + a_2 x^2 + a_1 x + a_0$ over a field $K$ with $char(K) \ne 2$. The formulas do not require the use of polynomial arithmetic operations such as resultant, mod, or gcd computations but only operations in $K$.
Last updated:  2024-04-22
Deep Selfish Proposing in Longest-Chain Proof-of-Stake Protocols
Roozbeh Sarenche, Svetla Nikova, and Bart Preneel
It has been shown that the selfish mining attack enables a miner to achieve an unfair relative revenue, posing a threat to the progress of longest-chain blockchains. Although selfish mining is a well-studied attack in the context of Proof-of-Work blockchains, its impact on longest-chain Proof-of-Stake (LC-PoS) protocols has yet to be addressed. This paper uses both theoretical and implementation-based approaches to analyze the selfish proposing attack in LC-PoS protocols. We discuss how factors such as the nothing-at-stake phenomenon and proposer predictability in PoS protocols can make the selfish proposing attack in LC-PoS protocols more destructive than selfish mining in PoW. In the first part of the paper, we use combinatorial tools to theoretically assess the selfish proposer’s block ratio in simplistic LC-PoS environments and under simplified network conditions. However, these theoretical tools and classical MDP-based approaches cannot be applied to analyze the selfish proposing attack in real-world, more complicated LC-PoS environments. To overcome this issue, in the second part of the paper, we employ deep reinforcement learning techniques to find the near-optimal strategy of selfish proposing in more sophisticated protocols. The tool implemented in the paper can help us analyze the selfish proposing attack across diverse blockchain protocols with different reward mechanisms, predictability levels, and network conditions.
Last updated:  2024-04-22
How to Lose Some Weight - A Practical Template Syndrome Decoding Attack
Sebastian Bitzer, Jeroen Delvaux, Elena Kirshanova, Sebastian Maaßen, Alexander May, and Antonia Wachter-Zeh
We study the hardness of the Syndrome Decoding problem, the basis of most code-based cryptographic schemes, such as Classic McEliece, in the presence of side-channel information. We use ChipWhisperer equipment to perform a template attack on Classic McEliece running on an ARM Cortex-M4, and accurately classify the Hamming weights of consecutive 32-bit blocks of the secret error vector. With these weights at hand, we optimize Information Set Decoding algorithms. Technically, we show how to speed up information set decoding via a dimension reduction, additional parity-check equations, and an improved information set search, all derived from the Hamming weight information. Consequently, using our template attack, we can practically recover an error vector in dimension n=2197 in a matter of seconds. Without side-channel information, such an instance has a complexity of around 88 bits. We also estimate how our template attack affects the security of the proposed McEliece parameter sets. Roughly speaking, even an error-prone leak of our Hamming weight information leads, for n=3488, to a security drop of 89 bits.
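To make the leaked quantity concrete: the template attack classifies the Hamming weight of each consecutive 32-bit block of the secret error vector, i.e., exactly the following function of the secret (toy dimensions, illustrative only):

```python
def block_hamming_weights(bits, block_size=32):
    """Hamming weight of each consecutive block of the secret error vector,
    the classification target of the template attack."""
    return [sum(bits[i:i + block_size]) for i in range(0, len(bits), block_size)]

# Toy error vector of dimension 64 and weight 5:
e = [0] * 64
for i in (3, 18, 34, 35, 60):
    e[i] = 1
print(block_hamming_weights(e))  # [2, 3]
```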
Last updated:  2024-04-22
New SAT-based Model for Quantum Circuit Decision Problem: Searching for Low-Cost Quantum Implementation
Jingwen Chen, Qun Liu, Yanhong Fan, Lixuan Wu, Boyun Li, and Meiqin Wang
In recent years, quantum technology has been rapidly developed. As security analyses for symmetric ciphers continue to emerge, many require an evaluation of the resources needed for the quantum circuit implementation of the encryption algorithm. In this regard, we propose the quantum circuit decision problem, which requires us to determine whether there exists a quantum circuit for a given permutation f using M ancilla qubits and no more than K quantum gates within circuit depth D. Firstly, we investigate heuristic algorithms and classical SAT-based models from previous works, revealing their limitations in solving the problem. Hence, we innovatively propose an improved SAT-based model incorporating three metrics of quantum circuits. The model enables us to find the optimal quantum circuit of an arbitrary 3- or 4-bit S-box under a given optimization goal based on SAT solvers, and it proves the optimality of the circuits constructed by the tool LIGHTER-R. Then, by combining different criteria in the model, we find more compact quantum circuit implementations of S-boxes such as RECTANGLE and GIFT. For the GIFT S-box, our model provides the optimal quantum circuit, which requires only 8 gates with a depth of 31. Furthermore, our model can be generalized to linear layers, and it improves the previous SAT-based model proposed by Huang et al. at ASIACRYPT 2022 by adding criteria on the number of qubits and the circuit depth.
Last updated:  2024-04-28
BPDTE: Batch Private Decision Tree Evaluation via Amortized Efficient Private Comparison
Huiqiang Liang, Haining Lu, and Geng Wang
Machine learning as a service requires the client to trust the server and provide its own private information to use the service. Clients may worry that their private data is being collected by the server without effective supervision, while the server aims to manage user data properly to foster the advancement of its services. In this work, we focus on private decision tree evaluation (PDTE), which can alleviate such privacy concerns in classification tasks using decision trees. After the evaluation, except for some hyperparameters, the client only receives the classification results from the server, while the server learns nothing. Firstly, we propose three amortized efficient private comparison algorithms based on leveled homomorphic encryption: TECMP, RDCMP, and CDCMP. They are non-interactive, high-precision (up to 26624 bits), many-to-many, and output-expressive, achieving an amortized cost of less than 1 ms at 32-bit precision, which is an order of magnitude faster than the state of the art. Secondly, we propose three batch PDTE schemes using these private comparisons: TECMP-PDTE, RDCMP-PDTE, and CDCMP-PDTE. Due to the batch operations, we utilize a clear rows relation (CRR) algorithm, which obfuscates the position and classification results of the different rows of data. Finally, for decision trees exceeding 1000 nodes with 16-bit inputs, the amortized runtimes of TECMP-PDTE and RDCMP-PDTE are both more than 56$\times$ faster than the state of the art, while TECMP-PDTE with CRR still achieves a 14$\times$ speedup. Even for a single row and a tree of fewer than 100 nodes with 64-bit inputs, TECMP-PDTE maintains performance comparable to current work.
Last updated:  2024-04-22
Efficient KZG-based Univariate Sum-check and Lookup Argument
Yuncong Zhang, Shi-Feng Sun, and Dawu Gu
We propose a novel KZG-based sum-check scheme, dubbed $\mathsf{Losum}$, with optimal efficiency. In particular, its proving cost is one multi-scalar multiplication of size $k$---the number of non-zero entries in the vector---its verification cost is one pairing plus one group scalar multiplication, and the proof consists of only one group element. Using $\mathsf{Losum}$ as a component, we then construct a new lookup argument, named $\mathsf{Locq}$, which enjoys a smaller proof size and a lower verification cost compared to the state-of-the-art $\mathsf{cq}$, $\mathsf{cq}$+ and $\mathsf{cq}$++. Specifically, the proving cost of $\mathsf{Locq}$ is comparable to $\mathsf{cq}$, keeping the advantage that the proving cost is independent of the table size after preprocessing. For verification, $\mathsf{Locq}$ costs four pairings, while $\mathsf{cq}$, $\mathsf{cq}$+ and $\mathsf{cq}$++ require five, five and six pairings, respectively. For proof size, a $\mathsf{Locq}$ proof consists of four $\mathbb{G}_1$ elements and one $\mathbb{G}_2$ element; when instantiated with the BLS12-381 curve, the proof size of $\mathsf{Locq}$ is $2304$ bits, while $\mathsf{cq}$, $\mathsf{cq}$+ and $\mathsf{cq}$++ have $3840$, $3328$ and $2944$ bits, respectively. Moreover, $\mathsf{Locq}$ is zero-knowledge, like $\mathsf{cq}$+ and $\mathsf{cq}$++, whereas $\mathsf{cq}$ is not; $\mathsf{Locq}$ is more efficient even compared to the non-zero-knowledge (and more efficient) versions of $\mathsf{cq}$+ and $\mathsf{cq}$++.
Last updated:  2024-04-22
Lattice-Based Succinct Mercurial Functional Commitment for Circuits: Definitions and Constructions
Hongxiao Wang, Siu-Ming Yiu, Yanmin Zhao, Zoe L. Jiang, and Min Xie
Vector commitments (VC) have gained considerable attention because of their wide use in applications such as blockchains and accumulators. Mercurial vector commitments and mercurial functional commitments (MFC), as significant variants of VC, are the central techniques used to construct more advanced cryptographic primitives, such as zero-knowledge sets and zero-knowledge functional elementary databases (ZK-FEDB). However, the current MFC only supports linear functions, which limits its applications: the resulting ZK-FEDB only supports linear function queries. Besides, to the best of our knowledge, the existing MFCs and ZK-FEDBs, including the one proposed by Zhang and Deng (ASIACRYPT '23) using RSA accumulators, are all in the group model and cannot resist attacks by quantum computers. To break these limitations, we formalize the first system model and security model of MFC for circuits. Then, we target specific properties of a new falsifiable assumption, namely the $\mathsf{BASIS}$ assumption proposed by Wee and Wu (EUROCRYPT '23), to construct the first lattice-based succinct mercurial functional commitment for circuits. As an application, we show that our construction can be used to build the first lattice-based ZK-FEDB directly within the existing generic framework.
Last updated:  2024-04-22
$\mathsf{Cougar}$: Cubic Root Verifier Inner Product Argument under Discrete Logarithm Assumption
Hyeonbum Lee, Seunghun Paik, Hyunjung Son, and Jae Hong Seo
An inner product argument (IPA) is a cryptographic primitive used to construct zero-knowledge proof (ZKP) systems, a notable privacy-enhancing technology. We propose a novel efficient IPA called $\mathsf{Cougar}$. $\mathsf{Cougar}$ features a cubic root verifier and logarithmic communication under the discrete logarithm (DL) assumption. At Asiacrypt 2022, Kim et al. proposed two square-root-verifier IPAs under the DL assumption. Our main objective is to overcome the limitation of square-root verifier complexity in the DL setting. To achieve this, we combine two distinct square-root IPAs from Kim et al.: one with pairing ($\mathsf{Protocol3}$) and one without pairing ($\mathsf{Protocol4}$). To construct $\mathsf{Cougar}$, we first revisit $\mathsf{Protocol4}$ and reconstruct it to make it compatible with the proof system for the homomorphic commitment scheme. Next, we utilize $\mathsf{Protocol3}$ as the proof system for the reconstructed $\mathsf{Protocol4}$. Furthermore, we provide a soundness proof for $\mathsf{Cougar}$ under the DL assumption.
Last updated:  2024-04-22
Subverting Cryptographic Protocols from A Fine-Grained Perspective - A Case Study on 2-Party ECDSA
Jialiu Cheng, Yi Wang, Rongmao Chen, and Xinyi Huang
The revelations of Edward Snowden in 2013 rekindled concerns within the cryptographic community regarding the potential subversion of cryptographic systems. Bellare et al. (CRYPTO'14) introduced the notion of Algorithm Substitution Attacks (ASAs), which aim to covertly leak sensitive information by undermining individual cryptographic primitives. In this work, we delve deeply into the realm of ASAs against protocols built upon cryptographic primitives. In particular, we revisit the existing ASA model proposed by Berndt et al. (AsiaCCS'22), providing a more fine-grained perspective. We introduce a novel ASA model tailored for protocols, capable of capturing a wide spectrum of subversion attacks. Our model features a modular representation of subverted parties within protocols, along with fine-grained definitions of undetectability. To illustrate the practicality of our model, we applied it to Lindell's two-party ECDSA protocol (CRYPTO'17), unveiling a range of ASAs targeting the protocol's parties with the objective of extracting secret key shares. Our work offers a comprehensive ASA model suited to cryptographic protocols, providing a useful framework for understanding ASAs against protocols.
Last updated:  2024-04-22
Non-interactive Blind Signatures from Lattices
Foteini Baldimtsi, Jiaqi Cheng, Rishab Goyal, and Aayush Yadav
Blind signatures enable a receiver to obtain signatures on messages of its choice without revealing any message to the signer. Round-optimal blind signatures are designed as a two-round interactive protocol between a signer and receiver. Coincidentally, the choice of message is not important in many applications, and is routinely set as a random (unstructured) message by a receiver. With the goal of designing more efficient blind signatures for such applications, Hanzlik (Eurocrypt '23) introduced a new variant called non-interactive blind signatures (NIBS). These allow a signer to asynchronously generate partial signatures for any recipient such that only the intended recipient can extract a blinded signature for a random message. This bypasses the two-round barrier for traditional blind signatures, yet enables many known applications. Hanzlik provided new practical designs for NIBS from bilinear pairings. In this work, we investigate efficient NIBS with post-quantum security. We design the first practical NIBS, as well as non-interactive partially blind signatures called tagged NIBS, from lattice-based assumptions. We also propose a new generic paradigm for NIBS from circuit-private leveled homomorphic encryption achieving optimal-sized signatures (i.e., same as any non-blind signature). Finally, we propose new enhanced security properties for NIBS, that could be of practical and theoretical interest.
Last updated:  2024-04-24
Hadamard Product Argument from Lagrange-Based Univariate Polynomials
Jie Xie, Yuncong Hu, and Yu Yu
The Hadamard product is the point-wise product of two vectors. This paper presents a new scheme to prove the Hadamard-product relation as a sub-protocol for SNARKs based on univariate polynomials. The prover uses a linear number of cryptographic operations to generate a proof containing logarithmically many field elements. Verification takes logarithmically many cryptographic operations, with a constant number of pairings in a bilinear group. The construction of the scheme is based on Lagrange-based KZG commitments (Kate, Zaverucha, and Goldberg, Asiacrypt 2010) and the folding technique. We construct an inner-product protocol from the folding technique on univariate polynomials in Lagrange form, and by carefully choosing random polynomials suitable for folding, we construct a Hadamard-product protocol from the inner-product protocol. This gives an alternative way to prove linear-algebra relations in linear time, and the protocol has a better concrete proof size than previous works.
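To fix notation: the Hadamard product is $(a \circ b)_i = a_i b_i$, and the inner product $\langle a, b\rangle$ is the sum of its entries, which is why an inner-product protocol can serve as the core of a Hadamard-product argument. A short illustration:

```python
a = [1, 2, 3, 4]
b = [5, 6, 7, 8]
hadamard = [ai * bi for ai, bi in zip(a, b)]  # point-wise: [5, 12, 21, 32]
inner = sum(hadamard)                         # inner product <a, b> = 70
```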
Last updated:  2024-04-21
FHERMA: Building the Open-Source FHE Components Library for Practical Use
Gurgen Arakelov, Nikita Kaskov, Daria Pianykh, and Yuriy Polyakov
Fully Homomorphic Encryption (FHE) is a powerful Privacy-Enhancing Technology (PET) that enables computations on encrypted data without having access to the secret key. While FHE holds immense potential for enhancing data privacy and security, creating its practical applications is associated with many difficulties. A significant barrier is the absence of easy-to-use, standardized components that developers can utilize as foundational building blocks. Addressing this gap requires constructing a comprehensive library of FHE components, a complex endeavor due to multiple inherent problems. We propose a competition-based approach for building such a library. More concretely, we present FHERMA, a new challenge platform that introduces black-box and white-box challenges, and fully automated evaluation of submitted FHE solutions. The initial challenges on the FHERMA platform are motivated by practical problems in machine learning and blockchain. The winning solutions get integrated into an open-source library of FHE components, which is available to all members of the PETs community under the Apache 2.0 license.
Last updated:  2024-04-21
A Security Analysis of Restricted Syndrome Decoding Problems
Ward Beullens, Pierre Briaud, and Morten Øygarden
Restricted syndrome decoding problems (R-SDP and R-SDP($G$)) provide an interesting basis for post-quantum cryptography. Indeed, they feature in CROSS, a submission in the ongoing process for standardizing post-quantum signatures. This work improves our understanding of the security of both problems. Firstly, we propose and implement a novel collision attack on R-SDP($G$) that provides the best attack under realistic restrictions on memory. Secondly, we derive precise complexity estimates for algebraic attacks on R-SDP that are shown to be accurate by our experiments. We note that neither of these improvements threatens the updated parameters of CROSS.
Last updated:  2024-04-22
Practical Delegatable Attribute-Based Anonymous Credentials with Chainable Revocation
Min Xie, Peichen Ju, Yanqi Zhao, Zoe L. Jiang, Junbin Fang, Yong Yu, and Xuan Wang
Delegatable Anonymous Credentials (DAC) are an enhanced Anonymous Credentials (AC) system that allows credential owners to use credentials anonymously, as well as anonymously delegate them to other users. In this work, we introduce a new concept called Delegatable Attribute-based Anonymous Credentials with Chainable Revocation (DAAC-CR), which extends the functionality of DAC by allowing 1) fine-grained attribute delegation, 2) issuers to restrict the delegation capabilities of the delegated users at a fine-grained level, including the depth of delegation and the sets of delegable attributes, and 3) chainable revocation, meaning if a credential within the delegation chain is revoked, all subsequent credentials derived from it are also invalid. We provide a practical DAAC-CR instance based on a novel primitive that we identify as structure-preserving signatures on equivalence classes on vector commitments (SPSEQ-VC). This primitive may be of independent interest, and we detail an efficient construction. Compared to traditional DAC systems that rely on non-interactive zero-knowledge (NIZK) proofs, the credential size in our DAAC-CR instance is constant, independent of the length of delegation chain and the number of attributes. We formally prove the security of our scheme in the generic group model and demonstrate its practicality through performance benchmarks.
Last updated:  2024-04-20
New Security Proofs and Techniques for Hash-and-Sign with Retry Signature Schemes
Benoît Cogliati, Pierre-Alain Fouque, Louis Goubin, and Brice Minaud
Hash-and-Sign with Retry is a popular technique to design efficient signature schemes from code-based or multivariate assumptions. Contrary to Hash-and-Sign signatures based on preimage-sampleable functions as defined by Gentry, Peikert and Vaikuntanathan (STOC 2008), trapdoor functions in code-based and multivariate schemes are not surjective. Therefore, the standard approach uses random trials. Kosuge and Xagawa (PKC 2024) coined it the Hash-and-Sign with Retry paradigm. As many attacks have appeared on code-based and multivariate schemes, we think it is important for the ongoing NIST competition to look at the security proofs of these schemes. The original proof of Sakumoto, Shirai, and Hiwatari (PQCrypto 2011) was flawed, then corrected by Chatterjee, Das and Pandit (INDOCRYPT 2022). The fix is still not sufficient, as it only works for very large finite fields. A new proof in the Quantum ROM model was proposed by Kosuge and Xagawa (PKC 2024), but it is rather loose, even when restricted to the classical setting. In this paper, we introduce several tools that yield tighter security bounds for Hash-and-Sign with Retry signatures in the classical setting. These include the Hellinger distance, stochastic dominance arguments, and a new combinatorial tool to transform a proof in the non-adaptive setting into a proof in the adaptive setting. Ultimately, we obtain a sharp bound for the security of Hash-and-Sign with Retry signatures, applicable to various code-based and multivariate schemes. Focusing on NIST candidates, we apply these results to the MAYO, PROV, and modified UOV signature schemes. In most cases, our bounds are tight enough to apply with the real parameters of those schemes; in some cases, smaller parameters would suffice.
Last updated:  2024-04-20
The Practical Advantage of RSA over ECC and Pairings
Zhengjun Cao and Lihua Liu
The coexistence of RSA and the elliptic curve cryptosystem (ECC) has continued for over forty years. It is well known that ECC has the advantage of a shorter key than RSA, which often leads a newcomer to assume that ECC also runs faster. In this report, we present Mathematica code for RSA-2048 and ECC-256, which visually shows that RSA-2048 runs three times faster than ECC-256. It is also estimated that RSA-2048 runs 48,000 times faster than a Weil pairing with embedding degree 2 and a fixed point.
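The report's code is in Mathematica; a rough Python analogue of the same head-to-head timing (one 2048-bit private-key modular exponentiation versus one textbook double-and-add scalar multiplication on P-256) is sketched below. The random modulus and exponent are placeholders rather than a valid RSA key, and interpreted big-integer timings differ from Mathematica's, so treat the output as indicative only.

```python
import secrets, time

# --- RSA-2048: time one full-size modular exponentiation (no CRT) ---
n = secrets.randbits(2048) | (1 << 2047) | 1  # placeholder modulus
d = secrets.randbits(2048) | (1 << 2047) | 1  # placeholder exponent
m = secrets.randbelow(n)
t0 = time.perf_counter()
pow(m, d, n)
t_rsa = time.perf_counter() - t0

# --- ECC-256: time one scalar multiplication on NIST P-256 ---
p = 2**256 - 2**224 + 2**192 + 2**96 - 1
a_curve = -3 % p
Gx = 0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296
Gy = 0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5

def ec_add(P1, P2):
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # point at infinity
    if P1 == P2:
        lam = (3 * x1 * x1 + a_curve) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P1):  # double-and-add
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P1)
        P1 = ec_add(P1, P1)
        k >>= 1
    return R

k = secrets.randbits(256)
t0 = time.perf_counter()
scalar_mult(k, (Gx, Gy))
t_ecc = time.perf_counter() - t0
print(f"RSA-2048 modexp: {t_rsa:.4f}s, P-256 scalar mult: {t_ecc:.4f}s")
```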
Last updated:  2024-04-19
Low-latency Secure Integrated Sensing and Communication with Transmitter Actions
Truman Welling, Onur Gunlu, and Aylin Yener
This paper considers an information theoretic model of secure integrated sensing and communication, represented as a wiretap channel with action dependent states. This model allows one to secure a part of the transmitted message against a sensed target that eavesdrops the communication, while allowing transmitter actions to change the channel statistics. An exact secrecy-distortion region is given for a physically-degraded channel. Moreover, a finite-length achievability region is established for the model using an output statistics of random binning method, giving an achievable bound for low-latency applications.
Last updated:  2024-04-19
Classical Commitments to Quantum States
Sam Gunn, Yael Tauman Kalai, Anand Natarajan, and Agi Villanyi
We define the notion of a classical commitment scheme to quantum states, which allows a quantum prover to compute a classical commitment to a quantum state, and later open each qubit of the state in either the standard or the Hadamard basis. Our notion is a strengthening of the measurement protocol from Mahadev (STOC 2018). We construct such a commitment scheme from the post-quantum Learning With Errors (LWE) assumption, and more generally from any noisy trapdoor claw-free function family that has the distributional strong adaptive hardcore bit property (a property that we define in this work). Our scheme is succinct in the sense that the running time of the verifier in the commitment phase depends only on the security parameter (independent of the size of the committed state), and its running time in the opening phase grows only with the number of qubits that are being opened (and the security parameter). As a corollary we obtain a classical succinct argument system for QMA under the post-quantum LWE assumption. Previously, this was only known assuming post-quantum secure indistinguishability obfuscation. As an additional corollary we obtain a generic way of converting any X/Z quantum PCP into a succinct argument system under the quantum hardness of LWE.
Last updated:  2024-04-19
Security Analysis of XHASH8/12
Léo Perrin
We have investigated the padding scheme of XHash8 and XHash12 as well as the applicability of algebraic attacks to both. The only vulnerability of the padding scheme we can find is plausibly applicable only in the multi-rate setting---for which the authors make no claim---and it is safe otherwise. For algebraic attacks relying on the computation and exploitation of a Gröbner basis, our survey of the literature suggests basing a security argument on the complexity of the variable elimination step rather than that of the computation of the Gröbner basis itself. Indeed, it turns out that the latter complexity is hard to estimate---and is sometimes literally non-existent. Focusing on the elimination step, we propose a generalization of the "FreeLunch" approach which, under a reasonable conjecture about the behaviour of the degree of polynomial ideals of dimension 0, is sufficient for us to argue that both XHash8 and XHash12 are safe against such attacks. We implemented a simplified version of the generation (and resolution) of the corresponding set of equations in SAGE, which allowed us to validate our conjecture at least experimentally, and in fact to show that the lower bound it provides on the ideal degree is not tight---meaning we are a priori underestimating the security of these permutations against the algebraic attacks we consider. At this stage, if used as specified, these hash functions seem safe from Gröbner-basis-based algebraic attacks.
Last updated:  2024-04-22
Generic MitM Attack Frameworks on Sponge Constructions
Xiaoyang Dong, Boxin Zhao, Lingyue Qin, Qingliang Hou, Shun Zhang, and Xiaoyun Wang
This paper proposes general meet-in-the-middle (MitM) attack frameworks for preimage and collision attacks on hash functions based on (generalized) sponge construction. As the first contribution, our MitM preimage attack framework covers a wide range of sponge-based hash functions, especially those with lower claimed security level for preimage compared to their output size. Those hash functions have been very widely standardized (e.g., Ascon-Hash, PHOTON, etc.), but are rarely studied against preimage attacks. Even the recent MitM attack framework on sponge construction by Qin et al. (EUROCRYPT 2023) cannot attack those hash functions. As the second contribution, our MitM collision attack framework shows a different tool for the collision cryptanalysis on sponge construction, while previous collision attacks on sponge construction are mainly based on differential attacks. Most of the results in this paper are the first third-party cryptanalysis results. If cryptanalysis previously existed, our new results significantly improve the previous results, such as improving the previous 2-round collision attack on Ascon-Hash to the current 4 rounds, improving the previous 3.5-round quantum preimage attack on SPHINCS$^+$-Haraka to our 4-round classical preimage attack, etc.
Last updated:  2024-04-22
Worst-Case to Average-Case Hardness of LWE: A Simple and Practical Perspective
Divesh Aggarwal, Leong Jin Ming, and Alexandra Veliche
In this work, we study the worst-case to average-case hardness of the Learning with Errors problem (LWE) under an alternative measure of hardness: the maximum success probability achievable by a probabilistic polynomial-time (PPT) algorithm. Previous works by Regev (STOC 2005), Peikert (STOC 2009), and Brakerski, Langlois, Peikert, Regev, and Stehlé (STOC 2013) give worst-case to average-case reductions from lattice problems, specifically the approximate decision variant of the Shortest Vector Problem (GapSVP) and the Bounded Distance Decoding (BDD) problem, to LWE. These reductions, however, are lossy in the sense that even the strongest assumption on the worst-case hardness of GapSVP or BDD implies only mild hardness of LWE. Our alternative perspective gives a much tighter reduction and strongly relates the hardness of LWE to that of BDD. In particular, we show that under a reasonable assumption about the success probability of solving BDD via a PPT algorithm, we obtain a nearly tight lower bound on the highest possible success probability for solving LWE via a PPT algorithm. Furthermore, we show a tight relationship between the best achievable success probability by any probabilistic polynomial-time algorithm for decision-LWE and that of search-LWE. Our results not only refine our understanding of the computational complexity of LWE, but also provide a useful framework for analyzing its practical security implications.
Last updated:  2024-04-18
Secret-Sharing Schemes for High Slices
Amos Beimel, Oriol Farràs, and Oded Nir
In a secret-sharing scheme, a secret is shared among $n$ parties such that the secret can be recovered by authorized coalitions, while it should be kept hidden from unauthorized coalitions. In this work we study secret-sharing for $k$-slice access structures, in which coalitions of size $k$ are either authorized or not, larger coalitions are authorized, and smaller ones are unauthorized. Known schemes for these access structures had smaller shares for small $k$'s than for large ones; hence our focus is on "high" $(n-k)$-slices where $k$ is small. Our work is inspired by several motivations: 1) Obtaining efficient schemes (with perfect or computational security) for natural families of access structures; 2) Making progress in the search for better schemes for general access structures, which are often based on schemes for slice access structures; 3) Proving or disproving the conjecture by Csirmaz (J. Math. Cryptol., 2020) that an access structure and its dual can be realized by secret-sharing schemes with the same share size. The main results of this work are: - Perfect schemes for high slices. We present a scheme for $(n-k)$-slices with information-theoretic security and share size $kn\cdot 2^{\tilde{O}(\sqrt{k \log n})}$. Using a different scheme with slightly larger shares, we prove that the ratio between the optimal share size of $k$-slices and that of their dual $(n-k)$-slices is bounded by $n$. - Computational schemes for high slices. We present a scheme for $(n-k)$-slices with computational security and share size $O(k^2 \lambda \log n)$ based on the existence of one-way functions. Our scheme makes use of a non-standard viewpoint on Shamir secret-sharing that allows sharing many secrets with different thresholds at low cost. - Multislice access structures. $(a:b)$-multislices are access structures that behave similarly to slices, but are unconstrained on coalitions in a wider range of cardinalities between $a$ and $b$. We use our new schemes for high slices to realize multislices with the same share sizes that their duals have today. This solves an open question raised by Applebaum and Nir (Crypto, 2021), and allows realizing hypergraph access structures that are chosen uniformly at random under a natural set of distributions with share size $2^{0.491n+o(n)}$, compared to the previous result of $2^{0.5n+o(n)}$.
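The "non-standard viewpoint on Shamir secret-sharing" mentioned above builds on the standard scheme, recalled here as a minimal $t$-of-$n$ sketch (field choice illustrative; this is the textbook baseline, not the paper's multi-threshold construction):

```python
import secrets

P = 2**127 - 1  # Mersenne prime field (illustrative)

def share(secret, t, n):
    """Shamir t-of-n: the secret is the constant term of a random
    degree-(t-1) polynomial, and party x holds its evaluation at x."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over any t of the shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = share(secret=42, t=3, n=5)
assert reconstruct(shares[:3]) == 42
```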
Last updated:  2024-04-18
Improved Provable Reduction of NTRU and Hypercubic Lattices
Henry Bambury and Phong Q. Nguyen
Lattice-based cryptography typically uses lattices with special properties to improve efficiency. We show how blockwise reduction can exploit lattices with special geometric properties, effectively reducing the required blocksize to solve the shortest vector problem to half of the lattice's rank, and in the case of the hypercubic lattice $\mathbb{Z}^n$, further relaxing the approximation factor of blocks to $\sqrt{2}$. We study both provable algorithms and the heuristic well-known primal attack, in the case where the lattice has a first minimum that is almost as short as that of the hypercubic lattice $\mathbb{Z}^n$. Remarkably, these near-hypercubic lattices cover Falcon and most concrete instances of the NTRU cryptosystem: this is the first provable result showing that breaking NTRU lattices can be reduced to finding shortest lattice vectors in halved dimension, thereby providing a positive response to a conjecture of Gama, Howgrave-Graham and Nguyen at Eurocrypt 2006.
Last updated:  2024-04-26
A note on "Tweakable HCTR: A BBB Secure Tweakable Enciphering Scheme"
Mustafa Khairallah
Tweakable HCTR is a tweakable enciphering scheme proposed by Dutta and Nandi at Indocrypt 2018. It provides beyond-birthday-bound security when each tweak value is not used too frequently. More importantly for this note, its claimed security bound degrades only linearly with the maximum input length. We show in this note that this claim does not hold, by exhibiting a single-query distinguisher with advantage $O(l^2/2^n)$, where $l$ is the length of that query. The distinguisher does not break the beyond-birthday-bound claim, but its advantage is higher than the claimed bound.
Last updated:  2024-04-17
Probabilistically Checkable Arguments for all NP
Shany Ben-David
A probabilistically checkable argument (PCA) is a computational relaxation of PCPs, where soundness is guaranteed to hold only for false proofs generated by a computationally bounded adversary. The advantage of PCAs is that they are able to overcome the limitations of PCPs. A succinct PCA has a proof length that is polynomial in the witness length (and is independent of the non-deterministic verification time), which is impossible for PCPs, under standard complexity assumptions. Bronfman and Rothblum (ITCS 2022) constructed succinct PCAs for NC that are publicly-verifiable and have constant query complexity under the sub-exponential hardness of LWE. We construct a publicly-verifiable succinct PCA with constant query complexity for all NP in the adaptive security setting. Our PCA scheme offers several improvements compared to the Bronfman and Rothblum construction: (1) it applies to all problems in NP, (2) it achieves adaptive security, and (3) it can be realized under any of the following assumptions: the polynomial hardness of LWE; $O(1)$-LIN on bilinear maps; or sub-exponential DDH. Moreover, our PCA scheme has a succinct prover, which means that for any NP relation that can be verified in time $T$ and space $S$, the proof can be generated in time $O_{\lambda,m}(T\cdot\mathrm{polylog}(T))$ and space $O_{\lambda,m}(S\cdot\mathrm{polylog}(T))$. Here, ${O}_{\lambda,m}$ accounts for polynomial factors in the security parameter and in the size of the witness. En route, we construct a new complexity-preserving RAM delegation scheme that is used in our PCA construction and may be of independent interest.
Last updated:  2024-04-26
A Characterization of AE Robustness as Decryption Leakage Indistinguishability
Ganyuan Cao
Robustness has emerged as an important criterion for authenticated encryption, alongside the requirements of confidentiality and integrity. We introduce a novel notion, denoted IND-rCCA, to formalize the robustness of authenticated encryption from the perspective of decryption leakage. This notion augments common notions defined for AEAD schemes by considering the indistinguishability of potential leakage due to decryption failure, particularly in the presence of multiple checks for failures. With this notion, we study the disparity between a single-error decryption function and the actual leakage incurred during decryption. We introduce the concept of error unicity, which requires that only one error is disclosed, whether explicitly via decryption or implicitly via leakage, even when there are multiple checks for failures. This aims to mitigate the security issues caused by disclosing multiple errors via leakage. We further extend this notion to IND-sf-rCCA to formalize stateful security involving out-of-order ciphertexts. Furthermore, we revisit the robustness of the Encode-then-Encrypt-then-MAC (EEM) paradigm, addressing concerns arising from the disclosure of multiple error messages. We then propose a modification to boost its robustness, thereby ensuring error unicity.
Last updated:  2024-04-17
Blockchain-based decentralized identity system: Design and security analysis
Gewu BU, Serge Fdida, Maria Potop-Butucaru, and Bilel Zaghdoudi
This paper presents a novel blockchain-based decentralized identity system (DID), tailored for enhanced digital identity management in Internet of Things (IoT) and device-to-device (D2D) networks. The proposed system features a hierarchical structure that effectively merges a distributed ledger with a mobile D2D network, ensuring robust security while streamlining communication. Central to this design are the gateway nodes, which serve as intermediaries, facilitating DID registration and device authentication through smart contracts and distributed storage systems. A thorough security analysis underscores the system’s resilience to common cyber threats and adherence to critical principles like finality and liveness.
Last updated:  2024-04-17
Cryptanalysis of signature schemes based on the root extraction problem over braid group
Djimnaibeye Sidoine, Guy Mobouale Wamba, Abiodoun Clement Hounkpevi, Tieudjo Daniel, and Djiby Sow
Cumplido et al. have recently shown that the Wang-Hu digital signature is not secure and have presented a potential attack on the root extraction problem. The effectiveness of generic attacks for solving this problem for braids is still uncertain, and it is unknown whether it is possible to create braids that require exponential time to solve these problems. In 2023, Lin et al. proposed a post-quantum signature scheme similar to the Wang-Hu scheme that is proven to be able to withstand attacks from quantum computers. However, evidence is presented here for the existence of an algorithm based on mean-set attacks that can recover the private key in both schemes without solving the root extraction problem. In the post-quantum signature version, we prove that the attacker can forge a signature that passes verification without recovering the private key.
Last updated:  2024-04-16
Analysis of Multivariate Encryption Schemes: Application to Dob and C*
Morten Øygarden, Patrick Felke, and Håvard Raddum
A common strategy for constructing multivariate encryption schemes is to use a central map that is easy to invert over an extension field, along with a small number of modifications to thwart potential attacks. In this work we study the effectiveness of these modifications, by deriving estimates for the number of degree fall polynomials. After developing the necessary tools, we focus on encryption schemes using the $C^*$ and Dobbertin central maps, with the internal perturbation (ip), and $Q_+$ modifications. For these constructions we are able to accurately predict the number of degree fall polynomials produced in a Gröbner basis attack, up to and including degree five for the Dob encryption scheme and four for $C^*$. The predictions remain accurate even when fixing variables. Based on this new theory we design a novel attack on Dob, which completely recovers the secret key for the parameters suggested by its designers. Due to the generality of the presented techniques, we also believe that they are of interest to the analysis of other big field schemes.
Last updated:  2024-05-01
Greco: Fast Zero-Knowledge Proofs for Valid FHE RLWE Ciphertexts Formation
Enrico Bottazzi
Fully homomorphic encryption (FHE) allows for evaluating arbitrary functions over encrypted data. In multi-party FHE applications, different parties encrypt their secret data and submit ciphertexts to a server, which, according to the application logic, performs homomorphic operations on them. For example, in a secret voting application, the tally is computed by summing up the ciphertexts encoding the votes. Valid encrypted votes are of the form $E(0)$ and $E(1)$. A malicious voter could send an invalid encrypted vote such as $E(145127835)$, which can mess up the whole election. Because of that, users must prove that the ciphertext they submitted is a valid Ring-Learning with Errors (RLWE) ciphertext and that the plaintext message they encrypted is a valid vote (for example, either a 1 or a 0). Greco uses a zero-knowledge proof to let a user prove that their RLWE ciphertext is well-formed; in other words, that the encryption operation was performed correctly. The resulting proof can therefore be composed with additional application-specific logic and subjected to public verification in a non-interactive setting. Considering the secret voting application, one can prove further properties of the message being encrypted, or even properties about the voter, allowing the application to support anonymous voting as well. The prover has been implemented using Halo2-lib as a proving system, and the benchmarks have shown that Greco can already be integrated into user-facing applications without creating excessive friction for the user. The implementation is available at https://github.com/privacy-scaling-explorations/greco
Last updated:  2024-04-16
The Case of Small Prime Numbers Versus the Okamoto-Uchiyama Cryptosystem
George Teseleanu
In this paper we study the effect of using small prime numbers within the Okamoto-Uchiyama public key encryption scheme. We introduce two novel versions and prove their security. Then we show how to choose the system's parameters such that the security results hold. Moreover, we provide a practical comparison between the cryptographic algorithms we introduced and the original Okamoto-Uchiyama cryptosystem.
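For readers unfamiliar with the scheme under study, a toy textbook Okamoto-Uchiyama implementation with $n = p^2 q$ is sketched below. The primes are deliberately tiny and purely illustrative; how small the primes may safely be chosen is exactly the paper's question and is not addressed by this sketch.

```python
import secrets

# Toy Okamoto-Uchiyama (textbook version; parameters far too small to be secure)
p, q = 1000003, 1000033
n, p2 = p * p * q, p * p

def L(x):  # L(x) = (x - 1) / p, defined for x = 1 (mod p)
    return (x - 1) // p

# pick g whose order modulo p^2 is divisible by p
while True:
    g = secrets.randbelow(n - 2) + 2
    if L(pow(g, p - 1, p2)) % p != 0:
        break
h = pow(g, n, n)

def enc(m):
    assert 0 <= m < p
    r = secrets.randbelow(n)
    return pow(g, m, n) * pow(h, r, n) % n

def dec(c):
    a = L(pow(c, p - 1, p2))
    b = L(pow(g, p - 1, p2))
    return a * pow(b, -1, p) % p  # pow(.., -1, p) needs Python >= 3.8

assert dec(enc(123456)) == 123456
assert dec(enc(40) * enc(2) % n) == 42  # additive homomorphism mod p
```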
Last updated:  2024-04-27
Asymptotics for the standard block size in primal lattice attacks: second order, formally verified
Daniel J. Bernstein
Many proposals of lattice-based cryptosystems estimate security levels by following a recipe introduced in the New Hope proposal. This recipe, given a lattice dimension $n$, modulus $q$, and standard deviation $s$, outputs a "primal block size" $\beta$ and a security level growing linearly with $\beta$. This $\beta$ is minimal such that some $\kappa$ satisfies $((n+\kappa)s^2+1)^{1/2} < (d/\beta)^{1/2} \delta^{2\beta-d-1} q^{\kappa/d}$, where $d = n + \kappa + 1$ and $\delta = (\beta(\pi\beta)^{1/\beta}/(2\pi e))^{1/(2(\beta-1))}$. This paper identifies how $\beta$ grows with $n$, with enough precision to show the impact of adjusting $q$ and $s$ by constant factors. Specifically, this paper shows that if $\lg q$ grows as $Q_0 \lg n + Q_1 + o(1)$ and $\lg s$ grows as $S_0 \lg n + S_1 + o(1)$, where $0 \le S_0 \le 1/2 < Q_0 - S_0$, then $\beta/n$ grows as $z_0 + (z_1+o(1))/\lg n$, where $z_0 = 2Q_0/(Q_0-S_0+1/2)^2$ and $z_1$ has a formula given in the paper. The paper provides a traditional-format proof and a proof verified by the HOL Light proof assistant.
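The recipe is directly mechanizable. A naive sketch that searches for the minimal $\beta$ (working in log scale to avoid overflow; the example parameters at the bottom are our own illustrative choice, not taken from the paper):

```python
import math

def log2_delta(beta):
    # lg of delta = (beta * (pi*beta)^(1/beta) / (2*pi*e))^(1/(2*(beta-1)))
    inner = (math.log2(beta) + math.log2(math.pi * beta) / beta
             - math.log2(2 * math.pi * math.e))
    return inner / (2 * (beta - 1))

def primal_block_size(n, q, s):
    """Smallest beta such that some kappa satisfies the New Hope inequality."""
    for beta in range(60, 2 * n):
        ld = log2_delta(beta)
        for kappa in range(2 * n):
            d = n + kappa + 1
            lhs = 0.5 * math.log2((n + kappa) * s * s + 1)
            rhs = (0.5 * math.log2(d / beta) + (2 * beta - d - 1) * ld
                   + (kappa / d) * math.log2(q))
            if lhs < rhs:
                return beta, kappa
    return None

# Illustrative inputs only: n = 1024, q = 12289, s = sqrt(8)
print(primal_block_size(1024, 12289, math.sqrt(8)))
```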
Last updated:  2024-04-16
Hash your Keys before Signing: BUFF Security of the Additional NIST PQC Signatures
Thomas Aulbach, Samed Düzlü, Michael Meyer, Patrick Struck, and Maximiliane Weishäupl
In this work, we analyze the so-called Beyond UnForgeability Features (BUFF) security of the submissions to the current standardization process of additional signatures by NIST. The BUFF notions formalize security against maliciously generated keys and have various real-world use cases, where security can be guaranteed despite misuse potential on a protocol level. Consequently, NIST declared the security against the BUFF notions as desirable features. Despite NIST's interest, only $6$ out of $40$ schemes consider BUFF security at all, but none give a detailed analysis. We close this gap by analyzing the schemes based on codes, isogenies, lattices, and multivariate equations. The results vary from schemes that achieve neither notion (e.g., Wave) to schemes that achieve all notions (e.g., PROV). In particular, we dispute certain claims by SQUIRRELS and VOX regarding their BUFF security. Resulting from our analysis, we observe that three schemes (CROSS, HAWK and PROV) achieve BUFF security without having the hash of public key and message as part of the signature, as BUFF transformed schemes would have. HAWK and PROV essentially use the lighter PS-3 transform by Pornin and Stern (ACNS'05). We further point out whether this transform suffices for the other schemes to achieve the BUFF notions, with both positive and negative results.
Last updated:  2024-04-16
Revisiting the Security of Fiat-Shamir Signature Schemes under Superposition Attacks
Quan Yuan, Chao Sun, and Tsuyoshi Takagi
The Fiat-Shamir transformation is a widely employed technique for constructing signature schemes, known as Fiat-Shamir signature schemes (FS-SIG), from secure identification (ID) schemes. However, the existing security proof only takes into account classical signing queries and does not consider superposition attacks, where the signing oracle is quantum-accessible to the adversaries. Alagic et al. proposed a security model called blind unforgeability (BUF, Eurocrypt'20), regarded as a preferable notion under superposition attacks. In this paper, we conduct a thorough security analysis of FS-SIGs in the BUF model. First, we propose a special property for ID schemes called quantum special honest-verifier zero-knowledge (qsHVZK), which is stronger than classical HVZK. We prove that qsHVZK is a sufficient property for BUF (with implicit rejection) of the resulting FS-SIG in the quantum random oracle model (QROM). Next, we give an efficient construction of (a weaker variant of) a qsHVZK ID scheme based on the quantum hardness of LWE problems. To avoid strengthening the HVZK requirement further, we then move to deterministic FS-SIG (DFS) for more efficient constructions. We show that if the pseudorandom function is quantum-access-secure (QPRF), then we can prove the BUF security of the resulting DFS with only the requirement of standard (multi-)HVZK in the QROM. A similar result can be extended to the hedged version of FS-SIG.
Last updated:  2024-04-16
Blind-Folded: Simple Power Analysis Attacks using Data with a Single Trace and no Training
Xunyue Hu, Quentin L. Meunier, and Emmanuelle Encrenaz
Side-channel attacks target the recovery of key material in cryptographic implementations by measuring physical quantities such as power consumption during the execution of a program. Simple power attacks consist in deducing secret information from a trace using a single or a few samples, as opposed to differential attacks, which require many traces. Software cryptographic implementations now all contain a data-independent execution path, but often do not consider variations in power consumption associated with data. In this work, we show that a technique commonly used to select a value from different possible values in a control-independent way leads to significant power differences depending on the value selected. This difference is so important that a single sample can be considered for attacking one condition, and no training on other traces is required. We exploit this finding to propose the first single-trace attack without any knowledge gained from previous executions, using trace folding. We target the two modular exponentiation implementations in Libgcrypt, obtaining respectively 100% and 99.98% of correct bits on average over 30 executions using 2,048-bit exponents. We also use this technique to attack the scalar multiplication in ECDSA, successfully recovering all secret nonces over 1,000 executions. Finally, the insights we gained from this work allow us to show that a countermeasure proposed in the literature for the safe loading of precomputed operands in the context of windowed implementations can be attacked as well.
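The selection technique in question is the classic arithmetic-masking idiom. A Python model (the 32-bit width and the Hamming-weight leakage model are our illustrative assumptions) shows why the value, not the control flow, drives the leakage:

```python
def ct_select(bit, a, b, width=32):
    """Branch-free selection: returns a if bit == 1, else b.
    The instruction sequence is the same in both cases (control-independent),
    but the intermediate values, and hence their Hamming weights, are not."""
    full = (1 << width) - 1
    mask = (-bit) & full  # all-ones if bit == 1, all-zeros if bit == 0
    return (mask & a) | ((full ^ mask) & b)

hw = lambda x: bin(x).count("1")  # Hamming weight, a common power model
r0 = ct_select(0, 0xFFFFFFFF, 0x00000001)
r1 = ct_select(1, 0xFFFFFFFF, 0x00000001)
print(hw(r0), hw(r1))  # 1 vs 32: one power sample may suffice to distinguish
```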
Last updated:  2024-04-16
Digital Signatures for Authenticating Compressed JPEG Images
Simon Erfurth
We construct a digital signature scheme for images that allows the image to be compressed without invalidating the signature. More specifically, given a JPEG image signed with our signature scheme, a third party can compress the image using JPEG compression and, as long as the quantization tables only include powers of two, derive a valid signature for the compressed image, without access to the secret signing key and without interaction with the signer. Our scheme is constructed using a standard digital signature scheme and a hash function as building blocks. This form of signature, which allows image compression, could be useful in mitigating some of the threats posed by generative AI and fake news, without interfering with all uses of generative AI. Taking inspiration from related signature schemes, we define a notion of unforgeability and prove our construction to be secure. Additionally, we show that our signatures have size 32.5 kb under standard parameter choices. Using image quality assessment metrics, we show that JPEG compression with parameters as specified by our scheme does not result in perceivably reduced visual fidelity compared to standard JPEG compression.
Last updated:  2024-04-18
Hidden $\Delta$-fairness: A Novel Notion for Fair Secure Two-Party Computation
Saskia Bayreuther, Robin Berger, Felix Dörre, Jeremias Mechler, and Jörn Müller-Quade
Secure two-party computation allows two mutually distrusting parties to compute a joint function over their inputs, guaranteeing properties such as input privacy and correctness. For many tasks, such as the joint computation of statistics, it is important that when one party receives the result of the computation, the other party also receives it. Unfortunately, this property, called fairness, is unattainable in the two-party setting for arbitrary functions, so weaker variants have been proposed. One such notion, proposed by Pass et al. (EUROCRYPT 2017), is called $\Delta$-fairness. Informally, it guarantees that if a corrupt party receives the output in round $r$ and stops participating in the protocol, then the honest party receives the output by round $\Delta(r)$. This notion is achieved using so-called secure enclaves. In many settings, $\Delta$-fairness is not sufficient, because a corrupt party is guaranteed to receive its output before the honest party, giving the corrupt party an advantage in further interaction. Worse, as $\Delta$ is known to the corrupt party, it can abort the protocol at the moment that is most advantageous to it. We extend the concept of $\Delta$-fairness by introducing a new fairness notion, which we call hidden $\Delta$-fairness, that addresses these problems. First, under our new notion, a corrupt party cannot be sure to benefit from aborting, because with probability $\frac{1}{2}$ it does not learn the result first. Moreover, $\Delta$ and other parameters are sampled according to a given distribution and remain unknown to the participants in the computation. We propose a 2PC protocol that achieves hidden $\Delta$-fairness, also using secure enclaves, and prove its security in the Generalized Universal Composability (GUC) framework.
Last updated:  2024-04-24
Encryption Based Covert Channel for Large Language Models
Yongge Wang
Transformer neural networks have gained significant traction since their introduction, becoming pivotal across diverse domains. Particularly in large language models like Claude and ChatGPT, the transformer architecture has demonstrated remarkable efficacy. This paper provides a concise overview of transformer neural networks and delves into their security considerations, focusing on covert channel attacks and their implications for the safety of large language models. We present a covert channel utilizing encryption and demonstrate its efficacy in circumventing Claude.ai's security measures. Our experiment reveals that Claude.ai appears to log user queries: it blocked our attack within two days of our initial successful breach. This raises two concerns within the community: (1) the extensive logging of user inputs by large language models could pose privacy risks for users, and (2) it may deter academic research on the security of such models due to the lack of experiment repeatability.
Last updated:  2024-04-29
A Complete Beginner Guide to the Number Theoretic Transform (NTT)
Ardianto Satriawan, Rella Mareta, and Hanho Lee
The Number Theoretic Transform (NTT) is a powerful mathematical tool that has become increasingly important in Post-Quantum Cryptography (PQC) and Homomorphic Encryption (HE). When implemented with Fast Fourier Transform-style algorithms, it computes polynomial multiplication via the convolution theorem with quasi-linear complexity $O(n \log{n})$ instead of $O(n^2)$, which has made it a key component in modern cryptography. The FFT-style NTT algorithm, or fast-NTT, is particularly useful in lattice-based cryptography. In this short note, we briefly introduce the basic concepts of linear, cyclic, and negacyclic convolutions via traditional schoolbook algorithms, the traditional NTT and its inverse (INTT), and FFT-like versions of the NTT/INTT. We then provide consistent toy examples across the different concepts and algorithms to build up the basics of the NTT.
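To make the convolution theorem concrete, here is a minimal toy sketch in Python (our own illustration, not code from the note; the modulus $q = 17$, length $n = 4$, and root of unity $\omega = 4$ are assumed toy parameters) of cyclic convolution computed through a schoolbook NTT:

```python
# Toy sketch: cyclic convolution via a schoolbook O(n^2) NTT over Z_q.
# Assumed toy parameters: q = 17, n = 4 (n divides q - 1), and
# omega = 4, a primitive 4th root of unity mod 17 (4^2 = 16, 4^4 = 1).
q, n, omega = 17, 4, 4

def ntt(a, w):
    # Forward transform: A[k] = sum_j a[j] * w^(j*k) mod q
    return [sum(a[j] * pow(w, j * k, q) for j in range(n)) % q
            for k in range(n)]

def intt(A):
    # Inverse transform: use omega^-1 and scale by n^-1 mod q.
    w_inv = pow(omega, -1, q)
    n_inv = pow(n, -1, q)
    return [(n_inv * x) % q for x in ntt(A, w_inv)]

def cyclic_convolution(a, b):
    # Convolution theorem: transform, multiply pointwise, invert.
    A, B = ntt(a, omega), ntt(b, omega)
    return intt([(x * y) % q for x, y in zip(A, B)])

print(cyclic_convolution([1, 2, 3, 4], [1, 1, 0, 0]))  # [5, 3, 5, 7]
```

A fast-NTT replaces the quadratic loops above with FFT-style butterfly recursions, which is where the $O(n \log{n})$ complexity comes from.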
Last updated:  2024-04-17
Efficient Implementations of Square-root Vélu's Formulas
Jianming Lin, Weize Wang, Chang-An Zhao, and Yuhao Zheng
In the implementation of isogeny-based schemes, V\'{e}lu's formulas are essential for constructing and evaluating odd-degree isogenies. Bernstein et al. proposed an approach known as $\surd$\'{e}lu, which computes an $\ell$-isogeny at a cost of $\tilde{\mathcal{O}}(\sqrt{\ell})$ finite field operations. This paper improves the efficiency of the implementation of $\surd$\'{e}lu in two aspects: optimizing the partition involved in $\surd$\'{e}lu, and speeding up the computation of the sums of products used in polynomial multiplications over a finite field $\mathbb{F}_p$ with large prime characteristic $p$. To optimize the partition, we adjust it to enhance the utilization of $x$-coordinates and eliminate computational redundancy, which ultimately reduces the number of $\mathbb{F}_p$-multiplications. To speed up the sums of products, we employ two techniques: lazy reduction (abbreviated as LZYR) and generalized interleaved Montgomery multiplication (abbreviated as INTL). These techniques aim to minimize underlying operations such as $\mathbb{F}_p$-reductions and assembly memory instructions. We present an optimized C and assembly implementation of $\surd$\'{e}lu for the CTIDH512 instantiation. In terms of $\ell$-isogeny computations in CTIDH512, the new partition + INTL (resp. new partition + LZYR) offers an improvement in clock cycles of up to $16.05\%$ (resp. $10.96\%$) compared to the previous work.
Last updated:  2024-04-16
A Note on Quantum Algorithms for Lattice Problems
Omri Shmueli
Recently, a paper by Chen (eprint 2024/555) has claimed to construct a quantum polynomial-time algorithm that solves the Learning With Errors Problem (Regev, JACM 2009), for a range of parameters. As a byproduct of Chen's result, it follows that Chen's algorithm solves the Gap Shortest Vector Problem, for gap $g(n) = \tilde{O}\left( n^{4.5} \right)$. In this short note we point to an error in the claims of Chen's paper.
Last updated:  2024-04-17
Improved Alternating Moduli PRFs and Post-Quantum Signatures
Navid Alamati, Guru-Vamsi Policharla, Srinivasan Raghuraman, and Peter Rindal
We revisit the alternating moduli paradigm for constructing symmetric key primitives, with a focus on constructing highly efficient protocols to evaluate them using secure multi-party computation (MPC). The alternating moduli paradigm of Boneh et al. (TCC 2018) enables the construction of various symmetric key primitives with the common characteristic that the inputs are multiplied by two linear maps over different moduli, first over $\mathbb{F}_2$ and then over $\mathbb{F}_3$. The first contribution focuses on efficient two-party evaluation of alternating moduli PRFs, effectively building an oblivious pseudorandom function. We present a generalization of the PRF proposed by Boneh et al. (TCC 18) along with methods to lower the communication and computation. We then provide several variants of our protocols, with different computation and communication tradeoffs, for evaluating the PRF. Most are in the OT/VOLE hybrid model, while one is based on specialized garbling. Our most efficient protocol is about $3\times$ faster and requires $1.3\times$ less communication. Our next contribution is the efficient evaluation of the OWF $f(x)=B\cdot_3 (A\cdot_2 x)$ proposed by Dinur et al. (CRYPTO 21), where $A \in \mathbb{F}^{m\times n}_2, B\in\mathbb{F}^{t\times m}_3$ and $\cdot_p$ is multiplication mod $p$. This surprisingly simple OWF can be evaluated within MPC by secret sharing $[\hspace{-3px}[x]\hspace{-3px}]$ over $\mathbb{F}_2$, locally computing $[\hspace{-3px}[v]\hspace{-3px}]=A\cdot_2 [\hspace{-3px}[x]\hspace{-3px}]$, performing a modulus switching protocol to $\mathbb{F}_3$ shares, and then locally computing the output shares $[\hspace{-3px}[y]\hspace{-3px}]=B\cdot_3 [\hspace{-3px}[v]\hspace{-3px}]$. We design a bespoke MPC-in-the-Head (MPCitH) signature scheme that evaluates the OWF, achieving state-of-the-art performance. The resulting signature has a size ranging from 4.0-5.5 KB, a $2\text{-}3\times$ reduction compared to Dinur et al. To the best of our knowledge, this is only $\approx 5\%$ larger than the smallest signature based on symmetric key primitives, including the latest NIST PQC competition submissions. We additionally show that our core techniques can be extended to build very small post-quantum ring signatures for small-to-medium-sized rings that are competitive with state-of-the-art lattice-based schemes. Our techniques are in fact more generally applicable to set membership in MPCitH.
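The Dinur et al. OWF is simple enough to state in a few lines of code. Below is a minimal sketch of $f(x)=B\cdot_3 (A\cdot_2 x)$ in Python; the toy dimensions and random public matrices are our own illustrative choices, not the paper's concrete instantiation:

```python
import random

# Assumed toy dimensions; real instantiations use larger parameters.
n, m, t = 16, 16, 16

# Public matrices: A over F_2, B over F_3 (random for illustration).
A = [[random.randrange(2) for _ in range(n)] for _ in range(m)]
B = [[random.randrange(3) for _ in range(m)] for _ in range(t)]

def matvec(M, x, p):
    # Matrix-vector product with arithmetic mod p.
    return [sum(row[j] * x[j] for j in range(len(x))) % p for row in M]

def owf(x):
    v = matvec(A, x, 2)     # v = A ._2 x, a bit vector over F_2
    return matvec(B, v, 3)  # y = B ._3 v, entries in {0, 1, 2}

x = [random.randrange(2) for _ in range(n)]  # secret input in F_2^n
print(owf(x))
```

The interleaving of the two moduli is what is conjectured to make the function hard to invert, and, as described above, its MPC evaluation reduces to two local linear maps joined by a single modulus-switching step.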
Last updated:  2024-04-16
Fault Attack on SQIsign
JeongHwan Lee, Donghoe Heo, Hyeonhak Kim, Gyusang Kim, Suhri Kim, Heeseok Kim, and Seokhie Hong
In this paper, we introduce the first fault attack on SQIsign. By injecting a fault into the ideal generator during the commitment phase, we demonstrate a meaningful probability of inducing the generation of order $\mathcal{O}_0$. The probability is bounded by a single parameter, the degree of the commitment isogeny. We also show that the probability can be reasonably estimated by assuming uniform randomness of a random variable, and provide empirical evidence supporting the validity of this approximation. In addition, we identify a loop-abort vulnerability due to the iterative structure of the isogeny operation. Exploiting these vulnerabilities, we present key recovery fault attack scenarios for two versions of SQIsign---one deterministic and the other randomized. We then analyze the time complexity and the number of queries required for each attack. Finally, we discuss straightforward countermeasures that can be implemented against the attack.
Last updated:  2024-04-15
Dynamic Decentralized Functional Encryptions from Pairings in the Standard Model
Duy Nguyen
Dynamic Decentralized Functional Encryption (DDFE), introduced by Chotard et al. (CRYPTO'20), stands as a robust generalization of (Multi-Client) Functional Encryption. It enables users to dynamically join and contribute private inputs to individually-controlled joint functions, all without requiring a trusted authority. Agrawal et al. (TCC’21) further extended this line of research by presenting the first DDFE construction for function-hiding inner products (FH-IP-DDFE) in the random oracle model (ROM). Recently, Shi et al. (PKC'23) proposed the first Multi-Client Functional Encryption construction for function-hiding inner products based on standard assumptions without using random oracles. However, their construction still necessitates a trusted authority, leaving the question of whether a fully-fledged FH-IP-DDFE can exist in the standard model as an exciting open problem. In this work, we provide an affirmative answer to this question by proposing a FH-IP-DDFE construction based on the Symmetric External Diffie-Hellman (SXDH) assumption in the standard model. Our approach relies on a novel zero-sharing scheme termed Updatable Pseudorandom Zero Sharing, which introduces new properties related to updatability in both definition and security models. We further instantiate this scheme in groups where the Decisional Diffie-Hellman (DDH) assumption holds. Moreover, our proposed pseudorandom zero sharing scheme serves as a versatile tool to enhance the security of pairing-based DDFE constructions for functionalities beyond inner products. As a concrete example, we present the first DDFE for attribute-weighted sums in the standard model, complementing the recent ROM-based construction by Agrawal et al. (CRYPTO'23).
Last updated:  2024-04-15
Tight Multi-user Security of Ascon and Its Large Key Extension
Bishwajit Chakraborty, Chandranan Dhar, and Mridul Nandi
The Ascon cipher suite has recently become the preferred standard in the NIST Lightweight Cryptography standardization process. Despite its prominence, the initial dedicated security analysis for the Ascon mode was conducted quite recently. This analysis demonstrated that the Ascon AEAD mode offers superior security compared to the generic Duplex mode, but it was limited to a specific scenario: single-user nonce-respecting, with a capacity strictly larger than the key size. In this paper, we eliminate these constraints and provide a comprehensive security analysis of the Ascon AEAD mode in the multi-user setting, where the capacity need not be larger than the key size. Regarding data complexity $D$ and time complexity $T$, our analysis reveals that Ascon achieves AEAD security when $T$ is bounded by $\min\{2^{\kappa}/\mu, 2^c\}$ (where $\kappa$ is the key size, and $\mu$ is the number of users), and $DT$ is limited to $2^b$ (with $b$ denoting the size of the underlying permutation, set at 320 for Ascon). Our results align with NIST requirements, showing that Ascon allows for a tag size as small as 64 bits while supporting a higher rate of 192 bits, provided the number of users remains within recommended limits. However, this security becomes compromised as the number of users increases significantly. To address this issue, we propose a variant of the Ascon mode called LK-Ascon, which enables doubling the key size. This adjustment allows for a greater number of users without sacrificing security, while possibly offering additional resilience against quantum key recovery attacks. We establish tight bounds for LK-Ascon, and furthermore show that both Ascon and LK-Ascon maintain authenticity security even when facing nonce-misuse adversaries.
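As a quick numeric illustration of these bounds (our own back-of-the-envelope arithmetic; $\kappa = 128$, $c = 256$, and $b = 320$ are assumed example parameters in the spirit of Ascon-128):

```python
from math import log2

# Assumed example parameters: 128-bit key, 256-bit capacity,
# 320-bit permutation, as in Ascon-128.
kappa, c, b = 128, 256, 320

def max_log2_T(mu):
    # Time bound: T <= min(2^kappa / mu, 2^c)
    return min(kappa - log2(mu), c)

for mu in (1, 2**32, 2**64):
    t = max_log2_T(mu)
    # Data-time bound D*T <= 2^b caps D at the maximal T.
    print(f"users = 2^{log2(mu):.0f}: log2(T) <= {t:.0f}, "
          f"log2(D) <= {b - t:.0f}")
```

The output shows the time bound eroding linearly in the exponent as the number of users grows (128, 96, 64 bits), which is precisely the degradation that the larger key of LK-Ascon is designed to counter.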
Last updated:  2024-04-15
Assessing the quality of Random Number Generators through Neural Networks
José Luis Crespo, Javier González-Villa, Jaime Gutierrez, and Angel Valle
In this paper we address the use of Neural Networks (NNs) to assess the quality, and hence the safety, of several Random Number Generators (RNGs), focusing both on the vulnerability of classical Pseudo Random Number Generators (PRNGs), such as Linear Congruential Generators (LCGs) and the RC4 algorithm, and extending our analysis to non-conventional data sources, such as Quantum Random Number Generators (QRNGs) based on Vertical-Cavity Surface-Emitting Lasers (VCSELs). Our results yield a classification of generators according to their degree of susceptibility, underlining the fundamental role of design decisions in enhancing the safety of PRNGs. The influence of network architecture design and variations of the associated hyper-parameters was also explored, highlighting the effectiveness of longer sequence lengths and convolutional neural networks in enhancing the discrimination of PRNGs against other RNGs. Moreover, in the prediction domain, the proposed model is able to distinguish the raw data of our QRNG from truly random data, exhibiting a cross-entropy error of 0.52 on the test data set used. All these findings reveal the potential of NNs to enhance the security of RNGs, while highlighting the robustness of certain QRNGs, in particular the VCSEL-based variants, for high-quality random number generation applications.
Last updated:  2024-04-15
Determination of cryptographic tables and properties related to the revised boomerang and its application to a fundamental S-box
Said Eddahmani and Sihem Mesnager
In symmetric cryptography, vectorial Boolean functions over the finite field $\mathbb{F}_{2^n}$ are used to derive strong S-boxes. To this end, the S-box should satisfy a list of tests to resist existing attacks, such as the differential, linear, and boomerang attacks and their variants. Several tables are employed to measure an S-box's resistance, such as the difference distribution table (DDT) and the boomerang connectivity table (BCT). Following the boomerang attacks recently revisited in terms of the boomerang switch effect, with an illustration highlighting the power of this technique, a tool called the Boomerang Difference Table (BDT), an alternative to the classical BCT, was introduced. Subsequently, two novel tables were introduced, namely the Upper Boomerang Connectivity Table (UBCT) and the Lower Boomerang Connectivity Table (LBCT), which are considered improvements over the BCT while allowing a systematic evaluation of boomerangs that return over multiple rounds. This paper focuses on the new tools for measuring the revisited version of boomerang attacks and the related tables UBCT and LBCT, as well as the so-called Extended Boomerang Connectivity Table (EBCT). Specifically, we study the properties of these novel tools and investigate the corresponding tables. We also study their interconnections, their links to the DDT, and their values for affine-equivalent vectorial functions and compositional inverses of permutations of $\mathbb{F}_{2^n}$. Moreover, we introduce the concept of nontrivial boomerang connectivity uniformity and determine the explicit values of all the entries of the UBCT, LBCT, and EBCT for the important cryptographic case of the inverse function.
Last updated:  2024-04-15
On complexity of the problem of solving systems of tropical polynomial equations of degree two
Ivan Buchinskiy, Matvei Kotov, and Alexander Treier
In this paper, we investigate the computational complexity of solving a one-sided system of equations of degree two of a special form over the max-plus algebra. We also consider the asymptotic density of solvable systems of this form. Such systems arise in the analysis of some recently proposed tropical cryptography protocols. We show how this problem is related to the integer linear programming problem and prove that it is NP-complete. We show that the asymptotic density of solvable systems of this form, with some restrictions on the coefficients, the number of variables, and the number of equations, is 0. As a corollary, we prove that this problem (under the same restrictions) is decidable generically in polynomial time.
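For readers unfamiliar with the max-plus setting, the following minimal Python sketch (our own illustration, not code from the paper) shows what evaluating and checking such a degree-two system looks like; tropical addition is max and tropical multiplication is ordinary addition:

```python
# Max-plus semantics: a (+) b = max(a, b) and a (x) b = a + b, so a
# degree-two monomial c (x) x_i (x) x_j evaluates to c + x[i] + x[j].

def tropical_poly(monomials, x):
    # monomials: list of (coefficient, i, j) triples.
    return max(c + x[i] + x[j] for (c, i, j) in monomials)

def satisfies(system, x):
    # One-sided system: each polynomial must equal its right-hand side.
    return all(tropical_poly(poly, x) == rhs for (poly, rhs) in system)

# Hypothetical toy instance in two variables, for illustration only:
system = [
    ([(0, 0, 0), (1, 0, 1)], 3),  # max(2*x0, 1 + x0 + x1) = 3
    ([(2, 1, 1)], 4),             # 2 + 2*x1 = 4
]
print(satisfies(system, [1, 1]))  # True
```

Deciding whether some assignment satisfies such a system is the problem shown to be NP-complete, via the connection to integer linear programming.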
Last updated:  2024-04-15
Pairing Optimizations for Isogeny-based Cryptosystems
Shiping Cai, Kaizhan Lin, and Chang-An Zhao
In isogeny-based cryptography, bilinear pairings are regarded as a powerful tool in various applications, including key compression, public-key validation and torsion basis generation. However, in most isogeny-based protocols, the performance of pairing computations is unsatisfactory due to the high computational cost of the Miller function. Reducing the computational expense of the Miller function is crucial for enhancing the overall performance of pairing computations in isogeny-based cryptography. This paper addresses this efficiency bottleneck. To achieve this, we propose several techniques for a better implementation of pairings in isogeny-based cryptosystems. We use (modified) Jacobian coordinates and present new algorithms for Miller function computations to compute pairings of order $2^\bullet$ and $3^\bullet$. For pairings of arbitrary order, which are crucial for key compression in some SIDH-based schemes (such as M-SIDH and binSIDH), we combine Miller doublings with Miller additions/subtractions, leading to a considerable speedup. Moreover, the optimizations for pairing applications in CSIDH-based protocols are also considered in this paper. In particular, our approach for supersingularity verification in CSIDH is 15.3% faster than Doliskani's test, which is the state-of-the-art.
Last updated:  2024-04-15
PoMMES: Prevention of Micro-architectural Leakages in Masked Embedded Software
Jannik Zeitschner and Amir Moradi
Software solutions to address computational challenges are ubiquitous in our daily lives. One specific application area where software is often used is embedded systems, which, like other digital electronic devices, are vulnerable to side-channel analysis attacks. Although masking is the most common countermeasure and provides a solid theoretical foundation for ensuring security, recent research has revealed a crucial gap between theoretical and real-world security. This shortcoming stems from the micro-architectural effects of the underlying micro-processor. Common security models used to formally verify masking schemes, such as the d-probing model, fully ignore the micro-architectural leakages that cause certain instruction sequences to unintentionally recombine the shares. Manually writing masked assembly code that remains secure in the presence of such micro-architectural recombinations often involves trial and error, and is non-trivial even for experts. Motivated by this, we present PoMMES, which enables inexperienced software developers to automatically compile masked functions written in a high-level programming language into assembly code, while preserving the theoretically proven security in practice. Compared to the state of the art, our scheme is based on a general model of micro-architectural effects and allows the generation of practically secure masked software at arbitrary security orders for in-order processors. The major contribution of PoMMES is its micro-architecture-aware register allocation algorithm, one of the crucial steps in the compilation process. In addition to simulation-based assessments conducted with open-source tools dedicated to evaluating masked software implementations, we confirm the effectiveness of the PoMMES-generated code through experimental analysis. We present the results of power-consumption-based leakage assessments of several case studies running on a Cortex M0+ micro-controller, which is commonly deployed in industry.
Last updated:  2024-04-15
Tokenised Multi-client Provisioning for Dynamic Searchable Encryption with Forward and Backward Privacy
Arnab Bag, Sikhar Patranabis, and Debdeep Mukhopadhyay
Searchable Symmetric Encryption (SSE) has opened up an attractive avenue for privacy-preserving processing of outsourced data on untrusted cloud infrastructure. SSE aims to support efficient Boolean query processing with optimal storage and search overhead over large real databases. However, current constructions in the literature lack support for multi-client search and dynamic updates to the encrypted databases, which are essential requirements for the widespread deployment of SSE on real cloud infrastructures. Trivially extending a state-of-the-art single-client dynamic construction, such as ODXT (Patranabis et al., NDSS'21), incurs significant leakage that renders such an extension insecure in practice. Currently, no SSE construction in the literature offers efficient multi-client query processing and search with dynamic updates over large real databases while maintaining a benign leakage profile. This work presents the first dynamic multi-client SSE scheme, Nomos, supporting efficient multi-client conjunctive Boolean queries over an encrypted database. Precisely, Nomos is a multi-reader, single-writer construction that allows only the gate-keeper (or the data-owner), a trusted entity in the Nomos framework, to update the encrypted database stored on the adversarial server. Nomos achieves forward privacy and type-II backward privacy of dynamic SSE constructions while incurring less leakage than the trivial extension of ODXT to a multi-client setting. Furthermore, our construction is practically efficient and scalable, attaining linear encrypted storage and sublinear search overhead for conjunctive Boolean queries. We provide an experimental evaluation of a software implementation over an extensive real dataset containing millions of records. The results show that the performance of Nomos is comparable to that of state-of-the-art static conjunctive SSE schemes in practice.
Last updated:  2024-04-15
Split Gröbner Bases for Satisfiability Modulo Finite Fields
Alex Ozdemir, Shankara Pailoor, Alp Bassa, Kostas Ferles, Clark Barrett, and Işil Dillig
Satisfiability modulo finite fields enables automated verification for cryptosystems. Unfortunately, previous solvers scale poorly for even some simple systems of field equations, in part because they build a full Gröbner basis (GB) for the system. We propose a new solver that uses multiple, simpler GBs instead of one full GB. Our solver, implemented within the cvc5 SMT solver, admits specialized propagation algorithms, e.g., for understanding bitsums. Experiments show that it solves important bitsum-heavy determinism benchmarks far faster than prior solvers, without introducing much overhead for other benchmarks.
Last updated:  2024-04-26
MiniCast: Minimizing the Communication Complexity of Reliable Broadcast
Thomas Locher and Victor Shoup
We give a new protocol for reliable broadcast with improved communication complexity for long messages. Namely, to reliably broadcast a message $m$ over an asynchronous network to a set of $n$ parties, of which fewer than $n/3$ may be corrupt, our protocol achieves a communication complexity of $1.5 |m| n + O( \kappa n^2 \log(n) )$, where $\kappa$ is the output length of a collision-resistant hash function. This improves on the previously best known bound for long messages of $2 |m| n + O( \kappa n^2 \log(n) )$.
Last updated:  2024-04-15
Large-Scale Private Set Intersection in the Client-Server Setting
Yunqing Sun, Jonathan Katz, Mariana Raykova, Phillipp Schoppmann, and Xiao Wang
Private set intersection (PSI) allows two parties to compute the intersection of their sets without revealing anything else. In some applications of PSI, a server holds a large set and needs to run PSI with many clients, each with its own small set. In this setting, however, all existing protocols fall short: they either incur too much cost to compute the intersections for many clients or cannot achieve the desired security requirements. We design a protocol that particularly suits this setting, with simulation-based security against malicious adversaries. In our protocol, the server publishes a one-time, linear-size encoding of its set. Then, multiple clients can each perform a cheap interaction with the server of complexity linear in the size of each client's set. A key ingredient of our protocol is an efficient instantiation of an oblivious verifiable unpredictable function, which could be of independent interest. To obtain the intersection, the client can download the encodings directly, which can be accelerated via content distribution networks or peer-to-peer networks since the same encoding is used by all clients; alternatively, clients can fetch only the relevant parts using verifiable private information retrieval. Our implementation shows very high efficiency. For a server holding $10^8$ elements and each client holding $10^3$ elements, the size of the server's encoding is 800 MB; interacting with each client uses 60 MB of communication and runs in under 5 s over a WAN with 120 Mbps bandwidth. Compared with the state-of-the-art PSI protocol, our scheme requires only 0.017 USD per client on an AWS server, which is 5x lower.