Papers updated in last 365 days
SASTA: Ambushing Hybrid Homomorphic Encryption Schemes with a Single Fault
The rising tide of data breaches targeting large data storage centres and servers has raised serious privacy and security concerns. Homomorphic Encryption schemes offer an effective defence against such attacks, but their adoption has been hindered by substantial computational and communication overheads, particularly on the client's side.
The Hybrid Homomorphic Encryption (HHE) protocol was developed to mitigate these issues. However, the susceptibility of HHE to strong attacks, specifically physical attacks, has been largely unexplored.
While physical attacks like the Differential Fault Analysis (DFA) have proved very effective in the field of symmetric cryptography, prior works have largely relied on strong assumptions like nonce reuse, limiting their feasibility in a real-world setting.
In this work, we introduce a novel attack, SASTA, which presents, to the best of our knowledge, the first generalized analysis of HHE under DFA. Our analysis uncovers a significant limitation of the HHE protocol where a single fault leads to complete key recovery not only for the standard scheme, AES, but also for the new HHE-tailored Symmetric Encryption (SE) schemes -- RASTA, PASTA, MASTA, and HERA. We further extend SASTA to effectively target Authenticated Transciphering protocols. Unlike prior works, the key advantage of SASTA is that it does not require nonce reuse.
We demonstrate a proof-of-concept of our attack on an off-the-shelf ATXmega128D4-AU microcontroller running HHE firmware and mount end-to-end key recovery attacks. Finally, we discuss conventional countermeasures to defend against SASTA. Our work highlights that despite HHE's advantages of improving performance and reducing communication overhead, further analysis of its security guarantees is required.
A New Approach to Efficient and Secure Fixed-point Computation
Secure Multi-Party Computation (MPC) constructions typically allow computation over a finite field or ring. While this suffices for many applications, certain real-world applications require the use of decimal numbers.
While it is possible to emulate floating-point operations in MPC, fixed-point computation has gained more traction in the practical space due to its simplicity and efficient realizations.
Even so, current protocols for fixed-point MPC still require computing a secure truncation after each multiplication gate.
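For intuition (an illustration of the background, not the paper's protocol): in plaintext fixed-point arithmetic a real number is stored as an integer scaled by $2^f$, so an integer multiply doubles the scale and must be followed by a truncation by the public constant $2^f$; it is exactly this truncation that is expensive on secret-shared values. A minimal Python sketch:

```python
# Plaintext fixed-point arithmetic with f fractional bits: the real number x
# is represented by the integer round(x * 2^f). After an integer multiply the
# scale doubles to 2^(2f), so a truncation (division by 2^f) is required --
# the step that is costly to perform on secret-shared values in MPC.
f = 16
enc = lambda x: round(x * (1 << f))
dec = lambda v: v / (1 << f)

a, b = enc(3.25), enc(-1.5)
prod = a * b              # scale is now 2^(2f)
prod_trunc = prod >> f    # truncate back to scale 2^f
print(dec(prod_trunc))    # -4.875
```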
In this paper, we show a new paradigm for realizing fixed-point MPC.
Starting from an existing MPC protocol over arbitrary, large, finite fields or rings, we show how to realize MPC over a residue number system (RNS).
This allows us to leverage certain mathematical structures to construct a secure algorithm for efficient approximate truncation by a static and public value.
We then show how this can be used to realize highly efficient secure fixed-point computation.
In contrast to previous approaches, our protocol does not require any multiplications of secret values in the underlying MPC scheme to realize truncation but instead relies on preprocessed pairs of correlated random values, which we show can be constructed very efficiently, when accepting a small amount of leakage and robustness in the strong, covert model.
We proceed to implement our protocol, with SPDZ as the underlying MPC protocol, and achieve significantly faster fixed-point multiplication.
Private Analytics via Streaming, Sketching, and Silently Verifiable Proofs
We present Whisper, a system for privacy-preserving collection of aggregate statistics. Like prior systems, a Whisper deployment consists of a small set of non-colluding servers; these servers compute aggregate statistics over data from a large number of users without learning the data of any individual user. Whisper's main contribution is that its server-to-server communication cost and its server-side storage costs scale sublinearly with the total number of users. In particular, prior systems required the servers to exchange a few bits of information to verify the well-formedness of each client submission. In contrast, Whisper uses silently verifiable proofs, a new type of proof system on secret-shared data that allows the servers to verify an arbitrarily large batch of proofs by exchanging a single 128-bit string. This improvement comes with increased client-to-server communication, which, in cloud computing, is typically cheaper (or even free) than the cost of egress for server-to-server communication. To reduce server storage, Whisper approximates certain statistics using small-space sketching data structures. Applying randomized sketches in an environment with adversarial clients requires a careful and novel security analysis. In a deployment with two servers and 100,000 clients of which 1% are malicious, Whisper can improve server-to-server communication for vector sum by three orders of magnitude while each client's communication increases by only 10%.
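To illustrate the flavor of batch verification with one short exchange (a toy sketch under assumptions not in the paper: each server holds additive shares of per-submission check values that must all be zero, and both servers derive the same public random coefficients), each server can locally compress its entire batch of shares into a single field element via a random linear combination, so the exchanged data is one element regardless of batch size:

```python
import secrets

# Toy model of batch verification with one short exchange (not Whisper's
# actual proof system): each of two servers holds an additive share of a
# per-submission "check value" that must be 0 for well-formed submissions.
p = 2**127 - 1   # a 127-bit Mersenne prime standing in for a ~128-bit field
N = 100_000
checks = [0] * N                                   # all clients honest here
shares0 = [secrets.randbelow(p) for _ in range(N)]
shares1 = [(c - s) % p for c, s in zip(checks, shares0)]

# Both servers derive identical public random coefficients (e.g. from a
# shared PRG seed) and each compresses its N shares into ONE field element.
coeffs = [secrets.randbelow(p) for _ in range(N)]
z0 = sum(r * s for r, s in zip(coeffs, shares0)) % p
z1 = sum(r * s for r, s in zip(coeffs, shares1)) % p

# One short message each way; accept iff the combined value is zero.
# If any check value were nonzero, z0 + z1 would be nonzero w.h.p.
print((z0 + z1) % p == 0)   # True
```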
Batch PIR and Labeled PSI with Oblivious Ciphertext Compression
In this paper, we study two problems: oblivious compression and decompression of ciphertexts. In oblivious compression, a server holds a set of ciphertexts with a subset of encryptions of zeroes whose positions are only known to the client. The goal is for the server to effectively compress the ciphertexts obliviously, while preserving the non-zero plaintexts and without learning the plaintext values. For oblivious decompression, the client, instead, succinctly encodes a sequence of plaintexts such that the server may decode encryptions of all plaintext values, but the zeroes may be replaced with arbitrary values. We present solutions to both problems that produce lossless compressions only 5% larger than the optimal minimum, using only additive homomorphism. The crux of both algorithms involves embedding ciphertexts into random linear systems that are efficiently solvable.
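A plaintext toy sketch of the compression idea (our illustration; plain integers stand in for additively homomorphic ciphertexts, under which the server could compute the same public linear combinations without seeing the plaintexts): the server compresses $n$ slots into $k$ random linear combinations, and the client, knowing which $k$ positions are non-zero, solves a $k \times k$ linear system to recover them.

```python
import random

# Toy model of oblivious compression (our illustration): integers stand in
# for additively homomorphic ciphertexts.
random.seed(1)
p = 2**31 - 1
n, idx = 8, [2, 5]            # 8 slots; the CLIENT knows slots 2, 5 are nonzero
plain = [0] * n
plain[2], plain[5] = 111, 222

k = len(idx)
# Server: compress n slots into k public random linear combinations.
A = [[random.randrange(p) for _ in range(n)] for _ in range(k)]
comp = [sum(a * x for a, x in zip(row, plain)) % p for row in A]

# Client: after decryption, solve the k-by-k system restricted to the known
# nonzero positions (invertible with overwhelming probability for random A).
M = [[A[r][c] for c in idx] + [comp[r]] for r in range(k)]
for col in range(k):
    piv = next(r for r in range(col, k) if M[r][col])   # pivot row
    M[col], M[piv] = M[piv], M[col]
    inv = pow(M[col][col], -1, p)
    M[col] = [v * inv % p for v in M[col]]
    for r in range(k):
        if r != col and M[r][col]:
            t = M[r][col]
            M[r] = [(v - t * w) % p for v, w in zip(M[r], M[col])]
print([row[k] for row in M])   # [111, 222]
```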
Using our compression schemes, we obtain state-of-the-art schemes for batch private information retrieval (PIR) where a client wishes to privately retrieve multiple entries from a server-held database in one query. We show that our compression schemes may be used to reduce communication by up to 30% for batch PIR in both the single- and two-server settings.
Additionally, we study labeled private set intersection (PSI) in the unbalanced setting where one party's set is significantly smaller than the other party's set and each entry has associated data. By utilizing our novel compression algorithm, we present a protocol with 65-88% reduction in communication with comparable computation compared to prior works.
Verifiable FHE via Lattice-based SNARKs
Fully Homomorphic Encryption (FHE) is a prevalent cryptographic primitive that allows for computation on encrypted data. In various cryptographic protocols, this enables outsourcing computation to a third party while retaining the privacy of the inputs to the computation. However, these schemes make an honest-but-curious assumption about the adversary. Previous work has tried to remove this assumption by combining FHE with Verifiable Computation (VC). Recent work has increased the flexibility of this approach by introducing integrity checks for homomorphic computations over rings.
However, efficient FHE for circuits of large multiplicative depth also requires non-ring computations called maintenance operations, i.e., mod-switching and key-switching, which cannot be efficiently verified by existing constructions. We propose the first efficiently verifiable FHE scheme that allows for arbitrary-depth homomorphic circuits by utilizing the double-CRT representation in which FHE schemes are typically computed, and using lattice-based SNARKs to prove components of this computation separately, including the maintenance operations. Therefore, our construction can theoretically handle bootstrapping operations.
We also present the first implementation of a verifiable computation on encrypted data for a computation that contains multiple ciphertext-ciphertext multiplications.
Concretely, we verify the homomorphic computation of an approximate neural network containing three layers and more than 100 ciphertexts in less than 1 second while maintaining reasonable prover costs.
Efficient Linkable Ring Signatures: New Framework and Post-Quantum Instantiations
In this paper, we introduce a new framework for constructing linkable ring signatures (LRS). Our framework is based purely on signatures of knowledge (SoK) which allows one to issue signatures on behalf of any NP-statement using the corresponding witness. Our framework enjoys the following advantages: (1) the security of the resulting LRS depends only on the security of the underlying SoK; (2) the resulting LRS naturally supports online/offline signing (resp. verification), where the output of the offline signing (resp. verification) can be re-used across signatures of the same ring. For a ring size , our framework requires an SoK of the NP statement with size .
To instantiate our framework, we adapt the well-known post-quantum secure non-interactive argument of knowledge (NIAoK), ethSTARK, into an SoK. This SoK is inherently post-quantum secure and has a signature size poly-logarithmic in the size of the NP statement. Thus, our resulting LRS has a signature size of . By comparison, existing post-quantum ring signatures, regardless of linkability considerations, have signature sizes of at best. Furthermore, leveraging online/offline verification, part of the verification of signatures on the same ring can be shared, resulting in a state-of-the-art amortized verification cost of .
Our LRS also performs favourably against existing schemes in practical scenarios. Concretely, our scheme has the smallest signature size among all post-quantum linkable ring signatures with non-slanderability for ring size larger than . In our experiment, at -bit security and ring size of , our LRS has a size of KB, and an amortized verification cost of ms, surpassing the state-of-the-art by a significant margin. Even without considering amortization, the verification time for a single signature is ms, comparable to those featuring linear signature size. A similar performance advantage can also be seen at signing. Furthermore, our LRS has extremely short public keys ( bytes), while public keys of existing constructions are in the order of kilobytes.
FE[r]Chain: Enforcing Fairness in Blockchain Data Exchanges Through Verifiable Functional Encryption
Functional Encryption (FE) allows users to extract specific function-related information from encrypted data while preserving the privacy of the underlying plaintext. Though significant research has been devoted to developing secure and efficient Multi-Input Functional Encryption schemes supporting diverse functions, there remains a noticeable research gap in the development of verifiable FE schemes. Functionality and performance have received considerable attention; however, the crucial aspect of verifiability in FE has been relatively understudied. Another important aspect that prior research in FE with outsourced decryption has not adequately addressed is the fairness of the data-for-money exchange between a curator and an analyst. This paper focuses on addressing these gaps by proposing a verifiable FE scheme for inner product computation. The scheme not only supports the multi-client setting but also extends its functionality to accommodate multiple users -- an essential feature in modern privacy-respecting services. Additionally, it demonstrates how this FE scheme can be effectively utilized to ensure fairness and atomicity in a payment protocol, further enhancing the trustworthiness of data exchanges.
Secure Latent Dirichlet Allocation
Topic modelling refers to a popular set of techniques used to discover hidden topics that occur in a collection of documents. These topics can, for example, be used to categorize documents or label text for further processing. One popular topic modelling technique is Latent Dirichlet Allocation (LDA). In topic modelling scenarios, the documents are often assumed to be in one, centralized dataset. However, sometimes documents are held by different parties, and contain privacy- or commercially-sensitive information that cannot be shared.
We present a novel, decentralized approach to train an LDA model securely without having to share any information about the content of the documents with the other parties. We preserve the privacy of the individual parties using a combination of privacy enhancing technologies.
We show that our decentralized, privacy preserving LDA solution has a similar accuracy compared to an (insecure) centralised approach. With -bit Paillier keys, a topic model with topics and words can be trained in around hours. Furthermore, we show that the solution scales linearly in the total number of words and the number of topics.
Cryptanalytic Audit of the XHash Sponge Function and its Components
In this audit we started from the security analysis provided in the design documentation of XHash8/12. We extended the analysis in several directions and confirmed the security claims that were made by the designers.
Implementation and Performance Analysis of Homomorphic Signature Schemes
Homomorphic signatures allow one to validate computation on signed data. Alice, holding a dataset, uses her secret key to sign the data and stores the authenticated dataset on a remote server. The server can later (publicly) compute a function of the dataset, together with a signature certifying that the result is indeed the correct output of that computation. Over the last fifteen years, the problem of realizing homomorphic signatures has been the focus of numerous research works, with constructions now ranging from very efficient ones supporting linear functions to very expressive ones supporting (up to) arbitrary circuits. In this work we tackle the question of assessing the practicality of schemes belonging to this latter class. Specifically, we implement the GVW lattice-based scheme for circuits from STOC 2015 and two recently proposed pairing-based constructions building on functional commitments. Our experiments show that both pairing-based schemes outperform GVW on all fronts.
Monchi: Multi-scheme Optimization For Collaborative Homomorphic Identification
This paper introduces a novel protocol for privacy-preserving biometric identification, named Monchi, that combines the use of homomorphic encryption for the computation of the identification score with function secret sharing to obliviously compare this score with a given threshold and finally output the binary result. Given the cost of homomorphic encryption (BFV in this solution), we study and evaluate the integration of two packing solutions that enable the regrouping of multiple templates in one ciphertext to meaningfully improve efficiency. We propose an end-to-end protocol, prove it secure, and implement it. Our experimental results attest to Monchi's applicability to the real-life use case of an airplane boarding scenario with 1000 passengers, taking less than one second to authorize or deny access to the plane for each passenger via biometric identification while maintaining the privacy of all passengers.
A Complete Beginner Guide to the Number Theoretic Transform (NTT)
The Number Theoretic Transform (NTT) is a powerful mathematical tool that has become increasingly important in developing Post Quantum Cryptography (PQC) and Homomorphic Encryption (HE). Its ability to efficiently calculate polynomial multiplication using the convolution theorem with a quasi-linear complexity of $O(n \log n)$, instead of $O(n^2)$, when implemented with Fast Fourier Transform-style algorithms has made it a key component in modern cryptography. The FFT-style NTT algorithm, or fast-NTT, is particularly useful in lattice-based cryptography. In this short note, we briefly introduce the basic concepts of linear, cyclic, and negacyclic convolutions via traditional schoolbook algorithms, the traditional NTT, its inverse (INTT), and FFT-like versions of the NTT/INTT. We then provide consistent toy examples across the different concepts and algorithms to convey the basics of the NTT.
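As a taste of the note's subject matter, here is a minimal schoolbook $O(n^2)$ NTT over $\mathbb{Z}_{17}$ with $n = 4$ (toy parameters chosen for illustration; a fast-NTT would recurse FFT-style instead), used to compute a cyclic convolution via the convolution theorem:

```python
# Toy cyclic convolution via a naive O(n^2) NTT over Z_17, n = 4.
# Illustrative parameters: q = 17 (prime with 4 | q-1) and omega = 4,
# a primitive 4th root of unity mod 17 (since 4^2 = 16 = -1 mod 17).
q, n, omega = 17, 4, 4

def ntt(a, root):
    # Schoolbook transform: X[k] = sum_j a[j] * root^(j*k) mod q
    return [sum(a[j] * pow(root, j * k, q) for j in range(n)) % q
            for k in range(n)]

def intt(A):
    inv_n = pow(n, -1, q)               # n^{-1} mod q
    a = ntt(A, pow(omega, -1, q))       # transform with omega^{-1}
    return [(x * inv_n) % q for x in a]

a, b = [1, 2, 0, 0], [3, 4, 0, 0]
A, B = ntt(a, omega), ntt(b, omega)
C = [(x * y) % q for x, y in zip(A, B)]   # pointwise product
print(intt(C))   # [3, 10, 8, 0] == cyclic convolution of a and b
```

Pointwise multiplication in the NTT domain followed by the inverse transform yields the cyclic convolution, i.e., polynomial multiplication modulo $x^n - 1$; a negacyclic convolution (modulo $x^n + 1$) additionally twists the inputs by powers of a $2n$-th root of unity.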
A New Hash-based Enhanced Privacy ID Signature Scheme
The elliptic curve-based Enhanced Privacy ID (EPID) signature scheme is broadly used for hardware enclave attestation by many platforms that implement Intel Software Guard Extensions (SGX) and other devices. This scheme has also been included in the Trusted Platform Module (TPM) specifications and ISO/IEC standards. However, it is insecure against quantum attackers. While research into quantum-resistant EPID has resulted in several lattice-based schemes, Boneh et al. have initiated the study of EPID signature schemes built only from symmetric primitives. We observe that for this line of research, there is still room for improvement. In this paper, we propose a new hash-based EPID scheme, which includes a novel and efficient signature revocation scheme. In addition, our scheme can handle a large group size (up to group members), which meets the requirements of rapidly developing hardware enclave attestation applications. The security of our scheme is proved under the Universal Composability (UC) model. Finally, we have implemented our EPID scheme, which, to the best of our knowledge, is the first implementation of EPID from symmetric primitives.
Hash-based Direct Anonymous Attestation
Direct Anonymous Attestation (DAA) was designed for the Trusted Platform Module (TPM), and versions using RSA and elliptic curve cryptography have been included in the TPM specifications and in ISO/IEC standards. These standardised DAA schemes base their security on the factoring or discrete logarithm problems and are therefore insecure against quantum attackers. Research into quantum-resistant DAA has resulted in several lattice-based schemes. In this paper, we propose the first post-quantum DAA scheme from symmetric primitives. We make use of a hash-based signature scheme, which is a slight modification of SPHINCS+, as a DAA credential. A DAA signature, proving the possession of such a credential, is a multiparty computation-based non-interactive zero-knowledge proof. The security of our scheme is proved under the Universal Composability (UC) model. While maintaining all the security properties required for a DAA scheme, we try to make the TPM's workload as low as possible. Our DAA scheme can handle a large group size (up to group members), which meets the requirements of rapidly developing TPM applications.
Sphinx-in-the-Head: Group Signatures from Symmetric Primitives
Group signatures and their variants have been widely used in privacy-sensitive scenarios such as anonymous authentication and attestation. In this paper, we present a new post-quantum group signature scheme from symmetric primitives. Using only symmetric primitives makes the scheme less prone to unknown attacks than basing the design on newly proposed hard problems whose security is less well-understood. However, symmetric primitives do not have rich algebraic properties, and this makes it extremely challenging to design a group signature scheme on top of them. It is even more challenging if we want a group signature scheme suitable for real-world applications, one that can support large groups and requires few trust assumptions. Our scheme is based on MPC-in-the-head non-interactive zero-knowledge proofs, and we specifically design a novel hash-based group credential scheme, which is rooted in the SPHINCS+ signature scheme but with various modifications to make it MPC (multi-party computation) friendly. The security of the scheme has been proved under the fully dynamic group signature model. We provide an implementation of the scheme and demonstrate the feasibility of handling a group size as large as . This is the first group signature scheme from symmetric primitives that supports such a large group size and meets all the security requirements.
Encrypted KNN Implementation on Distributed Edge Device Network
Machine learning (ML) as a service has emerged as a rapidly expanding field across various industries, such as healthcare, finance, marketing, retail and e-commerce, and Industry 4.0, where huge amounts of data are generated. Handling this amount of data requires substantial computational power, for which cloud computing used to be the first choice. However, cloud computing faces several challenges, such as limited bandwidth, network connectivity, and higher latency. To address these issues, edge computing is prominent nowadays, where data from sensor nodes is collected and processed on low-cost edge devices. As simple sensor nodes are not capable of handling the complex computations of ML models, data from sensor nodes must be transferred to nearby edge devices for further processing. If this sensor data relates to a security-critical application, the privacy of such sensitive data needs to be preserved both during communication from sensor node to edge device and during computation on edge nodes. This increased need to perform edge-based ML on privacy-preserved data has led to a surge of interest in homomorphic encryption (HE) due to its ability to perform computations on encrypted data. The highest form of HE, Fully Homomorphic Encryption (FHE), can in theory handle arbitrary encrypted algorithms but comes with huge computational overhead. Hence, implementing such a complex encrypted ML model on a single edge node is not very practical in terms of latency requirements. Our paper introduces a low-cost encrypted ML framework on a distributed edge cluster, where multiple low-cost edge devices (Raspberry Pi boards) are clustered to perform encrypted distributed K-Nearest Neighbours (KNN) algorithm computations. Our experimental results show that KNN prediction on the standard Wisconsin breast cancer dataset takes approximately 1.2 hours on a cluster of six Pi boards, maintaining end-to-end data confidentiality of critical medical data without any requirement of costly cloud-based computational resources.
Weightwise (almost) perfectly balanced functions based on total orders
The unique design of the FLIP cipher necessitated a generalization of standard cryptographic criteria for Boolean functions used in stream ciphers, prompting a focus on properties specific to subsets of $\mathbb{F}_2^n$ rather than the entire set. This led to heightened interest in properties related to fixed Hamming weight sets and the corresponding partition of $\mathbb{F}_2^n$ into $n+1$ such sets. Consequently, the concept of Weightwise Almost Perfectly Balanced (WAPB) functions emerged, which are balanced on each of these sets. Various studies have since proposed WAPB constructions and examined their cryptographic parameters for use in stream cipher filters.
In this article, we introduce a general approach to constructing WAPB functions using the concept of order, which simplifies implementation and enhances cryptographic strength. We present two new constructions: a recursive method employing multiple orders on binary strings, and another utilizing just two orders. We establish lower bounds for nonlinearity and weightwise nonlinearities within these classes. By instantiating specific orders, we demonstrate that some achieve minimal algebraic immunity, while others provide functions with guaranteed optimal algebraic immunity. Experimental results in 8 and 16 variables indicate that using orders based on field representation significantly outperforms other methods in terms of both global and weightwise algebraic immunity and nonlinearity. Additionally, we extend the recursive construction to create WAPB functions for any value of n, with experiments in 10, 12, and 14 variables confirming that these order-based functions exhibit robust cryptographic parameters. In particular, those based on field orders display optimal degrees and algebraic immunity, and strong weightwise nonlinearities and algebraic immunities.
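To make the weightwise notion concrete, here is a small checker (our illustration, not one of the paper's constructions) that computes, for each Hamming-weight slice of $\{0,1\}^n$, how often a Boolean function outputs 1; a weightwise (almost) perfectly balanced function hits roughly half of every slice, which a plain threshold function conspicuously fails to do:

```python
from itertools import combinations

def weightwise_profile(f, n):
    # For each Hamming weight k, count how often f outputs 1 on the weight-k
    # slice of {0,1}^n; a weightwise perfectly balanced function would hit
    # (almost) exactly half of every slice.
    profile = []
    for k in range(n + 1):
        ones, total = 0, 0
        for support in combinations(range(n), k):
            x = [0] * n
            for i in support:
                x[i] = 1
            ones += f(x)
            total += 1
        profile.append((k, ones, total))
    return profile

# A majority-style threshold function on 4 variables: far from balanced
# on the individual weight slices.
print(weightwise_profile(lambda x: int(sum(x) > 2), 4))
# [(0, 0, 1), (1, 0, 4), (2, 0, 6), (3, 4, 4), (4, 1, 1)]
```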
Zero-Knowledge Proof Vulnerability Analysis and Security Auditing
Zero-Knowledge Proof (ZKP) technology marks a revolutionary advancement in the field of cryptography, enabling the verification of certain information ownership without revealing any specific details. This technology, with its paradoxical yet powerful characteristics, provides a solid foundation for a wide range of applications, especially in enhancing the privacy and security of blockchain technology and other cryptographic systems. As ZKP technology increasingly becomes a part of the blockchain infrastructure, its importance for security and completeness becomes more pronounced. However, the complexity of ZKP implementation and the rapid iteration of the technology introduce various vulnerabilities, challenging the privacy and security it aims to offer.
This study focuses on the completeness, soundness, and zero-knowledge properties of ZKP to meticulously classify existing vulnerabilities and deeply explores multiple categories of vulnerabilities, including completeness issues, soundness problems, information leakage, and non-standardized cryptographic implementations. Furthermore, we propose a set of defense strategies that include a rigorous security audit process and a robust distributed network security ecosystem. This audit strategy employs a divide-and-conquer approach, segmenting the project into different levels, from the application layer to the platform-native infrastructure layer, using threat modelling, line-by-line audit, and internal cross-review, among other means, aimed at comprehensively identifying vulnerabilities in ZKP circuits, revealing design flaws in ZKP applications, and accurately identifying inaccuracies in the integration process of ZKP primitives.
SOK: Research Motivations of Public-Key Cryptography
The design, proposal, and analysis of cryptographic primitives and protocols (schemes) are one of the primary research fields in cryptology. To advance this research field, it is crucial to fully understand their research motivations. In this paper, we systematically introduce the research motivations for designing and proposing new schemes in public-key cryptography. We found that all research motivations aim to produce benefits for humanity, including efficiency, security, and functionality, although some of them may not be obvious or may only hold conditionally. We categorize the benefits in research motivations into 3 ways, 6 types, and 17 areas. As examples, we introduce 40 research strategies within these areas for exploring benefits, each presented as ``From less-adj (in the first scheme) To more-adj (in the second scheme)'', where ``adj'' here refers to an adjective representing a positive outcome. This SOK paper aims to provide valuable insights into the driving forces behind advancements in public-key cryptography, facilitating future research efforts in this field.
Jumping for Bernstein-Yang Inversion
This paper achieves fast polynomial inverse operations specifically tailored for the NTRU Prime KEM on the ARMv8 NEON instruction set, benchmarked on four processor architectures: Cortex-A53, Cortex-A72, Cortex-A76, and Apple M1. We utilize the jumping division steps of the constant-time GCD algorithm from Bernstein and Yang (TCHES'19) and optimize the underlying polynomial multiplications of various lengths to improve the efficiency of computing polynomial inverses in NTRU Prime.
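For background, the divstep recurrence at the heart of the Bernstein-Yang algorithm can be stated in a few lines. The sketch below is ours, for illustration only: Python integers are not constant-time, and real implementations batch many divsteps into matrix "jumps" as the paper's title suggests. It iterates a fixed number of steps, after which $g$ reaches 0 and $|f|$ is the gcd:

```python
def divstep(delta, f, g):
    # One division step (divstep) from Bernstein-Yang (TCHES'19); f stays odd.
    if delta > 0 and g % 2 == 1:
        return 1 - delta, g, (g - f) // 2
    return 1 + delta, f, (g + (g % 2) * f) // 2

def gcd_via_divsteps(f, g, iterations=1000):
    # Iterate a FIXED number of divsteps (data-independent control flow, the
    # property real constant-time code preserves); once g hits 0 it stays 0
    # and |f| is gcd(f, g). Requires f odd.
    delta = 1
    for _ in range(iterations):
        delta, f, g = divstep(delta, f, g)
    return abs(f)

print(gcd_via_divsteps(21, 35))   # 7
```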
Verifiable Encryption from MPC-in-the-Head
Verifiable encryption (VE) is a protocol where one can provide assurance that an encrypted plaintext satisfies certain properties, or relations.
It is an important building block in cryptography with many useful applications, such as key escrow, group signatures, optimistic fair exchange, and others. However, the majority of previous VE schemes are restricted to instantiation with specific public-key encryption schemes or relations.
In this work, we propose a novel framework that realizes VE protocols using zero-knowledge proof systems based on the MPC-in-the-head paradigm (Ishai et al. STOC 2007). Our generic compiler can turn a large class of zero-knowledge proofs into secure VE protocols for any secure public-key encryption scheme with the undeniability property, a notion that essentially guarantees binding of encryption when used as a commitment scheme.
Our framework is versatile: because the circuit proven by the MPC-in-the-head prover is decoupled from a complex encryption function, the work of the prover is focused on proving the encrypted data satisfies the relation, not the proof of plaintext knowledge. Hence, our approach allows for instantiation with various combinations of properties about the encrypted data and encryption functions. We then consider concrete applications, to demonstrate the efficiency of our framework, by first giving a new approach and implementation to verifiably encrypt discrete logarithms in any prime order group more efficiently than was previously known. Then we give the first practical verifiable encryption scheme for AES keys with post-quantum security, along with an implementation and benchmarks.
MiniCast: Minimizing the Communication Complexity of Reliable Broadcast
We give a new protocol for reliable broadcast with improved communication complexity for long messages. Namely, to reliably broadcast a message over an asynchronous network to a set of parties, of which fewer than may be corrupt, our protocol achieves a communication complexity of , where is the output length of a collision-resistant hash function. This result improves on the previously best known bound for long messages of .
Efficient Post-Quantum Secure Deterministic Threshold Wallets from Isogenies
Cryptocurrency networks crucially rely on digital signature schemes, which are used as an authentication mechanism for transactions. Unfortunately, most major cryptocurrencies today, including Bitcoin and Ethereum, employ signature schemes that are susceptible to quantum adversaries, i.e., an adversary with access to a quantum computer can forge signatures and thereby spend coins of honest users. In cryptocurrency networks, signature schemes are typically not executed in isolation, but within a so-called cryptographic wallet. In order to achieve security against quantum adversaries, the signature scheme and the cryptographic wallet must withstand quantum attacks.
In this work, we advance the study on post-quantum secure signature and wallet schemes. That is, we provide the first formal model for deterministic threshold wallets and we show a generic post-quantum secure construction from any post-quantum secure threshold signature scheme with rerandomizable keys. We then instantiate our construction from the isogeny-based signature scheme CSI-FiSh and we show that our instantiation significantly improves over prior work.
GraphOS: Towards Oblivious Graph Processing
We propose GraphOS, a system that allows a client that owns a graph database to outsource it to an untrusted server for storage and querying. It relies on doubly-oblivious primitives and trusted hardware to achieve a very strong privacy and efficiency notion which we call oblivious graph processing: the server learns nothing besides the number of graph vertices and edges, and for each query its type and response size. At a technical level, GraphOS stores the graph on a doubly-oblivious data structure, so that all vertex/edge accesses are indistinguishable. For this purpose, we propose Omix++, a novel doubly-oblivious map that outperforms the previous state of the art by up to 34×, and may be of independent interest. Moreover, to avoid any leakage from CPU instruction fetching during query evaluation, we propose algorithms for four fundamental graph queries (BFS/DFS traversal, minimum spanning tree, and single-source shortest paths) that have a fixed execution trace, i.e., the sequence of executed operations is independent of the input. By combining these techniques, we eliminate all information that a hardware adversary observing the memory access pattern within the protected enclave can infer. We benchmarked GraphOS against the best existing solution, based on an oblivious relational DBMS (translating graph queries to relational operators). GraphOS is not only significantly more performant (by up to two orders of magnitude for our tested graphs) but it eliminates leakage related to the graph topology that is practically inherent when a relational DBMS is used, unless all operations are "padded" to the worst case.
Earn While You Reveal: Private Set Intersection that Rewards Participants
In Private Set Intersection protocols (PSIs), a non-empty result always reveals something about the private input sets of the parties. Moreover, in various variants of PSI, not all parties necessarily receive or are interested in the result. Nevertheless, to date, the literature has assumed that those parties who do not receive or are not interested in the result still contribute their private input sets to the PSI for free, although doing so would cost them their privacy. In this work, for the first time, we propose a multi-party PSI, called “Anesidora”, that rewards parties who contribute their private input sets to the protocol. Anesidora is efficient; it mainly relies on symmetric key primitives and its computation and communication complexities are linear with the number of parties and set cardinality. It remains secure even if the majority of parties are corrupted by active colluding adversaries.
A note on ``Tweakable HCTR: A BBB Secure Tweakable Enciphering Scheme''
Tweakable HCTR is a tweakable enciphering scheme proposed by Dutta and Nandi at Indocrypt 2018. It provides beyond-birthday-bound security when each tweak value is not used too frequently. More importantly for this note, its security bound degrades linearly with the maximum input length. We show in this note that this is not true, by exhibiting a single-query distinguisher with advantage where is the length of that query. The distinguisher does not break the beyond-birthday-bound claim but achieves a higher advantage than the claimed bound.
Properties of Lattice Isomorphism as a Cryptographic Group Action
In recent years, the Lattice Isomorphism Problem (LIP) has served as an underlying assumption to construct quantum-resistant cryptographic primitives, e.g. the zero-knowledge proof and digital signature scheme by Ducas and van Woerden (Eurocrypt 2022), and the HAWK digital signature scheme (Asiacrypt 2022).
While prior lines of work in group action cryptography, e.g. the works of Brassard and Yung (Crypto 1990), and more recently Alamati, De Feo, Montgomery and Patranabis (Asiacrypt 2020), focused on studying the discrete logarithm problem and isogeny-based problems in the group action framework, in recent years this framing has been used for studying the cryptographic properties of computational problems based on the difficulty of determining equivalence between algebraic objects. Examples include Permutation and Linear Code Equivalence Problems used in LESS (Africacrypt 2020), and the Tensor Isomorphism Problem (TCC 2019). This study delves into the quadratic form version of LIP, examining it through the lens of group actions.
In this work we (1) give formal definitions and study the cryptographic properties of this group action (LIGA), (2) demonstrate that LIGA lacks both weak unpredictability and weak pseudorandomness, and (3) under certain assumptions, establish a theoretical trade-off between time complexity and the required number of samples for breaking weak unpredictability, for large dimensions. We also conduct experiments supporting our analysis. Additionally, we employ our findings to formulate new hard problems on quadratic forms.
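For readers new to this framing, the group action at play is congruence of quadratic forms: $GL_n(\mathbb{Z})$ acts on Gram matrices by $Q \mapsto U^T Q U$, and LIP asks to recover $U$ (or decide equivalence) given two equivalent forms. A toy instance (our illustration, with a hypothetical 2-dimensional form):

```python
# Toy instance of the quadratic-form view of lattice isomorphism: GL_n(Z)
# acts on Gram matrices by Q -> U^T Q U. Hypothetical 2x2 example.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

Q = [[2, 1], [1, 2]]    # Gram matrix of a 2-dimensional lattice basis
U = [[1, 1], [0, 1]]    # unimodular matrix (det = 1): the "group element"
Qp = matmul(transpose(U), matmul(Q, U))
print(Qp)               # [[2, 3], [3, 6]], an equivalent quadratic form
```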
On Proving Pairings
In this paper we explore efficient ways to prove correctness of elliptic curve pairing relations. Pairing-based cryptographic protocols such as the Groth16 and Plonk SNARKs and the BLS signature scheme are used extensively in public blockchains such as Ethereum, due in large part to their small size. However, the relatively high cost of pairing computation remains a practical problem for many use cases such as verification ``in circuit'' inside a SNARK. This naturally arises in recursive SNARK composition and in SNARKs of BLS-based consensus protocols.
To improve pairing verification, we first show that the final exponentiation step of pairing verification can be replaced with a more efficient ``residue check,'' which can be incorporated into the Miller loop. Then, we show how to reduce the cost of the Miller loop by pre-computing all the necessary lines, and how this is especially efficient when the second pairing argument is fixed in advance. This is the case for BLS signatures with a fixed public key, as well as for KZG-based SNARKs like Plonk and two of the three Groth16 pairings. Finally, we show how to improve on the protocol of [gar] by combining quotients, which allows us to more efficiently prove higher-degree relations. These techniques also carry over naturally to pairing verification, for example on-chain verification or as part of the BitVM(2) protocol for Bitcoin smart contracts. We instantiate the algorithms and show results for the BN254 curve.
A note on ``a lightweight mutual and transitive authentication mechanism for IoT network''
We show that the authentication mechanism [Ad Hoc Networks, 2023, 103003] fails to preserve user anonymity, contrary to what is claimed.
Towards Permissionless Consensus in the Standard Model via Fine-Grained Complexity
We investigate the feasibility of permissionless consensus (aka Byzantine agreement) under standard assumptions. A number of protocols have been proposed to achieve permissionless consensus, most notably based on the Bitcoin protocol; however, to date no protocol is known that can be provably instantiated outside of the random oracle model.
In this work, we take the first steps towards achieving permissionless consensus in the standard model. In particular, we demonstrate that worst-case conjectures in fine-grained complexity, in particular the orthogonal vectors conjecture (implied by the Strong Exponential Time Hypothesis), imply permissionless consensus in the random beacon model—a setting where a fresh random value is delivered to all parties at regular intervals. This gives a remarkable win-win result: either permissionless consensus exists relative to a random beacon, or there are non-trivial worst-case algorithmic speed-ups for a host of natural algorithmic problems (including SAT).
Our protocol achieves resilience against adversaries that control an inverse-polynomial fraction of the honest computational power, i.e., adversarial power for some constant , where denotes the honest computational power. This relatively low threshold is a byproduct of the slack in the fine-grained complexity conjectures.
One technical highlight is the construction of a Seeded Proof of Work: a Proof of Work where many (correlated) challenges can be derived from a single short public seed, and yet still no non-trivial amortization is possible.
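To fix ideas, here is what deriving many challenges from one short seed looks like for a plain hash-based proof of work (a toy sketch; the paper's Seeded Proof of Work is a standard-model construction with a no-amortization guarantee, which this hash-based illustration does not provide):

```python
import hashlib

def challenges(seed: bytes, n: int):
    # Derive many (correlated) challenges from a single short public seed.
    return [hashlib.sha256(seed + i.to_bytes(4, "big")).digest()
            for i in range(n)]

def solve(challenge: bytes, difficulty: int) -> int:
    # Ordinary hash-based PoW: find a nonce whose hash has `difficulty`
    # leading zero bits. (The paper's construction avoids random oracles
    # and provably rules out amortization across challenges; this does not.)
    nonce = 0
    while int.from_bytes(hashlib.sha256(challenge + nonce.to_bytes(8, "big"))
                         .digest(), "big") >> (256 - difficulty):
        nonce += 1
    return nonce

chals = challenges(b"public-seed", 4)
print([solve(c, 8) for c in chals])   # small difficulty so the demo is fast
```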
Memory adds no cost to lattice sieving for computers in 3 or more spatial dimensions
The security of lattice-based cryptography (LWE, NTRU, and FHE) depends on the hardness of the shortest-vector problem (SVP). Sieving algorithms give the lowest asymptotic runtime to solve SVP, but depend on exponential memory. Memory access costs much more in reality than in the RAM model, so we consider a computational model where processors, memory, and meters of wire are in constant proportions to each other. While this adds substantial costs to route data during lattice sieving, we modify existing algorithms to amortize these costs and find that, asymptotically, a classical computer can achieve the previous RAM model cost of to sieve a -dimensional lattice for a computer existing in 3 or more spatial dimensions, and can reach in 2 spatial dimensions, where "spatial dimensions" are the dimensions of the physical geometry in which the computer exists.
Under some assumptions about the constant terms of memory access, we estimate increases in bit security between to bits for different Kyber parameter sets and to bits for Dilithium.
Tight Security of TNT and Beyond: Attacks, Proofs and Possibilities for the Cascaded LRW Paradigm
Liskov, Rivest and Wagner laid the theoretical foundations for tweakable block ciphers (TBC). In a seminal paper, they proposed two (up to) birthday-bound secure design strategies --- LRW1 and LRW2 --- to convert any block cipher into a TBC. Several of the follow-up works consider cascading of LRW-type TBCs to construct beyond-the-birthday bound (BBB) secure TBCs. Landecker et al. demonstrated that just two-round cascading of LRW2 can already give BBB security. Bao et al. undertook a similar exercise in the context of LRW1 with TNT --- a three-round cascading of LRW1 --- which has been shown to achieve BBB security as well. In this paper, we present a CCA distinguisher on TNT that achieves a non-negligible advantage with queries, directly contradicting the security claims made by the designers. We provide a rigorous and complete advantage calculation coupled with experimental verification that further supports our claim. Next, we provide new and simple proofs of birthday-bound CCA security for both TNT and its single-key variant, which confirm the tightness of our attack. On a more positive note, we show that adding just one more block cipher call, referred to as 4-LRW1, does not just re-establish the BBB security, but also amplifies it up to queries. As a side-effect of this endeavour, we propose a new abstraction of the cascaded LRW-design philosophy, referred to as the LRW+ paradigm, comprising two block cipher calls sandwiched between a pair of tweakable universal hashes. This helps us to provide a modular proof covering all cascaded LRW constructions with at least rounds, including 4-LRW1, and its more established relative, the well-known CLRW2, or more aptly, 2-LRW2.
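For reference, the TNT structure itself is three cascaded LRW1 rounds, $\tilde{E}(T, M) = E_{k_3}(T \oplus E_{k_2}(T \oplus E_{k_1}(M)))$. The Python below illustrates only this wiring, with a toy Feistel network standing in for the block cipher (our stand-in, not a real cipher):

```python
import hashlib

def _F(k: bytes, x: bytes) -> int:
    # Toy Feistel round function (illustration only, not a real cipher).
    return int.from_bytes(hashlib.sha256(k + x).digest()[:4], "big")

def toy_E(k: bytes, m: int) -> int:
    # 64-bit, 4-round Feistel network standing in for the block cipher E_k.
    L, R = m >> 32, m & 0xFFFFFFFF
    for r in range(4):
        L, R = R, L ^ _F(k + bytes([r]), R.to_bytes(4, "big"))
    return (L << 32) | R

def tnt(k1: bytes, k2: bytes, k3: bytes, tweak: int, m: int) -> int:
    # TNT = three cascaded LRW1 rounds: E_k3(T xor E_k2(T xor E_k1(M))).
    return toy_E(k3, tweak ^ toy_E(k2, tweak ^ toy_E(k1, m)))

print(hex(tnt(b"k1", b"k2", b"k3", 0xDEADBEEF, 0x0123456789ABCDEF)))
```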
Organizing Records for Retrieval in Multi-Dimensional Range Searchable Encryption
Storage of sensitive multi-dimensional arrays must be secure and efficient in storage and processing time. Searchable encryption allows one to trade between security and efficiency. Searchable encryption design focuses on building indexes, overlooking the crucial aspect of record retrieval. Gui et al. (PoPETS 2023) showed that understanding the security and efficiency of record retrieval is critical to understand the overall system. A common technique for improving security is partitioning data tuples into parts. When a tuple is requested, the entire relevant part is retrieved, hiding the tuple of interest.
This work assesses tuple partitioning strategies in the dense data setting, considering parts that are random, 1-dimensional, and multi-dimensional. We consider synthetic datasets of , and dimensions, with sizes extending up to M tuples. We compare security and efficiency across a variety of record retrieval methods. Our findings are:
1. For most configurations, multi-dimensional partitioning yields better efficiency and less leakage.
2. 1-dimensional partitioning outperforms multi-dimensional partitioning when the query is large in all dimensions except the first (indexed) dimension, which can be any size.
3. The leakage of 1-dimensional partitioning is reduced the most when using a bucketed ORAM (Demertzis et al., USENIX Security 2020).
NTRU-based FHE for Larger Key and Message Space
The NTRU problem has proven a useful building block for efficient bootstrapping in Fully Homomorphic Encryption (FHE) schemes, and different such schemes have been proposed. FINAL (ASIACRYPT 2022) first constructed FHE using homomorphic multiplexer (CMux) gates for the blind rotation operation. Later, XZD+23 (CRYPTO 2023) gave an asymptotic optimization by changing the ciphertext format to enable ring automorphism evaluations. In this work, we examine an adaptation to FINAL to evaluate CMux gates of higher arity and the resulting tradeoff to running times and bootstrapping key sizes. In this setting, we can compare the time and space efficiency of both bootstrapping protocols with larger key space against each other and the state of the art.
Further Investigations on Nonlinear Complexity of Periodic Binary Sequences
Nonlinear complexity is an important measure for assessing the randomness of sequences. In this paper we investigate how circular shifts affect the nonlinear complexities of finite-length binary sequences and then reveal a more explicit relation between nonlinear complexities of finite-length binary sequences and their corresponding periodic sequences. Based on the relation, we propose two algorithms that can generate all periodic binary sequences with any prescribed nonlinear complexity.
Attribute-based Keyed (Fully) Homomorphic Encryption
Keyed homomorphic public key encryption (KHPKE) is a variant of homomorphic public key encryption, where only users who have a homomorphic evaluation key can perform a homomorphic evaluation. Then, KHPKE satisfies the CCA2 security against users who do not have a homomorphic evaluation key, while it satisfies the CCA1 security against users who have the key. Thus far, several KHPKE schemes have been proposed under the standard Diffie-Hellman-type assumptions, and keyed fully homomorphic encryption (KFHE) schemes have also been proposed from lattices, although there are no KFHE schemes secure solely under the LWE assumption in the standard model. As a natural extension, there is an identity-based variant of KHPKE; however, its security is based on a $q$-type assumption and there are no attribute-based variants. Moreover, there are no identity-based variants of KFHE schemes due to the complex design of the known KFHE schemes. In this paper, we provide two constructions of attribute-based variants. First, we propose an attribute-based KFHE (ABKFHE) scheme from lattices. We start by designing the first KFHE scheme secure solely under the LWE assumption in the standard model. Since its design is conceptually much simpler than that of known KFHE schemes, we replace their building blocks with attribute-based ones and obtain the proposed ABKFHE scheme. Next, we propose an efficient attribute-based KHPKE (ABKHE) scheme from a pair encoding scheme (PES). Due to the benefit of PES, we obtain various ABKHE schemes, including the first identity-based KHPKE scheme secure under the standard -linear assumption and the first pairing-based ABKHE schemes supporting more expressive predicates.
Conditional disclosure of secrets with quantum resources
The conditional disclosure of secrets (CDS) primitive is among the simplest cryptographic settings in which to study the relationship between communication, randomness, and security. CDS involves two parties, Alice and Bob, who do not communicate but who wish to reveal a secret to a referee if and only if a Boolean function $f$ satisfies $f(x,y) = 1$. Alice knows $x$, Bob knows $y$, and the referee knows both $x$ and $y$. Recently, a quantum analogue of this primitive called CDQS was defined and related to $f$-routing, a task studied in the context of quantum position-verification. CDQS has the same inputs, outputs, and communication pattern as CDS but allows the use of shared entanglement and quantum messages. We initiate the systematic study of CDQS, with the aim of better understanding the relationship between privacy and quantum resources in the information theoretic setting. We begin by looking for quantum analogues of results already established in the classical CDS literature. Doing so, we establish a number of basic properties of CDQS, including lower bounds on entanglement and communication stated in terms of measures of communication complexity. Because of the close relationship to the $f$-routing position-verification scheme, our results have relevance to the security of these schemes.
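For contrast with the quantum setting, a classical toy example helps: the textbook linear CDS for the equality predicate over a prime field (our illustration, not one of the paper's CDQS protocols). Alice and Bob hold shared randomness and each send one message; the referee's combination reveals the secret exactly when $x = y$ and is uniformly random otherwise:

```python
import secrets

p = 2**61 - 1   # prime field; the secret and messages are field elements

def cds_equality(x, y, secret):
    # Classic linear CDS for the equality predicate f(x, y) = [x == y]
    # (a standard textbook example, not the paper's quantum scheme).
    # Alice and Bob share randomness (r, b) but never communicate.
    r, b = secrets.randbelow(p), secrets.randbelow(p)
    m_alice = (r * x + b + secret) % p   # Alice's one message to the referee
    m_bob = (r * y + b) % p              # Bob's one message to the referee
    # Referee (who knows x and y) combines the messages:
    return (m_alice - m_bob) % p         # = r*(x - y) + secret mod p

print(cds_equality(7, 7, 42))   # 42: secret revealed when f(x, y) = 1
print(cds_equality(7, 8, 42))   # uniformly random: secret stays hidden
```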
Simple constructions of linear-depth t-designs and pseudorandom unitaries
Uniformly random unitaries, i.e., unitaries drawn from the Haar measure, have many useful properties, but cannot be implemented efficiently. This has motivated a long line of research into random unitaries that ``look'' sufficiently Haar random while also being efficient to implement.
Two different notions of derandomisation have emerged:
$t$-designs are random unitaries that information-theoretically reproduce the first $t$ moments of the Haar measure, and pseudorandom unitaries (PRUs) are random unitaries that are computationally indistinguishable from Haar random.
In this work, we take a unified approach to constructing $t$-designs and PRUs. For this, we introduce and analyse the ``PFC'' ensemble, the product of a random computational basis permutation $P$, a random binary phase operator $F$, and a random Clifford unitary $C$. We show that this ensemble reproduces exponentially high moments of the Haar measure. We can then derandomise the PFC ensemble to show the following:
1. Linear-depth $t$-designs. We give the first construction of a (diamond-error) approximate $t$-design with circuit depth linear in $t$. This follows from the PFC ensemble by replacing the random phase and permutation operators with their -wise independent counterparts.
2. Non-adaptive PRUs. We give the first construction of PRUs with non-adaptive security, i.e., we construct unitaries that are indistinguishable from Haar random to polynomial-time distinguishers that query the unitary in parallel on an arbitrary state. This follows from the PFC ensemble by replacing the random phase and permutation operators with their pseudorandom counterparts.
3. Adaptive pseudorandom isometries. We show that if one considers isometries (rather than unitaries) from to qubits, a small modification of our PRU construction achieves adaptive security, i.e., even a distinguisher that can query the isometry adaptively in sequence cannot distinguish it from Haar random isometries. This gives the first construction of adaptive pseudorandom isometries. Under an additional conjecture, this proof also extends to adaptive PRUs.
Unconditional correctness of recent quantum algorithms for factoring and computing discrete logarithms
In 1994, Shor introduced his famous quantum algorithm to factor integers and compute discrete logarithms in polynomial time. In 2023, Regev proposed a multi-dimensional version of Shor's algorithm that requires far fewer quantum gates. His algorithm relies on a number-theoretic conjecture on the elements in that can be written as short products of very small prime numbers. We prove a version of this conjecture using tools from analytic number theory such as zero-density estimates. As a result, we obtain an unconditional proof of correctness of this improved quantum algorithm and of subsequent variants.
Rate-1 Fully Local Somewhere Extractable Hashing from DDH
Somewhere statistically binding (SSB) hashing allows us to sample a special hashing key such that the digest statistically binds the input at secret locations. This hash function is said to be somewhere extractable (SE) if there is an additional trapdoor that allows the extraction of the input bits at the locations from the digest.
Devadas, Goyal, Kalai, and Vaikuntanathan (FOCS 2022) introduced a variant of somewhere extractable hashing called rate-1 fully local SE hash functions. The rate-1 requirement states that the size of the digest is (where is the security parameter). The fully local property requires that for any index , there is a "very short" opening showing that -th bit of the hashed input is equal to for some . The size of this opening is required to be independent of and in particular, this means that its size is independent of the size of the digest. Devadas et al. gave such a construction from Learning with Errors (LWE).
In this work, we give a construction of a rate-1 fully local somewhere extractable hash function from Decisional Diffie-Hellman (DDH) and BARGs. Under the same assumptions, we give constructions of rate-1 BARG and RAM SNARG with partial input soundness whose proof sizes are only matched by prior constructions based on LWE.
Distributed & Scalable Oblivious Sorting and Shuffling
Existing oblivious systems offer robust security by concealing memory access patterns, but they encounter significant scalability and performance challenges. Recent efforts to enhance the practicality of these systems involve embedding oblivious computation, e.g., oblivious sorting and shuffling, within Trusted Execution Environments (TEEs). For instance, oblivious sort has been heavily utilized: in Oblix (S&P'18), when oblivious indexes are created and accessed; in Snoopy's high-throughput oblivious key-value store (SOSP'21) during initialization and when the input requests are deduplicated and prepared for delivery; in Opaque (NSDI'17) for all the proposed oblivious SQL operators; and in the state-of-the-art non-foreign-key oblivious join approach (PVLDB'20). Additionally, oblivious sort/shuffle find applications in Signal's commercial solution for contact discovery, Google's anonymous Key Transparency, searchable encryption, software monitoring, and differentially private federated learning with user privacy.
In this work, we address the scalability bottleneck of oblivious sort and shuffle by re-designing these approaches to achieve high efficiency in distributed multi-enclave environments. First, we propose a multi-threaded bitonic sort optimized for the distributed setting, making it the most performant oblivious sort for a small number of enclaves (up to 4). For larger numbers of enclaves, we propose a novel oblivious bucket sort, which improves data locality, reduces network consumption, and outperforms our optimized distributed bitonic sort by up to 5-6x. To the best of our knowledge, these are the first distributed oblivious TEE-based sorting solutions. For reference, we are able to sort 2 GiB of data in 1 second and 128 GiB in 53.4 seconds in a multi-enclave test. A fundamental building block of our oblivious bucket sort is an oblivious shuffle that improves the prior state-of-the-art result (CCS'22) by up to 9.5x in the distributed multi-enclave setting---interestingly, it is better by 10% even in the single-enclave/multi-thread setting.
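For background, the appeal of bitonic sort in this setting is that its compare-exchange schedule depends only on the input length, never on the data. A minimal single-threaded sketch (our illustration; the paper's contribution is the distributed, multi-enclave engineering built on this pattern):

```python
def bitonic_sort(a):
    # Data-oblivious bitonic sorting network for len(a) a power of two: the
    # sequence of compare-exchange index pairs below depends only on len(a),
    # never on the values, so the memory access pattern leaks nothing.
    n = len(a)
    k = 2
    while k <= n:
        j = k // 2
        while j > 0:
            for i in range(n):
                l = i ^ j
                if l > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[l]) == ascending:
                        a[i], a[l] = a[l], a[i]
            j //= 2
        k *= 2
    return a

print(bitonic_sort([5, 2, 7, 1, 9, 0, 4, 3]))   # [0, 1, 2, 3, 4, 5, 7, 9]
```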
Hadamard Product Argument from Lagrange-Based Univariate Polynomials
The Hadamard product is the point-wise product of two vectors: $(a \circ b)_i = a_i b_i$. This paper presents a new scheme to prove the Hadamard-product relation as a sub-protocol for SNARKs based on univariate polynomials. The prover uses a linear number of cryptographic operations to generate a proof containing logarithmically many field elements, and verification takes a logarithmic number of cryptographic operations with a constant number of pairings in a bilinear group. The construction of the scheme is based on Lagrange-based KZG commitments (Kate, Zaverucha, and Goldberg, Asiacrypt 2010) and the folding technique. We construct an inner-product protocol from the folding technique on univariate polynomials in Lagrange form, and by carefully choosing random polynomials suitable for folding, we construct a Hadamard-product protocol from the inner-product protocol, giving an alternative way to prove linear-algebra relations in linear time; the protocol also has a better concrete proof size than previous works.
Guarding the First Order: The Rise of AES Maskings
We provide three first-order hardware maskings of the AES, each allowing for a different trade-off between the number of shares and the number of register stages. All maskings use a generalization of the changing of the guards method enabling the re-use of randomness between masked S-boxes. As a result, the maskings do not require fresh randomness while still allowing for a minimal number of shares and providing provable security in the glitch-extended probing model.
The low-area variant has five cycles of latency and a serialized area cost of . The low-latency variant reduces the latency to three cycles while increasing the serialized area by compared to the low-area variant. The maskings of the AES encryption are implemented on FPGA and evaluated with Test Vector Leakage Assessment (TVLA).
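For readers unfamiliar with masking, the first-order Boolean idea underneath such designs is easy to state (a generic illustration, not the paper's AES circuits): split every sensitive byte into two shares whose XOR is the secret, so any single observed intermediate is uniform:

```python
import secrets

# First-order Boolean masking: a sensitive byte x is never handled directly;
# it is split into shares (x0, x1) with x = x0 XOR x1, and each share alone
# is uniformly distributed, so one probed intermediate reveals nothing.
x = 0xAB
x0 = secrets.randbits(8)
x1 = x ^ x0
assert x0 ^ x1 == x

# Linear (XOR) operations act share-wise; nonlinear ones (like the AES S-box)
# are the hard part that masked designs such as the paper's must handle.
c = 0x0F
y0, y1 = x0 ^ c, x1
assert (y0 ^ y1) == (x ^ c)
```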
Quantum Implementation and Analysis of SHA-2 and SHA-3
Quantum computers have the potential to solve hard problems that are nearly impossible for classical computers. This has sparked a surge of research applying quantum technology and algorithms against cryptographic systems to evaluate their quantum resistance. In the process of selecting post-quantum standards, NIST categorizes security levels based on the complexity that quantum computers would require to crack AES encryption (levels 1, 3, and 5) and SHA-2 or SHA-3 (levels 2 and 4).
In assessing the security strength of cryptographic algorithms against quantum threats, accurate predictions of quantum resources are crucial. Following the work of Jaques et al. at Eurocrypt 2020, NIST estimated security levels 1, 3, and 5, corresponding to the quantum circuit size required to find the key for AES-128, AES-192, and AES-256, respectively. This work has recently been followed up by Huang et al. (Asiacrypt'22) and Liu et al. (Asiacrypt'23) among others, though the most up-to-date results are available in the work by Jang et al. (ePrint'22). However, for levels 2 and 4, which relate to collision finding for the SHA-2 and SHA-3 hash functions, quantum attack complexities have arguably not been well studied.
In this paper, we present novel techniques for optimizing the quantum circuit implementations for SHA-2 and SHA-3 algorithms in all the categories specified by NIST. After that, we evaluate the quantum circuits of target cryptographic hash functions for quantum collision search. Finally, we define the quantum attack complexity for levels 2 and 4, and comment on the security strength of the extended level. We present new concepts to optimize the quantum circuits at the component level and the architecture level.
SQIAsignHD: SQIsignHD Adaptor Signature
Adaptor signatures can be viewed as a generalized form of standard digital signature schemes where a secret randomness is hidden within a signature. Adaptor signatures are a recent cryptographic primitive and are becoming an important tool for blockchain applications such as cryptocurrencies, where they reduce on-chain costs, improve fungibility, and contribute to off-chain forms of payment in payment-channel networks, payment-channel hubs, and atomic swaps. However, currently used adaptor signature constructions are vulnerable to quantum adversaries due to Shor's algorithm. In this work, we introduce SQIAsignHD, a new quantum-resistant adaptor signature scheme based on isogenies of supersingular elliptic curves, using SQIsignHD as the underlying signature scheme and exploiting the idea of artificial orientation on the supersingular isogeny Diffie-Hellman key exchange protocol, SIDH, as the underlying hard relation. We, furthermore, show that our scheme is secure in the Quantum Random Oracle Model (QROM).
flookup: Fractional decomposition-based lookups in quasi-linear time independent of table size
We present a protocol for checking that the values of a committed polynomial over a multiplicative subgroup $H$ of size $m$ are contained in a table $t$ of size $N$. After a one-time preprocessing step quasilinear in $N$, the prover algorithm runs in *quasilinear* time $O(m\log^2 m)$, independent of the table size.
We improve upon the recent breakthrough results Caulk [ZBK+22] and Caulk+ [PK22], which were the first to achieve complexity sublinear in the full table size, with prover times of $O(m^2 + m\log N)$ and $O(m^2)$, respectively.
We pose further improving this complexity to $O(m\log m)$ as the next important milestone for efficient zk-SNARK lookups.
Complete group law for genus 2 Jacobians on Jacobian coordinates
This manuscript provides complete, inversion-free, and explicit group law formulas in Jacobian coordinates for genus 2 hyperelliptic curves over a base field $\mathbb{K}$. The formulas do not require polynomial arithmetic operations such as resultant, mod, or gcd computations, but only operations in $\mathbb{K}$.
Deep Selfish Proposing in Longest-Chain Proof-of-Stake Protocols
It has been shown that the selfish mining attack enables a miner to achieve an unfair relative revenue, posing a threat to the progress of longest-chain blockchains. Although selfish mining is a well-studied attack in the context of Proof-of-Work blockchains, its impact on longest-chain Proof-of-Stake (LC-PoS) protocols has yet to be addressed. This paper takes both theoretical and implementation-based approaches to analyzing the selfish proposing attack in LC-PoS protocols. We discuss how factors such as the nothing-at-stake phenomenon and proposer predictability in PoS protocols can make the selfish proposing attack in LC-PoS protocols more destructive compared to selfish mining in PoW. In the first part of the paper, we use combinatorial tools to theoretically assess the selfish proposer's block ratio in simplistic LC-PoS environments and under simplified network assumptions. However, these theoretical tools and classical MDP-based approaches cannot be applied to analyze the selfish proposing attack in real-world and more complicated LC-PoS environments. To overcome this issue, in the second part of the paper, we employ deep reinforcement learning techniques to find the near-optimal strategy of selfish proposing in more sophisticated protocols. The tool implemented in the paper can help us analyze the selfish proposing attack across diverse blockchain protocols with different reward mechanisms, predictability levels, and network conditions.
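As a baseline illustration, the classic PoW-style selfish strategy (Eyal and Sirer's SM1, with tie-breaking parameter $\gamma = 0$) can be transplanted to a slot-based longest-chain lottery and simulated in a few lines of Python. This toy model deliberately ignores nothing-at-stake grinding and proposer predictability, the very features the paper argues make LC-PoS attacks stronger.

    import random

    def selfish_block_ratio(alpha, slots=1_000_000, seed=1):
        rng = random.Random(seed)
        lead, selfish, honest = 0, 0, 0
        for _ in range(slots):
            if rng.random() < alpha:          # adversary wins the slot
                lead += 1
            else:                             # honest proposer wins the slot
                if lead == 0:
                    honest += 1
                elif lead == 1:               # publish and race; gamma = 0
                    lead = 0
                    if rng.random() < alpha:  # adversary extends its fork
                        selfish += 2
                    else:
                        honest += 2
                elif lead == 2:               # publish all, orphan honest
                    selfish += 2
                    lead = 0
                else:                         # stay ahead, release one block
                    selfish += 1
                    lead -= 1
        return selfish / (selfish + honest)

    for alpha in (0.25, 1/3, 0.40):
        print(f"alpha={alpha:.2f}  ratio ~ {selfish_block_ratio(alpha):.3f}")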
New SAT-based Model for Quantum Circuit Decision Problem: Searching for Low-Cost Quantum Implementation
In recent years, quantum technology has developed rapidly. As security analyses for symmetric ciphers continue to emerge, many require an evaluation of the resources needed for the quantum circuit implementation of the encryption algorithm. In this regard, we propose the quantum circuit decision problem, which asks whether there exists a quantum circuit for a given permutation f using M ancilla qubits and no more than K quantum gates within circuit depth D. Firstly, we investigate heuristic algorithms and classical SAT-based models from previous works, revealing their limitations in solving the problem. Hence, we innovatively propose an improved SAT-based model incorporating three metrics of quantum circuits. The model enables us to find the optimal quantum circuit of an arbitrary 3- or 4-bit S-box under a given optimization goal based on SAT solvers, and it proves the optimality of circuits constructed by the tool LIGHTER-R. Then, by combining different criteria in the model, we find more compact quantum circuit implementations of S-boxes such as RECTANGLE and GIFT. For the GIFT S-box, our model provides the optimal quantum circuit that requires only 8 gates with a depth of 31. Furthermore, our model can be generalized to linear layers and improves the previous SAT-based model proposed by Huang et al. at ASIACRYPT 2022 by adding criteria on the number of qubits and the circuit depth.
WESP: An encryption method that, as the key size increases, requires an exponentially growing time to break
WESP is a new encryption algorithm based on equation systems, in which the equations are generated using the values of tables that act as the encryption key, and the equations have features making them suitable for cryptographic use. The algorithm is defined and its properties are discussed. Besides describing the algorithm, we also present reasons why it works the way it does. The key size in WESP can be altered and has no upper limit, and typically the key size is bigger than currently commonly used keys.
We present a calculation of how many bytes can be securely encrypted before the algorithm might start to repeat its sequence of encrypting bytes, and show that this period can be adjusted to be arbitrarily large.
It is also shown that within this period the resulting stream of encrypting bytes is statistically uniformly distributed.
It is also shown that if the encryption tables are not known, the equations in the system of equations cannot be known, and it is demonstrated that the system of equations cannot be solved if the equations are not known; thus the encryption cannot be broken in closed form.
Then, we calculate, for all symbols used in the algorithm, the minimum number of trials needed in order to be able to verify the trials. Since the algorithm constantly updates key values, verification becomes impossible if the equations are not evaluated in order. The calculation shows that the minimum number of trials required, i.e., the time required to break the encryption, increases exponentially as the key size grows. Since there is no upper limit on the key size, there is no upper limit on the time required to break the encryption either.
Secure Integrated Sensing and Communication Under Correlated Rayleigh Fading
We consider a secure integrated sensing and communication (ISAC) scenario, in which a signal is transmitted through a state-dependent wiretap channel with one legitimate receiver with which the transmitter communicates and one honest-but-curious target that the transmitter wants to sense. The secure ISAC channel is modeled as two state-dependent fast-fading channels with correlated Rayleigh fading coefficients and independent additive Gaussian noise components. Delayed channel outputs are fed back to the transmitter to improve the communication performance and to estimate the channel state sequence. We establish and illustrate an achievable secrecy-distortion region for degraded secure ISAC channels under correlated Rayleigh fading. We also evaluate the inner bound for a large set of parameters to derive practical design insights for secure ISAC methods. The presented results include in particular parameter ranges for which the secrecy capacity of a classical wiretap channel setup is surpassed and for which the channel capacity is approached.
Efficient KZG-based Univariate Sum-check and Lookup Argument
We propose a novel KZG-based sum-check scheme with optimal efficiency. In particular, its proving cost is one multi-scalar multiplication whose size is the number of non-zero entries in the vector, its verification cost is one pairing plus one group scalar multiplication, and the proof consists of only one group element.
Using this scheme as a component, we then construct a new lookup argument, which enjoys a smaller proof size and a lower verification cost compared to the state-of-the-art schemes cq, cq+ and cq++. Specifically, its proving cost is comparable to cq, keeping the advantage that the proving cost is independent of the table size after preprocessing. For verification, our argument costs four pairings, while cq, cq+ and cq++ require five, five and six pairings, respectively. For proof size, a proof consists of four $\mathbb{G}_1$ elements and one field element, which is shorter than the proofs of cq, cq+ and cq++ when instantiated with the BLS12-381 curve. Moreover, our lookup argument is zero-knowledge like cq+ and cq++, whereas cq is not, and it is more efficient even compared to the non-zero-knowledge (and more efficient) versions of cq+ and cq++.
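For background, the classical identity that univariate sum-check arguments over a multiplicative subgroup rely on is the following (stated in generic notation, not necessarily the paper's). For a multiplicative subgroup $H \subseteq \mathbb{F}$ of size $m$, the power sum $\sum_{a \in H} a^k$ vanishes unless $m$ divides $k$, in which case it equals $m$; consequently,
\[ \sum_{a \in H} f(a) \;=\; m \cdot f(0) \qquad \text{whenever } \deg f < m, \]
so a claimed sum over $H$ reduces to a single claim about the constant term of $f$.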
Subverting Cryptographic Protocols from A Fine-Grained Perspective - A Case Study on 2-Party ECDSA
The revelations of Edward Snowden in 2013 rekindled concerns within the cryptographic community regarding the potential subversion of cryptographic systems. Bellare et al. (CRYPTO'14) introduced the notion of Algorithm Substitution Attacks (ASAs), which aim to covertly leak sensitive information by undermining individual cryptographic primitives. In this work, we delve deeply into the realm of ASAs against protocols built upon cryptographic primitives. In particular, we revisit the existing ASA model proposed by Berndt et al. (AsiaCCS'22), providing a more fine-grained perspective. We introduce a novel ASA model tailored for protocols, capable of capturing a wide spectrum of subversion attacks. Our model features a modular representation of subverted parties within protocols, along with fine-grained definitions of undetectability. To illustrate the practicality of our model, we apply it to Lindell's two-party ECDSA protocol (CRYPTO'17), unveiling a range of ASAs targeting the protocol's parties with the objective of extracting secret key shares. Our work offers a comprehensive ASA model suited to cryptographic protocols, providing a useful framework for understanding subversion attacks against them.
Quantum copy-protection of compute-and-compare programs in the quantum random oracle model
Copy-protection allows a software distributor to encode a program in such a way that it can be evaluated on any input, yet it cannot be "pirated" - a notion that is impossible to achieve in a classical setting. Aaronson (CCC 2009) initiated the formal study of quantum copy-protection schemes, and speculated that quantum cryptography could offer a solution to the problem thanks to the quantum no-cloning theorem.
In this work, we introduce a quantum copy-protection scheme for a large class of evasive functions known as "compute-and-compare programs" - a more expressive generalization of point functions. A compute-and-compare program $\mathsf{CC}[f, y]$ is specified by a function $f$ and a string $y$ within its range: on input $x$, $\mathsf{CC}[f, y]$ outputs $1$ if $f(x) = y$, and $0$ otherwise. We prove that our scheme achieves non-trivial security against fully malicious adversaries in the quantum random oracle model (QROM), which makes it the first copy-protection scheme to enjoy any level of provable security in a standard cryptographic model.
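In code, a compute-and-compare program is just the following (an illustration of the standard definition; the hash-based example is hypothetical):

    # CC[f, y] accepts exactly the inputs that f maps to y; point functions
    # are the special case where f is the identity.
    def compute_and_compare(f, y):
        def cc(x):
            return 1 if f(x) == y else 0
        return cc

    # Example: accept x iff SHA-256(x) equals a fixed digest.
    import hashlib
    target = hashlib.sha256(b"secret input").digest()
    cc = compute_and_compare(lambda x: hashlib.sha256(x).digest(), target)
    assert cc(b"secret input") == 1 and cc(b"wrong") == 0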
As a complementary result, we show that the same scheme fulfils a weaker notion of software protection, called "secure software leasing", introduced very recently by Ananth and La Placa (eprint 2020), with a standard security bound in the QROM, i.e. guaranteeing negligible adversarial advantage. Finally, as a third contribution, we elucidate the relationship between unclonable encryption and copy-protection for multi-bit output point functions.
On the Two-sided Permutation Inversion Problem
In the permutation inversion problem, the task is to find the preimage of some challenge value, given oracle access to the permutation. This is a fundamental problem in query complexity, and appears in many contexts, particularly cryptography. In this work, we examine the setting in which the oracle allows for quantum queries to both the forward and the inverse direction of the permutation---except that the challenge value cannot be submitted to the latter. Within that setting, we consider two options for the inversion algorithm: whether it can get quantum advice about the permutation, and whether it must produce the entire preimage (search) or only the first bit (decision). We prove several theorems connecting the hardness of the resulting variations of the inversion problem, and establish a number of lower bounds. Our results indicate that, perhaps surprisingly, the inversion problem does not become significantly easier when the adversary is granted oracle access to the inverse, provided it cannot query the challenge itself.
FHERMA: Building the Open-Source FHE Components Library for Practical Use
Fully Homomorphic Encryption (FHE) is a powerful Privacy-Enhancing Technology (PET) that enables computations on encrypted data without having access to the secret key. While FHE holds immense potential for enhancing data privacy and security, creating its practical applications is associated with many difficulties. A significant barrier is the absence of easy-to-use, standardized components that developers can utilize as foundational building blocks. Addressing this gap requires constructing a comprehensive library of FHE components, a complex endeavor due to multiple inherent problems. We propose a competition-based approach for building such a library. More concretely, we present FHERMA, a new challenge platform that introduces black-box and white-box challenges, and fully automated evaluation of submitted FHE solutions. The initial challenges on the FHERMA platform are motivated by practical problems in machine learning and blockchain. The winning solutions get integrated into an open-source library of FHE components, which is available to all members of the PETs community under the Apache 2.0 license.
Ponyta: Foundations of Side-Contract-Resilient Fair Exchange
This paper is subsumed by Rapidash (https://eprint.iacr.org/2022/1063). Please use Rapidash for the citation.
Fair exchange is a fundamental primitive for blockchains, and is widely adopted in applications such as atomic swaps, payment channels, and DeFi. Most existing designs of blockchain-based fair exchange protocols consider only the users as strategic players, and assume honest miners. However, recent works revealed that the fairness of commonly deployed fair exchange protocols can be completely broken in the presence of user-miner collusion. In particular, a user can bribe the miners to help it cheat — a phenomenon also referred to as Miner Extractable Value (MEV).
We provide the first formal treatment of side-contract-resilient fair exchange. We propose a new fair exchange protocol called Ponyta, and we prove that the protocol is incentive compatible in the presence of user-miner collusion. In particular, we show that Ponyta satisfies a coalition-resistant Nash equilibrium. Further, we show how to use Ponyta to realize a cross-chain coin swap application, and prove that our coin swap protocol also satisfies coalition-resistant Nash equilibrium. Our work helps to lay the theoretical groundwork for studying side-contract-resilient fair exchange. Finally, we present practical instantiations of Ponyta in Bitcoin and Ethereum with minimal overhead in terms of costs for the users involved in the fair exchange, thus showcasing instantiability of Ponyta with a wide range of cryptocurrencies.
Maximizing Miner Revenue in Transaction Fee Mechanism Design
Transaction fee mechanism design is a new decentralized mechanism design problem where users bid for space on the blockchain. Several recent works showed that the transaction fee mechanism design fundamentally departs from classical mechanism design. They then systematically explored the mathematical landscape of this new decentralized mechanism design problem in two settings: in the plain setting where no cryptography is employed, and in a cryptography-assisted setting where the rules of the mechanism are enforced by a multi-party computation protocol. Unfortunately, in both settings, prior works showed that if we want the mechanism to incentivize honest behavior for both users as well as miners (possibly colluding with users), then the miner revenue has to be zero. Although adopting a relaxed, approximate notion of incentive compatibility gets around this zero miner-revenue limitation, the scaling of the miner revenue is nonetheless poor.
In this paper, we show that if we make a mildly stronger reasonable-world assumption than prior works, we can circumvent the known limitations on miner revenue and design auctions that generate optimal miner revenue. We also systematically explore the mathematical landscape of transaction fee mechanism design under the new reasonable-world assumption, and demonstrate how such assumptions can alter the feasibility and infeasibility landscape.
A Security Analysis of Restricted Syndrome Decoding Problems
Restricted syndrome decoding problems (R-SDP and R-SDP($G$)) provide an interesting basis for post-quantum cryptography. Indeed, they feature in CROSS, a submission in the ongoing process for standardizing post-quantum signatures.
This work improves our understanding of the security of both problems.
Firstly, we propose and implement a novel collision attack on R-SDP($G$) that provides the best attack under realistic restrictions on memory. Secondly, we derive precise complexity estimates for algebraic attacks on R-SDP that are shown to be accurate by our experiments. We note that neither of these improvements threatens the updated parameters of CROSS.
Updatable, Aggregatable, Succinct Mercurial Vector Commitment from Lattice
Vector commitments (VC) and their variants attract a lot of attention due to their wide range of usage in applications such as blockchains and accumulators. Mercurial vector commitment (MVC), one of the important variants of VC, is the core technique for building more complicated cryptographic applications, such as zero-knowledge sets (ZKS) and zero-knowledge elementary databases (ZK-EDB). However, to the best of our knowledge, the only post-quantum MVC construction is the one trivially implied by the generic framework of Catalano and Fiore (PKC '13) instantiated with lattice-based components, which requires auxiliary information and lacks advanced properties, namely updatability and aggregatability.
A major difficulty in constructing a lattice-based MVC is that it is not trivial to construct a lattice-based VC that satisfies a critical property called ``mercurial hiding''. In this paper, we identify some specific features of the new falsifiable family of basis-augmented SIS assumptions ($\mathsf{BASIS}$) proposed by Wee and Wu (EUROCRYPT '23) that can be utilized to construct a lattice-based mercurial vector commitment supporting updatability and aggregatability. We extend stateless updates and differential updates to the mercurial vector commitment and define a new property, named updatable mercurial hiding. Then, we show how to modify our constructions to obtain an updatable mercurial vector commitment that satisfies these properties. To aggregate the openings, our constructions perfectly inherit the ability to aggregate from the $\mathsf{BASIS}$ assumption, which breaks the limitation on binding in current aggregatable MVCs. In the end, we show that our constructions can be used to build various kinds of lattice-based ZKS and ZK-EDB directly within the existing framework.
New Security Proofs and Techniques for Hash-and-Sign with Retry Signature Schemes
Hash-and-Sign with Retry is a popular technique to design efficient signature schemes from code-based or multivariate assumptions. Contrary to Hash-and-Sign signatures based on preimage-sampleable functions as defined by Gentry, Peikert and Vaikuntanathan (STOC 2008), trapdoor functions in code-based and multivariate schemes are not surjective. Therefore, the standard approach uses random trials. Kosuge and Xagawa (PKC 2024) coined it the Hash-and-Sign with Retry paradigm.
As many attacks have appeared on code-based and multivariate schemes, we think it is important for the ongoing NIST competition to look at the security proofs of these schemes. The original proof of Sakumoto, Shirai, and Hiwatari (PQCrypto 2011) was flawed, then corrected by Chatterjee, Das and Pandit (INDOCRYPT 2022). The fix is still not sufficient, as it only works for very large finite fields. A new proof in the Quantum ROM model was proposed by Kosuge and Xagawa (PKC 2024), but it is rather loose, even when restricted to the classical setting.
In this paper, we introduce several tools that yield tighter security bounds for Hash-and-Sign with Retry signatures in the classical setting. These include the Hellinger distance, stochastic dominance arguments, and a new combinatorial tool to transform a proof in the non-adaptive setting into one for the adaptive setting. Ultimately, we obtain a sharp bound for the security of Hash-and-Sign with Retry signatures, applicable to various code-based and multivariate schemes. Focusing on NIST candidates, we apply these results to the MAYO, PROV, and modified UOV signature schemes. In most cases, our bounds are tight enough to apply with the real parameters of those schemes; in some cases, smaller parameters would suffice.
The Practical Advantage of RSA over ECC and Pairings
The coexistence of RSA and the elliptic curve cryptosystem (ECC) has continued for over forty years. It is well known that ECC has the advantage of shorter keys than RSA, which often leads a newcomer to assume that ECC also runs faster. In this report, we present Mathematica code for RSA-2048 and ECC-256 which visually shows that RSA-2048 runs three times faster than ECC-256. It is also estimated that RSA-2048 runs 48,000 times faster than the Weil pairing with embedding degree 2 and a fixed point.
Low-latency Secure Integrated Sensing and Communication with Transmitter Actions
This paper considers an information theoretic model of secure integrated sensing and communication, represented as a wiretap channel with action dependent states. This model allows one to secure a part of the transmitted message against a sensed target that eavesdrops the communication, while allowing transmitter actions to change the channel statistics. An exact secrecy-distortion region is given for a physically-degraded channel. Moreover, a finite-length achievability region is established for the model using an output statistics of random binning method, giving an achievable bound for low-latency applications.
Classical Commitments to Quantum States
We define the notion of a classical commitment scheme to quantum states, which allows a quantum prover to compute a classical commitment to a quantum state, and later open each qubit of the state in either the standard or the Hadamard basis. Our notion is a strengthening of the measurement protocol from Mahadev (STOC 2018). We construct such a commitment scheme from the post-quantum Learning With Errors (LWE) assumption, and more generally from any noisy trapdoor claw-free function family that has the distributional strong adaptive hardcore bit property (a property that we define in this work).
Our scheme is succinct in the sense that the running time of the verifier in the commitment phase depends only on the security parameter (independent of the size of the committed state), and its running time in the opening phase grows only with the number of qubits that are being opened (and the security parameter). As a corollary we obtain a classical succinct argument system for QMA under the post-quantum LWE assumption. Previously, this was only known assuming post-quantum secure indistinguishability obfuscation. As an additional corollary we obtain a generic way of converting any X/Z quantum PCP into a succinct argument system under the quantum hardness of LWE.
Security Analysis of XHASH8/12
We have investigated both the padding scheme and the applicability of algebraic attacks to XHash8 and XHash12. The only vulnerability of the padding scheme we could find is plausibly applicable only in the multi-rate setting---for which the authors make no claim---and the padding is safe otherwise.
For algebraic attacks relying on the computation and exploitation of a Gröbner basis, our survey of the literature suggests basing a security argument on the complexity of the variable elimination step rather than on that of the computation of the Gröbner basis itself. Indeed, it turns out that the latter complexity is hard to estimate---and is sometimes literally non-existent. Focusing on the elimination step, we propose a generalization of the "FreeLunch" approach which, under a reasonable conjecture about the behaviour of the degree of polynomial ideals of dimension 0, is sufficient for us to argue that both XHash8 and XHash12 are safe against such attacks.
We implemented a simplified version of the generation (and resolution) of the corresponding set of equations in SAGE, which allowed us to validate our conjecture at least experimentally, and in fact to show that the lower bound it provides on the ideal degree is not tight---meaning we are a priori underestimating the security of these permutations against the algebraic attacks we consider.
At this stage, if used as specified, these hash functions seem safe from Gröbner bases-based algebraic attacks.
Analyzing the complexity of reference post-quantum software: the case of lattice-based KEMs
Software for various post-quantum KEMs has been submitted by the KEM design teams to the SUPERCOP testing framework. The ref/*.c and ref/*.h files together occupy, e.g., 848 lines for ntruhps4096821, 928 lines for ntruhrss701, 1316 lines for sntrup1277, and 2633 lines for kyber1024.
It is easy to see that these numbers overestimate the inherent complexity of software for these KEMs. It is more difficult to systematically measure this complexity.
This paper takes these KEMs as case studies and applies consistent rules to streamline the ref software for the KEMs, while still passing SUPERCOP's tests and preserving the decomposition of specified KEM operations into functions. The resulting software occupies 381 lines for ntruhps4096821, 385 lines for ntruhrss701, 472 lines for kyber1024, and 478 lines for sntrup1277. This paper also identifies the external subroutines used in each case, identifies the extent to which code is shared across different parameter sets, quantifies various software complications specific to each KEM, and traces how differences in KEM design goals produced different software complications.
As a spinoff, this paper presents a kyber512 key-recovery demo exploiting variations in timings of the Kyber reference code.
PriDe CT: Towards Public Consensus, Private Transactions, and Forward Secrecy in Decentralized Payments
Anonymous Zether, proposed by Bünz et al. (FC, 2020) and subsequently improved by Diamond (IEEE S&P, 2021), is an account-based confidential payment mechanism that works by using a smart contract to achieve privacy (i.e., the identities of transaction receivers and the payloads are hidden). In this work, we look at simplifying the existing protocol while also achieving batching of transactions for multiple receivers and ensuring consensus and forward secrecy. To the best of our knowledge, this work is the first to formally study the notion of forward secrecy in the blockchain setting, borrowing a very popular and useful idea from the world of secure messaging. Specifically, we introduce:
- FUL-Zether, a forward-secure version of Zether (Bünz et al., FC, 2020).
- PRIvate DEcentralized Confidential Transactions (PriDe CT), a much-simplified version of Anonymous Zether that achieves competitive performance and enables batching of transactions for multiple receivers.
- PRIvate DEcentralized Forward-secure Until Last update Confidential Transactions (PriDeFUL CT), a forward-secure version of PriDe CT.
We also present an open-source, Ethereum-based implementation of our system.
PriDe CT uses linear homomorphic encryption, as Anonymous Zether does, but with simpler zero-knowledge proofs. PriDeFUL CT uses an updatable public key encryption scheme to achieve forward secrecy, via a new DDH-based construction in the standard model.
In terms of transaction sizes, Quisquis (Asiacrypt, 2019), which is the only cryptocurrency that supports batchability (albeit in the UTXO model), has 15 times more group elements than PriDe CT. Meanwhile, Anonymous Zether requires more terms for a ring of receivers, even without accounting for the ability to batch in PriDe CT. Further, our implementation indicates that, even with 7 intended receivers, PriDe CT outperforms Anonymous Zether in proving time and gas consumption.
Lattice-based, more general anti-leakage model and its application in decentralization
In the case of standard LWE samples $(A, b = As + e)$, the matrix $A$ is typically chosen uniformly. Under the DLWE assumption, the conditional distribution of the secret $s$ given $(A, As + e)$ and its a priori distribution are expected to be consistent. However, when an adversary chooses $A$ adaptively, the disparity between the two may be larger. In this work, our primary focus is the quantification of the average conditional min-entropy of $s$ given $(A, As + e)$, where $A$ is chosen by the adversary. Brakerski and Döttling answered the question in one case: they proved that when $s$ is chosen uniformly, the average conditional min-entropy remains high. We prove that similar results also hold when $s$ is chosen uniformly from $\mathbb{Z}_q^n$ or is sampled from a discrete Gaussian distribution.
As an independent result, we also prove the regularity of a hash function mapping to a prime-order group and its Cartesian product. As an application of the above results, we improve the multi-key fully homomorphic encryption scheme of Brakerski, Halevi, and Polychroniadou (TCC 2017) and answer the question raised at the end of their work positively: we obtain GSW-type ciphertexts rather than Dual-GSW, and the improved scheme has shorter keys and ciphertexts.
The Direction of Updatable Encryption Does Matter
We introduce a new definition for key updates, called backward-leak uni-directional key updates, in updatable encryption (UE). This notion is a variant of uni-directional key updates for UE. We show that existing secure UE schemes in the bi-directional key updates setting are not secure in the backward-leak uni-directional key updates setting. Thus, security in the backward-leak uni-directional key updates setting is strictly stronger than security in the bi-directional key updates setting. This result is in sharp contrast to the equivalence theorem by Jiang (Asiacrypt 2020), which says security in the bi-directional key updates setting is equivalent to security in the existing uni-directional key updates setting. We call the existing uni-directional key updates ``forward-leak uni-directional'' key updates to distinguish two types of uni-directional key updates in this paper.
We also present a UE scheme that is secure in the backward-leak uni-directional key updates setting under the learning with errors assumption.
Quantum Algorithms for Lattice Problems
We show a polynomial time quantum algorithm for solving the learning with errors problem (LWE) with certain polynomial modulus-noise ratios. Combined with the reductions from lattice problems to LWE shown by Regev [J.ACM 2009], we obtain polynomial time quantum algorithms for solving the decisional shortest vector problem (GapSVP) and the shortest independent vector problem (SIVP) for all $n$-dimensional lattices within polynomial approximation factors. Previously, no polynomial or even subexponential time quantum algorithms were known for solving GapSVP or SIVP for all lattices within any polynomial approximation factors.
To develop a quantum algorithm for solving LWE, we mainly introduce two new techniques. First, we introduce Gaussian functions with complex variances in the design of quantum algorithms. In particular, we exploit the feature of the Karst wave in the discrete Fourier transform of complex Gaussian functions. Second, we use windowed quantum Fourier transform with complex Gaussian windows, which allows us to combine the information from both time and frequency domains. Using those techniques, we first convert the LWE instance into quantum states with purely imaginary Gaussian amplitudes, then convert purely imaginary Gaussian states into classical linear equations over the LWE secret and error terms, and finally solve the linear system of equations using Gaussian elimination. This gives a polynomial time quantum algorithm for solving LWE.
Pushing the Limit of Vectorized Polynomial Multiplication for NTRU Prime
We conduct a systematic examination of vector arithmetic for polynomial multiplications in software. Vector instruction sets and extensions typically specify a fixed number of registers, each holding a power-of-two number of bits, and support a wide variety of vector arithmetic on registers. Programmers then try to align mathematical computations with the vector arithmetic supported by the designated instruction set or extension. We delve into the intricacies of this process for polynomial multiplications. In particular, we introduce "vectorization-friendliness" and "permutation-friendliness", and review the "Toeplitz matrix-vector product" to systematically identify suitable mappings from homomorphisms to vectorized implementations.
To illustrate how the formalization works, we detail the vectorization of polynomial multiplication in the ring used in the parameter set sntrup761 of the NTRU Prime key encapsulation mechanism. For practical evaluation, we implement vectorized polynomial multipliers for this ring with AVX2 and Neon. We benchmark our AVX2 implementation on Haswell and Skylake and our Neon implementation on Cortex-A72 and the "Firestorm" core of Apple M1 Pro.
Our AVX2-optimized implementation is 1.99−2.16 times faster than the state-of-the-art AVX2-optimized implementation by [Bernstein, Brumley, Chen, and Tuveri, USENIX Security 2022] on Haswell and Skylake, and our Neon-optimized implementation is 1.29−1.36 times faster than the state-of-the-art Neon-optimized implementation by [Hwang, Liu, and Yang, ACNS 2024] on Cortex-A72 and Apple M1 Pro.
For the overall scheme with AVX2, we reduce the batch key generation cycles (amortized with batch size 32) by 7.9%−12.0%, encapsulation cycles by 7.1%−10.3%, and decapsulation cycles by 10.7%−13.3% on Haswell and Skylake. For the overall performance with Neon, we reduce the encapsulation cycles by 3.0%−6.6% and decapsulation cycles by 12.8%−15.1% on Cortex-A72 and Apple M1 Pro.
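To make the Toeplitz view concrete, here is a toy Python/numpy sketch of polynomial multiplication expressed as a Toeplitz matrix-vector product over the integers. The paper's implementations apply this idea to the sntrup761 ring using AVX2/Neon vector arithmetic, which is not modeled here.

    # The product a(x) * b(x) equals T(a) @ b, where T(a) is the
    # (2n-1) x n Toeplitz matrix whose (i, j) entry is a_{i-j}.
    import numpy as np

    def toeplitz_polymul(a, b):
        n = len(a)
        T = np.zeros((2 * n - 1, n), dtype=np.int64)
        for i in range(2 * n - 1):
            for j in range(n):
                if 0 <= i - j < n:
                    T[i, j] = a[i - j]
        return T @ np.asarray(b, dtype=np.int64)

    a, b = [1, 2, 3, 4], [5, 6, 7, 8]      # coefficients, lowest degree first
    assert np.array_equal(toeplitz_polymul(a, b),
                          np.convolve(a, b))  # coefficients of a(x) * b(x)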
Bootstrapping in FHEW-like Cryptosystems
FHEW and TFHE are fully homomorphic encryption (FHE) cryptosystems that can evaluate arbitrary Boolean circuits on encrypted data by bootstrapping after each gate evaluation. The FHEW cryptosystem was originally designed based on standard (Ring, circular secure) LWE assumptions, and its initial implementation was able to run bootstrapping in less than 1 second. The TFHE cryptosystem used somewhat stronger assumptions, such as (Ring, circular secure) LWE over the torus with binary secret distribution, and applied several other optimizations to reduce the bootstrapping runtime to less than 0.1 second. Up to now, the gap between the underlying security assumptions prevented a fair comparison of the cryptosystems for the same security settings.
We present a unified framework that includes the original and extended variants of both the FHEW and TFHE cryptosystems, and implement it in the open-source PALISADE lattice cryptography library using modular arithmetic. Our analysis shows that the main distinction between the cryptosystems is the bootstrapping procedure used: Alperin-Sherif--Peikert (AP) for FHEW vs. Gama--Izabachene--Nguyen--Xie (GINX) for TFHE. All other algorithmic optimizations in TFHE apply equally to both cryptosystems. The GINX bootstrapping method makes essential use of binary secrets and cannot be directly applied to other secret distributions. In the process of comparing the two schemes, we present a simple, lightweight method to extend GINX bootstrapping (e.g., as employed by TFHE) to ternary uniform and Gaussian secret distributions, which are included in the HE community security standard. Our comparison of the AP and GINX bootstrapping methods for different secret distributions suggests that the TFHE/GINX cryptosystem provides better performance for binary and ternary secrets while FHEW/AP is faster for Gaussian secrets. We make a recommendation to consider the variants of the FHEW and TFHE cryptosystems based on ternary and Gaussian secrets for standardization by the HE community.
PLASMA: Private, Lightweight Aggregated Statistics against Malicious Adversaries
Private heavy-hitters is a data-collection task where multiple clients possess private bit strings, and data-collection servers aim to identify the most popular strings without learning anything about the clients' inputs. In this work, we introduce PLASMA: a private analytics framework in the three-server setting that protects the privacy of honest clients and the correctness of the protocol against a coalition of malicious clients and a malicious server.
Our core primitives are a verifiable incremental distributed point function (VIDPF) and a batched consistency check, which are of independent interest. Our VIDPF introduces new methods to validate client inputs based on hashing. Meanwhile, our batched consistency check uses Merkle trees to validate multiple client sessions together in a batch. This drastically reduces server communication across multiple client sessions, resulting in significantly less communication than related works. Finally, we compare PLASMA with the recent works of Asharov et al. (CCS'22) and Poplar (S&P'21) in terms of monetary cost for different input sizes.
Computing 2-isogenies between Kummer lines
We use theta groups to study 2-isogenies between Kummer lines, with a particular focus on the Montgomery model. This allows us to recover known formulas, along with more efficient forms for translated isogenies, which are cheaper to evaluate. We leverage these translated isogenies to build a hybrid ladder for scalar multiplication on Montgomery curves with rational 2-torsion, which costs fewer field operations per bit than the standard Montgomery ladder.
On digital signatures based on group actions: QROM security and ring signatures
Group action based cryptography was formally proposed in the seminal paper of Brassard and Yung (Crypto 1990). Based on one-way group actions, there is a well-known digital signature design based on the Goldreich–Micali–Wigderson (GMW) zero-knowledge protocol for the graph isomorphism problem and the Fiat–Shamir (FS) transformation. Recently, there has been a revival of activity on group action based cryptography and the GMW-FS design, as witnessed by the schemes SeaSign (Eurocrypt 2019), CSI-FiSh (Asiacrypt 2019), LESS (Africacrypt 2020), ATFE (Eurocrypt 2022), and MEDS (Africacrypt 2023).
The contributions of this paper are two-fold: the first is about the GMW-FS design in general, and the second is on the ATFE-GMW-FS scheme.
First, we study the QROM security and ring signatures of the GMW-FS design in the group action framework. We distil properties of the underlying group action that make the GMW-FS design secure in the quantum random oracle model (QROM). We also show that this design supports a linkable ring signature construction following the work of Beullens, Katsumata and Pintore (Asiacrypt 2020).
Second, we apply the above results to prove the security of the ATFE-GMW-FS scheme in the QROM. We then describe a linkable ring signature scheme based on it and provide an implementation of the ring signature scheme. Preliminary experiments suggest that our scheme is competitive among existing post-quantum ring signatures.
Modular Sumcheck Proofs with Applications to Machine Learning and Image Processing
Cryptographic proof systems provide integrity, fairness, and privacy in applications that outsource data processing tasks. However, general-purpose proof systems do not scale well to large inputs. At the same time, ad-hoc solutions for concrete applications - e.g., machine learning or image processing - are more efficient but lack modularity, hence they are hard to extend or to compose with other tools of a data-processing pipeline.
In this paper, we combine the performance of tailored solutions with the versatility of general-purpose proof systems. We do so by introducing a modular framework for verifiable computation of sequential operations. The main tool of our framework is a new information-theoretic primitive called Verifiable Evaluation Scheme on Fingerprinted Data (VE) that captures the properties of diverse sumcheck-based interactive proofs, including the well-established GKR protocol. Thus, we show how to compose VEs for specific functions to obtain verifiability of a data-processing pipeline.
We propose a novel VE for convolution operations that can handle multiple input-output channels and batching, and we use it in our framework to build proofs for (convolutional) neural networks and image processing. We realize a prototype implementation of our proof systems and show that we achieve faster proving times and shorter proofs than the state of the art, in addition to asymptotic improvements.
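For readers unfamiliar with the underlying primitive, here is a minimal honest-prover sumcheck interaction in Python for a multilinear polynomial given by its evaluations on the Boolean cube. The paper's VE abstraction, fingerprinting, and GKR-style composition are not modeled.

    import random

    p = 2**61 - 1  # a prime field modulus, for illustration

    def sumcheck(evals, n, rng):
        # evals: dict mapping n-bit tuples to field elements (multilinear f).
        claim = sum(evals.values()) % p
        table = dict(evals)
        for _ in range(n):
            # Prover: the round polynomial g(t) is linear in t, so sending
            # g(0) and g(1) determines it.
            g0 = sum(v for k, v in table.items() if k[0] == 0) % p
            g1 = sum(v for k, v in table.items() if k[0] == 1) % p
            assert (g0 + g1) % p == claim   # Verifier: round consistency
            r = rng.randrange(p)            # Verifier: random challenge
            # Both sides fix the first variable to r by interpolation.
            folded = {}
            for key in table:
                if key[0] == 0:
                    rest = key[1:]
                    v0, v1 = table[(0,) + rest], table[(1,) + rest]
                    folded[rest] = (v0 + r * ((v1 - v0) % p)) % p
            table = folded
            claim = (g0 + r * ((g1 - g0) % p)) % p
        assert table[()] == claim  # final check: one oracle query to f at r
        return True

    cube = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    f = {bits: random.randrange(p) for bits in cube}
    print(sumcheck(f, 3, random.Random(0)))  # True: the verifier accepts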
Nomadic: Normalising Maliciously-Secure Distance with Cosine Similarity for Two-Party Biometric Authentication
Computing the distance between two non-normalised vectors $x$ and $y$ and comparing it to a predefined public threshold $\tau$ is an essential functionality used in privacy-sensitive applications such as biometric authentication, identification, machine learning algorithms (e.g., linear regression, k-nearest neighbours, etc.), and typo-tolerant password-based authentication.
Tackling a widely used distance metric, Nomadic studies the privacy-preserving evaluation of cosine similarity in a two-party (2PC) distributed setting. We illustrate this setting in a scenario where a client uses biometrics to authenticate to a service provider, outsourcing the distance calculation to two computing servers. In this setting, we propose two novel 2PC protocols to evaluate the normalised cosine similarity between two non-normalised vectors followed by comparison to a public threshold, one in the semi-honest and one in the malicious setting. Our protocols combine additive secret sharing with function secret sharing, saving one communication round by employing a new building block to compute the composition of a function yielding a binary result with a subsequent binary gate. Overall, our protocols outperform all prior works, requiring only two communication rounds under a strong threat model that also deals with malicious inputs via normalisation. We evaluate our protocols in the setting of biometric authentication using voice, and the obtained results reveal a notable efficiency improvement compared to existing state-of-the-art works.
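As a plaintext illustration of the functionality being secured (no secret sharing here), cosine-similarity comparison against a public threshold $\tau = t_{num}/t_{den}$ can be done without any normalisation or square roots, which keeps all intermediate values integral and thus friendly to secure computation over rings. Variable names are illustrative, not the paper's.

    # cos(x, y) >= tau  <=>  <x,y> >= 0  and  <x,y>^2 * t_den^2 >=
    # t_num^2 * |x|^2 * |y|^2   (assuming a non-negative threshold).
    def cosine_above_threshold(x, y, t_num, t_den):
        dot = sum(a * b for a, b in zip(x, y))
        nx2 = sum(a * a for a in x)
        ny2 = sum(b * b for b in y)
        if dot < 0:
            return False
        return dot * dot * t_den * t_den >= t_num * t_num * nx2 * ny2

    x, y = [3, 1, 4, 1, 5], [2, 7, 1, 8, 2]
    print(cosine_above_threshold(x, y, 1, 2))  # is cos(x, y) >= 0.5 ?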
Hidden $\Delta$-fairness: A Novel Notion for Fair Secure Two-Party Computation
Secure two-party computation allows two mutually distrusting parties to compute a joint function over their inputs, guaranteeing properties such as input privacy or correctness.
For many tasks, such as joint computation of statistics, it is important that when one party receives the result of the computation, the other party also receives the result.
Unfortunately, this property, which is called fairness, is unattainable in the two-party setting for arbitrary functions, so weaker variants have been proposed.
One such notion, proposed by Pass et al. (EUROCRYPT 2017), is called $\Delta$-fairness.
Informally, it guarantees that if a corrupt party receives the output in round $r$ and stops participating in the protocol, then the honest party receives the output by round $\Delta \cdot r$.
This notion is achieved by using so-called secure enclaves.
In many settings, $\Delta$-fairness is not sufficient, because a corrupt party is guaranteed to receive its output before the honest party, giving the corrupt party an advantage in further interaction.
Worse, as $\Delta$ is known to the corrupt party, it can abort the protocol when it is most advantageous.
We extend the concept of $\Delta$-fairness by introducing a new fairness notion, which we call hidden $\Delta$-fairness, which addresses these problems.
First of all, under our new notion, a corrupt party may not benefit from aborting, because it does not necessarily learn the result first.
Moreover, $\Delta$ and other parameters are sampled according to a given distribution and remain unknown to the participants in the computation.
We propose a 2PC protocol that achieves hidden $\Delta$-fairness, also using secure enclaves, and prove its security in the Generalized Universal Composability (GUC) framework.
Proximity Testing with Logarithmic Randomness
A fundamental result dating to Ligero (Des. Codes Cryptogr. '23) establishes that each fixed linear block code exhibits proximity gaps with respect to the collection of affine subspaces, in the sense that each given subspace either resides entirely close to the code, or else contains only a small portion which resides close to the code. In particular, any given subspace's failure to reside entirely close to the code is necessarily witnessed, with high probability, by a uniformly randomly sampled element of that subspace. We investigate a variant of this phenomenon in which the witness is not sampled uniformly from the subspace, but rather from a much smaller subset of it. We show that a logarithmic number of random field elements (in the dimension of the subspace) suffice to effect an analogous proximity test, with moreover only a logarithmic (multiplicative) loss in the possible prevalence of false witnesses. We discuss applications to recent noninteractive proofs based on linear codes, including Brakedown (CRYPTO '23).
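The shape of such a derandomized test can be sketched as follows: a generic tensor-coefficient construction consistent with the abstract's description, though the paper's exact distribution and analysis may differ. From $\log_2 k$ random field elements one derives $k$ coefficients, replacing the $k$ independent coefficients of a naive random linear combination.

    import random

    p = 2**61 - 1

    def tensor_coefficients(rs, k):
        # coeff_j = product over bit positions i of (r_i if bit i of j is set)
        coeffs = []
        for j in range(k):
            c = 1
            for i, r in enumerate(rs):
                if (j >> i) & 1:
                    c = (c * r) % p
            coeffs.append(c)
        return coeffs

    k = 8                                    # number of rows to combine
    rs = [random.randrange(p) for _ in range(k.bit_length() - 1)]  # log2(k)
    rows = [[random.randrange(p) for _ in range(5)] for _ in range(k)]
    coeffs = tensor_coefficients(rs, k)
    combo = [sum(c * row[t] for c, row in zip(coeffs, rows)) % p
             for t in range(5)]
    print(len(rs), "random elements combined", k, "rows")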
1/0 Shades of UC: Photonic Side-Channel Analysis of Universal Circuits
A universal circuit (UC) can be thought of as a programmable circuit that can simulate any circuit up to a certain size by specifying its secret configuration bits. UCs have been incorporated into various applications, such as private function evaluation (PFE). Recently, studies have attempted to formalize the concept of semiconductor intellectual property (IP) protection in the context of UCs. This is despite the observations made in theory and practice that, in reality, the adversary may obtain additional information about the secret when executing cryptographic protocols. This paper aims to answer the question of whether UCs leak information unintentionally, which can be leveraged by the adversary to disclose the configuration bits. In this regard, we propose the first photon emission analysis against UCs relying on computer vision-based approaches. We demonstrate that the adversary can utilize a cost-effective solution to take images to be processed by off-the-shelf algorithms to extract configuration bits. We examine the efficacy of our method in two scenarios: (1) the design is small enough to be captured in a single image during the attack phase, and (2) multiple images should be captured to launch the attack by deploying a divide-and-conquer strategy. To evaluate the effectiveness of our attack, we use metrics commonly applied in side-channel analysis, namely rank and success rate. By doing so, we show that our profiled photon emission analysis achieves a success rate of 1 by employing a few templates (concretely, only 18 images were used as templates).
The Last Challenge Attack: Exploiting a Vulnerable Implementation of the Fiat-Shamir Transform in a KZG-based SNARK
The Fiat-Shamir transform [1] is a well-known and widely employed technique for converting sound public-coin interactive protocols into sound non-interactive protocols. Even though the transformation itself is relatively clear and simple, some implementations choose to deviate from the specifications, for example for performance reasons. In this short note, we present a vulnerability arising from such a deviation in a KZG-based PLONK verifier implementation. This deviation stemmed from the incorrect computation of the last challenge of the PLONK protocol [2], where the KZG batching proof challenge was computed before, and, hence, independently from the KZG evaluation proofs. More generally, such a vulnerability may affect any KZG [3] implementation where one uses batched KZG proof evaluations for at least two distinct evaluation points. We call an attack enabled by such a deviation a Last Challenge Attack. For concreteness, we show that when a PLONK verifier implementation presents such a deviation, a malicious PLONK prover can mount a Last Challenge Attack to construct verifiable proofs of false statements. The described vulnerability was initially discovered as part of an audit, and has been responsibly disclosed to the developers and fixed. A proof of concept of the vulnerability, in which a proof is forged for an arbitrary public input, is made available.
In addition to the above, in this work we also provide a security proof of the knowledge-soundness of the batched KZG scheme with evaluations for at least two distinct values.
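The hygiene rule at issue can be illustrated with a small Python sketch (function names and transcript layout are illustrative, not taken from any particular implementation): every Fiat-Shamir challenge must be derived from a transcript that already absorbs all the prover messages it is supposed to bind, including, here, the KZG evaluation proofs.

    import hashlib

    def fs_challenge(transcript: bytes) -> int:
        return int.from_bytes(hashlib.sha256(transcript).digest(), "big")

    def batching_challenge(commitments, evaluations, opening_proofs):
        transcript = b"".join(commitments)
        transcript += b"".join(evaluations)
        transcript += b"".join(opening_proofs)  # the step the flawed verifier
                                                # skipped: absorb proofs first
        return fs_challenge(transcript)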
Efficient Implementations of Square-root Vélu's Formulas
In the implementation of isogeny-based schemes, Vélu's formulas are essential for constructing and evaluating odd-degree isogenies. Bernstein et al. proposed an approach known as √élu, which computes an $\ell$-isogeny at a cost of $\tilde{O}(\sqrt{\ell})$ finite field operations. This paper presents two key improvements to enhance the efficiency of the implementation of √élu from two aspects: optimizing the partition involved in √élu, and speeding up the computations of the sums of products used in polynomial multiplications over a finite field with large prime characteristic $p$. To optimize the partition, we adjust it to enhance the utilization of $x$-coordinates and eliminate computational redundancy, which ultimately reduces the number of $\mathbb{F}_p$-multiplications. The speedup of the sums of products employs two techniques: lazy reduction (abbreviated as LZYR) and generalized interleaved Montgomery multiplication (abbreviated as INTL). These techniques aim to minimize the underlying operations such as modular reductions and assembly memory instructions. We present an optimized C and assembly code implementation of √élu for the CTIDH512 instantiation. In terms of $\ell$-isogeny computations in CTIDH512, the performance in clock cycles applying the new partition + INTL (resp. new partition + LZYR) improves upon the previous work.
Breaking Bicoptor from S&P 2023 Based on Practical Secret Recovery Attacks
At S&P 2023, a family of secure three-party computation protocols called Bicoptor was proposed by Zhou et al., which is used to compute non-linear functions in privacy-preserving machine learning. In these protocols, two parties respectively hold shares of the secret, while a third party acts as an assistant. The authors claimed that no party in Bicoptor can independently compromise the confidentiality of the input, intermediate, or output values. In this paper, we point out that this claim is incorrect. The assistant can recover the secret in the DReLU protocol, which is the basis of Bicoptor, and recovering this secret compromises the security of the remaining protocols in Bicoptor. Specifically, we provide two secret recovery attacks on the DReLU protocol. The first is a careful enumeration attack, which stems from the derivation of a modular equation relating the secret and its shares. The key to the second attack lies in solving the small integer root problem of a modular equation; as the lattices involved have dimension only 3 or 4, the LLL algorithm works effectively. For the system settings selected by Bicoptor, our experiments show that the desired secret in the DReLU protocol can be recovered within one second on a personal computer. Therefore, when using cryptographic protocols in the field of privacy-preserving machine learning, it is important not only to pay attention to design overhead, but also to be particularly careful about potential security threats.
An efficient quantum parallel repetition theorem and applications
We prove a tight parallel repetition theorem for 3-message computationally-secure quantum interactive protocols between an efficient challenger and an efficient adversary. We also prove, under plausible assumptions, that the security of 4-message computationally secure protocols does not generally decrease under parallel repetition. These mirror the classical results of Bellare, Impagliazzo, and Naor [BIN97]. Finally, we prove that all quantum argument systems can be generically compiled to an equivalent 3-message argument system, mirroring the transformation for quantum proof systems [KW00, KKMV07].
As immediate applications, we show how to derive hardness amplification theorems for quantum bit commitment schemes (answering a question of Yan [Yan22]), EFI pairs (answering a question of Brakerski, Canetti, and Qian [BCQ23]), public-key quantum money schemes (answering a question of Aaronson and Christiano [AC13]), and quantum zero-knowledge argument systems. We also derive an XOR lemma [Yao82] for quantum predicates as a corollary.
Real-Valued Somewhat-Pseudorandom Unitaries
We explore a very simple distribution of unitaries: random (binary) phase -- Hadamard -- random (binary) phase -- random computational-basis permutation.
We show that this distribution is statistically indistinguishable from random Haar unitaries for any polynomial set of orthogonal input states (in any basis) with polynomial multiplicity.
This shows that even though real-valued unitaries cannot be completely pseudorandom (Haug, Bharti, Koh, arXiv:2306.11677), we can still obtain some pseudorandom properties without giving up on the simplicity of a real-valued unitary.
Our analysis shows that an even simpler construction: applying a random (binary) phase followed by a random computational-basis permutation, would suffice, assuming that the input is orthogonal and flat (that is, has high min-entropy when measured in the computational basis).
Using quantum-secure one-way functions (which imply quantum-secure pseudorandom functions and permutations), we obtain an efficient cryptographic instantiation of the above.
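At toy sizes, the candidate distribution can be materialized as a dense matrix to check that it is indeed unitary (a numpy sketch of the construction as described in the abstract; a cryptographic instantiation would of course never build the matrix explicitly):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 3
    N = 2**n

    H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    H = np.array([[1.0]])
    for _ in range(n):
        H = np.kron(H, H1)                       # Hadamard on n qubits

    D1 = np.diag(rng.choice([-1.0, 1.0], size=N))  # random binary phase
    D2 = np.diag(rng.choice([-1.0, 1.0], size=N))  # random binary phase
    P = np.eye(N)[rng.permutation(N)]              # random basis permutation

    # Circuit order: phase, Hadamard, phase, permutation.
    U = P @ D2 @ H @ D1
    assert np.allclose(U @ U.T, np.eye(N))  # real orthogonal, hence unitary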
Panacea: Non-interactive and Stateless Oblivious RAM
Oblivious RAM (ORAM) allows a client to outsource storage to a remote server while hiding the data access pattern from the server. Many ORAM designs have been proposed to reduce the computational overhead and bandwidth blowup for the client. A recent work, Onion Ring ORAM (CCS'19), is able to achieve $O(1)$ bandwidth blowup in the online phase using fully homomorphic encryption (FHE) techniques, at the cost of a computationally expensive client-side offline phase. Furthermore, such a scheme can be categorized as a stateful construction, meaning that the client has to locally maintain a dynamic state representing the order of remote database elements.
We present Panacea: a novel ORAM design based on FHE techniques that is non-interactive and stateless, achieves O(1) bandwidth blowup, and does not require the client to perform an expensive offline phase; in that sense, our design is the first of its kind among ORAM designs. To provide the client with these performance benefits, our design delegates all expensive computation to the resourceful server.
We additionally show how to boost server performance significantly using probabilistic batch codes, at the cost of only a 1.5x increase in bandwidth blowup and a 3x expansion in server storage, while lowering the amortized bandwidth per query.
Our experimental results show that our design, with the batching technique, is practical in terms of server computation overhead as well.
Specifically, even for large database sizes, a server needs only seconds of amortized computation time to respond to a query.
Thanks to its statelessness and low computational overhead on the client, together with a reasonable computational overhead on the server, our design is well suited for deployment as a cloud-based privacy-preserving storage-outsourcing solution with a portable client running on a lightweight device.
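As a rough illustration of how probabilistic batch codes trade a small bandwidth and storage expansion for large computational savings, the sketch below allocates a batch of queries to 1.5x as many buckets with 3-choice cuckoo hashing, so that each bucket holds at most one query and can be answered independently; the parameters and hash construction are illustrative, not taken from the paper.

```python
# Toy probabilistic batch code allocation: place 64 query indices into
# 96 buckets (1.5x expansion) using 3-choice cuckoo hashing.
import hashlib
import random

def bucket_choices(item, num_buckets, k=3):
    """k candidate buckets for an item, derived from a hash."""
    out = []
    for i in range(k):
        h = hashlib.sha256(f"{i}:{item}".encode()).digest()
        out.append(int.from_bytes(h[:8], "big") % num_buckets)
    return out

def cuckoo_allocate(items, num_buckets, max_kicks=500):
    table = [None] * num_buckets
    rng = random.Random(0)
    for item in items:
        cur = item
        for _ in range(max_kicks):
            slot = rng.choice(bucket_choices(cur, num_buckets))
            cur, table[slot] = table[slot], cur   # evict current occupant
            if cur is None:
                break
        else:
            raise RuntimeError("allocation failed (rare; retry with new hashes)")
    return table

queries = random.Random(1).sample(range(2**20), 64)  # 64 queries into a big DB
table = cuckoo_allocate(queries, num_buckets=96)
print("occupied buckets:", sum(slot is not None for slot in table))
```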
Analysis of Multivariate Encryption Schemes: Application to Dob and C*
A common strategy for constructing multivariate encryption schemes is to use a central map that is easy to invert over an extension field, along with a small number of modifications to thwart potential attacks. In this work we study the effectiveness of these modifications by deriving estimates for the number of degree fall polynomials. After developing the necessary tools, we focus on encryption schemes using the C* and Dobbertin central maps, with the internal perturbation (ip) and Q_+ modifications. For these constructions we are able to accurately predict the number of degree fall polynomials produced in a Gröbner basis attack, up to and including degree five for the Dob encryption scheme and degree four for C*. The predictions remain accurate even when fixing variables. Based on this new theory we design a novel attack on Dob, which completely recovers the secret key for the parameters suggested by its designers. Due to the generality of the presented techniques, we believe that they are also of interest for the analysis of other big-field schemes.
The Case of Small Prime Numbers Versus the Okamoto-Uchiyama Cryptosystem
In this paper we study the effect of using small prime numbers within the Okamoto-Uchiyama public key encryption scheme. We introduce two novel versions and prove their security. Then we show how to choose the system's parameters such that the security results hold. Moreover, we provide a practical comparison between the cryptographic algorithms we introduced and the original Okamoto-Uchiyama cryptosystem.
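For reference, here is a minimal sketch of the original Okamoto-Uchiyama cryptosystem in Python, with deliberately tiny primes so it runs instantly; the paper's small-prime variants modify the parameter choices, which this sketch does not attempt to reproduce.

```python
# Toy Okamoto-Uchiyama: n = p^2 * q, Enc(m) = g^m * h^r mod n,
# Dec uses L(x) = (x - 1) / p on values mod p^2. Tiny demo primes only.
import random

p, q = 1000003, 1000033          # toy primes; real parameters are much larger
n = p * p * q
rng = random.Random(0)

# Pick g whose order mod p^2 is divisible by p, and set h = g^n mod n.
while True:
    g = rng.randrange(2, n)
    if pow(g, p - 1, p * p) != 1:
        break
h = pow(g, n, n)

def L(x):                        # the Okamoto-Uchiyama "logarithm"
    return (x - 1) // p

def encrypt(m):                  # message m must satisfy 0 <= m < p
    r = rng.randrange(1, n)
    return (pow(g, m, n) * pow(h, r, n)) % n

def decrypt(c):
    a = L(pow(c, p - 1, p * p))
    b = L(pow(g, p - 1, p * p))
    return (a * pow(b, -1, p)) % p

m = 123456
assert decrypt(encrypt(m)) == m
print("additively homomorphic:", decrypt((encrypt(40) * encrypt(2)) % n) == 42)
```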
Fault Attack on SQIsign
In this paper, we introduce the first fault attack on SQIsign. By injecting a fault into the ideal generator during the commitment phase, we demonstrate a meaningful probability of inducing the generation of a particular order. This probability is bounded in terms of a single parameter, the degree of the commitment isogeny. We also show that the probability can be reasonably estimated by assuming uniform randomness of a random variable, and we provide empirical evidence supporting the validity of this approximation. In addition, we identify a loop-abort vulnerability due to the iterative structure of the isogeny operation. Exploiting these vulnerabilities, we present key recovery fault attack scenarios for two versions of SQIsign---one deterministic and the other randomized. We then analyze the time complexity and the number of queries required for each attack. Finally, we discuss straightforward countermeasures that can be implemented against the attack.
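The loop-abort vulnerability can be understood through a generic toy model, sketched below: if each iteration of a secret-dependent walk can be aborted by a fault, the attacker observes intermediate states and can brute-force the secret one step at a time. This illustrates the general loop-abort principle, not SQIsign's actual isogeny computation.

```python
# Generic loop-abort illustration: a walk whose i-th step depends on the
# i-th secret digit. Aborting the loop after i iterations exposes the
# intermediate state, so digits can be brute-forced one at a time.
import hashlib

DIGITS = 8                      # digits per secret, each in 0..DIGITS-1
secret = [3, 1, 4, 1, 5, 0, 2, 6]

def step(state, digit):
    return hashlib.sha256(state + bytes([digit])).digest()

def walk(secret, abort_after=None):
    state = b"\x00" * 32
    for i, d in enumerate(secret):
        if abort_after is not None and i == abort_after:
            return state                      # faulted: loop exits early
        state = step(state, d)
    return state

# Attacker: fault the loop to stop after 1, 2, ... iterations, recovering
# one digit per faulted execution by matching the exposed state.
recovered, state = [], b"\x00" * 32
for i in range(DIGITS):
    target = walk(secret, abort_after=i + 1)  # state after i+1 real steps
    for guess in range(DIGITS):
        if step(state, guess) == target:
            recovered.append(guess)
            state = target
            break
print("recovered:", recovered == secret)
```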
Revisiting the Security of Fiat-Shamir Signature Schemes under Superposition Attacks
The Fiat-Shamir transformation is a widely employed technique for constructing signature schemes, known as Fiat-Shamir signature schemes (FS-SIG), from secure identification (ID) schemes. However, the existing security proofs only take into account classical signing queries and do not consider superposition attacks, in which the signing oracle is quantum-accessible to the adversary. Alagic et al. proposed a security model called blind unforgeability (BUF, Eurocrypt'20), regarded as a preferable notion under superposition attacks.
In this paper, we conduct a thorough security analysis of FS-SIGs in the BUF model. First, we propose a special property for ID schemes called quantum special honest-verifier zero-knowledge (qsHVZK), which is stronger than classical HVZK. We prove that qsHVZK is a sufficient property for the BUF security (with implicit rejection) of the resulting FS-SIG in the quantum random oracle model (QROM). Next, we give an efficient construction of (a weaker variant of) a qsHVZK ID scheme based on the quantum hardness of LWE problems.
To avoid strengthening the HVZK requirement, we then turn to deterministic FS-SIGs (DFS) for more efficient constructions. We show that if the pseudorandom function is quantum-access-secure (QPRF), then the BUF security of the resulting DFS can be proven with only the standard (multi-)HVZK requirement in the QROM. A similar result extends to the hedged version of FS-SIG.
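To fix ideas, here is a minimal sketch of a Fiat-Shamir signature (Schnorr-style) in its deterministic variant, where the commitment randomness is derived from a PRF (instantiated here with HMAC) applied to the message. The group parameters are toy-sized and the sketch is entirely classical, so it illustrates the syntax of DFS rather than the BUF security analysis.

```python
# Toy Schnorr-style Fiat-Shamir signature with deterministic signing:
# the commitment randomness r is derived as PRF_sk(message) via HMAC.
# Toy group parameters; not secure, for illustration only.
import hashlib, hmac, random

p, q, g = 2039, 1019, 4           # q | p-1; g generates the order-q subgroup

def H(data):                      # challenge hash, mapped into Z_q
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen(rng):
    x = rng.randrange(1, q)       # secret key
    return x, pow(g, x, p)        # (sk, pk)

def sign(x, msg):
    # Deterministic FS: r = PRF_x(msg) instead of fresh randomness.
    r = int.from_bytes(hmac.new(x.to_bytes(4, "big"), msg,
                                hashlib.sha256).digest(), "big") % q
    R = pow(g, r, p)                        # commitment
    c = H(R.to_bytes(4, "big") + msg)       # Fiat-Shamir challenge
    s = (r + c * x) % q                     # response
    return c, s

def verify(y, msg, sig):
    c, s = sig
    R = (pow(g, s, p) * pow(y, -c, p)) % p  # g^s * y^(-c) = g^r
    return c == H(R.to_bytes(4, "big") + msg)

sk, pk = keygen(random.Random(0))
sig = sign(sk, b"hello")
print(verify(pk, b"hello", sig), sign(sk, b"hello") == sig)  # True True
```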
Digital Signatures for Authenticating Compressed JPEG Images
We construct a digital signature scheme for images that allows the image to be compressed without invalidating the signature. More specifically, given a JPEG image signed with our signature scheme, a third party can compress the image using JPEG compression and, as long as the quantization tables only include powers of two, derive a valid signature for the compressed image, without access to the secret signing key and without interaction with the signer. Our scheme is constructed using a standard digital signature scheme and a hash function as building blocks. Such compression-tolerant signatures could be useful in mitigating some of the threats posed by generative AI and fake news, without interfering with all uses of generative AI.
Taking inspiration from related signature schemes, we define a notion of unforgeability and prove our construction secure. Additionally, we show that our signatures have size 32.5 kb under standard parameter choices. Using image quality assessment metrics, we show that JPEG compression with the parameters specified by our scheme does not result in perceivably reduced visual fidelity compared to standard JPEG compression.
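One natural way to realize such compression-tolerant signatures, sketched below, is to hash the coefficient data plane by plane so that dropping low-order bit planes (i.e., re-quantizing by a power of two, assuming truncation) only requires the compressor to reveal one accumulated hash. This is a simplified guess at the mechanism for illustration; the paper's actual construction may differ.

```python
# Sketch: hash chain over bit planes of (already-quantized) coefficients.
# The signer signs the top of the chain. Re-quantizing by 2^j drops the j
# lowest planes; the compressor publishes the accumulated hash of the
# dropped planes, and a verifier recomputes the chain top from the rest.
import hashlib

PLANES = 8                                   # coefficients are 8-bit here

def plane(coeffs, i):
    return bytes((c >> i) & 1 for c in coeffs)

def chain_top(coeffs, start_plane=0, acc=b""):
    t = acc                                  # hash accumulating dropped planes
    for i in range(start_plane, PLANES):
        t = hashlib.sha256(plane(coeffs, i) + t).digest()
    return t

coeffs = [203, 17, 88, 5, 240, 66, 129, 54]  # toy quantized coefficients

signed_digest = chain_top(coeffs)            # signer signs this value

# Compressor: divide by 2^2 (truncation) and reveal the dropped-plane hash.
j = 2
compressed = [c >> j for c in coeffs]
t_dropped = b""
for i in range(j):
    t_dropped = hashlib.sha256(plane(coeffs, i) + t_dropped).digest()

# Verifier: rebuild the chain top from the surviving planes plus t_dropped.
rebuilt = chain_top([c << j for c in compressed], start_plane=j, acc=t_dropped)
print("chain top matches signed digest:", rebuilt == signed_digest)
```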
A Note on Quantum Algorithms for Lattice Problems
Recently, a paper by Chen (eprint 2024/555) claimed to construct a quantum polynomial-time algorithm that solves the Learning With Errors problem (Regev, JACM 2009) for a range of parameters. As a byproduct of Chen's result, it would follow that Chen's algorithm solves the Gap Shortest Vector Problem for a certain polynomial gap. In this short note we point out an error in the claims of Chen's paper.
On the Feasibility of Identity-based Encryption with Equality Test against Insider Attacks
Public key encryption with equality test, proposed by Yang et al. (CT-RSA 2010), allows anyone to check, using trapdoors, whether two ciphertexts under distinct public keys are encryptions of the same plaintext; identity-based encryption with equality test (IBEET) is its identity-based variant. As a variant of IBEET, IBEET against insider attacks (IBEETIA) was proposed by Wu et al. (ACISP 2017), where a token is defined for each identity and is used for encryption. Lee et al. (ACISP 2018) and Duong et al. (ProvSec 2019) proposed IBEETIA schemes constructed from identity-based encryption (IBE)-related complexity assumptions. Later, Emura and Takayasu (IEICE Transactions 2023) demonstrated that symmetric key encryption and pseudo-random permutations are sufficient to construct an IBEETIA scheme that is secure under the previous security definition.
In this paper, we identify a sufficient condition under which IBEETIA implies IBE. We define one-wayness against chosen-plaintext/ciphertext attacks for the token generator (OW-TG-CPA/CCA) and for token holders (OW-TH-CPA/CCA), notions that were not considered in the previous security definition. We show that OW-TG-CPA secure IBEETIA with additional conditions implies OW-CPA secure IBE. On the other hand, we propose a generic construction of OW-TH-CCA secure IBEETIA from public key encryption.
Our results suggest a design principle to efficiently construct IBEETIA without employing IBE-related complexity assumptions.
Tight Multi-user Security of Ascon and Its Large Key Extension
The Ascon cipher suite has recently become the preferred standard in the NIST Lightweight Cryptography standardization process. Despite its prominence, the first dedicated security analysis of the Ascon mode was conducted only recently. That analysis demonstrated that the Ascon AEAD mode offers superior security compared to the generic Duplex mode, but it was limited to a specific scenario: single-user and nonce-respecting, with a capacity strictly larger than the key size. In this paper, we eliminate these constraints and provide a comprehensive security analysis of the Ascon AEAD mode in the multi-user setting, where the capacity need not be larger than the key size. Writing D for the data complexity and T for the time complexity, our analysis bounds the adversary's advantage in terms of D, T, the key size, the number of users, and the size of the underlying permutation (320 bits for Ascon), and shows that Ascon achieves AEAD security within these bounds. Our results align with NIST requirements, showing that Ascon allows for a tag size as small as 64 bits while supporting a higher rate of 192 bits, provided the number of users remains within recommended limits. However, this security degrades as the number of users grows significantly. To address this issue, we propose a variant of the Ascon mode called LK-Ascon, which doubles the key size. This adjustment allows for a greater number of users without sacrificing security, while possibly offering additional resilience against quantum key-recovery attacks. We establish tight bounds for LK-Ascon, and furthermore show that both Ascon and LK-Ascon maintain authenticity security even against nonce-misusing adversaries.
A Digital Identity in the Hands of Swiss Citizens
The Swiss law on electronic identity (LSIE) was rejected on March 7, 2021. Its opponents accused it of involving private companies which could thus collect citizens' data and store them centrally. Six motions with identical wording were tabled on March 10, 2021: they all ask the Swiss Federal Council to set up a state-run system allowing citizens to prove their identity online in complete confidence. They stipulate that only necessary information is collected and that it is stored in a decentralized manner. The Swiss Federal Council recommended that Parliament approve these motions on May 26, 2021, and wishes to propose a new e-ID solution responding to citizens' concerns as soon as possible. The Federal Department of Justice and Police has been asked to draw up a first draft presenting several technical solutions and specifying their respective costs. Following the publication of a working document on September 2, 2021, a public consultation was opened. It ended on October 14, 2021, with a public debate organized at the Government House in Bern and broadcast live on a virtual platform. Self-Sovereign Identity (SSI) is one of the solutions identified during this process. It gives citizens control of their electronic identity: they hold credentials issued by public administrations and choose the data they wish to disclose when they authenticate with a service (they can, for example, prove that they are over 18 without specifying their exact date of birth).
We propose here a decentralized and user-centric e-ID system based on SSI principles. Our solution embraces an open-source philosophy, fostering transparency and community involvement. We employ blockchain technology as a design pattern to establish trust and ensure the immutability of identity-related data. By design, our solution ensures the right to be forgotten by exclusively storing the digests of verifiable credentials on the blockchain. To demonstrate the feasibility and effectiveness of our SSI solution, we have developed a proof of concept leveraging the Partisia blockchain.
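The right-to-be-forgotten property follows from anchoring only hashes on-chain. The sketch below illustrates the idea with a SHA-256 digest of a canonicalized credential and a mock on-chain registry; the Partisia-specific integration and the selective-disclosure machinery are omitted.

```python
# Sketch: anchor only the digest of a verifiable credential on-chain.
# The credential itself stays with the holder and can be deleted, while
# the on-chain digest still allows verification. Mock registry only.
import hashlib, json

def credential_digest(credential: dict) -> str:
    canonical = json.dumps(credential, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

on_chain_registry = set()                      # stand-in for the blockchain

# Issuer: create a credential and anchor its digest (not its contents).
credential = {
    "subject": "did:example:alice",
    "claim": {"over18": True},
    "issuer": "did:example:issuing-administration",
    "issued": "2024-01-15",
}
on_chain_registry.add(credential_digest(credential))

# Verifier: the holder presents the credential; only its digest is checked.
def verify(presented: dict) -> bool:
    return credential_digest(presented) in on_chain_registry

print(verify(credential))                              # True
tampered = dict(credential, claim={"over18": False})
print(verify(tampered))                                # False
```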
Assessing the quality of Random Number Generators through Neural Networks
In this paper we address the use of Neural Networks (NNs) for assessing the quality, and hence the safety, of several Random Number Generators (RNGs). We focus both on the vulnerability of classical Pseudo-Random Number Generators (PRNGs), such as Linear Congruential Generators (LCGs) and the RC4 algorithm, and extend our analysis to non-conventional data sources, such as Quantum Random Number Generators (QRNGs) based on Vertical-Cavity Surface-Emitting Lasers (VCSELs). Among our results, we identify a classification of generators according to their degree of susceptibility, underlining the fundamental role of design decisions in enhancing the safety of PRNGs. We also explore the influence of network architecture and the associated hyper-parameters, highlighting the effectiveness of longer sequence lengths and convolutional neural networks in discriminating PRNGs from other RNGs. Moreover, in the prediction domain, the proposed model deftly distinguishes the raw data of our QRNG from truly random data, exhibiting a cross-entropy error of 0.52 on the test data set. These findings reveal the potential of NNs to enhance the security of RNGs, while highlighting the robustness of certain QRNGs, in particular the VCSEL-based variants, for high-quality random number generation applications.
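As a minimal illustration of the methodology (far simpler than the paper's convolutional models), the sketch below trains a small scikit-learn MLP to distinguish windows of low-byte outputs from a truncated LCG from windows of operating-system randomness; a weak generator is separable, while a good one would hover near 50% accuracy.

```python
# Minimal RNG-vs-NN demo: classify 32-byte windows as "weak LCG" or
# "OS randomness". The low bits of a power-of-two-modulus LCG are highly
# periodic, so even a small MLP separates them. Toy setup only.
import os
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def lcg_low_bytes(n, seed=12345):
    x, out = seed, []
    for _ in range(n):
        x = (1103515245 * x + 12345) % 2**31   # classic weak LCG
        out.append(x & 0xFF)                    # low byte: short period
    return bytes(out)

W, SAMPLES = 32, 2000
weak = lcg_low_bytes(W * SAMPLES)
good = os.urandom(W * SAMPLES)

X = np.array([list(weak[i*W:(i+1)*W]) for i in range(SAMPLES)] +
             [list(good[i*W:(i+1)*W]) for i in range(SAMPLES)]) / 255.0
y = np.array([1] * SAMPLES + [0] * SAMPLES)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=200, random_state=0)
clf.fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))   # well above 0.5 for a weak LCG
```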
X-Wing: The Hybrid KEM You’ve Been Looking For
X-Wing is a hybrid key-encapsulation mechanism based on X25519 and ML-KEM-768. It is designed to be the sensible choice for most applications. The concrete choice of X25519 and ML-KEM-768 allows X-Wing to achieve improved efficiency compared to using a generic KEM combiner. In this paper, we introduce the X-Wing hybrid KEM construction and provide a proof of security. We show (1) that X-Wing is a classically IND-CCA secure KEM if the strong Diffie-Hellman assumption holds in the X25519 nominal group, and (2) that X-Wing is a post-quantum IND-CCA secure KEM if ML-KEM-768 is itself an IND-CCA secure KEM and SHA3-256 is secure when used as a pseudorandom function. The first result is proved in the ROM, whereas the second one holds in the standard model. Loosely speaking, this means X-Wing is secure if either X25519 or ML-KEM-768 is secure. We stress that these security guarantees and optimizations are only possible due to the concrete choices that were made; they may not apply in the general case.
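To make the combiner concrete, the sketch below hashes the two shared secrets together with the X25519 ciphertext and public key under SHA3-256. The ML-KEM-768 shared secret and the domain-separation label are placeholders here, since the exact byte layout and label are fixed by the X-Wing specification, not this sketch.

```python
# Sketch of an X-Wing-style hybrid KEM combiner:
#   K = SHA3-256(label || ss_mlkem || ss_x25519 || ct_x25519 || pk_x25519)
# The label and the ML-KEM-768 shared secret below are placeholders.
import hashlib
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

LABEL = b"xwing-demo-label"       # placeholder, not the spec's label

def combine(ss_mlkem, ss_x, ct_x, pk_x):
    h = hashlib.sha3_256()
    for part in (LABEL, ss_mlkem, ss_x, ct_x, pk_x):
        h.update(part)
    return h.digest()

# X25519 half of the hybrid, with an ephemeral key on the sender side.
receiver_x = X25519PrivateKey.generate()
pk_x = receiver_x.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

sender_eph = X25519PrivateKey.generate()
ct_x = sender_eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
ss_x = sender_eph.exchange(receiver_x.public_key())

ss_mlkem = b"\x00" * 32           # stand-in for the ML-KEM-768 shared secret
K = combine(ss_mlkem, ss_x, ct_x, pk_x)
print("combined shared secret:", K.hex())
```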
Determination of cryptographic tables and properties related to the revised boomerang and its application to a fundamental S-box
In symmetric cryptography, vectorial Boolean functions over finite fields F_{2^n} are used to derive strong S-boxes. To this end, an S-box should pass a battery of tests to resist existing attacks, such as the differential, linear, and boomerang attacks and their variants. Several tables are employed to measure an S-box's resistance, such as the difference distribution table (DDT) and the boomerang connectivity table (BCT). Following the boomerang attacks recently revisited in terms of the boomerang switch effect, with an illustration highlighting the power of this technique, a tool called the Boomerang Difference Table (BDT), an alternative to the classical BCT, was introduced. Subsequently, two novel tables were introduced, namely the Upper Boomerang Connectivity Table (UBCT) and the Lower Boomerang Connectivity Table (LBCT), which are considered improvements over the BCT while allowing systematic evaluation of boomerangs that return over multiple rounds.
This paper focuses on the new tools for measuring the revisited version of boomerang attacks and on the related tables UBCT and LBCT, as well as the so-called Extended Boomerang Connectivity Table (EBCT). Specifically, we study the properties of these novel tools and investigate the corresponding tables. We also study their interconnections, their links to the DDT, and their values for affine-equivalent vectorial functions and for compositional inverses of permutations of F_{2^n}. Moreover, we introduce the concept of nontrivial boomerang connectivity uniformity and determine the explicit values of all the entries of the UBCT, LBCT, and EBCT for the cryptographically important case of the inverse function.
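For concreteness, the sketch below computes the classical DDT and BCT for the 4-bit inverse S-box over F_{2^4} (with 0 mapped to 0); the UBCT, LBCT, and EBCT refine these tables, but their entries can be computed by the same brute-force pattern.

```python
# Compute the DDT and BCT of the inverse function over F_{2^4}
# (reduction polynomial x^4 + x + 1), with 0 mapped to 0.
def gf16_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x10:
            a ^= 0x13          # reduce by x^4 + x + 1
        b >>= 1
    return r

S = [0] * 16                   # inverse S-box: S[x] = x^(-1), S[0] = 0
for x in range(1, 16):
    S[x] = next(y for y in range(1, 16) if gf16_mul(x, y) == 1)
Sinv = [S.index(y) for y in range(16)]   # inversion is an involution here

# DDT[a][b] = #{x : S(x ^ a) ^ S(x) = b}
DDT = [[sum(S[x ^ a] ^ S[x] == b for x in range(16))
        for b in range(16)] for a in range(16)]

# BCT[a][b] = #{x : S^(-1)(S(x) ^ b) ^ S^(-1)(S(x ^ a) ^ b) = a}
BCT = [[sum(Sinv[S[x] ^ b] ^ Sinv[S[x ^ a] ^ b] == a for x in range(16))
        for b in range(16)] for a in range(16)]

du = max(DDT[a][b] for a in range(1, 16) for b in range(1, 16))
bu = max(BCT[a][b] for a in range(1, 16) for b in range(1, 16))
print("differential uniformity:", du)   # 4 for the inverse map when n is even
print("boomerang uniformity:", bu)
```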
SCALLOP-HD: group action from 2-dimensional isogenies
We present SCALLOP-HD, a novel group action that builds upon the recent SCALLOP group action introduced by De Feo, Fouotsa, Kutas, Leroux, Merz, Panny and Wesolowski in 2023. While our group action uses the same action of the class group on oriented curves as SCALLOP, we introduce a different orientation representation: the new representation embeds an endomorphism generating the orientation into an isogeny between abelian varieties of dimension 2 via Kani's Lemma, and it comes with a simple algorithm to compute the class group action. Our new approach considerably simplifies the SCALLOP framework, potentially surpassing it in efficiency, a claim supported by preliminary implementation results in SageMath. Additionally, our approach streamlines parameter selection. The new representation allows us to efficiently select a class group of smooth order, enabling polynomial-time generation of the lattice of relations and hence enhancing scalability in contrast to SCALLOP.
To instantiate our SCALLOP-HD group action, we introduce a new technique to apply Kani's Lemma in dimension 2 with an isogeny diamond obtained from commuting endomorphisms. This method allows one to represent arbitrary endomorphisms with isogenies in dimension 2, and may be of independent interest.
SDFA: Statistical-Differential Fault Attack on Linear Structured SBox-Based Ciphers
At Asiacrypt 2021, Baksi et al. introduced DEFAULT, the first block cipher designed to resist differential fault attacks (DFA) at the algorithm level, boasting 64-bit DFA security. The cipher initially employed a simple key schedule in which a single key was XORed in all rounds; it was later updated to incorporate round-independent keys in a rotating fashion. However, at Eurocrypt 2022, Nageler et al. presented a DFA that exposed vulnerabilities in the claimed DFA security of DEFAULT, reducing it by up to 20 bits in the case of the simple key schedule and even allowing unique key recovery in the presence of rotating keys. In this work, we significantly improve upon the existing DFA on the DEFAULT cipher. Our enhanced attack recovers the encryption key with minimal faults: we compute deterministic differential trails for up to five rounds, inject around 5 faults into the simple key schedule for key recovery, recover equivalent keys with just 36 faults in the DEFAULT-LAYER, and introduce a generic DFA approach suitable for round-independent keys within the DEFAULT cipher. These results represent the most efficient key recovery achieved for the DEFAULT cipher under DFA. Additionally, we introduce a novel fault attack called the Statistical-Differential Fault Attack (SDFA), specifically tailored to linear-structured S-box-based ciphers like DEFAULT. This technique has been successfully applied to BAKSHEESH, resulting in a nearly unique key recovery. Our findings emphasize the vulnerabilities present in linear-structured S-box-based ciphers, including both DEFAULT and BAKSHEESH, and underscore the challenges in establishing robust DFA protection for such cipher designs. In summary, our research highlights the significant risks associated with designing linear-structured S-box-based block ciphers with the aim of achieving cipher-level DFA protection.
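To see why a handful of faults suffices against S-box-based designs, the toy sketch below mounts a textbook DFA on a single S-box layer: a known fault difference injected before the S-box, together with the correct and faulty outputs, restricts the key to the solutions of one DDT equation, and a few faults intersect down to the key. This illustrates the general DFA principle rather than the DEFAULT or BAKSHEESH attacks themselves.

```python
# Textbook DFA on one S-box: c = S[p ^ k]. Injecting a known difference
# delta before the S-box gives c' = S[p ^ k ^ delta]; key candidates are
# the solutions x of S[x] ^ S[x ^ delta] = c ^ c', shifted by p. A few
# faults with different deltas narrow down to the true key nibble.
S = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]  # PRESENT S-box

def candidates(p, c, c_faulty, delta):
    out_diff = c ^ c_faulty
    return {x ^ p for x in range(16) if S[x] ^ S[x ^ delta] == out_diff}

key, p = 0xA, 0x3                    # secret key nibble, known plaintext
c = S[p ^ key]                       # correct ciphertext

surviving = set(range(16))
for delta in (1, 2, 4):              # three fault injections
    c_faulty = S[p ^ key ^ delta]    # faulted encryption
    surviving &= candidates(p, c, c_faulty, delta)
print("key candidates after 3 faults:", sorted(surviving))  # contains 0xA
```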
At Asiacrypt 2021, Baksi et al. introduced DEFAULT, the first block cipher designed to resist differential fault attacks (DFA) at the algorithm level, boasting of a 64-bit DFA security. The cipher initially employed a straightforward key schedule, where a single key was XORed in all rounds, and the key schedule was updated by incorporating round-independent keys in a rotating fashion. However, during Eurocrypt 2022, Nageler et al. presented a DFA attack that exposed vulnerabilities in the claimed DFA security of DEFAULT, reducing it by up to 20 bits in the case of the simple key schedule and even allowing for unique key recovery in the presence of rotating keys. In this work, we have significantly improved upon the existing differential fault attack (DFA) on the DEFAULT cipher. Our enhanced attack allows us to effectively recover the encryption key with minimal faults. We have accomplished this by computing deterministic differential trails for up to five rounds, injecting around 5 faults into the simple key schedule for key recovery, recovering equivalent keys with just 36 faults in the DEFAULT-LAYER, and introducing a generic DFA approach suitable for round-independent keys within the DEFAULT cipher. These results represent the most efficient key recovery achieved for the DEFAULT cipher under DFA attacks. Additionally, we have introduced a novel fault attack called the Statistical-Differential Fault Attack (SDFA), specifically tailored for linear-structured SBOX-based ciphers like DEFAULT. This novel technique has been successfully applied to BAKSHEESH, resulting in a nearly unique key recovery. Our findings emphasize the vulnerabilities present in linear-structured SBOX-based ciphers, including both DEFAULT and BAKSHEESH, and underscore the challenges in establishing robust DFA protection for such cipher designs. In summary, our research highlights the significant risks associated with designing linear-structured SBOX-based block ciphers with the aim of achieving cipher-level DFA protection.