All papers in 2022 (1781 results)

Last updated:  2022-12-31
COA-Secure Obfuscation and Applications
Ran Canetti, Suvradip Chakraborty, Dakshita Khurana, Nishanth Kumar, Oxana Poburinnaya, Manoj Prabhakaran
We put forth a new paradigm for program obfuscation, where obfuscated programs are endowed with proofs of ``well-formedness.'' In addition to asserting the existence of an underlying plaintext program with an attested structure and functionality, these proofs also prevent mauling attacks, whereby an adversary surreptitiously creates an obfuscated program based on secrets which are embedded in a given obfuscated program. We call this new guarantee Chosen Obfuscation Attack (COA) security. We define and construct general-purpose COA-secure Probabilistic Indistinguishability Obfuscators for circuits, assuming sub-exponential IO for circuits and CCA commitments. To demonstrate the power of the new notion, we use it to realize, in the plain model: - Structural Watermarking, which is a new form of software watermarking that provides significantly broader protection than current schemes and features a keyless, public verification process. - Completely CCA encryption, which is a strengthening of completely non-malleable encryption. We also show, based on the same assumptions, a generic method for enhancing any obfuscation mechanism that guarantees any semantic-style form of hiding to one that also provides COA security.
Last updated:  2023-06-11
More Efficient Key Ranking for Optimal Collision Side-Channel Attacks
Cezary Glowacz
In [Optimal Collision Side-Channel Attacks] we studied collision side-channel attacks, and derived an optimal distinguisher for key ranking. In this note we propose a heuristic estimation procedure for key ranking based on this distinguisher, and provide estimates of lower bounds for secret key ranks in collision side-channel attacks. The procedure employs nonuniform sampling introduced in [MCRank: Monte Carlo Key Rank Estimation for Side-Channel Security Evaluations], and it is more efficient than the subset uniform sampling procedure [A Note on Key Ranking for Optimal Collision Side-Channel Attacks].
Last updated:  2022-12-31
Batching, Aggregation, and Zero-Knowledge Proofs in Bilinear Accumulators
Shravan Srinivasan, Ioanna Karantaidou, Foteini Baldimtsi, Charalampos Papamanthou
An accumulator is a cryptographic primitive that allows a prover to succinctly commit to a set of values while being able to provide proofs of (non-)membership. A batch proof is an accumulator proof that can be used to prove (non-)membership of multiple values simultaneously. In this work, we present a zero-knowledge batch proof with constant proof size and constant verification in the Bilinear Pairings (BP) setting. Our scheme is 16x to 42x faster than state-of-the-art SNARK-based zero-knowledge batch proofs in the RSA setting. Additionally, we propose protocols that allow a prover to aggregate multiple individual non-membership proofs, in the BP setting, into a single batch proof of constant size. Our construction for aggregation satisfies a strong soundness definition - one where the accumulator value can be chosen arbitrarily. We evaluate our techniques and systematically compare them with RSA-based alternatives. Our evaluation results showcase several scenarios for which BP accumulators are clearly preferable and can serve as a guideline when choosing between the two types of accumulators.
Last updated:  2022-12-30
Asynchronous Delegated Private Set Intersection with Hiding of Intersection Size
Wyatt Howe, Andrei Lapets, Frederick Jansen, Tanner Braun, Ben Getchell
Integrating private set intersection (PSI) protocols within real-world data workflows, software applications, or web services can be challenging. This can occur because data contributors and result recipients do not have the technical expertise, information technology infrastructure, or other resources to participate throughout the execution of a protocol and/or to incur all the communication costs associated with participation. Furthermore, contemporary workflows, applications, and services are often designed around RESTful APIs that might not require contributors or recipients to remain online or to maintain state. Asynchronous delegated PSI protocol variants can better match the expectations of software engineers by (1) allowing data contributors to contribute their inputs and then to depart permanently, and (2) allowing result recipients to request their result only once they are ready to do so. However, such protocols usually accomplish this by introducing an additional party that learns some information about the size of the intersection. This work presents an asynchronous delegated PSI protocol variant that does not reveal the intersection size to the additional party. It is shown that such a protocol can have, on average, linear time and space complexity.
Last updated:  2022-12-29
Weightwise perfectly balanced functions and nonlinearity
Agnese Gini, Pierrick Méaux
In this article we carry out a general study of the nonlinearity of weightwise perfectly balanced (WPB) functions. First, we derive upper and lower bounds on the nonlinearity for this class of functions for all $n$. Then, we give a general construction that allows us to provably provide WPB functions with nonlinearity as low as $2^{n/2-1}$ and WPB functions with high nonlinearity, at least $2^{n-1}-2^{n/2}$. We provide concrete examples in $8$ and $16$ variables with high nonlinearity given by this construction. In $8$ variables we experimentally obtain functions reaching a nonlinearity of $116$, which corresponds to the upper bound of Dobbertin's conjecture and improves upon the maximal nonlinearity of WPB functions recently obtained with genetic algorithms. Finally, we study the distribution of nonlinearity over the set of WPB functions. We examine the exact distribution for $n=4$ and provide an algorithm to estimate the distributions for $n=8$ and $16$, together with the results of our experimental studies for $n=8$ and $16$.
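For readers unfamiliar with the nonlinearity measure discussed above, the following is a small illustrative computation (not taken from the paper) of the nonlinearity of a Boolean function via the Walsh-Hadamard transform. It is only feasible for small numbers of variables, which is precisely why the paper studies bounds and constructions rather than exhaustive computation.

```python
# Illustrative only: nonlinearity of an n-variable Boolean function computed as
# nl(f) = 2^(n-1) - max|W_f|/2, where W_f is the Walsh-Hadamard spectrum of (-1)^f.
# Brute force like this is only practical for small n.

def walsh_hadamard(truth_table):
    """Fast Walsh-Hadamard transform of (-1)^f."""
    w = [1 - 2 * b for b in truth_table]   # map {0,1} -> {+1,-1}
    h, n = 1, len(truth_table)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

def nonlinearity(truth_table):
    spectrum = walsh_hadamard(truth_table)
    return len(truth_table) // 2 - max(abs(v) for v in spectrum) // 2

# Example: the 4-variable function x0*x1 XOR x2*x3 (a bent function).
tt = [(x & 1) * ((x >> 1) & 1) ^ ((x >> 2) & 1) * ((x >> 3) & 1) for x in range(16)]
print(nonlinearity(tt))   # prints 6, the maximum nonlinearity for n = 4
```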
Last updated:  2022-12-29
Offset-Based BBB-Secure Tweakable Block-ciphers with Updatable Caches
Arghya Bhattacharjee, Ritam Bhaumik, Mridul Nandi
A nonce-respecting tweakable blockcipher is the building-block for the OCB authenticated encryption mode. An XEX-based TBC is used to process each block in OCB. However, XEX can provide at most birthday bound privacy security, whereas in Asiacrypt 2017, beyond-birthday-bound (BBB) forging security of OCB3 was shown by Bhaumik and Nandi. In this paper we study how at a small cost we can construct a nonce-respecting BBB-secure tweakable blockcipher. We propose the OTBC-3 construction, which maintains a cache that can be easily updated when used in an OCB-like mode. We show how this can be used in a BBB-secure variant of OCB with some additional keys and a few extra blockcipher calls but roughly the same amortised rate.
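As background for the XEX-style masking mentioned above, the sketch below (not the paper's OTBC-3 construction) shows the cheap per-block offset update used in OCB: the offset lives in GF(2^128) and is updated by multiplication by x, i.e., a shift plus a conditional XOR. The paper's contribution is a differently computed but similarly updatable cache achieving beyond-birthday-bound security.

```python
# Illustrative only: the "doubling" offset update used by XEX-style masking in OCB.
# The per-block offset Delta_i = 2^i * E_K(N) is an element of GF(2^128); multiplying
# by 2 (i.e., by x) is one shift and a conditional XOR with the reduction constant 0x87.

MASK128 = (1 << 128) - 1

def gf128_double(block: int) -> int:
    """Multiply a 128-bit value by x in GF(2^128) modulo x^128 + x^7 + x^2 + x + 1."""
    carry = block >> 127
    return ((block << 1) & MASK128) ^ (0x87 if carry else 0)

# Example: walk the offsets Delta_1, Delta_2, ... from an initial mask E_K(N)
# (the initial value below is a placeholder, not a real block-cipher output).
offset = 0x0123456789abcdef0123456789abcdef
for i in range(1, 4):
    offset = gf128_double(offset)
    print(f"Delta_{i} = {offset:032x}")
```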
Last updated:  2022-12-29
Candidate Trapdoor Claw-Free Functions from Group Actions with Applications to Quantum Protocols
Navid Alamati, Giulio Malavolta, Ahmadreza Rahimi
Trapdoor Claw-free Functions (TCFs) are two-to-one trapdoor functions where it is computationally hard to find a claw, i.e., a colliding pair of inputs. TCFs have recently seen a surge of renewed interest due to new applications to quantum cryptography: as an example, TCFs enable a classical machine to verify that some quantum computation has been performed correctly. In this work, we propose a new family of (almost two-to-one) TCFs based on conjectured hard problems on isogeny-based group actions. This is the first candidate construction that is not based on lattice-related problems and the first scheme (from any plausible post-quantum assumption) with a deterministic evaluation algorithm. To demonstrate the usefulness of our construction, we show that our TCF family can be used to devise a computational test of a qubit, which is the basic building block used in the general verification of quantum computations.
Last updated:  2022-12-28
PECO: methods to enhance the privacy of DECO protocol
Manuel B. Santos
The DECentralized Oracle (DECO) protocol enables the verifiable provenance of data from Transport Layer Security (TLS) connections through secure two-party computation and zero-knowledge proofs. In this paper, we present PECO, an extension of DECO that enhances privacy features through the integration of two new private three-party handshake protocols (P3P-HS). PECO allows any web user to prove to a verifier the properties of data from TLS connections without disclosing the identity of the servers. Like DECO's three-party handshake protocol, PECO's P3P-HS methods do not require any changes on the server side. PECO offers two options: one that provides $k$-anonymity for the server's identity, and another that completely masks the server's identity from the verifier. PECO is based on three main protocols: (a) commit-and-prove zero-knowledge proofs (CP-ZKP) that enable the proof of relations under committed values in zero-knowledge, (b) verification of Elliptic Curve Digital Signature Algorithm (ECDSA) signatures under a committed public key without revealing the key (zkAttest), and (c) a proof of membership to verify that a committed key belongs to a set of keys. We estimate the performance of both P3P-HS protocols and compare it to TLS timeout using state-of-the-art implementations.
Last updated:  2023-04-07
SoK: Decentralized Finance (DeFi) Attacks
Liyi Zhou, Xihan Xiong, Jens Ernstberger, Stefanos Chaliasos, Zhipeng Wang, Ye Wang, Kaihua Qin, Roger Wattenhofer, Dawn Song, Arthur Gervais
Within just four years, the blockchain-based Decentralized Finance (DeFi) ecosystem has accumulated a peak total value locked (TVL) of more than 253 billion USD. This surge in DeFi’s popularity has, unfortunately, been accompanied by many impactful incidents. According to our data, users, liquidity providers, speculators, and protocol operators suffered a total loss of at least 3.24 billion USD from Apr 30, 2018 to Apr 30, 2022. Given the blockchain’s transparency and increasing incident frequency, two questions arise: How can we systematically measure, evaluate, and compare DeFi incidents? How can we learn from past attacks to strengthen DeFi security? In this paper, we introduce a common reference frame to systematically evaluate and compare DeFi incidents, including both attacks and accidents. We investigate 77 academic papers, 30 audit reports, and 181 real-world incidents. Our data reveals several gaps between academia and the practitioners’ community. For example, few academic papers address “price oracle attacks” and “permissionless interactions”, while our data suggests that they are the two most frequent incident types (15% and 10.5%, respectively). We also investigate potential defenses, and find that: (i) 103 (56%) of the attacks are not executed atomically, granting a rescue time frame for defenders; (ii) SoTA bytecode similarity analysis can at least detect 31 vulnerable/23 adversarial contracts; and (iii) 33 (15.3%) of the adversaries leak potentially identifiable information by interacting with centralized exchanges.
Last updated:  2022-12-28
You Can Sign but Not Decrypt: Hierarchical Integrated Encryption and Signature
Min Zhang, Binbin Tu, Yu Chen
Recently, Chen et al. (ASIACRYPT 2021) introduced a notion called hierarchical integrated signature and encryption (HISE), which provides a new principle for combining public key schemes. It uses a single public key for both signature and encryption schemes, and one can derive a decryption key from the signing key but not vice versa. However, they left the dual notion, where the signing key can be derived from the decryption key, as an open problem. In this paper, we resolve the problem by formalizing the notion called hierarchical integrated encryption and signature (HIES). Similar to HISE, it features a unique public key for both encryption and signature components and has a two-level key derivation mechanism, but reverses the hierarchy between signing key and decryption key, i.e., one can derive a signing key from the decryption key but not vice versa. This property enables secure delegation of signing capacity in the public key reuse setting. We present a generic construction of HIES from constrained identity-based encryption. Furthermore, we instantiate our generic HIES construction and implement it. The experimental result demonstrates that our HIES scheme is comparable to the best Cartesian product combined public-key scheme in terms of efficiency, and is superior in having richer functionality as well as retaining the merits of key reuse.
Last updated:  2022-12-28
Security analysis for BIKE, Classic McEliece and HQC against the quantum ISD algorithms
Asuka Wakasugi, Mitsuru Tada
Since 2016, NIST has been standardizing Post-Quantum Cryptosystems (PQCs). Code-Based Cryptosystems (CBC), which are considered to be one class of PQCs, use the Syndrome Decoding Problem as the basis for their security. NIST's PQC standardization project is currently in its 4th round and some CBC encryption schemes remain there. In this paper, we consider the quantum security of these cryptosystems.
Last updated:  2022-12-27
Cryptographic Primitives with Hinting Property
Navid Alamati, Sikhar Patranabis
A hinting pseudorandom generator (PRG) is a potentially stronger variant of PRG with a ``deterministic'' form of circular security with respect to the seed of the PRG (Koppula and Waters, CRYPTO 2019). Hinting PRGs enable many cryptographic applications, most notably CCA-secure public-key encryption and trapdoor functions. In this paper, we study cryptographic primitives with the hinting property, yielding the following results: We present a novel and conceptually simpler approach for designing hinting PRGs from certain decisional assumptions over cyclic groups or isogeny-based group actions, which enables simpler security proofs as compared to the existing approaches for designing such primitives. We introduce hinting weak pseudorandom functions (wPRFs), a natural extension of the hinting property to wPRFs, and show how to realize circular/KDM-secure symmetric-key encryption from any hinting wPRF. We demonstrate that our simple approach for building hinting PRGs can be extended to realize hinting wPRFs from the same set of decisional assumptions. We propose a stronger version of the hinting property, which we call the functional hinting property, that guarantees security even in the presence of hints about functions of the secret seed/key. We show how to instantiate functional hinting PRGs/wPRFs for certain (families of) functions by building upon our simple techniques for realizing plain hinting PRGs/wPRFs. We also demonstrate the applicability of a functional hinting wPRF with certain algebraic properties in realizing KDM-secure public-key encryption in a black-box manner. We show the first black-box separation of hinting wPRFs (and hinting PRGs) from public-key encryption using simple realizations of these primitives given only a random oracle.
Last updated:  2022-12-27
PoRt: Non-Interactive Continuous Availability Proof of Replicated Storage
Reyhaneh Rabaninejad, Bin Liu, Antonis Michalas
Secure cryptographic storage is one of the most important issues that both businesses and end-users take into account before moving their data to either centralized clouds or blockchain-based decentralized storage marketplaces. Recent work [4] formalizes the notion of Proof of Storage-Time (PoSt) which enables storage servers to demonstrate non-interactive continuous availability of outsourced data in a publicly verifiable way. The work also proposes a stateful compact PoSt construction, while leaving the stateless and transparent PoSt with support for proof of replication as an open problem. In this paper, we consider this problem by constructing a proof system that enables a server to simultaneously demonstrate continuous availability and dedication of unique storage resources for encoded replicas of a data file in a stateless and publicly verifiable way. We first formalize Proof of Replication-Time (PoRt) by extending the PoSt formal definition and security model to provide support for replications. Then, we provide a concrete instantiation of PoRt by designing a lightweight replica encoding algorithm where replicas’ failures are efficiently located through an efficient comparison-based verification process, after the data deposit period ends. PoRt’s proofs are aggregatable: the prover can take several sequentially generated proofs and efficiently aggregate them into a single, succinct proof. The protocol is also stateless in the sense that the client can efficiently extend the deposit period by incrementally updating the tags and without having to download the outsourced file replicas. We also demonstrate feasible extensions of PoRt to support dynamic data updates, and be transparent to enable its direct use in decentralized storage networks, a property not supported in previous proposals. Finally, PoRt’s verification cost is independent of both outsourced file size and deposit length.
Last updated:  2023-01-06
Continuous Group Key Agreement with Flexible Authorization and Its Applications
Kaisei Kajita, Keita Emura, Kazuto Ogawa, Ryo Nojima, Go Ohtake
Secure messaging (SM) protocols allow users to communicate securely over an untrusted infrastructure. The IETF currently works on the standardization of secure group messaging (SGM), which is SM done by a group of two or more people. Alwen et al. formally defined the key agreement protocol used in SGM as continuous group key agreement (CGKA) at CRYPTO 2020. In their CGKA protocol, all of the group members have the same rights and a trusted third party is needed. On the contrary, some SGM applications may have a user in the group who has the role of an administrator. When the administrator as the group manager (GM) is distinguished from other group members, i.e., in a one-to-many setting, it would be better for the GM and the other group members to have different authorities. We achieve this flexible authorization by incorporating a ratcheting digital signature scheme (Cremers et al. at USENIX Security 2021) into the existing CGKA protocol and demonstrate that such a simple modification allows us to provide flexible authorization. This one-to-many setting may be reminiscent of a multicast key agreement protocol proposed by Bienstock et al. at CT-RSA 2022, where GM has the role of adding and removing group members. Although the role of the GM is fixed in advance in the Bienstock et al. protocol, the GM can flexibly set the role depending on the application in our protocol. On the other hand, in Alwen et al.’s CGKA protocol, an external public key infrastructure (PKI) functionality as a trusted third party manages the confidential information of users, and the PKI can read all messages until all users update their own keys. In contrast, the GM in our protocol has the same role as the PKI functionality in the group, so no third party outside the group handles confidential information of users and thus no one except group members can read messages regardless of key updates. Our proposed protocol is useful in the creation of new applications such as broadcasting services.
Last updated:  2023-10-25
Do Not Trust in Numbers: Practical Distributed Cryptography With General Trust
Orestis Alpos and Christian Cachin
In distributed cryptography independent parties jointly perform some cryptographic task. In the last decade distributed cryptography has been receiving more attention than ever. Distributed systems power almost all applications, blockchains are becoming prominent, and, consequently, numerous practical and efficient distributed cryptographic primitives are being deployed. The failure models of current distributed cryptographic systems, however, lack expressibility. Assumptions are only stated through numbers of parties, thus reducing this to threshold cryptography, where all parties are treated as identical and correlations cannot be described. Distributed cryptography does not have to be threshold-based. With general distributed cryptography the authorized sets, the sets of parties that are sufficient to perform some task, can be arbitrary, and are usually modeled by the abstract notion of a general access structure. Although the necessity for general distributed cryptography has been recognized long ago and many schemes have been explored in theory, relevant practical aspects remain opaque. It is unclear how the user specifies a trust structure efficiently or how this is encoded within a scheme, for example. More importantly, implementations and benchmarks do not exist, hence the efficiency of the schemes is not known. Our work fills this gap. We show how an administrator can intuitively describe the access structure as a Boolean formula. This is then converted into encodings suitable for cryptographic primitives, specifically, into a tree data structure and a monotone span program. We focus on three general distributed cryptographic schemes: verifiable secret sharing, common coin, and distributed signatures. For each one we give the appropriate formalization and security definition in the general-trust setting. We implement the schemes and assess their efficiency against their threshold counterparts. Our results suggest that the general distributed schemes can offer richer expressibility at no or insignificant extra cost. Thus, they are appropriate and ready for practical deployment.
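To make the "administrator describes trust as a Boolean formula" idea above concrete, here is a minimal sketch (not the paper's implementation or encoding) of a monotone formula over parties and a check of whether a given set of parties is authorized. Operator names and the example structure are made up for illustration.

```python
# A minimal sketch (not the paper's code) of a general access structure given as a
# monotone Boolean formula over parties. Formulas are nested tuples:
#   ("party", name), ("and", f1, ...), ("or", f1, ...),
#   ("threshold", k, f1, ...)  -- at least k sub-formulas must be satisfied.

def authorized(formula, parties):
    op = formula[0]
    if op == "party":
        return formula[1] in parties
    if op == "and":
        return all(authorized(f, parties) for f in formula[1:])
    if op == "or":
        return any(authorized(f, parties) for f in formula[1:])
    if op == "threshold":
        k, subs = formula[1], formula[2:]
        return sum(authorized(f, parties) for f in subs) >= k
    raise ValueError(f"unknown operator {op!r}")

# Example trust structure: (A and B) or any 2 of {C, D, E}.
trust = ("or",
         ("and", ("party", "A"), ("party", "B")),
         ("threshold", 2, ("party", "C"), ("party", "D"), ("party", "E")))

print(authorized(trust, {"A", "B"}))   # True
print(authorized(trust, {"C", "E"}))   # True
print(authorized(trust, {"A", "C"}))   # False
```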
Last updated:  2022-12-27
Systematically Quantifying Cryptanalytic Non-Linearities in Strong PUFs
Durba Chatterjee, Kuheli Pratihar, Aritra Hazra, Ulrich Rührmair, Debdeep Mukhopadhyay
Physically Unclonable Functions (PUFs) with large challenge space (also called Strong PUFs) are promoted for usage in authentications and various other cryptographic and security applications. In order to qualify for these cryptographic applications, the Boolean functions realized by PUFs need to possess a high non-linearity (NL). However, with a large challenge space (usually $\geq 64$ bits), measuring NL by classical techniques like Walsh transformation is computationally infeasible. In this paper, we propose the usage of a heuristic-based measure called non-homomorphicity test which estimates the NL of Boolean functions with high accuracy in spite of not needing access to the entire challenge-response set. We also combine our analysis with a technique used in linear cryptanalysis, called Piling-up lemma, to measure the NL of popular PUF compositions. As a demonstration to justify the soundness of the metric, we perform extensive experimentation by first estimating the NL of constituent Arbiter/Bistable Ring PUFs using the non-homomorphicity test, and then applying them to quantify the same for their XOR compositions namely XOR Arbiter PUFs and XOR Bistable Ring PUF. Our findings show that the metric explains the impact of various parameter choices of these PUF compositions on the NL obtained and thus promises to be used as an important objective criterion for future efforts to evaluate PUF designs. While the framework is not representative of the machine learning robustness of PUFs, it can be a useful complementary tool to analyze the cryptanalytic strengths of PUF primitives.
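The Piling-up lemma mentioned above combines the linear biases of independent components; the toy computation below (not the paper's non-homomorphicity test, and with made-up bias values) only illustrates that combination step for an XOR composition.

```python
# Illustrative only: the Piling-up lemma. If k independent Boolean components each
# agree with their best linear approximation with probability 1/2 + eps_i, their XOR
# agrees with the XORed approximation with probability 1/2 + 2^(k-1) * prod(eps_i).

from functools import reduce

def piling_up(biases):
    """Combined bias of the XOR of independent components with the given biases."""
    k = len(biases)
    return 2 ** (k - 1) * reduce(lambda a, b: a * b, biases, 1.0)

# Example: four arbiter-PUF-like components, each with (hypothetical) linear bias 0.1.
eps = [0.1, 0.1, 0.1, 0.1]
print(piling_up(eps))         # 0.0008 -> the XOR composition is far closer to balanced
print(0.5 + piling_up(eps))   # probability of the combined linear approximation
```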
Last updated:  2023-06-29
A Deep Learning Aided Differential Distinguisher Improvement Framework with More Lightweight and Universality
Jiashuo Liu, Jiongjiong Ren, Shaozhen Chen
In CRYPTO 2019, Gohr opened up a new direction for cryptanalysis. He successfully applied deep learning to differential cryptanalysis against the NSA block cipher SPECK32/64, achieving higher accuracy than traditional differential distinguishers. Until now, one of the mainstream research directions is increasing the training sample size and utilizing different neural networks to improve the accuracy of neural distinguishers. This conversion mindset may lead to a huge number of parameters, heavy computing load, and a large amount of memory in the distinguisher training process. However, in the practical application of cryptanalysis, the applicability of the attack method in a resource-constrained environment is very important. Therefore, we focus on cost optimization and aim to reduce network parameters for differential neural cryptanalysis. In this paper, we propose two cost-optimized neural distinguisher improvement methods from the aspect of data format and network structure, respectively. Firstly, we obtain a partial output difference neural distinguisher using only a 4-bit training data format, which is constructed with a new advantage bits search algorithm based on two key improvement conditions. In addition, we perform an interpretability analysis of the new neural distinguishers whose results are mainly reflected in the relationship between the neural distinguishers, truncated differential, and advantage bits. Secondly, we replace the traditional convolution with the depthwise separable convolution to reduce the training cost without affecting the accuracy as much as possible. Overall, the number of training parameters can be reduced to less than 50\% by using our new network structure for training neural distinguishers. Finally, we apply the network structure to the partial output difference neural distinguishers. This combined approach has led to a further reduction in the number of parameters (approximately 30\% of Gohr's distinguishers for SPECK).
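The parameter saving from depthwise separable convolutions claimed above can be sanity-checked with a simple count; the layer sizes below are made up for the example and are not the paper's (or Gohr's) actual architecture.

```python
# Illustrative only: parameter counts of a standard 1-D convolution versus a depthwise
# separable one (depthwise + pointwise), the substitution the paper uses to shrink
# neural distinguishers. Channel and kernel sizes are hypothetical.

def conv1d_params(c_in, c_out, kernel):
    # weights + biases of a standard convolution
    return kernel * c_in * c_out + c_out

def separable_conv1d_params(c_in, c_out, kernel):
    depthwise = kernel * c_in + c_in          # one filter per input channel
    pointwise = 1 * c_in * c_out + c_out      # 1x1 convolution mixing channels
    return depthwise + pointwise

c_in, c_out, kernel = 32, 32, 3
std = conv1d_params(c_in, c_out, kernel)
sep = separable_conv1d_params(c_in, c_out, kernel)
print(std, sep, f"{sep / std:.2%}")   # 3104 1184 38.14% -> well under half the parameters
```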
Last updated:  2022-12-25
Wi-Fi Security: Do We Still Have to Look Back?
Karim Lounis
Wi-Fi is a wireless communication technology that has been around since the late nineties. Nowadays, it is the most adopted wireless short-range communication technology in various IoT (Internet of Things) applications and on many wireless AI (Artificial Intelligence) systems. Although Wi-Fi security has significantly improved throughout the past years, it still has some limitations. Some vulnerabilities still exist, allowing attackers to mount different types of attacks. These attacks can breach the authentication, confidentiality, and data integrity of Wi-Fi systems. At the same time, many vulnerabilities have been fixed or patched, and the attacks that were relying on those vulnerabilities would fail on modern Wi-Fi systems. Therefore, it is important for security engineers, in general, and for wireless intelligent system designers, in particular, to be aware of the existing vulnerabilities and feasible attacks on modern Wi-Fi systems and their respective countermeasures. That would spare them from having to look back at attacks that can no longer be mounted on today’s Wi-Fi systems. In this light, we devote this paper to an extensive review of the attacks on Wi-Fi. We group the attacks into feasible and unfeasible. Also, for each attack, we discuss the possible countermeasures to mitigate it.
Last updated:  2023-01-08
cq: Cached quotients for fast lookups
Liam Eagen, Dario Fiore, Ariel Gabizon
We present a protocol called $\mathsf{cq}$ for checking that the values of a committed polynomial $f(X)\in \mathbb{F}_{<n}[X]$ over a multiplicative subgroup $H\subset \mathbb{F}$ of size $n$ are contained in a table $t\in \mathbb{F}^N$. After an $O(N \log N)$ preprocessing step, the prover algorithm runs in time $O(n\log n)$. Thus, we continue to improve upon the recent breakthrough sequence of results [ZBKMNS,PK,GK,ZGKMR] starting from $\mathsf{Caulk}$ [ZBKMNS], which achieve sublinear complexity in the table size $N$. The two most recent works in this sequence, $\mathsf{Ba}\mathit{loo}$ [ZGKMR] and $\mathsf{flookup}$ [GK], achieved prover complexity $O(n\log^2 n)$. Moreover, $\mathsf{cq}$ has the following attractive features. 1. As in [ZBKMNS,PK,ZGKMR] our construction relies on homomorphic table commitments, which makes them amenable to vector lookups. 2. As opposed to the previous four works, our verifier doesn't involve pairings with prover-defined $\mathbb{G}_2$ points, which makes recursive aggregation of proofs more convenient.
Last updated:  2023-10-18
On the Impossibility of Surviving (Iterated) Deletion of Weakly Dominated Strategies in Rational MPC
Johannes Blömer, Jan Bobolz, and Henrik Bröcher
Rational multiparty computation (rational MPC) provides a framework for analyzing MPC protocols through the lens of game theory. One way to judge whether an MPC protocol is rational is through weak domination: Rational players would not adhere to an MPC protocol if deviating never decreases their utility, but sometimes increases it. Secret reconstruction protocols are of particular importance in this setting because they represent the last phase of most (rational) MPC protocols. We show that most secret reconstruction protocols from the literature are not, in fact, stable with respect to weak domination. Furthermore, we formally prove that (under certain assumptions) it is impossible to design a secret reconstruction protocol which is a Nash equilibrium but not weakly dominated if (1) shares are authenticated or (2) half of all players may form a coalition.
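For readers less familiar with the game-theoretic notion used above, the following toy check (not from the paper) spells out what weak domination means on a small payoff table; the payoff values are made up.

```python
# Toy illustration only: weak domination in a finite game. Strategy s weakly dominates
# s' if it never does worse against any opponent strategy and does strictly better
# against at least one.

def weakly_dominates(payoffs, s, s_prime):
    """payoffs[s][t] is the row player's utility for playing s against opponent strategy t."""
    a, b = payoffs[s], payoffs[s_prime]
    at_least_as_good = all(x >= y for x, y in zip(a, b))
    strictly_better = any(x > y for x, y in zip(a, b))
    return at_least_as_good and strictly_better

# Hypothetical example: "deviate" never hurts and sometimes helps versus "follow".
payoffs = {
    "follow":  [1, 1, 0],
    "deviate": [1, 2, 0],
}
print(weakly_dominates(payoffs, "deviate", "follow"))  # True
```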
Last updated:  2022-12-23
A Family of Block Ciphers Based on Multiple Quasigroups
Umesh Kumar, V. Ch. Venkaiah
A family of block ciphers parametrized by an optimal quasigroup is proposed in this paper. The proposed cipher uses sixteen $4\times 4$ bits S-boxes as an optimal quasigroup of order 16. Since a maximum of $16!$ optimal quasigroups of order 16 can be formed, the family consists of $C^{16!}_1$ cryptosystems. All the sixteen S-boxes have the highest algebraic degree and are optimal with the lowest linearity and differential characteristics. Therefore, these S-boxes are secure against linear and differential attacks. The proposed cipher is analyzed against various attacks, including linear and differential attacks, and we found it to be resistant to these attacks. We implemented the proposed cipher in C++, compared its performance with existing quasigroup-based block ciphers, and found that our proposal is more efficient than existing quasigroup-based proposals. We also evaluated our cipher using various statistical tests of the NIST-STS test suite and found that it passes these tests. We also established in this study that the randomness of our cipher is almost the same as that of the AES-128.
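For context on the quasigroup primitive mentioned above, here is a toy illustration (not the paper's cipher) of a quasigroup given by a Latin square and the standard quasigroup string e-transformation and its inverse. The paper works with an optimal quasigroup of order 16 built from 4x4-bit S-boxes; the order-4 table below only shows the underlying operation.

```python
# Illustrative only: an order-4 quasigroup (Latin square) and the quasigroup string
# e-transformation c_1 = q(l, m_1), c_i = q(c_{i-1}, m_i), with its inverse.

Q = [  # a Latin square of order 4: Q[a][b] = a * b
    [0, 1, 2, 3],
    [1, 0, 3, 2],
    [2, 3, 0, 1],
    [3, 2, 1, 0],
]

# Left division: Q_div[a][c] = the unique b with Q[a][b] = c
# (unique because every row of a Latin square is a permutation).
Q_div = [[row.index(c) for c in range(4)] for row in Q]

def e_transform(leader, message):
    out, prev = [], leader
    for m in message:
        prev = Q[prev][m]
        out.append(prev)
    return out

def d_transform(leader, ciphertext):
    out, prev = [], leader
    for c in ciphertext:
        out.append(Q_div[prev][c])
        prev = c
    return out

msg = [3, 1, 0, 2, 2, 1]
ct = e_transform(1, msg)
assert d_transform(1, ct) == msg
print(ct)
```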
Last updated:  2024-03-01
Fully Succinct Batch Arguments for NP from Indistinguishability Obfuscation
Rachit Garg, Kristin Sheridan, Brent Waters, and David J. Wu
Non-interactive batch arguments for $\mathsf{NP}$ provide a way to amortize the cost of $\mathsf{NP}$ verification across multiple instances. In particular, they allow a prover to convince a verifier of multiple $\mathsf{NP}$ statements with communication that scales sublinearly in the number of instances. In this work, we study fully succinct batch arguments for $\mathsf{NP}$ in the common reference string (CRS) model where the length of the proof scales not only sublinearly in the number of instances $T$, but also sublinearly with the size of the $\mathsf{NP}$ relation. Batch arguments with these properties are special cases of succinct non-interactive arguments (SNARGs); however, existing constructions of SNARGs either rely on idealized models or strong non-falsifiable assumptions. The one exception is the Sahai-Waters SNARG based on indistinguishability obfuscation. However, when applied to the setting of batch arguments, we must impose an a priori bound on the number of instances. Moreover, the size of the common reference string scales linearly with the number of instances. In this work, we give a direct construction of a fully succinct batch argument for $\mathsf{NP}$ that supports an unbounded number of statements from indistinguishability obfuscation and one-way functions. Then, by additionally relying on a somewhere statistically binding (SSB) hash function, we show how to extend our construction to obtain a fully succinct and updatable batch argument. In the updatable setting, a prover can take a proof $\pi$ on $T$ statements $(x_1, \ldots, x_T)$ and "update" it to obtain a proof $\pi'$ on $(x_1, \ldots, x_T, x_{T + 1})$. Notably, the update procedure only requires knowledge of a (short) proof for $(x_1, \ldots, x_T)$ along with a single witness $w_{T + 1}$ for the new instance $x_{T + 1}$. Importantly, the update does not require knowledge of witnesses for $x_1, \ldots, x_T$.
Last updated:  2023-06-08
Bingo: Adaptivity and Asynchrony in Verifiable Secret Sharing and Distributed Key Generation
Ittai Abraham, Philipp Jovanovic, Mary Maller, Sarah Meiklejohn, Gilad Stern
We present Bingo, an adaptively secure and optimally resilient packed asynchronous verifiable secret sharing (PAVSS) protocol that allows a dealer to share $f+1$ secrets with a total communication complexity of $O(\lambda n^2)$ words, where $\lambda$ is the security parameter and $n$ is the number of parties. Using Bingo, we obtain an adaptively secure validated asynchronous Byzantine agreement (VABA) protocol that uses $O(\lambda n^3)$ expected words and constant expected time, which we in turn use to construct an adaptively secure high-threshold asynchronous distributed key generation (ADKG) protocol that uses $O(\lambda n^3)$ expected words and constant expected time. To the best of our knowledge, our ADKG is the first to allow for an adaptive adversary while matching the asymptotic complexity of the best known static ADKGs.
Last updated:  2022-12-22
SuperNova: Proving universal machine executions without universal circuits
Abhiram Kothapalli, Srinath Setty
This paper introduces SuperNova, a new recursive proof system for incrementally producing succinct proofs of correct execution of programs on a stateful machine with a particular instruction set (e.g., EVM, RISC-V). A distinguishing aspect of SuperNova is that the cost of proving a step of a program is proportional only to the size of the circuit representing the instruction invoked by the program step. This is a stark departure from prior works that employ universal circuits where the cost of proving a program step is proportional at least to the sum of sizes of circuits representing each supported instruction—even though a particular program step invokes only one of the supported instructions. Naturally, SuperNova can support a rich instruction set without affecting the per-step proving costs. SuperNova achieves its cost profile by building on Nova, a prior high-speed recursive proof system, and leveraging its internal building block, folding schemes, in a new manner. We formalize SuperNova’s approach as a way to realize non-uniform IVC, a generalization of IVC. Furthermore, SuperNova’s prover costs and the recursion overhead are the same as Nova’s, and in fact, SuperNova is equivalent to Nova for machines that support a single instruction.
Last updated:  2022-12-22
An Injectivity Analysis of CRYSTALS-Kyber and Implications on Quantum Security
Xiaohui Ding, Muhammed F. Esgin, Amin Sakzad, Ron Steinfeld
The One-Way to Hiding (O2H) Lemma is a central component of proofs of chosen-ciphertext attack (CCA) security of practical public-key encryption schemes using variants of the Fujisaki-Okamoto (FO) transform in the Quantum Random Oracle Model (QROM). Recently, Kuchta et al. (EUROCRYPT ’20) introduced a new QROM proof technique, called Measure-Rewind-Measure (MRM), giving an improved variant of the O2H lemma, with a new security reduction that does not suffer from a square-root advantage security loss as in the earlier work of Bindel et al. (TCC ’19). However, the FO transform QROM CCA security reduction based on the improved MRM O2H lemma still requires an injectivity assumption on the underlying CPA-secure deterministic public-key encryption scheme. In particular, the tightness of the concrete security reduction relies on a sufficiently small injectivity bound, and obtaining such bounds for concrete schemes was left as an open problem by Kuchta et al. (EUROCRYPT ’20). In this paper, we address the above problem by deriving concrete bounds on the injectivity of the deterministic CPA-secure variant of CRYSTALS-Kyber, the public-key encryption scheme selected for standardisation by the NIST Post-Quantum Cryptography (PQC) standardisation process. We evaluate our bounds numerically for the CRYSTALS-Kyber parameter sets, and show that the effect of injectivity on the tightness of the QROM CCA security of the Fujisaki-Okamoto transformed Kyber KEM is negligible, i.e. allows for a tight QROM CCA security reduction. Consequently, we give the tightest QROM CCA security bounds to date for a simplified ‘single hashing’ variant of Kyber CCAKEM against attacks with low quantum circuit depth. Our bounds apply for all the Kyber parameter sets, based on the hardness of the Module Learning with Errors (MLWE) problem.
Last updated:  2022-12-22
CRS-Updatable Asymmetric Quasi-Adaptive NIZK Arguments
Behzad Abdolmaleki, Daniel Slamanig
A critical aspect for the practical use of non-interactive zero-knowledge (NIZK) arguments in the common reference string (CRS) model is the demand for a trusted setup, i.e., a trusted generation of the CRS. Recently, motivated by its increased use in real-world applications, there has been a growing interest in concepts that allow reducing the trust in this setup. In particular, one demands that the zero-knowledge and ideally also the soundness property hold even when the CRS generation is subverted. One important line of work in this direction is the so-called updatable CRS for NIZK by Groth et al. (CRYPTO’18). The basic idea is that everyone can update a CRS and there is a way to check the correctness of an update. This guarantees that if at least one operation (the generation or one update) has been performed honestly, the zero-knowledge and the soundness properties hold. Later, Lipmaa (SCN’20) adapted this notion of an updatable CRS to quasi-adaptive NIZK (QA-NIZK) arguments. In this work, we continue the study of CRS-updatable QA-NIZK and analyse the most efficient asymmetric QA-NIZKs by González et al. (ASIACRYPT’15) in a setting where the CRS is fully subverted and propose an updatable version of it. In contrast to the updatable QA-NIZK by Lipmaa (SCN’20), which represents a symmetric QA-NIZK and requires a new non-standard knowledge assumption for the subversion zero-knowledge property, our technique to construct updatable asymmetric QA-NIZK is under a well-known standard knowledge assumption, i.e., the Bilinear Diffie-Hellman Knowledge of Exponents assumption. Furthermore, we show the knowledge soundness of the (updatable) asymmetric QA-NIZKs, an open problem posed by Lipmaa, which makes them compatible with modular zk-SNARK frameworks such as LegoSNARK by Campanelli et al. (ACM CCS’19).
Last updated:  2023-03-02
Towards Secure Evaluation of Online Functionalities (Corrected and Extended Version)
Andreas Klinger, Ulrike Meyer
To date, ideal functionalities securely realized with secure multi-party computation (SMPC) mainly consider functions of the private inputs of a fixed number of a priori known parties. In this paper, we generalize these definitions such that protocols implementing online algorithms in a distributed fashion can be proven to be privacy-preserving. Online algorithms compute online functionalities that allow parties to arrive and leave over time, to provide multiple inputs and to obtain multiple outputs. In particular, the set of parties participating changes over time, i.e., at different points in time different sets of parties evaluate a function over their private inputs. To this end, we propose the notion of an online trusted third party that allows proving the security of SMPC protocols implementing online functionalities or online algorithms, respectively. We show that any online functionality can be implemented with perfect security in the presence of a semi-honest adversary, if strictly less than 1/2 of the parties participating are corrupted. We show that the same result holds in the presence of a malicious adversary if it corrupts strictly less than 1/3 of the parties and always allows the corrupted parties to arrive and provide input. Note that this is the corrected and extended version of the work presented in [24].
Last updated:  2022-12-27
An SVP attack on Vortex
Zhenfei Zhang
In [BS22], the authors proposed a lattice-based hash function that is useful for building zero-knowledge proofs with superior performance. In this short note we analyze the underlying lattice problem using the classic shortest vector problem, and show that 2 out of the 15 proposed parameter sets for this hash function do not achieve the claimed security.
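As a rough illustration of the kind of shortest-vector reasoning referenced above, the sketch below applies the Gaussian heuristic to a random q-ary lattice and compares the predicted shortest-vector length against a norm bound. All parameters are placeholders, not the Vortex parameter sets analysed in the note, and this is not the note's actual analysis.

```python
# Back-of-the-envelope only: the Gaussian heuristic predicts the shortest nonzero
# vector of a random lattice L of dimension d as gh(L) ~ sqrt(d/(2*pi*e)) * vol(L)^(1/d).
# Comparing gh(L) with the norm bound a scheme accepts for honest openings is a first
# sanity check for an SVP-style attack. Parameters below are hypothetical.

from math import sqrt, pi, e, log2

def gaussian_heuristic(d, log2_volume):
    return sqrt(d / (2 * pi * e)) * 2 ** (log2_volume / d)

d, k, q = 1024, 512, 2 ** 31 - 1      # lattice dimension, number of equations, modulus
log2_vol = (d - k) * log2(q)          # volume of the q-ary lattice is q^(d-k)
bound = 2 ** 20                       # (toy) norm bound accepted by the scheme

gh = gaussian_heuristic(d, log2_vol)
print(f"expected shortest vector length ~ 2^{log2(gh):.1f}")
print("vectors below the bound likely exist" if gh < bound else "heuristic: none below the bound")
```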
Last updated:  2024-02-04
DSKE: Digital Signature with Key Extraction
Zhipeng Wang, Orestis Alpos, Alireza Kavousi, Sze Yiu Chau, Duc V. Le, and Christian Cachin
This work introduces DSKE, digital signatures with key extraction. In a DSKE scheme, the private key can be extracted if more than a threshold of signatures on different messages are ever created while, within the threshold, each signature continues to authenticate the signed message. We give a formal definition of DSKE, as well as two provably secure constructions, one from hash-based digital signatures and one from polynomial commitments. We demonstrate that DSKE is useful for various applications, such as spam prevention and deniability. First, we introduce the GroupForge signature scheme, leveraging DSKE constructions to achieve deniability in digital communication. GroupForge integrates DSKE with a Merkle tree and timestamps to produce a ``short-lived'' signature equipped with extractable sets, ensuring deniability under a fixed public key. We illustrate that GroupForge can serve as a viable alternative to Keyforge in the non-attributable email protocol of Specter, Park, and Green (USENIX Sec '21), thereby eliminating the need for continuous disclosure of outdated private keys. Second, we leverage the inherent extraction property of DSKE to develop a Rate-Limiting Nullifier (RLN) scheme. RLN efficiently identifies and expels spammers once they exceed a predetermined action threshold, thereby jeopardizing their private keys. Moreover, we implement both variants of the DSKE scheme to demonstrate their performance and show it is comparable to existing signature schemes. We also implement GroupForge from the polynomial commitment-based DSKE and illustrate the practicality of our proposed method.
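The key-extraction idea above can be pictured with a polynomial analogy, in the spirit of the polynomial-commitment-based construction: each signature leaks one evaluation of a secret degree-t polynomial whose constant term encodes the private key, so any t+1 signatures allow interpolation. The sketch below is a conceptual toy, not the paper's scheme, and the field and degree are made up.

```python
# Conceptual sketch only (not the paper's DSKE construction): key extraction by
# Lagrange interpolation. At most t leaked points reveal nothing about f(0); any
# t+1 points determine it completely.

P = 2 ** 61 - 1   # a prime modulus for the toy field

def f(x, coeffs):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def interpolate_at_zero(points):
    """Lagrange interpolation of f(0) from (x, f(x)) pairs over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret_key = 123456789
coeffs = [secret_key, 11, 22, 33]   # degree t = 3, f(0) = secret_key

# Each "signature" on messages 1..4 exposes one point (m, f(m)); 4 > t points leak the key.
leaked = [(m, f(m, coeffs)) for m in range(1, 5)]
print(interpolate_at_zero(leaked) == secret_key)   # True
```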
Last updated:  2022-12-21
IsoLock: Thwarting Link-Prediction Attacks on Routing Obfuscation by Graph Isomorphism
Shaza Elsharief, Lilas Alrahis, Johann Knechtel, Ozgur Sinanoglu
Logic locking/obfuscation secures hardware designs from untrusted entities throughout the globalized semiconductor supply chain. Machine learning (ML) recently challenged the security of locking: such attacks successfully captured the locking-induced, structural design modifications to decipher obfuscated gates. Although routing obfuscation eliminates this threat, more recent attacks exposed new vulnerabilities, like link formation, breaking such schemes. Thus, there is still a need for advanced, truly learning-resilient locking solutions. Here we propose IsoLock, a provably-secure locking scheme that utilizes isomorphic structures which ML models and other structural methods cannot discriminate. Unlike prior work, IsoLock’s security promise neither relies on re-synthesis nor on dedicated sub-circuits. Instead, IsoLock introduces isomorphic key-gate structures within the design via systematic routing obfuscation. We theoretically prove the security of IsoLock against modeling attacks. Further, we lock ISCAS-85 and ITC-99 benchmarks and launch state-of-the-art ML attacks, SCOPE and MuxLink, as well as the Redundancy and SAAM attacks, which only decipher an average of 0–6% of the key, well confirming the resilience of IsoLock. All in all, IsoLock is proposed to break the cycle of “cat and mouse” in locking and attack studies, through a provably-secure locking approach against structural ML attacks.
Last updated:  2023-10-27
Pseudorandomness of Decoding, Revisited: Adapting OHCP to Code-Based Cryptography
Maxime Bombar, Alain Couvreur, and Thomas Debris-Alazard
Recent code-based cryptosystems rely, among other things, on the hardness of the decisional decoding problem. While the search version is well understood, both from practical and theoretical standpoints, the decision version has been less studied in the literature, and little is known about its relationships with the search version, especially for structured variants. On the other hand, in the world of Euclidean lattices, the situation is rather different, and many reductions exist, both for unstructured and structured versions of the underlying problems. For the latter versions, a powerful tool called the OHCP framework (for Oracle with Hidden Center Problem), which appears to be very general, has been introduced by Peikert et al. (STOC 2017) and has proved to be very useful as a black box inside reductions. In this work, we revisit this technique and extract the very essence of this framework, namely the Oracle Comparison Problem (OCP), to show how to recover the support of the error, solving an Oracle with Hidden Support Problem (OHSP), more suitable for code-based cryptography. This yields a new worst-case to average-case search-to-decision reduction. We then turn to the structured versions and explain why this is not as straightforward as for Euclidean lattices. Although we do not obtain a search-to-decision reduction for structured codes, we believe that our work opens the way towards new reductions for structured codes, given that the OHCP framework proved to be so powerful in lattice-based cryptography. Furthermore, we also believe that this technique could be extended to codes endowed with other metrics, such as the rank metric, for which no reduction is known.
Last updated:  2022-12-20
Faster Dual Lattice Attacks by Using Coding Theory
Kevin Carrier, Yixin Shen, Jean-Pierre Tillich
We present a faster dual lattice attack on the Learning with Errors (LWE) problem, based on ideas from coding theory. Basically, it consists of revisiting the most recent dual attack of \cite{Matzov22} and replacing modulus switching by a decoding algorithm. This replacement achieves a reduction from small LWE to plain LWE with a very significant reduction of the secret dimension. We also replace the enumeration part of this attack by betting that the secret is zero on the part where we want to enumerate it and iterate this bet over other choices of the enumeration part. We estimate the complexity of this attack by making the optimistic, but realistic guess that we can use polar codes for this decoding task. We show that under this assumption the best attacks on Kyber and Saber can be improved by 1 and 6 bits.
Last updated:  2023-11-07
Computational Hardness of the Permuted Kernel and Subcode Equivalence Problems
Paolo Santini, Marco Baldi, and Franco Chiaraluce
The Permuted Kernel Problem (PKP) asks to find a permutation which maps an input matrix into the kernel of a given matrix. The literature exhibits several works studying its hardness in the case of the input matrix being mono-dimensional (i.e., a vector), while the multi-dimensional case has received much less attention and, de facto, only the case of a binary ambient finite field has been studied. The Subcode Equivalence Problem (SEP), instead, asks to find a permutation so that a given linear code becomes a subcode of another given code. To the best of our knowledge, no algorithm to solve the SEP has ever been proposed. In this paper we study the computational hardness of solving these problems. We first show that, despite going by different names, PKP and SEP are exactly the same problem. Then we consider the state-of-the-art solver for the mono-dimensional PKP (namely, the KMP algorithm, proposed by Koussa, Macario-Rat and Patarin), generalize it to the multi-dimensional case and analyze both the finite and the asymptotic regimes. We further propose a new algorithm, which can be thought of as a refinement of KMP. In the asymptotic regime our algorithm does not improve on KMP but, in the finite regime (and for parameters of practical interest), we achieve significant improvements, especially for the multi-dimensional version of PKP. As evidence, we show that it is the fastest algorithm to attack several recommended instances of cryptosystems based on PKP. As a side-effect, given the mentioned equivalence between PKP and SEP, all the algorithms we analyze in this paper can be used to solve instances of the latter problem.
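To make the mono-dimensional PKP statement above concrete, the toy solver below brute-forces a tiny instance; it is exponential in the vector length and is emphatically not the KMP algorithm or the refinement proposed in the paper, only an illustration of what a solution looks like.

```python
# Toy illustration only: the (mono-dimensional) Permuted Kernel Problem. Given a
# matrix A over GF(p) and a vector v, find a permutation pi of v's entries such that
# A applied to the permuted vector is zero modulo p.

from itertools import permutations

def solve_pkp_bruteforce(A, v, p):
    n = len(v)
    for perm in permutations(range(n)):
        w = [v[i] for i in perm]
        if all(sum(a * x for a, x in zip(row, w)) % p == 0 for row in A):
            return perm
    return None

p = 7
A = [[1, 2, 3, 4, 5],
     [2, 1, 6, 3, 0]]
v = [1, 2, 3, 4, 5]          # a solution exists by construction, e.g. w = [1, 5, 2, 3, 4]
perm = solve_pkp_bruteforce(A, v, p)
print(perm)                  # some permutation placing v into the kernel of A
```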
Last updated:  2022-12-20
RMC-PVC: A Multi-Client Reusable Verifiable Computation Protocol (Long version)
Pascal Lafourcade, Gael Marcadet, Léo Robert
The verification of computations performed by an untrusted server is a cornerstone for delegated computations, especially in the multi-client setting where inputs are provided by different parties. Assuming a common secret between clients, a garbled circuit offers the attractive property of ensuring the correctness of a result computed by the untrusted server while keeping the input and the function private. Yet, this verification can be guaranteed only once. Based on the notion of multi-key homomorphic encryption (MKHE), we propose RMC-PVC, a multi-client verifiable computation protocol able to verify the correctness of computations performed by an untrusted server for inputs (encoded for a garbled circuit) provided by multiple clients. Thanks to MKHE, the garbled circuit is reusable an arbitrary number of times. In addition, each client can verify the computation on its own. Compared to a single-key FHE scheme, the MKHE usage in RMC-PVC allows reducing the workload of the server and thus the response delay for the client. It also enforces the privacy of the inputs, which are provided by different clients.
Last updated:  2023-02-26
Duoram: A Bandwidth-Efficient Distributed ORAM for 2- and 3-Party Computation
Adithya Vadapalli, Ryan Henry, Ian Goldberg
We design, analyze, and implement Duoram, a fast and bandwidth-efficient distributed ORAM protocol suitable for secure 2- and 3-party computation settings. Following Doerner and shelat's Floram construction (CCS 2017), Duoram leverages (2,2)-distributed point functions (DPFs) to represent PIR and PIR-writing queries compactly—but with a host of innovations that yield massive asymptotic reductions in communication cost and notable speedups in practice, even for modestly sized instances. Specifically, Duoram introduces a novel method for evaluating dot products of certain secret-shared vectors using communication that is only logarithmic in the vector length. As a result, for memories with $n$ addressable locations, Duoram can perform a sequence of $m$ arbitrarily interleaved reads and writes using just $O(m\lg n)$ words of communication, compared with Floram's $O(m \sqrt{n})$ words. Moreover, most of this work can occur during a data-independent preprocessing phase, leaving just $O(m)$ words of online communication cost for the sequence—i.e., a constant online communication cost per memory access.
Last updated:  2022-12-19
Clipaha: A Scheme to Perform Password Stretching on the Client
Francisco Blas Izquierdo Riera, Magnus Almgren, Pablo Picazo-Sanchez, Christian Rohner
Password security relies heavily on the choice of password by the user but also on the one-way hash functions used to protect stored passwords. To compensate for the increased computing power of attackers, modern password hash functions like Argon2, have been made more complex in terms of computational power and memory requirements. Nowadays, the computation of such hash functions is performed usually by the server (or authenticator) instead of the client. Therefore, constrained Internet of Things devices cannot use such functions when authenticating users. Additionally, the load of computing such functions may expose servers to denial of service attacks. In this work, we discuss client-side hashing as an alternative. We propose Clipaha, a client-side hashing scheme that allows using high-security password hashing even on highly constrained server devices. Clipaha is robust to a broader range of attacks compared to previous work and covers important and complex usage scenarios. Our evaluation discusses critical aspects involved in client-side hashing. We also provide an implementation of Clipaha in the form of a web library and benchmark the library on different systems to understand its mixed JavaScript and WebAssembly approach's limitations. Benchmarks show that our library is 50\% faster than similar libraries and can run on some devices where previous work fails.
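The sketch below illustrates the general idea of client-side password stretching discussed above, using Python's standard-library scrypt with a salt derived deterministically from the site and username so that a stateless client can recompute it. It is not the Clipaha library (which is a JavaScript/WebAssembly library targeting Argon2), and the parameters are illustrative only.

```python
# Minimal sketch of client-side password stretching, assuming a deterministic per-user
# salt so that no server round-trip or client state is needed. Not the Clipaha scheme
# itself; scrypt and the parameter choices below stand in for a memory-hard hash.

import hashlib, hmac

def client_side_stretch(password: str, username: str, site: str) -> bytes:
    # Deterministic, per-user salt derived from public identifiers.
    salt = hashlib.sha256(f"{site}\x00{username}".encode()).digest()
    # Memory-hard stretching performed by the client (parameters are illustrative).
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)

def server_store(stretched: bytes) -> bytes:
    # The server only applies a cheap hash to the already-stretched value it receives,
    # so it never sees the raw password and does no expensive work itself.
    return hashlib.sha256(stretched).digest()

stretched = client_side_stretch("correct horse battery staple", "alice", "example.org")
record = server_store(stretched)
print(hmac.compare_digest(record, server_store(stretched)))   # True
```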
Last updated:  2022-12-19
Leakage Resilient l-more Extractable Hash and Applications to Non-Malleable Cryptography
Aggelos Kiayias, Feng-Hao Liu, Yiannis Tselekounis
$\ell$-more extractable hash functions were introduced by Kiayias et al. (CCS '16) as a strengthening of extractable hash functions by Goldwasser et al. (Eprint '11) and Bitansky et al. (ITCS '12, Eprint '14). In this work, we define and study an even stronger notion of leakage-resilient $\ell$-more extractable hash functions, and instantiate the notion under the same assumptions used by Kiayias et al. and Bitansky et al. In addition, we prove that any hash function that can be modeled as a Random Oracle (RO) is leakage resilient $\ell$-more extractable, while it is however, not extractable according to the definition by Goldwasser et al. and Bitansky et al., showing a separation of the notions. We show that this tool has many interesting applications to non-malleable cryptography. Particularly, we can derive efficient, continuously non-malleable, leakage-resilient codes against split-state attackers (TCC '14), both in the CRS and the RO model. Additionally, we can obtain succinct non-interactive non-malleable commitments both in the CRS and the RO model, satisfying a stronger definition than the prior ones by Crescenzo et al. (STOC '98), and Pass and Rosen (STOC '05), in the sense that the simulator does not require access to the original message, while the attacker's auxiliary input is allowed to depend on it.
Last updated:  2022-12-19
Worst and Average Case Hardness of Decoding via Smoothing Bounds
Thomas Debris-Alazard, Nicolas Resch
In this work, we consider the worst and average case hardness of the decoding problems that are the basis for code-based cryptography. By a decoding problem, we consider inputs of the form $(\mathbf{G}, \mathbf{m} \mathbf{G} + \mathbf{t})$ for a matrix $\mathbf{G}$ that generates a code and a noise vector $\mathbf{t}$, and the algorithm's goal is to recover $\mathbf{m}$. We consider a natural strategy for creating a reduction to an average-case problem: from our input we simulate a Learning Parity with Noise (LPN) oracle, where we recall that LPN is essentially an average-case decoding problem where there is no a priori lower bound on the rate of the code. More formally, the oracle $\mathcal{O}_{\mathbf{x}}$ outputs independent samples of the form $\langle \mathbf{x}, \mathbf{a} \rangle + e$, where $\mathbf{a}$ is a uniformly random vector and $e$ is a noise bit. Such an approach is implicit in the previous worst-case to average-case reductions for coding problems (Brakerski et al Eurocrypt 2019, Yu and Zhang CRYPTO 2021). To analyze the effectiveness of this reduction, we use a smoothing bound derived recently by Debris-Alazard et al (IACR ePrint 2022), which quantifies the simulation error of this reduction. It is worth noting that this latter work crucially uses a bound, known as the second linear programming bound, on the weight distribution of the code generated here by $\mathbf{G}$. Our approach, which is Fourier analytic in nature, applies to any smoothing distribution (so long as it is radial); for our purposes, the best choice appears to be Bernoulli (although for the analysis it is most effective to study the uniform distribution over a sphere, and subsequently translate the bound back to the Bernoulli distribution by applying a truncation trick). Our approach works naturally when reducing from a worst-case instance, as well as from an average-case instance. While we are unable to improve the parameters of the worst-case to average-case reductions of Brakerski et al or Yu and Zhang, we think that our work highlights two important points. Firstly, in analyzing the average-case to average-case reduction we run into inherent limitations of this reduction template. Essentially, it appears hopeless to reduce to an LPN instance for which the noise rate is more than inverse-polynomially biased away from uniform. We furthermore uncover a surprising weakness in the second linear programming bound: we observe that it is essentially useless for the regime of parameters where the rate of the code is inverse polynomial in the block-length. By highlighting these shortcomings, we hope to stimulate the development of new techniques for reductions between cryptographic decoding problems.
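The LPN oracle $\mathcal{O}_{\mathbf{x}}$ described above is easy to make concrete; the toy code below only simulates the oracle interface with Bernoulli noise (parameters are made up) and says nothing about the paper's smoothing-based simulation of it from a decoding instance.

```python
# Toy illustration only of the LPN oracle O_x: each query returns (a, <a, x> + e mod 2)
# for a uniformly random vector a and Bernoulli(tau) noise bit e.

import random

def lpn_oracle(x, tau):
    """One LPN sample for secret x (list of bits) and noise rate tau."""
    a = [random.randrange(2) for _ in x]
    e = 1 if random.random() < tau else 0
    b = (sum(ai * xi for ai, xi in zip(a, x)) + e) % 2
    return a, b

random.seed(0)
n, tau = 16, 0.125
secret = [random.randrange(2) for _ in range(n)]
samples = [lpn_oracle(secret, tau) for _ in range(8)]
# Roughly a (1 - tau) fraction of labels match the noiseless inner product.
agree = sum((sum(ai * xi for ai, xi in zip(a, secret)) % 2) == b for a, b in samples)
print(agree, "of", len(samples), "samples are noise-free")
```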
Last updated:  2022-12-19
Ring Signatures with User-Controlled Linkability
Dario Fiore, Lydia Garms, Dimitris Kolonelos, Claudio Soriente, Ida Tucker
Anonymous authentication primitives, e.g., group or ring signatures, allow one to realize privacy-preserving data collection applications, as they strike a balance between authenticity of data being collected and privacy of data providers. At PKC 2021, Diaz and Lehmann defined group signatures with User-Controlled Linkability (UCL) and provided an instantiation based on BBS+ signatures. In a nutshell, a signer of a UCL group signature scheme can link any of her signatures: linking evidence can be produced at signature time, or after signatures have been output, by providing an explicit linking proof. In this paper, we introduce Ring Signatures with User-Controlled Linkability (RS-UCL). Compared to group signatures with user-controlled linkability, RS-UCL require no group manager and can be instantiated in a completely decentralized manner. We also introduce a variation, User Controlled and Autonomous Linkability (RS-UCAL), which gives the user full control of the linkability of their signatures. We provide a formal model for both RS-UCL and RS-UCAL and introduce a compiler that can upgrade any ring signature scheme to RS-UCAL. The compiler leverages a new primitive we call Anonymous Key Randomizable Signatures (AKRS) — a signature scheme where the verification key can be randomized — that can be of independent interest. We also provide different instantiations of AKRS based on Schnorr signatures and on lattices. Finally, we show that an AKRS scheme can additionally be used to construct an RS-UCL scheme.
Last updated:  2024-02-08
A Simple Noncommutative UOV Scheme
Lih-Chung Wang, Po-En Tseng, Yen-Liang Kuan, and Chun-Yen Chou
In this paper, we propose a simple noncommutative-ring-based UOV signature scheme with key-randomness alignment: Simple NOVA, which can be viewed as a simplified version of NOVA [48]. We simplify the design of NOVA by skipping the perturbation trick used in NOVA, thus shortening the key generation process and accelerating signing and verification. Together with a corresponding small modification, this alternative version of NOVA is also secure and may be more suitable for practical use. We also use Magma to implement the scheme and give a detailed security analysis against known major attacks.
Last updated:  2022-12-19
Demystifying the comments made on “A Practical Full Key Recovery Attack on TFHE and FHEW by Inducing Decryption Errors”
Bhuvnesh Chaturvedi, Anirban Chakraborty, Ayantika Chatterjee, Debdeep Mukhopadhyay
Fully Homomorphic Encryption (FHE) allows computations on encrypted data without the need for decryption. Therefore, in the world of cloud computing, FHE provides an essential means for users to garner different computational services from potentially untrusted servers while keeping sensitive data private. In such a context, the security and privacy guarantees of well-known FHE schemes become paramount. In a research article, we (Chaturvedi et al., ePrint 2022/1563) have shown that popular FHE schemes like TFHE and FHEW are vulnerable to CVO (Ciphertext Verification Oracle) attacks, which belong to the family of “reaction attacks” [6]. We show, for the first time, that feedback from the client (user) can be craftily used by the server to extract the error (noise) associated with each computed ciphertext. Once the errors for some m ciphertexts (m > n, where n is the key size) are retrieved, the original secret key can be trivially leaked using the standard Gaussian Elimination method. The results in the paper (Chaturvedi et al., ePrint 2022/1563) show that FHE schemes should be subjected to further security evaluations, specifically in the context of system-wide implementation, such that CVO-based attacks can be eliminated. Quite recently, Michael Walter published a document (ePrint 2022/1722), claiming that the timing-channel observations we made in our work (Chaturvedi et al., ePrint 2022/1563) “are false”. In this document, we debunk this claim and explain how we use the timing channel to improve the CVO attack. We explain that the CVO-based attack technique we proposed in the paper (Chaturvedi et al., ePrint 2022/1563) is the result of a careful selection of perturbation values and is the first work in the literature showing that reaction-based attacks are possible in the context of present FHE schemes in a realistic cloud setting. We further argue that, for an attacker, any additional information that can aid a particular attack should be considered leakage and must be treated with due importance in order to stymie the attack.
Last updated:  2023-03-08
A Holistic Approach Towards Side-Channel Secure Fixed-Weight Polynomial Sampling
Markus Krausz, Georg Land, Jan Richter-Brockmann, Tim Güneysu
The sampling of polynomials with fixed weight is a procedure required by round-4 Key Encapsulation Mechanisms (KEMs) for Post-Quantum Cryptography (PQC) standardization (BIKE, HQC, McEliece) as well as NTRU, Streamlined NTRU Prime, and NTRU LPRime. Recent attacks have shown in this context that side-channel leakage of sampling methods can be exploited for key recoveries. While countermeasures against such timing attacks have already been presented, there is still no comprehensive work covering solutions that are also secure against power side channels. To close this gap, the contribution of this work is threefold: First, we analyze requirements for the different use cases of fixed-weight sampling. Second, we demonstrate how all known sampling methods can be implemented securely against timing and power/EM side channels and propose performance-enhancing modifications. Furthermore, we propose a new, comparison-based methodology that outperforms existing methods in the masked setting for the three round-4 KEMs BIKE, HQC, and McEliece. Third, we present bitsliced and arbitrary-order masked software implementations and benchmark them for all relevant cryptographic schemes to be able to infer recommendations for each use case. Additionally, we provide a hardware implementation of our new method as a case study and analyze the feasibility of implementing the other approaches in hardware.
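As background on the functional behaviour that fixed-weight samplers must realize (and only that; the paper's timing- and power-protected methods, including its new comparison-based technique, are more involved), one common reference approach attaches a random key to every coefficient position, sorts, and sets the first $w$ positions to one. A toy Python sketch with placeholder parameters n and w:

```python
import secrets

def fixed_weight_vector(n, w):
    """Return a length-n 0/1 vector with exactly w ones.

    Functional reference only: sort positions by random keys and set the
    first w to one. This straightforward version is NOT constant time;
    side-channel-secure variants must remove its data-dependent behaviour.
    """
    keyed = sorted(range(n), key=lambda _: secrets.randbits(32))  # random permutation
    chosen = set(keyed[:w])                                       # first w positions
    return [1 if i in chosen else 0 for i in range(n)]

v = fixed_weight_vector(n=64, w=11)
assert sum(v) == 11
```

Constant-time implementations typically replace the data-dependent steps here (the sort and the membership test) with fixed-access-pattern equivalents such as sorting networks.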
Last updated:  2022-12-19
On blindness of several ElGamal-type blind signatures
Alexandra Babueva, Liliya Akhmetzyanova, Evgeny Alekseev, Oleg Taraskin
Blind signature schemes are an essential element of many complex information systems such as e-cash and e-voting systems. They should provide two security properties: unforgeability and blindness. The former is standard for all signature schemes and ensures that a valid signature can be generated only by interacting with the holder of the secret signing key. The latter is specific to this class of signature schemes and means that there is no way to link a (message, signature) pair to a particular execution of the signing protocol. In the current paper we discuss the blindness property and various security notions formalizing it. We analyze several ElGamal-type blind signature schemes with regard to blindness and present effective attacks violating blindness for three of them. All the presented attacks can be performed by any external observer and do not require knowledge of the signing key. One of the schemes was conceivably broken due to an incorrect understanding of the blindness property.
Last updated:  2023-02-11
Removing the Field Size Loss from Duc et al.'s Conjectured Bound for Masked Encodings
Julien Béguinot, Wei Cheng, Sylvain Guilley, Yi Liu, Loïc Masure, Olivier Rioul, François-Xavier Standaert
At Eurocrypt 2015, Duc et al. conjectured that the success rate of a side-channel attack targeting an intermediate computation encoded in a linear secret-sharing, a.k.a masking with \(d+1\) shares, could be inferred by measuring the mutual information between the leakage and each share separately. This way, security bounds can be derived without having to mount the complete attack. So far, the best proven bounds for masked encodings were nearly tight with the conjecture, up to a constant factor overhead equal to the field size, which may still give loose security guarantees compared to actual attacks. In this paper, we improve upon the state-of-the-art bounds by removing the field size loss, in the cases of Boolean masking and arithmetic masking modulo a power of two. As an example, when masking in the AES field, our new bound outperforms the former ones by a factor \(256\). Moreover, we provide theoretical hints that similar results could hold for masking in other fields as well.
Last updated:  2023-09-26
Regularizers to the Rescue: Fighting Overfitting in Deep Learning-based Side-channel Analysis
Azade Rezaeezade and Lejla Batina
Despite the considerable achievements of deep learning-based side-channel analysis, overfitting represents a significant obstacle in finding optimized neural network models. This issue is not unique to the side-channel domain. Regularization techniques are popular solutions to overfitting and have long been used in various domains. At the same time, the works in the side-channel domain show sporadic utilization of regularization techniques. What is more, no systematic study investigates these techniques' effectiveness. In this paper, we investigate the effectiveness of regularization on a randomly selected model by applying four powerful and easy-to-use regularization techniques to eight combinations of datasets, leakage models, and deep learning topologies. The investigated techniques are $L_1$, $L_2$, dropout, and early stopping. Our results show that while all these techniques can improve performance in many cases, $L_1$ and $L_2$ are the most effective. Finally, if training time matters, early stopping is the best technique.
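As a concrete illustration of how the four techniques are typically wired into a profiled side-channel model, here is a small tf.keras sketch; the architecture, trace length, and hyperparameter values are placeholders and do not correspond to the models evaluated in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers, callbacks

def build_model(trace_len, n_classes, l2_strength=1e-4, dropout_rate=0.2):
    """Small MLP with L2 weight penalties and dropout (illustrative shapes only)."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(trace_len,)),
        layers.Dense(128, activation="relu",
                     kernel_regularizer=regularizers.l2(l2_strength)),
        layers.Dropout(dropout_rate),
        layers.Dense(128, activation="relu",
                     kernel_regularizer=regularizers.l2(l2_strength)),
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_model(trace_len=700, n_classes=256)
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Early stopping: halt training once validation loss stops improving.
early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                     restore_best_weights=True)
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=200, callbacks=[early_stop])
```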
Last updated:  2024-02-02
An algorithm for efficient detection of $(N,N)$-splittings and its application to the isogeny problem in dimension 2
Maria Corte-Real Santos, Craig Costello, and Sam Frengley
We develop an efficient algorithm to detect whether a superspecial genus 2 Jacobian is optimally $(N, N)$-split for each integer $N \leq 11$. Incorporating this algorithm into the best-known attack against the superspecial isogeny problem in dimension 2 gives rise to significant cryptanalytic improvements. Our implementation shows that when the underlying prime $p$ is 100 bits, the attack is sped up by a factor $25{\tt x}$; when the underlying prime is 200 bits, the attack is sped up by a factor $42{\tt x}$; and, when the underlying prime is 1000 bits, the attack is sped up by a factor $160{\tt x}$.
Last updated:  2023-11-22
BlindHub: Bitcoin-Compatible Privacy-Preserving Payment Channel Hubs Supporting Variable Amounts
Xianrui Qin, Shimin Pan, Arash Mirzaei, Zhimei Sui, Oğuzhan Ersoy, Amin Sakzad, Muhammed F. Esgin, Joseph K. Liu, Jiangshan Yu, and Tsz Hon Yuen
Payment Channel Hub (PCH) is a promising solution to the scalability issue of first-generation blockchains or cryptocurrencies such as Bitcoin. It supports off-chain payments between a sender and a receiver through an intermediary (called the tumbler). Relationship anonymity and value privacy are desirable features of privacy-preserving PCHs, which prevent the tumbler from identifying the sender and receiver pairs as well as the payment amounts. To our knowledge, all existing Bitcoin-compatible PCH constructions that guarantee relationship anonymity allow only a (predefined) fixed payment amount. Thus, to achieve payments with different amounts, they would require either multiple PCH systems or running one PCH system multiple times. Neither of these solutions would be deemed practical. In this paper, we propose the first Bitcoin-compatible PCH that achieves relationship anonymity and supports variable amounts for payment. To achieve this, we have several layers of technical constructions, each of which could be of independent interest to the community. First, we propose $\textit{BlindChannel}$, a novel bi-directional payment channel protocol for privacy-preserving payments, where one of the channel parties is unable to see the channel balances. Then, we further propose $\textit{BlindHub}$, a three-party (sender, tumbler, receiver) protocol for private conditional payments, where the tumbler pays to the receiver only if the sender pays to the tumbler. The appealing additional feature of BlindHub is that the tumbler cannot link the sender and the receiver while supporting a variable payment amount. To construct BlindHub, we also introduce two new cryptographic primitives as building blocks, namely $\textit{Blind Adaptor Signature}$ (BAS), and $\textit{Flexible Blind Conditional Signature}$. BAS is an adaptor signature protocol built on top of a blind signature scheme. Flexible Blind Conditional Signature is a new cryptographic notion enabling us to provide an atomic and privacy-preserving PCH. Lastly, we instantiate both BlindChannel and BlindHub protocols and present implementation results to show their practicality.
Last updated:  2023-03-02
Mind Your Path: On (Key) Dependencies in Differential Characteristics
Thomas Peyrin, Quan Quan Tan
Cryptanalysts have been looking for differential characteristics in ciphers for decades and it remains unclear how the subkey values and, more generally, the Markov assumption exactly impact their probability estimation. There were theoretical efforts considering some simple linear relationships between differential characteristics and subkey values, but the community has not yet explored many possible nonlinear dependencies one can find in differential characteristics. Meanwhile, the overwhelming majority of cryptanalysis works still assume complete independence between the cipher rounds. We give here a practical framework and a corresponding tool to investigate all such linear or nonlinear effects and we show that they can have an important impact on the security analysis of many ciphers. Surprisingly, this invalidates many differential characteristics that appeared in the literature in the past years: we have checked differential characteristics from 8 articles (4 each for both SKINNY and GIFT) and most of these published paths are impossible or work only for a very small proportion of the key space. We applied our method to SKINNY and GIFT, but we expect more impossibilities for other ciphers. To showcase our advances in the dependencies analysis, in the case of SKINNY we are able to obtain a more accurate probability distribution of a differential characteristic with respect to the keys (with practical verification when it is computationally feasible). Our work indicates that newly proposed differential characteristics should now come with an analysis of how the key values and the Markov assumption might or might not affect/invalidate them. In this direction, more constructively, we include a proof of concept of how one can incorporate additional constraints into Constraint Programming so that the search for differential characteristics can avoid (to a large extent) differential characteristics that are actually impossible due to the dependency issues our tool detected.
Last updated:  2023-01-06
New and Improved Constructions for Partially Equivocable Public Key Encryption
Benoît Libert, Alain Passelègue, Mahshid Riahinia
Non-committing encryption (NCE) is an advanced form of public-key encryption which guarantees the security of a Multi-Party Computation (MPC) protocol in the presence of an adaptive adversary. Brakerski et al. (TCC 2020) recently proposed an intermediate notion, termed Packed Encryption with Partial Equivocality (PEPE), which implies NCE and preserves the ciphertext rate (up to a constant factor). In this work, we propose three new constructions of rate-1 PEPE based on standard assumptions. In particular, we obtain the first constant ciphertext-rate NCE construction from the LWE assumption with polynomial modulus, and from the Subgroup Decision assumption. We also propose an alternative DDH-based construction with guaranteed polynomial running time.
Last updated:  2023-04-18
TreeSync: Authenticated Group Management for Messaging Layer Security
Théophile Wallez, Jonathan Protzenko, Benjamin Beurdouche, Karthikeyan Bhargavan
Messaging Layer Security (MLS), currently undergoing standardization at the IETF, is an asynchronous group messaging protocol that aims to be efficient for large dynamic groups, while providing strong guarantees like forward secrecy (FS) and post-compromise security (PCS). While prior work on MLS has extensively studied its group key establishment component (called TreeKEM), many flaws in early designs of MLS have stemmed from its group integrity and authentication mechanisms that are not as well-understood. In this work, we identify and formalize TreeSync: a sub-protocol of MLS that specifies the shared group state, defines group management operations, and ensures consistency, integrity, and authentication for the group state across all members. We present a precise, executable, machine-checked formal specification of TreeSync, and show how it can be composed with other components to implement the full MLS protocol. Our specification is written in F* and serves as a reference implementation of MLS; it passes the RFC test vectors and is interoperable with other MLS implementations. Using the DY* symbolic protocol analysis framework, we formalize and prove the integrity and authentication guarantees of TreeSync, under minimal security assumptions on the rest of MLS. Our analysis identifies a new attack and we propose several changes that have been incorporated in the latest MLS draft. Ours is the first testable, machine-checked, formal specification for MLS, and should be of interest to both developers and researchers interested in this upcoming standard.
Last updated:  2022-12-16
Linear Cryptanalysis of Reduced-Round Simeck Using Super Rounds
Reham Almukhlifi, Poorvi Vora
The Simeck family of lightweight block ciphers, proposed by Yang et al. in 2015, combines the design features of the NSA-designed block ciphers Simon and Speck. Linear cryptanalysis using super-rounds was proposed by Almukhlifi and Vora to increase the efficiency of implementing Matsui’s second algorithm and achieved good results on all variants of Simon. The improved linear attacks result from the observation that, after four rounds of encryption, one bit of the left half of the state of the cipher depends on only 17 key bits (19 key bits for the larger variants of the cipher). Furthermore, due to the similarity between the designs of Simon and Simeck, we are able to follow the same attack model and present improved linear attacks against all variants of Simeck. In this paper, we present attacks on 19 rounds of Simeck 32/64, 28 rounds of Simeck 48/96, and 33 rounds of Simeck 64/128, often with the direct recovery of the full master key without repeating the attack over multiple rounds. We also verified the results of linear cryptanalysis on 8, 10, and 12 rounds for Simeck 32/64.
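For readers who want to trace the key-bit dependencies themselves, the Simeck round function is public and simple; the sketch below implements one Feistel round as specified by Yang et al. (the round keys used here are arbitrary placeholders and the key schedule is omitted).

```python
def rol(x, r, w=16):
    """Rotate a w-bit word left by r positions."""
    mask = (1 << w) - 1
    return ((x << r) | (x >> (w - r))) & mask

def simeck_round(left, right, k, w=16):
    """One Feistel round of Simeck with word size w (w=16 for Simeck 32/64):
    (l, r) -> (f(l) ^ r ^ k, l), where f(x) = (x & (x <<< 5)) ^ (x <<< 1)."""
    f = (left & rol(left, 5, w)) ^ rol(left, 1, w)
    return (f ^ right ^ k) & ((1 << w) - 1), left

# toy usage: four rounds of Simeck 32/64 with placeholder round keys
l, r = 0x6565, 0x6877
for rk in (0x0001, 0x0203, 0x0405, 0x0607):
    l, r = simeck_round(l, r, rk)
```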
Last updated:  2024-01-02
Merkle Tree Ladder Mode: Reducing the Size Impact of NIST PQC Signature Algorithms in Practice
Andrew Fregly, Joseph Harvey, Burton S. Kaliski Jr., and Swapneel Sheth
We introduce the Merkle Tree Ladder (MTL) mode of operation for signature schemes. MTL mode signs messages using an underlying signature scheme in such a way that the resulting signatures are condensable: a set of MTL mode signatures can be conveyed from a signer to a verifier in fewer bits than if the MTL mode signatures were sent individually. In MTL mode, the signer sends a shorter condensed signature for each message of interest and occasionally provides a longer reference value that helps the verifier process the condensed signatures. We show that in a practical scenario involving random access to an initial series of 10,000 signatures that expands gradually over time, MTL mode can reduce the size impact of the NIST PQC signature algorithms, which have signature sizes of 666 to 49,856 bytes with example parameters at various security levels, to a condensed signature size of 248 to 472 bytes depending on the selected security level. Even adding the overhead of the reference values, MTL mode signatures still reduce the overall signature size impact under a range of operational assumptions. Because MTL mode itself is quantum-safe, the mode can support long-term cryptographic resiliency in applications where signature size impact is a concern without limiting cryptographic diversity only to algorithms whose signatures are naturally short.
Last updated:  2024-01-31
Acsesor: A New Framework for Auditable Custodial Secret Storage and Recovery
Melissa Chase, Hannah Davis, Esha Ghosh, and Kim Laine
Custodial secret management services provide a convenient centralized user experience, portability, and emergency recovery for users who cannot reliably remember or store their own credentials and cryptographic keys. Unfortunately, these benefits are only available when users compromise the security of their secrets and entrust them to a third party. This makes custodial secret management service providers ripe targets for exploitation, and exposes valuable and sensitive data to data leaks, insider attacks, and password cracking, etc. In non-custodial solutions (utilized by some password managers and cryptocurrency wallets), the users are in charge of a high-entropy secret, such as a cryptographic secret key or a long passphrase, that controls access to their data. While these solutions have a stronger security model, the obvious downside here is the usability: it is very difficult for people to store cryptographic secrets reliably. We present Acsesor: a new framework for auditable custodial secret management with decentralized trust. Our framework offers a middle-ground between a fully custodial and fully non-custodial recovery system: it enhances custodial recovery systems with cryptographically assured access monitoring and a distributed trust assumption. In particular, the Acsesor framework distributes the recovery process across a set of (user-chosen) guardians. However, the user is never required to interact directly with the guardians during recovery, which allows us to retain the high usability of centralized custodial solutions. By allowing the guardians to implement flexible user-chosen response policies, Acsesor can address a broad range of problem scenarios in classical secret management solutions. Finally, we also instantiate the Acsesor framework with a base protocol built of standard primitives: standard encryption schemes, commitment schemes, and privacy-preserving transparency ledgers.
Last updated:  2022-12-21
Efficient Zero Knowledge Arguments for Bilinear Matrix Relations over Finite Fields and Knowledge-Soundness Enhancement via Operations over Extended Field
Yuan Tian
In data-intensive private computing applications various relations appear as, or can be reduced to, matrix relations. In this paper we investigate two problems related to constructing zero-knowledge argument (ZKA) protocols for matrix relations (in the commit-and-prove paradigm). In the first part, we establish ZKA for some bilinear matrix relations over $\mathbb{F}_p$. The relations in consideration include (1) general forms of bilinear relations with two witness matrices and some of the most important special cases; (2) some special forms of bilinear relations with three or four witness matrices; (3) the eigenvalue relation. In private computing tasks various important relations are instances or special cases of these relations, e.g., the matrix multiplicative relation, inverse relation, similarity relation, some structure decomposition relations, and some isomorphism relations for lattices and graphs. Instead of applying the general linearization approach to deal with these non-linear relations, our approach is matrix-specific. The matrix equation is treated as a tensor identity, and probabilistically equivalent reduction techniques (amortization) are widely applied to reduce non-linear matrix relations to non-linear vector relations. To the author's knowledge, there are currently no other systematic works on ZKA for non-linear matrix relations. Our approach significantly outperforms the general linearization approach in all important performance measures; e.g., for $n \times t$ matrix witnesses the required size of the c.r.s. (used only as the public key for commitment) can be compressed by a factor of $2nt$, and the number of rounds, group elements, and field elements in messages are all decreased by roughly one half for large matrices. In the second part, we enhance the knowledge-soundness of ZKA for the linear matrix relation over the ground field $\mathbb{F}_p$. By treating a matrix in $\mathbb{F}_p^{n\times td}$ as an $nt$-dimensional vector over the degree-$d$ extension field of $\mathbb{F}_p$ and applying appropriate reductions, we decrease the knowledge error of the original ZKA over $\mathbb{F}_p$ from $O(1/p)$ down to $O(1/p^d)$. This is comparable to the general parallel-repetition approach, which improves the knowledge error to the same degree, but our matrix-specific approach at the same time significantly improves other performance measures, e.g., smaller c.r.s., fewer rounds, and shorter messages.
Last updated:  2022-12-15
Find Thy Neighbourhood: Privacy-Preserving Local Clustering
Pranav Shriram A, Nishat Koti, Varsha Bhat Kukkala, Arpita Patra, Bhavish Raj Gopal
Identifying a cluster around a seed node in a graph, termed local clustering, finds use in several applications, including fraud detection, targeted advertising, community detection, etc. However, performing local clustering is challenging when the graph is distributed among multiple data owners, which is further aggravated by the privacy concerns that arise in disclosing their view of the graph. This necessitates designing solutions for privacy-preserving local clustering, a problem addressed for the first time in the literature. We propose using the technique of secure multiparty computation (MPC) to achieve the same. Our local clustering algorithm is based on the heat kernel PageRank (HKPR) metric, which produces the best-known cluster quality. En route to our final solution, we have two important steps: (i) designing data-oblivious equivalents of the state-of-the-art algorithms for computing local clustering and HKPR values, and (ii) compiling the data-oblivious algorithms into their secure realisation via an MPC framework that supports operations over fixed-point arithmetic representation such as multiplication and division. Keeping efficiency in mind for large graphs, we choose the best-known honest-majority 3-party framework of SWIFT (Koti et al., USENIX'21) and enhance it with some of the necessary yet missing primitives, before using it for our purpose. We benchmark the performance of our secure protocols, and the reported run time showcases the practicality of the same. Further, we perform extensive experiments to evaluate the accuracy loss of our protocols. Compared to their cleartext counterparts, we observe that the results are comparable and thus showcase the practicality of the designed protocols.
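For reference, the heat kernel PageRank underlying the clustering step is usually defined as below (standard definition; the notation for the seed distribution $s$, random-walk matrix $P$, and temperature $t$ is ours and may differ from the paper's):

```latex
% Heat kernel PageRank of a seed (row) distribution s on a graph with
% random-walk transition matrix P = D^{-1}A and temperature parameter t
\rho_{t,s} \;=\; e^{-t} \sum_{k=0}^{\infty} \frac{t^{k}}{k!}\, s\, P^{k}
```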
Last updated:  2022-12-14
Optimization for SPHINCS+ using Intel Secure Hash Algorithm Extensions
Thomas Hanson, Qian Wang, Santosh Ghosh, Fernando Virdia, Anne Reinders, Manoj R. Sastry
SPHINCS+ was selected as a candidate digital signature scheme for standardization by the NIST Post-Quantum Cryptography Standardization Process. It offers security capabilities relying only on the security of cryptographic hash functions. However, it is less efficient than the lattice-based schemes. In this paper, we present an optimized software library for the SPHINCS+ signature scheme, which combines the Intel® Secure Hash Algorithm Extensions (SHA-NI) and AVX2 vector instructions. We obtain significant speed-ups of SPHINCS+-128f-simple, which offers 128-bit security, over both the non-optimized (70%) and AVX2 (8%–23%) reference implementations.
Last updated:  2024-01-09
A note on SPHINCS+ parameter sets
Stefan Kölbl and Jade Philipoom
In this note, we explore parameter sets for SPHINCS+ which support a smaller number of signatures than $2^{64}$, but are otherwise compatible with the SLH-DSA specification. In practice, use cases for which a low number of signatures per key pair suffices are common, and as we will show this allows a significant reduction in signature size and verification time for SPHINCS+. For this we carry out a larger search through the SPHINCS+ parameter space, comparing it with the current parameter sets and further showing that for carefully chosen parameters the security degrades slowly if one exceeds the limits. Finally, we provide a case study for firmware signing on OpenTitan to demonstrate the efficiency of these alternative parameters.
Last updated:  2023-03-20
Formal Analysis of SPDM: Security Protocol and Data Model version 1.2
Cas Cremers, Alexander Dax, Aurora Naska
DMTF is a standards organization formed by major industry players in IT infrastructure, including AMD, Alibaba, Broadcom, Cisco, Dell, Google, Huawei, IBM, Intel, Lenovo, and NVIDIA, which aims to enable interoperability across, e.g., cloud, virtualization, network, server, and storage products. It is currently standardizing a security protocol called SPDM, which aims to secure communication over the wire and to enable device attestation, notably also explicitly catering for communicating hardware components. The SPDM protocol inherits requirements and design ideas from IETF’s TLS 1.3. However, its state machines and transcript handling are substantially different and more complex. While architecture, specification, and open-source libraries of the current versions of SPDM are publicly available, these include no significant security analysis of any kind. In this work we develop the first formal models of the three modes of the SPDM protocol version 1.2.1, and formally analyze their main security properties.
Last updated:  2024-03-07
Asymptotically Optimal Message Dissemination with Applications to Blockchains
Chen-Da Liu-Zhang, Christian Matt, and Søren Eller Thomsen
Messages in large-scale networks such as blockchain systems are typically disseminated using flooding protocols, in which parties send the message to a random set of peers until it reaches all parties. Optimizing the communication complexity of such protocols and, in particular, the per-party communication complexity is of primary interest since nodes in a network are often subject to bandwidth constraints. Previous flooding protocols incur a per-party communication complexity of $\Omega(l\cdot \gamma^{-1} \cdot (\log(n) + \kappa))$ bits to disseminate an $l$-bit message among $n$ parties with security parameter $\kappa$ when it is guaranteed that a $\gamma$ fraction of the parties remain honest. In this work, we present the first flooding protocols with a per-party communication complexity of $O(l\cdot \gamma^{-1})$ bits. We further show that this is asymptotically optimal and that our protocols can be instantiated provably securely in the usual setting for proof-of-stake blockchains. To demonstrate that one of our new protocols is not only asymptotically optimal but also practical, we perform several probabilistic simulations to estimate the concrete complexity for given parameters. Our simulations show that our protocol significantly improves the per-party communication complexity over the state-of-the-art for practical parameters. Hence, for given bandwidth constraints, our results make it possible, e.g., to increase the block size, improving the overall throughput of a blockchain.
Last updated:  2022-12-14
On Side-Channel and CVO Attacks against TFHE and FHEW
Michael Walter
The recent work of Chaturvedi et al. (ePrint 2022/685) claims to observe leakage about secret information in a ciphertext of TFHE through a timing side-channel on the (untrusted) server. In (Chaturvedi et al., ePrint 2022/1563) this is combined with an active attack against TFHE and FHEW. The claims in (Chaturvedi et al., ePrint 2022/685) about the non-trivial leakage from a ciphertext would have far-reaching implications, since the server does not have any secret inputs. In particular, this would mean a weakening of LWE in general, since an adversary could always simulate a server on which there is side channel leakage. In this short note, we show that the claims made in the two aforementioned works with regards to the leakage through the timing side channel are false. We demonstrate that the active attack, a standard attack against IND-CPA secure LWE-based encryption, can be mounted just as efficiently without the "side channel information".
Last updated:  2023-06-26
Glimpse: On-Demand PoW Light Client with Constant-Size Storage for DeFi
Giulia Scaffino, Lukas Aumayr, Zeta Avarikioti, Matteo Maffei
Cross-chain communication is instrumental in unleashing the full potential of blockchain technologies, as it allows users and developers to exploit the unique design features and the profit opportunities of different existing blockchains. The majority of interoperability solutions are provided by centralized exchanges and bridge protocols based on a trusted majority, both introducing undesirable trust assumptions compared to native blockchain assets. Hence, increasing attention has been given to decentralized solutions: Light and super-light clients paved the way for chain relays, which allow verifying on a blockchain the state of another blockchain by respectively verifying and storing a linear and logarithmic amount of data. Unfortunately, relays turn out to be inefficient in terms of computational costs, storage, or compatibility. We introduce Glimpse, an on-demand bridge that leverages a novel on-demand light client construction with only constant on-chain storage, cost, and computational overhead. Glimpse is expressive, enabling a plethora of DeFi and off-chain applications such as lending, pegs, proofs of oracle attestations, and betting hubs. Glimpse also remains compatible with blockchains featuring a limited scripting language such as the Liquid Network (a pegged sidechain of Bitcoin), for which we present a concrete instantiation. We prove Glimpse security in the Universal Composability (UC) framework and further conduct an economic analysis. We evaluate the cost of Glimpse for Bitcoin-like chains: verifying a simple transaction has at most 700 bytes of on-chain overhead, resulting in a one-time fee of $3, only twice as much as a standard Bitcoin transaction.
Last updated:  2022-12-12
Red Team vs. Blue Team: A Real-World Hardware Trojan Detection Case Study Across Four Modern CMOS Technology Generations
Endres Puschner, Thorben Moos, Steffen Becker, Christian Kison, Amir Moradi, Christof Paar
Verifying the absence of maliciously inserted Trojans in ICs is a crucial task – especially for security-enabled products. Depending on the concrete threat model, different techniques can be applied for this purpose. Assuming that the original IC layout is benign and free of backdoors, the primary security threats are usually identified as the outsourced manufacturing and transportation. To ensure the absence of Trojans in commissioned chips, one straightforward solution is to compare the received semiconductor devices to the design files that were initially submitted to the foundry. Clearly, conducting such a comparison requires advanced laboratory equipment and qualified experts. Nevertheless, the fundamental techniques to detect Trojans which require evident changes to the silicon layout are nowadays well-understood. Despite this, there is a glaring lack of public case studies describing the process in its entirety while making the underlying datasets publicly available. In this work, we aim to improve upon this state of the art by presenting a public and open hardware Trojan detection case study based on four different digital ICs using a Red Team vs. Blue Team approach. Hereby, the Red Team creates small changes acting as surrogates for inserted Trojans in the layouts of 90 nm, 65 nm, 40 nm, and 28 nm ICs. The quest of the Blue Team is to detect all differences between digital layout and manufactured device by means of a GDSII–vs–SEM-image comparison. Can the Blue Team perform this task efficiently? Our results spark optimism for the Trojan seekers and answer common questions about the efficiency of such techniques for relevant IC sizes. Further, they allow to draw conclusions about the impact of technology scaling on the detection performance.
Last updated:  2023-09-22
Two-Round Concurrent 2PC from Sub-Exponential LWE
Behzad Abdolmaleki, Saikrishna Badrinarayanan, Rex Fernando, Giulio Malavolta, Ahmadreza Rahimi, and Amit Sahai
Secure computation is a cornerstone of modern cryptography and a rich body of research is devoted to understanding its round complexity. In this work, we consider two-party computation (2PC) protocols (where both parties receive output) that remain secure in the realistic setting where many instances of the protocol are executed in parallel (concurrent security). We obtain a two-round concurrent-secure 2PC protocol based on a single, standard, post-quantum assumption: the subexponential hardness of the learning-with-errors (LWE) problem. Our protocol is in the plain model, i.e., it has no trusted setup, and it is secure in the super-polynomial simulation framework of Pass (EUROCRYPT 2003). Since two rounds are minimal for (concurrent) 2PC, this work resolves the round complexity of concurrent 2PC from standard assumptions. As immediate applications, our work establishes feasibility results for interesting cryptographic primitives, such as the first two-round password-authenticated key exchange (PAKE) protocol in the plain model and the first two-round concurrent secure computation protocol for quantum circuits (2PQC).
Last updated:  2023-04-18
Identity-based Matchmaking Encryption with Stronger Security and Instantiation on Lattices
Yuejun Wang, Baocang Wang, Qiqi Lai, Yu Zhan
An identity-based matchmaking encryption (IB-ME) scheme, proposed at JOC 2021, supports anonymous but authenticated communication in which both communicating parties can specify the sender or receiver on the fly. Thanks to its efficient implementation and special syntax, IB-ME is easy to use in network applications that require privacy preservation. In the literature, IB-ME schemes are built from variants of the Diffie-Hellman assumption and all fail to remain secure against quantum attackers. Moreover, despite the rigorous security proofs in previous security models, the existing schemes are still possibly vulnerable to some potential yet neglected attacks. Aiming at the above problems, we provide a stronger security definition of authenticity considering new attacks to fit real-world scenarios and then propose a generic construction of IB-ME satisfying the new model. Inspired by the prior IB-ME construction of Chen et al., the proposed scheme is constructed by combining 2-level anonymous hierarchical IBE (HIBE) and identity-based signature (IBS) schemes. In order to upgrade lattice-based IB-ME with better efficiency, we additionally improve a lattice IBS, as an independent technical contribution, to shorten its signature and thus reduce the final IB-ME ciphertext size. By combining the improved IBS and any 2-level adaptively-secure lattice-based HIBE with anonymity, we finally obtain the first lattice-based construction that implements IB-ME directly.
Last updated:  2022-12-12
Scaling Blockchain-Based Tokens with Joint Cryptographic Accumulators
Trevor Miller
As digital tokens on blockchains, such as non-fungible tokens (NFTs), increase in popularity and scale, the existing interfaces (ERC-721, ERC-20, and many more) are being exposed as expensive and not scalable. As a result, tokens are being forced onto alternative blockchains where transactions are cheaper but less secure. To offer a solution without making security tradeoffs, we propose using joint cryptographic accumulators (e.g., joint Merkle trees). We propose a method of creating such joint accumulators in a decentralized fashion, secured by the same set of validating nodes as existing blockchains. Such accumulators allow the tokens for certain applications to be implemented using up to 99.99% less of the blockchain's resources by outsourcing most of the storage and computational requirements to the users creating the tokens. This is done without sacrificing the permanence and verifiability of these tokens. This system achieves its optimizations mainly by allowing certain storage of a blockchain to be used in a cross-application manner, instead of a per-application manner. Additionally, we show how it can be beneficial in other areas like privacy-preserving timestamps or shortening file hashes.
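To make the outsourcing idea concrete, here is a generic Merkle-tree membership check in Python; it illustrates how a user can hold a token's leaf and authentication path while the chain stores only the root. It is a plain textbook sketch, not the paper's specific joint-accumulator construction.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a Merkle tree over the given leaf hashes (odd levels pad by duplication)."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_path(leaf, path, root):
    """Membership proof check: `path` lists (sibling_hash, sibling_is_right) pairs
    from the leaf up to the root."""
    node = leaf
    for sibling, sibling_is_right in path:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

# toy usage with four token hashes: the chain keeps `root`, the user keeps `path`
leaves = [h(f"token-{i}".encode()) for i in range(4)]
root = merkle_root(leaves)
path = [(leaves[3], True), (h(leaves[0] + leaves[1]), False)]  # proof for leaves[2]
assert verify_path(leaves[2], path, root)
```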
Last updated:  2022-12-12
Area-time Efficient Implementation of NIST Lightweight Hash Functions Targeting IoT Applications
Safiullah Khan, Wai-Kong Lee, Angshuman Karmakar, Jose Maria Bermudo Mera, Abdul Majeed, Seong Oun Hwang
To mitigate cybersecurity breaches, secure communication is crucial for the Internet of Things (IoT) environment. Data integrity is one of the most significant characteristics of security, which can be achieved by employing cryptographic hash functions. In view of the demand from IoT applications, the National Institute of Standards and Technology (NIST) initiated a standardization process for lightweight hash functions. This work presents field-programmable gate array (FPGA) implementations and carefully worked out optimizations of four Round-3 finalists in the NIST standardization process. A novel compact PHOTON-Beetle implementation is proposed wherein the underlying matrix multiplication is executed in serialized fashion to achieve a small hardware footprint. SPARKLE implementations are carried out by implementing the ARX-box in serialized, parallelized, and hybrid approaches. For Ascon and XOODYAK, the proposed implementations compute certain permutation rounds in one clock cycle in order to explore the trade-off between computation time and hardware area. As a result, this work achieves the smallest hardware footprint for PHOTON-Beetle consuming an area 3.4× smaller than state-of-the-art implementations. Ascon and XOODYAK are implemented in a flexible manner that achieves throughput-to-area (TP/A) ratios 1.8× and 3.9× higher, respectively, compared to implementations found in the literature. In addition, we propose the first FPGA implementations for the SPARKLE hash function. These efficient implementations provide guidelines for choosing a suitable architecture for applications in demand that can be employed in the IoT environment to achieve data integrity for various applications.
Last updated:  2023-01-31
An Algebraic Attack Against McEliece-like Cryptosystems Based on BCH Codes
Freja Elbro, Christian Majenz
We present an algebraic attack on a McEliece-like scheme based on BCH codes (BCH-McEliece), where the Goppa code is replaced by a suitably permuted BCH code. Our attack continues the line of work devising attacks against McEliece-like schemes with Goppa-like codes, with the goal of getting a better understanding of why Goppa codes are so intractable. Our starting point is the work of Faugère, Perret and Portzamparc (Asiacrypt 2014). We take their algebraic model and adapt and improve their attack algorithm so that it can handle BCH-McEliece. We implement the attack and exhibit a parameter range where our attack is practical while generic attacks suggest cryptographic security.
Last updated:  2023-02-21
Meet-in-the-Middle Preimage Attacks on Sponge-based Hashing
Lingyue Qin, Jialiang Hua, Xiaoyang Dong, Hailun Yan, Xiaoyun Wang
The Meet-in-the-Middle (MitM) attack has been widely applied to preimage attacks on Merkle-Damgård (MD) hashing. In this paper, we introduce a generic framework of the MitM attack on sponge-based hashing. We find that certain bit conditions can significantly reduce the diffusion of the unknown bits and lead to longer MitM characteristics. To find good or optimal configurations of MitM attacks, e.g., the bit conditions, the neutral sets, and the matching points, we introduce bit-level MILP-based automatic tools for Keccak, Ascon and Xoodyak. To reduce the scale of the bit-level models and make them solvable in reasonable time, a series of properties of the targeted hash functions are considered in the modelling, such as the linear structure and CP-kernel for Keccak, and the Boolean expression of the S-box for Ascon. Finally, we give an improved 4-round preimage attack on Keccak-512/SHA3, breaking a nearly 10-year-old cryptanalysis record. We also give the first preimage attacks on 3-/4-round Ascon-XOF and 3-round Xoodyak-XOF.
Last updated:  2022-12-10
Breaking a Fifth-Order Masked Implementation of CRYSTALS-Kyber by Copy-Paste
Elena Dubrova, Kalle Ngo, Joel Gärtner
CRYSTALS-Kyber has been selected by NIST as a public-key encryption and key encapsulation mechanism to be standardized. It is also included in the NSA's suite of cryptographic algorithms recommended for national security systems. This makes it important to evaluate the resistance of CRYSTALS-Kyber's implementations to side-channel attacks. The unprotected and first-order masked software implementations have already been analysed. In this paper, we present deep learning-based message recovery attacks on the $\omega$-order masked implementations of CRYSTALS-Kyber on an ARM Cortex-M4 CPU for $\omega \leq 5$. The main contribution is a new neural network training method called recursive learning. In the attack on an $\omega$-order masked implementation, we start training from an artificially constructed neural network $M^{\omega}$ whose weights are partly copied from a model $M^{\omega-1}$ trained on the $(\omega-1)$-order masked implementation, and then extended to one more share. Such a method allows us to train neural networks that can recover a message bit with probability above 99% from high-order masked implementations.
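A minimal numpy sketch of the weight-copying step in such a "grow by one share" initialization; the layer shape, per-share trace length, and initialization scale are placeholder assumptions, and this is not the paper's network architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def extend_first_layer(W_prev, features_per_share):
    """Build the first-layer weight matrix of the higher-order model by copying
    the trained lower-order weights and appending freshly initialized columns
    for the trace segment of the extra share (toy illustration only)."""
    hidden, in_prev = W_prev.shape
    W_new = np.empty((hidden, in_prev + features_per_share))
    W_new[:, :in_prev] = W_prev                                      # copied weights
    W_new[:, in_prev:] = rng.normal(scale=0.01,
                                    size=(hidden, features_per_share))  # new share
    return W_new

# toy numbers: 64 hidden units, 100 trace samples per share, model trained on 3 shares
W3 = rng.normal(size=(64, 3 * 100))                    # stands in for a trained model
W4 = extend_first_layer(W3, features_per_share=100)    # initialization for the next order
```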
Last updated:  2022-12-10
KEMTLS vs. Post-Quantum TLS: Performance On Embedded Systems
Ruben Gonzalez, Thom Wiggers
TLS is ubiquitous in modern computer networks. It secures transport for high-end desktops and low-end embedded devices alike. However, the public key cryptosystems currently used within TLS may soon be obsolete as large-scale quantum computers, once realized, would be able to break them. This threat has led to the development of post-quantum cryptography (PQC). The U.S. standardization body NIST is currently in the process of concluding a multi-year search for promising post-quantum signature schemes and key encapsulation mechanisms (KEMs). With the first PQC standards around the corner, TLS will have to be updated soon. However, especially for small microcontrollers, it appears the current NIST post-quantum signature finalists pose a challenge. Dilithium suffers from very large public keys and signatures; while Falcon has significant hardware requirements for efficient implementations. KEMTLS is a proposal for an alternative TLS handshake protocol that avoids authentication through signatures in the TLS handshake. Instead, it authenticates the peers through long-term KEM keys held in the certificates. The KEMs considered for standardization are more efficient in terms of computation and/or bandwidth than the post-quantum signature schemes. In this work, we compare KEMTLS to TLS 1.3 in an embedded setting. To gain meaningful results, we present implementations of KEMTLS and TLS 1.3 on a Cortex-M4-based platform. These implementations are based on the popular WolfSSL embedded TLS library and hence share a majority of their code. In our experiments, we consider both protocols with the remaining NIST finalist signature schemes and KEMs, except for Classic McEliece which has too large public keys. Both protocols are benchmarked and compared in terms of run-time, memory usage, traffic volume and code size. The benchmarks are performed in network settings relevant to the Internet of Things, namely low-latency broadband, LTE-M and Narrowband IoT. Our results show that KEMTLS can reduce handshake time by up to 38%, can lower peak memory consumption and can save traffic volume compared to TLS 1.3.
Last updated:  2023-01-16
Nonce- and Redundancy-encrypting Modes with Farfalle
Seth Hoffert
Nonces are a fact of life for achieving semantic security. Generating a uniformly random nonce can be costly and may not always be feasible. Using anything other than uniformly random bits can result in information leakage; e.g., a timestamp can deanonymize a communication and a counter can leak the quantity of transmitted messages. Ideally, we would like to be able to efficiently encrypt the nonce to 1) avoid needing uniformly random bits and 2) avoid information leakage. This paper presents new modes built on top of Farfalle that achieve nonce and redundancy encryption in the AEAD and onion AE settings.
Last updated:  2023-02-20
Formal Analysis of Session-Handling in Secure Messaging: Lifting Security from Sessions to Conversations
Cas Cremers, Charlie Jacomme, Aurora Naska
The building blocks for secure messaging apps, such as Signal’s X3DH and Double Ratchet (DR) protocols, have received a lot of attention from the research community. They have notably been proved to meet strong security properties even in the case of compromise such as Forward Secrecy (FS) and Post-Compromise Security (PCS). However, there is a lack of formal study of these properties at the application level. Whereas the research works have studied such properties in the context of a single ratcheting chain, a conversation between two persons in a messaging application can in fact be the result of merging multiple ratcheting chains. In this work, we initiate the formal analysis of secure messaging taking the session-handling layer into account, and apply our approach to Sesame, Signal’s session management. We first experimentally show practical scenarios in which PCS can be violated in Signal by a clone attacker, despite its use of the Double Ratchet. We identify how this is enabled by Signal’s session-handling layer. We then design a formal model of the session-handling layer of Signal that is tractable for automated verification with the Tamarin prover, and use this model to rediscover the PCS violation and propose two provably secure mechanisms to offer stronger guarantees.
Last updated:  2023-12-20
Dory: Faster Asynchronous BFT with Reduced Communication for Permissioned Blockchains
Zongyang Zhang, You Zhou, Sisi Duan, Haibin Zhang, Bin Hu, Licheng Wang, and Jianwei Liu
Asynchronous Byzantine fault-tolerance (BFT) protocols (e.g., HoneyBadger and Dumbo family protocols) have received increasing attention as the consensus mechanism of permissioned blockchains, given their particular robustness against timing and performance attacks. However, there is a substantial performance gap before they can be applied in real systems. In this paper, we identify and address two critical issues, and design Dory, an asynchronous BFT consensus protocol with improved efficiency and lower communication compared to the state-of-the-art protocol, sDumbo. At the core of our approach are two new building blocks reducing the communication cost and a novel framework utilizing transactions with quadratic message complexity. We have implemented Dory and sDumbo in a new Golang library. Via a deployment using up to 151 participants on Amazon EC2, we show that Dory consistently outperforms sDumbo during both failure and failure-free scenarios. For instance, Dory has up to 5x the throughput of sDumbo in the failure-free scenario.
Last updated:  2022-12-09
Expert Mental Models of SSI Systems and Implications for End-User Understanding
Alexandra Mai
Self-sovereign identity (SSI) systems have gained increasing attention over the last five years. In a variety of fields (e.g., education, IT security, law, government), developers and researchers are attempting to give end-users back their right to and control of their data. Although prototypes and theoretical concepts for SSI applications exist, the majority of them are still in their infancy. Due to missing definitions and standards, there is currently a lack of common understanding of SSI systems within the (IT) community. To investigate current commonalities and differences in SSI understanding, I contribute the first qualitative user study (N=13) on expert mental models of SSI and its associated threat landscape. The study results highlight the need for a general definition of SSI and further standards for such systems, as experts' perceptions of SSI requirements vary widely. Based on the expert interviews, I constructed a minimal knowledge map for (potential) SSI end-users and formulated design guidelines for SSI to facilitate broad adoption in the wild and improve privacy-preserving usage.
Last updated:  2023-11-02
Private Access Control for Function Secret Sharing
Sacha Servan-Schreiber, Simon Beyzerov, Eli Yablon, and Hyojae Park
Function Secret Sharing (FSS; Eurocrypt 2015) allows a dealer to share a function f with two or more evaluators. Given secret shares of a function f, the evaluators can locally compute secret shares of f(x) on an input x, without learning information about f. In this paper, we initiate the study of access control for FSS. Given the shares of f, the evaluators can ensure that the dealer is authorized to share the provided function. For a function family F and an access control list defined over the family, the evaluators receiving the shares of f ∈ F can efficiently check that the dealer knows the access key for f. This model enables new applications of FSS, such as: – anonymous authentication in a multi-party setting, – access control in private databases, and – authentication and spam prevention in anonymous communication systems. Our definitions and constructions abstract and improve the concrete efficiency of several recent systems that implement ad-hoc mechanisms for access control over FSS. The main building block behind our efficiency improvement is a discrete-logarithm zero-knowledge proof-of-knowledge over secret-shared elements, which may be of independent interest. We evaluate our constructions and show a 50–70× reduction in computational overhead compared to existing access control techniques used in anonymous communication. In other applications, such as private databases, the processing cost of introducing access control is only 1.5–3× when amortized over databases with 500,000 or more items.
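The paper's proof-of-knowledge building block operates over secret-shared elements; for orientation only, the sketch below shows the plain (non-shared) Fiat-Shamir Schnorr proof of knowledge of a discrete logarithm that such protocols generalize, over a deliberately tiny toy group. All parameters are illustrative and not taken from the paper.

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with small primes (illustration only;
# real deployments use large, standardized parameters).
q = 1019
p = 2 * q + 1          # 2039, prime
g = 4                  # generator of the order-q subgroup of quadratic residues

def hash_to_zq(*ints):
    data = b"|".join(str(i).encode() for i in ints)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    """Non-interactive proof of knowledge of x such that h = g^x mod p."""
    h = pow(g, x, p)
    r = secrets.randbelow(q)
    a = pow(g, r, p)                 # commitment
    c = hash_to_zq(g, h, a)          # Fiat-Shamir challenge
    z = (r + c * x) % q              # response
    return h, (a, z)

def verify(h, proof):
    a, z = proof
    c = hash_to_zq(g, h, a)
    return pow(g, z, p) == (a * pow(h, c, p)) % p

h, pi = prove(secrets.randbelow(q))
assert verify(h, pi)
```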
Last updated:  2022-12-09
Optimized Implementation of Encapsulation and Decapsulation of Classic McEliece on ARMv8
Minjoo Sim, Siwoo Eum, Hyeokdong Kwon, Hyunjun Kim, Hwajeong Seo
Recently, the results of the NIST PQC contest were announced. Classic McEliece, one of the 3rd round candidates, was selected as a fourth-round candidate. Classic McEliece is the only code-based cipher among the NIST PQC third-round finalists, and the algorithm is regarded as secure. However, it has low efficiency. In this paper, we propose an efficient software implementation of Classic McEliece, a code-based cipher, on 64-bit ARMv8 processors. Classic McEliece can be divided into Key Generation, Encapsulation, and Decapsulation. Among them, we propose optimized implementations of Encapsulation and Decapsulation. The optimized Encapsulation implementation utilizes vector registers to perform 16-byte parallel operations and exploits the structure of the identity matrix. The Decapsulation implementation performs efficient multiplication and inversion over the field $\mathbb{F}_{2^m}$. Compared with previous results, our Encapsulation shows a performance improvement of up to 1.99× over state-of-the-art works.
Last updated:  2022-12-09
Careful with MAc-then-SIGn: A Computational Analysis of the EDHOC Lightweight Authenticated Key Exchange Protocol
Felix Günther, Marc Ilunga Tshibumbu Mukendi
EDHOC is a lightweight authenticated key exchange protocol for IoT communication, currently being standardized by the IETF. Its design is a trimmed-down version of similar protocols like TLS 1.3, building on the SIGn-then-MAc (SIGMA) rationale. In its trimming, however, EDHOC notably deviates from the SIGMA design by sending only short, non-unique credential identifiers, and letting recipients perform trial verification to determine the correct communication partner. Done naively, this can lead to identity misbinding attacks when an attacker can control some of the user keys, invalidating the original SIGMA security analysis and contesting the security of EDHOC. In this work, we formalize a multi-stage key exchange security model capturing the potential attack vectors introduced by non-unique credential identifiers. We show that EDHOC, in its draft version 17, indeed achieves session key security and user authentication even in a strong model where the adversary can register malicious keys with colliding identifiers, given that the employed signature scheme provides so-called exclusive ownership. Through our security result, we confirm cryptographic improvements integrated by the IETF working group in recent draft versions of EDHOC based on recommendations from our and others' analysis.
Last updated:  2023-02-02
Some applications of higher dimensional isogenies to elliptic curves (overview of results)
Damien Robert
We give some applications of the "embedding Lemma". The first one is a polynomial-time (in $\log q$) algorithm to compute the endomorphism ring $\mathrm{End}(E)$ of an ordinary elliptic curve $E/\mathbb{F}_q$, provided we are given the factorisation of $\Delta_\pi$. In particular, this computation can be done in quantum polynomial time. The second application is an algorithm to compute the canonical lift of $E/\mathbb{F}_q$, $q=p^n$, (still assuming that $E$ is ordinary) to precision $m$ in time $\tilde{O}(n m \log^{O(1)} p)$. We deduce a point counting algorithm of complexity $\tilde{O}(n^2 \log^{O(1)} p)$. In particular, the complexity is polynomial in $\log p$, in contrast to what is usually expected of a $p$-adic cohomology computation. The third application is a quasi-linear CRT algorithm to compute Siegel modular polynomials of elliptic curves, which does not rely on any heuristic or conditional result (like GRH). We also outline how to generalize these algorithms to (ordinary) abelian varieties.
Last updated:  2022-12-08
Doubly Efficient Private Information Retrieval and Fully Homomorphic RAM Computation from Ring LWE
Wei-Kai Lin, Ethan Mook, Daniel Wichs
A (single server) private information retrieval (PIR) allows a client to read data from a public database held on a remote server, without revealing to the server which locations she is reading. In a doubly efficient PIR (DEPIR), the database is first preprocessed, but the server can subsequently answer any client's query in time that is sub-linear in the database size. Prior work gave a plausible candidate for a public-key variant of DEPIR, where a trusted party is needed to securely preprocess the database and generate a corresponding public key for the clients; security relied on a new non-standard code-based assumption and a heuristic use of ideal obfuscation. In this work we construct the stronger unkeyed notion of DEPIR, where the preprocessing is a deterministic procedure that the server can execute on its own. Moreover, we prove security under just the standard ring learning-with-errors (RingLWE) assumption. For a database of size $N$ and any constant $\varepsilon>0$, the preprocessing run-time and size is $O(N^{1+\varepsilon})$, while the run-time and communication-complexity of each PIR query is $polylog(N)$. We also show how to update the preprocessed database in time $O(N^\varepsilon)$. Our approach is to first construct a standard PIR where the server's computation consists of evaluating a multivariate polynomial; we then convert it to a DEPIR by preprocessing the polynomial to allow for fast evaluation, using the techniques of Kedlaya and Umans (STOC '08). Building on top of our DEPIR, we construct general fully homomorphic encryption for random-access machines (RAM-FHE), which allows a server to homomorphically evaluate an arbitrary RAM program $P$ over a client's encrypted input $x$ and the server's preprocessed plaintext input $y$ to derive an encryption of the output $P(x,y)$ in time that scales with the RAM run-time of the computation rather than its circuit size. Prior work only gave a heuristic candidate construction of a restricted notion of RAM-FHE. In this work, we construct RAM-FHE under the RingLWE assumption with circular security. For a RAM program $P$ with worst-case run-time $T$, the homomorphic evaluation runs in time $T^{1+\varepsilon} \cdot polylog(|x| + |y|)$.
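To make the "server work is a polynomial evaluation" idea concrete, the toy sketch below (ours, with illustrative parameters) encodes a small database as a multivariate polynomial over a prime field via tensor-product Lagrange interpolation and recovers an entry by evaluating at the digit decomposition of its index. The actual DEPIR additionally preprocesses this polynomial for fast evaluation (Kedlaya-Umans) and evaluates it under RingLWE-based homomorphic encryption, neither of which appears here.

```python
# Toy "database as a multivariate polynomial": index i is written in base d
# as (i_1, ..., i_m), and P is the unique polynomial of individual degree < d
# with P(i_1, ..., i_m) = DB[i].  The server's work in the underlying PIR is
# exactly evaluating P at a point; DEPIR makes that evaluation fast after
# preprocessing and hides the point with homomorphic encryption (not shown).
from itertools import product

P_MOD = 257          # prime field size (toy)
D, M = 4, 3          # base and number of variables -> N = D**M entries
DB = [(i * i + 1) % P_MOD for i in range(D**M)]   # toy database

def lagrange_basis(k, x):
    """L_k(x) over interpolation points {0, ..., D-1}, reduced mod P_MOD."""
    num, den = 1, 1
    for j in range(D):
        if j != k:
            num = num * (x - j) % P_MOD
            den = den * (k - j) % P_MOD
    return num * pow(den, -1, P_MOD) % P_MOD

def digits(i):
    return [(i // D**t) % D for t in range(M)]

def evaluate_poly(point):
    """Evaluate P at `point` as a sum over all grid points (slow but clear)."""
    acc = 0
    for grid in product(range(D), repeat=M):
        idx = sum(g * D**t for t, g in enumerate(grid))
        term = DB[idx]
        for g, x in zip(grid, point):
            term = term * lagrange_basis(g, x) % P_MOD
        acc = (acc + term) % P_MOD
    return acc

for i in (0, 17, 63):
    assert evaluate_poly(digits(i)) == DB[i]
print("polynomial encoding agrees with the database")
```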
Last updated:  2022-12-08
SCB Mode: Semantically Secure Length-Preserving Encryption
Fabio Banfi
To achieve semantic security, symmetric encryption schemes classically require ciphertext expansion. In this paper we provide a means to achieve semantic security while preserving the length of messages at the cost of mildly sacrificing correctness. Concretely, we propose a new scheme that can be interpreted as a secure alternative to (or wrapper around) plain Electronic Codebook (ECB) mode of encryption, and for this reason we name it Secure Codebook (SCB). Our scheme is the first length-preserving encryption scheme to effectively achieve semantic security.
Last updated:  2023-02-11
On Zero-Knowledge Proofs over the Quantum Internet
Mark Carney
This paper presents a new method for quantum identity authentication (QIA) protocols. The logic of classical zero-knowledge proofs (ZKPs) due to Schnorr is applied in quantum circuits and algorithms. This novel approach gives an exact way in which a prover $P$ can prove knowledge of a secret by encapsulating it in a quantum state before sending it to a verifier $V$ over a quantum channel, allowing for a ZKP in which eavesdropping or manipulation can be detected with a fail-safe design. This is achieved by moving away from the hardness of the Discrete Logarithm Problem towards the hardness of estimating quantum states. The paper presents a method by which this can be achieved and provides some bounds on the security of the protocol. With the anticipated advent of a `quantum internet', such protocols and ideas may soon find utility and execution in the real world.
Last updated:  2023-07-07
Comparative Study of HDL algorithms for Intrusion Detection System in Internet of Vehicles
Manoj Srinivas Botla, Jai Bala Srujan Melam, Raja Stuthi Paul Pedapati, Srijanee Mookherji, Vanga Odelu, Rajendra Prasath
The Internet of Vehicles (IoV) has brought a technological revolution to intelligent transport systems and smart cities. With the rise of self-driving cars and AI-managed traffic systems, threats to such systems have increased significantly, and there is an immediate need to mitigate such attacks and ensure security, trust, and privacy. Any malfunction or misbehaviour in an IoV-based system can lead to fatal accidents, because such systems are safety-critical and involve human lives both on and off the road. Any compromise can affect user safety and cause service delays. For IoV users, an Intrusion Detection System (IDS) is crucial to protect them from malware-based attacks and to ensure the security of users and infrastructure. Machine learning approaches are used to extract useful features from network traffic and to predict patterns of anomalous activity. We use two datasets, the Balanced DDoS dataset and the Car-Hacking dataset, for a comparative study of intrusion detection using various machine learning approaches. The study highlights the differences between various machine learning and deep learning approaches on the two datasets.
Last updated:  2022-12-07
SoK: Use of Cryptography in Malware Obfuscation
Hassan Asghar, Benjamin Zi Hao Zhao, Muhammad Ikram, Giang Nguyen, Dali Kaafar, Sean Lamont, Daniel Coscia
We look at the use of cryptography to obfuscate malware. Most surveys on malware obfuscation only discuss simple encryption techniques (e.g., XOR encryption), which are easy to defeat (in principle), since the decryption algorithm and the key are shipped within the program. This SoK proposes a principled definition of malware obfuscation, and categorises instances of malware obfuscation that use cryptographic tools into those which evade detection and those which are detectable. The SoK first examines easily detectable schemes such as string encryption, class encryption and XOR encoding, found in most obfuscated malware. It then details schemes that can be shown to be hard to break, such as the use of environmental keying. We also analyse formal cryptographic obfuscation, i.e., the notions of indistinguishability and virtual black box obfuscation, through the lens of our proposed model on malware obfuscation.
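To see why the "easily detectable" schemes in this classification offer little protection, here is a minimal sketch (ours) of single-byte XOR string encoding of the kind found in commodity malware; since the key ships alongside the ciphertext, an analyst can read it out or simply brute-force all 256 possibilities.

```python
# Single-byte XOR "encryption" as commonly found in obfuscated malware:
# the key is embedded next to the ciphertext, so decoding it (or brute-forcing
# all 256 keys) is trivial for an analyst, which is the point made in the SoK.
KEY = 0x5A                                                # shipped inside the binary
PAYLOAD = bytes(b ^ KEY for b in b"cmd.exe /c whoami")    # "encrypted" string

def decode(blob, key):
    return bytes(b ^ key for b in blob)

# The malware decodes the string at runtime ...
print(decode(PAYLOAD, KEY))
# ... and so can the analyst, by trying every possible single-byte key.
candidates = [decode(PAYLOAD, k) for k in range(256)]
print(any(c == b"cmd.exe /c whoami" for c in candidates))
```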
Last updated:  2022-12-07
Digital Signature from Syndrome Decoding Problem
Abdelhaliem Babiker
This paper introduces a new digital signature scheme whose security against existential forgery under adaptive chosen-message attack is based on the hardness of the Syndrome Decoding Problem. The hardness assumption is quite simple and hence easy to analyze and investigate. The scheme as a whole is neat, with an intuitive security definition and proof in addition to elegant and efficient signing and verifying algorithms. We propose parameter sets for three security levels (128 bits, 192 bits, and 256 bits) and estimate the corresponding key and signature sizes for each level. Additionally, the scheme has the interesting feature of allowing signature verification with an arbitrary part of the public key, so the verifying party can store a small random secret part of the public key rather than the full-size public key. Using a small part of the public key for verification gives a more time- and memory-efficient verification mode, which we call Light Verification Key (LVK) mode. We also suggest a Light Signing Key (LSK) mode, which enables a smaller private (signing) key while maintaining the same security level.
Last updated:  2023-05-18
RISC-V Instruction Set Extensions for Lightweight Symmetric Cryptography
Hao Cheng, Johann Großschädl, Ben Marshall, Dan Page, Thinh Pham
The NIST LightWeight Cryptography (LWC) selection process aims to standardise cryptographic functionality which is suitable for resource-constrained devices. Since the outcome is likely to have significant, long-lived impact, careful evaluation of each submission with respect to metrics explicitly outlined in the call is imperative. Beyond the robustness of submissions against cryptanalytic attack, metrics related to their implementation (e.g., execution latency and memory footprint) form an important example. Aiming to provide evidence allowing richer evaluation with respect to such metrics, this paper presents the design, implementation, and evaluation of one separate Instruction Set Extension (ISE) for each of the 10 LWC final round submissions, namely Ascon, Elephant, GIFT-COFB, Grain-128AEADv2, ISAP, PHOTON-Beetle, Romulus, Sparkle, TinyJAMBU, and Xoodyak; although we base the work on use of RISC-V, we argue that it provides more general insight.
Last updated:  2023-02-13
Post-Quantum Anonymity of Kyber
Varun Maram, Keita Xagawa
Kyber is a key-encapsulation mechanism (KEM) that was recently selected by NIST in its PQC standardization process; it is also the only scheme to be selected in the context of public-key encryption (PKE) and key establishment. The main security target for KEMs, and their associated PKE schemes, in the NIST PQC context has been IND-CCA security. However, some important modern applications also require their underlying KEMs/PKE schemes to provide anonymity (Bellare et al., ASIACRYPT 2001). Examples of such applications include anonymous credential systems, cryptocurrencies, broadcast encryption schemes, authenticated key exchange, and auction protocols. It is hence important to analyze the compatibility of NIST's new PQC standard in such "beyond IND-CCA" applications. Some starting steps were taken by Grubbs et al. (EUROCRYPT 2022) and Xagawa (EUROCRYPT 2022) wherein they studied the anonymity properties of most NIST PQC third round candidate KEMs. Unfortunately, they were unable to show the anonymity of Kyber because of certain technical barriers. In this paper, we overcome said barriers and resolve the open problems posed by Grubbs et al. (EUROCRYPT 2022) and Xagawa (EUROCRYPT 2022) by establishing the anonymity of Kyber, and the (hybrid) PKE schemes derived from it, in a post-quantum setting. Along the way, we also provide an approach to obtain tight IND-CCA security proofs for Kyber with concrete bounds; this resolves another issue identified by the aforementioned works related to the post-quantum IND-CCA security claims of Kyber from a provable security point-of-view. Our results also extend to Saber, a NIST PQC third round finalist, in a similar fashion.
Last updated:  2022-12-07
ELSA: Secure Aggregation for Federated Learning with Malicious Actors
Mayank Rathee, Conghao Shen, Sameer Wagh, Raluca Ada Popa
Federated learning (FL) is an increasingly popular approach for machine learning (ML) in cases where the training dataset is highly distributed. Clients perform local training on their datasets and the updates are then aggregated into the global model. Existing protocols for aggregation are either inefficient or do not consider the case of malicious actors in the system. This is a major barrier in making FL an ideal solution for privacy-sensitive ML applications. We present ELSA, a secure aggregation protocol for FL, which breaks this barrier: it is efficient and addresses the existence of malicious actors at the core of its design. Similar to prior work on Prio and Prio+, ELSA provides a secure aggregation protocol built out of distributed trust across two servers that keeps individual client updates private as long as one server is honest, defends against malicious clients, and is efficient end-to-end. Compared to prior works, the distinguishing theme in ELSA is that instead of the servers generating cryptographic correlations interactively, the clients act as untrusted dealers of these correlations without compromising the protocol's security. This leads to a much faster protocol while also achieving stronger security at that efficiency compared to prior work. We introduce new techniques that retain privacy even when a server is malicious at a small added cost of 7-25% in runtime, with a negligible increase in communication over the semi-honest-server case. Our work improves end-to-end runtime over prior work with similar security guarantees by large margins: over the single-aggregator RoFL by up to 305x (for the models we consider), and over the distributed-trust Prio by up to 8x.
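A minimal sketch of the two-server, distributed-trust aggregation pattern that ELSA shares with Prio-style systems (our toy code, not ELSA's protocol, which additionally defends against malicious clients and a malicious server using client-dealt correlations): each client additively splits its update between the servers, each server sums what it receives, and only the aggregate is reconstructed.

```python
# Toy two-server additive aggregation: each client splits its model update
# into two random shares (one per server); each server only ever sees random-
# looking shares, yet the sum of the two servers' totals equals the sum of
# all client updates.  ELSA builds on this pattern and adds defenses against
# malicious clients and a malicious server, which this sketch omits.
import secrets

MOD = 2**32                      # updates are fixed-point values in Z_{2^32}
DIM = 4                          # toy model dimension

def share(update):
    s0 = [secrets.randbelow(MOD) for _ in update]
    s1 = [(u - a) % MOD for u, a in zip(update, s0)]
    return s0, s1

client_updates = [[1, 2, 3, 4], [10, 20, 30, 40], [5, 5, 5, 5]]
server0_total = [0] * DIM
server1_total = [0] * DIM
for upd in client_updates:
    s0, s1 = share(upd)
    server0_total = [(a + b) % MOD for a, b in zip(server0_total, s0)]
    server1_total = [(a + b) % MOD for a, b in zip(server1_total, s1)]

aggregate = [(a + b) % MOD for a, b in zip(server0_total, server1_total)]
assert aggregate == [16, 27, 38, 49]     # sum of the three client updates
print(aggregate)
```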
Last updated:  2022-12-07
Security Analysis of a Color Image Encryption Scheme Based on Dynamic Substitution and Diffusion Operations
George Teseleanu
In 2019, Essaid et al. proposed an encryption scheme for color images based on chaotic maps. Their solution uses two enhanced chaotic maps to dynamically generate the secret substitution boxes and the key bytes used by the cryptosystem. Note that both types of parameters are dependent on the size of the original image. The authors claim that their proposal provides enough security for transmitting color images over unsecured channels. Unfortunately, this is not the case. In this paper, we introduce two cryptanalytic attacks on Essaid et al.'s encryption scheme. The first one is a chosen plaintext attack, which, for a given size, requires $256$ chosen plaintexts to allow an attacker to decrypt any image of this size. The second attack is a chosen ciphertext attack, which, compared to the first one, requires $512$ chosen ciphertexts to break the scheme for a given size. These attacks are possible because the generated substitution boxes and key bits remain unchanged for different plaintext images.
Last updated:  2022-12-07
More Efficient Adaptively Secure Lattice-based IBE with Equality Test in the Standard Model
Kyoichi Asano, Keita Emura, Atsushi Takayasu
Identity-based encryption with equality test (IBEET) is a variant of identity-based encryption (IBE) in which any user who holds a trapdoor can check whether two ciphertexts are encryptions of the same plaintext. Although several lattice-based IBEET schemes have been proposed, they have drawbacks in either security or efficiency. Specifically, most schemes satisfy only selective security, while adaptively secure schemes in the standard model suffer from large master public keys that consist of a linear number of matrices. In other words, known lattice-based IBEET schemes perform poorly compared to the state-of-the-art lattice-based IBE schemes (without equality test). In this paper, we propose a semi-generic construction of CCA-secure lattice-based IBEET from a certain class of lattice-based IBE schemes. As a result, we obtain the first lattice-based IBEET schemes with adaptive security and CCA security in the standard model. Furthermore, our semi-generic construction can use several state-of-the-art lattice-based IBE schemes as underlying schemes. This yields adaptively secure lattice-based IBEET schemes whose master public keys consist of only a poly-logarithmic number of matrices.
Last updated:  2022-12-06
Secret Key Recovery Attacks on Masked and Shuffled Implementations of CRYSTALS-Kyber and Saber
Linus Backlund, Kalle Ngo, Joel Gärtner, Elena Dubrova
Shuffling is a well-known countermeasure against side-channel analysis. It typically uses the Fisher-Yates (FY) algorithm to generate a random permutation, which is then used as the loop iterator to index the processing of the variables inside the loop. The processing order is scrambled as a result, making side-channel analysis more difficult. Recently, a side-channel attack on a masked and shuffled implementation of Saber requiring 61,680 power traces to extract the secret key was reported. In this paper, we present an attack that can recover the secret key of Saber from 4,608 traces. The key idea behind the 13-fold improvement is to recover FY indexes directly, rather than by extracting the message Hamming weight and bit flipping, as in the previous attack. We capture a power trace during the execution of the decapsulation algorithm for a given ciphertext, recover FY indexes 0 and 255, and extract the corresponding two message bits. Then, we modify the ciphertext to cyclically rotate the message, capture a power trace, and extract the next two message bits with FY indexes 0 and 255. In this way, all message bits can be extracted. By recovering messages contained in k · l chosen ciphertexts constructed using a new method based on error-correcting codes of length l, where k is the security level, we recover the long-term secret key. To demonstrate the generality of the presented approach, we also recover the secret key from a masked and shuffled implementation of CRYSTALS-Kyber, which NIST recently selected as a new public-key encryption and key-establishment algorithm to be standardized.
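For context on the countermeasure under attack, the sketch below (ours, with a toy stand-in for the per-bit work) shows a Fisher-Yates permutation being used as the loop iterator so that per-bit operations run in a random order on every execution; the attack described above recovers the permutation indexes 0 and 255 from power traces rather than attacking the message bits directly.

```python
# Shuffled processing as a side-channel countermeasure: a Fisher-Yates
# permutation of {0, ..., 255} is generated and then used as the loop
# iterator, so the secret-dependent operations run in a random order on every
# execution.  The attack in the paper targets the permutation indexes directly.
import secrets

N = 256                                   # e.g., number of message bits

def fisher_yates(n):
    perm = list(range(n))
    for i in range(n - 1, 0, -1):
        j = secrets.randbelow(i + 1)      # uniform index in [0, i]
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def process_shuffled(message_bits):
    order = fisher_yates(N)
    out = [None] * N
    for idx in order:                     # iterate in permuted order
        out[idx] = message_bits[idx] ^ 1  # stand-in for the real per-bit work
    return out

bits = [secrets.randbelow(2) for _ in range(N)]
assert process_shuffled(bits) == [b ^ 1 for b in bits]
print("shuffled processing gives the same result as in-order processing")
```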
Last updated:  2024-01-30
TokenWeaver: Privacy Preserving and Post-Compromise Secure Attestation
Cas Cremers, Gal Horowitz, Charlie Jacomme, and Eyal Ronen
Modern attestation based on Trusted Execution Environments (TEEs) can significantly reduce the risk of secret compromise, allowing users to securely perform sensitive computations such as running cryptographic protocols for authentication across security-critical services. However, this has also made TEEs a high-value attack target, driving an arms race between novel compromise attacks and continuous TEE updates. Ideally, we want to achieve Post-Compromise Security (PCS): even after a TEE compromise, we can update it back into a secure state. However, at the same time, we would like to guarantee the privacy of users, in particular preventing providers (such as Intel, Google, or Samsung) or services from tracking users across services. This requires unlinkability, which seems incompatible with standard PCS healing mechanisms. In this work, we develop TokenWeaver, the first privacy-preserving post-compromise secure attestation method with automated formal proofs for its core properties. We base our construction on weaving together two types of token chains, one of which is linkable and the other unlinkable. We provide the full formal models based on the Tamarin and DeepSec provers, including protocol, security properties, and proofs for reproducibility, as well as a proof-of-concept implementation in Python that shows the simplicity and applicability of our solution.
Last updated:  2022-12-06
Private Re-Randomization for Module LWE and Applications to Quasi-Optimal ZK-SNARKs
Ron Steinfeld, Amin Sakzad, Muhammed F. Esgin, Veronika Kuchta
We introduce the first candidate lattice-based Designated Verifier (DV) ZK-SNARK protocol with \emph{quasi-optimal proof length} (quasi-linear in the security/privacy parameter), avoiding the use of the exponential smudging technique. Our ZK-SNARK also achieves significant improvements in proof length in practice, with proof length below $6$ KB at the 128-bit security/privacy level. Our main technical result is a new regularity theorem for `private' re-randomization of Module LWE (MLWE) samples using discrete Gaussian randomization vectors, also known as a lattice-based leftover hash lemma with leakage, which applies with a discrete Gaussian re-randomization parameter that is polynomial in the statistical privacy parameter. To obtain this result, we obtain bounds on the smoothing parameter of an intersection of a random $q$-ary SIS module lattice, Gadget SIS module lattice, and Gaussian orthogonal module lattice over standard power-of-2 cyclotomic rings, and a bound on the minimum of module gadget lattices. We then introduce a new candidate \emph{linear-only} homomorphic encryption scheme called Module Half-GSW (HGSW), which is a variant of the GSW somewhat homomorphic encryption scheme over modules, and apply our regularity theorem to provide smudging-free circuit-private homomorphic linear operations for Module HGSW.
Last updated:  2023-04-08
Efficient Zero-Knowledge Arguments for Some Matrix Relations over Ring and Non-malleable Enhancement
Yuan Tian
Various matrix relations appear widely in data-intensive computations, and as a result their zero-knowledge proofs/arguments (ZKP/ZKA) are naturally required in large-scale private computing applications. In the first part of this paper, we concretely establish efficient commit-and-prove zero-knowledge arguments for the linear matrix relation $AU = B$ and the bilinear relation $U^T Q V = Y$ over the residue ring $\mathbb{Z}_m$ with logarithmic message complexity. We take a direct, matrix-oriented (rather than the usual vector-oriented) approach to these constructions, building on the elegant commitment scheme over finite rings recently established by Attema et al. [16]. The constructed protocols are public-coin and in the common reference string (CRS) paradigm (the CRS is used only as the public key of the commitment scheme), are suitable for matrices of any size, and significantly outperform protocols constructed in the usual approach: the CRS is smaller (e.g., decreased by a factor of $d$ for the linear relation and by $n^2 d$ for the bilinear relation, where $d$ is the extension degree of the Galois ring and $n$ is the order of the square witness), there are fewer rounds (decreased by a fraction $> \log d/(2\log n)$ for the linear relation and $> 1/2$ for the bilinear relation), and the message complexity is lower (e.g., the number of ring elements decreased by a fraction $> \log d/\log n$ for the linear relation and $> 1/6$ for the bilinear relation) for large square matrices. The on-line computational complexities are almost the same in both approaches. In the second part, on the basis of simulation-sound tag-based trapdoor commitment schemes, we establish a general compiler that transforms any public-coin proof/argument protocol into one that is concurrently non-malleable, with an unchanged number of rounds and moderately increased message and computational complexity. Such enhanced protocols, e.g., the versions compiled from those constructed in the first part of this work, can run in a parallel environment while keeping all their security properties, in particular resisting man-in-the-middle attacks.
Last updated:  2024-02-27
Funshade: Function Secret Sharing for Two-Party Secure Thresholded Distance Evaluation
Alberto Ibarrondo, Hervé Chabanne, and Melek Önen
We propose a novel privacy-preserving, two-party computation of various distance metrics (e.g., Hamming distance, scalar product) followed by a comparison with a fixed threshold, one of the most useful and popular building blocks for many applications, including machine learning and biometric matching. Our solution builds upon recent advances in function secret sharing and makes use of an optimized version of arithmetic secret sharing. Thanks to this combination, our new solution named Funshade is the first to require only one round of communication and two ring elements of communication in the online phase, outperforming all prior state-of-the-art schemes while relying on lightweight cryptographic primitives. Lastly, we implement our solution from scratch in portable C and expose it in Python, demonstrating its high performance by running secure biometric identification against a database of 1 million records in ∼10 seconds with full correctness and 32-bit precision, without parallelization.
Last updated:  2022-12-04
Stronger Security and Generic Constructions for Adaptor Signatures
Wei Dai, Tatsuaki Okamoto, Go Yamamoto
Adaptor signatures have seen wide use in layer-2 and peer-to-peer blockchain applications such as atomic swaps and payment channels. We first identify two shortcomings of previous literature on adaptor signatures. (1) The current aim of “script-less” adaptor signatures restricts instantiability, limiting designs based on BLS or current NIST PQC candidates. (2) We identify gaps in current formulations of security. In particular, we show that current notions do not rule out a class of insecure schemes. Moreover, a natural property concerning the on-chain unlinkability of adaptor signatures has not been formalized. We then address these shortcomings by providing new and stronger security notions, as well as new generic constructions from any signature scheme and hard relation. On definitions: 1. We develop security notions that strictly imply previous notions. 2. We formalize the notion of unlinkability for adaptor signatures. 3. We give modular proof frameworks that facilitate simpler proofs. On constructions: 1. We give a generic construction of adaptor signature from any signature scheme and any hard relation, showing that theoretically, (linkable) adaptor signatures can be constructed from any one-way function. 2. We also give an unlinkable adaptor signature construction from any signature scheme and any strongly random-self reducible relation, for which we show instantiations using DL, RSA, and LWE.
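For readers new to the primitive, the following sketch (ours) shows the classic Schnorr-style adaptor signature over a deliberately tiny group, illustrating the PreSign/Adapt/Extract interface and the witness-extraction property; it is not the paper's generic construction from arbitrary signature schemes and hard relations, and the group parameters are toy values with no security.

```python
# Schnorr-style adaptor signature over a toy prime-order group (p = 2027,
# subgroup order q = 1013, generator g = 4).  PreSign produces a pre-signature
# tied to a statement Y = g^y; Adapt turns it into a valid Schnorr signature
# using the witness y; Extract recovers y from the two.  Toy parameters only,
# and not the paper's generic construction.
import hashlib, secrets

p, q, g = 2027, 1013, 4

def H(*vals):
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def pre_sign(x, X, Y, msg):
    r = secrets.randbelow(q - 1) + 1
    R = pow(g, r, p)
    c = H(R * Y % p, X, msg)              # challenge binds to R * Y
    return R, (r + c * x) % q

def pre_verify(X, Y, msg, R, s_pre):
    c = H(R * Y % p, X, msg)
    return pow(g, s_pre, p) == R * pow(X, c, p) % p

def adapt(s_pre, y):
    return (s_pre + y) % q

def verify(X, msg, R_full, s):            # ordinary Schnorr verification
    c = H(R_full, X, msg)
    return pow(g, s, p) == R_full * pow(X, c, p) % p

def extract(s, s_pre):
    return (s - s_pre) % q

x, X = keygen()
y = secrets.randbelow(q - 1) + 1
Y = pow(g, y, p)                          # hard-relation instance (Y, y)
msg = "pay 1 coin if the witness for Y is revealed"

R, s_pre = pre_sign(x, X, Y, msg)
assert pre_verify(X, Y, msg, R, s_pre)
s = adapt(s_pre, y)
assert verify(X, msg, R * Y % p, s)       # adapted signature verifies w.r.t. R*Y
assert extract(s, s_pre) == y             # publishing s leaks the witness y
print("adaptor signature round trip succeeded")
```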
Last updated:  2022-12-04
Practical Quantum-Safe Voting from Lattices, Extended
Ian Black, Emma McFall, Juliet Whidden, Bryant Xie, Ryann Cartor
E-voting offers significant potential savings in time and money compared to current voting systems. Unfortunately, many current e-voting schemes are susceptible to quantum attacks. In this paper, we expand upon EVOLVE, an existing lattice-based quantum-secure election scheme introduced by Pino et al. We are able to make these expansions by extending the dimensions of the voter's ballot and creating additional proofs, allowing for applicability to realistic election schemes. Thus, we present our system of schemes, called EVOLVED (Electronic Voting from Lattices with Verification and Extended Dimensions). We present schemes for numerous different types of elections including Single-Choice Voting, Borda Count, and Instant Runoff.
Last updated:  2022-12-04
CoRA: Collaborative Risk-Aware Authentication
Mastooreh Salajegheh, Shashank Agrawal, Maliheh Shirvanian, Mihai Christodorescu, Payman Mohassel
Today, authentication faces the trade-off of security versus usability. Two-factor authentication, for example, is one way to improve security at the cost of requiring user interaction for every round of authentication. Most 2FA methods are bound to the user's phone and fail if the phone is not available. We propose CoRA, a Collaborative Risk-aware Authentication method that takes advantage of any and many devices that the user owns. CoRA increases security while preserving usability and privacy by using threshold MACs and by tapping into the knowledge of the devices instead of requiring user knowledge or interaction. Using CoRA, authentication tokens are generated collaboratively by multiple devices owned by the user, and the token is accompanied by a risk factor that indicates the reliability of the token to the authentication server. CoRA relies on a device-centric trust assessment to determine the relative risk factor and on threshold cryptography to ensure no single point of failure. CoRA does not assume any secure element or physical security for the devices. In this paper, we present the architecture and security analysis of CoRA. In an associated user study we discover that 78% of users have at least three devices with them most of the time, and 93% have at least two, suggesting that deploying CoRA multi-factor authentication is practical today.
Last updated:  2022-12-04
Division in the Plactic Monoid
Chris Monico
In [1], a novel cryptographic key exchange technique was proposed using the plactic monoid, based on the apparent difficulty of solving division problems in that monoid. Specifically, given elements c, b in the plactic monoid, the problem is to find q for which qb = c, given that such a q exists. In this paper, we introduce a metric on the plactic monoid and use it to give a probabilistic algorithm for solving that problem which is fast for parameter values in the range of interest.
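For concreteness, a sketch (ours) of how plactic-monoid elements are commonly represented and multiplied: a word is reduced to its insertion tableau by Schensted row insertion, two words are equal in the monoid exactly when their tableaux coincide, and division asks for a word q whose concatenation with b lands in the class of c. The paper's metric and probabilistic division algorithm are not shown.

```python
# Plactic-monoid arithmetic via Schensted row insertion: a word over a totally
# ordered alphabet is reduced to its insertion tableau, and two words represent
# the same monoid element iff their tableaux are equal.  Division (given b and
# c, find q with q*b = c) asks for a word whose tableau "completes" b to c.
def insert(tableau, x):
    """Row-insert letter x into a semistandard tableau (list of rows)."""
    rows = [row[:] for row in tableau]
    for row in rows:
        bump = next((i for i, v in enumerate(row) if v > x), None)
        if bump is None:                  # x fits at the end of this row
            row.append(x)
            return rows
        row[bump], x = x, row[bump]       # replace and bump the old entry down
    rows.append([x])
    return rows

def tableau_of(word):
    t = []
    for letter in word:
        t = insert(t, letter)
    return t

def reading_word(tableau):
    """Canonical representative of a class: read rows bottom-to-top."""
    return [x for row in reversed(tableau) for x in row]

def mul(u, v):
    """Product in the plactic monoid (as a canonical word)."""
    return reading_word(tableau_of(u + v))

# Knuth relation acb ~ cab (with a < b < c): the words 132 and 312 coincide.
assert tableau_of([1, 3, 2]) == tableau_of([3, 1, 2])

# A small division instance: q = [2, 1] satisfies q * b = c in the monoid.
b, q = [1, 3, 2], [2, 1]
c = mul(q, b)                             # only the class of q*b is published
assert tableau_of(q + b) == tableau_of(c)
print("tableau of c:", tableau_of(c))
```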
Last updated:  2024-01-23
Powers of Tau in Asynchrony
Sourav Das, Zhuolun Xiang, and Ling Ren
The $q$-Strong Diffie-Hellman ($q$-SDH) parameters are foundational to efficient constructions of many cryptographic primitives such as zero-knowledge succinct non-interactive arguments of knowledge, polynomial/vector commitments, verifiable secret sharing, and randomness beacon. The only existing method to generate these parameters securely is highly sequential, requires synchrony assumptions, and has very high communication and computation costs. For example, to generate parameters for any given $q$, each party incurs a communication cost of $\Omega(nq)$ and requires $\Omega(n)$ rounds. Here $n$ is the number of parties in the secure multiparty computation protocol. Since $q$ is typically large, i.e., on the order of billions, the cost is highly prohibitive. In this paper, we present a distributed protocol to generate $q$-SDH parameters in an asynchronous network. In a network of $n$ parties, our protocol tolerates up to one-third of malicious parties. Each party incurs a communication cost of $O(q + n^2\log q)$ and the protocol finishes in $O(\log q + \log n)$ expected rounds. We provide a rigorous security analysis of our protocol. We implement our protocol and evaluate it with up to 128 geographically distributed parties. Our evaluation illustrates that our protocol is highly scalable and results in a 2-6$\times$ better runtime and 4-13$\times$ better per-party bandwidth usage compared to the state-of-the-art synchronous protocol for generating $q$-SDH parameters.
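As background on what is being generated, the toy sketch below (ours, over a small insecure group) writes out the $q$-SDH parameters $(g, g^\tau, \ldots, g^{\tau^q})$ and the standard sequential update in which each party raises the $i$-th element to the $i$-th power of its own secret contribution; the trapdoor stays hidden provided at least one contributor erases its secret. The paper's asynchronous, low-cost protocol for the same task is not reflected in this sketch.

```python
# Toy "powers of tau": q-SDH parameters are (g, g^tau, g^(tau^2), ..., g^(tau^Q)).
# In the classical sequential ceremony each party j picks a secret tau_j and
# raises the i-th element to tau_j^i, so the final trapdoor is the product of
# all contributions and no single party ever learns it.  Tiny insecure group
# (p = 2027, subgroup order 1013, generator 4) and Q = 8 for illustration only.
import secrets

p, order, g = 2027, 1013, 4
Q = 8                                 # real deployments use q in the billions

def contribute(params, tau_j):
    """Party j's update: raise the i-th parameter to tau_j^i."""
    return [pow(el, pow(tau_j, i, order), p) for i, el in enumerate(params)]

params = [g] * (Q + 1)                # corresponds to the trivial tau = 1
taus = [secrets.randbelow(order - 1) + 1 for _ in range(3)]   # 3 parties
for tau_j in taus:
    params = contribute(params, tau_j)

# Sanity check against the (never actually reconstructed) combined trapdoor.
tau = 1
for tau_j in taus:
    tau = tau * tau_j % order
assert params == [pow(g, pow(tau, i, order), p) for i in range(Q + 1)]
print(params)
```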
Last updated:  2024-03-01
Interactive Authentication
Deepak Maram, Mahimna Kelkar, and Ittay Eyal
Authentication is the first, crucial step in securing digital assets like cryptocurrencies and online services like banking. It relies on principals maintaining exclusive access to credentials like cryptographic signing keys, passwords, and physical devices. But both individuals and organizations struggle to manage their credentials, resulting in loss of assets and identity theft. In this work, we study mechanisms with back-and-forth \textit{interaction} with the principals. For example, a user receives an email notification about sending money from her bank account and is given a period of time to abort. We define \emph{the authentication problem}, where a \emph{mechanism} interacts with a user and an attacker. A mechanism's success depends on the scenario---which credentials each principal knows. The \emph{profile} of a mechanism is the set of scenarios in which it succeeds. The subset relation on profiles defines a partial order on mechanisms. We bound the profile size and discover three types of novel mechanisms that are \emph{maximally secure}. We show the efficacy of our model by analyzing existing mechanisms and make concrete improvement proposals: Using ``sticky'' messages for security notifications, prioritizing credentials when accessing one's bank account, and using one of our maximal mechanisms to improve a popular cryptocurrency wallet. We demonstrate the practicality of our mechanisms by implementing the latter.