All papers in 2022 (Page 12 of 1781 results)

Last updated:  2022-05-30
Refuting the Dream XOR Lemma via Ideal Obfuscation and Resettable MPC
Saikrishna Badrinarayanan, Yuval Ishai, Dakshita Khurana, Amit Sahai, Daniel Wichs
We provide counterexamples to the ``dream'' version of Yao's XOR Lemma. In particular, we put forward explicit candidates for hard predicates, such that the advantage of predicting the XOR of many independent copies does not decrease beyond some fixed negligible function, even as the number of copies gets arbitrarily large. We provide two such constructions: 1) Our first construction is in the ideal obfuscation model (alternatively, assuming virtual black-box obfuscation for a concrete class of circuits). It develops a general framework that may be of broader interest, and allows us to embed an instance of a resettably-secure multiparty computation protocol into a one-way function. Along the way, we design the first resettably-secure multiparty computation protocol for general functionalities in the plain model with super-polynomial simulation, under standard assumptions. 2) The second construction relies on public-coin differing-inputs obfuscation (PCdiO) along with a certain form of hash-function security called extended second-preimage resistance (ESPR). It starts with a previously known counterexample to the dream direct-product hardness amplification based on ESPR, and uses PCdiO to upgrade it into a counterexample for the XOR lemma. Prior to our work, even completely heuristic counterexamples of this type were not known.
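For orientation, a rough schematic of the statement being refuted (our informal paraphrase, not the paper's formal definitions): if every efficient predictor for a hard predicate $P$ has advantage at most $\varepsilon$, the ``dream'' XOR lemma would require the advantage of predicting the $t$-fold XOR to decay essentially exponentially in $t$, e.g.
\[
\Big|\Pr\big[A(x_1,\dots,x_t) = P(x_1)\oplus\cdots\oplus P(x_t)\big] - \tfrac{1}{2}\Big| \;\le\; \varepsilon^{\,t} + \mathrm{negl}(\lambda),
\]
whereas the candidates above exhibit an advantage that stalls at some fixed negligible function no matter how large $t$ grows.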
Last updated:  2024-01-24
Practical Delegatable Anonymous Credentials From Equivalence Class Signatures
Omid Mir, Daniel Slamanig, Balthazar Bauer, and René Mayrhofer
Anonymous credential systems (ACs) are a powerful cryptographic tool for privacy-preserving applications and provide strong user privacy guarantees for authentication and access control. ACs allow users to prove possession of attributes encoded in a credential without revealing any information beyond them. A delegatable AC (DAC) system is an enhanced AC system that allows the owners of credentials to delegate the obtained credential to other users. This makes it possible to model hierarchies as usually encountered within public-key infrastructures (PKIs). DACs also provide stronger privacy guarantees than traditional AC systems since the identities of issuers and delegators are also hidden. A credential issuer's identity may convey information about a user's identity even when all other information about the user is protected. We present a novel delegatable anonymous credential scheme that supports attributes, provides anonymity for delegations, allows the delegators to restrict further delegations, and also comes with an efficient construction. In particular, our DAC credentials do not grow with delegations, i.e., are of constant size. Our approach builds on a new primitive that we call structure-preserving signatures on equivalence classes on updatable commitments (SPSEQ-UC). The high-level idea is to use a special signature scheme that can sign vectors of set commitments which can be extended by additional set commitments. Signatures additionally include a user's public key, which can be switched. This allows us to efficiently realize delegation in the DAC. Similar to conventional SPSEQ signatures, the signatures and messages can be publicly randomized and thus allow unlinkable showings in the DAC system. We present further optimizations such as cross-set commitment aggregation that, in combination, enable selective, efficient showings in the DAC without using costly zero-knowledge proofs. We present an efficient instantiation that is proven to be secure in the generic group model and finally demonstrate the practical efficiency of our DAC by presenting performance benchmarks based on an implementation.
Last updated:  2023-06-15
Vandermonde meets Regev: Public Key Encryption Schemes Based on Partial Vandermonde Problems
Katharina Boudgoust, Amin Sakzad, Ron Steinfeld
PASS Encrypt is a lattice-based public key encryption scheme introduced by Hoffstein and Silverman in 2015. The efficiency and algebraic properties of PASS Encrypt and of the underlying partial Vandermonde knapsack problem (PV-Knap) make them an attractive starting point for building efficient post-quantum cryptographic primitives. Recall that PV-Knap asks to recover a polynomial of small norm from a partial list of its Vandermonde transform. Unfortunately, the security foundations of PV-Knap-based encryption are not well understood, and in particular, no security proof for PASS Encrypt is known. In this work, we make progress in this direction. First, we present a modified version of PASS Encrypt with a security proof based on decision PV-Knap and a leaky variant of it, named the PASS problem. We next study an alternative approach to build encryption based on PV-Knap. To this end, we introduce the partial Vandermonde LWE problem (PV-LWE), which we show is computationally equivalent to PV-Knap. Following Regev's design for LWE-based encryption, we use PV-LWE to construct an efficient encryption scheme. Its security is based on PV-LWE and a hybrid variant of PV-Knap and Polynomial LWE. Finally, we give a refined analysis of the concrete security of both schemes against best known lattice attacks.
Last updated:  2022-05-30
New Constructions of Collapsing Hashes
Mark Zhandry
Collapsing is a post-quantum strengthening of collision resistance, needed to lift many classical results to the quantum setting. Unfortunately, the only existing standard-model proofs of collapsing hashes require LWE. We construct the first collapsing hashes from the quantum hardness of any one of the following problems: - LPN in a variety of low noise or high-hardness regimes, essentially matching what is known for collision resistance from LPN. - Finding cycles on exponentially-large expander graphs, such as those arising from isogenies on elliptic curves. - The "optimal" hardness of finding collisions in *any* hash function. - The *polynomial* hardness of finding collisions, assuming a certain plausible regularity condition on the hash. As an immediate corollary, we obtain the first statistically hiding post-quantum commitments and post-quantum succinct arguments (of knowledge) under the same assumptions. Our results are obtained by a general theorem which shows how to construct a collapsing hash $H'$ from a post-quantum collision-resistant hash function $H$, regardless of whether or not $H$ itself is collapsing, assuming $H$ satisfies a certain regularity condition we call "semi-regularity."
Last updated:  2022-06-13
A Conjecture on Hermite Constants
Leon Mächler, David Naccache
As of today, the Hermite constants $\gamma_n$ are only known for $n\in \{1,2,3,4,5,6,7,8,24\}$. We noted that the known values of $(4/\gamma_n)^n$ coincide with the values of the minimal determinants of any $n$-dimensional integral lattice when the length of the smallest lattice element $\mu$ is fixed to 4. Based on this observation, we conjecture that the values of $\gamma_n^n$ for $n=9,\ldots,23$ are those given in Table 2. We provide a supporting argument to back this conjecture. We also provide a provable lower bound on the Hermite constants for $1\leq n\leq24$.
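The observation can be reproduced directly from the classically known values of $\gamma_n^n$; the following minimal sketch (Python, values for $n\in\{1,\dots,8,24\}$ taken from the standard tables) prints $(4/\gamma_n)^n = 4^n/\gamma_n^n$, the quantity that the paper matches against minimal determinants of integral lattices with minimum $4$.

```python
from fractions import Fraction

# Known values of gamma_n^n for n in {1,...,8,24} (standard values from the literature).
gamma_n_pow_n = {
    1: Fraction(1), 2: Fraction(4, 3), 3: Fraction(2), 4: Fraction(4),
    5: Fraction(8), 6: Fraction(64, 3), 7: Fraction(64), 8: Fraction(256),
    24: Fraction(4) ** 24,
}

# The quantity underlying the observation: (4 / gamma_n)^n = 4^n / gamma_n^n.
for n, g in sorted(gamma_n_pow_n.items()):
    print(n, Fraction(4) ** n / g)   # 4, 12, 32, 64, 128, 192, 256, 256, 1
```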
Last updated:  2023-02-23
Finding many Collisions via Reusable Quantum Walks
Xavier Bonnetain, André Chailloux, André Schrottenloher, Yixin Shen
Given a random function $f$ with domain $[2^n]$ and codomain $[2^m]$, with $m \geq n$, a collision of $f$ is a pair of distinct inputs with the same image. Collision finding is a ubiquitous problem in cryptanalysis, and it has been well studied using both classical and quantum algorithms. Indeed, the quantum query complexity of the problem is well known to be $\Theta(2^{m/3})$, and matching algorithms are known for any value of $m$. The situation becomes different when one is looking for multiple collision pairs. Here, for $2^k$ collisions, a query lower bound of $\Theta(2^{(2k+m)/3})$ was shown by Liu and Zhandry (EUROCRYPT 2019). A matching algorithm is known, but only for relatively small values of $m$, when many collisions exist. In this paper, we improve the algorithms for this problem and, in particular, extend the range of admissible parameters where the lower bound is met. Our new method relies on a chained quantum walk algorithm, which might be of independent interest. It allows extracting multiple solutions of an MNRS-style quantum walk without having to recompute it entirely: after finding and outputting a solution, the current state is reused as the initial state of another walk. As an application, we improve the quantum sieving algorithms for the shortest vector problem (SVP), with a complexity of $2^{0.2563d + o(d)}$ instead of the previous $2^{0.2570d + o(d)}$.
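To pin down the problem being solved (not the quantum algorithm itself), here is a toy classical multi-collision finder for a random function, written as a minimal Python sketch; the function name and parameters are illustrative only.

```python
import random
from collections import defaultdict

# Toy illustration of the problem statement: given f: [2^n] -> [2^m] with m >= n,
# collect several collision pairs (distinct x, x' with f(x) = f(x')).
def find_collisions(f, domain_size, num_wanted):
    seen = defaultdict(list)      # image -> list of preimages seen so far
    collisions = []
    for x in range(domain_size):
        y = f(x)
        for x_prev in seen[y]:
            collisions.append((x_prev, x))
            if len(collisions) == num_wanted:
                return collisions
        seen[y].append(x)
    return collisions

n = m = 12
table = [random.randrange(2 ** m) for _ in range(2 ** n)]  # a random function as a table
print(find_collisions(lambda x: table[x], 2 ** n, 4))
```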
Last updated:  2022-06-24
MPClan: Protocol Suite for Privacy-Conscious Computations
Nishat Koti, Shravani Patil, Arpita Patra, Ajith Suresh
The growing volumes of data being collected and its analysis to provide better services are creating worries about digital privacy. To address privacy concerns and give practical solutions, the literature has relied on secure multiparty computation. However, recent research has mostly focused on the small-party honest-majority setting of up to four parties, noting efficiency concerns. In this work, we extend the strategies to support a larger number of participants in an honest-majority setting with efficiency at the center stage. Cast in the preprocessing paradigm, our semi-honest protocol improves the online complexity of the decade-old state-of-the-art protocol of Damgård and Nielsen (CRYPTO'07). In addition to having an improved online communication cost, we can shut down almost half of the parties in the online phase, thereby saving up to 50$\%$ in the system's operational costs. Our maliciously secure protocol also enjoys similar benefits and requires only half of the parties, except for a one-time verification towards the end. To showcase the practicality of the designed protocols, we benchmark popular applications such as deep neural networks, graph neural networks, genome sequence matching, and biometric matching using prototype implementations. Our improved protocols aid in bringing up to 60-80$\%$ savings in monetary cost over prior work.
Last updated:  2023-06-11
A Note on Key Ranking for Optimal Collision Side-Channel Attacks
Cezary Glowacz
In "Optimal collision side-channel attacks" (https://eprint.iacr.org/2019/828) we studied key ranking and derived an optimal distinguisher for it. In this note we propose a heuristic estimation procedure for key ranking based on this distinguisher, and provide estimates of lower bounds for secret key ranks in collision side-channel attacks.
Last updated:  2023-06-07
Meet-in-the-Filter and Dynamic Counting with Applications to Speck
Alex Biryukov, Luan Cardoso dos Santos, Je Sen Teh, Aleksei Udovenko, Vesselin Velichkov
We propose a new cryptanalytic tool for differential cryptanalysis, called meet-in-the-filter (MiF). It is suitable for ciphers with a slow or incomplete diffusion layer, such as those based on Addition-Rotation-XOR (ARX). The main idea of the MiF technique is to stop the difference propagation earlier in the cipher, allowing the use of differentials with higher probability. This comes at the expense of a deeper analysis phase in the bottom rounds, which is made possible by the slow diffusion of the target cipher. The MiF technique uses a meet-in-the-middle matching to construct differential trails connecting the differential’s output and the ciphertext difference. The proposed trails are used in the key recovery procedure, reducing time complexity and allowing flexible time-data trade-offs. In addition, we show how to combine MiF with a dynamic counting technique for key recovery. We illustrate MiF in practice by reporting improved attacks on the ARX-based family of block ciphers Speck. We improve the time complexities of the best known attacks on up to 15 rounds of Speck32 and 20 rounds of Speck64/128. Notably, our new attack on 11 rounds of Speck32 has practical analysis and data complexities of $2^{24.66}$ and $2^{26.70}$, respectively, and was experimentally verified, recovering the master key in a matter of seconds. It significantly improves the previous deep learning-based attack by Gohr from CRYPTO 2019, which has time complexity $2^{38}$. As an important milestone, our conventional cryptanalysis method sets a new high benchmark to beat for cryptanalysis relying on machine learning.
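As a concrete reminder of the ARX structure that MiF exploits, the sketch below implements one round of Speck32 (16-bit words, rotation amounts 7 and 2, as in the public specification); it illustrates the cipher's slow per-round diffusion, not the attack itself.

```python
# One Speck32 round: modular addition is the only nonlinear operation, and each round
# mixes the two 16-bit words only lightly, which is the "slow diffusion" MiF relies on.
MASK = 0xFFFF  # 16-bit words

def ror(v, r):
    return ((v >> r) | (v << (16 - r))) & MASK

def rol(v, r):
    return ((v << r) | (v >> (16 - r))) & MASK

def speck32_round(x, y, k):
    x = (ror(x, 7) + y) & MASK  # addition modulo 2^16
    x ^= k                      # round-key XOR
    y = rol(y, 2) ^ x           # linear mixing
    return x, y

print(speck32_round(0x6574, 0x694C, 0x1918))
```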
Last updated:  2023-10-21
CENSOR: Privacy-preserving Obfuscation for Outsourcing SAT formulas
Tassos Dimitriou and Khazam Alhamdan
We propose a novel obfuscation technique that can be used to outsource hard satisfiability (SAT) formulas to the cloud. Servers with large computational power are typically used to solve SAT instances that model real-life problems in task scheduling, AI planning, circuit verification and more. However, outsourcing data to the cloud may lead to privacy and information breaches since satisfying assignments may reveal considerable information about the underlying problem modeled by SAT. In this work, we develop CENSOR (privaCy prEserviNg obfuScation for Outsourcing foRmulas), a novel SAT obfuscation framework that resembles Indistinguishability Obfuscation. At the core of the framework lies a mechanism that transforms any formula to a random one with the same number of satisfying assignments. As a result, obfuscated formulas are indistinguishable from each other, thus preserving the input-output privacy of the original SAT instance. Contrary to prior solutions that are rather ad hoc in nature, we formally prove the security of our scheme. Additionally, we show that obfuscated formulas are within a polynomial factor of the original ones, thus achieving polynomial slowdown. Finally, the whole process is efficient in practice, allowing solutions to original instances to be easily recovered from obfuscated ones. A byproduct of our method is that all NP problems can be potentially outsourced to the cloud by means of reducing to SAT.
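To make the count-preserving requirement concrete, here is a deliberately trivial transformation that keeps the number of satisfying assignments unchanged (random variable renaming plus random polarity flips, both bijections on the assignment space). It is not the paper's obfuscation mechanism, only an illustration of the invariant CENSOR maintains.

```python
import random

# CNF convention: variables are 1..n, a negative integer denotes a negated variable.
def toy_count_preserving_transform(cnf, n):
    perm = list(range(1, n + 1))
    random.shuffle(perm)                               # rename variables
    flip = [random.choice([1, -1]) for _ in range(n)]  # flip polarity of some variables

    def relabel(lit):
        v = abs(lit)
        sign = 1 if lit > 0 else -1
        return sign * flip[v - 1] * perm[v - 1]

    return [[relabel(lit) for lit in clause] for clause in cnf]

print(toy_count_preserving_transform([[1, -2], [2, 3]], 3))
```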
Last updated:  2022-06-16
The Gap Is Sensitive to Size of Preimages: Collapsing Property Doesn't Go Beyond Quantum Collision-Resistance for Preimages Bounded Hash Functions
Shujiao Cao, Rui Xue
As an enhancement of quantum collision-resistance, the collapsing property of hash functions proposed by Unruh (EUROCRYPT 2016) emphasizes the hardness of distinguishing a superposition state of a hash value from a collapsed one. The collapsing property trivially implies quantum collision-resistance. However, it remains unknown whether there is a reduction from collapsing hash functions to quantum collision-resistant hash functions. In this paper, we further study the relations between these two properties and derive two intriguing results as follows: Firstly, when the size of preimages of each hash value is bounded by some polynomial, we demonstrate that the collapsing property and collision-resistance must hold simultaneously. This result is proved in a semi-black-box manner by taking advantage of the invertibility of a unitary quantum circuit. Next, we further consider the relations between these two properties in the case of exponential-sized preimages. By giving a construction of polynomially bounded hash functions that preserves quantum collision-resistance, we show that the existence of collapsing hash functions is implied by quantum collision-resistant hash functions when the size of the preimages is not too large relative to the expected value. Our results indicate that the gap between these two properties is sensitive to the size of preimages. As a corollary, our results also reveal the non-existence of polynomially bounded equivocal collision-resistant hash functions.
Last updated:  2022-07-22
Practical UC-Secure Zero-Knowledge Smart Contracts
Jayamine Alupotha, Xavier Boyen
Zero-knowledge means that the verifier(s) learn nothing beyond the predefined statement(s); e.g., in a zero-knowledge contract program, verifiers learn nothing except the program's path for the respective transaction. Intra-privacy or insiders' zero-knowledge --- the ability to maintain a secret in a multi-party computation --- is an essential security property for smart contracts of Confidential Transactions (CT). Otherwise, the users have to reveal their confidential coin amounts to each other even if it is not a condition of the contract, contradicting the idea of zero-knowledge. For example, in an escrow contract, the escrow should not learn buyers' or sellers' account balances if the escrow has to pay into their accounts. Current private computational platforms, including homomorphic encryption and (ZK-)SNARKs, cannot be used in CT's smart contracts because homomorphic encryption requires secret key sharing, and (ZK-)SNARKs require a different setup for each computation, which has to be stored on the blockchain. Existing private smart contracts are not intra-private even though they are inter-private --- participants can maintain secrets from verifiers but not from other participants. To fill this research gap, we introduce the notion of ``Confidential Integer Processing'' (CIP) with two intra-private single-setup zero-knowledge programming protocols, (1) ``CIP-DLP'' from the Discrete Log Problem (DLP) targeting Ring/Aggregable CT like Monero and Mimblewimble, and (2) ``CIP-SIS'' from the Approximate (Ring-Modular-) Short Integer Solution Problem (Approx-SIS) aiming at lattice-based Ring/Aggregable CT. To the best of our knowledge, our CIP protocols are the first practical public zero-knowledge contract protocols that are also secure under the Universal Composability (UC) framework without any hardware magic or trusted offline computations.
Last updated:  2022-05-29
On those Boolean functions that are coset leaders of first order Reed-Muller codes
Claude Carlet, Serge Feukoua
In this paper, we study the class of those Boolean functions that are coset leaders of first order Reed-Muller codes. We study their properties and try to better understand their structure (which seems complex), by studying operations on Boolean functions that can provide coset leaders (we show that these operations all provide coset leaders when the operands are coset leaders, and that some can even produce coset leaders without the operands being coset leaders). We characterize those coset leaders that belong to the well known classes of direct sums of monomial Boolean functions and Maiorana-McFarland functions. Since all the functions of Hamming weight at most $2^{n-2}$ are automatically coset leaders, we are interested in constructing infinite classes of coset leaders having possibly Hamming weight larger than $2^{n-2}$.
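For readers outside coding theory, the notion used above (in our reading of the standard definition) is: $f$ is a coset leader of $\mathrm{RM}(1,n)$ when it has minimum Hamming weight in its coset $f \oplus \mathrm{RM}(1,n)$, i.e.
\[
w_H(f) \;\le\; w_H(f \oplus \ell) \qquad \text{for every affine Boolean function } \ell.
\]
This also explains the remark about weight at most $2^{n-2}$: every nonconstant affine $\ell$ has weight $2^{n-1}$, so $w_H(f\oplus\ell) \ge 2^{n-1} - w_H(f) \ge w_H(f)$ whenever $w_H(f) \le 2^{n-2}$ (and $\ell=1$ gives $2^n - w_H(f) \ge w_H(f)$), making every such $f$ automatically a coset leader.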
Last updated:  2022-09-13
Key-Reduced Variants of 3kf9 with Beyond-Birthday-Bound Security
Yaobin Shen, Ferdinand Sibleyras
3kf9 is a three-key CBC-type MAC that enhances the standardized integrity algorithm f9 (3GPP-MAC). It has beyond-birthday-bound security and is expected to be a possible candidate in constrained environments when instantiated with lightweight blockciphers. Two variants, 2kf9 and 1kf9, were proposed to reduce the key size for efficiency, but recently, Leurent et al. (CRYPTO'18) and Shen et al. (CRYPTO'21) pointed out critical flaws in these two variants and invalidated their security proofs with birthday-bound attacks. In this work, we revisit previous constructions of key-reduced variants of 3kf9 and analyze what went wrong in their security analyses. Interestingly, we find that a single doubling at the end can not only fix 2kf9 but also lift 1kf9 beyond the birthday bound. We then propose two new key-reduced variants of 3kf9, called n2kf9 and n1kf9. By leveraging previous attempts, we prove that n2kf9 is secure up to 2^{2n/3} queries, and prove that n1kf9 is secure up to 2^{2n/3} queries when the message space is prefix-free. We also provide beyond-birthday analysis of n2kf9 in the multi-user setting. Note that compared to EMAC and CBC-MAC, the additional cost to provide a higher security guarantee is expected to be minimal for n2kf9 and n1kf9. It only requires one additional blockcipher call and one doubling.
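The ``doubling'' referred to above is the usual multiplication by $x$ in $\mathbb{F}_{2^n}$, a one-shift-plus-conditional-XOR operation. The sketch below shows it for a 128-bit block with the standard reduction polynomial; it is only meant to convey how cheap the extra operation is, and the exact placement inside n2kf9/n1kf9 follows the paper, not this sketch.

```python
# Doubling over GF(2^128): multiply by x modulo x^128 + x^7 + x^2 + x + 1.
# (Other block sizes use a different reduction polynomial.)
def double_gf128(block: bytes) -> bytes:
    v = int.from_bytes(block, "big") << 1
    if v >> 128:                          # top bit fell off: reduce
        v = (v & ((1 << 128) - 1)) ^ 0x87
    return v.to_bytes(16, "big")

print(double_gf128(b"\x80" + bytes(15)).hex())   # ends in ...87: reduction kicked in
```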
Last updated:  2023-03-30
Arithmetic Tuples for MPC
Toomas Krips, Ralf Kuesters, Pascal Reisert, Marc Rivinius
Some of the most efficient protocols for Multi-Party Computation (MPC) use a two-phase approach where correlated randomness, in particular Beaver triples, is generated in the offline phase and then used to speed up the online phase. Recently, more complex correlations have been introduced to optimize certain operations even further, such as matrix triples for matrix multiplications. In this paper, our goal is to speed up the evaluation of multivariate polynomials and therewith of whole arithmetic circuits in the online phase. To this end, we introduce a new form of correlated randomness: arithmetic tuples. Arithmetic tuples can be fine-tuned in various ways to the constraints of the application at hand, in terms of round complexity, bandwidth, and tuple size. We show that for many real-world setups an arithmetic-tuple-based online phase outperforms state-of-the-art protocols based on Beaver triples.
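Since the abstract contrasts arithmetic tuples with Beaver triples, the following minimal Python sketch recalls how a preprocessed Beaver triple is consumed in the online phase (toy two-party additive sharing over a prime field, semi-honest; an illustration of the classical technique, not the paper's protocol).

```python
import random

P = 2 ** 61 - 1  # toy prime modulus

def share(v):                       # additive 2-out-of-2 secret sharing
    r = random.randrange(P)
    return [r, (v - r) % P]

def reveal(shares):
    return sum(shares) % P

x, y = 1234, 5678
a, b = random.randrange(P), random.randrange(P)     # offline phase: triple (a, b, c = a*b)
xs, ys = share(x), share(y)
as_, bs, cs = share(a), share(b), share(a * b % P)

d = (reveal(xs) - a) % P            # online phase: open d = x - a and e = y - b
e = (reveal(ys) - b) % P
# [x*y] = d*e + d*[b] + e*[a] + [c]  (the public constant d*e is added by one party only)
zs = [(d * e + d * bs[0] + e * as_[0] + cs[0]) % P,
      (d * bs[1] + e * as_[1] + cs[1]) % P]
print(reveal(zs) == (x * y) % P)    # True
```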
Last updated:  2022-05-28
Deciding and reconstructing linear equivalence of uniformly distributed functions
Ivana Ivkovic, Nikolay Kaleyski
We describe an efficient algorithm for testing and recovering linear equivalence between a pair of $k$-to-$1$ discrete functions with a specific structure. In particular, for $k = 3$ this applies to many APN functions over fields of even characteristic, and for $k = 2$ this applies to all known planar functions over fields of odd characteristic. Our approach is significantly faster than all known methods for testing equivalence, and allows linear equivalence to be tested in practice for dimensions much higher than what has been possible before (for instance, we can efficiently test equivalence for $n = 12$ or $n = 14$ in the case of 3-to-1 APN functions over $\mathbb{F}_{2^n}$, and for $n = 8$ or $n = 9$ in the case of 2-to-1 planar functions over $\mathbb{F}_{3^n}$ within a few minutes even in the worst case). We also develop supplementary algorithms allowing our approach to be extended to the more general case of EA-equivalence. Classifying 3-to-1 APN functions over $\mathbb{F}_{2^n}$ for dimensions as high as $n = 14$ up to EA-equivalence can be performed in a matter of minutes using the developed framework.
Last updated:  2023-08-16
NOVA, a Noncommutative-ring Based Unbalanced Oil and Vinegar Signature Scheme with Key-randomness Alignment
Lih-Chung Wang, Po-En Tseng, Yen-Liang Kuan, Chun-Yen Chou
In this paper, we propose a noncommutative-ring based unbalanced oil and vinegar signature scheme with key-randomness alignment: NOVA (Noncommutative Oil and Vinegar with Alignment). Instead of fields or even commutative rings, we show that noncommutative rings can be used for algebraic cryptosystems. At the same or a better level of security, NOVA has a much smaller public key than UOV (Unbalanced Oil and Vinegar), which makes NOVA practical in most situations. We use Magma to implement NOVA and give a detailed security analysis against known major attacks.
Last updated:  2023-06-09
The $c$-differential uniformity and boomerang uniformity of three classes of permutation polynomials over $\mathbb{F}_{2^n}$
Qian Liu, Zhiwei Huang, Jianrui Xie, Ximeng Liu, Jian Zou
Permutation polynomials with low $c$-differential uniformity and boomerang uniformity have wide applications in cryptography. In this paper, by utilizing the Weil sums technique and solving certain equations over $\mathbb{F}_{2^n}$, we determine the $c$-differential uniformity and boomerang uniformity of the following permutation polynomials: (1) $f_1(x)=x+\mathrm{Tr}_1^n(x^{2^{k+1}+1}+x^3+x+ux)$, where $n=2k+1$, $u\in\mathbb{F}_{2^n}$ with $\mathrm{Tr}_1^n(u)=1$; (2) $f_2(x)=x+\mathrm{Tr}_1^n(x^{{2^k}+3}+(x+1)^{2^k+3})$, where $n=2k+1$; (3) $f_3(x)=x^{-1}+\mathrm{Tr}_1^n((x^{-1}+1)^d+x^{-d})$, where $n$ is even and $d$ is a positive integer. The results show that the involutions $f_1(x)$ and $f_2(x)$ are APcN functions for $c\in\mathbb{F}_{2^n}\backslash \{0,1\}$. Moreover, the boomerang uniformity of $f_1(x)$ and $f_2(x)$ can attain $2^n$. Furthermore, we generalize some previous works and derive upper bounds on the $c$-differential uniformity and boomerang uniformity of $f_3(x)$.
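For reference, the notions used above, as we recall the standard definitions (they are not restated in the abstract): for $c\in\mathbb{F}_{2^n}$ one counts
\[
{}_c\Delta_F(a,b) \;=\; \#\{x\in\mathbb{F}_{2^n} : F(x+a) + cF(x) = b\}, \qquad \delta_{F,c} \;=\; \max_{a,b}\; {}_c\Delta_F(a,b),
\]
where $a\neq 0$ is required only when $c=1$; $F$ is APcN (almost perfect $c$-nonlinear) exactly when $\delta_{F,c}=2$.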
Last updated:  2022-09-08
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning
Harsh Chaudhari, Matthew Jagielski, Alina Oprea
Secure multiparty computation (MPC) has been proposed to allow multiple mutually distrustful data owners to jointly train machine learning (ML) models on their combined data. However, by design, MPC protocols faithfully compute the training functionality, which the adversarial ML community has shown can leak private information and be tampered with in poisoning attacks. In this work, we argue that model ensembles, implemented in our framework called SafeNet, are a highly MPC-amenable way to avoid many adversarial ML attacks. The natural partitioning of data amongst owners in MPC training allows this approach to be highly scalable at training time, to provide provable protection from poisoning attacks, and to provide a provable defense against a number of privacy attacks. We demonstrate SafeNet's efficiency, accuracy, and resilience to poisoning on several machine learning datasets and models trained in end-to-end and transfer learning scenarios. For instance, SafeNet reduces backdoor attack success significantly, while achieving $39\times$ faster training and $36 \times$ less communication than the four-party MPC framework of Dalskov et al. Our experiments show that ensembling retains these benefits even in many non-iid settings. The simplicity, cheap setup, and robustness properties of ensembling make it a strong first choice for training ML models privately in MPC.
Last updated:  2022-06-08
SHORTSTACK: Distributed, Fault-tolerant, Oblivious Data Access
Midhul Vuppalapati, Kushal Babel, Anurag Khandelwal, Rachit Agarwal
Many applications that benefit from data offload to cloud services operate on private data. A long line of work has shown that, even when data is offloaded in an encrypted form, an adversary can learn sensitive information by analyzing data access patterns. Existing techniques for oblivious data access—that protect against access pattern attacks—require a centralized and stateful trusted proxy to orchestrate data accesses from applications to cloud services. We show that, in failure-prone deployments, such a centralized and stateful proxy results in violation of oblivious data access security guarantees and/or in system unavailability. We thus initiate the study of distributed, fault-tolerant, oblivious data access. We present SHORTSTACK, a distributed proxy architecture for oblivious data access in failure-prone deployments. SHORTSTACK achieves the classical obliviousness guarantee—access patterns observed by the adversary being independent of the input—even under a powerful passive persistent adversary that can force failure of an arbitrary (bounded-size) subset of proxy servers at arbitrary times. We also introduce a security model that enables studying oblivious data access with distributed, failure-prone servers. We provide a formal proof that SHORTSTACK enables oblivious data access under this model, and show empirically that SHORTSTACK performance scales near-linearly with the number of distributed proxy servers.
Last updated:  2022-10-20
Protego: Efficient, Revocable and Auditable Anonymous Credentials with Applications to Hyperledger Fabric
Aisling Connolly, Jerome Deschamps, Pascal Lafourcade, Octavio Perez Kempner
Recent works to improve privacy in permissioned blockchains like Hyperledger Fabric rely on Idemix, the only anonymous credential system that has been integrated to date. The current Idemix implementation in Hyperledger Fabric (v2.4) only supports a fixed set of attributes; it does not support revocation features, nor does it support anonymous endorsement of transactions (in Fabric, transactions need to be approved by a subset of peers before consensus). A prototype Idemix extension by Bogatov et al. (CANS, 2021) was proposed to include revocation, auditability, and to gain privacy for users. In this work, we explore how to gain efficiency, functionality, and further privacy, departing from recent works on anonymous credentials based on Structure-Preserving Signatures on Equivalence Classes. As a result, we extend previous works to build a new anonymous credential scheme called Protego. We also present a variant of it (Protego Duo) based on a different approach to hiding the identity of an issuer during showings. We also discuss how both can be integrated into Hyperledger Fabric and provide a prototype implementation. Finally, our results show that Protego and Protego Duo are at least twice as fast as state-of-the-art approaches based on Idemix.
Last updated:  2022-09-16
Secure Sampling with Sublinear Communication
Seung Geol Choi, Dana Dachman-Soled, S. Dov Gordon, Linsheng Liu, Arkady Yerukhimovich
Random sampling from specified distributions is an important tool with wide applications in the analysis of large-scale data. In this paper we study how to randomly sample when the distribution is partitioned among two parties' private inputs. Of course, a trivial solution is to have one party send a (possibly encrypted) description of its weights to the other party, who can then sample over the entire distribution (possibly using homomorphic encryption). However, this approach requires communication that is linear in the input size, which is prohibitively expensive in many settings. In this paper, we investigate secure 2-party sampling with \emph{sublinear communication} for many standard distributions. We develop protocols for $L_1$ and $L_2$ sampling. Additionally, we investigate the feasibility of sublinear product sampling, showing impossibility for the general problem and giving a protocol for a restricted case of the problem. We additionally show how such product sampling can be used to instantiate a sublinear communication 2-party exponential mechanism for differentially-private data release.
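To fix ideas, here is the ideal (completely non-private) functionality under one natural reading of two-party $L_1$ sampling with nonnegative weights; the secure protocols of the paper realize such sampling without exchanging the weight vectors and with sublinear communication. The function name and the additive combination of the two weight vectors are our illustrative assumptions, not the paper's definition.

```python
import random

# Ideal functionality only, no cryptography: sample index i with probability
# proportional to w_a[i] + w_b[i], where w_a and w_b are the two parties' weights.
def l1_sample_ideal(w_a, w_b):
    weights = [a + b for a, b in zip(w_a, w_b)]
    return random.choices(range(len(weights)), weights=weights, k=1)[0]

print(l1_sample_ideal([3, 0, 1], [0, 2, 4]))
```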
Last updated:  2022-05-27
ABE for Circuits with Constant-Size Secret Keys and Adaptive Security
Hanjun Li, Huijia Lin, Ji Luo
An important theme in research on attribute-based encryption (ABE) is minimizing the sizes of the secret keys and ciphertexts. In this work, we present two new ABE schemes with *constant-size* secret keys, that is, the key size is independent of the sizes of policies or attributes, and dependent only on the security parameter lambda. * We construct the first key-policy ABE scheme for circuits with constant-size secret keys, |sk_f|=poly(lambda), which concretely consist of only three group elements. The previous state-of-the-art construction by [Boneh et al., Eurocrypt '14] has key size polynomial in the maximum depth d of the policy circuits, |sk_f|=poly(d,lambda). Our new scheme removes this dependency of key size on d while keeping the ciphertext size the same, which grows linearly in the attribute length and polynomially in the maximal depth, |ct|=|x|poly(d,lambda). * We present the first ciphertext-policy ABE scheme for Boolean formulae that simultaneously has constant-size keys and succinct ciphertexts of size independent of the policy formulae, in particular, |sk_f|=poly(lambda) and |ct_x|=poly(|x|,lambda). Concretely, each secret key consists of only two group elements. Previous ciphertext-policy ABE schemes either have succinct ciphertexts but non constant-size keys [Agrawal--Yamada, Eurocrypt '20; Agrawal--Wichs--Yamada, TCC '20], or constant-size keys but large ciphertexts that grow with the policy size, as well as the attribute length. Our second construction is the first ABE scheme achieving *double succinctness*, where both keys and ciphertexts are smaller than the corresponding attributes and policies tied to them. Our constructions feature new ways of combining lattices with pairing groups for building ABE and are proven selectively secure based on LWE and in the generic (pairing) group model. We further show that when replacing the LWE assumption with its adaptive variant introduced in [Quach--Wee--Wichs FOCS '18] the constructions become adaptively secure.
Last updated:  2022-06-01
Unclonable Polymers and Their Cryptographic Applications
Ghada Almashaqbeh, Ran Canetti, Yaniv Erlich, Jonathan Gershoni, Tal Malkin, Itsik Pe’er, Anna Roitburd-Berman, Eran Tromer
We propose a mechanism for generating and manipulating protein polymers to obtain a new type of consumable storage that exhibits intriguing cryptographic "self-destruct" properties, assuming the hardness of certain polymer-sequencing problems. To demonstrate the cryptographic potential of this technology, we first develop a formalism that captures (in a minimalistic way) the functionality and security properties provided by the technology. Next, using this technology, we construct and prove security of two cryptographic applications that are currently obtainable only via trusted hardware that implements logical circuitry (either classical or quantum). The first application is a password-controlled secure vault where the stored data is irrecoverably erased once a threshold of unsuccessful access attempts is reached. The second is (a somewhat relaxed version of) one-time programs, namely a device that allows evaluating a secret function only a limited number of times before self-destructing, where each evaluation is made on a fresh user-chosen input. Finally, while our constructions, modeling, and analysis are designed to capture the proposed polymer-based technology, they are sufficiently general to be of potential independent interest.
Last updated:  2023-09-06
BASALISC: Programmable Hardware Accelerator for BGV Fully Homomorphic Encryption
Robin Geelen, Michiel Van Beirendonck, Hilder V. L. Pereira, Brian Huffman, Tynan McAuley, Ben Selfridge, Daniel Wagner, Georgios Dimou, Ingrid Verbauwhede, Frederik Vercauteren, and David W. Archer
Fully Homomorphic Encryption (FHE) allows for secure computation on encrypted data. Unfortunately, huge memory size, computational cost and bandwidth requirements limit its practicality. We present BASALISC, an architecture family of hardware accelerators that aims to substantially accelerate FHE computations in the cloud. BASALISC is the first to implement the BGV scheme with fully-packed bootstrapping – the noise removal capability necessary for arbitrary-depth computation. It supports a customized version of bootstrapping that can be instantiated with hardware multipliers optimized for area and power. BASALISC is a three-abstraction-layer RISC architecture, designed for a 1 GHz ASIC implementation and underway toward a 150 mm$^2$ die tape-out in a 12 nm GF process. BASALISC's four-layer memory hierarchy includes a two-dimensional conflict-free inner memory layer that enables 32 Tb/s radix-256 NTT computations without pipeline stalls. Its conflict-resolution permutation hardware is generalized and re-used to compute BGV automorphisms without throughput penalty. BASALISC also has a custom multiply-accumulate unit to accelerate BGV key switching. The BASALISC toolchain comprises a custom compiler and a joint performance and correctness simulator. To evaluate BASALISC, we study its physical realizability, emulate and formally verify its core functional units, and we study its performance on a set of benchmarks. Simulation results show a speedup of more than 5,000× over HElib – a popular software FHE library.
Last updated:  2023-01-05
Quantum Augmented Dual Attack
Martin R. Albrecht, Yixin Shen
We present a quantum augmented variant of the dual lattice attack on the Learning with Errors (LWE) problem, using classical memory with quantum random access (QRACM). Applying our results to lattice parameters from the literature, we find that our algorithm outperforms previous algorithms, assuming unit cost access to a QRACM. On a technical level, we show how to obtain a quantum speedup on the search for Fast Fourier Transform (FFT) coefficients above a given threshold by leveraging the relative sparseness of the FFT and using quantum amplitude estimation. We also discuss the applicability of the Quantum Fourier Transform in this context. Furthermore, we give a more rigorous analysis of the classical and quantum expected complexity of guessing part of the secret vector where coefficients follow a discrete Gaussian (mod \(q\)).
Last updated:  2024-04-06
Bit Security as Cost to Demonstrate Advantage
Keewoo Lee
We revisit the question of what the definition of bit security should be, previously answered by Micciancio-Walter (Eurocrypt 2018) and Watanabe-Yasunaga (Asiacrypt 2021). Our new definition is simple, but (i) captures both search and decision primitives in a single framework like Micciancio-Walter, and (ii) has a firm operational meaning like Watanabe-Yasunaga. It also matches intuitive expectations and can be cleanly formulated in terms of the Hellinger distance. To support and justify the new definition, we prove several classic security reductions with respect to our bit security. We also provide pathological examples that indicate the ill-definedness of the bit security notions of Micciancio-Walter and Watanabe-Yasunaga.
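For reference (our addition, not part of the abstract), the Hellinger distance between distributions $P$ and $Q$ over a common finite support is the standard quantity
\[
H^2(P,Q) \;=\; \tfrac{1}{2}\sum_{x}\Big(\sqrt{P(x)}-\sqrt{Q(x)}\Big)^{2} \;=\; 1-\sum_{x}\sqrt{P(x)\,Q(x)} .
\]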
Last updated:  2022-06-01
Torsion point attacks on ``SIDH-like'' cryptosystems
Péter Kutas, Christophe Petit
Isogeny-based cryptography is a promising approach for post-quantum cryptography. The best-known protocol following that approach is the supersingular isogeny Diffie-Hellman protocol (SIDH); this protocol was turned into the CCA-secure key encapsulation mechanism SIKE, which was submitted to and remains in the third round of NIST's post-quantum standardization process as an ``alternate'' candidate. Isogeny-based cryptography generally relies on the conjectured hardness of computing an isogeny between two isogenous elliptic curves, and most cryptanalytic work referenced on SIKE's webpage exclusively focuses on that problem. Interestingly, the hardness of this problem is sufficient for neither SIDH nor SIKE. In particular, these protocols reveal additional information on the secret isogeny, in the form of images of specific torsion points through the isogeny. This paper surveys existing cryptanalysis approaches exploiting this so-called ``torsion point information'', summarizes their current impact on SIKE and related algorithms, and suggests some research directions that might lead to further impact.
Last updated:  2023-08-09
Fast Unbalanced Private Set Union from Fully Homomorphic Encryption
Binbin Tu, Yu Chen, Qi Liu, Cong Zhang
Private set union (PSU) allows two parties to compute the union of their sets without revealing anything else. It has been widely used in various applications. While several computationally efficient PSU protocols have been developed for the balanced case, they have a potential limitation in their communication complexity, which grows (super)-linearly with the size of the larger set. This poses a challenge when performing PSU in the unbalanced setting, where one party is a constrained device holding a small set, and another is a service provider holding a large set. In this work, we propose a generic construction of unbalanced PSU from leveled fully homomorphic encryption and a newly introduced protocol called permuted matrix private equality test. By instantiating the generic construction, we obtain two unbalanced PSU protocols whose communication complexity is linear in the size of the smaller set, and logarithmic in the larger set. We implement our protocols. Experiments demonstrate that our protocols outperform all previous protocols in the unbalanced setting. The larger the difference between the sizes of the two sets, the better our protocols perform. For input sets of sizes $2^{10}$ and $2^{20}$ with items of length $128$ bits, our PSU requires only $2.767$ MB of communication. Compared with the state-of-the-art PSU proposed by Zhang et al. (USENIX Security 2023), there is a $37 \times$ shrink in communication and a roughly $10 - 35 \times$ speedup in running time, depending on the network environment.
Last updated:  2024-02-01
Private Set Operations from Multi-Query Reverse Private Membership Test
Yu Chen, Min Zhang, Cong Zhang, Minglang Dong, and Weiran Liu
Private set operations allow two parties to perform secure computation on their private sets, including intersection, union and functions of intersection/union. In this paper, we put forth a framework to perform private set operations. The technical core of our framework is the multi-query reverse private membership test (mqRPMT) protocol (Zhang et al., USENIX Security 2023), in which a client with a vector $X = (x_1, \dots, x_n)$ interacts with a server holding a set $Y$, and eventually the server learns only a bit vector $(e_1, \dots, e_n)$ indicating whether $x_i \in Y$ without learning the value of $x_i$, while the client learns nothing. We present two constructions of mqRPMT from newly introduced cryptographic notions, one based on commutative weak pseudorandom functions (cwPRF), and the other based on permuted oblivious pseudorandom functions (pOPRF). Both cwPRF and pOPRF can be realized from the decisional Diffie-Hellman (DDH)-like assumptions in the random oracle model. We also introduce a slightly weaker version of mqRPMT dubbed mqRPMT$^*$, in which the client also learns the cardinality of $X \cap Y$. We show that mqRPMT$^*$ can be built from a category of multi-query private membership test (mqPMT) called Sigma-mqPMT, which in turn can be realized from DDH-like assumptions or oblivious polynomial evaluation. This takes a first step towards establishing the relation between mqPMT and mqRPMT. We demonstrate the practicality of our framework with implementations. By plugging our cwPRF-based mqRPMT into the framework, we obtain various PSO protocols that are superior to or competitive with the state-of-the-art protocols. For intersection functionality, our protocol is faster than the most efficient one for small sets. For cardinality functionality, our protocol achieves a $2.4-10.5\times$ speedup and a $10.9-14.8\times$ shrink in communication cost. For cardinality-with-sum functionality, our protocol achieves a $28.5-76.3\times$ speedup and a $7.4\times$ shrink in communication cost. For union functionality, our protocol is the first one that attains strict linear complexity, and requires the lowest concrete computation and communication costs in all settings, achieving a $2.7-17\times$ speedup and about a $2\times$ shrink in communication cost. Specifically, for input sets of size $2^{20}$, our PSU protocol requires roughly 100 MB of communication and 16 seconds using 4 threads on a laptop in the LAN setting. Our improvement on PSU also translates to related functionality, yielding the most efficient private-ID protocol to date. Moreover, by plugging our FHE-based mqRPMT$^*$ into the general framework, we obtain a PSU$^*$ protocol (the sender additionally learns the intersection size) suitable for the unbalanced setting, whose communication complexity is linear in the size of the smaller set and logarithmic in the larger set.
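The mqRPMT functionality described above is simple enough to state as a toy reference implementation (ideal and non-private; shown only to make the inputs and outputs concrete):

```python
# Ideal mqRPMT: the server, holding Y, ends up with only the indicator bits
# e_i = [x_i in Y]; the client learns nothing.
def mqrpmt_ideal(client_x, server_y):
    y = set(server_y)
    server_output = [int(x in y) for x in client_x]
    client_output = None
    return client_output, server_output

print(mqrpmt_ideal([5, 7, 9], {1, 5, 9}))   # (None, [1, 0, 1])
```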
Last updated:  2023-03-19
Revisiting the Efficiency of Asynchronous Multi Party Computation Against General Adversaries
Ananya Appan, Anirudh Chandramouli, Ashish Choudhury
In this paper, we design secure multi-party computation (MPC) protocols in the asynchronous communication setting with optimal resilience. Our protocols are secure against a computationally-unbounded malicious adversary, characterized by an adversary structure $\mathcal{Z}$, which enumerates all possible subsets of potentially corrupt parties. Our protocols incur a communication of $\mathcal{O}(|\mathcal{Z}|^2)$ and $\mathcal{O}(|\mathcal{Z}|)$ bits per multiplication for perfect and statistical security respectively. These are the first protocols with this communication complexity, as such protocols were known only in the synchronous communication setting (Hirt and Tschudi, ASIACRYPT 2013).
Last updated:  2022-05-26
Supersingular Non-Superspecial Abelian Surfaces in Cryptography
Jason T. LeGrow, Yan Bo Ti, Lukas Zobernig
We consider the use of supersingular abelian surfaces in cryptography. Several generalisations of well-known cryptographic schemes and constructions based on supersingular elliptic curves to the 2-dimensional setting of superspecial abelian surfaces have been proposed. The computational assumptions in the superspecial 2-dimensional case can be reduced to the corresponding 1-dimensional problems via a product decomposition by observing that every superspecial abelian surface is non-simple and separably isogenous to a product of supersingular elliptic curves. Instead, we propose to use supersingular non-superspecial isogeny graphs where such a product decomposition does not have a computable description via separable isogenies. We study the advantages and investigate security concerns of the move to supersingular non-superspecial abelian surfaces.
Last updated:  2022-05-25
IBE with Incompressible Master Secret and Small Identity Secrets
Nico Döttling, Sanjam Garg, Sruthi Sekar, Mingyuan Wang
Side-stepping the protection provided by cryptography, exfiltration attacks are becoming a considerable real-world threat. With the goal of mitigating the exfiltration of cryptographic keys, big-key cryptosystems have been developed over the past few years. These systems come with very large secret keys which are thus hard to exfiltrate. Typically, in such systems, the setup time must be large as it generates the large secret key. However, subsequently, the encryption and decryption operations, that must be performed repeatedly, are required to be efficient. Specifically, the encryption uses only a small public key and the decryption only accesses small ciphertext-dependent parts of the full secret key. Nonetheless, these schemes require decryption to have access to the entire secret key. Thus, using such big-key cryptosystems necessitates that users carry around large secret keys on their devices, which can be a hassle and in some cases might also render exfiltration easy. With the goal of removing this problem, in this work, we initiate the study of big-key identity-based encryption (bk-IBE). In such a system, the master secret key is allowed to be large but we require that the identity-based secret keys are short. This allows users to use the identity-based short keys as ephemeral secret keys that can be more easily carried around and that allow decrypting ciphertexts matching a particular identity, e.g., messages that were encrypted on a particular date. In particular: -We build a new definitional framework for bk-IBE capturing a range of applications. In the case where the exfiltration is small, our definition promises stronger security --- namely, an adversary can break semantic security for only a few identities, proportional to the amount of leakage it gets. In contrast, in the catastrophic case where a large fraction of the master secret key has been exfiltrated, we can still resort to a guarantee that the ciphertexts generated for a randomly chosen identity (or, an identity with enough entropy) remain protected. We demonstrate how this framework captures the best possible security guarantees. -We show the first construction of such a bk-IBE offering strong security properties. Our construction is based on standard assumptions on groups with bilinear pairings and brings together techniques from seemingly different contexts such as leakage resilient cryptography, reusable two-round MPC, and laconic oblivious transfer. We expect our techniques to be of independent interest.
Last updated:  2022-07-01
Dynamic Searchable Encryption with Optimal Search in the Presence of Deletions
Javad Ghareh Chamani, Dimitrios Papadopoulos, Mohammadamin Karbasforushan, Ioannis Demertzis
We focus on the problem of Dynamic Searchable Encryption (DSE) with efficient (optimal/quasi-optimal) search in the presence of deletions. Towards that end, we first propose $\mathsf{OSSE}$, the first DSE scheme that can achieve asymptotically optimal search time, linear in the result size and independent of any prior deletions, improving the previous state of the art by a multiplicative logarithmic factor. We then propose our second scheme $\mathsf{LLSE}$, that achieves a sublogarithmic search overhead ($\log\log i_w$, where $i_w$ is the number of prior insertions for a keyword) compared to the optimal achieved by $\mathsf{OSSE}$. While this is slightly worse than our first scheme, it still outperforms prior works, while also achieving faster deletions and asymptotically smaller server storage. Both schemes have standard leakage profiles and are forward-and-backward private. Our experimental evaluation is very encouraging as it shows our schemes consistently outperform the prior state-of-the-art DSE by 1.2-6.6$\times$ in search computation time, while also requiring just a single roundtrip to receive the search result. Even compared with prior simpler and very efficient constructions in which all deleted records are returned as part of the result, our $\mathsf{OSSE}$ achieves better performance for deletion rates ranging from 45-55%, while the previous state-of-the-art quasi-optimal scheme achieves this for 65-75% deletion rates.
Last updated:  2022-09-14
Quantum Implementation and Analysis of DEFAULT
Kyungbae Jang, Anubhab Baksi, Jakub Breier, Hwajeong Seo, Anupam Chattopadhyay
In this paper, we present the quantum implementation and analysis of the recently proposed block cipher, DEFAULT. DEFAULT consists of two components, namely DEFAULT-LAYER and DEFAULT-CORE. Two instances of DEFAULT-LAYER are used before and after DEFAULT-CORE (the so-called `sandwich construction'). We discuss the various choices made to keep the cost of the basic quantum circuit and of Grover's oracle search low, and compare them with the levels of quantum security specified by the United States' National Institute of Standards and Technology (NIST). All in all, our work nicely fits in the research trend of finding the possible quantum vulnerability of symmetric-key ciphers.
Last updated:  2022-10-17
Faster Non-interactive Verifiable Computing
Pascal Lafourcade, Gael Marcadet, Léo Robert
In 1986, A. Yao introduced the notion of garbled circuits, designed to verify the correctness of computations performed on an untrusted server. However, correctness is guaranteed for only one input, meaning that a new garbled circuit must be created for each new input. To address this drawback, in 2010 Gennaro et al. evaluated the garbled circuit homomorphically using a Fully Homomorphic Encryption scheme, allowing the same garbled circuit to be reused for new inputs. Their solution requires encrypting the garbled circuit for every new input. In this paper, we propose a verifiable-computation scheme that allows verifying the correctness of computations performed by an untrusted server for multiple inputs, where the garbled circuit is homomorphically encrypted only once. Hence, we have a faster scheme compared to Gennaro’s solution, since for each new input we reduce the computations by the size of the circuit representing the function to be computed, for the same security level. The key point to obtain this speed-up is to rely on Multi-Key Homomorphic Encryption (MKHE) and then to encrypt the garbled circuit only once.
Last updated:  2022-05-25
Round-Optimal Multi-Party Computation with Identifiable Abort
Michele Ciampi, Divya Ravi, Luisa Siniscalchi, Hendrik Waldner
Secure multi-party computation (MPC) protocols that are resilient to a dishonest majority allow the adversary to get the output of the computation while, at the same time, forcing the honest parties to abort. Aumann and Lindell introduced the enhanced notion of security with identifiable abort, which still allows the adversary to trigger an abort but, at the same time, it enables the honest parties to agree on the identity of the party that led to the abort. More recently, in Eurocrypt 2016, Garg et al. showed that, assuming access to a simultaneous message exchange channel for all the parties, at least four rounds of communication are required to securely realize non-trivial functionalities in the plain model. Following Garg et al., a sequence of works has matched this lower bound, but none of them achieved security with identifiable abort. In this work, we close this gap and show that four rounds of communication are also sufficient to securely realize any functionality with identifiable abort using standard and generic polynomial-time assumptions. To achieve this result we introduce the new notion of bounded-rewind secure MPC that guarantees security even against an adversary that performs a mild form of reset attacks. We show how to instantiate this primitive starting from any MPC protocol and by assuming trapdoor-permutations. The notion of bounded-rewind secure MPC allows for easier parallel composition of MPC protocols with other (interactive) cryptographic primitives. Therefore, we believe that this primitive can be useful in other contexts in which it is crucial to combine multiple primitives with MPC protocols while keeping the round complexity of the final protocol low.
Last updated:  2023-07-03
DiLizium 2.0: Revisiting Two-Party Crystals-Dilithium
Peeter Laud, Nikita Snetkov, Jelizaveta Vakarjuk
In recent years, there has been increased interest in designing threshold signature schemes. Most of the recent works focus on constructing threshold versions of ECDSA or Schnorr signature schemes due to their appealing usage in blockchain technologies. Additionally, a lot of research is being done on cryptographic schemes that are resistant to quantum computer attacks. In this work, we propose a new version of the two-party Dilithium signature scheme. The security of our scheme is based on the hardness of Module-LWE and Module-SIS problems. In our construction, we follow a similar logic as Damgård et al. (PKC 2021) and use an additively homomorphic commitment scheme. However, compared to them, our protocol uses signature compression techniques from the original Dilithium signature scheme, which makes it closer to the version submitted to the NIST PQC competition. We focus on two-party signature schemes in the context of user authentication.
Last updated:  2022-05-25
Accelerating the Best Trail Search on AES-Like Ciphers
Seonggyeom Kim, Deukjo Hong, Jaechul Sung, Seokhie Hong
In this study, we accelerate Matsui's search algorithm to search for the best differential and linear trails of AES-like ciphers. Our acceleration points are twofold. The first exploits the structure and branch number of an AES-like round function to apply strict pruning conditions to Matsui's search algorithm. The second employs permutation characteristics in trail search to reduce the inputs that need to be analyzed. We demonstrate the optimization of the search algorithm by obtaining the best differential and linear trails of existing block ciphers: AES, LED, MIDORI-64, CRAFT, SKINNY, PRESENT, and GIFT. In particular, our search program finds the full-round best differential and linear trails of GIFT-64 (in approx. 1 s and 10 s) and GIFT-128 (in approx. 89 h and 452 h), respectively. For a more in-depth application, we leverage the acceleration to investigate the optimal DC/LC resistance that GIFT-variants, called BOGI-based ciphers, can achieve. To this end, we identify all the BOGI-based ciphers and reduce them into 41,472 representatives. Deriving 16-, 32-, 64-, and 128-bit BOGI-based ciphers from the representatives, we obtain their best trails until 15, 15, 13, and 11 rounds, respectively. The investigation shows that 12 rounds are the minimum threshold for a 64-bit BOGI-based cipher to prevent efficient trails for DC/LC, whereas GIFT-64 requires 14 rounds. Moreover, it is shown that GIFT can provide better resistance by only replacing the existing bit permutation. Specifically, the bit permutation variants of GIFT-64 and GIFT-128 require fewer rounds, one and two, respectively, to prevent efficient differential and linear trails.
Last updated:  2022-05-25
Statistical Effective Fault Attacks: The other Side of the Coin
Navid Vafaei, Sara Zarei, Nasour Bagheri, Maria Eichlseder, Robert Primas, Hadi Soleimany
The introduction of Statistical Ineffective Fault Attacks (SIFA) has led to a renewed interest in fault attacks. SIFA requires minimal knowledge of the concrete implementation and is effective even in the presence of common fault or power analysis countermeasures. However, further investigations reveal that undesired and frequent ineffective events, which we refer to as the noise phenomenon, are the bottleneck of SIFA that can considerably diminish its strength. This includes noise associated with the attack’s setup and caused by the countermeasures utilized in the implementation. This research aims to address this significant drawback. We present two novel statistical fault attack variants that are far more successful in dealing with these noisy conditions. The first variant is the Statistical Effective Fault Attack (SEFA), which exploits the non-uniform distribution of intermediate variables in circumstances when the induced faults are effective. The idea behind the second proposed method, dubbed Statistical Hybrid Fault Attacks (SHFA), is to take advantage of the biased distributions of both effective and ineffective cases simultaneously. Our experimental results in various case studies, including noise-free and noisy setups, back up our reasoning that SEFA surpasses SIFA in several instances and that SHFA outperforms both or is at least as efficient as the best of them.
Last updated:  2022-11-25
Self-Timed Masking: Implementing Masked S-Boxes Without Registers
Mateus Simões, Lilian Bossuet, Nicolas Bruneau, Vincent Grosso, Patrick Haddad, Thomas Sarno
Masking is one of the most widely used side-channel protection techniques. However, a secure masking scheme entails additional implementation costs, e.g., randomness and transistor count. Furthermore, glitches and early evaluation can temporarily weaken a masked implementation in hardware, creating a potential source of exploitable leakage. Registers are generally used to mitigate these threats, at the cost of increased area and latency. In this work, we show how to design glitch-free masking without registers with the help of dual-rail encoding and asynchronous logic. This methodology is used to implement low-latency masking with arbitrary protection order. Finally, we present a side-channel evaluation of our first- and second-order masked AES implementations.
Last updated:  2022-05-24
Dialektos: Privacy-preserving Smart Contracts
Tadas Vaitiekūnas
Digital ledger technologies supporting smart contracts usually do not ensure any privacy for user transactions or state. Most solutions to this problem use private network setups, centralized parties, hardware enclaves, or cryptographic primitives that are novel, complex, and computationally expensive. This paper looks into an alternative way of implementing smart contracts. Our construction of a protocol for smart contracts employs an overlay protocol design pattern for decentralized applications, which separates transaction ordering from transaction validation. This enables consensus on application state while revealing only encrypted versions of transactions to the public consensus protocol network. The UTXO-based smart contract model allows partitioning the state of the distributed ledger so that participants need to decrypt and reach consensus only on the transactions that are relevant to them. We present a security analysis, which shows that, assuming the presence of a secure consensus protocol, our construction achieves consensus on UTXO-based transactions while hiding most transaction details from all protocol parties, except for a limited subset of parties that need particular transactions to construct their state.
Last updated:  2022-05-29
Anamorphic Encryption: Private Communication against a Dictator
Giuseppe Persiano, Duong Hieu Phan, Moti Yung
Cryptosystems have been developed over the years under the typical prevalent setting which assumes that the receiver’s key is kept secure from the adversary, and that the choice of the message to be sent is freely performed by the sender and is kept secure from the adversary as well. Under these fundamental and basic operational assumptions, modern cryptography has flourished over the last half century or so, with amazing achievements: New systems (including public-key cryptography), beautiful and useful models (including security definitions such as semantic security), and new primitives (such as zero-knowledge proofs) have been developed. Furthermore, these fundamental achievements have been translated into actual working systems, and now span many daily human activities over the Internet. However, in recent years there has been growing pressure from many governments to allow the government itself access to keys and messages of encryption systems (under various names: escrow encryption, emergency access, communication decency acts, etc.). Numerous non-direct arguments against such policies have been raised, such as "the bad guys can utilize other encryption systems" so all other cryptosystems have to be declared illegal, or that "allowing the government access is an ill-advised policy since it creates a natural weak system security point, which may attract others (to masquerade as the government)." It has remained a fundamental open issue, though, to show directly that the above-mentioned efforts by a government (called here “a dictator” for brevity), which mandate breaking the basic operational assumption (and disallowing other cryptosystems), are, in fact, a futile exercise. This is a direct technical point which needs to be made and has not been made to date. In this work, as a technical demonstration of the futility of the dictator’s demands, we invent the notion of “Anamorphic Encryption”, which shows that even if the dictator gets the keys and the messages used in the system (before anything is sent) and no other system is allowed, there is a covert way within the context of well-established public-key cryptosystems for an entity to immediately (with no latency) send piggybacked secure messages which are, in spite of the stringent dictator conditions, hidden from the dictator itself! We feel that this may be an important direct technical argument against the nature of governments’ attempts to police the use of strong cryptographic systems, and we hope to stimulate further work in this direction.
Last updated:  2023-05-18
Impossibilities in Succinct Arguments: Black-box Extraction and More
Matteo Campanelli, Chaya Ganesh, Hamidreza Khoshakhlagh, Janno Siim
The celebrated result by Gentry and Wichs established a theoretical barrier for succinct non-interactive arguments (SNARGs), showing that for (expressive enough) hard-on-average languages, we must assume non-falsifiable assumptions. We further investigate those barriers by showing new negative and positive results related to the proof size. 1. We start by formalizing a folklore lower bound for the proof size of black-box extractable arguments based on the hardness of the language. This separates knowledge-sound SNARGs (SNARKs) in the random oracle model (that can have black-box extraction) and those in the standard model. 2. We find a positive result in the non-adaptive setting. Under the existence of non-adaptively sound SNARGs (without extractability) and from standard assumptions, it is possible to build SNARKs with black-box extractability for a non-trivial subset of NP. 3. On the other hand, we show that (under some mild assumptions) all NP languages cannot have SNARKs with black-box extractability even in the non-adaptive setting. 4. The Gentry-Wichs result does not account for the preprocessing model, under which fall several efficient constructions. We show that also, in the preprocessing model, it is impossible to construct SNARGs that rely on falsifiable assumptions in a black-box way. Along the way, we identify a class of non-trivial languages, which we dub “trapdoor languages”, that bypass some of these impossibility results.
Last updated:  2024-01-09
Conditional Attribute-Based Proxy Re-Encryption: Definitions and Constructions from LWE
Lisha Yao, Jian Weng, Pengfei Wu, Xiaoguo Li, Yi Liu, Junzuo Lai, Guomin Yang, and Robert H. Deng
Attribute-based proxy re-encryption (AB-PRE) is one of the essential variants of proxy re-encryption. It allows a proxy with a re-encryption key to transform a ciphertext that is associated with an access policy and decryptable by a delegator into another ciphertext associated with a new access policy, which other delegatees can then decrypt. However, with AB-PRE, the proxy switches the underlying policies of all ciphertexts indiscriminately. The delegator cannot decide which ciphertexts are transformed, leaving no flexibility of control in real use. In this paper, we propose the notion of Conditional AB-PRE (CAB-PRE), supporting completely fine-grained control over ciphertexts, in both decryption and delegation. In CAB-PRE, the proxy can convert the underlying policy of a ciphertext only if this ciphertext satisfies a specific condition set by the delegator in the re-encryption key. We formalize the security of this notion in the honest re-encryption attacks (HRA) setting, and present a concrete construction secure under adaptive corruptions in the standard model. As a building block, we design an adaptively HRA-secure (ciphertext-policy) AB-PRE based on the learning with errors (LWE) problem, which solves an open problem left by Susilo et al. in ESORICS '21. Finally, we introduce a well-matched conditional delegation tailored to inner-product predicates and integrate it into this AB-PRE to derive our HRA-secure CAB-PRE scheme.
Last updated:  2022-05-23
Integer Syndrome Decoding in the Presence of Noise
Vlad-Florin Dragoi, Brice Colombier, Pierre-Louis Cayrel, Vincent Grosso
Code-based cryptography received renewed attention after NIST started the post-quantum cryptography standardization process in 2016. A central NP-hard problem is the binary syndrome decoding problem, on which the security of many code-based cryptosystems rests. The best known methods to solve this problem all stem from the information-set decoding strategy, first introduced by Prange in 1962. A recent line of work considers augmented versions of this strategy, with hints typically provided by side-channel information. In this work, we consider the integer syndrome decoding problem, where the integer syndrome is available but might be noisy. We study how the performance of the decoder is affected by the noise. We provide experimental results on cryptographic parameters for the BIKE and Classic McEliece cryptosystems, which are alternate and finalist candidates of the third round of the NIST standardization process, respectively.
Last updated:  2022-05-23
Post-Quantum Secure Boot on Vehicle Network Processors
Joppe W. Bos, Brian Carlson, Joost Renes, Marius Rotaru, Daan Sprenkels, Geoffrey P. Waters
The ability to trust a system to act safely and securely strongly relies on the integrity of the software that it runs. To guarantee authenticity of the software one can include cryptographic data such as digital signatures on application images that can only be generated by trusted parties. These are typically based on cryptographic primitives such as Rivest-Shamir-Adleman (RSA) or Elliptic-Curve Cryptography (ECC), whose security will be lost whenever a large enough quantum computer is built. For that reason, migration towards Post-Quantum Cryptography (PQC) is necessary. This paper investigates the practical impact of migrating the secure boot flow on a Vehicle Network Processor (S32G274A) towards PQC. We create a low-memory fault-attack-resistant implementation of the Dilithium signature verification algorithm and evaluate its impact on the boot flow.
Last updated:  2022-05-23
Round-Optimal Lattice-Based Threshold Signatures, Revisited
Shweta Agrawal, Damien Stehle, Anshu Yadav
Threshold signature schemes enable distribution of the signature issuing capability to multiple users, to mitigate the threat of signing key compromise. Though a classic primitive, these signatures have witnessed a surge of interest in recent times due to relevance to modern applications like blockchains and cryptocurrencies. In this work, we study round-optimal threshold signatures in the post-quantum regime and improve the only known lattice-based construction by Boneh et al. [CRYPTO’18] as follows: • Efficiency. We reduce the amount of noise flooding used in the construction from $2^{\Omega(\lambda)}$ down to $\sqrt{Q}$, where $Q$ is the bound on the number of generated signatures and $\lambda$ is the security parameter. By using lattice hardness assumptions over polynomial rings, this allows us to decrease the signature bit-lengths from $\widetilde{O}(\lambda^3)$ to $\widetilde{O}(\lambda)$, bringing them significantly closer to practice. Our improvement relies on a careful analysis using Rényi divergence rather than statistical distance in the security proof. • Instantiation. The construction of Boneh et al. requires a standard signature scheme to be evaluated homomorphically. To instantiate this, we provide a homomorphism-friendly variant of Lyubashevsky’s signature [EUROCRYPT ’12] which achieves low circuit depth by being “rejection-free” and uses an optimal, moderate noise flooding of $\sqrt{Q}$, matching the above. • Towards Adaptive Security. The construction of Boneh et al. satisfies only selective security, where all the corrupted parties must be announced before any signing query is made. We improve this in two ways: in the Random Oracle Model, we obtain partial adaptivity where signing queries can be made before the corrupted parties are announced but the set of corrupted parties must be announced all at once. In the standard model, we obtain full adaptivity, where parties can be corrupted at any time but this construction is in a weaker pre-processing model where signers must be provided correlated randomness of length proportional to the number of signatures, in an offline preprocessing phase.
Last updated:  2022-05-23
CUDA-Accelerated RNS Multiplication in Word-Wise Homomorphic Encryption Schemes
Shiyu Shen, Hao Yang, Yu Liu, Zhe Liu, Yunlei Zhao
Homomorphic encryption (HE), which allows computation over encrypted data, has often been used to preserve privacy. However, the computationally heavy nature of HE and the complexity of network topologies make the deployment of HE schemes in the Internet of Things (IoT) scenario difficult. In this work, we propose CARM, the first optimized GPU implementation that covers BGV, BFV and CKKS, targeting the acceleration of homomorphic multiplication using GPUs in heterogeneous IoT systems. We offer constant-time low-level arithmetic with minimal instruction count and memory usage, as well as performance-oriented and memory-oriented configurations. We exploit a parametric and generic design that offers various trade-offs between resources and efficiency, yielding a solution suitable for accelerating RNS homomorphic multiplication on both high-performance and embedded GPUs. Through this, we can offer more real-time evaluation results and relieve the computational pressure on cloud devices. We deploy our implementations on two GPUs and achieve up to 378.4×, 234.5×, and 287.2× speedup for homomorphic multiplication of BGV, BFV, and CKKS on a Tesla V100S, and 8.8×, 9.2×, and 10.3× on a Jetson AGX Xavier, respectively.
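To give a flavor of the residue number system (RNS) representation that word-wise schemes such as BGV, BFV and CKKS use for their large coefficients, the plain-Python sketch below (toy moduli assumed; this is not the paper's GPU kernels) shows how a large value is held as residues modulo several word-sized primes, so that multiplication decomposes into independent word-wise multiplications, which is what maps well onto parallel hardware.

```python
# Toy RNS multiplication: residues, word-wise products, CRT reconstruction.
from functools import reduce

MODULI = [3329, 12289, 8380417]      # three well-known small primes, used here only as toy moduli
M = reduce(lambda a, b: a * b, MODULI)

def to_rns(x):
    return [x % m for m in MODULI]

def rns_mul(a, b):
    # the whole point: each residue channel is an independent word-size multiplication
    return [(x * y) % m for x, y, m in zip(a, b, MODULI)]

def from_rns(residues):
    # Chinese Remainder Theorem reconstruction (result is defined modulo M)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x = (x + r * Mi * pow(Mi, -1, m)) % M
    return x

a, b = 123456789123456789, 987654321987654321
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % M
```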
Last updated:  2022-05-23
Recovering Rainbow's Secret Key with a First-Order Fault Attack
Thomas Aulbach, Tobias Kovats, Juliane Krämer, Soundes Marzougui
Rainbow, a multivariate digital signature scheme and third round finalist in NIST's PQC standardization process, is a layered version of the unbalanced oil and vinegar (UOV) scheme. We introduce two fault attacks, each focusing on one of the secret linear transformations $T$ and $S$ used to hide the structure of the central map in Rainbow. The first fault attack reveals a part of $T$ and we prove that this is enough to achieve a full key recovery with negligible computational effort for all parameter sets of Rainbow. The second one unveils $S$, which can be extended to a full key recovery by the Kipnis-Shamir attack. Our work exposes the secret transformations used in multivariate signature schemes as an important attack vector for physical attacks, which need further protection. Our attacks target the optimized Cortex-M4 implementation and require only first-order instruction skips and a moderate amount of faulted signatures.
Last updated:  2023-08-27
Watermarking PRFs against Quantum Adversaries
Fuyuki Kitagawa and Ryo Nishimaki
We initiate the study of software watermarking against quantum adversaries. A quantum adversary generates a quantum state as a pirate software that potentially removes an embedded message from a classical marked software. Extracting an embedded message from quantum pirate software is difficult since measurement could irreversibly alter the quantum state. In software watermarking against classical adversaries, a message extraction algorithm crucially uses the (input-output) behavior of a classical pirate software to extract an embedded message. Even if we instantiate existing watermarking PRFs with quantum-safe building blocks, it is not clear whether they are secure against quantum adversaries due to the quantum-specific property above. Thus, we need entirely new techniques to achieve software watermarking against quantum adversaries. In this work, we define secure watermarking PRFs and PKE for quantum adversaries (unremovability against quantum adversaries). We also present two watermarking PRFs and one watermarking PKE as follows. - We construct a privately extractable watermarking PRF against quantum adversaries from the quantum hardness of the learning with errors (LWE) problem. The marking and extraction algorithms use a public parameter and a private extraction key, respectively. The watermarking PRF is unremovable even if adversaries have (the public parameter and) access to the extraction oracle, which returns a result of extraction for a queried quantum circuit. - We construct a publicly extractable watermarking PRF against quantum adversaries from indistinguishability obfuscation (IO) and the quantum hardness of the LWE problem. The marking and extraction algorithms use a public parameter and a public extraction key, respectively. The watermarking PRF is unremovable even if adversaries have the extraction key (and the public parameter). - We construct a publicly extractable watermarking PKE against quantum adversaries from standard PKE. The marking algorithm can directly generate a marked decryption from a decryption key, and the extraction algorithm uses a public key of the PKE scheme for extraction. We develop a quantum extraction technique to extract information (a classical string) from a quantum state without destroying the state too much. We also introduce the notions of extraction-less watermarking PRFs and PKE as crucial building blocks to achieve the results above by combining the tool with our quantum extraction technique.
Last updated:  2022-05-23
Enforcing fine-grained constant-time policies
Basavesh Ammanaghatta Shivakumar, Gilles Barthe, Benjamin Grégoire, Vincent Laporte, Swarn Priya
Cryptographic constant-time (CT) is a popular programming discipline used by cryptographic libraries to protect themselves against timing attacks. The CT discipline aims to enforce that program execution does not leak secrets, where leakage is defined by a formal leakage model. In practice, different leakage models coexist, sometimes even within a single library, both to reflect different architectures and to accommodate different security-efficiency trade-offs. Constant-timeness is popular and can be checked automatically by many tools. However, most sound tools are focused on a baseline (BL) leakage model. In contrast, (sound) verification methods for other leakage models are less developed, in part because these models require modular arithmetic reasoning. In this paper, we develop a systematic, sound approach for enforcing fine-grained constant-time policies beyond the BL model. Our approach combines two main ingredients: a verification infrastructure, which proves that source programs are constant-time, and a compiler infrastructure, which provably preserves constant-timeness for these fine-grained policies. By making these infrastructures parametric in the leakage model, we achieve the first approach that supports fine-grained constant-time policies. We implement the approach in the Jasmin framework for high-assurance cryptography, and we evaluate our approach with examples from the literature: OpenSSL and wolfSSL. We found a bug in OpenSSL and provided a formally verified fix.
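As background for the baseline leakage model mentioned above, the toy snippet below (an illustration only; the paper works on Jasmin and C code, not Python) contrasts a secret-dependent branch, which the constant-time discipline forbids, with a branch-free selection whose control flow does not depend on the secret.

```python
# Toy contrast: secret-dependent branch vs. branch-free constant-time-style selection.

def select_leaky(secret_bit, a, b):
    # Branches on a secret: the two paths can differ in timing and in code/data accesses.
    if secret_bit:
        return a
    return b

def select_ct(secret_bit, a, b):
    # Branch-free: mask is all-ones iff secret_bit == 1 (32-bit values assumed).
    mask = (-secret_bit) & 0xFFFFFFFF
    return (a & mask) | (b & ~mask & 0xFFFFFFFF)

assert select_ct(1, 0xDEADBEEF, 0x12345678) == select_leaky(1, 0xDEADBEEF, 0x12345678)
assert select_ct(0, 0xDEADBEEF, 0x12345678) == select_leaky(0, 0xDEADBEEF, 0x12345678)
```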
Last updated:  2022-05-23
Feel the Quantum Functioning: Instantiating Generic Multi-Input Functional Encryption from Learning with Errors (extended version)?
Alexandros Bakas, Antonis Michalas, Eugene Frimpong, Reyhaneh Rabbaninejad
Functional Encryption (FE) allows users who hold a specific decryption key to learn a specific function of encrypted data while the actual plaintexts remain private. While FE is still in its infancy, it is our strong belief that in the years to come, this remarkable cryptographic primitive will have matured to the degree that makes it an integral part of access control systems, especially cloud-based ones. To this end, we believe it is of great importance to provide not only theoretical and generic constructions but also concrete instantiations of FE schemes from well-studied cryptographic assumptions. Therefore, in this paper, we undertake the task of presenting two instantiations of the generic work presented in [8] from the Decisional Diffie-Hellman (DDH) problem that also satisfy the property of verifiable decryption. Moreover, we present a novel multi-input FE (MIFE) scheme that can be instantiated from Regev's cryptosystem, and thus remains secure even against quantum adversaries. Finally, we provide a multi-party computation (MPC) protocol that allows our MIFE construction to be deployed in the multi-client mode.
Last updated:  2022-05-23
High-Performance Polynomial Multiplication Hardware Accelerators for KEM Saber and NTRU
Elizabeth Carter, Pengzhou He, Jiafeng Xie
Along with the rapid development of large-scale quantum computers, post-quantum cryptography (PQC) has recently drawn significant attention from the research community, as it has been proven that the existing public-key cryptosystems are vulnerable to quantum attacks. Following this direction, this paper presents a novel implementation of high-performance polynomial multiplication hardware accelerators for key encapsulation mechanism (KEM) Saber and NTRU, two PQC algorithms that are currently under consideration in the National Institute of Standards and Technology (NIST) PQC standardization process. In total, we have carried out three layers of effort to obtain the proposed work. First of all, we propose a new Dual Cyclic-Row Oriented Processing (Dual-CROP) technique to build a high-performance polynomial multiplication hardware accelerator for KEM Saber. Then, we extend this hardware accelerator to NTRU with proper innovation and adjustment. Finally, through a series of complexity analyses and implementation-based comparisons, we show that the proposed hardware accelerators obtain better area-time complexities than known existing ones. It is expected that the outcome of this work can impact the ongoing NIST PQC standardization process and can be deployed further to construct efficient cryptoprocessors.
Last updated:  2022-05-30
Secure Hierarchical Deterministic Wallet Supporting Stealth Address
Xin Yin, Zhen Liu, Guomin Yang, Guoxing Chen, Haojin Zhu
Over the past decade, cryptocurrency has been undergoing rapid development. The digital wallet, as the tool to store and manage cryptographic keys, is the primary entry point for the public to access cryptocurrency assets. Hierarchical Deterministic Wallet (HDW), proposed in Bitcoin Improvement Proposal 32 (BIP32), has attracted much attention and been widely used in the community, due to its virtues such as easy backup/recovery, convenient cold-address management, and support for trust-less audits and applications in hierarchical organizations. While HDW allows the wallet owner to generate and manage his keys conveniently, Stealth Address (SA) allows a payer to generate a fresh address (i.e., public key) for the receiver without any interaction, so that users can achieve ``one coin each address'' in a very convenient manner, which is widely regarded as a simple but effective way to protect user privacy. Consequently, SA has also attracted much attention and been widely used in the community. However, so far there is no secure wallet algorithm that provides the virtues of both HDW and SA. Actually, even for standalone HDW, to the best of our knowledge, there is no strict definition of syntax and models that captures the functionality and security (i.e., safety of coins and privacy of users) requirements that practical scenarios in cryptocurrency impose on wallets. As a result, the existing wallet algorithms either have (potential) security flaws or lack crucial functionality features. In this work, we formally define the syntax and security models of Hierarchical Deterministic Wallet supporting Stealth Address (HDWSA), capturing the functionality and security (including safety and privacy) requirements imposed by practice in cryptocurrency, which include all the versatile functionalities that lead to the popularity of HDW and SA as well as all the security guarantees that underlie these functionalities. We propose a concrete HDWSA construction and prove its security in the random oracle model. We implement our scheme and the experimental results show that the efficiency is suitable for typical cryptocurrency settings.
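For readers unfamiliar with stealth addresses, the schematic sketch below shows the basic SA mechanism in a toy discrete-log group with assumed notation; it is an illustration of the general idea, not the paper's HDWSA construction (real systems use elliptic-curve groups). The payer derives a fresh one-time address from the receiver's public keys without interaction, and only the receiver can recover the matching secret key.

```python
# Toy dual-key stealth address over a prime-field group (illustration only, insecure parameters).
import hashlib
import secrets

P = 2**127 - 1          # a prime modulus (toy parameter)
G = 3                   # fixed group element used as generator

def H(x: int) -> int:
    # hash-to-exponent
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % (P - 1)

# receiver's long-term keys: a scan key pair (s, S) and a spend key pair (b, B)
s, b = secrets.randbelow(P - 1), secrets.randbelow(P - 1)
S, B = pow(G, s, P), pow(G, b, P)

# payer: pick an ephemeral r, publish R, and pay to the one-time address P_one
r = secrets.randbelow(P - 1)
R = pow(G, r, P)
P_one = (pow(G, H(pow(S, r, P)), P) * B) % P        # g^{H(S^r)} * B

# receiver: recompute the same address with the scan key, and the spending secret with b
P_recv = (pow(G, H(pow(R, s, P)), P) * B) % P       # S^r = R^s, so the addresses match
x_one = (H(pow(R, s, P)) + b) % (P - 1)             # one-time secret key

assert P_one == P_recv and pow(G, x_one, P) == P_one
```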
Last updated:  2023-07-14
New method for combining Matsui’s bounding conditions with sequential encoding method
Senpeng Wang, Dengguo Feng, Bin Hu, Jie Guan, Kai Zhang, Tairong Shi
As the first generic method for finding the optimal differential and linear characteristics, Matsui's branch and bound search algorithm has played an important role in evaluating the security of symmetric ciphers. By combining Matsui's bounding conditions with automatic search models, search efficiency can be improved. In this paper, by studying the properties of Matsui's bounding conditions, we give the general form of bounding conditions that can eliminate all the impossible solutions determined by Matsui's bounding conditions. Then, a new method of combining bounding conditions with sequential encoding method is proposed. With the help of some small size Mixed Integer Linear Programming (MILP) models, we can use fewer variables and clauses to build Satisfiability Problem (SAT) models. As applications, we use our new method to search for the optimal differential and linear characteristics of some SPN, Feistel, and ARX block ciphers. The number of variables and clauses and the solving time of the SAT models are decreased significantly. In addition, we find some new differential and linear characteristics covering more rounds.
Last updated:  2024-03-05
Dashing and Star: Byzantine Fault Tolerance with Weak Certificates
Sisi Duan, Haibin Zhang, Xiao Sui, Baohan Huang, Changchun Mu, Gang Di, and Xiaoyun Wang
State-of-the-art Byzantine fault-tolerant (BFT) protocols assuming partial synchrony such as SBFT and HotStuff use \textit{regular certificates} obtained from $2f+1$ (partial) signatures. We show that one can use \textit{weak certificates} obtained from only $f+1$ signatures to \textit{assist} in designing more robust and more efficient BFT protocols. We design and implement two BFT systems: Dashing (a family of two HotStuff-style BFT protocols) and Star (a parallel BFT framework). We first present Dashing1 that targets both efficiency and robustness using weak certificates. Dashing1 is also network-adaptive in the sense that it can leverage network connection discrepancy to improve performance. We show that Dashing1 outperforms HotStuff in various failure-free and failure scenarios. We then present Dashing2 enabling a \textit{one-phase} fast path by using \textit{strong certificates} from $3f+1$ signatures. We then leverage weak certificates to build Star, a highly scalable BFT framework that delivers transactions from $n-f$ replicas. Star compares favorably with existing protocols in terms of liveness, communication, state transfer, scalability, and/or robustness under failures. We demonstrate that Dashing achieves 47\%-107\% higher peak throughput than HotStuff for experiments on Amazon EC2. Meanwhile, unlike all known BFT protocols whose performance degrades as $f$ grows large, the peak throughput of Star increases as $f$ grows. When deployed in a WAN with 91 replicas across five continents, Star achieves an impressive throughput of 256 ktx/sec, 2.38x that of Narwhal.
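The counting behind the certificate notions above is elementary; the toy check below records it for the standard partial-synchrony setting of n = 3f+1 replicas with at most f Byzantine (the numbers are illustrative, not taken from the paper): a weak certificate of f+1 signatures always contains at least one honest signer, a regular certificate of 2f+1 contains at least f+1 honest signers, and any two regular certificates intersect in at least f+1 replicas, hence in at least one honest replica.

```python
# Quorum arithmetic for weak (f+1), regular (2f+1) and strong (3f+1) certificates.
def honest_signers_at_least(cert_size, f):
    """Lower bound on the number of honest signers inside any certificate of `cert_size` signatures."""
    return max(cert_size - f, 0)

f = 2
n = 3 * f + 1
weak, regular, strong = f + 1, 2 * f + 1, 3 * f + 1

assert honest_signers_at_least(weak, f) == 1           # a weak certificate vouches for at least one honest replica
assert honest_signers_at_least(regular, f) == f + 1    # a regular certificate vouches for a majority of the honest replicas
assert honest_signers_at_least(strong, f) == 2 * f + 1 # a strong certificate vouches for all honest replicas
assert 2 * regular - n == f + 1                        # two regular certificates intersect in >= f+1 replicas
```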
Last updated:  2022-05-23
Cryptanalysis of Three Quantum Money Schemes
Andriyan Bilyk, Javad Doliskani, Zhiyong Gong
We investigate the security assumptions behind three public-key quantum money schemes. Aaronson and Christiano proposed a scheme based on hidden subspaces of the vector space $\mathbb{F}_2^n$ in 2012. It was conjectured by Pena et al in 2015 that the hard problem underlying the scheme can be solved in quasi-polynomial time. We confirm this conjecture by giving a polynomial time quantum algorithm for the underlying problem. Our algorithm is based on computing the Zariski tangent space of a random point in the hidden subspace. Zhandry proposed a scheme based on multivariate hash functions in 2017. We give a polynomial time quantum algorithm for cloning a money state with high probability. Our algorithm uses the verification circuit of the scheme to produce a banknote from a given serial number. Kane proposed a scheme based on modular forms in 2018. The underlying hard problem in Kane's scheme is cloning a quantum state that represents an eigenvector of a set of Hecke operators. We give a polynomial time quantum reduction from this hard problem to a linear algebra problem. The latter problem is much easier to understand, and we hope that our reduction opens new avenues to future cryptanalyses of this scheme.
Last updated:  2022-05-23
Fast Fully Secure Multi-Party Computation over Any Ring with Two-Thirds Honest Majority
Anders Dalskov, Daniel Escudero, Ariel Nof
We introduce a new MPC protocol to securely compute any functionality over an arbitrary black-box finite ring (which may not be commutative), tolerating $t<n/3$ active corruptions while \textit{guaranteeing output delivery} (G.O.D.). Our protocol is based on replicated secret-sharing, whose share size is known to grow exponentially with the number of parties $n$. However, even though the internal storage and computation in our protocol remain exponential, the communication complexity of our protocol is \emph{constant}, except for a light constant-round check that is performed at the end before revealing the output. Furthermore, the amortized communication complexity of our protocol is not only constant, but very small: only $1 + \frac{t-1}{n}<1\frac{1}{3}$ ring elements per party, per multiplication gate over two rounds of interaction. This improves over the state-of-the-art protocol in the same setting by Furukawa and Lindell (CCS 2019), which has a communication complexity of $2\frac{2}{3}$ \emph{field} elements per party, per multiplication gate, while achieving fairness only. As an alternative, we also describe a variant of our protocol which has only one round of interaction per multiplication gate on average, and an amortized communication cost of $\le 1\frac{1}{2}$ ring elements per party on average for any natural circuit. Motivated by the fact that the efficiency of distributed protocols is much more penalized by high communication complexity than by local computation/storage, we perform a detailed analysis together with experiments in order to explore how large the number of parties can be, before the storage and computation overhead becomes prohibitive. Our results show that our techniques are viable even for a moderate number of parties (e.g., $n>10$).
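As a rough illustration of the replicated secret sharing that such protocols build on (a minimal sketch under assumed parameters, not the paper's protocol), the snippet below shares a value over the ring Z_{2^64} among n = 4 parties for t = 1, the smallest case with t < n/3. Each party misses exactly one additive share, linear operations are purely local, and any two parties jointly hold all shares; multiplication, which requires interaction, is not shown.

```python
# Toy replicated secret sharing over Z_{2^64} for n = 4, t = 1 (illustration only).
import secrets

MOD = 1 << 64
N = 4

def share(secret):
    r = [secrets.randbelow(MOD) for _ in range(N - 1)]
    r.append((secret - sum(r)) % MOD)                 # additive shares summing to the secret
    # party i holds every indexed share (j, r_j) with j != i
    return [[(j, r[j]) for j in range(N) if j != i] for i in range(N)]

def add_local(share_i_x, share_i_y):
    # addition of two shared values is local: add shares with matching indices
    return [(j, (a + b) % MOD) for (j, a), (_, b) in zip(share_i_x, share_i_y)]

def reconstruct(views):
    # any set of parties that jointly holds all N additive shares can reconstruct
    collected = {j: v for view in views for (j, v) in view}
    assert len(collected) == N
    return sum(collected.values()) % MOD

x, y = share(123), share(10**18)
z = [add_local(sx, sy) for sx, sy in zip(x, y)]
assert reconstruct(z[0:2]) == (123 + 10**18) % MOD    # parties 0 and 1 suffice
```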
Last updated:  2022-05-23
Efficient and Accurate homomorphic comparisons
Olive Chakraborty, Martin Zuber
We design and implement a new efficient and accurate fully homomorphic argmin/min and argmax/max comparison operator, which finds application in numerous real-world use cases as a classifier. In particular, we propose two versions of our algorithm using different tools from TFHE's functional bootstrapping toolkit. Our algorithm scales to any number of input data points with linear time complexity and logarithmic noise propagation. Our algorithm is the fastest on the market for non-parallel comparisons with a high degree of accuracy and precision. For the MNIST and SVHN datasets, which work under the PATE framework, our algorithm achieves an accuracy of around 99.95% for both.
Last updated:  2022-08-17
Caulk: Lookup Arguments in Sublinear Time
Arantxa Zapico, Vitalik Buterin, Dmitry Khovratovich, Mary Maller, Anca Nitulescu, Mark Simkin
We present position-hiding linkability for vector commitment schemes: one can prove in zero knowledge that one or $m$ values that comprise a commitment $\mathsf{cm}$ all belong to the vector of size $N$ committed to in $C$. Our construction Caulk can be used for membership proofs and lookup arguments and outperforms all existing alternatives in prover time by orders of magnitude. For both single- and multi-membership proofs, Caulk beats SNARKed Merkle proofs by a factor of 100 even if the latter is instantiated with the Poseidon hash. Asymptotically, our prover needs $O(m^2 + m\log N)$ time to prove a batch of $m$ openings, whereas the proof size is $O(1)$ and the verifier time is $O(\log(\log N))$. As a lookup argument, Caulk is the first scheme with prover time sublinear in the table size, assuming $O(N\log N)$ preprocessing time and $O(N)$ storage. It can be used as a subprimitive in verifiable computation schemes in order to drastically decrease the lookup overhead. Our scheme comes with a reference implementation and benchmarks.
Last updated:  2022-09-14
Synthesizing Quantum Circuits of AES with Lower T-depth and Less Qubits
Zhenyu Huang, Siwei Sun
The significant progress in the development of quantum computers has made the study of cryptanalysis based on quantum computing an active topic. To accurately estimate the resources required to carry out quantum attacks, the involved quantum algorithms have to be synthesized into quantum circuits with basic quantum gates. In this work, we present several generic synthesis and optimization techniques for circuits implementing the quantum oracles of iterative symmetric-key ciphers that are commonly employed in quantum attacks based on Grover and Simon’s algorithms. Firstly, a general structure for implementing the round functions of block ciphers in-place is proposed. Then, we present some novel techniques for synthesizing efficient quantum circuits of linear and non-linear cryptographic building blocks. We apply these techniques to AES and systematically investigate the strategies for depth-width trade-offs. Along the way, we derive a quantum circuit for the AES S-box with provably minimal T-depth based on some new observations on its classical circuit. As a result, the T-depth and width (number of qubits) required for implementing the quantum circuits of AES are significantly reduced. Compared with the circuit proposed in EUROCRYPT 2020, the T-depth is reduced from 60 to 40 without increasing the width or 30 with a slight increase in width. These circuits are fully implemented in Microsoft Q# and the source code is publicly available. Compared with the circuit proposed in ASIACRYPT 2020, the width of one of our circuits is reduced from 512 to 371, and the Toffoli-depth is reduced from 2016 to 1558 at the same time. Actually, we can reduce the width to 270 at the cost of increased depth. Moreover, a full spectrum of depth-width trade-offs is provided, setting new records for the synthesis and optimization of quantum circuits of AES.
Last updated:  2023-04-04
Breaking the $t< n/3$ Consensus Bound: Asynchronous Dynamic Proactive Secret Sharing under Honest Majority
Christophe Levrat, Matthieu Rambaud, Antoine Urban
A proactive secret sharing scheme (PSS), expressed in the dynamic-membership setting, enables a committee of n holders of secret-shares, dubbed players, to securely hand over new shares of the same secret to a new committee. We dub such a sub-protocol a Refresh. All existing PSS under an honest majority require the use of a broadcast (BC) in each refresh. BC is costly to implement, and its security relies on timing assumptions on the network. So the privacy of the secret and/or its guaranteed delivery either depends on network assumptions or on the reliability of a public ledger. By contrast, PSS over asynchronous channels do not have these constraints. However, all of them (but one, with exponential complexity) use asynchronous verifiable secret sharing (AVSS) and consensus (MVBA and/or ACS), which are impossible under asynchrony beyond t<n/3 corruptions, whatever the setup. We present a PSS, named asynchronous-proactive secret sharing (APSS), which is the first PSS under honest majority with guaranteed output delivery in a completely asynchronous network. More generally, APSS allows any flexible threshold $t<n$, such that privacy and correctness are guaranteed up to t corruptions, and liveness as soon as $t+1$ players behave honestly. Correctness can be lifted to any number of corruptions, provided a linearly homomorphic commitment scheme. Moreover, each refresh completes at the record speed of $2\delta$, where $\delta$ is the actual message delivery delay. APSS demonstrates that proactive refreshes are possible as long as only the players of the initial committee have a common view of a set of (publicly committed or encrypted) shares. Despite not providing consensus on a unique set of shares, APSS surprisingly enables the opening of any linear map over secrets non-interactively, without consensus. This, in turn, applies to threshold signing, decryption and randomness generation. APSS can also be directly integrated into the asynchronous Schnorr threshold signing scheme "Roast" [CCS'22]. Of independent interest, we: - provide the first UC formalization (and proof) of proactive AVSS, furthermore for arbitrary thresholds; - provide additional mechanisms enabling players of a committee to start a refresh and then erase their old shares, synchronously up to $\delta$ from each other; - improve by 50x the verification speed of the NIZKs of encrypted re-sharing of [Cascudo et al, Asiacrypt'22], by using novel optimizations of batch Schnorr proofs of knowledge. We demonstrate the efficiency of APSS with an implementation that uses this optimization as a baseline.
Last updated:  2022-05-23
A simple proof of ARX completeness
Adriano Koleci
In recent years there has been a growing interest in ARX ciphers thanks to their performance on low-cost architectures. This work is a short and simple proof that the Add, Rotate and Exclusive-OR (ARX) operations generate the permutation group S_{2^n}; it is made up of elementary arguments with minimal use of group theory.
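A brute-force sanity check of the statement is possible for the smallest interesting word size; the toy script below (a verification for 2-bit words only, not the paper's proof) enumerates the subgroup generated by the three permutations x → x+1 mod 4, x → x ⊕ 1 and x → x ⋘ 1, and finds all 4! = 24 permutations.

```python
# Toy closure computation: ARX generators on 2-bit words generate all of S_4.
import math

N_BITS = 2
SIZE = 1 << N_BITS

add1 = tuple((x + 1) % SIZE for x in range(SIZE))
xor1 = tuple(x ^ 1 for x in range(SIZE))
rot1 = tuple(((x << 1) | (x >> (N_BITS - 1))) & (SIZE - 1) for x in range(SIZE))

def compose(p, q):
    # (p o q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(SIZE))

# breadth-first closure of the generated subgroup of S_{2^n}
identity = tuple(range(SIZE))
group = {identity}
frontier = [identity]
while frontier:
    new = []
    for p in frontier:
        for g in (add1, xor1, rot1):
            q = compose(g, p)
            if q not in group:
                group.add(q)
                new.append(q)
    frontier = new

assert len(group) == math.factorial(SIZE)   # 24: every permutation of {0,1,2,3} is reached
```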
Last updated:  2023-01-08
SO-CCA Secure PKE in the Quantum Random Oracle Model or the Quantum Ideal Cipher Model
Shingo Sato, Junji Shikata
Selective opening (SO) security is one of the most important security notions of public key encryption (PKE) in a multi-user setting. Even if the messages and random coins used in some ciphertexts are leaked, SO security guarantees the confidentiality of the other ciphertexts. In fact, it has been shown that there exist PKE schemes which meet standard security notions such as indistinguishability against chosen ciphertext attacks (IND-CCA security) but do not meet SO security against chosen ciphertext attacks. Hence, it is important to consider SO security in the multi-user setting. On the other hand, many researchers have studied cryptosystems in the security model where adversaries can submit quantum superposition queries (i.e., quantum queries) to oracles. In particular, IND-CCA secure PKE and KEM schemes in the quantum random oracle model have been intensively studied so far. In this paper, we show that two kinds of constructions of hybrid encryption schemes meet simulation-based SO security against chosen ciphertext attacks (SIM-SO-CCA security) in the quantum random oracle model or the quantum ideal cipher model. The first scheme is constructed from any IND-CCA secure KEM and any simulatable data encapsulation mechanism (DEM). The second one is constructed from any IND-CCA secure KEM based on the Fujisaki-Okamoto transformation and any strongly unforgeable message authentication code (MAC). We can apply any IND-CCA secure KEM scheme to the first one if the underlying DEM scheme meets simulatability, whereas we can apply any strongly unforgeable MAC to the second one if the underlying KEM is based on the Fujisaki-Okamoto transformation.
Last updated:  2022-09-02
Post-Quantum Anonymous One-Sided Authenticated Key Exchange without Random Oracles
Ren Ishibashi, Kazuki Yoneyama
Authenticated Key Exchange (AKE) is a cryptographic protocol to share a common session key among multiple parties. Usually, PKI-based AKE schemes are designed to guarantee secrecy of the session key and mutual authentication. However, in practice, there are many cases where mutual authentication is undesirable, such as in anonymous networks like Tor and Riffle, or difficult to achieve due to certificate management at the user level, as on the Internet. Goldberg et al. formulated a model of anonymous one-sided AKE which guarantees the anonymity of the client by allowing only the client to authenticate the server, and proposed a concrete scheme. However, existing anonymous one-sided AKE schemes are only known to be secure in the random oracle model. In this paper, we propose generic constructions of anonymous one-sided AKE in the random oracle model and in the standard model, respectively. Our constructions allow us to construct the first post-quantum anonymous one-sided AKE scheme from isogenies in the standard model.
Last updated:  2022-09-08
Smoothing Codes and Lattices: Systematic Study and New Bounds
Thomas Debris, Léo Ducas, Nicolas Resch, Jean-Pierre Tillich
In this article we revisit smoothing bounds in parallel between lattices and codes. Initially introduced by Micciancio and Regev, these bounds were instantiated with Gaussian distributions and were crucial for arguing the security of many lattice-based cryptosystems. Unencumbered by direct application concerns, we provide a systematic study of how these bounds are obtained for both lattices and codes, transferring techniques between both areas. We also consider multiple choices of spherically symmetric noise distribution. We find that the best strategy for a worst-case bound combines Parseval's Identity, the Cauchy-Schwarz inequality, and the second linear programming bound, and this holds for both codes and lattices and all noise distributions at hand. For an average-case analysis, the linear programming bound can be replaced by a tight average count. This alone gives optimal results for spherically uniform noise over random codes and random lattices. It also improves the previous Gaussian smoothing bound for worst-case lattices, but surprisingly it provides even better results for uniform ball noise than for Gaussian noise (or Bernoulli noise for codes). This counter-intuitive situation can be resolved by adequate decomposition and truncation of the Gaussian and Bernoulli distributions into a superposition of uniform noise, giving further improvement for those cases, and putting them on par with the uniform cases.
Last updated:  2022-05-27
PPRKS: A Privacy Preserving Range Keyword Search Scheme
Yu Zhang, Zongbin Wang, Tihong Qin
Privacy-preserving keyword search (PPKS), which aims to ensure the privacy of clients and servers when a database is accessed, is investigated in this paper. Range queries are recognized as a common operation in databases. In this paper, a formal definition of PPKS supporting range queries is given, and a scheme (PPRKS) based on Paillier's cryptosystem is presented. To the best of our knowledge, PPRKS is the only existing scheme that can effectively preserve the privacy of range keyword search. Moreover, it is demonstrated that the security of PPRKS depends on the semantic security of Paillier's cryptosystem. A detailed performance analysis and a simulation are conducted to verify the practicality of PPRKS. As revealed by the theoretical analysis and the experimental results, the proposed scheme is practical.
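Since the scheme's security reduces to Paillier's cryptosystem, the minimal textbook Paillier sketch below (toy, insecure parameters; it is not the paper's construction) shows the additive homomorphism E(m1) · E(m2) = E(m1 + m2) that Paillier-based constructions such as this one rely on.

```python
# Textbook Paillier with toy primes, demonstrating the additive homomorphism.
import secrets
from math import gcd

p, q = 17, 19                                   # toy primes; real deployments use ~1024-bit primes
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)
g = n + 1
mu = pow(lam, -1, n)                            # with g = n+1, L(g^lam mod n^2) = lam

def L(x):
    return (x - 1) // n

def enc(m):
    while True:
        r = secrets.randbelow(n)
        if r > 0 and gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = enc(7), enc(35)
assert dec((c1 * c2) % n2) == (7 + 35) % n      # product of ciphertexts decrypts to the sum
```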
Last updated:  2023-02-16
GLUE: Generalizing Unbounded Attribute-Based Encryption for Flexible Efficiency Trade-Offs
Marloes Venema, Greg Alpár
Ciphertext-policy attribute-based encryption is a versatile primitive that has been considered extensively to securely manage data in practice. Especially completely unbounded schemes are attractive, because they do not restrict the sets of attributes and policies. So far, any such schemes that support negations in the access policy or that have online/offline extensions have an inefficient decryption algorithm. In this work, we propose GLUE (Generalized, Large-universe, Unbounded and Expressive), which is a novel scheme that allows for the efficient implementation of the decryption while allowing the support of both negations and online/offline extensions. We achieve these properties simultaneously by uncovering an underlying dependency between encryption and decryption, which allows for a flexible trade-off in their efficiency. For the security proof, we devise a new technique that enables us to generalize multiple existing schemes. As a result, we obtain a completely unbounded scheme supporting negations that, to the best of our knowledge, outperforms all existing such schemes in the decryption algorithm.
Last updated:  2022-05-23
Cryptanalysis of Reduced Round SPEEDY
Raghvendra Rohit, Santanu Sarkar
SPEEDY is a family of ultra-low-latency block ciphers proposed by Leander, Moos, Moradi and Rasoolzadeh at TCHES 2021. Although the designers gave some differential/linear distinguishers for reduced rounds, a concrete cryptanalysis considering key recovery attacks on SPEEDY was completely missing. The latter is crucial to understand the security margin of designs like SPEEDY, which typically use a low number of rounds to achieve low latency. In this work, we present the first third-party cryptanalysis of SPEEDY-$r$-192, where $r \in \{5, 6, 7\}$ is the number of rounds and 192 is the block and key size in bits. We identify cube distinguishers for 2 rounds with data complexities $2^{14}$ and $2^{13}$, while the differential/linear distinguishers provided by the designers have a complexity of $2^{39}$. Notably, we show that there are several such cube distinguishers, and thus we provide a generic description of them. We also investigate the structural properties of 13-dimensional cubes and give experimental evidence that the partial algebraic normal form of certain state bits after two rounds is always the same. Next, we utilize the 2-round distinguishers to mount a key recovery attack on 3-round SPEEDY. Our attack requires $2^{17.6}$ data, $2^{25.5}$ bits of memory and $2^{52.5}$ time. Our results show that the practical variant of SPEEDY, i.e., SPEEDY-5-192, has a security margin of only 2 rounds. We believe our work will bring new insights into the security of SPEEDY.
Last updated:  2022-12-09
Further Cryptanalysis of a Type of RSA Variants
Gongyu Shi, Geng Wang, Dawu Gu
To enhance the security or the efficiency of the standard RSA cryptosystem, some variants have been proposed based on elliptic curves, Gaussian integers or Lucas sequences. A typical type of these variants, which we call Type-A variants, has the modified Euler totient function $\psi(N)=(p^2-1)(q^2-1)$. In 2018, based on the cubic Pell equation, Murru and Saettone presented a new RSA-like cryptosystem; it is another type of RSA variant, which we call Type-B variants, since their scheme has $\psi(N)=(p^2+p+1)(q^2+q+1)$. For RSA-like cryptosystems, four key-related attacks have been widely analyzed, i.e., the small private key attack, the multiple private keys attack, the partial key exposure attack and the small prime difference attack. These attacks are well studied on both standard RSA and Type-A variants. Recently, the small private key attack on Type-B variants has also been analyzed. In this paper, we make further cryptanalysis of Type-B variants; that is, we propose the first theoretical results on the multiple private keys attack, the partial key exposure attack and the small prime difference attack against Type-B variants, and the validity of our attacks is verified by experiments. Our results show that for all three attacks, Type-B variants are less secure than standard RSA.
Last updated:  2022-05-23
On the Differential Spectrum of a Differentially $3$-Uniform Power Function
Tingting Pang, Nian Li, Xiangyong Zeng
In this paper, we investigate the cardinality, denoted by $(j_1,j_2,j_3,j_4)_2$, of the intersection $(\mathcal{C}^{(2)}_{j_1}-1)\cap(\mathcal{C}^{(2)}_{j_2}-2)\cap(\mathcal{C}^{(2)}_{j_3}-3) \cap(\mathcal{C}^{(2)}_{j_4}-4)$ for $j_1,j_2,j_3,j_4\in\{0,1\}$, where $\mathcal{C}^{(2)}_0, \mathcal{C}^{(2)}_1$ are the cyclotomic classes of order two over the finite field $\mathbb{F}_{p^n}$, $p$ is an odd prime and $n$ is a positive integer. By making full use of the results on cyclotomic classes of orders two and four as well as the cardinality of the intersection $(\mathcal{C}^{(2)}_{i_1}-1)\cap(\mathcal{C}^{(2)}_{i_2}-2)\cap(\mathcal{C}^{(2)}_{i_3}-3)$, we compute the values of $(j_1,j_2,j_3,j_4)_2$ in the case of $p=5$, where $i_1,i_2,i_3\in\{0,1\}$. As a consequence, the power function $x^{\frac{5^n-1}{2}+2}$ over $\mathbb{F}_{5^n}$ is shown to be differentially $3$-uniform and its differential spectrum is also completely determined.
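The smallest member of this family can be checked directly; the toy script below (the case n = 1 only, while the paper treats general n and the full spectrum) brute-forces the differential uniformity of the corresponding power map x^4 over F_5 and finds 3.

```python
# Brute-force differential uniformity of x -> x^((p-1)/2 + 2) over F_p for p = 5 (n = 1).
p = 5
d = (p - 1) // 2 + 2            # exponent, equal to 4 for n = 1

def f(x):
    return pow(x, d, p)

# differential uniformity: max over nonzero input difference a and output difference b
# of the number of x with f(x + a) - f(x) = b
uniformity = max(
    sum(1 for x in range(p) if (f((x + a) % p) - f(x)) % p == b)
    for a in range(1, p) for b in range(p)
)
print(uniformity)               # 3: the map is differentially 3-uniform over F_5
```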
Last updated:  2023-02-27
Optimal Single-Server Private Information Retrieval
Mingxun Zhou, Wei-Kai Lin, Yiannis Tselekounis, Elaine Shi
We construct a single-server pre-processing Private Information Retrieval (PIR) scheme with optimal bandwidth and server computation (up to poly-logarithmic factors), assuming hardness of the Learning With Errors (LWE) problem. Our scheme achieves amortized $\widetilde{O}_{\lambda}(\sqrt{n})$ server and client computation and $\widetilde{O}_\lambda(1)$ bandwidth per query, completes in a single roundtrip, and requires $\widetilde{O}_\lambda(\sqrt{n})$ client storage. In particular, we achieve a significant reduction in bandwidth over the state-of-the-art scheme by Corrigan-Gibbs, Henzinger, and Kogan (Eurocrypt'22): their scheme requires as much as $\widetilde{O}_{\lambda}(\sqrt{n})$ bandwidth per query, with comparable computational and storage overhead as ours.
Last updated:  2022-09-28
Practical Provably Secure Flooding for Blockchains
Chen-Da Liu-Zhang, Christian Matt, Ueli Maurer, Guilherme Rito, Søren Eller Thomsen
In recent years, permissionless blockchains have received a lot of attention both from industry and academia, where substantial effort has been spent to develop consensus protocols that are secure under the assumption that less than half (or a third) of a given resource (e.g., stake or computing power) is controlled by corrupted parties. The security proofs of these consensus protocols usually assume the availability of a network functionality guaranteeing that a block sent by an honest party is received by all honest parties within some bounded time. To obtain an overall protocol that is secure under the same corruption assumption, it is therefore necessary to combine the consensus protocol with a network protocol that achieves this property under that assumption. In practice, however, the underlying network is typically implemented by flooding protocols that are not proven to be secure in the setting where a fraction of the considered total weight can be corrupted. This has led to many so-called eclipse attacks on existing protocols and tailor-made fixes against specific attacks. To close this apparent gap, we present the first practical flooding protocol that provably delivers sent messages to all honest parties after a logarithmic number of steps. We prove security in the setting where all parties are publicly assigned a positive weight and the adversary can corrupt parties accumulating up to a constant fraction of the total weight. This can directly be used in the proof-of-stake setting, but is not limited to it. To prove the security of our protocol, we combine known results about the diameter of Erdős–Rényi graphs with reductions between different types of random graphs. We further show that the efficiency of our protocol is asymptotically optimal. The practicality of our protocol is supported by extensive simulations for different numbers of parties, weight distributions, and corruption strategies. The simulations confirm our theoretical results and show that messages are delivered quickly regardless of the weight distribution, whereas protocols that are oblivious of the parties' weights completely fail if the weights are unevenly distributed. Furthermore, the average message complexity per party of our protocol is within a small constant factor of such a protocol.
Last updated:  2022-05-23
Noise*: A Library of Verified High-Performance Secure Channel Protocol Implementations (Long Version)
Son Ho, Jonathan Protzenko, Abhishek Bichhawat, Karthikeyan Bhargavan
The Noise protocol framework defines a succinct notation and execution framework for a large class of 59+ secure channel protocols, some of which are used in popular applications such as WhatsApp and WireGuard. We present a verified implementation of a Noise protocol compiler that takes any Noise protocol, and produces an optimized C implementation with extensive correctness and security guarantees. To this end, we formalize the complete Noise stack in F*, from the low-level cryptographic library to a high-level API. We write our compiler also in F*, prove that it meets our formal specification once and for all, and then specialize it on-demand for any given Noise protocol, relying on a novel technique called hybrid embedding. We thus establish functional correctness, memory safety and a form of side-channel resistance for the generated C code for each Noise protocol. We propagate these guarantees to the high-level API, using defensive dynamic checks to prevent incorrect uses of the protocol. Finally, we formally state and prove the security of our Noise code, by building on a symbolic model of cryptography in F*, and formally link high-level API security goals stated in terms of security levels to low-level cryptographic guarantees. Ours are the first comprehensive verification results for a protocol compiler that targets C code and the first verified implementations of any Noise protocol. We evaluate our framework by generating implementations for all 59 Noise protocols and by comparing the size, performance, and security of our verified code against other (unverified) implementations and prior security analyses of Noise.
Last updated:  2022-05-23
Security Against Honorific Adversaries: Efficient MPC with Server-aided Public Verifiability
Li Duan, Yufan Jiang, Yong Li, Jörn Müller-Quade, Andy Rupp
Secure multiparty computation (MPC) allows distrustful parties to jointly compute functions while keeping their private inputs secret. MPC adversaries are often categorized as semi-honest and malicious, depending on whether they follow the protocol specifications or not. Covert security, first introduced by Aumann and Lindell in 2007, models a third type of active adversary who cheats but can be caught with some probability. However, this probability is predefined externally, and in current constructions misbehavior must be detected by other honest participants using cut-and-choose. In this paper, we propose a new security notion called security against honorific adversaries, who may cheat during the protocol execution but are extremely unwilling to be punished. Intuitively, honorific adversaries can cheat successfully, but decisive evidence of misbehavior will be left to honest parties with probability close to one. By introducing an independent but untrusted auditor to the MPC ideal functionality in the universal composability (UC) framework, we avoid heavy cryptographic machinery in detection and complicated discussions about the probability of being caught. With this new notion, we construct new provably secure garbled-circuit protocols without cut-and-choose that are much more efficient than those in the covert and malicious models, with only slightly more overhead than passively secure protocols.
Last updated:  2022-05-23
Weighted Attribute-Based Encryption with Parallelized Decryption
Alexandru Ionita
Unlike conventional ABE systems, which support Boolean attributes (with only two states, "1" and "0", or "Present" and "Absent"), weighted attribute-based encryption schemes also support numerical values attached to attributes, and each terminal node of the access structure contains a threshold for a minimum weight. We propose a weighted ABE system with an access policy of logarithmic expansion, obtained by dividing each weighted attribute into sub-attributes. On top of that, we show that decryption can be parallelized, leading to a notable improvement in running time compared to the serial version.
Last updated:  2022-06-22
Algorithm Substitution Attacks against Receivers
Marcel Armour, Bertram Poettering
This work describes a class of Algorithm Substitution Attacks (ASAs) generically targeting the receiver of a communication between two parties. Our work provides a unified framework that applies to any scheme where a secret key is held by the receiver; in particular, message authentication schemes (MACs), authenticated encryption (AEAD) and public key encryption (PKE). Our unified framework brings together prior work targeting MAC schemes and AEAD schemes; we extend prior work by showing that public key encryption may also be targeted. ASAs were initially introduced by Bellare, Paterson and Rogaway in light of revelations concerning mass surveillance, as a novel attack class against the confidentiality of encryption schemes. Such an attack replaces one or more of the regular scheme algorithms with a subverted version that aims to reveal information to an adversary (engaged in mass surveillance), while remaining undetected by users. Previous work looking at ASAs against encryption schemes can be divided into two groups. ASAs against PKE schemes target key generation by creating subverted public keys that allow an adversary to recover the secret key. ASAs against symmetric encryption target the encryption algorithm and leak information through a subliminal channel in the ciphertexts. We present a new class of attack that targets the decryption algorithm of an encryption scheme, for both symmetric and public key encryption, or the verification algorithm for an authentication scheme. We present a generic framework for subverting a cryptographic scheme between a sender and receiver, and show how a decryption oracle allows a subverter to create a subliminal channel which can be used to leak secret keys. We then show that the generic framework can be applied to authenticated encryption with associated data, message authentication schemes, public key encryption and KEM/DEM constructions. We discuss practical considerations and specific conditions that apply to particular schemes, strengthening the generic approach. Furthermore, we show how the hybrid subversion of key generation and decryption algorithms can be used to amplify the effectiveness of our decryption attack. We argue that this attack represents an attractive opportunity for a mass surveillance adversary. Our work serves to refine the ASA model and contributes to a series of papers that raises awareness and understanding about what is possible with ASAs.
Last updated:  2022-05-17
Distributed Blockchain Price Oracle
Léonard Lys, Maria Potop-Butucaru
Blockchain oracles are systems that connect blockchains with the outside world by interfacing with external data providers. They provide decentralized applications with the external information needed for smart contract execution. In this paper, we focus on decentralized price oracles, which are distributed systems that provide exchange rates of digital assets to smart contracts. They are the cornerstone of the safety of some decentralized finance applications such as stable coins or lending protocols. They consist of a network of nodes called oracles that gather information from off-chain sources such as an exchange market’s API and feed it to smart contracts. Among the desired properties of a price oracle system are low latency, availability, and low operating cost. Moreover, they should overcome constraints such as maintaining diverse data sources (the so-called freeloading problem) and tolerating Byzantine failures. In this paper, we define the distributed price oracle problem and present PoWacle, the first asynchronous decentralized oracle protocol that copes with Byzantine behavior.
Last updated:  2023-01-24
Combined Fault Injection and Real-Time Side-Channel Analysis for Android Secure-Boot Bypassing
Clément Fanjas, Clément Gaine, Driss Aboulkassimi, Simon Pontié, Olivier Potin
The Secure-Boot is a critical security feature in modern devices based on System-on-Chips (SoC). It ensures the authenticity and integrity of the code before its execution, preventing the SoC from running malicious code. To the best of our knowledge, this paper presents the first bypass of an Android Secure-Boot using Electromagnetic Fault Injection (EMFI). Two hardware characterization methods are combined to conduct this experiment. A real-time Side-Channel Analysis (SCA) is used to synchronize an EMFI during the Linux Kernel authentication step of the Android Secure-Boot of a smartphone-grade SoC. This new synchronization method is called Synchronization by Frequency Detection (SFD). It is based on the detection of the activation of a characteristic frequency in the target electromagnetic emanations. In this work we present a proof-of-concept of this new triggering method. By triggering the attack upon the activation of this characteristic frequency, we successfully bypassed this security feature, effectively running Android OS with a compromised Linux Kernel, with one success every 15 minutes.
Last updated:  2022-05-17
A Better Method to Analyze Blockchain Consistency
Lucianna Kiffer, Rajmohan Rajaraman, abhi shelat
The celebrated Nakamoto consensus protocol ushered in several new consensus applications including cryptocurrencies. A few recent works have analyzed important properties of blockchains, including, most significantly, consistency, which is a guarantee that all honest parties output the same sequence of blocks throughout the execution of the protocol. To establish consistency, the prior analysis of Pass, Seeman and shelat required a careful counting of certain combinatorial events that was difficult to apply to variations of Nakamoto. The work of Garay, Kiayias, and Leonardos provides another method of analyzing the blockchain under both a synchronous and partially synchronous setting. The contribution of this paper is the development of a simple Markov-chain based method for analyzing consistency properties of blockchain protocols. The method includes a formal way of stating strong concentration bounds as well as easy ways to concretely compute the bounds. We use our new method to answer a number of basic questions about consistency of blockchains: • Our new analysis provides a tighter guarantee on the consistency property of Nakamoto’s protocol, including for parameter regimes which previous work could not consider; • We analyze a family of delaying attacks and extend them to other protocols; • We analyze how long a participant should wait before considering a high-value transaction “confirmed”; • We analyze the consistency of CliqueChain, a variation of the Chainweb system; • We provide the first rigorous consistency analysis of GHOST under the partially synchronous setting and also analyze a folklore "balancing"-attack. In each case, we use our framework to experimentally analyze the consensus bounds for various network delay parameters and adversarial computing percentages. We hope our techniques enable authors of future blockchain proposals to provide a more rigorous analysis of their schemes.
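As a point of reference for the kind of quantity such consistency analyses bound (this is the classical gambler's-ruin estimate from Nakamoto's whitepaper, not the Markov-chain method of the paper above): an attacker controlling a fraction q < 1/2 of the hash power ever erases a deficit of z blocks with probability (q/(1-q))^z.

    # Classical gambler's-ruin estimate (Nakamoto, 2008) -- shown here only
    # for intuition; it is not the Markov-chain analysis of this paper.
    def catch_up_probability(q: float, z: int) -> float:
        p = 1.0 - q
        return 1.0 if q >= p else (q / p) ** z

    # A 30% attacker erases a 6-block deficit with probability ~ 0.006.
    print(catch_up_probability(0.30, 6))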
Last updated:  2023-02-10
A Nearly Tight Proof of Duc et al.'s Conjectured Security Bound for Masked Implementations
Loïc Masure, Olivier Rioul, François-Xavier Standaert
We prove a bound that approaches Duc et al.'s conjecture from Eurocrypt 2015 for the side-channel security of masked implementations. Let \(Y\) be a sensitive intermediate variable of a cryptographic primitive taking its values in a set \(\mathcal{Y}\). If \(Y\) is protected by masking (a.k.a. secret sharing) at order \(d\) (i.e., with \(d+1\) shares), then the complexity of any non-adaptive side-channel analysis --- measured by the number of queries to the target implementation required to guess the secret key with sufficient confidence --- is lower bounded by a quantity inversely proportional to the product of mutual informations between each share of \(Y\) and its respective leakage. Our new bound is nearly tight in the sense that each factor in the product has an exponent of \(-1\) as conjectured, and its multiplicative constant is \(\mathcal{O}\left(\log |\mathcal{Y}| \cdot |\mathcal{Y}|^{-1} \cdot C^{-d}\right)\), where \(C = 2 \log(2) \approx 1.38\). It drastically improves upon previous proven bounds, where the exponent was \(-1/2\) and the multiplicative constant was \(\mathcal{O}\left(|\mathcal{Y}|^{-d}\right)\). As a consequence for side-channel security evaluators, it is possible to provably and efficiently infer the security level of a masked implementation by simply analyzing each individual share, under the necessary condition that the leakages of these shares are independent.
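Schematically, in generic notation (not quoted from the paper; constants suppressed), writing \(N_a\) for the number of side-channel queries and \(\mathrm{MI}(Y_i; L_i)\) for the mutual information between the \(i\)-th share and its leakage, the new bound versus the previous one reads:
\[
  N_a \;\gtrsim\; \frac{\log|\mathcal{Y}|}{|\mathcal{Y}|\,C^{d}} \cdot \prod_{i=0}^{d} \mathrm{MI}(Y_i; L_i)^{-1}, \qquad C = 2\log(2) \approx 1.38,
\]
\[
  \text{versus previously}\quad N_a \;\gtrsim\; \frac{1}{|\mathcal{Y}|^{d}} \cdot \prod_{i=0}^{d} \mathrm{MI}(Y_i; L_i)^{-1/2}.
\]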
Last updated:  2022-05-17
TenderTee: Secure Tendermint
Lionel Beltrando, Maria Potop-Butucaru, Jose Alfaro
Blockchain and distributed ledger technologies have emerged as some of the most revolutionary distributed systems, with the goal of eliminating centralised intermediaries and installing distributed trusted services. They facilitate trustworthy trades and exchanges over the Internet, power cryptocurrencies, ensure transparency for documents, and much more. Committee-based blockchains are considered today as a viable alternative to the original proof-of-work paradigm, since they offer strong consistency and are energy efficient. One of the most popular committee-based blockchains is Tendermint, used as the core of several popular blockchains such as Tezos, Binance Smart Chain, or Cosmos. Interestingly, Tendermint, like many other committee-based blockchains, is designed to tolerate one third of Byzantine nodes. In this paper we propose TenderTee, an enhanced version of Tendermint, able to tolerate one half of Byzantine nodes. The resilience improvement is due to the use of a trusted abstraction, a light version of attested append-only memory, which makes the protocol immune to equivocation (i.e., the behavior of a faulty node that sends different, conflicting messages to different nodes). Furthermore, we prove the correctness of TenderTee for both one-shot and repeated consensus specifications.
Last updated:  2022-05-24
Verifiable and forward private conjunctive keyword search from DIA tree
Laltu Sardar, Sushmita Ruj
In a dynamic searchable encryption (DSE) scheme, a cloud server can search on encrypted data that the client stores and updates from time to time. Due to information leakage during the search and update phases, DSE schemes are prone to file injection attacks. If, during document addition, a DSE scheme does not leak any information about the previous search results, the scheme is said to be forward private. A DSE scheme that supports conjunctive keyword search should be forward private. There has been a fair deal of work on designing forward private DSE schemes in the presence of an honest-but-curious cloud server. However, a malicious cloud server might not run the protocol correctly while trying to remain undetected. In a verifiable DSE, the cloud server not only returns the result of a search query but also provides proof that the result is computed correctly. We design a forward private DSE scheme that supports conjunctive keyword search. At the heart of the construction is our proposed data structure called the dynamic interval accumulation tree (DIA tree). It is an accumulator-based authentication tree that efficiently returns both membership and non-membership proofs. Using the DIA tree, we can convert any single-keyword forward private DSE scheme into a verifiable forward private DSE scheme that can support conjunctive queries as well. Our proposed scheme has the same storage as the base DSE scheme and low computational overhead on the client side. We have shown the efficiency of our design by comparing it with existing conjunctive DSE schemes. The comparison also shows that our scheme is suitable for practical use.
Last updated:  2022-05-17
Foundations of Dynamic BFT
Sisi Duan, Haibin Zhang
This paper studies dynamic BFT, where replicas can join and leave the system dynamically, a primitive that is nowadays increasingly needed. We provide a formal treatment for dynamic BFT protocols, endowing them with a flexible syntax and various security definitions. We demonstrate the challenges of extending static BFT to dynamic BFT. Then we design and implement Dyno, a highly efficient dynamic BFT protocol under the partial synchrony model. We show that Dyno can seamlessly handle membership changes without incurring performance degradation.
Last updated:  2022-05-17
Zero Knowledge Proofs of Elliptic Curve Inner Products from Principal Divisors and Weil Reciprocity
Liam Eagen
Zero Knowledge proofs of Elliptic Curve Inner Products (ECIPs) and elliptic curve operations more generally are an increasingly important part of zero knowledge protocols and a significant bottleneck in recursive proof composition over amicable cycles of elliptic curves. To prove ECIPs more efficiently, I represent a collection of points that sum to zero using a polynomial element of the function field and evaluate this function at a random principal divisor. By Weil reciprocity, this is equal to the function interpolating the random divisor evaluated at the original points. Taking the logarithmic derivative of both expressions allows the prover to use a similar technique to the Bulletproofs++ permutation argument, take linear combinations of the logarithmic derivatives of divisor witnesses, and collect terms for the same basis point by adding the multiplicities. The linear combination can be random or can be structured to cancel intermediate points in computing the sum. Since the multiplicities are field elements, this system can prove ECIP relations in zero knowledge with respect to the linear combination, the curve points, or both. Compared to existing techniques, the witness size is reduced by up to a factor of 10 and the number of multiplications by a factor of about 100 with significantly more flexibility in the organization of the protocol. The specific improvement will depend on the instantiating proof system, number of curve points, and which information is zero knowledge. This technique also works, with small modification, for proving multiexponentiations in the multiplicative group of the field.
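For reference, the Weil reciprocity identity invoked above can be stated in its standard form (generic notation, not quoted from the paper): for two nonzero functions \(f\) and \(g\) on a curve whose divisors have disjoint supports,
\[
  f(\operatorname{div}(g)) \;=\; g(\operatorname{div}(f)),
  \qquad\text{where}\qquad
  f\Big(\sum_P n_P\,(P)\Big) \;=\; \prod_P f(P)^{n_P}.
\]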
Last updated:  2022-05-17
On the Cryptographic Fragility of the Telegram Ecosystem
Theo von Arx, Kenneth G. Paterson
Telegram is a popular messenger with more than 550 million monthly active users and a large ecosystem of different clients. Telegram has its own bespoke transport layer security protocol, MTProto 2.0. This protocol was recently subjected to a detailed study by Albrecht et al. (IEEE S&P 2022). They gave attacks on the protocol and its implementations, along with a security proof for a modified version of the protocol. We complement that study by analysing a range of third-party client implementations of MTProto 2.0. We report practical replay attacks for the Pyrogram, Telethon and GramJS clients, and a more theoretical timing attack against the MadelineProto client. We show how vulnerable third-party clients can affect the security of the entire ecosystem, including official clients. Our analysis reveals that many third-party clients fail to securely implement MTProto 2.0. We discuss the reasons for these failures, focussing on complications in the design of MTProto 2.0 that lead developers to omit security-critical features or to implement the protocol in an insecure manner. We also discuss changes that could be made to MTProto 2.0 to remedy this situation. Overall, our work highlights the cryptographic fragility of the Telegram ecosystem.
Last updated:  2022-12-16
A CONCRETE approach to torus fully homomorphic encryption
Maria Ferrara, Antonio Tortora
Homomorphic encryption allows one to operate on encrypted data, making any action less vulnerable to hacking. The implementation of a fully homomorphic cryptosystem was long considered impracticable. A breakthrough was achieved only in 2009, thanks to Gentry and his innovative idea of bootstrapping. TFHE is a torus-based fully homomorphic cryptosystem using the bootstrapping technique. This paper aims to present TFHE from an algebraic point of view, starting from the CONCRETE library which implements TFHE.
Last updated:  2022-05-25
On the Security Proof of CKO+21 Secret Sharing Scheme
Yupu Hu, Shanshan Zhang, Baocang Wang, Siyue Dong
At CRYPTO 2021, Nishanth Chandran, Bhavana Kanukurthi, Sai Lakshmi Bhavana Obattu, and Sruthi Sekar presented a novel secret sharing scheme, which we call the CKO+21 scheme. The scheme uses Shamir secret sharing and randomness extractors as its basic components to build a multi-layer encapsulation structure. The authors claimed that the CKO+21 scheme satisfies “leakage resilience”, that is, privacy still holds under both “not enough revealing” and “appropriate leakage”. More importantly, the authors presented a lengthy proof of the security of the CKO+21 scheme. In this paper we consider only the simple case of \((n,t)\) threshold secret sharing. We identify the following five facts about the CKO+21 scheme, which are the basic reasons we dispute its security proof. (1) In the expression of a CKO+21 share, some bottom Shamir share is simply included rather than encapsulated. (2) The leakage of a share is not a random leakage but depends on the attacker's query, that is, it is a chosen leakage. (3) The permitted leakage length of each share is proportional to the share length. (4) The bottom Shamir scheme has the special feature that, with the share length \(l^{*}\) kept unchanged, the number of shares \(n\), the threshold \(t\), and the difference \(n-t+1\) can all be made arbitrarily large, as long as \(t<n\). (5) No additional assumption is made about the bottom Shamir scheme; in particular, its “leakage recoverability” and “contaminated leakage irrecoverability”, as defined in this paper, are not clearly ruled out. We point out that the CKO+21 scheme did not successfully prove its security: as long as the bottom Shamir secret sharing scheme satisfies both “leakage recoverability” and “contaminated leakage irrecoverability”, the security proof of the CKO+21 scheme is wrong. It needs to be pointed out that “leakage recoverability” and “contaminated leakage irrecoverability” are not automatically ruled out by the “privacy” of the Shamir scheme, and up to now there is no proof that the Shamir scheme does not satisfy “leakage recoverability” or “contaminated leakage irrecoverability”. The detailed contribution of this paper is as follows. The CKO+21 scheme defines several leakage models: \(\mathsf{Leak}{\mathsf{B}_0}\), \(\mathsf{Leak}{\mathsf{A}_1}\), \(\mathsf{Leak}{\mathsf{B}_1}\), \(\mathsf{Leak}{\mathsf{A}_2}\), \(\mathsf{Leak}{\mathsf{B}_2}\), \(\cdots\), \(\mathsf{Leak}{\mathsf{A}_h}\), \(\mathsf{Leak}{\mathsf{B}_h}\), \(\mathsf{Leak}{\mathsf{C}}\), where \(\mathsf{Leak}{\mathsf{B}_0}\) is the practical leakage model and \(\mathsf{Leak}{\mathsf{C}}\) is a leakage model independent of the secret message. The CKO+21 scheme claims that an attacker cannot distinguish two adjacent leakage models, and hence that the scheme is “leakage resilient”. We point out that, if the bottom Shamir scheme satisfies both “leakage recoverability” and “contaminated leakage irrecoverability”, the attacker can distinguish \(\mathsf{Leak}{\mathsf{B}_0}\) and \(\mathsf{Leak}{\mathsf{A}_1}\) with non-negligible probability. Conversely, if the bottom Shamir scheme does not satisfy “leakage recoverability”, then the Shamir scheme itself already has some ability to resist leakage, and the bulky structure of the CKO+21 scheme is unnecessary. When the leakage function is extended to a general function, the security proof of the CKO+21 scheme can be refuted even more easily.
Because the CKO+21 scheme does not clearly restrict the class of leakage functions (in fact, the leakage function should be restricted to simple functions), this paper chooses a \(P/poly\) function as the leakage function, enabling an attacker to distinguish \(\mathsf{Leak}{\mathsf{B}_0}\) and \(\mathsf{Leak}{\mathsf{A}_1}\) even more simply and quickly. In detail, under the first interpretation of the Shamir parameters, the attacker queries, from each share, the higher \(\tau\) bits of a modulo-\(p\) linear function of the bottom Shamir share, and then distinguishes \(\mathsf{Leak}{\mathsf{B}_0}\) and \(\mathsf{Leak}{\mathsf{A}_1}\).
Last updated:  2023-03-29
Chaghri --- an FHE-friendly Block Cipher
Tomer Ashur, Mohammad Mahzoun, Dilara Toprakhisar
Recent progress in practical applications of secure computation protocols has also attracted attention to the symmetric-key primitives underlying them. Whereas traditional ciphers have evolved to be efficient with respect to certain performance metrics, advanced cryptographic protocols call for a different focus. The so-called arithmetic complexity is viewed through the number and layout of non-linear operations in the circuit implemented by the protocol. Symmetric-key algorithms that are optimized with respect to this metric are said to be algebraic ciphers. Previous work targeting ZK and MPC protocols delivered great improvements in the performance of these applications, both in the lab and in practical use. Interestingly, despite its apparent benefits to privacy-aware cloud computing, algebraic ciphers targeting FHE did not attract similar attention. In this paper we present Chaghri, an FHE-friendly block cipher enabling efficient transciphering in BGV-like schemes. A complete Chaghri circuit can be implemented using only 16 multiplications, 32 Frobenius automorphisms and 32 rotations, all arranged in a depth-32 circuit. Our HElib implementation achieves a throughput of 0.26 seconds per bit, which is 65% faster than AES in the same setting.
Last updated:  2022-05-17
Software Evaluation for Second Round Candidates in NIST Lightweight Cryptography
Ryota Hira, Tomoaki Kitahara, Daiki Miyahara, Yuko Hara-Azumi, Yang Li, Kazuo Sakiyama
Lightweight cryptography algorithms are increasingly valuable because they can enhance security under limited resources. The National Institute of Standards and Technology (NIST) is working on standardising lightweight authenticated encryption with associated data. Thirty-two candidates are included in the second round of the NIST selection process, and their specifications differ in various respects. These differences in specification are expected to affect each algorithm's performance. This study aims to facilitate the selection and design of such algorithms according to the usage scenario. For this purpose, we investigate and compare the 32 lightweight cryptography algorithm candidates using their specifications and software implementations. The results indicate that latency and memory usage depend on the parameters and nonlinear operations. In terms of memory usage, our experiments on an ARM platform show differences in ROM usage but not in RAM usage. We also discovered that the data size that can be processed efficiently differs according to the padding scheme, mode of operation, and block size.
Last updated:  2024-02-04
Secure Merge in Linear Time and O(log log N) Rounds
Mark Blunk, Paul Bunn, Samuel Dittmer, Steve Lu, and Rafail Ostrovsky
The problem of Secure Merge consists of combining two sorted lists (which are either held separately by two parties, or secret-shared among two or more parties), and outputting a single merged (sorted) list, secret-shared among all parties. Just as insecure algorithms for comparison-based sorting are slower than merging (i.e., for lists of size $n$, $\Theta(n \log n)$ versus $\Theta(n)$), we explore whether the analogous separation exists for secure protocols; namely, if there exist techniques for performing secure merge that are more performant than simply invoking a secure multiparty sort. We answer this question affirmatively by constructing a semi-honest protocol with optimal $\Theta(n)$ communication and computation, and $\Theta(\log \log n)$ rounds of communication. Our results are based solely on black-box use of basic secure primitives, such as secure comparison and secure shuffle. Since two-party secure primitives require computational assumptions, while three-party secure primitives do not, our results imply a computationally secure two-party secure merge protocol and an information-theoretically secure three-party secure merge protocol with these bounds. Secure sort is a fundamental building block used in many MPC protocols, e.g., various private set intersection protocols and oblivious RAM protocols. More efficient secure sort can lead to concrete improvements in the overall run-time. Since secure sort can often be replaced by secure merge -- inputs (from different participating players) can be presorted -- an efficient secure merge protocol has wide applicability. There are also a range of applications in the field of secure databases, including secure database joins, as well as updatable database storage and search, whereby secure merge can be used to insert new entries into an existing (sorted) database. In building our secure merge protocol, we develop several subprotocols that may be of independent interest. For example, we develop a protocol for secure asymmetric merge (where one list is much larger than the other).
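For orientation (illustrative only, not taken from the paper): the insecure baseline that a secure merge protocol emulates is the standard linear-time two-pointer merge below; the challenge addressed above is achieving the same Theta(n) cost obliviously, on secret-shared inputs, without revealing the comparison results.

    # Plain (insecure) linear-time merge of two sorted lists -- the baseline
    # that a secure-merge protocol must emulate without leaking comparisons.
    def merge(a, b):
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                out.append(a[i]); i += 1
            else:
                out.append(b[j]); j += 1
        out.extend(a[i:]); out.extend(b[j:])
        return out

    print(merge([1, 4, 9], [2, 3, 10]))  # [1, 2, 3, 4, 9, 10]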
Last updated:  2022-05-17
Unnecessary Input Heuristics & PayJoin Transactions
Simin Ghesmati, Andreas Kern, Aljosha Judmayer, Nicholas Stifter and
Over the years, several privacy attacks targeted at UTXO-based cryptocurrencies such as Bitcoin have been proposed. This has led to an arms race between increasingly sophisticated analysis approaches and a continuous stream of proposals that seek to counter such attacks against users' privacy. Recently, PayJoin was presented as a new technique for mitigating one of the most prominent heuristics, namely \emph{common input ownership}. This heuristic assumes that the inputs of a transaction, and thus the associated addresses, belong to the same entity. However, a problem with PayJoin is that implementations can accidentally reveal such transactions if the corresponding inputs from involved parties are not chosen carefully. Specifically, if a transaction is formed in a way such that it contains seemingly unnecessary inputs, it can be identified through the so-called unnecessary input heuristic (UIH). What is not yet clear is the impact of naive coin selection algorithms within PayJoin implementations that may cause such transactions to be flagged as PayJoin. This paper investigates the resemblance of PayJoin transactions to ordinary payment transactions by examining the significance of the unnecessary input heuristic in transactions with more than one input and exactly two outputs, which is the common template of recent PayJoin transactions.
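For illustration only (this is one common formulation of the heuristic found in the literature; it is not necessarily the exact variant analyzed in the paper): a two-output transaction is flagged when its largest output could still be paid after dropping its smallest input.

    # Illustrative unnecessary-input check (one common UIH variant; exact
    # rules differ across the literature). For a 2-output transaction, if the
    # largest output is still covered after dropping the smallest input, that
    # input looks unnecessary -- atypical for an ordinary payment transaction.
    def has_unnecessary_input(inputs, outputs):
        if len(inputs) < 2 or len(outputs) != 2:
            return False
        return sum(inputs) - min(inputs) >= max(outputs)

    # A PayJoin built with careless coin selection can trip this check:
    print(has_unnecessary_input([50_000, 80_000], [60_000, 65_000]))  # True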
Last updated:  2022-05-17
Efficient Lifting for Shorter Zero-Knowledge Proofs and Post-Quantum Signatures
Daniel Kales, Greg Zaverucha
MPC-in-the-head based zero-knowledge proofs allow one to prove knowledge of a preimage for a circuit defined over a finite field F. In recent proofs the soundness depends on the size of F, so small fields require more parallel repetitions and therefore produce larger proofs. In this paper we develop and systematically apply lifting strategies to such proof protocols in order to increase soundness and reduce proof size. The strategies are (i) lifting parts of the protocol to extension fields of F, (ii) using reverse-multiplication-friendly embeddings to pack elements of F into a larger field, and (iii) using an alternative circuit representation. Using a combination of these strategies at different points in the protocol, we design two new proof systems well suited to small circuits defined over small fields. As a case study we consider efficient constructions of post-quantum signatures, where a signature is a proof of knowledge of a one-way function preimage, and two commonly used one-way functions are defined over small fields (AES and LowMC). We find that carefully applying these lifting strategies gives shorter signatures than the state-of-the-art: our AES-based signatures are 1.3x shorter than Banquet (PKC 2021) and our LowMC-based signatures are almost 2x shorter than the NIST-candidate algorithm Picnic3. We implement our schemes and provide benchmarks. Finally, we also give other optimizations: some generally applicable to this class of proofs, and some specific to the circuits we focused on.
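As a rough illustration of why small fields inflate proof size (an oversimplified model of our own, ignoring the protocol-specific soundness terms): if each parallel repetition lets a cheating prover succeed with probability about 1/|F|, then reaching 2^-lambda soundness needs roughly lambda / log2|F| repetitions, so lifting to an extension field directly cuts the repetition count.

    import math

    # Oversimplified model (assumption, not the paper's exact soundness formula):
    # one repetition over F cheats with probability ~ 1/|F|, so reaching
    # 2^-security_bits error takes ~ security_bits / log2|F| repetitions.
    def repetitions_needed(field_bits, security_bits=128):
        return math.ceil(security_bits / field_bits)

    print(repetitions_needed(8))    # GF(2^8):  16 repetitions
    print(repetitions_needed(64))   # GF(2^64):  2 repetitions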
Last updated:  2022-09-05
Doubly Efficient Interactive Proofs over Infinite and Non-Commutative Rings
Eduardo Soria-Vazquez
We introduce the first proof system for layered arithmetic circuits over an arbitrary ring $R$ that is (possibly) non-commutative and (possibly) infinite, while only requiring black-box access to its arithmetic and a subset $A \subseteq R$. Our construction only requires limited commutativity and regularity properties from $A$, similar to recent work on efficient information theoretic multi-party computation over non-commutative rings by Escudero and Soria-Vazquez (CRYPTO 2021), but furthermore covering infinite rings. We achieve our results through a generalization of GKR-style interactive proofs (Goldwasser, Kalai and Rothblum, Journal of the ACM, 2015). When $A$ is a subset of the center of $R$, generalizations of the sum-check protocol and other building blocks are not too problematic. The case when the elements of $A$ only commute with each other, on the other hand, introduces a series of challenges. In order to overcome those, we need to introduce a new definition of polynomial ring over a non-commutative ring, the notion of left (and right) multi-linear extensions, modify the layer consistency equation and adapt the sum-check protocol. Despite these changes, our results are compatible with recent developments such as linear time provers. Moreover, for certain rings our construction achieves provers that run in sublinear time in the circuit size. We obtain such results both for known cases, such as matrix and polynomial rings, and for new ones, such as some rings resulting from Clifford algebras. Besides efficiency improvements in computation and/or round complexity for several instantiations, the core conclusion of our results is that state-of-the-art doubly efficient interactive proofs do not require much algebraic structure. This enables exact rather than approximate computation over infinite rings as well as agile proof systems, where the black-box choice of the underlying ring can be easily switched through the software life cycle.
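For context, the classical sum-check claim that GKR-style proofs rely on can be written in standard notation as follows (this is the usual statement over a field, not the paper's non-commutative generalization): the prover convinces the verifier, over $n$ rounds of univariate checks at random points, that a low-degree polynomial $g$ in $n$ variables satisfies
\[
  \sum_{b_1 \in \{0,1\}} \sum_{b_2 \in \{0,1\}} \cdots \sum_{b_n \in \{0,1\}} g(b_1, b_2, \ldots, b_n) \;=\; H.
\]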
Last updated:  2022-10-14
A survey of elliptic curves for proof systems
Diego F. Aranha, Youssef El Housni, Aurore Guillevic
Elliptic curves have become key ingredients for instantiating zero-knowledge proofs and more generally proof systems. Recently, there have been many tailored constructions of these curves that aim at efficiently implementing different kinds of proof systems. In this survey we provide the reader with a comprehensive overview on existing work and revisit the contributions in terms of efficiency and security. We present an overview at three stages of the process: curves to instantiate a SNARK, curves to instantiate a recursive SNARK, and also curves to express an elliptic-curve related statement. We provide new constructions of curves for SNARKs and generalize the state-of-the-art constructions for recursive SNARKs. We also exhaustively document the existing work and open-source implementations.
Last updated:  2022-08-17
Towards Practical Homomorphic Time-Lock Puzzles: Applicability and Verifiability
Yi Liu, Qi Wang, Siu-Ming Yiu
Time-lock puzzle schemes allow one to encrypt messages for the future. More concretely, one can efficiently generate a time-lock puzzle for a secret/solution $s$, such that $s$ remains hidden until a specified time $T$ has elapsed, even for any parallel adversaries. However, since computation on secrets within multiple puzzles can be performed only when all of these puzzles are solved, the usage of classical time-lock puzzles is greatly limited. Homomorphic time-lock puzzle (HTLP) schemes were thus proposed to allow evaluating functions over puzzles directly without solving them. However, although efficient HTLP schemes exist, more improvements are still needed for practicability. In this paper, we improve HTLP schemes to broaden their application scenarios from the aspects of applicability and verifiability. In terms of applicability, we design the first multiplicatively HTLP scheme with the solution space over $\mathbb{Z}_n^*$, which is more expressive than the original one, e.g., in representing integers. Then, to fit HTLP into scenarios requiring verifiability that is missing in existing schemes, we propose three simple and fast protocols for both the additively HTLP scheme and our multiplicatively HTLP scheme. The first two protocols allow a puzzle solver to convince others of the correctness of the solution or the invalidity of the puzzle so that others do not need to solve the puzzle themselves. The third protocol allows a puzzle generator to prove the validity of his puzzles. It is shown that a puzzle in our scheme is only $1.25$KB, and one multiplication on puzzles takes only $0.01$ms. Meanwhile, the overhead of each protocol is less than $0.6$KB in communication and $40$ms in computation. Hence, HTLP still demonstrates excellent efficiency in both communication and computation with these versatile properties.
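For background, a minimal sketch of the classical Rivest-Shamir-Wagner time-lock puzzle that (H)TLP constructions build on (it is not the homomorphic scheme described above, and the parameters below are toy values): the generator uses phi(n) as a trapdoor to compute g^(2^T) mod n with a single modular exponentiation, while a solver without the trapdoor must perform T sequential squarings.

    import random

    # Classical RSW time-lock puzzle (background primitive only).
    def generate_puzzle(s, T, p, q):
        n = p * q
        phi = (p - 1) * (q - 1)
        g = random.randrange(2, n)  # gcd(g, n) = 1 with overwhelming probability
        e = pow(2, T, phi)          # trapdoor: reduce the exponent mod phi(n)
        mask = pow(g, e, n)         # equals g^(2^T) mod n
        return n, g, T, (s + mask) % n

    def solve_puzzle(n, g, T, c):
        mask = g
        for _ in range(T):          # T sequential, non-parallelizable squarings
            mask = (mask * mask) % n
        return (c - mask) % n

    # Toy Mersenne primes; real deployments use large random safe primes.
    n, g, T, c = generate_puzzle(s=42, T=1000, p=2**127 - 1, q=2**89 - 1)
    print(solve_puzzle(n, g, T, c))  # 42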
Last updated:  2022-05-17
Revisiting the Uber Assumption in the Algebraic Group Model: Fine-Grained Bounds in Hidden-Order Groups and Improved Reductions in Bilinear Groups
Lior Rotem
We prove strong security guarantees for a wide array of computational and decisional problems, both in hidden-order groups and in bilinear groups, within the algebraic group model (AGM) of Fuchsbauer, Kiltz and Loss (CRYPTO '18). As our first contribution, we put forth a new fine-grained variant of the Uber family of assumptions in hidden-order groups. This family includes in particular the repeated squaring function of Rivest, Shamir and Wagner, which underlies their time-lock puzzle as well as the main known candidates for verifiable delay functions; and a computational variant of the generalized BBS problem, which underlies the timed commitments of Boneh and Naor (CRYPTO '00). We then provide two results within a variant of the AGM, which show that the hardness of solving problems in this family in a less-than-trivial number of steps is implied by well-studied assumptions. The first reduction may be applied in any group (and in particular, class groups), and is to the RSA assumption; and our second reduction is in RSA groups with a modulus which is the product of two safe primes, and is to the factoring assumption. Additionally, we prove that the hardness of any computational problem in the Uber family of problems in bilinear groups is implied by the hardness of the $q$-discrete logarithm problem. The parameter $q$ in our reduction is the maximal degree in which a variable appears in the polynomials which define the specific problem within the Uber family. This improves upon a recent result of Bauer, Fuchsbauer and Loss (CRYPTO '20), who obtained a similar implication but for a parameter $q$ which is lower bounded by the maximal total degree of one of the above polynomials. We discuss the implications of this improvement to prominent group key-exchange protocols.
Last updated:  2022-05-17
A Fully-Constructive Discrete-Logarithm Preprocessing Algorithm with an Optimal Time-Space Tradeoff
Lior Rotem, Gil Segev
Identifying the concrete hardness of the discrete logarithm problem is crucial for instantiating a vast range of cryptographic schemes. Towards this goal, Corrigan-Gibbs and Kogan (EUROCRYPT '18) extended the generic-group model for capturing "preprocessing" algorithms, offering a tradeoff between the space $S$ required for storing their preprocessing information, the time $T$ required for their online phase, and their success probability. Corrigan-Gibbs and Kogan proved an upper bound of $\widetilde{O}(S T^2/N)$ on the success probability of any such algorithm, where $N$ is the prime order of the group, matching the known preprocessing algorithms. However, the known algorithms assume the availability of truly random hash functions, without taking into account the space required for storing them as part of the preprocessing information, and the time required for evaluating them in essentially each and every step of the online phase. This led Corrigan-Gibbs and Kogan to pose the open problem of designing a discrete-logarithm preprocessing algorithm that is fully constructive in the sense that it relies on explicit hash functions whose description lengths and evaluation times are taken into account in the algorithm's space-time tradeoff. We present a fully constructive discrete-logarithm preprocessing algorithm with an asymptotically optimal space-time tradeoff (i.e., with success probability $\widetilde{\Omega}(S T^2/N)$). In addition, we obtain an algorithm that settles the corresponding tradeoff for the computational Diffie-Hellman problem. Our approach is based on derandomization techniques that provide rather weak independence guarantees. On the one hand, we show that such guarantees can be realized in our setting with only a minor efficiency overhead. On the other hand, exploiting such weak guarantees requires a more subtle and in-depth analysis of the underlying combinatorial structure compared to that of the known preprocessing algorithms and their analyses.
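Schematically (generic notation, polylogarithmic factors suppressed, not quoted from the paper): a preprocessing algorithm storing $S$ bits of advice and running in online time $T$ succeeds in a group of prime order $N$ with probability
\[
  \epsilon \;=\; \widetilde{\Theta}\!\left(\frac{S\,T^{2}}{N}\right),
\]
i.e., the Corrigan-Gibbs-Kogan upper bound $\widetilde{O}(ST^2/N)$ is matched, up to polylogarithmic factors, by the fully constructive algorithm described above.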
Last updated:  2022-06-07
Ponyta: Foundations of Side-Contract-Resilient Fair Exchange
Hao Chung, Elisaweta Masserova, Elaine Shi, Sri AravindaKrishnan Thyagarajan
Fair exchange is a fundamental primitive for blockchains, and is widely adopted in applications such as atomic swaps, payment channels, and DeFi. Most existing designs of blockchain-based fair exchange protocols consider only the users as strategic players, and assume honest miners. However, recent works revealed that the fairness of commonly deployed fair exchange protocols can be completely broken in the presence of user-miner collusion. In particular, a user can bribe the miners to help it cheat — a phenomenon also referred to as Miner Extractable Value (MEV). We provide the first formal treatment of side-contract-resilient fair exchange. We propose a new fair exchange protocol called Ponyta, and we prove that the protocol is incentive compatible in the presence of user-miner collusion. In particular, we show that Ponyta satisfies a coalition-resistant Nash equilibrium. Further, we show how to use Ponyta to realize a cross-chain coin swap application, and prove that our coin swap protocol also satisfies coalition-resistant Nash equilibrium. Our work helps to lay the theoretical groundwork for studying side-contract-resilient fair exchange. Finally, we present practical instantiations of Ponyta in Bitcoin and Ethereum with minimal overhead in terms of costs for the users involved in the fair exchange, thus showcasing instantiability of Ponyta with a wide range of cryptocurrencies.