All papers in 2021 (Page 2 of 1705 results)

Last updated:  2021-12-31
Inflation-Tracking Proof-of-Work Crypto-Currencies
Charanjit S. Jutla
We show that Bitcoin and other egalitarian crypto-currencies are unstable as a store of value, as they fail to track inflation of local currencies closely and their price dynamics are driven purely by speculation. Based on rational expectations equilibrium, we argue that if the coins awarded during mining are increased in proportion to the increase in difficulty of the underlying cryptographic puzzle, then the price of the coin is likely to track inflation of local currencies closely over the medium to long term. Further, a hyper-geometric tapering, instead of a geometric tapering, of the mining award over time is recommended for bootstrapping interest in the crypto-currency.
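As a thought experiment (not code from the paper), the sketch below contrasts a Bitcoin-style geometric taper with one plausible reading of a hyper-geometric taper (successive-term ratios that are a rational function of time, so the award decays polynomially rather than halving), on top of a difficulty-indexed award; every name and parameter here is illustrative.

```python
def geometric_taper(t, ratio=0.5, epoch=210_000):
    # Bitcoin-style tapering: the award halves every `epoch` blocks.
    return ratio ** (t // epoch)

def hypergeometric_taper(t, a=1.0, b=2.0):
    # One plausible reading of "hyper-geometric" tapering: the ratio of
    # successive terms is rational in t; for a=1, b=2 the taper decays
    # like 1/(t+1) -- far slower than a geometric halving schedule.
    taper = 1.0
    for k in range(t):
        taper *= (k + a) / (k + b)
    return taper

def mining_award(t, difficulty, base_difficulty, base_award=50.0,
                 taper=geometric_taper):
    # Inflation-tracking idea: scale the award in proportion to the
    # increase in puzzle difficulty, then apply the chosen taper.
    return base_award * (difficulty / base_difficulty) * taper(t)

print(mining_award(100, difficulty=3.0e13, base_difficulty=1.0e13,
                   taper=hypergeometric_taper))
```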
Last updated:  2022-12-01
The most efficient indifferentiable hashing to elliptic curves of $j$-invariant $1728$
Dmitrii Koshelev
This article makes an important contribution to solving the long-standing problem of whether all elliptic curves can be equipped with a hash function (indifferentiable from a random oracle) whose running time amounts to one exponentiation in the basic finite field $\mathbb{F}_{\!q}$. More precisely, we construct a new indifferentiable hash function to any ordinary elliptic $\mathbb{F}_{\!q}$-curve $E_a$ of $j$-invariant $1728$ at the cost of extracting one quartic root in $\mathbb{F}_{\!q}$. As is known, the latter operation is equivalent to one exponentiation in the finite fields encountered in practice. In comparison, the previous fastest random oracles to $E_a$ require two exponentiations in $\mathbb{F}_{\!q}$. Since it is highly unlikely that there is a hash function to an elliptic curve without any exponentiation at all (even if the curve is supersingular), the new result seems unimprovable.
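The claim that a quartic root costs a single exponentiation can be checked directly for one illustrative congruence class of $q$. The Python sketch below (an assumption-laden toy, not the paper's hash function) handles $q \equiv 5 \pmod 8$, where $(q-1)/4$ is odd and hence $4$ is invertible modulo it; it needs Python 3.8+ for the modular inverse via `pow`.

```python
def quartic_root(a, q):
    """Quartic root in GF(q) via a single exponentiation, for the
    illustrative case q = 5 (mod 8): then m = (q-1)/4 is odd, so for
    e = 4^{-1} mod m and any fourth power a, (a**e)**4 == a mod q."""
    assert q % 8 == 5
    m = (q - 1) // 4
    e = pow(4, -1, m)            # modular inverse (Python 3.8+)
    r = pow(a, e, q)             # the single exponentiation
    if pow(r, 4, q) != a:
        raise ValueError("a is not a fourth power in GF(q)")
    return r

# Example: 3 = 2**4 mod 13, and one pow() recovers a quartic root.
assert pow(quartic_root(3, 13), 4, 13) == 3
```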
Last updated:  2023-07-03
CHEX-MIX: Combining Homomorphic Encryption with Trusted Execution Environments for Two-party Oblivious Inference in the Cloud
Deepika Natarajan, Andrew Loveless, Wei Dai, Ronald Dreslinski
Data, when coupled with state-of-the-art machine learning models, can enable remarkable applications. But there exists an underlying tension: users wish to keep their data private, and model providers wish to protect their intellectual property. Homomorphic encryption (HE) and multi-party computation (MPC) techniques have been proposed as solutions to this problem; however, both techniques require model providers to fully trust the server performing the machine learning computation. This limits the scale of inference applications, since it prevents model providers from leveraging shared public cloud infrastructures. In this work, we present CHEX-MIX, a solution to the problem of privacy-preserving machine learning between two mutually distrustful parties in an untrusted cloud setting. CHEX-MIX relies on a combination of HE and trusted execution environments (TEEs), using HE to provide clients with confidentiality guarantees, and TEEs to provide model providers with confidentiality guarantees and protect the integrity of computation from malicious cloud adversaries. Unlike prior solutions to this problem, such as multi-key HE, single-key HE, MPC, or TEE-only techniques, our solution assumes that both the client and the cloud can be malicious, makes no collusion assumptions, and frees model providers from needing to maintain private online infrastructures. Our results show that CHEX-MIX can execute at high efficiency, with low communication cost, while providing security guarantees unaddressed by prior work. Compared to a recent multi-key HE work that allows partial cloud offload, for example, CHEX-MIX achieves a 3× lower communication cost and a 3× faster computation time.
Last updated:  2021-12-09
A Note on P/poly Validity of GVW15 Predicate Encryption Scheme
Yupu Hu, Siyue Dong, Baocang Wang, Jun Liu
Predicate encryption (PE) is a cutting-edge research topic in cryptography, and an essential component of a research route: identity-based encryption (IBE)→attribute-based encryption (ABE)→predicate encryption (PE)→functional encryption (FE). The GVW15 predicate encryption scheme is a major such scheme. Its underlying structure is the BGG+14 attribute-based encryption scheme, which is combined with a fully homomorphic encryption (FHE) scheme. A crucial operation of the scheme is modulus reduction, by which the modulus $Q$ of the fully homomorphic encryption ciphertext (also referred to as the inner modulus) is scaled down to the modulus $q$ of the attribute ciphertext (also referred to as the outer modulus). ‘Therefore’, the noise in the fully homomorphic encryption ciphertext (also referred to as the inner noise) is reduced to polynomial size, allowing for the follow-up exhaustion of the noise size and hence correct decryption. We argue in this paper that there is no evidence to support the $P/poly$ validity of the GVW15 predicate encryption scheme; that is, when addressing $P/poly$ functions, there is no evidence that the GVW15 scheme can be implemented. Specifically, when addressing $P/poly$ functions, there is no indication that the modulus reduction in the GVW15 scheme can scale the noise in the fully homomorphic encryption ciphertext (the inner noise) down to polynomial size. Our argument is separated into two parts. First, under a compact inner modulus $Q$, the intuition is that modulus reduction should reduce the inner noise to about the same size as the outer noise (i.e., the noise in the attribute ciphertext), which is super-polynomial in size. Breaking this intuition requires a special proof which the GVW15 predicate encryption (PE) scheme does not provide. Second, under an enlarged inner modulus $Q$, the outer modulus is enlarged correspondingly. As a result, the static target of modulus reduction is lost. Even so, the size of the inner noise can still be reduced to polynomial size by using proper modulus reduction, as long as it can be proved that the ratio of the increments of the outer and inner moduli is smaller than the ratio of the original outer modulus $q$ to the original inner modulus $Q$. However, the GVW15 PE scheme fails to provide such a proof and, based on our observations, it appears hopeless to obtain one.
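For context, the following toy sketch shows the generic modulus-reduction (modulus-switching) operation under dispute, on an LWE-style sample: scaling from inner modulus $Q$ to outer modulus $q$ shrinks the noise by roughly $q/Q$ plus a small rounding term. This is the textbook operation, not the GVW15 scheme itself; all parameters are illustrative.

```python
import random

def mod_switch(x, Q, q):
    # round(x * q / Q) mod q, in exact integer arithmetic: the hidden
    # noise scales by a factor q/Q, plus at most 1/2 per rounding.
    return ((x * q + Q // 2) // Q) % q

# Toy LWE-style sample: b = <a, s> + e (mod Q), with inner modulus Q.
Q, q, n = 2**40, 2**20, 8
rng = random.Random(0)
s = [rng.randrange(2) for _ in range(n)]
a = [rng.randrange(Q) for _ in range(n)]
e = 12345                                    # inner noise, far below Q
b = (sum(ai * si for ai, si in zip(a, s)) + e) % Q

# Reduce every component from modulus Q down to outer modulus q.
a2 = [mod_switch(ai, Q, q) for ai in a]
b2 = mod_switch(b, Q, q)

e2 = (b2 - sum(ai * si for ai, si in zip(a2, s))) % q
e2 = e2 if e2 <= q // 2 else e2 - q          # centered representative
print(e2)  # tiny: about e * q/Q plus a rounding term of size <= ~n/2
```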
Last updated:  2022-03-03
Post-Quantum Security of the Even-Mansour Cipher
Gorjan Alagic, Chen Bai, Jonathan Katz, Christian Majenz
The Even-Mansour cipher is a simple method for constructing a (keyed) pseudorandom permutation $E$ from a public random permutation $P:\{0,1\}^n \rightarrow \{0,1\}^n$. It is a core ingredient in a wide array of symmetric-key constructions, including several lightweight cryptosystems presently under consideration for standardization by NIST. It is secure against classical attacks, with optimal attacks requiring $q_E$ queries to $E$ and $q_P$ queries to $P$ such that $q_E \cdot q_P \approx 2^n$. If the attacker is given *quantum* access to both $E$ and $P$, however, the cipher is completely insecure, with attacks using $q_E, q_P = O(n)$ queries known. In any plausible real-world setting, however, a quantum attacker would have only *classical* access to the keyed permutation $E$ implemented by honest parties, while retaining quantum access to $P$. Attacks in this setting with $q_E \cdot q_P^2 \approx 2^n$ are known, showing that security degrades as compared to the purely classical case, but leaving open the question as to whether the Even-Mansour cipher can still be proven secure in this natural ``post-quantum'' setting. We resolve this question, showing that any attack in that setting requires $q_E \cdot q^2_P + q_P \cdot q_E^2 \approx 2^n$. Our results apply to both the two-key and single-key variants of Even-Mansour. Along the way, we establish several generalizations of results from prior work on quantum-query lower bounds that may be of independent interest.
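For concreteness, here is a toy Python model of the (two-key) Even-Mansour construction described above, with the public permutation $P$ instantiated as a small lookup table; the block size is illustrative only, and this is not an implementation of any NIST candidate.

```python
import random

def random_permutation(n_bits, seed=0):
    # Public random permutation P on {0,1}^n as a lookup table (toy n).
    rng = random.Random(seed)
    table = list(range(2 ** n_bits))
    rng.shuffle(table)
    inverse = [0] * len(table)
    for x, y in enumerate(table):
        inverse[y] = x
    return table, inverse

def em_encrypt(x, k1, k2, P):
    # Two-key Even-Mansour: E(x) = k2 XOR P(x XOR k1).
    return P[x ^ k1] ^ k2

def em_decrypt(y, k1, k2, P_inv):
    return P_inv[y ^ k2] ^ k1

n = 16                                        # toy block size
P, P_inv = random_permutation(n)
k1 = random.randrange(2 ** n)
k2 = random.randrange(2 ** n)                 # single-key variant: k2 = k1
m = 0x1234
assert em_decrypt(em_encrypt(m, k1, k2, P), k1, k2, P_inv) == m
```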
Last updated:  2022-09-23
A New Isogeny Representation and Applications to Cryptography
Antonin Leroux
This paper focuses on isogeny representations, defined as ways to evaluate isogenies and verify membership to the language of isogenous supersingular curves (the set of triples $D,E_1,E_2$ with a cyclic isogeny of degree $D$ between $E_1$ and $E_2$). The tasks of evaluating and verifying isogenies are fundamental for isogeny-based cryptography. Our main contribution is the design of the suborder representation, a new isogeny representation targeted at the case of (big) prime degree. The core of our new method is the revelation of endomorphisms of smooth norm inside a well-chosen suborder of the codomain's endomorphism ring. This new representation appears to open interesting prospects for isogeny-based cryptography under the hardness of a new computational problem: the SubOrder to Ideal Problem (SOIP). As an application, we introduce pSIDH, a new NIKE based on the suborder representation. Studying new assumptions appears particularly crucial in light of the recent attacks against isogeny-based cryptography. In order to manipulate the suborder representation efficiently, we develop several heuristic algorithmic tools to solve norm equations inside a new family of quaternion orders. These new algorithms may be of independent interest.
Last updated:  2022-07-26
How to prove any NP statement jointly? Efficient Distributed-prover Zero-Knowledge Protocols
Pankaj Dayama, Arpita Patra, Protik Paul, Nitin Singh, Dhinakaran Vinayagamurthy
Traditional zero-knowledge protocols have been studied and optimized for the setting where a single prover holds the complete witness and tries to convince a verifier about a predicate on the witness, without revealing any additional information to the verifier. In this work, we study the notion of distributed-prover zero knowledge (DPZK) for arbitrary predicates where the witness is shared among multiple mutually distrusting provers and they want to convince a verifier that their shares together satisfy the predicate. We make the following contributions to the notion of distributed proof generation: (i) we propose a new MPC-style security definition to capture the adversarial settings possible for different collusion models between the provers and the verifier, (ii) we discuss new efficiency parameters for distributed proof generation such as the number of rounds of interaction and the amount of communication among the provers, and (iii) we propose a compiler that realizes distributed proof generation from the zero-knowledge protocols in the Interactive Oracle Proofs (IOP) paradigm. Our compiler can be used to obtain DPZK from arbitrary IOP protocols, but the concrete efficiency overheads are substantial in general. To this end, we contribute (iv) a new zero-knowledge IOP $\textsf{Graphene}$ which can be compiled into an efficient DPZK protocol. The $(\mathsf{D} + 1)$-DPZK protocol $\text{D-Graphene}$, with $\mathsf{D}$ provers and one verifier, admits $O(N^{1/c})$ proof size with a communication complexity of $O(\mathsf{D}^2\cdot (N^{1-2/c} + N_s))$, where $N$ is the number of gates in the arithmetic circuit representing the predicate and $N_s$ is the number of wires that depends on inputs from two or more parties. Significantly, only the distributed proof generation in $\text{D-Graphene}$ requires interaction among the provers. $\text{D-Graphene}$ compares favourably with the DPZK protocols obtained from the state-of-the-art zero-knowledge protocols, even those not modelled as IOPs.
Last updated:  2021-12-09
Modelling IBE-based Key Exchange Protocol using Tamarin Prover
Srijanee Mookherji, Vanga Odelu, Rajendra Prasath
Tamarin Prover is a formal security analysis tool used to analyse the security properties of various authentication and key exchange protocols. It provides built-ins such as Diffie-Hellman, hashing, XOR, symmetric and asymmetric encryption, as well as bilinear pairings. A shortfall of Tamarin Prover is that it does not support the elliptic curve point addition operation. In this paper, we present a simple IBE (Identity-Based Encryption) based key exchange protocol and its Tamarin model. For modelling, we define a function that replaces the point addition operation using the concept of pre-computation. We demonstrate that the security model functions as theoretically expected and is able to resist the Man-In-The-Middle (MITM) attack. This model can be used to analyse the formal security of authentication and key exchange protocols designed based on the IBE technique.
Last updated:  2021-12-09
Cryptographic Analysis of the Bluetooth Secure Connection Protocol Suite
Marc Fischlin, Olga Sanina
We give a cryptographic analysis of the Bluetooth Secure Connections Protocol Suite. Bluetooth supports several subprotocols, such as Numeric Comparison, Passkey Entry, and Just Works, in order to match the devices' different input/output capabilities. Previous analyses (e.g., Lindell, CT-RSA'09, or Troncoso and Hale, NDSS'21) often considered (and confirmed) the security of single subprotocols only. Recent practically verified attacks, however, such as the Method Confusion Attack (von Tschirschnitz et al., S&P'21) against Bluetooth's authentication and key secrecy property, often exploit the bad interplay of different subprotocols. Even worse, some of these attacks demonstrate that one cannot prove the Bluetooth protocol suite to be a secure authenticated key exchange protocol. We therefore aim at the best we can hope for and show that the protocol still matches the common key secrecy requirements of a key exchange protocol if one assumes a trust-on-first-use (TOFU) relationship. This means that the adversary needs to mount an active attack during the initial connection, otherwise the subsequent reconnections remain secure. Investigating the cryptographic strength of the Bluetooth protocol, we also look into the privacy mechanism of address randomization in Bluetooth (which is only available in the Low Energy version). We show that the cryptography indeed provides a decent level of address privacy, although this does not rule out identification of devices via other means, such as physical characteristics.
Last updated:  2022-04-04
SHealS and HealS: isogeny-based PKEs from a key validation method for SIDH
Tako Boris Fouotsa, Christophe Petit
In 2016, Galbraith et al. presented an adaptive attack on the SIDH key exchange protocol. In SIKE, one applies a variant of the Fujisaki-Okamoto transform to force Bob to reveal his encryption key to Alice, which Alice then uses to re-encrypt Bob's ciphertext and verify its validity. Therefore, Bob cannot reuse his encryption keys. There have been two other proposed countermeasures enabling static-static private keys: k-SIDH and its variant by Jao and Urbanik. These countermeasures are relatively expensive since they consist of running multiple parallel instances of SIDH. In this paper, firstly, we propose a new countermeasure to the GPST adaptive attack on SIDH. Our countermeasure does not require key disclosure as in SIKE, nor multiple parallel instances as in k-SIDH. We translate our countermeasure into a key validation method for SIDH-type schemes. Secondly, we use our key validation to design HealSIDH, an efficient SIDH-type static-static interactive key exchange protocol. Thirdly, we derive a PKE scheme SHealS using HealSIDH. SHealS uses larger primes compared to SIKE and has larger keys and ciphertexts, but only $4$ isogenies are computed in a full execution of the scheme, as opposed to $5$ isogenies in SIKE. We prove that SHealS is IND-CPA secure relying on a new assumption we introduce, and we conjecture its IND-CCA security. We suggest HealS, a variant of SHealS using a smaller prime, providing smaller keys and ciphertexts. As a result, HealSIDH is a practically efficient SIDH-based (interactive) key exchange incorporating a "direct" countermeasure to the GPST adaptive attack.
Last updated:  2021-12-06
A formula for disaster: a unified approach to elliptic curve special-point-based attacks
Vladimir Sedlacek, Jesús-Javier Chi-Domínguez, Jan Jancar, Billy Bob Brumley
The Refined Power Analysis, Zero-Value Point, and Exceptional Procedure attacks introduced side-channel techniques against specific cases of elliptic curve cryptography. The three attacks recover bits of a static ECDH key adaptively, collecting information on whether a certain multiple of the input point was computed. We unify and generalize these attacks in a common framework, and solve the corresponding problem for a broader class of inputs. We also introduce a version of the attack against windowed scalar multiplication methods, recovering the full scalar instead of just a part of it. Finally, we systematically analyze elliptic curve point addition formulas from the Explicit-Formulas Database, classify all non-trivial exceptional points, and find them in new formulas. These results indicate the usefulness of our tooling, which we released publicly, for unrolling formulas and finding special points, and potentially for independent future work.
Last updated:  2021-12-06
On the Bottleneck Complexity of MPC with Correlated Randomness
Claudio Orlandi, Divya Ravi, Peter Scholl
At ICALP 2018, Boyle et al. introduced the notion of the bottleneck complexity of a secure multi-party computation (MPC) protocol. This measures the maximum communication complexity of any one party in the protocol, aiming to improve load-balancing among the parties. In this work, we study the bottleneck complexity of MPC in the preprocessing model, where parties are given correlated randomness ahead of time. We present two constructions of bottleneck-efficient MPC protocols, whose bottleneck complexity is independent of the number of parties: 1. A protocol for computing abelian programs, based only on one-way functions. 2. A protocol for selection functions, based on any linearly homomorphic encryption scheme. Compared with previous bottleneck-efficient constructions, our protocols can be based on a wider range of assumptions, and avoid the use of fully homomorphic encryption.
Last updated:  2021-12-06
Interpreting and Mitigating Leakage-abuse Attacks in Searchable Symmetric Encryption
Lei Xu, Huayi Duan, Anxin Zhou, Xingliang Yuan, Cong Wang
Searchable symmetric encryption (SSE) enables users to make confidential queries over always-encrypted data while confining information disclosure to pre-defined leakage profiles. Despite the well-understood performance and potentially broad applications of SSE, recent leakage-abuse attacks (LAAs) are questioning its real-world security implications. They show that a passive adversary with certain prior information about a database can recover queries by exploiting the legitimately admitted leakage. While several countermeasures have been proposed, they are insufficient in either security, i.e., they handle only specific leakage such as query volume, or efficiency, i.e., they incur large storage and bandwidth overhead. We aim to fill this gap by advancing the understanding of LAAs from a fundamental algebraic perspective. Our investigation starts by revealing that the index matrices of a plaintext database and its encrypted image can be linked by a linear transformation. The invariant characteristics preserved under the transformation encompass and surpass the information exploited by previous LAAs. They allow one to unambiguously link encrypted queries with corresponding keywords, even with only partial knowledge of the database. Accordingly, we devise a new powerful attack and conduct a series of experiments to show its effectiveness. In response, we propose a new security notion to thwart LAAs in general, inspired by the principle of local differential privacy (LDP). Under this notion, we further develop a practical countermeasure with tunable privacy and efficiency guarantees. Experiment results on representative real-world datasets show that our countermeasure can reduce the query recovery rate of LAAs, including our own.
Last updated:  2022-07-28
The Need for Speed: A Fast Guessing Entropy Calculation for Deep Learning-based SCA
Guilherme Perin, Lichao Wu, Stjepan Picek
The adoption of deep neural networks for profiling side-channel attacks (SCA) opened new perspectives for leakage detection. Recent publications showed that cryptographic implementations featuring different countermeasures could be broken without feature selection or trace preprocessing. This success comes at a high price: extensive hyperparameter search to find optimal deep learning models. As deep learning models usually suffer from overfitting due to their high fitting capacity, it is crucial to avoid over-training regimes, which requires choosing the correct number of epochs. For that, \textit{early stopping} is employed as an efficient regularization method that requires a consistent validation metric. Although guessing entropy is a highly informative metric for profiling SCA, it is time-consuming, especially if it is computed for every epoch during training and the number of validation traces is large. This paper shows that guessing entropy can be efficiently computed during training by reducing the number of validation traces without affecting the quality of early stopping decisions. Our solution significantly speeds up the process, benefiting hyperparameter search and overall profiling attack performance. Our fast guessing entropy calculation is up to 16$\times$ faster, resulting in more hyperparameter tuning experiments and allowing security evaluators to find more efficient deep learning models.
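For readers unfamiliar with the metric, the sketch below shows a standard guessing-entropy estimator (average rank of the correct key over random validation subsets), not the paper's exact procedure; the speed-up the abstract describes amounts to running such an estimator with far fewer validation traces per epoch. All names and the numpy usage are illustrative.

```python
import numpy as np

def guessing_entropy(log_probs, correct_key, n_traces, n_runs=50, seed=0):
    """Average rank of the correct key over random subsets of validation
    traces. log_probs: (n_total_traces, n_keys) per-trace log-likelihoods
    output by the profiling model."""
    rng = np.random.default_rng(seed)
    ranks = []
    for _ in range(n_runs):
        idx = rng.choice(len(log_probs), size=n_traces, replace=False)
        scores = log_probs[idx].sum(axis=0)      # accumulate evidence
        ranks.append(int((scores > scores[correct_key]).sum()))
    return float(np.mean(ranks))

# Toy usage: 256 key candidates, simulated model outputs.
lp = np.random.default_rng(1).normal(size=(5000, 256))
lp[:, 42] += 0.05                                # correct key slightly favoured
print(guessing_entropy(lp, correct_key=42, n_traces=500))
```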
Last updated:  2022-04-08
Practical Asynchronous Distributed Key Generation
Sourav Das, Thomas Yurek, Zhuolun Xiang, Andrew Miller, Lefteris Kokoris-Kogias, Ling Ren
Distributed Key Generation (DKG) is a technique to bootstrap threshold cryptosystems without a trusted third party and is a building block for decentralized protocols such as randomness beacons, threshold signatures, and general multiparty computation. Until recently, DKG protocols have assumed the synchronous model and are thus vulnerable when their underlying network assumptions do not hold. The recent advancements in asynchronous DKG protocols are insufficient, as they either have poor efficiency or limited functionality, resulting in a lack of concrete implementations. In this paper, we present a simple and concretely efficient asynchronous DKG (ADKG) protocol. In a network of $n$ nodes, our ADKG protocol can tolerate up to $t<n/3$ malicious nodes and has an expected $O(\kappa n^3)$ communication cost, where $\kappa$ is the security parameter. Our ADKG protocol produces a field element as the secret and is thus compatible with off-the-shelf threshold cryptosystems. We implement our ADKG protocol and evaluate it using a network of up to 128 nodes in geographically distributed AWS instances. Our evaluation shows that our protocol takes as little as 3 and 9.5 seconds to terminate for 32 and 64 nodes, respectively. Also, each node sends only 0.7 Megabytes and 2.9 Megabytes of data during the two experiments, respectively.
Last updated:  2021-12-06
Garbling, Stacked and Staggered: Faster k-out-of-n Garbled Function Evaluation
David Heath, Vladimir Kolesnikov, Stanislav Peceny
Stacked Garbling (SGC) is a Garbled Circuit (GC) improvement that efficiently and securely evaluates programs with conditional branching. SGC reduces bandwidth consumption such that communication is proportional to the size of the single longest program execution path, rather than to the size of the entire program. Crucially, the parties expend increased computational effort compared to classic GC. Motivated by procuring a subset from a menu of computational services or tasks, we consider GC evaluation of k-out-of-n branches, whose indices are known (or eventually revealed) to the GC evaluator E. Our stack-and-stagger technique amortizes GC computation in this setting. We retain the communication advantage of SGC while significantly improving computation and wall-clock time. Namely, each GC party garbles (or evaluates) a total of n branches, a significant improvement over the O(nk) garblings/evaluations needed by standard SGC. We present our construction as a garbling scheme. Our technique brings significant overall performance improvement in various settings, including those typically considered in the literature: e.g. on a 1Gbps LAN we evaluate 16-out-of-128 functions ~7.68x faster than standard stacked garbling.
Last updated:  2021-12-06
SoK: Validating Bridges as a Scaling Solution for Blockchains
Patrick McCorry, Chris Buckland, Bennet Yee, Dawn Song
Off-chain protocols are a promising solution to the cryptocurrency scalability dilemma. They focus on moving transactions from a blockchain network like Ethereum to another off-chain system while ensuring users can transact with assets that reside on the underlying blockchain. Several startups have collectively raised over $100m to implement off-chain systems that rely on a validating bridge smart contract to self-enforce the safety of user funds and the liveness of transaction execution. These systems promise to offer a Coinbase-like experience: users can transact on an off-chain system while still retaining the underlying blockchain's security for all processed transactions. Unfortunately, the literature on validating bridges is highly disparate, spread across message boards, chat rooms, and the for-profit ventures that fund their rapid development. This Systematization of Knowledge presents the emerging field in an accessible manner and brings forth the immediate research problems that must be solved before we can extend Ethereum's security to new (and experimental) off-chain systems.
Last updated:  2022-04-01
IRShield: A Countermeasure Against Adversarial Physical-Layer Wireless Sensing
Paul Staat, Simon Mulzer, Stefan Roth, Veelasha Moonsamy, Aydin Sezgin, Christof Paar
Wireless radio channels are known to contain information about the surrounding propagation environment, which can be extracted using established wireless sensing methods. Thus, today's ubiquitous wireless devices are attractive targets for passive eavesdroppers to launch reconnaissance attacks. In particular, by overhearing standard communication signals, eavesdroppers obtain estimations of wireless channels which can give away sensitive information about indoor environments. For instance, by applying simple statistical methods, adversaries can infer human motion from wireless channel observations, allowing them to remotely monitor victims' premises. In this work, building on the advent of intelligent reflecting surfaces (IRSs), we propose IRShield as a novel countermeasure against adversarial wireless sensing. IRShield is designed as a plug-and-play privacy-preserving extension to existing wireless networks. At the core of IRShield, we design an IRS configuration algorithm to obfuscate wireless channels. We validate its effectiveness with extensive experimental evaluations. In a state-of-the-art human motion detection attack using off-the-shelf Wi-Fi devices, IRShield lowered detection rates to 5% or less.
Last updated:  2022-05-16
Low-Bandwidth Threshold ECDSA via Pseudorandom Correlation Generators
Damiano Abram, Ariel Nof, Claudio Orlandi, Peter Scholl, Omer Shlomovits
Digital signature schemes are a fundamental component of secure distributed systems, and the theft of a signing key might have huge real-world repercussions, e.g., in applications such as cryptocurrencies. Threshold signature schemes mitigate this problem by distributing shares of the secret key on several servers and requiring that enough of them interact to be able to compute a signature. In this paper, we provide a novel threshold protocol for ECDSA, arguably the most relevant signature scheme in practice. Our protocol is the first in which the communication complexity of the preprocessing phase is only logarithmic in the number of ECDSA signatures to be produced later, thereby achieving so-called silent preprocessing. Our protocol achieves active security against any number of arbitrarily corrupted parties.
Last updated:  2022-01-23
Cryptanalysis of a Type of White-Box Implementations of the SM4 Block Cipher
Jiqiang Lu, Jingyu Li
The SM4 block cipher was first released in 2006 as SMS4, used in the Chinese national standard WAPI; it became a Chinese national standard in 2016 and an ISO international standard in 2021. White-box cryptography aims primarily to protect the secret key used in a cryptographic software implementation in the white-box scenario, which assumes an attacker has full access to the execution environment and execution details of an implementation. Since white-box cryptography has many real-life applications nowadays, a few white-box implementations of the SM4 block cipher have been proposed alongside its increasingly wide use, among which one type of construction dominates: those that use an affine diagonal block encoding to protect the original XOR sum of the three branches entering the S-box layer of a round and use its inverse to protect the original input of the S-box layer, such as Xiao and Lai's implementation in 2009, Shang's implementation in 2016, and Yao and Chen's implementation in 2020. In this paper, we show that, from a security viewpoint, this type of white-box SM4 construction can be roughly equivalent to a plain implementation with mostly Boolean masks, by devising collision-based attacks on Xiao and Lai's, Shang's, and Yao and Chen's implementations with time complexities of about $2^{22}$, $2^{39}$ and $2^{22}$, respectively, to peel off most white-box operations until only Boolean masks remain. Besides, we present a collision-based attack, with a time complexity of about $2^{17.1}$, that recovers an original round key from a white-box SM4 implementation using a linear diagonal block encoding instead of an affine one. Our results show that generating such a white-box SM4 implementation with affine encodings can be simplified into generating a plain implementation with Boolean masks (if its security expectation is beyond the above-mentioned complexity), and that the effect of an affine encoding is significantly better than that of a linear encoding in the sense of our cryptanalysis results.
Last updated:  2021-12-06
Searchable Encryption for Conjunctive Queries with Extended Forward and Backward Privacy
Cong Zuo, Shangqi Lai, Xingliang Yuan, Joseph K. Liu, Jun Shao, Huaxiong Wang
Recent developments in the field of Dynamic Searchable Symmetric Encryption (DSSE) with forward and backward privacy have attracted much attention from both the research and industrial communities. However, most forward and backward private DSSE schemes support single-keyword queries only, which impedes their adoption in practice. Recently, Patranabis et al. (NDSS 2021) introduced a forward and backward private DSSE for conjunctive queries (named ODXT) based on the Oblivious Cross-Tags (OXT) framework. Unfortunately, its security is not comprehensive for conjunctive queries, and it deploys “lazy deletion”, which incurs more communication cost. Besides, it cannot delete a file in certain circumstances. To address these problems, we introduce two forward and backward private DSSE schemes with conjunctive queries (named SDSSE-CQ and SDSSE-CQ-S). To analyse their security, we present two new levels of backward privacy (named Type-O and Type-O$^-$, where Type-O$^-$ is more secure than Type-O), which describe the leakage of conjunctive queries in the OXT framework more accurately. Finally, the security and experimental evaluations demonstrate that our proposed schemes achieve better security with a comparable increase in computation and communication in comparison with ODXT.
Last updated:  2022-08-12
ppSAT: Towards Two-Party Private SAT Solving
Ning Luo, Samuel Judson, Timos Antonopoulos, Ruzica Piskac, Xiao Wang
We design and implement a privacy-preserving Boolean satisfiability (ppSAT) solver, which allows mutually distrustful parties to evaluate the conjunction of their input formulas while maintaining privacy. We first define a family of security guarantees reconcilable with the (known) exponential complexity of SAT solving, and then construct an oblivious variant of the classic DPLL algorithm which can be integrated with existing secure two-party computation (2PC) techniques. We further observe that most known SAT solving heuristics are unsuitable for 2PC, as they are highly data-dependent in order to minimize the number of exploration steps. Faced with how best to trade off between the number of steps and the cost of obliviously executing each one, we design three efficient oblivious heuristics, one deterministic and two randomized. As a result of this effort we are able to evaluate our ppSAT solver on small but practical instances arising from the haplotype inference problem in bioinformatics. We conclude by looking towards future directions for making ppSAT solving more practical, most especially the integration of conflict-driven clause learning (CDCL).
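For reference, the classic (non-oblivious) DPLL loop that the paper makes oblivious is short; a plain Python sketch follows, in which the branching and unit propagation exhibit exactly the data-dependent control flow that a 2PC version must hide.

```python
def dpll(clauses, assignment=None):
    """Classic (non-oblivious) DPLL. Clauses are sets of ints: positive
    for a variable, negative for its negation. Returns a satisfying
    assignment dict or None."""
    assignment = dict(assignment or {})
    changed = True
    while changed:                          # unit propagation to fixpoint
        changed = False
        remaining = []
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                    # clause already satisfied
            live = [l for l in clause if abs(l) not in assignment]
            if not live:
                return None                 # conflict: all literals false
            if len(live) == 1:              # unit clause forces a value
                assignment[abs(live[0])] = live[0] > 0
                changed = True
            remaining.append(clause)
        clauses = remaining
    if not clauses:
        return assignment                   # everything satisfied
    var = next(abs(l) for l in clauses[0] if abs(l) not in assignment)
    for value in (True, False):             # data-dependent branching
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(dpll([{1, 2}, {-1, 2}, {-2, 3}]))
```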
Last updated:  2022-10-05
Orientations and the supersingular endomorphism ring problem
Benjamin Wesolowski
We study two important families of problems in isogeny-based cryptography and how they relate to each other: computing the endomorphism ring of supersingular elliptic curves, and inverting the action of class groups on oriented supersingular curves. We prove that these two families of problems are closely related through polynomial-time reductions, assuming the generalised Riemann hypothesis. We identify two classes of essentially equivalent problems. The first class corresponds to the problem of computing the endomorphism ring of oriented curves. The security of a large family of cryptosystems (such as CSIDH) reduces to (and sometimes from) this class, for which there are heuristic quantum algorithms running in subexponential time. The second class corresponds to computing the endomorphism ring of orientable curves. The security of essentially all isogeny-based cryptosystems reduces to (and sometimes from) this second class, for which the best known algorithms are still exponential. Some of our reductions not only generalise, but also strengthen previously known results. For instance, it was known that in the particular case of curves defined over $\mathbb F_p$, the security of CSIDH reduces to the endomorphism ring problem in subexponential time. Our reductions imply that the security of CSIDH is actually equivalent to the endomorphism ring problem, under polynomial time reductions (circumventing arguments that proved such reductions unlikely).
Last updated:  2021-12-03
CoTree: Push the Limits of Conquerable Space in Collision-Optimized Side-Channel Attacks
Changhai Ou, Debiao He, Zhu Wang, Kexin Qiao, Shihui Zheng, Siew-Kei Lam
By introducing collision information into side-channel distinguishers, existing collision-optimized attacks exploit a collision detection algorithm to transform the original candidate space under consideration into a significantly smaller collision-chain space, thus achieving more efficient key recovery. However, collision information is detected highly repetitively, since collision chains are created from the same sub-chains, i.e., with the same candidates on their first several sub-keys. This worsens as more collision information is exploited. Existing collision detection algorithms try to alleviate this, but the problem remains serious. In this paper, we propose a highly efficient detection algorithm named Collision Tree (CoTree) for collision-optimized attacks. CoTree exploits a tree structure to store the chains created from the same sub-chain on the same branch. It then exploits a top-down tree-building procedure and traverses each node only once when detecting its collisions with a candidate of the sub-key currently under consideration. Finally, it launches a bottom-up branch-removal procedure that removes the chains not satisfying the collision conditions from the tree after traversing all candidates (within a given threshold) of this sub-key, thus avoiding re-traversal of the branches that satisfy the collision condition. These strategies make CoTree significantly alleviate the repetitive collision detection, and our experiments verify that it significantly outperforms existing works.
Last updated:  2022-09-07
Anonymous Authenticated Communication
Fabio Banfi, Ueli Maurer
Anonymity and authenticity are apparently conflicting goals. Anonymity means hiding a party's identity, whereas authenticity means proving a party's identity. So how can a set of senders authenticate their messages without revealing their identity? Despite the paradoxical nature of this problem, there exist many cryptographic schemes designed to achieve both goals simultaneously, in some form. This paper provides a composable treatment of communication channels that achieve different forms of anonymity and authenticity. More specifically, three channel functionalities for many senders and one receiver are introduced which provide some trade-off between authenticity and anonymity (of the senders). For each of them, composably realizing it is proved to correspond to the use of a certain type of cryptographic scheme, namely (1) a new type of scheme which we call bilateral signatures (syntactically related to designated verifier signatures), (2) partial signatures, and (3) ring signatures. This treatment hence provides composable semantics for (game-based) security definitions of these types of schemes. The results of this paper can be interpreted as the dual of the work by Kohlweiss et al. (PETS 2013), where composable notions for anonymous confidential communication were introduced and related to the security definitions of certain types of public-key encryption schemes, and where the treatment of anonymous authenticated communication was stated as an open problem.
Last updated:  2022-10-14
High Order Side-Channel Security for Elliptic-Curve Implementations
Sonia Belaïd, Matthieu Rivain
Elliptic-curve implementations protected with state-of-the-art countermeasures against side-channel attacks might still be vulnerable to advanced attacks that recover secret information from a single leakage trace. The effectiveness of these attacks is boosted by the emergence of deep learning techniques for side-channel analysis which relax the control or knowledge an adversary must have on the target implementation. In this paper, we provide generic countermeasures to withstand these attacks for a wide range of regular elliptic-curve implementations. We first introduce a framework to formally model a regular algebraic program which consists of a sequence of algebraic operations indexed by key-dependent values. We then introduce a generic countermeasure to protect these types of programs against advanced single-trace side-channel attacks. Our scheme achieves provable security in the noisy leakage model under a formal assumption on the leakage of randomized variables. To demonstrate the applicability of our solution, we provide concrete examples on several widely deployed scalar multiplication algorithms and report some benchmarks for a protected implementation on a smart card.
Last updated:  2022-09-19
Le Mans: Dynamic and Fluid MPC for Dishonest Majority
Rahul Rachuri, Peter Scholl
Most MPC protocols require the set of parties to be active for the entire duration of the computation. Deploying MPC for use cases such as complex and resource-intensive scientific computations increases the barrier of entry for potential participants. The model of Fluid MPC (Crypto 2021) tackles this issue by giving parties the flexibility to participate in the protocol only when their resources are free. As such, the set of parties is dynamically changing over time. In this work, we extend Fluid MPC, which only considered an honest majority, to the setting where the majority of participants at any point in the computation may be corrupt. We do this by presenting variants of the SPDZ protocol, which support dynamic participants. Firstly, we describe a universal preprocessing for SPDZ, which allows a set of $n$ parties to compute some correlated randomness, such that later on, any subset of the parties can use this to take part in an online secure computation. We complement this with a Dynamic SPDZ online phase, designed to work with our universal preprocessing, as well as a protocol for securely realising the preprocessing. Our preprocessing protocol is designed to efficiently use pseudorandom correlation generators, thus, the parties' storage and communication costs can be almost independent of the function being evaluated. We then extend this to support a fluid online phase, where the set of parties can dynamically evolve during the online phase. Our protocol achieves maximal fluidity and security with abort, similarly to the previous, honest majority construction. Achieving this requires a careful design and techniques to guarantee a small state complexity, allowing us to switch between committees efficiently.
Last updated:  2023-02-02
On Quantum Query Complexities of Collision-Finding in Non-Uniform Random Functions
Tianci Peng, Shujiao Cao, Rui Xue
Collision resistance and collision finding are now extensively exploited in cryptography, especially in the case of quantum computing. For any function $f:[M]\to[N]$ with $f(x)$ uniformly distributed over $[N]$, Zhandry has shown that $\Theta(N^{1/3})$ queries are both necessary and sufficient for finding a collision in $f$ with constant probability. However, there is still a gap between the upper and lower bounds of query complexity for general non-uniform distributions. In this paper, we investigate the quantum query complexity of the collision-finding problem with respect to general non-uniform distributions. Inspired by previous work, we introduce the concept of collision domain and a new parameter $\gamma$ that heavily depends on the underlying non-uniform distribution. We then present a quantum algorithm that uses $O(\gamma^{1/6})$ quantum queries to find a collision in any non-uniform random function. By transforming a problem in the non-uniform setting into a problem in the uniform setting, we are also able to show that $\Omega(\gamma^{1/6}\log^{-1/2}\gamma)$ quantum queries are necessary for collision-finding in any non-uniform random function. Together, the upper and lower bounds in this work indicate that the proposed algorithm is nearly optimal in query complexity in the general non-uniform case.
Last updated:  2022-02-24
SNARKBlock: Federated Anonymous Blocklisting from Hidden Common Input Aggregate Proofs
Michael Rosenberg, Mary Maller, Ian Miers
Moderation is an essential tool to fight harassment and prevent spam. The use of strong user identities makes moderation easier, but trends towards strong identity pose serious privacy issues, especially when identities are linked across social media platforms. Zero-knowledge blocklists allow cross-platform blocking of users but, counter-intuitively, do not link users' identities inter- or intra-platform, or to the fact that they were blocked. Unfortunately, existing approaches (Tsang et al. '10) require servers to do work linear in the size of the blocklist for each verification of a non-membership proof. We design and implement SNARKBlock, a new protocol for zero-knowledge blocklisting with server-side verification that is logarithmic in the size of the blocklist. SNARKBlock is also the first approach to support ad-hoc, federated blocklisting: websites can mix and match their own blocklists with other blocklists and dynamically choose which identity providers they trust. Our core technical advance, of separate interest, is $\mathsf{HICIAP}$, a zero-knowledge proof that aggregates $n$ Groth16 proofs into one $O(\log n)$-sized proof, which also shows that the input proofs share a common hidden input.
Last updated:  2022-11-23
Shared Permutation for Syndrome Decoding: New Zero-Knowledge Protocol and Code-Based Signature
Thibauld Feneuil, Antoine Joux, Matthieu Rivain
Zero-knowledge proofs are an important tool for many cryptographic protocols and applications. The threat of a coming quantum computer motivates the research for new zero-knowledge proof techniques for (or based on) post-quantum cryptographic problems. One of the few directions is code-based cryptography, for which the strongest problem is the syndrome decoding (SD) of random linear codes. This problem is known to be NP-hard, and the cryptanalysis state of affairs has been stable for many years. A zero-knowledge protocol for this problem was pioneered by Stern in 1993. As a simple public-coin three-round protocol, it can be converted to a post-quantum signature scheme through the famous Fiat-Shamir transform. The main drawback of this protocol is its high soundness error of $2/3$, meaning that it should be repeated $\approx 1.7\lambda$ times to reach $\lambda$-bit security. In this paper, we improve this three-decade-old state of affairs by introducing a new zero-knowledge proof for the syndrome decoding problem on random linear codes. Our protocol achieves a soundness error of $1/n$ for an arbitrary $n$ in complexity $O(n)$. Our construction requires the verifier to trust some of the variables sent by the prover, which can be ensured through a cut-and-choose approach. We provide an optimized version of our zero-knowledge protocol which achieves arbitrary soundness through parallel repetitions and a merged cut-and-choose phase. While turning this protocol into a signature scheme, we achieve a signature size of 17 KB for 128-bit security. This represents a significant improvement over previous constructions based on the syndrome decoding problem for random linear codes.
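The arithmetic behind these soundness figures is easy to reproduce (a quick sketch; parameters illustrative, not taken from the paper):

```python
import math

def repetitions(soundness_error, security_bits=128):
    # Smallest tau with soundness_error ** tau <= 2 ** (-security_bits).
    return math.ceil(security_bits / -math.log2(soundness_error))

print(repetitions(2 / 3))    # Stern's 2/3 soundness: 219, i.e. ~1.71 * 128
print(repetitions(1 / 256))  # 1/n soundness with n = 256: just 16
```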
Last updated:  2021-12-03
Shorter Lattice-Based Group Signatures via ``Almost Free'' Encryption and Other Optimizations
Vadim Lyubashevsky, Ngoc Khanh Nguyen, Maxime Plancon, Gregor Seiler
We present an improved lattice-based group signature scheme whose parameter sizes and running times are independent of the group size. The signature length in our scheme is around $200$KB, which is approximately a $3\times$ reduction over the previously most compact such scheme, based on any quantum-safe assumption, of del Pino et al. (ACM CCS 2018). The improvement comes via several optimizations of some basic cryptographic components that make up group signature schemes, and we think that they will find other applications in privacy-based lattice cryptography.
Last updated:  2021-12-03
Ascon PRF, MAC, and Short-Input MAC
Christoph Dobraunig, Maria Eichlseder, Florian Mendel, Martin Schläffer
The cipher suite Ascon v1.2 already provides authenticated encryption schemes, hash, and extendable output functions. Furthermore, the underlying permutation is also used in two instances of Isap v2.0, an authenticated encryption scheme designed to provide enhanced robustness against side-channel and fault attacks. In this paper, we enrich the functionality one can get out of Ascon's permutation by providing efficient Pseudorandom Functions (PRFs), a Message Authentication Code (MAC) and a fast short-input PRF for messages up to 128 bits.
Last updated:  2021-12-03
Improved Security Bound of \textsf{(E/D)WCDM}
Nilanjan Datta, Avijit Dutta, Kushankur Dutta
In CRYPTO'16, Cogliati and Seurin proposed a block-cipher-based nonce-based MAC, called {\em Encrypted Wegman-Carter with Davies-Meyer} (\textsf{EWCDM}), that gives $2n/3$-bit MAC security in the nonce-respecting setting and $n/2$-bit security in the nonce-misuse setting, where $n$ is the block size of the underlying block cipher. However, this construction requires two independent block cipher keys. In CRYPTO'18, Datta et al. came up with a single-keyed block-cipher-based nonce-based MAC, called {\em Decrypted Wegman-Carter with Davies-Meyer} (\textsf{DWCDM}), that also provides $2n/3$-bit MAC security in the nonce-respecting setting and $n/2$-bit security in the nonce-misuse setting. The drawback of \textsf{DWCDM}, however, is that it takes only a $2n/3$-bit nonce; in fact, the authors showed that \textsf{DWCDM} cannot achieve beyond-birthday-bound security with $n$-bit nonces. In this paper, we prove that \textsf{DWCDM} with $3n/4$-bit nonces provides MAC security up to $O(2^{3n/4})$ MAC queries against all nonce-respecting adversaries. We also improve the MAC bound of \textsf{EWCDM} from $2n/3$ bits to $3n/4$ bits. The backbone of these two results is a refined treatment of extended mirror theory that systematically estimates the number of solutions to a system of bivariate affine equations and non-equations, which we apply in the security proofs of the constructions to achieve $3n/4$-bit security.
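To put the improvement in perspective for a 128-bit block cipher (plain arithmetic, not from the paper):

```python
n = 128  # block size in bits
for label, exp in [("birthday bound (n/2)", n / 2),
                   ("previous EWCDM/DWCDM bound (2n/3)", 2 * n / 3),
                   ("bound in this paper (3n/4)", 3 * n / 4)]:
    print(f"{label:36s} ~2^{exp:5.1f} MAC queries")
# n/2 -> 2^64.0, 2n/3 -> 2^85.3, 3n/4 -> 2^96.0 tolerable MAC queries.
```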
Last updated:  2022-03-04
Integral Attacks on Pyjamask-96 and Round-Reduced Pyjamask-128 (Full version)
Jiamin Cui, Kai Hu, Qingju Wang, Meiqin Wang
In order to provide benefits in the areas of fully homomorphic encryption (FHE), multi-party computation (MPC), post-quantum signature schemes, or efficient masked implementations for side-channel resistance, reducing the number of multiplications has become quite a popular trend in symmetric cryptographic primitive design. With an aggressive design strategy exploiting an extremely simple, low-degree S-box and a low number of rounds, Pyjamask, the fundamental block cipher of the AEAD of the same name, has the smallest number of AND gates per bit among all existing block ciphers (except LowMC or Rasta, which work on unconventional plaintext/key sizes). Thus, although the AEAD Pyjamask did not advance beyond the second round of the NIST lightweight cryptography standardization process, the block cipher Pyjamask itself still attracts a lot of attention. Not unexpectedly, the low degree and the low number of rounds are Pyjamask's biggest weakness. At FSE 2020, Dobraunig et al. successfully mounted an algebraic and higher-order differential attack on full Pyjamask-96, one member of the Pyjamask block cipher family. However, the drawback of this attack is that it has to use the full codebook, which makes the attack less appealing. In this paper, we take integral attacks as our weapon, as they are also sensitive to low degree. Based on a new 11-round integral distinguisher found by state-of-the-art detection techniques, and combined with relationships between round keys that reduce the number of involved keys, we give the first key recovery attack on full Pyjamask-96 that does not require the full codebook. Further, the algebraic and higher-order differential technique does not work for Pyjamask-128, the other member of the Pyjamask block cipher family. To better understand the security margin of Pyjamask-128, we present the first third-party cryptanalysis of Pyjamask-128 on up to 11 out of 14 rounds.
Last updated:  2021-12-10
Tight Security for Key-Alternating Ciphers with Correlated Sub-Keys
Stefano Tessaro, Xihu Zhang
A substantial effort has been devoted to proving optimal bounds for the security of key-alternating ciphers with independent sub-keys in the random permutation model (e.g., Chen and Steinberger, EUROCRYPT '14; Hoang and Tessaro, CRYPTO '16). While common in the study of multi-round constructions, the assumption that sub-keys are truly independent is not realistic, as these are generally highly correlated and generated from shorter keys. In this paper, we show the existence of non-trivial distributions of limited independence for which a t-round key-alternating cipher achieves optimal security. Our work is a natural continuation of the work of Chen et al. (CRYPTO '14), which considered the case of t = 2 when all sub-keys are identical. Here, we show that key-alternating ciphers remain secure for a large class of (t-1)-wise and (t-2)-wise independent distributions of sub-keys. Our proofs proceed by generalizations of the so-called Sum-Capture Theorem, which we prove using Fourier-analytic techniques.
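For concreteness, here is a toy Python model of the t-round key-alternating construction under study (lookup-table permutations and all parameters are illustrative); the paper's question is how much independence the list of sub-keys really needs.

```python
import random

def key_alternating_cipher(x, subkeys, perms):
    # t-round key-alternating cipher with t + 1 sub-keys:
    # y = k_t XOR P_t( ... P_2(k_1 XOR P_1(k_0 XOR x)) ... ).
    y = x ^ subkeys[0]
    for k, P in zip(subkeys[1:], perms):
        y = P[y] ^ k
    return y

n, t = 12, 3                                # toy parameters
rng = random.Random(1)
perms = []
for _ in range(t):                          # public random permutations
    table = list(range(2 ** n))
    rng.shuffle(table)
    perms.append(table)

# Fully correlated sub-keys: the t = 2 identical-sub-key case of
# Chen et al., as a stand-in for the distributions studied here.
k = rng.randrange(2 ** n)
subkeys = [k] * (t + 1)
print(hex(key_alternating_cipher(0x123, subkeys, perms)))
```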
Last updated:  2021-12-03
Multicast Key Agreement, Revisited
Alexander Bienstock, Yevgeniy Dodis, Yi Tang
Multicast Key Agreement (MKA) is a long-overlooked natural primitive of large practical interest. In traditional MKA, an omniscient group manager privately distributes secrets over an untrusted network to a dynamically-changing set of group members. The group members are thus able to derive shared group secrets across time, with the main security requirement being that only current group members can derive the current group secret. There indeed exist very efficient MKA schemes in the literature that utilize symmetric-key cryptography. However, they lack formal security analyses, efficiency analyses regarding dynamically changing groups, and more modern, robust security guarantees regarding user state leakages: forward secrecy (FS) and post-compromise security (PCS). The former ensures that group secrets prior to state leakage remain secure, while the latter ensures that after such leakages, users can quickly recover security of group secrets via normal protocol operations. More modern Secure Group Messaging (SGM) protocols allow a group of users to asynchronously and securely communicate with each other, as well as add and remove each other from the group. SGM has received significant attention recently, including in an effort by the IETF Messaging Layer Security (MLS) working group to standardize an eponymous protocol. However, the group key agreement primitive at the core of SGM protocols, Continuous Group Key Agreement (CGKA), achieved by the TreeKEM protocol in MLS, suffers from bad worst-case efficiency and heavily relies on less efficient (than symmetric-key cryptography) public-key cryptography. We thus propose that in the special case of a group membership change policy which allows a single member to perform all group additions and removals, an upgraded version of classical Multicast Key Agreement (MKA) may serve as a more efficient substitute for CGKA in SGM. We therefore present rigorous, stronger MKA security definitions that provide increasing levels of security in the case of both user and group manager state leakage, and that are suitable for modern applications, such as SGM. We then construct a formally secure MKA protocol with strong efficiency guarantees for dynamic groups. Finally, we run experiments which show that the left-balanced binary tree structure used in TreeKEM can be replaced with red-black trees in MKA for better efficiency.
Last updated:  2023-05-01
ABBY: Automating leakage modeling for side-channel analysis
Omid Bazangani, Alexandre Iooss, Ileana Buhan, Lejla Batina
We introduce ABBY, an open-source side-channel leakage profiling framework that targets the microarchitectural layer. Existing solutions to characterize the microarchitectural layer are device-specific and require extensive manual effort. The main innovation of ABBY is the collection of data, which can automatically characterize the microarchitecture of a target device and has the additional benefit of being scalable. Using ABBY, we create two sets of data which capture the interaction of instructions for the ARM CORTEX-M0/M3 architecture. These sets are the first to capture detailed information on the microarchitectural layer. They can be used to explore various leakage models suitable for creating side-channel leakage simulators. A preliminary evaluation, on real-world cryptographic implementations, of a leakage model produced with our dataset shows performance comparable to state-of-the-art leakage simulators.
Last updated:  2021-12-03
Impeccable Circuits III
Shahram Rasoolzadeh, Aein Rezaei Shahmirzadi, Amir Moradi
As a recent fault-injection attack, SIFA defeats most known countermeasures. Although error-correcting codes have been shown to be effective against SIFA, they typically require large redundancy to correct even a few bits. In this work, we propose a hybrid construction with the ability to detect and correct injected faults at the same time. We provide a general implementation methodology which guarantees the correction of up to $t_c$-bit faults and the detection of up to $t_d$ faulty bits. Exhaustive evaluation of our constructions with the open-source fault diagnostic tool VerFI indicates that our designs achieve the desired goals.
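As a generic illustration of the detect-and-correct trade-off (not the paper's hybrid construction): simultaneously correcting $t_c$ and detecting $t_d$ faults requires a code of minimum distance at least $t_c + t_d + 1$, as the toy decoder below shows for a 5-fold repetition code.

```python
def decode(received, codewords, t_c):
    """Combined correction/detection: correct when some codeword lies
    within Hamming distance t_c, otherwise flag a fault. Correcting t_c
    while detecting t_d faults needs minimum distance >= t_c + t_d + 1
    (the classic coding-theory bound)."""
    def dist(a, b):
        return sum(x != y for x, y in zip(a, b))
    best = min(codewords, key=lambda c: dist(received, c))
    return best if dist(received, best) <= t_c else "FAULT DETECTED"

# 5-fold repetition code: distance 5, so t_c = 1 with t_d = 3 works.
codewords = [(0,) * 5, (1,) * 5]
print(decode((0, 1, 0, 0, 0), codewords, t_c=1))  # 1 fault  -> corrected
print(decode((0, 1, 1, 0, 1), codewords, t_c=1))  # 3 faults -> detected
```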
Last updated:  2021-12-02
Structural and Statistical Analysis of Multidimensional Linear Approximations of Random Functions and Permutations
Tomer Ashur, Mohsin Khan, Kaisa Nyberg
The goal of this paper is to investigate linear approximations of random functions and permutations. Our motivation is twofold. First, before the distinguishability of a practical cipher from an ideal one can be analysed, the cryptanalyst must have an accurate understanding of the statistical behaviour of the ideal cipher. Second, this issue has been neglected both in old and in more recent studies, particularly when multiple linear approximations are being used simultaneously. Traditional models have been based on average behaviour and simplified using other assumptions, such as independence of the linear approximations. Multidimensional cryptanalysis was introduced to avoid making artificial assumptions about the statistical independence of linear approximations. On the other hand, it has the drawback of including many trivial approximations that do not contribute to the attack but merely waste time and memory. We show for the first time in this paper that the trivial approximations reduce the degrees of freedom of the related χ² distribution. Previously, affine linear cryptanalysis was proposed to allow removing trivial approximations while, at the same time, admitting a solid statistical model. In this paper, we identify another type of multidimensional linear approximation, called the Davies-Meyer approximation, which has similar advantages, and present full statistical models for both the affine and the Davies-Meyer type of multidimensional linear approximations. The new models given in this paper are realistic, accurate and easy to use. They are backed up by standard statistical tools such as Pearson's χ² test and finite population correction, and are demonstrated to work accurately using practical examples.
Last updated:  2021-12-02
Towards Using Blockchain Technology to Prevent Diploma Fraud
Qiang Tang
After its debut with Bitcoin in 2009, Blockchain has attracted enormous attention and been used in many different applications as a trusted black box. Many applications focus on exploiting the Blockchain-native features (e.g. trust from consensus, and smart contracts) while paying less attention to the application-specific requirements. In this paper, we initiate a systematic study of the applications in the education and training sector, where Blockchain is leveraged to combat diploma fraud. We present a general system structure for digitized diploma management systems and identify both functional and non-functional requirements. Our analysis shows that all existing Blockchain-based systems fall short in meeting these requirements. Inspired by the analysis, we propose a Blockchain-facilitated solution by leveraging some basic cryptographic primitives and data structures. A follow-up analysis shows that our solution satisfies all the identified requirements.
Last updated:  2022-09-02
Practical, Round-Optimal Lattice-Based Blind Signatures
Shweta Agrawal, Elena Kirshanova, Damien Stehle, Anshu Yadav
Blind signatures are a fundamental cryptographic primitive with numerous practical applications. While there exist many practical blind signatures from number-theoretic assumptions, the situation is far less satisfactory from post-quantum assumptions. In this work, we provide the first overall practical, lattice-based blind signature, supporting an unbounded number of signature queries and additionally enjoying optimal round complexity. We provide a detailed estimate of parameters achieved -- we obtain a signature of size slightly above 45KB, for a core-SVP hardness of 109 bits. The run-times of the signer, user and verifier are also very small. Our scheme relies on the Gentry, Peikert and Vaikuntanathan signature [STOC'08] and non-interactive zero-knowledge proofs for linear relations with small unknowns, which are significantly more efficient than their general purpose counterparts. Its security stems from a new and arguably natural assumption which we introduce, called the one-more-ISIS assumption. This assumption can be seen as a lattice analogue of the one-more-RSA assumption by Bellare et al [JoC'03]. To gain confidence in our assumption, we provide a detailed analysis of diverse attack strategies.
Last updated:  2021-12-02
Communication-Efficient Proactive MPC for Dynamic Groups with Dishonest Majorities
Karim Eldefrawy, Tancrède Lepoint, Antonin Leroux
Secure multiparty computation (MPC) has recently been increasingly adopted to secure cryptographic keys in enterprises, cloud infrastructure, and cryptocurrency and blockchain-related settings such as wallets and exchanges. Using MPC in blockchains and other distributed systems highlights the need to consider dynamic settings. In such dynamic settings, parties, and potentially even parameters of underlying secret sharing and corruption tolerance thresholds of sub-protocols, may change over the lifetime of the protocol. In particular, stronger threat models -- in which \emph{mobile} adversaries control a changing set of parties (up to $t$ out of $n$ involved parties at any instant), and may eventually corrupt \emph{all $n$ parties} over the course of a protocol's execution -- are becoming increasingly important for such real world deployments; secure protocols designed for such models are known as Proactive MPC (PMPC). In this work, we construct the first efficient PMPC protocol for \emph{dynamic} groups (where the set of parties changes over time) secure against a \emph{dishonest majority} of parties. Our PMPC protocol only requires $O(n^2)$ (amortized) communication per secret, compared to existing PMPC protocols that require $O(n^4)$ and only consider static groups with dishonest majorities. At the core of our PMPC protocol is a new efficient technique to perform multiplication of secret shared data (shared using a bivariate scheme) with $O(n \sqrt{n})$ communication with security against a dishonest majority without requiring pre-computation. We also develop a new efficient bivariate batched proactive secret sharing (PSS) protocol for dishonest majorities, which may be of independent interest. This protocol enables multiple dealers to contribute different secrets that are efficiently shared together in one batch; previous batched PSS schemes required all secrets to come from a single dealer.
Last updated:  2021-12-02
Towards Post-Quantum Security for Cyber-Physical Systems: Integrating PQC into Industrial M2M Communication
Sebastian Paul, Patrik Scheible, Friedrich Wiemer
The threat of a cryptographically relevant quantum computer contributes to an increasing interest in the field of post-quantum cryptography (PQC). Compared to existing research efforts regarding the integration of PQC into the Transport Layer Security (TLS) protocol, industrial communication protocols have so far been neglected. Since industrial cyber-physical systems (CPS) are typically deployed for decades, protection against such long-term threats is needed. In this work, we propose two novel solutions for the integration of post-quantum (PQ) primitives (digital signatures and key establishment) into the industrial protocol Open Platform Communications Unified Architecture (OPC UA): a hybrid solution combining conventional cryptography with PQC and a solution solely based on PQC. Both approaches provide mutual authentication between client and server and are realized with certificates fully compliant to the X.509 standard. We implement the two solutions and measure and evaluate their performance across three different security levels. All selected algorithms (Kyber, Dilithium, and Falcon) are candidates for standardization by the National Institute of Standards and Technology (NIST). We show that Falcon is a suitable option, especially when the floating-point hardware provided by our ARM-based evaluation platform can be used. Our proposed hybrid solution provides PQ security for early adopters but comes with additional performance and communication requirements. Our solution solely based on PQC shows superior performance across all evaluated security levels in terms of handshake duration compared to conventional OPC UA, but comes at the cost of increased handshake sizes. In addition to our performance evaluation, we provide a proof of security in the symbolic model for our two PQC-based variants of OPC UA. For this proof, we use the cryptographic protocol verifier ProVerif and formally verify the confidentiality and authentication properties of our quantum-resistant variants.
Last updated:  2022-11-27
Concurrently Composable Non-Interactive Secure Computation
Andrew Morgan, Rafael Pass
We consider the feasibility of non-interactive secure two-party computation (NISC) in the plain model satisfying the notion of superpolynomial-time simulation (SPS). While stand-alone secure SPS-NISC protocols are known from standard assumptions (Badrinarayanan et al., Asiacrypt 2017), it has remained an open problem to construct a concurrently composable SPS-NISC. Prior to our work, the best protocols require 5 rounds (Garg et al., Eurocrypt 2017), or 3 simultaneous-message rounds (Badrinarayanan et al., TCC 2017). In this work, we demonstrate the first concurrently composable SPS-NISC. Our construction assumes the existence of a non-interactive (weakly) CCA-secure commitment and a stand-alone secure SPS-NISC with subexponential security, and it satisfies the notion of "angel-based" UC security (i.e., UC with a superpolynomial-time helper) with perfect correctness. We additionally demonstrate that both of the primitives we use (albeit only with polynomial security) are necessary for such concurrently composable SPS-NISC with perfect correctness. As such, our work identifies essentially necessary and sufficient primitives for concurrently composable SPS-NISC with perfect correctness in the plain model.
Last updated:  2022-06-09
Quantum Time/Memory/Data Tradeoff Attacks
Orr Dunkelman, Nathan Keller, Eyal Ronen, Adi Shamir
One of the most celebrated and useful cryptanalytic algorithms is Hellman's time/memory tradeoff (and its Rainbow Table variant), which can be used to invert random-looking functions on $N$ possible values with time and space complexities satisfying $TM^2=N^2$. As a search problem, one can always transform it into the quantum setting by using Grover's algorithm, but this algorithm does not benefit from the possible availability of auxiliary advice obtained during a free preprocessing stage. However, at FOCS'20 it was rigorously shown that a small amount of quantum auxiliary advice (which can be stored in a quantum memory of size $M \leq O(\sqrt{N})$) cannot possibly yield an attack which is better than Grover's algorithm. In this paper we develop new quantum versions of Hellman's cryptanalytic attack which use large memories in the standard QACM (Quantum Accessible Classical Memory) model of computation. In particular, we improve Hellman's tradeoff curve to $T^{4/3}M^2=N^2$. When we generalize the cryptanalytic problem to a time/memory/data tradeoff attack (in which one has to invert $f$ for at least one of $D$ given values), we get the generalized curve $T^{4/3}M^2D^2=N^2$. A typical point on this curve is $D=N^{0.2}$, $M=N^{0.6}$, and $T=N^{0.3}$, whose time is strictly lower than both Grover's algorithm and the classical Hellman algorithm (both of which require $T=N^{0.4}$ for these $D$ and $M$ parameters).
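As a quick sanity check of the exponent arithmetic (the check is ours; the curve and the example point are from the abstract above), the point $D=N^{0.2}$, $M=N^{0.6}$, $T=N^{0.3}$ indeed lies on $T^{4/3}M^2D^2=N^2$:

    # Verify that D = N^0.2, M = N^0.6, T = N^0.3 satisfies
    # T^(4/3) * M^2 * D^2 = N^2 by comparing exponents of N.
    d, m, t = 0.2, 0.6, 0.3
    lhs_exponent = (4 / 3) * t + 2 * m + 2 * d   # T^(4/3) * M^2 * D^2
    assert abs(lhs_exponent - 2.0) < 1e-12       # equals N^2
    print(f"exponent of N on the left-hand side: {lhs_exponent}")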
Last updated:  2021-11-29
SAND: an AND-RX Feistel lightweight block cipher supporting S-box-based security evaluations
Shiyao Chen, Yanhong Fan, Ling Sun, Yong Fu, Haibo Zhou, Yongqing Li, Meiqin Wang, Weijia Wang, Chun Guo
We revisit designing AND-RX block ciphers, that is, designs assembled from the most fundamental binary operations---AND, Rotation and XOR---that do not rely on existing units. Arguably, the most popular representative is the NSA cipher \texttt{SIMON}, which remains one of the most efficient designs, but suffers from difficulty in security evaluation. As our main contribution, we propose \texttt{SAND}, a new family of lightweight AND-RX block ciphers. To overcome the difficulty regarding security evaluation, \texttt{SAND} follows a novel design approach, the core idea of which is to restrain the AND-RX operations to be within nibbles. By this, \texttt{SAND} admits an equivalent representation based on a $4\times8$ \textit{synthetic S-box} ($SSb$). This enables the use of classical S-box-based security evaluation approaches. Consequently, for all versions of \texttt{SAND}, (a) we evaluated security bounds with respect to differential and linear attacks, in both single-key and related-key scenarios; (b) we also evaluated security against impossible differential and zero-correlation linear attacks. This better understanding of the security enables the use of a relatively simple key schedule, which makes the ASIC round-based hardware implementation of \texttt{SAND} one of the state-of-the-art Feistel lightweight ciphers. As for software performance, due to the natural bitslice structure, \texttt{SAND} reaches the same level of performance as \texttt{SIMON} and is among the most software-efficient block ciphers.
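For readers unfamiliar with the AND-RX design space, the sketch below shows the publicly documented round function of SIMON32 (for context only; it is SIMON's round, not SAND's, and SAND additionally restrains its AND-RX operations to within nibbles):

    # SIMON32 round function: only AND, rotations, and XOR on 16-bit words.
    MASK = 0xFFFF

    def rotl16(x: int, r: int) -> int:
        return ((x << r) | (x >> (16 - r))) & MASK

    def simon_round(left: int, right: int, round_key: int) -> tuple[int, int]:
        f = (rotl16(left, 1) & rotl16(left, 8)) ^ rotl16(left, 2)
        return (right ^ f ^ round_key) & MASK, left

    l, r = 0x6565, 0x6877            # SIMON32/64 test-vector plaintext
    l, r = simon_round(l, r, 0x0100) # one Feistel round with a toy key
    print(f"{l:04x} {r:04x}")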
Last updated:  2021-11-29
Facial Template Protection via Lattice-based Fuzzy Extractors
Kaiyi Zhang, Hongrui Cui, Yu Yu
With the growing adoption of facial recognition worldwide as a popular authentication method, there is increasing concern about the invasion of personal privacy due to the lifetime irrevocability of facial features. In principle, {\it Fuzzy Extractors} enable biometric-based authentication while preserving the privacy of biometric templates. Nevertheless, to the best of our knowledge, most existing fuzzy extractors handle binary vectors with Hamming distance, and no explicit construction is known for facial recognition applications where the $\ell_2$-distance of real vectors is considered. In this paper, we utilize the dense packing feature of certain lattices (e.g., $\rm E_8$ and Leech) to design a family of {\it lattice-based} fuzzy extractors that docks well with existing neural network-based biometric identification schemes. We instantiate and implement the generic construction and conduct experiments on publicly available datasets. Our results confirm the feasibility of facial template protection via fuzzy extractors.
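A minimal sketch of the lattice-rounding idea, simplified to the integer lattice $\mathbb{Z}^n$ in place of the dense $\rm E_8$/Leech lattices the paper uses (all values below are illustrative): Gen rounds the feature vector to the nearest lattice point, publishes the offset as helper data, and hashes the lattice point into a key; Rep recovers the same key from any reading within rounding distance.

    import hashlib

    def gen(w: list[float]) -> tuple[bytes, list[float]]:
        c = [round(x) for x in w]                 # nearest point of Z^n
        helper = [x - cx for x, cx in zip(w, c)]  # public helper data
        key = hashlib.sha256(str(c).encode()).digest()
        return key, helper

    def rep(w_noisy: list[float], helper: list[float]) -> bytes:
        c = [round(x - h) for x, h in zip(w_noisy, helper)]
        return hashlib.sha256(str(c).encode()).digest()

    w = [0.31, -1.12, 2.05, 0.77]                     # enrollment template
    key, helper = gen(w)
    w2 = [x + e for x, e in zip(w, [0.2, -0.1, 0.15, -0.2])]  # noisy reading
    assert rep(w2, helper) == key                     # same key recovered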
Last updated:  2022-06-05
RSA Key Recovery from Digit Equivalence Information
Chitchanok Chuengsatiansup, Andrew Feutrill, Rui Qi Sim, Yuval Yarom
The seminal work of Heninger and Shacham (Crypto 2009) demonstrated a method for reconstructing secret RSA keys from partial information of the key components. In this paper we further investigate this approach, but apply it to a different context that appears in some side-channel attacks. We assume a fixed-window exponentiation algorithm that leaks the equivalence between digits, without leaking the value of the digits themselves. We explain how to exploit this side-channel information with the Heninger-Shacham algorithm. To analyse the complexity of the approach, we model the attack as a Markov process and experimentally validate the accuracy of the model. Our model shows that the attack is feasible in the commonly used case where the window size is 5.
Last updated:  2021-11-29
Performance bounds for QC-MDPC codes decoders
Marco Baldi, Alessandro Barenghi, Franco Chiaraluce, Gerardo Pelosi, Paolo Santini
Quasi-Cyclic Moderate-Density Parity-Check (QC-MDPC) codes are receiving increasing attention for their advantages in the context of post-quantum code-based asymmetric cryptography. However, a fundamentally open question concerns modeling the performance of their decoders in the region of a low decoding failure rate (DFR). We provide two approaches for bounding the performance of these decoders, and study their asymptotic behavior. We first consider the well-known Maximum Likelihood (ML) decoder, which achieves optimal performance and thus provides a lower bound on the performance of any sub-optimal decoder. We provide lower and upper bounds on the performance of ML decoding of QC-MDPC codes and show that the DFR of the ML decoder decays polynomially in the QC-MDPC code length when all other parameters are fixed. Second, we analyze hard-to-decode error patterns for Bit-Flipping (BF) decoding algorithms, from which we derive lower bounds on the DFR of BF decoders applied to QC-MDPC codes.
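For orientation, the following toy bit-flipping decoder (our sketch; the parity-check matrix below is tiny and not actually MDPC, and the threshold rule is the simplest majority variant) shows the decoder family whose DFR is being bounded:

    import numpy as np

    def bit_flip_decode(H: np.ndarray, y: np.ndarray, max_iters: int = 20):
        y = y.copy()
        for _ in range(max_iters):
            syndrome = (H @ y) % 2
            if not syndrome.any():
                return y                     # decoding success
            # number of unsatisfied parity checks touching each bit
            upc = H.T @ syndrome
            y ^= (upc == upc.max()).astype(y.dtype)   # flip worst bits
        return None                          # decoding failure (DFR event)

    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]], dtype=np.uint8)
    codeword = np.zeros(6, dtype=np.uint8)   # 0 is always a codeword
    noisy = codeword.copy(); noisy[2] ^= 1   # single-bit error
    print(bit_flip_decode(H, noisy))         # recovers the all-zero word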
Last updated:  2021-11-29
Diving Deep into the Weak Keys of Round Reduced Ascon
Raghvendra Rohit, Santanu Sarkar
At ToSC 2021, Rohit \textit{et al.} presented the first distinguishing and key recovery attacks on 7-round Ascon without violating the designers' security claims of the nonce-respecting setting and the data limit of $2^{64}$ blocks per key. So far, these are the best attacks on 7-round Ascon. However, the distinguishers require an (impractical) $2^{60}$ data complexity, while the data complexity of the key recovery attacks exactly equals $2^{64}$. Whether there are any practical distinguishers and key recovery attacks (with data less than $2^{64}$) on 7-round Ascon is still an open problem. In this work, we give positive answers to these questions by providing a comprehensive security analysis of Ascon in the weak key setting. Our first major result is the 7-round cube distinguishers with complexities $2^{46}$ and $2^{33}$ which work for $2^{82}$ and $2^{63}$ keys, respectively. Notably, we show that such weak keys exist for any choice (out of 64) of 46 and 33 specifically chosen nonce variables. In addition, we improve the data complexities of existing distinguishers for 5, 6 and 7 rounds by a factor of $2^{8}$, $2^{16}$ and $2^{27}$, respectively. Our second contribution is a new theoretical framework for weak keys of Ascon which is solely based on the algebraic degree. Based on our construction, we identify $2^{127.99}$, $2^{127.97}$ and $2^{116.34}$ weak keys (out of $2^{128}$) for 5, 6 and 7 rounds, respectively. Next, we present two key recovery attacks on 7 rounds with different attack complexities. The best attack can recover the secret key with $2^{63}$ data, $2^{69}$ bits of memory and $2^{115.2}$ time. Our attacks are far from threatening the security of the full 12-round Ascon, but we expect that they provide new insights into Ascon's security.
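The cube-distinguisher principle behind these attacks, on a toy stand-in function rather than Ascon itself: summing a keyed Boolean function over all assignments of a chosen set of cube (nonce) variables yields zero whenever its algebraic degree in those variables is below the cube size, which an ideal function avoids.

    from itertools import product

    def f(k0, k1, v0, v1, v2):
        # stand-in keyed function of degree 2 in the nonce variables v
        return (k0 & v0 & v1) ^ (k1 & v2) ^ (v0 & v2) ^ k0 ^ v1

    def cube_sum(key, cube_size=3):
        s = 0
        for v in product((0, 1), repeat=cube_size):
            s ^= f(key[0], key[1], *v)
        return s

    # degree in v is 2 < 3, so the sum over the full 3-bit cube is 0
    # for every key: a distinguisher from a random function.
    assert all(cube_sum(k) == 0 for k in product((0, 1), repeat=2))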
Last updated:  2022-02-18
Accelerator for Computing on Encrypted Data
Sujoy Sinha Roy, Ahmet Can Mert, Aikata, Sunmin Kwon, Youngsam Shin, Donghoon Yoo
Fully homomorphic encryption enables computation on encrypted data, and hence it has great potential in privacy-preserving outsourcing of computations. In this paper, we present a complete instruction-set processor architecture ‘Medha’ for accelerating the cloud-side operations of an RNS variant of the HEAAN homomorphic encryption scheme. Medha has been designed following a modular hardware design approach to attain a fast computation time for computationally expensive homomorphic operations on encrypted data. At every level of the implementation hierarchy, we explore possibilities for parallel processing. Starting from hardware-friendly parallel algorithms for the basic building blocks, we gradually build heavily parallel RNS polynomial arithmetic units. Next, many of these parallel units are interconnected elegantly so that their interconnections require the minimum number of nets, therefore making the overall architecture placement-friendly on the implementation platform. As homomorphic encryption is computation- as well as data-centric, the speed of homomorphic evaluations depends greatly on the way the data variables are handled. For Medha, we take a memory-conservative design approach and get rid of any off-chip memory access during homomorphic evaluations. Our instruction-set accelerator Medha is programmable and it supports all homomorphic evaluation routines of the leveled fully RNS-HEAAN scheme. For a reasonably large parameter set with polynomial ring dimension $2^{14}$ and a 438-bit ciphertext coefficient modulus (corresponding to 128-bit security), we implemented Medha on a Xilinx Alveo U250 card. Medha achieves the fastest computation latency to date and is almost 2.4× faster in latency and also somewhat smaller in area than a state-of-the-art reconfigurable hardware accelerator for the same parameter.
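A minimal sketch of the Residue Number System (RNS) arithmetic that such RNS-HEAAN accelerators parallelize (toy moduli chosen for illustration): a large coefficient is split into small residues, operations run independently per modulus, and the result is recovered via the Chinese Remainder Theorem.

    from math import prod

    MODULI = [1009, 1013, 1019]          # toy word-sized primes
    Q = prod(MODULI)

    def to_rns(x: int) -> list[int]:
        return [x % q for q in MODULI]

    def from_rns(residues: list[int]) -> int:
        x = 0
        for r, q in zip(residues, MODULI):
            m = Q // q
            x += r * m * pow(m, -1, q)   # CRT reconstruction
        return x % Q

    a, b = 123456, 654321
    prod_rns = [(ra * rb) % q for ra, rb, q in zip(to_rns(a), to_rns(b), MODULI)]
    assert from_rns(prod_rns) == (a * b) % Q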
Last updated:  2021-11-29
How to Claim a Computational Feat
Clémence Chevignard, Rémi Géraud-Stewart, Antoine Houssais, David Naccache, Edmond de Roffignac
Consider some user buying software or hardware from a provider. The provider claims to have subjected this product to a number of tests, ensuring that the system operates nominally. How can the user check this claim without running all the tests anew? The problem is similar to checking a mathematical conjecture. Many authors report having checked a conjecture $C(x)=\mbox{True}$ for all $x$ in some large set or interval $U$. How can mathematicians challenge this claim without performing all the expensive computations again? This article describes a non-interactive protocol in which the prover provides (a digest of) the computational trace resulting from processing $x$, for randomly chosen $x \in U$. With appropriate care, this information can be used by the verifier to determine how likely it is that the prover actually checked $C(x)$ over $U$. Unlike ``traditional'' interactive proof and probabilistically-checkable proof systems, the protocol is not limited to restricted complexity classes, nor does it require an expensive transformation of programs being executed into circuits or ad-hoc languages. The flip side is that it is restricted to checking assertions that we dub ``\emph{refutation-precious}'': expected to always hold true, and such that the benefit resulting from reporting a counterexample far outweighs the cost of computing $C(x)$ over all of $U$.
Last updated:  2022-12-06
Performance Evaluation of Post-Quantum TLS 1.3 on Resource-Constrained Embedded Systems
George Tasopoulos, Jinhui Li, Apostolos P. Fournaris, Raymond K. Zhao, Amin Sakzad, Ron Steinfeld
Transport Layer Security (TLS) constitutes one of the most widely used protocols for securing Internet communications and has also found broad acceptance in the Internet of Things (IoT) domain. As we progress toward a security environment resistant to quantum computer attacks, TLS needs to be transformed to support post-quantum cryptography. However, post-quantum TLS is still not standardised, and its overall performance, especially on resource-constrained, IoT-capable, embedded devices, is not well understood. In this paper, we showcase how TLS 1.3 can be made quantum-safe by modifying the TLS 1.3 architecture in order to accommodate the latest Post-Quantum Cryptography (PQC) algorithms from the NIST PQC process. Furthermore, we evaluate the execution time, memory, and bandwidth requirements of this proposed post-quantum variant of TLS 1.3 (PQ TLS 1.3). This is facilitated by integrating the pqm4 and PQClean library implementations of almost all PQC algorithms selected for standardisation by the NIST PQC process, as well as the alternatives to be evaluated in a new round (Round 4). The proposed solution and evaluation focus on the lower end of resource-constrained embedded devices. Thus, the evaluation is performed on the ARM Cortex-M4 embedded platform NUCLEO-F439ZI, which provides a $180$ MHz clock rate, $2$ MB of Flash memory, and $256$ KB of SRAM. To the authors' knowledge, this is the first systematic, thorough, and complete timing, memory usage, and network traffic evaluation of PQ TLS 1.3 for all the NIST PQC process selections and upcoming candidate algorithms, explicitly targeting resource-constrained embedded systems.
Last updated:  2021-11-29
Time-memory Trade-offs for Saber+ on Memory-constrained RISC-V
Jipeng Zhang, Junhao Huang, Zhe Liu, Sujoy Sinha Roy
Saber is a module-lattice-based key encapsulation scheme that has been selected as a finalist in the NIST Post-Quantum Cryptography Standardization Project. As Saber computes on considerably large matrices and vectors of polynomials, its efficient implementation on memory-constrained IoT devices is very challenging. In this paper, we present an implementation of Saber with a minor tweak to the original Saber protocol for achieving reduced memory consumption and better performance. We call this tweaked implementation `Saber+', and the difference compared to Saber is that we use different generation methods for the public matrix \(\boldsymbol{A}\) and the secret vector \(\boldsymbol{s}\) for memory optimization. Our highly optimized software implementation of Saber+ on a memory-constrained RISC-V platform achieves a 48\% performance improvement compared with the best state-of-the-art memory-optimized implementation of the original Saber. Specifically, we present various memory and performance optimizations for Saber+ on a memory-constrained RISC-V microcontroller with merely 16KB of memory available. We utilize the Number Theoretic Transform (NTT) to speed up the polynomial multiplication in Saber+. For optimizing cycle counts and memory consumption during the NTT, we carefully compare the efficiency of complete and incomplete NTTs, with platform-specific optimization. We implement 4-layer merging in the complete NTT and 3-layer merging in the 6-layer incomplete NTT. An improved on-the-fly generation strategy for the public matrix and secret vector in Saber+ results in a low memory footprint. Furthermore, by combining different optimization strategies, various time-memory trade-offs are explored. Our software implementation of Saber+ on the selected RISC-V core takes just 3,809K, 3,594K, and 3,193K clock cycles for key generation, encapsulation, and decapsulation, respectively, while consuming only 4.8KB of stack at most.
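A sketch of the on-the-fly public-matrix generation idea (our illustration with made-up toy parameters, not the exact Saber+ routine): instead of expanding and storing the whole matrix from a seed, each row is re-generated from an XOF exactly when it is needed, trading cycles for stack memory.

    import hashlib

    N_COEFFS, Q, DIM = 256, 8192, 3      # toy Saber-like parameters

    def expand_row(seed: bytes, row: int) -> list[int]:
        """Derive one row of A (DIM polynomials of N_COEFFS coefficients)."""
        xof = hashlib.shake_128(seed + bytes([row]))
        stream = xof.digest(2 * N_COEFFS * DIM)
        return [int.from_bytes(stream[i:i + 2], "little") % Q
                for i in range(0, len(stream), 2)]   # modulo bias ignored here

    seed = b"\x00" * 32
    for row in range(DIM):                # each row lives only inside the loop
        a_row = expand_row(seed, row)
        # ... multiply a_row with the secret vector, then discard it ...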
Last updated:  2021-11-29
Blockchain for IoT: A Critical Analysis Concerning Performance and Scalability
Ziaur Rahman, Xun Yi, Ibrahim Khalil, Andrei Kelarev
The world has been experiencing a mind-blowing expansion of blockchain technology since it was first introduced as the basis of an emerging cryptocurrency called Bitcoin. It is now regarded as a pervasive frame of reference across almost all research domains, ranging from virtual cash to agriculture, and from supply chains to the Internet of Things. The ability to maintain a self-administering ledger with legitimate immutability makes blockchain appealing for the Internet of Things (IoT). As billions of IoT devices are now online in a distributed fashion, huge challenges and questions need to be addressed in pursuit of urgently needed solutions. The present paper is motivated by the aim of facilitating such efforts. Its contribution is to identify the trade-offs the IoT ecosystem usually encounters because of a wrong choice of blockchain technology. Unlike a survey or review, the critical findings of this paper target specific security challenges of blockchain-IoT infrastructure, and they aim to direct developers and researchers in this domain towards sound combinations of blockchain-enabled IoT applications. In addition, the paper provides a deep insight into Ethereum, Hyperledger, and IOTA to show their limitations and prospects in terms of performance and scalability.
Last updated:  2021-11-29
Chaos and Logistic Map based Key Generation Technique for AES-driven IoT Security
Ziaur Rahman, Ibrahim Khalil, Mousumi Sumi
Lightweight block ciphers have repeatedly been claimed to be a suitable substitute for securing the Internet of Things, and they are now envisaged as a pervasive frame of reference for privacy preservation in smart and sensor-oriented appliances. Many approaches, however, trade the desired degree of security for simplicity of processing. Strengthening a well-known symmetric, block-based algorithm using either a chaos-motivated logistic map or an elliptic curve has shown far-reaching potential for secure real-time communication. The attractive features of logistic maps, such as unpredictability and randomness, lend themselves to dynamic key propagation, in sync with chaos, and to scheduling techniques towards data integrity. Since a small alteration in the key produces an oversized deviation in the output, this also strengthens data confidentiality. It may, however, increase time consumption, which challenges instant data exchange between participating node entities. In consideration of the latency required for both secure encryption and decryption, the proposed approach modifies the key-origination matrix along with the S-box. Our analysis indicates that the time required is proportional to the length of the plaintext sent, while the plaintext length is inversely related to the probability of a given letter occurring in the message. In line with this, the work shows how the induced chaos escalates the desired key initiation before message transmission.
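A generic illustration of logistic-map key material generation (our sketch, not the paper's key-origination matrix; float-based chaos keystreams are shown only to demonstrate seed sensitivity, not as a secure design): iterate x <- r*x*(1-x) in the chaotic regime (r close to 4) and harvest bytes.

    def logistic_key_bytes(x0: float, r: float = 3.99, n: int = 16) -> bytes:
        x, out = x0, bytearray()
        for _ in range(100):          # discard transient iterations
            x = r * x * (1 - x)
        for _ in range(n):
            x = r * x * (1 - x)
            out.append(int(x * 256) % 256)
        return bytes(out)

    k1 = logistic_key_bytes(0.614)
    k2 = logistic_key_bytes(0.6140001)    # tiny seed change...
    print(k1.hex())
    print(k2.hex())                       # ...yields a very different key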
Last updated:  2021-12-06
Kicking-the-Bucket: Fast Privacy-Preserving Trading Using Buckets
Mariana Botelho da Gama, John Cartlidge, Antigoni Polychroniadou, Nigel P. Smart, Younes Talibi Alaoui
We examine bucket-based and volume-based algorithms for privacy-preserving asset trading in a financial dark pool. Our bucket-based algorithm places orders in quantised buckets, whereas the volume-based algorithm allows any volume size but requires more complex validation mechanisms. In all cases, we conclude that these algorithms are highly efficient and offer a practical solution to the commercial problem of preserving privacy of order information in a dark pool trading venue.
Last updated:  2023-04-10
Just how hard are rotations of $\mathbb{Z}^n$? Algorithms and cryptography with the simplest lattice
Huck Bennett, Atul Ganju, Pura Peetathawatchai, Noah Stephens-Davidowitz
$\newcommand{\Z}{\mathbb{Z}} \newcommand{\basis}{B}$We study the computational problem of finding a shortest non-zero vector in a rotation of $\Z^n$, which we call $\Z$SVP. It has been a long-standing open problem to determine if a polynomial-time algorithm for $\Z$SVP exists, and there is by now a beautiful line of work showing how to solve it efficiently in certain very special cases. However, despite all of this work, the fastest known algorithm that is proven to solve $\Z$SVP is still simply the fastest known algorithm for solving SVP (i.e., the problem of finding shortest non-zero vectors in arbitrary lattices), which runs in $2^{n + o(n)}$ time. We therefore set aside the (perhaps impossible) goal of finding an efficient algorithm for $\Z$SVP and instead ask what else we can say about the problem. E.g., can we find any non-trivial speedup over the best known SVP algorithm? And, if $\Z$SVP actually is hard, then what consequences would follow? Our results are as follows. 1. We show that $\Z$SVP is in a certain sense strictly easier than SVP on arbitrary lattices. In particular, we show how to reduce $\Z$SVP to an approximate version of SVP in the same dimension (in fact, even to approximate unique SVP, for any constant approximation factor). Such a reduction seems very unlikely to work for SVP itself, so we view this as a qualitative separation of $\Z$SVP from SVP. As a consequence of this reduction, we obtain a $2^{n/2 + o(n)}$-time algorithm for $\Z$SVP, i.e., the first non-trivial speedup over the best known algorithm for SVP on general lattices. (In fact, this reduction works for a more general class of lattices---semi-stable lattices with not-too-large $\lambda_1$.) 2. We show a simple public-key encryption scheme that is secure if (an appropriate variant of) $\Z$SVP is actually hard. Specifically, our scheme is secure if it is difficult to distinguish (in the worst case) a rotation of $\Z^n$ from either a lattice with all non-zero vectors longer than $\sqrt{n/\log n}$ or a lattice with smoothing parameter significantly smaller than the smoothing parameter of $\Z^n$. The latter result has an interesting qualitative connection with reverse Minkowski theorems, which in some sense say that ``$\Z^n$ has the largest smoothing parameter.'' 3. We show a distribution of bases $\basis$ for rotations of $\Z^n$ such that, if $\Z$SVP is hard for any input basis, then $\Z$SVP is hard on input $\basis$. This gives a satisfying theoretical resolution to the problem of sampling hard bases for $\Z^n$, which was studied by Blanks and Miller (PQCrypto, 2021). This worst-case to average-case reduction is also crucially used in the analysis of our encryption scheme. (In recent independent work that appeared as a preprint before this work, Ducas and van Woerden showed essentially the same thing for general lattices (Eurocrypt, 2022), and they also used this to analyze the security of a public-key encryption scheme. Similar ideas also appeared in different contexts in the work of Cash, Hofheinz, Kiltz, and Peikert, as well as that of Aono, Espitau, and Nguyen.) 4. We perform experiments to determine how practical basis reduction performs on bases of $\Z^n$ that are generated in different ways and how heuristic sieving algorithms perform on $\Z^n$. Our basis reduction experiments complement and add to those performed by Blanks and Miller, as we work with a larger class of algorithms (i.e., larger block sizes) and study the ``provably hard'' distribution of bases described above.
Our sieving experiments confirm that heuristic sieving algorithms perform as expected on $\Z^n$.
Last updated:  2021-11-29
SoK: Plausibly Deniable Storage
Chen Chen, Xiao Liang, Bogdan Carbunar, Radu Sion
Data privacy is critical in instilling trust and empowering the societal pacts of modern technology-driven democracies. Unfortunately, it is under continuous attack by overreaching or outright oppressive governments, including some of the world's oldest democracies. Increasingly-intrusive anti-encryption laws severely limit the ability of standard encryption to protect privacy. New defense mechanisms are needed. Plausible deniability (PD) is a powerful property, enabling users to hide the existence of sensitive information in a system under direct inspection by adversaries. Popular encrypted storage systems such as TrueCrypt and other research efforts have attempted to also provide plausible deniability. Unfortunately, these efforts have often operated under less well-defined assumptions and adversarial models. Careful analyses often uncover not only high overheads but also outright security compromise. Further, our understanding of adversaries, the underlying storage technologies, as well as the available plausible deniable solutions have evolved dramatically in the past two decades. The main goal of this work is to systematize this knowledge. It aims to: - identify key PD properties, requirements, and approaches; - present a direly-needed unified framework for evaluating security and performance; - explore the challenges arising from the critical interplay between PD and modern system layered stacks; - propose a new "trace-oriented" PD paradigm, able to decouple security guarantees from the underlying systems and thus ensure a higher level of flexibility and security independent of the technology stack. This work is meant also as a trusted guide for system and security practitioners around the major challenges in understanding, designing, and implementing plausible deniability into new or existing systems.
Last updated:  2021-11-29
Improving Deep Learning Networks for Profiled Side-Channel Analysis Using Performance Improvement Techniques
Damien Robissout, Lilian Bossuet, Amaury Habrard, Vincent Grosso
The use of deep learning techniques to perform side-channel analysis has attracted the attention of many researchers, as good performance has been obtained with these techniques. Unfortunately, the understanding of the neural networks used to perform side-channel attacks is not very advanced yet. In this paper, we propose to contribute to this direction by studying the impact of some particular deep learning techniques for tackling side-channel attack problems. More precisely, we propose to focus on three existing techniques: batch normalization, dropout and weight decay, which have not yet been used in the side-channel context. By combining these techniques adequately for our problem, we show that it is possible to improve the attack performance, i.e. the number of traces needed to recover the secret, by more than 55%. Additionally, they allow us to gain more than 34% in terms of training time. We also show that an architecture trained with such techniques is able to perform attacks efficiently even in the context of desynchronized traces.
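A hedged sketch of where the three techniques live in a profiled-SCA-style classifier (our toy architecture and parameter values, not the paper's exact network): batch normalization and dropout are model layers, while weight decay is an optimizer argument.

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(700, 200),      # e.g. 700 trace samples per input
        nn.BatchNorm1d(200),      # technique 1: batch normalization
        nn.ReLU(),
        nn.Dropout(p=0.5),        # technique 2: dropout
        nn.Linear(200, 256),      # 256 classes for one key-byte hypothesis
    )
    optimizer = torch.optim.Adam(model.parameters(),
                                 lr=1e-3,
                                 weight_decay=1e-4)   # technique 3: weight decay

    traces = torch.randn(64, 700)         # a batch of (dummy) power traces
    loss = nn.CrossEntropyLoss()(model(traces),
                                 torch.randint(0, 256, (64,)))
    loss.backward()
    optimizer.step()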
Last updated:  2022-05-18
Longest Chain Consensus Under Bandwidth Constraint
Joachim Neu, Srivatsan Sridhar, Lei Yang, David Tse, Mohammad Alizadeh
Spamming attacks are a serious concern for consensus protocols, as witnessed by recent outages of a major blockchain, Solana. They cause congestion and excessive message delays in a real network due to its bandwidth constraints. In contrast, longest chain (LC), an important family of consensus protocols, has previously only been proven secure assuming an idealized network model in which all messages are delivered within bounded delay. This model-reality mismatch is further aggravated for Proof-of-Stake (PoS) LC where the adversary can spam the network with equivocating blocks. Hence, we extend the network model to capture bandwidth constraints, under which nodes now need to choose carefully which blocks to spend their limited download budget on. To illustrate this point, we show that 'download along the longest header chain', a natural download rule for Proof-of-Work (PoW) LC, is insecure for PoS LC. We propose a simple rule 'download towards the freshest block', formalize two common heuristics 'not downloading equivocations' and 'blocklisting', and prove in a unified framework that PoS LC with any one of these download rules is secure in bandwidth-constrained networks. In experiments, we validate our claims and showcase the behavior of these download rules under attack. By composing multiple instances of a PoS LC protocol with a suitable download rule in parallel, we obtain a PoS consensus protocol that achieves a constant fraction of the network's throughput limit even under worst-case adversarial strategies.
Last updated:  2022-05-05
Information Dispersal with Provable Retrievability for Rollups
Kamilla Nazirkhanova, Joachim Neu, David Tse
The ability to verifiably retrieve transaction or state data stored off-chain is crucial to blockchain scaling techniques such as rollups or sharding. We formalize the problem and design a storage- and communication-efficient protocol using linear erasure-correcting codes and homomorphic vector commitments. Motivated by application requirements for rollups, our solution Semi-AVID-PR departs from earlier Verifiable Information Dispersal schemes in that we do not require comprehensive termination properties. Compared to Data Availability Oracles, under no circumstance do we fall back to returning empty blocks. Distributing a file of 22 MB among 256 storage nodes, up to 85 of which may be adversarial, requires in total ~70 MB of communication and storage, and ~41 seconds of single-thread runtime (<3 seconds on 16 threads) on an AMD Opteron 6378 processor when using the BLS12-381 curve. Our solution requires no modification to on-chain contracts of Validium rollups such as StarkWare's StarkEx. Additionally, it provides privacy of the dispersed data against honest-but-curious storage nodes. Finally, we discuss an application of our Semi-AVID-PR scheme to data availability verification schemes based on random sampling.
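A minimal sketch of the erasure-coding half of such a dispersal scheme (the homomorphic vector commitments are omitted; the field and parameters below are toy choices): k data chunks become the coefficients of a polynomial over a prime field, node i stores the evaluation at i, and any k of the n shares recover the data by Lagrange interpolation.

    P = 2**31 - 1                              # toy prime field

    def disperse(chunks: list[int], n: int) -> list[tuple[int, int]]:
        evaluate = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(chunks)) % P
        return [(i, evaluate(i)) for i in range(1, n + 1)]

    def recover(shares: list[tuple[int, int]], k: int) -> list[int]:
        xs, ys = zip(*shares[:k])
        coeffs = [0] * k
        for i in range(k):                     # Lagrange basis for x_i
            basis, denom = [1], 1
            for j in range(k):
                if i == j:
                    continue
                # multiply the basis polynomial by (x - x_j)
                basis = [(a - xs[j] * b) % P
                         for a, b in zip([0] + basis, basis + [0])]
                denom = denom * (xs[i] - xs[j]) % P
            scale = ys[i] * pow(denom, -1, P) % P
            coeffs = [(c + scale * b) % P for c, b in zip(coeffs, basis)]
        return coeffs

    data = [11, 22, 33]                        # k = 3 chunks
    shares = disperse(data, n=7)               # any 3 of 7 shares suffice
    assert recover([shares[1], shares[4], shares[6]], k=3) == data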
Last updated:  2022-09-23
Post-Quantum Zero Knowledge, Revisited (or: How to do Quantum Rewinding Undetectably)
Alex Lombardi, Fermi Ma, Nicholas Spooner
When do classical zero-knowledge protocols remain secure against quantum attacks? In this work, we develop the techniques, tools, and abstractions necessary to answer this question for foundational protocols: 1) We prove that the Goldreich-Micali-Wigderson protocol for graph non-isomorphism and the Feige-Shamir protocol for NP remain zero-knowledge against quantum adversaries. At the heart of our proof is a new quantum rewinding technique that enables extracting information from multiple invocations of a quantum adversary without disturbing its state. 2) We prove that the Goldreich-Kahan protocol for NP is post-quantum zero knowledge using a simulator that can be seen as a natural quantum extension of the classical simulator. Our results achieve negligible simulation error, appearing to contradict a recent impossibility result due to Chia-Chung-Liu-Yamakawa (FOCS 2021). This brings us to our final contribution: 3) We introduce coherent-runtime expected quantum polynomial time, a simulation notion that (1) precisely captures all of our zero-knowledge simulators, (2) cannot break any polynomial hardness assumptions, (3) implies strict polynomial-time epsilon-simulation and (4) is not subject to the CCLY impossibility. In light of our positive results and the CCLY negative results, we propose coherent-runtime simulation to be the appropriate quantum analogue of classical expected polynomial-time simulation.
Last updated:  2021-12-13
An End-to-End Bitstream Tamper Attack Against Flip-Chip FPGAs
Fahim Rahman, Farimah Farahmandi, Mark Tehranipoor
FPGA bitstream encryption and authentication can be defeated by various techniques and it is critical to understand how these vulnerabilities enable extraction and tampering of commercial FPGA bitstreams. We exploit the physical vulnerability of bitstream encryption keys to readout using failure analysis equipment and conduct an end-to-end bitstream tamper attack. Our work underscores the feasibility of supply chain bitstream tampering and the necessity of guarding against such attacks in critical systems.
Last updated:  2021-11-23
Revisiting the Security of COMET Authenticated Encryption Scheme
Shay Gueron, Ashwin Jha, Mridul Nandi
COMETv1, by Gueron, Jha and Nandi, is a mode of operation for nonce-based authenticated encryption with associated data functionality. It was one of the second-round candidates in the ongoing NIST Lightweight Cryptography Standardization Process. In this paper, we study a generalized version of COMETv1, which we call gCOMET, from a provable security perspective. First, we present a comprehensive and complete security proof for gCOMET in the ideal cipher model. Second, we view COMET, the underlying mode of operation in COMETv1, as an instantiation of gCOMET, and derive its concrete security bounds. Finally, we propose another instantiation of gCOMET, dubbed COMETv2, and show that this version achieves better security guarantees as well as more memory-efficient implementations compared to COMETv1.
Last updated:  2021-11-23
Lightweight Swarm Authentication
George Teseleanu
In this paper we describe a provably secure authentication protocol for resource limited devices. The proposed algorithm performs whole-network authentication using very few rounds and in a time logarithmic in the number of nodes. Compared to one-to-one node authentication and previous proposals, our protocol is more efficient: it requires less communication and computation and, in turn, lower energy consumption.
Last updated:  2021-11-22
Route Discovery in Private Payment Channel Networks
Zeta Avarikioti, Mahsa Bastankhah, Mohammad Ali Maddah-Ali, Krzysztof Pietrzak, Jakub Svoboda, Michelle Yeo
In this work, we are the first to explore route discovery in private channel networks. We first determine what ``ideal'' privacy for a routing protocol means in this setting. We observe that protocols achieving this strong privacy definition exist by leveraging (topology hiding) Multi-Party Computation, but they are (inherently) inefficient as route discovery must involve the entire network. We then present protocols with weaker privacy guarantees but much better efficiency. In particular, route discovery typically only involves a small fraction of the nodes, but some information on the topology and balances -- beyond what is necessary for performing the transaction -- is leaked. The core idea is that both sender and receiver gossip a message which then slowly propagates through the network, and the moment any node in the network receives both messages, a path is found. In our first protocol the message is always sent to all neighbouring nodes with a delay proportional to the fees of that edge. In our second protocol the message is only sent to one neighbour, chosen randomly with a probability proportional to its degree. While the first instantiation always finds the cheapest path, the second might not, but it involves a smaller fraction of the network. We also discuss extensions that further improve privacy, such as employing bilinear maps so that the gossiped messages can be re-randomized, making them unlinkable. Simulations of our protocols on the Lightning network topology (for random transactions and uniform fees) show that our first protocol (which finds the cheapest path) typically involves around 12\% of the 6376 nodes, while the second only touches around 18 nodes $(<0.3\%)$, and the cost of the path that is found is around twice the cost of the optimal one.
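A simulation sketch of the first protocol's mechanics (our toy graph and node names; not the paper's simulator): when both endpoints flood a message and each edge delays it by its fee, the earliest arrival times are exactly Dijkstra distances, and a node reached by both messages with minimal combined delay lies on the cheapest path.

    import heapq

    def arrival_times(graph, source):
        dist, heap = {source: 0}, [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, fee in graph[u]:
                if d + fee < dist.get(v, float("inf")):
                    dist[v] = d + fee
                    heapq.heappush(heap, (dist[v], v))
        return dist

    graph = {                       # toy channel network, fees as delays
        "S": [("A", 1), ("B", 4)],
        "A": [("S", 1), ("B", 1), ("R", 5)],
        "B": [("S", 4), ("A", 1), ("R", 1)],
        "R": [("A", 5), ("B", 1)],
    }
    ds, dr = arrival_times(graph, "S"), arrival_times(graph, "R")
    meet = min(graph, key=lambda v: ds[v] + dr[v])
    print(meet, ds[meet] + dr[meet])   # a node on the cheapest S-R path, cost 3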
Last updated:  2021-11-22
SIMC: ML Inference Secure Against Malicious Clients at Semi-Honest Cost
Nishanth Chandran, Divya Gupta, Sai Lakshmi Bhavana Obbattu, Akash Shah
Secure inference allows a model owner (or, the server) and the input owner (or, the client) to perform inference on a machine learning model without revealing their private information to each other. A large body of work has shown efficient cryptographic solutions to this problem through secure 2-party computation. However, they assume that both parties are semi-honest, i.e., follow the protocol specification. Recently, Lehmkuhl et al. showed that malicious clients can extract the whole model of the server using novel model-extraction attacks. To remedy the situation, they introduced the client-malicious threat model and built a secure inference system, MUSE, that provides security guarantees even when the client is malicious. In this work, we design and build SIMC, a new cryptographic system for secure inference in the client-malicious threat model. On the secure inference benchmarks considered by MUSE, SIMC has 23-29× less communication and is up to 11.4× faster than MUSE. SIMC obtains these improvements using a novel protocol for non-linear activation functions (such as ReLU) that has more than 28× less communication and is up to 43× more performant than MUSE. In fact, SIMC's performance beats the state-of-the-art semi-honest secure inference system! Finally, similar to MUSE, we show how to push the majority of the cryptographic cost of SIMC to an input-independent preprocessing phase. While the cost of the online phase of this protocol, SIMC++, is the same as that of MUSE, the overall improvements of SIMC translate to similar improvements to the preprocessing phase of MUSE.
Last updated:  2023-12-22
PNB-focused Differential Cryptanalysis of ChaCha Stream Cipher
Shotaro Miyashita, Ryoma Ito, and Atsuko Miyaji
This study focuses on differential cryptanalysis of the ChaCha stream cipher. In the conventional approach, an adversary first searches for an input/output differential pair with the highest differential bias and then analyzes the probabilistic neutral bits (PNB) based on the obtained input/output differential pair. However, although the time and data complexities for the attack can be estimated by the differential bias and PNB obtained by this approach, the combination of the differential bias and PNB is not always optimal. In addition, the existing studies have not performed a comprehensive analysis of the PNB; thus, they have not provided an upper bound on the number of rounds required for a differential attack that uses a single-bit truncated differential to be successful. To address these limitations, we propose a PNB-focused differential attack on reduced-round ChaCha by first comprehensively analyzing the PNB for all possible single-bit truncated output differences and then searching for the input/output differential pair with the highest differential bias based on the obtained PNB. The best existing attack on ChaCha, proposed by Beierle et al. at CRYPTO 2020, works on up to 7 rounds, whereas the most extended attack we observed works on up to 7.25 rounds using the proposed PNB-focused approach. The time complexity, data complexity, and success probability of the proposed attack are \(2^{255.62}\), \(2^{48.36}\), and 0.5, respectively. Although the proposed attack is less efficient than a brute force attack, it is the first dedicated attack on the target and provides both a baseline and useful components (i.e., differential bias and PNB) for improved attacks.
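For concreteness, the object whose differential behaviour the PNB analysis targets is the ChaCha quarter-round, shown below in its standard RFC 8439 form (the inputs are the RFC's quarter-round test vector; the single-bit difference experiment is our illustration):

    MASK32 = 0xFFFFFFFF

    def rotl32(x, r):
        return ((x << r) | (x >> (32 - r))) & MASK32

    def quarter_round(a, b, c, d):
        a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
        c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
        a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
        c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
        return a, b, c, d

    # single-bit input difference in d, observe the output difference
    x  = quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567)
    x2 = quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567 ^ 1)
    print([f"{u ^ v:08x}" for u, v in zip(x, x2)])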
Last updated:  2021-11-22
SoK: Tokenization on Blockchain
Gang Wang, Mark Nixon
Blockchain, a potentially disruptive technology, advances many different applications, e.g., crypto-currencies, supply chains, and the Internet of Things. Under the hood, blockchain is required to handle different kinds of digital assets and data. The next-generation blockchain ecosystem is expected to consist of numerous applications, and each application may have a distinct representation of digital assets. However, digital assets cannot be directly recorded on the blockchain, and a tokenization process is required to format these assets. Tokenization on blockchain will inevitably require a certain level of proper standards to enrich advanced functionalities and enhance interoperable capabilities for future applications. However, due to specific features of digital assets, it is hard to obtain a standard token form to represent all kinds of assets. For example, when considering fungibility, some assets are divisible and identical, commonly referred to as fungible assets. In contrast, others that are not fungible are widely referred to as non-fungible assets. When tokenizing these assets, we are required to follow different tokenization processes. How to effectively tokenize assets is thus an essential question, and one that is expected to confront various unprecedented challenges. This paper provides a systematic and comprehensive study of the current progress of tokenization on blockchain. First, we explore general principles and practical schemes to tokenize digital assets for blockchain and classify digitized tokens into three categories: fungible, non-fungible, and semi-fungible. We then focus on discussing the well-known Ethereum standards on non-fungible tokens. Finally, we discuss several critical challenges and some potential research directions to advance the research on exploring the tokenization process on the blockchain. To the best of our knowledge, this is the first systematic study of tokenization on blockchain.
Last updated:  2021-11-22
Light-OCB: Parallel Lightweight Authenticated Cipher with Full Security
Avik Chakraborti, Nilanjan Datta, Ashwin Jha, Cuauhtemoc Mancillas Lopez, Mridul Nandi
This paper proposes a lightweight authenticated encryption (AE) scheme, called Light-OCB, which can be viewed as a lighter variant of the CAESAR winner OCB as well as a faster variant of the high-profile NIST LWC competition submission LOCUS-AEAD. Light-OCB is structurally similar to LOCUS-AEAD and uses a nonce-based derived key that provides optimal security, and a short-tweak tweakable blockcipher (tBC) for efficient domain separation. Light-OCB improves over LOCUS-AEAD by reducing the number of primitive calls, and thereby significantly optimizing the throughput. To establish our claim, we provide FPGA hardware implementation details and a benchmark for Light-OCB against LOCUS-AEAD and several other well-known AEs. The implementation results depict that, when instantiated with the tBC TweGIFT64, Light-OCB achieves an extremely low hardware footprint - consuming only around 1128 LUTs and 307 slices (significantly lower than that for LOCUS-AEAD) - while maintaining a throughput of 880 Mbps, which is almost twice that of LOCUS-AEAD. To the best of our knowledge, this figure is significantly better than all the known implementation results of other lightweight ciphers with parallel structures.
Last updated:  2021-11-22
An Optimized GHV-Type HE Scheme: Simpler, Faster, and More Versatile
Liang Zhao, Ze Chen, Liqun Chen, Xinyi Huang
In this paper we present an optimized variant of Gentry, Halevi and Vaikuntanathan (GHV)'s Homomorphic Encryption (HE) scheme (EUROCRYPT'10). Our scheme is appreciably more efficient than the original GHV scheme without losing its merits of the (multi-key) homomorphic property and matrix encryption property. In this research, we first measure the density of the trapdoor pairs that are created by using Alwen and Peikert's trapdoor generation algorithm and Micciancio and Peikert's trapdoor generation algorithm, respectively, and use the measurement results to precisely discuss the time and space complexity of the corresponding GHV instantiations. We then propose a generic GHV-type construction with several optimizations that improve the time and space efficiency over the original GHV scheme. In particular, our new scheme can achieve asymptotically optimal time complexity and avoids generating and storing the inverse of the used trapdoor. Finally, we present an instantiation that, by using a new set of (lower) bound parameters, has smaller key and ciphertext sizes than the original GHV scheme.
Last updated:  2022-02-17
The Legendre Symbol and the Modulo-2 Operator in Symmetric Schemes over (F_p)^n
Lorenzo Grassi, Dmitry Khovratovich, Sondre Rønjom, Markus Schofnegger
Motivated by modern cryptographic use cases such as multi-party computation (MPC), homomorphic encryption (HE), and zero-knowledge (ZK) protocols, several symmetric schemes that are efficient in these scenarios have recently been proposed in the literature. Some of these schemes are instantiated with low-degree nonlinear functions, for example low-degree power maps (e.g., MiMC, HadesMiMC, Poseidon) or the Toffoli gate (e.g., Ciminion). Others (e.g., Rescue, Vision, Grendel) are instead instantiated via high-degree functions which are easy to evaluate in the target application. A recent example for the latter case is the hash function Grendel, whose nonlinear layer is constructed using the Legendre symbol. In this paper, we analyze high-degree functions such as the Legendre symbol or the modulo-2 operation as building blocks for the nonlinear layer of a cryptographic scheme over (F_p)^n. Our focus regards the security analysis rather than the efficiency in the mentioned use cases. For this purpose, we present several new invertible functions that make use of the Legendre symbol or of the modulo-2 operation. Even though these functions often provide strong statistical properties and ensure a high degree after a few rounds, the main problem regards their small number of possible outputs, that is, only three for the Legendre symbol and only two for the modulo-2 operation. By fixing them, it is possible to reduce the overall degree of the function significantly. We exploit this behavior by describing the first preimage attack on full Grendel, and we verify it in practice.
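The Legendre symbol used in such nonlinear layers is readily computed via Euler's criterion, $\chi(a) = a^{(p-1)/2} \bmod p$; the sketch below (our illustration, with an arbitrary Mersenne prime) makes the small image $\{0, 1, -1\}$ that the attack exploits concrete:

    def legendre(a: int, p: int) -> int:
        s = pow(a, (p - 1) // 2, p)     # Euler's criterion
        return -1 if s == p - 1 else s  # map p-1 to -1

    p = 2**31 - 1                        # a Mersenne prime
    assert legendre(4, p) == 1           # 4 = 2^2 is a square
    assert legendre(0, p) == 0
    assert any(legendre(a, p) == -1 for a in range(2, 20))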
Last updated:  2022-05-30
On the Download Rate of Homomorphic Secret Sharing
Ingerid Fosli, Yuval Ishai, Victor I. Kolobov, Mary Wootters
A homomorphic secret sharing (HSS) scheme is a secret sharing scheme that supports evaluating functions on shared secrets by means of a local mapping from input shares to output shares. We initiate the study of the download rate of HSS, namely, the achievable ratio between the length of the output shares and the output length when amortized over $\ell$ function evaluations. We obtain the following results. * In the case of linear information-theoretic HSS schemes for degree-$d$ multivariate polynomials, we characterize the optimal download rate in terms of the optimal minimal distance of a linear code with related parameters. We further show that for sufficiently large $\ell$ (polynomial in all problem parameters), the optimal rate can be realized using Shamir's scheme, even with secrets over $\mathbb{F}_2$. * We present a general rate-amplification technique for HSS that improves the download rate at the cost of requiring more shares. As a corollary, we get high-rate variants of computationally secure HSS schemes and efficient private information retrieval protocols from the literature. * We show that, in some cases, one can beat the best download rate of linear HSS by allowing nonlinear output reconstruction and $2^{-\Omega(\ell)}$ error probability.
Last updated:  2021-11-22
Squint Hard Enough: Evaluating Perceptual Hashing with Machine Learning
Jonathan Prokos, Tushar M. Jois, Neil Fendley, Roei Schuster, Matthew Green, Eran Tromer, Yinzhi Cao
Many online communications systems use perceptual hash matching systems to detect illicit files in user content. These systems employ specialized perceptual hash functions such as Microsoft's PhotoDNA or Facebook's PDQ to produce a compact digest of an image file that can be approximately compared to a database of known illicit-content digests. Recently, several proposals have suggested that hash-based matching systems be incorporated into client-side and end-to-end encrypted (E2EE) systems: in these designs, files that register as illicit content will be reported to the provider, while the remaining content will be sent confidentially. By using perceptual hashing to determine confidentiality guarantees, this new setting significantly changes the function of existing perceptual hashing -- thus motivating the need to evaluate these functions from an adversarial perspective, using their perceptual capabilities against them. For example, an attacker may attempt to trigger a match on innocuous, but politically-charged, content in an attempt to stifle speech. In this work we develop threat models for perceptual hashing algorithms in an adversarial setting, and present attacks against the two most widely deployed algorithms: PhotoDNA and PDQ. Our results show that it is possible to efficiently generate targeted second-preimage attacks in which an attacker creates a variant of some source image that matches some target digest. As a complement to this main result, we also further investigate the production of images that facilitate detection avoidance attacks, continuing a recent investigation of Jain et al. Our work shows that existing perceptual hash functions are likely insufficiently robust to survive attacks on this new setting.
Last updated:  2022-07-16
Experimenting with Collaborative zk-SNARKs: Zero-Knowledge Proofs for Distributed Secrets
Alex Ozdemir, Dan Boneh
A zk-SNARK is a powerful cryptographic primitive that provides a succinct and efficiently checkable argument that the prover has a witness to a public NP statement, without revealing the witness. However, in their native form, zk-SNARKs only apply to a secret witness held by a single party. In practice, a collection of parties often need to prove a statement where the secret witness is distributed or shared among them. We implement and experiment with *collaborative zk-SNARKs*: proofs over the secrets of multiple, mutually distrusting parties. We construct these by lifting conventional zk-SNARKs into secure protocols among $N$ provers to jointly produce a single proof over the distributed witness. We optimize the proof generation algorithm in pairing-based zk-SNARKs so that algebraic techniques for multiparty computation (MPC) yield efficient proof generation protocols. For some zk-SNARKs, optimization is more challenging. This suggests MPC "friendliness" as an additional criterion for evaluating zk-SNARKs. We implement three collaborative proofs and evaluate the concrete cost of proof generation. We find that over a 3 Gb/s link, security against a malicious minority of provers can be achieved with *approximately the same runtime* as a single prover. Security against $N-1$ malicious provers requires only a $2\times$ slowdown. This efficiency is unusual since most computations slow down by orders of magnitude when securely distributed. This efficiency means that most applications that can tolerate the cost of a single-prover proof should also be able to tolerate the cost of a collaborative proof.
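The algebraic reason pairing-based provers distribute well (our toy sketch in a multiplicative group; real systems use elliptic-curve multi-scalar multiplications) is that the dominant proving step is *linear* in the witness: each prover runs it locally on an additive share of the witness, and the partial results combine with a single group operation.

```python
# MSM over additively shared scalars: local work plus one combining step.
import random

q = 2**127 - 1            # Mersenne prime; toy group Z_q^* of order q - 1
g = 3

def msm(scalars, bases):
    # "Multi-scalar multiplication" written multiplicatively:
    # prod bases[i]^scalars[i] mod q.
    acc = 1
    for s, b in zip(scalars, bases):
        acc = acc * pow(b, s, q) % q
    return acc

witness = [12, 34, 56]
bases = [pow(g, random.randrange(1, q), q) for _ in witness]

# Additively share the witness between two provers (mod group order q - 1).
sh1 = [random.randrange(q - 1) for _ in witness]
sh2 = [(w - s) % (q - 1) for w, s in zip(witness, sh1)]

joint = msm(sh1, bases) * msm(sh2, bases) % q   # combine partial MSMs
print(joint == msm(witness, bases))             # True
```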
Last updated:  2022-06-26
Autoguess: A Tool for Finding Guess-and-Determine Attacks and Key Bridges
Hosein Hadipour, Maria Eichlseder
The guess-and-determine technique is one of the most widely used techniques in cryptanalysis to recover unknown variables in a given system of relations. In such attacks, a subset of the unknown variables is guessed such that the remaining unknowns can be deduced using the information from the guessed variables and the given relations. This idea can be applied in various areas of cryptanalysis, such as finding the internal state of stream ciphers when a sufficient amount of output data is available, or recovering the internal state and the secret key of a block cipher from very few known plaintexts. Another important application is the key-bridging technique in key-recovery attacks on block ciphers, where the attacker aims to find the minimum number of required sub-key guesses to deduce all involved sub-keys via the key schedule. Since the complexity of the guess-and-determine technique directly depends on the number of guessed variables, it is essential to find the smallest possible guess basis, i.e., the subset of guessed variables from which the remaining variables can be deduced. In this paper, we present Autoguess, an easy-to-use general tool to search for a minimal guess basis. We propose several new modeling techniques to harness SAT/SMT, MILP, and Gröbner basis solvers. We demonstrate their usefulness in guess-and-determine attacks on stream ciphers and block ciphers, as well as finding key-bridges in key recovery attacks on block ciphers. Moreover, by integrating our CP models for the key-bridging technique into previous CP-based frameworks for distinguisher search, we propose a unified and general CP model for finding key-recovery-friendly distinguishers that supports both linear and nonlinear key schedules.
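To pin down the object being searched for (a brute-force toy of our own; Autoguess instead encodes the search for SAT/SMT, MILP, and Gröbner basis solvers precisely because exhaustive search does not scale), a guess basis is a subset of variables whose deductive closure under the relations covers all variables:

```python
# Minimal guess basis by exhaustive search over a tiny relation system.
from itertools import combinations

variables = {"x1", "x2", "x3", "x4", "x5"}
# Each relation determines any one of its variables from the others,
# e.g. x3 = x1 XOR x2.
relations = [{"x1", "x2", "x3"}, {"x3", "x4"}, {"x2", "x4", "x5"}]

def closure(known):
    known = set(known)
    changed = True
    while changed:
        changed = False
        for rel in relations:
            unknown = rel - known
            if len(unknown) == 1:      # all but one known -> deduce the last
                known |= unknown
                changed = True
    return known

def minimal_guess_basis():
    for size in range(len(variables) + 1):
        for guess in combinations(sorted(variables), size):
            if closure(guess) == variables:
                return set(guess)

print(minimal_guess_basis())  # e.g. {'x1', 'x2'}: x3, then x4, then x5 follow
```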
Last updated:  2022-10-09
An Alternative Approach for Computing Discrete Logarithms in Compressed SIDH
Kaizhan Lin, Weize Wang, Lin Wang, Chang-An Zhao
Currently, public-key compression of supersingular isogeny Diffie-Hellman (SIDH) and its variant, supersingular isogeny key encapsulation (SIKE), involves pairing computation and discrete logarithm computation. Both require large storage for precomputation to accelerate the performance. In this paper, we propose a novel method to compute only three discrete logarithms instead of four, in exchange for computing a lookup table efficiently. We also suggest an alternative method to compute discrete logarithms with small storage. Our implementation shows that the efficiency of our first method is close to that of the previous work, and our algorithms perform better in some special cases. Although the implementation of the second method is not as efficient as the state of the art, the storage is reduced by a factor of about 3.77 to about 22.86. In particular, the storage requirement for discrete logarithms in the order-$3^{e_3}$ multiplicative group decreases from 390.00 KiB to 17.06 KiB when using the 751-bit prime. We believe that the latter method will be highly attractive in memory-constrained environments.
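The discrete logarithms in question live in groups of smooth order, where a Pohlig–Hellman-style reduction turns everything into small-table lookups. The sketch below (ours, set in a toy multiplicative group rather than the pairing-derived groups of compressed SIDH/SIKE) recovers a logarithm in a group of order $3^5$ digit by digit using a 3-entry table:

```python
# Pohlig-Hellman in a subgroup of order 3^e: one base-3 digit per step.
p = 487                    # prime with p - 1 = 2 * 3^5
e = 5
g = 9                      # generates the order-3^5 subgroup (checked below)
assert pow(g, 3**e, p) == 1 and pow(g, 3**(e - 1), p) != 1

def dlog_3adic(h, g, e):
    """Solve g^x = h in the subgroup of order 3^e."""
    w = pow(g, 3**(e - 1), p)                     # element of order 3
    table = {pow(w, d, p): d for d in range(3)}   # the entire lookup table
    x = 0
    for i in range(e):
        rem = h * pow(g, -x % 3**e, p) % p          # strip digits found so far
        digit = table[pow(rem, 3**(e - 1 - i), p)]  # project to order-3 part
        x += digit * 3**i
    return x

x = 200
assert dlog_3adic(pow(g, x, p), g, e) == x
print("recovered exponent:", x)
```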
Last updated:  2021-11-22
CoHA-NTT: A Configurable Hardware Accelerator for NTT-based Polynomial Multiplication
Kemal Derya, Ahmet Can Mert, Erdinç Öztürk, Erkay Savaş
In this paper, we introduce a configurable hardware architecture that can be used to generate unified and parametric NTT-based polynomial multipliers supporting a wide range of parameters of lattice-based cryptographic schemes proposed for post-quantum cryptography. Both NTT and inverse NTT operations can be performed using the unified butterfly unit of our architecture, which constitutes the core building block in NTT operations. The number of butterfly units plays an essential role in achieving the performance goals of a specific application area or platform. To this end, the architecture takes the number of butterfly units as input and generates an efficient NTT-based polynomial multiplier hardware to achieve the desired throughput and area requirements. More specifically, the proposed hardware architecture provides run-time configurability for the scheme parameters and compile-time configurability for throughput and area requirements. To the best of our knowledge, this work presents the first architecture with both run-time and compile-time configurability for NTT-based polynomial multiplication. The implementation results indicate that the advanced configurability has a negligible impact on the time and area of the proposed architecture and that its performance is on par with state-of-the-art implementations in the literature, if not better. The proposed architecture comprises various sub-blocks such as the modular multiplier and butterfly units, each of which can be of interest on its own for accelerating lattice-based cryptography. Thus, we provide the design rationale of each sub-block and compare it with those in the literature, including our earlier works, in terms of configurability and performance.
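For reference, the computation being accelerated is built entirely from one butterfly, $(a, b) \mapsto (a + \omega b, a - \omega b)$, and hardware throughput scales with how many butterfly units run in parallel. The following sketch (ours; the Kyber parameters $q = 3329$, $n = 256$, $\zeta = 17$ are just one parameter set such a generator could target) is a plain software model, not the paper's hardware:

```python
# Radix-2 NTT/INTT reference model: every level is made of butterflies.
q, n = 3329, 256
zeta = 17                     # primitive 256-th root of unity mod 3329

def ntt(a, w):
    if len(a) == 1:
        return a
    even, odd = ntt(a[0::2], w * w % q), ntt(a[1::2], w * w % q)
    half = len(a) // 2
    out = [0] * len(a)
    for k in range(half):
        t = pow(w, k, q) * odd[k] % q        # one butterfly per (k, level)
        out[k], out[k + half] = (even[k] + t) % q, (even[k] - t) % q
    return out

def intt(a, w):
    n_inv = pow(len(a), q - 2, q)
    return [x * n_inv % q for x in ntt(a, pow(w, q - 2, q))]

poly = list(range(n))
assert intt(ntt(poly, zeta), zeta) == poly   # round-trip sanity check
print("NTT round-trip OK")
```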
Last updated:  2021-12-10
A Performance Evaluation of Pairing-Based Broadcast Encryption Systems
Arush Chhatrapati, Susan Hohenberger, James Trombo, Satyanarayana Vusirikala
In a broadcast encryption system, a sender can encrypt a message for any subset of users who are listening on a broadcast channel. The goal of broadcast encryption is to leverage the broadcasting structure to achieve better efficiency than individually encrypting to each user; in particular, reducing the bandwidth (i.e., ciphertext size) required to transmit securely, although other factors such as public and private key size and the time to execute setup, encryption and decryption are also important. In this work, we conduct a detailed performance evaluation of eleven public-key, pairing-based broadcast encryption schemes offering different features and security guarantees, including public-key, identity-based, traitor-tracing, private linear and augmented systems. We implemented each system using the MCL Java pairings library, reworking some of the constructions to achieve better efficiency. We tested their performance on a variety of parameter choices, resulting in hundreds of data points to compare, with some interesting results from the classic Boneh-Gentry-Waters scheme (CRYPTO 2005) to Zhandry's recent generalized scheme (CRYPTO 2020), and more. We combine this performance data and knowledge of the systems' features with data we collected on practical usage scenarios to determine which schemes are likely to perform best for certain applications, such as video streaming services, online gaming, live sports betting and smartphone streaming. This work can inform both practitioners and future cryptographic designs in this area.
Last updated:  2021-11-22
Amortizing Rate-1 OT and Applications to PIR and PSI
Melissa Chase, Sanjam Garg, Mohammad Hajiabadi, Jialin Li, Peihan Miao
Recent new constructions of rate-1 OT [Döttling, Garg, Ishai, Malavolta, Mour, and Ostrovsky, CRYPTO 2019] have brought this primitive under the spotlight and the techniques have led to new feasibility results for private-information retrieval, and homomorphic encryption for branching programs. The receiver communication of this construction consists of a quadratic (in the sender's input size) number of group elements for a single instance of rate-1 OT. Recently [Garg, Hajiabadi, Ostrovsky, TCC 2020] improved the receiver communication to a linear number of group elements for a single string-OT. However, most applications of rate-1 OT require executing it multiple times, resulting in large communication costs for the receiver. In this work, we introduce a new technique for amortizing the cost of multiple rate-1 OTs. Specifically, based on standard pairing assumptions, we obtain a two-message rate-1 OT protocol for which the amortized cost per string-OT is asymptotically reduced to only four group elements. Our results lead to significant communication improvements in PSI and PIR, special cases of SFE for branching programs. - PIR: We obtain a rate-1 PIR scheme with a client communication cost of $O(\lambda\cdot\log N)$ group elements for security parameter $\lambda$ and database size $N$. Notably, after a one-time setup (or one PIR instance), any subsequent PIR instance requires a communication cost of only $O(\log N)$ group elements. - PSI with unbalanced inputs: We apply our techniques to private set intersection with unbalanced set sizes (where the receiver has a smaller set) and achieve receiver communication of $O((m+\lambda) \log N)$ group elements where $m, N$ are the sizes of the receiver and sender sets, respectively. Similarly, after a one-time setup (or one PSI instance), any subsequent PSI instance requires a communication cost of only $O(m \cdot \log N)$ group elements. All previous sublinear-communication non-FHE based PSI protocols for the above unbalanced setting were also based on rate-1 OT, but incurred at least $O(\lambda^2 m \log N)$ group elements.
Last updated:  2021-11-22
An Improved Range Proof with Base-3 Construction
Esra Günsay, Cansu Betin Onur, Murat Cenk
Zero-knowledge protocols (ZKPs) allow a party to prove the validity of secret information to another party without revealing any information about the secret itself. Appropriate, effective, and efficient use of cryptographic ZKPs contributes to many novel advances in real-world privacy-preserving frameworks. One of the most important types of cryptographic ZKPs is the zero-knowledge range proof (ZKRP). Such proofs have a wide range of applications, such as anonymous credentials, cryptocurrencies, and e-cash schemes. In many ZKRPs the secret is represented in binary and then committed to via a suitable commitment scheme. Though different base approaches exist for bilinear pairing-based and RSA-like constructions, to our knowledge no study has investigated the choice of base in discrete logarithm-based constructions. In this study, we focus on a range proof construction produced by Mao in 1998. This protocol contains a bit commitment scheme with an OR-construction. We investigate the effect of different bases on Mao's range proof and compare the efficiency of these base approaches. To this end, we extend Mao's range proof to base 3 with a modified OR-proof. We derive the number of modular exponentiations and the cost in the number of integers exchanged between the parties. Then, we generalize these costs for a base-$u$ construction. We mainly show that, compared with other bases, the base-3 approach consistently provides approximately 12% improvement in computation cost and 10% improvement in communication cost. We implemented the base-3 protocol and demonstrated that the results are consistent with our theoretical computations.
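The decomposition step behind such base-$u$ constructions can be sketched as follows (our toy with a tiny Schnorr group; the per-digit OR-proof that each digit lies in $\{0,1,2\}$, which is where the modified OR-proof comes in, is omitted): commit to each base-3 digit with a Pedersen commitment, then recombine the commitments homomorphically into a commitment to the value itself.

```python
# Base-3 digit commitments recombining to a commitment to the value.
import random

# Toy Schnorr group: order-11 subgroup of Z_23^* (real proofs use large
# groups with an h whose discrete log w.r.t. g is unknown).
p, q, g, h = 23, 11, 4, 9

def commit(m, r):
    return pow(g, m, p) * pow(h, r, p) % p

v = 14                                  # value to range-prove, v < 3^3
digits = [(v // 3**i) % 3 for i in range(3)]        # base-3 digits [2, 1, 1]
rs = [random.randrange(q) for _ in digits]
C = [commit(d, r) for d, r in zip(digits, rs)]      # one OR-proof each

# Homomorphic recombination: prod C_i^(3^i) commits to sum d_i * 3^i = v.
combined = 1
for i, Ci in enumerate(C):
    combined = combined * pow(Ci, 3**i, p) % p
R = sum(r * 3**i for i, r in enumerate(rs)) % q
assert combined == commit(v, R)
print("digit commitments recombine to a commitment to", v)
```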
Last updated:  2021-11-22
Perfect Trees: Designing Energy-Optimal Symmetric Encryption Primitives
Andrea Caforio, Subhadeep Banik, Yosuke Todo, Willi Meier, Takanori Isobe, Fukang Liu, Bin Zhang
Energy efficiency is critical in battery-driven devices, and designing energy-optimal symmetric-key ciphers is one of the goals for the use of ciphers in such environments. In the paper by Banik et al. (IACR ToSC 2018), stream ciphers were identified as ideal candidates for low-energy solutions. One of the main conclusions of this paper was that Trivium, when implemented in an unrolled fashion, was by far the most energy-efficient way of encrypting larger quantities of data. In fact, it was shown that as soon as the number of data bits to be encrypted exceeded 320 bits, Trivium consumed the least amount of energy on STM 90 nm ASIC circuits and outperformed the Midori family of block ciphers even in the least energy-hungry ECB mode (Midori was designed specifically for energy efficiency). In this work, we devise the first heuristic energy model in the realm of stream ciphers that links the underlying algebraic topology of the state update function to the consumptive behaviour. The model is then used to derive a metric that exhibits a strong negative correlation with the energy consumption of a broad range of stream cipher architectures, i.e., the families of Trivium-like, Grain-like and Subterranean-like constructions. We demonstrate that this correlation is especially pronounced for Trivium-like ciphers, which leads us to establish a link between energy consumption and security guarantees, making it possible to find several alternative energy-optimal versions of Trivium that meet the requirements but consume less energy. We present two such designs, Trivium-LE(F) and Trivium-LE(S), that consume around 15% and 25% less energy, respectively, making them the most energy-efficient encryption primitives to date. They inherit the same security level as Trivium, i.e., 80-bit security. We further present Triad-LE as an energy-efficient variant satisfying a higher security level. The simplicity and wide applicability of our model have direct consequences for the conception of future hardware-targeted stream ciphers, as for the first time it is possible to optimize for energy during the design phase. Moreover, we extend the reach of our model beyond plain encryption primitives and propose a novel energy-efficient message authentication code, Trivium-LE-MAC.
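For context, Trivium's state update — three shift registers, each tapped by two XORs and a single AND — is exactly the kind of algebraic topology the proposed energy model scores. A bit-level reference sketch (ours, written for clarity rather than speed or side-channel hygiene):

```python
# Bit-level Trivium reference model: 288-bit state, three registers,
# two XOR taps and one AND per register, 1152 warm-up rounds.
def trivium(key80, iv80, nbits):
    s = key80 + [0] * 13 + iv80 + [0] * 4 + [0] * 108 + [1, 1, 1]
    assert len(s) == 288
    out = []
    for i in range(4 * 288 + nbits):
        t1 = s[65] ^ s[92]
        t2 = s[161] ^ s[176]
        t3 = s[242] ^ s[287]
        if i >= 4 * 288:                  # no output during initialization
            out.append(t1 ^ t2 ^ t3)
        t1 ^= (s[90] & s[91]) ^ s[170]    # the single AND gate per register
        t2 ^= (s[174] & s[175]) ^ s[263]
        t3 ^= (s[285] & s[286]) ^ s[68]
        s = [t3] + s[:92] + [t1] + s[93:176] + [t2] + s[177:287]
    return out

ks = trivium([0] * 80, [0] * 80, 64)
print("".join(map(str, ks)))
```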
Last updated:  2021-11-25
On Cryptocurrency Wallet Design
Ittay Eyal
The security of cryptocurrency and decentralized blockchain-maintained assets relies on their owners safeguarding secrets, typically cryptographic keys. This applies equally to individuals keeping daily-spending amounts and to large asset management companies. Loss of keys and attackers gaining control of keys have resulted in numerous losses of funds. The security of individual keys has been widely studied, with practical solutions available, from mnemonic phrases to dedicated hardware. There are also techniques for securing funds by requiring combinations of multiple keys. However, to the best of our knowledge, a crucial question was never addressed: How is wallet security affected by the number of keys, their types, and how they are combined? This is the focus of this work. We present a model where each key has certain probabilities for being safe, lost, leaked, or stolen (available only to an attacker). The number of possible wallets for a given number of keys is the Dedekind number, prohibiting an exhaustive search for many keys. Nonetheless, we bound optimal-wallet failure probabilities with an evolutionary algorithm. We evaluate the security (complement of failure probability) of wallets based on the number and types of keys used. Our analysis covers a wide range of settings and reveals several surprises. The general trend is that failure probability drops exponentially with the number of keys, but it depends strongly on the parity of that number. In many cases, but not always, heterogeneous keys (not all with the same fault probabilities) allow for wallets superior to those built from homogeneous keys. Nonetheless, in the case of 3 keys, the common practice of requiring any pair is optimal in many settings. Our formulation of the problem and initial results reveal several open questions, from user studies of key fault probabilities to finding optimal wallets with very large numbers of keys. But they also have an immediate practical outcome, informing cryptocurrency users on optimal wallet design.
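The model is easy to instantiate for one concrete wallet. The sketch below (ours, with made-up per-key probabilities) evaluates the common 2-of-3 policy: each key is independently safe, lost, leaked (both sides hold it), or stolen (only the attacker holds it), and the wallet fails if the owner cannot reach the threshold or the attacker can:

```python
# Failure probability of a 2-of-3 wallet under the four-state key model.
from itertools import product

# P(safe), P(lost), P(leaked), P(stolen) per key -- illustrative numbers only.
probs = {"safe": 0.90, "lost": 0.04, "leaked": 0.03, "stolen": 0.03}

def fails(states, threshold=2):
    owner = sum(s in ("safe", "leaked") for s in states)
    attacker = sum(s in ("leaked", "stolen") for s in states)
    return owner < threshold or attacker >= threshold

failure = sum(
    probs[a] * probs[b] * probs[c]
    for a, b, c in product(probs, repeat=3)
    if fails((a, b, c))
)
print(f"2-of-3 failure probability: {failure:.4%}")
```

The paper's search problem is then to optimize over all monotone spending policies (hence the Dedekind number), not just thresholds.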
Last updated:  2021-11-22
Security evaluation against side-channel analysis at compilation time
Nicolas Bruneau, Charles Christen, Jean-Luc Danger, Adrien Facon, Sylvain Guilley
Masking countermeasures are implemented to thwart side-channel attacks. The maturity of high-order masking schemes has reached the level where the concepts are sound and proven. For instance, Rivain and Prouff proposed a full-fledged AES at CHES 2010. Some non-trivial fixes regarding refresh functions were needed though. Now, industry is adopting such solutions, and for the sake of both quality and certification requirements, masked cryptographic code shall be checked for correctness using the same model as that of the theoretical protection rationale (for instance, the probing leakage model). Seminal work was initiated by Barthe et al. at EUROCRYPT 2015 for automated verification at higher orders on concrete implementations. In this paper, we build on this work to perform verification from within a compiler, so as to enable timely feedback to the developer. First, our methodology provides the actual security order of the code at the intermediate representation (IR) level, thereby identifying possible flaws (owing either to source code errors or to compiler optimizations). Second, our methodology allows for an exploitability analysis of the analysed IR code. In this respect, we formally handle all the symbolic expressions in the static single assignment (SSA) representation to build the optimal distinguisher function. This enables us to evaluate the most powerful attack, which is a function not only of the masking order $d$, but also of the number of leaking samples and of the expressions (e.g., linear vs. non-linear leakages). This approach allows one to evaluate the correctness of a masked cryptographic code, and also its actual security in terms of the number of traces in a given deployment context (characterized by a leakage model of the target CPU and the signal-to-noise ratio of the platform).
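A miniature of the property such a verifier checks (our 1-bit toy, not the paper's tool): under first-order Boolean masking, every single intermediate must be distributed independently of the secret. The last expression below models a partial sum produced by an unfortunate re-association — precisely the kind of flaw a compiler optimization can introduce:

```python
# Exhaustive first-order probing check for a masked AND gate.
from itertools import product
from collections import Counter

def masked_and(x, y, mx, my, r):
    """Inputs shared as (x^mx, mx), (y^my, my); output shares are (z1, r)."""
    x1, y1 = x ^ mx, y ^ my
    z1 = r ^ (x1 & y1) ^ (x1 & my) ^ (mx & y1) ^ (mx & my)
    bad = (x1 & y1) ^ (mx & my)   # partial sum computed before r is mixed in
    return (x1, y1, x1 & y1, mx & my, z1, bad)

for idx, name in enumerate(["x1", "y1", "x1&y1", "mx&my", "z1", "bad"]):
    dists = set()
    for x, y in product((0, 1), repeat=2):        # fix the secret inputs
        c = Counter(masked_and(x, y, mx, my, r)[idx]
                    for mx, my, r in product((0, 1), repeat=3))
        dists.add(tuple(sorted(c.items())))
    verdict = "independent of secret" if len(dists) == 1 else "LEAKS"
    print(f"{name}: {verdict}")
```

Every listed intermediate passes except `bad`, whose distribution differs between secret inputs and would therefore lower the reported security order.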
Last updated:  2021-11-22
Ark of the ECC: An open-source ECDSA power analysis attack on a FPGA based Curve P-256 implementation
Jean-Pierre Thibault, Colin O’Flynn, Alex Dewar
Power analysis attacks on ECC have been presented since almost the very beginning of DPA itself, even before the standardization of AES. Given that power analysis attacks against AES are well known and have a large body of practical artifacts to demonstrate attacks on both software and hardware implementations, it is surprising that these artifacts are generally lacking for ECC. In this work we begin to remedy this by providing a complete open-source ECDSA attack artifact, based on a high-quality hardware ECDSA core from the CrypTech project. We demonstrate an effective power analysis attack against an FPGA implementation of this core. As many recent secure boot solutions are using ECDSA, efforts into building open-source artifacts to evaluate attacks on ECDSA are highly relevant to ongoing academic and industrial research programs. To demonstrate the value of this evaluation platform, we implement several countermeasures and show that evaluating leakage on hardware is critical to understand the effectiveness of a countermeasure.
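For readers wondering why leakage on the scalar is so damaging: in ECDSA, one signature plus its nonce yields the long-term key by plain modular arithmetic. A toy computation (ours; the values are arbitrary, and $r$ is taken as given rather than derived from $kG$) with the P-256 group order:

```python
# One leaked ECDSA nonce k => private key d, via s = k^-1 (z + r*d) mod n.
n = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551

d = 0x1234567890ABCDEF1234567890ABCDEF        # illustrative private key
k = 0x0F0E0D0C0B0A09080706050403020100        # nonce recovered by DPA
z = 0xAA55AA55AA55AA55AA55AA55AA55AA55        # message hash
r = 0x123456789ABCDEF0FEDCBA9876543210        # would come from (k*G).x

s = pow(k, n - 2, n) * (z + r * d) % n        # honest signing equation
d_rec = (s * k - z) * pow(r, n - 2, n) % n    # attacker's one-line recovery
assert d_rec == d
print(hex(d_rec))
```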
Last updated:  2021-11-20
Practical Garbled RAM: GRAM with $O(\log^2 n)$ Overhead
David Heath, Vladimir Kolesnikov, Rafail Ostrovsky
Garbled RAM (GRAM) is a powerful technique introduced by Lu and Ostrovsky that equips Garbled Circuit (GC) with a sublinear cost RAM without adding rounds of interaction. While multiple GRAM constructions are known, none are suitable for practice, due to costs that have high constants and poor scaling. We present the first GRAM suitable for practice. For computational security parameter $\kappa$ and for a size-$n$ RAM that stores blocks of size $w = \Omega(\log^2 n)$ bits, our GRAM incurs amortized $O(w \cdot \log^2 n \cdot \kappa)$ communication and computation per access. We evaluate the concrete cost of our GRAM; our approach outperforms trivial linear-scan-based RAM for as few as $512$ $128$-bit elements.
Last updated:  2023-07-17
Revisiting Mutual Information Analysis: Multidimensionality, Neural Estimation and Optimality Proofs
Valence Cristiani, Maxime Lecomte, Philippe Maurine
Recent works showed how Mutual Information Neural Estimation (MINE) could be applied to side-channel analysis in order to evaluate the amount of leakage of an electronic device. One of the main advantages of MINE over classical estimation techniques is that it enables estimating the mutual information between high-dimensional traces and a secret, which is relevant for leakage assessment. However, optimally exploiting this information in an attack context in order to retrieve a secret remains a non-trivial task, especially when a profiling phase of the target is not allowed. Within this context, the purpose of this paper is to address this problem based on a simple idea: there are multiple leakage sources in side-channel traces, and optimal attacks should necessarily exploit most or all of them. To this aim, a new mathematical framework, designed to bridge classical Mutual Information Analysis (MIA) and the multidimensional aspect of neural-based estimators, is proposed. One of the goals is to provide rigorous proofs consolidating the mathematical basis behind MIA, thus alleviating inconsistencies found in the state of the art. This framework allows us to derive a new attack called Neural Estimated Mutual Information Analysis (NEMIA). To the best of our knowledge, it is the first unsupervised attack able to benefit from both the power of deep learning techniques and the valuable theoretical properties of MI. Simulations and experiments show that NEMIA outperforms classical and more recent deep-learning-based unsupervised side-channel attacks, especially in low-information contexts.
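As a baseline for what NEMIA generalizes, classical MIA can be sketched in a few lines (our toy: histogram MI estimation on simulated one-sample Hamming-weight leakage, with a fixed random permutation standing in for a real S-box). NEMIA's contribution is to replace the histogram estimator with MINE and the single sample with full multidimensional traces:

```python
# Histogram-based MIA: the key guess maximizing estimated MI wins.
import math, random

random.seed(0)
SBOX = random.sample(range(256), 256)   # stand-in nonlinear S-box
HW = [bin(v).count("1") for v in range(256)]

def simulate(key, n=3000, sigma=1.0):
    pts = [random.randrange(256) for _ in range(n)]
    leak = [HW[SBOX[p ^ key]] + random.gauss(0, sigma) for p in pts]
    return pts, leak

def entropy(counts, total):
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def mi(model, leak, bins=16):
    lo, hi = min(leak), max(leak)
    b = [min(int((x - lo) / (hi - lo + 1e-9) * bins), bins - 1) for x in leak]
    n = len(leak)
    joint, margm, margl = {}, {}, {}
    for m, x in zip(model, b):
        joint[m, x] = joint.get((m, x), 0) + 1
        margm[m] = margm.get(m, 0) + 1
        margl[x] = margl.get(x, 0) + 1
    return (entropy(margm.values(), n) + entropy(margl.values(), n)
            - entropy(joint.values(), n))

pts, leak = simulate(key=0x3C)
scores = {g: mi([HW[SBOX[p ^ g]] for p in pts], leak) for g in range(256)}
print(hex(max(scores, key=scores.get)))   # expected: 0x3c
```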
Last updated:  2023-03-06
HOLMES: Efficient Distribution Testing for Secure Collaborative Learning
Ian Chang, Katerina Sotiraki, Weikeng Chen, Murat Kantarcioglu, Raluca Ada Popa
Using secure multiparty computation (MPC), organizations which own sensitive data (e.g., in healthcare, finance or law enforcement) can train machine learning models over their joint dataset without revealing their data to each other. At the same time, secure computation restricts operations on the joint dataset, which impedes computations that assess its quality. Without such an assessment, deploying a jointly trained model is potentially illegal. Regulations, such as the European Union's General Data Protection Regulation (GDPR), require organizations to be legally responsible for the errors, bias, or discrimination caused by their machine learning models. Hence, testing data quality emerges as an indispensable step in secure collaborative learning. However, performing distribution testing is prohibitively expensive using current techniques, as shown in our experiments. We present HOLMES, a protocol for performing distribution testing efficiently. In our experiments, compared with three non-trivial baselines, HOLMES achieves a speedup of more than $10\times$ for classical distribution tests and up to $10^4\times$ for multidimensional tests. The core of HOLMES is a hybrid protocol that integrates MPC with zero-knowledge proofs and a new ZK-friendly and naturally oblivious sketching algorithm for multidimensional tests, both with significantly lower computational complexity and concrete execution costs.
Last updated:  2023-11-04
Post-Quantum Simulatable Extraction with Minimal Assumptions: Black-Box and Constant-Round
Nai-Hui Chia, Kai-Min Chung, Xiao Liang, Takashi Yamakawa
From the minimal assumption of post-quantum semi-honest oblivious transfers, we build the first $\epsilon$-simulatable two-party computation (2PC) against quantum polynomial-time (QPT) adversaries that is both constant-round and black-box (for both the construction and security reduction). A recent work by Chia, Chung, Liu, and Yamakawa (FOCS'21) shows that post-quantum 2PC with standard simulation-based security is impossible in constant rounds, unless either $NP \subseteq BQP$ or relying on non-black-box simulation. The $\epsilon$-simulatability we target is a relaxation of the standard simulation-based security that allows for an arbitrarily small noticeable simulation error $\epsilon$. Moreover, when quantum communication is allowed, we can further weaken the assumption to post-quantum secure one-way functions (PQ-OWFs), while maintaining the constant-round and black-box property. Our techniques also yield the following set of constant-round and black-box two-party protocols secure against QPT adversaries, only assuming black-box access to PQ-OWFs: - extractable commitments for which the extractor is also an $\epsilon$-simulator; - $\epsilon$-zero-knowledge commit-and-prove whose commit stage is extractable with $\epsilon$-simulation; - $\epsilon$-simulatable coin-flipping; - $\epsilon$-zero-knowledge arguments of knowledge for $NP$ for which the knowledge extractor is also an $\epsilon$-simulator; - $\epsilon$-zero-knowledge arguments for $QMA$. At the heart of the above results is a black-box extraction lemma showing how to efficiently extract secrets from QPT adversaries while disturbing their quantum state in a controllable manner, i.e., achieving $\epsilon$-simulatability of the post-extraction state of the adversary.
Last updated:  2021-11-20
Blockchain-based Security Framework for Critical Industry 4.0 Cyber-physical System
Ziaur Rahman, Ibrahim Khalil, Xun Yi, Mohammed Atiquzzaman
The recent rise of cyber attacks, mainly targeting critical systems such as industrial, medical, or energy ecosystems, has led to intense interest in security alternatives. Modern industrial infrastructures depend heavily on AI-driven maintenance, and predictions based on corrupted data can result in loss of life and capital. Admittedly, an inadequate data-protection mechanism can readily challenge the security and reliability of the network. The shortcomings of conventional cloud-based or trusted certificate-driven techniques have motivated us to present a Blockchain-based framework for a secure and efficient Industry 4.0 system. The proposed framework obviates the long-established certificate authority by enhancing consortium Blockchain, which reduces data-processing delay and increases cost-effective throughput. Moreover, a distributed Industry 4.0 security model requires cooperative trust rather than dependence on a single party, which would incur the costs and threat of a single point of failure. Therefore, the multi-signature technique of the proposed framework accomplishes multi-party authentication, confirming its applicability to real-time, collaborative cyber-physical systems.
Last updated:  2023-02-13
Clarion: Anonymous Communication from Multiparty Shuffling Protocols
Saba Eskandarian, Dan Boneh
This paper studies the role of multiparty shuffling protocols in enabling more efficient metadata-hiding communication. We show that the process of shuffling messages can be expedited by having servers collaboratively shuffle and verify secret-shares of messages instead of using a conventional mixnet approach where servers take turns performing independent verifiable shuffles of user messages. We apply this technique to achieve both practical and asymptotic improvements in anonymous broadcast and messaging systems. We first show how to build a three server anonymous broadcast scheme, secure against one malicious server, that relies only on symmetric cryptography. Next, we adapt our three server broadcast scheme to a $k$-server scheme secure against $k-1$ malicious servers, at the cost of a more expensive per-shuffle preprocessing phase. Finally, we show how our scheme can be used to significantly improve the performance of the MCMix anonymous messaging system. We implement our shuffling protocol in a system called Clarion and find that it outperforms a mixnet made up of a sequence of verifiable (single-server) shuffles by $9.2\times$ for broadcasting small messages and outperforms the MCMix conversation protocol by $11.8\times$.
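The flavor of the core primitive can be conveyed with a dealer-assisted toy (ours, semi-honest and not Clarion's actual protocol): using correlated randomness, one server applies its secret permutation to additively shared messages without ever seeing them, and chaining one such pass per server hides the overall permutation unless all servers collude.

```python
# Oblivious shuffle of additive shares with dealer-provided randomness.
import random

P = 2**61 - 1

def rand_vec(n):
    return [random.randrange(P) for _ in range(n)]

def permute_pass(u, v, pi):
    """Holder of v knows pi; holder of u learns nothing about it."""
    n = len(u)
    # Offline dealer: holder of u gets (a, c); holder of v gets delta.
    a, c = rand_vec(n), rand_vec(n)
    delta = [(a[pi[i]] - c[i]) % P for i in range(n)]   # pi(a) - c
    # Online, one message from the u-holder to the v-holder:
    m = [(ui + ai) % P for ui, ai in zip(u, a)]         # blinds u
    v_new = [(m[pi[i]] + v[pi[i]] - delta[i]) % P for i in range(n)]
    u_new = [(-ci) % P for ci in c]
    return u_new, v_new         # u_new + v_new = pi(u + v), positionwise

msgs = [11, 22, 33, 44]
u = rand_vec(4)
v = [(x - s) % P for x, s in zip(msgs, u)]

pi1, pi2 = [2, 0, 3, 1], [1, 3, 0, 2]       # per-server secret permutations
u, v = permute_pass(u, v, pi1)              # pass 1: one server's permutation
v, u = permute_pass(v, u, pi2)              # pass 2: roles swapped
print(sorted((a + b) % P for a, b in zip(u, v)))   # [11, 22, 33, 44]
```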
Last updated:  2021-11-20
InterTrust: Towards an Efficient Blockchain Interoperability Architecture with Trusted Services
Gang Wang, Mark Nixon
Blockchain as a potentially disruptive technology can advance many different fields, e.g., cryptocurrencies, supply chains, and the industrial Internet of Things. The next-generation blockchain ecosystem is expected to consist of various homogeneous and heterogeneous distributed ledgers. These ledger systems will inevitably require a certain level of cooperation among multiple blockchains to enrich advanced functionalities and enhance interoperable capabilities for future applications. Interoperability among blockchains will revolutionize current blockchain design principles, much like the emergence of the Internet did. However, the development of cross-blockchain applications involves much complexity owing to the variety of underlying cross-blockchain communication mechanisms. To that end, we propose an efficient, interoperable blockchain architecture, InterTrust, to support interoperability and trustworthiness among arbitrary blockchain systems (including homogeneous and heterogeneous blockchains). It consists of an atomic cross-chain communication protocol, which can be considered a blockchain-agnostic protocol for integrating existing blockchain systems smoothly. InterTrust is powered by two techniques: a threshold signature scheme and trusted hardware. The threshold signature scheme guarantees consistency and verifiability in the target blockchain systems, and the trusted hardware guarantees trusted services among distinct blockchain systems. Combining these two techniques yields an efficient cross-chain communication protocol that facilitates atomic swaps and interoperable operations between different blockchain systems. Our architecture is robust enough to support arbitrary blockchain systems. We also present a security analysis for integrating our protocol into Byzantine-fault-tolerance-based blockchain systems.
Last updated:  2021-11-20
Block Cipher Defined by Matrix Presentation of Quasigroups
Smile Markovski, Vesna Dimitrova, Zlatka Trajcheska, Marija Petkovska, Mile Kostadinoski, Damjan Buhov
Designing new cryptosystems and their cryptanalysis is the basic cycle of advancement in the field of cryptography. In this paper we introduce a block cipher based on quasigroup transformations, which are defined by the matrix presentation of the quasigroup operations. This type of quasigroup presentation is suitable for constructing a block cipher since it does not require much memory to store the necessary data, so it can be used even for lightweight cryptographic purposes. For now, we consider only quasigroups of order 4. Constructions with quasigroups of higher order and examination of the strengths and weaknesses of this design will be considered in future papers.
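For background, the basic quasigroup string transformation that such designs build on can be sketched as follows (our sketch; the paper's matrix presentation and key-dependent structure go beyond this): the operation table is a Latin square of order 4, so every row is invertible and decryption uses the left-division table.

```python
# Quasigroup e-transformation over symbols {0,1,2,3} and its inverse.
Q = [[2, 1, 0, 3],          # a quasigroup of order 4 (rows/cols are perms)
     [3, 0, 1, 2],
     [1, 3, 2, 0],
     [0, 2, 3, 1]]
LD = [[row.index(c) for c in range(4)] for row in Q]   # left-division table

def e_transform(leader, msg):
    out, prev = [], leader
    for m in msg:
        prev = Q[prev][m]     # c_i = c_{i-1} * m_i
        out.append(prev)
    return out

def d_transform(leader, ctx):
    out, prev = [], leader
    for c in ctx:
        out.append(LD[prev][c])   # m_i = c_{i-1} \ c_i
        prev = c
    return out

msg = [0, 3, 1, 2, 2, 0, 1, 3]      # message as 2-bit symbols
ctx = e_transform(1, msg)
assert d_transform(1, ctx) == msg
print(ctx)
```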
Last updated:  2021-11-20
Compressed SIKE Round 3 on ARM Cortex-M4
Mila Anastasova, Mojtaba Bisheh-Niasar, Reza Azarderakhsh, Mehran Mozaffari Kermani
In 2016, the National Institute of Standards and Technology (NIST) initiated a standardization process among post-quantum secure algorithms. The Supersingular Isogeny Key Encapsulation (SIKE) mechanism, part of the alternate group of candidates after Round 2 of the process, offers the smallest key sizes, making it attractive for post-quantum security in scenarios of limited bandwidth and memory resources. Even further reduction of the exchanged information is offered by the compression mechanism, proposed by Azarderakhsh et al., which, however, introduces a significant time overhead and increases the memory requirements of the protocol, making it challenging to integrate into an embedded system. In this paper, we propose the first compressed SIKE implementation for a resource-constrained device, targeting the NIST-recommended platform STM32F407VG, which features an ARM Cortex-M4 processor. We integrate the isogeny-based implementation strategies described previously in the literature into the compressed version of SIKE. Additionally, we propose a new assembly design for the finite field operations particular to compressed SIKE, and observe speedups of up to 16% and up to 25% compared to the best previously reported assembly implementations for p434, p503, and p610.
Last updated:  2021-11-20
Pattern Devoid Cryptography
Gideon Samid
Pattern-loaded ciphers are at risk of being compromised by exploiting deeper patterns discovered first by the attacker. This reality offers a built-in advantage to premier cryptanalysis institutions. On the flip side, the risk of hidden math and faster computing undermines confidence in the prevailing cipher products. To avoid this risk, one would resort to building security on the premise of lavish quantities of randomness. Gilbert S. Vernam did it in 1917. Using modern technology, the same idea of randomness-based security can be implemented without the inconvenience associated with the old Vernam cipher. The result is the class of Trans Vernam Ciphers (TVCs), which project security through a pattern-devoid design: having no pattern to lean on, there is no pattern to crack. The attacker faces (i) a properly randomized shared cryptographic key combined with (ii) unilateral randomness, originated ad hoc by the transmitter without pre-coordination with the recipient. The unlimited unilateral randomness, together with the shared-key randomness, is set to project as much security as desired, up to and including Vernam levels. Assorted TVCs are categorized and reviewed, presenting a cogent message in favor of a cryptographic pathway where transmitted secrets are credibly secured against attackers with faster computers and better mathematicians. A vision emerges: a cryptographic level playing field, consistent with the emerging culture of Web 3.0.
Last updated:  2021-11-15
More Lessons: Analysis of PUF-based Authentication Protocols for IoT
Karim Lounis, Mohammad Zulkernine
Authentication constitutes the foundation and backbone of all security properties. It is the procedure in which communicating parties prove their identities to each other, and generally establish and derive secret keys to enforce other services, such as confidentiality, data integrity, non-repudiation, and availability. PUFs (Physical Unclonable Functions) have been the subject of many publications on lightweight, low-cost, and secure-by-design authentication protocols. This has turned our attention to investigating the most recent PUF-based authentication protocols for IoT. In [1], we reviewed the security of some PUF-based authentication protocols proposed between 2016 and October 2020, and drew important security lessons for future authentication protocol designers. In this paper, we extend our previous work by reviewing the security of fifteen PUF-based authentication protocols published during the past two years (2020 and 2021). We first provide the necessary background on PUFs and how they are used for authentication. Then, we analyze the security of these authentication protocols to identify and report common security issues and design flaws. Finally, we draw lessons and recommendations for future authentication protocol designers.
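The baseline pattern that the reviewed protocols elaborate on is plain challenge-response authentication with one-time CRPs. A minimal sketch (ours; the PUF is modeled as a noise-free keyed hash, so the fuzzy extractors and helper data where many of the reported flaws live are absent):

```python
# One-time challenge-response PUF authentication, simulated.
import hashlib, os

class SimulatedPUF:
    def __init__(self):
        self._intrinsic = os.urandom(16)   # stands in for silicon variation
    def respond(self, challenge):
        return hashlib.sha256(self._intrinsic + challenge).digest()

puf = SimulatedPUF()

# Enrollment (secure phase): the verifier stores a pool of CRPs.
crps = [(c := os.urandom(16), puf.respond(c)) for _ in range(100)]

# Authentication: fresh challenge; single-use CRPs prevent replay attacks.
challenge, expected = crps.pop()
assert puf.respond(challenge) == expected
print("device authenticated; CRP discarded, pool left:", len(crps))
```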
Last updated:  2021-11-15
High-Speed Hardware Architectures and FPGA Benchmarking of CRYSTALS-Kyber, NTRU, and Saber
Viet Ba Dang, Kamyar Mohajerani, Kris Gaj
Performance in hardware has typically played a significant role in differentiating among leading candidates in cryptographic standardization efforts. Winners of two past NIST cryptographic contests (Rijndael in the case of AES and Keccak in the case of SHA-3) were consistently ranked among the two fastest candidates when implemented using FPGAs and ASICs. Hardware implementations of cryptographic operations may quite easily outperform software implementations for at least a subset of major performance metrics, such as latency, number of operations per second, power consumption, and energy usage, as well as in terms of security against physical attacks, including side-channel analysis. Using hardware also permits much higher flexibility in trading one subset of these properties for another. This paper presents high-speed hardware architectures for four lattice-based CCA-secure Key Encapsulation Mechanisms (KEMs), representing three NIST PQC finalists: CRYSTALS-Kyber, NTRU (with two distinct variants, NTRU-HPS and NTRU-HRSS), and Saber. We rank these candidates among each other and compare them with all other Round 3 KEMs based on the data from the previously reported work.
Last updated:  2021-11-15
Parallel Quantum Addition for Korean Block Cipher
Kyungbae Jang, Gyeongju Song, Hyunjun Kim, Hyeokdong Kwon, Hyunji Kim, Hwajeong Seo
Adversaries using quantum computers can employ new attacks on cryptography that are not possible with classical computers. Grover's search algorithm, a well-known quantum algorithm, can reduce the key-search complexity for symmetric-key cryptography with an $n$-bit key from $O(2^n)$ to $O(2^{n/2})$. To apply the Grover search algorithm, the target encryption process must be implemented as a quantum circuit. In this paper, we present optimized quantum circuits for Korean block ciphers based on ARX architectures. We adopt the optimal quantum adder and a parallel design with only minor trade-offs in quantum resources. As a result, we achieve circuit-depth improvements of 78% for LEA, 85% for HIGHT, and 70% for CHAM, respectively. Finally, we estimate the cost of the Grover key search for Korean block ciphers and evaluate their post-quantum security based on the criteria presented by NIST.
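For orientation, the headline cost metric in such evaluations is roughly iterations times per-iteration circuit cost, with about $\lfloor \frac{\pi}{4} \cdot 2^{n/2} \rfloor$ Grover iterations, each invoking the cipher circuit twice (compute and uncompute). A back-of-envelope sketch (ours; the per-call depth is an illustrative placeholder, not the paper's measured value):

```python
# Grover key-search cost estimate: iterations x per-iteration circuit depth.
import math

def grover_cost(key_bits, depth_per_cipher_call):
    iterations = math.floor(math.pi / 4 * 2 ** (key_bits / 2))
    total_depth = iterations * 2 * depth_per_cipher_call
    return iterations, total_depth

for n in (128, 256):
    it, depth = grover_cost(n, depth_per_cipher_call=2_000)  # hypothetical
    print(f"{n}-bit key: ~2^{math.log2(it):.1f} iterations, "
          f"total depth ~2^{math.log2(depth):.1f}")
```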
Last updated:  2021-11-15
z-OTS: a one-time hash-based digital signature scheme with fast verification
Amos Zheng, Marcos A. Simplicio Jr.
Hash-based signature schemes are a class of post-quantum algorithms usually built upon one-time signature (OTS) solutions via hash trees. The benefits of such schemes include small key sizes, efficient processing, and the fact that they are simple to implement using a regular hash algorithm. In addition, their security properties are quite well understood, since they rely essentially on the pre-image or collision resistance of the underlying hash function. Among existing OTS schemes, W-OTS+ is one of the most popular. One reason for this popularity is that the OTS public key can be recovered from the signature itself, which facilitates the construction of a multi-time signature scheme using Merkle trees. On the other hand, signature generation and verification in W-OTS+ take roughly the same time, which is not ideal for applications where each signature is expected to be verified several times, as in software stores, PKI certificate validation, and secure boot. It is also inconvenient when the devices that verify signatures have lower computational power than the signers. In such scenarios, it is desirable to design signature schemes enabling faster verification, even if such speed-ups come at the expense of a slower signature generation procedure. With this goal in mind, we hereby present and evaluate a novel OTS scheme, called z-OTS. The main interest of z-OTS is that it preserves all benefits of W-OTS+, but provides faster signature verification at the cost of a (not much) slower signature generation procedure. For example, for signature sizes equivalent to W-OTS+ with Winternitz parameter $w=4$, our simulations show that verification can be 30.3% faster with z-OTS, while key and signature generation become, respectively, 53.7% and 137.5% slower. Larger $w$ leads to even greater gains in the verification procedure, besides providing lower overheads when generating keys and signatures.
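The cost asymmetry being rebalanced comes from the Winternitz hash-chain mechanic shared by W-OTS+ and z-OTS: signing reveals an intermediate point of each chain, and verification walks the remaining steps up to the public key. A stripped-down sketch (ours; the checksum chains that prevent forgeries by chain-walking are omitted, so this is illustrative only):

```python
# Hash-chain OTS core: sign = walk d steps, verify = walk w-1-d steps.
import hashlib, os

w = 4                                    # chain-length parameter

def H(x):
    return hashlib.sha256(x).digest()

def chain(x, steps):
    for _ in range(steps):
        x = H(x)
    return x

def keygen(n_digits):
    sk = [os.urandom(32) for _ in range(n_digits)]
    pk = [chain(s, w - 1) for s in sk]
    return sk, pk

def sign(sk, digits):
    return [chain(s, d) for s, d in zip(sk, digits)]

def verify(pk, digits, sig):
    return all(chain(s, w - 1 - d) == p for s, d, p in zip(sig, digits, pk))

digits = [3, 0, 2, 1]                    # message as base-w digits
sk, pk = keygen(len(digits))
assert verify(pk, digits, sign(sk, digits))
print("verification cost:", sum(w - 1 - d for d in digits), "hash calls")
```

Since signing covers $d$ steps per digit and verification covers $w-1-d$, the distribution of digits dictates the signing/verification cost split that z-OTS rebalances.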