Papers updated in last 183 days (1313 results)

Last updated:  2022-09-24
Linear-Time Probabilistic Proofs Over Every Field
Jonathan Bootle, Alessandro Chiesa, Ziyi Guan, and Siqi Liu
Interactive oracle proofs (IOPs) are a generalization of probabilistically checkable proofs that can be used to construct succinct arguments. Improvements in the efficiency of IOPs lead to improvements in the efficiency of succinct arguments. Key efficiency goals include achieving provers that run in linear time and verifiers that run in sublinear time, where the time complexity is with respect to the arithmetic complexity of proved computations over a finite field $\mathbb{F}$. We consider the problem of constructing IOPs for any given finite field $\mathbb{F}$ with a linear-time prover and polylogarithmic query complexity. Several previous works have achieved these efficiency requirements with $O(1)$ soundness error for NP-complete languages. However, constrained by the soundness error of the sumcheck protocol underlying these constructions, the IOPs achieve linear prover time only for instances in fields of size $\Omega(\log n)$. Recent work (Ron-Zewi and Rothblum, STOC 2022) overcomes this problem, but with linear verification time. We construct IOPs for the algebraic automata problem over any finite field $\mathbb{F}$ with a linear-time prover, polylogarithmic query complexity, and sublinear verification complexity. We additionally prove a similar result to Ron-Zewi and Rothblum for the NP-complete language R1CS using different techniques. The IOPs imply succinct arguments for (nondeterministic) arithmetic computations over any finite field with linear-time proving (given black-box access to a linear-time collision-resistant hash function). Inspired by recent constructions of reverse-multiplication-friendly embeddings, our IOP constructions embed problem instances over small fields into larger fields and adapt previous IOP constructions to the new instances. The IOP provers are modelled as random access machines and use precomputation techniques to achieve linear prover time. In this way, we avoid having to replace the sumcheck protocol.
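As background for the field-size constraint mentioned above, recall the standard soundness bound of the sumcheck protocol for an $m$-variate polynomial $p$ of individual degree $d$ (generic notation, not taken from the paper):
$$ \Pr\Big[\text{verifier accepts a false claim } \textstyle\sum_{x \in \{0,1\}^m} p(x) = v\Big] \;\le\; \frac{m\,d}{|\mathbb{F}|}, $$
so over a small field this error is far from negligible unless the protocol is repeated or the computation is lifted to an extension field, which is the kind of embedding into larger fields that the construction above relies on.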
Last updated:  2022-09-23
Post-Quantum Zero Knowledge, Revisited (or: How to do Quantum Rewinding Undetectably)
Alex Lombardi, Fermi Ma, and Nicholas Spooner
When do classical zero-knowledge protocols remain secure against quantum attacks? In this work, we develop the techniques, tools, and abstractions necessary to answer this question for foundational protocols: 1) We prove that the Goldreich-Micali-Wigderson protocol for graph non-isomorphism and the Feige-Shamir protocol for NP remain zero-knowledge against quantum adversaries. At the heart of our proof is a new quantum rewinding technique that enables extracting information from multiple invocations of a quantum adversary without disturbing its state. 2) We prove that the Goldreich-Kahan protocol for NP is post-quantum zero knowledge using a simulator that can be seen as a natural quantum extension of the classical simulator. Our results achieve negligible simulation error, appearing to contradict a recent impossibility result due to Chia-Chung-Liu-Yamakawa (FOCS 2021). This brings us to our final contribution: 3) We introduce coherent-runtime expected quantum polynomial time, a simulation notion that (1) precisely captures all of our zero-knowledge simulators, (2) cannot break any polynomial hardness assumptions, (3) implies strict polynomial-time epsilon-simulation and (4) is not subject to the CCLY impossibility. In light of our positive results and the CCLY negative results, we propose coherent-runtime simulation to be the appropriate quantum analogue of classical expected polynomial-time simulation.
Last updated:  2022-09-23
Non-interactive Mimblewimble transactions, revisited
Georg Fuchsbauer and Michele Orrù
Mimblewimble is a cryptocurrency protocol that promises to overcome notorious blockchain scalability issues and provides user privacy. For a long time its wider adoption has been hindered by the lack of non-interactive transactions, that is, payments for which only the sender needs to be online. Yu proposed a way of adding non-interactive transactions via stealth addresses to Mimblewimble, but this turned out to be flawed. Building on Yu and integrating ideas from Burkett, we give a fixed scheme and provide a rigorous security analysis strengthening the previous security model from Eurocrypt'19. Our protocol is being considered for implementation by MimbleWimbleCoin, and a variant is now deployed as MimbleWimble Extension Blocks (MWEB) in Litecoin.
Last updated:  2022-09-23
Plactic key agreement (insecure?)
Daniel R. L. Brown
Plactic key agreement is a new key agreement scheme that uses Knuth’s multiplication of semistandard tableaus from combinatorial algebra. The security of plactic key agreement relies on the difficulty of some computational problems, such as division of semistandard tableaus. Division by erosion uses backtracking to divide tableaus. Division by erosion is estimated to be infeasible against public keys of 768 or more bytes. If division by erosion is the best attack against plactic key agreement, then secure plactic key agreement could be practical. Chris Monico found a new attack on plactic key agreement, which is fast, potentially polynomial-time, and could very well make plactic key agreement insecure.
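For readers unfamiliar with the underlying operation, the sketch below illustrates Schensted row insertion and the resulting tableau product (Knuth multiplication of semistandard tableaus); this is a generic textbook illustration in Python, not code from the paper or from Monico's attack, and the function names are ours.
```python
def row_insert(tableau, x):
    """Insert x into a semistandard tableau (a list of weakly increasing rows,
    top row first) using Schensted row bumping; returns a new tableau."""
    rows = [row[:] for row in tableau]
    for row in rows:
        for i, y in enumerate(row):
            if y > x:              # bump the leftmost entry strictly greater than x
                row[i], x = x, y
                break
        else:
            row.append(x)          # x is >= every entry of this row: append and stop
            return rows
    rows.append([x])               # bumped out of the last row: start a new one
    return rows

def knuth_product(t1, t2):
    """Plactic (Knuth) product: row-insert the row reading word of t2
    (bottom row first, left to right) into t1."""
    result = [row[:] for row in t1]
    for row in reversed(t2):
        for x in row:
            result = row_insert(result, x)
    return result

print(knuth_product([[1, 2], [3]], [[1, 3], [2]]))  # [[1, 1, 2, 3], [2], [3]]
```
A plactic key-agreement scheme builds on this associative product; the hard problems mentioned above (such as dividing one tableau by another) concern inverting it.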
Last updated:  2022-09-23
Quantum Rewinding for Many-Round Protocols
Russell W. F. Lai, Giulio Malavolta, and Nicholas Spooner
We investigate the security of succinct arguments against quantum adversaries. Our main result is a proof of knowledge-soundness in the post-quantum setting for a class of multi-round interactive protocols, including those based on the recursive folding technique of Bulletproofs. To prove this result, we devise a new quantum rewinding strategy, the first that allows for rewinding across many rounds. This technique applies to any protocol satisfying natural multi-round generalizations of special soundness and collapsing. For our main result, we show that recent Bulletproofs-like protocols based on lattices satisfy these properties, and are hence sound against quantum adversaries.
Last updated:  2022-09-23
The Generals’ Scuttlebutt: Byzantine-Resilient Gossip Protocols
Sandro Coretti, Aggelos Kiayias, Cristopher Moore, and Alexander Russell
One of the most successful applications of peer-to-peer communication networks is in the context of blockchain protocols, which—in Satoshi Nakamoto's own words—rely on the "nature of information being easy to spread and hard to stifle." Significant efforts were invested in the last decade into analyzing the security of these protocols, and invariably the security arguments known for longest-chain Nakamoto-style consensus use an idealization of this tenet. Unfortunately, the real-world implementations of peer-to-peer gossip-style networks used by blockchain protocols rely on a number of ad-hoc attack mitigation strategies that leave a glaring gap between the idealized communication layer assumed in formal security arguments for blockchains and the real world, where a wide array of attacks have been showcased. In this work we bridge this gap by presenting a Byzantine-resilient network layer for blockchain protocols. For the first time we quantify the problem of network-layer attacks in the context of blockchain security models, and we develop a design that thwarts resource-restricted adversaries. Importantly, we focus on the proof-of-stake setting due to its vulnerability to Denial-of-Service (DoS) attacks stemming from the well-known deficiency (compared to the proof-of-work setting) known as nothing-at-stake. We present a Byzantine-resilient gossip protocol, and we analyze it in the Universal Composition framework. In order to prove security, we show novel results on expander properties of random graphs. Importantly, our gossip protocol can be based on any given bilateral functionality that determines a desired interaction between two "adjacent" peers in the networking layer; this demonstrates how it is possible to use application-layer information to make the networking layer resilient to attacks. Despite the seeming circularity, we demonstrate how to prove the security of a Nakamoto-style longest-chain protocol given our gossip networking functionality, and hence, we demonstrate constructively how it is possible to obtain provable security across protocol layers, given only bare-bone point-to-point networking, majority of honest stake, and a verifiable random function.
Last updated:  2022-09-23
TRIFORS: LINKable Trilinear Forms Ring Signature
Giuseppe D'Alconzo and Andrea Gangemi
We present TRIFORS (TRIlinear FOrms Ring Signature), a logarithmic post-quantum (linkable) ring signature based on a novel assumption regarding equivalence of alternating trilinear forms. The basis of this work is the construction by Beullens, Katsumata and Pintore from Asiacrypt 2020 to obtain a linkable ring signature from a cryptographic group action. The group action on trilinear forms used here is the same as the one employed in the signature presented by Tang et al. at Eurocrypt 2022. We first define a sigma protocol that, given a set of public keys, the ring, allows one to prove knowledge of a secret key corresponding to a public key in the ring. Furthermore, some optimisations are used to reduce the size of the signature: among others, we use a novel application of the combinatorial number system to the space of the challenges. Using the Fiat-Shamir transform, we obtain a (linkable) ring signature whose length is competitive with the state of the art among post-quantum proposals at security levels 128 and 192.
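As background for the challenge-space optimisation mentioned above, the sketch below illustrates the combinatorial number system (ranking and unranking of $k$-subsets); it is a generic textbook construction in Python, not the paper's concrete encoding.
```python
from math import comb

def rank(subset):
    """Map a k-subset {c_1 < ... < c_k} of non-negative integers to its index
    in the combinatorial number system: N = C(c_k, k) + ... + C(c_1, 1)."""
    return sum(comb(c, i + 1) for i, c in enumerate(sorted(subset)))

def unrank(n, k):
    """Inverse map: recover the k-subset with index n by greedily choosing,
    for each position, the largest c with C(c, i) <= remaining n."""
    subset = []
    for i in range(k, 0, -1):
        c = i - 1
        while comb(c + 1, i) <= n:
            c += 1
        subset.append(c)
        n -= comb(c, i)
    return sorted(subset)

# Example: the 3-subsets of {0,...,5} are numbered 0..19 without gaps.
assert [unrank(rank(s), 3) for s in ([0, 1, 2], [2, 4, 5])] == [[0, 1, 2], [2, 4, 5]]
```
Such a bijection lets a fixed-weight challenge set be addressed by a single integer, which is the kind of compression the abstract alludes to.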
Last updated:  2022-09-23
Cycle-Accurate Power Side-Channel Analysis Using the ChipWhisperer: a Case Study on Gaussian Sampling
Nils Wisiol, Patrick Gersch, and Jean-Pierre Seifert
This paper presents an approach to uncover and analyze power side-channel leakages with processor-cycle-level precision. By carefully designing and evaluating the measurement setup, accurate trace timing is enabled, which is used to overlay the trace with the corresponding assembly code. This methodology exposes the sources of leakage at the granularity of processor cycles, which allows new implementations to be evaluated. It also exposes that the default ChipWhisperer configuration for STM32F4 targets used in prior work includes wait cycles that are rarely used in real-world applications, but affect power side-channel leakage. As an application for our setup, we target the widely used Sign-Flip function of Gaussian sampling code used in multiple Post-Quantum Key-Exchange Mechanisms and Signature schemes. We propose new implementations for the Sign-Flip function based on our analysis of the original implementation and further evaluate their leakage. Our findings allow the conclusion that unmasked cryptographic implementations of schemes based on Gaussian random numbers for STM32F4 cannot be secure against power side-channel attacks, and that masking just the Gaussian sampler is not a viable option.
Last updated:  2022-09-23
Universal Composition with Global Subroutines: Capturing Global Setup within plain UC
Christian Badertscher, Ran Canetti, Julia Hesse, Björn Tackmann, and Vassilis Zikas
The Global and Externalized UC frameworks [Canetti-Dodis-Pass-Walfish, TCC 07] extend the plain UC framework to additionally handle protocols that use a ``global setup'', namely a mechanism that is also used by entities outside the protocol. These frameworks have broad applicability: Examples include public-key infrastructures, common reference strings, shared synchronization mechanisms, global blockchains, or even abstractions such as the random oracle. However, the need to work in a specialized framework has been a source of confusion, incompatibility, and an impediment to broader use. We show how security in the presence of a global setup can be captured within the plain UC framework, thus significantly simplifying the treatment. This is done as follows: - We extend UC-emulation to the case where both the emulating protocol $\pi$ and the emulated protocol $\phi$ make subroutine calls to protocol $\gamma$ that is accessible also outside $\pi$ and $\phi$. As usual, this notion considers only a single instance of $\phi$ or $\pi$ (alongside $\gamma$). - We extend the UC theorem to hold even with respect to the new notion of UC emulation. That is, we show that if $\pi$ UC-emulates $\phi$ in the presence of $\gamma$, then $\rho^{\phi\rightarrow\pi}$ UC-emulates $\rho$ for any protocol $\rho$, even when $\rho$ uses $\gamma$ directly, and in addition calls many instances of $\phi$, all of which use the same instance of $\gamma$. We prove this extension using the existing UC theorem as a black box, thus further simplifying the treatment. We also exemplify how our treatment can be used to streamline, within the plain UC model, proofs of security of systems that involve global set-up, thus providing greater simplicity and flexibility.
Last updated:  2022-09-23
On the (Ir)Replaceability of Global Setups, or How (Not) to Use a Global Ledger
Christian Badertscher, Julia Hesse, and Vassilis Zikas
In universally composable (UC) security, a global setup is intended to capture the ideal behavior of a primitive which is accessible by multiple protocols, allowing them to share state. A representative example is the Bitcoin ledger. Indeed, since Bitcoin---and more generally blockchain ledgers---are known to be useful in various scenarios, it has become increasingly popular to capture such ledgers as global setup. Intuitively, one would expect UC to allow us to make security statements about protocols that use such a global setup, e.g., a global ledger, which can then be automatically translated into the setting where the setup is replaced by a protocol implementing it, such as Bitcoin. We show that the above reasoning is flawed and such a generic security-preserving replacement can only work under very strong (and often unrealistic) conditions on the global setup and the security statement. For example, the UC security of Bitcoin for realizing a ledger proved by Badertscher et al. [CRYPTO'17] is not sufficient per se to allow us to replace the ledger by Bitcoin when used as a global setup. In particular, we cannot expect that all security statements in the global ledger-hybrid world would be preserved when using Bitcoin as a ledger. On the positive side, we provide characterizations of security statements for protocols that make use of global setups, for which the replacement is sound. Our results can be seen as a first guide on how to navigate the very tricky question of what constitutes a ``good'' global setup and how to use it in order to keep the modular protocol-design approach intact.
Last updated:  2022-09-23
A New Isogeny Representation and Applications to Cryptography
Antonin Leroux
This paper focuses on isogeny representations, defined as ways to evaluate isogenies and verify membership in the language of isogenous supersingular curves (the set of triples $D,E_1,E_2$ with a cyclic isogeny of degree $D$ between $E_1$ and $E_2$). The tasks of evaluating and verifying isogenies are fundamental for isogeny-based cryptography. Our main contribution is the design of the suborder representation, a new isogeny representation targeted at the case of (big) prime degree. The core of our new method is the revelation of endomorphisms of smooth norm inside a well-chosen suborder of the codomain's endomorphism ring. This new representation appears to open interesting prospects for isogeny-based cryptography under the hardness of a new computational problem: the SubOrder to Ideal Problem (SOIP). As an application, we introduce pSIDH, a new NIKE based on the suborder representation. Studying new assumptions appears particularly crucial in light of the recent attacks against isogeny-based cryptography. In order to manipulate the suborder representation efficiently, we develop several heuristic algorithmic tools to solve norm equations inside a new family of quaternion orders. These new algorithms may be of independent interest.
Last updated:  2022-09-23
Security Analysis of Subterranean 2.0
Ling Song, Yi Tu, Danping Shi, and Lei Hu
Subterranean 2.0 is a cipher suite that can be used for hashing, authenticated encryption, MAC computation, etc. It was designed by Daemen, Massolino, Mehrdad, and Rotella, and has been selected as a candidate in the second round of NIST's lightweight cryptography standardization process. Subterranean 2.0 is a duplex-based construction and utilizes a single-round permutation in the duplex. It is the simplicity of the round function that makes it an attractive target of cryptanalysis. In this paper, we examine the single-round permutation in various phases of Subterranean 2.0 and specify three related attack scenarios that deserve further investigation: keystream biases in the keyed squeezing phase, state collisions in the keyed absorbing phase, and one-round differential analysis in the nonce-misuse setting. To facilitate cryptanalysis in the first two scenarios, we propose a novel set of size-reduced toy versions of Subterranean 2.0: Subterranean-m. Then, for the first time, we observe the resemblance between the non-linear layer in the round function of Subterranean 2.0 and SIMON's round function. Inspired by the existing work on SIMON, we propose explicit formulas for computing the exact correlation of linear trails of Subterranean 2.0 and other ciphers utilizing similar non-linear operations. We then construct our models for searching trails to be used in the keystream bias evaluation and state collision attacks. Our results show that most instances of Subterranean-m are secure in the first two attack scenarios but there exist instances that are not. Further, we find a flaw in the designers' reasoning about Subterranean 2.0's linear bias but support the designers' claim that there is no linear bias measurable from at most $2^{96}$ data blocks. Due to the time-consuming search, the security of Subterranean 2.0 against the state collision attack in keyed modes still remains an open question. Finally, we observe that one-round differentials allow recovering state bits in the nonce-misuse setting. By proposing nested one-round differentials, we obtain a sufficient number of state bits, leading to a practical state recovery with only 20 repetitions of the nonce and 88 blocks of data. It is noted that our work does not threaten the security of Subterranean 2.0.
Last updated:  2022-09-22
Lattice Enumeration for Tower NFS: a 521-bit Discrete Logarithm Computation
Gabrielle De Micheli, Pierrick Gaudry, and Cécile Pierrot
The Tower variant of the Number Field Sieve (TNFS) is known to be asymptotically the most efficient algorithm to solve the discrete logarithm problem in finite fields of medium characteristic, when the extension degree is composite. A major obstacle to an efficient implementation of TNFS is the collection of algebraic relations, since it takes place in dimension greater than 2. This requires the construction of new sieving algorithms which remain efficient as the dimension grows. In this article, we overcome this difficulty by considering a lattice enumeration algorithm which we adapt to this specific context. We also consider a new sieving area, a high-dimensional sphere, whereas previous sieving algorithms for the classical NFS considered an orthotope. Our new sieving technique leads to a much smaller running time, despite the larger dimension of the search space, and even when considering a larger target, as demonstrated by a record computation we performed in a 521-bit finite field GF(p^6). The target finite field is of the same form as the finite fields used in recent zero-knowledge proofs in some blockchains. This is the first reported implementation of TNFS.
Last updated:  2022-09-22
Jammin' on the deck
Norica Băcuieți, Joan Daemen, Seth Hoffert, Gilles Van Assche, and Ronny Van Keer
Currently, a vast majority of symmetric-key cryptographic schemes are built as block cipher modes. The block cipher is designed to be hard to distinguish from a random permutation and this is supported by cryptanalysis, while (good) modes can be proven secure if a random permutation takes the place of the block cipher. As such, block ciphers form an abstraction level that marks the border between cryptanalysis and security proofs. In this paper, we investigate a re-factored version of symmetric-key cryptography built not around block ciphers but rather around the deck function: a keyed function with arbitrary input and output length and incrementality properties. This allows for modes of use that are simpler to analyze and still very efficient thanks to the excellent performance of currently proposed deck functions. We focus on authenticated encryption (AE) modes with varying levels of robustness. Our modes have built-in support for sessions, but are also efficient without them. As a by-product, we define a new ideal model for AE dubbed the jammin cipher. Unlike the OAE2 security models, the jammin cipher is both an operational ideal scheme and a security reference, and addresses real-world use cases such as bi-directional communication and multi-key security.
Last updated:  2022-09-22
On UC-Secure Range Extension and Batch Verification for ECVRF
Christian Badertscher, Peter Gaži, Iñigo Querejeta-Azurmendi, and Alexander Russell
Verifiable random functions (Micali et al., FOCS'99) allow a key-pair holder to verifiably evaluate a pseudorandom function under that particular key pair. These primitives enable fair and verifiable pseudorandom lotteries, essential in proof-of-stake blockchains such as Algorand and Cardano, and are being used to secure billions of dollars of capital. As a result, there is an ongoing IRTF effort to standardize VRFs, with a proposed ECVRF based on elliptic-curve cryptography appearing as the most promising candidate. In this work, towards understanding the general security of VRFs and in particular the ECVRF construction, we provide an ideal functionality in the Universal Composability (UC) framework (Canetti, FOCS'01) that captures VRF security, and show that ECVRF UC-realizes this functionality. We further show how the range of a VRF can generically be extended in a modular fashion based on the above functionality. This observation is particularly useful for protocols such as Ouroboros since it allows reducing the number of VRF evaluations (per slot) and VRF verifications (per block) from two to one at the price of additional (but much faster) hash-function evaluations. Finally, we study batch verification in the context of VRFs. We provide a UC-functionality capturing a VRF with batch-verification capability, and propose modifications to ECVRF that allow for this feature. We again prove that our proposal UC-realizes the desired functionality. We provide a performance analysis showing that batch verification can yield a factor-two speedup for batches with 1024 proofs, at the cost of increasing the proof size from 80 to 128 bytes.
Last updated:  2022-09-22
GUC-Secure Commitments via Random Oracles: New Impossibility and Feasibility
Zhelei Zhou, Bingsheng Zhang, Hong-Sheng Zhou, and Kui Ren
In the UC framework, protocols must be subroutine respecting; therefore, a shared trusted setup might cause security issues. To address this drawback, the Generalized UC (GUC) framework was introduced by Canetti \emph{et al.} (TCC 2007). In this work, we investigate the impossibility and feasibility of GUC-secure commitments using global random oracles (GRO) as the trusted setup. In particular, we show that it is impossible to have a 2-round (1-round committing and 1-round opening) GUC-secure commitment in the global observable RO model by Canetti \emph{et al.} (CCS 2014). We then give a new round-optimal GUC-secure commitment that uses only Minicrypt assumptions (i.e., the existence of one-way functions) in the global observable RO model. Furthermore, we also examine the complete picture of the round complexity of GUC-secure commitments in various global RO models.
Last updated:  2022-09-22
Anonymity of NIST PQC Round 3 KEMs
Keita Xagawa
This paper investigates __anonymity__ of all NIST PQC Round 3 KEMs: Classic McEliece, Kyber, NTRU, Saber, BIKE, FrodoKEM, HQC, NTRU Prime (Streamlined NTRU Prime and NTRU LPRime), and SIKE. We show the following results: * NTRU is anonymous in the quantum random oracle model (QROM) if the underlying deterministic PKE is strongly disjoint-simulatable. NTRU is collision-free in the QROM. A hybrid PKE scheme constructed from NTRU as KEM and appropriate DEM is anonymous and robust. (Similar results for BIKE, FrodoKEM, HQC, NTRU LPRime, and SIKE hold except for two of three parameter sets of HQC.) * Classic McEliece is anonymous in the QROM if the underlying PKE is strongly disjoint-simulatable and a hybrid PKE scheme constructed from it as KEM and appropriate DEM is anonymous. * Grubbs, Maram, and Paterson pointed out that Kyber and Saber have a gap in the current IND-CCA security proof in the QROM (EUROCRYPT 2022). We found that Streamlined NTRU Prime has another technical obstacle for the IND-CCA security proof in the QROM. These results answer the open problem of investigating the anonymity and robustness of NIST PQC Round 3 KEMs posed by Grubbs, Maram, and Paterson (EUROCRYPT 2022). We use strong disjoint-simulatability of the underlying PKE of the KEM and strong pseudorandomness and smoothness/sparseness of the KEM as the main tools, which will be of independent interest.
Last updated:  2022-09-21
Secure Anycast Channels with Applications to 4G and 5G Handovers
Karl Norrman
In 3GPP mobile networks, application data is transferred between the phone and an access point over a wireless link. The mobile network wireless link is special since one channel endpoint is handed over from one access point to another as the phone physically moves. Key evolution during handover has been analyzed in various works, but these do not combine that analysis with an analysis of the wireless-link application-data encryption protocol that uses the keys. To enable formal analysis of the 4G/5G wireless link, we develop a game-based security framework for such channels and define flexible key insulation security notions for application data transfer, including forward and backward security in the given adversary model. Our notions are modular and combine a bidirectional application data transfer channel with a generic framework for multiparty channel-evolution protocols. These two components interact, and the security of the channel-evolution protocol may rely on the security of the data transfer channel for some or all its messages. We also develop the first formal model of 4G/5G wireless link security including both handover key evolution and application data transfer, in the complexity-theoretic setting. We prove the model secure w.r.t. our security notions. As a byproduct, we identify recommendations for improving the security of future mobile network standards to achieve key insulation. Specifically, we show that the current standards do not achieve forward-secure encryption, even though this appears to be an explicit goal. We show how this can be rectified.
Last updated:  2022-09-21
Triply Adaptive UC NIZK
Ran Canetti, Pratik Sarkar, and Xiao Wang
Non-interactive zero knowledge (NIZK) enables proving the validity of an NP statement without leaking anything else. We study multi-instance NIZKs in the common reference string (CRS) model, against an adversary that adaptively corrupts parties and chooses statements to be proven. We construct the first such $\textit{triply adaptive}$ NIZK that provides full adaptive soundness, as well as adaptive zero-knowledge, assuming either LWE or else LPN and DDH (previous constructions rely on non-falsifiable knowledge assumptions). In addition, our NIZKs are universally composable (UC). Along the way, we: - Formulate an ideal functionality, $\mathcal{F}_\textsf{NICOM}$, which essentially captures $\textit{non-interactive}$ commitments, and show that it is realizable by existing protocols using standard assumptions. - Define and realize, under standard assumptions, Sigma protocols which satisfy triply adaptive security with access to $\mathcal{F}_\textsf{NICOM}$. - Use the Fiat-Shamir transform, instantiated with correlation intractable hash functions, to compile a Sigma protocol with triply adaptive security with access to $\mathcal{F}_\textsf{NICOM}$ into a triply adaptive UC-NIZK argument in the CRS model with access to $\mathcal{F}_\textsf{NICOM}$, assuming LWE (or else LPN and DDH). - Use the UC theorem to obtain UC-NIZK in the CRS model.
Last updated:  2022-09-21
A Note on Key Ranking for Optimal Collision Side-Channel Attacks
Cezary Glowacz
In "Optimal collision side-channel attacks" (https://eprint.iacr.org/2019/828) we studied, and derived an optimal distinguisher for key ranking. In this note we propose a heuristic estimation procedure for key ranking based on this distinguisher, and provide estimates of lower bounds for secret key ranks in collision side channel attacks.
Last updated:  2022-09-21
A summary on the FRI low degree test
Ulrich Haböck
This document is an informal summary on the FRI low degree test [BSBHR18a], [BSCI+20], and DEEP algebraic linking from [BSGKS20]. Based on its most recent soundness analysis [BSCI+20], we discuss parameter settings for practical security levels, how FRI is turned into a polynomial commitment scheme, and the soundness of DEEP sampling in the list decoding regime. In particular, we illustrate the DEEP method applied to proving satisfiability of algebraic intermediate representations and prove a soundness error bound which slightly improves the one in [Sta21].
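As a reminder of the mechanism being summarized (the standard FRI folding step over a field of odd characteristic, not the document's new soundness bounds):
$$ f^{(i+1)}(x^2) \;=\; \frac{f^{(i)}(x) + f^{(i)}(-x)}{2} \;+\; \beta_i \cdot \frac{f^{(i)}(x) - f^{(i)}(-x)}{2x}, $$
i.e., the prover combines the even and odd parts of $f^{(i)}$ using a verifier-supplied challenge $\beta_i$, halving the degree each round; after logarithmically many rounds the final polynomial is sent in the clear, and consistency of the folds is spot-checked at random positions of the committed codewords.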
Last updated:  2022-09-21
MR-DSS – Smaller MinRank-based (Ring-)Signatures
Emanuele Bellini, Andre Esser, Carlo Sanna, and Javier Verbel
In light of NIST’s announced reopening of the call for digital signature proposals in 2023 due to a lack of diversity, there is a strong need for constructions based on other established hardness assumptions. In this work we construct a new post-quantum secure digital signature scheme based on the MinRank problem, a problem with a long history of applications in cryptanalysis that led to a strong belief in its hardness. Initially following a design by Courtois (Asiacrypt '01) based on the Fiat-Shamir transform, we make use of several recent developments in the design of sigma protocols to reduce signature size and improve efficiency. This includes the recently introduced sigma-protocol-with-helper paradigm (Eurocrypt '19) and combinations with cut-and-choose techniques (CCS '18). Moreover, we introduce several improvements to the core of the scheme to further reduce its signature size.
Last updated:  2022-09-21
Towards Globally Optimized Hybrid Homomorphic Encryption - Featuring the Elisabeth Stream Cipher
Orel Cosseron, Clément Hoffmann, Pierrick Méaux, and François-Xavier Standaert
Hybrid Homomorphic Encryption (HHE) reduces the amount of computation client-side and bandwidth usage in a Fully Homomorphic Encryption (FHE) framework. HHE requires the usage of specific symmetric schemes that can be evaluated homomorphically efficiently. In this paper, we introduce the paradigm of Group Filter Permutator (GFP) as a generalization of the Improved Filter Permutator paradigm introduced by Méaux et al. From this paradigm, we specify Elisabeth, a family of stream ciphers, and give an instance: Elisabeth-4. After proving the security of this scheme, we provide a Rust implementation of it and ensure its performance is comparable to state-of-the-art HHE. The true strength of Elisabeth lies in the available operations server-side: while the best HHE applications were limited to a few multiplications server-side, we used data sent through Elisabeth-4 to homomorphically evaluate a neural network inference. Finally, we discuss the improvement and loss between the HHE and the FHE framework and give ideas to build more efficient schemes from the Elisabeth family.
Last updated:  2022-09-21
Solving multivariate polynomial systems and an invariant from commutative algebra
Alessio Caminata and Elisa Gorla
The complexity of computing the solutions of a system of multivariate polynomial equations by means of Groebner bases computations is upper bounded by a function of the solving degree. In this paper, we discuss how to rigorously estimate the solving degree of a system, focusing on systems arising within public-key cryptography. In particular, we show that it is upper bounded by, and often equal to, the Castelnuovo-Mumford regularity of the ideal generated by the homogenization of the equations of the system, or by the equations themselves in case they are homogeneous. We discuss the underlying commutative algebra and clarify under which assumptions the commonly used results hold. In particular, we discuss the assumption of being in generic coordinates (often required for bounds obtained following this type of approach) and prove that systems that contain the field equations or their fake Weil descent are in generic coordinates. We also compare the notion of solving degree with that of degree of regularity, which is commonly used in the literature. We complement the paper with some examples of bounds obtained following the strategy that we describe.
Last updated:  2022-09-21
Statistical Decoding 2.0: Reducing Decoding to LPN
Kevin Carrier, Thomas Debris-Alazard, Charles Meyer-Hilfiger, and Jean-Pierre Tillich
The security of code-based cryptography relies primarily on the hardness of generic decoding with linear codes. The best generic decoding algorithms are all improvements of an old algorithm due to Prange: they are known under the name of information set decoders (ISD). A while ago, a generic decoding algorithm which does not belong to this family was proposed: statistical decoding. It is a randomized algorithm that requires the computation of a large set of parity-checks of moderate weight, and uses some kind of majority voting on these equations to recover the error. This algorithm was long forgotten because even the best variants of it performed poorly when compared to the simplest ISD algorithm. We revisit this old algorithm by using parity-check equations in a more general way. Here the parity-checks are used to get LPN samples with a secret that is part of the error, and the LPN noise is related to the weight of the parity-checks we produce. The corresponding LPN problem is then solved by standard Fourier techniques. By properly choosing the method of producing these low-weight equations and the size of the LPN problem, we are able in this way to significantly outperform information set decoding at code rates smaller than $0.3$. This gives, for the first time in $60$ years, a better decoding algorithm for a significant range of code rates that does not belong to the ISD family.
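For context on the reduction described above, recall the LPN problem to which the decoder maps its parity-check equations (generic notation, not the paper's):
$$ \big(\mathbf{a}_i,\; b_i = \langle \mathbf{a}_i, \mathbf{s} \rangle + e_i\big) \in \mathbb{F}_2^{k} \times \mathbb{F}_2, \qquad \Pr[e_i = 1] = \tau < \tfrac{1}{2}, $$
where the secret $\mathbf{s}$ plays the role of a part of the error vector and the noise rate $\tau$ is governed by the weight of the parity checks; the samples are then processed with standard Fourier (fast Walsh-Hadamard) techniques, as described in the abstract.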
Last updated:  2022-09-21
Stretching Cube Attacks: Improved Methods to Recover Massive Superpolies
Jiahui He, Kai Hu, Bart Preneel, and Meiqin Wang
Cube attacks exploit the algebraic properties of symmetric ciphers by recovering a special polynomial, the superpoly, and subsequently the secret key. When the algebraic normal forms of the corresponding Boolean functions are not available, the division-property-based approach allows recovering the exact superpoly in a clever way. However, the computational cost to recover the superpoly becomes prohibitive as the number of rounds of the cipher increases. For example, the nested monomial prediction (NMP) technique proposed at ASIACRYPT 2021 is stuck at round 845 for Trivium. To alleviate the bottleneck of the NMP technique, i.e., the unsolvable model due to the excessive number of monomial trails, we shift our focus to the so-called valuable terms of a specific middle round that contribute to the superpoly. Two new techniques are introduced, namely, Non-zero Bit-based Division Property (NBDP) and Core Monomial Prediction (CMP), both of which result in a simpler MILP model compared to the MILP model of MP. It can be shown that the CMP technique offers a substantial improvement over the monomial prediction technique in terms of computational complexity of recovering valuable terms. Combining the divide-and-conquer strategy with these two new techniques, we catch the valuable terms more effectively and thus avoid wasting computational resources on intermediate terms contributing nothing to the superpoly. As an illustration of the power of our techniques, we apply our framework to Trivium, Grain, Kreyvium and Acorn. As a result, the computational cost of earlier attacks can be significantly reduced and the exact ANFs of the superpolies for 846-, 847- and 848-round Trivium, 192-round Grain, 895-round Kreyvium and 776-round Acorn can be recovered in practical time, even though the superpoly of 848-round Trivium contains over 500 million terms; this corresponds to respectively 3, 1, 1 and 1 rounds more than the previous best results. Moreover, by investigating the internal properties of the Möbius transformation, we show how to perform key recovery using superpolies involving full key bits, which leads to the best key recovery attacks on the targeted ciphers.
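For orientation, the object recovered by these attacks is defined by the standard cube-sum identity (generic notation, not specific to the new techniques above):
$$ \bigoplus_{v \in C_I} f(v, k) \;=\; p_I(k), $$
where $C_I$ is the cube spanned by the public-variable indices $I$ and $p_I$ is the superpoly in the key bits $k$; once the exact ANF of $p_I$ is known, summing keystream outputs over the cube yields an equation in the key, which is how the key-recovery step proceeds.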
Last updated:  2022-09-21
Enhancing Differential-Neural Cryptanalysis
Zhenzhen Bao, Jian Guo, Meicheng Liu, Li Ma, and Yi Tu
In CRYPTO 2019, Gohr showed that well-trained neural networks can perform cryptanalytic distinguishing tasks better than traditional differential distinguishers. Moreover, applying an unorthodox key guessing strategy, an 11-round key-recovery attack on the modern block cipher Speck32/64 improves upon the published state-of-the-art result. This raises the following questions: to what extent does machine learning (ML) have an advantage over traditional methods, and does this advantage exist in general in the cryptanalysis of modern ciphers? To answer the first question, we devised ML-based key-recovery attacks on round-reduced Speck32/64 with more rounds. We achieved an improved 12-round attack and the first practical 13-round attack. The essential ingredient for the new results is enhancing a classical component of the ML-based attacks, namely the neutral bits. To answer the second question, we produced various neural distinguishers on round-reduced Simon32/64 and provided comparisons with their pure differential-based counterparts.
Last updated:  2022-09-21
Accountable Light Client Systems for PoS Blockchains
Oana Ciobotaru, Fatemeh Shirazi, Alistair Stewart, and Sergey Vasilyev
A major challenge for blockchain interoperability is having an on-chain light client protocol that is both efficient and secure. We present a protocol that provides short proofs about the state of a decentralised consensus protocol while being able to detect misbehaving parties. Done naively, this would require a verifier to maintain an updated list of all participants' public keys, which makes the corresponding proofs long. In general, existing solutions either lack accountability or are not efficient. We define and design a committee key scheme with short proofs that do not include any of the individual participants' public keys in plain. Our committee key scheme, in turn, uses a custom-designed SNARK with a fast prover time. Moreover, using our committee key scheme, we define and design an accountable light client system as the main cryptographic core for building bridges between proof-of-stake blockchains. Finally, we implement a prototype of our custom SNARK for which we provide benchmarks.
Last updated:  2022-09-21
The inspection model for zero-knowledge proofs and efficient Zerocash with secp256k1 keys
Huachuang Sun, Haifeng Sun, Kevin Singh, Akhil Sai Peddireddy, Harshad Patil, Jianwei Liu, and Weikeng Chen
Proving discrete log equality for groups of the same order is addressed by Chaum and Pedersen's seminal work. However, there has not been a lot of work in proving discrete log equality for groups of different orders. This paper presents an efficient solution, which leverages a technique we call delegated Schnorr. The discovery of this technique is guided by a design methodology that we call the inspection model, and we find it useful for protocol designs. We show two applications of this technique on the Findora blockchain: **Maxwell-Zerocash switching:** There are two privacy-preserving transfer protocols on the Findora blockchain, one follows the Maxwell construction and uses Pedersen commitments over Ristretto, one follows the Zerocash construction and uses Rescue over BLS12-381. We present an efficient protocol to convert assets between these two constructions while preserving the privacy. **Zerocash with secp256k1 keys:** Bitcoin, Ethereum, and many other chains do signatures on secp256k1. There is a strong need for ZK applications to not depend on special curves like Jubjub, but be compatible with secp256k1. Due to FFT unfriendliness of secp256k1, many proof systems (e.g., Groth16, Plonk, FRI) are infeasible. We present a solution using Bulletproofs over curve secq256k1 ("q") and delegated Schnorr which connects Bulletproofs to TurboPlonk over BLS12-381. We conclude the paper with (im)possibility results about Zerocash with only access to a deterministic ECDSA signing oracle, which is the case when working with MetaMask. This result shows the limitations of the techniques in this paper. This paper is under a bug bounty program through a grant from Findora Foundation.
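For reference, the Chaum-Pedersen protocol cited above proves equality of discrete logarithms within a single group of prime order $q$; the sketch below is the classical protocol in generic notation, not the delegated Schnorr technique introduced in the paper (which handles groups of different orders).
$$ \text{Prover } (x):\; r \leftarrow \mathbb{Z}_q,\ \text{send } (R_1, R_2) = (g^r, h^r); \qquad \text{Verifier: } c \leftarrow \mathbb{Z}_q; \qquad \text{Prover: } z = r + c\,x \bmod q. $$
$$ \text{Verifier accepts iff } g^z = R_1 A^c \ \text{ and } \ h^z = R_2 B^c, \quad \text{where } A = g^x,\ B = h^x. $$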
Last updated:  2022-09-20
Post-Quantum Insecurity from LWE
Alex Lombardi, Ethan Mook, Willy Quach, and Daniel Wichs
We show that for many fundamental cryptographic primitives, proving classical security under the learning-with-errors (LWE) assumption does not imply post-quantum security. This is despite the fact that LWE is widely believed to be post-quantum secure, and our work does not give any evidence otherwise. Instead, it shows that post-quantum insecurity can arise inside cryptographic constructions, even if the assumptions are post-quantum secure. Concretely, our work provides (contrived) constructions of pseudorandom functions, CPA-secure symmetric-key encryption, message-authentication codes, signatures, and CCA-secure public-key encryption schemes, all of which are proven to be classically secure under LWE via black-box reductions, but demonstrably fail to be post-quantum secure. All of these cryptosystems are stateless and non-interactive, but their security is defined via an interactive game that allows the attacker to make oracle queries to the cryptosystem. The polynomial-time quantum attacker can break these schemes by only making a few classical queries to the cryptosystem, and in some cases, a single query suffices. Previously, we only had examples of post-quantum insecurity under post-quantum assumptions for stateful/interactive protocols. Moreover, there appears to be a folklore belief that for stateless/non-interactive cryptosystems with black-box proofs of security, a quantum attack against the scheme should translate into a quantum attack on the assumption. This work shows otherwise. Our main technique is to carefully embed interactive protocols inside the interactive security games of the above primitives. As a result of independent interest, we also show a 3-round quantum disclosure of secrets (QDS) protocol between a classical sender and a receiver, where a quantum receiver learns a secret message in the third round but, assuming LWE, a classical receiver does not.
Last updated:  2022-09-20
Universal Atomic Swaps: Secure Exchange of Coins Across All Blockchains
Sri AravindaKrishnan Thyagarajan, Giulio Malavolta, and Pedro Moreno-Sánchez
Trading goods lies at the backbone of the modern economy and the recent advent of cryptocurrencies has opened the door for trading decentralized (digital) assets: A large fraction of the value of cryptocurrencies comes from the inter-currency exchange and trading, which has been arguably the most successful application of decentralized money. The security issues observed with centralized, custodial cryptocurrency exchanges have motivated the design of atomic swaps, a protocol for coin exchanges between any two users. Yet, somewhat surprisingly, no atomic swap protocol exists that simultaneously satisfies the following simple but desired properties: (i) non-custodial, departing from a trusted third party holding the users' coins during the exchange; (ii) universal, that is, compatible with all (current and future) cryptocurrencies; (iii) multi-asset, supporting the exchange of multiple coins in a single atomic swap. From a theoretical standpoint, in this work we show a generic protocol to securely swap $n$ coins from any (possibly multiple) currencies for $\tilde{n}$ coins of any other currencies, for any $n$ and $\tilde{n}$. We do not require any custom scripting language supported by the corresponding blockchains, besides the bare minimum ability to verify signatures on transactions. For the special case when the blockchains use ECDSA or Schnorr signatures, we design a practically efficient protocol based on adaptor signatures and time-lock puzzles. As a byproduct of our approach, atomic swap transactions no longer include custom scripts and are identical to standard one-to-one transactions. We also show that our protocol naturally generalizes to any cycle of users, i.e., atomic swaps with more than two participants. To demonstrate the practicality of our approach, we have evaluated a prototypical implementation of our protocol for Schnorr/ECDSA signatures and observed that an atomic swap requires below one second on commodity machines. Even on blockchains with expressive smart contract support (e.g., Ethereum), our approach reduces the on-chain cost both in terms of transaction size and gas cost.
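As background for the adaptor-signature-based construction mentioned above, here is a minimal sketch of one common Schnorr adaptor signature formulation (generic notation; an assumed variant for illustration, not necessarily the exact one used in the paper):
$$ \text{pre-sign: } R = rG,\ c = H(R + T \parallel X \parallel m),\ \hat{s} = r + c\,x; \qquad \text{adapt with witness } t: \; s = \hat{s} + t; $$
$$ \text{verify: } sG = R + T + cX; \qquad \text{extract: } t = s - \hat{s}, $$
so publishing the completed signature on one chain reveals the witness $t$ needed to complete the counterparty's pre-signature on the other chain, which is what makes the two transfers atomic.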
Last updated:  2022-09-20
A New Approach to the Constant-Round Re-encryption Mix-Net
Myungsun Kim
The re-encryption mix-net (RMN) is a basic cryptographic tool that is widely used in privacy-protection domains that require anonymity support; for example, it is used in electronic voting, web browsing, and location systems. To protect information about the relationship between senders and messages, a number of mix servers in RMNs shuffle and forward a list of input ciphertexts in a cascading manner. The output of the last mix server is decrypted to yield the set of original messages. The main downside of this approach is that the mixing process requires a number of rounds that is linear in the number of mix servers. This implies that a long round delay would cause network latency, which can dominate local computational latencies. To minimize the effect of network latency, RMN protocols with constant round complexity are more desirable. In this work, we propose a new RMN protocol that runs in $O(1)$ rounds in the number of mix servers and that is UC-secure in a hybrid model with access to functionalities for secure communication and zero-knowledge proofs (ZKP). Interestingly, because our protocol does not require a ZKP protocol for a verifiable shuffle, we also achieve a considerable efficiency gain in terms of computation cost. Our main tools are secret sharing and an ElGamal encryption that is extended in the sense that it works on a multiplicative group under field extension. Importantly, this extended ElGamal encryption scheme acquires a new capability: it can efficiently decompose a decrypted message into unique values. We provide a detailed report on the theoretical performance and security analysis of this method.
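For context, the core operation of a re-encryption mix-net is standard ElGamal re-randomization, shown here for plain ElGamal with public key $h = g^x$ (the paper's extended scheme instead works over a multiplicative group under field extension):
$$ (c_1, c_2) = (g^{r}, m\,h^{r}) \;\longmapsto\; (c_1 \cdot g^{r'},\; c_2 \cdot h^{r'}) = (g^{r+r'}, m\,h^{r+r'}), $$
a fresh encryption of the same message under the same key, so each mix server can shuffle and re-randomize the ciphertext list without learning the messages.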
Last updated:  2022-09-20
FPGA Design Deobfuscation by Iterative LUT Modification at Bitstream Level
Michail Moraitis and Elena Dubrova
Hardware obfuscation through redundancy addition is a well-known countermeasure against reverse engineering. For FPGA designs, such a technique can be implemented with a small overhead, however, its effectiveness is heavily dependent on the stealthiness of the redundant elements. Hardware opaque predicates can provide adequately stealthy constant values that can be used for obfuscation. However, in this report, we show that such obfuscation schemes can be defeated by ensuring the full controllability of each active look-up table input in a design via iterative bitstream modifications. We present an algorithm that works directly on the bitstream and does not require the possession of a netlist. The feasibility of our approach is verified with the example of an obfuscated SNOW 3G design implemented in a Xilinx 7-series FPGA.
Last updated:  2022-09-20
Secure Maximum Weight Matching Approximation on General Graphs (Full Version)
Andreas Brüggemann, Malte Breuer, Andreas Klinger, Thomas Schneider, and Ulrike Meyer
Privacy-preserving protocols for matchings on general graphs can be used for applications such as online dating, bartering, or kidney donor exchange. In addition, they can act as building blocks for more complex protocols. While privacy-preserving protocols for matchings on bipartite graphs are a well-researched topic, the case of general graphs has received significantly less attention so far. We address this gap by providing the first privacy-preserving protocol for maximum weight matching on general graphs. We present two protocol variants, which both compute a $1/2$-approximation instead of an optimal solution in favor of scalability. For $N$ nodes, the first variant requires $\mathcal{O}(N \log^2 N)$ rounds and $\mathcal{O}(N^3\log N)$ communication, and the second variant requires only $\mathcal{O}(N \log N)$ rounds and $\mathcal{O}(N^3)$ communication. We implement both variants and find that the first variant runs in $14.9$ minutes for $N=300$ nodes, while the second variant requires only $5.1$ minutes for $N=300$, and $12.5$ minutes for $N=400$.
Last updated:  2022-09-20
NTRU-$\nu$-um: Secure Fully Homomorphic Encryption from NTRU with Small Modulus
Kamil Kluczniak
NTRUEncrypt is one of the first lattice-based encryption schemes. Furthermore, the earliest fully homomorphic encryption (FHE) schemes rely on the NTRU problem. Currently, NTRU is one of the leading candidates in the NIST post-quantum standardization competition. What makes NTRU appealing is the age of the cryptosystem and its relatively good performance. Unfortunately, FHE based on NTRU became impractical due to efficient attacks on NTRU instantiations with ``overstretched'' modulus. In particular, to support a reasonable circuit depth, current NTRU-based FHE schemes require instantiating NTRU with a very large modulus. Breaking the NTRU problem for such large moduli turns out to be easy. Due to these attacks, any serious work on practical NTRU-based FHE essentially stopped. In this paper, we reactivate research on practical FHE that can be based on NTRU. We design an efficient bootstrapping scheme in which the noise growth is small enough to keep the modulus-to-dimension ratio relatively small, thus avoiding the negative consequences of ``overstretching'' the modulus. Our bootstrapping algorithm is an accumulator-type bootstrapping scheme analogous to AP/FHEW/TFHE. Finally, we show that we can use the bootstrapping procedure to compute any function over $\mathbb{Z}_t$. Consequently, we obtain one of the fastest FHE bootstrapping schemes able to compute any function over elements of a finite field while also reducing the error.
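For context on the ``overstretched'' regime discussed above, recall the shape of an NTRU public key (standard textbook formulation with generic parameters, not the paper's exact instantiation):
$$ h \;\equiv\; g \cdot f^{-1} \pmod{q}, \qquad f, g \in R = \mathbb{Z}[X]/(X^{n}+1) \ \text{(or } X^{n}-1\text{) with small coefficients}, $$
where the pair $(f, g)$ is an unusually short vector in a $2n$-dimensional lattice determined by $h$; when $q$ is very large relative to $n$ (the regime earlier NTRU-based FHE needed for deep circuits), lattice attacks exploiting the dense sublattice generated by rotations of $(f, g)$ recover it much faster than generic lattice reduction, which is the failure mode the small-modulus bootstrapping above is designed to avoid.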
Last updated:  2022-09-20
Revisiting Mutual Information Analysis: Multidimensionality, Neural Estimation and Optimality Proofs
Valence Cristiani, Maxime Lecomte, and Philippe Maurine
Recent works showed how Mutual Information Neural Estimation (MINE) could be applied to side-channel analysis in order to evaluate the amount of leakage of an electronic device. One of the main advantages of MINE over classical estimation techniques is that it enables computing the mutual information between high-dimensional traces and a secret, which is relevant for leakage assessment. However, optimally exploiting this information in an attack context in order to retrieve a secret remains a non-trivial task, especially when a profiling phase of the target is not allowed. Within this context, the purpose of this paper is to address this problem based on a simple idea: there are multiple leakage sources in side-channel traces and optimal attacks should necessarily exploit most/all of them. To this aim, a new mathematical framework, designed to bridge classical Mutual Information Analysis (MIA) and the multidimensional aspect of neural-based estimators, is proposed. One of the goals is to provide rigorous proofs consolidating the mathematical basis behind MIA, thus alleviating inconsistencies found in the state of the art. This framework allows us to derive a new attack called Neural Estimated Mutual Information Analysis (NEMIA). To the best of our knowledge, it is the first unsupervised attack able to benefit from both the power of deep learning techniques and the valuable theoretical properties of MI. Simulations and experiments show that NEMIA outperforms classical and more recent deep-learning-based unsupervised side-channel attacks, especially in low-information contexts.
Last updated:  2022-09-20
Optimizing Rectangle Attacks: A Unified and Generic Framework for Key Recovery
Ling Song, Nana Zhang, Qianqian Yang, Danping Shi, Jiahao Zhao, Lei Hu, and Jian Weng
The rectangle attack has proven to be a very powerful form of cryptanalysis against block ciphers. Given a rectangle distinguisher, one expects to mount key recovery attacks as efficiently as possible. In the literature, there have been four algorithms for rectangle key recovery attacks. However, their performance varies from case to case, and in numerous applications the attacks are not optimal. In this paper, we investigate the rectangle key recovery in depth and propose a unified and generic key recovery algorithm, which supports any possible attacking parameters. Notably, it not only covers the four previous rectangle key recovery algorithms, but also unveils five types of new attacks that were previously missed. Along with the new key recovery algorithm, we propose a framework for automatically finding the best attacking parameters, with which the time complexity of the rectangle attack will be minimized using the new algorithm. To demonstrate the efficiency of the new key recovery algorithm, we apply it to Serpent, CRAFT, SKINNY and Deoxys-BC-256 based on existing distinguishers and obtain a series of improved rectangle attacks.
Last updated:  2022-09-20
Forward-Secure Encryption with Fast Forwarding
Yevgeniy Dodis, Daniel Jost, and Harish Karthikeyan
Forward-secure encryption (FSE) allows communicating parties to refresh their keys across epochs, in a way that compromising the current secret key leaves all prior encrypted communication secure. We investigate a novel dimension in the design of FSE schemes: fast-forwarding (FF). This refers to the ability of a stale communicating party that is "stuck" in an old epoch to efficiently "catch up" to the newest state, a need that frequently arises in practice. While this dimension was not explicitly considered in prior work, we observe that one can augment prior FSEs -- both in symmetric- and public-key settings -- to support fast-forwarding that is sublinear in the number of epochs. However, the resulting schemes have disadvantages: the symmetric-key scheme is slower than any conventional stream cipher by a factor of the security parameter, while the public-key scheme inherits the inefficiencies of the HIBE-based forward-secure PKE. To address these inefficiencies, we look at the common real-life situation which we call the bulletin board model, where communicating parties rely on some infrastructure -- such as an application provider -- to help them store and deliver ciphertexts to each other. We then define and construct FF-FSE in the bulletin board model, which addresses the above-mentioned disadvantages. In particular, * Our FF-stream-cipher in the bulletin-board model has: (a) constant state size; (b) constant normal (no fast-forward) operation; and (c) logarithmic fast-forward property. This essentially matches the efficiency of non-fast-forwardable stream ciphers, at the cost of constant communication complexity with the bulletin board per update. * Our public-key FF-FSE avoids HIBE-based techniques by instead using so-called updatable public-key encryption (UPKE), introduced in several recent works (and more efficient than public-key FSEs). Our UPKE-based scheme uses a novel type of "update graph" that we construct in this work. Our graph has constant in-degree, logarithmic diameter, and a logarithmic "cut property", which is essential for the efficiency of our schemes. Combined with recent UPKE schemes, we get two FF-FSEs in the bulletin board model, under the DDH and the LWE assumptions.
Last updated:  2022-09-20
Exploring SAT for Cryptanalysis: (Quantum) Collision Attacks against 6-Round SHA-3 (Full Version)
Jian Guo, Guozhen Liu, Ling Song, and Yi Tu
In this work, we focus on collision attacks against instances of the SHA-3 hash family in both classical and quantum settings. Since the 5-round collision attacks on SHA3-256 and other variants proposed by Guo et al. at JoC 2020, no other essential progress has been published. With a thorough investigation, we identify that the challenges of extending such collision attacks on SHA-3 to more rounds lie in the inefficiency of differential trail search. To overcome this obstacle, we develop a SAT-based automatic search toolkit. The tool is used in multiple intermediate steps of the collision attacks and exhibits surprisingly high efficiency in differential trail search and other optimization problems encountered in the process. As a result, we present the first 6-round classical collision attack on SHAKE-128 with time complexity $2^{123.5}$, which also forms a quantum collision attack with quantum time ${{2^{67.25}}/{\sqrt{S}}}$, and the first 6-round quantum collision attack on SHA3-224 and SHA3-256 with quantum time ${{2^{97.75}}/{\sqrt{S}}}$ and ${{2^{104.25}}/{\sqrt{S}}}$, both with negligible classical and quantum memory requirements. The fact that classical collision attacks do not apply to 6-round SHA3-224 and SHA3-256 shows the higher coverage of quantum collision attacks, which is consistent with that on SHA-2 observed by Hosoyamada and Sasaki at CRYPTO 2021.
Last updated:  2022-09-19
Universal Reductions: Reductions Relative to Stateful Oracles
Benjamin Chan, Cody Freitag, and Rafael Pass
We define a framework for analyzing the security of cryptographic protocols that makes minimal assumptions about what a "realistic model of computation" is. In particular, whereas classical models assume that the attacker is a (perhaps non-uniform) probabilistic polynomial-time algorithm, and more recent definitional approaches also consider quantum polynomial-time algorithms, we consider an approach that is more agnostic to what computational model is physically realizable. Our notion of universal reductions models attackers as PPT algorithms having access to some arbitrary unbounded stateful Nature that cannot be rewound or restarted when queried multiple times. We also consider a more relaxed notion of universal reductions w.r.t. time-evolving, $k$-window Natures, which places restrictions on Nature - roughly speaking, Nature's behavior may depend on the number of messages it has received and the content of the last $k(\lambda)$ messages (but not on "older" messages). We present both impossibility results and general feasibility results for our notions, indicating to what extent the extended Church-Turing hypotheses are needed for a well-founded theory of Cryptography.
Last updated:  2022-09-19
Identity-Based Matchmaking Encryption from Standard Assumptions
Jie Chen, Yu Li, Jinming Wen, and Jian Weng
In this work, we propose the first identity-based matchmaking encryption (IB-ME) scheme under standard assumptions in the standard model. This scheme is proven to be secure under the symmetric external Diffie-Hellman (SXDH) assumption in prime-order bilinear pairing groups. In our IB-ME scheme, all parameters consist of a constant number of group elements and are simpler than those of previous constructions. Previous works are either in the random oracle model or based on q-type assumptions, while ours is built directly in the standard model, is based on static assumptions, and does not rely on other cryptographic tools. More concretely, our IB-ME is constructed from a variant of two-level anonymous IBE. We observe that this two-level IBE with anonymity and unforgeability provides the same functionality as IB-ME, and its security properties cleverly meet the two requirements of IB-ME (privacy and authenticity). The privacy property of IB-ME relies on the anonymity of this two-level IBE, while the authenticity property corresponds to the unforgeability at the second level. This variant of two-level IBE is built from dual pairing vector spaces, and both security reductions rely on dual system encryption.
Last updated:  2022-09-19
On Generalizations of the Lai-Massey Scheme: the Birth of Amaryllises
Lorenzo Grassi
In this paper, we re-investigate the Lai-Massey scheme, originally proposed in the cipher IDEA. Due to its similarity with Feistel schemes, and due to the existence of invariant subspace attacks as originally pointed out by Vaudenay at FSE 1999, the Lai-Massey scheme has received little attention from the community. As a first contribution, we propose new generalizations of this scheme that are not (affine) equivalent to any generalized Feistel scheme proposed in the literature so far. Then, inspired by the recent Horst construction, we propose the Amaryllises construction as a generalization of the Lai-Massey scheme, in which the linear combination in the Lai-Massey scheme is replaced by a non-linear one. Besides proposing concrete examples of the Amaryllises construction, we discuss its (possible) advantages and disadvantages with respect to other existing schemes/constructions published in the literature, with particular attention to the Lai-Massey and Horst constructions.
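For reference, a minimal Python sketch of one round of the classical Lai-Massey scheme (here instantiated over the XOR group with a standard half-swap orthomorphism) is given below. It is only meant to recall the scheme being generalized; the round function F is a toy stand-in, not a component of any concrete cipher or of the Amaryllises construction itself.

    # A toy single round of the classical Lai-Massey scheme on two 16-bit halves,
    # using xor as the group operation and the half-swap orthomorphism sigma.
    # F is an arbitrary stand-in round function for illustration only.
    def F(x, k):
        return ((x * 0x9E37 + k) ^ (x >> 7)) & 0xFFFF

    def sigma(x):
        # orthomorphism of (GF(2)^16, xor): (l, r) -> (r, l ^ r)
        l, r = x >> 8, x & 0xFF
        return (r << 8) | (l ^ r)

    def lai_massey_round(x, y, k):
        t = F(x ^ y, k)              # F is applied to a combination of the two halves
        return sigma(x ^ t), y ^ t   # both halves are translated by t

    print(lai_massey_round(0x1234, 0xABCD, 0x0F0F))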
Last updated:  2022-09-19
RapidUp: Multi-Domain Permutation Protocol for Lookup Tables
Héctor Masip Ardevol, Jordi Baylina Melé, Daniel Lubarov, and José L. Muñoz-Tapia
SNARKs for some standard cryptographic primitives tend to involve many SNARK-unfriendly operations such as XOR. Previous protocols such as [GW20] worked around this problem by introducing lookup arguments. However, these protocols were only applicable within a single circuit. RapidUp is a protocol that overcomes this limitation by unfolding the grand-product polynomial into two (equivalent) polynomials of the same size. Moreover, a generalization of previous protocols is presented through the introduction of selectors.
Last updated:  2022-09-19
On Secret Sharing, Randomness, and Random-less Reductions for Secret Sharing
Divesh Aggarwal, Eldon Chung, Maciej Obremski, and João Ribeiro
Secret-sharing is one of the most basic and oldest primitives in cryptography, introduced by Shamir and Blakley in the 70s. It allows one to strike a meaningful balance between availability and confidentiality of secret information. It has a host of applications, most notably in threshold cryptography and multi-party computation. All known constructions of secret sharing (with the exception of those with a pathological choice of parameters) require access to uniform randomness. In practice, it is extremely challenging to generate a source of uniform randomness. This has led to a large body of research devoted to designing randomized algorithms and cryptographic primitives from imperfect sources of randomness. Motivated by this, 15 years ago, Bosley and Dodis asked whether it is even possible to build 2-out-of-2 secret sharing without access to uniform randomness. In this work, we make progress towards resolving this question. We answer this question for secret sharing schemes with important additional properties, i.e., either leakage-resilience or non-malleability. We prove that, unfortunately, for not too small secrets, it is impossible to construct any of 2-out-of-2 leakage-resilient secret sharing or 2-out-of-2 non-malleable secret sharing without access to uniform randomness. Given that the problem whether 2-out-of-2 secret sharing requires uniform randomness has been open for a long time, it is reasonable to consider intermediate problems towards resolving the open question. In a spirit similar to NP-completeness, we study how the existence of a t-out-of-n secret sharing without access to uniform randomness is related to the existence of a t'-out-of-n' secret sharing without access to uniform randomness for a different choice of the parameters t,n,t',n'.
Last updated:  2022-09-19
Nonmalleable Digital Lockers and Robust Fuzzy Extractors in the Plain Model
Daniel Apon, Chloe Cachet, Benjamin Fuller, Peter Hall, and Feng-Hao Liu
We give the first constructions in the plain model of 1) nonmalleable digital lockers (Canetti and Varia, TCC 2009) and 2) robust fuzzy extractors (Boyen et al., Eurocrypt 2005) that secure sources with entropy below 1/2 of their length. Constructions were previously only known for both primitives assuming random oracles or a common reference string (CRS). Along the way, we define a new primitive called a nonmalleable point function obfuscation with associated data. The associated data is public but protected from all tampering. We use the same paradigm to then extend this to digital lockers. Our constructions achieve nonmalleability over the output point by placing a CRS into the associated data and using an appropriate non-interactive zero-knowledge proof. The constructions protect against tampering with the input point by low-degree polynomials, and against arbitrary tampering with the output point and the associated data. Our constructions achieve virtual black box security. These constructions are then used to create robust fuzzy extractors that can support low-entropy sources in the plain model. By using the geometric structure of a syndrome secure sketch (Dodis et al., SIAM Journal on Computing 2008), the adversary’s tampering function can always be expressed as a low-degree polynomial; thus, the protection provided by the constructed nonmalleable objects suffices.
Last updated:  2022-09-19
Light the Signal: Optimization of Signal Leakage Attacks against LWE-Based Key Exchange
Yue Qin, Ruoyu Ding, Chi Cheng, Nina Bindel, Yanbin Pan, and Jintai Ding
Key exchange protocols from the learning with errors (LWE) problem share many similarities with the Diffie–Hellman–Merkle (DHM) protocol, which plays a central role in securing our Internet. Therefore, there has been a long-standing effort to design authenticated key exchange directly from LWE to mirror the advantages of DHM-based protocols. In this paper, we revisit signal leakage attacks and show that the severity of these attacks against LWE-based (authenticated) key exchange is still underestimated. In particular, by converting the problem of launching a signal leakage attack into a coding problem, we can significantly reduce the needed number of queries to reveal the secret key. Specifically, for DXL-KE we reduce the queries from 1,266 to only 29, while for DBS-KE, we need only 748 queries, a great improvement over the previous 1,074,434 queries. Moreover, our new view of signals as binary codes enables recognizing vulnerable schemes more easily. As such, we completely recover the secret key of a password-based authenticated key exchange scheme by Dabra et al. with only 757 queries and partially reveal the secret used in a two-factor authentication by Wang et al. with only one query. The experimental evaluation supports our theoretical analysis and demonstrates the efficiency and effectiveness of our attacks. Our results caution against underestimating the power of signal leakage attacks as they are applicable even in settings with a very restricted number of interactions between adversary and victim.
Last updated:  2022-09-19
A Modular Approach to the Security Analysis of Two-Permutation Constructions
Yu Long Chen
Constructions based on two public permutation calls are very common in today’s cryptographic community. However, each time a new construction is introduced, a dedicated proof must be carried out to study the security of the construction. In this work, we propose a new tool to analyze the security of these constructions in a modular way. This tool is built on the idea of the classical mirror theory for block cipher based constructions, such that it can be used for security proofs in the ideal permutation model. We present different variants of this public permutation mirror theory such that it is suitable for different security notions. We also present a framework to use the new techniques, which provides the bad events that need to be excluded in order to apply the public permutation mirror theory. Furthermore, we showcase the new technique on three examples: the Tweakable Even-Mansour cipher by Cogliati et al. (CRYPTO ’15), the two permutation variant of the pEDM PRF by Dutta et al. (ToSC ’21(2)), and the two permutation variant of the nEHtM\(_p\) MAC algorithm by Dutta and Nandi (AFRICACRYPT ’20). With this new tool we prove the multi-user security of these constructions in a considerably simplified way.
Last updated:  2022-09-19
On the Lattice Isomorphism Problem, Quadratic Forms, Remarkable Lattices, and Cryptography
Léo Ducas and Wessel van Woerden
A natural and recurring idea in the knapsack/lattice cryptography literature is to start from a lattice with remarkable decoding capability as your private key, and hide it somehow to make a public key. This is also how the code-based encryption scheme of McEliece (1978) proceeds. This idea has never worked out very well for lattices: ad-hoc approaches have been proposed, but they have been subject to ad-hoc attacks, using tricks beyond lattice reduction algorithms. On the other hand, the framework offered by the Short Integer Solution (SIS) and Learning With Errors (LWE) problems, while convenient and well founded, remains frustrating from a coding perspective: the underlying decoding algorithms are rather trivial, with poor decoding performance. In this work, we provide generic realizations of this natural idea (independently of the chosen remarkable lattice) by basing cryptography on the lattice isomorphism problem (LIP). More specifically, we provide: - a worst-case to average-case reduction for search-LIP and distinguish-LIP within an isomorphism class, by extending techniques of Haviv and Regev (SODA 2014). - a zero-knowledge proof of knowledge (ZKPoK) of an isomorphism. This implies an identification scheme based on search-LIP. - a key encapsulation mechanism (KEM) scheme and a hash-then-sign signature scheme, both based on distinguish-LIP. The purpose of this approach is for remarkable lattices to improve the security and performance of lattice-based cryptography. For example, decoding within poly-logarithmic factor from Minkowski's bound in a remarkable lattice would lead to a KEM resisting lattice attacks down to poly-logarithmic approximation factor, provided that the dual lattice is also close to Minkowski's bound. Recent works have indeed reached such decoders for certain lattices (Chor-Rivest, Barnes-Sloan), but these do not perfectly fit our needs as their duals have poor minimal distance.
Last updated:  2022-09-19
Log-$\mathcal{S}$-unit lattices using Explicit Stickelberger Generators to solve Approx Ideal-SVP
Olivier Bernard, Andrea Lesavourey, Tuong-Huy Nguyen, and Adeline Roux-Langlois
In 2020, Bernard and Roux-Langlois introduced the Twisted-PHS algorithm to solve Approx-SVP for ideal lattices on any number field, based on the PHS algorithm by Pellet-Mary, Hanrot and Stehlé. They performed experiments for cyclotomic fields of prime conductor and degree at most 70, one of the main bottlenecks being the computation of a log-$\mathcal{S}$-unit lattice, which requires subexponential time. Our main contribution is to extend these experiments to cyclotomic fields of degree up to $210$ for most conductors $m$. Building upon new results from Bernard and Kučera on the Stickelberger ideal, we use explicit generators to construct full-rank log-$\mathcal{S}$-unit sublattices fulfilling the role of approximating the full Twisted-PHS lattice. In our best approximate regime, our results show that the Twisted-PHS algorithm outperforms, over our experimental range, the CDW algorithm by Cramer, Ducas and Wesolowski, and sometimes beats its asymptotic volumetric lower bound. Additionally, we use these explicit Stickelberger generators to remove almost all quantum steps in the CDW algorithm, under the mild restriction that the plus part of the class number satisfies $h^+_{m}\leq O(\sqrt{m})$.
Last updated:  2022-09-19
Twisted-PHS: Using the Product Formula to Solve Approx-SVP in Ideal Lattices
Olivier Bernard and Adeline Roux-Langlois
Approx-SVP is a well-known hard problem on lattices, which asks to find short vectors in a given lattice, but its variant restricted to ideal lattices (which correspond to ideals of the ring of integers $\mathcal{O}_{K}$ of a number field $K$) is still not fully understood. For a long time, the best known algorithm to solve this problem on ideal lattices was the same as for arbitrary lattices. But recently, a series of works has suggested that solving this problem could be easier in ideal lattices than in arbitrary ones, in particular in the quantum setting. Our main contribution is to propose a new "twisted" version of the PHS (by Pellet-Mary, Hanrot and Stehlé 2019) algorithm, which we call Twisted-PHS. As a minor contribution, we also propose several improvements of the PHS algorithm. On the theoretical side, we prove that our Twisted-PHS algorithm reaches the same asymptotic trade-off between runtime and approximation factor as the original PHS algorithm. On the practical side though, we provide a full implementation of our algorithm which suggests that much better approximation factors are achieved, and that the given lattice bases are a lot more orthogonal than the ones used in PHS. To our knowledge, this is the first time that this type of algorithm has been completely implemented and tested for fields of degree up to 60.
Last updated:  2022-09-19
Hybrid scalar/vector implementations of Keccak and SPHINCS+ on AArch64
Hanno Becker and Matthias J. Kannwischer
This paper presents two new techniques for the fast implementation of the Keccak permutation on the A-profile of the Arm architecture: First, the elimination of explicit rotations in the Keccak permutation through barrel shifting, applicable to scalar AArch64 implementations of Keccak-f1600. Second, the construction of hybrid implementations concurrently leveraging both the scalar and the Neon instruction sets of AArch64. The resulting performance improvements are demonstrated using the example of the hash-based signature scheme SPHINCS+, one of the recently announced winners of the NIST post-quantum cryptography project: We achieve up to 1.89× performance improvements compared to the state of the art. Our implementations target the Arm Cortex-{A55,A510,A78,A710,X1,X2} processors common in client devices such as mobile phones.
Last updated:  2022-09-19
Data Protection Law and Multi-Party Computation: Applications to Information Exchange between Law Enforcement Agencies
Amos Treiber, Dirk Müllmann, Thomas Schneider, and Indra Spiecker genannt Döhmann
Pushes for increased power of Law Enforcement (LE) for data retention and centralized storage result in legal challenges with data protection law and courts - and possible violations of the right to privacy. This is motivated by a desire for better cooperation and exchange between LE Agencies (LEAs), which is difficult due to data protection regulations, was identified as a main factor of major public security failures, and is a frequent criticism of LE. Secure Multi-Party Computation (MPC) is often seen as a technological means to solve privacy conflicts where actors want to exchange and analyze data that needs to be protected due to data protection laws. In this interdisciplinary work, we investigate the problem of private information exchange between LEAs from both a legal and technical angle. We give a legal analysis of secret-sharing based MPC techniques in general and, as a particular application scenario, consider the case of matching LE databases for lawful information exchange between LEAs. We propose a system for lawful information exchange between LEAs using MPC and private set intersection and show its feasibility by giving a legal analysis for data protection and a technical analysis for workload complexity. Towards practicality, we present insights from qualitative feedback gathered within exchanges with a major European LEA.
Last updated:  2022-09-19
Le Mans: Dynamic and Fluid MPC for Dishonest Majority
Rahul Rachuri and Peter Scholl
Most MPC protocols require the set of parties to be active for the entire duration of the computation. Deploying MPC for use cases such as complex and resource-intensive scientific computations increases the barrier of entry for potential participants. The model of Fluid MPC (Crypto 2021) tackles this issue by giving parties the flexibility to participate in the protocol only when their resources are free. As such, the set of parties is dynamically changing over time. In this work, we extend Fluid MPC, which only considered an honest majority, to the setting where the majority of participants at any point in the computation may be corrupt. We do this by presenting variants of the SPDZ protocol, which support dynamic participants. Firstly, we describe a universal preprocessing for SPDZ, which allows a set of $n$ parties to compute some correlated randomness, such that later on, any subset of the parties can use this to take part in an online secure computation. We complement this with a Dynamic SPDZ online phase, designed to work with our universal preprocessing, as well as a protocol for securely realising the preprocessing. Our preprocessing protocol is designed to efficiently use pseudorandom correlation generators, thus, the parties' storage and communication costs can be almost independent of the function being evaluated. We then extend this to support a fluid online phase, where the set of parties can dynamically evolve during the online phase. Our protocol achieves maximal fluidity and security with abort, similarly to the previous, honest majority construction. Achieving this requires a careful design and techniques to guarantee a small state complexity, allowing us to switch between committees efficiently.
Last updated:  2022-09-19
Quantum Analysis of AES
Kyungbae Jang, Anubhab Baksi, Gyeongju Song, Hyunji Kim, Hwajeong Seo, and Anupam Chattopadhyay
Quantum computing is considered among the next big leaps in computer science. While a fully functional quantum computer is still in the future, there is an ever-growing need to evaluate the security of secret-key ciphers against a potent quantum adversary. Keeping this in mind, our work explores key recovery attacks on the three variants of AES (-128, -192, -256) using Grover's search, with respect to both the quantum implementation and the quantum key search based on Grover's algorithm. We develop a pool of implementations, mostly by reducing the circuit-depth metrics. We consider various strategies for optimization, as well as make use of the state-of-the-art advancements in the relevant fields. In a nutshell, we present the least Toffoli depth and full depth implementations of AES, thereby improving on Zou et al.'s Asiacrypt'20 paper by more than 98 percent for all variants of AES. Our qubit count - Toffoli depth product is improved from theirs by more than 75 percent. Furthermore, we analyze Jaques et al.'s Eurocrypt'20 implementations in detail, fix their bugs, and report corrected benchmarks. To the best of our knowledge, our work improves on all previous works (including the recent Eprint'22 paper by Huang and Sun).
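As a rough sense of scale for Grover-based key search (independent of the circuit-level costs reported in the paper), the snippet below evaluates the textbook estimate of about floor((pi/4)*2^(k/2)) Grover iterations for each AES key length; the constants and output formatting are illustrative only.

    # Back-of-the-envelope Grover iteration counts for AES key search, using the
    # standard estimate floor(pi/4 * 2^(k/2)); circuit depth/width figures from
    # the paper are not reproduced here.
    import math

    for k in (128, 192, 256):
        iters = math.floor(math.pi / 4 * 2 ** (k / 2))
        print(f"AES-{k}: about 2^{math.log2(iters):.1f} Grover iterations")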
Last updated:  2022-09-19
Adaptively Secure Single Secret Leader Election from DDH
Dario Catalano, Dario Fiore, and Emanuele Giunta
Single Secret Leader Election protocols (SSLE, for short) allow a group of users to select a random leader so that the latter remains secret until she decides to reveal herself. Thanks to this feature, SSLE can be used to build an election mechanism for proof-of-stake based blockchains. In particular, a recent work by Azouvi and Cappelletti (ACM AFT 2021) shows that in comparison to probabilistic leader election methods, SSLE-based proof-of-stake blockchains have significant security gains, both with respect to grinding attacks and with respect to the private attack. Yet, as of today, very few concrete constructions of SSLE are known. In particular, all existing protocols are only secure in a model where the adversary is supposed to corrupt participants before the protocol starts -- an assumption that clashes with the highly dynamic nature of decentralized blockchain protocols. In this paper we make progress in the study of SSLE by proposing new efficient constructions that achieve stronger security guarantees than previous work. In particular, we propose the first SSLE protocol that achieves adaptive security. Our scheme is proven secure in the universal composability model and achieves efficiency comparable to previous, less secure, realizations in the state of the art.
Last updated:  2022-09-19
Continued Fractions Applied to a Family of RSA-like Cryptosystems
George Teseleanu and Paul Cotan
Let $N=pq$ be the product of two balanced prime numbers $p$ and $q$. Murru and Saettone presented in 2017 an interesting RSA-like cryptosystem that uses the key equation $ed - k (p^2+p+1)(q^2+q+1) = 1$, instead of the classical RSA key equation $ed - k (p-1)(q-1) = 1$. The authors claimed that their scheme is immune to Wiener's continued fraction attack. Unfortunately, Nitaj et al. developed exactly such an attack. In this paper, we introduce a family of RSA-like encryption schemes that uses the key equation $ed - k [(p^n-1)(q^n-1)]/[(p-1)(q-1)] = 1$, where $n>1$ is an integer. Then, we show that regardless of the choice of $n$, there exists an attack based on continued fractions that recovers the secret exponent.
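A toy sanity check of the generalized key equation (illustration only; the parameters below are far too small to be secure, and n = 3 recovers the Murru-Saettone modulus (p^2+p+1)(q^2+q+1)):

    # Toy instance of the key equation e*d - k*[(p^n-1)(q^n-1)]/[(p-1)(q-1)] = 1.
    # Requires Python 3.8+ for pow(e, -1, m). Parameters are illustrative only.
    from math import gcd

    p, q, n = 1009, 1013, 3
    phi_n = ((p**n - 1) // (p - 1)) * ((q**n - 1) // (q - 1))  # = (p^2+p+1)(q^2+q+1)
    e = 65537
    assert gcd(e, phi_n) == 1
    d = pow(e, -1, phi_n)              # the secret exponent
    k = (e * d - 1) // phi_n
    assert e * d - k * phi_n == 1      # the key equation holds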
Last updated:  2022-09-19
Practical Statistically-Sound Proofs of Exponentiation in any Group
Charlotte Hoffmann, Pavel Hubáček, Chethan Kamath, Karen Klein, and Krzysztof Pietrzak
For a group $\mathbb{G}$ of unknown order, a *Proof of Exponentiation* (PoE) allows a prover to convince a verifier that a tuple $(y,x,q,T)\in \mathbb{G}^2\times\mathbb{N}^2$ satisfies $y=x^{q^T}$. PoEs have recently found exciting applications in constructions of verifiable delay functions and succinct arguments of knowledge. The current PoEs that are practical in terms of proof-size only provide restricted soundness guarantees: Wesolowski's protocol (Journal of Cryptology 2020) is only computationally-sound (i.e., it is an argument), whereas Pietrzak's protocol (ITCS 2019) is statistically-sound only in groups that come with the promise of not having any small subgroups. On the other hand, the only statistically-sound PoE in *arbitrary* groups of unknown order is due to Block et al. (CRYPTO 2021), and it can be seen as an elaborate parallel repetition of Pietrzak's PoE. Therefore, to achieve $\lambda$ bits of security, say $\lambda=80$, the number of repetitions required -- and hence the (multiplicative) overhead incurred in proof-size -- is as large as $\lambda$. In this work, we propose a statistically-sound PoE for arbitrary groups for the case where the exponent $q$ is the product of all primes up to some bound $B$. For such a structured exponent, we show that it suffices to run only $\lambda/\log(B)$ parallel instances of Pietrzak's PoE. This reduces the concrete proof-size compared to Block et al. by an order of magnitude. Furthermore, we show that in the known applications where PoEs are used as a building block such structured exponents are viable. Finally, we also discuss batching of our PoE, showing that many proofs (for the same $\mathbb{G}$ and $q$ but different $x$ and $T$) can be batched by adding only a single element to the proof per additional statement.
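As a quick illustration of the headline saving (and not of the soundness analysis itself), the sketch below computes the structured exponent q as the product of all primes up to B and the corresponding number of parallel Pietrzak-PoE instances lambda/log2(B) quoted in the abstract; the choice B = 2^16 is arbitrary.

    # For a structured exponent q = product of all primes up to B, the abstract's
    # repetition count is about lambda / log2(B) instead of lambda.
    import math

    def primes_up_to(bound):
        sieve = bytearray([1]) * (bound + 1)
        sieve[0] = sieve[1] = 0
        for i in range(2, int(bound ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i :: i] = bytearray(len(range(i * i, bound + 1, i)))
        return [i for i, flag in enumerate(sieve) if flag]

    lam, B = 80, 2 ** 16
    q = math.prod(primes_up_to(B))                # the structured exponent
    repetitions = math.ceil(lam / math.log2(B))   # 80 / 16 = 5 instead of 80
    print(q.bit_length(), repetitions)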
Last updated:  2022-09-19
Sherlock Holmes Zero-Knowledge Protocols
George Teseleanu
We present two simple zero-knowledge interactive proofs that can be instantiated with many of the standard decisional or computational hardness assumptions. Compared with traditional zero-knowledge proofs, in our protocols the verifier starts first, by emitting a challenge, and then the prover answers the challenge.
Last updated:  2022-09-19
Improving Bounds on Elliptic Curve Hidden Number Problem for ECDH Key Exchange
Jun Xu, Santanu Sarkar, Huaxiong Wang, and Lei Hu
Elliptic Curve Hidden Number Problem (EC-HNP) was first introduced by Boneh, Halevi and Howgrave-Graham at Asiacrypt 2001. To rigorously assess the bit security of the Diffie--Hellman key exchange with elliptic curves (ECDH), the Diffie--Hellman variant of EC-HNP, regarded as an elliptic curve analogy of the Hidden Number Problem (HNP), was presented at PKC 2017. This variant can also be used for practical cryptanalysis of ECDH key exchange in the situation of side-channel attacks. In this paper, we revisit the Coppersmith method for solving the involved modular multivariate polynomials in the Diffie--Hellman variant of EC-HNP and demonstrate that, for any given positive integer $d$, a given sufficiently large prime $p$, and a fixed elliptic curve over the prime field $\mathbb{F}_p$, if there is an oracle that outputs about $\frac{1}{d+1}$ of the most (least) significant bits of the $x$-coordinate of the ECDH key, then one can give a heuristic algorithm to compute all the bits within polynomial time in $\log_2 p$. When $d>1$, the heuristic result $\frac{1}{d+1}$ significantly outperforms both the rigorous bound $\frac{5}{6}$ and heuristic bound $\frac{1}{2}$. Due to the heuristics involved in the Coppersmith method, we do not get the ECDH bit security on a fixed curve. However, we experimentally verify the effectiveness of the heuristics on NIST curves for small dimension lattices.
Last updated:  2022-09-19
Secure Quantum Bit Commitment
Ping Wang, Yiting Su, and Fangguo Zhang
Bit commitment (BC) is one of the most important fundamental protocols in secure multi-party computation. However, it is generally believed that unconditionally secure bit commitment is impossible even with quantum resources. In this paper, we design a secure non-interactive bit commitment protocol by exploiting the no-communication theorem of the quantum entangled states, whose security relies on the indistinguishability of whether the Bell states are measured or not. The proposed quantum bit commitment (QBC) is secure against classical adversaries with unlimited computing power, and the probability of a successful attack by quantum adversaries decreases exponentially as $n$ (the number of qubits in a group) increases.
Last updated:  2022-09-18
On the Worst-Case Inefficiency of CGKA
Alexander Bienstock, Yevgeniy Dodis, Sanjam Garg, Garrison Grogan, Mohammad Hajiabadi, and Paul Rösler
Continuous Group Key Agreement (CGKA) is the basis of modern Secure Group Messaging (SGM) protocols. At a high level, a CGKA protocol enables a group of users to continuously compute a shared (evolving) secret while members of the group add new members, remove other existing members, and perform state updates. The state updates allow CGKA to offer desirable security features such as forward secrecy and post-compromise security. CGKA is regarded as a practical primitive in the real world. Indeed, there is an IETF Messaging Layer Security (MLS) working group devoted to developing a standard for SGM protocols, including the CGKA protocol at their core. Though known CGKA protocols seem to perform relatively well when considering natural sequences of performed group operations, there are no formal guarantees on their efficiency, other than the $O(n)$ bound which can be achieved by trivial protocols, where $n$ is the number of group members. In this context, we ask the following questions and provide negative answers. 1. Can we have CGKA protocols that are efficient in the worst case? We start by answering this basic question in the negative. First, we show that a natural primitive that we call Compact Key Exchange (CKE) is at the core of CGKA, and thus tightly captures CGKA's worst-case communication cost. Intuitively, CKE requires that: first, $n$ users non-interactively generate key pairs and broadcast their public keys, then, some other special user securely communicates to these $n$ users a shared key. Next, we show that CKE with communication cost $o(n)$ by the special user cannot be realized in a black-box manner from public-key encryption, thus implying the same for CGKA, where $n$ is the corresponding number of group members. Surprisingly, this impossibility holds even in an offline setting, where parties have access to the sequence of group operations in advance. 2. Can we realize one CGKA protocol that works as well as possible in all cases? Here again, we present negative evidence showing that no such protocol based on black-box use of public-key encryption exists. Specifically, we show two distributions over sequences of group operations such that no CGKA protocol obtains optimal communication costs on both sequences.
Last updated:  2022-09-18
Rate-1 Non-Interactive Arguments for Batch-NP and Applications
Lalita Devadas, Rishab Goyal, Yael Kalai, and Vinod Vaikuntanathan
We present a rate-$1$ construction of a publicly verifiable non-interactive argument system for batch-$\mathsf{NP}$ (also called a BARG), under the LWE assumption. Namely, a proof corresponding to a batch of $k$ NP statements each with an $m$-bit witness, has size $m + \mathsf{poly}(\lambda,\log k)$. In contrast, prior work either relied on non-standard knowledge assumptions, or produced proofs of size $m \cdot \mathsf{poly}(\lambda,\log k)$ (Choudhuri, Jain, and Jin, STOC 2021, following Kalai, Paneth, and Yang 2019). We show how to use our rate-$1$ BARG scheme to obtain the following results, all under the LWE assumption: - A multi-hop BARG scheme for $\mathsf{NP}$. - A multi-hop aggregate signature scheme (in the standard model). - An incrementally verifiable computation (IVC) scheme for arbitrary $T$-time deterministic computations with proof size $\mathsf{poly}(\lambda,\log T)$. Prior to this work, multi-hop BARGs were only known under non-standard knowledge assumptions or in the random oracle model; aggregate signatures were only known under indistinguishability obfuscation (and RSA) or in the random oracle model; IVC schemes with proofs of size $\mathsf{poly}(\lambda,T^{\epsilon})$ were known under a bilinear map assumption, and with proofs of size $\mathsf{poly}(\lambda,\log T)$ under non-standard knowledge assumptions or in the random oracle model.
Last updated:  2022-09-18
Multi-Client Functional Encryption with Fine-Grained Access Control
Ky Nguyen, Duong Hieu Phan, and David Pointcheval
Multi-Client Functional Encryption ($\mathsf{MCFE}$) and Multi-Input Functional Encryption ($\mathsf{MIFE}$) are very interesting extensions of Functional Encryption for practical purposes. They allow the computation of joint functions over data from multiple parties. Both primitives are aimed at applications in multi-user settings where decryption can be correctly output for users with appropriate functional decryption keys only. While the definitions for a single user or multiple users are quite general and can be realized for general classes of functions as expressive as Turing machines or all circuits, efficient schemes have been proposed so far for concrete classes of functions: either only for access control, $\mathit{i.e.}$ the identity function under some conditions, or linear/quadratic functions under no condition. In this paper, we target classes of functions that explicitly combine some evaluation functions independent of the decrypting user under the condition of some access control. More precisely, we introduce a framework for $\mathsf{MCFE}$ with fine-grained access control and propose constructions for both single-client and multi-client settings, for inner-product evaluation and access control via Linear Secret Sharing Schemes ($\mathsf{LSSS}$), with selective and adaptive security. The only known work that combines functional encryption in the multi-user setting with access control was proposed by Abdalla $\mathit{et~al.}$ (Asiacrypt '20), which relies on a generic transformation from the single-client schemes to obtain $\mathsf{MIFE}$ schemes that suffer from a quadratic factor of $n$ (where $n$ denotes the number of clients) in the ciphertext size. We follow a different path, via $\mathsf{MCFE}$: we present a $\mathit{duplicate\text{-}and\text{-}compress}$ technique to transform the single-client scheme and obtain an $\mathsf{MCFE}$ with fine-grained access control scheme with only a linear factor of $n$ in the ciphertext size. Our final scheme thus outperforms Abdalla $\mathit{et~al.}$'s scheme by a factor of $n$, as one can obtain $\mathsf{MIFE}$ from $\mathsf{MCFE}$ by making all the labels in $\mathsf{MCFE}$ a fixed public constant. The concrete constructions are secure under the $\mathsf{SXDH}$ assumption, in the random oracle model for the $\mathsf{MCFE}$ scheme, but in the standard model for the $\mathsf{MIFE}$ improvement.
Last updated:  2022-09-18
Non-Interactive Zero-Knowledge Proofs to Multiple Verifiers
Kang Yang and Xiao Wang
In this paper, we study zero-knowledge (ZK) proofs for circuit satisfiability that can prove to $n$ verifiers at a time efficiently. The proofs are secure against the collusion of a prover and a subset of $t$ verifiers. We refer to such ZK proofs as multi-verifier zero-knowledge (MVZK) proofs and focus on the case that a majority of verifiers are honest (i.e., $t<n/2$). We construct efficient MVZK protocols in the random oracle model where the prover sends one message to each verifier, while the verifiers only exchange one round of messages. When the threshold of corrupted verifiers $t<n/2$, the prover sends $1/2+o(1)$ field elements per multiplication gate to every verifier; when $t<n(1/2-\epsilon)$ for some constant $0<\epsilon<1/2$, we can further reduce the communication to $O(1/n)$ field elements per multiplication gate per verifier. Our MVZK protocols demonstrate particularly high scalability: the proofs are streamable and only require a memory proportional to what is needed to evaluate the circuit in the clear.
Last updated:  2022-09-18
ROAST: Robust Asynchronous Schnorr Threshold Signatures
Tim Ruffing, Viktoria Ronge, Elliott Jin, Jonas Schneider-Bensch, and Dominique Schröder
Bitcoin and other cryptocurrencies have recently introduced support for Schnorr signatures whose cleaner algebraic structure, as compared to ECDSA, allows for simpler and more practical constructions of highly demanded "$t$-of-$n$" threshold signatures. However, existing Schnorr threshold signature schemes still fall short of the needs of real-world applications due to their assumption that the network is synchronous and due to their lack of robustness, i.e., the guarantee that $t$ honest signers are able to obtain a valid signature even in the presence of other malicious signers who try to disrupt the protocol. This hinders the adoption of threshold signatures in the cryptocurrency ecosystem, e.g., in second-layer protocols built on top of cryptocurrencies. In this work, we propose ROAST, a simple wrapper that turns a given threshold signature scheme into a scheme with a robust and asynchronous signing protocol, as long as the underlying signing protocol is semi-interactive (i.e., has one preprocessing round and one actual signing round), provides identifiable aborts, and is unforgeable under concurrent signing sessions. When applied to the state-of-the-art Schnorr threshold signature scheme FROST, which fulfills these requirements, we obtain a simple, efficient, and highly practical Schnorr threshold signature scheme.
Last updated:  2022-09-18
SCALES: MPC with Small Clients and Larger Ephemeral Servers
Anasuya Acharya, Carmit Hazay, Vladimir Kolesnikov, and Manoj Prabhakaran
The recently proposed YOSO model is a groundbreaking approach to MPC, executable on a public blockchain, circumventing adaptive player corruption by hiding the corruption targets until they are worthless. Players are selected unpredictably from a large pool to perform MPC subtasks, in which each selected player sends a single message (and reveals their identity). While YOSO MPC has attractive asymptotic complexity, unfortunately, it is concretely prohibitively expensive due to the cost of its building blocks. We propose a modification to the YOSO model that preserves resilience to adaptive server corruption, but allows for much more efficient protocols. In SCALES (Small Clients And Larger Ephemeral Servers) only the servers facilitating the MPC computation are ephemeral (unpredictably selected and ``speak once''). Input providers (clients) publish problem instances and collect the output, but do not otherwise participate in computation. SCALES offers attractive features, and improves over YOSO protocols in outsourcing MPC to a large pool of servers under adaptive corruption. We build SCALES from rerandomizable garbling schemes, which is a contribution of independent interest, with additional applications.
Last updated:  2022-09-18
Round-optimal Honest-majority MPC in Minicrypt and with Everlasting Security
Benny Applebaum, Eliran Kachlon, and Arpita Patra
We study the round complexity of secure multiparty computation (MPC) in the challenging model where full security, including guaranteed output delivery, should be achieved in the presence of an active rushing adversary who corrupts up to half of the parties. It is known that 2 rounds are insufficient in this model (Gennaro et al., Crypto 2002), and that 3-round protocols can achieve computational security under public-key assumptions (Gordon et al., Crypto 2015; Ananth et al., Crypto 2018; and Badrinarayanan et al., Asiacrypt 2020). However, despite much effort, it is unknown whether public-key assumptions are inherently needed for such protocols, and whether one can achieve similar results with security against computationally-unbounded adversaries. In this paper, we use Minicrypt-type assumptions to realize 3-round MPC with full and active security. Our protocols come in two flavors: for a small (logarithmic) number of parties $n$, we achieve an optimal resiliency threshold of $t\leq \lfloor (n-1)/2\rfloor$, and for a large (polynomial) number of parties we achieve an almost-optimal resiliency threshold of $t\leq 0.5n(1-\epsilon)$ for an arbitrarily small constant $\epsilon > 0$. Both protocols can be based on sub-exponentially hard injective one-way functions in the plain model. If the parties have access to a collision-resistant hash function, we can derive statistical everlasting security for every NC1 functionality, i.e., the protocol is secure against adversaries that are computationally bounded during the execution of the protocol and become computationally unlimited after the protocol execution. As a secondary contribution, we show that in the strong honest-majority setting ($t<n/3$), every NC1 functionality can be computed in 3 rounds with everlasting security and complexity polynomial in $n$ based on one-way functions. Previously, such a result was only known based on collision-resistant hash functions.
Last updated:  2022-09-18
General Properties of Quantum Bit Commitments
Jun Yan
While unconditionally-secure quantum bit commitment (allowing both quantum computation and communication) is impossible, researchers have turned to studying complexity-based ones. A complexity-based canonical (non-interactive) quantum bit commitment scheme is one in which the commitment consists of just a single (quantum) message from the sender to the receiver that can be opened later by uncomputing the commit stage. In this work, we study general properties of complexity-based quantum bit commitments through the lens of canonical quantum bit commitments. Among other results, we in particular obtain the following two: 1. Any complexity-based quantum bit commitment scheme can be converted into the canonical (non-interactive) form (with its sum-binding property preserved). 2. Two flavors of canonical quantum bit commitments are equivalent; that is, canonical computationally-hiding statistically-binding quantum bit commitment exists if and only if the canonical statistically-hiding computationally-binding one exists. Combined with the first result, this immediately implies (unconditionally) that complexity-based quantum bit commitment is symmetric. Canonical quantum bit commitments can be based on quantum-secure one-way functions or pseudorandom quantum states. But in our opinion, the formulation of canonical quantum bit commitment is so clean and simple that it can itself be viewed as a plausible complexity assumption as well. We propose to explore canonical quantum bit commitment from the perspectives of both quantum cryptography and quantum complexity theory in the future.
Last updated:  2022-09-17
Large-Precision Homomorphic Sign Evaluation using FHEW/TFHE Bootstrapping
Zeyu Liu, Daniele Micciancio, and Yuriy Polyakov
A comparison of two encrypted numbers is an important operation needed in many machine learning applications, for example, decision tree or neural network inference/training. An efficient instantiation of this operation in the context of fully homomorphic encryption (FHE) can be challenging, especially when a relatively high precision is sought. The conventional FHE way of evaluating the comparison operation, which is based on the sign function evaluation using FHEW/TFHE bootstrapping (often referred to in the literature as programmable bootstrapping), can only support very small precision (practically limited to 4-5 bits or so). For higher precision, the runtime complexity scales linearly with the ciphertext (plaintext) modulus (i.e., exponentially with the modulus bit size). We propose sign function evaluation algorithms that scale logarithmically with the ciphertext (plaintext) modulus, enabling the support of large-precision comparison in practice. Our sign evaluation algorithms are based on an iterative use of homomorphic floor function algorithms, which are also derived in our work. Further, we generalize our procedures for floor function evaluation to arbitrary function evaluation, which can be used to support both small plaintext moduli (directly) and larger plaintext moduli (by using a homomorphic digit decomposition algorithm, also suggested in our work). We implement all these algorithms using the PALISADE lattice cryptography library, introducing several implementation-specific optimizations along the way, and discuss our experimental results.
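The cleartext sketch below (an analogue only; the paper's algorithms operate on encrypted data via FHEW/TFHE bootstrapping) shows why iterating a small-precision floor/digit step yields a cost logarithmic in the modulus: for Q a power of the per-step precision B, the sign is recovered after roughly log_B(Q) digit-clearing steps.

    # Cleartext analogue of sign evaluation by iterated "floor" steps: clear the
    # least-significant base-B digit, scale down, and repeat until one small word
    # remains, then read off the sign. For Q a power of B this takes log_B(Q) - 1
    # steps, i.e. logarithmically many in the modulus.
    def sign_by_digits(x, Q, B):
        v = x % Q                      # centered value encoded in [0, Q)
        steps = 0
        while Q > B:
            v = (v - (v % B)) // B     # one small-precision "floor" step
            Q //= B
            steps += 1
        return (1 if v < Q // 2 else -1), steps

    print(sign_by_digits(12345, 2 ** 32, 2 ** 4))    # -> (1, 7)
    print(sign_by_digits(-12345, 2 ** 32, 2 ** 4))   # -> (-1, 7)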
Last updated:  2022-09-17
QCCA-Secure Generic Transformations in the Quantum Random Oracle Model
Tianshu Shan, Jiangxia Ge, and Rui Xue
The post-quantum security of cryptographic systems assumes that the quantum adversary only receives the classical result of computations with the secret key. Furthermore, if the adversary is able to obtain a superposition state of the result, it is unknown whether the post-quantum secure schemes still remain secure. In this paper, we formalize one class of public-key encryption schemes, named oracle-masked schemes, relative to random oracles. For each oracle-masked scheme, we design a preimage extraction procedure and prove that it simulates the quantum decryption oracle with a certain loss. We also observe that the implementation of the preimage extraction procedure for some oracle-masked schemes does not need to take the secret key as input. This contributes to the IND-qCCA security proof of these schemes in the quantum random oracle model (QROM). As an application, we prove the IND-qCCA security of schemes obtained by the Fujisaki-Okamoto (FO) transformation and REACT transformation in the QROM, respectively. Notably, our security reduction for FO transformation is tighter than the reduction given by Zhandry (Crypto 2019).
Last updated:  2022-09-17
Towards Tight Security Bounds for OMAC, XCBC and TMAC
Soumya Chattopadhyay, Ashwin Jha, and Mridul Nandi
OMAC --- a single-keyed variant of CBC-MAC by Iwata and Kurosawa --- is a widely used and standardized (NIST FIPS 800-38B, ISO/IEC 29167-10:2017) message authentication code (MAC) algorithm. The best security bound for OMAC is due to Nandi who proved that OMAC's pseudorandom function (PRF) advantage is upper bounded by $ O(q^2\ell/2^n) $, where $ n $, $ q $, and $ \ell $, denote the block size of the underlying block cipher, the number of queries, and the maximum permissible query length (in terms of $ n $-bit blocks), respectively. In contrast, there is no attack with matching lower bound. Indeed, the best known attack on OMAC is the folklore birthday attack achieving a lower bound of $ \Omega(q^2/2^n) $. In this work, we close this gap for a large range of message lengths. Specifically, we show that OMAC's PRF security is upper bounded by $ O(q^2/2^n + q\ell^2/2^n)$. In practical terms, this means that for a $ 128 $-bit block cipher, and message lengths up to $ 64 $ Gigabyte, OMAC can process up to $ 2^{64} $ messages before rekeying (same as the birthday bound). In comparison, the previous bound only allows $ 2^{48} $ messages. As a side-effect of our proof technique, we also derive similar tight security bounds for XCBC (by Black and Rogaway) and TMAC (by Kurosawa and Iwata). As a direct consequence of this work, we have established tight security bounds (in a wide range of $\ell$) for all the CBC-MAC variants, except for the original CBC-MAC.
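The concrete numbers in the abstract can be reproduced with a short back-of-the-envelope computation (illustrative arithmetic only, ignoring constants hidden in the O-notation):

    # n = 128-bit blocks, messages of 64 GB = 2^32 blocks of 16 bytes.
    n = 128
    ell = (64 * 2 ** 30) // 16                     # 2^32 blocks per message

    # old bound O(q^2 * ell / 2^n) becomes constant at q ~ 2^48
    q_old = int((2 ** n / ell) ** 0.5)
    # new bound O(q^2 / 2^n + q * ell^2 / 2^n) becomes constant at q ~ 2^64
    q_new = min(int(2 ** (n / 2)), 2 ** n // ell ** 2)

    print(q_old.bit_length() - 1, q_new.bit_length() - 1)   # 48 64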
Last updated:  2022-09-16
Secure Sampling with Sublinear Communication
Seung Geol Choi, Dana Dachman-Soled, S. Dov Gordon, Linsheng Liu, and Arkady Yerukhimovich
Random sampling from specified distributions is an important tool with wide applications for analysis of large-scale data. In this paper we study how to randomly sample when the distribution is partitioned among two parties' private inputs. Of course, a trivial solution is to have one party send a (possibly encrypted) description of its weights to the other party who can then sample over the entire distribution (possibly using homomorphic encryption). However, this approach requires communication that is linear in the input size, which is prohibitively expensive in many settings. In this paper, we investigate secure 2-party sampling with sublinear communication for many standard distributions. We develop protocols for $L_1$ and $L_2$ sampling. Additionally, we investigate the feasibility of sublinear product sampling, showing impossibility for the general problem and giving a protocol for a restricted case of the problem. We additionally show how such product sampling can be used to instantiate a sublinear communication 2-party exponential mechanism for differentially-private data release.
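To pin down the functionality being computed (under one natural reading of a partitioned distribution, where the joint weight of index i is the sum of the two parties' weights), here is the ideal L1-sampling functionality written in the clear; the secure protocols of the paper realize this without revealing the weight vectors and with sublinear communication. The helper name and the additive combination are assumptions for illustration.

    # Ideal (insecure, in-the-clear) L1 sampling over a distribution partitioned
    # between two parties: index i is drawn with probability proportional to
    # w_A[i] + w_B[i]. This only illustrates the target functionality.
    import random

    def l1_sample_in_the_clear(w_A, w_B):
        joint = [a + b for a, b in zip(w_A, w_B)]
        return random.choices(range(len(joint)), weights=joint, k=1)[0]

    print(l1_sample_in_the_clear([3, 0, 1], [1, 2, 1]))   # index 0 with prob. 4/8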
Last updated:  2022-09-16
A Security Model for Randomization-based Protected Caches
Jordi Ribes-González, Oriol Farràs, Carles Hernández, Vatistas Kostalabros, and Miquel Moretó
Cache side-channel attacks allow adversaries to learn sensitive information about co-running processes by using only access latency measures and cache contention. This vulnerability has been shown to lead to several microarchitectural attacks. As a promising solution, recent work proposes Randomization-based Protected Caches (RPCs). RPCs randomize cache addresses, changing keys periodically so as to avoid long-term leakage. Unfortunately, recent attacks have called the security of state-of-the-art RPCs into question. In this work, we tackle the problem of formally defining and analyzing the security properties of RPCs. We first give security definitions against access-based cache side-channel attacks that capture security against known attacks such as Prime+Probe and Evict+Probe. Then, using these definitions, we obtain results that allow to guarantee security by adequately choosing the rekeying period, the key generation algorithm and the cache randomizer, thus providing security proofs for RPCs under certain assumptions.
Last updated:  2022-09-16
The Abe-Okamoto Partially Blind Signature Scheme Revisited
Julia Kastner, Julian Loss, and Jiayu Xu
Partially blind signatures, an extension of ordinary blind signatures, are a primitive with wide applications in e-cash and electronic voting. One of the most efficient schemes to date is the one by Abe and Okamoto (CRYPTO 2000), whose underlying idea - the OR-proof technique - has served as the basis for several works. We point out several subtle flaws in the original proof of security, and provide a new detailed and rigorous proof, achieving similar bounds as the original work. We believe our insights on the proof strategy will prove useful in the security analyses of other OR-proof-based schemes.
Last updated:  2022-09-16
Continuously Non-Malleable Codes against Bounded-Depth Tampering
Gianluca Brian, Sebastian Faust, Elena Micheli, and Daniele Venturi
Non-malleable codes (Dziembowski, Pietrzak and Wichs, ICS 2010 & JACM 2018) allow protecting arbitrary cryptographic primitives against related-key attacks (RKAs). Even when using codes that are guaranteed to be non-malleable against a single tampering attempt, one obtains RKA security against poly-many tampering attacks at the price of assuming perfect memory erasures. In contrast, continuously non-malleable codes (Faust, Mukherjee, Nielsen and Venturi, TCC 2014) do not suffer from this limitation, as the non-malleability guarantee holds against poly-many tampering attempts. Unfortunately, there are only a handful of constructions of continuously non-malleable codes, while standard non-malleable codes are known for a large variety of tampering families including, e.g., NC0 and decision-tree tampering, AC0, and recently even bounded polynomial-depth tampering. We change this state of affairs by providing the first constructions of continuously non-malleable codes in the following natural settings: - Against decision-tree tampering, where, in each tampering attempt, every bit of the tampered codeword can be set arbitrarily after adaptively reading up to $d$ locations within the input codeword. Our scheme is in the plain model, can be instantiated assuming the existence of one-way functions, and tolerates tampering by decision trees of depth $d = O(n^{1/8})$, where $n$ is the length of the codeword. Notably, this class includes NC0. - Against bounded polynomial-depth tampering, where in each tampering attempt the adversary can select any tampering function that can be computed by a circuit of bounded polynomial depth (and unbounded polynomial size). Our scheme is in the common reference string model, and can be instantiated assuming the existence of time-lock puzzles and simulation-extractable (succinct) non-interactive zero-knowledge proofs.
Last updated:  2022-09-16
Group Action Key Encapsulation and Non-Interactive Key Exchange in the QROM
Julien Duman, Dominik Hartmann, Eike Kiltz, Sabrina Kunzweiler, Jonas Lehmann, and Doreen Riepel
In the context of quantum-resistant cryptography, cryptographic group actions offer an abstraction of isogeny-based cryptography in the Commutative Supersingular Isogeny Diffie-Hellman (CSIDH) setting. In this work, we revisit the security of two previously proposed natural protocols: the Group Action Hashed ElGamal key encapsulation mechanism (GA-HEG KEM) and the Group Action Hashed Diffie-Hellman non-interactive key-exchange (GA-HDH NIKE) protocol. The latter protocol has already been considered to be used in practical protocols such as Post-Quantum WireGuard (S&P '21) and OPTLS (CCS '20). We prove that active security of the two protocols in the Quantum Random Oracle Model (QROM) inherently relies on very strong variants of the Group Action Strong CDH problem, where the adversary is given arbitrary quantum access to a DDH oracle. That is, quantum accessible Strong CDH assumptions are not only sufficient but also necessary to prove active security of the GA-HEG KEM and the GA-HDH NIKE protocols. Furthermore, we propose variants of the protocols with QROM security from the classical Strong CDH assumption, i.e., CDH with classical access to the DDH oracle. Our first variant uses key confirmation and can therefore only be applied in the KEM setting. Our second but considerably less efficient variant is based on the twinning technique by Cash et al. (EUROCRYPT '08) and in particular yields the first actively secure isogeny-based NIKE with QROM security from the standard CDH assumption.
Last updated:  2022-09-16
Guarding the First Order: The Rise of AES Maskings
Amund Askeland, Siemen Dhooghe, Svetla Nikova, Vincent Rijmen, and Zhenda Zhang
We provide three first-order hardware maskings of the AES, each allowing for a different trade-off between the number of shares and the number of register stages. All maskings use a generalization of the changing of the guards method enabling the re-use of randomness between masked S-boxes. As a result, the maskings do not require fresh randomness while still allowing for a minimal number of shares and providing provable security in the glitch-extended probing model. The low-area variant has five cycles of latency and a serialized area cost of $8.13~kGE$. The low-latency variant reduces the latency to three cycles while increasing the serialized area by $67.89\%$ compared to the low-area variant. The maskings of the AES encryption are implemented on FPGA and evaluated with Test Vector Leakage Assessment (TVLA).
Last updated:  2022-09-16
Key-Policy ABE with Switchable Attributes
Cécile Delerablée, Lénaïck Gouriou, and David Pointcheval
This paper revisits Key-Policy Attribute-Based Encryption (KP-ABE), allowing delegation of keys, traceability of compromised keys, and key anonymity as additional properties. Whereas delegation of rights was addressed in the seminal paper by Goyal et al. in 2006, which introduced KP-ABE, this feature has been almost entirely neglected in subsequent works in favor of better security levels. However, in multi-device scenarios, it is quite important to allow users to independently authorize their own devices, and thus to delegate their initial rights, possibly with more restrictions, to their everyday-use devices. One may then also require tracing capabilities in case of corrupted devices, and anonymity for the users and their devices. To this aim, we define a new variant of KP-ABE that includes delegation, with switchable attributes in both the ciphertexts and the keys, together with new indistinguishability properties. We then provide a concrete and efficient instantiation with adaptive security under the sole SXDH assumption in the standard model. We finally explain how this new primitive addresses all of our initial goals.
Last updated:  2022-09-16
Cumulatively All-Lossy-But-One Trapdoor Functions from Standard Assumptions
Benoît Libert, Ky Nguyen, and Alain Passelègue
Chakraborty, Prabhakaran, and Wichs (PKC'20) recently introduced a new tag-based variant of lossy trapdoor functions, termed cumulatively all-lossy-but-one trapdoor functions (CALBO-TDFs). Informally, CALBO-TDFs allow defining a public tag-based function with a (computationally hidden) special tag, such that the function is lossy for all tags except when the special secret tag is used. In the latter case, the function becomes injective and efficiently invertible using a secret trapdoor. This notion has been used to obtain advanced constructions of signatures with strong guarantees against leakage and tampering, and also by Dodis, Vaikuntanathan, and Wichs (EUROCRYPT'20) to obtain constructions of randomness extractors with extractor-dependent sources. While these applications are motivated by practical considerations, the only known instantiation of CALBO-TDFs so far relies on the existence of indistinguishability obfuscation. In this paper, we propose the first two instantiations of CALBO-TDFs based on standard assumptions. Our constructions are based on the LWE assumption with a sub-exponential approximation factor and on the DCR assumption, respectively, and circumvent the use of indistinguishability obfuscation by relying on lossy modes and trapdoor mechanisms enabled by these assumptions.
Last updated:  2022-09-16
SCARF: A Low-Latency Block Cipher for Secure Cache-Randomization
Federico Canale, Tim Güneysu, Gregor Leander, Jan Thoma, Yosuke Todo, and Rei Ueno
Randomized cache architectures have proven to significantly increase the complexity of contention-based cache side channel attacks and therefore present an important building block for side-channel secure microarchitectures. By randomizing the address-to-cache-index mapping, attackers can no longer trivially construct minimal eviction sets, which are fundamental for contention-based cache attacks. At the same time, randomized caches maintain the flexibility of traditional caches, making them broadly applicable across various CPU types. This is a major advantage over cache partitioning approaches. A large variety of randomized cache architectures has been proposed. However, the actual randomization function has received little attention and is often neglected in these proposals. Since the randomization operates directly on the critical path of the cache lookup, the function needs to have extremely low latency. At the same time, attackers must not be able to bypass the randomization, which would nullify the security benefit of the randomized mapping. In this paper we propose SCARF (Secure CAche Randomization Function), the first dedicated cache randomization cipher which achieves low latency and is cryptographically secure in the cache attacker model. The design methodology for this dedicated cache cipher enters new territory in the field of block ciphers with a small 10-bit block length and heavy key-dependency in few rounds.
Last updated:  2022-09-16
How to Sample a Discrete Gaussian (and more) from a Random Oracle
George Lu and Brent Waters
The random oracle methodology is central to the design of many practical cryptosystems. A common challenge faced in several systems is the need to have a random oracle that outputs from a structured distribution $\mathcal{D}$, even though most heuristic implementations such as SHA-3 are best suited for outputting bitstrings. Our work explores the problem of sampling from discrete Gaussian (and related) distributions in a manner that allows them to be programmed into random oracles. We make the following contributions: -We provide a definitional framework for our results. We say that a sampling algorithm $\mathsf{Sample}$ for a distribution is explainable if there exists an algorithm $\mathsf{Explain}$ where, for an $x$ in the domain, we have that $\mathsf{Explain}(x) \rightarrow r \in \{0,1\}^n$ such that $\mathsf{Sample}(r)=x$. Moreover, if $x$ is sampled from $\mathcal{D}$, the explained distribution is statistically close to choosing $r$ uniformly at random. We consider a variant of this definition that allows the statistical closeness to be a "precision parameter" given to the $\mathsf{Explain}$ algorithm. We show that sampling algorithms which satisfy our explainability property can be programmed as a random oracle. -We provide a simple algorithm for explaining \emph{any} sampling algorithm that works over distributions with polynomial-sized ranges. This includes discrete Gaussians with small standard deviations. -We show how to transform a (not necessarily explainable) sampling algorithm $\mathsf{Sample}$ for a distribution into a new $\mathsf{Sample}'$ that is explainable. The requirements for doing this are that (1) the probability density function is efficiently computable and (2) it is possible to efficiently uniformly sample from all elements that have a probability density above a given threshold $p$. This shows the equivalence of random oracles to these distributions and random oracles to uniform bitstrings, and it covers a large class of distributions, including all discrete Gaussians. -A potential drawback of the previous approach is that the transformation requires an additional computation of the density function. We provide a more customized approach that shows the Micciancio-Walter discrete Gaussian sampler is explainable as is. This suggests that other discrete Gaussian samplers in a similar vein might also be explainable as is.
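As a toy illustration of the Sample/Explain syntax above (one natural way to realize it for a distribution with a small range, not necessarily the paper's algorithm), the Python sketch below samples a tiny centered distribution by inverse-CDF on n-bit coins; Explain(x) re-draws coins uniformly from the preimage of x, so Sample(Explain(x)) = x and explained coins are distributed like fresh ones. The weights are illustrative placeholders.

import math
import secrets
from fractions import Fraction

N_BITS = 64
WEIGHTS = {-2: 1, -1: 4, 0: 6, 1: 4, 2: 1}   # toy unnormalized pmf (placeholder, not a real Gaussian)
TOTAL = sum(WEIGHTS.values())

# Cumulative intervals [lo, hi) of the CDF, one per outcome.
INTERVALS, _lo = {}, Fraction(0)
for _x in sorted(WEIGHTS):
    _hi = _lo + Fraction(WEIGHTS[_x], TOTAL)
    INTERVALS[_x] = (_lo, _hi)
    _lo = _hi

def sample(r):
    # Interpret the n-bit coins r as u = r / 2^n and invert the CDF.
    u = Fraction(r, 1 << N_BITS)
    for x, (lo, hi) in INTERVALS.items():
        if lo <= u < hi:
            return x

def explain(x):
    # Output fresh coins r, uniform over all integers that sample() maps to x.
    lo, hi = INTERVALS[x]
    a = math.ceil(lo * (1 << N_BITS))
    b = math.ceil(hi * (1 << N_BITS))
    return a + secrets.randbelow(b - a)

r = secrets.randbits(N_BITS)
x = sample(r)
assert sample(explain(x)) == x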
Last updated:  2022-09-16
To Be, or Not to Be Stateful: Post-Quantum Secure Boot using Hash-Based Signatures
Alexander Wagner, Felix Oberhansl, and Marc Schink
While research in post-quantum cryptography (PQC) has gained significant momentum, it is only slowly adopted for real-world products. This is largely due to concerns about practicability and maturity. The secure boot process of embedded devices is one scenario where such restraints can result in fundamental security problems. In this work, we present a flexible hardware/software co-design for hash-based signature (HBS) schemes which enables the move to a post-quantum secure boot today. These signature schemes stand out due to their straightforward security proofs and are on the fast track to standardisation. In contrast to previous works, we exploit the performance-intensive similarities of the stateful LMS and XMSS schemes as well as the stateless SPHINCS+ scheme. Thus, we enable designers to use a stateful or stateless scheme depending on the constraints of each individual application. To show the feasibility of our approach, we compare our results with hardware-accelerated implementations of classical asymmetric algorithms. Further, we lay out the usage of different HBS schemes during the boot process. We compare different schemes, show the importance of parameter choices, and demonstrate the performance gain with different levels of hardware acceleration.
Last updated:  2022-09-16
Security of Truncated Permutation Without Initial Value
Lorenzo Grassi and Bart Mennink
Indifferentiability is a powerful notion in cryptography. If a construction is proven to be indifferentiable from an ideal object, it can under certain assumptions instantiate that ideal object in higher-level constructions. Indifferentiability is a particularly useful model for cryptographic hash functions, and myriad results are known proving that a hash function behaves like a random oracle under the assumption that the underlying primitive (typically a compression function, a block cipher, or a permutation) is random. Recently, advances have been made in proving indifferentiability of one-way functions with fixed input length. One such example is truncation of a permutation. If one evaluates a random permutation on an input value concatenated with a fixed initial value, and truncates the output, one obtains a construction that is indifferentiable from a random function up to a certain bound (Dodis et al., FSE 2009; Choi et al., ASIACRYPT 2019). Security of this construction, however, is in part determined by the length of the initial value; omission of this fixed value yields an insecure construction. In this paper, we reconsider truncation of a permutation, and prove that the construction is indifferentiable from a random oracle even if this fixed initial value is replaced by a randomized value. This randomized value may be the same for different evaluations of the construction, or freshly generated, at the discretion of the adversary. The security level is the same as that of truncation with a fixed initial value, up to collisions in the randomized value. We show that our construction has immediate implications in the context of parallel variable-length digest generation. In detail, we describe Cascade-MGF, which operates on top of any cryptographic hash function and uses the hash function output as a randomized initial value in truncation. We demonstrate that Cascade-MGF compares favorably over earlier parallel variable-length digest generation constructions, namely Counter-MGF and Chained-MGF, in almost all settings.
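For readers who want the construction itself in code form, the following sketch builds F(iv, x) = trunc(P(iv || x)) and lets the initial value be fixed or randomized, as in the abstract. The paper's analysis concerns an ideal permutation; the 4-round SHA-256 Feistel used here is only a hypothetical placeholder permutation so the sketch runs, and the block width and truncation length are illustrative choices, not parameters from the paper.

import hashlib
import os

BLOCK = 64            # permutation width in bytes (512 bits), illustrative
HALF = BLOCK // 2

def _round(i, half):
    return hashlib.sha256(bytes([i]) + half).digest()

def P(block):
    """Toy placeholder permutation: a 4-round Feistel network over 512-bit blocks."""
    L, R = block[:HALF], block[HALF:]
    for i in range(4):
        L, R = R, bytes(a ^ b for a, b in zip(L, _round(i, R)))
    return L + R

def truncated(iv, x, outlen=32):
    """F(iv, x) = truncation of P(iv || x); iv may be fixed or randomized."""
    assert len(iv) + len(x) == BLOCK
    return P(iv + x)[:outlen]

iv_fixed = bytes(HALF)                  # fixed-IV variant (Dodis et al., FSE 2009)
iv_rand = os.urandom(HALF)              # randomized-IV variant studied in this paper
d1 = truncated(iv_fixed, os.urandom(HALF))
d2 = truncated(iv_rand, os.urandom(HALF))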
Last updated:  2022-09-16
Algebraic Relation of Three MinRank Algebraic Modelings
Hao Guo and Jintai Ding
We give algebraic relations among the equations of three algebraic modelings for the MinRank problem: support minors modeling, Kipnis–Shamir modeling, and minors modeling.
Last updated:  2022-09-16
Actively Secure OT Extension with Optimal Overhead
Marcel Keller, Emmanuela Orsini, and Peter Scholl
We describe an actively secure OT extension protocol in the random oracle model with efficiency very close to the passively secure IKNP protocol of Ishai et al. (Crypto 2003). For computational security parameter $\kappa$, our protocol requires $\kappa$ base OTs, and is the first practical, actively secure protocol to match the cost of the passive IKNP extension in this regard. The added communication cost is only additive in $O(\kappa)$, independent of the number of OTs being created, while the computation cost is essentially two finite field operations per extended OT. We present implementation results that show our protocol takes no more than 5% more time than the passively secure IKNP extension, in both LAN and WAN environments, and thus is essentially optimal with respect to the passive protocol. *Update, 2022:* Roy (Crypto 2022) showed that Lemma 1, which the core of our proof relies on, is incorrect, so our protocol does not currently have a security proof. Roy also presented a protocol with an alternative consistency check and complete security proof, which also fixes issues with instantiating the hash function raised earlier by Guo et al. (IEEE S&P 2020) and Masny and Rindal (ACM CCS 2019). In Section 4, we show how to fix our protocol using the techniques by Roy.
Last updated:  2022-09-16
A Fast Large-Integer Extended GCD Algorithm and Hardware Design for Verifiable Delay Functions and Modular Inversion
Kavya Sreedhar, Mark Horowitz, and Christopher Torng
The extended GCD (XGCD) calculation, which computes Bézout coefficients $b_a, b_b$ such that $b_a \cdot a_0 + b_b \cdot b_0 = \gcd(a_0, b_0)$, is a critical operation in many cryptographic applications. In particular, large-integer XGCD is computationally dominant for two applications of increasing interest: verifiable delay functions that square binary quadratic forms within a class group and constant-time modular inversion for elliptic curve cryptography. Most prior work has focused on fast software implementations. The few works investigating hardware acceleration build on variants of Euclid’s division-based algorithm, following the approach used in optimized software. We show that adopting variants of Stein’s subtraction-based algorithm instead leads to significantly faster hardware. We quantify this advantage by performing a large-integer XGCD accelerator design space exploration comparing Euclid- and Stein-based algorithms for various application requirements. This exploration leads us to an XGCD hardware accelerator that is flexible and efficient, supports fast average and constant-time evaluation, and is easily extensible for polynomial GCD. Our 16nm ASIC design calculates 1024-bit XGCD in 294ns (8× faster than the state-of-the-art ASIC) and constant-time 255-bit XGCD for inverses in the field of integers modulo the prime $2^{255}-19$ in 85ns (31× faster than state-of-the-art software). We believe our design is the first high-performance ASIC for the XGCD computation that is also capable of constant-time evaluation. Our work is publicly available at https://github.com/kavyasreedhar/sreedhar-xgcd-hardware-ches2022.
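For reference, a plain-Python rendering of the classical extended binary (Stein-type) XGCD that the hardware design builds on is sketched below (in the spirit of the textbook algorithm, e.g. HAC Algorithm 14.61). It returns $(g, b_a, b_b)$ with $b_a \cdot a_0 + b_b \cdot b_0 = g = \gcd(a_0, b_0)$; this is a functional sketch of the shift/subtract structure only, not a model of the paper's constant-time hardware datapath.

def xgcd_binary(a0, b0):
    """Extended binary (Stein-type) GCD for positive integers.

    Returns (g, ba, bb) with ba*a0 + bb*b0 == g == gcd(a0, b0).
    """
    assert a0 > 0 and b0 > 0
    # Factor out common powers of two first.
    x, y, shift = a0, b0, 0
    while x % 2 == 0 and y % 2 == 0:
        x, y, shift = x // 2, y // 2, shift + 1
    u, v = x, y
    A, B, C, D = 1, 0, 0, 1          # invariants: A*x + B*y == u and C*x + D*y == v
    while True:
        while u % 2 == 0:            # halve u while keeping its invariant
            u //= 2
            if A % 2 == 0 and B % 2 == 0:
                A, B = A // 2, B // 2
            else:
                A, B = (A + y) // 2, (B - x) // 2
        while v % 2 == 0:            # halve v while keeping its invariant
            v //= 2
            if C % 2 == 0 and D % 2 == 0:
                C, D = C // 2, D // 2
            else:
                C, D = (C + y) // 2, (D - x) // 2
        if u >= v:                   # subtract the smaller from the larger
            u, A, B = u - v, A - C, B - D
        else:
            v, C, D = v - u, C - A, D - B
        if u == 0:
            return v << shift, C, D

g, ba, bb = xgcd_binary(240, 46)
assert ba * 240 + bb * 46 == g == 2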
Last updated:  2022-09-15
Ad Hoc (Decentralized) Broadcast, Trace, and Revoke
Ji Luo
Traitor tracing schemes [Chor–Fiat–Naor, Crypto ’94] help content distributors fight against piracy and are defined with the content distributor as a trusted authority having access to the secret keys of all users. While the traditional model caters well to its original motivation, its centralized nature makes it unsuitable for many scenarios. For usage among mutually untrusted parties, a notion of *ad hoc* traitor tracing (naturally with the capability of broadcast and revocation) is proposed and studied in this work. Such a scheme allows users in the system to generate their own public/secret key pairs, without trusting any other entity. To encrypt, a list of public keys is used to identify the set of recipients, and decryption is possible with a secret key for any of the public keys in the list. In addition, there is a tracing algorithm that given a list of recipients’ public keys and a pirate decoder capable of decrypting ciphertexts encrypted to them, identifies at least one recipient whose secret key must have been used to construct the said decoder. Two constructions are presented. The first is based on obfuscation and has constant-size ciphertext, yet its decryption time is linear in the number of recipients. The second is a generic transformation that reduces decryption time at the cost of increased ciphertext size. A lower bound on the trade-off between ciphertext size and decryption time is shown, indicating that the two constructions achieve all possible optimal trade-offs, i.e., fully demonstrate the Pareto front of efficiency. The lower bound also applies to general attribute-based encryption and may be of independent interest.
Last updated:  2022-09-15
Impossibilities in Succinct Arguments: Black-box Extraction and More
Matteo Campanelli, Chaya Ganesh, Hamidreza Khoshakhlagh, and Janno Siim
The celebrated result by Gentry and Wichs established a theoretical barrier for succinct non-interactive arguments (SNARGs), showing that for (expressive enough) hard-on-average languages we must assume non-falsifiable assumptions. We further investigate those barriers by showing new negative and positive results related to extractability and to the preprocessing model. 1. We first ask the question “are there further barriers to SNARGs that are knowledge-sound (SNARKs) and with a black-box extractor?”. We show it is impossible to have such SNARKs in the standard model. This separates SNARKs in the random oracle model (which can have black-box extraction) and those in the standard model. 2. We find positive results regarding the same question in the non-adaptive setting. Under the existence of SNARGs (without extractability) and from standard assumptions, it is possible to build SNARKs with black-box extractability for a non-trivial subset of NP. 3. On the other hand, we show that (under some mild assumptions) all NP languages cannot have SNARKs with black-box extractability even in the non-adaptive setting. 4. The Gentry-Wichs result does not account for the preprocessing model, under which fall several efficient constructions. We show that also in the preprocessing model it is impossible to construct SNARGs that rely on falsifiable assumptions in a black-box way. Along the way, we identify a class of non-trivial languages, which we dub “trapdoor languages”, that bypass some of these impossibility results.
Last updated:  2022-09-15
Homomorphic Encryption on GPU
Ali Şah Özcan, Can Ayduman, Enes Recep Türkoğlu, and Erkay Savaş
Homomorphic encryption (HE) is a cryptosystem that allows secure processing of encrypted data. One of the most popular HE schemes is the Brakerski-Fan-Vercauteren (BFV) scheme, which supports somewhat (SWHE) and fully homomorphic encryption (FHE). Since the computationally intensive arithmetic operations of HE schemes are amenable to concurrent computation, GPU devices can be instrumental in facilitating the practical use of HE in real-world applications thanks to their superior parallel processing capacity. This paper presents an optimized and highly parallelized GPU library to accelerate the BFV scheme. This library includes state-of-the-art implementations of the Number Theoretic Transform (NTT) and inverse NTT that minimize GPU kernel function calls. It makes efficient use of the GPU memory hierarchy and computes 128 NTT operations for a ring dimension of $2^{14}$ in only $176.1~\mu s$ on an RTX~3060Ti GPU. To the best of our knowledge, this is the fastest implementation in the literature. The library also improves the performance of the homomorphic operations of the BFV scheme. Although the library can be used independently, it is also fully integrated with the Microsoft SEAL library, a well-known HE library that also implements the BFV scheme. For one ciphertext multiplication, with a ring dimension of $2^{14}$ and a modulus bit size of $438$, our GPU implementation offers a $\mathbf{63.4}$ times speedup over the SEAL library running on a high-end CPU. The library compares favorably with other state-of-the-art GPU implementations of the NTT and the BFV operations. Finally, we implement a privacy-preserving application that classifies encrypted genome data for tumor types and achieve speedups of $42.98$ and $5.7$ over a CPU implementation using single and 16 threads, respectively. Our results indicate that GPU implementations can facilitate the deployment of homomorphic cryptographic libraries in real-world privacy-preserving applications.
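As background on the transform the library accelerates, here is a textbook radix-2 NTT and inverse NTT over $\mathbb{Z}_q$ in plain Python; the toy parameters ($q = 257$, $n = 8$) are chosen only so that a primitive $n$-th root of unity exists and bear no relation to the paper's GPU setting or its ring dimension of $2^{14}$.

def ntt(a, q, root):
    """Recursive radix-2 NTT: A[k] = sum_j a[j] * root^(j*k) mod q."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], q, root * root % q)
    odd = ntt(a[1::2], q, root * root % q)
    out, w = [0] * n, 1
    for k in range(n // 2):
        t = w * odd[k] % q
        out[k] = (even[k] + t) % q
        out[k + n // 2] = (even[k] - t) % q
        w = w * root % q
    return out

def intt(A, q, root):
    """Inverse NTT: run the forward transform with root^-1 and scale by n^-1."""
    n = len(A)
    inv_n = pow(n, q - 2, q)
    return [x * inv_n % q for x in ntt(A, q, pow(root, q - 2, q))]

q, n = 257, 8                       # toy parameters: 257 is prime and 257 = 1 (mod 8)
root = pow(3, (q - 1) // n, q)      # 3 generates Z_257^*, so this element has order n
a = [1, 2, 3, 4, 0, 0, 0, 0]
assert intt(ntt(a, q, root), q, root) == a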
Last updated:  2022-09-15
New Key-Recovery Attack on Reduced-Round AES
Navid Ghaedi Bardeh and Vincent Rijmen
A new fundamental 4-round property of AES, called the zero-difference property, was introduced by Rønjom, Bardeh and Helleseth at Asiacrypt 2017. Our work characterizes it in a simple way by exploiting the notion of related differences, which was introduced and well analyzed by the AES designers. We extend the 4-round property by considering further properties of related differences over the AES linear layer, generalizing the zero-difference property. This results in a new key-recovery attack on 7-round AES, which is the first attack on 7-round AES that exploits the zero-difference property.
Last updated:  2022-09-15
Hybrid Post-Quantum Signatures in Hardware Security Keys
Diana Ghinea, Fabian Kaczmarczyck, Jennifer Pullman, Julien Cretin, Stefan Kölbl, Rafael Misoczki, Jean-Michel Picod, Luca Invernizzi, and Elie Bursztein
Recent advances in quantum computing are increasingly jeopardizing the security of cryptosystems currently in widespread use, such as RSA or elliptic-curve signatures. To address this threat, researchers and standardization institutes have accelerated the transition to quantum-resistant cryptosystems, collectively known as Post-Quantum Cryptography (PQC). These PQC schemes present new challenges due to their larger memory and computational footprints and their higher chance of latent vulnerabilities. In this work, we address these challenges by introducing a scheme to upgrade the digital signatures used by security keys to PQC, focusing on both its theoretical and practical aspects. Specifically, we introduce a hybrid digital signature scheme based on two building blocks: a classically-secure scheme, ECDSA, and a post-quantum secure one, Dilithium. Our hybrid scheme maintains the guarantees of each underlying building block even if the other one is broken, thus being resistant to classical and quantum attacks. Additionally, our hybrid scheme ensures that an adversary cannot derive ECDSA or Dilithium signatures that this authentication protocol considers valid. On the practical aspect, we experimentally show that our hybrid signature scheme can successfully execute on current security keys, even though secure PQC schemes are known to require substantial resources. We publish an open-source implementation of our scheme at http://anonymous.4open.science/r/OpenSK-D018/ so that other researchers can reproduce our results on a nRF52840 development kit.
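The overall shape of such a hybrid signature, sign with both components and accept only if both verify, can be sketched generically as follows. This is a hedged illustration with placeholder callables (the toy MAC-based backends exist only to make the sketch run); it is not the paper's construction, which additionally binds the ECDSA and Dilithium components so that neither can be stripped or reused on its own.

import hashlib
import hmac
from dataclasses import dataclass
from typing import Callable

@dataclass
class SigScheme:
    sign: Callable[[bytes, bytes], bytes]            # (sk, msg) -> sig
    verify: Callable[[bytes, bytes, bytes], bool]    # (pk, msg, sig) -> ok

def hybrid_sign(classical: SigScheme, pq: SigScheme, sks, msg):
    sk_c, sk_pq = sks
    return classical.sign(sk_c, msg), pq.sign(sk_pq, msg)

def hybrid_verify(classical: SigScheme, pq: SigScheme, pks, msg, sig):
    pk_c, pk_pq = pks
    sig_c, sig_pq = sig
    # Hybrid guarantee: reject unless BOTH component signatures verify.
    return classical.verify(pk_c, msg, sig_c) and pq.verify(pk_pq, msg, sig_pq)

# Toy stand-ins (MACs, not real signatures) just to exercise the combiner.
toy = SigScheme(
    sign=lambda sk, m: hmac.new(sk, m, hashlib.sha256).digest(),
    verify=lambda pk, m, s: hmac.compare_digest(hmac.new(pk, m, hashlib.sha256).digest(), s),
)
sks = pks = (b"classical-key", b"post-quantum-key")
sig = hybrid_sign(toy, toy, sks, b"authentication challenge")
assert hybrid_verify(toy, toy, pks, b"authentication challenge", sig)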
Last updated:  2022-09-15
Invisible Formula Attacks
David Naccache and Ofer Yifrach-Stav
This brief note introduces a new attack vector applicable to a symbolic computation tool routinely used by cryptographers. The attack takes advantage of the fact that the very rich user interface allows displaying formulae in invisible color or in font size zero. This makes it possible to render some code portions invisible when the file is opened with the tool. We implement a classical fault attack thanks to this deceptive mechanism, but other cryptographic or non-cryptographic attacks (e.g., formatting the victim's disk or installing rootkits) can easily be conducted using identical techniques. This underlines the importance of creating malware detection software for symbolic computation tools. Such protections do not exist as of today. We stress that our observation is not a vulnerability in Mathematica but rather a misuse of the rich possibilities offered by the software.
Last updated:  2022-09-15
From Plaintext-extractability to IND-CCA Security
Ehsan Ebrahimi
We say a public-key encryption scheme is plaintext-extractable in the random oracle model if there exists an algorithm that, given access to all input/output queries to the random oracles, can simulate the decryption oracle. We argue that plaintext-extractability is enough to show the indistinguishability under chosen-ciphertext attack (IND-CCA) of the OAEP+ transform (Shoup, Crypto 2001) when the underlying trapdoor permutation is one-way. We extend the result to the quantum random oracle model (QROM) and show that OAEP+ is IND-CCA secure in the QROM if the underlying trapdoor permutation is quantum one-way.
Last updated:  2022-09-15
Putting up the swiss army knife of homomorphic calculations by means of TFHE functional bootstrapping
Pierre-Emmanuel Clet, Martin Zuber, Aymen Boudguiga, Renaud Sirdey, and Cédric Gouy-Pailler
In this work, we first propose a new functional bootstrapping with TFHE for evaluating any function whose domain and codomain are the real torus T, using a small number of bootstrappings. This result improves on some aspects of previous approaches: like them, we allow for evaluating any function, but with better precision. In addition, we develop more efficient multiplication and addition over ciphertexts building on the digit-decomposition approach. As a practical application, our results lead to an efficient implementation of ReLU, one of the most used activation functions in deep learning. The paper is concluded by extensive experimental results comparing each building block as well as their practical relevance and trade-offs.
Last updated:  2022-09-15
Efficient Proofs of Software Exploitability for Real-world Processors
Matthew Green, Mathias Hall-Andersen, Eric Hennenfent, Gabriel Kaptchuk, Benjamin Perez, and Gijs Van Laer
We consider the problem of proving in zero-knowledge the existence of vulnerabilities in executables compiled to run on real-world processors. We demonstrate that it is practical to prove knowledge of real exploits for real-world processor architectures without the need for source code and without limiting our consideration to narrow vulnerability classes. To achieve this, we devise a novel circuit compiler and a toolchain that produces highly optimized, non-interactive zero-knowledge proofs for programs executed on the MSP430, an ISA commonly used in embedded hardware. Our toolchain employs a highly optimized circuit compiler and a number of novel optimizations to construct efficient proofs for program binaries. To demonstrate the capability of our system, we test our toolchain by constructing proofs for challenges in the Microcorruption capture the flag exercises.
Last updated:  2022-09-15
On Module Unique-SVP and NTRU
Joël Felderhoff, Alice Pellet-Mary, and Damien Stehlé
The NTRU problem can be viewed as an instance of finding a short non-zero vector in a lattice, under the promise that it contains an exceptionally short vector. Further, the lattice under scope has the structure of a rank-2 module over the ring of integers of a number field. Let us refer to this problem as the module unique Shortest Vector Problem, or mod-uSVP for short. We exhibit two reductions that together provide evidence that the NTRU problem is not just a particular case of mod-uSVP, but representative of it from a computational perspective. First, we reduce worst-case mod-uSVP to worst-case NTRU. For this, we rely on an oracle for id-SVP, the problem of finding short non-zero vectors in ideal lattices. Using the worst-case id-SVP to worst-case NTRU reduction from Pellet-Mary and Stehlé [ASIACRYPT'21], this shows that worst-case NTRU is equivalent to worst-case mod-uSVP. Second, we give a random self-reduction for mod-uSVP. We put forward a distribution D over mod-uSVP instances such that solving mod-uSVP with non-negligible probability for samples from D allows solving mod-uSVP in the worst case. Combined with the first result, this gives a reduction from worst-case mod-uSVP to an average-case version of NTRU where the NTRU instance distribution is inherited from D. This worst-case to average-case reduction requires an oracle for id-SVP.
Last updated:  2022-09-15
Through the Looking-Glass: Benchmarking Secure Multi-Party Computation Comparisons for ReLU's
Abdelrahaman Aly, Kashif Nawaz, Eugenio Salazar, and Victor Sucasas
Comparisons or Inequality Tests are an essential building block of Rectified Linear Unit functions (ReLU's), ever more present in Machine Learning, specifically in Neural Networks. Motivated by the increasing interest in privacy-preserving Artificial Intelligence, we explore the current state of the art of privacy preserving comparisons over Multi-Party Computation (MPC). We then introduce constant round variations and combinations, which are compatible with customary fixed point arithmetic over MPC. Our main focus is implementation and benchmarking; hence, we showcase our contributions via an open source library, compatible with current MPC software tools. Furthermore, we include a comprehensive comparative analysis on various adversarial settings. Our results improve running times in practical scenarios. Finally, we offer conclusions about the viability of these protocols when adopted for privacy-preserving Machine Learning.
Last updated:  2022-09-15
Vizard: A Metadata-hiding Data Analytic System with End-to-End Policy Controls
Chengjun Cai, Yichen Zang, Cong Wang, Xiaohua Jia, and Qian Wang
Owner-centric control is a widely adopted method for easing owners' concerns over data abuses and motivating them to share their data out to gain collective knowledge. However, while many control enforcement techniques have been proposed, privacy threats due to the metadata leakage therein are largely neglected in existing works. Unfortunately, a sophisticated attacker can infer very sensitive information based on either owners' data control policies or their analytic task participation histories (e.g., participating in a mental illness or cancer study can reveal their health conditions). To address this problem, we introduce $\textsf{Vizard}$, a metadata-hiding analytic system that enables privacy-hardened and enforceable control for owners. $\textsf{Vizard}$ is built with a tailored suite of lightweight cryptographic tools and designs that help us efficiently handle analytic queries over encrypted data streams coming in real-time (like heart rates). We propose extension designs to further enable advanced owner-centric controls (with AND, OR, NOT operators) and provide owners with release control to additionally regulate how the result should be protected before deliveries. We develop a prototype of $\textsf{Vizard}$ that is interfaced with Apache Kafka, and the evaluation results demonstrate the practicality of $\textsf{Vizard}$ for large-scale and metadata-hiding analytics over data streams.
Last updated:  2022-09-15
Some Security Arguments For Xifrat1
Jianfang "Danny" Niu
Xifrat1 is a family of cryptosystems based on abelian quasigroups and is the redesigned successor to the previous Xifrat0 cryptosystems. This paper discusses and attempts to argue its security, providing a foundation for future reasoning about, and/or refutation of, its security.
Last updated:  2022-09-15
Resurrecting Xifrat - Compact Cryptosystems 2nd Attempt
Jianfang "Danny" Niu
Xifrat was a group-theoretic public-key cryptosystem based on a quasigroup with the special property of "restricted-commutativity". It was broken within half a month of its publication, due to a mistake made in the "mixing" function. In this paper, we revisit the design decisions made, propose new constructions, and attempt (again) to build secure digital signature schemes and key encapsulation mechanisms. If the schemes can be proven secure, then this will be the most compact and the most efficient post-quantum cryptosystem proposed to date.