All papers in 2022 (Page 16 of 1781 results)

Last updated:  2022-08-23
Succinct Interactive Oracle Proofs: Applications and Limitations
Shafik Nassar, Ron D. Rothblum
Interactive Oracle Proofs (IOPs) are a new type of proof system that combines key properties of interactive proofs and PCPs: IOPs enable a verifier to be convinced of the correctness of a statement by interacting with an untrusted prover while reading just a few bits of the messages sent by the prover. IOPs have become very prominent in the design of efficient proof systems in recent years. In this work we study succinct IOPs, which are IOPs in which the communication complexity is polynomial (or even linear) in the original witness length. While there are strong impossibility results for the existence of succinct PCPs (i.e., PCPs whose length is polynomial in the witness), it is known that the rich class of NP relations that are decidable in small space does have succinct IOPs. In this work we show both new applications and limitations for succinct IOPs:
- First, using one-way functions, we show how to compile IOPs into zero-knowledge proofs, while nearly preserving the proof length. This complements a recent line of work, initiated by Ben-Sasson et al. (TCC 2016-B), which compiles IOPs into super-succinct zero-knowledge arguments. Applying the compiler to the state-of-the-art succinct IOPs yields zero-knowledge proofs for bounded-space NP relations, with communication that is nearly equal to the original witness length. This yields the shortest known zero-knowledge proofs from the minimal assumption of one-way functions.
- Second, we give a barrier for obtaining succinct IOPs for more general NP relations. In particular, we show that if a language has a succinct IOP, then it can be decided in space that is proportional only to the witness length, after a bounded-time probabilistic preprocessing. We use this result to show that under a simple and plausible (but, to the best of our knowledge, new) complexity-theoretic conjecture, there is no succinct IOP for CSAT.
Last updated:  2022-07-18
Efficient Homomorphic Evaluation on Large Intervals
Jung Hee Cheon, Wootae Kim, Jai Hyun Park
Homomorphic encryption (HE) is being widely used for privacy-preserving computation. Since HE schemes only support polynomial operations, it is prevalent to use polynomial approximations of non-polynomial functions. Because we cannot monitor intermediate values during a homomorphic evaluation, we must use polynomial approximations with sufficiently large approximation intervals to prevent the evaluation from failing. However, a large approximation interval incurs computational overhead, and this is a serious bottleneck for HE applications on real-world data. In this work, we introduce domain extension polynomials (DEPs) that extend the domain interval of a function by a factor of $k$ while preserving the behavior of the original function on its original domain interval. By iterating the domain-extension process with DEPs, we can extend the domain of a given function by a factor of $K$ using $O(\log{K})$ operations, while the behavior of the original function is preserved on its original domain interval. Using DEPs, we can efficiently evaluate in an encrypted state any function that converges at the infinities, i.e., for which $\lim_{x\to\infty}f(x)$ and $\lim_{x\to-\infty}f(x)$ exist in $\mathbb{R}$. To uniformly approximate such a function on $[-R,R]$, our method uses $O(\log{R})$ operations and $O(1)$ memory. This is more efficient than the previous approach, minimax approximation combined with the Paterson-Stockmeyer algorithm, which uses $\Omega(\sqrt{R})$ multiplications and $\Omega(\sqrt{R})$ memory. As another application of DEPs, we also suggest a method to manage risky outliers from a large interval $[-R,R]$ using $O(\log{R})$ additional multiplications. As a real-world application, we trained a logistic regression classifier on large public datasets in an encrypted state, applying our method to the evaluation of the logistic function on large intervals, e.g., $[-7683,7683]$.
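To make the iteration concrete, here is a minimal plaintext-arithmetic sketch (not an HE implementation; the cubic $B(x)=x-\frac{4}{27}x^3$ and the factor $k=1.5$ are assumed example choices, not necessarily the paper's DEPs):

```python
import numpy as np

# Assumed example DEP: B(x) = x - (4/27)x^3 maps [-1.5, 1.5] into [-1, 1]
# monotonically and is close to the identity near 0 (extension factor k = 1.5).
def dep(x):
    return x - (4.0 / 27.0) * x ** 3

def extend_domain(x, base_radius, iters, k=1.5):
    # Maps [-base_radius * k**iters, +base_radius * k**iters] into
    # [-base_radius, +base_radius] using O(iters) = O(log K) polynomial steps.
    for i in reversed(range(iters)):
        r = base_radius * k ** i
        x = r * dep(x / r)
    return x

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

base_r, iters = 8.0, 12               # handles inputs up to ~ 8 * 1.5**12 = 1038
xs = np.linspace(-1000, 1000, 201)
err = np.abs(sigmoid(extend_domain(xs, base_r, iters)) - sigmoid(xs))
print(err.max())                      # small error despite the huge interval
```

Because the sigmoid saturates away from the origin, the distortion introduced by the iterated extension barely affects the final value, which is the property the paper exploits for functions that converge at the infinities.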
Last updated:  2022-08-19
Permutation rotation-symmetric S-boxes, liftings and affine equivalence
Tron Omland, Pantelimon Stanica
In this paper, we investigate permutation rotation-symmetric (shift-invariant) vectorial Boolean functions on $n$ bits that are liftings from Boolean functions on $k$ bits, for $k \leq n$. These functions generalize the well-known map used in the current Keccak hash function, which is generated via the Boolean function on 3 variables, $x_1+(x_2+1)x_3$. We provide some general constructions, study the affine equivalence between rotation-symmetric S-boxes, and describe the corresponding relationship between the Boolean functions they are associated with.
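For concreteness, a small sketch (ours, not from the paper) of the lifting of $x_1+(x_2+1)x_3$, i.e. the $\chi$ map of Keccak, to an $n$-bit shift-invariant S-box:

```python
def lift_chi(x, n):
    # Lifting of f(x1, x2, x3) = x1 + (x2 + 1)*x3 over GF(2) to n bits:
    # output bit i is x_i XOR ((x_{i+1} XOR 1) AND x_{i+2}), indices mod n,
    # which is the chi step of Keccak applied to an n-bit row.
    bit = lambda j: (x >> (j % n)) & 1
    y = 0
    for i in range(n):
        y |= (bit(i) ^ ((bit(i + 1) ^ 1) & bit(i + 2))) << i
    return y

# The lifted map is shift-invariant and, for odd n, a permutation
# (Keccak uses rows of n = 5 bits).
n = 5
assert len({lift_chi(x, n) for x in range(2**n)}) == 2**n
```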
Last updated:  2022-03-02
Incompressibility and Next-Block Pseudoentropy
Iftach Haitner, Noam Mazor, Jad Silbak
A distribution is k-incompressible [Yao, FOCS '82] if no efficient compression scheme compresses it to fewer than k bits. While this is a natural measure, its relation to other computational analogs of entropy, such as pseudoentropy [Håstad, Impagliazzo, Levin, and Luby, SICOMP '99], and to other cryptographic hardness assumptions was unclear. We advance towards a better understanding of this notion, showing that a k-incompressible distribution has (k−2) bits of next-block pseudoentropy, a refinement of pseudoentropy introduced by Haitner, Reingold, and Vadhan [SICOMP '13]. We deduce that the existence of a samplable distribution X that is (H(X) + 2)-incompressible implies the existence of one-way functions.
Last updated:  2022-11-08
Security Analysis of Elliptic Curves over Sextic Extension of Small Prime Fields
Robin Salen, Vijaykumar Singh, Vladimir Soukharev
In this report we investigate how to generate secure elliptic curves over sextic extensions of prime fields of size roughly 64 bits to achieve 128-bit security. In particular, we present one such curve over a 64-bit prime field, which we named Cheetah, and provide its security parameters. This curve is particularly well-suited for zero-knowledge applications such as FRI-based STARK proving systems: its base prime field has large two-adicity, necessary for FFT-related operations, while also being usable for elliptic-curve-based signatures. We also provide a prototype implementation of this curve in Rust, featuring constant-time arithmetic and no use of the Rust standard library, for WebAssembly support.
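As a quick illustration of the two-adicity property, the sketch below checks the power of two dividing $p-1$; the specific prime $p = 2^{64} - 2^{32} + 1$ is an assumed example of a STARK-friendly 64-bit prime, not necessarily Cheetah's exact base field:

```python
# Two-adicity of a field's multiplicative group determines the maximum
# length of radix-2 FFTs (needed by FRI), since it bounds the available
# power-of-two roots of unity.
p = 2**64 - 2**32 + 1   # assumed example of a STARK-friendly 64-bit prime

def two_adicity(m):
    # Largest v such that 2^v divides m.
    v = 0
    while m % 2 == 0:
        m //= 2
        v += 1
    return v

print(two_adicity(p - 1))   # 32: subgroup of order 2^32 exists, so
                            # radix-2 FFTs up to length 2^32 are supported
```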
Last updated:  2023-11-04
Hardness estimates of the Code Equivalence Problem in the Rank Metric
Krijn Reijnders, Simona Samardjiska, and Monika Trimoska
In this paper, we analyze the hardness of the Matrix Code Equivalence (MCE) problem for matrix codes endowed with the rank metric, and provide the first algorithms for solving it. We do this by making a connection to another well-known equivalence problem from multivariate cryptography, the Isomorphism of Polynomials (IP) problem. Under mild assumptions, we give tight reductions from MCE to the homogeneous version of the Quadratic Maps Linear Equivalence (QMLE) problem, and vice versa. Furthermore, we present reductions to and from similar problems in the sum-rank metric, showing that MCE is at the core of code equivalence problems. On the practical side, using birthday techniques known for IP, we present two algorithms: a probabilistic algorithm for MCE running in time $q^{\frac{2}{3}(n+m)}$ up to a polynomial factor, and a deterministic algorithm for MCE with roots, running in time $q^{\min\{m,n,k\}}$ up to a polynomial factor. Lastly, to confirm these findings, we solve randomly generated instances of MCE using these two algorithms.
Last updated:  2022-04-19
Concrete Analysis of Approximate Ideal-SIVP to Decision Ring-LWE Reduction
Neal Koblitz, Subhabrata Samajder, Palash Sarkar, Subhadip Singha
A seminal 2013 paper by Lyubashevsky, Peikert, and Regev proposed basing post-quantum cryptography on ideal lattices and supported this proposal by giving a polynomial-time security reduction from the approximate Shortest Independent Vectors Problem (SIVP) to the Decision Learning With Errors (DLWE) problem in ideal lattices. We give a concrete analysis of this multi-step reduction. We find that the tightness gap in the reduction is so great as to vitiate any meaningful security guarantee, and we find reasons to doubt the feasibility in the foreseeable future of the quantum part of the reduction. In addition, when we make the reduction concrete it appears that the approximation factor in the SIVP problem is far larger than expected, a circumstance that causes the corresponding approximate-SIVP problem most likely not to be hard for proposed cryptosystem parameters. We also discuss implications for systems such as Kyber and SABER that are based on module-DLWE.
Last updated:  2022-03-02
EcGFp5: a Specialized Elliptic Curve
Thomas Pornin
We present here the design and implementation of ecGFp5, an elliptic curve meant for a specific compute model in which operations modulo a given 64-bit prime are especially efficient. This model is primarily intended for running operations in a virtual machine that produces and verifies zero-knowledge STARK proofs. We describe here the choice of a secure curve, amenable to safe cryptographic operations such as digital signatures, that maps to such models, while still providing reasonable performance on general purpose computers.
Last updated:  2022-03-02
Compact Storage for Homomorphic Encryption
Adi Akavia, Neta Oren, Boaz Sapir, Margarita Vald
Homomorphic encryption (HE) is a promising technology for protecting data in use, with considerable progress in recent years towards attaining practical runtime performance. However, the high storage overhead associated with HE remains an obstacle to its large-scale adoption. In this work we propose a new storage solution in the two-server model that resolves the high storage overhead associated with HE while preserving data confidentiality. Our solution attains the following desired properties: 1) *Compact storage* with zero overhead over storing AES ciphertexts, and $10\times$ to $10,000\times$ better than storing CKKS ciphertexts. 2) *Fast runtime performance* for storage and retrieval, only twice the time of directly storing and retrieving HE ciphertexts. 3) *Dynamic control during retrieval* of the HE parameters and the data items to be packed in each HE ciphertext. 4) *Plug-and-play compatibility* with any homomorphic computation. We implemented our solution in a proof-of-concept system running on AWS EC2 instances with AWS S3 storage, empirically demonstrating its appealing performance. As a central tool we introduce the first perfect secret sharing scheme with fast homomorphic reconstruction over the reals; this may be of independent interest.
Last updated:  2022-03-02
Quantum-Secure Aggregate One-time Signatures with Detecting Functionality
Shingo Sato, Junji Shikata
An aggregate signature (ASIG) scheme allows any user to compress multiple signatures into a short signature called an aggregate signature. While a conventional ASIG scheme cannot detect any invalid messages from an aggregate signature, an ASIG scheme with detecting functionality (D-ASIG) has the additional property of identifying invalid messages from aggregate signatures. Hence, D-ASIG is useful for reducing the total signature size on a channel. Meanwhile, the development of quantum computers has advanced rapidly, and all existing D-ASIG schemes are insecure against attacks using quantum algorithms, which we call quantum attacks. In this paper, we propose a D-ASIG scheme that is quantum-secure, i.e., secure in a quantum setting. To this end, we first introduce quantum-security notions for ASIGs and D-ASIGs, because no such security notions for (D-)ASIGs have been studied before. Second, we propose a lattice-based aggregate one-time signature scheme with detecting functionality, and prove that this scheme satisfies our quantum-security notion in the quantum random oracle model and the certified key model. Hence, this scheme is the first quantum-secure D-ASIG.
Last updated:  2022-03-02
Approximate Divisor Multiples -- Factoring with Only a Third of the Secret CRT-Exponents
Alexander May, Julian Nowakowski, Santanu Sarkar
We address Partial Key Exposure attacks on CRT-RSA on secret exponents $d_p, d_q$ with small public exponent $e$. For constant $e$ it is known that the knowledge of half of the bits of one of $d_p, d_q$ suffices to factor the RSA modulus $N$ by Coppersmith's famous {\em factoring with a hint} result. We extend this setting to non-constant $e$. Somewhat surprisingly, our attack shows that RSA with $e$ of size $N^{\frac 1 {12}}$ is most vulnerable to Partial Key Exposure, since in this case only a third of the bits of both $d_p, d_q$ suffices to factor $N$ in polynomial time, knowing either most significant bits (MSB) or least significant bits (LSB). Let $ed_p = 1 + k(p-1)$ and $ed_q = 1 + \ell(q-1)$. On the technical side, we find the factorization of $N$ in a novel two-step approach. In a first step we recover $k$ and $\ell$ in polynomial time, in the MSB case completely elementary and in the LSB case using Coppersmith's lattice-based method. We then obtain the prime factorization of $N$ by computing the root of a univariate polynomial modulo $kp$ for our known $k$. This can be seen as an extension of Howgrave-Graham's {\em approximate divisor} algorithm to the case of {\em approximate divisor multiples} for some known multiple $k$ of an unknown divisor $p$ of $N$. The point of {\em approximate divisor multiples} is that the unknown that is recoverable in polynomial time grows linearly with the size of the multiple $k$. Our resulting Partial Key Exposure attack with known MSBs is completely rigorous, whereas in the LSB case we rely on a standard Coppersmith-type heuristic. We experimentally verify our heuristic, thereby showing that in practice we reach our asymptotic bounds already using small lattice dimensions. Thus, our attack is highly efficient.
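The key relations can be checked on a toy instance; the sketch below (insecure parameter sizes, illustration only) verifies $ed_p = 1 + k(p-1)$ and the exact-multiple identity $ed_p - 1 + k = kp$ that underlies the approximate-divisor-multiple step:

```python
from math import gcd

# Toy CRT-RSA instance with insecure sizes, to check the relations only.
p, q = 2**31 - 1, 2**61 - 1          # small Mersenne primes (illustration)
N, e = p * q, 65537
d_p, d_q = pow(e, -1, p - 1), pow(e, -1, q - 1)

# e*d_p = 1 + k*(p-1) and e*d_q = 1 + l*(q-1), with 0 < k, l < e.
k = (e * d_p - 1) // (p - 1)
l = (e * d_q - 1) // (q - 1)
assert e * d_p == 1 + k * (p - 1) and e * d_q == 1 + l * (q - 1)

# Rearranging: e*d_p - 1 + k == k*p, an exact multiple of the unknown prime p.
# With only partial knowledge of d_p, one instead searches for a small root
# modulo k*p: the "approximate divisor multiple" setting of the paper.
assert (e * d_p - 1 + k) % p == 0
print(gcd(e * d_p - 1 + k, N) == p)  # True: full knowledge of d_p recovers p
```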
Last updated:  2022-03-02
Efficient NIZKs and Signatures from Commit-and-Open Protocols in the QROM
Jelle Don, Serge Fehr, Christian Majenz, Christian Schaffner
Commit-and-open Sigma-protocols are a popular class of protocols for constructing non-interactive zero-knowledge arguments and digital-signature schemes via the Fiat-Shamir transformation. Instantiated with hash-based commitments, the resulting non-interactive schemes enjoy tight online-extractability in the random oracle model. Online extractability improves the tightness of security proofs for the resulting digital-signature schemes by avoiding lossy rewinding or forking-lemma based extraction. In this work, we prove tight online extractability in the quantum random oracle model (QROM), showing that the construction supports post-quantum security. First, we consider the default case where committing is done by element-wise hashing. In a second part, we extend our result to Merkle-tree based commitments. Our results yield a significant improvement of the provable post-quantum security of the digital-signature scheme Picnic. Our analysis makes use of a recent framework by Chung et al. for analysing quantum algorithms in the QROM using purely classical reasoning. Therefore, our results can to a large extent be understood and verified without prior knowledge of quantum information science.
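A minimal sketch of commitment by element-wise hashing and subset opening, as a generic illustration of the commit-and-open structure (our simplification, not the paper's formalization):

```python
import hashlib, os

def commit(values):
    # Element-wise hashing: one salted hash per committed position.
    salts = [os.urandom(16) for _ in values]
    coms = [hashlib.sha256(s + v).digest() for s, v in zip(salts, values)]
    return coms, salts

def open_positions(values, salts, positions):
    return [(i, values[i], salts[i]) for i in positions]

def verify(coms, openings):
    return all(hashlib.sha256(s + v).digest() == coms[i] for i, v, s in openings)

vals = [b"resp-0", b"resp-1", b"resp-2"]
coms, salts = commit(vals)                                # prover's first message
assert verify(coms, open_positions(vals, salts, [0, 2]))  # challenge set {0, 2}
```

In a commit-and-open Sigma-protocol the prover sends all per-position hashes and the challenge determines which positions are opened; online extractability concerns recovering the unopened values from the prover's (quantum) random-oracle queries.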
Last updated:  2023-10-27
On Codes and Learning With Errors over Function Fields
Maxime Bombar, Alain Couvreur, and Thomas Debris-Alazard
It is a long-standing open problem to find search-to-decision reductions for structured versions of the decoding problem of linear codes. Such results in the lattice-based setting have been carried out using number fields: Polynomial-LWE, Ring-LWE, Module-LWE and so on. We propose a function field version of the LWE problem. This new framework leads to another point of view on structured codes, e.g. quasi-cyclic codes, strengthening the connection between lattice-based and code-based cryptography. In particular, we obtain the first search-to-decision reduction for structured codes. Following the historical constructions in lattice-based cryptography, we instantiate our construction with function field analogues of cyclotomic fields, namely Carlitz extensions, leading to search-to-decision reductions for various versions of Ring-LPN, which have applications to secure multiparty computation and to an authentication protocol.
Last updated:  2022-05-11
Efficient Schemes for Committing Authenticated Encryption
Mihir Bellare, Viet Tung Hoang
This paper provides efficient authenticated-encryption (AE) schemes in which a ciphertext is a commitment to the key. These are extended, at minimal additional cost, to schemes where the ciphertext is a commitment to all encryption inputs, meaning key, nonce, associated data and message. Our primary schemes are modifications of GCM (for basic, unique-nonce AE security) and AES-GCM-SIV (for misuse-resistant AE security) and add both forms of commitment without any increase in ciphertext size. We also give more generic, but somewhat more costly, solutions.
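For readers unfamiliar with key commitment, the generic wrapper below illustrates the goal; it is *not* the paper's construction (which adds commitment to GCM and AES-GCM-SIV with no ciphertext expansion) and it uses the pyca/cryptography package:

```python
import hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def committing_encrypt(key, nonce, aad, msg):
    # Generic add-on commitment to (key, nonce); unlike the paper's schemes,
    # this expands the ciphertext by a full hash output.
    ct = AESGCM(key).encrypt(nonce, msg, aad)
    com = hashlib.sha256(b"commit" + key + nonce).digest()
    return ct, com

def committing_decrypt(key, nonce, aad, ct, com):
    # A ciphertext that decrypts under two different keys cannot match the
    # same commitment, which is the property plain AES-GCM lacks.
    if hashlib.sha256(b"commit" + key + nonce).digest() != com:
        raise ValueError("key/nonce does not match the commitment")
    return AESGCM(key).decrypt(nonce, ct, aad)

key, nonce = AESGCM.generate_key(bit_length=128), os.urandom(12)
ct, com = committing_encrypt(key, nonce, b"hdr", b"hello")
assert committing_decrypt(key, nonce, b"hdr", ct, com) == b"hello"
```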
Last updated:  2022-04-12
Practical Post-Quantum Signature Schemes from Isomorphism Problems of Trilinear Forms
Gang Tang, Dung Hoang Duong, Antoine Joux, Thomas Plantard, Youming Qiao, Willy Susilo
In this paper, we propose a practical signature scheme based on the alternating trilinear form equivalence problem. Our scheme is inspired by the Goldreich-Micali-Wigderson zero-knowledge protocol for graph isomorphism, and can serve as an alternative candidate for NIST's post-quantum digital signatures. First, we present theoretical evidence to support its security, especially in the post-quantum cryptography context. The evidence is drawn from several research lines, including hidden subgroup problems, multivariate cryptography, cryptography based on group actions, the quantum random oracle model, and recent advances on isomorphism problems for algebraic structures in algorithms and complexity. Second, we demonstrate its potential for practical uses. Based on algorithm studies, we propose concrete parameter choices, and then implement a prototype. One concrete scheme achieves 128-bit security with public key size ~ 4100 bytes, signature size ~ 6800 bytes, and running times (key generation, sign, verify) ~ 0.8ms on a common laptop computer.
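The underlying GMW-style proof structure, sketched for the simplest instance, graph isomorphism, rather than alternating trilinear forms (a toy illustration of the identification protocol only):

```python
import random

def permute(graph, perm):
    # Apply a vertex relabeling to an undirected edge set.
    return {tuple(sorted((perm[u], perm[v]))) for (u, v) in graph}

n = 6
secret = list(range(n)); random.shuffle(secret)      # witness: G1 = secret(G0)
G0 = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 3)}
G1 = permute(G0, secret)                             # public instance (G0, G1)

def one_round():
    sigma = list(range(n)); random.shuffle(sigma)
    H = permute(G1, sigma)                           # commitment: fresh relabeling
    b = random.randrange(2)                          # verifier's challenge bit
    if b == 1:
        return permute(G1, sigma) == H               # open H as relabeling of G1
    comp = [sigma[secret[i]] for i in range(n)]      # compose mask with witness
    return permute(G0, comp) == H                    # open H as relabeling of G0

assert all(one_round() for _ in range(128))          # soundness error 1/2 per round
```

The actual scheme replaces graphs by alternating trilinear forms and permutations by invertible linear maps, then applies Fiat-Shamir to many parallel rounds.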
Last updated:  2022-03-02
Verifiably Distributed Multi-User Secret Sharing schemes
Likang Lu, Jianzhu Lu
Distributed secret sharing techniques, where a specific secret is encoded into its shares which are conveyed to the IoT device or its user via storage nodes, are considered. A verifiably distributed secret sharing (VDSS) scheme provides a way for a legitimate user to verify the secret they reconstruct from the downloaded shares while the secrecy condition is satisfied in a weak or a perfect sense. This article examines the impact of minimizing verification information in a VDSS on the communication complexity and storage overhead, and achieves verifiability in resource-limited IoT settings by aggregating the verification information of different devices/users. Then, two secure VDSSs are proposed. The first VDSS attains the lower bound on communication complexity while providing fault tolerance. The second VDSS simultaneously achieves the lower bounds on both communication complexity and storage overhead while providing a balanced storage load, yielding a scheme that is optimal in terms of both parameters.
Last updated:  2022-11-26
Non-interactive Mimblewimble transactions, revisited
Georg Fuchsbauer, Michele Orrù
Mimblewimble is a cryptocurrency protocol that promises to overcome notorious blockchain scalability issues and provides user privacy. For a long time its wider adoption has been hindered by the lack of non-interactive transactions, that is, payments for which only the sender needs to be online. Yu proposed a way of adding non-interactive transactions, based on stealth addresses, to Mimblewimble, but we show that it is flawed. Building on Yu and integrating ideas from Burkett, we give a fixed scheme and provide a rigorous security analysis in a strengthening of the previous security model from Eurocrypt'19. Our protocol is considered for implementation by MimbleWimbleCoin and a variant is now deployed as MimbleWimble Extension Blocks (MWEB) in Litecoin.
Last updated:  2022-05-30
Gradecast in Synchrony and Reliable Broadcast in Asynchrony with Optimal Resilience, Efficiency, and Unconditional Security
Ittai Abraham, Gilad Asharov
We revisit Gradecast (Feldman and Micali, STOC'88) in synchrony and Reliable Broadcast (Bracha, Information and Computation'87) in asynchrony. For both tasks, we provide new protocols that have three desirable properties: (1) \emph{optimal resilience}, tolerating $t<n/3$ malicious parties; (2) \emph{communication efficiency}, where honest parties send just $O(n L)$ bits for a sender with a message of $L = \Omega(n \log n)$ bits; and (3) \emph{unconditional security}, without relying on any computational or setup assumptions (while having a statistical error probability). To the best of our knowledge, no previous work obtains all three properties simultaneously.
Last updated:  2022-03-02
Rethinking Modular Multi-Exponentiation in Real-World Applications
Vidal Attias, Luigi Vigneri, Vassil Dimitrov
The importance of efficient multi-exponentiation algorithms in a large spectrum of cryptographic applications continues to grow. Previous literature on the subject focuses exclusively on minimizing the number of modular multiplications. However, a small reduction of the multiplicative complexity can easily be overshadowed by other figures of merit. In this article, we demonstrate that the most efficient algorithm for computing multi-exponentiations changes when considering execution time instead of the number of modular multiplications. We focus our work on two algorithms that perform best under the multiplication-count metric and show that some side operations affect their theoretical ranking. We provide this analysis on different hardware, such as Intel Core and ARM CPUs and the two latest generations of Raspberry Pis, to show how the chosen machine affects the execution time of multi-exponentiation.
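As a baseline for the kind of algorithm being compared, here is textbook simultaneous multi-exponentiation (Shamir's trick with a subset-product table) against the naive method; this is generic background, not the paper's optimized code:

```python
def naive_multi_exp(bases, exps, mod):
    # prod_i bases[i]**exps[i] mod `mod`, one exponentiation per base.
    r = 1
    for g, e in zip(bases, exps):
        r = r * pow(g, e, mod) % mod
    return r

def shamir_multi_exp(bases, exps, mod):
    # One shared square-and-multiply chain over all exponents at once.
    k = len(bases)
    # Precompute the product of every subset of the bases.
    table = [1] * (1 << k)
    for i in range(k):
        for s in range(1 << i, 1 << (i + 1)):
            table[s] = table[s - (1 << i)] * bases[i] % mod
    r = 1
    for bit in range(max(e.bit_length() for e in exps) - 1, -1, -1):
        r = r * r % mod
        s = sum(((e >> bit) & 1) << i for i, e in enumerate(exps))
        r = r * table[s] % mod
    return r

mod = (1 << 127) - 1
bases, exps = [3, 5, 7], [2**100 + 17, 2**99 + 3, 12345678901234567890]
assert naive_multi_exp(bases, exps, mod) == shamir_multi_exp(bases, exps, mod)
```

The multiplication count of the interleaved variant is clearly lower, but the table precomputation and per-step bookkeeping are exactly the kind of "side operations" whose real cost the paper measures.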
Last updated:  2022-03-02
Secure Non-Interactive Reduction and Spectral Analysis of Correlations
Pratyush Agarwal, Varun Narayanan, Shreya Pathak, Manoj Prabhakaran, Vinod M. Prabhakaran, Mohammad Ali Rehan
Correlated pairs of random variables are a central concept in information-theoretically secure cryptography. Secure reductions between different correlations have been studied, and completeness results are known. Further, the complexity of such reductions is intimately connected with circuit complexity and efficiency of locally decodable codes. As such, making progress on these complexity questions faces strong barriers. Motivated by this, in this work, we study a restricted form of secure reductions --- namely, Secure Non-Interactive Reductions (SNIR) --- which is still closely related to the original problem, and establish several fundamental results and relevant techniques for it. We uncover striking connections between SNIR and linear algebraic properties of correlations. Specifically, we define the spectrum of a correlation, and show that a target correlation has a SNIR to a source correlation only if the spectrum of the latter contains the entire spectrum of the former. We also establish a ``mirroring lemma'' that shows an unexpected symmetry between the two parties in a SNIR, when viewed through the lens of spectral analysis. We also use cryptographic insights and elementary linear algebraic analysis to fully characterize the role of common randomness as well as local randomness in SNIRs. We employ these results to resolve several fundamental questions about SNIRs, and to define future directions.
Last updated:  2022-05-01
Sublinear GMW-Style Compiler for MPC with Preprocessing
Elette Boyle, Niv Gilboa, Yuval Ishai, Ariel Nof
We consider the efficiency of protocols for secure multiparty computation (MPC) with a dishonest majority. A popular approach for the design of such protocols is to employ preprocessing. Before the inputs are known, the parties generate correlated secret randomness, which is consumed by a fast and possibly ``information-theoretic'' online protocol. A powerful technique for securing such protocols against malicious parties uses homomorphic MACs to authenticate the values produced by the online protocol. Compared to a baseline protocol, which is only secure against semi-honest parties, this involves a significant increase in the size of the correlated randomness, by a factor of up to a statistical security parameter. Different approaches for partially mitigating this extra storage cost come at the expense of increasing the online communication. In this work we propose a new technique for protecting MPC with preprocessing against malicious parties. We show that for circuit evaluation protocols that satisfy mild security and structural requirements, which are met by many standard protocols with semi-honest security, the extra additive storage and online communication costs are both logarithmic in the circuit size. This applies to Boolean circuits and to arithmetic circuits over fields or rings, and to both information-theoretic and computationally secure protocols. Our protocol can be viewed as a sublinear information-theoretic variant of the celebrated ``GMW compiler'' that applies to natural protocols for MPC with preprocessing. Our compiler makes a novel use of the techniques of Boneh et al. (Crypto 2019) for sublinear distributed zero knowledge, which were previously only used in the setting of honest-majority MPC.
Last updated:  2022-03-02
Advances in Logic Locking: Past, Present, and Prospects
Hadi Mardani Kamali, Kimia Zamiri Azar, Farimah Farahmandi, Mark Tehranipoor
Logic locking is a design concealment mechanism for protecting the IPs integrated into modern System-on-Chip (SoC) architectures from a wide range of hardware security threats in the IC manufacturing supply chain. Logic locking primarily helps the designer to protect the IPs against reverse engineering, IP piracy, overproduction, and unauthorized activation. For more than a decade, the research carried out on this paradigm has been immense, investigating the applicability, feasibility, and efficacy of logic locking, including metrics to assess the efficacy, the impact of locking at different levels of abstraction, threat model definition, resiliency against physical attacks and tampering, and the application of machine learning. However, the security and strength of existing logic locking techniques have been constantly questioned by sophisticated logical and physical attacks that evolve in sophistication at the same rate as logic locking countermeasures. In this survey paper, we provide a comprehensive definition of the metrics, assumptions, and principles of logic locking, categorize the existing defenses and attacks so as to capture the most benefit from logic locking techniques for IP protection, and illuminate the need for and give direction to future research on this topic. This survey serves as a guide to quickly navigate and identify the state of the art that should be considered and investigated in further studies of logic locking techniques, helping IP vendors, SoC designers, and researchers to be informed of the principles, fundamentals, and properties of logic locking.
Last updated:  2022-06-24
Partial Key Exposure Attacks on BIKE, Rainbow and NTRU
Andre Esser, Alexander May, Javier Verbel, Weiqiang Wen
In a so-called partial key exposure attack one obtains some information about the secret key, e.g. via some side-channel leakage. This information might be a certain fraction of the secret key bits (erasure model) or some erroneous version of the secret key (error model). The goal is to recover the secret key from the leaked information. There is a common belief that, as opposed to e.g. the RSA cryptosystem, most post-quantum cryptosystems are usually resistant against partial key exposure attacks. We strongly question this belief by constructing partial key exposure attacks on code-based, multivariate, and lattice-based schemes (BIKE, Rainbow and NTRU). Our attacks exploit the redundancy that modern PQ cryptosystems inherently use for efficiency reasons. The application and development of techniques from information set decoding plays a crucial role for achieving our results. On the theoretical side, we show non-trivial information leakage bounds that allow for a polynomial time key recovery attack. As an example, for all schemes the knowledge of a constant fraction of the secret key bits suffices to reconstruct the full key in polynomial time. Even if we no longer insist on polynomial time attacks, most of our attacks extend well and remain feasible up to large erasure and error rates. In the case of BIKE for example we obtain attack complexities around 60 bits when half of the secret key bits are erased, or a quarter of the secret key bits are faulty. Our results show that even highly error-prone key leakage of modern PQ cryptosystems may lead to full secret key recoveries.
Last updated:  2022-04-06
Digital Twin for Secure Semiconductor Lifecycle Management: Prospects and Applications
Hasan Al Shaikh, Mohammad Bin Monjil, Shigang Chen, Farimah Farahmandi, Navid Asadizanjani, Mark Tehranipoor, Fahim Rahman
The expansive globalization of the semiconductor supply chain has introduced numerous untrusted entities into different stages of a device's lifecycle, enabling them to compromise its security. To make matters worse, the increasing complexity of designs and the aggressive time-to-market requirements of newer generations of integrated circuits can lead designers to unintentionally introduce security vulnerabilities, or verification engineers to fail to detect them earlier in the design lifecycle, often due to the limitations of traditional verification and testing methodologies. These overlooked or undetected vulnerabilities can be exploited by malicious entities in subsequent stages of the lifecycle through an ever-widening variety of hardware attacks. The ability to ascertain the provenance of these vulnerabilities, after they have been unearthed at a later stage, becomes a pressing issue when security assurance across the whole lifecycle must be ensured and generationally improved to thwart emerging attacks. We posit that if there is a malicious or unintentional breach of the security policies of a device, it will be reflected in the form of anomalies in the data collected through traditional design, verification, validation, and testing activities throughout the lifecycle. With that, a digital simulacrum of a device's lifecycle, called a digital twin (DT), can be formed from the data gathered at different stages to secure the lifecycle of the device. The DT can analyze the collected data through its constituent AI and data analytics algorithms to trace the origin of a detected hardware attack or vulnerability to the associated stage of the lifecycle. We refer to this functionality of the DT as Backward Trust Analysis. We also introduce the notion of Forward Trust Analysis, which refers to the scalability and adaptability of the DT to unforeseen threats as they emerge. In this paper, we put forward a realization of the intertwined relationships of security vulnerabilities with data available from the silicon lifecycle and formulate the components of an AI-driven DT framework. The proposed DT framework leverages these relationships to achieve the aforementioned security objectives through causality analysis, and thus accomplishes end-to-end security-aware management of the entire semiconductor lifecycle. We offer a perspective on how the limitations of existing ad-hoc-style security solutions can be overcome by the data-oriented analysis that underpins our approach. With several threat and attack scenarios, we demonstrate how advanced modeling techniques can perform relational learning to identify such attacks. Finally, we provide potential future research avenues and challenges for the realization of the digital twin framework to enable secure semiconductor lifecycle management.
Last updated:  2022-09-28
Guaranteed Output in $O(\sqrt{n})$ Rounds for Round-Robin Sampling Protocols
Ran Cohen, Jack Doerner, Yashvanth Kondi, abhi shelat
We introduce a notion of round-robin secure sampling that captures several protocols in the literature, such as the "powers-of-tau" setup protocol for pairing-based polynomial commitments and zk-SNARKs, and certain verifiable mixnets. Due to their round-robin structure, protocols of this class inherently require $n$ sequential broadcast rounds, where $n$ is the number of participants. We describe how to compile them generically into protocols that require only $O(\sqrt{n})$ broadcast rounds. Our compiled protocols guarantee output delivery against any dishonest majority. This stands in contrast to prior techniques, which require $\Omega(n)$ sequential broadcasts in most cases (and sometimes many more). Our compiled protocols permit a certain amount of adversarial bias in the output, as all sampling protocols with guaranteed output must, due to Cleve's impossibility result (STOC'86). We show that in the context of the aforementioned applications, this bias is harmless.
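A toy model of the round-robin structure for a powers-of-tau style setup (schoolbook arithmetic in $\mathbb{Z}_p^*$; real ceremonies use pairing-friendly curve groups, and the parameter names here are illustrative):

```python
import random

# Toy group: Z_p^* for a Mersenne prime p (illustration only; real ceremonies
# use pairing-friendly elliptic-curve groups). Exponents live modulo q = p - 1.
p = 2**127 - 1
g, q = 3, p - 1
DEGREE = 4

def contribute(srs, s):
    # One round-robin step: the i-th element g^(tau^i) becomes g^((tau*s)^i),
    # so each participant multiplies their secret s into the accumulated tau.
    return [pow(elem, pow(s, i, q), p) for i, elem in enumerate(srs)]

srs = [g] * (DEGREE + 1)            # initial SRS, corresponding to tau = 1
tau = 1                             # tracked here only to check correctness
for _ in range(5):                  # five sequential (round-robin) contributors
    s = random.randrange(2, q)
    srs = contribute(srs, s)
    tau = tau * s % q
assert srs == [pow(g, pow(tau, i, q), p) for i in range(DEGREE + 1)]
```

Each contribution must be broadcast before the next one starts, which is the source of the $n$ sequential rounds that the paper's compiler reduces to $O(\sqrt{n})$.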
Last updated:  2024-01-09
Multi-Designated Receiver Signed Public Key Encryption
Ueli Maurer, Christopher Portmann, and Guilherme Rito
This paper introduces a new type of public-key encryption scheme, called Multi-Designated Receiver Signed Public Key Encryption (MDRS-PKE), which allows a sender to select a set of designated receivers and both encrypt and sign a message that only these receivers will be able to read and authenticate (confidentiality and authenticity). An MDRS-PKE scheme provides several additional security properties which allow for a fundamentally new type of communication not considered before. Namely, it satisfies consistency---a dishonest sender cannot make different receivers receive different messages---off-the-record---a dishonest receiver cannot convince a third party of what message was sent (e.g., by selling their secret key), because dishonest receivers have the ability to forge signatures---and anonymity---parties that are not in the set of designated receivers cannot identify who the sender and designated receivers are. We give a construction of an MDRS-PKE scheme from standard assumptions. At the core of our construction lies yet another new type of public-key encryption scheme, which is of independent interest: Public Key Encryption for Broadcast (PKEBC) which provides all the security guarantees of MDRS-PKE schemes, except authenticity. We note that MDRS-PKE schemes give strictly more guarantees than Multi-Designated Verifier Signatures (MDVS) schemes with privacy of identities. This in particular means that our MDRS-PKE construction yields the first MDVS scheme with privacy of identities from standard assumptions. The only prior construction of such schemes was based on Verifiable Functional Encryption for general circuits (Damgård et al., TCC '20).
Last updated:  2022-03-02
Round-Optimal Byzantine Agreement
Diana Ghinea, Vipul Goyal, Chen-Da Liu-Zhang
Byzantine agreement is a fundamental primitive in cryptography and distributed computing, and minimizing its round complexity is of paramount importance. It has long been known that any randomized $r$-round protocol must fail with probability at least $(c\cdot r)^{-r}$, for some constant $c$, when the number of corruptions is linear in the number of parties, $t = \Theta(n)$. On the other hand, current protocols fail with probability at least $2^{-r}$. Whether the lower-bound agreement probability can be matched remained unknown. In this work, we resolve this long-standing open question. We present a protocol that matches the lower bound up to constant factors. Our results hold under a (strongly rushing) adaptive adversary that can corrupt up to $t = (1-\epsilon)n/2$ parties, and our protocols use a public-key infrastructure and a trusted setup for unique threshold signatures. This is the first protocol that decreases the failure probability (overall) by a 'super-constant' factor per round.
Last updated:  2022-03-02
Unprotected and Masked Hardware Implementations of Spook v2
Charles Momin, Gaëtan Cassiers, François-Xavier Standaert
We describe FPGA implementations of the Spook candidate to the NIST lightweight cryptography competition in two flavors. First, unprotected implementations that exhibit excellent throughput and energy consumption for the area target specified by the NIST benchmarking initiative. Second, protected implementations leveraging the leveled-implementation concept that the Spook design enables, confirming the significant performance gains it allows.
Last updated:  2023-04-13
The Side-Channel Metrics Cheat Sheet
Kostas Papagiannopoulos, Ognjen Glamocanin, Melissa Azouaoui, Dorian Ros, Francesco Regazzoni, Mirjana Stojilovic
Side-channel attacks exploit a physical observable originating from a cryptographic device in order to extract its secrets. Many practically relevant advances in the field of side-channel analysis relate to security evaluations of cryptographic functions and devices. Accordingly, many metrics have been adopted or defined to express and quantify side-channel security. These metrics can relate to one another, but also conflict in terms of effectiveness, assumptions and security goals. In this work, we review the most commonly used metrics in the field of side-channel analysis. We provide a self-contained presentation of each metric, along with a discussion of its limitations. We practically demonstrate the metrics on examples of relevant implementations of the Advanced Encryption Standard (AES), and make the software implementation of the presented metrics available to the community as open source. This work, being beyond a survey of the current status of metrics, will allow researchers and practitioners to produce a well-informed security evaluation through a better understanding of its supporting and summarizing metrics.
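As an example of one widely used metric, a short self-contained sketch of pointwise Welch's t-test as used in TVLA-style leakage assessment (a generic version; the paper's open-source implementations may differ in detail):

```python
import numpy as np

def welch_t(traces_a, traces_b):
    # Pointwise Welch's t-statistic between two trace sets (TVLA-style).
    ma, mb = traces_a.mean(axis=0), traces_b.mean(axis=0)
    va, vb = traces_a.var(axis=0, ddof=1), traces_b.var(axis=0, ddof=1)
    na, nb = len(traces_a), len(traces_b)
    return (ma - mb) / np.sqrt(va / na + vb / nb)

rng = np.random.default_rng(0)
fixed = rng.normal(0.0, 1.0, size=(5000, 100))   # fixed-input traces
rnd = rng.normal(0.0, 1.0, size=(5000, 100))     # random-input traces
rnd[:, 42] += 0.2                                # inject a tiny "leak" at sample 42
t = welch_t(fixed, rnd)
print(np.argmax(np.abs(t)), np.abs(t).max() > 4.5)   # |t| > 4.5 flags leakage
```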
Last updated:  2022-03-02
Handcrafting: Improving Automated Masking in Hardware with Manual Optimizations
Charles Momin, Gaëtan Cassiers, François-Xavier Standaert
Masking is an important countermeasure against side-channel attacks, but its secure implementation is known to be error-prone. The automated verification and generation of masked designs is therefore an important theoretical and practical challenge. In a recent work, Knichel et al. proposed a tool for the automated generation of masked hardware implementations satisfying strong security properties (e.g., glitch-freeness and composability). In this paper, we study the possibility to improve their results based on manual performance optimizations for the AES algorithm. Our main conclusion is that as the target architecture becomes more serial, such a handcrafted approach gains interest. For example, we reach latency reductions by a factor six for 8-bit architectures. We conclude the paper by discussing the extent to which such optimizations could be integrated in the tool of Knichel et al. As a bonus, we adapt a composition-based verification tool to check that our implementations are robust against glitches & transitions, and confirm the security order of exemplary implementations with preliminary leakage assessment.
Last updated:  2023-07-20
CoCoA: Concurrent Continuous Group Key Agreement
Joël Alwen, Benedikt Auerbach, Miguel Cueto Noval, Karen Klein, Guillermo Pascual-Perez, Krzysztof Pietrzak, Michael Walter
Messaging platforms like Signal are widely deployed and provide strong security in an asynchronous setting. It is a challenging problem to construct a protocol with similar security guarantees that can \emph{efficiently} scale to large groups. A major bottleneck is the frequent key rotations users need to perform to achieve post-compromise forward security. In current proposals -- most notably in TreeKEM (which is part of the IETF's Messaging Layer Security (MLS) protocol draft) -- for users in a group of size $n$ to rotate their keys, they must each craft a message of size $\log(n)$ to be broadcast to the group using an (untrusted) delivery server. In larger groups, having users sequentially rotate their keys requires too much bandwidth (or takes too long), so variants allowing any $T \leq n$ users to simultaneously rotate their keys in just $2$ communication rounds have been suggested (e.g. "Propose and Commit" by MLS). Unfortunately, $2$-round concurrent updates are either damaging or expensive (or both); i.e. they either result in future operations being more costly (e.g. via "blanking'' or "tainting'') or are costly themselves, requiring $\Omega(T)$ communication for each user [Bienstock et al., TCC'20]. In this paper we propose CoCoA; a scheme that allows for $T$ concurrent updates that are neither damaging nor costly. That is, they add no cost to future operations yet only require $O(\log^2(n))$ communication per user. To circumvent the [Bienstock et al.] lower bound, CoCoA increases the number of rounds needed to complete all updates from $2$ up to (at most) $\log(n)$; though typically fewer rounds are needed. The key insight of the protocol is the following: in the (non-concurrent version of) TreeKEM, a delivery server which gets $T$ concurrent update requests will approve one and reject the remaining $T-1$. In contrast, our server attempts to apply all of them. If more than one user requests to rotate the same key during a round, the server arbitrarily picks a winner. Surprisingly, we prove that regardless of how the server chooses the winners, all previously compromised users will recover after at most $\log(n)$ such update rounds. To keep the communication complexity low, CoCoA is a server-aided CGKA. That is, the delivery server no longer blindly forwards packets, but instead actively computes individualized packets tailored to each user. As the server is untrusted, this change requires us to develop new mechanisms ensuring robustness of the protocol.
Last updated:  2022-03-02
Private Circuits with Quasilinear Randomness
Vipul Goyal, Yuval Ishai, Yifan Song
A $t$-private circuit for a function $f$ is a randomized Boolean circuit $C$ that maps a randomized encoding of an input $x$ to an encoding of the output $f(x)$, such that probing $t$ wires anywhere in $C$ reveals nothing about $x$. Private circuits can be used to protect embedded devices against side-channel attacks. Motivated by the high cost of generating fresh randomness in such devices, several works have studied the question of minimizing the randomness complexity of private circuits. The best known upper bound, due to Coron et al. (Eurocrypt 2020), is $O(t^2\cdot\log ts)$ random bits, where $s$ is the circuit size of $f$. We improve this to $O(t\cdot \log ts)$, including the randomness used by the input encoder, and extend this bound to the stateful variant of private circuits. Our constructions are semi-explicit in the sense that there is an efficient randomized algorithm that generates the private circuit $C$ from a circuit for $f$ with negligible failure probability.
Last updated:  2022-03-02
The Summation-Truncation Hybrid: Reusing Discarded Bits for Free
Aldo Gunsing, Bart Mennink
A well-established PRP-to-PRF conversion design is truncation: one evaluates an $n$-bit pseudorandom permutation on a certain input, and truncates the result to $a$ bits. The construction is known to achieve tight $2^{n-a/2}$ security. Truncation has gained popularity due to its appearance in the GCM-SIV key derivation function (ACM CCS 2015). This key derivation function makes four evaluations of AES, truncates the outputs to $n/2$ bits, and concatenates these to get a $2n$-bit subkey. In this work, we demonstrate that truncation is wasteful. In more detail, we present the Summation-Truncation Hybrid (STH). At a high level, the construction consists of two parallel evaluations of truncation, where the truncated $(n-a)$-bit chunks are not discarded but rather summed together and appended to the output. We prove that STH achieves a similar security level as truncation, and thus that the $n-a$ bits of extra output is rendered for free. In the application of GCM-SIV, the current key derivation can be used to output $3n$ bits of random material, or it can be reduced to three primitive evaluations. Both changes come with no security loss.
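A bit-level toy sketch of the construction (the "permutation" below is a hash-based stand-in, and the summation of the discarded chunks is taken to be XOR, i.e. addition over $\mathrm{GF}(2)^{n-a}$):

```python
import hashlib

N, A = 64, 40              # permutation width n, truncation length a

def toy_prp(x):
    # Stand-in for an n-bit permutation, for illustration only (a real
    # instantiation would use a block cipher such as AES).
    return int.from_bytes(hashlib.sha256(x.to_bytes(8, "big")).digest()[:8], "big")

def sth(x1, x2):
    # Summation-Truncation Hybrid on two inputs: output the two a-bit
    # truncations plus the XOR of the two discarded (n-a)-bit chunks,
    # for 2a + (n - a) output bits from two permutation calls.
    y1, y2 = toy_prp(x1), toy_prp(x2)
    t1, t2 = y1 >> (N - A), y2 >> (N - A)          # kept a-bit parts
    mask = (1 << (N - A)) - 1
    s = (y1 & mask) ^ (y2 & mask)                  # reused discarded bits
    return t1, t2, s

print(sth(1, 2))
```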
Last updated:  2022-03-02
Collapseability of Tree Hashes
Aldo Gunsing, Bart Mennink
One oft-endeavored security property for cryptographic hash functions is collision resistance: it should be computationally infeasible to find distinct inputs $x,x'$ such that $H(x) = H(x')$, where $H$ is the hash function. Unruh (EUROCRYPT 2016) proposed collapseability as its quantum equivalent. The Merkle-Damgård and sponge hashing modes have recently been proven to be collapseable under the assumption that the underlying primitive is collapseable. These modes are inherently sequential. In this work, we investigate collapseability of tree hashing. We first consider fixed-length tree hashing modes, and derive conditions under which their collapseability can be reduced to the collapseability of the underlying compression function. Then, we extend the result to two methods for achieving variable-length hashing: tree hashing with domain separation between message and chaining value, and tree hashing with length encoding at the end of the tree. The proofs are performed using the collapseability composability framework of Fehr (TCC 2018), which allows us to discard deeply technical quantum details and to focus on proper composition of the tree hashes from their compression function.
Last updated:  2022-03-02
Deck-Based Wide Block Cipher Modes and an Exposition of the Blinded Keyed Hashing Model
Aldo Gunsing, Joan Daemen, Bart Mennink
We present two tweakable wide block cipher modes from doubly-extendable cryptographic keyed (deck) functions and a keyed hash function: double-decker and docked-double-decker. Double-decker is a direct generalization of Farfalle-WBC of Bertoni et al. (ToSC 2017(4)), and is a four-round Feistel network on two arbitrarily large branches, where the middle two rounds call deck functions and the first and last rounds call the keyed hash function. Docked-double-decker is a variant of double-decker where the bulk of the input to the deck functions is moved to the keyed hash functions. We prove that the distinguishing advantage of the resulting wide block ciphers is simply two times the sum of the pseudorandom function distinguishing advantage of the deck function and the blinded keyed hashing distinguishing advantage of the keyed hash functions. We demonstrate that blinded keyed hashing is more general than the conventional notion of XOR-universality, and that it allows us to instantiate our constructions with keyed hash functions that have a very strong claim on bkh security but not necessarily on XOR-universality, such as Xoofffie (ePrint 2018/767). The bounds of double-decker and docked-double-decker are moreover reduced tweak-dependent, informally meaning that collisions on the keyed hash function for different tweaks only have a limited impact. We describe two use cases that can exploit this property opportunistically to get stronger security than what would be achieved with prior solutions: SSD encryption, where each sector can only be written to a limited number of times, and incremental tweaks, where one includes the state of the system in the variable-length tweak and appends new data incrementally.
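The four-round Feistel shape of double-decker, sketched with placeholder primitives and fixed-size branches (hash-based stand-ins for the deck functions and the keyed hash; the real modes support arbitrarily large branches and careful tweak and length handling, all omitted here):

```python
import hashlib

def H(key, tweak, data):
    # Placeholder keyed hash (rounds 1 and 4), stand-in only.
    return hashlib.blake2b(tweak + data, key=key, digest_size=32).digest()

def F(key, tweak, data):
    # Placeholder deck function (rounds 2 and 3), stand-in only.
    return hashlib.shake_256(key + tweak + data).digest(32)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(keys, tweak, L, R):
    kh1, kd1, kd2, kh2 = keys
    L = xor(L, H(kh1, tweak, R))   # round 1: keyed hash
    R = xor(R, F(kd1, tweak, L))   # round 2: deck function
    L = xor(L, F(kd2, tweak, R))   # round 3: deck function
    R = xor(R, H(kh2, tweak, L))   # round 4: keyed hash
    return L, R

def decrypt(keys, tweak, L, R):
    kh1, kd1, kd2, kh2 = keys
    R = xor(R, H(kh2, tweak, L))
    L = xor(L, F(kd2, tweak, R))
    R = xor(R, F(kd1, tweak, L))
    L = xor(L, H(kh1, tweak, R))
    return L, R

keys = tuple(bytes([i]) * 32 for i in range(4))
L, R = b"A" * 32, b"B" * 32        # fixed-size branches for simplicity
assert decrypt(keys, b"tw", *encrypt(keys, b"tw", L, R)) == (L, R)
```

The Feistel structure makes the mode invertible regardless of the round functions, which is why deck functions and keyed hashes, neither of which is invertible, suffice.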
Last updated:  2023-09-26
On the Concrete Security of TLS 1.3 PSK Mode
Hannah Davis, Denis Diemert, Felix Günther, and Tibor Jager
The pre-shared key (PSK) handshake modes of TLS 1.3 allow for the performant, low-latency resumption of previous connections and are widely used on the Web and by resource-constrained devices, e.g., in the Internet of Things. Taking advantage of these performance benefits with optimal and theoretically-sound parameters requires tight security proofs. We give the first tight security proofs for the TLS 1.3 PSK handshake modes. Our main technical contribution is to address a gap in prior tight security proofs of TLS 1.3 which modeled either the entire key schedule or components thereof as independent random oracles to enable tight proof techniques. These approaches ignore existing interdependencies in TLS 1.3's key schedule, arising from the fact that the same cryptographic hash function is used in several components of the key schedule and the handshake more generally. We overcome this gap by proposing a new abstraction for the key schedule and carefully arguing its soundness via the indifferentiability framework. Interestingly, we observe that for one specific configuration, PSK-only mode with hash function SHA-384, it seems difficult to argue indifferentiability due to a lack of domain separation between the various hash function usages. We view this as an interesting insight for the design of protocols, such as future TLS versions. For all other configurations however, our proofs significantly tighten the security of the TLS 1.3 PSK modes, confirming standardized parameters (for which prior bounds provided subpar or even void guarantees) and enabling a theoretically-sound deployment.
Last updated:  2023-02-20
Entropic Hardness of Module-LWE from Module-NTRU
Katharina Boudgoust, Corentin Jeudy, Adeline Roux-Langlois, Weiqiang Wen
The Module Learning With Errors problem (M-LWE) has gained popularity in recent years for its security-efficiency balance, and its hardness has been established for a number of variants. In this paper, we focus on proving the hardness of (search) M-LWE for general secret distributions, provided they carry sufficient min-entropy. This is called entropic hardness of M-LWE. First, we adapt the line of proof of Brakerski and Döttling on R-LWE (TCC’20) to prove that the existence of certain distributions implies the entropic hardness of M-LWE. Then, we provide one such distribution whose required properties rely on the hardness of the decisional Module-NTRU problem.
Last updated:  2022-03-02
Universally Composable Subversion-Resilient Cryptography
Suvradip Chakraborty, Bernardo Magri, Jesper Buus Nielsen, Daniele Venturi
Subversion attacks undermine security of cryptographic protocols by replacing a legitimate honest party's implementation with one that leaks information in an undetectable manner. An important limitation of all currently known techniques for designing cryptographic protocols with security against subversion attacks is that they do not automatically guarantee security in the realistic setting where a protocol session may run concurrently with other protocols. We remedy this situation by providing a foundation of *reverse firewalls* (Mironov and Stephens-Davidowitz, EUROCRYPT'15) in the *universal composability* (UC) framework (Canetti, FOCS'01 and J. ACM'20). In more detail, our contributions are threefold: - We generalize the UC framework to the setting where each party consists of a core (which has secret inputs and is in charge of generating protocol messages) and a firewall (which has no secrets and sanitizes the outgoing/incoming communication from/to the core). Both the core and the firewall can be subject to different flavors of corruption, modeling different kinds of subversion attacks. For instance, we capture the setting where a subverted core looks like the honest core to any efficient test, yet it may leak secret information via covert channels (which we call *specious subversion*). - We show how to sanitize UC commitments and UC coin tossing against specious subversion, under the DDH assumption. - We show how to sanitize the classical GMW compiler (Goldreich, Micali and Wigderson, STOC 1987) for turning MPC with security in the presence of semi-honest adversaries into MPC with security in the presence of malicious adversaries. This yields a completeness theorem for maliciously secure MPC in the presence of specious subversion. Additionally, all our sanitized protocols are *transparent*, in the sense that communicating with a sanitized core looks indistinguishable from communicating with an honest core. Thanks to the composition theorem, our methodology allows, for the first time, to design subversion-resilient protocols by sanitizing different sub-components in a modular way.
Last updated:  2022-03-02
A Greater GIFT: Strengthening GIFT against Statistical Cryptanalysis
Ling Sun, Bart Preneel, Wei Wang, Meiqin Wang
GIFT-64 is a 64-bit block cipher with a 128-bit key that is more lightweight than PRESENT. This paper provides a detailed analysis of GIFT-64 against differential and linear attacks. Our work complements automatic search methods for the best differential and linear characteristics with a careful manual analysis. This hybrid approach leads to new insights. In the differential setting, we theoretically explain the existence of differential characteristics with two active S-boxes per round and derive some novel properties of these characteristics. Furthermore, we prove that all optimal differential characteristics of GIFT-64 covering more than seven rounds must activate two S-boxes per round. We can construct all optimal characteristics by hand. In parallel to the work in the differential setting, we conduct a similar analysis in the linear setting. However, unlike the clear view in the differential setting, the optimal linear characteristics of GIFT-64 must have at least one round activating only one S-box. Moreover, with the assistance of automatic search methods, we identify 24 GIFT-64 variants achieving better resistance against differential attacks while maintaining a similar security level against linear attacks. Since the new variants strengthen GIFT-64 against statistical cryptanalysis, we claim that the number of rounds could be reduced from 28 to 26 for the variants. This observation enables us to create a cipher with lower energy consumption than GIFT-64. Similarly to the case in GIFT-64, we do not claim any related-key security for the round-reduced variants as this is not relevant for most applications.
Last updated:  2022-12-05
YOLO YOSO: Fast and Simple Encryption and Secret Sharing in the YOSO Model
Ignacio Cascudo, Bernardo David, Lydia Garms, Anders Konring
Achieving adaptive (or proactive) security in cryptographic protocols is notoriously difficult due to the adversary's power to dynamically corrupt parties as the execution progresses. Inspired by the work of Benhamouda et al. in TCC 2020, Gentry et al. in CRYPTO 2021 introduced the YOSO (You Only Speak Once) model for constructing adaptively (or proactively) secure protocols in massively distributed settings (e.g. blockchains). In this model, instead of having all parties execute an entire protocol, smaller anonymous committees are randomly chosen to execute each individual round of the protocol. After playing their role, parties encrypt protocol messages towards the next anonymous committee and erase their internal state before publishing their ciphertexts. However, a big challenge remains in realizing YOSO protocols: efficiently encrypting messages towards anonymous parties selected at random without learning their identities, while proving the encrypted messages are valid w.r.t. the protocol. In particular, the protocols of Benhamouda et al. and of Gentry et al. require showing ciphertexts contain valid shares of secret states. We propose concretely efficient methods for encrypting a protocol's secret state towards a random anonymous committee. We start by proposing a very simple and efficient scheme for encrypting messages towards randomly and anonymously selected parties. We then show constructions of publicly verifiable secret (re-)sharing (PVSS) schemes with concretely efficient proofs of (re-)share validity that can be generically instantiated from encryption schemes with certain linear homomorphic properties. Finally, we show that our PVSS schemes can be efficiently realized from our encryption scheme.
Last updated:  2022-07-13
Coalition and Threshold Hash-Based Signatures
John Kelsey, Stefan Lucks, Nathalie Lang
In a distributed digital signature scheme, coalitions of “trustees” can jointly create a valid signature. We propose a distributed version of stateful hash-based signature schemes like those defined in XMSS (RFC 8391) and LMS (RFC 8554). Our schemes allow a dealer, who has generated the secret keys and could create valid signatures, to delegate the ability to sign to coalitions of trustees. Our schemes support k-of-n threshold signatures, where every k-subset of a total of $n \ge k$ trustees forms a coalition, as well as more complex authorization structures. We require only secure point-to-point communications. Our schemes are efficient in terms of communication and computation. They are also storage-efficient, except for needing a large (but practical) public database for non-confidential data. Assuming a secure PRF and the security of the underlying HBS, our schemes are provably secure. Our schemes are practical, provided one avoids an excessively large number of coalitions. The security of stateful hash-based signatures crucially depends on never using a one-time key a second time; otherwise the key would be compromised. We argue that delegating one’s signing capability to some coalitions of trustees, as done by our schemes, substantially decreases the risk of such a compromise.
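As a toy illustration of how such delegation can be organized (a simplified sketch, not the paper's construction): a dealer can assign each coalition a disjoint block of one-time-signature (OTS) leaf indices and XOR-split every leaf seed among the coalition's members, so that only a full coalition can reconstruct a key and sign once. All names and parameters below are hypothetical.

```python
from hashlib import sha256
from itertools import combinations
from secrets import token_bytes

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def delegate(n, k, leaves_per_coalition, master_seed):
    """Assign a disjoint block of OTS leaf indices to every k-of-n coalition
    and XOR-split each leaf's seed among the coalition's members."""
    trustee_shares = {i: {} for i in range(n)}   # trustee -> {leaf: share}
    for c_idx, coalition in enumerate(combinations(range(n), k)):
        base = c_idx * leaves_per_coalition       # disjoint leaf-index block
        for leaf in range(base, base + leaves_per_coalition):
            seed = sha256(master_seed + leaf.to_bytes(4, "big")).digest()
            shares = [token_bytes(32) for _ in range(k - 1)]
            last = seed
            for s in shares:                      # last share = seed XOR rest
                last = xor_bytes(last, s)
            for trustee, share in zip(coalition, shares + [last]):
                trustee_shares[trustee][leaf] = share
    return trustee_shares

shares = delegate(n=4, k=2, leaves_per_coalition=2, master_seed=b"ms")
# Coalition (0, 1) owns leaves 0..1; together its members recover leaf 0,
# while no smaller set learns anything about that one-time seed.
recovered = xor_bytes(shares[0][0], shares[1][0])
assert recovered == sha256(b"ms" + (0).to_bytes(4, "big")).digest()
```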
Last updated:  2023-06-06
SNACKs: Leveraging Proofs of Sequential Work for Blockchain Light Clients
Hamza Abusalah, Georg Fuchsbauer, Peter Gaži, Karen Klein
The success of blockchains has led to ever-growing ledgers that are stored by all participating full nodes. In contrast, light clients only store small amounts of blockchain-related data and rely on the mediation of full nodes when interacting with the ledger. A broader adoption of blockchains calls for protocols that make this interaction trustless. We revisit the design of light-client blockchain protocols from the perspective of classical proof-system theory, and explain the role that proofs of sequential work (PoSWs) can play in it. To this end, we define a new primitive called succinct non-interactive argument of chain knowledge (SNACK), a non-interactive proof system that provides clear security guarantees to a verifier (a light client) even when interacting only with a single dishonest prover (a full node). We show how augmenting any blockchain with any graph-labeling PoSW (GL-PoSW) enables SNACK proofs for this blockchain. We also provide a unified and extended definition of GL-PoSWs covering all existing constructions, and describe two new variants. We then show how SNACKs can be used to construct light-client protocols, and highlight some deficiencies of existing designs, along with mitigations. Finally, we introduce incremental SNACKs which could provide a new approach to light mining.
Last updated:  2022-02-25
Several Improvements on BKZ Algorithm
Ziyu Zhao, Jintai Ding
Lattice problems such as the NTRU and LWE problems are widely used as the security foundation of post-quantum cryptosystems, and lattice reduction via the BKZ algorithm is currently the most efficient way to attack them. In this paper, we give several further improvements on the BKZ algorithm, which can be used for SVP subroutines based on both enumeration and sieving. In combination, these improvements provide a speedup of $2^{3\sim 4}$ in total, which is significant in concrete attacks. Using these new techniques, we solved the 656-dimensional ideal lattice challenge in only 380 thread hours (with an enumeration-based SVP subroutine), much less than the previous record of 4600 thread hours in total. With these improvements enabled, we can still simulate the new BKZ algorithm easily. One can also use this simulator to find the blocksize strategy (and the corresponding cost) that makes the $\mathrm{Pot}$ of the basis (defined in Section 5.2) decrease as fast as possible, which means the length of the first basis vector decreases the fastest if we accept the GSA assumption. This is useful for analyzing concrete attacks on lattice-based cryptography.
Last updated:  2022-08-20
HEAD: an FHE-based Privacy-preserving Cloud Computing Protocol with Compact Storage and Efficient Computation
Lijing Zhou, Ziyu Wang, Hongrui Cui, Xiao Zhang, Xianggui Wang, Yu Yu
Fully homomorphic encryption (FHE) provides a natural solution for privacy-preserving cloud computing, but a straightforward FHE protocol may suffer from high computational overhead and a large ciphertext expansion rate, especially for computation-intensive tasks over large data, which are the main obstacles toward practical privacy-preserving cloud computing. In this paper, we present HEAD, a generic privacy-preserving cloud computing protocol that can be based on most mainstream FHE schemes (typically BGV- or GSW-style schemes) with more compact storage and lower computational cost than the straightforward FHE counterpart. In particular, our protocol enjoys a ciphertext/plaintext expansion rate of 1 (i.e., no expansion) in the cloud computing server, instead of a factor of hundreds of thousands. This is achieved by means of ``pseudorandomly masked'' ciphertexts, and efficient transformations of these into FHE ciphertexts to facilitate privacy-preserving cloud computing. Depending on the underlying FHE in use, our HEAD protocol can be instantiated with three masking techniques, namely modulo-subtraction-masking, modulo-division-masking, and XOR-masking, to support decimal integer, real, or binary messages. Thanks to these masking techniques, various homomorphic computation tasks are made more efficient and less prone to noise accumulation. Furthermore, our multi-input masking and unmasking operations are more flexible than FHE SIMD-batching, by supporting an on-demand configuration of FHE during each cloud computing request. We evaluate the performance of HEAD protocols on the BFV, BGV, CKKS, and FHEW schemes based on the PALISADE and SEAL libraries, which confirms the theoretical analysis of the storage savings and the reductions in computational complexity and noise accumulation. For example, in the BFV computation optimization, the overhead of summing or multiplying eight ciphertexts is reduced from 336.3 ms to 6.3 ms and from 1219.4 ms to 9.5 ms, respectively. We also embed HEAD into a mainstream database, PostgreSQL, in a client-server cloud storage and computing style. Compared with a straightforward FHE protocol, our experiments show that HEAD does not incur ciphertext expansion, and exhibits at least an order of magnitude saving in computing time at the server side for various tasks (on a hundred ciphertexts), by paying a reasonable price in client pre-processing time and communication. Our storage advantage not only gets around the database storage limitation but also reduces the I/O overhead.
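To make the masking idea concrete, here is a minimal sketch of the XOR-masking flavor, with mock stand-ins (Enc, hom_xor) for a real FHE API; the actual HEAD transformations are more involved, so this is only for intuition.

```python
from hashlib import sha256

def prf(key: bytes, index: int, nbytes: int = 16) -> bytes:
    return sha256(key + index.to_bytes(8, "big")).digest()[:nbytes]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# --- client side: storage with expansion rate 1 ---------------------------
key = b"client-secret-prf-key"
message = b"0123456789abcdef"                 # 16-byte record
masked = xor(message, prf(key, 0))            # what the server stores at rest

# --- computation request ---------------------------------------------------
# To run an FHE computation, the client sends Enc(prf(key, 0)); the server
# trivially encrypts the stored masked value (which it already knows) and
# homomorphically XORs the two, obtaining Enc(message) without any FHE
# ciphertext ever being stored at rest.  Enc/hom_xor are mock stand-ins.
def Enc(x):
    return ("ct", x)                          # mock encryption

def hom_xor(c1, c2):
    return ("ct", xor(c1[1], c2[1]))          # mock homomorphic XOR

ct_mask = Enc(prf(key, 0))                    # sent by the client
ct_message = hom_xor(Enc(masked), ct_mask)    # computed by the server
assert ct_message[1] == message
```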
Last updated:  2022-06-23
Public Randomness Extraction with Ephemeral Roles and Worst-Case Corruptions
Jesper Buus Nielsen, João Ribeiro, Maciej Obremski
We distill a simple information-theoretic model for randomness extraction motivated by the task of generating publicly verifiable randomness in blockchain settings and which is closely related to You-Only-Speak-Once (YOSO) protocols (CRYPTO 2021). With the goal of avoiding denial-of-service attacks, parties speak only once and in sequence by broadcasting a public value and forwarding secret values to future parties. Additionally, an unbounded adversary can corrupt any chosen subset of at most $t$ parties. In contrast, existing YOSO protocols only handle random corruptions. As a notable example, considering worst-case corruptions allows us to reduce trust in the role assignment mechanism, which is assumed to be perfectly random in YOSO. We study the maximum corruption threshold $t$ which allows for unconditional randomness extraction in our model: - With respect to feasibility, we give protocols for $t$ corruptions and $n=6t+1$ or $n=5t$ parties depending on whether the adversary learns secret values forwarded to corrupted parties immediately once they are sent or only once the corrupted party is executed, respectively. Both settings are motivated by practical implementations of secret value forwarding. To design such protocols, we go beyond the committee-based approach that is sufficient for random corruptions in YOSO but turns out to be sub-optimal for chosen corruptions. - To complement our protocols, we show that low-error randomness extraction is impossible with corruption threshold $t$ and $n \leq 4t$ parties in both settings above. This also provides a separation between chosen and random corruptions, since the latter allows for randomness extraction with close to $n/2$ random corruptions.
Last updated:  2022-10-07
Characterizing the qIND-qCPA (in)security of the CBC, CFB, OFB and CTR modes of operation
Tristan Nemoz, Zoé AMBLARD, Aurélien DUPIN
We fully characterize the post-quantum security of the \(\mathsf{CBC}\), \(\mathsf{CFB}\), \(\mathsf{OFB}\) and \(\mathsf{CTR}\) modes of operation by considering all possible notions of \(\textsf{qIND-qCPA}\) security defined by Carstens, Ebrahimi, Tabia and Unruh (TCC 2021), thus extending the work performed by Anand, Targhi, Tabia and Unruh (PQCrypto 2016). We show that the results obtained by Anand et al. for the \(\textsf{qIND-qCPA-P6}\) security of these modes carry over to the other \(\textsf{IND-qCPA}\) notions, namely the \(\textsf{qIND-qCPA-P10}\) and \(\textsf{qIND-qCPA-P11}\) ones. We also show that all of these modes are insecure according to all of the other notions, regardless of the block cipher they are used with. We further provide two general results concerning the insecurity of constructions with commonly used properties, namely those preserving the length of their input and those using the \(\texttt{XOR}\) operation as a way to randomize the encryption. Finally, we use these results to highlight the need for new quantum semantic security notions.
Last updated:  2022-02-25
Limits of Preprocessing for Single-Server PIR
Giuseppe Persiano, Kevin Yeo
We present a lower bound for the static cryptographic data structure problem of single-server private information retrieval (PIR). PIR considers the setting where a server holds a database of $n$ entries and a client wishes to privately retrieve the $i$-th entry without revealing the index $i$ to the server. In our work, we focus on PIR with preprocessing where an $r$-bit hint may be computed in a preprocessing stage and stored by the server to be used to perform private queries in expected time $t$. We consider the public preprocessing setting of Beimel et al. [JoC, 2004] where the hint is publicly available to everyone including the adversary. We prove that for any single-server computationally secure PIR with preprocessing it must be that $tr = \Omega(n \log n)$ when $r = \Omega(\log n)$. If $r = O(\log n)$, we show that $t = \Omega(n)$. Our lower bound holds even when the scheme errs with probability $1/n^2$ and the adversary’s distinguishing advantage is $1/n$. Our work improves upon the $tr = \Omega(n)$ lower bound of Beimel et al. [JoC, 2004]. We prove our lower bound in a variant of the cell probe model where only accesses to memory are charged, while computation and accesses to the hint are free. Our main technical contribution is a novel use of the cell sampling technique (also known as the incompressibility technique) used to obtain lower bounds on data structures. In previous works, this technique leveraged only the correctness guarantees to prove lower bounds, even when applied to cryptographic primitives. Our work combines the cell sampling technique with the privacy guarantees of PIR to construct a powerful, polynomial-time adversary that is critical to proving our stronger lower bounds.
Last updated:  2023-04-06
New algorithms for the Deuring correspondence: Towards practical and secure SQISign signatures
Luca De Feo, Antonin Leroux, Patrick Longa, Benjamin Wesolowski
The Deuring correspondence defines a bijection between isogenies of supersingular elliptic curves and ideals of maximal orders in a quaternion algebra. We present a new algorithm to translate ideals of prime-power norm to their corresponding isogenies --- a central task of the effective Deuring correspondence. The new method improves upon the algorithm introduced in 2021 by De Feo, Kohel, Leroux, Petit and Wesolowski as a building-block of the SQISign signature scheme. SQISign is the most compact post-quantum signature scheme currently known, but is several orders of magnitude slower than competitors, the main bottleneck of the computation being the ideal-to-isogeny translation. We implement the new algorithm and apply it to SQISign, achieving a more than two-fold speedup in key generation and signing with a new choice of parameter. Moreover, after adapting the state-of-the-art $\mathbb{F}_{p^2}$ multiplication algorithms by Longa to implement SQISign's underlying extension field arithmetic and adding various improvements, we push the total speedups to over three times for signing and four times for verification. In a second part of the article, we advance cryptanalysis by showing a very simple distinguisher against one of the assumptions used in SQISign. We present a way to impede the distinguisher through a few changes to the generic KLPT algorithm. We formulate a new assumption capturing these changes, and provide an analysis together with experimental evidence for its validity.
Last updated:  2023-02-21
Variational quantum solutions to the Shortest Vector Problem
Martin R. Albrecht, Miloš Prokop, Yixin Shen, Petros Wallden
A fundamental computational problem is to find a shortest non-zero vector in Euclidean lattices, a problem known as the Shortest Vector Problem (SVP). This problem is believed to be hard even on quantum computers and thus plays a pivotal role in post-quantum cryptography. In this work we explore how (efficiently) Noisy Intermediate Scale Quantum (NISQ) devices may be used to solve SVP. Specifically, we map the problem to that of finding the ground state of a suitable Hamiltonian. In particular, (i) we establish new bounds for lattice enumeration, which allow us to obtain new bounds (resp. estimates) on the number of qubits required per dimension to solve SVP for arbitrary lattices (resp. random q-ary lattices); (ii) we exclude the zero vector from the optimization space by proposing either (a) a different classical optimization loop or (b) a new mapping to the Hamiltonian. These improvements allow us to solve SVP in dimension up to 28 in a quantum emulation, significantly more than what was previously achieved, even for special cases. Finally, we extrapolate the size of NISQ devices required to solve instances of lattices that are hard even for the best classical algorithms, and find that such instances can be tackled with $\approx 10^3$ noisy qubits.
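To illustrate the encoding, the following toy sketch brute-forces the same objective a variational solver would minimize: the "energy" $x^T G x = \|Bx\|^2$ over bounded coefficient vectors, with the zero vector excluded, mirroring the ground-state formulation. The basis and the coefficient bound are arbitrary example values, not the paper's instances.

```python
import itertools
import numpy as np

# The Hamiltonian's energy for coefficient vector x is ||Bx||^2 = x^T G x,
# where G is the Gram matrix of the basis.  A NISQ device would minimize
# this variationally over a qubit encoding of x; here we brute-force the
# same search space (coefficients in [-R, R]) to exhibit the ground state.
B = np.array([[7,  1,  3],
              [1,  8, -2],
              [2, -1,  9]])              # example lattice basis (rows)
G = B @ B.T                               # Gram matrix
R = 2                                     # per-coordinate coefficient bound

best, best_x = None, None
for x in itertools.product(range(-R, R + 1), repeat=B.shape[0]):
    if not any(x):
        continue                          # exclude the zero vector
    v = np.array(x)
    energy = v @ G @ v                    # = squared norm of the lattice vector
    if best is None or energy < best:
        best, best_x = energy, x

print(f"shortest vector found: {np.array(best_x) @ B}, squared norm: {best}")
```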
Last updated:  2023-01-14
Conditional Variational AutoEncoder based on Stochastic Attack
Gabriel Zaid, Lilian Bossuet, Mathieu Carbone, Amaury Habrard, Alexandre Venelli
Over recent years, the cryptanalysis community has leveraged the potential of research on Deep Learning to enhance attacks. In particular, several studies have recently highlighted the benefits of Deep Learning based Side-Channel Attacks (DLSCA) in targeting real-world cryptographic implementations. While this new research area in applied cryptography provides impressive results for recovering a secret key even when countermeasures are implemented (e.g. desynchronization, masking schemes), the lack of theoretical results makes the construction of appropriate and powerful models a notoriously hard problem. This can be problematic during an evaluation process where a security bound is required. In this work, we propose the first solution that bridges DL and SCA in order to ease the use of DL techniques in an evaluation process. Based on theoretical results, we develop the first Machine Learning generative model, called Conditional Variational AutoEncoder based on Stochastic Attacks (cVAE-SA), designed from the well-known Stochastic Attacks introduced by Schindler et al. in 2005. This model reduces the black-box property of DL and eases the architecture design for every real-world crypto-system, as we define theoretical complexity bounds which only depend on the dimension of the (reduced) trace and the target variable over $\mathbb{F}_{2}^{n}$. We validate our theoretical proposition through simulations and public datasets on a wide range of use cases, including multi-task learning, the curse of dimensionality, and masking schemes.
Last updated:  2022-02-25
Towards Low-Latency Implementation of Linear Layers
Qun Liu, Weijia Wang, Yanhong Fan, Lixuan Wu, Ling Sun, Meiqin Wang
Lightweight cryptography features a small footprint and/or low computational complexity. Low-cost implementations of linear layers usually play an important role in lightweight cryptography. Although it has been shown by Boyar et al. that finding the optimal implementation of a linear layer is a Shortest Linear Program (SLP) problem and NP-hard, there exist a variety of heuristic methods to search for near-optimal solutions. This paper considers the low-latency criteria and focuses on the heuristic search for lightweight implementations of linear layers. Most prior approaches iteratively combine the inputs (of linear layers) to reach the outputs, which can be regarded as a forward search. To better meet the low-latency criteria, we propose a new framework of backward search that attempts to iteratively split every output (into an XOR of two bits) until all inputs appear. By bounding the time of splitting, the new framework can find a sub-optimal solution with a minimized circuit depth. We apply our new search algorithm to the linear layers of block ciphers and find many low-latency candidate implementations. Notably, for AES MixColumns, we provide an implementation with 103 XOR gates and a depth of 3, which is among the best hardware implementations of the AES linear layer. Besides, we obtain implementations with fewer XOR gates for 54.3% of the 4256 Maximum Distance Separable (MDS) matrices proposed by Li et al. at FSE 2019. We also achieve an involutory MDS matrix (in M4(GL(8, F_2))) whose implementation uses the lowest number of XORs (i.e., 86, saving 2 from the state-of-the-art result) with the minimum depth.
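A miniature version of the backward-search idea, assuming a single output given as the set of input indices to be XORed: recursively splitting it in half yields a circuit of depth $\lceil\log_2 w\rceil$ for an output with $w$ active inputs. The paper's search is far more elaborate; this only illustrates the splitting step.

```python
import math

def split(output_bits, gates, depth=0):
    """Backward search in miniature: recursively split an output (a set of
    input indices to be XORed) into two halves until single inputs remain.
    One XOR gate is recorded per split; the resulting circuit depth equals
    ceil(log2(number of active inputs))."""
    bits = sorted(output_bits)
    if len(bits) == 1:
        return f"x{bits[0]}", depth
    left, right = bits[:len(bits) // 2], bits[len(bits) // 2:]
    l_expr, l_depth = split(left, gates, depth + 1)
    r_expr, r_depth = split(right, gates, depth + 1)
    gates.append((l_expr, r_expr))        # one 2-input XOR gate
    return f"({l_expr}^{r_expr})", max(l_depth, r_depth)

# One row of a toy linear layer: y = x0 ^ x2 ^ x5 ^ x6 ^ x7
gates = []
expr, depth = split({0, 2, 5, 6, 7}, gates)
print(expr, "| XOR gates:", len(gates), "| depth:", depth)
assert depth == math.ceil(math.log2(5)) and len(gates) == 4
```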
Last updated:  2022-02-25
Apple vs. EMA: Electromagnetic Side Channel Attacks on Apple CoreCrypto
Gregor Haas, Aydin Aysu
Cryptographic instruction set extensions are commonly used for ciphers which would otherwise face unacceptable side channel risks. A prominent example of such an extension is the ARMv8 Cryptographic Extension, or ARM CE for short, which defines dedicated instructions to securely accelerate AES. However, while these extensions may be resistant to traditional "digital" side channel attacks, they may still be vulnerable to physical side channel attacks. In this work, we demonstrate the first such attack on a standard ARM CE AES implementation. We specifically focus on the implementation used by Apple’s CoreCrypto library which we run on the Apple A10 Fusion SoC. To that end, we implement an optimized side channel acquisition infrastructure involving both custom iPhone software and accelerated analysis code. We find that an adversary which can observe 5-30 million known-ciphertext traces can reliably extract secret AES keys using electromagnetic (EM) radiation as a side channel. This corresponds to an encryption operation on less than half of a gigabyte of data, which could be acquired in less than 2 seconds on the iPhone 7 we examined. Our attack thus highlights the need for side channel defenses for real devices and production, industry-standard encryption software.
Last updated:  2022-03-06
WiP: Applicability of ISO Standard Side-Channel Leakage Tests to NIST Post-Quantum Cryptography
Markku-Juhani O. Saarinen
FIPS 140-3 is the main standard defining security requirements for cryptographic modules in the U.S. and Canada; commercially viable hardware modules generally need to be compliant with it. The scope of FIPS 140-3 will also expand to the new NIST Post-Quantum Cryptography (PQC) standards when migration from older RSA and Elliptic Curve cryptography begins. FIPS 140-3 mandates the testing of the effectiveness of ``non-invasive attack mitigations'', or side-channel attack countermeasures. At the higher security levels 3 and 4, the FIPS 140-3 side-channel testing methods and metrics are expected to be those of ISO 17825, which is based on the older Test Vector Leakage Assessment (TVLA) methodology. We discuss how to apply ISO 17825 to hardware modules that implement lattice-based PQC standards for public-key cryptography -- Key Encapsulation Mechanisms (KEMs) and Digital Signatures. We find that simple ``random key'' vs. ``fixed key'' tests are unsatisfactory due to the close linkage between the public and private components of PQC keypairs. While the general statistical testing approach and requirements can remain consistent with older public-key algorithms, a non-trivial challenge in creating ISO 17825 testing procedures for PQC is the careful design of test vector inputs so that only relevant Critical Security Parameter (CSP) leakage is captured in power, electromagnetic, and timing measurements.
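As a reference point, the following sketch implements the fixed-vs-random Welch t-test that underlies TVLA and ISO 17825, with the customary $|t| > 4.5$ decision threshold. The trace data here is synthetic, with one artificially leaky sample point.

```python
import numpy as np

def tvla_t_statistic(fixed: np.ndarray, rand: np.ndarray) -> np.ndarray:
    """Welch's t-statistic per sample point between a fixed-input trace set
    and a random-input trace set (rows = traces, columns = time samples)."""
    m_f, m_r = fixed.mean(axis=0), rand.mean(axis=0)
    v_f, v_r = fixed.var(axis=0, ddof=1), rand.var(axis=0, ddof=1)
    return (m_f - m_r) / np.sqrt(v_f / len(fixed) + v_r / len(rand))

rng = np.random.default_rng(0)
fixed = rng.normal(0.0, 1.0, size=(5000, 100))   # 5000 traces, 100 samples
rand  = rng.normal(0.0, 1.0, size=(5000, 100))
fixed[:, 42] += 0.2                               # inject a small leak

t = tvla_t_statistic(fixed, rand)
print("leaky sample points:", np.flatnonzero(np.abs(t) > 4.5))  # threshold 4.5
```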
Last updated:  2022-02-25
Semi-Quantum Tokenized Signatures
Omri Shmueli
Quantum tokenized signature schemes (Ben-David and Sattath, QCrypt 2017) allow a sender to generate and distribute quantum unclonable states which grant their holder a one-time permission to sign in the name of the sender. Such schemes are a strengthening of public-key quantum money schemes, as they imply public-key quantum money where some channels of communication in the system can be made classical. An even stronger primitive is semi-quantum tokenized signatures, where the sender is classical and can delegate the generation of the token to a (possibly malicious) quantum receiver. Semi-quantum tokenized signature schemes imply a powerful version of public-key quantum money satisfying two key features: 1. The bank is classical and the scheme can execute on a completely classical communication network. In addition, the bank is \emph{stateless} and after the creation of a banknote, does not hold any information or trapdoors except the balance of accounts in the system. Such a quantum money scheme solves the main open problem presented by Radian and Sattath (AFT 2019). 2. Furthermore, the classical-communication transactions between users in the system are \emph{direct} and do not need to go through the bank. This enables the transactions to be both classical and private. While fully-quantum tokenized signatures (where the sender is quantum and generates the token by itself) are known based on quantum-secure indistinguishability obfuscation and injective one-way functions, the semi-quantum version is not known under any computational assumption. In this work we construct a semi-quantum tokenized signature scheme based on quantum-secure indistinguishability obfuscation and the sub-exponential hardness of the Learning with Errors problem. In the process, we show new properties of quantum coset states and a new hardness result on indistinguishability obfuscation of classical subspace membership circuits.
Last updated:  2022-02-25
The Little Seal Bug: Optical Sound Recovery from Lightweight Reflective Objects
Ben Nassi, Ras Swissa, Yuval Elovici, Boris Zadov
In this paper, we introduce "the little seal bug" attack, an optical side-channel attack which exploits lightweight reflective objects (e.g., an iced coffee can, a smartphone stand, a souvenir) as optical implants for the purpose of recovering the content of a conversation. We show how fluctuations in the air pressure on the surface of a shiny object can be exploited by eavesdroppers to recover speech passively and externally, using equipment not likely to be associated with spying. These air pressure fluctuations, which occur in response to sound, cause the shiny object to vibrate and reflect light that is modulated by the nearby sound; as a result, seemingly innocuous objects like an empty beverage can, desk ornament, or smartphone stand, which are often placed on desks, can provide the infrastructure required for eavesdroppers to recover the content of a victim’s conversation held while the victim is sitting at his/her desk. First, we conduct a series of experiments aimed at learning the characteristics of optical measurements obtained from shiny objects that reflect light, by using a photodiode to analyze the movement of a shiny weight in response to sound. Based on our findings, we propose an optical acoustical transformation (OAT) to recover speech from the optical measurements obtained from light reflected from shiny objects. Finally, we compare the performance of the little seal bug attack to related methods presented in other studies. We show that eavesdroppers located 35 meters away from a victim can use the little seal bug attack to recover speech at the sound level of a virtual meeting with fair intelligibility.
Last updated:  2022-06-23
To Label, or Not To Label (in Generic Groups)
Mark Zhandry
Generic groups are an important tool for analyzing the feasibility and infeasibility of group-based cryptosystems. There are two distinct widespread versions of generic groups, Shoup's and Maurer's, the main difference being whether or not group elements are given explicit labels. The two models are often treated as equivalent. In this work, however, we demonstrate that the models are in fact quite different, and care is needed when stating generic group results: - We show that numerous textbook constructions are *not* captured by Maurer, but are captured by Shoup. In the other direction, any construction captured by Maurer *is* captured by Shoup. - For constructions that exist in both models, we show that security is equivalent for "single stage" games, but Shoup security is strictly stronger than Maurer security for some "multi-stage" games. - The existing generic group un-instantiability results do not apply to Maurer. We fill this gap with a new un-instantiability result. - We explain how the known black box separations between generic groups and identity-based encryption do not fully apply to Shoup, and resolve this by providing such a separation. - We give a new un-instantiability result for the *algebraic* group model.
Last updated:  2022-02-25
Constant matters: Fine-grained Complexity of Differentially Private Continual Observation Using Completely Bounded Norms
Monika Henzinger, Jalaj Upadhyay
We study fine-grained error bounds for differentially private algorithms for averaging and counting in the continual observation model. For this, we use the completely bounded spectral norm (cb norm) from operator algebra. For a matrix $W$, its cb norm is defined as \[ \|{W}\|_{\mathsf{cb}} = \max_{Q} \left\{ \frac{\|{Q \bullet W}\|}{\|{Q}\|} \right\}, \] where $Q \bullet W$ denotes the Schur product and $\|{\cdot}\|$ denotes the spectral norm. We bound the cb norm of two fundamental matrices studied in differential privacy under the continual observation model: the counting matrix $M_{\mathsf{counting}}$ and the averaging matrix $M_{\mathsf{average}}$. For $M_{\mathsf{counting}}$, we give lower and upper bounds whose additive gap is $1 + \frac{1}{\pi}$. Our factorization also has two desirable properties sufficient for the streaming setting: the factorization consists of lower-triangular matrices, and the number of distinct entries in the factorization is exactly $T$. This allows us to compute the factorization on the fly while requiring the curator to store only a $T$-dimensional vector. For $M_{\mathsf{average}}$, we show an additive gap between the lower and upper bounds of $\approx 0.64$.
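Since the cb norm is a maximum over test matrices $Q$, any concrete $Q$ certifies a lower bound. A crude numerical illustration of this fact for the counting matrix (not the paper's proof technique):

```python
import numpy as np

def schur_ratio(Q: np.ndarray, W: np.ndarray) -> float:
    """||Q . W|| / ||Q||, the ratio maximized by the cb norm
    (elementwise Schur product, spectral norms)."""
    return np.linalg.norm(Q * W, 2) / np.linalg.norm(Q, 2)

T = 32
M_counting = np.tril(np.ones((T, T)))   # prefix-sum (counting) matrix

# Every concrete Q yields a certified lower bound on ||M_counting||_cb;
# a random search over Q already gives a non-trivial one.
rng = np.random.default_rng(1)
lower = max(schur_ratio(rng.standard_normal((T, T)), M_counting)
            for _ in range(2000))
print(f"random-search lower bound on the cb norm for T={T}: {lower:.3f}")
```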
Last updated:  2022-02-25
Embedding the UC Model into the IITM Model
Daniel Rausch, Ralf Kuesters, Céline Chevalier
Universal Composability is a widely used concept for the design and analysis of protocols. Since Canetti's original UC model and the model by Pfitzmann and Waidner, several different models for universal composability have been proposed, including, for example, the IITM model, GNUC, CC, but also extensions and restrictions of the UC model, such as JUC, GUC, and SUC. These were motivated by the lack of expressivity of existing models, ease of use, or flaws in previous models. Cryptographers choose between these models based on their needs at hand (e.g., support for joint state and global state) or simply their familiarity with a specific model. While all models follow the same basic idea, there are huge conceptual differences, which raises fundamental and practical questions: (How) do the concepts and results proven in one model relate to those in another model? Do the different models and the security notions formulated therein capture the same classes of attacks? Most importantly, can cryptographers re-use results proven in one model in another model, and if so, how? In this paper, we initiate a line of research with the aim to address this lack of understanding, consolidate the space of models, and enable cryptographers to re-use results proven in other models. As a start, here we focus on Canetti's prominent UC model and the IITM model proposed by Kuesters et al. The latter is an interesting candidate for comparison with the UC model since it has been used to analyze a wide variety of protocols, supports a very general protocol class and provides, among others, seamless treatment of protocols with shared state, including joint and global state. Our main technical contribution is an embedding of the UC model into the IITM model showing that all UC protocols, security and composition results carry over to the IITM model. Hence, protocol designers can profit from the features of the IITM model while being able to use all their results proven in the UC model. We also show that, in general, one cannot embed the full IITM model into the UC model.
Last updated:  2022-11-23
Zero-Knowledge Protocols for the Subset Sum Problem from MPC-in-the-Head with Rejection
Thibauld Feneuil, Jules Maire, Matthieu Rivain, Damien Vergnaud
We propose zero-knowledge arguments for the modular subset sum problem. Given a set of integers, this problem asks whether a subset adds up to a given integer $t$ modulo a given integer $q$. This NP-complete problem has been considered since the 1980s as an interesting alternative in cryptography to hardness assumptions based on number theory, and it is in particular believed to provide post-quantum security. Previous combinatorial approaches, notably one due to Shamir, yield arguments with cubic communication complexity (in the security parameter). More recent methods, based on the MPC-in-the-head technique, also produce arguments with cubic communication complexity. We improve this approach by using a secret-sharing over small integers (rather than modulo $q$) to reduce the size of the arguments and remove the prime modulus restriction. Since this sharing may reveal information on the secret subset, we introduce the idea of rejection to the MPC-in-the-head paradigm. Special care has to be taken to balance completeness and soundness and to preserve the zero-knowledge property of our arguments. We combine this idea with two techniques to prove that the secret vector (which selects the subset) is indeed made of binary coordinates. Our new techniques have the significant advantage of resulting in arguments whose size is independent of the modulus $q$. Our new protocols for the subset sum problem achieve an asymptotic improvement by producing arguments of quadratic size (against cubic size for previous proposals). This improvement is also practical: for a 256-bit modulus $q$, the best variant of our protocols yields 13 KB arguments, while previous proposals gave 1180 KB arguments for the best general protocol and 122 KB for the best protocol restricted to prime moduli. Our techniques can also be applied to vectorial variants of the subset sum problem and in particular the inhomogeneous short integer solution (ISIS) problem, for which they provide an efficient alternative to state-of-the-art protocols when the underlying ring is not small and NTT-friendly. We also show the application of our protocol to build efficient zero-knowledge arguments of plaintext and/or key knowledge in the context of fully-homomorphic encryption. When applied to the TFHE scheme, the obtained arguments are more than 20 times smaller than those obtained with previous protocols. Finally, we use our technique to construct an efficient digital signature scheme based on a pseudo-random function due to Boneh, Halevi, and Howgrave-Graham.
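The rejection idea can be conveyed in miniature: masking a bit $x$ additively over a small integer range (not modulo $q$) leaks nothing exactly when the masked value could have arisen from either bit value, and the prover rejects otherwise. A hedged sketch, with illustrative parameters, not the paper's full sharing:

```python
from secrets import randbelow

A = 2**16   # mask range; larger A means smaller rejection rate, bigger shares

def mask_with_rejection(x: int):
    """Additively mask a bit x over the small integers.  The masked value
    s = x + r reveals nothing only when s lies in [1, A-1], where both
    x = 0 and x = 1 are possible; otherwise we reject and resample, which
    is the rejection idea imported into MPC-in-the-head."""
    while True:
        r = randbelow(A)          # uniform mask in [0, A-1]
        s = x + r
        if 1 <= s <= A - 1:
            return s, r           # s is safe to reveal; r stays hidden
        # reject: s == 0 forces x == 0, and s == A forces x == 1

s, r = mask_with_rejection(1)
assert s - r == 1
# Rejection happens with probability 1/A per masked bit, a completeness
# loss traded off against the share size.
```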
Last updated:  2022-02-25
Half-Aggregation of Schnorr Signatures with Tight Reductions
Yanbo Chen, Yunlei Zhao
An aggregate signature (AS) scheme allows an unspecified aggregator to compress many signatures into a short aggregation. AS schemes can save storage costs and accelerate verification. They are desirable for applications where many signatures need to be stored, transferred, or verified together, like blockchain systems, network routing, e-voting, and certificate chains. However, constructing AS schemes based on general groups, only requiring the hardness of the discrete logarithm problem, is quite tricky and has been a long-standing research question. Recently, Chalkias et al. (CT-RSA 2021) proposed a half-aggregate scheme for Schnorr signatures. We observe that the scheme lacks a tight security proof and does not well support incremental aggregation, i.e., adding more signatures into a pre-existing aggregation. Chalkias et al. also presented an aggregate scheme for Schnorr signatures whose security can be tightly reduced to the security of Schnorr signatures in the random oracle model (ROM). However, the scheme is rather expensive and does not achieve half-aggregation. It is a fundamental question whether there exists half-aggregation of Schnorr signatures with tight reduction in the ROM, of both theoretical and practical interest. This work's contributions are threefold. We first give a tight security proof for the scheme in CT-RSA 2021 in the ROM and the algebraic group model (AGM). Second, we provide a new half-aggregate scheme for Schnorr signatures that perfectly supports incremental aggregation, whose security also tightly reduces to Schnorr's security in the AGM+ROM. Third, we present a Schnorr-based sequential aggregate signature (SAS) scheme that is as tightly secure as the Schnorr signature scheme in the ROM (without the AGM). Our work may pave the way for applying Schnorr aggregation in real-world cryptographic applications.
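To convey the shape of half-aggregation, here is a toy sketch over a tiny Schnorr group: all nonces $R_i$ are kept while the $z_i$ parts are compressed into one value using hash-derived weights. The group size and the exact hash inputs are illustrative assumptions, not the specification of any of the schemes above.

```python
import hashlib
from secrets import randbelow

# Toy Schnorr group: p = 2q + 1 with q prime; g = 4 has order q. (Toy sizes!)
q, p, g = 1019, 2039, 4

def H(*args) -> int:
    data = b"|".join(str(a).encode() for a in args)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = randbelow(q)
    return x, pow(g, x, p)

def sign(x, X, m):
    r = randbelow(q)
    R = pow(g, r, p)
    return R, (r + H(X, R, m) * x) % q          # standard Schnorr: (R, z)

def weights(triples):                            # triples: (X, m, (R, z))
    ctx = [str((X, m, R)) for X, m, (R, _) in triples]
    return [H(i, *ctx) for i in range(len(triples))]

def aggregate(triples):
    """Half-aggregation: keep every R_i, compress the z_i via weights a_i."""
    return sum(a * z for a, (_, _, (_, z)) in zip(weights(triples), triples)) % q

def verify_aggregate(z_agg, triples):
    # g^z_agg must equal the product of (R_i * X_i^c_i)^a_i, since each
    # z_i satisfies g^z_i = R_i * X_i^c_i.
    rhs = 1
    for a, (X, m, (R, _)) in zip(weights(triples), triples):
        rhs = rhs * pow(R * pow(X, H(X, R, m), p) % p, a, p) % p
    return pow(g, z_agg, p) == rhs

sigs = []
for m in ("tx1", "tx2", "tx3"):
    x, X = keygen()
    sigs.append((X, m, sign(x, X, m)))
assert verify_aggregate(aggregate(sigs), sigs)
```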
Last updated:  2022-08-15
Secure Joint Communication and Sensing
Onur Gunlu, Matthieu Bloch, Rafael F. Schaefer, Aylin Yener
This work considers the problem of mitigating information leakage between communication and sensing in systems jointly performing both operations. Specifically, a discrete memoryless state-dependent broadcast channel model is studied in which (i) the presence of feedback enables a transmitter to convey information, while simultaneously performing channel state estimation; (ii) one of the receivers is treated as an eavesdropper whose state should be estimated but which should remain oblivious to part of the transmitted information. The model abstracts the challenges behind security for joint communication and sensing if one views the channel state as a sensitive attribute, e.g., location. For independent and identically distributed states, perfect output feedback, and when part of the transmitted message should be kept secret, a partial characterization of the secrecy-distortion region is developed. The characterization is exact when the broadcast channel is either physically-degraded or reversely-physically-degraded. The partial characterization is also extended to the situation in which the entire transmitted message should be kept secret. The benefits of a joint approach compared to separation-based secure communication and state-sensing methods are illustrated with a binary joint communication and sensing model.
Last updated:  2022-02-25
Cache-22: A Highly Deployable End-To-End Encrypted Cache System with Post-Quantum Security
Keita Emura, Shiho Moriai, Takuma Nakajima, Masato Yoshimi
Cache systems are crucial for reducing communication overhead on the Internet. The importance of communication privacy is being increasingly and widely recognized; therefore, we anticipate that nearly all end-to-end communication will be encrypted via secure sockets layer/transport layer security (SSL/TLS) in the near future. Herein we consider a catch-22 situation, wherein the cache server checks whether content has been cached or not, i.e., the cache server needs to observe it, thereby violating end-to-end encryption. We avoid this catch-22 situation by proposing an encrypted cache system which we call Cache-22. To maximize its deployability, we avoid heavy, advanced cryptographic tools, and instead base our Cache-22 system purely on traditional SSL/TLS communication. It employs tags for searching, and its design concept enables the service provider to decide, e.g., via an authentication process, whether or not a particular user should be allowed to access particular content. We provide a prototype implementation of the proposed system using the color-based cooperative cache proposed by Nakajima et al. (IEICE Trans. 2017) under several ciphersuites containing post-quantum key exchanges in addition to ECDHE (Elliptic Curve-based). We consider NIST Post-Quantum Cryptography round 3 finalists and alternate candidates: lattice-based (Kyber, SABER, NTRU), code-based (BIKE), and isogeny-based (SIKE). Compared to direct HTTPS communication between a service provider and a user, employing our Cache-22 system has the merit of drastically reducing communication between the cache server and the service provider (by approximately 95%), which is effective in a hierarchical network with a cost disparity.
Last updated:  2022-02-25
PFE: Linear Active Security, Double-Shuffle Proofs, and Low-Complexity Communication
Hanyu Jia, Xiangxue Li
We consider private function evaluation (PFE) in the malicious adversary model. The current state-of-the-art in PFE from Valiant's universal circuits (Liu, Yu, et al., CRYPTO 2021) does not avoid the logarithmic factor in circuit size. In constructing linear active PFE, one essential building block is to prove the correctness of an extended permutation (EP, Mohassel and Sadeghian at EUROCRYPT 2013) by zero-knowledge protocols with linear complexity. The linear instantiation $\mathcal{ZK}_{EP}$ by Mohassel, Sadeghian, and Smart (ASIACRYPT 2014) is a three-phase protocol, and each phase (dummy placement, replication, and permutation) is of size $2g$. Its overhead thus seems excessive, limiting its practicality. We present in this paper a novel and efficient framework $\mathcal{ZK}_{DS}$ for proving the correct EP. We show that \underline{d}ouble \underline{s}huffles suffice for EP (exponentiations and communication overheads are about 27% and 31% of $\mathcal{ZK}_{EP}$, respectively). The data owner(s) generate the randomness for the first shuffle, whose outputs determine the outgoing wires of the circuit defined by the function. The function owner reuses and extends the randomness in the second shuffle, whose outputs determine the incoming wires. From $\mathcal{ZK}_{DS}$, we build an online/offline PFE framework with linear active security. The online phase could be instantiated by any well-studied secure function evaluation (SFE) with linear active security (e.g., Tiny-OT at CRYPTO 2012). The offline phase depends only on the private function $f$ and uses $\mathcal{ZK}_{DS}$ to prove the EP relationship between outgoing wires and incoming wires in the circuit $\mathcal{C}_f$ derived from $f$.
Last updated:  2022-02-25
On the Impossibility of Key Agreements from Quantum Random Oracles
Per Austrin, Hao Chung, Kai-Min Chung, Shiuan Fu, Yao-Ting Lin, Mohammad Mahmoody
We study the following question, first publicly posed by Hosoyamada and Yamakawa in 2018. Can parties Alice and Bob with quantum computing power and classical communication rely only on a random oracle (that can be queried in quantum superposition) to agree on a key that is private from eavesdroppers? We make the first progress on the question above and prove the following. When only one of the parties is classical and the other party is quantum powered, as long as they ask a total of $d$ oracle queries and agree on a key with probability $1$, then there is always a way to break the key agreement by asking $O(d^2)$ classical oracle queries. When both parties can make quantum queries to the random oracle, we introduce a natural conjecture, which if true would imply attacks with $poly(d)$ classical queries to the random oracle. Our conjecture, roughly speaking, states that the multiplication of any two degree-$d$ real-valued polynomials over the Boolean hypercube with influence at most $1/poly(d)$ is nonzero. We then prove our conjecture for exponentially small influences, which leads to an (unconditional) classical $2^{O(md)}$-query attack on any such key agreement protocol, where $m$ is the oracle's output length. Since our attacks are classical, we then ask whether it is always possible to find classical attacks on key agreements with imperfect completeness in the quantum random oracle model. We prove a barrier for this approach, by showing that if the folklore “Simulation Conjecture” (first formally stated by Aaronson and Ambainis in 2009) about the possibility of simulating efficient-query quantum algorithms using efficient-query classical algorithms is false, then there is in fact such a secure key agreement in the quantum random oracle model that cannot be broken classically.
Last updated:  2022-02-25
High-Performance Hardware Implementation of Lattice-Based Digital Signatures
Luke Beckwith, Duc Tri Nguyen, Kris Gaj
Many currently deployed public-key cryptosystems are based on the difficulty of the discrete logarithm and integer factorization problems. However, given an adequately sized quantum computer, these problems can be solved in polynomial time as a function of the key size. Due to the future threat of quantum computing to current cryptographic standards, alternative algorithms that remain secure under quantum computing are being evaluated for future use. As a part of this evaluation, high-performance implementations of these candidate algorithms must be investigated. This work presents a high-performance implementation of all operations of CRYSTALS-Dilithium and one operation of FALCON (signature verification) targeting FPGAs. In particular, we present a Dilithium design that achieves the best latency for an FPGA implementation to date and, to the best of our knowledge, the first FALCON hardware implementation to date. We compare our results with the hardware implementations of all viable NIST Round 3 post-quantum digital signature candidates.
Last updated:  2022-12-08
Short Leakage Resilient and Non-malleable Secret Sharing Schemes
Nishanth Chandran, Bhavana Kanukurthi, Sai Lakshmi Bhavana Obbattu, Sruthi Sekar
Leakage resilient secret sharing (LRSS) allows a dealer to share a secret amongst $n$ parties such that any authorized subset of the parties can recover the secret from their shares, while an adversary that obtains shares of any unauthorized subset of parties along with bounded leakage from the other shares learns no information about the secret. Non-malleable secret sharing (NMSS) provides a guarantee that even shares that are tampered by an adversary will reconstruct to either the original message or something independent of it. The most important parameter of LRSS and NMSS schemes is the size of each share. For LRSS, in the "local leakage model" (i.e., when the leakage functions on each share are independent of each other and bounded), Srinivasan and Vasudevan (CRYPTO 2019) gave a scheme for threshold access structures with a share size of approximately $3 \cdot (\text{message length}) + \mu$, where $\mu$ is the number of bits of leakage tolerated from every share. For the case of NMSS, the best known result (again due to the above work) has a share size of $11 \cdot (\text{message length})$. In this work, we build LRSS and NMSS schemes with much improved share sizes. Additionally, our LRSS scheme obtains optimal share and leakage size. In particular, we get the following results: - We build an information-theoretic LRSS scheme for threshold access structures with a share size of $(\text{message length}) + \mu$. - As an application of the above result, we obtain an NMSS with a share size of $4 \cdot (\text{message length})$. Further, for the special case of sharing random messages, we obtain a share size of $2 \cdot (\text{message length})$.
Last updated:  2022-09-18
Multi-Client Functional Encryption with Fine-Grained Access Control
Ky Nguyen, Duong Hieu Phan, David Pointcheval
Multi-Client Functional Encryption ($\mathsf{MCFE}$) and Multi-Input Functional Encryption ($\mathsf{MIFE}$) are very interesting extensions of Functional Encryption for practical purposes. They allow the computation of joint functions over data from multiple parties. Both primitives are aimed at applications in multi-user settings where decryption can be correctly output only for users with appropriate functional decryption keys. While the definitions for a single user or multiple users were quite general and can be realized for general classes of functions as expressive as Turing machines or all circuits, efficient schemes have been proposed so far for concrete classes of functions: either only for access control, $\mathit{i.e.}$ the identity function under some conditions, or linear/quadratic functions under no condition. In this paper, we target classes of functions that explicitly combine some evaluation functions independent of the decrypting user under the condition of some access control. More precisely, we introduce a framework for $\mathsf{MCFE}$ with fine-grained access control and propose constructions for both single-client and multi-client settings, for inner-product evaluation and access control via Linear Secret Sharing Schemes ($\mathsf{LSSS}$), with selective and adaptive security. The only known work that combines functional encryption in the multi-user setting with access control was proposed by Abdalla $\mathit{et~al.}$ (Asiacrypt '20), which relies on a generic transformation from the single-client schemes to obtain $\mathsf{MIFE}$ schemes that suffer a quadratic factor of $n$ (where $n$ denotes the number of clients) in the ciphertext size. We follow a different path, via $\mathsf{MCFE}$: we present a $\mathit{duplicate\text{-}and\text{-}compress}$ technique to transform the single-client scheme and obtain an $\mathsf{MCFE}$ with fine-grained access control scheme with only a linear factor of $n$ in the ciphertext size. Our final scheme thus outperforms the Abdalla $\mathit{et~al.}$'s scheme by a factor $n$, as one can obtain $\mathsf{MIFE}$ from $\mathsf{MCFE}$ by making all the labels in $\mathsf{MCFE}$ a fixed public constant. The concrete constructions are secure under the $\mathsf{SXDH}$ assumption, in the random oracle model for the $\mathsf{MCFE}$ scheme, but in the standard model for the $\mathsf{MIFE}$ improvement.
Last updated:  2022-06-21
Breaking Rainbow Takes a Weekend on a Laptop
Ward Beullens
This work introduces new key recovery attacks against the Rainbow signature scheme, which is one of the three finalist signature schemes still in the NIST Post-Quantum Cryptography standardization project. The new attacks outperform previously known attacks for all the parameter sets submitted to NIST and make key recovery practical for the SL 1 parameters. Concretely, given a Rainbow public key for the SL 1 parameters of the second-round submission, our attack returns the corresponding secret key after on average 53 hours (one weekend) of computation time on a standard laptop.
Last updated:  2022-02-25
Issuer-Hiding Attribute-Based Credentials
Jan Bobolz, Fabian Eidens, Stephan Krenn, Sebastian Ramacher, Kai Samelin
Attribute-based credential systems enable users to authenticate in a privacy-preserving manner. However, in such schemes verifying a user's credential requires knowledge of the issuer's public key, which by itself might already reveal private information about the user. In this paper, we tackle this problem by introducing the notion of issuer-hiding attribute-based credential systems. In such a system, the verifier can define a set of acceptable issuers in an ad-hoc manner, and the user can then prove that her credential was issued by one of the accepted issuers -- without revealing which one. We then provide a generic construction, as well as a concrete instantiation based on Groth's structure preserving signature scheme (ASIACRYPT'15) and simulation-sound extractable NIZK, for which we also provide concrete benchmarks in order to demonstrate its practicality. The online complexity of all constructions is independent of the number of acceptable issuers, which also makes them suitable for highly federated scenarios.
Last updated:  2022-02-25
Tight Analysis of Decryption Failure Probability of Kyber in Reality
Boyue Fang, Weize Wang, Yunlei Zhao
Kyber is a candidate in the third round of the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography (PQC) Standardization. However, because of the protocol's independence assumption, the bound on the decapsulation failure probability resulting from the original analysis is not tight. In this work, we give a rigorous mathematical analysis of the actual failure probability calculation, and provide a security estimate for Kyber in reality rather than only in a statistical sense. Our analysis does not make independence assumptions on errors, and is with respect to concrete public keys in reality. Through sampling tests and experiments, we also illustrate the difference between the actual failure probability and the result given in the Kyber proposal. The experiments show that, for Kyber-512 and 768, the failure probability resulting from the original paper is relatively conservative, but for Kyber-1024, the failure probability of some public keys is worse than claimed. This failure probability calculation for concrete public keys can also guide the selection of public keys in actual application scenarios. Moreover, we measure the gap between the upper bound of the failure probability and the actual failure probability, and give a tight estimate. Our work can also be used to re-evaluate the traditional $1-\delta$ correctness notion in the literature, which will help re-evaluate the security of some candidates in the NIST post-quantum cryptographic standardization.
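For intuition, the following sketch performs the baseline, independence-assuming style of failure-probability computation that the paper argues is not tight: convolve the distributions of the individual error terms and read off the tail mass beyond a decryption bound. All parameters are toy values, and Kyber's real error expression is considerably richer.

```python
from math import comb

def centered_binomial(eta):
    """PMF of the centered binomial distribution on [-eta, eta]."""
    return {k - eta: comb(2 * eta, k) / 4**eta for k in range(2 * eta + 1)}

def convolve(p1, p2):
    """Distribution of the sum of two independent variables."""
    out = {}
    for a, pa in p1.items():
        for b, pb in p2.items():
            out[a + b] = out.get(a + b, 0.0) + pa * pb
    return out

def product_dist(p1, p2):
    """Distribution of the product of two independent variables."""
    out = {}
    for a, pa in p1.items():
        for b, pb in p2.items():
            out[a * b] = out.get(a * b, 0.0) + pa * pb
    return out

# Toy error term: a sum of 256 independent products of small noise samples.
q, eta, n_terms = 3329, 2, 256
term = product_dist(centered_binomial(eta), centered_binomial(eta))
acc = {0: 1.0}
for _ in range(n_terms):
    acc = convolve(acc, term)

bound = q // 32   # toy decryption bound; the real bound and expression differ
fail = sum(pr for v, pr in acc.items() if abs(v) > bound)
print(f"Pr[|e| > {bound}] under the independence assumption: {fail:.3e}")
```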
Last updated:  2022-11-02
Azeroth: Auditable Zero-knowledge Transactions in Smart Contracts
Gweonho Jeong, Nuri Lee, Jihye Kim, Hyunok Oh
With the rapid growth of the blockchain market, privacy and security issues for digital assets are becoming more important. In the most widely used public blockchains such as Bitcoin and Ethereum, all activities on user accounts are publicly disclosed, which violates privacy regulations such as the EU GDPR. Encryption of accounts and transactions may protect privacy, but it also raises issues of validity and transparency: encrypted information alone cannot verify the validity of a transaction and makes it difficult to meet anti-money laundering regulations, i.e. auditability. In this paper, we propose $\textsf{Azeroth}$, an auditable zero-knowledge transfer framework. $\textsf{Azeroth}$ connects a zero-knowledge proof to an encrypted transaction, enabling validity checks while protecting privacy. $\textsf{Azeroth}$ also allows authorized auditors to audit transactions. $\textsf{Azeroth}$ is designed as a smart contract for flexible deployment on existing blockchains. We implement the $\textsf{Azeroth}$ smart contract, execute it on various platforms including an Ethereum testnet blockchain, and measure its running time to show the practicality of our proposal. The end-to-end latency of a privacy-preserving transfer takes about $4.4s$. In particular, the client's generation of a transaction with a proof only takes about $0.9s$. The security of $\textsf{Azeroth}$ is proven under the cryptographic assumptions.
Last updated:  2022-10-01
An Analysis of the Algebraic Group Model
Jonathan Katz, Cong Zhang, Hong-Sheng Zhou
The algebraic group model (AGM), formalized by Fuchsbauer, Kiltz, and Loss, has recently received significant attention. One of the appealing properties of the AGM is that it is viewed as being (strictly) weaker than the generic group model (GGM), in the sense that hardness results for algebraic algorithms imply hardness results for generic algorithms, and generic reductions in the AGM (namely, between the algebraic formulations of two problems) imply generic reductions in the GGM. We highlight that as the GGM and AGM are currently formalized, this is not true: hardness in the AGM may not imply hardness in the GGM, and a generic reduction in the AGM may not imply a similar reduction in the GGM.
Last updated:  2022-02-22
Blockchain based Contact Tracing: A Solution using Bluetooth and Sound Waves for Proximity Detection
ZiXi Hee, Iftekhar Salam
In the wake of the Covid-19 pandemic, countries and organizations started looking towards technology to curb the spread of the disease, for instance, conducting contact tracing with smartphones. Many contact tracing applications are on the market, built on different technologies, such as Bluetooth, GPS, sound, and QR code scanning systems. The use of sound is an area with potential for further exploration; currently, only NOVID is utilizing this technology. On top of that, there is a need for a decentralized backend solution that is both public and auditable to address data manipulation concerns. One possible solution is using a blockchain as the backend for the system. We propose a blockchain-based contact tracing solution that uses sound and Bluetooth to detect proximity. Our proposed solution uses blockchain as the backend of the system for decentralized storage of contact tracing data. In the proposed system, a close contact is established if both Bluetooth and sound are detected between the communicating devices. The practicality of the proposed scheme is assessed by a performance evaluation of the proximity detection system and a proof-of-concept of the blockchain backend. The results show that the sound-amplitude-based distance measurement can estimate whether an encounter is a close contact (within 3 meters) by applying a threshold to the amplitude. The use of sound amplitude eliminates situations where using only Bluetooth would produce false positives. To the best of our knowledge, the proposed approach is the first work that integrates blockchain, Bluetooth, and sound amplitude for proximity detection. Overall, the system shows more promising results in distance estimation than a Bluetooth-only implementation.
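A minimal sketch of amplitude-threshold proximity detection, assuming a free-field $1/d$ amplitude falloff and a calibration measurement at 1 m; a deployed app would use empirically tuned thresholds rather than this idealized model.

```python
def estimate_distance(rms_amplitude: float, a_ref: float,
                      d_ref: float = 1.0) -> float:
    """Invert the free-field falloff model A(d) = a_ref * d_ref / d,
    where a_ref is the RMS amplitude measured at calibration distance d_ref."""
    return a_ref * d_ref / rms_amplitude

def is_close_contact(rms_amplitude: float, a_ref: float,
                     limit_m: float = 3.0) -> bool:
    # Equivalent to thresholding the raw amplitude at a_ref / limit_m.
    return rms_amplitude >= a_ref / limit_m

a_ref = 0.90   # RMS amplitude calibrated at 1 m (toy value)
for a in (0.80, 0.45, 0.31, 0.10):
    d = estimate_distance(a, a_ref)
    print(f"amplitude {a:.2f} -> ~{d:.1f} m, "
          f"close contact: {is_close_contact(a, a_ref)}")
```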
Last updated:  2022-02-21
Trust Dies in Darkness: Shedding Light on Samsung's TrustZone Keymaster Design
Alon Shakevsky, Eyal Ronen, Avishai Wool
ARM-based Android smartphones rely on the TrustZone hardware support for a Trusted Execution Environment (TEE) to implement security-sensitive functions. The TEE runs a separate, isolated TrustZone Operating System (TZOS) in parallel to Android. The implementation of the cryptographic functions within the TZOS is left to the device vendors, who create proprietary undocumented designs. In this work, we expose the cryptographic design and implementation of Android's Hardware-Backed Keystore in Samsung's Galaxy S8, S9, S10, S20, and S21 flagship devices. We reverse-engineered and provide a detailed description of the cryptographic design and code structure, and we unveil severe design flaws. We present an IV reuse attack on AES-GCM that allows an attacker to extract hardware-protected key material, and a downgrade attack that makes even the latest Samsung devices vulnerable to the IV reuse attack. We demonstrate working key extraction attacks on the latest devices. We also show the implications of our attacks on two higher-level cryptographic protocols between the TrustZone and a remote server: we demonstrate a working FIDO2 WebAuthn login bypass and a compromise of Google’s Secure Key Import. We discuss multiple flaws in the design flow of TrustZone-based protocols. Although our specific attacks only apply to the $\approx$100 million devices made by Samsung, they raise the much more general requirement for open and proven standards for critical cryptographic and security designs.
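The core of an IV reuse attack on AES-GCM can be demonstrated in a few lines with the pyca/cryptography AESGCM API: reusing a (key, nonce) pair reuses the CTR keystream, so the XOR of two ciphertexts equals the XOR of the two plaintexts; knowing either plaintext then reveals the other (and, although not shown here, IV reuse also leaks the GHASH authentication key, enabling forgeries). This is a generic illustration, not the paper's Keymaster-specific exploit.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)                        # the flaw: reused for 2 messages
aes = AESGCM(key)

p1 = b"wrapped key material AAAA"
p2 = b"wrapped key material BBBB"
c1 = aes.encrypt(nonce, p1, None)             # returns ciphertext || 16-byte tag
c2 = aes.encrypt(nonce, p2, None)

# The keystream cancels: c1 XOR c2 == p1 XOR p2 (excluding the tags).
xor_ct = bytes(a ^ b for a, b in zip(c1[:-16], c2[:-16]))
xor_pt = bytes(a ^ b for a, b in zip(p1, p2))
assert xor_ct == xor_pt

# An attacker who knows p1 recovers p2 directly:
recovered = bytes(a ^ b for a, b in zip(xor_ct, p1))
assert recovered == p2
```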
Last updated:  2022-03-24
Cheetah: Lean and Fast Secure Two-Party Deep Neural Network Inference
Zhicong Huang, Wen-jie Lu, Cheng Hong, Jiansheng Ding
Secure two-party neural network inference (2PC-NN) can offer privacy protection for both the client and the server and is a promising technique in the machine-learning-as-a-service setting. However, the large overhead of current 2PC-NN inference systems remains a headache, especially when applied to deep neural networks such as ResNet50. In this work, we present Cheetah, a new 2PC-NN inference system that is faster and more communication-efficient than the state of the art. The main contributions of Cheetah are two-fold: the first is a set of carefully designed homomorphic-encryption-based protocols that can evaluate the linear layers (namely convolution, batch normalization, and fully-connected layers) without any expensive rotation operation. The second is a set of lean and communication-efficient primitives for the non-linear functions (e.g., ReLU and truncation). Using Cheetah, we present intensive benchmarks over several large-scale deep neural networks. Taking ResNet50 as an example, an end-to-end execution of Cheetah under a WAN setting costs less than 2.5 minutes and 2.3 gigabytes of communication, outperforming CrypTFlow2 (ACM CCS 2020) by about 5.6× and 12.9×, respectively.
Last updated:  2022-02-20
Proving UNSAT in Zero Knowledge
Ning Luo, Timos Antonopoulos, William Harris, Ruzica Piskac, Eran Tromer, Xiao Wang
Zero-knowledge (ZK) protocols enable one party to prove to others that it knows a fact without revealing any information about the evidence for such knowledge. There exist ZK protocols for all problems in NP, and recent works developed highly efficient protocols for proving knowledge of satisfying assignments to Boolean formulas, circuits, and other NP formalisms. This work shows an efficient protocol for the converse: proving formula *unsatisfiability* in ZK (when the prover possesses a non-ZK proof). An immediate practical application is efficiently proving safety of secret programs. The key insight is to prove, in ZK, the validity of *resolution proofs* of unsatisfiability. This is efficiently realized using an algebraic representation that exploits resolution proofs' structure to represent formula clauses as low-degree polynomials, combined with ZK random-access arguments. Only the proof's dimensions are revealed. We implemented our protocol and used it to prove unsatisfiability of formulas that encode combinatoric problems and program correctness conditions in standard verification benchmarks, including Linux kernel drivers and Intel cryptography modules. The results demonstrate both that our protocol has practical utility, and that its aggressive optimizations, based on non-trivial encodings, significantly improve practical performance.
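For readers unfamiliar with resolution, the following Python sketch (not the paper's protocol, which performs this check in zero knowledge over an algebraic encoding) shows the single inference rule whose repeated application the prover validates; a refutation is a sequence of such steps ending in the empty clause.

    # Clauses are frozensets of non-zero ints; negation is the sign.
    def resolve(c1: frozenset, c2: frozenset, pivot: int) -> frozenset:
        """Resolve c1 (containing pivot) with c2 (containing -pivot)."""
        assert pivot in c1 and -pivot in c2
        return (c1 - {pivot}) | (c2 - {-pivot})

    # (x1 or x2) resolved with (not x1 or x3) on x1 gives (x2 or x3).
    assert resolve(frozenset({1, 2}), frozenset({-1, 3}), 1) == frozenset({2, 3})
    # Resolving (x1) with (not x1) yields the empty clause: UNSAT derived.
    assert resolve(frozenset({1}), frozenset({-1}), 1) == frozenset()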
Last updated:  2022-02-20
Fiat-Shamir signatures without aborts using Ring-and-Noise assumptions
Dipayan Das, Antoine Joux, Anand Kumar Narayanan
Lattice- and code-based hard problems such as Learning With Errors (LWE) or syndrome decoding (SD) form cornerstones of post-quantum cryptography. However, signature schemes built on these assumptions remain rather complicated. Indeed, signature schemes from LWE problems are built on the Fiat-Shamir with abort paradigm with no apparent means for knowledge extraction. On the code side, signature schemes mainly stem from Stern's zero-knowledge identification scheme. However, because of its large soundness error of $2/3$, it is costly to turn into a signature scheme. The latest developments rely on complicated cut-and-choose and multiparty-in-the-head techniques. As a consequence, they apply the Fiat-Shamir transformation on protocols with at least 5 rounds, leading to additional complexity and degraded security parameters. In the present paper, we propose an alternative approach to build a simple zero-knowledge $\Sigma$-protocol with a small soundness error, based on the hardness of Ring-and-Noise assumptions, a general family of assumptions that encompasses both lattices and codes. With such a $\Sigma$-protocol at hand, signatures can directly be derived by invoking the standard Fiat-Shamir transform, without the need for aborts. The main novel tool that allows us to achieve this is the use of specifically tailored locality-sensitive hash functions. We outline our schemes for general Ring-and-Noise assumptions and present them in detail for the ring of residues modulo Mersenne numbers endowed with the Hamming metric. This Mersenne setting is ideal to illustrate our schemes, since it is close in spirit to both lattice and code based assumptions.
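As a reminder of the generic transform invoked here, a scheme-agnostic Python sketch of how Fiat-Shamir derives the verifier's challenge non-interactively (illustrative names only, not this paper's construction):

    import hashlib

    def fs_challenge(commitment: bytes, message: bytes) -> int:
        """Replace the verifier: challenge = hash(commitment || message)."""
        return int.from_bytes(
            hashlib.sha3_256(commitment + message).digest(), "big")

    # A signature is (commitment, response), where the response answers
    # fs_challenge(commitment, message); the verifier recomputes the
    # challenge and checks the sigma-protocol relation. With a small
    # soundness error per run, no abort-and-retry loop is needed.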
Last updated:  2022-02-20
RevEAL: Single-Trace Side-Channel Leakage of the SEAL Homomorphic Encryption Library
Furkan Aydin, Emre Karabulut, Seetal Potluri, Erdem Alkim, Aydin Aysu
This paper demonstrates the first side-channel attack on homomorphic encryption (HE), which allows computing on encrypted data. We reveal a power-based side-channel leakage of Microsoft SEAL prior to v3.6 that implements the Brakerski/Fan-Vercauteren (BFV) protocol. Our proposed attack targets the Gaussian sampling in SEAL's encryption phase and can extract the entire message with a single power measurement. Our attack works by (1) identifying each coefficient index being sampled, (2) extracting the sign value of the coefficients from control-flow variations, (3) recovering the coefficients with a high probability from data-flow variations, and (4) using a Blockwise Korkine-Zolotarev (BKZ) algorithm to efficiently explore and estimate the remaining search space. Using real power measurements, the results on a RISC-V FPGA implementation of SEAL (v3.2) show that the proposed attack can reduce the plaintext encryption security level from $2^{128}$ to $2^{4.4}$. Therefore, as HE gears toward real-world applications, such attacks and related defenses should be considered.
Last updated:  2022-03-19
A New Perturbation for Multivariate Public Key Schemes such as HFE and UOV
Jean-Charles Faugère, Gilles Macario-Rat, Jacques Patarin, Ludovic Perret
We present the analysis of a new perturbation that appears to significantly strengthen the security of some families of multivariate schemes. Thanks to this new perturbation, we are able to obtain notably efficient signature and encryption public key schemes, in particular when combining it with the well-known trapdoors HFE and UOV. We present the best attacks that we know of against these variant schemes, and we give practical examples of parameters for current security standards.
Last updated:  2022-09-15
Through the Looking-Glass: Benchmarking Secure Multi-Party Computation Comparisons for ReLU's
Abdelrahaman Aly, Kashif Nawaz, Eugenio Salazar, Victor Sucasas
Comparisons, or inequality tests, are an essential building block of Rectified Linear Unit functions (ReLUs), which are ever more present in Machine Learning, specifically in Neural Networks. Motivated by the increasing interest in privacy-preserving Artificial Intelligence, we explore the current state of the art of privacy-preserving comparisons over Multi-Party Computation (MPC). We then introduce constant-round variations and combinations that are compatible with customary fixed-point arithmetic over MPC. Our main focus is implementation and benchmarking; hence, we showcase our contributions via an open-source library compatible with current MPC software tools. Furthermore, we include a comprehensive comparative analysis in various adversarial settings. Our results improve running times in practical scenarios. Finally, we offer conclusions about the viability of these protocols when adopted for privacy-preserving Machine Learning.
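A cleartext reference in Python (not the MPC protocol itself) makes the link between comparisons and ReLUs explicit: each ReLU gate reduces to one inequality test plus one multiplication, which is why the cost of secure comparison dominates.

    def relu_via_comparison(x: float) -> float:
        """ReLU decomposed into the two steps an MPC protocol performs."""
        b = 1 if x > 0 else 0   # in MPC: the secure inequality test
        return b * x            # in MPC: one secret-shared multiplication

    assert relu_via_comparison(3.5) == 3.5
    assert relu_via_comparison(-2.0) == 0.0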
Last updated:  2022-02-23
Enig: Player Replaceable Finality Layers with Optimal Validity
Simon Holmgaard Kamp, Jesper Buus Nielsen, Søren Eller Thomsen, Daniel Tschudi
We present two new provably secure finality layers for Nakamoto-style blockchains. One is for partially synchronous networks and the other is for networks with periods of synchrony. Both protocols are player-replaceable and therefore enjoy protection against denial-of-service attacks when run with a proof-of-stake lottery to elect the parties. The finality layers are proven secure to run on top of any Nakamoto-style blockchain which has a property called \emph{finality friendliness}. Both finality layers improve on all existing provably secure finality layers in terms of communication complexity or security. A proof-of-stake finality layer has $v$-validity if, whenever it declares a block $B$ final, honest parties holding a fraction $v$ of the stake had $B$ on the longest chain. Validity is important to prevent the finality layer from finalising blocks that were not ``good'' according to the Nakamoto-style blockchain. We prove upper bounds on the achievable validity in partially synchronous networks and networks with periods of synchrony. Both our finality layers match these upper bounds.
Last updated:  2022-02-20
Non-Black-Box Approach to Secure Two-Party Computation in Three Rounds
Akshayaram Srinivasan
The round complexity of secure two-party computation is a long-studied problem with matching upper and lower bounds for the case of black-box simulators (i.e., simulators that use the adversary as a black box). In this work, we focus on going beyond this black-box barrier via non-black-box techniques. Specifically, based on standard cryptographic assumptions, we give a construction of a 3-round two-party computation protocol for computing inputless functionalities (such as coin-tossing) that satisfies standard security against malicious senders and $\epsilon$-security against malicious receivers. Prior to our work, such protocols were only known for the case of (weak) zero-knowledge.
Last updated:  2022-02-20
Lattice-based Public Key Encryption with Multi-Ciphertexts Equality Test in Cloud Computing
Giang Linh Duc Nguyen, Dung Hoang Duong, Huy Quoc Le, Willy Susilo
Nowadays, with the rapid advancement of technology, billions of interconnected devices are constantly collecting data around us, and privacy protection has become a major concern. Data must be in encrypted form before being stored on cloud servers. As a result, the cloud servers are unable to perform computations on encrypted data, such as searching and matching keywords. In the public key encryption with multi-ciphertext equality test (PKE-MET) setting, a cloud server can perform an equality test on a number of ciphertexts encrypted with the same designated number. In this paper, we propose, for the first time, an efficient construction of a quantum-safe PKE-MET system based on the hardness of the Learning with Errors (LWE) problem in the lattice setting. Furthermore, we also discuss the first lattice-based public key encryption with flexible multi-ciphertext equality test (PKE-FMET) constructions, which allow performing equality tests on multiple ciphertexts whose designated numbers are less than a threshold number. Our proposed schemes are proven secure in the standard model.
Last updated:  2023-06-10
Efficient FHEW Bootstrapping with Small Evaluation Keys, and Applications to Threshold Homomorphic Encryption
Yongwoo Lee, Daniele Micciancio, Andrey Kim, Rakyong Choi, Maxim Deryabin, Jieun Eom, Donghoon Yoo
There are two competing approaches to bootstrap the FHEW fully homomorphic encryption scheme (Ducas and Micciancio, Eurocrypt 2015) and its variants: the original AP/FHEW method, which supports arbitrary secret key distributions, and the improved GINX/TFHE method, which uses much smaller evaluation keys, but is directly applicable only to binary secret keys, restricting the scheme's applicability. In this paper, we present a new bootstrapping procedure for FHEW-like encryption schemes that achieves the best features of both methods: support for arbitrary secret key distributions at no additional runtime costs, while using small evaluation keys. (Support for arbitrary secret keys is critical in a number of important applications, like threshold and some multi-key homomorphic encryption schemes.) As an added benefit, our new bootstrapping procedure results in smaller noise growth than both AP and GINX, regardless of the key distribution. Our improvements are both theoretically significant (offering asymptotic savings, up to a $O(\log n)$ multiplicative factor, either on the running time or public evaluation key size), and practically relevant. For example, for a concrete 128-bit target security level, we show how to decrease the evaluation key size of the best previously known scheme by more than 30%, while also slightly reducing the running time. We demonstrate the practicality of the proposed methods by building a prototype implementation within the PALISADE/OpenFHE open-source homomorphic encryption library. We provide optimized parameter sets and implementation results showing that the proposed algorithm has the best performance among all known FHEW bootstrapping methods in terms of runtime and key size. We illustrate the benefits of our method by sketching a simple construction of threshold homomorphic encryption based on FHEW.
Last updated:  2022-02-20
Nice Attacks --- but What is the Cost? Computational Models for Cryptanalysis
Charles Bouillaguet
This paper discusses the implications of choosing a computational model to study the cost of cryptographic attacks and therefore quantify how dangerous they are. This choice is often unconscious and the chosen model itself is usually implicit; but it has repercussions on security evaluations. We compare three reasonable computational models: $i$) the usual Random Access Machine (RAM) model; $ii$) the ``Expensive Memory Model'' explicitly introduced by several 3rd-round submissions to the Post-Quantum NIST competition (it states that a single access to a large memory costs as much as many local operations); $iii$) the venerable VLSI model using the Area-Time cost measure. It is well known that costs in the RAM model are lower than costs in the last two models. These have been claimed to be more realistic, and therefore to lead to more precise security evaluations. The main technical contribution of this paper is to show that these last two models are incomparable. We identify a situation where the expensive memory model overestimates costs compared to the (presumably even more realistic) VLSI model. In addition, optimizing the cost in each model is a distinct objective that leads to different attack parameters, and raises the question of what is the ``best'' way to proceed for an eventual attacker. We illustrate these discrepancies by studying several generic attacks against hash functions and Feistel networks in the three models.
Last updated:  2022-10-25
Generalising Fault Attacks to Genus Two Isogeny Cryptosystems
Ariana Goh, Chu-Wee Lim, Yan Bo Ti
In this paper, we generalise the SIDH fault attack and the SIDH loop-abort fault attacks on supersingular isogeny cryptosystems (genus 1) to genus 2. Genus-2 isogeny-based cryptosystems are generalisations of their genus-1 counterparts; as such, attacks on the latter are believed to generalise to the former. The point perturbation attack on supersingular elliptic curve isogeny cryptography has been shown to be practical. We show in this paper that this fault attack continues to be practical in genus 2, albeit with a few additional traces required. We also show that the loop-abort attack carries over to the genus-2 setting seamlessly. This article is a minor revision of the version accepted to the workshop Fault Diagnosis and Tolerance in Cryptography 2022 (FDTC 2022).
Last updated:  2022-02-20
Quantum and Classical Algorithms for Bounded Distance Decoding
Richard Allen, Ratip Emin Berker, Sílvia Casacuberta, Michael Gul
In this paper, we provide a comprehensive overview of a recent debate over the quantum versus classical solvability of bounded distance decoding (BDD). Specifically, we review the work of Eldar and Hallgren [EH22], [Hal21] demonstrating a quantum algorithm solving $\lambda_1 2^{-\Omega(\sqrt{k \log q})}$-BDD in polynomial time for lattices of periodicity $q$, finite group rank $k$, and shortest lattice vector length $\lambda_1$. Subsequently, we prove the results of [DvW21a], [DvW21b] with far greater detail and elaboration than in the original work. Namely, we show that there exists a deterministic, classical algorithm achieving the same result.
Last updated:  2022-03-30
Finding Collisions against 4-round SHA3-384 in Practical Time
Senyang Huang, Orna Agmon Ben-Yehuda, Orr Dunkelman, Alexander Maximov
The Keccak sponge function family, designed by Bertoni et al. in 2007, was selected by the U.S. National Institute of Standards and Technology (NIST) in 2012 as the next generation of Secure Hash Algorithm (SHA-3). Due to its theoretical and practical importance, cryptanalysis of SHA-3 has attracted increasing attention. To the best of our knowledge, the most powerful collision attack on SHA-3 to date is the linearisation technique proposed by Jian Guo et al. However, that technique is infeasible for variants with a smaller input space, such as SHA3-384. In this work we improve previous results with three ideas that were not used in previous works against SHA-3. First, in order to reduce constraints and increase flexibility in our solutions, we use 2-block messages instead of 1-block messages. Second, we reduce the connectivity problem to a satisfiability (SAT) problem, instead of applying the linearisation technique. Finally, we consider two new non-random properties of the Keccak non-linear layer and propose an efficient deduce-and-sieve algorithm based on these properties. The resulting collision-finding algorithm on 4-round SHA3-384 has a practical time complexity of $2^{59.64}$ (and memory complexity of $2^{45.93}$). This greatly improves the previous best known collision attack by Dinur et al., whose $2^{147}$ time complexity was far from being practical. Although our attack does not threaten the security margin of the SHA-3 hash function, the tools developed in this paper could be useful in future analysis of other cryptographic primitives and also in the development of new and faster SAT solvers.
Last updated:  2023-01-16
OptRand: Optimistically responsive distributed random beacons
Adithya Bhat, Nibesh Shrestha, Aniket Kate, Kartik Nayak
Public random beacons publish random numbers at regular intervals, which anyone can obtain and verify. The design of public distributed random beacons has been an exciting research direction with significant implications for blockchains, voting, and beyond. Distributed random beacons, in addition to being bias-resistant and unpredictable, also need to have low communication overhead and latency, high resilience to faults, and ease of reconfigurability. Existing synchronous random beacon protocols sacrifice one or more of these properties. In this work, we design an efficient unpredictable synchronous random beacon protocol, OptRand, with quadratic (in the number n of system nodes) communication complexity per beacon output. First, we innovate by employing a novel combination of bilinear pairing based publicly verifiable secret-sharing and non-interactive zero-knowledge proofs to build a linear (in n) sized publicly verifiable random sharing. Second, we develop a state machine replication protocol with linear-sized inputs that is also optimistically responsive, i.e., it can progress responsively at actual network speed during optimistic conditions, despite the synchrony assumption, and thus incur low latency. In addition, we present an efficient reconfiguration mechanism for OptRand that allows nodes to leave and join the system. Our experiments show our protocols perform significantly better compared to state-of-the-art protocols under optimistic conditions and on par with state-of-the-art protocols in the normal case. We are also the first to implement a reconfiguration mechanism for distributed beacons and demonstrate that our protocol continues to be live during reconfigurations.
Last updated:  2022-02-20
SoftSpokenOT: Communication--Computation Tradeoffs in OT Extension
Lawrence Roy
Given a small number of base oblivious transfers (OTs), how does one generate a large number of extended OTs as efficiently as possible? The answer has long been the seminal work of IKNP (Ishai et al., Crypto 2003) and the family of protocols it inspired, which only use Minicrypt assumptions. Recently, Boyle et al. (Crypto 2019) proposed the Silent-OT technique that improves on IKNP, but at the cost of a much stronger, non-Minicrypt assumption: the learning parity with noise (LPN) assumption. We present SoftSpokenOT, the first OT extension to improve on IKNP's communication cost in the Minicrypt model. While IKNP requires security parameter $\lambda$ bits of communication for each OT, SoftSpokenOT only needs $\lambda / k$ bits, for any $k$, at the expense of requiring $2^{k-1} / k$ times the computation. For small values of $k$, this tradeoff is favorable since IKNP-style protocols are network-bound. We implemented SoftSpokenOT and found that our protocol gives almost a $5 \times$ speedup over IKNP in the LAN setting. Our technique is based on a novel silent protocol for vector oblivious linear evaluation (VOLE) over polynomial-sized fields. We created a framework to build maliciously secure 1-of-N OT extension from this VOLE, revisiting the existing work for each step. Along the way, we found several flaws in the existing work, including a practical attack against the consistency check of Patra et al. (NDSS 2017), while also making some improvements.
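The stated tradeoff is easy to tabulate; the following Python sketch simply evaluates the two formulas from the abstract ($\lambda / k$ bits of communication per OT against a $2^{k-1}/k$ computation factor, with $k = 1$ recovering IKNP) for a few values of $k$:

    LAMBDA = 128  # security parameter, in bits

    for k in (1, 2, 4, 8):
        comm_bits = LAMBDA / k          # bits sent per extended OT
        comp_factor = 2 ** (k - 1) / k  # computation relative to IKNP
        print(f"k={k}: {comm_bits:.0f} bits/OT, {comp_factor:.1f}x computation")

    # k=1: 128 bits/OT, 1.0x computation  (the IKNP baseline)
    # k=2:  64 bits/OT, 1.0x computation  (half the traffic, same work)
    # k=4:  32 bits/OT, 2.0x computation
    # k=8:  16 bits/OT, 16.0x computation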
Last updated:  2022-03-30
NanoGRAM: Garbled RAM with $\widetilde{O}(\log N)$ Overhead
Andrew Park, Wei-Kai Lin, Elaine Shi
We propose a new garbled RAM construction called NanoGRAM, which achieves an amortized cost of $\widetilde{O}(\lambda \cdot (W \log N + \log^3 N))$ bits per memory access, where $\lambda$ is the security parameter, $W$ is the block size, and $N$ is the total number of blocks, and $\widetilde{O}(\cdot)$ hides $poly\log\log$ factors. For sufficiently large blocks where $W = \Omega(\log^2 N)$, our scheme achieves $\widetilde{O}(\lambda \cdot W \log N)$ cost per memory access, where the dependence on $N$ is optimal (barring $poly\log\log$ factors) in terms of the evaluator's runtime. Our asymptotic performance matches even the {\it interactive} state of the art (modulo $poly\log\log$ factors), that is, running Circuit ORAM atop garbled circuits, and yet we remove the logarithmic number of interactions necessary in this baseline. Furthermore, we achieve an asymptotic improvement over the recent work of Heath et al. Our scheme adopts the same assumptions as the mainstream literature on practical garbled circuits, i.e., circular correlation-robust hashes or a random oracle. We evaluate the concrete performance of NanoGRAM and compare it with a couple of baselines that are asymptotically less efficient. We show that NanoGRAM starts to outperform the naive linear-scan garbled RAM at a memory size of $N = 2^9$ and starts to outperform the recent construction of Heath et al. at $N = 2^{13}$. Finally, as a by-product, we also show the existence of a garbled RAM scheme assuming only one-way functions, with an amortized cost of $\widetilde{O}(\lambda^2 \cdot (W \log N + \log^3 N))$ per memory access. Again, the dependence on $N$ is nearly optimal for blocks of size $W = \Omega(\log^2 N)$ bits.
Last updated:  2022-02-20
Short-lived zero-knowledge proofs and signatures
Arasu Arun, Joseph Bonneau, Jeremy Clark
We introduce the short-lived proof, a non-interactive proof of knowledge with a novel feature: after a specified period of time, the proof is no longer convincing. This time-delayed loss of soundness happens "naturally" without further involvement from the prover or any third party. We propose formal definitions for short-lived proofs as well as the special case of short-lived signatures. We show several practical constructions built using verifiable delay functions (VDFs). The key idea in our approach is to allow any party to forge any proof by executing a large sequential computation. Some constructions achieve a stronger property called reusable forgeability in which one sequential computation allows forging an arbitrary number of proofs of different statements. Our work also introduces two novel types of VDFs, re-randomizable VDFs and zero-knowledge VDFs, which may be of independent interest.
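The forging mechanism rests on an inherently sequential computation; a toy Python sketch of iterated squaring (a common VDF-style delay computation, shown here with toy parameters and not as the paper's instantiation) illustrates the kind of work any party must perform to forge an expired proof:

    def sequential_squarings(x: int, T: int, N: int) -> int:
        """Compute x^(2^T) mod N by T squarings, believed hard to parallelise
        when N is an RSA-like modulus of unknown factorisation."""
        y = x % N
        for _ in range(T):
            y = (y * y) % N
        return y

    # Toy parameters only: a real deployment would use a ~2048-bit modulus
    # and a delay T tuned to the proof's intended lifetime.
    print(sequential_squarings(5, 1000, 2**127 - 1))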
Last updated:  2022-06-10
Simplified MITM Modeling for Permutations: New (Quantum) Attacks
André Schrottenloher, Marc Stevens
Meet-in-the-middle (MITM) is a general paradigm where internal states are computed along two independent paths ('forwards' and 'backwards') that are then matched. Over time, MITM attacks have improved using more refined techniques and exploiting additional freedoms and structure, which makes it more involved to find and optimize such attacks. This has led to the use of detailed attack models for generic solvers to automatically search for improved attacks, notably a MILP model developed by Bao et al. at EUROCRYPT 2021. In this paper, we study a simpler MILP modeling that combines a greatly reduced attack representation as input to the generic solver with a theoretical analysis that, for any solution, proves the existence and complexity of a detailed attack. This modeling allows us to find both classical and quantum attacks on a broad class of cryptographic permutations. First, Present-like constructions, with the permutations of the Spongent hash functions: we improve the MITM step in distinguishers by up to 3 rounds. Second, AES-like designs: despite being much simpler than Bao et al.'s, our model allows us to recover the best previous results. The only limitation is that we do not use degrees of freedom from the key schedule. Third, we show that the model can be extended to target more permutations, like Feistel networks. In this context we give new Guess-and-determine attacks on reduced Simpira v2 and Sparkle. Finally, using our model, we find several new quantum preimage and pseudo-preimage attacks (e.g. Haraka v2, Simpira v2 ...) targeting the same number of rounds as the classical attacks.
Last updated:  2022-11-23
Syndrome Decoding in the Head: Shorter Signatures from Zero-Knowledge Proofs
Thibauld Feneuil, Antoine Joux, Matthieu Rivain
Zero-knowledge proofs of knowledge are useful tools to design signature schemes. The ongoing effort to build a quantum computer urges the cryptography community to develop new secure cryptographic protocols based on quantum-hard cryptographic problems. One of the few directions is code-based cryptography, for which the strongest problem is syndrome decoding (SD) for random linear codes. This problem is known to be NP-hard, and the cryptanalysis state of the art has been stable for many years. A zero-knowledge protocol for this problem was pioneered by Stern in 1993. Since its publication, many articles have proposed optimizations, implementations, or variants. In this paper, we introduce a new zero-knowledge proof for the syndrome decoding problem on random linear codes. Instead of using permutations like most of the existing protocols, we rely on the MPC-in-the-head paradigm, in which we reduce the task of proving the low Hamming weight of the SD solution to proving some relations between specific polynomials. Specifically, we propose a 5-round zero-knowledge protocol that proves the knowledge of a vector $x$ such that $y=Hx$ and $\operatorname{wt}(x)\leq w$ and which achieves a soundness error close to $1/N$ for an arbitrary $N$. Turning this protocol into a signature scheme, we achieve a signature size of 11-12 KB for 128-bit security when relying on the hardness of the SD problem on binary fields. Using larger fields (like $\mathbb{F}_{2^8}$), we can produce fast signatures of around 8 KB. This allows us to outperform Picnic3 and to be competitive with SPHINCS+, both post-quantum signature candidates in the ongoing NIST standardization effort. Moreover, our scheme outperforms all the existing code-based signature schemes for the common "signature size + public key size" metric.
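The relation being proven, $y = Hx$ over $\mathbb{F}_2$ with $\operatorname{wt}(x) \leq w$, is easy to state concretely; the following self-contained Python sketch (illustrative toy parameters, not the protocol) builds and checks a random SD instance:

    import random

    def sd_instance(n=24, k=12, w=4, seed=1):
        """Random parity-check matrix H, secret x of weight w, syndrome y."""
        random.seed(seed)
        H = [[random.randint(0, 1) for _ in range(n)] for _ in range(n - k)]
        x = [0] * n
        for i in random.sample(range(n), w):
            x[i] = 1
        y = [sum(h[j] * x[j] for j in range(n)) % 2 for h in H]
        return H, x, y, w

    H, x, y, w = sd_instance()
    assert sum(x) <= w                                   # weight constraint
    assert all(sum(h[j] * x[j] for j in range(len(x))) % 2 == yi
               for h, yi in zip(H, y))                   # y = Hx over F_2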
Last updated:  2023-07-04
Constant-Round YOSO MPC Without Setup
Sebastian Kolby, Divya Ravi, Sophia Yakoubov
YOSO MPC (Gentry et al., Crypto 2021) is a new MPC framework where each participant can speak at most once. This models an adaptive adversary’s ability to watch the network and corrupt or destroy parties it deems significant based on their communication. By using private channels to anonymous receivers (e.g. by encrypting to a public key whose owner is unknown), the communication complexity of YOSO MPC can scale sublinearly with the total number N of available parties, even when the adversary’s corruption threshold is linear in N (e.g. just under N/2). It was previously an open problem whether YOSO MPC can achieve guaranteed output delivery in a constant number of rounds without relying on trusted setup. In this work, we show that this can indeed be accomplished. We demonstrate three different approaches: the first two (which we call YaOSO and YOSO-GLS) use two and three rounds of communication, respectively. Our third approach (which we call YOSO-LHSS) uses O(d) rounds, where d is the multiplicative depth of the circuit being evaluated; however, it can be used to bootstrap any constant-round YOSO protocol that requires setup, by generating that setup within YOSO-LHSS. Though YOSO-LHSS requires more rounds than our first two approaches, it may be more practical, since the zero knowledge proofs it employs are more efficient to instantiate. As a contribution of independent interest, we introduce a verifiable state propagation UC functionality, which allows parties to send private messages that are verifiably derived in the “correct” way (according to the protocol in question) to anonymous receivers. This is a natural functionality to build YOSO protocols on top of.
Last updated:  2022-02-28
Overflow-detectable Floating-point Fully Homomorphic Encryption
Seunghwan Lee, Dong-Joon Shin
We propose a floating-point fully homomorphic encryption (FPFHE) scheme based on torus fully homomorphic encryption equipped with programmable bootstrapping. Specifically, FPFHE for 32-bit and 64-bit floating-point messages is implemented, the latter being state-of-the-art precision among FHEs. Also, a ciphertext is constructed that indicates whether an overflow has occurred while evaluating arithmetic circuits with FPFHE, which is useful when the message space or arithmetic circuit is too complex to estimate a bound on the outputs, as in deep learning applications.
Last updated:  2022-06-14
Statistically Sender-Private OT from LPN and Derandomization
Nir Bitansky, Sapir Freizeit
We construct a two-message oblivious transfer protocol with statistical sender privacy (SSP OT) based on the Learning Parity with Noise (LPN) assumption and a standard Nisan-Wigderson style derandomization assumption. Beyond being of interest on their own, SSP OT protocols have proven to be a powerful tool toward minimizing the round complexity in a wide array of cryptographic applications, from proof systems, through secure computation protocols, to hard problems in statistical zero knowledge (SZK). The protocol is plausibly post-quantum secure. The only other constructions with plausible post-quantum security are based on the Learning with Errors (LWE) assumption. Lacking the geometric structure of LWE, our construction and analysis rely on a different set of techniques. Technically, we first construct an SSP OT protocol in the common random string model from LPN alone, and then derandomize the common random string. Most of the technical difficulty lies in the first step. Here we prove a robustness property of the inner-product randomness extractor against a certain type of linear splitting attacks. A caveat of our construction is that it relies on the so-called low-noise regime of LPN. This aligns with our current complexity-theoretic understanding of LPN, which only in the low-noise regime is known to imply hardness in SZK.
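For concreteness, the inner-product extractor referred to here is simply $\mathrm{Ext}(x, s) = \langle x, s \rangle \bmod 2$, extracting a near-uniform bit from a high-entropy source $x$ under a random seed $s$; a minimal Python sketch (not the paper's analysis):

    import random

    def inner_product_extract(x: list, s: list) -> int:
        """One extracted bit: the inner product of x and seed s over F_2."""
        return sum(xi * si for xi, si in zip(x, s)) % 2

    random.seed(0)
    x = [random.randint(0, 1) for _ in range(128)]  # entropy source
    s = [random.randint(0, 1) for _ in range(128)]  # public random seed
    print(inner_product_extract(x, s))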
Last updated:  2022-09-20
Exploring SAT for Cryptanalysis: (Quantum) Collision Attacks against 6-Round SHA-3 (Full Version)
Jian Guo, Guozhen Liu, Ling Song, Yi Tu
In this work, we focus on collision attacks against instances of the SHA-3 hash family in both classical and quantum settings. Since the 5-round collision attacks on SHA3-256 and other variants proposed by Guo et al. at JoC~2020, no other essential progress has been published. With a thorough investigation, we identify that the challenges of extending such collision attacks on SHA-3 to more rounds lie in the inefficiency of differential trail search. To overcome this obstacle, we develop a SAT-based automatic search toolkit. The tool is used in multiple intermediate steps of the collision attacks and exhibits surprisingly high efficiency in differential trail search and other optimization problems encountered in the process. As a result, we present the first 6-round classical collision attack on SHAKE-128 with time complexity $2^{123.5}$, which also forms a quantum collision attack with quantum time ${{2^{67.25}}/{\sqrt{S}}}$, and the first 6-round quantum collision attacks on SHA3-224 and SHA3-256 with quantum time ${{2^{97.75}}/{\sqrt{S}}}$ and ${{2^{104.25}}/{\sqrt{S}}}$ respectively, both with negligible requirements of classical and quantum memory. The fact that classical collision attacks do not apply to 6-round SHA3-224 and SHA3-256 shows the higher coverage of quantum collision attacks, which is consistent with that on SHA-2 observed by Hosoyamada and Sasaki at CRYPTO~2021.
Last updated:  2024-01-07
Improving Differential-Neural Cryptanalysis
Liu Zhang, Zilong Wang, and Baocang Wang
In CRYPTO'19, Gohr introduced a novel cryptanalysis method by developing a differential-neural distinguisher using neural networks as the underlying distinguisher. He effectively integrated this distinguisher with classical differentials, facilitating a 12-round key recovery attack on Speck32/64 (from a total of 22 rounds). Bao et al. refined the concept of neutral bits, enabling key recovery attacks up to 13 rounds for Speck32/64 and 16 rounds (from a total of 32) for Simon32/64. Our primary objective is to enhance the capabilities of differential-neural distinguishers by applying more deep-learning techniques, focusing on handling more rounds and improving accuracy. Inspired by the Inception Block in GoogLeNet, we adopted a design that uses multiple parallel convolutional layers with varying kernel sizes before the residual block to capture multi-dimensional information. Additionally, we expanded the convolutional kernels in the residual blocks, thereby enlarging the network's receptive field. In the case of Speck32/64, our efforts yield accuracy improvements in rounds 6, 7, and 8, enabling the successful training of a 9-round differential-neural distinguisher. As for Simon32/64, we developed a differential-neural distinguisher capable of effectively handling 12 rounds while achieving noteworthy accuracy enhancements in rounds 9, 10, and 11. Additionally, we utilized neutral bits to ensure the required data distribution for launching a successful key recovery attack when using multiple-ciphertext pairs as input for the neural network. Meanwhile, we redefined the formula for time complexity based on the differences in prediction speeds of the distinguisher between a single-core CPU and a GPU. Combining these various advancements allows us to considerably reduce the time and data complexity of key recovery attacks on 13-round Speck32/64. Furthermore, we used knowledge distillation techniques to reduce the model size, thereby accelerating the distinguisher's prediction speed and reducing the time complexity. In particular, we achieved a successful 14-round key recovery attack by exhaustively guessing a 1-round subkey, marking a significant milestone in differential-neural cryptanalysis. For Simon32/64, we accomplished a groundbreaking 17-round key recovery attack for the first time and reduced the time complexity of the 16-round key recovery attack.
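To make the architectural idea concrete, here is a minimal PyTorch sketch of such a multi-kernel block (an illustration of the Inception-style design pattern described above, not the authors' exact network; channel counts and kernel sizes are illustrative):

    import torch
    import torch.nn as nn

    class MultiKernelBlock(nn.Module):
        """Parallel 1-D convolutions with different kernel sizes, applied
        to the same input and concatenated along the channel dimension."""
        def __init__(self, in_ch=1, out_ch=16, kernel_sizes=(1, 3, 5)):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv1d(in_ch, out_ch, k, padding=k // 2)
                for k in kernel_sizes)

        def forward(self, x):
            # Each branch sees the same ciphertext-pair encoding; the
            # concatenation exposes multi-scale features to later layers.
            return torch.cat([b(x) for b in self.branches], dim=1)

    block = MultiKernelBlock()
    print(block(torch.randn(2, 1, 64)).shape)  # torch.Size([2, 48, 64])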
Last updated:  2024-02-14
A Novel Framework for Explainable Leakage Assessment
Si Gao and Elisabeth Oswald
Non-specific leakage detection (initially introduced as “Test Vector Leakage Assessment”, TVLA for short) plays a vital role in practice because it detects (potential) leaks independently of assumptions about the leakage model and any specific attack vector. However, the non-specific nature means detected leaks might not be exploitable, and thus the current state of the art is to employ a battery of specific attacks to confirm the detection outcomes. We propose a novel leakage assessment framework which makes it possible to link non-specific leakage detection outcomes with the size of the key guess that is necessary to exploit them. We therefore solve the problem of deciding whether or not a leak is exploitable without the need for specific attacks. Our methodology furthermore reveals, for a detected leak, the specific key bytes involved, and with that, it allows the construction of confirmatory attacks. This novel approach is enabled by casting the leakage detection problem as the statistical task of building key-dependent regression models: if such a model exists, then we know that the point leaks. Depending on the size and nature of the model, we can further judge the exploitability and provide a concrete attack vector.
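A minimal Python sketch of the idea on synthetic data (illustrative only, not the authors' tool): fit a regression model that includes a key-dependent predictor and check whether it explains the trace samples better than a constant model; if it does, the point leaks.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    key_byte = 0x3C
    plaintexts = rng.integers(0, 256, n)
    # Key-dependent predictor: Hamming weight of the S-box-free toy target.
    hw = np.array([bin(p ^ key_byte).count("1") for p in plaintexts])
    traces = 0.5 * hw + rng.normal(0, 1.0, n)   # synthetic leaking samples

    X = np.column_stack([np.ones(n), hw])       # constant + key-dependent term
    beta, res, *_ = np.linalg.lstsq(X, traces, rcond=None)
    ss_tot = np.sum((traces - traces.mean()) ** 2)
    r2 = 1 - res[0] / ss_tot
    print(f"R^2 of key-dependent model: {r2:.3f}")  # clearly above 0 => leak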