All papers (21725 results)
Anonymous Complaint Aggregation for Secure Messaging
Private messaging platforms provide strong protection against platform eavesdropping, but malicious users can use privacy as cover for spreading abuse and misinformation. In an attempt to identify the sources of misinformation on private platforms, researchers have proposed mechanisms to trace back the source of a user-reported message (CCS '19, '21). Unfortunately, the threat model considered by initial proposals allowed a single user to compromise the privacy of another user whose legitimate content the reporting user did not like. More recent work has attempted to mitigate this side effect by requiring a threshold number of users to report a message before its origins can be identified (NDSS '22). However, the state-of-the-art scheme requires the introduction of new probabilistic data structures and only achieves a "fuzzy" threshold guarantee. Moreover, false positives, where the source of an unreported message is identified, are possible.
This paper introduces a new threshold source tracking technique that allows a private messaging platform, with the cooperation of a third-party moderator, to operate a threshold reporting scheme with exact thresholds and no false positives. Unlike prior work, our techniques require no modification of the message delivery process for a standard source tracking scheme, affecting only the abuse reporting procedure, and do not require tuning of probabilistic data structures.
The Systemic Errors of Banded Quantum Fourier Transformation
The Quantum Fourier Transformation (QFT) requires rotation gates with extremely tiny angles. Since such tiny angles (corresponding to extremely weak energies) cannot be physically manipulated, those gates must be replaced by scaled, controllable gates. The resulting version of QFT is called banded QFT (BQFT) and can be specified mathematically using Kronecker products and binary fractions. However, the systemic errors of BQFT have never been heuristically estimated. In this paper, we generate programming code for BQFT and argue that its systemic errors are not negligible, which means that physically implementing QFT with a huge transform size remains a challenge. To the best of our knowledge, this is the first time such a result has been obtained.
Verifiable Information-Theoretic Function Secret Sharing
A function secret sharing (FSS) scheme (Boyle et al., Eurocrypt 2015) is a cryptographic primitive that enables additive secret sharing of functions from a given function family $\mathcal{F}$. FSS supports a wide range of cryptographic applications, including private information retrieval (PIR), anonymous messaging systems, private set intersection and more. Formally, given positive integers $r \geq 2$ and $t < r$, and a class $\mathcal{F}$ of functions $f: [n] \to \mathbb{G}$ for an Abelian group $\mathbb{G}$, an $r$-party $t$-private FSS scheme for $\mathcal{F}$ allows a dealer to split each $f \in \mathcal{F}$ into $r$ function shares $f_1, \ldots, f_r$ among $r$ servers. The shares satisfy $f = f_1 + \cdots + f_r$, and the function remains hidden from any coalition of up to $t$ servers. FSS schemes for multi-point functions $f_{\alpha_1,\ldots,\alpha_l,\beta_1,\ldots,\beta_l}$ with distinct $\alpha_i$ and $l<t$, which evaluate to $\beta_i$ on input $\alpha_i$ for all $i\in[l]$ and to zero on all other inputs, are known, in the case $l=1$, as distributed point functions (DPF). FSS schemes for the special interval functions $f^{<}_{\alpha,\beta}$, which evaluate to $\beta$ on inputs less than $\alpha$ and to zero on all other inputs, are known as distributed comparison functions (DCF).
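As a toy illustration of the additive correctness property $f = f_1 + \cdots + f_r$ (and nothing more: a real DPF compresses the keys), one can share a point function via full truth tables. All names and parameters below are illustrative:

```python
import secrets

def share_point_function(alpha, beta, n, r, p):
    """Additively share the point function f_{alpha,beta}: [n] -> Z_p
    (f(alpha) = beta, zero elsewhere) among r servers.  Shares are full
    truth tables, so any r-1 of them are uniformly random; an actual DPF
    achieves the same additivity with short keys."""
    f = [beta if x == alpha else 0 for x in range(n)]
    shares = [[secrets.randbelow(p) for _ in range(n)] for _ in range(r - 1)]
    last = [(f[x] - sum(s[x] for s in shares)) % p for x in range(n)]
    return shares + [last]

shares = share_point_function(alpha=5, beta=7, n=16, r=3, p=97)
recovered = [sum(s[x] for s in shares) % 97 for x in range(16)]
assert recovered == [7 if x == 5 else 0 for x in range(16)]
```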
Most existing FSS schemes are based on the existence of one-way functions or pseudo-random generators, and as a result, hiding of function $f$ holds only against computationally bounded adversaries. Protocols employing them as building blocks are computationally secure. Several exceptions mostly focus on DPF for four, eight or $d(t+1)$ servers for positive integer $d$, and none of them provide verifiability.
In this paper, we propose a DPF for $d(t+l-1)+1$ servers, where $d$ is a positive integer, offering a better key size than the previously proposed DPF for $d(t+1)$ servers, as well as a DCF for $dt+1$ servers, also for a positive integer $d$. We introduce their verifiable extension, in which no set of servers holding $t$ keys can persuade the client to accept a wrong value of the function. This verifiability notion differs from existing verifiable FSS schemes in the sense that we verify not only that the shared function belongs to the class $\mathcal{F}$, but also the correctness of the computation results. Our schemes provide a secret key size of $O(n^{1/d}\cdot s\log(p))$ for DPF and $O(n^{1/d}\cdot s\log(p))$ for DCF, where $p^s$ is the size of the group $\mathbb{G}$.
Modeling Mobile Crash in Byzantine Consensus
Targeted Denial-of-Service (DoS) attacks have been a practical concern for permissionless blockchains. Potential solutions, such as random sampling, are adopted by blockchains. However, the associated security guarantees have only been informally discussed in prior work. This is because existing adversary models either fail to fully capture this attack, give up certain design choices (as in the sleepy model or the asynchronous network model), or are too strong to be practical (as in the mobile Byzantine adversary model).
This paper provides theoretical foundations and desired properties for consensus protocols that resist targeted DoS attacks. In particular, we define the Mobile Crash Adaptive Byzantine (MCAB) model to capture such attacks. In addition, we identify and formalize two properties for consensus protocols under the MCAB model, and analyze their trade-offs.
As case studies, we prove that Ouroboros Praos and Algorand are secure in our MCAB model, giving the first formal proofs supporting their security guarantee against targeted DoS attacks, which were previously only informally discussed.
We also illustrate an application of our properties by securing a streamlined BFT protocol, chained HotStuff, against targeted DoS attacks.
Towards Verifiable FHE in Practice: Proving Correct Execution of TFHE's Bootstrapping using plonky2
In this work we demonstrate for the first time that a full FHE bootstrapping operation can be proven using a SNARK in practice. We do so by designing an arithmetic circuit for the bootstrapping operation and prove it using plonky2. We are able to prove the circuit on an AWS C6i.metal instance in about 20 minutes. Proof size is about 200kB and verification takes less than 10ms. As the basis of our bootstrapping operation we use TFHE's programmable bootstrapping and modify it in a few places to more efficiently represent it as an arithmetic circuit (while maintaining full functionality and security). In order to achieve our results in a memory-efficient way, we take advantage of the structure of the computation and plonky2's ability to efficiently prove its own verification circuit to implement a recursion-based IVC scheme.
The 2Hash OPRF Framework and Efficient Post-Quantum Instantiations
An Oblivious Pseudo-Random Function (OPRF) is a two-party protocol for jointly evaluating a Pseudo-Random Function (PRF), where a user has an input x and a server has an input k. At the end of the protocol, the user learns the evaluation of the PRF using key k at the value x, while the server learns nothing about the user's input or output.
OPRFs are a prime tool for building secure authentication and key exchange from passwords, private set intersection, private information retrieval, and many other privacy-preserving systems. While classical OPRFs run as fast as a TLS Handshake, current *quantum-safe* OPRF candidates are still practically inefficient.
In this paper, we propose a framework for constructing OPRFs from post-quantum multi-party computation. The framework captures a family of so-called "2Hash PRFs", which sandwich a function evaluation in between two hashes. The core of our framework is a compiler that yields an OPRF from a secure evaluation of any function that is key-collision resistant and one-more unpredictable. We instantiate this compiler by providing such functions built from Legendre symbols, and from AES encryption. We then give a case-tailored protocol for securely evaluating our Legendre-based function, built from oblivious transfer (OT) and zero-knowledge proofs (ZKP). Instantiated with lattice-based OT and ZKPs, we obtain a quantum-safe OPRF that completes in 0.57 seconds, with less than 1MB of communication.
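To make the "2Hash" shape concrete, below is a minimal, non-oblivious sketch of a PRF that sandwiches a Legendre-symbol-based keyed function between two hashes. The prime, output length, and hash choices are illustrative only; the paper's OPRF evaluates the inner function obliviously via OT and ZKPs rather than in the clear:

```python
import hashlib

p = 2**127 - 1  # illustrative prime; the paper's parameters differ

def H1(x: bytes) -> int:
    # first hash: map the input into Z_p
    return int.from_bytes(hashlib.sha256(b"H1" + x).digest(), "big") % p

def legendre_bit(a: int) -> int:
    # Legendre symbol of a mod p, mapped to a bit (0 for squares and zero)
    return 0 if pow(a, (p - 1) // 2, p) <= 1 else 1

def F(k: int, a: int, m: int = 128) -> bytes:
    # inner keyed function: Legendre symbols at m consecutive shifts
    return bytes(legendre_bit((k + a + i) % p) for i in range(m))

def two_hash_prf(k: int, x: bytes) -> bytes:
    # "2Hash" structure: hash, apply the keyed function, hash again
    return hashlib.sha256(b"H2" + x + F(k, H1(x))).digest()
```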
Practical Lattice-Based Distributed Signatures for a Small Number of Signers
$n$-out-of-$n$ distributed signatures are a special type of threshold $t$-out-of-$n$ signatures. They are created collaboratively by a group of $n$ signers, each holding a share of the secret key. This kind of signature has been studied intensively in recent years, motivated by applications such as reducing the risk of compromising secret keys in cryptocurrencies. Towards maintaining security in the presence of quantum adversaries, Damgård et al. (J Cryptol 35(2), 2022) proposed lattice-based constructions of $n$-out-of-$n$ distributed signatures and multi-signatures following the Fiat-Shamir with aborts paradigm (ASIACRYPT 2009). Due to the inherent issue of aborts, the protocols either require increasing their parameters by a factor of $n$, or they suffer from a large number of restarts that grows with $n$. This has a significant impact on their efficiency, even if $n$ is small. Moreover, the protocols use trapdoor homomorphic commitments as a further cryptographic building block, making their deployment in practice not as easy as that of standard lattice-based Fiat-Shamir signatures. In this work, we present a new construction of $n$-out-of-$n$ distributed signatures. It is designed specifically for applications with a small number of signers. Our construction follows the Fiat-Shamir with aborts paradigm, but solves the problem of a large number of restarts without increasing the parameters by a factor of $n$ or utilizing any further cryptographic primitive. To demonstrate the practicality of our protocol, we provide a software implementation and concrete parameters aiming at 128 bits of security. Furthermore, we select concrete parameters for the construction by Damgård et al. and for the most recent lattice-based multi-signature scheme by Chen (CRYPTO 2023), and show that our approach provides a significant improvement in terms of all efficiency metrics. Our results also show that the multi-signature schemes by Damgård et al. and Chen, as well as a multi-signature variant of our protocol, produce signatures that are no smaller than a naive multi-signature derived from the concatenation of multiple standard signatures.
Differential Cryptanalysis of a Lightweight Block Cipher LELBC
In this study, we investigate the newly developed low energy lightweight block cipher (LELBC), specifically designed for resource-constrained Internet of Things (IoT) devices in smart agriculture. The designers conducted a preliminary differential cryptanalysis of LELBC through mixed-integer linear programming (MILP). This paper delves further into LELBC's differential characteristics in both the single-key and related-key frameworks using MILP, identifying a 9-round differential characteristic with probability $2^{-60}$ in the single-key framework and a 12-round differential characteristic with probability $2^{-60}$ in the related-key framework.
ORIGO: Proving Provenance of Sensitive Data with Constant Communication
Transport Layer Security (TLS) is foundational for safeguarding client-server communication. However, it does not extend integrity guarantees to third-party verification of data authenticity. If a client wants to present data obtained from a server, it cannot convince any other party that the data has not been tampered with.
TLS oracles ensure data authenticity beyond the client-server TLS connection, such that clients can obtain data from a server and ensure provenance to any third party, without server-side modifications. Generally, a TLS oracle involves a third party, the verifier, in a TLS session to verify that the data obtained by the client is accurate. Existing protocols for TLS oracles are communication-heavy, as they rely on interactive protocols. We present ORIGO, a TLS oracle with constant communication. Similar to prior work, ORIGO introduces a third party in a TLS session, and provides a protocol to ensure the authenticity of data transmitted in a TLS session, without forfeiting its confidentiality. Compared to prior work, we rely on intricate details specific to TLS 1.3, which allow us to prove correct key derivation, authentication and encryption within a Zero Knowledge Proof (ZKP). This, combined with optimizations for TLS 1.3, leads to an efficient protocol with constant communication in the online phase. Our work reduces online communication by $375 \times$ and online runtime by up to $4.6 \times$, compared to prior work.
Estimating the Unpredictability of Multi-Bit Strong PUF Classes
With the ongoing advances in machine learning (ML), cybersecurity solutions and security primitives are becoming increasingly vulnerable to successful attacks. Strong physically unclonable functions (PUFs) are a potential solution for providing high resistance to such attacks. In this paper, we propose a generalized attack model that leverages multiple chips jointly to minimize the cloning error. Our analysis shows that the entropy rate over different chips is a relevant measure for the new attack model as well as for multi-bit strong PUF classes. We explain the sources of randomness that affect unpredictability, and its possible measures, using models of state-of-the-art strong PUFs. Moreover, we utilize min-max entropy estimators to measure the unpredictability of multi-bit strong PUF classes for the first time in the PUF community. Finally, we provide experimental results for a multi-bit strong PUF class, the hybrid Boolean network PUF, showing its high unpredictability and resistance to ML attacks.
Threshold Structure-Preserving Signatures: Strong and Adaptive Security under Standard Assumptions
Structure-preserving signatures (SPS) have emerged as an important cryptographic building block, as their compatibility with the Groth-Sahai (GS) NIZK framework makes it possible to construct protocols under standard assumptions with reasonable efficiency.
Over the last years there has been significant interest in the design of threshold signature schemes. However, only very recently have Crites et al. (ASIACRYPT 2023) introduced threshold SPS (TSPS), along with a fully non-interactive construction. While this is an important step, their work comes with several limitations. With respect to the construction, they require random oracles and interactive complexity assumptions, and are restricted to so-called indexed Diffie-Hellman message spaces. The latter limits the use of their construction as a drop-in replacement for SPS. When it comes to security, they only support static corruptions and do not allow partial signature queries for the forgery.
In this paper, we ask whether it is possible to construct TSPS without such restrictions. We start from an SPS by Kiltz, Pan and Wee (CRYPTO 2015), which has an interesting structure, but thresholdizing it requires some modifications. Interestingly, we can prove it secure in the strongest model (TS-UF-1) for fully non-interactive threshold signatures (Bellare et al., CRYPTO 2022), and even under fully adaptive corruptions. Surprisingly, we can show the latter under a standard assumption, without requiring any idealized model; in contrast, all known constructions of efficient threshold signatures in the discrete logarithm setting require interactive assumptions and idealized models.
Concretely, our scheme in type III bilinear groups under the SXDH assumption has signatures consisting of 7 group elements. Compared to the TSPS from Crites et al. (2 group elements), this comes at the cost of efficiency. However, our scheme is secure under standard assumptions, achieves strong and adaptive security guarantees and supports general message spaces, i.e., represents a drop-in replacement for many SPS applications. Given these features, the increase in the size of the signature seems acceptable even for practical applications.
A trust-minimized e-cash for cryptocurrencies
We introduce a private cryptocurrency design based on the original e-cash protocol. Our proposal allows for private payments on existing blockchain systems. In our design, the issuance of the private cash is transparent and is associated with a blockchain transfer to provide stronger security.
The cool and the cruel: separating hard parts of LWE secrets
Sparse binary LWE secrets are under consideration for standardization for Homomorphic Encryption and its applications to private computation. Known attacks on sparse binary LWE secrets include the sparse dual attack and the hybrid sparse dual meet-in-the-middle attack, which requires significant memory. In this paper, we provide a new statistical attack with a low memory requirement. The attack relies on some initial parallelized lattice reduction. The key observation is that, after lattice reduction is applied to the rows of a q-ary-like embedded random matrix $A$, the entries with high variance are concentrated in the early columns of the extracted matrix. This allows us to separate out the “hard part” of the LWE secret. We can first solve the sub-problem of finding the “cruel” bits of the secret in the early columns, and then find the remaining “cool” bits in linear time. We use statistical techniques to distinguish distributions, identifying both the cruel and the cool bits of the secret. We provide concrete attack timings for recovering secrets in dimensions $n = 256$, $512$, and $768$. For the lattice reduction stage, we leverage recent improvements in lattice reduction (flatter) applied in parallel. We also apply our new attack in the RLWE setting for 2-power cyclotomic rings, showing that these RLWE instances are much more vulnerable to this attack than LWE.
Fastcrypto: Pioneering Cryptography Via Continuous Benchmarking
In the rapidly evolving fields of encryption and blockchain technologies, the efficiency and security of cryptographic schemes significantly impact performance. This paper introduces a comprehensive framework for continuous benchmarking in one of the most popular cryptography Rust libraries, fastcrypto. What makes our analysis unique is the realization that automated benchmarking is not just a performance monitor and optimization tool, but it can be used for cryptanalysis and innovation discovery as well. Surprisingly, benchmarks can uncover spectacular security flaws and inconsistencies in various cryptographic implementations and standards, while at the same time they can identify unique opportunities for innovation not previously known to science, such as providing a) hints for novel algorithms, b) indications of mix-and-match library functions that result in world-record speeds, and c) evidence of biased or untested real-world algorithm comparisons in the literature.
Our approach transcends traditional benchmarking methods by identifying inconsistencies in multi-threaded code, which previously resulted in unfair comparisons. We demonstrate the effectiveness of our methodology in identifying the fastest algorithms for specific cryptographic operations like signing, while revealing hidden performance characteristics and security flaws. The process of continuous benchmarking allowed fastcrypto to break many crypto-operations speed records in the Rust language ecosystem.
A notable discovery in our research is the identification of vulnerabilities and unfair speed claims due to missing padding checks in high-performance Base64 encoding libraries. We also uncover insights into algorithmic implementations such as multi-scalar elliptic curve multiplications, which exhibit different performance gains when applied in different schemes and libraries. This was not evident in conventional benchmarking practices. Further, our analysis highlights bottlenecks in cryptographic algorithms where pre-computed tables can be strategically applied, accounting for L1 and L2 CPU cache limitations.
Our benchmarking framework also reveals that certain algorithmic implementations incur additional overheads due to serialization processes, necessitating a refined `apples to apples' comparison approach. We identified unique performance patterns in some schemes, where efficiency scales with input size, aiding blockchain technologies in optimal parameter selection and data compression.
Crucially, continuous benchmarking serves as a tool for ongoing audit and security assurance. Variations in performance can signal potential security issues during upgrades, such as kleptography, hardware manipulation or supply chain attacks. This was evidenced by critical private key leakage vulnerabilities we found in one of the most popular EdDSA Rust libraries. By providing a dynamic and thorough benchmarking approach, our framework empowers stakeholders to make informed decisions, enhance security measures, and optimize cryptographic operations in an ever-changing digital landscape.
Cryptanalysis of rank-2 module-LIP in Totally Real Number Fields
We formally define the Lattice Isomorphism Problem for module lattices (module-LIP) in a number field $K$. This is a generalization of the problem defined by Ducas, Postlethwaite, Pulles, and van Woerden (Asiacrypt 2022), taking into account the arithmetic and algebraic specificity of module lattices from their representation using pseudo-bases.
We also provide the corresponding set of algorithmic and theoretical tools for the future study of this problem in a module setting.
Our main contribution is an algorithm solving module-LIP for modules of rank $2$ in $K^2$, when $K$ is a totally real number field.
Our algorithm exploits the connection between this problem, relative norm equations and the decomposition of algebraic integers as sums of two squares.
For a large class of modules (including $\mathcal{O}_K^2$), and a large class of totally real number fields (including the maximal real subfield of cyclotomic fields) it runs in classical polynomial time in the degree of the field and the residue at 1 of the Dedekind zeta function of the field (under reasonable number theoretic assumptions).
We provide a proof-of-concept code running over the maximal real subfield of some cyclotomic fields.
Secret and Shared Keys Recovery on Hamming Quasi-Cyclic with SASCA
Soft Analytical Side Channel Attacks (SASCA) are a powerful family of Side Channel Attacks (SCA) that can recover secret values with only a small number of traces. Their effectiveness lies in the Belief Propagation (BP) algorithm, which enables efficient computation of the marginal distributions of intermediate values. Post-quantum schemes such as Kyber and, more recently, Hamming Quasi-Cyclic (HQC) have been targets of SASCA. Previous SASCA on HQC focused on Reed-Solomon (RS) codes and successfully retrieved the shared key with a high success rate at high noise levels using a single trace. In this work, we present new SASCA on HQC in which both the shared key and the secret key are targeted. Unlike previous SASCA, we take a closer look at the Reed-Muller (RM) code. The advantage of this choice is that the RM decoder is applied before the RS decoder, which is what makes it possible to attack both keys. We build a factor graph of the Fast Hadamard Transform (FHT) function from the HQC reference implementation of April 2023. The information recovered from BP allows us to retrieve the shared key with a single trace. Going beyond previous SASCA targeting HQC, we also manage to recover the secret key with two chosen-ciphertext attacks, one of which requires a single trace and remains successful at high noise levels.
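Since the factor graph is built over the fast Hadamard transform, it may help to recall the FHT's butterfly structure. The sketch below is the generic textbook transform, not the HQC reference code itself:

```python
def fht(v):
    """Fast (Walsh-)Hadamard transform via in-place butterflies; RM(1,m)
    decoding picks the output index of maximal magnitude.  Generic
    textbook version, not the HQC reference implementation."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                x, y = v[j], v[j + h]
                v[j], v[j + h] = x + y, x - y
        h *= 2
    return v

print(fht([1, 0, 1, 0, 0, 1, 1, 0]))  # input length must be a power of two
```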
Threshold implementations of cryptographic functions between finite Abelian groups
The threshold implementation technique was proposed in 2006 by Nikova et al. as a countermeasure to mitigate cryptographic side-channel attacks on hardware implementations when the effect of glitches is taken into account. The technique is based on Boolean sharing (also called masking) and was developed for securing symmetric ciphers such as AES. In 2023, Piccione et al. proposed a general construction of threshold implementations that is universal for S-boxes that are bijective vectorial Boolean functions (functions from a binary vector space $\mathbb{F}_{2}^n$ into itself). In this paper, we generalize the construction further and propose a general theory of threshold implementations for any type of S-box. We investigate the case of functions (not necessarily bijective) that are defined between two finite Abelian groups, using the definition of threshold implementation given by Dhooghe et al. in 2019 with additive sharing. To show that this generalized notion is as useful as the one for Boolean sharing, we prove that many classical results still hold. An important tool in this theory is the notion of functional degree, introduced by Aichinger and Moosbauer in 2021, which generalizes the algebraic degree of a vectorial Boolean function. We show that if $F$ has functional degree (at most) $d$ and the cardinality of the domain is divisible by the cardinality of the codomain, then $F$ admits a threshold implementation $\mathcal{F}$ with $s\geq d+2$ shares in input and $d+2$ shares in output. Moreover, we provide a complete overview of the available tools for studying the functional degree and of how to represent such functions using an Integer-Valued (IV) polynomial representation. We then apply our theory to the following applications: defining inner product masking in our setting, providing a threshold implementation of any multiplication map, and computing the functional degree and the IV polynomial representations of the conversion maps between $\mathbb{F}_p^n$ and $\mathbb{Z}_{p^n}$.
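As a small worked example of the kind of object such a theory covers, here is the classical 3-share non-complete sharing of a multiplication map, written over an illustrative ring $\mathbb{Z}_q$ with additive sharing (the index conventions are ours, not the paper's):

```python
import secrets

q = 251  # illustrative modulus

def share(x, n=3):
    s = [secrets.randbelow(q) for _ in range(n - 1)]
    return s + [(x - sum(s)) % q]

def shared_mult(a_sh, b_sh):
    a1, a2, a3 = a_sh
    b1, b2, b3 = b_sh
    # non-completeness: output share i never touches input shares with index i
    c1 = (a2 * b2 + a2 * b3 + a3 * b2) % q
    c2 = (a3 * b3 + a3 * b1 + a1 * b3) % q
    c3 = (a1 * b1 + a1 * b2 + a2 * b1) % q
    return [c1, c2, c3]

a, b = 17, 29
# every cross term a_i * b_j appears exactly once, so the output shares
# sum to (a1+a2+a3)(b1+b2+b3) = a*b
assert sum(shared_mult(share(a), share(b))) % q == (a * b) % q
```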
EFFLUX-F2: A High Performance Hardware Security Evaluation Board
Side-channel analysis has become a cornerstone of modern hardware security evaluation for cryptographic accelerators. Recently, these techniques have also been applied in fields such as AI and Machine Learning to investigate possible threats. Security evaluations rely on standard test setups, including commercial and open-source evaluation boards such as SASEBO/SAKURA and ChipWhisperer. However, with shrinking design footprints and overlapping tasks on the same platforms, the quality of the side-channel information as well as the speed of data capture can significantly influence security assessment.
In this work, we designed EFFLUX-F2, a hardware security evaluation board that improves the quality and speed of side-channel information capture. We also designed a measurement setup to benchmark the signal differences between target boards. Multiple experimental evaluations, including noise analysis, CPA and TVLA, performed on EFFLUX-F2 and competing evaluation boards showcase the significant superiority of our design in all aspects.
Insecurity of MuSig and BN Multi-Signatures with Delayed Message Selection
This note reveals a vulnerability of MuSig and BN multi-signatures when used with delayed message selection. Although both schemes can be correctly implemented with preprocessing of the first two signing rounds before the message to sign is selected, we show that they are insecure (i.e., not existentially unforgeable under chosen-message attacks) when the message selection is deferred to the third signing round and parallel signing sessions are permitted. The attack, which uses the algorithm by Benhamouda et al. to solve the ROS problem, is practical and runs in polynomial time.
Re-Randomized FROST
We define a (small) augmentation to the FROST threshold signature scheme that additionally allows for re-randomizable public and secret keys. We build upon the notion of re-randomizable keys in the literature, but tailor this capability to the setting where the signing key is secret-shared among a set of mutually trusted parties. We do not make any changes to the plain FROST protocol; instead, we define additional algorithms to allow for randomization of the threshold public key and of participants' individual public and secret key shares.
We show the security of this re-randomized extension to FROST with respect to the algebraic one-more discrete logarithm (AOMDL) problem in the random oracle model, the same security assumptions underlying plain FROST.
Unbiasable Verifiable Random Functions
Verifiable Random Functions (VRFs) play a pivotal role in Proof of Stake (PoS) blockchains due to their applications in secret leader election protocols. However, the original definition by Micali, Rabin and Vadhan is by itself insufficient for such applications. The primary concern is that adversaries may craft VRF key pairs with a skewed output distribution, allowing them to unfairly increase their winning chances.
To address this issue, David, Gaži, Kiayias and Russell (ePrint 2017/573) proposed a stronger definition in the universal composability framework, while Esgin et al. (FC '21) put forward a weaker game-based one. Their proposed notions come with some limitations, though. The former appears to be too strong, being seemingly impossible to instantiate without a programmable random oracle. The latter is instead not sufficient to prove security for VRF-based secret leader election schemes.
In this work we close the above gap by proposing a new security property for VRFs that we call unbiasability. On the one hand, our notion suffices to imply fairness in VRF-based leader election protocols. On the other hand, we provide an efficient compiler in the plain model (with no CRS) transforming any VRF into an unbiasable one under standard assumptions. Moreover, we show that folklore VRF constructions in the ROM achieve our notion without the need to program the random oracle. As a minor contribution, we also provide a generic and efficient construction of certified one-to-one VRFs from any VRF.
Parameter-Hiding Order-Revealing Encryption without Pairings
Order-Revealing Encryption (ORE) provides a practical solution for conducting range queries over encrypted data. Achieving a desirable privacy-efficiency tradeoff in designing ORE schemes has posed a significant challenge. At Asiacrypt 2018, Cash et al. proposed Parameter-hiding ORE (pORE), which specifically targets scenarios where the data distribution shape is known, but the underlying parameters (such as mean and variance) need to be protected. However, existing pORE constructions rely on impractical bilinear maps, limiting their real-world applicability. In this work, we propose an alternative and efficient method for constructing pORE using identification schemes. By leveraging the map-invariance property of identification schemes, we eliminate the need for pairing computations during ciphertext comparison. Specifically, we instantiate our framework with the pairing-free Schnorr identification scheme and demonstrate that our proposed pORE scheme reduces ciphertext size by approximately 31.25\% and improves encryption and comparison efficiency by over two times compared to the current state-of-the-art pORE construction. Our work provides a more efficient alternative to existing pORE constructions and could be viewed as a step towards making pORE a viable choice for practical applications.
UniHand: Privacy-preserving Universal Handover for Small-Cell Networks in 5G-enabled Mobile Communication with KCI Resilience
Introducing Small Cell Networks (SCN) has significantly improved wireless link quality, spectrum efficiency and network capacity, and has been viewed as one of the key technologies in the fifth-generation (5G) mobile network. However, this technology increases the frequency of handover (HO) procedures, caused by the dense deployment of cells with reduced coverage, bringing new security and privacy issues. The current 5G-AKA and HO protocols suffer from security weaknesses, such as the lack of forward secrecy and identity confusion attacks. The high frequency of HOs might magnify these security and privacy concerns in the 5G mobile network. This work addresses these issues by proposing UniHand, a secure privacy-preserving universal HO scheme for SCNs in 5G mobile communication. UniHand achieves mutual authentication, strong anonymity, perfect forward secrecy, key-escrow freeness and key compromise impersonation (KCI) resilience. To the best of our knowledge, this is the first scheme to achieve secure, privacy-preserving universal HO with KCI resilience for roaming users in the 5G environment. We demonstrate that our proposed scheme is resilient against all essential security threats by performing a comprehensive formal security analysis, and we conduct relevant experiments to show the cost-effectiveness of the proposed scheme.
Perfect Asynchronous MPC with Linear Communication Overhead
We study secure multiparty computation in the asynchronous setting with perfect security and optimal resilience (less than one-fourth of the participants are malicious). It has been shown that every function can be computed in this model [Ben-Or, Canetti, and Goldreich, STOC 1993]. Despite 30 years of research, all protocols in the asynchronous setting require $\Omega(n^2C)$ communication complexity for computing a circuit with $C$ multiplication gates. In contrast, in the synchronous setting, it has been known for nearly 15 years how to achieve $\mathcal{O}(nC)$ communication complexity (Beerliova and Hirt; TCC 2008). The techniques for achieving this result in the synchronous setting are not known to be sufficient for obtaining an analogous result in the asynchronous setting.
We close this gap between synchronous and asynchronous secure computation and show the first asynchronous protocol with $\mathcal{O}(nC)$ communication complexity for a circuit with $C$ multiplication gates. Linear overhead forms a natural barrier for general secret-sharing-based MPC protocols. Our main technical contribution is an asynchronous weak binding secret sharing that achieves rate-1 communication (i.e., $\mathcal{O}(1)$-overhead per secret). To achieve this goal, we develop new techniques for the asynchronous setting, including the use of trivariate polynomials (as opposed to bivariate polynomials).
Generalized Feistel Ciphers for Efficient Prime Field Masking - Full Version
A recent work from Eurocrypt 2023 suggests that prime-field masking has excellent potential to improve the efficiency vs. security tradeoff of masked implementations against side-channel attacks, especially in contexts where physical leakages show low noise. We pick up on the main open challenge that this seed result leads to, namely the design of an optimized prime cipher able to take advantage of this potential. Given the interest of tweakable block ciphers with cheap inverses in many leakage-resistant designs, we start by describing the FPM (Feistel for Prime Masking) family of tweakable block ciphers, based on a generalized Feistel structure. We then propose a first instantiation of FPM, which we denote small-pSquare. It builds on the recent observation that the square operation (which is non-linear in $\mathbb{F}_p$) can lead to masked gadgets that are more efficient than those for multiplication, and is tailored for efficient masked implementations in hardware. We analyze the mathematical security of the FPM family of ciphers and the small-pSquare instance, trying to isolate the parts of our study that can be re-used for other instances. We additionally evaluate the implementation features of small-pSquare by comparing the efficiency vs. security tradeoff of masked FPGA circuits against those of a state-of-the-art binary cipher, namely SKINNY, confirming significant gains in relevant contexts.
SoK: Zero-Knowledge Range Proofs
Zero-knowledge range proofs (ZKRPs) allow a prover to convince a verifier that a secret value lies in a given interval. ZKRPs have numerous applications: from anonymous credentials and auctions, to confidential transactions in cryptocurrencies. At the same time, a plethora of ZKRP constructions exist in the literature, each with its own trade-offs. In this work, we systematize the knowledge around ZKRPs. We create a classification of existing constructions based on the underlying building techniques, and we summarize their properties. We provide comparisons between schemes both in terms of properties as well as efficiency levels, and construct a guideline to assist in the selection of an appropriate ZKRP for different application requirements. Finally, we discuss a number of interesting open research problems.
FOLEAGE: $\mathbb{F}_4$OLE-Based Multi-Party Computation for Boolean Circuits
Secure Multi-party Computation (MPC) allows two or more parties to compute any public function over their privately-held inputs, without revealing any information beyond the result of the computation. The main efficiency bottleneck in MPC protocols comes from interaction between parties. To limit the interaction, modern protocols for MPC generate a large amount of input-independent preprocessing material called multiplication triples, in an offline phase. This preprocessing can later be used by the parties to efficiently instantiate an input-dependent online phase computing the function.
To date, the state-of-the-art secure multi-party computation protocols in the preprocessing model are tailored to secure computation of arithmetic circuits over large fields and require little communication in the preprocessing phase. In contrast, when it comes to computing preprocessing for computations that are naturally represented as Boolean circuits, the state-of-the-art techniques have not evolved since the 1980s, and in particular require every pair of parties to interactively generate a large number of oblivious transfers, which induces an $\Omega(N^2 \cdot m)$ communication overhead.
In this paper, we introduce FOLEAGE, which addresses this gap with an efficient preprocessing protocol tailored to Boolean circuits. FOLEAGE exhibits excellent performance: it generates $m$ triples using only $N \cdot m + O(N^2 \cdot \log m)$ bits of communication for $N$ parties, and can concretely produce over 11 million triples per second on one core of a commodity machine. Our result builds upon an efficient Pseudorandom Correlation Generator (PCG) for multiplication triples over the field $\mathbb{F}_4$. Roughly speaking, a PCG enables parties to stretch a short seed into a large number of pseudorandom correlations non-interactively, which greatly improves the efficiency of the offline phase in MPC protocols. Our construction significantly outperforms the state of the art, which we demonstrate via a prototype implementation. This is achieved by introducing a number of protocol-level, algorithmic-level, and implementation-level optimizations on the recent PCG construction of Bombar et al. (Crypto 2023) from the Quasi-Abelian Syndrome Decoding assumption, supporting any field size $q \ge 3$.
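For readers unfamiliar with how triples produced in the offline phase are consumed, below is a minimal two-party sketch of Beaver's online multiplication over an illustrative modulus. This shows only the generic mechanism, not FOLEAGE's $\mathbb{F}_4$ PCG:

```python
import secrets

q = 2**61 - 1  # illustrative modulus

def share(v):
    r = secrets.randbelow(q)
    return [r, (v - r) % q]

def open_(sh):
    return sum(sh) % q

# offline phase: a random multiplication triple (a, b, c=a*b), secret-shared
a, b = secrets.randbelow(q), secrets.randbelow(q)
a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % q)

# online phase: multiply secret-shared x and y using the triple
x_sh, y_sh = share(12345), share(67890)
d = open_([(x_sh[i] - a_sh[i]) % q for i in range(2)])  # d = x - a (public)
e = open_([(y_sh[i] - b_sh[i]) % q for i in range(2)])  # e = y - b (public)
# [xy] = [c] + d*[b] + e*[a] + d*e  (the public d*e is added by one party)
z_sh = [(c_sh[i] + d * b_sh[i] + e * a_sh[i]) % q for i in range(2)]
z_sh[0] = (z_sh[0] + d * e) % q
assert open_(z_sh) == (12345 * 67890) % q
```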
SNOW-SCA: ML-assisted Side-Channel Attack on SNOW-V
This paper presents SNOW-SCA, the first power side-channel analysis (SCA) attack on a 5G mobile communication security standard candidate, SNOW-V, running on a 32-bit ARM Cortex-M4 microcontroller. First, we perform a generic known-key correlation (KKC) analysis to identify the leakage points. Next, a correlation power analysis (CPA) attack is performed, which reduces the attack complexity to two key guesses for each key byte. The correct secret key is then uniquely identified using linear discriminant analysis (LDA). The profiled SCA attack with LDA achieves 100% accuracy after training with fewer than 200 traces, after which the attack succeeds with just a single trace. Overall, using the combined CPA and LDA attack model, the correct secret key byte is recovered with fewer than 50 traces collected using the ChipWhisperer platform. The entire 256-bit secret key of SNOW-V can be recovered incrementally using the proposed SCA attack. Finally, we suggest low-overhead countermeasures that can be used to prevent these SCA attacks.
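For context, the CPA stage follows the standard recipe of correlating a hypothetical leakage (e.g., the Hamming weight of a keyed intermediate) against measured traces. The sketch below uses a generic S-box intermediate as a stand-in for the SNOW-V-specific targets in the paper:

```python
import numpy as np

HW = np.array([bin(v).count("1") for v in range(256)])

def cpa_best_guess(traces, inputs, sbox):
    """traces: (n, samples) float array; inputs: (n,) uint8 array;
    sbox: 256-entry uint8 lookup table.  Returns the key-byte guess
    whose Hamming-weight hypothesis correlates best with the traces."""
    tr_c = traces - traces.mean(axis=0)
    peaks = []
    for guess in range(256):
        hyp = HW[sbox[inputs ^ guess]].astype(float)  # leakage model
        hyp_c = hyp - hyp.mean()
        corr = hyp_c @ tr_c / np.sqrt((hyp_c**2).sum() * (tr_c**2).sum(axis=0))
        peaks.append(np.abs(corr).max())  # peak |correlation| over samples
    return int(np.argmax(peaks))
```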
A Cautionary Note: Side-Channel Leakage Implications of Deterministic Signature Schemes
Two recent proposals by Bernstein and Pornin emphasize the use of deterministic signatures in DSA and its elliptic-curve-based variants. Deterministic signatures derive the required ephemeral key value in a deterministic manner from the message to be signed and the secret key, instead of using random number generators. The goal is to prevent severe security issues, such as straightforward secret key recovery from low-quality random numbers. Recent developments have raised skepticism as to whether, e.g., embedded or pervasive devices are able to generate randomness of sufficient quality. The main concerns stem from individual implementations lacking a sufficient entropy source and from standardized methods for random number generation with suspected back doors. While we support the goal of deterministic signatures, we are concerned about the fact that they have a significant influence on the side-channel security of implementations. Specifically, attackers will be able to mount differential side-channel attacks on the additional use of the secret key in a cryptographic hash function to derive the deterministic ephemeral key. Previously, only a simple integer arithmetic function for generating the second signature parameter had to be protected, which is rather straightforward. Hash functions are significantly more difficult to protect. In this contribution, we systematically explain how deterministic signatures introduce this new side-channel vulnerability.
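The pattern at issue can be summarized in a few lines. The sketch below shows the generic shape of deterministic nonce derivation (in the spirit of EdDSA or RFC 6979, but compliant with neither), which is precisely where the secret key newly enters a hash computation:

```python
import hashlib

def deterministic_nonce(secret_key: bytes, message: bytes, group_order: int) -> int:
    # The secret key is hashed together with every message to be signed,
    # so the hash itself becomes a differential side-channel target,
    # the new attack surface this note cautions about.
    digest = hashlib.sha512(secret_key + message).digest()
    return int.from_bytes(digest, "big") % group_order
```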
Efficient Actively Secure DPF and RAM-based 2PC with One-Bit Leakage
Secure two-party computation (2PC) in the RAM model has attracted huge attention in recent years. Most existing results only support semi-honest security, with the exception of Keller and Yanai (Eurocrypt 2018) with very high cost. In this paper, we propose an efficient RAM-based 2PC protocol with active security and one-bit leakage.
1) We propose an actively secure protocol for distributed point function (DPF), with one-bit leakage, that is essentially as efficient as the state-of-the-art semi-honest protocol. Compared with previous work, our protocol takes about $50 \times$ less communication for a domain with $2^{20}$ entries, and no longer requires actively secure generic 2PC.
2) We extend the dual-execution protocol to allow reactive computation, and then build a RAM-based 2PC protocol with active security on top of our new building blocks. The protocol follows the paradigm of Doerner and shelat (CCS 2017). We are able to prove that the protocol has end-to-end one-bit leakage.
3) Our implementation shows that our protocol is almost as efficient as the state-of-the-art semi-honest RAM-based 2PC protocol, and is at least two orders of magnitude faster than prior actively secure RAM-based 2PC without leakage, providing a realistic trade-off in practice.
Kolmogorov Comes to Cryptomania: On Interactive Kolmogorov Complexity and Key-Agreement
Only a handful of candidate computational assumptions that imply secure key-agreement protocols (KA) are known, and even fewer are believed to be quantum safe. In this paper, we present a new hardness assumption: the worst-case hardness of a promise problem related to an interactive version of Kolmogorov Complexity.
Roughly speaking, the promise problem requires telling apart tuples of strings $(\pi,x,y)$ with relatively (w.r.t. $K(\pi)$) low time-bounded Interactive Kolmogorov Complexity ($IK^t$) from those with relatively high Kolmogorov complexity, given the promise that $K^t(x|y) < s$, $K^t(y|x) < s$ and $s = \log n$, where $IK^t(\pi;x;y)$ is defined as the length of the shortest pair of $t$-bounded TMs $(A,B)$ such that the interaction of $(A,B)$ leads to the transcript $\pi$ and the respective outputs $x,y$.
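In display form, the definition recoverable from the text reads as follows (a restatement, not the paper's verbatim formulation):

```latex
\[
IK^{t}(\pi;x;y) \;=\; \min\bigl\{\, |(A,B)| \;:\; A,B \text{ are $t$-bounded TMs whose
interaction produces the transcript } \pi \text{ and the respective outputs } x, y \,\bigr\}
\]
```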
We demonstrate that when $t$ is some polynomial, then not only does this hardness assumption imply the existence of KA, but it is also necessary for the existence of secure KA. As such, it yields the first natural hardness assumption characterizing the existence of key-agreement protocols.
We additionally show that when the threshold $s$ is bigger (e.g., $s = 55\log n$), then the (worst-case) hardness of this problem instead characterizes the existence of one-way functions (OWFs). As such, our work also clarifies exactly what it would take to base KA on the existence of OWFs, and demonstrates that this question boils down to demonstrating a worst-case reduction between two closely related promise problems.
On the Concrete Security of Approximate FHE with Noise-Flooding Countermeasures
Approximate fully homomorphic encryption (FHE) schemes such as the CKKS scheme (Asiacrypt '17) are popular in practice due to their efficiency and utility for machine learning applications. Unfortunately, Li and Micciancio (Eurocrypt '21) showed that, while achieving standard semantic (or $\mathsf{IND}\mbox{-}\mathsf{CPA}$ security), the CKKS scheme is broken under a variant security notion known as $\mathsf{IND}\mbox{-}\mathsf{CPA}^D$. Subsequently, Li, Micciancio, Schultz, and Sorrell (Crypto '22) proved the security of the CKKS scheme with a noise-flooding countermeasure, which adds Gaussian noise of sufficiently high variance before outputting the decrypted value. However, the variance required for provable security is very high, inducing a large loss in message precision.
In this work, we ask whether there is an intermediate noise-flooding level that may not be provably secure but allows maintaining the performance of the scheme while resisting known attacks. We analyze the security with respect to different adversarial models and various types of attacks.
We investigate the effectiveness of lattice reduction attacks, guessing attacks and hybrid attacks with noise-flooding with variance $\rho^2_{\mathsf{circ}}$, the variance of the noise already present in the ciphertext as estimated by an average-case analysis, $100\cdot \rho^2_{\mathsf{circ}}$, and $t\cdot \rho^2_{\mathsf{circ}}$, where $t$ is the number of decryption queries. For noise levels of $\rho^2_{\mathsf{circ}}$ and $100\cdot \rho^2_{\mathsf{circ}}$, we find that a full guessing attack is feasible for all parameter sets and circuit types.
We find that a lattice reduction attack is the most effective attack for noise-flooding level $t\cdot \rho^2_{\mathsf{circ}}$, but it only induces at most a several bit reduction in the security level.
Due to the large dimension and modulus in typical FHE parameter sets, previous techniques even for estimating the concrete security of these attacks -- such as those in (Dachman-Soled, Ducas, Gong, Rossi, Crypto '20) -- become computationally infeasible, since they involve high-dimensional and high-precision matrix multiplication and inversion. We therefore develop new techniques that allow us to perform fast security estimation, even for FHE-size parameter sets.
Plan your defense: A comparative analysis of leakage detection methods on RISC-V cores
Hardening microprocessors against side-channel attacks is a critical aspect of ensuring their security. A key step in this process is identifying and mitigating “leaky” hardware modules, which inadvertently leak information during the execution of cryptographic algorithms.
In this paper, we explore how different leakage detection methods, namely the Side-channel Vulnerability Factor (SVF) and the Test Vector Leakage Assessment (TVLA), contribute to the hardening of microprocessors. We conduct experiments on two RISC-V cores, SHAKTI and Ibex, using two cryptographic algorithms, SHA-3 and AES.
Our findings suggest that SVF and TVLA can provide valuable insights into identifying leaky modules. However, the effectiveness of these methods can vary depending on the specific core and cryptographic algorithm in use.
We conclude that the choice of leakage detection method should be based not only on computational cost but also on the specific requirements of the system and the nature of the potential threats. Our research contributes to developing more secure microprocessors that are robust against side-channel attacks.
A Class of Weightwise Almost Perfectly Balanced Boolean Functions with High Weightwise Nonlinearity
A Boolean function with good cryptographic properties over a set of vectors with constant Hamming weight is significant for stream ciphers like FLIP [MJSC16]. This paper presents a construction of weightwise almost perfectly balanced (WAPB) Boolean functions obtained by perturbing the support vectors of a highly nonlinear function in the construction presented in [DM]. As a result, the nonlinearity and weightwise nonlinearities of the modified functions improve substantially.
LLRing: Logarithmic Linkable Ring Signatures with Transparent Setup
Linkable ring signatures are an important cryptographic primitive for anonymized applications, such as e-voting, e-cash and confidential transactions. To eliminate backdoor and overhead in trusted setup, transparent setup in the discrete logarithm or pairing settings has received considerable attention in practice. Recent advances have improved the proof sizes and verification efficiency of linkable ring signatures with transparent setup to achieve logarithmic bounds. Omniring (CCS `19) and RingCT 3.0 (FC `20) proposed linkable ring signatures in the discrete logarithm setting with logarithmic proof sizes with respect to the ring size, whereas DualDory (ESORICS `22) achieves logarithmic verifiability in the pairing setting. We make three novel contributions in this paper to improve the efficiency and soundness of logarithmic linkable ring signatures: (1) We report an attack on DualDory that breaks its linkability. (2) To eliminate such attacks, we present a new linkable ring signature scheme in the pairing setting with logarithmic verifiability. (3) We improve the verification efficiency of linkable ring signatures in the discrete logarithm setting via a technique that reduces the number of group exponentiations for verification in Omniring by 50%. Furthermore, our technique is applicable to general inner-product relation proofs, which may be of independent interest. Finally, we empirically evaluate our schemes and compare them with existing linkable ring signatures in a concrete implementation.
Gap MCSP is not (Levin) NP-complete in Obfustopia
We demonstrate that under believable cryptographic hardness assumptions, Gap versions of standard meta-complexity problems, such as the Minimum Circuit Size problem (MCSP) and the Minimum Time-Bounded Kolmogorov Complexity problem (MKTP) are not NP-complete w.r.t. Levin (i.e., witness-preserving many-to-one) reductions.
In more detail:
- Assuming the existence of indistinguishability obfuscation, and subexponentially-secure one-way functions, an appropriate Gap version of MCSP is not NP-complete under randomized Levin-reductions.
- Assuming the existence of subexponentially-secure indistinguishability obfuscation, subexponentially-secure one-way functions and injective PRGs, an appropriate Gap version of MKTP is not NP-complete under randomized Levin-reductions.
New Upper Bounds for Evolving Secret Sharing via Infinite Branching Programs
Evolving secret-sharing schemes, defined by Komargodski, Naor, and Yogev [TCC 2016B, IEEE Trans. on Info. Theory 2018], are secret-sharing schemes in which there is no a priori bound on the number of parties. In such schemes, parties arrive one by one; when a party arrives, the dealer gives it a share and cannot update this share in later stages. The requirement is that some predefined sets (called authorized sets) should be able to reconstruct the secret, while other sets should learn no information about the secret. The collection of authorized sets that can reconstruct the secret is called an evolving access structure. The challenge for the dealer is to give short shares to the current parties without knowing how many parties will arrive in the future. The requirement that the dealer cannot update shares is designed to prevent expensive updates.
Komargodski et al. constructed an evolving secret-sharing scheme for every monotone evolving access structure; the share size of the $t^{\text{th}}$ party in this scheme is $2^{t-1}$.
Recently, Mazor [ITC 2023] proved that evolving secret-sharing schemes require exponentially long shares for some evolving access structures, namely shares of size $2^{t-o(t)}$. In light of these results, our goal is to construct evolving secret-sharing schemes with non-trivial share size for wide classes of evolving access structures, e.g., schemes with share size $2^{ct}$ for $c<1$ or even polynomial size. We provide several results achieving this goal:
-We define layered infinite branching programs representing evolving access structures, show how to transform them into generalized infinite decision trees, and show how to construct evolving secret-sharing schemes for generalized infinite decision trees. Combining these steps, we get a secret-sharing scheme realizing the evolving access structure.
As an application of this framework, we construct an evolving secret-sharing scheme with non-trivial share size for access structures that can be represented by layered infinite branching programs with width at layer $t$ of at most $2^{0.15t}$. If the width is polynomial, then we get an evolving secret-sharing scheme with quasi-polynomial share size.
-We construct efficient evolving secret-sharing schemes for dynamic-threshold access structures with high dynamic threshold, and for infinite $2$-slice and $3$-slice access structures. The share size of the $t^{\text{th}}$ party in these schemes is $2^{\tilde{O}((\log t)^{1/\sqrt{2}+\epsilon})}$ for any constant $\epsilon>0$, which is comparable to the best-known share size of $2^{\tilde{O}((\log t)^{1/2})}$ for finite $2$-slice and $3$-slice access structures.
-We prove lower bounds on the share size of evolving secret-sharing schemes for infinite $k$-hypergraph access structures and for infinite directed st-connectivity access structures. As a by-product of the lower bounds, we provide the first non-trivial lower bound for finite directed st-connectivity access structures for general secret-sharing schemes.
Atomic and Fair Data Exchange via Blockchain
We introduce a blockchain Fair Data Exchange (FDE) protocol, enabling a storage server to transfer a data file to a client atomically: the client receives the file if and only if the server receives an agreed-upon payment. We put forth a new definition for a cryptographic scheme that we name verifiable encryption under committed key (VECK), and we propose two instantiations for this scheme. Our protocol relies on a blockchain to enforce the atomicity of the exchange and uses VECK to ensure that the client receives the correct data (matching an agreed-upon commitment) before releasing the payment for the decrypting key. Our protocol is trust-minimized and requires only constant-sized on-chain communication, concretely $3$ signatures, $1$ verification key, and $1$ secret key, with most of the data stored and communicated off-chain. It also supports exchanging only a subset of the data, can amortize the server's work across multiple clients, and offers a general framework to design alternative FDE protocols using different commitment schemes. A prominent application of our protocol is the Danksharding data availability scheme on Ethereum, which commits to data via KZG polynomial commitments. We also provide an open-source implementation for our protocol with both instantiations for VECK, demonstrating our protocol's efficiency and practicality on Ethereum.
An improved exact CRR basis conversion algorithm for FHE without floating-point arithmetic
Fully homomorphic encryption (FHE) has attracted much attention recently. Chinese remainder representation (CRR), or RNS representation, is one of the core technologies of FHE, and CRR basis conversion is a key step of the KeySwitching procedure. Bajard et al. proposed a fast method for CRR basis conversion, but it leaves the conversion error uneliminated. Halevi et al. suggested a method using floating-point arithmetic to avoid errors, but floating-point arithmetic has its own issues, such as low efficiency and complex chip design. In this work, we establish a more concise and efficient CRR basis conversion method by observing that each of the ciphertext moduli selected by the CRR CKKS scheme is very close to an integer that is a power of 2. Our conversion algorithm eliminates errors and involves only integer arithmetic and bit operations. We give a proof of correctness of our algorithm. Extensive experiments comparing the method of Halevi et al. with ours show that our method has the same accuracy and slightly better efficiency. Our method is also applicable to the CRR variants of the BGV and BFV schemes, and can be used to simplify chip design.
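For intuition, here is a minimal Python sketch (ours, not the paper's algorithm) of fast RNS basis conversion in the style of Bajard et al., together with an integer-only, fixed-point estimate of the overflow term $e$ that an exact conversion must eliminate; the fixed-point route is a hedged stand-in for the paper's power-of-2 observation.

```python
from math import prod

def fast_base_conv(x_rns, q_basis, p):
    """Map residues of x w.r.t. q_basis to a residue mod p.
    Returns (x + e*Q) mod p for some overflow e in [0, len(q_basis))."""
    Q = prod(q_basis)
    acc = 0
    for x_i, q_i in zip(x_rns, q_basis):
        Q_i = Q // q_i
        y_i = (x_i * pow(Q_i, -1, q_i)) % q_i   # y_i in [0, q_i)
        acc += y_i * Q_i
    return acc % p

def overflow(x_rns, q_basis, frac_bits=64):
    """Integer-only estimate of e = floor(sum_i y_i / q_i).
    Off by one only if the fractional part of the true sum is below
    len(q_basis) * 2**-frac_bits; no floating point is used."""
    Q = prod(q_basis)
    acc = 0
    for x_i, q_i in zip(x_rns, q_basis):
        Q_i = Q // q_i
        y_i = (x_i * pow(Q_i, -1, q_i)) % q_i
        acc += (y_i << frac_bits) // q_i        # fixed-point y_i / q_i
    return acc >> frac_bits

q_basis, p, x = [7, 11, 13], 101, 500
x_rns = [x % q for q in q_basis]
e = overflow(x_rns, q_basis)
assert (fast_base_conv(x_rns, q_basis, p) - e * prod(q_basis)) % p == x % p
```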
Mangrove: A Scalable Framework for Folding-based SNARKs
We present a framework for building efficient folding-based SNARKs. First we develop a new "uniformizing" compiler for NP statements that converts any poly-time computation to a sequence of identical simple steps. The resulting uniform computation is especially well-suited to be processed by a folding-based IVC scheme. Second, we develop two optimizations to folding-based IVC. The first reduces the recursive overhead of the IVC by restructuring the relation to which folding is applied. The second employs a "commit-and-fold" strategy to further simplify the relation. Together, these optimizations result in a folding-based SNARK that has a number of attractive features. First, the scheme uses a constant-size transparent common reference string (CRS). Second, the prover (i) has a low memory footprint, (ii) makes only two passes over the data, (iii) is highly parallelizable, and (iv) is concretely efficient.
Microbenchmarks indicate proving time is comparable to leading monolithic SNARKs, and is significantly faster than other streaming SNARKs. On a laptop, for $2^{24}$ ($2^{32}$) gates, the Mangrove prover is estimated to take $2$ minutes ($8$ hours) with peak memory usage approximately $390$ MB ($800$ MB).
Column-wise Garbling, and How to Go Beyond the Linear Model
In the linear garbling model introduced by Zahur, Rosulek, and Evans (Eurocrypt 2015), garbling an AND gate requires at least \(2\kappa\) bits of ciphertext, where $\kappa$ is the security parameter. Though subsequent works, including those by Rosulek and Roy (Crypto 2021) and Acharya et al. (ACNS 2023), have advanced beyond these linear constraints, a more comprehensive design framework is yet to be developed.
Our work offers a novel, unified, and arguably simple perspective on garbled circuits. We introduce a hierarchy of models that captures all existing practical garbling schemes. By determining the lower bounds for these models, we elucidate the capabilities and limits of each. Notably, our findings suggest that simply integrating a nonlinear processing function or probabilistic considerations does not break the \(2\kappa\) lower bound by Zahur, Rosulek, and Evans. However, by incorporating column correlations, the bound can be reduced to \((1+1/w)\kappa\), where \(w\ge 1\). Additionally, we demonstrate that a straightforward extension of Rosulek and Roy's technique (Crypto 2021) does not yield improved results. We also present a methodology for crafting new models and for exploring further extensions of both the new and the existing models.
Our new models set the course for future designs. We introduce three innovative garbling schemes based on a common principle called ``majority voting.'' The third construction performs on par with the state-of-the-art.
Quantum One-Wayness of the Single-Round Sponge with Invertible Permutations
Sponge hashing is a novel class of cryptographic hash algorithms which underlies the current international hash function standard SHA-3. In a nutshell, a sponge function takes as input a bit-stream of any length and processes it via a simple iterative procedure: it repeatedly feeds each block of the input into a so-called block function, and then produces a short digest which consists of a subset of the final output bits. While much is known about the post-quantum security of the sponge construction in the case when the block function is modeled as a random function or permutation, the case of invertible permutations, which more accurately models the construction underlying SHA-3, has so far remained a fundamental open problem.
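For orientation, the following toy sketch (our parameters and stand-in permutation, not SHA-3's Keccak-f) shows the absorb/squeeze structure described above; padding is omitted and the split of the state into an outer $r$-bit and inner $c$-bit part follows one common convention.

```python
RATE, CAPACITY = 8, 8                     # toy state: b = r + c = 16 bits
MASK = (1 << (RATE + CAPACITY)) - 1

def block_function(s):
    # a small invertible mixing map on b-bit states -- NOT cryptographic
    s ^= (s << 5) & MASK
    s ^= s >> 3
    return (s * 0x2545) & MASK            # odd multiplier => a bijection

def sponge(blocks, digest_blocks):
    """blocks: RATE-bit message blocks; returns digest_blocks output blocks."""
    state = 0
    for m in blocks:                      # absorbing phase
        state ^= m << CAPACITY            # XOR into the outer r bits
        state = block_function(state)
    digest = []
    for _ in range(digest_blocks):        # squeezing phase
        digest.append(state >> CAPACITY)  # expose only the outer r bits
        state = block_function(state)
    return digest

print(sponge([0x3A, 0x5C, 0xF0], digest_blocks=2))
```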
In this work, we make new progress towards overcoming this barrier and show several results. First, we prove the ``double-sided zero-search'' conjecture proposed by Unruh (ePrint 2021) and show that finding zero-pairs in a random $2n$-bit permutation requires $\Omega(2^{n/2})$ queries; this bound is tight due to Grover's algorithm. At the core of our proof lies a novel ``symmetrization argument'' which uses insights from the theory of Young subgroups. Second, we consider more general variants of the double-sided search problem and show similar query lower bounds for them. As an application, we prove the quantum one-wayness of the single-round sponge with invertible permutations in the quantum random oracle model.
Bent functions construction using extended Maiorana-McFarland’s class
In a particular case, we consider the extended Maiorana-McFarland class to obtain bent functions that are balanced when restricted to vectors with even Hamming weight, i.e., that have an equal number of pre-images for each element in the range. Additionally, we demonstrate that all bent functions are balanced when restricted either to vectors of even Hamming weight or to vectors of odd Hamming weight. Given the necessary tools, we provide a simple algorithm to obtain new bent functions using the Maiorana-McFarland construction.
Quasi-Optimal Permutation Ranking and Applications to PERK
A ranking function for permutations maps every permutation of length $n$ to a unique integer between $0$ and $n!-1$. For permutations of the sizes of interest in cryptographic applications, evaluating such a function requires multiple-precision arithmetic. This work introduces a quasi-optimal ranking technique that allows us to rank a permutation efficiently without needing a multiple-precision arithmetic library. We present experiments that show the computational advantage of our method compared to the standard optimal lexicographic permutation ranking. As an application of our result, we show how this technique improves the signature sizes and the efficiency of the PERK digital signature scheme.
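For contrast, here is a sketch of the standard optimal lexicographic (Lehmer-code) ranking that serves as the baseline above; the factorials exceed machine words for cryptographic $n$, which is exactly the multiple-precision cost the quasi-optimal technique avoids (Python's big integers hide that cost; a C implementation cannot).

```python
from math import factorial

def lex_rank(perm):
    """Lexicographic rank of a permutation of 0..n-1, in [0, n!-1]."""
    n, rank = len(perm), 0
    for i in range(n):
        # count elements to the right of position i that are smaller
        smaller = sum(perm[j] < perm[i] for j in range(i + 1, n))
        rank += smaller * factorial(n - 1 - i)
    return rank

assert [lex_rank(p) for p in ([0, 1, 2], [0, 2, 1], [2, 1, 0])] == [0, 1, 5]
```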
Polytopes in the Fiat-Shamir with Aborts Paradigm
The Fiat-Shamir with Aborts paradigm (FSwA) uses rejection sampling to remove the dependency on the secret from samples drawn from a given source distribution. Recent results revealed that, unlike the uniform distribution in the hypercube, both the continuous Gaussian and the uniform distribution within the hypersphere minimise the rejection rate and the size of the proof of knowledge. However, in practice both these distributions suffer from the complexity of their samplers. So far, those three distributions are the only available alternatives, but none of them offers the best of all worlds: competitive proof of knowledge size and rejection rate together with a simple sampler.
We introduce a new generic framework for FSwA using polytope-based rejection sampling to enable a wider variety of constructions. In fact, this framework is the first to generalise these results to integral distributions. To address the lack of alternatives, we also propose a new polytope construction whose uniform sampler approaches in simplicity that of the hypercube, while providing a competitive proof of knowledge size compared to that obtained from the Gaussian distribution. We also present some experimental improvements of our construction that further reduce the proof size. Finally, we propose a signature based on the FSwA paradigm using both our framework and construction. We show it to be competitive with Haetae in signature size and with Dilithium in sampler simplicity.
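As a reference point for the constructions above, here is a minimal sketch (toy parameters, our naming) of the FSwA rejection step instantiated with the uniform hypercube; it illustrates why an accepted response is distributed independently of the secret.

```python
import random

GAMMA, BETA, DIM = 1 << 17, 1 << 10, 8   # assume |(c*s)_i| <= BETA for all i

def sample_response(cs):
    """cs: the secret-dependent vector c*s; returns an accepted response z."""
    while True:
        y = [random.randint(-GAMMA, GAMMA) for _ in range(DIM)]
        z = [yi + ci for yi, ci in zip(y, cs)]
        # Accept iff z lies in the shrunk cube. Every point of that cube is
        # hit with equal probability regardless of cs, so the distribution
        # of accepted z leaks nothing about the secret.
        if all(abs(zi) <= GAMMA - BETA for zi in z):
            return z

# expected acceptance rate: ((2*(GAMMA-BETA)+1) / (2*GAMMA+1)) ** DIM
```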
Recent Progress in Quantum Computing Relevant to Internet Security
Quantum computers at some future date might be able to factor large numbers, and this poses a threat to some public key and key exchange systems in use today. This overview of recent progress in devising quantum algorithms and building quantum computing devices is meant to help technologists understand the difficult problems that quantum engineers are working on, where advances have been made, and how those advances affect estimates of whether and when large-scale quantum computation might happen.
Nebula: A Privacy-First Platform for Data Backhaul
Imagine being able to deploy a small, battery-powered device nearly anywhere on earth that humans frequent and having it be able to send data to the cloud without needing to provision a network—without buying a physical gateway, setting up WiFi credentials, or acquiring a cellular SIM. Such a capability would address one of the greatest bottlenecks to deploying the long-tail of small, embedded, and power-constrained IoT devices in nearly any setting. Unfortunately, decoupling the device deployment from the network configuration needed to transmit, or backhaul, sensor data to the cloud remains a tricky challenge, but the success of Tile and AirTag offers hope. They have shown that mobile phones can crowd-source worldwide local network coverage to find lost items, yet expanding these systems to enable general-purpose backhaul raises privacy concerns for network participants. In this work, we present Nebula, a privacy-focused architecture for global, intermittent, and low-rate data backhaul to enable nearly any thing to eventually connect to the cloud while (i) preserving the privacy of the mobile network participants from the platform provider by decentralizing data flow through the system, (ii) incentivizing participation through micropayments, and (iii) preventing system abuse.
Modular Indexer: Fully User-Verified Execution Layer for Meta-Protocols on Bitcoin
Before the emergence of inscriptions and ordinal protocols, Bitcoin was limited in its applications due to the Turing-incompleteness of its script language. Fortunately, with recent technical advances, Turing-complete off-chain execution layers have been established via Bitcoin indexers. Yet the data integrity and availability of existing indexers depend strongly on the honesty of the indexers, which violates the trustlessness and decentralization principles of the cryptocurrency literature. To provide an alternative Bitcoin indexer scheme and overcome the above limitations, we reallocate the roles of committee indexers (for heavy computations), normal indexers, and light indexers (the client end), and establish a fully user-verified execution layer based on our modular indexer protocol. For the trustless relay of data, we adopt Verkle trees to store and prove the states. Thus, data integrity and availability are guaranteed even when a majority of committee indexers are malicious. Ideally, our modular indexer would safely bridge the gap between Bitcoin layer-1 and applications from BRC-20, and contribute to the further prosperity of the Bitcoin ecosystem.
Permutation-Based Hashing Beyond the Birthday Bound
It is known that the sponge construction is tightly indifferentiable from a random oracle up to around $2^{c/2}$ queries, where $c$ is the capacity. In particular, it cannot provide generic security better than half of the underlying permutation size. In this paper, we aim to achieve hash function security beating this barrier. We present a hashing mode based on two $b$-bit permutations named the double sponge. The double sponge can be seen as the sponge embedded within the double block length hashing paradigm, making two permutation calls in parallel interleaved with an efficient mixing function. Similarly to the sponge, the permutation size is split as $b = r+c$, and the underlying compression function absorbs $r$ bits at a time. We prove that the double sponge is indifferentiable from a random oracle up to around $2^{2c/3}$ queries. This means that the double sponge achieves security beyond the birthday bound in the capacity. In addition, if $c>3b/4$, the double sponge beats the birthday bound in the primitive size; to our knowledge it is the first permutation-based hashing mode that accomplishes this.
Some notes on algorithms for abelian varieties
A compilation of various notes written over the years on some algorithms on abelian varieties.
Traceable Secret Sharing: Strong Security and Efficient Constructions
Suppose Alice uses a $t$-out-of-$n$ secret sharing to store her secret key on $n$ servers. Her secret key is protected as long as $t$ of them do not collude. However, what if a less-than-$t$ subset of the servers decides to offer the shares they have for sale? In this case, Alice should be able to hold them accountable, or else nothing prevents them from selling her shares. With this motivation in mind, Goyal, Song, and Srinivasan (CRYPTO 21) introduced the concept of {\em traceable secret sharing}. In such schemes, it is possible to provably trace the leaked secret shares back to the servers who leaked them. Goyal et al.~presented the first construction of a traceable secret sharing scheme. However, secret shares in their construction are quadratic in the secret size, and their tracing algorithm is quite involved as it relies on Goldreich-Levin decoding.
In this work, we put forth new definitions and practical constructions for traceable secret sharing. In our model, some $f < t$ servers output a reconstruction box~$R$ that may arbitrarily depend on their shares. Given additional $t-f$ shares, $R$ reconstructs and outputs the secret. The task is to trace $R$ back to the corrupted servers given black-box access to $R$. Unlike Goyal et al., we do not assume that the tracing algorithm has any information on how the corrupted servers constructed~$R$ from the shares in their possession.
We then present two very efficient constructions of traceable secret sharing based on two classic secret sharing schemes. In both of our schemes, shares are only twice as large as the secret, improving over the quadratic overhead of Goyal et al. Our first scheme is obtained by presenting a new practical tracing algorithm for the widely-used Shamir secret sharing scheme. Our second construction is based on an extension of Blakley's secret sharing scheme. Tracing in this scheme is optimally efficient, and requires just one successful query to $R$. We believe that our constructions are an important step towards bringing traceable secret-sharing schemes to practice. This work also raises several interesting open problems that we describe in the paper.
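For background, here is a minimal sketch of the $t$-out-of-$n$ Shamir scheme underlying the first construction (illustrative field choice; the paper's tracing algorithm itself is not reproduced here).

```python
import random

P = 2**61 - 1   # an illustrative prime field modulus

def share(secret, t, n):
    """Split `secret` into n points on a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at 0 from any t shares."""
    secret = 0
    for j, (xj, yj) in enumerate(points):
        num = den = 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * -xm % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```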
Breaking the DECT Standard Cipher with Lower Time Cost
The DECT Standard Cipher (DSC) is a proprietary stream cipher used for encryption in Digital Enhanced Cordless Telecommunications (DECT), a standard for short-range cordless communication that is widely deployed worldwide in both residential and enterprise environments. New weaknesses of the DSC stream cipher that were not discovered in previous works are explored and analyzed in this paper. Based on these weaknesses, new practical key recovery attacks and a distinguishing attack on DSC with lower time cost are proposed. The first cryptanalytic result shows that DSC can be broken in about 13.12 seconds in the known-IV setting, once an offline phase that takes about 58.33 minutes is completed. Then, a distinguishing attack on DSC in the related-key chosen-IV setting is given, which has a time complexity of only 2 encryptions and a success probability of almost 1. Finally, based on the slide property, a key recovery attack on DSC with practical complexities is proposed. The experimental result shows that DSC can be broken on a common PC within about 44.97 seconds in the multiple related-key setting. The attacks on DSC proposed in this paper clearly show that a well-designed initialization is absolutely necessary for a secure stream cipher.
DARE to agree: Byzantine Agreement with Optimal Resilience and Adaptive Communication
Byzantine Agreement (BA) enables $n$ processes to reach consensus on a common valid $L_o$-bit value, even in the presence of up to $t<n$ faulty processes that can deviate arbitrarily from their prescribed protocol. Despite its significance, the optimal communication complexity for key variations of BA has not been determined within the honest majority regime ($n=2t+1$), for both the worst-case scenario and the adaptive scenario, which accounts for the actual number $f \leq t$ of failures.
We introduce ADA-DARE (Adaptively Disperse, Agree, Retrieve), a novel universal approach to solve BA efficiently. Let $\kappa$ represent the size of the cryptographic objects required to solve BA when $t>n/3$. Different instantiations of ADA-DARE achieve near-optimal adaptive bit complexity of $O(nL_o + n(f + 1) \kappa)$ for both strong multi-valued validated BA (SMVBA) and interactive consistency (IC). By definition, for IC, $L_o = nL_{in}$, with $L_{in}$ representing the size of an input value.
These results achieve optimal $O(n(L_o+f))$ word complexity and significantly improve the previous best results by up to a linear factor, depending on $L_o$ and $f$.
Efficient Unbalanced Quorum PSI from Homomorphic Encryption
Multiparty private set intersection (mPSI) protocols find the intersection of multiple sets securely without revealing any other information. However, they are limited to processing only those elements present in every participant's set, which proves inadequate in scenarios where certain elements are common to several, but not all, sets.
In this paper, we introduce an innovative variant of the mPSI protocol, named unbalanced quorum PSI, to fill this gap. Unlike previous quorum-PSI proposals, which detect elements present in at least $k$ out of $n$ equal-sized sets, our protocol is particularly tailored to unbalanced cases where the size of the receiver's set is much smaller than the sizes of the senders' sets. Our work achieves logarithmic communication complexity in the semi-honest setting, significantly surpassing previous work in efficiency.
The benchmarks show that it takes 22.7 seconds in WAN and 14.7 seconds in LAN for online computation, and only 87.8 MB of total communication, to intersect 5535 elements across 15 sets, each containing $2^{24}$ elements. Compared to prior work, this is roughly an 87$\times$ reduction in communication and a 31$\times$ reduction in online time. Our protocols can be easily extended to larger sets with $2^{28}$ elements, which is nearly impractical for prior work.
Plover: Masking-Friendly Hash-and-Sign Lattice Signatures
We introduce a toolkit for transforming lattice-based hash-and-sign signature schemes into masking-friendly signatures secure in the $t$-probing model. Until now, efficiently masking lattice-based hash-and-sign schemes has been an open problem, with unsuccessful attempts such as Mitaka. A first breakthrough was made in 2023 with the NIST PQC submission Raccoon, although its security was not formally proven.
Our main conceptual contribution is to realize that the same principles underlying Raccoon are very generic, and to find a systematic way to apply them within the hash-and-sign paradigm. Our main technical contribution is to formalize, prove, instantiate and implement a hash-and-sign scheme based on these techniques. Our toolkit includes noise flooding to mitigate statistical leaks, and an extended Strong Non-Interfering probing security (SNIu) property to handle masked gadgets with unshared inputs.
We showcase the efficiency of our techniques in a signature scheme, Plover, based on (hint) Ring-LWE. It is the first lattice-based masked hash-and-sign scheme with quasi-linear complexity $O(d \log d)$ in the number of shares $d$. Its performance is competitive with the state-of-the-art masking-friendly signature, the Fiat-Shamir scheme Raccoon.
SILBE: an Updatable Public Key Encryption Scheme from Lollipop Attacks
We present a new post-quantum Public Key Encryption scheme (PKE) named Supersingular Isogeny Lollipop Based Encryption, or SILBE. SILBE is obtained by leveraging the generalized lollipop attack of Castryck and Vercauteren on the M-SIDH key exchange of Fouotsa, Moriya and Petit. In doing so, we can in fact turn SILBE into a post-quantum secure Updatable Public Key Encryption scheme (UPKE). SILBE is the first isogeny-based UPKE that is not based on group actions. At its core, SILBE makes extensive use of both the Deuring Correspondence and Kani's Lemma, two central concepts in isogeny-based cryptography.
A Direct PRF Construction from Kolmogorov Complexity
While classic results from the 1980s establish that one-way functions (OWFs) imply the existence of pseudorandom generators (PRGs), which in turn imply pseudorandom functions (PRFs), the constructions (most notably the one from OWFs to PRGs) are complicated and inefficient.
Consequently, researchers have developed alternative \emph{direct} constructions of PRFs from various different concrete hardness assumptions. In this work, we continue this thread of work and demonstrate the first direct constructions of PRFs from average-case hardness of the time-bounded Kolmogorov complexity problem $\mktp[s]$, where given a threshold, $s(\cdot)$, and a polynomial time-bound, $t(\cdot)$, $\mktp[s]$ denotes the language consisting of strings $x$ with $t$-bounded Kolmogorov complexity, $K^t(x)$, bounded by $s(|x|)$.
In more detail, we demonstrate a direct PRF construction with quasi-polynomial security from mild average-case hardness of $\mktp[2^{O(\sqrt{\log n})}]$ w.r.t. the uniform distribution. We note that by earlier results, this assumption is known to be equivalent to the existence of quasi-polynomially secure OWFs; as such, our results yield the first direct (quasi-polynomially secure) PRF constructions from a natural hardness assumption that is also known to be implied by (quasi-polynomially secure) PRFs.
Perhaps surprisingly, we show how to make use of the Nisan-Wigderson PRG construction to get a cryptographic, as opposed to a complexity-theoretic, PRG.
The Last Challenge Attack: Exploiting a Vulnerable Implementation of the Fiat-Shamir Transform in a KZG-based SNARK
The Fiat-Shamir transform [1] is a well-known and widely employed technique for converting sound public-coin interactive protocols into sound non-interactive protocols. Even though the transformation itself is relatively clear and simple, some implementations choose to deviate from the specification, for example for performance reasons. In this short note, we present a vulnerability arising from such a deviation in a KZG-based PLONK verifier implementation. The deviation stemmed from the incorrect computation of the last challenge of the PLONK protocol [2]: the KZG batching proof challenge was computed before, and hence independently of, the KZG evaluation proofs. More generally, such a vulnerability may affect any KZG [3] implementation that uses batched KZG proof evaluations for at least two distinct evaluation points. We call an attack enabled by such a deviation a Last Challenge Attack. For concreteness, we show that when a PLONK verifier implementation exhibits such a deviation, a malicious PLONK prover can mount a Last Challenge Attack to construct verifiable proofs of false statements. The described vulnerability was initially discovered as part of an audit, and has been responsibly disclosed to the developers and fixed. A proof of concept of the vulnerability, in which a proof is forged for an arbitrary public input, is made available.
In addition to the above, in this work we also provide a security proof of the knowledge-soundness of the batched KZG scheme with evaluations for at least two distinct values.
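The hypothetical snippet below (our naming, not the audited code) contrasts a sound Fiat-Shamir challenge derivation with the deviation described above, in which the batching challenge fails to bind the KZG evaluation proofs.

```python
import hashlib

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

# Sound derivation: the batching challenge binds the entire transcript,
# including the KZG evaluation proofs produced so far.
def batching_challenge_sound(transcript, eval_proofs):
    return H(transcript, *eval_proofs)

# The deviation behind a Last Challenge Attack: the challenge is computed
# before, hence independently of, the evaluation proofs, so a malicious
# prover can choose those proofs *after* seeing the challenge.
def batching_challenge_vulnerable(transcript, eval_proofs):
    return H(transcript)              # eval_proofs are never absorbed
```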
Exponent-VRFs and Their Applications
Verifiable random functions (VRFs) are pseudorandom functions with the addition that the function owner can prove that a generated output is correct with respect to a committed key. In this paper we introduce the notion of an exponent-VRF, or eVRF, which is a VRF that does not provide its output $y$ explicitly, but instead provides $Y = y \cdot G$, where $G$ is a generator of some finite cyclic group (or $Y = g^y$ in multiplicative notation). We construct eVRFs from DDH and from the Paillier encryption scheme (both in the random-oracle model). We then show that an eVRF can be used to solve several long-standing open problems in threshold cryptography. In particular, we construct (1) a one-round fully simulatable distributed key-generation protocol (after a single two-round initialization phase), (2) a two-round fully simulatable signing protocol for multiparty Schnorr with a deterministic variant, (3) a two-party ECDSA protocol that has a deterministic variant, (4) a threshold Schnorr signing protocol where the parties can later prove that they signed without being able to frame another group, and (5) an MPC-friendly and verifiable HD-derivation. Efficient simulatable protocols of this round complexity were not known prior to this work. All of our protocols are concretely efficient.
On the impact of ionizing and non-ionizing irradiation damage on security microcontrollers in CMOS technology
The possible effects of irradiation on security controllers implemented in CMOS technology are studied. First, the decrease in effectiveness of a light sensor/detector as a countermeasure against laser fault injection is analysed. Second, the use of irradiation as a fault injection method is proposed.
Notus: Dynamic Proofs of Liabilities from Zero-knowledge RSA Accumulators
Proofs of Liabilities (PoL) allow an untrusted prover to commit to its liabilities towards a set of users and then prove individual users' amounts or the total sum of liabilities, upon queries by users or third-party auditors. This application setting is highly dynamic. User liabilities may increase/decrease arbitrarily and the prover needs to update proofs in epoch increments (e.g., once a day for a crypto-asset exchange platform). However, prior works mostly focus on the static case, and trivial extensions to the dynamic setting open the system to windows of opportunity for the prover to under-report its liabilities and rectify its books in time for the next check, unless all users check their liabilities at all epochs. In this work, we develop Notus, the first dynamic PoL system for general liability updates that avoids this issue. Moreover, it achieves $O(1)$ query proof size, verification time, and auditor overhead per epoch.
The core building blocks underlying Notus are a novel zero-knowledge (and SNARK-friendly) RSA accumulator and a corresponding zero-knowledge MultiSwap protocol, which may be of independent interest. We then propose optimizations to reduce the prover's update overhead and make Notus scale to large numbers of users ($10^6$ in our experiments). Our results are very encouraging, e.g., it takes less than $2$ms to verify a user's liability and the proof size is $256$ Bytes. On the prover side, deploying Notus on a cloud-based testbed with eight 32-core machines and exploiting parallelism, it takes ${\sim}3$ minutes to perform the complete epoch update, after which all proofs have already been computed.
A Deniably Authenticated Searchable Public Key Encryption Scheme in Mobile Electronic Mail System
Confidentiality and authentication are two main security goals in secure electronic mail (e-mail). Furthermore, deniability is a significant security property for some e-mail applications, protecting the privacy of the sender. Although searchable encryption solves the keyword-search problem in a secure e-mail system, it also breaks the deniability of the system, because the adversary can obtain information about the data sender and data user from the trapdoor as well as the ciphertext used for keyword searching. At the same time, efficiency is another concern in mobile computing. To overcome these issues, we put forward deniably authenticated searchable public key encryption (DA-SPKE), which can be applied to deniable e-mail systems. We first define the DA-SPKE model and propose the DA-SPKE scheme. Then we prove its security in the random oracle model and evaluate its efficiency. Finally, we use it in a deniably secure e-mail system to protect both data senders' and data users' privacy.
Revisiting the May--Meurer--Thomae Algorithm --- Solving McEliece-1409 in One Day
As post-quantum cryptography transitions toward practical deployment, the significance of extensive cryptanalysis is on the rise. Three out of the four NIST-PQC round 4 candidates are forms of code-based cryptography. Analyses of asymptotic complexity in information set decoding (ISD) algorithms have been a central focus in the field of code-based cryptography. Recently, Esser, May and Zweydinger (Eurocrypt '22) demonstrate the practicality of the May--Meurer--Thomae (MMT) algorithm by decoding McEliece-1284. Esser and Zweydinger (Eurocrypt '23) propose the time-memory trade-off variant of Becker--Joux--May--Meurer (BJMM) decoding, which solves QC-3138. These works have paved the way for the cryptanalysis of ISD in real-world scenarios.
In this work, we further advance the progress of the abovementioned studies by performing a concrete analysis of MMT decoding. We improve the list construction in MMT so that the number of both candidates and representations in the enumeration phase is increased without the need for additional time and memory. Our new algorithm is theoretically 5.1 times faster than the BJMM algorithm for the Classic McEliece I instance. We achieve the minimum time complexity across all categories of Classic McEliece among all ISD algorithms. Moreover, compared with the BJMM algorithm, our MMT algorithm reduces the bit security by 1 to 3 bits for all code-based NIST-PQC round 4 candidates. Practical security estimates confirm that all the candidates have sufficiently strong bit security, except for Classic McEliece III, with a 1-bit deficiency.
In addition, we implement our new MMT algorithm in a GPU environment and set a new record by solving the McEliece-1409 instance, providing implementation details and experimental analyses. Our study verifies the practical reliability of the code-based candidates against current ISD algorithms.
Heuristic Ideal Obfuscation Scheme based on LWE Problem, its Variants and Quantum Oracle
This paper presents a heuristic ideal obfuscation scheme based on the LWE problem, which differs from that of Jain, Lin, and Luo [JLLW23]. The paper adopts a method similar to that of Brakerski, Dottling, Garg, and Malavolta [BDGM22, BDGM20] for constructing iO. It first introduces a variant of the LWR problem and proves its pseudorandomness. Based on this variant of the LWR problem, it constructs an LHE scheme, then combines it with an sFHE scheme constructed from the LWE problem to further construct the ideal obfuscation scheme. In comparison to the approach of Jain et al., this paper is relatively more specific. Additionally, the paper incorporates the quantum random oracle construction of Don et al. [DFMS22] to provide a more concrete quantum random oracle for use in the proposed obfuscation scheme.
On Information-Theoretic Secure Multiparty Computation with Local Repairability
In this work we consider the task of designing information-theoretic MPC protocols for which the state of a given party can be recovered from a small number of parties, a property we refer to as local repairability.
This is useful when considering MPC over dynamic settings where parties leave and join a computation, a scenario that has gained notable attention in recent literature.
Thanks to the results of (Cramer et al. EUROCRYPT'00), designing such protocols boils down to constructing a linear secret-sharing scheme (LSSS) with good locality, that is, one in which each share is determined by only a small number of other shares, that also satisfies the so-called multiplicativity property.
Previous constructions that achieve locality (e.g. using locally recoverable codes---LRCs) do not enjoy multiplicativity, and LSSS that are multiplicative (e.g. Shamir's secret-sharing) do not satisfy locality.
Our construction bridges this literature gap by showing the existence of an LSSS that achieves both properties simultaneously.
Our results are obtained by making use of the well-known connection between error correcting codes and LSSSs, in order to adapt the LRC construction by (Tamo & Barg, IEEE Transactions on Information Theory 2014) and turn it into an LSSS.
With enough care, such coding-theoretic construction yields our desired locality property, but it falls short at satisfying multiplicativity.
In order to address this, we perform an extensive analysis of the privacy properties of our scheme in order to identify parameter regimes where our construction satisfies multiplicativity.
Finally, since our LSSS satisfies locality, every share is determined by a small number of shares.
However, in an MPC context it is not enough to let the (small set of) parties send their shares to the repaired party, since this may leak more information than the regenerated share.
To obtain our final result regarding MPC with local repairability, we construct a lightweight MPC protocol that performs such repairing process without any leakage.
We provide both a passively secure construction (for the plain multiplicative regime) and an actively secure one (for strong multiplicativity).
STIR: Reed–Solomon Proximity Testing with Fewer Queries
We present STIR (Shift To Improve Rate), an interactive oracle proof of proximity (IOPP) for Reed-Solomon codes that achieves the best known query complexity of any concretely efficient IOPP for this problem. For $\lambda$ bits of security, STIR has query complexity $O(\log d + \lambda \cdot \log \log d )$, while FRI, a popular protocol, has query complexity $O(\lambda \cdot \log d )$ (including variants of FRI based on conjectured security assumptions). STIR relies on a new technique for recursively improving the rate of the tested Reed-Solomon code.
We provide an implementation of STIR compiled to a SNARK. Compared to a highly-optimized implementation of FRI, STIR achieves an improvement in argument size that ranges from $1.25\times$ to $2.46\times$ depending on the chosen parameters, with similar prover and verifier running times. For example, in order to achieve 128 bits of security for degree $2^{26}$ and rate $1/4$, STIR has argument size $114$ KiB, compared to $211$ KiB for FRI.
On the Feasibility of Sliced Garbling
Garbling schemes are one of the most fundamental objects in cryptography and have been studied extensively due to their broad applicability. The state-of-the-art is a construction in which XOR gates are free and AND gates require $3\kappa/2+\mathcal{O}(1)$ bits per gate, due to Rosulek and Roy (CRYPTO'21). An important technique in their garbling is slicing, which partitions the labels into two equal-length slices. In this paper, we explore the feasibility of the slicing technique for garbling schemes beyond the results introduced by Rosulek and Roy, demonstrating both its potential and its limitations.
In particular, we extend this technique to demonstrate the garbling of certain higher fan-in gadgets, and then use this to show that it is possible to garble 2-input AND gates at a cost of $4\kappa/3 +\mathcal{O}(1)$ bits. We then give a separation result showing that sliced garbling cannot be used to garble higher fan-in gadgets of degree $\geq 3$ when restricted to making queries that are linear functions of the input labels to the random oracle. We further demonstrate the usefulness of our techniques in the context of oblivious garbling, a newly introduced concept for capturing circuit hiding from the garbler. The complexity of our construction is superior to that of universal circuits, and grows linearly with circuit size.
Leakage-Resilient Attribute-Based Encryption with Attribute-Hiding
In this work, we present two generic frameworks for leakage-resilient attribute-based encryption (ABE), which is an improved version of ABE that can be proven secure even when part of the secret key is leaked. Our frameworks rely on the standard assumption ($k$-Lin) over prime-order groups. The first framework is designed for leakage-resilient ABE with attribute-hiding in the bounded leakage model. Prior to this work, no one had yet derived a generic leakage-resilient ABE framework with attribute-hiding. The second framework provides a generic method to construct leakage-resilient ABE in the continual leakage model. It is compatible with Zhang et al.'s work [DCC 2018] but is more generic. Concretely, Zhang et al.'s framework cannot be applied to some specific ABE schemes, while ours manages to do so. Technically, our frameworks are built on the predicate encoding of Chen et al. [EUROCRYPT 2015] combined with a method of adding redundancy. Finally, several instantiations are derived from our frameworks, which cover the cases of the zero inner-product predicate and the non-zero inner-product predicate.
Parallel Zero-knowledge Virtual Machine
Zero-knowledge virtual machine (zkVM) is a novel application of succinct and non-interactive zero-knowledge proof protocols that allows for verifiable computation over arbitrary code. In this paper, we present a new paradigm to build such a zkVM via data-parallel circuits. Our parallelization happens at the opcode and the basic block level. Such a design allows us to eliminate almost all of the circuit overhead for opcodes, including the control flow and the data flow, which can be substantial in a zero-knowledge circuit. The size of our circuit is dynamic and optimal in the sense that it only costs the sum of all individual opcodes that are executed in the program (i.e., you only pay as much as you use); whereas in all previous designs, the circuit is of a constant size, determined by the product of (a) the upper limit on the number of opcodes, and (b) the size of a super opcode circuit that is capable of handling every type of opcode.
Further, we present an asymmetric GKR prover that is tailored to the data parallelism in our zkVM design, accompanied by various optimized constraint gadgets. The use of a GKR prover also leads to significant reductions in the number of witnesses that are to be committed: GKR allows us to commit only the input and output of the circuits, whereas in previous Plonkish-based solutions, the prover needs to commit to all the witnesses.
High-Throughput Secure Multiparty Computation with an Honest Majority in Various Network Settings
In this work, we present novel protocols over rings for semi-honest secure three-party computation (3-PC) and malicious four-party computation (4-PC) with one corruption. Compared to state-of-the-art protocols in the same setting, our protocols require fewer low-latency and high-bandwidth links between the parties to achieve high throughput. Our protocols also reduce the computational complexity by requiring up to 50 percent fewer basic instructions per gate. Further, our protocols achieve the currently best-known communication complexity (3, resp. 5 elements per multiplication gate) with an optional preprocessing phase to reduce the communication complexity of the online phase to 2 (resp. 3) elements per multiplication gate.
In homogeneous network settings, i.e., where all links between the parties have similar network bandwidth and latency, our protocols achieve up to two times higher throughput than state-of-the-art protocols.
In heterogeneous network settings, i.e., where the links between the parties differ in network bandwidth and latency, our protocols achieve even larger performance improvements.
We implemented our protocols and multiple other state-of-the-art protocols (Replicated 3-PC, Astra, Fantastic Four, Tetrad) in a novel open-source C++ framework optimized for achieving high throughput.
Five out of the six implemented 3-PC and 4-PC protocols achieve more than one billion 32-bit multiplications or more than 32 billion AND gates per second using our implementation in a 25 Gbit/s LAN environment.
This is the highest throughput achieved in 3-PC and 4-PC so far and between two and three orders of magnitude higher than the throughput MP-SPDZ achieves in the same settings.
A New Public Key Cryptosystem Based on the Cubic Pell Curve
Since its invention in 1978 by Rivest, Shamir and Adleman, the public key cryptosystem RSA has become a widely popular and useful scheme in cryptography. Its security is related to the difficulty of factoring large integers that are the product of two large prime numbers. For various reasons, several variants of RSA have been proposed, some based on different arithmetic such as elliptic curves and singular cubic curves. In 2018, Murru and Saettone proposed another variant of RSA based on the cubic Pell curve with a modulus of the form $N=pq$. In this paper, we present a new public key cryptosystem based on the arithmetic of the cubic Pell curve with a modulus of the form $N=p^rq^s$. Its security is based on the hardness of factoring composite integers and on Rabin's trapdoor one-way function. In the new scheme, the arithmetic operations are performed on a cubic Pell curve which is known only to the sender and the recipient of a plaintext.
Transmitter Actions for Secure Integrated Sensing and Communication
This work models a secure integrated sensing and communication (ISAC) system as a broadcast channel with action-dependent channel states and output feedback. The transmitted message is split into a common and a secure message, both of which must be recovered at the legitimate receiver, while the secure message needs to be kept secret from the eavesdropper. The transmitter actions, such as beamforming vector design, affect the corresponding state distribution at each channel use. The action sequence is modeled to depend on both the transmitted message and channel output feedback. For perfect channel output feedback, the secrecy-distortion regions are provided for physically-degraded and reversely-physically-degraded secure ISAC channels with transmitter actions. The corresponding rate regions when the entire message should be kept secret are also provided. We illustrate the results by characterizing the secrecy-distortion region of a binary example.
Malicious Security for SCALES: Outsourced Computation with Ephemeral Servers
The SCALES (Small Clients And Larger Ephemeral Servers) model is a recently proposed model for MPC (Acharya et al., TCC 2022). While the SCALES model offers several attractive features for practical large-scale MPC, the result of Acharya et al. only offered semi-honest secure protocols in this model.
We present a new efficient SCALES protocol secure against malicious adversaries, for general Boolean circuits. We start with the base construction of Acharya et al. and design and use a suite of carefully defined building blocks that may be of independent interest. The resulting protocol is UC-secure without honest majority, with a CRS and bulletin-board as setups, and allows publicly identifying deviations from correct execution.
Decentralized Access Control Infrastructure for Enterprise Digital Asset Management
With the rapidly evolving landscape of cryptography, blockchain technology has advanced to cater to diverse user requirements, leading to the emergence of a multi-chain ecosystem featuring various use cases characterized by distinct transaction speed and decentralization trade-offs. At the heart of this evolution lie digital signature schemes such as ECDSA, Schnorr, and EdDSA, which are responsible for safeguarding blockchain-based assets.
However, a critical gap exists in the current landscape: there is no solution empowering a consortium of entities to collectively manage or generate digital signatures for diverse digital assets in a distributed manner with dynamic threshold settings, all while mitigating counter-party risks. Existing threshold signature schemes impose a fixed threshold during the key generation phase, limiting the adaptability of threshold settings for the subsequent signature phase. Attempts to address this challenge often involve relinquishing signature generation control, either partially or entirely, from the participating parties, introducing vulnerabilities that could jeopardize digital assets in the event of network disruptions.
Addressing this gap, our work introduces an innovative infrastructure that allows a group of users to programmatically define and manage access control policies, supported by a blockchain network dedicated to policy enforcement. This network is uniquely designed to prevent any entity, including itself, from autonomously generating digital signatures, thereby mitigating counter-party risks and enhancing asset security. This system is particularly suited for enterprise contexts, where collaborative asset oversight and policy adherence are essential. Our solution marks a significant stride in the realm of blockchain technology, paving the way for more sophisticated and secure digital asset management in a rapidly evolving digital landscape.
Quantum Circuits of AES with a Low-depth Linear Layer and a New Structure
In recent years quantum computing has developed rapidly. The security threat posed by quantum computing to cryptography makes it necessary to better evaluate the resource cost of attacking algorithms, some of which require quantum implementations of the attacked cryptographic building blocks. In this paper we optimize quantum circuits of AES in several aspects. Firstly, based on de Brugière \textit{et al.}'s greedy algorithm, we propose an improved depth-oriented algorithm for synthesizing low-depth CNOT circuits with no ancilla qubits. Our algorithm finds a CNOT circuit of AES MixColumns with depth 10, which breaks the recent record of depth 16. In addition, our algorithm gives low-depth CNOT circuits for many MDS matrices and matrices used in block ciphers studied in related work. Secondly, we present a new structure named the compressed pipeline structure for synthesizing quantum circuits of AES, which can be used to construct quantum oracles employed in quantum attacks based on Grover's and Simon's algorithms. When the number of ancilla qubits required by the round function and its inverse is not very large, our structure offers a better trade-off in $D$-$W$ cost. We then give detailed quantum circuits of AES-128 under the guidance of our structure and make comparisons with other circuits. Finally, our encryption circuit and key schedule circuit have their own application scenarios. An encryption oracle for Simon's algorithm built with the former has smaller depth; for example, we can construct an AES-128 encryption oracle with $T$-depth 33, while the previous best result is 60. A small variant of the latter, along with our method to make an Sbox input-invariant, can avoid the allocation of extra ancilla qubits for storing key words in the shallowed pipeline structure. Based on this, we achieve a quantum circuit of AES-128 with the lowest $TofD$-$W$ cost to date, 130720.
Collision Resistance from Multi-Collision Resistance for all Constant Parameters
A $t$-multi-collision-resistant hash function ($t$-MCRH) is a family of shrinking functions for which it is computationally hard to find $t$ distinct inputs mapping to the same output for a function sampled from this family. Several works have shown that $t$-MCRHs are sufficient for many of the applications of collision-resistant hash functions (CRHs), which correspond to the special case of $t = 2$.
An important question is hence whether $t$-MCRHs for $t > 2$ are fundamentally weaker objects than CRHs. As a first step towards resolving this question, Rothblum and Vasudevan (CRYPTO '22) recently gave non-black-box constructions of infinitely-often secure CRHs from $t$-MCRHs for $t \in \{3,4\}$ assuming the MCRH is sufficiently shrinking. Earlier on, Komargodski and Yogev (CRYPTO '18) also showed that $t$-MCRHs for any constant $t$ imply the weaker notion of a distributional CRH.
In this paper, we remove the limitations of prior works, and completely resolve the question of the power of $t$-MCRHs for constant $t$ in the infinitely-often regime, showing that the existence of such a function family always implies the existence of an infinitely-often secure CRH. As in the works mentioned above, our construction is non-black-box and non-constructive. We further give a new domain extension result for MCRHs that enables us to show that the underlying MCRH need only have arbitrarily small linear shrinkage (mapping $(1 + \epsilon)n$ bits to $n$ bits for any fixed $\epsilon > 0$) to imply the existence of CRHs.
SyRA: Sybil-Resilient Anonymous Signatures with Applications to Decentralized Identity
We introduce a new cryptographic primitive, called Sybil-Resilient Anonymous (SyRA) signature, which enables users to generate, on demand, unlinkable pseudonyms tied to any given context, and issue digital signatures on their behalf. Concretely, given a personhood relation, an issuer (who may be a distributed entity) enables users to prove their personhood and extract an associated long-term key, which can then be used to issue signatures for any given context and message. Sybil-resilient anonymous signatures achieve two key security properties: 1) Sybil resilience: ensures that every user is entitled to at most one pseudonym per context, and 2) anonymity: requires that no information about the user is leaked through their various pseudonyms or the signatures they issue on their pseudonyms’ behalf. We conceptualize SyRA signatures as an ideal functionality in the Universal Composition (UC) setting and realize the functionality via an efficient, pairing-based construction that utilizes two levels of verifiable random functions (VRFs) and which may be of independent interest. A key feature of this approach is the statelessness of the issuer: we achieve the core properties of Sybil resilience and anonymity without requiring the issuer to retain any information about past user interactions. SyRA signatures have various applications in multiparty systems, such as e-voting (e.g., for decentralized governance) and cryptocurrency airdrops, making them an attractive option for deployment in decentralized identity (DID) systems.
Strong PUF Security Metrics: Sensitivity of Responses to Single Challenge Bit Flips
This paper belongs to a sequence of manuscripts that discuss generic and easy-to-apply security metrics for Strong Physical Unclonable Functions (PUFs). These metrics cannot and shall not fully replace in-depth machine learning (ML) studies in the security assessment of Strong PUF candidates. But they can complement the latter, serve in initial complexity analyses, and allow simple iterative design optimization. Moreover, they are computationally more efficient and far easier to standardize than typical ML-studies. This manuscript treats one very natural, but also very impactful metric, and investigates the effects that the alteration of single challenge bits has on the associated PUF-responses.
We define several concrete metric scores based on this idea, and demonstrate their predictive power by applying them to various popular Strong PUF design families as test cases. This includes XOR Arbiter PUFs, XOR Bistable Ring PUFs, and Feed-Forward Arbiter PUFs, whose practical security is particularly well known after two decades of intense research. In passing, our manuscript also suggests techniques for representing our metric scores graphically, and for interpreting them in a meaningful manner. Our work demonstrates that if comparable methods had existed earlier, various Strong PUF candidates deemed secure and broken later could have been recognized and winnowed early on.
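The core measurement behind this metric can be sketched in a few lines (our toy stand-in model below, not one of the PUF designs analyzed in the paper): estimate, per challenge-bit position, the probability that flipping that single bit flips the response.

```python
import random

def bit_flip_sensitivity(puf, n_bits, position, trials=2000):
    """Estimate Pr[response flips | challenge bit `position` is flipped]."""
    flips = 0
    for _ in range(trials):
        c = [random.randint(0, 1) for _ in range(n_bits)]
        c2 = c.copy()
        c2[position] ^= 1
        flips += puf(c) != puf(c2)
    return flips / trials

# Stand-in "PUF": parity of challenge bits under a fixed random mask.
# (Deliberately trivial; a real run would plug in an Arbiter PUF model
# or measured hardware responses.)
mask = [random.randint(0, 1) for _ in range(16)]
toy_puf = lambda c: sum(ci & mi for ci, mi in zip(c, mask)) % 2

scores = [bit_flip_sensitivity(toy_puf, 16, pos) for pos in range(16)]
print([round(s, 2) for s in scores])  # ~1.0 where the mask bit is set, else 0.0
```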
Connecting Leakage-Resilient Secret Sharing to Practice: Scaling Trends and Physical Dependencies of Prime Field Masking
Symmetric ciphers operating in (small or mid-size) prime fields have been shown to be promising candidates to maintain security against low-noise (or even noise-free) side-channel leakage.
In order to design prime ciphers that best trade physical security and implementation efficiency, it is essential to understand how side-channel security evolves with the field size (i.e., scaling trends).
Unfortunately, it has also been shown that such a scaling trend depends on the leakage functions and cannot be explained by the standard metrics used to analyze Boolean masking with noise.
In this work, we therefore initiate a formal study of prime field masking for two canonical leakage functions: bit leakages and Hamming weight leakages. By leveraging theoretical results from the leakage-resilient secret sharing literature, we explain formally why (1) bit leakages correspond to a worst case and do not encourage operating in larger fields, and (2) the opposite conclusion holds for Hamming weight leakages, where increasing the prime field modulus p can contribute to a security amplification that is exponential in the number of shares, with log(p) seen as a security parameter, much like the noise variance in Boolean masking. We combine these theoretical results with experimental ones and show that the benefits of masking in larger prime fields degrade gracefully when leakage functions deviate slightly from the Hamming weight abstraction, motivating further research towards characterizing (ideally wide) classes of leakage functions offering such guarantees.
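To make the setting concrete, here is a toy sketch (ours) of additive masking over a prime field and the per-share Hamming-weight leakage that the second leakage model above considers.

```python
import random

def share(secret, n_shares, p):
    """Additive n-share masking of `secret` modulo the prime p."""
    shares = [random.randrange(p) for _ in range(n_shares - 1)]
    shares.append((secret - sum(shares)) % p)
    return shares

def hamming_weight(x):
    return bin(x).count("1")

p, secret = 127, 42
shares = share(secret, n_shares=3, p=p)
assert sum(shares) % p == secret
# What a Hamming-weight side channel would observe, one value per share:
print([hamming_weight(s) for s in shares])
```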
Perfect (Parallel) Broadcast in Constant Expected Rounds via Statistical VSS
We study broadcast protocols in the information-theoretic model under optimal conditions, where the number of corruptions $t$ is at most one-third of the number of parties, $n$. While broadcast protocols are known to require $\Omega(n)$ rounds in the worst case, protocols with an expected constant number of rounds have been demonstrated since the seminal work of Feldman and Micali [STOC'88]. Communication complexity for such protocols has gradually improved over the years, reaching $O(nL)$ plus expected $O(n^4\log n)$ for broadcasting a message of size $L$ bits.
This paper presents a perfectly secure broadcast protocol with expected constant rounds and communication complexity of $O(nL)$ plus expected $O(n^3 \log^2n)$ bits. In addition, we consider the problem of parallel broadcast, where each of $n$ senders wishes to broadcast a message of size $L$. We show a parallel broadcast protocol with expected constant rounds and communication complexity of $O(n^2L)$ plus expected $O(n^3 \log^2n)$ bits. Our protocol is optimal (up to expectation) for messages of length $L \in \Omega(n \log^2 n)$.
Our main contribution is a framework for obtaining perfectly secure broadcast with an expected constant number of rounds from a statistically secure verifiable secret sharing scheme. Moreover, we provide a new statistically secure verifiable secret sharing scheme where the broadcast cost per participant is reduced from $O(n \log n)$ bits to only $O({\sf poly} \log n)$ bits. All our protocols are adaptively secure.
Efficient and Generic Methods to Achieve Active Security in Private Information Retrieval and More Advanced Database Search
Motivated by secure database search, we present secure computation protocols for a function $f$ in the client-servers setting, where a client can obtain $f(x)$ on a private input $x$ by communicating with multiple servers each holding $f$. Specifically, we propose generic compilers from passively secure protocols, which only guarantee security against servers that follow the protocol, to actively secure protocols, which guarantee privacy and correctness even against malicious servers. Our compilers apply to protocols computing any class of functions, and are efficient in that the overheads in communication and computational complexity are only polynomial in the number of servers, independent of the complexity of the functions. We then apply our compilers to obtain concrete actively secure protocols for various functions including private information retrieval (PIR), bounded-degree multivariate polynomials and constant-depth circuits. For example, our actively secure PIR protocols achieve exponentially better computational complexity in the number of servers than the currently best-known protocols. Furthermore, our protocols for polynomials and constant-depth circuits reduce the required number of servers compared to the previous actively secure protocols. In particular, our protocol instantiated from the sparse Learning Parity with Noise (LPN) assumption is the first actively secure protocol for multivariate polynomials which achieves the minimum number of servers, without assuming fully homomorphic encryption.
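For concreteness, here is the classic passively secure two-server PIR baseline that such compilers take as input (illustrative sketch; the compiled, actively secure protocol with its checks against malicious answers is not reproduced here).

```python
import secrets

def query(n, index):
    """Client: produce two masks that differ only at `index`."""
    mask_a = [secrets.randbelow(2) for _ in range(n)]
    mask_b = mask_a.copy()
    mask_b[index] ^= 1
    return mask_a, mask_b          # each mask alone is uniformly random

def answer(db_bits, mask):
    """Server: XOR of the database bits selected by the mask."""
    a = 0
    for bit, m in zip(db_bits, mask):
        a ^= bit & m
    return a

db = [1, 0, 1, 1, 0, 0, 1, 0]
qa, qb = query(len(db), index=5)
assert answer(db, qa) ^ answer(db, qb) == db[5]
```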
Universal Composable Password Authenticated Key Exchange for the Post-Quantum World
In this paper, we construct the first password authenticated key exchange (PAKE) scheme from isogenies with Universal Composable (UC) security in the random oracle model (ROM). We also construct the first two PAKE schemes with UC security in the quantum random oracle model (QROM), one based on the learning with errors (LWE) assumption, and the other based on the group-action decisional Diffie-Hellman (GA-DDH) assumption in the isogeny setting.
To obtain our UC-secure PAKE scheme in the ROM, we propose a generic construction of PAKE from basic lossy public key encryption (LPKE) and CCA-secure PKE. We also introduce a new variant of LPKE, named extractable LPKE (eLPKE). By replacing the basic LPKE with eLPKE, our generic construction of PAKE achieves UC security in the QROM. The LPKE and eLPKE have instantiations not only from LWE but also from GA-DDH, which yields four concrete PAKE schemes with UC security in the ROM or QROM, based on LWE or GA-DDH.
Lower Bounds for Differential Privacy Under Continual Observation and Online Threshold Queries
One of the most basic problems for studying the "price of privacy over time" is the so-called private counter problem, introduced by Dwork et al. (2010) and Chan et al. (2010). In this problem, we aim to track the number of events that occur over time, while hiding the existence of every single event. More specifically, in every time step $t\in[T]$ we learn (in an online fashion) that $\Delta_t\geq 0$ new events have occurred, and must respond with an estimate $n_t\approx\sum_{j=1}^t \Delta_j$. The privacy requirement is that all of the outputs together, across all time steps, satisfy event-level differential privacy.
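For intuition about the upper bounds discussed next, the classic binary-tree mechanism (in the style of Dwork et al. and Chan et al.) achieves polylogarithmic error by noising dyadic partial sums. The sketch below is a minimal illustration of that folklore idea, not code from this paper; the stream and privacy parameter eps are assumed inputs.

    import math, random

    def private_counter(stream, eps, seed=0):
        """Binary-tree counter: event-level eps-DP running counts. Each event
        touches at most `levels` dyadic sums, so Laplace noise of scale
        levels/eps per sum gives eps-DP; error is polylogarithmic in T."""
        rng = random.Random(seed)
        levels = len(stream).bit_length()

        def laplace(b):                       # inverse-CDF Laplace sampler
            u = rng.random() - 0.5
            return -b * math.copysign(math.log(1 - 2 * abs(u)), u)

        alpha = [0.0] * levels                # exact dyadic partial sums
        alpha_hat = [0.0] * levels            # their noisy, released versions
        estimates = []
        for t, x in enumerate(stream, start=1):
            i = (t & -t).bit_length() - 1     # lowest set bit of t
            alpha[i] = x + sum(alpha[:i])     # merge lower levels into level i
            for j in range(i):                # lower levels restart
                alpha[j] = alpha_hat[j] = 0.0
            alpha_hat[i] = alpha[i] + laplace(levels / eps)
            estimates.append(sum(alpha_hat[j] for j in range(levels) if (t >> j) & 1))
        return estimates

    # e.g. private_counter([1, 0, 2, 1, 0, 3], eps=1.0)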
The main question here is how our error needs to depend on the total number of time steps $T$ and the total number of events $n$. Dwork et al. (2015) showed an upper bound of $O\left(\log(T)+\log^2(n)\right)$, and Henzinger et al. (2023) showed a lower bound of $\Omega\left(\min\{\log n, \log T\}\right)$. We show a new lower bound of $\Omega\left(\min\{n,\log T\}\right)$, which is tight w.r.t. the dependence on $T$, and is tight in the sparse case where $\log^2 n=O(\log T)$. Our lower bound has the following implications:
(1) We show that our lower bound extends to the online thresholds problem, where the goal is to privately answer many "quantile queries" when these queries are presented one-by-one. This resolves an open question of Bun et al. (2017).
(2) Our lower bound implies, for the first time, a separation between the number of mistakes obtainable by a private online learner and a non-private online learner. This partially resolves a COLT'22 open question published by Sanyal and Ramponi.
(3) Our lower bound also yields the first separation between the standard model of private online learning and a recently proposed relaxed variant of it, called private online prediction.
Two-Round Maliciously-Secure Oblivious Transfer with Optimal Rate
We give a construction of a two-round batch oblivious transfer (OT) protocol in the CRS model that is UC-secure against malicious adversaries and has (near) optimal communication cost. Specifically, to perform a batch of $k$ oblivious transfers where the sender's inputs are bits, the sender and the receiver need to communicate a total of $3k + o(k) \cdot \mathsf{poly}(\lambda)$ bits. We argue that $3k$ bits are required by any protocol with a black-box and straight-line simulator. The security of our construction is proven assuming the hardness of Quadratic Residuosity (QR) and Learning Parity with Noise (LPN).
Preimage Attacks on Reduced-Round Ascon-Xof
Ascon, a family of algorithms that supports authenticated encryption and hashing, has been selected as the new standard for lightweight cryptography in the NIST Lightweight Cryptography Project. Ascon’s permutation and authenticated encryption have been actively analyzed, but there are relatively few analyses of its hashing. In this paper, we concentrate on preimage attacks on Ascon-Xof. We focus on linearizing the polynomials leaked by the hash value to find its inverse. In an attack on 2-round Ascon-Xof, we carefully construct the set of guess bits using a greedy algorithm in the context of guess-and-determine. This allows us to attack Ascon-Xof more efficiently than the method of Dobraunig et al., and we fully implement our attack to demonstrate its effectiveness. We also provide the number of guess bits required to linearize one output bit after 3- and 4-round Ascon’s permutation, respectively. In particular, for the first time, we connect the result for 3-round Ascon to a preimage attack on Ascon-Xof with a 64-bit output. Our attacks primarily focus on analyzing weakened versions of Ascon-Xof, where the weakening involves setting all the IV values to 0 and omitting the round constants. Although our attacks do not compromise the security of the full Ascon-Xof, they provide new insights into its security.
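The greedy guess-bit idea can be sketched abstractly (our own simplified model, not the paper's Ascon-specific procedure): treat each output bit as a set of monomials and repeatedly guess the variable occurring in the most remaining nonlinear monomials, until every polynomial is linear in the unguessed variables.

    def greedy_guess_bits(polys):
        """polys: list of sets of frozensets; each frozenset holds the variables
        of one monomial (len >= 2 means nonlinear). Guessing a variable fixes it
        to a constant, shrinking every monomial that contains it; we track only
        which monomials remain nonlinear. Returns the chosen guess variables."""
        nonlinear = [{m for m in p if len(m) >= 2} for p in polys]
        guessed = []
        while any(nonlinear):
            counts = {}
            for p in nonlinear:
                for m in p:
                    for v in m:
                        counts[v] = counts.get(v, 0) + 1
            best = max(counts, key=counts.get)        # greedy choice
            guessed.append(best)
            nonlinear = [{m - {best} for m in p if len(m - {best}) >= 2}
                         for p in nonlinear]
        return guessed

    # e.g. greedy_guess_bits([{frozenset({'x0', 'x1'}), frozenset({'x2'})}])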
Perfectly-Secure Multiparty Computation with Linear Communication Complexity over Any Modulus
Consider the task of secure multiparty computation (MPC) among $n$ parties with perfect security and guaranteed output delivery, supporting $t<n/3$ active corruptions. Suppose the arithmetic circuit $C$ to be computed is defined over a finite ring $\mathbb{Z}/q\mathbb{Z}$, for an arbitrary $q\in\mathbb{Z}$. It is known that this type of MPC over such a ring is possible, with communication that scales as $O(n|C|)$, assuming that $q$ scales as $\Omega(n)$. However, for constant-size rings $\mathbb{Z}/q\mathbb{Z}$ where $q = O(1)$, the communication is actually $O(n\log n|C|)$ due to the need for so-called ring extensions. In most natural settings, the number of parties is variable but the ``datatypes'' used for the computation are fixed (e.g. 64-bit integers). In this regime, no protocol with linear communication was previously known.
In this work we provide an MPC protocol in this setting: perfect security, G.O.D. and $t<n/3$ active corruptions, that enjoys linear communication $O(n|C|)$, even for constant-size rings $\mathbb{Z}/q\mathbb{Z}$. This includes as important particular cases small fields such as $\mathbb{F}_2$, and also the ring $\mathbb{Z}/2^k\mathbb{Z}$. The main difficulty in achieving this result is that widely used techniques such as linear secret-sharing cannot work over constant-size rings, and instead, one must make use of ring extensions that add $\Omega(\log n)$ overhead, while packing $\Omega(\log n)$ ring elements in each extension element in order to amortize this cost. We make use of reverse multiplication-friendly embeddings (RMFEs) for this packing, and adapt recent techniques in network routing (Goyal et al. CRYPTO'22) to ensure this can be efficiently used for non-SIMD circuits. Unfortunately, doing this naively results in a restriction on the minimum width of the circuit, which leads to an extra additive term in communication of $\mathsf{poly}(n)\cdot\mathsf{depth}(C)$. One of our biggest technical contributions lies in designing novel techniques to overcome this limitation by packing elements that are distributed across different layers. To the best of our knowledge, all works that have a notion of packing (e.g. RMFE or packed secret-sharing) group gates across the same layer, and not doing so, as in our work, leads to a unique set of challenges and complications.
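To make the $\log n$ overhead and its amortization concrete (a back-of-the-envelope illustration of ours, consistent with but not taken from the paper): Shamir-style sharing among $n$ parties needs roughly $n$ exceptional evaluation points, so over $\mathbb{F}_2$ one works in an extension $\mathbb{F}_{2^d}$ with $2^d \geq n$, i.e. $d = \Theta(\log n)$. Sharing a single bit then costs $d$ bits per share, i.e. $O(n \log n)$ bits per gate. A $(k,m)$-RMFE instead packs $k$ bits into one element of $\mathbb{F}_{2^m}$ with $m = O(k)$, so choosing $k = \Theta(\log n)$ gives an amortized cost per bit of
$$\frac{O(n) \cdot m}{k} = O(n) \text{ bits per gate,}$$
which is the linear communication the paper targets.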
Garbled Circuit Lookup Tables with Logarithmic Number of Ciphertexts
Garbled Circuit (GC) is a basic technique for practical secure computation. GC handles Boolean circuits; it consumes significant network bandwidth to transmit encoded gate truth tables, each of which scales with the computational security parameter $\kappa$. GC optimizations that reduce bandwidth consumption are valuable.
It is natural to consider a generalization of Boolean two-input one-output gates (represented by $4$-row one-column lookup tables, LUTs) to arbitrary $N$-row $m$-column LUTs. Known techniques for this do not scale, with the naive size-$O(Nm\kappa)$ garbled LUT being the most practical approach in many scenarios.
Our novel garbling scheme -- logrow -- implements GC LUTs while sending a number of ciphertexts that is only logarithmic in $N$! Specifically, let $n = \lceil \log_2 N \rceil$. Our scheme lets the GC parties evaluate a LUT with $(n-1)\kappa + nm\kappa + Nm$ bits of communication. logrow is compatible with modern GC advances, e.g. half gates and free XOR.
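Plugging illustrative numbers into the stated cost (the parameters below are our choice, not the paper's benchmarks):

    # Cost of the stated logrow formula vs. the naive garbled LUT.
    def logrow_bits(N, m, kappa):
        n = (N - 1).bit_length()              # n = ceil(log2 N)
        return (n - 1) * kappa + n * m * kappa + N * m

    def naive_bits(N, m, kappa):
        return N * m * kappa                  # one ciphertext per table entry

    N, m, kappa = 256, 8, 128
    print(logrow_bits(N, m, kappa))           # 896 + 8192 + 2048 = 11136 bits
    print(naive_bits(N, m, kappa))            # 262144 bits, roughly 23.5x more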
Our work improves state-of-the-art GC handling of several interesting applications, such as privacy-preserving machine learning, floating-point arithmetic, and DFA evaluation.
Algorithms for Matrix Code and Alternating Trilinear Form Equivalences via New Isomorphism Invariants
We devise algorithms for finding equivalences of trilinear forms over finite fields modulo linear group actions. Our focus is on two problems under this umbrella, Matrix Code Equivalence (MCE) and Alternating Trilinear Form Equivalence (ATFE), since their hardness is the foundation of the NIST round-$1$ signature candidates MEDS and ALTEQ respectively.
We present new algorithms for MCE and ATFE, which are further developments of the algorithms for polynomial isomorphism and alternating trilinear form equivalence, in particular by Bouillaguet, Fouque, and Véber (Eurocrypt 2013), and Beullens (Crypto 2023). Key ingredients in these algorithms are new easy-to-compute distinguishing invariants under the respective group actions.
For MCE, we associate new isomorphism invariants to corank-$1$ points of matrix codes, which lead to a birthday-type algorithm. We present empirical justifications that these isomorphism invariants are easy to compute and distinguishing, and provide an implementation of this algorithm. This algorithm has some implications for the security of MEDS.
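The birthday-type strategy can be sketched generically (our own illustration; the samplers and the invariant below are hypothetical stand-ins for the paper's corank-$1$ constructions): sample points on each side, bucket them by an easy-to-compute invariant, and stop at a cross-collision, which the birthday paradox predicts after roughly square-root-many samples per side.

    def birthday_match(sample_a, sample_b, invariant, trials):
        """Generic birthday search: draw `trials` points from each of two
        equivalent objects, bucket by the isomorphism invariant, and return a
        pair sharing a bucket; such a pair is a candidate for recovering the
        equivalence. sample_a, sample_b, invariant are problem-specific."""
        buckets = {}
        for _ in range(trials):
            p, q = sample_a(), sample_b()
            buckets.setdefault(invariant(p), []).append(('A', p))
            buckets.setdefault(invariant(q), []).append(('B', q))
        for pts in buckets.values():
            if {s for s, _ in pts} == {'A', 'B'}:      # cross-collision found
                a = next(x for s, x in pts if s == 'A')
                b = next(x for s, x in pts if s == 'B')
                return a, b
        return None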
The invariant function for ATFE is similar, except it is associated with lower-rank points. Modulo certain assumptions on turning the invariant function into canonical forms, our algorithm for ATFE improves on the runtime of the previously best known algorithm of Beullens (Crypto 2023).
Finally, we present quantum variants of our classical algorithms with cubic runtime improvements.
Accelerating SLH-DSA by Two Orders of Magnitude with a Single Hash Unit
We report on efficient and secure hardware implementation techniques for the FIPS 205 SLH-DSA Hash-Based Signature Standard. We demonstrate that very significant overall performance gains can be obtained from hardware that optimizes the padding formats and iterative hashing processes specific to SLH-DSA. A prototype implementation, SLotH, contains Keccak/SHAKE, SHA2-256, and SHA2-512 cores and supports all 12 parameter sets of SLH-DSA. SLotH also supports side-channel secure PRF computation and Winternitz chains. SLotH drivers run on a small RISC-V control core, as is common in current Root-of-Trust (RoT) systems.
The new features make SLH-DSA on SLotH many times faster than similarly-sized general-purpose hash accelerators. Compared to unaccelerated microcontroller implementations, the performance of SLotH's SHAKE variants is up to $300\times$ faster; signature generation with the 128f parameter set takes 4,903,978 cycles, while signature verification with the 128s parameter set takes only 179,603 cycles. The SHA2 parameter sets run at approximately half the speed of the SHAKE parameter sets. We observe that the signature verification performance of SLH-DSA's ``s'' parameter sets is generally better than that of accelerated ECDSA or Dilithium on similarly-sized RoT targets. The area of the full SLotH system is small, from 63 kGE (SHA2, Cat 1 only) to 155 kGE (all parameter sets). The Keccak Threshold Implementation adds another 130 kGE.
We provide a sensitivity analysis of SLH-DSA in relation to side-channel leakage. We show experimentally that an SLH-DSA implementation with CPU hashing will rapidly leak the SK.seed master key. We also perform a 100,000-trace TVLA leakage assessment with a protected SLotH unit.
Key Recovery Attack on the Partial Vandermonde Knapsack Problem
The Partial Vandermonde (PV) Knapsack problem is an algebraic variant of the low-density inhomogeneous SIS problem. The problem has been used as a building block for various lattice-based constructions, including signatures (ACNS'14, ACISP'18), encryption (DCC'15, DCC'20), and signature aggregation (Eprint'20). At Crypto'22, Boudgoust, Gachon, and Pellet-Mary proposed a key distinguishing attack on the PV Knapsack exploiting algebraic properties of the problem. Unfortunately, their attack does not offer key recovery, except for worst-case keys.
In this paper, we propose an alternative attack on the PV Knapsack problem, which provides key recovery for a much larger set of keys. Like the Crypto'22 attack, it is based on lattice reduction and uses a dimension reduction technique to speed up the underlying lattice reduction algorithm. As a side bonus, our attack transforms the PV Knapsack problem into uSVP instances instead of the SVP instances used in the Crypto'22 attack. This also helps the lattice reduction algorithm, both from a theoretical and a practical point of view.
We use our attack to re-assess the hardness of the concrete parameters used in the literature. It appears that many contain a non-negligible fraction of weak keys, which are easily identified and extremely susceptible to our attack. For example, a fraction of $2^{-19}$ of the public keys of a parameter set from ACISP'18 can be solved in about $30$ hours on a moderate server using off-the-shelf lattice reduction. This parameter set was initially claimed to have $129$-bit security against key recovery attacks. Its security was reduced to $87$ bits by the distinguishing attack from Crypto'22. Similarly, the ACNS'14 proposal also includes a parameter set containing a fraction of $2^{-19}$ of weak keys; those can be solved in about $17$ hours.
Combined Threshold Implementation
Physical security is an important aspect of devices for which an adversary can manipulate the physical execution environment. Recently, more and more attention has been directed towards a security model that combines the capabilities of passive and active physical attacks, i.e., an adversary that performs fault-injection and side-channel analysis at the same time. Implementing countermeasures against such a powerful adversary is not only costly but also requires the skillful combination of masking and redundancy to counteract all reciprocal effects.
In this work, we propose a new methodology to generate combined-secure circuits. We show how to transform TI-like constructions to resist any adversary with the capability to tamper with internal gates and probe internal wires. For the resulting protection scheme, we can prove the combined security in a well-established theoretical security model.
Since the transformation preserves the advantages of TI-like structures, the resulting circuits prove to be more efficient than the state of the art in the number of required bits of randomness (by up to 100%), in latency in clock cycles (by up to 40%), and even in area for pipelined designs (by up to 40%), for an adversary restricted to manipulating a single gate and probing a single wire.
Algebraic Algorithm for the Alternating Trilinear Form Equivalence Problem
The Alternating Trilinear Form Equivalence (ATFE) problem was recently used by Tang et al. as a hardness assumption in the design of a Fiat-Shamir digital signature scheme ALTEQ. The scheme was submitted to the additional round for digital signatures of the NIST standardization process for post-quantum cryptography.
ATFE is a hard equivalence problem known to be in the class of equivalence problems that includes, for instance, the Tensor Isomorphism (TI), Quadratic Maps Linear Equivalence (QMLE) and the Matrix Code Equivalence (MCE) problems. Due to the increased cryptographic interest, the understanding of its practical hardness has also increased in the last couple of years. Currently, there are several combinatorial and algebraic algorithms for solving it, the best of which is a graph-theoretic algorithm that also includes an algebraic subroutine.
In this paper, we take a purely algebraic approach to the ATFE problem, but we use a coding theory perspective to model the problem. This modelling was introduced earlier for the MCE problem. Using it, we improve the cost of algebraic attacks against ATFE compared to previously known ones.
Taking into account the algebraic structure of alternating trilinear forms, we show that the obtained system has fewer variables but also fewer equations than the one for MCE, and gives rise to structural degree-3 syzygies. Under the assumption that, outside of these syzygies, the system behaves semi-regularly, we provide a concrete, non-asymptotic complexity estimate of the performance of our algebraic attack.
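For context, the standard accounting behind such estimates (textbook background, not this paper's exact formulas): for a semi-regular system of $m$ equations of degrees $d_1,\ldots,d_m$ in $\nu$ variables, the degree of regularity $d_{reg}$ is the index of the first non-positive coefficient of the series
$$H(z) = \frac{\prod_{i=1}^{m}(1-z^{d_i})}{(1-z)^{\nu}},$$
and the Gröbner basis computation costs roughly $O\big(\binom{\nu+d_{reg}}{d_{reg}}^{\omega}\big)$ field operations, where $2\leq\omega\leq 3$ is the linear algebra exponent. Structural syzygies, such as the degree-3 ones identified above, change the series and thus the resulting estimate.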
Our results show that the complexity of our attack is below the estimated security levels of ALTEQ by more than 20 bits for NIST level I (and more for the others), thus the scheme requires re-parametrization for all three NIST security levels.
Time-Averaged Analysis of Selfish Mining in Bitcoin
A Bitcoin miner who owns a sufficient amount of mining power can perform selfish mining to increase his relative revenue. Studies have demonstrated that the time-averaged profit of a selfish miner starts to rise once the mining difficulty level is adjusted in favor of the attacker. Selfish mining is profitable because orphan blocks are not incorporated into the current version of Bitcoin's difficulty adjustment mechanism (DAM). Therefore, it is believed that accounting for orphan blocks in the DAM can make selfish mining unprofitable. In this paper, we disprove this belief by providing a formal analysis of the selfish mining time-averaged profit. We present a precise definition of the orphan blocks that can be incorporated into calculating the next epoch's target, and then introduce two modified versions of the DAM in which both main-chain blocks and orphan blocks are incorporated. We propose two versions of smart intermittent selfish mining, where the first one dominates normal intermittent selfish mining and the second one results in selfish mining profitability under the modified DAMs. Moreover, we present the orphan exclusion attack, which the attacker can use to stop honest miners from reporting orphan blocks. Using combinatorial tools, we analyze the profitability of selfish mining accompanied by the orphan exclusion attack under the modified DAMs. Our results show that even when orphan blocks are considered in the DAM, normal selfish mining can still be profitable; however, the level of profitability under the modified DAMs is significantly lower than that observed under the current version of the Bitcoin DAM.
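For intuition about the baseline quantity under analysis, here is a minimal Monte Carlo sketch of the classical selfish mining state machine (relative revenue in the Eyal-Sirer style, ignoring difficulty adjustment and the modified DAMs studied in the paper); alpha is the attacker's share of mining power and gamma the fraction of honest miners that extend the attacker's branch during a tie:

    import random

    def selfish_revenue(alpha, gamma, blocks=1_000_000, seed=0):
        """Simulate block discoveries; return the attacker's relative revenue."""
        rng = random.Random(seed)
        a_rev = h_rev = 0       # blocks credited to attacker / honest miners
        lead = 0                # private-chain lead of the attacker
        tie = False             # a 1-vs-1 fork race is in progress
        for _ in range(blocks):
            if rng.random() < alpha:             # attacker finds the next block
                if tie:
                    a_rev += 2; tie = False      # attacker wins the race
                else:
                    lead += 1                    # extend the private chain
            else:                                # honest network finds it
                if tie:
                    if rng.random() < gamma:     # built on attacker's branch
                        a_rev += 1; h_rev += 1
                    else:
                        h_rev += 2
                    tie = False
                elif lead == 0:
                    h_rev += 1
                elif lead == 1:
                    lead = 0; tie = True         # attacker publishes; race starts
                elif lead == 2:
                    a_rev += 2; lead = 0         # publish all, win outright
                else:
                    a_rev += 1; lead -= 1        # release one block, stay ahead
        return a_rev / (a_rev + h_rev)

    # e.g. selfish_revenue(0.35, 0.5) exceeds 0.35, the honest-mining share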
Integrating Causality in Messaging Channels
Causal reasoning plays an important role in the comprehension of communication, but it has so far remained unclear how causality should be properly preserved by instant messaging services. To the best of our knowledge, causality preservation is not even treated as a desired security property by most (if not all) existing secure messaging protocols like Signal. This is probably due to the intuition that causality seems already preserved when all received messages are intact and displayed according to their sending order. Our starting point is to notice that this intuition is wrong.
Until now, for messaging channels (where conversations take place), both the proper causality model and the provably secure constructions have been left open. Our work fills this gap, with the goal to facilitate the formal understanding of causality preservation in messaging.
First, we focus on the common two-user secure messaging channels and model the desired causality preservation property. We take the popular Signal protocol as an example and analyze the causality security of its cryptographic core (the double-ratchet mechanism). We show its inadequacy with a simple causality attack, then fix it such that the resulting Signal channel is causality-preserving, even in a strong sense that guarantees post-compromise security. Our fix is actually generic: it can be applied to any bidirectional channel to gain strong causality security.
Then, we model causality security for the so-called message franking channels. Such a channel additionally enables end users to report individual abusive messages to a server (e.g., the service provider), where this server relays the end-to-end-encrypted communication between users. Causality security in this setting further allows the server to retrieve the necessary causal dependencies of each reported message, essentially extending isolated reported messages to message flows. This has great security merit for dispute resolution, because a benign message may be deemed abusive when isolated from its context. As an example, we apply our model to analyze Facebook’s message franking scheme. We show that a malicious user can easily trick Facebook (i.e., the server) into accusing an innocent user. We then fix this issue by amending the underlying message franking channel to preserve the desired causality.
Key Exchange with Tight (Full) Forward Secrecy via Key Confirmation
Weak forward secrecy (wFS) of authenticated key exchange (AKE) protocols is a passive variant of (full) forward secrecy (FS). A natural mechanism to upgrade from wFS to FS is the use of key confirmation messages which compute a message authentication code (MAC) over the transcript. Unfortunately, Gellert, Gjøsteen, Jacobson and Jager (GGJJ, CRYPTO 2023) show that this mechanism inherently incurs a loss proportional to the number of users, leading to an overall non-tight reduction, even if wFS was established using a tight reduction.
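The key-confirmation mechanism itself is lightweight; below is a minimal sketch of ours (the HKDF-style key separation is an assumption, not the GGJJ or paper construction): derive a session key and a confirmation key from the raw weakly-forward-secret key, then exchange MACs over the transcript.

    import hmac, hashlib

    def confirm(raw_secret: bytes, transcript: bytes, role: bytes):
        """Derive a session key and a confirmation key from the raw key, and
        MAC the transcript under the confirmation key. Each party sends its
        tag, verifies the peer's tag (computed with the peer's role label),
        and only then uses the session key."""
        k_sess = hmac.new(raw_secret, b"session" + transcript, hashlib.sha256).digest()
        k_conf = hmac.new(raw_secret, b"confirm" + transcript, hashlib.sha256).digest()
        tag = hmac.new(k_conf, role + transcript, hashlib.sha256).digest()
        return k_sess, tag

    # initiator: k, tag_i = confirm(raw, tr, b"I"); the responder checks tag_i
    # and replies with its own tag computed over role b"R".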
Inspired by GGJJ, we propose a new notion, called one-way verifiable weak forward secrecy (OW-VwFS), and prove that OW-VwFS can be transformed tightly to FS using key confirmation in the random oracle model (ROM). To implement our generic transformation, we show that several tightly wFS AKE protocols additionally satisfy our OW-VwFS notion tightly. We highlight that using the recent lattice-based protocol from Pan, Wagner, and Zeng (CRYPTO 2023) can give us the first lattice-based tightly FS AKE via key confirmation in the classical random oracle model. Besides this, we also obtain a Decisional-Diffie-Hellman-based protocol that is considerably more efficient than the previous ones.
Finally, we lift our study on FS via key confirmation to the quantum random oracle model (QROM). While our security reduction is overall non-tight, it matches the best existing bound for wFS in the QROM (Pan, Wagner, and Zeng, ASIACRYPT 2023), namely, it is square-root- and session-tight. Our analysis is in the multi-challenge setting, which is more realistic than the single-challenge setting of Pan et al.
The NISQ Complexity of Collision Finding
Collision-resistant hashing, a fundamental primitive in modern cryptography, ensures that there is no efficient way to find distinct inputs that produce the same hash value. This property underpins the security of various cryptographic applications, making it crucial to understand its complexity. The complexity of this problem is well understood in the classical setting, where $\Theta(N^{1/2})$ queries are needed to find a collision. However, the advent of quantum computing has introduced new challenges, since quantum adversaries, equipped with the power of quantum queries, can find collisions much more efficiently. Brassard, Høyer, and Tapp, and Aaronson and Shi, established that full-scale quantum adversaries require $\Theta(N^{1/3})$ queries to find a collision, prompting a need for longer hash outputs, which impacts efficiency in terms of the key lengths needed for security.
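The classical $\Theta(N^{1/2})$ bound is the birthday paradox at work; the toy sketch below (our illustration, with a random table standing in for the hash function) finds a collision after roughly $\sqrt{N}$ queries on average:

    import random

    def find_collision(h, domain_size, rng):
        """Query h on random inputs, remembering every output seen, until two
        distinct inputs hash to the same value; ~sqrt(range) queries expected."""
        seen = {}
        queries = 0
        while True:
            x = rng.randrange(domain_size)
            y = h(x)
            queries += 1
            if y in seen and seen[y] != x:
                return seen[y], x, queries
            seen[y] = x

    rng = random.Random(1)
    N = 2**16
    table = [rng.randrange(N) for _ in range(2 * N)]   # toy random function
    print(find_collision(lambda x: table[x], 2 * N, rng))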
This paper explores the implications of quantum attacks in the Noisy-Intermediate Scale Quantum (NISQ) era. In this work, we investigate three different models for NISQ algorithms and achieve tight bounds for all of them:
(1) A hybrid algorithm making adaptive quantum or classical queries but with a limited quantum query budget, or
(2) A quantum algorithm with access to a noisy oracle, subject to a dephasing or depolarizing channel, or
(3) A hybrid algorithm with an upper bound on its maximum quantum depth; i.e., a classical algorithm aided by low-depth quantum circuits.
In fact, our results handle all regimes between NISQ and full-scale quantum computers. Previously, only results for the preimage search problem were known for these models (by Sun and Zheng; Rosmanis; and Chen, Cotler, Huang, and Li), while nothing was known about the collision finding problem.
Along with our main results, we develop an information-theoretic framework for recording query transcripts of quantum-classical algorithms. The main feature of this framework is that it allows us to record queries in two incompatible bases - classical queries in the standard basis and quantum queries in the Fourier basis - consistently. We call the framework the hybrid compressed oracle as it naturally interpolates between the classical way of recording queries and the compressed oracle framework of Zhandry for recording quantum queries.
Key-Recovery Attack on a Public-Key Encryption Related to Planted Clique
Hudoba proposed a public key encryption (PKE) scheme and conjectured its security to be based on the Planted Clique problem. In this note, we show that this scheme is not secure. We do so by devising an efficient algorithm for the even neighbor independent set problem proposed by Hudoba. This leaves open the possibility of building PKE based on Planted Clique.
Stateless Deterministic Multi-Party EdDSA Signatures with Low Communication
EdDSA, standardized by both IRTF and NIST, is a variant of the well-known Schnorr signature based on Edwards curves, and enjoys the benefit of statelessly and deterministically deriving nonces (i.e., it does not require a reliable source of randomness or state continuity). Recently, NIST has called for multi-party threshold EdDSA signatures in a mode that derives nonces statelessly and deterministically and verifies such derivation via zero-knowledge (ZK) proofs. Multi-party full-threshold EdDSA signatures in the dishonest-majority malicious setting have the advantage of a strong security guarantee, and in particular cover the two-party case. However, it is challenging to translate the stateless and deterministic benefit of EdDSA to the multi-party setting, as no fresh randomness is available for the protocol execution.
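For context, single-signer EdDSA derives its nonce as in the sketch below (paraphrasing RFC 8032 for Ed25519); this is the computation a multi-party protocol must emulate without any party learning the others' nonce shares:

    import hashlib

    # Ed25519 group order (RFC 8032)
    L = 2**252 + 27742317777372353535851937790883648493

    def eddsa_nonce(prefix: bytes, message: bytes) -> int:
        """Stateless, deterministic nonce: hash the secret prefix (the second
        half of the expanded private key) with the message, reduce mod L.
        No RNG and no counter to persist; the same (key, message) pair
        always yields the same nonce."""
        h = hashlib.sha512(prefix + message).digest()
        return int.from_bytes(h, "little") % L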
We present the notion of information-theoretic message authenticated codes (IT-MACs) over groups in the multi-verifier setting, and adopt the recent pseudorandom correlation function (PCF) to generate IT-MACs statelessly and deterministically.
Furthermore, we generalize the two-party IT-MACs-based ZK protocol by Baum et al. (Crypto'21) into the multi-verifier setting, which may be of independent interest. Together with multi-verifier extended doubly-authenticated bits (mv-edabits) with errors, we design a multi-verifier zero-knowledge (MVZK) protocol to derive nonces statelessly and deterministically. Building upon the MVZK protocol, we propose a stateless deterministic multi-party EdDSA signature, tolerating all-but-one malicious corruptions. Compared to the state-of-the-art multi-party EdDSA signature by Garillot et al. (Crypto'21), we improve communication cost by a factor of $61\times$, at the cost of increasing computation cost by about $2.25\times$ and requiring three extra rounds.
Security analysis of the iMessage PQ3 protocol
The iMessage PQ3 protocol is an end-to-end encrypted messaging protocol designed for exchanging data in long-lived sessions between two devices. It aims to provide classical and post-quantum confidentiality, with forward secrecy and post-compromise security, as well as classical authentication. Its initial authenticated key exchange is constructed from digital signatures plus elliptic curve Diffie–Hellman and post-quantum key exchanges; to derive per-message keys on an ongoing basis, it employs an adaptation of the Signal double ratchet that includes a post-quantum key encapsulation mechanism. This paper presents the cryptographic details of the PQ3 protocol and gives a reductionist security analysis by adapting the multi-stage key exchange security analysis of Signal by Cohn-Gordon et al. (J. Cryptology, 2020). The analysis shows that PQ3 provides confidentiality with forward secrecy and post-compromise security against both classical and quantum adversaries, in both the initial key exchange and the continuous rekeying phase of the protocol.
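Hybrid designs of this kind typically feed both the classical and the post-quantum shared secrets into a single key derivation step, so that the output remains secure as long as either input does; a minimal sketch of ours (illustrative, not PQ3's actual key schedule):

    import hmac, hashlib

    def combine(ecdh_ss: bytes, kem_ss: bytes, transcript: bytes) -> bytes:
        """HKDF-style extract-then-expand over the concatenated secrets; an
        adversary must recover both the ECDH and the KEM secret to predict
        the output key."""
        prk = hmac.new(b"hybrid-salt", ecdh_ss + kem_ss, hashlib.sha256).digest()
        return hmac.new(prk, transcript + b"\x01", hashlib.sha256).digest()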
On Central Primitives for Quantum Cryptography with Classical Communication
Recent work has introduced the "Quantum-Computation Classical-Communication" (QCCC) setting for cryptography (Chung et al.). There has been some evidence that One Way Puzzles (OWPuzz) are the natural central cryptographic primitive for this setting (Khurana and Tomer). For a primitive to be considered central, it should have several characteristics: it should be well behaved (which for this paper we will take to mean having amplification, combiners, and universal constructions); it should be implied by a wide variety of other primitives; and it should be equivalent to some class of useful primitives. We present combiners, correctness and security amplification, and a universal construction for OWPuzz. Our proof of security amplification uses a new and cleaner construction of EFI from OWPuzz (in comparison to the result of Khurana and Tomer) that generalizes to weak OWPuzz and is the most technically involved section of the paper. It was previously known that OWPuzz are implied by other primitives of interest, including commitments, symmetric key encryption, one way state generators (OWSG), and therefore pseudorandom states (PRS). However, we are able to rule out OWPuzz's equivalence to many of these primitives by showing a black-box separation between general OWPuzz and a restricted class of OWPuzz (those with efficient verification, which we call EV-OWPuzz). We then show that EV-OWPuzz are also implied by most of these primitives, which separates them from OWPuzz as well. This separation also separates extending PRS from highly compressing PRS, answering an open question of Ananth et al.