Papers updated in last 365 days (Page 27 of 2928 results)
PriFHEte: Achieving Full-Privacy in Account-based Cryptocurrencies is Possible
In cryptocurrencies, all transactions are public. For their adoption, it is important that these transactions, while publicly verifiable, do not leak information about the identity and the balances of the transactors.
For UTXO-based cryptocurrencies, there are well-established approaches (e.g., ZCash) that guarantee full privacy to the transactors. Full privacy in UTXO means that each transaction is anonymous within the set of all private transactions ever posted on the blockchain.
In contrast, for account-based cryptocurrencies (e.g., Ethereum) full privacy, that is, privacy within the set of all accounts, seems to be impossible to achieve within the constraints of blockchain transactions (e.g., they have to fit in a block).
Indeed, every approach proposed in the literature achieves only a much weaker privacy guarantee called k-anonymity, where a transactor is private within a set of k account holders.
k-anonymity is achieved by adding k accounts to the transaction, which concretely limits the anonymity guarantee to a very small constant k (e.g., k = 64 for QuisQuis and k = 256 for anonymous Zether), compared to the set of all possible accounts.
In this paper, we propose a completely new approach that does not achieve anonymity by including more accounts in the transaction, but instead makes the transaction itself ``smarter''.
Our key contribution is to provide a mechanism whereby a compact transaction can be used to correctly update all accounts. Intuitively, this guarantees that all accounts are equally likely to be the recipients/sender of such a transaction.
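To make the mechanism concrete, here is a minimal plaintext mock (ours, not code from the paper): Enc/Dec stand in for an additively homomorphic (e.g., FHE) scheme, and the encrypted indicator vector is kept explicit for clarity, whereas PriFHEte's point is precisely that the transaction encoding this update stays compact.

```python
# Mock of the "smart transaction" idea: one transaction updates every
# account identically, so all accounts are equally likely recipients.
def Enc(x): return x          # placeholder for homomorphic encryption
def Dec(c): return c          # placeholder for decryption

def apply_transaction(enc_balances, enc_indicator, amount):
    """Each account i is updated by amount * indicator[i]; the encrypted
    indicator is 1 for the recipient and 0 elsewhere, so the update
    touches every account and leaks nothing about who was paid."""
    return [b + amount * d for b, d in zip(enc_balances, enc_indicator)]

balances = [Enc(100), Enc(50), Enc(0)]
indicator = [Enc(0), Enc(1), Enc(0)]   # recipient is account 1, hidden under encryption
balances = apply_transaction(balances, indicator, 25)
assert [Dec(b) for b in balances] == [100, 75, 0]
```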
We therefore provide PriFHEte, the first protocol that guarantees full privacy in account-based cryptocurrencies.
The contribution of this paper is theoretical.
Our main objective is to demonstrate that achieving full privacy in account-based cryptocurrencies is actually possible.
We see our work as opening the door to new possibilities for anonymous account-based cryptocurrencies.
Nonetheless, in this paper, we also discuss PriFHEte's potential to be developed in practice by leveraging the power of off-chain scalability solutions such as zk rollups.
Fuzzy Private Set Intersection with Large Hyperballs
Traditional private set intersection (PSI) involves a receiver and a sender holding sets X and Y, respectively, with the receiver learning only the intersection X ∩ Y.
We turn our attention to its fuzzy variant, where the receiver holds hyperballs of radius δ in a metric space and the sender holds points.
Representing the hyperballs by their centers, the receiver learns the sender's points w for which there exists a center c such that dist(c, w) ≤ δ with respect to some distance metric.
Previous approaches either require general-purpose multi-party computation (MPC) techniques like garbled circuits or fully homomorphic encryption (FHE), leak details about the sender’s precise inputs, support limited distance metrics, or scale poorly with the hyperballs' volume.
This work presents the first black-box construction for fuzzy PSI (including other variants such as PSI cardinality, labeled PSI, and circuit PSI) that can handle a polynomially large radius and dimension (i.e., a potentially exponentially large volume) in two interaction messages, supporting general distance metrics, without relying on garbled circuits or FHE. The protocol excels in both asymptotic and concrete efficiency compared to existing works. For security, we rely solely on the Decisional Diffie-Hellman (DDH) assumption in the random oracle model.
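For orientation, this is the plaintext functionality the protocol computes under cryptographic protection, sketched here (our illustration, not the paper's code) for the infinity-norm metric:

```python
# Plaintext reference functionality for fuzzy PSI under the infinity norm:
# the receiver holds hyperball centers, the sender holds points, and the
# output is the set of sender points landing inside some hyperball.
def fuzzy_psi_ideal(centers, points, delta):
    def within(c, w):
        return max(abs(ci - wi) for ci, wi in zip(c, w)) <= delta
    return [w for w in points if any(within(c, w) for c in centers)]

centers = [(0, 0), (10, 10)]
points = [(1, -1), (4, 4), (9, 11)]
print(fuzzy_psi_ideal(centers, points, delta=2))  # [(1, -1), (9, 11)]
```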
The Ouroboros of ZK: Why Verifying the Verifier Unlocks Longer-Term ZK Innovation
Verifying the verifier in the context of zero-knowledge proof is an essential part of ensuring the long-term integrity of the zero-knowledge ecosystem. This is vital for both zero-knowledge rollups and also other industrial applications of ZK. In addition to further minimizing the required trust and reducing the trusted computing base (TCB), having a verified verifier opens the door to decentralized proof generation by potentially untrusted parties. We outline a research program and justify the need for more work at the intersection of ZK and formal verification research.
Transmitter Actions for Secure Integrated Sensing and Communication
This work models a secure integrated sensing and communication (ISAC) system as a wiretap channel with action-dependent channel states and channel output feedback, e.g., obtained through reflections. The transmitted message is split into a common and a secure message, both of which must be reliably recovered at the legitimate receiver, while the secure message needs to be kept secret from the eavesdropper. The transmitter actions, such as beamforming vector design, affect the corresponding state at each channel use. The action sequence is modeled to depend on both the transmitted message and channel output feedback. For perfect channel output feedback, the secrecy-distortion regions are provided for physically-degraded and reversely-physically-degraded secure ISAC channels with transmitter actions. The corresponding rate regions when the entire message should be kept secret are also provided. The results are illustrated through characterizing the secrecy-distortion region of a binary example.
Slothful reduction
In the implementation of many public key schemes, there is a need to implement modular arithmetic. Typically this consists of addition, subtraction, multiplication and (occasionally) division with respect to a prime modulus. To resist certain side-channel attacks it helps if implementations are ``constant time''. As the calculations proceed there is potentially a need to reduce the result of an operation to its remainder modulo the prime modulus. However, this reduction can often be delayed, a process known as ``lazy reduction''. The idea is that results do not have to be fully reduced at each step; full reduction takes place only occasionally, hence providing a performance benefit. Here we extend the idea to determine the circumstances under which reduction can be delayed to the very end of a particular public key operation.
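A toy illustration of the lazy-reduction idea (ours, not from the paper): intermediate results are allowed to drift above the modulus and are only reduced when they threaten to overflow the register width, with one full reduction at the very end. Python integers never overflow, so the word-size limit below is artificial, and a real constant-time implementation would also avoid the data-dependent branch shown here.

```python
P = (1 << 31) - 1          # a small Mersenne prime, purely for illustration
WORD_MAX = (1 << 64) - 1   # pretend limbs must stay below this bound

def lazy_mul_acc(pairs):
    acc = 0
    for a, b in pairs:
        acc += a * b                 # no reduction per step ("lazy")
        if acc > WORD_MAX - P * P:   # reduce only when overflow looms
            acc %= P
    return acc % P                   # one full reduction at the very end

pairs = [(123456789, 987654321)] * 1000
assert lazy_mul_acc(pairs) == sum(a * b for a, b in pairs) % P
```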
Group Oblivious Message Retrieval
Anonymous message delivery, as in private communication and privacy-preserving blockchain applications, ought to protect recipient metadata: a message should not be inadvertently linkable to its destination. But how can messages then be delivered to each recipient, without each recipient scanning all messages? Recent work constructed Oblivious Message Retrieval (OMR) protocols that outsource this job to untrusted servers in a privacy-preserving manner.
We consider the case of group messaging, where each message may have multiple recipients (e.g., in a group chat or blockchain transaction). Direct use of prior OMR protocols in the group setting increases the servers' work linearly in the group size, rendering it prohibitively costly for large groups.
We thus devise new protocols where the servers' cost grows very slowly with the group size, while recipients' cost is low and independent of the group size. Our approach uses Fully Homomorphic Encryption and other lattice-based techniques, building on and improving on prior work. The efficient handling of groups is attained by encoding multiple recipient-specific clues into a single polynomial or multilinear function that can be efficiently evaluated under FHE, and via preprocessing and amortization techniques.
We formally study several variants of Group Oblivious Message Retrieval (GOMR) and describe corresponding GOMR protocols. Our implementation and benchmarks show, for parameters of interest, cost reductions of orders of magnitude compared to prior schemes. For example, the servers' cost is modest per million messages scanned, even when each message addresses multiple recipients.
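The clue-packing idea can be sketched in the clear (our illustration, not the paper's code): one clue value per recipient is encoded as an evaluation of a single polynomial over a prime field, so the server later evaluates one object per message rather than one per recipient; in GOMR the evaluation happens under FHE.

```python
P = 65537  # toy prime field

def interpolate(points):
    """Lagrange interpolation: returns f as a callable that passes
    through all (recipient_id, clue) pairs modulo P."""
    def f(x):
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total
    return f

clues = [(2, 11), (5, 22), (9, 33)]   # (recipient id, recipient-specific clue)
f = interpolate(clues)
assert all(f(rid) == clue for rid, clue in clues)
```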
Admissible Parameters for the Crossbred Algorithm and Semi-regular Sequences over Finite Fields
Multivariate public key cryptography (MPKC) is one of the most promising alternatives for building quantum-resistant signature schemes, as evidenced in NIST's call for additional post-quantum signature schemes. The main assumption in MPKC is the hardness of the Multivariate Quadratic (MQ) problem, which asks for a common root of a system of quadratic polynomials over a finite field. Although the Crossbred algorithm is among the most efficient algorithms for solving MQ over small fields, its complexity analysis stands on shaky ground. In particular, it is not clear for what parameters it works and under what assumptions.
In this work, we provide a rigorous analysis of the Crossbred algorithm over any finite field. We provide a complete explanation of the series of admissible parameters proposed in previous literature and explicitly state the regularity assumptions required for its validity. Moreover, we show that the series does not tell the whole story, hence we propose an additional condition for Crossbred to work. Additionally, we define and characterize a notion of regularity for systems over a small field, which is one of the main building blocks in the series of admissible parameters.
ScionFL: Efficient and Robust Secure Quantized Aggregation
Secure aggregation is commonly used in federated learning (FL) to alleviate privacy concerns related to the central aggregator seeing all parameter updates in the clear. Unfortunately, most existing secure aggregation schemes ignore two critical orthogonal research directions that aim to (i) significantly reduce client-server communication and (ii) mitigate the impact of malicious clients. However, both of these additional properties are essential to facilitate cross-device FL with thousands or even millions of (mobile) participants.
In this paper, we unite both research directions by introducing ScionFL, the first secure aggregation framework for FL that operates efficiently on quantized inputs and simultaneously provides robustness against malicious clients. Our framework leverages (novel) multi-party computation (MPC) techniques and supports multiple linear (1-bit) quantization schemes, including ones that utilize the randomized Hadamard transform and Kashin's representation.
Our theoretical results are supported by extensive evaluations.
We show that with no overhead for clients and moderate overhead for the server compared to transferring and processing quantized updates in plaintext, we obtain comparable accuracy for standard FL benchmarks. Moreover, we demonstrate the robustness of our framework against state-of-the-art poisoning attacks.
(Strong) aPAKE Revisited: Capturing Multi-User Security and Salting
Asymmetric Password-Authenticated Key Exchange (aPAKE) protocols, particularly Strong aPAKE (saPAKE), have enjoyed significant attention, both from academia and industry, with the well-known OPAQUE protocol currently undergoing standardization. In (s)aPAKE, a client and a server collaboratively establish a high-entropy key, relying on a previously exchanged password for authentication. A main feature is its resilience against offline and (for saPAKE) precomputation attacks. OPAQUE, as well as most other aPAKE protocols, has been designed and analyzed in a single-user setting, i.e., modelling that only a single user interacts with the server. By the composition framework of UC, security for the actual multi-user setting is then conjectured. As any real-world (s)aPAKE instantiation will need to cater to multiple users, this introduces a dangerous gap in which developers are tasked to extend the single-user protocol securely and in a UC-compliant manner.
In this work, we extend the (s)aPAKE definition to directly model the multi-user setting, and explicitly capture the impact that a server compromise has across user accounts. We show that the currently standardized multi-user version of OPAQUE might not provide the expected security, as it is insecure against offline attacks as soon as the file of one user in the system is compromised. This is due to the use of shared state among different users, which violates the UC composition framework. However, we show that another change introduced in the standardization draft, which also involves shared state, does not compromise security. When extending aPAKE security to the multi-client setting, we notice that the widely used security definition captures significantly weaker security guarantees than what is offered by many protocols. Essentially, the aPAKE definition assumes that the server stores unsalted password hashes, whereas several protocols explicitly use a salt to protect against precomputation attacks. We therefore propose a definitional framework that captures different salting approaches, thus showing that the security gap between aPAKE and saPAKE can be smaller than expected.
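The salting gap in a nutshell (our illustration, not from the paper): an unsalted password file can be attacked with one precomputed dictionary across all users, while per-user salts force the attacker to redo the work per account. Plain SHA-256 stands in here for whatever password hash a real deployment would use (which should be memory-hard).

```python
import hashlib, os

def store_unsalted(pw):
    return hashlib.sha256(pw).hexdigest()            # same pw -> same record

def store_salted(pw):
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + pw).hexdigest()

# Two users with the same password: unsalted records collide (and are
# precomputable offline); salted records differ per user.
assert store_unsalted(b"hunter2") == store_unsalted(b"hunter2")
s1, h1 = store_salted(b"hunter2")
s2, h2 = store_salted(b"hunter2")
assert h1 != h2
```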
Efficient Second-Order Masked Software Implementations of Ascon in Theory and Practice
In this paper, we present efficient protected software implementations of the authenticated cipher Ascon, the recently announced winner of the NIST standardization process for lightweight cryptography.
Our implementations target theoretical and practical security against second-order power analysis attacks.
First, we propose an efficient second-order extension of a previously presented first-order masking of the Keccak S-box that does not require online randomness.
The extension itself is inspired by a previously presented second-order masking of an AND-XOR construction.
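For readers unfamiliar with masked gadgets, below is the textbook ISW AND gadget at second order (three shares) in Python; unlike the randomness-free construction described above, this baseline consumes fresh randomness for every share pair (illustrative code, not from the paper):

```python
import secrets

def isw_and(x_shares, y_shares):
    """Textbook ISW AND gadget for n = 3 shares (second-order secure in
    the probing model). Uses fresh randomness r for every share pair."""
    n = len(x_shares)
    z = [x_shares[i] & y_shares[i] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = secrets.randbits(1)
            z[i] ^= r
            z[j] ^= (r ^ (x_shares[i] & y_shares[j])) ^ (x_shares[j] & y_shares[i])
    return z

def share(bit, n=3):
    s = [secrets.randbits(1) for _ in range(n - 1)]
    return s + [bit ^ s[0] ^ s[1]]

for a in (0, 1):
    for b in (0, 1):
        z = isw_and(share(a), share(b))
        assert z[0] ^ z[1] ^ z[2] == a & b   # shares recombine to a AND b
```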
We then discuss implementation tricks that further improve performance and reduce the chance of unintended combination of shares during the execution of masked software on microprocessors.
This allows us to retain the theoretic protection orders of masking in practice with low performance overhead, which we also confirm via TVLA on ARM microprocessors.
The formal correctness of our designs is additionally verified using Coco on the netlist of a RISC-V IBEX core.
We benchmark our masked software designs on 32-bit ARM and RISC-V microprocessor platforms.
On both platforms, we can perform Ascon-128 authenticated encryption with a throughput of about 300 or 550 cycles/byte when operating on 2 or 3 shares.
When utilizing a leveled implementation technique, the throughput of our masked implementations generally increases to about 90 cycles/byte.
We publish our masked software implementations together with a generic software framework for evaluating performance and side-channel resistance of various masked cryptographic implementations.
Simultaneous Haar Indistinguishability with Applications to Unclonable Cryptography
Unclonable cryptography is concerned with leveraging the no-cloning principle to build cryptographic primitives that are otherwise impossible to achieve classically. Understanding the feasibility of unclonable encryption, one of the key unclonable primitives, satisfying indistinguishability security in the plain model has been a major open question in the area. So far, the existing constructions of unclonable encryption are either in the quantum random oracle model or are based on new conjectures.
We present a new approach to unclonable encryption via a reduction to a novel question about nonlocal quantum state discrimination: how well can non-communicating -- but entangled -- players distinguish between different distributions over quantum states? We call this task simultaneous state indistinguishability. Our main technical result is showing that the players cannot distinguish between each player receiving independently-chosen Haar random states versus all players receiving the same Haar random state.
We leverage this result to present the first construction of unclonable encryption satisfying indistinguishability security, with quantum decryption keys, in the plain model. We also show other implications to single-decryptor encryption and leakage-resilient secret sharing.
Reducing the CRS Size in Registered ABE Systems
Attribute-based encryption (ABE) is a generalization of public-key encryption that enables fine-grained access control to encrypted data. In (ciphertext-policy) ABE, a central trusted authority issues decryption keys for attributes x to users. In turn, ciphertexts are associated with a decryption policy P. Decryption succeeds and recovers the encrypted message whenever P(x) = 1. Recently, Hohenberger, Lu, Waters, and Wu (Eurocrypt 2023) introduced the notion of registered ABE, which is an ABE scheme without a trusted central authority. Instead, users generate their own public/secret keys (just like in public-key encryption) and then register their keys (and attributes) with a key curator. The key curator is a transparent and untrusted entity.
Currently, the best pairing-based registered ABE schemes support monotone Boolean formulas and an a priori bounded number of users L. A major limitation of existing schemes is that they require a (structured) common reference string (CRS) of size L^2 * |U|, where |U| is the size of the attribute universe. In other words, the size of the CRS scales quadratically with the number of users and multiplicatively with the size of the attribute universe. The large CRS makes these schemes expensive in practice and limited to a small number of users and a small universe of attributes.
In this work, we give two ways to reduce the CRS size in pairing-based registered ABE schemes. First, we introduce a combinatoric technique based on progression-free sets that enables registered ABE for the same class of policies but with a CRS whose size is sub-quadratic in the number of users. Asymptotically, we obtain a scheme where the CRS size is nearly linear in the number of users (i.e., L^{1+o(1)}). If we take a more concrete-efficiency-oriented focus, we can instantiate our framework to obtain a construction with a substantially smaller CRS. For instance, in a scheme for 100,000 users, our approach reduces the CRS by a large factor compared to previous approaches (and without incurring any overhead in encryption/decryption time). Our second approach for reducing the CRS size is to rely on a partitioning-based argument when arguing security of the registered ABE scheme; previous approaches took a dual-system approach. Using a partitioning-based argument yields a registered ABE scheme where the size of the CRS is independent of the size of the attribute universe. The cost is that the resulting scheme satisfies a weaker notion of static security. Our techniques for reducing the CRS size can be combined, and taken together, we obtain a pairing-based registered ABE scheme that supports monotone Boolean formulas with a CRS whose size is nearly linear in the number of users and independent of the size of the attribute universe. Notably, this is the first pairing-based registered ABE scheme that does not require imposing a bound on the size of the attribute universe during setup time.
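For intuition, a progression-free set contains no three elements a, b, c with a + c = 2b; loosely, indexing CRS slots by such a set keeps "cross terms" i + j from ever colliding with "diagonal terms" 2k. The greedy construction below (our sketch; far from the Behrend-style density the asymptotics need) shows the combinatorial object:

```python
def greedy_progression_free(limit):
    """Greedily build a set with no 3-term arithmetic progression."""
    chosen = []
    for x in range(limit):  # x is always the largest candidate, so it only
        # needs checking as the third AP element: reject if x = 2b - a.
        if all(a + x != 2 * b for a in chosen for b in chosen):
            chosen.append(x)
    return chosen

print(greedy_progression_free(20))  # [0, 1, 3, 4, 9, 10, 12, 13]
```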
As an additional application, we also show how to apply our techniques based on progression-free sets to the batch argument (BARG) for NP scheme of Waters and Wu (Crypto 2022) to obtain a scheme with a nearly-linear CRS without needing to rely on non-black-box bootstrapping techniques.
PERK: Compact Signature Scheme Based on a New Variant of the Permuted Kernel Problem
In this work we introduce PERK, a compact digital signature scheme based on the hardness of a new variant of the Permuted Kernel Problem (PKP). PERK achieves the smallest signature sizes of any PKP-based scheme at NIST category I security, with 6 kB, while obtaining competitive signing and verification timings. PERK also compares well with the general state of the art. To substantiate those claims we provide an optimized constant-time AVX2 implementation, a detailed performance analysis and different size-performance trade-offs.
Technically our scheme is based on a Zero-Knowledge Proof of Knowledge following the MPC-in-the-Head paradigm and employing the Fiat-Shamir transform. We provide comprehensive security proofs, ensuring EUF-CMA security for PERK in the random oracle model. The efficiency of PERK greatly stems from our particular choice of PKP variant which allows for an application of the challenge-space amplification technique due to Bidoux-Gaborit (C2SI 2023).
Our second main contribution is an in-depth study of the hardness of the introduced problem variant. First, we establish a link between the hardness of our problem variant and the hardness of standard PKP. Then, we initiate an in-depth study of the concrete complexity to solve our variant. We present a novel algorithm which outperforms previous approaches for certain parameter regimes. However, the proximity of our problem variant to the standard variant can be controlled via a specific parameter. This enables us to effectively safeguard against our new attack and potential future extensions by a choice of parameters that ensures only a slight variation from standard PKP.
Leakage-Tolerant Circuits
A leakage-resilient circuit for a function f is a randomized Boolean circuit C mapping a randomized encoding of an input x to an encoding of y = f(x), such that applying any leakage function from a class L to the wires of C reveals essentially nothing about x. A leakage-tolerant circuit achieves the stronger guarantee that even when x and y are not protected by any encoding, the output of the leakage can be simulated by applying some function from L to x and y alone. Thus, C is as secure as an ideal hardware implementation of f with respect to leakage from L.
Leakage-resilient circuits were constructed for low-complexity leakage classes L, including (bounded-length output) AC0 functions, parities, and functions with bounded communication complexity. In contrast, leakage-tolerant circuits were only known for the simple case of probing leakage, where the leakage function outputs the values of a bounded number of wires in C.
We initiate a systematic study of leakage-tolerant circuits for natural classes of global leakage functions, obtaining the following main results.
Every circuit C for f can be efficiently compiled into an L-tolerant circuit C' for f, where L includes all leakage functions that output either parities or disjunctions (alternatively, conjunctions) of any number of wires or their negations. In the case of parities, our simulator runs in super-polynomial time. We provide partial evidence that this may be inherent.
We present a general transformation from (stateless) leakage-tolerant circuits to stateful leakage-resilient circuits. Using this transformation, we obtain the first constructions of stateful t-leakage-resilient circuits that tolerate continuous parity/disjunction/conjunction leakage and in which the circuit size grows sub-quadratically with t. Interestingly, here we can obtain polynomial-time simulation even in the case of parities.
The Art of Bonsai: How Well-Shaped Trees Improve the Communication Cost of MLS
Messaging Layer Security (MLS) is a Secure Group Messaging protocol that uses for its handshake a binary tree – called a Ratchet Tree – in order to reach a communication cost logarithmic in the number of group members. This Ratchet Tree represents users as its leaves; therefore any change in the group membership results in adding or removing a leaf associated with that user. MLS consequently implements what we call a tree evolution mechanism, consisting of a user add algorithm – determining where to insert a new leaf – and a tree expansion process – stating how to increase the size of the tree when no space is available for a new user.
The tree evolution mechanism currently used by MLS is designed so that it naturally left-balances the Ratchet Tree. However, such a Ratchet Tree structure is often quite inefficient in terms of communication cost. Furthermore, one may wonder whether the binary tree used in that Ratchet Tree has a degree optimized for the features of a handshake in MLS – called a commit.
Therefore, we study in this paper how to improve the communication cost of a commit in MLS by considering both the tree evolution mechanism and the tree degree used for the Ratchet Tree. To do so, we determine the tree structure that optimizes the communication cost, and we propose optimized algorithms for both the user add and tree expansion processes that allow the tree to remain close to that optimal structure, and thus keep the communication cost as close to optimal as possible.
We also determine the Ratchet Tree degree that is best suited to a given set of parameters induced by the encryption scheme used by MLS. This study shows that when using classical (i.e. pre-quantum) ciphersuites, a binary tree is indeed the most appropriate Ratchet Tree; nevertheless, when it comes to post-quantum algorithms, it generally becomes more interesting to use a ternary tree instead.
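A toy cost model makes the degree trade-off visible (our sketch, under the simplifying assumption that a commit carries one fresh public key and d - 1 ciphertexts per level of the update path; the paper's model is more detailed, and the byte sizes below are illustrative placeholders, not the exact ciphersuite numbers):

```python
import math

def commit_bytes(n, d, pk_bytes, ct_bytes):
    """Bytes for one commit in a degree-d tree with n leaves."""
    depth = math.ceil(math.log(n, d))
    return depth * (pk_bytes + (d - 1) * ct_bytes)

n = 1 << 15
for name, pk, ct in [("classical (ECDH-like)", 32, 32),
                     ("post-quantum (KEM-like)", 1184, 1088)]:
    for d in (2, 3, 4):
        print(name, "d =", d, "->", commit_bytes(n, d, pk, ct), "bytes")
# Even in this crude model, d = 3 edges out d = 2 with the large
# post-quantum ciphertexts, while classically d = 2 is (at least) as good.
```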
Our improvements do not change the TreeKEM protocol and are easy to implement. With parameter sets corresponding to practical ciphersuites, they reduce TreeKEM's communication cost by 5 to 10%. In particular, the 10% gain appears in the post-quantum setting – when both an optimized tree evolution mechanism and a ternary tree are necessary – which is precisely the context where any optimization of the protocol's communication cost is welcome, due to the large bandwidth consumed by post-quantum encrypted communication.
MQ on my Mind: Post-Quantum Signatures from the Non-Structured Multivariate Quadratic Problem
This paper presents MQ on my Mind (MQOM), a digital signature scheme based on the difficulty of solving multivariate systems of quadratic equations (the MQ problem). MQOM has been submitted to the NIST call for additional post-quantum signature schemes. MQOM relies on the MPC-in-the-Head (MPCitH) paradigm to build a zero-knowledge proof of knowledge (ZK-PoK) for MQ which is then turned into a signature scheme through the Fiat-Shamir heuristic. The underlying MQ problem is non-structured in the sense that the system of quadratic equations defining an instance is drawn uniformly at random. This is one of the hardest and most studied problems from multivariate cryptography, which hence constitutes a conservative choice to build candidate post-quantum cryptosystems. For the efficient application of the MPCitH paradigm, we design a specific MPC protocol to verify the solution of an MQ instance. Compared to other multivariate signature schemes based on non-structured MQ instances, MQOM achieves the shortest signatures (6.3-7.8 KB) while keeping very short public keys (a few dozen bytes). Other multivariate signature schemes are based on structured MQ problems (less conservative) which either have large public keys (e.g. UOV) or use recently proposed variants of these MQ problems (e.g. MAYO).
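In its simplest form, a non-structured MQ instance is just a uniformly random system of quadratic polynomials, and the check the MPC protocol must perform is a plain evaluation. A minimal sketch (ours, with a planted solution for testing):

```python
import secrets

q, n, m = 251, 8, 8
x = [secrets.randbelow(q) for _ in range(n)]          # planted solution
A = [[[secrets.randbelow(q) for _ in range(n)] for _ in range(n)] for _ in range(m)]
b = [[secrets.randbelow(q) for _ in range(n)] for _ in range(m)]

def eval_poly(k, v):
    """Evaluate the k-th random quadratic polynomial at v over GF(q)."""
    quad = sum(A[k][i][j] * v[i] * v[j] for i in range(n) for j in range(n))
    lin = sum(b[k][i] * v[i] for i in range(n))
    return (quad + lin) % q

c = [eval_poly(k, x) for k in range(m)]   # constants chosen so x is a root
assert all(eval_poly(k, x) == c[k] for k in range(m))
```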
Improved Conditional Cube Attacks on Ascon AEADs in Nonce-Respecting Settings -- with a Break-Fix Strategy
The best-known distinguisher on 7-round Ascon-128 and Ascon-128a AEAD uses a 60-dimensional cube where the nonce bits are set to be equal in the third and fourth rows of the Ascon state during initialization (Rohit et al. ToSC 2021/1).
It was not known how to use this distinguisher to mount key-recovery attacks.
In this paper, we investigate this problem using a new strategy called ``break-fix'' for the conditional cube attack. The idea is to introduce slightly-modified cubes which increase the degrees of 7-round output bits to be more than 59 (break phase) and then find key conditions which can bring the degree back to 59 (fix phase).
Using this idea, key-recovery attacks on 7-round Ascon-128, Ascon-128a and Ascon-80pq are proposed.
The attacks have better time/memory complexities than the existing attacks, and in some cases improve the weak-key attacks as well.
A Deniability Analysis of Signal's Initial Handshake PQXDH
Many people use messaging apps such as Signal to exercise their right to private communication. To cope with the advent of quantum computing, Signal employs a new initial handshake protocol called PQXDH for post-quantum confidentiality, yet keeps guarantees of authenticity and deniability classical. Compared to its predecessor X3DH, PQXDH includes a KEM encapsulation and a signature on the ephemeral key. In this work we show that PQXDH does not meet the same deniability guarantees as X3DH due to the signature on the ephemeral key. Our analysis relies on plaintext awareness of the KEM, which Signal's implementation of PQXDH does not provide. As for X3DH, both parties (initiator and responder) obtain different deniability guarantees due to the asymmetry of the protocol.
For our analysis of PQXDH, we introduce a new model for deniability of key exchange that allows a more fine-grained analysis. Our deniability model picks up on the ideas of prior work and facilitates new combinations of deniability notions, such as deniability against malicious adversaries in the big brother model, i.e. where the distinguisher knows all secret keys. Our model may be of independent interest.
One vector to rule them all: Key recovery from one vector in UOV schemes
Unbalanced Oil and Vinegar is a multivariate signature scheme that was introduced in 1999.
Most multivariate candidates for signature schemes at NIST's PQC standardization process are either based on UOV or closely related to it.
The UOV trapdoor is a secret subspace, the "oil subspace".
We show how to recover an equivalent secret key from the knowledge of a single vector in the oil subspace in any characteristic.
Previously, the reconciliation attack was sped up by adding some bilinear equations to the subsequent computations, and was able to conclude only after two such vectors were found.
We show here that these bilinear equations contain enough information to dismiss the quadratic equations and retrieve the secret subspace with linear algebra for practical parametrizations of UOV, in at most 15 seconds for modern instantiations.
This proves that the security of the UOV scheme lies in the complexity of finding a single vector in the oil space.
In addition, we deduce a key recovery attack from any forgery attack by applying a corollary of our main result.
We show how to extend this result to schemes related to UOV, such as MAYO and VOX.
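Why one oil vector is so valuable, in a toy check (ours, not the paper's code): for a quadratic map P the polar form P'(u, v) = P(u + v) - P(u) - P(v) is bilinear, and it vanishes on pairs of oil vectors, so a single known oil vector o turns membership in the oil space into linear conditions P'_k(o, x) = 0.

```python
import secrets

q, n = 31, 6
o = [secrets.randbelow(q) for _ in range(n)]   # stand-in for a known oil vector
A = [[secrets.randbelow(q) for _ in range(n)] for _ in range(n)]  # one public matrix

def P(v):   # one coordinate of the public map: v^T A v over GF(q)
    return sum(A[i][j] * v[i] * v[j] for i in range(n) for j in range(n)) % q

def polar(u, v):
    w = [(ui + vi) % q for ui, vi in zip(u, v)]
    return (P(w) - P(u) - P(v)) % q

# Bilinearity check: with o fixed, x -> polar(o, x) is linear, so each
# public polynomial contributes one linear constraint on oil candidates.
x = [secrets.randbelow(q) for _ in range(n)]
y = [secrets.randbelow(q) for _ in range(n)]
s = [(xi + yi) % q for xi, yi in zip(x, y)]
assert polar(o, s) == (polar(o, x) + polar(o, y)) % q
```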
Constant Input Attribute Based (and Predicate) Encryption from Evasive and Tensor LWE
Constructing advanced cryptographic primitives such as obfuscation or broadcast encryption from standard hardness assumptions in the post-quantum regime is an important area of research, which has met with limited success despite significant effort. It is therefore extremely important to find new, simple-to-state assumptions in this regime which can be used to fill this gap. An important step was taken recently by Wee (Eurocrypt '22) who identified two new assumptions from lattices, namely evasive LWE and tensor LWE, and used these to construct broadcast encryption and ciphertext-policy attribute-based encryption for circuits with optimal parameters. Independently, Tsabary formulated a similar assumption and used it to construct witness encryption (Crypto '22). Following Wee's work, Vaikuntanathan, Wee and Wichs independently provided a construction of witness encryption (Asiacrypt '22).
In this work, we advance this line of research by providing the first construction of multi-input attribute-based encryption (miABE) for the function class NC^1 for any constant arity from evasive LWE. Our construction can be extended to support the function class P by using evasive LWE and a suitable strengthening of tensor LWE. In more detail, our construction supports k encryptors, for any constant k, where each encryptor uses the master secret key to encode its input x_i, the key generator computes a key for a function f, and the decryptor can recover the message if and only if f(x_1, ..., x_k) = 1. The only known construction of miABE, by Agrawal, Yadav and Yamada (Crypto '22), supports arity 2 and relies on pairings in the generic group model (or with a non-standard knowledge assumption) in addition to LWE. Furthermore, it is completely unclear how to go beyond arity 2 using this approach due to the reliance on pairings.
Using a compiler from Agrawal, Yadav and Yamada (Crypto '22), our miABE can be upgraded to multi-input predicate encryption for the same arity and function class. Thus, we obtain the first constructions of constant-arity predicate and attribute-based encryption for a generalized class such as NC^1 or P from simple assumptions that may be conjectured post-quantum secure. Along the way, we show that the tensor LWE assumption can be reduced to standard LWE in an important special case which was not known before. This adds confidence to the plausibility of the assumption and may be of wider interest.
BGJ15 Revisited: Sieving with Streamed Memory Access
The focus of this paper is to tackle the issue of memory access within sieving algorithms for lattice problems. We have conducted an in-depth analysis of an optimized BGJ sieve (Becker-Gama-Joux 2015), and our findings suggest that its inherent structure is significantly more memory-efficient compared to the asymptotically fastest BDGL sieve (Becker-Ducas-Gama-Laarhoven 2016). Specifically, it requires mostly streamed (non-random) main memory accesses for the execution of an n-dimensional sieving. We also provide evidence that the time complexity of this refined BGJ sieve could potentially be 2^{0.292n+o(n)}, or at least something remarkably close to it. In fact, it outperforms the BDGL sieve in all dimensions that are practically achievable. We hope that this study will contribute to the resolution of the ongoing debate regarding the measurement of RAM access overhead in large-scale, sieving-based lattice attacks.
The concept above is also supported by our implementation. We provide a CPU-based implementation of the refined BGJ sieve, highly efficient in terms of both time and memory, within an optimized sieving framework. This implementation results in approximately 40% savings in RAM usage and is substantially more efficient in terms of gate count compared to the previous 4-GPU implementation (Ducas-Stevens-Woerden 2021). Notably, we have successfully solved the 183-dimensional SVP Darmstadt Challenge in 30 days using a 112-core server and approximately 0.87TB of RAM. The majority of previous sieving-based SVP computations relied on the HK3-sieve (Herold-Kirshanova 2017), hence this implementation could offer further insights into the behavior of these asymptotically faster sieving algorithms when applied to large-scale problems. Moreover, our refined cost estimation of SVP based on this implementation suggests that some of the NIST PQC candidates, such as Falcon-512, are unlikely to achieve NIST's security requirements.
An update on Keccak performance on ARMv7-M
This note provides an update on Keccak performance on the ARMv7-M processors. Starting from the XKCP implementation, we have applied architecture-specific optimizations that have yielded a performance gain of up to 21% for the largest permutation instance.
Provable Security for PKI Schemes
PKI schemes provide a critical foundation for applied cryptographic protocols.
However, there are no rigorous security specifications for realistic PKI schemes, and therefore, no PKI scheme has been proven secure.
Cryptographic systems that use PKI are analyzed by adopting overly simplified models of the PKI, often, simply assuming securely-distributed public keys. This is problematic given the extensive reliance on PKI, the multiple failures of PKI systems, and the complexity of both proposed and deployed systems, which involve complex requirements and models.
We present game-based security specifications for PKI schemes, and analyze important and widely deployed PKIs: PKIX and two variants of Certificate Transparency (CT). All of these PKIs are based on the X.509v3 standard and its CRL revocation mechanism. Our analysis identified a few subtle vulnerabilities, and provides reduction-based proofs showing that the PKIs ensure specific requirements under specific models (assumptions).
To our knowledge, this is the first reduction-based proof of security for a realistic PKI scheme, e.g., supporting certificate chains.
A Detailed Analysis of Fiat-Shamir with Aborts
Lyubashevsky's signatures are based on the Fiat-Shamir with Aborts paradigm. It transforms an interactive identification protocol that has a non-negligible probability of aborting into a signature by repeating executions until a loop iteration does not trigger an abort. Interaction is removed by replacing the challenge of the verifier with the evaluation of a hash function, modeled as a random oracle in the analysis. The access to the random oracle is classical (ROM), resp. quantum (QROM), if one is interested in security against classical, resp. quantum, adversaries. Most analyses in the literature consider a setting with a bounded number of aborts (i.e., signing fails if no signature is output within a prescribed number of loop iterations), while practical instantiations (e.g., Dilithium) run until a signature is output (i.e., loop iterations are unbounded).
In this work, we emphasize that combining random oracles with loop iterations induces numerous technicalities for analyzing correctness, run-time, and security of the resulting schemes, both in the bounded and unbounded case. As a first contribution, we shed light on errors in all existing analyses. We then provide two detailed analyses in the QROM for the bounded case, adapted from Kiltz, Lyubashevsky, and Schaffner [EUROCRYPT'18] and from Grilo, Hövelmanns, Hülsing, and Majenz [ASIACRYPT'21]. In the process, we prove that the underlying Sigma-protocol achieves a stronger zero-knowledge property than usually considered for Sigma-protocols with aborts, which enables a corrected analysis. A further contribution is a detailed analysis in the case of unbounded aborts, the latter inducing several additional subtleties.
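The loop in question has a simple skeleton, sketched here as a toy over the integers (our illustration, not an actual lattice scheme; a real scheme derives the challenge from a one-way commitment to the mask y, never from y itself):

```python
import hashlib, secrets

MAXC, S, GAMMA = 100, 3, 10_000    # challenge bound, toy secret, mask range
B = MAXC * S                       # upper bound on |c * s|

def H(data):                       # random-oracle stand-in
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % MAXC

def sign(msg):
    iterations = 0
    while True:                                    # unbounded loop, as in Dilithium
        iterations += 1
        y = secrets.randbelow(2 * GAMMA + 1) - GAMMA   # uniform in [-GAMMA, GAMMA]
        c = H(str(y).encode() + msg)               # toy challenge derivation
        z = y + c * S
        if abs(z) <= GAMMA - B:                    # rejection step: abort otherwise;
            return (c, z, iterations)              # accepted z is key-independent

print(sign(b"hello"))
```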
Quantum Oblivious LWE Sampling and Insecurity of Standard Model Lattice-Based SNARKs
The Learning With Errors (LWE) problem asks to find s from an input of the form (A, b = As + e), for a vector e that has small-magnitude entries. In this work, we do not focus on solving LWE but on the task of sampling LWE instances. As these are extremely sparse in their range, it may seem plausible that the only way to proceed is to first create s and e and then set b = As + e. In particular, such an instance sampler knows the solution. This raises the question whether it is possible to obliviously sample (A, b = As + e), namely, without knowing the underlying s. A variant of the assumption that oblivious LWE sampling is hard has been used in a series of works to analyze the security of candidate constructions of Succinct Non-interactive Arguments of Knowledge (SNARKs). As the assumption is related to LWE, these SNARKs have been conjectured to be secure in the presence of quantum adversaries.
Our main result is a quantum polynomial-time algorithm that samples well-distributed LWE instances while provably not knowing the solution, under the assumption that LWE is hard. Moreover, the approach works for a vast range of parametrizations, including those used in the above-mentioned SNARKs. This invalidates the assumptions used in their security analyses, although it does not yield attacks against the constructions themselves.
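For contrast, the "trivial" sampler the result circumvents looks like this (our sketch): whoever creates an instance this way necessarily holds the secret.

```python
import secrets

q, n, m = 3329, 16, 32

def trivial_lwe_sampler():
    """Create an LWE instance (A, b = As + e) the obvious way: the
    sampler picks s itself, so it knows the solution by construction."""
    A = [[secrets.randbelow(q) for _ in range(n)] for _ in range(m)]
    s = [secrets.randbelow(q) for _ in range(n)]
    e = [secrets.randbelow(5) - 2 for _ in range(m)]   # small-magnitude error
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return (A, b), s

(A, b), s = trivial_lwe_sampler()
# Oblivious sampling means producing (A, b) of this shape *without* s.
```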
Quantum Key-Revocable Dual-Regev Encryption, Revisited
Quantum information can be used to achieve novel cryptographic primitives that are impossible to achieve classically. A recent work by Ananth, Poremba, Vaikuntanathan (TCC 2023) focuses on equipping the dual-Regev encryption scheme, introduced by Gentry, Peikert, Vaikuntanathan (STOC 2008), with key revocation capabilities using quantum information. They further showed that the key-revocable dual-Regev scheme implies the existence of fully homomorphic encryption and pseudorandom functions, with both of them also equipped with key revocation capabilities. Unfortunately, they were only able to prove the security of their schemes based on new conjectures and left open the problem of basing the security of key revocable dual-Regev encryption on well-studied assumptions.
In this work, we resolve this open problem. Assuming polynomial hardness of learning with errors (over sub-exponential modulus), we show that key-revocable dual-Regev encryption is secure. As a consequence, for the first time, we achieve the following results:
1. Key-revocable public-key encryption and key-revocable fully-homomorphic encryption satisfying classical revocation security and based on polynomial hardness of learning with errors. Prior works either did not achieve classical revocation or were based on sub-exponential hardness of learning with errors.
2. Key-revocable pseudorandom functions satisfying classical revocation from the polynomial hardness of learning with errors. Prior works relied upon unproven conjectures.
Updatable Encryption from Group Actions
Updatable Encryption (UE) allows rotating the encryption key in the outsourced storage setting while minimizing the bandwidth used. The server can update ciphertexts to the new key using a token provided by the client. UE schemes should provide strong confidentiality guarantees against an adversary that can corrupt keys and tokens.
This paper studies the problem of building UE in the group action framework. We introduce a new notion of Mappable Effective Group Action (MEGA) and show that we can build CCA-secure UE from a MEGA by generalizing the SHINE construction of Boyd et al. at Crypto 2020.
Unfortunately, we do not know how to instantiate this new construction in the post-quantum setting. Doing so would solve the open problem of building a CCA secure post-quantum UE scheme.
Isogeny-based group actions are the most studied post-quantum group actions. Unfortunately, the resulting group actions are not mappable. We show that we can still build UE from isogenies by introducing a new algebraic structure called Effective Triple Orbital Group Action (ETOGA). We prove that UE can be built from an ETOGA and show how to instantiate this abstract structure from isogeny-based group actions. This new construction solves two open problems in ciphertext-independent post-quantum UE.
First, this is the first post-quantum UE scheme that supports an unbounded number of updates. Second, our isogeny-based UE scheme is the first post-quantum UE scheme not based on lattices. The security of this new scheme holds under an extended version of the weak pseudorandomness of the standard isogeny group action.
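The exponentiation-based core of SHINE-style UE can be sketched over a toy multiplicative group (our illustration; tiny parameters, no message encoding or integrity, purely to show the update mechanism): the token for rotating from key k1 to k2 is k2 * k1^{-1} modulo the group order, and the server rotates a ciphertext with one exponentiation, without ever decrypting.

```python
p = 1019                    # a safe prime, so the group order is 2 * 509
order = p - 1

def encrypt(m, k):       return pow(m, k, p)
def token(k_old, k_new): return (k_new * pow(k_old, -1, order)) % order
def update(ct, delta):   return pow(ct, delta, p)
def decrypt(ct, k):      return pow(ct, pow(k, -1, order), p)

m, k1, k2 = 123, 3, 7       # keys must be coprime to the group order
ct1 = encrypt(m, k1)
ct2 = update(ct1, token(k1, k2))   # server-side rotation
assert decrypt(ct1, k1) == m and decrypt(ct2, k2) == m
```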
Pianist: Scalable zkRollups via Fully Distributed Zero-Knowledge Proofs
In the past decade, blockchains have seen various financial and technological innovations, with cryptocurrencies reaching a market cap of over 1 trillion dollars. However, scalability is one of the key issues hindering the deployment of blockchains in many applications. To improve the throughput of the transactions, zkRollups and zkEVM techniques using the cryptographic primitive of zero-knowledge proofs (ZKPs) have been proposed and many companies are adopting these technologies in the layer-2 solutions. However, in these technologies, the proof generation of the ZKP is the bottleneck and the companies have to deploy powerful machines with TBs of memory to batch a large number of transactions in a ZKP.
In this work, we improve the scalability of these techniques by proposing new schemes of fully distributed ZKPs. Our schemes can improve the efficiency and the scalability of ZKPs using multiple machines, while the communication among the machines is minimal. With our schemes, the ZKP generation can be distributed to multiple participants in a model similar to the mining pools. Our protocols are based on Plonk, an efficient zero-knowledge proof system with a universal trusted setup. The first protocol is for data-parallel circuits.
For a computation of M sub-circuits of size T each, using M machines, the prover time is O(T log T + M log M), while the prover time of the original Plonk on a single machine is O(MT log(MT)). Our protocol incurs only O(1) communication per machine, and the proof size and verifier time are both O(1), the same as the original Plonk. Moreover, we show that with minor modifications, our second protocol can support general circuits with arbitrary connections while preserving the same proving, verifying, and communication complexity. The technique is general and may be of independent interest for other applications of ZKP.
We implement Pianist (Plonk vIA uNlimited dISTribution), a fully distributed ZKP system using our protocols. Pianist can generate the proof for 8192 transactions in 313 seconds on 64 machines. This improves the scalability of the Plonk scheme by a factor of 64. The communication per machine is only 2.1 KB, regardless of the number of machines and the size of the circuit. The proof size is 2.2 KB and the verifier time is 3.5 ms. We further show that Pianist achieves similar improvements for general circuits. On a large randomly generated circuit, it takes only 5s to generate the proof using 32 machines, 24.2 times faster than Plonk on a single machine.
Secret Sharing with Certified Deletion
Secret sharing allows a user to split a secret into many shares so that the secret can be recovered if, and only if, an authorized set of shares is collected. Although secret sharing typically does not require any computational hardness assumptions, its security does require that an adversary cannot collect an authorized set of shares. Over long periods of time where an adversary can benefit from multiple data breaches, this may become an unrealistic assumption.
We initiate the systematic study of secret sharing with certified deletion in order to achieve security even against an adversary that eventually collects an authorized set of shares. In secret sharing with certified deletion, a (classical) secret is split into quantum shares that can be destroyed in a manner verifiable by the dealer.
We put forth two natural definitions of security. No-signaling security roughly requires that if multiple non-communicating adversaries delete sufficiently many shares, then their combined view contains negligible information about the secret, even if the total set of corrupted parties forms an authorized set. Adaptive security requires privacy of the secret against an adversary that can continuously and adaptively corrupt new shares and delete previously-corrupted shares, as long as the total set of corrupted shares minus deleted shares remains unauthorized.
Next, we show that these security definitions are achievable: we show how to construct (i) a secret sharing scheme with no-signaling certified deletion for any monotone access structure, and (ii) a threshold secret sharing scheme with adaptive certified deletion. Our first construction uses Bartusek and Khurana's (CRYPTO 2023) 2-out-of-2 secret sharing scheme with certified deletion as a building block, while our second construction is built from scratch and requires several new technical ideas. For example, we significantly generalize the ``XOR extractor'' of Agarwal, Bartusek, Khurana, and Kumar (EUROCRYPT 2023) in order to obtain better seedless extraction from certain quantum sources of entropy, and show how polynomial interpolation can double as a high-rate randomness extractor in our context of threshold sharing with certified deletion.
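The classical skeleton underneath the threshold construction is Shamir sharing via polynomial interpolation, sketched here for reference (our code; the paper's contribution is layering certified deletion on top of this kind of scheme):

```python
import secrets

P = 2**61 - 1   # prime field (2^61 - 1 is a Mersenne prime)

def share(secret, t, n):
    """Split secret into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

sh = share(42, t=3, n=5)
assert reconstruct(sh[:3]) == 42
```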
Secure Multiparty Computation in the Presence of Covert Adaptive Adversaries
We design a new MPC protocol for arithmetic circuits secure against erasure-free covert adaptive adversaries with deterrence 1/2. The new MPC protocol has the same asymptotic communication cost, number of PKE operations and number of exponentiation operations as the most efficient MPC protocol for arithmetic circuits secure against covert static adversaries. That means the new MPC protocol improves security from covert static to covert adaptive adversaries almost for free. For MPC problems where the number of parties n is much larger than the number of multiplication gates M, the new MPC protocol asymptotically improves communication complexity over the most efficient MPC protocol for arithmetic circuits secure against erasure-free active adaptive adversaries.
Proof of Stake and Activity: Rewarding On-Chain Activity Through Consensus
We introduce a novel consensus protocol for blockchain, called Proof of Stake and Activity (PoSA), which augments traditional Proof of Stake methods by integrating a unique Proof of Activity system. PoSA offers a compelling economic model that promotes decentralization by rewarding validators based on their staked capital as well as the business value they contribute to the chain. This protocol has already been implemented into a fully-fledged blockchain platform called Bahamut (www.bahamut.io), which boasts hundreds of thousands of active users.
Multivariate Blind Signatures Revisited
In 2017, Petzoldt, Szepieniec, and Mohamed proposed a blind signature scheme based on multivariate cryptography. This construction has been expanded on by several other works. This short paper shows that their construction is susceptible to an efficient polynomial-time attack. The problem is that the authors implicitly assumed that for a random multivariate quadratic map P and a collision-resistant hash function H, the map (m, r) -> H(m) - P(r) is a binding commitment, which is not the case. There is a ``folklore'' algorithm that can be used to, given any pair of messages, efficiently produce a commitment that opens to both of them. We hope that by pointing out that multivariate quadratic maps are not binding, similar problems can be avoided in the future.
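The folklore algorithm in code (our sketch): for any quadratic map P, the difference P(x + d) - P(x) is affine in x for every fixed offset d, so finding two openings reduces to picking a random d and solving a linear system for x with P(x + d) - P(x) = t, where t plays the role of H(m1) - H(m2). With random toy parameters the system is solvable with high probability.

```python
import secrets

q, n, m = 31, 8, 4     # toy parameters: m quadratic forms in n variables
A = [[[secrets.randbelow(q) for _ in range(n)] for _ in range(n)] for _ in range(m)]

def P(v):
    return [sum(A[k][i][j] * v[i] * v[j] for i in range(n) for j in range(n)) % q
            for k in range(m)]

def solve_affine(d, t):
    """Solve P(x + d) - P(x) = t over GF(q). Equation k reads:
    sum_i x_i * (sum_j (A_k[i][j] + A_k[j][i]) * d_j) = t_k - P_k(d)."""
    Pd = P(d)
    rows = [[sum((A[k][i][j] + A[k][j][i]) * d[j] for j in range(n)) % q
             for i in range(n)] + [(t[k] - Pd[k]) % q] for k in range(m)]
    piv = 0                                   # Gauss-Jordan elimination mod q
    for col in range(n):
        r = next((r for r in range(piv, m) if rows[r][col]), None)
        if r is None:
            continue
        rows[piv], rows[r] = rows[r], rows[piv]
        inv = pow(rows[piv][col], q - 2, q)
        rows[piv] = [v * inv % q for v in rows[piv]]
        for r2 in range(m):
            if r2 != piv and rows[r2][col]:
                f = rows[r2][col]
                rows[r2] = [(a - f * b) % q for a, b in zip(rows[r2], rows[piv])]
        piv += 1
    x = [0] * n                               # free variables set to zero
    for r in range(piv):
        lead = next(c for c in range(n) if rows[r][c])
        x[lead] = rows[r][n]
    return x

t = [secrets.randbelow(q) for _ in range(m)]  # stands in for H(m1) - H(m2)
d = [secrets.randbelow(q) for _ in range(n)]  # random offset
x = solve_affine(d, t)
xd = [(a + b) % q for a, b in zip(x, d)]
assert [(u - v) % q for u, v in zip(P(xd), P(x))] == t
```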
Quasi-Optimal Permutation Ranking and Applications to PERK
A ranking function for permutations maps every permutation of length n to a unique integer between 0 and n! - 1. For permutations of the sizes that are of interest in cryptographic applications, evaluating such a function requires multiple-precision arithmetic. This work introduces a quasi-optimal ranking technique that allows us to rank a permutation efficiently without needing a multiple-precision arithmetic library. We present experiments that show the computational advantage of our method compared to the standard lexicographic optimal permutation ranking. As an application of our result, we show how this technique improves the signature sizes and the efficiency of the PERK digital signature scheme.
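For reference, the standard lexicographic (optimal) ranking via the Lehmer code is the baseline being improved on; as the last line shows, for cryptographic sizes the rank is a huge integer, which is why big-number arithmetic is otherwise needed (our sketch):

```python
from math import factorial

def lex_rank(perm):
    """Lexicographic rank of a permutation of 0..n-1 via the Lehmer code."""
    n, rank = len(perm), 0
    for i, p in enumerate(perm):
        smaller = sum(1 for x in perm[i + 1:] if x < p)
        rank += smaller * factorial(n - 1 - i)
    return rank

assert lex_rank([0, 1, 2]) == 0
assert lex_rank([2, 1, 0]) == 5            # last of the 3! permutations
print(lex_rank(list(range(127, -1, -1))))  # 128! - 1: a 216-digit integer
```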
Communication-Efficient Secure Logistic Regression
We present a novel construction that enables two parties to securely train a logistic regression model on private secret-shared data. Our goal is to minimize online communication and round complexity, while still allowing for an efficient offline phase.
As part of our construction, we develop many building blocks of independent interest. These include a new approximation technique for the sigmoid function that results in a secure protocol with better communication, protocols for secure powers evaluation and secure spline computation on fixed-point values, and a new comparison protocol that optimizes online communication. We also present a new two-party protocol for generating keys for distributed point functions (DPFs) over arithmetic sharing, where previous constructions do this only for Boolean outputs.
We implement our protocol in an end-to-end system and benchmark its efficiency. We can securely evaluate a batch of sigmoids with low online communication, few online rounds, and little online time over WAN: less in all three metrics than the well-known MP-SPDZ protocol. Our system can train a logistic regression model over several epochs on a database containing many samples and features with modest online communication and online time. We compare our logistic regression training against MP-SPDZ over a synthetic dataset and show improvements in both online communication and online time over WAN. We converge to virtually the same model as plaintext in all cases. We open-source our system and include extensive tests.
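To see the flavor of building block involved, here is a plaintext piecewise (spline-style) sigmoid approximation of the kind such protocols evaluate on secret-shared fixed-point values (our illustration with a common textbook cubic, not the paper's actual approximation):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def piecewise_sigmoid(x):
    """Three-piece approximation: constants in the tails, a cubic inside."""
    if x < -5:  return 0.0
    if x > 5:   return 1.0
    return 0.5 + 0.197 * x - 0.004 * x ** 3   # cubic piece on [-5, 5]

for x in (-6.0, -2.0, 0.0, 1.5, 6.0):
    print(f"{x:5.1f}  exact={sigmoid(x):.4f}  approx={piecewise_sigmoid(x):.4f}")
```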
Covert Adaptive Adversary Model: A New Adversary Model for Multiparty Computation
In the covert adversary model, the corrupted parties can behave in any possible way like active adversaries, but any party that attempts to cheat is guaranteed to get caught by the honest parties with a minimum fixed probability, called the deterrence factor of the covert adversary model. Security-wise, a covert adversary is stronger than a passive adversary and weaker than an active adversary, and the model is more realistic than the passive adversary model. Protocols for covert adversaries are significantly more efficient than protocols for active adversaries. The covert adversary model is defined only for static corruption, whereas adaptive adversaries are more realistic than static ones. In this article, we define a new adversary model, the covert adaptive adversary model, by generalizing the definition of the covert adversary model to the more realistic adaptive corruption. We prove security relations between the new covert adaptive adversary model and existing adversary models such as the passive adaptive adversary model, the active adaptive adversary model and the covert static adversary model. We prove the sequential composition theorem for the new adversary model, which is necessary to allow modular design of protocols for this new adversary model.
Modeling Mobile Crash in Byzantine Consensus
Targeted Denial-of-Service (DoS) attacks have been a practical concern for permissionless blockchains. Potential solutions, such as random sampling, are adopted by blockchains. However, the associated security guarantees have only been informally discussed in prior work. This is due to the fact that existing adversary models either do not fully capture this attack, give up certain design choices (as in the sleepy model or asynchronous network model), or are too strong to be practical (as in the mobile Byzantine adversary model).
This paper provides theoretical foundations and desired properties for consensus protocols that resist targeted DoS attacks. In particular, we define the Mobile Crash Adaptive Byzantine (MCAB) model to capture such attacks. In addition, we identify and formalize two properties for consensus protocols under the MCAB model, and analyze their trade-offs.
As case studies, we prove that Ouroboros Praos and Algorand are secure in our MCAB model, giving the first formal proofs supporting their security guarantee against targeted DoS attacks, which were previously only informally discussed.
We also illustrate an application of our properties to secure a streamlined BFT protocol, chained HotStuff, against targeted DoS attacks.
Categorization of Faulty Nonce Misuse Resistant Message Authentication
A growing number of lightweight block ciphers are proposed for environments such as the Internet of Things. An important contribution to the reduced implementation cost is a block length n of 64 or 96 bits rather than 128 bits. As a consequence, encryption modes and message authentication code (MAC) algorithms require security beyond the 2^{n/2} birthday bound. This paper provides an extensive treatment of MAC algorithms that offer beyond birthday bound PRF security for both nonce-respecting and nonce-misusing adversaries. We study constructions that use two block cipher calls, one universal hash function call and an arbitrary number of XOR operations.
We start with the separate problem of generically identifying all possible secure n-to-n-bit pseudorandom functions (PRFs) based on two block cipher calls. The analysis shows that the existing constructions EDM, SoP, and EDMD are the only constructions of this kind that achieve beyond birthday bound security.
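For reference, these three constructions are commonly written as follows (E_1 and E_2 denote the two independently keyed block ciphers; these are the standard definitions from the literature, not new material from this paper):

```
SoP(x)  = E_1(x) ⊕ E_2(x)
EDM(x)  = E_2(E_1(x) ⊕ x)
EDMD(x) = E_2(E_1(x)) ⊕ E_1(x)
```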
Subsequently we deliver an exhaustive treatment of MAC algorithms, where the outcome of a universal hash function evaluation on the message may be entered at any point in the computation of the PRF. We conclude that there are a total of nine schemes that achieve beyond birthday bound security, and a tenth construction that cannot be proven using currently known proof techniques. Of these nine MAC algorithms, three constructions achieve optimal n-bit security in the nonce-respecting setting, but are completely insecure if the nonce is reused. The remaining six constructions have 3n/4-bit security in the nonce-respecting setting, and only four out of these six constructions still achieve beyond-birthday-bound security in the case of nonce misuse.
Let Attackers Program Ideal Models: Modularity and Composability for Adaptive Compromise
We show that the adaptive compromise security definitions of Jaeger and Tyagi (Crypto '20) cannot be applied in several natural use-cases. These include proving multi-user security from single-user security, the security of the cascade PRF, and the security of schemes sharing the same ideal primitive. We provide new variants of the definitions and show that they resolve these issues with composition. Extending these definitions to the asymmetric settings, we establish the security of the modular KEM/DEM and Fujisaki-Okamoto approaches to public key encryption in the full adaptive compromise setting. This allows instantiations which are more efficient and standard than prior constructions.
Hardness of Range Avoidance and Remote Point for Restricted Circuits via Cryptography
A recent line of research has introduced a systematic approach to explore the complexity of explicit construction problems through the use of meta problems, namely, the range avoidance problem (abbrev. Avoid) and the remote point problem (abbrev. RPP). The upper and lower bounds for these meta problems provide a unified perspective on the complexity of specific explicit construction problems that were previously studied independently. An interesting question largely unaddressed by previous works is whether Avoid and RPP are hard for simple circuits such as low-depth circuits.
In this paper, we demonstrate, under plausible cryptographic assumptions, that both the range avoidance problem and the remote point problem cannot be efficiently solved by nondeterministic search algorithms, even when the input circuits are as simple as constant-depth circuits. This extends a hardness result established by Ilango, Li, and Williams (STOC '23) against deterministic algorithms employing witness encryption for NP, where the inputs to Avoid are general Boolean circuits.
Our primary technical contribution is a novel construction of witness encryption, inspired by public-key encryption, for a certain promise language in NP that is unlikely to be NP-complete. We introduce a generic approach to transform a public-key encryption scheme with particular properties into a witness encryption scheme for a promise language related to the initial public-key encryption scheme. Based on this translation and variants of standard lattice-based or coding-based PKE schemes, we obtain, under plausible assumptions, a provably secure witness encryption scheme for some promise language in NP. Additionally, we show that our constructions of witness encryption are plausibly secure against nondeterministic adversaries under a generalized notion of security in the spirit of Rudich's super-bits (RANDOM '97), which is crucial for demonstrating the hardness of Avoid and RPP against nondeterministic algorithms.
Challenger: Blockchain-based Massively Multiplayer Online Game Architecture
We propose Challenger, a peer-to-peer blockchain-based middleware architecture for narrative games, and discuss its resilience to cheating attacks. Our architecture orchestrates nine services in a fully decentralized manner, where nodes are aware of neither the entire composition of the system nor its size. All these components are orchestrated together to obtain (strong) resilience to cheaters.
The main contribution of the paper is to provide, for the first time, an architecture for narrative games that is agnostic of any particular blockchain and that brings together several distinct research areas, namely distributed ledgers, peer-to-peer networks, multiplayer online games, and resilience to attacks.
Multi User Security of LightMAC and LightMAC_Plus
In FSE'16, Luykx et al. proposed LightMAC, which provably achieves a query-length-independent PRF security bound. To be precise, the construction achieves security roughly of the order of q^2/2^n when instantiated with two independently keyed n-bit block ciphers, where q is the total number of queries made by the adversary. Subsequently, in ASIACRYPT'17, Naito proposed a beyond-birthday-bound variant of the construction, dubbed LightMAC_Plus, that is built on three independently keyed n-bit block ciphers and achieves 2n/3-bit PRF security. Security analyses of these two constructions have been conducted in the single-user setting, where we assume that the adversary has access to a single instance of the construction. In this paper, we investigate, for the first time, the security of the LightMAC and LightMAC_Plus constructions in the multi-user setting, where we assume that the adversary has access to more than one instance of the construction. In particular, we show that LightMAC remains secure roughly up to 2^{n/2} construction queries and 2^k ideal-cipher queries in the ideal-cipher model, and LightMAC_Plus maintains security up to approximately 2^{2n/3} construction queries and 2^{2k/3} ideal-cipher queries in the ideal-cipher model, where n denotes the block size and k denotes the key size of the block cipher.
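For intuition, LightMAC enciphers each (n-s)-bit message block prefixed with an s-bit block counter under one key, XORs the results together with the 10*-padded final block, and enciphers the sum under a second key. A minimal sketch, with AES-128 (n = 128, s = 32) standing in for the block cipher and hypothetical placeholder keys:

```python
# Minimal LightMAC sketch (illustrative only, not a vetted implementation).
from Crypto.Cipher import AES  # pycryptodome

N, S = 16, 4  # block size and counter size in bytes (n = 128, s = 32)
E1 = AES.new(b"\x01" * 16, AES.MODE_ECB).encrypt  # counter-block key
E2 = AES.new(b"\x02" * 16, AES.MODE_ECB).encrypt  # finalization key

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def lightmac(msg: bytes) -> bytes:
    chunk = N - S  # message bytes carried per counter block
    blocks = [msg[i:i + chunk] for i in range(0, len(msg), chunk)] or [b""]
    v = bytes(N)
    for i, m in enumerate(blocks[:-1], start=1):
        v = xor(v, E1(i.to_bytes(S, "big") + m))   # <i>_s || M[i]
    pad = blocks[-1] + b"\x80" + bytes(N - len(blocks[-1]) - 1)  # 10* pad
    return E2(xor(v, pad))
```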
Large Language Models for Blockchain Security: A Systematic Literature Review
Large Language Models (LLMs) have emerged as powerful tools across various domains within cyber security. Notably,
recent studies are increasingly exploring the application of LLMs in the context of blockchain security (BS).
However, a comprehensive understanding of the full scope of LLM applications, impacts, and potential constraints in blockchain security is still lacking.
To fill this gap, we undertake a literature review focusing on the studies that apply LLMs in blockchain security (LLM4BS).
Our study aims to comprehensively analyze and understand existing research, and elucidate how LLMs contribute to enhancing the security of blockchain systems.
Through a thorough examination of existing literature, we delve into the integration of LLMs into various aspects of blockchain security.
We explore the mechanisms through which LLMs can bolster blockchain security, including their applications in smart contract auditing, transaction anomaly detection, vulnerability repair, program analysis of smart contracts, and serving as participants in the cryptocurrency community.
Furthermore, we assess the challenges and limitations associated with leveraging LLMs for blockchain security, considering factors such as scalability, privacy, and ethical concerns.
Our thorough review sheds light on the opportunities and potential risks of LLM4BS tasks, providing valuable insights for researchers, practitioners, and policymakers alike.
Massive Superpoly Recovery with a Meet-in-the-middle Framework -- Improved Cube Attacks on Trivium and Kreyvium
The cube attack extracts information about secret key bits by recovering the coefficient, called the superpoly, in the output bit with respect to a subset of plaintext/IV bits, which is called a cube. While the division property provides an efficient way to detect the structure of the superpoly, superpoly recovery can still be prohibitively costly if the number of rounds is sufficiently high. In particular, Core Monomial Prediction (CMP) was proposed at ASIACRYPT 2022 as a scaled-down version of Monomial Prediction (MP), which sacrifices accuracy for efficiency but ultimately gets stuck at 848 rounds of Trivium.
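The cube-sum principle behind superpoly recovery can be illustrated on a toy Boolean function (a hypothetical stand-in for a cipher's output bit, not any real cipher): summing the output over all assignments of the cube variables yields the superpoly evaluated on the key.

```python
# Toy illustration of the cube attack's core identity: XOR-summing f over
# the cube {v0, v1} recovers the superpoly k0*k1 + k2 on any key.
from itertools import product

def f(k, v):  # hypothetical output bit with superpoly (k0 k1 + k2) on cube {v0,v1}
    return (v[0] & v[1] & ((k[0] & k[1]) ^ k[2])) ^ (v[0] & k[0]) ^ v[2]

def cube_sum(key, cube_idx, iv_len=3):
    acc = 0
    for bits in product((0, 1), repeat=len(cube_idx)):
        iv = [0] * iv_len             # non-cube IV bits fixed to 0
        for i, b in zip(cube_idx, bits):
            iv[i] = b
        acc ^= f(key, iv)
    return acc                        # the superpoly evaluated on `key`

for key in product((0, 1), repeat=3):
    assert cube_sum(key, [0, 1]) == (key[0] & key[1]) ^ key[2]
```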
In this paper, we provide new insights into CMP by elucidating the algebraic meaning of core monomial trails. We prove that extracting all the core monomial trails suffices to recover the superpoly, an approach based solely on CMP, thus demonstrating that CMP can achieve the same perfect accuracy as MP. We further reveal that CMP is still MP in essence, but with variable substitutions on the target function. Inspired by the divide-and-conquer strategy that has been widely used in previous literature, we design a meet-in-the-middle (MITM) framework in which the CMP-based approach can be embedded to achieve a speedup.
To illustrate the power of these new techniques, we apply the MITM framework to Trivium, Grain-128AEAD and Kreyvium. As a result, not only can the previous computational cost of superpoly recovery be reduced (e.g., 5x faster for superpoly recovery on 192-round Grain-128AEAD), but we also succeed in recovering superpolies for up to 851 rounds of Trivium and up to 899 rounds of Kreyvium. This surpasses the previous best results by 3 and 4 rounds, respectively. Using the memory-efficient Möbius transform proposed at EUROCRYPT 2021, we can perform key recovery attacks on the target ciphers even when the superpoly contains an enormous number of monomials. This leads to the best cube attacks on the target ciphers.
Efficient Hardware Implementation for Maiorana-McFarland type Functions
Maiorana-McFarland type constructions basically concatenate the truth tables of linear functions on a smaller number of variables to obtain highly nonlinear functions on larger inputs. Such functions and their different variants have significant applications in cryptology and coding theory. The straightforward hardware implementation of such functions using decoders (Khairallah et al., WAIFI 2018; Tang et al., SIAM Journal on Discrete Mathematics, 2019) requires resources exponential in the number of inputs. In this paper, we study such constructions in detail and provide implementation strategies for a selected subset of this class using polynomially many gates in the number of inputs. We demonstrate that such implementations cover the requirements of cryptographic primitives to a great extent. Several existing constructions are revisited in this direction, and exact implementations are provided with specific depth and gate counts for hardware implementation. Related combinatorial results of a theoretical nature are also analyzed in this regard. Finally, we present a novel construction of a new class of balanced Boolean functions with very low absolute indicators and very high nonlinearity that can be implemented in circuits of size polynomial in the number of inputs. We underline that these constructions have immediate applications to resisting signature generation in Differential Fault Attacks (DFA) and to implementing functions on a large number of variables in designing ciphers for the paradigm of Fully Homomorphic Encryption (FHE).
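A minimal sketch of the basic Maiorana-McFarland template f(x, y) = <x, phi(y)> XOR g(y), with a hypothetical permutation phi and arbitrary g; a textbook Walsh-Hadamard transform confirms that the resulting 6-variable function attains the bent nonlinearity 2^5 - 2^2 = 28. This only illustrates the template, not the paper's new constructions.

```python
# Maiorana-McFarland truth table: concatenated linear functions selected
# by y. With n = m and phi a bijection, the function is bent.
n = 3  # half the number of input bits

def dot(a, b):  # inner product of n-bit integers over GF(2)
    return bin(a & b).count("1") & 1

phi = lambda y: (y + 1) % (1 << n)   # a hypothetical permutation of F_2^n
g = lambda y: dot(y, y)              # any g works; here the parity of y

tt = [dot(x, phi(y)) ^ g(y) for y in range(1 << n) for x in range(1 << n)]

def walsh(tt):
    w = [1 - 2 * b for b in tt]      # (-1)^f
    h = 1
    while h < len(w):                # in-place fast Walsh-Hadamard transform
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

nl = (len(tt) - max(abs(c) for c in walsh(tt))) // 2
print(nl)  # 28, the bent nonlinearity bound for 6-variable functions
```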
Orca: FSS-based Secure Training and Inference with GPUs
Secure Two-party Computation (2PC) allows two parties to compute any function on their private inputs without revealing their inputs to each other. In the offline/online model for 2PC, correlated randomness that is independent of all inputs to the computation, is generated in a preprocessing (offline) phase and this randomness is then utilized in the online phase once the inputs to the parties become available. Most 2PC works focus on optimizing the online time as this overhead lies on the critical path. A recent paradigm for obtaining efficient 2PC protocols with low online cost is based on the cryptographic technique of function secret sharing (FSS).
We build an end-to-end system ORCA to accelerate the computation of FSS-based 2PC protocols with GPUs. Next, we observe that the main performance bottleneck in such accelerated protocols is in storage (due to the large amount of correlated randomness), and we design new FSS-based 2PC protocols for several key functionalities in ML which reduce storage by up to 5×. Compared to the prior state-of-the-art on secure training accelerated with GPUs in the same computation model (PIRANHA, Usenix Security 2022), we show that ORCA has 4% higher accuracy, 98× less communication, and is 26× faster on CIFAR-10. Moreover, maintaining training accuracy while using fixed-point arithmetic requires stochastic truncations, and all prior works on secure fixed-point training (including PIRANHA) use insecure protocols for it. We provide the first secure protocol for stochastic truncations and build on it to provide the first evaluation of training with end-to-end security. For secure ImageNet inference, ORCA achieves sub-second latency for VGG-16 and ResNet-50, and outperforms the state-of-the-art by 8-103×.
Shorter VOLEitH Signature from Multivariate Quadratic
The VOLE-in-the-Head paradigm, recently introduced by Baum et al. (Crypto 2023), is a compiler that uses SoftSpokenOT (Crypto 2022) to transform any VOLE-based designated-verifier zero-knowledge protocol into a publicly verifiable zero-knowledge protocol. Together with the Fiat-Shamir transformation, a new digital signature scheme, FAEST (faest.info), was proposed, and it outperforms all MPC-in-the-Head signatures.
We propose a new candidate post-quantum signature scheme from the Multivariate Quadratic (MQ) problem in the VOLE-in-the-Head framework, which significantly reduces the signature size compared to previous works. We achieve a signature size ranging from 3.5KB to 6KB for the 128-bit security level. Compared to the state-of-the-art MQ-based signature schemes and existing VOLE-in-the-Head signatures, our scheme achieves the smallest signature size (1.5 to 2 times smaller than MQ-based schemes) while keeping the computational efficiency competitive.
Analysis of Layered ROLLO-I: A BII-LRPC code-based KEM
We analyze Layered ROLLO-I, a code-based cryptosystem
published in IEEE Communications Letters and submitted to the Korean
post-quantum cryptography competition. Four versions of Layered
ROLLO-I have been proposed in the competition. We show that the first
two versions do not provide the claimed security against rank decoding
attacks and give reductions to small instances of the original ROLLO-I
scheme, which was a candidate in the NIST competition and eliminated
there due to rank decoding attacks. As a second contribution, we provide
two efficient message recovery attacks, affecting every security level
of the first three versions of Layered ROLLO-I and security levels 128
and 192 of the fourth version.
Secure Multiparty Computation from Threshold Encryption Based on Class Groups
We construct the first actively-secure threshold version of the cryptosystem based on class groups from the so-called CL framework (Castagnos and Laguillaumie, 2015).
We show how to use our threshold scheme to achieve general universally composable (UC) secure multiparty computation (MPC) with only transparent set-up, i.e., with no secret trapdoors involved.
On the way to our goal, we design new zero-knowledge (ZK) protocols with constant communication complexity for proving multiplicative relations between encrypted values. This allows us to use the ZK proofs to achieve MPC with active security with only a constant factor overhead.
Finally, we adapt our protocol for the so-called "You-Only-Speak-Once" (YOSO) setting, which is a very promising recent approach for performing MPC over a blockchain. This is possible because our key generation protocol is simpler and requires significantly less interaction compared to previous approaches: in particular, our new key generation protocol allows the adversary to bias the public key, but we show that this has no impact on the security of the resulting cryptosystem.
Non-Transferable Anonymous Tokens by Secret Binding
Non-transferability (NT) is a security notion which ensures that credentials are only used by their intended owners. Despite its importance, it has not been formally treated in the context of anonymous tokens (ATs), which are lightweight anonymous credentials. In this work, we consider a client who "buys" access tokens that may be redeemed anonymously but must not be transferred. We extensively study the trade-offs between privacy (obtained through anonymity) and security in ATs through the notion of non-transferability. We formalise new security notions, design a suite of protocols with various flavors of NT, prove their security, and implement the protocols to assess their efficiency. Finally, we study the existing anonymous credentials which offer NT, and show that they cannot automatically be used as ATs without security and complexity implications.
Quantum-Safe Account Recovery for WebAuthn
WebAuthn is a passwordless authentication protocol which allows users to authenticate to online services using public-key cryptography. Users prove their identity by signing a challenge with a private key, which is stored on a device such as a cell phone or a USB security token. This approach avoids many of the common security problems with password-based authentication.
WebAuthn's reliance on proof-of-possession leads to a usability issue, however: a user who loses access to their authenticator device either loses access to their accounts or is required to fall back on a weaker authentication mechanism. To solve this problem, Yubico has proposed a protocol which allows a user to link two tokens in such a way that one (the primary authenticator) can generate public keys on behalf of the other (the backup authenticator). With this solution, users authenticate with a single token, only relying on their backup token if necessary for account recovery. However, Yubico's protocol relies on the hardness of the discrete logarithm problem for its security and hence is vulnerable to an attacker with a powerful enough quantum computer.
We present a WebAuthn recovery protocol which can be instantiated with quantum-safe primitives. We also critique the security model used in previous analysis of Yubico's protocol and propose a new framework which we use to evaluate the security of both the group-based and the quantum-safe protocol. This leads us to uncover a weakness in Yubico's proposal which escaped detection in prior work but was revealed by our model. In our security analysis, we require the cryptographic primitives underlying the protocols to satisfy a number of novel security properties such as KEM unlinkability, which we formalize. We prove that well-known quantum-safe algorithms, including CRYSTALS-Kyber, satisfy the properties required for analysis of our quantum-safe protocol.
Families of prime-order endomorphism-equipped embedded curves on pairing-friendly curves
This paper presents a procedure to construct parameterized families
of prime-order endomorphism-equipped elliptic curves that are defined over the
scalar field of pairing-friendly elliptic curve families such as Barreto–Lynn–Scott
(BLS), Barreto–Naehrig (BN) and Kachisa–Schaefer–Scott (KSS), providing general
formulas derived from the curves’ seeds. These so-called “embedded curves” are of
major interest in SNARK applications that prove statements involving elliptic curve
arithmetic, e.g., digital signatures. In this paper, the mathematical groundwork is laid,
and advantages of these embeddings are discussed. Additionally, practical examples
in the case of BN and BLS families are included and impossibility results regarding
KSS families are explained.
A New Cryptographic Algorithm
“The advent of quantum computing technology will compromise many of the current cryptographic algorithms, especially public-key cryptography, which is widely used to protect digital information. Most algorithms on which we depend are used worldwide in components of many different communications, processing, and storage systems. Once access to practical quantum computers becomes available, all public-key algorithms and associated protocols will be vulnerable to criminals, competitors, and other adversaries. It is critical to begin planning for the replacement of hardware, software, and services that use public-key algorithms now so that information is protected from future attacks.” [1].
For this purpose, we have developed a new algorithm that helps address the aforementioned problem. Instead of using a classical scheme of encoding/decoding methods (keys, prime numbers, etc.), our algorithm is based on a combination of functions. Because the cardinality of the set of functions is infinite, it would be impossible for a third party (e.g., a hacker) to decode the secret information transmitted by the sender (Bob) to the receiver (Alice).
Dragon: Decentralization at the cost of Representation after Arbitrary Grouping and Its Applications to Sub-cubic DKG and Interactive Consistency
Several distributed protocols, including distributed key generation (DKG) and interactive consistency (IC), depend on running instances of Byzantine Broadcast or Byzantine Agreement among the nodes, resulting in cubic communication overhead.
In this paper, we provide a new methodology for realizing such broadcasts, which we call DRAGON: Decentralization at the cost of Representation after Arbitrary GrOupiNg. At its core, we arbitrarily group nodes into small ``shards'' and pair them with multiple new primitives we call consortium-sender (dealer) broadcast (and secret sharing). The new tools enable a shard of nodes to jointly broadcast (or securely contribute a secret) to the whole population at only the cost of one dealer (as if there were a representative).
With our new Dragon method, we construct the first two DKG protocols, both achieving optimal resilience, with sub-cubic total communication and computation. The first DKG generates a secret key within an elliptic curve group, incurring sub-cubic total communication and computation. The second DKG, while slightly increasing communication and computation by a factor of the statistical security parameter, generates a secret key as a field element, which makes it directly compatible with various off-the-shelf DLog-based threshold cryptographic systems. We also construct the first deterministic IC with sub-cubic communication. Along the way, we formalize simulation-based security for publicly verifiable secret sharing (PVSS) and prove it, enabling a modular analysis, which might be of independent interest.
BUFFing FALCON without Increasing the Signature Size
This work shows how FALCON can achieve the Beyond UnForgeability Features (BUFF) introduced by Cremers et al. (S&P'21) more efficiently than by applying the generic BUFF transform. Specifically, we show that applying a transform of Pornin and Stern (ACNS'05), dubbed PS-3 transform, already suffices for FALCON to achieve BUFF security. For FALCON, this merely means to include the public key in the hashing step in signature generation and verification, instead of hashing only the nonce and the message; the other signature computation steps and the signature output remain untouched. In comparison to the BUFF transform, which appends a hash value to the final signature, the PS-3 transform therefore achieves shorter signature sizes, without incurring additional computations.
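The PS-3 transform itself is scheme-agnostic and amounts to one extra hash input. A minimal sketch with Ed25519 standing in for FALCON (chosen only because a vetted implementation is readily available) and a FALCON-style 40-byte nonce; requires the `cryptography` package (>= 40):

```python
# PS-3-style hash-then-sign: the public key is bound into the message
# hash; the signature itself (nonce, core) is unchanged in size.
import hashlib, os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def digest(pk: bytes, nonce: bytes, msg: bytes) -> bytes:
    return hashlib.sha512(pk + nonce + msg).digest()  # pk included per PS-3

sk = Ed25519PrivateKey.generate()
pk = sk.public_key()
pk_bytes = pk.public_bytes_raw()

nonce = os.urandom(40)  # FALCON-style 40-byte nonce
sig = (nonce, sk.sign(digest(pk_bytes, nonce, b"hello")))

nonce, core = sig
pk.verify(core, digest(pk_bytes, nonce, b"hello"))  # raises on failure
```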
Signature-Free Atomic Broadcast with Optimal O(n^2) Messages and O(1) Expected Time
Byzantine atomic broadcast (ABC) is at the heart of permissioned blockchains and various multi-party computation protocols. We resolve a long-standing open problem in ABC, presenting the first information-theoretic (IT) and signature-free asynchronous ABC protocol that achieves optimal O(n^2) messages and O(1) expected time. Our ABC protocol adopts a new design, relying on a reduction from (perhaps surprisingly) a somewhat neglected primitive called multivalued Byzantine agreement (MBA).
Rollerblade: Replicated Distributed Protocol Emulation on Top of Ledgers
We observe that most fixed-party distributed protocols can be rewritten by replacing a party with a ledger (such as a blockchain system) and the authenticated channel communication between parties with cross-chain relayers. This transform is useful because blockchain systems are always online and have battle-tested security assumptions. We provide a definitional framework that captures this analogy. We model the transform formally, and posit and prove a generic metatheorem that allows translating all theorems from the party setting into theorems in the emulated setting, while preserving analogies between party honesty and ledger security. At the heart of our proof lies a reduction-based simulation argument. As an example, our metatheorem can be used to construct a consensus protocol on top of other blockchains, creating a reliable rollup that assumes only that the majority of the underlying layer-1s are secure.
Ratel: MPC-extensions for Smart Contracts
Enhancing privacy on smart contract-enabled blockchains has garnered much attention in recent research. Zero-knowledge proofs (ZKPs) are one of the most popular approaches; however, they fail to provide full expressiveness and fine-grained privacy. To illustrate this, we underscore an underexplored type of Miner Extractable Value (MEV), called Residual Bids Extractable Value (RBEV). Residual bids highlight the vulnerability where unfulfilled bids inadvertently reveal traders' unmet demands and prospective trading strategies, thus exposing them to exploitation. ZKP-based approaches fail to address RBEV as they cannot provide post-execution privacy without some level of information disclosure. Other MEV mitigations, such as fair-ordering protocols, also fail to address RBEV. We introduce Ratel, an innovative framework bridging a multi-party computation (MPC) prototyping framework (MP-SPDZ) and a smart contract language (Solidity), harmonizing the privacy and full expressiveness of MPC with Solidity's on-chain programmability. This synergy empowers developers to craft privacy-preserving decentralized applications (DApps) with ease. We demonstrate Ratel's efficacy through two distinguished decentralized finance (DeFi) applications: a decentralized exchange and a collateral auction, effectively mitigating the potential RBEV issue. Furthermore, Ratel is equipped with a lightweight crash-reset mechanism, enabling the seamless recovery of transiently benign faulty nodes. To prevent the crash-reset mechanism from being abused by malicious entities and to ward off DoS attacks, we incorporate a cost-utility analysis anchored in the Bayesian approach. Our performance evaluation of the applications developed under the Ratel framework underscores their capability to manage real-world peak-time workloads.
More Efficient Two-Round Multi-Signature Scheme with Provably Secure Parameters
In this paper, we propose the first two-round multi-signature scheme that can guarantee 128-bit concrete security under a standardized elliptic curve (EC) without using the Algebraic Group Model (AGM). To construct our scheme, we introduce a new technique to tailor a certain special homomorphic commitment scheme for use with the Katz-Wang DDH-based signature scheme. We prove that an EC with at least a 321-bit order is sufficient for our scheme to have the standard 128-bit security. This makes our scheme easy to implement in practice, because we can use the NIST-standardized curve P-384 for 128-bit security. The signature size of our proposed scheme under P-384 is 1152 bits, which is the smallest among existing schemes that avoid the AGM. Our experiment on an ordinary machine shows that signing and verification each complete in about 65 ms with 100 signers. This shows that our scheme has sufficiently reasonable running time in practice.
BPDTE: Batch Private Decision Tree Evaluation via Amortized Efficient Private Comparison
Machine learning as a service requires the client to trust the server and provide its own private information to use the service. Usually, clients may worry that their private data is being collected by the server without effective supervision, while the server aims to ensure proper management of user data to foster the advancement of its services. In this work, we focus on private decision tree evaluation (PDTE), which alleviates such privacy concerns associated with classification tasks using decision trees. After the evaluation, except for some hyperparameters, the client only receives the classification results from the server, while the server learns nothing.
Firstly, we propose three amortized efficient private comparison algorithms, TECMP, RDCMP, and CDCMP, which are based on leveled homomorphic encryption. They are non-interactive, high-precision (up to 26624-bit), many-to-many, and output expressive, achieving an amortized cost of less than 1 ms at 32-bit precision, which is an order of magnitude faster than the state-of-the-art. Secondly, we propose three batch PDTE schemes using these private comparisons: TECMP-PDTE, RDCMP-PDTE, and CDCMP-PDTE. To support batch operations, we utilize a clear rows relation (CRR) algorithm, which obfuscates the positions and classification results of the different rows of data. Finally, on decision trees exceeding 1000 nodes at 16-bit precision, the amortized runtimes of TECMP-PDTE and RDCMP-PDTE are both more than 56x faster than the state-of-the-art, while TECMP-PDTE with CRR still achieves a 14x speedup. Even on a single row and a tree of fewer than 100 nodes at 64-bit precision, TECMP-PDTE maintains performance comparable to current work.
One-Wayness in Quantum Cryptography
The existence of one-way functions is one of the most fundamental assumptions in classical cryptography. In the quantum world, on the other hand, there is evidence that some cryptographic primitives can exist even if one-way functions do not exist [Morimae and Yamakawa, CRYPTO 2022; Ananth, Qian, and Yuen, CRYPTO 2022]. We therefore have the following important open problem in quantum cryptography: What is the most fundamental element in quantum cryptography? In this direction, Brakerski, Canetti, and Qian [arXiv:2209.04101] recently defined a notion called EFI pairs, which are pairs of efficiently generatable states that are statistically distinguishable but computationally indistinguishable, and showed its equivalence with some cryptographic primitives including commitments, oblivious transfer, and general multi-party computation. However, their work focuses on decision-type primitives and does not cover search-type primitives like quantum money and digital signatures. In this paper, we study properties of one-way state generators (OWSGs), which are a quantum analogue of one-way functions proposed by Morimae and Yamakawa. We first revisit the definition of OWSGs and generalize it by allowing mixed output states. Then we show the following results.
(1) We define a weaker version of OWSGs, which we call weak OWSGs, and show that they are equivalent to OWSGs. It is a quantum analogue of the amplification theorem for classical weak one-way functions.
(2) (Bounded-time-secure) quantum digital signatures with quantum public keys are equivalent to OWSGs.
(3) Private-key quantum money schemes (with pure money states) imply OWSGs.
(4) Quantum pseudo one-time pad schemes imply both OWSGs and EFI pairs. For EFI pairs, single-copy security suffices.
(5) We introduce an incomparable variant of OWSGs, which we call secretly-verifiable and statistically-invertible OWSGs, and show that they are equivalent to EFI pairs.
Automated Generation of Fault-Resistant Circuits
Fault Injection (FI) attacks, which involve intentionally introducing faults into a system to cause it to behave in an unintended manner, are widely recognized and pose a significant threat to the security of cryptographic primitives implemented in hardware, making fault tolerance an increasingly critical concern. However, protecting cryptographic hardware primitives securely and efficiently, even with well-established and documented methods such as redundant computation, can be a time-consuming, error-prone, and expertise-demanding task.
In this research, we present a comprehensive and fully-automated software solution for the Automated Generation of Fault-Resistant Circuits (AGEFA). Our application employs a generic and extensively researched methodology for the secure integration of countermeasures based on Error-Correcting Codes (ECCs) into cryptographic hardware circuits. Our software tool allows designers without hardware security expertise to develop fault-tolerant hardware circuits with pre-defined correction capabilities under a comprehensive fault adversary model. Moreover, our tool applies to masked designs without violating the masking security requirements, in particular to designs generated by the tool AGEMA. We evaluate the effectiveness of our approach through experiments on various block ciphers and demonstrate its ability to produce fault-tolerant circuits. Additionally, we assess the security of examples generated by AGEFA against Side-Channel Analysis (SCA) and FI using state-of-the-art leakage and fault evaluation tools.
Towards a Polynomial Instruction Based Compiler for Fully Homomorphic Encryption Accelerators
Fully Homomorphic Encryption (FHE) is a transformative technology that enables computations on encrypted data without requiring decryption, promising enhanced data privacy. However, its adoption has been limited due to significant performance overheads. Recent advances include the proposal of domain-specific, highly-parallel hardware accelerators designed to overcome these limitations.
This paper introduces PICA, a comprehensive compiler framework designed to simplify the programming of these specialized FHE accelerators and their integration with existing FHE libraries. PICA leverages a novel polynomial Instruction Set Architecture (p-ISA), which abstracts polynomial rings and their arithmetic operations, serving as a fundamental data type for the creation of compact, efficient code embracing high-level operations on polynomial rings, referred to as kernels, e.g., encompassing FHE primitives like arithmetic and ciphertext management. We detail a kernel generation framework that translates high-level FHE operations into pseudo-code using p-ISA, and a subsequent tracing framework that incorporates p-ISA functionalities and kernels into established FHE libraries. Additionally, we introduce a mapper to coordinate multiple FHE kernels for optimal application performance on targeted hardware accelerators. Our evaluations demonstrate PICA's efficacy in creating compact and efficient code when compared with an x64 architecture. In particular, for complex FHE operations such as relinearization, we observe a 25.24x instruction count reduction even when a large batch size (8192) is taken into account.
Linicrypt in the Ideal Cipher Model
We extend the Linicrypt framework for characterizing hash function security as proposed by McQuoid, Swope, and Rosulek (TCC 2018) to support constructions in the ideal cipher model.
In this setting, we give a characterization of collision- and second-preimage-resistance in terms of a linear-algebraic condition on Linicrypt programs, and present an efficient algorithm for determining whether a program satisfies the condition. As an application, we consider the case of the block cipher-based hash functions proposed by Preneel, Govaerts, and Vandewalle (Crypto 1993), and show that the semantic analysis of PGV given by Black et al. (J. Cryptology 2010) can be captured as a special case of our characterization. In addition, we model hash functions constructed through the Merkle-Damgård transformation within the Linicrypt framework. Finally, we apply this model to an analysis of how various attacks on the underlying compression functions can compromise the collision resistance of the resulting hash function.
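For concreteness, a toy sketch of the objects analyzed here: a PGV-style compression function (Davies-Meyer, f(h, m) = E_m(h) XOR h) iterated via the Merkle-Damgård transformation, with AES-128 playing the role of the ideal cipher (illustrative only; not the paper's Linicrypt formalism):

```python
# Davies-Meyer compression iterated through Merkle-Damgard with standard
# 10* padding plus an 8-byte message-length field.
from Crypto.Cipher import AES  # pycryptodome

def davies_meyer(h: bytes, m: bytes) -> bytes:
    e = AES.new(m, AES.MODE_ECB).encrypt(h)     # message block as the key
    return bytes(a ^ b for a, b in zip(e, h))

def merkle_damgard(msg: bytes, iv: bytes = bytes(16)) -> bytes:
    padded = msg + b"\x80"
    padded += bytes(-(len(padded) + 8) % 16)
    padded += (8 * len(msg)).to_bytes(8, "big")  # length in bits
    h = iv
    for i in range(0, len(padded), 16):
        h = davies_meyer(h, padded[i:i + 16])
    return h

print(merkle_damgard(b"abc").hex())
```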
Dashing and Star: Byzantine Fault Tolerance with Weak Certificates
State-of-the-art Byzantine fault-tolerant (BFT) protocols assuming partial synchrony, such as SBFT and HotStuff, use regular certificates obtained from 2f+1 (partial) signatures. We show that one can use weak certificates obtained from only f+1 signatures to assist in designing more robust and more efficient BFT protocols. We design and implement two BFT systems: Dashing (a family of two HotStuff-style BFT protocols) and Star (a parallel BFT framework).
We first present Dashing1, which targets both efficiency and robustness using weak certificates. Dashing1 is also network-adaptive in the sense that it can leverage network connection discrepancy to improve performance. We show that Dashing1 outperforms HotStuff in various failure-free and failure scenarios. We then present Dashing2, which enables a one-phase fast path by using strong certificates.
We then leverage weak certificates to build Star, a highly scalable BFT framework that delivers transactions from multiple replicas. Star compares favorably with existing protocols in terms of liveness, communication, state transfer, scalability, and/or robustness under failures.
We demonstrate that Dashing achieves 47%-107% higher peak throughput than HotStuff in experiments on Amazon EC2. Meanwhile, unlike all known BFT protocols, whose performance degrades as the number of replicas n grows large, the peak throughput of Star increases as n grows. When deployed in a WAN with 91 replicas across five continents, Star achieves an impressive throughput of 256 ktx/sec, 2.38x that of Narwhal.
Decentralised Repeated Modular Squaring Service Revisited: Attack and Mitigation
Repeated modular squaring plays a crucial role in various time-based cryptographic primitives, such as Time-Lock Puzzles and Verifiable Delay Functions. At ACM CCS 2021, Thyagarajan et al. introduced “OpenSquare”, a decentralised protocol that lets a client delegate the computation of repeated modular squaring to third-party servers while ensuring that these servers are compensated only if they deliver valid results. In this work, we unveil a significant vulnerability in OpenSquare, which enables servers to receive payments without fulfilling the delegated task. To tackle this issue, we present a series of mitigation measures.
A provably masked implementation of BIKE Key Encapsulation Mechanism
BIKE is a post-quantum key encapsulation mechanism (KEM) selected for the 4th round of NIST's standardization campaign. It relies on the hardness of the syndrome decoding problem for quasi-cyclic codes and on the indistinguishability of the public key from a random element, and provides the most competitive performance among round 4 candidates, which makes it relevant for future real-world use cases. Analyzing its side-channel resistance has been highly encouraged by the community, and several works have already outlined various side-channel weaknesses and proposed ad-hoc countermeasures. However, in contrast to the well-documented research line on masking lattice-based algorithms, the possibility of generically protecting code-based algorithms by masking has only been marginally studied in a 2016 paper by Cong Chen et al. At this stage of the standardization campaign, it is important to assess the possibility of fully masking the BIKE scheme and the resulting cost in terms of performance.
In this work, we provide the first high-order masked implementation of a code-based algorithm. We had to tackle many issues, such as finding proper ways to handle large sparse polynomials, masking the key-generation algorithm, and keeping the benefit of bitslicing. In this paper, we present all the gadgets necessary to provide a fully masked implementation of BIKE, we discuss our different implementation choices, and we propose a full proof of masking in the Ishai, Sahai, and Wagner (Crypto 2003) model.
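For reference, the core of the Ishai-Sahai-Wagner model is the ISW multiplication gadget; a bit-level toy sketch is below (illustrative only, making no claim about the paper's actual gadgets):

```python
# ISW AND gadget (Crypto 2003): computes XOR shares of x & y from XOR
# shares of x and y, using fresh randomness for each share pair.
import secrets
from functools import reduce
from operator import xor

def share(bit, n):
    """Split a bit into n XOR shares."""
    s = [secrets.randbits(1) for _ in range(n - 1)]
    return s + [bit ^ reduce(xor, s, 0)]

def isw_and(xs, ys):
    n = len(xs)
    r = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r[i][j] = secrets.randbits(1)
            r[j][i] = (r[i][j] ^ (xs[i] & ys[j])) ^ (xs[j] & ys[i])
    return [reduce(xor, (r[i][j] for j in range(n) if j != i), xs[i] & ys[i])
            for i in range(n)]

for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert reduce(xor, isw_and(share(x, 3), share(y, 3)), 0) == (x & y)
```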
More practically, we also provide an open C-code masked implementation of the key-generation, encapsulation and decapsulation algorithms with extensive benchmarks. While the obtained performance is slower than existing masked lattice-based algorithms, the scaling in the masking order is still encouraging and no Boolean to Arithmetic conversion has been used.
We hope that this work can be a starting point for future analysis and optimization.
Polynomial XL: A Variant of the XL Algorithm Using Macaulay Matrices over Polynomial Rings
Solving a system of multivariate quadratic equations in n variables over a finite field (the MQ problem) is one of the important problems in the theory of computer science. The XL algorithm (XL for short) is a major approach for solving the MQ problem with linearization over a coefficient field. Furthermore, the hybrid approach with XL (h-XL) is a variant of XL that guesses some variables beforehand. In this paper, we present a variant of h-XL, which we call the polynomial XL (PXL). In PXL, the n variables are divided into k variables to be fixed and the remaining n-k ``main variables'', and we generate a Macaulay matrix with respect to the main variables over a polynomial ring in the k fixed variables. By eliminating some columns of the Macaulay matrix over the polynomial ring before guessing variables, the number of operations required for each guessed value can be reduced compared with h-XL. Our complexity analysis of PXL (under some practical assumptions and heuristics) gives a new theoretical bound, and it indicates that PXL could be more efficient than other algorithms in theory on random systems with n = m, which is the case for general multivariate signatures. For example, on random systems with n = m over a small finite field, the numbers of operations deduced from the theoretical bounds indicate that PXL with an optimal choice of k outperforms the hybrid approaches with XL and Wiedemann XL as well as Crossbred.
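To make the linearization step concrete, here is a toy sketch of a Macaulay matrix for a hypothetical 3-variable quadratic system, built with sympy over the rationals (the paper works over a finite field and with the partitioned-variable variant):

```python
# Macaulay matrix at degree D: multiply each equation by all monomials of
# degree <= D - 2, then collect coefficients against all monomials of
# degree <= D (the columns).
from sympy import symbols, Poly, Matrix
from sympy.polys.monomials import itermonomials
from sympy.polys.orderings import monomial_key

x = symbols("x0 x1 x2")
system = [x[0]*x[1] + x[2] + 1, x[1]*x[2] + x[0], x[0]*x[2] + x[1] + x[2]]
D = 3  # target degree

cols = sorted(itermonomials(list(x), D),
              key=monomial_key("grevlex", list(x)), reverse=True)
rows = [Poly(u * f, *x) for f in system for u in itermonomials(list(x), D - 2)]
macaulay = Matrix([[r.coeff_monomial(c) for c in cols] for r in rows])
print(macaulay.shape, macaulay.rank())  # 12 rows, 20 monomial columns
```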
Fully Automated Selfish Mining Analysis in Efficient Proof Systems Blockchains
We study selfish mining attacks in longest-chain blockchains like Bitcoin, but where the proof of work is replaced with efficient proof systems, like proofs of stake or proofs of space, and consider the problem of computing an optimal selfish mining attack which maximizes the expected relative revenue of the adversary, thus minimizing the chain quality. To this end, we propose a novel selfish mining attack that aims to maximize this objective and formally model the attack as a Markov decision process (MDP). We then present a formal analysis procedure which computes an ε-tight lower bound on the optimal expected relative revenue in the MDP and a strategy that achieves this ε-tight lower bound, where ε may be any specified precision. Our analysis is fully automated and provides formal guarantees on its correctness. We evaluate our selfish mining attack and observe that it achieves superior expected relative revenue compared to two considered baselines.
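For intuition, ratio objectives of this kind are typically handled by binary-searching a value rho and solving an ordinary MDP with transformed rewards, as in earlier selfish-mining analyses; a toy sketch on a purely hypothetical two-action, one-shot MDP (not the paper's model):

```python
# Relative revenue E[r_a] / E[r_a + r_h] via the rho-search trick: the
# optimal value of the MDP with reward r_a - rho*(r_a + r_h) is zero
# exactly at the optimal relative revenue.
# transitions: action -> list of (prob, attacker_reward, honest_reward)
MDP = {
    "honest":  [(1.0, 1, 2)],
    "selfish": [(0.4, 3, 1), (0.6, 0, 2)],
}

def best_value(rho):  # optimal expected transformed reward
    return max(
        sum(p * (ra - rho * (ra + rh)) for p, ra, rh in outcomes)
        for outcomes in MDP.values()
    )

lo, hi = 0.0, 1.0
for _ in range(50):          # best_value is decreasing; root = optimum
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if best_value(mid) > 0 else (lo, mid)
print(round(lo, 6))          # 3/7 ~ 0.428571, from the "selfish" action
```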
Concurrent work [Sarenche FC'24] performs an automated analysis of selfish mining in predictable longest-chain blockchains based on efficient proof systems. Predictable means that the randomness for the challenges is fixed for many blocks (as used, e.g., in Ouroboros), while we consider unpredictable (Bitcoin-like) chains where the challenge is derived from the previous block.
An Efficient and Extensible Zero-knowledge Proof Framework for Neural Networks
In recent years, cloud vendors have started to supply paid services for data analysis by providing interfaces of their well-trained neural network models. However, customers lack tools to verify whether outcomes supplied by cloud vendors are correct inferences from particular models, in the face of lazy or malicious vendors. The cryptographic primitive called zero-knowledge proof (ZKP) addresses this problem. It enables the outcomes to be verifiable without leaking information about the models. Unfortunately, existing ZKP schemes for neural networks have high computational overheads, especially for the non-linear layers in neural networks.
In this paper, we propose an efficient and extensible ZKP framework for neural networks. Our work improves the performance of the proofs for non-linear layers. Compared to previous works relying on the technology of bit decomposition, we convert complex non-linear relations into range and exponent relations, which significantly reduces the number of constraints required to prove non-linear layers. Moreover, we adopt a modular design to make our framework compatible with more neural networks. Specifically, we propose two enhanced range and lookup proofs as basic blocks. They are efficient in proving the satisfaction of range and exponent relations. Then, we constrain the correct calculation of primitive non-linear operations using a small number of range and exponent relations. Finally, we build our ZKP framework from the primitive operations to the entire neural networks, offering the flexibility for expansion to various neural networks.
We implement our ZKPs for convolutional and transformer neural networks. The evaluation results show that our work achieves significant speedups for separate non-linear layers and for the entire ResNet-101 convolutional neural network when compared with the state-of-the-art work, Mystique. In addition, our work can prove GPT-2, a transformer neural network with over one hundred million parameters, in seconds, achieving a substantial speedup over ZKML, a state-of-the-art work supporting transformer neural networks.
Lattice-based Public Key Encryption with Authorized Keyword Search: Construction, Implementation, and Applications
Public key encryption with keyword search (PEKS), formalized by Boneh et al. [EUROCRYPT' 04], enables secure searching for specific keywords in the ciphertext. Nevertheless, in certain scenarios, varying user tiers are granted disparate data searching privileges, and administrators need to restrict the searchability of ciphertexts to select users exclusively. To address this concern, Jiang et al. [ACISP' 16] devised a variant of PEKS, namely public key encryption with authorized keyword search (PEAKS), wherein solely authorized users possess the ability to conduct targeted keyword searches. Nonetheless, it is unable to resist quantum computing attacks. As a result, research on authorizing users to search for keywords while achieving quantum security is of great significance.
In this work, we present a novel construction, namely lattice-based PEAKS (L-PEAKS), which is the first mechanism to permit the authority to authorize users to search different keyword sets while ensuring quantum-safe properties. Specifically, the keyword is encrypted with a public key, and each authorized user needs to obtain a search privilege from an authority. The authority distributes an authorized token to a user for a given time period, and the user generates a trapdoor for any authorized keyword. Technically, we utilize several lattice sampling and basis extension algorithms to fight against attacks from quantum adversaries. Moreover, we leverage identity-based encryption (IBE) to alleviate the bottleneck of public key management. Furthermore, we conduct parameter analysis, security reductions, and theoretical complexity comparisons of our scheme, and perform comprehensive evaluations on a commodity machine for completeness. Our L-PEAKS satisfies IND-sID-CKA and T-EUF security and is efficient in terms of space and computation complexity compared to other existing primitives.
Quantum Unpredictability
Unpredictable functions (UPFs) play essential roles in classical cryptography, including message authentication codes (MACs) and digital signatures. In this paper, we introduce a quantum analog of UPFs, which we call unpredictable state generators (UPSGs). UPSGs are implied by pseudorandom function-like state generators (PRFSs), which are a quantum analog of pseudorandom functions (PRFs), and therefore UPSGs could exist even if one-way functions do not exist, similar to other recently introduced primitives like pseudorandom state generators (PRSGs), one-way state generators (OWSGs), and EFIs. In classical cryptography, UPFs are equivalent to PRFs, but in the quantum case the equivalence is not clear, and UPSGs could be weaker than PRFSs. Despite this, we demonstrate that all known applications of PRFSs are also achievable with UPSGs. They include IND-CPA-secure secret-key encryption and EUF-CMA-secure MACs with unclonable tags. Our findings suggest that, for many applications, quantum unpredictability, rather than quantum pseudorandomness, is sufficient.
An Efficient All-to-All GCD Algorithm for Low Entropy RSA Key Factorization
RSA is an incredibly successful and widely used asymmetric encryption algorithm. One type of implementation flaw in RSA is low entropy in key generation, specifically in the prime number creation stage. This can occur due to flawed usage of random prime number generator libraries, or on computers that lack a source of external entropy. These implementation flaws result in some RSA keys sharing prime factors, which means that the full factorization of the public modulus can be recovered extremely efficiently by computing the GCD of the two public moduli that share a prime factor. However, since one does not know a priori which of the composite moduli share a prime factor, an all-to-all GCD attack (also known as a batch GCD attack, or a bulk GCD attack) can be performed on the available public keys to recover any shared prime factors. This study describes a novel all-to-all batch GCD algorithm, which we refer to as the binary tree batch GCD algorithm, that is more efficient than the current best batch GCD algorithm (the remainder tree batch GCD algorithm). A comparison against the best existing batch GCD method (a product tree followed by a remainder tree computation) is given using a dataset of random RSA moduli constructed such that some of the moduli share prime factors. The proposed binary tree batch GCD algorithm has better runtime than the existing remainder tree batch GCD algorithm, although asymptotically the two have nearly identical scaling, and its complexity depends on how many shared prime factors exist in the set of RSA keys. In practice, our implementation of the proposed binary tree batch GCD algorithm achieves a roughly 6x speedup compared to the standard remainder tree batch GCD approach.
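For reference, a compact sketch of the product-tree/remainder-tree baseline that the proposed binary tree algorithm is compared against (toy moduli; the paper's own variant is not reproduced here):

```python
# Batch GCD: build a product tree of all moduli, push the root product
# down a remainder tree modulo N_i^2, then gcd(N_i, (P mod N_i^2) / N_i)
# exposes any prime that N_i shares with another modulus.
from math import gcd

def batch_gcd(moduli):
    tree = [list(moduli)]                      # product tree, leaves first
    while len(tree[-1]) > 1:
        lvl = tree[-1]
        tree.append([lvl[i] * lvl[i + 1] for i in range(0, len(lvl) - 1, 2)]
                    + ([lvl[-1]] if len(lvl) % 2 else []))
    rem = tree[-1]                             # remainder tree, root down
    for lvl in reversed(tree[:-1]):
        rem = [rem[i // 2] % (n * n) for i, n in enumerate(lvl)]
    return [gcd(n, r // n) for n, r in zip(moduli, rem)]

# the two moduli sharing the prime 101 are exposed; the third is not
print(batch_gcd([101 * 103, 101 * 107, 109 * 113]))  # -> [101, 101, 1]
```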
Private Computations on Streaming Data
We present a framework for privacy-preserving streaming algorithms which combine the memory efficiency of streaming algorithms with strong privacy guarantees. These algorithms enable a number of servers to compute aggregate statistics efficiently on large quantities of user data without learning the users' inputs. While there exists limited prior work that fits within our model, ours is the first to formally define a general framework, interpret existing methods within it, and develop new tools broadly applicable to the model. To highlight our model, we designed and implemented a new privacy-preserving streaming algorithm to compute heavy hitters, the most frequent elements in a data stream. We provide a performance comparison between our system and Poplar, the only other private statistics algorithm that supports heavy hitters, benchmarking both systems on the same hardware platform. Of note, Poplar requires linear space compared to our poly-logarithmic space, making our system the first to compute heavy hitters within the privacy-preserving streaming model. A small memory footprint allows our algorithm (among other benefits) to run efficiently on very large input sizes without running out of memory or crashing.
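As an illustration of the (non-private) streaming building block, the classic Misra-Gries summary finds heavy hitters in space independent of the stream length, which is the kind of memory efficiency the private framework aims to preserve; a minimal sketch (not the paper's private protocol):

```python
# Misra-Gries: with k - 1 counters, every element occurring more than
# m/k times in a stream of length m survives in the summary.
def misra_gries(stream, k):
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:  # decrement all counters; drop the ones that reach zero
            counters = {x: c - 1 for x, c in counters.items() if c > 1}
    return counters

print(misra_gries("abracadabra", k=3))  # 'a' (5 of 11 > 11/3) survives
```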
LINE: Cryptosystem based on linear equations for logarithmic signatures
The discourse herein pertains to a directional encryption cryptosystem predicated upon logarithmic signatures interconnected via a system of linear equations (we call it LINE). A logarithmic signature serves as a foundational cryptographic primitive within the algorithm, characterized by distinct cryptographic attributes including nonlinearity, noncommutativity, unidirectionality, and factorizability by key. The confidentiality of the cryptosystem is contingent upon the presence of an incomplete system of equations and the substantial ambiguity inherent in the matrix transformations integral to the algorithm. Classical cryptanalysis endeavors are constrained by the potency of the secret matrix transformation and the indeterminacy surrounding solutions to the system of linear equations featuring logarithmic signatures. Such cryptanalysis methodologies, being exhaustive in nature, invariably exhibit exponential complexity. The absence of inherent group computations within the algorithm, and by extension, the inability to exploit group properties associated with the periodicity of group elements, serves to limit quantum cryptanalysis to Grover's search algorithm. LINE, predicated upon an incomplete system of linear equations, embodies security levels ranging from 1 to 5, as stipulated by NIST, and thus presents a promising candidate for the construction of post-quantum cryptosystems.
Beale Cipher 1 and Cipher 3: Numbers With No Messages
This paper's purpose is to give a new method of analyzing Beale Cipher 1 and Cipher 3 and to show that there is no key which will decipher them into sentences.
Previous research has largely used statistical methods to
either decipher them or prove they have no solution. Some
of these methods show that there is a high probability, but not certainty, that they are unsolvable. Both ciphers remain unsolved.
The methods used in this paper are not statistical ones
based on thousands of samples. The evidence given here shows there is a high correlation between the locations of certain numbers in the ciphers and locations in the written text that was given with these ciphers in the 1885 pamphlet called "The Beale Papers".
Evidence is correlated with a long monotonically increasing Gillogly String in Cipher 1, when translated with the Declaration of Independence given in the pamphlet.
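For readers unfamiliar with the keying, a toy sketch of how a Beale-style number sequence is decoded against a key text (a stand-in string here, not the actual Declaration of Independence); Cipher 2 was solved in exactly this way, and the paper argues no key text does the same for Ciphers 1 and 3:

```python
# Book-cipher decoding: each cipher number selects a word of the key
# text (1-indexed) and decodes to that word's first letter.
def decode(numbers, key_text):
    words = key_text.split()
    return "".join(words[n - 1][0] for n in numbers)

key = "the history effectively ends where legends love overtake it"
print(decode([2, 3, 6, 7, 8], key))  # -> "hello" (toy example)
```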
The Beale Papers' writer was anonymous, and words in the three written letters in the 1885 pamphlet are compared with locations of numbers in the ciphers to show who the writer was.
Emphasis is on numbers which are controllable by the encipherer. Letter location sums are used when they are the most plausible ones found.
Evidence supports the statement that Cipher 1 and Cipher 3 are unintelligible. It also supports the statement that they were designed to have no intelligible sentences because they were part of a complex game made by the anonymous writer of The Beale Papers.
Lower-Bounds on Public-Key Operations in PIR
Private information retrieval (PIR) is a fundamental cryptographic primitive that allows a user to fetch a database entry without revealing to the server which database entry it learns. PIR becomes non-trivial if the server communication is less than the database size. We show that building (even) very weak forms of single-server PIR protocols, without pre-processing, requires the number of public-key operations to scale linearly in the database size. This holds irrespective of the number of symmetric-key operations performed by the parties.
We then use this bound to examine the related problem of communication efficient oblivious transfer (OT) extension.
Oblivious transfer is a crucial building block in secure multi-party computation (MPC). In most MPC protocols, OT invocations are the main bottleneck in terms of computation and communication. OT extension techniques allow one to minimize the number of public-key operations in MPC protocols. One drawback of all existing OT extension protocols is their communication overhead. In particular, the sender’s communication is roughly double what is information-theoretically optimal.
We show that OT extension with close to optimal sender communication is impossible, illustrating that the communication overhead is inherent. Our techniques go much further; we can show many lower bounds on communication-efficient MPC. For example, we prove that to build high-rate string OT from generic groups, the sender needs to perform linearly many group operations.
LPN-based Attacks in the White-box Setting
In white-box cryptography, early protection techniques have fallen to the automated Differential Computation Analysis attack (DCA), leading to new countermeasures and attacks. A standard side-channel countermeasure, Ishai-Sahai-Wagner's masking scheme (ISW, CRYPTO 2003), prevents Differential Computation Analysis but was shown to be vulnerable in the white-box context to the Linear Decoding Analysis attack (LDA). However, recent quadratic and cubic masking schemes by Biryukov-Udovenko (ASIACRYPT 2018) and Seker-Eisenbarth-Liskiewicz (CHES 2021) prevent LDA and force the use of its higher-degree generalizations, which have much higher complexity.
In this work, we study the relationship between the security of these and related schemes and the Learning Parity with Noise (LPN) problem, and propose a new automated attack that applies an LPN-solving algorithm to white-box implementations. The attack effectively exploits strong linear approximations of the masking scheme and can thus be seen as a combination of the DCA and LDA techniques. Unlike previous attacks, the complexity of this algorithm depends on the approximation error, thereby allowing new practical attacks on masking schemes that previously resisted automated analysis. We demonstrate this theoretically and experimentally, exposing multiple cases where the LPN-based method significantly outperforms LDA and DCA methods, including their higher-order variants.
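As an illustration of the noise-free special case (LDA), the following toy sketch searches for an exact GF(2) linear decoding of a target bit from recorded trace bits; with noisy approximations the same task becomes an LPN instance. All data here is synthetic and the code makes no claim about the paper's algorithm:

```python
# Linear Decoding Analysis, toy version: Gaussian elimination over GF(2)
# on [traces | target] finds a column mask whose XOR reproduces the target.
import numpy as np

def lda(traces: np.ndarray, target: np.ndarray):
    a = np.column_stack([traces, target]).astype(np.uint8) % 2
    rows, cols = a.shape
    pivots, r = [], 0
    for c in range(cols - 1):
        hit = next((i for i in range(r, rows) if a[i, c]), None)
        if hit is None:
            continue
        a[[r, hit]] = a[[hit, r]]
        for i in range(rows):
            if i != r and a[i, c]:
                a[i] ^= a[r]
        pivots.append(c); r += 1
    if any(a[i, -1] for i in range(r, rows)):
        return None                      # inconsistent: no exact decoding
    sol = np.zeros(cols - 1, dtype=np.uint8)
    for i, c in enumerate(pivots):
        sol[c] = a[i, -1]
    return sol

rng = np.random.default_rng(1)
traces = rng.integers(0, 2, size=(64, 16), dtype=np.uint8)
target = traces[:, 3] ^ traces[:, 7]     # secret bit = XOR of two shares
mask = lda(traces, target)
assert ((traces @ mask) % 2 == target).all()
print(np.flatnonzero(mask))              # -> [3 7]: the two shares found
```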
This work applies the LPN problem beyond its usual post-quantum cryptography boundary, strengthening its interest in the cryptographic community, while expanding the range of automated attacks by presenting a new direction for breaking masking schemes in the white-box model.
How to Make Rational Arguments Practical and Extractable
We investigate proof systems where security holds against rational parties instead of malicious ones. Our starting point is the notion of rational arguments, a variant of rational proofs (Azar and Micali, STOC 2012) where security holds against rational adversaries that are also computationally bounded.
Rational arguments are an interesting primitive because they generally allow for very efficient protocols, and in particular sublinear verification (i.e., where the verifier does not have to read the entire input). In this paper we aim at narrowing the gap between the literature on rational schemes and real-world applications. Our contribution is two-fold.
We provide the first construction of rational arguments for the class of polynomial computations that is practical (i.e., it can be applied to real-world computations on reasonably common hardware) and has logarithmic communication.
On the technical side, we obtain this result through a compiler from information-theoretic protocols and rational proofs for polynomial evaluation. The latter could be of independent interest.
As a second contribution, we propose a new notion of extractability for rational arguments. Through this notion we can obtain arguments where knowledge of a witness is incentivized (rather than incentivizing mere soundness).
We show how our aforementioned compiler can also be applied to obtain efficient extractable rational arguments for the same class of polynomial computations.
Succinct Functional Commitments for Circuits from k-Lin
A functional commitment allows a user to commit to an input x and later open the commitment to a value y = f(x) for an arbitrary function f. The size of the commitment and the opening should be sublinear in the size of the input and of the function.
In this work, we give the first pairing-based functional commitment for arbitrary circuits where the size of the commitment and the size of the opening consist of a constant number of group elements. Security relies on the standard bilateral k-Lin assumption. This is the first scheme with this level of succinctness from falsifiable bilinear map assumptions (previous approaches required SNARKs for NP). This is also the first functional commitment scheme for general circuits with constant-size commitments and openings from any assumption that makes fully black-box use of cryptographic primitives and algorithms. As an immediate consequence, we also obtain a succinct non-interactive argument for arithmetic circuits (i.e., a SNARG for P/poly) with a universal setup and where the proofs consist of a constant number of group elements. In particular, the CRS in our SNARG only depends on the size of the arithmetic circuit rather than the circuit itself; the same CRS can be used to verify computations with respect to different circuits. Our construction relies on a new notion of projective chainable commitments which may be of independent interest.
Unstructured Inversions of New Hope
Introduced as a new protocol implemented in “Chrome Canary” for the Google Inc. Chrome browser, “New Hope” is engineered as a post-quantum key exchange for the TLS 1.2 protocol. The exchange is built on revised lattice-based cryptography: New Hope incorporates the key-encapsulation mechanism of Peikert, which is itself a modified Ring-LWE scheme. The search space used to introduce the closest-vector problem is generated by the intersection of a tesseract and a hexadecachoron, i.e., the ℓ∞-ball and the ℓ1-ball respectively. This intersection results in the 24-cell 𝒱 of the lattice 𝒟̃4. With respect to the density of the Voronoi cell 𝒱, the mitigation against backdoor attacks proposed by the authors of New Hope may not withstand such attempts if enabled by a quantum computer capable of implementing Grover’s search algorithm.
Committing AVID with Partial Retrieval and Optimal Storage
Asynchronous Verifiable Information Dispersal (AVID) allows a dealer to disperse a message across a collection of server replicas consistently and efficiently, such that any future client can reliably retrieve the message even if some servers fail.
Since AVID was introduced by Cachin and Tessaro in 2005, several works improved the asymptotic communication complexity of AVID protocols.
However, recent gains in communication complexity have come at the expense of sub-optimal storage, which is the dominant cost in long-term archiving.
Moreover, recent works do not provide a mechanism to detect errors until the retrieval stage, which may result in completely wasted long-term storage if the dealer is malicious.
In this work, we contribute a new AVID construction that achieves optimal storage and guaranteed output delivery, without sacrificing communication complexity during dispersal or retrieval.
First, we introduce a technique that bootstraps from dispersal of a message with sub-optimal storage to one with optimal storage.
Second, we define and construct an AVID protocol that is robust, meaning that all server replicas are guaranteed at dispersal time that their fragments will contribute toward retrieval of a valid message.
Third, we add the new possibility that some server replicas may lose their fragment in between dispersal and retrieval (as is likely in the long-term archiving scenario).
This allows us to rely on fewer available replicas for retrieval than are required for dispersal.
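For intuition, information dispersal is typically built on an erasure code. The following Python sketch shows only that generic Reed-Solomon-style core over GF(257), under the assumption that fragments are later authenticated by commitments as in AVID; it is not this paper's construction. Any k of the n fragments reconstruct the message.

P = 257  # prime field large enough for one message byte per symbol

def lagrange_eval(pts, x):
    # Evaluate the unique polynomial of degree < len(pts) through pts at x.
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num = den = 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def disperse(msg: bytes, n: int):
    # Fragment x is the message polynomial evaluated at a fresh point x.
    pts = list(enumerate(msg))
    return [(x, lagrange_eval(pts, x)) for x in range(len(msg), len(msg) + n)]

def retrieve(frags, k: int) -> bytes:
    # Any k fragments pin down the polynomial; re-read points 0..k-1.
    return bytes(lagrange_eval(frags[:k], i) for i in range(k))

frags = disperse(b"archive me", n=16)        # 10-byte message, 16 fragments
assert retrieve(frags[3:], k=10) == b"archive me"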
A Plug-and-Play Long-Range Defense System for Proof-of-Stake Blockchains
In recent years, many blockchain systems have progressively transitioned to proof-of-stake (PoS) consensus algorithms. These algorithms are not only more energy efficient than proof-of-work but are also well-studied and widely accepted within the community. However, PoS systems are susceptible to a particularly powerful "long-range" attack, where an adversary can corrupt the validator set retroactively and present forked versions of the blockchain. These versions would still be acceptable to clients, thereby creating the potential for double-spending. Several methods and research efforts have proposed countermeasures against such attacks. Still, they often necessitate modifications to the underlying blockchain, introduce heavy assumptions such as centralized entities, or prove inefficient for securely bootstrapping light clients.
In this work, we propose a method of defending against these attacks with the aid of external servers running our protocol. Our method does not require any soft or hard-forks on the underlying blockchain and operates under reasonable assumptions, specifically the requirement of at least one honest server.
Central to our approach is a new primitive called "Insertable Proof of Sequential Work" (InPoSW). Traditional PoSW ensures that a server performs computational tasks that cannot be parallelized and require a minimum execution time, effectively timestamping the input data. InPoSW additionally allows the prover to "insert" new data into an ongoing InPoSW instance. This primitive can be of independent interest for other timestamping applications. Compared to naively adopting prior PoSW schemes for InPoSW, our construction achieves a >22× storage reduction on the server side and a >17900× communication cost reduction for each verification.
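As a hedged illustration of the interface (not the paper's construction, whose verification is succinct), the following Python toy maintains a sequential hash chain and lets the prover bind new data to the current chain position; the toy verifier simply replays the chain. All names are illustrative.

import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

class ToyInPoSW:
    def __init__(self, seed: bytes):
        self.state, self.step, self.inserts = H(seed), 0, []

    def tick(self):                          # one inherently sequential step
        self.state, self.step = H(self.state), self.step + 1

    def insert(self, data: bytes):           # bind data to the current position
        self.inserts.append((self.step, data))
        self.state = H(self.state, data)

def verify(seed: bytes, inserts, steps: int, claimed: bytes) -> bool:
    replay, by_step = ToyInPoSW(seed), dict(inserts)
    for _ in range(steps):
        if replay.step in by_step:
            replay.insert(by_step[replay.step])
        replay.tick()
    return replay.state == claimed

p = ToyInPoSW(b"genesis"); p.tick(); p.tick()
p.insert(b"block 42"); p.tick()              # data provably entered at step 2
assert verify(b"genesis", p.inserts, steps=3, claimed=p.state)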
Xproofs: New Aggregatable and Maintainable Matrix Commitment with Optimal Proof Size
Vector Commitment (VC) enables one to commit to a vector, and then the element at a specific position can be opened, with proof of consistency to the initial commitment. VC is a powerful primitive with various applications, including stateless cryptocurrencies. Recently, matrix commitment Matproofs (Liu and Zhang, CCS 2022), as an extension of VC, has been proposed to reduce the communication and computation complexity of VC-based cryptocurrencies. However, Matproofs requires linear-sized public parameters, and the aggregated proof size may also increase linearly with the number of individual proofs aggregated. Additionally, the proof updating process involves third parties, known as Proof-Serving Nodes (PSNs), which leads to extra storage and communication overhead. In this paper, we first propose a multi-dimensional variant of matrix commitment and construct a new matrix commitment scheme for two-dimensional matrices, called 2D-Xproofs, which achieves optimal aggregated proof size without using PSNs. Furthermore, we present a highly maintainable three-dimensional scheme, 3D-Xproofs, which updates all proofs within time sublinear in the size of the committed matrix without PSNs' assistance. More generally, we could further increase the matrix dimensionality to achieve more efficient proof updates. Finally, we demonstrate the security of our schemes, showing that both schemes are position binding. We also implement both schemes, and the results indicate that our schemes enjoy constant-sized aggregated proofs and sublinear-sized public parameters, and that the proof update time in 3D-Xproofs is faster than in Matproofs.
Vector Commitments with Efficient Updates
Dynamic vector commitments that enable local updates of opening proofs have applications ranging from verifiable databases with membership changes to stateless clients on blockchains. In these applications, each user maintains a relevant subset of the committed messages and the corresponding opening proofs with the goal of ensuring a succinct global state. When the messages are updated, users are given some global update information and update their opening proofs to match the new vector commitment. We investigate the relation between the size of the update information and the runtime complexity needed to update an individual opening proof. Existing vector commitment schemes require that either the information size or the runtime scale linearly in the number of updated state elements. We construct a vector commitment scheme that asymptotically achieves update information size and update runtime that are both sublinear in the number of updated elements. We prove an information-theoretic lower bound on the relation between the update information size and runtime complexity that shows the asymptotic optimality of our scheme. In the balanced parameter regime, our constructions outperform Verkle commitments in terms of both the update information size and runtime, but make use of larger public parameters.
A note on ``a new password-authenticated module learning with rounding-based key exchange protocol: Saber.PAKE''
We show that the Seyhan-Akleylek key exchange protocol [J. Supercomput., 2023, 79:17859-17896] cannot resist offline dictionary attacks and impersonation attacks, contrary to what is claimed.
Instant Zero Knowledge Proof of Reserve
We present a non-interactive, publicly verifiable scheme that allows one to assert the assets of a financial organization instantly and incrementally in zero knowledge with high throughput. It is enabled by the recent breakthrough in lookup arguments, where the prover cost can be independent of the lookup table size after a pre-processing step. We extend the cq protocol and develop an aggregated non-membership proof for zero-knowledge sets. Based on it, we design a non-intrusive protocol that works for pseudo-anonymous cryptocurrencies such as BTC. It has O(n log(n)) prover complexity and O(1) proof size, where n is the platform throughput (rather than the anonymity set size). We implement and evaluate the protocol: running on a 56-core server, it supports 1024 transactions per second.
Boomy: Batch Opening Of Multivariate polYnomial commitment
We present Boomy, a multivariate polynomial commitment scheme enabling the proof of the evaluation of multiple points, i.e., batch opening. Boomy is the natural extension of two popular protocols: the univariate polynomial commitment scheme of Kate, Zaverucha and Goldberg (ASIACRYPT 2010) and its multivariate counterpart from Papamanthou, Shi and Tamassia (TCC 2013). Our construction is proven secure under the selective security model. In this paper, we present Boomy's complexity and the applications on which it can have a significant impact. In fact, Boomy is perfectly suited to tackling blockchain data availability problems, shrinking existing challenges. We also present special lower-complexity cases that occur frequently in practical situations.
How to Use Quantum Indistinguishability Obfuscation
Quantum copy protection, introduced by Aaronson, enables giving out a quantum program-description that cannot be meaningfully duplicated. Despite over a decade of study, copy protection is only known to be possible for a very limited class of programs.
As our first contribution, we show how to achieve "best-possible" copy protection for all programs. We do this by introducing quantum state indistinguishability obfuscation (qsiO), a notion of obfuscation for quantum descriptions of classical programs. We show that applying qsiO to a program immediately achieves best-possible copy protection.
Our second contribution is to show that, assuming injective one-way functions exist, qsiO is concrete copy protection for a large family of puncturable programs --- significantly expanding the class of copy-protectable programs. A key tool in our proof is a new variant of unclonable encryption (UE) that we call coupled unclonable encryption (cUE). While constructing UE in the standard model remains an important open problem, we are able to build cUE from one-way functions. If we additionally assume the existence of UE, then we can further expand the class of puncturable programs for which qsiO is copy protection.
Finally, we construct qsiO relative to an efficient quantum oracle.
Compact and Secure Zero-Knowledge Proofs for Quantum-Resistant Cryptography from Modular Lattice Innovations
This paper presents a comprehensive security analysis of the Adh zero-knowledge proof system, a novel lattice-based, quantum-resistant proof of possession system. The Adh system offers compact key and proof sizes, making it suitable for real-world digital signature and public key agreement protocols. We explore its security by reducing it to the hardness of the Module-ISIS problem and introduce three new variants: Module-ISIS+, Module-ISIS*, and Module-ISIS**. These constructions enhance security through variations on chaining mechanisms. We also provide a reduction to the module modulus subset sum problem under conservative assumptions.
Empirical evidence and statistical testing support the zero-knowledge, completeness, and soundness properties of the Adh proof system. Comparative analysis demonstrates the Adh system's advantages in terms of key and proof sizes over existing post-quantum schemes like Kyber and Dilithium.
This paper represents an early preprint and is a work in progress. The core security arguments and experimental results are present, and formal proofs and additional analysis are provided. We invite feedback and collaboration from the research community to further strengthen the security foundations of the Adh system and explore its potential applications in quantum-resistant cryptography.
SigmaSuite: How to Minimize Foreign Arithmetic in ZKP Circuits While Keeping Succinct Final Verification
Foreign field arithmetic often creates significant additional overheads in zero-knowledge proof circuits. Previous work has offloaded foreign arithmetic from proof circuits by using effective and often simple primitives such as Sigma protocols. While these successfully move the foreign field work outside of the circuit, the cost for the Sigma protocol's verifier still remains high. In use cases where the verifier is computationally constrained, this poses a major challenge. One such use case is proof composition, where foreign arithmetic causes a blowup in the costs for the verifier circuit. In this work we show that by using a folding scheme with Sigmabus and other such uniform verifier offloading techniques, we can remove foreign field arithmetic from zero-knowledge proof circuits while achieving succinct final verification. We do this by applying prior techniques iteratively and accumulating the resulting verifier work into one folding proof of size O(|F|) group elements, where |F| is the size of a single Sigma verifier's computation. Then, by using an existing zkSNARK, we can further compress to a proof of size O(log |F|), which can be checked succinctly by a computationally constrained verifier.
On amortization techniques for FRI-based SNARKs
We present two techniques to improve the computational and/or communication costs of STARK proofs: packing and modular split-and-pack.
Packing allows one to generate a single proof of the satisfiability of several constraints. We achieve this by packing the evaluations of all relevant polynomials into the same Merkle leaves, and combining all DEEP FRI functions into a single randomized validity function. Our benchmarks show that packing reduces the verification time and proof size compared to individually proving the satisfiability of each witness, while only increasing the prover time moderately.
Modular split-and-pack is a proof acceleration technique where the prover divides a witness into smaller sub-witnesses. It then uses packing to prove the simultaneous satisfiability of each sub-witness. Compared to producing a proof of the original witness, splitting improves the prover time and memory usage, while increasing the verifier time and proof size. Ideas similar to modular split-and-pack seem to be used throughout the industry, but 1) generally execution traces are split by choosing the first rows, then the next rows, and so on; and 2) full recursion is used to prove the simultaneous satisfiability of the sub-witnesses, usually combined with a final wrapper proof (typically a Groth16 proof). We present a different way to split the witness that allows for an efficient re-writing of Plonkish-type constraints. Based on our benchmarks, we believe this approach (together with a wrapper proof) can improve upon existing splitting methods, resulting in a faster prover at essentially no cost in proof size and verification time.
Both techniques apply to popular FRI-based proof systems such as ethSTARK, Plonky2/3, RISC Zero, and Boojum.
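To illustrate the packing idea at the Merkle-tree level, here is a minimal Python sketch, assuming a toy commitment layout rather than the actual data structures of the systems named above: all polynomials' evaluations at a given domain point share one leaf, so a single authentication path opens all of them at once.

import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def packed_leaves(evals_per_poly):
    # evals_per_poly[p][i] = polynomial p evaluated at domain point i;
    # one leaf commits to the whole column of evaluations at point i.
    return [H(b"".join(v.to_bytes(8, "big") for v in col))
            for col in zip(*evals_per_poly)]

def merkle_root(leaves):
    layer = list(leaves)
    while len(layer) > 1:
        if len(layer) % 2:                   # duplicate a lone last node
            layer.append(layer[-1])
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

# Three polynomials over a 4-point domain share one tree: opening leaf i
# reveals all three evaluations at point i with a single path.
root = merkle_root(packed_leaves([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]))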
Chocobo: Creating Homomorphic Circuit Operating with Functional Bootstrapping in basis B
The TFHE cryptosystem only supports a small plaintext space, up to 5 bits with usual parameters. However, one solution to circumvent this limitation is to decompose input messages into a basis B over multiple ciphertexts. In this work, we introduce B-gates, an extension of logic gates to non-binary bases, to compute base-B logic circuits. The flexibility introduced by our approach improves the speed performance over previous approaches such as the so-called tree-based method, which requires an exponential number of operations in the number of inputs. We provide experimental results using sorting as a benchmark application and, additionally, obtain a speed-up of ×3 in latency compared to state-of-the-art BGV techniques for this application. As an additional result, we introduce a keyswitching key specific to packing TLWE ciphertexts into TRLWE ciphertexts with redundancy, which is of interest in many functional bootstrapping scenarios.
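The encoding layer underlying B-gates can be pictured with a short Python sketch (encryption itself is elided and all parameter names are illustrative): a message is split into base-B digits, each small enough for TFHE's plaintext space, and recomposed after computation.

def decompose(m: int, B: int, k: int):
    digits = []                       # least-significant digit first
    for _ in range(k):
        m, d = divmod(m, B)
        digits.append(d)
    return digits

def recompose(digits, B: int) -> int:
    return sum(d * B**i for i, d in enumerate(digits))

# A 16-bit message in base B = 16 becomes k = 4 digits of 4 bits each,
# so every digit fits TFHE's small plaintext space on its own ciphertext.
assert recompose(decompose(51966, 16, 4), 16) == 51966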
Cryptographic Accumulators: New Definitions, Enhanced Security, and Delegatable Proofs
Cryptographic accumulators, introduced in 1993 by Benaloh and De Mare, represent a set with a concise value and offer proofs of (non-)membership. Accumulators have evolved, becoming essential in anonymous credentials, e-cash, and blockchain applications. Various properties, such as dynamic and universal accumulators, emerged for specific needs, leading to multiple accumulator definitions. In 2015, Derler, Hanser, and Slamanig proposed a unified model, but new properties, including zero-knowledge security, have arisen since. We offer a new definition of accumulators, based on Derler et al.'s, that is suitable for all properties. We also introduce a new security property, unforgeability of private evaluation, to protect accumulators from forgery, and we verify this property in Barthoulot, Blazy, and Canard's recent accumulator. Finally, we discuss security properties of accumulators and the delegatable (non-)membership proofs property.
Secure Implementation of SRAM PUF for Private Key Generation
This paper endeavors to securely implement a Physical Unclonable Function (PUF) for private data generation within Field-Programmable Gate Arrays (FPGAs). SRAM PUFs are commonly utilized due to their use of memory devices for generating secret data, particularly in resource-constrained devices. However, their reliance on memory access poses side-channel threats such as data remanence decay and memory-based attacks, and the time required to generate secret data is significant. To address these issues, we propose implementing n cross-coupled inverters in Verilog to generate n secret bits, followed by a syndrome-based error-correction step hardcoded in the hardware itself. This approach improves side-channel security and reduces time consumption, albeit at the expense of additional area utilization.
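For intuition on the error-correction step, here is a hedged Python model of a generic code-offset construction with a 3x repetition code; the paper hardcodes its syndrome logic in hardware, so this is an illustration of the principle, not the authors' circuit.

import random

def enroll(puf_bits, key_bits):
    # Helper data = PUF response XOR repetition encoding of the key;
    # it can be stored publicly without revealing the key.
    code = [b for b in key_bits for _ in range(3)]
    return [p ^ c for p, c in zip(puf_bits, code)]

def reproduce(noisy_puf_bits, helper):
    code = [p ^ h for p, h in zip(noisy_puf_bits, helper)]
    # Majority vote inside each 3-bit block corrects one flipped bit.
    return [int(sum(code[i:i + 3]) >= 2) for i in range(0, len(code), 3)]

key = [random.getrandbits(1) for _ in range(128)]
puf = [random.getrandbits(1) for _ in range(384)]   # 3 PUF bits per key bit
helper = enroll(puf, key)
noisy = list(puf)
noisy[7] ^= 1                                # one cell flips on power-up
assert reproduce(noisy, helper) == key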
Blockchain Price vs. Quantity Controls
This paper studies the optimal transaction fee mechanisms for blockchains, focusing on the distinction between price-based (P) and quantity-based (Q) controls. By analyzing factors such as demand uncertainty, validator costs, cryptocurrency price fluctuations, price elasticity of demand, and levels of decentralization, we establish criteria that determine the selection of transaction fee mechanisms. We present a model framed around a Nash bargaining game, exploring how blockchain designers and validators negotiate fee structures to balance network welfare with profitability. Our findings suggest that the choice between P and Q mechanisms depends critically on the blockchain's specific technical and economic features. The study concludes that no single mechanism suits all contexts and highlights the potential for hybrid approaches that adaptively combine features of both P and Q to meet varying demands and market conditions.
A Low-Depth Homomorphic Circuit for Logistic Regression Model Training
Machine learning is an important tool for analyzing large data sets, but its use on sensitive data may be limited by regulation. One solution to this problem is to perform machine learning tasks on encrypted data using homomorphic encryption, which enables arbitrary computation on encrypted data. We take a fresh look at one specific task: training a logistic regression model on encrypted data. The most important factor in the efficiency of a solution is the multiplicative depth of the homomorphic circuit. Two prior works have given circuits with multiplicative depth of five per training iteration. We optimize one of these solutions, by Han et al. [Han+18], and give a circuit with half the multiplicative depth per iteration on average, which allows us to perform twice as many training iterations in the same amount of time.
In the process of improving the state-of-the-art circuit for this task, we identify general techniques to improve homomorphic circuit design for two broad classes of algorithms: iterative algorithms, and algorithms based on linear algebra over real numbers. First, we formalize the encoding scheme from [Han+18] for encoding linear algebra objects as plaintexts in the CKKS homomorphic encryption scheme. We also show how to use this encoding to homomorphically compute many basic linear algebra operations, including novel operations not discussed in prior work. This “toolkit” is generic, and can be used in any application based on linear algebra. Second, we demonstrate how generic compiler techniques for loop optimization can be used to reduce the multiplicative depth of iterative algorithms.
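A toy Python example conveys why restructuring matters, using a generic product circuit rather than the paper's training circuit: tracking depth per pseudo-ciphertext shows that a sequential product of eight values costs depth 7, while a balanced tree costs depth 3. In CKKS-style FHE, depth rather than multiplication count dictates the parameter sizes.

from functools import reduce

def mul(a, b):                       # (value, depth) pseudo-ciphertexts
    return a[0] * b[0], max(a[1], b[1]) + 1

def product_tree(xs):                # balanced: depth ~ log2(len(xs))
    while len(xs) > 1:
        paired = [mul(xs[i], xs[i + 1]) for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:              # an odd element passes through unchanged
            paired.append(xs[-1])
        xs = paired
    return xs[0]

cts = [(2, 0)] * 8                   # eight fresh ciphertexts at depth 0
print(reduce(mul, cts)[1])           # sequential product: depth 7
print(product_tree(cts)[1])          # balanced product:   depth 3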
Truncator: Time-space Tradeoff of Cryptographic Primitives
We present mining-based techniques to reduce the size of various cryptographic outputs without loss of security. Our approach can be generalized to multiple primitives, such as cryptographic key generation, signing, hashing and encryption schemes, by introducing a brute-forcing step for provers/senders, aimed at compressing the submitted cryptographic material.
Interestingly, mining can result in record-size cryptographic outputs, and we show that 5%-12% shorter hash digests and signatures are practically feasible even with commodity hardware. As a result, our techniques make compressing addresses and transaction signatures possible in order to pay lower fees in blockchain applications while decreasing the demand for blockchain space, a major bottleneck for initial syncing, communication and storage. Also, the effects of "compressing once, then reusing" at mass scale can be economically profitable in the long run for both the Web2 and Web3 ecosystems.
Our paradigm relies on a brute-force search operation in order to craft the primitive's output such that it fits into fewer bytes, while the "missing" fixed bytes are implied by the system parameters and omitted from the actual communication. While such compression requires computational effort depending on the level of compression, this cost is only paid at the source (i.e. in blockchains, senders are rewarded by lowered transaction fees), and the benefits of the compression are enjoyed by the whole ecosystem. As a starting point, we show how our paradigm applies to some basic primitives commonly used in blockchain applications but also traditional Web2 transactions (such as shorter digital certificates), and show how security is preserved using a bit security framework. Surprisingly, we also identified cases where wise mining strategies require proportionally less effort than naive brute-forcing, shorter hash-based signatures being one of the best examples. We also evaluate our approach for several primitives based on different levels of compression. Our evaluation concretely demonstrates the benefits both in terms of financial cost and storage if adopted by the community, and we showcase how our technique can achieve up to 83.21% reduction in smart contract gas fees at a cost of less than 4 seconds of computation on a single core.
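A minimal Python sketch of the paradigm, with an illustrative encoding rather than the paper's exact one: the sender grinds a salt until the digest begins with a fixed number of zero bytes, transmits the digest without them, and the verifier re-prepends the implied zeros.

import hashlib
from itertools import count

ZEROS = 2                            # omit 2 fixed bytes from every digest

def mine(msg: bytes):
    # Grind a salt until the digest starts with ZEROS zero bytes; expected
    # work is 256**ZEROS hash calls, paid once by the sender.
    for salt in count():
        d = hashlib.sha256(msg + salt.to_bytes(8, "big")).digest()
        if d.startswith(b"\x00" * ZEROS):
            return salt, d[ZEROS:]   # only the shortened digest is sent

def verify(msg: bytes, salt: int, short: bytes) -> bool:
    full = b"\x00" * ZEROS + short   # the omitted bytes are implied
    return hashlib.sha256(msg + salt.to_bytes(8, "big")).digest() == full

salt, short = mine(b"tx payload")    # about 65536 hashes on average
assert verify(b"tx payload", salt, short)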
Agile, Post-quantum Secure Cryptography in Avionics
To introduce a post-quantum-secure encryption scheme specifically for use in flight-computers, we used avionics’ module-isolation methods to wrap a recent encryption standard (HPKE – Hybrid Public Key Encryption) within a software partition. This solution proposes an upgrade to HPKE, using quantum-resistant ciphers (Kyber/ML-KEM and Dilithium/ML-DSA) redundantly alongside well-established ciphers, to achieve post-quantum security.
Because cryptographic technology can suddenly become obsolete as attacks become more sophisticated, "crypto-agility", the ability to swiftly replace ciphers, represents the key challenge to deployment of software like ours. Partitioning is a crucial method for establishing such agility, as it enables the replacement of compromised software without affecting software on other partitions, greatly simplifying the certification process necessary in an avionics environment.
Our performance measurements constitute initial evidence that both the memory and performance characteristics of this approach are suitable for deployment in flight-computers currently in use. Prior to optimisation, the measurements show a modest memory requirement of under 400 KB of RAM, though with a more substantial stack usage of just under 200 KB. Our most advanced redundant post-quantum cipher is five times slower than its non-redundant, pre-quantum counterpart.
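The redundancy principle can be pictured with a small Python sketch, assuming a generic concatenate-then-KDF combiner rather than the authors' exact HPKE integration: the session key remains secure as long as either the classical or the post-quantum shared secret does.

import hashlib, hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def combine(ss_classical: bytes, ss_pq: bytes, context: bytes) -> bytes:
    # Concatenate-then-KDF: the output stays pseudorandom as long as
    # either input shared secret remains secure.
    return hkdf_extract(context, ss_classical + ss_pq)

session_key = combine(b"\x11" * 32, b"\x22" * 32, b"avionics-hpke-demo")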
New self-orthogonal codes from weakly regular plateaued functions and their application in LCD codes
A linear code with few weights is a significant code family in coding theory. A linear code is considered self-orthogonal if it is contained within its dual code. Self-orthogonal codes have applications in linear complementary dual (LCD) codes, quantum codes, etc. The construction of linear codes is an interesting research problem. There are various methods to construct linear codes, and one approach involves utilizing cryptographic functions defined over finite fields. The construction of linear codes (in particular, self-orthogonal codes) from functions has been studied in the literature. In this paper, we generalize the construction method given by Heng et al. in [Des. Codes Cryptogr. 91(12), 2023] to weakly regular plateaued functions. We first construct several families of p-ary linear codes with few weights from weakly regular plateaued unbalanced (resp. balanced) functions over finite fields of odd characteristic. We observe that the constructed codes are self-orthogonal when p = 3. Then, we use the constructed ternary self-orthogonal codes to build new families of ternary LCD codes. Consequently, we obtain (almost) optimal ternary self-orthogonal codes and LCD codes.
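As a worked example of the central definition, the following Python check verifies self-orthogonality of a small ternary code (the classical tetracode, chosen for illustration; it is not one of the paper's new codes): a code is self-orthogonal precisely when all pairs of generator rows are orthogonal over GF(p).

def self_orthogonal(G, p):
    # Every pair of generator rows (including a row with itself) must
    # have inner product 0 modulo p.
    return all(sum(a * b for a, b in zip(r, s)) % p == 0
               for r in G for s in G)

tetracode = [[1, 0, 1, 1],
             [0, 1, 1, 2]]           # generator matrix over GF(3)
assert self_orthogonal(tetracode, 3)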
Eagle: Efficient Privacy Preserving Smart Contracts
The proliferation of Decentralised Finance (DeFi) and Decentralised Autonomous Organisations (DAO), which in their current form are exposed to front-running of token transactions and proposal voting, demonstrates the need to shield user inputs and internal state from the parties executing smart contracts. In this work we present “Eagle”, an efficient UC-secure protocol which realises a notion of privacy-preserving smart contracts where both the amounts of tokens and the auxiliary data given as input to a contract are kept private from all parties but the one providing the input. Prior proposals realizing privacy-preserving smart contracts on public, permissionless blockchains generally offer limited contract functionality or require a trusted third party to manage private inputs and state. We achieve our results through a combination of secure multi-party computation (MPC) and zero-knowledge proofs on Pedersen commitments. Although other approaches leverage MPC in this setting, these incur impractical computational overheads by requiring the computation of cryptographic primitives within MPC. Our solution achieves security without the need for any cryptographic primitives to be computed inside the MPC instance, and requires only a constant number of exponentiations per client input.