Papers updated in last 7 days (63 results)
Revisiting attacker's knowledge in inference attacks against Searchable Symmetric Encryption
Encrypted search schemes have been proposed to address growing privacy concerns. However, several leakage-abuse attacks have highlighted security vulnerabilities in them. Recent attacks assume attacker knowledge containing data "similar" to the indexed data. However, this vague assumption is barely discussed in the literature: how likely is it for an attacker to obtain "similar enough" data?
Our paper provides novel statistical tools, usable with any attack in this setting, to analyze its sensitivity to data similarity. First, we introduce a mathematical model based on statistical estimators to analytically understand the attacker's knowledge and the notion of similarity. Second, we conceive statistical tools to model the influence of similarity on attack accuracy. We apply our tools to three existing attacks to answer questions such as: is similarity the only factor influencing the accuracy of a given attack? Third, we show that the enforcement of a maximum index size can make the "similar-data" assumption harder to satisfy. In particular, we propose a statistical method to estimate an appropriate maximum index size for a given attack and dataset. For the best known attack on the Enron dataset, a maximum index size of 200 guarantees (with high probability) that the attack's accuracy stays below 5%.
How To Think About End-To-End Encryption and AI: Training, Processing, Disclosure, and Consent
End-to-end encryption (E2EE) has become the gold standard for securing communications, bringing strong confidentiality and privacy guarantees to billions of users worldwide. However, the current push towards widespread integration of artificial intelligence (AI) models, including in E2EE systems, raises some serious security concerns.
This work performs a critical examination of the (in)compatibility of AI models and E2EE applications. We explore this on two fronts: (1) the integration of AI assistants within E2EE applications, and (2) the use of E2EE data for training AI models. We analyze the potential security implications of each, and identify conflicts with the security guarantees of E2EE. Then, we analyze legal implications of integrating AI models in E2EE applications, given how AI integration can undermine the confidentiality that E2EE promises. Finally, we offer a list of detailed recommendations based on our technical and legal analyses, including: technical design choices that must be prioritized to uphold E2EE security; how service providers must accurately represent E2EE security; and best practices for the default behavior of AI features and for requesting user consent. We hope this paper catalyzes an informed conversation on the tensions that arise between the brisk deployment of AI and the security offered by E2EE, and guides the responsible development of new AI features.
Post-Quantum Blind Signatures from Matrix Code Equivalence
We construct a novel code-based blind signature scheme, using the Matrix Equivalence Digital Signature (MEDS) group action. The scheme is built using similar ideas to the Schnorr blind signature scheme and CSI-Otter, but uses additional public key and commitment information to overcome the difficulties that the MEDS group action faces: lack of module structure (present in Schnorr), lack of a quadratic twist (present in CSI-Otter), and non-commutativity of the acting group. We address security concerns related to public key validation, and prove the security of our protocol in the random oracle model, using the security framework of Kastner, Loss, and Xu, under a variant of the Inverse Matrix Code Equivalence problem and a mild heuristic assumption.
MERCURY: A multilinear Polynomial Commitment Scheme with constant proof size and no prover FFTs
We construct a pairing-based polynomial commitment scheme for multilinear polynomials of size $n$ where constructing an opening proof requires $O(n)$ field operations and $2n+O(\sqrt n)$ scalar multiplications. Moreover, the opening proof consists of a constant number of field elements. This is a significant improvement over previous works, which required either
1. $O(n\log n)$ field operations; or
2. an $O(\log n)$-size opening proof.
The main technical component is a new method of verifiably folding a witness via univariate polynomial division. As opposed to previous methods, the proof size and prover time remain constant *regardless of the folding factor*.
Registration-Based Encryption in the Plain Model
Registration-based encryption (RBE) is a recently developed alternative to identity-based encryption that mitigates the well-known key-escrow problem by letting each user sample its own key pair. In RBE, the key authority is substituted by a key curator, a completely transparent entity whose only job is to reliably aggregate users' keys. However, one limitation of all known RBE schemes is that they rely on a one-time trusted setup that must be computed honestly.
In this work, we ask whether this limitation is indeed inherent and we initiate the systematic study of RBE in the plain model, without any common reference string. We present the following main results:
- (Definitions) We show that the standard security definition of RBE is unachievable without a trusted setup and we propose a slight weakening, where one honest user is required to be registered in the system.
- (Constructions) We present constructions of RBE in the plain model, based on standard cryptographic assumptions. Along the way, we introduce the notions of non-interactive witness indistinguishable (NIWI) proofs secure against chosen statements attack and re-randomizable RBE, which may be of independent interest.
A major limitation of our constructions is that users must be updated upon every new registration.
- (Lower Bounds) We show that this limitation is in some sense inherent. We prove that any RBE in the plain model that satisfies a certain structural requirement, which holds for all known RBE constructions, must update all but a vanishing fraction of the users upon each new registration. This is in contrast with the standard RBE setting, where users receive a logarithmic number of updates throughout the lifetime of the system.
Don't Use It Twice: Reloaded! On the Lattice Isomorphism Group Action
Group actions have emerged as a powerful framework in post-quantum cryptography, serving as the foundation for various cryptographic primitives. The Lattice Isomorphism Problem (LIP) has recently gained attention as a promising hardness assumption for designing quantum-resistant protocols. Its formulation as a group action has opened the door to new cryptographic applications, including a commitment scheme and a linkable ring signature.
In this work, we analyze the security properties of the LIP group action and present new findings. Specifically, we demonstrate that it fails to satisfy the weak unpredictability and weak pseudorandomness properties when the adversary has access to as few as three and two instances with the same secret, respectively. This significantly improves upon prior analysis by Budroni et al. (PQCrypto 2024).
As a direct consequence of our findings, we reveal a vulnerability in the linkable ring signature scheme proposed by Khuc et al. (SPACE 2024), demonstrating that the hardness assumption underlying the linkable anonymity property does not hold.
Adaptive Adversaries in Byzantine-Robust Federated Learning: A Survey
Federated Learning (FL) has recently emerged as one of the leading paradigms for collaborative machine learning, serving as a tool for model computation without the need to expose one's privately stored data. However, despite its advantages, FL systems face severe challenges within their own security solutions, which must address both the privacy and the robustness of models. This paper focuses on vulnerabilities within the domain of FL security, with an emphasis on model robustness, identifying critical gaps in current defences, particularly against adaptive adversaries that modify their attack strategies after being disconnected and then rejoin the system to continue their attacks. To our knowledge, other surveys in this domain do not cover the concept of adaptive adversaries; this, along with the significance of their impact, serves as the main motivation for this work. Our contributions are fivefold: (1) we present a comprehensive overview of FL systems, with a complete summary of their fundamental building blocks; (2) we give an extensive overview of existing vulnerabilities that target FL systems in general; (3) we highlight baseline attack vectors as well as state-of-the-art approaches to the development of attack methods and defence mechanisms; (4) we introduce a novel baseline attack that leverages reconnecting malicious clients; and (5) we identify future research directions to address and counter adaptive attacks. We demonstrate through experimental results that existing baseline secure aggregation rules used for comparison in other works, such as Krum and Trimmed Mean, are insufficient against these attacks. Further, works improving upon those algorithms do not address this concern either. Our findings serve as a basis for redefining FL security paradigms in the direction of adaptive adversaries.
Deimos Cipher: A High-Entropy, Secure Encryption Algorithm with Strong Diffusion and Key Sensitivity
Deimos Cipher is a symmetric encryption algorithm designed to achieve high entropy, strong diffusion, and computational efficiency. It integrates HKDF with BLAKE2b for key expansion, ensuring secure key derivation from user-supplied passwords. The encryption process employs XChaCha20, a high-speed stream cipher, to provide strong security and resistance against nonce reuse attacks. To guarantee data integrity and authentication, HMAC-SHA256 is used, preventing unauthorized modifications.
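To make the described composition concrete, here is a minimal Python sketch of an encrypt-then-MAC pipeline in the same spirit; the salt/nonce sizes, key split, and wire format below are illustrative assumptions, not the paper's exact specification. XChaCha20 comes from PyCryptodome (a 24-byte nonce selects it), while HKDF-with-BLAKE2b and HMAC-SHA256 use only the standard library.

```python
# Sketch of an HKDF(BLAKE2b) + XChaCha20 + HMAC-SHA256 composition.
# Parameter choices and layout are assumptions for illustration.
import hashlib, hmac, os
from Crypto.Cipher import ChaCha20  # PyCryptodome; 24-byte nonce => XChaCha20

def hkdf_blake2b(password: bytes, salt: bytes, length: int) -> bytes:
    # HKDF (RFC 5869) instantiated with BLAKE2b as the HMAC hash.
    prk = hmac.new(salt, password, hashlib.blake2b).digest()   # extract
    okm, t, i = b"", b"", 1
    while len(okm) < length:                                   # expand
        t = hmac.new(prk, t + bytes([i]), hashlib.blake2b).digest()
        okm += t
        i += 1
    return okm[:length]

def encrypt(password: bytes, plaintext: bytes) -> bytes:
    salt, nonce = os.urandom(16), os.urandom(24)
    keys = hkdf_blake2b(password, salt, 64)
    enc_key, mac_key = keys[:32], keys[32:]
    ct = ChaCha20.new(key=enc_key, nonce=nonce).encrypt(plaintext)
    tag = hmac.new(mac_key, salt + nonce + ct, hashlib.sha256).digest()
    return salt + nonce + ct + tag

def decrypt(password: bytes, blob: bytes) -> bytes:
    salt, nonce, ct, tag = blob[:16], blob[16:40], blob[40:-32], blob[-32:]
    keys = hkdf_blake2b(password, salt, 64)
    enc_key, mac_key = keys[:32], keys[32:]
    expect = hmac.new(mac_key, salt + nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("authentication failed")
    return ChaCha20.new(key=enc_key, nonce=nonce).decrypt(ct)
```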
Security evaluations demonstrate that Deimos Cipher exhibits superior randomness, achieving 6.24 bits per byte entropy for short plaintexts and 7.9998 bits per byte for long plaintexts, surpassing industry standards like AES and ChaCha20. Avalanche Effect analysis confirms optimal diffusion, with 50.18% average bit change, ensuring high resistance to differential cryptanalysis. Additionally, key sensitivity tests reveal 50.54% ciphertext change for minimal key variations, making brute-force and key-recovery attacks impractical.
With its combination of a robust key expansion mechanism, stream cipher encryption, and cryptographic authentication, Deimos Cipher offers a secure and efficient encryption scheme suitable for secure messaging, cloud data protection, and high-security environments. This paper presents the algorithm’s design, security analysis, and benchmarking against established cryptographic standards.
The SMAesH dataset
Datasets of side-channel leakage measurements are widely used in research to develop and benchmark side-channel attack and evaluation methodologies. Compared to using custom and/or one-off datasets, widely-used and publicly available datasets improve research reproducibility and comparability. Further, performing high-quality measurements requires specific equipment and skills, while also taking a significant amount of time. Therefore, using publicly available datasets lowers the barriers to entry into side-channel research. This paper introduces the SMAesH dataset. SMAesH is an optimized masked hardware implementation of the AES with a provably secure arbitrary-order masking scheme. The SMAesH dataset contains power traces of the first-order protected version of SMAesH acquired on two FPGAs from different generations, along with the key, plaintext and masking randomness. A part of the dataset uses uniformly random keys and plaintexts to enable leakage profiling, while another part uses a fixed key (still with uniformly random plaintexts) to enable attack validation or leakage assessment in a fixed-versus-random setting. We document the experimental setup used to acquire the dataset and also discuss the particular methods employed to maximize the information content in the leakage traces, such as power supply selection, fine-grained trace alignment and resolution optimization. We perform an in-depth leakage analysis for the two targets. Finally, we briefly discuss the known attacks against the dataset.
ammBoost: State Growth Control for AMMs
Automated market makers (AMMs) are a prime example of Web 3.0 applications. Their popularity and high trading activity have led to serious scalability issues in terms of throughput and state size. In this paper, we address these challenges by utilizing a new sidechain architecture, building a system called ammBoost. ammBoost reduces the amount of on-chain transactions, boosts throughput, and supports blockchain pruning. We devise several techniques to enable layer 2 processing for AMMs, including a functionality-split and layer 2 traffic summarization paradigm, an epoch-based deposit mechanism, and pool snapshot-based and delayed token-payout trading. We also build a proof-of-concept for a Uniswap-inspired use case to empirically evaluate performance. Our experiments show that ammBoost decreases the gas cost by 96.05% and the chain growth by at least 93.42%, and that it can support up to 500x the daily traffic volume of Uniswap. We also compare ammBoost to an Optimism-inspired solution, showing a 99.94% reduction in transaction finality time.
AI Agents in Cryptoland: Practical Attacks and No Silver Bullet
The integration of AI agents with Web3 ecosystems harnesses their complementary potential for autonomy and openness, yet also introduces underexplored security risks, as these agents dynamically interact with financial protocols and immutable smart contracts. This paper investigates the vulnerabilities of AI agents within blockchain-based financial ecosystems when exposed to adversarial threats in real-world scenarios. We introduce the concept of context manipulation -- a comprehensive attack vector that exploits unprotected context surfaces, including input channels, memory modules, and external data feeds. Through empirical analysis of ElizaOS, a decentralized AI agent framework for automated Web3 operations, we demonstrate how adversaries can manipulate context by injecting malicious instructions into prompts or historical interaction records, leading to unintended asset transfers and protocol violations which could be financially devastating. Our findings indicate that prompt-based defenses are insufficient, as malicious inputs can corrupt an agent's stored context, creating cascading vulnerabilities across interactions and platforms. This research highlights the urgent need to develop AI agents that are both secure and fiduciarily responsible.
Deniable Secret Sharing
We introduce deniable secret sharing (DSS), which, analogously to deniable encryption, enables shareholders to produce fake shares that are consistent with a target “fake message”, regardless of the original secret. In contrast to deniable encryption, in a DSS scheme an adversary sees multiple shares, some of which might be real, and some fake. This makes DSS a more difficult task, especially in situations where the fake shares need to be generated by individual shareholders, without coordination with other shareholders.
We define several desirable properties for DSS, and show both positive and negative results for each. The strongest property is fake hiding, which is a natural analogy of deniability for encryption: given a complete set of shares, an adversary cannot determine whether any shares are fake. We show a construction based on Shamir secret sharing that achieves fake hiding as long as (1) the fakers are qualified (number $t$ or more), and (2) the set of real shares which the adversary sees is unqualified. Next we show a construction based on indistinguishability obfuscation that relaxes condition (1) and achieves fake hiding even when the fakers are unqualified (as long as they comprise more than half of the shareholders). We also extend the first construction to provide the weaker property of faker anonymity for all thresholds. (Faker anonymity requires that given some real shares and some fake shares, an adversary should not be able to tell which are fake, even if it can tell that some fake shares are present.) All of these constructions require the fakers to coordinate in order to produce fake shares.
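A toy sketch of one natural way the Shamir-based construction can work under the stated conditions (qualified fakers, unqualified real view); this is our illustrative reconstruction of the idea, not the paper's algorithm. Since the fakers are qualified, they can recover the real polynomial, hence every real share, and then interpolate a fresh degree-$(t-1)$ polynomial through the fake secret and the non-fakers' real shares.

```python
# Toy sketch over a prime field; illustrative reconstruction of the idea only.
import random
P = 2**61 - 1  # prime field

def shamir_share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return {i: f(i) for i in range(1, n + 1)}

def interpolate(points, x):
    # Lagrange interpolation at x over GF(P).
    r = 0
    for xi, yi in points.items():
        num = den = 1
        for xj in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        r = (r + yi * num * pow(den, P - 2, P)) % P
    return r

def fake_shares(real, fakers, fake_secret, t):
    # Qualified fakers can reconstruct f, hence every real share; the
    # non-fakers (whose shares the adversary sees) must be unqualified.
    non_fakers = {i: real[i] for i in real if i not in fakers}
    assert len(fakers) >= t and len(non_fakers) <= t - 1
    anchor = dict(non_fakers)
    anchor[0] = fake_secret                  # force g(0) = fake message
    pool = [i for i in fakers if i not in anchor]
    for i in pool[: t - len(anchor)]:        # pad to t points => unique g
        anchor[i] = random.randrange(P)
    return {i: interpolate(anchor, i) for i in fakers}

shares = shamir_share(1234, t=3, n=4)
fakes = fake_shares(shares, fakers={1, 2, 3}, fake_secret=9999, t=3)
full = {**shares, **fakes}                   # adversary's complete view
mixed = {i: full[i] for i in (2, 3, 4)}      # some fake, some real
assert interpolate(mixed, 0) == 9999         # every qualified set sees 9999
```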
On the negative side, we first show that fake hiding is unachievable when the fakers are a minority, even if the fakers coordinate. Further, if the fakers do not coordinate, then even faker anonymity is unachievable as soon as $t < n$ (namely the reconstruction threshold is smaller than the number of parties).
Ring Referral: Efficient Publicly Verifiable Ad hoc Credential Scheme with Issuer and Strong User Anonymity for Decentralized Identity and More
In this paper, we present a ring referral scheme, by which a user can publicly prove her knowledge of a valid signature for a private message that is signed by one of an ad hoc set of authorized issuers, without revealing the signing issuer. Ring referral is a natural extension of the traditional ring signature, allowing a prover to obtain a signature from a third-party signer. Our scheme is useful for diverse applications, such as certificate-hiding decentralized identity, privacy-enhancing federated authentication, anonymous endorsement and privacy-preserving referral marketing. In contrast with prior issuer-hiding credential schemes, our ring referral scheme supports more distinguishing features, such as (1) public verifiability over an ad hoc ring, (2) strong user anonymity against collusion among the issuers and verifier to track a user, (3) transparent setup, (4) message hiding, (5) efficient multi-message logarithmic verifiability, and (6) a threshold scheme requiring multiple co-signing issuers. Finally, we implement our ring referral scheme with an extensive empirical evaluation.
Quantum Key-Recovery Attacks on Permutation-Based Pseudorandom Functions
Due to their simple security assessments, permutation-based pseudorandom functions (PRFs) have become widely used in cryptography. It has been shown that PRFs using a single $n$-bit permutation achieve $n/2$ bits of security, while those using two permutation calls provide $2n/3$ bits of security in the classical setting. This paper studies the security of permutation-based PRFs within the Q1 model, where attackers are restricted to classical queries and offline quantum computations. We present improved quantum-time/classical-data tradeoffs compared with previous attacks. Specifically, under the same assumptions/hardware as Grover's exhaustive search attack, i.e., the offline Simon algorithm, we can recover keys in quantum time $\tilde{O}(2^{n/3})$, with $O(2^{n/3})$ classical queries and $O(n^2)$ qubits. Furthermore, we enhance previous superposition attacks by reducing the data complexity from exponential to polynomial, while maintaining the same time complexity. This implies that permutation-based PRFs become vulnerable when adversaries have access to quantum computing resources. We point out that the above quantum attack applies to quite a few cryptographic constructions, including PDMMAC and pEDM, as well as general instantiations like XopEM, EDMEM, EDMDEM, and others.
HELM: Navigating Homomorphic Encryption through Gates and Lookup Tables
As cloud computing continues to gain widespread adoption, safeguarding the confidentiality of data entrusted to third-party cloud service providers becomes a critical concern. While traditional encryption methods offer protection for data at rest and in transit, they fall short where it matters most: during data processing.
To address this limitation, we present HELM, a framework for privacy-preserving data processing using homomorphic encryption. HELM automatically transforms arbitrary programs expressed in a Hardware Description Language (HDL), such as Verilog, into equivalent homomorphic circuits, which can then be efficiently evaluated using encrypted inputs. HELM features three modes of encrypted evaluation: a) a gate mode that consists of Boolean gates, b) a small-precision lookup table mode which significantly reduces the size of the circuit by combining multiple gates into lookup tables, and c) a high-precision lookup table mode tuned for multi-bit arithmetic evaluations. Finally, HELM introduces a scheduler that leverages the parallelism inherent in arithmetic and Boolean circuits to efficiently evaluate encrypted programs. We evaluate HELM with the ISCAS'85 and ISCAS'89 benchmark suites, as well as real-world applications such as image filtering and neural network inference. In our experimental results, we report that HELM can outperform prior works by up to 65x.
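To see why the lookup-table modes shrink circuits, here is a plaintext illustration (no encryption involved) of collapsing a small multi-gate subcircuit into a single table; in HELM's encrypted setting, each table lookup would replace several homomorphic gate evaluations.

```python
# Plaintext illustration of collapsing a gate subcircuit into one lookup table.
from itertools import product

def subcircuit(a: int, b: int, c: int) -> int:
    # Example 3-gate netlist: out = (a AND b) XOR (NOT c)
    return (a & b) ^ (1 - c)

# Gate mode would evaluate 3 homomorphic gates per call; LUT mode
# precomputes all 2^3 outputs once and then does a single table lookup.
LUT = [subcircuit(a, b, c) for a, b, c in product((0, 1), repeat=3)]

def lut_eval(a: int, b: int, c: int) -> int:
    return LUT[(a << 2) | (b << 1) | c]

assert all(lut_eval(*bits) == subcircuit(*bits)
           for bits in product((0, 1), repeat=3))
```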
Assembly optimised Curve25519 and Curve448 implementations for ARM Cortex-M4 and Cortex-M33
Since the introduction of TLS 1.3, which includes X25519 and X448 as key exchange algorithms, one could expect highly efficient implementations of these two algorithms to become important as the need for power-efficient, secure IoT devices increases. Assembly-optimised X25519 implementations for low-end processors such as the Cortex-M4 have existed for some time, but there has been only scarce progress on optimised X448 implementations for low-end ARM processors such as the Cortex-M4 and Cortex-M33. This work attempts to fill this gap by demonstrating how to design a constant-time X448 implementation that runs in 2 273 479 cycles on the Cortex-M4 and 2 170 710 cycles on the Cortex-M33 with DSP. An X25519 implementation is also presented that runs in 441 116 cycles on the Cortex-M4 and 411 061 cycles on the Cortex-M33 with DSP.
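For context, the scalar multiplication these cycle counts measure is the X25519 Montgomery ladder of RFC 7748, sketched below in Python as a (non-constant-time) reference; the paper's contribution lies in constant-time assembly for exactly this field arithmetic and conditional swap.

```python
# RFC 7748 X25519 reference (Python big ints; NOT constant time).
P = 2**255 - 19
A24 = 121665

def x25519(k_bytes: bytes, u_bytes: bytes) -> bytes:
    k = int.from_bytes(k_bytes, "little")
    k &= ~7; k &= (1 << 255) - 1; k |= 1 << 254           # clamp scalar
    x1 = int.from_bytes(u_bytes, "little") & ((1 << 255) - 1)
    x2, z2, x3, z3, swap = 1, 0, x1, 1, 0
    for t in range(254, -1, -1):                           # Montgomery ladder
        kt = (k >> t) & 1
        swap ^= kt
        if swap: x2, x3, z2, z3 = x3, x2, z3, z2           # cswap (branchy here)
        swap = kt
        a, b = (x2 + z2) % P, (x2 - z2) % P
        aa, bb = a * a % P, b * b % P
        e = (aa - bb) % P
        c, d = (x3 + z3) % P, (x3 - z3) % P
        da, cb = d * a % P, c * b % P
        x3, z3 = (da + cb) ** 2 % P, x1 * (da - cb) ** 2 % P
        x2, z2 = aa * bb % P, e * (aa + A24 * e) % P
    if swap: x2, z2 = x3, z3
    return (x2 * pow(z2, P - 2, P) % P).to_bytes(32, "little")
```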
New Techniques for Analyzing Fully Secure Protocols: A Case Study of Solitary Output Secure Computation
Solitary output secure computation allows a set of mutually distrustful parties to compute a function of their inputs such that only a designated party obtains the output. Such computations should satisfy various security properties such as correctness, privacy, independence of inputs, and even guaranteed output delivery. We are interested in full security, which captures all of these properties. Solitary output secure computation has been the subject of many papers in recent years, as it captures many real-world scenarios.
A systematic study of fully secure solitary output computation was initiated by Halevi et al. [TCC 2019]. They showed several positive and negative results; however, they did not characterize which functions can be computed with full security. Alon et al. [EUROCRYPT 2024] considered the special, yet important, case of three parties with Boolean output, where the output-receiving party has no input. They completely characterized the set of such functionalities that can be computed with full security. Interestingly, they also showed a possible connection with the seemingly unrelated notion of fairness, where either all parties obtain the output or none of them do.
We continue this line of investigation and study the set of three-party solitary output Boolean functionalities where all parties hold private inputs. Our main contribution is defining and analyzing a family of ``special-round'' protocols, which generalizes the set of previously proposed protocols. Our techniques allow us to identify which special-round protocols securely compute a given functionality (if such exists). Interestingly, our analysis can also be applied in the two-party setting (where fairness is an issue). Thus, we believe that our techniques may prove useful in additional settings and deepen our understanding of the connections between the various settings.
Division polynomials for arbitrary isogenies
Following work of Mazur-Tate and Satoh, we extend the definition of division polynomials to arbitrary isogenies of elliptic curves, including those whose kernels do not sum to the identity. In analogy to the classical case of division polynomials for multiplication-by-n, we demonstrate recurrence relations, identities relating to classical elliptic functions, the chain rule describing relationships between division polynomials on source and target curve, and generalizations to higher dimension (i.e., elliptic nets).
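For reference, the classical division polynomials for multiplication-by-$n$ on $E: y^2 = x^3 + Ax + B$, which this work generalizes to arbitrary isogenies, satisfy the recurrences

$$\begin{aligned}
\psi_1 &= 1, \qquad \psi_2 = 2y, \qquad \psi_3 = 3x^4 + 6Ax^2 + 12Bx - A^2,\\
\psi_4 &= 4y\left(x^6 + 5Ax^4 + 20Bx^3 - 5A^2x^2 - 4ABx - 8B^2 - A^3\right),\\
\psi_{2n+1} &= \psi_{n+2}\,\psi_n^3 - \psi_{n-1}\,\psi_{n+1}^3 \quad (n \ge 2),\\
\psi_{2n} &= \frac{\psi_n}{2y}\left(\psi_{n+2}\,\psi_{n-1}^2 - \psi_{n-2}\,\psi_{n+1}^2\right) \quad (n \ge 3).
\end{aligned}$$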
Masking-Friendly Post-Quantum Signatures in the Threshold-Computation-in-the-Head Framework
Side-channel attacks pose significant threats to cryptographic implementations, which require the inclusion of countermeasures to mitigate these attacks. In this work, we study the masking of state-of-the-art post-quantum signatures based on the MPC-in-the-head paradigm. More precisely, we focus on the recent threshold-computation-in-the-head (TCitH) framework that applies to some NIST candidates of the post-quantum standardization process. We first provide an analysis of side-channel attack paths in the signature algorithms based on the TCitH framework. We then explain how to apply standard masking to achieve a $d$-probing secure implementation of such schemes, with performance scaling in $O(d^{2})$, for $d$ the masking order.
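To illustrate where the $O(d^2)$ scaling of standard masking comes from, here is a sketch of the classical ISW multiplication gadget over GF(2) (illustrative of the quadratic blow-up, not the specific gadget used in the paper); the nested loop over share pairs is the quadratic cost.

```python
# ISW AND gadget over GF(2): multiplication of (d+1)-share encodings.
import secrets

def share(x: int, d: int) -> list[int]:
    s = [secrets.randbits(1) for _ in range(d)]
    last = x
    for si in s:
        last ^= si
    return s + [last]                       # XOR of all shares equals x

def unshare(xs: list[int]) -> int:
    r = 0
    for v in xs:
        r ^= v
    return r

def isw_and(xs: list[int], ys: list[int]) -> list[int]:
    n = len(xs)                             # n = d + 1 shares
    r = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):           # O(d^2) randoms and partial products
            r[i][j] = secrets.randbits(1)
            r[j][i] = (r[i][j] ^ (xs[i] & ys[j])) ^ (xs[j] & ys[i])
    z = [xs[i] & ys[i] for i in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                z[i] ^= r[i][j]
    return z

d = 3
xs, ys = share(1, d), share(1, d)
assert unshare(isw_and(xs, ys)) == (unshare(xs) & unshare(ys))
```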
Our main contribution is to introduce different ways to tweak those signature schemes towards their masking friendliness. While the TCitH framework comes in two variants, the GGM variant and the Merkle tree variant, we introduce a specific tweak for each of these variants. These tweaks allow us to achieve complexities of $O(d)$ and $O(d \log d)$ at the cost of non-constant signature size, caused by the inclusion of additional seeds in the signature. We also propose a third tweak that takes advantage of the threshold secret sharing used in TCitH. With the right choice of parameters, we show how, by design, some parts of the TCitH algorithms satisfy probing security without additional countermeasures. While this approach can substantially reduce the cost of masking in some part of the signature algorithm, it degrades the soundness of the core zero-knowledge proof, hence slightly increasing the size of the signature.
We analyze the complexity of the masked implementations of our tweaked TCitH signatures and provide benchmarks on a RISC-V platform with a built-in hash accelerator. We use a modular benchmarking approach, allowing us to estimate the performance of diverse signature instances with different tweaks and parameters. Our results illustrate how the different variants scale with increasing masking order. For instance, for masking order $d = 3$, we obtain signatures of around $14$ kB that run in $0.67$ seconds on the target RISC-V CPU with a $250$ MHz frequency. This compares with the $4.7$ seconds required by the original signature scheme masked at the same order on the same platform. For masking order $d=7$, we obtain a signature of $17.5$ kB running in $1.75$ seconds, compared with $16$ seconds for the standard masked signature.
Finally, we discuss the extension of our techniques to signature schemes based on the VOLE-in-the-Head framework, which shares similarities with the GGM variant of TCitH. One key takeaway of our work is that the Merkle tree variant of TCitH is inherently more amenable to efficient masking than frameworks based on GGM trees, such as TCitH-GGM or VOLE-in-the-Head.
mid-pSquare: Leveraging the Strong Side-Channel Security of Prime-Field Masking in Software
Efficiently protecting embedded software implementations of standard symmetric cryptographic primitives against side-channel attacks has been shown to be a considerable challenge in practice. This is, in part, because the most natural countermeasure for such ciphers, namely Boolean masking, does not amplify security well in the absence of sufficient physical noise in the measurements. So-called prime-field masking has been demonstrated to provide improved theoretical guarantees in this context, and the Feistel for Prime Masking (FPM) family of Tweakable Block Ciphers (TBCs) was recently introduced (Eurocrypt'24) to efficiently leverage these advantages. However, it has so far only been instantiated and empirically evaluated in a hardware implementation context, using a small (7-bit) prime modulus. In this paper, we build on the theoretical incentive to increase the field size to obtain improved side-channel (Eurocrypt'24) and fault resistance (CHES'24), as well as on the practical incentive to instantiate an FPM instance with optimized performance on 32-bit software platforms. We introduce mid-pSquare for this purpose, a lightweight TBC operating over a 31-bit Mersenne prime field. We first provide an in-depth black-box security analysis with a particular focus on algebraic attacks, which, contrary to the cryptanalysis of instances over smaller primes, are more powerful than statistical ones in our setting. We also design a strong tweak schedule to account for potential related-tweak algebraic attacks which, so far, are almost unknown in the literature. We then demonstrate that mid-pSquare implementations deliver very competitive performance on the target platform compared to analogous binary TBCs, whether masked or unmasked (we use fix-sliced SKINNY for our comparisons). Finally, we experimentally establish the side-channel security improvements that masked mid-pSquare can provide, reaching unmatched resistance to profiled horizontal attacks on lightweight 32-bit processors (ARM Cortex-M4).
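Part of the appeal of a 31-bit Mersenne prime on 32-bit platforms is that modular reduction needs only shifts, masks, and additions; below is a standard sketch of this folding trick (a well-known technique, not code from the paper).

```python
# Reduction modulo the Mersenne prime p = 2^31 - 1 using only shifts/masks.
P31 = (1 << 31) - 1

def red31(x: int) -> int:
    # Valid for x up to ~2^62, e.g. the product of two 31-bit values.
    x = (x & P31) + (x >> 31)   # fold high bits: 2^31 ≡ 1 (mod p)
    x = (x & P31) + (x >> 31)   # a second fold absorbs the carry
    return 0 if x == P31 else x

a, b = 0x7FFF1234, 0x6ABCDEF0
assert red31(a * b) == (a * b) % P31
```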
Embedding Integer Lattices as Ideals into Polynomial Rings
Many lattice-based cryptosystems employ ideal lattices for high efficiency. However, the additional algebraic structure of ideal lattices raises security concerns, and it is widely believed that this structure can help solve the hard problems in ideal lattices more efficiently. In this paper, we study the additional algebraic structure of ideal lattices further and find that a given ideal lattice in a polynomial ring can be embedded as an ideal into infinitely many different polynomial rings by the coefficient embedding. We design an algorithm to verify whether a given full-rank lattice in $\mathbb{Z}^n$ is an ideal lattice and to output all the polynomial rings into which the given lattice can be embedded as an ideal, using $\mathcal{O}(n^3(\log n + B)^2(\log n)^2)$ bit operations, where $n$ is the dimension of the lattice and $B$ is an upper bound on the bit length of the entries of the input lattice basis. We note that Ding and Lindner proposed an algorithm in 2007 for identifying ideal lattices and outputting a single polynomial ring of which the input lattice can be regarded as an ideal, using $\mathcal{O}(n^5B^2)$ bit operations. However, we find a flaw in Ding and Lindner's algorithm that causes some ideal lattices not to be identified by it.
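The underlying characterization is simple to state: a full-rank lattice with basis matrix $B$ is an ideal of $\mathbb{Z}[x]/(f)$ exactly when it is closed under multiplication by $x$, i.e., when $B^{-1} C_f B$ is an integer matrix for $C_f$ the companion matrix of $f$. A small Python sketch of this closure test for a candidate $f$ follows (illustrative only; the paper's algorithm additionally finds all admissible rings).

```python
# Check whether the lattice with basis columns B is an ideal of Z[x]/(f).
from fractions import Fraction

def companion(f):            # f = [c0, ..., c_{n-1}] for monic x^n + ... + c0
    n = len(f)
    C = [[Fraction(0)] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = Fraction(1)
    for i in range(n):
        C[i][n - 1] = Fraction(-f[i])
    return C

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inverse(M):              # Gauss-Jordan elimination over the rationals
    n = len(M)
    A = [row[:] + [Fraction(i == j) for j in range(n)]
         for i, row in enumerate(M)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        A[c] = [v / A[c][c] for v in A[c]]
        for r in range(n):
            if r != c and A[r][c]:
                A[r] = [a - A[r][c] * b for a, b in zip(A[r], A[c])]
    return [row[n:] for row in A]

def is_ideal_lattice(B, f):
    Bq = [[Fraction(v) for v in row] for row in B]
    U = matmul(inverse(Bq), matmul(companion(f), Bq))
    return all(u.denominator == 1 for row in U for u in row)

# Example: in Z[x]/(x^2 + 1), the ideal (1 + x) has basis {1 + x, -1 + x}.
assert is_ideal_lattice([[1, -1], [1, 1]], [1, 0])   # f = x^2 + 1
```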
X-Wing: The Hybrid KEM You’ve Been Looking For
X-Wing is a hybrid key-encapsulation mechanism based on X25519 and ML-KEM-768. It is designed to be the sensible choice for most applications. The concrete choice of X25519 and ML-KEM-768 allows X-Wing to achieve improved efficiency compared to using a generic KEM combiner. In this paper, we introduce the X-Wing hybrid KEM construction and provide a proof of security. We show (1) that X-Wing is a classically IND-CCA secure KEM if the strong Diffie-Hellman assumption holds in the X25519 nominal group, and (2) that X-Wing is a post-quantum IND-CCA secure KEM if ML-KEM-768 is itself an IND-CCA secure KEM and SHA3-256 is secure when used as a pseudorandom function. The first result is proved in the ROM, whereas the second one holds in the standard model. Loosely speaking, this means X-Wing is secure if either X25519 or ML-KEM-768 is secure. We stress that these security guarantees and optimizations are only possible due to the concrete choices that were made, and they may not apply in the general case.
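At the core is a cheap combiner: the final shared secret hashes both components' secrets together with the X25519 ciphertext and public key, while the ML-KEM ciphertext can safely be omitted. A schematic Python sketch follows; the input ordering and the 6-byte domain-separation label below reflect our reading of the design and should be checked against the paper.

```python
# Schematic X-Wing-style combiner (ordering/label to be checked vs. the spec).
import hashlib

XWING_LABEL = b"\\.//^\\"   # 6-byte domain separator (ASCII art of an X-Wing)

def combiner(ss_mlkem: bytes, ss_x25519: bytes,
             ct_x25519: bytes, pk_x25519: bytes) -> bytes:
    # Shared secret = SHA3-256 over both secrets plus the DH transcript.
    # The ML-KEM ciphertext is deliberately NOT hashed: the IND-CCA
    # security of ML-KEM-768 makes this safe and saves hashing ~1 kB.
    h = hashlib.sha3_256()
    h.update(XWING_LABEL + ss_mlkem + ss_x25519 + ct_x25519 + pk_x25519)
    return h.digest()
```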
Secret-Sharing Schemes for General Access Structures: An Introduction
A secret-sharing scheme is a method by which a dealer distributes shares to parties such that only authorized subsets of parties can reconstruct the secret. Secret-sharing schemes are an important tool in cryptography and are used as a building block in many secure protocols, e.g., secure multiparty computation protocols for arbitrary functionalities, Byzantine agreement, threshold cryptography, access control, attribute-based encryption, and weighted cryptography (e.g., stake-based blockchains). The collection of authorized sets that should be able to reconstruct the secret is called an access structure. The main goal in secret sharing is to minimize the share size in a scheme realizing an access structure. In most of this monograph, we consider secret-sharing schemes with information-theoretic security, i.e., schemes in which unauthorized sets cannot deduce any information on the secret even when the set has unbounded computational power. Although research on secret-sharing schemes has been conducted for nearly 40 years, we still do not know the optimal share size required to realize an arbitrary $n$-party access structure; there is an exponential gap between the best known upper bounds and the best known lower bounds on the share size.
In this monograph, we review the most important topics on secret sharing. We start by discussing threshold secret-sharing schemes, in which the authorized sets are all sets whose size is at least some threshold $t$; these are the most useful secret-sharing schemes. We then describe efficient constructions of secret-sharing schemes for general access structures; in particular, we describe constructions of linear secret-sharing schemes from monotone formulas and monotone span programs and provide a simple construction for arbitrary $n$-party access structures with share size $2^{cn}$ for some constant $c < 1$. To demonstrate the importance of secret-sharing schemes, we show how they are used to construct secure multiparty computation protocols for arbitrary functions. We next discuss the main problem with known secret-sharing schemes, namely the large share size, which is exponential in the number of parties, and present the known lower bounds on the share size. These lower bounds are fairly weak, and there is a big gap between the lower and upper bounds. For linear secret-sharing schemes, a class of schemes based on linear algebra that contains most known schemes, exponential lower bounds on the share size are known. We then turn to study ideal secret-sharing schemes, in which the share size of each party is the same as the size of the secret; these are the most efficient secret-sharing schemes. We describe a characterization, via matroids, of the access structures that have ideal schemes. Finally, we discuss computational secret-sharing schemes, i.e., secret-sharing schemes that are secure only against polynomial-time adversaries. We show computational schemes for monotone and non-monotone circuits; these constructions are more efficient than the best known schemes with information-theoretic security.
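As a concrete taste of the constructions from monotone formulas mentioned above, here is a sketch of the classical recursive scheme (an OR gate shares by duplication, an AND gate by additive splitting); the formula encoding and field choice are illustrative.

```python
# Secret sharing from a monotone formula (Benaloh-Leichter style), over Z_q.
import random
Q = 2**61 - 1

# Formula encoding: ("leaf", party) | ("or", f1, f2) | ("and", f1, f2)
def share(secret, formula, out=None):
    out = {} if out is None else out
    kind = formula[0]
    if kind == "leaf":
        out.setdefault(formula[1], []).append(secret)
    elif kind == "or":                  # either side alone can reconstruct
        share(secret, formula[1], out)
        share(secret, formula[2], out)
    elif kind == "and":                 # additive split: both sides needed
        r = random.randrange(Q)
        share(r, formula[1], out)
        share((secret - r) % Q, formula[2], out)
    return out

# Access structure {A,B} or {A,C}, i.e. the formula A AND (B OR C).
f = ("and", ("leaf", "A"), ("or", ("leaf", "B"), ("leaf", "C")))
shares = share(42, f)
assert (shares["A"][0] + shares["B"][0]) % Q == 42   # {A,B} reconstructs
assert (shares["A"][0] + shares["C"][0]) % Q == 42   # {A,C} reconstructs
```

The scheme is linear: adding shares of two secrets component-wise yields shares of the sum, which is exactly the property highlighted later for linear schemes.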
Designated-Verifier SNARGs with One Group Element
We revisit the question of minimizing the proof length of designated-verifier succinct non-interactive arguments (dv-SNARGs) in the generic group model. Barta et al. (Crypto 2020) constructed such dv-SNARGs with inverse-polynomial soundness in which the proof consists of only two group elements. For negligible soundness, all previous constructions required a super-constant number of group elements.
We show that one group element suffices for negligible soundness. Concretely, we obtain dv-SNARGs (in fact, dv-SNARKs) with $2^{-\tau}$ soundness where proofs consist of one element of a generic group $\mathbb G$ and $O(\tau)$ additional bits. In particular, the proof length in group elements is constant even with $1/|\mathbb G|$ soundness error.
In more concrete terms, compared to the best known SNARGs using bilinear groups, we get dv-SNARGs with roughly $2$x shorter proofs (with $2^{-80}$ soundness at a $128$-bit security level). We are not aware of any practically feasible proof systems that achieve similar succinctness, even fully interactive or heuristic ones.
Our technical approach is based on a novel combination of techniques for trapdoor hash functions and group-based homomorphic secret sharing with linear multi-prover interactive proofs.
Compressed Sigma Protocols: New Model and Aggregation Techniques
Sigma protocols ($\Sigma$-protocols) provide a foundational paradigm for constructing secure algorithms in privacy-preserving applications. To enhance efficiency, several extended models [BG18], [BBB+18], [AC20] incorporating various optimization techniques have been proposed as ``replacements'' for the original $\Sigma$-protocol. However, these models often lack the expressiveness needed to handle complex relations and hinder designers from applying appropriate instantiation and optimization strategies.
In this paper, we introduce a novel compressed $\Sigma$-protocol model that effectively addresses these limitations by providing concrete constructions for relations involving non-linear constraints. Our approach is sufficiently expressive to encompass a wide range of relations. Central to our model is the definition of doubly folded commitments, which, along with a proposed Argument of Knowledge, generalizes the compression and amortization processes found in previous models. Beyond expressing more relations, this innovation also provides a foundation for a general aggregation technique that optimizes the proof size of instantiated schemes.
To demonstrate the above statements, we briefly review several existing protocols that can be instantiated using our model, illustrating the versatility of our construction. We also present use cases where our generalized model enhances applications traditionally considered ``less compact'', such as binary proofs [BCC+15] and $k$-out-of-$n$ proofs [ACF21]. In conclusion, our new model offers a more efficient and expressive alternative to the current use of $\Sigma$-protocols, paving the way for broader applicability and optimization in cryptographic applications.
On Extractability of the KZG Family of Polynomial Commitment Schemes
We present a unifying framework for proving the knowledge-soundness of KZG-like polynomial commitment schemes, encompassing both univariate and multivariate variants. By conceptualizing the proof technique of Lipmaa, Parisella, and Siim for the univariate KZG scheme (EUROCRYPT 2024), we present tools and falsifiable hardness assumptions that permit black-box extraction of the multivariate KZG scheme. Central to our approach is the notion of a canonical Proof-of-Knowledge of a Polynomial (PoKoP) of a polynomial commitment scheme, which we use to capture the extractability notion required in constructions of practical zk-SNARKs. We further present an explicit polynomial decomposition lemma for multivariate polynomials, enabling a more direct analysis of interpolating extractors and bridging the gap between univariate and multivariate commitments. Our results provide the first standard-model proofs of extractability for the multivariate KZG scheme and many of its variants under falsifiable assumptions.
Server-Aided Anonymous Credentials
This paper formalizes the notion of server-aided anonymous credentials (SAACs), a new model for anonymous credentials (ACs) where, in the process of showing a credential, the holder is helped by additional auxiliary information generated in an earlier (anonymous) interaction with the issuer. This model enables lightweight instantiations of 'publicly verifiable and multi-use' ACs from pairing-free elliptic curves, which is important for compliance with existing national standards. A recent candidate for the EU Digital Identity Wallet, BBS#, roughly adheres to the SAAC model we have developed; however, it lacks formal security definitions and proofs.
In this paper, we provide rigorous definitions of security for SAACs, and show how to realize SAACs from the weaker notion of keyed-verification ACs (KVACs) and special types of oblivious issuance protocols for zero-knowledge proofs. We instantiate this paradigm to obtain two constructions: one achieves statistical anonymity with unforgeability under the Gap $q$-SDH assumption, and the other achieves computational anonymity and unforgeability under the DDH assumption.
PMNS arithmetic for elliptic curve cryptography
We show that using a polynomial representation of prime field elements (PMNS) can be relevant for real-world cryptographic applications, even in terms of performance. More specifically, we consider elliptic curves for cryptography when pseudo-Mersenne primes cannot be used to define the base field (e.g., Brainpool standardized curves, JubJub curves in the zkSNARK context, pairing-friendly curves). All these primitives make massive use of the Montgomery reduction algorithm and of well-known libraries such as GMP or OpenSSL for base field arithmetic. We show how this arithmetic can be replaced by a polynomial representation of the numbers that can be easily parallelized, avoids carry propagation, and allows for randomization. We provide good PMNS bases for the cryptographic contexts mentioned above, together with a C implementation that is competitive with GMP and OpenSSL for basic operations in the base fields considered. We also integrate this arithmetic into the Rust reference implementation of elliptic curve scalar multiplication for zero-knowledge applications and achieve better practical performance for such protocols. This shows that PMNS is an attractive alternative for the base field arithmetic layer in cryptographic primitives using elliptic curves or pairings.
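A toy sketch of the representation, under the simplifying assumptions $E(X) = X^n - \lambda$ and small coefficients (real PMNS multiplication adds an internal coefficient-reduction step that we omit): field elements are polynomials evaluated at a root $\gamma$ of $E$ modulo $p$, and products are carry-free polynomial multiplications reduced modulo $E$.

```python
# Toy PMNS-style multiplication: p, n, gamma, lam with gamma^n ≡ lam (mod p).
import random

p, n = 1048573, 3                        # small prime for illustration
gamma = next(g for g in range(2, p)      # pick gamma with a small lambda
             if min(pow(g, n, p), p - pow(g, n, p)) < 50)
lam = pow(gamma, n, p)
lam = lam if lam < 50 else lam - p       # centered (possibly negative) lambda

def poly_mul_mod_E(A, B):
    # Schoolbook product, then replace X^n by lam (external reduction).
    C = [0] * (2 * n - 1)
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            C[i + j] += a * b            # no carries between coefficients
    for k in range(2 * n - 2, n - 1, -1):
        C[k - n] += lam * C[k]
    return C[:n]

A = [random.randrange(-99, 100) for _ in range(n)]   # small-coefficient reps
B = [random.randrange(-99, 100) for _ in range(n)]
ev = lambda V: sum(v * pow(gamma, i, p) for i, v in enumerate(V)) % p
assert ev(poly_mul_mod_E(A, B)) == ev(A) * ev(B) % p
```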
Key reconstruction for QC-MDPC McEliece from imperfect distance spectrum
The McEliece cryptosystem, a code-based scheme, is a candidate in Round 4 of NIST's post-quantum cryptography standardization process. The QC-MDPC (quasi-cyclic moderate-density parity-check) variant is particularly noteworthy due to its small key length. The Guo-Johansson-Stankovski (GJS) attack against the QC-MDPC McEliece cryptosystem was recently proposed and has been intensively studied. This attack reconstructs the secret key using information on the decoding error rate (DER). However, in practice, obtaining complete DER information is presumed to be time-consuming. This paper proposes two algorithms to reconstruct the secret key from imperfect DER information and evaluates the relationship between this imperfection and the efficiency of key reconstruction. This helps increase the efficacy of the GJS attack.
Succinct Non-Subsequence Arguments
Lookup arguments have recently attracted a lot of development due to their applications in the construction of succinct non-interactive arguments of knowledge (SNARKs). A closely related topic is subsequence arguments, in which one can prove that a string $\mathbf{s}$ is a subsequence of another string $\mathbf{t}$, i.e., $\mathbf{s}$ can be obtained by deleting some characters of $\mathbf{t}$. A dual notion, namely non-subsequence arguments, is to prove that $\mathbf{s}$ is not a subsequence of $\mathbf{t}$.
These problems have many important applications in DNA sequence analysis, the Internet of Things, blockchains, natural language processing, speech recognition, etc. However, despite these applications, they are not well studied in cryptography; in particular, succinct arguments for non-subsequences with efficient proving time and sublinear verification time are lacking.
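The relation itself is the classic greedy scan, sketched below; the point of the paper is to prove such (non-)membership statements succinctly rather than by revealing $\mathbf{s}$ and $\mathbf{t}$.

```python
# The plain (non-)subsequence relation; succinct arguments prove this
# statement about committed strings without revealing them.
def is_subsequence(s: str, t: str) -> bool:
    it = iter(t)
    return all(ch in it for ch in s)   # greedy two-pointer scan

assert is_subsequence("ace", "abcde")
assert not is_subsequence("aec", "abcde")   # order matters
```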
In this work, we propose the first succinct non-subsequence argument. Our solution applies the sumcheck protocol and is instantiable with any multivariate polynomial commitment scheme (PCS). We achieve an efficient prover whose running time is linear in the size of the sequences $\mathbf{s}$, $\mathbf{t}$ and their alphabet $\Sigma$. Our proof is succinct, and the verifier time is sublinear, assuming the employed PCS has succinct commitments and sublinear verification time. When instantiating with the Sona PCS (EUROCRYPT'24), we achieve proof size $\mathcal{O}(\log_2|\mathbf{s}| + \log_2|\mathbf{t}|+\log_2|\Sigma|)$, prover time $\mathcal{O}(|\mathbf{s}|+|\mathbf{t}|+|\Sigma|)$ and verifier time $\mathcal{O}(\sqrt{|\mathbf{s}|}+\sqrt{|\mathbf{t}|}+\sqrt{|\Sigma|})$.
Extending our technique, we achieve a batch subsequence argument for proving $k$ interleaved subsequence and non-subsequence arguments in a batch, without the proof size suffering a linear blow-up in $k$.
Optimizing AES-GCM on ARM Cortex-M4: A Fixslicing and FACE-Based Approach
The Advanced Encryption Standard (AES) in Galois/Counter Mode (GCM) delivers both confidentiality and integrity, yet poses performance and security challenges on resource-limited microcontrollers. In this paper, we present an optimized AES-GCM implementation for the ARM Cortex-M4 that combines Fixslicing AES with the FACE (Fast AES-CTR Encryption) strategy, significantly reducing redundant computations in AES-CTR. We further examine two GHASH implementations, a 4-bit table-based approach and a Karatsuba-based constant-time variant, to balance speed, memory usage, and resistance to timing attacks. Our evaluations on an STM32F4 microcontroller show that Fixslicing+FACE reduces AES-128 GCTR cycle counts by up to 19.41%, while the table-based GHASH achieves nearly double the speed of its Karatsuba counterpart. These results confirm that, with the right mix of bitslicing optimizations, counter-mode caching, and lightweight polynomial multiplication, secure and efficient AES-GCM can be attained even on low-power embedded devices.
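Both GHASH variants implement multiplication in GF($2^{128}$) as specified in NIST SP 800-38D; a bit-serial Python reference follows (the paper's 4-bit table method precomputes 16 multiples of the hash key $H$ to replace the inner bit loop, trading RAM and table-lookup timing behavior for speed).

```python
# Bit-serial GHASH per NIST SP 800-38D (reference only; the 4-bit table
# variant precomputes 16 multiples of H to shortcut the inner loop).
R = 0xE1 << 120                       # x^128 + x^7 + x^2 + x + 1

def gf128_mul(x: int, y: int) -> int:
    z, v = 0, y
    for i in range(127, -1, -1):      # bits of x, MSB first
        if (x >> i) & 1:
            z ^= v
        v = (v >> 1) ^ R if v & 1 else v >> 1
    return z

def ghash(h_key: int, data: bytes) -> int:
    y = 0
    for i in range(0, len(data), 16):
        block = int.from_bytes(data[i:i + 16].ljust(16, b"\x00"), "big")
        y = gf128_mul(y ^ block, h_key)
    return y
```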
VeriSSO: A Privacy-Preserving Legacy-Compatible Single Sign-On Protocol Using Verifiable Credentials
Single Sign-On (SSO) is a popular authentication mechanism enabling users to access multiple web services with a single set of credentials. Despite its convenience, SSO faces outstanding privacy challenges. The Identity Provider (IdP) represents a single point of failure and can track users across different Relying Parties (RPs). Multiple colluding RPs may track users through common identity attributes. In response, anonymous credential-based SSO solutions have emerged to offer privacy-preserving authentication without revealing unnecessary user information. However, these solutions face two key challenges: supporting RP authentication without compromising user unlinkability and maintaining compatibility with the predominant Authorization Code Flow (ACF).
This paper introduces VeriSSO, a novel SSO protocol based on verifiable credentials (VC) that supports RP authentication while preserving privacy and avoiding single points of failure. VeriSSO employs an independent authentication server committee to manage RP and user authentication, binding RP authentication with credential-based anonymous user authentication. This approach ensures user unlinkability while supporting RP authentication and allows RPs to continue using their existing verification routines with identity tokens as in the ACF workflow. VeriSSO's design also supports lawful de-anonymization, ensuring user accountability for misbehavior during anonymity. Experimental evaluations of VeriSSO demonstrate its efficiency and practicality, with authentication processes completed within 100 milliseconds.
Optimizing Final Exponentiation for Pairing-Friendly Elliptic Curves with Odd Embedding Degrees Divisible by 3
In pairing-based cryptography, the final exponentiation with a large fixed exponent is essential to ensure unique outputs in both Tate and optimal ate pairings. While significant progress has been made in optimizing elliptic curves with even embedding degrees, advances for curves with odd embedding degrees, particularly those divisible by 3, have been more limited. This paper introduces new optimization techniques for computing the final exponentiation of the optimal ate pairing on these curves. The first technique takes advantage of the form of some existing seeds, which enables cyclotomic cubing, and extends this approach to generate new seeds with a similar structure. The second technique involves generating new seeds with sparse ternary representations, replacing squaring operations with cyclotomic cubing.
The first technique improves efficiency by $1.7\%$ and $1.5\%$ compared to the square-and-multiply (\textbf{SM}) method for existing seeds at the $192$-bit and $256$-bit security levels, respectively. For newly generated seeds, it achieves efficiency gains of $3.4\%$ at the $128$-bit, $5\%$ at the $192$-bit, and $8.6\%$ at the $256$-bit security level. The second technique improves efficiency by $3.3\%$ at the $128$-bit, $19.5\%$ at the $192$-bit, and $4.3\%$ at the $256$-bit security level.
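The second technique is easiest to see in a generic exponentiation sketch: recode the exponent in balanced (signed) ternary so that every digit costs one cubing, which is cheap as cyclotomic cubing in these curves' cyclotomic subgroups (where inverses are also nearly free), and only nonzero digits cost a multiplication. Plain modular arithmetic stands in for the cyclotomic operations here.

```python
# Exponentiation with a balanced-ternary exponent: cubings replace squarings.
def balanced_ternary(e: int) -> list[int]:
    digits = []
    while e:
        d = ((e + 1) % 3) - 1          # d in {-1, 0, 1}
        digits.append(d)
        e = (e - d) // 3
    return digits[::-1]                # most significant digit first

def pow_ternary(x: int, e: int, m: int) -> int:
    x_inv = pow(x, -1, m)              # cheap (conjugation) in cyclotomic subgroups
    r = 1
    for d in balanced_ternary(e):
        r = r * r * r % m              # "cyclotomic cubing" stand-in
        if d == 1:
            r = r * x % m
        elif d == -1:
            r = r * x_inv % m
    return r

assert pow_ternary(7, 123456789, 10**9 + 7) == pow(7, 123456789, 10**9 + 7)
```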
The Blockwise Rank Syndrome Learning problem and its applications to cryptography
This paper is an extended version of [8], published at PQCrypto 2024, in which we combine two approaches, blockwise errors and multi-syndromes, into a single approach that leads to very efficient generalized RQC and LRPC schemes.
The notion of blockwise error in the context of rank-based cryptography was recently introduced in [31]. This notion of error, very close to the sum-rank metric [27], makes it possible, by decreasing the weight of the decoded error, to greatly improve the parameters of the LRPC and RQC cryptographic schemes. A little earlier, the multi-syndrome approach introduced for the LRPC and RQC schemes in [3,18] had also considerably decreased parameter sizes for these schemes, in particular through the introduction of Augmented Gabidulin codes.
In order to combine these approaches, we introduced in [8] the Blockwise Rank Support Learning problem. It consists of guessing the support of the errors when several syndromes with blockwise-structured errors are given as input. The new schemes we introduced have very interesting features: for 128-bit security, the sum of the public key and ciphertext sizes is only 1.4 kB for the generalized RQC scheme and 1.7 kB for the generalized LRPC scheme.
This extended version adds the following new features. First, we propose a new optimization of the main protocol, which consists in including 1 in the support of the error; this allows us to deduce a subspace of the error to decode and improves the decoding capacity of our LRPC code, while maintaining the same level of security. The approach of the original paper achieves a $40\%$ gain in parameter size compared to previous results [18,31], and this optimization reduces the parameters by another $4\%$ at higher security levels. We obtain better results in terms of size than the KYBER scheme, whose total sum is 1.5 kB. Second, we give a more detailed analysis of the algebraic attacks on the $\ell$-RD problem proposed in [8], which allowed us to cryptanalyze all blockwise LRPC parameters proposed in [31] (with an improvement of more than 40 bits in the case of structural attacks). Third, we give a more detailed introduction to the historical background of the rank metric, especially the RQC and LRPC cryptosystems and their recent improvements, and we add some parameters for classical RQC (the case of a single given syndrome, a special case of our scheme, for which we achieve 1.5 kB for the sum of the public key and the ciphertext), which compares very well to the previous version of classical RQC.
Simplified PIR and CDS Protocols and Improved Linear Secret-Sharing Schemes
We consider three related cryptographic primitives: private information retrieval (PIR) protocols, conditional disclosure of secrets (CDS) protocols, and secret-sharing schemes; these primitives have many applications in cryptography. We study these primitives with information-theoretic security. The complexity of these primitives has been dramatically improved in the last few years, and they are closely related: e.g., the 2-server PIR protocol of Dvir and Gopi (J. ACM 2016) was transformed to construct the CDS protocols of Liu, Vaikuntanathan, and Wee (CRYPTO 2017, Eurocrypt 2018), and these CDS protocols are the main ingredient in the construction of the best known secret-sharing schemes.
To date, the message size required in PIR and CDS protocols and the share size required in secret-sharing schemes are not well understood, and there are big gaps between their upper bounds and lower bounds. The goal of this paper is to better understand the upper bounds by simplifying current constructions and improving their complexity.
We obtain the following two independent results:
- We simplify, abstract, and generalize the 2-server PIR protocol of Dvir and Gopi (J. ACM 2016) and the 2-server and multi-server CDS protocols of Liu et al. (CRYPTO 2017, Eurocrypt 2018) and Beimel, Farràs, and Lasri (TCC 2023). This is done by considering a new variant of matching vectors and by using a general share conversion. In addition to simplifying previous protocols, our protocols can use matching vectors over any $m$ that is a product of two distinct primes. Our construction does not improve the communication complexity of PIR and CDS protocols; however, a construction of better matching vectors over any such $m$ would improve their communication complexity.
- In many applications of secret-sharing schemes it is important that the scheme is linear, e.g., so that parties can locally add shares of two secrets and obtain shares of the sum of the secrets. We provide a construction of linear secret-sharing schemes for $n$-party access structures with an improved share size of $2^{0.7563n}$. Previously, the best share size for linear secret-sharing schemes was $2^{0.7576n}$, and it is known that for most $n$-party access structures the share size is at least $2^{0.5n}$. This result is achieved by a reduction to unbalanced CDS protocols (compared to balanced CDS protocols in previous constructions).
Fair Exchange for Decentralized Autonomous Organizations via Threshold Adaptor Signatures
A Decentralized Autonomous Organization (DAO) enables multiple parties to collectively manage digital assets in a blockchain setting. We focus on achieving fair exchange between DAOs using a cryptographic mechanism that operates with minimal blockchain assumptions and, crucially, does not rely on smart contracts.
Specifically, we consider a setting where a DAO consisting of $n_\mathsf{S}$ sellers holding shares of a witness $w$ interacts with a DAO comprising $n_\mathsf{B}$ buyers holding shares of a signing key $sk$; the goal is for the sellers to exchange $w$ for a signature under $sk$ transferring a predetermined amount of funds.
Fairness is required to hold both between DAOs (i.e., ensuring that each DAO receives its asset if and only if the other does) as well as within each DAO (i.e., ensuring that all members of a DAO receive their asset if and only if every other member does).
We formalize these fairness properties and present an efficient protocol for DAO-based fair exchange under standard cryptographic assumptions. Our protocol leverages certified witness encryption and threshold adaptor signatures, two primitives of independent interest that we introduce and show how to construct efficiently.
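The exchange atom provided by adaptor signatures is easy to see in a single-signer Schnorr sketch over a toy group (the paper's construction is a threshold version of this idea, which we do not reproduce here): a pre-signature verifies against an adaptor point $T = g^t$, and completing it into a valid signature reveals exactly the witness $t$.

```python
# Toy Schnorr adaptor signature over a tiny Schnorr group (illustration only).
import hashlib, secrets

P, Q, G = 107, 53, 4                 # G generates the order-Q subgroup mod P

def H(*parts) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

sk = secrets.randbelow(Q - 1) + 1;  pk = pow(G, sk, P)
t  = secrets.randbelow(Q - 1) + 1;  T  = pow(G, t, P)   # adaptor point/witness

# Pre-sign: like Schnorr, but the challenge binds R*T instead of R.
k = secrets.randbelow(Q - 1) + 1
R = pow(G, k, P)
c = H(R * T % P, pk, "msg: pay the sellers")
s_pre = (k + c * sk) % Q
assert pow(G, s_pre, P) == R * pow(pk, c, P) % P        # pre-verification

s = (s_pre + t) % Q                  # completing with the witness t
assert pow(G, s, P) == (R * T % P) * pow(pk, c, P) % P  # full Schnorr verify

# Anyone holding both signatures extracts the witness: atomicity of the swap.
assert (s - s_pre) % Q == t
```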
Truncation Untangled: Scaling Fixed-Point Arithmetic for Privacy-Preserving Machine Learning to Large Models and Datasets
Fixed Point Arithmetic (FPA) is widely used in Privacy-Preserving Machine Learning (PPML) to efficiently handle decimal values. However, repeated multiplications in FPA can lead to overflow, as the fractional part doubles in size with each multiplication. To address this, truncation is applied post-multiplication to maintain precision.
Various truncation schemes based on Secure Multiparty Computation (MPC) exist, but trade-offs between accuracy and efficiency in PPML models and datasets remain underexplored. In this work, we analyze and consolidate different truncation approaches from the MPC literature, focusing on their slack sizes: the extra bits required per secret share to ensure correctness.
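The problem and the cheapest fix are easy to demonstrate: after a fixed-point product the fraction doubles, and with 2-out-of-2 additive shares each party can even truncate locally, SecureML-style, at the cost of a ±1 LSB error that is correct only with high probability when enough slack bits separate the shared value from the ring size. A sketch (our illustration, not one of the paper's constructions):

```python
# Fixed-point truncation on additive shares over Z_{2^64} (SecureML-style).
import random

K, F = 64, 16                      # ring bits, fraction bits
RING = 1 << K

def encode(x: float) -> int:
    return round(x * (1 << F)) % RING

def share(v):                      # 2-out-of-2 additive sharing
    r = random.randrange(RING)
    return r, (v - r) % RING

x, y = 3.25, -1.5
z = encode(x) * encode(y) % RING   # product now carries 2F fraction bits
z0, z1 = share(z)

# Local truncation: P0 shifts its share, P1 shifts the complement.
t0 = z0 >> F
t1 = (RING - ((RING - z1) >> F)) % RING
approx = (t0 + t1) % RING

exact = encode(x * y)              # expected value with F fraction bits
err = min((approx - exact) % RING, (exact - approx) % RING)
assert err <= 1                    # off by at most 1 LSB w.h.p. (needs slack)
```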
We present improved constructions for these truncation methods in state-of-the-art three-party semi-honest (3PC) and four-party malicious (4PC) settings, achieving up to a threefold reduction in communication and round complexity over existing schemes. Additionally, we introduce optimizations tailored for PPML, such as strategically fusing different neural network layers. This leads to a mixed-truncation scheme that balances truncation costs with accuracy, eliminating communication overhead in the online phase while maintaining the accuracy of plaintext floating-point PyTorch inference for VGG-16 on the ImageNet dataset.
We conduct the first large-scale systematic evaluation of PPML inference accuracy across truncation schemes, ring sizes, neural network architectures, and datasets. Our study provides clear guidelines for selecting the optimal truncation scheme for PPML inference. All evaluations are implemented in the open-source HPMPC MPC framework, facilitating future research and adoption.
Almost Optimal KP and CP-ABE for Circuits from Succinct LWE
We present almost-optimal lattice-based attribute-based encryption (ABE) and laconic function evaluation (LFE). For depth d circuits over $\ell$-bit inputs, we obtain
* key-policy (KP) and ciphertext-policy (CP) ABE schemes with ciphertext, secret key and public key size $O(1)$;
* LFE with ciphertext size $\ell + O(1)$ as well as CRS and digest size $O(1)$;
where $O(\cdot)$ hides $poly(d, \lambda)$ factors. The parameter sizes are optimal, up to the $poly(d)$ dependencies. The security of our schemes relies on succinct LWE (Wee, CRYPTO 2024). Our results constitute a substantial improvement over the state of the art; none of our results were known even under the stronger evasive LWE assumption.
Towards Building Scalable Constant-Round MPC from Minimal Assumptions via Round Collapsing
In this work, we study the communication complexity of constant-round secure multiparty computation (MPC) against a fully malicious adversary and consider both the honest majority setting and the dishonest majority setting. In the (strong) honest majority setting (where $t=(1/2-\epsilon)n$ for a constant $\epsilon$), the best-known result without relying on FHE is given by Beck et al. (CCS 2023) based on the LPN assumption that achieves $O(|C|\kappa)$ communication, where $\kappa$ is the security parameter and the achieved communication complexity is independent of the number of participants. In the dishonest majority setting, the best-known result is achieved by Goyal et al. (ASIACRYPT 2024), which requires $O(|C|n\kappa)$ bits of communication and is based on the DDH and LPN assumptions.
In this work, we achieve the following results: (1) For any constant $\epsilon<1$, we give the first constant-round MPC in the dishonest majority setting for corruption threshold $t<(1-\epsilon)n$ with $O(|C|\kappa+D (n+\kappa)^2\kappa)$ communication assuming random oracles and oblivious transfers, where $D$ is the circuit depth. (2) We give the first constant-round MPC in the standard honest majority setting (where $t=(n-1)/2$) with $O(|C|\kappa+D (n+\kappa)^2\kappa)$ communication only assuming random oracles.
Unlike most of the previous constructions of constant-round MPCs that are based on multiparty garbling, we achieve our result by letting each party garble his local computation in a non-constant-round MPC that meets certain requirements. We first design a constant-round MPC that achieves $O(|C|\kappa + Dn^2\kappa)$ communication assuming random oracles in the strong honest majority setting of $t=n/4$. Then, we combine the party virtualization technique and the idea of MPC-in-the-head to boost the corruption threshold to $t<(1-\epsilon)n$ for any constant $\epsilon<1$ assuming oblivious transfers to achieve our first result. Finally, our second result is obtained by instantiating oblivious transfers using a general honest-majority MPC and the OT extension technique built on random oracles.
A Formal Analysis of Apple’s iMessage PQ3 Protocol
We present the formal verification of Apple’s iMessage PQ3, a highly performant, device-to-device messaging protocol offering strong security guarantees even against an adversary with quantum computing capabilities. PQ3 leverages Apple’s identity services together with a custom, post-quantum secure initialization phase and afterwards it employs a double ratchet construction in the style of Signal, extended to provide post-quantum, post-compromise security.
We present a detailed formal model of PQ3, a precise specification of its fine-grained security properties, and machine-checked security proofs using the TAMARIN prover. Particularly novel is the integration of post-quantum secure key encapsulation into the relevant protocol phases and the detailed security claims along with their complete formal analysis. Our analysis covers both key ratchets, including unbounded loops, which was believed by some to be out of scope of symbolic provers like TAMARIN (it is not!).
Key-Insulated and Privacy-Preserving Signature Scheme with Publicly Derived Public Key, Revisited: Consistency, Outsider Strong Unforgeability, and Generic Construction
Liu et al. (EuroS&P 2019) introduced Key-Insulated and Privacy-Preserving Signature Scheme with Publicly Derived Public Key (PDPKS) to enhance the security of stealth addresses and deterministic wallets. In this paper, we point out that the current security notions are insufficient in practice, and we introduce a new security notion which we call consistency. Moreover, we strengthen unforgeability to outsider strong unforgeability, which captures the requirement that nobody except the payer and the payee can produce a valid signature. From the viewpoint of cryptocurrency functionality, this allows us to implement a refund functionality: currently, there is essentially no way to refund a coin that has been mistakenly spent to an address. Our notion rescues this case, even in the stealth setting that hides information about the payer. Note that the refund functionality only works before the payee transfers the coin to their own wallet, which prevents double spending. Finally, we propose a generic construction of PDPKS that provides consistency and outsider strong unforgeability. The design is conceptually much simpler than known PDPKS constructions. It is particularly noteworthy that the underlying strongly unforgeable signature scheme is required to provide strong conservative exclusive ownership (S-CEO) security (Cremers et al., IEEE S&P 2021). Since we explicitly require the underlying signature scheme to be S-CEO secure, our security proof offers a new insight into exclusive ownership security, which may be of independent interest.
Scalable Zero-knowledge Proofs for Non-linear Functions in Machine Learning
Zero-knowledge (ZK) proofs have recently been explored for the integrity of machine learning (ML) inference. However, these protocols suffer from high computational overhead, with the primary bottleneck stemming from the evaluation of non-linear functions. In this paper, we propose the first systematic ZK proof framework for non-linear mathematical functions in ML from the perspective of table lookup. The key challenge is that table lookup cannot be directly applied to non-linear functions in ML, since it would suffer from inefficiencies due to an intolerably large table. Therefore, we carefully design several important building blocks, including digital decomposition, comparison, and truncation, such that they can effectively utilize table lookup with a quite small table size while ensuring the soundness of proofs. Based on these blocks, we implement complex mathematical operations and further construct ZK proofs for current mainstream non-linear functions in ML such as ReLU, sigmoid, and normalization. Extensive experimental evaluation shows that our framework achieves a 50–179× runtime improvement compared to the state-of-the-art work, while maintaining a similar level of communication efficiency.
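To illustrate the digital-decomposition building block, here is a minimal Python sketch (digit width and value are our own illustrative choices): a 32-bit value is split into 8-bit digits so that per-digit checks need a table of size $2^8$ rather than $2^{32}$.

    DIGIT_BITS, N_DIGITS = 8, 4   # 32-bit values as four 8-bit digits

    def decompose(x):
        mask = (1 << DIGIT_BITS) - 1
        return [(x >> (i * DIGIT_BITS)) & mask for i in range(N_DIGITS)]

    def recompose(digits):
        return sum(d << (i * DIGIT_BITS) for i, d in enumerate(digits))

    x = 0xDEADBEEF
    digits = decompose(x)   # each digit is checkable against a 256-entry table
    assert recompose(digits) == x and all(0 <= d < 256 for d in digits)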
SecurED: Secure Multiparty Edit Distance for Genomic Sequences
DNA edit distance (ED) measures the minimum number of single-nucleotide insertions, substitutions, or deletions required to convert one DNA sequence into another. ED has broad applications in healthcare, such as sequence alignment, genome assembly, functional annotation, and drug discovery. Privacy-preserving computation is essential in this context to protect sensitive genomic data. Nonetheless, existing secure DNA edit distance solutions either lack efficiency when handling large sequences or resort to approximations and fail to compute the metric accurately.
In this work, we introduce SecurED, a protocol that tackles these limitations, resulting in a significant performance enhancement of approximately $2-24\times$ compared to existing methods. Our protocol computes a secure ED between two genomes, each comprising $1,000$ letters, in just a few seconds. The underlying technique of our protocol is a novel approach that transforms the established approximate matching technique (i.e., the Ukkonen algorithm) into exact matching, exploiting the inherent similarity in human DNA to achieve cost-effectiveness. Furthermore, we introduce various optimizations tailored for secure computation in scenarios with a limited input domain, such as DNA sequences composed solely of the four nucleotide letters.
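For reference, here is the classic Ukkonen banded dynamic program in plaintext Python, which decides whether the edit distance is at most a threshold k by filling only cells within k of the main diagonal (our illustrative code, not the secure protocol):

    def banded_edit_distance(a, b, k):
        # Cells outside the band cannot hold a value <= k, so they are
        # treated as INF and skipped.
        n, m = len(a), len(b)
        if abs(n - m) > k:
            return None                      # distance certainly exceeds k
        INF = k + 1
        prev = [j if j <= k else INF for j in range(m + 1)]
        for i in range(1, n + 1):
            cur = [INF] * (m + 1)
            cur[0] = i if i <= k else INF
            for j in range(max(1, i - k), min(m, i + k) + 1):
                cur[j] = min(prev[j] + 1,                        # deletion
                             cur[j - 1] + 1,                     # insertion
                             prev[j - 1] + (a[i - 1] != b[j - 1]))  # sub/match
            prev = cur
        return prev[m] if prev[m] <= k else None

    assert banded_edit_distance("GATTACA", "GATTTACA", 2) == 1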
On the Estonian Internet Voting System, IVXV, SoK and Suggestions
The Estonian i-voting experience is probably the richest to analyze: the country has been a pioneer in digitizing both its government and private sector since 2001, and accordingly introduced digital voting in 2005, yet complaints, criticisms, and remarks about the IVXV system are still being raised. In this paper, we introduce a Systematization of Knowledge of the Estonian IVXV i-voting system and propose some added security enhancements. The presented SoK includes applications implemented by election observers in the 2023 and 2024 elections, which, to our knowledge, have never been mentioned or analyzed in the academic literature before. The paper also updates the general knowledge about an extra right given to auditors (but not observers) in the June 2024 European election, about recent complaints, and about newer solutions suggested by academia in 2024. Finally, we discuss the current system status in the 2024 EP elections and propose our own suggestions for some problems stated in the OSCE-ODIHR 2023 report that remain unaddressed.
Capitalized Bitcoin Fork for National Strategic Reserve
We describe a strategy for a nation to acquire a majority stake in Bitcoin at zero cost to the nation's taxpayers. We propose a Bitcoin fork sponsored by the government of the nation, and backed by the full faith of the nation's treasury, such that the genesis block of this fork attributes a fixed, large amount of a new kind of token, called strategic-reserve-bitcoin tokens (SRBTC), to the nation's treasury; this amount is some multiple (greater than one) of the total supply of Bitcoin tokens (BTC) currently set in the Bitcoin protocol. BTC tokens continue to be honored 1:1 as SRBTC tokens on the forked chain. The only capital the nation puts up is its explicit guarantee that the SRBTC tokens of the fork will be accepted as legal tender, such as for payment of taxes to the treasury.
We suggest that this is a better approach than starting a new blockchain that mimics Bitcoin, as it will be partially fair to the current holders of Bitcoin, which in turn would make it competitive in the space of other such possible forks by other powerful nations. Moreover, such a proof-of-work blockchain retains its egalitarian and democratic nature, which competitively deters the said nation from any dilutions in the future.
To justify our proposal we set up three competitive games, and show strategies for different players that are in Nash equilibrium and which throw further light on these claims. In particular,
1. The first game shows that if the only two alternatives for investors are to invest in BTC or SRBTC, then individuals who already have a certain fraction $\theta$ of their wealth invested in BTC will invest new money in the original chain, whereas individuals whose current wealth invested in BTC is less than the $\theta$ fraction will invest new money in SRBTC.
2. The second game shows that if there is a third alternative for investment, namely cash that is losing value (inflation-adjusted) by a percentage $d$, then investors who had less than a $\theta$ fraction of their wealth in Bitcoin will invest in SRBTC only if the dilution of SRBTC is large enough (as an increasing (linear) function of $1/d$). Here, by dilution we mean the new SRBTC tokens that are allowed to be eventually mined in the fork.
3. The third game shows that investors would prefer a fork of Bitcoin over a replica of Bitcoin that doesn't value original BTC, when both are available and even if both are backed similarly by one or more nations.
Ideal Compartmented Secret Sharing Scheme Based on the Chinese Remainder Theorem for Polynomial Rings
A secret sharing scheme starts with a secret and then derives from it certain shares (or shadows) which are distributed to users. The secret may be recovered only by certain predetermined groups. In the case of compartmented secret sharing, the set of users is partitioned into compartments, and the secret can be recovered only if the number of participants from any compartment is greater than or equal to a fixed compartment threshold and the total number of participants is greater than or equal to a global threshold. In this paper we use the Chinese Remainder Theorem for Polynomial Rings to construct an ideal compartmented secret sharing scheme, inspired by the work of [20].
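As a toy analogue of the CRT-based reconstruction (over the integers rather than polynomial rings, and without the compartment structure; purely illustrative):

    from math import prod

    def crt(residues, moduli):
        # Reconstruct x mod prod(moduli) from residues x mod m_i,
        # for pairwise coprime m_i.
        M = prod(moduli)
        return sum(r * (M // m) * pow(M // m, -1, m)
                   for r, m in zip(residues, moduli)) % M

    moduli = [11, 13, 17, 19]      # pairwise coprime share moduli
    secret = 42                    # small enough that two shares determine it
    shares = [secret % m for m in moduli]
    assert crt(shares[:2], moduli[:2]) == secret   # 11 * 13 = 143 > 42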
Max Bias Analysis: A New Approach on Computing the Entropy of Free Ring-Oscillator
This work introduces a new approach, called Max bias analysis, for the entropy computation of structures of free-running Ring Oscillator-based Physical Random Number Generators. It employs a stochastic model based on the well-established Wiener process, specifically adapted to capture only thermal noise contributions while accounting for a potential non-zero bias in the duty cycle.
Our analysis is versatile, being applicable to combinations of multiple sampled Ring Oscillators (ROs) filtered by any function. The entropy computation takes as inputs the parameters of the thermal stochastic model and directly delivers a proven bound for both Shannon entropy and min-entropy, as required by AIS31 and NIST SP 800-90B. As an example, we apply the new methodology to an enhanced TRNG structure combining several free-running Ring Oscillators filtered by a vectorial function built from a linear error-correcting code, which optimizes the functional performance in terms of entropy rate per silicon area used while keeping the mathematical proof of the entropy lower bound as simple as possible.
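A small Monte-Carlo illustration (not the paper's analytical model) of how a duty-cycle bias depresses both entropy measures of a jittery, sampled RO; all parameters here are arbitrary assumptions:

    import numpy as np

    rng = np.random.default_rng(1)
    n, sigma, duty = 100_000, 0.05, 0.55  # samples, jitter/period, duty cycle
    # Accumulated RO phase with Gaussian period jitter, sampled at an
    # incommensurate rate; the output bit is 1 while the waveform is high.
    phase = np.cumsum(1 + sigma * rng.standard_normal(n)) * 0.3789
    bits = (phase % 1.0 < duty).astype(int)
    p1 = bits.mean()
    shannon = -(p1 * np.log2(p1) + (1 - p1) * np.log2(1 - p1))
    h_min = -np.log2(max(p1, 1 - p1))
    print(f"P[bit=1]={p1:.3f}  Shannon={shannon:.4f}  min-entropy={h_min:.4f}")

Since min-entropy is determined by the single most likely outcome, it drops faster than Shannon entropy as the bias grows, which is why certification bounds must track both quantities.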
hax: Verifying Security-Critical Rust Software using Multiple Provers
We present hax, a verification toolchain for Rust targeted at security-critical software such as cryptographic libraries, protocol implementations, authentication and authorization mechanisms, and parsing and sanitization code. The key idea behind hax is the pragmatic observation that different verification tools are better at handling different kinds of verification goals. Consequently, hax supports multiple proof backends, including domain-specific security analysis tools like ProVerif and SSProve, as well as general proof assistants like Coq and F*. In this paper, we present the hax toolchain and show how we use it to translate Rust code to the input languages of different provers. We describe how we systematically test our translated models and our models of the Rust system libraries to gain confidence in their correctness. Finally, we briefly overview various ongoing verification projects that rely on hax.
It's a Kind of Magic: A Novel Conditional GAN Framework for Efficient Profiling Side-channel Analysis (Extended Version)
Profiling side-channel analysis (SCA) is widely used to evaluate the security of cryptographic implementations under worst-case attack scenarios. This method assumes a strong adversary with a fully controlled device clone, known as a profiling device, with full access to the internal state of the target algorithm, including the mask shares. However, acquiring such a profiling device in the real world is challenging, as secure products enforce strong life cycle protection, particularly on devices that allow the user partial (e.g., debug mode) or full (e.g., test mode) control. This enforcement restricts access to profiling devices, significantly reducing the effectiveness of profiling SCA.
To address this limitation, this paper introduces a novel framework that allows an attacker to create and learn from their own white-box reference design without needing privileged access on the profiling device.
Specifically, the attacker first implements the target algorithm on a different type of device with full control. Since this device is a white box to the attacker, they can access all internal states and mask shares. A novel conditional generative adversarial network (CGAN) framework is then introduced to mimic the feature extraction procedure from the reference device and transfer this experience to extract high-order leakages from the target device. These extracted features then serve as inputs for profiled SCA. Experiments show that our approach significantly enhances the efficacy of black-box profiling SCA, matching or potentially exceeding the results of worst-case security evaluations. Compared with conventional profiling SCA, which has strict requirements on the profiling device, our framework relaxes this threat model and, thus, can be better adapted to real-world attacks.
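For readers unfamiliar with the building block, here is a bare-bones conditional GAN skeleton in PyTorch; the layer sizes, conditioning vector, and data flow are our own placeholder assumptions, not the paper's architecture:

    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        def __init__(self, z_dim=64, cond_dim=16, out_dim=128):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(z_dim + cond_dim, 256), nn.ReLU(),
                                     nn.Linear(256, out_dim))
        def forward(self, z, cond):              # condition by concatenation
            return self.net(torch.cat([z, cond], dim=1))

    class Discriminator(nn.Module):
        def __init__(self, in_dim=128, cond_dim=16):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim + cond_dim, 256),
                                     nn.LeakyReLU(0.2), nn.Linear(256, 1))
        def forward(self, x, cond):
            return self.net(torch.cat([x, cond], dim=1))

    z = torch.randn(32, 64); cond = torch.randn(32, 16)
    fake = Generator()(z, cond)                  # (32, 128) synthetic features
    score = Discriminator()(fake, cond)          # (32, 1) real/fake logits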
Relations among new CCA security notions for approximate FHE
In a recent Eurocrypt'24 paper, Manulis and Nguyen have proposed a new CCA security notion, vCCA, and associated construction blueprints to leverage both CPA-secure and correct FHE beyond the CCA1 security barrier. However, because their approach is only valid under the correctness assumption, it leaves a large part of the FHE spectrum uncovered, as many FHE schemes used in practice turn out to be approximate and, as such, do not satisfy the correctness assumption. In this paper, we improve their work by defining and investigating a variant of their security notion which is suitable for the more general case where approximate FHE are included. As the passive security of approximate FHE schemes is more appropriately captured by CPAD rather than CPA security, we start from the former notion to define our new vCCAD security notion. Although we show that vCCA and vCCAD are equivalent when the correctness assumption holds, we establish that vCCAD security is strictly stronger than vCCA security in the general case. In doing so, we interestingly establish several new separation results between variants of CPAD security of increasing strength. This allows us to clarify the relationship between vCCA security and CPAD security, and to reveal that the security notions landscape is much simpler for correct FHE than when approximate ones are included --- in which case, for example, we establish that multiple-challenge security notions are strictly stronger than single-challenge ones for both CPAD and vCCAD security. Lastly, we give concrete construction blueprints, showing how to leverage some of the blueprints proposed by Manulis and Nguyen to achieve vCCAD security. As a result, vCCAD security is the strongest CCA security notion so far known to be achievable by both correct and approximate FHE schemes.
Sanitization of FHE Ciphertexts
By definition, fully homomorphic encryption (FHE) schemes support homomorphic decryption, and all known FHE constructions are bootstrapped from a Somewhat Homomorphic Encryption (SHE) scheme via this technique. Additionally, when a public key is provided, ciphertexts can be re-randomized, e.g., by adding fresh encryptions of 0 to them. From these two operations we devise an algorithm to sanitize a ciphertext by making its distribution canonical. In particular, the distribution of the ciphertext does not depend on the circuit that led to it via homomorphic evaluation, thus providing circuit privacy in the honest-but-curious model. Unlike the previous approach based on noise flooding, our approach does not significantly degrade the security/efficiency trade-off of the underlying FHE. The technique can be applied to all lattice-based FHE schemes proposed so far, without substantially affecting their concrete parameters.
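A minimal sketch of the resulting washing loop, assuming a hypothetical FHE interface (enc_zero, add, and bootstrap are placeholders, not a real library's API):

    def sanitize(ct, pk, enc_zero, add, bootstrap, rounds=2):
        """Re-randomize, then bootstrap; repeat until the ciphertext
        distribution is (statistically close to) canonical. The number of
        rounds needed depends on the underlying scheme."""
        for _ in range(rounds):
            ct = add(ct, enc_zero(pk))   # fresh encryption of 0 re-randomizes
            ct = bootstrap(ct)           # homomorphic decryption resets noise
        return ct

    # Toy instantiation with trivial placeholders, just to exercise the flow:
    out = sanitize(ct=0, pk=None,
                   enc_zero=lambda pk: 0, add=lambda a, b: a + b,
                   bootstrap=lambda c: c)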
Enabling Lattice-based Authentication Encrypted Search with Ciphertext Broadcast for Cloud Storage
The development of cloud computing facilitates outsourced data sharing and storage, but also brings up several security issues. Public key authenticated encryption with keyword search (PAEKS) enables encrypted search over cloud data while resisting insider keyword guessing attacks (IKGAs). However, existing PAEKS schemes are limited to a single receiver, restricting their application prospects in cloud storage. In addition, quantum computing attacks and key leakage issues further threaten data security, which has attracted extensive attention from researchers. Therefore, designing an encrypted search scheme that resists the above-mentioned attacks remains a pressing goal. In this paper, we first propose BroSearch, a lattice-based authenticated encrypted search scheme with ciphertext broadcast. It utilizes lattice sampling algorithms to authenticate the keyword and offers searchability over broadcast ciphertexts while remaining IKGA-resistant in a quantum setting. To get around key leakage issues, we then incorporate the minimal cover set technique and a lattice basis extension algorithm to construct FS-BroSearch as an enhanced version. Furthermore, we give a rigorous security analysis (IND-CKA and IND-IKGA) and a comprehensive performance evaluation of both schemes. Specifically, the time cost of BroSearch is as low as 0.61×, 0.82×, and 0.83× that of prior art for ciphertext calculation, trapdoor generation, and search procedures, respectively, which is practical and effective for cloud storage.
MultiReg-FE: Registered FE for Unbounded Inner-Product and Attribute-Weighted Sums
Recently, Francati et al. (Asiacrypt 2023) provided the first registered functional encryption (Reg-FE) beyond predicates. Reg-FE addresses the key-escrow problem in functional encryption by allowing users to generate their own key pairs, effectively replacing the traditional private-key generator with a key curator. The key curator holds no secret information and runs deterministic algorithms to generate the master public key for encryption and helper keys for decryption. However, existing Reg-FE schemes under standard assumptions require fixed data sizes, which limits their practicality in real-world applications.
In this work, we introduce Multi-Function Registered Functional Encryption for Inner-Product (MultiReg-FE for IP), a novel extension of Reg-FE. It enables users to register multiple functions under a single public key. With MultiReg-FE, we achieve both Reg-FE for Unbounded Inner-Product (Unbounded IP), which removes the need to predetermine vector lengths, and Reg-FE for Attribute-Weighted Sums with Inner-Product (AWSw/IP), allowing computations over arbitrary numbers of attribute-value pairs.
Specifically, we present:
- MultiReg-FE for Inner-Product, which supports an unbounded number of function vectors from each user and achieves adaptive IND-security;
- Reg-FE for Unbounded Inner-Product, which removes the need for preset vector lengths and achieves adaptive IND-security;
- The first Reg-FE for AWSw/IP in the public-key setting, albeit with weaker security.
HiAE: A High-Throughput Authenticated Encryption Algorithm for Cross-Platform Efficiency
This paper addresses the critical challenges in designing cryptographic algorithms that achieve both high performance and cross-platform efficiency on ARM and x86 architectures, catering to the demanding requirements of next-generation communication systems, such as 6G and GPU/NPU interconnections. We propose HiAE, a high-throughput authenticated encryption algorithm optimized for performance exceeding 100 Gbps and designed to meet the stringent security requirements of future communication networks. HiAE leverages the stream cipher structure, integrating the AES round function for non-linear diffusion.
Our design achieves exceptional efficiency, with benchmark results from software implementations across various platforms showing over 340 Gbps on x86 processors and 180 Gbps on ARM devices in AEAD mode, making it the fastest AEAD solution on ARM chips and setting a new performance record on the latest x86 processors.
Non-Interactive Zero-Knowledge Proofs with Certified Deletion
We introduce the notion of non-interactive zero-knowledge (NIZK) proofs with certified deletion. Our notion enables the recipient of a (quantum) NIZK proof to delete the proof and obtain a (classical) certificate proving such deletion. We define this notion and propose two candidate constructions from standard cryptographic assumptions. Our first construction is based on classical NIZK proofs and quantum-hard one-way functions but needs both the prover and verifier to run quantum algorithms. We then present an extension that allows the prover to be classical; this is based on the learning with errors problem and requires an instance-independent interactive setup between the prover and verifier.
Our results have applications to revocable signatures of knowledge and revocable anonymous credentials, which we also define and construct.
Garblet: Multi-party Computation for Protecting Chiplet-based Systems
The introduction of shared computation architectures assembled from heterogeneous chiplets introduces new security threats. Due to the shared logical and physical resources, an untrusted chiplet can act maliciously to surreptitiously probe the data communication between chiplets or sense the computation shared between them. This paper presents Garblet, the first framework to leverage the flexibility offered by chiplet technology and Garbled Circuits (GC)-based MPC to enable efficient, secure computation even in the presence of potentially compromised chiplets. Our approach integrates a customized hardware Oblivious Transfer (OT) module and an optimized evaluator engine into chiplet-based platforms. This configuration distributes the tasks of garbling and evaluating circuits across two chiplets, reducing communication costs and enhancing computation speed. We implement this framework on an AMD/Xilinx UltraScale+ multi-chip module and demonstrate its effectiveness using benchmark functions. Additionally, we introduce a novel circuit decomposition technique that allows for parallel processing across multiple chiplets to further improve computational efficiency. Our results highlight the potential of chiplet systems for accelerating GC (e.g., a garbled AES evaluation takes 0.0226 ms) in order to guarantee the security and privacy of the computation on chiplets.
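To show what the garbler/evaluator split involves at the smallest scale, here is a textbook toy garbling of a single AND gate in Python (no point-and-permute or other standard optimizations; unrelated to Garblet's actual hardware engine):

    import os, hashlib

    def H(*parts):
        return hashlib.sha256(b"".join(parts)).digest()[:16]

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # One random 16-byte label per wire value: wire -> (label_0, label_1).
    labels = {w: (os.urandom(16), os.urandom(16)) for w in "abc"}

    # Garble: encrypt the output label under each pair of input labels.
    table = [xor(H(labels["a"][i], labels["b"][j]), labels["c"][i & j])
             for i in (0, 1) for j in (0, 1)]

    # Evaluate with one label per input wire. Real schemes permute the rows
    # and use point-and-permute so exactly one row is decrypted without the
    # trial decryption used here.
    def evaluate(la, lb):
        for row in table:
            out = xor(row, H(la, lb))
            if out in labels["c"]:
                return out
        raise ValueError("no row decrypted")

    assert evaluate(labels["a"][1], labels["b"][1]) == labels["c"][1]
    assert evaluate(labels["a"][1], labels["b"][0]) == labels["c"][0]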
SCAPEgoat: Side-channel Analysis Library
Side-channel analysis (SCA) is a growing field in hardware security where adversaries extract secret information from embedded devices by measuring physical observables such as power consumption and electromagnetic emanation. SCA is a security assessment method used by governmental labs, standardization bodies, and researchers, and testing is no longer limited to standardized cryptographic circuits but extends to AI accelerators, post-quantum circuits, whole systems, etc. Despite its importance, SCA is performed on an ad hoc basis, in the sense that its flow is not systematically optimized or unified among labs. As a result, current solutions do not allow fair comparisons between analyses. Furthermore, the lack of interoperability between datasets and SCA metric computation raises the barrier to entry for students. To address this, we introduce SCAPEgoat, a Python-based SCA library with three key modules devoted to file-format definition, capture interfaces, and metric calculation. The custom file framework organizes side-channel traces using JSON for metadata, offering a hierarchical structure similar to the HDF5 format commonly applied in SCA, but more flexible and human-readable. The metadata can be queried with regular expressions, a feature unavailable in HDF5. Secondly, we incorporate memory-efficient SCA metric computations, which allow our functions to be used on resource-restricted machines. This is accomplished by partitioning datasets and leveraging statistics-based optimizations on the metrics. In doing so, SCAPEgoat makes SCA more accessible to newcomers, so that they can learn techniques and conduct experiments faster, with the possibility to expand on them in the future.
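In the same spirit as the partitioned metric computation, here is an illustrative one-pass, chunked Pearson correlation between traces and a leakage model (function and variable names are ours, not SCAPEgoat's API):

    import numpy as np

    def chunked_cpa(trace_chunks, model_chunks):
        # Accumulate sufficient statistics chunk by chunk so the full trace
        # set never has to reside in memory at once.
        n, s_m, s_mm = 0, 0.0, 0.0
        s_t = s_tt = s_tm = None
        for T, m in zip(trace_chunks, model_chunks):  # T: (chunk, samples)
            if s_t is None:
                s_t = np.zeros(T.shape[1])
                s_tt = np.zeros(T.shape[1])
                s_tm = np.zeros(T.shape[1])
            n += len(m)
            s_t += T.sum(0); s_tt += (T * T).sum(0)
            s_m += m.sum(); s_mm += (m * m).sum()
            s_tm += T.T @ m
        cov = s_tm - s_t * s_m / n
        return cov / np.sqrt((s_tt - s_t**2 / n) * (s_mm - s_m**2 / n))

    T = np.random.default_rng(0).normal(size=(1000, 50))
    model = T[:, 7] + np.random.default_rng(1).normal(size=1000)
    r = chunked_cpa(np.array_split(T, 4), np.array_split(model, 4))
    assert r.argmax() == 7   # leaky sample point is recovered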
Scoop: An Optimizer for Profiling Attacks against Higher-Order Masking
In this paper we provide new theoretical and empirical evidence that gradient-based deep-learning profiling attacks (DL-SCA) suffer against masking schemes. This occurs through an initial stall of the learning process: the so-called plateau effect. To understand why, we derive an analytical expression of a DL-SCA model targeting simulated traces, which enables us to study an analytical expression of the loss. By studying the loss landscape of this model, we show that not only do the magnitudes of the gradients decrease as the order of masking increases, but the loss landscape also exhibits a prominent saddle point interfering with the optimization process. From these observations, we (1) propose using a second-order optimization algorithm to mitigate the impact of low-gradient areas. In addition, we show how to leverage the intrinsic sparsity of valuable information in SCA traces to better pose the DL-SCA problem. To do so, we (2) propose to use the implicit regularization properties of sparse mirror descent. These propositions are gathered in a new publicly available optimization algorithm, Scoop. Scoop combines second-order derivatives of the loss function with sparse stochastic mirror descent in the optimization process. We experimentally show that Scoop pushes the current limits of DL-SCA further on simulated traces, and outperforms the state of the art on the ASCADv1 dataset in terms of the number of traces required to retrieve the key, perceived information, and plateau length. Scoop also performs the first non-worst-case attack on the ASCADv2 dataset. On simulated traces, we show that using Scoop reduces the DL-SCA time complexity by the equivalent of one masking order.
Fast Scloud+: A Fast Hardware Implementation for the Unstructured LWE-based KEM - Scloud+
Scloud+ is an unstructured LWE-based key encapsulation mechanism (KEM) with conservative quantum security, in which ternary secrets and lattice coding are incorporated for higher computational and communication efficiency. However, its efficiency is still much inferior to that of structured LWE-based KEMs such as ML-KEM (standardized by NIST). In this paper, we present a configurable hardware architecture for Scloud+.KEM to improve the computational efficiency. Many algorithmic and architectural co-optimizations are proposed to reduce the complexity and increase the degree of parallelism. In particular, the matrix multiplications are computed block by block in serial, with each block calculated in one cycle without using any multipliers. In addition, all the random bits are generated by an unfolded Keccak core, well matched to the data flow required by the block matrix multiplier. The proposed design is coded in Verilog and implemented under the SMIC 40nm LP CMOS technology. The synthesized results show that Scloud+.KEM-128 only costs 23.0 $\mu s$, 24.3 $\mu s$, and 24.6 $\mu s$ in the KeyGen, Encaps, and Decaps stages, respectively, with an area consumption of 0.69 $mm^2$, significantly narrowing the gap with state-of-the-art Kyber hardware implementations.
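The multiplier-free trick relies on the ternary secrets: each coefficient of the secret is in $\{-1, 0, 1\}$, so every partial product is an addition, a subtraction, or a skip. A small Python model of that datapath (illustrative, not the paper's RTL):

    import numpy as np

    def ternary_matvec(A, s):
        # A · s with s in {-1, 0, 1}^n: only adds/subtracts of columns.
        acc = np.zeros(A.shape[0], dtype=np.int64)
        for j, sj in enumerate(s):
            if sj == 1:
                acc += A[:, j]       # add the column
            elif sj == -1:
                acc -= A[:, j]       # subtract the column
            # sj == 0: skip entirely
        return acc

    A = np.random.randint(0, 2**12, size=(4, 8))
    s = np.random.choice([-1, 0, 1], size=8)
    assert np.array_equal(ternary_matvec(A, s), A @ s)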
Shortcut2Secrets: A Table-based Differential Fault Attack Framework
Recently, Differential Fault Attacks (DFAs) have proven highly effective against stream ciphers designed for Hybrid Homomorphic Encryption (HHE). In this work, we present a table-based DFA framework called the \textit{shortcut attack}, which generalizes the attack proposed by Wang and Tang on the cipher \textsf{Elisabeth}.
The framework applies to a broad sub-family of ciphers following the Group Filter Permutator (GFP) paradigm and enhances previous DFAs by improving both the fault identification and path generation steps. Notably, the shortcut attack circumvents the issue of function representation, allowing successful attacks even when the cipher's filter function cannot be represented over the ring it is defined on.
Additionally, we provide complexity estimates for the framework and apply the shortcut attack to \textsf{Elisabeth-4} and its patches. As a result, we optimize the DFA on \textsf{Elisabeth-4}, requiring fewer keystreams and running faster than previous methods. Specifically, we achieve a DFA that requires only $3000$ keystreams, which is one-fifth of the previous best result. We also successfully mount a practical DFA on \textsf{Gabriel-4} and provide a theoretical DFA for \textsf{Elisabeth-b4}.
For the latest patch, \textsf{Margrethe-18-4}, which follows the more general Mixed Filter Permutator (MFP) paradigm, we present a DFA in a stronger model. To the best of our knowledge, these are the first DFA results on the patches of \textsf{Elisabeth-4}. Finally, we derive security margins to prevent shortcut attacks on a broad sub-family of MFP ciphers, which can serve as parameter recommendations for designers.
A Security-Enhanced Pairing-Free Certificateless Aggregate Signature for Vehicular Ad-Hoc Networks, Revisited
We show that the aggregate signature scheme [IEEE Syst. J., 2023, 17(3), 3822-3833] is insecure against forgery attacks. The flaw is that the ephemeral key or ephemeral value chosen in the signing phase is not actually bound to the final signature. An adversary can sign any message without the verifier being able to detect the fraud. We also suggest a revision that frustrates this attack.
PIGEON: A High Throughput Framework for Private Inference of Neural Networks using Secure Multiparty Computation
Privacy-Preserving Machine Learning (PPML) is one of the most relevant use cases for Secure Multiparty Computation (MPC). While private training of large neural networks such as VGG-16 or ResNet-50 on state-of-the-art datasets such as ImageNet is still out of reach, given the performance overhead of MPC, GPU-based MPC frameworks are starting to achieve practical runtimes for private inference.
However, we show that, unlike plaintext machine learning, using GPU acceleration for both linear (e.g., convolutions) and nonlinear neural network layers (e.g., ReLU) is actually counterproductive in PPML. While GPUs effectively accelerate linear layers compared to CPU-based MPC implementations, the MPC circuits required to evaluate nonlinear layers introduce memory overhead and frequent data movement between the GPU and the CPU to handle network communication. This results in slow ReLU performance and high GPU memory requirements in state-of-the-art GPU-based PPML frameworks, hindering them from scaling to multiple images per second inference throughput and more than eight images per batch on ImageNet.
To overcome these limitations, we propose PIGEON, an open-source framework for Private Inference of Neural Networks. PIGEON employs a novel ABG programming model that switches between Arithmetic Vectorization and Bitslicing on the CPU for nonlinear layers depending on the MPC-specific computation required while offloading linear layers to the GPU.
Compared to the state-of-the-art PPML framework Piranha, PIGEON improves ReLU throughput by two orders of magnitude, reduces peak GPU memory utilization by one order of magnitude, and scales better with large batch sizes. This translates to one to two orders of magnitude improvements in throughput for large ImageNet batch sizes (e.g., 192) and more than 70% saturation of a 25 Gbit/s network.
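As an illustration of the bitslicing half of the ABG model, here is a toy Python example that packs one bit from each of 64 independent evaluations into a machine word, so a single bitwise instruction acts as 64 parallel Boolean gates (layout and names are ours, not PIGEON's):

    def pack(bits):                 # bits: 64 values in {0, 1}
        w = 0
        for i, b in enumerate(bits):
            w |= b << i
        return w

    xs = [i & 1 for i in range(64)]
    ys = [(i >> 1) & 1 for i in range(64)]
    X, Y = pack(xs), pack(ys)
    Z = X & Y                       # 64 AND gates in one instruction
    assert all(((Z >> i) & 1) == (xs[i] & ys[i]) for i in range(64))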
Electromagnetic Side-Channel Analysis of PRESENT Lightweight Cipher
Side-channel vulnerabilities pose an increasing threat to cryptographically protected devices. Consequently, it is crucial to observe information leakage through physical parameters such as power consumption and electromagnetic (EM) radiation to reduce susceptibility during interactions with cryptographic functions. EM side-channel attacks are becoming more prevalent. PRESENT is a promising lightweight cryptographic algorithm expected to be incorporated into Internet-of-Things (IoT) devices in the future. This research investigates the EM side-channel robustness of PRESENT using a correlation attack model. It extends our previous Correlation EM Analysis (CEMA) of PRESENT with improved results. The attack targets the Substitution box (S-box) and can retrieve 8 bytes of the 10-byte encryption key with as few as 256 EM waveforms. This paper presents the process of EM attack modelling, encompassing both simple and correlation attacks, followed by a critical analysis.
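A minimal simulated CPA in Python against the PRESENT S-box under a Hamming-weight leakage model conveys the shape of such a correlation attack (nibble-level, with synthetic noisy leakage; the parameters are illustrative and this is not the paper's measurement setup):

    import numpy as np

    SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
            0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT 4-bit S-box
    HW = [bin(x).count("1") for x in range(16)]

    rng = np.random.default_rng(0)
    true_key = 0x9
    pt = rng.integers(0, 16, size=500)                # random plaintext nibbles
    # Simulated leakage: HW of the S-box output plus Gaussian noise.
    leak = np.array([HW[SBOX[p ^ true_key]] for p in pt]) + rng.normal(0, 1, 500)

    def corr(guess):
        hyp = np.array([HW[SBOX[p ^ guess]] for p in pt])
        return abs(np.corrcoef(hyp, leak)[0, 1])

    best = max(range(16), key=corr)                   # key guess maximizing |r|
    print(f"recovered nibble: {best:#x}  (true: {true_key:#x})")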