Adaptively Secure Threshold Blind BLS Signatures and Threshold Oblivious PRF
We show the first threshold blind signature scheme and threshold Oblivious PRF (OPRF) scheme which remain secure in the presence of an adaptive adversary, who can adaptively decide which parties to corrupt throughout the lifetime of the scheme. Moreover, our adaptively secure schemes preserve the minimal round complexity and add only a small computational overhead over prior solutions that offered security only for a much less realistic static adversary, who must choose the subset of corrupted parties before initializing the protocol.
Our threshold blind signature scheme computes standard BLS signatures while our threshold OPRF computes a very efficient "2HashDH" OPRF [JKK14]. We prove adaptive security of both schemes in the Algebraic Group Model (AGM). Our adaptively secure threshold schemes are as practical as the underlying standard single-server BLS blind signature and 2HashDH OPRF, and they can be used to add cryptographic fault-tolerance and decentralize trust in any system that relies on blind signatures, like anonymous credentials and e-cash, or on OPRF, like the OPAQUE password authentication and the Privacy Pass anonymous authentication scheme, among many others.
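For orientation, here is a toy sketch of the 2HashDH OPRF [JKK14] that the paper thresholdizes: F_k(x) = H2(x, H1(x)^k), evaluated obliviously by blinding H1(x) with a random exponent. The tiny schoolbook group (safe prime p = 10007) and the hash-to-group map are illustrative stand-ins for the elliptic-curve groups used in practice.

```python
# Toy 2HashDH OPRF sketch; insecure demo parameters, not the paper's scheme.
import hashlib
import secrets

p = 10007             # safe prime: p = 2q + 1
q = (p - 1) // 2      # prime order of the quadratic-residue subgroup

def H1(x: bytes) -> int:
    """Hash into the order-q subgroup (squaring lands in the QRs)."""
    return pow(int.from_bytes(hashlib.sha256(x).digest(), "big") % p, 2, p)

def H2(x: bytes, y: int) -> bytes:
    return hashlib.sha256(x + y.to_bytes(4, "big")).digest()

k = secrets.randbelow(q - 1) + 1     # server's OPRF key

x = b"password"
r = secrets.randbelow(q - 1) + 1     # client's blinding exponent
a = pow(H1(x), r, p)                 # blinded element; hides x from the server

b = pow(a, k, p)                     # server's single exponentiation

y = pow(b, pow(r, -1, q), p)         # client unblinds: exponents cancel mod q
assert y == pow(H1(x), k, p)         # matches an unblinded evaluation
print(H2(x, y).hex())                # the OPRF output
```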
An Efficient Sequential Aggregate Signature Scheme with Lazy Verification
A sequential aggregate signature (SAS) scheme allows multiple potential signers to sequentially aggregate their respective signatures into a single compact signature. Typically, verification of a SAS signature requires access to all message and public key pairs utilized in the aggregation. However, efficiency is crucial for cryptographic protocols to facilitate their practical implementation. To this end, we propose a sequential aggregate signature scheme with lazy verification for a set of user-message pairs, allowing the verification algorithm to operate without requiring access to all message and public key pairs in the sequence. This construction is based on the RSA assumption in the random oracle model and is particularly beneficial in resource-constrained applications that involve forwarding authenticated information between parties, such as certificate chains. As an extension of this work, we introduce the notion of sequentially aggregatable proxy re-signatures, which enables third parties or proxies to transform aggregatable signatures under one public key into aggregatable signatures under another, useful in applications such as sharing web certificates and authenticating network paths. We also present a construction of a sequential aggregate proxy re-signature scheme, secure in the random oracle model under the RSA assumption, which may be of independent interest.
RHQC: post-quantum ratcheted key exchange from coding assumptions
Key exchange mechanisms (KE or KEMs) such as the Diffie-Hellman protocol have proved to be a cornerstone reconciling the efficiency of symmetric encryption with the practicality of public key primitives.
Such designs, however, assume that the long-term asymmetric key in use is never compromised. To relax this strong security assumption, and to allow for modern security features such as Perfect Forward Secrecy (PFS) and Post-Compromise Security (PCS), Ratcheted KE (RKE) has been proposed.
This work proposes to turn the Hamming Quasi-Cyclic (HQC) cryptosystem into such a ratcheted KE, yielding the first such code-based construction.
Interestingly, our design allows either party, indifferently, to update the key on demand, yielding a construction called bi-directional RKE, which compares favorably to generic transformations.
Finally, we prove that the resulting scheme satisfies the usual correctness and key-indistinguishability properties, and suggest concrete sets of parameters, assuming different real-life use cases.
Worst-case Analysis of Lattice Enumeration Algorithm over Modules
This paper presents a systematic study of module lattices. We extend the lattice enumeration algorithm from Euclidean lattices to module lattices, providing a generalized framework.
To incorporate the refined analysis by Hanrot and Stehlé (CRYPTO'07), we adapt key definitions from Euclidean lattices, such as HKZ-reduced bases and quasi-HKZ-reduced bases, to the pseudo-bases of modules.
Furthermore, we revisit the lattice profile, a crucial aspect of enumeration algorithm analysis, and extend its analysis to module lattices.
As a result, we improve the asymptotic performance of the module lattice enumeration algorithm and module-SVP.
For instance, let be a number field with a power-of-two integer , and suppose that .
Then, the nonzero shortest vector in can be found in time , improving upon the previous lattice enumeration bound of .
Our algorithm naturally extends to solving ideal-SVP. Given an ideal , where with a power-of-two integer , we can find the nonzero shortest element of in time , improving upon the previous enumeration bound of .
Post Quantum Migration of Tor
The efficiency of Shor's and Grover's algorithms and the advancement of quantum computers imply that the cryptography used until now to protect one's privacy is potentially vulnerable to retrospective decryption, also known as a harvest-now-decrypt-later attack. This dissertation provides an overview of the cryptographic schemes used by Tor, highlighting the non-quantum-resistant ones and introducing theoretical methods for assessing the performance of a local Tor network. The measurement is divided into three phases. We start by benchmarking a local Tor network simulation on constrained devices to isolate the time taken by classical cryptography processes. Secondly, the analysis incorporates existing benchmarks of quantum-secure algorithms and compares these performances on the devices. Lastly, the overhead is estimated by replacing the measured times of traditional cryptography with the times recorded for Post-Quantum Cryptography (PQC) execution within the specified Tor environment. By focusing on the replaceable cryptographic components, using theoretical estimations, and leveraging existing benchmarks, valuable insights into the potential impact of PQC can be obtained without needing to implement it fully.
Attacking Single-Cycle Ciphers on Modern FPGAs featuring Explainable Deep Learning
In this paper, we revisit the question of key recovery using side-channel analysis for unrolled, single-cycle block ciphers. In particular, we study the PRINCEv2 cipher. While it has been shown vulnerable in multiple previous studies, those studies were performed on side-channel friendly ASICs or older FPGAs (e.g., Xilinx Virtex II on the SASEBO-G board), mostly using expensive equipment. We start with the goal of exploiting a cheap modern FPGA and board, using power traces from a cheap oscilloscope. Specifically, we use a Xilinx Artix-7 on the ChipWhisperer CW305 board and a PicoScope 5000A, respectively.
We split our study into three parts. First, we show that the new set-up still exhibits easily detectable leakage, using a non-specific t-test. Second, we replicate attacks from older FPGAs. Namely, we start with the attack by Yli-Mäyry et al., a simple chosen-plaintext correlation power analysis attack using divide and conquer. However, we find that even this simple, powerful attack does not work, revealing a peculiar behavior. We study this behavior using a stochastic attack that attempts to extract the leakage model, and we show that models over a small part of the state are inconsistent and depend on more key bits than expected. We also attempt classical template attacks and get similar results.
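As background, the following is a minimal sketch of the non-specific (fixed-vs-random) t-test mentioned above, on synthetic traces; the injected leakage position and magnitude are illustrative, not measurements from the paper's setup.

```python
# Pointwise Welch t-test sketch (TVLA-style leakage assessment).
import numpy as np

def welch_t(group_a, group_b):
    """Pointwise Welch's t-statistic for trace arrays of shape (n, samples)."""
    ma, mb = group_a.mean(axis=0), group_b.mean(axis=0)
    va, vb = group_a.var(axis=0, ddof=1), group_b.var(axis=0, ddof=1)
    return (ma - mb) / np.sqrt(va / len(group_a) + vb / len(group_b))

rng = np.random.default_rng(0)
fixed = rng.normal(size=(5000, 1000))      # traces for the fixed plaintext
random_ = rng.normal(size=(5000, 1000))    # traces for random plaintexts
fixed[:, 500] += 0.2                       # inject artificial leakage

t = welch_t(fixed, random_)
print(np.argmax(np.abs(t)), np.abs(t).max())  # sample 500 exceeds the 4.5 bound
```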
To further exploit the leakage, we employ deep learning techniques and succeed in key recovery, albeit using a large number of traces. We apply the explainability technique called Key Guessing Occlusion (KGO) to detect which points the neural networks exploit. When we use these points as features for the classical template attack, it still does not recover the secret key, but its performance improves compared to other feature-selection techniques.
A Note on the Advanced Use of the Tate Pairing
This short note explains how the Tate pairing can be used to efficiently sample torsion points with precise requirements, and other applications. These applications are most clearly explained on Montgomery curves, using the Tate pairing of degree 2, but hold more generally for any degree or abelian variety, or even generalized Tate pairings. This note is explanatory in nature; it does not contain new results, but aims to provide a clear and concise explanation of results in the literature that are somewhat hidden, yet are extremely useful in practical isogeny-based cryptography.
A note on "industrial blockchain threshold signatures in federated learning for unified space-air-ground-sea model training"
We show that the threshold signature scheme [J. Ind. Inf. Integr. 39: 100593 (2024)] is insecure against forgery attacks. An adversary can find an efficient signing algorithm functionally equivalent to the valid signing algorithm, so as to convert a legitimate signature on one message into a valid signature on any other message.
HammR: A ZKP Protocol for Fixed Hamming-Weight Restricted-Entry Vectors
In this paper, we introduce HammR, a generic Zero-Knowledge Proof (ZKP) protocol demonstrating knowledge of a secret vector that has a fixed Hamming weight, with entries taken from a shifted multiplicative group.
As special cases, we are able to directly apply this protocol to restricted vectors and to rank-1 vectors, which are vectors with entries that lie in a one-dimensional subspace.
We show that these proofs can be batched with low computational overhead, and further prove that this general framework is complete, sound, and zero-knowledge, thus truly a genuine ZKP.
Finally, we present applications of HammR to various Syndrome Decoding Problems, including the Regular and Restricted SDPs, as well as to other settings such as lookup instances, proofs of proximity, and electronic voting protocols.
Black-Box Constant-Round Secure 2PC with Succinct Communication
The most fundamental performance metrics of secure multi-party computation (MPC) protocols are the number of messages the parties exchange (round complexity), the size of these messages (communication complexity), and the overall computational resources required to execute the protocol (computational complexity). Another quality metric of MPC protocols is whether they make black-box or non-black-box use of the underlying cryptographic primitives. Indeed, the design of black-box MPC protocols, besides being of theoretical interest, usually leads to protocols with better computational complexity.
In this work, we aim to optimize the round and communication complexity of black-box secure multi-party computation in the plain model, by designing a constant-round two-party computation protocol in the malicious setting, whose communication complexity is only polylogarithmic in the size of the function being evaluated.
We successfully design such a protocol, having only black-box access to fully homomorphic encryption, trapdoor permutations, and hash functions. To the best of our knowledge, our protocol is the first to make black-box use of standard cryptographic primitives while achieving almost asymptotically optimal communication and round complexity.
Cross-Platform Benchmarking of the FHE Libraries: Novel Insights into SEAL and OpenFHE
The rapid growth of cloud computing and data-driven applications has amplified privacy concerns, driven by the increasing demand to process sensitive data securely. Homomorphic encryption (HE) has become a vital solution for addressing these concerns by enabling computations on encrypted data without revealing its contents. This paper provides a comprehensive evaluation of two leading HE libraries, SEAL and OpenFHE, examining their performance, usability, and support for prominent HE schemes such as BGV and CKKS. Our analysis highlights computational efficiency, memory usage, and scalability across Linux and Windows platforms, emphasizing their applicability in real-world scenarios. Results reveal that Linux outperforms Windows in computation efficiency, with OpenFHE emerging as the optimal choice across diverse cryptographic settings. This paper provides valuable insights for researchers and practitioners to advance privacy-preserving applications using FHE.
Quantum Attacks on Sum of Even-Mansour Construction Utilizing Online Classical Queries
The Sum of Even-Mansour (SoEM) construction, proposed by Chen et al. at Crypto 2019, has become the basis for designing symmetric schemes such as nonce-based MAC and nonce-based encryption schemes. In this paper, we make the first attempt to study the quantum security of SoEM under the Q1 model, where the targeted encryption oracle can only respond to classical queries rather than quantum ones.
Firstly, we propose a quantum key recovery attack on SoEM21 that uses online classical queries. Compared with the current best classical result, our method offers a quadratic time speedup while maintaining the same number of queries, and its time complexity is well below that of quantum exhaustive search. We further propose classical and quantum key recovery attacks on the generalized SoEMs1 construction (consisting of several independent public permutations), revealing that quantum algorithms can provide a quadratic acceleration over purely classical methods. Our results also imply that the quantum security of SoEM21 cannot be strengthened merely by increasing the number of permutations.
A Practical Tutorial on Deep Learning-based Side-channel Analysis
This tutorial provides a practical introduction to Deep Learning-based Side-Channel Analysis (DLSCA), a powerful approach for evaluating the security of cryptographic implementations.
Leveraging publicly available datasets and the Google Colab service, we guide readers through the fundamental steps of DLSCA, offering clear explanations and code snippets.
We focus on the core DLSCA framework, providing references for more advanced techniques, and address the growing interest in this field driven by emerging standardization efforts like AIS 46. This tutorial is designed to be accessible to researchers, students, and practitioners seeking to learn practical DLSCA techniques and improve the security of cryptographic systems.
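In the spirit of the tutorial's code snippets, here is a minimal profiling-model sketch: an MLP that learns to predict an intermediate byte (e.g., an S-box output) from power traces. The dataset shapes, labels, and hyperparameters below are illustrative placeholders, not the tutorial's actual settings.

```python
# Minimal DLSCA profiling-model sketch with placeholder data.
import numpy as np
from tensorflow import keras

n_traces, n_samples = 2000, 700                     # hypothetical profiling set
X = np.random.randn(n_traces, n_samples).astype("float32")
y = np.random.randint(0, 256, n_traces)             # label: target byte value

model = keras.Sequential([
    keras.layers.Input(shape=(n_samples,)),
    keras.layers.Dense(200, activation="relu"),
    keras.layers.Dense(200, activation="relu"),
    keras.layers.Dense(256, activation="softmax"),  # one class per byte value
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=2, batch_size=256, validation_split=0.1)

# Attack phase (not shown): per-trace class probabilities are accumulated
# as log-likelihoods over key guesses to rank the correct key.
```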
On Deniable Authentication against Malicious Verifiers
Deniable authentication allows Alice to authenticate a message to Bob, while retaining deniability towards third parties. In particular, not even Bob can convince a third party that Alice authenticated that message. Clearly, in this setting Bob should not be considered trustworthy. Furthermore, deniable authentication is necessary for deniable key exchange, as explicitly desired by Signal and off-the-record (OTR) messaging.
In this work we focus on (publicly verifiable) designated verifier signatures (DVS), which are a widely used primitive to achieve deniable authentication. We propose a definition of deniability against malicious verifiers for DVS. We give a construction that achieves this notion in the random oracle (RO) model. Moreover, we show via a concrete attack that our notion is not achievable in the standard model, thereby giving a non-contrived example of the RO heuristic failing.
All previous protocols that claim to achieve deniable authentication against malicious verifiers (like Signal's initial handshake protocols X3DH and PQXDH) rely on the Extended Knowledge of Diffie–Hellman (EKDH) assumption. We show that this assumption is broken and that these protocols do not achieve deniability against malicious verifiers.
Practical Semi-Open Chat Groups for Secure Messaging Applications
Chat groups in secure messaging applications such as Signal, Telegram, and WhatsApp are nowadays used for rapid and widespread dissemination of information to large groups of people. This is common even in sensitive contexts, associated with the organisation of protests, activist groups, and internal company dialogues. Manual administration of who has access to such groups quickly becomes infeasible in the presence of hundreds or thousands of members.
We construct a practical, privacy-preserving reputation system that automates the approval of new group members based on their reputation amongst the existing membership. We demonstrate security against malicious adversaries in a single-server model, with no further trust assumptions required. Furthermore, our protocol supports arbitrary reputation calculations even while almost all group members are offline (as is likely). In addition, we demonstrate the practicality of the approach via an open-source implementation. For groups of size 50 (resp. 200), an admission process on a user that received 40 (resp. 80) scores requires 1312.2 KiB (resp. 5239.4 KiB) of communication, and 3.3s (resp. 16.3s) of overall computation on a single core. While our protocol design matches existing secure messaging applications, we believe it can have value in distributed reputation computation beyond this problem setting.
Optimized Frobenius and Cyclotomic Cubing for Enhanced Pairing Computation
Efficient implementation of a pairing-based cryptosystem relies on high-performance arithmetic in finite fields F_p and their extensions F_{p^k}, where k is the embedding degree. A small embedding degree is crucial because part of the arithmetic for pairing computation occurs in F_{p^k} and includes operations such as squaring, multiplication, and Frobenius operations.
In this paper, we present a fast and efficient method for computing the Frobenius endomorphism, together with an analysis of its complexity. Additionally, we introduce an improvement in the efficiency of cyclotomic cubing operations for several pairing-friendly elliptic curves, which are essential for the calculation of the Tate pairing and its derivatives.
PMNS arithmetic for elliptic curve cryptography
We show that using a polynomial representation of prime field elements (PMNS) can be relevant for real-world cryptographic applications, even in terms of performance. More specifically, we consider elliptic curves for cryptography when pseudo-Mersenne primes cannot be used to define the base field (e.g., Brainpool standardized curves, JubJub curves in the zkSNARK context, pairing-friendly curves). All these primitives make massive use of the Montgomery reduction algorithm and of well-known libraries such as GMP or OpenSSL for base field arithmetic. We show how this arithmetic can be advantageously replaced by a polynomial representation of the number that can be easily parallelized, avoids carry propagation, and allows for randomization. We provide good PMNS bases in the cryptographic contexts mentioned above, together with a C implementation that is competitive with or faster than GMP and OpenSSL for basic operations in the base fields considered. We also integrate this arithmetic into the Rust reference implementation of elliptic curve scalar multiplication for zero-knowledge applications, and achieve better practical performance for such protocols. This shows that PMNS is an attractive alternative for the base field arithmetic layer in cryptographic primitives using elliptic curves or pairings.
Algebraic Cryptanalysis of Small-Scale Variants of Stream Cipher E0
This study explores the algebraic cryptanalysis of small-scale variants of the E0 stream cipher, a legacy cipher used in the Bluetooth protocol. By systematically reducing the size of the linear feedback shift registers (LFSRs) while preserving the cipher’s core structure, we investigate the relationship between the number of unknowns and the number of consecutive keystream bits required to recover the internal states of the LFSRs. Our work demonstrates an approximately linear relationship between the number of consecutive keystream bits and the size of small-scale E0 variants, as indicated by our experimental results. To this end, we utilize two approaches: the computation of Gröbner bases using Magma’s F4 algorithm and the application of CryptoMiniSat’s SAT solver. Our experimental results show that increasing the number of keystream bits significantly improves computational efficiency, with the F4 algorithm achieving a speedup of up to 733× when additional equations are supplied. Furthermore, we verify the non-existence of equations of degree four or lower for up to seven consecutive keystream bits, and the non-existence of equations of degree three or lower for up to eight consecutive keystream bits, extending prior results on the algebraic properties of E0.
zkAML: Zero-knowledge Anti Money Laundering in Smart Contracts with whitelist approach
In the interconnected global financial system, anti-money laundering (AML) and combating the financing of terrorism (CFT) regulations are indispensable for safeguarding financial integrity. However, while illicit transactions constitute only a small fraction of overall financial activities, traditional AML/CFT frameworks impose uniform compliance burdens on all users, resulting in inefficiencies, transaction delays, and privacy concerns.
These issues stem from the institution-centric model, where financial entities independently conduct compliance checks, resulting in repeated exposure of personally identifiable information (PII) and operational bottlenecks.
To address these challenges, we introduce zkAML, a cryptographic framework that offers a novel approach to AML/CFT compliance. By leveraging zero-knowledge Succinct Non-Interactive Argument of Knowledge (zk-SNARK) proofs, zkAML enables users to cryptographically demonstrate their regulatory compliance without revealing sensitive personal information. This approach eliminates redundant identity checks, streamlines compliance procedures, and enhances transaction efficiency while preserving user privacy.
We implement and evaluate zkAML on a blockchain network to demonstrate its practicality. Our experimental results show that zkAML achieves 55 transactions per second (TPS) on a public network and 324 TPS on a private network. The zk-SNARK proof generation times are on the order of milliseconds for senders and for receivers, with a constant verification time per transaction. These findings highlight zkAML's potential as a privacy-preserving and regulation-compliant solution for modern financial systems.
SoK: Efficient Design and Implementation of Polynomial Hash Functions over Prime Fields
Poly1305 is a widely-deployed polynomial hash function. The rationale behind its design was laid out in a series of papers by Bernstein, the last of which dates back to 2005. As computer architectures evolved, some of its design features became less relevant, but implementers found new ways of exploiting these features to boost its performance. However, would we still converge to this same design if we started afresh with today's computer architectures and applications? To answer this question, we gather and systematize a body of knowledge concerning polynomial hash design and implementation that is spread across research papers, cryptographic libraries, and developers' blogs. We develop a framework to automate the validation and benchmarking of the ideas that we collect. This approach leads us to five new candidate designs for polynomial hash functions. Using our framework, we generate and evaluate different implementations and optimization strategies for each candidate. We obtain substantial improvements over Poly1305 in terms of security and performance. Besides laying out the rationale behind our new designs, our paper serves as a reference for efficiently implementing polynomial hash functions, including Poly1305.
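For readers unfamiliar with the primitive under discussion, the following is a minimal sketch of the polynomial-evaluation core that Poly1305-style hashes share: 16-byte message blocks (each with a padding byte appended) become coefficients, and the polynomial is evaluated at a secret point r modulo the prime 2^130 - 5 by Horner's rule. The clamping of r and the final addition of the one-time key s follow Poly1305; the random keys here are illustrative.

```python
# Poly1305-style polynomial-evaluation hash over the prime 2^130 - 5.
import secrets

P = (1 << 130) - 5                     # the Poly1305 prime

def poly_hash(msg: bytes, r: int, s: int) -> bytes:
    r &= 0x0ffffffc0ffffffc0ffffffc0fffffff   # Poly1305's clamping of r
    acc = 0
    for i in range(0, len(msg), 16):
        block = msg[i:i + 16] + b"\x01"       # padding byte encodes the length
        acc = (acc + int.from_bytes(block, "little")) * r % P
    return ((acc + s) % (1 << 128)).to_bytes(16, "little")

r = int.from_bytes(secrets.token_bytes(16), "little")
s = int.from_bytes(secrets.token_bytes(16), "little")
print(poly_hash(b"a message to authenticate", r, s).hex())
```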
Multi-Party Computation in Corporate Data Processing: Legal and Technical Insights
This paper examines the deployment of Multi-Party Computation (MPC) in corporate data processing environments, focusing on its legal and technical implications under the European Union’s General Data Protection Regulation (GDPR). By combining expertise in cryptography and legal analysis, we address critical questions necessary for assessing the suitability of MPC for real-world applications. Our legal evaluation explores the conditions under which MPC qualifies as an anonymizing approach under GDPR, emphasizing the architectural requirements, such as the distribution of control among compute parties, to minimize re-identification risks effectively. The assertions put forth in the legal opinion are validated by two distinct assessments conducted independently.
We systematically answer key regulatory questions, demonstrating that a structured legal assessment is indispensable for organizations aiming to adopt MPC while ensuring compliance with privacy laws. In addition, we complement this analysis with a practical implementation of privacy-preserving analytics using Carbyne Stack, a cloud-native open-source platform for scalable MPC applications, which integrates the MP-SPDZ framework as its backend. We benchmark SQL queries under various security models to evaluate scalability and efficiency.
Practical Key Collision on AES and Kiasu-BC
The key collision attack was proposed as an open problem in the key-committing security of Authenticated Encryption (AE) schemes. At ASIACRYPT 2024, Taiyama et al. introduced a novel type of key collision, the target-plaintext key collision, for AES. Depending on whether the plaintext is fixed, it can be divided into two variants, which can be directly converted into collision attacks and semi-free-start collision attacks on the Davies-Meyer (DM) hashing mode.
In this paper, we propose a new rebound attack framework leveraging a time-memory tradeoff strategy, enabling practical key collision attacks with optimized complexity. We also present an improved automatic method for finding rebound-friendly differential characteristics by controlling the probabilities in the inbound and outbound phases, allowing the identified characteristics to be used directly in key collision attacks. Through our analysis, we demonstrate that the 2-round attack proposed by Taiyama et al. is in fact an attack of one of the two variants, while attacks of the other variant are considerably more challenging. By integrating our improved automatic method into the new rebound attack framework, we identify a new differential characteristic for the 2-round attack and develop the first practical attack on the corresponding 2-round target. Additionally, we present practical attacks on 5-round and 3-round targets, along with a practical attack on a 6-round target. Furthermore, we reduce the time complexities of attacks on the other variants.
Machine-checking Multi-Round Proofs of Shuffle: Terelius-Wikstrom and Bayer-Groth
Shuffles are used in electronic voting in much the same way physical ballot boxes are used in paper systems: (encrypted) ballots are input into the shuffle and (encrypted) ballots are output in a random order, thereby breaking the link between voter identities and ballots. To guarantee that no ballots are added, omitted or altered, zero-knowledge proofs, called proofs of shuffle, are used to provide publicly verifiable transcripts that prove that the outputs are a re-encrypted permutation of the inputs. The most prominent proofs of shuffle, in practice, are those due to Terelius and Wikström (TW), and Bayer and Groth (BG). TW is simpler whereas BG is more efficient, both in terms of bandwidth and computation. Security for the simpler (TW) proof of shuffle has already been machine-checked, but several prominent vendors insist on using the more complicated BG proof of shuffle. Here, we machine-check the security of the Bayer-Groth proof of shuffle via the Coq proof assistant. We then extract the verifier (software) required to check the transcripts produced by Bayer-Groth implementations and use it to check transcripts from the Swiss Post e-voting system under development for national elections in Switzerland.
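As a concrete picture of the relation that a proof of shuffle (TW or BG) attests to, here is a toy ElGamal re-encryption mix: each ciphertext is multiplied by a fresh encryption of one and the list is permuted, so the outputs decrypt to the same multiset of plaintexts while being unlinkable to the inputs. The schoolbook group parameters are illustrative only.

```python
# Toy ElGamal re-encryption shuffle; insecure demo parameters.
import secrets

p = 10007                      # safe prime: p = 2q + 1
q = (p - 1) // 2
g = 4                          # generator of the order-q subgroup

sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)

def enc(m):
    rr = secrets.randbelow(q - 1) + 1
    return pow(g, rr, p), m * pow(pk, rr, p) % p

def dec(c):
    a, b = c
    return b * pow(pow(a, sk, p), p - 2, p) % p

def reencrypt(c):              # multiply by a fresh encryption of 1
    a, b = c
    rr = secrets.randbelow(q - 1) + 1
    return a * pow(g, rr, p) % p, b * pow(pk, rr, p) % p

msgs = [pow(g, i, p) for i in (1, 2, 3, 4)]   # plaintexts as group elements
cts = [enc(m) for m in msgs]

perm = [2, 0, 3, 1]                           # the mixer's secret permutation
shuffled = [reencrypt(cts[j]) for j in perm]

assert sorted(dec(c) for c in shuffled) == sorted(msgs)
```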
Achieving Data Reconstruction Hardness and Efficient Computation in Multiparty Minimax Training
Generative models have achieved remarkable success in a wide range of applications. Training such models using proprietary data from multiple parties has been studied in the realm of federated learning. Yet recent studies showed that reconstruction of authentic training data can be achieved in such settings.
On the other hand, multiparty computation (MPC) guarantees standard data privacy, yet scales poorly for training generative models.
In this paper, we focus on improving reconstruction hardness during Generative Adversarial Network (GAN) training while keeping the training cost tractable. To this end, we explore two training protocols that use a public generator and an MPC discriminator: Protocol 1 (P1) uses a fully private discriminator, while Protocol 2 (P2) privatizes the first three discriminator layers. We prove reconstruction hardness for P1 and P2 by showing that (1) a public generator does not allow recovery of authentic training data, as long as the first two layers of the discriminator are private; and, through an existing approximation hardness result on ReLU networks, (2) a discriminator with at least three private layers does not allow authentic data reconstruction with algorithms polynomial in network depth and size. We show empirically that, compared with fully MPC training, P1 substantially reduces the training time and P2 reduces it further.
Revisiting the Security and Privacy of FIDO2
We revisit the privacy and security analyses of FIDO2, a widely deployed standard for passwordless authentication on the Web.
We discuss previous works
and conclude that each of them has at least one of the following limitations:
(i) impractical trusted setup assumptions,
(ii) security models that are inadequate in light of the state of the art in practical attacks,
(iii) not analyzing FIDO2 as a whole, especially for its privacy guarantees.
Our work addresses these gaps and proposes revised security models for privacy and authentication. Equipped with our new models, we analyze FIDO2 modularly and focus on its component protocols, WebAuthn and CTAP2, clarifying their exact security guarantees.
In particular, our results, for the first time, establish privacy guarantees for FIDO2 as a whole.
Furthermore, we suggest minor modifications that can help FIDO2 provably meet stronger privacy and authentication definitions and withstand known and novel attacks.
CAKE requires programming - On the provable post-quantum security of (O)CAKE
In this work we revisit the post-quantum security of KEM-based password-authenticated key exchange (PAKE), specifically that of (O)CAKE. So far, these schemes have evaded a security proof that considers quantum adversaries. We give a detailed analysis of why this is the case, determining the missing proof techniques. To this end, we first provide a proof of security in the post-quantum setting, up to a single gap. This proof already turns out to be technically involved, requiring advanced techniques to reason in the QROM, including the compressed oracle and the extractable QROM. To pave the way towards closing the gap, we then further identify an efficient simulator for the ideal cipher. This provides certain programming abilities as a necessary and sufficient condition for closing the gap in the proof: we demonstrate that we can close the gap using the simulator, and we give a meta-reduction based on KEM anonymity showing the impossibility of a non-programming reduction for a class of KEMs that includes Kyber / ML-KEM.
A 10-bit S-box generated by Feistel construction from cellular automata
In this paper, we propose a new 10-bit S-box generated from a Feistel construction. The subpermutations are generated by a 5-cell cellular automaton based on a unique, well-chosen rule and bijective affine transformations. In particular, the cellular automaton rule is chosen based on empirical tests of its ability to generate good pseudorandom output on a ring cellular automaton. Similarly, the layout of the Feistel network is based on empirical data regarding the quality of the output S-box.
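To make the overall shape of such a construction concrete, below is a toy version: a 5-cell ring cellular automaton defines the Feistel round functions for a 10-bit S-box. The specific rule (rule 30 here), the round count, and the omitted affine transformations are placeholders for the paper's empirically chosen components.

```python
# Toy CA-based Feistel construction of a 10-bit S-box (placeholder parameters).
def ca_step(s):
    """One update of a 5-cell ring CA on a 5-bit state (Wolfram rule 30)."""
    out = 0
    for i in range(5):
        left = (s >> ((i + 1) % 5)) & 1
        center = (s >> i) & 1
        right = (s >> ((i - 1) % 5)) & 1
        out |= (left ^ (center | right)) << i
    return out

def F(x, steps):
    """Round function: iterate the CA a round-dependent number of steps."""
    for _ in range(steps):
        x = ca_step(x)
    return x

def sbox10(v, rounds=4):
    L, R = v >> 5, v & 0x1F
    for rnd in range(1, rounds + 1):
        L, R = R, L ^ F(R, rnd)      # a Feistel round is bijective for any F
    return (L << 5) | R

assert len({sbox10(v) for v in range(1024)}) == 1024   # a 10-bit permutation
```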
We perform cryptanalysis of the generated 10-bit S-box: we test the properties of algebraic degree, algebraic complexity, nonlinearity, strict avalanche criterion, bit independence criterion, linear approximation probability, differential approximation probability, differential uniformity and boomerang uniformity of our S-box, and relate them to those of the AES S-box. We find security properties comparable to or sometimes even better than those of the standard AES S-box. We believe that our S-box could be used to replace the 5-bit substitution of ciphers like ASCON.
A Democratic Distributed Post-Quantum Certificateless Encryption Scheme
We propose a post-quantum certificateless encryption scheme based on a web of trust instead of a centralized Key Generation Center. Our scheme allows nodes to communicate securely. It is the nodes already present in the network that vote on the acceptance of new nodes and agree on the shared key. The threshold required for the acceptance of a new node is configurable. Our protocol thus allows the network to operate entirely without a Key Generation Center (or Key Distribution Center).
Our scheme is based on Quasi-Cyclic Moderate-Density Parity-Check (QC-MDPC) McEliece, which is resistant to quantum computer attacks. The voting system uses Shamir secret sharing, coupled with the Kabatianskii-Krouk-Smeets signature scheme; both are also resistant to quantum computer attacks.
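For reference, a minimal sketch of the Shamir secret sharing building block used by such a voting system: a degree-(t-1) polynomial over a prime field hides the secret in its constant term, and any t shares reconstruct it by Lagrange interpolation at zero. The field and parameters below are illustrative.

```python
# Minimal Shamir secret sharing over a prime field.
import secrets

P = 2**127 - 1        # a Mersenne prime field

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[2:5]) == 123456789
```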
We provide a security analysis of our protocol, as well as a formal verification and a proof of concept code.
StaMAC: Fault Protection via Stable-MAC Tags
Fault attacks pose a significant threat to cryptographic implementations, motivating the development of countermeasures, primarily based on a combination of redundancy and masking techniques. Redundancy, in these countermeasures, is often implemented via duplication or linear codes. However, their inherent structure remains susceptible to strategic fault injections bypassing error checks. To address this, the CAPA countermeasure from CRYPTO 2018 leveraged information-theoretic MAC tags for protection against fault and combined attacks. However, a recent attack has shown that CAPA can only protect against either side-channel analysis or fault attacks, but not both simultaneously, and with significant hardware costs. Its successor, M&M, improves efficiency but lacks protection against ineffective faults.
In this paper, we propose StaMAC, a framework aimed at securely incorporating MAC tags against both side-channel and fault adversaries in a non-combined scenario. We extend the security notions outlined in StaTI from TCHES 2024, and propose the notion of MAC-stability, ensuring fault propagation in masked and MACed circuits and necessitating only a single error check at the end of the computation. Additionally, we show that the stability notion from StaTI is arbitrarily composable (whereas it was previously thought to be only serially composable), making it the first arbitrarily composable fault security notion that does not require intermediate error checks or corrections. Next, we establish the improved protection of masking combined with MAC tags compared to linear encoding techniques by showing bounds on the advantage of several fault adversaries: a gate/register faulting adversary, an arbitrary register faulting adversary, and a random register faulting adversary. We then show how to transform any probing secure circuit to protect against fault attacks using the proposed MAC-stable gadgets implementing field operations. Finally, we demonstrate StaMAC on an AES implementation, evaluating its security and hardware costs compared to countermeasures using MAC tags.
Quantum circuit for implementing AES S-box with low costs
The Advanced Encryption Standard (AES) is one of the most widely used and extensively studied encryption algorithms globally, renowned for its efficiency and robust resistance to attacks. In this paper, three quantum circuits are designed to implement the S-box, the sole nonlinear component in AES. By incorporating a linear key schedule, we achieve a quantum circuit that implements AES with the minimum number of qubits used. As a consequence, only 264/328/398 qubits are needed to implement the quantum circuits for AES-128/192/256. Furthermore, through the quantum circuits of the S-box and key schedule, the overall size of the quantum circuit required for Grover's algorithm to attack AES is significantly decreased. This enhancement improves both the security and resource efficiency of AES in a quantum computing environment.
Verifiable Secret Sharing Based on Fully Batchable Polynomial Commitment for Privacy-Preserving Distributed Computation
Privacy-preserving distributed computation enables a resource-limited client to securely delegate computations on sensitive data to multiple servers by distributing shares of the data. In such systems, verifiable secret sharing (VSS) is a fundamental component, ensuring secure data distribution and directly impacting the overall performance. The most practical approach to construct VSS is through polynomial commitment (PC), with two main research directions to improve the VSS efficiency. The first focuses on improving the dealer time by designing PC that supports batch evaluation, i.e., generating multiple evaluation proof pairs in one shot. The second aims to reduce the broadcast cost by designing PC that supports batch opening, i.e., producing a compact proof for multiple evaluations.
Recently, Zhang et al. (Usenix Security 2022) proposed a transparent PC that supports batch evaluation and obtained a transparent VSS with optimal dealer time. However, their scheme does not support batch opening, leading to high broadcast costs in VSS. To the best of our knowledge, no transparent PC currently supports both batch evaluation and batch opening, thus limiting the performance of existing VSS schemes.
In this paper, we propose a transparent fully batchable polynomial commitment (TFB-PC) that simultaneously supports batch evaluation and batch opening. Leveraging TFB-PC, we present a VSS scheme with optimal dealer time, participant time, and communication cost. Furthermore, we implement our VSS scheme and compare its performance with Zhang et al.'s VSS (the naive approach). Results show that our scheme achieves a substantial reduction in communication cost and a speedup in participant time.
Polar Lattice Cryptography
We present a protocol that builds a cryptographic solution shifting security responsibility from the cipher designer to the cipher user. The Polar Lattice is a pattern-devoid cryptographic cipher. It is based on a geometric construct, a polar lattice, on which the letters of a plaintext alphabet A are represented as two points per letter, so that to transmit a letter the transmitter sends a randomized pathway, a trail (the ciphertext), that begins at the first point of the transmitted letter and ends at its second point; the transmitted pathway is a set of steps on the lattice. Once a letter is transmitted, the next bits of the ciphertext mark the beginning of the pathway that points to the next letter. The size and the geometric construction of the polar lattice are randomized and kept secret. The randomized pathways may be long or short; the attacker does not know how to parcel the ciphertext into individual trails pointing to distinct letters of the plaintext alphabet A. The polar lattice may be implemented algebraically or geometrically; the lattice may even be a physical nano-construct. The polar lattice is very power efficient and very fast. It claims all the attributes associated with pattern-devoid cryptography: it allows only brute-force cryptanalysis, which in turn can be defeated through increased ciphertext size, unlimited key size, and structural complexity.
Analysis of the Telegram Key Exchange
We describe, formally model, and prove the security of Telegram's key exchange protocols for client-server communications. To achieve this, we develop a suitable multi-stage key exchange security model along with pseudocode descriptions of the Telegram protocols that are based on analysis of Telegram's specifications and client source code. We carefully document how our descriptions differ from reality and justify our modelling choices. Our security proofs reduce the security of the protocols to that of their cryptographic building blocks, but the subsequent analysis of those building blocks requires the introduction of a number of novel security assumptions, reflecting many design decisions made by Telegram that are suboptimal from the perspective of formal analysis. Along the way, we provide a proof of IND-CCA security for the variant of RSA-OAEP+ used in Telegram and identify a hypothetical attack exploiting current Telegram server behaviour (which is not captured in our protocol descriptions). Finally, we reflect on the broader lessons about protocol design that can be taken from our work.
Verifiable Decapsulation: Recognizing Faulty Implementations of Post-Quantum KEMs
Cryptographic schemes often contain verification steps that are essential for security. Yet, faulty implementations missing these steps can easily go unnoticed, as the schemes might still function correctly. A prominent instance of such a verification step is the re-encryption check in the Fujisaki-Okamoto (FO) transform, which plays a central role in the post-quantum key encapsulation mechanisms (KEMs) considered in NIST's PQC standardization process. In KEMs built from FO, decapsulation performs a re-encryption check that is essential for security, but not for functionality. In other words, it will go unnoticed if this essential step is omitted or wrongly implemented, opening the door to key recovery attacks. Notably, such an implementation flaw was present in HQC's reference implementation and was only noticed after 19 months.
In this work, we develop a modified FO transform that binds re-encryption to functionality, ensuring that a faulty implementation which skips re-encryption will be exposed through basic correctness tests. We do so by adapting the "verifiable verification" methodology of Fischlin and Günther (CCS 2023) to the context of FO-based KEMs. More concretely, by exporting an unpredictable confirmation code from the public key encryption and embedding it into the key derivation function, we can confirm that (most of) the re-encryption step was indeed performed during decapsulation. We formalize this concept, establish modified FO transforms, and prove how unpredictable PKE confirmation codes turn into noticeable correctness errors for faulty implementations. We show how to apply this technique to ML-KEM and HQC, both with negligible overhead, by leveraging the entropy lost through ciphertext compression or truncation. We confirm that our approach works through mathematical proofs, as well as experimental data. Our experiments show that the implementation flaw in HQC's reference implementation indeed makes basic test cases fail when following our approach.
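The following is a schematic sketch, not the paper's exact transform, of how binding a confirmation code into key derivation exposes a skipped re-encryption: the code consists of pad bytes that the ciphertext never reveals, so a decapsulation that omits re-encryption cannot derive the session key and fails ordinary correctness tests. The placeholder "PKE" (with sk = pk) is purely structural.

```python
# Schematic FO-style KEM with a confirmation code bound into the KDF.
import hashlib, os

def H(*parts, n=32):
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return h.digest()[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def pke_encrypt(pk, m, coins):                 # deterministic toy PKE
    pad = H(b"pad", pk, coins)                 # 32 bytes; only half is used
    c1 = xor(coins, H(b"mask", pk))            # lets the key holder recover coins
    c2 = xor(m, pad[:16])
    conf = pad[16:]                            # confirmation code: never sent
    return (c1, c2), conf

def pke_decrypt(sk, ct):
    c1, c2 = ct
    coins = xor(c1, H(b"mask", sk))
    return xor(c2, H(b"pad", sk, coins)[:16])

def encaps(pk):
    m = os.urandom(16)
    coins = H(b"coins", m, n=16)
    ct, conf = pke_encrypt(pk, m, coins)
    return ct, H(b"key", m, conf)              # session key depends on conf

def decaps(sk, pk, ct):
    m = pke_decrypt(sk, ct)
    coins = H(b"coins", m, n=16)
    ct2, conf = pke_encrypt(pk, m, coins)      # re-encryption cannot be skipped:
    if ct2 != ct:                              # conf is needed for the key
        return H(b"reject", bytes(32))         # implicit rejection
    return H(b"key", m, conf)

pk = sk = os.urandom(32)                       # structural placeholder keypair
ct, k = encaps(pk)
assert decaps(sk, pk, ct) == k
```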
Concretely Efficient Correlated Oblivious Permutation
Oblivious permutation (OP) enables two parties, a sender with a private data vector x and a receiver with a private permutation π, to securely obtain shares of π(x). OP has been used to construct many important MPC primitives and applications such as secret shuffle, oblivious sorting, private set operations, secure database analysis, and privacy-preserving machine learning. Due to its high complexity, OP has become a performance bottleneck in several practical applications, and many efforts have been devoted to enhancing its concrete efficiency. Chase et al. (Asiacrypt'20) proposed an offline-online OP paradigm leveraging a pre-computable resource termed Share Translation. While this paradigm significantly reduces online costs, the substantial offline cost of generating Share Translation remains an area for further investigation.
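To illustrate the paradigm, here is a sketch of the generic offline-online structure, with a trusted dealer standing in for the correlation generation that the paper's COP protocols actually realize: the precomputed correlation reduces the online phase to a single masked message.

```python
# Offline-online oblivious permutation sketch (dealer-assisted, for illustration).
import secrets

n, bits = 8, 128
rng = secrets.SystemRandom()
pi = list(range(n)); rng.shuffle(pi)                  # receiver's permutation
x = [secrets.randbits(bits) for _ in range(n)]        # sender's data vector

# offline: sender holds random a, b; receiver holds delta = pi(a) XOR b
a = [secrets.randbits(bits) for _ in range(n)]
b = [secrets.randbits(bits) for _ in range(n)]
delta = [a[pi[i]] ^ b[i] for i in range(n)]

# online: a single masked message from sender to receiver
msg = [x[i] ^ a[i] for i in range(n)]
recv_share = [msg[pi[i]] ^ delta[i] for i in range(n)]   # = x[pi[i]] ^ b[i]
send_share = b                                            # sender's share

assert [r ^ s for r, s in zip(recv_share, send_share)] == [x[pi[i]] for i in range(n)]
```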
In this work, we redefine the pre-computable resource as a cryptographic primitive known as Correlated Oblivious Permutation (COP) and conduct in-depth analyses and optimizations of the two COP generation solutions: network-based solution and matrix-based solution. The optimizations for the network-based solution halve the communication/computation cost of constructing a switch (the basic unit of the permutation network) and reduce the number of switches in the permutation network. The optimizations for the matrix-based solution halve the communication cost of small-size COP generation and reduce the cost of large-size COP generation with in-outside permutation decomposition.
We implement our two COP generation protocols and conduct comprehensive evaluations. Taking commonly used 128-bit input data as an example, our network-based and matrix-based solutions are up to 1.7x and 1.6x faster than baseline protocols, respectively.
We further equip state-of-the-art (SOTA) private set union (PSU) protocols with our optimized COP, achieving over 25% reduction in communication cost and a 35% decrease in execution time. This shows that our COP optimizations bring significant improvements to real-world MPC primitives.
Ciphertext-Ciphertext Matrix Multiplication: Fast for Large Matrices
Matrix multiplication of two encrypted matrices (CC-MM) is a key challenge for privacy-preserving machine learning applications. As modern machine learning models focus on scalability, fast CC-MM on large datasets is increasingly in demand.
In this work, we present a CC-MM algorithm for large matrices. The algorithm consists of plaintext matrix multiplications (PP-MM) and ciphertext matrix transpose algorithms (C-MT). We propose a fast C-MT algorithm that is computationally inexpensive compared to PP-MM. By leveraging high-performance BLAS libraries to optimize PP-MM, we implement large-scale CC-MM with substantial performance improvements. Furthermore, we propose lightweight algorithms that significantly reduce the key size for CC-MM with comparable efficiency.
In a single-thread implementation, the C-MT algorithm transposes an encrypted matrix in seconds, and the CC-MM algorithm multiplies two encrypted matrices in seconds. For large matrices, our algorithm outperforms the state-of-the-art CC-MM method of Jiang-Kim-Lauter-Song [CCS'18] by a large factor.
Protecting Computations Against Continuous Bounded-Communication Leakage
We consider the question of protecting a general computation device, modeled by a stateful Boolean circuit, against leakage of partial information about its internal wires. Goyal et al. (FOCS 2016) obtained a solution for the case of bounded-communication leakage, where the wires are partitioned into two parts and the leakage can be any function computed using a bounded number of bits of communication between the parts. However, this solution suffers from two major limitations: (1) it only applies to a one-shot (stateless) computation, mapping an encoded input to an encoded output, and (2) the leakage-resilient circuit consumes fresh random bits, whose number scales linearly with the circuit complexity of the computed function.
In this work, we eliminate the first limitation and make progress on the second. Concretely:
- We present the first construction of stateful circuits that offer information-theoretic protection against continuous bounded-communication leakage. As an application, we extend a two-party "malware-resilient" protocol of Goyal et al. to the continuous-leakage case.
- For simple types of bounded-communication leakage, which leak parities or disjunctions of circuit wires or their negations, we obtain a deterministic variant that does not require any fresh randomness beyond the randomness in the initial state. Here we get computational security based on a subexponentially secure one-way function. This is the first deterministic leakage-resilient circuit construction for any nontrivial class of global leakage.
Disincentivize Collusion in Verifiable Secret Sharing
In verifiable secret sharing (VSS), a dealer shares a secret input among several parties, ensuring each share is verifiable. Motivated by its applications in the blockchain space, we focus on a VSS where parties holding shares are not allowed to reconstruct the dealer's secret (even partially) on their own terms; we refer to such an attempt as privacy-targeted collusion.
In this context, our work investigates mechanisms deterring such collusion in VSS among rational and malicious parties. For this problem, we make both algorithmic and combinatorial contributions:
1. We provide two collusion-deterrent mechanisms to discourage parties from colluding and recovering the dealer's secret. Notably, when it is desired to achieve fairness---where non-colluding parties are not at a loss---while allowing for the best achievable malicious fault tolerance, we define "trackable access structures" (TAS) and design a deterrence mechanism tailored for VSS on these structures.
2. We estimate the size of the optimal TAS, construct them from Steiner systems, provide highly robust TAS using partial Steiner systems, and present efficient secret sharing schemes for the latter close-to-optimal TAS for various parameter regimes.
3. We demonstrate that trackability in access structures is connected to combinatorial objects like (partial) Steiner systems, uniform subsets with restricted intersections, and appropriate binary codes. The robustness of access structures is equivalent to the minimum vertex cover of hypergraphs.
We believe these connections between cryptography, game theory, and discrete mathematics will be of broader interest.
A proof of P≠NP (New symmetric encryption algorithm against any linear attacks and differential attacks)
The P vs NP problem is the most important unresolved problem in the field of computational complexity. Its impact has penetrated all aspects of algorithm design, especially in the field of cryptography. The security of cryptographic algorithms based on short keys depends on whether P equals NP. In fact, Shannon strictly proved that the one-time-pad system meets unconditional security, but because the one-time-pad system requires the length of the key to be at least the length of the plaintext, how to transfer the key is a troublesome problem that restricts the use of the one-time-pad system in practice. Cryptographic algorithms used in practice are all based on short keys, and the security of the short-key mechanism is ultimately based on the one-way assumption. In fact, the existence of one-way functions directly leads to the important conclusion P≠NP.
In this paper, we construct a novel short-key block cipher algorithm. The core feature of this algorithm is that, for any block, when a plaintext-ciphertext pair is known, any key in the key space is valid; that is, for each block, the plaintext-ciphertext pair and the key are independent, and the independence between blocks is also easy to construct. This feature is completely different from all existing short-key cipher algorithms.
Based on the above feature, we construct a problem and theoretically prove that it satisfies the properties of a one-way function, thereby settling the question of the existence of one-way functions, that is, directly proving that P≠NP.
Multiparty Garbling from OT with Linear Scaling and RAM Support
State-of-the-art protocols that achieve constant-round secure multiparty computation currently present a trade-off: either consume an amount of communication that scales quadratically in the number of parties, or achieve better asymptotics at the cost of high constant factors (e.g. schemes based on LPN or DDH).
We construct a constant-round MPC protocol whose communication scales linearly in the number of parties n. Our construction relies only on oblivious transfer (OT) and a random oracle (RO), and it leverages packed secret sharing. Because it builds on simple primitives, our protocol offers concrete improvements over asymptotically-efficient LPN-based schemes. We consider security in the presence of a dishonest majority, where the malicious (with abort) adversary corrupts an arbitrary constant fraction of parties.
By leveraging tri-state circuits (Heath et al., Crypto 2023), we extend our protocol to the RAM model of computation. For a RAM program that halts within a bounded number of steps, our maliciously-secure protocol's total communication scales with the number of steps and a security parameter.
Homomorphic Signature-based Witness Encryption and Applications
Practical signature-based witness encryption (SWE) schemes recently emerged as a viable alternative to instantiate timed-release cryptography in the honest majority setting. In particular, assuming threshold trust in a set of parties that release signatures at a specified time, one can "encrypt to the future" using an SWE scheme. Applications of SWE schemes include voting, auctions, distributed randomness beacons, and more. However, the lack of homomorphism in existing SWE schemes reduces efficiency and hinders deployment. In this work, we introduce the notion of homomorphic SWE (HSWE) to improve the practicality of timed-release encryption schemes. We show one can build HSWE using a pair of encryption and signature schemes where the uniqueness of the signature is required when the encryption scheme relies on injective one-way functions. We then build three HSWE schemes in various settings using BLS, RSA, and Rabin signatures and show how to achieve a privacy-preserving variant that only allows extracting the homomorphically aggregated result while keeping the individual plaintexts confidential.
A Unified Framework for Succinct Garbling from Homomorphic Secret Sharing
A major challenge in cryptography is the construction of succinct garbling schemes that have asymptotically smaller size than Yao’s garbled circuit construction. We present a new framework for succinct garbling that replaces the heavy machinery of most previous constructions by lighter-weight homomorphic secret sharing techniques.
Concretely, we achieve 1-bit-per-gate (amortized) garbling size for Boolean circuits under circular variants of standard assumptions in composite-order or prime-order groups, as well as a lattice-based instantiation. We further extend these ideas to layered circuits, improving the per-gate cost below 1 bit, and to arithmetic circuits, eliminating the typical Ω(λ)-factor overhead for garbling mod-p computations. Our constructions also feature “leveled” variants that remove circular-security requirements at the cost of adding a depth-dependent term to the garbling size.
Our framework significantly extends a recent technique of Liu, Wang, Yang, and Yu (Eurocrypt 2025) for lattice-based succinct garbling, and opens new avenues toward practical succinct garbling. For moderately large circuits with a few million gates, our garbled circuits can be two orders of magnitude smaller than Yao-style garbling. While our garbling and evaluation algorithms are much slower, they are still practically feasible, unlike previous fully succinct garbling schemes that rely on expensive tools such as iO or a non-black-box combination of FHE and ABE. This trade-off can make our framework appealing when a garbled circuit is used as a functional ciphertext that is broadcast or stored in multiple locations (e.g., on a blockchain), in which case communication and storage may dominate computational cost.
High-Order Masking of BIKE
Every cryptographic implementation on an embedded device is vulnerable to side-channel attacks. To prevent these attacks, the main countermeasure consists in splitting each sensitive variable into shares and processing them independently.
With the advent of new algorithms designed to resist quantum computers and the complexity of their operations, this protection represents a real challenge.
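For context, a minimal sketch of the share-splitting idea: order-d Boolean masking of a byte. Linear operations (like XOR) are computed share-wise; nonlinear operations, such as those in BIKE's decoder, need the dedicated gadgets that this line of work develops.

```python
# Order-d Boolean masking sketch: split, compute share-wise, recombine.
import secrets

def mask(x, d):
    """Split byte x into d + 1 shares with x = XOR of all shares."""
    shares = [secrets.randbits(8) for _ in range(d)]
    last = x
    for s in shares:
        last ^= s
    return shares + [last]

def unmask(shares):
    out = 0
    for s in shares:
        out ^= s
    return out

def masked_xor(xs, ys):
    """XOR is linear over GF(2): apply it share-by-share, never unmasking."""
    return [a ^ b for a, b in zip(xs, ys)]

x, y, d = 0xA5, 0x3C, 3
assert unmask(masked_xor(mask(x, d), mask(y, d))) == x ^ y
```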
In this article, we present an attack on an earlier attempt to protect the decoder of the BIKE cryptosystem against first-order attacks.
Additionally, we introduce a new procedure for high-order masking of the decoder, up to date with its latest improvements.
We also present the first fully masked implementation of the whole cryptosystem, including the key generation and the encapsulation.
Finally, to assess the correctness of our countermeasures and to initiate further comparison, we implemented them in C and provide benchmarks of their performance.
AI for Code-based Cryptography
We introduce the use of machine learning in the cryptanalysis of code-based cryptography. Our focus is on distinguishing problems related to the security of NIST round-4 McEliece-like cryptosystems, particularly for the Goppa codes used in Classic McEliece and the Quasi-Cyclic Moderate-Density Parity-Check (QC-MDPC) codes used in BIKE. We present DeepDistinguisher, a new algorithm for distinguishing structured codes from random linear codes that uses a transformer. The results show that the new distinguisher achieves a high level of accuracy in distinguishing Goppa codes, suggesting that their structure may be more recognizable by AI models. Our approach outperforms traditional attacks in distinguishing Goppa codes in certain settings and generalizes to larger code lengths without further training, using a puncturing technique. We also present the first distinguishing results dedicated to MDPC and QC-MDPC codes.
Preimage Attacks on up to 5 Rounds of SHA-3 Using Internal Differentials
In this paper, we study the preimage resistance of the SHA-3 standard. We propose a squeeze meet-in-the-middle attack as a new preimage attack method for sponge functions. This attack combines the squeeze attack and the meet-in-the-middle attack, and is implemented by internal differentials. We analyze the inverse operation of the SHA-3 round function, and develop a new target internal differential algorithm as well as a linearization technique for the Sbox in the backward phase. In addition, we propose the concept of a value-difference distribution table (VDDT) to optimize the attack complexity. These techniques lead to faster preimage attacks on five (out of six) SHA-3 functions reduced to 4 rounds, and also bring preimage attacks on 5 rounds of four SHA-3 instances. The attack techniques are verified by performing a practical preimage attack on a small variant of 4-round Keccak.
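For concreteness, here is one plausible reading of such a table for the 5-bit Keccak S-box (the chi step restricted to a single row), sketched in Python: for each input difference it records, per input value, the output difference that actually occurs, and the classical DDT is then just the histogram of each row. The exact VDDT indexing in the paper may differ; this is only an illustration of the idea.

```python
def chi(a):
    """Keccak 5-bit S-box: b[i] = a[i] XOR (NOT a[i+1] AND a[i+2]), indices mod 5."""
    b = 0
    for i in range(5):
        bit = ((a >> i) ^ (~(a >> ((i + 1) % 5)) & (a >> ((i + 2) % 5)))) & 1
        b |= bit << i
    return b

# vddt[din][v]: output difference produced by input value v under input difference din
vddt = [[chi(v) ^ chi(v ^ din) for v in range(32)] for din in range(32)]

# the classical DDT forgets the values and keeps only the counts
ddt = [[row.count(dout) for dout in range(32)] for row in vddt]
assert all(sum(row) == 32 for row in ddt)
```

Keeping the value information, rather than only the counts, is what lets value-conditioned techniques pick inputs for which a desired difference transition is guaranteed.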
Transmitting Secrets by Transmitting only Plaintext
We present a novel use of encryption: not to hide a secret, but to mark letters. Given a plaintext of 2n letters, the transmitter encrypts the first n letters with key K1 to generate n corresponding cipherletters, and encrypts the second n letters with key K2 to generate n corresponding cipherletters. The transmitter sends the 2n cipherletters along with the keys K1 and K2. The recipient (and any interceptor) will readily decrypt the 2n cipherletters to the original plaintext. This makes the above procedure equivalent to sending out the plaintext. So why bother? When decrypting the 2n cipherletters, one will make a note of how the letters that were encrypted with K1 are mixed with the letters encrypted with K2, while the original order of the letters encrypted with each key is preserved. There are 2^n possible mixings, which means that the choice of mixing order can deliver a secret message, S, comprising n bits. So while on the surface a given plaintext is sent from transmitter to recipient, this plaintext hides a secret. Imagine a text-messaging platform that uses this protocol. An adversary will not know which plain, innocent message harbors a secret message. This allows residents of cyberspace to communicate secrets without exposing the fact that they communicated a secret. Expect a big impact on the level of cyberspace privacy.
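A toy Python sketch of the interleaving encoding follows. It tags each letter with the key that would have encrypted it, standing in for the trial decryption a real recipient would perform; everything here is illustrative, not the paper's wire format.

```python
def encode(secret_bits, half1, half2):
    """Hide n secret bits in the interleaving of two n-letter encrypted halves:
    bit k decides whether position k is served by the K1 stream or the K2 stream."""
    assert len(secret_bits) == len(half1) == len(half2)
    i = j = 0
    out = []
    for b in secret_bits:                  # the first n positions carry the secret
        if b == 0:
            out.append(("K1", half1[i])); i += 1
        else:
            out.append(("K2", half2[j])); j += 1
    out += [("K1", c) for c in half1[i:]]  # flush whatever remains of each stream
    out += [("K2", c) for c in half2[j:]]
    return out

def decode(mixed, n):
    """Recover the secret by noting which key decrypts each of the first n letters."""
    return [0 if tag == "K1" else 1 for tag, _ in mixed[:n]]

msg = "attackatdawnagai"                  # 2n = 16 letters of innocent plaintext
secret = [1, 0, 1, 1, 0, 0, 1, 0]         # n = 8 hidden bits
assert decode(encode(secret, msg[:8], msg[8:]), 8) == secret
```

This toy realizes exactly 2^n distinct mixings and hence n hidden bits, matching the count in the abstract.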
Improved Cryptanalysis of ChaCha: Beating PNBs with Bit Puncturing
ChaCha is a widely deployed stream cipher and one of the most important symmetric primitives. Due to this practical importance, many cryptanalytic results have been published. Until now, Probabilistic Neutral Bits (PNBs) have been the most successful technique: given a differential-linear distinguisher, PNBs enable key recovery relying on an experimental backward correlation obtained through black-box analysis. A careful theoretical analysis exploiting the round-function design may find a better attack and improve our understanding, but the complicated nature of the ARX structure makes such analysis difficult.
We propose a theoretical methodology inspired by bit puncturing, which was recently proposed at Eurocrypt 2024. Our method has a theoretical foundation and is thus fundamentally different from PNBs, to which it is the first effective alternative. As a result, we significantly improve the attack complexity for 6-, 7-, and 7.5-round ChaCha. The 7-round attack is substantially faster than the previous best. Furthermore, we propose the first 7.5-round attack with a non-negligible advantage over an exhaustive search.
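For reference, the ARX structure in question is built from the ChaCha quarter-round, whose interleaving of modular additions, rotations, and XORs is precisely what makes clean theoretical analysis difficult; a standard Python rendering:

```python
M = 0xFFFFFFFF  # 32-bit words

def rotl(x, n):
    return ((x << n) | (x >> (32 - n))) & M

def quarter_round(a, b, c, d):
    """One ChaCha quarter-round: add, rotate, xor (ARX)."""
    a = (a + b) & M; d = rotl(d ^ a, 16)
    c = (c + d) & M; b = rotl(b ^ c, 12)
    a = (a + b) & M; d = rotl(d ^ a, 8)
    c = (c + d) & M; b = rotl(b ^ c, 7)
    return a, b, c, d
```

Each ChaCha round applies four such quarter-rounds to the columns or diagonals of a 4x4 state of 32-bit words.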
The Algebraic One-More MISIS Problem and Applications to Threshold Signatures
This paper introduces a new one-more computational problem for lattice-based cryptography, which we refer to as the Algebraic One-More MISIS problem, or AOM-MISIS for short. It is a modification of the AOM-MLWE problem recently introduced by Espitau et al. (CRYPTO ’24) to prove security of new two-round threshold signatures.
Our first main result establishes that the hardness of AOM-MISIS is implied by the hardness of MSIS and MLWE (with suitable parameters), both of which are standard assumptions for efficient lattice-based cryptography. We prove this result via a new generalization of a technique by Tessaro and Zhu (EUROCRYPT ’23) used to prove hardness of a one-more problem for linear hash functions assuming their collision resistance, for which no clear lattice analogue was known. Since the hardness of AOM-MISIS implies the hardness of AOM-MLWE, our result resolves the main open question from the work of Espitau et al., who only provided a similar result for AOM-MLWE restricted to selective adversaries, a class which does not cover its use for threshold signatures.
Furthermore, we show that our novel formulation of AOM-MISIS offers a better interface to develop tighter security bounds for state-of-the-art two-round threshold signatures. We exemplify this by providing new proofs of security, assuming the hardness of MLWE and MSIS, for two threshold signatures, the one proposed in the same work by Espitau et al., as well as a recent construction by Chairattana-Apirom et al. (ASIACRYPT 2024). For the former scheme, we also show that it satisfies the strongest security notion (TS-UF-4) in the security hierarchy of Bellare et al. (CRYPTO ’22), as a result of independent interest.
Constant-Time Code: The Pessimist Case
This note discusses the problem of writing cryptographic implementations in software, free of timing-based side-channels, and many ways in which that endeavour can fail in practice. It is a pessimist view: it highlights why such failures are expected to become more common, and how constant-time coding is, or will soon become, infeasible in all generality.
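As a minimal illustration of the coding discipline at stake, here is the classic branchless selection-and-comparison pattern, sketched in Python for readability; real constant-time code must live in a lower-level language, and, as this note argues, even there nothing guarantees the compiler or the hardware preserves the property.

```python
M = 0xFFFFFFFF  # model 32-bit words

def ct_select(bit, x, y):
    """Return x if bit == 1 else y, with no branch on the secret bit."""
    mask = (-bit) & M              # 0x00000000 or 0xFFFFFFFF
    return (x & mask) | (y & (~mask & M))

def ct_is_zero(x):
    """1 if x == 0 else 0, without a data-dependent branch."""
    return 1 ^ (((x | ((-x) & M)) >> 31) & 1)

assert ct_select(1, 7, 9) == 7 and ct_select(0, 7, 9) == 9
assert ct_is_zero(0) == 1 and ct_is_zero(123) == 0
```

In C the same masks often compile to straight-line code, but that outcome is a compiler's choice rather than a language guarantee, which is exactly the pessimist point.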
Fine-Grained Verifier NIZK and Its Applications
In this paper, we propose a new type of non-interactive zero-knowledge (NIZK), called Fine-grained Verifier NIZK (FV-NIZK), which provides more flexible and more fine-grained verifiability of proofs than standard NIZK that supports public verifiability and designated-verifier NIZK (DV-NIZK) that supports private verifiability. FV-NIZK has two statistically (or computationally) equivalent verification approaches:
--- a master verification using the master secret key msk;
--- a fine-grained verification using a derived secret key sk_d, which is derived from msk w.r.t. d (where d may stand for a user identity, an email address, a vector, etc.).
We require the unbounded simulation soundness (USS) of FV-NIZK to hold even if an adversary obtains derived secret keys sk_d for values d of its choice, and we define proof pseudorandomness, which stipulates the pseudorandomness of proofs for adversaries that are not given any secret key.
We present two instantiations of FV-NIZK for linear subspace languages, based on the matrix decisional Diffie-Hellman (MDDH) assumption.
One of the FV-NIZK instantiations is pairing-free and achieves almost tight USS and proof pseudorandomness. We also adapt the two instantiations to support unbounded fine-grained secret key delegations.
We illustrate the usefulness of FV-NIZK by showing two applications and obtain the following pairing-free schemes:
--- the first almost tightly multi-challenge CCA (mCCA)-secure inner-product functional encryption (IPFE) scheme without pairings;
--- the first public-key encryption (PKE) scheme that reconciles the inherent contradictions between public verifiability and anonymity.
We formalize such PKE as Fine-grained Verifiable PKE (FV-PKE), which derives a special key from the decryption secret key, such that users who obtain the derived key can check the validity of ciphertexts, but anonymity is lost from their view (CCA-security still holds for them), while users who do not get the derived key cannot check validity, but anonymity holds for them.
Our FV-PKE scheme achieves almost tight mCCA-security for adversaries who obtain the derived keys, and achieves almost tight ciphertext pseudorandomness (thus anonymity) for others who do not get any derived key.
MIDAS: an End-to-end CAD Framework for Automating Combinational Logic Locking
Logic locking has surfaced as a notable safeguard against diverse hazards that pose a risk to the integrated circuit (IC) supply chain. Existing literature on logic locking largely encompasses the art of proposing new constructions on the one hand, and unearthing weaknesses in such algorithms on the other. Somehow, in this race of make and break, the stress on automating the adoption of such techniques on real-life circuits has been rather limited. For the first time, we present a generic end-to-end combinational logic locking CAD framework, MIDAS. This framework analyses circuit netlists and generates locked netlists. Due to its generic circuit analysis, it bridges the gap, integrates diverse logic locking techniques, and offers scope for the integration of potential future ones. The MIDAS framework's efficacy has been verified through its application on ISCAS'85 and ISCAS'99 benchmark circuits, locked using six different schemes: EPIC, Anti-SAT, SFLL-HD, SFLL-fault, CAS-Lock, and LoPher. MIDAS minimizes the hardware overhead of the otherwise resource-intensive locking technique LoPher by extracting an influential portion of the circuit to lock and utilizing a simple fitness function. We also assess the overhead increase for the aforementioned locking methods, thereby complicating the identification of influential nodes within the locked netlists. Finally, we evaluate MIDAS by selectively locking parts of a commercially-designed open-source RISC-V core.
Black-Box (and Fast) Non-Malleable Zero Knowledge
Non-malleable zero-knowledge (NMZK), originally introduced in the seminal work of Dolev, Dwork, and Naor (STOC 91), is a fundamental concept for modeling the security of proof systems against man-in-the-middle attacks.
Recently, Kim, Liang, and Pandey (CRYPTO 2022) presented the first efficient constant-round NMZK argument system based solely on symmetric-key cryptography. Their construction relies on a non-black-box use of the involved cryptographic primitives and on multiple executions of Ligero (CCS 2017) that affect both the round complexity and the computational efficiency of their protocol. Their work left open the natural important challenge of achieving NMZK using the underlying primitives only in a black-box fashion (regardless of the number of rounds and actual efficiency).
In this paper, we solve the aforementioned open problem by presenting the first NMZK argument system based on the black-box use of cryptographic primitives. Our work is optimal in the use of primitives since we only need one-way functions, and asymptotically optimal in the number of rounds since we only require a constant number of rounds. Our argument system is non-malleable with respect to the strong "simulation-extractability" flavor of non-malleability.
Furthermore, we also show that our construction can be efficiently instantiated in Minicrypt, significantly improving upon the work of Kim et al., both in terms of round complexity and computational efficiency.
Commitment Schemes Based on Module-LIP
Recently, Jiang et al. (EUROCRYPT 2025) proposed a universal framework for constructing commitment schemes using group actions, and instantiated it with the Lattice Isomorphism Problem (LIP). This paper constructs an instantiation of this framework based on module-LIP. More precisely, we first present a reduction from module-LIP to module-LAP. Then we develop a re-randomization algorithm based on the self-reduction framework of module-LIP (Ducas et al., ASIACRYPT 2022), adapting it to the framework to construct commitment schemes.
Non-interactive Anonymous Tokens with Private Metadata Bit
Anonymous tokens with private metadata bit (ATPM) have received increased interest as a method for anonymous client authentication while also embedding trust signals that are only readable by the authority who holds the issuance secret key and nobody else. A drawback of all existing ATPM constructions is that they require client-issuer interaction during the issuance process. In this work, we build the first non-interactive anonymous tokens (NIAT) with private metadata bit, inspired by the recent work of Hanzlik (Eurocrypt '23) on non-interactive blind signatures. We discuss how the non-interaction property during the issuance process allows for more efficient issuance protocols that avoid the need for online signing. We construct an efficient NIAT scheme based on Structure-preserving Signatures on Equivalence Classes (SPS-EQ) and experimentally evaluate its performance. We also present an extension to our NIAT construction that allows the identification of clients who attempt to double-spend (i.e., present the same token twice).
Enhanced CKKS Bootstrapping with Generalized Polynomial Composites Approximation
Bootstrapping in approximate homomorphic encryption involves evaluating the modular reduction function. Traditional methods decompose the modular reduction function into three components: scaled cosine, double-angle formula, and inverse sine. While these approaches offer a strong trade-off between computational cost and level consumption, they lack flexibility in parameterization.
In this work, we propose a new method to decompose the modular reduction function with improved parameterization, generalizing prior trigonometric approaches. Numerical experiments demonstrate that our method achieves near-optimal approximation errors. Additionally, we introduce a technique that integrates the rescaling operation into matrix operations during bootstrapping, further reducing computational overhead.
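To make the decomposition concrete, here is a plaintext NumPy sketch of the traditional pipeline that the paper generalizes: a low-degree Chebyshev approximation of a scaled cosine, r applications of the double-angle formula, and an inverse-sine correction. All parameters are illustrative, and in real bootstrapping each step is a polynomial evaluated homomorphically on ciphertexts.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

q, K, r, deg = 256.0, 12, 3, 47            # modulus, slot range, doublings, degree
rng = np.random.default_rng(0)
x = q * rng.integers(-K, K + 1, 1000) + q * rng.uniform(-0.01, 0.01, 1000)

# m = x mod q (centered) is recovered from sin(2*pi*x/q) = cos(theta) where
# theta = 2*pi*x/q - pi/2: evaluate a low-degree cosine at theta / 2^r,
# then apply the double-angle formula r times, then invert the sine.
theta = 2 * np.pi * x / q - np.pi / 2
B = 2 * np.pi * (K + 0.5) / 2**r            # bound on |theta| / 2^r
coef = cheb.chebinterpolate(lambda t: np.cos(B * t), deg)
c = cheb.chebval(theta / 2**r / B, coef)    # scaled cosine, low degree
for _ in range(r):
    c = 2 * c * c - 1                       # double-angle formula
m_est = (q / (2 * np.pi)) * np.arcsin(c)    # inverse sine (a polynomial in FHE)
m_true = x - q * np.round(x / q)
print(np.max(np.abs(m_est - m_true)))       # approximation error is tiny
```

The paper's observation is that hard-wiring this particular cosine/double-angle/arcsine split constrains parameterization; its generalized composite decomposition recovers such pipelines as special cases.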
On Improved Cryptanalytic Results against ChaCha for Reduced Rounds ≥ 7
In this paper, we analyze the subtle issues of complexity estimation in state-of-the-art cryptanalytic efforts on ChaCha. In this regard, we demonstrate that the time and data complexity of the currently best-known cryptanalytic result on reduced-round ChaCha [Xu et al., ToSC 2024] must be re-estimated. We improve the best-known result by obtaining an improved set of Probabilistic Neutral Bits and by applying our revised estimation, improving upon the time and data complexity reported by Xu et al. For two of the attacked round counts, we can show an improvement in the time complexity. For another round count, we improve the result of Dey [IEEE-IT 2024]; by applying the formula of the same paper and incorporating additional PNBs, we obtain improved time and data complexity. Thus, this paper describes the currently best-known cryptanalytic results against reduced-round ChaCha. Our results do not affect the security claims of the complete algorithm with 20 rounds. We also provide a rebuttal of the work by Wang et al. [wangeprint] and analyze their claim about the error in the "Divide-and-Conquer" approach.
BUFFing Threshold Signature Schemes
We explore advanced security notions for threshold signature schemes, focusing on Beyond UnForgeability Features (BUFF), introduced by Cremers et al. (S&P’21) in the non-threshold setting. The BUFF properties protect against attacks based on maliciously chosen keys, e.g., expropriating a message-signature pair under a new public key (called exclusive ownership). We first formalize these notions in the threshold setting and examine their relationships. Notably, unlike regular signature schemes, the hierarchy of variants of exclusive ownership notions only holds for threshold schemes if they are also robust.
We then present a generic compiler that transforms any threshold signature scheme to satisfy exclusive ownership and message-bound signature properties with minimal overhead. Furthermore, we modify the threshold BLS signature scheme to achieve these additional properties without increasing the signature size. Lastly, we identify specific structures in threshold signature schemes where BUFF properties can be naturally extended from the underlying standard signature scheme, and we analyze and prove the security properties of some of the existing threshold schemes.
Exploring How to Authenticate Application Messages in MLS: More Efficient, Post-Quantum, and Anonymous Blocklistable
The Message Layer Security (MLS) protocol has recently been standardized by the IETF. MLS is a scalable secure group messaging protocol expected to run more efficiently compared to the Signal protocol at scale, while offering a similar level of strong security. Even though MLS has undergone extensive examination by researchers, the majority of the works have focused on confidentiality.
In this work, we focus on the authenticity of the application messages exchanged in MLS. Currently, MLS authenticates every application message with an EdDSA signature and while manageable, the overhead is greatly amplified in the post-quantum setting as the NIST-recommended Dilithium signature results in a 40x increase in size. We view this as an invitation to explore new authentication modes that can be used instead. We start by taking a systematic view on how application messages are authenticated in MLS and categorize authenticity into four different security notions. We then propose several authentication modes, offering a range of different efficiency and security profiles. For instance, in one of our modes, COSMOS++, we replace signatures with one-time tokens and a MAC tag, offering roughly a 75x savings in the post-quantum communication overhead. While this comes at the cost of weakening security compared to the authentication mode used by MLS, the lower communication overhead seems to make it a worthwhile trade-off with security.
A Note on the Blindness of the Scheme from ePrint 2025/397
This note demonstrates that the blind signature scheme based on cryptographic group actions, as proposed in ePrint paper 2025/397, fails to ensure blindness. Specifically, we construct an adversary that achieves a non-negligible advantage in the blindness experiment. The attack leverages selective abort techniques (also known as selective failure attacks), a well-known strategy in the MPC literature.
Matchmaker: Fast Secure Inference across Deployment Scenarios
Secure Two-Party Computation (2PC) enables secure inference with cryptographic guarantees that protect the privacy of the model owner and client. However, it adds significant performance overhead. In this work, we make 2PC-based secure inference efficient while considering important deployment scenarios.
We observe that the hitherto unconsidered latency of fetching keys from storage significantly impacts performance, as does network speed. We design a Linear Secret Sharing (LSS)-based system and a Function Secret Sharing (FSS)-based system for secure inference, optimized for small key size and low communication, respectively. Notably, our highly optimized and hardware-aware CPU-based LSS system outperforms prior GPU-based LSS systems. We then show that the best choice between the LSS- and FSS-based systems depends on the deployment scenario.
In fact, under certain deployments, a combination of the two can leverage heterogeneous processing across CPU and GPU. Such protocol-system co-design lets us outperform state-of-the-art secure inference systems.
Multi-Client Attribute-Based Unbounded Inner Product Functional Encryption, and More
This paper presents the concept of a multi-client functional encryption (MC-FE) scheme for attribute-based inner product functions (AB-IP), initially proposed by Abdalla et al. [ASIACRYPT’20], in an unbounded setting. In such a setting, the setup is independent of vector length constraints, allowing secret keys to support functions of arbitrary lengths, and clients can dynamically choose vector lengths during encryption. The functionality outputs the sum of inner products if the vector lengths and indices meet a specific relation and all clients’ attributes satisfy the key’s policy. We propose the following constructions, based on the matrix decisional Diffie-Hellman assumption, in a natural permissive setting of unboundedness:
– the first multi-client attribute-based unbounded IPFE (MC-AB-UIPFE) scheme secure in the standard model, overcoming previous limitations where clients could only encrypt fixed-length data;
– the first multi-input AB-UIPFE (MI-AB-UIPFE) in the public-key setting, improving upon prior bounded constructions under the same assumption;
– the first dynamic decentralized UIPFE (DD-UIPFE), enhancing the dynamism property of prior works.
Technically, we follow the blueprint of Agrawal et al. [CRYPTO’23] but begin with a new unbounded FE called extended slotted unbounded IPFE. We first construct a single-input AB-UIPFE in the standard model and then extend it to multi-input settings. In a nutshell, our work demonstrates the applicability of the function-hiding security of IPFE in realizing variants of multi-input FE capable of encoding unbounded-length vectors both at the time of key generation and encryption.
Private Computation on Common Fuzzy Records
Private computation on common records refers to analyzing data from two databases containing shared records without revealing personal information. As a basic requirement for private computation, the databases involved essentially need to be aligned via a common identification system. However, such common identifiers can hardly be expected in real-world scenarios. For this reason, multiple quasi-identifiers can be used to identify common records. As some quasi-identifiers might be missing or contain typos, it is important to support a fuzzy-records setting. Identifying common records using quasi-identifiers requires the manipulation of highly sensitive information, which raises privacy concerns.
This work studies the problem of enabling such data analysis on fuzzy records of quasi-identifiers. To this end, we propose ordered threshold-one (OTO) matching, which can be efficiently realized by circuit-based private set intersection (CPSI) protocols and some multiparty computation (MPC) techniques. Furthermore, we introduce generic encoding techniques from traditional matching rules to OTO matching. Finally, we achieve a secure and efficient private computation protocol which supports various matching rules that are already widely used.
We also demonstrate the superiority of our proposal with experimental validation. First, we empirically check that our encoding to OTO matching does not significantly affect accuracy on the benchmark datasets found in the fuzzy record matching literature. Second, we implement our protocol and achieve significantly faster performance, at the cost of communication overhead, compared to previous privacy-preserving record linkage (PPRL) protocols. In the case of 100K records for each dataset, our work shows 147.58MB communication cost, 10.71s setup time, and 1.97s online time, which is 7.78 times faster than the previous work (50.12 times faster when considering online time only).
A Note on Obfuscation-based Attacks on Private-coin Evasive LWE
The evasive learning with errors (evasive LWE) assumption is a new assumption recently introduced by Wee (Eurocrypt 2022) and Tsabary (Crypto 2022) independently, as a significant strengthening of the standard LWE assumption.
While the assumption is known to imply various strong primitives including witness encryption [Wee22, Tsabary22], the assumption in its most general form (i.e., the private-coin variant) is considered quite implausible due to the obfuscation-based attack mentioned in [Wee22]. This obfuscation-based attack was later formalized by Vaikuntanathan, Wee, and Wichs [VWW22].
In this note, we revisit their attack and show that it actually does not work, by presenting a concrete counterexample. We then show that their attack can be made valid with some modifications. Along the way, we also improve the counterexample by making it provable. Specifically, our counterexample is valid assuming the (plain) LWE assumption and the existence of instance-hiding witness encryption, whereas their original counterexample depended on the heuristic assumption that an ideal obfuscation exists.
Non-Interactive Verifiable Aggregation
Consider a weak analyst that wishes to outsource data collection and computation of aggregate statistics over a potentially large population of (also weak) clients to a powerful server. For flexibility and efficiency, we consider public-key and non-interactive protocols, meaning the clients know the analyst's public key but do not share secrets, and each client sends at most one message. Furthermore, the final step should be silent, whereby the analyst simply downloads the (encrypted) result from the server when needed. To capture this setting, we define a new primitive we call Non-Interactive Verifiable Aggregation (NIVA).
We require both privacy and robustness for a NIVA protocol to be deemed secure. Namely, our security notion for NIVA ensures that the clients' data remains private to both the server and the analyst, while also ensuring that malicious clients cannot skew the results by providing faulty data.
We propose a secure NIVA protocol, which we call PEAR (for Private, Efficient, Accurate, Robust), which can validate inputs according to any NP validity rule. PEAR is based on a novel combination of functional encryption for inner-products (Abdalla et al., PKC 2015) and fully-linear probabilistically-checkable proofs (Boneh et al., Crypto 2019). We emphasize that PEAR is non-interactive, public-key, and makes black-box use of the underlying cryptographic primitives. Additionally, we devise substantial optimizations of PEAR for practically-relevant validity rules. Finally, we implement PEAR to show feasibility for such validity rules, conducting a thorough performance evaluation. In particular, we compare PEAR to two more straightforward or "off-the-shelf" NIVA protocols and show performance gains, demonstrating the merit of our new approach. The bottleneck in our protocol comes from the fact that we require the underlying IPFE scheme to be "unrestricted" over a large field. As more efficient such schemes are developed, they can immediately be plugged into PEAR for further gains.
Samaritan: Linear-time Prover SNARK from New Multilinear Polynomial Commitments
We study linear-time prover SNARKs and make the following contributions:
We provide a framework for transforming a univariate polynomial commitment scheme into a multilinear polynomial commitment scheme. Our transformation is generic, can be instantiated with any univariate scheme and improves on prior transformations like Gemini (EUROCRYPT 2022) and Virgo (S&P 2020) in all relevant parameters: proof size, verification complexity, and prover complexity. Instantiating the above framework with the KZG univariate polynomial commitment scheme, we get SamaritanPCS – the first multilinear polynomial commitment scheme with constant proof size and linear-time prover. SamaritanPCS is a drop-in replacement for the popular PST scheme, and improves upon PST in all relevant parameters.
We construct LogSpartan – a new multilinear PIOP for R1CS based on recent techniques for lookup arguments. Compiling this PIOP using SamaritanPCS gives Samaritan – a SNARK in the universal and updatable SRS setting. Samaritan has a linear-time prover, logarithmic verification and logarithmic proof size. Concretely, its proof size is one of the smallest among known linear-time prover SNARKs that do not rely on concretely expensive proof recursion techniques. For an R1CS instance with 1 million constraints, Samaritan (over the BLS12-381 curve) has a proof size of 6.7KB.
We compare Samaritan with other linear-time prover SNARKs in the updatable setting. We asymptotically improve on the proof size of Spartan. Unlike Libra (CRYPTO 2019), the argument size of Samaritan is independent of the circuit depth. Compared to Gemini (EUROCRYPT 2022), Samaritan achieves a 3x smaller argument size at 1 million constraints. We match the argument size of HyperPlonk, which is the smallest linear-time SNARK for the Plonkish constraint system, while achieving slightly better verification complexity.
We believe that our transformation and our techniques for applying lookups based on logarithmic derivatives to the multilinear setting are of wider interest.
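As background on the central object: a multilinear polynomial in n variables is determined by its 2^n evaluations on the Boolean cube, and its value at an arbitrary point can be computed in linear time by folding out one variable per round, which is the kind of linear-time work a multilinear PCS prover exploits. A generic Python sketch, not Samaritan's actual commitment algorithm:

```python
def mle_eval(evals, point):
    """Evaluate the multilinear extension of `evals` (length 2**len(point))
    at `point`; total work is O(len(evals))."""
    assert len(evals) == 1 << len(point)
    for r in point:                        # fold out the most significant variable
        half = len(evals) // 2
        evals = [(1 - r) * lo + r * hi for lo, hi in zip(evals[:half], evals[half:])]
    return evals[0]

# at a Boolean point the extension agrees with the underlying table
table = [3, 1, 4, 1, 5, 9, 2, 6]           # f on {0,1}^3
assert mle_eval(table, [1, 0, 1]) == table[0b101]
```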
ProofFrog: A Tool For Verifying Game-Hopping Proofs
Cryptographic proofs allow researchers to provide theoretical guarantees on the security that their constructions provide. A proof of security can completely eliminate a class of attacks by potential adversaries. Human fallibility, however, means that even a proof reviewed by experts may still hide flaws or outright errors. Proof assistants are software tools built for the purpose of formally verifying each step in a proof, and as such have the potential to prevent erroneous proofs from being published and insecure constructions from being implemented.
Unfortunately, existing tooling for verifying cryptographic proofs has found limited adoption in the cryptographic community, in part due to concerns with ease of use. We present ProofFrog: a new tool for verifying cryptographic game-hopping proofs. ProofFrog is designed with the average cryptographer in mind, using an imperative syntax similar to C for specifying games and a syntax for proofs that closely models pen-and-paper arguments. As opposed to other proof assistant tools which largely operate by manipulating logical formulae, ProofFrog manipulates abstract syntax trees (ASTs) into a canonical form to establish indistinguishable or equivalent behaviour for pairs of games in a user-provided sequence. We also detail the domain-specific language developed for use with the ProofFrog proof engine, the exact transformations it applies to canonicalize ASTs, and case studies of verified proofs. A tool like ProofFrog that prioritizes ease of use can lower the barrier of entry to using computer-verified proofs and aid in catching insecure constructions before they are made public.
Evaluation of Privacy-aware Support Vector Machine (SVM) Learning using Homomorphic Encryption
The requirement for privacy-aware machine learning increases as we continue to use PII (Personally Identifiable Information) within machine training. To overcome these privacy issues, we can apply Fully Homomorphic Encryption (FHE) to encrypt data before it is fed into a machine learning model. This involves creating a homomorphic encryption key pair, where the associated public key is used to encrypt the input data, and the private key decrypts the output. However, there is often a performance hit when using homomorphic encryption, so this paper evaluates the performance overhead of using the SVM machine learning technique with the OpenFHE homomorphic encryption library. The implementation uses Python and the scikit-learn library. The experiments include a range of variables such as multiplication depth, scale size, first modulus size, security level, batch size, and ring dimension, along with two different SVM models, SVM-Poly and SVM-Linear. Overall, the results show that the two main parameters which affect performance are the ring dimension and the modulus size, and that SVM-Poly and SVM-Linear show similar performance levels.
Trapdoor Hash Functions and PIR from Low-Noise LPN
Trapdoor hash functions (TDHs) are compressing hash functions with an additional trapdoor functionality: given an encoding key for a function f, a hash of an input x together with a (small) input encoding allows one to recover f(x). TDHs are a versatile tool and a useful building block for more complex cryptographic protocols.
In this work, we propose the first TDH construction assuming the (quasi-polynomial) hardness of the LPN problem in the so-called low-noise regime. The construction achieves a non-trivial compression factor. As an application, we obtain a private information retrieval (PIR) scheme with sublinear communication complexity for a database of size L. This is the first PIR scheme with non-trivial communication complexity (asymptotically smaller than the database size) from any code-based assumption.
On the Soundness of Algebraic Attacks against Code-based Assumptions
We study recent algebraic attacks (Briaud-Øygarden EC'23) on the Regular Syndrome Decoding (RSD) problem and the assumptions underlying the correctness of their attacks' complexity estimates. By relating these assumptions to interesting algebraic-combinatorial problems, we prove that they do not hold in full generality. However, we show that they are (asymptotically) true for most parameter sets, supporting the soundness of algebraic attacks on RSD. Further, we prove—without any heuristics or assumptions—that RSD can be broken in polynomial time whenever the number of error blocks times the square of the size of error blocks is larger than 2 times the square of the dimension of the code.
Additionally, we use our methodology to attack a variant of the Learning With Errors problem where each error term lies in a fixed set of constant size. We prove that this problem can be broken in polynomial time, given a sufficient number of samples. This result improves on the seminal work by Arora and Ge (ICALP'11), as the attack's time complexity is independent of the LWE modulus.
Deimos Cipher: A High-Entropy, Secure Encryption Algorithm with Strong Diffusion and Key Sensitivity
Deimos Cipher is a symmetric encryption algorithm designed to achieve high entropy and strong diffusion while maintaining efficiency.
It employs advanced cryptographic transformations to ensure robust security against modern cryptanalysis techniques.
Entropy tests demonstrate its ability to generate highly randomized ciphertext, surpassing industry standards.
Avalanche effect analysis confirms optimal diffusion, achieving an average bit change of 50.18% in large datasets.
Key sensitivity tests reveal a 50.54% ciphertext difference for minimal key variations, ensuring strong resistance to differential cryptanalysis.
With fast encryption and decryption speeds, Deimos Cipher offers a balanced approach between security and performance, making it suitable for secure communication and data protection.
This paper presents the algorithm's design, security analysis, and benchmarking against established cryptographic standards.
Garblet: Multi-party Computation for Protecting Chiplet-based Systems
The introduction of shared computation architectures assembled from heterogeneous chiplets introduces new security threats. Due to the shared logical and physical resources, an untrusted chiplet can act maliciously to surreptitiously probe the data communication between chiplets or sense the computation shared between them. This paper presents Garblet, the first framework to leverage the flexibility offered by chiplet technology and Garbled Circuits (GC)-based MPC to enable efficient, secure computation even in the presence of potentially compromised chiplets. Our approach integrates a customized hardware Oblivious Transfer (OT) module and an optimized evaluator engine into chiplet-based platforms. This configuration distributes the tasks of garbling and evaluating circuits across two chiplets, reducing communication costs and enhancing computation speed. We implement this framework on an AMD/Xilinx UltraScale+ multi-chip module and demonstrate its effectiveness using benchmark functions. Additionally, we introduce a novel circuit decomposition technique that allows for parallel processing across multiple chiplets to further improve computational efficiency. Our results highlight the potential of chiplet systems for accelerating GC (e.g., the time complexity of garbled AES is 0.0226ms) in order to guarantee the security and privacy of the computation on chiplets.
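For readers unfamiliar with the GC half of the design, the following self-contained Python sketch garbles a single AND gate in the classical textbook style (hash-based double encryption, no point-and-permute or free-XOR optimizations). It is generic background, not Garblet's hardware engine.

```python
import hashlib, random, secrets

def H(k1, k2):
    return hashlib.sha256(k1 + k2).digest()

# two 16-byte labels per wire, encoding bit values 0 and 1
lab = {w: (secrets.token_bytes(16), secrets.token_bytes(16)) for w in "abc"}

# garbling: encrypt the correct output label under each pair of input labels
table = []
for x in (0, 1):
    for y in (0, 1):
        pad = H(lab["a"][x], lab["b"][y])
        msg = lab["c"][x & y] + b"\x00" * 16     # zero tail marks a valid decryption
        table.append(bytes(p ^ m for p, m in zip(pad, msg)))
random.shuffle(table)                            # hide which row is which

def evaluate(ka, kb):
    """The evaluator holds one label per input wire and finds the row it can open."""
    pad = H(ka, kb)
    for row in table:
        m = bytes(p ^ r for p, r in zip(pad, row))
        if m.endswith(b"\x00" * 16):             # false positives are negligible
            return m[:16]
    raise ValueError("no decryptable row")

assert evaluate(lab["a"][1], lab["b"][1]) == lab["c"][1]
assert evaluate(lab["a"][0], lab["b"][1]) == lab["c"][0]
```

Garbler and evaluator only ever exchange the shuffled table and one label per wire, which is what lets the two roles sit on different chiplets with modest communication.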
Multi-Authority Functional Encryption: Corrupt Authorities, Dynamic Collusion, Lower Bounds, and More
Decentralization is a great enabler for the adoption of modern cryptography in real-world systems. The widespread adoption of blockchains and secure multi-party computation protocols are perfect evidentiary examples of the dramatic rise in deployment of decentralized cryptographic systems. Much of cryptographic research can be viewed as reducing (or eliminating) the dependence on trusted parties while shielding against stronger adversarial threats. In this work, we study the problem of multi-authority functional encryption (MAFE), a popular decentralized generalization of functional encryption (FE). Our main contributions are:
1. We design MAFE for all poly-sized circuits, in the bounded collusion model, under the minimal assumption of PKE/OWFs. Prior to our work, this required either sub-exponentially secure obfuscation, or multi-party key exchange, or random oracles and sub-exponentially secure PKE. We also extend our constructions to the dynamic collusion model under the minimal assumptions of IBE/OWFs. Unlike all prior works, our MAFE systems are truly dynamic and put no restrictions on the maximum number of authorities.
2. Under the hardness of the learning with errors (LWE) assumption, we design MAFE for all poly-sized circuits where we allow adversaries to adaptively corrupt local authorities, as long as the number of corrupted authorities is polynomially bounded. Prior to this, such MAFE relied on sub-exponentially secure obfuscation. Additionally, we design a new MAFE compiler for boosting selective authority corruptions to non-adaptive authority corruptions.
3. We prove a tight implication from MAFE to (VBB/indistinguishability) obfuscation. We show that MAFE implies obfuscation only if the number of attribute bits (jointly) controlled by all corrupt local authorities is sufficiently large. This proves optimality of our second result for a wide range of parameters.
4. Finally, we propose a new MAFE system that we refer to as multi-authority attribute-based functional encryption (MA-ABFE). We view it as an approach to get the best of both worlds (fully collusion-resistant MA-ABE and bounded collusion-resistant MAFE). By combining our results with prior MA-ABE results, we obtain MA-ABFE from standard pairing-based assumptions and from LWE, both in the Random Oracle Model. We also describe a simple construction of MA-ABE for general predicates from witness encryption, and combining it with known results, we also get MA-ABFE from evasive LWE.
Security of the Ascon Authenticated Encryption Mode in the Presence of Quantum Adversaries
We examine the post-quantum security of the Ascon authenticated encryption (AE) mode. In spite of comprehensive research on Ascon's classical security, the potential impact of quantum adversaries on Ascon has not yet been explored much. We investigate the generic security of the Ascon AE mode in the setting where the adversary owns a quantum computer to improve its attack, while the adversarial encryption or decryption queries are still classical. In this so-called Q1 model, we prove security of Ascon up to a bound governed by the capacity c and the key size k, where the adversary is block-wise adaptive but restricted to one forgery attempt. Our technique is based on applying the semi-classical one-way to hiding (O2H) lemma, and on tailoring the puncture set to the Ascon mode.
Additionally, we discuss different parameter choices for Ascon and compare our results to generic quantum attacks, such as Grover-based key search and state recovery.
TreeKEM: A Modular Machine-Checked Symbolic Security Analysis of Group Key Agreement in Messaging Layer Security
The Messaging Layer Security (MLS) protocol standard proposes a novel tree-based protocol that enables efficient end-to-end encrypted messaging over large groups with thousands of members. Its functionality can be divided into three components: TreeSync for authenticating and synchronizing group state, TreeKEM for the core group key agreement, and TreeDEM for group message encryption. While previous works have analyzed the security of abstract models of TreeKEM, they do not account for the precise low-level details of the protocol standard. This work presents the first machine-checked security proof for TreeKEM. Our proof is in the symbolic Dolev-Yao model and applies to a bit-level precise, executable, interoperable specification of the protocol. Furthermore, our security theorem for TreeKEM composes naturally with a previous result for TreeSync to provide a strong modular security guarantee for the published MLS standard.
Low Communication Threshold FHE from Standard (Module-)LWE
Threshold fully homomorphic encryption (ThFHE) is an extension of FHE that can be applied to multiparty computation (MPC) with low round complexity. Recently, Passelègue and Stehlé (Asiacrypt 2024) presented a simulation-secure ThFHE scheme with polynomially small decryption shares from the “yet another” learning with errors (LWE) assumption, in which the norm of the secret key is leaked to the adversary. While “yet another” LWE reduces from standard LWE, its module variant, “yet another” module-LWE (MLWE), lacks a known reduction from standard MLWE. Because of this, extending their scheme to an MLWE-based construction was left as an open question.
In this paper, we address this open problem: we propose a simulation-secure ThFHE scheme with polynomially small decryption shares whose security is (directly) reduced from standard LWE/MLWE. Our core technique, which we call “noise padding”, eliminates the need for “yet another” assumptions: we distribute shares of a small error and use them to adjust the distribution of the decryption noise so that no information about the secret key is leaked. As side benefits of our construction, our ThFHE efficiently realizes arbitrary T-out-of-N threshold decryption via simple Shamir secret sharing instead of {0, 1}-linear secret sharing. Furthermore, the sizes of keys, ciphertexts, and decryption shares in our scheme are constant w.r.t. the number of parties N; that is, we achieve compactness w.r.t. N.
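The T-out-of-N flexibility mentioned above rests on plain Shamir secret sharing; a minimal self-contained sketch over a toy prime field follows (this shows only the sharing structure, not the paper's ThFHE decryption):

```python
import secrets

P = 2**61 - 1  # toy prime field

def share(secret, t, n):
    """Shamir t-out-of-n: sample a random degree-(t-1) polynomial f with f(0) = secret."""
    coeffs = [secret % P] + [secrets.randbelow(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t of the shares."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shs = share(20250314, t=3, n=5)
assert reconstruct(shs[:3]) == reconstruct(shs[2:]) == 20250314
```

In the ThFHE setting each party holds such a share of the secret key and publishes only a decryption share, so the Lagrange recombination happens on those shares rather than on the key itself.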
Hybrid Obfuscated Key Exchange and KEMs
Hiding the metadata in Internet protocols serves to protect user privacy, dissuade traffic analysis, and prevent network ossification. Fully encrypted protocols require even the initial key exchange to be obfuscated: a passive observer should be unable to distinguish a protocol execution from an exchange of random bitstrings. Deployed obfuscated key exchanges such as Tor's pluggable transport protocol obfs4 are Diffie–Hellman-based, and rely on the Elligator encoding for obfuscation. Recently, Günther, Stebila, and Veitch (CCS '24) proposed a post-quantum variant pq-obfs, using a novel building block called obfuscated key encapsulation mechanisms (OKEMs): KEMs whose public keys and ciphertexts look like random bitstrings.
For transitioning real-world protocols, pure post-quantum security is not enough. Many are taking a hybrid approach, combining traditional and post-quantum schemes to hedge against security failures in either component. While hybrid KEMs are already widely deployed (e.g., in TLS 1.3), existing hybridization techniques fail to provide hybrid obfuscation guarantees for OKEMs. Further, even if a hybrid OKEM existed, the pq-obfs protocol would still not achieve hybrid obfuscation.
In this work, we address these challenges by presenting the first OKEM combiner that achieves hybrid IND-CCA security with hybrid ciphertext obfuscation guarantees, and using this to build Drivel, a modification of pq-obfs that is compatible with hybrid OKEMs. Our OKEM combiner allows for a variety of practical instantiations, e.g., combining obfuscated versions of DHKEM and ML-KEM. We additionally provide techniques to achieve unconditional public key obfuscation for LWE-based OKEMs, and explore broader applications of hybrid OKEMs, including a construction of the first hybrid password-authenticated key exchange (PAKE) protocol secure against adaptive corruptions in the UC model.
Delegatable ABE with Full Security from Witness Encryption
Delegatable Attribute-Based Encryption (DABE) is a well-known generalization of ABE, proposed to mirror organizational hierarchies. In this work, we design a fully-secure DABE scheme from witness encryption and other simple assumptions. Our construction does not rely on Random Oracles, and we provide a black-box reduction to polynomial hardness of underlying assumptions. To the best of our knowledge, this is the first DABE construction (beyond hierarchical identity-based encryption) that achieves full security without relying on complexity leveraging. Our DABE supports an unbounded number of key delegations, and the secret key size grows just linearly with each key delegation operation.
AsyRand: fast asynchronous distributed randomness beacon with reconfiguration
Distributed randomness beacon protocols, which generate publicly verifiable randomness at regular intervals, are crucial for a wide range of applications. The publicly verifiable secret sharing (PVSS) scheme is a promising cryptographic primitive for implementing beacon protocols, such as Hydrand (S&P '20) and SPURT (S&P '22). However, two key challenges for practical deployment remain unresolved: asynchrony and reconfiguration. In this paper, we introduce the AsyRand beacon protocol to address these challenges. In brief, AsyRand leverages Bracha Reliable Broadcast (BRB) or BRB-like protocols for message dissemination and incorporates a producer-consumer model to decouple the production and consumption of PVSS commitments. In the producer-consumer model, PVSS commitments are produced and consumed using a queue data structure. Specifically, the producer process is responsible for generating new PVSS commitments and reaching consensus on them within the queue, while the consumer process continuously consumes the commitments to recover PVSS secrets and generate new beacon values. This separation allows the producer and consumer processes to operate simultaneously and asynchronously, without the need for a global clock. Moreover, the producer-consumer model enables each party to detect potential faults in other parties by monitoring the queue length. If necessary, parties in AsyRand can initiate a removal process for faulty parties. BRB is also employed to facilitate the addition of new parties without requiring a system restart. In summary, AsyRand supports reconfiguration, enhancing both the protocol's usability and reliability. Additionally, we propose a novel PVSS scheme, which is of independent interest. Regarding complexity, AsyRand achieves state-of-the-art communication, computation, and verification complexity.
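The producer-consumer split can be pictured in a few lines of Python; this is a schematic toy in which random bytes stand in for PVSS commitments and a hash chain stands in for secret recovery, not the AsyRand protocol itself.

```python
import hashlib, queue, secrets, threading, time

commitments = queue.Queue()
ROUNDS = 10

def producer():
    """Agree on fresh PVSS commitments and enqueue them (toy: random bytes)."""
    for _ in range(ROUNDS):
        commitments.put(secrets.token_bytes(32))
        time.sleep(0.01)      # production pace is independent of consumption

def consumer():
    """Dequeue commitments, recover secrets, and chain them into beacon values."""
    beacon = b""
    for _ in range(ROUNDS):
        com = commitments.get()            # blocks until a commitment is available
        beacon = hashlib.sha256(beacon + com).digest()
        print("beacon:", beacon.hex()[:16])
        # monitoring commitments.qsize() is how a party would spot a stalled peer

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
```

Because neither loop waits for the other beyond the queue hand-off, no global clock is needed, which is the asynchrony property the abstract emphasizes.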
Withdrawable signatures in Fiat-Shamir with aborts constructions
This article presents an extension of the work performed by Liu, Baek and Susilo on withdrawable signatures to the Fiat-Shamir with aborts paradigm. We introduce an abstract construction, and provide security proofs for this proposal. As an instantiation, we provide a concrete construction for a withdrawable signature scheme based on Dilithium.
SNARKs for Stateful Computations on Authenticated Data
We present a new generalization of (zk-)SNARKs combining two additional features at the same time. Besides the verification of correct computation, our new SNARKs also allow, first, the verification of input data authenticity. Specifically, a verifier can confirm that the input to the computation originated from a trusted source. Second, our SNARKs support verification of stateful computations across multiple rounds, ensuring that the output of the current round correctly depends on the internal state of the previous round. Our SNARKs are specifically suited to applications in cyber-physical control systems, where computations are periodically carried out and need to be checked immediately. Our focus is on concrete practicality, so we abstain from arithmetizing hash functions or signatures in our SNARKs. Rather, we modify the internals of an existing SNARK to extend its functionality. Additionally, we present new optimizations to reduce proof size, prover time, and verification time in our setting. With our construction, prover runtime improves significantly over the baseline by a factor of 89. Verification time is 70% less for computations on authenticated data and 33% less for stateful computations. To demonstrate relevance and practicality, we implement and benchmark our new SNARKs in a sample real-world scenario with a (simple) quadcopter flight control system.
Periodic Table of Cryptanalysis: Geometric Approach with Different Bases
In the past three decades, we have witnessed the creation of various cryptanalytic attacks. However, relatively little research has been done on their potential underlying connections. The geometric approach, developed by Beyne in 2021, shows that a cipher can be viewed as a linear operation when we treat its input and output as points in an induced free vector space.
By performing a change of basis for the input and output spaces, one can obtain various transition matrices. Linear, differential, and (ultrametric) integral attacks have been well reinterpreted by Beyne's theory in a unified way.
Thus far, the geometric approach always uses the same basis for the input and output spaces. We observe here that this restriction is unnecessary and allowing different bases makes the geometric approach more flexible and able to interpret/predict more attack types. Given some set of bases for the input and output spaces, a family of basis-based attacks is defined by combining them, and all attacks in this family can be studied in a unified automatic search method.
We revisit three kinds of bases from previous geometric approach papers and extend them to four extra ones by introducing new rules for generating new bases. With the final seven bases, we can obtain different basis-based attacks in higher-order spaces, where the order is defined as the number of messages used in one sample during the attack.
We then provide four examples of applications of this new framework. First, we show that by choosing a better pair of bases, Beyne and Verbauwhede's ultrametric integral cryptanalysis can be interpreted as a single element of a transition matrix rather than as a linear combination of elements. This unifies the ultrametric integral cryptanalysis with the previous linear and quasi-differential attacks.
Second, we revisit the multiple-of-n property with our refined geometric approach and exhibit new multiple-of-n distinguishers that reach more rounds of the SKINNY-64 cipher than the state-of-the-art.
Third, we study the multiple-of-n property in the first-order case, which is similar to the subspace trail except that it is the divisibility property that is considered. This leads to a new distinguisher for 11-round-reduced SKINNY-64.
Finally, we give a closed formula for differential-linear approximations without any assumptions, even confirming that the two differential-linear approximations of SIMECK-32 and SIMECK-48 found by Hadipour et al. are deterministic independently of concrete key values.
We emphasize that none of these applications were possible before.
Related-Key Differential and Boomerang Cryptanalysis in the Fixed-Key Model
Differential cryptanalysis, along with its variants such as boomerang attacks, is widely used to evaluate the security of block ciphers. These cryptanalytic techniques often rely on assumptions like the hypothesis of stochastic equivalence and the Markov cipher assumption. Recently, more attention has been paid to verifying whether differential characteristics (DCs) meet these assumptions, finding both positive and negative results.
These efforts include automatic search methods for both value and difference propagation (e.g., Liu et al., CRYPTO 2020; Nageler et al., ToSC 2025/1), structural constraint analysis (e.g., Tan and Peyrin, ToSC 2022/4), and the quasidifferential framework (Beyne and Rijmen, CRYPTO 2022).
Nevertheless, less attention has been paid to related-key DCs and boomerang distinguishers, where the same assumptions are used. To the best of our knowledge, only some related-tweakey DCs of SKINNY have been checked, thanks to its linear word-based key schedule, and no similar work has been done for boomerang distinguishers.
The verification of related-key DCs and boomerang distinguishers is as important as that of DCs, as they often hold the longest attack records for block ciphers.
This paper focuses on investigating the validity of DCs in the related-key setting and boomerang distinguishers in both single- and related-key scenarios.
For this purpose, we generalize Beyne and Rijmen's quasidifferential techniques to related-key DCs and boomerang attacks.
First, to verify related-key DCs, the related-key quasi-DC is proposed. Similar to the relationship between the quasi-DC and DC, the exact probability of a related-key DC is equal to the sum of all corresponding related-key quasi-DCs' correlations.
Since the related-key quasi-DCs involve the key information, we can determine the probability of the target related-key DC in different key subspaces.
We find both positive and negative results.
For example, we verify the 18-round related-key DC used in the best attack on GIFT-64, finding that this related-key DC has a higher probability for one part of the key space but is impossible for the remaining keys.
Second, we identify proper bases to describe the boomerang distinguishers with the geometric approach. A quasi-BCT is constructed to consider the value influence in the boomerang connectivity table (BCT). For the DC parts, the quasi-biDDT is used. Connecting the quasi-BCT and quasi-biDDT, we can verify the probability of a boomerang distinguisher with quasi-boomerang characteristics.
This also allows us to analyze the probability of the boomerang in different key spaces.
For a 17-round boomerang distinguisher of SKINNY-64-128, we find that the probability is higher than expected for half of the keys and that the distinguisher is impossible for the other half.
PEGASIS: Practical Effective Class Group Action using 4-Dimensional Isogenies
In this paper, we present the first practical algorithm to compute an effective group action of the class group of any imaginary quadratic order on a set of supersingular elliptic curves primitively oriented by that order. Effective means that we can act with any element of the class group directly, and are not restricted to acting by products of ideals of small norm, as is for instance the case in CSIDH. Such restricted effective group actions often hamper cryptographic constructions, e.g., in signature or MPC protocols.
Our algorithm is a refinement of the Clapoti approach by Page and Robert, and uses 4-dimensional isogenies. As such, it runs in polynomial time, requires neither the computation of the structure of the class group nor expensive lattice reductions, and our refinements allow it to be instantiated with the orientation given by the Frobenius endomorphism. This makes the algorithm practical even at security levels as high as CSIDH-4096. Our implementation in SageMath takes 1.5s to compute a group action at the CSIDH-512 security level, 21s at the CSIDH-2048 level, and around 2 minutes at the CSIDH-4096 level. This marks the first instantiation of an effective cryptographic group action at such high security levels. For comparison, the recent KLaPoTi approach requires around 200s at the CSIDH-512 level in SageMath and 2.5s in Rust.
Re-Randomize and Extract: A Novel Commitment Construction Framework Based on Group Actions
Cryptographic group actions have attracted growing attention as a useful tool for constructing cryptographic schemes.
Among their applications, commitment schemes are particularly interesting as fundamental primitives, playing a crucial role in protocols such as zero-knowledge proofs, multi-party computation, and more.
In this paper, we introduce a novel framework to construct commitment schemes based on cryptographic group actions.
Specifically, we propose two key techniques for general group actions: re-randomization and randomness extraction.
Roughly speaking, a re-randomization algorithm introduces randomness within an orbit for any input element, while a randomness extractor maps this randomness to uniformity over the message space.
We demonstrate that these techniques can significantly facilitate the construction of commitment schemes, providing a flexible framework for constructing either perfectly hiding or perfectly binding commitments, depending on the type of extractor involved.
Moreover, we extend our framework to support the construction of commitments with additional desirable properties beyond hiding and binding, such as dual-mode commitments and enhanced linkable commitments.
These extensions are achieved by further adapting the extractor to satisfy trapdoor or homomorphic properties.
Finally, we instantiate all our proposed commitment schemes using lattices, specifically leveraging the lattice isomorphism problem (LIP) and the lattice automorphism problem (LAP) as underlying cryptographic assumptions.
To the best of our knowledge, this is the first commitment scheme construction based on LIP/LAP.
Additionally, we use LIP to provide a repair and improvement to the tensor isomorphism-based non-interactive commitment scheme proposed by D'Alconzo, Flamini, and Gangemi (ASIACRYPT 2023), which was recently shown to be insecure by an attack from Gilchrist, Marco, Petit, and Tang (CRYPTO 2024).
Computational Quantum Anamorphic Encryption and Anamorphic Secret Sharing
The concept of anamorphic encryption, first formally introduced by Persiano et al. in their influential 2022 paper titled ``Anamorphic Encryption: Private Communication Against a Dictator,'' enables embedding covert messages within ciphertexts. A key requirement is that an anamorphic ciphertext, which embeds a covert message, be indistinguishable from an original ciphertext. The notion was originally defined for public-key cryptosystems. We first present a quantum analogue of the classical public-key anamorphic encryption definition. Additionally, we introduce a definition of quantum anamorphic encryption based on symmetric-key encryption. Furthermore, we provide a detailed, general construction of quantum anamorphic symmetric-key encryption that takes any two quantum density matrices, possibly of different dimensions, and constructs a single quantum density matrix: the quantum anamorphic ciphertext containing ciphertexts of both. Subsequently, we introduce a definition of computational anamorphic secret sharing and extend the work of \c{C}akan et al. on computational quantum secret sharing to computational quantum anamorphic secret sharing, specifically addressing scenarios with multiple messages, multiple keys, and a single share function. The proposed secret-sharing scheme is secure against quantum adversaries.
Tight Adaptive Simulation Security for Identity-based Inner-Product FE in the (Quantum) Random Oracle Model
Abdalla et al. (ASIACRYPT 2020) introduced the notion of identity-based inner-product functional encryption (IBIPFE), which combines identity-based encryption and inner-product functional encryption (IPFE). Thus far, several pairing-based and lattice-based IBIPFE schemes have been proposed. However, two problems remain open. First, no known IBIPFE scheme satisfies adaptive simulation-based security. Second, known IBIPFE schemes that satisfy adaptive indistinguishability-based security or selective simulation-based security do not have tight reductions. In this paper, we propose lattice-based and pairing-based IBIPFE schemes that satisfy tight adaptive simulation-based security. First, we propose a generic transformation from an indistinguishability-based secure -dimensional (IB)IPFE scheme to a simulation-based secure -dimensional (IB)IPFE scheme. The proposed transformation improves Agrawal et al.'s transformation for plain IPFE (PKC 2020), which requires an indistinguishability-based secure -dimensional scheme. Then, we construct a lattice-based IBIPFE scheme that satisfies tight adaptive indistinguishability-based security under the LWE assumption in the quantum random oracle model. We apply the proposed transformation and obtain the first lattice-based IBIPFE scheme that satisfies adaptive simulation-based security. Finally, we construct a pairing-based IBIPFE scheme that satisfies tight adaptive simulation-based security under the DBDH assumption in the random oracle model. For the best efficiency, the pairing-based scheme does not use the proposed transformation.
Blind Signatures from Cryptographic Group Actions
We provide a generic construction of blind signatures from cryptographic group actions, following the framework of the blind signature CSIOtter introduced by Katsumata et al. (CRYPTO'23) in the context of isogenies (a commutative group action). We adapt and modify that framework to make it work even for non-commutative group actions. As a result, we obtain a blind signature from abstract group actions which is proven secure in the random oracle model. We also propose an instantiation based on a variant of linear code equivalence, interpreted as a symmetric group action.
Trail-Estimator: An Automated Verifier for Differential Trails in Block Ciphers
Differential cryptanalysis is a powerful technique for attacking block ciphers, wherein the Markov cipher assumption and stochastic hypothesis are commonly employed to simplify the search and probability estimation of differential trails. However, these assumptions often neglect inherent algebraic constraints, potentially resulting in invalid trails and inaccurate probability estimates. Some studies identified violations of these assumptions and explored how they impose constraints on key material, but they have not yet fully captured all relevant ones. This study proposes Trail-Estimator, an automated verifier for differential trails on block ciphers, consisting of two parts: a constraint detector Cons-Collector and a solving tool Cons-Solver. We first establish the fundamental principles that will allow us to systematically identify all constraint subsets within a differential trail, upon which Cons-Collector is built. Then, Cons-Solver utilizes specialized preprocessing techniques to efficiently solve the detected constraint subsets, thereby determining the key space and providing a comprehensive probability distribution of differential trails. To validate its effectiveness, Trail-Estimator is applied to verify 14 differential trails for the SKINNY, LBLOCK, and TWINE block ciphers. Experimental results show that Trail-Estimator consistently identifies previously undetected constraints for SKINNY and discovers constraints for the first time for LBLOCK and TWINE. Notably, it is the first tool to discover long nonlinear constraints extending beyond five rounds in these ciphers. Furthermore, Trail-Estimator's accuracy is validated by experiments showing its predictions closely match the real probability distribution of short-round differential trails.
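To make the notion of key-dependent trail probabilities concrete, here is a small Python experiment (ours, not Trail-Estimator) that tallies, for every key of a toy two-round 4-bit cipher, how many plaintext pairs follow a fixed differential trail; the resulting histogram shows how a trail's probability can vary across the key space.

```python
# Per-key count of pairs following a 2-round trail in a toy 4-bit cipher
# (PRESENT S-box, XOR round keys) -- not SKINNY/LBLOCK/TWINE.
from itertools import product

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt_row(d_in):
    row = [0] * 16
    for x in range(16):
        row[SBOX[x] ^ SBOX[x ^ d_in]] += 1
    return row

D_IN = 0x7
D_MID = max(range(1, 16), key=lambda d: ddt_row(D_IN)[d])   # best 1-round hop
D_OUT = max(range(1, 16), key=lambda d: ddt_row(D_MID)[d])

def trail_count(k0, k1):
    cnt = 0
    for x in range(16):
        u, v = SBOX[x ^ k0], SBOX[(x ^ D_IN) ^ k0]
        if u ^ v == D_MID and SBOX[u ^ k1] ^ SBOX[v ^ k1] == D_OUT:
            cnt += 1            # pair follows the trail round by round
    return cnt

hist = {}
for k0, k1 in product(range(16), repeat=2):
    c = trail_count(k0, k1)
    hist[c] = hist.get(c, 0) + 1
print(hist)   # how many of the 256 keys give each number of good pairs
```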
Provably Secure Approximate Computation Protocols from CKKS
Secure multi-party computation (MPC) enables collaborative, privacy-preserving computation over private inputs. Advances in homomorphic encryption (HE), particularly the CKKS scheme, have made secure computation practical, making it well-suited for real-world applications involving approximate computations. However, the inherent approximation errors in CKKS present significant challenges in developing MPC protocols.
This paper investigates the problem of secure approximate MPC from CKKS. We first analyze CKKS-based protocols in the two-party setting. When only one party holds a private input and the other acts as an evaluator, a simple protocol that applies the noise smudging technique on the encryptor's side achieves security in the standard sense. When both parties hold private inputs, we demonstrate that a protocol incorporating independent errors from each party achieves a relaxed security notion, referred to as liberal security; nevertheless, such a protocol fails to satisfy the standard security definition. To address this limitation, we propose a novel protocol that employs a distributed sampling approach to generate smudging noise securely, and which satisfies the standard security definition.
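The smudging technique referenced above can be illustrated in a few lines of Python; the parameters below are illustrative, not those of the paper.

```python
# Noise smudging: drown a small approximation error in much larger
# uniform noise, so the output distribution is statistically close to
# one computed without the error.
import secrets

def smudge(value: int, error_bound: int, sec_param: int = 40) -> int:
    B = error_bound << sec_param          # noise 2^sec_param times the error
    return value + secrets.randbelow(2 * B + 1) - B

# a result carrying a small CKKS-style error (|e| <= 8, say) becomes
# statistically indistinguishable from the error-free result
print(smudge(10_000 + 3, 8))
```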
Finally, we extend the two-party protocols to the multi-party setting. Since the existing threshold CKKS-based MPC protocol satisfies only liberal security, we present a novel multi-party protocol achieving standard security by applying multi-party distributed sampling of a smudging error.
For all the proposed protocols, we formally define the functionalities and provide rigorous security analysis within the simulation-based security framework. To the best of our knowledge, this is the first work to explicitly define the functionality of CKKS-based approximate MPC and achieve formal security guarantees.
Reducing the Number of Qubits in Solving LWE
At Crypto 2021, May presented an algorithm for solving the ternary Learning-With-Errors problem, where the solution is a ternary vector with a known number of and entries. This attack significantly improved the time complexity of previously known algorithms from to , where is the size of the key space. To achieve this, May exploited the fact that using more representations, i.e., allowing ternary interim results with additional and entries, reduces the overall time complexity.
Later, van Hoof et al. (PQCrypto 2021) combined May's algorithm with quantum walks into a new attack that runs in time . However, this quantum attack requires an exponential number of qubits. This work investigates whether the ternary LWE problem can also be solved using only qubits. To this end, we take a close look at Dicke states, which are equal superpositions over all binary vectors of a fixed Hamming weight. Generalizing Dicke states to ternary vectors makes these states applicable to the ternary LWE problem.
Bärtschi and Eidenbenz (FCT 2019) proposed a quantum circuit that prepares binary Dicke states deterministically in linear time. Their procedure benefits from the inductive structure of Dicke states, i.e., a Dicke state of a particular dimension can be built from Dicke states of lower dimensions. Our work proves that this inductive structure is also present in generalized Dicke states with an underlying set other than $\{0,1\}$. Utilizing this structure, we introduce a new algorithm that deterministically prepares generalized Dicke states in linear time, for which we also provide an implementation in Qiskit.
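For intuition, the following Python snippet classically enumerates the amplitudes of binary and generalized (ternary) Dicke states; it illustrates only the states themselves, not the linear-time preparation circuit.

```python
# Classically enumerating the amplitudes of (generalized) Dicke states:
# an equal superposition over all vectors with prescribed symbol counts.
from itertools import permutations
from math import sqrt

def dicke_amplitudes(symbols):
    # symbols: multiset of entries, e.g. [1, 1, 0, 0] or [1, -1, 0, 0]
    basis = sorted(set(permutations(symbols)))
    amp = 1 / sqrt(len(basis))
    return {b: amp for b in basis}

binary = dicke_amplitudes([1, 1, 0, 0])     # |D^4_2>: 6 basis vectors
ternary = dicke_amplitudes([1, -1, 0, 0])   # ternary analogue: 12 vectors
print(len(binary), len(ternary))
```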
Finally, we apply our generalized Dicke states to the ternary LWE problem and construct an algorithm that requires qubits and classical memory space up to . We achieve as the best obtainable time complexity.
An Efficient Quantum Oblivious Transfer Protocol
Oblivious Transfer (OT) is a significant two-party privacy-preserving cryptographic primitive. OT involves a sender holding several pieces of information and a receiver holding a choice bit; the choice bit indicates the piece of information that the receiver wants to obtain as the output of the OT. At the end of the protocol, the sender remains oblivious to the choice bit, and the receiver remains oblivious to the contents of the pieces of information that were not chosen. OT has applications ranging from secure multi-party computation and privacy-preserving protocols to cryptographic protocols for secure communication. Most classical OT protocols are based on number-theoretic assumptions that are not quantum secure, while existing quantum OT protocols are not particularly efficient or practical. Herein, we present the design and analysis of a simple yet efficient quantum OT protocol, namely qOT. qOT uses the asymmetric key distribution proposed by Gao et al. [18] as a building block. It requires only single photons as the source of quantum states, and the states are measured using single-particle projective measurements; these choices make qOT efficient and practical. Our proposed design is secure against quantum attacks and also provides long-term security.
Blockchain-based Secure D2D localisation with adaptive precision
In this paper we propose a secure best-effort methodology for providing localisation information to devices in a heterogeneous network where devices have access neither to GPS-like technology nor to heavy cryptographic infrastructure. Each device computes its localisation with the highest possible accuracy based solely on the data provided by its neighboring anchors. The security of the localisation is guaranteed by registering the localisation information on a distributed ledger via smart contracts. We prove the security of our solution under the adaptive chosen-message attack model. We furthermore evaluate the effectiveness of our solution by measuring the average location registration time, failed requests, and total execution time, using Hyperledger Besu with QBFT consensus as the DLT case study.
Monotone-Policy BARGs and More from BARGs and Quadratic Residuosity
A tuple of NP statements $(x_1, \ldots, x_k)$ satisfies a monotone policy $P \colon \{0,1\}^k \to \{0,1\}$ if $P(b_1, \ldots, b_k) = 1$, where $b_i = 1$ if and only if $x_i$ is in the NP language. A monotone-policy batch argument (monotone-policy BARG) for NP is a natural extension of regular batch arguments (BARGs) that allows a prover to prove that $x_1, \ldots, x_k$ satisfy a monotone policy $P$ with a proof of size $\mathsf{poly}(\lambda, |\mathcal{R}|, \log k)$, where $|\mathcal{R}|$ is the size of the Boolean circuit computing the NP relation $\mathcal{R}$.
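For concreteness, here is a minimal Python check of this definition with a threshold policy, a typical monotone function; the helper names are ours.

```python
# Checking whether a tuple of statements satisfies a monotone policy:
# here "at least 2 of 3 statements are true".

def satisfies(policy, bits):
    # bits[i] = 1 iff statement x_i is in the NP language (decided elsewhere)
    return policy(bits) == 1

threshold_2_of_3 = lambda b: 1 if sum(b) >= 2 else 0

print(satisfies(threshold_2_of_3, [1, 0, 1]))   # True
print(satisfies(threshold_2_of_3, [1, 0, 0]))   # False
```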
Previously, Brakerski, Brodsky, Kalai, Lombardi, and Paneth (CRYPTO 2023) and Nassar, Waters, and Wu (TCC 2024) showed how to construct monotone-policy BARGs from (somewhere-extractable) BARGs for NP together with a leveled homomorphic encryption scheme (Brakerski et al.) or an additively homomorphic encryption scheme over a sufficiently-large group (Nassar et al.). In this work, we improve upon both works by showing that BARGs together with additively homomorphic encryption over any group suffice (e.g., over $\mathbb{Z}_2$). For instance, we can instantiate the additively homomorphic encryption with the classic Goldwasser-Micali encryption scheme based on the quadratic residuosity (QR) assumption. Then, by appealing to existing compilers, we also obtain a monotone-policy aggregate signature scheme from any somewhere-extractable BARG and the QR assumption.
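Since Goldwasser-Micali is the suggested instantiation, here is a toy Python version showing its additive homomorphism over Z_2; the parameters are far too small to be secure.

```python
# Toy Goldwasser-Micali: QR-based bit encryption whose ciphertexts are
# additively homomorphic over Z_2 (multiplying ciphertexts XORs bits).
import secrets

p, q = 499, 547            # toy primes, both congruent to 3 mod 4
N = p * q
x = N - 1                  # -1: a quadratic non-residue mod p and mod q,
                           # with Jacobi symbol +1 modulo N

def enc(bit):
    r = secrets.randbelow(N - 2) + 1
    return (r * r * pow(x, bit, N)) % N

def dec(c):                # decryption uses the factorization of N
    is_qr = pow(c, (p - 1) // 2, p) == 1 and pow(c, (q - 1) // 2, q) == 1
    return 0 if is_qr else 1

c0, c1 = enc(0), enc(1)
assert dec(c0) == 0 and dec(c1) == 1
assert dec(c0 * c1 % N) == 1    # Enc(a) * Enc(b) decrypts to a XOR b
```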
Lattice-Based Post-Quantum iO from Circular Security with Random Opening Assumption (Part II: zeroizing attacks against private-coin evasive LWE assumptions)
Indistinguishability obfuscation (iO) stands out as a powerful cryptographic primitive but remains notoriously difficult to realize under simple-to-state, post-quantum assumptions. Recent works have proposed lattice-inspired iO constructions backed by new “LWE-with-hints” assumptions, which posit that certain distributions of LWE samples retain security despite auxiliary information. However, subsequent cryptanalysis has revealed structural vulnerabilities in these assumptions, leaving us without any post-quantum iO candidates supported by simple, unbroken assumptions.
Motivated by these proposals, we introduce the \emph{Circular Security with Random Opening} (CRO) assumption—a new LWE-with-hint assumption that addresses structural weaknesses from prior assumptions, and based on our systematic examination, does not appear vulnerable to known cryptanalytic techniques. In CRO, the hints are random ``openings'' of zero-encryptions under the Gentry--Sahai--Waters (GSW) homomorphic encryption scheme. Crucially, these zero-encryptions are efficiently derived from the original LWE samples via a special, carefully designed procedure, ensuring that the openings are marginally random. Moreover, the openings do not induce any natural leakage on the LWE noises.
These two features---marginally random hints and the absence of (natural) noise leakage---rule out important classes of attacks that had undermined all previous LWE-with-hint assumptions for iO. Therefore, our new lattice-based assumption for iO provides a qualitatively different target for cryptanalysis compared to existing assumptions.
To build iO under this less-structured CRO assumption, we develop several new technical ideas. In particular, we devise an oblivious LWE sampling procedure, which succinctly encodes random LWE secrets and smudging noises, and uses a tailor-made homomorphic evaluation procedure to generate secure LWE samples. Crucially, all non-LWE components in this sampler, including the secrets and noises of the generated samples, are independently and randomly distributed, avoiding attacks on non-LWE components.
In the second part of this work, we investigate recent constructions of obfuscation for pseudorandom functionalities. We show that the same cryptanalytic techniques used to break previous LWE-with-hints assumptions for iO (Hopkins-Jain-Lin, CRYPTO 2021) can be adapted to construct counterexamples against the private-coin evasive LWE assumptions underlying these pseudorandom obfuscation schemes.
Unlike prior counterexamples for private-coin evasive LWE assumptions, our new counterexamples take the form of zeroizing attacks, contradicting the common belief that evasive-LWE assumptions circumvent zeroizing attacks by restricting to ``evasive'' or pseudorandom functionalities.
An ETSI GS QKD compliant TLS implementation
This paper presents our implementation of the Quantum Key Distribution standard ETSI GS QKD 014 v1.1.1, which required a modification of the Rustls library. We modified the TLS protocol while maintaining backward compatibility on the client and server sides. We thus wish to contribute to the effort to generalize the use of Quantum Key Distribution on the Internet. Finally, we used this library for a video-conference call encrypted by QKD.
Fair Exchange for Decentralized Autonomous Organizations via Threshold Adaptor Signatures
A Decentralized Autonomous Organization (DAO) enables multiple parties to collectively manage digital assets in a blockchain setting. We focus on achieving fair exchange between DAOs using a cryptographic mechanism that operates with minimal blockchain assumptions and, crucially, does not rely on smart contracts.
Specifically, we consider a setting where a DAO consisting of sellers holding shares of a witness $w$ interacts with a DAO comprising buyers holding shares of a signing key $\mathsf{sk}$; the goal is for the sellers to exchange $w$ for a signature under $\mathsf{sk}$ transferring a predetermined amount of funds.
Fairness is required to hold both between DAOs (i.e., each DAO receives its asset if and only if the other does) and within each DAO (i.e., all members of a DAO receive their asset if and only if every other member does).
We formalize these fairness properties and present an efficient protocol for DAO-based fair exchange under standard cryptographic assumptions. Our protocol leverages certified witness encryption and threshold adaptor signatures, two primitives of independent interest that we introduce and show how to construct efficiently.
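For background, the sketch below shows a plain single-party Schnorr-style adaptor signature over a toy group, the primitive whose threshold variant the paper introduces; completing a pre-signature with the witness reveals that witness, which is the mechanism enabling the exchange. All parameters and names are illustrative.

```python
# Single-party Schnorr-style adaptor signature over a toy group.
import hashlib
import secrets

q = 83; p = 2 * q + 1; g = 4    # order-q subgroup of Z_167^*

def H(*vals):
    return int.from_bytes(
        hashlib.sha256(repr(vals).encode()).digest(), "big") % q

x = secrets.randbelow(q); X = pow(g, x, p)   # signing key and public key
t = secrets.randbelow(q); T = pow(g, t, p)   # witness and its "statement"

# pre-signature: a Schnorr signature shifted by the unknown witness t
k = secrets.randbelow(q); R = pow(g, k, p)
c = H(R * T % p, "transfer-funds")
s_pre = (k + c * x) % q
assert pow(g, s_pre, p) == R * pow(X, c, p) % p    # pre-verification

s = (s_pre + t) % q                                # adapt with the witness
assert pow(g, s, p) == R * T * pow(X, c, p) % p    # full signature verifies

assert (s - s_pre) % q == t                        # witness extraction
```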
Generic Composition: From Classical to Quantum Security
Authenticated encryption (AE) provides both authenticity and privacy.
Starting with Bellare and Namprempre's work in 2000, the Encrypt-then-MAC composition of an encryption scheme for privacy and a MAC for authenticity has become a well-studied and common approach.
This work investigates the security of the Encrypt-then-MAC composition in a quantum setting, meaning that adversarial queries, as well as the responses to those queries, may be in superposition.
We demonstrate that the Encrypt-then-MAC composition of a chosen-plaintext (IND-qCPA) secure symmetric encryption scheme SE and a plus-one unforgeable MAC fails to achieve chosen-ciphertext (IND-qCCA) security.
On the other hand, we show that it suffices to choose a quantum pseudorandom function (qPRF) as the MAC.
Namely, the Encrypt-then-MAC composition of SE and a qPRF is IND-qCCA secure.
The same holds for the Encrypt-and-MAC composition of SE and a qPRF.
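For reference, here is a classical (non-quantum) Encrypt-then-MAC composition in Python; the toy SHA-256 counter-mode stream cipher and HMAC-SHA256 stand in for SE and the qPRF-based MAC, and this is a sketch rather than a hardened implementation.

```python
# Classical Encrypt-then-MAC: encrypt, then MAC the ciphertext;
# on receipt, verify the tag before decrypting.
import hashlib
import hmac
import secrets

def keystream(key, nonce, n):
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def enc_then_mac(ke, km, msg):
    nonce = secrets.token_bytes(16)
    body = bytes(a ^ b for a, b in zip(msg, keystream(ke, nonce, len(msg))))
    ct = nonce + body
    return ct, hmac.new(km, ct, hashlib.sha256).digest()

def dec(ke, km, ct, tag):
    if not hmac.compare_digest(tag, hmac.new(km, ct, hashlib.sha256).digest()):
        return None                      # verify first: that is the EtM order
    nonce, body = ct[:16], ct[16:]
    return bytes(a ^ b for a, b in zip(body, keystream(ke, nonce, len(body))))

ke, km = secrets.token_bytes(32), secrets.token_bytes(32)
ct, tag = enc_then_mac(ke, km, b"attack at dawn")
assert dec(ke, km, ct, tag) == b"attack at dawn"
```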
How Small Can S-boxes Be?
S-boxes are the most popular nonlinear building blocks used in symmetric-key primitives.
Both cryptographic properties and implementation cost of an S-box are crucial for a good cipher design, especially for lightweight ones.
This paper aims to determine the exact minimum area of optimal 4-bit S-boxes (whose differential uniformity and linearity are both 4) under a given standard cell library.
Firstly, we evaluate upper and lower bounds on the minimum area of S-boxes by proposing a Prim-like greedy algorithm and utilizing properties of balanced Boolean functions to construct bijective S-boxes.
Secondly, we propose a SAT-aided automatic search tool that can simultaneously take into account multiple cryptographic properties, such as differential uniformity, linearity, and algebraic degree, as well as implementation costs such as area and gate-depth complexity.
Thirdly, thanks to our tool, we manage to find the exact minimum area for different types of 4-bit S-boxes.
Measurements in this paper use the gate equivalent (GE) as the standard unit under the UMC 180 nm library; all 2/3/4-input logic gates are taken into consideration.
Our results show that the minimum area of an optimal 4-bit S-box is 11 GE, with a depth of 3.
If 4-input gates are not used, the minimum area increases to 12 GE and the depth becomes 4; the same holds if only 2-input gates are used.
If we further require that the S-boxes have no fixed points, the minimum area increases slightly to 12.33 GE while the depth is unchanged.
Interestingly, the same results are obtained for non-optimal 4-bit bijective S-boxes, as long as their differential uniformity and linearity are (i.e., there are no non-trivial linear structures), provided only 2-input and 3-input gates are used; the minimum area, however, reduces to 9 GE if 4-input gates are involved.
More strictly, if we require the algebraic degree of all coordinate functions of optimal S-boxes to be 3, the minimum area is 14 GE with fixed points and 14.33 GE without, and the depth increases sharply to 8.
Besides determining the exact minimum area, our tool is also useful for finding better implementations of existing S-boxes. As a result, we find an implementation of Keccak's 5-bit S-box requiring 17 GE; by contrast, the designers' original circuit has an area of 23.33 GE, while the optimized result by Lu et al. achieves 17.66 GE. We also find the first optimized implementation of SKINNY's 8-bit S-box, requiring 26.67 GE.
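The two optimality criteria used throughout this abstract can be computed directly; the snippet below does so for PRESENT's 4-bit S-box, which is optimal in this sense.

```python
# Differential uniformity and linearity (maximum absolute LAT bias)
# of a 4-bit S-box, here PRESENT's.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def parity(v):
    return bin(v).count("1") & 1

def diff_uniformity(s):
    return max(sum(s[x] ^ s[x ^ a] == b for x in range(16))
               for a in range(1, 16) for b in range(16))

def linearity(s):
    return max(abs(sum(parity(a & x) == parity(b & s[x])
                       for x in range(16)) - 8)
               for a in range(1, 16) for b in range(1, 16))

print(diff_uniformity(SBOX), linearity(SBOX))   # expect 4 4: optimal
```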
MERCURY: A multilinear Polynomial Commitment Scheme with constant proof size and no prover FFTs
We construct a pairing-based polynomial commitment scheme for multilinear polynomials of size where constructing an opening proof requires field operations and scalar multiplications. Moreover, the opening proof consists of a constant number of field elements.
This is a significant improvement over previous works, which would require either
1. field operations; or
2. an opening proof of size .
The main technical component is a new method of verifiably folding a witness via univariate polynomial division.
As opposed to previous methods, the proof size and prover time remain constant *regardless of the folding factor*.
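As background for the folding-by-division idea, the identity f(X) - f(z) = q(X)(X - z) underlies openings in pairing-based schemes generally; the sketch below performs the synthetic division over a toy prime field and is a generic illustration, not MERCURY's specific folding procedure.

```python
# Synthetic division of f(X) by (X - z) over a toy prime field: the
# remainder equals f(z), and the quotient is what a prover commits to.

P = 2**61 - 1   # toy prime field

def poly_eval(f, z):
    acc = 0
    for c in reversed(f):            # Horner's rule; f[i] is the X^i coeff
        acc = (acc * z + c) % P
    return acc

def divide_by_linear(f, z):
    # divide f(X) by (X - z): returns quotient coefficients and remainder
    quot = [0] * (len(f) - 1)
    acc = 0
    for i in range(len(f) - 1, 0, -1):
        acc = (acc * z + f[i]) % P
        quot[i - 1] = acc
    return quot, (acc * z + f[0]) % P

f = [3, 1, 4, 1, 5]                  # 3 + X + 4X^2 + X^3 + 5X^4
quot, rem = divide_by_linear(f, 7)
assert rem == poly_eval(f, 7)        # remainder equals the evaluation f(z)
```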
Optimizing Final Exponentiation for Pairing-Friendly Elliptic Curves with Odd Embedding Degrees Divisible by 3
In pairing-based cryptography, final exponentiation with a large fixed exponent is crucial for ensuring unique outputs in Tate and optimal Ate pairings. While optimizations for elliptic curves with even embedding degrees have been well-explored, progress for curves with odd embedding degrees, particularly those divisible by 3, has been more limited. This paper presents new optimization techniques for computing the final exponentiation of the optimal Ate pairing on these curves. The first exploits the fact that some existing seeds have a form enabling cyclotomic cubing and extends this to generate new seeds with the same form. The second is to generate new seeds with sparse ternary representations, replacing squarings with cyclotomic cubings. The first technique improves efficiency by and compared to the square-and-multiply (\textbf{SM}) method for existing seeds at -bit and -bit security levels, respectively. For newly generated seeds, it achieves efficiency gains of at -bit, at -bit, and at -bit security levels. The second technique improves efficiency by at -bit, at -bit, and at -bit security levels.
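The cube-and-multiply pattern behind the second technique can be sketched as follows; plain modular arithmetic stands in for the cyclotomic-subgroup arithmetic of an actual pairing, where cubing and inversion are especially cheap, and the exponent is an illustrative one.

```python
# Cube-and-multiply with a sparse signed-ternary {-1, 0, 1} exponent.

P = 2**61 - 1

def balanced_ternary(e):
    digits = []
    while e:
        d = e % 3
        if d == 2:
            d = -1
        e = (e - d) // 3
        digits.append(d)
    return digits[::-1]              # most significant digit first

def pow_ternary(x, e, p=P):
    inv_x = pow(x, -1, p)            # conjugation-cheap in a cyclotomic group
    acc = 1
    for d in balanced_ternary(e):
        acc = pow(acc, 3, p)         # the "cyclotomic cubing" step
        if d == 1:
            acc = acc * x % p
        elif d == -1:
            acc = acc * inv_x % p
    return acc

e = 3**40 + 3**17 - 3**5 + 1         # a sparse ternary exponent
assert pow_ternary(5, e) == pow(5, e, P)
```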