All papers in 2025 (Page 8 of 1125 results)
A Note on the Blindness of the Scheme from ePrint 2025/397
This note demonstrates that the blind signature scheme based on cryptographic group actions proposed in ePrint paper 2025/397 fails to ensure blindness. Specifically, we construct an adversary that achieves a non-trivial advantage in the blindness experiment. The attack leverages selective-abort techniques (also known as selective-failure attacks), a well-known strategy in the MPC literature.
Matchmaker: Fast Secure Inference across Deployment Scenarios
Secure Two-Party Computation (2PC) enables secure inference with cryptographic guarantees that protect the privacy of the model owner and client. However, it adds significant performance overhead. In this work, we make 2PC-based secure inference efficient while considering important deployment scenarios.
We observe that the hitherto-unconsidered latency of fetching keys from storage significantly impacts performance, as does network speed. We design a Linear Secret Sharing (LSS)-based system and a Function Secret Sharing (FSS)-based system for secure inference, optimized for small key size and for low communication, respectively. Notably, our highly optimized and hardware-aware CPU-based LSS system outperforms prior GPU-based LSS systems. We then show that the best choice between the two systems depends on the deployment scenario.
In fact, under certain deployments, a combination of the two can leverage heterogeneous processing across CPU and GPU. Such protocol-system co-design lets us outperform state-of-the-art secure inference systems.
Multi-Client Attribute-Based Unbounded Inner Product Functional Encryption, and More
This paper presents the concept of a multi-client functional encryption (MC-FE) scheme for attribute-based inner product functions (AB-IP), initially proposed by Abdalla et al. [ASIACRYPT'20], in an unbounded setting. In such a setting, the setup is independent of vector length constraints, allowing secret keys to support functions of arbitrary lengths, and clients can dynamically choose vector lengths during encryption. The functionality outputs the sum of inner products if vector lengths and indices meet a specific relation, and all clients' attributes satisfy the key's policy. We propose the following constructions, based on the matrix decisional Diffie-Hellman assumption, in a natural permissive setting of unboundedness:
– the first multi-client attribute-based unbounded IPFE (MC-AB-UIPFE) scheme secure in the standard model, overcoming previous limitations where clients could only encrypt fixed-length data;
– the first multi-input AB-UIPFE (MI-AB-UIPFE) in the public-key setting, improving upon prior bounded constructions under the same assumption;
– the first dynamic decentralized UIPFE (DD-UIPFE), enhancing the dynamism property of prior works.
Technically, we follow the blueprint of Agrawal et al. [CRYPTO’23] but begin with a new unbounded FE called extended slotted unbounded IPFE. We first construct a single-input AB-UIPFE in the standard model and then extend it to multi-input settings. In a nutshell, our work demonstrates the applicability of function-hiding security of IPFE in realizing variants of multi-input FE capable of encoding unbounded
length vectors both at the time of key generation and encryption.
Private Computation on Common Fuzzy Records
Private computation on common records refers to analyzing data from two databases containing shared records without revealing personal information. As a basic requirement for private computation, the databases involved essentially need to be aligned by a common identification system. However, such common identifiers can hardly be expected in real-world scenarios. For this reason, multiple quasi-identifiers can be used to identify common records. As some quasi-identifiers might be missing or contain typos, it is important to support a fuzzy-records setting. Identifying common records using quasi-identifiers requires manipulating highly sensitive information, which raises privacy concerns.
This work studies the problem of enabling such data analysis on fuzzy records of quasi-identifiers. To this end, we propose ordered threshold-one (OTO) matching, which can be efficiently realized by circuit-based private set intersection (CPSI) protocols and some multiparty computation (MPC) techniques. Furthermore, we introduce generic encoding techniques from traditional matching rules to OTO matching. Finally, we obtain a secure and efficient private computation protocol supporting various matching rules that are already widely used.
We also demonstrate the superiority of our proposal with experimental validation. First, we empirically check that our encoding to OTO matching only marginally affects accuracy on benchmark datasets from the fuzzy record matching literature. Second, we implement our protocol and achieve significantly faster performance, at the cost of communication overhead, compared to previous privacy-preserving record linkage (PPRL) protocols. For 100K records per dataset, our protocol incurs 147.58 MB of communication, 10.71 s of setup time, and 1.97 s of online time, and is 7.78 times faster than the previous work (50.12 times faster when considering online time only).
A Note on Obfuscation-based Attacks on Private-coin Evasive LWE
The evasive learning with errors (evasive LWE) assumption is a new assumption introduced independently by Wee (Eurocrypt 2022) and Tsabary (Crypto 2022) as a significant strengthening of the standard LWE assumption.
While the assumption is known to imply various strong primitives including witness encryption [Wee22, Tsabary22], the assumption in its most general case (i.e., the private-coin variant) is considered quite implausible due to the obfuscation-based attack mentioned in [Wee22]. This obfuscation-based attack was later formalized by Vaikuntanathan, Wee, and Wichs [VWW22].
In this note, we revisit their attack and give a concrete counterexample demonstrating that the attack actually does not work. We then show that the attack can be made valid with some modifications. Along the way, we also strengthen the counterexample by making it provable. Specifically, our counterexample is valid assuming the (plain) LWE assumption and the existence of instance-hiding witness encryption, whereas their original counterexample relied on the heuristic assumption that an ideal obfuscation exists.
Non-Interactive Verifiable Aggregation
Consider a weak analyst that wishes to outsource data collection and the computation of aggregate statistics over a potentially large population of (also weak) clients to a powerful server. For flexibility and efficiency, we consider public-key and non-interactive protocols, meaning the clients know the analyst's public key but do not share secrets, and each client sends at most one message. Furthermore, the final step should be silent, whereby the analyst simply downloads the (encrypted) result from the server when needed. To capture this setting, we define a new primitive we call Non-Interactive Verifiable Aggregation (NIVA).
We require both privacy and robustness for a NIVA protocol to be deemed secure. Namely, our security notion for NIVA ensures that the clients' data remains hidden from both the server and the analyst, while also ensuring that malicious clients cannot skew the results by providing faulty data.
We propose a secure NIVA protocol, which we call PEAR (for Private, Efficient, Accurate, Robust), and which can validate inputs according to any NP validity rule. PEAR is based on a novel combination of functional encryption for inner-products (Abdalla et al., PKC 2015) and fully-linear probabilistically-checkable proofs (Boneh et al., Crypto 2019). We emphasize that PEAR is non-interactive, public-key, and makes black-box use of the underlying cryptographic primitives. Additionally, we devise substantial optimizations of PEAR for practically relevant validity rules. Finally, we implement PEAR to show its feasibility for such validity rules, conducting a thorough performance evaluation. In particular, we compare PEAR to two more straightforward or "off-the-shelf" NIVA protocols and show performance gains, demonstrating the merit of our new approach. The bottleneck in our protocol comes from the fact that we require the underlying IPFE scheme to be "unrestricted" over a large field. As more efficient such schemes are developed, they can immediately be plugged into PEAR for further gains.
Samaritan: Linear-time Prover SNARK from New Multilinear Polynomial Commitments
We study linear-time prover SNARKs and make the following contributions:
We provide a framework for transforming a univariate polynomial commitment scheme into a multilinear polynomial commitment scheme. Our transformation is generic: it can be instantiated with any univariate scheme and improves on prior transformations like Gemini (EUROCRYPT 2022) and Virgo (S&P 2020) in all relevant parameters: proof size, verification complexity, and prover complexity. Instantiating the above framework with the KZG univariate polynomial commitment scheme, we get SamaritanPCS – the first multilinear polynomial commitment scheme with constant proof size and linear-time prover. The proof size is just 368 bytes, which is the smallest among all multilinear polynomial commitment schemes. Our scheme also has excellent batching properties, wherein proving $k$ evaluations over the hypercube of size $n$ incurs $O(n + k\sqrt{n})$ cryptographic work, resulting in substantially amortized prover work over several evaluations.
We construct LogSpartan – a new multilinear PIOP for R1CS based on recent techniques for lookup arguments. Compiling this PIOP using SamaritanPCS gives Samaritan – a SNARK in the universal and updatable SRS setting. Samaritan has a linear-time prover, logarithmic verification, and logarithmic proof size. Concretely, its proof size is one of the smallest among known linear-time prover SNARKs that do not rely on concretely expensive proof recursion techniques. For an R1CS instance with 1 million constraints, Samaritan (over the BLS12-381 curve) has a proof size of 6.2 KB.
We compare Samaritan with other linear-time prover SNARKs in the updatable setting. We asymptotically improve on the $\log^2 n$ proof size of Spartan. Unlike Libra (CRYPTO 2019), the argument size of Samaritan is independent of the circuit depth. Compared to Gemini (EUROCRYPT 2022), Samaritan achieves a 3X smaller argument size at 1 million constraints. We are competitive with the very recently proposed MicroSpartan (S&P 2025) and with linear-time SNARKs for the Plonkish constraint system such as HyperPlonk (EUROCRYPT 2023).
ProofFrog: A Tool For Verifying Game-Hopping Proofs
Cryptographic proofs allow researchers to provide theoretical guarantees on the security that their constructions provide. A proof of security can completely eliminate a class of attacks by potential adversaries. Human fallibility, however, means that even a proof reviewed by experts may still hide flaws or outright errors. Proof assistants are software tools built for the purpose of formally verifying each step in a proof, and as such have the potential to prevent erroneous proofs from being published and insecure constructions from being implemented.
Unfortunately, existing tooling for verifying cryptographic proofs has found limited adoption in the cryptographic community, in part due to concerns with ease of use. We present ProofFrog: a new tool for verifying cryptographic game-hopping proofs. ProofFrog is designed with the average cryptographer in mind, using an imperative syntax similar to C for specifying games and a syntax for proofs that closely models pen-and-paper arguments. As opposed to other proof assistant tools which largely operate by manipulating logical formulae, ProofFrog manipulates abstract syntax trees (ASTs) into a canonical form to establish indistinguishable or equivalent behaviour for pairs of games in a user-provided sequence. We also detail the domain-specific language developed for use with the ProofFrog proof engine, the exact transformations it applies to canonicalize ASTs, and case studies of verified proofs. A tool like ProofFrog that prioritizes ease of use can lower the barrier of entry to using computer-verified proofs and aid in catching insecure constructions before they are made public.
Evaluation of Privacy-aware Support Vector Machine (SVM) Learning using Homomorphic Encryption
The requirement for privacy-aware machine learning increases as we continue to use PII (Personally Identifiable Information) in model training. To overcome these privacy issues, we can apply Fully Homomorphic Encryption (FHE) to encrypt data before it is fed into a machine learning model. This involves creating a homomorphic encryption key pair, where the public key encrypts the input data and the private key decrypts the output. However, homomorphic encryption often comes with a performance penalty, and so this paper evaluates the performance overhead of the SVM machine learning technique combined with the OpenFHE homomorphic encryption library, implemented in Python with the scikit-learn library. The experiments cover a range of parameters such as multiplication depth, scale size, first modulus size, security level, batch size, and ring dimension, along with two different SVM models, SVM-Poly and SVM-Linear. Overall, the results show that the two main parameters affecting performance are the ring dimension and the modulus size, and that SVM-Poly and SVM-Linear show similar performance levels.
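Concretely, SVM-Linear classification reduces to an inner product plus a bias, which is the operation a CKKS-based pipeline evaluates under encryption. The following plaintext sketch is our own illustration, not the paper's code; the toy data and names are ours, and the OpenFHE encryption step is omitted:

```python
# Plaintext sketch of the SVM-Linear decision function that a CKKS pipeline
# would evaluate homomorphically: an inner product plus a bias.
import numpy as np
from sklearn import svm

# Toy training data: two linearly separable clusters.
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])

clf = svm.SVC(kernel="linear").fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

# Under CKKS, x would be encrypted and the server would compute the
# encrypted score w.x + b; the sign is extracted only after decryption.
x = np.array([0.8, 0.9])
score = float(np.dot(w, x) + b)

print("homomorphically evaluable score:", score)
print("matches sklearn:", np.isclose(score, clf.decision_function([x])[0]))
```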
Trapdoor Hash Functions and PIR from Low-Noise LPN
Trapdoor hash functions (TDHs) are compressing hash functions with an additional trapdoor functionality: given an encoding key for a function $f$, a hash of an input $x$ together with a (small) input encoding allows one to recover $f(x)$. TDHs are a versatile tool and a useful building block for more complex cryptographic protocols.
In this work, we propose the first TDH construction assuming the (quasi-polynomial) hardness of the LPN problem in the so-called low-noise regime, and the construction achieves a non-trivial compression factor. As an application, we obtain a private information retrieval (PIR) scheme with sublinear communication for a database of size $L$. This is the first PIR scheme with non-trivial communication complexity (asymptotically smaller than $L$) from any code-based assumption.
On the Soundness of Algebraic Attacks against Code-based Assumptions
We study recent algebraic attacks (Briaud-Øygarden EC'23) on the Regular Syndrome Decoding (RSD) problem and the assumptions underlying the correctness of their attacks' complexity estimates. By relating these assumptions to interesting algebraic-combinatorial problems, we prove that they do not hold in full generality. However, we show that they are (asymptotically) true for most parameter sets, supporting the soundness of algebraic attacks on RSD. Further, we prove—without any heuristics or assumptions—that RSD can be broken in polynomial time whenever the number of error blocks times the square of the size of error blocks is larger than 2 times the square of the dimension of the code.
Additionally, we use our methodology to attack a variant of the Learning With Errors problem where each error term lies in a fixed set of constant size. We prove that this problem can be broken in polynomial time, given a sufficient number of samples. This result improves on the seminal work by Arora and Ge (ICALP'11), as the attack's time complexity is independent of the LWE modulus.
Deimos Cipher: A High-Entropy, Secure Encryption Algorithm with Strong Diffusion and Key Sensitivity
Deimos Cipher is a symmetric encryption algorithm designed to achieve high entropy, strong diffusion, and computational efficiency. It integrates HKDF with BLAKE2b for key expansion, ensuring secure key derivation from user-supplied passwords. The encryption process employs XChaCha20, a high-speed stream cipher, to provide strong security and resistance against nonce reuse attacks. To guarantee data integrity and authentication, HMAC-SHA256 is used, preventing unauthorized modifications.
Security evaluations demonstrate that Deimos Cipher exhibits superior randomness, achieving 6.24 bits per byte entropy for short plaintexts and 7.9998 bits per byte for long plaintexts, surpassing industry standards like AES and ChaCha20. Avalanche Effect analysis confirms optimal diffusion, with 50.18% average bit change, ensuring high resistance to differential cryptanalysis. Additionally, key sensitivity tests reveal 50.54% ciphertext change for minimal key variations, making brute-force and key-recovery attacks impractical.
With its combination of a robust key expansion mechanism, stream cipher encryption, and cryptographic authentication, Deimos Cipher offers a secure and efficient encryption scheme suitable for secure messaging, cloud data protection, and high-security environments. This paper presents the algorithm’s design, security analysis, and benchmarking against established cryptographic standards.
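For illustration, the composition described above (HKDF with BLAKE2b for key expansion, XChaCha20 for encryption, HMAC-SHA256 for authentication) can be sketched in a few lines of Python. This is a minimal sketch of the design, not the authors' implementation; the salt/nonce sizes and the info label are our assumptions, and XChaCha20 is obtained from PyCryptodome by passing a 24-byte nonce:

```python
# Minimal sketch of the HKDF(BLAKE2b) + XChaCha20 + HMAC-SHA256 composition.
import hmac, hashlib, os
from Crypto.Cipher import ChaCha20  # PyCryptodome; a 24-byte nonce selects XChaCha20

def hkdf_blake2b(password: bytes, salt: bytes, info: bytes, length: int = 64) -> bytes:
    prk = hmac.new(salt, password, hashlib.blake2b).digest()  # HKDF-Extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                   # HKDF-Expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.blake2b).digest()
        okm += block
        counter += 1
    return okm[:length]

def encrypt(password: bytes, plaintext: bytes) -> bytes:
    salt, nonce = os.urandom(16), os.urandom(24)
    keys = hkdf_blake2b(password, salt, b"deimos-sketch")
    enc_key, mac_key = keys[:32], keys[32:]
    ct = ChaCha20.new(key=enc_key, nonce=nonce).encrypt(plaintext)
    tag = hmac.new(mac_key, salt + nonce + ct, hashlib.sha256).digest()
    return salt + nonce + ct + tag

def decrypt(password: bytes, blob: bytes) -> bytes:
    salt, nonce, ct, tag = blob[:16], blob[16:40], blob[40:-32], blob[-32:]
    keys = hkdf_blake2b(password, salt, b"deimos-sketch")
    enc_key, mac_key = keys[:32], keys[32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, salt + nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return ChaCha20.new(key=enc_key, nonce=nonce).decrypt(ct)

assert decrypt(b"pw", encrypt(b"pw", b"hello")) == b"hello"
```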
Garblet: Multi-party Computation for Protecting Chiplet-based Systems
The introduction of shared computation architectures assembled from heterogeneous chiplets introduces new security threats. Due to the shared logical and physical resources, an untrusted chiplet can act maliciously to surreptitiously probe the data communication between chiplets or sense the computation shared between them. This paper presents Garblet, the first framework to leverage the flexibility offered by chiplet technology and Garbled Circuits (GC)-based MPC to enable efficient, secure computation even in the presence of potentially compromised chiplets. Our approach integrates a customized hardware Oblivious Transfer (OT) module and an optimized evaluator engine into chiplet-based platforms. This configuration distributes the tasks of garbling and evaluating circuits across two chiplets, reducing communication costs and enhancing computation speed. We implement this framework on an AMD/Xilinx UltraScale+ multi-chip module and demonstrate its effectiveness using benchmark functions. Additionally, we introduce a novel circuit decomposition technique that allows for parallel processing across multiple chiplets to further improve computational efficiency. Our results highlight the potential of chiplet systems for accelerating GC (e.g., garbling AES takes 0.0226 ms) in order to guarantee the security and privacy of computation on chiplets.
Multi-Authority Functional Encryption: Corrupt Authorities, Dynamic Collusion, Lower Bounds, and More
Decentralization is a great enabler for the adoption of modern cryptography in real-world systems. The widespread adoption of blockchains and secure multi-party computation protocols are prime examples of the dramatic rise in deployment of decentralized cryptographic systems. Much of cryptographic research can be viewed as reducing (or eliminating) the dependence on trusted parties while shielding against stronger adversarial threats. In this work, we study the problem of multi-authority functional encryption (MAFE), a popular decentralized generalization of functional encryption (FE). Our main contributions are:
1. We design MAFE for all poly-sized circuits, in the bounded collusion model, under the minimal assumption of PKE/OWFs. Prior to our work, this required either sub-exponentially secure obfuscation, or multiparty key exchange, or Random Oracles and sub-exponentially secure PKE. We also extend our constructions to the dynamic collusion model under the minimal assumptions of IBE/OWFs. Unlike all prior works, our MAFE systems are truly dynamic and put no restrictions on the maximum number of authorities.
2. Under the hardness of the learning with errors (LWE) assumption, we design MAFE for all poly-sized circuits where we allow adversaries to adaptively corrupt local authorities, as long as the number of corrupted authorities is polynomially bounded. Prior to this, such MAFE relied on sub-exponentially secure obfuscation. Additionally, we design a new MAFE compiler for boosting selective authority corruptions to non-adaptive authority corruptions.
3. We prove a tight implication from MAFE to (VBB/indistinguishability) obfuscation. We show that MAFE implies obfuscation only if the number of attribute bits (jointly) controlled by all corrupt local authorities is sufficiently large. This proves the optimality of our second result for a wide range of parameters.
4. Finally, we propose a new MAFE system that we refer to as multi-authority attribute-based functional encryption (MA-ABFE). We view it as an approach to get the best of both worlds (fully collusion-resistant MA-ABE and bounded collusion-resistant MAFE). By combining our results with prior MA-ABE results, we obtain MA-ABFE from standard pairing-based assumptions as well as from LWE, both in the Random Oracle Model. We also describe a simple construction of MA-ABE for general predicates from witness encryption, and combining it with known results, we also get MA-ABFE from evasive LWE.
Security of the Ascon Authenticated Encryption Mode in the Presence of Quantum Adversaries
We examine the post-quantum security of the Ascon authenticated encryption (AE) mode. In spite of comprehensive research on Ascon's classical security, the potential impact of quantum adversaries on Ascon has not yet been explored much. We investigate the generic security of the Ascon AE mode in the setting where the adversary owns a quantum computer to improve its attack, while the adversarial encryption or decryption queries are still classical. In this so-called Q1 model, we prove that Ascon achieves security up to a number of evaluations governed by the capacity $c$ and the key size $k$, where the adversary is block-wise adaptive but restricted to one forgery attempt. Our technique is based on applying the semi-classical one-way to hiding (O2H) lemma and on tailoring the puncture set to the Ascon mode.
Additionally, we discuss different parameter choices for Ascon and compare our results to generic quantum attacks, such as Grover-based key search and state recovery.
TreeKEM: A Modular Machine-Checked Symbolic Security Analysis of Group Key Agreement in Messaging Layer Security
The Messaging Layer Security (MLS) protocol standard proposes a novel tree-based protocol that enables efficient end-to-end encrypted messaging over large groups with thousands of members. Its functionality can be divided into three components: TreeSync for authenticating and synchronizing group state, TreeKEM for the core group key agreement, and TreeDEM for group message encryption. While previous works have analyzed the security of abstract models of TreeKEM, they do not account for the precise low-level details of the protocol standard. This work presents the first machine-checked security proof for TreeKEM. Our proof is in the symbolic Dolev-Yao model and applies to a bit-level precise, executable, interoperable specification of the protocol. Furthermore, our security theorem for TreeKEM composes naturally with a previous result for TreeSync to provide a strong modular security guarantee for the published MLS standard.
Low Communication Threshold FHE from Standard (Module-)LWE
Threshold fully homomorphic encryption (ThFHE) is an extension of FHE that can be applied to multiparty computation (MPC) with low round complexity. Recently, Passelègue and Stehlé (Asiacrypt 2024) presented a simulation-secure ThFHE scheme with polynomially small decryption shares from “yet another” learning with errors assumption (LWE), in which the norm of the secret key is leaked to the adversary. While “yet another” LWE is reduced from standard LWE, its module variant, “yet another” module-LWE (MLWE), lacks a known reduction from standard MLWE. Because of this, it is left as an open question to extend their scheme to the MLWE-based construction.
In this paper, we address this open problem: we propose a simulation-secure ThFHE scheme with polynomially small decryption shares whose security is (directly) reduced from standard LWE/MLWE. Our core technique, which we call “noise padding”, eliminates the need for “yet another” assumptions: we distribute shares of a small error and use them to adjust the distribution of decryption noise so that no information about the secret key is leaked. As side benefits of our construction, our ThFHE efficiently realizes arbitrary T-out-of-N threshold decryption via simple Shamir secret sharing instead of {0, 1}-linear secret sharing. Furthermore, the sizes of keys, ciphertexts, and decryption shares in our scheme are constant w.r.t. the number of parties N; that is, we achieve compactness w.r.t. N.
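For context, the T-out-of-N structure invoked above is textbook Shamir secret sharing. The toy sketch below (demo prime and parameters are ours; everything specific to the ThFHE scheme is omitted) shows sharing and Lagrange reconstruction:

```python
# Textbook T-out-of-N Shamir secret sharing over a prime field: the threshold
# structure the scheme above plugs into its decryption shares. Toy sketch only.
import random

P = 2**61 - 1  # a Mersenne prime; fine for a demo field

def share(secret: int, t: int, n: int):
    # Random degree-(t-1) polynomial with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0; inverses via Fermat's little theorem.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert reconstruct(shares[2:5]) == 123456789
```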
Hybrid Obfuscated Key Exchange and KEMs
Hiding the metadata in Internet protocols serves to protect user privacy, dissuade traffic analysis, and prevent network ossification. Fully encrypted protocols require even the initial key exchange to be obfuscated: a passive observer should be unable to distinguish a protocol execution from an exchange of random bitstrings. Deployed obfuscated key exchanges such as Tor's pluggable transport protocol obfs4 are Diffie–Hellman-based, and rely on the Elligator encoding for obfuscation. Recently, Günther, Stebila, and Veitch (CCS '24) proposed a post-quantum variant pq-obfs, using a novel building block called obfuscated key encapsulation mechanisms (OKEMs): KEMs whose public keys and ciphertexts look like random bitstrings.
For transitioning real-world protocols, pure post-quantum security is not enough. Many are taking a hybrid approach, combining traditional and post-quantum schemes to hedge against security failures in either component. While hybrid KEMs are already widely deployed (e.g., in TLS 1.3), existing hybridization techniques fail to provide hybrid obfuscation guarantees for OKEMs. Further, even if a hybrid OKEM existed, the pq-obfs protocol would still not achieve hybrid obfuscation.
In this work, we address these challenges by presenting the first OKEM combiner that achieves hybrid IND-CCA security with hybrid ciphertext obfuscation guarantees, and using this to build Drivel, a modification of pq-obfs that is compatible with hybrid OKEMs. Our OKEM combiner allows for a variety of practical instantiations, e.g., combining obfuscated versions of DHKEM and ML-KEM. We additionally provide techniques to achieve unconditional public key obfuscation for LWE-based OKEMs, and explore broader applications of hybrid OKEMs, including a construction of the first hybrid password-authenticated key exchange (PAKE) protocol secure against adaptive corruptions in the UC model.
Delegatable ABE with Delegations from Witness Encryption
Delegatable Attribute-Based Encryption (DABE) is a well-known generalization of ABE, proposed to mirror organizational hierarchies. We design a DABE scheme from witness encryption and other simple assumptions. Our construction does not rely on Random Oracles, and we provide a black-box reduction to the polynomial hardness of the underlying assumptions.
AsyRand: asynchronous distributed randomness beacon with reconfiguration
Distributed randomness beacon protocols, which continuously generate publicly verifiable randomness values, are crucial for many applications. Recently, there have been many approaches based on publicly verifiable secret sharing (PVSS), such as Hydrand (S&P'20), SPURT (S&P'22), OptRand (NDSS'23) and GRandLine (CCS'24), to implement beacon protocols. However, two key challenges remain unresolved: asynchrony and reconfiguration. In this paper, we propose AsyRand, a new beacon protocol, to address these challenges. We incorporate a producer-consumer model to decouple the distribution and reconstruction of PVSS secrets. Parties continuously produce and distribute new PVSS commitments, consisting of encrypted shares and proofs. Meanwhile, all parties store received commitments in first-in-first-out queues and collectively consume each commitment to recover the corresponding secret for beacon generation. To achieve asynchronous consensus, we employ reliable broadcast for distribution and apply multi-valued validated asynchronous Byzantine agreement for reconstruction. To achieve reconfiguration, honest parties can collectively remove a faulty party if its queue remains empty for an extended duration, and a new party can join the system using reliable broadcast. We also introduce a novel PVSS scheme based on a Sigma protocol and the Fiat-Shamir heuristic, which is of independent interest. Consequently, AsyRand maintains state-of-the-art communication, computation, and verification complexity while achieving asynchrony and reconfiguration. Experimental results highlight the performance of AsyRand compared to existing works.
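For intuition, the producer-consumer queue structure described above can be sketched in a few lines. This is our own toy, single-process illustration; all names are ours, and PVSS, consensus, and networking are stubbed out:

```python
# Toy sketch of the producer-consumer model: each party keeps one FIFO queue
# per producer, appends (produces) incoming PVSS commitments after reliable
# broadcast, and all parties consume head commitments for beacon generation.
from collections import deque

N = 4
queues = {party: deque() for party in range(N)}   # one FIFO per producer

def produce(party: int, commitment: str):
    queues[party].append(commitment)              # delivered via reliable broadcast

def consume_round() -> list:
    # One commitment per non-empty queue is consumed each round; a queue that
    # stays empty for long marks its owner as a candidate for removal.
    return [queues[p].popleft() for p in range(N) if queues[p]]

for r in range(3):
    for p in range(N - 1):                        # party N-1 is "faulty": produces nothing
        produce(p, f"commit({p},{r})")
print(consume_round())                            # commitments consumed this round
print("suspected faulty:", [p for p in range(N) if not queues[p]])
```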
Withdrawable signatures in Fiat-Shamir with aborts constructions
This article extends the work of Liu, Baek and Susilo on withdrawable signatures to the Fiat-Shamir with aborts paradigm. We introduce an abstract construction and provide security proofs for this proposal. As an instantiation, we provide a concrete construction of a withdrawable signature scheme based on Dilithium.
SNARKs for Stateful Computations on Authenticated Data
We present a new generalization of (zk-)SNARKs specifically designed for the application domain of safety-critical control systems. These need to be protected against adversarial tampering as well as non-malicious but unintended system failures due to random faults in components. Our SNARKs combine two additional features at the same time. Besides the verification of correct computation, they also allow, first, the verification of input data authenticity. Specifically, a verifier can confirm that the input to the computation originated from a trusted source. Second, our SNARKs support verification of stateful computations across multiple rounds, ensuring that the output of the current round correctly depends on the internal state of the previous round. Our focus is on concrete practicality, so we abstain from arithmetizing hash functions or signatures in our SNARKs. Rather, we modify the internals of an existing SNARK to extend its functionality. We implement and benchmark our new SNARKs in a sample scenario of a real-time high-integrity flight control system.
With our construction, prover runtime improves significantly over the baseline, by a factor of 90. Verification time increases by 36%, but remains lower than that of comparable approaches that do not arithmetize hash functions or signatures.
Unlocking Mix-Basis Potential: Geometric Approach for Combined Attacks
This paper explores the possibility of using different bases in Beyne's geometric approach, a flexibility that was theoretically proposed in Beyne's doctoral thesis but has not been adopted in real cryptanalytic attacks despite its potential to unify multiple attack paradigms.
We revisit three bases from previous geometric approach papers and extend them to four extra ones determined by simple rules. With the final seven bases, we can obtain different basis-based attacks in higher-order spaces, where the order is defined as the number of messages used in one sample during the attack. All these attacks can be studied with unified automatic search methods.
We provide several demonstrative applications of this framework.
First, we show that by choosing an alternative pair of bases, the divisibility property analyzed by Beyne and Verbauwhede with ultrametric integral cryptanalysis (ASIACRYPT 2024) can be interpreted as a single element of the transition matrix rather than as a linear combination of its elements; thus, the property can be studied in a unified way, like other geometric approach applications.
Second, we revisit the multiple-of-8 property (EUROCRYPT 2017) under our new framework and present new distinguishers of this type for SKINNY-64 that surpass the state-of-the-art results, from the perspectives of both first-order and second-order attacks.
Finally, we give a closed formula for differential-linear approximations without any assumptions, even confirming that the two differential-linear approximations of SIMECK-32 and SIMECK-48 found by Hadipour et al. are deterministic, independently of the concrete key values.
Mix-Basis Geometric Approach to Boomerang Distinguishers
Differential cryptanalysis relies on assumptions like Markov ciphers and the hypothesis of stochastic equivalence. The probability of a differential characteristic estimated by classical methods is the key-averaged probability under these two assumptions. However, the real probability can vary significantly between keys. Hence, tools for differential cryptanalysis in the fixed-key model are desirable. Recently, Beyne and Rijmen applied the geometric approach to differential cryptanalysis and proposed a systematic framework called the quasi-differential framework (CRYPTO 2022).
As a variant of differential cryptanalysis, boomerang attacks rely on similar assumptions, so it is important to study their probability in the fixed-key model as well. A direct extension of the quasi-differential framework to boomerang attacks leads to a quasi-differential-style boomerang framework (TIT 2024). However, such a straightforward approach is difficult to use in practical applications because there are too many trails to consider.
We tackle this problem by applying the mix-basis style geometric approach (CRYPTO 2025) to boomerang attacks and construct the quasi-boomerang framework. By choosing a suitable pair of bases, the boomerang probability can be computed by summing correlations of quasi-boomerang characteristics. The transition matrix of the key-XOR operation is also a diagonal matrix; thus, the influence of keys can be analyzed in a similar way as in the quasi-differential framework.
We apply the quasi-boomerang framework to SKINNY-64 and GIFT-64. For SKINNY-64, we check and confirm 4 high-probability boomerang distinguishers (two of which hold with probability 1) generated from Hadipour, Bagheri, and Song's tool (ToSC 2021/1), through the analysis of key dependencies and the probability calculation from quasi-boomerang characteristics. We also propose a divide-and-conquer approach following the sandwich framework to apply the quasi-boomerang framework to boomerangs with small probability or covering many rounds. After checking two boomerang distinguishers of SKINNY-64 and one of GIFT-64, we find that the 19-round distinguisher of GIFT-64, previously considered invalid, is in fact valid.
In addition, as a contribution of independent interest, we revisit Boura, Derbez, and Germon's work extending the quasi-differential framework to the related-key scenario (ToSC 2025/1), and show an alternative way to derive the same formulas as in their paper by regarding the key-XOR as a normal cipher component.
PEGASIS: Practical Effective Class Group Action using 4-Dimensional Isogenies
In this paper, we present the first practical algorithm to compute an effective group action of the class group of any imaginary quadratic order on a set of supersingular elliptic curves primitively oriented by that order. Effective means that we can act directly with any element of the class group, and are not restricted to acting by products of ideals of small norm, as for instance in CSIDH. Such restricted effective group actions often hamper cryptographic constructions, e.g., in signature or MPC protocols.
Our algorithm is a refinement of the Clapoti approach by Page and Robert and uses 4-dimensional isogenies. As such, it runs in polynomial time and requires neither the computation of the structure of the class group nor expensive lattice reductions, and our refinements allow it to be instantiated with the orientation given by the Frobenius endomorphism. This makes the algorithm practical even at security levels as high as CSIDH-4096. Our implementation in SageMath takes 1.5 s to compute a group action at the CSIDH-512 security level, 21 s at the CSIDH-2048 level, and around 2 minutes at the CSIDH-4096 level. This marks the first instantiation of an effective cryptographic group action at such high security levels. For comparison, the recent KLaPoTi approach requires around 200 s at the CSIDH-512 level in SageMath and 2.5 s in Rust.
Re-Randomize and Extract: A Novel Commitment Construction Framework Based on Group Actions
Cryptographic group actions have attracted growing attention as a useful tool for constructing cryptographic schemes.
Among their applications, commitment schemes are particularly interesting as fundamental primitives, playing a crucial role in protocols such as zero-knowledge proofs, multi-party computation, and more.
In this paper, we introduce a novel framework to construct commitment schemes based on cryptographic group actions.
Specifically, we propose two key techniques for general group actions: re-randomization and randomness extraction.
Roughly speaking, a re-randomization algorithm introduces randomness within an orbit for any input element, while a randomness extractor maps this randomness to uniformity over the message space.
We demonstrate that these techniques can significantly facilitate the construction of commitment schemes, providing a flexible framework for constructing either perfectly hiding or perfectly binding commitments, depending on the type of extractor involved.
Moreover, we extend our framework to support the construction of commitments with additional desirable properties beyond hiding and binding, such as dual-mode commitments and enhanced linkable commitments.
These extensions are achieved by further adapting the extractor to satisfy trapdoor or homomorphic properties.
Finally, we instantiate all our proposed commitment schemes using lattices, specifically leveraging the lattice isomorphism problem (LIP) and the lattice automorphism problem (LAP) as underlying cryptographic assumptions.
To the best of our knowledge, this is the first commitment scheme construction based on LIP/LAP.
Additionally, we use LIP to repair and improve the tensor isomorphism-based non-interactive commitment scheme proposed by D'Alconzo, Flamini, and Gangemi (ASIACRYPT 2023), which was recently shown to be insecure by an attack of Gilchrist, Marco, Petit, and Tang (CRYPTO 2024).
Computational Quantum Anamorphic Encryption and Anamorphic Secret Sharing
The concept of anamorphic encryption, first formally introduced by Persiano et al. in their influential 2022 paper ``Anamorphic Encryption: Private Communication Against a Dictator,'' enables embedding covert messages within ciphertexts. The key requirement is that an anamorphic ciphertext, which embeds a covert message, be indistinguishable from an original ciphertext. This encryption procedure was originally defined for public-key cryptosystems. First, we present a quantum analogue of the classical anamorphic encryption definition based on public-key encryption. Additionally, we introduce a definition of quantum anamorphic encryption that relies on symmetric-key encryption. Furthermore, we provide a detailed generalized construction of quantum anamorphic symmetric-key encryption in a general framework, which takes any two quantum density matrices of possibly different dimensions and constructs a single quantum density matrix, the quantum anamorphic ciphertext, containing ciphertexts of both of them. Subsequently, we introduce a definition of computational anamorphic secret sharing and extend the work of Çakan et al. on computational quantum secret sharing to computational quantum anamorphic secret sharing, specifically addressing scenarios with multiple messages, multiple keys, and a single share function. The proposed secret-sharing scheme is secure against quantum adversaries.
Tight Adaptive Simulation Security for Identity-based Inner-Product FE in the (Quantum) Random Oracle Model
Abdalla et al. (ASIACRYPT 2020) introduced the notion of identity-based inner-product functional encryption (IBIPFE), which combines identity-based encryption and inner-product functional encryption (IPFE). Thus far, several pairing-based and lattice-based IBIPFE schemes have been proposed. However, two open problems remain. First, there are no known IBIPFE schemes that satisfy adaptive simulation-based security. Second, known IBIPFE schemes that satisfy adaptive indistinguishability-based security or selective simulation-based security do not have tight reductions. In this paper, we propose lattice-based and pairing-based IBIPFE schemes that satisfy tight adaptive simulation-based security. First, we propose a generic transformation from an indistinguishability-based secure (IB)IPFE scheme to a simulation-based secure (IB)IPFE scheme of slightly smaller dimension. The proposed transformation improves Agrawal et al.'s transformation for plain IPFE (PKC 2020), which requires an indistinguishability-based secure scheme of larger dimension. Then, we construct a lattice-based IBIPFE scheme that satisfies tight adaptive indistinguishability-based security under the LWE assumption in the quantum random oracle model. We apply the proposed transformation and obtain the first lattice-based IBIPFE scheme that satisfies adaptive simulation-based security. Finally, we construct a pairing-based IBIPFE scheme that satisfies tight adaptive simulation-based security under the DBDH assumption in the random oracle model. The pairing-based scheme does not use the proposed transformation, for the best efficiency.
Blind Signatures from Cryptographic Group Actions
We provide a generic construction of blind signatures from cryptographic group actions, following the framework of the blind signature CSIOtter introduced by Katsumata et al. (CRYPTO'23) in the context of isogenies (commutative group actions). We adapt and modify that framework to make it work even for non-commutative group actions. As a result, we obtain a blind signature from abstract group actions which is proven secure in the random oracle model. We also propose an instantiation based on a variant of the linear code equivalence problem, interpreted as a symmetric group action.
Trail-Estimator: An Automated Verifier for Differential Trails in Block Ciphers
Differential cryptanalysis is a powerful technique for attacking block ciphers, wherein the Markov cipher assumption and stochastic hypothesis are commonly employed to simplify the search and probability estimation of differential trails. However, these assumptions often neglect inherent algebraic constraints, potentially resulting in invalid trails and inaccurate probability estimates. Some studies identified violations of these assumptions and explored how they impose constraints on key material, but they have not yet fully captured all relevant ones. This study proposes Trail-Estimator, an automated verifier for differential trails on block ciphers, consisting of two parts: a constraint detector Cons-Collector and a solving tool Cons-Solver. We first establish the fundamental principles that will allow us to systematically identify all constraint subsets within a differential trail, upon which Cons-Collector is built. Then, Cons-Solver utilizes specialized preprocessing techniques to efficiently solve the detected constraint subsets, thereby determining the key space and providing a comprehensive probability distribution of differential trails. To validate its effectiveness, Trail-Estimator is applied to verify 14 differential trails for the SKINNY, LBLOCK, and TWINE block ciphers. Experimental results show that Trail-Estimator consistently identifies previously undetected constraints for SKINNY and discovers constraints for the first time for LBLOCK and TWINE. Notably, it is the first tool to discover long nonlinear constraints extending beyond five rounds in these ciphers. Furthermore, Trail-Estimator's accuracy is validated by experiments showing its predictions closely match the real probability distribution of short-round differential trails.
Provably Secure Approximate Computation Protocols from CKKS
Secure multi-party computation (MPC) enables collaborative, privacy-preserving computation over private inputs. Advances in homomorphic encryption (HE), particularly the CKKS scheme, have made secure computation practical, making it well-suited for real-world applications involving approximate computations. However, the inherent approximation errors in CKKS present significant challenges in developing MPC protocols.
This paper investigates the problem of secure approximate MPC from CKKS. We first analyze CKKS-based protocols in the two-party setting. When only one party holds a private input and the other party acts as an evaluator, a simple protocol with the noise smudging technique on the encryptor's side achieves security in the standard manner. When both parties have private inputs, we demonstrate that a protocol incorporating independent errors from each party achieves a relaxed standard security notion, referred to as liberal security. Nevertheless, such a protocol fails to satisfy the standard security definition. To address this limitation, we propose a novel protocol that employs a distributed sampling approach to generate smudging noise in a secure manner, which satisfies the standard security definition.
Finally, we extend the two-party protocols to the multi-party setting. Since the existing threshold CKKS-based MPC protocol only satisfies liberal security, we present a novel multi-party protocol achieving standard security by applying multi-party distributed sampling of a smudging error.
For all the proposed protocols, we formally define the functionalities and provide rigorous security analysis within the simulation-based security framework. To the best of our knowledge, this is the first work to explicitly define the functionality of CKKS-based approximate MPC and achieve formal security guarantees.
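For intuition, the noise smudging technique referenced throughout can be illustrated numerically: a bounded decryption error is statistically hidden by adding uniform noise from an exponentially larger range. The parameters in this toy sketch are ours, chosen only for illustration:

```python
# Toy numerical illustration of noise smudging: a bounded error e (|e| <= B)
# is statistically hidden by uniform noise from a range far larger than B.
import random

B = 2**10          # bound on the CKKS-style decryption error
SMUDGE = 2**50     # smudging range, exponentially larger than B

def smudged_share(e: int) -> int:
    return e + random.randrange(-SMUDGE, SMUDGE + 1)

print("example smudged share:", smudged_share(7))
# The statistical distance between smudged(e1) and smudged(e2) is at most
# |e1 - e2| / (2*SMUDGE + 1) <= 2*B / (2*SMUDGE + 1) -- negligible here.
print("max statistical distance <=", 2 * B / (2 * SMUDGE + 1))
```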
Reducing the Number of Qubits in Solving LWE
At Crypto 2021, May presented an algorithm for solving the ternary Learning-With-Errors problem, where the solution is a ternary vector with a known number of 1 and -1 entries. This attack significantly improved the time complexity over previously known algorithms, measured as a function of the size of the key space. To this end, May exploited the fact that using more representations, i.e., allowing ternary interim results with additional 1 and -1 entries, reduces the overall time complexity.
Later, van Hoof et al. (PQCrypto 2021) combined May's algorithm with quantum walks into a new attack with improved running time. However, this quantum attack requires an exponential number of qubits. This work investigates whether the ternary LWE problem can also be solved with far fewer qubits. To this end, we look closely into Dicke states, which are equal superpositions over all binary vectors with a fixed Hamming weight. Generalizing Dicke states to ternary vectors makes these states applicable to the ternary LWE problem.
Bärtschi and Eidenbenz (FCT 2019) proposed a quantum circuit that deterministically prepares binary Dicke states in linear time. Their procedure benefits from the inductive structure of Dicke states, i.e., a Dicke state of a particular dimension can be built from Dicke states of lower dimensions. Our work proves that this inductive structure is also present in generalized Dicke states with an underlying set other than {0, 1}. Utilizing this structure, we introduce a new algorithm that deterministically prepares generalized Dicke states in linear time, for which we also provide an implementation in Qiskit.
Finally, we apply our generalized Dicke states to the ternary LWE problem and construct an algorithm that requires significantly fewer qubits at the cost of classical memory, and we analyze the best obtainable time complexity.
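For concreteness, a Dicke state is simply the equal superposition over all fixed-weight strings, and the ternary generalization fixes the counts of each non-zero symbol. The sketch below is our illustration: it builds the statevector directly and does not reproduce the paper's linear-time circuit construction:

```python
# Direct statevector construction of Dicke states: the equal superposition
# over all length-n strings with a fixed weight profile, including the
# ternary generalization (fixed numbers of 1-entries and 2-entries).
from itertools import product
from math import sqrt

def dicke(n, k, alphabet=(0, 1)):
    """For binary states, k is the Hamming weight; for the ternary case pass
    alphabet=(0, 1, 2) and k=(number of 1s, number of 2s)."""
    hits = []
    for idx, s in enumerate(product(range(len(alphabet)), repeat=n)):
        if len(alphabet) == 2:
            ok = sum(s) == k
        else:
            ok = (s.count(1), s.count(2)) == tuple(k)
        if ok:
            hits.append(idx)
    state = [0.0] * len(alphabet) ** n
    amp = 1.0 / sqrt(len(hits))          # equal amplitude on every hit
    for idx in hits:
        state[idx] = amp
    return state

binary = dicke(4, 2)                     # |D^4_2>: C(4,2) = 6 basis states
ternary = dicke(3, (1, 1), (0, 1, 2))    # one '1' and one '2' among 3 trits
print(sum(a > 0 for a in binary), sum(a > 0 for a in ternary))  # 6 6
```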
An Efficient Quantum Oblivious Transfer Protocol
Oblivious Transfer (OT) is an important two-party privacy-preserving cryptographic primitive. OT involves a sender holding several pieces of information and a receiver holding a choice bit, which indicates the piece of information that the receiver wants to obtain as the output of the OT. At the end of the protocol, the sender remains oblivious to the choice bit, and the receiver remains oblivious to the contents of the information that was not chosen. OT has applications ranging from secure multi-party computation and privacy-preserving protocols to cryptographic protocols for secure communication. Most classical OT protocols are based on number-theoretic assumptions that are not quantum-secure, while existing quantum OT protocols are not very efficient or practical. Herein, we present the design and analysis of a simple yet efficient quantum OT protocol, namely qOT. qOT uses the asymmetric key distribution proposed by Gao et al. [18] as a building block. It requires only single photons as a source of quantum states, and the states are measured using single-particle projective measurements. These properties make qOT efficient and practical. Our proposed design is secure against quantum attacks and, moreover, provides long-term security.
Blockchain-based Secure D2D localisation with adaptive precision
In this paper we propose a secure best-effort methodology for providing localisation information to devices in a heterogeneous network where devices do not have access to GPS-like technology or heavy cryptographic infrastructure. Each device computes its localisation with the highest possible accuracy based solely on the data provided by its neighbouring anchors. The security of the localisation is guaranteed by registering the localisation information on a distributed ledger via smart contracts. We prove the security of our solution in the adaptive chosen message attack model. We furthermore evaluate the effectiveness of our solution by measuring the average location registration time, failed requests, and total execution time, using Hyperledger Besu with QBFT consensus as a DLT case study.
Monotone-Policy BARGs and More from BARGs and Quadratic Residuosity
A tuple of NP statements $(x_1, \dots, x_k)$ satisfies a monotone policy $P \colon \{0,1\}^k \to \{0,1\}$ if $P(b_1, \dots, b_k) = 1$, where $b_i = 1$ if and only if $x_i$ is in the NP language. A monotone-policy batch argument (monotone-policy BARG) for NP is a natural extension of regular batch arguments (BARGs) that allows a prover to prove that $x_1, \dots, x_k$ satisfy a monotone policy with a proof of size $\mathrm{poly}(\lambda, |C|, \log k)$, where $|C|$ is the size of the Boolean circuit computing the NP relation.
Previously, Brakerski, Brodsky, Kalai, Lombardi, and Paneth (CRYPTO 2023) and Nassar, Waters, and Wu (TCC 2024) showed how to construct monotone-policy BARGs from (somewhere-extractable) BARGs for NP together with a leveled homomorphic encryption scheme (Brakerski et al.) or an additively homomorphic encryption scheme over a sufficiently large group (Nassar et al.). In this work, we improve upon both works by showing that BARGs together with additively homomorphic encryption over any group suffice (e.g., over $\mathbb{Z}_2$). For instance, we can instantiate the additively homomorphic encryption with the classic Goldwasser-Micali encryption scheme based on the quadratic residuosity (QR) assumption. Then, by appealing to existing compilers, we also obtain a monotone-policy aggregate signature scheme from any somewhere-extractable BARG and the QR assumption.
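For illustration, the Goldwasser-Micali scheme mentioned above provides exactly the needed property: bit encryption that is additively homomorphic (XOR of plaintexts via multiplication of ciphertexts) under the QR assumption. The toy sketch below uses demo-sized primes of our choosing and is not a secure instantiation:

```python
# Toy Goldwasser-Micali: additively homomorphic bit encryption from QR.
# p, q = 3 (mod 4) so that x = N-1 = -1 is a non-residue mod p and mod q.
import random

p, q = 499, 547            # demo parameters only, far too small for security
N = p * q
x = N - 1                  # pseudosquare: Jacobi symbol +1, but not a QR

def enc(bit: int) -> int:
    r = random.randrange(2, N)
    while r % p == 0 or r % q == 0:
        r = random.randrange(2, N)
    return (r * r * pow(x, bit, N)) % N

def dec(c: int) -> int:
    # c is a QR mod N iff it is a QR mod p; test via Euler's criterion.
    return 0 if pow(c % p, (p - 1) // 2, p) == 1 else 1

a, b = 1, 1
c = (enc(a) * enc(b)) % N  # ciphertext multiplication = plaintext XOR
assert dec(c) == a ^ b
print("Dec(Enc(a)*Enc(b)) =", dec(c), "= a XOR b")
```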
Lattice-Based Post-Quantum iO from Circular Security with Random Opening Assumption (Part II: zeroizing attacks against private-coin evasive LWE assumptions)
Indistinguishability obfuscation (iO) stands out as a powerful cryptographic primitive but remains notoriously difficult to realize under simple-to-state, post-quantum assumptions. Recent works have proposed lattice-inspired iO constructions backed by new “LWE-with-hints” assumptions, which posit that certain distributions of LWE samples retain security despite auxiliary information. However, subsequent cryptanalysis has revealed structural vulnerabilities in these assumptions, leaving us without any post-quantum iO candidates supported by simple, unbroken assumptions.
Motivated by these proposals, we introduce the \emph{Circular Security with Random Opening} (CRO) assumption—a new LWE-with-hint assumption that addresses structural weaknesses from prior assumptions, and based on our systematic examination, does not appear vulnerable to known cryptanalytic techniques. In CRO, the hints are random ``openings'' of zero-encryptions under the Gentry--Sahai--Waters (GSW) homomorphic encryption scheme. Crucially, these zero-encryptions are efficiently derived from the original LWE samples via a special, carefully designed procedure, ensuring that the openings are marginally random. Moreover, the openings do not induce any natural leakage on the LWE noises.
These two features---marginally random hints and the absence of (natural) noise leakage---rule out important classes of attacks that had undermined all previous LWE-with-hint assumptions for iO. Therefore, our new lattice-based assumption for iO provides a qualitatively different target for cryptanalysis compared to existing assumptions.
To build iO under this less-structured CRO assumption, we develop several new technical ideas. In particular, we devise an oblivious LWE sampling procedure, which succinctly encodes random LWE secrets and smudging noises, and uses a tailor-made homomorphic evaluation procedure to generate secure LWE samples. Crucially, all non-LWE components in this sampler, including the secrets and noises of the generated samples, are independently and randomly distributed, avoiding attacks on non-LWE components.
In the second part of this work, we investigate recent constructions of obfuscation for pseudorandom functionalities. We show that the same cryptanalytic techniques used to break previous LWE-with-hints assumptions for iO (Hopkins-Jain-Lin CRYPTO 21) can be adapted to construct counterexamples against the private-coin evasive LWE assumptions underlying these pseudorandom obfuscation schemes.
Unlike prior counterexamples for private-coin evasive LWE assumptions, our new counterexamples take the form of zeroizing attacks, contradicting the common belief that evasive-LWE assumptions circumvent zeroizing attacks by restricting to ``evasive'' or pseudorandom functionalities.
An ETSI GS QKD compliant TLS implementation
This paper presents our implementation of the Quantum Key Distribution standard ETSI GS QKD 014 v1.1.1, which required a modification of the Rustls library. We modified the TLS protocol while maintaining backward compatibility on the client and server sides. We thus wish to contribute to the effort to generalize the use of Quantum Key Distribution on the Internet. Finally, we used this library for a video conference call encrypted by QKD.
Fair Exchange for Decentralized Autonomous Organizations via Threshold Adaptor Signatures
A Decentralized Autonomous Organization (DAO) enables multiple parties to collectively manage digital assets in a blockchain setting. We focus on achieving fair exchange between DAOs using a cryptographic mechanism that operates with minimal blockchain assumptions and, crucially, does not rely on smart contracts.
Specifically, we consider a setting where a DAO consisting of sellers holding shares of a witness $w$ interacts with a DAO comprising buyers holding shares of a signing key $sk$; the goal is for the sellers to exchange $w$ for a signature under $sk$ transferring a predetermined amount of funds.
Fairness is required to hold both between DAOs (i.e., each DAO receives its asset if and only if the other does) and within each DAO (i.e., all members of a DAO receive their asset if and only if every other member does).
We formalize these fairness properties and present an efficient protocol for DAO-based fair exchange under standard cryptographic assumptions. Our protocol leverages certified witness encryption and threshold adaptor signatures, two primitives of independent interest that we introduce and show how to construct efficiently.
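For background, the core atomicity property of adaptor signatures, namely that completing a pre-signature necessarily reveals the adaptor secret, can be sketched with a plain (non-threshold) Schnorr adaptor signature over a toy group. The parameters and names below are ours; the paper's threshold, DAO-to-DAO variant is not shown:

```python
# Textbook Schnorr adaptor signature over a toy order-q subgroup of Z_p^*:
# the completed signature reveals the adaptor secret t to the pre-signature
# holder, which is the exchange mechanism the protocol above builds on.
import hashlib, random

q = 1019; p = 2 * q + 1; g = 4           # toy group, not secure parameters

def H(*vals) -> int:
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = random.randrange(1, q); X = pow(g, x, p)   # signing key pair
t = random.randrange(1, q); T = pow(g, t, p)   # adaptor secret / point
m = "pay 10 coins to the sellers"

# Pre-signature: valid only "up to" the adaptor point T.
k = random.randrange(1, q); R = pow(g, k, p)
c = H(R * T % p, X, m)
s_pre = (k + c * x) % q
assert pow(g, s_pre, p) == R * pow(X, c, p) % p      # pre-verification

# Completing with t yields an ordinary Schnorr signature (R*T, s)...
s = (s_pre + t) % q
R_full = R * T % p
assert pow(g, s, p) == R_full * pow(X, H(R_full, X, m), p) % p

# ...and anyone holding the pre-signature extracts t from the final one.
assert (s - s_pre) % q == t
print("adaptor secret recovered:", (s - s_pre) % q == t)
```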
Generic Composition: From Classical to Quantum Security
Authenticated encryption (AE) provides both authenticity and privacy.
Starting with Bellare and Namprempre's work in 2000, the Encrypt-then-MAC composition of an encryption scheme for privacy and a MAC for authenticity has become a well-studied and common approach.
This work investigates the security of the Encrypt-then-MAC composition in a quantum setting, which means that adversarial queries as well as the responses to those queries may be in superposition.
We demonstrate that the Encrypt-then-MAC composition of a chosen-plaintext (IND-qCPA) secure symmetric encryption scheme SE and a plus-one unforgeable MAC fails to achieve chosen-ciphertext (IND-qCCA) security.
On the other hand, we show that it suffices to choose a quantum pseudorandom function (qPRF) as the MAC.
Namely, the Encrypt-then-MAC composition of SE and a qPRF is IND-qCCA secure.
The same holds for the Encrypt-and-MAC composition of SE and a qPRF.
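As a concrete classical reference for the two compositions above, the following minimal sketch instantiates Encrypt-then-MAC and Encrypt-and-MAC with standard-library stand-ins (a toy SHA-256 counter-mode cipher and HMAC-SHA256; these stand-ins are our own illustrative assumptions, not the paper's SE and qPRF, and the sketch models no superposition queries):

    import hashlib, hmac, os

    def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
        # Toy counter-mode keystream from SHA-256; stand-in for an IND-qCPA SE scheme.
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def encrypt_then_mac(ke: bytes, km: bytes, msg: bytes):
        nonce = os.urandom(16)
        ct = nonce + bytes(m ^ k for m, k in zip(msg, keystream(ke, nonce, len(msg))))
        tag = hmac.new(km, ct, hashlib.sha256).digest()   # MAC over the ciphertext
        return ct, tag

    def encrypt_and_mac(ke: bytes, km: bytes, msg: bytes):
        nonce = os.urandom(16)
        ct = nonce + bytes(m ^ k for m, k in zip(msg, keystream(ke, nonce, len(msg))))
        tag = hmac.new(km, msg, hashlib.sha256).digest()  # MAC over the plaintext
        return ct, tag

The paper's point concerns the quantum setting: with only a plus-one unforgeable MAC, the first composition can fail IND-qCCA, whereas choosing the MAC to be a qPRF makes both compositions IND-qCCA secure.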
How Small Can S-boxes Be?
S-boxes are the most popular nonlinear building blocks used in symmetric-key primitives.
Both cryptographic properties and implementation cost of an S-box are crucial for a good cipher design, especially for lightweight ones.
This paper aims to determine the exact minimum area of optimal 4-bit S-boxes (whose differential uniformity and linearity are both 4) under a given standard cell library.
Firstly, we derive upper and lower bounds on the minimum area of S-boxes, by proposing a Prim-like greedy algorithm and utilizing properties of balanced Boolean functions to construct bijective S-boxes.
Secondly, we propose a SAT-aided automatic search tool that can simultaneously consider multiple cryptographic properties, such as differential uniformity, linearity, and algebraic degree, as well as implementation costs such as area and gate depth complexity.
Thirdly, thanks to our tool, we manage to find the exact minimum area for different types of 4-bit S-boxes.
The measurements in this paper use the gate equivalent (GE) as the standard unit under the UMC 180 nm library; all 2-, 3-, and 4-input logic gates are taken into consideration.
Our results show that the minimum area of an optimal 4-bit S-box is 11 GE, with a depth of 3.
If we do not use 4-input gates, this minimum area increases to 12 GE and the depth becomes 4; the same values are obtained if we only use 2-input gates.
If we further require that the S-boxes have no fixed points, the minimum area increases slightly to 12.33 GE while the depth stays the same.
Interestingly, the same results are also obtained for non-optimal 4-bit bijective S-boxes without non-trivial linear structures (a condition on their differential uniformity and linearity), provided only 2-input and 3-input gates are used; the minimum area, however, reduces to 9 GE if 4-input gates are involved.
More strictly, if we require the algebraic degree of all coordinate functions of optimal S-boxes to be 3, the minimum area is 14 GE with fixed points and 14.33 GE without fixed points, and the depth increases sharply to 8.
Besides determining the exact minimum area, our tool is also useful for searching for better implementations of existing S-boxes. As a result, we find an implementation of Keccak's 5-bit S-box with 17 GE. By contrast, the designers' original circuit has an area of 23.33 GE, while the optimized result by Lu et al. achieves an area of 17.66 GE. We also find the first optimized implementation of SKINNY's 8-bit S-box, with 26.67 GE.
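For readers who want to check the optimality criteria above on a concrete S-box, here is a minimal sketch; it uses the PRESENT S-box purely as an illustrative example and the bias convention under which optimal 4-bit S-boxes have linearity 4:

    # PRESENT's 4-bit S-box (illustrative choice; any 4-bit S-box works here).
    SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
            0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

    def differential_uniformity(s):
        # Largest entry of the difference distribution table over nonzero input differences.
        return max(sum(1 for x in range(16) if s[x] ^ s[x ^ dx] == dy)
                   for dx in range(1, 16) for dy in range(16))

    def linearity(s):
        # Largest bias |#{x : a.x = b.S(x)} - 8| over nonzero input/output masks.
        dot = lambda m, v: bin(m & v).count("1") & 1
        return max(abs(sum(1 for x in range(16) if dot(a, x) == dot(b, s[x])) - 8)
                   for a in range(1, 16) for b in range(1, 16))

    print(differential_uniformity(SBOX), linearity(SBOX))  # 4 4, i.e. optimal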
MERCURY: A multilinear Polynomial Commitment Scheme with constant proof size and no prover FFTs
We construct a pairing-based polynomial commitment scheme for multilinear polynomials where constructing an opening proof requires field operations and scalar multiplications. Moreover, the opening proof consists of a constant number of field elements.
This is a significant improvement over previous works, which require either
1. more field operations; or
2. a larger opening proof.
The main technical component is a new method of verifiably folding a witness via univariate polynomial division.
As opposed to previous methods, the proof size and prover time remain constant *regardless of the folding factor*.
Optimizing Final Exponentiation for Pairing-Friendly Elliptic Curves with Odd Embedding Degrees Divisible by 3
In pairing-based cryptography, the final exponentiation with a large fixed exponent is crucial for ensuring unique outputs in both Tate and optimal ate pairings. While significant strides have been made in optimizing elliptic curves with even embedding degrees, progress remains limited for curves with odd embedding degrees, especially those divisible by 3. This paper introduces novel techniques to optimize the computation of the final exponentiation for the optimal ate pairing on such curves. The first technique leverages the structure of certain existing seeds to enable the use of cyclotomic cubing and extends this concept to generate new seeds with similar characteristics.
The second technique focuses on producing new sparse ternary representation seeds to utilize cyclotomic cubing as a replacement for squaring. These approaches result in performance improvements in the computation of the final exponentiation for the optimal ate pairing on such curves.
Pencil: A Domain-Extended PRF with Full n-bit Security for Strengthening GCM and More
We consider the problem of constructing efficient pseudorandom functions with Beyond-Birthday-Bound (BBB) security from blockciphers. More specifically, we are interested in variable-output-length pseudorandom functions (PRFs) whose domain is twice that of the underlying blockcipher. We present two such constructions, which provide weak PRF and full PRF security, respectively, and both achieve full n-bit security. While several recent works have focused on constructing BBB PRFs from blockciphers, much less attention has been given to weak PRF constructions, which can potentially be constructed more efficiently and still serve as a useful primitive. Another understudied problem in this domain is that of extending the domain of a BBB PRF, which turns out to be rather challenging. Besides being of theoretical interest in itself, this is also a very practical problem. Often, the input to the BBB PRF is a nonce, but random nonces are much easier to handle in practice as they do not require maintaining state---which can be very cumbersome in distributed systems and encrypted cloud storage. Accordingly, in order to maintain a BBB security bound, one requires large random nonces and corresponding BBB (weak) PRF constructions that admit matching input sizes.

NIST has recently announced a pre-draft call for comments to standardise AEAD schemes that can encrypt larger amounts of data and admit larger nonces. The call lists two approaches. The first is to define an analogue of GCM using a 256-bit blockcipher, and the second is based on a recent proposal by Gueron to extend GCM with a key derivation function (KDF) called DNDK to increase its security. In essence, DNDK is a BBB-secure expanding weak pseudorandom function with a domain size of 192 bits that is realised from AES.

Our work makes relevant contributions to this domain in two important ways. Firstly, an immediate consequence of our work is that one can construct a GCM analogue with BBB security from our constructions, without resorting to a 256-bit blockcipher. Our second contribution is that our full PRF can be used as a KDF in combination with GCM in an analogous manner to DNDK-GCM. However, being a full PRF, as opposed to DNDK which is only a weak PRF, allows one to prove the KDF-GCM composition secure as an AEAD scheme. Finally, when contrasting our weak PRF and DNDK with comparable parameters, our construction requires only half the blockcipher calls.
On the Security and Privacy of CKKS-based Homomorphic Evaluation Protocols
CKKS is a homomorphic encryption (HE) scheme that supports arithmetic over complex numbers in an approximate manner.
Despite its utility in PPML protocols, formally defining the security of CKKS-based protocols is challenging due to its approximate nature.
To be precise, in a sender-receiver model, where the receiver holds input ciphertexts and the sender evaluates its private circuit, it is difficult to define the sender's privacy in terms of indistinguishability, whereas the receiver's privacy is easily achieved through the semantic security of CKKS.
In this paper, we present a new definition for CKKS-based protocols, called Differentially Private Homomorphic Evaluation (DPHE) protocols, along with a general method to achieve this.
In our definition, we relax the sender's privacy condition from indistinguishability to a differential privacy notion.
We focus on the fact that the main security concern for PPML protocols is differential privacy of the evaluation results, rather than simulatability of the evaluation.
We prove that if the ideal functionality satisfies differential privacy and a protocol satisfies DPHE, then the output of the protocol also satisfies differential privacy.
Next, we provide a general compiler that transforms a plain CKKS-based protocol into a DPHE one.
We achieve this by mixing the Laplace mechanism and zero-knowledge argument of knowledge (ZKAoK) for CKKS.
This approach allows us to achieve the sender's privacy with moderate noise, whereas the previous indistinguishability-based approach requires exponentially large overhead.
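The plaintext core of the compiler's noise step is the standard Laplace mechanism; a minimal sketch follows (parameters are illustrative, and in the actual protocol the noise would be added homomorphically to the CKKS result and accompanied by the ZKAoK):

    import random

    def laplace_noise(scale: float) -> float:
        # The difference of two i.i.d. exponentials is Laplace-distributed.
        return random.expovariate(1 / scale) - random.expovariate(1 / scale)

    def dp_release(value: float, sensitivity: float, epsilon: float) -> float:
        # Laplace mechanism: noise of scale sensitivity/epsilon gives epsilon-DP.
        return value + laplace_noise(sensitivity / epsilon)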
Finally, we provide a concrete instantiation of ZKAoK for CKKS in the form of PIOP.
To prove the well-formedness of CKKS ciphertexts and public keys, we devise new proof techniques that use homomorphic evaluation during verification.
We also provide an implementation to demonstrate the practicality of our ZKAoK for CKKS by compiling PIOPs using the HSS polynomial commitment scheme (Crypto'24).
Faster FHEW Bootstrapping with Adaptive Key Update
Lee et al. proposed a new bootstrapping algorithm based on homomorphic automorphism, which merges the empty sets of ciphertexts by adjusting the window size. This algorithm supports arbitrary secret key distributions with no additional runtime costs while using small evaluation keys. However, our implementation reveals that once the window size exceeds a certain threshold, the time required for bootstrapping remains relatively constant. This observation prompts the question of how to further reduce the running time.
To address this challenge, we introduce a new trick called Adaptive Key Update (AKU). With AKU and automorphism techniques, we propose a new bootstrapping algorithm for Gaussian secret keys that requires only external products, and no key switching, for blind rotation.
Building on this, we employ window size optimization and key switching techniques to further improve the algorithm. The improved algorithm provides a useful trade-off between key storage and computational efficiency, depending on the choice of the window size.
Compared to the current fastest FHEW bootstrapping method for Gaussian secret keys (the LLWW+ method proposed by Li et al.), our AKU-based algorithm reduces the number of key-switching operations by 76% and decreases the running time of bootstrapping by 20.7%. The practical runtime for bootstrapping is then approximately equal to that of performing only the external products.
A New Generalized Attack on RSA-like Cryptosystems
Rivest, Shamir, and Adleman published the RSA cryptosystem in 1978, and it has been widely used over the last four decades. The security of RSA is based on the difficulty of factoring large integers N = pq, where p and q are prime numbers. The public exponent e and the private exponent d are related by the equation ed ≡ 1 (mod (p-1)(q-1)). Recently, Cotan and Teseleanu (NordSec 2023) introduced a variant of RSA in which the public and private exponents satisfy a generalized equation depending on a positive integer parameter. In this paper, we study a further generalization of this equation. We show that, given the public parameters, one can recover the private exponent and factor the modulus in polynomial time by combining continued fractions with Coppersmith's algorithm, which relies on lattice reduction techniques, under specific conditions on the parameters. Furthermore, we show that if the private exponent in an RSA-like cryptosystem is either too small or too large, then the modulus can be factored in polynomial time. This attack applies to the standard RSA cryptosystem.
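The continued-fraction component of such attacks is classical; the sketch below shows it for plain RSA with a small private exponent (a Wiener-style toy, not the paper's generalized equation):

    from fractions import Fraction

    def convergents(num: int, den: int):
        # Yield the continued-fraction convergents of num/den.
        a, b, quotients = num, den, []
        while b:
            quotients.append(a // b)
            a, b = b, a % b
        h0, k0, h1, k1 = 1, 0, quotients[0], 1
        yield Fraction(h1, k1)
        for q in quotients[1:]:
            h0, h1 = h1, q * h1 + h0
            k0, k1 = k1, q * k1 + k0
            yield Fraction(h1, k1)

    def wiener_attack(e: int, N: int, probe: int = 2):
        # If d is small enough, it appears as the denominator of a convergent of e/N.
        c = pow(probe, e, N)
        for frac in convergents(e, N):
            d = frac.denominator
            if d > 1 and pow(c, d, N) == probe:
                return d
        return None

    print(wiener_attack(17993, 90581))  # 5, for the toy key N = 239 * 379 with d = 5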
A Complete Security Proof of SQIsign
SQIsign is the leading digital signature scheme based on isogenies. Despite the many improvements that have appeared in the literature, all its recent variants lack a complete security proof. In this work, we provide the first full security proof of SQIsign, as submitted to the second round of NIST's on-ramp track for digital signatures.
To do so, we introduce a new framework, which we call Fiat-Shamir with hints, that captures all those protocols where the simulator needs additional information to simulate a transcript. Using this framework, we show that SQIsign is EUF-CMA secure in the ROM, assuming the hardness of the One Endomorphism problem with hints, or the hardness of the Full Endomorphism Ring problem with hints together with a hint indistinguishability assumption; all assumptions, unlike previous ones in the literature, are non-interactive. Along the way, we prove several intermediate results that may be of independent interest.
Side-Channel and Fault Injection Attacks on VOLEitH Signature Schemes: A Case Study of Masked FAEST
Ongoing efforts to transition to post-quantum public-key cryptosystems have created the need for algorithms with a variety of performance characteristics and security assumptions.
Among the candidates in NIST's post-quantum standardisation process for additional digital signatures is FAEST, a Vector Oblivious Linear Evaluation in-the-Head (VOLEitH)-based scheme, whose security relies on the one-wayness of the Advanced Encryption Standard (AES).
The VOLEitH paradigm enables competitive performance and signature sizes under conservative security assumptions.
However, since it was introduced recently, in 2023, its resistance to physical attacks has not yet been analysed. In this paper, we present the first security analysis of VOLEitH-based signature schemes in the context of side-channel and fault injection attacks. We demonstrate four practical attacks on a masked implementation of FAEST on ARM Cortex-M4, capable of recovering the full secret key with high probability (greater than 0.87) from a single signature. These attacks exploit vulnerabilities of components specific to VOLEitH schemes and FAEST, such as the parallel all-but-one vector commitments, the VOLE generation, and the AES proof generation. Finally, we propose countermeasures to mitigate these attacks and enhance the physical security of VOLEitH-based signature schemes.
HiAE: A High-Throughput Authenticated Encryption Algorithm for Cross-Platform Efficiency
This paper addresses the critical challenges in designing cryptographic algorithms that achieve both high performance and cross-platform efficiency on ARM and x86 architectures, catering to the demanding requirements of next-generation communication systems, such as 6G and GPU/NPU interconnections. We propose HiAE, a high-throughput authenticated encryption algorithm optimized for performance exceeding 100 Gbps and designed to meet the stringent security requirements of future communication networks. HiAE leverages the stream cipher structure, integrating the AES round function for non-linear diffusion.
Our design achieves exceptional efficiency, with benchmark results from software implementations across various platforms showing over 340 Gbps on x86 processors and 180 Gbps on ARM devices in AEAD mode, making it the fastest AEAD solution on ARM chips and setting a new performance record on the latest x86 processors.
Another Look at the Quantum Security of the Vectorization Problem with Shifted Inputs
Cryptographic group actions provide a basis for simple post-quantum generalizations of many cryptographic protocols based on the discrete logarithm problem (DLP). However, many advanced group action-based protocols do not solely rely on the core group action problem (the so-called vectorization problem), but also on variants of this problem, to either improve efficiency or enable new functionalities. In particular, the security of the CSI-SharK threshold signature protocol relies on the hardness of the Vectorization Problem with Shifted Inputs, where the adversary receives not only the standard vectorization instance but also shifted variants of it for multiple known shift values. A natural open question is whether the extra data provided to the adversary in this variant allows them to solve the underlying problem more efficiently.
In this paper, we revisit the concrete quantum security of this problem. We start from a quantum multiple hidden shift algorithm of Childs and van Dam, which to the best of our knowledge was never applied in cryptography before. We specify algorithms for its subroutines and we provide concrete complexity estimates for both these subroutines and the overall algorithm.
We apply our analysis to the CSI-SharK protocol. In prior analyses based on Kuperberg's algorithms, group action evaluations contributed to a significant part of the overall T-gate cost. For CSI-SharK suggested parameters, our new approach requires significantly fewer calls to the group action evaluation subroutine, leading to significant complexity improvements overall. We describe two instances of our approach, one which lowers the T-gate complexity, and the other the QRAM requirements. We obtain significant speedups -- even in both metrics simultaneously -- and we quantify the degradation of the quantum security of the protocol when the number of curves in the public key increases.
Evasive LWE: Attacks, Variants & Obfustopia
Evasive LWE (Wee, Eurocrypt 2022 and Tsabary, Crypto 2022) is a recently introduced, popular lattice assumption which has been used to tackle long-standing problems in lattice based cryptography. In this work, we develop new counter-examples against Evasive LWE, in both the private and public-coin regime, propose counter-measures that define safety zones, and finally explore modifications to construct full compact FE/iO.
Attacks: Our attacks are summarized as follows.
- The recent work by Hsieh, Lin and Luo [HLL23] constructed the first ABE for unbounded-depth circuits by relying on the (public-coin) ``circular'' evasive LWE assumption, which incorporates circularity into the Evasive LWE assumption. We provide a new attack against this assumption by exhibiting a sampler such that the pre-condition is true but the post-condition is false.
- We demonstrate a counter-example against public-coin evasive LWE which exploits the freedom to choose the error distributions in the pre- and post-conditions. Our attack crucially relies on the error in the pre-condition being larger than the error in the post-condition.
- The recent work by Agrawal, Kumari and Yamada [AKY24a] constructed the first functional encryption scheme for pseudorandom functionalities and extended this to obfuscation for pseudorandom functionalities [AKY24c] by relying on private-coin evasive LWE. We provide a new attack against the stated assumption.
- The recent work by Branco et al. [BDJ+24] (concurrently to [AKY24c]) provides a construction of obfuscation for pseudorandom functionalities by relying on private-coin evasive LWE. By adapting the counter-example against [AKY24a], we provide an attack against this assumption.
- Branco et al. [BDJ+24] showed that there exist contrived, somehow ``self-referential'', classes of pseudorandom functionalities for which pseudorandom obfuscation cannot exist. We develop an analogous result for the setting of pseudorandom functional encryption.
While Evasive LWE was developed to specifically avoid zeroizing attacks as discussed above, our attacks show that in some (contrived) settings, the adversary may nevertheless obtain terms in the zeroizing regime.
Counter-measures: Guided by the lessons distilled from the above attacks, we develop counter-measures to protect against them. Our interpretation of the above attacks is that Evasive LWE, as defined, is too general -- we suggest restrictions that identify safe zones for the assumption, using which the broken applications can be recovered.
Variants to give full FE and iO: Finally, we show that certain modifications of Evasive LWE, which respect the counter-measures developed above, yield full compact FE in the standard model. We caution that these candidates are presented mainly as targets for cryptanalysis, to further our understanding of this regime of assumptions.
Simple and General Counterexamples for Private-Coin Evasive LWE
We present a simple counterexample to all known variants of the private-coin evasive learning with errors (LWE) assumption. Unlike prior works, our counterexample is direct, it does not use heavy cryptographic machinery (such as obfuscation or witness encryption), and it applies to all variants of the assumption. Our counterexample can be seen as a "zeroizing" attack against evasive LWE, calling into question the soundness of the underlying design philosophy.
Split Prover Zero-Knowledge SNARKs
We initiate the study of {\em split prover zkSNARKs}, which allow Alice to offload part of the zkSNARK computation to her assistant, Bob. In scenarios like online transactions (e.g., zCash), a significant portion of the witness (e.g., membership proofs of input coins) is often available to the prover (Alice) before the transaction begins. This setup offers an opportunity to Alice to initiate the proof computation early, even before the entire witness is available. The remaining computation can then be delegated to Bob, who can complete it once the final witness (e.g., the transaction amount) is known.
To prevent Bob from generating proofs independently (e.g., initiating unauthorized transactions), it is essential that the data provided to him for the second phase of computation does not reveal the witness used in the first phase. Additionally, the verifier of the zkSNARK should be unable to determine whether the proof was generated solely by Alice or through this two-step process. To achieve this efficiently, we require this two-phase proof generation to only use cryptography in a black-box manner.
We propose a split prover zkSNARK based on the Groth16 zkSNARKs [Groth, EUROCRYPT 2016], meeting all these requirements. Our solution is also \emph{asymptotically tight}, meaning it achieves the optimal second phase proof generation time for Groth16. Importantly, our split prover zkSNARK preserves the verification algorithm of the original Groth16 zkSNARK, enabling seamless integration into existing deployments of Groth16.
KLPT²: Algebraic Pathfinding in Dimension Two and Applications
Following Ibukiyama, Katsura and Oort, all principally polarized superspecial abelian surfaces over can be represented by a certain type of matrix , having entries in the quaternion algebra . We present a heuristic polynomial-time algorithm which, upon input of two such matrices , finds a "connecting matrix" representing a polarized isogeny of smooth degree between the corresponding surfaces. Our algorithm should be thought of as a two-dimensional analog of the KLPT algorithm from 2014 due to Kohel, Lauter, Petit and Tignol for finding a connecting ideal of smooth norm between two given maximal orders in .
The KLPT algorithm has proven to be a versatile tool in isogeny-based cryptography, and our analog has similar applications; we discuss two of them in detail. First, we show that it yields a polynomial-time solution to a two-dimensional analog of the so-called constructive Deuring correspondence: given a matrix representing a superspecial principally polarized abelian surface, realize the latter as the Jacobian of a genus-2 curve (or, exceptionally, as the product of two elliptic curves if it concerns a product polarization). Second, we show that, modulo a plausible assumption, Charles-Goren-Lauter style hash functions from superspecial principally polarized abelian surfaces require a trusted set-up. Concretely, if the matrix associated with the starting surface is known then collisions can be produced in polynomial time. We deem it plausible that all currently known methods for generating a starting surface indeed reveal the corresponding matrix. As an auxiliary tool, we present an explicit table for converting isogenies into the corresponding connecting matrix, a step for which a previous method by Chu required super-polynomial (but sub-exponential) time.
Functional Oblivious Transfer with Applications in Privacy-Preserving Machine Learning
Oblivious Transfer (OT) is a fundamental cryptographic primitive introduced nearly four decades ago. OT allows a receiver to select and learn a subset of the private messages held by a sender. It ensures that the sender does not learn which specific messages the receiver has chosen, while the receiver gains no information about the remaining messages. In this work, we introduce the notion of functional OT (FOT). FOT adds a layer of security to conventional OT by ensuring that the receiver only learns a function of the selected messages rather than the individual messages themselves. We propose several protocols that realize this concept. In particular, we propose concrete instantiations of FOT when the function to be executed on the selected messages is mean, mode, addition, or multiplication. The schemes are efficient and unconditionally secure. We also propose a non-trivial protocol that supports arbitrary functions on the selected messages, mainly using fully homomorphic encryption (FHE) and oblivious linear function evaluation, where the number of FHE invocations is constant with respect to the number of messages. Our asymptotic and concrete cost analyses demonstrate the efficiency of our unconditionally secure FOT protocols. FOT can enhance the security of privacy-preserving machine learning, particularly in (i) K-Nearest Neighbors schemes and (ii) client selection in Federated Learning (FL).
Simple Public Key Anamorphic Encryption and Signature using Multi-Message Extensions
Anamorphic encryption (AE) considers secure communication in the presence of a powerful surveillant (typically called a ``dictator'') who only allows certain cryptographic primitives and knows all the secret keys in a system. The basic idea is that there is a second (anamorphic) mode of encryption that allows for transmitting an anamorphic message using a double key to a receiver who can decrypt this message using the same double key. From the point of view of the dictator, the encryption keys as well as the ciphertexts in the regular and anamorphic modes are indistinguishable. The most recent works in this field consider public key anamorphic encryption (PKAE), i.e., the sender of an anamorphic message requires an encryption double key (or no key at all), and the receiver requires an associated decryption double key. Known constructions, however, either work only for schemes that are mostly of theoretical interest or come with conceptual limitations, assuming additional unnecessary properties (e.g., randomness recoverability and CCA security).
In this paper, we ask whether we can design PKAE schemes without such limitations and be closer to practically used PKE schemes. In fact, such schemes are more likely to be allowed by a cognizant dictator. Moreover, we initiate the study of identity-based anamorphic encryption (IBAE), as the IBE setting seems to be a natural choice for a dictator. For both PKAE and IBAE, we show how well-known IND-CPA and IND-CCA secure primitives can be extended by an anamorphic encryption channel. In contrast to previous work, we additionally consider CCA (rather than just CPA) security notions for the anamorphic channel and also build upon CPA (rather than only CCA) secure PKE.
Finally, we ask whether it is possible to port the recent concept of anamorphic signatures, which considers constructing symmetric anamorphic channels in case only signature schemes are allowed by the dictator, to the asymmetric setting, which we denote by public-key anamorphic signatures (PKAS). Moreover, we consider IND-CCA security for the anamorphic channel of our PKAS.
Higher Residuosity Attacks on Small RSA Subgroup Decision Problems
Secure two-party comparison, known as Yao's millionaires' problem, has been a fundamental challenge in privacy-preserving computation. It enables two parties to compare their inputs without revealing the exact values of those inputs or relying on any trusted third party. One elegant approach to secure computation is based on homomorphic encryption. Recently, building on this approach, Carlton et al. (CT-RSA 2018) and Bourse et al. (CT-RSA 2020) presented novel solutions for the problem of secure integer comparison. These protocols have demonstrated significantly improved performance compared to the well-known and frequently used DGK protocol (ACISP 2007 and Int. J. Appl. Cryptogr. 1(4), 323–324, 2009). In this paper, we introduce a class of higher residuosity attacks, which can be regarded as an extension of the classical quadratic residuosity attack on the decisional Diffie-Hellman problem. We demonstrate that the small RSA subgroup decision problems, upon which both the CEK and BST protocols are based, are not difficult to solve when the prime base is small. It is precisely under these conditions that the protocols achieve optimal overall performance. Furthermore, we offer recommendations for precluding such attacks, including one approach that does not adversely affect performance. We hope that these attacks can be applied to analyze other number-theoretic hardness assumptions.
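For intuition, here is the classical quadratic residuosity attack that this work generalizes, sketched for DDH in Z_p^* (illustrative only, not the CEK/BST setting; higher residuosity attacks replace the exponent (p-1)/2 below with (p-1)/b for a small prime base b):

    def legendre(a: int, p: int) -> int:
        # Euler's criterion: a^((p-1)/2) mod p is 1 for residues and p-1 otherwise.
        s = pow(a, (p - 1) // 2, p)
        return -1 if s == p - 1 else s

    def looks_like_ddh(gx: int, gy: int, gz: int, p: int) -> bool:
        # Assuming g generates Z_p^* (hence g is a non-residue): g^(xy) is a
        # residue iff x or y is even, i.e. iff g^x or g^y is a residue. A random
        # g^z fails this consistency check half the time, breaking DDH.
        expected = 1 if (legendre(gx, p) == 1 or legendre(gy, p) == 1) else -1
        return legendre(gz, p) == expected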
Polynomial Secret Sharing Schemes and Algebraic Matroids
In a secret sharing scheme with polynomial sharing, the secret is an element of a finite field, and the shares are obtained by evaluating polynomials on the secret and some random field elements, i.e., for every party there is a set of polynomials that computes the share of the party. These schemes generalize the linear ones, adding more expressivity and giving room for more efficient schemes. To identify the access structures for which this efficiency gain is relevant, we need a systematic method to identify the access structure of polynomial schemes; i.e., to identify which sets can reconstruct the secret in the scheme. As a first step, we study ideal polynomial secret sharing schemes where there is a single polynomial for each party. Ideal schemes have optimal share size because the size of each share is the size of the secret.
Our goal is to generalize results of linear secret sharing schemes, i.e., schemes in which the shares are computed by applying linear mappings and the linear dependency of these mappings determines their access structures. To achieve this goal, we study the connection between the algebraic dependency of the sharing polynomials and the access structure of the polynomial scheme. Our first result shows that if the degree of the sharing polynomials is not too big compared to the size of the field, then the algebraic dependence of the sharing polynomials determines the access structure of the scheme. This contributes to the characterization of ideal polynomial schemes and establishes a new connection between families of ideal schemes and algebraic matroids.
Conversely, we ask the question: If we associate a polynomial with each party and the dealer, can we use these polynomials to realize the access structure determined by the algebraic dependency of the polynomials? Our second result shows that these access structures admit statistical schemes with small shares. Finally, we extend this result to the general case where each party may have more than one polynomial.
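As a point of reference for the polynomial-sharing syntax above, Shamir's scheme is the textbook ideal example: each party's share is one polynomial evaluated on the secret and random field elements, and its access structure (all sets of size at least t) is the one determined by the dependency of the evaluation maps. A minimal sketch over a prime field, with illustrative parameters:

    import random

    P = 2**61 - 1  # a Mersenne prime, used as an illustrative field size

    def share(secret: int, t: int, n: int):
        # Each share is the evaluation of a random degree-(t-1) polynomial
        # whose constant term is the secret.
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
                for i in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at 0 recovers the secret from any t shares.
        secret = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = share(1234, t=3, n=5)
    assert reconstruct(shares[:3]) == 1234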
Partial Lattice Trapdoors: How to Split Lattice Trapdoors, Literally
Lattice trapdoor algorithms allow us to sample hard random lattices together with their trapdoors, given which short lattice vectors can be sampled efficiently. This enables a wide range of advanced cryptographic primitives. In this work, we ask: can we distribute lattice trapdoor algorithms non-interactively?
We study a natural approach to sharing lattice trapdoors: splitting them into partial trapdoors for different lower-rank sublattices which allow the local sampling of short sublattice vectors. Given sufficiently many short sublattice vectors, these can then be combined to yield short vectors in the original lattice. Moreover, this process can be repeated an unbounded polynomial number of times without needing a party holding a full trapdoor to intervene. We further define one-wayness and indistinguishability properties for partial trapdoors.
We establish that such objects with non-trivial performance exist under standard assumptions. Specifically, we prove these properties for a simple construction from the κ-SIS and κ-LWE assumptions, which were previously shown to be implied by the plain SIS and LWE assumptions, respectively. The security proofs extend naturally to the ring or module settings under the respective analogues of these assumptions, which have been conjectured to admit similar reductions.
Our partial trapdoors achieve non-trivial efficiency, with relevant parameters sublinear in the number of shareholders. Our construction is algebraic, without resorting to generic tools such as multiparty computation or fully homomorphic encryption. Consequently, a wide range of lattice-trapdoor-based primitives can be thresholdised non-interactively by simply substituting the trapdoor preimage sampling procedure with our partial analogue.
Enabling Microarchitectural Agility: Taking ML-KEM & ML-DSA from Cortex-M4 to M7 with SLOTHY
Highly-optimized assembly is commonly used to achieve the best performance for popular cryptographic schemes such as the newly standardized ML-KEM and ML-DSA.
The majority of implementations today rely on hand-optimized assembly for the core building blocks to achieve both security and performance.
However, recent work by Abdulrahman et al. takes a new approach, writing a readable base assembly implementation first and leaving the bulk of the optimization work to a tool named SLOTHY based on constraint programming.
SLOTHY performs instruction scheduling, register allocation, and software pipelining simultaneously using constraints modeling the architectural and microarchitectural details of the target platform.
In this work, we extend SLOTHY and investigate how it can be used to migrate already highly hand-optimized assembly to a different microarchitecture, while maximizing performance.
As a case study, we optimize state-of-the-art Arm Cortex-M4 implementations of ML-KEM and ML-DSA for the Arm Cortex-M7.
Our results suggest that this approach is promising:
For the number-theoretic transform (NTT) – the core building block of both ML-DSA and ML-KEM – we achieve speed-ups of and , respectively.
For Keccak – the permutation used by SHA-3 and SHAKE and also used extensively in ML-DSA and ML-KEM – we achieve speed-ups of 30% compared to the M4 code and 5% compared to hand-optimized M7 code.
For many other building blocks, we achieve similarly significant speed-ups of up to .
Overall, this results in 11% to 33% faster code for the complete cryptosystems.
Lattice-Based Updatable Public-Key Encryption for Group Messaging
Updatable Public-Key Encryption (UPKE) augments the security of PKE with Forward Secrecy properties. While requiring more coordination between parties, UPKE enables much more efficient constructions than full-fledged Forward-Secret PKE. Alwen, Fuchsbauer and Mularczyk (AFM, Eurocrypt’24) presented the strongest security notion to date. It is the first to meet the needs of UPKE’s most important applications: Secure Group Messaging and Continuous Group Key Agreement. The authors provide a very efficient construction meeting their notion with classic security based on the Computational Diffie-Hellman (CDH) assumption in the Random Oracle Model (ROM).
In this work we present the first post-quantum secure UPKE construction meeting (a slight relaxation of) the AFM security notion. Based on the Module LWE assumption, our construction is practically efficient. Moreover, its public key and ciphertext sizes are a fraction of those of the state-of-the-art lattice-based UPKE scheme in the ROM by Abou Haidar, Passelègue and Stehlé – despite the latter only being shown to satisfy a significantly weaker security notion. As the AFM proof relies on the random self-reducibility of CDH, which has no analogue for lattices, we develop a new proof technique for strong UPKE, identifying the core properties required from the underlying (lattice-based) encryption scheme.
Traitor Tracing in Multi-sender Setting (Traceable Multi-client Functional Encryption)
Traitor tracing is a traditional cryptographic primitive designed for scenarios with multiple legitimate receivers. When the plaintext - that is, the output of decryption - is leaked and more than one legitimate receiver exists, it becomes imperative to identify the source of the leakage, a need that has motivated the development of traitor tracing techniques. Recent advances in standard encryption have enabled decryption outcomes to be defined in a fine-grained manner through the introduction of Functional Encryption (FE). Constructing FE schemes is intriguing, and achieving the tracing property adds an additional layer of complexity. Traitor tracing techniques have been actively developed for more than three decades, yet they have always remained within the same framework - a single sender responsible for encrypting all the data.
However, fine-grained decryption is particularly useful when data originates from multiple sources, allowing for joint computation on personal data. This leads to the concept of multi-client functional encryption (MCFE), where multiple concurrent senders independently encrypt their data while agreeing on the decryption of a specific function (e.g., a statistical measure) computed on the aggregated data, without revealing any additional information. In the era of cloud computing and big data, privacy-preserving joint computation is crucial, and tracing the source of any breach by dishonest participants becomes essential. Thus, in this paper we take the first step toward addressing the tracing problem in the general context of joint computation with multiple senders. Our contributions are twofold:
- We propose the first tracing model in the context of multi-sender encryption, which allows a pirate to extract secret information from both receivers and senders. Our model supports strong and naturally admissible decoders, removing artificial restrictions on the pirate decoder and thus addressing the shortcomings of existing traceable functional encryption schemes designed for the single-sender setting.
- To achieve our conceptual objective, we build upon the recently introduced notion of strong admissibility for MCFE. Our main technical contribution is a generic compiler that transforms a large class of MCFE schemes with weak admissibility into schemes with strong admissibility. This compiler not only helps overcome existing challenges but may also be of general interest within the functional encryption domain. Finally, we present a concrete lattice-based scheme for inner-product functionalities that achieves post-quantum security under standard assumptions.
The Security of Hash-and-Sign with Retry against Superposition Attacks
Considering security against quantum adversaries, while it is important to consider the traditional existential unforgeability (EUF-CMA security), it is desirable to also consider security against adversaries making quantum queries to the signing oracle: Plus-one security (PO security) and blind unforgeability (BU security), proposed by Boneh and Zhandry (Crypto 2013) and Alagic et al. (EUROCRYPT 2020), respectively. Hash-and-sign is one of the most common paradigms for constructing EUF-CMA-secure signature schemes in the quantum random oracle model, employing a trapdoor function and a hash function. It is known that its derandomized version is PO- and BU-secure. A variant of hash-and-sign, known as hash-and-sign with retry (HSwR), formulated by Kosuge and Xagawa (PKC 2024), is widespread since it allows for weakening the security assumptions on the trapdoor function. Unfortunately, it has not been known whether HSwR can achieve PO and BU security even with derandomization.
In this paper, we apply a derandomization with bounded loops to HSwR. We demonstrate that HSwR can achieve PO and BU security through this approach. Since derandomization with bounded loops offers advantages in some implementations, our results support its wider adoption, including in NIST PQC candidates.
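Our reading of derandomization with bounded loops, as a sketch: the randomness for attempt i is derived deterministically from the secret key, the message, and the loop counter, so the retry loop has a fixed bound instead of fresh coins. Everything below is a hypothetical stand-in; in particular, trapdoor_invert plays the role of a preimage sampler that may reject:

    import hashlib

    def hswr_sign(sk: bytes, msg: bytes, trapdoor_invert, max_retries: int = 256):
        # Derandomized hash-and-sign with retry: salt_i = H(sk, msg, i).
        for i in range(max_retries):
            salt = hashlib.sha256(sk + msg + i.to_bytes(2, "big")).digest()
            digest = hashlib.sha256(salt + msg).digest()
            sig = trapdoor_invert(sk, digest)  # hypothetical; None signals a retry
            if sig is not None:
                return salt, sig
        raise RuntimeError("retry bound exceeded")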
Adaptively Secure Fully Homomorphic Message Authentication Code with Pre-processable Verification
There has been remarkable progress in fully homomorphic encryption ever since Gentry's first scheme. In contrast, fully homomorphic authentication primitives have received relatively little attention, despite the existence of some previous constructions. While there exist various schemes with different functionalities for fully homomorphic encryption, there are only a few options for fully homomorphic authentication. Moreover, there are even fewer options when considering two of the most important properties: adaptive security and pre-processable verification. To our knowledge, except for some concurrent works, achieving both properties requires the use of a nested construction, which involves homomorphically authenticating a homomorphic authentication tag of a message, making the scheme costly and complicated.
In this work, we propose a dedicated scheme for (leveled) fully homomorphic message authentication code that is adaptively secure and has pre-processable verification. Leveraging the secrecy of the primitive, we demonstrate that a slight modification of a selectively secure (leveled) fully homomorphic signature scheme yields an adaptively secure (leveled) fully homomorphic message authentication code with pre-processable verification. Additionally, we introduce a novel notion and generic transform to enhance the security of a homomorphic message authentication code, which also exploits the secrecy of the primitive.
Predicate Encryption from Lattices: Enhanced Compactness and Refined Functionality
In this work, we explore the field of lattice-based Predicate Encryption (PE), with a focus on enhancing compactness and refining functionality.
First, we present a more compact bounded-collusion predicate encryption scheme compared to previous constructions, significantly reducing both the per-unit expansion and the fixed overhead, while maintaining an optimal linear blow-up proportional to the collusion bound.
Next, we propose a Predicate Inner Product Functional Encryption (P-IPFE) scheme based on our constructed predicate encryption scheme. P-IPFE preserves the attribute-hiding property while enabling decryption to reveal only the inner product between the key and message vectors, rather than the entire message as in traditional PE. Our P-IPFE scheme also achieves bounded collusion resistance while inheriting the linear compactness optimized in the underlying PE scheme. Additionally, it supports any polynomial-sized and bounded-depth circuits, thereby extending beyond the inner-product predicate class in prior works.
Furthermore, all the proposed schemes achieve selective fully attribute-hiding security in the simulation-based model, and can therefore attain semi-adaptive security by adopting existing upgrading techniques.
Vanishing Short Integer Solution, Revisited: Reductions, Trapdoors, Homomorphic Signatures for Low-Degree Polynomials
The vanishing short integer solution (vSIS) assumption [Cini-Lai-Malavolta, Crypto'23], in its simplest form, asserts the hardness of finding a polynomial with short coefficients which vanishes at a given random point. While vSIS has proven to be useful in applications such as succinct arguments, not much is known about its theoretical hardness. Furthermore, without the ability to generate a hard instance together with a trapdoor, the applicability of vSIS is significantly limited.
We revisit the vSIS assumption focusing on the univariate, single-point, constant-degree setting, which can be seen as a generalisation of the (search) NTRU problem. In such a setting, we show that the vSIS problem is as hard as finding the shortest vector in certain ideal lattices. We also show how to generate a random vSIS instance together with a trapdoor, under the (decision) NTRU assumption. Interestingly, a vSIS trapdoor allows sampling polynomials with short coefficients which evaluate to any given value at the public point. By exploiting the multiplicativity of the polynomial ring, we use vSIS trapdoors to build a new homomorphic signature scheme for low-degree polynomials.
A Note on the Zero-Knowledge Simulator of the CROSS Identification Protocol
We point out a flaw in the zero-knowledge simulator of the CROSS identification protocol, which allows a distinguisher to distinguish real and simulated transcripts given access to the witness. Moreover, we show that the real and simulated transcripts are not statistically indistinguishable, and therefore the protocol can only satisfy weak computational (rather than strong, statistical, or perfect) Honest-Verifier Zero-Knowledge. This issue is still present in version 2.0, updated on January 31, 2025, which resolves the security losses attained via the attacks of [BLP+25].
The Complexity of Memory Checking with Covert Security
A memory checker is an algorithmic tool used to certify the integrity of a database maintained on a remote, unreliable, computationally bounded server. Concretely, it allows a user to issue instructions to the server and, after every instruction, obtain either the correct value or a failure (but not an incorrect answer) with high probability. A recent result due to Boyle, Komargodski, and Vafa (BKV, STOC '24) showed a tradeoff between the size of the local storage and the number of queries the memory checker makes to the server upon every logical instruction. Specifically, they show that every non-trivial memory checker construction with inverse-polynomial soundness and sufficiently small local storage must make a super-constant number of queries, and this is tight up to constant factors given known constructions. However, an intriguing question is whether natural relaxations of the security guarantee could allow for more efficient constructions.
We consider and adapt the notion of covert security to the memory checking context, wherein the adversary can effectively cheat while taking the risk of being caught with constant probability. Notably, BKV's lower bound does not apply in this setting.
We close this gap and prove that the same overhead is unavoidable even in the covert security setting. Our lower bound applies to any memory checker construction, including ones that use randomness and adaptivity and ones that rely on cryptographic assumptions and/or the random oracle model, as long as they satisfy a natural "read-only reads" property. This property requires that a memory checker not modify the contents of the database or local storage in the execution of a logical read instruction.
Random Number Generation from Pulsars
Pulsars exhibit signals with precise inter-arrival times that are on the order of milliseconds to seconds, depending on the individual pulsar. There are subtle variations in the timing of pulsar signals. We show that these variations can serve as a natural entropy source for the creation of Random Number Generators (RNGs). We also explore the effects of using randomness extractors to increase the entropy of random bits extracted from pulsar timing data. To evaluate the quality of the Pulsar RNG, we model its entropy as a k-source and use well-known cryptographic results to show its closeness to a theoretically ideal uniformly random source. To remain consistent with prior work, we also show that the Pulsar RNG passes well-known statistical tests such as the NIST test suite.
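A minimal sketch of this pipeline, with SHA-256 standing in for the extractor (a seeded universal-hash extractor would match the formal k-source analysis; the timing values and nanosecond quantization are assumptions):

    import hashlib

    def extract_bits(arrival_times, out_len: int = 32) -> bytes:
        # Inter-arrival deltas carry the timing jitter, hence the entropy.
        deltas = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
        # Quantize to nanoseconds and hash the concatenation down to
        # near-uniform output bits.
        raw = b"".join(int(d * 1e9).to_bytes(8, "big", signed=True) for d in deltas)
        return hashlib.sha256(raw).digest()[:out_len]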
Lattice-based Proof-Friendly Signatures from Vanishing Short Integer Solutions
Efficient anonymous credentials are typically constructed by combining proof-friendly signature schemes with compatible zero-knowledge proof systems. Inspired by pairing-based proof-friendly signatures such as Boneh-Boyen (BB) and Boneh-Boyen-Shacham (BBS), we propose a wide family of lattice-based proof-friendly signatures based on variants of the vanishing short integer solution (vSIS) assumption [Cini-Lai-Malavolta, Crypto'23]. In particular, we obtain natural lattice-based adaptations of BB and BBS which, similar to their pairing-based counterparts, admit nice algebraic properties.
[Bootle-Lyubashevsky-Nguyen-Sorniotti, Crypto'23] (BLNS) recently proposed a framework for constructing lattice-based proof-friendly signatures and anonymous credentials, based on another new lattice assumption, parametrised by a fixed function f, with focus on f being the binary decomposition. We introduce a generalised framework with a keyed and probabilistic function f. For example, keying f with a short ring element leads to algebraic and thus proof-friendly signatures. To better gauge the robustness and proof-friendliness of the framework, we consider what happens when the inputs to f are chosen selectively (or even adaptively) by the adversary, and the behaviour under relaxed norm checks. While bit decomposition quickly becomes insecure, our proposed function families seem robust.
Commit-and-Prove System for Vectors and Applications to Threshold Signing
Multi-signatures allow combining several individual signatures into a compact one and verifying it against a short aggregated key. Compared to threshold signatures, multi-signatures enjoy non-interactive key generation but give up on the threshold setting. Recent works by Das et al. (CCS'23) and Garg et al. (S&P'24) show how multi-signatures can be turned into schemes that enable efficient verification when an ad hoc threshold -- determined only at verification -- is satisfied. This allows one to keep the simple key generation of multi-signatures and support flexible threshold settings in the signing process later on. Both works use the same idea of combining BLS multi-signatures with inner-product proofs over committed keys. Das et al. give a somewhat generic proof from both building blocks, which we show to be flawed, whereas Garg et al. give a direct proof for the combined construction in the algebraic group model.
In this work, we identify the common blueprint used in both works and abstract the proof-based approach through the building block of a commit-and-prove system for vectors (CP). We formally define a flexible set of security properties for the CP system and show how it can be securely combined with a multi-signature to yield a signature with ad hoc thresholds. Our scheme also lifts the threshold signatures into the multiverse setting recently introduced by Baird et al. (S&P'23), which allows signers to re-use their long-term keys across several groups. The challenge in the generic construction is to express -- and realize -- the combination of homomorphic proofs and commitments (needed to realize flexible thresholds over fixed group keys) and their simulation extractability (needed in the threshold signature security proof). We finally show that a CP instantiation closely following the ideas of Das et al. can be proven secure, but requires a new flexible-base DL-assumption to do so.
Delayed-Input Multi-Party Computation
In this work, we consider the setting where the process of securely evaluating a multi-party functionality is divided into two phases: offline (or preprocessing) and online. The offline phase is independent of the parties' inputs, whereas the online phase does require knowledge of the inputs. We consider the problem of minimizing the rounds of communication required in the online phase and propose a round-preserving compiler that can turn a large class of multi-party computation (MPC) protocols into protocols in which only the last two rounds are input-dependent. In particular, our compiler can be applied to all existing round-optimal MPC protocols. All our results assume no setup and are proven in the dishonest-majority setting with black-box simulation.

As part of our contribution, we propose a new definition, which we call Multi-Party Computation with Adaptive-Input Selection, that allows the distinguisher to craft the inputs the honest parties should use during the online phase, adaptively based on the offline phase. This new definition is needed to argue not only that the messages of the offline phase are input-independent, but also that security holds even in the stronger (and realistic) adversarial setting where the inputs may depend on some of the offline-phase protocol messages. We argue that this is the definition any protocol should satisfy to be securely used while preprocessing part of the rounds. We are the first to study this definition in a setting with no setup and where the majority of the parties can be corrupted. Prior definitions have been presented in the Universal Composability framework, which is unfortunately not well suited for our setting (i.e., no setup and dishonest majority).

As a corollary, we obtain the first four-round (which is optimal) MPC protocol where the first two rounds can be preprocessed and whose security holds against adaptive-input selection.
Stronger Security for Threshold Blind Signatures
Blind signatures allow a user to obtain a signature from an issuer in a privacy-preserving way: the issuer neither learns the signed message, nor can it link the signature to its issuance. The threshold version of blind signatures further splits the secret key among n issuers, and requires the user to obtain at least t ≤ n signature shares in order to derive the final signature. Security should then hold as long as at most t − 1 issuers are corrupt. Security for blind signatures is expressed through the notion of one-more unforgeability and demands that an adversary must not be able to produce more signatures than what is considered trivial after its interactions with the honest issuer(s).

While one-more unforgeability is well understood for the single-issuer setting, the situation is much less clear in the threshold case: due to the blind issuance, counting which interactions can yield a trivial signature is a challenging task. Existing works bypass that challenge by using simplified models that do not fully capture the expectations of the threshold setting.

In this work, we study the security of threshold blind signatures and propose a framework of one-more unforgeability notions where the adversary can corrupt c < t issuers. Our model is generic enough to capture both interactive and non-interactive protocols, and it provides a set of natural properties with increasingly stronger guarantees, giving the issuers gradually more control over how their shares can be combined. As a point of comparison, we reconsider the existing threshold blind signature models and show that their security guarantees are weaker and less clearly comprehensible than they seem. We then re-assess the security of existing threshold blind signature schemes – BLS-based and Snowblind – in our framework, and show how to lift them to provide stronger security.
Efficient NIZK Arguments with Straight-Line Simulation and Extraction
Non-interactive zero-knowledge (NIZK) arguments allow a prover to convince a verifier of the truthfulness of an NP statement by sending just one message, without disclosing any additional information. In several practical scenarios, the Fiat-Shamir transform is used to convert an efficient constant-round public-coin honest-verifier zero-knowledge proof system into an efficient NIZK argument system. This approach is provably secure in the random oracle model, but it crucially requires the programmability of the random oracle, and extraction works through rewinding. The works of Lindell [TCC 2015] and Ciampi et al. [TCC 2016] proposed efficient NIZK arguments with non-programmable random oracles along with a programmable common reference string.

In this work we show an efficient NIZK argument with straight-line simulation and extraction that relies on features that alone are insufficient to construct NIZK arguments (regardless of efficiency). More specifically, we consider the notion of quasi-polynomial time simulation proposed by Pass [EUROCRYPT 2003] and combine it with simulation and extraction with non-programmable random oracles, thus obtaining a NIZK argument of knowledge where neither the zero-knowledge simulator nor the argument-of-knowledge extractor needs to program the random oracle. Still, both the simulator and the extractor are straight-line. Our construction uses as a building block a modification of Fischlin's transform [CRYPTO 2005] and combines it with the concept of dense puzzles introduced by Baldimtsi et al. [ASIACRYPT 2016]. We also argue that our NIZK argument system inherits the efficiency features of Fischlin's transform, which represents the main advantage of Fischlin's protocol over existing schemes.
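For contrast with the straight-line techniques above, here is the rewinding-prone baseline: the Fiat-Shamir transform of Schnorr's protocol, with the public coin replaced by a hash of the transcript (toy parameters, illustrative only; Fischlin's transform instead has the prover search for hash values of a special form, which is what enables straight-line extraction):

    import hashlib, random

    p, q, g = 23, 11, 4  # toy Schnorr group: g has order q in Z_p^*

    def challenge(a: int, msg: bytes) -> int:
        # The random oracle: the verifier's public coin, derived from the transcript.
        return int.from_bytes(hashlib.sha256(a.to_bytes(4, "big") + msg).digest(), "big") % q

    def prove(x: int, msg: bytes):
        r = random.randrange(q)
        a = pow(g, r, p)                      # prover's commitment
        z = (r + challenge(a, msg) * x) % q   # response to the hashed challenge
        return a, z

    def verify(y: int, msg: bytes, proof) -> bool:
        a, z = proof
        return pow(g, z, p) == a * pow(y, challenge(a, msg), p) % p

    x = 7; y = pow(g, x, p)  # y is the statement; x is the witness an extractor must recover
    assert verify(y, b"msg", prove(x, b"msg"))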
Thorough Power Analysis on Falcon Gaussian Samplers and Practical Countermeasure
Falcon is one of the post-quantum signature schemes selected by NIST for standardization. With deployment underway, its implementation security is of great importance. In this work, we focus on the side-channel security of Falcon and our contributions are threefold.
First, by exploiting the symplecticity of NTRU and a recent decoding technique, we dramatically improve the key recovery using power leakages within Falcon Gaussian samplers. Compared to the state of the art (Zhang, Lin, Yu and Wang, EUROCRYPT 2023), the amount of traces required by our attack for a full key recovery is reduced by at least 85%.
Secondly, we present a complete power analysis of two exposed power leakages within Falcon's integer Gaussian sampler. We identify new sources of these leakages that were not identified by previous works, and conduct detailed security evaluations of the reference implementation of Falcon on ChipWhisperer.
Thirdly, we propose effective and easy-to-implement countermeasures against both leakages to protect Falcon's entire integer Gaussian sampler. Configured with our countermeasures, we provide security evaluations on ChipWhisperer and report the performance of the protected implementation. Experimental results highlight that our countermeasures admit a practical trade-off between efficiency and side-channel security.
Bootstrapping with RMFE for Fully Homomorphic Encryption
There is a heavy preference towards instantiating BGV and BFV homomorphic encryption schemes where the cyclotomic order is a power of two, as this admits highly efficient fast Fourier transformations. Field Instruction Multiple Data (FIMD) was introduced to increase packing capacity in the case of small primes and improve amortised performance, using reverse multiplication-friendly embeddings (RMFEs) to encode more data into each SIMD slot. However, FIMD currently does not admit bootstrapping.
In this work, we achieve bootstrapping for RMFE-packed ciphertexts with low capacity loss. We first adapt the digit extraction algorithm to work over RMFE-packed ciphertexts, by applying the recode map after every evaluation of the lifting polynomial. This allows us to follow the blueprint of thin bootstrapping, performing digit extraction on a single ciphertext. To achieve the low capacity loss, we introduce correction maps to the Halevi-Shoup digit extraction algorithm, to remove all but the final recode of RMFE digit extraction.
We implement several workflows for bootstrapping RMFE-packed ciphertexts in HElib, and benchmark them against thin bootstrapping. Our experiments show that the basic strategy of recoding multiple times in digit extraction yields better data packing, but results in very low remaining capacity and latencies of up to hundreds of seconds. On the other hand, using correction maps gives additional multiplicative depth and often brings latencies below seconds, at the cost of lower packing capacity.
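For intuition about the digit-extraction blueprint, the following plaintext-only sketch runs the classic squaring-based lifting for p = 2 on ordinary integers; in the paper this computation happens homomorphically on RMFE-packed ciphertexts, and a recode map (only a placeholder here) is applied after each evaluation of the lifting polynomial. All parameter names are ours.

```python
# Plaintext simulation of base-2 digit extraction (the homomorphic version
# operates on ciphertexts; recode() marks where an RMFE recode map would go).

E = 6  # work modulo 2^E

def recode(x):
    # Placeholder for the RMFE recode map; on plain integers it is the identity.
    return x

def extract_low_digit(z, e):
    """Return z mod 2, correct modulo 2^e, using the lifting polynomial x^2.

    If z = d + 2t with d in {0,1}, then z^2 = d + 4(...) since d^2 = d,
    so each squaring pushes the low digit up one more power of two.
    """
    y = z % (2 ** e)
    for _ in range(e - 1):
        y = recode(y * y % (2 ** e))   # lifting polynomial, then recode
    return y

def digit_extract(z, e):
    """Return the base-2 digits of z mod 2^e, Halevi-Shoup style."""
    digits = []
    w = z % (2 ** e)
    for j in range(e):
        d = extract_low_digit(w, e - j)
        digits.append(d % 2)
        w = (w - d) // 2               # peel off the extracted digit
    return digits

z = 45                                 # 101101 in binary
assert digit_extract(z, E) == [1, 0, 1, 1, 0, 1]
```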
Efficient Distributed Randomness Generation from Minimal Assumptions where PArties Speak Sequentially Once
We study efficient public randomness generation protocols in the PASSO (PArties Speak Sequentially Once) model for multi-party computation (MPC). PASSO is a variation of traditional MPC where parties are executed in sequence and each party "speaks" only once, broadcasting and sending secret messages only to parties further down the line. Prior results in this setting include information-theoretic protocols whose computational complexity scales exponentially with the number of corruptions (CRYPTO 2022), as well as more efficient computationally-secure protocols assuming either a trusted setup phase or DDH (FC 2024). Moreover, these works only consider security against static adversaries.
In this work, we focus on computational security against adaptive adversaries and from minimal assumptions, and improve on the works mentioned above in several ways:
- Assuming the existence of non-interactive perfectly binding commitments, we design protocols with or parties that are efficient and secure whenever is small compared to the security parameter (e.g., is constant). This improves the resiliency of all previous protocols, even those requiring a trusted setup. It also shows that parties are necessary and sufficient for corruptions in the computational setting, while parties are required for information-theoretic security.
- Under the same assumption, we design protocols with or parties (depending on the adversarial network model) which are efficient whenever . This improves on the existing DDH-based protocol both in terms of resiliency and the underlying assumptions.
- We design efficient protocols with or parties (depending on the adversarial network model) assuming the existence of one-way functions.
We complement these results by studying lower bounds for randomness generation protocols in the computational setting.
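To see why randomness generation in this model is delicate, the toy sketch below implements the naive sequential beacon, where each party broadcasts one value and the output is their XOR; the comments point out the last-mover bias that actual PASSO protocols must rule out. This is illustrative only and is not a protocol from the paper.

```python
import secrets

# Naive sequential beacon: parties speak once, in order; the output is the
# XOR of their broadcasts. This illustrates the PASSO setting and its core
# difficulty, NOT a secure protocol: the final party sees all previous
# contributions before speaking and can regenerate its own value until the
# result suits it (last-mover bias). The protocols in the paper prevent
# exactly this, e.g. via commitments and secret messages to later parties.
def naive_beacon(n: int) -> bytes:
    out = bytes(32)
    for _ in range(n):
        r = secrets.token_bytes(32)   # this party's single broadcast
        out = bytes(a ^ b for a, b in zip(out, r))
    return out

print(naive_beacon(5).hex())
```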
Juicebox Protocol: Distributed Storage and Recovery of Secrets Using Simple PIN Authentication
Existing secret management techniques demand users memorize complex passwords, store convoluted recovery phrases, or place their trust in a specific service or hardware provider. We have designed a novel protocol that combines existing cryptographic techniques to eliminate these complications and reduce user complexity to recalling a short PIN. Our protocol specifically focuses on a distributed approach to secret storage that leverages Oblivious Pseudorandom Functions (OPRFs) and a Secret-Sharing Scheme (SSS) combined with self-destructing secrets to minimize the trust placed in any singular server. Additionally, our approach allows for servers distributed across organizations, eliminating the need to trust a singular service operator. We have built an open-source implementation of the client and server sides of this new protocol, the latter of which has variants for running on commodity hardware and secure hardware.
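For reference, here is a minimal sketch of the secret-sharing half of such a design: Shamir t-of-n sharing over a prime field, with the field and parameters chosen by us for illustration (the actual protocol combines this kind of building block with an OPRF and self-destructing secrets).

```python
import secrets

# Shamir t-of-n secret sharing over GF(P): the kind of building block the
# protocol combines with an OPRF (field choice and parameters are ours).
P = 2**127 - 1  # a Mersenne prime, used here as a toy field modulus

def share(secret: int, t: int, n: int):
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

s = secrets.randbelow(P)
shards = share(s, t=3, n=5)
assert reconstruct(shards[:3]) == s      # any 3 of 5 shares recover s
assert reconstruct(shards[2:]) == s
```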
Helix: Scalable Multi-Party Machine Learning Inference against Malicious Adversaries
With the growing emphasis on data privacy, secure multi-party computation has garnered significant attention for its strong security guarantees in developing privacy-preserving machine learning (PPML) schemes. However, only a few works address scenarios with a large number of participants. The state of the art by Liu et al. (LXY24, USENIX Security'24) first achieves a practical PPML protocol for up to 63 parties, but is constrained to semi-honest security. Although naive extensions to the malicious setting are feasible, they would introduce significant overhead for verifying the correctness of multiplications. In this paper, we propose Helix, a scalable framework for maliciously secure PPML in the honest-majority setting, aiming to enhance both the scalability and practicality of maliciously secure protocols. In particular, we first report a privacy leakage issue in LXY24 during prefix OR operations and introduce a round-optimized alternative based on a single-round vectorized four-input multiplication protocol. To mitigate the verification burden, we propose a set of lightweight compression protocols that exploit reusability properties within the computation process, and seamlessly integrate them into existing verification techniques. Building on these enhancements, we further construct a practically-efficient and general n-party computation protocol that serves as the cryptographic foundation for advanced PPML schemes. As a result, Helix achieves efficiency comparable to semi-honest frameworks. For instance, in 63-party neural network inference, Helix is only 1.9x (1.1x) slower in the online phase and 1.2x (1.1x) slower in preprocessing under LAN (WAN), compared to LXY24, in the best case.
Homomorphic Encryption for Large Integers from Nested Residue Number Systems
Existing fully homomorphic encryption (FHE) schemes primarily support a plaintext space defined over a relatively small prime. However, in some important applications of FHE one needs arithmetic over a large prescribed prime. In this paper we construct a new FHE system that is specifically designed for this purpose. Our system composes three layers of residue systems to enable much better performance than was previously possible. Our experiments show that for arithmetic modulo a 256-bit integer, when compared to the TFHE-rs implementation of 256-bit arithmetic, our new system achieves a factor of two thousand better multiplication throughput and a factor of twenty better latency. Moreover, for a 2048-bit prime modulus we achieve far better performance than was previously possible.
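To illustrate the core tool the paper layers, the sketch below shows a single-level residue number system: arithmetic modulo a large composite splits into independent word-sized channels via the Chinese Remainder Theorem, with multiplication done componentwise. The moduli are toy values of ours; the paper nests three such layers.

```python
from math import prod

# A toy single-layer residue number system (the paper composes three such
# layers): arithmetic modulo N = m1*m2*... splits into independent,
# word-sized channels by the Chinese Remainder Theorem.
MODULI = [2**13 - 1, 2**17 - 1, 2**19 - 1]   # pairwise coprime toy moduli
N = prod(MODULI)

def to_rns(x):
    return [x % m for m in MODULI]

def mul_rns(a, b):
    return [(x * y) % m for x, y, m in zip(a, b, MODULI)]  # componentwise

def from_rns(residues):
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = N // m
        x = (x + r * Mi * pow(Mi, -1, m)) % N   # CRT recombination
    return x

a, b = 123456789, 987654321
assert from_rns(mul_rns(to_rns(a), to_rns(b))) == (a * b) % N
```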
Publicly Verifiable Threshold Proxy Re-encryption and Its Application in Data Rights Confirmation
Proxy re-encryption (PRE) has been regarded as an effective cryptographic primitive in data sharing systems with distributed proxies. However, no prior work considers the honesty of data owners, which is critical in the age of big data. In this paper, we fill the gap by introducing a new proxy re-encryption scheme, called publicly verifiable threshold PRE (PVTPRE). Briefly speaking, we apply a slightly modified publicly verifiable secret sharing (PVSS) scheme to distribute the re-encryption keys to multiple proxies, thereby achieving public verifiability of data owners non-interactively. The correctness of data users in decryption and the public verifiability of proxies in re-encryption are then guaranteed seamlessly through the execution of the PVSS reconstruction algorithms. We further prove that PVTPRE satisfies IND-CPA security. Besides, we put forward a privacy-preserving data rights confirmation framework, based on the PVTPRE scheme and blockchain, that provides clear principles for data ownership and usage. Blockchain plays the role of data bank and smart contract engine, providing reliable storage and verification for the whole framework. To our knowledge, we are the first to systematically investigate data rights confirmation with both privacy and public verifiability in mind, addressing the growing need for robust mechanisms to protect data rights and ensure transparency. Finally, we conduct comprehensive experiments to illustrate the correctness, feasibility and effectiveness of our scheme. The experimental results show that PVTPRE outperforms other PREs in many aspects.
Publicly Verifiable Generalized Secret Sharing and Its Application in Building Decentralized Exchange
Generalized secret sharing (GSS), which offers more flexibility by accommodating diverse access structures and conditions, has been under-explored in distributed computing over the past decades. To address this gap, we propose a publicly verifiable generalized secret sharing (PVGSS) scheme, enhancing the applicability of GSS in transparent systems. Public verifiability is a crucial property for gaining trustworthiness in decentralized systems like blockchains. We begin by introducing two GSS constructions, one based on Shamir's secret sharing and the other on a linear secret sharing scheme (LSSS). Next, we present PVGSS schemes that combine GSS with non-interactive zero-knowledge (NIZK) proofs. Further, we construct a decentralized exchange (DEX) based on the PVGSS scheme, in which any user can participate in exchanges and engage in arbitrage. Specifically, users can fairly swap ERC-20 tokens with passive watchers, who earn profits by providing arbitration services. The critical property of "fairness" required by the DEX is ensured through a sophisticated access structure, supported by the PVGSS scheme. We provide a comprehensive evaluation of the performance of the PVGSS schemes and of the monetary costs for users in the DEX. The results demonstrate the feasibility and practicality of this approach in real-world applications.
Tight Multi-challenge Security Reductions for Key Encapsulation Mechanisms
A key encapsulation mechanism (KEM) allows two parties to establish a shared secret key using only public communication. For post-quantum KEMs, the most widespread approach is to design a passively secure public-key encryption (PKE) scheme and then apply the Fujisaki–Okamoto (FO) transform that turns any such PKE scheme into an IND-CCA secure KEM. While the base security requirement for KEMs is typically IND-CCA security, adversaries in practice can sometimes observe and attack many public keys and/or ciphertexts, which is referred to as multi-challenge security. FO does not necessarily guarantee multi-challenge security: for example, FrodoKEM, a Round 3 alternate in NIST’s post-quantum project, used FO to achieve IND-CCA security, but was subsequently shown to be vulnerable to attackers that can target multiple ciphertexts. To avert this multi-ciphertext attack, the FrodoKEM team added a salt to the encapsulation procedure and proved that this does not degrade (single-ciphertext) IND-CCA security. The formal analysis of whether this indeed averts multi-ciphertext attacks, however, was left open, which we address in this work.
Firstly, we formalize FrodoKEM's approach as a new variant of the FO transform, called the salted FO transform. Secondly, we give tight reductions from multi-challenge security of the resulting KEM to multi-challenge security of the underlying public key encryption scheme, in both the random oracle model (ROM) and the quantum-accessible ROM (QROM). Together these results justify the multi-ciphertext security of the salted FrodoKEM scheme, and can also be used generically by other schemes requiring multi-ciphertext security.
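A minimal sketch of the salted-FO idea, using toy ElGamal as the underlying PKE: the salt is sampled at encapsulation time, enters the derivation of both the encryption randomness and the session key, and travels with the ciphertext. Everything here (group, hash, rejection handling) is an illustrative simplification of ours, not FrodoKEM's scheme.

```python
import hashlib, secrets

# Toy group (insecurely small, for illustration): p = 2q + 1, g of order q.
p, q, g = 1019, 509, 4

def G(*args):
    d = hashlib.sha256("|".join(map(str, args)).encode()).digest()
    return int.from_bytes(d[:8], "big"), d[8:]   # (PKE randomness, KEM key)

def elgamal_enc(pk, m, r):                        # deterministic given r
    return pow(g, r % q, p), (m * pow(pk, r % q, p)) % p

def elgamal_dec(sk, ct):
    c1, c2 = ct
    return (c2 * pow(c1, q - sk % q, p)) % p      # c2 / c1^sk

def encaps(pk):
    m = pow(g, secrets.randbelow(q), p)           # random group element
    salt = secrets.token_hex(16)                  # FrodoKEM-style salt
    r, key = G(m, pk, salt)                       # salt enters the derivation
    return (elgamal_enc(pk, m, r), salt), key

def decaps(sk, pk, ct):
    c, salt = ct
    m = elgamal_dec(sk, c)
    r, key = G(m, pk, salt)
    if elgamal_enc(pk, m, r) != c:                # FO re-encryption check
        return None                               # (real schemes reject implicitly)
    return key

sk = secrets.randbelow(q)
pk = pow(g, sk, p)
ct, k1 = encaps(pk)
assert decaps(sk, pk, ct) == k1
```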
Traceable Threshold Encryption without Trusted Dealer
The fundamental assumption in t-out-of-n threshold encryption is that the adversary can corrupt fewer than t parties. Unfortunately, this assumption may be unfounded in practical scenarios where shareholders could be incentivized to collude. Boneh, Partap, and Rotem (Crypto'24) recently addressed the setting where t or more shareholders work together to decrypt illegally. Inspired by the well-established notion of traitor tracing in broadcast encryption, they added a traceability mechanism that guarantees identifying at least one of the colluders. They provide several constructions that enable traceability, all of which require a trusted dealer to distribute the secret shares. While the trusted dealer can be replaced with a DKG for conventional threshold encryption, it is unclear how to do so without compromising traceability. As thresholdizing is meant to mitigate a single point of failure, a natural question remains: can we construct an efficient traceable threshold encryption scheme that does not rely on a trusted party to distribute the secret shares?
In this paper, we present two dealerless traceable threshold encryption constructions with different merits, obtained by extending the PLBE primitive of Boneh et al. (Eurocrypt'06) and combining it with the silent-setup threshold encryption construction of Garg et al. (Crypto'24). Our first construction achieves an amortized ciphertext of size (for ciphertexts). Our second construction achieves constant ciphertext size even in the worst case, but requires a less efficient preprocessing phase as a tradeoff. Both constructions enjoy a constant secret key size and do not require any interaction between the parties.
An additional restriction in the constructions of Boneh et al. is that they can only guarantee to find at least one colluder, leaving techniques to identify more traitors as an open problem. In this paper, we take a first step towards solving this question by formalizing a technique and applying it to our first construction. Namely, our first construction enables tracing traitors.
CCA-Secure Traceable Threshold (ID-based) Encryption and Application
A recent work by Boneh, Partap, and Rotem [Crypto'24] introduced the concept of traceable threshold encryption: if t or more parties collude to construct a decryption box that performs decryptions, then at least one party's identity can be traced by making a few black-box queries to the box. This has important applications, e.g., in blockchain mempool privacy, where collusion yields high financial gain through MEVs without any consequence; the possibility of tracing discourages such collusion.
Nevertheless, their definitions leave room for exploitation as they only achieve CPA security and do not consider inconsistency in decryption via different participating sets.
This paper proposes stronger definitions of traceable threshold encryption that support CCA security and consistency. Our main approach considers identity-based variants of traceable encryption (which we also define) and converts them into CCA-secure constructions by adapting two generic transformations, first using a one-time signature and then a fingerprinting code.
We put forward two efficient instantiations of our identity-based scheme with different merits: our first construction is based on Boneh-Franklin IBE [Crypto'01] and has constant size ciphertexts but quadratic size public keys - this is proven secure based on XDH and BDDH. Our second construction is based on Boneh-Boyen IBE [Eurocrypt'04]. It supports both constant-size ciphertexts and constant-size public keys - this is proven secure based on a variant of the uber assumption over bilinear pairings. Our concrete analysis shows that the first construction's ciphertext is much (~6x) smaller than the second construction. Finally, we extend the definitions to support consistency and achieve it by adjoining an efficient, non-interactive proof of correct encryption.
Hollow LWE: A New Spin, Unbounded Updatable Encryption from LWE and PCE
Updatable public-key encryption (UPKE) allows anyone to update a public key while simultaneously producing an update token, given which the secret key holder could consistently update the secret key. Furthermore, ciphertexts encrypted under the old public key remain secure even if the updated secret key is leaked -- a property much desired in secure messaging. All existing lattice-based constructions of UPKE update keys by a noisy linear shift. As the noise accumulates, these schemes either require super-polynomial-size moduli or an a priori bounded number of updates to maintain decryption correctness.
Inspired by recent works on cryptography based on the lattice isomorphism problem, we propose an alternative way to update keys in lattice-based UPKE. Instead of shifting, we rotate them. As rotations do not induce norm growth, our construction supports an unbounded number of updates with a polynomial-size modulus. The security of our scheme is based on the LWE assumption over hollow matrices -- matrices which generate linear codes with non-trivial hull -- and the hardness of permutation code equivalence. Along the way, we also show that LWE over hollow matrices is as hard as LWE over uniform matrices, and that a leftover hash lemma holds for hollow matrices.
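The norm argument is easy to visualize. The toy script below compares a key updated by repeated noisy additive shifts with one updated by coordinate permutations (a simple stand-in for the rotations used in the paper): the former's norm drifts upward while the latter's is invariant. This illustrates the stated motivation only, not the scheme itself.

```python
import random, math

random.seed(1)
DIM, UPDATES = 16, 1000

def norm(v):
    return math.sqrt(sum(x * x for x in v))

shifted = [random.choice((-1, 0, 1)) for _ in range(DIM)]
rotated = list(shifted)

for _ in range(UPDATES):
    noise = [random.choice((-1, 0, 1)) for _ in range(DIM)]
    shifted = [a + b for a, b in zip(shifted, noise)]   # noisy linear shift
    random.shuffle(rotated)                             # norm-preserving "rotation"

print(f"after {UPDATES} updates: shift norm {norm(shifted):.1f}, "
      f"rotation norm {norm(rotated):.1f}")
# The shifted key's norm drifts upward (forcing large moduli or a bounded
# number of updates), while the rotated key's norm never changes.
```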
Key-Homomorphic Computations for RAM: Fully Succinct Randomised Encodings and More
We propose a new method to construct a public-key encryption scheme, where one can homomorphically transform a ciphertext encrypted under a key into a ciphertext under , for any polynomial-time RAM program with runtime and memory . Combined with other lattice techniques, this allows us to construct:
1) Succinct randomised encodings for RAM programs, with encoder complexity and rate-1 encodings.
2) Laconic function evaluation for RAM programs, with encoder runtime bounded by and rate-1 encodings.
3) Key-policy attribute-based encryption for RAM programs, with ciphertexts of size . The same scheme can be converted to the register setting, obtaining linear CRS size in the number of parties.
All of our schemes rely on the hardness of the decomposed learning with errors (LWE) problem, along with other standard computational assumptions on lattices. The decomposed LWE problem can be interpreted as postulating the circular-security of a natural lattice-based public-key encryption scheme. To gain confidence in the assumption, we show that it is implied by the hardness of the succinct LWE problem of Wee (CRYPTO'24).
CT-LLVM: Automatic Large-Scale Constant-Time Analysis
Constant-time (CT) is a popular programming discipline to protect cryptographic libraries against micro-architectural timing attacks. One appeal of the CT discipline lies in its conceptual simplicity: a program is CT iff it has no secret-dependent data flow, control flow, or variable-timing operation. Thanks to this simplicity, the CT discipline is supported by dozens of analysis tools. However, a recent user study demonstrates that these tools are seldom used, due to poor usability and maintainability (Jancar et al., IEEE SP 2022).
In this paper, we introduce CT-LLVM, a CT analysis tool designed for usability, maintainability, and automatic large-scale analysis. Concretely, CT-LLVM is packaged as an LLVM plugin and is built as a thin layer on top of two standard LLVM analyses: def-use and alias analysis. Besides confirming known CT violations, we demonstrate the usability and scalability of CT-LLVM by automatically analyzing nine cryptographic libraries. On average, CT-LLVM can automatically and soundly analyze 36% of the functions in these libraries, proving that 61% of them are CT. The large-scale automatic analysis also reveals new vulnerabilities in these libraries. Finally, we demonstrate that CT-LLVM helps systematically mitigate compiler-introduced CT violations, which has been a long-standing issue in CT analysis.
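The CT criterion is easy to see in code. The generic Python snippet below (not taken from CT-LLVM, which analyzes LLVM IR) contrasts an early-exit comparison, whose control flow depends on secret bytes, with the standard library's constant-time hmac.compare_digest.

```python
import hmac

def leaky_equal(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:          # secret-dependent control flow: the loop exits
            return False    # sooner the earlier the guess diverges, creating
    return True             # a byte-by-byte timing oracle

def ct_equal(secret: bytes, guess: bytes) -> bool:
    # Constant-time comparison: running time is independent of the contents.
    return hmac.compare_digest(secret, guess)

tag = bytes.fromhex("aabbccdd")
assert leaky_equal(tag, tag) and ct_equal(tag, tag)
assert not leaky_equal(tag, b"\x00" * 4) and not ct_equal(tag, b"\x00" * 4)
```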
IP Masking with Generic Security Guarantees under Minimum Assumptions, and Applications
Leakage-resilient secret sharing is a fundamental building block for securing implementations against side-channel attacks. In general, such schemes correspond to a tradeoff between the complexity of the resulting masked implementations, their security guarantees and the physical assumptions they require to be effective.
In this work, we revisit the Inner-Product (IP) framework, where a secret is encoded by two vectors whose inner product equals that secret. So far, the state of the art is split in two.
On the one hand, the most efficient IP masking schemes (in which one of the two vectors is public but random) are provably secure under the same security notions (i.e., in the abstract probing model) as Boolean masking, yet at the cost of a slightly more expensive implementation. Hence, their theoretical interest and practical relevance remain unclear.
On the other hand, the most secure IP masking schemes (in which both vectors are secret) lead to expensive implementations. We improve this state of the art by investigating the leakage resilience of IP masking with public coefficients in the bounded leakage model, which depicts well implementation contexts where the physical noise is negligible. Furthermore, we do so without assuming independent leakage from the shares, which may be challenging to enforce in practice. In this model, we show that if bits are leaked from the shares of the encoding over an -bit field, then, with high probability over the choice of the public coefficients, the scheme is leakage resilient.
We additionally show that in large Mersenne-prime fields, a wise choice of the public coefficients can yield leakage resilience up to , in the case where one physical bit from each share is revealed to the adversary. The exponential rate of the leakage resilience we put forward significantly improves upon previous bounds in additive masking, where the past literature exhibited a constant exponential rate only. We additionally discuss the applications of our results, and the new research challenges they raise.
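A toy version of the IP encoding with public coefficients, over a Mersenne-prime field as discussed above; the field size, share count, and sampling are illustrative choices of ours.

```python
import secrets

P = 2**31 - 1          # a Mersenne prime, as discussed for the coefficients
N = 4                  # number of shares

def ip_encode(secret: int, R):
    """Encode `secret` as L with <L, R> = secret, for public coefficients R."""
    L = [secrets.randbelow(P) for _ in range(N - 1)]
    partial = sum(l * r for l, r in zip(L, R)) % P
    L.append((secret - partial) * pow(R[-1], -1, P) % P)
    return L

def ip_decode(L, R):
    return sum(l * r for l, r in zip(L, R)) % P

R = [secrets.randbelow(P - 1) + 1 for _ in range(N)]   # public, random, nonzero
s = 123456
L = ip_encode(s, R)
assert ip_decode(L, R) == s
# Leaking a few bits of the shares L reveals s only through the inner product
# with R; the bounded-leakage analysis quantifies this over the choice of R.
```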
Succinct Oblivious Tensor Evaluation and Applications: Adaptively-Secure Laconic Function Evaluation and Trapdoor Hashing for All Circuits
We propose the notion of succinct oblivious tensor evaluation (OTE), where two parties compute an additive secret sharing of the tensor product of two vectors by exchanging two simultaneous messages. Crucially, the size of both messages and of the CRS is independent of the dimension of the vectors.
We present a construction of OTE with optimal complexity from the standard learning with errors (LWE) problem. Then we show how this new technical tool enables a host of cryptographic primitives, all with security reducible to LWE, such as:
1) Adaptively secure laconic function evaluation for depth- functions with communication .
2) A trapdoor hash function for all functions.
3) An (optimally) succinct homomorphic secret sharing for all functions.
4) A rate- laconic oblivious transfer for batch messages, which is best possible.
In particular, we obtain the first laconic function evaluation scheme that is adaptively secure from the standard LWE assumption, improving upon Quach, Wee, and Wichs (FOCS 2018). As a key technical ingredient, we introduce a new notion of adaptive lattice encodings, which may be of independent interest.
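For concreteness, the snippet below shows only the target functionality of OTE, additively sharing the tensor (outer) product of two vectors; in the actual primitive the exchanged messages are succinct, whereas these shares are full-size. Modulus and dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
P = 2**31 - 1                       # toy prime modulus

u = rng.integers(0, P, size=5)      # Alice's vector
v = rng.integers(0, P, size=7)      # Bob's vector

tensor = np.outer(u, v) % P         # the tensor (outer) product of u and v

share_a = rng.integers(0, P, size=tensor.shape)   # Alice's additive share
share_b = (tensor - share_a) % P                  # Bob's additive share

# Neither share alone reveals anything; together they reconstruct the tensor.
assert np.array_equal((share_a + share_b) % P, tensor)
```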
Privacy-Preserving Multi-Signatures: Generic Techniques and Constructions Without Pairings
Multi-signatures allow a set of parties to produce a single signature for a common message by combining their individual signatures. The result can be verified using the aggregated public key that represents the group of signers. Very recent work by Lehmann and Özbay (PKC '24) studied the use of multi-signatures for ad-hoc privacy-preserving group signing, formalizing the notion of multi-signatures with probabilistic yet verifiable key aggregation. Moreover, they proposed new BLS-type multi-signatures, allowing users holding a long-term key pair to engage with different groups, without the aggregated key leaking anything about the corresponding group. This enables key reuse across different groups in a privacy-preserving way. Unfortunately, their technique cannot be applied to Schnorr-type multi-signatures, preventing state-of-the-art multi-signatures from benefiting from those privacy features.
In this work, we revisit the privacy framework from Lehmann and Özbay. Our first contribution is a generic lift that adds privacy to any multi-signature with deterministic key aggregation. As our second contribution, we study two concrete multi-signatures and give dedicated transforms that take advantage of the underlying structures for improved efficiency. The first is a slight modification of the popular MuSig2 scheme, achieving the strongest privacy property for free compared to the original scheme. The second is a variant of the lattice-based multi-signature scheme DualMS, making our construction the first post-quantum secure multi-signature for ad-hoc privacy-preserving group signing. The light overhead incurred by the modifications in our DualMS variant still allows us to benefit from the competitiveness of the original scheme.
How to Share an NP Statement or Combiners for Zero-Knowledge Proofs
In Crypto'19, Goyal, Jain, and Sahai (GJS) introduced the elegant notion of *secret-sharing of an NP statement* (NPSS). Roughly speaking, a t-out-of-n secret sharing of an NP statement is a reduction that maps an instance-witness pair to n instance-witness pairs, such that any subset of t-1 of them reveals no information about the original witness, while any subset of t allows full recovery of the original witness. Although the notion was formulated for general parameters, the only existing construction (due to GJS) applies solely to a special case and provides only computational privacy. In this paper, we further explore NPSS and present the following contributions.
1. **Definition.** We revisit the notion of NPSS by formulating a new definition of information-theoretically secure NPSS. This notion serves as a cryptographic analogue of standard NP-reductions and can be compiled into the GJS definition using any one-way function.
2. **Construction.** We construct information-theoretic t-out-of-n NPSS for any values of t and n, with polynomial complexity. Along the way, we present a new notion of secure multiparty computation that may be of independent interest.
3. **Applications.** Our NPSS framework enables the *non-interactive combination* of instances of zero-knowledge proofs, where only of them are sound and only are zero-knowledge, provided that . Our combiner preserves various desirable properties, such as the succinctness of the proof. Building on this, we establish the following results under the minimal assumption of one-way functions:
(i) *Standard NIZK implies NIZK in the Multi-String Model* (Groth and Ostrovsky, J. Cryptology, 2014), where security holds as long as a majority of the common reference strings were honestly generated. Previously, such a transformation was only known in the common random string model, where the reference string is uniformly distributed.
(ii) A *Designated-Prover NIZK in the Multi-String Model*, achieving a strong form of two-round Multi-Verifier Zero-Knowledge in the honest-majority setting.
(iii) A *three-round secure multiparty computation protocol* for general functions in the honest-majority setting. The round complexity of this protocol is optimal, resolving a line of research that previously relied on stronger assumptions (Asharov et al., Eurocrypt'12; Gordon et al., Crypto'15; Ananth et al., Crypto'18; Badrinarayanan et al., Asiacrypt'20; Applebaum et al., TCC'22).
Leap: A Fast, Lattice-based OPRF With Application to Private Set Intersection
Oblivious pseudorandom functions (OPRFs) are an important primitive in privacy-preserving cryptographic protocols. The growing interest in OPRFs, both in theory and practice, has led to the development of numerous constructions and variations. However, most of these constructions rely on classical assumptions. Potential future quantum attacks may limit the practicality of those OPRFs for real-world applications.
To close this gap, we introduce Leap, a novel OPRF based on heuristic lattice assumptions. Fundamentally, Leap builds upon the Spring [BBL+15] pseudorandom function (PRF), which relies on the learning with rounding assumption, and integrates techniques from multi-party computation, specifically Oblivious Transfer (OT) and Oblivious Linear Evaluation (OLE). With this combination of oblivious protocols, we construct an OPRF that evaluates in less than a millisecond on a modern computer.
Efficiency-wise, our prototype implementation achieves computation times of just 11 microseconds for the client and 750 microseconds for the server, excluding some base OT preprocessing overhead. Moreover, Leap requires an online communication cost of 23 kB per evaluation, where the client only has to send around 380 bytes online. To demonstrate the practical applicability of Leap, we present an efficient private set intersection (PSI) protocol built on top of Leap. This application highlights the potential for the integration of Leap into various privacy-preserving applications: We can compute an unbalanced set intersection with set sizes of 2^24 and 2^15 in under a minute of online time and just over two minutes overall.
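For readers new to OPRFs, the sketch below shows the classic blind-evaluate-unblind flow using the DH-based 2HashDH construction in a toy group, purely as a point of reference for the functionality; Leap achieves the same interface from lattice assumptions by entirely different means.

```python
import hashlib, secrets

# Classic DH-based OPRF (2HashDH) in a toy Schnorr group, shown only to
# illustrate the OPRF functionality that Leap realizes from lattices.
p, q, g = 1019, 509, 4

def hash_to_group(m: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(m).digest(), "big")
    return pow(h % p or 1, 2, p)          # square into the order-q subgroup

def H2(m: bytes, elem: int) -> bytes:
    return hashlib.sha256(m + elem.to_bytes(4, "big")).digest()

k = secrets.randbelow(q - 1) + 1          # server's PRF key

# Client blinds its input so the server never sees it.
m = b"client input"
r = secrets.randbelow(q - 1) + 1
blinded = pow(hash_to_group(m), r, p)

# Server evaluates on the blinded element only.
evaluated = pow(blinded, k, p)

# Client unblinds and hashes: this equals PRF_k(m) = H2(m, hash_to_group(m)^k).
output = H2(m, pow(evaluated, pow(r, -1, q), p))
assert output == H2(m, pow(hash_to_group(m), k, p))
```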
Towards Leakage-Resilient Ratcheted Key Exchange
Ratcheted key exchange (RKE) is at the heart of modern secure messaging, enabling protocol participants to continuously update their secret material to protect against full state exposure through forward security (protecting past secrets and messages) and post-compromise security (recovering from compromise). However, many practical attacks only provide the adversary with partial access to a party's secret state, an attack vector studied under the umbrella of leakage resilience. Existing models of RKE provide suboptimal guarantees under partial leakage due to inherent limitations in security under full state exposure.
In this work, we initiate the study of leakage-resilient ratcheted key exchange that provides typical guarantees under full state exposure and additional guarantees under partial state exposure between ratchets of the protocol. We consider unidirectional ratcheted key exchange (URKE) where one party acts as the sender and the other as receiver. Building on the notions introduced by Balli, Rösler and Vaudenay (ASIACRYPT 2020), we formalise a key indistinguishability game under randomness manipulation and bounded leakage (KIND), which in particular enables the adversary to continually leak a bounded amount of the sender's state between honest send calls. We construct a corresponding protocol from a key-updatable key encapsulation mechanism (kuKEM) and a leakage-resilient one-time MAC. By instantiating this MAC in the random oracle model (ROM), results from Balli, Rösler and Vaudenay imply that in the ROM, kuKEM and KIND-secure URKE are equivalent, i.e., can be built from each other. To address the strong limitations that key indistinguishability imposes on the adversary, we formalise a one-wayness game that also permits leakage on the receiver. We then propose a corresponding construction from leakage-resilient kuKEM, which we introduce, and a leakage-resilient one-time MAC. We further show that leakage-resilient kuKEM and one-way-secure URKE are equivalent in the ROM, highlighting the cost that strong one-way security entails. Our work opens exciting directions for developing leakage-resilient messaging protocols.
Private Multi-Party Neural Network Training over Z_{2^k} via Galois Rings
Secret-sharing-based multi-party computation provides effective solutions for privacy-preserving machine learning. In this paper, we present novel protocols for privacy-preserving neural network training using the Shamir secret sharing scheme over Galois rings. The specific Galois ring we use contains Z_{2^k} as a subring, and its algebraic structure lets us benefit from the Shamir scheme while performing modulo operations only with respect to a power of two instead of a prime, making our protocols more compatible with modern computer architectures. We achieve parallel processing of training data by embedding different training samples into the different coefficients of the polynomial representing a single Galois ring element, and we show that this embedding can be performed with no additional communication overhead compared to processing only one sample at a time. To evaluate our methods, we conduct private training of neural networks on the MNIST dataset between different numbers of participants. The experimental results indicate the advantages of our protocols compared to existing prime-field-based implementations in this domain.
(Multi-Input) FE for Randomized Functionalities, Revisited
Randomized functional encryption (rFE) generalizes functional encryption (FE) by incorporating randomized functionalities. Randomized multi-input functional encryption (rMIFE) extends rFE to accommodate multi-input randomized functionalities.
In this paper, we reassess the framework of rFE/rMIFE, enhancing our understanding of this primitive and laying the groundwork for more secure and flexible constructions in this field. Specifically, we make three key contributions:
- New definition: We identify a critical gap in the existing indistinguishability-based (IND) security definition for rFE/rMIFE. Notably, the current definition fails to adequately address security against malicious encryptors, a crucial requirement for rFE/rMIFE since their introduction. We propose a novel, robust IND security definition that not only addresses threats from malicious decryptors but also effectively quantifies security against malicious encryptors.
- Counterexample: To illustrate the importance of this definitional gap, we provide a counterexample of an insecure rFE scheme that meets IND security under the previous definition but explicitly fails in a natural setting (and where this failure would be precluded by our enhanced definition). Our counterexample scheme is non-trivial and meticulously designed using standard cryptographic tools, namely FE for deterministic functions, pseudorandom function (PRF), public key encryption (PKE), and simulation-sound non-interactive zero-knowledge (NIZK) proof systems.
- Adaptive unbounded-message secure construction: The only viable prior construction of rMIFE by Goldwasser et al. [EUROCRYPT 2014] (which uses indistinguishability obfuscation (iO) and other standard assumptions) has significant limitations: it permits only a pre-defined number of messages per encryption slot and operates under selective-security constraints, requiring adversaries to declare challenge ciphertext queries and "corrupted" encryption keys in advance. We address these shortcomings by employing sub-exponentially secure iO. Technically, we build on and adapt methods developed by Goyal et al. [ASIACRYPT 2016] for deterministic MIFE.
Towards a White-Box Secure Fiat-Shamir Transformation
The Fiat–Shamir transformation is a fundamental cryptographic technique widely used to convert public-coin interactive protocols into non-interactive ones. This transformation is crucial in both theoretical and practical applications, particularly in the construction of succinct non-interactive arguments (SNARKs). While its security is well-established in the random oracle model, practical implementations replace the random oracle with a concrete hash function, where security is merely assumed to carry over.
A growing body of work has given theoretical examples of protocols that remain secure under the Fiat–Shamir transformation in the random oracle model but become insecure when instantiated with any white-box implementation of the hash function. Recent research has shown how these attacks can be applied to natural cryptographic schemes, including real-world systems. These attacks rely on a general diagonalization technique, where the protocol exploits its access to the white-box implementation of the hash function. These attacks cast serious doubt on the security of cryptographic systems deployed in practice today, leaving their soundness uncertain.
We propose a new Fiat–Shamir transformation (XFS) that aims to defend against a broad family of attacks. Our approach is designed to be practical, with minimal impact on the efficiency of the prover and verifier and on the proof length. At a high level, our transformation combines the standard Fiat–Shamir technique with a new type of proof-of-work that we construct.
We provide strong evidence for the security of our transformation by proving its security in a relativized random oracle model. Specifically, we show diagonalization attacks on the standard Fiat–Shamir transformation that can be mapped to analogous attacks within this model, meaning they do not rely on a concrete instantiation of the random oracle. In contrast, we prove unconditionally that our XFS variant of the Fiat–Shamir transformation remains secure within this model. Consequently, any successful attack on XFS must deviate from known techniques and exploit aspects not captured by our model.
We hope that our transformation will help preserve the security of systems relying on the Fiat–Shamir transformation.
Fully Asymmetric Anamorphic Homomorphic Encryption from LWE
As introduced by Persiano et al. (Eurocrypt'22), anamorphic encryption (AE) is a primitive enabling private communications against a dictator that forces users to surrender their decryption keys. In its fully asymmetric flavor (defined by Catalano et al., Eurocrypt'24), anamorphic channels can work as hidden public-key mechanisms in the sense that anamorphic encryptors are not necessarily able to decrypt anamorphic ciphertexts. Unfortunately, fully asymmetric AE is hard to come by and even impossible to obtain from ordinary public-key encryption via black-box constructions. So far, only three schemes are known to rely on well-established assumptions. In this paper, we exhibit constructions from the standard LWE assumption based on Regev's cryptosystem and its dual version. In both cases, we retain the additive homomorphism of the schemes. We additionally show that dual Regev is public-key anamorphic in the sense of Persiano et al. (Crypto'24). In the FHE setting, we show that the dual GSW system provides fully asymmetric AE (while preserving its leveled homomorphism) when instantiated with binary/ternary secret keys. Along the way, we discuss the extent to which our schemes satisfy a generalization of Banfi et al.'s notion of robustness (Eurocrypt'24) to the case of homomorphically evaluated ciphertexts.
Bulletproofs for R1CS: Bridging the Completeness-Soundness Gap and a ZK Extension
Bulletproofs, introduced by Bünz, Bootle, Boneh, Poelstra, Wuille and Maxwell (IEEE S&P, 2018), is a highly efficient non-interactive argument system that does not require a trusted setup. Recently, Bünz (PhD Thesis, 2023) extended Bulletproofs to support arguments for rank-1 constraint satisfaction (R1CS) systems, a widely-used representation for arithmetic satisfiability problems. Although the argument system constructed by Bünz preserves the attractive properties of Bulletproofs, it presents a gap between its completeness and soundness guarantees: the system is complete for a restricted set of instances, but sound only for a significantly broader set. Although argument systems for such gap relations nevertheless provide clear and concrete guarantees, the slack they introduce may lead to inconsistencies or weakened guarantees within proofs of security, especially when such systems are used as building blocks within larger ones.
In this work we show that the argument system presented by Bünz can be extended to bridge the gap between its completeness and soundness, and to additionally provide honest-verifier zero-knowledge. For the extended argument system, we introduce a refined R1CS relation that captures the precise set of instances for which both completeness and soundness hold without resorting to a gap formulation. The extended argument system preserves the performance guarantees of the argument system presented by Bünz, and yields a non-interactive argument system using the Fiat-Shamir transform.
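As background, an R1CS instance asks for a witness vector z with (Az) ∘ (Bz) = Cz entrywise. The minimal example below, with matrices chosen by us, encodes the single constraint x·x = y.

```python
import numpy as np

# R1CS instance for the single constraint x * x = y, over the integers for
# simplicity (real systems work modulo a prime). Witness layout: z = (1, x, y).
A = np.array([[0, 1, 0]])   # selects x
B = np.array([[0, 1, 0]])   # selects x
C = np.array([[0, 0, 1]])   # selects y

def r1cs_satisfied(z):
    return np.array_equal((A @ z) * (B @ z), C @ z)  # Hadamard product check

assert r1cs_satisfied(np.array([1, 3, 9]))       # 3 * 3 == 9
assert not r1cs_satisfied(np.array([1, 3, 10]))  # 3 * 3 != 10
```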
On the Adaptive Security of Free-XOR-based Garbling Schemes in the Plain Model
A garbling scheme is a fundamental cryptographic primitive with numerous theoretical and practical applications. Since its inception by Yao (FOCS'82, '86), optimizing the communication and computation complexity of securely garbling circuits has been an area of active research. One such optimization, and perhaps the most fundamental, is the `Free-XOR' technique (Kolesnikov and Schneider, ICALP'08), which allows XOR gates in a garbled circuit to require no garbled-gate representation, and therefore no communication.
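A few lines suffice to show the Free-XOR invariant this paragraph describes: all wire labels differ by one global offset, so an XOR gate's output label is simply the XOR of its input labels and no garbled table is needed. Label lengths and sampling below are illustrative.

```python
import os

KAPPA = 16  # label length in bytes (illustrative)

def bxor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

delta = os.urandom(KAPPA)                 # global offset, secret to the garbler

def fresh_wire():
    """Labels (W0, W1) for bits 0/1 with the Free-XOR invariant W1 = W0 ^ delta."""
    w0 = os.urandom(KAPPA)
    return w0, bxor(w0, delta)

a0, a1 = fresh_wire()
b0, b1 = fresh_wire()

# XOR gate: the output zero-label is A0 ^ B0; no garbled table is transmitted.
c0 = bxor(a0, b0)
c1 = bxor(c0, delta)

# The evaluator, holding one label per input wire, just XORs them:
for bit_a, la in ((0, a0), (1, a1)):
    for bit_b, lb in ((0, b0), (1, b1)):
        assert bxor(la, lb) == (c0 if bit_a ^ bit_b == 0 else c1)
```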
Since then, several works have designed and analysed the security of schemes that adopt the Free-XOR optimisation. In particular: (1) Applebaum (JoC'16) proved that it can be securely instantiated assuming symmetric-key encryption satisfying a notion called RK-KDM security; and (2) Zahur, Rosulek and Evans (Eurocrypt'15) proposed the so-called `Half Gates' scheme and proved that it can be instantiated assuming hash functions satisfying a notion called CCR security. Although both schemes have been proven selectively secure, prior work leaves open whether they satisfy a stronger security notion, adaptive security, in the plain model.
In this work, we formally show that the selective security of these two schemes cannot be lifted to adaptive security under the same assumptions. To establish these barriers, we adopt techniques from the work of Kamath et al (Crypto'21), who proved similar negative results for Yao's garbling. We use that as a starting point and introduce new techniques tailored towards addressing Free-XOR-based schemes.