Papers updated in the last 365 days (2800 results)

Last updated:  2024-12-11
One-Shot Native Proofs of Non-Native Operations in Incrementally Verifiable Computations
Tohru Kohrita, Patrick Towa, and Zachary J. Williamson
Proving non-native operations is still a bottleneck in existing incrementally verifiable computations. Prior attempts to solve this issue either simply improve the efficiency of proofs of non-native operations or require folding instances in each curve of a cycle. This paper shows how to avoid in-circuit proofs of non-native operations in the incremental steps altogether, and to only record them in some auxiliary proof information. These operations are proved natively at the end of the computation, at the cost of only a small constant number (four or five) of non-native field multiplications to go from a non-native operation record to a native one. To formalise the security guarantees of the scheme, the paper introduces the concept of incrementally verifiable computation with auxiliary proof information, a relaxation of the standard notion of incrementally verifiable computation. Knowledge soundness now guarantees the correctness of a computation only if the piece of information attached to a proof is valid. This new primitive should thus only be used if there is an efficient mechanism to verify the validity of that information. This relaxation is exactly what enables a construction which does not require in-circuit proofs of non-native operations during the incremental part of the computation. Instantiated in the Plonk arithmetisation, the construction leads to savings in circuit-gate count (compared to standard folding-based constructions) of at least one order of magnitude, and of up to a factor of 50.
Last updated:  2024-12-10
Synchronous Distributed Key Generation without Broadcasts
Nibesh Shrestha, Adithya Bhat, Aniket Kate, and Kartik Nayak
Distributed key generation (DKG) is an important building block in designing many efficient distributed protocols. In this work, we initiate the study of the communication complexity and latency of distributed key generation protocols in a synchronous point-to-point network. Our key result is the first synchronous DKG protocol for discrete log-based cryptosystems with $O(\kappa n^3)$ communication complexity ($\kappa$ denotes a security parameter) that tolerates $t < n/2$ Byzantine faults among $n$ parties. We show two variants of the protocol: a deterministic protocol with $O(t\Delta)$ latency and a randomized protocol with $O(\Delta)$ latency in expectation, where $\Delta$ denotes the bounded synchronous delay. In the process of achieving our results, we design (1) a gradecast protocol with optimal communication complexity of $O(\kappa n^2)$ for linear-sized inputs and latency of $O(\Delta)$, (2) a primitive called ``recoverable set of shares'' for ensuring recovery of shared secrets, (3) an oblivious leader election protocol with $O(\kappa n^3)$ communication and $O(\Delta)$ latency, and (4) a multi-valued validated Byzantine agreement (MVBA) protocol with $O(\kappa n^3)$ communication complexity for linear-sized inputs and $O(\Delta)$ latency in expectation. Each of these primitives may be of independent interest.
Last updated:  2024-12-10
Arke: Scalable and Byzantine Fault Tolerant Privacy-Preserving Contact Discovery
Nicolas Mohnblatt, Alberto Sonnino, Kobi Gurkan, and Philipp Jovanovic
Contact discovery is a crucial component of social applications, facilitating interactions between registered contacts. This work introduces Arke, a novel approach to contact discovery that addresses the limitations of existing solutions in terms of privacy, scalability, and reliance on trusted third parties. Arke ensures the unlinkability of user interactions, mitigates enumeration attacks, and operates without single points of failure or trust. Notably, Arke is the first contact discovery system whose performance is independent of the total number of users and the first that can operate in a Byzantine setting. It achieves its privacy goals through an unlinkable handshake mechanism built on top of an identity-based non-interactive key exchange. By leveraging a custom distributed architecture, Arke forgoes the expense of consensus to achieve scalability while maintaining consistency in a Byzantine fault tolerant environment. Performance evaluations demonstrate that Arke can support enough throughput to operate at a planetary scale while maintaining sub-second latencies in a large geo-distributed setting.
Last updated:  2024-12-10
GCKSign: Simple and Efficient Signatures from Generalized Compact Knapsacks
Joo Woo, Kwangsu Lee, and Jong Hwan Park
In 2009, Lyubashevsky proposed a lattice-based signature scheme by applying the Fiat-Shamir transformation and proved its security under the generalized compact knapsack (GCK) problem. This scheme has a simple structure but large signature and key sizes due to the requirements of its security reduction. Dilithium, which was submitted to the NIST Post-Quantum Cryptography standardization and selected as one of the final candidates, improves Lyubashevsky's signature scheme and decreases key and signature sizes by modifying the form of the public key and including additional steps in the key generation, signing, and verification algorithms. Thus, Dilithium has a more complex structure to implement than Lyubashevsky's scheme. To combine the strengths of both signature schemes, we modify Lyubashevsky's signature scheme and present a new security proof that removes this requirement. As a result, we propose GCKSign, a simple and practical signature scheme based on the hardness of a new GCK assumption, called target-modified one-wayness of the GCK function. Compared to Dilithium at NIST security level III, our signature size is 40 percent smaller, the sum of signature and public key sizes is 25 percent smaller, and the secret key size is 90 percent smaller. Furthermore, owing to the simplicity of our structure, the key generation, signing, and verification algorithms of our scheme run 2.4$\times$, 1.7$\times$, and 2.0$\times$ faster than those of Dilithium, respectively.
Last updated:  2024-12-09
SoCureLLM: An LLM-driven Approach for Large-Scale System-on-Chip Security Verification and Policy Generation
Shams Tarek, Dipayan Saha, Sujan Kumar Saha, Mark Tehranipoor, and Farimah Farahmandi
Contemporary methods for hardware security verification struggle with adaptability, scalability, and availability due to the increasing complexity of modern system-on-chips (SoCs). Large language models (LLMs) have emerged as a viable approach to address these shortcomings in security verification because of their natural language understanding, advanced reasoning, and knowledge transfer capabilities. However, their application to large designs is limited by inherent token limitations and memorization constraints. In this paper, we introduce SoCureLLM, an LLM-based framework that excels at identifying security vulnerabilities within SoC designs and creating a comprehensive security policy database. Our framework is adaptable and adept at processing varied, large-scale designs, overcoming the abovementioned limitations of LLMs. In evaluations, SoCureLLM detected 76.47% of security bugs across three vulnerable RISC-V SoCs, outperforming state-of-the-art security verification methods. Furthermore, assessing three additional large-scale RISC-V SoC designs against various threat models led to the formulation of 84 novel security policies, enriching the security policy database. Such policies previously required extensive manual effort to craft; the newly generated ones can be used as guidelines for developing secure SoC designs.
Last updated:  2024-12-09
Private Eyes: Zero-Leakage Iris Searchable Encryption
Julie Ha, Chloe Cachet, Luke Demarest, Sohaib Ahmad, and Benjamin Fuller
This work introduces Private Eyes, the first zero-leakage biometric database. The only leakage of the system is unavoidable: 1) the log of the dataset size and 2) the fact that a query occurred. Private Eyes is built from oblivious symmetric searchable encryption. Approximate proximity queries are used: given a noisy reading of a biometric, the goal is to retrieve all stored records that are close enough according to a distance metric. Private Eyes combines locality-sensitive hashes, or LSHs (Indyk and Motwani, STOC 1998), with oblivious maps, which map keywords to values. One computes many LSHs of each record in the database and uses these hashes as keywords in the oblivious map, with the matching biometric readings concatenated as the value. At search time, given a noisy reading, one computes the LSHs and retrieves the disjunction of the resulting values from the map. The underlying oblivious map needs to answer disjunction queries efficiently. We focus on the iris biometric, which requires a large number of LSHs, approximately $1000$. Boldyreva and Tang's (PoPETS 2021) design yields a suitable map for a small number of LSHs (their application was zero-leakage $k$-nearest-neighbor search). Our solution is a zero-leakage disjunctive map designed for the setting where most clauses do not match any records. For the iris, on average at most $6\%$ of LSHs match any stored value. We evaluate using the ND-0405 dataset; this dataset has $356$ irises suitable for testing. To scale our evaluation, we use a generative adversarial network to produce synthetic irises. Accurate statistics on sizes beyond available datasets are crucial to optimizing the cryptographic primitives. This tool may be of independent interest. For the largest tested parameters, a $5000$-iris synthetic database, a search requires $18$ rounds of communication and $25$ms of parallel computation. Our scheme is implemented and open-sourced.
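To make the retrieval pattern concrete, here is a minimal plaintext Python sketch of the LSH-to-disjunction lookup, with a toy bit-sampling LSH and an ordinary dictionary standing in for the oblivious map; all names and sizes are illustrative, not the paper's.

    import random
    from collections import defaultdict

    random.seed(1)
    DIM, NUM_LSH, BITS = 256, 16, 8   # toy sizes; the paper needs ~1000 LSHs per iris

    # Bit-sampling LSH for Hamming distance: each hash reads a fixed subset of positions.
    lshes = [random.sample(range(DIM), BITS) for _ in range(NUM_LSH)]

    def keywords(template):
        # One keyword per LSH; nearby templates collide on some keywords.
        return [(i, tuple(template[j] for j in pos)) for i, pos in enumerate(lshes)]

    index = defaultdict(set)          # stand-in for the oblivious map

    def enroll(record_id, template):
        for kw in keywords(template):
            index[kw].add(record_id)

    def search(noisy_template):
        # Disjunction query: union of the values retrieved by every clause.
        hits = set()
        for kw in keywords(noisy_template):
            hits |= index.get(kw, set())
        return hits

    enrolled = [random.randint(0, 1) for _ in range(DIM)]
    enroll("iris-0", enrolled)
    noisy = enrolled[:]
    for j in random.sample(range(DIM), 10):   # flip 10 bits: a noisy re-reading
        noisy[j] ^= 1
    print(search(noisy))                      # typically {'iris-0'}: some clauses survive

With 10 of 256 bits flipped, each 8-bit clause survives with probability roughly $(1-10/256)^8 \approx 0.73$, so several of the 16 clauses still match; this mirrors the paper's observation that only a small fraction of clauses match any stored value.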
Last updated:  2024-12-09
The Cost of Maintaining Keys in Dynamic Groups with Applications to Multicast Encryption and Group Messaging
Michael Anastos, Benedikt Auerbach, Mirza Ahad Baig, Miguel Cueto Noval, Matthew Kwan, Guillermo Pascual-Perez, and Krzysztof Pietrzak
In this work we prove lower bounds on the (communication) cost of maintaining a shared key among a dynamic group of users. Being ``dynamic'' means one can add and remove users from the group. This captures important protocols like multicast encryption (ME) and continuous group-key agreement (CGKA), which is the primitive underlying many group messaging applications. We prove our bounds in a combinatorial setting where the state of the protocol progresses in rounds. The state of the protocol in each round is captured by a set system, with each of its elements specifying a set of users who share a secret key. We show this combinatorial model implies bounds in symbolic models for ME and CGKA that capture, as building blocks, PRGs, PRFs, dual PRFs, secret sharing, and symmetric encryption in the setting of ME, and PRGs, PRFs, dual PRFs, secret sharing, public-key encryption, and key-updatable public-key encryption in the setting of CGKA. The models are related to the ones used by Micciancio and Panjwani (Eurocrypt'04) and Bienstock et al. (TCC'20) to analyze ME and CGKA, respectively. We prove, using Bollobás' set pairs inequality, that the cost (number of uploaded ciphertexts) for replacing a set of $d$ users in a group of size $n$ is $\Omega(d\ln(n/d))$. Our lower bound is asymptotically tight; it both improves on the $\Omega(d)$ bound of Bienstock et al. (TCC'20) and generalizes a result of Micciancio and Panjwani (Eurocrypt'04), who proved a lower bound of $\Omega(\log(n))$ for $d=1$.
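For intuition on how the new bound relates to the earlier ones, a short gloss (ours, not the paper's):

    \text{Replacing } d \text{ users in a group of size } n \text{ costs } \Omega\!\left(d \ln\tfrac{n}{d}\right) \text{ ciphertexts.}
    % d = 1:     \Omega(\ln n), recovering Micciancio and Panjwani's \Omega(\log n) bound.
    % d \le n/e: \ln(n/d) \ge 1, so d \ln(n/d) \ge d, subsuming and strictly
    %            improving Bienstock et al.'s \Omega(d) whenever d = o(n).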
Last updated:  2024-12-09
$\textsf{ThorPIR}$: Single Server PIR via Homomorphic Thorp Shuffles
Ben Fisch, Arthur Lazzaretti, Zeyu Liu, and Charalampos Papamanthou
Private Information Retrieval (PIR) is a two-player protocol where the client, given some query $x \in [N]$, interacts with the server, which holds an $N$-bit string $\textsf{DB}$, in order to privately retrieve $\textsf{DB}[x]$. In this work, we focus on the single-server client-preprocessing model, initially proposed by Corrigan-Gibbs and Kogan (EUROCRYPT 2020), where the client and server first run a joint preprocessing algorithm, after which the client can retrieve elements from $\textsf{DB}$ privately in time sublinear in $N$. Most known constructions of single-server client-preprocessing PIR follow one of two paradigms: they feature either (1) a linear-bandwidth offline phase where the client downloads the whole database from the server, or (2) a sublinear-bandwidth offline phase in which, however, the server has to compute a large-depth ($\Omega_\lambda (N)$) circuit under fully-homomorphic encryption (FHE) in order to execute the preprocessing phase. In this paper, we propose $\textsf{ThorPIR}$, a single-server client-preprocessing PIR scheme which achieves both sublinear offline bandwidth (asymptotically and concretely) and a low-depth, highly parallelizable preprocessing circuit. Our main insight is to use, and significantly optimize the concrete circuit depth of, a much more efficient shuffling technique needed during preprocessing, called the Thorp shuffle. A Thorp shuffle satisfies a weaker security property (e.g., compared to an AES permutation) which is ``just enough'' for our construction. We estimate that with a powerful server (e.g., hundreds of thousands of GPUs), $\textsf{ThorPIR}$'s end-to-end preprocessing time is faster than any prior work. Additionally, compared to prior FHE-based works with sublinear bandwidth, our construction is at least around $10,000$ times faster.
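For reference, one round of the Thorp shuffle in its maximally unbalanced Feistel formulation, as a plaintext Python sketch; the PRF stand-in and all parameters are ours, and a real instantiation would evaluate the PRF homomorphically.

    import hashlib

    def prf_bit(key: bytes, rnd: int, pair: int) -> int:
        # Stand-in PRF giving one pseudorandom bit per (round, pair).
        h = hashlib.sha256(key + rnd.to_bytes(4, "big") + pair.to_bytes(8, "big")).digest()
        return h[0] & 1

    def thorp_round(perm, key, rnd):
        # Pair positions i and i + N/2; drop the pair into output slots 2i and
        # 2i+1 in an order chosen by the PRF bit.
        n = len(perm)
        out = [None] * n
        for i in range(n // 2):
            a, b = perm[i], perm[i + n // 2]
            if prf_bit(key, rnd, i):
                a, b = b, a
            out[2 * i], out[2 * i + 1] = a, b
        return out

    def thorp_shuffle(n, key, rounds):
        perm = list(range(n))
        for r in range(rounds):
            perm = thorp_round(perm, key, r)
        return perm

    print(thorp_shuffle(8, b"key", rounds=6))

Each round is shallow and embarrassingly parallel, which is what keeps the preprocessing circuit low-depth; the security of the shuffle improves with the round count.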
Last updated:  2024-12-09
Conjunctive Searchable Symmetric Encryption from Hard Lattices
Debadrita Talapatra, Sikhar Patranabis, and Debdeep Mukhopadhyay
Searchable Symmetric Encryption (SSE) supports efficient keyword searches over encrypted outsourced document collections while minimizing information leakage. All practically efficient SSE schemes supporting conjunctive queries rely crucially on quantum-broken cryptographic assumptions (such as discrete-log hard groups) to achieve compact storage and fast query processing. On the other hand, quantum-safe SSE schemes based on purely symmetric-key crypto-primitives either do not support conjunctive searches, or are practically inefficient. In particular, there exists no quantum-safe yet practically efficient conjunctive SSE scheme from lattice-based hardness assumptions. We solve this open question by proposing Oblivious Post-Quantum Secure Cross Tags (OQXT) – the first lattice-based practically efficient and highly scalable conjunctive SSE scheme. The technical centerpiece of OQXT is a novel oblivious cross-tag generation protocol with provable security guarantees derived from lattice-based hardness assumptions. We prove the post-quantum simulation security of OQXT with respect to a rigorously defined and thoroughly analyzed leakage profile. We then present a prototype implementation of OQXT and experimentally validate its practical efficiency and scalability over extremely large real-world databases. Our experiments show that OQXT has competitive end-to-end search latency when compared with the best (quantum-broken) conjunctive SSE schemes.
Last updated:  2024-12-09
The Brave New World of Global Generic Groups and UC-Secure Zero-Overhead SNARKs
Jan Bobolz, Pooya Farshim, Markulf Kohlweiss, and Akira Takahashi
The universal composability (UC) model provides strong security guarantees for protocols used in arbitrary contexts. While these guarantees are highly desirable, in practice, schemes with a standalone proof of security, such as the Groth16 proof system, are preferred. This is because UC security typically comes with undesirable overhead, sometimes making UC-secure schemes significantly less efficient than their standalone counterparts. We establish the UC security of Groth16 without any significant overhead. In the spirit of global random oracles, we design a global (restricted) observable generic group functionality that models a natural notion of observability: computations that trace back to group elements derived from generators of other sessions are observable. This notion turns out to be surprisingly subtle to formalize. We provide a general framework for proving protocols secure in the presence of global generic groups, which we then apply to Groth16.
Last updated:  2024-12-09
Oblivious Identity-based Encryption (IBE Secure Against an Adversarial KGC)
Katerina Mitrokotsa, Sayantan Mukherjee, and Jenit Tomy
Identity-Based Encryption (IBE) was introduced in order to reduce the cost associated with Public Key Infrastructure systems. IBE allows users to request a trusted Key Generation Centre (KGC) for a secret key on a given identity, without the need to manage public keys. However, one of the main concerns of IBE is that the KGC has the power to decrypt all ciphertexts as it has access to all (identity, secret key) pairs. To address this issue, Chow (PKC 2009) introduced a new security property against the KGC by employing a new trusted party called the Identity Certifying Authority (ICA). Emura et al. (ESORICS 2019) formalized this notion and proposed a construction in the random oracle model. In this work, we first identify several existing IBE schemes in which the KGC can decrypt a ciphertext even without knowing the receiver's identity. This paves the way for formalizing new capabilities of the KGC. We then propose a new security definition that captures an adversarial KGC with these newly identified capabilities, and we remove the requirement of an additional trusted party. Finally, we propose a new IBE construction, provably secure in the standard model against an adversarial KGC and corrupted users, that allows users to ask the KGC for a secret key on an identity without leaking any information about the identity to the KGC. Our construction is achieved in composite-order pairing groups and requires essentially optimal parameters.
Last updated:  2024-12-09
Efficient Fuzzy Private Set Intersection from Fuzzy Mapping
Ying Gao, Lin Qi, Xiang Liu, Yuanchao Luo, and Longxin Wang
Private set intersection (PSI) allows Sender, holding a set \(X\), and Receiver, holding a set \(Y\), to compute only the intersection \(X\cap Y\) for Receiver. We focus on a variant of PSI, called fuzzy PSI (FPSI), where Receiver only gets the points in \(X\) that are at a distance not greater than a threshold from some point in \(Y\). Most current FPSI approaches first pick out pairs of points that are potentially close and then determine whether the distance of each selected pair is indeed small enough to yield the FPSI result. Their complexity bottlenecks stem from the excessive number of point pairs selected by the first picking process. Regarding this process, we consider a more general notion, called fuzzy mapping (Fmap), which can map each point of the two parties to a set of identifiers, with closely located points having the same identifier, which forms the selected point pairs. We initiate the formal study of Fmap and show novel Fmap instances for Hamming and \(L_\infty\) distances that reduce the number of selected pairs. We demonstrate the powerful capability of Fmap, with some superior properties, in constructing FPSI variants, and provide a generic construction from Fmap to FPSI. Our new Fmap instances lead to the fastest semi-honest secure FPSI protocols in high-dimensional space to date, for both Hamming and general \(L_{\mathsf p\in [1, \infty]}\) distances. For Hamming distance, our protocol is the first to achieve strictly linear complexity in the input sizes. For \(L_{\mathsf p\in [1, \infty]}\) distance, our protocol is the first to achieve complexity linear in the input sizes, dimension, and threshold.
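To illustrate the Fmap notion with a toy example (ours, not one of the paper's instances): for \(L_\infty\) distance with threshold \(\delta\), tiling space into cells of side \(2\delta\) gives a mapping under which close points share an identifier.

    import itertools, math

    def cell_of(point, delta):
        # Axis-aligned cell of side 2*delta containing the point.
        return tuple(math.floor(c / (2 * delta)) for c in point)

    def sender_ids(point, delta):
        # Sender maps each point to a single identifier: its own cell.
        return {cell_of(point, delta)}

    def receiver_ids(point, delta):
        # Receiver also emits all 3^d neighboring cells, so any sender point
        # within L-infinity distance delta shares at least one identifier.
        base = cell_of(point, delta)
        return {tuple(b + off for b, off in zip(base, offs))
                for offs in itertools.product((-1, 0, 1), repeat=len(base))}

    delta = 5
    x, y = (103, 42), (100, 44)       # within L-infinity distance 5
    print(sender_ids(x, delta) & receiver_ids(y, delta))   # non-empty: pair selected

The \(3^d\) identifiers per receiver point are exactly the kind of blow-up in selected pairs that the paper's new Fmap instances aim to reduce.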
Last updated:  2024-12-09
LURK: Lambda, the Ultimate Recursive Knowledge
Nada Amin, John Burnham, François Garillot, Rosario Gennaro, Chhi'mèd Künzang, Daniel Rogozin, and Cameron Wong
We introduce Lurk, a new LISP-based programming language for zk-SNARKs. Traditional approaches to programming over zero-knowledge proofs require compiling the desired computation into a flat circuit, imposing serious constraints on the size and complexity of computations that can be achieved in practice. Lurk programs are instead provided as data to the universal Lurk interpreter circuit, allowing the resulting language to be Turing-complete without compromising the size of the resulting proof artifacts. We describe the design and theory behind Lurk, and detail how its implementation of content addressing can be used to sidestep many of the usual concerns of programming zero-knowledge proofs.
Last updated:  2024-12-08
Consistency-or-Die: Consistency for Key Transparency
Joakim Brorsson, Elena Pagnin, Bernardo David, and Paul Stankovski Wagner
This paper proposes a new consistency protocol that protects a key transparency log against split-view attacks and, contrary to all previous work, does not rely on small committees of known external auditors, out-of-band channels, or blockchains (full broadcast systems). Our approach is to use a mechanism for cryptographically selecting a small committee of random and initially undisclosed users, who are then tasked with endorsing the current view of the log. The name of our protocol, Consistency-or-Die (CoD), reflects that users are guaranteed to know whether they are in a consistent state, and upon spotting an inconsistency in the key transparency log, users stop using this resource and become inactive (die). CoD relies on well-established cryptographic building blocks, such as verifiable random functions and key-evolving signatures, for which lightweight constructions exist. We provide a novel statistical analysis for identifying optimal quorum sizes (minimal numbers of endorsers for a view) for various security levels and percentages of malicious users. Our experiments support that CoD is practical and can run in the background on mid-tier smartphones, for large-scale systems with billions of users.
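A minimal sketch of the self-selection idea, assuming VRF-based sortition; HMAC stands in for the VRF here (unlike a real VRF it is not publicly verifiable), and the threshold and names are illustrative.

    import hashlib, hmac

    THRESHOLD = 0.001   # target fraction of users self-selected per view

    def endorsement(user_sk: bytes, view_digest: bytes):
        # Stand-in for a VRF evaluation on the current view of the log.
        out = hmac.new(user_sk, view_digest, hashlib.sha256).digest()
        if int.from_bytes(out, "big") / 2**256 < THRESHOLD:
            return out    # selected: publish as an endorsement of this view
        return None       # not selected: stay silent (and undisclosed)

Because each user evaluates the function locally on its own secret key, the committee is unpredictable and undisclosed until its members speak, which is what blocks targeted corruption in a split-view attack.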
Last updated:  2024-12-07
Ballot secrecy: Security definition, sufficient conditions, and analysis of Helios
Ben Smyth
We propose a definition of ballot secrecy as an indistinguishability game in the computational model of cryptography. Our definition improves upon earlier definitions to ensure ballot secrecy is preserved in the presence of an adversary that controls ballot collection. We also propose a definition of ballot independence as an adaptation of an indistinguishability game for asymmetric encryption. We prove relations between our definitions. In particular, we prove ballot independence is sufficient for ballot secrecy in voting systems with zero-knowledge tallying proofs. Moreover, we prove that building systems from non-malleable asymmetric encryption schemes suffices for ballot secrecy, thereby eliminating the expense of ballot-secrecy proofs for a class of encryption-based voting systems. We demonstrate the applicability of our results by analysing the Helios voting system and its mixnet variant. Our analysis reveals that Helios does not satisfy ballot secrecy in the presence of an adversary that controls ballot collection. The vulnerability cannot be detected by earlier definitions of ballot secrecy, because they do not consider such adversaries. We adopt non-malleable ballots as a fix and prove that the fixed system satisfies ballot secrecy.
Last updated:  2024-12-06
DGMT: A Fully Dynamic Group Signature From Symmetric-key Primitives
Mojtaba Fadavi, Sabyasachi Karati, Aylar Erfanian, and Reihaneh Safavi-Naini
A group signature allows a user to sign a message anonymously on behalf of a group and provides accountability by means of an opening authority who can ``open'' a signature and reveal the signer's identity. Group signatures have been widely used in privacy-preserving applications including anonymous attestation and anonymous authentication. Fully dynamic group signatures allow new members to join the group and existing members to be revoked if needed. Symmetric-key based group signature schemes are post-quantum group signatures whose security relies on the security of symmetric-key primitives such as cryptographic hash functions and pseudorandom functions. In this paper, we design a symmetric-key based fully dynamic group signature scheme, called DGMT, that redesigns DGM (Buser et al. ESORICS 2019) and removes two important shortcomings that limit its application in practice: (i) interaction with the group manager for signature verification, and (ii) the need for storing and managing an unacceptably large amount of data by the group manager. We prove the security of DGMT (unforgeability, anonymity, and traceability) and give a full implementation of the system. Compared to all known post-quantum group signature schemes with the same security level, DGMT has the shortest signature size. We also analyze the DGM signature revocation approach and show that, despite its conceptual novelty, it has significant hidden costs that make it much more costly than the traditional revocation-list approach.
Last updated:  2024-12-06
Natively Compatible Super-Efficient Lookup Arguments and How to Apply Them
Matteo Campanelli, Dario Fiore, and Rosario Gennaro
Lookup arguments allow an untrusted prover to commit to a vector $\vec f \in \mathbb{F}^n$ and show that its entries reside in a predetermined table $\vec t \in \mathbb{F}^N$. One of their key applications is to augment general-purpose SNARKs, making them more efficient on subcomputations that are hard to arithmetize. In order for this "augmentation" to work out, a SNARK and a lookup argument should have some basic level of compatibility with respect to the commitment on $\vec f$. However, not all existing efficient lookup arguments are fully compatible with other efficient general-purpose SNARKs. This incompatibility can, for example, occur whenever SNARKs use multilinear extensions under the hood (e.g., Spartan) but the lookup argument is univariate in flavor (e.g., Caulk or $\mathsf{cq}$). In this paper we discuss how to widen the spectrum of "super-efficient" lookup arguments (where the proving time is independent of the size of the lookup table): we present a new construction inspired by $\mathsf{cq}$ and based on multilinear polynomial encodings (MLE). Our construction is the first lookup argument for any table that is also natively compatible with MLE-based SNARKs at costs comparable with other state-of-the-art lookup arguments, particularly when the large table is unstructured. This case arises in various applications, such as using lookups to prove that the program in a virtual machine is fetching the right instruction, or proving the correct computation of floating-point arithmetic (e.g., in verifiable machine learning). We also introduce a second, more general construction: a compiler that, given any super-efficient lookup argument compatible with univariate SNARKs, converts it into a lookup argument compatible with MLE-based SNARKs with very small overhead. Finally, we discuss SNARKs that we can compose with our constructions, as well as approaches for making this composition work effectively.
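For context, an MLE-based SNARK encodes a vector $\vec f$ by its multilinear extension $\tilde f(x)=\sum_{b\in\{0,1\}^n} f(b)\prod_{i}\big(x_i b_i+(1-x_i)(1-b_i)\big)$, so a compatible lookup argument must commit to $\vec f$ in this form. A naive evaluation sketch in Python (illustrative only; real provers use a linear-time folding algorithm):

    P = 2**61 - 1   # toy prime field

    def mle_eval(f, x):
        # Evaluate the multilinear extension of the table f (length 2^n) at
        # the point x in F^n, directly from the defining sum.
        acc = 0
        for b in range(len(f)):
            term = f[b]
            for i in range(len(x)):
                bit = (b >> i) & 1
                term = term * ((x[i] % P) if bit else ((1 - x[i]) % P)) % P
            acc = (acc + term) % P
        return acc

    f = [3, 1, 4, 1]                    # values on {0,1}^2
    assert mle_eval(f, [0, 1]) == f[2]  # agrees with f on the Boolean cube
    print(mle_eval(f, [5, 7]))          # a point outside the cube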
Last updated:  2024-12-06
Generic Security of the Ascon Mode: On the Power of Key Blinding
Charlotte Lefevre and Bart Mennink
The Ascon authenticated encryption scheme has recently been selected as the winner of the NIST Lightweight Cryptography competition. Despite its fame, however, there is no known overall generic security treatment of its mode: most importantly, all earlier related generic security results only use the key to initialize the state and do not take into account key blinding internally and at the end. In this work we present a thorough security analysis of the Ascon mode: we consider multi-user and possibly nonce-misuse security by default, but more importantly, we particularly investigate the role of key blinding. More technically, our analysis includes an authenticity study in various attack settings. This includes a security model of authenticity under state recovery, capturing the idea that the mode aims to still guarantee authenticity and security against key recovery even if an inner state is revealed to the adversary in some way, for instance through leakage. We prove that Ascon satisfies this security property, thanks to its unique key blinding technique.
Last updated:  2024-12-06
Selfish Mining Time-Averaged Analysis in Bitcoin: Is Orphan Reporting an Effective Countermeasure?
Roozbeh Sarenche, Ren Zhang, Svetla Nikova, and Bart Preneel
A Bitcoin miner who owns a sufficient amount of mining power can perform selfish mining to increase its relative revenue. Studies have demonstrated that the time-averaged profit of a selfish miner starts to rise once the mining difficulty level gets adjusted in favor of the attacker. Selfish mining profitability lies in the fact that orphan blocks are not incorporated into the current version of Bitcoin's difficulty adjustment mechanism (DAM). Therefore, it is believed that considering the count of orphan blocks in the DAM can result in complete unprofitability for selfish mining. In this paper, we disprove this belief by providing a formal analysis of the selfish mining time-averaged profit. We present a precise definition of the orphan blocks that can be incorporated into calculating the next epoch's target and then introduce two modified versions of DAM in which both main-chain blocks and orphan blocks are incorporated. We propose two versions of smart intermittent selfish mining, where the first one dominates the normal intermittent selfish mining, and the second one results in selfish mining profitability under the modified DAMs. Moreover, we present the orphan exclusion attack, with which the attacker can stop honest miners from reporting orphan blocks. Using combinatorial tools, we analyze the profitability of selfish mining accompanied by the orphan exclusion attack under the modified DAMs. Our results show that even when considering orphan blocks in the DAM, selfish mining can still be profitable. However, the level of profitability under the modified DAMs is significantly lower than that observed under the current version of Bitcoin DAM, suggesting that orphan reporting can be considered an effective countermeasure against selfish mining in the presence of payoff-maximizing miners.
Last updated:  2024-12-06
A New Security Evaluation Method Based on Resultant for Arithmetic-Oriented Algorithms
Hong-Sen Yang, Qun-Xiong Zheng, Jing Yang, Quan-feng Liu, and Deng Tang
The rapid development of advanced cryptographic applications like multi-party computation (MPC), fully homomorphic encryption (FHE), and zero-knowledge (ZK) proofs has motivated the design of so-called arithmetic-oriented (AO) primitives. Efficient AO primitives typically build over large fields and use large S-boxes. This design philosophy brings difficulties to the cryptanalysis of these primitives, as classical cryptanalysis methods do not apply well. The generally recognized attacks against these primitives are algebraic attacks, especially Groebner basis attacks. Thus, the numbers of security rounds are usually derived from the complexity of solving the system of algebraic equations using Groebner bases. In this paper, we propose a novel framework for algebraic attacks against AO primitives. Instead of using Groebner bases, we use resultants to solve a system of multivariate equations, which can better exploit the algebraic structures of AO primitives. We employ several techniques to reduce the dimensions of the resultants and avoid rapid increases in degrees, including meet-in-the-middle modeling, variable substitutions, and fast Lagrange interpolation. We apply our attack to three mainstream AO cryptographic primitives: Rescue-Prime, Anemoi, and Jarvis. For Rescue-Prime, we theoretically prove that the final univariate equation has degree at most a specific power of three, and we practically attack five rounds for the first time. We attack full-round Anemoi with complexity $2^{110.10}$, although it has been claimed to provide 127 bits of security. We also give the first practical attack against eight rounds of Anemoi over a 55-bit prime field. For Jarvis, we improve the existing practical attack by a factor of 100. We therefore point out that our analysis framework can be used as a new evaluation method for AO designs.
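A toy illustration of elimination by resultants (our example, far simpler than the paper's meet-in-the-middle models), using SymPy:

    from sympy import symbols, resultant, Poly

    x, y = symbols('x y')
    # Two toy "round equations" of a power-map primitive over the rationals.
    f = x**3 + y - 5
    g = y**3 + x - 5
    # Res_y(f, g) is univariate in x and vanishes on every x that extends to a
    # common root of f and g; no Groebner basis computation is involved.
    r = resultant(f, g, y)
    print(Poly(r, x).degree())       # 9: same as substituting y = 5 - x**3 into g
    print(Poly(r, x).nroots()[:3])   # numeric candidates for x; back-substitute for y

The art of the attack is keeping such resultants small: modeling the rounds so that degrees, and the dimensions of intermediate resultants, stay manageable.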
Last updated:  2024-12-06
Accelerating SLH-DSA by Two Orders of Magnitude with a Single Hash Unit
Markku-Juhani O. Saarinen
We report on efficient and secure hardware implementation techniques for the FIPS 205 SLH-DSA Hash-Based Signature Standard. We demonstrate that very significant overall performance gains can be obtained from hardware that optimizes the padding formats and iterative hashing processes specific to SLH-DSA. A prototype implementation, SLotH, contains Keccak/SHAKE, SHA2-256, and SHA2-512 cores and supports all 12 parameter sets of SLH-DSA. SLotH also supports side-channel secure PRF computation and Winternitz chains. SLotH drivers run on a small RISC-V control core, as is common in current Root-of-Trust (RoT) systems. The new features make SLH-DSA on SLotH many times faster compared to similarly-sized general-purpose hash accelerators. Compared to unaccelerated microcontroller implementations, the performance of SLotH's SHAKE variants is up to $300\times$ faster; signature generation with the 128f parameter set takes 4,903,978 cycles, while signature verification with the 128s parameter set takes only 179,603 cycles. The SHA2 parameter sets run at approximately half the speed of the SHAKE parameter sets. We observe that the signature verification performance of SLH-DSA's ``s'' parameter sets is generally better than that of accelerated ECDSA or Dilithium on similarly-sized RoT targets. The area of the full SLotH system is small, from 63 kGE (SHA2, Cat 1 only) to 155 kGE (all parameter sets). A Keccak threshold implementation adds another 130 kGE. We provide a sensitivity analysis of SLH-DSA in relation to side-channel leakage. We show experimentally that an SLH-DSA implementation with CPU hashing will rapidly leak the SK.seed master key. We perform a 100,000-trace TVLA leakage assessment with a protected SLotH unit.
Last updated:  2024-12-06
Breaktooth: Breaking Security and Privacy in Bluetooth Power-Saving Mode
Keiichiro Kimura, Hiroki Kuzuno, Yoshiaki Shiraishi, and Masakatu Morii
With the increasing demand for Bluetooth devices, various Bluetooth devices support a power-saving mode to reduce power consumption. One of the features of the power-saving mode is that the Bluetooth sessions among devices are temporarily disconnected or are close to being disconnected. Prior works have shown that the power-saving mode is vulnerable to denial of sleep (DoSL) attacks that interfere with the transition to the power-saving mode of Bluetooth devices, thereby increasing their power consumption. However, to the best of our knowledge, no prior work has analyzed vulnerabilities or attacks on the state after transitioning to the power-saving mode. To address this issue, we present an attack that abuses two novel vulnerabilities in sleep mode, one of the Bluetooth power-saving modes, to break Bluetooth sessions. We name the attack Breaktooth. It is the first to abuse these vulnerabilities as an entry point to hijack Bluetooth sessions between victims. The attack also allows overwriting the link key between the victims using the hijacked session, enabling arbitrary command injection on the victims. Furthermore, while many prior attacks assume that attackers can forcibly disconnect the Bluetooth session using methods such as jamming, our attack does not require such assumptions, making it more realistic. In this paper, we present the root causes of the Breaktooth attack and their impact. We also provide the technical details of how attackers can secretly detect the sleep mode of their victims. The attackers can easily recognize the state of a victim's Bluetooth session remotely using a standard Linux command. Additionally, we develop a low-cost toolkit to perform our attack and confirm its effectiveness. We then evaluate the attack on 17 types of commodity Bluetooth keyboards, mice, and audio devices that support the sleep mode, and show that the attack poses a serious threat to Bluetooth devices supporting it. To prevent our attack, we present defenses and their proof-of-concept. We responsibly disclosed our findings to the Bluetooth SIG. We also released the toolkit as open source.
Last updated:  2024-12-05
RoK, Paper, SISsors – Toolkit for Lattice-based Succinct Arguments
Michael Klooß, Russell W. F. Lai, Ngoc Khanh Nguyen, and Michał Osadnik
Lattice-based succinct arguments make it possible to prove bounded-norm satisfiability of relations, such as $f(\vec{s}) = \vec{t} \bmod q$ and $\|\vec{s}\|\leq \beta$, over specific cyclotomic rings $\mathcal{O}_\mathcal{K}$, with proof size polylogarithmic in the witness size. However, state-of-the-art protocols require either 1) a super-polynomial-size modulus $q$ due to a soundness gap in the security argument, or 2) a verifier which runs in time linear in the witness size. Furthermore, construction techniques often rely on specific choices of $\mathcal{K}$ which are not mutually compatible. In this work, we exhibit a diverse toolkit for constructing efficient lattice-based succinct arguments: (i) We identify new subtractive sets for general cyclotomic fields $\mathcal{K}$ and their maximal real subfields $\mathcal{K}^+$, which are useful as challenge sets, e.g., in arguments for exact norm bounds. (ii) We construct modular, verifier-succinct reductions of knowledge for the bounded-norm satisfiability of structured-linear/inner-product relations, without any soundness gap, under the vanishing SIS assumption, over any $\mathcal{K}$ which admits polynomial-size subtractive sets. (iii) We propose a framework to use twisted trace maps, i.e., maps of the form $\tau(z) = \frac{1}{N} \cdot \mathsf{Trace}_{\mathcal{K}/\mathbb{Q}}( \alpha \cdot z )$, to embed $\mathbb{Z}$-inner-products as $\mathcal{R}$-inner-products for some structured subrings $\mathcal{R} \subseteq \mathcal{O}_\mathcal{K}$ whenever the conductor has a square-free odd part. (iv) We present a simple extension of our reductions of knowledge for proving the consistency between the coefficient embedding and the Chinese Remainder Transform (CRT) encoding of $\vec{s}$ over any cyclotomic field $\mathcal{K}$ with a smooth conductor, based on a succinct decomposition of the CRT map into automorphisms, and a new, simple succinct argument for proving automorphism relations. Combining all techniques, we obtain, for example, verifier-succinct arguments for proving that $\vec{s}$ satisfying $f(\vec{s}) = \vec{t} \bmod q$ has binary coefficients, without soundness gap and with polynomial-size modulus $q$.
Last updated:  2024-12-05
Further Connections Between Isogenies of Supersingular Curves and Bruhat-Tits Trees
Steven Galbraith, Valerie Gilchrist, Shai Levin, and Ari Markowitz
We further explore the explicit connections between supersingular curve isogenies and Bruhat-Tits trees. By identifying a supersingular elliptic curve $E$ over $\mathbb{F}_p$ with the root of the tree and fixing a basis for the Tate module $T_\ell(E)$, our main result is that, given a vertex $M$ of the Bruhat-Tits tree, one can write down a generator of the ideal $I \subseteq \text{End}(E)$ directly, using simple linear algebra, that defines an isogeny corresponding to the path in the Bruhat-Tits tree from the root to the vertex $M$. In contrast to previous methods for going from a vertex in the Bruhat-Tits tree to an ideal, once a basis for the Tate module is set up and an explicit map $\Phi : \text{End}(E) \otimes \mathbb{Z}_\ell \to M_2( \mathbb{Z}_\ell )$ is constructed, our method does not require any computations involving elliptic curves, isogenies, or discrete logs. This idea leads to simplifications and potential speedups to algorithms for converting between isogenies and ideals.
Last updated:  2024-12-05
Scribe: Low-memory SNARKs via Read-Write Streaming
Anubhav Baweja, Pratyush Mishra, Tushar Mopuri, Karan Newatia, and Steve Wang
Succinct non-interactive arguments of knowledge (SNARKs) enable a prover to produce a short and efficiently verifiable proof of the validity of an arbitrary NP statement. Recent constructions of efficient SNARKs have led to interest in using them for a wide range of applications, but unfortunately, deployment of SNARKs in these applications faces a key bottleneck: SNARK provers require a prohibitive amount of time and memory to generate proofs for even moderately large statements. While there has been progress in reducing prover time, prover memory remains an issue. In this work, we describe Scribe, a new low-memory SNARK that can efficiently prove large statements even on cheap consumer devices such as smartphones by leveraging a plentiful, but heretofore unutilized, resource: disk storage. In more detail, instead of storing its (large) intermediate state in RAM, Scribe's prover instead stores it on disk. To ensure that accesses to state are efficient, we design Scribe's prover in a *read-write streaming* model of computation that allows the prover to read and modify its state only in a streaming manner. We implement and evaluate Scribe's prover, and show that, on commodity hardware, it can easily scale to circuits of size $2^{28}$ gates while using only 2GB of memory and incurring only minimal proving latency overhead (10-35%) compared to a state-of-the-art memory-intensive baseline (HyperPlonk [EUROCRYPT 2023]) that requires much more memory. Our implementation minimizes overhead by leveraging the streaming access pattern to enable several systems optimizations that together mask I/O costs.
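A minimal sketch of the read-write streaming access pattern (ours, not Scribe's code): each pass reads its operand streams and writes a result stream sequentially, so resident memory stays bounded by the chunk size regardless of statement size. Uses NumPy memory-mapped files; names are hypothetical.

    import numpy as np

    CHUNK = 1 << 20   # elements held in RAM at once; the rest lives on disk

    def stream_pointwise_mul(path_a, path_b, path_out, n, dtype=np.uint64):
        # One streaming pass over three disk-resident vectors of length n.
        a = np.memmap(path_a, dtype=dtype, mode="r", shape=(n,))
        b = np.memmap(path_b, dtype=dtype, mode="r", shape=(n,))
        out = np.memmap(path_out, dtype=dtype, mode="w+", shape=(n,))
        for i in range(0, n, CHUNK):
            out[i:i + CHUNK] = a[i:i + CHUNK] * b[i:i + CHUNK]
        out.flush()

Sequential reads and writes are the cheap operations on disk, which is why restricting the prover to this access pattern can keep the I/O overhead down to the reported 10-35%.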
Last updated:  2024-12-05
A non-comparison oblivious sort and its application to private k-NN
Sofiane Azogagh, Marc-Olivier Killijian, and Félix Larose-Gervais
In this paper, we introduce an adaptation of the counting sort algorithm that leverages its data obliviousness to enable the sorting of encrypted data using Fully Homomorphic Encryption (FHE). Our approach is the first known sorting algorithm for encrypted data that does not rely on comparisons. The implementation takes advantage of basic operations on TFHE's look-up tables (LUTs). We have integrated these operations into RevoLUT, a comprehensive open-source library built on tfhe-rs. We demonstrate the effectiveness of our Blind Counting Sort algorithm by developing a top-k selection algorithm and applying it to privacy-preserving k-Nearest Neighbors classification. This proves to be approximately 5 times faster than state-of-the-art methods.
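A plaintext sketch of why counting sort is a natural fit (our illustration; the paper's Blind Counting Sort runs the corresponding steps over TFHE ciphertexts via LUT operations):

    def oblivious_counting_sort(xs, domain):
        # Comparison-free, and the control flow is independent of the data:
        # fixed-length loops and equality tests only. Under TFHE, the equality
        # test and increment become LUT evaluations on encrypted values.
        counts = [0] * domain
        for x in xs:
            for v in range(domain):
                counts[v] += 1 if x == v else 0   # branch-free histogram update
        # Note: this final expansion branches on the (secret) counts; an
        # encrypted version writes outputs via oblivious LUT accesses instead.
        return [v for v in range(domain) for _ in range(counts[v])]

    print(oblivious_counting_sort([3, 1, 2, 1], domain=4))   # [1, 1, 2, 3]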
Last updated:  2024-12-05
RevoLUT : Rust Efficient Versatile Oblivious Look-Up-Tables
Sofiane Azogagh, Zelma Aubin Birba, Marc-Olivier Killijian, and Félix Larose-Gervais
In this paper we present RevoLUT, a library implemented in Rust that reimagines the use of look-up tables (LUTs) beyond their conventional role in function encoding, as commonly used in TFHE's programmable bootstrapping. Instead, RevoLUT leverages LUTs as first-class objects, enabling efficient oblivious operations such as array access, element sorting, and permutation directly within the table. This approach supports oblivious algorithms, providing a secure, privacy-preserving solution for handling sensitive data in various applications.
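To illustrate the flavor of oblivious LUT operations meant here, a plaintext one-hot-selection sketch (function names are ours, not RevoLUT's API; under TFHE the equality tests and accumulation act on encrypted data):

    def blind_read(table, index, domain):
        # Oblivious array access: every entry is touched once, so the access
        # pattern reveals nothing about `index`.
        return sum(table[v] * (1 if index == v else 0) for v in range(domain))

    def blind_permute(table, perm, domain):
        # Apply a (secret) permutation via domain^2 blind reads.
        return [blind_read(table, perm[i], domain) for i in range(domain)]

    lut = [7, 3, 9, 1]
    print(blind_read(lut, 2, 4))                 # 9, with index-independent accesses
    print(blind_permute(lut, [2, 0, 3, 1], 4))   # [9, 7, 1, 3]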
Last updated:  2024-12-05
SoK: Security of the Ascon Modes
Charlotte Lefevre and Bart Mennink
The Ascon authenticated encryption scheme and hash function of Dobraunig et al. (Journal of Cryptology 2021) were recently selected as the winner of the NIST lightweight cryptography competition. The mode underlying Ascon authenticated encryption (Ascon-AE) resembles ideas of SpongeWrap, but not quite, and various works have investigated the generic security of Ascon-AE, all covering different attack scenarios and with different bounds. This work systematizes knowledge on the mode security of Ascon-AE, and fills gaps where needed. We consider six mainstream security models, all in the multi-user setting: (i) nonce-respecting security, reflecting on the existing bounds of Chakraborty et al. (ASIACRYPT 2023, ACISP 2024) and Lefevre and Mennink (SAC 2024), (ii) nonce-misuse resistance, observing a non-fixable flaw in the proof of Chakraborty et al. (ACISP 2024), (iii) nonce-misuse resilience, delivering missing security analysis, (iv) leakage resilience, delivering a new security analysis that supersedes the informal proof sketch (though in a different model) of Guo et al. (ToSC 2020), (v) state-recovery security, expanding on the analysis of Lefevre and Mennink, and (vi) release of unverified plaintext, also delivering missing security analysis. We also match all bounds with tight attacks. As a bonus, we systematize the knowledge on Ascon-Hash and Ascon-PRF (but there are no technical novelties here).
Last updated:  2024-12-05
SoK: Pseudorandom Generation for Masked Cryptographic Implementation
Rei Ueno, Naofumi Homma, Akiko Inoue, and Kazuhiko Minematsu
This paper investigates pseudorandom generation in the context of masked cryptographic implementation. Although masking and pseudorandom generators (PRGs) have been studied separately for a long time, little work has examined how the randomness in a masked implementation should be generated. The lack of analysis of mask-bit generators makes the practical security of masked cryptographic implementations unclear, and practitioners (e.g., designers, implementers, and evaluators) may be confused about how to realize them. This paper provides a novel viewpoint and comprehensive analyses by developing three new models, which correspond to the practical scenarios of leakage assessment, quantitative evaluation of side-channel security (e.g., success rate), and practical deployment, respectively. We reveal what properties are required for each scenario. In particular, we support a long-held folklore belief with a proof: for the output of a PRG used for masking, cryptographic security (i.e., randomness and unpredictability) is sufficient but not necessary; only statistical uniformity is necessary. In addition, we thoroughly investigate the SCA security of PRGs used in the wild in the masking context. We conclude this paper with some recommendations for practitioners, together with a proposal of a leakage-resilient method of comparable performance.
Last updated:  2024-12-05
On Tweakable Correlation Robust Hashing against Key Leakages
Chun Guo, Xiao Wang, Kang Yang, and Yu Yu
We continue the study of blockcipher-based (tweakable) correlation robust hash functions, which are central building blocks of circuit garbling and oblivious-transfer extension schemes. Motivated by Roy (CRYPTO 2022), we first enhance the multi-user tweakable correlation robust notion of Guo et al. (CRYPTO 2020) with a {\it key-leaking oracle} that tells the adversary whether a certain user key satisfies an adversarially chosen predicate. We then investigate the state-of-the-art hash construction of Guo et al. with respect to our new security definition, providing a security proof as well as matching attacks. As an application, we exhibit an OT extension protocol with non-trivial multi-user security.
Last updated:  2024-12-05
Analysis of REDOG: The Pad Thai Attack
Alex Pellegrini and Marc Vorstermans
This paper introduces the Pad Thai message recovery attack on REDOG, a rank-metric code-based encryption scheme selected for the second round of evaluation in the Korean Post-Quantum Cryptography (KPQC) competition. The attack exploits the low rank weight of a portion of the ciphertext to construct multiple systems of linear equations, one of which is noise-free and can be solved to recover the secret message. The Pad Thai attack significantly undermines the security of REDOG, revealing that its provided security is much lower than originally claimed.
Last updated:  2024-12-05
Asynchronous Agreement on a Core Set in Constant Expected Time and More Efficient Asynchronous VSS and MPC
Ittai Abraham, Gilad Asharov, Arpita Patra, and Gilad Stern
A major challenge of any asynchronous MPC protocol is the need to reach an agreement on the set of private inputs to be used as input for the MPC functionality. Ben-Or, Canetti and Goldreich [STOC 93] call this problem Agreement on a Core Set (ACS) and solve it by running $n$ parallel instances of asynchronous binary Byzantine agreements. To the best of our knowledge, all results in the perfect and statistical security setting used this same paradigm for solving ACS. Using all known asynchronous binary Byzantine agreement protocols, this type of ACS has $\Omega(\log n)$ expected round complexity, which results in such a bound on the round complexity of asynchronous MPC protocols as well (even for constant depth circuits). We provide a new solution for Agreement on a Core Set that runs in expected $O(1)$ rounds. Our perfectly secure variant is optimally resilient ($t<n/4$) and requires just $O(n^4\log n)$ expected communication complexity. We show a similar result with statistical security for $t<n/3$. Our ACS is based on a new notion of Asynchronously Validated Asynchronous Byzantine Agreement (AVABA) and new information-theoretic analogs to techniques used in the authenticated model. Along the way, we also construct a new perfectly secure packed asynchronous verifiable secret sharing (AVSS) protocol with just $O(n^3\log n)$ communication complexity, improving the state of the art by a factor of $O(n)$. This leads to a more efficient asynchronous MPC that matches the state-of-the-art synchronous MPC.
Last updated:  2024-12-05
Byzantine Fault Tolerance with Non-Determinism, Revisited
Yue Huang, Huizhong Li, Yi Sun, and Sisi Duan
The conventional Byzantine fault tolerance (BFT) paradigm requires replicated state machines to execute deterministic operations only. In practice, numerous applications and scenarios, especially in the era of blockchains, contain various sources of non-determinism. Despite decades of research on BFT, we still lack an efficient and easy-to-deploy solution for BFT with non-determinism (BFT-ND), especially in the asynchronous setting. We revisit the problem of BFT-ND and provide a formal and asynchronous treatment of it. In particular, we design and implement Block-ND, which insightfully separates the task of agreeing on the order of transactions from the task of agreeing on the state: Block-ND allows reusing existing BFT implementations; on top of BFT, we reduce agreement on the state to multivalued Byzantine agreement (MBA), a primitive somewhat neglected by practical systems. Block-ND is completely asynchronous as long as the underlying BFT is asynchronous. We provide a new MBA construction significantly faster than existing ones. We instantiate Block-ND in both the partially synchronous setting (with PBFT, OSDI 1999) and the purely asynchronous setting (with PACE, CCS 2022). Via a 91-instance WAN deployment on Amazon EC2, we show that Block-ND has only marginal performance degradation compared to conventional BFT.
Last updated:  2024-12-05
Breaking the power-of-two barrier: noise estimation for BGV in NTT-friendly rings
Andrea Di Giusto and Chiara Marcolla
The Brakerski-Gentry-Vaikuntanathan (BGV) scheme is a Fully Homomorphic Encryption (FHE) cryptosystem based on the Ring Learning With Errors (RLWE) problem. Ciphertexts in this scheme contain an error term that grows with operations and causes decryption failure when it surpasses a certain threshold. For this reason, the parameters of BGV need to be estimated carefully, with a trade-off between security and error margin. The ciphertext space of BGV is the ring $\mathcal R_q=\mathbb Z_q[x]/(\Phi_m(x))$, where usually the degree $n$ of the cyclotomic polynomial $\Phi_m(x)$ is chosen as a power of two for efficiency reasons. However, the jump between two consecutive power-of-two degrees also causes a jump in security, resulting in parameters that are much bigger than needed. In this work, we explore non-power-of-two instantiations of BGV. Although our theoretical research encompasses results applicable to any cyclotomic ring, our main investigation focuses on the case ${m=2^s\cdot3^t}$ where $s,t\ge 1$, i.e., cyclotomic polynomials of degree ${n=\phi(m)=2^s\cdot3^{t-1}}$. We provide a thorough analysis of the noise growth in this new setting using the canonical norm and compare our results with the power-of-two case, considering practical aspects like NTT algorithms. We find that in many instances, the parameter estimation process yields better results for the non-power-of-two setting.
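A quick sanity check of the attainable ring degrees (our illustration), using SymPy:

    from sympy import totient

    # With m = 2^s * 3^t (s, t >= 1), the ring degree is
    #   n = phi(m) = phi(2^s) * phi(3^t) = 2^(s-1) * 2 * 3^(t-1) = 2^s * 3^(t-1).
    assert all(totient(2**s * 3**t) == 2**s * 3**(t - 1)
               for s in range(1, 6) for t in range(1, 6))

    # Attainable degrees between 2^10 and 2^15: intermediate sizes such as
    # 1152, 1536, 1728, ... fill the gaps between consecutive powers of two.
    degrees = sorted({2**s * 3**(t - 1)
                      for s in range(1, 16) for t in range(1, 10)
                      if 2**10 <= 2**s * 3**(t - 1) <= 2**15})
    print(degrees)

These intermediate degrees are what let parameters track the actual security target instead of jumping to the next power of two.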
Last updated:  2024-12-05
$\textsf{LiLAC}$: Linear Prover, Logarithmic Verifier and Field-agnostic Multilinear Polynomial Commitment Scheme
Kyeongtae Lee, Seongho Park, Byeongjun Jang, Jihye Kim, and Hyunok Oh
In this paper, we propose $\textsf{LiLAC}$, a novel field-agnostic, transparent multilinear polynomial commitment scheme (MLPCS) designed to address key challenges in polynomial commitment systems. For a polynomial with $N$ coefficients, $\textsf{LiLAC}$ achieves $\mathcal{O}(N)$ prover time, $\mathcal{O}(\log N)$ verifier time, and $\mathcal{O}(\log N)$ proof size, overcoming the $\mathcal{O}(\log^2 N)$ verification time and proof size of prior schemes without any increase in other costs. This is achieved through an optimized polynomial commitment strategy and the recursive application of the tensor IOPP, making $\textsf{LiLAC}$ both theoretically optimal and practical for large-scale applications. Furthermore, $\textsf{LiLAC}$ offers post-quantum security, providing robust protection against future quantum computing threats. We propose two constructions of $\textsf{LiLAC}$: a field-agnostic $\textsf{LiLAC}$ and a field-specific $\textsf{LiLAC}$. Each construction outperforms the state-of-the-art techniques in its respective category of MLPCS. First, the field-agnostic $\textsf{LiLAC}$ is compared against Brakedown (CRYPTO 2023), which is based on a tensor IOP and satisfies field-agnosticity. In experiments conducted over a 128-bit field with a coefficient size of $2^{30}$, the field-agnostic $\textsf{LiLAC}$ achieves a proof size that is $3.7\times$ smaller and a verification speed that is $2.2\times$ faster, while maintaining a similar proof generation time compared to Brakedown. Furthermore, the field-specific $\textsf{LiLAC}$ is evaluated against WHIR (ePrint 2024/1586), which is based on FRI. With a 128-bit field and a coefficient size of $2^{30}$, the field-specific $\textsf{LiLAC}$ achieves a proof generation speed that is $2.8\times$ faster, a proof size that is $27\%$ smaller, and a verification speed that is $14\%$ faster compared to WHIR.
Last updated:  2024-12-05
Batch Range Proof: How to Make Threshold ECDSA More Efficient
Guofeng Tang, Shuai Han, Li Lin, Changzheng Wei, and Ying Yan
With the rise of cryptocurrencies, threshold ECDSA has recently regained popularity. So far, several methods have been proposed to construct threshold ECDSA, including the use of OT and of homomorphic encryption (HE), such as Paillier and Joye-Libert (JL) encryption. Due to the mismatch between the plaintext space and the signature space, HE-based threshold ECDSA always requires zero-knowledge range proofs. However, the overhead of range proofs constitutes a major portion of the total cost. In this paper, we propose efficient batch range proofs to improve the efficiency of threshold ECDSA. At the heart of our efficiency improvement is a new technical tool called the Multi-Dimension Forking Lemma, a generalization of the well-known general forking lemma [Bellare and Neven, CCS 2006]. Based on our new tool, we construct efficient batch range proofs for Paillier and JL encryptions, and use them to give batch multiplication-to-addition (MtA) protocols, which are crucial to most threshold ECDSA. Our constructions improve the prior Paillier-based MtA by a factor of 2 and the prior JL-based MtA by a factor of 3, in both computation and bandwidth in an amortized way. Our batch MtA can be used to improve the efficiency of most Paillier- and JL-based threshold ECDSA. As three typical examples, our benchmarking results show: -- We improve the Paillier-based CGGMP20 [Canetti et al., CCS 2020] in bandwidth by a factor of 2.1 to 2.4, and in computation by a factor of 1.5 to 1.7. -- By implementing threshold ECDSA with the batch JL MtA of XAL+23 [Xue et al., CCS 2023] and our batch JL MtA, respectively, our batch construction improves theirs in bandwidth by a factor of 2.0 to 2.29, and in computation by a factor of 1.88 to 2.09. -- When replacing the OT-based MtA in DKLs24 [Doerner et al., S$\&$P 2024] with our Paillier-based batch MtA, we improve the bandwidth efficiency by $7.8\times$ at the cost of $5.7\times$ slower computation.
Last updated:  2024-12-04
Efficient Succinct Zero-Knowledge Arguments in the CL Framework
Agathe Beaugrand, Guilhem Castagnos, and Fabien Laguillaumie
The CL cryptosystem, introduced by Castagnos and Laguillaumie in 2015, is a linearly homomorphic encryption scheme that has seen numerous developments and applications in recent years, particularly in the field of secure multiparty computation. Designing efficient zero-knowledge proofs for the CL framework is critical, especially for achieving adaptive security for such multiparty protocols. This is a challenging task due to the particularities of class groups of quadratic fields used to instantiate the groups of unknown order required in the CL framework. In this work, we provide efficient proofs and arguments for statements involving a large number of ciphertexts. We propose a new batched proof for correctness of CL ciphertexts and new succinct arguments for correctness of a shuffle of these ciphertexts. Previous efficient proofs of shuffle for linearly homomorphic encryption were designed for Elgamal “in the exponent”, which has only a limited homomorphic property. In line with a recent work by Braun, Damgård and Orlandi (CRYPTO 2023), all the new proofs and arguments provide partial extractability, a property that we formally introduce here. Thanks to this notion, we show that bulletproof techniques, which are in general implemented with groups of known prime order, can be applied in the CL framework despite the use of unknown-order groups, giving non-interactive arguments of logarithmic size. To demonstrate the practicality of our approach, we have implemented these protocols with the BICYCL library, showing that computation and communication costs are competitive. We also illustrate that the partial extractability of our proofs provides enough guarantees for complex applications by presenting a bipartite private set intersection sum protocol which achieves security against malicious adversaries using CL encryption, removing limitations of a solution proposed by Miao et al. (CRYPTO 2020).
Last updated:  2024-12-04
Onion Franking: Abuse Reports for Mix-Based Private Messaging
Matthew Gregoire, Margaret Pierce, and Saba Eskandarian
The fast-paced development and deployment of private messaging applications demands mechanisms to protect against the concomitant potential for abuse. While widely used end-to-end encrypted (E2EE) messaging systems have deployed mechanisms for users to verifiably report abusive messages without compromising the privacy of unreported messages, abuse reporting schemes for systems that additionally protect message metadata are still in their infancy. Existing solutions either focus on a relatively small portion of the design space or incur much higher communication and computation costs than their E2EE brethren. This paper introduces new abuse reporting mechanisms that work for any private messaging system based on onion encryption. This includes low-latency systems that employ heuristic or opportunistic mixing of user traffic, as well as schemes based on mixnets. Along the way, we show that design decisions and abstractions that are well-suited to the E2EE setting may actually impede security and performance improvements in the metadata-hiding setting. We also explore stronger threat models for abuse reporting and moderation not explored in prior work, showing where prior work falls short and how to strengthen both our scheme and others' -- including deployed E2EE messaging platforms -- to achieve higher levels of security. We implement a prototype of our scheme and find that it outperforms the best known solutions in this setting by well over an order of magnitude for each step of the message delivery and reporting process, with overheads almost matching those of message franking techniques used by E2EE messaging apps today.
Last updated:  2024-12-04
Lova: Lattice-Based Folding Scheme from Unstructured Lattices
Giacomo Fenzi, Christian Knabenhans, Ngoc Khanh Nguyen, and Duc Tu Pham
Folding schemes (Kothapalli et al., CRYPTO 2022) are a conceptually simple, yet powerful cryptographic primitive that can be used as a building block to realise incrementally verifiable computation (IVC) with low recursive overhead without general-purpose non-interactive succinct arguments of knowledge (SNARK). Most known folding schemes rely on the hardness of the discrete logarithm problem; they are thus not quantum-resistant and must operate over large prime fields. Existing post-quantum folding schemes (Boneh and Chen, ePrint 2024/257) are instead secure under structured lattice assumptions, such as the Module Short Integer Solution (MSIS) assumption, which also binds them to relatively complex arithmetic. In contrast, we construct Lova, the first folding scheme whose security relies on the (unstructured) SIS assumption. We provide a Rust implementation of Lova, which uses only arithmetic modulo hardware-friendly powers of two. Crucially, this avoids the need to implement and perform any finite-field arithmetic. At the core of our results lies a new exact Euclidean norm proof, which might be of independent interest.
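As a concrete reference point, the following is a minimal sketch (not Lova itself) of the Ajtai-style commitment that unstructured-SIS constructions build on: com(s) = A·s mod q for a short vector s, with binding reducing to SIS. The power-of-two modulus mirrors the paper's hardware-friendly arithmetic; all dimensions here are toy values chosen for illustration only.

    import secrets

    q, n, m = 2**16, 4, 16                      # toy power-of-two modulus and dimensions
    A = [[secrets.randbelow(q) for _ in range(m)] for _ in range(n)]

    def commit(s):
        """Commit to a short vector s (entries in {-1, 0, 1})."""
        return tuple(sum(a * x for a, x in zip(row, s)) % q for row in A)

    s = [secrets.choice((-1, 0, 1)) for _ in range(m)]
    print(commit(s))  # finding a second short preimage would solve SIS for A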
Last updated:  2024-12-04
Alba: The Dawn of Scalable Bridges for Blockchains
Giulia Scaffino, Lukas Aumayr, Mahsa Bastankhah, Zeta Avarikioti, and Matteo Maffei
Over the past decade, cryptocurrencies have garnered attention from academia and industry alike, fostering a diverse blockchain ecosystem and novel applications. The inception of bridges improved interoperability, enabling asset transfers across different blockchains to capitalize on their unique features. Despite their surge in popularity and the emergence of Decentralized Finance (DeFi), trustless bridge protocols remain inefficient, either relaying too much information (e.g., light-client-based bridges) or demanding expensive computation (e.g., zk-based bridges). These inefficiencies arise because existing bridges securely prove a transaction's on-chain inclusion on another blockchain. Yet this is unnecessary as off-chain solutions, like payment and state channels, permit safe transactions without on-chain publication. However, existing bridges do not support the verification of off-chain payments. This paper fills this gap by introducing the concept of Pay2Chain bridges that leverage the advantages of off-chain solutions like payment channels to overcome current bridges' limitations. Our proposed Pay2Chain bridge, named Alba, facilitates the efficient, secure, and trustless execution of conditional payments or smart contracts on a target blockchain based on off-chain events. Alba, besides its technical advantages, enriches the source blockchain's ecosystem by facilitating DeFi applications, multi-asset payment channels, and optimistic stateful off-chain computation. We formalize the security of Alba against Byzantine adversaries in the UC framework and complement it with a game-theoretic analysis. We further introduce formal scalability metrics to demonstrate Alba’s efficiency. Our empirical evaluation confirms Alba’s efficiency in terms of communication complexity and on-chain costs, with its optimistic case incurring only twice the cost of a standard Ethereum token-transfer transaction.
Last updated:  2024-12-04
Dynamic-FROST: Schnorr Threshold Signatures with a Flexible Committee
Annalisa Cimatti, Francesco De Sclavis, Giuseppe Galano, Sara Giammusso, Michela Iezzi, Antonio Muci, Matteo Nardelli, and Marco Pedicini
Threshold signatures enable any subgroup of predefined cardinality $t$ out of a committee of $n$ participants to generate a valid, aggregated signature. Although several $(t,n)$-threshold signature schemes exist, most of them assume that the threshold $t$ and the set of participants do not change over time. Practical applications of threshold signatures might benefit from the possibility of updating the threshold or the committee of participants. Examples of such applications are consensus algorithms and blockchain wallets. In this paper, we present Dynamic-FROST (D-FROST, for short), which combines FROST, a Schnorr threshold signature scheme, with CHURP, a dynamic proactive secret sharing scheme. The resulting protocol is the first Schnorr threshold signature scheme that accommodates changes in both the committee and the threshold value without relying on a trusted third party. Besides detailing the protocol, we present a proof of its security: like the original signing scheme, D-FROST preserves the property of Existential Unforgeability under Chosen-Message Attack.
Last updated:  2024-12-04
Proof of Time: A Method for Verifiable Temporal Commitments Without Timestamp Disclosure
Alexander John Lee
This paper introduces a cryptographic method that enables users to prove that an event occurred in the past and that a specified amount of time has since elapsed, without disclosing the exact timestamp of the event. The method leverages zero-knowledge proofs and an on-chain Incremental Merkle Tree to store hash commitments. By utilizing the Poseidon hash function and implementing zero-knowledge circuits in Noir, this approach ensures both the integrity and confidentiality of temporal information.
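To make the on-chain data structure concrete, below is a hedged sketch of an incremental Merkle tree append, the kind of structure the paper stores hash commitments in. SHA-256 stands in for the Poseidon hash, and the depth is a toy value; both are assumptions of this sketch rather than the paper's parameters.

    import hashlib

    DEPTH = 4                                   # toy depth (2^4 = 16 leaves)
    H = lambda a, b: hashlib.sha256(a + b).digest()
    ZEROS = [b"\x00" * 32]
    for _ in range(DEPTH):
        ZEROS.append(H(ZEROS[-1], ZEROS[-1]))   # roots of all-zero subtrees

    class IncrementalMerkleTree:
        def __init__(self):
            self.filled = list(ZEROS[:DEPTH])   # cached left siblings per level
            self.count = 0

        def append(self, leaf: bytes) -> bytes:
            idx, node = self.count, leaf
            for d in range(DEPTH):
                if idx % 2 == 0:
                    self.filled[d] = node       # future right neighbours hash with this
                    node = H(node, ZEROS[d])
                else:
                    node = H(self.filled[d], node)
                idx //= 2
            self.count += 1
            return node                         # new root after the insertion

    tree = IncrementalMerkleTree()
    root = tree.append(hashlib.sha256(b"event commitment").digest())
    print(root.hex())

Only one sibling per level is cached, which is what makes the structure cheap to maintain on-chain: each append touches O(depth) nodes.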
Last updated:  2024-12-04
Efficient theta-based algorithms for computing $(\ell, \ell)$-isogenies on Kummer surfaces for arbitrary odd $\ell$
Ryo Yoshizumi, Hiroshi Onuki, Ryo Ohashi, Momonari Kudo, and Koji Nuida
Isogeny-based cryptography is one of the candidates for post-quantum cryptography. Recently, many isogeny-based cryptosystems using isogenies between Kummer surfaces were proposed. Most of those cryptosystems use $(2,2)$-isogenies. However, to broaden the possibilities for cryptosystems, higher-degree isogenies, say $(\ell,\ell)$-isogenies for an odd $\ell$, are also crucial. For an odd $\ell$, Lubicz and Robert gave a formula to compute $(\ell,\ldots,\ell)$-isogenies in general dimension $g$. In this paper, we propose explicit and efficient algorithms to compute $(\ell,\ell)$-isogenies between Kummer surfaces, based on the Lubicz-Robert formula. In particular, we propose two algorithms for computing the codomain of the isogeny and two algorithms for evaluating the image of a point under the isogeny. Then, we count the number of arithmetic operations required by each of our proposed algorithms, and for each $\ell$ we determine the most efficient algorithm of each of the two types in terms of the number of arithmetic operations. As an application, using the most efficient one, we implemented the SIDH attack on B-SIDH in SageMath. In the setting that originally claimed 128-bit security, our implementation was able to recover the secret key in about 11 hours.
Last updated:  2024-12-04
Machine Learning-Based Detection of Glitch Attacks in Clock Signal Data
Asier Gambra, Durba Chatterjee, Unai Rioja, Igor Armendariz, and Lejla Batina
Voltage fault injection attacks are a particularly powerful threat to secure embedded devices because they exploit brief, hard-to-detect power fluctuations causing errors or bypassing security mechanisms. To counter these attacks, various detectors are employed, but as defenses strengthen, increasingly elusive glitches continue to emerge. Artificial intelligence, with its inherent ability to learn and adapt to complex patterns, presents a promising solution. This research presents an AI-driven voltage fault injection detector that analyzes clock signals directly. We provide a detailed fault characterization of the STM32F410 microcontroller, emphasizing the impact of faults on the clock signal. Our findings reveal how power supply glitches directly impact the clock, correlating closely with the amount of power injected. This led to the development of a lightweight Multi-Layer Perceptron model that analyzes clock traces to distinguish between safe executions, glitches that keep the device running but may introduce faults, and glitches that cause the target to reset. While previous fault injection AI applications have primarily focused on parameter optimization and simulation assistance, in this work we use the adaptability of machine learning to create a fault detection model that is specifically adjusted to the hardware that implements it. The developed glitch detector achieves high accuracy, showing this to be a promising direction for combating FI attacks on a variety of platforms.
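As a hypothetical stand-in for the kind of lightweight classifier described, the sketch below trains a three-class MLP over fixed-length clock-trace windows (safe execution / fault-inducing glitch / reset). The shapes, labels, and random data are assumptions for illustration; real traces and labels from the target device would replace them.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 200))            # 600 windows of 200 clock samples each
    y = rng.integers(0, 3, size=600)           # 0 = safe, 1 = glitch, 2 = reset

    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    clf.fit(X[:480], y[:480])
    print(f"held-out accuracy: {clf.score(X[480:], y[480:]):.2f}")  # ~chance on random data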
Last updated:  2024-12-04
uKNIT: Breaking Round-alignment for Cipher Design -- Featuring uKNIT-BC, an Ultra Low-Latency Block Cipher
Kai Hu, Mustafa Khairallah, Thomas Peyrin, and Quan Quan Tan
Automated cryptanalysis has seen a lot of attention and success in the past decade, leading to new distinguishers or key-recovery attacks against various ciphers. We argue that the improved efficiency and usability of these new tools have been undervalued, especially for design processes. In this article, we break for the first time the classical iterative design paradigm for symmetric-key primitives, where constructions are built around the repetition of a round function. We propose instead a new design framework, so-called uKNIT, that allows a round-by-round optimization-led automated construction of the primitives and where each round can be entirely different from the others (the security/performance trade-off actually benefiting from this non-alignment). Since this new design framework is non-trivial to instantiate, we further propose a method for SPN ciphers using a genetic algorithm and leveraging advances in automated cryptanalysis: given a pool of good cipher candidates on $x$ rounds, our algorithm automatically generates and selects $(x+1)$-round candidates by evaluating their security and performance. We emphasize that our design pipeline is also the first to propose a fully automated design process, with completely integrated implementation and security analysis. We finally exemplify our new design strategy on the important use-case of low-latency cryptography, by proposing the uKNIT-BC block cipher, together with a complete security analysis and benchmarks. Compared to the state-of-the-art in low-latency ciphers (PRINCEv2), uKNIT-BC improves on all crucial security and performance directions at the same time, reducing latency by 10%, while increasing resistance against classical differential/linear cryptanalysis by more than 10%. It also reduces area by 17% and energy consumption by 44% when fixing the latency of both ciphers. As a contribution of independent interest, we discovered a generalization of the Superposition-Tweakey (STK) construction for key schedules, unlocking its application to bit-oriented ciphers.
Last updated:  2024-12-04
On the (Im)possibility of Game-Theoretically Fair Leader Election Protocols
Ohad Klein, Ilan Komargodski, and Chenzhi Zhu
We consider the problem of electing a leader among $n$ parties with the guarantee that each (honest) party has a reasonable probability of being elected, even in the presence of a coalition that controls a subset of parties, trying to bias the output. This notion is called ``game-theoretic fairness'' because such protocols ensure that following the honest behavior is an equilibrium and also the best response for every party and coalition. In the two-party case, Blum's commit-and-reveal protocol (where if one party aborts, then the other is declared the leader) satisfies this notion and it is also known that one-way functions are necessary. Recent works study this problem in the multi-party setting. They show that composing Blum's 2-party protocol for $\log n$ rounds in a tournament-tree-style manner results in perfect game-theoretic fairness: each honest party has probability $\ge 1/n$ of being elected as leader, no matter how large the coalition is. Logarithmic round complexity is also shown to be necessary if we require perfect fairness against a coalition of size $n-1$. Relaxing the above two requirements, i.e., settling for approximate game-theoretic fairness and guaranteeing fairness against only constant-fraction-size coalitions, it is known that there are $O(\log^* n)$-round protocols. This leaves many open problems, in particular, whether one can go below logarithmic round complexity by relaxing only one of the strong requirements from above. We manage to resolve this problem for commit-and-reveal style protocols, showing that - $\Omega(\log n/\log\log n)$ rounds are necessary if we settle for approximate fairness against very large (more than constant fraction) coalitions; - $\Omega(\log n)$ rounds are necessary if we settle for perfect fairness against $n^\epsilon$ size coalitions (for any constant $\epsilon>0$). These show that both relaxations made in prior works are necessary to go below logarithmic round complexity. Lastly, we provide several additional upper and lower bounds for the case of single-round commit-and-reveal style protocols.
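The tournament-tree composition discussed above is easy to simulate. Below is a toy sketch of Blum's two-party commit-and-reveal coin flip composed in a tournament: each match XORs two revealed bits to pick a winner (an aborting party would forfeit). Hash commitments stand in for the protocol's commitments, all simulated parties are honest, and $n$ is assumed to be a power of two; none of this models the adversarial analysis of the paper.

    import hashlib, secrets

    def commit(bit: int):
        r = secrets.token_bytes(16)
        return hashlib.sha256(bytes([bit]) + r).digest(), (bit, r)

    def duel(p, q):
        (cp, op), (cq, oq) = commit(secrets.randbelow(2)), commit(secrets.randbelow(2))
        # each side checks the other's opening against the commitment
        assert hashlib.sha256(bytes([op[0]]) + op[1]).digest() == cp
        assert hashlib.sha256(bytes([oq[0]]) + oq[1]).digest() == cq
        return p if op[0] ^ oq[0] == 0 else q

    def elect(parties):
        while len(parties) > 1:                 # log2(n) tournament rounds
            parties = [duel(parties[i], parties[i + 1])
                       for i in range(0, len(parties), 2)]
        return parties[0]

    print("leader:", elect(list(range(8))))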
Last updated:  2024-12-04
Stronger Security for Non-Interactive Threshold Signatures: BLS and FROST
Mihir Bellare, Stefano Tessaro, and Chenzhi Zhu
We give a unified syntax, and a hierarchy of definitions of security of increasing strength, for non-interactive threshold signature schemes. They cover both fully non-interactive schemes (these are ones that have a single-round signing protocol, the canonical example being threshold-BLS) and ones, like FROST, that have a prior round of message-independent pre-processing. The definitions in the upper echelon of our hierarchy ask for security that is well beyond any currently defined, let alone proven to be met by the just-mentioned schemes, yet natural, and important for modern applications like securing digital wallets. We prove that BLS and FROST are better than advertised, meeting some of these stronger definitions. Yet, they fall short of meeting our strongest definition, a gap we fill for FROST via a simple enhancement to the scheme. We also surface subtle differences in the security achieved by variants of FROST.
Last updated:  2024-12-03
Share the MAYO: thresholdizing MAYO
Sofia Celi, Daniel Escudero, and Guilhem Niot
We present the first comprehensive study on thresholdizing practical OV-based signature schemes, specifically focusing on MAYO and UOV. Our approach begins by addressing the challenges associated with thresholdizing algorithms that sample solutions to linear equation systems of the form $Ax = y$, which are fundamental to OV-based signature schemes. Previous attempts have introduced levels of leakage that we deem insecure. We propose a novel minimum-leakage solution and assess its practicality. Furthermore, we explore the thresholdization of the entire functionality of these signature schemes, demonstrating their unique applications in networks and cryptographic protocols.
Last updated:  2024-12-03
Non-interactive Blind Signatures: Post-quantum and Stronger Security
Foteini Baldimtsi, Jiaqi Cheng, Rishab Goyal, and Aayush Yadav
Blind signatures enable a receiver to obtain signatures on messages of its choice without revealing any message to the signer. Round-optimal blind signatures are designed as a two-round interactive protocol between a signer and receiver. Incidentally, the choice of message is not important in many applications, and is routinely set as a random (unstructured) message by a receiver. With the goal of designing more efficient blind signatures for such applications, Hanzlik (Eurocrypt '23) introduced a new variant called non-interactive blind signatures (NIBS). These allow a signer to asynchronously generate partial signatures for any recipient such that only the intended recipient can extract a blinded signature for a random message. This bypasses the two-round barrier for traditional blind signatures, yet enables many known applications. Hanzlik provided new practical designs for NIBS from bilinear pairings. In this work, we propose new enhanced security properties for NIBS as well as provide multiple constructions with varying levels of security and concrete efficiency. We propose a new generic paradigm for NIBS from circuit-private leveled homomorphic encryption achieving optimal-sized signatures (i.e., same as any non-blind signature) at the cost of large public keys. We also investigate concretely efficient NIBS with post-quantum security, satisfying weaker level of privacy as proposed by Hanzlik.
Last updated:  2024-12-03
Application of Mordell-Weil lattices with large kissing numbers to acceleration of multi-scalar multiplication on elliptic curves
Dmitrii Koshelev
This article aims to speed up (the precomputation stage of) multi-scalar multiplication (MSM) on ordinary elliptic curves of $j$-invariant $0$ with respect to specific ``independent'' (a.k.a. ``basis'') points. For this purpose, so-called Mordell--Weil lattices (up to rank $8$) with large kissing numbers (up to $240$) are employed. In a nutshell, the new approach consists in obtaining more efficiently a considerable number (up to $240$) of certain elementary linear combinations of the ``independent'' points. By scaling the point (re)generation process, it is thus possible to get a significant performance gain. As usual, the resulting curve points can be then regularly used in the main stage of an MSM algorithm to avoid repeating computations. Seemingly, this is the first usage of lattices with large kissing numbers in cryptography, while such lattices have already found numerous applications in other mathematical domains. Without exaggeration, MSM is a widespread primitive (often the unique bottleneck) in modern protocols of real-world elliptic curve cryptography. Moreover, the new (re)generation technique is prone to further improvements by considering Mordell--Weil lattices with even greater kissing numbers.
Last updated:  2024-12-03
SoK: Privacy-Preserving Transactions in Blockchains
Foteini Baldimtsi, Kostas Kryptos Chalkias, Varun Madathil, and Arnab Roy
Ensuring transaction privacy in blockchain systems is essential to safeguard user data and financial activity from exposure on public ledgers. This paper conducts a systematization of knowledge (SoK) on privacy-preserving techniques in cryptocurrencies with native privacy features. We define and compare privacy notions such as confidentiality, k-anonymity, full anonymity, and sender-receiver unlinkability, and categorize the cryptographic techniques employed to achieve these guarantees. Our analysis highlights the trade-offs between privacy guarantees, scalability, and regulatory compliance. Finally, we evaluate the usability of the most popular private cryptocurrencies, providing insights into their practical deployment and user interaction. Through this analysis, we identify key gaps and challenges in current privacy solutions, highlighting areas where further research and development are needed to enhance privacy while maintaining scalability and security.
Last updated:  2024-12-03
M-Sel: A Message Selection Functional Encryption from Simple Tool
Ahmad Khoureich Ka
In this paper, we put forward a new practical application of Inner-Product Functional Encryption (IPFE) that we call Message Selection functional encryption (M-Sel), which allows users to decrypt selected portions of a ciphertext. In a message selection functional encryption scheme, the plaintext is partitioned into a set of messages $M = \{m_1, \ldots, m_t\}$. The encryption of $M$ consists in encrypting each of its elements using distinct encryption keys. A user with a functional decryption key $sk_x$ derived from a selection vector $x$ can access a subset of $M$ from the encryption thereof, and nothing more. Our construction is generic and combines a symmetric encryption scheme and an inner-product functional encryption scheme; therefore, its security is tied to theirs. By instantiating our generic construction from a DDH-based IPFE we obtain a message selection FE with constant-size decryption keys suitable for key storage in lightweight devices in the context of the Internet of Things (IoT).
Last updated:  2024-12-03
Compute, but Verify: Efficient Multiparty Computation over Authenticated Inputs
Moumita Dutta, Chaya Ganesh, Sikhar Patranabis, and Nitin Singh
Traditional notions of secure multiparty computation (MPC) allow mutually distrusting parties to jointly compute a function over their private inputs, but typically do not specify how these inputs are chosen. Motivated by real-world applications where corrupt inputs could adversely impact privacy and operational legitimacy, we consider a notion of authenticated MPC where the inputs are authenticated, e.g., signed using a digital signature by some certification authority. We propose a generic and efficient compiler that transforms any linear secret sharing based honest-majority MPC protocol into one with input authentication. Our compiler incurs significantly lower computational costs and competitive communication overheads when compared to the best existing solutions, while entirely avoiding the (potentially expensive) protocol-specific techniques and pre-processing requirements that are inherent to these solutions. For $n$-party honest majority MPC protocols with abort security where each party has $\ell$ inputs, our compiler incurs $O(n\log \ell)$ communication overall and a computational overhead of $O(\ell)$ group exponentiations per party (the corresponding overheads for the most efficient existing solution are $O(n^2)$ and $O(\ell n)$). Finally, for a corruption threshold $t<n/3$, our compiler preserves the stronger identifiable abort security of the underlying MPC protocol. No existing solution for authenticated MPC achieves this regardless of the corruption threshold. Along the way, we make several technical contributions that are of independent interest. This includes the notion of distributed proofs of knowledge and concrete realizations of the same for several relations of interest, such as proving knowledge of many popularly used digital signature schemes, and proving knowledge of opening of a Pedersen commitment.
Last updated:  2024-12-03
Ideal-to-isogeny algorithm using 2-dimensional isogenies and its application to SQIsign
Hiroshi Onuki and Kohei Nakagawa
The Deuring correspondence is a correspondence between supersingular elliptic curves and quaternion orders. Under this correspondence, an isogeny between elliptic curves corresponds to a quaternion ideal. This correspondence plays an important role in isogeny-based cryptography and several algorithms to compute an isogeny corresponding to a quaternion ideal (ideal-to-isogeny algorithms) have been proposed. In particular, SQIsign is a signature scheme based on the Deuring correspondence and uses an ideal-to-isogeny algorithm. In this paper, we propose a novel ideal-to-isogeny algorithm using isogenies of dimension $2$. Our algorithm is based on Kani's reducibility theorem, which gives a connection between isogenies of dimension $1$ and $2$. By using a base field of characteristic $p$ of the form $2^f g - 1$ for a small odd integer $g$, our algorithm requires only $2$-isogenies and $(2, 2)$-isogenies, with all operations in $\mathbb{F}_{p^2}$. We apply our algorithm to SQIsign and compare the efficiency of the new algorithm with the existing one. Our analysis shows that the key generation and the signing in our algorithm are at least twice as fast as those in the existing algorithm at the NIST security level 1. This advantage becomes more significant at higher security levels. In addition, our algorithm also improves the efficiency of the verification in SQIsign.
Last updated:  2024-12-03
NICE-PAKE: On the Security of KEM-Based PAKE Constructions without Ideal Ciphers
Nouri Alnahawi, Jacob Alperin-Sheriff, Daniel Apon, and Alexander Wiesmaier
The interest in realizing generic PQC KEM-based PAKEs has increased significantly in the last few years. One such PAKE is the CAKE protocol, proposed by Beguinet et al. (ACNS ’23). However, despite its simple design based on the well-studied PAKE protocol EKE by Bellovin and Merritt (IEEE S&P ’92), both CAKE and its variant OCAKE do not fully protect against quantum adversaries, as they rely on the Ideal Cipher (IC) model. Related and follow-up works, including Pan and Zeng (ASIACRYPT ’23), Dos Santos et al. (EUROCRYPT ’23), Alnahawi et al. (CANS ’24), and Arragia et al. (IACR ’24/308) although touching on that issue, still rely on an IC. Considering the lack of a quantum IC model and the difficulty of using the classical IC to achieve secure instantiations on public keys in general and PQC in particular, we set out to eliminate it from PAKE design. In this paper, we present the No IC Encryption (NICE)-PAKE, a (semi)-generic PAKE framework providing a quantum-safe alternative for the IC, utilizing simpler cryptographic components for the authentication step. To give a formal proof for our construction, we introduce the notions of A-Part-Secrecy (A-SEC-CCA), Splittable Collision Freeness (A-CFR-CCA) and Public Key Uniformity (SPLIT-PKU) for splittable LWE KEMs. We show the relation of the former to the Non-uniform LWE and the Weak Hint LWE assumptions, as well as its application to ring and module LWE. Notably, this side quest led to some surprising discoveries, as we concluded that the new notion is not directly interchangeable between the LWE variants, or at least not in a straightforward manner. Further, we show that our approach requires some tedious tweaking for the parameter choices in both FrodoKEM and CRYSTALS-Kyber to obtain a secure PAKE construction. We also address some fundamental issues with the common IC usage and identify differences between lattice KEMs regarding their suitability for generic PQC PAKEs, especially regarding the structure of their public keys. We believe that this work marks a further step towards achieving complete security against quantum adversaries in PQC PAKEs.
Last updated:  2024-12-03
MultiReg-FE: Registered FE for Unbounded Inner-Product and Attribute-Weighted Sums
Qiuyan Du, Qiaohan Chu, Jie Chen, Man Ho Au, and Debiao He
Recently, Francati et al. (Asiacrypt 2023) provided the first registered functional encryption (Reg-FE) beyond predicates. Reg-FE addresses the key escrow problem in functional encryption by allowing users to generate their own key pairs, effectively replacing the traditional private-key generator with a key curator. The key curator holds no secret information and runs deterministic algorithms to generate the master public key for encryption and helper keys for decryption. However, existing Reg-FE schemes under standard assumptions require fixed data sizes, which limits their practicality in real-world applications. In this work, we introduce Multi-Function Registered Functional Encryption for Inner-Product (MultiReg-FE for IP), a novel extension of Reg-FE. It enables users to register multiple functions under a single public key. With MultiReg-FE, we achieve both Reg-FE for Unbounded Inner-Product (Unbounded IP), which removes the need to predetermine vector lengths, and Reg-FE for Attribute-Weighted Sums with Inner-Product (AWSw/IP), allowing computations over arbitrary numbers of attribute-value pairs. All our schemes achieve adaptive-IND-security. Specifically, we present: - MultiReg-FE for Inner-Product, which supports an unbounded number of function vectors from each user. - Reg-FE for Unbounded Inner-Product, removing the need for preset vector lengths. - The first Reg-FE for AWSw/IP in the public-key setting.
Last updated:  2024-12-03
HyperPianist: Pianist with Linear-Time Prover and Logarithmic Communication Cost
Chongrong Li, Pengfei Zhu, Yun Li, Cheng Hong, Wenjie Qu, and Jiaheng Zhang
Recent years have seen great improvements in zero-knowledge proofs (ZKPs). Among them, zero-knowledge SNARKs are notable for their compact and efficiently-verifiable proofs, but suffer from high prover costs. Wu et al. (Usenix Security 2018) proposed to distribute the proving task across multiple machines, and achieved significant improvements in proving time. However, existing distributed ZKP systems still have quasi-linear prover cost, and may incur a communication cost that is linear in circuit size. In this paper, we introduce HyperPianist. Inspired by the state-of-the-art distributed ZKP system Pianist (Liu et al., S&P 2024) and the multivariate proof system HyperPlonk (Chen et al., EUROCRYPT 2023), we design a distributed multivariate polynomial interactive oracle proof (PIOP) system with a linear-time prover cost and logarithmic communication cost. Unlike Pianist, HyperPianist incurs no extra overhead in prover time or communication when applied to general (non-data-parallel) circuits. To instantiate the PIOP system, we adapt two additively-homomorphic multivariate polynomial commitment schemes, multivariate KZG (Papamanthou et al., TCC 2013) and Dory (Lee et al., TCC 2021), into the distributed setting, and get HyperPianist$^K$ and HyperPianist$^D$ respectively. Both systems have linear prover complexity and logarithmic communication cost; furthermore, HyperPianist$^D$ requires no trusted setup. We also propose HyperPianist+, incorporating an optimized lookup argument based on Lasso (Setty et al., EUROCRYPT 2024) with lower prover cost. Experiments demonstrate that HyperPianist$^K$ and HyperPianist$^D$ achieve speedups of $66.8\times$ and $44.9\times$ over HyperPlonk with 32 distributed machines. Compared to Pianist, HyperPianist$^K$ can be $3.2\times$ and $5.0\times$ as fast and HyperPianist$^D$ can be $2.7\times$ and $4.1\times$ as fast, on vanilla gates and custom gates respectively.
Last updated:  2024-12-03
Universally Composable and Reliable Password Hardening Services
Shaoqiang Wu and Ding Wang
The password-hardening service (PH) is a crypto service that armors canonical password authentication with an external key against offline password guessing in case the password file is somehow compromised/leaked. The game-based formal treatment of PH was brought by Everspaugh et al. at USENIX Security'15. Their work is followed by efficiency-enhancing PO-COM (CCS'16), security-patching Phoenix (USENIX Security'17), and functionality-refining PW-Hero (SRDS'22). However, the issue of single points of failure (SPF) inherently impairs the availability of these PH schemes. More specifically, the failure of a single PH server responsible for crypto computation services will suspend password authentication for all users. We propose the notion of reliable PH, which improves the availability of PH by eliminating SPF. We present a modular PH construction, TF-PH, essentially a generic compiler that can transform any PH protocol into a reliable one without SPF via introducing threshold failover. Particularly, we propose a concrete reliable PH protocol, called TF-RePhoenix, a simple and efficient construction with RePhoenix (which improves over Phoenix at USENIX Security'17) as the PH module. Security is proven within the universally composable (UC) security framework and the random oracle model (ROM), where we, for the first time, formalize the ideal UC functionalities of PH and reliable PH. We comparatively evaluate the efficiency of our TF-PH with the canonical threshold method (taken as an example, the threshold solution introduced by Brost et al. at CCS'20 in a PH-derived domain -- password-hardened encryption). Results show that our threshold failover-based solution to SPF provides optimal performance and achieves failover in a millisecond.
Last updated:  2024-12-03
Interactive Threshold Mercurial Signatures and Applications
Masayuki Abe, Masaya Nanri, Octavio Perez Kempner, and Mehdi Tibouchi
Mercurial signatures are an extension of equivalence class signatures that allow malleability for the public keys, messages, and signatures within the respective classes. Unfortunately, the most efficient construction to date suffers from a weak public key class-hiding property, where the original signer with the signing key can link the public keys in the same class. This is a severe limitation in their applications, where the signer is often considered untrustworthy of privacy. This paper presents two-party and multi-party interactive threshold mercurial signatures that overcome the above limitation by eliminating the single entity who knows the signing key. For the general case, we propose two constructions. The first follows the same interactive structure as the two-party case, avoiding complex distributed computations such as randomness generation, inversion, and multiplication, and even eliminates the need for private communication between parties. The second is based on a blueprint for general multi-party computation using verifiable secret sharing, but adopting optimizations. We show applications in anonymous credential systems that individually fit the two-party and multi-party constructions. In particular, in the two-party case, our approach provides stronger privacy by completely removing the trust in the authorities. We also discuss more applications, from blind signatures to multi-signatures and threshold ring signatures. Finally, to showcase the practicality of our approach, we implement our interactive constructions and compare them against related alternatives.
Last updated:  2024-12-02
Gold OPRF: Post-Quantum Oblivious Power Residue PRF
Yibin Yang, Fabrice Benhamouda, Shai Halevi, Hugo Krawczyk, and Tal Rabin
We propose plausible post-quantum (PQ) oblivious pseudorandom functions (OPRFs) based on the Power Residue PRF (Damgård CRYPTO’88), a generalization of the Legendre PRF. For security parameter $\lambda$, we consider the PRF $\mathsf{Gold}_k(x)$ that maps an integer $x$ modulo a public prime $p = 2^\lambda\cdot g + 1$ to the element $(k + x)^g \bmod p$, where $g$ is public and $\log g \approx 2\lambda$. At the core of our constructions are efficient novel methods for evaluating $\mathsf{Gold}$ within two-party computation ($\mathsf{2PC}\text{-}\mathsf{Gold}$), achieving different security requirements. Here, the server $\mathcal{P}_s$ holds the PRF key $k$ whereas the client $\mathcal{P}_c$ holds the PRF input $x$, and they jointly evaluate $\mathsf{Gold}$ in 2PC. $\mathsf{2PC}\text{-}\mathsf{Gold}$ uses standard Vector Oblivious Linear Evaluation (VOLE) correlations and is information-theoretic and constant-round in the (V)OLE-hybrid model. We show: • For a semi-honest $\mathcal{P}_s$ and a malicious $\mathcal{P}_c$: a $\mathsf{2PC}\text{-}\mathsf{Gold}$ that just uses a single (V)OLE correlation, and has a communication complexity of $3$ field elements ($2$ field elements if we only require a uniformly sampled key) and a computational complexity of $\mathcal{O}(\lambda)$ field operations. We refer to this as half-malicious security. • For malicious $\mathcal{P}_s$ and $\mathcal{P}_c$: a $\mathsf{2PC}\text{-}\mathsf{Gold}$ that just uses $\frac{\lambda}{4} + \mathcal{O}(1)$ VOLE correlations, and has a communication complexity of $\frac{\lambda}{4} + \mathcal{O}(1)$ field elements and a computational complexity of $\mathcal{O}(\lambda)$ field operations. These constructions support additional features and extensions, e.g., batched evaluations with better amortized costs where $\mathcal{P}_c$ repeatedly evaluates the PRF under the same key. Furthermore, we extend $\mathsf{2PC}\text{-}\mathsf{Gold}$ to Verifiable OPRFs and use the methodology from Beullens et al. (ePrint’24) to obtain strong OPRF security in the universally composable setting. All the protocols are efficient in practice. We implemented $\mathsf{2PC}\text{-}\mathsf{Gold}$—with (PQ) VOLEs—and benchmarked them. For example, our half-malicious (resp. malicious) $n$-batched PQ OPRFs incur about $100$B (resp. $1.9$KB) of amortized communication for $\lambda = 128$ and large enough $n$.
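The PRF itself is fully specified in the abstract, so a plaintext reference evaluation is easy to sketch. The toy parameters below ($\lambda = 8$, $\log g \approx 2\lambda$) stand in for the real $\lambda = 128$; the 2PC evaluation, which is the paper's actual contribution, is not sketched here.

    def is_prime(n: int) -> bool:               # trial division; fine for toy sizes
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    lam = 8
    g = next(g for g in range(1 << 15, 1 << 16) if is_prime((1 << lam) * g + 1))
    p = (1 << lam) * g + 1                      # prime of the shape p = 2^lambda * g + 1

    def gold(k: int, x: int) -> int:            # Gold_k(x) = (k + x)^g mod p
        return pow((k + x) % p, g, p)

    print(gold(k=123456, x=42))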
Last updated:  2024-12-02
A Complete Characterization of One-More Assumptions In the Algebraic Group Model
Jake Januzelli and Jiayu Xu
One-more problems like One-More Discrete Logarithm (OMDL) and One-More Diffie--Hellman (OMDH) have found wide use in cryptography, due to their ability to naturally model security definitions for interactive primitives like blind signatures and oblivious PRF. Furthermore, a generalization of OMDH called Threshold OMDH (TOMDH) has proven useful for building threshold versions of interactive protocols. However, due to their complexity it is often unclear how hard such problems actually are, leading cryptographers to analyze them in idealized models like the Generic Group Model (GGM) and Algebraic Group Model (AGM). In this work we give a complete characterization of known group-based one-more problems in the AGM, using the $Q$-DL hierarchy of assumptions defined in the work of Bauer, Fuchsbauer and Loss (CRYPTO '20). 1. Regarding (T)OMDH, we show (T)OMDH is part of the $Q$-DL hierarchy in the AGM; in particular, $Q$-OMDH is equivalent to $Q$-DL. Along the way we find and repair a flaw in the original GGM hardness proof of TOMDH, thereby giving the first correct proof that TOMDH is hard in the GGM. 2. Regarding OMDL, we show the $Q$-OMDL problems constitute an infinite hierarchy of problems in the AGM incomparable to the $Q$-DL hierarchy; that is, $Q$-OMDL is separate from $Q'$-OMDL if $Q' \neq Q$, and also separate from $Q'$-DL unless $Q = Q' = 0$.
Last updated:  2024-12-02
Lova: A Novel Framework for Verifying Mathematical Proofs with Incrementally Verifiable Computation
Noel Elias
Efficiently verifying mathematical proofs and computations has been a heavily researched topic within Computer Science. Particularly, even repetitive steps within a proof become much more complex and inefficient to validate as proof sizes grow. To solve this problem, we suggest viewing it through the lens of Incrementally Verifiable Computation (IVC). However, many IVC methods, including the state-of-the-art Nova recursive SNARKs, require proofs to be linear and for each proof step to be identical. This paper proposes Lova, a novel framework to verify mathematical proofs end-to-end that solves these problems. Particularly, our approach achieves a few novelties alongside the first-of-its-kind implementation of Nova: (i) an innovative proof splicing mechanism to generate independent proof sequences, (ii) a system of linear algorithms to verify a variety of mathematical logic rules, and (iii) a novel multiplexing circuit allowing non-homogeneous proof sequences to be verified together in a single Nova proof. The resulting Lova pipeline has linear prover time, constant verification cost, easy dynamic modification, and optional zero-knowledge privacy to efficiently validate mathematical proofs. Code is available at https://github.com/noelkelias/lova.
Last updated:  2024-12-02
Truncation Untangled: Scaling Fixed-Point Arithmetic for Privacy-Preserving Machine Learning to Large Models and Datasets
Christopher Harth-Kitzerow and Georg Carle
Fixed-point arithmetic (FPA) is essential to enable practical Privacy-Preserving Machine Learning (PPML). When multiplying two fixed-point numbers, truncation is required to ensure that the product maintains correct precision. While multiple truncation schemes based on Secure Multiparty Computation (MPC) have been proposed, which of the different schemes offers the best trade-off between accuracy and efficiency on common PPML datasets and models has remained underexplored. In this work, we study several different stochastic and exact truncation approaches found in the MPC literature that require different slack sizes, i.e., additional bits required by each secret share to ensure correctness. We provide novel, improved constructions for each truncation approach in the semi-honest 3-PC and malicious 4-PC settings, reducing communication and round complexity by up to a factor of three. Moreover, we propose a truncation scheme that does not introduce any communication overhead in the online phase and exactly matches the accuracy of plaintext floating-point PyTorch inference of VGG-16 on the ImageNet dataset with over 80% accuracy using shares with a bitlength of only 32. This is the first time that high PPML accuracy is demonstrated on ImageNet.
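The plaintext behaviour that these MPC truncation schemes emulate is straightforward: a product of two scale-$2^f$ values carries scale $2^{2f}$, so it must be shifted right by $f$ to restore precision. The sketch below shows only this reference arithmetic (the parameter $f$ and the example values are illustrative); performing the shift on secret shares is where the schemes studied in the paper differ.

    f = 16                                      # fractional bits

    def encode(x: float) -> int:
        return round(x * (1 << f))

    def decode(v: int) -> float:
        return v / (1 << f)

    def fp_mul(a: int, b: int) -> int:
        return (a * b) >> f                     # truncation (floors for negatives)

    a, b = encode(3.25), encode(-1.5)
    print(decode(fp_mul(a, b)))                 # -4.875, exact up to 2^-f error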
Last updated:  2024-12-02
Worst-Case Lattice Sampler with Truncated Gadgets and Applications
Corentin Jeudy and Olivier Sanders
Gadget-based samplers have proven to be a key component of several cryptographic primitives, in particular in the area of privacy-preserving mechanisms. Most constructions today follow the approach introduced by Micciancio and Peikert (MP) yielding preimages whose dimension linearly grows with that of the gadget. To improve performance, some papers have proposed to truncate the gadget but at the cost of an important feature of the MP sampler, namely the ability to invert arbitrary syndromes. Technically speaking, they replace the worst-case MP sampler by an average-case sampler that can only be used in specific contexts. Far from being a mere theoretical restriction, it prevents the main applications of gadget-based samplers from using truncated variants and thus from benefiting from the associated performance gains. In this paper, we solve this problem by describing a worst-case sampler that still works with truncated gadgets. Its main strength is that it retains the main characteristics of the MP sampler while providing flexibility in the choice of the truncation parameter. As a consequence, it can be used as a plug-in replacement for all applications relying on the MP sampler so far, leading to performance improvements up to 30% as illustrated by several examples in this paper. Our sampler is supported by a thorough security analysis that addresses the hurdles met by previous works and its practicality is demonstrated by a concrete implementation.
Last updated:  2024-12-02
One-More Unforgeability for Multi- and Threshold Signatures
Sela Navot and Stefano Tessaro
This paper initiates the study of one-more unforgeability for multi-signatures and threshold signatures as a stronger security goal, ensuring that $\ell$ executions of a signing protocol cannot result in more than $\ell$ signatures. This notion is widely used in the context of blind signatures, but we argue that it is a convenient way to model strong unforgeability for other types of distributed signing protocols. We provide formal security definitions for one-more unforgeability (OMUF) and show that the HBMS multi-signature scheme does not satisfy this definition, whereas MuSig and MuSig2 do. We also show that mBCJ multi-signatures do not satisfy OMUF, as well as expose a subtle issue with their existential unforgeability (which does not contradict their original security proof). For threshold signatures, we show that FROST satisfies OMUF, but ROAST does not.
Last updated:  2024-12-02
Vote&Check: Secure Postal Voting with Reduced Trust Assumptions
Véronique Cortier, Alexandre Debant, Pierrick Gaudry, and Léo Louistisserand
Postal voting is a frequently used alternative to on-site voting. Traditionally, its security relies on organizational measures, and voters have to trust many entities. In recent years, several schemes have been proposed to add verifiability properties to postal voting, while preserving vote privacy. Postal voting comes with specific constraints. We conduct a systematic analysis of this setting and we identify a list of generic attacks, highlighting that some attacks seem unavoidable. This study is applied to existing systems of the literature. We then propose Vote&Check, a postal voting protocol which provides a high level of security, with a reduced number of authorities. Furthermore, it requires only basic cryptographic primitives, namely hash functions and signatures. The security properties are proven in a symbolic model, with the help of the ProVerif tool.
Last updated:  2024-12-02
On Random Sampling of Supersingular Elliptic Curves
Marzio Mula, Nadir Murru, and Federico Pintore
We consider the problem of sampling random supersingular elliptic curves over finite fields of cryptographic size (SRS problem). The currently best-known method combines the reduction of a suitable complex multiplication (CM) elliptic curve and a random walk over some supersingular isogeny graph. Unfortunately, this method is not suitable when the endomorphism ring of the generated curve needs to be hidden, like in some cryptographic applications. This motivates a stricter version of the SRS problem, requiring that the sampling algorithm gives no information about the endomorphism ring of the output curve (cSRS problem). In this work we formally define the SRS and cSRS problems, which are both of theoretical interest. We discuss the relevance of the two problems for cryptographic applications, and we provide a self-contained survey of the known approaches to solve them. Those for the cSRS problem have exponential complexity in the characteristic of the base finite field (since they require computing and finding roots of polynomials of large degree), leaving the problem open. In the second part of the paper, we propose and analyse some alternative techniques – based either on the Hasse invariant or division polynomials – and we explain the reasons why they do not readily lead to efficient cSRS algorithms, but they may open promising research directions.
Last updated:  2024-12-02
Two-Round 2PC ECDSA at the Cost of 1 OLE
Michael Adjedj, Constantin Blokh, Geoffroy Couteau, Antoine Joux, and Nikolaos Makriyannis
We present a novel protocol for two-party ECDSA that achieves two rounds (a single back-and-forth communication) at the cost of a single oblivious linear function evaluation (OLE). In comparison, the previous work of [DKLs18] (S&P 2018) achieves two rounds at the cost of three OLEs, while [BHL24] (Manuscript 2024) requires expensive zero-knowledge proofs on top of the OLE. We demonstrate this by proving that in the generic group model, any adversary capable of generating forgeries for our protocol can be transformed into an adversary that finds preimages for the ECDSA message digest function (e.g., the SHA family). Interestingly, our analysis is closely related to, and has ramifications for, the `presignatures' mode of operation—[CGGMP20] (CCS 2020), [GroSho22] (EUROCRYPT 2022). Motivated by applications to embedded cryptocurrency wallets, where a single server maintains distinct, shared public keys with separate clients (i.e., a star-shaped topology), and with the goal of minimizing communication, we instantiate our protocol using Paillier encryption and suitable zero-knowledge proofs. To reduce computational overhead, we thoroughly optimize all components of our protocol under sound cryptographic assumptions, specifically small-exponent variants of RSA-style assumptions. Finally, we implement our protocol and provide benchmarks. At the 128-bit security level, the signing phase requires approximately 50ms of computation time on a standard linux machine, and 2KB of bandwidth.
Last updated:  2024-12-02
Do Not Disturb a Sleeping Falcon: Floating-Point Error Sensitivity of the Falcon Sampler and Its Consequences
Xiuhan Lin, Mehdi Tibouchi, Yang Yu, and Shiduo Zhang
Falcon is one of the three post-quantum signature schemes already selected by NIST for standardization. It is the most compact among them, and offers excellent efficiency and security. However, it is based on a complex algorithm for lattice discrete Gaussian sampling which presents a number of implementation challenges. In particular, it relies on (possibly emulated) floating-point arithmetic, which is often regarded as a cause for concern, and has been leveraged in, e.g., side-channel analysis. The extent to which Falcon's use of floating-point arithmetic can cause security issues has yet to be thoroughly explored in the literature. In this paper, we contribute to filling this gap by identifying a way in which Falcon's lattice discrete Gaussian sampler, due to specific design choices, is singularly sensitive to floating-point errors. In the presence of small floating-point discrepancies (which can occur in various ways, including the use of the two almost but not quite equivalent signing procedures ``dynamic'' and ``tree'' exposed by the Falcon API), we find that, when called twice on the same input, the Falcon sampler has a small but significant chance (on the order of once in a few thousand calls) of outputting two different lattice points with a very structured difference that immediately reveals the secret key. This is in contrast to other lattice Gaussian sampling algorithms like Peikert's sampler and Prest's hybrid sampler, that are stable with respect to small floating-point errors. Correctly generated Falcon signatures include a salt that should in principle prevent the sampler from ever being called on the same input twice. In that sense, our observation has little impact on the security of Falcon signatures per se (beyond echoing warnings about the dangers of repeated randomness). On the other hand, it is critical for derandomized variants of Falcon, which have been proposed for use in numerous settings. One can mention in particular identity-based encryption, SNARK-friendly signatures, and sublinear signature aggregation. For all these settings, small floating-point discrepancies have a chance of resulting in full private key exposure, even when using the slower, integer-based emulated floating-point arithmetic of Falcon's reference implementation.
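This is not Falcon code, but a generic reminder of the phenomenon the paper exploits: two mathematically equal floating-point evaluation orders can disagree in the last bit. The paper shows that Falcon's sampler can amplify exactly this kind of tiny discrepancy, across two runs on the same input, into two different lattice points.

    a, b, c = 0.1, 0.2, 0.3
    left, right = (a + b) + c, a + (b + c)      # equal over the reals
    print(left == right, left - right)          # False 1.1102230246251565e-16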
Last updated:  2024-12-02
Avenger Ensemble: Genetic Algorithm-Driven Ensemble Selection for Deep Learning-based Side-Channel Analysis
Zhao Minghui and Trevor Yap
Side-Channel Analysis (SCA) exploits physical vulnerabilities in systems to reveal secret keys. With the rise of the Internet of Things, evaluating SCA attacks has become crucial. Profiling attacks, enhanced by Deep Learning-based Side-Channel Analysis (DLSCA), have shown significant improvements over classical techniques. Recent works demonstrate that ensemble methods outperform single neural networks. However, almost every existing ensemble selection method in SCA only picks the top few best-performing neural networks for the ensemble, an approach we coin the Greedily-Selected Method (GSM), which may not be optimal. This work proposes Evolutionary Avenger Initiative (EAI), a genetic algorithm-driven ensemble selection algorithm, to create effective ensembles for DLSCA. We investigate two fitness functions and evaluate EAI across four datasets, including AES and Ascon implementations. We show that EAI outperforms GSM, recovering secrets with the least number of traces. Notably, EAI successfully recovers secret keys for the Ascon datasets where GSM fails, demonstrating its effectiveness.
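To illustrate the general shape of genetic-algorithm-driven ensemble selection (not EAI's actual algorithm or fitness functions), the toy below searches over ensemble-membership bitmasks: individuals are subsets of trained models, and fitness scores an ensemble. The fitness here is a random stand-in; in SCA it would be a metric such as key rank or guessing entropy on validation traces.

    import random

    random.seed(0)
    N_MODELS, POP, GENS = 12, 20, 30
    score = {i: random.random() for i in range(N_MODELS)}   # fake per-model quality

    def fitness(mask):                          # stand-in ensemble fitness
        chosen = [i for i in range(N_MODELS) if mask >> i & 1]
        return sum(score[i] for i in chosen) / len(chosen) if chosen else 0.0

    def mutate(mask):
        return mask ^ (1 << random.randrange(N_MODELS))     # flip one membership bit

    def crossover(a, b):
        cut = random.randrange(1, N_MODELS)                 # single-point crossover
        return (a & ((1 << cut) - 1)) | (b & ~((1 << cut) - 1))

    pop = [random.getrandbits(N_MODELS) for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:POP // 2]                  # keep the better half
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(POP - len(elite))]

    best = max(pop, key=fitness)
    print(f"best ensemble mask: {best:012b}, fitness: {fitness(best):.3f}")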
Last updated:  2024-12-02
ARK: Adaptive Rotation Key Management for Fully Homomorphic Encryption Targeting Memory Efficient Deep Learning Inference
Jia-Lin Chan, Wai-Kong Lee, Denis C.-K Wong, Wun-She Yap, and Bok-Min Goi
Advancements in deep learning (DL) have not only revolutionized many aspects of our lives, but also introduced privacy concerns, because DL processes vast amounts of information closely related to our daily lives. Fully Homomorphic Encryption (FHE) is one of the promising solutions to this privacy issue, as it allows computations to be carried out directly on the encrypted data. However, FHE incurs high computational cost, which is a huge barrier to its widespread adoption. Many prior works proposed techniques to improve the speed of FHE in the past decade, but they often impose significant memory requirements, which may be up to hundreds of gigabytes. Recently, focus has shifted from purely improving speed performance to managing FHE’s memory consumption as a critical challenge. Rovida and Leporati introduced a technique to minimize rotation key memory by retaining only essential keys, yet this technique is limited to cases with symmetric numerical patterns (e.g., -2 -1 0 1 2), constraining its broader utility. In this paper, a new technique, Adaptive Rotation Key (ARK), is proposed that minimizes rotation key memory consumption by exhaustively analyzing numerical patterns to produce a minimal subset of shared rotation keys. ARK also provides a dual-configuration option, enabling users to prioritize memory efficiency or computational speed. In memory-prioritized mode, ARK reduces rotation key memory consumption by 41.17% with a 12.57% increase in execution time. For speed-prioritized mode, it achieves a 24.62% rotation key memory reduction with only a 0.21% impact on execution time. This flexibility positions ARK as an effective solution for optimizing FHE across varied use cases, marking a significant advancement in optimization strategies for FHE-based privacy-preserving systems.
Last updated:  2024-12-02
Multiparty Shuffle: Linear Online Phase is Almost for Free
Jiacheng Gao, Yuan Zhang, and Sheng Zhong
Shuffle is a frequently used operation in secure multiparty computations, with various applications, including joint data analysis and anonymous communication systems. Most existing MPC shuffle protocols are constructed from MPC permutation protocols, which allow a party to securely apply its private permutation to an array of $m$ numbers shared among all $n$ parties. Following a ``permute-in-turn'' paradigm, these protocols result in $\Omega(n^2m)$ complexity in the semi-honest setting. Recent works have significantly improved efficiency and security by adopting a two-phase solution. Specifically, Eskandarian and Boneh demonstrate how to construct MPC shuffle protocols with linear complexity in both semi-honest and malicious adversary settings. However, a more recent study by Song et al. reveals that Eskandarian and Boneh's protocol fails to achieve malicious security. Consequently, designing an MPC shuffle protocol with linear complexity and malicious security remains an open question. In this paper, we address this question by presenting the first general construction of an MPC shuffle protocol that is maliciously secure and has linear online communication and computation complexity, utilizing black-box access to secure arithmetic MPC primitives and an MPC permutation protocol. When instantiating our construction with the SPDZ framework and the best existing maliciously secure MPC shuffle, our construction only slightly increases the offline overhead compared to the semi-honest secure version, and thus achieves a linear online phase almost for free. As our constructions require only black-box access to basic secure MPC primitives and permutation protocols, they are compatible with and can be integrated into most modern MPC frameworks. We provide formal security proofs for both semi-honest and malicious settings, demonstrating that our maliciously secure construction can achieve universally composable security. Experimental results indicate that our construction significantly enhances online performance while maintaining a moderate increase in offline overhead. Given that shuffle is a frequently used primitive in secure multiparty computation, we anticipate that our construction will accelerate many real-world MPC applications.
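For intuition, here is a plaintext simulation of the ``permute-in-turn'' paradigm the paper starts from: each of the $n$ parties in turn applies its private permutation to the $m$ values, so no single party learns the composed permutation. Real protocols apply each permutation under secret sharing (which is what drives the $\Omega(n^2m)$ cost); the sharing is elided here and all names are illustrative.

    import random

    def permute_in_turn(values, n_parties, seed=0):
        rng = random.Random(seed)
        for party in range(n_parties):
            pi = list(range(len(values)))
            rng.shuffle(pi)                     # this party's private permutation
            values = [values[i] for i in pi]    # in MPC: applied to shares, then reshared
        return values

    print(permute_in_turn([10, 20, 30, 40, 50], n_parties=3))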
Last updated:  2024-12-01
PIGEON: A Framework for Private Inference of Neural Networks
Christopher Harth-Kitzerow, Yongqin Wang, Rachit Rajat, Georg Carle, and Murali Annavaram
Privacy-Preserving Machine Learning (PPML) is one of the most relevant use cases for Secure Multiparty Computation (MPC). While private training of large neural networks such as VGG-16 or ResNet-50 on state-of-the-art datasets such as ImageNet is still out of reach due to the performance overhead of MPC, GPU-based MPC frameworks are starting to achieve practical runtimes for private inference. However, we show that, in contrast to plaintext machine learning, the usage of GPU acceleration for both linear (e.g., convolutions) and nonlinear neural network layers (e.g., ReLU) is counterproductive in PPML. While GPUs effectively accelerate linear layers compared to CPU-based MPC implementations, the MPC circuits required to evaluate non-linear layers introduce memory overhead and frequent data movement between the GPU and the CPU to handle network communication. This results in slow ReLU performance and high GPU memory requirements in state-of-the-art GPU-based PPML frameworks, hindering them from scaling to multiple images per second inference throughput and batch sizes larger than eight images on ImageNet. To overcome these limitations, we propose PIGEON, an open-source framework for Private Inference of Neural Networks. PIGEON utilizes a novel ABG programming model that switches between arithmetic vectorization and bitslicing on the CPU for non-linear layers, depending on the MPC-specific computation required, while offloading linear layers to the GPU. Compared to the state-of-the-art PPML framework Piranha, PIGEON achieves two orders of magnitude improvements in ReLU throughput, reduces peak GPU memory utilization by one order of magnitude, and scales better with large batch sizes. This translates to one to two orders of magnitude improvements in throughput for large ImageNet batch sizes (e.g., 192) and more than 70% saturation of a 25 Gbit/s network.
Last updated:  2024-12-01
An algorithm for efficient detection of $(N,N)$-splittings and its application to the isogeny problem in dimension 2
Maria Corte-Real Santos, Craig Costello, and Sam Frengley
We develop an efficient algorithm to detect whether a superspecial genus 2 Jacobian is optimally $(N, N)$-split for each integer $N \leq 11$. Incorporating this algorithm into the best-known attack against the superspecial isogeny problem in dimension 2 gives rise to significant cryptanalytic improvements. Our implementation shows that when the underlying prime $p$ is 100 bits, the attack is sped up by a factor $25{\tt x}$; when the underlying prime is 200 bits, the attack is sped up by a factor $42{\tt x}$; and, when the underlying prime is 1000 bits, the attack is sped up by a factor $160{\tt x}$.
Last updated:  2024-12-01
MULTISS: a protocol for long-term secure distributed storage over multiple remote QKD networks
Thomas Prévost, Olivier Alibart, Anne Marin, and Marc Kaplan
We introduce MULTISS, a new distributed storage protocol over multiple remote Quantum Key Distribution (QKD) networks that ensures long-term data confidentiality. Our protocol extends LINCOS, a secure storage protocol that uses Shamir secret sharing to distribute data in a single QKD network. Instead, MULTISS uses a hierarchical secret sharing scheme that makes certain shares mandatory for the reconstruction of the original secret. We prove that MULTISS keeps the stored data secure even if an eavesdropper (1) gets full access to all storage servers of some of the QKD networks or (2) stores and later breaks all the classical communication between the QKD networks. We demonstrate that this is strictly more secure than LINCOS, which is broken as soon as one QKD network is compromised. Our protocol, like LINCOS, has a procedure to update the shares stored in each QKD network without reconstructing the original data. In addition, we provide a procedure to recover from a full compromise of one of the QKD networks. In particular, we introduce a version of the protocol that can only be implemented over restricted network topologies, but minimizes the communication required in the recovery procedure. In practice, the MULTISS protocol is designed for the case of several QKD networks at the metropolitan scale connected to each other through channels secured by classical cryptography. Hence, MULTISS offers a secure distributed storage solution in a scenario that is compatible with the current deployment of quantum networks.
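A toy sketch of the mandatory-share idea over a prime field, under the following assumptions: the secret is additively split between one mandatory share and a remainder that is Shamir-shared among the other networks. The prime `P`, the helper names, and the parameters are illustrative; the paper's hierarchical scheme and QKD transport are not modeled.

```python
import secrets

P = 2**127 - 1  # toy prime field for Shamir sharing

def shamir_share(secret, t, n):
    # degree-(t-1) polynomial with the secret as constant term
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def shamir_reconstruct(shares):
    # Lagrange interpolation at 0
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, -1, P)) % P
    return s

secret = 1234567
r = secrets.randbelow(P)              # mandatory share: held by one network
rest = (secret - r) % P               # threshold-shared among the others
shares = shamir_share(rest, t=2, n=3)
assert (r + shamir_reconstruct(shares[:2])) % P == secret
```

Without the mandatory share `r`, any coalition of the remaining networks learns nothing about the secret, which captures why certain shares can be made indispensable.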
Last updated:  2024-12-01
Maliciously Secure Circuit Private Set Intersection via SPDZ-Compatible Oblivious PRF
Yaxi Yang, Xiaojian Liang, Xiangfu Song, Ye Dong, Linting Huang, Hongyu Ren, Changyu Dong, and Jianying Zhou
Circuit Private Set Intersection (Circuit-PSI) allows two parties to compute a function $f$ on items in the intersection of their input sets without revealing the items in the intersection. It is a well-known variant of PSI and has numerous practical applications. However, existing Circuit-PSI protocols only provide security against \textit{semi-honest} adversaries. A straightforward approach to constructing a maliciously secure Circuit-PSI is to extend a purely garbled-circuit-based PSI (NDSS'12 \cite{huang2012private}) to a maliciously secure Circuit-PSI, but this would not be concretely efficient. Another is to convert state-of-the-art semi-honest Circuit-PSI protocols (EUROCRYPT'21 \cite{rindal2021vole}; PoPETS'22 \cite{chandran2022circuit}) to be secure in the malicious setting. However, this runs into \textit{the consistency issue} (EUROCRYPT'11 \cite{shelat2011two}), since the parties cannot guarantee that the inputs to the function $f$ remain unchanged from the previous step. This paper tackles this issue by presenting the first maliciously secure Circuit-PSI protocol. Our key innovation, the Distributed Dual-key Oblivious Pseudorandom Function (DDOPRF), enables the oblivious evaluation of secret-shared inputs using dual keys within the SPDZ MPC framework. Notably, this construction seamlessly ensures fairness within the Circuit-PSI. Compared to the state-of-the-art semi-honest Circuit-PSI protocol (PoPETS'22), experimental results demonstrate that our malicious Circuit-PSI protocol not only reduces communication costs by around $5\times$ but also enhances efficiency, particularly for modest input sets ($\le 2^{14}$), in the WAN setting with high latency and limited bandwidth.
Last updated:  2024-12-01
Direct FSS Constructions for Branching Programs and More from PRGs with Encoded-Output Homomorphism
Elette Boyle, Lisa Kohl, Zhe Li, and Peter Scholl
Function secret sharing (FSS) for a class $\cal{F}$ allows splitting a secret function $f \in \cal{F}$ into (succinct) secret shares $f_0,f_1$, such that for all $x\in \{0,1\}^n$ it holds that $f_0(x)-f_1(x)=f(x)$. FSS has numerous applications, including private database queries, nearest neighbor search, private heavy hitters and secure computation in the preprocessing model, where the supported class $\cal{F}$ translates to richness in the application. Unfortunately, concretely efficient FSS constructions are only known for very limited function classes. In this work we introduce the notion of pseudorandom generators with encoded-output homomorphism (EOH-PRGs), and give direct FSS constructions for bit-fixing predicates, branching programs and more based on this primitive. Further, we give constructions of FSS for deterministic finite automata (DFAs) from a KDM-secure variant of EOH-PRGs. - New abstractions. Following the work of Alamati et al. (EUROCRYPT '19), who classify minicrypt primitives with algebraic structure and their applications, we capture the essence of our FSS constructions in the notion of the EOH-PRG, paving the road towards future efficiency improvements via new instantiations of this primitive. The abstraction of the EOH-PRG and its instantiations may be of independent interest, as it is an approximate substitute for an ideal homomorphic PRG. - Better efficiency. We show that EOH-PRGs can be instantiated from LWE and a small-exponent variant of the DCR assumption. A theoretical analysis of our instantiations suggests efficiency improvements over the state of the art both in terms of key size and evaluation time: we show that our FSS instantiations lead to smaller key sizes, improving over previous constructions by a factor of $3.5$ and more. While for bit-fixing predicates our FSS constructions show comparable or mildly improved run time (depending on the instantiation), we achieve considerable improvements for branching programs by avoiding the expensive generic transformation via universal circuits, shaving off a factor of $w$ and more in the number of abstract operations, where $w$ corresponds to an upper bound on the width of the underlying class of branching programs. - New constructions. We show that our instantiations of EOH-PRGs additionally support a form of KDM-security, without requiring an additional circular-security assumption. Based on this, we give the first FSS construction for DFAs which supports the evaluation of inputs of a-priori unbounded length without relying on FHE. - Applications. We outline applications of our FSS constructions, including pattern matching with wild cards, image matching, nearest neighbor search and regular expression matching.
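To make the correctness equation $f_0(x)-f_1(x)=f(x)$ concrete, here is a deliberately non-succinct toy split: one share is a PRF key and the other is the function masked by the PRF. Real FSS additionally makes both shares succinct, which is the hard part and the subject of the paper; the PRF, modulus, and function below are illustrative.

```python
import hashlib

def prf(key, x, mod=2**32):
    # toy PRF from SHA-256; domain is one byte for simplicity
    return int.from_bytes(hashlib.sha256(key + bytes([x])).digest()[:4],
                          "big") % mod

key = b"k1"
f = lambda x: 7 * x + 1            # the secret function
f1 = lambda x: prf(key, x)         # share 1: just the PRF key
f0_table = {x: (f(x) + f1(x)) % 2**32 for x in range(256)}  # share 0: masked f

x = 42
assert (f0_table[x] - f1(x)) % 2**32 == f(x)   # f0(x) - f1(x) = f(x)
```

Share 1 alone is independent of $f$, and share 0 alone is pseudorandom; the shortcoming is that share 0 is as large as the truth table, which is exactly what succinct FSS constructions avoid.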
Last updated:  2024-11-30
Composability in Watermarking Schemes
Jiahui Liu and Mark Zhandry
Software watermarking allows for embedding a mark into a piece of code, such that any attempt to remove the mark will render the code useless. Provably secure watermarking schemes currently seem limited to programs computing various cryptographic operations, such as evaluating pseudorandom functions (PRFs), signing messages, or decrypting ciphertexts (the latter often going by the name ``traitor tracing''). Moreover, each of these watermarking schemes has an ad-hoc construction of its own. We observe, however, that many cryptographic objects are used as building blocks in larger protocols. We ask: just as we can compose building blocks to obtain larger protocols, can we compose watermarking schemes for the building blocks to obtain watermarking schemes for the larger protocols? We give an affirmative answer to this question by precisely formulating a set of requirements that allow for composing watermarking schemes. We use our formulation to derive a number of applications.
Last updated:  2024-11-30
Distributed Differentially Private Data Analytics via Secure Sketching
Jakob Burkhardt, Hannah Keller, Claudio Orlandi, and Chris Schwiegelshohn
We explore the use of distributed differentially private computations across multiple servers, balancing the tradeoff between the error introduced by the differentially private mechanism and the computational efficiency of the resulting distributed algorithm. We introduce the linear-transformation model, where clients have access to a trusted platform capable of applying a public matrix to their inputs. Such computations can be securely distributed across multiple servers using simple and efficient secure multiparty computation techniques. The linear-transformation model serves as an intermediate model between the highly expressive central model and the minimal local model. In the central model, clients have access to a trusted platform capable of applying any function to their inputs. However, this expressiveness comes at a cost, as it is often expensive to distribute such computations, leading to the central model typically being implemented by a single trusted server. In contrast, the local model assumes no trusted platform, which forces clients to add significant noise to their data. The linear-transformation model avoids the single point of failure for privacy present in the central model, while also mitigating the high noise required in the local model. We demonstrate that linear transformations are very useful for differential privacy, allowing for the computation of linear sketches of input data. These sketches largely preserve utility for tasks such as private low-rank approximation and private ridge regression, while introducing only minimal error, critically independent of the number of clients. Previously, such accuracy had only been achieved in the more expressive central model.
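A schematic of the model's appeal, assuming a Gaussian mechanism with an uncalibrated noise scale (sensitivity analysis and the secure multi-server distribution are omitted): noise is added once to a short public linear sketch of the aggregate, so the added error does not grow with the number of clients.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, d, k = 1000, 50, 10

A = rng.standard_normal((k, d)) / np.sqrt(k)   # public sketching matrix
X = rng.standard_normal((n_clients, d))        # one input row per client

# The trusted linear platform applies the public matrix to the aggregate;
# DP noise is added once to the k-dimensional sketch, so the error is
# independent of n_clients (sigma here is an uncalibrated placeholder).
sketch = A @ X.sum(axis=0)
sigma = 1.0
private_sketch = sketch + rng.normal(0.0, sigma, size=k)
```

In the local model, by contrast, each of the `n_clients` inputs would be noised separately before aggregation, so the total error would scale with the number of clients.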
Last updated:  2024-11-30
Multi-Client Attribute-Based and Predicate Encryption from Standard Assumptions
David Pointcheval and Robert Schädlich
Multi-input Attribute-Based Encryption (ABE) is a generalization of key-policy ABE where attributes can be independently encrypted across several ciphertexts, and a joint decryption of these ciphertexts is possible if and only if the combination of attributes satisfies the policy of the decryption key. We extend this model by introducing a new primitive that we call Multi-Client ABE (MC-ABE), which provides the usual enhancements of multi-client functional encryption over multi-input functional encryption. Specifically, we separate the secret keys that are used by the different encryptors and consider the case that some of them may be corrupted by the adversary. Furthermore, we tie each ciphertext to a label and enable a joint decryption of ciphertexts only if all ciphertexts share the same label. We provide constructions of MC-ABE for various policy classes based on SXDH. Notably, we can deal with policies that are not a conjunction of local policies, which has been a limitation of previous constructions from standard assumptions. Subsequently, we introduce the notion of Multi-Client Predicate Encryption (MC-PE) which, in contrast to MC-ABE, does not only guarantee message-hiding but also attribute-hiding. We present a new compiler that turns any constant-arity MC-ABE into an MC-PE for the same arity and policy class. Security is proven under the LWE assumption.
Last updated:  2024-11-30
Scloud+: a Lightweight LWE-based KEM without Ring/Module Structure
Anyu Wang, Zhongxiang Zheng, Chunhuan Zhao, Zhiyuan Qiu, Guang Zeng, Ye Yuan, Changchun Mu, and Xiaoyun Wang
We present Scloud+, an LWE-based key encapsulation mechanism (KEM). The key feature of Scloud+ is its use of the unstructured-LWE problem (i.e., without algebraic structures such as rings or modules) and its incorporation of ternary secrets and lattice coding to enhance performance. A notable advantage of the unstructured-LWE problem is its resistance to potential attacks exploiting algebraic structures, making it a conservative choice for constructing high-security schemes. However, a key disadvantage of such schemes is their limited computational and communication efficiency. Scloud+ utilizes ternary secrets and $\text{BW}_{32}$ lattice codes to enhance noise control and ensure robust error correction during decryption, enabling smaller parameters while maintaining low decryption failure probabilities. Equipped with these techniques, Scloud+ exhibits a significant improvement in efficiency. When compared with FrodoKEM for parameter sets targeting 128, 192, and 256 bits of security respectively, Scloud+ achieves practical performance with a public key size approximately $0.71 \sim 0.87$x and a ciphertext size approximately $0.56 \sim 0.78$x that of FrodoKEM. The encapsulation plus decapsulation time is approximately $0.74 \sim 0.77$x that of FrodoKEM.
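For background, here is an unstructured-LWE toy with a ternary secret, in the Regev style: the matrix carries no ring or module structure. The parameters are far below real security levels and the paper's lattice coding and KEM transform are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
q, n, m = 3329, 64, 96            # toy parameters, not secure

A = rng.integers(0, q, size=(m, n))   # unstructured: a plain random matrix
s = rng.integers(-1, 2, size=n)       # ternary secret in {-1, 0, 1}
e = rng.integers(-2, 3, size=m)       # small noise
b = (A @ s + e) % q                   # public key (A, b)

# Regev-style encryption of one bit; decryption rounds the noise away
r = rng.integers(0, 2, size=m)
u = (r @ A) % q
v = (r @ b + (q // 2)) % q            # encrypts the bit 1
noisy = (v - u @ s) % q
bit = int(abs(int(noisy) - q // 2) < q // 4)
assert bit == 1
```

The decryption margin is governed by the accumulated noise `r @ e`; better error correction (such as the lattice codes the abstract mentions) tolerates more noise and therefore permits smaller parameters.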
Last updated:  2024-11-30
SoK: The apprentice guide to automated fault injection simulation for security evaluation
Asmita Adhikary, Giacomo Tommaso Petrucci, Philippe Tanguy, Vianney Lapôtre, and Ileana Buhan
Identifying and mitigating locations vulnerable to fault injection requires significant expertise and expensive equipment. Fault injections can damage hardware, cause software crashes, and pose safety and security hazards. Simulating fault injections offers a safer alternative, and fault simulators have steadily matured, though they vary significantly in functionality, target applications, fault injection methods, supported fault models, and guarantees. We present a taxonomy categorizing fault simulators based on their target applications and development-cycle stages, from source code to final product. Our taxonomy provides insights and comparisons to highlight open problems.
Last updated:  2024-11-30
A New PPML Paradigm for Quantized Models
Tianpei Lu, Bingsheng Zhang, Xiaoyuan Zhang, and Kui Ren
Model quantization has become a common practice in machine learning (ML) to improve efficiency and reduce computational/communication overhead. However, adopting quantization in privacy-preserving machine learning (PPML) remains challenging due to the complex internal structure of quantized operators, which leads to inefficient protocols under existing PPML frameworks. In this work, we propose a new PPML paradigm that is tailor-made for, and can benefit from, quantized models. Our main observation is that a lookup table can abstract away the complex internal structure of any function, which can be exploited to simplify the evaluation of quantized operators. We view the model inference process as a sequence of quantized operators, and implement each operator with a lookup table. We then develop an efficient private lookup table evaluation protocol whose online communication cost is only $\log n$, where $n$ is the size of the lookup table. On a single CPU core, our protocol can evaluate $2^{26}$ tables with 8-bit input and 8-bit output per second. The resulting PPML framework for quantized models offers extremely fast online performance. The experimental results demonstrate that our quantization strategy achieves substantial speedups over SOTA PPML solutions, improving online performance by $40\sim 60 \times$ w.r.t. convolutional neural network (CNN) models, such as AlexNet, VGG16, and ResNet18, and by $10\sim 25 \times$ w.r.t. large language models (LLMs), such as GPT-2, GPT-Neo, and Llama2.
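An illustration of the core observation for a quantized 8-bit GELU: once the operator is a 256-entry table, its internal structure (tanh, polynomials) disappears. The scales below are made up, and the paper's actual contribution, evaluating such tables privately, is not shown here.

```python
import numpy as np

# Quantized GELU as a 256-entry table: private evaluation then only needs
# an oblivious table lookup, regardless of how complex GELU itself is.
scale_in, scale_out = 0.1, 0.1   # illustrative quantization scales

def gelu(x):
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

inputs = np.arange(256).astype(np.int16) - 128        # the int8 domain
table = np.clip(np.round(gelu(inputs * scale_in) / scale_out),
                -128, 127).astype(np.int8)

def quantized_gelu(x_int8):
    return table[int(x_int8) + 128]

print(quantized_gelu(-20), quantized_gelu(0), quantized_gelu(20))
```

Any other 8-bit-to-8-bit operator (sigmoid, requantization, even a fused sequence) gets the exact same table shape, which is why a single fast private-lookup protocol can cover a whole inference pipeline.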
Last updated:  2024-11-30
Decentralized FHE Computer
Gurgen Arakelov, Sergey Gomenyuk, and Hovsep Papoyan
The concept of a decentralized computer is a powerful and transformative idea that has proven its significance in enabling trustless, distributed computations. However, its application has been severely constrained by an inability to handle private data due to the inherent transparency of blockchain systems. This limitation restricts the scope of use cases, particularly in domains where confidentiality is critical. In this work, we introduce a model for a Fully Homomorphic Encryption (FHE) decentralized computer. Our approach leverages recent advancements in FHE technology to enable secure computations on encrypted data while preserving privacy. By integrating this model into the decentralized ecosystem, we address the long-standing challenge of privacy in public blockchain environments. The proposed FHE computer supports a wide range of use cases, is scalable, and offers a robust framework for incentivizing developer contributions.
Last updated:  2024-11-30
Ceno: Non-uniform, Segment and Parallel Zero-knowledge Virtual Machine
Tianyi Liu, Zhenfei Zhang, Yuncong Zhang, Wenqing Hu, and Ye Zhang
In this paper, we explore a novel Zero-knowledge Virtual Machine (zkVM) framework leveraging succinct, non-interactive zero-knowledge proofs for verifiable computation over any code. Our approach divides the proof of program execution into two stages. In the first stage, the process breaks down program execution into segments, identifying and grouping identical sections. These segments are then proved through data-parallel circuits that allow for varying amounts of duplication. In the subsequent stage, the verifier examines these segment proofs, reconstructing the program's control and data flow based on the segments' duplication number and the original program. The second stage can be further attested by a uniform recursive proof. We propose two specific designs of this concept, where segmentation and parallelization occur at two levels: opcode and basic block. Both designs try to minimize the control flow that affects the circuit size and support dynamic copy numbers, ensuring that computational costs directly correlate with the actual code executed (i.e., you only pay as much as you use). In our second design, in particular, by proposing an innovative data-flow reconstruction technique in the second stage, we can drastically cut down on the stack operations even compared to the original program execution. Note that the two designs are complementary rather than mutually exclusive. Integrating both approaches in the same zkVM could unlock more significant potential to accommodate various program patterns. We present an asymmetric GKR scheme to implement our designs, pairing a non-uniform prover and a uniform verifier to generate proofs for dynamic-length data-parallel circuits. The use of a GKR prover also significantly reduces the size of the commitment. GKR allows us to commit only the circuit's input and output, whereas in Plonkish-based solutions, the prover needs to commit to all the witnesses.
Last updated:  2024-11-29
HomeRun: High-efficiency Oblivious Message Retrieval, Unrestricted
Yanxue Jia, Varun Madathil, and Aniket Kate
In the realm of privacy-preserving blockchain applications such as Zcash, oblivious message retrieval (OMR) enables recipients to privately access messages directed to them on blockchain nodes (or bulletin board servers). OMR prevents servers from linking a message and its corresponding recipient's address, thereby safeguarding recipient privacy. Several OMR schemes have emerged recently to meet the demands of these privacy-centric blockchains; however, we observe that existing solutions exhibit shortcomings in various critical aspects and may only achieve certain objectives inefficiently, sometimes relying on trusted hardware, thereby impacting their practical utility. This work introduces a novel OMR protocol, HomeRun, that leverages two semi-honest, non-colluding servers to excel in both performance and security attributes as compared to the current state-of-the-art. HomeRun stands out by providing unlinkability across multiple requests for the same recipient's address. Moreover, it does not impose a limit on the number of pertinent messages that can be received by a recipient, which thwarts ``message balance exhaustion'' attacks and enhances system usability. HomeRun also empowers servers to regularly delete the retrieved messages and the associated auxiliary data, which mitigates the constantly increasing computation costs and storage costs incurred by servers. Remarkably, none of the existing solutions offer all of these features collectively. Finally, thanks to its judicious use of highly efficient cryptographic building blocks, HomeRun is highly performant: Specifically, the total runtime of servers in HomeRun is $3830 \times$ less than that in the work by Liu et al. (CRYPTO '22) based on fully-homomorphic encryption, and at least $1459 \times$ less than that in the design by Madathil et al. (USENIX Security '22) based on two semi-honest and non-colluding servers, using a single thread in a WAN setting.
Last updated:  2024-11-29
Universally Composable Server-Supported Signatures for Smartphones
Nikita Snetkov, Jelizaveta Vakarjuk, and Peeter Laud
Smart-ID is an application for signing and authentication provided as a service to residents of Belgium, Estonia, Latvia and Lithuania. Its security relies on multi-prime server-supported RSA, password-authenticated key shares, and a clone detection mechanism. Unfortunately, the security properties of the underlying protocol have been specified only in a ``game-based'' manner. There is no corresponding ideal functionality that the actual protocol is shown to securely realize in the universal composability (UC) framework. In this paper, we remedy that shortcoming, presenting such a functionality (optionally parameterized with a non-threshold signature scheme) and proving that the existing Smart-ID protocol securely realizes it. Additionally, we present a server-supported protocol for generating ECDSA signatures and show that it also securely realizes the proposed ideal functionality in the Global Random Oracle Model (UC+GROM).
Last updated:  2024-11-29
Efficient Modular Multiplication Hardware for Number Theoretic Transform on FPGA
Tolun Tosun, Selim Kırbıyık, Emre Koçer, Erkay Savaş, and Ersin Alaybeyoğlu
In this paper, we present a comprehensive analysis of various modular multiplication methods for the Number Theoretic Transform (NTT) on FPGA. NTT is a critical and time-intensive component of Fully Homomorphic Encryption (FHE) applications, while modular multiplication consumes a significant portion of the design resources in an NTT implementation. We study the existing modular reduction approaches from the literature and implement particular methods on FPGA, specifically Word-Level Montgomery (WLM) for NTT-friendly primes [19] and K2 RED [3]. For improvements, we explore the trade-offs between the number of available primes in special forms and the hardware cost of the reduction methods. We develop a DSP multiplication-optimized version of WLM, which we call WLM-Mixed. We also introduce a subclass of Proth primes, referred to as Proth-$l$ primes, characterized by a low and fixed signed Hamming weight. This special class of primes allows us to design multiplication-free shift-add versions of K2 RED and naive Montgomery reduction [20], referred to as K2 RED-Shift and Montgomery-Shift. We provide in-depth evaluations of these five reduction methods in an NTT architecture on FPGA. Our results indicate that WLM-Mixed is highly resource-efficient, utilizing only 3 DSP multiplications for 64-bit coefficient moduli. On the other hand, K2 RED-Shift and Montgomery-Shift offer DSP-free alternatives, which can be beneficial in specific scenarios.
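For reference, textbook Montgomery reduction at one 64-bit word, written in Python against an NTT-friendly prime. The prime and word size are illustrative; the paper's WLM-Mixed and shift-add variants differ in the hardware-level details.

```python
# Textbook Montgomery reduction (REDC) with a 64-bit word.
q = 0xFFFFFFFF00000001        # example 64-bit NTT-friendly prime
R = 1 << 64                   # Montgomery radix
q_neg_inv = pow(-q, -1, R)    # -q^{-1} mod R, precomputed

def mont_reduce(t):
    # maps 0 <= t < q*R to t * R^{-1} mod q using only mults and shifts
    m = (t * q_neg_inv) % R
    u = (t + m * q) >> 64     # exact division by R
    return u - q if u >= q else u

a, b = 123456789, 987654321
assert mont_reduce(a * b) == a * b * pow(R, -1, q) % q
```

For primes of special shape (such as the low-signed-Hamming-weight Proth-$l$ primes the abstract introduces), the multiplications by `q` and `q_neg_inv` collapse into a few shifts and additions, which is what makes DSP-free variants possible.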
Last updated:  2024-11-29
A Comprehensive Review of Post-Quantum Cryptography: Challenges and Advances
Seyed MohammadReza Hosseini and Hossein Pilaram
One of the most crucial measures to maintain data security is the use of cryptographic schemes and digital signatures built upon cryptographic algorithms. The resistance of cryptographic algorithms against conventional attacks is guaranteed by their computational hardness and the immense amount of computation required to break them. In the last decade, with advances in quantum computing technology and the realization of quantum computers, which have higher computational power than conventional computers and can execute special kinds of algorithms (i.e., quantum algorithms), the security of many existing cryptographic algorithms has been called into question. The reason is that by using quantum computers and executing specific quantum algorithms on them, the computational difficulty of conventional cryptographic algorithms can be reduced, which makes it possible to break them in a relatively short period of time. Therefore, researchers began efforts to find new quantum-resistant cryptographic algorithms that would be impossible to break in a short time, even using quantum computers. Such algorithms are called post-quantum cryptographic algorithms. In this article, we provide a comprehensive review of the challenges and vulnerabilities of different kinds of conventional cryptographic algorithms against quantum computers. Afterward, we review the latest cryptographic algorithms and standards that have been proposed to confront the threats posed by quantum computers. We present a classification of post-quantum cryptographic algorithms and digital signatures based on their technical specifications, provide examples of each category, and outline the strengths and weaknesses of each category.
Last updated:  2024-11-29
Faster Signatures from MPC-in-the-Head
Dung Bui, Eliana Carozza, Geoffroy Couteau, Dahmun Goudarzi, and Antoine Joux
We revisit the construction of signature schemes using the MPC-in-the-head paradigm. We obtain two main contributions: – We observe that previous signatures in the MPC-in-the-head paradigm must rely on a salted version of the GGM puncturable pseudorandom function (PPRF) to avoid collision attacks. We design a new efficient PPRF construction that is provably secure in the multi-instance setting. The security analysis of our PPRF, in the ideal cipher model, is quite involved and forms a core technical contribution of our work. While previous constructions had to rely on a hash function, our construction uses only a fixed-key block cipher and is considerably more efficient as a result: we observe a 12× to 55× speed improvement for a recent signature scheme (Joux and Huth, Crypto '24). Our improved PPRF can be used to speed up many MPC-in-the-head signatures. – We introduce a new signature scheme from the regular syndrome decoding assumption, based on a new protocol for the MPC-in-the-head paradigm, which significantly reduces communication compared to previous works. Our scheme is conceptually simple, though its security analysis requires a delicate and nontrivial combinatorial analysis.
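For background, here is a textbook GGM tree with puncturing, built from SHA-256 as a stand-in length-doubling PRG rather than the paper's fixed-key block cipher; the salting and multi-instance analysis that constitute the paper's contribution are not reflected.

```python
import hashlib

def G(seed):
    # length-doubling PRG: one 32-byte seed -> two 32-byte children
    h0 = hashlib.sha256(seed + b"0").digest()
    h1 = hashlib.sha256(seed + b"1").digest()
    return h0, h1

def ggm_eval(key, x_bits):
    # walk the binary tree from the root following the bits of x
    s = key
    for b in x_bits:
        s = G(s)[b]
    return s

def ggm_puncture(key, x_bits):
    # the co-path seeds allow evaluation everywhere except at x
    copath, s = [], key
    for b in x_bits:
        left, right = G(s)
        copath.append(right if b == 0 else left)
        s = (left, right)[b]
    return copath

key = b"\x00" * 32
x = [1, 0, 1]
leaf = ggm_eval(key, x)
copath = ggm_puncture(key, x)   # 3 seeds covering every leaf except x
```

Replacing the two hash calls in `G` with two fixed-key block-cipher calls is the efficiency lever the abstract describes, since a fixed-key cipher avoids rekeying and is much cheaper per invocation in software.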
Last updated:  2024-11-29
LightCROSS: A Secure and Memory Optimized Post-Quantum Digital Signature CROSS
Puja Mondal, Suparna Kundu, Supriya Adhikary, and Angshuman Karmakar
CROSS is a code-based post-quantum digital signature scheme based on a zero-knowledge (ZK) framework. It is a second-round candidate of the National Institute of Standards and Technology’s additional call for standardizing post-quantum digital signatures. The memory footprint of this scheme is prohibitively large, especially for small embedded devices. In this work, we propose various techniques to reduce the memory footprint of the key generation, signature generation, and verification by as much as 50%, 52%, and 74%, respectively, on an ARM Cortex-M4 device. Moreover, our memory-optimized implementations adapt the countermeasure against the recently proposed (ASIACRYPT-24) fault attacks against the ZK-based signature schemes.
Last updated:  2024-11-29
Tweakable ForkCipher from Ideal Block Cipher
Sougata Mandal
In ASIACRYPT 2019, Andreeva et al. introduced a new symmetric-key primitive called the $\textit{forkcipher}$, designed for lightweight applications handling short messages. A forkcipher is a keyed function with a public tweak, featuring fixed-length input and fixed-length (expanding) output. They also proposed a specific forkcipher, ForkSkinny, based on the tweakable block cipher SKINNY, whose security was evaluated through cryptanalysis. Since then, several efficient AEAD and MAC schemes based on forkciphers have been proposed, catering not only to short messages but also to various purposes such as leakage resilience and cloud security. While forkciphers have proven to be efficient solutions for designing AEAD schemes, the design space of forkciphers remains largely unexplored; in particular, provably secure forkcipher constructions are lacking. In this work, we propose forkcipher designs for various tweak lengths, based on a block cipher as the underlying primitive. We provide proofs of security for these constructions, assuming the underlying block cipher behaves as an ideal block cipher. First, we present a forkcipher, $\widetilde{\textsf{F}}1$, for an $n$-bit tweak and prove its optimal ($n$-bit) security. Next, we propose another construction, $\widetilde{\textsf{F}}2$, for a $2n$-bit tweak, also proving its optimal ($n$-bit) security. Finally, we introduce a construction, $\widetilde{\textsf{F}}r$, for a general $rn$-bit tweak, achieving $n$-bit security.
Last updated:  2024-11-29
Reduce and Prange: Revisiting Prange's ISD for Solving LPN/RSD over Large Fields
Jiseung Kim and Changmin Lee
The Learning Parity with Noise (LPN) problem and its specific form, LPN with regularity, are widely utilized in crafting cryptographic primitives. Recently, extending these problems to operate over larger fields has been considered to enhance applications. Despite the broad analysis available for traditional LPN approaches, the exploration of LPN over large fields remains underdeveloped. This gap in research has led to the development of improved attacks, suggesting that primitives based on LPN over large fields may not meet the necessary security standards. We have developed an algorithm that enhances the efficiency of solving LPN over large fields. This method innovatively modifies the Gaussian elimination attack, traditionally known as Prange's information set decoding algorithm. Our key advancement involves the selective use of partial Gaussian elimination rather than employing the full Gaussian algorithm throughout, which we have termed the ``Reduce and Prange'' (RP) algorithm. Additionally, we apply the RP algorithm to address the LPN problem over large fields with regular noise. Our RP algorithm highlights two key aspects: the vulnerability of existing schemes and the superiority of our approach compared to recent analyses. Our findings reveal that cryptographic applications, including Syndrome Decoding in the Head frameworks (Crypto'22, Asiacrypt'23, Eurocrypt'24) and Scalable Multiparty Garbling (CCS'23), are not secure enough. To be precise, the schemes fail to achieve their intended bit-security. Compared to the previous analysis by Liu et al. (Eurocrypt'24), we show that for LPN over large fields (e.g., 128-bit field size), the bit-security is reduced by 5-11 bits. Furthermore, our RP algorithm for solving LPN with regular noise outperforms recent results by Liu et al., Briaud, and Øygard (Eurocrypt'23) under certain parameter choices, leading to a reduction in bit-security by 5-20 bits for LPN with regular noise.
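For background, here is textbook Prange information set decoding over GF(2): guess which columns contain the whole error support and solve a linear system. The paper's Reduce-and-Prange variant, its partial elimination, and the large-field setting are not shown; helper names and parameters are illustrative.

```python
import random
import numpy as np

def solve_gf2(A, s):
    # Gaussian elimination over GF(2); returns x with A x = s, or None
    A, x = A.copy() % 2, s.copy() % 2
    k = len(s)
    for col in range(k):
        piv = next((row for row in range(col, k) if A[row, col]), None)
        if piv is None:
            return None
        A[[col, piv]], x[[col, piv]] = A[[piv, col]], x[[piv, col]]
        for row in range(k):
            if row != col and A[row, col]:
                A[row] ^= A[col]
                x[row] ^= x[col]
    return x

def prange(H, s, w, iters=20000):
    # guess r columns containing the full weight-w error support, then solve
    r, n = H.shape
    for _ in range(iters):
        cols = random.sample(range(n), r)
        e_cols = solve_gf2(H[:, cols], s)
        if e_cols is not None and e_cols.sum() <= w:
            e = np.zeros(n, dtype=np.int64)
            e[cols] = e_cols
            return e
    return None

# usage: random instance with a planted weight-w error
r, n, w = 10, 30, 2
H = np.random.randint(0, 2, size=(r, n))
e_true = np.zeros(n, dtype=np.int64); e_true[:w] = 1
s = (H @ e_true) % 2
e = prange(H, s, w)
assert e is not None and ((H @ e) % 2 == s).all()
```

Each iteration pays for a full elimination of an $r \times r$ system; the abstract's key idea is that a partial elimination per guess suffices, which lowers the per-iteration cost.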
Last updated:  2024-11-29
Instance-Hiding Interactive Proofs
Changrui Mu and Prashant Nalini Vasudevan
In an Instance-Hiding Interactive Proof (IHIP) [Beaver et al. CRYPTO 90], an efficient verifier with a _private_ input x interacts with an unbounded prover to determine whether x is contained in a language L. In addition to completeness and soundness, the instance-hiding property requires that the prover should not learn anything about x in the course of the interaction. Such proof systems capture natural privacy properties, and may be seen as a generalization of the influential concept of Randomized Encodings [Ishai et al. FOCS 00, Applebaum et al. FOCS 04, Agrawal et al. ICALP 15], and as a counterpart to Zero-Knowledge proofs [Goldwasser et al. STOC 89]. We investigate the properties and power of such instance-hiding proofs, and show the following: 1. Any language with an IHIP is contained in NP/poly and coNP/poly. 2. If an average-case hard language has a constant-round IHIP, then infinitely-often One-Way Functions exist. 3. There is an oracle with respect to which there is a language that has an IHIP but not an SZK proof. 4. IHIP's are closed under composition with any efficiently computable function. We further study a stronger version of IHIP (that we call Simulatable IHIP) where the view of the honest prover can be efficiently simulated. For these, we obtain stronger versions of some of the above: 5. Any language with a Simulatable IHIP is contained in AM and coAM. 6. If a _worst-case_ hard language has a Simulatable IHIP, then One-Way Functions exist.
Last updated:  2024-11-29
Improved Universal Thresholdizer from Iterative Shamir Secret Sharing
Jung Hee Cheon, Wonhee Cho, and Jiseung Kim
The universal thresholdizer, introduced at CRYPTO'18, is a cryptographic scheme that transforms any cryptosystem into a threshold variant, thereby enhancing its applicability in threshold cryptography. It enables the black-box construction of one-round threshold signature schemes based on the Learning with Errors problem and, similarly, facilitates one-round chosen-ciphertext secure threshold public key encryption when integrated with non-threshold schemes. Current constructions of the universal thresholdizer are fundamentally built upon linear secret sharing schemes. One approach employs Shamir's secret sharing, which lacks compactness and results in ciphertext sizes of $O(N \log N)$, while another uses the $\{0,1\}$-linear secret sharing scheme ($\{0,1\}$-LSSS), which is compact but induces high communication costs due to requiring $O(N^{5.3})$ secret shares. In this work, we introduce a communication-efficient universal thresholdizer by revising the linear secret sharing scheme. We propose a specialized linear secret sharing scheme, called TreeSSS, which reduces the number of required secret shares to $O(N^{3+o(1)})$ while maintaining the compactness of the universal thresholdizer. TreeSSS can also serve as a subroutine for constructing lattice-based $t$-out-of-$N$ threshold cryptographic primitives such as threshold fully homomorphic encryption and threshold signatures. In this context, TreeSSS offers the advantage of lower communication overhead due to the reduced number of secret shares involved.
Last updated:  2024-11-29
A Formal Treatment of Key Transparency Systems with Scalability Improvements
Nicholas Brandt, Mia Filić, and Sam A. Markelon
Key Transparency (KT) systems have emerged as a critical technology for securely distributing and verifying the correctness of public keys used in end-to-end encrypted messaging services. Despite substantial academic interest, increased industry adoption, and IETF standardization efforts, KT systems lack a holistic and formalized security model, limiting their resilience to practical threats and constraining future development. In this paper, we introduce the first cryptographically sound formalization of KT as an ideal functionality, clarifying the assumptions, security properties, and potential vulnerabilities of deployed KT systems. We identify a significant security concern — a possible impersonation attack by a malicious service provider — and propose a backward-compatible solution. Additionally, we address a core scalability bottleneck by designing and implementing a novel, privacy-preserving verifiable Bloom filter (VBF) that significantly improves KT efficiency without compromising security. Experimental results demonstrate the effectiveness of our approach, marking a step forward in both the theoretical and practical deployment of scalable KT solutions.
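For context on the scalability device, here is a plain Bloom filter sketch; the paper's contribution is making such a filter verifiable and privacy-preserving, which adds cryptographic machinery on top of this basic structure (parameters below are illustrative).

```python
import hashlib

class BloomFilter:
    # standard Bloom filter: no false negatives, tunable false-positive rate
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        # k index positions derived from domain-separated hashes
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] >> (p % 8) & 1
                   for p in self._positions(item))

bf = BloomFilter()
bf.add(b"alice-public-key")
assert b"alice-public-key" in bf   # membership may yield false positives
```

A compact filter lets clients check non-membership or membership claims against a short commitment instead of the full key directory, which is the scalability bottleneck the abstract targets.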
Last updated:  2024-11-29
EndGame: Field-Agnostic Succinct Blockchain with Arc
Simon Judd and GPT
We present EndGame, a novel blockchain architecture that achieves succinctness through Reed-Solomon accumulation schemes. Our construction enables constant-time verification of blockchain state while maintaining strong security properties. We demonstrate how to efficiently encode blockchain state transitions using Reed-Solomon codes and accumulate proofs of state validity using the ARC framework. Our protocol achieves optimal light client verification costs and supports efficient state management without trusted setup.
Last updated:  2024-11-29
Asynchronous Byzantine Consensus with Trusted Monotonic Counters
Yackolley Amoussou-Guenou, Maurice Herlihy, and Maria Potop Butucaru
The paper promotes a new design paradigm for Byzantine-tolerant distributed algorithms using trusted abstractions (oracles) specified in a functional manner. The contribution of the paper is conceptual. The objective is to design fundamental distributed algorithms, such as reliable broadcast and asynchronous Byzantine consensus, using trusted execution environments, and to help designers compare various solutions on common ground. In this framework we revisit Bracha's seminal work on asynchronous Byzantine consensus. Our solution uses a trusted monotonic counter abstraction and tolerates $t$ Byzantine processes in a system of $n$ processes, $n \geq 2t+1$. The keystone of our construction is a novel and elegant Byzantine reliable broadcast algorithm resilient to $t<n$ Byzantine processes that uses a unique trusted monotonic counter (at the initiator).
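A minimal sketch of the trusted monotonic counter oracle, using an HMAC as a stand-in for hardware attestation; the class name, interface, and tag format are hypothetical, not the paper's specification.

```python
import hmac, hashlib

class TrustedMonotonicCounter:
    # Idealized oracle: each attestation binds a message to a strictly
    # increasing counter value, so a Byzantine initiator cannot attach
    # two different messages to the same counter value (no equivocation).
    def __init__(self, key):
        self._key, self._value = key, 0

    def increment_and_attest(self, msg):
        self._value += 1
        tag = hmac.new(self._key, msg + self._value.to_bytes(8, "big"),
                       hashlib.sha256).digest()
        return self._value, tag

    def verify(self, msg, value, tag):
        expect = hmac.new(self._key, msg + value.to_bytes(8, "big"),
                          hashlib.sha256).digest()
        return hmac.compare_digest(expect, tag)

tmc = TrustedMonotonicCounter(b"hardware-key")
v, t = tmc.increment_and_attest(b"broadcast: block 42")
assert tmc.verify(b"broadcast: block 42", v, t)
```

Ruling out equivocation at the initiator is what allows reliable broadcast, and hence consensus, to tolerate more faults ($n \geq 2t+1$ instead of the usual $n \geq 3t+1$).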
Last updated:  2024-11-29
Generic, Fast and Short Proofs for Composite Statements
Zhuo Wu, Shi Qi, Xinxuan Zhang, and Yi Deng
This work introduces a novel technique to enhance the efficiency of proving composite statements. We present the \textit{Hash-and-Prove} framework to construct zkSNARKs for proving the satisfiability of arithmetic circuits augmented with additional \textit{Algebraic Gates}. These algebraic gates serve as building blocks for forming more generalized relations in algebra. Unlike Pedersen-committed \textit{Commit-and-Prove} SNARKs, which suffer from increased proof size and verification overhead when proving composite statements, our solution significantly improves both proof size and verification time while maintaining competitive and practical prover efficiency. In the application of proof of solvency, where we need to prove knowledge of $x$ such that SHA$256(g^x)=y$, our approach achieves a 100$\times$ reduction in proof size and a 500$\times$ reduction in verification time, along with a 2$\times$ speedup in proving time, compared to the work of Agrawal et al. (CRYPTO 2018). For proving ECDSA signature verification, we achieve a proving time of 2.1 seconds, which is a 70$\times$ speedup compared to using Groth16, and a proof size of 4.81 kb, which is a 160$\times$ reduction compared to Field Agnostic SNARKs (Block et al., CRYPTO 2024).
Last updated:  2024-11-28
Quantum One-Time Programs, Revisited
Aparna Gupte, Jiahui Liu, Justin Raizes, Bhaskar Roberts, and Vinod Vaikuntanathan
One-time programs (Goldwasser, Kalai and Rothblum, CRYPTO 2008) are functions that can be run on any single input of a user's choice, but not on a second input. Classically, they are unachievable without trusted hardware, but the destructive nature of quantum measurements seems to provide a quantum path to constructing them. Unfortunately, Broadbent, Gutoski and Stebila showed that even with quantum techniques, a strong notion of one-time programs, similar to ideal obfuscation, cannot be achieved for any non-trivial quantum function. On the positive side, Ben-David and Sattath (Quantum, 2023) showed how to construct a one-time program for a certain (probabilistic) digital signature scheme, under a weaker notion of one-time program security. There is a vast gap between achievable and provably impossible notions of one-time program security, and it is unclear what functionalities are one-time programmable under the achievable notions of security. In this work, we present new, meaningful, yet achievable definitions of one-time program security for *probabilistic* classical functions. We show how to construct one time programs satisfying these definitions for all functions in the classical oracle model and for constrained pseudorandom functions in the plain model. Finally, we examine the limits of these notions: we show a class of functions which cannot be one-time programmed in the plain model, as well as a class of functions which appears to be highly random given a single query, but whose one-time program form leaks the entire function even in the oracle model.