Papers updated in last 183 days (Page 11 of 1450 results)

Last updated:  2024-01-30
SimpleFT: A Simple Byzantine Fault Tolerant Consensus
Rui Hao, Chenglong Yi, Weiqi Dai, and Zhaonan Zhang
Although it has long been popular, Byzantine Fault Tolerance (BFT) consensus under partially-synchronous networks has in recent years been criticized as inefficient or even infeasible, which calls for more robust asynchronous consensus. On the other hand, almost all existing asynchronous consensus protocols are too complicated to understand and may even suffer from the termination problem. Motivated by these problems, we propose SimpleFT in this paper, a simple asynchronous consensus protocol inspired mainly by the simplicity of the Bitcoin protocol. Re-examining the Bitcoin protocol, we decompose the life cycle of a block into three phases, namely proposal, dissemination, and confirmation. Corresponding to these phases, we devise or introduce the sortition algorithm, the reliable broadcast algorithm, and the quorum certificate mechanism in SimpleFT, respectively. To make full use of network resources and improve system throughput, we further introduce a layered architecture to SimpleFT, which enables multiple blocks to be confirmed at the same height. We present a comprehensive analysis validating the correctness of SimpleFT and conduct various experiments demonstrating its efficient performance.
Last updated:  2024-01-30
ProtoGalaxy: Efficient ProtoStar-style folding of multiple instances
Liam Eagen and Ariel Gabizon
We continue the recent line of work on folding schemes. Building on ideas from ProtoStar [BC23] we construct a folding scheme where the recursive verifier's ``marginal work'', beyond linearly combining witness commitments, consists only of a logarithmic number of field operations and a constant number of hashes. Moreover, our folding scheme performs well when \emph{folding multiple instances at one step}, in which case the marginal number of verifier field operations per instance becomes constant, assuming constant degree gates.
Last updated:  2024-01-30
Practical Post-Quantum Signatures for Privacy
Sven Argo, Tim Güneysu, Corentin Jeudy, Georg Land, Adeline Roux-Langlois, and Olivier Sanders
The transition to post-quantum cryptography has been an enormous challenge and effort for cryptographers over the last decade, with impressive results such as the future NIST standards. However, the latter have so far only considered central cryptographic mechanisms (signatures or KEMs) and not more advanced ones, e.g., those targeting privacy-preserving applications. Of particular interest is the family of solutions comprising blind signatures, group signatures and anonymous credentials, for which standards already exist, and which are deployed in billions of devices. Such a family does not have, at this stage, an efficient post-quantum counterpart, although very recent works improved this state of affairs by offering two different alternatives: either one gets a system with rather large elements but security proved under standard assumptions, or one gets a more efficient system at the cost of ad-hoc interactive assumptions or weaker security models. Moreover, all these works have only considered size complexity without implementing the quite complex building blocks their systems are composed of. In other words, the practicality of such systems is still very hard to assess, which is a problem if one envisions a post-quantum transition for the corresponding systems/standards. In this work, we propose a construction of a so-called signature with efficient protocols (SEP), which is the core of such privacy-preserving solutions. By revisiting the approach of Jeudy et al. (Crypto 2023), we manage to get the best of the two alternatives mentioned above, namely short sizes with no compromise on security. To demonstrate this, we plug our SEP into an anonymous credential system, achieving credentials of less than 80 KB. In parallel, we fully implemented our system, and in particular the complex zero-knowledge framework of Lyubashevsky et al. (Crypto'22), which has, to our knowledge, not been done so far.
Our work thus not only improves the state-of-the-art on privacy-preserving solutions, but also significantly improves the understanding of efficiency and implications for deployment in real-world systems.
Last updated:  2024-01-30
HADES: Automated Hardware Design Exploration for Cryptographic Primitives
Fabian Buschkowski, Georg Land, Jan Richter-Brockmann, Pascal Sasdrich, and Tim Güneysu
While formal constructions for cryptographic schemes have steadily evolved and emerged over the past decades, the design and implementation of efficient and secure hardware instances is still a mostly manual, tedious, and intuition-driven process. With the increasing complexity of modern cryptography, e.g., Post-Quantum Cryptography (PQC) schemes, and the consideration of physical implementation attacks, e.g., Side-Channel Analysis (SCA), the design space often grows exorbitantly without developers being able to weigh all design options. This immediately raises the necessity of tool-assisted Design Space Exploration (DSE) for efficient and secure cryptographic hardware. For this, we present the progressive HADES framework, offering a customizable, extendable, and streamlined DSE for efficient and secure cryptographic hardware accelerators. This tool exhaustively traverses the design space driven by security requirements, rapidly predicts user-defined performance metrics, e.g., area footprint or cycle-accurate latency, and instantiates the most suitable candidate in a synthesizable Hardware Description Language (HDL). We demonstrate the capabilities of our framework by applying our proof-of-concept implementation to a wide selection of state-of-the-art symmetric and PQC schemes, including the ChaCha20 stream cipher and the designated PQC standard Kyber, for which we provide the first set of arbitrary-order masked hardware implementations.
Last updated:  2024-01-30
ParBFT: Faster Asynchronous BFT Consensus with a Parallel Optimistic Path
Xiaohai Dai, Bolin Zhang, Hai Jin, and Ling Ren
To reduce the latency and communication overhead of asynchronous Byzantine Fault Tolerance (BFT) consensus, an optimistic path is often added, with Ditto and BDT as state-of-the-art representatives. These protocols first attempt to run an optimistic path that is typically adapted from partially-synchronous BFT and promises good performance in good situations. If the optimistic path fails to make progress, these protocols switch to a pessimistic path after a timeout, to guarantee liveness in an asynchronous network. This design crucially relies on an accurate estimation of the network delay Δ to set the timeout parameter correctly. A wrong estimation of Δ can lead to either premature or delayed switching to the pessimistic path, hurting the protocol's efficiency in both cases. To address this issue, we propose ParBFT, which employs a parallel optimistic path. As long as the leader of the optimistic path is non-faulty, ParBFT ensures low latency without requiring an accurate estimation of the network delay. We propose two variants of ParBFT, namely ParBFT1 and ParBFT2, with a trade-off between latency and communication. ParBFT1 launches the two paths simultaneously, achieving lower latency under a faulty leader, but has a quadratic message complexity even in good situations. ParBFT2 reduces the message complexity in good situations by delaying the pessimistic path, at the cost of higher latency under a faulty leader. Experimental results demonstrate that ParBFT outperforms both Ditto and BDT. In particular, when the network condition is bad, ParBFT can reach consensus through the optimistic path, while Ditto and BDT suffer from path switching and have to make progress using the pessimistic path.
Last updated:  2024-01-29
Commitments from Quantum One-Wayness
Dakshita Khurana and Kabir Tomer
One-way functions are central to classical cryptography. They are both necessary for the existence of non-trivial classical cryptosystems, and sufficient to realize meaningful primitives including commitments, pseudorandom generators and digital signatures. At the same time, a mounting body of evidence suggests that assumptions even weaker than one-way functions may suffice for many cryptographic tasks of interest in a quantum world, including bit commitments and secure multi-party computation. This work studies one-way state generators [Morimae-Yamakawa, CRYPTO 2022], a natural quantum relaxation of one-way functions. Given a secret key, a one-way state generator outputs a hard to invert quantum state. A fundamental question is whether this type of quantum one-wayness suffices to realize quantum cryptography. We obtain an affirmative answer to this question by proving that one-way state generators with pure state outputs imply quantum bit commitments and secure multiparty computation. Along the way, we build an intermediate primitive with classical outputs, which we call a (quantum) one-way puzzle. Our main technical contribution is a proof that one-way puzzles imply quantum bit commitments.
Last updated:  2024-01-29
Unconditional Security using (Random) Anonymous Bulletin Board
Albert Yu, Hai H. Nguyen, Aniket Kate, and Hemanta K. Maji
In a seminal work, Ishai et al. (FOCS–2006) studied the viability of designing unconditionally secure protocols for key agreement and secure multi-party computation (MPC) using an anonymous bulletin board (ABB) as a building block. While their results establish the feasibility of key agreement and honest-majority MPC in the ABB model, the optimality of protocols with respect to their round and communication complexity is not studied. This paper enriches this study of unconditional security in the ABB model in multiple ways.
- We present a key agreement protocol with a novel combinatorial insight to offer a 200% throughput over the (FOCS–2006) study; i.e., using the same number of messages, we can (almost) double the bit-length of the agreed key. We also prove the near optimality of our approach.
- We offer unconditionally secure protocols for the (random) string oblivious transfer functionalities. We present a $1$-round chosen message random string oblivious transfer and show how to extend it to a non-interactive (random) string oblivious transfer protocol and a $2$-round chosen message string oblivious transfer.
- We prove a $1$-round lower bound for BEC under certain conditions.
Central to our technical contributions is the abstraction of a distributional variant of the random ABB functionality. Investigating the concrete efficiency of founding MPC from this primitive leads to fascinating new mathematical challenges in well-established MPC models, which will be of broader interest to the community.
Last updated:  2024-01-29
Finite Key OTP Functionality: Ciphers That Hold Off Attackers Smarter Than Their Designers
Gideon Samid
The prevailing ciphers rely on the weak assumption that their attacker is not smarter than expected by their designers. The resultant crypto ecology favors the cryptographic powerhouses, and hinders cyber freedom, cyber privacy and cyber democracy. This weakness can be remedied by using the gold standard of cryptography -- the One Time Pad, OTP. Alas, it comes with the prohibitive cost of a key as long as the message it encrypts. When the stakes are high enough, users pay this high price because OTP is immune to smarter and better-equipped attackers. Claude Shannon showed that this size imposition on the key is non-negotiable in the context he analyzed. By changing the context, however, one can achieve OTP equivalence. Three simple changes are introduced: (i) make the size of the key an integral part of the secret, (ii) encrypt every finite message with an arbitrary part of the key, (iii) allow for open-ended dilution of the contents-bearing bits of the ciphertext with content-devoid bits, which do not confuse the intended recipient but impose an open-ended cryptanalytic barrier on the attacker. A priori, a cryptanalyst faces a set of messages, each of them deemed plausible to be the one hidden in the ciphertext. If the ciphertext is Finite Key OTP compliant, then membership in this set will not change after an exhaustive cryptanalytic processing of the ciphertext. This constitutes functional equivalence with OTP. OTP functionality with a shared finite key creates a path to digital freedom, digital privacy and digital democracy.
Last updated:  2024-01-29
Simultaneously simple universal and indifferentiable hashing to elliptic curves
Dmitrii Koshelev
The present article explains how to generalize the hash function SwiftEC (in an elementary quasi-unified way) to any elliptic curve $E$ over any finite field $\mathbb{F}_{\!q}$ of characteristic $> 3$. The new result apparently brings the theory of hash functions onto elliptic curves to its logical conclusion. To be more precise, this article provides compact formulas that define a hash function $\{0,1\}^* \to E(\mathbb{F}_{\!q})$ (deterministic and indifferentiable from a random oracle) with the same working principle as SwiftEC. In particular, both of them equally compute only one square root in $\mathbb{F}_{\!q}$ (in addition to two cheap Legendre symbols). However, the new hash function is valid under much more liberal conditions than SwiftEC, namely whenever $3 \mid q-1$. Since in the opposite case $3 \mid q-2$ there are already indifferentiable constant-time hash functions to $E$ with the cost of one root in $\mathbb{F}_{\!q}$, this case is not treated in the article. If desired, the article's approach nonetheless allows one to handle it easily, mutatis mutandis.
Last updated:  2024-01-29
DeCSIDH: Delegating isogeny computations in the CSIDH setting
Robi Pedersen
Delegating heavy computations to auxiliary servers, while keeping the inputs secret, presents a practical solution for computationally limited devices to use resource-intense cryptographic protocols, such as those based on isogenies, and thus allows the deployment of post-quantum security on mobile devices and in the internet of things. We propose two algorithms for the secure and verifiable delegation of isogeny computations in the CSIDH setting. We then apply these algorithms to different instances of CSIDH and to the signing algorithms SeaSign and CSI-FiSh. Our algorithms present a communication-cost trade-off. Asymptotically (for high communication), the cost for the delegator is reduced by a factor 9 for the original CSIDH-512 parameter set and a factor 30 for SQALE'd CSIDH-4096, while the relative cost of SeaSign vanishes. Even for much lower communication cost, we come close to these asymptotic results. Using the knowledge of the class group, the delegation of CSI-FiSh is basically free (up to element generation) already at a very low communication cost.
Last updated:  2024-01-29
Delegating Supersingular Isogenies over $\mathbb{F}_{p^2}$ with Cryptographic Applications
Robi Pedersen and Osmanbey Uzunkol
Although isogeny-based cryptographic schemes enjoy the lowest key sizes amongst current post-quantum cryptographic candidates, they unfortunately come at a high computational cost, which makes their deployment on the ever-growing number of resource-constrained devices difficult. Speeding up the expensive post-quantum cryptographic operations by delegating these computations from a weaker client to untrusted powerful external servers is a promising approach. Following this, we present in this work mechanisms allowing computationally restricted devices to securely and verifiably delegate isogeny computations to potentially untrusted third parties. In particular, we propose two algorithms that can be seamlessly integrated into existing isogeny-based protocols and which lead to a much lower cost for the delegator than the full, local computation. For example, compared to the local computation cost, at NIST security level 1 we reduce the public-key computation step of SIDH/SIKE by a factor of 5, and the prover's cost in the zero-knowledge proof of identity from Jao and De Feo by a factor of 16, while the proof becomes almost free for the verifier.
Last updated:  2024-01-29
Non-Binding (Designated Verifier) Signature
Ehsan Ebrahimi
We argue that there are some scenarios in which plausible deniability might be desired for a digital signature scheme. For instance, the non-repudiation property of conventional signature schemes is problematic in designing an Instant Messaging system (WPES 2004). In this paper, we formally define a non-binding signature scheme in which the Signer is able to disavow her own signature if she wants, but the Verifier is not able to dispute a signature generated by the Signer. That is, the Signer is able to convince a third-party Judge that she is the owner of a signature without disclosing her secret information. We propose a signature scheme that is non-binding and unforgeable. Our signature scheme is post-quantum secure if the underlying cryptographic primitives are post-quantum secure. In addition, a modification to our non-binding signature scheme leads to an Instant Messaging system that is of independent interest.
Last updated:  2024-01-29
Practical Key-Extraction Attacks in Leading MPC Wallets
Nikolaos Makriyannis, Oren Yomtov, and Arik Galansky
Multi-Party Computation (MPC) has become a major tool for protecting hundreds of billions of dollars in cryptocurrency wallets. MPC protocols are currently powering the wallets of Coinbase, Binance, Zengo, BitGo, Fireblocks and many other fintech companies servicing thousands of financial institutions and hundreds of millions of end-user consumers. We present four novel key-extraction attacks on popular MPC signing protocols showing how a single corrupted party may extract the secret in full during the MPC signing process. Our attacks are highly practical (the practicality of the attack depends on the number of signature-generation ceremonies the attacker participates in before extracting the key). Namely, we show key-extraction attacks against different threshold-ECDSA protocols/implementations requiring $10^6$, $256$, $16$, and *one signature*, respectively. In addition, we provide proof-of-concept code that implements our attacks.
Last updated:  2024-01-29
K-Waay: Fast and Deniable Post-Quantum X3DH without Ring Signatures
Daniel Collins, Loïs Huguenin-Dumittan, Ngoc Khanh Nguyen, Nicolas Rolin, and Serge Vaudenay
The Signal protocol and its X3DH key exchange core are regularly used by billions of people in applications like WhatsApp but are unfortunately not quantum-secure. Thus, designing an efficient and post-quantum secure X3DH alternative is paramount. Notably, X3DH supports asynchronicity, as parties can immediately derive keys after uploading them to a central server, and deniability, allowing parties to plausibly deny having completed key exchange. To satisfy these constraints, existing post-quantum X3DH proposals use ring signatures (or equivalently a form of designated-verifier signatures) to provide authentication without compromising deniability as regular signatures would. Existing ring signature schemes, however, have some drawbacks. Notably, they are not generally proven secure in the quantum random oracle model (QROM) and so the quantum security of parameters that are proposed is unclear and likely weaker than claimed. In addition, they are generally slower than standard primitives like KEMs. In this work, we propose an efficient, deniable and post-quantum X3DH-like protocol that we call K-Waay, that does not rely on ring signatures. At its core, K-Waay uses a split-KEM, a primitive introduced by Brendel et al. [SAC 2020], to provide Diffie-Hellman-like implicit authentication and secrecy guarantees. Along the way, we revisit the formalism of Brendel et al. and identify that additional security properties are required to prove a split-KEM-based protocol secure. We instantiate split-KEM by building a protocol based on the Frodo key exchange protocol relying on the plain LWE assumption: our proofs might be of independent interest as we show it satisfies our novel unforgeability and deniability security notions. Finally, we complement our theoretical results by thoroughly benchmarking both K-Waay and existing X3DH protocols. Our results show even when using plain LWE and a conservative choice of parameters that K-Waay is significantly faster than previous work.
Last updated:  2024-01-29
Monte Carlo Tree Search for automatic differential characteristics search: application to SPECK
Emanuele Bellini, David Gerault, Matteo Protopapa, and Matteo Rossi
The search for differential characteristics on block ciphers is a difficult combinatorial problem. In this paper, we investigate the performance of an AI-originated technique, Single Player Monte-Carlo Tree Search (SP-MCTS), in finding good differential characteristics on ARX ciphers, with an application to the block cipher SPECK. In order to make this approach competitive, we include several heuristics, such as the combination of forward and backward searches, and achieve significantly faster results than state-of-the-art works that are not based on automatic solvers. We reach 9, 11, 13, 13 and 15 rounds for SPECK32, SPECK48, SPECK64, SPECK96 and SPECK128, respectively. In order to build our algorithm, we revisit Lipmaa and Moriai's algorithm for listing all optimal differential transitions through modular addition, and propose a variant to enumerate all transitions with probability close to the optimal (up to a fixed threshold), while fixing a minor bug in the original algorithm.
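At the heart of the revisited enumeration is Lipmaa and Moriai's closed-form test for the differential probability of modular addition. As an illustration only (a minimal sketch with our own function name, not the paper's code), the validity condition and probability of a single transition $(\alpha, \beta \to \gamma)$ through addition mod $2^n$ can be computed as:

```python
def xdp_add(alpha, beta, gamma, n):
    """Lipmaa-Moriai differential probability of addition mod 2^n:
    Pr over uniform x, y of (((x ^ alpha) + (y ^ beta)) ^ (x + y)) == gamma."""
    mask = (1 << n) - 1

    def eq(a, b, c):
        # bitmask of positions where a, b and c all agree
        return ~((a ^ b) | (a ^ c)) & mask

    # validity: wherever the shifted differences all agree, the parity
    # condition alpha ^ beta ^ gamma ^ (beta << 1) must hold
    if eq(alpha << 1, beta << 1, gamma << 1) & (alpha ^ beta ^ gamma ^ (beta << 1)) & mask:
        return 0.0
    # weight = number of non-MSB positions where alpha, beta, gamma differ
    w = bin(~eq(alpha, beta, gamma) & (mask >> 1)).count("1")
    return 2.0 ** -w
```

The paper's variant enumerates all transitions whose probability is within a fixed threshold of the optimum; the sketch above only scores one given transition.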
Last updated:  2024-01-29
Curve Trees: Practical and Transparent Zero-Knowledge Accumulators
Matteo Campanelli, Mathias Hall-Andersen, and Simon Holmgaard Kamp
In this work we improve upon the state of the art for practical zero-knowledge for set membership, a building block at the core of several privacy-aware applications, such as anonymous payments, credentials and whitelists. This primitive allows a user to show knowledge of an element in a large set without leaking the specific element. One of the obstacles to its deployment is efficiency. Concretely efficient solutions exist, e.g., those deployed in Zcash Sapling, but they often work at the price of a strong trust assumption: an underlying setup that must be generated by a trusted third party. To find alternative approaches we focus on a common building block: accumulators, a cryptographic data structure which compresses the underlying set. We propose novel, more efficient and fully transparent constructions (i.e., without a trusted setup) for accumulators supporting zero-knowledge proofs for set membership. Technically, we introduce new approaches inspired by ``commit-and-prove'' techniques to combine shallow Merkle trees and 2-cycles of elliptic curves into a highly practical construction. Our basic accumulator construction---dubbed Curve Trees---is completely transparent (does not require a trusted setup) and is based on simple and widely used assumptions (DLOG and Random Oracle Model). Ours is the first fully transparent construction that obtains concretely small proof/commitment sizes for large sets and a proving time one order of magnitude smaller than proofs over Merkle Trees with Pedersen hash. For a concrete instantiation targeting 128 bits of security we obtain: a commitment to a set of any size is 256 bits; for $|S| = 2^{40}$ a zero-knowledge membership proof is 2.9KB, its proving takes $2$s and its verification $40$ms on an ordinary laptop. Using our construction as a building block we can design a simple and concretely efficient anonymous cryptocurrency with full anonymity set, which we dub $\mathbb{V}$cash. 
Its transactions can be verified in $\approx 80$ms or $\approx 5$ms when batch-verifying multiple ($> 100$) transactions; transaction sizes are $4$KB. Our timings are competitive with those of the approach in Zcash Sapling and trade slightly larger proofs (transactions in Zcash Sapling are 2.8KB) for a completely transparent setup.
Last updated:  2024-01-29
LOL: A Highly Flexible Framework for Designing Stream Ciphers
Dengguo Feng, Lin Jiao, Yonglin Hao, Qunxiong Zheng, Wenling Wu, Wenfeng Qi, Lei Zhang, Liting Zhang, Siwei Sun, and Tian Tian
In this paper, we propose LOL, a general framework for designing blockwise stream ciphers, to achieve ultrafast software implementations for the ubiquitous virtual networks in 5G/6G environments and high-security level for post-quantum cryptography. The LOL framework is structurally strong, and all its components as well as the LOL framework itself enjoy high flexibility with various extensions. Following the LOL framework, we propose new stream cipher designs named LOL-MINI and LOL-DOUBLE with the support of the AES-NI and SIMD instructions: the former applies the basic LOL single mode while the latter uses the extended parallel-dual mode. Both LOL-MINI and LOL-DOUBLE support 256-bit key length and, according to our thorough evaluations, have 256-bit security margins against all existing cryptanalysis methods including differential, linear, integral, etc. The software performances of LOL-MINI and LOL-DOUBLE can reach 89 Gbps and 135 Gbps. In addition to pure encryptions, the LOL-MINI and LOL-DOUBLE stream ciphers can also be applied in a stream-cipher-then-MAC strategy to make an AEAD scheme.
Last updated:  2024-01-29
Perceived Information Revisited II: Information-Theoretical Analysis of Deep-Learning Based Side-Channel Attacks
Akira Ito, Rei Ueno, and Naofumi Homma
In conventional deep-learning-based side-channel attacks (DL-SCAs), an attacker trains a model by updating parameters to minimize the negative log-likelihood (NLL) loss function. Although a decrease in NLL improves DL-SCA performance, the reasons for this improvement remain unclear because of the lack of a formal analysis. To address this open problem, this paper explores the relationship between NLL and the attack success rate (SR) and conducts an information-theoretical analysis of DL-SCAs with an NLL loss function. To this end, we introduce a communication channel for DL-SCAs and derive an inequality that links model outputs to the SR. Our inequality states that the mutual information between the model output and the intermediate value, which we name the latent perceived information (LPI), provides an upper bound on the SR of a DL-SCA with a trained neural network. Subsequently, we examine the conjecture by Ito et al. on the relationship between the effective perceived information (EPI) and the SR and clarify its valid conditions from the perspective of LPI. Our analysis reveals that a decrease in NLL correlates with an increase in LPI, which indicates that the model's capability to extract intermediate-value information from traces is enhanced. In addition, the results indicate that the LPI bounds the SR from above, and a higher upper bound on the SR can directly improve the SR if the selection function satisfies certain conditions, such as bijectivity. Finally, these theoretical insights are validated through attack experiments on neural network models applied to AES software and hardware implementations with masking countermeasures.
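The quantities being related can be illustrated with the standard empirical perceived-information estimator from the side-channel literature: PI is estimated as $H(Z)$ minus the NLL measured in bits, so lowering the NLL raises the information estimate one-for-one. This is a simplified sketch of that estimator (the function name is ours, and it is not the paper's LPI computation, which concerns the model-output channel):

```python
import numpy as np

def perceived_information(probs, labels):
    """Empirical perceived information (in bits) of a trained model.

    probs:  (N, |Z|) array, the model's predicted distribution per trace
    labels: (N,) array of true intermediate values z_i
    Estimates PI = H(Z) + mean_i log2 p_model(z_i | trace_i),
    assuming the intermediate value Z is uniform over |Z| classes.
    """
    n_classes = probs.shape[1]
    h_z = np.log2(n_classes)  # H(Z) for uniform Z
    # NLL in bits, evaluated at the true labels
    nll_bits = -np.mean(np.log2(probs[np.arange(len(labels)), labels]))
    return h_z - nll_bits
```

A perfect model attains the maximum log2 |Z| bits, an uninformative (uniform) model attains 0, and any decrease in NLL directly increases the estimate, mirroring the NLL-information correlation the abstract describes.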
Last updated:  2024-01-27
CCA-Secure (Puncturable) KEMs from Encryption With Non-Negligible Decryption Errors
Valerio Cini, Sebastian Ramacher, Daniel Slamanig, and Christoph Striecks
Public-key encryption (PKE) schemes or key-encapsulation mechanisms (KEMs) are fundamental cryptographic building blocks to realize secure communication protocols. There are several known transformations that generically turn weakly secure schemes into strongly (i.e., IND-CCA) secure ones. While most of these transformations require the weakly secure scheme to provide perfect correctness, Hofheinz, Hövelmanns, and Kiltz (HHK) (TCC 2017) have recently shown that variants of the Fujisaki-Okamoto (FO) transform can work with schemes that have negligible correctness error in the (quantum) random oracle model (QROM). Many recent schemes in the NIST post-quantum competition (PQC) use variants of these transformations. Some of their CPA-secure versions even have a non-negligible correctness error and so the techniques of HHK cannot be applied. In this work, we study the setting of generically transforming PKE schemes with potentially large, i.e., non-negligible, correctness error to ones having negligible correctness error. While there have been previous treatments in an asymptotic setting by Dwork, Naor, and Reingold (EUROCRYPT 2004), our goal is to come up with practically efficient compilers in a concrete setting and apply them in two different contexts. Firstly, we show how to generically transform weakly secure deterministic or randomized PKEs into CCA-secure KEMs in the (Q)ROM using variants of HHK. This applies to essentially all candidates to the NIST PQC based on lattices and codes with non-negligible error for which we provide an extensive analysis. We thereby show that it improves some of the code-based candidates. Secondly, we study puncturable KEMs in terms of the Bloom Filter KEM (BFKEM) proposed by Derler et al. (EUROCRYPT 2018) which inherently have a non-negligible correctness error. BFKEMs are a building block to construct fully forward-secret zero round-trip time (0-RTT) key-exchange protocols. 
In particular, we show the first approach towards post-quantum secure BFKEMs generically from lattices and codes by applying our techniques to identity-based encryption (IBE) schemes with (non-)negligible correctness error.
Last updated:  2024-01-27
Towards general-purpose program obfuscation via local mixing
Ran Canetti, Claudio Chamon, Eduardo Mucciolo, and Andrei Ruckenstein
We explore the possibility of obtaining general-purpose obfuscation for all circuits by way of making only simple, local, functionality preserving random perturbations in the circuit structure. Towards this goal, we use the additional structure provided by reversible circuits, but no additional algebraic structure. We start by formulating a new (and relatively weak) obfuscation task regarding the ability to obfuscate random circuits of bounded length. We call such obfuscators random input & output (RIO) obfuscators. We then show how to construct indistinguishability obfuscators for all (unbounded length) circuits given only an RIO obfuscator --- under a new assumption regarding the pseudorandomness of sufficiently long random reversible circuits with known functionality, which in turn builds on a conjecture made by Gowers (Comb. Prob. Comp. '96) regarding the pseudorandomness of bounded-size random reversible circuits. Furthermore, the constructed obfuscators satisfy a new measure of security - called random output indistinguishability (ROI) obfuscation - which is significantly stronger than IO and may be of independent interest. We then investigate the possibility of constructing RIO obfuscators using local, functionality preserving perturbations. Our approach is rooted in statistical mechanics and can be thought of as locally ``thermalizing'' a circuit while preserving its functionality. We provide candidate constructions along with a pathway for analyzing the security of such strategies. Given the power of program obfuscation, viability of the proposed approach would provide an alternative route to realizing almost all cryptographic tasks under hardness assumptions that are very different from standard ones. Furthermore, our specific candidate obfuscators are relatively efficient: the obfuscated version of an n-wire, m-gate (reversible) circuit with security parameter k has n wires and poly(n,k)m gates. 
We hope that our initial exploration will motivate further study of this alternative path to cryptography.
Last updated:  2024-01-27
Memory Checking Requires Logarithmic Overhead
Elette Boyle, Ilan Komargodski, and Neekon Vafa
We study the complexity of memory checkers with computational security and prove the first general tight lower bound. Memory checkers, first introduced over 30 years ago by Blum, Evans, Gemmell, Kannan, and Naor (FOCS '91, Algorithmica '94), allow a user to store and maintain a large memory on a remote and unreliable server by using small trusted local storage. The user can issue instructions to the server and after every instruction, obtain either the correct value or a failure (but not an incorrect answer) with high probability. The main complexity measure of interest is the size of the local storage and the number of queries the memory checker makes upon every logical instruction. The most efficient known construction has query complexity $O(\log n/\log\log n)$ and local space proportional to a computational security parameter, assuming one-way functions, where $n$ is the logical memory size. Dwork, Naor, Rothblum, and Vaikuntanathan (TCC '09) showed that for a restricted class of ``deterministic and non-adaptive'' memory checkers, this construction is optimal, up to constant factors. However, going beyond the small class of deterministic and non-adaptive constructions has remained a major open problem. In this work, we fully resolve the complexity of memory checkers by showing that any construction with local space $p$ and query complexity $q$ must satisfy $$ p \ge \frac{n}{(\log n)^{O(q)}} \;. $$ This implies, as a special case, that $q\ge \Omega(\log n/\log\log n)$ in any scheme, assuming that $p\le n^{1-\varepsilon}$ for $\varepsilon>0$. The bound applies to any scheme with computational security, completeness $2/3$, and inverse polynomial in $n$ soundness (all of which make our lower bound only stronger). We further extend the lower bound to schemes where the read complexity $q_r$ and write complexity $q_w$ differ. For instance, we show the tight bound that if $q_r=O(1)$ and $p\le n^{1-\varepsilon}$ for $\varepsilon>0$, then $q_w\ge n^{\Omega(1)}$.
This is the first lower bound, for any non-trivial class of constructions, showing a read-write query complexity trade-off. Our proof is via a delicate compression argument showing that a ``too good to be true'' memory checker can be used to compress random bits of information. We draw inspiration from tools recently developed for lower bounds for relaxed locally decodable codes. However, our proof itself significantly departs from these works, necessitated by the differences between settings.
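The special case stated above follows by rearranging the main bound; writing $c$ for the constant hidden in the $(\log n)^{O(q)}$ exponent, a sketch of the implication:

```latex
n^{1-\varepsilon} \;\ge\; p \;\ge\; \frac{n}{(\log n)^{cq}}
\;\Longrightarrow\; (\log n)^{cq} \;\ge\; n^{\varepsilon}
\;\Longrightarrow\; cq \log\log n \;\ge\; \varepsilon \log n
\;\Longrightarrow\; q \;\ge\; \frac{\varepsilon}{c}\cdot\frac{\log n}{\log\log n}.
```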
Last updated:  2024-01-27
SPRITE: Secure and Private Routing in Payment Channel Networks
Gaurav Panwar, Roopa Vishwanathan, George Torres, and Satyajayant Misra
Payment channel networks are a promising solution to the scalability challenge of blockchains and are designed for significantly increased transaction throughput compared to the layer one blockchain. Since payment channel networks are essentially decentralized peer-to-peer networks, routing transactions is a fundamental challenge. Payment channel networks have some unique security and privacy requirements that make pathfinding challenging, for instance, network topology is not publicly known, and sender/receiver privacy should be preserved, in addition to providing atomicity guarantees for payments. In this paper, we present an efficient privacy-preserving routing protocol, SPRITE, for payment channel networks that supports concurrent transactions. By finding paths offline and processing transactions online, SPRITE can process transactions in just two rounds, which is more efficient compared to prior work. We evaluate SPRITE’s performance using Lightning Network data and prove its security using the Universal Composability framework. In contrast to the current cutting-edge methods that achieve rapid transactions, our approach significantly reduces the message complexity of the system by 3 orders of magnitude while maintaining similar latencies.
Last updated:  2024-01-27
Practical key-recovery attack on MQ-Sign
Thomas Aulbach, Simona Samardjiska, and Monika Trimoska
In this paper we describe attacks on the UOV-based signature scheme called MQ-Sign. MQ-Sign was submitted by Shim, Kim, and An as a first-round candidate for standardization in the (South) Korean post-quantum cryptography competition (KpqC). The scheme makes use of sparseness of the secret central polynomials and equivalent key construction to reduce the size of the private key. The authors propose four variants exploiting different levels of sparsity, MQ-Sign-SS, MQ-Sign-RS, MQ-Sign-SR, and MQ-Sign-RR, with the last one being the standard UOV signature scheme. We show that apart from the MQ-Sign-RR variant, all the others are insecure. Namely, we present a polynomial-time key-recovery attack on the variants MQ-Sign-SS and MQ-Sign-RS and a forgery attack on the variant MQ-Sign-SR below the claimed security level. Our attack exploits exactly the techniques used for the reduction of keys: the sparsity of the central polynomials in combination with the specific structure of the secret linear map $\mathbf{S}$. We provide a verification script for the polynomial-time key-recovery attack, which recovers the secret key in less than seven seconds for security level V. Furthermore, we provide an implementation of the non-guessing part of the forgery attack, confirming our complexity estimates.
Last updated:  2024-01-27
An acceleration of the AKS prime identification algorithm
Stephen Meredith Williams
In its standard form, the AKS prime identification algorithm is deterministic and polynomial time but too slow to be of practical use. By dropping its deterministic attribute, it can be accelerated to an extent that it is practically useful, though still much slower than the widely used Miller-Rabin-Selfridge-Monier (MRSM) algorithm based on Fermat's Little Theorem or the Solovay-Strassen algorithm based on the Euler Criterion. The change, made in the last stage of AKS, is to check a modular equation not for a long sequence of values but for a single one. Another change is to reduce, arbitrarily, the size of the parameter $r$ giving the degree of the polynomial used for the last stage.
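A minimal sketch (not the author's code) of the kind of single-value check described: test whether $(X+a)^n \equiv X^n + a \pmod{X^r-1,\, n}$ for one chosen $a$ rather than a long sequence. The parameter choices below are illustrative only.

```python
def polymul_mod(f, g, r, n):
    """Multiply polynomials f, g modulo (X^r - 1, n); coefficient lists of length r."""
    out = [0] * r
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g):
                out[(i + j) % r] = (out[(i + j) % r] + fi * gj) % n
    return out

def polypow_mod(base, e, r, n):
    """Square-and-multiply exponentiation modulo (X^r - 1, n)."""
    result = [1] + [0] * (r - 1)
    while e:
        if e & 1:
            result = polymul_mod(result, base, r, n)
        base = polymul_mod(base, base, r, n)
        e >>= 1
    return result

def single_aks_check(n, r, a):
    """True iff (X + a)^n == X^n + a (mod X^r - 1, n), a single AKS-style test."""
    lhs = polypow_mod([a % n, 1] + [0] * (r - 2), n, r, n)
    rhs = [0] * r
    rhs[n % r] = 1
    rhs[0] = (rhs[0] + a) % n
    return lhs == rhs
```

For a prime $n$ the congruence holds for every $a$ (freshman's dream plus Fermat), so a single failing $a$ certifies compositeness, while a single passing $a$ gives only probabilistic evidence, which is the deterministic attribute being dropped.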
Last updated:  2024-01-27
Transaction Fairness in Blockchains, Revisited
Rujia Li, Xuanwei Hu, Qin Wang, Sisi Duan, and Qi Wang
With the growing number of decentralized finance (DeFi) applications, transaction fairness in blockchains has gained much research interest. As a broad concept in distributed systems and blockchains, fairness has been used in different contexts, varying from ones related to the liveness of the system to ones that focus on the received order of transactions. In this work, we revisit existing fairness definitions and find that they are not adapted to blockchains with multiple DApps. We then provide a more generic one called verifiable fairness. Compared with prior definitions, our notion has two unique features: (i) it relaxes the ordering rules to a predicate; (ii) it enables users to independently verify if their transactions comply with the predicate for concrete applications. We also provide a scheme that achieves verifiable fairness, leveraging trusted hardware. Unlike prior works that usually design a dedicated consensus protocol to achieve fairness, our scheme can be integrated with any blockchain system. Our evaluation results on Amazon EC2 using up to 120 instances across different regions show that our construction imposes only minimal overhead on existing blockchain systems.
Last updated:  2024-01-27
R3PO: Reach-Restricted Reactive Program Obfuscation and its Application to MA-ABE
Kaartik Bhushan, Sai Lakshmi Bhavana Obbattu, Manoj Prabhakaran, and Rajeev Raghunath
In recent breakthrough results, novel use of garbled circuits yielded constructions for several primitives like Identity-Based Encryption (IBE) and 2-round secure multi-party computation, based on standard assumptions in public-key cryptography. While the techniques in these different results have many common elements, these works did not offer a modular abstraction that could be used across them. Our main contribution is to introduce a novel notion of obfuscation, called Reach-Restricted Reactive Program Obfuscation (R3PO) that captures the essence of these constructions, and exposes additional capabilities. We provide a powerful composition theorem whose proof fully encapsulates the use of garbled circuits in these works. As an illustration of the potential of R3PO, and as an important contribution of independent interest, we present a variant of Multi-Authority Attribute-Based Encryption (MA-ABE) that can be based on (single-authority) CP-ABE in a blackbox manner, using only standard cryptographic assumptions (e.g., DDH). This is in stark contrast to the existing constructions for MA-ABE, which rely on the random oracle model and/or support only limited policy classes.
Last updated:  2024-01-26
Data Privacy Made Easy: Enhancing Applications with Homomorphic Encryption
Charles Gouert and Nektarios Georgios Tsoutsos
Homomorphic encryption is a powerful privacy-preserving technology that is notoriously difficult to configure and use, even for experts. The key difficulties include restrictive programming models of homomorphic schemes and choosing suitable parameters for an application. In this tutorial, we outline methodologies to solve these issues and allow for conversion of any application to the encrypted domain using both leveled and fully homomorphic encryption. The first approach, called Walrus, is suitable for arithmetic-intensive applications with limited depth and applications with high throughput requirements. Walrus provides an intuitive programming interface and handles parameterization automatically by analyzing the application and gathering statistics such as homomorphic noise growth to derive a parameter set tuned specifically for the application. We provide an in-depth example of this approach in the form of a neural network inference as well as guidelines for using Walrus effectively. Conversely, the second approach (HELM) takes existing HDL designs and converts them to the encrypted domain for secure outsourcing on powerful cloud servers. Unlike Walrus, HELM supports FHE backends and is well-suited for complex applications. At a high level, HELM consumes netlists and is capable of performing logic gate operations homomorphically on encryptions of individual bits. HELM incorporates both CPU and GPU acceleration by taking advantage of the inherent parallelism provided by Boolean circuits. As a case study, we walk through the process of taking an off-the-shelf HDL design in the form of AES-128 decryption and running it in the encrypted domain with HELM.
Last updated:  2024-01-26
Breaking HWQCS: a code-based signature scheme from high weight QC-LDPC codes
Alex Pellegrini and Giovanni Tognolini
We analyse HWQCS, a code-based signature scheme presented at ICISC 2023, which uses quasi-cyclic low-density parity-check (QC-LDPC) codes. The scheme introduces high Hamming weight errors and signs each message using a fresh ephemeral secret key rather than using only one secret key, so as to avoid known attacks on QC-LDPC signature schemes. In this paper, we show that the signatures of HWQCS leak substantial information concerning the ephemeral keys and formally describe this behaviour. Furthermore, we show that for each security level, we can exploit the leakage to efficiently reconstruct partial secret data from very few signatures, and finally mount a universal forgery attack.
Last updated:  2024-01-26
Differential Cryptanalysis in the Fixed-Key Model
Tim Beyne and Vincent Rijmen
A systematic approach to the fixed-key analysis of differential probabilities is proposed. It is based on the propagation of 'quasidifferential trails', which keep track of probabilistic linear relations on the values satisfying a differential characteristic in a theoretically sound way. It is shown that the fixed-key probability of a differential can be expressed as the sum of the correlations of its quasidifferential trails. The theoretical foundations of the method are based on an extension of the difference-distribution table, which we call the quasidifferential transition matrix. The role of these matrices is analogous to that of correlation matrices in linear cryptanalysis. This puts the theory of differential and linear cryptanalysis on an equal footing. The practical applicability of the proposed methodology is demonstrated by analyzing several differentials for RECTANGLE, KNOT, Speck and Simon. The analysis is automated and applicable to other SPN and ARX designs. Several attacks are shown to be invalid; most others turn out to work only for some keys, but can be improved for weak keys.
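A toy illustration (not a construction from the paper) of the fixed-key phenomenon this framework formalizes: for a two-round keyed map built from a well-known 4-bit S-box, the probability of a differential varies with the key, so the average-over-keys (DDT-based) estimate cannot hold for every key.

```python
# The PRESENT S-box, used here only as a convenient nonlinear 4-bit permutation.
S = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
     0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def fixed_key_prob(k0, k1, din, dout):
    """Pr_x[E_k(x) ^ E_k(x ^ din) == dout] for the toy map E_k(x) = S[S[x^k0]^k1]."""
    hits = sum(
        1 for x in range(16)
        if S[S[x ^ k0] ^ k1] ^ S[S[(x ^ din) ^ k0] ^ k1] == dout
    )
    return hits / 16

# For one (arbitrary) differential, collect the probability for every key.
din, dout = 0x1, 0x9
probs = {fixed_key_prob(k0, k1, din, dout)
         for k0 in range(16) for k1 in range(16)}
# Several distinct values in `probs` (for some choice of dout) show the
# key dependence that quasidifferential trails account for exactly.
```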
Last updated:  2024-01-26
Djed: A Formally Verified Crypto-Backed Pegged Algorithmic Stablecoin
Joachim Zahnentferner, Dmytro Kaidalov, Jean-Frédéric Etienne, and Javier Díaz
This paper describes Djed, an algorithmic stablecoin protocol that behaves like an autonomous bank that buys and sells stablecoins for a price in a range that is pegged to a target price. It is crypto-backed in the sense that the bank keeps a volatile cryptocurrency in its reserve. The reserve is used to buy stablecoins from users that want to sell them. And revenue from sales of stablecoins to users are stored in the reserve. Besides stablecoins, the bank also trades reservecoins in order to capitalize itself and maintain a reserve ratio significantly greater than one. To the best of our knowledge, this is the first stablecoin protocol where stability claims are precisely and mathematically stated and proven. Furthermore, the claims and their proofs are formally verified using two different techniques: bounded model checking, to exhaustively search for counter-examples to the claims; and interactive theorem proving, to build rigorous formal proofs using a proof assistant with automated theorem proving features.
Last updated:  2024-01-26
Mask Conversions for d+1 shares in Hardware, with Application to Lattice-based PQC
Quinten Norga, Jan-Pieter D'Anvers, Suparna Kundu, and Ingrid Verbauwhede
The conversion between arithmetic and Boolean mask representations (A2B & B2A) is a crucial component for side-channel resistant implementations of lattice-based cryptography. In this paper, we present a first- and high-order masked, unified hardware implementation which can perform both A2B & B2A conversions. We optimize the operation on several layers of abstraction, applicable to any protection order. First, we propose novel higher-order algorithms for the secure addition and B2A operation. This is achieved through, among others, an improved method for repeated masked modular reduction and through the X2B operation, which can be viewed as a conversion from any type of additive masking to its Boolean representation. This allows for the removal of a full secure addition during B2A post-processing. Compared to prior work, our $B2A_q$ requires 51/46/45 % less fresh randomness at first through third protection order when implemented in software or hardware. Secondly, on the circuit level, we successfully introduce half-cycle data paths and demonstrate how careful, manual masking is a superior approach for masking highly non-linear operations and providing first- and high-order security. Our techniques significantly reduce the high latency and fresh randomness overhead, typically introduced by glitch-resistant masking schemes and universally composable gadgets, including HPC3 by Knichel et al. presented at CCS 2022. Compared to state-of-the-art algorithms and masking techniques, our unified and high-throughput hardware implementation requires up to 89/84/86 % fewer clock cycles and 78/71/55 % fewer fresh random bits. We show detailed performance results for first-, second- and third-order protected implementations on FPGA. Our proposed algorithms are proven secure in the glitch extended probing model and their implementations are validated via practical lab analysis using the TVLA methodology. 
We experimentally show that both our first- and second-order masked implementations are hardened against univariate and multivariate attacks using 100 million traces, for each mode of operation.
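For intuition only, a functional sketch of what A2B and B2A conversions compute. Unlike the gadgets in the paper, this naive toy recombines the secret in the clear and offers no side-channel security; modulus and bit width below are illustrative assumptions.

```python
import secrets

Q = 3329  # e.g. the Kyber modulus; any modulus works for this toy
W = 16    # bit width of the Boolean representation (covers values < Q)

def a2b(a0, a1):
    """Arithmetic shares (a0 + a1) mod Q -> Boolean shares (b0 ^ b1).
    Insecure: recombines the secret, shown only to define the operation."""
    x = (a0 + a1) % Q
    b1 = secrets.randbits(W)
    return x ^ b1, b1

def b2a(b0, b1):
    """Boolean shares (b0 ^ b1) -> arithmetic shares (a0 + a1) mod Q.
    Insecure recombination, for illustration only."""
    x = b0 ^ b1
    a1 = secrets.randbelow(Q)
    return (x - a1) % Q, a1

# Round-trip: mask a value arithmetically, convert to Boolean and back.
x = 1234
a1 = secrets.randbelow(Q)
a0 = (x - a1) % Q
b0, b1 = a2b(a0, a1)
assert b0 ^ b1 == x
c0, c1 = b2a(b0, b1)
assert (c0 + c1) % Q == x
```

The entire difficulty addressed by the paper is performing these conversions without ever materializing `x`, at arbitrary protection order, and with glitch resistance in hardware.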
Last updated:  2024-01-26
Simpler and Faster BFV Bootstrapping for Arbitrary Plaintext Modulus from CKKS
Jaehyung Kim, Jinyeong Seo, and Yongsoo Song
Bootstrapping is a key operation in fully homomorphic encryption schemes that enables the evaluation of arbitrary multiplicative depth circuits. In the BFV scheme, bootstrapping corresponds to reducing the size of accumulated noise in lower bits while preserving the plaintext in the upper bits. The previous instantiation of BFV bootstrapping is achieved through the digit extraction procedure. However, its performance is highly dependent on the plaintext modulus, so only a limited form of the plaintext modulus, a power of a small prime number, was used for the efficiency of bootstrapping. In this paper, we present a novel approach to instantiate BFV bootstrapping, distinct from the previous digit extraction-based method. The core idea of our bootstrapping is to utilize CKKS bootstrapping as a subroutine, so the performance of our method mainly depends on the underlying CKKS bootstrapping rather than the plaintext modulus. We implement our method at a proof-of-concept level to provide concrete benchmark results. When performing the bootstrapping operation for a 51-bit plaintext modulus, our method improves the previous digit extraction-based method by a factor of 37.9 in latency and 29.4 in throughput. Additionally, we achieve viable bootstrapping performance for large plaintext moduli, such as 144 and 234 bits, which has never been measured before.
Last updated:  2024-01-26
Improved Linear Key Recovery Attacks on PRESENT
Wenhui Wu, Muzhou Li, and Meiqin Wang
PRESENT is an ultra-lightweight block cipher designed by Bogdanov et al., and has been widely studied since its proposal. It supports 80-bit and 128-bit keys, which are referred to as PRESENT-80 and PRESENT-128, respectively. Up to now, linear cryptanalysis is the most effective method for attacking this cipher, especially when accelerated with the pruned Walsh transform. Combining the pruned Walsh transform with multiple linear attacks, one can recover the right key for 28-round PRESENT-80 and -128. Later, this method is further improved with the affine pruned Walsh transform by adding more zeros in the Walsh spectrum through rejecting some data. This leads to the 29-round attack on PRESENT-128 with full codebook. In this paper, we follow the affine pruned Walsh transform accelerated linear method, and propose 29-round attacks on both PRESENT-80 and PRESENT-128 without using the full codebook. Both attacks rely on a statistical model depicting distributions of the experimental correlation when some data are artificially rejected in its computation. Besides, detailed analysis of complexity reduction for each linear hull used in attacking PRESENT is also provided and supported by an automatic tool. Our 29-round attack on PRESENT-80 mainly benefits from this tool. To the best of our knowledge, both attacks are the best ones on PRESENT so far.
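For reference, a sketch of the plain (unpruned) fast Walsh-Hadamard transform that these key-recovery accelerations build on; "pruning" amounts to skipping butterflies whose inputs or outputs are known to be zero. This is a generic textbook routine, not the authors' tool.

```python
def fwht(v):
    """Walsh-Hadamard transform of a list of length 2^m (returns a new list).
    Runs in O(N log N) additions via the standard butterfly network."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                a, b = v[j], v[j + h]
                v[j], v[j + h] = a + b, a - b
        h *= 2
    return v
```

In linear key recovery, evaluating the experimental correlations for all key guesses is a Walsh transform of the counter vector; rejecting data so that more counters are exactly zero is what makes the affine pruned variant cheaper.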
Last updated:  2024-01-25
pqm4: Benchmarking NIST Additional Post-Quantum Signature Schemes on Microcontrollers
Matthias J. Kannwischer, Markus Krausz, Richard Petri, and Shang-Yi Yang
In July 2022, the US National Institute for Standards and Technology (NIST) announced the first set of Post-Quantum Cryptography standards: Kyber, Dilithium, Falcon, and SPHINCS+. Shortly after, NIST published a call for proposals for additional post-quantum signature schemes to complement their initial portfolio. In 2023, 50 submissions were received, and 40 were accepted as round-1 candidates for future standardization. In this paper, we study the suitability and performance of said candidates on the popular Arm Cortex-M4 microcontroller. We integrate the suitable implementations into the benchmarking framework pqm4 and provide benchmarking results on the STM32L4R5ZI featuring 640 KB of RAM. pqm4 currently includes reference implementations for 15 submissions and M4-optimized implementations for five submissions. For the remaining candidates, we describe the reasons that hinder integration - the predominant reason being large key size or excessive memory consumption. While the performance of reference implementations is rather meaningless and often does not correlate with the performance of well-optimized implementations, this work provides some first indication of which schemes are most promising on microcontrollers. The publicly available implementations in pqm4 also provide a good starting point for future optimization efforts. Initially, we were hoping for a much higher code quality than for initial submissions to NIST's previous PQC project. However, we got grossly disappointed: Half of the submissions make use of dynamic memory allocations, often completely without reason; Many implementations have compiler warnings, sometimes hinting at more serious issues; Many implementations do not pass simple sanitizer tests such as using valgrind; Multiple implementations make use of static memory.
Last updated:  2024-01-25
A Novel Power Analysis Attack against CRYSTALS-Dilithium Implementation
Yong Liu, Yuejun Liu, Yongbin Zhou, Yiwen Gao, Zehua Qiao, and Huaxin Wang
Post-Quantum Cryptography (PQC) was proposed due to the potential threat of quantum computer attacks against conventional public-key cryptosystems, and four PQC algorithms, among them CRYSTALS-Dilithium (Dilithium for short), have so far been selected for NIST standardization. However, the selected algorithms are still vulnerable to side-channel attacks in practice, and their physical security needs to be further evaluated. This study introduces two efficient power analysis attacks, the optimized fast two-stage approach and the single-bit approach, aimed at reducing the key guess space in NTT polynomial multiplication on an STM32F405 device (ARM Cortex-M4 core). Our findings reveal that the optimized approach outperforms the conservative approach and the fast two-stage approach proposed in ICCD 2021 by factors of 519 and 88, respectively. Similarly, the single-bit approach demonstrates speedups of 365 and 62 times compared to these two approaches, respectively.
Last updated:  2024-01-25
Cryptanalysis of the SNOVA signature scheme
Peigen Li and Jintai Ding
SNOVA is a variant of a UOV-type signature scheme over a noncommutative ring. In this article, we demonstrate that certain parameters provided by authors in SNOVA fail to meet the NIST security level, and the complexities are lower than those claimed by SNOVA.
Last updated:  2024-01-24
The Splitting Field of $Y^n-2$, Two-Variable NTT and Lattice-Based Cryptography
Wenzhe Yang
The splitting field $F$ of the polynomial $Y^n-2$ is an extension over $\mathbb{Q}$ generated by $\zeta_n=\exp(2 \pi \sqrt{-1} /n)$ and $\sqrt[n]{2}$. In this paper, we lay the foundation for applying the Order-LWE in the integral ring $\mathcal{R}=\mathbb{Z}[\zeta_n, \sqrt[n]{2}]$ to cryptographic uses when $n$ is a power-of-two integer. We explicitly compute the Galois group $\text{Gal}\left(F/\mathbb{Q} \right)$ and the canonical embedding of $F$, based on which we study the properties of the trace pairings of the integral basis $\zeta_n^{k_0} \sqrt[n]{2}^{k_1}$. Then we discuss the security of the Order-LWE in $\mathcal{R}$, and show that it offers the same security level as the RLWE in $\mathbb{Z}[X]/\langle X^{n^2/4} + 1 \rangle$. Moreover, we design a Two-Variable Number Theoretic Transform (2NTT) algorithm for the quotient $\mathcal{R}_p=\mathcal{R}/p\mathcal{R}$, where $p$ is a prime number such that $Y^n \equiv 2 \bmod p$ has $n$ distinct solutions. Compared to the one-variable NTT in $\mathbb{Z}[X]/\langle X^{n^2/4} + 1 \rangle$, a crucial advantage of 2NTT is that it enjoys a quadratic saving of twiddle factors. Hence, we can leverage this quadratic saving to boost the performance of 2NTT in practical implementations. Finally, we look at applications of the Order-LWE in $\mathcal{R}$. In particular, we construct a new variant of CKKS for $\mathcal{R}$ and study its new properties.
Last updated:  2024-01-24
Some Improvements for the PIOP for ZeroCheck
Angus Gruen
Most multivariate proof systems require, at some point, an algebraic check against the rows of the trace. One popular protocol for this, known as zerocheck, is a sumcheck-based protocol that proves a constraint function is zero over the $n$-dimensional Boolean hypercube. One of the drawbacks of running zerocheck over a small field is that it usually involves a large number of evaluations of the constraint polynomial over a cryptographically large extension field $\mathbb{G}$. We describe several improvements to the zerocheck protocol over a small field $\mathbb{F}$ which both reduce the number of constraint evaluations the prover needs to perform and shift some computation from the extension field back into the base field $\mathbb{F}$. Specifically, for a table of length $2^n$, integer parameter $1\leq k \leq n$ and constraint function $C$ of degree $d$ with evaluation costs $C_{\mathbb{F}}, C_{\mathbb{G}}$ we show the protocol can be performed with prover cost roughly \[ 2^n\left(1 + \frac{C_{\mathbb{G}}}{2^k C_{\mathbb{F}}}\right)(d - 1)C_{\mathbb{F}}. \] To check the proof, the verifier needs to perform a single interpolation of degree $2^kd$ and $n$ interpolations of degree $d$. Thus varying $k$ allows for a tradeoff between prover and verifier cost.
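Plugging hypothetical numbers into the stated cost expression makes the tradeoff in $k$ concrete; all parameter values below are illustrative assumptions, not figures from the paper.

```python
def prover_cost(n, k, d, c_f, c_g):
    """The abstract's estimate: 2^n * (1 + C_G / (2^k * C_F)) * (d - 1) * C_F."""
    return 2**n * (1 + c_g / (2**k * c_f)) * (d - 1) * c_f

# Suppose a table of length 2^20, degree-3 constraints, and extension-field
# evaluations 64x dearer than base-field ones (all hypothetical).
costs = [prover_cost(20, k, 3, 1.0, 64.0) for k in (1, 4, 8)]
# Prover cost drops as k grows; the flip side is that the verifier's single
# interpolation has degree 2^k * d, so verifier work grows with k.
```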
Last updated:  2024-01-24
ELEKTRA: Efficient Lightweight multi-dEvice Key TRAnsparency
Julia Len, Melissa Chase, Esha Ghosh, Daniel Jost, Balachandar Kesavan, and Antonio Marcedone
Key Transparency (KT) systems enable service providers of end-to-end encrypted communication (E2EE) platforms to maintain a Verifiable Key Directory (VKD) that maps each user's identifier, such as a username or email address, to their identity public key(s). Users periodically monitor the directory to ensure their own identifier maps to the correct keys, thus detecting any attempt to register a fake key on their behalf to Meddler-in-the-Middle (MitM) their communications. We introduce and formalize a new primitive called Multi-Device Verifiable Key Directory (MVKD), which strengthens the security, privacy, and usability guarantees of VKD by leveraging the multi-device setting. We formalize three properties for a MVKD (completeness, extraction-based soundness, and privacy), striking a non-trivial balance between strong guarantees and the limitations imposed by a truly practical system. We then present a new MVKD system called ELEKTRA. This system combines the core of the Keybase KT system (running in production since 2014) with ideas from SEEMless (Chase et al., 2019) and RZKS (Chen et al., 2022). Our construction is the first to achieve the above multi-device guarantees while having formal security and privacy proofs. Finally, we implement ELEKTRA and present benchmarks demonstrating its practicality.
Last updated:  2024-01-24
A Trust-based Recommender System over Arbitrarily Partitioned Data with Privacy
Ibrahim Yakut and Huseyin Polat
Recommender systems are effective mechanisms for recommendations about what to watch, read, or taste based on user ratings about experienced products or services. To achieve higher quality recommendations, e-commerce parties may prefer to collaborate over partitioned data. Due to privacy issues, they might hesitate to work in pairs and some solutions motivate them to collaborate. This study examines how to estimate trust-based predictions on arbitrarily partitioned data in which two parties have ratings for similar sets of customers and items. A privacy-preserving scheme is proposed, and it is shown that it efficiently offers trust-based predictions on partitioned data while preserving privacy.
Last updated:  2024-01-24
Generation of two ''independent'' points on an elliptic curve of $j$-invariant $\neq 0, 1728$
Dmitrii Koshelev
This article is dedicated to a new generation method of two ``independent'' $\mathbb{F}_{\!q}$-points $P_0$, $P_1$ on almost any ordinary elliptic curve $E$ over a finite field $\mathbb{F}_{\!q}$ of large characteristic. In particular, the method is relevant for all standardized and real-world elliptic curves of $j$-invariants different from $0$, $1728$. The points $P_0$, $P_1$ are characterized by the fact that nobody (even a generator) knows the discrete logarithm $\log_{P_0}(P_1)$ in the group $E(\mathbb{F}_{\!q})$. Moreover, only one square root extraction in $\mathbb{F}_{\!q}$ (instead of two) is required in comparison with all previous generation methods.
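For context, a sketch of the classical try-and-increment baseline such methods improve on: hash a seed to a candidate $x$ and take a square root when $x^3+ax+b$ is a quadratic residue, so each point costs one square-root attempt per candidate and about two candidates on average. The toy curve and seeds below are illustrative assumptions, and this is not the paper's one-square-root method.

```python
import hashlib

p, a, b = 10007, 1, 6          # toy curve y^2 = x^3 + x + 6 over F_10007
assert p % 4 == 3              # so sqrt(u) = u^((p+1)/4) when u is a QR

def hash_to_point(seed):
    """Derive a curve point from a seed so that nobody knows its discrete log."""
    ctr = 0
    while True:
        h = hashlib.sha256(f"{seed}:{ctr}".encode()).digest()
        x = int.from_bytes(h, "big") % p
        u = (x**3 + a * x + b) % p
        y = pow(u, (p + 1) // 4, p)   # square-root attempt
        if y * y % p == u:            # QR test via the candidate root
            return x, y
        ctr += 1                      # ~2 candidates needed on average

P0 = hash_to_point("generator-0")
P1 = hash_to_point("generator-1")
# Both points arise from hashes, so log_{P0}(P1) is unknown to everyone,
# including whoever chose the seeds.
```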
Last updated:  2024-01-24
Practical Delegatable Anonymous Credentials From Equivalence Class Signatures
Omid Mir, Daniel Slamanig, Balthazar Bauer, and René Mayrhofer
Anonymous credentials systems (ACs) are a powerful cryptographic tool for privacy-preserving applications and provide strong user privacy guarantees for authentication and access control. ACs allow users to prove possession of attributes encoded in a credential without revealing any information beyond them. A delegatable AC (DAC) system is an enhanced AC system that allows the owners of credentials to delegate the obtained credential to other users. This allows to model hierarchies as usually encountered within public-key infrastructures (PKIs). DACs also provide stronger privacy guarantees than traditional AC systems since the identities of issuers and delegators are also hidden. A credential issuer's identity may convey information about a user's identity even when all other information about the user is protected. We present a novel delegatable anonymous credential scheme that supports attributes, provides anonymity for delegations, allows the delegators to restrict further delegations, and also comes with an efficient construction. In particular, our DAC credentials do not grow with delegations, i.e., are of constant size. Our approach builds on a new primitive that we call structure-preserving signatures on equivalence classes on updatable commitments (SPSEQ-UC). The high-level idea is to use a special signature scheme that can sign vectors of set commitments which can be extended by additional set commitments. Signatures additionally include a user's public key, which can be switched. This allows us to efficiently realize delegation in the DAC. Similar to conventional SPSEQ signatures, the signatures and messages can be publicly randomized and thus allow unlinkable showings in the DAC system. We present further optimizations such as cross-set commitment aggregation that, in combination, enable selective, efficient showings in the DAC without using costly zero-knowledge proofs. 
We present an efficient instantiation that is proven to be secure in the generic group model and finally demonstrate the practical efficiency of our DAC by presenting performance benchmarks based on an implementation.
Last updated:  2024-01-24
A Cipher-Agnostic Neural Training Pipeline with Automated Finding of Good Input Differences
Emanuele Bellini, David Gerault, Anna Hambitzer, and Matteo Rossi
Neural cryptanalysis is the study of cryptographic primitives through machine learning techniques. Following Gohr’s seminal paper at CRYPTO 2019, a focus has been placed on improving the accuracy of such distinguishers against specific primitives, using dedicated training schemes, in order to obtain better key recovery attacks based on machine learning. These distinguishers are highly specialized and not trivially applicable to other primitives. In this paper, we focus on the opposite problem: building a generic pipeline for neural cryptanalysis. Our tool is composed of two parts. The first part is an evolutionary algorithm for the search of good input differences for neural distinguishers. The second part is DBitNet, a neural distinguisher architecture agnostic to the structure of the cipher. We show that this fully automated pipeline is competitive with a highly specialized approach, in particular for SPECK32 and SIMON32. We provide new neural distinguishers for several primitives (XTEA, LEA, HIGHT, SIMON128, SPECK128) and improve over the state-of-the-art for PRESENT, KATAN, TEA and GIMLI.
Last updated:  2024-01-24
Differential cryptanalysis with SAT, SMT, MILP, and CP: a detailed comparison for bit-oriented primitives
Emanuele Bellini, Alessandro De Piccoli, Mattia Formenti, David Gerault, Paul Huynh, Simone Pelizzola, Sergio Polese, and Andrea Visconti
SAT, SMT, MILP, and CP solvers have become prominent in the differential cryptanalysis of cryptographic primitives. In this paper, we review the techniques for constructing differential characteristic search models in these four formalisms. Additionally, we perform a systematic comparison encompassing over 20 cryptographic primitives and 16 solvers, on both easy and hard instances of optimisation, enumeration, and differential probability estimation problems.
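As a minimal plaintext illustration of the differential probability estimation task that such solver models encode (the paper handles this at the scale of full primitives; the helper name `ddt` is ours), the difference distribution table of a 4-bit S-box can be computed by brute force:

```python
# Brute-force difference distribution table (DDT) of a 4-bit S-box.
# Entry table[din][dout] counts inputs x with S[x ^ din] ^ S[x] == dout;
# the differential probability of (din -> dout) is table[din][dout] / 16.
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for din in range(n):
        for x in range(n):
            table[din][sbox[x ^ din] ^ sbox[x]] += 1
    return table
```

SAT/SMT/MILP/CP models encode exactly these per-S-box transitions as constraints and chain them across rounds; optimising or enumerating over the chained constraints yields the characteristics and probability estimates compared in the paper.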
Last updated:  2024-01-24
Crystalor: Recoverable Memory Encryption Mechanism with Optimized Metadata Structure
Rei Ueno, Hiromichi Haneda, Naofumi Homma, Akiko Inoue, and Kazuhiko Minematsu
This study presents an efficient recoverable memory encryption mechanism, named Crystalor. Existing memory encryption mechanisms, such as the Intel SGX integrity tree, offer neither crash consistency nor recoverability, which results in attack surfaces and causes a non-trivial limitation of practical availability. Although the crash consistency of encrypted memory has been studied in the research field of microarchitecture, existing mechanisms lack formal security analysis and cannot incorporate metadata optimization mechanisms, which are essential to achieve practical performance. Crystalor efficiently realizes provably-secure recoverable memory encryption with metadata optimization. To establish Crystalor with provable security and practical performance, we develop a dedicated universal hash function PXOR-Hash and a microarchitecture equipped with PXOR-Hash. Crystalor incurs almost no latency overhead for recoverability under nominal operations, while its simple construction remains compatible with existing microarchitectures. We evaluate its practical performance through both algorithmic analyses and system-level simulation in comparison with state-of-the-art mechanisms, such as SCUE. Crystalor requires 29–62% fewer clock cycles per memory read/write operation than SCUE for protecting a 4 TB memory. In addition, Crystalor and SCUE require 312 GB and 554 GB memory overheads for metadata, respectively, which indicates that Crystalor achieves a memory overhead reduction of 44%. The results of the system-level simulation using the gem5 simulator indicate that Crystalor achieves a reduction of up to 11.5% in the workload execution time compared to SCUE. Moreover, Crystalor achieves higher availability and recovers memory several thousand times faster than SCUE, as Crystalor offers lazy recovery.
Last updated:  2024-01-24
eLIMInate: a Leakage-focused ISE for Masked Implementation
Hao Cheng, Daniel Page, and Weijia Wang
Even given a state-of-the-art masking scheme, masked software implementations of cryptographic functionality can pose significant challenges stemming, e.g., from simultaneous requirements for efficiency and security. In this paper we design an Instruction Set Extension (ISE) to address a specific element of this challenge, namely the elimination of leakage stemming from architectural and micro-architectural overwriting. Conceptually, the ISE allows a leakage-focused behavioural hint to be communicated from software to the micro-architecture: using it informs how computation is realised when applied to masking-specific data, which then offers an opportunity to eliminate associated leakage. We develop prototype, latency- and area-optimised implementations of the ISE design based on the RISC-V Ibex core. Using them, we demonstrate that use of the ISE can close the gap between assumptions about and actual behaviour of a device and thereby deliver an improved security guarantee.
Last updated:  2024-01-23
AnonPSI: An Anonymity Assessment Framework for PSI
Bo Jiang, Jian Du, and Qiang Yan
Private Set Intersection (PSI) is a widely used protocol that enables two parties to securely compute a function over the intersected part of their shared datasets and has been a significant research focus over the years. However, recent studies have highlighted its vulnerability to Set Membership Inference Attacks (SMIA), where an adversary might deduce an individual's membership by invoking multiple PSI protocols. This presents a considerable risk, even in the most stringent versions of PSI, which only return the cardinality of the intersection. This paper explores the evaluation of anonymity within the PSI context. Initially, we highlight the reasons why existing works fall short in measuring privacy leakage, and subsequently propose two attack strategies that address these deficiencies. Furthermore, we provide theoretical guarantees on the performance of our proposed methods. In addition, we illustrate how the integration of auxiliary information, such as the sum of payloads associated with members of the intersection (PSI-SUM), can enhance attack efficiency. We conducted a comprehensive performance evaluation of the proposed attack strategies using two real datasets. Our findings indicate that our methods markedly enhance attack efficiency when contrasted with previous research endeavors. The effectiveness of these attacks implies that depending solely on existing PSI protocols may not provide an adequate level of privacy assurance. It is recommended to combine privacy-enhancing technologies synergistically to enhance privacy protection even further.
Last updated:  2024-01-23
Tree-based Lookup Table on Batched Encrypted Queries using Homomorphic Encryption
Jung Hee Cheon, Hyeongmin Choe, and Jai Hyun Park
Homomorphic encryption (HE) is in the spotlight as a solution for privacy-related issues in various real-world scenarios. However, the limited types of operations supported by each HE scheme have been a major drawback in applications. While HE schemes based on the learning-with-errors (LWE) problem provide efficient lookup table (LUT) evaluation in terms of latency, they have downsides in arithmetic operations and low throughput compared to HE schemes based on the ring LWE (RLWE) problem. The use of HE on circuits containing LUTs has been partly limited if they contain arithmetic operations or their computational width is large. In this paper, we propose homomorphic algorithms for batched queries on LUTs by using RLWE-based HE schemes. To look up encrypted LUTs of size $n$ on encrypted queries, our algorithms use $O(\log{n})$ homomorphic comparisons and $O(n)$ multiplications. For unencrypted LUTs, our algorithms use $O(\log{n})$ comparisons, $O(\sqrt{n})$ ciphertext multiplications, and $O(n)$ scalar multiplications. We provide a proof-of-concept implementation based on the CKKS scheme (Asiacrypt 2017). The amortized running time for an encrypted (resp. unencrypted) LUT of size $512$ is $0.041$ (resp. $0.025$) seconds. Our implementation reported roughly $2.4$-$6.0$x higher throughput than the current implementation of LWE-based schemes, with more flexibility on the structure of the LUTs.
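As a rough plaintext sketch of the tree-based lookup idea, with a 0/1-valued `ge` standing in for a homomorphic comparison (the real algorithms operate slot-wise on batched CKKS ciphertexts; all names here are ours):

```python
def ge(x, t):
    # Stand-in for a homomorphic comparison: encrypted 1.0 if x >= t else 0.0.
    return 1.0 if x >= t else 0.0

def tree_lut(table, q):
    n = len(table)  # power of two
    # Extract the log2(n) index bits with comparisons, MSB first, subtracting
    # the resolved prefix after each level (data-oblivious arithmetic only).
    bits, rem, half = [], q, n // 2
    while half >= 1:
        b = ge(rem, half)
        bits.append(b)
        rem = rem - b * half
        half //= 2
    # Select table[q] by blending halves level by level: O(n) multiplications.
    vals = list(table)
    for b in bits:
        m = len(vals) // 2
        vals = [(1 - b) * lo + b * hi for lo, hi in zip(vals[:m], vals[m:])]
    return vals[0]
```

The operation counts of this toy match those stated above: $\log_2 n$ comparisons, and $n/2 + n/4 + \dots = O(n)$ multiplications in the blending stage.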
Last updated:  2024-01-23
Cryptanalysis of the Peregrine Lattice-Based Signature Scheme
Xiuhan Lin, Moeto Suzuki, Shiduo Zhang, Thomas Espitau, Yang Yu, Mehdi Tibouchi, and Masayuki Abe
The Peregrine signature scheme is one of the candidates in the ongoing Korean post-quantum cryptography competition. It is proposed as a high-speed variant of Falcon, which is a hash-and-sign signature scheme over NTRU lattices and one of the schemes selected by NIST for standardization. To this end, Peregrine replaces the lattice Gaussian sampler in the Falcon signing procedure with a new sampler based on the centered binomial distribution. While this modification offers significant advantages in terms of efficiency and implementation, it does not come with a provable guarantee that signatures do not leak information about the signing key. Unfortunately, lattice-based signature schemes in the hash-and-sign paradigm that lack such a guarantee (such as GGH, NTRUSign or DRS) have generally proved insecure. In this paper, we show that Peregrine is no exception, by demonstrating a practical key recovery attack against it. We observe that the distribution of Peregrine signatures is a hidden transformation of some public distribution and still leaks information about the signing key. By adapting the parallelepiped-learning technique of Nguyen and Regev (Eurocrypt 2006), we show that the signing key can be recovered from a relatively small number of signatures. The learning technique alone yields an approximate version of the key, from which we can recover the exact key using a decoding technique due to Thomas Prest (PKC 2023). For the reference implementation (resp. the official specification version) of Peregrine-512, we fully recover the secret key with good probability in a few hours given around 25,000 (resp. 11 million) signature samples.
Last updated:  2024-01-23
On Instantiating Unleveled Fully-Homomorphic Signatures from Falsifiable Assumptions
Romain Gay and Bogdan Ursu
We build the first unleveled fully homomorphic signature scheme in the standard model. Our scheme is not constrained by any a-priori bound on the depth of the functions that can be homomorphically evaluated, and relies on subexponentially-secure indistinguishability obfuscation, fully-homomorphic encryption and a non-interactive zero-knowledge (NIZK) proof system with composable zero-knowledge. Our scheme is also the first to satisfy the strong security notion of context-hiding for an unbounded number of levels, ensuring that signatures computed homomorphically do not leak the original messages from which they were computed. All building blocks are instantiable from falsifiable assumptions in the standard model, avoiding the need for knowledge assumptions. The main difficulty we overcome stems from the fact that bootstrapping, which is a crucial tool for obtaining unleveled fully homomorphic encryption (FHE), has no equivalent for homomorphic signatures, requiring us to use novel techniques.
Last updated:  2024-01-23
Towards Practical Multi-key TFHE: Parallelizable, Key-Compatible, Quasi-linear Complexity
Hyesun Kwak, Seonhong Min, and Yongsoo Song
Multi-key homomorphic encryption is a generalized notion of homomorphic encryption supporting arbitrary computation on ciphertexts, possibly encrypted under different keys. In this paper, we revisit the work of Chen, Chillotti and Song (ASIACRYPT 2019) and present yet another multi-key variant of the TFHE scheme. The previous construction by Chen et al. involves a blind rotation procedure where the complexity of each iteration gradually increases as it continuously operates on ciphertexts under different keys. Hence, the complexity of gate bootstrapping grows quadratically with respect to the number of associated keys. Our scheme is based on a new blind rotation algorithm which consists of two separate phases. We first split a given multi-key ciphertext into several single-key ciphertexts, take each of them as input to the blind rotation procedure, and obtain accumulators corresponding to individual keys. Then, we merge these single-key accumulators into a single multi-key accumulator. In particular, we develop a novel homomorphic operation between single-key and multi-key ciphertexts to instantiate our pipeline. Therefore, our construction achieves an almost linear time complexity since the gate bootstrapping is dominated by the first phase of blind rotation, which requires only independent single-key operations. It also enjoys the great advantages of parallelizability and key-compatibility. We implement the proposed scheme and provide its performance benchmark. For example, our 16-key gate bootstrapping takes about 5.65s, which is 4.38x faster compared to the prior work.
Last updated:  2024-01-23
Laconic Branching Programs from the Diffie-Hellman Assumption
Sanjam Garg, Mohammad Hajiabadi, Peihan Miao, and Alice Murphy
Laconic cryptography enables secure two-party computation (2PC) on unbalanced inputs with asymptotically-optimal communication in just two rounds of communication. In particular, the receiver (who sends the first-round message) holds a long input and the sender (who sends the second-round message) holds a short input, and the size of their communication to securely compute a function on their joint inputs only grows with the size of the sender's input and is independent of the receiver's input size. The work on laconic oblivious transfer (OT) [Cho et al. CRYPTO 2017] and laconic private set intersection (PSI) [Alamati et al. TCC 2021] shows how to achieve secure laconic computation for OT and PSI from the Diffie-Hellman assumption. In this work, we push the limits further and achieve laconic branching programs from the Diffie-Hellman assumption. In particular, the receiver holds a large branching program $P$ and the sender holds a short input $x$. We present a two-round 2PC protocol that allows the receiver to learn $x$ iff $P(x) =1$, and nothing else. The communication only grows with the size of $x$ and the depth of $P$, and does not further depend on the size of $P$.
Last updated:  2024-01-23
Fully Homomorphic Encryption-Based Protocols for Enhanced Private Set Intersection Functionalities
JINGWEI HU, Junyan Chen, Wangchen Dai, and Huaxiong Wang
This study delves into secure computations for set intersections using fully homomorphic encryption (FHE) within the semi-honest setting. Our protocols facilitate joint computations between two parties, each holding a set of inputs of size $N_s$ and $N_r$, respectively. The primary objective is to compute various functionalities, such as intersection size and sum, while maintaining data confidentiality. These functionalities extend the classic private set intersection (PSI) and have practical applications in contact tracing, ad conversion analysis, and online dating, each abstracted into specialized PSI protocols. Our work demonstrates that these extended PSI functionalities are interconnected, with the PSI-cardinality protocol serving as the foundation. By adapting this protocol, we naturally arrive at PSI-sum-cardinality. Additionally, PSI-token-threshold is achieved by augmenting PSI-cardinality with FHE-based oblivious polynomial evaluation (OPE). The tPSI protocol combines PSI-token-threshold and standard PSI, allowing information sharing when the intersection size exceeds a threshold. Our protocols excel in simplicity, enhancing ease of understanding, implementation, optimization, and long-term maintenance. They also exhibit communication complexity sublinear in the size of the larger (sender's) set, rendering them well-suited for scenarios involving substantial data. Various optimization techniques further bolster their practical efficiency.
Last updated:  2024-01-23
Powers of Tau in Asynchrony
Sourav Das, Zhuolun Xiang, and Ling Ren
The $q$-Strong Diffie-Hellman ($q$-SDH) parameters are foundational to efficient constructions of many cryptographic primitives such as zero-knowledge succinct non-interactive arguments of knowledge, polynomial/vector commitments, verifiable secret sharing, and randomness beacon. The only existing method to generate these parameters securely is highly sequential, requires synchrony assumptions, and has very high communication and computation costs. For example, to generate parameters for any given $q$, each party incurs a communication cost of $\Omega(nq)$ and requires $\Omega(n)$ rounds. Here $n$ is the number of parties in the secure multiparty computation protocol. Since $q$ is typically large, i.e., on the order of billions, the cost is highly prohibitive. In this paper, we present a distributed protocol to generate $q$-SDH parameters in an asynchronous network. In a network of $n$ parties, our protocol tolerates up to one-third of malicious parties. Each party incurs a communication cost of $O(q + n^2\log q)$ and the protocol finishes in $O(\log q + \log n)$ expected rounds. We provide a rigorous security analysis of our protocol. We implement our protocol and evaluate it with up to 128 geographically distributed parties. Our evaluation illustrates that our protocol is highly scalable and results in a 2-6$\times$ better runtime and 4-13$\times$ better per-party bandwidth usage compared to the state-of-the-art synchronous protocol for generating $q$-SDH parameters.
Last updated:  2024-01-23
Fast and Designated-verifier Friendly zkSNARKs in the BPK Model
Xudong Zhu, Xuyang Song, and Yi Deng
Following the pioneering results of Bellare et al. in ASIACRYPT 2016, there have been many efforts to construct zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs) that satisfy subversion zero knowledge (S-ZK) and standard soundness from zk-SNARKs in the common reference string (CRS) model. These constructions can be regarded as secure in the bare public key (BPK) model, because the equivalence between S-ZK in the CRS model and uniform non-black-box zero knowledge in the BPK model was proved by Abdolmaleki et al. in PKC 2020. In this study, by leveraging the power of the random oracle (RO) model, we propose the first publicly verifiable non-uniform ZK zk-SNARK scheme in the BPK model that maintains comparable efficiency with its conventional counterpart, and that is also compatible with the well-known transformation proposed by Bitansky et al. in TCC 2013 to obtain an efficient designated-verifier zk-SNARK. We achieve this goal by adding only a constant number of elements to the CRS, and by using an unconventional but natural method to transform Groth’s zk-SNARK from EUROCRYPT 2016. In addition, we propose a new speed-up technique that provides a trade-off. Specifically, if a logarithmic number of elements are added to the CRS, then, depending on the circuit, the CRS verification time in our construction can be approximately $9\%-23\%$ shorter than that of the conventional counterpart.
Last updated:  2024-01-22
Lookup Arguments: Improvements, Extensions and Applications to Zero-Knowledge Decision Trees
Matteo Campanelli, Antonio Faonio, Dario Fiore, Tianyu Li, and Helger Lipmaa
Lookup arguments allow one to prove that the elements of a committed vector come from a (bigger) committed table. They enable novel approaches to reduce the prover complexity of general-purpose zkSNARKs, implementing ``non-arithmetic operations'' such as range checks, XOR and AND more efficiently. We extend the notion of lookup arguments along two directions and improve their efficiency: (1) we extend vector lookups to matrix lookups (where we can prove that a committed matrix is a submatrix of a committed table); (2) we consider the notion of zero-knowledge lookup argument that keeps the privacy of both the sub-vector/sub-matrix and the table; (3) we present new zero-knowledge lookup arguments, dubbed cq+, zkcq+ and cq++, more efficient than the state of the art, namely the recent work by Eagen, Fiore and Gabizon named cq. Finally, we give a novel application of the zero-knowledge matrix lookup argument to the domain of zero-knowledge decision trees, where the model provider releases a commitment to a decision tree and can prove statistics over the committed data structure in zero knowledge. Our scheme based on lookup arguments has succinct verification, a prover time complexity asymptotically better than the state of the art, and is secure in a strong security model where the commitment to the decision tree can be malicious.
Last updated:  2024-01-22
Zendoo: a zk-SNARK Verifiable Cross-Chain Transfer Protocol Enabling Decoupled and Decentralized Sidechains
Alberto Garoffolo, Dmytro Kaidalov, and Roman Oliynykov
Sidechains are an appealing innovation devised to enable blockchain scalability and extensibility. The basic idea is simple yet powerful: construct a parallel chain - sidechain - with desired features, and provide a way to transfer coins between the mainchain and the sidechain. In this paper, we introduce Zendoo, a construction for Bitcoin-like blockchain systems that allows the creation and communication with sidechains of different types without knowing their internal structure. We consider a parent-child relationship between the mainchain and sidechains, where sidechain nodes directly observe the mainchain while mainchain nodes only observe cryptographically authenticated certificates from sidechain maintainers. We use zk-SNARKs to construct a universal verifiable transfer mechanism that is used by sidechains. Moreover, we propose a specific sidechain construction, named Latus, that can be built on top of this infrastructure, and realizes a decentralized verifiable blockchain system for payments. We leverage the use of recursive composition of zk-SNARKs to generate succinct proofs of sidechain state progression that are used to generate certificates’ validity proofs. This allows the mainchain to efficiently verify all operations performed in the sidechain without knowing any details about those operations.
Last updated:  2024-01-22
Snarktor: A Decentralized Protocol for Scaling SNARKs Verification in Blockchains
Alberto Garoffolo, Dmytro Kaidalov, and Roman Oliynykov
The use of zero-knowledge Succinct Non-Interactive Arguments of Knowledge (zk-SNARKs) and similar types of proofs has become increasingly popular as a solution for improving scalability, privacy, and interoperability of blockchain systems. However, even with the most advanced proving systems, verifying a single SNARK proof can require a significant amount of computational resources, making it expensive to perform on-chain. This becomes a noticeable bottleneck in scaling SNARK-based applications. Further efficiency improvement to avoid this bottleneck lies in utilizing distributed recursive proof composition to aggregate multiple existing proofs into one that verifies all underlying proofs. Building upon this concept, we present a new protocol for decentralized recursive proof aggregation, in which one unique proof aggregating many input proofs can be efficiently verified on-chain, increasing the throughput and cost efficiency of SNARK-based blockchains. The protocol is designed for decentralized environments where independent actors (provers) can join and contribute to the proof generation process. We also present an incentive scheme for such actors. The protocol is abstract enough to be used with a variety of proving systems that support recursive aggregation.
Last updated:  2024-01-22
Starlit: Privacy-Preserving Federated Learning to Enhance Financial Fraud Detection
Aydin Abadi, Bradley Doyle, Francesco Gini, Kieron Guinamard, Sasi Kumar Murakonda, Jack Liddell, Paul Mellor, Steven J. Murdoch, Mohammad Naseri, Hector Page, George Theodorakopoulos, and Suzanne Weller
Federated Learning (FL) is a data-minimization approach enabling collaborative model training across diverse clients with local data, avoiding direct data exchange. However, state-of-the-art FL solutions to identify fraudulent financial transactions exhibit a subset of the following limitations. They (1) lack a formal security definition and proof, (2) assume prior freezing of suspicious customers’ accounts by financial institutions (limiting the solutions’ adoption), (3) scale poorly, involving either $O(n^2)$ computationally expensive modular exponentiation (where $n$ is the total number of financial institutions) or highly inefficient fully homomorphic encryption, (4) assume the parties have already completed the identity alignment phase, hence excluding it from the implementation, performance evaluation, and security analysis, and (5) struggle to resist clients’ dropouts. This work introduces Starlit, a novel scalable privacy-preserving FL mechanism that overcomes these limitations. It has various applications, such as enhancing financial fraud detection, mitigating terrorism, and enhancing digital health. We implemented Starlit and conducted a thorough performance analysis using synthetic data from a key player in global financial transactions. The evaluation indicates Starlit’s scalability, efficiency, and accuracy.
Last updated:  2024-01-22
On Structure-Preserving Cryptography and Lattices
Dennis Hofheinz, Kristina Hostáková, Roman Langrehr, and Bogdan Ursu
The Groth-Sahai proof system is a highly efficient pairing-based proof system for a specific class of group-based languages. Cryptographic primitives that are compatible with these languages (such that we can express, e.g., that a ciphertext contains a valid signature for a given message) are called "structure-preserving". The combination of structure-preserving primitives with Groth-Sahai proofs makes it possible to prove complex statements that involve encryptions and signatures, and has proved useful in a variety of applications. However, so far, the concept of structure-preserving cryptography has been confined to the pairing setting. In this work, we propose the first framework for structure-preserving cryptography in the lattice setting. Concretely, we
- define "structure-preserving sets" as an abstraction of (typically noisy) lattice-based languages,
- formalize a notion of generalized structure-preserving encryption and signature schemes (capturing a number of existing lattice-based encryption and signature schemes),
- construct a compatible zero-knowledge argument system that makes it possible to argue about lattice-based structure-preserving primitives, and
- offer a lattice-based construction of verifiably encrypted signatures in our framework.
Along the way, we also discover a new and efficient strongly secure lattice-based signature scheme. This scheme combines Rückert's lattice-based signature scheme with the lattice delegation strategy of Agrawal et al., which yields more compact and efficient signatures. We hope that our framework provides a first step towards a modular and versatile treatment of cryptographic primitives in the lattice setting.
Last updated:  2024-01-22
Theoretical differential fault attacks on FLIP and FiLIP
Pierrick Méaux and Dibyendu Roy
In this article, we examine Differential Fault Attacks (DFA) targeting two stream ciphers, FLIP and FiLIP. We explore the fault model where an adversary flips a single bit of the key at an unknown position. Our analysis involves establishing complexity bounds for these attacks, contingent upon the cryptographic parameters of the Boolean functions employed as filters and the key size. Initially, we demonstrate how the concept of sensitivity enables the detection of the fault position using only a few keystream bits. This represents an enhancement over previous DFA methodologies applied to these ciphers. Subsequently, we leverage the properties of the filter's derivatives to execute attacks. This approach is universally applicable to any filter, and we delineate specific attack strategies for the two function families previously implemented in these ciphers.
Last updated:  2024-01-22
GeT a CAKE: Generic Transformations from Key Encapsulation Mechanisms to Password Authenticated Key Exchanges
Hugo Beguinet, Céline Chevalier, David Pointcheval, Thomas Ricosset, and Mélissa Rossi
Password Authenticated Key Exchange (PAKE) protocols have become a key building block in many security products as they provide interesting efficiency/security trade-offs. Indeed, a PAKE allows one to dispense with heavy public-key infrastructures, and its efficiency and portability make it well suited for applications such as the Internet of Things or e-passports. With the emerging quantum threat and the effervescent development of post-quantum public-key algorithms in the last five years, one may wonder how to modify existing password authenticated key exchange protocols, which currently rely on Diffie-Hellman problems, in order to include newly introduced and soon-to-be-standardized post-quantum key encapsulation mechanisms (KEMs). A generic solution is desirable for maintaining modularity and adaptability with the many post-quantum KEMs that have been introduced. In this paper, we propose two new generic and natural constructions, proven in the Universal Composability (UC) model, that transform, in a black-box manner, a KEM into a PAKE with very limited performance overhead: one or two extra symmetric encryptions. Behind the simplicity of the designs, establishing security proofs in the UC model is actually non-trivial and requires some additional properties of the underlying KEM, such as fuzziness and anonymity. Luckily, post-quantum KEMs often enjoy these two extra properties. As a demonstration, we prove that it is possible to apply our transformations to Crystals-Kyber, a lattice-based post-quantum KEM that will soon be standardized by the National Institute of Standards and Technology (NIST). In a nutshell, this work opens up the possibility of securely including post-quantum cryptography in PAKE-based real-world protocols.
Last updated:  2024-01-22
Zero Knowledge Protocols and Signatures from the Restricted Syndrome Decoding Problem
Marco Baldi, Sebastian Bitzer, Alessio Pavoni, Paolo Santini, Antonia Wachter-Zeh, and Violetta Weger
The Restricted Syndrome Decoding Problem (R-SDP) corresponds to the Syndrome Decoding Problem (SDP) with the additional constraint that all entries of the solution error vector must live in a fixed subset of the finite field. In this paper, we study how this problem can be applied to the construction of signatures derived from Zero-Knowledge (ZK) protocols. First, we show that R-SDP appears to be well-suited for this type of application: ZK protocols relying on SDP can easily be modified to use R-SDP, resulting in significant reductions in the communication cost. We then introduce and analyze a variant of R-SDP, which we call R-SDP$(G)$, with the property that solution vectors can be represented with a number of bits that is slightly larger than the security parameter (which clearly provides an ultimate lower bound). This enables the design of competitive ZK protocols. We show that existing ZK protocols can greatly benefit from the use of R-SDP, achieving signature sizes in the order of 7 kB, which are smaller than those of several other schemes submitted to NIST's additional call for post-quantum digital signatures.
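A tiny plaintext example of the restricted constraint, over $\mathbb{F}_7$ with a hypothetical restricted entry set and parity-check matrix (parameters chosen only for illustration):

```python
import itertools

# Toy R-SDP instance: find e with H e = s (mod Q) and every entry of e in E.
Q, E = 7, (0, 1, 2)                   # field size, restricted entry set (toy)
H = [[1, 2, 3, 4], [5, 6, 0, 1]]      # toy parity-check matrix

def syndrome(e):
    return tuple(sum(h * x for h, x in zip(row, e)) % Q for row in H)

def solve(s):
    # Brute force over the restricted search space E^n (only viable for toys).
    return [e for e in itertools.product(E, repeat=len(H[0])) if syndrome(e) == s]
```

The restriction shrinks the space of candidate error vectors from $q^n$ to $|E|^n$, which is the kind of compression the ZK protocols above exploit to reduce communication cost.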
Last updated:  2024-01-22
Improved All-but-One Vector Commitment with Applications to Post-Quantum Signatures
Dung Bui, Kelong Cong, and Cyprien Delpech de Saint Guilhem
Post-quantum digital signature schemes have recently received increased attention due to the NIST standardization project for additional signatures. MPC-in-the-Head and VOLE-in-the-Head are general techniques for constructing such signatures from zero-knowledge proof systems. A common theme between the two is an all-but-one vector commitment scheme which internally uses GGM trees. This primitive is responsible for a significant part of the computational time during signing and verification. A more efficient technique for constructing GGM trees is the half-tree technique, introduced by Guo et al. (Eurocrypt 2023). Our work builds an all-but-one vector commitment scheme from the half-tree technique, and further generalizes it to an all-but-\(\tau\) vector commitment scheme. Crucially, our work avoids the use of the random oracle assumption in an important step, which means our binding proof is non-trivial and instead relies on the random permutation oracle. Since this oracle can be instantiated using fixed-key AES, which has hardware support, we achieve faster signing and verification times. We integrate our vector commitment scheme into FAEST (faest.info), a round one candidate in the NIST standardization process, and demonstrate its performance with a prototype implementation. For \(\lambda = 128\), our experimental results show a nearly \(3.5\)-fold improvement in signing and verification times.
Last updated:  2024-01-22
On Modular Algorithms and Butterfly Operations in Number Theoretic Transform
Yanze Yang, Yiran Jia, and Guangwu Xu
Number theoretic transform (NTT) has been a very useful tool in computations for number theory, algebra and cryptography. Its performance affects some post-quantum cryptosystems. In this paper, we discuss the butterfly operation of NTT. This basic module of NTT requires heavy modular arithmetic. Montgomery reduction is commonly used in this setting. Recently, several variants of Montgomery reduction have been proposed for the purpose of speeding up NTT. We observe that the Chinese remainder theorem (CRT) can be involved in this type of algorithm in natural and transparent ways. In this paper, a framework of using CRT to model Montgomery-type algorithms is described. The derivation of these algorithms as well as their correctness are all treated in the CRT framework. Under our approach, some problems of a modular reduction algorithm (published in IACR Transactions on Cryptographic Hardware and Embedded Systems, doi:10.46586/tches.v2022.i4.614-636) are identified, and a counterexample is generated to show that the algorithm is incorrect.
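For readers unfamiliar with the butterfly/Montgomery combination the abstract refers to, the sketch below shows a standard Cooley-Tukey butterfly that uses textbook Montgomery reduction (REDC) for the twiddle multiplication, as is common in Kyber-style code. This is only the generic technique, not the paper's CRT-based derivation or the criticized algorithm; the value zeta = 17 is Kyber's primitive root of unity.

```python
# Sketch of a Cooley-Tukey NTT butterfly using textbook Montgomery
# reduction (illustrative; not the paper's CRT framework).

Q = 3329                    # Kyber's prime modulus
R = 2**16                   # Montgomery radix
QINV = pow(-Q, -1, R)       # -Q^{-1} mod R, precomputed (Python 3.8+)

def redc(t):
    """Montgomery reduction: returns t * R^{-1} mod Q for 0 <= t < Q*R."""
    m = (t * QINV) % R
    u = (t + m * Q) // R    # t + m*Q is divisible by R by construction
    return u % Q

def ct_butterfly(a, b, zeta_mont):
    """One butterfly: (a, b) -> (a + zeta*b, a - zeta*b) mod Q.
    zeta_mont is the twiddle factor premultiplied by R (Montgomery form),
    so redc(zeta_mont * b) yields the plain product zeta*b mod Q."""
    t = redc(zeta_mont * b)
    return (a + t) % Q, (a - t) % Q

zeta = 17                   # primitive root of unity used by Kyber's NTT
zeta_mont = (zeta * R) % Q  # Montgomery form of the twiddle
```

Storing the twiddle factor in Montgomery form is what lets a single REDC per butterfly replace a full modular division.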
Last updated:  2024-01-22
Revisiting the security analysis of SNOVA
Yasuhiko Ikematsu and Rika Akiyama
SNOVA is a multivariate signature scheme submitted to the additional NIST PQC standardization project started in 2022. SNOVA is constructed by incorporating the structure of the matrix ring over a finite field into the UOV signature scheme, and the core part of its public key is the UOV public key whose coefficients consist of matrices. As a result, SNOVA dramatically reduces the public key size compared to UOV. In this paper, we recall the construction of SNOVA, and reconsider its security analysis. In particular, we investigate key recovery attacks applied to the core part of the public key of SNOVA in detail. Due to our analysis, we show that some parameters of SNOVA submitted in the additional NIST PQC standardization do not satisfy the claimed security levels.
Last updated:  2024-01-22
ConvKyber: Unleashing the Power of AI Accelerators for Faster Kyber with Novel Iteration-based Approaches
Tian Zhou, Fangyu Zheng, Guang Fan, Lipeng Wan, Wenxu Tang, Yixuan Song, Yi Bian, and Jingqiang Lin
The remarkable performance capabilities of AI accelerators offer promising opportunities for accelerating cryptographic algorithms, particularly in the context of lattice-based cryptography. However, current approaches to leveraging AI accelerators often remain at a rudimentary level of implementation, overlooking the intricate internal mechanisms of these devices. Consequently, a significant number of computational resources is underutilized. In this paper, we present a comprehensive exploration of NVIDIA Tensor Cores and introduce a novel framework tailored specifically for Kyber. Firstly, we propose two innovative approaches that efficiently break down Kyber's NTT into iterative matrix multiplications, resulting in approximately a 75% reduction in costs compared to the state-of-the-art scanning-based methods.Secondly, by reversing the internal mechanisms, we precisely manipulate the internal resources of Tensor Cores using assembly-level code instead of inefficient standard interfaces, eliminating memory accesses and redundant function calls. Finally, building upon our highly optimized NTT, we provide a complete implementation for all parameter sets of Kyber. Our implementation surpasses the state-of-the-art Tensor Core based work, achieving remarkable speed-ups of 1.93x, 1.65x, 1.22x and 3.55x for polyvec_ntt, KeyGen, Enc and Dec in Kyber-1024, respectively. Even when considering execution latency, our throughput-oriented full Kyber implementation maintains an acceptable execution latency. For instance, the execution latency ranges from 1.02 to 5.68 milliseconds for Kyber-1024 on R3080 when achieving the peak throughput.
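The basic observation that makes Tensor Cores applicable here is that a length-n NTT is a matrix-vector product with the matrix Omega[k][j] = w^(jk) mod q, so it can be phrased in terms of matrix multiplication. The toy below checks this equivalence with made-up small parameters (q = 17, n = 4, w = 4); the paper's actual decomposition into iterated small matrix products is not reproduced.

```python
# A length-n NTT equals multiplication by the matrix Omega with
# Omega[k][j] = w^(j*k) mod q -- the basic fact that lets NTTs be mapped
# onto matrix-multiplication hardware. Toy parameters, for illustration.

q, n, w = 17, 4, 4          # w has multiplicative order n modulo q

Omega = [[pow(w, j * k, q) for j in range(n)] for k in range(n)]

def ntt_matrix(x):
    """NTT as one matrix-vector product over F_q."""
    return [sum(Omega[k][j] * x[j] for j in range(n)) % q for k in range(n)]

def ntt_naive(x):
    """Direct evaluation X[k] = sum_j x[j] * w^(j*k) mod q."""
    return [sum(x[j] * pow(w, j * k, q) for j in range(n)) % q
            for k in range(n)]
```

On real hardware the full n x n product is too costly, which is why decompositions into many small matrix multiplications, as proposed in the paper, matter.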
Last updated:  2024-01-21
Chosen-Ciphertext Secure Dual-Receiver Encryption in the Standard Model Based on Post-Quantum Assumptions
Laurin Benz, Wasilij Beskorovajnov, Sarai Eilebrecht, Roland Gröll, Maximilian Müller, and Jörn Müller-Quade
Dual-receiver encryption (DRE) is a special form of public key encryption (PKE) that allows a sender to encrypt a message for two recipients. Without further properties, the difference between DRE and PKE is only syntactical. One such important property is soundness, which requires that no ciphertext can be constructed such that the recipients decrypt to different plaintexts. Many applications rely on this property in order to realize more complex protocols or primitives. In addition, many of these applications explicitly avoid the usage of the random oracle, which poses an additional requirement on a DRE construction. We show that none of the IND-CCA2 secure standard-model DRE constructions based on post-quantum assumptions can be augmented with soundness, and we describe attacks thereon. We then give an overview of all applications of IND-CCA2 secure DRE, group them into generic (i.e., applications using DRE as a black box) and non-generic applications, and demonstrate that all generic ones require either soundness or public verifiability. Conclusively, we identify the gap of sound and IND-CCA2 secure DRE constructions based on post-quantum assumptions in the standard model. In order to fill this gap, we provide two IND-CCA2 secure DRE constructions based on the standard post-quantum assumptions, Normal Form Learning With Errors (NLWE) and Learning Parity with Noise (LPN).
Last updated:  2024-01-21
Short Code-based One-out-of-Many Proofs and Applications
Xindong Liu and Li-Ping Wang
In this work, we propose two novel succinct one-out-of-many proofs from coding theory, which can be seen as extensions of Stern's framework and Véron's framework from proving knowledge of a preimage to proving knowledge of a preimage for one element in a set, respectively. The size of each proof is short and scales better with the size of the public set than the code-based accumulator in \cite{nguyen2019new}. Based on our new constructions, we further present a logarithmic-size ring signature scheme and a logarithmic-size group signature scheme. Our schemes feature a short signature size, especially our group signature. To the best of our knowledge, it is the most compact code-based group signature scheme so far. At 128-bit security level, our group signature size is about 144 KB for a group with $2^{20}$ members while the group signature size of the previously most compact code-based group signature constructed by the above accumulator exceeds 3200 KB.
Last updated:  2024-01-21
Adaptively Secure BLS Threshold Signatures from DDH and co-CDH
Sourav Das and Ling Ren
Threshold signature is one of the most important cryptographic primitives in distributed systems. A popular choice of threshold signature scheme is the BLS threshold signature introduced by Boldyreva (PKC'03). Some attractive properties of Boldyreva's threshold signature are that the signatures are unique and short, the signing process is non-interactive, and the verification process is identical to that of non-threshold BLS. These properties have resulted in its practical adoption in several decentralized systems. However, despite its popularity and wide adoption, up until recently, the Boldyreva scheme has been proven secure only against a static adversary. Very recently, Bacho and Loss (CCS'22) presented the first proof of adaptive security for Boldyreva's threshold signature, but they have to rely on strong and non-standard assumptions such as the hardness of one-more discrete log (OMDL) and the Algebraic Group Model~(AGM). In this paper, we present the first adaptively secure threshold BLS signature scheme that relies on the hardness of DDH and co-CDH in asymmetric pairing group in the Random Oracle Model (ROM). Our signature scheme also has non-interactive signing, compatibility with non-threshold BLS verification, and practical efficiency like Boldyreva's scheme. Moreover, to achieve static security, our scheme only needs the hardness of CDH in the ROM, which is the same as the standard non-threshold BLS signature. These properties make our protocol a suitable candidate for practical adoption with the added benefit of provable adaptive security. We also present an efficient distributed key generation (DKG) protocol to set up the signing keys for our signature scheme. We implement our scheme in Go and evaluate its signing and aggregation costs.
Last updated:  2024-01-21
On Algebraic Embedding for Unstructured Lattices
Madalina Bolboceanu, Zvika Brakerski, and Devika Sharma
Lattice-based cryptography, the study of cryptographic primitives whose security is based on the hardness of so-called lattice problems, has taken center stage in cryptographic research in recent years. It potentially offers favorable security features, even against quantum algorithms. One of the main obstacles for wide adoption of this type of cryptography is its unsatisfactory efficiency. To address this point, efficient lattice-based cryptography usually relies on the intractability of problems on lattices with additional algebraic structure (such as so-called ideal-lattices or module-lattices). It is an important open question to evaluate the hardness of such lattice problems, and their relation to the hardness of problems on unstructured lattices. It is a known fact that an unstructured lattice, which is simply an additive discrete group in Euclidean space, can be cast as an ideal-lattice in some \emph{order} of a number field (and thus, in a rather trivial sense, that ideals in orders are as general as unstructured lattices). However, it is not known whether this connection can be used to imply useful hardness results for structured lattices, or alternatively new algorithmic techniques for unstructured lattices. In this work we establish a gradient of hardness for the Order-LWE problem (a generalization of the well known Ring-LWE problem), as it varies over orders in a number field. Furthermore, we show that, in every number field, there are certain orders such that the corresponding Order-LWE problem is at least as hard as the (unstructured) LWE problem. So in general one should not hope to solve (any) Order-LWE more efficiently than LWE. However, we show that this connection holds in orders that are very ``skewed'' and hence, perhaps, irrelevant for improving efficiency in cryptographic applications. We further improve the hardness result for Order-LWE, to include \textit{all} ideal lattices, closing a gap left in prior work. 
This establishes a direct connection between problems on unstructured lattices and the structured problem of Order-LWE.
Last updated:  2024-01-21
Tagged Chameleon Hash from Lattices and Application to Redactable Blockchain
Yiming Li and Shengli Liu
Chameleon hash (CH) is a trapdoor hash function. Generally it is hard to find collisions, but with the help of a trapdoor, finding collisions becomes easy. CH plays an important role in converting a conventional blockchain to a redactable one. However, most existing CH schemes are too weak to support redactable blockchains. The currently known CH schemes serving redactable blockchains achieve at best the so-called ``full collision resistance (f-CR)'', but they are built either in the random oracle model or rely on heavy tools like the simulation-sound extractable non-interactive zero-knowledge (SSE-NIZK) proof system. Moreover, up to now there is no CH scheme with post-quantum f-CR security in the standard model. Therefore, no CH can support redactable blockchains in a post-quantum way without relying on random oracles. In this paper, we introduce a variant of CH, namely tagged chameleon hash (tCH). Tagged chameleon hash takes a tag into hash evaluations and collision finding algorithms. We define two security notions for tCH, restricted collision resistance (r-CR) and full collision resistance (f-CR), and prove the equivalence between r-CR and f-CR when tCH works in the one-time tag mode. We propose a tCH scheme from lattices without using any NIZK proof, and prove that its restricted collision resistance is reduced to the Short Integer Solution (SIS) assumption in the standard model. We also show how to apply tCH to a blockchain in one-time tag mode so that the blockchain can be compiled to a redactable one. Our tCH scheme provides the first post-quantum solution for redactable blockchains, without resorting to random oracles or NIZK proofs. Besides, we also construct a more efficient tCH scheme with r-CR tightly reduced to SIS in the random oracle model, which may be of independent interest.
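To make the CH notion concrete, the toy below implements the classical discrete-log chameleon hash (Krawczyk-Rabin style): hashing is CH(m, r) = g^m h^r, and the trapdoor x with h = g^x lets one solve m + xr = m' + xr' for a colliding r'. This is only the classical, untagged primitive with tiny made-up parameters; the paper's scheme is lattice-based and tagged, which this sketch does not capture.

```python
# Classical discrete-log chameleon hash (Krawczyk-Rabin style), as a toy
# illustration of the CH notion; parameters are tiny and insecure.

p, q, g = 23, 11, 2         # g generates the order-q subgroup of Z_p*
x = 7                       # trapdoor
h = pow(g, x, p)            # public hashing key

def ch(m, r):
    """CH(m, r) = g^m * h^r mod p; collision-resistant without x."""
    return (pow(g, m, p) * pow(h, r, p)) % p

def collide(m, r, m_new):
    """With trapdoor x, find r' such that CH(m_new, r') = CH(m, r):
    m + x*r = m_new + x*r'  (mod q)  =>  r' = r + (m - m_new) * x^{-1}."""
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 3, 5
m2 = 9
r2 = collide(m, r, m2)      # a collision, trivial to find with the trapdoor
```

In a redactable blockchain, a block's content is hashed with such a CH, so a trapdoor holder can swap the content while every stored hash link stays valid.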
Last updated:  2024-01-20
Oblivious Turing Machine
Sofiane Azogagh, Victor Deflour, and Marc-Olivier Killijian
In the ever-evolving landscape of Information Technologies, private decentralized computing on an honest-yet-curious server has emerged as a prominent paradigm. While numerous schemes exist to safeguard data during computation, the focus has primarily been on protecting the confidentiality of the data itself, often overlooking the potential information leakage arising from the function evaluated by the server. Recognizing this gap, this article aims to address the issue by presenting and implementing an innovative solution for ensuring the privacy of both the data and the program. We introduce a novel approach that combines the power of Fully Homomorphic Encryption with the concept of the Turing Machine model, resulting in the first fully secure, practical, non-interactive oblivious Turing Machine. Our Oblivious Turing Machine construction is based on only three hypotheses: the hardness of the Ring Learning With Errors problem, the ability to homomorphically evaluate non-linear functions, and the capacity to blindly rotate elements of a data structure. Based only on those three assumptions, we propose an implementation of an Oblivious Turing Machine relying on the TFHE cryptosystem and present some implementation results.
Last updated:  2024-01-20
On historical Multivariate Cryptosystems and their restorations as instruments of Post-Quantum Cryptography
Vasyl Ustimenko
The paper presents a short survey of the history of multivariate cryptography, together with the usage of old, broken multivariate digital signatures in new protocol-based cryptosystems constructed in terms of Noncommutative Cryptography. The general scheme of the new cryptosystems is a combination of Eulerian maps and quadratic maps with their trapdoor accelerators, which are pieces of information whose knowledge allows one to compute the preimages in polynomial time. These schemes are illustrated by historical examples of the Imai–Matsumoto multivariate digital signature schemes and Unbalanced Oil and Vinegar cryptosystems.
Last updated:  2024-01-19
New proof systems and an OPRF from CSIDH
Cyprien Delpech de Saint Guilhem and Robi Pedersen
Isogeny computations in CSIDH (Asiacrypt 2018) are described using a commutative group G acting on the set of supersingular elliptic curves. The commutativity property gives CSIDH enough flexibility to allow the creation of many cryptographic primitives and protocols. Nevertheless, these operations are limited and more complex applications have not yet been proposed. Regarding the composition of two group elements of G as addition, our goal in this work is to explore exponentiation, multiplication by public elements, and multiplication between secret elements of this group. We first introduce a two-party interactive protocol for multiplication of secret group elements. Then, we explore zero-knowledge proofs of these different arithmetic operations. We present two types of approaches, using either standard sigma protocols or the MPC-in-the-Head paradigm. Most of our proofs need a trusted setup, which can be removed in the MPC-in-the-Head setting using cut-and-choose techniques. We conclude this work by presenting an oblivious pseudorandom function based on our new framework, which is competitive with current state-of-the-art designs.
Last updated:  2024-01-19
Compact Selective Opening Security From LWE
Dennis Hofheinz, Kristina Hostáková, Julia Kastner, Karen Klein, and Akin Ünal
Selective opening (SO) security is a security notion for public-key encryption schemes that captures security against adaptive corruptions of senders. SO security comes in chosen-plaintext (SO-CPA) and chosen-ciphertext (SO-CCA) variants, neither of which is implied by standard security notions like IND-CPA or IND-CCA security. In this paper, we present the first SO-CCA secure encryption scheme that combines the following two properties: (1) it has a constant ciphertext expansion (i.e., ciphertexts are only larger than plaintexts by a constant factor), and (2) its security can be proven from a standard assumption. Previously, the only known SO-CCA secure encryption scheme achieving (1) was built from an ad-hoc assumption in the RSA regime. Our construction builds upon LWE, and in particular on a new and surprisingly simple construction of compact lossy trapdoor functions (LTFs). Our LTF can be converted into an “all-but-many LTF” (or ABM-LTF), which is known to be sufficient to obtain SO-CCA security. Along the way, we fix a technical problem in that previous ABM-LTF-based construction of SO-CCA security.
Last updated:  2024-01-19
Two-party GOST in two parts: fruitless search and fruitful synthesis
Liliya Akhmetzyanova, Evgeny Alekseev, Alexandra Babueva, Lidiia Nikiforova, and Stanislav Smyshlyaev
In the current paper we investigate the possibility of designing a secure two-party signature scheme with the same verification algorithm as in the Russian standardized scheme (GOST scheme). We solve this problem in two parts. The first part is a (fruitless) search for an appropriate scheme in the literature. It turned out that all existing schemes are insecure in the strong security models. The second part, a synthesis of a new signature scheme, ends fruitfully. We synthesize a new two-party GOST signature scheme, additionally using a commitment scheme, guided by the features of the GOST signature scheme, as well as the known attacks on existing schemes. We prove that this scheme is secure in a bijective random oracle model in the case when one of the parties is malicious, under the assumptions that the classical GOST scheme is unforgeable in a bijective random oracle model and that the commitment scheme is modelled as a random oracle.
Last updated:  2024-01-19
Enabling PERK on Resource-Constrained Devices
Slim Bettaieb, Loïc Bidoux, Alessandro Budroni, Marco Palumbi, and Lucas Pandolfo Perin
PERK is a digital signature scheme submitted to the recent NIST Post-Quantum Cryptography Standardization Process for Additional Digital Signature Schemes. For NIST security level I, PERK features sizes ranging from 6kB to 8.5kB, encompassing both the signature and public key, depending on the parameter set. Given its inherent characteristics, PERK's signing and verification algorithms involve the computation of numerous large objects, resulting in substantial stack-memory consumption ranging from 300kB to 1.5MB for NIST security level I and from 1.1MB to 5.7MB for NIST security level V. In this paper, we present a memory-versus-performance trade-off strategy that significantly reduces PERK's memory consumption to a maximum of approximately 82kB for any security level, enabling PERK to be executed on resource-constrained devices. Additionally, we explore various optimizations tailored to the Cortex M4 and introduce the first implementation of PERK designed for this platform.
Last updated:  2024-01-19
An $\tilde{O}(\log^2 p)$ Approach to Point-Counting on Elliptic Curves From a Prominent Family Over the Prime Field $\mathbb{F}_p$
Yuri Borissov and Miroslav Markov
We elaborate an approach for determining the order of an elliptic curve from the family $\mathcal{E}_p = \{E_a: y^2 = x^3 + a \pmod p, a \not = 0\}$ where $p$ is a prime number $> 3$. The essence of this approach consists in combining the well-known Hasse bound with an explicit formula for that order reduced modulo $p$. It allows us to advance an efficient technique of complexity $\tilde{O}(\log^2 p)$ for computing simultaneously the six orders associated with the family $\mathcal{E}_p$ when $p \equiv 1 \pmod 3$, thus improving on the best known algorithmic solution for that problem by almost an order of magnitude.
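The structure being exploited can be checked by brute force for a tiny prime: for p = 1 (mod 3), the curves y^2 = x^3 + a with a ranging over F_p* take on exactly six distinct orders (one per sextic twist class), each within the Hasse bound. The naive count below (chosen p = 13 is illustrative) verifies this; the paper's contribution is computing those six orders in O~(log^2 p) rather than by counting.

```python
# Brute-force sanity check: for p = 1 (mod 3), the family
# E_a : y^2 = x^3 + a over F_p yields six distinct curve orders, each
# satisfying the Hasse bound |#E - (p+1)| <= 2*sqrt(p).

from math import isqrt

def curve_order(p, a):
    """#E_a(F_p) counted naively: affine points plus the point at infinity."""
    sq_count = {}                          # value -> number of square roots
    for y in range(p):
        sq_count[y * y % p] = sq_count.get(y * y % p, 0) + 1
    count = 1                              # the point at infinity
    for x in range(p):
        count += sq_count.get((x ** 3 + a) % p, 0)
    return count

p = 13                                     # 13 = 1 (mod 3)
orders = {curve_order(p, a) for a in range(1, p)}
```

The six orders correspond to the six classes of a modulo sixth powers in F_p*, which is why a single simultaneous computation for the whole family makes sense.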
Last updated:  2024-01-19
Compress: Generate Small and Fast Masked Pipelined Circuits
Gaëtan Cassiers, Barbara Gigerl, Stefan Mangard, Charles Momin, and Rishub Nagpal
Masking is an effective countermeasure against side-channel attacks. It replaces every logic gate in a computation by a gadget that performs the operation over secret sharings of the circuit's variables. When masking is implemented in hardware, care should be taken to protect against leakage from glitches, which could otherwise undermine the security of masking. This is generally done by adding registers, which stop the propagation of glitches, but introduce additional latency and area cost. In masked pipeline circuits, a high latency further increases the area overheads of masking, due to the need for additional registers that synchronize signals between pipeline stages. In this work, we propose a technique to minimize the number of such pipeline registers, which relies on optimizing the scheduling of the computations across the pipeline stages. We release an implementation of this technique as an open-source tool, COMPRESS. Further, we introduce other optimizations to deduplicate logic between gadgets, perform an optimal selection of masked gadgets, and introduce new gadgets with smaller area. Overall, our optimizations lead to circuits that improve the state of the art in area and achieve minimal latency. For example, a masked AES based on an S-box generated by COMPRESS reduces latency by 19% and area by 27% over state-of-the-art implementations, or, for the same latency, reduces area by 45%.
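To illustrate the gadget idea the abstract builds on, here is a minimal first-order Boolean masking sketch in software: each secret bit is split into two random shares, linear gates (XOR) act share-wise, and a nonlinear AND gate needs an ISW-style gadget with fresh randomness. The glitch-robust hardware gadgets targeted by COMPRESS add registers on top of this basic structure, which this software toy does not model.

```python
# First-order Boolean masking: a secret bit x is represented by two
# shares (s0, s1) with s0 XOR s1 = x. Linear gates operate share-wise;
# AND needs a 2-share ISW gadget with a fresh random bit.

import secrets

def share(x):
    """Split bit x into two random shares whose XOR equals x."""
    r = secrets.randbits(1)
    return r, x ^ r

def unshare(s):
    """Recombine a sharing into the plain bit."""
    return s[0] ^ s[1]

def masked_xor(a, b):
    """XOR is linear over F_2: apply it share-wise, no randomness needed."""
    return a[0] ^ b[0], a[1] ^ b[1]

def masked_and(a, b):
    """2-share ISW AND: a fresh random bit r refreshes the cross terms so
    neither output share depends on an unmasked secret."""
    r = secrets.randbits(1)
    c0 = (a[0] & b[0]) ^ r ^ (a[0] & b[1])
    c1 = (a[1] & b[1]) ^ (a[1] & b[0]) ^ r
    return c0, c1
```

Correctness follows from expanding (a0 ^ a1)(b0 ^ b1): the four partial products appear once each across c0 and c1, and the random r cancels.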
Last updated:  2024-01-19
FEASE: Fast and Expressive Asymmetric Searchable Encryption
Long Meng, Liqun Chen, Yangguang Tian, Mark Manulis, and Suhui Liu
Asymmetric Searchable Encryption (ASE) is a promising cryptographic mechanism that enables a semi-trusted cloud server to perform keyword searches over encrypted data for users. To be useful, an ASE scheme must support expressive search queries, expressed as conjunctions, disjunctions, or arbitrary Boolean formulas. In this paper, we propose a fast and expressive ASE scheme that is adaptively secure, called FEASE. It requires only 3 pairing operations for searching any conjunctive set of keywords, independent of the set size, and has linear complexity for the encryption and trapdoor algorithms in the number of keywords. FEASE is based on a new fast Anonymous Key-Policy Attribute-Based Encryption (A-KP-ABE) scheme as our first proposal, which is of independent interest. To address optional protection against keyword guessing attacks, we extend FEASE into the first expressive Public-Key Authenticated Encryption with Keyword Search (PAEKS) scheme. We provide implementations and evaluate the performance of all three schemes, while also comparing them with the state of the art. We observe that FEASE outperforms all existing expressive ASE constructions and that our A-KP-ABE scheme offers anonymity with efficiency comparable to the currently fastest yet non-anonymous KP-ABE schemes FAME (ACM CCS 2017) and FABEO (ACM CCS 2022).
Last updated:  2024-01-18
On the Complexity and Admissible Parameters of the Crossbred Algorithm in $\mathbb{F}_{q\geq2}$
João Diogo Duarte
The Joux--Vitse Crossbred algorithm aims to efficiently solve a system of semi-regular multivariate polynomial equations. The authors tested their algorithm for boolean polynomials in $\mathbb{F}_2$ and stated that this algorithm also works for other non-boolean finite fields. In addition, the algorithm is dependent on a set of parameters that control its course. Finding functional parameters for this algorithm is a non-trivial task, so the authors presented a bivariate generating series to test the admissibility of parameters in $\mathbb{F}_2$. However, due to restrictive page limits, the series itself and its derivation are not explained. In this work, the derivation of the bivariate generating series to test the admissibility of parameters in $\mathbb{F}_2$ is explained and, from this, a new series for other non-boolean fields, $\mathbb{F}_{q > 2}$, is presented. In addition, a complexity estimate of the algorithm is given for both $\mathbb{F}_2$ and $\mathbb{F}_{q>2}$. By obtaining optimal parameters using the previous results, the cost of applying Crossbred to polynomial systems of various sizes, numbers of variables and values of $q$ was plotted. Overall, it was determined that the Crossbred algorithm provides an improved complexity over FES (Fast Exhaustive Search) for larger overdetermined systems, but for any overdetermined system, it does not improve the complexity when compared to the state-of-the-art algorithms Hybrid-$F_5$ and FXL.
Last updated:  2024-01-18
Layout Graphs, Random Walks and the t-wise Independence of SPN Block Ciphers
Tianren Liu, Angelos Pelecanos, Stefano Tessaro, and Vinod Vaikuntanathan
We continue the study of $t$-wise independence of substitution-permutation networks (SPNs) initiated by the recent work of Liu, Tessaro, and Vaikuntanathan (CRYPTO 2021). Our key technical result shows that when the S-boxes are randomly and independently chosen and kept secret, an $r$-round SPN with input length $n = b \cdot k$ is $2^{-\Theta(n)}$-close to $t$-wise independent within $r = O(\min\{k, \log t\})$ rounds for any $t$ almost as large as $2^{b/2}$. Here, $b$ is the input length of the S-box and we assume that the underlying mixing achieves maximum branch number. We also analyze the special case of AES parameters (with random S-boxes), and show it is $2^{-128}$-close to pairwise independent in $7$ rounds. Central to our result is the analysis of a random walk on what we call the layout graph, a combinatorial abstraction that captures equality and inequality constraints among multiple SPN evaluations. We use our technical result to show concrete security bounds for SPNs with actual block cipher parameters and small-input $S$-boxes. (This is in contrast to the large body of results on ideal-model analyses of SPNs.) For example, for the censored-AES block cipher, namely AES with most of the mixing layers removed, we show that 192 rounds suffice to attain $2^{-128}$-closeness to pairwise independence. The prior such result for AES (Liu, Tessaro and Vaikuntanathan, CRYPTO 2021) required more than 9000 rounds.
Last updated:  2024-01-18
Multi-Hop Fine-Grained Proxy Re-Encryption
Yunxiao Zhou, Shengli Liu, and Shuai Han
Proxy re-encryption (PRE) allows a proxy to transform a ciphertext intended for Alice (delegator) to another ciphertext intended for Bob (delegatee) without revealing the underlying message. Recently, a new variant of PRE, namely fine-grained PRE (FPRE), was proposed in [Zhou et al., Asiacrypt 2023]. Generally, FPRE is designed for a function family F: each re-encryption key rk_{A→B}^f is associated with a function f ∈ F, and with rk_{A→B}^f, a proxy can transform Alice's ciphertext encrypting m to Bob's ciphertext encrypting f(m). However, their scheme only supports single-hop re-encryption and achieves only CPA security. In this paper, we formalize multi-hop FPRE (mFPRE) that supports multi-hop re-encryptions in the fine-grained setting, and propose two mFPRE schemes achieving CPA security and stronger HRA security (security against honest re-encryption attacks), respectively. -- For multi-hop FPRE, we formally define its syntax and formalize a set of security notions including CPA security, HRA security, unidirectionality and ciphertext unlinkability. HRA security is stronger and more reasonable than CPA security, and ciphertext unlinkability blurs the proxy relations among a chain of multi-hop re-encryptions, hence providing better privacy. We establish the relations between these security notions. -- Our mFPRE schemes support fine-grained re-encryptions for bounded linear functions and have security based on the learning-with-errors (LWE) assumption in the standard model. In particular, one of our schemes is HRA secure and enjoys all the aforementioned desirable securities. To achieve CPA security and HRA security for mFPRE, we extend the framework of [Jafargholi et al., Crypto 2017] and the technique of [Fuchsbauer et al., PKC 2019].
Last updated:  2024-01-18
Exploiting the Central Reduction in Lattice-Based Cryptography
Tolun Tosun, Amir Moradi, and Erkay Savas
This paper presents a novel and efficient way of exploiting side-channel leakage of masked implementations of lattice-based cryptography (LBC). The presented attack specifically targets the central reduction technique, which is widely adopted in efficient implementations of LBC. We show that the central reduction leads to a vulnerability by creating a strong dependency between the power consumption and the sign of sensitive intermediate variables. We exploit this dependency by introducing a novel hypothetical power model, the range power model, which can be employed in higher-order multi-query side-channel analysis attacks. We particularly show that our approach is valid for the prime moduli employed by Kyber and Dilithium, the lattice-based post-quantum algorithms selected by NIST, while it generalizes to other primes used in LBC as well. We practically evaluate our introduced approach by performing second-order non-profiled attacks against a masked implementation of Kyber on an Arm Cortex-M4 microprocessor. In our experiments, we recovered the full secret key of the aforementioned implementation with only 2100 electromagnetic (EM) traces without profiling, achieving a more than 14-fold reduction in the number of traces compared to classical attacks.
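The sign dependency the abstract describes is easy to see in a toy model: central reduction maps a residue into the symmetric range around zero, and in two's-complement hardware a negative value has many high bits set, so its Hamming weight (a common power-leakage proxy) jumps. The sketch below, using Kyber's modulus, is only an illustration of this leakage intuition, not the paper's range power model or attack.

```python
# Central reduction maps x mod Q into [-(Q-1)/2, (Q-1)/2]. In
# two's-complement representation, negative results have high Hamming
# weight, creating the sign-dependent leakage the attack exploits.

Q = 3329                    # Kyber's prime modulus

def central_reduce(x):
    """Map x mod Q into the centered range [-(Q-1)/2, (Q-1)/2]."""
    x %= Q
    return x - Q if x > Q // 2 else x

def hamming_weight(x, bits=16):
    """Hamming weight of the two's-complement encoding, a common
    first-order power-leakage model."""
    return bin(x & ((1 << bits) - 1)).count("1")
```

For example, 3328 centrally reduces to -1, whose 16-bit encoding has all 16 bits set, while +1 has a single bit set: small positive and small negative values that are close modulo Q leak very differently.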
Last updated:  2024-01-18
Improved Rectangle Attacks on SKINNY and CRAFT
Hosein Hadipour, Nasour Bagheri, and Ling Song
The boomerang and rectangle attacks are adaptations of differential cryptanalysis that regard the target cipher $E$ as a composition of two sub-ciphers, i.e., $E = E_{1}\circ E_{0}$, to construct a distinguisher for $E$ with probability $p^{2}q^{2}$ by concatenating two short differential trails for $E_{0}$ and $E_{1}$ with probabilities $p$ and $q$, respectively. According to the previous research, the dependency between these two differential characteristics has a great impact on the probability of boomerang and rectangle distinguishers. Dunkelman et al. proposed the sandwich attack to formalise such dependency, regarding $E$ as three parts, i.e., $E = E_{1}\circ E_{m}\circ E_{0}$, where $E_{m}$ contains the dependency between the two differential trails, satisfying some differential propagation with probability $r$. Accordingly, the entire probability is $p^{2}q^{2}r$. Recently, Song et al. have proposed a general framework to identify the actual boundaries of $E_{m}$ and systematically evaluate the probability of $E_{m}$ with any number of rounds, and applied their method to accurately evaluate the probabilities of the best SKINNY's boomerang distinguishers. In this paper, using a more advanced method to search for boomerang distinguishers, we show that the best previous boomerang distinguishers for SKINNY can be significantly improved in terms of probability and number of rounds. More precisely, we propose related-tweakey boomerang distinguishers for up to 19, 21, 23, and 25 rounds of SKINNY-64-128, SKINNY-128-256, SKINNY-64-192 and SKINNY-128-384 respectively, which improve the previous boomerang distinguishers of these variants of SKINNY by 1, 2, 1, and 1 round respectively. Based on the improved boomerang distinguishers for SKINNY, we provide related-tweakey rectangle attacks on 23 rounds of SKINNY-64-128, 24 rounds of SKINNY-128-256, 29 rounds of SKINNY-64-192, and 30 rounds of SKINNY-128-384.
It is worth noting that our improved related-tweakey rectangle attacks on SKINNY-64-192, SKINNY-128-256, and SKINNY-128-384 can be directly applied to the same number of rounds of ForkSkinny-64-192, ForkSkinny-128-256, and ForkSkinny-128-384, respectively. CRAFT is another SKINNY-like tweakable block cipher, for which we provide the first security analysis against the rectangle attack. As a result, we provide a 14-round boomerang distinguisher for CRAFT in the single-tweak model, based on which we propose a single-tweak rectangle attack on 18 rounds of this cipher. Moreover, following previous research on the evaluation of switching over multiple rounds of boomerang distinguishers, we also introduce new tools called the Double Boomerang Connectivity Table $\tt{DBCT}$, $\tt{LBCT}^{\scriptsize{=}|}$, and $\tt{UBCT}^{\vDash}$ to evaluate the boomerang switch through multiple rounds more accurately.
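As a quick numeric illustration of the sandwich formula, with hypothetical probabilities chosen only for the arithmetic (not values from the paper):

```latex
\[
p = 2^{-6},\quad q = 2^{-7},\quad r = 2^{-3}
\;\Longrightarrow\;
p^{2}q^{2}r \;=\; 2^{-2\cdot 6}\cdot 2^{-2\cdot 7}\cdot 2^{-3} \;=\; 2^{-29},
\]
```

so the distinguisher requires on the order of $2^{29}$ adaptively chosen quartets, which is why even a one-round gain in any of the three sub-ciphers matters.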
Last updated:  2024-01-18
Comprehensive Security Analysis of CRAFT
Hosein Hadipour, Sadegh Sadeghi, Majid M. Niknam, and Nasour Bagheri
CRAFT is a lightweight block cipher, designed to provide efficient protection against differential fault attacks. It is a tweakable cipher with 32 rounds that produces a ciphertext from a 64-bit plaintext using a 128-bit key and a 64-bit public tweak. In this paper, compared to the designers' analysis, we provide a more detailed analysis of CRAFT against differential and zero-correlation cryptanalysis, aiming to provide better distinguishers for the reduced rounds of the cipher. Our distinguishers for reduced-round CRAFT cover a higher number of rounds compared to the designers' analysis. In our analysis, we observed that, for any number of rounds, the differential effect of CRAFT has a much higher probability than any single differential trail. As an example, while the best trail for 11 rounds of the cipher has a probability of at least $2^{-80}$, we present a differential with probability $2^{-49.79}$, containing $2^{29.66}$ optimal trails, all with the same optimum probability of $2^{-80}$. Next, we use a partitioning technique, based on optimal expandable truncated trails, to provide a better estimation of the differential effect of CRAFT. Thanks to this technique, we are able to find differential distinguishers for 9, 10, 11, 12, 13, and 14 rounds of the cipher in the single-tweak model with probabilities of at least $2^{-40.20}$, $ 2^{-45.12} $, $ 2^{-49.79}$, $ 2^{-54.49}$, $ 2^{-59.13}$, and $ 2^{-63.80}$, respectively. These probabilities should be compared with the best distinguishers provided by the designers in the same model for 9 and 10 rounds of the cipher, with probabilities of at least $ 2^{-54.67}$ and $ 2^{-62.61}$, respectively. In addition, we consider the security of CRAFT against the new concept of related-tweak zero-correlation (ZC) linear cryptanalysis and present a new distinguisher which covers 14 rounds of the cipher, while the best previous ZC distinguisher covered 13 rounds.
Thanks to the related-tweak ZC distinguisher for 14 rounds of the cipher, we also present 14-round integral distinguishers in the related-tweak model. Although the provided analysis does not compromise the cipher, we believe it gives better insight into the design of CRAFT.
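The trail-clustering effect reported above can be sanity-checked directly: summing just the optimal trails already gives

```latex
\[
2^{29.66} \times 2^{-80} \;=\; 2^{-50.34},
\]
```

and the remaining suboptimal trails account for the gap up to the reported differential probability of $2^{-49.79}$, more than $2^{30}$ times the single-trail estimate of $2^{-80}$.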
Last updated:  2024-01-18
Forward Security under Leakage Resilience, Revisited
Suvradip Chakraborty, Harish Karthikeyan, Adam O'Neill, and C. Pandu Rangan
As both notions employ the same key-evolution paradigm, Bellare \emph{et al.} (CANS 2017) study combining forward security with leakage resilience. The idea is for forward security to serve as a hedge in case at some point the full key gets exposed from the leakage. In particular, Bellare \emph{et al.} combine forward security with \emph{continual} leakage resilience, dubbed FS+CL. Our first result improves on Bellare \emph{et al.}'s FS+CL secure PKE scheme by building one from any continuous leakage-resilient binary-tree encryption (BTE) scheme; in contrast, Bellare \emph{et al.} require extractable witness encryption. Our construction also preserves the leakage rate of the underlying BTE scheme and hence, in combination with existing CL-secure BTE, yields the first FS+CL secure encryption scheme with optimal leakage rate from standard assumptions. We next explore combining forward security with other notions of leakage resilience. Indeed, as argued by Dziembowski \emph{et al.} (CRYPTO 2011), it is desirable to have a \emph{deterministic} key-update procedure, which FS+CL does not allow for arguably pathological reasons. To address this, we combine forward security with \emph{entropy-bounded} leakage (FS+EBL). We construct FS+EBL non-interactive key exchange (NIKE) with deterministic key update based on indistinguishability obfuscation ($\iO$), and DDH or LWE. To make the public keys constant size, we rely on the Superfluous Padding Assumption (SuPA) of Brzuska and Mittelbach (ePrint 2015) \emph{without} auxiliary information, making it more plausible. SuPA notwithstanding, the scheme is also the first FS-secure NIKE from $\iO$ rather than multilinear maps. We advocate a future research agenda that uses FS+EBL as a hedge for FS+CL, whereby a scheme achieves the latter if key-update randomness is good and the former if not.
Last updated:  2024-01-18
The state diagram of $\chi$
Jan Schoone and Joan Daemen
In symmetric cryptography, block ciphers, stream ciphers and permutations often make use of a round function, and many round functions consist of a linear and a non-linear layer. One that is often used is based on the cellular automaton denoted by $\chi$, a Boolean map on bi-infinite sequences, $\mathbb{F}^{\mathbb{Z}}$. It is defined by $\sigma \mapsto \nu$ where each $\nu_i = \sigma_i + (\sigma_{i+1}+1)\sigma_{i+2}$. The map $\chi_n$ operates on $n$-bit arrays with periodic boundary conditions; this corresponds to $\chi$ restricted to periodic infinite sequences with period dividing $n$. The map $\chi_n$ is used in various permutations, e.g., Keccak-f (the permutation in SHA-3), ASCON (the NIST standard for lightweight cryptography), Xoodoo, Rasta and Subterranean (2.0). In this paper, we characterize the graph of $\chi$ on periodic sequences. It turns out that $\chi$ is surjective on the set of \emph{all} periodic sequences. We show which sequences give collisions after one application of $\chi$. We prove that, for odd $n$, the order of $\chi_n$ (in the group of bijective maps on $\mathbb{F}^n$) is $2^{\lceil \operatorname{lg}(\frac{n+1}2)\rceil}$. A given periodic sequence lies on a cycle in the graph of $\chi$, or it can be represented as a polynomial. By regarding the divisors of such a polynomial, one can see whether it lies on a cycle and, if not, after how many iterations of $\chi$ it will. Furthermore, we can determine, for a given $\sigma$, the length of the cycle in its component of the state diagram. Finally, we extend the surjectivity of $\chi$ to $\mathbb{F}^{\mathbb{Z}}$, thus including non-periodic sequences.
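The map above is small enough to experiment with directly. The following sketch (ours, not the authors' code) implements $\chi_n$ on an $n$-bit circular array and brute-forces its order for small odd $n$:

```python
from itertools import product

def chi(sigma):
    """chi on an n-bit circular array: nu_i = sigma_i + (sigma_{i+1} + 1) * sigma_{i+2} over F_2."""
    n = len(sigma)
    return tuple(sigma[i] ^ ((sigma[(i + 1) % n] ^ 1) & sigma[(i + 2) % n])
                 for i in range(n))

def order_of_chi(n):
    """Brute-force the order of chi_n in the group of bijections of F_2^n (n odd)."""
    states = list(product((0, 1), repeat=n))
    current = {s: chi(s) for s in states}  # chi applied once
    k = 1
    while any(current[s] != s for s in states):
        current = {s: chi(current[s]) for s in states}
        k += 1
    return k
```

For $n=3$ this yields order $2$ and for $n=5$ order $4$, matching the formula $2^{\lceil \operatorname{lg}(\frac{n+1}2)\rceil}$ stated in the abstract.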
Last updated:  2024-01-18
Aegis: A Lightning Fast Privacy-preserving Machine Learning Platform against Malicious Adversaries
Tianpei Lu, Bingsheng Zhang, Lichun Li, and Kui Ren
Privacy-preserving machine learning (PPML) techniques have gained significant popularity in the past years. Such protocols have been widely adopted in many real-world security-sensitive machine learning scenarios, e.g., medical care and finance. In this work, we introduce $\mathsf{Aegis}$~-- a high-performance PPML platform built on top of a maliciously secure 3-PC framework over the ring $\mathbb{Z}_{2^\ell}$. In particular, we propose a novel 2-round secure comparison (a.k.a., sign bit extraction) protocol in the preprocessing model. The communication of its semi-honest version is only 25% of that of the state-of-the-art (SOTA) constant-round semi-honest comparison protocol by Zhou et al. (S&P 2023); both the communication and round complexity of its malicious version are approximately 50% of the SOTA (BLAZE) by Patra and Suresh (NDSS 2020), for $\ell=64$. Moreover, the communication of our maliciously secure inner product protocol is merely $3\ell$ bits, a 50% reduction from the SOTA (Swift) by Koti et al. (USENIX 2021). Finally, the resulting ReLU and MaxPool PPML protocols outperform the SOTA by $4\times$ in the semi-honest setting and $10\times$ in the malicious setting, respectively.
Last updated:  2024-01-18
Quantum State Obfuscation from Classical Oracles
James Bartusek, Zvika Brakerski, and Vinod Vaikuntanathan
A major unresolved question in quantum cryptography is whether it is possible to obfuscate arbitrary quantum computation. Indeed, there is much yet to understand about the feasibility of quantum obfuscation even in the classical oracle model, where one is given for free the ability to obfuscate any classical circuit. In this work, we develop a new array of techniques that we use to construct a quantum state obfuscator, a powerful notion formalized recently by Coladangelo and Gunn (arXiv:2311.07794) in their pursuit of better software copy-protection schemes. Quantum state obfuscation refers to the task of compiling a quantum program, consisting of a quantum circuit $C$ with a classical description and an auxiliary quantum state $\ket{\psi}$, into a functionally-equivalent obfuscated quantum program that hides as much as possible about $C$ and $\ket{\psi}$. We prove the security of our obfuscator when applied to any pseudo-deterministic quantum program, i.e. one that computes a (nearly) deterministic classical input / classical output functionality. Our security proof is with respect to an efficient classical oracle, which may be heuristically instantiated using quantum-secure indistinguishability obfuscation for classical circuits. Our result improves upon the recent work of Bartusek, Kitagawa, Nishimaki and Yamakawa (STOC 2023) who also showed how to obfuscate pseudo-deterministic quantum circuits in the classical oracle model, but only ones with a completely classical description. Furthermore, our result answers a question of Coladangelo and Gunn, who provide a construction of quantum state indistinguishability obfuscation with respect to a quantum oracle, but leave the existence of a concrete real-world candidate as an open problem. Indeed, our quantum state obfuscator together with Coladangelo-Gunn gives the first candidate realization of a ``best-possible'' copy-protection scheme for all polynomial-time functionalities. 
Our techniques deviate significantly from previous works on quantum obfuscation. We develop several novel technical tools which we expect to be broadly useful in quantum cryptography. These tools include a publicly-verifiable, linearly-homomorphic quantum authentication scheme with classically-decodable ZX measurements (which we build from coset states), and a method for compiling any quantum circuit into a "linear + measurement" (LM) quantum program: an alternating sequence of CNOT operations and partial ZX measurements.
Last updated:  2024-01-18
SuperFL: Privacy-Preserving Federated Learning with Efficiency and Robustness
Yulin Zhao, Hualin Zhou, and Zhiguo Wan
Federated Learning (FL) accomplishes collaborative model training without the need to share local training data. However, existing FL aggregation approaches suffer from inefficiency, privacy vulnerabilities, and neglect of poisoning attacks, severely impacting the overall performance and reliability of model training. To address these challenges, we propose SuperFL, an efficient two-server aggregation scheme that is both privacy-preserving and secure against poisoning attacks. The two semi-honest servers $\mathcal{S}_0$ and $\mathcal{S}_1$ collaborate with each other: the shuffle server $\mathcal{S}_0$ is in charge of privacy-preserving random clustering, while the analysis server $\mathcal{S}_1$ is responsible for robustness detection, identifying and filtering malicious model updates. Our scheme employs a novel combination of homomorphic encryption and proxy re-encryption to realize secure server-to-server collaboration. We also utilize a novel sparse matrix projection compression technique to enhance communication efficiency and significantly reduce communication overhead. To resist poisoning attacks, we introduce a dual-filter algorithm based on a trusted root, combining dimensionality reduction and norm calculation to identify malicious model updates. Extensive experiments validate the efficiency and robustness of our scheme. SuperFL achieves impressive compression ratios, ranging from $5\times$ to $40\times$, under different models while maintaining model accuracy comparable to the baseline. Notably, our solution demonstrates a maximal model accuracy decrease of no more than $2\%$ and $6\%$ on the MNIST and CIFAR-10 datasets, respectively, under specific compression ratios and in the presence of malicious clients.
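The sparse-projection idea can be illustrated with a toy sketch. Everything here (the matrix construction, density, and parameters) is hypothetical and only stands in for SuperFL's actual technique: a $d$-dimensional model update is multiplied by a sparse random $\pm 1$ matrix of shape $r \times d$, shrinking it by a factor of $d/r$.

```python
import random

def sparse_projection_matrix(r, d, density=0.1, seed=0):
    """Sparse random sign matrix: each entry is +1/-1 with probability `density`, else 0."""
    rng = random.Random(seed)
    return [[rng.choice((1, -1)) if rng.random() < density else 0
             for _ in range(d)]
            for _ in range(r)]

def compress(update, matrix):
    """Compress a d-dimensional model update to r dimensions (a linear map)."""
    return [sum(m_ij * u_j for m_ij, u_j in zip(row, update)) for row in matrix]
```

Because the map is linear, compressed updates can still be summed server-side before any (approximate) reconstruction, which is what makes such projections compatible with aggregation.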
Last updated:  2024-01-17
Tropical cryptography III: digital signatures
Jiale Chen, Dima Grigoriev, and Vladimir Shpilrain
We use tropical algebras as platforms for a very efficient digital signature protocol. Security relies on computational hardness of factoring one-variable tropical polynomials; this problem is known to be NP-hard. We also offer countermeasures against recent attacks by Panny and by Brown and Monico.
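For readers unfamiliar with the setting: a one-variable tropical (min-plus) polynomial replaces addition by $\min$ and multiplication by $+$. A toy evaluator, purely illustrative and not part of the signature scheme (some works use the max-plus convention instead), looks like:

```python
def tropical_eval(coeffs, x):
    """Evaluate p(x) = min_i (c_i + i*x) in the min-plus semiring.

    coeffs[i] is the coefficient of the degree-i tropical monomial,
    whose "power" x^i contributes i*x.
    """
    return min(c + i * x for i, c in enumerate(coeffs))
```

For example, with coefficients $(3, 1, 0)$ at $x = 2$, the candidate values are $3+0$, $1+2$, and $0+4$, so $p(2) = 3$. Factoring such polynomials, i.e., writing them as tropical products of linear ones, is the NP-hard problem the scheme's security rests on.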
Last updated:  2024-01-17
Formal Security Analysis of the OpenID FAPI 2.0: Accompanying a Standardization Process
Pedram Hosseyni, Ralf Kuesters, and Tim Würtele
In recent years, the number of third-party services that can access highly-sensitive data has increased steadily, e.g., in the financial sector, in eGovernment applications, or in high-assurance identity services. Protocols that enable this access must provide strong security guarantees. A prominent and widely employed protocol for this purpose is the OpenID Foundation's FAPI protocol. The FAPI protocol is already in widespread use, e.g., as part of the UK's Open Banking standards and Brazil's Open Banking Initiative as well as outside of the financial sector, for instance, as part of the Australian government's Consumer Data Rights standards. Based on lessons learned from FAPI 1.0, the OpenID Foundation has developed a completely new protocol, called FAPI 2.0. The specifications of FAPI 2.0 include a concrete set of security goals and attacker models under which the protocol aims to be secure. Following an invitation from the OpenID Foundation's FAPI Working Group (FAPI WG), we have accompanied the standardization process of the FAPI 2.0 protocol by an in-depth formal security analysis. In this paper, we report on our analysis and findings. Our analysis incorporates the first formal model of the FAPI 2.0 protocol and is based on a detailed model of the web infrastructure, the Web Infrastructure Model, originally proposed by Fett, Küsters, and Schmitz. Our analysis has uncovered several types of attacks on the protocol, violating the aforementioned security goals set by the FAPI WG. We subsequently have worked with the FAPI WG to fix the protocol, resulting in several changes to the specifications. After adapting our model to the changed specifications, we have proved the security properties to hold under the strong attacker model defined by the FAPI WG.
Last updated:  2024-01-17
OBSCURE: Versatile Software Obfuscation from a Lightweight Secure Element
Darius Mercadier, Viet Sang Nguyen, Matthieu Rivain, and Aleksei Udovenko
Software obfuscation is a powerful tool to protect the intellectual property or secret keys inside programs. Strong software obfuscation is crucial in the context of untrusted execution environments (e.g., subject to malware infection) or to face potentially malicious users trying to reverse-engineer a sensitive program. Unfortunately, the state-of-the-art of pure software-based obfuscation (including white-box cryptography) is either insecure or infeasible in practice. This work introduces OBSCURE, a versatile framework for practical and cryptographically strong software obfuscation relying on a simple stateless secure element (to be embedded, for example, in a protected hardware chip or a token). Based on the foundational result by Goyal et al. from TCC 2010, our scheme enjoys provable security guarantees, and further focuses on practical aspects, such as efficient execution of the obfuscated programs, while maintaining simplicity of the secure element. In particular, we propose a new rectangular universalization technique, which is also of independent interest. We provide an implementation of OBSCURE taking as input a program source code written in a subset of the C programming language. This ensures usability and a broad range of applications of our framework. We benchmark the obfuscation on simple software programs as well as on cryptographic primitives, hence highlighting the possible use cases of the framework as an alternative to pure software-based white-box implementations.
Last updated:  2024-01-17
Forward Security with Crash Recovery for Secure Logs
Erik-Oliver Blass and Guevara Noubir
Logging is a key mechanism in the security of computer systems. Beyond supporting important forward-security properties, it is critical that logging withstand both failures and intentional tampering, to prevent subtle attacks that leave the system in an inconsistent state with inconclusive evidence. We propose new techniques combining forward security with crash recovery for secure log data storage. As the need to support forward integrity specifically, together with the online nature of logging, precludes conventional coding, we propose and analyze a coding scheme resolving these unique design constraints. Specifically, our coding enables forward integrity, online encoding, and, most importantly, a constant number of operations per encoding. It adds a new log item by XORing it to $k$ cells of a table. If up to a certain threshold of cells is modified by the adversary, or lost due to a crash, we still guarantee recovery of all stored log items. The main advantage of the coding scheme is its efficiency and compatibility with forward integrity. The key contribution of the paper is the use of spectral graph theory techniques to prove that $k$ is constant in the number $n$ of all log items ever stored and small in practice, e.g., $k=5$. Moreover, we prove that to cope with up to $\sqrt{n}$ modified or lost log items, the storage expansion is constant in $n$ and small in practice. For $k=5$, the size of the table is only $12\%$ larger than the simple concatenation of all $n$ items. We propose and evaluate original techniques to scale the computational cost of recovery to several gigabytes of security logs. We instantiate our scheme in an abstract data structure which allows one either to detect adversarial modifications to log items or to treat modifications like data loss in a system crash. The data structure can recover lost log items, thereby effectively reverting adversarial modifications.
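A toy version of the XOR-table encoding conveys the mechanics. The parameters and hash-based cell choice below are illustrative only; the paper's spectral-graph analysis is what governs the real choice of $k$ and table size, and full recovery from many losses is more involved than the single-item case shown here.

```python
import hashlib

def cell_indices(item_id, k, m):
    """k distinct table cells for a log item, derived from a hash (illustrative)."""
    cells, ctr = [], 0
    while len(cells) < k:
        h = hashlib.sha256(f"{item_id}:{ctr}".encode()).digest()
        idx = int.from_bytes(h[:4], "big") % m
        if idx not in cells:
            cells.append(idx)
        ctr += 1
    return cells

def encode(items, k, m):
    """Online encoding: each new item is XORed into k cells -- O(k) work per item."""
    table = [0] * m
    for item_id, data in items.items():
        for c in cell_indices(item_id, k, m):
            table[c] ^= data
    return table

def recover(table, lost_id, known_items, k, m):
    """Recover one lost item: take one of its cells and XOR out all known items in it."""
    c = cell_indices(lost_id, k, m)[0]
    val = table[c]
    for item_id, data in known_items.items():
        if c in cell_indices(item_id, k, m):
            val ^= data
    return val
```

Note the online property: appending an item touches only $k$ cells, so encoding cost is constant per item regardless of how many items the table already holds.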
Last updated:  2024-01-17
PRIDA: PRIvacy-preserving Data Aggregation with multiple data customers
Beyza Bozdemir, Betül Aşkın Özdemir, and Melek Önen
We propose PRIDA, a user-privacy-oriented solution for privacy-preserving data aggregation with multiple data customers. Most existing state-of-the-art approaches place too much emphasis on performance and address no privacy property beyond input privacy. In particular, solutions for data aggregation rarely consider users' control over their own data or their anonymity, e.g., when browsing or when volunteering to participate in a survey; instead, they focus on securing the data customers' rights (which should come second) and on efficiency-oriented aggregation that provides input privacy only. We put user privacy first while keeping efficiency in balance. We show that PRIDA achieves a good level of computational and communication complexity and even outperforms, in timing evaluations, recently published studies (i.e., Bonawitz et al. (CCS’17), Corrigan-Gibbs et al. (NSDI’17), Bell et al. (CCS’20), Addanki et al. (SCN’22)). We employ threshold homomorphic encryption and secure two-party computation to ensure the desired privacy properties, balancing the trade-off between a proper design for users and the desired privacy and efficiency.
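The basic privacy goal of such aggregation can be illustrated with a minimal two-server additive-sharing sketch. This is a generic stand-in, not PRIDA's actual threshold-HE/2PC construction: each user splits its value into two random shares so that neither server alone learns anything, yet the two servers' sums combine to the total.

```python
import random

MOD = 2**32  # share arithmetic modulus (illustrative)

def share(value, rng):
    """Split a value into two additive shares modulo MOD; each share alone is uniform."""
    s0 = rng.randrange(MOD)
    s1 = (value - s0) % MOD
    return s0, s1

def aggregate(values, seed=0):
    """Each server sums the shares it holds; combining both sums yields the total."""
    rng = random.Random(seed)
    shares = [share(v, rng) for v in values]
    server0 = sum(s0 for s0, _ in shares) % MOD
    server1 = sum(s1 for _, s1 in shares) % MOD
    return (server0 + server1) % MOD
```

The randomness cancels on reconstruction, so the result is the exact sum of the inputs while each server's view is independent of any individual user's value.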
Last updated:  2024-01-17
A Comparative Examination of Network and Contract-Based Blockchain Storage Solutions for Decentralized Applications
Lipeng He
Decentralized applications (DApps), which are innovative blockchain-powered software systems designed to serve as the fundamental building blocks for the next generation of Internet services, have witnessed exponential growth in recent years. This paper thoroughly compares and analyzes two blockchain-based decentralized storage networks (DSNs), which are crucial foundations for DApp and blockchain ecosystems. The study examines their respective mechanisms for data persistence, strategies for enforcing data retention, and token economics. In addition to delving into technical details, the suitability of each storage solution for decentralized application development is assessed, taking into consideration network performance, storage costs, and existing use cases. By evaluating these factors, the paper aims to provide insights into the effectiveness of these technologies in supporting the desirable properties of truly decentralized blockchain applications. In conclusion, the findings of this research are discussed and synthesized, offering valuable perspectives on the capabilities of these technologies. It sheds light on their potential to facilitate the development of DApps and provides an understanding of the ongoing trends in blockchain development.
Last updated:  2024-01-17
Too Hot To Be True: Temperature Calibration for Higher Confidence in NN-assisted Side-channel Analysis
Seyedmohammad Nouraniboosjin and Fatemeh Ganji
The past years have witnessed a considerable increase in research efforts put into neural network-assisted profiled side-channel analysis (SCA). Studies have also identified challenges, e.g., closing the gap between metrics for machine learning (ML) classification and side-channel attack evaluation. In fact, in the context of NN-assisted SCA, the NN’s output distribution forms the basis for successful key recovery. In this respect, related work has covered various aspects of integrating neural networks (NNs) into SCA, including applying a diverse set of NN models, model selection and training, hyperparameter tuning, etc. Nevertheless, one well-known fact has been overlooked in the SCA-related literature, namely NNs’ tendency to become “over-confident,” i.e., suffering from an overly high probability of correctness when predicting the correct class (secret key in the sense of SCA). Temperature scaling is among the powerful and effective techniques that have been devised as a remedy for this. Regarding the principles of deep learning, it is known that temperature scaling does not affect the NN’s accuracy; however, its impact on metrics for secret key recovery, mainly guessing entropy, is worth investigating. This paper reintroduces temperature scaling into SCA and demonstrates that key recovery can become more effective through that. Interestingly, temperature scaling can be easily integrated into SCA, and no re-tuning of the network is needed. In doing so, temperature can be seen as a metric to assess the NN’s performance before launching the attack. In this regard, the impact of hyperparameter tuning, network variance, and capacity has been studied. This leads to recommendations on how network miscalibration and overconfidence can be prevented.
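Temperature scaling itself is a one-line change: divide the logits by a scalar $T$ before the softmax. As the abstract notes, this leaves the predicted class (and hence accuracy) unchanged while flattening over-confident probabilities. A minimal sketch:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: probabilities proportional to exp(z_i / T)."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

Dividing by $T > 1$ is a monotone transformation of the logits, so the argmax, and thus classification accuracy, is preserved, while the maximum probability (the "confidence") shrinks toward the uniform distribution.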
Last updated:  2024-01-17
Hints from Hertz: Dynamic Frequency Scaling Side-Channel Analysis of Number Theoretic Transform in Lattice-Based KEMs
Tianrun Yu, Chi Cheng, Zilong Yang, Yingchen Wang, Yanbin Pan, and Jian Weng
Number Theoretic Transform (NTT) has been widely used in accelerating computations in lattice-based cryptography. However, attackers can potentially launch power analysis targeting NTT because it is usually the most time-consuming part of the implementation. This extended time frame provides a natural window of opportunity for attackers. In this paper, we investigate the first CPU frequency leakage (Hertzbleed-like) attacks against NTT in lattice-based KEMs. Our key observation is that different inputs to NTT incur different Hamming weights in its output and intermediate layers. By measuring the CPU frequency during the execution of NTT, we propose a simple yet effective attack idea to find the input to NTT that triggers NTT processing data with significantly low Hamming weight. We further apply our attack idea to real-world applications that are built upon NTT: CPA-secure Kyber without Compression and Decompression functions, and CCA-secure NTTRU. This leads us to extract information or frequency Hints about the secret key. Integrating these Hints into the LWE-estimator framework, we estimate a minimum of $35\%$ security loss caused by the leakage. The frequency and timing measurements on the Reference and AVX2 implementations of NTT in both Kyber and NTTRU align well with our theoretical analysis, confirming the existence of frequency side-channel leakage in NTT. It is important to emphasize that our observation is not limited to a specific implementation but rather the algorithm on which NTT is based. Therefore, our results call for more attention to the analysis of power leakage against NTT in lattice-based cryptography.
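The core observation, that different NTT inputs yield very different Hamming weights in the transform's output, can be reproduced with a textbook cyclic NTT over a toy ring. The parameters here ($n=8$, $q=17$, $\omega=2$) are chosen only for readability; Kyber and NTTRU use larger negacyclic transforms:

```python
def ntt(x, q=17, omega=2):
    """Naive cyclic NTT over Z_q: X_k = sum_j x_j * omega^(j*k) mod q (omega a primitive n-th root)."""
    n = len(x)
    return [sum(x[j] * pow(omega, j * k, q) for j in range(n)) % q
            for k in range(n)]

def hamming_weight(coeffs):
    """Total Hamming weight of a coefficient vector -- a proxy for data-dependent power draw."""
    return sum(bin(c).count("1") for c in coeffs)
```

Inputs such as the all-zero vector keep every value processed by the NTT at zero Hamming weight, whereas typical inputs do not; this data-dependent difference in switching activity is exactly what a frequency (Hertzbleed-like) side channel can expose.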