All papers in 2024 (Page 7 of 723 results)

Last updated:  2024-01-27
Memory Checking Requires Logarithmic Overhead
Elette Boyle, Ilan Komargodski, and Neekon Vafa
We study the complexity of memory checkers with computational security and prove the first general tight lower bound. Memory checkers, first introduced over 30 years ago by Blum, Evans, Gemmell, Kannan, and Naor (FOCS '91, Algorithmica '94), allow a user to store and maintain a large memory on a remote and unreliable server by using small trusted local storage. The user can issue instructions to the server and after every instruction, obtain either the correct value or a failure (but not an incorrect answer) with high probability. The main complexity measure of interest is the size of the local storage and the number of queries the memory checker makes upon every logical instruction. The most efficient known construction has query complexity $O(\log n/\log\log n)$ and local space proportional to a computational security parameter, assuming one-way functions, where $n$ is the logical memory size. Dwork, Naor, Rothblum, and Vaikuntanathan (TCC '09) showed that for a restricted class of ``deterministic and non-adaptive'' memory checkers, this construction is optimal, up to constant factors. However, going beyond the small class of deterministic and non-adaptive constructions has remained a major open problem. In this work, we fully resolve the complexity of memory checkers by showing that any construction with local space $p$ and query complexity $q$ must satisfy $$ p \ge \frac{n}{(\log n)^{O(q)}} \;. $$ This implies, as a special case, that $q\ge \Omega(\log n/\log\log n)$ in any scheme, assuming that $p\le n^{1-\varepsilon}$ for $\varepsilon>0$. The bound applies to any scheme with computational security, completeness $2/3$, and inverse polynomial in $n$ soundness (all of which make our lower bound only stronger). We further extend the lower bound to schemes where the read complexity $q_r$ and write complexity $q_w$ differ. For instance, we show the tight bound that if $q_r=O(1)$ and $p\le n^{1-\varepsilon}$ for $\varepsilon>0$, then $q_w\ge n^{\Omega(1)}$. This is the first lower bound, for any non-trivial class of constructions, showing a read-write query complexity trade-off. Our proof is via a delicate compression argument showing that a ``too good to be true'' memory checker can be used to compress random bits of information. We draw inspiration from tools recently developed for lower bounds for relaxed locally decodable codes. However, our proof itself significantly departs from these works, necessitated by the differences between settings.
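For intuition, the quoted special case follows from the main bound by simple rearrangement: assuming $p \le n^{1-\varepsilon}$,
\[
\frac{n}{(\log n)^{O(q)}} \;\le\; p \;\le\; n^{1-\varepsilon}
\;\Longrightarrow\;
(\log n)^{O(q)} \;\ge\; n^{\varepsilon}
\;\Longrightarrow\;
O(q)\,\log\log n \;\ge\; \varepsilon \log n
\;\Longrightarrow\;
q \;\ge\; \Omega\!\left(\frac{\log n}{\log\log n}\right).
\]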
Last updated:  2024-01-27
SPRITE: Secure and Private Routing in Payment Channel Networks
Gaurav Panwar, Roopa Vishwanathan, George Torres, and Satyajayant Misra
Payment channel networks are a promising solution to the scalability challenge of blockchains and are designed for significantly increased transaction throughput compared to the layer one blockchain. Since payment channel networks are essentially decentralized peer-to-peer networks, routing transactions is a fundamental challenge. Payment channel networks have some unique security and privacy requirements that make pathfinding challenging, for instance, network topology is not publicly known, and sender/receiver privacy should be preserved, in addition to providing atomicity guarantees for payments. In this paper, we present an efficient privacy-preserving routing protocol, SPRITE, for payment channel networks that supports concurrent transactions. By finding paths offline and processing transactions online, SPRITE can process transactions in just two rounds, which is more efficient compared to prior work. We evaluate SPRITE’s performance using Lightning Network data and prove its security using the Universal Composability framework. In contrast to the current cutting-edge methods that achieve rapid transactions, our approach significantly reduces the message complexity of the system by 3 orders of magnitude while maintaining similar latencies.
Last updated:  2024-01-27
An acceleration of the AKS prime identification algorithm
Stephen Meredith Williams
In its standard form, the AKS prime identification algorithm is deterministic and polynomial time but too slow to be of practical use. By dropping its deterministic attribute, it can be accelerated to an extent that makes it practically useful, though still much slower than the widely used Miller-Rabin-Selfridge-Monier (MRSM) algorithm based on Fermat's Little Theorem or the Solovay-Strassen algorithm based on the Euler Criterion. The change made, in the last stage of AKS, is to check a modular equation not for a long sequence of values but for a single one. Another change is to reduce, arbitrarily, the size of the parameter $r$ giving the degree of the polynomial used for the last stage.
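For concreteness, here is a minimal plaintext sketch of that modified last stage: instead of verifying $(X+a)^n \equiv X^n + a \pmod{X^r-1,\, n}$ for a long sequence of values $a$, only a single $a$ is checked. The choice of $r$ and the surrounding AKS preprocessing (perfect-power and small-divisor tests) are omitted, and the parameter values below are illustrative only.

```python
def polymul(f, g, r, n):
    """Multiply two polynomials modulo (X^r - 1, n)."""
    out = [0] * r
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g):
                out[(i + j) % r] = (out[(i + j) % r] + fi * gj) % n
    return out

def polypow_x_plus_a(a, n, r):
    """Compute (X + a)^n modulo (X^r - 1, n) by square-and-multiply."""
    result = [1] + [0] * (r - 1)          # the constant polynomial 1
    base = [a % n, 1] + [0] * (r - 2)     # the polynomial X + a
    e = n
    while e:
        if e & 1:
            result = polymul(result, base, r, n)
        base = polymul(base, base, r, n)
        e >>= 1
    return result

def aks_single_check(n, r, a=1):
    """Accelerated last stage: test (X+a)^n == X^n + a mod (X^r - 1, n) for one a."""
    lhs = polypow_x_plus_a(a, n, r)
    rhs = [0] * r
    rhs[n % r] = 1                        # X^n reduces to X^(n mod r)
    rhs[0] = (rhs[0] + a) % n
    return lhs == rhs

# Illustrative use (r chosen small here purely for demonstration):
print(aks_single_check(10007, r=31))   # True: primes always pass, by Fermat
print(aks_single_check(10001, r=31))   # expected False: 10001 = 73 * 137
```

A prime $n$ always passes since $(X+a)^n \equiv X^n + a^n \equiv X^n + a \pmod{n}$; a composite can occasionally pass for a single $a$, which is exactly why the accelerated variant is probabilistic rather than deterministic.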
Last updated:  2024-01-29
K-Waay: Fast and Deniable Post-Quantum X3DH without Ring Signatures
Daniel Collins, Loïs Huguenin-Dumittan, Ngoc Khanh Nguyen, Nicolas Rolin, and Serge Vaudenay
The Signal protocol and its X3DH key exchange core are regularly used by billions of people in applications like WhatsApp but are unfortunately not quantum-secure. Thus, designing an efficient and post-quantum secure X3DH alternative is paramount. Notably, X3DH supports asynchronicity, as parties can immediately derive keys after uploading them to a central server, and deniability, allowing parties to plausibly deny having completed key exchange. To satisfy these constraints, existing post-quantum X3DH proposals use ring signatures (or equivalently a form of designated-verifier signatures) to provide authentication without compromising deniability as regular signatures would. Existing ring signature schemes, however, have some drawbacks. Notably, they are not generally proven secure in the quantum random oracle model (QROM) and so the quantum security of parameters that are proposed is unclear and likely weaker than claimed. In addition, they are generally slower than standard primitives like KEMs. In this work, we propose an efficient, deniable and post-quantum X3DH-like protocol, called K-Waay, that does not rely on ring signatures. At its core, K-Waay uses a split-KEM, a primitive introduced by Brendel et al. [SAC 2020], to provide Diffie-Hellman-like implicit authentication and secrecy guarantees. Along the way, we revisit the formalism of Brendel et al. and identify that additional security properties are required to prove a split-KEM-based protocol secure. We instantiate the split-KEM by building a protocol based on the Frodo key exchange, relying on the plain LWE assumption; our proofs might be of independent interest, as we show that it satisfies our novel unforgeability and deniability security notions. Finally, we complement our theoretical results by thoroughly benchmarking both K-Waay and existing X3DH protocols. Our results show that, even when using plain LWE and a conservative choice of parameters, K-Waay is significantly faster than previous work.
Last updated:  2024-01-27
R3PO: Reach-Restricted Reactive Program Obfuscation and its Application to MA-ABE
Kaartik Bhushan, Sai Lakshmi Bhavana Obbattu, Manoj Prabhakaran, and Rajeev Raghunath
In recent breakthrough results, novel use of garbled circuits yielded constructions for several primitives like Identity-Based Encryption (IBE) and 2-round secure multi-party computation, based on standard assumptions in public-key cryptography. While the techniques in these different results have many common elements, these works did not offer a modular abstraction that could be used across them. Our main contribution is to introduce a novel notion of obfuscation, called Reach-Restricted Reactive Program Obfuscation (R3PO) that captures the essence of these constructions, and exposes additional capabilities. We provide a powerful composition theorem whose proof fully encapsulates the use of garbled circuits in these works. As an illustration of the potential of R3PO, and as an important contribution of independent interest, we present a variant of Multi-Authority Attribute-Based Encryption (MA-ABE) that can be based on (single-authority) CP-ABE in a blackbox manner, using only standard cryptographic assumptions (e.g., DDH). This is in stark contrast to the existing constructions for MA-ABE, which rely on the random oracle model and/or support only limited policy classes.
Last updated:  2024-01-26
Data Privacy Made Easy: Enhancing Applications with Homomorphic Encryption
Charles Gouert and Nektarios Georgios Tsoutsos
Homomorphic encryption is a powerful privacy-preserving technology that is notoriously difficult to configure and use, even for experts. The key difficulties include restrictive programming models of homomorphic schemes and choosing suitable parameters for an application. In this tutorial, we outline methodologies to solve these issues and allow for conversion of any application to the encrypted domain using both leveled and fully homomorphic encryption. The first approach, called Walrus, is suitable for arithmetic-intensive applications with limited depth and applications with high throughput requirements. Walrus provides an intuitive programming interface and handles parameterization automatically by analyzing the application and gathering statistics such as homomorphic noise growth to derive a parameter set tuned specifically for the application. We provide an in-depth example of this approach in the form of a neural network inference as well as guidelines for using Walrus effectively. Conversely, the second approach (HELM) takes existing HDL designs and converts them to the encrypted domain for secure outsourcing on powerful cloud servers. Unlike Walrus, HELM supports FHE backends and is well-suited for complex applications. At a high level, HELM consumes netlists and is capable of performing logic gate operations homomorphically on encryptions of individual bits. HELM incorporates both CPU and GPU acceleration by taking advantage of the inherent parallelism provided by Boolean circuits. As a case study, we walk through the process of taking an off-the-shelf HDL design in the form of AES-128 decryption and running it in the encrypted domain with HELM.
Last updated:  2024-01-26
Breaking HWQCS: a code-based signature scheme from high weight QC-LDPC codes
Alex Pellegrini and Giovanni Tognolini
We analyse HWQCS, a code-based signature scheme presented at ICISC 2023, which uses quasi-cyclic low-density parity-check (QC-LDPC) codes. The scheme introduces high Hamming weight errors and signs each message using a fresh ephemeral secret key rather than using only one secret key, so as to avoid known attacks on QC-LDPC signature schemes. In this paper, we show that the signatures of HWQCS leak substantial information about the ephemeral keys and formally describe this behaviour. Furthermore, we show that, for each security level, we can exploit the leakage to efficiently reconstruct partial secret data from very few signatures and finally mount a universal forgery attack.
Last updated:  2024-02-17
On the practical CPAD security of “exact” and threshold FHE schemes and libraries
Marina Checri, Renaud Sirdey, Aymen Boudguiga, and Jean-Paul Bultel
In their 2021 seminal paper, Li and Micciancio presented a passive attack against the CKKS approximate FHE scheme and introduced the notion of CPAD security. The current status quo is that this line of attacks does not apply to ``exact'' FHE. In this paper, we challenge this status quo by exhibiting a CPAD key recovery attack on the linearly homomorphic Regev cryptosystem which easily generalizes to other xHE schemes such as BFV, BGV and TFHE, showing that these cryptosystems are not CPAD secure in their basic form. We also show that existing threshold variants of BFV, BGV and CKKS are particularly exposed to CPAD attackers and would be CPAD-insecure without smudging noise addition after partial decryption. Finally, we successfully implement our attack against several mainstream FHE libraries and discuss a number of natural countermeasures as well as their consequences in terms of FHE practice, security and efficiency. The attack itself is quite practical as it typically takes less than an hour on an average laptop PC, requiring a few thousand ciphertexts as well as up to around a million evaluations/decryptions, to perform a full key recovery.
Last updated:  2024-03-27
Accelerating BGV Bootstrapping for Large $p$ Using Null Polynomials Over $\mathbb{Z}_{p^e}$
Shihe Ma, Tairong Huang, Anyu Wang, and Xiaoyun Wang
The BGV scheme is one of the most popular FHE schemes for computing homomorphic integer arithmetic. The bootstrapping technique of BGV is necessary to evaluate arbitrarily deep circuits homomorphically. However, the BGV bootstrapping performs poorly for large plaintext prime $p$ due to its digit removal procedure exhibiting a computational complexity of at least $O(\sqrt{p})$. In this paper, we propose optimizations for the digit removal procedure with large $p$ by leveraging the properties of null polynomials over the ring $\mathbb{Z}_{p^e}$. Specifically, we demonstrate that it is possible to construct low-degree null polynomials based on two observations of the input to the digit removal procedure: 1) the support size of the input can be upper-bounded by $(2B+1)^2$; 2) the size of the lower digits to be removed can be upper-bounded by $B$. Here $B$ can be controlled within a narrow interval $[22,23]$ in our parameter selection, making the degree of these null polynomials much smaller than $p$ for large values of $p$. These low-degree null polynomials can significantly reduce the polynomial degrees during homomorphic digit removal, thereby decreasing both running time and capacity consumption. Theoretically, our optimizations reduce the computational cost of extracting a single digit from $O(\sqrt{pe})$ (by Chen and Han) or $O(\sqrt{p}\sqrt[4]{e})$ (by Geelen et al.) to $\min(2B+1,\sqrt{\lceil e/t\rceil(2B+1)})$ for some $t\ge 1$. We implement and benchmark our method on HElib with $p=17,127,257,8191$ and $65537$. With our optimized digit removal, we achieve a bootstrapping throughput $1.38\sim151$ times that in HElib, with the speedup increasing with the value of $p$. For $p=65537$, we accelerate the digit removal step by 80 times and reduce the bootstrapping time from more than 12 hours to less than 14 minutes.
Last updated:  2024-01-26
Mask Conversions for d+1 shares in Hardware, with Application to Lattice-based PQC
Quinten Norga, Jan-Pieter D'Anvers, Suparna Kundu, and Ingrid Verbauwhede
The conversion between arithmetic and Boolean mask representations (A2B & B2A) is a crucial component for side-channel resistant implementations of lattice-based cryptography. In this paper, we present a first- and high-order masked, unified hardware implementation which can perform both A2B & B2A conversions. We optimize the operation on several layers of abstraction, applicable to any protection order. First, we propose novel higher-order algorithms for the secure addition and B2A operation. This is achieved through, among others, an improved method for repeated masked modular reduction and through the X2B operation, which can be viewed as a conversion from any type of additive masking to its Boolean representation. This allows for the removal of a full secure addition during B2A post-processing. Compared to prior work, our $B2A_q$ requires 51/46/45% less fresh randomness at first through third protection order when implemented in software or hardware. Second, on the circuit level, we successfully introduce half-cycle data paths and demonstrate how careful, manual masking is a superior approach for masking highly non-linear operations and providing first- and high-order security. Our techniques significantly reduce the high latency and fresh randomness overhead, typically introduced by glitch-resistant masking schemes and universally composable gadgets, including HPC3 by Knichel et al. presented at CCS 2022. Compared to state-of-the-art algorithms and masking techniques, our unified and high-throughput hardware implementation requires up to 89/84/86% fewer clock cycles and 78/71/55% fewer fresh random bits. We show detailed performance results for first-, second- and third-order protected implementations on FPGA. Our proposed algorithms are proven secure in the glitch extended probing model and their implementations are validated via practical lab analysis using the TVLA methodology. We experimentally show that both our first- and second-order masked implementations are hardened against univariate and multivariate attacks using 100 million traces, for each mode of operation.
Last updated:  2024-01-26
Improved Linear Key Recovery Attacks on PRESENT
Wenhui Wu, Muzhou Li, and Meiqin Wang
PRESENT is an ultra-lightweight block cipher designed by Bogdanov et al., and has been widely studied since its proposal. It supports 80-bit and 128-bit keys, which are referred to as PRESENT-80 and PRESENT-128, respectively. Up to now, linear cryptanalysis is the most effective method for attacking this cipher, especially when accelerated with the pruned Walsh transform. Combining the pruned Walsh transform with multiple linear attacks, one can recover the right key for 28-round PRESENT-80 and -128. Later, this method was further improved with the affine pruned Walsh transform by adding more zeros in the Walsh spectrum through rejecting some data. This leads to the 29-round attack on PRESENT-128 with the full codebook. In this paper, we follow the affine pruned Walsh transform accelerated linear method and propose 29-round attacks on both PRESENT-80 and PRESENT-128 without using the full codebook. Both attacks rely on a statistical model depicting distributions of the experimental correlation when some data are artificially rejected in its computation. Besides, detailed analysis of the complexity reduction for each linear hull used in attacking PRESENT is also provided and supported by an automatic tool. Our 29-round attack on PRESENT-80 mainly benefits from this tool. To the best of our knowledge, both attacks are the best ones on PRESENT so far.
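The Walsh transform at the heart of such key-recovery attacks is the standard fast Walsh-Hadamard transform; a minimal reference implementation is sketched below (the pruning and affine-pruning refinements discussed above are the cited papers' contributions and are not reproduced here).

```python
def fwht(a):
    """In-place fast Walsh-Hadamard transform of a list whose length is a power of two.

    In linear key-recovery attacks, `a` holds (folded) counter values and the
    transform evaluates all experimental correlations at once in O(N log N).
    """
    n = len(a)
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

# toy usage: transform of a 3-bit counter table
print(fwht([4, 2, 0, 2, 1, 3, 3, 1]))
```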
Last updated:  2024-01-25
pqm4: Benchmarking NIST Additional Post-Quantum Signature Schemes on Microcontrollers
Matthias J. Kannwischer, Markus Krausz, Richard Petri, and Shang-Yi Yang
In July 2022, the US National Institute of Standards and Technology (NIST) announced the first set of Post-Quantum Cryptography standards: Kyber, Dilithium, Falcon, and SPHINCS+. Shortly after, NIST published a call for proposals for additional post-quantum signature schemes to complement their initial portfolio. In 2023, 50 submissions were received, and 40 were accepted as round-1 candidates for future standardization. In this paper, we study the suitability and performance of said candidates on the popular Arm Cortex-M4 microcontroller. We integrate the suitable implementations into the benchmarking framework pqm4 and provide benchmarking results on the STM32L4R5ZI featuring 640 KB of RAM. pqm4 currently includes reference implementations for 15 submissions and M4-optimized implementations for five submissions. For the remaining candidates, we describe the reasons that hinder integration - the predominant reason being large key size or excessive memory consumption. While the performance of reference implementations is rather meaningless and often does not correlate with the performance of well-optimized implementations, this work provides some first indication of which schemes are most promising on microcontrollers. The publicly available implementations in pqm4 also provide a good starting point for future optimization efforts. Initially, we were hoping for a much higher code quality than for initial submissions to NIST's previous PQC project. However, we got grossly disappointed: Half of the submissions make use of dynamic memory allocations, often completely without reason; Many implementations have compiler warnings, sometimes hinting at more serious issues; Many implementations do not pass simple sanitizer tests such as using valgrind; Multiple implementations make use of static memory.
Last updated:  2024-01-25
A Novel Power Analysis Attack against CRYSTALS-Dilithium Implementation
Yong Liu, Yuejun Liu, Yongbin Zhou, Yiwen Gao, Zehua Qiao, and Huaxin Wang
Post-Quantum Cryptography (PQC) was proposed due to the potential threat of quantum computer attacks against conventional public-key cryptosystems, and four PQC algorithms, among them CRYSTALS-Dilithium (Dilithium for short), have so far been selected for NIST standardization. However, the selected algorithms are still vulnerable to side-channel attacks in practice, and their physical security needs to be further evaluated. This study introduces two efficient power analysis attacks, the optimized fast two-stage approach and the single-bit approach, aimed at reducing the key guess space in NTT polynomial multiplication on an STM32F405 device (ARM Cortex-M4 core). Our findings reveal that the optimized approach outperforms the conservative approach and the fast two-stage approach proposed at ICCD 2021 by factors of 519 and 88, respectively. Similarly, the single-bit approach demonstrates speedups of 365 and 62 times compared to these two approaches, respectively.
Last updated:  2024-01-25
Cryptanalysis of the SNOVA signature scheme
Peigen Li and Jintai Ding
SNOVA is a variant of a UOV-type signature scheme over a noncommutative ring. In this article, we demonstrate that certain parameters provided by the authors of SNOVA fail to meet the claimed NIST security levels, and that the attack complexities are lower than those claimed by SNOVA.
Last updated:  2024-01-26
Simpler and Faster BFV Bootstrapping for Arbitrary Plaintext Modulus from CKKS
Jaehyung Kim, Jinyeong Seo, and Yongsoo Song
Bootstrapping is a key operation in fully homomorphic encryption schemes that enables the evaluation of arbitrary multiplicative depth circuits. In the BFV scheme, bootstrapping corresponds to reducing the size of accumulated noise in lower bits while preserving the plaintext in the upper bits. The previous instantiation of BFV bootstrapping is achieved through the digit extraction procedure. However, its performance is highly dependent on the plaintext modulus, so only a limited form of the plaintext modulus, a power of a small prime number, was used for the efficiency of bootstrapping. In this paper, we present a novel approach to instantiate BFV bootstrapping, distinct from the previous digit extraction-based method. The core idea of our bootstrapping is to utilize CKKS bootstrapping as a subroutine, so the performance of our method mainly depends on the underlying CKKS bootstrapping rather than the plaintext modulus. We implement our method at a proof-of-concept level to provide concrete benchmark results. When performing the bootstrapping operation for a 51-bit plaintext modulus, our method improves upon the previous digit extraction-based method by a factor of 37.9 in latency and 29.4 in throughput. Additionally, we achieve viable bootstrapping performance for large plaintext moduli, such as 144-bit and 234-bit moduli, which has never been measured before.
Last updated:  2024-01-24
Some Improvements for the PIOP for ZeroCheck
Angus Gruen
Most multivariate proof systems require, at some point, an algebraic check against the rows of the trace. One popular protocol for this is known as zerocheck, a sumcheck-based protocol which proves that a constraint function is zero over the $n$-dimensional Boolean hypercube. One of the drawbacks of running zerocheck over a small field is that it usually involves a large number of evaluations of the constraint polynomial over a cryptographically large extension field $\mathbb{G}$. We describe several improvements to the zerocheck protocol over a small field $\mathbb{F}$ which both reduce the number of constraint evaluations the prover needs to perform and shift some computation from the extension field back into the base field $\mathbb{F}$. Specifically, for a table of length $2^n$, integer parameter $1\leq k \leq n$ and constraint function $C$ of degree $d$ with evaluation costs $C_{\mathbb{F}}, C_{\mathbb{G}}$, we show the protocol can be performed with prover cost roughly \[ 2^n\left(1 + \frac{C_{\mathbb{G}}}{2^k C_{\mathbb{F}}}\right)(d - 1)C_{\mathbb{F}}. \] To check the proof, the verifier needs to perform a single interpolation of degree $2^kd$ and $n$ interpolations of degree $d$. Thus varying $k$ allows for a tradeoff between prover and verifier cost.
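To make the tradeoff concrete, here is an illustrative instantiation (the ratio $C_{\mathbb{G}} = 100\,C_{\mathbb{F}}$ and the choices $k = 7$, $d = 3$ are hypothetical values, not parameters from the paper):
\[
2^n\left(1 + \frac{C_{\mathbb{G}}}{2^k C_{\mathbb{F}}}\right)(d-1)C_{\mathbb{F}}
= 2^n\left(1 + \frac{100}{128}\right)\cdot 2\,C_{\mathbb{F}}
\approx 3.6 \cdot 2^n\, C_{\mathbb{F}},
\]
at the price of one verifier interpolation of degree $2^k d = 384$ (plus the $n$ degree-$d$ ones). Increasing $k$ pushes the prover cost toward $2 \cdot 2^n C_{\mathbb{F}}$ while growing that single large interpolation.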
Last updated:  2024-01-24
ELEKTRA: Efficient Lightweight multi-dEvice Key TRAnsparency
Julia Len, Melissa Chase, Esha Ghosh, Daniel Jost, Balachandar Kesavan, and Antonio Marcedone
Key Transparency (KT) systems enable service providers of end-to-end encrypted communication (E2EE) platforms to maintain a Verifiable Key Directory (VKD) that maps each user's identifier, such as a username or email address, to their identity public key(s). Users periodically monitor the directory to ensure their own identifier maps to the correct keys, thus detecting any attempt to register a fake key on their behalf to Meddler-in-the-Middle (MitM) their communications. We introduce and formalize a new primitive called Multi-Device Verifiable Key Directory (MVKD), which strengthens the security, privacy, and usability guarantees of VKD by leveraging the multi-device setting. We formalize three properties for a MVKD (completeness, extraction-based soundness, and privacy), striking a non-trivial balance between strong guarantees and the limitations imposed by a truly practical system. We then present a new MVKD system called ELEKTRA. This system combines the core of the Keybase KT system (running in production since 2014) with ideas from SEEMless (Chase et al., 2019) and RZKS (Chen et al., 2022). Our construction is the first to achieve the above multi-device guarantees while having formal security and privacy proofs. Finally, we implement ELEKTRA and present benchmarks demonstrating its practicality.
Last updated:  2024-01-24
A Trust-based Recommender System over Arbitrarily Partitioned Data with Privacy
Ibrahim Yakut and Huseyin Polat
Recommender systems are effective mechanisms for recommendations about what to watch, read, or taste based on user ratings about experienced products or services. To achieve higher quality recommendations, e-commerce parties may prefer to collaborate over partitioned data. Due to privacy issues, they might hesitate to work in pairs and some solutions motivate them to collaborate. This study examines how to estimate trust-based predictions on arbitrarily partitioned data in which two parties have ratings for similar sets of customers and items. A privacy-preserving scheme is proposed, and it is justified that it efficiently offers trust-based predictions on partitioned data while preserving privacy.
Last updated:  2024-01-24
Differential cryptanalysis with SAT, SMT, MILP, and CP: a detailed comparison for bit-oriented primitives
Emanuele Bellini, Alessandro De Piccoli, Mattia Formenti, David Gerault, Paul Huynh, Simone Pelizzola, Sergio Polese, and Andrea Visconti
SAT, SMT, MILP, and CP have become prominent in the differential cryptanalysis of cryptographic primitives. In this paper, we review the techniques for constructing differential characteristic search models in these four formalisms. Additionally, we perform a systematic comparison encompassing over 20 cryptographic primitives and 16 solvers, on both easy and hard instances of optimisation, enumeration, and differential probability estimation problems.
Last updated:  2024-01-23
AnonPSI: An Anonymity Assessment Framework for PSI
Bo Jiang, Jian Du, and Qiang Yan
Private Set Intersection (PSI) is a widely used protocol that enables two parties to securely compute a function over the intersected part of their shared datasets and has been a significant research focus over the years. However, recent studies have highlighted its vulnerability to Set Membership Inference Attacks (SMIA), where an adversary might deduce an individual's membership by invoking multiple PSI protocols. This presents a considerable risk, even in the most stringent versions of PSI, which only return the cardinality of the intersection. This paper explores the evaluation of anonymity within the PSI context. Initially, we highlight the reasons why existing works fall short in measuring privacy leakage, and subsequently propose two attack strategies that address these deficiencies. Furthermore, we provide theoretical guarantees on the performance of our proposed methods. In addition to these, we illustrate how the integration of auxiliary information, such as the sum of payloads associated with members of the intersection (PSI-SUM), can enhance attack efficiency. We conducted a comprehensive performance evaluation of the proposed attack strategies using two real datasets. Our findings indicate that our methods markedly enhance attack efficiency compared with previous research. The effectiveness of these attacks implies that relying solely on existing PSI protocols may not provide an adequate level of privacy assurance. It is recommended to combine privacy-enhancing technologies synergistically to strengthen privacy protection even further.
Last updated:  2024-02-06
ChaCha related 64 bit oriented ARX cipher
Daniel Nager
A cipher scheme related to ChaCha [Ber], with the variation of using 64-bit operations instead of 32-bit ones and the same 512-bit state size, is presented. We provide strong argumentation to assert that the same security as ChaCha can be obtained with a smaller number of instructions, using 24 rounds instead of ChaCha's 20. Also, a strategy to implement this cipher on SIMD extensions is presented, with a maximal throughput of about 4 bytes per cycle on a 256-bit SIMD extension with at least 11 vector registers.
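As a sketch of what an ARX quarter-round looks like when lifted from 32-bit to 64-bit words (note that with a 512-bit state, the state becomes 8 words of 64 bits instead of 16 words of 32 bits): the rotation distances of the proposed cipher are not given in the abstract, so the ones below simply double ChaCha's 16/12/8/7 and are placeholders.

```python
MASK64 = (1 << 64) - 1

def rotl64(x, r):
    """Rotate a 64-bit word left by r bits."""
    return ((x << r) | (x >> (64 - r))) & MASK64

def quarter_round(state, a, b, c, d, rots=(32, 24, 16, 14)):
    """ChaCha-style ARX quarter-round on 64-bit words.

    Identical add/xor/rotate structure to ChaCha's 32-bit quarter-round;
    the rotation distances `rots` are illustrative placeholders, not the
    constants of the proposed cipher.
    """
    r1, r2, r3, r4 = rots
    state[a] = (state[a] + state[b]) & MASK64
    state[d] = rotl64(state[d] ^ state[a], r1)
    state[c] = (state[c] + state[d]) & MASK64
    state[b] = rotl64(state[b] ^ state[c], r2)
    state[a] = (state[a] + state[b]) & MASK64
    state[d] = rotl64(state[d] ^ state[a], r3)
    state[c] = (state[c] + state[d]) & MASK64
    state[b] = rotl64(state[b] ^ state[c], r4)

# a 512-bit state as 8 x 64-bit words
state = list(range(8))
quarter_round(state, 0, 2, 4, 6)
print(state)
```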
Last updated:  2024-01-23
Laconic Branching Programs from the Diffie-Hellman Assumption
Sanjam Garg, Mohammad Hajiabadi, Peihan Miao, and Alice Murphy
Laconic cryptography enables secure two-party computation (2PC) on unbalanced inputs with asymptotically-optimal communication in just two rounds of communication. In particular, the receiver (who sends the first-round message) holds a long input and the sender (who sends the second-round message) holds a short input, and the size of their communication to securely compute a function on their joint inputs only grows with the size of the sender's input and is independent of the receiver's input size. The work on laconic oblivious transfer (OT) [Cho et al. CRYPTO 2017] and laconic private set intersection (PSI) [Alamati et al. TCC 2021] shows how to achieve secure laconic computation for OT and PSI from the Diffie-Hellman assumption. In this work, we push the limits further and achieve laconic branching programs from the Diffie-Hellman assumption. In particular, the receiver holds a large branching program $P$ and the sender holds a short input $x$. We present a two-round 2PC protocol that allows the receiver to learn $x$ iff $P(x) =1$, and nothing else. The communication only grows with the size of $x$ and the depth of $P$, and does not further depend on the size of $P$.
Last updated:  2024-01-29
Unconditional Security using (Random) Anonymous Bulletin Board
Albert Yu, Hai H. Nguyen, Aniket Kate, and Hemanta K. Maji
In a seminal work, Ishai et al. (FOCS 2006) studied the viability of designing unconditionally secure protocols for key agreement and secure multi-party computation (MPC) using an anonymous bulletin board (ABB) as a building block. While their results establish the feasibility of key agreement and honest-majority MPC in the ABB model, the optimality of protocols with respect to their round and communication complexity is not studied. This paper enriches this study of unconditional security in the ABB model in multiple ways. - We present a key agreement protocol with a novel combinatorial insight to offer a 200% throughput over the (FOCS 2006) study; i.e., using the same number of messages, we can (almost) double the bit-length of the agreed key. We also prove the near optimality of our approach. - We offer unconditionally secure protocols for the (random) string oblivious transfer functionalities. We present a $1$-round chosen message random string oblivious transfer and show how to extend it to a non-interactive (random) string oblivious transfer protocol and a $2$-round chosen message string oblivious transfer. - We prove a $1$-round lower bound for BEC under certain conditions. Central to our technical contributions is the abstraction of a distributional variant of the random ABB functionality. Investigating the concrete efficiency of founding MPC from this primitive leads to fascinating new mathematical challenges in well-established MPC models, which will be of broader interest to the community.
Last updated:  2024-04-30
FiveEyes: Cryptographic Biometric Authentication from the Iris
Luke Demarest, Sohaib Ahmad, Sixia Chen, Benjamin Fuller, and Alexander Russell
Despite decades of effort, a stubborn chasm exists between the theory and practice of device-level biometric authentication. Deployed authentication algorithms rely on data that overtly leaks private information about the biometric; thus systems rely on externalized security measures such as trusted execution environments. The authentication algorithms have no cryptographic guarantees. This is particularly frustrating given the long line of research that has developed theoretical tools—known as fuzzy extractors—that enable secure, privacy preserving biometric authentication with public enrollment data (Dodis et al., SIAM Journal of Computing 2008). Unfortunately, the best known constructions either: 1. Assume that bits of biometrics are i.i.d. (or that all correlation is captured in pairs of features (Hine et al., TIFS 2023)), which is not true for the biometrics themselves or for features extracted using modern learning techniques, or 2. Only provide substantial true accept rates with an estimated security of $32$ bits for the iris (Simhadri et al., ISC 2019) and $45$ bits for the face (Zhang, Cui, and Yu, ePrint 2021/1559). This work introduces FiveEyes, an iris key derivation system powered by technical advances in both 1) feature extraction from the iris and 2) the fuzzy extractor used to secure authentication keys. FiveEyes’ feature extractor’s loss focuses on quality for key derivation. The fuzzy extractor builds on sample-then-lock (Canetti et al., Journal of Cryptology 2021). FiveEyes’ fuzzy extractor uses statistics of the produced features to sample non-uniformly, which significantly improves the security vs. true accept rate (TAR) tradeoff. Irises used to evaluate TAR and security are class disjoint from those used for training and collecting statistics. We state assumptions sufficient for security. We present various parameter regimes to highlight different TARs: 1. $65$ bits of security (equivalent to $87$ bits with a password) at $12$% TAR, and 2. $50$ bits of security (equivalent to $72$ bits with a password) at $45$% TAR. Applying known TAR (Davida et al., IEEE S&P 1998) amplification techniques additively boosts TAR by $30$% for the above settings.
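For background, the sample-then-lock construction that FiveEyes builds on (Canetti et al., Journal of Cryptology 2021) can be sketched with a hash-based digital locker. The sketch below samples subsets uniformly, whereas FiveEyes' key improvement is to sample non-uniformly using statistics of the produced features; the hash-based locker instantiation and all parameter values are illustrative.

```python
import hashlib, secrets

CHECK = b"\x00" * 16   # trailing zeros let us detect a successful unlock
rng = secrets.SystemRandom()

def lock(key16, vals):
    """Toy digital locker: hides a 16-byte key under a feature-subset value."""
    nonce = secrets.token_bytes(16)
    pad = hashlib.sha256(nonce + bytes(vals)).digest()          # 32-byte pad
    return nonce, bytes(x ^ y for x, y in zip(key16 + CHECK, pad))

def unlock(locker, vals):
    nonce, ct = locker
    pad = hashlib.sha256(nonce + bytes(vals)).digest()
    pt = bytes(x ^ y for x, y in zip(ct, pad))
    return pt[:16] if pt[16:] == CHECK else None                # exact match only

def enroll(features, key16, num_lockers=10_000, subset_size=32):
    """Sample-then-lock enrollment: many lockers, each over a random feature subset."""
    lockers = []
    for _ in range(num_lockers):
        idx = sorted(rng.sample(range(len(features)), subset_size))
        lockers.append((idx, lock(key16, [features[i] for i in idx])))
    return lockers   # public enrollment data

def reproduce(lockers, features):
    """Authentication: a reading succeeds if ANY sampled subset matches exactly."""
    for idx, locker in lockers:
        key = unlock(locker, [features[i] for i in idx])
        if key is not None:
            return key
    return None

features = [rng.randrange(2) for _ in range(1024)]   # toy binary feature vector
key = secrets.token_bytes(16)
lockers = enroll(features, key)
assert reproduce(lockers, features) == key           # same reading reproduces the key
```

A noisy reading succeeds whenever at least one sampled subset lands entirely on stable features, which is what ties the TAR/security tradeoff to the subset size and sampling distribution.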
Last updated:  2024-01-22
Snarktor: A Decentralized Protocol for Scaling SNARKs Verification in Blockchains
Alberto Garoffolo, Dmytro Kaidalov, and Roman Oliynykov
The use of zero-knowledge Succinct Non-Interactive Arguments of Knowledge (zk-SNARK) and similar types of proofs has become increasingly popular as a solution for improving scalability, privacy, and interoperability of blockchain systems. However, even with the most advanced proving systems, verifying a single SNARK proof can require a significant amount of computational resources, making it expensive to perform on-chain. This becomes a noticeable bottleneck in scaling SNARK-based applications. Further efficiency improvement to avoid this bottleneck lies in utilizing distributed recursive proof composition to aggregate multiple existing proofs into one that verifies all underlying proofs. Building upon this concept, we present a new protocol for decentralized recursive proof aggregation allowing one unique proof to aggregate many input proofs to be efficiently verified on-chain, increasing the throughput and cost efficiency of SNARK-based blockchains. The protocol is designed for decentralized environments where independent actors (provers) can join and contribute to the proof generation process. We also present an incentive scheme for such actors. The protocol is abstract enough to be used with a variety of proving systems that support recursive aggregation.
Last updated:  2024-01-22
Theoretical differential fault attacks on FLIP and FiLIP
Pierrick Méaux and Dibyendu Roy
In this article, we examine Differential Fault Attacks (DFA) targeting two stream ciphers, FLIP and FiLIP. We explore the fault model where an adversary flips a single bit of the key at an unknown position. Our analysis involves establishing complexity bounds for these attacks, contingent upon the cryptographic parameters of the Boolean functions employed as filters and the key size. Initially, we demonstrate how the concept of sensitivity enables the detection of the fault position using only a few keystream bits. This represents an enhancement over previous DFA methodologies applied to these ciphers. Subsequently, we leverage the properties of the filter's derivatives to execute attacks. This approach is universally applicable to any filter, and we delineate specific attack strategies for the two function families previously implemented in these ciphers.
Last updated:  2024-01-22
Improved All-but-One Vector Commitment with Applications to Post-Quantum Signatures
Dung Bui, Kelong Cong, and Cyprien Delpech de Saint Guilhem
Post-quantum digital signature schemes have recently received increased attention due to the NIST standardization project for additional signatures. MPC-in-the-Head and VOLE-in-the-Head are general techniques for constructing such signatures from zero-knowledge proof systems. A common theme between the two is an all-but-one vector commitment scheme which internally uses GGM trees. This primitive is responsible for a significant part of the computational time during signing and verification. A more efficient technique for constructing GGM trees is the half-tree technique, introduced by Guo et al. (Eurocrypt 2023). Our work builds an all-but-one vector commitment scheme from the half-tree technique, and further generalizes it to an all-but-\(\tau\) vector commitment scheme. Crucially, our work avoids the use of the random oracle assumption in an important step, which means our binding proof is non-trivial and instead relies on the random permutation oracle. Since this oracle can be instantiated using fixed-key AES which has hardware support, we achieve faster signing and verification times. We integrate our vector commitment scheme into FAEST (faest.info), a round-one candidate in the NIST standardization process, and demonstrate its performance with a prototype implementation. For \(\lambda = 128\), our experimental results show a nearly \(3.5\)-fold improvement in signing and verification times.
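To illustrate the GGM-tree mechanics underlying such all-but-one vector commitments, here is a plaintext sketch of the generic construction (not the half-tree optimization or the random-permutation-based instantiation of the paper): a root seed expands to a vector of leaf seeds, and revealing the siblings along one path opens every leaf except the hidden one. The hash-based PRG is a toy stand-in for fixed-key AES, and the leaf-hashing that turns seeds into actual commitments is omitted.

```python
import hashlib

def prg(seed):
    """Toy length-doubling PRG; real schemes use fixed-key AES for speed."""
    return (hashlib.sha256(seed + b"L").digest(),
            hashlib.sha256(seed + b"R").digest())

def expand(seed, depth):
    """Expand a root seed into 2^depth leaf seeds (the committed vector)."""
    level = [seed]
    for _ in range(depth):
        level = [child for s in level for child in prg(s)]
    return level

def open_all_but_one(root, depth, hide):
    """Reveal the co-path: the sibling of every node on the path to leaf `hide`."""
    copath, node = [], root
    for d in range(depth):
        bit = (hide >> (depth - 1 - d)) & 1
        left, right = prg(node)
        copath.append(right if bit == 0 else left)
        node = left if bit == 0 else right
    return copath

def reconstruct(copath, depth, hide):
    """From the co-path, recover every leaf seed except leaf `hide`."""
    leaves = {}
    for d, s in enumerate(copath):
        bit = (hide >> (depth - 1 - d)) & 1
        prefix = ((hide >> (depth - d)) << 1) | (bit ^ 1)   # sibling subtree index
        for i, leaf in enumerate(expand(s, depth - 1 - d)):
            leaves[(prefix << (depth - 1 - d)) + i] = leaf
    return leaves

root, depth, hide = b"\x00" * 32, 4, 11
leaves = expand(root, depth)
rec = reconstruct(open_all_but_one(root, depth, hide), depth, hide)
assert all(rec[i] == leaves[i] for i in range(2**depth) if i != hide) and hide not in rec
```

The opening is only `depth` seeds long, which is why the GGM expansion (and hence the PRG) dominates signing and verification time.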
Last updated:  2024-01-22
Revisiting the security analysis of SNOVA
Yasuhiko Ikematsu and Rika Akiyama
SNOVA is a multivariate signature scheme submitted to the additional NIST PQC standardization project started in 2022. SNOVA is constructed by incorporating the structure of the matrix ring over a finite field into the UOV signature scheme, and the core part of its public key is the UOV public key whose coefficients consist of matrices. As a result, SNOVA dramatically reduces the public key size compared to UOV. In this paper, we recall the construction of SNOVA and reconsider its security analysis. In particular, we investigate key recovery attacks applied to the core part of the public key of SNOVA in detail. Based on our analysis, we show that some parameters of SNOVA submitted to the additional NIST PQC standardization project do not satisfy the claimed security levels.
Last updated:  2024-01-22
ConvKyber: Unleashing the Power of AI Accelerators for Faster Kyber with Novel Iteration-based Approaches
Tian Zhou, Fangyu Zheng, Guang Fan, Lipeng Wan, Wenxu Tang, Yixuan Song, Yi Bian, and Jingqiang Lin
The remarkable performance capabilities of AI accelerators offer promising opportunities for accelerating cryptographic algorithms, particularly in the context of lattice-based cryptography. However, current approaches to leveraging AI accelerators often remain at a rudimentary level of implementation, overlooking the intricate internal mechanisms of these devices. Consequently, a significant share of the computational resources is left underutilized. In this paper, we present a comprehensive exploration of NVIDIA Tensor Cores and introduce a novel framework tailored specifically for Kyber. Firstly, we propose two innovative approaches that efficiently break down Kyber's NTT into iterative matrix multiplications, resulting in approximately a 75% reduction in costs compared to the state-of-the-art scanning-based methods. Secondly, by reversing the internal mechanisms, we precisely manipulate the internal resources of Tensor Cores using assembly-level code instead of inefficient standard interfaces, eliminating memory accesses and redundant function calls. Finally, building upon our highly optimized NTT, we provide a complete implementation for all parameter sets of Kyber. Our implementation surpasses the state-of-the-art Tensor Core based work, achieving remarkable speed-ups of 1.93x, 1.65x, 1.22x and 3.55x for polyvec_ntt, KeyGen, Enc and Dec in Kyber-1024, respectively. Even when considering execution latency, our throughput-oriented full Kyber implementation maintains an acceptable execution latency. For instance, the execution latency ranges from 1.02 to 5.68 milliseconds for Kyber-1024 on R3080 when achieving the peak throughput.
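The starting point of such decompositions is that an NTT is a linear map, so a batch of NTTs is a single matrix multiplication, which is exactly the operation Tensor Cores are built for. Below is a NumPy sketch of this basic principle only; the paper's iteration-based decomposition into smaller matrix products is not reproduced, and toy NTT-friendly parameters $q = 7681$, $n = 256$ are used (Kyber's $q = 3329$ admits only an incomplete NTT, which needs the more careful treatment the paper provides).

```python
import numpy as np

q, n = 7681, 256                       # toy parameters with 2n | q-1 (full negacyclic NTT)

def find_psi(q, n):
    """Brute-force a primitive 2n-th root of unity mod q (psi^n == -1)."""
    for g in range(2, q):
        psi = pow(g, (q - 1) // (2 * n), q)
        if pow(psi, n, q) == q - 1:
            return psi
    raise ValueError("no primitive 2n-th root found")

psi = find_psi(q, n)

# Negacyclic NTT as one matrix: NTT(a)_i = a(psi^(2i+1)) = sum_j a_j psi^(j(2i+1))
M = np.array([[pow(psi, j * (2 * i + 1), q) for j in range(n)] for i in range(n)],
             dtype=np.int64)

# A batch of polynomials becomes a single matrix multiplication
batch = 64
A = np.random.randint(0, q, size=(n, batch), dtype=np.int64)
ntt_batch = (M @ A) % q                # int64 is wide enough: 256 * q^2 < 2^63

# sanity check against direct evaluation for one polynomial
a = A[:, 0]
direct = [sum(int(a[j]) * pow(psi, j * (2 * i + 1), q) for j in range(n)) % q
          for i in range(3)]
assert list(ntt_batch[:3, 0]) == direct
```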
Last updated:  2024-01-21
Chosen-Ciphertext Secure Dual-Receiver Encryption in the Standard Model Based on Post-Quantum Assumptions
Laurin Benz, Wasilij Beskorovajnov, Sarai Eilebrecht, Roland Gröll, Maximilian Müller, and Jörn Müller-Quade
Dual-receiver encryption (DRE) is a special form of public-key encryption (PKE) that allows a sender to encrypt a message for two recipients. Without further properties, the difference between DRE and PKE is only syntactical. One such important property is soundness, which requires that no ciphertext can be constructed such that the recipients decrypt to different plaintexts. Many applications rely on this property in order to realize more complex protocols or primitives. In addition, many of these applications explicitly avoid the usage of the random oracle, which poses an additional requirement on a DRE construction. We show that all of the IND-CCA2 secure standard-model DRE constructions based on post-quantum assumptions fall short of achieving soundness, and we describe attacks on them. We then give an overview of all applications of IND-CCA2 secure DRE, group them into generic (i.e., applications using DRE as a black box) and non-generic applications, and demonstrate that all generic ones require either soundness or public verifiability. Conclusively, we identify the gap of sound and IND-CCA2 secure DRE constructions based on post-quantum assumptions in the standard model. In order to fill this gap, we provide two IND-CCA2 secure DRE constructions based on the standard post-quantum assumptions, Normal Form Learning With Errors (NLWE) and Learning Parity with Noise (LPN).
Last updated:  2024-01-21
Short Code-based One-out-of-Many Proofs and Applications
Xindong Liu and Li-Ping Wang
In this work, we propose two novel succinct one-out-of-many proofs from coding theory, which can be seen as extensions of Stern's framework and Veron's framework from proving knowledge of a preimage to proving knowledge of a preimage for one element in a set, respectively. The size of each proof is short and scales better with the size of the public set than the code-based accumulator in \cite{nguyen2019new}. Based on our new constructions, we further present a logarithmic-size ring signature scheme and a logarithmic-size group signature scheme. Our schemes feature a short signature size, especially our group signature. To the best of our knowledge, it is the most compact code-based group signature scheme so far. At the 128-bit security level, our group signature size is about 144 KB for a group with $2^{20}$ members, while the group signature size of the previously most compact code-based group signature constructed by the above accumulator exceeds 3200 KB.
Last updated:  2024-02-08
Call Me By My Name: Simple, Practical Private Information Retrieval for Keyword Queries
Sofía Celi and Alex Davidson
We introduce $\mathsf{ChalametPIR}$: a single-server Private Information Retrieval (PIR) scheme supporting fast, low-bandwidth keyword queries, with a conceptually very simple design. In particular, we develop a generic framework for converting PIR schemes for index queries over flat arrays (based on the Learning With Errors problem) into keyword PIR. This involves representing a key-value map using any probabilistic filter that permits reconstruction of elements from inclusion queries (e.g. Cuckoo filters). In particular, we make use of recently developed Binary Fuse filters to construct $\mathsf{ChalametPIR}$, with minimal efficiency blow-up compared with state-of-the-art index-based schemes (all costs bounded by a factor of $\leq 1.08$). Furthermore, we show that $\mathsf{ChalametPIR}$ achieves runtimes and financial costs that are factors of between $6$-$11\times$ and $3.75$-$11.4\times$ more efficient, respectively, than state-of-the-art keyword PIR approaches, for varying database configurations. Bandwidth costs are additionally reduced or remain competitive, depending on the configuration. Finally, we believe that our application of Binary Fuse filters can have independent value towards developing efficient variants of related cryptographic primitives (e.g. private set intersection), that already benefit from using less efficient filter constructions.
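The filter-based encoding at the heart of this framework can be illustrated in plaintext: a key-value map is packed into a flat array so that each value is recoverable as the XOR of three hashed positions, turning a keyword lookup into a few index lookups (which the underlying index-PIR scheme then answers privately). The sketch below uses a classic 3-wise XOR-filter construction with peeling; real Binary Fuse filters use a more refined layout and add fingerprints to detect non-members, and all parameters here are illustrative.

```python
import hashlib

def positions(key, seed, m):
    """Three slot indices, one per third of the table (segments keep them distinct)."""
    h = hashlib.blake2b(key.encode(), key=seed, digest_size=12).digest()
    seg = m // 3
    return [s * seg + int.from_bytes(h[4*s:4*s + 4], "big") % seg for s in range(3)]

def build(kv, tries=64):
    """Pack integer values so that the XOR of three slots reconstructs each value."""
    keys = list(kv)
    m = 3 * (int(1.23 * len(keys)) // 3 + 2)          # ~1.23n slots, multiple of 3
    for t in range(tries):                            # retry with a new seed if peeling fails
        seed = t.to_bytes(8, "big")
        cnt, acc = [0] * m, [0] * m                   # per-slot count and XOR of key ids
        for ki, k in enumerate(keys):
            for p in positions(k, seed, m):
                cnt[p] += 1
                acc[p] ^= ki
        stack, single = [], [p for p in range(m) if cnt[p] == 1]
        while single:                                 # peel slots hit by exactly one key
            p = single.pop()
            if cnt[p] != 1:
                continue
            ki = acc[p]
            stack.append((ki, p))
            for q in positions(keys[ki], seed, m):
                cnt[q] -= 1
                acc[q] ^= ki
                if cnt[q] == 1:
                    single.append(q)
        if len(stack) == len(keys):                   # fully peeled: system is solvable
            table = [0] * m
            for ki, p in reversed(stack):             # back-substitute
                val = kv[keys[ki]]
                for q in positions(keys[ki], seed, m):
                    if q != p:
                        val ^= table[q]
                table[p] = val
            return seed, table
    raise RuntimeError("peeling failed; retry with more slots")

def query(seed, table, key):
    v = 0
    for p in positions(key, seed, len(table)):
        v ^= table[p]                                 # in keyword PIR, these three reads
    return v                                          # become index-PIR queries

seed, table = build({f"user{i}": i * i for i in range(100)})
assert query(seed, table, "user7") == 49
```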
Last updated:  2024-01-20
On historical Multivariate Cryptosystems and their restorations as instruments of Post-Quantum Cryptography
Vasyl Ustimenko
The paper presents a short survey of the history of multivariate cryptography, together with the usage of old, broken multivariate digital signatures in new protocol-based cryptosystems constructed in terms of noncommutative cryptography. The general scheme of the new cryptosystems is a combination of Eulerian maps and quadratic maps with their trapdoor accelerators, i.e., pieces of information whose knowledge allows one to compute preimages in polynomial time. These schemes are illustrated by historical examples of the Imai-Matsumoto multivariate digital signature schemes and Unbalanced Oil and Vinegar cryptosystems.
Last updated:  2024-01-22
Starlit: Privacy-Preserving Federated Learning to Enhance Financial Fraud Detection
Aydin Abadi, Bradley Doyle, Francesco Gini, Kieron Guinamard, Sasi Kumar Murakonda, Jack Liddell, Paul Mellor, Steven J. Murdoch, Mohammad Naseri, Hector Page, George Theodorakopoulos, and Suzanne Weller
Federated Learning (FL) is a data-minimization approach enabling collaborative model training across diverse clients with local data, avoiding direct data exchange. However, state-of-the-art FL solutions to identify fraudulent financial transactions exhibit a subset of the following limitations. They (1) lack a formal security definition and proof, (2) assume prior freezing of suspicious customers’ accounts by financial institutions (limiting the solutions’ adoption), (3) scale poorly, involving either $O(n^2)$ computationally expensive modular exponentiation (where $n$ is the total number of financial institutions) or highly inefficient fully homomorphic encryption, (4) assume the parties have already completed the identity alignment phase, hence excluding it from the implementation, performance evaluation, and security analysis, and (5) struggle to resist clients’ dropouts. This work introduces Starlit, a novel scalable privacy-preserving FL mechanism that overcomes these limitations. It has various applications, such as enhancing financial fraud detection, mitigating terrorism, and enhancing digital health. We implemented Starlit and conducted a thorough performance analysis using synthetic data from a key player in global financial transactions. The evaluation indicates Starlit’s scalability, efficiency, and accuracy.
Last updated:  2024-01-19
Two-party GOST in two parts: fruitless search and fruitful synthesis
Liliya Akhmetzyanova, Evgeny Alekseev, Alexandra Babueva, Lidiia Nikiforova, and Stanislav Smyshlyaev
In the current paper we investigate the possibility of designing a secure two-party signature scheme with the same verification algorithm as in the Russian standardized scheme (GOST scheme). We solve this problem in two parts. The first part is a (fruitless) search for an appropriate scheme in the literature. It turned out that all existing schemes are insecure in strong security models. The second part, a synthesis of a new signature scheme, ends fruitfully. We synthesize a new two-party GOST signature scheme, additionally using a commitment scheme, guided by the features of the GOST signature scheme as well as the known attacks on existing schemes. We prove that this scheme is secure in the bijective random oracle model in the case when one of the parties is malicious, under the assumption that the classical GOST scheme is unforgeable in the bijective random oracle model and the commitment scheme is modelled as a random oracle.
Last updated:  2024-01-19
Enabling PERK on Resource-Constrained Devices
Slim Bettaieb, Loïc Bidoux, Alessandro Budroni, Marco Palumbi, and Lucas Pandolfo Perin
PERK is a digital signature scheme submitted to the recent NIST Post-Quantum Cryptography Standardization Process for Additional Digital Signature Schemes. For NIST security level I, PERK features sizes ranging from 6kB to 8.5kB, encompassing both the signature and public key, depending on the parameter set. Given its inherent characteristics, PERK's signing and verification algorithms involve the computation of numerous large objects, resulting in substantial stack-memory consumption ranging from 300kB to 1.5MB for NIST security level I and from 1.1MB to 5.7MB for NIST security level V. In this paper, we present a memory-versus-performance trade-off strategy that significantly reduces PERK's memory consumption to a maximum of approximately 82kB for any security level, enabling PERK to be executed on resource-constrained devices. Additionally, we explore various optimizations tailored to the Cortex M4 and introduce the first implementation of PERK designed for this platform.
Last updated:  2024-01-23
Tree-based Lookup Table on Batched Encrypted Queries using Homomorphic Encryption
Jung Hee Cheon, Hyeongmin Choe, and Jai Hyun Park
Homomorphic encryption (HE) is in the spotlight as a solution for privacy-related issues in various real-world scenarios. However, the limited types of operations supported by each HE scheme have been a major drawback in applications. While HE schemes based on the learning-with-errors (LWE) problem provide efficient lookup table (LUT) evaluation in terms of latency, they have downsides in arithmetic operations and low throughput compared to HE schemes based on the ring-LWE (RLWE) problem. The use of HE on circuits containing LUTs has been partly limited if they contain arithmetic operations or their computational width is large. In this paper, we propose homomorphic algorithms for batched queries on LUTs by using RLWE-based HE schemes. To look up encrypted LUTs of size $n$ on encrypted queries, our algorithms use $O(\log{n})$ homomorphic comparisons and $O(n)$ multiplications. For unencrypted LUTs, our algorithms use $O(\log{n})$ comparisons, $O(\sqrt{n})$ ciphertext multiplications, and $O(n)$ scalar multiplications. We provide a proof-of-concept implementation based on the CKKS scheme (Asiacrypt 2017). The amortized running time for an encrypted (resp. unencrypted) LUT of size $512$ is $0.041$ (resp. $0.025$) seconds. Our implementation reported roughly $2.4$-$6.0$x higher throughput than the current implementation of LWE-based schemes, with more flexibility on the structure of the LUTs.
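To see how a tree traversal can stay within $O(\log n)$ comparisons and $O(n)$ multiplications when every intermediate value must remain data-oblivious (as it must under HE), here is a plaintext analogue of one natural strategy. This is an illustration consistent with the stated complexities, not the paper's actual algorithm.

```python
def tree_lut(keys, vals, q):
    """Oblivious LUT lookup: returns vals[i] for the largest keys[i] <= q.

    keys must be sorted with power-of-two length; queries below keys[0] clamp
    to index 0. Each tree level uses ONE comparison (on an obliviously selected
    threshold) plus O(2^level) multiplications, totalling log n comparisons and
    O(n) multiplications overall.
    """
    n = len(keys)
    d = n.bit_length() - 1
    sel = [1]                                   # one-hot selector over segments
    for level in range(d):
        step = n >> (level + 1)
        thresholds = [keys[(2 * i + 1) * step] for i in range(len(sel))]
        t = sum(s * th for s, th in zip(sel, thresholds))   # oblivious select
        b = 1 if q >= t else 0                  # the single comparison of this level
        sel = [x for s in sel for x in (s * (1 - b), s * b)]
    return sum(s * v for s, v in zip(sel, vals))

keys = [10, 20, 30, 40, 50, 60, 70, 80]
vals = [100, 200, 300, 400, 500, 600, 700, 800]
assert tree_lut(keys, vals, 35) == 300          # 30 <= 35 < 40
assert tree_lut(keys, vals, 80) == 800
```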
Last updated:  2024-03-03
On Hilbert-Poincaré series of affine semi-regular polynomial sequences and related Gröbner bases
Momonari Kudo and Kazuhiro Yokoyama
Gröbner bases are nowadays central tools for solving various problems in commutative algebra and algebraic geometry. A typical use of Gröbner bases is multivariate polynomial system solving, which enables us to construct algebraic attacks against post-quantum cryptographic protocols. Therefore, the determination of the complexity of computing Gröbner bases is very important both in theory and in practice: One of the most important cases is the case where the input polynomials compose an (overdetermined) affine semi-regular sequence. The first part of this paper aims to present a survey on Gröbner basis computation and its complexity. In the second part, we shall give an explicit formula for the (truncated) Hilbert-Poincaré series associated to the homogenization of an affine semi-regular sequence. Based on the formula, we also study (reduced) Gröbner bases of the ideals generated by an affine semi-regular sequence and its homogenization. Some of our results are considered to give mathematically rigorous proofs of the correctness of methods for computing Gröbner bases of the ideal generated by an affine semi-regular sequence.
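For context, the classical homogeneous benchmark that the paper's affine analysis refines (not the paper's new formula): for a homogeneous semi-regular sequence $f_1,\dots,f_m$ of degrees $d_1,\dots,d_m$ in $n$ variables, the Hilbert-Poincaré series of the quotient ring is the truncated series
\[
\operatorname{HS}(z) \;=\; \left[\,\frac{\prod_{i=1}^{m}\bigl(1-z^{d_i}\bigr)}{(1-z)^{n}}\,\right]_{+},
\]
where $[\,\cdot\,]_{+}$ cuts the power series off at its first non-positive coefficient; the index of that coefficient is the degree of regularity, which governs the cost of Gröbner basis algorithms such as $F_4$/$F_5$ and hence the complexity of the algebraic attacks mentioned above.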
Last updated:  2024-01-29
Simultaneously simple universal and indifferentiable hashing to elliptic curves
Dmitrii Koshelev
The present article explains how to generalize the hash function SwiftEC (in an elementary quasi-unified way) to any elliptic curve $E$ over any finite field $\mathbb{F}_{\!q}$ of characteristic $> 3$. The new result apparently brings the theory of hash functions onto elliptic curves to its logical conclusion. To be more precise, this article provides compact formulas that define a hash function $\{0,1\}^* \to E(\mathbb{F}_{\!q})$ (deterministic and indifferentiable from a random oracle) with the same working principle as SwiftEC. In particular, both of them equally compute only one square root in $\mathbb{F}_{\!q}$ (in addition to two cheap Legendre symbols). However, the new hash function is valid under much more liberal conditions than SwiftEC, namely whenever $3 \mid q-1$. Since in the opposite case $3 \mid q-2$ there are already indifferentiable constant-time hash functions to $E$ with the cost of one root in $\mathbb{F}_{\!q}$, this case is not treated in the article. If desired, its approach nonetheless allows one to easily do that mutatis mutandis.
Last updated:  2024-02-16
Efficient Instances of Docked Double Decker With AES, and Application to Authenticated Encryption
Christoph Dobraunig, Krystian Matusiewicz, Bart Mennink, and Alexander Tereschenko
A tweakable wide blockcipher is a construction which behaves in the same way as a tweakable blockcipher, with the difference that the actual block size is flexible. Due to this feature, a tweakable wide blockcipher can be directly used as a strong encryption scheme that provides full diffusion when encrypting plaintexts to ciphertexts and vice versa. Furthermore, it can be the basis of authenticated encryption schemes fulfilling the strongest security notions. In this paper, we present two instantiations of the docked double decker tweakable wide blockcipher: $\mathit{ddd}\text{-}\mathit{AES}$ and $\mathit{bbb}\text{-}\mathit{ddd}\text{-}\mathit{AES}$. Both instances exclusively use similar building blocks as AES-GCM (AES and finite field multiplication), are designed for maximal parallelism, and hence, can make efficient use of existing hardware accelerators. Moreover, $\mathit{bbb}\text{-}\mathit{ddd}\text{-}\mathit{AES}$ builds upon a novel beyond-birthday-bound secure pseudorandom function, a tweakable variant of the XOR of permutations, addressing the need to include a tweak in the AES evaluations without sacrificing flexibility in docked double decker. We furthermore introduce an authenticated encryption mode $\mathit{aaa}$ specifically tailored to be instantiated with $\mathit{ddd}\text{-}\mathit{AES}$ and $\mathit{bbb}\text{-}\mathit{ddd}\text{-}\mathit{AES}$, where special attention is given to how the nonce and associated data can be processed. We prove that this mode is secure both in the nonce-respecting setting as well as in the setting where random nonces are used.
Last updated:  2024-01-18
Layout Graphs, Random Walks and the t-wise Independence of SPN Block Ciphers
Tianren Liu, Angelos Pelecanos, Stefano Tessaro, and Vinod Vaikuntanathan
We continue the study of $t$-wise independence of substitution-permutation networks (SPNs) initiated by the recent work of Liu, Tessaro, and Vaikuntanathan (CRYPTO 2021). Our key technical result shows that when the S-boxes are randomly and independently chosen and kept secret, an $r$-round SPN with input length $n = b \cdot k$ is $2^{-\Theta(n)}$-close to $t$-wise independent within $r = O(\min\{k, \log t\})$ rounds for any $t$ almost as large as $2^{b/2}$. Here, $b$ is the input length of the S-box and we assume that the underlying mixing achieves maximum branch number. We also analyze the special case of AES parameters (with random S-boxes), and show it is $2^{-128}$-close to pairwise independent in $7$ rounds. Central to our result is the analysis of a random walk on what we call the layout graph, a combinatorial abstraction that captures equality and inequality constraints among multiple SPN evaluations. We use our technical result to show concrete security bounds for SPNs with actual block cipher parameters and small-input $S$-boxes. (This is in contrast to the large body of results on ideal-model analyses of SPNs.) For example, for the censored-AES block cipher, namely AES with most of the mixing layers removed, we show that 192 rounds suffice to attain $2^{-128}$-closeness to pairwise independence. The prior such result for AES (Liu, Tessaro and Vaikuntanathan, CRYPTO 2021) required more than 9000 rounds.
Last updated:  2024-01-18
Quantum State Obfuscation from Classical Oracles
James Bartusek, Zvika Brakerski, and Vinod Vaikuntanathan
A major unresolved question in quantum cryptography is whether it is possible to obfuscate arbitrary quantum computation. Indeed, there is much yet to understand about the feasibility of quantum obfuscation even in the classical oracle model, where one is given for free the ability to obfuscate any classical circuit. In this work, we develop a new array of techniques that we use to construct a quantum state obfuscator, a powerful notion formalized recently by Coladangelo and Gunn (arXiv:2311.07794) in their pursuit of better software copy-protection schemes. Quantum state obfuscation refers to the task of compiling a quantum program, consisting of a quantum circuit $C$ with a classical description and an auxiliary quantum state $|\psi\rangle$, into a functionally-equivalent obfuscated quantum program that hides as much as possible about $C$ and $|\psi\rangle$. We prove the security of our obfuscator when applied to any pseudo-deterministic quantum program, i.e. one that computes a (nearly) deterministic classical input / classical output functionality. Our security proof is with respect to an efficient classical oracle, which may be heuristically instantiated using quantum-secure indistinguishability obfuscation for classical circuits. Our result improves upon the recent work of Bartusek, Kitagawa, Nishimaki and Yamakawa (STOC 2023) who also showed how to obfuscate pseudo-deterministic quantum circuits in the classical oracle model, but only ones with a completely classical description. Furthermore, our result answers a question of Coladangelo and Gunn, who provide a construction of quantum state indistinguishability obfuscation with respect to a quantum oracle, but leave the existence of a concrete real-world candidate as an open problem. Indeed, our quantum state obfuscator together with Coladangelo-Gunn gives the first candidate realization of a ``best-possible'' copy-protection scheme for all polynomial-time functionalities. Our techniques deviate significantly from previous works on quantum obfuscation. We develop several novel technical tools which we expect to be broadly useful in quantum cryptography. These tools include a publicly-verifiable, linearly-homomorphic quantum authentication scheme with classically-decodable ZX measurements (which we build from coset states), and a method for compiling any quantum circuit into a "linear + measurement" ($\mathsf{LM}$) quantum program: an alternating sequence of CNOT operations and partial ZX measurements.
Last updated:  2024-01-18
SuperFL: Privacy-Preserving Federated Learning with Efficiency and Robustness
Yulin Zhao, Hualin Zhou, and Zhiguo Wan
Federated Learning (FL) accomplishes collaborative model training without the need to share local training data. However, existing FL aggregation approaches suffer from inefficiency, privacy vulnerabilities, and neglect of poisoning attacks, severely impacting the overall performance and reliability of model training. In order to address these challenges, we propose SuperFL, an efficient two-server aggregation scheme that is both privacy preserving and secure against poisoning attacks. The two semi-honest servers $\mathcal{S}_0$ and $\mathcal{S}_1$ collaborate with each other: a shuffle server $\mathcal{S}_0$ is in charge of privacy-preserving random clustering, while an analysis server $\mathcal{S}_1$ is responsible for robustness detection, identifying and filtering malicious model updates. Our scheme employs a novel combination of homomorphic encryption and proxy re-encryption to realize secure server-to-server collaboration. We also utilize a novel sparse matrix projection compression technique to enhance communication efficiency and significantly reduce communication overhead. To resist poisoning attacks, we introduce a dual-filter algorithm based on a trusted root, combining dimensionality reduction and norm calculation to identify malicious model updates. Extensive experiments validate the efficiency and robustness of our scheme. SuperFL achieves impressive compression ratios, ranging from $5\text{-}40$x, under different models while maintaining model accuracy comparable to the baseline. Notably, our solution demonstrates a maximum model accuracy decrease of no more than $2\%$ and $6\%$ on the MNIST and CIFAR-10 datasets, respectively, under specific compression ratios and in the presence of malicious clients.
Last updated:  2024-04-25
Memory adds no cost to lattice sieving for computers in 3 or more spatial dimensions
Samuel Jaques
The security of lattice-based cryptography (LWE, NTRU, and FHE) depends on the hardness of the shortest-vector problem (SVP). Sieving algorithms give the lowest asymptotic runtime to solve SVP, but require exponential memory. Memory access costs much more in reality than in the RAM model, so we consider a computational model where processors, memory, and meters of wire are in constant proportions to each other. While this adds substantial costs to route data during lattice sieving, we modify existing algorithms to amortize these costs and find that, asymptotically, a classical computer can achieve the previous RAM model cost of $2^{0.2925d+o(d)}$ to sieve a $d$-dimensional lattice for a computer existing in 3 or more spatial dimensions, and can reach $2^{0.3113d+o(d)}$ in 2 spatial dimensions, where "spatial dimensions" are the dimensions of the physical geometry in which the computer exists. Under some assumptions about the constant terms of memory access, we estimate increases in bit security of between $7$ and $23$ bits for different Kyber parameter sets and between $8$ and $22$ bits for Dilithium.
Last updated:  2024-01-22
On Modular Algorithms and Butterfly Operations in Number Theoretic Transform
Yanze Yang, Yiran Jia, and Guangwu Xu
Number theoretic transform (NTT) has been a very useful tool in computations for number theory, algebra and cryptography. Its performance affects some post-quantum cryptosystems. In this paper, we discuss the butterfly operation of NTT. This basic module of NTT requires heavy modular arithmetic. Montgomery reduction is commonly used in this setting. Recently, several variants of Montgomery reduction have been proposed to speed up NTT. We observe that the Chinese remainder theorem (CRT) can be involved in this type of algorithm in natural and transparent ways. In this paper, a framework of using CRT to model Montgomery-type algorithms is described. The derivation of these algorithms as well as their correctness are all treated in the CRT framework. Under our approach, some problems with a modular reduction algorithm (published in IACR Transactions on Cryptographic Hardware and Embedded Systems, doi:10.46586/tches.v2022.i4.614-636) are identified, and a counterexample is generated to show that the algorithm is incorrect.
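For reference, a minimal Python sketch of textbook Montgomery reduction together with a Cooley-Tukey butterfly built on it (Kyber-like parameters $q = 3329$, $R = 2^{16}$ chosen for illustration; the CRT-based derivations of the paper are not reproduced):

    # Textbook Montgomery reduction REDC(T) = T * R^{-1} mod q for R = 2^16,
    # and one Cooley-Tukey butterfly using it. Illustrative parameters only.
    Q = 3329
    R = 1 << 16
    QINV = pow(-Q, -1, R)                  # -q^{-1} mod R (Python 3.8+)

    def montgomery_reduce(t):
        # Input t < q*R; output is congruent to t * R^{-1} mod q and < 2q.
        m = (t * QINV) & (R - 1)
        return (t + m * Q) >> 16           # t + m*q is divisible by R

    def ct_butterfly(a, b, zeta_mont):
        # (a, b) -> (a + zeta*b, a - zeta*b) mod q, with the twiddle zeta
        # stored in the Montgomery domain so the product reduces cheaply.
        t = montgomery_reduce(b * zeta_mont)
        return (a + t) % Q, (a - t) % Q

    zeta_mont = 17 * R % Q                 # Kyber's root 17, Montgomery form
    print(ct_butterfly(5, 7, zeta_mont))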
Last updated:  2024-01-17
Formal Security Analysis of the OpenID FAPI 2.0: Accompanying a Standardization Process
Pedram Hosseyni, Ralf Kuesters, and Tim Würtele
In recent years, the number of third-party services that can access highly-sensitive data has increased steadily, e.g., in the financial sector, in eGovernment applications, or in high-assurance identity services. Protocols that enable this access must provide strong security guarantees. A prominent and widely employed protocol for this purpose is the OpenID Foundation's FAPI protocol. The FAPI protocol is already in widespread use, e.g., as part of the UK's Open Banking standards and Brazil's Open Banking Initiative as well as outside of the financial sector, for instance, as part of the Australian government's Consumer Data Rights standards. Based on lessons learned from FAPI 1.0, the OpenID Foundation has developed a completely new protocol, called FAPI 2.0. The specifications of FAPI 2.0 include a concrete set of security goals and attacker models under which the protocol aims to be secure. Following an invitation from the OpenID Foundation's FAPI Working Group (FAPI WG), we have accompanied the standardization process of the FAPI 2.0 protocol by an in-depth formal security analysis. In this paper, we report on our analysis and findings. Our analysis incorporates the first formal model of the FAPI 2.0 protocol and is based on a detailed model of the web infrastructure, the Web Infrastructure Model, originally proposed by Fett, Küsters, and Schmitz. Our analysis has uncovered several types of attacks on the protocol, violating the aforementioned security goals set by the FAPI WG. We subsequently have worked with the FAPI WG to fix the protocol, resulting in several changes to the specifications. After adapting our model to the changed specifications, we have proved the security properties to hold under the strong attacker model defined by the FAPI WG.
Last updated:  2024-01-17
OBSCURE: Versatile Software Obfuscation from a Lightweight Secure Element
Darius Mercadier, Viet Sang Nguyen, Matthieu Rivain, and Aleksei Udovenko
Software obfuscation is a powerful tool to protect the intellectual property or secret keys inside programs. Strong software obfuscation is crucial in the context of untrusted execution environments (e.g., subject to malware infection) or to face potentially malicious users trying to reverse-engineer a sensitive program. Unfortunately, the state-of-the-art of pure software-based obfuscation (including white-box cryptography) is either insecure or infeasible in practice. This work introduces OBSCURE, a versatile framework for practical and cryptographically strong software obfuscation relying on a simple stateless secure element (to be embedded, for example, in a protected hardware chip or a token). Based on the foundational result by Goyal et al. from TCC 2010, our scheme enjoys provable security guarantees, and further focuses on practical aspects, such as efficient execution of the obfuscated programs, while maintaining simplicity of the secure element. In particular, we propose a new rectangular universalization technique, which is also of independent interest. We provide an implementation of OBSCURE taking as input a program source code written in a subset of the C programming language. This ensures usability and a broad range of applications of our framework. We benchmark the obfuscation on simple software programs as well as on cryptographic primitives, hence highlighting the possible use cases of the framework as an alternative to pure software-based white-box implementations.
Last updated:  2024-05-07
A provably masked implementation of BIKE Key Encapsulation Mechanism
Loïc Demange and Mélissa Rossi
BIKE is a post-quantum key encapsulation mechanism (KEM) selected for the 4th round of the NIST’s standardization campaign. It relies on the hardness of the syndrome decoding problem for quasi-cyclic codes and on the indistinguishability of the public key from a random element, and provides the most competitive performance among round 4 candidates, which makes it relevant for future real-world use cases. Analyzing its side-channel resistance has been highly encouraged by the community and several works have already outlined various side-channel weaknesses and proposed ad-hoc countermeasures. However, in contrast to the well-documented research line on masking lattice-based algorithms, the possibility of generically protecting code-based algorithms by masking has only been marginally studied in a 2016 paper by Cong Chen et al. At this stage of the standardization campaign, it is important to assess the possibility of fully masking the BIKE scheme and the resulting cost in terms of performance. In this work, we provide the first high-order masked implementation of a code-based algorithm. We had to tackle many issues, such as finding proper ways to handle large sparse polynomials, masking the key-generation algorithm, and keeping the benefit of bitslicing. In this paper, we present all the gadgets necessary to provide a fully masked implementation of BIKE, we discuss our different implementation choices, and we propose a full proof of masking in the Ishai, Sahai, and Wagner (Crypto 2003) model. More practically, we also provide an open-source masked C implementation of the key-generation, encapsulation and decapsulation algorithms with extensive benchmarks. While the obtained performance is slower than that of existing masked lattice-based algorithms, the scaling in the masking order is still encouraging and no Boolean-to-arithmetic conversion has been used. We hope that this work can be a starting point for future analysis and optimization.
Last updated:  2024-02-08
Succinct Verification of Compressed Sigma Protocols in the Updatable SRS setting
Moumita Dutta, Chaya Ganesh, and Neha Jawalkar
We propose protocols in the Compressed Sigma Protocol framework that achieve a succinct verifier. Towards this, we construct a new inner product argument and cast it in the Compressed Sigma Protocol (CSP) framework as a protocol for opening a committed linear form, achieving logarithmic verification. We then use our succinct-verifier CSP to construct a zero-knowledge argument for circuit satisfiability (under the discrete logarithm assumption in bilinear groups) in the updatable Structured Reference String (SRS) setting that achieves $O(\log n)$ proof size and $O(\log n)$ verification complexity. Our circuit zero-knowledge protocol has concretely better proof/prover/verifier complexity compared to the state-of-the-art protocol in the updatable setting under the same assumption. Our techniques for achieving verifier succinctness in the compression framework are of independent interest. We then show a commitment scheme for committing to group elements using a structured commitment key. We construct protocols to open a committed homomorphism on a committed vector with verifier succinctness in the designated verifier setting. This has applications in making the verifier succinct in compressed sigma protocols for bilinear group arithmetic circuits.
Last updated:  2024-01-17
PRIDA: PRIvacy-preserving Data Aggregation with multiple data customers
Beyza Bozdemir, Betül Aşkın Özdemir, and Melek Önen
We propose a solution for user-privacy-oriented privacy-preserving data aggregation with multiple data customers. Most existing state-of-the-art approaches place too much importance on performance efficiency and tend to ignore privacy properties other than input privacy. In particular, most data aggregation solutions do not discuss users’ fundamental rights, namely control over their own data and anonymity when they search for something in a browser or volunteer to participate in a survey, yet they are eager to secure data customers’ rights (which should come second). They focus on efficiency-oriented data aggregation that enables input privacy only. We aim to give importance to user privacy, and we have designed a solution for data aggregation that keeps efficiency in balance. We show that PRIDA provides a good level of computational and communication complexity and performs even better in timing evaluations than recently published studies (i.e., Bonawitz et al. (CCS’17), Corrigan-Gibbs et al. (NSDI’17), Bell et al. (CCS’20), Addanki et al. (SCN’22)). We employ threshold homomorphic encryption and secure two-party computation to ensure privacy properties. We balance the trade-off between a proper design for users and the desired privacy and efficiency.
Last updated:  2024-01-17
A Comparative Examination of Network and Contract-Based Blockchain Storage Solutions for Decentralized Applications
Lipeng He
Decentralized applications (DApps), which are innovative blockchain-powered software systems designed to serve as the fundamental building blocks for the next generation of Internet services, have witnessed exponential growth in recent years. This paper thoroughly compares and analyzes two blockchain-based decentralized storage networks (DSNs), which are crucial foundations for DApp and blockchain ecosystems. The study examines their respective mechanisms for data persistence, strategies for enforcing data retention, and token economics. In addition to delving into technical details, the suitability of each storage solution for decentralized application development is assessed, taking into consideration network performance, storage costs, and existing use cases. By evaluating these factors, the paper aims to provide insights into the effectiveness of these technologies in supporting the desirable properties of truly decentralized blockchain applications. In conclusion, the findings of this research are discussed and synthesized, offering valuable perspectives on the capabilities of these technologies. It sheds light on their potential to facilitate the development of DApps and provides an understanding of the ongoing trends in blockchain development.
Last updated:  2024-04-17
1/0 Shades of UC: Photonic Side-Channel Analysis of Universal Circuits
Dev M. Mehta, Mohammad Hashemi, Domenic Forte, Shahin Tajik, and Fatemeh Ganji
A universal circuit (UC) can be thought of as a programmable circuit that can simulate any circuit up to a certain size by specifying its secret configuration bits. UCs have been incorporated into various applications, such as private function evaluation (PFE). Recently, studies have attempted to formalize the concept of semiconductor intellectual property (IP) protection in the context of UCs. This is despite the observations made in theory and practice that, in reality, the adversary may obtain additional information about the secret when executing cryptographic protocols. This paper aims to answer the question of whether UCs leak information unintentionally, which can be leveraged by the adversary to disclose the configuration bits. In this regard, we propose the first photon emission analysis against UCs relying on computer vision-based approaches. We demonstrate that the adversary can utilize a cost-effective solution to take images to be processed by off-the-shelf algorithms to extract configuration bits. We examine the efficacy of our method in two scenarios: (1) the design is small enough to be captured in a single image during the attack phase, and (2) multiple images should be captured to launch the attack by deploying a divide-and-conquer strategy. To evaluate the effectiveness of our attack, we use metrics commonly applied in side-channel analysis, namely rank and success rate. By doing so, we show that our profiled photon emission analysis achieves a success rate of 1 by employing a few templates (concretely, only 18 images were used as templates).
Last updated:  2024-01-17
Too Hot To Be True: Temperature Calibration for Higher Confidence in NN-assisted Side-channel Analysis
Seyedmohammad Nouraniboosjin and Fatemeh Ganji
The past years have witnessed a considerable increase in research efforts put into neural network-assisted profiled side-channel analysis (SCA). Studies have also identified challenges, e.g., closing the gap between metrics for machine learning (ML) classification and side-channel attack evaluation. In fact, in the context of NN-assisted SCA, the NN’s output distribution forms the basis for successful key recovery. In this respect, related work has covered various aspects of integrating neural networks (NNs) into SCA, including applying a diverse set of NN models, model selection and training, hyperparameter tuning, etc. Nevertheless, one well-known fact has been overlooked in the SCA-related literature, namely NNs’ tendency to become “over-confident,” i.e., suffering from an overly high probability of correctness when predicting the correct class (secret key in the sense of SCA). Temperature scaling is among the powerful and effective techniques that have been devised as a remedy for this. Regarding the principles of deep learning, it is known that temperature scaling does not affect the NN’s accuracy; however, its impact on metrics for secret key recovery, mainly guessing entropy, is worth investigating. This paper reintroduces temperature scaling into SCA and demonstrates that key recovery can become more effective through that. Interestingly, temperature scaling can be easily integrated into SCA, and no re-tuning of the network is needed. In doing so, temperature can be seen as a metric to assess the NN’s performance before launching the attack. In this regard, the impact of hyperparameter tuning, network variance, and capacity have been studied. This leads to recommendations on how network miscalibration and overconfidence can be prevented.
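As a reminder of the mechanics being reintroduced, a minimal numpy sketch of temperature scaling (toy logits; the key-ranking pipeline it feeds into is the paper's subject and is not shown):

    # Temperature scaling: divide logits by T > 0 before the softmax. The
    # argmax (hence accuracy) is unchanged, but probabilities are softened
    # (T > 1) or sharpened (T < 1) -- which matters in SCA, where per-trace
    # probabilities are multiplied across traces for key ranking.
    import numpy as np

    def softmax_with_temperature(logits, T=1.0):
        z = logits / T
        z = z - z.max(axis=-1, keepdims=True)   # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    logits = np.array([4.0, 1.0, 0.5])
    for T in (1.0, 2.0):
        p = softmax_with_temperature(logits, T)
        print(T, p.round(3), int(p.argmax()))    # same argmax for every T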
Last updated:  2024-01-17
Hints from Hertz: Dynamic Frequency Scaling Side-Channel Analysis of Number Theoretic Transform in Lattice-Based KEMs
Tianrun Yu, Chi Cheng, Zilong Yang, Yingchen Wang, Yanbin Pan, and Jian Weng
Number Theoretic Transform (NTT) has been widely used in accelerating computations in lattice-based cryptography. However, attackers can potentially launch power analysis targeting NTT because it is usually the most time-consuming part of the implementation. This extended time frame provides a natural window of opportunity for attackers. In this paper, we investigate the first CPU frequency leakage (Hertzbleed-like) attacks against NTT in lattice-based KEMs. Our key observation is that different inputs to NTT incur different Hamming weights in its output and intermediate layers. By measuring the CPU frequency during the execution of NTT, we propose a simple yet effective attack idea to find the input to NTT that triggers NTT processing data with significantly low Hamming weight. We further apply our attack idea to real-world applications that are built upon NTT: CPA-secure Kyber without Compression and Decompression functions, and CCA-secure NTTRU. This allows us to extract information, in the form of frequency hints, about the secret key. Integrating these hints into the LWE-estimator framework, we estimate a minimum of $35\%$ security loss caused by the leakage. The frequency and timing measurements on the Reference and AVX2 implementations of NTT in both Kyber and NTTRU align well with our theoretical analysis, confirming the existence of frequency side-channel leakage in NTT. It is important to emphasize that our observation is not limited to a specific implementation but is inherent to the NTT algorithm itself. Therefore, our results call for more attention to the analysis of power leakage against NTT in lattice-based cryptography.
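To make the key observation concrete, a toy Python sketch (hypothetical tiny parameters $q = 17$, $n = 8$, nothing like Kyber's) that searches for NTT inputs whose outputs have unusually low Hamming weight, the quantity that modulates power draw and hence CPU frequency:

    # Different NTT inputs yield very different output Hamming weights.
    # Toy parameters: q = 17, n = 8, with 9 a primitive 8th root of unity.
    import random

    Q, N, ROOT = 17, 8, 9

    def ntt(a):
        return [sum(a[j] * pow(ROOT, i * j, Q) for j in range(N)) % Q
                for i in range(N)]

    def hamming_weight(xs):
        return sum(bin(x).count("1") for x in xs)

    # Search random inputs for one with a low-Hamming-weight transform.
    best = min(hamming_weight(ntt([random.randrange(Q) for _ in range(N)]))
               for _ in range(1000))
    print("lowest output Hamming weight found:", best)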
Last updated:  2024-01-16
SDitH in Hardware
Sanjay Deshpande, James Howe, Jakub Szefer, and Dongze Yue
This work presents the first hardware realisation of the Syndrome-Decoding-in-the-Head (SDitH) signature scheme, which is a candidate in the NIST PQC process for standardising post-quantum secure digital signature schemes. SDitH's hardness is based on conservative code-based assumptions, and it uses the Multi-Party-Computation-in-the-Head (MPCitH) construction. This is the first hardware design of a code-based signature scheme based on traditional decoding problems and only the second for MPCitH constructions, after Picnic. This work presents optimised designs to achieve the best area efficiency, which we evaluate using the Time-Area Product (TAP) metric. This work also proposes a novel hardware architecture by dividing the signature generation algorithm into two phases, namely offline and online phases, for optimising the overall clock cycle count. The hardware designs for key generation, signature generation, and signature verification are parameterised for all SDitH parameters, including the NIST security levels and both syndrome decoding base fields (GF256 and GF251), and thus conform to the SDitH specifications. The hardware design further supports secret share splitting and the hypercube optimisation, which can be applied in this and multiple other NIST PQC candidates. This work results in a hardware design with a drastic reduction in clock cycles compared to the optimised AVX2 software implementation, in the range of 2-4x for most operations. Our key generation outperforms software drastically, giving an 11-17x reduction in runtime, despite the software's significantly faster clock speed. On Artix 7 FPGAs we can perform key generation in 55.1 Kcycles, signature generation in 6.7 Mcycles, and signature verification in 8.6 Mcycles for NIST L1 parameters, which increase for GF251 and for the L3 and L5 parameters.
Last updated:  2024-01-16
Laconic Function Evaluation, Functional Encryption and Obfuscation for RAMs with Sublinear Computation
Fangqi Dong, Zihan Hao, Ethan Mook, and Daniel Wichs
Laconic function evaluation (LFE) is a "flipped" version of fully homomorphic encryption, where the server performing the computation gets the output. The server commits itself to a function $f$ by outputting a small digest. Clients can later efficiently encrypt inputs $x$ with respect to the digest in much less time than computing $f$, and ensure that the server only decrypts $f(x)$, but does not learn anything else about $x$. Prior works constructed LFE for circuits under LWE, and for Turing Machines (TMs) from indistinguishability obfuscation (iO). In this work we introduce LFE for Random-Access Machines (RAM-LFE). The server commits itself to a potentially huge database $y$ via a short digest. Clients can later efficiently encrypt inputs $x$ with respect to the digest and the server decrypts $f(x,y)$ for some specified RAM program $f$ (e.g., a universal RAM), without learning anything else about $x$. The main advantage of RAM-LFE is that the server's decryption run-time only scales with the RAM run-time $T$ of the computation $f(x,y)$, which can be sublinear in both $|x|$ and $|y|$. We consider a weakly efficient variant, where the client's run-time is also allowed to scale linearly with $T$, but not $|y|$, and a fully efficient variant, where the client's run-time must be sublinear in both $T$ and $|y|$. We construct the former from doubly efficient private information retrieval (DEPIR) and laconic OT (LOT), both of which are known from RingLWE, and the latter from an additional use of iO. We then show how to leverage fully efficient RAM-LFE to also get (many-key) functional encryption for RAMs (RAM-FE) where secret keys are associated with big databases $y$ and the decryption time is sublinear in $|y|$, as well as iO for RAMs where the obfuscated program contains a big database $y$ and the evaluation time is sublinear in $|y|$.
Last updated:  2024-03-09
A Refined Hardness Estimation of LWE in Two-step Mode
Wenwen Xia, Leizhang Wang, Geng Wang, Dawu Gu, and Baocang Wang
Recently, researchers have proposed many LWE estimators, such as lattice-estimator (Albrecht et al., Asiacrypt 2017) and leaky-LWE-Estimator (Dachman-Soled et al., Crypto 2020); the latter has already been used in estimating the security level of Kyber and Dilithium using only BKZ. However, we prove in this paper that solving LWE by combining a lattice reduction step (by LLL or BKZ) and a target vector searching step (by enumeration or sieving), which we call a Two-step mode, is more efficient than using only BKZ. Moreover, we give a refined LWE estimator in Two-step mode by analyzing the relationship between the probability distribution of the target vector and the solving success rate in a Two-step mode LWE solving algorithm. The latest Two-step estimator for LWE, the “primal-bdd” mode in lattice-estimator, does not take into account some up-to-date results and lacks a thorough theoretical analysis. Under the same gate-count model, our estimation for NIST PQC standards drops by 2.1∼3.4 bits (2.2∼4.6 bits when considering a more flexible blocksize and jump strategy) compared with leaky-LWE-Estimator. Furthermore, we also give a conservative estimation for LWE from the Two-step solving algorithm. Compared with the Core-SVP model, which is used in previous conservative estimations, our estimation relies on weaker assumptions and outputs higher evaluation results. For NIST PQC standards, our conservative estimation is 4.17∼8.11 bits higher than the Core-SVP estimation. Hence our estimator can give a closer estimation for both the upper and lower bounds of LWE hardness.
Last updated:  2024-01-18
Exploiting the Central Reduction in Lattice-Based Cryptography
Tolun Tosun, Amir Moradi, and Erkay Savas
This paper presents a novel and efficient way of exploiting side-channel leakage of masked implementations of lattice-based cryptography (LBC). The presented attack specifically targets the central reduction technique, which is widely adopted in efficient implementations of LBC. We show that the central reduction leads to a vulnerability by creating a strong dependency between the power consumption and the sign of sensitive intermediate variables. We exploit this dependency by introducing a novel hypothetical power model, the range power model, which can be employed in higher-order multi-query side-channel analysis attacks. We particularly show that our approach is valid for the prime moduli employed by Kyber and Dilithium, the lattice-based post-quantum algorithms selected by NIST, while it generalizes to other primes used in LBC as well. We practically evaluate our introduced approach by performing second-order non-profiled attacks against a masked implementation of Kyber on an Arm Cortex-M4 microprocessor. In our experiments, we revealed the full secret key of the aforementioned implementation with only 2100 electromagnetic (EM) traces without profiling, achieving more than a 14-fold reduction in the number of traces compared to classical attacks.
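For intuition, a small Python sketch contrasting standard and central reduction (toy values; the range power model built on this sign dependency is the paper's contribution and is not shown):

    # Central reduction maps into [-(q-1)/2, (q-1)/2], so the sign of the
    # result -- and its very different two's-complement representation --
    # leaks information about the intermediate value. Kyber's q = 3329.
    Q = 3329

    def standard_reduce(x):
        return x % Q                           # representative in [0, q-1]

    def central_reduce(x):
        r = x % Q
        return r - Q if r > Q // 2 else r      # in [-(q-1)/2, (q-1)/2]

    for x in (100, 3000):
        print(x, standard_reduce(x), central_reduce(x))
    # 3000 reduces centrally to -329, whose machine representation has many
    # set bits, unlike the standard representative 3000.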
Last updated:  2024-01-16
Privacy-preserving Anti-Money Laundering using Secure Multi-Party Computation
Marie Beth van Egmond, Vincent Dunning, Stefan van den Berg, Thomas Rooijakkers, Alex Sangers, Ton Poppe, and Jan Veldsink
Money laundering is a serious financial crime where criminals aim to conceal the illegal source of their money via a series of transactions. Although banks have an obligation to monitor transactions, it is difficult to track these illicit money flows since they typically span multiple banks, which cannot share this information due to privacy concerns. We present secure risk propagation, a novel efficient algorithm for money laundering detection across banks without violating privacy concerns. In this algorithm, each account is assigned a risk score, which is then propagated through the transaction network. In this article we present two results. Firstly, using data from a large Dutch bank, we show that it is possible to detect unusual activity using this model, with cash ratio as the risk score. With a recall of 20%, the precision improves from 15% to 40% by propagating the risk scores, reducing the number of false positives significantly. Secondly, we present a privacy-preserving solution for securely performing risk propagation over a joint, inter-bank transaction network. To achieve this, we use Secure Multi-Party Computation (MPC) techniques, which are particularly well-suited for the risk propagation algorithm due to its structural simplicity. We also show that the running time of this secure variant scales linearly in the number of accounts and transactions. For 200,000 transactions, two iterations of the secure algorithm between three virtual parties run within three hours on a consumer-grade server.
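In the clear, one plausible reading of risk propagation is an iterative, transaction-weighted averaging of counterparty scores; a hypothetical Python sketch (the paper's exact update rule, and its MPC realization across banks, are not reproduced):

    # Hypothetical plaintext sketch: each account's risk score is nudged
    # toward the transaction-weighted average of its senders' scores. The
    # real protocol runs a variant of this securely across banks with MPC.
    def propagate(risk, transactions, alpha=0.5, iterations=2):
        for _ in range(iterations):
            incoming, totals = {}, {}
            for sender, receiver, amount in transactions:
                incoming[receiver] = incoming.get(receiver, 0.0) + amount * risk[sender]
                totals[receiver] = totals.get(receiver, 0.0) + amount
            risk = {a: (1 - alpha) * risk[a] + alpha *
                       (incoming[a] / totals[a] if a in totals else risk[a])
                    for a in risk}
        return risk

    scores = {"a": 0.9, "b": 0.1, "c": 0.1}        # "a": high cash ratio
    txs = [("a", "b", 100.0), ("b", "c", 80.0)]
    print(propagate(scores, txs))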
Last updated:  2024-01-16
Extreme Algebraic Attacks
Pierrick Méaux and Qingju Wang
When designing filter functions in Linear Feedback Shift Register (LFSR) based stream ciphers, algebraic criteria of Boolean functions such as the Algebraic Immunity (AI) become key characteristics because they guarantee the security of ciphers against powerful algebraic attacks. In this article, we investigate a generalization of the algebraic attacks proposed by Courtois and Meier on filtered LFSRs twenty years ago. We consider how the standard algebraic attack can be generalized beyond filtered LFSRs to stream ciphers applying a Boolean filter function to an updated state. Depending on the updating process, we can use different sets of annihilators than the ones used in the standard algebraic attack; this leads to a generalization of the concept of algebraic immunity and to more efficient attacks. To illustrate these strategies, we focus on one of these generalizations and introduce a new notion called Extreme Algebraic Immunity (EAI). We perform a theoretical study of the EAI criterion and explore its relation to other algebraic criteria. We prove an upper bound on the EAI of an n-variable Boolean function and further show that the EAI can be lower bounded by the AI restricted to a subset, as defined by Carlet, Méaux and Rotella at FSE 2017. We also exhibit functions with EAI guaranteed to be lower than the AI; in particular, we highlight a pathological case of functions with optimal algebraic immunity and EAI of only n/4. As applications, we determine the EAI of filter functions of some existing stream ciphers and discuss how extreme algebraic attacks using EAI could apply to some ciphers. Our generalized algebraic attack does not give a better complexity than Courtois and Meier's result on the existing stream ciphers. However, we see this work as a study to avoid weaknesses in the construction of future stream cipher designs.
Last updated:  2024-01-16
A Study of Soft Analytical Side-Channel Attacks on Secure Hash Algorithms
Julien Maillard, Thomas Hiscock, Maxime Lecomte, and Christophe Clavier
Hashing algorithms are one-way functions that are used in cryptographic protocols as Pseudo-Random Functions (PRFs), to assess data integrity or to create a Hash-based Message Authentication Code (HMAC). In many cryptographic constructions, secret data is processed with hashing functions. In these cases, recovering the input given to the hashing algorithm allows retrieving secret data. In this paper, we investigate the application of Soft Analytical Side-Channel Attacks (SASCA), based on a Belief Propagation (BP) framework, to recover the input of two popular hash function families: SHA-2 and SHA-3. Thanks to a simulation framework, we develop a comprehensive study of the attacker's recovery capacity depending on the hash function variant. Then, we demonstrate that an attacker can leverage prior knowledge of the hashing function input to increase the effectiveness of the attacks. As an example, in the context of a bootloader doing a hash-based integrity check on a secret firmware, we show that simple statistics about the assembly code, injected into BP, improve input recovery. Finally, we study the security implications of SASCA on cryptosystems performing multiple invocations of hashing functions with inputs derived from the same secret data. We show that such constructions can be exploited efficiently by an attacker. We support such statements with experiments on SHA-256-based HMAC and on the SHAKE-256-based PRF in Kyber's encryption routine. We also show that increasing Kyber's security parameters implies weaker security against the proposed SASCA targeting the shared key.
Last updated:  2024-01-16
Double Difficulties, Defense in Depth: A Succinct Authenticated Key Agreement Protocol
WenBin Hsieh
In 2016, NIST announced an open competition with the goal of finding and standardizing suitable quantum-resistant cryptographic algorithms, with the standard to be drafted in 2023. These algorithms aim to implement post-quantum secure key encapsulation mechanisms (KEMs) and digital signatures. However, the proposed algorithms do not consider authentication and are vulnerable to attacks such as man-in-the-middle. In this paper, we propose an authenticated key exchange algorithm to solve the above problems and improve usability. The proposed algorithm combines the learning with errors (LWE) problem and the elliptic curve discrete logarithm problem to provide the required security goals. As forward security is a desirable property in a key exchange protocol, an ephemeral key pair is used so that the compromise of a long-term secret does not affect the security of past session keys. Moreover, the exchange steps required by the algorithm are very streamlined and can be completed with only two handshakes. We also use the random oracle model to prove the correctness and security of the proposed scheme. The performance analysis demonstrates the effectiveness of the proposed scheme. We believe that the novel approach introduced in this algorithm opens several doors for innovative applications of digital signatures in KEMs.
Last updated:  2024-01-16
Partial Key Exposure Attack on Common Prime RSA
Mengce Zheng
In this paper, we focus on the common prime RSA variant and introduce a novel investigation into partial key exposure attacks targeting it. We explore the vulnerability of this RSA variant, which employs primes $p$ and $q$ of the form $p=2ga+1$ and $q=2gb+1$ for a large common prime $g$. Previous cryptanalysis of common prime RSA has primarily focused on the small private key attack. In our work, we delve deeper into the realm of partial key exposure attacks by categorizing them into three distinct cases. We are able to identify weak private keys that are susceptible to partial key exposure by using the lattice-based method for solving simultaneous modular univariate linear equations. To validate the effectiveness and soundness of our proposed attacks, we conduct experimental evaluations. Through these examinations, we demonstrate the validity and practicality of the proposed partial key exposure attacks on common prime RSA.
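To fix notation, a toy Python sketch of the common prime structure (tiny, insecure parameters purely to make the shape of the keys visible):

    # Common prime RSA: p = 2ga + 1 and q = 2gb + 1 share the prime g in
    # p-1 and q-1 (with gcd(a, b) = 1). Toy parameters; real instances use
    # a large g, which is exactly what lattice-based attacks exploit.
    from math import gcd

    g, a, b = 11, 9, 16                    # chosen so p and q are prime
    p, q = 2 * g * a + 1, 2 * g * b + 1    # p = 199, q = 353
    N = p * q

    # Key consequence: gcd(p-1, q-1) = 2g, so the private exponent lives
    # modulo lcm(p-1, q-1) = 2gab, far smaller than (p-1)(q-1).
    assert gcd(p - 1, q - 1) == 2 * g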
Last updated:  2024-01-15
The Insecurity of Masked Comparisons: SCAs on ML-KEM’s FO-Transform
Julius Hermelink, Kai-Chun Ning, and Emanuele Strieder
NIST has released the draft standard for ML-KEM, and ML-KEM is actively used in several widely-distributed applications. Thus, the widespread use of ML-KEM in the embedded world has to be expected in the near future. This makes security against side-channel attacks a pressing matter. Several side-channel attacks have previously been proposed, and one line of research has been attacks against the comparison step of the FO-transform. These attacks construct a decryption failure oracle using a side-channel. A recent work published at TCHES 2022 stresses the need for higher-order masked comparisons by presenting a horizontal attack and proposes a t-probing secure comparison operation. A subsequent work by D’Anvers, Van Beirendonck, and Verbauwhede improves upon the performance of several previous proposals. In this work, we show that the latter masked comparison suffers from weaknesses similar to those identified in the former. We first propose an approximate template attack that requires only a very low number of traces for profiling and has an exceptionally high noise tolerance. We show that the profiling phase is not necessary and can be replaced by a vertical analysis of the distribution of certain points of interest without knowledge of the targeted values. Finally, we explain how a horizontal attack may construct a decryption failure oracle from a single trace. We provide a leakage model of the targeted operations, which is based on the noisy Hamming weight model. Our evaluations are carried out on a physical device to stress the practicality of our attack. In addition, we simulate the attacks to determine the measurement noise levels that can be handled. We discuss the underlying causes for our attack, the difficulty of securing the Fujisaki-Okamoto transform in ML-KEM, and draw conclusions about the (in-)sufficiency of t-probing security in this context.
Last updated:  2024-01-15
CrISA-X: Unleashing Performance Excellence in Lightweight Symmetric Cryptography for Extendable and Deeply Embedded Processors
Oren Ganon and Itamar Levi
The selection of a Lightweight Cryptography (LWC) algorithm is crucial for resource-limited applications. The National Institute of Standards and Technology (NIST) leads this process, which involves a thorough evaluation of the algorithms’ cryptanalytic strength. Furthermore, careful consideration is given to factors such as algorithm latency, code size, and hardware implementation area. These factors are critical in determining the overall performance of cryptographic solutions at edge devices. We introduce CrISA-X, a set of Cryptography Instruction Set Architecture extensions designed to improve cryptographic latency on extendable processors. CrISA-X, classified into Generic-Atomic, Block-Specific and Procedure-Specific extensions, leverages RISC processor hardware and a base ISA to effectively execute LWC algorithms. Our study aims to evaluate the execution efficiency of new single-cycle instruction extensions and tightly coupled multicycle instructions on extendable modular RISC processors. CrISA-X enhances the speed of various algorithms simultaneously while optimizing ISA adaptability, a feat yet to be accomplished. The extensions, spanning several computation levels, are tailored first to individual algorithms and then to sets of LWC algorithms, depending on performance, frequency, and area trade-offs. By diligently applying the Min-Max optimization technique, we have configured these extensions to achieve a delicate balance between performance, area, code size, etc. Our study presents empirical evidence of the performance enhancement achieved on a real synthesized modular RISC processor. We offer a framework for creating optimized processor hardware and ISA extensions. The CrISA-X framework generally outperforms base ISA implementations by delivering significant performance boosts of between 3x and 17x, with a relative area cost increase of +12% to +47% in LUTs, depending on the instruction set category. Notably, as one important example, the ASCON algorithm achieves a 10x performance boost compared to the base ISA implementation.
Last updated:  2024-04-22
Constrained Pseudorandom Functions for Inner-Product Predicates from Weaker Assumptions
Sacha Servan-Schreiber
In this paper, we build a framework for constructing Constrained Pseudorandom Functions (CPRFs) with inner-product constraint predicates, using ideas from subtractive secret sharing and related-key-attack security. Our framework can be instantiated using a random oracle or any suitable Related-Key-Attack (RKA) secure pseudorandom function. We provide three instantiations of our framework: 1. an adaptively-secure construction in the random oracle model; 2. a selectively-secure construction under the DDH assumption; and 3. a selectively-secure construction with a polynomial domain under the assumption that one-way functions exist. All three instantiations are constraint-hiding and support inner-product predicates, leading to the first constructions of such expressive CPRFs under each corresponding assumption. Moreover, while the OWF-based construction is primarily of theoretical interest, the random oracle and DDH-based constructions are concretely efficient, which we show via an implementation.
Last updated:  2024-04-15
Elastic MSM: A Fast, Elastic and Modular Preprocessing Technique for Multi-Scalar Multiplication Algorithm on GPUs
Xudong Zhu, Haoqi He, Zhengbang Yang, Yi Deng, Lutan Zhao, and Rui Hou
Zero-knowledge proof (ZKP) is a cryptographic primitive that enables a prover to convince a verifier that a statement is true, without revealing any other information beyond the correctness of the statement itself. Due to its powerful capabilities, its most practical type, called zero-knowledge Succinct Non-interactive ARgument of Knowledge (zkSNARK), has been widely deployed in various privacy-preserving applications such as cryptocurrencies and verifiable computation. Although state-of-the-art zkSNARKs are highly efficient for the verifier, the computational overhead for the prover is still orders of magnitude too high to warrant use in many applications. This overhead arises from several time-consuming operations, including large-scale matrix-vector multiplication (MUL), number-theoretic transform (NTT), and especially the multi-scalar multiplication (MSM) which constitutes the largest proportion. Therefore, further efficiency improvements are needed. In this paper, we focus on comprehensive optimization of the running time and storage space required by the MSM algorithm on GPUs. Specifically, we propose a novel, modular and adaptive parameter configuration technique, elastic MSM, which lets us adjust the scale of MSM as desired by performing a corresponding amount of preprocessing. This technique enables us to fully unleash the potential of various efficient parallel MSM algorithms. We have implemented and tested elastic MSM over three prevailing parallel Pippenger algorithms on GPUs. Given a range of practical parameters, across various preprocessing space limitations (across various MSM scales), our constructions achieve up to about 90%, 8% and 36% (2.58×, 39% and 91%) speedup versus three state-of-the-art parallel Pippenger algorithms on GPUs, respectively. From another perspective, elastic MSM could also be regarded as a preprocessing technique over the well-known Pippenger algorithm, which is modular and could be used to accelerate almost all the most advanced parallel Pippenger algorithms on GPUs. Meanwhile, elastic MSM provides an adaptive trade-off between the running time and the extra storage space needed by parallel Pippenger algorithms on GPUs. This is the first preprocessing technique to retain the improved MSM computation brought by preprocessing under varying storage space limitations. Specifically, given a range of practical parameters, across various preprocessing space limitations (across various MSM scales), our constructions achieve up to about 192× and 223× (159× and 174×) speedup versus two state-of-the-art preprocessing parallel Pippenger algorithms on GPUs, respectively.
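For readers unfamiliar with the baseline being accelerated, a Python sketch of the Pippenger bucket method for MSM, with group points replaced by plain integers so the sketch stays self-contained (window size c is a tunable parameter; on a curve the same structure applies with point additions):

    # Pippenger bucket method for sum_i k_i * P_i. Integers stand in for
    # group elements (the group law becomes +); c is the window size.
    def pippenger_msm(scalars, points, c=4, bits=16):
        result = 0
        for w in reversed(range(0, bits, c)):      # windows, high to low
            for _ in range(c):                     # result <<= c (doublings)
                result += result
            buckets = [0] * (1 << c)
            for k, p in zip(scalars, points):      # bucket by window digit
                buckets[(k >> w) & ((1 << c) - 1)] += p
            running, window_sum = 0, 0
            for j in range(len(buckets) - 1, 0, -1):
                running += buckets[j]              # suffix sums compute
                window_sum += running              # sum_j j * buckets[j]
            result += window_sum
        return result

    scalars, points = [3, 65535, 42], [7, 11, 13]
    assert pippenger_msm(scalars, points) == sum(k * p for k, p in zip(scalars, points))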
Last updated:  2024-01-15
Zero-Knowledge Proofs for SIDH variants with Masked Degree or Torsion
Youcef Mokrani and David Jao
The polynomial attacks on SIDH by Castryck, Decru, Maino, Martindale and Robert have shown that, while the general isogeny problem is still considered infeasible to break, it is possible to efficiently compute a secret isogeny when given its degree and image on enough torsion points. A natural response from many researchers has been to propose SIDH variants where one or both of these possible extra pieces of information is masked in order to obtain schemes for which a polynomial attack is not currently known. Examples of such schemes are M-SIDH, MD-SIDH and FESTA. However, by themselves, these SIDH variants are vulnerable to the same adaptive attacks where the adversary sends public keys whose associated isogeny is either unknown or nonexistent. For the original SIDH scheme, one possible defense against these attacks is to use zero-knowledge proofs that a secret isogeny has been honestly computed. However, such proofs do not currently exist for most SIDH variants. In this paper, we present new zero-knowledge proofs for isogenies whose degree or torsion points have been masked. The security of these proofs mainly relies on the hardness of DSSP.
Last updated:  2024-01-18
Multi-Hop Fine-Grained Proxy Re-Encryption
Yunxiao Zhou, Shengli Liu, and Shuai Han
Proxy re-encryption (PRE) allows a proxy to transform a ciphertext intended for Alice (delegator) to another ciphertext intended for Bob (delegatee) without revealing the underlying message. Recently, a new variant of PRE, namely fine-grained PRE (FPRE), was proposed in [Zhou et al., Asiacrypt 2023]. Generally, FPRE is designed for a function family F: each re-encryption key rk_{A→B}^f is associated with a function f ∈ F, and with rk_{A→B}^f, a proxy can transform Alice's ciphertext encrypting m to Bob's ciphertext encrypting f(m). However, their scheme only supports single-hop re-encryption and achieves only CPA security. In this paper, we formalize multi-hop FPRE (mFPRE) that supports multi-hop re-encryptions in the fine-grained setting, and propose two mFPRE schemes achieving CPA security and stronger HRA security (security against honest re-encryption attacks), respectively. -- For multi-hop FPRE, we formally define its syntax and formalize a set of security notions including CPA security, HRA security, unidirectionality and ciphertext unlinkability. HRA security is stronger and more reasonable than CPA security, and ciphertext unlinkability blurs the proxy relations among a chain of multi-hop re-encryptions, hence providing better privacy. We establish the relations between these security notions. -- Our mFPRE schemes support fine-grained re-encryptions for bounded linear functions and have security based on the learning-with-errors (LWE) assumption in the standard model. In particular, one of our schemes is HRA secure and enjoys all the aforementioned desirable securities. To achieve CPA security and HRA security for mFPRE, we extend the framework of [Jafargholi et al., Crypto 2017] and the technique of [Fuchsbauer et al., PKC 2019].
Last updated:  2024-01-19
FEASE: Fast and Expressive Asymmetric Searchable Encryption
Long Meng, Liqun Chen, Yangguang Tian, Mark Manulis, and Suhui Liu
Asymmetric Searchable Encryption (ASE) is a promising cryptographic mechanism that enables a semi-trusted cloud server to perform keyword searches over encrypted data for users. To be useful, an ASE scheme must support expressive search queries, which are expressed as conjunctions, disjunctions, or arbitrary Boolean formulas. In this paper, we propose a fast and expressive ASE scheme that is adaptively secure, called FEASE. It requires only 3 pairing operations for searching any conjunctive set of keywords, independent of the set size, and has linear complexity for the encryption and trapdoor algorithms in the number of keywords. FEASE is based on a new fast Anonymous Key-Policy Attribute-Based Encryption (A-KP-ABE) scheme, our first proposal, which is of independent interest. To optionally provide protection against keyword guessing attacks, we extend FEASE into the first expressive Public-Key Authenticated Encryption with Keyword Search (PAEKS) scheme. We provide implementations and evaluate the performance of all three schemes, while also comparing them with the state of the art. We observe that FEASE outperforms all existing expressive ASE constructions and that our A-KP-ABE scheme offers anonymity with efficiency comparable to the currently fastest yet non-anonymous KP-ABE schemes FAME (ACM CCS 2017) and FABEO (ACM CCS 2022).
Last updated:  2024-01-14
Anonymous Homomorphic IBE with Application to Anonymous Aggregation
Michael Clear, Ciaran McGoldrick, and Hitesh Tewari
All anonymous identity-based encryption (IBE) schemes that are group homomorphic (to the best of our knowledge) require knowledge of the identity to compute the homomorphic operation. This paper is motivated by this open problem, namely to construct an anonymous group-homomorphic IBE scheme that does not sacrifice anonymity to perform homomorphic operations. Note that even when strong assumptions such as indistinguishability obfuscation (iO) are permitted, no schemes are known. We succeed in solving this open problem by assuming iO and the hardness of the DBDH problem over rings (specifically, $Z_{N^2}$ for RSA modulus $N$). We then use the existence of such a scheme to construct an IBE scheme with re-randomizable anonymous encryption keys, which we prove to be IND-ID-RCCA secure. Finally, we use our results to construct identity-based anonymous aggregation protocols.
Last updated:  2024-01-13
Simple vs. Vectorial: Exploiting Structural Symmetry to Beat the ZeroSum Distinguisher, with Applications to SHA3, Xoodyak and Bash
Sahiba Suryawanshi, Shibam Ghosh, Dhiman Saha, and Prathamesh Ram
Higher order differential properties constitute a very insightful tool at the hands of a cryptanalyst allowing for probing a cryptographic primitive from an algebraic perspective. In FSE 2017, Saha et al. reported SymSum (referred to as SymSum_Vec in this paper), a new distinguisher based on higher order vectorial Boolean derivatives of SHA-3, constituting one of the best distinguishers on the latest cryptographic hash standard. SymSum_Vec exploits the difference in the algebraic degree of highest degree monomials in the algebraic normal form of SHA-3 with regards to their dependence on round constants. Later in Africacrypt 2020, Suryawanshi et al. extended SymSum_Vec using linearization techniques and in SSS 2023 also applied it to NIST-LWC finalist Xoodyak. However, a major limitation of SymSum_Vec is the maximum attainable derivative (MAD), which is less than half of that of the widely studied ZeroSum distinguisher. This is attributed to SymSum_Vec being dependent on m-fold vectorial derivatives while ZeroSum relies on m-fold simple derivatives. In this work we overcome this limitation of SymSum_Vec by developing and validating the theory of computing SymSum_Vec with simple derivatives. This gives us a close to 100% improvement in the MAD that can be computed. The new distinguisher reported in this work can also be combined with one/two-round linearization to penetrate more rounds. Moreover, we identify an issue with the two-round linearization claim made by Suryawanshi et al., which renders it invalid, and furnish an algebraic fix at the cost of some additional constraints. Combining all results, we report SymSum_Sim, a new variant of the SymSum_Vec distinguisher based on m-fold simple derivatives that outperforms ZeroSum by a factor of $2^{257}$ and $2^{129}$ for 10-round SHA-3-384 and 9-round SHA-3-512 respectively, while enjoying the same MAD as ZeroSum. For every other SHA-3 variant, SymSum_Sim maintains an advantage of a factor of 2. Combined with one/two-round linearization, SymSum_Sim improves upon all existing ZeroSum and SymSum_Vec distinguishers on both SHA-3 and Xoodyak. As regards Keccak-p, the internal permutation of SHA-3, we report the best 15-round distinguisher with a complexity of $2^{256}$ and the first better-than-birthday-bound 16-round distinguisher with a complexity of $2^{512}$ (improving upon the 15/16-round results by Guo et al. in Asiacrypt 2016). We also devise the best full-round distinguisher on the Xoodoo internal permutation of Xoodyak with a practically verifiable complexity of $2^{32}$ and furnish the first third-party distinguishers on the Belarusian hash function Bash. All distinguishers furnished in this work have been verified through implementations whenever practically viable. Overall, with the MAD barrier broken, SymSum_Sim emerges as a better distinguisher than ZeroSum on all fronts and adds to the state-of-the-art of cryptanalytic tools investigating non-randomness of crypto primitives.
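As background for the simple-versus-vectorial distinction, a toy Python sketch of an m-fold simple derivative: the XOR of $f$ over the affine subspace $x + \mathrm{span}(a_1,\dots,a_m)$, which vanishes identically once $m$ exceeds the algebraic degree, the basis of ZeroSum-style distinguishers (toy 8-bit function, not SHA-3):

    # m-fold simple derivative of a Boolean function: XOR f over the coset
    # x + span(a_1, ..., a_m). For deg(f) < m it is identically zero.
    from itertools import combinations

    def higher_order_derivative(f, x, directions):
        total = 0
        for r in range(len(directions) + 1):
            for subset in combinations(directions, r):
                point = x
                for a in subset:
                    point ^= a
                total ^= f(point)
        return total

    f = lambda v: ((v & 1) & ((v >> 1) & 1)) ^ ((v >> 2) & 1)  # degree 2
    dirs = [0b001, 0b010, 0b100]          # m = 3 independent directions
    assert all(higher_order_derivative(f, x, dirs) == 0 for x in range(256))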
Last updated:  2024-01-13
Limits on Authenticated Encryption Use in TLS
Atul Luykx and Kenneth G. Paterson
This technical note presents limits on the security (as a function of the number of plaintext bytes encrypted and the number of forgery attempts made by an adversary) for the main Authenticated Encryption schemes available in TLS 1.2 and the draft of TLS 1.3. These limits are derived from security proofs for the considered schemes available in the literature. Our intention is to provide considered technical input to on-going discussions in the TLS Working Group of the IETF concerning, amongst other things, the necessity of adding a key update feature to the TLS 1.3 specification.
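To convey the flavor of such limits, a hedged Python sketch: birthday-bound terms for AES-GCM confidentiality grow roughly as $\sigma^2/2^{128}$ in the total number of encrypted blocks $\sigma$, so one can solve for the data that may be encrypted under one key before the adversarial advantage exceeds a target (shape only; the note's exact bounds have further terms and constants):

    # Illustrative birthday-bound shape: advantage ~ sigma^2 / 2^128 for
    # sigma 128-bit blocks under one AES-GCM key. Solving for sigma at a
    # target advantage of 2^-60 gives a rough per-key data limit. The
    # precise TLS bounds in the note differ in constants and extra terms.
    import math

    def gcm_block_limit(target_adv_log2=-60):
        return 2 ** ((128 + target_adv_log2) / 2)   # sigma^2/2^128 <= 2^target

    sigma = gcm_block_limit()
    print(f"~2^{math.log2(sigma):.0f} blocks "
          f"(~{sigma * 16 / 2**30:.0f} GiB) per key for advantage <= 2^-60")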
Last updated:  2024-01-13
Do You Need a Zero Knowledge Proof?
Jens Ernstberger, Stefanos Chaliasos, Liyi Zhou, Philipp Jovanovic, and Arthur Gervais
Zero-Knowledge Proofs (ZKPs), a cryptographic tool known for decades, have gained significant attention in recent years due to advancements that have made them practically applicable in real-world scenarios. ZKPs can provide unique attributes, such as succinctness, non-interactivity, and the ability to prove knowledge without revealing the information itself, making them an attractive solution for a range of applications. This paper aims to critically analyze the applicability of ZKPs in various scenarios. We categorize ZKPs into distinct types: SNARKs (Succinct Non-Interactive Arguments of Knowledge), Commit-then-Prove ZKPs, MPC-in-the-Head, and Sigma Protocols, each offering different trade-offs and benefits. We introduce a flowchart methodology to assist in determining the most suitable ZKP system, given a set of technical application requirements. Next, we conduct an in-depth investigation of three major use cases: Outsourcing Computation, Digital Self-Sovereign Identity, and ZKPs in networking. Additionally, we provide a high-level overview of other applications of ZKPs, exploring their broader implications and opportunities. This paper aims to demystify the decision-making process involved in choosing the right ZKP system, providing clarity on when and how these cryptographic tools can be effectively utilized in various domains, and when they are better avoided.
Last updated:  2024-01-15
CL-SCA: Leveraging Contrastive Learning for Profiled Side-Channel Analysis
Annv Liu, An Wang, Shaofei Sun, Congming Wei, Yaoling Ding, Yongjuan Wang, and Liehuang Zhu
Side-channel analysis based on machine learning, especially neural networks, has gained significant attention in recent years. However, many existing methods still suffer from certain limitations. Despite the inherent capability of neural networks to extract features, there remains a risk of extracting irrelevant information. The heavy reliance on profiled traces makes it challenging to adapt to remote attack scenarios with limited profiled traces. Besides, attack traces also contain critical information that can be used in the training process to assist model learning. In this paper, we propose a side-channel analysis approach based on contrastive learning, named CL-SCA, to address these issues. We also leverage a stochastic data augmentation technique to help the model effectively filter out irrelevant information from the profiled traces. Through experiments on datasets from different platforms, we demonstrate that CL-SCA significantly outperforms various conventional machine learning side-channel analysis techniques. Moreover, by incorporating attack traces into the training process, an extension we call CL-SCA+, it becomes possible to achieve even greater enhancements. This extension can further improve the effectiveness of key recovery, which is fully verified through experiments on different datasets.
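For orientation, the contrastive objective family the paper builds on can be stated in a few lines of numpy: each trace should be embedded closer to its own augmented view than to any other trace in the batch. A generic InfoNCE-style sketch (not CL-SCA's exact loss, network, or augmentations):

import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    # Each row of `anchors` is pulled toward the same row of `positives`
    # (its augmented view) and pushed away from every other row.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
traces = rng.normal(size=(8, 64))                      # 8 toy power traces
views = traces + 0.05 * rng.normal(size=traces.shape)  # stochastic augmentation
print(info_nce(traces, views))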
Last updated:  2024-01-11
Computational Differential Privacy for Encrypted Databases Supporting Linear Queries
Ferran Alborch Escobar, Sébastien Canard, Fabien Laguillaumie, and Duong Hieu Phan
Differential privacy is a fundamental concept for protecting individual privacy in databases while enabling data analysis. Conceptually, it is assumed that the adversary has no direct access to the database, and therefore, encryption is not necessary. However, with the emergence of cloud computing and the "on-cloud" storage of vast databases potentially contributed by multiple parties, it is becoming increasingly necessary to consider the possibility of the adversary having (at least partial) access to sensitive databases. A consequence is that, to protect the on-line database, it is now necessary to employ encryption. The notion of differential privacy was first considered for encrypted databases at PoPETs'19, but only for a limited type of query, namely histograms. Subsequently, a new type of query, summation, was considered at CODASPY'22. These works achieve statistical differential privacy, still assuming that the adversary has no access to the encrypted database. In this paper, we argue that it is essential to assume that the adversary may eventually access the encrypted data, rendering statistical differential privacy inadequate. Therefore, the appropriate privacy notion for encrypted databases, and the one we use, is computational differential privacy, which was introduced by Beimel et al. at CRYPTO '08. In our work, we focus on the case of functional encryption, an extensively studied primitive permitting some authorized computation over encrypted data. Technically, we show that any randomized functional encryption scheme that satisfies simulation-based security and differential privacy of the output can achieve computational differential privacy for multiple queries to one database. Our work also extends the summation query to a much broader range of queries, specifically linear queries, by utilizing inner-product functional encryption. Hence, we provide an instantiation for inner-product functionalities by proving its simulation soundness and present a concrete randomized inner-product functional encryption scheme with computational differential privacy against multiple queries. In terms of efficiency, our protocol is almost as practical as the underlying inner-product functional encryption scheme. As evidence, we provide a full benchmark, based on our concrete implementation for databases with up to 1 000 000 entries. Our work can be considered a step towards achieving privacy-preserving encrypted databases for a wide range of query types, while considering the involvement of multiple database owners.
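As a plaintext reference point (statistical, not computational, differential privacy): a linear query is released by perturbing the inner product with noise calibrated to its sensitivity. The paper's contribution is the computational analogue of this guarantee when the database is encrypted under inner-product functional encryption, which this clear-text sketch does not capture:

import numpy as np

def dp_linear_query(db, weights, epsilon, rng=np.random.default_rng()):
    # Laplace mechanism for <weights, db>; if neighboring databases change
    # one entry by at most 1, the L1 sensitivity is max |weight|.
    sensitivity = float(np.abs(weights).max())
    return float(weights @ db + rng.laplace(scale=sensitivity / epsilon))

db = np.array([1, 0, 1, 1, 0, 1])   # toy database of 0/1 entries
w = np.ones(6)                      # a counting query is a linear query
print(dp_linear_query(db, w, epsilon=1.0))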
Last updated:  2024-02-20
On Efficient and Secure Compression Modes for Arithmetization-Oriented Hashing
Elena Andreeva, Rishiraj Bhattacharyya, Arnab Roy, and Stefano Trevisani
ZK-SNARKs are advanced cryptographic protocols used in private verifiable computation: modern SNARKs allow one to encode the invariants of an arithmetic circuit over some large prime field in an appropriate NP language, from which a zero-knowledge short non-interactive argument of knowledge is built. Due to the high cost of proof generation, ZK-SNARKs for large constraint systems are impractical. ZK-SNARKs are used in privacy-oriented blockchains such as Filecoin, ZCash and Monero to verify Merkle tree opening proofs, which in turn requires computing a fixed-input-length (FIL) cryptographic compression function. As classical, bit-oriented hash functions like SHA-2 require huge constraint systems, Arithmetization-Oriented (AO) compression functions have emerged to fill the gap. Usually, AO compression functions are obtained by applying the Sponge hashing mode to a fixed-key permutation: while this avoids the cost of dynamic key scheduling, AO key schedules are often cheap to compute, making the exploration of AO compression functions based directly on blockciphers a topic of practical interest. In this work, we first adapt notions related to classical hash functions and their security notions to the AO syntax, and, inspired by the classical PGV modes, we propose AO PGV-LC and AO PGV-ELC, two blockcipher-based FIL compression modes with parametrizable input and output sizes. In the ideal cipher model, we prove the collision and preimage resistance of both our modes, and give bounds for collision and opening resistance over Merkle trees of arbitrary arity. We then experimentally compare the AO PGV-LC mode over the Hades-MiMC blockcipher with its popular Sponge instantiation, Poseidon. The resulting construction, called Poseidon-DM, is $2$-$5\times$ faster than Poseidon in native computations, and $15$-$35\%$ faster in generating Merkle tree proofs over the Groth16 SNARK framework, depending on the tree arity. In particular, proof generation for an $8$-ary tree over Poseidon-DM is $2.5\times$ faster than for a binary tree with the same capacity over Poseidon. Finally, in an effort to further exploit the benefits of wide trees, we propose a new strategy to obtain a compact R1CS constraint system for Merkle trees with arbitrary arity.
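The PGV template being adapted here is easy to state: Davies-Meyer, for instance, compresses as $H' = E_m(H) + H$, with the message block acting as the cipher key and a feed-forward preventing inversion; over a prime field the feed-forward is field addition. A toy sketch of that shape (the stand-in cipher below is NOT secure, and this is not the paper's AO PGV-LC/ELC modes):

p = 2**31 - 1  # toy prime field; real AO designs use larger primes

def toy_blockcipher(key, x):
    # Keyed permutation stand-in: rounds of add-key then x^5 over GF(p).
    # x^5 permutes GF(p) here because gcd(5, p - 1) = 1.
    for i in range(5):
        x = pow((x + key + i) % p, 5, p)
    return x

def davies_meyer(h, m):
    # PGV Davies-Meyer over GF(p): message as key, feed-forward by addition.
    return (toy_blockcipher(m, h) + h) % p

iv, m1, m2 = 7, 1234567, 89101112
print(davies_meyer(davies_meyer(iv, m1), m2))  # two-block FIL chaining

In SNARK terms, the attraction is that a cheap key schedule plus feed-forward can cost fewer constraints than running a sponge at a larger state size.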
Last updated:  2024-01-11
Quantum-Secure Hybrid Communication for Aviation Infrastructure
Benjamin Dowling and Bhagya Wimalasiri
The rapid digitization of aviation communication and its dependent critical operations demand secure protocols that address domain-specific security requirements within the unique functional constraints of the aviation industry. These secure protocols must provide sufficient security against current and possible future attackers, given the inherent nature of the aviation community, which is highly complex and averse to frequent upgrades, as well as its high safety and cost considerations. In this work we propose a pair of quantum-secure hybrid key exchange protocols (PQAG-KEM and PQAG-SIG) to secure communication between aircraft in flight and ground stations. PQAG-KEM leverages post-quantum and classical Key Encapsulation Mechanisms (KEMs) to ensure the hybrid security of the protocol against classical as well as future quantum adversaries. PQAG-SIG, alternatively, uses quantum-safe digital signatures to achieve authentication security. We provide an implementation of both PQAG-KEM and PQAG-SIG, and compare favourably with current state-of-the-art secure avionic protocols. Finally, we provide a formal analysis of our new PQAG protocols in a strong hybrid key exchange framework.
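The hybrid pattern underneath PQAG-KEM is the standard one: run a classical and a post-quantum KEM, then derive the session key from both shared secrets and the transcript, so the key stays secure as long as at least one component holds. A schematic sketch with hypothetical identifiers (not the PQAG message format):

import hashlib, os

def hybrid_combine(ss_classical, ss_pq, transcript):
    # Both shared secrets feed one KDF; an attacker must break both KEMs
    # (or the hash) to learn the session key.
    return hashlib.sha3_256(ss_classical + ss_pq + transcript).digest()

# Stand-ins for the two encapsulated secrets (hypothetical values).
ss_c, ss_pq = os.urandom(32), os.urandom(32)
transcript = b"aircraft-id|ground-station-id|nonces"  # what the protocol binds
print(hybrid_combine(ss_c, ss_pq, transcript).hex())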
Last updated:  2024-01-11
A Low-Latency High-Order Arithmetic to Boolean Masking Conversion
Jiangxue Liu, Cankun Zhao, Shuohang Peng, Bohan Yang, Hang Zhao, Xiangdong Han, Min Zhu, Shaojun Wei, and Leibo Liu
Masking, an effective countermeasure against side-channel attacks, is commonly applied in modern cryptographic implementations. For cryptographic algorithms that utilize both Boolean and arithmetic masking, a conversion algorithm between the two representations is required. Conventional high-order arithmetic-to-Boolean masking conversion algorithms based on Boolean circuits suffer from performance overhead, especially in hardware implementations. In this work, we analyze the sources of the conversion's high latency and propose an improved high-order A2B conversion algorithm. For the conversion of 16-bit variables, the hardware latency can be reduced by 47% in the best scenario. For the case study of second-order 32-bit conversion, the implementation results show that the improved scheme reduces the clock-cycle latency by 42% in hardware and achieves a 30% speed improvement in software. A security proof of arbitrary order is provided for the proposed high-order A2B conversion. Experimental validations verify the second-order DPA resistance of the second-order implementation. The Test Vector Leakage Assessment does not observe side-channel leakage in the hardware or software implementations.
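To fix notation: arithmetic masking splits a $k$-bit secret as $x = A_1 + A_2 \bmod 2^k$, Boolean masking splits it as $x = B_1 \oplus B_2$, and an A2B conversion maps one representation to the other without ever exposing $x$. The sketch below is only a functional reference that recombines in the clear, which is exactly what the paper's masked conversion must avoid:

import os

K = 32  # 32-bit variables, matching the paper's case study

def a2b_reference(arith_shares):
    # Functional spec only: input shares sum to x mod 2^K, output shares
    # XOR to x. A real A2B algorithm computes this without reconstructing x.
    x = sum(arith_shares) % (1 << K)
    r = int.from_bytes(os.urandom(K // 8), "big")
    return [r, x ^ r]

x = 0xDEADBEEF
a1 = int.from_bytes(os.urandom(K // 8), "big") % (1 << K)
boolean = a2b_reference([a1, (x - a1) % (1 << K)])
assert boolean[0] ^ boolean[1] == x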
Last updated:  2024-02-16
Adaptive Distributional Security for Garbling Schemes with $\mathcal{O}(|x|)$ Online Complexity
Estuardo Alpírez Bock, Chris Brzuska, Pihla Karanko, Sabine Oechsner, and Kirthivaasan Puniamurthy
Garbling schemes allow to garble a circuit $C$ and an input $x$ such that $C(x)$ can be computed while hiding both $C$ and $x$. In the context of adaptive security, an adversary specifies the input to the circuit after seeing the garbled circuit, so that one can pre-process the garbling of $C$ and later only garble the input $x$ in the online phase. Since the online phase may be time-critical, it is an interesting question how much information needs to be transmitted in this phase and ideally, this should be close to $|x|$. Unfortunately, Applebaum, Ishai, Kushilevitz, and Waters (AIKW, CRYPTO 2013) show that for some circuits, specifically PRGs, achieving online complexity close to $|x|$ is impossible with simulation-based security, and Hubáček and Wichs (HW, ITCS 2015) show that online complexity of maliciously secure two-party computation needs to grow with the incompressibility entropy of the function. We thus seek to understand under which circumstances optimal online complexity is feasible despite these strong lower bounds. Our starting point is the observation that lower bounds (only) concern cryptographic circuits and that, when an embedded secret is not known to the adversary (distinguisher), then the lower bound techniques do not seem to apply. Our main contribution is distributional simulation-based security (DSIM), a framework for capturing weaker, yet meaningful simulation-based (adaptive) security which does not seem to suffer from impossibility results akin to AIKW. We show that DSIM can be used to prove security of a distributed symmetric encryption protocol built around garbling. We also establish a bootstrapping result from DSIM-security for $\text{NC}^0$ circuits to DSIM-security for arbitrary polynomial-size circuits while preserving their online complexity.
Last updated:  2024-01-10
Fuzzy Identity Based Encryption with a flexible threshold value
Sedigheh Khajouei-Nejad, Sam Jabbehdari, Hamid Haj Seyyed Javadi, and Seyed Mohammad Hossein Moattar
The issue of data and information security on the internet and social networks has become more serious and pervasive in recent years. Cryptography is used to solve security problems. However, message encryption alone cannot meet the intended goals, because access control over the encrypted messages is required in some applications. To achieve these requirements, attribute-based encryption (ABE) is used. This type of encryption provides both security and an access structure for the network users simultaneously. Fuzzy Identity-Based Encryption (FIBE) is a special mode of ABE that provides a threshold access structure for the users. This threshold value is set by the authority for users; it is always fixed and cannot be changed, so the sender (encryptor) plays no role in determining the threshold value. The same issue exists in Key Policy Attribute Based Encryption (KP-ABE) schemes. In this paper, we present a FIBE scheme in which, in addition to the authority, the sender also plays a role in determining the threshold value. Thus, the policy is more flexible than in previous FIBE schemes, where the threshold value is selected only by the authority. We can call the proposed scheme a dual-policy ABE. The proposed technique for threshold flexibility can be applied to most existing KP-ABE schemes. We use the (indistinguishability-based) selective security model for the security proof. The hardness assumption that we use is the modified bilinear decision Diffie-Hellman problem.
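The threshold machinery inside FIBE is Shamir secret sharing: the authority embeds a secret in a degree-$(d-1)$ polynomial, hands each attribute a share, and decryption interpolates the secret from any $d$ matching attributes. A toy version of that mechanism (interpolation over a prime field, not the pairing-based scheme):

import random

q = 2**61 - 1  # toy prime modulus

def shamir_shares(secret, d, ids):
    # Degree-(d-1) polynomial with constant term `secret`.
    coeffs = [secret] + [random.randrange(q) for _ in range(d - 1)]
    return {i: sum(c * pow(i, e, q) for e, c in enumerate(coeffs)) % q
            for i in ids}

def reconstruct(points):
    # Lagrange interpolation at 0; correct only with >= d shares.
    acc = 0
    for i, y in points.items():
        num = den = 1
        for j in points:
            if j != i:
                num = num * (-j) % q
                den = den * (i - j) % q
        acc = (acc + y * num * pow(den, q - 2, q)) % q
    return acc

shares = shamir_shares(12345, 3, [1, 2, 3, 4, 5])      # 5 attributes, d = 3
print(reconstruct({i: shares[i] for i in (2, 4, 5)}))  # any 3 recover 12345

Letting the sender influence $d$ at encryption time, rather than fixing it at key issuance, is the flexibility the paper adds.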
Last updated:  2024-01-10
Foundations of Anonymous Signatures: Formal Definitions, Simplified Requirements, and a Construction Based on General Assumptions
Jan Bobolz, Jesus Diaz, and Markulf Kohlweiss
In today's systems, privacy is often at odds with utility: users who reveal little information about themselves get restricted functionality, and service providers mistrust them. In practice, systems tip to either full anonymity (e.g. Monero) or full utility (e.g. Bitcoin). Well-known cryptographic primitives for bridging this gap exist: anonymous credentials (AC) let users disclose a subset of their credentials' attributes, revealing to service providers "just what they need"; group signatures (GS) allow users to authenticate anonymously, to be de-anonymized "just when deemed necessary". However, these primitives are hard to deploy. Current AC and GS variants reach specific points in the privacy-utility tradeoff, which we argue is counter-productive from an engineering standpoint, as adjusting the tradeoff requires full and error-prone re-engineering. Also, so far, GS and AC have been studied separately by theoretical research. We take the first steps toward unifying and generalizing both domains, with the goal of bringing their benefits to practice, in a flexible way. We give a common model capturing their core properties, and use functional placeholders to subsume intermediate instantiations of the privacy-utility tradeoff under the same model. To prove its flexibility, we show how concrete variants of GS, AC (and others, like ring signatures) can be seen as special cases of our scheme, to which we refer as universal anonymous signatures (UAS). In practice, this means that instantiations following our construction can be configured to behave as variant X of a GS scheme, or as variant Y of an AC scheme, by tweaking a few functions.
Last updated:  2024-05-01
SASTA: Ambushing Hybrid Homomorphic Encryption Schemes with a Single Fault
Aikata Aikata, Ahaan Dabholkar, Dhiman Saha, and Sujoy Sinha Roy
The rising tide of data breaches targeting large data storage centres and servers has raised serious privacy and security concerns. Homomorphic Encryption schemes offer an effective defence against such attacks, but their adoption has been hindered by substantial computational and communication overheads, particularly on the client's side. The Hybrid Homomorphic Encryption (HHE) protocol was developed to mitigate these issues. However, the susceptibility of HHE to strong attacks, specifically physical attacks, has been largely unexplored. While physical attacks like Differential Fault Analysis (DFA) have proved very effective in the field of symmetric cryptography, prior works have largely relied on strong assumptions like nonce reuse, limiting their feasibility in a real-world setting. In this work, we introduce a novel attack, SASTA, which presents, to the best of our knowledge, the first generalized analysis of HHE under DFA. Our analysis uncovers a significant limitation of the HHE protocol where a single fault leads to complete key recovery not only for the standard scheme AES but also for the new HHE-tailored Symmetric Encryption (SE) schemes RASTA, PASTA, MASTA, and HERA. We further extend SASTA to effectively target Authenticated Transciphering protocols. Unlike prior works, the key advantage of SASTA is that it does not require nonce reuse. We demonstrate a proof-of-concept of our attack on an off-the-shelf ATXmega128D4-AU microcontroller running HHE firmware and mount end-to-end key recovery attacks. Finally, we discuss conventional countermeasures to defend against SASTA. Our work highlights that despite HHE's advantages of improving performance and reducing communication overhead, further analysis of its security guarantees is required.
Last updated:  2024-01-10
ReSolveD: Shorter Signatures from Regular Syndrome Decoding and VOLE-in-the-Head
Hongrui Cui, Hanlin Liu, Di Yan, Kang Yang, Yu Yu, and Kaiyi Zhang
We present ReSolveD, a new candidate post-quantum signature scheme under the regular syndrome decoding (RSD) assumption for random linear codes, which is a well-established variant of the well-known syndrome decoding (SD) assumption. Our signature scheme is obtained by designing a new zero-knowledge proof for proving knowledge of a solution to the RSD problem in the recent VOLE-in-the-head framework using a sketching scheme to verify that a vector has weight exactly one. We achieve a signature size of 3.99 KB with a signing time of 27.3 ms and a verification time of 23.1 ms on a single core of a standard desktop for a 128-bit security level. Compared to the state-of-the-art code-based signature schemes, our signature scheme achieves $1.5\times \sim 2\times$ improvement in terms of the common "signature size + public-key size" metric, while keeping the computational efficiency competitive.
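Regularity is the distinguishing feature of the RSD assumption: the secret error vector has exactly one nonzero coordinate in each fixed-size block, which matches the structure the new zero-knowledge proof certifies (weight exactly one per block). A toy instance generator over GF(2), with illustrative sizes:

import numpy as np

rng = np.random.default_rng(1)
k, blocks, block_len = 20, 8, 16
n = blocks * block_len

H = rng.integers(0, 2, size=(k, n))      # random parity-check matrix
e = np.zeros(n, dtype=int)
for b in range(blocks):                  # one 1 per block: a "regular" vector
    e[b * block_len + rng.integers(block_len)] = 1
s = (H @ e) % 2                          # RSD challenge: given (H, s), find e
print(s)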
Last updated:  2024-04-15
X-Wing: The Hybrid KEM You’ve Been Looking For
Manuel Barbosa, Deirdre Connolly, João Diogo Duarte, Aaron Kaiser, Peter Schwabe, Karoline Varner, and Bas Westerbaan
X-Wing is a hybrid key-encapsulation mechanism based on X25519 and ML-KEM-768. It is designed to be the sensible choice for most applications. The concrete choice of X25519 and ML-KEM-768 allows X-Wing to achieve improved efficiency compared to using a generic KEM combiner. In this paper, we introduce the X-Wing hybrid KEM construction and provide a proof of security. We show (1) that X-Wing is a classically IND-CCA secure KEM if the strong Diffie-Hellman assumption holds in the X25519 nominal group, and (2) that X-Wing is a post-quantum IND-CCA secure KEM if ML-KEM-768 is itself an IND-CCA secure KEM and SHA3-256 is secure when used as a pseudorandom function. The first result is proved in the ROM, whereas the second one holds in the standard model. Loosely speaking, this means X-Wing is secure if either X25519 or ML-KEM-768 is secure. We stress that these security guarantees and optimizations are only possible due to the concrete choices that were made, and they may not apply in the general case.
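The shape of the X-Wing combiner is where the concrete choices pay off: both shared secrets are hashed together with the X25519 ciphertext and public key under a fixed label, and (unlike in a generic combiner) the ML-KEM ciphertext can be omitted thanks to ML-KEM-768's properties. A sketch of that shape (the label bytes and input ordering here are our recollection of the draft; treat them as illustrative):

import hashlib, os

def xwing_combine(ss_mlkem, ss_x25519, ct_x25519, pk_x25519):
    # Binds both secrets plus the DH ciphertext and public key; no ML-KEM
    # ciphertext needed, which is what beats the generic combiner.
    label = b"\\.//^\\"  # 6-byte X-Wing label (ordering/label assumed)
    return hashlib.sha3_256(
        label + ss_mlkem + ss_x25519 + ct_x25519 + pk_x25519).digest()

parts = [os.urandom(32) for _ in range(4)]  # placeholder values
print(xwing_combine(*parts).hex())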
Last updated:  2024-03-28
On Computing the Multidimensional Scalar Multiplication on Elliptic Curves
Walid Haddaji, Loubna Ghammam, Nadia El Mrabet, and Leila Ben Abdelghani
A multidimensional scalar multiplication ($d$-mul) consists of computing $[a_1]P_1+\cdots+[a_d]P_d$, where $d$ is an integer ($d\geq 2$), $a_1, \cdots, a_d$ are scalars of size $l\in \mathbb{N}^*$ bits, and $P_1, P_2, \cdots, P_d$ are points on an elliptic curve $E$. This operation ($d$-mul) is widely used in cryptography, especially in elliptic curve cryptographic algorithms. Several methods in the literature allow one to compute the $d$-mul efficiently (e.g., the bucket method~\cite{bernstein2012faster}, the Karabina et al. method~\cite{hutchinson2019constructing, hisil2018d, hutchinson2020new}). This paper aims to present and compare the most recent and efficient methods in the literature for computing the $d$-mul operation in terms of complexity, memory consumption, and properties. We also present our progress on optimising $d$-mul via two methods. The first method is useful if $2^d-1$ points of $E$ can be stored. It is based on a simple precomputation function. The second method works efficiently when $d$ is large and $2^d-1$ points of $E$ cannot be stored. It performs the calculation on the fly without any precomputation. We show that the main operation of our first method is $100(1-\frac{1}{d})\%$ more efficient than that of previous works, while our second method exhibits a $50\%$ improvement in efficiency. These improvements are substantiated by operation counts and practical implementations.
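For $d = 2$, the classical baseline for $d$-mul is the Straus-Shamir trick: precompute $P+Q$ (the $2^d - 1 = 3$ table entries) and scan both scalars' bits jointly, paying one doubling per bit. A group-agnostic sketch, checked in a toy additive group where $[a]P$ is just $aP \bmod n$:

def straus_shamir(a, b, P, Q, add, zero):
    PQ = add(P, Q)  # precomputation: the only combined point needed for d = 2
    R = zero
    for i in reversed(range(max(a.bit_length(), b.bit_length()))):
        R = add(R, R)                        # one shared doubling per bit
        pair = ((a >> i) & 1, (b >> i) & 1)
        if pair == (1, 0):
            R = add(R, P)
        elif pair == (0, 1):
            R = add(R, Q)
        elif pair == (1, 1):
            R = add(R, PQ)                   # the precomputed point
    return R

n = 2**61 - 1
add = lambda x, y: (x + y) % n
P, Q = 1234567, 7654321
assert straus_shamir(25, 77, P, Q, add, 0) == (25 * P + 77 * Q) % n

The paper's first method generalizes this style of precomputation to $2^d-1$ stored points; the second trades the table away when $d$ is too large to store it.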
Last updated:  2024-04-18
Computing $2$-isogenies between Kummer lines
Damien Robert and Nicolas Sarkis
We use theta groups to study $2$-isogenies between Kummer lines, with a particular focus on the Montgomery model. This allows us to recover known formulas, along with more efficient forms for translated isogenies, which require only $2S+2m_0$ for evaluation. We leverage these translated isogenies to build a hybrid ladder for scalar multiplication on Montgomery curves with rational $2$-torsion, which costs $3M+6S+2m_0$ per bit, compared to $5M+4S+1m_0$ for the standard Montgomery ladder.
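For comparison, the standard Montgomery ladder processes one scalar bit with a fixed add-plus-double pattern; the per-bit figures quoted above count the field multiplications (M), squarings (S) and multiplications by curve constants (m_0) inside those two steps. A structural sketch over a generic group (curve formulas elided; toy additive group used for the check):

def montgomery_ladder(k, P, add, dbl, zero=0):
    # Invariant: R1 - R0 == P at every step, which is what lets x-only
    # differential-addition formulas apply on Montgomery curves.
    R0, R1 = zero, P
    for i in reversed(range(k.bit_length())):
        if (k >> i) & 1:
            R0, R1 = add(R0, R1), dbl(R1)
        else:
            R0, R1 = dbl(R0), add(R0, R1)
    return R0

n = 2**61 - 1
add = lambda x, y: (x + y) % n
dbl = lambda x: 2 * x % n
assert montgomery_ladder(25, 7, add, dbl) == 25 * 7 % n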
Last updated:  2024-01-09
Blink: Breaking Lattice-Based Schemes Implemented in Parallel with Chosen-Ciphertext Attack
Jian Wang, Weiqiong Cao, Hua Chen, and Haoyuan Li
As the message recovery-based attack poses a serious threat to lattice-based schemes, we conducted a study on the side-channel security of parallel implementations of lattice-based key encapsulation mechanisms. Initially, we developed a power model to describe the power leakage during message encoding. Utilizing this power model, we propose a multi-ciphertext message recovery attack, which can retrieve the required messages for a chosen ciphertext attack through a suitable message recovery oracle. Building upon the successful message recovery, we further develop a key recovery method based on a ciphertext-choosing strategy that maximizes key recovery accuracy, as well as a lattice reduction attack capable of solving the whole private key from the target LWE instance. To assess the effectiveness of the attack, we conducted experiments using Kyber768 implemented on a Xilinx FPGA board. The experimental results demonstrate that our attack could successfully recover the private key with 9600 power traces and a computational complexity of 100 bikz, which is a significant advantage over existing attacks. Notably, our attack remains effective despite countermeasures such as masking and shuffling being implemented. This study reveals that parallel implementations remain vulnerable to side-channel attacks, and highlights the necessity of additional analysis and countermeasures for lattice-based schemes implemented in parallel.
Last updated:  2024-05-01
A New Approach to Efficient and Secure Fixed-point Computation
Tore Kasper Frederiksen, Jonas Lindstrøm, Mikkel Wienberg Madsen, and Anne Dorte Spangsberg
Secure Multi-Party Computation (MPC) constructions typically allow computation over a finite field or ring. While useful for many applications, certain real-world applications require the usage of decimal numbers. While it is possible to emulate floating-point operations in MPC, fixed-point computation has gained more traction in the practical space due to its simplicity and efficient realizations. Even so, current protocols for fixed-point MPC still require computing a secure truncation after each multiplication gate. In this paper, we show a new paradigm for realizing fixed-point MPC. Starting from an existing MPC protocol over arbitrary, large, finite fields or rings, we show how to realize MPC over a residue number system (RNS). This allows us to leverage certain mathematical structures to construct a secure algorithm for efficient approximate truncation by a static and public value. We then show how this can be used to realize highly efficient secure fixed-point computation. In contrast to previous approaches, our protocol does not require any multiplications of secret values in the underlying MPC scheme to realize truncation but instead relies on preprocessed pairs of correlated random values, which we show can be constructed very efficiently, when accepting a small amount of leakage and robustness in the strong, covert model. We proceed to implement our protocol, with SPDZ as the underlying MPC protocol, and achieve significantly faster fixed-point multiplication.
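The plaintext fact driving all of this: multiplying two fixed-point values doubles the fractional scale, so every product must be truncated back, and in MPC that truncation (trivial in the clear) is the costly step. In the clear:

SCALE = 1 << 16  # fixed-point encoding: x is stored as round(x * 2^16)

def encode(x):
    return round(x * SCALE)

def fx_mul(a, b):
    # Raw product has scale 2^32; dividing by 2^16 restores the encoding.
    # Inside an MPC protocol this division is the expensive truncation.
    return (a * b) // SCALE

a, b = encode(3.25), encode(-1.5)
print(fx_mul(a, b) / SCALE)  # -4.875

The paper's observation is that over an RNS representation this truncation can be approximated from preprocessed correlated randomness instead of secure multiplications.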
Last updated:  2024-05-09
How (not) to hash into class groups of imaginary quadratic fields?
István András Seres, Péter Burcsi, and Péter Kutas
Class groups of imaginary quadratic fields (class groups for short) have seen a resurgence in cryptography as transparent groups of unknown order. They are a prime candidate for being a trustless alternative to RSA groups because class groups do not need a (distributed) trusted setup to sample a cryptographically secure group of unknown order. Class groups have recently found many applications in verifiable secret sharing, secure multiparty computation, transparent polynomial commitments, and perhaps most importantly, in time-based cryptography, i.e., verifiable delay functions, (homomorphic) time-lock puzzles, timed commitments, etc. However, there are various roadblocks to making class groups widespread in practical cryptographic deployments. We initiate the rigorous study of hashing into class groups. Specifically, we want to sample a uniformly distributed group element in a class group such that nobody knows its discrete logarithm with respect to any public parameter. We point out several flawed algorithms in numerous publicly available class group libraries. We further illustrate the insecurity of these hash functions by showing concrete attacks against cryptographic protocols, i.e., verifiable delay functions, if they were deployed with one of those broken hash-to-class group functions. We propose two families of cryptographically secure hash functions into class groups. We implement these constructions and evaluate their performance. We release our implementation as an open-source library.
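Some context on the objects involved: a class group element is canonically represented by a reduced, positive-definite binary quadratic form $(a, b, c)$ of the field's discriminant $D = b^2 - 4ac$, and any hash-to-class-group must output such forms without leaking a discrete logarithm. The reduction step itself is classical Gauss reduction:

def reduce_form(a, b, c):
    # Reduce a positive-definite form; D = b^2 - 4ac is invariant throughout.
    D = b * b - 4 * a * c
    while not ((-a < b <= a < c) or (0 <= b <= a == c)):
        if not (-a < b <= a):
            k = (a - b) // (2 * a)       # shift b into (-a, a]
            b += 2 * k * a
            c = (b * b - D) // (4 * a)   # exact division: 4a | b^2 - D
        else:
            a, b, c = c, -b, a           # swap when a > c
    return a, b, c

print(reduce_form(11, 49, 55))  # -> (1, 1, 5), the principal form of D = -19

The flaws the paper identifies are not in reduction but in how coefficients are sampled, which can leave the hash output with known relations to public parameters.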
Last updated:  2024-02-04
Security analysis and improvements on a semi-quantum electronic voting protocol
Qiu Shujing, Xin Xiangjun, Zheng Qian, Li Chaoyang, and Li Fagen
Recently, Qiu et al. proposed a semi-quantum voting scheme based on the ring signature (International Journal of Theoretical Physics, 60: 1550–1555 (2021)), in which the signer and verifier only need to measure the received particles with the Z-basis and perform some simple classical encryption/decryption operations on the classical message. Although their scheme is very efficient, it cannot resist eavesdropping attacks or forgery attacks. In this paper, we first present the eavesdropping attacks on Qiu et al.'s scheme. Second, we show the forgery attack on their scheme. To overcome the security drawbacks of Qiu et al.'s protocol, eavesdropping-check technology should be incorporated.
Last updated:  2024-04-30
Verifiable FHE via Lattice-based SNARKs
Shahla Atapoor, Karim Baghery, Hilder V. L. Pereira, and Jannik Spiessens
Fully Homomorphic Encryption (FHE) is a prevalent cryptographic primitive that allows for computation on encrypted data. In various cryptographic protocols, this enables outsourcing computation to a third party while retaining the privacy of the inputs to the computation. However, these schemes make an honest-but-curious assumption about the adversary. Previous work has tried to remove this assumption by combining FHE with Verifiable Computation (VC). Recent work has increased the flexibility of this approach by introducing integrity checks for homomorphic computations over rings. However, efficient FHE for circuits of large multiplicative depth also requires non-ring computations called maintenance operations, i.e. modswitching and keyswitching, which cannot be efficiently verified by existing constructions. We propose the first efficiently verifiable FHE scheme that allows for arbitrary depth homomorphic circuits by utilizing the double-CRT representation in which FHE schemes are typically computed, and using lattice-based SNARKs to prove components of this computation separately, including the maintenance operations. Therefore, our construction can theoretically handle bootstrapping operations. We also present the first implementation of a verifiable computation on encrypted data for a computation that contains multiple ciphertext-ciphertext multiplications. Concretely, we verify the homomorphic computation of an approximate neural network containing three layers and more than 100 ciphertexts in less than 1 second while maintaining reasonable prover costs.
Last updated:  2024-03-21
Feldman's Verifiable Secret Sharing for a Dishonest Majority
Yi-Hsiu Chen and Yehuda Lindell
Verifiable secret sharing (VSS) protocols enable parties to share secrets while guaranteeing security (in particular, that all parties hold valid and consistent shares) even if the dealer or some of the participants are malicious. Most work on VSS focuses on the honest majority case, primarily since it enables one to guarantee output delivery (e.g., a corrupted recipient cannot prevent an honest dealer from sharing their value). Feldman's VSS is a well known and popular protocol for this task and relies on the discrete log hardness assumption. In this paper, we present a variant of Feldman's VSS for the dishonest majority setting and formally prove its security. Beyond the basic VSS protocol, we present a publicly-verifiable version, as well as show how to securely add participants to the sharing and how to refresh an existing sharing (all secure in the presence of a dishonest majority). We prove that our protocols are UC secure, for appropriately defined ideal functionalities.
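Feldman's idea fits in a dozen lines: the dealer publishes commitments $C_i = g^{a_i}$ to the coefficients of the sharing polynomial $f$, and anyone can check a share $s_j = f(j)$ against $g^{s_j} = \prod_i C_i^{j^i}$. A toy sketch over a small Schnorr group (demo parameters only; the dishonest-majority and UC aspects of the paper are exactly what this sketch does not address):

import random

p, q, g = 2039, 1019, 4   # tiny subgroup parameters: q | p - 1, g has order q
t, n = 3, 5               # threshold t, n parties

coeffs = [random.randrange(q) for _ in range(t)]   # f; the secret is coeffs[0]
commitments = [pow(g, a, p) for a in coeffs]       # broadcast C_i = g^{a_i}

def share(j):
    return sum(a * pow(j, i, q) for i, a in enumerate(coeffs)) % q

def verify(j, s):
    # Public check that share s lies on the committed polynomial f.
    rhs = 1
    for i, C in enumerate(commitments):
        rhs = rhs * pow(C, pow(j, i, q), p) % p
    return pow(g, s, p) == rhs

assert all(verify(j, share(j)) for j in range(1, n + 1))
assert not verify(1, (share(1) + 1) % q)   # a corrupted share is rejected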
Last updated:  2024-01-08
Quantum Oblivious LWE Sampling and Insecurity of Standard Model Lattice-Based SNARKs
Thomas Debris-Alazard, Pouria Fallahpour, and Damien Stehlé
The Learning With Errors ($\mathsf{LWE}$) problem asks to find $\mathbf{s}$ from an input of the form $(\mathbf{A}, \mathbf{b} = \mathbf{A}\mathbf{s}+\mathbf{e}) \in (\mathbb{Z}/q\mathbb{Z})^{m \times n} \times (\mathbb{Z}/q\mathbb{Z})^{m}$, for a vector $\mathbf{e}$ that has small-magnitude entries. In this work, we do not focus on solving $\mathsf{LWE}$ but on the task of sampling instances. As these are extremely sparse in their range, it may seem plausible that the only way to proceed is to first create $\mathbf{s}$ and $\mathbf{e}$ and then set $\mathbf{b} = \mathbf{A}\mathbf{s}+\mathbf{e}$. In particular, such an instance sampler knows the solution. This raises the question whether it is possible to obliviously sample $(\mathbf{A}, \mathbf{A}\mathbf{s}+\mathbf{e})$, namely, without knowing the underlying $\mathbf{s}$. A variant of the assumption that oblivious $\mathsf{LWE}$ sampling is hard has been used in a series of works constructing Succinct Non-interactive Arguments of Knowledge (SNARKs) in the standard model. As the assumption is related to $\mathsf{LWE}$, these SNARKs have been conjectured to be secure in the presence of quantum adversaries. Our main result is a quantum polynomial-time algorithm that samples well-distributed $\mathsf{LWE}$ instances while provably not knowing the solution, under the assumption that $\mathsf{LWE}$ is hard. Moreover, the approach works for a vast range of $\mathsf{LWE}$ parametrizations, including those used in the above-mentioned SNARKs.
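The "obvious" sampler the abstract refers to is easy to write down, and it plainly knows the solution it embeds:

import numpy as np

rng = np.random.default_rng(0)
n, m, q = 16, 32, 3329   # toy parameters

A = rng.integers(0, q, size=(m, n))
s = rng.integers(0, q, size=n)    # the sampler picks s ...
e = rng.integers(-2, 3, size=m)   # ... and the small-magnitude error e
b = (A @ s + e) % q               # LWE instance (A, b)

The paper's result is that a quantum sampler can output pairs distributed like (A, b) while provably not knowing s, which is what undermines knowledge-type assumptions behind the standard-model SNARKs.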
Last updated:  2024-01-08
YouChoose: A Lightweight Anonymous Proof of Account Ownership
Aarav Varshney, Prashant Agrawal, and Mahabir Prasad Jhanwar
We explore the issue of anonymously proving account ownership (anonymous PAO). Such proofs allow a prover to prove to a verifier that it owns a valid account at a server without being tracked by the server or the verifier, without requiring any changes at the server's end and without even revealing to it that any anonymous PAO is taking place. This concept is useful in sensitive applications like whistleblowing. The first introduction of anonymous PAOs was by Wang et al., who also introduced the secure channel injection (SCI) protocol to realize anonymous PAO in the context of email account ownership. In this paper, we propose YouChoose, an approach that improves upon Wang et al.'s SCI-based anonymous PAO. Unlike SCI, which demands carefully designed multi-party computation (MPC) protocols for efficiency, YouChoose works without MPC, simply relying on the verifier to selectively forward TLS records. It is faster, more efficient, and more adaptable compared to SCI. Further, the simplicity of the YouChoose approach readily enables anonymous PAO in different settings such as various ciphersuites of TLS, account types other than email, etc., while the SCI approach needs specifically designed MPC protocols for each use case. We also provide formal security definitions for a generalized anonymous PAO of which both YouChoose and SCI are concrete instantiations.
Last updated:  2024-01-08
Lattice-Based Functional Commitments: Fast Verification and Cryptanalysis
Hoeteck Wee and David J. Wu
A functional commitment allows a user to commit to an input $\mathbf{x} \in \{0,1\}^\ell$ and later open up the commitment to a value $y = f(\mathbf{x})$ with respect to some function $f$. In this work, we focus on schemes that support fast verification. Specifically, after a preprocessing step that depends only on $f$, the verification time as well as the size of the commitment and opening should be sublinear in the input length $\ell$. We also consider the dual setting where the user commits to the function $f$ and later opens up the commitment at an input $\mathbf{x}$. In this work, we develop two (non-interactive) functional commitments that support fast verification. The first construction supports openings to constant-degree polynomials and has a shorter CRS for a broad range of settings compared to previous constructions. Our second construction is a dual functional commitment for arbitrary bounded-depth Boolean circuits. Both schemes are lattice-based and avoid non-black-box use of cryptographic primitives or lattice sampling algorithms. Security of both constructions relies on the $\ell$-succinct short integer solutions (SIS) assumption, a falsifiable $q$-type generalization of the SIS assumption (Preprint 2023). In addition, we study the challenges of extending lattice-based functional commitments to extractable functional commitments, a notion that is equivalent to succinct non-interactive arguments (when considering openings to quadratic relations). We describe a general methodology that heuristically breaks the extractability of our construction and provides evidence for the implausibility of the knowledge $k$-$R$-$\mathsf{ISIS}$ assumption of Albrecht et al. (CRYPTO 2022) that was used in several constructions of lattice-based succinct arguments. If we additionally assume hardness of the standard inhomogeneous SIS assumption, we obtain a direct attack on a variant of the extractable linear functional commitment of Albrecht et al.
Last updated:  2024-04-21
Updatable, Aggregatable, Succinct Mercurial Vector Commitment from Lattice
Hongxiao Wang, Siu-Ming Yiu, Yanmin Zhao, and Zoe L. Jiang
Vector commitments (VC) and their variants attract a lot of attention due to their wide range of usage in applications such as blockchain and accumulator. Mercurial vector commitment (MVC), as one of the important variants of VC, is the core technique for building more complicated cryptographic applications, such as the zero-knowledge set (ZKS) and zero-knowledge elementary database (ZK-EDB). However, to the best of our knowledge, the only post-quantum MVC construction is trivially implied by a generic framework proposed by Catalano and Fiore (PKC '13) with lattice-based components which causes $\textit{large}$ auxiliary information and $\textit{cannot satisfy}$ any additional advanced properties, that is, updatable and aggregatable. A major difficulty in constructing a $\textit{non-black-box}$ lattice-based MVC is that it is not trivial to construct a lattice-based VC that satisfies a critical property called ``mercurial hiding". In this paper, we identify some specific features of a new falsifiable family of basis-augmented SIS assumption ($\mathsf{BASIS}$) proposed by Wee and Wu (EUROCRYPT '23) that can be utilized to construct the mercurial vector commitment from lattice $\textit{satisfying}$ updatability and aggregatability with $\textit{smaller}$ auxiliary information. We $\textit{first}$ extend stateless update and differential update to the mercurial vector commitment and define a $\textit{new}$ property, named updatable mercurial hiding. Then, we show how to modify our constructions to obtain the updatable mercurial vector commitment that satisfies these properties. To aggregate the openings, our constructions perfectly inherit the ability to aggregate in the $\mathsf{BASIS}$ assumption, which can break the limitation of $\textit{weak}$ binding in the current aggregatable MVCs. In the end, we show that our constructions can be used to build the various kinds of lattice-based ZKS and ZK-EDB directly within the existing framework.
Last updated:  2024-01-08
Towards Compact Identity-based Encryption on Ideal Lattices
Huiwen Jia, Yupu Hu, Chunming Tang, and Lin Wang
Basic encryption and signature on lattices have comparable efficiency to their classical counterparts in terms of speed and key size. However, Identity-based Encryption (IBE) on lattices is much less efficient in terms of compactness, even when instantiated on ideal lattices and in the Random Oracle Model (ROM). This is because the underlying preimage sampling algorithm used to extract the users' secret keys requires huge public parameters. In this work, we specify a compact IBE instantiation for practical use by introducing various optimizations. Specifically, we first propose a modified gadget to make it more suitable for the instantiation of practical IBE. Then, by incorporating our gadget and the non-spherical Gaussian technique, we provide an efficient preimage sampling algorithm, based on which, we give a specification of a compact IBE on ideal lattice. Finally, two parameter sets and a proof-of-concept implementation are presented. Given the importance of the preimage sampling algorithm in lattice-based cryptography, we believe that our technique can also be applied to the practical instantiation of other advanced cryptographic schemes.
Last updated:  2024-01-07
Bitcoin Clique: Channel-free Off-chain Payments using Two-Shot Adaptor Signatures
Siavash Riahi and Orfeas Stefanos Thyfronitis Litos
Blockchains suffer from scalability limitations, both in terms of latency and throughput. Various approaches to alleviate this have been proposed, most prominent of which are payment and state channels, sidechains, commit-chains, rollups, and sharding. This work puts forth a novel commit-chain protocol, Bitcoin Clique. It is the first trustless commit-chain that is compatible with all major blockchains, including (an upcoming version of) Bitcoin. Clique enables a pool of users to pay each other off-chain, i.e., without interacting with the blockchain, thus sidestepping its bottlenecks. A user can directly send its coins to any other user in the Clique: In contrast to payment channels, its funds are not tied to a specific counterparty, avoiding the need for multi-hop payments. An untrusted operator facilitates payments by verifiably recording them. Furthermore, we define and construct a novel primitive, Two-Shot Adaptor Signatures, which is needed for Bitcoin Clique while being of independent interest. This primitive extends the functionality of normal Adaptor Signatures by allowing the extraction of the witness only after two signatures are published on the blockchain.
Last updated:  2024-02-28
FlexHi: A Flexible Hierarchical Threshold Signature Scheme
Muhammed Ali Bingol, Sermin Kocaman, Ali Dogan, and Sibel Kurt Toplu
Threshold signature schemes have gained prominence in enhancing the security and flexibility of digital signatures, allowing a group of participants to collaboratively create signatures while maintaining a predefined threshold of participants for validity. However, conventional threshold signatures treat all participants equally, lacking the capability to accommodate the hierarchical structures often seen in real-world applications. Hierarchical Threshold Signature Schemes (HTSS) naturally extend the concept of simple threshold signatures, offering a solution that aligns with hierarchical organizational structures. Our paper introduces a novel, efficient, and flexible HTSS that employs independent polynomials at each hierarchical level, removing limitations on threshold values. This adaptability enables us to tailor the scheme to diverse requirements, whether signing requires only top-level nodes or lower-level participants' involvement. Based on our analysis, FlexHi integrated into the FROST scheme outperforms Tassa's hierarchical scheme on FROST and operates approximately 30% to 40% faster, depending on the number of participants and the chosen threshold values. This demonstrates that, in addition to flexibility, our scheme offers practical benefits through improved performance.