Papers updated in the last 183 days (Page 14 of 1460 results)

Last updated:  2023-12-14
FANNG-MPC: Framework for Artificial Neural Networks and Generic MPC
Najwa Aaraj, Abdelrahaman Aly, Tim Güneysu, Chiara Marcolla, Johannes Mono, Rogerio Paludo, Iván Santos-González, Mireia Scholz, Eduardo Soria-Vazquez, Victor Sucasas, and Ajith Suresh
In this work, we introduce FANNG-MPC, a versatile secure multi-party computation framework capable of offering active security for privacy-preserving machine learning as a service (MLaaS). Derived from the now-deprecated SCALE-MAMBA, FANNG is a data-oriented fork featuring a novel set of libraries and instructions for realizing private neural networks, effectively reviving the popular framework. To the best of our knowledge, FANNG is the first MPC framework to offer actively secure MLaaS in the dishonest majority setting, specifically for two parties. FANNG goes beyond SCALE-MAMBA by decoupling the offline and online phases and materializing the dealer model in software, enabling a separate set of entities to produce offline material. The framework incorporates database support and a new instruction set for pre-processed material, including garbled circuits and convolutional and matrix multiplication triples. FANNG also implements novel private comparison protocols and an optimized library supporting neural network functionality. All our theoretical claims are substantiated by an extensive evaluation using an open-sourced implementation, including the private evaluation of popular neural networks like LeNet and VGG16.
Last updated:  2023-12-14
On the Cost of Post-Compromise Security in Concurrent Continuous Group-Key Agreement
Benedikt Auerbach, Miguel Cueto Noval, Guillermo Pascual-Perez, and Krzysztof Pietrzak
Continuous Group-Key Agreement (CGKA) allows a group of users to maintain a shared key. It is the fundamental cryptographic primitive underlying group messaging schemes and related protocols, most notably TreeKEM, the underlying key agreement protocol of the Messaging Layer Security (MLS) protocol, a standard for group messaging by the IETF. CGKA works in an asynchronous setting where parties only occasionally need to come online, and their messages are relayed by an untrusted server. The most expensive operation provided by CGKA is the one that allows a user to refresh their key material in order to achieve forward secrecy (old messages remain secure when a user is compromised) and post-compromise security (users can heal from compromise). One caveat of early CGKA protocols is that these update operations had to be performed sequentially: any user wanting to update their key material first had to receive and process all previous updates. Later versions of TreeKEM do allow for concurrent updates, at the cost of a communication overhead per update message that is linear in the number of updating parties. This was shown to be indeed necessary when achieving PCS in just two rounds of communication by [Bienstock et al. TCC'20]. The recently proposed protocol CoCoA [Alwen et al. Eurocrypt'22], however, shows that this overhead can be reduced if PCS requirements are relaxed and a logarithmic number of rounds is allowed. The natural question, thus, is whether CoCoA is optimal in this setting. In this work we answer this question, providing a lower bound on the cost (concretely, the amount of data to be uploaded to the server) for CGKA protocols that heal in an arbitrary number $k$ of rounds, which shows that CoCoA is very close to optimal. Additionally, we extend CoCoA to heal in an arbitrary number of rounds, and propose a modification of it with a reduced communication cost for certain $k$. We prove our bound in a combinatorial setting where the state of the protocol progresses in rounds, and the state of the protocol in each round is captured by a set system, each set specifying a set of users who share a secret key. We show this combinatorial model is equivalent to a symbolic model capturing building blocks including PRFs and public-key encryption, related to the one used by Bienstock et al. Our lower bound is of order $k\cdot n^{1+1/(k-1)}/\log(k)$, where $2\le k\le \log(n)$ is the number of updates per user the protocol requires to heal. This generalizes the $n^2$ bound for $k=2$ from Bienstock et al. This bound almost matches the $k\cdot n^{1+2/(k-1)}$ or $k^2\cdot n^{1+1/(k-1)}$ efficiency we get for the variants of the CoCoA protocol also introduced in this paper.
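For a rough numerical feel for how close the lower bound and the protocol costs are, the sketch below (hypothetical $n$ and $k$; constants, $o(1)$ terms, and the base of the logarithm are assumptions, so only orders of magnitude are meaningful) evaluates the three expressions from the abstract.

```python
import math

def lower_bound(n, k):
    # k * n^(1 + 1/(k-1)) / log(k): the lower bound from the abstract
    # (natural log assumed; the abstract does not fix the base).
    return k * n ** (1 + 1 / (k - 1)) / math.log(k)

def cocoa_cost_a(n, k):
    # k * n^(1 + 2/(k-1)): cost of one CoCoA variant from the paper
    return k * n ** (1 + 2 / (k - 1))

def cocoa_cost_b(n, k):
    # k^2 * n^(1 + 1/(k-1)): cost of the other variant
    return k ** 2 * n ** (1 + 1 / (k - 1))

n = 2 ** 16  # hypothetical group size
for k in (2, 4, 8, 16):
    print(k, f"{lower_bound(n, k):.3e}",
          f"{cocoa_cost_a(n, k):.3e}", f"{cocoa_cost_b(n, k):.3e}")
```

At $k=2$ the bound reduces to order $n^2$, matching the Bienstock et al. bound the abstract says it generalizes.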
Last updated:  2023-12-14
Not optimal but efficient: a distinguisher based on the Kruskal-Wallis test
Yan Yan, Arnab Roy, and Elisabeth Oswald
Research on the theoretical properties of side-channel distinguishers has revealed the rules by which to maximise the probability of first-order success (``optimal distinguishers'') under different assumptions about the leakage model and noise distribution. Simultaneously, research into bounding first-order success (as a function of the number of observations) has revealed universal bounds, which suggest that even optimal distinguishers are not able to reach the theoretically possible success rates. Is this gap a proof artefact (i.e., the bounds are not tight), or does a distinguisher exist that is more trace-efficient than the ``optimal'' one? We show that in the context of an unknown (and not linear) leakage model there is indeed a distinguisher that outperforms the ``optimal'' distinguisher in terms of trace efficiency: it is based on the Kruskal-Wallis test.
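To make the idea concrete, here is a minimal sketch of how a Kruskal-Wallis statistic could serve as a key-recovery distinguisher: traces (one leakage sample each) are partitioned by a guessed key-dependent intermediate value, and the guess producing the strongest group separation wins. Everything here (the stand-in S-box, the single sample per trace, the variable names) is illustrative, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import kruskal

# Stand-in S-box: a random 8-bit permutation. A real attack would use the
# target cipher's S-box; this one is purely illustrative.
rng = np.random.default_rng(0)
SBOX = rng.permutation(256)

def kw_score(leakage, plaintexts, key_guess):
    """Kruskal-Wallis statistic for one key guess: partition the traces by
    the guessed intermediate value and test whether the groups differ.
    No leakage model is assumed, which is the point of using this test."""
    inter = SBOX[plaintexts ^ key_guess]
    groups = [leakage[inter == v] for v in np.unique(inter)]
    stat, _pvalue = kruskal(*groups)
    return stat

def best_guess(leakage, plaintexts):
    # The guess maximizing the statistic is the distinguisher's candidate.
    return max(range(256), key=lambda k: kw_score(leakage, plaintexts, k))
```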
Last updated:  2023-12-14
One-out-of-$q$ OT Combiners
Oriol Farràs and Jordi Ribes-González
In $1$-out-of-$q$ Oblivious Transfer (OT) protocols, a sender Alice is able to send one of $q\ge 2$ messages to a receiver Bob, all while being oblivious to which message was transferred. Moreover, the receiver learns only one of these messages. Oblivious Transfer combiners take $n$ instances of OT protocols as input, and produce an OT protocol that is secure if sufficiently many of the $n$ original OT instances are secure. We present new $1$-out-of-$q$ OT combiners that are perfectly secure against active adversaries. Our combiners arise from secret sharing techniques. We show that given an $\mathbb{F}_q$-linear secret sharing scheme on a set of $n$ participants and adversary structure $\mathcal{A}$, we can construct $n$-server, $1$-out-of-$q$ OT combiners that are secure against an adversary corrupting either Alice and a set of servers in $\mathcal{A}$, or Bob and a set of servers $B$ with $\bar{B}\notin\mathcal{A}$. If the normalized total share size of the scheme is $\ell$, then the resulting OT combiner requires $\ell$ calls to OT protocols, and the total amount of bits exchanged during the protocol is $(q^2+q+1)\ell\log q$. We also present a construction based on $1$-out-of-$2$ OT combiners that uses the protocol of Crépeau, Brassard and Robert (FOCS 1986). This construction provides smaller communication costs for certain adversary structures, such as threshold ones: For any prime power $q\geq n$, there are $n$-server, $1$-out-of-$q$ OT combiners that are perfectly secure against active adversaries corrupting either Alice or Bob, and a minority of the OT candidates, exchanging $O(qn\log q)$ bits in total.
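For a rough sense of scale, the sketch below plugs hypothetical parameters into the two stated communication costs; the logarithm base and the constant hidden inside the $O(qn\log q)$ bound are assumptions, not values from the paper.

```python
import math

def general_combiner_bits(q, ell):
    # (q^2 + q + 1) * ell * log q: communication of the secret-sharing-based
    # combiner, with ell the normalized total share size (log base 2 assumed).
    return (q * q + q + 1) * ell * math.log2(q)

def cbr_based_bits(q, n, c=1.0):
    # O(q * n * log q): the construction built on 1-out-of-2 OT combiners;
    # c is a hypothetical hidden constant.
    return c * q * n * math.log2(q)

q, n, ell = 16, 8, 24  # hypothetical field size, servers, total share size
print(general_combiner_bits(q, ell), cbr_based_bits(q, n))
```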
Last updated:  2023-12-13
Fast batched asynchronous distributed key generation
Jens Groth and Victor Shoup
We present new protocols for threshold Schnorr signatures that work in an asynchronous communication setting, providing robustness and optimal resilience. These protocols provide unprecedented performance in terms of communication and computational complexity. In terms of communication complexity, for each signature, a single party must transmit a few dozen group elements and scalars across the network (independent of the size of the signing committee). In terms of computational complexity, the amortized cost for one party to generate a signature is actually less than that of just running the standard Schnorr signing or verification algorithm (at least for moderately sized signing committees, say, up to 100). For example, we estimate that with a signing committee of 49 parties, at most 16 of which are corrupt, we can generate 50,000 Schnorr signatures per second (assuming each party can dedicate one standard CPU core and 500 Mbps of network bandwidth to signing). Importantly, this estimate includes both the cost of an offline precomputation phase (which just churns out message-independent "presignatures") and an online signature generation phase. Also, the online signing phase can generate a signature with very little network latency (just one to three rounds, depending on how throughput and latency are balanced). To achieve this result, we provide two new innovations. One is a new secret sharing protocol (again, asynchronous, robust, optimally resilient) that allows the dealer to securely distribute shares of a large batch of ephemeral secret keys, and to publish the corresponding ephemeral public keys. To achieve better performance, our protocol minimizes public-key operations, and in particular is based on a novel approach that avoids the traditional technique based on "polynomial commitments". The second innovation is a new algorithm to efficiently combine ephemeral public keys contributed by different parties (some possibly corrupt) into a smaller number of secure ephemeral public keys. This new algorithm is based on a novel construction of a so-called "super-invertible matrix" along with a corresponding highly-efficient algorithm for multiplying this matrix by a vector of group elements. As protocols for verifiably sharing a secret key with an associated public key and the technology of super-invertible matrices both play a major role in threshold cryptography and multi-party computation, our two new innovations should have applicability well beyond that of threshold Schnorr signatures.
Last updated:  2023-12-13
Oops, I did it again revisited: another look at reusing one-time signatures
Scott Fluhrer
In "Oops, I did it again" - Security of One-Time Signatures under Two-Message Attacks, Bruinderink and Hülsing analyzed the effect of key reuse for several one time signature systems. When they analyzed the Winternitz system, they assumed certain probabilities were independent when they weren't, leading to invalid conclusions. This paper does a more correct characterization of the Winternitz scheme, and while their ultimate conclusion (that key reuse allows for practical forgeries) is correct, the situation is both better and worse than what they concluded.
Last updated:  2023-12-13
Muckle+: End-to-End Hybrid Authenticated Key Exchanges
Sonja Bruckner, Sebastian Ramacher, and Christoph Striecks
End-to-end authenticity plays a significant role in public networks: without authenticity, the adversary might be able to retrieve even confidential information straight away by impersonating others. Proposed solutions to establish an authenticated channel cover pre-shared-key-based, password-based, and certificate-based techniques. To add confidentiality to an authenticated channel, authenticated key exchange (AKE) protocols usually have one of the three solutions built in. As an amplification, hybrid AKE (HAKE) approaches have recently gained popularity and were presented in several flavors to incorporate classical, post-quantum, or quantum-key-distribution components. The main benefit is redundancy, i.e., if some of the components fail, the primitive still yields a confidential and authenticated channel. However, current HAKE instantiations either rely on pre-shared keys (which yields inefficient end-to-end authenticity) or support only one or two of the three above components (resulting in reduced redundancy and flexibility). In this work, we present an extension of a modular HAKE framework due to Dowling, Brandt Hansen, and Paterson (PQCrypto'20) that does not suffer from the above constraints. While their instantiation, dubbed Muckle, requires pre-shared keys (and hence yields inefficient end-to-end authenticity), our extended instantiation called Muckle+ utilizes post-quantum digital signatures. While replacing pre-shared keys with digital signatures is rather straightforward in general, this turned out to be surprisingly non-trivial when applied to HAKE frameworks (resulting in a significant model change with adapted proof techniques).
Last updated:  2023-12-13
Efficient Low-Latency Masking of Ascon without Fresh Randomness
Srinidhi Hari Prasad, Florian Mendel, Martin Schläffer, and Rishub Nagpal
In this work, we present the first low-latency, second-order masked hardware implementation of Ascon that requires no fresh randomness using only $d+1$ shares. Our results significantly outperform any publicly known second-order masked implementations of AES and Ascon in terms of combined area, latency and randomness requirements. Ascon is a family of lightweight authenticated encryption and hashing schemes selected by NIST for standardization. Ascon is tailored for small form factors. It requires less power and energy while attaining the same or even better performance than current NIST standards. We achieve the reduction of latency by rearranging the linear layers of the Ascon permutation in a round-based implementation. We provide an improved technique to achieve implementations without the need for fresh randomness, based on the changing-of-the-guards concept extended to the second-order case. Together with the reduction of latency, we need to consider a large set of additional conditions, which we propose to solve using a SAT solver. We have formally verified both our first- and second-order implementations of Ascon using CocoAlma for the first two rounds. Additionally, we have performed a leakage assessment using t-tests on all 12 rounds of the initial permutation. Finally, we provide a comparison of our second-order masked Ascon implementation with other results.
Last updated:  2023-12-13
Breaking RSA Authentication on Zynq-7000 SoC and Beyond: Identification of Critical Security Flaw in FSBL Software
Prasanna Ravi, Arpan Jati, and Shivam Bhasin
In this report, we perform an in-depth analysis of the RSA authentication feature used in the secure boot procedure of the Xilinx Zynq-7000 SoC device. The First Stage Boot Loader (FSBL) is a critical piece of software executed during secure boot, which utilizes the RSA authentication feature to validate all the hardware and software partitions to be mounted on the device. We analyzed the implementation of the FSBL (provided by Xilinx) for the Zynq-7000 SoC and identified a critical security flaw whose exploitation makes it possible to load an unauthenticated application onto the Zynq device, thereby bypassing RSA authentication. We also experimentally validated the presence of the vulnerability through a Proof of Concept (PoC) attack, successfully mounting an unauthenticated software application on an RSA-authenticated Zynq device. The identified flaw is only present in the FSBL software and thus can be easily fixed through appropriate modification of the FSBL software. Thus, the first contribution of our work is the identification of a critical security flaw in the FSBL software that bypasses RSA authentication. Upon bypassing RSA authentication, an attacker can load any unauthenticated software application on the target device and mount a variety of attacks. Among the several possible attacks, we are interested in recovering the encrypted bitstream in the target boot image of the Zynq-7000 device. To the best of our knowledge, no prior work has reported a practical bitstream recovery attack on the Zynq-7000 device. In the context of bitstream recovery, Ender et al. in 2020 proposed the Starbleed attack, which is applicable to standalone Virtex-6 and 7-series Xilinx FPGAs. The design advisory provided by Xilinx as a response to the Starbleed attack claims that the Zynq-7000 SoC is resistant “due to the use of asymmetric and/or symmetric authentication in the boot/configuration process that ensures configuration is authenticated prior to use". Due to the security flaw found in the FSBL, we managed to identify a novel approach to mount the Starbleed attack on the Zynq-7000 device for full bitstream recovery. Thus, as a second contribution of our work, we present the first practical demonstration of the Starbleed attack on the Zynq-7000 SoC. We perform experimental validation of our proposed attacks on the PYNQ-Z1 platform based on the Zynq-7000 SoC.
Last updated:  2023-12-13
Dishonest Majority Multiparty Computation over Matrix Rings
Hongqing Liu, Chaoping Xing, Chen Yuan, and Taoxu Zou
Privacy-preserving machine learning (PPML) has gained growing importance over the last few years. One of the biggest challenges is to improve the efficiency of PPML so that its communication and computation costs are affordable for large machine learning models such as deep learning. Linear algebra, in particular matrix multiplication, occupies a significant part of the computation in deep learning, e.g., in deep convolutional neural networks (CNNs). Thus, it is desirable to design MPC protocols specialized for matrix operations. In this work, we propose a dishonest majority MPC protocol over matrix rings which supports matrix multiplication and addition. Our MPC protocol can be seen as a variant of the SPDZ protocol: the MAC and global key of our protocol are vectors of length $m$, and the secret of our protocol is an $m\times m$ matrix. Compared to the classic SPDZ protocol, our MPC protocol reduces the communication complexity by at least a factor of $m$. We also show that our MPC protocol is as efficient as that of [11], which also presented a dishonest majority MPC protocol specialized for matrix operations. The protocol of [11] resorts to homomorphic encryption (the BFV scheme) to produce the matrix triples in the preprocessing phase. This implies that their protocol only supports matrix operations over integer rings or prime fields of large size. By contrast, we resort to vector oblivious linear evaluations and random vector oblivious linear evaluations to generate correlated randomness in the preprocessing phase. Thus, the matrices of our MPC protocol can be defined over any finite field or integer ring. Due to the small size of our MAC, the communication complexity of our MPC protocol remains almost the same regardless of the size of the field or the ring.
Last updated:  2023-12-13
Non-Interactive Classical Verification of Quantum Depth: A Fine-Grained Characterization
Nai-Hui Chia and Shih-Han Hung
We introduce protocols for classical verification of quantum depth (CVQD). These protocols enable a classical verifier to differentiate between devices of varying quantum circuit depths, even in the presence of classical computation. The goal is to demonstrate that a classical verifier can reject a device with a quantum circuit depth of no more than $d$, even if the prover employs additional polynomial-time classical computation to deceive. Conversely, the verifier accepts a device with a quantum circuit depth of $d'>d$. Previous results for separating hybrid quantum-classical computers with various quantum depths require either quantum access to oracles or interactions between the classical verifier and the quantum prover. However, instantiating oracle separations can significantly increase the quantum depth in general, and interaction challenges the quantum device to keep the qubits coherent while waiting for the verifier's messages. These requirements pose barriers to implementing the protocols on near-term devices. In this work, we present a two-message protocol under the quantum hardness of learning with errors and the random oracle heuristic. An honest prover only needs classical access to the random oracle, and therefore any instantiation of the oracle does not increase the quantum depth. To our knowledge, our protocol is the first non-interactive CVQD, the instantiation of which using concrete hash functions, e.g., SHA-3, does not require additional quantum depth. Our second protocol seeks to explore the minimality of cryptographic assumptions and the tightness of the separations. To accomplish this, we introduce an untrusted quantum machine that shares entanglements with the target machine. Utilizing a robust self-test, our protocol certifies the depth of the target machine with information-theoretic security and nearly optimal separation.
Last updated:  2023-12-12
Failed crypto: Matrices over non-standard arithmetic
Daniel R. L. Brown
A failed hypothesis is reported here. The hope was that large matrices over small non-standard arithmetic are likely to have infeasible division, and furthermore be secure for use in Rabi–Sherman associative cryptography.
Last updated:  2023-12-12
PARScoin: A Privacy-preserving, Auditable, and Regulation-friendly Stablecoin
Amirreza Sarencheh, Aggelos Kiayias, and Markulf Kohlweiss
Stablecoins are digital assets designed to maintain a consistent value relative to a reference point, serving as a vital component of the blockchain and Decentralized Finance (DeFi) ecosystems. Typical implementations of stablecoins via smart contracts come with important downsides such as a questionable level of privacy, potentially high fees, and lack of scalability. We put forth a new design, PARScoin, for a Privacy-preserving, Auditable, and Regulation-friendly Stablecoin that mitigates these issues while enabling high performance, both in terms of speed of settlement and in scaling to large numbers of users. Our construction is blockchain-agnostic and is analyzed in the Universal Composability (UC) framework, offering a secure and modular approach to its integration into the broader blockchain ecosystem.
Last updated:  2023-12-12
Integral Cryptanalysis Using Algebraic Transition Matrices
Tim Beyne and Michiel Verbauwhede
In this work we introduce algebraic transition matrices as the basis for a new approach to integral cryptanalysis that unifies monomial trails (Hu et al., Asiacrypt 2020) and parity sets (Boura and Canteaut, Crypto 2016). Algebraic transition matrices allow for the computation of the algebraic normal form of a primitive based on the algebraic normal forms of its components by means of well-understood operations from linear algebra. The theory of algebraic transition matrices leads to better insight into the relation between integral properties of $F$ and $F^{-1}$. In addition, we show that the link between invariants and eigenvectors of correlation matrices (Beyne, Asiacrypt 2018) carries over to algebraic transition matrices. Finally, algebraic transition matrices suggest a generalized definition of integral properties that subsumes previous notions such as extended division properties (Lambin, Derbez and Fouque, DCC 2020). On the practical side, a new algorithm is described to search for these generalized properties and applied to Present, resulting in new properties. The algorithm can be instantiated with any existing automated search method for integral cryptanalysis.
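As background on the basic object being manipulated: the algebraic normal form (ANF) of a Boolean function is computable from its truth table with the fast Möbius transform, as in the toy sketch below. The paper's algebraic transition matrices, which organize how monomials propagate through a primitive, build on such ANFs of the components but are not constructed here.

```python
def anf(truth_table):
    """Fast Moebius transform: truth table (list of 0/1, length 2^n) of a
    Boolean function -> its ANF coefficient vector, indexed by monomials."""
    coeffs = list(truth_table)
    n = len(coeffs).bit_length() - 1
    for i in range(n):
        bit = 1 << i
        for j in range(len(coeffs)):
            if j & bit:
                coeffs[j] ^= coeffs[j ^ bit]
    return coeffs

# Toy 3-bit function f(x2, x1, x0) = x0*x1 XOR x2.
tt = [((x & 1) & ((x >> 1) & 1)) ^ ((x >> 2) & 1) for x in range(8)]
print(anf(tt))  # [0, 0, 0, 1, 1, 0, 0, 0]: monomials x0*x1 (index 3), x2 (index 4)
```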
Last updated:  2023-12-12
Exploring SIDH-based Signature Parameters
Andrea Basso, Mingjie Chen, Tako Boris Fouotsa, Péter Kutas, Abel Laval, Laurane Marco, and Gustave Tchoffo Saah
Isogeny-based cryptography is an instance of post-quantum cryptography whose fundamental problem consists of finding an isogeny between two (isogenous) elliptic curves $E$ and $E'$. This problem is closely related to that of computing the endomorphism ring of an elliptic curve. Therefore, many isogeny-based protocols require the endomorphism ring of at least one of the curves involved to be unknown. In this paper, we explore the design of isogeny-based protocols in a scenario where the endomorphism rings of all the curves involved are assumed to be public. In particular, we identify digital signatures based on proofs of isogeny knowledge from SIDH squares as such a candidate. We explore the design choices for such constructions and propose two variants with practical instantiations. We analyze their security along three lines: the first consists of attacks based on KLPT with both polynomial and superpolynomial adversaries, the second consists of attacks derived from the SIDH attacks, and the third is the zero-knowledge property of the underlying proof of knowledge.
Last updated:  2023-12-11
Streaming Functional Encryption
Jiaxin Guan, Alexis Korb, and Amit Sahai
We initiate the study of streaming functional encryption (sFE) which is designed for scenarios in which data arrives in a streaming manner and is computed on in an iterative manner as the stream arrives. Unlike in a standard functional encryption (FE) scheme, in an sFE scheme, we (1) do not require the entire data set to be known at encryption time and (2) allow for partial decryption given only a prefix of the input. More specifically, in an sFE scheme, we can sequentially encrypt each data point $x_i$ in a stream of data $x = x_1\ldots x_n$ as it arrives, without needing to wait for all $n$ values. We can then generate function keys for streaming functions which are stateful functions that take as input a message $x_i$ and a state $\mathsf{st}_i$ and output a value $y_i$ and the next state $\mathsf{st}_{i+1}$. For any $k \leq n$, a user with a function key for a streaming function $f$ can learn the first $k$ output values $y_1\ldots y_k$ where $(y_i, \mathsf{st}_{i+1}) = f(x_i, \mathsf{st}_i)$ and $\mathsf{st}_1 = \bot$ given only ciphertexts for the first $k$ elements $x_1\ldots x_k$. In this work, we introduce the notion of sFE and show how to construct it from FE. In particular, we show how to achieve a secure sFE scheme for $\mathsf{P/Poly}$ from a compact, secure FE scheme for $\mathsf{P/Poly}$, where our security notion for sFE is similar to standard FE security except that we require all function queries to be made before the challenge ciphertext query. Furthermore, by combining our result with the FE construction of Jain, Lin, and Sahai (STOC, 2022), we show how to achieve a secure sFE scheme for $\mathsf{P/Poly}$ from the polynomial hardness of well-studied assumptions.
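As a plain-Python illustration of the functionality (no cryptography; names are hypothetical): a streaming function threads a state through the stream, and a function-key holder learns exactly the outputs computable from the ciphertext prefix received so far.

```python
from typing import Any, Callable, Iterable, Iterator, Tuple

# A streaming function maps (x_i, st_i) -> (y_i, st_{i+1}); None stands in
# for the initial "bottom" state st_1 from the abstract.
StreamFn = Callable[[Any, Any], Tuple[Any, Any]]

def stream_eval(f: StreamFn, xs: Iterable[Any]) -> Iterator[Any]:
    """Yield y_1, y_2, ... as the prefix x_1..x_k becomes available,
    mirroring sFE's partial decryption from the first k ciphertexts."""
    st = None
    for x in xs:
        y, st = f(x, st)
        yield y

def running_sum(x, st):
    # Example streaming function: sum of the stream seen so far.
    total = x if st is None else st + x
    return total, total

print(list(stream_eval(running_sum, [1, 2, 3, 4])))  # [1, 3, 6, 10]
```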
Last updated:  2023-12-11
High-assurance zeroization
Santiago Arranz Olmos, Gilles Barthe, Ruben Gonzalez, Benjamin Grégoire, Vincent Laporte, Jean-Christophe Léchenet, Tiago Oliveira, and Peter Schwabe
In this paper we revisit the problem of erasing sensitive data from memory and registers during return from a cryptographic routine. While the problem and the related attacker model are fairly easy to phrase, it turns out to be surprisingly hard to guarantee security in this model when implementing cryptography in common languages such as C/C++ or Rust. We revisit the issues surrounding zeroization and then present a principled solution, in the sense that it guarantees that sensitive data is erased and clearly defines when this happens. We implement our solution as an extension to the formally verified Jasmin compiler and extend the correctness proof of the compiler to cover zeroization. We show that the approach seamlessly integrates with state-of-the-art protections against microarchitectural attacks by integrating zeroization into Libjade, a cryptographic library written in Jasmin with systematic protections against timing and Spectre-v1 attacks. We present benchmarks showing that in many cases the overhead of zeroization is barely measurable and that it stays below 2% except for highly optimized symmetric crypto routines on short inputs.
Last updated:  2023-12-11
First-Order Masked Kyber on ARM Cortex-M4
Daniel Heinz, Matthias J. Kannwischer, Georg Land, Thomas Pöppelmann, Peter Schwabe, and Amber Sprenkels
In this work, we present a fast and first-order secure Kyber implementation optimized for ARM Cortex-M4. Most notably, to our knowledge this is the first liberally-licensed open-source Cortex-M4 implementation of masked Kyber. The ongoing NIST standardization process for post-quantum cryptography and newly proposed side-channel attacks have increased the demand for side-channel analysis and countermeasures for the finalists. On the foundation of the commonly used PQM4 project, we make use of the previously presented optimizations for Kyber on a Cortex-M4 and further combine different ideas from various recent works to achieve a better performance and improve the security in comparison to the original implementations. We show our performance results for first-order secure implementations. Our masked Kyber768 decapsulation on the ARM Cortex-M4 requires only 2 978 441 cycles, including randomness generation from the internal RNG. We then practically verify our implementation by using the t-test methodology with 100 000 traces.
Last updated:  2023-12-11
A Transaction-Level Model for Blockchain Privacy
François-Xavier Wicht, Zhipeng Wang, Duc V. Le, and Christian Cachin
Considerable work explores blockchain privacy notions. Yet, it usually employs entirely different models and notations, complicating potential comparisons. In this work, we use the Transaction Directed Acyclic Graph (TDAG) and extend it to capture blockchain privacy notions (PDAG). We give consistent definitions for untraceability and unlinkability. Moreover, we specify conditions on a blockchain system to achieve each aforementioned privacy notion. Thus, we can compare the two most prominent privacy-preserving blockchains -- Monero and Zcash, in terms of privacy guarantees. Finally, we unify linking heuristics from the literature with our graph notation and review a good portion of research on blockchain privacy.
Last updated:  2023-12-11
Middle-Products of Skew Polynomials and Learning with Errors
Cong Ling and Andrew Mendelsohn
We extend the middle product to skew polynomials, which we use to define a skew middle-product Learning with Errors (LWE) variant. We also define a skew polynomial LWE problem, which we connect to Cyclic LWE (CLWE), a variant of LWE in cyclic division algebras. We then reduce a family of skew polynomial LWE problems to skew middle-product LWE, for a family which includes the structures found in CLWE. Finally, we give an encryption scheme and demonstrate its IND-CPA security, assuming the hardness of skew middle-product LWE.
Last updated:  2023-12-11
On Constructing One-Way Quantum State Generators, and More
Shujiao Cao and Rui Xue
As a quantum analogue of the one-way function, the notion of a one-way quantum state generator was recently proposed by Morimae and Yamakawa (CRYPTO'22); it is implied by pseudorandom states and can be used to devise one-time secure digital signatures. Due to Kretschmer's result (TQC'20), it is believed that pseudorandom state generators require less than post-quantum secure one-way functions. Unfortunately, it remains unknown how to achieve a one-way quantum state generator without the existence of post-quantum secure one-way functions. In this paper, we study this problem and obtain the following results: Two variants of the one-way quantum state generator are proposed, called the weak one-way quantum state generator and the distributionally one-way quantum state generator. We then establish the equivalence between weak and strong one-way state generators, and show the equivalence between weak and distributionally one-way quantum state generators in the symmetric setting. We construct the symmetric distributionally one-way quantum state generator from the average-case hardness of a promise problem belonging to $\textsf{QSZK}$. We construct quantum bit commitment with statistical binding (sum-binding) and computational hiding directly from the average-case hardness of $\textsf{QSZK}$. To show the non-triviality of the constructions above, a quantum oracle $\mathcal{U}$ is devised relative to which such a promise problem in $\textsf{QSZK}$ does not belong to $\mathsf{QMA}^{\mathcal{U}}$. Our results present the first non-trivial construction of a one-way quantum state generator from the hardness assumption of a complexity class, and give further evidence that one-way quantum state generators probably require less than post-quantum secure one-way functions.
Last updated:  2023-12-11
FIN: Practical Signature-Free Asynchronous Common Subset in Constant Time
Sisi Duan, Xin Wang, and Haibin Zhang
Asynchronous common subset (ACS) is a powerful paradigm enabling applications such as Byzantine fault-tolerance (BFT) and multi-party computation (MPC). The most efficient ACS framework in the information-theoretic setting is due to Ben-Or, Kelmer, and Rabin (BKR, 1994). The BKR ACS protocol has been both theoretically and practically impactful. However, the BKR protocol has an $O(\log n)$ running time (where $n$ is the number of replicas) due to the usage of $n$ parallel asynchronous binary agreement (ABA) instances, impacting both performance and scalability. Indeed, for a network of 16 to 64 replicas, the parallel ABA phase occupies about 95% to 97% of the total runtime in BKR. A long-standing open problem is whether we can build an ACS framework with $O(1)$ time while not increasing the message or communication complexity of the BKR protocol. In this paper, we resolve the open problem, presenting the first constant-time ACS protocol with $O(n^3)$ messages in the information-theoretic and signature-free settings. Moreover, as a key ingredient of our new ACS framework and an interesting primitive in its own right, we provide the first information-theoretic multivalued validated Byzantine agreement (MVBA) protocol with $O(1)$ time and $O(n^3)$ messages. Both results can improve, asymptotically and concretely, various applications using ACS and MVBA in the information-theoretic, quantum-safe, or signature-free settings. As an example, we implement FIN, a BFT protocol instantiated using our framework. Via a 121-server deployment on Amazon EC2, we show FIN is significantly more efficient than PACE (CCS 2022), the state-of-the-art asynchronous BFT protocol of the same type. In particular, FIN reduces the overhead of the ABA phase to as low as 1.23% of the total runtime, and FIN achieves up to 3.41x the throughput of PACE. We also show that FIN outperforms other BFT protocols with the standard liveness property, such as Dumbo and Speeding Dumbo.
Last updated:  2023-12-10
Allowing Blockchain Loans with Low Collateral
Tom Azoulay, Uri Carl, and Ori Rottenstreich
Collateral is an item of value serving as security for the repayment of a loan. In blockchain-based loans, cryptocurrencies serve as the collateral. The high volatility of cryptocurrencies implies a serious barrier to entry, with the common practice that the collateral value equals multiple times the value of the loan. As assets serving as collateral are locked, this requirement prevents many candidates from obtaining loans. In this paper, we aim to make loans more accessible by offering loans with lower collateral, while keeping the risk for lenders bounded. We propose a credit score based on data recovered from the blockchain to predict how likely a potential borrower is to repay a loan. Our protocol does not risk the initial amount granted by liquidity providers, but only risks part of the interest yield gained from the borrower by the protocol in the past.
Last updated:  2023-12-10
An Empirical Study of Cross-chain Arbitrage in Decentralized Exchanges
Ori Mazor and Ori Rottenstreich
Blockchain interoperability refers to the ability of blockchains to share information with each other. Decentralized Exchanges (DEXs) are peer-to-peer marketplaces where traders can exchange cryptocurrencies. Several studies have focused on arbitrage analysis within a single blockchain, typically Ethereum. Recently, we have seen a growing interest in cross-chain technologies to create a more interconnected blockchain network. We present a framework to study cross-chain arbitrage in DEXs. We use this framework to analyze cross-chain arbitrages between two popular DEXs, PancakeSwap and QuickSwap, within a time frame of one month. While PancakeSwap is implemented on a blockchain named BNB Chain, QuickSwap is implemented on a different blockchain named Polygon. The approach of this work is to study cross-chain arbitrage through an empirical study. We examine the number of arbitrages, their revenue, and their duration. This work lays the basis for understanding cross-chain arbitrage and its potential impact on blockchain technology.
Last updated:  2023-12-10
Selective Delegation of Attributes in Mercurial Signature Credentials
Colin Putman and Keith M. Martin
Anonymous credential schemes enable service providers to verify information that a credential holder willingly discloses, without needing any further personal data to corroborate that information, and without allowing the user to be tracked from one interaction to the next. Mercurial signatures are a novel class of anonymous credentials which show good promise as a simple and efficient construction without heavy reliance on zero-knowledge proofs. However, they still require significant development in order to achieve the functionality that most existing anonymous credential schemes provide. Encoding multiple attributes of the credential holder in such a way that they can be disclosed selectively with each use of the credential is often seen as a vital feature of anonymous credentials, and is one that mercurial signatures have not yet implemented. In this paper, we show a simple way to encode attributes in a mercurial signature credential and to regulate which attributes a credential holder can issue when delegating their credential to another user. We also extend the security model associated with mercurial signatures to account for the inclusion of attributes, and prove the security of our extension with respect to the original mercurial signature construction.
Last updated:  2023-12-09
Ring-LWE Hardness Based on Non-invertible Ideals
Charanjit S. Jutla and Chengyu Lin
We extend the known pseudorandomness of Ring-LWE to be based on lattices that do not correspond to any ideal of any order in the underlying number field. In earlier works of Lyubashevsky et al. (EUROCRYPT 2010) and Peikert et al. (STOC 2017), the hardness of RLWE was based on ideal lattices of rings of integers of number fields, which are known to be Dedekind domains. While these works extended Regev's (STOC 2005) quantum polynomial-time reduction for LWE, thus allowing more efficient and more structured cryptosystems, the additional algebraic structure of ideals of Dedekind domains leaves open the possibility that such ideal lattices are not as hard as general lattices. In this work we show that the hardness of $q$-Ring-LWE can be based on the worst-case hardness of ideal lattices in arbitrary orders $O$, as long as the order $O$ satisfies the property that $\frac{1}{m}\cdot O$ contains the ring of integers, for some $m$ co-prime to $q$. The reduction requires that the noise be a factor $m$ more than the original Ring-LWE reduction. We also show that for power-of-two cyclotomic number fields, there exist orders with $m=4$ such that non-trivial ideals of the order, which are not contained in the conductor, are non-invertible. Since the conductor itself is non-invertible, this gives a non-trivial multiplicative set that lies outside the ideal class group. Another reduction shows that the hardness of $q$-Ring-LWE can be based on the worst-case hardness of lattices that correspond to sums of ideal lattices in arbitrary and different orders in the number field, as long as the (set of) orders $\{O_i\}$ satisfy the property that $\frac{1}{m}\cdot O_i$ contains the ring of integers, for some $m$ co-prime to $q$. We also show that for power-of-two cyclotomic number fields, there exist orders $O_1, O_2$ with $m=8$ such that there are ideals $I_1, I_2$ of $O_1, O_2$ respectively with $I_1+I_2$ not an ideal of any order in the number field.
Last updated:  2023-12-09
The Patching Landscape of Elisabeth-4 and the Mixed Filter Permutator Paradigm
Clément Hoffmann, Pierrick Méaux, and François-Xavier Standaert
Filter permutators are a family of stream cipher designs aimed at hybrid homomorphic encryption. While originally operating on bits, they were generalized to groups at Asiacrypt 2022, and instantiated for evaluation with the TFHE scheme, which favors a filter based on (negacyclic) Look-Up Tables (LUTs). A recent work of Gilbert et al., to appear at Asiacrypt 2023, exhibited (algebraic) weaknesses in the Elisabeth-4 instance, exploiting the combination of the 4-bit negacyclic LUTs it uses as filter. In this article, we explore the landscape of patches that can be used to restore the security of such designs while maintaining their good properties for hybrid homomorphic encryption. Starting with minimal changes, we observe that just updating the filter function (still with small negacyclic LUTs) is conceptually feasible, and propose the resulting Elisabeth-b4 design with three levels of NLUTs. We then show that a group permutator combining two different functions in the filter can simplify the analysis and improve performance. We specify the Gabriel instance to illustrate this claim. We finally propose to modify the group filter permutator paradigm into a mixed filter permutator, which considers the permutation of the key with elements in a group and a filter outputting elements in a different group. We specify the Margrethe instance as a first example of a mixed filter permutator, with key elements in $\mathbb{F}_2$ and output in $\mathbb{Z}_{16}$, which we believe is well-suited for recent fully homomorphic encryption schemes that can efficiently evaluate larger (not negacyclic) LUTs.
Last updated:  2023-12-09
Zero-Knowledge Functional Elementary Databases
Xinxuan Zhang and Yi Deng
Zero-knowledge elementary databases (ZK-EDBs) enable a prover to commit to a database ${D}$ of key-value $(x,v)$ pairs and later provide a convincing answer to the query ``send me the value $D(x)$ associated with $x$'' without revealing any extra knowledge (including the size of ${D}$). After their introduction, several works extended them to allow more expressive queries, but the expressiveness achieved so far is still limited: only relatively simple queries, namely range queries over the keys and values, can be handled by known constructions. In this paper we introduce a new notion called zero-knowledge functional elementary databases (ZK-FEDBs), which allows the most general functional queries. Roughly speaking, for any Boolean circuit $f$, ZK-FEDBs allow the ZK-EDB prover to provide convincing answers to queries of the form ``send me all records ${(x,v)}$ in ${{D}}$ satisfying $f(x,v)=1$,'' without revealing any extra knowledge (including the size of ${D}$). We present a construction of ZK-FEDBs in the random oracle model and generic group model, whose proof size is only linear in the length of the record and the size of the query circuit, and is independent of the size of the input database $D$. Our technical contribution is two-fold. Firstly, we introduce a new variant of zero-knowledge sets (ZKS) which supports combined operations on sets, and present a concrete construction based on groups of unknown order. Secondly, we develop a transformation that transforms a query given by a Boolean circuit into a query of combined operations on related sets, which may be of independent interest.
Last updated:  2023-12-09
Revisiting BBS Signatures
Stefano Tessaro and Chenzhi Zhu
BBS signatures were implicitly proposed by Boneh, Boyen, and Shacham (CRYPTO ’04) as part of their group signature scheme, and explicitly cast as stand-alone signatures by Camenisch and Lysyanskaya (CRYPTO ’04). A provably secure version, called BBS+, was then devised by Au, Susilo, and Mu (SCN ’06), and is currently the object of a standardization effort which has led to a recent RFC draft. BBS+ signatures are suitable for use within anonymous credential and DAA systems, as their algebraic structure enables efficient proofs of knowledge of message-signature pairs that support partial disclosure. BBS+ signatures consist of one group element and two scalars. As our first contribution, we prove that a variant of BBS+ producing shorter signatures, consisting only of one group element and one scalar, is also secure. The resulting scheme is essentially the original BBS proposal, which was lacking a proof of security. Here we show it satisfies, under the q-SDH assumption, the same provable security guarantees as BBS+. We also provide a complementary tight analysis in the algebraic group model, which heuristically justifies instantiations with potentially shorter signatures. Furthermore, we devise simplified and shorter zero-knowledge proofs of knowledge of a BBS message-signature pair that support partial disclosure of the message. Over the BLS12-381 curve, our proofs are 896 bits shorter than the prior proposal by Camenisch, Drijvers, and Lehmann (TRUST ’16), which is also adopted by the RFC draft. Finally, we show that BBS satisfies one-more unforgeability in the algebraic group model in a scenario, arising in the context of credentials, where the signer can be asked to sign arbitrary group elements, meant to be commitments, without seeing their openings.
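Schematically, with assumed textbook notation (pairing-group generators $g_1, h_0, \dots, h_\ell$, secret key $x$, messages $m_1, \dots, m_\ell$; this is a sketch of the standard shapes, not text from the paper), the two formats compare as follows, matching the element counts above: $$\sigma_{\mathrm{BBS+}} = (A, e, s), \quad A = \Big(g_1\, h_0^{s} \prod_{i=1}^{\ell} h_i^{m_i}\Big)^{1/(x+e)}, \qquad \sigma_{\mathrm{BBS}} = (A, e), \quad A = \Big(g_1 \prod_{i=1}^{\ell} h_i^{m_i}\Big)^{1/(x+e)}.$$ Dropping the randomizer $s$ is exactly what removes one scalar from the signature.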
Last updated:  2023-12-08
Asymptotics of hybrid primal lattice attacks
Daniel J. Bernstein
The literature gives the impression that (1) existing heuristics accurately predict how effective lattice attacks are, (2) non-ternary lattice systems are not vulnerable to hybrid multi-decoding primal attacks, and (3) the asymptotic exponents of attacks against non-ternary systems have stabilized. This paper shows that (1) contradicts (2) and that (1) contradicts (3): the existing heuristics imply that hybrid primal key-recovery attacks are exponentially faster than standard non-hybrid primal key-recovery attacks against the LPR PKE with any constant error width. This is the first report since 2015 of an exponential speedup in heuristic non-quantum primal attacks against non-ternary LPR. Quantitatively, for dimension $n$, modulus $n^{Q_0+o(1)}$, and error width $w$, a surprisingly simple hybrid attack reduces heuristic costs from $2^{(\rho+o(1))n}$ to $2^{(\rho-\rho H_0+o(1))n}$, where $z_0=2Q_0/(Q_0+1/2)^2$, $\rho=z_0 \log_4(3/2)$, and $H_0=1/(1+(\lg w)/0.057981z_0)$. This raises the questions of (1) what heuristic exponent is achieved by more sophisticated hybrid attacks and (2) what impact hybrid attacks have upon concrete cryptosystems whose security analyses have ignored hybrid attacks, such as Kyber-512.
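Plugging the abstract's formulas into a few hypothetical parameter points (reading the $H_0$ denominator as $(\lg w)/(0.057981\,z_0)$, which is an assumption about the intended grouping):

```python
import math

def exponents(Q0, w):
    """Heuristic cost exponents from the abstract: the standard primal attack
    costs 2^((rho+o(1))n), the hybrid attack 2^((rho - rho*H0 + o(1))n)."""
    z0 = 2 * Q0 / (Q0 + 0.5) ** 2
    rho = z0 * math.log(3 / 2, 4)                   # log base 4 of 3/2
    H0 = 1 / (1 + math.log2(w) / (0.057981 * z0))   # grouping assumed
    return rho, rho - rho * H0

for Q0, w in [(1.0, 2), (2.0, 2), (2.0, 16)]:       # hypothetical Q0 and w
    rho, hybrid = exponents(Q0, w)
    print(f"Q0={Q0}, w={w}: standard exponent {rho:.4f}, hybrid {hybrid:.4f}")
```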
Last updated:  2023-12-08
zkDL: Efficient Zero-Knowledge Proofs of Deep Learning Training
Haochen Sun, Tonghe Bai, Jason Li, and Hongyang Zhang
The recent advancements in deep learning have brought about significant changes in various aspects of people's lives. Meanwhile, these rapid developments have raised concerns about the legitimacy of the training process of deep neural networks. To protect the intellectual property of AI developers, directly examining the training process by accessing the model parameters and training data is often prohibited for verifiers. In response to this challenge, we present zero-knowledge deep learning (zkDL), an efficient zero-knowledge proof for deep learning training. To address the long-standing challenge of verifiable computation of non-linearities in deep learning training, we introduce zkReLU, a specialized proof for the ReLU activation and its backpropagation. zkReLU turns the disadvantage of non-arithmetic relations into an advantage, leading to the creation of FAC4DNN, our specialized arithmetic circuit design for modelling neural networks. This design aggregates the proofs over different layers and training steps, without being constrained by their sequential order in the training process. With our new CUDA implementation that achieves full compatibility with the tensor structures and the aggregated proof design, zkDL enables the generation of complete and sound proofs in less than a second per batch update for an 8-layer neural network with 10M parameters and a batch size of 64, while provably ensuring the privacy of data and model parameters. To the best of our knowledge, no existing work on zero-knowledge proofs of deep learning training is scalable to networks of this size.
Last updated:  2023-12-08
QCB is Blindly Unforgeable
Jannis Leuther and Stefan Lucks
QCB is a proposal for a post-quantum secure, rate-one authenticated encryption with associated data scheme (AEAD) based on classical OCB3 and $\Theta$CB, which are vulnerable against a quantum adversary in the Q2 setting. The authors of QCB prove integrity under plus-one unforgeability, whereas the proof of the stronger definition of blind unforgeability has been left as an open problem. After a short overview of QCB and the current state of security definitions for authentication, this work proves blind unforgeability of QCB. Finally, the strategy of using tweakable block ciphers in authenticated encryption is generalised to a generic blindly unforgeable AEAD model.
Last updated:  2023-12-08
Intermediate Certificate Suppression in Post-Quantum TLS: An Approximate Membership Querying Approach
Dimitrios Sikeridis, Sean Huntley, David Ott, and Michael Devetsikiotis
Quantum computing advances threaten the security of today's public key infrastructure, and have led to the pending standardization of alternative, quantum-resistant key encapsulation and digital signature cryptography schemes. Unfortunately, authentication algorithms based on the new post-quantum (PQ) cryptography create significant performance bottlenecks for TLS due to larger certificate chains which introduce additional packets and round-trips. The TLS handshake slowdown will be unacceptable to many applications, and detrimental to the broader adoption of quantum safe cryptography standards. In this paper, we propose a novel framework for Intermediate Certificate Authority (ICA) certificate suppression in TLS that reduces the authentication message size and prevents excessive round-trip delays. Our approach utilizes an approximate membership query (AMQ) data structure (probabilistic filter) to advertise known ICA certs to remote TLS endpoints so that unnecessary ICA certificates are omitted from the TLS handshake exchange. We showcase the extent of the PQ authentication overhead challenge in TLS, and evaluate the feasibility of AMQ filters for ICA suppression in terms of space and computational overhead. Finally, we experimentally evaluate the potential gains from our approach and showcase a $70\%$ reduction in exchanged ICA cert data, which translates to 15-50 MB of savings in PQ TLS for certain Web-based application scenarios.
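The AMQ idea in miniature: the client advertises a compact probabilistic filter of the ICA certificates it already caches, and the server omits any intermediate whose fingerprint the filter claims to contain. Below is a minimal textbook Bloom filter sketch of this flow (all names and sizes hypothetical; the paper's filter choice, encoding, and TLS integration are richer). False positives are possible, so a real deployment needs a fallback fetch; false negatives are not.

```python
import hashlib

class BloomFilter:
    """Textbook Bloom filter: no false negatives, tunable false-positive rate."""
    def __init__(self, m_bits: int = 8192, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

# Hypothetical use: suppress ICA certificates the client already knows.
known_icas = BloomFilter()
known_icas.add(b"sha256-fingerprint-of-cached-ICA")    # advertised by client
chain = [b"leaf-cert", b"sha256-fingerprint-of-cached-ICA"]
to_send = [c for c in chain if c not in known_icas]    # server omits known ICAs
print(to_send)  # [b'leaf-cert']
```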
Last updated:  2023-12-08
In-depth Correlation Power Analysis Attacks on a Hardware Implementation of CRYSTALS-Dilithium
Huaxin Wang, Yiwen Gao, Yuejun Liu, Qian Zhang, and Yongbin Zhou
During the standardisation process of post-quantum cryptography, NIST encourages research on side-channel analysis for candidate schemes. As the recommended lattice signature scheme, CRYSTALS-Dilithium, when implemented on hardware, has only been subjected to the side-channel attack presented by Steffen et al. in IACR ePrint 2022. This attack is not complete and requires excessive traces. Therefore, we investigate the leakage of an FPGA (Kintex7) implementation of CRYSTALS-Dilithium using the CPA method, where partial private key coefficients can be recovered with a minimum of 70,000 traces. As far as we know, this is the first work that applies power leakage to side-channel attacks on FPGA implementations of CRYSTALS-Dilithium. Furthermore, we optimise the attack by extracting Points-of-Interest using known information due to parallelism (named CPA-PoI) and by iteratively utilising parallel leakages (named CPA-ITR). We experimentally demonstrate that when recovering the same number of key coefficients, CPA-PoI and CPA-ITR reduce the number of traces used by up to 16.67 percent and 25 percent, respectively, compared to the CPA method. When attacking with the same number of traces, the CPA-PoI method and the CPA-ITR method increase the number of recovered key coefficients by up to 55.17 percent and 93.10 percent, respectively, compared to the CPA method. Our experiments confirm that the FPGA implementation of CRYSTALS-Dilithium is also very vulnerable to side-channel analysis.
Last updated:  2023-12-08
Reverie: an end-to-end accumulation scheme from Cyclefold
Lev Soukhanov
Recent advances in SNARK recursion and incrementally-verifiable computation are vast, but most of the effort seems to be focused on a particular design goal: proving the result of a large computation known completely in advance. There are other possible applications, requiring different design tradeoffs. A particularly interesting direction is the case of a swarm of collaborating provers communicating over a peer-to-peer network, which also requires optimizing the amount of data exchanged between the participants of the swarm. One notable such application is Ethereum's consensus, which requires aggregating millions of signatures of individual validators. In this technical note, we propose an informal notion of an end-to-end IVC scheme, meaning that the amount of data a prover needs to exchange with the previous prover to continue the computation is small. We explore the existing design space from this point of view, and suggest an approach to constructing such a scheme by combining the PlonK proof system with the recent Cyclefold construction.
Last updated:  2023-12-07
The Blockwise Rank Syndrome Learning problem and its applications to cryptography
Nicolas Aragon, Pierre Briaud, Victor Dyseryn, Philippe Gaborit, and Adrien Vinçotte
Recently, the notion of blockwise errors in the context of rank-based cryptography was introduced by Song et al. at AsiaCrypt 2023. This notion of error, very close to the sum-rank metric, permits greatly improved parameters for the LRPC and RQC cryptographic schemes by decreasing the weight of the decoded error. A little earlier, the multi-syndrome approach introduced for the LRPC and RQC schemes had also allowed a considerable decrease in parameter sizes, in particular through the introduction of Augmented Gabidulin codes. In the present paper we show that the two previous approaches (blockwise errors and multi-syndromes) can be combined into a single approach which leads to very efficient generalized RQC and LRPC schemes. In order to do so, we introduce a new problem, the Blockwise Rank Support Learning problem, which consists of guessing the support of the errors when several syndromes are given as input, with blockwise structured errors. The new schemes we introduce have very interesting features: for 128-bit security, the sum of public key and ciphertext sizes is only 1.4 kB for the generalized RQC scheme and 1.7 kB for the generalized LRPC scheme. The new approach proposed in this paper yields a 40% gain in parameter size compared to previous results, and even better results in terms of size than the KYBER scheme, whose total sum is 1.5 kB. Besides the description of these new schemes, the paper provides new attacks on the l-RD problem introduced in the paper by Song et al. at AsiaCrypt 2023; in particular, these new attacks permit the cryptanalysis of all the blockwise LRPC parameters they proposed (with an improvement of more than 40 bits in the case of structural attacks). We also describe combinatorial attacks and algebraic attacks for the new Blockwise Rank Support Learning problem we introduce.
Last updated:  2023-12-07
When Cryptography Needs a Hand: Practical Post-Quantum Authentication for V2V Communications
Geoff Twardokus, Nina Bindel, Hanif Rahbari, and Sarah McCarthy
We tackle the atypical challenge of supporting post-quantum cryptography (PQC) and its significant overhead in safety-critical vehicle-to-vehicle (V2V) communications, dealing with strict overhead and latency restrictions within the limited radio spectrum for V2V. For example, we show that the current use of spectrum to support signature verification in V2V makes it nearly impossible to adopt PQC. Accordingly, we propose a scheduling technique for message signing certificate transmissions (which we find are currently up to 93% redundant) that learns to adaptively reduce the use of radio spectrum. In combination, we design the first integration of PQC and V2V, which satisfies the above stringent constraints given the available spectrum. Specifically, we analyze the three PQ signature algorithms selected for standardization by NIST, as well as XMSS (RFC 8391), and propose a Partially Hybrid authentication protocol—a tailored fusion of classical cryptography and PQC—for use in the V2V ecosystem during the nascent transition period we outline towards fully PQ V2V. Our provably secure protocol efficiently balances security and performance, as demonstrated experimentally with software-defined radios (USRPs), commercial V2V devices, and road traffic and V2V simulators. We show our joint transmission scheduling optimization and Partially Hybrid design are scalable and reliable under realistic conditions, adding a negligible average delay (0.39 ms per message) against the current state-of-the-art.
Last updated:  2023-12-07
Shufflecake: Plausible Deniability for Multiple Hidden Filesystems on Linux
Elia Anzuoni and Tommaso Gagliardoni
We present Shufflecake, a new plausible deniability design that hides the existence of encrypted data on a storage medium, making it very difficult for an adversary to prove that such data exists. Shufflecake can be considered a ``spiritual successor'' of tools such as TrueCrypt and VeraCrypt, but vastly improved: it works natively on Linux, it supports any filesystem of choice, and it can manage multiple volumes per device, so as to make deniability of the existence of hidden partitions truly plausible. Compared to ORAM-based solutions, Shufflecake is extremely fast and far simpler, but it does not offer native protection against multi-snapshot adversaries. However, we discuss security extensions that are made possible by its architecture, and we show evidence why these extensions might be enough to thwart more powerful adversaries. We implemented Shufflecake as an in-kernel tool for Linux, adding useful features, and we benchmarked its performance, showing only a minor slowdown compared to a baseline encrypted system. We believe Shufflecake represents a useful tool for people whose freedom of expression is threatened by repressive authorities or dangerous criminal organizations, in particular: whistleblowers, investigative journalists, and activists for human rights in oppressive regimes.
Last updated:  2023-12-07
On Active Attack Detection in Messaging with Immediate Decryption
Khashayar Barooti, Daniel Collins, Simone Colombo, Loïs Huguenin-Dumittan, and Serge Vaudenay
The widely used Signal protocol provides protection against state exposure attacks through forward security (protecting past messages) and post-compromise security (for restoring security). It supports immediate decryption, allowing messages to be re-ordered or dropped at the protocol level without affecting correctness. In this work, we consider strong active attack detection for secure messaging with immediate decryption, where parties are able to immediately detect active attacks under certain conditions. We first consider in-band active attack detection, where participants who have been actively compromised but are still able to send a single message to their partner can detect the compromise. We propose two complementary notions to capture security, and present a compiler that provides security with respect to both notions. Our notions generalise existing work (RECOVER security) which only supported in-order messaging. We also study the related out-of-band attack detection problem by considering communication over out-of-band, authenticated channels and propose analogous security notions. We prove that one of our two notions in each setting imposes a linear communication overhead in the number of sent messages and security parameter using an information-theoretic argument. This implies that each message must information-theoretically contain all previous messages and that our construction, that essentially attaches the entire message history to every new message, is asymptotically optimal. We then explore ways to bypass this lower bound and highlight the feasibility of practical active attack detection compatible with immediate decryption.
Last updated:  2023-12-07
Multi-Signatures for Ad-hoc and Privacy-Preserving Group Signing
Anja Lehmann and Cavit Özbay
Multi-signatures allow combining individual signatures from different signers on the same message into a short aggregated signature. Newer schemes further allow aggregating the individual public keys, such that the combined signature gets verified against a short aggregated key. This makes them a versatile alternative to threshold or distributed signatures: the aggregated key can serve as a group key, and signatures under that key can only be computed with the help of all signers. What makes multi-signatures even more attractive is their simple key management, as users can re-use the same secret key in several ad-hoc formed groups. In that context, it is desirable not to sacrifice privacy as soon as keys get re-used, and to ensure that users are not linkable across groups. In fact, when multi-signatures with key aggregation were proposed, it was claimed that aggregated keys hide the signers' identities or even the fact that it is a combined key at all. In our work, we show that none of the existing multi-signature schemes provide these privacy guarantees when keys get re-used in multiple groups. This is due to the fact that all known schemes deploy deterministic key aggregation. To overcome this limitation, we propose a new variant of multi-signatures with probabilistic yet verifiable key aggregation. We formally define the desirable privacy and unforgeability properties in the presence of key re-use. This also requires adapting the unforgeability model to the group setting, and ensuring that key re-use does not weaken the expected guarantees. We present a simple BLS-based scheme that securely realizes our strong privacy and security guarantees. We also formalize and investigate the privacy that is achievable by deterministic schemes, and prove that existing schemes provide the advertised privacy features as long as one public key remains secret.
Last updated:  2023-12-07
The statistical nature of leakage in SSE schemes and its role in passive attacks
Marc Damie, Jean-Benoist Leger, Florian Hahn, and Andreas Peter
Encrypted search schemes have been proposed to address growing privacy concerns. However, several leakage-abuse attacks have highlighted the shortcomings of these schemes. The literature remains vague about the consequences of these attacks for real-world applications: are these attacks dangerous in practice? Is it safe to use these schemes? Do we even need countermeasures? This paper introduces a novel mathematical model for attackers' knowledge using statistical estimators. Our model reveals that any attacker's knowledge is inherently noisy, which limits attack effectiveness. This inherent noise can be considered a security guarantee, a natural attack mitigation. Capitalizing on this insight, we develop a risk assessment protocol to guide real-world deployments. Our findings demonstrate that limiting the index size is an efficient leverage to bound attack accuracy. Finally, we employ similar statistical methods to enhance attack analysis methodology. Hence, our work offers a fresh perspective on SSE attacks and provides practitioners and researchers with novel methodological tools.
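To illustrate the estimator viewpoint (a toy sketch under our own assumptions, not the paper's model): an attacker estimating a keyword frequency from a finite index only ever observes a noisy sample estimate, and the noise grows as the index shrinks, which is the intuition behind bounding attack accuracy by limiting index size.

    import random

    true_freq = 0.07                      # hypothetical keyword frequency
    for index_size in (100, 10_000, 1_000_000):
        # empirical frequency observed from an index of the given size
        est = sum(random.random() < true_freq for _ in range(index_size)) / index_size
        print(index_size, round(abs(est - true_freq), 4))  # error shrinks on average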
Last updated:  2023-12-07
Blockchain Governance via Sharp Anonymous Multisignatures
Wonseok Choi, Xiangyu Liu, and Vassilis Zikas
Electronic voting has occupied a large part of the cryptographic protocols literature. The recent reality of blockchains---in particular their need for online governance mechanisms---has brought new parameters and requirements to the problem. We identify the key requirements of a blockchain governance mechanism, namely correctness (including the elimination of double votes), voter anonymity, and traceability, and investigate mechanisms that can achieve them with minimal interaction and under assumptions that fit the blockchain setting. First, we define a signature-like primitive, which we term sharp anonymous multisignatures (in short, #AMS), that tightly meets the needs of blockchain governance. In a nutshell, #AMSs allow any set of parties to generate a signature, e.g., on a proposal to be voted upon, which if posted on the blockchain hides the identities of the signers/voters but reveals their number. This can be seen as a (strict) generalization of threshold ring signatures (TRS). We next turn to constructing such #AMSs and using them in various governance scenarios---e.g., single vs. multiple votes per voter. To this end, we observe that although the definition of TRS does not imply #AMS, one can compile some of the existing TRS constructions into #AMS. This raises the question: What is the TRS structure that allows such a compilation? To answer the above, we devise templates for TRSs. Our templates encapsulate and abstract the structure that allows for the above compilation---most of the TRS schemes that can be compiled into #AMS are, in fact, instantiations of our template. This abstraction makes our template generic for instantiating TRSs and #AMSs from different cryptographic assumptions (e.g., DDH, LWE, etc.). One of our templates is based on chameleon hashing, and we explore a framework of lossy chameleon hashes to fully understand its nature. Finally, we turn to how #AMS schemes can be used in our applications. We provide fast (in some cases non-interactive) #AMS-based blockchain governance mechanisms for a wide spectrum of assumptions on the honesty (semi-honest vs. malicious) and availability of voters and proposers.
Last updated:  2023-12-07
On the Black-Box Impossibility of Multi-Designated Verifiers Signature Schemes from Ring Signature Schemes
Kyosuke Yamashita and Keisuke Hara
Since the work by Laguillaumie and Vergnaud at ICICS'04, it has been widely believed that multi-designated verifier signature schemes (MDVS) can be constructed from ring signature schemes in general. However, in this paper, somewhat surprisingly, we prove that it is impossible to construct an MDVS scheme from a ring signature scheme in a black-box sense (in the standard model). The impossibility stems from the difference between the definitions of unforgeability. To the best of our knowledge, existing works demonstrating such constructions do not provide a formal reduction from an MDVS scheme to a ring signature scheme, and thus the impossibility has been overlooked for a long time.
Last updated:  2023-12-06
Optimal Flexible Consensus and its Application to Ethereum
Joachim Neu, Srivatsan Sridhar, Lei Yang, and David Tse
Classic BFT consensus protocols guarantee safety and liveness for all clients if fewer than one-third of replicas are faulty. However, in applications such as high-value payments, some clients may want to prioritize safety over liveness. Flexible consensus allows each client to opt for a higher safety resilience, albeit at the expense of reduced liveness resilience. We present the first construction that allows optimal safety-liveness tradeoff for every client simultaneously. This construction is modular and is realized as an add-on applied on top of an existing consensus protocol. The add-on consists of an additional round of voting and permanent locking done by the replicas, to sidestep a sub-optimal quorum-intersection-based constraint present in previous solutions. We adapt our construction to the existing Ethereum protocol to derive optimal flexible confirmation rules that clients can adopt unilaterally without requiring system-wide changes. This is possible because existing Ethereum protocol features can double as the extra voting and locking. We demonstrate an implementation using Ethereum's consensus API.
Last updated:  2023-12-06
A Multiparty Commutative Hashing Protocol based on the Discrete Logarithm Problem
Daniel Zentai, Mihail Plesa, and Robin Frot
Let $\mathcal{X}$ and $\mathcal{Y}$ be two sets and suppose that a set of participants $P=\{P_1,P_2,\dots,P_n\}$ would like to calculate the keyed hash value of some message $m\in\mathcal{X}$ known to a single participant in $P$ called the data owner. Also, suppose that each participant $P_i$ knows a secret value $x_i\in\mathcal{X}$. In this paper, we propose a protocol that enables the participants in this setup to calculate the value $y=H(m,x_1,x_2,\dots ,x_n)$ of a hash function $H:\mathcal{X}^{n+1}\rightarrow\mathcal{Y}$ such that: - The function $H$ is a one-way function. - Participants in $P\backslash\{P_i\}$ cannot obtain $x_i$. - Participants other than the data owner cannot obtain $m$. - The hash value $y=H(m,x_1,x_2,\dots ,x_n)$ remains the same regardless of the order of the secret $x_i$ values.
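A minimal sketch of the order-independence property (an illustrative toy based on iterated modular exponentiation, not the protocol from the paper): since $((g^m)^{x_1})^{x_2} = g^{m x_1 x_2} \bmod p$, applying the participants' secret exponents in any order yields the same value, while recovering $m$ or any $x_i$ from intermediate values is a discrete-logarithm-type problem.

    p = 2**127 - 1              # toy prime modulus (our choice, not the paper's)
    g = 3

    def step(value, secret):    # one participant mixes in its secret exponent
        return pow(value, secret, p)

    m, x1, x2, x3 = 42, 1234567, 7654321, 1111111
    h0 = pow(g, m, p)           # data owner commits to the message
    order_a = step(step(step(h0, x1), x2), x3)
    order_b = step(step(step(h0, x3), x1), x2)
    assert order_a == order_b   # same digest regardless of the order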
Last updated:  2023-12-06
Leaking-Cascade: an Optimal Construction for KEM Hybridization
Céline Chevalier, Guirec Lebrun, and Ange Martinelli
Hybrid post-quantum cryptography is a cautious approach that aims to guard against the threat posed by the quantum computer, through the simultaneous use of Post-Quantum (PQ) and classical (i.e. pre-quantum) cryptosystems, should the post-quantum schemes used prove insecure. Regarding the hybridization of Key Encapsulation Mechanisms (KEMs), most recent studies focus on safely combining the symmetric keys output by a parallel execution of classical and post-quantum KEMs. While this architecture is straightforward, it appears to lack computational efficiency and bandwidth optimization. Hence, we propose a novel method for more effectively hybridizing several KEMs, by combining the underlying Public-Key Encryption schemes (PKEs) in an innovative variant of the cascade composition that we call “leaking-cascade”, before turning the hybrid PKE into a KEM with a FO transformation. We prove that this architecture constitutes a robust combiner for encryption schemes up to IND-CPA security, which eventually yields an IND-CCA2-secure KEM. In terms of performance, our leaking-cascade scheme is at least as computationally efficient as the commonly used parallel combination and has a better communication cost, with a ciphertext bandwidth gain that may exceed 13% compared to the latter. Moreover, we prove that for given PKEs that need to be hybridized, the leaking-cascade has an optimal ciphertext communication cost.
Last updated:  2023-12-06
Security Analysis of an Image Encryption Scheme Based on a New Secure Variant of Hill Cipher and 1D Chaotic Maps
George Teseleanu
In 2019, Essaid et al. introduced a chaotic map-based encryption scheme for color images. Their approach employs three improved chaotic maps to dynamically generate the key bytes and matrix required by the cryptosystem. It should be noted that these parameters are dependent on the size of the source image. According to the authors, their method offers adequate security (i.e. $279$ bits) for transmitting color images over unsecured channels. However, we show in this paper that this is not the case. Specifically, we present two cryptanalytic attacks that undermine the security of Essaid et al.'s encryption scheme. In the case of the chosen plaintext attack, we require only two chosen plaintexts to completely break the scheme. The second attack is a chosen ciphertext attack, which requires two chosen ciphertexts and, compared to the first one, has a rough complexity of $2^{24}$. The attacks are feasible due to the fact that the key bits and matrix generated by the algorithm remain unaltered for distinct plaintext images.
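As a generic illustration of why static key material is fatal under chosen plaintexts (a deliberately simplified toy, not Essaid et al.'s scheme or the exact attack): if encryption degenerates to XOR with key bytes that are fixed for all images of a given size, a single all-zero chosen plaintext already reveals the keystream.

    import os

    K = os.urandom(16)                      # stand-in for the static key bytes

    def encrypt(p):                         # hypothetical vulnerable scheme
        return bytes(a ^ b for a, b in zip(p, K))

    recovered = encrypt(bytes(16))          # chosen all-zero plaintext -> key bytes
    assert recovered == K
    target = encrypt(b"secret image....")   # any intercepted ciphertext...
    plain = bytes(c ^ k for c, k in zip(target, recovered))
    assert plain == b"secret image...."     # ...now decrypts for free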
Last updated:  2023-12-06
Security Analysis of an Image Encryption Based on the Kronecker Xor Product, the Hill Cipher and the Sigmoid Logistic Map
George Teseleanu
In 2023, Mfungo et al. introduced an image encryption scheme that employs the Kronecker xor product, the Hill cipher and a chaotic map. Their proposal uses the chaotic map to dynamically generate two out of the three secret keys employed by their scheme. Note that both keys are dependent on the size of the original image, while the Hill key is static. Despite the authors' assertion that their proposal offers sufficient security ($149$ bits) for transmitting color images over unsecured channels, we found that this is not accurate. To support our claim, we present a chosen plaintext attack that requires $2$ oracle queries and has a worst-case complexity of $\mathcal O(2^{32})$. Note that in this case Mfungo et al.'s scheme has a complexity of $\mathcal O(2^{33})$, and thus our attack is two times faster than an encryption. The reason why this attack is viable is that the two keys remain unchanged for different plaintext images of the same size, while the Hill key remains unaltered for all images.
Last updated:  2023-12-06
SoK: Post-Quantum TLS Handshake
Nouri Alnahawi, Johannes Müller, Jan Oupický, and Alexander Wiesmaier
Transport Layer Security (TLS) is the backbone security protocol of the Internet. As this fundamental protocol is at risk from future quantum attackers, many proposals have been made to protect TLS against this threat by implementing post-quantum cryptography (PQC). The widespread interest in post-quantum TLS has given rise to a large number of solutions over the last decade. These proposals differ in many aspects, including the security properties they seek to protect, the efficiency and trustworthiness of their post-quantum building blocks, and the application scenarios they consider, to name a few. Based on an extensive literature review, we classify existing solutions according to their general approaches, analyze their individual contributions, and present the results of our extensive performance experiments. Based on these insights, we identify the most reasonable candidates for post-quantum TLS, which research problems in this area have already been solved, and which are still open. Overall, our work provides a well-founded reference point for researching post-quantum TLS and preparing TLS in practice for the quantum age.
Last updated:  2023-12-06
Integral Multiset: A Novel Framework for Integral Attacks over Finite Fields
Weizhe Wang and Deng Tang
In recent years, symmetric primitives that focus on arithmetic metrics over large finite fields, characterized as arithmetization-oriented (\texttt{AO}) ciphers, are widely used in advanced protocols such as secure multi-party computation (MPC), fully homomorphic encryption (FHE) and zero-knowledge proof systems (ZK). To ensure good performance in protocols, these \texttt{AO} ciphers are commonly designed with a small number of multiplications over finite fields and low multiplicative depths. This feature makes \texttt{AO} ciphers vulnerable to algebraic attacks, especially integral attacks. While the analysis of integral attacks on traditional block ciphers defined over $\mathbb{F}_2$ is well developed, there is still a lack of research on this kind of attack over large finite fields. Previous integral attacks over large finite fields are primarily higher-order differential attacks, which construct distinguishers by simply utilizing algebraic degrees without fully exploiting other algebraic properties of finite fields. In this paper, we propose a new concept called \textit{integral multiset}, which provides a clear characterization of the integral property of multisets over the finite field $\mathbb{F}_{p^n}$. Based on multiplicative subgroups of finite fields, we present a new class of integral multisets that exhibits a completely different integral property compared to the previously studied multisets based on vector subspaces over the finite field $\mathbb{F}_2$. In addition, we also present a method for merging existing integral multisets to create a new one with a better integral property. Furthermore, combining with monomial detection techniques, we propose a framework for searching for integral distinguishers based on integral multisets. We apply our new framework to some competitive \texttt{AO} ciphers, including \textsf{MiMC} and \textsf{Chaghri}. For all these ciphers, we successfully find integral distinguishers with lower time and data complexity. Especially for \textsf{MiMC}, the complexity of some distinguishers we find is only a half or a quarter of the previous best one. Due to the specific algebraic structure, none of our results could be obtained by higher-order differential attacks. Furthermore, our framework adapts perfectly to various monomial detection techniques like general monomial prediction proposed by Cui et al. at ASIACRYPT 2022 and coefficient grouping invented by Liu et al. at EUROCRYPT 2023. We believe that our work will provide new insight into integral attacks over large finite fields.
Last updated:  2023-12-06
B2T: The Third Logical Value of a Bit
Dipesh, Vishesh Mishra, and Urbi Chatterjee
Modern computing systems predominantly operate on the binary number system that accepts only ‘0’ or ‘1’ as logical values, leading to computational homogeneity. However, this homogeneity helps create leakage patterns that can be exploited by adversaries to carry out hardware- and software-level attacks. Recent research has shown that ternary systems, operating on three logical values (‘0’, ‘1’, and ‘z’), can surpass binary systems in terms of performance and security. In this paper, we first propose a novel approach that assigns logical values based on the direction of current flow within a conducting element, rather than relying on the voltage scale. Furthermore, we also present the mathematical models for each ternary gate.
Last updated:  2023-12-06
An Improved Method for Evaluating Secret Variables and Its Application to WAGE
Weizhe Wang, Haoyang Wang, and Deng Tang
The cube attack is a powerful cryptanalysis technique against symmetric ciphers, especially stream ciphers. The adversary aims to recover secret key bits by solving equations that involve the key. To simplify the equations, a set of plaintexts called a cube is summed up together. Traditional cube attacks use only linear or quadratic superpolies, and the size of the cube is limited to an experimental range, typically around 40. However, the cube attack based on division property, proposed by Todo et al. at CRYPTO 2017, overcomes these limitations and enables theoretical cube attacks on many lightweight stream ciphers. For a given cube $I$, they evaluate the set $J$ of secret key bits involved in the superpoly and require $2^{|I|+|J|}$ encryptions to recover the superpoly. However, the secret variables evaluation method proposed by Todo et al. sometimes becomes unresponsive and fails to solve within a reasonable time. In this paper, we propose an improvement to Todo's method by breaking down difficult-to-solve problems into several smaller sub-problems. Our method retains the efficiency of Todo's method while effectively avoiding unresponsive situations. We apply our method to the WAGE cipher, an NLFSR-based authenticated encryption algorithm and one of the second-round candidates in the NIST LWC competition. Specifically, we successfully mount cube attacks on 29-round WAGE, as well as on 24-round WAGE with a sponge constraint. To the best of our knowledge, this is the first cube attack against the WAGE cipher, which provides a more accurate characterization of WAGE's resistance against algebraic attacks.
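To make the cube-summation step concrete (a toy example with a made-up 4-variable function, not WAGE): XOR-summing the cipher output over all assignments of the cube variables cancels every term not divisible by the full cube monomial, leaving the superpoly in the key bits.

    from itertools import product

    def f(x1, x2, k1, k2):                       # toy 'cipher output' bit
        return (x1 & x2 & k1) ^ (x2 & k2) ^ (k1 & k2)

    def superpoly(k1, k2):                       # sum over the cube I = {x1, x2}
        acc = 0
        for x1, x2 in product((0, 1), repeat=2):
            acc ^= f(x1, x2, k1, k2)
        return acc

    for k1, k2 in product((0, 1), repeat=2):
        assert superpoly(k1, k2) == k1           # the superpoly is exactly k1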
Last updated:  2023-12-05
Verifiable Distributed Aggregation Functions
Hannah Davis, Christopher Patton, Mike Rosulek, and Phillipp Schoppmann
The modern Internet is built on systems that incentivize collection of information about users. In order to minimize privacy loss, it is desirable to prevent these systems from collecting more information than is required for the application. The promise of multi-party computation is that data can be aggregated without revealing individual measurements to the data collector. This work offers a provable security treatment for "Verifiable Distributed Aggregation Functions (VDAFs)", a class of multi-party computation protocols being considered for standardization by the IETF. We propose a formal framework for the analysis of VDAFs and apply it to two constructions. The first is Prio3, one of the candidates for standardization. This VDAF is based on the Prio system of Corrigan-Gibbs and Boneh (NSDI 2017). We prove that Prio3 achieves our security goals with only minor changes to the draft. The second construction, called Doplar, is introduced by this paper. Doplar is a round-reduced variant of the Poplar system of Boneh et al. (IEEE S&P 2021), itself a candidate for standardization. The cost of this improvement is a modest increase in overall bandwidth and computation.
Last updated:  2023-12-05
Accountable Bulletin Boards: Definition and Provably Secure Implementation
Mike Graf, Ralf Küsters, Daniel Rausch, Simon Egger, Marvin Bechtold, and Marcel Flinspach
Bulletin boards (BB) are important cryptographic building blocks that, at their core, provide a broadcast channel with memory. BBs are widely used within many security protocols, including secure multi-party computation protocols, e-voting systems, and electronic auctions. Even though the security of protocols crucially depends on the underlying BB, as also highlighted by recent works, the literature on constructing secure BBs is sparse. The only provably secure BBs to date require trusted components and sometimes also networks without message loss, which makes them unsuitable for applications with particularly high security needs where these assumptions might not always be met. In this work, we fill this gap by leveraging the concepts of accountability and universal composability (UC). More specifically, we propose the first ideal functionality for accountable BBs that formalizes the security requirements of such BBs in UC. We then propose Fabric$^\ast_\text{BB}$ as a slight extension designed on top of Fabric$^\ast$, which is a variant of the prominent Hyperledger Fabric distributed ledger protocol, and show that Fabric$^\ast_\text{BB}$ UC-realizes our ideal BB functionality. This result makes Fabric$^\ast_\text{BB}$ the first provably accountable BB, an often desired but so far not formally proven property for BBs, and also the first BB that has been proven to be secure based only on standard cryptographic assumptions and without requiring trusted BB components or network assumptions. Through an implementation and performance evaluation we show that Fabric$^\ast_\text{BB}$ is practical for many applications of BBs.
Last updated:  2023-12-05
COMMON: Order Book with Privacy
Albert Garreta, Adam Gągol, Aikaterini-Panagiota Stouka, Damian Straszak, and Michal Zajac
Decentralized Finance (DeFi) has witnessed remarkable growth and innovation, with Decentralized Exchanges (DEXes) playing a pivotal role in shaping this ecosystem. As numerous DEX designs emerge, challenges such as price inefficiency and lack of user privacy continue to prevail. This paper introduces a novel DEX design, termed COMMON, that addresses these two predominant challenges. COMMON operates as an order book, natively integrated with a shielded token pool, thus providing anonymity to its users. Through the integration of zk-SNARKs, order batching, and Multiparty Computation (MPC), COMMON also conceals the values in orders. This feature, paired with the fact that users never leave the shielded pool when utilizing COMMON, provides a high level of privacy. To enhance price efficiency, we introduce a two-stage order matching process: initially, orders are matched internally, followed by an open, permissionless Dutch Auction to present the assets to Market Makers. This design effectively enables aggregating multiple sources of liquidity and helps reduce the adverse effects of Maximal Extractable Value (MEV) by redirecting most of the MEV profits back to the users.
Last updated:  2023-12-05
Different Flavours of HILL Pseudoentropy and Yao Incompressibility Entropy
Pihla Karanko
There are two popular ways to measure computational entropy in cryptography: (HILL) pseudoentropy and (Yao) incompressibility entropy. Both of these computational entropy notions are based on a natural intuition. - A random variable $X$ has $k$ bits of pseudoentropy if there exists a random variable $Y$ that has $k$ bits of 'real' entropy and $Y$ is computationally indistinguishable from $X$. - A random variable $X$ has $k$ bits of incompressibility entropy if $X$ cannot be efficiently compressed to less than $k$ bits. It is also intuitive that if a random variable has high pseudoentropy, then it should also have high incompressibility entropy, because a high-entropy distribution cannot be compressed. However, the above intuitions are not precise. Does 'real entropy' refer to Shannon entropy or min-entropy? What kind of correctness do we require from the compressor algorithm? Different papers use slightly different variations of both pseudoentropy and incompressibility entropy. In this note we study these subtle differences and see how they affect the parameters in the implication that pseudoentropy implies incompressibility.
Last updated:  2023-12-05
When NTT Meets SIS: Efficient Side-channel Attacks on Dilithium and Kyber
Zehua Qiao, Yuejun Liu, Yongbin Zhou, Mingyao Shao, and Shuo Sun
In 2022, NIST selected Kyber and Dilithium as post-quantum cryptographic standard algorithms. The Number Theoretic Transform (NTT) algorithm, which facilitates polynomial multiplication, has become a primary target for side-channel attacks. Among these, Correlation Power Analysis (CPA) attacks against the NTT have received much attention; they aim to recover all the coefficients of the private key in the NTT domain. The necessity to recover all these coefficients not only limits efficiency but also directly impacts the feasibility of such attacks. Thus, a crucial question emerges: can the remaining coefficients be recovered using only a subset of known ones? In this work, we respond affirmatively by introducing overdetermined-system-based and SIS-assisted key recovery methods for both Dilithium and Kyber, tailored for scenarios with incomplete NTT domain private keys. The SIS-assisted method, by embedding the NTT transform matrix into the SIS search problem, offers complete key recovery with the minimum number of known coefficients in the NTT domain. For Kyber512 and Dilithium2, only 64 and 32 coefficients are enough to recover a subset of the private key with 256 coefficients, respectively. Furthermore, we propose a parameter-adjustable CPA scheme to expedite the recovery of a single coefficient in the NTT domain. Combining this CPA scheme with the SIS-assisted approach, we executed practical attacks on both unprotected and masked implementations of Kyber and Dilithium on an ARM Cortex-M4. The results demonstrate that we can recover a subset of 256 private key coefficients for Dilithium2 using 2,000 power traces in 0.5 minutes, while Kyber512 requires 0.4 minutes and 500 power traces. These attacks achieve a 400$\times$ speedup compared to the best-known attacks against Dilithium. Moreover, we successfully break the first-order masked implementations and explore the potential applicability to higher-order implementations.
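For orientation (a textbook sketch with toy parameters, far smaller than Kyber's $q = 3329$, and not the attacked implementation): the forward NTT evaluates a polynomial at the powers of a primitive $n$-th root of unity modulo $q$, and it is these transformed coefficients, the "NTT domain" private key, that the attacks recover.

    q, n, omega = 17, 8, 2      # toy parameters; 2 has multiplicative order 8 mod 17

    def ntt(a):                 # forward transform: evaluate at powers omega^j
        return [sum(a[i] * pow(omega, i * j, q) for i in range(n)) % q
                for j in range(n)]

    def intt(ahat):             # inverse transform
        n_inv, om_inv = pow(n, -1, q), pow(omega, -1, q)
        return [n_inv * sum(ahat[j] * pow(om_inv, i * j, q) for j in range(n)) % q
                for i in range(n)]

    a = [3, 1, 4, 1, 5, 9, 2, 6]
    assert intt(ntt(a)) == a    # round-trips exactly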
Last updated:  2023-12-05
Projective Space Stern Decoding and Application to SDitH
Kevin Carrier, Valérian Hatey, and Jean-Pierre Tillich
We show that standard decoding algorithms for generic linear codes over a finite field can be sped up by a factor which is essentially the size of the finite field, by reducing the decoding problem to a low-weight codeword problem and working in the relevant projective space. We apply this technique to SDitH and show that the parameters of both the original submission and the updated version fall short of meeting the security requirements set by NIST.
Last updated:  2023-12-05
Robust Combiners and Universal Constructions for Quantum Cryptography
Taiga Hiroka, Fuyuki Kitagawa, Ryo Nishimaki, and Takashi Yamakawa
A robust combiner combines many candidates for a cryptographic primitive and generates a new candidate for the same primitive. Its correctness and security hold as long as one of the original candidates satisfies correctness and security. A universal construction is a closely related notion to a robust combiner. A universal construction for a primitive is an explicit construction of the primitive that is correct and secure as long as the primitive exists. It is known that a universal construction for a primitive can be constructed from a robust combiner for the primitive in many cases. Although robust combiners and universal constructions for classical cryptography are widely studied, robust combiners and universal constructions for quantum cryptography have not been explored so far. In this work, we define robust combiners and universal constructions for several quantum cryptographic primitives including one-way state generators, public-key quantum money, quantum bit commitments, and unclonable encryption, and provide constructions of them. On a different note, it was an open problem how to expand the plaintext length of unclonable encryption. In one of our universal constructions for unclonable encryption, we can expand the plaintext length, which resolves the open problem.
Last updated:  2023-12-05
Fallen Sanctuary: A Higher-Order and Leakage-Resilient Rekeying Scheme
Rei Ueno, Naofumi Homma, Akiko Inoue, and Kazuhiko Minematsu
This paper presents a provably secure, higher-order, and leakage-resilient (LR) rekeying scheme named LR Rekeying with Random oracle Repetition (LR4), along with a quantitative security evaluation methodology. Many existing LR primitives are based on a concept of leveled implementation, which still essentially require a leak-free sanctuary (i.e., differential power analysis (DPA)-resistant component(s)) for some parts. In addition, although several LR pseudorandom functions (PRFs) based on only bounded DPA-resistant components have been developed, their validity and effectiveness for rekeying usage still need to be determined. In contrast, LR4 is formally proven under a leakage model that captures the practical goal of side-channel attack (SCA) protection (e.g., masking with a practical order) and assumes no unbounded DPA-resistant sanctuary. This proof suggests that LR4 resists exponential invocations (up to the birthday bound of key size) without using any unbounded leak-free component, which is the first of its kind. Moreover, we present a quantitative SCA success rate evaluation methodology for LR4 that combines the bounded leakage models for LR cryptography and a state-of-the-art information-theoretical SCA evaluation method. We validate its soundness and effectiveness as a DPA countermeasure through a numerical evaluation; that is, the number of secure calls of a symmetric primitive increases exponentially by increasing a security parameter under practical conditions.
Last updated:  2023-12-04
Automatic Verification of Cryptographic Block Function Implementations with Logical Equivalence Checking
Li-Chang Lai, Jiaxiang Liu, Xiaomu Shi, Ming-Hsien Tsai, Bow-Yaw Wang, and Bo-Yin Yang
Given a fixed-size block, cryptographic block functions generate outputs by a sequence of bitwise operations. Block functions are widely used in the design of hash functions and stream ciphers. Their correct implementations hence are crucial to computer security. We propose a method that leverages logic equivalence checking to verify assembly implementations of cryptographic block functions. Logic equivalence checking is a well-established technique from hardware verification. Using our proposed method, we verify two dozen assembly implementations of ChaCha20, SHA-256, and SHA-3 block functions from OpenSSL and XKCP automatically. We also compare the performance of our technique with the conventional SMT-based technique in experiments.
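To convey the flavor of equivalence checking (a toy enumeration; the tool described in the paper works symbolically on the circuit representation rather than by testing): one builds a "miter" that XORs the outputs of a reference function and a candidate rewrite, and checks that it is identically zero over the input space.

    def rotl8(x, r):                         # reference 8-bit left rotation
        return ((x << r) | (x >> (8 - r))) & 0xFF

    def rotl8_opt(x, r):                     # candidate optimized rewrite
        return ((x * (1 << r)) & 0xFF) | (x >> (8 - r))

    def miter_is_zero(f, g, bits=8):
        return all(f(x, r) ^ g(x, r) == 0
                   for x in range(1 << bits) for r in range(1, bits))

    assert miter_is_zero(rotl8, rotl8_opt)   # the two implementations agree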
Last updated:  2023-12-04
Constructing Secure Multi-Party Computation with Identifiable Abort
Nicholas Brandt, Sven Maier, Tobias Müller, and Jörn Müller-Quade
Composable protocols for Multi-Party Computation that provide security with Identifiable Abort against a dishonest majority require some form of setup, e.g. correlated randomness among the parties. While this is a very useful model, it has the downside that the setup's randomness must be programmable, otherwise security becomes provably impossible. Since programmability is more realistic for smaller setups (in terms of number of parties), it is crucial to minimize the correlation complexity (degree of correlation) of the setup's randomness. We give a tight tradeoff between the correlation complexity \(\beta\) and the corruption threshold \(t\). Our bounds are strong in that \(\beta\)-wise correlation is sufficient for statistical security while \((\beta-1)\)-wise correlation is insufficient even for computational security. In particular, for strong security, i.e., \(t < n\), full \(n\)-wise correlation is necessary. However, for any constant fraction of honest parties, we provide a protocol with constant correlation complexity which tightens the gap between the theoretical model and the setup's implementation in the real world. In contrast, previous state-of-the-art protocols require full \(n\)-wise correlation regardless of \(t\).
Last updated:  2023-12-04
EstraNet: An Efficient Shift-Invariant Transformer Network for Side-Channel Analysis
Suvadeep Hajra, Siddhartha Chowdhury, and Debdeep Mukhopadhyay
Deep Learning (DL) based Side-Channel Analysis (SCA) has been extremely popular recently. DL-based SCA can easily break implementations protected by masking countermeasures. DL-based SCA has also been highly successful against implementations protected by various trace desynchronization-based countermeasures like random delay, clock jitter, and shuffling. Over the years, many DL models have been explored to perform SCA. Recently, Transformer Network (TN) based model has also been introduced for SCA. Though the previously introduced TN-based model is successful against implementations jointly protected by masking and random delay countermeasures, it is not scalable to long traces (having a length greater than a few thousand) due to its quadratic time and memory complexity. This work proposes a novel shift-invariant TN-based model with linear time and memory complexity. The contributions of the work are two-fold. First, we introduce a novel TN-based model called EstraNet for SCA. EstraNet has linear time and memory complexity in trace length, significantly improving over the previously proposed TN-based model’s quadratic time and memory cost. EstraNet is also shift-invariant, making it highly effective against countermeasures like random delay and clock jitter. Secondly, we evaluated EstraNet on three SCA datasets of masked implementations with random delay and clock jitter effects. Our experimental results show that EstraNet significantly outperforms several benchmark models, demonstrating up to an order of magnitude reduction in the number of attack traces required to reach guessing entropy 1.
Last updated:  2023-12-04
XorSHAP: Privacy-Preserving Explainable AI for Decision Tree Models
Dimitar Jetchev and Marius Vuille
Explainable AI (XAI) refers to the development of AI systems and machine learning models in a way that humans can understand, interpret and trust the predictions, decisions and outputs of these models. A common approach to explainability is feature importance, that is, determining which input features of the model have the most significant impact on the model prediction. Two major techniques for computing feature importance are LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations). While very generic, these methods are computationally expensive even in plaintext. Applying them in the privacy-preserving setting when part or all of the input data is private is therefore a major computational challenge. In this paper, we present $\texttt{XorSHAP}$ - the first practical privacy-preserving algorithm for computing Shapley values for decision tree ensemble models in the semi-honest Secure Multiparty Computation (SMPC) setting with full threshold. Our algorithm has complexity $O(T \widetilde{M} D 2^D)$, where $T$ is the number of decision trees in the ensemble, $D$ is the depth of the decision trees and $\widetilde{M}$ is the maximum of the number of features $M$ and $2^D$ (the number of leaf nodes of a tree), and scales to real-world datasets. Our implementation is based on Inpher's $\texttt{Manticore}$ framework and simultaneously computes (in the SMPC setting) the Shapley values for 100 samples for an ensemble of $T = 60$ trees of depth $D = 4$ and $M = 100$ features in just 7.5 minutes, meaning that the Shapley values for a single prediction are computed in just 4.5 seconds for the same decision tree ensemble model. Additionally, it is parallelization-friendly, thus, enabling future work on massive hardware acceleration with GPUs.
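As background on the quantity being computed (the standard Shapley-value definition, evaluated here in the clear by brute force; XorSHAP itself evaluates tree ensembles inside SMPC): feature $i$'s Shapley value weights each marginal contribution $v(S\cup\{i\}) - v(S)$ by $|S|!(n-|S|-1)!/n!$.

    from itertools import combinations
    from math import factorial

    def shapley(n, v):
        phi = [0.0] * n
        for i in range(n):
            others = [j for j in range(n) if j != i]
            for size in range(n):
                for S in combinations(others, size):
                    w = factorial(size) * factorial(n - size - 1) / factorial(n)
                    phi[i] += w * (v(frozenset(S) | {i}) - v(frozenset(S)))
        return phi

    # Toy value function: features 0 and 1 only pay off together.
    v = lambda S: 10.0 if {0, 1} <= S else 0.0
    print(shapley(3, v))        # [5.0, 5.0, 0.0]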
Last updated:  2023-12-04
Efficient Issuer-Hiding Authentication, Application to Anonymous Credential
Olivier Sanders and Jacques Traoré
Anonymous credentials are cryptographic mechanisms enabling users to authenticate themselves with a fine-grained control on the information they leak in the process. They have been the topic of countless papers which have improved the performance of such mechanisms or proposed new schemes able to prove ever-more complex statements about the attributes certified by those credentials. However, whereas these papers have studied in depth the problem of the information leaked by the credential and/or the attributes, almost all of them have surprisingly overlooked the information one may infer from the knowledge of the credential issuer. In this paper we address this problem by showing how one can efficiently hide the actual issuer of a credential within a set of potential issuers. The novelty of our work is that we do not resort to zero-knowledge proofs but instead show how one can tweak Pointcheval-Sanders signatures to achieve this issuer-hiding property at a very low cost. This results in an efficient anonymous credential system that indeed provides complete control over the information leaked in the authentication process. Our construction is moreover modular and can thus fit a wide spectrum of applications, notably Self-Sovereign Identity (SSI) systems.
Last updated:  2023-12-04
Batch Proofs are Statistically Hiding
Nir Bitansky, Chethan Kamath, Omer Paneth, Ron Rothblum, and Prashant Nalini Vasudevan
Batch proofs are proof systems that convince a verifier that $x_1,\dots,x_t \in \mathcal{L}$, for some $\mathsf{NP}$ language $\mathcal{L}$, with communication that is much shorter than sending the $t$ witnesses. In the case of *statistical soundness* (where the cheating prover is unbounded but the honest prover is efficient given the witnesses), interactive batch proofs are known for $\mathsf{UP}$, the class of *unique-witness* $\mathsf{NP}$ languages. In the case of computational soundness (where both honest and dishonest provers are efficient), *non-interactive* solutions are now known for all of $\mathsf{NP}$, assuming standard lattice or group assumptions. We exhibit the first negative results regarding the existence of batch proofs and arguments: - Statistically sound batch proofs for $\mathcal{L}$ imply that $\mathcal{L}$ has a statistically witness indistinguishable ($\mathsf{SWI}$) proof, with inverse polynomial $\mathsf{SWI}$ error, and a non-uniform honest prover. The implication is unconditional for obtaining honest-verifier $\mathsf{SWI}$ or for obtaining full-fledged $\mathsf{SWI}$ from public-coin protocols, whereas for private-coin protocols full-fledged $\mathsf{SWI}$ is obtained assuming one-way functions. This poses a barrier for achieving batch proofs beyond $\mathsf{UP}$ (where witness indistinguishability is trivial). In particular, assuming that $\mathsf{NP}$ does not have $\mathsf{SWI}$ proofs, batch proofs for all of $\mathsf{NP}$ do not exist. - Computationally sound batch proofs (a.k.a. batch arguments or $\mathsf{BARG}$s) for $\mathsf{NP}$, together with one-way functions, imply statistical zero-knowledge ($\mathsf{SZK}$) arguments for $\mathsf{NP}$ with roughly the same number of rounds, an inverse polynomial zero-knowledge error, and non-uniform honest prover. Thus, constant-round interactive $\mathsf{BARG}$s from one-way functions would yield constant-round $\mathsf{SZK}$ arguments from one-way functions. This would be surprising as $\mathsf{SZK}$ arguments are currently only known assuming constant-round statistically-hiding commitments. We further prove new positive implications of non-interactive batch arguments to non-interactive zero knowledge arguments (with explicit uniform prover and verifier): - Non-interactive $\mathsf{BARG}$s for $\mathsf{NP}$, together with one-way functions, imply non-interactive computational zero-knowledge arguments for $\mathsf{NP}$. Assuming also dual-mode commitments, the zero knowledge can be made statistical. Both our negative and positive results stem from a new framework showing how to transform a batch protocol for a language $\mathcal{L}$ into an $\mathsf{SWI}$ protocol for $\mathcal{L}$.
Last updated:  2023-12-04
$\textsf{Asterisk}$: Super-fast MPC with a Friend
Banashri Karmakar, Nishat Koti, Arpita Patra, Sikhar Patranabis, Protik Paul, and Divya Ravi
Secure multiparty computation (MPC) enables privacy-preserving collaborative computation over sensitive data held by multiple mutually distrusting parties. Unfortunately, in the most natural setting where a majority of the parties are maliciously corrupt (also called the $\textit{dishonest majority}$ setting), traditional MPC protocols incur high overheads and offer weaker security guarantees than are desirable for practical applications. In this paper, we explore the possibility of circumventing these drawbacks and achieving practically efficient dishonest majority MPC protocols with strong security guarantees by assuming an additional semi-honest, non-colluding helper party $\mathrm{HP}$. We believe that this is a more realistic alternative to assuming an honest majority, since many real-world applications of MPC involving potentially large numbers of parties (such as dark pools) are typically enabled by a central governing entity that can be modeled as the $\mathrm{HP}$. In the above model, we are the first to design, implement and benchmark a practically-efficient and general multi-party framework, $\textsf{Asterisk}$. Our framework requires invoking $\mathrm{HP}$ only a constant number of times, achieves the strong security guarantee of $\textit{fairness}$ (either all parties learn the output or none do), scales to hundreds of parties, outperforms all existing dishonest majority MPC protocols, and is, in fact, competitive with state-of-the-art honest majority MPC protocols. Our experiments show that $\textsf{Asterisk}$ achieves $228-288\times$ speedup in preprocessing as compared to the best dishonest majority MPC protocol. With respect to online time, $\textsf{Asterisk}$ supports $100$-party evaluation of a circuit with $10^6$ multiplication gates in approximately $20$ seconds. We also implement and benchmark practically efficient and highly scalable dark pool instances using $\textsf{Asterisk}$. The corresponding run times showcase the effectiveness of $\textsf{Asterisk}$ in enabling efficient realizations of real-world privacy-preserving applications with strong security guarantees.
Last updated:  2023-12-04
New Public-Key Cryptosystem Blueprints Using Matrix Products in $\mathbb F_p$
Remi Geraud-Stewart and David Naccache
Given a set of matrices $\mathbf{A} := \{A_0, \dotsc, A_{k-1}\}$, and a matrix $M$ guaranteed to be the product of some ordered subset $\mathbf{L}\subset\mathbf{A}$, can $\mathbf{L}$ be efficiently recovered? We begin by observing that the answer is positive under some assumptions on $\mathbf{A}$. Noting that appropriate transformations seem to make $\mathbf{L}$'s recovery difficult, we provide the blueprint of two new public-key cryptosystems based upon this problem. We term those constructions "blueprints" because, given their novelty, we are still uncertain of their exact security. Yet, we daringly conjecture that even if attacks are found on the proposed constructions, these attacks could be thwarted by adjustments in the key generation, key size or the encryption mechanism, thereby resulting in the long run in fully-fledged public-key cryptosystems that do not seem to belong to any of the mainstream public-key encryption paradigms known to date.
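To make the underlying problem concrete (a toy brute-force sketch of the problem statement, with parameters of our choosing; the paper's point is precisely that suitable transformations should push this search out of reach):

    from itertools import combinations, permutations
    import random

    p, dim, k = 101, 3, 4

    def matmul(A, B):
        return [[sum(A[i][t] * B[t][j] for t in range(dim)) % p
                 for j in range(dim)] for i in range(dim)]

    def product(mats):
        out = [[int(i == j) for j in range(dim)] for i in range(dim)]  # identity
        for A in mats:
            out = matmul(out, A)
        return out

    A = [[[random.randrange(p) for _ in range(dim)] for _ in range(dim)]
         for _ in range(k)]
    secret = (2, 0, 3)                      # hidden ordered subset
    M = product([A[i] for i in secret])

    found = next(perm                       # exhaustive search over ordered subsets
                 for r in range(1, k + 1)
                 for sub in combinations(range(k), r)
                 for perm in permutations(sub)
                 if product([A[i] for i in perm]) == M)
    print(found)                            # (2, 0, 3), barring rare collisions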
Last updated:  2023-12-04
A Novel Power-Sum PRG with Applications to Lattice-Based zkSNARKs
Charanjit S Jutla, Eamonn W. Postlethwaite, and Arnab Roy
zkSNARK is a cryptographic primitive that allows a prover to prove to a resource-constrained verifier, that it has indeed performed a specified non-deterministic computation correctly, while hiding private witnesses. In this work we focus on lattice based zkSNARK, as this serves two important design goals. Firstly, we get post-quantum zkSNARK schemes with $O(\log (\mbox{Circuit size}))$ sized proofs (without random oracles) and secondly, the easy verifier circuit allows further bootstrapping by arbitrary (zk)SNARK schemes that offer additional or complementary properties. However, this goal comes with considerable challenges. The only known lattice-based bilinear maps are obtained using multi-linear maps of Garg, Gentry, and Halevi 2013 (GGH13), which have undergone considerable cryptanalytic attacks, in particular annihilation attacks. In this work, we propose a (level-2) GGH13-encoding based zkSNARK which we show to be secure in the weak-multilinear map model of Miles-Sahai-Zhandry assuming a novel pseudo-random generator (PRG). We argue that the new PRG assumption is plausible based on the well-studied Newton's identity on power-sum polynomials, as well as an analysis of the hardness of computing Gröbner bases for these polynomials. The particular PRG is designed for efficient implementation of the zkSNARK. Technically, we leverage the 2-linear instantiation of the GGH13 graded encoding scheme to provide us with an analogue of bilinear maps and adapt the Groth16 (Groth, Eurocrypt 2016) protocol, although with considerable technical advances in design and proof. The protocol is non-interactive in the CRS model.
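For reference, the classical identity invoked here (standard background, stated for $n$ variables with power sums $p_k = \sum_i x_i^k$ and elementary symmetric polynomials $e_k$) is

    $p_k - e_1 p_{k-1} + e_2 p_{k-2} - \cdots + (-1)^{k-1} e_{k-1} p_1 + (-1)^k \, k \, e_k = 0$, for $1 \le k \le n$,

which ties the power-sum evaluations underlying the proposed PRG to symmetric-polynomial systems whose Gröbner-basis hardness the authors analyze.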
Last updated:  2023-12-04
Privacy-Preserving Cross-Facility Early Warning for Unknown Epidemics
Shiyu Li, Yuan Zhang, Yaqing Song, Fan Wu, Feng Lyu, Kan Yang, and Qiang Tang
Syndrome-based early epidemic warning plays a vital role in preventing and controlling unknown epidemic outbreaks. It monitors the frequency of each syndrome, issues a warning if some frequency is aberrant, identifies potential epidemic outbreaks, and alerts governments as early as possible. Existing systems adopt a cloud-assisted paradigm to achieve cross-facility statistics on the syndrome frequencies. However, in these systems, all symptom data would be directly leaked to the cloud, which causes critical security and privacy issues. In this paper, we first analyze syndrome-based early epidemic warning systems and formalize two security notions, i.e., symptom confidentiality and frequency confidentiality, according to the inherent security requirements. We propose EpiOracle, a cross-facility early warning scheme for unknown epidemics. EpiOracle ensures that the contents and frequencies of syndromes will not be leaked to any unrelated parties; moreover, our construction uses only a symmetric-key encryption algorithm and cryptographic hash functions (e.g., [CBC]AES and SHA-3), making it highly efficient. We formally prove the security of EpiOracle in the random oracle model. We also implement an EpiOracle prototype and evaluate its performance using a set of real-world symptom lists. The evaluation results demonstrate its practical efficiency.
Last updated:  2023-12-04
A Simple and Efficient Framework of Proof Systems for NP
Yuyu Wang, Chuanjie Su, Jiaxin Pan, and Yu Chen
In this work, we propose a simple framework of constructing efficient non-interactive zero-knowledge proof (NIZK) systems for all NP. Compared to the state-of-the-art construction by Groth, Ostrovsky, and Sahai (J. ACM, 2012), our resulting NIZK system reduces the proof size and proving and verification cost without any trade-off, i.e., neither increasing computation cost, CRS size nor resorting to stronger assumptions. Furthermore, we extend our framework to construct a batch argument (BARG) system for all NP. Our construction remarkably improves the efficiency of BARG by Waters and Wu (Crypto 2022) without any trade-off.
Last updated:  2023-12-03
Adding more parallelism to the AEGIS authenticated encryption algorithms
Frank Denis
While the round function of the AEGIS authenticated encryption algorithms is highly parallelizable, their mode of operation is not. We introduce two new modes to overcome that limitation: AEGIS-128X and AEGIS-256X, that require minimal changes to existing implementations and retain the security properties of AEGIS-128L and AEGIS-256.
Last updated:  2023-12-03
ID-CAKE: Identity-based Cluster Authentication and Key Exchange Scheme for Message Broadcasting and Batch Verification in VANETs
Apurva K Vangujar, Alia Umrani, and Paolo Palmieri
Vehicle Ad Hoc Networks (VANETs) play a pivotal role in intelligent transportation systems, offering dynamic communication between vehicles, Road Side Units (RSUs), and the internet. Given the open-access nature of VANETs and the associated threats, such as impersonation and privacy violations, ensuring the security of these communications is of utmost importance. This paper presents the Identity-based Cluster Authentication and Key Exchange (ID-CAKE) scheme, a new approach to address security challenges in VANETs. The ID-CAKE scheme integrates the Cluster Consensus Identity-based Identification (CCIBI) with Zero-Knowledge (ZK) proofs and the Identity-based Multireceiver Key Exchange Mechanism (ID-mKEM) signature scheme. This integration provides robust authorization via CCIBI, while ID-mKEM signatures ensure message integrity and guarantee both non-repudiation and unforgeability through mKEM for message broadcasting. The scheme employs a novel three-party ZK proof for batch verification using mKEM, which significantly reduces computational burdens. Our scheme also ensures anonymity and unlinkability by introducing pseudo-identities to all users in the cluster. The rigorous security proofs provided confirm the resilience of the ID-CAKE scheme against potential attacks in the different scenarios considered, based on the hardness of the elliptic curve computational Diffie-Hellman problem under the random oracle model. The ID-CAKE scheme establishes a robust security framework for VANETs, and its introduction highlights potential pathways for future exploration in the realm of VANET security.
Last updated:  2023-12-03
Optimizing AES Threshold Implementation under the Glitch-Extended Probing Model
Fu Yao, Hua Chen, Yongzhuang Wei, Enes Pasalic, Feng Zhou, and Limin Fan
Threshold Implementation (TI) is a well-known Boolean masking technique that provides provable security against side-channel attacks. In the presence of glitches, the probing model was replaced by the so-called glitch-extended probing model which specifies a broader security framework. At CHES 2021, Shahmirzadi et al. introduced a general search method for finding first-order 2-share TI schemes without fresh randomness (under the presence of glitches) for a given encryption algorithm. Although it handles single-output Boolean functions well, this method has to store output shares in registers when extended to vector Boolean functions, which results in more chip area and increased latency. Therefore, the design of TI schemes that have low implementation cost under the glitch-extended probing model appears to be an important research challenge. In this paper, we propose an approach to design first-order glitch-extended probing secure TI schemes when quadratic functions are employed in the substitution layer. This method only requires a small amount of fresh random bits and a single clock cycle for its implementation. In particular, the random bits in our approach are reusable and compatible with the changing of the guards technique. Our dedicated TI scheme for the AES cipher gives a 20.23% smaller implementation area and 4.2% faster encryption compared to the TI scheme of AES (without fresh randomness) proposed at CHES 2021. Additionally, we propose a parallel implementation of two S-boxes that further reduces latency (about 39.83%) at the expense of increasing the chip area by 9%. We have positively confirmed the security of AES under the glitch-extended probing model using the verification tool SILVER and the side-channel leakage assessment method TVLA.
Last updated:  2023-12-03
Demystifying DeFi MEV Activities in Flashbots Bundle
Zihao Li, Jianfeng Li, Zheyuan He, Xiapu Luo, Ting Wang, Xiaoze Ni, Wenwu Yang, Xi Chen, and Ting Chen
Decentralized Finance, mushrooming in permissionless blockchains, has attracted a recent surge in popularity. Due to the transparency of permissionless blockchains, opportunistic traders can compete to earn revenue by extracting Miner Extractable Value (MEV), which undermines both the consensus security and efficiency of blockchain systems. The Flashbots bundle mechanism further aggravates the MEV competition because it empowers opportunistic traders with the capability of designing more sophisticated MEV extraction. In this paper, we conduct the first systematic study on DeFi MEV activities in Flashbots bundle by developing ActLifter, a novel automated tool for accurately identifying DeFi actions in transactions of each bundle, and ActCluster, a new approach that leverages iterative clustering to facilitate us to discover known/unknown DeFi MEV activities. Extensive experimental results show that ActLifter can achieve nearly 100% precision and recall in DeFi action identification, significantly outperforming state-of-the-art techniques. Moreover, with the help of ActCluster, we obtain many new observations and discover 17 new kinds of DeFi MEV activities, which occur in 53.12% of bundles but have not been reported in existing studies.
Last updated:  2023-12-03
A note on quantum approximate optimization algorithm
Zhengjun Cao
The general quantum approximate optimization algorithm (QAOA) produces approximate solutions for combinatorial optimization problems. The algorithm depends on a positive integer $p$, and the quality of approximation improves as $p$ is increased. In this note, we raise some questions about the general QAOA. We also find that the recursive QAOA for the MaxCut problem is flawed because all quantum gates involved in the algorithm are single-qubit gates: no entangling gate is used, and consequently quantum computing power cannot be certified for the problem.
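For context, the standard QAOA MaxCut layer is inherently entangling; the following display (textbook material, not drawn from the note itself) recalls why. For a graph $G=(V,E)$, the cost Hamiltonian and one QAOA layer are
\[
  C \;=\; \sum_{(i,j)\in E} \tfrac{1}{2}\bigl(1 - Z_i Z_j\bigr),
  \qquad
  U(\gamma,\beta) \;=\; \Bigl(\prod_{i\in V} e^{-i\beta X_i}\Bigr)\,
                        \Bigl(\prod_{(i,j)\in E} e^{-i\gamma\,\frac{1}{2}(1 - Z_i Z_j)}\Bigr),
\]
so the cost unitary is a product of two-qubit $ZZ$ rotations, and a circuit built exclusively from single-qubit gates cannot implement $e^{-i\gamma C}$ for any graph with at least one edge.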
Last updated:  2023-12-03
A Novel CCA Attack for NTRU+ KEM
Joohee Lee, Minju Lee, Hansol Ryu, and Jaehui Park
The KpqC competition, which aims to standardize Post-Quantum Cryptography (PQC) in the Republic of Korea, began in 2022. Among the 16 submissions to the KpqC competition, the lattice-based schemes exhibit the most promising and balanced performance. In this paper, we propose the first effective classical CCA attack to recover the transmitted session key of NTRU+, one of the lattice-based Key Encapsulation Mechanisms (KEMs) proposed in the KpqC competition. With the proposed attacks, we show that all the suggested parameters of NTRU+ fail to achieve the claimed security. We also suggest a way to modify the NTRU+ scheme to defend against our attack.
Last updated:  2023-12-02
Quantifying risks in cryptographic selection processes
Daniel J. Bernstein
There appears to be a widespread belief that some processes of selecting cryptosystems are less risky than other processes. As a case study of quantifying the difference in risks, this paper compares the currently-known-failure rates of three large groups of cryptosystems: (1) the round-1 submissions to the NIST Post-Quantum Cryptography Standardization Project, (2) the round-1 submissions not broken by the end of round 1, and (3) the round-1 submissions selected by NIST for round 2 of the same project. These groups of cryptosystems turn out to have currently-known-failure rates that are strikingly high, and that include statistically significant differences across the groups, not matching the pattern of differences that one might expect. Readers are cautioned that the actual failure rates could be much higher than the currently-known-failure rates.
Last updated:  2023-12-02
Report on evaluation of KpqC candidates
Jolijn Cottaar, Kathrin Hövelmanns, Andreas Hülsing, Tanja Lange, Mohammad Mahzoun, Alex Pellegrini, Alberto Ravagnani, Sven Schäge, Monika Trimoska, and Benne de Weger
This report analyzes the 16 submissions to the Korean post-quantum cryptography (KpqC) competition.
Last updated:  2023-12-02
There Is Always a Way Out! Destruction-Resistant Key Management: Formal Definition and Practical Instantiation
Yuan Zhang, Yaqing Song, Shiyu Li, Weijia Li, Zeqi Lai, and Qiang Tang
A central advantage of deploying cryptosystems is that the security of large, highly sensitive data sets can be reduced to the security of a very small key. The most popular way to manage keys is to use a $(t,n)$-threshold secret sharing scheme: a user splits her/his key into $n$ shares, distributes them among $n$ key servers, and can recover the key with the aid of any $t$ of them. However, this is vulnerable to device destruction: if all key servers and the user's devices break down, the key is permanently lost. We propose a $\mathrm{\underline{D}}$estruction-$\mathrm{\underline{R}}$esistant $\mathrm{\underline{K}}$ey $\mathrm{\underline{M}}$anagement scheme, dubbed DRKM, which ensures key availability even if destruction occurs. In DRKM, a user utilizes her/his $n^{*}$ personal identification factors (PIFs) to derive a cryptographic key, but can retrieve the key using any $t^{*}$ of the $n^{*}$ PIFs. As most PIFs can be retrieved by the user $\textit{per se}$ without requiring $\textit{stateful}$ devices, destruction resistance is achieved. With the integration of a $(t,n)$-threshold secret sharing scheme, DRKM also provides $\textit{portable}$ key access for the user (with the aid of any $t$ of $n$ key servers) before destruction occurs. DRKM can be utilized to construct a destruction-resistant cryptosystem (DRC) in tandem with any backup system. We formally prove the security of DRKM, implement a DRKM prototype, and conduct a comprehensive performance evaluation to demonstrate its high efficiency. We further utilize Cramer's Rule to reduce the buffer required to retrieve a key from 25 MB to 40 KB (for 256-bit security).
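As background, here is a minimal Python sketch of the $(t,n)$-threshold secret sharing primitive that DRKM composes with: textbook Shamir sharing over a prime field, recovering the secret by Lagrange interpolation at zero. The field, parameters, and API are illustrative; this is the generic primitive, not DRKM's PIF-based key derivation.

# Textbook Shamir (t, n)-threshold secret sharing over a prime field.
import secrets

P = 2**521 - 1  # a Mersenne prime comfortably above 256-bit secrets

def split(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any t of them recover it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):  # evaluate the random degree-(t-1) polynomial at x
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 from any t shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

if __name__ == "__main__":
    key = secrets.randbits(256)
    shares = split(key, t=3, n=5)
    assert recover(shares[:3]) == key and recover(shares[2:]) == key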
Last updated:  2023-12-02
Rectangular Attack on VOX
Gilles Macario-Rat, Jacques Patarin, Benoit Cogliati, Jean-Charles Faugère, Pierre-Alain Fouque, Louis Gouin, Robin Larrieu, and Brice Minaud
VOX was submitted in June 2023 to Round 1 of the NIST Post-Quantum Additional Signature Competition. VOX is a strengthened variant of UOV which uses the Quotient-Ring (QR) setting to reduce the public-key size. At the end of August 2023, Furue and Ikematsu posted a note on the NIST mailing-list indicating that the parameters of VOX can be attacked efficiently using the rectangular attack in the QR setting. In this note, we explain the attack in the specific case of VOX, detail its complexity, and show that, as Furue and Ikematsu indicated, the attack can be completely avoided by adding one more constraint on the parameter selection. Finally, we show that this constraint does not increase the public-key or signature sizes.
Last updated:  2023-12-01
Reduction from sparse LPN to LPN, Dual Attack 3.0
Kévin Carrier, Thomas Debris-Alazard, Charles Meyer-Hilfiger, and Jean-Pierre Tillich
The security of code-based cryptography relies primarily on the hardness of decoding generic linear codes. Until very recently, all the best algorithms for solving the decoding problem were information set decoders ($\mathsf{ISD}$). However, a new algorithm called RLPN-decoding, which relies on a completely different approach, was recently introduced, and it has been shown that RLPN significantly outperforms $\mathsf{ISD}$ decoders for a rather large range of rates. This RLPN decoder relies on two ingredients: first, reducing decoding to some underlying LPN problem, and then computing efficiently many parity-checks of small weight when restricted to some positions. We revisit RLPN-decoding by noticing that, in this algorithm, decoding is in fact reduced to a sparse-LPN problem, namely one with a secret whose Hamming weight is small. Our new approach consists in making an additional reduction from sparse-LPN to plain-LPN, with a coding approach inspired by $\mathsf{coded}$-$\mathsf{BKW}$. It significantly outperforms both $\mathsf{ISD}$ and RLPN for code rates smaller than $0.42$. This algorithm can be viewed as the code-based cryptography cousin of recent dual attacks in lattice-based cryptography. We depart completely from the traditional analysis of this kind of algorithm, which uses a certain number of independence assumptions that have recently been strongly questioned in the lattice domain. We give instead a formula for the LPN noise relying on duality, which allows us to analyze the behavior of the algorithm by relying only on the analysis of a certain weight distribution. Using only a minimal assumption whose validity has been verified experimentally, we are able to justify the correctness of our algorithm. This key tool, namely the duality formula, can be readily adapted to the lattice setting and is shown to give a simple explanation for some phenomena observed on dual attacks in lattices in [DP23].
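To fix notation (standard definitions, spelled out here for readability rather than quoted from the paper): an LPN sample with secret $\mathbf{s} \in \mathbb{F}_2^k$ and noise rate $\tau$ has the form
\[
  (\mathbf{a},\ \langle \mathbf{a}, \mathbf{s}\rangle + e)
  \in \mathbb{F}_2^{k} \times \mathbb{F}_2,
  \qquad \mathbf{a} \text{ uniform}, \quad e \sim \mathrm{Ber}(\tau),
  \qquad \text{sparse-LPN: } \mathrm{wt}(\mathbf{s}) \ll k,
\]
where the sparse variant additionally requires the secret to have small Hamming weight. The algorithm above first reduces decoding to this sparse variant, then reduces sparse-LPN back to plain LPN by encoding the sparse secret in a coded-BKW-like fashion.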
Last updated:  2023-12-01
Arke: Scalable and Byzantine Fault Tolerant Privacy-Preserving Contact Discovery
Nicolas Mohnblatt, Alberto Sonnino, Kobi Gurkan, and Philipp Jovanovic
Contact discovery is a crucial component of social applications, facilitating interactions between registered contacts. This work introduces Arke, a novel approach to contact discovery that addresses the limitations of existing solutions in terms of privacy, scalability, and reliance on trusted third parties. Arke ensures the unlinkability of user interactions, mitigates enumeration attacks, and operates without single points of failure or trust. Notably, Arke is the first contact discovery system whose performance is independent of the total number of users and the first that can operate in a Byzantine setting. It achieves its privacy goals through an unlinkable handshake mechanism built on top of an identity-based non-interactive key exchange. By leveraging a custom distributed architecture, Arke forgoes the expense of consensus to achieve scalability while maintaining consistency in a Byzantine fault tolerant environment. Performance evaluations demonstrate that Arke can support enough throughput to operate at a planetary scale while maintaining sub-second latencies in a large geo-distributed setting.
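For context, identity-based non-interactive key exchange can be instantiated with the classic Sakai-Ohgishi-Kasahara construction; the display below recalls that scheme as one standard instantiation (not necessarily the exact one Arke deploys). With a pairing $e: \mathbb{G} \times \mathbb{G} \to \mathbb{G}_T$, a hash $H: \{0,1\}^* \to \mathbb{G}$, and a master secret $s$, each identity receives $\mathsf{sk}_{ID} = H(ID)^{s}$, and any two identities derive the same key without interacting:
\[
  K_{AB} \;=\; e\bigl(\mathsf{sk}_A,\, H(ID_B)\bigr)
         \;=\; e\bigl(H(ID_A),\, H(ID_B)\bigr)^{s}
         \;=\; e\bigl(H(ID_A),\, \mathsf{sk}_B\bigr).
\]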
Last updated:  2023-12-01
DY Fuzzing: Formal Dolev-Yao Models Meet Cryptographic Protocol Fuzz Testing
Max Ammann, Lucca Hirschi, and Steve Kremer
Critical and widely used cryptographic protocols have repeatedly been found to contain flaws in their design and their implementation. A prominent class of such vulnerabilities is logical attacks, i.e., attacks that exploit flawed protocol logic. Automated formal verification methods, based on the Dolev-Yao (DY) attacker, formally define and excel at finding such flaws, but operate only on abstract specification models. Fully automated verification of existing protocol implementations is today still out of reach. This leaves open whether such implementations are secure. Unfortunately, this blind spot hides numerous attacks, such as recent logical attacks on widely used TLS implementations introduced by implementation bugs. We address this problem by proposing a novel and effective technique that we call DY model-guided fuzzing, which precludes logical attacks against protocol implementations. The main idea is to consider as possible test cases the set of abstract DY executions of the DY attacker, and use a novel mutation-based fuzzer to explore this set. The DY fuzzer concretizes each abstract execution to test it on the program under test. This approach enables reasoning at a more structural and security-related level of messages represented as formal terms (e.g., decrypt a message and re-encrypt it with a different key), as opposed to random bit-level modifications that are much less likely to produce relevant logical adversarial behaviors. We implement a full-fledged and modular DY protocol fuzzer. We demonstrate its effectiveness by fuzzing three popular TLS implementations, resulting in the discovery of four novel vulnerabilities.
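To make the contrast with bit-level fuzzing concrete, here is a toy Python sketch of a term-level mutation in the spirit described above. The term types and the re-encryption mutation are illustrative inventions for exposition, not the paper's fuzzer or its API.

# Toy illustration: protocol messages as formal terms, and a DY-style mutation
# that rewrites term structure (re-encrypt a payload under a different key)
# instead of flipping bits. Requires Python 3.10+ for match statements.
from dataclasses import dataclass

@dataclass(frozen=True)
class Key:
    name: str

@dataclass(frozen=True)
class Atom:
    value: str

@dataclass(frozen=True)
class Enc:
    payload: "Term"
    key: Key

Term = Atom | Enc

def reencrypt_under(term: Term, old: Key, new: Key) -> Term:
    """Mutation: wherever a subterm is encrypted under `old` (a key the DY
    attacker knows), decrypt it and re-encrypt it under `new`."""
    match term:
        case Enc(payload, key) if key == old:
            return Enc(reencrypt_under(payload, old, new), new)
        case Enc(payload, key):
            return Enc(reencrypt_under(payload, old, new), key)
        case _:
            return term

if __name__ == "__main__":
    k1, k2 = Key("k_session"), Key("k_attacker")
    msg = Enc(Atom("finished"), k1)
    print(reencrypt_under(msg, k1, k2))  # payload now encrypted under k_attacker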
Last updated:  2023-12-01
Quantum Security of the UMTS-AKA Protocol and its Primitives, Milenage and TUAK
Paul Frixons, Sébastien Canard, and Loïc Ferreira
The existence of a quantum computer is one of the most significant threats cryptography has ever faced. However, it seems that real-world protocols have received little attention so far with respect to their future security. Indeed, merely relying upon post-quantum primitives may not suffice for a security protocol to be resistant in a fully quantum world. In this paper, we consider the fundamental UMTS key agreement, used in 3G but also in 4G (LTE) and in the recently deployed 5G technology. We analyze the protocol in a quantum setting, with quantum communications (allowing superposition queries by the involved parties), and where quantum computation is granted to the adversary. We prove that, assuming the underlying symmetric-key primitive is quantum-secure, the UMTS key agreement is also quantum-secure. We also give a quantum security analysis of the underlying primitives, namely Milenage and TUAK. To the best of our knowledge, this paper provides the first rigorous proof of the UMTS key agreement in a strong quantum setting. Our result shows that in the quantum world to come, the UMTS technology remains a valid scheme for securing the communications of billions of users.
Last updated:  2023-12-01
Learning with Errors over Group Rings Constructed by Semi-direct Product
Jiaqi Liu and Fang-Wei Fu
The Learning with Errors (LWE) problem has been widely utilized as a foundation for numerous cryptographic tools over the years. In this study, we focus on an algebraic variant of the LWE problem called Group ring LWE (GR-LWE). We select group rings (or their direct summands) that underlie specific families of finite groups constructed by taking the semi-direct product of two cyclic groups. Unlike the Ring-LWE problem of Lyubashevsky, Peikert, and Regev (Eurocrypt 2010), the multiplication operation in the group rings considered here is non-commutative. As an extension of Ring-LWE, it maintains computational hardness and can potentially be applied in many cryptographic scenarios. In this paper, we present two polynomial-time quantum reductions. First, we provide a quantum reduction from the worst-case shortest independent vectors problem (SIVP) in ideal lattices with polynomial approximation factor to the search version of GR-LWE; this reduction requires that the underlying group ring possess certain mild properties. Second, we present another quantum reduction for two types of group rings, where the worst-case SIVP problem is directly reduced to the (average-case) decision GR-LWE problem. The pseudorandomness of GR-LWE samples guaranteed by this reduction can consequently be leveraged to construct semantically secure public-key cryptosystems.
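In symbols (notation reconstructed from the description above, following the usual LWE template): for a finite group $G = \mathbb{Z}_m \rtimes \mathbb{Z}_k$ one works in the group ring $R_q = (\mathbb{Z}/q\mathbb{Z})[G]$, whose multiplication is non-commutative, and a GR-LWE sample has the familiar shape
\[
  (a,\; b = a \cdot s + e) \in R_q \times R_q,
\]
with $a$ uniform, $s$ the secret, and $e$ a small error; search-GR-LWE asks to recover $s$, while decision-GR-LWE asks to distinguish $b$ from uniform.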
Last updated:  2023-12-01
Accurate Score Prediction for Dual-Sieve Attacks
Léo Ducas and Ludo N. Pulles
The Dual-Sieve Attack on Learning with Errors (LWE), or more generally on Bounded Distance Decoding (BDD), has seen many improvements in recent years, and ultimately led to claims that it outperforms the primal attack against certain lattice-based schemes in the PQC standardization process organised by NIST. However, the work of Ducas--Pulles (Crypto '23) revealed that the so-called "Independence Heuristic", which all recent dual attacks used, leads to wrong predictions in a contradictory regime that is relevant for the security of cryptoschemes. More specifically, the stated distributions of scores for the actual solution and for incorrect candidates were both incorrect. In this work, we propose to use the weaker heuristic that the output vectors of a lattice sieve are uniformly distributed in a ball. Under this heuristic, we give an analysis of the score distribution in the case of an error of fixed length. Integrating over this length, we extend this analysis to any radially distributed error, in particular the Gaussian, as a fix for the score distribution of the actual solution. This approach also provides a prediction for the score of incorrect candidates, using a ball as an approximation of the Voronoi cell of a lattice. We compare the predicted score distributions to extensive experiments, and observe them to be qualitatively and quantitatively quite accurate. This constitutes a first step towards fixing the analysis of the dual-sieve attack: we can now accurately estimate false positives and false negatives. Now that the analysis is fixed, one may consider how to fix the attack itself, namely by exploring opportunities to mitigate the large number of false positives.
Last updated:  2023-12-01
Horst Meets Fluid-SPN: Griffin for Zero-Knowledge Applications
Lorenzo Grassi, Yonglin Hao, Christian Rechberger, Markus Schofnegger, Roman Walch, and Qingju Wang
Zero-knowledge (ZK) applications form a large group of use cases in modern cryptography, and recently gained in popularity due to novel proof systems. For many of these applications, cryptographic hash functions are used as the main building blocks, and they often dominate the overall performance and cost of these approaches. Therefore, in recent years several new hash functions were built in order to reduce the cost in these scenarios, including Poseidon and Rescue among others. These hash functions often look very different from more classical designs such as AES or SHA-2. For example, they work natively over prime fields rather than binary ones. At the same time, Poseidon and Rescue, for example, share some common features, such as being SPN schemes and instantiating the nonlinear layer with invertible power maps. While this allows the designers to provide simple and strong arguments for establishing their security, it also introduces crucial limitations in the design, which may affect the performance in the target applications. In this paper, we propose the Horst construction, in which the addition in a Feistel scheme $(x, y) \mapsto (y + F(x), x)$ is extended via a multiplication, i.e., $(x, y) \mapsto (y \cdot G(x) + F(x), x)$. By carefully analyzing the performance metrics in SNARK and STARK protocols, we show how to combine an expanding Horst scheme with a Rescue-like SPN scheme in order to provide security and better efficiency in the target applications. We provide an extensive security analysis for our new design Griffin and a comparison with all current competitors.
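A minimal Python sketch of one Horst round over a prime field, as defined above, together with its inverse to show invertibility. F and G here are arbitrary placeholder polynomials, not Griffin's actual instantiation; a real design must choose G with no roots in the field so that G(x) is always invertible.

# One Horst round (x, y) -> (y * G(x) + F(x), x) over F_p, plus its inverse.
p = 2**64 - 2**32 + 1  # the "Goldilocks" prime common in STARK-friendly designs

def F(x: int) -> int:
    return (x**3 + 7) % p          # placeholder nonlinear map

def G(x: int) -> int:
    return (x**2 + x + 5) % p      # placeholder; must be nonzero on all inputs used

def horst_round(x: int, y: int) -> tuple[int, int]:
    return (y * G(x) + F(x)) % p, x

def horst_round_inv(u: int, v: int) -> tuple[int, int]:
    # v is the old x, so recover y = (u - F(v)) / G(v) mod p
    y = (u - F(v)) * pow(G(v), -1, p) % p
    return v, y

if __name__ == "__main__":
    x, y = 123456789, 987654321
    assert horst_round_inv(*horst_round(x, y)) == (x, y)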
Last updated:  2023-12-01
Breach Extraction Attacks: Exposing and Addressing the Leakage in Second Generation Compromised Credential Checking Services
Dario Pasquini, Danilo Francati, Giuseppe Ateniese, and Evgenios M. Kornaropoulos
Credential tweaking attacks use breached passwords to generate semantically similar passwords and gain access to victims' services. These attacks sidestep the first generation of compromised credential checking (C3) services. The second generation of compromised credential checking services, called "Might I Get Pwned" (MIGP), is a privacy-preserving protocol that defends against credential tweaking attacks by allowing clients to query whether a password or a semantically similar variation is present in the server's compromised credentials dataset. The desired privacy requirements include not revealing the user's entered password to the server and ensuring that no compromised credentials are disclosed to the client. In this work, we formalize the cryptographic leakage of the MIGP protocol and perform a security analysis to assess its impact on the credentials held by the server. We focus on how this leakage aids breach extraction attacks, where an honest-but-curious client interacts with the server to extract information about the stored credentials. Furthermore, we discover additional leakage that arises from the implementation of Cloudflare's deployment of MIGP. We evaluate how the discovered leakage affects the guessing capability of an attacker in relation to breach extraction attacks. Finally, we propose MIGP 2.0, a new iteration of the MIGP protocol designed to minimize data leakage and prevent the introduced attacks.
Last updated:  2023-12-01
HashRand: Efficient Asynchronous Random Beacon without Threshold Cryptographic Setup
Akhil Bandarupalli, Adithya Bhat, Saurabh Bagchi, Aniket Kate, and Michael Reiter
Regular access to unpredictable and bias-resistant randomness is important for applications such as blockchains, voting, and secure distributed computing. Distributed random beacon protocols address this need by distributing trust across multiple nodes, with the majority of them assumed to be honest. These protocols have found applications in blockchain technology, leading to the proposal of several distributed random beacon protocols, some of which have already been implemented. However, many current random beacon systems rely on threshold cryptographic setups or exhibit high computational costs, while others assume partially or bounded synchronous networks. To overcome these limitations, we propose HashRand, a computation- and communication-efficient asynchronous random beacon protocol that uses only a secure hash function to generate beacons, together with pairwise secure channels. HashRand has a per-node communication complexity of $\mathcal{O}(\lambda n \log(n))$ bits per beacon. Its computational efficiency stems from the fact that a one-way hash computation is two orders of magnitude faster than a discrete-log exponentiation. Interestingly, besides reduced overhead, HashRand achieves post-quantum security by leveraging the security of the hash function against quantum adversaries, setting it apart from other random beacon protocols that use discrete-log cryptography. In a geo-distributed testbed of $n=160$ nodes, HashRand produces 1 beacon every second, a rate at least 4x higher than Spurt's. We also demonstrate the practical utility of HashRand by implementing a post-quantum secure asynchronous SMR protocol, which achieves a response rate of over 122k transactions per second over a WAN at $n=40$ nodes.
Last updated:  2023-12-01
End-to-End Encrypted Zoom Meetings: Proving Security and Strengthening Liveness
Yevgeniy Dodis, Daniel Jost, Balachandar Kesavan, and Antonio Marcedone
In May 2020, Zoom Video Communications, Inc. (Zoom) announced a multi-step plan to comprehensively support end-to-end encrypted (E2EE) group video calls and subsequently rolled out basic E2EE support to customers in October 2020. In this work we provide the first formal security analysis of Zoom's E2EE protocol, and also lay a foundation for the general problem of E2EE group video communication. We observe that the vast security literature analyzing asynchronous messaging does not translate well to synchronous video calls. Namely, while strong forms of forward secrecy and post-compromise security are less important for (typically short-lived) video calls, various liveness properties become crucial. For example, participants must quickly learn of updates to the meeting roster and key, displayed media streams must be recent, and banned participants must promptly lose any access to the meeting. Our main results are as follows: 1. We propose a new notion of leader-based continuous group key agreement with liveness, which accurately captures the E2EE properties specific to the synchronous communication scenario. 2. We prove security of the core of Zoom's E2EE meetings protocol in the above well-defined model. 3. We propose ways to strengthen Zoom's liveness properties by simple modifications to the original protocol, which subsequently influenced updates implemented in production.
Last updated:  2023-12-01
Lattice-based Programmable Hash Functions and Applications
Jiang Zhang, Yu Chen, and Zhenfeng Zhang
Driven by the open problem raised by Hofheinz and Kiltz (Journal of Cryptology, 2012), we study the formalization of lattice-based programmable hash functions (PHFs), and give three types of concrete constructions using several techniques, such as a novel combination of cover-free sets and lattice trapdoors. Under the Inhomogeneous Small Integer Solution (ISIS) assumption, we show that any (non-trivial) lattice-based PHF is a collision-resistant hash function, which gives a direct application of this new primitive. We further demonstrate the power of lattice-based PHFs by giving generic constructions of signatures and identity-based encryption (IBE) in the standard model, which not only provide a way to unify several previous lattice-based schemes using the partitioning proof technique, but also allow us to obtain new short signature schemes and IBE schemes from (ideal) lattices. Specifically, by instantiating the generic constructions with our Type-II and Type-III PHF constructions, we immediately obtain two short signatures and two IBE schemes with asymptotically much shorter keys. A major downside inherited from our Type-II and Type-III PHF constructions is that we can only prove the security of the new signatures and IBEs in the bounded security model, in which the number $Q$ of the adversary's queries must be known in advance. Another downside is that the computational time of our new signatures and IBEs is a linear function of $Q$, which is large for typical parameters. To overcome these limitations, we also give a refined way of using Type-II and Type-III PHFs to construct lattice-based short signatures with short verification keys in the full security model. In particular, our methods depart from the confined guessing technique of Böhl et al. (Eurocrypt'13) that was used to construct previous standard-model short signature schemes with short verification keys by Ducas and Micciancio (Crypto'14) and by Alperin-Sheriff (PKC'15), and allow us to achieve much tighter security from weaker hardness assumptions.
Last updated:  2023-11-30
Cycle Structure and Observability of Two Types of Galois NFSRs
Xianghan Wang, Jianghua Zhong, and Dongdai Lin
Nonlinear feedback shift registers (NFSRs) are used in many stream ciphers as their main building blocks. One security criterion for the design of a stream cipher is to assure that its keystream has a long period. To meet this criterion, the NFSR used in a stream cipher must have a long state cycle. Further, to simultaneously avoid equivalent keys, the keystream's period must not be compressed relative to the NFSR's state cycle length, which can be guaranteed if the NFSR is observable in the sense that any two distinct initial states are distinguishable from their resulting output sequences. The cycle structure of a general NFSR remains a hard open problem. Constructing Fibonacci NFSRs with maximum state cycles has therefore attracted much attention, but so far such Fibonacci NFSRs with known feedback functions have been found only for stage numbers no greater than 33. Considering that Galois NFSRs may decrease the area and increase the throughput compared to Fibonacci NFSRs, this paper studies two types of $n$-stage Galois NFSRs, whose state transition matrices are circulant matrices with only one nonzero element of 1 in each column. The cycle structure and observability of both types are disclosed using the semi-tensor-product-based Boolean network approach. In the first type, each Galois NFSR has a state transition matrix in which the position of the element 1 in the first column is even. It has the maximum state cycle for an arbitrary stage number and an explicit feedback function. It is observable if and only if its output function depends on the first state bit. In the second type, each Galois NFSR has a state transition matrix in which the position of the element 1 in the first column is $2^m+1$, with positive integer $m\leq n-1$ for the NFSR's stage number $n$. It has $2^m$ cycles of length $2^{n-m}$, and it is observable if its output function depends on all the state bits whose indices are no smaller than $n-m+1$.
Last updated:  2023-11-30
Decentralized Finance (DeFi): A Survey
Erya Jiang, Bo Qin, Qin Wang, Zhipeng Wang, Qianhong Wu, Jian Weng, Xinyu Li, Chenyang Wang, Yuhang Ding, and Yanran Zhang
Decentralized Finance (DeFi) is a new paradigm in the creation, distribution, and utilization of financial services via the integration of blockchain technology. Our research provides a comprehensive introduction to and meticulous classification of various DeFi applications. Beyond that, we thoroughly analyze their risks from both technical and economic perspectives, spanning multiple layers. We also point out research gaps and avenues, covering technical advancements, innovative economics, and sociological and ecological optimization.
Last updated:  2023-11-30
Unconditionally Secure Commitments with Quantum Auxiliary Inputs
Tomoyuki Morimae, Barak Nehoran, and Takashi Yamakawa
We show the following unconditional results on quantum commitments in two related yet different models: 1. We revisit the notion of quantum auxiliary-input commitments introduced by Chailloux, Kerenidis, and Rosgen (Comput. Complex. 2016) where both the committer and receiver take the same quantum state, which is determined by the security parameter, as quantum auxiliary inputs. We show that computationally-hiding and statistically-binding quantum auxiliary-input commitments exist unconditionally, i.e., without relying on any unproven assumption, while Chailloux et al. assumed a complexity-theoretic assumption, ${\bf QIP}\not\subseteq{\bf QMA}$. On the other hand, we observe that achieving both statistical hiding and statistical binding at the same time is impossible even in the quantum auxiliary-input setting. To the best of our knowledge, this is the first example of unconditionally proving computational security of any form of (classical or quantum) commitments for which statistical security is impossible. As intermediate steps toward our construction, we introduce and unconditionally construct post-quantum sparse pseudorandom distributions and quantum auxiliary-input EFI pairs which may be of independent interest. 2. We introduce a new model which we call the common reference quantum state (CRQS) model where both the committer and receiver take the same quantum state that is randomly sampled by an efficient setup algorithm. We unconditionally prove that there exist statistically hiding and statistically binding commitments in the CRQS model, circumventing the impossibility in the plain model. We also discuss their applications to zero-knowledge proofs, oblivious transfers, and multi-party computations.
Last updated:  2023-11-30
Zero-day vulnerability prevention with recursive feature elimination and ensemble learning
Mike Nkongolo Wa Nkongolo
This study focuses on detecting and preventing new types of online threats by improving the UGRansome dataset for real-time anomaly detection. By blending different machine learning methods, namely naïve tree-based ensemble learning and recursive feature elimination (RFE), the research achieves an accuracy of 97%, with Naïve Bayes (NB) standing out as the most effective classifier. The suggested setup, combining gradient boosting (GB) and random forest (RF) with NB, effectively identifies and prevents unknown vulnerabilities in computer systems. UGRansome successfully blocks over 100 kilobits per second (kbps) of harmful online traffic by using features pinpointed by the RFE method, specifically uniform resource locators (URLs), outperforming existing Intrusion Detection System (IDS) datasets. It is particularly effective at stopping secure shell attacks, demonstrating the dataset's usefulness for making networks safer. This research marks significant progress in intrusion detection. The NB model excels in accuracy, precision, and recall, especially in identifying new threats. Moreover, the suggested naïve tree-based ensemble model shows outstanding accuracy, standing out as the best-performing technique among all models studied. Applying the UGRansome properties-based rule noticeably changes how traffic is classified, decreasing unknown traffic while increasing unclassified traffic, which requires further investigation.
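As an illustration of the pipeline the abstract describes (RFE-based feature selection feeding a naïve-Bayes-plus-tree ensemble), here is a scikit-learn sketch on synthetic data. The dataset, parameters, and exact ensemble wiring are assumptions for exposition, not the paper's configuration.

# Sketch: recursive feature elimination (RFE), then a soft-voting ensemble of
# naive Bayes, gradient boosting, and random forest. Synthetic data stands in
# for UGRansome; all hyperparameters are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, n_features=30, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# RFE needs an estimator exposing feature_importances_; a random forest qualifies.
selector = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
               n_features_to_select=10).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

ensemble = VotingClassifier(
    estimators=[("nb", GaussianNB()),
                ("gb", GradientBoostingClassifier(random_state=0)),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=0))],
    voting="soft",  # average predicted class probabilities across the models
).fit(X_tr_sel, y_tr)

print(f"held-out accuracy: {ensemble.score(X_te_sel, y_te):.3f}")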
Last updated:  2023-11-30
Unclonable Cryptography with Unbounded Collusions
Alper Çakan and Vipul Goyal
The quantum no-cloning theorem gives rise to the intriguing possibility of quantum copy protection, where we encode a program in a quantum state such that a user in possession of $k$ such states cannot create $k+1$ working copies. Introduced by Aaronson (CCC'09) over a decade ago, copy protection has proven to be notoriously hard to achieve. In this work, we construct public-key encryption and functional encryption schemes whose secret keys are copy-protected against unbounded collusions in the plain model (i.e., without any idealized oracles), assuming (post-quantum) subexponentially secure $\mathcal{iO}$, one-way functions, and LWE. This resolves a long-standing open question, raised by multiple previous works, of constructing fully collusion-resistant copy-protected functionalities. Prior to our work, copy-protected functionalities were known only in restricted collusion models: either an a-priori bound on the collusion size was needed, in the plain model with the same assumptions as ours (Liu, Liu, Qian, Zhandry [TCC'22]), or the adversary was only prevented from doubling their number of working programs, in a structured quantum oracle model (Aaronson [CCC'09]). We obtain our results through a novel technique which uses identity-based encryption to construct unbounded-collusion-resistant copy-protection schemes from $1\to2$ secure schemes. This is analogous to the technique of using digital signatures to construct full-fledged quantum money from single banknote schemes (Lutomirski et al. [ICS'09], Farhi et al. [ITCS'12], Aaronson and Christiano [STOC'12]). We believe our technique is of independent interest. Along the way, we also construct a puncturable functional encryption scheme whose master secret key can be punctured at all functions $f$ such that $f(m_0) \neq f(m_1)$. This might also be of independent interest.
Last updated:  2023-11-30
Withdrawable Signature: How to Call off a Signature
Xin Liu, Joonsang Baek, and Willy Susilo
Digital signatures are a cornerstone of security and trust in cryptography, providing authenticity, integrity, and non-repudiation. Despite their benefits, traditional digital signature schemes suffer from inherent immutability, offering no provision for a signer to retract a previously issued signature. This paper introduces the concept of a withdrawable signature scheme, which allows for the retraction of a signature without revealing the signer's private key or compromising the security of other signatures the signer created before. This property, defined as ``withdrawability'', is particularly relevant in decentralized systems, such as e-voting, blockchain-based smart contracts, and escrow services, where signers may wish to revoke or alter their commitment. The core idea of our construction of a withdrawable signature scheme is to ensure that the parties holding a withdrawable signature cannot be convinced whether the signer signed a specific message. This ability to generate a signature while preventing its validity from being verified is a fundamental requirement of our scheme, epitomizing the property of \textit{withdrawability}. After formally defining security notions for withdrawable signatures, we present two constructions of the scheme, based on pairings and on the discrete logarithm, respectively. We provide security proofs that both constructions are unforgeable under insider corruption and satisfy the criteria of withdrawability. We anticipate our new type of signature will significantly enhance flexibility and security in digital transactions and communications.
Last updated:  2023-11-30
Secret-Shared Shuffle with Malicious Security
Xiangfu Song, Dong Yin, Jianli Bai, Changyu Dong, and Ee-Chien Chang
A secret-shared shuffle (SSS) protocol permutes a secret-shared vector using a random secret permutation. It has found numerous applications; however, it is also an expensive operation and often a performance bottleneck. Chase et al. (Asiacrypt'20) recently proposed a highly efficient semi-honest two-party SSS protocol known as the CGP protocol. It utilizes purposely designed pseudorandom correlations that facilitate a communication-efficient online shuffle phase. That said, semi-honest security is insufficient in many real-world application scenarios, since shuffling is usually used for highly sensitive applications. Considering this, recent works (CANS'21, NDSS'22) attempted to enhance the CGP protocol with malicious security over authenticated secret sharings. However, we find that these attempts are flawed: malicious adversaries can still learn private information via malicious deviations, as demonstrated by concrete attacks proposed in this paper. The question, then, is how to fill the gap and design a maliciously secure CGP shuffle protocol. We answer this question by introducing a set of lightweight correlation checks and a leakage reduction mechanism, and we apply our techniques with authenticated secret sharings to achieve malicious security. Notably, our protocol, while increasing security, is also efficient. In the two-party setting, experimental results show that our maliciously secure protocol introduces an acceptable overhead compared to its semi-honest version and is more efficient than the state-of-the-art maliciously secure SSS protocol from the MP-SPDZ library.
Last updated:  2023-11-30
Unconditionally secure quantum commitments with preprocessing
Luowen Qian
We demonstrate how to build computationally secure commitment schemes with the aid of quantum auxiliary inputs without unproven complexity assumptions. Furthermore, the quantum auxiliary input can be prepared either (1) efficiently through a trusted setup similar to the classical common random string model, or (2) strictly between the two involved parties in uniform exponential time. Classically this remains impossible without first proving $\mathsf{P} \neq \mathsf{NP}$.