Papers updated in last 7 days (68 results)

Last updated:  2023-03-28
Boosting Batch Arguments and RAM Delegation
Yael Tauman Kalai, Alex Lombardi, Vinod Vaikuntanathan, and Daniel Wichs
We show how to generically improve the succinctness of non-interactive publicly verifiable batch argument ($\mathsf{BARG}$) systems. In particular, we show (under a mild additional assumption) how to convert a $\mathsf{BARG}$ that generates proofs of length $\mathsf{poly} (m)\cdot k^{1-\epsilon}$, where $m$ is the length of a single instance and $k$ is the number of instances being batched, into one that generates proofs of length $\mathsf{poly} (m)\cdot \mathsf{poly} \log k$, which is the gold standard for succinctness of $\mathsf{BARG}$s. By prior work, such $\mathsf{BARG}$s imply the existence of $\mathsf{SNARG}$s for deterministic time $T$ computation with optimal succinctness $\mathsf{poly}\log T$. Our result reduces the long-standing challenge of building publicly-verifiable delegation schemes to a much easier problem: building a batch argument system that beats the trivial construction. It also immediately implies new constructions of $\mathsf{BARG}$s and $\mathsf{SNARG}$s with polylogarithmic succinctness based on either bilinear maps or a combination of the $\mathsf{DDH}$ and $\mathsf{QR}$ assumptions. Along the way, we prove an equivalence between $\mathsf{BARG}$s and a new notion of $\mathsf{SNARG}$s for (deterministic) $\mathsf{RAM}$ computations that we call ``flexible $\mathsf{RAM}$ $\mathsf{SNARG}$s with partial input soundness.'' This is the first demonstration that $\mathsf{SNARG}$s for deterministic computation (of any kind) imply $\mathsf{BARG}$s. Our $\mathsf{RAM}$ $\mathsf{SNARG}$ notion is of independent interest and has already been used in a recent work on constructing rate-1 $\mathsf{BARG}$s (Devadas et al., FOCS 2022).
Last updated:  2023-03-27
PLUME: An ECDSA Nullifier Scheme for Unique Pseudonymity within Zero Knowledge Proofs
Aayush Gupta and Kobi Gurkan
ZK-SNARKs (Zero Knowledge Succinct Noninteractive ARguments of Knowledge) are one of the most promising new applied cryptography tools: proofs allow anyone to prove a property about some data, without revealing that data. Largely spurred by the adoption of cryptographic primitives in blockchain systems, ZK-SNARKs are rapidly becoming computationally practical in real-world settings, as shown by, e.g., tornado.cash and rollups. These have enabled ideation for new identity applications based on anonymous proof-of-ownership. One of the primary technologies that would enable the jump from existing apps to such systems is the development of deterministic nullifiers. Nullifiers are used as a public commitment to a specific anonymous account, to forbid actions like double spending, or allow a consistent identity between anonymous actions. We identify a new deterministic signature algorithm that both uniquely identifies the keypair and keeps the account identity secret. In this work, we will define the full DDH-VRF construction, and prove uniqueness, secrecy, and existential unforgeability. We will also demonstrate a proof of concept of our Pseudonymously Linked Unique Message Entity (PLUME) scheme.
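As a rough illustration of the DDH-VRF-style nullifier idea described above, the toy sketch below computes a deterministic value of the form hash_to_group(m, pk)^sk over a tiny safe-prime subgroup. The group, parameters, and helper names here are made up for illustration; the actual PLUME scheme works over secp256k1 with a proper hash-to-curve and attaches a zero-knowledge proof of correct exponentiation, none of which is reproduced here.

```python
# Toy sketch of a DDH-VRF-style deterministic nullifier (not the PLUME spec):
# nullifier = hash_to_group(m, pk)^sk in a prime-order group. Illustrative,
# insecure parameters; the real scheme uses secp256k1 plus a ZK proof.
import hashlib

P = 467                      # toy safe prime p = 2q + 1 (NOT secure)
Q = (P - 1) // 2             # order of the quadratic-residue subgroup
G = 4                        # 2^2, a generator of the QR subgroup

def hash_to_group(msg: bytes, pk: int) -> int:
    """Map (msg, pk) into the QR subgroup by hashing and squaring."""
    h = int.from_bytes(hashlib.sha256(msg + pk.to_bytes(8, "big")).digest(), "big")
    return pow(h % P, 2, P) or G   # fall back to G in the degenerate 0 case

def keygen(seed: int):
    sk = seed % Q or 1
    return sk, pow(G, sk, P)

def nullifier(sk: int, pk: int, msg: bytes) -> int:
    """Deterministic: the same (keypair, message) always yields the same value."""
    return pow(hash_to_group(msg, pk), sk, P)

sk, pk = keygen(123456789)
n1 = nullifier(sk, pk, b"vote in poll #42")
n2 = nullifier(sk, pk, b"vote in poll #42")
assert n1 == n2              # uniqueness per keypair and message
print(hex(n1))
```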
Last updated:  2023-03-27
Interoperable Private Attribution: A Distributed Attribution and Aggregation Protocol
Benjamin Case, Richa Jain, Alex Koshelev, Andy Leiserson, Daniel Masny, Ben Savage, Erik Taubeneck, Martin Thomson, and Taiki Yamaguchi
Measuring people’s interactions that span multiple websites can provide unique insight that enables better products and improves people’s experiences, but directly observing people’s individual journeys creates privacy risks that conflict with the newly emerging privacy model for the web. We propose a protocol that combines multi-party computation and differential privacy to enable the processing of people’s data such that only aggregate measurements are revealed, strictly limiting the information leakage about individual people. Our primary application of this protocol is measuring, in aggregate, the effectiveness of digital advertising without enabling cross-site tracking of individuals. In this paper we formalize our protocol, Interoperable Private Attribution (IPA), and analyze its security. IPA is proposed in the W3C’s Private Advertising Technology Community Group (PATCG) [8]. We have implemented our protocol in the malicious, honest-majority MPC setting for three parties, where network costs dominate compute costs. For processing a query with 1M records it uses around 18 GiB of network traffic, which at \$0.08 per GiB leads to a network cost of \$1.44.
Last updated:  2023-03-27
Generalized Inverse Matrix Construction for Code Based Cryptography
Farshid Haidary Makoui and T. Aaron Gulliver
The generalized inverses of systematic non-square binary matrices have applications in mathematics, channel coding and decoding, navigation signals, machine learning, data storage, and cryptography, for example in the McEliece and Niederreiter public-key cryptosystems. A systematic non-square $(n-k) \times k$ matrix $H$, $n > k$, has $2^{k\times(n-k)}$ different generalized inverse matrices. This paper presents an algorithm for generating these matrices and compares it with two well-known methods, namely Gauss-Jordan elimination and the Moore-Penrose method. A random generalized inverse matrix construction method is given which has a lower execution time than the Gauss-Jordan elimination and Moore-Penrose approaches.
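To make the count $2^{k\times(n-k)}$ concrete, the minimal sketch below (an illustration under standard coding-theory conventions, not the paper's algorithm) samples a random generalized right inverse of a systematic binary matrix of the shape $H=[A \mid I_{n-k}]$: one fixed right inverse plus a random combination of null-space rows.

```python
# Minimal sketch (not the paper's algorithm): sampling a random generalized
# right inverse of a systematic binary matrix H = [A | I_{n-k}] over GF(2).
# Every choice of the random k x (n-k) matrix R below gives a distinct inverse,
# which is where the 2^{k(n-k)} count comes from.
import numpy as np

rng = np.random.default_rng(0)
n, k = 7, 4                                   # toy Hamming-code-sized example
A = rng.integers(0, 2, size=(n - k, k))       # (n-k) x k part
H = np.concatenate([A, np.eye(n - k, dtype=int)], axis=1) % 2   # (n-k) x n

# One fixed right inverse: stack zeros on top of the identity.
H_inv0 = np.concatenate([np.zeros((k, n - k), dtype=int),
                         np.eye(n - k, dtype=int)], axis=0)

# G = [I_k | A^T] spans the null space of H (H @ G^T = 0 over GF(2)).
G = np.concatenate([np.eye(k, dtype=int), A.T], axis=1) % 2

def random_right_inverse():
    R = rng.integers(0, 2, size=(k, n - k))   # 2^{k(n-k)} possible choices
    return (H_inv0 + G.T @ R) % 2             # still satisfies H @ X = I

X = random_right_inverse()
assert np.array_equal((H @ X) % 2, np.eye(n - k, dtype=int))
print(X)
```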
Last updated:  2023-03-27
Provable Lattice Reduction of $\mathbb Z^n$ with Blocksize $n/2$
Léo Ducas
The Lattice Isomorphism Problem (LIP) is the computational task of recovering, assuming it exists, an orthogonal linear transformation sending one lattice to another. For cryptographic purposes, the case of the trivial lattice $\mathbb Z^n$ is of particular interest ($\mathbb Z$LIP). Heuristic analysis suggests that the BKZ algorithm with blocksize $\beta = n/2 + o(n)$ solves such instances (Ducas, Postlethwaite, Pulles, van Woerden, ASIACRYPT 2022). In this work, I propose a provable version of this statement, namely, that $\mathbb Z$LIP can indeed be solved by making polynomially many calls to a Shortest Vector Problem (SVP) oracle in dimension at most $n/2 + 1$.
Last updated:  2023-03-27
Revisiting Preimage Sampling for Lattices
Corentin Jeudy, Adeline Roux-Langlois, and Olivier Sanders
Preimage Sampling is a fundamental process in lattice-based cryptography whose performance directly affects that of the cryptographic mechanisms that rely on it. In 2012, Micciancio and Peikert proposed a new way of generating trapdoors (and an associated preimage sampling procedure) with very interesting features. Unfortunately, in some applications such as digital signatures, the performance may not be as competitive as other approaches like Fiat-Shamir with Aborts. In this work we revisit the Micciancio-Peikert preimage sampling algorithm and make several contributions. We first propose a finer analysis of this procedure which results in interesting efficiency gains of around 20% on the preimage sizes without affecting security. It can thus be used as a drop-in replacement in every construction resorting to it. We then reconsider the Lyubashevsky-Wichs sampler for Micciancio-Peikert trapdoors which leverages rejection sampling but suffered from strong parameter requirements that hampered performance. We propose an improved analysis which allows us to obtain much more compact parameters. This leads to gains of up to 30% compared to the original Micciancio-Peikert sampling technique and opens promising perspectives for the efficiency of advanced lattice-based constructions relying on such mechanisms. As an application of the latter, we give the first lattice-based aggregate signature supporting public aggregation and achieving meaningful compression compared to the concatenation of individual signatures. Our scheme is proven secure in the aggregate chosen-key model coined by Boneh et al. in 2003, based on the well-studied assumptions Module Learning With Errors and Module Short Integer Solution.
Last updated:  2023-03-27
Practical key-recovery attack on MQ-Sign
Thomas Aulbach, Simona Samardjiska, and Monika Trimoska
This note describes a polynomial-time key-recovery attack on the UOV-based signature scheme called MQ-Sign. The scheme is a first-round candidate in the Korean Post-Quantum Cryptography Competition. Our attack exploits the sparsity of the secret central polynomials in combination with the specific structure of the secret linear map $S$. We provide a verification script that recovers the secret key in less than seven seconds for security level 5.
Last updated:  2023-03-27
Fully Adaptive Schnorr Threshold Signatures
Elizabeth Crites, Chelsea Komlo, and Mary Maller
We prove adaptive security of a simple three-round threshold Schnorr signature scheme, which we call Sparkle. The standard notion of security for threshold signatures considers a static adversary – one who must declare which parties are corrupt at the beginning of the protocol. The stronger adaptive adversary can at any time corrupt parties and learn their state. This notion is natural and practical, yet not proven to be met by most schemes in the literature. In this paper, we demonstrate that Sparkle achieves several levels of security based on different corruption models and assumptions. To begin with, Sparkle is statically secure under minimal assumptions: the discrete logarithm assumption (DL) and the random oracle model (ROM). If an adaptive adversary corrupts fewer than t/2 out of a threshold of t + 1 signers, then Sparkle is adaptively secure under a weaker variant of the one-more discrete logarithm assumption (AOMDL) in the ROM. Finally, we prove that Sparkle achieves full adaptive security, with a corruption threshold of t, under AOMDL in the algebraic group model (AGM) with random oracles. Importantly, we show adaptive security without requiring secure erasures. Ours is the first proof achieving full adaptive security without exponential tightness loss for any threshold Schnorr signature scheme; moreover, the reduction is tight.
Last updated:  2023-03-27
Compact Bounded-Collusion Identity-based Encryption via Group Testing
Shingo Sato and Junji Shikata
Bounded-collusion identity-based encryption (BC-IBE) is a variant of identity-based encryption, where an adversary obtains user secret keys corresponding to at most $d$ identities. Existing work shows that BC-IBE can be constructed from public key encryption (PKE) satisfying several properties. In particular, we focus on post-quantum PKE schemes submitted to the NIST PQC competition as the underlying PKE of BC-IBE schemes. This is because post-quantum cryptography is one of the most active research areas, owing to recent advances in the development of quantum computers. Hence, it is reasonable to consider converting such PKE schemes into encryption schemes with additional functionalities. By using existing generic constructions of BC-IBE, those post-quantum PKE schemes are transformed into BC-IBE with non-compact public parameters. In this paper, we propose generic constructions of BC-IBE whose public parameters are more compact, and many post-quantum PKE schemes secure against chosen-plaintext attacks can be plugged into our generic constructions. To this end, we construct BC-IBE schemes from a group testing perspective, while existing ones are constructed by employing error-correcting codes or cover-free families. As a result, we obtain BC-IBE schemes with more compact public parameters, constructed from the NIST PQC PKE schemes.
Last updated:  2023-03-27
Abstraction Model of Probing and DFA Attacks on Block Ciphers
Yuiko Matsubara, Daiki Miyahara, Yohei Watanabe, Mitsugu Iwamoto, and Kazuo Sakiyama
The threat of physical attacks that try to obtain secret information from cryptographic modules has been of academic and practical interest. One of the main concerns is determining attack efficiency, e.g., the number of attack trials needed to recover the secret key. However, accurately estimating attack efficiency is generally expensive because of the complexity of a physical attack on a cryptographic algorithm. Against this background, in this study we propose a new abstraction model for evaluating the efficiency of probing and DFA attacks. The proposed model includes an abstracted attack target and attacker to determine the amount of information leaked in a single attack trial. We can adapt the model flexibly to various attack scenarios and obtain the attack efficiency quickly and precisely. For the probing attack on AES, the difference in attack efficiency between the model and experimental values is only approximately 0.3%, whereas that of a previous model is approximately 16%. We also apply the probing attack to DES, and the results show that DES has high resistance to the probing attack. Moreover, the proposed model also works accurately for the DFA attack on AES.
Last updated:  2023-03-27
Efficient Linkable Ring Signature from Compact Commitment to Vector inexplicably named Multratug
Anton A. Sokolov
In this paper we revisit the ideas of our previous work, the Lin2-Xor lemma and the log-size linkable threshold ring signature, and introduce another lemma, called Lin2-Choice, which extends the Lin2-Xor lemma. Using a membership proof protocol defined in the Lin2-Choice lemma, we create a compact general-purpose trusted-setup-free log-size linkable threshold ring signature called EFLRSL. The signature size is 2log(n+1)+3l+1, where n is the ring size and l is the threshold. It is composed of several public coin arguments that are special honest verifier zero-knowledge and have computational witness-extended emulation. As the base building block which contributes most to the size, we use a black-box pivot argument that proves knowledge of a committed vector. This makes our signature combinable with other proofs, allowing further size reduction. Also, we present an extended version of the EFLRSL signature of size 2log(n+l+1)+7l+4, aliased as Multratug, which simultaneously proves balance and allows for easy multiparty signing. All this takes place in a prime-order group without bilinear pairings under the decisional Diffie-Hellman assumption in the random oracle model. Both our signatures are unforgeable w.r.t. insider corruption and are also EU-CMA. They remain anonymous even for non-uniformly distributed and malformed keys, which makes it possible to use them as a log-size drop-in replacement for LSAG-based signatures.
Last updated:  2023-03-27
Finding and Evaluating Parameters for BGV
Johannes Mono, Chiara Marcolla, Georg Land, Tim Güneysu, and Najwa Aaraj
Fully Homomorphic Encryption (FHE) is a groundbreaking technology that allows for arbitrary computations to be performed on encrypted data. State-of-the-art schemes such as Brakerski-Gentry-Vaikuntanathan (BGV) are based on the Learning with Errors over rings (RLWE) assumption, and each ciphertext has an associated error that grows with each homomorphic operation. For correctness, the error needs to stay below a certain threshold, requiring a trade-off in the parameters between security and the error margin available for computations. Choosing the parameters accordingly, for example the polynomial degree or the ciphertext modulus, is challenging and requires expert knowledge specific to each scheme. In this work, we improve all steps of the parameter generation process. We provide a comprehensive analysis of BGV in the Double Chinese Remainder Theorem (DCRT) representation, providing more accurate bounds than previous work on the DCRT, and we empirically derive a closed formula linking the security level, the polynomial degree, and the ciphertext modulus. Additionally, we introduce new circuit models and combine our theoretical work into an easy-to-use parameter generator for researchers and practitioners interested in using BGV for secure computation. Our formula results in better security estimates than previous closed formulas, while our DCRT analysis results in reduced prime sizes of up to 42% compared to previous work.
Last updated:  2023-03-27
Silph: A Framework for Scalable and Accurate Generation of Hybrid MPC Protocols
Edward Chen, Jinhao Zhu, Alex Ozdemir, Riad S. Wahby, Fraser Brown, and Wenting Zheng
Many applications in finance and healthcare need access to data from multiple organizations. While these organizations can benefit from computing on their joint datasets, they often cannot share data with each other due to regulatory constraints and business competition. One way mutually distrusting parties can collaborate without sharing their data in the clear is to use secure multiparty computation (MPC). However, MPC’s performance presents a serious obstacle for adoption as it is difficult for users who lack expertise in advanced cryptography to optimize. In this paper, we present Silph, a framework that can automatically compile a program written in a high-level language to an optimized, hybrid MPC protocol that mixes multiple MPC primitives securely and efficiently. Compared to prior works, our compilation speed is improved by up to 30000×. On various database analytics and machine learning workloads, the MPC protocols generated by Silph match or outperform prior work by up to 3.6×.
Last updated:  2023-03-27
Non-interactive privacy-preserving naive Bayes classifier using homomorphic encryption
Jingwei Chen, Yong Feng, Yang Liu, Wenyuan Wu, and Guanci Yang
In this paper, we propose a non-interactive privacy-preserving naive Bayes classifier from leveled fully homomorphic encryption schemes. The classifier runs on a server that is also the model’s owner (modeler), whose input is the encrypted data from a client. The classifier produces encrypted classification results, which can only be decrypted by the client, while the modeler’s model is only accessible to the server. Therefore, the classifier does not leak any private information about either the server’s model or the client’s data and results. More importantly, the classifier does not require any interactions between the server and the client during the classification phase. The main technical ingredient is an algorithm that computes the maximum index of an encrypted array homomorphically without any interactions. The proposed classifier is implemented using HElib. Experiments show the accuracy and efficiency of our classifier. For instance, the average cost is about 34 ms per sample for a real data set from the UCI Machine Learning Repository, with a security parameter of about 100 and an accuracy of about 97%.
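For orientation, the plaintext computation that such a classifier performs is just an argmax over per-class sums of log-probabilities; the minimal sketch below (with a hypothetical toy model) shows that plaintext logic only; the paper's contribution is carrying out the comparable argmax step homomorphically on encrypted inputs, without interaction, which this sketch does not attempt.

```python
# Plaintext naive Bayes scoring: the server's model is log-priors and
# log-likelihoods; classification is argmax_c [log P(c) + sum_i log P(x_i | c)].
# Hypothetical toy model with two classes and three binary features.
import math

log_prior = {"spam": math.log(0.4), "ham": math.log(0.6)}
# p_feat[c][i] = P(feature_i = 1 | class c); absent features contribute log(1 - p).
p_feat = {"spam": [0.8, 0.6, 0.1], "ham": [0.1, 0.3, 0.7]}

def classify(x):
    """x is a list of 0/1 feature values; returns the most likely class."""
    scores = {}
    for c in log_prior:
        s = log_prior[c]
        for xi, pi in zip(x, p_feat[c]):
            s += math.log(pi) if xi else math.log(1.0 - pi)
        scores[c] = s
    return max(scores, key=scores.get)   # the step done homomorphically in the paper

print(classify([1, 1, 0]))   # -> "spam" for this toy model
```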
Last updated:  2023-03-26
Yafa-108/146: Implementing ed25519-embedding Cocks-Pinch curves in arkworks-rs
Rami Akeela and Weikeng Chen
This note describes two pairing-friendly curves that embed ed25519, at different bit-security levels. Our search is not novel; it follows the standard recipe of the Cocks-Pinch method. We implemented these two curves in arkworks-rs. This note is intended to document how the parameters were generated and how to implement these curves in arkworks-rs 0.4.0, for further reference. We name the two curves Yafa-108 and Yafa-146:
- Yafa-108 is estimated to offer 108-bit security, which we parameterized to match the 103-bit security of BN254.
- Yafa-146 is estimated to offer 146-bit security, which we parameterized to match the 132-bit security of BLS12-446 or the 123-bit security of BLS12-381.
We use these curves as an example to demonstrate two things:
- The "elastic" zero-knowledge proof Gemini (EUROCRYPT '22) is not only elastic, but also more curve-agnostic and hardware-friendly.
- The cost of nonnative field arithmetic can be drastic, and the need for application-specific curves may be inherent.
This result serves as evidence of the necessity of EIP-1962 and the insufficiency of EIP-2537.
Last updated:  2023-03-26
Unconditionally secure ciphers with a short key for a source with unknown statistics
Boris Ryabko
We consider the problem of constructing an unconditionally secure cipher with a short key for the case where the probability distribution of encrypted messages is unknown. Note that unconditional security means that an adversary with no computational constraints can obtain only a negligible amount of information ("leakage") about an encrypted message (without knowing the key). Here we consider the case of a priori (partially) unknown message source statistics. More specifically, the message source probability distribution belongs to a given family of distributions. We propose an unconditionally secure cipher for this case. As an example, one can consider constructing a single cipher for texts written in any of the languages of the European Union. That is, the message to be encrypted could be written in any of these languages.
Last updated:  2023-03-26
On the Possibility of a Backdoor in the Micali-Schnorr Generator
Hannah Davis, Matthew Green, Nadia Heninger, Keegan Ryan, and Adam Suhl
In this paper, we study both the implications and potential impact of backdoored parameters for two RSA-based pseudorandom number generators: the ISO-standardized Micali-Schnorr generator and a closely related design, the RSA PRG. We observe, contrary to common understanding, that the security of the Micali-Schnorr PRG is not tightly bound to the difficulty of inverting RSA. We show that the Micali-Schnorr construction remains secure even if one replaces RSA with a publicly evaluatable PRG, or a function modeled as an efficiently invertible random permutation. This implies that any cryptographic backdoor must somehow exploit the algebraic structure of RSA, rather than an attacker's ability to invert RSA or the presence of secret keys. We exhibit two such backdoors in related constructions: a family of exploitable parameters for the RSA PRG, and a second vulnerable construction for a finite-field variant of Micali-Schnorr. We also observe that the parameters allowed by the ISO standard are incompletely specified, and allow insecure choices of exponent. Several of our backdoor constructions make use of lattice techniques, in particular multivariate versions of Coppersmith's method for finding small solutions to polynomials modulo integers.
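For context, the Micali-Schnorr generator iterates an RSA-type map on a secret state, keeping part of each intermediate value as the next state and emitting the remaining bits as output. The sketch below only conveys that shape: the modulus, exponent, and state/output split are toy values chosen for illustration and do not respect the ISO parameter constraints that the paper scrutinizes.

```python
# Toy sketch of the Micali-Schnorr PRG shape (NOT the ISO-standardized parameters):
# each step computes y = s^e mod N, keeps the high-order bits as the next state,
# and outputs the low-order bits. Real deployments constrain e, |N|, and the
# state/output split; none of that is modeled here.
P, Q = 10007, 10009                    # toy primes; a real modulus is much larger
N = P * Q
E = 5                                  # toy public exponent
OUT_BITS = N.bit_length() // 4         # arbitrary toy state/output split

def micali_schnorr(seed: int, blocks: int):
    s = seed % N
    out = []
    for _ in range(blocks):
        y = pow(s, E, N)
        s = y >> OUT_BITS                      # next state: high-order bits of y
        out.append(y & ((1 << OUT_BITS) - 1))  # output block: low-order bits of y
    return out

print([hex(b) for b in micali_schnorr(0xDEADBEEF, 4)])
```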
Last updated:  2023-03-26
Standard Model Time-Lock Puzzles: Defining Security and Constructing via Composition
Karim Eldefrawy, Sashidhar Jakkamsetti, Ben Terner, and Moti Yung
The introduction of time-lock puzzles initiated the study of publicly “sending information into the future.” For time-lock puzzles, the underlying security-enabling mechanism is the computational complexity of the operations needed to solve the puzzle, which must be tunable to reveal the solution after a predetermined time, and not before that time. Time-lock puzzles are typically constructed via a commitment to a secret, paired with a reveal algorithm that sequentially iterates a basic function over such commitment. One then shows that short-cutting the iterative process violates cryptographic hardness of an underlying problem. To date, and for more than twenty-five years, research on time-lock puzzles relied heavily on iteratively applying well-structured algebraic functions. However, despite the tradition of cryptography to reason about primitives in a realistic model with standard hardness assumptions (often after initial idealized assumptions), most analysis of time-lock puzzles to date still relies on cryptography modeled (in an ideal manner) as a random oracle function or a generic group function. Moreover, Mahmoody et al. showed that time-lock puzzles with superpolynomial gap cannot be constructed from random-oracles; yet still, current treatments generally use an algebraic trapdoor to efficiently construct a puzzle with a large time gap, and then apply the inconsistent (with respect to Mahmoody et al.) random-oracle idealizations to analyze the solving process. Finally, little attention has been paid to the nuances of composing multi-party computation with timed puzzles that are solved as part of the protocol. In this work, we initiate a study of time-lock puzzles in a model built upon a realistic (and falsifiable) computational framework. We present a new formal definition of residual complexity to characterize a realistic, gradual time-release for time-lock puzzles. We also present a general definition of timed multi-party computation (MPC) and both sequential and concurrent composition theorems for MPC in our model.
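For context, the classic algebraically-structured construction alluded to above is the Rivest-Shamir-Wagner repeated-squaring puzzle: the creator uses the factorization of $N$ as a trapdoor to prepare the puzzle quickly, while the solver must perform $T$ sequential squarings. A minimal sketch with toy parameters (not the model or constructions of this paper):

```python
# Minimal RSW-style time-lock puzzle sketch (toy parameters, illustration only).
# Creator: knows phi(N), so computes a^(2^T) mod N fast and masks the secret.
# Solver: must do T sequential squarings to unmask it.
P, Q = 10007, 10009                      # toy primes; real puzzles use large RSA moduli
N, PHI = P * Q, (P - 1) * (Q - 1)
T = 100_000                              # "time" parameter: number of sequential squarings
A = 2                                    # public base

def create_puzzle(secret: int):
    e = pow(2, T, PHI)                   # fast with the trapdoor phi(N)
    mask = pow(A, e, N)
    return (secret + mask) % N           # puzzle value

def solve_puzzle(puzzle: int):
    mask = A % N
    for _ in range(T):                   # inherently sequential work
        mask = (mask * mask) % N
    return (puzzle - mask) % N

assert solve_puzzle(create_puzzle(42)) == 42
```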
Last updated:  2023-03-26
Minimal $p$-ary codes from non-covering permutations
René Rodríguez, Enes Pasalic, Fengrong Zhang, and Yongzhuang Wei
In this article, we propose generalizations to the non-binary scenario of the methods employed in [44] for constructing minimal linear codes. Specifically, we provide three constructions of minimal codes over $\mathbb{F}_p$. The first construction uses the method of direct sum of an arbitrary function $f:\mathbb{F}_{p^r}\to \mathbb{F}_{p}$ and a bent function $g:\mathbb{F}_{p^s}\to \mathbb{F}_p$ to induce minimal codes with parameters $[p^{r+s}-1,r+s+1]$ and minimum distance larger than $p^r(p-1)(p^{s-1}-p^{s/2-1})$. For the first time, we provide a general construction of linear codes from a subclass of non-weakly regular plateaued functions. The second construction deals with a bent function $g:\mathbb{F}_{p^m}\to \mathbb{F}_p$ and a subspace of suitable derivatives $U$ of $g$, i.e., functions of the form $g(y+a)-g(y)$ for some $a\in \mathbb{F}_{p^m}^*.$ We also provide a generalization of the recently introduced concept of non-covering permutations [44] and prove important properties of this class of permutations. The most notable observation is that the class of non-covering permutations contains the class of APN power permutations (characterized by having two-to-one derivatives). Finally, the last construction combines the previous two methods (direct sum, non-covering permutations and subspaces of derivatives) to construct minimal codes with a larger dimension. This method proves to be quite flexible since it can lead to several non-equivalent codes, depending exclusively on the choice of the underlying non-covering permutation.
Last updated:  2023-03-26
Fast and Clean: Auditable high-performance assembly via constraint solving
Amin Abdulrahman, Hanno Becker, Matthias J. Kannwischer, and Fabien Klein
Handwritten assembly is a widely used tool in the development of high-performance cryptography: by providing full control over instruction selection, instruction scheduling, and register allocation, it allows the highest performance to be unlocked. On the flip side, developing handwritten assembly is not only time-consuming, but the artifacts produced also tend to be difficult to review and maintain – threatening their suitability for use in practice. In this work, we present SLOTHY (Super (Lazy) Optimization of Tricky Handwritten assemblY), a framework for the automated superoptimization of assembly with respect to instruction scheduling, register allocation, and loop optimization (software pipelining): With SLOTHY, the developer controls and focuses on algorithm and instruction selection, providing a readable “base” implementation in assembly, while SLOTHY automatically finds optimal and traceable instruction scheduling and register allocation strategies with respect to a model of the target (micro)architecture. We demonstrate the flexibility of SLOTHY by instantiating it with models of the Cortex-M55, Cortex-M85, Cortex-A55 and Cortex-A72 microarchitectures, implementing the Armv8.1-M+Helium and AArch64+Neon architectures. We use the resulting tools to optimize three workloads: First, for Cortex-M55 and Cortex-M85, a radix-4 complex Fast Fourier Transform (FFT) in fixed-point and floating-point arithmetic, fundamental in Digital Signal Processing. Second, on Cortex-M55, Cortex-M85, Cortex-A55 and Cortex-A72, the instances of the Number Theoretic Transform (NTT) underlying CRYSTALS-Kyber and CRYSTALS-Dilithium, two recently announced winners of the NIST Post-Quantum Cryptography standardization project. Third, for Cortex-A55, the scalar multiplication for the elliptic curve key exchange X25519. The SLOTHY-optimized code matches or beats the performance of prior art in all cases, while maintaining compactness and readability.
Last updated:  2023-03-25
Extended Abstract: HotStuff-2: Optimal Two-Phase Responsive BFT
Dahlia Malkhi and Kartik Nayak
In this paper, we observe that it is possible to solve partially-synchronous BFT and simultaneously achieve $O(n^2)$ worst-case communication, optimistically linear communication, a two-phase commit regime within a view, and optimistic responsiveness. Prior work falls short in achieving one or more of these properties, e.g., the most closely related work, HotStuff, requires a three-phase view while achieving all other properties. We demonstrate that these properties are achievable through a two-phase HotStuff variant named HotStuff-2. The quest for two-phase HotStuff variants that achieve all the above desirable properties has been long, producing a series of results that are yet sub-optimal and, at the same time, are based on somewhat heavy hammers. HotStuff-2 demonstrates that none of these are necessary: HotStuff-2 is remarkably simple, adding no substantive complexity to the original HotStuff protocol. The main takeaway is that two phases are enough for BFT after all.
Last updated:  2023-03-25
AIM: Symmetric Primitive for Shorter Signatures with Stronger Security (Full Version)
Seongkwang Kim, Jincheol Ha, Mincheol Son, Byeonghak Lee, Dukjae Moon, Joohee Lee, Sangyub Lee, Jihoon Kwon, Jihoon Cho, Hyojin Yoon, and Jooyoung Lee
Post-quantum signature schemes based on the MPC-in-the-Head (MPCitH) paradigm are recently attracting significant attention as their security solely depends on the one-wayness of the underlying primitive, providing diversity for the hardness assumption in post-quantum cryptography. Recent MPCitH-friendly ciphers have been designed using simple algebraic S-boxes operating on a large field in order to improve the performance of the resulting signature schemes. Due to their simple algebraic structures, their security against algebraic attacks should be comprehensively studied. In this paper, we refine algebraic cryptanalysis of power mapping based S-boxes over binary extension fields, and cryptographic primitives based on such S-boxes. In particular, for the Gröbner basis attack over $\mathbb{F}_2$, we experimentally show that the exact number of Boolean quadratic equations obtained from the underlying S-boxes is critical to correctly estimate the theoretic complexity based on the degree of regularity. Similarly, it turns out that the XL attack might be faster when all possible quadratic equations are found and used from the S-boxes. This refined cryptanalysis leads to more precise algebraic analysis of cryptographic primitives based on algebraic S-boxes. Considering the refined algebraic cryptanalysis, we propose a new one-way function, dubbed $\mathsf{AIM}$, as an MPCitH-friendly symmetric primitive with high resistance to algebraic attacks. The security of $\mathsf{AIM}$ is comprehensively analyzed with respect to algebraic, statistical, quantum, and generic attacks. $\mathsf{AIM}$ is combined with the BN++ proof system, yielding a new signature scheme, dubbed $\mathsf{AIMer}$. Our implementation shows that $\mathsf{AIMer}$ outperforms existing signature schemes based on symmetric primitives in terms of signature size and signing time.
Last updated:  2023-03-25
Reversing, Breaking, and Fixing the French Legislative Election E-Voting Protocol
Alexandre Debant and Lucca Hirschi
We conduct a security analysis of the e-voting protocol used for the largest political election using e-voting in the world, the 2022 French legislative election for the citizens overseas. Due to a lack of system and threat model specifications, we built and contributed such specifications by studying the French legal framework and by reverse-engineering the code base accessible to the voters. Our analysis reveals that this protocol is affected by two design-level and implementation-level vulnerabilities. We show how those allow a standard voting server attacker and even more so a channel attacker to defeat the election integrity and ballot privacy due to 6 attack variants. We propose and discuss 5 fixes to prevent those attacks. Our specifications, the attacks, and the fixes were acknowledged by the relevant stakeholders during our responsible disclosure. Our attacks are in the process of being prevented with our fixes for future elections. Beyond this specific protocol, we draw general conclusions and lessons from this instructive experience where an e-voting protocol meets the real-world constraints of a large-scale and political election.
Last updated:  2023-03-25
SQISignHD: New Dimensions in Cryptography
Pierrick Dartois, Antonin Leroux, Damien Robert, and Benjamin Wesolowski
We introduce SQISignHD, a new post-quantum digital signature scheme inspired by SQISign. SQISignHD exploits the recent algorithmic breakthrough underlying the attack on SIDH, which makes it possible to efficiently represent isogenies of arbitrary degrees as components of a higher-dimensional isogeny. SQISignHD overcomes the main drawbacks of SQISign. First, it scales well to high security levels, since the public parameters for SQISignHD are easy to generate: the characteristic of the underlying field need only be of the form $2^{f}3^{f'}-1$. Second, the signing procedure is simpler and more efficient. Third, the scheme is easier to analyse, allowing for a much more compelling security reduction. Finally, the signature sizes are even more compact than those of (the already record-breaking) SQISign, with compressed signatures as small as 105 bytes for the post-quantum NIST-1 level of security. These advantages may come at the expense of the verification, which now requires the computation of an isogeny in dimension $4$, a task whose optimised cost is still uncertain, as it has been the focus of very little attention.
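To illustrate the claim that public parameters are easy to generate, the toy search below looks for primes of the form $2^{f}3^{f'}-1$ over a small, arbitrary range of exponents using Miller-Rabin; real SQISignHD parameters of course use cryptographically sized exponents, and this sketch says nothing about the rest of the parameter selection.

```python
# Toy illustration: primes of the form 2^f * 3^f' - 1 are easy to find by direct
# search. Exponent ranges here are small and purely illustrative.
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):           # standard Miller-Rabin rounds
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

hits = [(f, g) for f in range(2, 40) for g in range(1, 25)
        if is_probable_prime(2**f * 3**g - 1)]
print(hits[:10])   # plenty of candidate characteristics even in this tiny range
```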
Last updated:  2023-03-25
FSMx-Ultra: Finite State Machine Extraction from Gate-Level Netlist for Security Assessment
Rasheed Kibria, Farimah Farahmandi, and Mark Tehranipoor
Numerous security vulnerability assessment techniques call for precise and fast finite state machine (FSM) extraction from the design under evaluation. Sequential logic locking, watermark insertion, fault-injection assessment of a System-on-a-Chip (SoC) control flow, information leakage assessment, and reverse engineering at gate-level abstraction, to name a few, require precise FSM extraction from the synthesized netlist of the design. Unfortunately, no reliable solutions are currently available for fast and precise extraction of FSMs from the highly unstructured gate-level netlist for effective security evaluation. The major challenge in developing such a solution is precise recognition of FSM state flip-flops in a netlist having a massive collection of flip-flops. In this paper, we propose FSMx-Ultra, a framework for extracting FSMs from extremely unstructured gate-level netlists. FSMx-Ultra utilizes state-of-the-art graph theory concepts and algorithms to distinguish FSM state registers from other registers and then constructs gate-level state transition graphs (STGs) for each identified FSM state register using automatic test pattern generation (ATPG) techniques. The results of our experiments on 14 open-source benchmark designs illustrate that FSMx-Ultra can recover all FSMs quickly and precisely from synthesized gate-level netlists of diverse complexity and size utilizing various state encoding schemes.
Last updated:  2023-03-24
Rate-1 Incompressible Encryption from Standard Assumptions
Pedro Branco, Nico Döttling, and Jesko Dujmovic
Incompressible encryption, recently proposed by Guan, Wichs and Zhandry (EUROCRYPT'22), is a novel encryption paradigm geared towards providing strong long-term security guarantees against adversaries with bounded long-term memory. Provided that the adversary forgets just a small fraction of a ciphertext, this notion provides strong security for the message encrypted therein, even if, at some point in the future, the entire secret key is exposed. This comes at the price of having potentially very large ciphertexts. Thus, an important efficiency measure for incompressible encryption is the message-to-ciphertext ratio (also called the rate). Guan et al. provided a low-rate instantiation of this notion from standard assumptions and a rate-1 instantiation from indistinguishability obfuscation (iO). In this work, we propose a simple framework to build rate-1 incompressible encryption from standard assumptions. Our construction can be realized from, e.g., the DDH assumption in combination with either the DCR or the LWE assumption.
Last updated:  2023-03-24
Optimal Security Notion for Decentralized Multi-Client Functional Encryption
Ky Nguyen, Duong Hieu Phan, and David Pointcheval
Research on (Decentralized) Multi-Client Functional Encryption (or (D)MCFE) is very active, with interesting constructions, especially for the class of inner products. However, the security notions have been evolving over time. While the target of the adversary in distinguishing ciphertexts is clear, legitimate scenarios that do not consist of trivial attacks on the functionality are less obvious. In this paper, we ask whether previous security games exclude only trivial attacks and, unfortunately, find that this is not the case. We then propose a stronger security notion, with a broad definition of admissible attacks, and prove it is optimal: any extension of the set of admissible attacks is actually a trivial attack on the functionality, and not against the specific scheme. In addition, we show that all the previous constructions are insecure w.r.t. this new security notion. Finally, we propose new DMCFE schemes for the class of inner products that provide the new features and achieve this stronger security notion.
Last updated:  2023-03-24
The Self-Anti-Censorship Nature of Encryption: On the Prevalence of Anamorphic Cryptography
Mirek Kutylowski, Giuseppe Persiano, Duong Hieu Phan, Moti Yung, and Marcin Zawada
As part of the responses to the ongoing ``crypto wars,'' the notion of {\em Anamorphic Encryption} was put forth [Persiano-Phan-Yung Eurocrypt '22]. The notion allows private communication in spite of a dictator who (in violation of the usual normative conditions under which Cryptography is developed) is engaged in an extreme form of surveillance and/or censorship, where it asks for all private keys and knows and may even dictate all messages. The original work pointed out efficient ways to use two known schemes in the anamorphic mode, bypassing the draconian censorship and hiding information from the all-powerful dictator. A question left open was whether these examples are outlier results or whether anamorphic mode is pervasive in existing systems. Here we answer the above question: we develop new techniques, expand the notion, and show that the notion of Anamorphic Cryptography is, in fact, very much prevalent. We first refine the notion of Anamorphic Encryption with respect to the nature of covert communication. Specifically, we distinguish {\em Single-Receiver Encryption} for many-to-one communication, and {\em Multiple-Receiver Encryption} for many-to-many communication within the group of conspiring (against the dictator) users. We then show that Anamorphic Encryption can be embedded in the randomness used in the encryption, and give families of constructions that can be applied to numerous ciphers. In total the families cover classical encryption schemes, some of which are in actual use (RSA-OAEP, Paillier, Goldwasser-Micali, ElGamal schemes, Cramer-Shoup, and Smooth Projective Hash based systems). Among our examples is an anamorphic channel with much higher capacity than the regular channel. In sum, the work shows the very large extent of the potential futility of control and censorship over the use of strong encryption by the dictator (typical for and even stronger than governments engaging in the ongoing ``crypto-wars''): While such limitations obviously hurt the utility which encryption typically brings to safety in computing systems, they essentially do not help the dictator. The actual implications of what we show here, and what it means in practice, require further policy and legal analyses and perspectives.
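One simple way to picture embedding a covert message in the encryption randomness (a generic rejection-sampling illustration, not one of the paper's constructions) is to re-draw the randomness until a keyed function of the resulting ciphertext component equals the covert bit; the anamorphic receiver, holding the shared double key, reads the bit off the ciphertext, while the dictator, even with the decryption key, sees an ordinary ciphertext. A toy ElGamal sketch:

```python
# Generic illustration of embedding a covert bit in encryption randomness via
# rejection sampling (not the paper's constructions). Toy ElGamal over a small
# safe-prime group; DK is the "double key" shared with the anamorphic receiver.
import hashlib, secrets

P, G = 467, 4                            # toy safe-prime group (q = 233), insecure
Q = (P - 1) // 2
DK = b"shared-double-key"                # hidden from the dictator

def covert_tag(c1: int) -> int:
    return hashlib.sha256(DK + c1.to_bytes(2, "big")).digest()[0] & 1

def anamorphic_encrypt(pk: int, m: int, covert_bit: int):
    while True:                          # resample r until c1 carries the covert bit
        r = secrets.randbelow(Q - 1) + 1
        c1 = pow(G, r, P)
        if covert_tag(c1) == covert_bit:
            return c1, (m * pow(pk, r, P)) % P

def covert_decrypt(c1: int) -> int:      # what the anamorphic receiver extracts
    return covert_tag(c1)

sk = 123 % Q
pk = pow(G, sk, P)
c1, c2 = anamorphic_encrypt(pk, m=9, covert_bit=1)
assert covert_decrypt(c1) == 1
# The dictator, even knowing sk, just sees a normally decryptable ElGamal ciphertext:
assert (c2 * pow(c1, Q - sk, P)) % P == 9   # m * pk^r * (g^r)^{-sk} = m
```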
Last updated:  2023-03-24
Efficiency of SIDH-based signatures (yes, SIDH)
Wissam Ghantous, Federico Pintore, and Mattia Veroni
In this note we assess the efficiency of a SIDH-based digital signature built on a diminished variant of a recent identification protocol proposed by Basso et al. Despite the devastating attacks against the mathematical problem underlying SIDH, this identification protocol remains secure, as its security is backed by a different (and more standard) isogeny-finding problem. We conduct our analysis by applying some known cryptographic techniques to decrease the signature size by about 70% for all parameter sets (obtaining signatures of approximately 21 KB for SIKEp434). Moreover, we propose a minor optimisation to compute many isogenies in parallel from the same starting curve. Our assessment confirms that the problem of designing a practical isogeny-based signature scheme remains largely open. However, concretely determining the current state of the art, against which future optimisations can be compared, appears relevant for a problem which has witnessed only small steps towards a solution.
Last updated:  2023-03-24
Ruffle: Rapid 3-party shuffle protocols
Pranav Shriram A, Nishat Koti, Varsha Bhat Kukkala, Arpita Patra, Bhavish Raj Gopal, and Somya Sangal
Secure shuffle is an important primitive that finds use in several applications such as secure electronic voting, oblivious RAMs, and secure sorting, to name a few. For time-sensitive shuffle-based applications that demand a fast response time, it is essential to design a fast and efficient shuffle protocol. In this work, we design secure and fast shuffle protocols relying on the techniques of secure multiparty computation. We make several design choices that aid in achieving highly efficient protocols. Specifically, we consider malicious 3-party computation setting with an honest majority and design robust ring-based protocols. Our shuffle protocols provide a fast online (i.e., input-dependent) phase compared to the state-of-the-art for the considered setting. To showcase the efficiency improvements brought in by our shuffle protocols, we consider two distinct applications of anonymous broadcast and secure graph computation via the GraphSC paradigm. In both cases, multiple shuffle invocations are required. Hence, going beyond standalone shuffle invocation, we identify two distinct scenarios of multiple invocations and provide customised protocols for each. Further, we showcase that our customised protocols not only provide a fast response time, but also provide improved overall run time for multiple shuffle invocations. With respect to the applications, we not only improve in terms of efficiency, but also work towards providing improved security guarantees, thereby outperforming the respective state-of-the-art works. We benchmark our shuffle protocols and the considered applications to analyze the efficiency improvements with respect to various parameters.
Last updated:  2023-03-24
QuantumCharge: Post-Quantum Cryptography for Electric Vehicle Charging
Dustin Kern, Christoph Krauß, Timm Lauser, Nouri Alnahawi, Alexander Wiesmaier, and Ruben Niederhagen
ISO 15118 enables charging and billing of Electric Vehicles (EVs) without user interaction by using locally installed cryptographic credentials that must be secure over the long lifetime of vehicles. In the dawn of quantum computers, Post-Quantum Cryptography (PQC) needs to be integrated into the EV charging infrastructure. In this paper, we propose QuantumCharge, a PQC extension for ISO 15118, which includes concepts for migration, crypto-agility, verifiable security, and the use of PQC-enabled hardware security modules. Our prototypical implementation and the practical evaluation demonstrate the feasibility, and our formal analysis shows the security of QuantumCharge, which thus paves the way for secure EV charging infrastructures of the future.
Last updated:  2023-03-24
Shield: Secure Allegation Escrow System with Stronger Guarantees
Nishat Koti, Varsha Bhat Kukkala, Arpita Patra, and Bhavish Raj Gopal
The rising issues of harassment, exploitation, corruption, and other forms of abuse have led victims to seek comfort by acting in unison against common perpetrators (e.g., #MeToo movement). One way to curb these issues is to install allegation escrow systems that allow victims to report such incidents. The escrows are responsible for identifying victims of a common perpetrator and taking the necessary action to bring justice to them. However, users hesitate to participate in these systems due to the fear of such sensitive reports being leaked to perpetrators, who may further misuse them. Thus, to increase trust in the system, cryptographic solutions are being designed to realize secure allegation escrow (SAE) systems. In the work of Arun et al. (NDSS'20), which presents the state-of-the-art solution, we identify attacks that can leak sensitive information and compromise victim privacy. We also report issues present in prior works that were left unidentified. To arrest all these breaches, we put forth an SAE system that prevents the identified attacks and retains the salient features from all prior works. The cryptographic technique of secure multi-party computation (MPC) serves as the primary underlying tool in designing our system. At the heart of our system lies a new duplicity check protocol and an improved matching protocol. We also provide additional features such as allegation modification and deletion, which were absent in the state of the art. To demonstrate feasibility, we benchmark the proposed system with state-of-the-art MPC protocols and report the cost of processing an allegation. Different settings that affect system performance are analyzed, and the reported values showcase the practicality of our solution.
Last updated:  2023-03-24
CPU to FPGA Power Covert Channel in FPGA-SoCs
Mathieu Gross, Robert Kunzelmann, and Georg Sigl
FPGA-SoCs are a popular platform for accelerating a wide range of applications due to their performance and flexibility. From a security point of view, these systems have been shown to be vulnerable to various attacks, especially side-channel attacks where an attacker can obtain the secret key of a cryptographic algorithm via laboratory measurement equipment or even remotely with sensors implemented inside the FPGA logic itself. Fortunately, a variety of countermeasures on the algorithmic level have been proposed to mitigate this threat. Beyond side- channel attacks, covert channels constitute another threat which enables communication through a hidden channel. In this work, we demonstrate the possibility of implementing a covert channel between the CPU and an FPGA by modulating the usage of the Power Distribution Network. We show that this resource is especially vulnerable since it can be easily controlled and observed, resulting in a stealthy communication and a high transmission data rate. The power usage is modulated using simple and inconspicuous instructions executed on the CPU. Additionally, we use Time-to-Digital Converter sensors to observe these power variations. The sensor circuits are programmed into the FPGA fabric using only standard logic components. Our covert channel achieves a transmission rate of up to 16.7 kbit/s combined with an error rate of 2.3%. Besides a good transmission quality, our covert channel is also stealthy and can be used as an activation function for a hardware trojan.
Last updated:  2023-03-24
Security analysis of the Classic McEliece, HQC and BIKE schemes in low memory
Yu Li and Li-Ping Wang
With the advancement of NIST PQC standardization, three of the four candidates in Round 4 are code-based schemes, namely Classic McEliece, HQC and BIKE. Currently, one of the most important tasks is to further analyze their security levels for the suggested parameter sets. At PKC 2022 Esser and Bellini restated the major information set decoding (ISD) algorithms by using nearest neighbor search and then applied these ISD algorithms to estimate the bit security of Classic McEliece, HQC and BIKE under the suggested parameter sets. However, all major ISD algorithms consume a large amount of memory, which in turn affects their time complexities. In this paper, we reestimate the bit-security levels of the parameter sets suggested by these three schemes in low memory by applying $K$-list sum algorithms to ISD algorithms. Compared with Esser-Bellini's results, our results achieve the best gains for Classic McEliece, HQC, and BIKE, with reductions in bit-security levels of $11.09$, $12.64$, and $12.19$ bits, respectively.
Last updated:  2023-03-24
SPRINT: High-Throughput Robust Distributed Schnorr Signatures
Fabrice Benhamouda, Shai Halevi, Hugo Krawczyk, Tal Rabin, and Yiping Ma
We describe high-throughput threshold protocols with guaranteed output delivery for generating Schnorr-type signatures. The protocols run a single message-independent interactive ephemeral randomness generation procedure (e.g., DKG) followed by a \emph{non-interactive} multi-message signature generation procedure. The protocols offer a significant increase in throughput already for as few as ten parties, while remaining highly efficient for many hundreds of parties, with thousands of signatures generated per minute (and over 10,000 in the normal optimistic case). These protocols extend seamlessly to the dynamic/proactive setting, where each run of the protocol uses a new committee, and they support sub-sampling the committees from among an effectively unbounded number of nodes. The protocols work over a broadcast channel in both synchronous and asynchronous networks. The combination of these features makes our protocols a good match for implementing a signature service over an (asynchronous) public blockchain with many validators, where guaranteed output delivery is an absolute must. In that setting, there is a system-wide public key, where the corresponding secret signature key is distributed among the validators. Clients can submit messages (under suitable controls, e.g. smart contracts), and authorized messages are signed relative to the global public key. Asymptotically, when running with committees of $n$ parties, our protocols can generate $\Omega(n^2)$ signatures per run, while providing resilience against $\Omega(n)$ corrupted nodes, and using broadcast bandwidth of only $O(n^2)$ group elements and scalars. For example, we can sign about $n^2/16$ messages using just under $2n^2$ total bandwidth while supporting resilience against $n/4$ corrupted parties, or sign $n^2/8$ messages using just over $2n^2$ total bandwidth with resilience against $n/5$ corrupted parties. We prove security of our protocols by reduction to the hardness of the discrete logarithm problem in the random-oracle model.
Last updated:  2023-03-24
A Tightly Secure Identity-based Signature Scheme from Isogenies
Hyungrok Jo, Shingo Sato, and Junji Shikata
We present a tightly secure identity-based signature (IBS) scheme based on the supersingular isogeny problems. Although Shaw and Dutta proposed an isogeny-based IBS scheme with provable security, the security reduction is non-tight. For an IBS scheme with concrete security, the tightness of its security reduction affects the key size and signature size. Hence, it is reasonable to focus on a tight security proof for an isogeny-based IBS scheme. In this paper, we propose an isogeny-based IBS scheme based on the lossy CSI-FiSh signature scheme and give a tight security reduction for this scheme. While the existing isogeny-based IBS has the square-root advantage loss in the security proof, the security proof for our IBS scheme avoids such advantage loss, due to the properties of lossy CSI-FiSh.
Last updated:  2023-03-24
Generic Construction of Dual-Server Public Key Authenticated Encryption with Keyword Search
Keita Emura
Chen et al. (IEEE Transactions on Cloud Computing 2022) introduced dual-server public key authenticated encryption with keyword search (DS-PAEKS), and proposed a DS-PAEKS scheme under the decisional Diffie-Hellman assumption. In this paper, we propose a generic construction of DS-PAEKS from PAEKS, public key encryption, and signatures. By providing a concrete attack, we show that the DS-PAEKS scheme of Chen et al. is vulnerable. Thus, the proposed generic construction yields the first secure DS-PAEKS schemes. Our attack, with a slight modification, also works against the Chen et al. dual-server public key encryption with keyword search (DS-PEKS) scheme (IEEE Transactions on Information Forensics and Security 2016). Moreover, we demonstrate that the Tso et al. generic construction of DS-PEKS from public key encryption (IEEE Access 2020) is also vulnerable. We also analyze other pairing-free PAEKS schemes (Du et al., Wireless Communications and Mobile Computing 2022 and Lu and Li, IEEE Transactions on Mobile Computing 2022). Though we did not find any attack against these schemes, we show that, at the very least, their security proofs are flawed.
Last updated:  2023-03-23
A Duality Between One-Way Functions and Average-Case Symmetry of Information
Shuichi Hirahara, Rahul Ilango, Zhenjian Lu, Mikito Nanashima, and Igor C. Oliveira
Symmetry of Information (SoI) is a fundamental property of Kolmogorov complexity that relates the complexity of a pair of strings and their conditional complexities. Understanding if this property holds in the time-bounded setting is a longstanding open problem. In the nineties, Longpré and Mocas (1993) and Longpré and Watanabe (1995) established that if SoI holds for time-bounded Kolmogorov complexity then cryptographic one-way functions do not exist, and asked if a converse holds. We show that one-way functions exist if and only if (probabilistic) time-bounded SoI fails on average, i.e., if there is a samplable distribution of pairs (x,y) of strings such that SoI for pK$^t$ complexity fails for many of these pairs. Our techniques rely on recent perspectives offered by probabilistic Kolmogorov complexity and meta-complexity, and reveal further equivalences between inverting one-way functions and the validity of key properties of Kolmogorov complexity in the time-bounded setting: (average-case) language compression and (average-case) conditional coding. Motivated by these results, we investigate correspondences of this form for the worst-case hardness of NP (i.e., NP ⊄ BPP) and for the average-case hardness of NP (i.e., DistNP ⊄ HeurBPP), respectively. Our results establish the existence of similar dualities between these computational assumptions and the failure of results from Kolmogorov complexity in the time-bounded setting. In particular, these characterizations offer a novel way to investigate the main hardness conjectures of complexity theory (and the relationships among them) through the lens of Kolmogorov complexity and its properties.
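For reference, the classical (time-unbounded) Symmetry of Information, due to Kolmogorov and Levin and stated here informally, says that $K(x,y) = K(x) + K(y \mid x) + O(\log(|x|+|y|))$; the open problem discussed above asks whether an analogous identity can hold for time-bounded (probabilistic) Kolmogorov complexity such as pK$^t$.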
Last updated:  2023-03-23
A Note on Hybrid Signature Schemes
Nina Bindel and Britta Hale
This draft presents work-in-progress concerning hybrid/composite signature schemes. More concretely, we give several tailored combinations of Fiat-Shamir based signature schemes (such as Dilithium) or Falcon with RSA or DSA. We observe that there are a number of signature hybridization goals, few of which are not achieved through parallel signing or concatenation approaches. These include proof composability (that the post-quantum hybrid signature security can easily be linked to the component algorithms), weak separability, strong separability, backwards compatibility, hybrid generality (i.e., hybrid compositions that can be instantiated with different algorithms once proven to be secure), and simultaneous verification. We do not consider backwards compatibility in this work, but aim in our constructions to show the feasibility of achieving all other properties. As a work-in-progress, the constructions are presented without the accompanying formal security analysis, to be included in an update.
Last updated:  2023-03-23
Lightweight Techniques for Private Heavy Hitters
Dan Boneh, Elette Boyle, Henry Corrigan-Gibbs, Niv Gilboa, and Yuval Ishai
This paper presents Poplar, a new system for solving the private heavy-hitters problem. In this problem, there are many clients and a small set of data-collection servers. Each client holds a private bitstring. The servers want to recover the set of all popular strings, without learning anything else about any client’s string. A web-browser vendor, for instance, can use Poplar to figure out which homepages are popular, without learning any user’s homepage. We also consider the simpler private subset-histogram problem, in which the servers want to count how many clients hold strings in a particular set without revealing this set to the clients. Poplar uses two data-collection servers and, in a protocol run, each client sends only a single message to the servers. Poplar protects client privacy against arbitrary misbehavior by one of the servers and our approach requires no public-key cryptography (except for secure channels), nor general-purpose multiparty computation. Instead, we rely on incremental distributed point functions, a new cryptographic tool that allows a client to succinctly secret-share the labels on the nodes of an exponentially large binary tree, provided that the tree has a single non-zero path. Along the way, we develop new general tools for providing malicious security in applications of distributed point functions. A limitation of Poplar is that it reveals to the servers slightly more information than the set of popular strings itself. We precisely define and quantify this leakage and explain how to ameliorate its effects. In an experimental evaluation with two servers on opposite sides of the U.S., the servers can find the 200 most popular strings among a set of 400,000 client-held 256-bit strings in 54 minutes. Our protocols are highly parallelizable. We estimate that with 20 physical machines per logical server, Poplar could compute heavy hitters over ten million clients in just over one hour of computation.
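To convey what a distributed point function provides, the sketch below shares a point function between two servers in a deliberately naive, non-succinct way (whole truth tables rather than the short keys of the incremental DPFs Poplar actually relies on): neither share alone reveals the special input, yet the XOR of the two local evaluations reconstructs the function value.

```python
# Naive, non-succinct stand-in for a distributed point function (DPF): the point
# function f_{alpha,beta}(x) = beta if x == alpha else 0 is XOR-shared as two
# full truth tables. Real DPFs achieve the same interface with short keys.
import secrets

DOMAIN_BITS = 8                          # toy domain {0, ..., 255}

def share_point_function(alpha: int, beta: int):
    size = 1 << DOMAIN_BITS
    table = [beta if x == alpha else 0 for x in range(size)]
    share0 = [secrets.randbits(32) for _ in range(size)]
    share1 = [t ^ s for t, s in zip(table, share0)]
    return share0, share1                # one share per server

def eval_share(share, x: int) -> int:
    return share[x]

k0, k1 = share_point_function(alpha=42, beta=7)
# Each server evaluates locally; XOR of the evaluations reconstructs f(x).
assert eval_share(k0, 42) ^ eval_share(k1, 42) == 7
assert eval_share(k0, 17) ^ eval_share(k1, 17) == 0
```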
Last updated:  2023-03-23
A Differential Fault Attack against Deterministic Falcon Signatures
Sven Bauer and Fabrizio De Santis
We describe a fault attack against the deterministic variant of the Falcon signature scheme. It is the first fault attack that exploits specific properties of deterministic Falcon. The attack works under a very liberal and realistic single fault random model. The main idea is to inject a fault into the pseudo-random generator of the pre-image trapdoor sampler, generate different signatures for the same input, find reasonably short lattice vectors this way, and finally use lattice reduction techniques to obtain the private key. We investigate the relationship between fault location, the number of faults, computational effort for a possibly remaining exhaustive search step and success probability.
Last updated:  2023-03-23
Interactive Oracle Arguments in the QROM and Applications to Succinct Verification of Quantum Computation
Islam Faisal
This work is motivated by the following question: can an untrusted quantum server convince a classical verifier of the answer to an efficient quantum computation using only polylogarithmic communication? We show how to achieve this in the quantum random oracle model (QROM), after a non-succinct instance-independent setup phase. We introduce and formalize the notion of post-quantum interactive oracle arguments for languages in QMA, a generalization of interactive oracle proofs (Ben-Sasson-Chiesa-Spooner). We then show how to compile any non-adaptive public-coin interactive oracle argument (with private setup) into a succinct argument (with setup) in the QROM. To conditionally answer our motivating question via this framework under the post-quantum hardness assumption of LWE, we show that the XZ local Hamiltonian problem with at least inverse-polylogarithmic relative promise gap has an interactive oracle argument with instance-independent setup, which we can then compile. Assuming a variant of the quantum PCP conjecture that we introduce called the weak XZ quantum PCP conjecture, we obtain a succinct argument for QMA (and consequently the verification of quantum computation) in the QROM (with non-succinct instance-independent setup) which makes only black-box use of the underlying cryptographic primitives.
Last updated:  2023-03-23
Making Classical (Threshold) Signatures Post-Quantum for Single Use on a Public Ledger
Laurane Marco, Abdullah Talayhan, and Serge Vaudenay
The Bitcoin architecture heavily relies on the ECDSA signature scheme, which is broken by quantum adversaries as the secret key can be computed from the public key in quantum polynomial time. To mitigate this attack, bitcoins can be paid to the hash of a public key (P2PKH). However, the first payment reveals the public key, so all bitcoins attached to it must be spent at the same time (i.e., the remaining amount must be transferred to a new wallet). Some problems remain with this approach: the owners are vulnerable to rushing adversaries between the time the signature is made public and the time it is committed to the blockchain. Additionally, there is no equivalent mechanism for threshold signatures. Finally, no formal analysis of P2PKH has been done. In this paper, we formalize the security notion of a digital signature with a hidden public key, and we propose and prove the security of a generic transformation that converts a classical signature to a post-quantum one that can be used only once. We compare it with P2PKH. Namely, our proposal relies on pre-image resistance instead of collision resistance as for P2PKH, and so allows for shorter hashes. Additionally, we propose the notion of a delay signature to address the problem of the rushing adversary when used with a public ledger, and discuss the advantages and disadvantages of our approach. We further extend our results to threshold signatures.
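A bare-bones sketch of the P2PKH flow analysed above: funds are locked to the hash of a public key, and the first spend reveals the key, which is checked against the stored hash. The accompanying signature and the ledger itself are elided, and the key derivation below is only a placeholder.

    # Bare-bones sketch of the P2PKH flow: lock funds to H(pk); the first spend
    # reveals pk, which is checked against the stored hash (the signature that
    # would accompany the spend is elided here).
    import hashlib, secrets

    def toy_keygen():
        sk = secrets.token_bytes(32)
        pk = hashlib.sha256(b"toy-public-key" + sk).digest()   # placeholder public key
        return sk, pk

    def lock(pk):
        return hashlib.sha256(pk).digest()[:20]                # address = H(pk)

    def check_reveal(pk, locked_hash):
        # On the first spend the public key becomes public, so all funds bound
        # to this address should be moved in that same transaction.
        return hashlib.sha256(pk).digest()[:20] == locked_hash

    if __name__ == "__main__":
        sk, pk = toy_keygen()
        addr = lock(pk)
        assert check_reveal(pk, addr)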
Last updated:  2023-03-23
\(\texttt{POLKA}\): Towards Leakage-Resistant Post-Quantum CCA-Secure Public Key Encryption
Clément Hoffmann, Benoît Libert, Charles Momin, Thomas Peters, and François-Xavier Standaert
As for any cryptographic algorithm, the deployment of post-quantum CCA-secure public-key encryption schemes may come with the need to be protected against side-channel attacks. For existing post-quantum schemes that have not been developed with leakage in mind, recent results showed that the cost of these protections can make their implementations more expensive by orders of magnitude. In this paper, we describe a new design, coined \(\texttt{POLKA}\), that is specifically tailored for this purpose. It leverages various ingredients in order to enable efficient side-channel protected implementations such as: (i) the rigidity property (which intuitively means that de-randomized encryption and decryption are injective functions) to avoid the very leaky re-encryption step of the Fujisaki-Okamoto transform, (ii) the randomization of the decryption thanks to the incorporation of a dummy ciphertext, removing the adversary’s control of its intermediate computations and making these computations ephemeral, (iii) key-homomorphic computations that can be masked against side-channel attacks with overheads that scale linearly in the number of shares, (iv) hard physical learning problem to argue about the security of some critical unmasked operations. Furthermore, we use an explicit rejection mechanism (returning an error symbol for invalid ciphertexts) to avoid the additional leakage caused by implicit rejection. As a result, all the operations of \(\texttt{POLKA}\) can be protected against leakage in a much cheaper way than state-of-the-art designs, opening the way towards schemes that are both quantum-safe and leakage-resistant.
Last updated:  2023-03-23
Asynchronous Remote Key Generation for Post-Quantum Cryptosystems from Lattices
Nick Frymann, Daniel Gardham, and Mark Manulis
Asynchronous Remote Key Generation (ARKG), introduced by Frymann et al. at CCS 2020, allows for the generation of unlinkable public keys by third parties, for which corresponding private keys may be later learned only by the key pair's legitimate owner. These key pairs can then be used in common public-key cryptosystems, including signatures, PKE, KEMs, and schemes supporting delegation, such as proxy signatures. The only known instance of ARKG generates discrete-log-based keys. In this paper, we introduce new ARKG constructions for lattice-based cryptosystems. The key pairs generated using our ARKG scheme can be applied to lattice-based signatures and KEMs, which have recently been selected for standardisation in the NIST PQ process, or as alternative candidates. In particular, we address challenges associated with the noisiness of lattice hardness assumptions, which requires a new generalised definition of ARKG correctness, whilst preserving the security and privacy properties of the former instantiation. Our ARKG construction uses key encapsulation techniques by Brendel et al. (SAC 2020) coined Split KEMs. As an additional contribution, we also show that Kyber (Bos et al., EuroS&P 2018) can be used to construct a Split KEM. The security of our protocol is based on standard LWE assumptions. We also discuss its use with selected candidates from the NIST process and provide an implementation and benchmarks.
Last updated:  2023-03-23
The Round Complexity of Statistical MPC with Optimal Resiliency
Benny Applebaum, Eliran Kachlon, and Arpita Patra
In STOC 1989, Rabin and Ben-Or (RB) established an important milestone in the fields of cryptography and distributed computing by showing that every functionality can be computed with statistical (information-theoretic) security in the presence of an active (aka Byzantine) rushing adversary that controls up to half of the parties. We study the round complexity of general secure multiparty computation and several related tasks in the RB model. Our main result shows that every functionality can be realized in only four rounds of interaction which is known to be optimal. This completely settles the round complexity of statistical actively-secure optimally-resilient MPC, resolving a long line of research. Along the way, we construct the first round-optimal statistically-secure verifiable secret sharing protocol (Chor, Goldwasser, Micali, and Awerbuch; STOC 1985), show that every single-input functionality (e.g., multi-verifier zero-knowledge) can be realized in 3 rounds, and prove that the latter bound is optimal. The complexity of all our protocols is exponential in the number of parties, and the question of deriving polynomially-efficient protocols is left for future research. Our main technical contribution is a construction of a new type of statistically-secure signature scheme whose existence was open even for smaller resiliency thresholds. We also describe a new statistical compiler that lifts up passively-secure protocols to actively-secure protocols in a round-efficient way via the aid of protocols for single-input functionalities. This compiler can be viewed as a statistical variant of the GMW compiler (Goldreich, Micali, Wigderson; STOC, 1987) that originally employed zero-knowledge proofs and public-key encryption.
Last updated:  2023-03-23
Multivariate Correlation Attacks and the Cryptanalysis of LFSR-based Stream Ciphers
Isaac A. Canales-Martínez and Igor Semaev
Cryptanalysis of modern symmetric ciphers may be done by using linear equation systems with multiple right-hand sides, which describe the encryption process. The tool was introduced by Raddum and Semaev, where several solving methods were developed. In this work, probabilities are ascribed to the right-hand sides and a statistical attack is then applied. The new approach is a multivariate generalisation of the correlation attack by Siegenthaler. A fast version of the attack is provided as well. It may be viewed as an extension of the fast correlation attack by Meier and Staffelbach, based on exploiting so-called parity-checks for linear recurrences. Parity-checks are a particular case of the relations that we introduce in the present work; the notion of a relation is not tied to linear recurrences. We show how to apply the method to some LFSR-based stream ciphers, including those from the Grain family. The new method generally requires fewer keystream bits to recover the initial states than other techniques reported in the literature.
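As a toy illustration of the univariate correlation attack that this work generalises, the sketch below builds a Geffe combiner, whose output agrees with one component LFSR in about 3/4 of positions, and recovers that LFSR's initial state by exhaustive search over the keystream. The LFSR sizes, feedback taps, and initial states are arbitrary small examples.

    # Toy illustration of Siegenthaler's (univariate) correlation attack.
    # A Geffe combiner z = (x1 & x2) ^ (~x2 & x3) satisfies Pr[z = x3] = 3/4,
    # so the initial state of LFSR3 can be found by exhaustive search using
    # only the keystream.  LFSR sizes here are deliberately tiny.
    from functools import reduce
    from itertools import product
    from operator import xor

    def lfsr(init, fb, n):
        """a_k = XOR_{j in fb} a_{k-j}; init = [a_0, ..., a_{L-1}]."""
        a = list(init)
        while len(a) < n:
            a.append(reduce(xor, (a[-j] for j in fb)))
        return a[:n]

    def geffe(x1, x2, x3):
        return [(a & b) ^ ((1 ^ b) & c) for a, b, c in zip(x1, x2, x3)]

    if __name__ == "__main__":
        n = 2000
        fb1, fb2, fb3 = [2, 5], [5, 6], [3, 4]          # primitive feedback taps
        s1, s2, s3 = [1, 0, 1, 1, 0], [0, 1, 1, 0, 1, 1], [1, 1, 0, 1]
        z = geffe(lfsr(s1, fb1, n), lfsr(s2, fb2, n), lfsr(s3, fb3, n))

        # Correlation attack on LFSR3: keep the guess agreeing most with z.
        best = max((g for g in product([0, 1], repeat=4) if any(g)),
                   key=lambda g: sum(a == b for a, b in zip(lfsr(list(g), fb3, n), z)))
        print("recovered LFSR3 state:", best)            # expected: (1, 1, 0, 1)
        assert list(best) == s3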
Last updated:  2023-03-23
A Treasury System for Cryptocurrencies: Enabling Better Collaborative Intelligence
Bingsheng Zhang, Roman Oliynykov, and Hamed Balogun
A treasury system is a community-controlled and decentralized collaborative decision-making mechanism for sustainable funding of blockchain development and maintenance. During each treasury period, project proposals are submitted, discussed, and voted for; top-ranked projects are funded from the treasury. The Dash governance system is a real-world example of such a system. In this work, we, for the first time, provide a rigorous study of the treasury system. We model, design, and implement a provably secure treasury system that is compatible with most existing blockchain infrastructures, such as Bitcoin, Ethereum, etc. More specifically, the proposed treasury system supports liquid democracy/delegative voting for better collaborative intelligence. Namely, the stakeholders can either vote directly on the proposed projects or delegate their votes to experts. Its core component is a distributed, universally composable, secure end-to-end verifiable voting protocol. The integrity of the treasury voting decisions is guaranteed even when all the voting committee members are corrupted. To further improve efficiency, we propose the first honest-verifier zero-knowledge proof for unit vector encryption with logarithmic-size communication. This partial result may be of independent interest to other cryptographic protocols. A pilot system is implemented in Scala over the Scorex 2.0 framework, and its benchmark results indicate that the proposed system can support tens of thousands of treasury participants with high efficiency.
Last updated:  2023-03-22
Single Instance Self-Masking via Permutations
Asaf Cohen, Paweł Cyprys, and Shlomi Dolev
Self-masking allows the masking of success criteria, the part of a problem instance (such as the sum in a subset-sum instance) that restricts the number of solutions. Self-masking is used to prevent the leakage of helpful information to attackers, while keeping the original solution valid and, at the same time, not increasing the number of unplanned solutions. Self-masking can be achieved by xoring the sums of two (or more) independent subset sum instances \cite{DD20, CDM22}, and by doing so, eliminating all known attacks that use the value of the sum of the subset to find the subset fast, namely, in polynomial time; much faster than the naive exponential exhaustive search. We demonstrate that the concept of self-masking can be applied to a single instance of the subset sum and a single instance of the permuted secret-sharing polynomials. We further introduce the benefit of permuting the bits of the success criteria, avoiding leakage of information on the value of the $i$'th bit of the success criteria in the case of a single instance, or the parity of the $i$'th bit of the success criteria in the case of several instances. In the case of several instances, we permute the success criteria bits of each instance prior to xoring them with each other. One basic permutation and its nested versions (e.g., $\pi^i$) are used, keeping the solution space small and, at the same time, attempting to create an ``all or nothing'' effect, where the result of trials with a wrong $\pi$ does not reveal much.
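A small numerical sketch of the two masking operations discussed above, on toy subset-sum instances: publishing the xor of the sums of two independent instances, and publishing a bit-permuted sum for a single instance. The instance sizes and the permutation are arbitrary illustrative choices.

    # Toy numerical illustration of self-masking for subset sum:
    #  (a) mask by XORing the sums of two independent instances, and
    #  (b) mask a single instance by permuting the bits of its sum with pi.
    import random

    BITS = 16

    def subset_sum_instance(n=8, bits=BITS):
        weights = [random.randrange(1, 2**(bits - 4)) for _ in range(n)]
        solution = [random.randrange(2) for _ in range(n)]
        target = sum(w for w, b in zip(weights, solution) if b)
        return weights, solution, target

    def permute_bits(value, pi, bits=BITS):
        """Apply the bit permutation pi (output bit i <- input bit pi[i])."""
        return sum(((value >> pi[i]) & 1) << i for i in range(bits))

    if __name__ == "__main__":
        random.seed(1)
        # (a) two-instance masking: only t1 XOR t2 is published.
        w1, sol1, t1 = subset_sum_instance()
        w2, sol2, t2 = subset_sum_instance()
        print("published success criterion (two instances):", t1 ^ t2)

        # (b) single-instance masking: publish pi(t1) instead of t1, hiding the
        # value of the i-th bit of the true sum.
        pi = list(range(BITS))
        random.shuffle(pi)
        masked_single = permute_bits(t1, pi)
        print("published success criterion (single instance):", masked_single)

        # The legitimate solver, knowing sol1, can still check its solution:
        candidate = sum(w for w, b in zip(w1, sol1) if b)
        assert permute_bits(candidate, pi) == masked_single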
Last updated:  2023-03-22
Maximally-Fluid MPC with Guaranteed Output Delivery
Giovanni Deligios, Aarushi Goel, and Chen-Da Liu-Zhang
To overcome the limitations of traditional secure multi-party computation (MPC) protocols that consider a static set of participants, in a recent work, Choudhuri et al. [CRYPTO 2021] introduced a new model called Fluid MPC, which supports {\em dynamic} participants. Protocols in this model allow parties to join and leave the computation as they wish. Unfortunately, known fluid MPC protocols (even with a strong honest majority) either only achieve security with abort, or require strong computational and trusted setup assumptions. In this work, we also consider the "hardest" setting --- called the maximally-fluid model --- where each party can leave the computation after participating in a single round. We study the problem of designing information-theoretic maximally-fluid MPC protocols that achieve security with guaranteed output delivery (without relying on trusted setup), and obtain the following main results: (1) We design a perfectly secure maximally-fluid MPC protocol, that achieves guaranteed output delivery against unbounded adversaries who are allowed to corrupt less than a third of the parties in every round/committee. (2) We show that the corruption threshold in the above protocol is optimal. In particular, we prove that in fluid MPC, when the adversary can corrupt a third (or more) of the parties in any round, it is impossible to achieve information-theoretic security and guaranteed output delivery simultaneously --- even assuming a common random string (CRS) setup. Additionally, for the case where the adversary is allowed to corrupt up to half of the parties in each committee, we present a new computationally secure maximally-fluid MPC protocol with guaranteed output delivery. Unlike prior works that require correlated setup and NIZKs, our construction only uses a common random string setup and is based on linearly-homomorphic equivocal commitments.
Last updated:  2023-03-22
Post-Quantum Privacy Pass via Post-Quantum Anonymous Credentials
Guru-Vamsi Policharla, Bas Westerbaan, Armando Faz-Hernández, and Christopher A Wood
It is known that one can generically construct a post-quantum anonymous credential scheme, supporting the showing of arbitrary predicates on its attributes using general-purpose zero-knowledge proofs secure against quantum adversaries [Fischlin, CRYPTO 2006]. Traditionally, such a generic instantiation is thought to come with impractical sizes and performance. We show that with careful choices and optimizations, such a scheme can perform surprisingly well. In fact, it performs competitively against state-of-the-art post-quantum blind signatures, for the simpler problem of post-quantum unlinkable tokens, required for a post-quantum version of Privacy Pass. To wit, a post-quantum Privacy Pass constructed in this way using zkDilithium, our proposal for a STARK-friendly variation on Dilithium2, allows for a trade-off between token size (85–175KB) and generation time (0.3–5s) with a proof security level of 115 bits. Verification of these tokens can be done in 20–30ms. We argue that these tokens are reasonably practical, adding less than a second upload time over traditional tokens, supported by a measurement study. Finally, we point out a clear advantage of our approach: the flexibility afforded by the general purpose zero-knowledge proofs. We demonstrate this by showing how we can construct a rate-limited variant of Privacy Pass that does not rely on non-collusion for privacy.
Last updated:  2023-03-22
Accelerating HE Operations from Key Decomposition Technique
Miran Kim, Dongwon Lee, Jinyeong Seo, and Yongsoo Song
Lattice-based homomorphic encryption (HE) schemes are based on the noisy encryption technique, where plaintexts are masked with some random noise for security. Recent advanced HE schemes rely on a decomposition technique to manage the growth of noise, which involves a conversion of a ciphertext entry into a short vector followed by multiplication with an evaluation key. Prior to this work, the decomposition procedure was the most time-consuming part, as it requires discrete Fourier transforms (DFTs) over the base ring for efficient polynomial arithmetic. In this paper, an expensive decomposition operation over a large modulus is replaced with relatively cheap operations over a ring of integers with a small bound. Notably, the cost of DFTs is reduced from quadratic to linear with the level of a ciphertext without any extra noise growth. We demonstrate the implication of our approach by applying it to the key-switching procedure. Our experiments show that the new key-switching method achieves a speedup of 1.2--2.3 or 2.1--3.3 times over the previous method, when the dimension of a base ring is $2^{15}$ or $2^{16}$, respectively.
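At its core, the decomposition in question is a digit (gadget) decomposition: a large ciphertext coefficient is split into short base-$B$ digits whose inner product with the gadget vector $(1, B, B^2, \dots)$ recovers it. The integer-level sketch below shows only this step; polynomials, DFTs/NTTs, and the key-switching keys themselves are omitted, and the toy modulus and base are arbitrary.

    # Minimal sketch of base-B (gadget) decomposition as used in key switching:
    # split a ciphertext coefficient a into short digits such that
    # <decomp(a), (1, B, B^2, ...)> = a (mod Q).  Polynomials, DFT/NTT and the
    # evaluation keys themselves are omitted.
    Q = 2**40          # toy ciphertext modulus
    B = 2**8           # decomposition base
    L = 5              # number of digits, B**L >= Q

    def gadget_decompose(a, base=B, length=L):
        digits = []
        for _ in range(length):
            digits.append(a % base)
            a //= base
        return digits                      # each digit is < B, i.e. "short"

    def gadget_recompose(digits, base=B):
        return sum(d * base**i for i, d in enumerate(digits)) % Q

    if __name__ == "__main__":
        a = 987654321098 % Q
        d = gadget_decompose(a)
        assert gadget_recompose(d) == a
        print("digits:", d)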
Last updated:  2023-03-22
An Efficient Threshold Access-Structure for RLWE-Based Multiparty Homomorphic Encryption
Christian Mouchet, Elliott Bertrand, and Jean-Pierre Hubaux
We propose and implement a multiparty homomorphic encryption (MHE) scheme with a $t$-out-of-$N$-threshold access-structure that is efficient and does not require a trusted dealer in the common random-string model. We construct this scheme from the ring-learning-with-error (RLWE) assumptions, and as an extension of the MHE scheme of Mouchet et al. (PETS 21). By means of a specially adapted share re-sharing procedure, this extension can be used to relax the $N$-out-of-$N$-threshold access structure of the original scheme into a $t$-out-of-$N$-threshold one. This procedure introduces only a single round of communication during the setup phase, after which any set of at least $t$ parties can compute a $t$-out-of-$t$ additive sharing of the secret key with no interaction; this new sharing can be used directly in the scheme of Mouchet et al. We show that, by performing Shamir re-sharing over the MHE ciphertext-space ring with a carefully chosen exceptional set, this reconstruction procedure can be made secure and has negligible overhead. Moreover, it only requires the parties to store a constant-size state after its setup phase. Hence, in addition to fault tolerance, lowering the corruption threshold also yields considerable efficiency benefits, by enabling the distribution of batched secret-key operations among the online parties. We implemented and open-sourced our scheme in the Lattigo library.
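A field-level sketch of the share re-sharing idea: each of the $N$ parties Shamir-shares its additive share of the secret, every party sums what it receives, and any $t$ parties can then reconstruct (or Lagrange-weight their shares into a $t$-out-of-$t$ additive sharing). The sketch works over a toy prime field rather than the RLWE ciphertext ring with an exceptional set, and the parameters are arbitrary.

    # Toy sketch of relaxing an N-out-of-N additive sharing of a secret into a
    # t-out-of-N threshold one via Shamir re-sharing, over a small prime field
    # (the actual scheme works over the RLWE ciphertext ring).
    import random

    P = 2**61 - 1                      # toy prime field
    N, T = 5, 3

    def shamir_share(secret, t, points):
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        return {x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P for x in points}

    def lagrange_at_zero(xs, x):
        num, den = 1, 1
        for other in xs:
            if other != x:
                num = num * (-other) % P
                den = den * (x - other) % P
        return num * pow(den, -1, P) % P

    if __name__ == "__main__":
        secret = random.randrange(P)
        additive = [random.randrange(P) for _ in range(N - 1)]
        additive.append((secret - sum(additive)) % P)           # N-out-of-N sharing

        points = list(range(1, N + 1))
        # Setup round: each party Shamir-shares its additive share once.
        sharings = [shamir_share(a, T, points) for a in additive]
        # Each party sums the sub-shares it received: a degree-(T-1) sharing of the secret.
        shamir_of_secret = {x: sum(sh[x] for sh in sharings) % P for x in points}

        # Any T online parties can reconstruct (or keep the Lagrange-weighted shares
        # as a T-out-of-T additive sharing of the secret key).
        online = random.sample(points, T)
        rec = sum(lagrange_at_zero(online, x) * shamir_of_secret[x] for x in online) % P
        assert rec == secret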
Last updated:  2023-03-22
Generic Construction of Forward Secure Public Key Authenticated Encryption with Keyword Search
Keita Emura
Forward security is a fundamental requirement in searchable encryption, where a newly generated ciphertext is not allowed to be searched by previously generated trapdoors. However, forward security is somewhat overlooked in the public key encryption with keyword search (PEKS) context and there are few proposals, whereas forward security has been stated as a default security notion in the (dynamic) symmetric searchable encryption (SSE) context. In the PEKS context, forward secure PEKS (FS-PEKS) is essentially the same as public key encryption with temporary keyword search (PETKS) proposed by Abdalla et al. (JoC 2016), which can be constructed generically from hierarchical identity-based encryption (HIBE) with level-1 anonymity. Alternatively, Zeng et al. (IEEE Transactions on Cloud Computing 2022) also proposed a generic construction of FS-PEKS from attribute-based searchable encryption supporting OR gates. In the public key authenticated encryption with keyword search (PAEKS) context, a concrete forward secure PAEKS (FS-PAEKS) construction has been proposed by Jiang et al. (The Computer Journal 2022), and no generic construction has been proposed to date. In this paper, we propose a generic construction of FS-PAEKS from PAEKS. In addition to PAEKS, we employ 0/1 encodings proposed by Lin et al. (ACNS 2005). We also show that the Jiang et al. FS-PAEKS scheme does not provide forward security, and thus our generic construction yields the first secure FS-PAEKS schemes. Our generic construction is quite simple, and it can also be applied to construct FS-PEKS. It yields an FS-PEKS scheme whose efficiency is comparable to that of the previous scheme. Moreover, it eliminates the hierarchical structure or attribute-based feature of the previous generic constructions, which is meaningful from a feasibility perspective.
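The 0/1-encoding technique of Lin et al. used above turns a greater-than comparison into a set-intersection test: $x > y$ exactly when the 1-encoding of $x$ and the 0-encoding of $y$ share an element. A plain, unencrypted sketch of the encodings is shown below.

    # Plain (unencrypted) sketch of the 0/1 encodings of Lin et al.:
    # x > y  iff  one_encoding(x) and zero_encoding(y) intersect.
    def bits(v, n):
        return [(v >> (n - 1 - i)) & 1 for i in range(n)]     # MSB first

    def one_encoding(x, n):
        b = bits(x, n)
        return {tuple(b[: i + 1]) for i in range(n) if b[i] == 1}

    def zero_encoding(y, n):
        b = bits(y, n)
        return {tuple(b[:i] + [1]) for i in range(n) if b[i] == 0}

    if __name__ == "__main__":
        n = 8
        for x in range(2**n):
            for y in range(2**n):
                assert (x > y) == bool(one_encoding(x, n) & zero_encoding(y, n))
        print("0/1-encoding comparison agrees with > on all 8-bit pairs")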
Last updated:  2023-03-22
An Overview of Hash Based Signatures
Vikas Srivastava, Anubhab Baksi, and Sumit Kumar Debnath
Digital signatures are one of the most basic cryptographic building blocks, utilized to provide attractive security features like authenticity, unforgeability, and undeniability. The security of existing state-of-the-art digital signatures is based on the hardness of number-theoretic problems such as discrete logarithm and integer factorization. However, these hardness assumptions do not hold against quantum adversaries. In particular, quantum algorithms like Shor’s algorithm can be used to solve the above-mentioned problems in polynomial time. As an alternative, a new direction of research called post-quantum cryptography (PQC) is expected to provide a new generation of quantum-resistant digital signatures. Hash-based signatures are one such candidate for post-quantum secure digital signatures. Hash-based signature schemes are digital signature schemes that use hash functions as their central building block. They are efficient, flexible, and can be used in a variety of applications. In this document, we provide an overview of hash-based signatures. Our presentation covers a wide range of aspects, aiming to be comprehensible for readers without expertise in the subject matter while also serving as a valuable resource for experts seeking reference material.
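As a concrete instance of "hash functions as the central building block", here is a minimal Lamport one-time signature sketch: the secret key is 2×256 random strings, the public key is their hashes, and a signature reveals one preimage per bit of the message digest. The Merkle-tree machinery that turns such one-time keys into many-time schemes (as in XMSS or SPHINCS+) is omitted.

    # Minimal Lamport one-time signature (the simplest hash-based signature):
    # sk = 2 x 256 random strings, pk = their SHA-256 hashes, and signing a
    # 256-bit message digest reveals one preimage per bit.  One key pair must
    # sign at most one message.
    import hashlib, secrets

    H = lambda data: hashlib.sha256(data).digest()

    def keygen():
        sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
        pk = [[H(sk[i][b]) for b in range(2)] for i in range(256)]
        return sk, pk

    def msg_bits(msg):
        d = int.from_bytes(H(msg), "big")
        return [(d >> i) & 1 for i in range(256)]

    def sign(sk, msg):
        return [sk[i][b] for i, b in enumerate(msg_bits(msg))]

    def verify(pk, msg, sig):
        return all(H(sig[i]) == pk[i][b] for i, b in enumerate(msg_bits(msg)))

    if __name__ == "__main__":
        sk, pk = keygen()
        sig = sign(sk, b"post-quantum hello")
        assert verify(pk, b"post-quantum hello", sig)
        assert not verify(pk, b"forged message", sig)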
Last updated:  2023-03-21
Unbounded Leakage-Resilience and Leakage-Detection in a Quantum World
Alper Cakan, Vipul Goyal, Chen-Da Liu-Zhang, and João Ribeiro
Side-channel attacks, which aim to leak side information on secret system components, are ubiquitous. Even simple attacks, such as measuring time elapsed or radiation emitted during encryption and decryption procedures, completely break textbook versions of many cryptographic schemes. This has prompted the study of leakage-resilient cryptography, which remains secure in the presence of side-channel attacks. Classical leakage-resilient cryptography must necessarily impose restrictions on the type of leakage one aims to protect against. As a notable example, the most well-studied leakage model is that of bounded leakage, where it is assumed that an adversary learns at most $\ell$ bits of leakage on secret components, for some leakage bound $\ell$. Although this leakage bound is necessary, it is unclear if such a bound is realistic in practice since many practical side-channel attacks cannot be captured by bounded leakage. In this work, we investigate the possibility of designing cryptographic schemes that provide guarantees against arbitrary side-channel attacks: - Using techniques from uncloneable quantum cryptography, we design several basic leakage-resilient primitives, such as secret sharing, (weak) pseudorandom functions, digital signatures, and public- and private-key encryption, which remain secure under (polynomially) unbounded classical leakage. In particular, this leakage can be much longer than the (quantum) secret being leaked upon. In our view, leakage is the result of observations of quantities such as power consumption and hence is most naturally viewed as classical information. - In the even stronger adversarial setting where the adversary is allowed to obtain unbounded quantum leakage (and thus leakage-resilience is impossible), we design schemes for many cryptographic tasks which support leakage-detection. This means that we can efficiently check whether the security of such a scheme has been compromised by a side-channel attack. These schemes are based on techniques from cryptography with certified deletion. - We also initiate a study of classical cryptographic schemes with (bounded) post-quantum leakage-resilience. These schemes resist side-channel attacks performed by adversaries with quantum capabilities which may even share arbitrary entangled quantum states. That is, even if such adversaries are non-communicating, they can still have "spooky" communication via entangled states.
Last updated:  2023-03-21
Private Access Control for Function Secret Sharing
Sacha Servan-Schreiber, Simon Beyzerov, Eli Yablon, and Hyojae Park
Function Secret Sharing (FSS; Eurocrypt 2015) allows a dealer to share a function f with two or more evaluators. Given secret shares of a function f, the evaluators can locally compute secret shares of f(x) on an input x, without learning information about f. In this paper, we initiate the study of access control for FSS. Given the shares of f, the evaluators can ensure that the dealer is authorized to share the provided function. For a function family F and an access control list defined over the family, the evaluators receiving the shares of f ∈ F can efficiently check that the dealer knows the access key for f. This model enables new applications of FSS, such as: – anonymous authentication in a multi-party setting, – access control in private databases, and – authentication and spam prevention in anonymous communication systems. Our definitions and constructions abstract and improve the concrete efficiency of several recent systems that implement ad-hoc mechanisms for access control over FSS. The main building block behind our efficiency improvement is a discrete-logarithm zero-knowledge proof-of-knowledge over secret-shared elements, which may be of independent interest. We evaluate our constructions and show a 50–70× reduction in computational overhead compared to existing access control techniques used in anonymous communication. In other applications, such as private databases, the processing cost of introducing access control is only 1.5–3× when amortized over databases with 500,000 or more items.
Last updated:  2023-03-21
As easy as ABC: Optimal (A)ccountable (B)yzantine (C)onsensus is easy!
Pierre Civit, Seth Gilbert, Vincent Gramoli, Rachid Guerraoui, and Jovan Komatovic
It is known that the agreement property of the Byzantine consensus problem among $n$ processes can be violated in a non-synchronous system if the number of faulty processes exceeds $t_0 = n / 3 - 1$. In this paper, we investigate the accountable Byzantine consensus problem in non-synchronous systems: the problem of solving Byzantine consensus whenever possible (e.g., when the number of faulty processes does not exceed $t_0$) and allowing correct processes to obtain proof of culpability of (at least) $t_0 + 1$ faulty processes whenever correct processes disagree. We present four complementary contributions: 1) We introduce $ABC$: a simple yet efficient transformation of any Byzantine consensus protocol to an accountable one. $ABC$ introduces an overhead of (1) only two all-to-all communication rounds and $O(n^2)$ additional bits in executions with up to $t_0$ faults (i.e., in the common case). 2) We define the accountability complexity, a complexity metric representing the number of accountability-specific messages that correct processes must send. Furthermore, we prove a tight lower bound. In particular, we show that any accountable Byzantine consensus algorithm incurs cubic accountability complexity. Moreover, we illustrate that the bound is tight by applying the $ABC$ transformation to any Byzantine consensus protocol. 3) We demonstrate that, when applied to an optimal Byzantine consensus protocol, $ABC$ constructs an accountable Byzantine consensus protocol that is (1) optimal in solving consensus whenever consensus is solvable with respect to the communication complexity, and (2) optimal in obtaining accountability whenever disagreement occurs with respect to the accountability complexity. 4) We generalize $ABC$ to other distributed computing problems besides the classic consensus problem. We characterize a class of agreement tasks, including reliable and consistent broadcast, that $ABC$ renders accountable.
Last updated:  2023-03-21
Somewhere Randomness Extraction and Security against Bounded-Storage Mass Surveillance
Jiaxin Guan, Daniel Wichs, and Mark Zhandry
Consider a state-level adversary who observes and stores large amounts of encrypted data from all users on the Internet, but does not have the capacity to store it all. Later, it may target certain "persons of interest" in order to obtain their decryption keys. We would like to guarantee that, if the adversary's storage capacity is only (say) $1\%$ of the total encrypted data size, then even if it can later obtain the decryption keys of arbitrary users, it can only learn something about the contents of (roughly) $1\%$ of the ciphertexts, while the rest will maintain full security. This can be seen as an extension of incompressible cryptography (Dziembowski CRYPTO '06, Guan, Wichs and Zhandry EUROCRYPT '22) to the multi-user setting. We provide solutions in both the symmetric key and public key setting with various trade-offs in terms of computational assumptions and efficiency. As the core technical tool, we study an information-theoretic problem which we refer to as "somewhere randomness extraction". Suppose $X_1, \ldots, X_t$ are correlated random variables whose total joint min-entropy rate is $\alpha$, but we know nothing else about their individual entropies. We choose $t$ random and independent seeds $S_1, \ldots, S_t$ and attempt to individually extract some small amount of randomness $Y_i = \mathsf{Ext}(X_i;S_i)$ from each $X_i$. We'd like to say that roughly an $\alpha$-fraction of the extracted outputs $Y_i$ should be indistinguishable from uniform even given all the remaining extracted outputs and all the seeds. We show that this indeed holds for specific extractors based on Hadamard and Reed-Muller codes.
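A toy rendering of the extraction syntax described above: each source $X_i$ gets its own independent seed $S_i$, and a one-bit inner-product (Hadamard-code) extractor $Y_i = \langle X_i, S_i \rangle \bmod 2$ is applied source by source. The correlated sources below are made up; the paper's contribution concerns which fraction of such outputs is guaranteed to be close to uniform.

    # Toy "somewhere extraction" syntax: extract one bit from each correlated
    # source X_i with its own independent seed S_i, using the inner-product
    # (Hadamard-code) extractor Ext(x; s) = <x, s> mod 2.
    import secrets

    N_BITS = 128      # length of each source, as a bit string

    def inner_product_extract(x_bits, seed_bits):
        return sum(a & b for a, b in zip(x_bits, seed_bits)) & 1

    def random_bits(n):
        return [secrets.randbits(1) for _ in range(n)]

    if __name__ == "__main__":
        # Correlated sources: X_2 is a noisy copy of X_1, X_3 is independent.
        x1 = random_bits(N_BITS)
        x2 = [b ^ (secrets.randbits(3) == 0) for b in x1]   # mostly equal to x1
        x3 = random_bits(N_BITS)
        sources = [x1, x2, x3]

        seeds = [random_bits(N_BITS) for _ in sources]       # independent seeds
        outputs = [inner_product_extract(x, s) for x, s in zip(sources, seeds)]
        print("extracted bits:", outputs)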
Last updated:  2023-03-21
Trellis: Robust and Scalable Metadata-private Anonymous Broadcast
Simon Langowski, Sacha Servan-Schreiber, and Srinivas Devadas
Trellis is a mix-net based anonymous broadcast system with cryptographic security guarantees. Trellis can be used to anonymously publish documents or communicate with other users, all while assuming full network surveillance. In Trellis, users send messages through a set of servers in successive rounds. The servers mix and post the messages to a public bulletin board, hiding which users sent which messages. Trellis hides all network metadata, remains robust to changing network conditions, guarantees availability to honest users, and scales with the number of mix servers. Trellis provides three to five orders of magnitude faster performance and better network robustness compared to Atom, the state-of-the-art anonymous broadcast system with a comparable threat model. In achieving these guarantees, Trellis contributes: (1) a simpler theoretical mixing analysis for a routing mix network constructed with a fraction of malicious servers, (2) anonymous routing tokens for verifiable random paths, and (3) lightweight blame protocols built on top of onion routing to identify and eliminate malicious parties. We implement and evaluate Trellis in a networked deployment. With 128 servers, Trellis achieves a throughput of 320 bits per second. Trellis’s throughput is only 100 to 1000× lower than that of Tor (which has 6,000 servers and 2 million daily users) and is potentially deployable at a smaller “enterprise” scale. Our implementation is open-source.
Last updated:  2023-03-21
Machine-Checked Security for $\mathrm{XMSS}$ as in RFC 8391 and $\mathrm{SPHINCS}^{+}$
Manuel Barbosa, François Dupressoir, Benjamin Grégoire, Andreas Hülsing, Matthias Meijers, and Pierre-Yves Strub
This work presents a novel machine-checked tight security proof for $\mathrm{XMSS}$ — a stateful hash-based signature scheme that is (1) standardized in RFC 8391 and NIST SP 800-208, and (2) employed as a primary building block of $\mathrm{SPHINCS}^{+}$, one of the signature schemes recently selected for standardization as a result of NIST’s post-quantum competition. In 2020, Kudinov, Kiktenko, and Fedorov pointed out a flaw affecting the tight security proofs of $\mathrm{SPHINCS}^{+}$ and $\mathrm{XMSS}$. For the case of $\mathrm{SPHINCS}^{+}$, this flaw was fixed in a subsequent tight security proof by Hülsing and Kudinov. Unfortunately, employing the fix from this proof to construct an analogous tight security proof for XMSS would merely demonstrate security with respect to an insufficient notion. At the cost of modeling the message-hashing function as a random oracle, we complete the tight security proof for $\mathrm{XMSS}$ and formally verify it using the EasyCrypt proof assistant. As part of this endeavor, we formally verify the crucial step common to (the security proofs of) $\mathrm{SPHINCS}^{+}$ and $\mathrm{XMSS}$ that was found to be flawed before, thereby confirming that the core of the aforementioned security proof by Hülsing and Kudinov is correct. As this is the first work to formally verify proofs for hash-based signature schemes in EasyCrypt, we develop several novel libraries for the fundamental cryptographic concepts underlying such schemes — e.g., hash functions and digital signature schemes — establishing a common starting point for future formal verification efforts. These libraries will be particularly helpful in formally verifying proofs of other hash-based signature schemes such as $\mathrm{LMS}$ or $\mathrm{SPHINCS}^{+}$.
Last updated:  2023-03-21
Game Theoretical Analysis of DAG-Ledgers Backbone
Simone Galimberti and Maria Potop-Butucaru
We study the rational behaviors of participants in $DAG$-Based Distributed Ledgers. We analyze generic algorithms that encapsulate the main actions of participants in a $DAG$-based distributed ledger: voting for a block, and checking its validity. Knowing that those actions have costs, and that validating a block gives rewards to users who participated in the validation procedure, we use game theory to study how strategic participants behave while trying to maximize their gains. We consider scenarios with different types of participants and investigate whether there exist equilibria where the properties of the protocols are guaranteed. The analysis is focused on the study of equilibria with trembling participants (i.e., rational participants that can take unintended actions with low probability). We find that in the presence of trembling participants, there exist equilibria where the protocols' properties may be violated.
Last updated:  2023-03-21
Quasi-linear masking to protect against both SCA and FIA
Claude Carlet, Abderrahman Daif, Sylvain Guilley, and Cédric Tavernier
The implementation of cryptographic algorithms must be protected against physical attacks. Side-channel and fault injection analyses are two prominent such implementation-level attacks. Protections against either do exist; they are characterized by security orders: the higher the order, the more difficult the attack. In this paper, we leverage the fast discrete Fourier transform to reduce the complexity of high-order masking, and extend it to allow for fault detection and/or correction. The security paradigm is that of code-based masking. Coding theory is amenable both to mixing the information and masking material at a prescribed order, and to detecting and/or correcting errors purposely injected by an attacker. For the first time, we show that quasi-linear masking (pioneered by Goudarzi, Joux and Rivain at ASIACRYPT 2018) can be achieved alongside cost amortisation. This technique consists in masking several symbols/bytes with the same masking material, thereby improving the efficiency of the masking. Similarly, it allows optimizing the detection capability of codes, since linear codes are all the more efficient as the information to protect is longer. Namely, we prove mathematically that our scheme features a side-channel security order of $d+1-t$, detects $d$ faults and corrects $\lfloor(d-1)/2\rfloor$ faults, where $2d+1$ is the encoding length and $t$ is the information size ($t\geq1$). Applied to AES, one can get side-channel protection of order $d=7$ when masking one column/line ($t=4$ bytes) at once. In addition to the theory, which makes use of the Frobenius Additive Fast Fourier Transform, we show performance results, both in software and hardware.
Last updated:  2023-03-21
Enhanced pqsigRM: Code-Based Digital Signature Scheme with Short Signature and Fast Verification for Post-Quantum Cryptography
Jinkyu Cho, Jong-Seon No, Yongwoo Lee, Zahyun Koo, and Young-Sik Kim
We present a novel code-based digital signature scheme, called enhanced pqsigRM, for post-quantum cryptography (PQC). This scheme is based on a modified Reed--Muller (RM) code, which reduces the signature size and verification time compared with existing code-based signature schemes. In fact, it strengthens pqsigRM, which was submitted to NIST for post-quantum cryptography standardization. The proposed scheme has the advantages of a short signature size and fast verification, and uses public codes that are more difficult to distinguish from random codes. We use $(U,U+V)$-codes with a high-dimensional hull to overcome the disadvantages of code-based schemes. The proposed decoder samples from coset elements with small Hamming weight for any given syndrome and efficiently finds such an element. Using a modified RM code, the proposed signature scheme resists various known attacks on RM-code-based cryptography. It has advantages in signature size, verification time, and proven security. For 128 bits of classical security, the signature size of the proposed signature scheme is 512 bytes, which corresponds to 1/4.7 of that of CRYSTALS-DILITHIUM, and the median number of verification cycles is 1,717,336, which is about five times that of CRYSTALS-DILITHIUM.
Last updated:  2023-03-21
CaSCaDE: (Time-Based) Cryptography from Space Communications DElay
Carsten Baum, Bernardo David, Elena Pagnin, and Akira Takahashi
Time-based cryptographic primitives such as Time-Lock Puzzles (TLPs) and Verifiable Delay Functions (VDFs) have recently found many applications to the efficient design of secure protocols such as randomness beacons or multiparty computation with partial fairness. However, current TLP and VDF candidate constructions rely on the average hardness of sequential computational problems. Unfortunately, obtaining concrete parameters for these is notoriously hard, as there cannot be a large gap between the honest parties’ and the adversary’s runtime when solving the same problem. Moreover, even a constant improvement in algorithms for solving these problems can render parameter choices, and thus deployed systems, insecure - unless very conservative and therefore highly inefficient parameters are chosen. In this work, we investigate how to construct time-based cryptographic primitives from communication delay, which has a known lower bound given the physical distance between devices: the speed of light. In order to obtain high delays, we explore the sequential communication delay that arises when sending a message through a constellation of satellites. This has the advantage that distances between protocol participants are guaranteed as positions of satellites are observable, so delay lower bounds can be easily computed. At the same time, building cryptographic primitives for this setting is challenging due to the constrained resources of satellites and possible corruptions of parties within the constellation. We address these challenges by constructing efficient proofs of sequential communication delay to convince a verifier that a message has accrued delay by traversing a path among satellites. As part of this construction, we propose the first ordered multisignature scheme with security under a version of the discrete logarithm assumption, which enjoys constant-size signatures and, modulo preprocessing, computational complexity independent of the number of signers. Building on our proofs of sequential communication delay, we show new constructions of Publicly Verifiable TLPs and VDFs whose delay guarantees are rooted on physical communication delay lower bounds. Our protocols as well as the ordered multisignature are analysed in the Universal Composability framework using novel models for sequential communication delays and (ordered) multisignatures. A direct application of our results is a randomness beacon that only accesses expensive communication resources in case of cheating.
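The physical lower bound the construction builds on is straightforward to make concrete: a message traversing a path of satellites needs at least the summed inter-hop distance divided by the speed of light. A small sketch with made-up Cartesian positions (in kilometres) follows.

    # Lower bound on sequential communication delay along a satellite path:
    # delay >= (sum of inter-hop distances) / c.  Positions below are made up,
    # given as Cartesian coordinates in kilometres.
    import math

    C_KM_PER_S = 299_792.458                      # speed of light

    def path_delay_lower_bound(positions_km):
        dist = sum(math.dist(a, b) for a, b in zip(positions_km, positions_km[1:]))
        return dist / C_KM_PER_S                  # seconds

    if __name__ == "__main__":
        # Ground station -> three satellites -> ground station (illustrative only).
        path = [(0, 0, 6371), (2000, 3000, 7000), (-4000, 5000, 7371),
                (1000, -6000, 7371), (0, 0, 6371)]
        print(f"delay lower bound: {path_delay_lower_bound(path) * 1000:.2f} ms")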
Last updated:  2023-03-21
Efficient Laconic Cryptography from Learning With Errors
Nico Döttling, Dimitris Kolonelos, Russell W. F. Lai, Chuanwei Lin, Giulio Malavolta, and Ahmadreza Rahimi
Laconic cryptography is an emerging paradigm that enables cryptographic primitives with sublinear communication complexity in just two messages. In particular, a two-message protocol between Alice and Bob is called laconic if its communication and computation complexity are essentially independent of the size of Alice's input. This can be thought of as a dual notion of fully-homomorphic encryption, as it enables "Bob-optimized" protocols. This paradigm has led to tremendous progress in recent years. However, all existing constructions of laconic primitives are considered only of theoretical interest: They all rely on non-black-box cryptographic techniques, which are highly impractical. This work shows that non-black-box techniques are not necessary for basic laconic cryptography primitives. We propose a completely algebraic construction of laconic encryption, a notion that we introduce in this work, which serves as the cornerstone of our framework. We prove that the scheme is secure under the standard Learning With Errors assumption (with polynomial modulus-to-noise ratio). We provide proof-of-concept implementations for the first time for laconic primitives, demonstrating the construction is indeed practical: For a database size of $2^{50}$, encryption and decryption are in the order of single digit milliseconds. Laconic encryption can be used as a black box to construct other laconic primitives. Specifically, we show how to construct: - Laconic oblivious transfer - Registration-based encryption scheme - Laconic private-set intersection protocol All of the above have essentially optimal parameters and similar practical efficiency. Furthermore, our laconic encryption can be preprocessed such that the online encryption step is entirely combinatorial and therefore much more efficient. Using similar techniques, we also obtain identity-based encryption with an unbounded identity space and tight security proof (in the standard model).
Last updated:  2023-03-21
Real World Deniability in Messaging
Daniel Collins, Simone Colombo, and Loïs Huguenin-Dumittan
This work discusses real world deniability in messaging. We highlight how the different models for cryptographic deniability do not ensure practical deniability. To overcome this situation, we propose a model for real world deniability that takes into account the entire messaging system. We then discuss how deniability is (not) used in practice and the challenges arising from the design of a deniable system. We propose a simple, yet powerful solution for deniability: applications should enable direct modification of local messages; we discuss the impacts of this strong deniability property.
Last updated:  2023-03-21
Discretization Error Reduction for Torus Fully Homomorphic Encryption
Kang Hoon Lee and Ji Won Yoon
In the recent history of fully homomorphic encryption, bootstrapping has been actively studied across many HE schemes. As bootstrapping is the essential process that turns somewhat homomorphic encryption schemes into fully homomorphic ones, enhancing its performance is one of the key factors in improving the utility of homomorphic encryption. In this paper, we propose an extended bootstrapping for TFHE, which we call EBS. One of the main drawbacks of TFHE bootstrapping is that its precision is mainly determined by the polynomial dimension $N$. Thus, to bootstrap with high precision, one must enlarge $N$ or resort to an alternative method. Our EBS makes it possible to select a small $N$ for the parameters while bootstrapping in a higher dimension to retain high precision. Moreover, it can be easily parallelized for faster computation. Also, EBS can easily be adapted to other known variants of TFHE bootstrapping based on the original bootstrapping algorithm. We implement our EBS along with the known full-domain bootstrapping methods ($\mathsf{FDFB}$, $\mathsf{TOTA}$, $\mathsf{Comp}$) and show how much EBS can improve their precision. We provide experimental results and a thorough analysis, and show that EBS is capable of bootstrapping with high precision even with a small $N$, and thus with smaller keys and lower complexity than choosing a large $N$ from the outset.