All papers in 2023 (Page 8 of 1971 results)

Last updated:  2024-04-25
Tight Security of TNT and Beyond: Attacks, Proofs and Possibilities for the Cascaded LRW Paradigm
Ashwin Jha, Mustafa Khairallah, Mridul Nandi, and Abishanka Saha
Liskov, Rivest, and Wagner laid the theoretical foundations for tweakable block ciphers (TBC). In a seminal paper, they proposed two (up to) birthday-bound secure design strategies --- LRW1 and LRW2 --- to convert any block cipher into a TBC. Several follow-up works consider cascading LRW-type TBCs to construct beyond-the-birthday-bound (BBB) secure TBCs. Landecker et al. demonstrated that just two-round cascading of LRW2 already gives BBB security. Bao et al. undertook a similar exercise in the context of LRW1 with TNT --- a three-round cascading of LRW1 --- which has been shown to achieve BBB security as well. In this paper, we present a CCA distinguisher on TNT that achieves a non-negligible advantage with $ O(2^{n/2}) $ queries, directly contradicting the security claims made by the designers. We provide a rigorous and complete advantage calculation, coupled with experimental verification, that further supports our claim. Next, we provide new and simple proofs of birthday-bound CCA security for both TNT and its single-key variant, which confirm the tightness of our attack. On a more positive note, we show that adding just one more block cipher call, referred to as 4-LRW1, does not just re-establish BBB security, but amplifies it up to $ 2^{3n/4} $ queries. As a side-effect of this endeavour, we propose a new abstraction of the cascaded LRW design philosophy, referred to as the LRW+ paradigm, comprising two block cipher calls sandwiched between a pair of tweakable universal hashes. This helps us provide a modular proof covering all cascaded LRW constructions with at least $ 2 $ rounds, including 4-LRW1 and its more established relative, the well-known CLRW2, or more aptly, 2-LRW2.
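For readers less familiar with the constructions named above, the following standard definitions may help fix ideas (the notation is ours and not quoted from the paper): $E_K$ is the underlying block cipher under key $K$, $T$ the tweak, $M$ the message, and $h$ an XOR-universal hash; 4-LRW1 is understood, per the abstract, as one additional cascade round appended to TNT.
$$\begin{aligned} \mathsf{LRW1}_{K}(T, M) &= E_{K}\big(T \oplus E_{K}(M)\big), \\ \mathsf{LRW2}_{K,h}(T, M) &= E_{K}\big(M \oplus h(T)\big) \oplus h(T), \\ \mathsf{TNT}_{K_1,K_2,K_3}(T, M) &= E_{K_3}\big(T \oplus E_{K_2}(T \oplus E_{K_1}(M))\big) \quad \text{(three-round cascaded LRW1)}, \\ \text{4-}\mathsf{LRW1}_{K_1,\dots,K_4}(T, M) &= E_{K_4}\big(T \oplus \mathsf{TNT}_{K_1,K_2,K_3}(T, M)\big). \end{aligned}$$
In this notation, the LRW+ abstraction of the paper views the two middle block cipher calls of such cascades as sandwiched between a pair of tweakable universal hashes.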
Last updated:  2024-05-13
Pianist: Scalable zkRollups via Fully Distributed Zero-Knowledge Proofs
Tianyi Liu, Tiancheng Xie, Jiaheng Zhang, Dawn Song, and Yupeng Zhang
In the past decade, blockchains have seen various financial and technological innovations, with cryptocurrencies reaching a market cap of over 1 trillion dollars. However, scalability is one of the key issues hindering the deployment of blockchains in many applications. To improve transaction throughput, zkRollups and zkEVM techniques using the cryptographic primitive of zero-knowledge proofs (ZKPs) have been proposed, and many companies are adopting these technologies in their layer-2 solutions. However, in these technologies, ZKP proof generation is the bottleneck, and companies have to deploy powerful machines with terabytes of memory to batch a large number of transactions into a single ZKP. In this work, we improve the scalability of these techniques by proposing new schemes of fully distributed ZKPs. Our schemes improve the efficiency and scalability of ZKPs using multiple machines, while keeping the communication among the machines minimal. With our schemes, ZKP generation can be distributed to multiple participants in a model similar to mining pools. Our protocols are based on Plonk, an efficient zero-knowledge proof system with a universal trusted setup. The first protocol is for data-parallel circuits. For a computation of $M$ sub-circuits of size $T$ each, using $M$ machines, the prover time is $O(T\log T + M \log M)$, while the prover time of the original Plonk on a single machine is $O(MT\log (MT))$. Our protocol incurs only $O(1)$ communication per machine, and the proof size and verifier time are both $O(1)$, the same as the original Plonk. Moreover, we show that with minor modifications, our second protocol can support general circuits with arbitrary connections while preserving the same proving, verifying, and communication complexity. The technique is general and may be of independent interest for other applications of ZKPs. We implement Pianist (Plonk vIA uNlimited dISTribution), a fully distributed ZKP system using our protocols. Pianist can generate the proof for 8192 transactions in 313 seconds on 64 machines. This improves the scalability of the Plonk scheme by 64$\times$. The communication per machine is only 2.1 KB, regardless of the number of machines and the size of the circuit. The proof size is 2.2 KB and the verifier time is 3.5 ms. We further show that Pianist has similar improvements for general circuits. On a randomly generated circuit with $2^{25}$ gates, it only takes 5s to generate the proof using 32 machines, 24.2$\times$ faster than Plonk on a single machine.
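To put the stated asymptotics in perspective, a back-of-the-envelope comparison (our own arithmetic, not a figure from the paper) for the data-parallel setting is
$$\frac{MT\log (MT)}{T\log T + M\log M} \approx M\cdot\frac{\log (MT)}{\log T} \gtrsim M \qquad (M \ll T),$$
so with $M = 64$ machines one expects at least a $64\times$ reduction in per-proof prover work, consistent with the reported $64\times$ scalability improvement over single-machine Plonk.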
Last updated:  2023-08-22
Computational Wiretap Coding from Indistinguishability Obfuscation
Yuval Ishai, Aayush Jain, Paul Lou, Amit Sahai, and Mark Zhandry
A wiretap coding scheme for a pair of noisy channels $(\mathsf{ChB},\mathsf{ChE})$ enables Alice to reliably communicate a message to Bob by sending its encoding over $\mathsf{ChB}$, while hiding the message from an adversary Eve who obtains the same encoding over $\mathsf{ChE}$. A necessary condition for the feasibility of wiretap coding is that $\mathsf{ChB}$ is not a degradation of $\mathsf{ChE}$, namely Eve cannot simulate Bob’s view. While insufficient in the information-theoretic setting, a recent work of Ishai, Korb, Lou, and Sahai (Crypto 2022) showed that the non-degradation condition is sufficient in the computational setting, assuming idealized flavors of obfuscation. The question of basing a similar feasibility result on standard cryptographic assumptions was left open, even in simple special cases. In this work, we settle the question for all discrete memoryless channels where the (common) input alphabet of $\mathsf{ChB}$ and $\mathsf{ChE}$ is binary, and with arbitrary finite output alphabet, under standard (sub-exponential) hardness assumptions: namely those assumptions that imply indistinguishability obfuscation (Jain-Lin-Sahai 2021, 2022), and injective PRGs. In particular, this establishes the feasibility of computational wiretap coding when $\mathsf{ChB}$ is a binary symmetric channel with crossover probability $p$ and $\mathsf{ChE}$ is a binary erasure channel with erasure probability $e$, where $e>2p$. On the information-theoretic side, our result builds on a new polytope characterization of channel degradation for pairs of binary-input channels, which may be of independent interest.
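To see where the threshold $e > 2p$ comes from, recall the standard degradation calculation for this channel pair (a textbook argument, not a contribution of the paper): Eve can locally turn her $\mathsf{BEC}(e)$ output into a binary symmetric channel by replacing each erasure with a uniform bit and flipping each non-erased bit with probability $q$, giving crossover probability
$$\frac{e}{2} + (1-e)\,q = p \;\Longleftrightarrow\; q = \frac{p - e/2}{1-e},$$
which is a valid probability (for $p \le 1/2$) exactly when $e \le 2p$. Hence $\mathsf{ChB}=\mathsf{BSC}(p)$ is a degradation of $\mathsf{ChE}=\mathsf{BEC}(e)$ if and only if $e \le 2p$, and wiretap coding can only be feasible in the complementary regime $e > 2p$ stated above.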
Last updated:  2024-07-15
SIGMA: Secure GPT Inference with Function Secret Sharing
Kanav Gupta, Neha Jawalkar, Ananta Mukherjee, Nishanth Chandran, Divya Gupta, Ashish Panwar, and Rahul Sharma
Secure 2-party computation (2PC) enables secure inference that offers protection for both proprietary machine learning (ML) models and sensitive inputs to them. However, the existing secure inference solutions suffer from high latency and communication overheads, particularly for transformers. Function secret sharing (FSS) is a recent paradigm for obtaining efficient 2PC protocols with a preprocessing phase. We provide SIGMA, the first end-to-end system for secure transformer inference based on FSS. By constructing new FSS-based protocols for complex machine learning functionalities, such as Softmax, GeLU and SiLU, and also accelerating their computation on GPUs, SIGMA improves the latency of secure inference of transformers by $11-19\times$ over the state-of-the-art that uses preprocessing and GPUs. We present the first secure inference of generative pre-trained transformer (GPT) models. In particular, SIGMA executes Meta's LLaMA2 (available on HuggingFace) with 13 billion parameters in 44 seconds and GPT2 in 1.6 seconds.
Last updated:  2023-08-22
Finding Orientations of Supersingular Elliptic Curves and Quaternion Orders
Sarah Arpin, James Clements, Pierrick Dartois, Jonathan Komada Eriksen, Péter Kutas, and Benjamin Wesolowski
Orientations of supersingular elliptic curves encode the information of an endomorphism of the curve. Computing the full endomorphism ring is a known hard problem, so one might consider how hard it is to find one such orientation. We prove that access to an oracle which tells if an elliptic curve is $\mathfrak{O}$-orientable for a fixed imaginary quadratic order $\mathfrak{O}$ provides non-trivial information towards computing an endomorphism corresponding to the $\mathfrak{O}$-orientation. We provide explicit algorithms and in-depth complexity analysis. We also consider the question in terms of quaternion algebras. We provide algorithms which compute an embedding of a fixed imaginary quadratic order into a maximal order of the quaternion algebra ramified at $p$ and $\infty$. We provide code implementations in Sagemath which are efficient for finding embeddings of imaginary quadratic orders of discriminants up to $O(p)$, even for cryptographically sized $p$.
Last updated:  2024-08-16
Whipping the MAYO Signature Scheme using Hardware Platforms
Florian Hirner, Michael Streibl, Florian Krieger, Ahmet Can Mert, and Sujoy Sinha Roy
NIST issued a new call in 2023 to diversify the portfolio of quantum-resistant digital signature schemes since the current portfolio relies on lattice problems. The MAYO scheme, which builds on the Unbalanced Oil and Vinegar (UOV) problem, is a promising candidate for this new call. MAYO introduces emulsifier maps and a novel 'whipping' technique to significantly reduce the key sizes compared to previous UOV schemes. This paper provides a comprehensive analysis of the implementation aspects of MAYO and proposes several optimization techniques that we use to implement a high-speed hardware accelerator. The first optimization technique is the partial unrolling of the emulsification process to increase parallelization. The second proposed optimization is a novel memory structure enabling the parallelization of significant bottlenecks in the MAYO scheme. In addition to this, we present a flexible transposing technique for the data format used in MAYO that can be expanded to other UOV-based schemes. We use these techniques to design the first high-speed ASIC and FPGA accelerator that supports all operations of the MAYO scheme for different NIST security levels. Compared with the state of the art, such as HaMAYO [23] and UOV [7], our FPGA design shows a performance benefit of up to three orders of magnitude in both latency and area-time product. Furthermore, we lower the BRAM consumption by up to $2.8 \times$ compared to these FPGA implementations. Compared to high-end CPU implementations, our ASIC design allows between $2.81\times$ and $60.14\times$ higher throughputs. This increases the number of signing operations per second from $483$ to $13424$, thereby facilitating performant deployment of the MAYO scheme in time-critical applications.
Last updated:  2023-08-22
Automatic Preimage Attack Framework on Ascon Using a Linearize-and-Guess Approach
Huina Li, Le He, Shiyao Chen, Jian Guo, and Weidong Qiu
Ascon is the final winner of the lightweight cryptography standardization competition (2018-2023). In this paper, we focus on preimage attacks against round-reduced Ascon. The preimage attack framework, utilizing the linear structure with the allocating model, was initially proposed by Guo et al. at ASIACRYPT 2016 and subsequently improved by Li et al. at EUROCRYPT 2019, demonstrating high effectiveness in breaking the preimage resistance of Keccak. In this paper, we extend this preimage attack framework to Ascon from two aspects. Firstly, we propose a linearize-and-guess approach by analyzing the algebraic properties of the Ascon permutation. As a result, the complexity of finding a preimage for 2-round Ascon-XOF with a 64-bit hash value can be significantly reduced from $2^{39}$ guesses to $2^{27.56}$ guesses. To support the effectiveness of our approach, we find an actual preimage of the all-'0' hash in practical time. Secondly, we develop a SAT-based automatic preimage attack framework using the linearize-and-guess approach, which efficiently and exhaustively searches for the optimal structures. Consequently, we present the best theoretical preimage attacks on 3-round and 4-round Ascon-XOF so far.
Last updated:  2023-09-16
Key-Agreement with Perfect Completeness from Random Oracles
Noam Mazor
In the Random Oracle Model (ROM) all parties have oracle access to a common random function, and the parties are limited in the number of queries they can make to the oracle. Merkle’s Puzzles, introduced by Merkle [CACM ’78], is a key-agreement protocol in the ROM with a quadratic gap between the query complexity of the honest parties and that of the eavesdropper. This quadratic gap is known to be optimal, by the works of Impagliazzo and Rudich [STOC ’89] and Barak and Mahmoody [Crypto ’09]. When the oracle function is injective or a permutation, Merkle’s Puzzles has perfect completeness. That is, it is certain that the protocol results in agreement between the parties. However, without such an assumption on the random function, there is a small error probability, and the parties may end up holding different keys. This fact raises the question: Is there a key-agreement protocol with perfect completeness and super-linear security in the ROM? In this paper we give a positive answer to the above question, showing that changes to the query distribution of the parties in Merkle’s Puzzles yield a protocol with perfect completeness and roughly the same security.
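As background, here is a minimal Python sketch of the folklore Merkle's Puzzles key agreement in the ROM; it illustrates the classic protocol with parameters chosen here for illustration, not the modified query distribution proposed in the paper.

    import secrets
    from hashlib import sha256

    DOMAIN = 2**20    # size N of the oracle's domain (illustrative)
    QUERIES = 2**10   # ~sqrt(N) oracle queries per honest party

    def oracle(x: int) -> bytes:
        # Stand-in for the shared random oracle of the ROM.
        return sha256(x.to_bytes(8, "big")).digest()

    def honest_party():
        # Query ~sqrt(N) random points and publish only the oracle answers.
        points = [secrets.randbelow(DOMAIN) for _ in range(QUERIES)]
        return {oracle(x): x for x in points}

    def merkle_puzzles():
        alice, bob = honest_party(), honest_party()
        # By the birthday bound, the two query sets collide with constant
        # probability.  The colliding point becomes the shared key, while an
        # eavesdropper who sees only the published hashes needs about N
        # queries to invert one of them (the quadratic gap mentioned above).
        common = set(alice) & set(bob)
        if not common:
            return None                # honest parties would simply retry
        h = min(common)                # deterministic tie-breaking
        # If the oracle is not injective, Alice's and Bob's preimages of h may
        # differ (exactly the completeness gap discussed in the abstract).
        return alice[h], bob[h]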
Last updated:  2024-03-08
An optimization of the addition gate count in Plonkish circuits
Steve Thakur
We slightly generalize Plonk's ([GWC19]) permutation argument by replacing permutations with (possibly non-injective) self-maps of an interval. We then use this succinct argument to obtain a protocol for weighted sums on committed vectors, which, in turn, allows us to eliminate the intermediate gates arising from high fan-in additions in Plonkish circuits. We use the KZG10 polynomial commitment scheme, which allows for a universal updateable CRS of size linear in the circuit size. In keeping with our recent work ([Th23]), we have used the monomial basis since it is compatible with any sufficiently large prime scalar field. In settings where the scalar field has a suitable smooth order subgroup, the techniques can be efficiently ported to a Lagrange basis. The proof size is constant, as is the verification time, which is dominated by a single pairing check. For committed vectors of length $n$, proof generation takes $O(n\cdot \log(n))$ time and is dominated by the $\mathbb{G}_1$-MSMs and a single sum of a few polynomial products over the prime scalar field via multimodular FFTs.
Last updated:  2023-08-30
Quantum security analysis of Wave
Johanna Loyer
Wave is a code-based digital signature scheme. Its security relies on the unforgeability of signatures and the indistinguishability of its public key, a parity-check matrix of a ternary $(U, U+V)$-code. The best known attacks on these two problems involve solving the Decoding Problem using the Information Set Decoding (ISD) algorithm. Our main contribution is the description of a quantum smoothed Wagner's algorithm within the ISD, which improves the forgery attack on Wave in the quantum model. We also recap the best known key and forgery attacks against Wave in the classical and quantum models. For each one, we explicitly express their time complexity as a function of the Wave parameters and deduce the claimed security of Wave.
Last updated:  2023-08-24
Phoenixx: Linear consensus with random sampling
David Chaum, Bernardo Cardoso, William Carter, Mario Yaksetig, and Baltasar Aroso
We present Phoenixx, a round- and leader-based Byzantine fault tolerant consensus protocol that operates in the partial synchrony network communications model. Phoenixx combines the three-phase approach from HotStuff with a novel Endorser Sampling, which selects a subset of nodes, called endorsers, to "compress" the opinion of the network. Unlike traditional sampling approaches that select a subset of the network to run consensus on behalf of the network and disseminate the outcome, Phoenixx still requires participation of the whole network. The endorsers, however, assume a special role as they confirm that at least $2f+1$ validators are in agreement and issue a compressed certificate, attesting that the network reached a decision. Phoenixx achieves linear communication complexity, while maintaining safety, liveness, and optimistic responsiveness, without using threshold signatures.
Last updated:  2024-05-20
Faster constant-time evaluation of the Kronecker symbol with application to elliptic curve hashing
Diego F. Aranha, Benjamin Salling Hvass, Bas Spitters, and Mehdi Tibouchi
We generalize the Bernstein-Yang (BY) algorithm for constant-time modular inversion to compute the Kronecker symbol, of which the Jacobi and Legendre symbols are special cases. We start by developing a basic and easy-to-implement divstep version of the algorithm defined in terms of full-precision division steps. We then describe an optimized version due to Hamburg over word-sized inputs, similar to the jumpdivstep version of the BY algorithm, and formally verify its correctness. Along the way, we introduce a number of optimizations for implementing both versions in constant time and at high speed. The resulting algorithms are particularly suitable for the special case of computing the Legendre symbol with dense prime $p$, where no efficient addition chain is known for the conventional approach by exponentiation to $\frac{p-1}{2}$. This is often the case for the base field of popular pairing-friendly elliptic curves. Our high-speed implementation for a range of parameters shows that the new algorithm is up to 40 times faster than the conventional exponentiation approach, and up to 25.7\% faster than the previous state of the art. We illustrate the performance of the algorithm with an application to hashing to elliptic curves, where the observed savings amount to 14.7\% -- 48.1\% when used for testing quadratic residuosity within the SwiftEC hashing algorithm. We also apply our techniques to the CTIDH isogeny-based key exchange, with savings of 3.5--13.5\%.
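For context, here is a small, non-constant-time Python sketch of the two classical approaches the abstract refers to: the conventional Legendre symbol via Euler's criterion (one exponentiation to $(p-1)/2$), and a textbook gcd-style Jacobi computation, which is the kind of iteration the paper's constant-time divstep algorithm replaces with branch-free division steps. This is illustrative only and does not reproduce the paper's algorithm.

    def legendre_euler(a: int, p: int) -> int:
        # Conventional approach: Euler's criterion, a^((p-1)/2) mod p.
        # Returns 1, -1, or 0; assumes p is an odd prime.
        r = pow(a % p, (p - 1) // 2, p)
        return -1 if r == p - 1 else r

    def jacobi(a: int, n: int) -> int:
        # Textbook Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity.
        assert n > 0 and n % 2 == 1
        a %= n
        t = 1
        while a != 0:
            while a % 2 == 0:          # pull out factors of two
                a //= 2
                if n % 8 in (3, 5):    # (2/n) = -1 iff n = 3, 5 (mod 8)
                    t = -t
            a, n = n, a                # quadratic reciprocity swap
            if a % 4 == 3 and n % 4 == 3:
                t = -t
            a %= n
        return t if n == 1 else 0

For an odd prime p, jacobi(a, p) agrees with legendre_euler(a, p), but its branches and running time depend on the inputs, which is precisely what the constant-time algorithms of the paper are designed to avoid.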
Last updated:  2023-08-21
Public-Key Encryption from Average Hard NP Language
Hongda Li, Peifang Ni, and Yao Zan
The question of whether public-key encryption (PKE) can be constructed from the assumption that one-way functions (OWF) exist remains a central open problem. In this paper we give two constructions of a bit PKE scheme derived from any NP language $L$, along with a polynomial-time instance-witness sampling algorithm. Furthermore, we prove that if $L$ is an average-hard NP language, then the presented schemes are CPA secure. Our results give a positive answer to this longstanding problem, as the existence of OWF implies the existence of an average-hard NP language with a polynomial-time instance-witness sampling algorithm. Additionally, we obtain a witness encryption (WE) scheme for NP languages based on the presented PKE scheme. This result highlights that a WE scheme can also be established based on the existence of OWF.
Last updated:  2023-08-21
Nonlinear computations on FinTracer tags
Michael Brand, Tania Churchill, and Carsten Friedrich
Recently, the FinTracer algorithm was introduced as a versatile framework for detecting economic crime typologies in a privacy-preserving fashion. Under the hood, FinTracer stores its data in a structure known as the "FinTracer tag". One limitation of FinTracer tags, however, is that because their underlying cryptographic implementation relies on additive semi-homomorphic encryption, all the system's oblivious computations on tag data are linear in their input ciphertexts. This allows a FinTracer user to combine information from multiple tags in some ways, but not generically. In this paper, we describe an efficient method to perform general nonlinear computations on FinTracer tags, and show how this ability can be used to detect a wide range of complex crime typologies, as well as to extract many new types of information, while retaining all of FinTracer's original privacy guarantees.
Last updated:  2023-08-20
Efficient Oblivious Sorting and Shuffling for Hardware Enclaves
Tianyao Gu, Yilei Wang, Bingnan Chen, Afonso Tinoco, Elaine Shi, and Ke Yi
Oblivious sorting is arguably the most important building block in the design of efficient oblivious algorithms. We propose new oblivious sorting algorithms for hardware enclaves. Our algorithms achieve asymptotic optimality in terms of both computational overhead and the number of page swaps the enclave has to make to fetch data from insecure memory or disk. We also aim to minimize the concrete constants inside the big-O. One of our algorithms achieves bounds tight to the constant in terms of the number of page swaps. We have implemented our algorithms and made them publicly available as open source. In comparison with (an unoptimized version of) bitonic sort, which is asymptotically non-optimal but the de facto algorithm used in practice, we achieve a speedup of 2000 times for 12 GB inputs.
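For reference, a minimal Python sketch of the bitonic sort baseline mentioned above (the paper's own enclave-optimized algorithms are not reproduced here). The compare-exchange positions depend only on the input length, never on the data, which is what makes the access pattern oblivious; the price is the non-optimal $O(n\log^2 n)$ comparison count.

    def bitonic_sort(a: list, ascending: bool = True) -> list:
        # Textbook bitonic sorting network; assumes len(a) is a power of two.
        def merge(lo: int, n: int, up: bool) -> None:
            if n <= 1:
                return
            half = n // 2
            for i in range(lo, lo + half):
                # Compare-exchange at fixed positions (i, i + half).
                if (a[i] > a[i + half]) == up:
                    a[i], a[i + half] = a[i + half], a[i]
            merge(lo, half, up)
            merge(lo + half, half, up)

        def sort(lo: int, n: int, up: bool) -> None:
            if n <= 1:
                return
            half = n // 2
            sort(lo, half, True)            # first half ascending
            sort(lo + half, half, False)    # second half descending -> bitonic
            merge(lo, n, up)

        sort(0, len(a), ascending)
        return a

    # Example: bitonic_sort([7, 3, 6, 1, 4, 8, 5, 2]) == [1, 2, 3, 4, 5, 6, 7, 8]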
Last updated:  2024-02-04
Batchman and Robin: Batched and Non-batched Branching for Interactive ZK
Yibin Yang, David Heath, Carmit Hazay, Vladimir Kolesnikov, and Muthuramakrishnan Venkitasubramaniam
Vector Oblivious Linear Evaluation (VOLE) supports fast and scalable interactive Zero-Knowledge (ZK) proofs. Despite recent improvements to VOLE-based ZK, compiling proof statements to a control-flow oblivious form (e.g., a circuit) continues to lead to expensive proofs. One useful setting where this inefficiency stands out is when the statement is a disjunction of clauses L1 ∨ · · · ∨ LB. Typically, ZK requires paying the price to handle all B branches. Prior works have shown how to avoid this price in communication, but not in computation. Our main result, Batchman, is asymptotically and concretely efficient VOLE-based ZK for batched disjunctions, i.e., statements containing R repetitions of the same disjunction. This is crucial for, e.g., emulating CPU steps in ZK. Our prover and verifier complexity is only O(RB + R|C| + B|C|), where |C| is the maximum circuit size of the B branches. Prior works’ computation scales with RB|C|. For non-batched disjunctions, we also construct a VOLE-based ZK protocol, Robin, which is (only) communication-efficient. For small fields and for statistical security parameter λ, this protocol’s communication improves over the previous state of the art (Mac′n′Cheese, Baum et al., CRYPTO’21) by a factor of up to λ. Our implementation outperforms the prior state of the art. E.g., we achieve up to $6×$ improvement over Mac′n′Cheese (Boolean, single disjunction), and for arithmetic batched disjunctions our experiments show we improve over QuickSilver (Yang et al., CCS’21) by up to $70×$ and over AntMan (Weng et al., CCS’22) by up to $36×$.
Last updated:  2024-03-05
On Soundness Notions for Interactive Oracle Proofs
Alexander R. Block, Albert Garreta, Pratyush Ranjan Tiwari, and Michał Zając
Interactive oracle proofs (IOPs) (Ben-Sasson et al., TCC 2016; Reingold et al., SICOMP 2021) have emerged as a powerful model for proof systems combining IP and PCP. While IOPs are not any more powerful than PCPs from a complexity theory perspective, their potential to create succinct proofs and arguments has been demonstrated by many recent constructions achieving better parameters such as total proof length, alphabet size, and query complexity. In this work, we establish new results on the relationship between various notions of soundness for IOPs. First, we formally generalize the notion of round-by-round soundness (Canetti et al., STOC 2019) and round-by-round knowledge soundness (Chiesa et al., TCC 2019). Given this generalization, we then examine its relationship to the notions of generalized special soundness (Attema et al., CRYPTO 2021) and generalized special unsoundness (Attema et al., TCC 2022). We show that: 1. generalized special soundness implies generalized round-by-round soundness; 2. generalized round-by-round knowledge soundness implies generalized special soundness; 3. generalized special soundness does not imply generalized round-by-round knowledge soundness; 4. generalized round-by-round soundness (resp., special unsoundness) is an upper bound (resp., a lower bound) on standard soundness, and this relationship is tight when the round-by-round soundness and special unsoundness errors are equal; and 5. any special sound IOP can be transformed via (a variant of) the Fiat-Shamir transformation (in the Random Oracle Model) into a non-interactive proof that is adaptively sound in the Quantum Random Oracle Model.
Last updated:  2023-09-13
A flexible Snark via the monomial basis
Steve Thakur
We describe a pairing-based Snark with a universal updateable CRS that can be instantiated with any pairing-friendly curve endowed with a sufficiently large prime scalar field. We use the monomial basis, thus sidestepping the need for large smooth order subgroups in the scalar field. In particular, the scheme can be instantiated with outer curves to widely used curves such as Ed25519, secp256k1, BN254 and BLS12-381. This allows us to largely circumvent the overhead of non-native field arithmetic for succinct proofs of valid signatures in Ed25519 and secp256k1 and one-layer recursion with BN254 or BLS12-381. The proof size is constant ($10$ $\mathbb{G}_1$, $20$ $\mathbb{F}_p$), as is the verification time, which is dominated by a single pairing check (i.e. two pairings). The Prover time is dominated by the $10$ multi-scalar multiplications in $\mathbb{G}_1$ (with a combined MSM length of $22\cdot |\mathrm{Circuit}|$) and, to a lesser extent, the computation of a single sum of polynomial products over the prime scalar field via multimodular FFTs. The scheme supports succinct lookup arguments for subsets as well as subsequences. Our construction relies on homomorphic table commitments, which makes them amenable to vector lookups. The Prover algorithm runs in time $O(M\cdot \log(M))$, where $M = \max \{|\text{Circuit}| , \;|\text{Table}|\}.$ Furthermore, the scheme supports custom gates, albeit at the cost of a larger proof size. As an application of the techniques in this paper, we describe a protocol that supports multiple *univariate* custom gates $\mathcal{G}_i$ of high degree that are sparsely distributed in the sense that $$\sum_{i} \deg(\mathcal{G}_i)\cdot \#(\mathcal{G}_i\;\text{gates}) \; = \; O(|\text{Circuit}|). $$ This comes at the cost of three additional $\mathbb{G}_1$ elements and does not blow up the proof generation time, i.e. it does not entail MSMs or FFTs of length larger than the circuit size.
Last updated:  2024-02-19
LaKey: Efficient Lattice-Based Distributed PRFs Enable Scalable Distributed Key Management
Matthias Geihs and Hart Montgomery
Distributed key management (DKM) services are multi-party services that allow their users to outsource the generation, storage, and usage of cryptographic private keys, while guaranteeing that none of the involved service providers learn the private keys in the clear. This is typically achieved through distributed key generation (DKG) protocols, where the service providers generate the keys on behalf of the users in an interactive protocol, and each of the servers stores a share of each key as the result. However, with traditional DKM systems, the key material stored by each server grows linearly with the number of users. An alternative approach to DKM is via distributed key derivation (DKD) where the user key shares are derived on-demand from a constant-size (in the number of users) secret-shared master key and the corresponding user's identity, which is achieved by employing a suitable distributed pseudorandom function (dPRF). However, existing suitable dPRFs require on the order of 100 interaction rounds between the servers and are therefore insufficient for settings with high network latency and where users demand real-time interaction. To resolve the situation, we initiate the study of lattice-based distributed PRFs, with a particular focus on their application to DKD. Concretely, we show that the LWE-based PRF presented by Boneh et al. at CRYPTO'13 can be turned into a distributed PRF suitable for DKD that runs in only 8 online rounds, which is an improvement over the state of the art by an order of magnitude. We further present optimizations of this basic construction. We show a new construction with improved communication efficiency proven secure under the same "standard" assumptions. Then, we present even more efficient constructions, running in as few as 5 online rounds, from non-standard, new lattice-based assumptions. We support our findings by implementing and evaluating our protocol using the MP-SPDZ framework (Keller, CCS '20). Finally, we give a formal definition of our DKD in the UC framework and prove a generic construction (which our construction instantiates) secure in this model.
Last updated:  2024-04-08
Ordering Transactions with Bounded Unfairness: Definitions, Complexity and Constructions
Aggelos Kiayias, Nikos Leonardos, and Yu Shen
An important consideration in the context of distributed ledger protocols is fairness in terms of transaction ordering. Recent work [Crypto 2020] revealed a connection of (receiver) order fairness to social choice theory and related impossibility results arising from the Condorcet paradox. As a result of the impossibility, various relaxations of order fairness were proposed in prior works. Given that distributed ledger protocols, especially those processing smart contracts, must serialize the input transactions, a natural objective is to minimize the distance (in terms of number of transactions) between any pair of unfairly ordered transactions in the output ledger — a concept we call bounded unfairness. In state machine replication (SMR) parlance, this asks for minimizing the number of unfair state updates occurring before the processing of any request. This unfairness minimization objective gives rise to a natural class of parametric order fairness definitions that has not been studied before. As we observe, previous realizable relaxations of order fairness do not yield good unfairness bounds. Achieving optimal order fairness in the sense of bounded unfairness turns out to be connected to the graph theoretic properties of the underlying transaction dependency graph and specifically the bandwidth metric of strongly connected components in this graph. This gives rise to a specific instance of the definition that we call “directed bandwidth order-fairness”, which we show captures the best that any ledger protocol can achieve in terms of bounding unfairness. We prove that ordering transactions in this fashion is NP-hard and cannot be approximated within any constant ratio. Towards realizing the property, we put forth a new distributed ledger protocol called Taxis that achieves directed bandwidth order-fairness. We present two variations, one that matches the property perfectly but (necessarily) lacks in performance and liveness, and another that achieves liveness and better complexity while offering a slightly relaxed version of the property. Finally, we comment on applications of our work to social choice, a direction which we believe to be of independent interest.
Last updated:  2023-08-21
Towards Private Deep Learning-based Side-Channel Analysis using Homomorphic Encryption
Fabian Schmid, Shibam Mukherjee, Stjepan Picek, Marc Stöttinger, Fabrizio De Santis, and Christian Rechberger
Side-channel analysis certification is a process designed to certify the resilience of cryptographic hardware and software implementations against side-channel attacks. In certain cases, third-party evaluations by external companies or departments are necessary due to limited budget, time, or even expertise, at the cost of a significant exchange of sensitive information during the evaluation process. In this work, we investigate the potential of Homomorphic Encryption (HE) in performing side-channel analysis on HE-encrypted measurements. With HE applied to side-channel analysis (SCA), a third party can perform SCA on encrypted measurement data and provide the outcome of the analysis without gaining insights about the actual cryptographic implementation under test. To this end, we evaluate its feasibility by analyzing the impact of AI-based side-channel analysis using HE (private SCA) on accuracy and execution time and compare the results with an ordinary AI-based side-channel analysis (plain SCA). Our work suggests that both unprotected and protected cryptographic implementations can be successfully attacked already today with standard server equipment and modern HE protocols/libraries, while the traces are HE-encrypted.
Last updated:  2023-10-10
Verifiable random function from the Deuring correspondence and higher dimensional isogenies
Antonin Leroux
In this paper, we introduce the family $\mathsf{DeuringVRF}_{y,z}$ of Verifiable Random Function (VRF) protocols. Based on isogenies between supersingular curves, the random function at the heart of our scheme is the one that computes the codomain of an isogeny of large prime degree from its kernel. In $\mathsf{DeuringVRF}_{y,z}$, the evaluation is done with algorithms for the Deuring correspondence that make use of isogenies in dimension $z$, and the verification is based on the isogeny representation obtained from isogenies in dimension $y$. The main advantage of the $\mathsf{DeuringVRF}_{y,z}$ family is its compactness, with proof sizes of a few hundred bytes, which is orders of magnitude smaller than other general-purpose post-quantum VRF constructions. We describe four variants of our scheme with $(y,z) \in \lbrace (2,1),(2,2),(4,1), (4,2) \rbrace$, each offering different tradeoffs between compactness, evaluation efficiency and verification efficiency. In the process, we introduce several new algorithms that might be of independent interest. In particular, for the variants with $z=2$, we introduce the first algorithm to translate an ideal into the corresponding isogeny of dimension $1$ using isogenies between abelian varieties of dimension $2$ as a tool. The main advantage of this new algorithm compared to existing solutions is the relaxation of the constraints on the prime characteristic: our new algorithm can run efficiently with "SIDH primes" that are very easy to generate, unlike "SQIsign primes" that are currently required by the state-of-the-art approach. We believe that this algorithm opens a promising research direction for speeding up other schemes based on the Deuring correspondence, such as the SQIsign signature scheme.
Last updated:  2023-08-18
Revealable Functional Commitments: How to Partially Reveal a Secret Function
Bharath Namboothiry
A revealable functional commitment allows a prover to commit to a secret polynomial-size function $f$. Later, the prover has the ability to (1) prove that $y = f(x)$ for public $x, y$ and (2) open a small window into $f$'s machinery, via an encoded set of constraints - all without divulging any other information about $f$. In this way, revealable functional commitments allow the operator of a proprietary function to prove desired predicates about the function. For example, a government can commit to a bail decision algorithm, and prove that the same algorithm is being used for all defendants. It can also quell concerns about bias and increase transparency by revealing windows into what its function does, while keeping most of the function secret to prevent exploitation. To build a revealable functional commitment, we introduce a 'proof of reveal' to show that a set of constraints, combined with a set of guarantees about those constraints, is consistent with a committed secret function. We show that combining an algebraic holographic proof (AHP), a 'proof of function relation' (PFR), and a proof of reveal yields a secure revealable functional commitment scheme. Additionally, we construct proofs of reveal for two popular PFR-equipped AHPs, and obtain two instantiations of revealable functional commitments. Towards that end, we also develop interactive protocols that prove properties of committed polynomials, which may be of independent interest.
Last updated:  2023-12-07
On the Black-Box Impossibility of Multi-Designated Verifiers Signature Schemes from Ring Signature Schemes
Kyosuke Yamashita and Keisuke Hara
Since the work by Laguillaumie and Vergnaud in ICICS'04, it has been widely believed that multi-designated verifier signature schemes (MDVS) can be constructed from ring signature schemes in general. However, in this paper, somewhat surprisingly, we prove that it is impossible to construct an MDVS scheme from a ring signature scheme in a black-box sense (in the standard model). The impossibility stems from the difference between the definitions of unforgeability. To the best of our knowledge, existing works demonstrating such constructions do not provide a formal reduction from an MDVS scheme to a ring signature scheme, and thus the impossibility has been overlooked for a long time.
Last updated:  2023-08-18
A Note on ``Authenticated Key Agreement Protocol for Secure Communication Establishment in Vehicle-to-Grid Environment With FPGA Implementation''
Zhengjun Cao and Lihua Liu
We show that the key agreement scheme [IEEE Trans. Veh. Technol. 71(4): 3470-3479, 2022] fails to provide user anonymity as claimed.
Last updated:  2024-08-02
Representations of Group Actions and their Applications in Cryptography
Giuseppe D'Alconzo and Antonio J. Di Scala
Cryptographic group actions provide a flexible framework that allows the instantiation of several primitives, ranging from key exchange protocols to PRFs and digital signatures. The security of such constructions is based on the intractability of some computational problems. For example, given the group action $(G,X,\star)$, the weak unpredictability assumption (Alamati et al., Asiacrypt 2020) requires that, given random $x_i$'s in $X$, no probabilistic polynomial time algorithm can compute, on input $\{(x_i,g\star x_i)\}_{i=1,\dots,Q}$ and $y$, the set element $g\star y$. In this work, we study such assumptions, aided by the definition of group action representations and a new metric, the $q$-linear dimension, that estimates the "linearity" of a group action, or in other words, how far it is from being linear. We show that under some hypotheses on the group action representation, and if the $q$-linear dimension is polynomial in the security parameter, then the weak unpredictability and other related assumptions cannot hold. This technique is applied to some actions from cryptography, such as the ones arising from the equivalence of linear codes; as a result, we obtain the impossibility of using such actions for the instantiation of certain primitives. As an additional result, some bounds on the $q$-linear dimension are given for classical groups, such as $\mathcal{S}_n$, $\mathrm{GL}(\mathbb{F}^n)$ and the cyclic group $\mathbb{Z}_n$ acting on itself.
Last updated:  2024-02-09
Automated Analysis of Protocols that use Authenticated Encryption: How Subtle AEAD Differences can impact Protocol Security
Cas Cremers, Alexander Dax, Charlie Jacomme, and Mang Zhao
Many modern security protocols such as TLS, WPA2, WireGuard, and Signal use a cryptographic primitive called Authenticated Encryption (optionally with Authenticated Data), also known as an AEAD scheme. AEAD is a variant of symmetric encryption that additionally provides authentication. While authentication may seem to be a straightforward additional requirement, it has in fact turned out to be complex: many different security notions for AEADs are still being proposed, and several recent protocol-level attacks exploit subtle behaviors that differ among real-world AEAD schemes. We provide the first automated analysis method for protocols that use AEADs that can systematically find attacks that exploit the subtleties of the specific type of AEAD used. This can then be used to analyze specific protocols with a fixed AEAD choice, or to provide guidance on which AEADs might be (in)sufficient to make a protocol design secure. We develop generic symbolic AEAD models, which we instantiate for the Tamarin prover. Our approach can automatically and efficiently discover protocol attacks that could previously only be found using manual inspection, such as the Salamander attack on Facebook’s message franking, and attacks on SFrame and YubiHSM. Furthermore, our analysis reveals undesirable behaviors of several other protocols.
Last updated:  2023-08-17
Probabilistic Related-Key Statistical Saturation Cryptanalysis
Muzhou Li, Nicky Mouha, Ling Sun, and Meiqin Wang
The related-key statistical saturation (RKSS) attack is a cryptanalysis method proposed by Li et al. at FSE 2019. It can be seen as the extension of previous statistical saturation attacks under the related-key setting. The attack takes advantage of a set of plaintexts with some bits fixed, while the other bits take all possible values, and considers the relation between the value distributions of a part of the ciphertext bits generated under related keys. Usually, RKSS distinguishers exploit the property that the value distribution stays invariant under the modification of the key. However, this property can only be deterministically verified if the plaintexts cover all possible values of a selection of bits. In this paper, we propose the probabilistic RKSS cryptanalysis which avoids iterating over all non-fixed plaintext bits by applying a statistical method on top of the original RKSS distinguisher. Compared to the RKSS attack, this newly proposed attack has a significantly lower data complexity and has the potential of attacking more rounds. As an illustration, for reduced-round Piccolo, we obtain the best key recovery attacks (considering both pre- and post-whitening keys) on both versions in terms of the number of rounds. Note that these attacks do not threaten the full-round security of Piccolo.
Last updated:  2024-03-01
HERMES: Efficient Ring Packing using MLWE Ciphertexts and Application to Transciphering
Youngjin Bae, Jung Hee Cheon, Jaehyung Kim, Jai Hyun Park, and Damien Stehlé
Most of the current fully homomorphic encryption (FHE) schemes are based on either the learning-with-errors (LWE) problem or on its ring variant (RLWE) for storing plaintexts. During the homomorphic computation of FHE schemes, RLWE formats provide high throughput when considering several messages, and LWE formats provide a low latency when there are only a few messages. Efficient conversion can bridge the advantages of each format. However, converting LWE formats into RLWE format, which is called ring packing, has been a challenging problem. We propose an efficient solution for ring packing for FHE. The main improvement of this work is twofold. First, we accelerate the existing ring packing methods by using bootstrapping and ring switching techniques, achieving practical runtimes. Second, we propose a new method for efficient ring packing, HERMES, by using ciphertexts in Module-LWE (MLWE) formats, to also reduce the memory. To this end, we generalize the tools of LWE and RLWE formats for MLWE formats. On a single-thread implementation, HERMES consumes $10.2$s for the ring packing of $2^{15}$ LWE-format ciphertexts into an RLWE-format ciphertext. This gives $41$x higher throughput compared to the state-of-the-art ring packing for FHE, PEGASUS [S&P'21], which takes $51.7$s for packing $2^{12}$ LWE ciphertexts with similar homomorphic capacity. We also illustrate the efficiency of HERMES by using it for transciphering from LWE symmetric encryption to CKKS fully homomorphic encryption, significantly outperforming the recent proposals HERA [Asiacrypt'21] and Rubato [Eurocrypt'22].
Last updated:  2023-08-16
Multi-Stage Group Key Distribution and PAKEs: Securing Zoom Groups against Malicious Servers without New Security Elements
Cas Cremers, Eyal Ronen, and Mang Zhao
Video conferencing apps like Zoom have hundreds of millions of daily users, making them a high-value target for surveillance and subversion. While such apps claim to achieve some forms of end-to-end encryption, they usually assume an incorruptible server that is able to identify and authenticate all the parties in a meeting. Concretely this means that, e.g., even when using the “end-to-end encrypted” setting, malicious Zoom servers could eavesdrop or impersonate in arbitrary groups. In this work, we show how security against malicious servers can be improved by changing the way in which such protocols use passwords (known as passcodes in Zoom) and integrating a password-authenticated key exchange (PAKE) protocol. To formally prove that our approach achieves its goals, we formalize a class of cryptographic protocols suitable for this setting, and define a basic security notion for them, in which group security can be achieved assuming the server is trusted to correctly authorize the group members. We prove that Zoom indeed meets this notion. We then propose a stronger security notion that can provide security against malicious servers, and propose a transformation that can achieve this notion. We show how we can apply our transformation to Zoom to provably achieve stronger security against malicious servers, notably without introducing new security elements.
Last updated:  2023-08-24
Cascading Four Round LRW1 is Beyond Birthday Bound Secure
Nilanjan Datta, Shreya Dey, Avijit Dutta, and Sougata Mandal
In CRYPTO'02, Liskov et al. introduced a new symmetric-key primitive called the tweakable block cipher. They proposed two constructions for designing a tweakable block cipher from a block cipher. The first proposed construction is called $\mathsf{LRW1}$ and the second is called $\mathsf{LRW2}$. Although $\mathsf{LRW2}$ has been extended in later works to provide beyond-birthday-bound security (e.g., cascaded $\mathsf{LRW2}$ in CRYPTO'12 by Landecker et al.), the extension of $\mathsf{LRW1}$ received no attention until the work of Bao et al. in EUROCRYPT'20, where the authors showed that a one-round extension of $\mathsf{LRW1}$, i.e., masking the output of $\mathsf{LRW1}$ with the given tweak and then re-encrypting it with the same block cipher, gives security up to $2^{2n/3}$ queries. Recently, Khairallah has shown a birthday-bound distinguishing attack on the construction and hence invalidated the security claim of Bao et al. This has led to the open research question of how many rounds are required for cascading $\mathsf{LRW1}$ to achieve beyond-birthday-bound security. In this paper, we show that cascading $\mathsf{LRW1}$ up to four rounds is sufficient for ensuring beyond-birthday-bound security. In particular, we show that $\mathsf{CLRW1}^4$ provides security up to $2^{3n/4}$ queries. The security analysis of our construction is based on the recent development of the mirror theory technique for tweakable random permutations under the framework of the Expectation Method.
Last updated:  2023-08-16
Post-Quantum Single Secret Leader Election (SSLE) From Publicly Re-randomizable Commitments
Dan Boneh, Aditi Partap, and Lior Rotem
A Single Secret Leader Election (SSLE) enables a group of parties to randomly choose exactly one leader from the group with the restriction that the identity of the leader will be known to the chosen leader and nobody else. At a later time, the elected leader should be able to publicly reveal her identity and prove that she is the elected leader. The election process itself should work properly even if many registered users are passive and do not send any messages. SSLE is used to strengthen the security of proof-of-stake consensus protocols by ensuring that the identity of the block proposer remains unknown until the proposer publishes a block. Boneh, Eskandarian, Hanzlik, and Greco (AFT'20) defined the concept of an SSLE and gave several constructions. Their most efficient construction is based on the difficulty of the Decision Diffie-Hellman problem in a cyclic group. In this work we construct the first efficient SSLE protocols based on the standard Learning With Errors (LWE) problem on integer lattices, as well as the Ring-LWE problem. Both are believed to be post-quantum secure. Our constructions generalize the paradigm of Boneh et al. by introducing the concept of a re-randomizable commitment (RRC). We then construct several post-quantum RRC schemes from lattice assumptions and prove the security of the derived SSLE protocols. Constructing a lattice-based RRC scheme is non-trivial, and may be of independent interest.
Last updated:  2023-10-19
Improved SNARK Frontend for Highly Repetitive Computations
Sriram Sridhar and Yinuo Zhang
Modern SNARK designs usually feature a frontend-backend paradigm: The frontend compiles a user's program into some equivalent circuit representation, while the backend calls for a SNARK specifically made for proving circuit satisfiability. While the circuit may be defined over small fields, the backend prover often needs to lift the computation to much larger fields to achieve soundness. This gap results in concrete overheads, for example, when proving SHA2 programs with pairing-based SNARKs. For a class of computations that are *highly repetitive*, we propose an improved frontend that partially bridges this gap. Compared with existing works, our frontend yields circuit representations defined over larger fields but of smaller size. Our implementation shows that for $\approx 100$ iterations of SHA2-256 instances, our improved frontend speeds up the prover by over $3.8 \times$. Central to our result, and of independent interest, is an efficient technique for proving non-native ring arithmetic.
Last updated:  2023-08-16
CSI-Otter: Isogeny-based (Partially) Blind Signatures from the Class Group Action with a Twist
Shuichi Katsumata, Yi-Fu Lai, Jason T. LeGrow, and Ling Qin
In this paper, we construct the first provably-secure isogeny-based (partially) blind signature scheme. While at a high level the scheme resembles the Schnorr blind signature, our work does not directly follow from that construction, since isogenies do not offer as rich an algebraic structure. Specifically, our protocol does not fit into the "linear identification protocol" abstraction introduced by Hauck, Kiltz, and Loss (EUROCRYPT'19), which was used to generically construct Schnorr-like blind signatures based on modules such as classical groups and lattices. Consequently, our scheme is provably secure under poly-logarithmically many (in the security parameter) concurrent executions and does not seem susceptible to the recent efficient ROS attack exploiting the linear nature of the underlying mathematical tool. In more detail, our blind signature exploits the "quadratic twist" of an elliptic curve in an essential way to endow isogenies with a strictly richer structure than abstract group actions (but still more restrictive than modules). The basic scheme has public key size $128$ B and signature size $8$ KB under the CSIDH-512 parameter sets---these are the smallest among all provably secure post-quantum blind signatures. Relying on a new "ring" variant of the group action inverse problem rGAIP, we can halve the signature size to 4 KB while increasing the public key size to 512 B. We provide preliminary cryptanalysis of rGAIP and show that for certain parameter settings, it is essentially as secure as the standard GAIP. Finally, we show a novel way to turn our blind signature into a partially blind signature, where we deviate from prior methods since they require hashing into the set of public keys while hiding the corresponding secret key---constructing such a hash function in the isogeny setting remains an open problem.
Last updated:  2023-12-29
A remark on the Independence Heuristic in the Dual Attack
Andreas Wiemers, Stephan Ehlen, and Kaveh Bashiri
Ducas and Pulles, in "Does the Dual-Sieve Attack on Learning with Errors even Work?", report in particular on experiments comparing the distributions of scores for random targets and BDD targets. They discovered that the distribution of scores for BDD targets deviates from the predictions made under the independence heuristic. Here, we derive approximations for the distributions which take into account the dependencies that occur in the scores. These approximations allow us to find heuristic estimates for the success probability of distinguishing between the two distributions.
Last updated:  2023-09-21
More Balanced Polynomials: Cube Attacks on 810- and 825-Round Trivium with Practical Complexities
Hao Lei, Jiahui He, Kai Hu, and Meiqin Wang
The key step of the cube attack is to recover the special polynomial, the superpoly, of the target cipher. In particular, the balanced superpoly, in which there exists at least one secret variable as a single monomial and none of the other monomials contain this variable, can be exploited to reveal one bit of information about the key. However, as the number of rounds grows, it becomes increasingly difficult to find such balanced superpolies. Consequently, traditional methods of searching for balanced superpolies soon hit a bottleneck. Aiming at performing a cube attack on more rounds of Trivium with a practical complexity, in this paper, we present three techniques to obtain sufficiently many balanced polynomials. 1. Based on the structure of Trivium, we propose a variable substitution technique to simplify the superpoly. 2. We obtain additional balanced polynomials by combining two superpolies to cancel the degree-two terms. 3. We propose an experimental approach to construct high-quality large cubes which may contain more subcubes with balanced superpolies, together with a heuristic search strategy for those subcubes whose superpolies are balanced. To illustrate the power of our techniques, we search for balanced polynomials for 810- and 825-round Trivium. As a result, we can mount cube attacks against 810- and 825-round Trivium with the time complexity of $2^{44.17}$ and $2^{53.17}$ round-reduced Trivium initializations, respectively, which can be verified in 48 minutes and 18 days on a PC with one A100 GPU. For the same level of time complexity, this improves the previous best results by $2$ and $5$ rounds, respectively.
Last updated:  2023-08-15
Waks-On/Waks-Off: Fast Oblivious Offline/Online Shuffling and Sorting with Waksman Networks
Sajin Sasy, Aaron Johnson, and Ian Goldberg
As more privacy-preserving solutions leverage trusted execution environments (TEEs) like Intel SGX, it becomes pertinent that these solutions can by design thwart TEE side-channel attacks that research has brought to light. In particular, such solutions need to be fully oblivious to circumvent leaking private information through memory or timing side channels. In this work, we present fast fully oblivious algorithms for shuffling and sorting data. Oblivious shuffling and sorting are two fundamental primitives that are frequently used for permuting data in privacy-preserving solutions. We present novel oblivious shuffling and sorting algorithms in the offline/online model such that the bulk of the computation can be done in an offline phase that is independent of the data to be permuted. The resulting online phase provides performance improvements over state-of-the-art oblivious shuffling and sorting algorithms both asymptotically ($O(\beta n\log n)$ vs. $O(\beta n \log^2n)$) and concretely ($>5\times$ and $>3\times$ speedups), when permuting $n$ items each of size $\beta$. Our work revisits Waksman networks, and it uses the key observation that setting the control bits of a Waksman network for a uniformly random shuffle is independent of the data to be shuffled. However, setting the control bits of a Waksman network efficiently and fully obliviously poses a challenge, and we provide a novel algorithm to this end. The total costs (inclusive of offline computation) of our WaksShuffle shuffling algorithm and our WaksSort sorting algorithm are lower than all other fully oblivious shuffling and sorting algorithms when the items are at least moderately sized (i.e., $\beta$ > 1400 B), and the performance gap only widens as the item sizes increase. Furthermore, WaksShuffle improves the online cost of oblivious shuffling by $>5\times$ for shuffling $2^{20}$ items of any size; similarly WaksShuffle+QS, our other sorting algorithm, provides $>2.7\times$ speedups in the online cost of oblivious sorting.
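To illustrate the primitive the offline/online split is built around, here is a minimal Python sketch (our illustration, not the paper's implementation) of the 2-input switch of a Waksman network: a branch-free conditional swap driven by a control bit, so neither the data nor the bit influences the memory access pattern. The control bits of all switches can be set in the offline phase for a uniformly random shuffle, independently of the data, which is the key observation mentioned above.

    def oblivious_switch(x: int, y: int, ctrl: int):
        # Returns (x, y) if ctrl == 0 and (y, x) if ctrl == 1, with no
        # data-dependent branch.  (A real constant-time implementation would
        # use fixed-width words; Python ints are used here for clarity.)
        mask = -(ctrl & 1)        # 0 if ctrl == 0, all-ones if ctrl == 1
        d = (x ^ y) & mask
        return x ^ d, y ^ d

    def apply_layer(data: list, wiring: list, bits: list) -> None:
        # One layer of the network: `wiring` (which index pairs are switched
        # together) is fixed by the topology; `bits` are the control bits set
        # during the offline phase.
        for (i, j), b in zip(wiring, bits):
            data[i], data[j] = oblivious_switch(data[i], data[j], b)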
Last updated:  2024-01-29
LOL: A Highly Flexible Framework for Designing Stream Ciphers
Dengguo Feng, Lin Jiao, Yonglin Hao, Qunxiong Zheng, Wenling Wu, Wenfeng Qi, Lei Zhang, Liting Zhang, Siwei Sun, and Tian Tian
In this paper, we propose LOL, a general framework for designing blockwise stream ciphers, to achieve ultrafast software implementations for the ubiquitous virtual networks in 5G/6G environments and a high security level for post-quantum cryptography. The LOL framework is structurally strong, and all its components as well as the LOL framework itself enjoy high flexibility with various extensions. Following the LOL framework, we propose new stream cipher designs named LOL-MINI and LOL-DOUBLE with the support of the AES-NI and SIMD instructions: the former applies the basic LOL single mode while the latter uses the extended parallel-dual mode. Both LOL-MINI and LOL-DOUBLE support a 256-bit key length and, according to our thorough evaluations, have 256-bit security margins against all existing cryptanalysis methods including differential, linear, integral, etc. The software performances of LOL-MINI and LOL-DOUBLE reach 89 Gbps and 135 Gbps, respectively. In addition to pure encryption, the LOL-MINI and LOL-DOUBLE stream ciphers can also be applied in a stream-cipher-then-MAC strategy to make an AEAD scheme.
Last updated:  2024-01-29
Practical Key-Extraction Attacks in Leading MPC Wallets
Nikolaos Makriyannis, Oren Yomtov, and Arik Galansky
Multi-Party Computation (MPC) has become a major tool for protecting hundreds of billions of dollars in cryptocurrency wallets. MPC protocols are currently powering the wallets of Coinbase, Binance, Zengo, BitGo, Fireblocks and many other fintech companies servicing thousands of financial institutions and hundreds of millions of end-user consumers. We present four novel key-extraction attacks on popular MPC signing protocols showing how a single corrupted party may extract the secret in full during the MPC signing process. Our attacks are highly practical (the practicality of the attack depends on the number of signature-generation ceremonies the attacker participates in before extracting the key). Namely, we show key-extraction attacks against different threshold-ECDSA protocols/implementations requiring $10^6$, $256$, $16$, and *one signature*, respectively. In addition, we provide proof-of-concept code that implements our attacks.
Last updated:  2023-08-24
Tight Security of TNT: Reinforcing Khairallah's Birthday-bound Attack
Ashwin Jha, Mridul Nandi, and Abishanka Saha
In a recent paper, Khairallah demonstrated a birthday-bound attack on TNT, thereby invalidating its (beyond-the-birthday-bound) CCA security claims. In this short note, we reestablish a birthday-bound CCA security bound for TNT. Furthermore, using a minor variant of Khairallah's attack, we show that our security bound is tight. We provide rigorous and complete attack advantage calculations to further enhance confidence in Khairallah's proposed attack strategy.
Last updated:  2023-08-14
Privacy-Preserving Outsourced Certificate Validation
Tarek Galal, Anja Lehmann
Digital Covid certificates are the first widely deployed end-user cryptographic certificates. For service providers, such as airlines or event ticket vendors, that needed to check that their (global) customers satisfy certain health policies, the verification of such Covid certificates was nevertheless challenging - not because of the cryptography involved, but due to the multitude of issuers, the different certificate types, and the evolving nature of country-specific policies that had to be supported. As Covid certificates contain sensitive health information, their (online) presentation to entities outside the health sector also poses a clear privacy risk. To address both challenges, the EU proposed a specification for outsourcing the verification process to a validator service that executes the process and informs service providers of the result. The WHO has announced that it will adopt this approach for general vaccination credentials beyond Covid-19. While beneficial for improving security and privacy on the service providers' side, this solution requires a strong trust assumption for the (central) validation service, which learns all health-related details of the users. In our work, we propose and formally model a privacy-preserving variant of such an outsourced validation service. Therein, the validator learns the attributes it is supposed to verify, but not the user's identity. Still, the validator's assertion is blindly bound to the user's identity to ensure the desired user-binding. We analyze the EU specification in our model and show that it only meets a subset of those goals. Our analysis further shows that the EU protocol is unnecessarily complex and can be significantly simplified while maintaining the same (weak) level of security. Finally, we propose a new construction for privacy-preserving certificate validation that provably satisfies all desired goals.
Last updated:  2023-09-01
PMNS revisited for consistent redundancy and equality test
Fangan Yssouf Dosso, Alexandre Berzati, Nadia El Mrabet, and Julien Proy
The Polynomial Modular Number System (PMNS) is a non-positional number system for modular arithmetic. A PMNS is defined by a tuple $(p, n, \gamma, \rho, E)$, where $p$, $n$, $\gamma$ and $\rho$ are positive non-zero integers and $E\in\mathbb{Z}_{n}[X]$ is a monic polynomial such that $E(\gamma) \equiv 0 \pmod p$. The PMNS is a redundant number system. This redundancy property has already been used to randomise the data during the Elliptic Curve Scalar Multiplication (ECSM). In this paper, we refine the results on redundancy and propose several new results on PMNS. More precisely, we study a generalisation of the Montgomery-like internal reduction method proposed by Negre and Plantard, along with some improvements on the parameter bounds that yield a smaller memory cost for representing elements in this system. We also show how to perform an equality test in the PMNS.
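To make the redundancy tangible, here is a minimal brute-force sketch in Python with hypothetical toy parameters ($p=19$, $n=3$, $\gamma=7$, $\rho=2$, $E=X^3-1$, so that $E(\gamma)\equiv 0 \pmod p$); it only enumerates representations and does not implement the Montgomery-like reduction studied in the paper:

    # Toy PMNS illustration: an element a of Z_p is represented by a polynomial
    # A of degree < n with |coefficients| < rho such that A(gamma) = a (mod p).
    # Brute force makes the redundancy visible: several polynomials can
    # represent the same residue.
    from itertools import product

    p, n, gamma, rho = 19, 3, 7, 2

    def representations(a):
        reps = []
        for coeffs in product(range(-(rho - 1), rho), repeat=n):  # coefficients in {-1, 0, 1}
            if sum(c * pow(gamma, i, p) for i, c in enumerate(coeffs)) % p == a % p:
                reps.append(coeffs)
        return reps

    for a in range(p):
        reps = representations(a)
        print(a, reps)
        assert reps, "every residue should have at least one representation"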
Last updated:  2023-08-14
Almost Tight Multi-User Security under Adaptive Corruptions from LWE in the Standard Model
Shuai Han, Shengli Liu, Zhedong Wang, Dawu Gu
In this work, we construct the first digital signature (SIG) and public-key encryption (PKE) schemes with almost tight multi-user security under adaptive corruptions based on the learning-with-errors (LWE) assumption in the standard model. Our PKE scheme achieves almost tight IND-CCA security and our SIG scheme achieves almost tight strong EUF-CMA security, both in the multi-user setting with adaptive corruptions. The security loss is quadratic in the security parameter, and independent of the number of users, signatures or ciphertexts. Previously, such schemes were only known to exist under number-theoretic assumptions or in the classical random oracle model, and are thus vulnerable to quantum adversaries. To obtain our schemes from LWE, we propose new frameworks for constructing SIG and PKE with a core technical tool named probabilistic quasi-adaptive hash proof system (pr-QA-HPS). As a new variant of HPS, our pr-QA-HPS provides probabilistic public and private evaluation modes that may toss coins. This is in stark contrast to the traditional HPS [Cramer and Shoup, Eurocrypt 2002] and existing variants like approximate HPS [Katz and Vaikuntanathan, Asiacrypt 2009], whose public and private evaluations are deterministic in their inputs. Moreover, we formalize a new property called evaluation indistinguishability by requiring statistical indistinguishability of the two probabilistic evaluation modes, even in the presence of the secret key. The evaluation indistinguishability, as well as other nice properties resulting from the probabilistic features of pr-QA-HPS, are crucial for the multi-user security proof of our frameworks under adaptive corruptions. As for instantiations, we construct pr-QA-HPS from the LWE assumption and prove its properties with almost tight reductions, which admit almost tightly secure LWE-based SIG and PKE schemes under our frameworks. Along the way, we also provide new almost-tight reductions from LWE to multi-secret LWE, which may be of independent interest.
Last updated:  2023-08-13
Two Remarks on Torsion-Point Attacks in Isogeny-Based Cryptography
Francesco Sica
We fix an omission in [Petit17] on torsion-point attacks on isogeny-based cryptosystems akin to SIDH, also reprised in [dQuehen-etal21]. In these works, the authors represent certain integers using a norm equation to derive a secret isogeny. However, this derivation uses as a crucial ingredient a lemma ([Petit17], Section 4.3), which we show to be incorrect. We then state sufficient conditions allowing us to prove a modified version of this lemma. A further idea, parametrizing solutions of the norm equation, shows that these conditions can be fulfilled under the same heuristics as in these previous works. Our contribution is a theoretical one. It does not invalidate the attack, which works in practice, but gives a correct mathematical justification for it. We also simplify the argument of Theorem 3 in [dQuehen-etal21] to show that the requirement that $m$ be small is unnecessary.
Last updated:  2023-08-13
Snowblind: A Threshold Blind Signature in Pairing-Free Groups
Elizabeth Crites, Chelsea Komlo, Mary Maller, Stefano Tessaro, Chenzhi Zhu
Both threshold and blind signatures have, individually, received a considerable amount of attention. However, little is known about their combination, i.e., a threshold signature which is also blind, in that no coalition of signers learns anything about the message being signed or the signature being produced. Several applications of blind signatures (e.g., anonymous tokens) would benefit from distributed signing as a means to increase trust in the service and hence reduce the risks of key compromise. This paper builds the first blind threshold signatures in pairing-free groups. Our main contribution is a construction that transforms an underlying blind non-threshold signature scheme with a suitable structure into a threshold scheme, preserving its blindness. The resulting signing protocol proceeds in three rounds, and produces signatures consisting of one group element and two scalars. The underlying non-threshold blind signature schemes are of independent interest, and improve upon the current state of the art (Tessaro and Zhu, EUROCRYPT ’22) with shorter signatures (three elements, instead of four) and simpler proofs of security. All of our schemes are proved secure in the Random Oracle and Algebraic Group Models, assuming the hardness of the discrete logarithm problem.
Last updated:  2023-08-13
Parallel SAT Framework to Find Clustering of Differential Characteristics and Its Applications
Kosei Sakamoto, Ryoma Ito, Takanori Isobe
The most crucial but time-consuming task for differential cryptanalysis is to find a differential with a high probability. To tackle this task, we propose a new SAT-based automatic search framework to efficiently figure out a differential with the highest probability under a specified condition. As the previous SAT methods (e.g., Sun et al.'s method proposed at ToSC 2021(1)) focused on accelerating the search for an optimal single differential characteristic, they are not optimized for evaluating a clustering effect to obtain a tighter differential probability of differentials. In contrast, our framework takes advantage of a method to solve incremental SAT problems in parallel using a multi-threading technique, and consequently, it offers the following advantages compared with the previous methods: (1) speedy identification of a differential with the highest probability under the specified conditions; (2) efficient construction of the truncated differential with the highest probability from the obtained multiple differentials; and (3) applicability to a wide class of symmetric-key primitives. To demonstrate the effectiveness of our framework, we apply it to the block cipher PRINCE and the tweakable block cipher QARMA. We successfully figure out the tight differential bounds for all variants of PRINCE and QARMA within practical time, thereby identifying the longest distinguisher for all the variants, which improves on existing ones by one to four rounds. Besides, we uncover notable differences between PRINCE and QARMA in the behavior of differentials, especially regarding the clustering effect. We believe that our findings shed light on new structural properties of these important primitives. In the context of key recovery attacks, our framework allows us to derive key-recovery-friendly truncated differentials for all variants of QARMA. Consequently, we report key recovery attacks based on (truncated) differential cryptanalysis on QARMA for the first time and show that these key recovery attacks are competitive with other existing attacks.
Last updated:  2023-11-10
SoK: Privacy-Preserving Smart Contract
Huayi Qi, Minghui Xu, Dongxiao Yu, and Xiuzhen Cheng
The privacy concern in smart contract applications continues to grow, leading to various proposals aimed at developing comprehensive and universally applicable privacy-preserving smart contract (PPSC) schemes. However, the existing research in this area is fragmented and lacks a comprehensive system overview. This paper aims to bridge the existing research gap on PPSC schemes by systematizing previous studies in this field. The primary focus is on two categories: PPSC schemes based on cryptographic tools like zero-knowledge proofs, as well as schemes based on trusted execution environments. In doing so, we aim to provide a condensed summary of the different approaches taken in constructing PPSC schemes. Additionally, we offer a comparative analysis of these approaches, highlighting the similarities and differences between them. Furthermore, we shed light on the challenges that developers face when designing and implementing PPSC schemes. Finally, we delve into potential future directions for improving and advancing these schemes, discussing possible avenues for further research and development.
Last updated:  2023-08-12
One-Message Secure Reductions: On the Cost of Converting Correlations
Yuval Ishai, Mahimna Kelkar, Varun Narayanan, Liav Zafar
Correlated secret randomness is a useful resource for secure computation protocols, often enabling dramatic speedups compared to protocols in the plain model. This has motivated a line of work on identifying and securely generating useful correlations. Different kinds of correlations can vary greatly in terms of usefulness and ease of generation. While there has been major progress on efficiently generating oblivious transfer (OT) correlations, other useful kinds of correlations are much more costly to generate. Thus, it is highly desirable to develop efficient techniques for securely converting copies of a given source correlation into copies of a given target correlation, especially when the former are cheaper to generate than the latter. In this work, we initiate a systematic study of such conversions that only involve a single uni-directional message. We refer to such a conversion as a one-message secure reduction (OMSR). Recent works (Agarwal et al, Eurocrypt 2022; Khorasgani et al, Eurocrypt 2022) studied a similar problem when no communication is allowed; this setting is quite restrictive, however, with few non-trivial conversions being feasible. The OMSR setting substantially expands the scope of feasible results, allowing for direct applications to existing MPC protocols. We obtain the following positive and negative results. - OMSR constructions. We present a general rejection-sampling based technique for OMSR with OT source correlations. We apply it to substantially improve the communication complexity of optimized protocols for distributed symmetric cryptography (Dinur et al., Crypto 2021). - OMSR lower bounds. We develop general techniques for proving lower bounds on the communication complexity of OMSR, matching our positive results up to small constant factors.
Last updated:  2023-08-12
Theoretical analysis of decoding failure rate of non-binary QC-MDPC codes
Kirill Vedenev, Yury Kosolapov
In this paper, we study the decoding failure rate (DFR) of non-binary QC-MDPC codes using theoretical tools, extending the results of previous binary QC-MDPC code studies. The theoretical estimates of the DFR are particularly significant for cryptographic applications of QC-MDPC codes. Specifically, in the binary case, it is established that exploiting decoding failures makes it possible to recover the secret key of a QC-MDPC cryptosystem. This implies that to attain the desired security level against adversaries in the CCA2 model, the decoding failure rate must be strictly upper-bounded to be negligibly small. In this paper, we observe that this attack can be extended to the non-binary case as well, which underscores the importance of DFR estimation. Consequently, we study the guaranteed error-correction capability of non-binary QC-MDPC codes under the one-step majority logic (OSML) decoder and provide a theoretical analysis of the 1-iteration parallel symbol flipping decoder and its combination with the OSML decoder. Utilizing these results, we estimate the potential public-key sizes for QC-MDPC cryptosystems over $\mathbb{F}_4$ for various security levels. We find that there is no advantage in reducing key sizes when compared to the binary case.
Last updated:  2024-09-07
Improved Circuit Synthesis with Multi-Value Bootstrapping for FHEW-like Schemes
Johannes Mono, Kamil Kluczniak, and Tim Güneysu
In recent years, the research community has made great progress in improving techniques for privacy-preserving computation, such as fully homomorphic encryption (FHE). Despite the progress, there remain open challenges, mainly in performance and usability, to further advance the adoption of these technologies. This work provides multiple contributions that improve the current state-of-the-art in both areas. More specifically, we significantly simplify the multi-value bootstrapping by Carpov, Izabachène, and Mollimard [CIM19] for Boolean-based FHE schemes such as FHEW or TFHE, making the concept usable in practice. Based on our simplifications, we implement an easy-to-use interface for multi-value bootstrapping in the open-source library FHE-Deck [fhe23], derive new parameter sets for multi-bit encryptions with state-of-the-art security, and build a toolset that translates high-level code to multi-bit operations on encrypted data using circuit synthesis. We propose and integrate the first non-trivial FHE-specific optimizations for privacy-preserving circuit synthesis: look-up table (LUT) grouping and adder substitution. Using LUT grouping, we reduce the number of bootstrapping operations by almost 40% on average, while for adder substitution, we reduce the number of required bootstrappings by up to 85% for certain use cases. Overall, the execution time is up to 4.2x faster with all optimizations enabled compared to previous state-of-the-art circuit synthesis.
Last updated:  2024-08-25
Pay Less for Your Privacy: Towards Cost-Effective On-Chain Mixers
Zhipeng Wang, Marko Cirkovic, Duc V. Le, William Knottenbelt, and Christian Cachin
On-chain mixers, such as Tornado Cash (TC), have become a popular privacy solution for many non-privacy-preserving blockchain users. These mixers enable users to deposit a fixed amount of coins and withdraw them to another address, while effectively reducing the linkability between these addresses and securely obscuring their transaction history. However, the high cost of interacting with existing on-chain mixer smart contracts prohibits standard users from using the mixer, mainly due to the use of computationally expensive cryptographic primitives. For instance, the deposit cost of TC on Ethereum is approximately $1.1$M gas (i.e., $66$ USD in June 2023), which is $53\times$ higher than issuing a base transfer transaction. In this work, we introduce the Merkle Pyramid Builder approach, to incrementally build the Merkle tree in an on-chain mixer and update the tree per batch of deposits, which can therefore decrease the overall cost of using the mixer. Our evaluation results highlight the effectiveness of this approach, showcasing a significant reduction of up to $7\times$ in the amortized cost of depositing compared to state-of-the-art on-chain mixers. Importantly, these improvements are achieved without compromising users' privacy. Furthermore, we propose the utilization of verifiable computations to shift the responsibility of Merkle tree updates from on-chain smart contracts to off-chain clients, which can further reduce deposit costs. Additionally, our analysis demonstrates that our designs ensure fairness by distributing Merkle tree update costs among clients over time.
Last updated:  2023-08-11
Non-distributable key-sharing protocol with particular emphasis on the Internet of Things
Mario Mastriani
Quantum key distribution (QKD) constitutes the most widespread family of information preservation techniques in the context of quantum cryptography. However, these techniques must deal with a series of technological challenges that prevent their efficient implementation, both in space and in exclusively terrestrial configurations. Moreover, the current smallsat constellations in Low Earth Orbit (LEO, at an altitude of approx. 500 km), together with other satellites in Medium Earth Orbit (MEO) and geostationary orbit (GEO), plus the large amount of space debris present around the planet, have become a serious obstacle for astronomy. In this work, a classical alternative to QKD is presented, which we call non-distributable key sharing (NDKS). It is also based on a symmetric key, but it is low-cost and highly efficient, dispenses with the implementation problems present in QKD protocols, and does not require the use of satellites. Due to its low cost, simplicity of implementation, and high efficiency, NDKS is presented as an ideal solution to the problem of cybersecurity on the Internet of Things (IoT) in general, and on IoT-fog-clouds in particular.
Last updated:  2024-05-26
Securing Lattice-Based KEMs with Code-Based Masking: A Theoretical Approach
Pierre-Augustin Berthet, Yoan Rougeolle, Cédric Tavernier, Jean-Luc Danger, and Laurent Sauvage
The recent technological advances in Post-Quantum Cryptography (PQC) raise the question of robust implementations of new asymmetric cryptographic primitives in today's technology. This is the case for the lattice-based Module Lattice-Key Encapsulation Mechanism (ML-KEM) algorithm, which has been proposed by NIST as the first standard for Key Encapsulation Mechanisms (KEM), taking inspiration from CRYSTALS-Kyber. Notably, we have to make sure that ML-KEM implementations are resilient against physical attacks like Side-Channel Analysis (SCA) and Fault Injection Attacks (FIA). To reach this goal, we propose to adapt a masking countermeasure, more precisely the generic Direct Sum Masking method (DSM). Taking inspiration from a previous paper on AES, we extend the method to finite fields of prime characteristic other than 2 and to even-length codes. We also investigate its application to Keccak, the hash-based function used in ML-KEM. We propose masked conversions and use cost amortization to perform this hash. We provide the first masked implementation of ML-KEM that is resilient to both SCA and FIA and capable of correcting errors. Our FIA resilience allows for fault correction even within the multiplicative gadget. Finally, we adapt a polynomial evaluation method to compute masked polynomials with public coefficients over finite fields of characteristic different from 2.
Last updated:  2023-08-11
A Note on “Secure Quantized Training for Deep Learning”
Marcel Keller, Ke Sun
Keller and Sun (ICML'22) have found a gap in the accuracy between floating-point deep learning in cleartext and secure quantized deep learning using multi-party computation. We have discovered that this gap is caused by a bug in the implementation of max-pooling. In this note, we present updated figures to support this conclusion. We also add figures for another network on CIFAR-10.
Last updated:  2023-12-01
Arke: Scalable and Byzantine Fault Tolerant Privacy-Preserving Contact Discovery
Nicolas Mohnblatt, Alberto Sonnino, Kobi Gurkan, and Philipp Jovanovic
Contact discovery is a crucial component of social applications, facilitating interactions between registered contacts. This work introduces Arke, a novel approach to contact discovery that addresses the limitations of existing solutions in terms of privacy, scalability, and reliance on trusted third parties. Arke ensures the unlinkability of user interactions, mitigates enumeration attacks, and operates without single points of failure or trust. Notably, Arke is the first contact discovery system whose performance is independent of the total number of users and the first that can operate in a Byzantine setting. It achieves its privacy goals through an unlinkable handshake mechanism built on top of an identity-based non-interactive key exchange. By leveraging a custom distributed architecture, Arke forgoes the expense of consensus to achieve scalability while maintaining consistency in a Byzantine fault tolerant environment. Performance evaluations demonstrate that Arke can support enough throughput to operate at a planetary scale while maintaining sub-second latencies in a large geo-distributed setting.
Last updated:  2023-08-10
Jolt: SNARKs for Virtual Machines via Lookups
Arasu Arun, Srinath Setty, Justin Thaler
Succinct Non-interactive Arguments of Knowledge (SNARKs) allow an untrusted prover to establish that it correctly ran some "witness-checking procedure" on a witness. A zkVM (short for zero-knowledge Virtual Machine) is a SNARK that allows the witness-checking procedure to be specified as a computer program written in the assembly language of a specific instruction set architecture (ISA). A $\textit{front-end}$ converts computer programs into a lower-level representation such as an arithmetic circuit or a generalization thereof. A SNARK for circuit-satisfiability can then be applied to the resulting circuit. We describe a new front-end technique called Jolt that applies to a variety of ISAs. Jolt arguably realizes a vision called the $\textit{lookup singularity}$, which seeks to produce circuits that only perform lookups into pre-determined lookup tables. The circuits output by Jolt primarily perform lookups into a gigantic lookup table, of size more than $2^{128}$, that depends only on the ISA. The validity of the lookups is proved via a new $\textit{lookup argument}$ called Lasso described in a companion work (Setty, Thaler, and Wahby, e-print 2023). Although size-$2^{128}$ tables are vastly too large to materialize in full, the tables arising in Jolt are structured, avoiding costs that grow linearly with the table size. We describe performance and auditability benefits of Jolt compared to prior zkVMs, focusing on the popular RISC-V ISA as a concrete example. The dominant cost for the Jolt prover applied to this ISA (on $64$-bit data types) is cryptographically committing to about six $256$-bit field elements per step of the RISC-V CPU. This compares favorably to prior zkVM provers, even those focused on far simpler VMs.
Last updated:  2023-08-10
Unlocking the lookup singularity with Lasso
Srinath Setty, Justin Thaler, Riad Wahby
This paper introduces Lasso, a new family of lookup arguments, which allow an untrusted prover to commit to a vector $a \in \mathbb{F}^m$ and prove that all entries of $a$ reside in some predetermined table $t \in \mathbb{F}^n$. Lasso's performance characteristics unlock the so-called "lookup singularity". Lasso works with any multilinear polynomial commitment scheme, and provides the following efficiency properties. For $m$ lookups into a table of size $n$, Lasso's prover commits to just $m + n$ field elements. Moreover, the committed field elements are small, meaning that, no matter how big the field $\mathbb{F}$ is, they are all in the set $\{0, \ldots, m\}$. When using a multiexponentiation-based commitment scheme, this results in the prover's costs dominated by only $O(m + n)$ group operations (e.g., elliptic curve point additions), plus the cost to prove an evaluation of a multilinear polynomial whose evaluations over the Boolean hypercube are the table entries. This represents a significant improvement in prover costs over prior lookup arguments (e.g., plookup, Halo2's lookups, lookup arguments based on logarithmic derivatives). Unlike all prior lookup arguments, if the table $t$ is structured (in a precise sense that we define), then no party needs to commit to $t$, enabling the use of much larger tables than prior works (e.g., of size $2^{128}$ or larger). Moreover, Lasso's prover only "pays" in runtime for table entries that are accessed by the lookup operations. This applies to tables commonly used to implement range checks, bitwise operations, big-number arithmetic, and even transitions of a full-fledged CPU such as RISC-V. Specifically, for any integer parameter $c > 1$, Lasso's prover's dominant cost is committing to $3 \cdot c \cdot m + c \cdot n^{1/c}$ field elements. Furthermore, all these field elements are "small", meaning they are in the set $\{0, \ldots, \max(m, n^{1/c}, q) - 1\}$, where $q$ is the maximum value in $a$. Lasso's starting point is Spark, a time-optimal polynomial commitment scheme for sparse polynomials in Spartan (CRYPTO 2020). We first provide a stronger security analysis for Spark. Spartan's security analysis assumed that certain metadata associated with a sparse polynomial is committed by an honest party (this is acceptable for its purpose in Spartan, but not for Lasso). We prove that Spark remains secure even when that metadata is committed by a malicious party. This provides the first "standard" commitment scheme for sparse multilinear polynomials with optimal prover costs. We then generalize Spark to directly support a lookup argument for both structured and unstructured tables, with the efficiency characteristics noted above.
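To illustrate why a structured table never needs to be materialized (a conceptual Python sketch only, not Lasso's actual protocol; there are no commitments or sum-checks here, and the chunk width is a hypothetical choice), a lookup into the virtual size-$2^{128}$ table for 64-bit bitwise AND decomposes into a few lookups into small subtables:

    # Conceptual sketch (not Lasso itself): a lookup into the "virtual" table
    # T[x || y] = x AND y over 64-bit words (size 2^128) decomposes into c = 8
    # lookups into small 16-bit-indexed subtables whose results are recombined.
    import random

    CHUNK = 8  # bits per chunk of each operand

    def subtable_and(idx):
        # small table indexed by (x_chunk || y_chunk), only 2^(2*CHUNK) entries
        x, y = idx >> CHUNK, idx & ((1 << CHUNK) - 1)
        return x & y

    def virtual_lookup_and(x, y, width=64):
        # recombine chunk-wise results, as if reading T[x || y]
        result = 0
        for i in range(0, width, CHUNK):
            xc = (x >> i) & ((1 << CHUNK) - 1)
            yc = (y >> i) & ((1 << CHUNK) - 1)
            result |= subtable_and((xc << CHUNK) | yc) << i
        return result

    for _ in range(1000):
        a, b = random.getrandbits(64), random.getrandbits(64)
        assert virtual_lookup_and(a, b) == a & b
    print("64-bit AND lookups decomposed into 8-bit chunk lookups")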
Last updated:  2024-09-08
Authentica: A Secure Authentication Mechanism using a Software-defined Unclonable Function
Ripon Patgiri and Laiphrakpam Dolendro Singh
Password-based authentication is an extensively used method to authenticate users, and it relies on cryptography to protect the authentication exchange. In contrast, the physically unclonable function (PUF)-based authentication mechanism is rapidly gaining popularity due to its usability in IoT devices; it is a lightweight mechanism that does not use a cryptographic protocol. However, PUF-based mechanisms cannot authenticate users. To overcome this drawback of PUFs, we introduce a software-defined unclonable function (SUF, for short). Contrary to the PUF, the SUF is used to authenticate users, not devices. We use the SUF to implement a lightweight password-based authentication mechanism termed Authentica. Authentica bridges the gap between password-based and PUF-based authentication mechanisms. Authentica does not use cryptography for authentication; however, we establish the challenge-response pairs using cryptography during the registration phase, which is a one-time cost. Authentica addresses a) impersonation attacks, b) collision attacks, c) dictionary and rainbow table attacks, d) replay attacks, e) DDoS attacks, f) domino-effect issues, and g) challenge-response database leakage.
Last updated:  2023-08-10
Verifiable Verification in Cryptographic Protocols
Marc Fischlin, Felix Günther
Common verification steps in cryptographic protocols, such as signature or message authentication code checks or the validation of elliptic curve points, are crucial for the overall security of the protocol. Yet implementation errors omitting these steps easily remain unnoticed, as often the protocol will function perfectly anyway. One of the most prominent examples is Apple's goto fail bug where the erroneous certificate verification skipped over several of the required steps, marking invalid certificates as correctly verified. This vulnerability went undetected for at least 17 months. We propose here a mechanism which supports the detection of such errors on a cryptographic level. Instead of merely returning the binary acceptance decision, we let the verification return more fine-grained information in the form of what we call a confirmation code. The reader may think of the confirmation code as disposable information produced as part of the relevant verification steps. In the case of an implementation error like the goto fail bug, the confirmation code would then miss essential elements. The question now arises of how to verify the confirmation code itself. We show how to use confirmation codes to tie security to basic functionality at the overall protocol level, making erroneous implementations detectable through the protocol not functioning properly. More concretely, we discuss the usage of confirmation codes in secure connections, established via a key exchange protocol and secured through the derived keys. If some verification steps in a key exchange protocol execution are faulty, then so will be the confirmation codes, and because we can let the confirmation codes enter key derivation, the connection of the two parties will eventually fail. In consequence, an implementation error like goto fail would now be detectable through a simple connection test.
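The following toy Python sketch (our own illustration, not the paper's construction; the MAC-based verifier and key-derivation step are hypothetical stand-ins) shows the intent: verification outputs a transcript hash of the checks it actually performed, that code feeds into key derivation, and a goto-fail-style verifier that skips a check still accepts but derives a different session key:

    # Toy sketch of the confirmation-code idea: verification returns a hash of
    # the checks it actually ran, and that code enters key derivation.  A buggy
    # verifier that skips a step still says "accept", but its confirmation code
    # differs, so the two parties' derived session keys will not match.
    import hmac, hashlib

    def verify_with_code(mac_key, message, tag, skip_length_check=False):
        h = hashlib.sha256()
        # step 1: length/format check
        if not skip_length_check:
            assert len(tag) == 32
            h.update(b"step1:length-ok")
        # step 2: MAC check
        expected = hmac.new(mac_key, message, hashlib.sha256).digest()
        ok = hmac.compare_digest(expected, tag)
        h.update(b"step2:mac-" + (b"ok" if ok else b"fail"))
        return ok, h.digest()            # (accept bit, confirmation code)

    def derive_session_key(shared_secret, confirmation_code):
        return hashlib.sha256(shared_secret + confirmation_code).digest()

    mac_key, secret, msg = b"k" * 32, b"s" * 32, b"hello"
    tag = hmac.new(mac_key, msg, hashlib.sha256).digest()

    ok_a, code_a = verify_with_code(mac_key, msg, tag)                          # correct verifier
    ok_b, code_b = verify_with_code(mac_key, msg, tag, skip_length_check=True)  # buggy verifier
    assert ok_a and ok_b                                                        # both "accept" ...
    assert derive_session_key(secret, code_a) != derive_session_key(secret, code_b)
    print("buggy verifier accepts, but the connection keys mismatch")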
Last updated:  2023-12-05
Fallen Sanctuary: A Higher-Order and Leakage-Resilient Rekeying Scheme
Rei Ueno, Naofumi Homma, Akiko Inoue, and Kazuhiko Minematsu
This paper presents a provably secure, higher-order, and leakage-resilient (LR) rekeying scheme named LR Rekeying with Random oracle Repetition (LR4), along with a quantitative security evaluation methodology. Many existing LR primitives are based on the concept of leveled implementation, which still essentially requires a leak-free sanctuary (i.e., differential power analysis (DPA)-resistant component(s)) for some parts. In addition, although several LR pseudorandom functions (PRFs) based on only bounded DPA-resistant components have been developed, their validity and effectiveness for rekeying usage still need to be determined. In contrast, LR4 is formally proven under a leakage model that captures the practical goal of side-channel attack (SCA) protection (e.g., masking with a practical order) and assumes no unbounded DPA-resistant sanctuary. This proof suggests that LR4 resists exponentially many invocations (up to the birthday bound of the key size) without using any unbounded leak-free component, a first of its kind. Moreover, we present a quantitative SCA success rate evaluation methodology for LR4 that combines the bounded leakage models for LR cryptography and a state-of-the-art information-theoretical SCA evaluation method. We validate its soundness and effectiveness as a DPA countermeasure through a numerical evaluation; that is, the number of secure calls of a symmetric primitive increases exponentially with the security parameter under practical conditions.
Last updated:  2023-08-24
CLRW1$^{3}$ is not Secure Beyond the Birthday Bound: Breaking TNT with ${O(2^{n/2})}$ queries
Mustafa Khairallah
In this paper, we present a new distinguisher for the Tweak-aNd-Tweak (TNT) tweakable block cipher with $O(2^{n/2})$ complexity. The distinguisher is an adaptive chosen-ciphertext distinguisher, unlike previous attacks, which are only non-adaptive chosen-plaintext attacks. However, the attack contradicts the security claims made by the designers. Given that TNT can be seen as the three-round CLRW1 tweakable block cipher, our attack matches its more conservative bound. We provide the distinguisher description, a probabilistic analysis of its behaviour, experimental verification and an analysis of why the proof fails to capture the security of TNT. In summary, the distinguisher is based on collision counting and exploits non-uniformity in the statistical behaviour of random permutations. It reduces the goal of finding a collision to solving a difference equation defined over a random permutation. Due to this relation, the number of collisions observed by the distinguisher is twice what is expected from an ideal tweakable block cipher.
Last updated:  2023-12-06
Optimal Flexible Consensus and its Application to Ethereum
Joachim Neu, Srivatsan Sridhar, Lei Yang, and David Tse
Classic BFT consensus protocols guarantee safety and liveness for all clients if fewer than one-third of replicas are faulty. However, in applications such as high-value payments, some clients may want to prioritize safety over liveness. Flexible consensus allows each client to opt for a higher safety resilience, albeit at the expense of reduced liveness resilience. We present the first construction that allows optimal safety-liveness tradeoff for every client simultaneously. This construction is modular and is realized as an add-on applied on top of an existing consensus protocol. The add-on consists of an additional round of voting and permanent locking done by the replicas, to sidestep a sub-optimal quorum-intersection-based constraint present in previous solutions. We adapt our construction to the existing Ethereum protocol to derive optimal flexible confirmation rules that clients can adopt unilaterally without requiring system-wide changes. This is possible because existing Ethereum protocol features can double as the extra voting and locking. We demonstrate an implementation using Ethereum's consensus API.
Last updated:  2023-11-30
Decentralized Finance (DeFi): A Survey
Erya Jiang, Bo Qin, Qin Wang, Zhipeng Wang, Qianhong Wu, Jian Weng, Xinyu Li, Chenyang Wang, Yuhang Ding, and Yanran Zhang
Decentralized Finance (DeFi) is a new paradigm in the creation, distribution, and utilization of financial services via the integration of blockchain technology. Our research conducts a comprehensive introduction and meticulous classification of various DeFi applications. Beyond that, we thoroughly analyze the associated risks from both technical and economic perspectives, spanning multiple layers. We also point out research gaps and avenues, covering technical advancements, innovative economics, and sociology and ecology optimization.
Last updated:  2023-08-09
Infinite families of minimal binary codes via Krawtchouk polynomials
Xiaoni Du, René Rodríguez, Hao Wu
Linear codes play a crucial role in various fields of engineering and mathematics, including data storage, communication, cryptography, and combinatorics. Minimal linear codes, a subset of linear codes, are particularly essential for designing effective secret sharing schemes. In this paper, we introduce several classes of minimal binary linear codes by carefully selecting appropriate Boolean functions. These functions belong to a renowned class of Boolean functions, the general Maiorana-McFarland class. We employ a method first proposed by Ding et al. [7] to construct minimal codes violating the Ashikhmin-Barg bound (wide minimal codes) by using Krawtchouk polynomials. The lengths, dimensions, and weight distributions of the obtained codes are determined using the Walsh spectrum distribution of the chosen Boolean functions. Our findings demonstrate that a vast majority of the newly constructed codes are wide minimal codes. Furthermore, our proposed codes exhibit a significantly larger minimum distance, in some cases, compared to some existing similar constructions. Finally, we address this method, based on Krawtchouk polynomials, more generally, and highlight certain generic properties related to it. This study provides insights into the scope of this method.
Last updated:  2023-08-09
Mutator Sets and their Application to Scalable Privacy
Alan Szepieniec, Thorkil Værge
A mutator set is a cryptographic data structure for authenticating operations on a changing set of data elements called items. Informally: - There is a short commitment to the set. - There are succinct membership proofs for elements of the set. - It is possible to update the commitment as well as the membership proofs with minimal effort as new items are added to the set or as existing items are removed from it. - Items cannot be removed before they were added. - It is difficult to link an item's addition to the set to its removal from the set, except when using information available only to the party that generated it. This paper formally defines the notion, motivates its existence with an application to scalable privacy in the context of cryptocurrencies, and proposes an instantiation inspired by Merkle mountain ranges and Bloom filters.
Last updated:  2023-08-09
DeFi Auditing: Mechanisms, Effectiveness, and User Perceptions
Ding Feng, Rupert Hitsch, Kaihua Qin, Arthur Gervais, Roger Wattenhofer, Yaxing Yao, Ye Wang
Decentralized Finance (DeFi), a blockchain-based financial ecosystem, suffers from smart contract vulnerabilities that led to a loss exceeding 3.24 billion USD by April 2022. To address this, blockchain firms audit DeFi applications, a process known as DeFi auditing. Our research aims to comprehend the mechanism and efficacy of DeFi auditing. We discovered its ability to detect vulnerabilities in smart contract logic and interactivity with other DeFi entities, but also noted its limitations in communication, transparency, remedial action implementation, and in preventing certain DeFi attacks. Moreover, our interview study delved into user perceptions of DeFi auditing, unmasking gaps in awareness, usage, and trust, and offering insights to address these issues.
Last updated:  2023-08-09
Decentralized Threshold Signatures for Blockchains with Non-Interactive and Transparent Setup
Kwangsu Lee
Threshold signatures are digital signatures that support multi-party signature generation, such that a number of parties initially share a signing key and more than a threshold number of parties gather to generate a signature. In this paper, we propose a non-interactive decentralized threshold signature (NIDTS) scheme that supports non-interactive and transparent key setup based on BLS signatures. Our NIDTS scheme has the following properties. 1) The key setup process is completely non-interactive and does not require message exchange between parties, since the transfer of the register keys of the parties is enough for a combiner to generate a compact verification key. 2) The register key of a party is compact, since its size is independent of the number of parties in the group. 3) The signing process of the parties is non-interactive. 4) The final threshold signature as well as the partial signatures are succinct. We prove the security of our NIDTS scheme under computational assumptions in the random oracle model. Furthermore, we implement our NIDTS scheme in Rust and compare its performance with other schemes to show that the key setup of our scheme is more efficient. For example, in the unweighted setting with 1000 parties, the key setup process of the NIDTS scheme takes 164 seconds, which is 5.9 times faster than the key setup process of the multiverse threshold signature (MTS) scheme.
Last updated:  2023-11-15
On the security of REDOG
Tanja Lange, Alex Pellegrini, and Alberto Ravagnani
We analyze REDOG, a public-key encryption system submitted to the Korean competition on post-quantum cryptography. REDOG is based on rank-metric codes. We prove its incorrectness and attack its implementation, providing an efficient message recovery attack. Furthermore, we show that the security of REDOG is much lower than claimed. We then proceed to mitigate these issues and provide two approaches to fix the decryption issue, one of which also leads to better security.
Last updated:  2023-08-08
On Fully-Secure Honest Majority MPC without $n^2$ Round Overhead
Daniel Escudero, Serge Fehr
Fully secure multiparty computation (or guaranteed output delivery) among $n$ parties can be achieved with perfect security if the number of corruptions $t$ is less than $n/3$, or with statistical security with the help of a broadcast channel if $t<n/2$. In the case of $t<n/3$, it is known that it is possible to achieve linear communication complexity, but at a cost of having a round count of $\Omega(\mathsf{depth}(C) + n)$ in the worst case. The number of rounds can be reduced to $O(\mathsf{depth}(C))$ by either increasing communication, or assuming some correlated randomness (a setting also known as the preprocessing model). For $t<n/2$ it is also known that linear communication complexity is achievable, but at the cost of $\Omega(\mathsf{depth}(C) + n^2)$ rounds, due to the use of a technique called dispute control. However, in contrast to the $t<n/3$ setting, it is not known how to reduce this round count for $t<n/2$ to $O(\mathsf{depth}(C))$, neither by allowing for larger communication nor by using correlated randomness. In this work we make progress in this direction by taking the second route above: we present a fully secure protocol for $t<n/2$ in the preprocessing model, that achieves linear communication complexity, and whose round complexity is only $O(\mathsf{depth}(C))$, without the additive $n^2$ term that appears from the use of dispute control. While in the $t<n/3$ setting such a result requires circuits of width $\Omega(n)$, in our case circuits must be of width $\Omega(n^2)$, leaving it as an interesting future problem to reduce this gap. Our $O(\mathsf{depth}(C))$ round count is achieved by avoiding the use of dispute control entirely, relying on a different tool for guaranteeing output. In the $t<n/3$ setting when correlated randomness is available, this is done by using error correction to reconstruct secret-shared values, but in the $t<n/2$ case the equivalent is robust secret-sharing, which guarantees the reconstruction of a secret in spite of errors. However, we note that a direct use of such a tool would lead to \emph{quadratic} communication, stemming from the fact that each party needs to authenticate their share towards each other party. At the crux of our techniques lies a novel method for reconstructing a batch of robustly secret-shared values while involving only a linear amount of communication per secret, which may also be of independent interest.
Last updated:  2023-08-08
Collaborative Privacy-Preserving Analysis of Oncological Data using Multiparty Homomorphic Encryption
Ravit Geva, Alexander Gusev, Yuriy Polyakov, Lior Liram, Oded Rosolio, Andreea Alexandru, Nicholas Genise, Marcelo Blatt, Zohar Duchin, Barliz Waissengrin, Dan Mirelman, Felix Bukstein, Deborah T. Blumenthal, Ido Wolf, Sharon Pelles-Avraham, Tali Schaffer, Lee A. Lavi, Daniele Micciancio, Vinod Vaikuntanathan, Ahmad Al Badawi, Shafi Goldwasser
Real-world healthcare data sharing is instrumental in constructing broader-based and larger clinical data sets that may improve clinical decision-making research and outcomes. Stakeholders are frequently reluctant to share their data without guaranteed patient privacy, proper protection of their data sets, and control over the usage of their data. Fully homomorphic encryption (FHE) is a cryptographic capability that can address these issues by enabling computation on encrypted data without intermediate decryptions, so the analytics results are obtained without revealing the raw data. This work presents a toolset for collaborative privacy-preserving analysis of oncological data using multiparty FHE. Our toolset supports survival analysis, logistic regression training, and several common descriptive statistics. We demonstrate using oncological data sets that the toolset achieves high accuracy and practical performance, which scales well to larger data sets. As part of this work, we propose a novel cryptographic protocol for interactive bootstrapping in multiparty FHE, which is of independent interest. The toolset we develop is general-purpose and can be applied to other collaborative medical and healthcare application domains.
Last updated:  2023-08-08
Extension of Shannon's theory of ciphers based on Latin rectangles
Karel BURDA
The paper extends Shannon's classical theory of ciphers. Here, ciphers are modeled by Latin rectangles, and their resistance to brute-force attack is assessed through the valence of cryptograms. The valence of a cryptogram is defined as the number of meaningful messages produced by decrypting the cryptogram with all possible keys. In this paper, the mean cryptogram valence of an arbitrary modern cipher with $K$ keys, $N$ outputs and $N$ inputs, of which $M$ inputs are meaningful messages, is derived. Furthermore, a lower bound on the valence of the cryptograms of entire ciphers is derived. The obtained parameters allow one to assess the resistance of cryptograms, and respectively of ciphers, against brute-force attack. The model is general and illustrative, and it uses a simpler mathematical apparatus than the existing theory. Therefore, it can also be used as an introduction to the theory of resistance of ciphers to brute-force attack.
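For intuition, a back-of-envelope estimate (an idealization offered here only for illustration, not necessarily the exact formula derived in the paper): if each of the $K-1$ wrong keys maps a given cryptogram to an approximately uniformly random one of the $N$ possible inputs, of which $M$ are meaningful, then the expected valence of a cryptogram of a meaningful message is roughly $1 + (K-1)M/N$, the leading $1$ accounting for the correct key. Brute force is only conclusive when this quantity stays close to $1$; once it is large, many keys yield meaningful decryptions and the attacker cannot single out the right one.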
Last updated:  2023-08-08
Privacy-preserving edit distance computation using secret-sharing two-party computation
Hernán Darío Vanegas Madrigal, Daniel Cabarcas Jaramillo, Diego F. Aranha
The edit distance is a metric widely used in genomics to measure the similarity of two DNA chains. Motivated by privacy concerns, we propose a 2PC protocol to compute the edit distance while preserving the privacy of the inputs. Since the edit distance algorithm can be expressed as a mixed-circuit computation, our approach uses protocols based on secret-sharing schemes like Tinier and SPD$\mathbb{Z}_{2^k}$, as well as daBits to perform domain conversion and edaBits to perform arithmetic comparisons. We modify the Wagner-Fischer edit distance algorithm, aiming at reducing the number of rounds of the protocol, and achieve a flexible protocol with a trade-off between rounds and multiplications. We implement our proposal in the MP-SPDZ framework, and our experiments show that it reduces the execution time by 81\% and 54\% for passive and active security, respectively, compared to a baseline implementation in a LAN. The experiments also show that our protocol reduces traffic by two orders of magnitude compared to a BMR-MASCOT implementation.
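For reference, the cleartext Wagner-Fischer dynamic program that the protocol evaluates on secret-shared inputs looks as follows (a plain Python sketch; the secret-sharing machinery and the authors' round-reducing modification are not shown):

    # Cleartext reference of the Wagner-Fischer dynamic program; the 2PC
    # protocol evaluates this recurrence on secret-shared inputs instead.
    def edit_distance(a, b):
        prev = list(range(len(b) + 1))            # distances from the empty prefix of a
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,          # deletion
                                curr[j - 1] + 1,      # insertion
                                prev[j - 1] + cost))  # substitution or match
            prev = curr
        return prev[-1]

    assert edit_distance("kitten", "sitting") == 3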
Last updated:  2023-08-08
Shining Light on the Shadow: Full-round Practical Distinguisher for Lightweight Block Cipher Shadow
Sunyeop Kim, Myoungsu Shin, Seonkyu Kim, Hanbeom Shin, Insung Kim, Donggeun Kwon, Dongjae Lee, Seonggyeom Kim, Deukjo Hong, Jaechul Sung, Seokhie Hong
Shadow is a lightweight block cipher proposed in the IEEE IoT Journal in 2021. Shadow's main design principle is adopting a variant 4-branch Feistel structure in order to provide a fast diffusion rate. We define such a structure as the Shadow structure and prove that it is almost identical to the Generalized Feistel Network, which invalidates the design principle. Moreover, we give a structural distinguisher that can distinguish the Shadow structure from a random permutation with only two plaintext/ciphertext pairs. By exploiting the key schedule, the distinguisher can be extended to a key recovery attack with only one plaintext/ciphertext pair. Furthermore, by considering Shadow's round function, only certain forms of monomials can appear in the ciphertext, resulting in an integral distinguisher requiring four plaintext/ciphertext pairs. Moreover, the algebraic degree does not exceed 12 for Shadow-32 and 20 for Shadow-64, regardless of the number of rounds. Our results show that Shadow is highly vulnerable to algebraic attacks, and that algebraic attacks should be carefully considered when designing ciphers with AND, rotation, and XOR operations.
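For readers unfamiliar with the structure in question, the following generic 4-branch (Type-2) generalized Feistel round sketch in Python shows the kind of network being discussed; the round function F and the 8-bit branch width are hypothetical placeholders, not Shadow's actual round function:

    # Generic 4-branch (Type-2) generalized Feistel round; F is a placeholder
    # built from AND, rotation and XOR, not Shadow's real round function.
    MASK = 0xFF  # 8-bit branches for illustration

    def F(x, k):
        rot = ((x << 1) | (x >> 7)) & MASK       # rotate left by 1
        return ((x & rot) ^ k) & MASK

    def gfn_round(state, round_keys):
        x0, x1, x2, x3 = state
        k0, k1 = round_keys
        # two parallel Feistel updates, then a cyclic rotation of the branches
        y1 = x1 ^ F(x0, k0)
        y3 = x3 ^ F(x2, k1)
        return (y1, x2, y3, x0)

    state = (0x12, 0x34, 0x56, 0x78)
    for r in range(4):
        state = gfn_round(state, (r, r + 1))
    print("state after 4 rounds:", [hex(b) for b in state])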
Last updated:  2023-08-08
RSA Blind Signatures with Public Metadata
Ghous Amjad, Kevin Yeo, Moti Yung
Anonymous tokens are digital signature schemes that enable an issuer to provide users with signatures without learning the input message or the resulting signature received by the user. These primitives allow applications to propagate trust while simultaneously protecting the identity of the user. Anonymous tokens have become a core component for improving the privacy of several real-world applications including ad measurements, authorization protocols, spam detection and VPNs. In certain applications, it is natural to associate signatures with specific public metadata, ensuring that signatures only propagate trust with respect to a certain set of scenarios. To solve this, we study the notion of anonymous tokens with public metadata in this work. We present a variant of RSA blind signatures with public metadata such that issuers may only generate signatures that verify for a certain choice of public metadata. We prove the security of our protocol under one-more RSA assumptions with multiple exponents, which we introduce. Furthermore, we provide evidence that the concrete security bounds should be nearly identical to those of standard RSA blind signatures. The protocols in this paper have been proposed as a technical specification in an IRTF internet draft.
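For orientation, here is a textbook Chaum-style RSA blind-signature round trip in Python (a minimal sketch only: it omits the public-metadata binding that is the paper's contribution, and the toy modulus and truncated hash are of course not secure parameters):

    # Textbook RSA blind signature: the user blinds H(msg), the issuer signs the
    # blinded value, and the user unblinds; the issuer never sees msg or sig.
    import hashlib, math, secrets

    p, q, e = 1000003, 1000033, 65537          # hypothetical toy primes, NOT secure sizes
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))          # issuer's private exponent

    def H(msg):                                # full-domain-style hash, reduced mod n
        return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

    # --- user blinds the message ---
    msg = b"token request"
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:
            break
    blinded = (H(msg) * pow(r, e, n)) % n      # the issuer only ever sees this value

    # --- issuer signs blindly ---
    blind_sig = pow(blinded, d, n)

    # --- user unblinds and verifies ---
    sig = (blind_sig * pow(r, -1, n)) % n
    assert pow(sig, e, n) == H(msg)
    print("unblinded signature verifies; the issuer never saw msg or sig")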
Last updated:  2023-10-18
A Methodology to Achieve Provable Side-Channel Security in Real-World Implementations
Sonia Belaïd, Gaëtan Cassiers, Camille Mutschler, Matthieu Rivain, Thomas Roche, François-Xavier Standaert, and Abdul Rahman Taleb
Physical side-channel attacks exploit a device's emanations to compromise the security of cryptographic implementations. Many countermeasures have been proposed against these attacks, especially the widely-used and efficient masking countermeasure. While theoretical models offer formal security proofs, they often rest on unrealistic assumptions, leading current approaches to prove the security of masked implementations to primarily rely on empirical verification. Consequently, the literature still lacks a well-defined framework for implementing proven secure constructions on physical devices. In this paper, we present a comprehensive methodology to transform an abstract circuit into a physical implementation secure against side-channel attacks. We introduce new tools for adapting the ideal noisy leakage model to practical scenarios. We also highlight the design objectives for embedded devices to achieve high levels of security, while acknowledging the limitations and challenges in applying leakage models in practice. Our aim is to demonstrate the possibility of bridging theory and practice, encouraging further research to achieve practical implementations proven secure against side-channel attacks without relying on ideal assumptions about the leakage.
Last updated:  2023-08-07
Towards a Quantum-resistant Weak Verifiable Delay Function
Thomas Decru, Luciano Maino, Antonio Sanso
In this paper, we present a new quantum-resistant weak Verifiable Delay Function based on a purely algebraic construction. Its delay depends on computing a large-degree isogeny between elliptic curves, whereas its verification relies on the computation of isogenies between products of two elliptic curves. One of its major advantages is its expected fast verification time. However, it is important to note that the practical implementation of our theoretical framework poses significant challenges. We examine the strengths and weaknesses of our construction, analyze its security and provide a proof-of-concept implementation.
Last updated:  2024-02-12
Verifiable Secret Sharing Simplified
Sourav Das, Zhuolun Xiang, Alin Tomescu, Alexander Spiegelman, Benny Pinkas, and Ling Ren
Verifiable Secret Sharing (VSS) is a fundamental building block in cryptography. Despite its importance and extensive studies, existing VSS protocols are often complex and inefficient. Many of them do not support dual thresholds, are not publicly verifiable, or do not properly terminate in asynchronous networks. This paper presents a new and simple approach for designing VSS protocols in synchronous and asynchronous networks. Our VSS protocols are optimally fault-tolerant, i.e., they tolerate a $1/2$ and a $1/3$ fraction of malicious nodes in synchronous and asynchronous networks, respectively. They only require a public key infrastructure and the hardness of discrete logarithms. Our protocols support dual thresholds, and their transcripts are publicly verifiable. We implement our VSS protocols and evaluate them in a geo-distributed setting with up to 256 nodes. The evaluation demonstrates that our protocols offer asynchronous termination and public verifiability with performance that is comparable to that of existing asynchronous VSS schemes that lack these features. Compared to the existing asynchronous VSS schemes with similar guarantees, our approach lowers the bandwidth usage and latency by up to $90\%$.
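As a reminder of what "verifiable" buys in this discrete-logarithm setting, here is a textbook Feldman-style share-verification sketch in Python (not the paper's protocol; the tiny group parameters and party counts are hypothetical): the dealer publishes commitments to the coefficients of the sharing polynomial, and every node checks its own share against them.

    # Textbook Feldman-style verification: commitments to the polynomial
    # coefficients let each node validate its share without learning the secret.
    import random

    p, q, g = 2039, 1019, 4          # p = 2q + 1; g generates the order-q subgroup
    t, n_nodes = 2, 5                # degree-t sharing among n_nodes parties

    secret = random.randrange(q)
    coeffs = [secret] + [random.randrange(q) for _ in range(t)]

    def f(x):                        # sharing polynomial over Z_q
        return sum(a * pow(x, j, q) for j, a in enumerate(coeffs)) % q

    shares = {i: f(i) for i in range(1, n_nodes + 1)}
    commitments = [pow(g, a, p) for a in coeffs]       # broadcast by the dealer

    def verify_share(i, share):
        lhs = pow(g, share, p)
        rhs = 1
        for j, C in enumerate(commitments):
            rhs = (rhs * pow(C, pow(i, j), p)) % p
        return lhs == rhs

    assert all(verify_share(i, s) for i, s in shares.items())
    assert not verify_share(1, (shares[1] + 1) % q)    # a tampered share is rejected
    print("all honest shares verify against the public commitments")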
Last updated:  2023-08-06
PicoEMP: A Low-Cost EMFI Platform Compared to BBI and Voltage Fault Injection using TDC and External VCC Measurements
Colin O'Flynn
Electromagnetic Fault Injection (EMFI) has been demonstrated to be useful for both academic and industrial research. Due to the dangerous voltages involved, most work is done with commercial tools. This paper introduces a safety-focused, low-cost, and open-source design that can be built for less than \$50 using only off-the-shelf parts. The paper also introduces an iCE40-based Time-to-Digital Converter (TDC), which is used to visualize the glitch inserted by the EMFI tool. This demonstrates that the internal voltage perturbations caused by voltage fault injection, body biasing injection (BBI), and EMFI all result in similar waveforms. In addition, a link is made between an easy-to-measure external voltage measurement and the internal measurement. Attacks are also mounted on a hardware AES engine and a soft-core RISC-V processor, both running on the same iCE40 FPGA. The platform is used to demonstrate several aspects of fault injection, including that the spatial positioning of the EMFI probe can impact the glitch strength, and that the same physical device may require widely different glitch parameters when running different designs.
Last updated:  2023-08-06
HI-Kyber: A novel high-performance implementation scheme of Kyber based on GPU
Xinyi Ji, Jiankuo Dong, Pinchang Zhang, Deng Tonggui, Hua Jiafeng, Fu Xiao
CRYSTALS-Kyber, as the only public key encryption (PKE) algorithm selected by the National Institute of Standards and Technology (NIST) in the third round, is considered one of the most promising post-quantum cryptography (PQC) schemes. Lattice-based cryptography uses hard computational problems on lattices to build secure encryption and decryption systems that resist attacks from quantum computers. Performance is an important bottleneck affecting the adoption of post-quantum cryptography. In this paper, we present a High-performance Implementation of Kyber (named HI-Kyber) on NVIDIA GPUs, which can increase the key-exchange performance of Kyber to the level of millions of operations per second. Firstly, we propose a lattice-based PQC implementation architecture based on kernel fusion, which avoids redundant global-memory access operations. Secondly, we optimize and implement the core operations of CRYSTALS-Kyber, including the Number Theoretic Transform (NTT), inverse NTT (INTT), and pointwise multiplication. Especially for the NTT, the main computational bottleneck, three novel methods are proposed to push performance further: sliced layer merging (SLM), sliced depth-first search (SDFS-NTT), and entire depth-first search (EDFS-NTT), which achieve speedups of 7.5%, 28.5%, and 41.6% over the native implementation. Thirdly, we conduct comprehensive performance experiments with different parallel dimensions based on the above optimizations. Finally, our key-exchange performance reaches 1,664 kops/s. Specifically, on the same platform, our HI-Kyber is 3.52$\times$ faster than the GPU implementation based on the same instruction set and 1.78$\times$ faster than the state-of-the-art one based on AI-accelerated tensor cores.
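For readers unfamiliar with the NTT that the kernel-level optimizations above target, here is a textbook radix-2 NTT over a small NTT-friendly prime. It is emphatically not the HI-Kyber GPU code (Kyber itself uses q = 3329 with a negacyclic, incomplete NTT); it is only a minimal sketch of the layered butterfly structure and of pointwise multiplication in the transform domain.

```python
# Textbook radix-2 number-theoretic transform (NTT) over Z_q with q = 7681.
# This is NOT the HI-Kyber GPU implementation: Kyber itself uses q = 3329 with
# a negacyclic, incomplete NTT. The cyclic NTT below is only meant to show the
# layered butterfly structure that kernel-level optimizations reorganize, and
# that pointwise multiplication in the NTT domain is a polynomial product.
import random

q, n = 7681, 256                        # n divides q - 1, so an order-n root exists

def find_root(n, q):
    """Return an element of multiplicative order exactly n (n a power of two)."""
    for h in range(2, q):
        w = pow(h, (q - 1) // n, q)     # order of w divides n
        if pow(w, n // 2, q) != 1:      # ... and does not divide n/2, so it is n
            return w
    raise ValueError("no suitable root found")

w = find_root(n, q)

def ntt(a, root):
    """Recursive Cooley-Tukey NTT: evaluate the polynomial a at powers of `root`."""
    m = len(a)
    if m == 1:
        return a[:]
    even = ntt(a[0::2], root * root % q)
    odd = ntt(a[1::2], root * root % q)
    out, t = [0] * m, 1
    for i in range(m // 2):             # one butterfly "layer" per recursion level
        out[i] = (even[i] + t * odd[i]) % q
        out[i + m // 2] = (even[i] - t * odd[i]) % q
        t = t * root % q
    return out

def intt(a_hat, root):
    """Inverse NTT: forward transform with root^-1, then scale by 1/len."""
    inv_len = pow(len(a_hat), q - 2, q)
    return [x * inv_len % q for x in ntt(a_hat, pow(root, q - 2, q))]

# Pointwise multiplication in the NTT domain = cyclic convolution mod (x^n - 1).
a = [random.randrange(q) for _ in range(n)]
b = [random.randrange(q) for _ in range(n)]
c = intt([x * y % q for x, y in zip(ntt(a, w), ntt(b, w))], w)

# Cross-check against the schoolbook cyclic convolution.
ref = [0] * n
for i in range(n):
    for j in range(n):
        ref[(i + j) % n] = (ref[(i + j) % n] + a[i] * b[j]) % q
assert c == ref
```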
Last updated:  2023-08-05
An Anonymous Authenticated Key Agreement Protocol Secure in Partially Trusted Registration Server Scenario for Multi-Server Architectures
Inam ul Haq, Jian Wang, Youwen Zhu, Sheharyar Nasir
The accelerated advances in information communication technologies have made it possible for enterprises to deploy large-scale applications in a multi-server architecture (also known as a cloud computing environment). In this architecture, a mobile user can remotely obtain desired services over the Internet from multiple servers by initially executing a single registration on a trusted registration server (RS). Because of the hazardous nature of the Internet, many multi-server authenticated key agreement (MSAKA) schemes have been proposed to protect user privacy and online communication. However, all such designs fall short in two vital aspects: 1) they provide no security when the RS is only partially trusted, and 2) the RS cannot restrict a user to accessing only a chosen combination of service-providing servers. To address these shortcomings, we present a new MSAKA protocol using self-certified public-key cryptography (SCPKC). We confirm the security of the proposed scheme by utilizing the well-known automated verification tool AVISPA and also provide a formal security proof in the random oracle model. Moreover, a software implementation of the proposed scheme and a comparison of performance and security metrics show that it offers a better security-performance trade-off, and hence is more appropriate for real-life applications involving resource-constrained devices.
Last updated:  2023-08-04
CycleFold: Folding-scheme-based recursive arguments over a cycle of elliptic curves
Abhiram Kothapalli, Srinath Setty
This paper introduces CycleFold, a new and conceptually simple approach to instantiate folding-scheme-based recursive arguments over a cycle of elliptic curves, for the purpose of realizing incrementally verifiable computation (IVC). The existing approach to this problem originates from BCTV (CRYPTO'14), who describe it for a SNARK-based recursive argument; it was adapted by Nova (CRYPTO'22) to a folding-scheme-based recursive argument. A downside of this approach is that it represents a folding scheme verifier as a circuit on both curves in the cycle (e.g., with Nova, this requires $\approx$10,000 multiplication gates on both curves in the cycle). CycleFold’s starting point is the observation that folding-scheme-based recursive arguments can be efficiently instantiated without a cycle of elliptic curves, except for a few scalar multiplications in their verifiers (2 in Nova, 1 in HyperNova, and 3 in ProtoStar). Accordingly, CycleFold uses the second curve in the cycle to merely represent a single scalar multiplication ($\approx$1,000--1,500 multiplication gates). CycleFold then folds invocations of that tiny circuit on the first curve in the cycle. This is nearly an order of magnitude improvement over the prior state-of-the-art in terms of circuit sizes on the second curve. CycleFold is particularly beneficial when instantiating folding-scheme-based recursive arguments over “half pairing” cycles (e.g., BN254/Grumpkin) as it keeps the circuit on the non-pairing-friendly curve minimal. The running instance in a CycleFold-based recursive argument consists of an instance on the first curve and a tiny instance on the second curve. Both instances can be proven using a zkSNARK defined over the scalar field of the first curve. On the conceptual front, with CycleFold, neither an IVC construction nor its security proof has to explicitly reason about the cycle of elliptic curves. Finally, due to its simplicity, a CycleFold-based recursive argument can be more easily adapted to support parallel proving with the so-called "binary tree" IVC.
Last updated:  2023-08-08
Attribute-Based Multi-Input FE (and more) for Attribute-Weighted Sums
Shweta Agrawal, Junichi Tomida, Anshu Yadav
Recently, Abdalla, Gong and Wee (Crypto 2020) provided the first functional encryption scheme for attribute-weighted sums (AWS), where encryption takes as input $N$ (unbounded) attribute-value pairs $\{\vec{x}_i, \vec{z}_i\}_{i \in [N]}$ where $\vec{x}_i$ is public and $\vec{z}_i$ is private, the secret key is associated with an arithmetic branching program $f$, and decryption returns the weighted sum ${\sum}_{{i \in [N]}} f(\vec{x}_i)^\top \vec{z}_i$, leaking no additional information about the $\vec{z}_i$'s. We extend FE for AWS to the significantly more challenging multi-party setting and provide the first construction for {\it attribute-based} multi-input FE (MIFE) supporting AWS. For $i \in [n]$, encryptor $i$ can choose an attribute $\vec{y}_i$ together with AWS input $\{\vec{x}_{i,j}, \vec{z}_{i,j}\}$ where $j \in [N_i]$ and $N_i$ is unbounded, the key generator can choose an access control policy $g_i$ along with its AWS function $h_i$ for each $i \in [n]$, and the decryptor can compute $$\sum_{i \in [n]}\sum_{j \in [N_{i}]}h_{i}(\vec{x}_{i,j})^{\top}\vec{z}_{i,j} \text{ iff } g_{i}(\vec{y}_{i}) =0 \text{ for all } i \in [n]$$ Previously, the only known attribute-based MIFE was for the inner product functionality (Abdalla et al., Asiacrypt 2020), where, additionally, $\vec{y}_i$ had to be fixed during setup and must remain the same for all ciphertexts in a given slot. Our attribute-based MIFE implies the notion of multi-input {\it attribute-based encryption} (MIABE) recently studied by Agrawal, Yadav and Yamada (Crypto 2022) and Francati, Friolo, Malavolta and Venturi (Eurocrypt 2023), for a conjunction of predicates represented as arithmetic branching programs (ABPs). Along the way, we also provide the first constructions of multi-client FE (MCFE) and dynamic decentralized FE (DDFE) for the AWS functionality. Previously, the best known MCFE and DDFE schemes were for inner products (Chotard et al., ePrint 2018; Abdalla, Benhamouda and Gay, Asiacrypt 2019; and Chotard et al., Crypto 2020). Our constructions are based on pairings and proven selectively secure under the matrix DDH assumption.
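Setting the encryption machinery aside, the attribute-weighted-sum functionality itself is easy to state in code. The sketch below computes it in the clear (no FE involved); the function f is a placeholder standing in for the arithmetic branching program held in the secret key, and the toy values are illustrative.

```python
# The attribute-weighted-sum (AWS) functionality computed in the clear -- no
# encryption, just the value that decryption is allowed to reveal. The function
# f below is a placeholder for the arithmetic branching program in the key.

def attribute_weighted_sum(pairs, f):
    """pairs: iterable of (x_i, z_i); returns sum_i <f(x_i), z_i>."""
    return sum(sum(w * z for w, z in zip(f(x), z_vec)) for x, z_vec in pairs)

f = lambda x: [x[0] + x[1], 2 * x[0]]     # stand-in weight function f: x -> vector
pairs = [((1, 2), (10, 20)),              # (x_1, z_1): x public, z private
         ((0, 3), (5, 7))]                # (x_2, z_2)
print(attribute_weighted_sum(pairs, f))   # (3*10 + 2*20) + (3*5 + 0*7) = 85
```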
Last updated:  2024-05-01
REED: Chiplet-Based Accelerator for Fully Homomorphic Encryption
Aikata Aikata, Ahmet Can Mert, Sunmin Kwon, Maxim Deryabin, and Sujoy Sinha Roy
Fully Homomorphic Encryption (FHE) enables privacy-preserving computation and has many applications. However, its practical implementation faces massive computation and memory overheads. To address this bottleneck, several Application-Specific Integrated Circuit (ASIC) FHE accelerators have been proposed. All these prior works put every component needed for FHE onto one chip (monolithic), hence offering high performance. However, they suffer from practical problems associated with large-scale chip design, such as inflexibility, low yield, and high manufacturing cost. In this paper, we present the \emph{first-of-its-kind} multi-chiplet-based FHE accelerator `REED' for overcoming the limitations of prior monolithic designs. To utilize the advantages of multi-chiplet structures while matching the performance of larger monolithic systems, we propose and implement several novel strategies in the context of FHE. These include a scalable chiplet design approach, an effective framework for workload distribution, a custom inter-chiplet communication strategy, and advanced pipelined Number Theoretic Transform and automorphism designs to enhance performance. Experimental results demonstrate that the REED 2.5D microprocessor consumes 96.7 mm$^2$ of chip area and 49.4 W of average power in 7 nm technology. It can achieve a speedup of up to 2,991$\times$ compared to a CPU (24-core 2$\times$Intel X5690) and offer 1.9$\times$ better performance, along with a 50\% reduction in development costs, compared to state-of-the-art ASIC FHE accelerators. Furthermore, our work presents the \textit{first} instance of benchmarking encrypted deep neural network (DNN) training. Overall, the REED architecture offers a highly effective solution for accelerating FHE, thereby significantly advancing the practicality and deployability of FHE in real-world applications.
Last updated:  2023-09-05
PAP: A Privacy-Preserving Authentication Scheme with Anonymous Payment for V2G Networks
Xiaohan Yue, Xue Bi, Haibo Yang, Shi Bai, and Yuan He
Vehicle-to-grid (V2G) networks, as an emerging smart grid paradigm, can be integrated with renewable energy resources to provide power services and manage electricity demands. When accessing electricity services, an electric vehicle (EV) typically provides authentication and/or payment information containing identifying data to a service provider, which raises privacy concerns as malicious entities might trace EV activity or exploit personal information. Although numerous anonymous authentication and payment schemes have been presented for V2G networks, no such privacy-preserving scheme supports authentication and payment simultaneously. Therefore, this paper is the first to present a privacy-preserving authentication scheme with anonymous payment for V2G networks (PAP, for short). In addition, this scheme also supports accountability and revocability, which are practical features to prevent malicious behavior; minimal attribute disclosure, which maximizes the privacy of the EV when responding to the service provider's flexible access policies; payment binding, which guarantees accountability in the payment phase; and user-controlled linkability, which enables the EV to decide whether different authentication sessions are linkable for continuous services. On the performance side, we implement PAP with a pairing cryptography library and evaluate it on different hardware platforms, showing that it is practical for V2G applications.
Last updated:  2023-12-03
A Novel CCA Attack for NTRU+ KEM
Joohee Lee, Minju Lee, Hansol Ryu, and Jaehui Park
The KpqC competition, which began in 2022, aims to standardize post-quantum cryptography (PQC) in the Republic of Korea. Among the 16 submissions to the KpqC competition, the lattice-based schemes exhibit the most promising and balanced performance. In this paper, we propose the first effective classical CCA attack that recovers the transmitted session key of NTRU+, one of the lattice-based Key Encapsulation Mechanisms (KEMs) proposed in the KpqC competition. With the proposed attack, we show that all the suggested parameter sets of NTRU+ fail to satisfy the claimed security. We also suggest a way to modify the NTRU+ scheme to defend against our attack.
Last updated:  2023-08-03
Broadcast-Optimal Two Round MPC with Asynchronous Peer-to-Peer Channels
Ivan Damgård, Divya Ravi, Luisa Siniscalchi, Sophia Yakoubov
In this paper we continue the study of two-round broadcast-optimal MPC, where broadcast is used in one of the two rounds, but not in both. We consider the realistic scenario where the round that does not use broadcast is asynchronous. Since a first asynchronous round (even when followed by a round of broadcast) does not admit any secure computation, we introduce a new notion of asynchrony which we call $(t_d, t_m)$-asynchrony. In this new notion of asynchrony, an adversary can delay or drop up to $t_d$ of a given party's incoming messages; we refer to $t_d$ as the deafness threshold. Similarly, the adversary can delay or drop up to $t_m$ of a given party's outgoing messages; we refer to $t_m$ as the muteness threshold. We determine which notions of secure two-round computation are achievable when the first round is $(t_d, t_m)$-asynchronous, and the second round is over broadcast. Similarly, we determine which notions of secure two-round computation are achievable when the first round is over broadcast, and the second round is (fully) asynchronous. We consider the cases where a PKI is available, where only a CRS is available but private communication in the first round is possible, and where only a CRS is available and no private communication is possible before the parties have had a chance to exchange public keys.
Last updated:  2023-08-03
Faster cellular automata cryptosystems with neighbor sequences
Kittiphop Phalakarn, Athasit Surarerks
Encryption processes and cryptosystems are very important: we use them to protect our private information over the Internet. Cellular automata are one of the computational models that can be used in cryptosystems. The advantage of cellular automata is their ability to work in parallel, which can reduce the encryption time. Some applications require the encryption time to be small, so this paper aims to reduce the encryption time of cellular automata cryptosystems. We propose a new technique that lets these cryptosystems reach the avalanche effect, one of the desired properties of cryptosystems, faster. In the proposed technique, a new type of neighborhood is defined: a sequence of neighbor tuples. We apply our technique to Seredynski and Bouvry’s work, and the results show that the number of iterations can be reduced by up to a factor of three. This makes our cellular automata cryptosystems run faster. The relationship between the neighborhood size and the size of the cellular automaton, and the effect of neighbor sequences on hardware implementations, are left for future study.
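To make the idea of a sequence of neighbor tuples concrete, the toy sketch below runs a 1-D binary cellular automaton whose neighborhood offsets change from one iteration to the next and measures how a single flipped input bit spreads. The XOR rule and the offsets are illustrative assumptions, not the rules studied in the paper.

```python
# Toy 1-D binary cellular automaton whose neighborhood changes each iteration,
# cycling through a sequence of neighbor-offset tuples. The XOR rule and the
# offsets are illustrative only; they are not the rules or neighborhoods of
# Seredynski and Bouvry's cryptosystem or of the paper's proposal.
import random

def step(cells, offsets):
    """One synchronous update: each cell becomes the XOR of its selected neighbors."""
    n = len(cells)
    return [sum(cells[(i + d) % n] for d in offsets) % 2 for i in range(n)]

def run(cells, neighbor_sequence, iterations):
    for t in range(iterations):
        cells = step(cells, neighbor_sequence[t % len(neighbor_sequence)])
    return cells

neighbor_sequence = [(-1, 0, 1), (-2, 0, 2), (-1, 0, 3)]   # a sequence of neighbor tuples

state = [random.randint(0, 1) for _ in range(64)]
flipped = state[:]
flipped[0] ^= 1                                            # flip a single input bit
a = run(state, neighbor_sequence, iterations=9)
b = run(flipped, neighbor_sequence, iterations=9)
print("cells affected by one flipped input bit:", sum(x != y for x, y in zip(a, b)))
```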
Last updated:  2023-11-13
SwiftRange: A Short and Efficient Zero-Knowledge Range Argument For Confidential Transactions and More
Nan Wang, Sid Chi-Kin Chau, and Dongxi Liu
Zero-knowledge range proofs play a critical role in confidential transactions (CT) on blockchain systems. They are used to prove the non-negativity of committed transaction payments without disclosing the exact values. Logarithmic-sized range proofs with transparent setups, e.g., Bulletproofs, which aim to prove a committed value lies in the range $[0, 2^N-1]$ where $N$ is the bit length of the range, have gained growing popularity for communication-critical blockchain systems as they increase scalability by allowing a block to accommodate more transactions. In this paper, we propose SwiftRange, a new type of logarithmic-sized zero-knowledge range argument with a transparent setup in the discrete logarithm setting. Our argument can be a drop-in replacement for range proofs in blockchain-based confidential transactions. Compared with Bulletproofs, our argument has higher computational efficiency and lower round complexity while incurring comparable communication overheads for CT-friendly ranges, where $N \in \{32,64\}$. Specifically, a single SwiftRange achieves 1.73$\times$ and 1.37$\times$ proving efficiency with no more than 1.1$\times$ communication costs for both ranges, respectively. More importantly, our argument is doubly efficient in verification. Furthermore, our argument has a smaller size when $N \leq 16$, making it competitive for many other communication-critical applications. Our argument supports the aggregation of multiple single arguments for greater efficiency in communication and verification. Finally, we benchmark our argument against state-of-the-art range proofs to demonstrate its practicality.
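As background, the statement such a range argument proves can be written down in a few lines. The sketch below shows, in the clear and with toy parameters, the Pedersen commitment and the bit-decomposition relation that the zero-knowledge argument establishes without revealing v or r; the group parameters and helper names are assumptions for illustration, not part of SwiftRange.

```python
# The statement a range argument proves, spelled out in the clear (there is no
# zero knowledge here): a Pedersen commitment C = g^v * h^r opens to a value v
# whose N-bit decomposition shows 0 <= v < 2^N. All parameters are toy-sized.
import random

p, q = 467, 233                  # p = 2q + 1; work in the order-q subgroup of Z_p^*
g = 4
h = pow(g, 101, p)               # toy second generator; in practice log_g(h) must be unknown

def commit(v, r):
    return pow(g, v % q, p) * pow(h, r % q, p) % p

N = 6
v = 37                           # claimed to lie in [0, 2^N - 1]
r = random.randrange(q)
C = commit(v, r)

# The relation the prover must establish without revealing v or r:
bits = [(v >> i) & 1 for i in range(N)]
assert all(b * (b - 1) == 0 for b in bits)            # each b_i is a bit
assert sum(b << i for i, b in enumerate(bits)) == v   # the bits recompose to v
assert C == commit(v, r)                              # and v is the committed value
```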
Last updated:  2023-10-19
STAMP-Single Trace Attack on M-LWE Pointwise Multiplication in Kyber
Bolin Yang, Prasanna Ravi, Fan Zhang, Ao Shen, and Shivam Bhasin
In this work, we propose a novel single-trace key recovery attack targeting side-channel leakage from the key-generation and encryption procedures of Kyber KEM. Our attack exploits the inherent nature of the Module-Learning With Errors (Module-LWE) problem used in Kyber KEM. We demonstrate that the inherent reliance of Kyber KEM on the Module-LWE problem results in a higher number of repeated, secret-key-related computations, referred to as STAMPs, appearing on a single side-channel trace, compared to the Ring-LWE problem at a similar security level. We exploit leakage from the pointwise multiplication operation and take advantage of the properties of the Module-LWE instance to enable a potential single-trace key recovery attack. We validated the efficacy of our attack on both simulated and real traces, and we performed experiments on both the reference and assembly-optimized implementations of Kyber KEM, taken from the pqm4 library, a well-known benchmarking and testing framework for PQC schemes on the ARM Cortex-M4 microcontroller. We also analyze the applicability of our attack in the presence of countermeasures against traditional SCA, such as masking and shuffling. We believe our work motivates more research towards SCA-resistant implementations of the key-generation and encryption procedures of Kyber KEM.
Last updated:  2023-08-02
Delegated Time-Lock Puzzle
Aydin Abadi, Dan Ristea, Steven J. Murdoch
Time-lock puzzles (TLPs) are cryptographic protocols that enable a client to lock a message in such a way that a server can only unlock it after a specific time period. However, existing TLPs have certain limitations: (i) they assume that both the client and server always possess sufficient computational resources and (ii) they solely focus on the lower time bound for finding a solution, disregarding the upper bound that guarantees a regular server can find a solution within a certain time frame. Additionally, existing TLPs designed to handle multiple puzzles either (a) entail high verification costs or (b) lack generality, requiring identical time intervals between consecutive solutions. To address these limitations, this paper introduces, for the first time, the concept of a "Delegated Time-Lock Puzzle" and presents a protocol called "Efficient Delegated Time-Lock Puzzle" (ED-TLP) that realises this concept. ED-TLP allows the client and server to delegate their resource-demanding tasks to third-party helpers. It facilitates real-time verification of solution correctness and efficiently handles multiple puzzles with varying time intervals. ED-TLP ensures the delivery of solutions within predefined time limits by incorporating both an upper bound and a fair payment algorithm. We have implemented ED-TLP and conducted a comprehensive analysis of its overheads, demonstrating the efficiency of the construction.
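For context, the sketch below shows the classic Rivest-Shamir-Wagner repeated-squaring puzzle that time-lock puzzles build on; it is not ED-TLP, and the primes are toy-sized, but it makes the asymmetry explicit: the generator uses the trapdoor phi(n) to prepare the puzzle cheaply, while the solver must perform the t squarings sequentially.

```python
# Classic Rivest-Shamir-Wagner time-lock puzzle (sequential squaring), shown as
# the baseline primitive that delegated TLPs build on. This is not ED-TLP, and
# the primes are toy-sized for illustration only.
import math
import random

p, q = 104729, 1299709           # toy primes; a real puzzle needs a large RSA modulus
n = p * q
phi = (p - 1) * (q - 1)

def generate_puzzle(message, t):
    """Lock an integer message < n so that ~t sequential squarings unlock it."""
    a = random.randrange(2, n)
    while math.gcd(a, n) != 1:
        a = random.randrange(2, n)
    b = pow(a, pow(2, t, phi), n)     # a^(2^t) mod n, cheap with the trapdoor phi(n)
    return (a, t, (message + b) % n)

def solve_puzzle(puzzle):
    """Without phi(n), the solver must perform the t squarings one after another."""
    a, t, c = puzzle
    b = a
    for _ in range(t):
        b = b * b % n
    return (c - b) % n

secret = 123456789
puzzle = generate_puzzle(secret, t=100_000)
assert solve_puzzle(puzzle) == secret
```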
Last updated:  2023-12-22
Long Paper: Provable Secure Parallel Gadgets
Francesco Berti, Sebastian Faust, and Maximilian Orlt
Side-channel attacks are a fundamental threat to the security of cryptographic implementations. One of the most prominent countermeasures against side-channel attacks is masking, where each intermediate value of the computation is secret shared, thereby concealing the computation's sensitive information. An important security model to study the security of masking schemes is the random probing model, in which the adversary obtains each intermediate value of the computation with some probability $p$. To construct secure masking schemes, an important building block is the refreshing gadget, which updates the randomness of the secret shared intermediate values. Recently, Dziembowski, Faust, and Zebrowski (ASIACRYPT'19) analyzed the security of a simple refreshing gadget by using a new technique called the leakage diagram. In this work, we follow the approach of Dziembowski et al. and significantly improve its methodology. Concretely, we refine the notion of a leakage diagram via so-called dependency graphs, and show how to use this technique for arbitrary complex circuits via composition results and approximation techniques. To illustrate the power of our new techniques, as a case study, we design provably secure parallel gadgets for the random probing model, and adapt the ISW multiplication such that all gadgets can be parallelized. Finally, we evaluate concrete security levels, and show how our new methodology can further improve the concrete security level of masking schemes. This results in a compiler provably secure up to a noise level of $O(1)$ for affine circuits and $O(1/n^n)$ in general.
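As background on what a refreshing gadget does, the sketch below shows a simple circular refresh for Boolean (XOR) masking: every share absorbs fresh randomness while the shared secret is preserved. This textbook gadget is only an illustration under that assumption; it is not the parallel gadgets constructed in the paper.

```python
# A simple "circular" refreshing gadget for Boolean (XOR) masking: every share
# absorbs a fresh random value that is cancelled on its neighbour, so the shared
# secret is unchanged. This textbook gadget only illustrates what a refresh
# does; it is not the parallel gadgets constructed in the paper.
import secrets

def xor_all(values):
    acc = 0
    for v in values:
        acc ^= v
    return acc

def share(x, n):
    """Split the byte x into n XOR-shares."""
    shares = [secrets.randbelow(256) for _ in range(n - 1)]
    shares.append(x ^ xor_all(shares))
    return shares

def refresh(shares):
    """Circular refresh: x_i' = x_i ^ r_i ^ r_{(i-1) mod n}; the r_i cancel pairwise."""
    n = len(shares)
    r = [secrets.randbelow(256) for _ in range(n)]
    return [shares[i] ^ r[i] ^ r[(i - 1) % n] for i in range(n)]

x = 0xA7
sh = share(x, n=5)
assert xor_all(sh) == x
assert xor_all(refresh(sh)) == x      # same secret, fresh randomness in every share
```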
Last updated:  2023-08-02
A Relational Credential System from $q$-SDH-based Graph Signatures
Syh-Yuan Tan, Ioannis Sfyrakis, Thomas Gross
An attribute-based credential system enables users to prove possession of a credential and statements over certified attributes to verifiers in zero-knowledge while maintaining anonymity and unlinkability. In a relational anonymous credential system, users can further prove their relationship to other entities in their social graph, such as position in an organizational hierarchy or friends-of-friends status in an online social network graph, while protecting their own privacy and that of other users involved in the social graph. While traditional anonymous credential schemes make no provisions for privacy-preserving relationship predicates, a relational credential system is more usable, because it can facilitate relationship-based access control with a wide range of predicates and offers strong privacy guarantees for relationship proofs. We propose the first relational credential scheme, based on a new $q$-SDH graph signature scheme and an efficient zero-knowledge proof system for graph predicates. We rigorously prove the security for the proposed scheme and provide a benchmark using Facebook social graphs.
Last updated:  2023-08-02
Exploring Blockchain Technology through a Modular Lens: A Survey
Minghui Xu, Yihao Guo, Chunchi Liu, Qin Hu, Dongxiao Yu, Zehui Xiong, Dusit Niyato, Xiuzhen Cheng
Blockchain has attracted significant attention in recent years due to its potential to revolutionize various industries by providing trustlessness. To comprehensively examine blockchain systems, this article presents both a macro-level overview of the most popular blockchain systems and a micro-level analysis of a general blockchain framework and its crucial components. The macro-level exploration provides a big picture of the efforts made by blockchain professionals over the years to enhance blockchain performance, while the micro-level investigation details the blockchain building blocks for deep technical comprehension. More specifically, this article introduces a general modular blockchain analytic framework that decomposes a blockchain system into interacting modules and then examines the major modules to cover the essential blockchain components of network, consensus, and distributed ledger at the micro-level. The framework as well as the modular analysis jointly build a foundation for designing scalable, flexible, and application-adaptive blockchains that can meet diverse requirements. Additionally, this article explores popular technologies that can be integrated with blockchain to expand functionality and highlights major challenges. Such a study provides critical insights to overcome the obstacles in designing novel blockchain systems and facilitates the further development of blockchain as a digital infrastructure to serve new applications.
Last updated:  2023-08-01
A Systematic Study of Data Augmentation for Protected AES Implementations
Huimin Li, Guilherme Perin
Side-channel attacks against cryptographic implementations are mitigated by the application of masking and hiding countermeasures. Hiding countermeasures attempt to reduce the Signal-to-Noise Ratio of measurements by adding noise or desynchronization effects during the execution of the cryptographic operations. To bypass these protections, attackers adopt signal processing techniques such as pattern alignment, filtering, averaging, or resampling. Convolutional neural networks have shown the ability to reduce the effect of countermeasures without the need for trace preprocessing, especially alignment, due to their shift invariant property. Data augmentation techniques are also considered to improve the regularization capacity of the network, which improves generalization and, consequently, reduces the attack complexity. In this work, we deploy systematic experiments to investigate the benefits of data augmentation techniques against masked AES implementations when they are also protected with hiding countermeasures. Our results show that, for each countermeasure and dataset, a specific neural network architecture requires a particular data augmentation configuration to achieve significantly improved attack performance. Our results clearly show that data augmentation should be a standard process when targeting datasets with hiding countermeasures in deep learning-based side-channel attacks.
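As a concrete illustration of the kind of augmentations such a study varies, the NumPy sketch below applies random circular shifts (mimicking desynchronization) and additive Gaussian noise to a batch of traces. The transforms and parameter values are generic assumptions for illustration, not the paper's architectures or configurations.

```python
# Two standard augmentations for side-channel traces: random circular shifts
# (mimicking desynchronization) and additive Gaussian noise. These are generic
# examples of the transforms such studies compare, not the exact architectures
# or augmentation configurations used in the paper.
import numpy as np

rng = np.random.default_rng(0)

def augment_shift(traces, max_shift):
    """Circularly shift each trace by a random offset in [-max_shift, max_shift]."""
    shifts = rng.integers(-max_shift, max_shift + 1, size=len(traces))
    return np.stack([np.roll(t, s) for t, s in zip(traces, shifts)])

def augment_noise(traces, sigma):
    """Add zero-mean Gaussian noise with standard deviation sigma."""
    return traces + rng.normal(0.0, sigma, size=traces.shape)

traces = rng.normal(size=(16, 1000))       # stand-in for a batch of measured traces
augmented = augment_noise(augment_shift(traces, max_shift=20), sigma=0.05)
print(augmented.shape)                     # (16, 1000)
```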
Last updated:  2023-08-01
Towards Open Scan for the Open-source Hardware
Leonid Azriel, Avi Mendelson
The open-source hardware IP model has recently started gaining popularity in the developer community. This model offers integrated circuit (IC) developers wider standardization, faster time-to-market, and a richer platform for research. In addition, open-source hardware conforms to Kerckhoffs's principle of a publicly known algorithm and thus helps to enhance security. However, when security comes into consideration, source transparency is only one part of the solution. A complex global IC supply chain stands between the source and the final product. Hence, even if the source is known, the finished product is not guaranteed to match it. In this article, we propose the Open Scan model, in which, in addition to the source code, the IC vendor contributes library-independent information on scan insertion. With scan information available, the user or a certification lab can perform partial reverse engineering of the IC to verify conformance to the advertised source. Compliance lists of open-source programs, such as that of the OpenTitan cryptographic IC, can be amended to include this requirement. The Open Scan model addresses accidental and dishonest deviations from the golden model and partially addresses malicious modifications, known as hardware Trojans. We verify the efficiency of the proposed method in simulation with the Trust-Hub Trojan benchmarks and with several open-source benchmarks, in which we randomly insert modifications.
Last updated:  2023-08-01
DualDory: Logarithmic-Verifier Linkable Ring Signatures through Preprocessing
Jonathan Bootle, Kaoutar Elkhiyaoui, Julia Hesse, Yacov Manevich
A linkable ring signature allows a user to sign anonymously on behalf of a group while ensuring that multiple signatures from the same user are detected. Applications such as privacy-preserving e-voting and e-cash can leverage linkable ring signatures to significantly improve privacy and anonymity guarantees. To scale to systems involving large numbers of users, short signatures with fast verification are a must. Concretely efficient ring signatures currently rely on a trusted authority maintaining a master secret, or follow an accumulator-based approach that requires a trusted setup. In this work, we construct the first linkable ring signature with both logarithmic signature size and verification that does not require any trusted mechanism. Our scheme, which relies on discrete-log type assumptions and bilinear maps, improves upon a recent concise ring signature called DualRing by integrating improved preprocessing arguments to reduce the verification time from linear to logarithmic in the size of the ring. Our ring signature allows signatures to be linked based on what message is signed, ranging from linking signatures on any message to only signatures on the same message. We provide benchmarks for our scheme and prove its security under standard assumptions. The proposed linkable ring signature is particularly relevant to use cases that require privacy-preserving enforcement of threshold policies in a fully decentralized context, and e-voting.
Last updated:  2023-10-11
Composable Oblivious Pseudo-Random Functions via Garbled Circuits
Sebastian Faller, Astrid Ottenhues, and Johannes Ottenhues
Oblivious Pseudo-Random Functions (OPRFs) are a central tool for building modern protocols for authentication and distributed computation. For example, OPRFs enable simple login protocols that do not reveal the password to the provider, which helps to mitigate known shortcomings of password-based authentication such as password reuse or mix-up. Reliable treatment of passwords becomes more and more important as we log in to a multitude of services with different passwords in our daily life. To ensure the security and privacy of such services in the long term, modern protocols should always consider the possibility of attackers with quantum computers. Therefore, recent research has focused on constructing post-quantum-secure OPRFs. Unfortunately, existing constructions either lack efficiency, or they are based on complex and relatively new cryptographic assumptions, some of which have lately been disproved. In this paper, we revisit the security and the efficiency of the well-known “OPRFs via Garbled Circuits” approach. Such an OPRF is presumably post-quantum-secure and built from well-understood primitives, namely symmetric cryptography and oblivious transfer. We investigate security in the strong Universal Composability model, which guarantees security even when multiple instances are executed in parallel and in conjunction with arbitrary other protocols, which is a realistic scenario in today’s internet. At the same time, it is faster than other current post-quantum-secure OPRFs. Our implementation and benchmarks demonstrate that our proposed OPRF is currently among the best choices if the privacy of the data has to be guaranteed for a long time.
Last updated:  2023-12-13
Fast batched asynchronous distributed key generation
Jens Groth and Victor Shoup
We present new protocols for threshold Schnorr signatures that work in an asynchronous communication setting, providing robustness and optimal resilience. These protocols provide unprecedented performance in terms of communication and computational complexity. In terms of communication complexity, for each signature, a single party must transmit a few dozen group elements and scalars across the network (independent of the size of the signing committee). In terms of computational complexity, the amortized cost for one party to generate a signature is actually less than that of just running the standard Schnorr signing or verification algorithm (at least for moderately sized signing committees, say, up to 100). For example, we estimate that with a signing committee of 49 parties, at most 16 of which are corrupt, we can generate 50,000 Schnorr signatures per second (assuming each party can dedicate one standard CPU core and 500 Mb/s of network bandwidth to signing). Importantly, this estimate includes both the cost of an offline precomputation phase (which just churns out message-independent "presignatures") and an online signature generation phase. Also, the online signing phase can generate a signature with very little network latency (just one to three rounds, depending on how throughput and latency are balanced). To achieve this result, we provide two new innovations. One is a new secret sharing protocol (again, asynchronous, robust, optimally resilient) that allows the dealer to securely distribute shares of a large batch of ephemeral secret keys, and to publish the corresponding ephemeral public keys. To achieve better performance, our protocol minimizes public-key operations, and in particular, is based on a novel technique that does not use the traditional technique based on "polynomial commitments". The second innovation is a new algorithm to efficiently combine ephemeral public keys contributed by different parties (some possibly corrupt) into a smaller number of secure ephemeral public keys. This new algorithm is based on a novel construction of a so-called "super-invertible matrix" along with a corresponding highly efficient algorithm for multiplying this matrix by a vector of group elements. As protocols for verifiably sharing a secret key with an associated public key and the technology of super-invertible matrices both play a major role in threshold cryptography and multi-party computation, our two new innovations should have applicability well beyond that of threshold Schnorr signatures.
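As a baseline for the cost comparison made above, the sketch below implements standard single-party Schnorr signing and verification over a toy prime-order subgroup. It is not the paper's threshold protocol; the group parameters and the hash-to-scalar helper H are illustrative assumptions.

```python
# Standard single-party Schnorr signing and verification, included only as the
# cost baseline the abstract compares against. This is not the paper's threshold
# protocol; the group parameters are toy-sized and H is an illustrative
# hash-to-scalar helper.
import hashlib
import random

p, q, g = 467, 233, 4          # p = 2q + 1; g generates the order-q subgroup of Z_p^*

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)

def sign(sk, msg):
    r = random.randrange(1, q)           # ephemeral nonce ("presignature" material)
    R = pow(g, r, p)
    c = H(R, msg)
    return R, (r + c * sk) % q

def verify(pk, msg, sig):
    R, z = sig
    c = H(R, msg)
    return pow(g, z, p) == R * pow(pk, c, p) % p

sk, pk = keygen()
sig = sign(sk, "hello")
assert verify(pk, "hello", sig)
assert not verify(pk, "tampered", sig)
```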
Last updated:  2023-12-08
zkDL: Efficient Zero-Knowledge Proofs of Deep Learning Training
Haochen Sun, Tonghe Bai, Jason Li, and Hongyang Zhang
The recent advancements in deep learning have brought about significant changes in various aspects of people's lives. Meanwhile, these rapid developments have raised concerns about the legitimacy of the training process of deep neural networks. To protect the intellectual property of AI developers, directly examining the training process by accessing the model parameters and training data is often prohibited for verifiers. In response to this challenge, we present zero-knowledge deep learning (zkDL), an efficient zero-knowledge proof for deep learning training. To address the long-standing challenge of verifiable computations of non-linearities in deep learning training, we introduce zkReLU, a specialized proof for the ReLU activation and its backpropagation. zkReLU turns the disadvantage of non-arithmetic relations into an advantage, leading to the creation of FAC4DNN, our specialized arithmetic circuit design for modelling neural networks. This design aggregates the proofs over different layers and training steps, without being constrained by their sequential order in the training process. With our new CUDA implementation that achieves full compatibility with the tensor structures and the aggregated proof design, zkDL enables the generation of complete and sound proofs in less than a second per batch update for an 8-layer neural network with 10M parameters and a batch size of 64, while provably ensuring the privacy of data and model parameters. To the best of our knowledge, no existing work on zero-knowledge proofs of deep learning training scales to networks of this size.
Last updated:  2023-08-07
Round-Optimal Black-Box MPC in the Plain Model
Yuval Ishai, Dakshita Khurana, Amit Sahai, Akshayaram Srinivasan
We give the first construction of a fully black-box round-optimal secure multiparty computation (MPC) protocol in the plain model. Our protocol makes black-box use of a sub-exponentially secure two-message statistical sender private oblivious transfer (SSP-OT), which in turn can be based on (sub-exponential variants of) almost all of the standard cryptographic assumptions known to imply public-key cryptography.