All papers in 2022 (Page 3 of 1781 results)

Last updated:  2023-01-20
Truncator: Time-space Tradeoff of Cryptographic Primitives
Foteini Baldimtsi, Konstantinos Chalkias, Panagiotis Chatzigiannis, Mahimna Kelkar
We present mining-based techniques to reduce the size of various cryptographic outputs without loss of security. Our approach can be generalized for multiple primitives, such as cryptographic key generation, signing, hashing and encryption schemes, by introducing a brute-forcing step for provers/senders aiming to compress submitted cryptographic material. Interestingly, mining can result in record-size cryptographic outputs, and we show that 5%-12% shorter hash digests and signatures are practically feasible even with commodity hardware. As a result, our techniques make compressing addresses and transaction signatures possible in order to pay lower fees in blockchain applications while decreasing the demand for blockchain space, a major bottleneck for initial syncing, communication and storage. Also, the effects of "compressing once - then reuse" at mass scale can be economically profitable in the long run for both the Web2 and Web3 ecosystems. Our paradigm relies on a brute-force search operation in order to craft the primitive's output such that it fits into fewer bytes, while the "missing" fixed bytes are implied by the system parameters and omitted from the actual communication. While such compression requires computational effort depending on the level of compression, this cost is only paid at the source (i.e. in blockchains, senders are rewarded by lowered transaction fees), and the benefits of the compression are enjoyed by the whole ecosystem. As a starting point, we show how our paradigm applies to some basic primitives commonly used in blockchain applications but also traditional Web2 transactions (such as shorter digital certificates), and show how security is preserved using a bit security framework. Surprisingly, we also identify cases where wise mining strategies require proportionally less effort than naive brute-forcing, shorter hash-based signatures being one of the best examples. We also evaluate our approach for several primitives based on different levels of compression. Our evaluation concretely demonstrates the benefits both in terms of financial cost and storage if adopted by the community, and we showcase how our technique can achieve up to 83.21% reduction in smart contract gas fees at a cost of less than 4 seconds of computation on a single core.
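The mining step can be pictured with a minimal sketch (an illustration assuming SHA-256, a 16-bit zero prefix as the compression target, and an explicit counter standing in for whatever malleable input field the sender actually controls; none of these choices are taken from the paper):

```python
# Hypothetical sketch of the mining paradigm: brute-force a counter until the digest
# starts with PREFIX_BITS zero bits, so that the prefix is implied by system
# parameters and can be omitted from the transmitted output.
import hashlib

PREFIX_BITS = 16  # compression target (assumption, ~2^16 hash calls expected)

def mine_truncated_digest(message: bytes):
    counter = 0
    while True:
        digest = hashlib.sha256(message + counter.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - PREFIX_BITS) == 0:
            # the all-zero prefix is fixed by convention, so only the rest is sent
            return counter, digest[PREFIX_BITS // 8:]
        counter += 1

counter, short_digest = mine_truncated_digest(b"example transaction")
print(counter, len(short_digest), "bytes instead of 32")
```

The same search-until-it-fits loop is what the abstract applies to keys, signatures and ciphertexts, with the computational cost paid once by the sender.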
Last updated:  2023-03-17
Multi-ciphertext security degradation for lattices
Daniel J. Bernstein
Typical lattice-based cryptosystems are commonly believed to resist multi-target attacks. For example, the New Hope proposal stated that it avoids "all-for-the-price-of-one attacks". An ACM CCS 2021 paper from Duman–Hövelmanns–Kiltz–Lyubashevsky–Seiler stated that "we can show that $\mathrm{Adv}_{\mathrm{PKE}}^{\mathrm{IND\text{-}CPA}} \approx \mathrm{Adv}_{\mathrm{PKE}}^{(n,q_C)\text{-}\mathrm{IND\text{-}CPA}}$" for "lattice-based schemes" such as Kyber, i.e. that one-out-of-many-target IND-CPA is as difficult to break as single-target IND-CPA, assuming "the hardness of MLWE as originally defined for the purpose of worst-case to average-case reductions". Meanwhile NIST expressed concern regarding multi-target attacks against non-lattice cryptosystems. This paper quantifies the asymptotic impact of multiple ciphertexts per public key upon standard analyses of known primal lattice attacks, assuming existing heuristics. The qualitative conclusion is that typical lattice PKEs asymptotically degrade in heuristic multi-ciphertext IND-CPA security as the number of ciphertexts increases. These PKE attacks also imply multi-ciphertext IND-CCA2 attacks against typical constructions of lattice KEMs. Quantitatively, the asymptotic heuristic security degradation is exponential in $\Theta(n)$ for decrypting many ciphertexts, cutting a constant fraction out of the total number of bits of security, and exponential in $\Theta(n/\log n)$ for decrypting one out of many ciphertexts, for conservative cryptosystem parameters. This shows a contradiction between the existing heuristics and the idea that multi-target security matches single-target security. Also, whether or not the existing heuristics are correct, (1) there are flaws in the claim of an MLWE-based proof of tight multi-target security, and (2) there is a $2^{88}$-guess attack breaking one out of $2^{40}$ ciphertexts for a FrodoKEM-640 public key, disproving FrodoKEM's claim that "the FrodoKEM parameter sets comfortably match their target security levels with a large margin".
Last updated:  2022-11-14
New Properties of Double Boomerang Connectivity Table
Qianqian Yang, Ling Song, Siwei Sun, Danping Shi, Lei Hu
The double boomerang connectivity table (DBCT) is a new table proposed recently to capture the behavior of two consecutive S-boxes in boomerang attacks. In this paper, we observe an interesting property of the DBCT of an S-box: the ladder switch and the S-box switch happen in most cases for two consecutive S-boxes, and for some S-boxes only the S-box switch and the ladder switch are possible. This property implies an additional criterion for S-boxes to resist boomerang attacks and also provides a new evaluation direction for an S-box. Using an extension of the DBCT, we verify that some boomerang distinguishers of TweAES and Deoxys are flawed. On the other hand, inspired by the property, we put forward a formula for estimating boomerang cluster probabilities. Furthermore, we introduce the first model to search for boomerang distinguishers with good cluster probabilities. Applying the model to CRAFT, we obtain 9-round and 10-round boomerang distinguishers with a higher probability than that of previous works.
Last updated:  2023-02-10
Weighted Secret Sharing from Wiretap Channels
Fabrice Benhamouda, Shai Halevi, Lev Stambler
Secret-sharing allows splitting a piece of secret information among a group of shareholders, so that it takes a large enough subset of them to recover it. In \emph{weighted} secret-sharing, each shareholder has an integer weight, and it takes a subset of large-enough weight to recover the secret. Schemes in the literature for weighted threshold secret sharing either have share sizes that grow linearly with the total weight, or ones that depend on huge public information (essentially a garbled circuit) of size (quasi)polynomial in the number of parties. To do better, we investigate a relaxation, $(\alpha, \beta)$-ramp weighted secret sharing, where subsets of weight $\beta W$ can recover the secret (with $W$ the total weight), but subsets of weight $\alpha W$ or less cannot learn anything about it. These can be constructed from standard secret-sharing schemes, but known constructions require long shares even for short secrets, achieving share sizes of $\max\big(W,\frac{|\mathrm{secret}|}{\epsilon}\big)$, where $\epsilon=\beta-\alpha$. In this note we first observe that simple rounding lets us replace the total weight $W$ by $N/\epsilon$, where $N$ is the number of parties. Combined with known constructions, this yields share sizes of $O\big(\max(N,|\mathrm{secret}|)/{\epsilon}\big)$. Our main contribution is a novel connection between weighted secret sharing and wiretap channels, which improves or even eliminates the dependence on~$N$, at a price of increased dependence on $1/\epsilon$. We observe that for certain additive-noise $(R,A)$ wiretap channels, any semantically secure scheme can be naturally transformed into an $(\alpha,\beta)$-ramp weighted secret-sharing, where $\alpha,\beta$ are essentially the respective capacities of the channels $A,R$. We present two instantiations of this type of construction, one using binary symmetric wiretap channels, and the other using additive Gaussian wiretap channels. Depending on the parameters of the underlying wiretap channels, this gives rise to $(\alpha, \beta)$-ramp schemes with share sizes $|\mathrm{secret}|/\mathrm{poly}(\epsilon\log N)$ or even just $|\mathrm{secret}|/\mathrm{poly}(\epsilon)$.
Last updated:  2022-11-14
Rescue-Prime Optimized
Tomer Ashur, Al Kindi, Willi Meier, Alan Szepieniec, Bobbin Threadbare
This note specifies two instances of a hash function obtained from applying the Marvellous design strategy to a specific context. The context in question is native hashing in a STARK Virtual Machine such as Miden.
Last updated:  2022-11-25
Folding Schemes with Selective Verification
Carla Ràfols, Alexandros Zacharakis
In settings such as delegation of computation where a prover is doing computation as a service for many verifiers, it is important to amortize the prover’s costs without increasing those of the verifier. We introduce folding schemes with selective verification. Such a scheme allows a prover to aggregate $m$ NP statements $x_i\in \mathcal{L}$ into a single statement $x\in\mathcal{L}$. Knowledge of a witness for $x$ implies knowledge of witnesses for all $m$ statements. Furthermore, each statement can be individually verified by asserting the validity of the aggregated statement and an individual proof $\pi$ with size sublinear in the number of aggregated statements. In particular, verification of statement $x_i$ does not require reading (or even knowing) all the aggregated statements. We demonstrate natural folding schemes for various languages: inner product relations, vector and polynomial commitment openings and the relaxed R1CS of NOVA. All these constructions incur a minimal overhead for the prover, comparable to simply reading the statements.
Last updated:  2023-01-15
On Linearization Attack of Entropic Quasigroups Cryptography
Daniel Nager
In this paper we study the linearization attack proposed in ePrint 2021/583, which targets entropic quasigroup cryptography. We show how this attack can be avoided and how linearization can actually be used to build valid cryptosystems.
Last updated:  2022-11-13
Security Analysis of Delay-Based Strong PUFs with Multiple Delay Lines
Anita Aghaie, Amir Moradi, Johannes Tobisch, Nils Wisiol
Using a novel circuit design, we investigate if the modeling-resistance of delay-based, CMOS-compatible strong PUFs can be increased by the usage of multiple delay lines. Studying a circuit inspired by the Arbiter PUF, but using four instead of merely two delay lines, we obtain evidence showing that the usage of many delay lines does not significantly increase the security of the strong PUF circuit. Based on our findings, we suggest future research directions.
Last updated:  2022-11-15
Solving Small Exponential ECDLP in EC-based Additively Homomorphic Encryption and Applications
Fei Tang, Guowei Ling, Chaochao Cai, Jinyong Shan, Xuanqi Liu, Peng Tang, Weidong Qiu
Additively Homomorphic Encryption (AHE) has been widely used in various applications, such as federated learning, blockchain, and online auctions. Elliptic Curve (EC) based AHE has the advantages of efficient encryption, homomorphic addition, scalar multiplication algorithms, and short ciphertext length. However, EC-based AHE schemes require solving a small exponential Elliptic Curve Discrete Logarithm Problem (ECDLP) when running the decryption algorithm, i.e., recovering the plaintext $m\in\{0,1\}^\ell$ from $m \ast G$. Therefore, the decryption of EC-based AHE schemes is inefficient when the plaintext length $\ell > 32$. This has led people to prefer RSA-based AHE schemes over EC-based ones. This paper proposes an efficient algorithm called $\mathsf{FastECDLP}$ for solving the small exponential ECDLP at the $128$-bit security level. We perform a series of deep optimizations on two fronts: computation and memory overhead. These optimizations ensure efficient decryption when the plaintext length $\ell$ is as long as possible in practice. Moreover, we also provide a concrete implementation and apply $\mathsf{FastECDLP}$ to some specific applications. Experimental results show that $\mathsf{FastECDLP}$ is far faster than previous works. For example, the decryption can be done in $0.35$ ms with a single thread when $\ell = 40$, which is about $30$ times faster than that of Paillier. Furthermore, we experiment with $\ell$ from $32$ to $54$, whereas the existing works generally only consider $\ell \leq 32$. The decryption only requires $1$ second with $16$ threads when $\ell = 54$. In practical applications, we can speed up model training of existing vertical federated learning frameworks by $4$ to $14$ times. At the same time, the decryption efficiency is accelerated by about $140$ times in a blockchain financial system (ESORICS 2021) with the same memory overhead.
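For orientation, the small-exponent discrete-logarithm step that dominates decryption is classically solved with baby-step giant-step in roughly $2^{\ell/2}$ group operations; the sketch below shows only that textbook baseline, over the multiplicative group modulo a prime as a stand-in for an elliptic-curve group and with toy parameters (the paper's $\mathsf{FastECDLP}$ adds much deeper optimizations on top of this):

```python
# Textbook baby-step giant-step for recovering a small exponent m from g^m,
# used here as a stand-in for recovering m from m*G on an elliptic curve.
from math import isqrt

def bsgs(target, g, p, ell):
    """Recover m in [0, 2^ell) with target = pow(g, m, p), in ~2^(ell/2) steps."""
    s = isqrt(1 << ell) + 1
    table, cur = {}, 1
    for j in range(s):                 # baby steps: g^j -> j
        table[cur] = j
        cur = cur * g % p
    g_inv_s = pow(g, -s, p)            # g^{-s} mod p (Python 3.8+)
    gamma = target
    for i in range(s + 1):             # giant steps: target * g^{-s*i}
        if gamma in table:
            return i * s + table[gamma]
        gamma = gamma * g_inv_s % p
    return None

p, g, ell = 2**61 - 1, 3, 32           # toy parameters, not from the paper
m = 3_000_000_007
assert bsgs(pow(g, m, p), g, p, ell) == m
```

In practice the baby-step table can be precomputed once and reused across decryptions, which is where computation/memory trade-offs of the kind the abstract mentions come into play.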
Last updated:  2022-11-12
Layered ROLLO-I: Faster rank-metric code-based KEM using ideal LRPC codes
Chanki Kim, Young-Sik Kim, Jong-Seon No
For fast cryptographic operation, we propose a new key encapsulation mechanism (KEM), called layered ROLLO-I, using block-wise interleaved ideal LRPC (BII-LRPC) codes. By multiplying random polynomials by small-sized ideal LRPC codes, faster operation can be obtained at the cost of some additional key size. Finally, some parameters of the proposed algorithm are suggested and compared with those of the existing ROLLO-I scheme.
Last updated:  2022-11-11
Practical Settlement Bounds for Longest-Chain Consensus
Peter Gaži, Ling Ren, Alexander Russell
Nakamoto's longest-chain consensus paradigm now powers the bulk of the world's cryptocurrencies and distributed finance infrastructure. An emblematic property of longest-chain consensus is that it provides probabilistic settlement guarantees that strengthen over time. This makes the exact relationship between settlement error and settlement latency a critical aspect of the protocol that both users and system designers must understand to make informed decisions. A recent line of work has finally provided a satisfactory rigorous accounting of this relationship for proof-of-work longest-chain protocols, but those techniques do not appear to carry over to the proof-of-stake setting. This article develops explicit, rigorous settlement bounds for proof-of-stake longest-chain protocols, placing them on equal footing with their proof-of-work counterparts. Our techniques apply with some adaptations also to the proof-of-work setting where they provide improvements to the state-of-the-art settlement bounds for proof-of-work protocols.
Last updated:  2023-08-10
Set (Non-)Membership NIZKs from Determinantal Accumulators
Helger Lipmaa, Roberto Parisella
We construct a falsifiable set (non-)membership NIZK $\Pi^*$ that is considerably more efficient than known falsifiable set (non-)membership NIZKs. It also has a universal CRS. $\Pi^*$ is based on the novel concept of determinantal accumulators. Determinantal primitives have a similar relation to the recent pairing-based (non-succinct) NIZKs of Couteau and Hartmann (Crypto 2020) and Couteau et al. (CLPØ, Asiacrypt 2021) as structure-preserving primitives have to the Groth-Sahai NIZK. We also extend CLPØ by proposing efficient (non-succinct) set non-membership arguments for a large class of languages.
Last updated:  2022-11-11
DAG-$\Sigma$: A DAG-based Sigma Protocol for Relations in CNF
Gongxian Zeng, Junzuo Lai, Zhengan Huang, Yu Wang, Zhiming Zheng
At CRYPTO 1994, Cramer, Damg{\aa}rd and Schoenmakers proposed a general method to construct proofs of knowledge (PoKs), especially for $k$-out-of-$n$ partial knowledge, whose relations can be expressed in disjunctive normal form (DNF). Since then, proofs of $k$-out-of-$n$ partial knowledge have attracted much attention and some efficient constructions have been proposed. However, many practical scenarios require efficient PoK protocols for partial knowledge in other forms. In this paper, we mainly focus on PoK protocols for $k$-conjunctive normal form ($k$-CNF) relations, which have $n$ statements and can be expressed as follows: (i) $k$ statements constitute a clause via ``OR'' operations, and (ii) the relation consists of multiple clauses via ``AND'' operations. We propose an alternative Sigma protocol (called DAG-$\Sigma$ protocol) for $k$-CNF relations (in the discrete logarithm setting), by converting these relations to directed acyclic graphs (DAGs). Our DAG-$\Sigma$ protocol achieves lower communication cost and smaller computational overhead compared with Cramer et al.'s general method.
Last updated:  2023-03-06
Extendable Threshold Ring Signatures with Enhanced Anonymity
Gennaro Avitabile, Vincenzo Botta, Dario Fiore
Threshold ring signatures are digital signatures that allow $t$ parties to sign a message while hiding their identity in a larger set of $n$ users called a ``ring''. Recently, Aranha et al. [PKC 2022] introduced the notion of \emph{extendable} threshold ring signatures (ETRS). ETRS allow one to update, in a non-interactive manner, a threshold ring signature on a certain message so that the updated signature has a greater threshold, and/or an augmented set of potential signers. An application of this primitive is an anonymous ``count me in'' mechanism. A first signer creates a ring signature with a sufficiently large ring announcing a proposition in the signed message. After such a cause becomes \emph{public}, other parties can anonymously decide to support that proposal by producing an updated signature. Crucially, such applications rely on partial signatures being posted on a \emph{publicly accessible} bulletin board since users may not know/trust each other. In this paper, we first point out that even if anonymous ``count me in'' was suggested as an application of ETRS, the anonymity notion proposed in the previous work is insufficient in many application scenarios. Indeed, the existing notion guarantees anonymity only against adversaries who just see the last signature, and are not allowed to access the ``full evolution'' of an ETRS. This is in stark contrast with applications where partial signatures are posted on a public bulletin board. We therefore propose stronger anonymity definitions and construct a new ETRS that satisfies such definitions. Interestingly, while satisfying stronger anonymity properties, our ETRS asymptotically improves on the two ETRS constructions presented in prior work [PKC 2022] in terms of both time complexity and signature size. Our ETRS relies on extendable non-interactive witness-indistinguishable proof of knowledge (ENIWI PoK), a novel technical tool that we formalize and construct, and that may be of independent interest. We build our constructions from pairing groups under the SXDH assumption.
Last updated:  2022-11-10
Full Round Zero-sum Distinguishers on TinyJAMBU-128 and TinyJAMBU-192 Keyed-permutation in the Known-key setting
Orr Dunkelman, Shibam Ghosh, Eran Lambooij
TinyJAMBU is one of the finalists in the NIST lightweight standardization competition. This paper presents full round practical zero-sum distinguishers on the keyed permutation used in TinyJAMBU. We propose a full round zero-sum distinguisher on the 128- and 192-bit key variants and a reduced round zero-sum distinguisher for the 256-bit key variant in the known-key settings. Our best known-key distinguisher works with $2^{16}$ data/time complexity on the full 128-bit version and with $2^{23}$ data/time complexity on the full 192-bit version. For the 256-bit version, we can distinguish 1152 rounds (out of 1280 rounds) in the known-key settings. In addition, we present the best zero-sum distinguishers in the secret-key settings: with complexity $2^{23}$ we can distinguish 544 rounds in the forward direction or 576 rounds in the backward direction. For finding the zero-sum distinguisher, we bound the algebraic degree of the TinyJAMBU permutation using the monomial prediction technique proposed by Hu et al. at ASIACRYPT 2020. We model the monomial prediction rule on TinyJAMBU in MILP and find upper bounds on the degree by computing the parity of the number of solutions.
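For readers unfamiliar with the term, a zero-sum distinguisher exploits the fact that if a permutation has algebraic degree below $d$, then XOR-summing its outputs over any affine subspace of dimension $d$ gives zero. The toy check below is my own illustration of that generic principle, not the TinyJAMBU distinguisher: it uses the cube map over $\mathbb{F}_{2^5}$, which has algebraic degree 2.

```python
# Toy zero-sum check: x -> x^3 over GF(2^5) has algebraic degree 2, so XOR-summing
# its outputs over any affine subspace of dimension 3 yields zero.
from itertools import product

MOD = 0b100101  # x^5 + x^2 + 1, irreducible over GF(2)

def gf_mul(a, b):
    """Multiplication in GF(2^5) via shift-and-add with modular reduction."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b100000:
            a ^= MOD
    return r

def cube(x):
    return gf_mul(gf_mul(x, x), x)

base, basis = 0b00110, [0b00001, 0b01010, 0b10100]  # a 3-dimensional affine subspace
acc = 0
for bits in product([0, 1], repeat=3):
    x = base
    for b, v in zip(bits, basis):
        if b:
            x ^= v
    acc ^= cube(x)
print("zero-sum holds:", acc == 0)  # True
```

The attack in the paper works the same way at full scale: bounding the degree of the TinyJAMBU permutation via monomial prediction yields input sets over which the keyed permutation sums to zero.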
Last updated:  2022-11-10
Characterisation of Bijectivity Preserving Componentwise Modification of S-Boxes
Kaisa Nyberg
Various systematic modifications of vectorial Boolean functions have been used for finding new previously unknown classes of S-boxes with good or even optimal differential uniformity and nonlinearity. Recently, a new method was proposed for modifying a component of a bijective vectorial Boolean function by using a linear function. It was shown that the modified function remains bijective under the assumption that the inverse of the function admits a linear structure. A previously known construction of such a modification based on bijective Gold functions in odd dimension is a special case of this type of modification. In this paper, we show that the existence of a linear structure is necessary. Further, we consider replacement of a component of a bijective vectorial Boolean function in the general case. We prove that a permutation on $\mathbb{F}_2^n$ remains bijective if and only if the replacement is done by composing the permutation with an unbalanced Feistel transformation where the round function is any Boolean function on $\mathbb{F}_2^{n-1}$.
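The "if" direction of the final claim is easy to check computationally: an unbalanced Feistel step that XORs an arbitrary Boolean function of $n-1$ coordinates into the remaining coordinate is an involution, so composing any permutation with it stays a permutation. A small sketch on toy sizes (my own illustration, not the paper's proof):

```python
# Composing a permutation of n-bit strings with an unbalanced Feistel step
# (XOR an arbitrary function of the other n-1 bits into bit 0) stays a permutation,
# because the Feistel step is an involution on F_2^n.
import random

def feistel_step(x, g):
    """XOR g(all bits of x except bit 0) into bit 0; this map is its own inverse."""
    return x ^ g(x >> 1)

def is_permutation(f, n):
    return len({f(x) for x in range(1 << n)}) == 1 << n

n = 4
perm = list(range(1 << n))
random.shuffle(perm)                                  # arbitrary starting permutation
g = lambda rest: random.Random(rest).randint(0, 1)    # arbitrary Boolean function of the n-1 other bits

modified = lambda x: feistel_step(perm[x], g)
print("still a permutation:", is_permutation(modified, n))  # always True
```

The harder "only if" direction, that every bijectivity-preserving component replacement must take this form, is the content of the paper's characterisation.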
Last updated:  2022-11-10
Baloo: Nearly Optimal Lookup Arguments
Arantxa Zapico, Ariel Gabizon, Dmitry Khovratovich, Mary Maller, Carla Ràfols
We present Baloo, the first protocol for lookup tables where the prover work is linear in the number of lookups and independent of the size of the table. Baloo is built over the lookup arguments of Caulk and Caulk+, and the framework for linear relations of Rafols and Zapico. Our protocol supports commit-and-prove expansions: the prover selects the subtable containing the elements used in the lookup, which is unknown to the verifier, commits to it, and later proves the relation with the committed element. This feature makes Baloo especially suitable for proving input-output relations on hash functions, and in particular for instantiating the Ethereum Virtual Machine (EVM).
Last updated:  2022-11-10
Efficient privacy preserving top-k recommendation using homomorphic sorting
Pranav Verma, Anish Mathuria, Sourish Dasgupta
The existing works on privacy-preserving recommender systems based on homomorphic encryption do not filter the top-k most relevant items on the server side. As a result, it is necessary to send the encrypted rating vector for all items to the user, who then retrieves the top-k items. This incurs significant computation and communication costs on the user side. In this work, we employ private sorting at the server to reduce the user-side overheads. In private sorting, the values and corresponding positions of elements must remain private. We use an existing private sorting protocol by Foteini and Olga and tailor it to privacy-preserving top-k recommendation applications. We enhance it to use secure bit decomposition in the private comparison routine of the protocol. This leads to a notable reduction in the cost overheads of users as well as the servers, especially at the keyserver, where the computation cost is reduced by half. The dataserver does not have to perform costly encryption and decryption operations. It performs computationally less expensive modular exponentiation operations. Since the private comparison operation contributes significantly to the overall cost overhead, making it efficient enhances the sorting protocol’s performance. Our security analysis concludes that the proposed scheme is as secure as the original protocol.
Last updated:  2023-01-23
A Practical Full Key Recovery Attack on TFHE and FHEW by Inducing Decryption Errors
Bhuvnesh Chaturvedi, Anirban Chakraborty, Ayantika Chatterjee, Debdeep Mukhopadhyay
Fully Homomorphic Encryption (FHE) promises to secure our data on the untrusted cloud, while allowing arbitrary computations. Recent research has shown two side-channel attacks on the client side running a popular HE library. However, no side-channel attacks have yet been reported on the server side in existing literature. The current paper shows that it is possible for adversaries to inject perturbations in the ciphertexts stored in the cloud to result in decryption errors. Most importantly, we highlight that when the client reports such aberrations to the cloud service provider, the complete secret key can be extracted in a few attempts. Technically, this implies a break of the IND-CVA (Indistinguishability against Ciphertext Verification Attacks) security of the FHE schemes. The core idea of the attack is to extract the underlying error values for each homomorphically computed ciphertext and thereby construct an exact system of equations. As the security of the underlying Learning with Errors (LWE) problem collapses with the leakage of the errors, the adversary is capable of ascertaining the secret keys. We demonstrate this attack on two well-known FHE libraries, namely FHEW and TFHE. The objective of the server is to perform the attack in a stealthy manner, without raising any suspicion from the innocent client. Therefore, in a practical scenario, the successful key retrieval from a client would require the server to perform the attack with as few queries as possible. Thus we craftily use timing information during homomorphic gate computations to optimise our attack and significantly reduce the required number of queries per ciphertext. More precisely, we need 8 and 23 queries to the client for each error recovery for FHEW and TFHE, respectively. We mount a full-key recovery attack on TFHE (without and with bootstrapping) with a key size of 630 bits and successfully faulted 739 and 930 ciphertexts to recover correct errors. This required a total of 19838 and 29200 client queries, respectively. In the case of FHEW with key size 500, we successfully faulted 766 ciphertexts to recover correct errors, which required 7565 client queries. The results serve as a stark reminder that FHE schemes need to be secured at the application level apart from being secure at the primitive level so that the security of participants against realistic attacks can be ensured.
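The linear-algebra core of the attack is easiest to see from the standard LWE relation (illustrative background only; the exact scalings and dimensions in FHEW/TFHE differ in detail): a ciphertext $(\mathbf{a}_i, b_i)$ under secret $\mathbf{s}$ satisfies $b_i = \langle \mathbf{a}_i, \mathbf{s} \rangle + e_i + \Delta m_i \pmod{q}$, so once the error $e_i$ (and the plaintext $m_i$) is known, each ciphertext yields the exact equation $\langle \mathbf{a}_i, \mathbf{s} \rangle \equiv b_i - e_i - \Delta m_i \pmod{q}$, and roughly $n$ independent such equations recover $\mathbf{s}$ by Gaussian elimination over $\mathbb{Z}_q$.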
Last updated:  2022-11-09
A Systematization of Voter Registration Security
Jack Cable, Andrés Fábrega, Sunoo Park, Michael A. Specter
Voter registration is an essential part of almost any election process, and its security is a critical component of election security. Yet, despite notable compromises of voter registration systems, relatively little academic work has been devoted to securing voter registration systems, compared to research on other aspects of election security. In this paper, we present a systematic treatment of voter registration system security. We propose the first rigorous definitional framework for voter registration systems, describing the entities and core functionalities inherent in most voter registration systems, the jurisdictional policies that constrain specific implementations, and key security properties. Our definitions are configurable based on jurisdiction-specific parameters and policies. We provide a template for the structured presentation of detailed jurisdictional policy information, via a series of tables, and illustrate its application with detailed case studies of the voter registration systems of three U.S. states and Panama. Throughout our research, with the aim of realism and practical applicability, we consulted current and former U.S. election officials, civil society, and non-profits in the elections space. We conclude with a list of critical questions regarding voter registration security.
Last updated:  2022-11-09
Vogue: Faster Computation of Private Heavy Hitters
Pranav Jangir, Nishat Koti, Varsha Bhat Kukkala, Arpita Patra, Bhavish Raj Gopal, Somya Sangal
Consider the problem of securely identifying τ-heavy hitters, where given a set of client inputs, the goal is to identify those inputs which are held by at least τ clients in a privacy-preserving manner. Towards this, we design a novel system Vogue, whose key highlight in comparison to prior works is that it ensures complete privacy and does not leak any information other than the heavy hitters. In doing so, Vogue aims to achieve as efficient a solution as possible. To showcase these efficiency improvements, we benchmark our solution and observe that it requires around 14 minutes to compute the heavy hitters for τ = 900 on 256-bit inputs when considering 400K clients. This is in contrast to the state-of-the-art solution that requires over an hour for the same. In addition to the static input setting described above, Vogue also accounts for streaming inputs and provides a protocol that outperforms the state of the art therein. The efficiency improvements witnessed while computing heavy hitters in both the static and streaming input settings are attributed to our new secure stable compaction protocol, whose round complexity is independent of the size of the input array to be compacted.
Last updated:  2022-11-09
Verifiable Private Information Retrieval
Shany Ben-David, Yael Tauman Kalai, Omer Paneth
A computational PIR scheme allows a client to privately query a database hosted on a single server without downloading the entire database. We introduce the notion of verifiable PIR (vPIR) where the server can convince the client that the database satisfies certain properties without additional rounds and while keeping the communication sub-linear. For example, the server can prove that the number of rows in the database that satisfy a predicate $P$ is exactly $n$. We define security by modeling vPIR as an ideal functionality and following the real-ideal paradigm. Starting from a standard PIR scheme, we construct a vPIR scheme for any database property that can be verified by a machine that reads the database once and maintains a bounded-size state between rows. We also construct vPIR with public verification based on LWE or on DLIN. The main technical hurdle is to demonstrate a simulator that extracts a long input from an adversary that sends a single short message. Our vPIR constructions are based on the notion of batch arguments for NP. As a contribution of independent interest, we show that batch arguments are equivalent to quasi-arguments, a relaxation of SNARKs that is known to imply succinct arguments for various sub-classes of NP.
Last updated:  2023-04-20
Take your MEDS: Digital Signatures from Matrix Code Equivalence
Tung Chou, Ruben Niederhagen, Edoardo Persichetti, Tovohery Hajatiana Randrianarisoa, Krijn Reijnders, Simona Samardjiska, Monika Trimoska
In this paper, we show how to use the Matrix Code Equivalence (MCE) problem as a new basis to construct signature schemes. This extends previous work on using isomorphism problems for signature schemes, a trend that has recently emerged in post-quantum cryptography. Our new formulation leverages a more general problem and allows for smaller data sizes, achieving competitive performance and great flexibility. Using MCE, we construct a zero-knowledge protocol which we turn into a signature scheme named Matrix Equivalence Digital Signature (MEDS). We provide an initial choice of parameters for MEDS, tailored to NIST's Category 1 security level, yielding public keys as small as 2.8 kB and signatures ranging from 18 kB to just around 6.5 kB, along with a reference implementation in C.
Last updated:  2023-09-15
Quantum Speed-Up for Multidimensional (Zero Correlation) Linear Distinguishers
Akinori Hosoyamada
This paper shows how to achieve a quantum speed-up for multidimensional (zero correlation) linear distinguishers. A previous work by Kaplan et al. has already shown a quantum quadratic speed-up for one-dimensional linear distinguishers. However, classical linear cryptanalysis often exploits multidimensional approximations to achieve more efficient attacks, and in fact it is highly non-trivial whether Kaplan et al.'s technique can be extended to the multidimensional case. To remedy this, we investigate a new quantum technique to speed up multidimensional linear distinguishers. Firstly, we observe that there is a close relationship between the subroutine of Simon's algorithm and linear correlations via Fourier transform. Specifically, a slightly modified version of Simon's subroutine, which we call Correlation Extraction Algorithm (CEA), can be used to speed up multidimensional linear distinguishers. CEA also leads to a speed-up for multidimensional zero correlation distinguishers, as well as some integral distinguishers through the correspondence of zero correlation and integral properties shown by Bogdanov et al. and Sun et al. Furthermore, we observe the possibility of more than quadratic speed-ups for some special types of integral distinguishers when multiple integral properties exist. In particular, we show a single-query distinguisher on a 4-bit cell SPN cipher with the same integral property as 2.5-round AES. Our attacks are the first to observe such a speed-up for classical cryptanalytic techniques without relying on hidden periods or shifts. By replacing the Hadamard transform in CEA with the general quantum Fourier transform, our technique also speeds up generalized linear distinguishers on an arbitrary finite abelian group.
Last updated:  2023-08-02
Less is more: refinement proofs for probabilistic proofs
Kunming Jiang, Devora Chait-Roth, Zachary DeStefano, Michael Walfish, Thomas Wies
There has been intense interest over the last decade in implementations of _probabilistic proofs_ (IPs, SNARKs, PCPs, and so on): protocols in which an untrusted party proves to a verifier that a given computation was executed properly, possibly in zero knowledge. Nevertheless, implementations still do not scale beyond small computations. A central source of overhead is the _front-end_: translating from the abstract computation to a set of equivalent arithmetic constraints. This paper introduces a general-purpose framework, called Distiller, in which a user translates to constraints not the original computation but an abstracted _specification_ of it. Distiller is the first in this area to perform such transformations in a way that is provably safe. Furthermore, by taking the idea of "encode a check in the constraints" to its literal logical extreme, Distiller exposes many new opportunities for constraint reduction, resulting in cost reductions for benchmark computations of 1.3–50$\times$, and in some cases, better asymptotics.
Last updated:  2023-12-08
Intermediate Certificate Suppression in Post-Quantum TLS: An Approximate Membership Querying Approach
Dimitrios Sikeridis, Sean Huntley, David Ott, and Michael Devetsikiotis
Quantum computing advances threaten the security of today's public key infrastructure, and have led to the pending standardization of alternative, quantum-resistant key encapsulation and digital signature cryptography schemes. Unfortunately, authentication algorithms based on the new post-quantum (PQ) cryptography create significant performance bottlenecks for TLS due to larger certificate chains which introduce additional packets and round-trips. The TLS handshake slowdown will be unacceptable to many applications, and detrimental to the broader adoption of quantum-safe cryptography standards. In this paper, we propose a novel framework for Intermediate Certificate Authority (ICA) certificate suppression in TLS that reduces the authentication message size and prevents excessive round-trip delays. Our approach utilizes an approximate membership query (AMQ) data structure (probabilistic filter) to advertise known ICA certs to remote TLS endpoints so that unnecessary ICA certificates are omitted from the TLS handshake exchange. We showcase the extent of the PQ authentication overhead challenge in TLS, and evaluate the feasibility of AMQ filters for ICA suppression in terms of space and computational overhead. Finally, we experimentally evaluate the potential gains from our approach and showcase a $70\%$ reduction in exchanged ICA cert data that translates to 15-50 MB of savings in PQ TLS and for certain Web-based application scenarios.
Last updated:  2022-11-08
Avoiding Lock Outs: Proactive FIDO Account Recovery using Managerless Group Signatures
Sunpreet S. Arora, Saikrishna Badrinarayanan, Srinivasan Raghuraman, Maliheh Shirvanian, Kim Wagner, Gaven Watson
Passwords are difficult to remember, easy to guess and prone to hacking. While there have been several attempts to solve the aforementioned problems commonly associated with passwords, one of the most successful ones to date has been by the Fast Identity Online (FIDO) alliance. FIDO introduced a series of protocols that combine local authentication on a user device with remote validation on relying party servers using public-key cryptography. One of the fundamental problems of FIDO protocols is complete reliance on a single user device for authentication. More specifically, the private key used for signing relying party challenges can only be stored on a single device. Each FIDO authenticator key is linked uniquely to an account with a relying party service. As a result, a lost or stolen user device necessitates the creation of a new user account, using a new device, with each (previously enrolled) relying party service. To overcome this limitation, we introduce a dynamic managerless group signature scheme that organizes authenticators into groups. Each authenticator in a group has a unique private key that links it to an account with a relying party, which can sign relying party challenges. The relying party server has a group verification key that can validate challenges signed using the private key of any authenticator in a group. Our approach provides additional redundancy and usability to the FIDO protocol whilst still achieving the security properties expected in the FIDO setting such as unforgeability and unlinkability.
Last updated:  2022-11-08
Executing and Proving over Dirty Ledgers
Christos Stefo, Zhuolun Xiang, Lefteris Kokoris-Kogias
Scaling blockchain protocols to perform on par with the expected needs of Web3.0 has proven to be a challenging task, even after almost a decade of research. At the forefront of current solutions is the idea of separating the execution of the updates encoded in a block from the ordering of blocks. In order to achieve this, a new class of protocols called rollups has emerged. Rollups take as input a total ordering of valid and invalid transactions and produce as output a new valid state transition. If we study rollups from a distributed computing perspective, we uncover that rollups take as input the output of a Byzantine Atomic Broadcast (BAB) protocol and convert it to a State Machine Replication (SMR) protocol. BAB and SMR, however, are considered equivalent as far as distributed computing is concerned, and a solution to one can easily be retrofitted to solve the other simply by adding/removing an execution step before the validation of the input. This ``easy'' step of retrofitting an atomic broadcast solution to implement an SMR has, however, been overlooked in practice. In this paper, we formalize the problem and show that after BAB is solved, traditional impossibility results for consensus no longer apply towards an SMR. Leveraging this, we propose a distributed execution protocol that allows reduced execution and storage cost per executor ($O(\frac{\log^2 n}{n})$) without relaxing the network assumptions of the underlying BAB protocol, while providing censorship resistance. Finally, we propose efficient non-interactive light client constructions that leverage our efficient execution protocols and do not require any synchrony assumptions or expensive ZK-proofs.
Last updated:  2023-02-27
Lower Bound Framework for Differentially Private and Oblivious Data Structures
Giuseppe Persiano, Kevin Yeo
In recent years, there has been significant work in studying data structures that provide privacy for the operations that are executed. These primitives aim to guarantee that observable access patterns to physical memory do not reveal substantial information about the queries and updates executed on the data structure. Multiple recent works, including Larsen and Nielsen [Crypto'18], Persiano and Yeo [Eurocrypt'19], Hubáček et al. [TCC'19] and Komargodski and Lin [Crypto'21], have shown that logarithmic overhead is required to support even basic RAM (array) operations for various privacy notions including obliviousness and differential privacy as well as different choices of sizes for RAM blocks $b$ and memory cells $\omega$. We continue along this line of work and present the first logarithmic lower bounds for differentially private RAMs (DPRAMs) that apply regardless of the sizes of blocks $b$ and cells $\omega$. These are the first logarithmic lower bounds for DPRAMs when blocks are significantly smaller than cells, that is, $b \ll \omega$. Furthermore, we present new logarithmic lower bounds for differentially private variants of classical data structure problems including sets, predecessor (successor) and disjoint sets (union-find) for which sub-logarithmic plaintext constructions are known. All our lower bounds extend to the multiple non-colluding servers setting. We also address an unfortunate issue with this rich line of work where the lower bound techniques are difficult to use and require customization for each new result. To make the techniques more accessible, we generalize our proofs into a framework that reduces proving logarithmic lower bounds to showing that a specific problem satisfies two simple, minimal conditions. We show that our framework is easy to use, as all the lower bounds in our paper utilize it, and we hope it will spur more usage of these lower bound techniques.
Last updated:  2022-11-28
XMSS-SM3 and MT-XMSS-SM3: Instantiating Extended Merkle Signature Schemes with SM3
Siwei Sun, Tianyu Liu, Zhi Guan, Yifei He, Jiwu Jing, Lei Hu, Zhenfeng Zhang, Hailun Yan
We instantiate the hash-based post-quantum stateful signature schemes XMSS and its multi-tree version described in RFC 8391 and NIST SP 800-208 with SM3, and report on the results of the preliminary performance test.
Last updated:  2023-04-06
Extensible Decentralized Secret Sharing and Application to Schnorr Signatures
Michele Battagliola, Riccardo Longo, Alessio Meneghetti
Starting from links between coding theory and secret sharing we develop an extensible and decentralized version of Shamir Secret Sharing, that allows the addition of new users after the initial share distribution. On top of it we design a totally decentralized $(t,n)$-threshold Schnorr signature scheme that needs only $t$ users online during the key generation phase, while the others join later. Under standard assumptions we prove our scheme secure against adaptive malicious adversaries. Furthermore, we show how our security notion can be strengthened when considering a rushing adversary. Using a classical game-based argument, we prove that if there is an adversary capable of forging the scheme with non-negligible probability, then we can build a forger for the centralized Schnorr scheme with non-negligible probability.
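For background, in dealer-based Shamir sharing the extensibility operation itself is simple: adding a user after the initial distribution amounts to evaluating the same degree-$(t-1)$ polynomial at a fresh point. The sketch below shows only that centralized baseline (toy prime field, trusted dealer); the point of the paper is to achieve the same effect with no dealer at all and only $t$ users online during key generation.

```python
# Toy dealer-based Shamir sharing: extensibility = evaluating the polynomial at a
# fresh point. The paper removes the dealer; this is only the centralized baseline.
import random

P = 2**61 - 1  # toy prime field

def share(secret, t, xs):
    """Random degree-(t-1) polynomial with constant term `secret`, evaluated at xs."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return f, {x: f(x) for x in xs}

def reconstruct(points):
    """Lagrange interpolation at 0 from any t shares."""
    secret = 0
    for xi, yi in points.items():
        num = den = 1
        for xj in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

f, shares = share(secret=42, t=3, xs=[1, 2, 3])
shares[4] = f(4)                                             # add a new user after distribution
print(reconstruct({k: shares[k] for k in (2, 3, 4)}) == 42)  # True
```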
Last updated:  2023-02-09
Modifications of Bijective S-Boxes with Linear Structures
Kaisa Nyberg
Various systematic modifications of vectorial Boolean functions have been used for finding new previously unknown classes of S-boxes with good or even optimal differential uniformity and nonlinearity. In this paper, a new general modification method is given that preserves the bijectivity property of the function in case the inverse of the function admits a linear structure. A previously known construction of such a modification based on bijective Gold functions in odd dimension is a special case of the new method.
Last updated:  2023-02-11
The SAT-Based Automatic Searching and Experimental Verification for Differential Characteristics with Application to Midori64
Yingying Li, Qichun Wang
In this paper, we show that it is inaccurate to apply the hypothesis of independent round keys to search for differential characteristics of a block cipher with a simple key schedule. Therefore, the derived differential characteristics may be invalid. We develop a SAT-based algorithm to verify the validity of differential characteristics. Furthermore, we take the key schedule into account and thus put forward an algorithm to directly find the valid differential characteristics. All experiments are performed on Midori64 and we find some interesting results.
Last updated:  2023-03-21
Trellis: Robust and Scalable Metadata-private Anonymous Broadcast
Simon Langowski, Sacha Servan-Schreiber, Srinivas Devadas
Trellis is a mix-net based anonymous broadcast system with cryptographic security guarantees. Trellis can be used to anonymously publish documents or communicate with other users, all while assuming full network surveillance. In Trellis, users send messages through a set of servers in successive rounds. The servers mix and post the messages to a public bulletin board, hiding which users sent which messages. Trellis hides all network metadata, remains robust to changing network conditions, guarantees availability to honest users, and scales with the number of mix servers. Trellis provides three to five orders of magnitude faster performance and better network robustness compared to Atom, the state-of-the-art anonymous broadcast system with a comparable threat model. In achieving these guarantees, Trellis contributes: (1) a simpler theoretical mixing analysis for a routing mix network constructed with a fraction of malicious servers, (2) anonymous routing tokens for verifiable random paths, and (3) lightweight blame protocols built on top of onion routing to identify and eliminate malicious parties. We implement and evaluate Trellis in a networked deployment. With 128 servers, Trellis achieves a throughput of 320 bits per second. Trellis’s throughput is only 100 to 1000× slower compared to Tor (which has 6,000 servers and 2 million daily users) and is potentially deployable at a smaller “enterprise” scale. Our implementation is open-source.
Last updated:  2022-11-07
A Masked Pure-Hardware Implementation of Kyber Cryptographic Algorithm
Tendayi Kamucheka, Alexander Nelson, David Andrews, Miaoqing Huang
Security against side-channel assisted attacks remains a focus and concern in the ongoing standardization process of quantum-computer-resistant cryptography algorithms. Hiding and masking techniques are currently under investigation to protect the Post-Quantum Cryptography (PQC) algorithms in the NIST PQC standardization process against sophisticated side-channel attacks. Between hiding and masking, masking is emerging as a popular option due to its simplicity and minimized cost of implementation compared with hiding, which often requires duplication of hardware resources and advanced analysis and design techniques to implement correctly. This work presents a pure hardware implementation of masked CCA2-secure Kyber-512, a candidate chosen by NIST to be standardized. A novel hiding technique that leverages the advantages of FPGAs over micro-controllers and is demonstrably secure against Simple Power Analysis (SPA) and Differential Power Analysis (DPA) side-channel attacks is presented. Finally, a novel hybrid hiding-masking approach is presented that achieves a reduced hardware resource and clock-cycle penalty compared with previously reported figures for similar PQC candidates. The Test Vector Leakage Assessment (TVLA) is adopted to demonstrate the absence of side-channel leakage.
Last updated:  2022-11-07
Threshold Implementations in Software: Micro-architectural Leakages in Algorithms
John Gaspoz, Siemen Dhooghe
This paper provides necessary properties to algorithmically secure first-order maskings in scalar micro-architectures. The security notions of threshold implementations are adapted following micro-processor leakage effects which are known to the literature. The resulting notions, which are based on the placement of shares, are applied to a two-share randomness-free PRESENT cipher and Keccak-f. The assembly implementations are put on a RISC-V and an ARM Cortex-M4 core. All designs are validated in the glitch and transition extended probing model and their implementations via practical lab analysis.
Last updated:  2024-01-22
On Structure-Preserving Cryptography and Lattices
Dennis Hofheinz, Kristina Hostáková, Roman Langrehr, and Bogdan Ursu
The Groth-Sahai proof system is a highly efficient pairing-based proof system for a specific class of group-based languages. Cryptographic primitives that are compatible with these languages (such that we can express, e.g., that a ciphertext contains a valid signature for a given message) are called "structure-preserving". The combination of structure-preserving primitives with Groth-Sahai proofs allows proving complex statements that involve encryptions and signatures, and has proved useful in a variety of applications. However, so far, the concept of structure-preserving cryptography has been confined to the pairing setting. In this work, we propose the first framework for structure-preserving cryptography in the lattice setting. Concretely, we - define "structure-preserving sets" as an abstraction of (typically noisy) lattice-based languages, - formalize a notion of generalized structure-preserving encryption and signature schemes (capturing a number of existing lattice-based encryption and signature schemes), - construct a compatible zero-knowledge argument system that allows arguing about lattice-based structure-preserving primitives, - offer a lattice-based construction of verifiably encrypted signatures in our framework. Along the way, we also discover a new and efficient strongly secure lattice-based signature scheme. This scheme combines Rückert's lattice-based signature scheme with the lattice delegation strategy of Agrawal et al., which yields more compact and efficient signatures. We hope that our framework provides a first step towards a modular and versatile treatment of cryptographic primitives in the lattice setting.
Last updated:  2022-11-07
Towards Efficient Decentralized Federated Learning
Christodoulos Pappas, Dimitrios Papadopoulos, Dimitris Chatzopoulos, Eleni Panagou, Spyros Lalis, Manolis Vavalis
We focus on the problem of efficiently deploying a federated learning training task in a decentralized setting with multiple aggregators. To that end, we introduce a number of improvements and modifications to the recently proposed IPLS protocol. In particular, we relax its assumption for direct communication across participants, using instead indirect communication over a decentralized storage system, effectively turning it into a partially asynchronous protocol. Moreover, we secure it against malicious aggregators (that drop or alter data) by relying on homomorphic cryptographic commitments for efficient verification of aggregation. We implement the modified IPLS protocol and report on its performance and potential bottlenecks. Finally, we identify important next steps for this line of research.
Last updated:  2022-11-07
Four-Round Black-Box Non-Malleable Commitments from One-Way Permutations
Michele Ciampi, Emmanuela Orsini, Luisa Siniscalchi
We construct the first four-round non-malleable commitment scheme based solely on the black-box use of one-to-one one-way functions. Prior to our work, all non-malleable commitment schemes based on black-box use of polynomial-time cryptographic primitives require more than $16$ rounds of interaction. A key tool for our construction is a proof system that satisfies a new definition of security that we call non-malleable zero-knowledge with respect to commitments. In a nutshell, such a proof system can be safely run in parallel with a (potentially interactive) commitment scheme. We provide an instantiation of this tool using the MPC-in-the-Head approach in combination with BMR.
Last updated:  2022-11-07
Scalable and Transparent Proofs over All Large Fields, via Elliptic Curves (ECFFT part II)
Eli Ben-Sasson, Dan Carmon, Swastik Kopparty, David Levit
Concretely efficient interactive oracle proofs (IOPs) are of interest due to their applications to scaling blockchains, their minimal security assumptions, and their potential future-proof resistance to quantum attacks. Scalable IOPs, in which prover time scales quasilinearly with the computation size and verifier time scales poly-logarithmically with it, have been known to exist thus far only over a set of finite fields of negligible density, namely, over "FFT-friendly" fields that contain a sub-group of size $2^k$. Our main result is to show that scalable IOPs can be constructed over any sufficiently large finite field, of size that is at least quadratic in the length of computation whose integrity is proved by the IOP. This result has practical applications as well, because it reduces the proving and verification complexity of cryptographic statements that are naturally stated over pre-defined finite fields which are not "FFT-friendly". Prior state-of-the-art scalable IOPs relied heavily on arithmetization via univariate polynomials and Reed--Solomon codes over FFT-friendly fields. To prove our main result and extend scalability to all large finite fields, we generalize the prior techniques and use new algebraic geometry codes evaluated on sub-groups of elliptic curves (elliptic curve codes). We also show a new arithmetization scheme that uses the rich and well-understood group structure of elliptic curves to reduce statements of computational integrity to other statements about the proximity of functions evaluated on the elliptic curve to the new family of elliptic curve codes. This paper continues our recent work that used elliptic curves and their subgroups to create FFT-based algorithms for polynomial manipulation over generic finite fields. However, our new IOP constructions force us to use new codes (ones that are not based on polynomials), and this poses a new set of challenges involving the more restricted automorphism group of these codes, and the constraints of Riemann-Roch spaces of strictly positive genus.
Last updated:  2023-02-06
Secure Auctions in the Presence of Rational Adversaries
Chaya Ganesh, Bhavana Kanukurthi, Girisha Shankar
Sealed bid auctions are used to allocate a resource among a set of interested parties. Traditionally, auctions need the presence of a trusted auctioneer to whom the bidders provide their private bid values. Existence of such a trusted party is not an assumption easily realized in practice. Generic secure computation protocols can be used to remove a trusted party. However, generic techniques result in inefficient protocols, and typically do not provide fairness - that is, a corrupt party can learn the output and abort the protocol thereby preventing other parties from learning the output. At CRYPTO 2009, Miltersen, Nielsen and Triandopoulos [MNT09] introduced the problem of building auctions that are secure against rational bidders. Such parties are modeled as self-interested agents who care more about maximizing their utility than about learning information about bids of other agents. To realize this, they put forth a novel notion of information utility and introduce a game-theoretic framework that helps analyse protocols while taking into account both information utility as well as monetary utility. Unfortunately, their construction makes use of a generic MPC protocol and, consequently, the authors do not analyze the concrete efficiency of their protocol. In this work, we construct the first concretely efficient and provably secure protocol for First Price Auctions in the rational setting. Our protocol guarantees privacy and fairness. Inspired by [MNT09], we put forth a solution concept that we call Privacy Enhanced Computational Weakly Dominant Strategy Equilibrium that captures parties' privacy and monetary concerns in the game-theoretic context, and show that our protocol realizes this. We believe this notion to be of independent interest. Our protocol is crafted specifically for the use case of auctions, is simple, and uses off-the-shelf cryptographic components. Executing our auction protocol on commodity hardware with 10 bidders, with bids of length 10, our protocol runs to completion in 0.141s and has total communication of 30KB.
Last updated:  2023-06-26
Exploiting algebraic structures in probing security
Maxime Plançon
The so-called $\omega$-encoding, introduced by Goudarzi, Joux and Rivain (Asiacrypt 2018), generalizes the commonly used arithmetic encoding. By using the additional structure of this encoding, they proposed a masked multiplication gadget (GJR) with quasilinear (randomness and operations) complexity. A follow-up contribution by Goudarzi, Prest, Rivain and Vergnaud in this line of research appeared in TCHES 2021. The authors revisited the aforementioned multiplication gadget (GPRV), and brought the IOS security notion for refresh gadgets to allow secure composition between probing secure gadgets. In this paper, we propose a follow-up to GPRV, that is, a region-probing secure masked compiler for arithmetic circuits. Our contribution stems from a single lemma, linking algebra and probing security for a wide class of circuits, further taking advantage of the algebraic structure of $\omega$-encoding, and the extension field structure of the underlying field $\mathbb F$ that was so far left unexploited. On the theoretical side, we propose a security notion for $\boldsymbol{\omega}_d$-masked circuits which we call Reducible-To-Independent-K-linear (RTIK). When the number of shares $d$ is less than or equal to the degree $k$ of $\mathbb F$, RTIK circuits achieve region-probing security. Moreover, RTIK circuits may be composed naively and remain RTIK. We also propose a weaker version of IOS, which we call KIOS, for refresh gadgets. This notion allows composing RTIK circuits with a randomness/security tradeoff compared to the naive composition. To substantiate our new definitions, we also provide examples of competitively efficient gadgets verifying the latter weaker security notions. Explicitly, we give 1) two refresh gadgets that use $d-1$ random field elements to refresh a length $d$ encoding, both of which are KIOS but not IOS, and 2) a multiplication gadget with bilinear multiplication complexity $d^{\log 3}$ that uses $d$ fresh random elements per run. Our compiler outperforms ISW asymptotically, but for our security proofs to hold, we do require that the number of shares $d$ is less than or equal to the degree of $\mathbb F$ as an extension, so that there is sufficient structure to exploit.
Last updated:  2022-11-07
Oblivious-Transfer Complexity of Noisy Coin-Toss via Secure Zero Communication Reductions
Saumya Goyal, Varun Narayanan, Manoj Prabhakaran
In p-noisy coin-tossing, Alice and Bob obtain fair coins which are of opposite values with probability p. Its Oblivious-Transfer (OT) complexity refers to the least number of OTs required by a semi-honest perfectly secure 2-party protocol for this task. We show a tight bound of Θ(log 1/p) for the OT complexity of p-noisy coin-tossing. This is the first instance of a lower bound for OT complexity that is independent of the input/output length of the function. We obtain our result by providing a general connection between the OT complexity of randomized functions and the complexity of Secure Zero Communication Reductions (SZCR), as recently defined by Narayanan et al. (TCC 2020), and then showing a lower bound for the complexity of an SZCR from noisy coin-tossing to (a predicate corresponding to) OT.
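To fix intuition, the task itself (only the ideal functionality, not a secure protocol for it) can be sketched in a few lines of Python:

import random

def noisy_coin_toss(p: float):
    # Ideal p-noisy coin-tossing: Alice's coin is fair, and Bob's coin equals
    # Alice's except that it is flipped with probability p.
    a = random.randrange(2)
    b = a ^ (1 if random.random() < p else 0)
    return a, b

# toy usage: for p = 0.01 the two outputs disagree on roughly 1% of runs,
# yet the result above says a secure protocol still needs Theta(log 1/p) OTs
samples = [noisy_coin_toss(0.01) for _ in range(10_000)]
print(sum(a != b for a, b in samples) / len(samples))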
Last updated:  2023-04-18
DME: a full encryption, signature and KEM multivariate public key cryptosystem
Ignacio Luengo, Martín Avendaño
DME is a multivariate public key cryptosystem based on the composition of linear and exponential maps that allow the polynomials of the public key to be of a very high degree. A previous version of DME was presented to the NIST call (in the KEM category). The new version of DME adds one or two extra rounds of exponentials to the original two rounds. With this setting the composition gives a deterministic trapdoor one way permutation, which can be combined with an OAEP padding scheme for KEM and PSS00 for signature. In this paper we give the SUPERCOP timing of DME-OAEP and DME-PSS00 for three and four exponentials and compare them with NIST finalists. For NIST security level 5 the size of ciphertext and signature is only 64 bytes.
Last updated:  2022-11-06
On Extremal Algebraic Graphs and Multivariate Cryptosystems
Vasyl Ustimenko
A multivariate rule x_i -> f_i, i = 1, 2, ..., n, with f_i from K[x_1, x_2, ..., x_n] over a commutative ring K, defines an endomorphism σ_n of K[x_1, x_2, ..., x_n] into itself given by its values on the variables x_i. The degree of σ_n can be defined as the maximum of the degrees of the polynomials f_i. We say that a family σ_n, n = 2, 3, ..., has a trapdoor accelerator ^nT if knowledge of the piece of information ^nT allows one to compute the preimage x of y = σ_n(x) in time O(n^2). We use extremal algebraic graphs to construct families of automorphisms σ_n with trapdoor accelerators and with inverses (σ_n)^{−1} of large order. We use these families to construct new multivariate public keys and protocol-based cryptosystems of El Gamal type for post-quantum cryptography. Some of these cryptosystems use as encryption tools families of endomorphisms σ_n of unbounded degree whose restrictions to the variety (K^∗)^n are injective. As usual, K^∗ stands for the multiplicative group of the commutative ring K with unity. The spaces of plaintexts and ciphertexts are (K^∗)^n and K^n, respectively. The security of such a cryptosystem of El Gamal type rests on the complexity of the word decomposition problem in the semigroup of Eulerian endomorphisms of K[x_1, x_2, ..., x_n].
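For intuition, here is a toy example of such a rule, not drawn from the paper: over $K = \mathbb{F}_5$ with $n = 2$, the rule $x_1 \mapsto f_1 = x_1 + x_2^2$, $x_2 \mapsto f_2 = x_2$ defines an endomorphism $\sigma_2$ of degree $2$ that sends, e.g., $g = x_1 x_2$ to $\sigma_2(g) = (x_1 + x_2^2)\,x_2$. Because this toy rule is triangular, the (hypothetical) auxiliary information "solve for the variables in reverse order" plays the role of a trapdoor accelerator: from $y_1 = x_1 + x_2^2$ and $y_2 = x_2$ one recovers $x_2 = y_2$ and then $x_1 = y_1 - y_2^2$, using a constant number of ring operations per variable and hence well within the $O(n^2)$ budget.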
Last updated:  2023-03-28
Privacy-Preserving Blueprints
Markulf Kohlweiss, Anna Lysyanskaya, An Nguyen
In a world where everyone uses anonymous credentials for all access control needs, it is impossible to trace wrongdoers, by design. This makes legitimate controls, such as tracing illicit trade and terror suspects, impossible to carry out. Here, we propose a privacy-preserving blueprint capability that allows an auditor to publish an encoding $pk_A$ of the function $f(x,\cdot)$ for a publicly known function $f$ and a secret input $x$. For example, $x$ may be a secret watchlist, and $f(x,y)$ may return $y$ if $y\in x$. On input her data $y$ and the auditor's $pk_A$, a user can compute an escrow $Z$ such that anyone can verify that $Z$ was computed correctly from the user's credential attributes, and moreover, the auditor can recover $f(x,y)$ from $Z$. Our contributions are: * We define secure $f$-blueprint systems; our definition is designed to provide a modular extension to anonymous credential systems. * We show that secure $f$-blueprint systems can be constructed for all functions $f$ from fully homomorphic encryption and NIZK proof systems, or from non-interactive secure computation and NIZK. These results are of theoretical interest but are not efficient enough for practical use. * We realize an optimal blueprint system under the DDH assumption in the random-oracle model for the watchlist function.
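For intuition only, the plaintext watchlist functionality $f(x,\cdot)$ from the example above can be sketched in a few lines of Python (the escrow and verification machinery is of course not captured, and the behavior on a non-match is our own choice of a fixed "no match" output):

def f_watchlist(x, y):
    # x: the auditor's secret watchlist (a set of identifiers)
    # y: a user's attribute, e.g. an identity string
    # Returns y when y is on the watchlist, and None ("no match") otherwise.
    return y if y in x else None

# toy usage: the auditor would learn "alice" from the first escrow
# and nothing useful from the second
watchlist = {"alice", "bob"}
print(f_watchlist(watchlist, "alice"))  # alice
print(f_watchlist(watchlist, "carol"))  # None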
Last updated:  2023-02-23
Reverse Firewalls for Oblivious Transfer Extension and Applications to Zero-Knowledge
Suvradip Chakraborty, Chaya Ganesh, Pratik Sarkar
In the setting of subversion, an adversary tampers with the machines of the honest parties thus leaking the honest parties' secrets through the protocol transcript. The work of Mironov and Stephens-Davidowitz (EUROCRYPT’15) introduced the idea of reverse firewalls (RF) to protect against tampering of honest parties' machines. All known constructions in the RF framework rely on the malleability of the underlying operations in order for the RF to rerandomize/sanitize the transcript. RFs are thus limited to protocols that offer some structure, and hence based on public-key operations. In this work, we initiate the study of $efficient$ Multiparty Computation (MPC) protocols in the presence of tampering. In this regard, - We construct the $first$ Oblivious Transfer (OT) extension protocol in the RF setting. We obtain $poly(\kappa)$ maliciously-secure OTs using $O(\kappa)$ public key operations and $O(1)$ inexpensive symmetric key operations, where $\kappa$ is the security parameter. - We construct the $first$ Zero-knowledge protocol in the RF setting where each multiplication gate can be proven using $O(1)$ symmetric key operations. We achieve this using our OT extension protocol and by extending the ZK protocol of Quicksilver (Yang, Sarkar, Weng and Wang, CCS'21) to the RF setting. - Along the way, we introduce new ideas for malleable interactive proofs that could be of independent interest. We define a notion of $full$ $malleability$ for Sigma protocols that unlike prior notions allow modifying the instance as well, in addition to the transcript. We construct new protocols that satisfy this notion, construct RFs for such protocols and use them in constructing our OT extension. The key idea of our work is to demonstrate that correlated randomness may be obtained in an RF-friendly way $without$ having to rerandomize the entire transcript. This enables us to avoid expensive public-key operations that grow with the circuit-size.
Last updated:  2022-11-05
Masked Iterate-Fork-Iterate: A new Design Paradigm for Tweakable Expanding Pseudorandom Function
Elena Andreeva, Benoit Cogliati, Virginie Lallemand, Marine Minier, Antoon Purnal, Arnab Roy
Many modes of operation for block ciphers or tweakable block ciphers do not require invertibility from their underlying primitive. In this work, we study fixed-length Tweakable Pseudorandom Functions (TPRFs) with large domain extension, a novel primitive that can bring high security and significant performance optimizations in symmetric schemes, such as (authenticated) encryption. Our first contribution is to introduce a new design paradigm, derived from the Iterate-Fork-Iterate construction, in order to build $n$-to-$\alpha n$-bit ($\alpha\geq2$), $n$-bit secure, domain-expanding TPRFs. We dub this new generic composition masked Iterate-Fork-Iterate (mIFI). We then propose a concrete TPRF instantiation ButterKnife that expands an $n$-bit input to an $8n$-bit output via a public tweak and secret key. ButterKnife is built with high efficiency and security in mind. It is fully parallelizable and based on Deoxys-BC, the AES-based tweakable block cipher used in the winning authenticated encryption algorithm of the defense-in-depth category of the CAESAR competition. We analyze the resistance of ButterKnife to differential, linear, meet-in-the-middle, impossible differential, and rectangle attacks. Special care is taken with the attack scenarios made possible by the multiple branches. Our next contribution is to design and provably analyze two new TPRF-based deterministic authenticated encryption (DAE) schemes called SAFE and ZAFE that are highly efficient, parallelizable, and offer $(n+\min(n,t))/2$ bits of security, where $n,t$ respectively denote the input block and tweak sizes of the underlying primitives. We further implement SAFE with ButterKnife to show that it achieves an encryption performance of 1.06 c/B for long messages on Skylake, which is 33-38% faster than the comparable Crypto'17 TBC-based ZAE DAE. Our second candidate ZAFE, which uses the same authentication pass as ZAE, is estimated to offer a similar level of speedup. In addition, we show that ButterKnife, when used in Counter Mode, is slightly faster than AES (0.50 c/B vs 0.56 c/B on Skylake).
Last updated:  2022-11-05
How to Hide MetaData in MLS-Like Secure Group Messaging: Simple, Modular, and Post-Quantum
Keitaro Hashimoto, Shuichi Katsumata, Thomas Prest
Secure group messaging (SGM) protocols allow large groups of users to communicate in a secure and asynchronous manner. In recent years, continuous group key agreements (CGKAs) have provided a powerful abstraction to reason on the security properties we expect from SGM protocols. While robust techniques have been developed to protect the contents of conversations in this context, it is in general more challenging to protect metadata (e.g. the identity and social relationships of group members), since their knowledge is often needed by the server in order to ensure the proper function of the SGM protocol. In this work, we provide a simple and generic wrapper protocol that upgrades non-metadata-hiding CGKAs into metadata-hiding CGKAs. Our key insight is to leverage the existence of a unique continuously evolving group secret key shared among the group members. We use this key to perform a group membership authentication protocol that convinces the server in an \textit{anonymous} manner that a user is a legitimate group member. Our technique only uses a standard signature scheme, and thus, the wrapper protocol can be instantiated from a wide range of assumptions, including post-quantum ones. It is also very efficient, as it increases the bandwidth cost of the underlying CGKA operations by at most a factor of two. To formally prove the security of our protocol, we use the universal composability (UC) framework and model a new ideal functionality ${\mathcal{F}_{\text{CGKA}}^{\sf mh}}$ capturing the correctness and security guarantee of metadata-hiding CGKA. To capture the above intuition of a ``wrapper'' protocol, we also define a restricted ideal functionality $\mathcal{F}_{\text{CGKA}}^{\sf ctxt}$, which roughly captures a non-metadata-hiding CGKA. We then show that our wrapper protocol UC-realizes ${\mathcal{F}_{\text{CGKA}}^{\sf mh}}$ in the $\mathcal{F}_{\text{CGKA}}^{\sf ctxt}$-hybrid model, which in particular formalizes the intuition that any non-metadata-hiding CGKA can be modularly bootstrapped into metadata-hiding CGKA.
Last updated:  2023-10-11
Dynamic Decentralized Functional Encryption with Strong Security
Ky Nguyen, David Pointcheval, and Robert Schädlich
Decentralized Multi-Client Functional Encryption (DMCFE) extends basic functional encryption to multiple clients that do not trust each other. They can independently encrypt the multiple inputs to be evaluated by the function embedded in the functional decryption key, and they keep control of these functions, as they all have to contribute to the generation of the functional decryption keys. Dynamic Decentralized Functional Encryption (DDFE) is the ultimate extension, where clients can dynamically join the system and keys and ciphertexts can be built by dynamic subsets of clients. Like any encryption scheme, FE schemes provide privacy of the plaintexts, but the functions associated with the functional decryption keys might be sensitive too (e.g. a model in machine learning). The function-hiding property has thus been introduced to additionally protect the function evaluated during the decryption process. In this paper, we first provide a generic conversion from DMCFE to DDFE that preserves the security properties, in both the standard and the function-hiding settings. Then, new proof techniques allow us to analyze a new concrete construction of function-hiding DMCFE for inner products, which can thereafter be converted into a DDFE, with strong security guarantees: the adversary can adaptively query multiple challenge ciphertexts and multiple challenge keys. Previous constructions were proven secure in the selective setting only.
Last updated:  2023-11-07
The Key Lattice Framework for Concurrent Group Messaging
Kelong Cong, Karim Eldefrawy, Nigel P. Smart, and Ben Terner
Today, two-party secure messaging is well-understood and widely adopted on the Internet, e.g., Signal and WhatsApp. Multiparty protocols for secure group messaging on the other hand are less mature and many protocols with different tradeoffs exist. Generally, such protocols require parties to first agree on a shared secret group key and then periodically update it while preserving forward secrecy (FS) and post compromise security (PCS). We present a new framework, called a key lattice, for managing keys in concurrent group messaging. Our framework can be seen as a ``key management'' layer that enables concurrent group messaging when secure pairwise channels are available. Proving security of group messaging protocols using the key lattice requires new game-based security definitions for both FS and PCS. Our new definitions are both simpler and more natural than previous ones, as our framework combines both FS and PCS into directional variants of the same abstraction, and additionally avoids dependence on time-based epochs. Additionally, we give a concrete, standalone instantiation of a concurrent group messaging protocol for dynamic groups. Our protocol provides both FS and PCS, supports concurrent updates, and only incurs $O(1)$ overhead for securing the messaging payload, $O(n)$ update cost and $O(n)$ healing costs, which are optimal.
Last updated:  2023-03-31
Multivariate lookups based on logarithmic derivatives
Ulrich Haböck
Logarithmic derivatives translate products of linear factors into sums of their reciprocals, turning zeroes into simple poles of the same multiplicity. Based on this simple fact, we construct an interactive oracle proof for multi-column lookups over the Boolean hypercube, which makes use of a single multiplicity function instead of working with a rearranged union of table and witnesses. For single-column lookups the performance is comparable to the well-known Plookup strategy used by Hyperplonk+. However, the real power of our argument unfolds in the case of batch lookups when multiple columns are subject to a single-table lookup: while the number of field operations is comparable to the Hyperplonk+ lookup (extended to multiple columns), the oracles provided by our prover are much less expensive. For example, for columns of length 2^12, paper-pencil operation counts indicate that the logarithmic derivative lookup is between 1.5 and 4 times faster, depending on the number of columns.
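For concreteness, the opening sentence refers to the standard identity: if $p(X) = \prod_i (X - a_i)^{m_i}$ with distinct $a_i$, then $p'(X)/p(X) = \sum_i m_i/(X - a_i)$, so every zero $a_i$ of multiplicity $m_i$ becomes a simple pole with residue $m_i$. In the lookup setting this is what allows a batch of witness values $\{w_j\}$ to be checked against a table with distinct entries $\{t_i\}$ via the rational-function identity $\sum_j 1/(X - w_j) = \sum_i m_i/(X - t_i)$, where $m_i$ counts how often $t_i$ occurs among the witnesses; this is the single multiplicity function mentioned above.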
Last updated:  2022-11-04
Key-Recovery Fault Injection Attack on the Classic McEliece KEM
Sabine Pircher, Johannes Geier, Julian Danner, Daniel Mueller-Gritschneder, Antonia Wachter-Zeh
We present a key-recovery fault injection attack on the Classic McEliece Key Encapsulation Mechanism (KEM). The fault injections target the error-locator polynomial of the Goppa code and the validity checks in the decryption algorithm, making a chosen ciphertext attack possible. Faulty decryption outputs are used to generate a system of polynomial equations in the secret support elements of the Goppa code. After solving the equations, we can determine a suitable Goppa polynomial and form an alternative secret key. To demonstrate the feasibility of the attack on hardware, we simulate the fault injections on virtual prototypes of two RISC-V cores at register-transfer level.
Last updated:  2023-08-17
Graph-Theoretic Algorithms for the Alternating Trilinear Form Equivalence Problem
Ward Beullens
At Eurocrypt'22, Tang, Duong, Joux, Plantard, Qiao, and Susilo proposed a digital signature algorithm based on the hardness of the isomorphism problem of alternating trilinear forms. They propose three concrete parameter sets in dimensions $9$, $10$, and $11$, respectively. We give new heuristic algorithms that solve this problem more efficiently. With our new algorithms, the first parameter set can be broken in less than a day on a laptop. For the second parameter set, we show there is a $2^{-17}$ fraction of the public keys that can also be broken in less than a day. We do not break the third parameter set in practice, but we claim it falls short of the target security level of $128$ bits.
Last updated:  2023-02-01
Pattern Matching in Encrypted Stream from Inner Product Encryption
Élie Bouscatié, Guilhem Castagnos, Olivier Sanders
Functional encryption features secret keys, each associated with a key function $f$, which allow one to directly recover $f(x)$ from an encryption of $x$, without learning anything more about $x$. This property is particularly useful when delegating data processing to a third party as it allows the latter to perform its task while ensuring minimum data leakage. However, this generic term conceals a great diversity in the cryptographic constructions that strongly differ according to the functions $f$ they support. A recent series of works has focused on the ability to search a pattern within a data stream, which can be expressed as a function $f$. One of the conclusions of these works was that this function $f$ was not supported by the current state-of-the-art, which prompted their authors to propose a new primitive called Stream Encryption supporting Pattern Matching (SEPM). Some concrete constructions were proposed but with some limitations such as selective security or reliance on non-standard assumptions. In this paper, we revisit the relations between this primitive and two major subclasses of functional encryption, namely Hidden Vector Encryption (HVE) and Inner Product Encryption (IPE). We indeed first exhibit a generic transformation from HVE to SEPM, which immediately yields new efficient SEPM constructions with better features than existing ones. We then revisit the relations between HVE and IPE and show that we can actually do better than the transformation proposed by Katz, Sahai and Waters in their seminal paper on predicate encryption. This allows us to fully leverage the vast state of the art on IPE, which contains adaptively secure constructions proven under standard assumptions. This results in countless new SEPM constructions, with all the features one can wish for. Beyond that, we believe that our work sheds new light on the relations between IPE schemes and HVE schemes and in particular shows that some of the former are more suitable for constructing the latter.
Last updated:  2023-02-08
Threshold-Optimal MPC With Friends and Foes
Nikolas Melissaris, Divya Ravi, Sophia Yakoubov
Alon et al. (Crypto 2020) initiated the study of MPC with Friends and Foes (FaF) security, which captures the desirable property that even up to $h^{*}$ honest parties should learn nothing additional about other honest parties' inputs, even if the $t$ corrupt parties send them extra information. Alon et al. describe two flavors of FaF security: weak FaF, where the simulated view of up to $h^{*}$ honest parties should be indistinguishable from their real view, and strong FaF, where the simulated view of the honest parties should be indistinguishable from their real view even in conjunction with the simulated / real view of the corrupt parties. They give several initial FaF constructions with guaranteed output delivery (GOD); however, they leave some open problems. Their only construction that supports the optimal corruption bound of $2t+h^{*} < n$ (where $n$ denotes the number of parties) offers only weak FaF security and requires many more than the optimal three rounds of communication. In this paper, we describe two new constructions with GOD, both of which support $2t+h^{*} < n$. Our first construction, based on threshold FHE, is the first three-round construction that matches this optimal corruption bound (though it only offers weak FaF security). Our second construction, based on a variant of BGW, is the first such construction that offers strong FaF security (though it requires more than three rounds, as well as correlated randomness). Our final contribution is further exploration of the relationship between FaF security and similar security notions. In particular, we show that FaF security does not imply mixed adversary security (where the adversary can make $t$ active and $h^{*}$ passive corruptions), and that Best of Both Worlds security (where the adversary can make $t$ active or $t+h^{*}$ passive corruptions, but not both) is orthogonal to both FaF and mixed adversary security.
Last updated:  2023-03-11
Endemic Oblivious Transfer via Random Oracles, Revisited
Zhelei Zhou, Bingsheng Zhang, Hong-Sheng Zhou, Kui Ren
The notion of Endemic Oblivious Transfer (EOT) was introduced by Masny and Rindal (CCS'19). EOT offers a weaker security guarantee than the conventional random OT; namely, the malicious parties can fix their outputs arbitrarily. The authors presented a 1-round UC-secure EOT protocol under a tailor-made and non-standard assumption, Choose-and-Open DDH, in the RO model. In this work, we systematically study EOT in the UC/GUC framework. We present a new 1-round UC-secure EOT construction in the RO model under the DDH assumption. Under the GUC framework, we propose the first 1-round EOT construction under the CDH assumption in the Global Restricted Observable RO (GroRO) model proposed by Canetti et al. (CCS'14). We also provide an impossibility result, showing there exist no 1-round GUC-secure EOT protocols in the Global Restricted Programmable RO (GrpRO) model proposed by Camenisch et al. (Eurocrypt'18). Subsequently, we provide the first round-optimal (2-round) EOT protocol with adaptive security under the DDH assumption in the GrpRO model. Finally, we investigate the relations between EOT and other cryptographic primitives. As side products, we present the first 2-round GUC-secure commitment in the GroRO model as well as a separation between the GroRO and the GrpRO models, which may be of independent interest.
Last updated:  2022-11-07
Shielding Probabilistically Checkable Proofs: Zero-Knowledge PCPs from Leakage Resilience
Mor Weiss
Probabilistically Checkable Proofs (PCPs) allow a randomized verifier, with oracle access to a purported proof, to probabilistically verify an input statement of the form ``$x\in\mathcal{L}$'' by querying only few proof bits. Zero-Knowledge PCPs (ZK-PCPs) enhance standard PCPs to additionally guarantee that the view of any (possibly malicious) verifier querying a bounded number of proof bits can be efficiently simulated up to a small statistical distance. The first ZK-PCP construction of Kilian, Petrank and Tardos (STOC 1997), and following constructions employing similar techniques, necessitate that the honest verifier make several rounds of queries to the proof. This undesirable property, which is inherent to their technique, translates into increased round complexity in cryptographic applications of ZK-PCPs. We survey two recent ZK-PCP constructions -- due to Ishai, Yang and Weiss (TCC 2016-A), and Hazay, Venkitasubramaniam, and Weiss (ITC 2021) -- in which the honest verifier makes a single round of queries to the proof. Both constructions use entirely different techniques compared to previous ZK-PCP constructions, by showing connections to the seemingly-unrelated notion of leakage resilience. These constructions are incomparable to previous ZK-PCP constructions: while on the one hand the honest verifier only makes a single round of queries to the proof, these ZK-PCPs either obtain a smaller (polynomial) ratio between the query complexity of the honest and malicious verifiers, or obtain a weaker ZK guarantee in which the ZK simulator is not necessarily efficient.
Last updated:  2024-02-25
Your Reputation's Safe with Me: Framing-Free Distributed Zero-Knowledge Proofs
Carmit Hazay, Muthuramakrishnan Venkitasubramaniam, and Mor Weiss
Distributed Zero-Knowledge (dZK) proofs, recently introduced by Boneh et al. (CRYPTO'19), allow a prover $P$ to prove NP statements on an input $x$ which is distributed between $k$ verifiers $V_1,\ldots,V_k$, where each $V_i$ holds only a piece of $x$. As in standard ZK proofs, dZK proofs guarantee Completeness when all parties are honest; Soundness against a malicious prover colluding with $t$ verifiers; and Zero Knowledge against a subset of $t$ malicious verifiers, in the sense that they learn nothing about the NP witness and the input pieces of the honest verifiers. Unfortunately, dZK proofs provide no correctness guarantee for an honest prover against a subset of maliciously corrupted verifiers. In particular, such verifiers might be able to ``frame'' the prover, causing honest verifiers to reject a true claim. This is a significant limitation, since such scenarios arise naturally in dZK applications, e.g., for proving honest behavior, and such attacks are indeed possible in existing dZKs. We put forth and study the notion of strong completeness for dZKs, guaranteeing that true claims are accepted even when $t$ verifiers are maliciously corrupted. We then design strongly-complete dZK proofs using the ``MPC-in-the-head'' paradigm of Ishai et al. (STOC'07), providing a novel analysis that exploits the unique properties of the distributed setting. To demonstrate the usefulness of strong completeness, we present several applications in which it is instrumental in obtaining security. First, we construct a certifiable version of Verifiable Secret Sharing (VSS), which is a VSS in which the dealer additionally proves that the shared secret satisfies a given NP relation. Our construction withstands a constant fraction of corruptions, whereas a previous construction of Ishai et al. (TCC'14) could only handle $k^{\varepsilon}$ corruptions for a small $\varepsilon<1$. We also design a reusable version of certifiable VSS, in which the dealer can prove an unlimited number of predicates on the same shared secret. Finally, we extend a compiler of Boneh et al. (CRYPTO'19), who used dZKs to transform a class of ``natural'' semi-honest protocols in the honest-majority setting into maliciously secure ones with abort. Our compiler uses strongly-complete dZKs to obtain identifiable abort.
Last updated:  2022-11-28
Two new infinite families of APN functions in trivariate form
Kangquan Li, Nikolay Kaleyski
We present two infinite families of APN functions in trivariate form over finite fields of the form $\mathbb{F}_{2^{3m}}$. We show that the functions from both families are permutations when $m$ is odd, and are 3-to-1 functions when $m$ is even. In particular, our functions are AB permutations for $m$ odd. Furthermore, we observe that for $m = 3$, i.e. for $\mathbb{F}_{2^9}$, the functions from our families are CCZ-equivalent to the two bijective sporadic APN instances discovered by Beierle and Leander. We also perform an exhaustive computational search for quadratic APN functions with binary coefficients in trivariate form over $\mathbb{F}_{2^{3m}}$ with $m \le 5$ and report on the results.
Last updated:  2022-11-03
An Assessment of Differential-Neural Distinguishers
Aron Gohr, Gregor Leander, Patrick Neumann
Since the introduction of differential-neural cryptanalysis, as the machine-learning-assisted differential cryptanalysis proposed in [Goh19] is now called, many follow-up works have been published, showing its applicability to a wide variety of ciphers. In this work, we set out to vet a multitude of differential-neural distinguishers presented so far, and additionally provide general insights. Firstly, we show for a selection of different ciphers how differential-neural distinguishers for those ciphers can be (automatically) optimized, also providing guidance for doing so for other ciphers. Secondly, we explore a correlation between a differential-neural distinguisher's accuracy and a standard notion of difference between the two underlying distributions. Furthermore, we show that for a whole (practically relevant) class of ciphers, the differential-neural distinguisher can use differential features only. Lastly, we rectify a common mistake in the current literature and show that, making use of an idea already presented in the foundational work [Goh19], the claimed improvements from using multiple ciphertext pairs at once are at most marginal, if not non-existent.
Last updated:  2022-11-03
Censorship-Resilient and Confidential Collateralized Second-Layer Payments
Kari Kostiainen, Sven Gnap, Ghassan Karame
Permissionless blockchains are too slow for applications like point-of-sale payments. While several techniques have been proposed to speed up blockchain payments, none of them are satisfactory for application scenarios like retail shopping. In particular, existing solutions like payment channels require users to lock up significant funds, and schemes based on pre-defined validators enable easy transaction censoring. In this paper, we develop Quicksilver, the first blockchain payment scheme that works with practical collaterals and is fast, censorship-resilient, and confidential at the same time. We implement Quicksilver for EVM-compatible chains and show that censorship-resilient payments are fast and affordable on currently popular blockchain platforms like Ethereum and Polygon.
Last updated:  2022-11-03
Collusion-resistant broadcast encryption based on hidden RSA subgroups
Sigurd Eskeland
Public key broadcast encryption enables the computation of ciphertexts in which a single ciphertext is encrypted with regard to a set of recipients, and only the intended recipients can decrypt that ciphertext, independently of each other and without interaction. A significant shortcoming of existing broadcast encryption schemes is their long decryption keys, which comprise the public keys of the pertaining recipients. Decryption therefore necessitates access to public keys, which requires key management and impacts computational and transmission overhead, accessibility, and storage. Moreover, a user description list referencing the pertaining recipients and their public keys must be appended to each ciphertext, which leads to the privacy implication of disclosing user/content relations. Almost all existing broadcast encryption schemes are based on bilinear pairings. In this paper, we propose a collusion-resistant broadcast encryption scheme that is the first to be based on the factorization problem and hidden RSA subgroups. A novel feature is that the decryption key consists of a single element only, which leads to significantly reduced key management, improved computational efficiency, and elimination of the mentioned privacy issue.
Last updated:  2022-11-16
An Experimentally Verified Attack on 820-Round Trivium (Full Version)
Cheng Che, Tian Tian
The cube attack is one of the most important cryptanalytic techniques against Trivium. As the method of recovering superpolies becomes more and more effective, another problem of cube attacks, i.e., how to select cubes corresponding to balanced superpolies, is attracting more and more attention. It is well-known that a balanced superpoly could be used in both theoretical and practical analyses. In this paper, we present a novel framework to search for valuable cubes whose superpolies have an independent secret variable each, i.e., a linear variable not appearing in any nonlinear term. To control online complexity, valuable cubes are selected from very few large cubes. New ideas are given on the large cube construction and the subcube sieve. For the verification of this new algorithm, we apply it to Trivium. For 815-round Trivium, using one cube of size 47, we obtain more than 200 balanced superpolies containing 68 different independent secret variables. To make a trade-off between the number of cubes and computation complexity, we choose 35 balanced superpolies and mount a key-recovery attack on 815-round Trivium with a complexity of $2^{47.32}$. For 820-round Trivium, using two cubes of size 52, we obtain more than 100 balanced superpolies, which contain 54 different independent secret variables. With 30 balanced superpolies, we mount a key-recovery attack on 820-round Trivium with a complexity of $2^{53.17}$. Strong experimental evidence shows that the full key-recovery attacks on 815- and 820-round Trivium could be completed within six hours and two weeks on a PC with two RTX3090 GPUs, respectively.
Last updated:  2023-10-11
Best-of-Both-Worlds Multiparty Quantum Computation with Publicly Verifiable Identifiable Abort
Kai-Min Chung, Mi-Ying (Miryam) Huang, Er-Cheng Tang, and Jiapeng Zhang
Alon et al. (CRYPTO 2021) introduced a multiparty quantum computation protocol that is secure with identifiable abort (MPQC-SWIA). However, their protocol allows only inside MPQC parties to know the identity of malicious players. This becomes problematic when two groups of people disagree and need a third party, like a jury, to verify who the malicious party is. This issue takes on heightened significance in the quantum setting, given that quantum states may exist in only a single copy. Thus, we emphasize the necessity of a protocol with publicly verifiable identifiable abort (PVIA), enabling outside observers with only classical computational power to agree on the identity of the malicious party in case of an abort. However, achieving MPQC with PVIA poses significant challenges due to the no-cloning theorem, and previous works proposed by Mahadev (STOC 2018) and Chung et al. (Eurocrypt 2022) for classical verification of quantum computation fall short. In this paper, we obtain the first MPQC-PVIA protocol assuming post-quantum oblivious transfer and a classical broadcast channel. The core component of our construction is a new authentication primitive called auditable quantum authentication (AQA) that identifies the malicious sender with overwhelming probability. Additionally, we provide the first MPQC protocol with best-of-both-worlds (BoBW) security, which guarantees output delivery with an honest majority and remains secure with abort even if the majority is dishonest. Our best-of-both-worlds MPQC protocol also satisfies PVIA upon abort.
Last updated:  2024-03-27
Obfuscation of Evasive Algebraic Set Membership
Steven D. Galbraith and Trey Li
We define the membership function of a set as the function that determines whether an input is an element of the set. Canetti, Rothblum, and Varia showed how to obfuscate evasive membership functions of hyperplanes over a finite field of order an exponentially large prime, assuming the hardness of a modified decisional Diffie-Hellman problem. Barak, Bitansky, Canetti, Kalai, Paneth, and Sahai extended their work from hyperplanes to hypersurfaces of bounded degree, assuming multilinear maps. Both works are limited to algebraic sets over large fields of prime order, and are based on less standard assumptions, although they prove virtual black-box security. In this paper, we handle much more general algebraic sets based on more standard assumptions, and prove input-hiding security, which is neither weaker nor stronger than virtual black-box security (i.e., they are incomparable). Our first obfuscator handles affine algebraic sets over finite fields of order an arbitrary prime power. It is based on the preimage-resistance property of cryptographic hash function families. Our second obfuscator applies to both affine and projective algebraic sets over finite fields of order a polynomial-size prime power. It is based on the same hardness assumption(s) required by input-hiding small superset obfuscation. Our paper is the first to handle the obfuscation problem of projective algebraic sets over small finite fields.
Last updated:  2023-03-06
Succinct Vector, Polynomial, and Functional Commitments from Lattices
Hoeteck Wee, David J. Wu
Vector commitment schemes allow a user to commit to a vector of values $\mathbf{x} \in \{0,1\}^\ell$ and later open the commitment at a specific set of positions. Both the size of the commitment and the size of the opening should be succinct (i.e., polylogarithmic in the length $\ell$ of the vector). Vector commitments and their generalizations to polynomial commitments and functional commitments are key building blocks for many cryptographic protocols. We introduce a new framework for constructing non-interactive lattice-based vector commitments and their generalizations. A simple instantiation of our framework yields a new vector commitment scheme from the standard short integer solution (SIS) assumption that supports private openings and large messages. We then show how to use our framework to obtain the first succinct functional commitment scheme that supports openings with respect to arbitrary bounded-depth Boolean circuits. In this scheme, a user commits to a vector $\mathbf{x} \in \{0,1\}^\ell$, and later on, opens the commitment to any function evaluation $f(\mathbf{x})$. Both the commitment and the opening are non-interactive and succinct: namely, they have size $\textsf{poly}(\lambda, d, \log \ell)$, where $\lambda$ is the security parameter and $d$ is the depth of the Boolean circuit computing $f$. Previous constructions of functional commitments could only support constant-degree polynomials, or require a trusted online authority, or rely on non-falsifiable assumptions. The security of our functional commitment scheme is based on a new falsifiable family of "basis-augmented" SIS assumptions (BASIS) we introduce in this work. We also show how to use our vector commitment framework to obtain (1) a polynomial commitment scheme where the user can commit to a polynomial $f \in \mathbb{Z}_q[x]$ and subsequently open the commitment to an evaluation $f(x) \in \mathbb{Z}_q$; and (2) an aggregatable vector (resp., functional) commitment where a user can take a set of openings to multiple indices (resp., function evaluations) and aggregate them into a single short opening. Both of these extensions rely on the same BASIS assumption we use to obtain our succinct functional commitment scheme.
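For readers unfamiliar with the abstraction, here is a minimal hash-based (Merkle-tree) sketch of the vector-commitment interface in Python, illustrating commit/open/verify at a single position; it is only an illustration of the syntax, not the lattice-based scheme of the paper, and it offers neither private openings nor the functional openings discussed above.

import hashlib

def _h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def commit(vector):
    # vector: list of byte strings; length assumed to be a power of two for simplicity
    level = [_h(v) for v in vector]
    tree = [level]
    while len(level) > 1:
        level = [_h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree[-1][0], tree  # commitment (Merkle root) and the committer's state

def open_position(tree, i):
    # Opening = authentication path of sibling hashes from leaf i up to the root.
    path = []
    for level in tree[:-1]:
        path.append(level[i ^ 1])
        i //= 2
    return path

def verify(root, i, value, path):
    node = _h(value)
    for sibling in path:
        node = _h(node, sibling) if i % 2 == 0 else _h(sibling, node)
        i //= 2
    return node == root

# toy usage
vec = [b"0", b"1", b"1", b"0"]
root, tree = commit(vec)
assert verify(root, 2, vec[2], open_position(tree, 2))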
Last updated:  2023-06-09
Pseudorandom (Function-Like) Quantum State Generators: New Definitions and Applications
Prabhanjan Ananth, Aditya Gulati, Luowen Qian, Henry Yuen
Pseudorandom quantum states (PRS) are efficiently constructible states that are computationally indistinguishable from being Haar-random, and have recently found cryptographic applications. We explore new definitions, new properties and applications of pseudorandom states, and present the following contributions: 1. New Definitions: We study variants of pseudorandom function-like state (PRFS) generators, introduced by Ananth, Qian, and Yuen (CRYPTO'22), where the pseudorandomness property holds even when the generator can be queried adaptively or in superposition. We show the feasibility of these variants assuming the existence of post-quantum one-way functions. 2. Classical Communication: We show that PRS generators with logarithmic output length imply commitment and encryption schemes with classical communication. Previous constructions of such schemes from PRS generators required quantum communication. 3. Simplified Proof: We give a simpler proof of the Brakerski-Shmueli (TCC'19) result that polynomially-many copies of uniform superposition states with random binary phases are indistinguishable from Haar-random states. 4. Necessity of Computational Assumptions: We also show that a secure PRS with output length logarithmic, or larger, in the key length necessarily requires computational assumptions.
Last updated:  2022-11-02
Player-Replaceability and Forensic Support are Two Sides of the Same (Crypto) Coin
Peiyao Sheng, Gerui Wang, Kartik Nayak, Sreeram Kannan, Pramod Viswanath
Player-replaceability is a property of a blockchain protocol that ensures every step of the protocol is executed by an unpredictably random (small) set of players; this guarantees security against a fully adaptive adversary and is a crucial property in building permissionless blockchains. Forensic Support is a property of a blockchain protocol that provides the ability, with cryptographic integrity, to identify malicious parties when there is a safety violation; this provides the ability to enforce punishments for adversarial behavior and is a crucial component of incentive mechanism designs for blockchains. Player-replaceability and strong forensic support are both desirable properties, yet, none of the existing blockchain protocols have both properties. Our main result is to construct a new BFT protocol that is player-replaceable and has maximum forensic support. The key invention is the notion of a ``transition certificate'', without which we show that natural adaptations of extant BFT and longest chain protocols do not lead to the desired goal of simultaneous player-replaceability and forensic support.
Last updated:  2024-03-11
Building MPCitH-based Signatures from MQ, MinRank, Rank SD and PKP
Thibauld Feneuil
The MPC-in-the-Head paradigm is a useful tool for building practical signature schemes. Many such schemes have already been proposed, relying on different assumptions. Some rely on standard symmetric primitives like AES, some on MPC-friendly primitives like LowMC or Rain, and some on well-known hard problems like the syndrome decoding problem. This work focuses on the third type of MPCitH-based signatures. Following the same methodology as the work of Feneuil, Joux and Rivain (CRYPTO'22), we apply the MPC-in-the-Head paradigm to several problems: the multivariate quadratic problem, the MinRank problem, the rank syndrome decoding problem and the permuted kernel problem. Our goal is to study how this paradigm behaves for each of those problems. For the multivariate quadratic problem, our scheme slightly outperforms the existing schemes when considering large fields (such as $\mathbb{F}_{256}$), and for the permuted kernel problem, we obtain larger sizes. Even though neither scheme outperforms the existing ones in terms of communication cost, they are highly parallelizable and compatible with some MPC-in-the-Head techniques (like fast signature verification), while the former proposals were not. Moreover, we propose two efficient MPC protocols to check that the rank of a matrix over a field $\mathbb{F}_q$ is upper bounded by a public constant. The first relies on the rank decomposition while the second relies on $q$-polynomials. We then use them to build signature schemes relying on the MinRank problem and the rank syndrome decoding problem. These schemes outperform the former ones, achieving sizes below $6$ KB (while using only 256 parties for the MPC protocol).
Last updated:  2023-02-06
Round-Optimal Oblivious Transfer and MPC from Computational CSIDH
Saikrishna Badrinarayanan, Daniel Masny, Pratyay Mukherjee, Sikhar Patranabis, Srinivasan Raghuraman, Pratik Sarkar
We present the first round-optimal and plausibly quantum-safe oblivious transfer (OT) and multi-party computation (MPC) protocols from the computational CSIDH assumption - the weakest and most widely studied assumption in the CSIDH family of isogeny-based assumptions. We obtain the following results: - The first round-optimal maliciously secure OT and MPC protocols in the plain model that achieve (black-box) simulation-based security while relying on the computational CSIDH assumption. - The first round-optimal maliciously secure OT and MPC protocols that achieve Universal Composability (UC) security in the presence of a trusted setup (common reference string plus random oracle) while relying on the computational CSIDH assumption. Prior plausibly quantum-safe isogeny-based OT protocols (with/without setup assumptions) are either not round-optimal, or rely on potentially stronger assumptions. We also build a 3-round maliciously-secure OT extension protocol where each base OT protocol requires only 4 isogeny computations. In comparison, the most efficient isogeny-based OT extension protocol to date, due to Lai et al. [Eurocrypt 2021], requires 12 isogeny computations and 4 rounds of communication, while relying on the same assumption as our construction, namely the reciprocal CSIDH assumption.
Last updated:  2024-02-16
Witness Encryption for Succinct Functional Commitments and Applications
Matteo Campanelli, Dario Fiore, and Hamidreza Khoshakhlagh
Witness encryption (WE), introduced by Garg, Gentry, Sahai, and Waters (STOC 2013) allows one to encrypt a message to a statement $\mathsf{x}$ for some NP language $\mathcal{L}$, such that any user holding a witness for $\mathsf{x} \in \mathcal{L}$ can decrypt the ciphertext. The extreme power of this primitive comes at the cost of its elusiveness: a practical construction from established cryptographic assumptions is currently out of reach. In this work, we investigate a new notion of encryption that has a flavor of WE and that we can build only based on bilinear pairings, for interesting classes of computation. We do this by connecting witness encryption to functional commitments (FC). FCs are an advanced notion of commitments that allows fine-grained openings, that is non-interactive proofs to show that a commitment $\mathsf{cm}$ opens to $v$ such that $y=G(v)$, with the crucial feature that both commitments and openings are succinct. Our new WE notion, witness encryption for (succinct) functional commitment (WE-FC), allows one to encrypt a message with respect to a triple $(\mathsf{cm}, G, y)$, and decryption is unlocked using an FC opening that $\mathsf{cm}$ opens to $v$ such that $y=G(v)$. This mechanism is similar to the notion of witness encryption for NIZK of commitments [Benhamouda and Lin, TCC'20], with the crucial difference that ours supports commitments and decryption time whose size and complexity do not depend on the length of the committed data $v$. Our main contributions are therefore the formal definition of WE-FC, a generic methodology to compile an FC in bilinear groups into an associated WE-FC scheme (semantically secure in the generic group model), and a new FC construction for NC1 circuits that yields a WE-FC for the same class of functions. Similarly to [Benhamouda and Lin, TCC'20], we show how to apply WE-FC to construct multiparty reusable non-interactive secure computation (mrNISC) protocols. Crucially, the efficiency profile of WE-FC yields mrNISC protocols whose offline stage has shorter communication (only a succinct commitment from each party). As an additional contribution, we discuss further applications of WE-FC and show how to extend this primitive to better suit these settings.
Last updated:  2022-11-02
sVote with Control Components Voting Protocol. Computational Proof of Complete Verifiability and Privacy.
Enrique Larraia, Tamara Finogina, Nuria Costa
This document details the cryptographic analysis of the sVote v2.2.1 system - an e-voting solution developed by Scytl for the Switzerland context. We prove the complete verifiability and privacy under the Swiss legislation's informally stated goals. First, we derive the trust model for complete verifiability and voting secrecy from the Swiss Chancellery's requirements [1][2], supporting our interpretation by quotes from and references to relevant excerpts of the ordinance and the corresponding technical annex. Then, based on the derived model, we prove that sVote with Control Components provides complete verifiability and guarantees voting secrecy and the non-disclosure of early provisional results. We demonstrate that sVote fulfills the requirements of the Swiss federal chancellery for completely verifiable E-voting systems. In other words, we show that an adversary cannot break the complete verifiability and voting secrecy properties of sVote without being detected by either the voter or auditors. [1] Technical and administrative requirements for electronic vote casting v 2.0 https://www.bk.admin.ch/dam/bk/en/dokumente/pore/Annex_of_the_Federal_Chancellery_Ordinance_on_Electronic_Voting_V2.0_July_2018.pdf.download.pdf/Annex_of_the_Federal_Chancellery_Ordinance_on_Electronic_Voting_V2.0_July_2018.pdf [2] Federal Chancellery Ordinance on Electronic Voting https://www.fedlex.admin.ch/eli/cc/2013/859/en
Last updated:  2022-11-02
Non-Interactive Publicly-Verifiable Delegation of Committed Programs
Riddhi Ghosal, Amit Sahai, Brent Waters
In this work, we present the first construction of a fully non-interactive publicly-verifiable delegation scheme for committed programs. More specifically, we consider a setting where Alice is a trusted author who delegates to an untrusted worker the task of hosting a program $P$, represented as a Boolean circuit. Alice also commits to a succinct value based on $P$. Any arbitrary user/verifier without knowledge of $P$ should be convinced that they are receiving from the worker an actual computation of Alice's program on a given input $x$. Before our work, the only object known to imply this challenging form of delegation was a SNARG/SNARK for $\mathcal{NP}$. This is because from the point of view of the user/verifier, the program $P$ is an unknown witness to the computation. However, constructing a SNARG for $\mathcal{NP}$ from standard assumptions remains a major open problem. In our work, we show how to achieve delegation in this challenging context assuming only the hardness of the Learning With Errors (LWE) assumption, bypassing the apparent need for a SNARG for $\mathcal{NP}$.
Last updated:  2023-05-30
Label Correlation in Deep Learning-based Side-channel Analysis
Lichao Wu, Léo Weissbart, Marina Krček, Huimin Li, Guilherme Perin, Lejla Batina, Stjepan Picek
The efficiency of profiling side-channel analysis can be significantly improved with machine learning techniques. Although powerful, the fundamental machine learning limitation of being data-hungry has received little attention in the side-channel community. In practice, the maximum number of leakage traces that evaluators/attackers can obtain is constrained by the scheme requirements or the limited accessibility of the target. Even worse, various countermeasures in modern devices further increase the number of profiling traces required to break the target. This work demonstrates a practical approach to dealing with the lack of profiling traces. Instead of learning from a one-hot encoded label, transferring the labels to their distribution can significantly speed up the convergence of guessing entropy. By studying the relationship between all possible key candidates, we propose a new metric, denoted Label Correlation (LC), to evaluate the generalization ability of the profiling model. We validate LC with two common use cases: early stopping and network architecture search, and the results indicate its superior performance.
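As a purely illustrative sketch of the generic idea of training on a label distribution instead of a one-hot label (the paper's concrete label construction may differ), the snippet below spreads the probability mass of an 8-bit intermediate-value label over classes with similar Hamming weight, one simple leakage-model-inspired heuristic:

import numpy as np

def soft_label(value: int, temperature: float = 1.0) -> np.ndarray:
    # Weight every candidate class c in [0, 255] by how close its Hamming weight
    # is to that of the true intermediate value, then normalize to a distribution.
    # Illustrative only; not claimed to be the construction used in the paper.
    hw = np.array([bin(c).count("1") for c in range(256)])
    dist = np.abs(hw - bin(value).count("1"))
    weights = np.exp(-dist / temperature)
    return weights / weights.sum()

# toy usage: classes sharing the Hamming weight of 0x3A receive the largest
# (equal) weight, instead of all mass sitting on the single class 0x3A
soft = soft_label(0x3A)
print(round(float(soft[0x3A]), 4), round(float(soft[0xFF]), 4))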
Last updated:  2024-02-26
ORTOA: One Round Trip Oblivious Access
Sujaya Maiyya, Yuval Steinhart, Divyakant Agrawal, Prabhanjan Ananth, and Amr El Abbadi
Many applications relying on cloud storage services typically encrypt their data to ensure data privacy. However, reading or writing the encrypted data to serve client requests reveals the type of client operation to a potentially untrusted cloud. An adversary can exploit this information leak to compromise a user's privacy by tracking read/write access patterns. Existing approaches such as Oblivious RAM (ORAM) schemes hide the type of client access by always reading and then writing the data sequentially for both reads and writes, rendering one of these rounds redundant with respect to a client request. To mitigate this redundancy, we propose ORTOA, a family of protocols enabling single-round data access on remote storage without revealing the operation type. Specifically, we propose three protocols: two use the existing cryptographic primitives of fully homomorphic encryption and trusted execution environments (TEEs), and one uses a new primitive inspired by garbled circuits. Each of these protocols has different trust assumptions, allowing an application to choose the option best suited for its needs. To our knowledge, ORTOA is the first to propose generalized protocols to obfuscate the type of access in a single round, halving the communication overhead. The proposed techniques can pave the way for novel ORAM schemes that hide both the type of access and the access pattern in a single round. Our experimental results show ORTOA achieving throughput gains of 1.7x-3.2x compared to a baseline requiring two rounds for access type concealment, with the baseline incurring latency 1.5-1.9x that of ORTOA for 160B-sized objects.
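For context, the two-round baseline described above (always read, then always write back, so the server cannot tell reads from writes apart) can be sketched as client-side logic in Python; the names and the toy "encryption" are illustrative placeholders, not ORTOA's actual protocol:

import os

class ToyServer:
    def __init__(self):
        self.store = {}
    def get(self, obj_id):
        return self.store[obj_id]
    def put(self, obj_id, ciphertext):
        self.store[obj_id] = ciphertext

def encrypt(key: bytes, value: bytes) -> bytes:
    # Toy placeholder, NOT secure: a random prefix so repeated writes look fresh,
    # followed by a repeating-key XOR of the payload.
    pad = (key * (len(value) // len(key) + 1))[: len(value)]
    return os.urandom(16) + bytes(v ^ k for v, k in zip(value, pad))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    body = ciphertext[16:]
    pad = (key * (len(body) // len(key) + 1))[: len(body)]
    return bytes(c ^ k for c, k in zip(body, pad))

def hidden_access(server, key, obj_id, op, new_value=None):
    # Round 1: always read, regardless of the requested operation.
    current = decrypt(key, server.get(obj_id))
    # Round 2: always write back a fresh-looking ciphertext
    # (the new value for a write, the unchanged value for a read).
    server.put(obj_id, encrypt(key, new_value if op == "write" else current))
    return current if op == "read" else None

# toy usage
srv, k = ToyServer(), b"0123456789abcdef"
srv.put("doc", encrypt(k, b"hello"))
hidden_access(srv, k, "doc", "write", b"world")
print(hidden_access(srv, k, "doc", "read"))  # b'world'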
Last updated:  2023-09-20
Efficient Registration-Based Encryption
Noemi Glaeser, Dimitris Kolonelos, Giulio Malavolta, and Ahmadreza Rahimi
Registration-based encryption (RBE) was recently introduced as an alternative to identity-based encryption (IBE), to resolve the key-escrow problem: In RBE, the trusted authority is substituted with a weaker entity, called the key curator, who has no knowledge of any secret key. Users generate keys on their own and then publicly register their identities and their corresponding public keys to the key curator. RBE is a promising alternative to IBE, retaining many of its advantages while removing the key-escrow problem, the major drawback of IBE. Unfortunately, all existing constructions of RBE use cryptographic schemes in a non black-box way, which makes them prohibitively expensive. It has been estimated that the size of an RBE ciphertext would be in the order of terabytes (though no RBE has even been implemented). In this work, we propose a new approach to construct RBE, from standard assumptions in bilinear groups. Our scheme is black-box and it is concretely highly efficient—a ciphertext is 914 bytes. To substantiate this claim, we implemented a prototype of our scheme and we show that it scales to millions of users. The public parameters of the scheme are on the order of kilobytes. The most expensive operation (registration) takes at most a handful of seconds, whereas the encryption and decryption runtimes are on the order of milliseconds. This is the first-ever implementation of an RBE scheme and demonstrates that the practical deployment of RBE is already possible with today’s hardware.
Last updated:  2022-11-07
On Perfectly Secure Two-Party Computation for Symmetric Functionalities with Correlated Randomness
Bar Alon, Olga Nissenbaum, Eran Omri, Anat Paskin-Cherniavsky, Arpita Patra
A multiparty computation protocol is {\em perfectly secure} for some function $f$ if it perfectly emulates an ideal computation of $f$. Thus, perfect security is the strongest and most desirable notion of security, as it guarantees security in the face of any adversary and eliminates the dependency on any security parameter. Ben-Or et al. [STOC '88] and Chaum et al. [STOC '88] showed that any function can be computed with perfect security if strictly less than one-third of the parties can be corrupted. For two-party sender-receiver functionalities (where only one party receives an output), Ishai et al. [TCC '13] showed that any function can be computed with perfect security in the correlated randomness model. Unfortunately, they also showed that perfect security cannot be achieved in general for two-party functions that give outputs to both parties (even in the correlated randomness model). We study the feasibility of obtaining perfect security for deterministic symmetric two-party functionalities (i.e., where both parties obtain the same output) in the face of malicious adversaries. We explore both the plain model as well as the correlated randomness model. We provide positive results in the plain model, and negative results in the correlated randomness model. As a corollary, we obtain the following results. (1) We provide a characterization of symmetric functionalities with (up to) four possible outputs that can be computed with perfect security. The characterization is further refined when restricted to three possible outputs and to Boolean functions. All characterizations are the same for both the plain model and the correlated randomness model. (2) We show that if a functionality contains an embedded XOR or an embedded AND, then it cannot be computed with perfect security (even in the correlated randomness model).
Last updated:  2022-11-06
The Parallel Reversible Pebbling Game: Analyzing the Post-Quantum Security of iMHFs
Jeremiah Blocki, Blake Holman, Seunghoon Lee
The classical (parallel) black pebbling game is a useful abstraction which allows us to analyze the resources (space, space-time, cumulative space) necessary to evaluate a function $f$ with a static data-dependency graph $G$. Of particular interest in the field of cryptography are data-independent memory-hard functions $f_{G,H}$ which are defined by a directed acyclic graph (DAG) $G$ and a cryptographic hash function $H$. The pebbling complexity of the graph $G$ characterizes the amortized cost of evaluating $f_{G,H}$ multiple times as well as the total cost to run a brute-force preimage attack over a fixed domain $\mathcal{X}$, i.e., given $y \in \{0,1\}^*$ find $x \in \mathcal{X}$ such that $f_{G,H}(x)=y$. While a classical attacker will need to evaluate the function $f_{G,H}$ at least $m=|\mathcal{X}|$ times, a quantum attacker running Grover's algorithm only requires $\mathcal{O}(\sqrt{m})$ black-box calls to a quantum circuit $C_{G,H}$ evaluating the function $f_{G,H}$. Thus, to analyze the cost of a quantum attack, it is crucial to understand the space-time cost (equivalently width times depth) of the quantum circuit $C_{G,H}$. We first observe that a legal black pebbling strategy for the graph $G$ does not necessarily imply the existence of a quantum circuit with comparable complexity --- in contrast to the classical setting, where any efficient pebbling strategy for $G$ corresponds to an algorithm with comparable complexity for evaluating $f_{G,H}$. Motivated by this observation, we introduce a new parallel reversible pebbling game which captures additional restrictions imposed by the No-Deletion Theorem in Quantum Computing. We apply our new reversible pebbling game to analyze the reversible space-time complexity of several important graphs: Line Graphs, Argon2i-A, Argon2i-B, and DRSample. Specifically, (1) we show that a line graph of size $N$ has reversible space-time complexity at most $\mathcal{O}\left(N^{1+\frac{2}{\sqrt{\log N}}}\right)$. (2) We show that any $(e,d)$-reducible DAG has reversible space-time complexity at most $\mathcal{O}(Ne+dN2^d)$. In particular, this implies that the reversible space-time complexities of Argon2i-A and Argon2i-B are at most $\mathcal{O}(N^2 \log \log N/\sqrt{\log N})$ and $\mathcal{O}(N^2/\sqrt[3]{\log N})$, respectively. (3) We show that the reversible space-time complexity of DRSample is at most $\mathcal{O}(N^2 \log \log N/\log N)$. We also study the cumulative pebbling cost of reversible pebblings, extending a (non-reversible) pebbling attack of Alwen and Blocki on depth-reducible graphs.
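As an illustration of the extra restriction the reversible game imposes, the following Python sketch (ours, sequential rather than parallel, and not the paper's formal definition) contrasts the removal rule of the classical black pebbling game with the reversible one, in which a pebble may only be removed while its parents are pebbled so that the move can be undone.

def can_place(pebbled, v, parents):
    # Classical and reversible rule: place a pebble on v only if all
    # parents of v currently hold pebbles (sources have no parents).
    return all(p in pebbled for p in parents.get(v, []))

def can_remove_classical(pebbled, v, parents):
    # Classical black pebbling: any pebble may be removed at any time.
    return v in pebbled

def can_remove_reversible(pebbled, v, parents):
    # Reversible pebbling: removal must be undoable, so v's parents
    # must still be pebbled when the pebble on v is removed.
    return v in pebbled and all(p in pebbled for p in parents.get(v, []))

# Line graph 1 -> 2 -> 3: after pebbling 1, 2, 3, the classical game may drop
# the pebble on 1 immediately, but the reversible game may not drop the pebble
# on 2 once the pebble on 1 is gone.
parents = {2: [1], 3: [2]}
pebbled = {1, 2, 3}
assert can_remove_classical(pebbled, 1, parents)
assert can_remove_reversible(pebbled, 2, parents)
assert not can_remove_reversible({2, 3}, 2, parents)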
Last updated:  2022-11-06
Beyond Uber: Instantiating Generic Groups via PGGs
Balthazar Bauer, Pooya Farshim, Patrick Harasser, Adam O'Neill
The generic-group model (GGM) has been very successful in making the analyses of many cryptographic assumptions and protocols tractable. It is, however, well known that the GGM is “uninstantiable,” i.e., there are protocols secure in the GGM that are insecure when using any real-world group. This motivates the study of standard-model notions formalizing that a real-world group in some sense “looks generic.” We introduce a standard-model definition called pseudo-generic group (PGG), where we require exponentiations with base an (initially) unknown group generator to result in random-looking group elements. In essence, our framework delicately lifts the influential notion of Universal Computational Extractors of Bellare, Hoang, and Keelveedhi (BHK, CRYPTO 2013) to a setting where the underlying ideal reference object is a generic group. The definition we obtain simultaneously generalizes the Uber assumption family, as group exponents no longer need to be polynomially induced. At the core of our definitional contribution is a new notion of algebraic unpredictability, which reinterprets the standard Schwartz–Zippel lemma as a restriction on sources. We prove the soundness of our definition in the GGM with auxiliary-input (AI-GGM). Our remaining results focus on applications of PGGs. We first show that PGGs are indeed a generalization of Uber. We then present a number of applications in settings where exponents are not polynomially induced. In particular we prove that simple variants of ElGamal meet several advanced security goals previously achieved only by complex and inefficient schemes. We also show that PGGs imply UCEs for split sources, which in turn are sufficient in several applications. As corollaries of our AI-GGM feasibility, we obtain the security of all these applications in the presence of preprocessing attacks. Some of our implications utilize a novel type of hash function, which we call linear-dependence destroyers (LDDs) and use to convert standard into algebraic unpredictability. We give an LDD for low-degree sources, and establish their plausibility for all sources by showing, via a compression argument, that random functions meet this definition.
Last updated:  2023-07-04
MinRank in the Head: Short Signatures from Zero-Knowledge Proofs
Gora Adj, Luis Rivera-Zamarripa, Javier Verbel
In recent years, many digital signature scheme proposals have been built from the so-called MPC-in-the-head paradigm. This has proven to be an outstanding way to design efficient signatures with security based on hard problems. MinRank is an NP-complete problem extensively studied due to its applications to cryptanalysis since its introduction in 1999. However, only a few schemes base their security on its intractability, and their signature sizes are large compared with other proposals based on NP problems. This paper introduces the first MinRank-based digital signature scheme that uses the MPC-in-the-head paradigm, enabling it to achieve small signature sizes and running times. For NIST's category I parameter set, we obtain signatures of 6.5 KB, which is competitive with the shortest proposals in the literature that are based on non-structured problems.
Last updated:  2023-02-07
Registered Attribute-Based Encryption
Susan Hohenberger, George Lu, Brent Waters, David J. Wu
Attribute-based encryption (ABE) generalizes public-key encryption and enables fine-grained access control over encrypted data. However, ABE upends the traditional trust model of public-key encryption by requiring a single trusted authority to issue decryption keys. If an adversary compromises the central authority and exfiltrates its secret key, then the adversary can decrypt every ciphertext in the system. This work introduces registered ABE, a primitive that allows users to generate secret keys on their own and then register the associated public key with a "key curator" along with their attributes. The key curator aggregates the public keys from the different users into a single compact master public key. To decrypt, users occasionally need to obtain helper decryption keys from the key curator, which they combine with their own secret keys. We require the size of the aggregated public key, the helper decryption keys, and the ciphertexts, as well as the encryption/decryption times, to be polylogarithmic in the number of registered users. Moreover, the key curator is entirely transparent and maintains no secrets. Registered ABE generalizes the notion of registration-based encryption (RBE) introduced by Garg et al. (TCC 2018), who focused on the simpler setting of identity-based encryption. We construct a registered ABE scheme that supports an a priori bounded number of users and policies that can be described by a linear secret sharing scheme (e.g., monotone Boolean formulas) from assumptions on composite-order pairing groups. Our approach deviates sharply from previous techniques for constructing RBE and only makes black-box use of cryptography. All existing constructions of RBE (a weaker notion than registered ABE) rely on heavy non-black-box techniques. The encryption and decryption costs of our construction are comparable to those of vanilla pairing-based ABE. Two limitations of our scheme are that it requires a structured reference string whose size scales quadratically with the number of users (and linearly with the size of the attribute universe) and that the running time of registration scales linearly with the number of users. Finally, as a feasibility result, we construct a registered ABE scheme that supports general policies and an arbitrary number of users from indistinguishability obfuscation and somewhere statistically binding hash functions.
Last updated:  2023-06-15
WrapQ: Side-Channel Secure Key Management for Post-Quantum Cryptography
Markku-Juhani O. Saarinen
The transition to PQC brings complex challenges to builders of secure cryptographic hardware. PQC keys usually need to be stored off-module and protected via symmetric encryption and message authentication codes. Only a short, symmetric Key-Encrypting Key (KEK) can be managed on-chip with trusted non-volatile key storage. For secure use, PQC key material is handled in masked format, i.e., as randomized shares. Due to the masked encoding of the key material, algorithm-specific techniques are needed to protect the side-channel security of the PQC key import and export processes. In this work, we study key handling techniques used in real-life secure Kyber and Dilithium hardware. We describe WrapQ, a masking-friendly key-wrapping mechanism designed for lattice cryptography. On a high level, WrapQ protects the integrity and confidentiality of key material and allows keys to be stored outside the main security boundary of the module. Significantly, its wrapping and unwrapping processes minimize side-channel leakage from the KEK integrity/authentication keys as well as from the masked Kyber or Dilithium key material payload. We demonstrate that masked Kyber or Dilithium private keys can be managed in a leakage-free fashion from a compact WrapQ format without updating its encoding in non-volatile (or read-only) memory. WrapQ has been implemented in a side-channel secure hardware module. Kyber and Dilithium wrapping and unwrapping functions were validated with 100K traces of ISO 17825 / TVLA-type leakage assessment.
Last updated:  2022-12-14
Simple, Fast, Efficient, and Tightly-Secure Non-Malleable Non-Interactive Timed Commitments
Peter Chvojka, Tibor Jager
Timed commitment schemes, introduced by Boneh and Naor (CRYPTO 2000), can be used to achieve fairness in secure computation protocols in a simple and elegant way. The only known non-malleable construction in the standard model is due to Katz, Loss, and Xu (TCC 2020). This construction requires general-purpose zero-knowledge proofs with specific properties, and it suffers from an inefficient commitment protocol, which requires the committing party to solve a computationally expensive puzzle. We propose new constructions of non-malleable non-interactive timed commitments, which combine (an extension of) the Naor-Yung paradigm used to construct IND-CCA secure encryption with non-interactive ZK proofs for a simple algebraic language. This yields much simpler and more efficient non-malleable timed commitments in the standard model. Furthermore, our constructions also compare favourably to known constructions of timed commitments in the random oracle model, as they achieve several further interesting properties that make the schemes very practical. These include the possibility of using a homomorphism for the forced opening of multiple commitments in the sense of Malavolta and Thyagarajan (CRYPTO 2019), and ours are the first constructions to achieve public verifiability, which seems particularly useful for applying the homomorphism in practical applications.
Last updated:  2022-10-31
Lattice-Based Quantum Advantage from Rotated Measurements
Yusuf Alnawakhtha, Atul Mantri, Carl A. Miller, Daochen Wang
Trapdoor claw-free functions (TCFs) are immensely valuable in cryptographic interactions between a classical client and a quantum server. Typically, a protocol has the quantum server prepare a superposition of the two bit strings of a claw and then measure it using Pauli-$X$ or $Z$ measurements. In this paper, we demonstrate a new technique that uses the entire range of qubit measurements from the $XY$-plane. We show the advantage of this approach in two applications. First, building on (Brakerski et al. 2018, Kalai et al. 2022), we show an optimized two-round proof of quantumness whose security can be expressed directly in terms of the hardness of the LWE (learning with errors) problem. Second, we construct a one-round protocol for blind remote preparation of an arbitrary state on the $XY$-plane up to a Pauli-$Z$ correction.
Last updated:  2022-11-13
Multiplicative Partially Homomorphic CRT Secret Sharing
Shlomi Dolev, Yaniv Kleinman
A new CRT-based positive (non-zero) secret-sharing scheme with perfect information-theoretic (PIT) security and multiplicative homomorphism is presented. The scheme is designed to support the evaluation of multiplications of non-zero secrets of multiplicative groups. Our CRT-based scheme is partially homomorphic, supporting homomorphic multiplications. Nevertheless, our scheme has the potential to be regarded as fully homomorphic for practical scenarios, such as bounded-sized multi-cloud databases.
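The following Python sketch (with illustrative moduli, not the paper's scheme and without its perfect information-theoretic security) shows why CRT residue shares are multiplicatively homomorphic: multiplying shares component-wise yields a valid sharing of the product of the secrets modulo the product of the moduli.

from math import prod

moduli = [101, 103, 107, 109]        # pairwise coprime, illustrative only
M = prod(moduli)

def share(secret):
    # CRT "shares": the residue of the secret modulo each modulus.
    return [secret % m for m in moduli]

def reconstruct(shares):
    # Standard CRT reconstruction modulo M.
    x = 0
    for s, m in zip(shares, moduli):
        Mi = M // m
        x += s * Mi * pow(Mi, -1, m)
    return x % M

s1, s2 = 123456, 654321
prod_shares = [(a * b) % m for a, b, m in zip(share(s1), share(s2), moduli)]
assert reconstruct(prod_shares) == (s1 * s2) % M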
Last updated:  2022-10-31
Peregrine: Toward Fastest FALCON Based on GPV Framework
Eun-Young Seo, Young-Sik Kim, Joon-Woo Lee, Jong-Seon No
FALCON and Crystals-Dilithium are the digital signature algorithms selected as NIST PQC standards at the end of the third round. FALCON has the advantage of the shortest combined public key and signature size, but the disadvantage of a relatively long signing time. Since the FALCON algorithm is designed faithfully to its theoretical security analysis, its implementation is quite complex and computationally demanding. In order to implement the FALCON algorithm, an isochronous discrete Gaussian sampling algorithm must be used to prevent side-channel attacks, which lengthens the signing time. Also, FFT operations on floating-point numbers must be performed in FALCON, and they make it difficult to apply masking techniques, leaving the scheme vulnerable to side-channel attacks. We propose the Peregrine signature algorithm by devising two methods to make the signing algorithm of the FALCON scheme efficient. To reduce the signing time, Peregrine replaces the discrete Gaussian sampling algorithm with sampling from the centered binomial distribution in the key generation and signing algorithms, adjusting the scheme parameters accordingly. Also, it replaces the fast Fourier transform (FFT) operations on floating-point numbers with number theoretic transform (NTT) operations on integers represented in a residue number system (RNS), making the scheme faster and easier to combine with masking techniques that prevent side-channel attacks.
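For reference, a centered binomial sample is simply a difference of two sums of uniform bits; the following Python sketch uses an illustrative parameter k that is not taken from the paper.

import secrets

def centered_binomial(k):
    # Sample eta = sum_{i<k} a_i - b_i with a_i, b_i uniform bits.
    # The result lies in [-k, k] and approximates a narrow discrete Gaussian.
    a = sum(secrets.randbits(1) for _ in range(k))
    b = sum(secrets.randbits(1) for _ in range(k))
    return a - b

samples = [centered_binomial(8) for _ in range(10000)]   # k = 8 is illustrative
print(min(samples), max(samples), sum(samples) / len(samples))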
Last updated:  2023-02-24
The DAG KNIGHT Protocol: A Parameterless Generalization of Nakamoto Consensus
Yonatan Sompolinsky, Michael Sutton
In 2008, Satoshi introduced the first permissionless consensus protocol, known as Nakamoto Consensus (NC) and implemented in Bitcoin. A large body of research has since been dedicated to modifying and extending NC in various aspects: speed, throughput, energy consumption, computation model, and more. One line of work focused on alleviating the security-speed tradeoff that NC suffers from by generalizing Satoshi's blockchain into a directed acyclic graph of blocks, a block DAG. Indeed, the block creation rate in Bitcoin must be suppressed in order to ensure that the block interval is much longer than the worst-case latency in the network. In contrast, the block DAG paradigm allows for arbitrarily high block creation rates and block sizes, as long as the capacity of nodes and of the network backbone is not exceeded. Still, these protocols, as well as other permissionless protocols, assume an a priori bound on the worst-case latency and hardcode a corresponding parameter in the protocol. Confirmation times then depend on this worst-case bound, even when the network is healthy and messages propagate very fast. In this work we set out to remove this constraint and create the first permissionless protocol which contains no a priori in-protocol bound on latency. DAG-KNIGHT is thus responsive to network conditions, while tolerating a corruption of up to 50% of the computational power (hashrate) in the network. To circumvent an impossibility result by Pass and Shi, we require that the client locally specify an upper bound on the maximum recent adversarial latency in the network. DAG-KNIGHT is an evolution of the PHANTOM paradigm, which is a parameterized generalization of NC.
Last updated:  2023-06-02
Enhanced pqsigRM: Code-Based Digital Signature Scheme with Short Signature and Fast Verification for Post-Quantum Cryptography
Jinkyu Cho, Jong-Seon No, Yongwoo Lee, Zahyun Koo, Young-Sik Kim
We present a novel code-based digital signature scheme, called Enhanced pqsigRM, for post-quantum cryptography (PQC). The scheme is based on modified Reed–Muller (RM) codes, i.e., RM codes modified to address several known security problems. Enhanced pqsigRM is a strengthened version of pqsigRM, which was submitted to the NIST PQC standardization in round 1. The proposed scheme has the advantages of a short signature size and fast verification. For 128 bits of classical security, the signature size of the proposed scheme is 1032 bytes, which corresponds to 0.42 times that of Crystals-Dilithium, and the median number of verification cycles is 235,656, which is smaller than that of Crystals-Dilithium. Also, the public codes, the modified RM codes, are more difficult to distinguish from random codes; we construct them from (U, U+V)-codes with a high-dimensional hull. Using modified RM codes, the proposed signature scheme resists various known attacks on RM-code-based cryptography. The proposed decoder samples coset elements with small Hamming weight for any given syndrome and finds such elements efficiently.
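The (U, U+V) construction mentioned above is classical; the following Python/numpy sketch (with toy component codes, unrelated to the actual Enhanced pqsigRM parameters) shows how a generator matrix for the combined code is assembled from generators of U and V.

import numpy as np

def uv_generator(GU, GV):
    # (U, U+V) construction over GF(2): the rows generate codewords (u, u) and
    # (0, v), whose spans together give all codewords of the form (u, u + v).
    top = np.concatenate([GU, GU], axis=1)
    bottom = np.concatenate([np.zeros_like(GV), GV], axis=1)
    return np.concatenate([top, bottom], axis=0) % 2

# Illustrative component codes: U = [4,3] even-weight code, V = [4,1] repetition code.
GU = np.array([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]])
GV = np.array([[1, 1, 1, 1]])
G = uv_generator(GU, GV)
print(G.shape)   # (4, 8): a binary [8, 4] code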
Last updated:  2022-10-30
A Control Theoretic Approach to Infrastructure-Centric Blockchain Tokenomics
Oguzhan Akcin, Robert P. Streit, Benjamin Oommen, Sriram Vishwanath, Sandeep Chinchali
There are a multitude of Blockchain-based physical infrastructure systems, ranging from decentralized 5G wireless to electric vehicle charging networks. These systems operate on a crypto-currency enabled token economy, where node suppliers are rewarded with tokens for enabling, validating, managing and/or securing the system. However, today's token economies are largely designed without infrastructure systems in mind, and often operate with a fixed token supply (e.g., Bitcoin). Such fixed supply systems often encourage early adopters to hoard valuable tokens, thereby resulting in reduced incentives for new nodes when joining or maintaining the network. This paper argues that token economies for infrastructure networks should be structured differently - they should continually incentivize new suppliers to join the network to provide services and support to the ecosystem. As such, the associated token rewards should gracefully scale with the size of the decentralized system, but should be carefully balanced with consumer demand to manage inflation and be designed to ultimately reach an equilibrium. To achieve such an equilibrium, the decentralized token economy should be adaptable and controllable so that it maximizes the total utility of all users, such as achieving stable (overall non-inflationary) token economies. Our main contribution is to model infrastructure token economies as dynamical systems - the circulating token supply, price, and consumer demand change as a function of the payment to nodes and costs to consumers for infrastructure services. Crucially, this dynamical systems view enables us to leverage tools from mathematical control theory to optimize the overall decentralized network’s performance. Moreover, our model extends easily to a Stackelberg game between the controller and the nodes, which we use for robust, strategic pricing. In short, we develop predictive, optimization-based controllers that outperform traditional algorithmic stablecoin heuristics by up to $2.4 \times$ in simulations based on real demand data from existing decentralized wireless networks.
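As a toy illustration of the dynamical-systems view (our own simplification, not the paper's model), the following Python sketch simulates a token supply driven by minted rewards and burned consumer fees, with a proportional controller steering the per-node reward so that net issuance converges to zero, i.e., an overall non-inflationary supply; all dynamics and parameters are assumed for illustration.

# Toy model: circulating supply changes by (minted - burned) each step, and a
# proportional controller drives net issuance to zero so the supply stabilizes.
nodes = 500          # active infrastructure suppliers (held fixed here)
demand = 40_000.0    # tokens burned per step by consumers of the service
reward = 100.0       # tokens minted per node per step (the control input)
supply = 1_000_000.0
kp = 0.5             # controller gain; 0 < kp < 2 gives geometric convergence

for step in range(30):
    minted = reward * nodes
    burned = demand
    supply += minted - burned
    reward -= kp * (minted - burned) / nodes   # proportional feedback on net issuance

print(round(reward, 3), round(supply, 1))      # reward converges to demand / nodes = 80.0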
Last updated:  2022-10-31
LMS-SM3 and HSS-SM3: Instantiating Hash-based Post-Quantum Signature Schemes with SM3
Siwei Sun, Tianyu Liu, Zhi Guan, Yifei He, Jiwu Jing, Lei Hu, Zhenfeng Zhang, Hailun Yan
We instantiate the hash-based post-quantum stateful signature schemes LMS and HSS described in RFC 8554 and NIST SP 800-208 with SM3, and report on the results of the preliminary performance test.
Last updated:  2022-10-30
Efficient Gaussian sampling for RLWE-based cryptography through a fast Fourier transform
Marcio Barbado Junior
Quantum computing threatens classical cryptography, leading to the search for stronger alternatives. The lattice-based approach is considered a viable option. Schemes following that approach use Gaussian sampling, a design which brings along two concerns: efficiency and information leakage. This work addresses those concerns in the RLWE formulation, for digital signatures. Efficiency is improved using the central limit theorem and the Walsh–Hadamard transform, whereas the information-leakage risk is reduced via an isochronous implementation. Up to \( 2^{23} \) samples are queried, and the results are compared against those of a cumulative distribution table sampler. Statistical metrics show the suitability of the presented sampler in a number of contexts.
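The central-limit-theorem idea is simply to add several cheap uniform samples and rescale; the following Python sketch (parameters and rounding are our illustrative assumptions, not the paper's sampler, and it omits the Walsh–Hadamard and isochronicity aspects) approximates a discrete Gaussian this way.

import math
import secrets

def clt_gaussian(sigma, n=16, bits=8):
    # Sum n uniform b-bit samples; by the central limit theorem the centered sum
    # is approximately normal. Rescale to the target sigma and round to an integer.
    m = (1 << bits) - 1
    var_one = ((m + 1) ** 2 - 1) / 12.0       # variance of a uniform sample on {0,...,m}
    total = sum(secrets.randbits(bits) for _ in range(n))
    centered = total - n * m / 2.0
    return round(centered * sigma / math.sqrt(n * var_one))

samples = [clt_gaussian(sigma=3.2) for _ in range(10000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(round(mean, 3), round(var, 3))          # near 0 and near sigma^2 = 10.24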
Last updated:  2023-01-14
On new results on Extremal Algebraic Graph Theory and their connections with Algebraic Cryptography
Vasyl Ustimenko
Homogeneous algebraic graphs defined over an arbitrary field are classical objects of Algebraic Geometry. This class includes the geometries of the Chevalley groups $A_2(F)$, $B_2(F)$ and $G_2(F)$ defined over an arbitrary field $F$. Define the codimension of a homogeneous graph as the ratio of the dimension of its variety of vertices to the dimension of the neighbourhood of a vertex. We evaluate the minimal codimensions $v(g)$ and $u(h)$ of algebraic graphs of prescribed girth $g$ and cycle indicator $h$. Recall that the girth is the size of a minimal cycle in the graph, and the cycle indicator stands for the maximal value of the shortest path through some vertex. We prove that for even $h$ the inequality $u(h) \le (h-2)/2$ holds. We define a class of homogeneous algebraic graphs with even cycle indicator $h$ and codimension $(h-2)/2$. It contains the geometries $A_2(F)$, $B_2(F)$ and $G_2(F)$ and infinitely many other homogeneous algebraic graphs.
Last updated:  2022-12-01
Quagmire ciphers and group theory: What is a Beaufort cipher?
Thomas Kaeding
We show that a Beaufort cipher is simultaneously both a quagmire 1 and a quagmire 2 cipher, which includes it in the set of quagmire 4 ciphers as well, albeit as a degenerate one. The Beaufort is one of a family of ciphers that share this property.
Last updated:  2023-10-07
An efficient verifiable state for zk-EVM and beyond from the Anemoi hash function
Jianwei Liu, Harshad Patil, Akhil Sai Peddireddy, Kevin Singh, Haifeng Sun, Huachuang Sun, and Weikeng Chen
In our survey of the various zk-EVM constructions, it becomes apparent that verifiable storage of the EVM state is becoming one of the dominating costs. This is not surprising because a big differentiator of EVM from UTXO is exactly the ability to carry states and, most importantly, their transitions; i.e., EVM is a **state** machine. In other words, to build an efficient zk-EVM, one must first build an efficient verifiable state. The common approach, which has been used in production, is to authenticate the randomly accessed memory with a Merkle forest inside the zk-SNARK and to optimize the verification of such memory accesses. In this note, we describe a way to instantiate a Merkle tree with very few gates in TurboPlonk. We use customized gates in TurboPlonk to implement a SNARK-friendly hash function called Anemoi and its Jive mode of operation, by Clémence Bouvier, Pierre Briaud, Pyrros Chaidos, Léo Perrin, Robin Salen, Vesselin Velichkov, and Danny Willems. We demonstrate that with $16$ gates ($\approx1$ gate per round in a 14-round Anemoi hash), one can verify a 3-to-1 compression in a 3-ary Merkle tree. Before this, prior implementations would often require hundreds of gates. We anticipate that this technique will benefit a large number of applications built on zk-SNARKs. Our code can be found in $\mathtt{noah}$: https://github.com/FindoraNetwork/noah
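Structurally, the object being optimized is a 3-ary Merkle tree whose internal nodes are 3-to-1 compressions of their children; the following Python sketch uses SHA-256 as a stand-in for the Anemoi/Jive compression, purely to show the tree shape, and says nothing about the in-circuit gate count that is the note's actual contribution.

import hashlib

def compress3(a, b, c):
    # Stand-in 3-to-1 compression; the note uses the Anemoi permutation in Jive mode.
    return hashlib.sha256(a + b + c).digest()

def merkle_root_3ary(leaves):
    # Pad each level to a multiple of 3 and fold level by level.
    level = list(leaves)
    while len(level) > 1:
        while len(level) % 3:
            level.append(b"\x00" * 32)
        level = [compress3(level[i], level[i + 1], level[i + 2])
                 for i in range(0, len(level), 3)]
    return level[0]

leaves = [hashlib.sha256(bytes([i])).digest() for i in range(9)]
print(merkle_root_3ary(leaves).hex())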
Last updated:  2022-10-28
Correlation Intractability and SNARGs from Sub-exponential DDH
Arka Rai Choudhuri, Sanjam Garg, Abhishek Jain, Zhengzhong Jin, Jiaheng Zhang
We provide the first constructions of SNARGs for Batch-NP and P based solely on the sub-exponential Decisional Diffie Hellman (DDH) assumption. Our schemes achieve poly-logarithmic proof sizes. Central to our results and of independent interest is a new construction of correlation-intractable hash functions for ``small input'' product relations verifiable in $\mathsf{TC}^0$, based on sub-exponential DDH.
Last updated:  2022-10-28
Policy-Based Redactable Signatures
Zachary A Kissel
In this work we make progress towards solving an open problem posed by Bilzhause et al., namely to give constructions of redactable signature schemes that allow the signer to limit the possible redactions performed by a third party. A separate but related notion, called controlled disclosure, allows a redactor to limit future redactions. We look at two types of data: sets and linear data (data organized as a sequence). In the case of sets, we limit redactions using a policy modeled by a monotone circuit or, depending on the size of the universe the set is drawn from, by an arbitrary circuit. In the case of linear data, we give a linear construction from vector commitments that limits redactions using a policy modeled as a monotone circuit. Our constructions have the attractive feature that they are built using only black-box techniques.
Last updated:  2023-08-22
Efficient and Universally Composable Non-Interactive Zero-Knowledge Proofs of Knowledge with Security Against Adaptive Corruptions
Anna Lysyanskaya and Leah Namisa Rosenbloom
Non-interactive zero-knowledge proofs of knowledge (NIZKPoK) serve as a key building block in many important cryptographic constructions. Achieving universally composable NIZKPoK secure against adaptive corruptions was a long-standing open problem, recently solved by Canetti, Sarkar, and Wang (Asiacrypt'22). This sole known construction requires heavy cryptographic machinery such as correlation-intractable hash functions, and is not ready for use in practice. In this paper, we give constructions of adaptively secure universally composable NIZKPoK in the global random-oracle model; we consider both the programmable and the non-programmable versions of the model. For many practical NIZK proof systems, our constructions incur only a polylogarithmic slowdown factor compared to stand-alone security.
Last updated:  2023-12-16
Towards Practical Secure Neural Network Inference: The Journey So Far and the Road Ahead
Zoltán Ádám Mann, Christian Weinert, Daphnee Chabal, and Joppe W. Bos
Neural networks (NNs) have become one of the most important tools for artificial intelligence (AI). Well-designed and trained NNs can perform inference (e.g., make decisions or predictions) on unseen inputs with high accuracy. Using NNs often involves sensitive data: depending on the specific use case, the input to the NN and/or the internals of the NN (e.g., the weights and biases) may be sensitive. Thus, there is a need for techniques for performing NN inference securely, ensuring that sensitive data remains secret. In the past few years, several approaches have been proposed for secure neural network inference. These approaches achieve better and better results in terms of efficiency, security, accuracy, and applicability, thus making big progress towards practical secure neural network inference. The proposed approaches make use of many different techniques, such as homomorphic encryption and secure multi-party computation. The aim of this survey paper is to give an overview of the main approaches proposed so far, their different properties, and the techniques used. In addition, remaining challenges towards large-scale deployments are identified.
Last updated:  2022-10-28
Multi-Point HashDH OPRF using Multiplicative Blinding with Application to Private Set Intersection
Minglang Dong
Private set intersection (PSI) protocols with an oblivious pseudorandom function (OPRF) as the core component form a crucial branch of the PSI family, and the most efficient PSI protocol at present also belongs to this category. Based on the DDH assumption, Hash Diffie-Hellman (HashDH) PSI is one of the most classical PSI protocols. Benefiting from its low communication overhead, it still has tremendous research value today. The OPRF subprotocol underlying the classical DH-PSI protocol follows the abstract blind-query-then-unblind OPRF paradigm and employs the exponential blinding (Exp-HashDH) method. An alternative method, multiplicative blinding (Mult-HashDH), offers a performance improvement that exponential blinding cannot: it substitutes fixed-base exponentiations for the variable-base exponentiations, and by taking full advantage of this feature together with precomputation, the computational efficiency of the client can be at least doubled. However, neither Mult-HashDH OPRF nor Mult-HashDH PSI has a rigorous security proof in the semi-honest model, which leaves the security of the scheme in doubt. In this paper, a security proof of a modified Mult-HashDH OPRF is formally given in the semi-honest model, and a HashDH PSI protocol is then constructed from it, which ensures the security of the scheme without harming the efficiency of the protocol. The experimental comparison shows that our protocol achieves a 2.65−13.20× speedup in running time.
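The following Python sketch over a toy multiplicative group (insecure parameters, illustrative hash-to-group map) contrasts the two blinding methods: exponential blinding raises H(x) to a random r and unblinds with r^{-1}, whereas multiplicative blinding multiplies H(x) by g^r and unblinds by dividing by pk^r, so the client only performs exponentiations with the fixed bases g and pk, which can be precomputed.

import hashlib
import secrets

# Toy group: p = 2q + 1 with q = 11; g = 4 generates the order-q subgroup.
# These parameters are for illustration only and offer no security.
p, q, g = 23, 11, 4

def hash_to_group(x: bytes):
    # Illustrative hash-to-group map (as an exponent of g); a real scheme needs a proper one.
    return pow(g, int.from_bytes(hashlib.sha256(x).digest(), "big") % q, p)

k = secrets.randbelow(q - 1) + 1       # server's OPRF key
pk = pow(g, k, p)                      # server's public key g^k (used by multiplicative blinding)
x = b"element"
h = hash_to_group(x)
expected = pow(h, k, p)                # the OPRF output H(x)^k

# Exponential blinding: one variable-base exponentiation to blind, one to unblind.
r = secrets.randbelow(q - 1) + 1
blinded = pow(h, r, p)                 # client -> server
evaluated = pow(blinded, k, p)         # server -> client
assert pow(evaluated, pow(r, -1, q), p) == expected

# Multiplicative blinding: the client only exponentiates the fixed bases g and pk,
# so both exponentiations can be precomputed offline.
r = secrets.randbelow(q - 1) + 1
blinded = (h * pow(g, r, p)) % p       # client -> server
evaluated = pow(blinded, k, p)         # server -> client
assert (evaluated * pow(pow(pk, r, p), -1, p)) % p == expected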