https://eprint.iacr.org/rss/atom.xml
Cryptology ePrint Archive
Updated: 2022-05-28T19:52:23+00:00
Logo: https://iacr.org/img/logo/iacrlogo_small.png
Metadata is available under the CC0 license (https://creativecommons.org/publicdomain/zero/1.0/). Each article has a PDF with its own license.

The Cryptology ePrint Archive provides rapid access to recent research in cryptology. Papers have been placed here by the authors and did not undergo any refereeing process other than verifying that the work seems to be within the scope of cryptology and meets some minimal acceptance criteria and publishing conditions.

https://eprint.iacr.org/2022/599
TenderTee: Secure Tendermint
2022-05-17T13:03:22+00:00
Lionel Beltrando, Maria Potop-Butucaru, Jose Alfaro

Blockchain and distributed ledger technologies have emerged as some of the most revolutionary distributed systems, aiming to eliminate centralised intermediaries and install distributed trusted services. They facilitate trustworthy trades and exchanges over the Internet, power cryptocurrencies, ensure transparency for documents, and much more.
Committee-based blockchains are considered today a viable alternative to the original proof-of-work paradigm, since they offer strong consistency and are energy efficient. One of the most popular committee-based blockchains is Tendermint, used as the core of several popular blockchains such as Tezos, Binance Smart Chain, and Cosmos. Interestingly, Tendermint, like many other committee-based blockchains, is designed to tolerate up to one third of Byzantine nodes.
In this paper we propose TenderTee, an enhanced version of Tendermint able to tolerate one half of Byzantine nodes. The resilience improvement is due to the use of a trusted abstraction, a light version of attested append-only memory, which makes the protocol immune to equivocation (i.e., the behavior of a faulty node that sends different, conflicting messages to different nodes). Furthermore, we prove the correctness of TenderTee for both one-shot and repeated consensus specifications.
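A minimal sketch of the attested append-only idea: a trusted module binds each value to a unique log position, so a node cannot show two different values for the same slot to different peers. The interface below is hypothetical (the paper's abstraction may differ), and an HMAC stands in for the module's real attestation signature.

```python
import hashlib
import hmac

class AttestedAppendOnlyLog:
    """Toy model of an attested append-only memory.  The trusted
    module only ever attests one value per log position, so a faulty
    node cannot equivocate: a second, conflicting value for the same
    slot can never carry a valid attestation."""

    def __init__(self, secret_key: bytes):
        self._key = secret_key
        self._log = []  # append-only list of committed values

    def append(self, value: bytes) -> tuple[int, bytes]:
        """Append a value and return (index, attestation)."""
        index = len(self._log)
        self._log.append(value)
        return index, self._attest(index, value)

    def _attest(self, index: int, value: bytes) -> bytes:
        # HMAC over (index, value); a real module would use a
        # public-key signature so anyone can verify.
        msg = index.to_bytes(8, "big") + value
        return hmac.new(self._key, msg, hashlib.sha256).digest()

    def verify(self, index: int, value: bytes, attestation: bytes) -> bool:
        """Check that the module really bound `value` to slot `index`."""
        return hmac.compare_digest(self._attest(index, value), attestation)

log = AttestedAppendOnlyLog(b"device-secret")
i, att = log.append(b"block-A")
assert log.verify(i, b"block-A", att)
# Equivocation would require a valid attestation for slot i with a
# different value, which the module never produces:
assert not log.verify(i, b"block-B", att)
```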
License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2022/600
A Nearly Tight Proof of Duc et al.'s Conjectured Security Bound for Masked Implementations
2022-05-17T13:03:55+00:00
Loïc Masure, Olivier Rioul, François-Xavier Standaert

We prove a bound that approaches Duc et al.'s conjecture from Eurocrypt 2015 for the side-channel security of masked implementations. Let \(Y\) be a sensitive intermediate variable of a cryptographic primitive taking its values in a set \(\mathcal{Y}\). If \(Y\) is protected by masking (a.k.a. secret sharing) at order \(d\) (i.e., with \(d+1\) shares), then the complexity of any non-adaptive side-channel analysis --- measured by the number of queries to the target implementation required to guess the secret key with sufficient confidence --- is lower bounded by a quantity inversely proportional to the product of the mutual information between each share of \(Y\) and its respective leakage. Our new bound is nearly tight in the sense that each factor in the product has an exponent of \(-1\) as conjectured, and its multiplicative constant is \(\mathcal{O}\left(\log |\mathcal{Y}| \cdot |\mathcal{Y}|^{-1} \cdot C^{-d}\right)\), where \(C = 2 \log(2) \approx 1.38\). It drastically improves upon previous proven bounds, where the exponent was \(-1/2\) and the multiplicative constant was \(\mathcal{O}\left(|\mathcal{Y}|^{-d}\right)\).
As a consequence for side-channel security evaluators, it is possible to provably and efficiently infer the security level of a masked implementation by simply analyzing each individual share, under the necessary condition that the leakages of these shares are independent.
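To make the improvement concrete, the bound has the following shape (notation from the abstract; the numeric example is illustrative, not taken from the paper):

```latex
% N is the query complexity of the attack; Y_0,\dots,Y_d are the shares
% with leakages L_0,\dots,L_d; the constant c hides the factor
% \log|\mathcal{Y}| \cdot |\mathcal{Y}|^{-1} \cdot C^{-d} with C = 2\log 2.
\[
  N \;\gtrsim\; \frac{c}{\prod_{i=0}^{d} \operatorname{MI}(Y_i; L_i)} .
\]
% Illustrative example: with d = 2 (three shares) and each share leaking
% \operatorname{MI}(Y_i; L_i) = 10^{-2} bits, the product is 10^{-6}, so the
% query complexity grows by a factor of about 10^{6} (up to the constant),
% whereas the previous exponent of -1/2 would only give a factor of 10^{3}.
```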
License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2022/601
A Better Method to Analyze Blockchain Consistency
2022-05-17T13:04:19+00:00
Lucianna Kiffer, Rajmohan Rajaraman, abhi shelat

The celebrated Nakamoto consensus protocol ushered in several new consensus applications, including cryptocurrencies. A
few recent works have analyzed important properties of blockchains, including most significantly consistency, a guarantee that all honest parties output the same sequence of blocks throughout the execution of the protocol.
To establish consistency, the prior analysis of Pass, Seeman, and shelat required a careful counting of certain combinatorial events that was difficult to apply to variations of Nakamoto. The work of Garay, Kiayias, and Leonardos provides another method of analyzing the blockchain under both a synchronous and a partially synchronous setting.
The contribution of this paper is the development of a simple Markov-chain-based method for analyzing consistency properties of blockchain protocols. The method includes a formal way of stating strong concentration bounds as well as easy ways to concretely compute the bounds. We use our new method to answer a number of basic questions about consistency of blockchains:
• Our new analysis provides a tighter guarantee on the consistency property of Nakamoto’s protocol, including for parameter regimes which previous work could not consider;
• We analyze a family of delaying attacks and extend them to other protocols;
• We analyze how long a participant should wait before considering a high-value transaction “confirmed”;
• We analyze the consistency of CliqueChain, a variation of the Chainweb system;
• We provide the first rigorous consistency analysis of GHOST under the partially synchronous setting and also analyze a folklore "balancing"-attack.
In each case, we use our framework to experimentally analyze the consensus bounds for various network delay parameters and adversarial computing percentages.
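For flavor, the confirmation-waiting question above is classically answered with a biased random walk: the probability that an adversary controlling hash-power fraction q ever overturns k confirmations decays geometrically in k. The sketch below is this textbook gambler's-ruin calculation, not the paper's Markov-chain framework.

```python
def catchup_probability(q: float, k: int) -> float:
    """Probability that an attacker with hash-power fraction q (honest
    fraction p = 1 - q) ever erases a deficit of k blocks.  This is the
    classic biased-random-walk bound underlying longest-chain
    consistency arguments."""
    p = 1.0 - q
    if q >= p:
        return 1.0          # attacker majority: catches up almost surely
    return (q / p) ** k

# Waiting for more confirmations drives the failure probability down
# geometrically: each extra block multiplies it by q/p.
for k in (1, 6, 12):
    print(k, catchup_probability(0.25, k))
```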
We hope our techniques enable authors of future blockchain proposals to provide a more rigorous analysis of their schemes.

License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2022/602
Real-Time Frequency Detection to Synchronize Fault Injection on System-on-Chip
2022-05-17T13:04:49+00:00
Clément Fanjas, Clément Gaine, Driss Aboulkassimi, Simon Pontié, Olivier Potin

The success rate of Fault Injection (FI) and Side-Channel Analysis (SCA) depends on the quality of the synchronization available in the target.
Since modern SoCs implement complex hardware architectures able to run at high clock frequencies, synchronizing hardware security characterization becomes a real challenge. When I/Os are unavailable or unreachable, or when the synchronization quality is insufficient, other triggering methodologies must be investigated.
This paper proposes a new synchronization approach named Synchronization by Frequency Detection (SFD), which does not use the target's I/Os. The approach consists of identifying, ahead of the targeted vulnerability, a specific piece of code that activates a characteristic frequency detectable in the EM field measured from the target. The EM field is analyzed in real time, and the injection is triggered upon detection of this characteristic frequency.
To validate this new triggering methodology, this paper presents a proof-of-concept exploitation of SFD against the Android Secure-Boot of a smartphone-grade SoC. By triggering the attack upon the activation of a frequency of 124.5 MHz during an RSA signature computation, we were able to synchronize an electromagnetic fault injection to skip a vulnerable instruction in the Linux kernel authentication.
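The detection step can be sketched with a single-bin Goertzel filter, the kind of cheap streaming test a real-time trigger could run on digitized EM samples. Everything below (sampling rate, window length, thresholding) is an illustrative assumption, not the authors' implementation.

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Power of the DFT bin nearest target_freq, computed with the
    Goertzel algorithm (one pass, two state variables -- cheap enough
    for a streaming trigger)."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest frequency bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

# A trigger would fire when power in the characteristic bin crosses a
# calibrated threshold.  Demo: a tone near 124.5 MHz sampled at an
# assumed 1 GS/s stands out against an off-frequency tone.
fs, f0, n = 1_000_000_000, 124_500_000, 1000
on = [math.sin(2 * math.pi * f0 * t / fs) for t in range(n)]
off = [math.sin(2 * math.pi * 80_000_000 * t / fs) for t in range(n)]
assert goertzel_power(on, fs, f0) > 100 * goertzel_power(off, fs, f0)
```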
We successfully bypassed this security feature, effectively running Android OS with a compromised Linux kernel, with one success every 15 minutes.

License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2022/603
Distributed Blockchain Price Oracle
2022-05-17T13:05:25+00:00
Léonard Lys, Maria Potop-Butucaru

Blockchain oracles are systems that connect blockchains with
the outside world by interfacing with external data providers. They provide decentralized applications with the external information needed for smart contract execution. In this paper, we focus on decentralized price oracles: distributed systems that provide exchange rates of digital assets to smart contracts. They are the cornerstone of the safety of some decentralized finance applications such as stablecoins or lending protocols. They consist of a network of nodes, called oracles, that gather information from off-chain sources such as an exchange market's API and feed it to smart contracts. Among the desired properties of a price oracle system are low latency, availability, and low operating cost. Moreover, such systems must overcome constraints such as the need for diverse data sources (known as the freeloading problem) and Byzantine failures.
In this paper, we define the distributed price oracle problem and present PoWacle, the first asynchronous decentralized oracle protocol that copes
with Byzantine behavior.

License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2022/582
Ponyta: Foundations of Side-Contract-Resilient Fair Exchange
2022-05-17T13:24:48+00:00
Hao Chung, Elisaweta Masserova, Elaine Shi, Sri AravindaKrishnan Thyagarajan

Fair exchange is a fundamental primitive for blockchains, and is widely adopted in applications such as atomic swaps, payment channels, and DeFi. Most existing designs of blockchain-based fair exchange protocols consider only the users as strategic players and assume honest miners. However, recent works revealed that the fairness of commonly deployed fair exchange protocols can be completely broken in the presence of user-miner collusion. In particular, a user can bribe the miners to help it cheat -- a phenomenon also referred to as Miner Extractable Value (MEV).
We provide the first formal treatment of side-contract-resilient fair exchange. We propose a new fair exchange protocol called Ponyta, and we prove that the protocol is incentive compatible in the presence of user-miner collusion. In particular, we show that Ponyta satisfies a coalition-resistant Nash equilibrium. Further, we show how to use Ponyta to realize a cross-chain coin swap application, and prove that our coin swap protocol also satisfies a coalition-resistant Nash equilibrium. Our work helps to lay the theoretical groundwork for studying side-contract-resilient fair exchange. Finally, we present practical instantiations of Ponyta in Bitcoin and Ethereum with minimal overhead in terms of costs for the users involved in the fair exchange, thus showcasing the instantiability of Ponyta with a wide range of cryptocurrencies.

2022-05-17T06:43:05+00:00
License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2021/1545
Longest Chain Consensus Under Bandwidth Constraint
2022-05-18T00:09:10+00:00
Joachim Neu, Srivatsan Sridhar, Lei Yang, David Tse, Mohammad Alizadeh

Spamming attacks are a serious concern for consensus protocols, as witnessed by recent outages of a major blockchain, Solana. They cause congestion and excessive message delays in a real network due to its bandwidth constraints. In contrast, longest chain (LC), an important family of consensus protocols, has previously only been proven secure assuming an idealized network model in which all messages are delivered within bounded delay. This model-reality mismatch is further aggravated for Proof-of-Stake (PoS) LC, where the adversary can spam the network with equivocating blocks. Hence, we extend the network model to capture bandwidth constraints, under which nodes now need to choose carefully which blocks to spend their limited download budget on.
To illustrate this point, we show that 'download along the longest header chain', a natural download rule for Proof-of-Work (PoW) LC, is insecure for PoS LC. We propose a simple rule, 'download towards the freshest block', formalize two common heuristics, 'not downloading equivocations' and 'blocklisting', and prove in a unified framework that PoS LC with any one of these download rules is secure in bandwidth-constrained networks. In experiments, we validate our claims and showcase the behavior of these download rules under attack. By composing multiple instances of a PoS LC protocol with a suitable download rule in parallel, we obtain a PoS consensus protocol that achieves a constant fraction of the network's throughput limit even under worst-case adversarial strategies.
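The freshest-block idea can be sketched as a scheduling policy over candidate headers. The `Header` type and `schedule_downloads` interface below are hypothetical illustrations; the paper's rule operates on header chains and per-slot logic not modeled here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Header:
    block_id: str
    timestamp: int  # slot in which the block claims to be produced

def schedule_downloads(candidates: list[Header], budget: int) -> list[Header]:
    """Toy version of a 'download towards the freshest block' rule:
    with a limited per-slot download budget, spend it on the most
    recently produced headers first, so spam built on old (possibly
    equivocating) blocks cannot starve fresh honest blocks."""
    freshest_first = sorted(candidates, key=lambda h: h.timestamp, reverse=True)
    return freshest_first[:budget]

# Five stale spam headers cannot crowd out the one fresh header.
headers = [Header(f"old-spam-{i}", 3) for i in range(5)] + [Header("fresh", 9)]
picked = schedule_downloads(headers, budget=2)
assert picked[0].block_id == "fresh"  # fresh block is always served first
```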
2021-11-29T12:14:39+00:00
License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2021/628
The Availability-Accountability Dilemma and its Resolution via Accountability Gadgets
2022-05-18T05:25:31+00:00
Joachim Neu, Ertem Nusret Tas, David Tse

For applications of Byzantine fault tolerant (BFT) consensus protocols where the participants are economic agents, recent works have highlighted the importance of accountability: the ability to identify participants who provably violate the protocol. At the same time, being able to reach consensus under dynamic levels of participation is desirable for censorship resistance. We identify an availability-accountability dilemma: in an environment with dynamic participation, no protocol can simultaneously be accountably safe and live. We provide a resolution to this dilemma by constructing a provably secure, optimally resilient accountability gadget to checkpoint a longest chain protocol, such that the full ledger is live under dynamic participation and the checkpointed prefix ledger is accountable. Our accountability gadget construction is black-box and can use any BFT protocol which is accountable under static participation.
Using HotStuff as the black box, we implemented our construction as a protocol for the Ethereum 2.0 beacon chain, and our Internet-scale experiments with more than 4,000 nodes show that the protocol achieves the required scalability and has better latency than the current solution, Gasper, which was shown insecure by recent attacks.

2021-05-17T06:32:03+00:00
License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2021/1634
McEliece needs a Break -- Solving McEliece-1284 and Quasi-Cyclic-2918 with Modern ISD
2022-05-18T08:41:14+00:00
Andre Esser, Alexander May, Floyd Zweydinger

With the recent shift to post-quantum algorithms it becomes increasingly important to provide precise bit-security estimates for code-based cryptography such as McEliece and quasi-cyclic schemes like BIKE and HQC. While there has been significant progress on information set decoding (ISD) algorithms within the last decade, it is still unclear to what extent this affects current cryptographic security estimates.
We provide the first concrete implementations of representation-based ISD, such as May-Meurer-Thomae (MMT) and Becker-Joux-May-Meurer (BJMM), that are parameter-optimized for the McEliece and quasi-cyclic settings. Although MMT and BJMM consume more memory than naive ISD algorithms like Prange, we demonstrate that these algorithms lead to significant speedups for practical cryptanalysis on medium-sized instances (around 60 bits). More concretely, we provide data for the record computations of McEliece-1223 and McEliece-1284 (old record: 1161), and for the quasi-cyclic setting up to code length 2918 (before: 1938).
Based on our record computations we extrapolate to the bit-security level of the proposed BIKE, HQC and McEliece parameters in NIST's standardization process.
For BIKE/HQC, we also show how to transfer the Decoding-One-Out-of-Many (DOOM) technique to MMT/BJMM. Although we achieve significant DOOM speedups, our estimates confirm the bit-security levels of BIKE and HQC.
For the proposed McEliece round-3 parameter sets of 192 and 256 bits, however, our extrapolation indicates a security-level overestimate of roughly 20 and 10 bits, respectively, i.e., the high-security McEliece instantiations may be a bit less secure than desired.
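The baseline that MMT and BJMM refine, plain Prange ISD, fits in a few lines: guess an information set, solve a linear system over GF(2) for an error pattern supported on it, and keep the pattern if its weight is small enough. The sketch below runs on a toy instance (the record computations above use vastly larger parameters and heavy optimization).

```python
import random

def gf2_solve(rows, b):
    """Solve A x = b over GF(2).  rows[i] is row i of the square matrix
    A packed as a bitmask; returns the solution bits, or None if A is
    singular."""
    r = len(rows)
    aug = [rows[i] | (b[i] << r) for i in range(r)]  # augment with b
    for col in range(r):
        piv = next((i for i in range(col, r) if (aug[i] >> col) & 1), None)
        if piv is None:
            return None
        aug[col], aug[piv] = aug[piv], aug[col]
        for i in range(r):
            if i != col and (aug[i] >> col) & 1:
                aug[i] ^= aug[col]
    return [(aug[i] >> r) & 1 for i in range(r)]

def prange(H, s, w, iterations=5000):
    """Plain Prange ISD: find e of weight <= w with H e = s (mod 2)."""
    r, n = len(H), len(H[0])
    for _ in range(iterations):
        info = random.sample(range(n), r)        # guess an information set
        sub = [sum(H[i][info[j]] << j for j in range(r)) for i in range(r)]
        x = gf2_solve(sub, s)
        if x is not None and sum(x) <= w:        # lucky guess: low weight
            e = [0] * n
            for j, col in enumerate(info):
                e[col] = x[j]
            return e
    return None

# Toy [16, 8] instance with a weight-2 error.
random.seed(1)
r, n, w = 8, 16, 2
H = [[random.randint(0, 1) for _ in range(n)] for _ in range(r)]
e = [0] * n
e[3] = e[11] = 1
s = [sum(H[i][j] * e[j] for j in range(n)) % 2 for i in range(r)]
found = prange(H, s, w)
assert found is not None and sum(found) <= w
assert all(sum(H[i][j] * found[j] for j in range(n)) % 2 == s[i] for i in range(r))
```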
2021-12-17T14:23:07+00:00
License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2020/476
Generalized Channels from Limited Blockchain Scripts and Adaptor Signatures
2022-05-18T14:11:19+00:00
Lukas Aumayr, Oguzhan Ersoy, Andreas Erwig, Sebastian Faust, Kristina Hostakova, Matteo Maffei, Pedro Moreno-Sanchez, Siavash Riahi

Decentralized and permissionless ledgers offer an inherently low transaction rate, as a result of their consensus protocol demanding the storage of each transaction on-chain. A prominent proposal to tackle this scalability issue is to utilize off-chain protocols, where parties only need to post a limited number of transactions on-chain. Existing solutions can roughly be categorized into: (i) application-specific channels (e.g., payment channels), offering strictly weaker functionality than the underlying blockchain; and (ii) state channels, supporting arbitrary smart contracts at the cost of being compatible only with the few blockchains having Turing-complete scripting languages (e.g., Ethereum).
In this work, we introduce and formalize the notion of generalized channels, allowing users to perform any operation supported by the underlying blockchain in an off-chain manner. Generalized channels thus extend the functionality of payment channels and relax the definition of state channels. We present a concrete construction compatible with any blockchain supporting transaction authorization, time-locks, and a constant number of Boolean $\land$ and $\lor$ operations -- requirements fulfilled by many (non-Turing-complete) blockchains, including the popular Bitcoin. To this end, we leverage adaptor signatures -- a cryptographic primitive already used in the cryptocurrency literature but formalized as a standalone primitive in this work for the first time. We formally prove the security of our generalized channel construction in the Universal Composability framework.
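The adaptor-signature idea can be sketched with a toy Schnorr-style scheme over a small prime-order group: a pre-signature verifies against a statement Y = g^y but only becomes a valid signature once the witness y is added, and anyone seeing both the pre-signature and the completed signature can extract y. This is illustrative only; the paper formalizes the primitive abstractly, and real systems use standard elliptic-curve groups.

```python
import hashlib
import random

# Toy Schnorr group: p = 2q + 1, g generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def H(R, X, m):
    """Fiat-Shamir challenge."""
    return int.from_bytes(hashlib.sha256(f"{R}|{X}|{m}".encode()).digest(), "big") % q

def pre_sign(x, X, Y, m):
    """Pre-signature bound to the statement Y = g^y."""
    r = random.randrange(1, q)
    R_full = (pow(g, r, p) * Y) % p      # commit to R' * Y up front
    c = H(R_full, X, m)
    return R_full, (r + c * x) % q       # (R_full, s_hat)

def pre_verify(X, Y, m, R_full, s_hat):
    # s_hat is "one witness short": g^s_hat * Y == R_full * X^c.
    c = H(R_full, X, m)
    return (pow(g, s_hat, p) * Y) % p == (R_full * pow(X, c, p)) % p

def adapt(s_hat, y):
    return (s_hat + y) % q               # completing the signature

def verify(X, m, R_full, s):
    c = H(R_full, X, m)                  # plain Schnorr verification
    return pow(g, s, p) == (R_full * pow(X, c, p)) % p

def extract(s_hat, s):
    return (s - s_hat) % q               # anyone seeing both learns y

random.seed(7)
x, y = 123, 77                           # signing key and witness
X, Y = pow(g, x, p), pow(g, y, p)
m = "close channel at state 42"          # hypothetical message
R_full, s_hat = pre_sign(x, X, Y, m)
assert pre_verify(X, Y, m, R_full, s_hat)
s = adapt(s_hat, y)
assert verify(X, m, R_full, s)
assert extract(s_hat, s) == y
```

This "verify now, complete later, completing reveals the witness" structure is what lets channel protocols tie the validity of one transaction to the publication of another without Turing-complete scripts.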
As an important practical contribution, our generalized channel construction outperforms the state-of-the-art payment channel construction, the Lightning Network, in efficiency. Concretely, it halves the off-chain communication complexity and reduces the on-chain footprint in case of disputes from linear to constant in the number of off-chain applications funded by the channel. Finally, we evaluate the practicality of our construction via a prototype implementation and discuss various applications including financially secured fair two-party computation.
2020-04-28T10:07:48+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/522Public-key Cryptosystems and Signature Schemes from p-adic Lattices2022-05-19T03:34:47+00:00Yingpu DengLixia LuoYanbin PanZhaonan WangGuanju XiaoIn 2018, the longest vector problem and closest vector problem in local fields were introduced as the p-adic analogues of the shortest vector problem and closest vector problem in lattices of Euclidean spaces. They are considered to be hard and useful for constructing cryptographic primitives, but no applications in cryptography had been given. In this paper, we construct the first signature scheme and public-key encryption cryptosystem based on p-adic lattices by proposing a trapdoor function using the orthogonal basis of a p-adic lattice. These cryptographic schemes have reasonable key sizes and efficiency, which shows that p-adic lattices can be a new alternative for constructing cryptographic primitives and are well worth studying.2021-04-23T12:23:51+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/1038Reinforced Concrete: A Fast Hash Function for Verifiable Computation2022-05-19T08:39:43+00:00Lorenzo GrassiDmitry KhovratovichReinhard LüfteneggerChristian RechbergerMarkus SchofneggerRoman WalchWe propose a new hash function, Reinforced Concrete, which is the first general-purpose hash that is fast both for a zero-knowledge prover and in native x86 computations. It is suitable for a wide range of zero-knowledge proofs and protocols, from set membership to general-purpose verifiable computation. Being up to 15x faster than its predecessor, the Poseidon hash, Reinforced Concrete inherits security from traditional time-tested schemes such as AES, while taking its zero-knowledge performance from a novel and efficient decomposition of a prime field into compact buckets.
The new hash function is suitable for a wide range of applications such as privacy-preserving cryptocurrencies, verifiable encryption, protocols with state membership proofs, or verifiable computation. It may serve as a drop-in replacement for various prime-field hashes such as variants of MiMC, Poseidon, Pedersen hash, and others.2021-08-16T13:08:52+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/325FPGA Design Deobfuscation by Iterative LUT Modification at Bitstream Level2022-05-19T10:26:38+00:00Michail MoraitisElena DubrovaHardware obfuscation by redundancy addition is a well-known countermeasure against reverse engineering. For FPGA designs, such a technique can be implemented with a small overhead; however, its effectiveness is heavily dependent on the stealthiness of the redundant elements. Since there are powerful tools for combinational redundancy removal, opting for sequential redundancy is believed to result in stronger obfuscation. However, in this paper, we demonstrate that it is possible to identify sequential redundancy in obfuscated SRAM FPGA designs by ensuring the full controllability of each instantiated look-up table input via iterative bitstream modification. The presented algorithm works directly on the bitstream and does not require possession of a flattened netlist. The feasibility of our approach is verified on the example of an obfuscated SNOW 3G design implemented on a Xilinx 7-series FPGA.
2022-03-14T11:40:15+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/539Post Quantum Noise2022-05-19T13:59:58+00:00Yawning AngelBenjamin DowlingAndreas HülsingPeter SchwabeFlorian WeberWe introduce PQNoise, a post-quantum variant of the Noise framework. We demonstrate that it is possible to replace the Diffie-Hellman key exchanges in Noise with KEMs in a secure way. A challenge is the inability to combine key pairs of KEMs, which can be resolved by certain forms of randomness-hardening, for which we introduce a formal abstraction. We provide a generic recipe for turning classical Noise patterns into PQNoise patterns. We prove that the resulting PQNoise patterns achieve confidentiality and authenticity in the fACCE model. Moreover, we show that for those classical Noise patterns that have been conjectured or proven secure in the fACCE model, our matching PQNoise patterns eventually achieve the same security. Our security proof is generic and applies to any valid PQNoise pattern. This is made possible by another abstraction, called a hash-object, which hides the exact workings of how keying material is processed in an abstract stateful object that outputs pseudorandom keys under different corruption patterns. We also show that the hash chains used in Noise are a secure hash-object. Finally, we demonstrate the practicality of PQNoise by delivering benchmarks for several base patterns.2022-05-10T08:08:22+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/587Doubly Efficient Interactive Proofs over Infinite and Non-Commutative Rings2022-05-19T14:04:39+00:00Eduardo Soria-VazquezWe introduce the first proof system for layered arithmetic circuits over an arbitrary ring $R$ that is (possibly) non-commutative and (possibly) infinite, while only requiring black-box access to its arithmetic and a subset $A \subseteq R$.
Our construction only requires limited commutativity and regularity properties from $A$, similar to recent work on efficient information-theoretic multi-party computation over non-commutative rings by Escudero and Soria-Vazquez (CRYPTO 2021), while furthermore covering infinite rings.
We achieve our results through a generalization of GKR-style interactive proofs (Goldwasser, Kalai and Rothblum, Journal of the ACM, 2015). When $A$ is a subset of the center of $R$, generalizations of the sum-check protocol and other building blocks are not too problematic. The case when the elements of $A$ only commute with each other, on the other hand, introduces a series of challenges. In order to overcome those, we need to introduce a new definition of a polynomial ring over a non-commutative ring and the notion of left (and right) multi-linear extensions, modify the layer consistency equation, and adapt the sum-check protocol.
Despite these changes, our results are compatible with recent developments such as linear-time provers. Moreover, for certain rings our construction achieves provers that run in sublinear time in the circuit size. We obtain such results both for known cases, such as matrix and polynomial rings, and for new ones, such as some rings resulting from Clifford algebras. Besides efficiency improvements in computation and/or round complexity for several instantiations, the core conclusion of our results is that state-of-the-art doubly efficient interactive proofs do not require much algebraic structure. This enables exact rather than approximate computation over infinite rings, as well as agile proof systems, where the black-box choice of the underlying ring can easily be switched through the software life cycle.2022-05-17T06:45:46+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/546He-HTLC: Revisiting Incentives in HTLC2022-05-20T00:55:52+00:00Sarisht WadhwaJannis StoeterFan ZhangKartik NayakHashed Time-Locked Contracts (HTLCs) are a widely used primitive in blockchain systems. Unfortunately, HTLC is incentive-incompatible and is vulnerable to bribery attacks.
MAD-HTLC (Oakland'21) is an elegant solution aiming to address the incentive incompatibility of HTLC.
In this paper, we show that MAD-HTLC is also incentive-incompatible. The crux of the issue is that MAD-HTLC only considers passively rational miners. We argue that such a model fails to capture active rational behaviors. We demonstrate the importance of taking actively rational behaviors into consideration by showing three novel reverse-bribery attacks against MAD-HTLC that can be implemented using Trusted Execution Environments (TEEs) or zero-knowledge proofs (ZKPs). We further show that reverse bribery can be combined with original delaying attacks to render MAD-HTLC insecure regardless of the relationship between collateral and deposit.
Based on the lessons learned from our attacks, we devise a new smart contract specification, He-HTLC, which is lightweight and inert to incentive manipulation attacks. To the best of our knowledge, He-HTLC is the first specification to meet the HTLC requirements even in the presence of actively rational miners.2022-05-10T08:12:10+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/581Cryptanalysis of an Identity-Based Provable Data Possession Protocol with Compressed Cloud Storage2022-05-20T04:35:57+00:00Lidong HanGuangwu XuQi XieXiao TanChengliang TianThis letter addresses some security issues of an identity-based provable data possession protocol with compressed cloud storage (published in IEEE TIFS, doi:10.1109/TIFS.2022.3159152). Some serious flaws are identified and an attack on the protocol is designed. This attack is able to recover the ephemeral secret keys from two encrypted blocks with high probability and thereby reveal the original plaintext file completely. Moreover, an adversary can impersonate a data owner to outsource any file to the cloud in a malicious way. The main ingredients of the attack are some classical number-theoretic results.
2022-05-16T14:54:40+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/077Multiple Noisy Private Remote Source Observations for Secure Function Computation2022-05-20T08:58:10+00:00Onur GunluMatthieu BlochRafael F. SchaeferThe problem of reliable function computation is extended by imposing privacy, secrecy, and storage constraints on a remote source whose noisy measurements are observed by multiple parties. The main additions to the classic function computation problem are as follows: 1) privacy leakage to an eavesdropper is measured with respect to the remote source rather than the transmitting terminals' observed sequences; 2) the information leakage to a fusion center with respect to the remote source is considered as another privacy leakage metric; 3) two transmitting nodes' observations are used to compute a function. Inner and outer bounds on the rate regions are derived for lossless single-function computation with two transmitting nodes, which recover previous results in the literature, and simplified bounds are established for special cases that consider invertible functions.2022-01-20T16:01:18+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/078Secure Lossy Function Computation with Multiple Private Remote Source Observations2022-05-20T09:37:25+00:00Onur GunluMatthieu BlochRafael F. SchaeferWe consider a setting in which multiple noisy observations of a remote source are used by different nodes in the same network to compute a function of the noisy observations under joint secrecy, joint privacy, and individual storage constraints, as well as a distortion constraint on the computed function. Suppose that an eavesdropper has access to one of the noisy observations in addition to the public messages exchanged between legitimate nodes. This model extends previous models by 1) considering a remote source as the source of dependency between the correlated random variables observed at different nodes; 2) allowing the computed function to be a distorted version of the target function, which makes it possible to reduce the storage rate as compared to a reliable function computation scenario, in addition to reducing secrecy and privacy leakages; 3) introducing a privacy metric that measures the information leakage about the remote source to the fusion center, in addition to the classic privacy metric that measures the leakage to an eavesdropper; 4) considering two transmitting nodes, rather than one, to compute a function.
Single-letter inner and outer bounds are provided for the considered lossy function computation problem, and simplified bounds are established for two special cases, in which either the computed function is partially invertible, or the function is invertible and the measurement channel of the eavesdropper is physically degraded with respect to the measurement channel of the fusion center.2022-01-20T16:01:39+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/474Side-Channel Analysis of Lattice-Based Post-Quantum Cryptography: Exploiting Polynomial Multiplication2022-05-20T10:00:59+00:00Catinca MujdeiArthur BeckersJose Maria Bermudo MeraAngshuman KarmakarLennert WoutersIngrid VerbauwhedePolynomial multiplication algorithms such as Toom-Cook and the Number Theoretic Transform are fundamental building blocks for lattice-based post-quantum cryptography. In this work, we present correlation power analysis-based side-channel analysis methodologies targeting every polynomial multiplication strategy for all lattice-based post-quantum key encapsulation mechanisms in the final round of the NIST post-quantum standardization procedure. We perform practical experiments on real side-channel measurements, demonstrating that our method allows us to extract the secret key from all lattice-based post-quantum key encapsulation mechanisms. Our analysis demonstrates that the polynomial multiplication strategy used can significantly impact the time complexity of the attack.2022-04-22T13:00:16+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/233Variational quantum solutions to the Shortest Vector Problem2022-05-20T12:54:34+00:00Martin R. AlbrechtMiloš ProkopYixin ShenPetros WalldenA fundamental computational problem is to find a shortest non-zero vector in Euclidean lattices, a problem known as the Shortest Vector Problem (SVP). This problem is believed to be hard even on quantum computers and thus plays a pivotal role in post-quantum cryptography. In this work we explore how (efficiently) Noisy Intermediate Scale Quantum (NISQ) devices may be used to solve SVP. Specifically, we map the problem to that of finding the ground state of a suitable Hamiltonian. In particular, (i) we establish new bounds for lattice enumeration, which allow us to obtain new bounds (resp. estimates) on the number of qubits required per dimension for any lattices (resp. random q-ary lattices) to solve SVP; (ii) we exclude the zero vector from the optimization space by proposing either (a) a different classical optimisation loop or (b) a new mapping to the Hamiltonian. These improvements allow us to solve SVP in dimension up to 28 in a quantum emulation, significantly more than what was previously achieved, even for special cases.
Finally, we extrapolate the size of NISQ devices required to solve lattice instances that are hard even for the best classical algorithms, and find that such instances can be tackled with ≈ 10^3 noisy qubits.2022-02-25T08:08:10+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/564SMILE: Set Membership from Ideal Lattices with Applications to Ring Signatures and Confidential Transactions2022-05-20T13:16:44+00:00Vadim LyubashevskyNgoc Khanh NguyenGregor SeilerIn a set membership proof, the public information consists of a set of elements and a commitment.
The prover then produces a zero-knowledge proof showing that the commitment is indeed to some element from the set. This primitive is closely related to concepts like ring signatures and ``one-out-of-many'' proofs that underlie many anonymity and privacy protocols. The main result of this work is a new succinct lattice-based set membership proof whose size is logarithmic in the size of the set.
We also give a transformation of our set membership proof into a ring signature scheme. The ring signature size is also logarithmic in the size of the public key set, with size $16$ KB for a set of $2^5$ elements and $22$ KB for a set of size $2^{25}$. At an approximately $128$-bit security level, these outputs are between 1.5X and 7X smaller than the current state-of-the-art succinct ring signatures of Beullens et al. (Asiacrypt 2020) and Esgin et al. (CCS 2019).
We then show that our ring signature, combined with a few other techniques and optimizations, can be turned into a fairly efficient Monero-like confidential transaction system based on the MatRiCT framework of Esgin et al. (CCS 2019). With our new techniques, we are able to reduce the transaction proof size by factors of about 4X - 10X over the aforementioned work. For example, a transaction with two inputs and two outputs, where each input is hidden among $2^{15}$ other accounts, requires approximately $30$ KB in our protocol.2021-05-03T20:15:01+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/1443Platypus: A Central Bank Digital Currency with Unlinkable Transactions and Privacy Preserving Regulation2022-05-21T11:09:51+00:00Karl WüstKari KostiainenNoah DeliusSrdjan CapkunDue to the popularity of blockchain-based cryptocurrencies, the increasing digitalization of payments, and the constantly reducing role of cash in society, central banks have shown an increased interest in deploying central bank digital currencies (CBDCs) that could serve as a digital cash-equivalent. While most recent research on CBDCs focuses on blockchain technology, it is not clear that this choice of technology provides the optimal solution. In particular, the centralized trust model of a CBDC offers opportunities for different designs.
In this paper, we depart from blockchain designs and instead build on ideas from traditional e-cash schemes. We propose a new style of building digital currencies that combines the transaction processing model of e-cash with an account-based fund management model. We argue that such a style of building digital currencies is especially well-suited to CBDCs.
We also design the first such digital currency system, called Platypus, that provides strong privacy, high scalability, and expressive but simple regulation, which are all critical features for a CBDC. Platypus achieves these properties by adapting techniques similar to those used in anonymous blockchain cryptocurrencies like Zcash to fit our account model and applying them to the e-cash context.Due to the popularity of blockchain-based cryptocurrencies, the increasing digitalization of payments, and the constantly reducing role of cash in society, central banks have shown an increased interest in deploying central bank digital currencies (CBDCs) that could serve as a digital cash-equivalent. While most recent research on CBDCs focuses on blockchain technology, it is not clear that this choice of technology provides the optimal solution. In particular, the centralized trust model of a CBDC offers opportunities for different designs.
In this paper, we depart from blockchain designs and instead build on ideas from traditional e-cash schemes. We propose a new style of building digital currencies that combines the transaction processing model of e-cash with an account-based fund management model. We argue that such a style of building digital currencies is especially well-suited to CBDCs.
We also design the first such digital currency system, called Platypus, that provides strong privacy, high scalability, and expressive but simple regulation, which are all critical features for a CBDC. Platypus achieves these properties by adapting techniques similar to those used in anonymous blockchain cryptocurrencies like Zcash to fit our account model and applying them to the e-cash context.2021-10-27T19:34:58+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/338Lattice-Based Proof of Shuffle and Applications to Electronic Voting2022-05-22T10:07:47+00:00Diego F. AranhaCarsten BaumKristian GjøsteenTjerand SildeThor TungeA verifiable shuffle of known values is a method for proving that a collection of commitments opens to a given collection of known messages, without revealing a correspondence between commitments and messages. We propose the first practical verifiable shuffle of known values for lattice-based commitments.
Shuffles of known values have many applications in cryptography, and in particular in electronic voting. We use our verifiable shuffle of known values to build a practical lattice-based cryptographic voting system that supports complex ballots. Our scheme is also the first construction from candidate post-quantum secure assumptions to defend against compromise of the voter's computer using return codes.
We implemented our protocol and present benchmarks of its computational runtime. The size of the verifiable shuffle is $22 \tau$ KB and takes time $33 \tau$ ms for $\tau$ voters. This is around $5$ times faster and $40$ % smaller per vote than the lattice-basedvoting scheme by del Pino et al. (ACM CCS 2017), which can only handle yes/no-elections.A verifiable shuffle of known values is a method for proving that a collection of commitments opens to a given collection of known messages, without revealing a correspondence between commitments and messages. We propose the first practical verifiable shuffle of known values for lattice-based commitments.
Shuffles of known values have many applications in cryptography, and in particular in electronic voting. We use our verifiable shuffle of known values to build a practical lattice-based cryptographic voting system that supports complex ballots. Our scheme is also the first construction from candidate post-quantum secure assumptions to defend against compromise of the voter's computer using return codes.
We implemented our protocol and present benchmarks of its computational runtime. The size of the verifiable shuffle is $22 \tau$ KB and takes time $33 \tau$ ms for $\tau$ voters. This is around $5$ times faster and $40$ % smaller per vote than the lattice-basedvoting scheme by del Pino et al. (ACM CCS 2017), which can only handle yes/no-elections.2021-03-17T14:43:32+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/203Anonymous Tokens with Public Metadata and Applications to Private Contact Tracing2022-05-22T10:13:16+00:00Tjerand SildeMartin StrandAnonymous single-use tokens have seen recent applications in private Internet browsing and anonymous statistics collection. We develop new schemes in order to include public metadata such as expiration dates for tokens. This inclusion enables planned mass revocation of tokens without distributing new keys, which for natural instantiations can give 77 % and 90 % amortized traffic savings compared to Privacy Pass (Davidson et al., 2018) and DIT: De-Identified Authenticated Telemetry at Scale (Huang et al., 2021), respectively. By transforming the public key, we are able to append public metadata to several existing protocols essentially without increasing computation or communication.
Additional contributions include expanded definitions, a more complete framework for anonymous single-use tokens and a description of how anonymous tokens can improve the privacy in dp3t-like digital contact tracing applications. We also extend the protocol to create efficient and conceptually simple tokens with both public and private metadata, and tokens with public metadata and public verifiability from pairings.Anonymous single-use tokens have seen recent applications in private Internet browsing and anonymous statistics collection. We develop new schemes in order to include public metadata such as expiration dates for tokens. This inclusion enables planned mass revocation of tokens without distributing new keys, which for natural instantiations can give 77 % and 90 % amortized traffic savings compared to Privacy Pass (Davidson et al., 2018) and DIT: De-Identified Authenticated Telemetry at Scale (Huang et al., 2021), respectively. By transforming the public key, we are able to append public metadata to several existing protocols essentially without increasing computation or communication.
Additional contributions include expanded definitions, a more complete framework for anonymous single-use tokens and a description of how anonymous tokens can improve the privacy in dp3t-like digital contact tracing applications. We also extend the protocol to create efficient and conceptually simple tokens with both public and private metadata, and tokens with public metadata and public verifiability from pairings.2021-03-01T15:56:59+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/1693Verifiable Decryption for BGV2022-05-22T10:20:56+00:00Tjerand SildeIn this work we present a direct construction for verifiable decryption for the BGV encryption scheme by combining existing zero-knowledge proofs for linear relations and bounded values. This is one of the first constructions of verifiable decryption protocols for lattice-based cryptography, and we give a protocol that is simpler and at least as efficient as the state of the art when amortizing over many ciphertexts.
To prove its practicality we provide concrete parameters, resulting in proof size of less than $44 \tau$ KB for $\tau$ ciphertexts with message space $2048$ bits. Furthermore, we provide an open source implementation showing that the amortized cost of the verifiable decryption protocol is only $76$ ms per message when batching over $\tau = 2048$ ciphertexts.In this work we present a direct construction for verifiable decryption for the BGV encryption scheme by combining existing zero-knowledge proofs for linear relations and bounded values. This is one of the first constructions of verifiable decryption protocols for lattice-based cryptography, and we give a protocol that is simpler and at least as efficient as the state of the art when amortizing over many ciphertexts.
To prove its practicality we provide concrete parameters, resulting in proof size of less than $44 \tau$ KB for $\tau$ ciphertexts with message space $2048$ bits. Furthermore, we provide an open source implementation showing that the amortized cost of the verifiable decryption protocol is only $76$ ms per message when batching over $\tau = 2048$ ciphertexts.2021-12-30T17:11:06+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/558Verifiable Decryption in the Head2022-05-22T10:34:51+00:00Kristian GjøsteenThomas HainesJohannes MüllerPeter RønneTjerand SildeIn this work we present a new approach to verifiable decryption which converts a 2-party passively secure distributed decryption protocol into a 1-party proof of correct decryption. To introduce our idea, we present a toy example for an ElGamal distributed decryption protocol that we also give a machine checked proof of, in addition to applying our method to lattices. This leads to an efficient and simple verifiable decryption scheme for lattice-based cryptography, especially for large sets of ciphertexts; it has small size and lightweight computations as we reduce the need of zero-knowledge proofs for each ciphertext. We believe the flexibility of the general technique is interesting and provides attractive trade-offs between complexity and security, in particular for the interactive variant with smaller soundness.
Finally, the protocol requires only very simple operations, making it easy to correctly and securely implement in practice. We suggest concrete parameters for our protocol and give a proof of concept implementation, showing that it is highly practical.In this work we present a new approach to verifiable decryption which converts a 2-party passively secure distributed decryption protocol into a 1-party proof of correct decryption. To introduce our idea, we present a toy example for an ElGamal distributed decryption protocol that we also give a machine checked proof of, in addition to applying our method to lattices. This leads to an efficient and simple verifiable decryption scheme for lattice-based cryptography, especially for large sets of ciphertexts; it has small size and lightweight computations as we reduce the need of zero-knowledge proofs for each ciphertext. We believe the flexibility of the general technique is interesting and provides attractive trade-offs between complexity and security, in particular for the interactive variant with smaller soundness.
Finally, the protocol requires only very simple operations, making it easy to correctly and securely implement in practice. We suggest concrete parameters for our protocol and give a proof of concept implementation, showing that it is highly practical.2021-05-03T20:11:50+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/422Verifiable Mix-Nets and Distributed Decryption for Voting from Lattice-Based Assumptions2022-05-22T10:40:11+00:00Diego F. AranhaCarsten BaumKristian GjøsteenTjerand SildeCryptographic voting protocols have recently seen much interest from practitioners due to their (planned) use in countries such as Estonia, Switzerland and Australia. Many organizations also use Helios for elections. While many efficient protocols exist from discrete log-type assumptions, the situation is less clear for post-quantum alternatives such as lattices. This is because previous voting protocols do not carry over easily due to issues such as noise growth and approximate relations. In particular, this is a problem for tested designs such as verifiable mixing and decryption of ballot ciphertexts.
In this work, we make progress in this direction. We propose a new verifiable secret shuffle for BGV ciphertexts as well as a compatible verifiable distributed decryption protocol. The shuffle is based on an extension of a shuffle of commitments to known values which is combined with an amortized proof of correct re-randomization. The verifiable distributed decryption protocol uses noise drowning for BGV decryption, proving correctness of decryption steps in zero-knowledge.
We give concrete parameters for our system, estimate the size of each component and provide an implementation of all sub-protocols. Together, the shuffle and the decryption protocol are suitable for use in real-world cryptographic voting schemes, which we demonstrate with a prototype voting protocol design.Cryptographic voting protocols have recently seen much interest from practitioners due to their (planned) use in countries such as Estonia, Switzerland and Australia. Many organizations also use Helios for elections. While many efficient protocols exist from discrete log-type assumptions, the situation is less clear for post-quantum alternatives such as lattices. This is because previous voting protocols do not carry over easily due to issues such as noise growth and approximate relations. In particular, this is a problem for tested designs such as verifiable mixing and decryption of ballot ciphertexts.
In this work, we make progress in this direction. We propose a new verifiable secret shuffle for BGV ciphertexts as well as a compatible verifiable distributed decryption protocol. The shuffle is based on an extension of a shuffle of commitments to known values which is combined with an amortized proof of correct re-randomization. The verifiable distributed decryption protocol uses noise drowning for BGV decryption, proving correctness of decryption steps in zero-knowledge.
We give concrete parameters for our system, estimate the size of each component and provide an implementation of all sub-protocols. Together, the shuffle and the decryption protocol are suitable for use in real-world cryptographic voting schemes, which we demonstrate with a prototype voting protocol design.2022-04-06T13:01:01+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/101Lattice-Based Linkable Ring Signature in the Standard Model2022-05-22T15:08:30+00:00Mingxing HuZhen LiuRing signatures enable a user to sign messages on behalf of an arbitrary set of users, called the ring. The anonymity of the scheme guarantees that the signature does not reveal which member of the ring signed the message. The notion of linkable ring signatures (LRS) is an extension of the concept of ring signatures such that there is a public way of determining whether two signatures have been produced by the same signer. Lattice-based LRS is an important and active research line since lattice-based cryptography has attracted more attention due to its distinctive features, especially the quantum-resistant. However, all the existing lattice-based LRS relied on random oracle heuristics, i.e., no lattice-based LRS in the standard model has been introduced so far.
In this paper, we present a lattice-based LRS scheme in the standard model. Toward our goal, we present new lattice basis extending algorithms which are the key ingredients in our construction, that may be of independent interest.Ring signatures enable a user to sign messages on behalf of an arbitrary set of users, called the ring. The anonymity of the scheme guarantees that the signature does not reveal which member of the ring signed the message. The notion of linkable ring signatures (LRS) is an extension of the concept of ring signatures such that there is a public way of determining whether two signatures have been produced by the same signer. Lattice-based LRS is an important and active research line since lattice-based cryptography has attracted more attention due to its distinctive features, especially the quantum-resistant. However, all the existing lattice-based LRS relied on random oracle heuristics, i.e., no lattice-based LRS in the standard model has been introduced so far.
In this paper, we present a lattice-based LRS scheme in the standard model. Toward our goal, we present new lattice basis extending algorithms which are the key ingredients in our construction, that may be of independent interest.2022-01-31T07:46:38+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/177The Power of the Differentially Oblivious Shuffle in Distributed Privacy Mechanisms2022-05-22T15:58:00+00:00Mingxun ZhouElaine ShiThe shuffle model has been extensively investigated in the distributed differential privacy (DP) literature. For a class of useful computational tasks, the shuffle model allows us to achieve privacy-utility tradeoff similar to those in the central model, while shifting the trust from a central data curator to a ``trusted shuffle'' which can be implemented through either trusted hardware or cryptography. Very recently, several works explored cryptographic instantiations of
a new type of shuffle with relaxed security, called {\it differentially oblivious (DO) shuffles}. These works demonstrate that by relaxing the shuffler's security from simulation-style secrecy to differential privacy, we can achieve asymptotical efficiency improvements. A natural question arises, can we replace the shuffler in distributed DP mechanisms with a DO-shuffle while retaining a similar privacy-utility tradeoff?
In this paper, we prove an optimal privacy amplification theorem by composing any locally differentially private (LDP) mechanism with a DO-shuffler, achieving parameters that tightly match the shuffle model. Moreover, we explore multi-message protocols in the DO-shuffle model, and construct mechanisms for the real summation and histograph problems. Our error bounds approximate the best known results in the multi-message shuffle-model up to sub-logarithmic factors. Our results also suggest that just like in the shuffle model, allowing each client to send multiple messages is fundamentally more powerful than restricting to a single message. As an application, we derive the result of using repeated DO-shuffling for privacy-preserving time-series data aggregation.The shuffle model has been extensively investigated in the distributed differential privacy (DP) literature. For a class of useful computational tasks, the shuffle model allows us to achieve privacy-utility tradeoff similar to those in the central model, while shifting the trust from a central data curator to a ``trusted shuffle'' which can be implemented through either trusted hardware or cryptography. Very recently, several works explored cryptographic instantiations of
a new type of shuffle with relaxed security, called {\it differentially oblivious (DO) shuffles}. These works demonstrate that by relaxing the shuffler's security from simulation-style secrecy to differential privacy, we can achieve asymptotical efficiency improvements. A natural question arises, can we replace the shuffler in distributed DP mechanisms with a DO-shuffle while retaining a similar privacy-utility tradeoff?
In this paper, we prove an optimal privacy amplification theorem by composing any locally differentially private (LDP) mechanism with a DO-shuffler, achieving parameters that tightly match the shuffle model. Moreover, we explore multi-message protocols in the DO-shuffle model, and construct mechanisms for the real summation and histograph problems. Our error bounds approximate the best known results in the multi-message shuffle-model up to sub-logarithmic factors. Our results also suggest that just like in the shuffle model, allowing each client to send multiple messages is fundamentally more powerful than restricting to a single message. As an application, we derive the result of using repeated DO-shuffling for privacy-preserving time-series data aggregation.2022-02-20T20:12:45+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/1688Low-Complexity Deep Convolutional Neural Networks on Fully Homomorphic Encryption Using Multiplexed Parallel Convolutions2022-05-23T00:29:29+00:00Eunsang LeeJoon-Woo LeeJunghyun LeeYoung-Sik KimYongjune KimJong-Seon NoWoosuk ChoiRecently, the standard ResNet-20 network was successfully implemented on residue number system variant Cheon-Kim-Kim-Song (RNS-CKKS) scheme using bootstrapping, but the implementation lacks practicality due to high latency and low security level. To improve the performance, we first minimize total bootstrapping runtime using multiplexed parallel convolution that collects sparse output data for multiple channels compactly. We also propose the \emph{imaginary-removing bootstrapping} to prevent the deep neural networks from catastrophic divergence during approximate ReLU operations. In addition, we optimize level consumptions and use lighter and tighter parameters. 
Simulation results show that we have 4.67$\times$ lower inference latency and 134$\times$ less amortized runtime (runtime per image) for ResNet-20 compared to the state-of-the-art previous work, and we achieve standard 128-bit security. Furthermore, we successfully implement ResNet-110 with high accuracy on the RNS-CKKS scheme for the first time.Recently, the standard ResNet-20 network was successfully implemented on residue number system variant Cheon-Kim-Kim-Song (RNS-CKKS) scheme using bootstrapping, but the implementation lacks practicality due to high latency and low security level. To improve the performance, we first minimize total bootstrapping runtime using multiplexed parallel convolution that collects sparse output data for multiple channels compactly. We also propose the \emph{imaginary-removing bootstrapping} to prevent the deep neural networks from catastrophic divergence during approximate ReLU operations. In addition, we optimize level consumptions and use lighter and tighter parameters. Simulation results show that we have 4.67$\times$ lower inference latency and 134$\times$ less amortized runtime (runtime per image) for ResNet-20 compared to the state-of-the-art previous work, and we achieve standard 128-bit security. Furthermore, we successfully implement ResNet-110 with high accuracy on the RNS-CKKS scheme for the first time.2021-12-30T17:08:37+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/604Algorithm Substitution Attacks against Receivers2022-05-23T08:17:43+00:00Marcel ArmourBertram PoetteringThis work describes a class of Algorithm Substitution Attack (ASA) generically targeting the receiver of a communication between two parties. Our work provides a unified framework that applies to any scheme where a secret key is held by the receiver; in particular, message authentication schemes (MACs), authenticated encryption (AEAD) and public key encryption (PKE). 
Our unified framework brings together prior work targeting MAC schemes and AEAD schemes; we extend prior work by showing that public key encryption may also be targeted.
ASAs were initially introduced by Bellare, Paterson and Rogaway in light of revelations concerning mass surveillance, as a novel attack class against the confidentiality of encryption schemes. Such an attack replaces one or more of the regular scheme algorithms with a subverted version that aims to reveal information to an adversary (engaged in mass surveillance), while remaining undetected by users. Previous work looking at ASAs against encryption schemes can be divided into two groups. ASAs against PKE schemes target key generation by creating subverted public keys that allow an adversary to recover the secret key. ASAs against symmetric encryption target the encryption algorithm and leak information through a subliminal channel in the ciphertexts. We present a new class of attack that targets the decryption algorithm of an encryption scheme for symmetric encryption and public key encryption, or the verification algorithm for an authentication scheme. We present a generic framework for subverting a cryptographic scheme between a sender and receiver, and show how a decryption oracle allows a subverter to create a subliminal channel which can be used to leak secret keys. We then show that the generic framework can be applied to authenticated encryption with associated data, message authentication schemes, public key encryption and KEM/DEM constructions.
We consider practical considerations and specific conditions that apply for particular schemes, strengthening the generic approach. Furthermore, we show how the hybrid subversion of key generation and decryption algorithms can be used to amplify the effectiveness of our decryption attack. We argue that this attack represents an attractive opportunity for a mass surveillance adversary. Our work serves to refine the ASA model and contributes to a series of papers that raises awareness and understanding about what is possible with ASAs.This work describes a class of Algorithm Substitution Attack (ASA) generically targeting the receiver of a communication between two parties. Our work provides a unified framework that applies to any scheme where a secret key is held by the receiver; in particular, message authentication schemes (MACs), authenticated encryption (AEAD) and public key encryption (PKE). Our unified framework brings together prior work targeting MAC schemes and AEAD schemes; we extend prior work by showing that public key encryption may also be targeted.
ASAs were initially introduced by Bellare, Paterson and Rogaway in light of revelations concerning mass surveillance, as a novel attack class against the confidentiality of encryption schemes. Such an attack replaces one or more of the regular scheme algorithms with a subverted version that aims to reveal information to an adversary (engaged in mass surveillance), while remaining undetected by users. Previous work looking at ASAs against encryption schemes can be divided into two groups. ASAs against PKE schemes target key generation by creating subverted public keys that allow an adversary to recover the secret key. ASAs against symmetric encryption target the encryption algorithm and leak information through a subliminal channel in the ciphertexts. We present a new class of attack that targets the decryption algorithm of an encryption scheme for symmetric encryption and public key encryption, or the verification algorithm for an authentication scheme. We present a generic framework for subverting a cryptographic scheme between a sender and receiver, and show how a decryption oracle allows a subverter to create a subliminal channel which can be used to leak secret keys. We then show that the generic framework can be applied to authenticated encryption with associated data, message authentication schemes, public key encryption and KEM/DEM constructions.
We consider practical considerations and specific conditions that apply for particular schemes, strengthening the generic approach. Furthermore, we show how the hybrid subversion of key generation and decryption algorithms can be used to amplify the effectiveness of our decryption attack. We argue that this attack represents an attractive opportunity for a mass surveillance adversary. Our work serves to refine the ASA model and contributes to a series of papers that raises awareness and understanding about what is possible with ASAs.2022-05-23T08:17:43+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/605Weighted Attribute-Based Encryption with Parallelized Decryption2022-05-23T08:18:11+00:00Alexandru IonitaUnlike conventional ABE systems, which support Boolean attributes (with only 2 states: "1" and "0", or "Present" and "Absent"), weighted Attribute-based encryption schemes also support numerical values attached to attributes, and each terminal node of the access structure contains a threshold for a minimum weight. We propose a weighted ABE system, with access policy of logarithmic expansion, by dividing each weighted attribute in sub-attributes. On top of that, we show that the decryption can be parallelized, leading to a notable improvement in running time, compared to the serial version.Unlike conventional ABE systems, which support Boolean attributes (with only 2 states: "1" and "0", or "Present" and "Absent"), weighted Attribute-based encryption schemes also support numerical values attached to attributes, and each terminal node of the access structure contains a threshold for a minimum weight. We propose a weighted ABE system, with access policy of logarithmic expansion, by dividing each weighted attribute in sub-attributes. 
On top of that, we show that the decryption can be parallelized, leading to a notable improvement in running time, compared to the serial version.2022-05-23T08:18:11+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/606Security Against Honorific Adversaries: Efficient MPC with Server-aided Public Verifiability2022-05-23T08:19:46+00:00Li DuanYufan JiangYong LiJörn Müller-QuadeAndy RuppSecure multiparty computation (MPC) allows distrustful parties to jointly compute some functions while keeping their private secrets unrevealed. MPC adversaries are often categorized as semi-honest and malicious, depending on whether they follow the protocol specifications or not. Covert security was first introduced by Aumann and Lindell in 2007, which models a third type of active adversaries who cheat but can be caught with a probability. However, this probability is predefined externally, and the misbehavior detection must be made by other honest participants with cut-and-choose in current constructions. In this paper, we propose a new security notion called security against honorific adversaries, who may cheat during the protocol execution but are extremely unwilling to be punished. Intuitively, honorific adversaries can cheat successfully, but decisive evidence of misbehavior will be left to honest parties with a probability close to one. By introducing an independent but not trusted auditor to the MPC ideal functionality in the universal composability framework (UC), we avoid heavy cryptographic machinery in detection and complicated discussion about the probability of being caught. 
With this new notion, we construct new provably secure protocols without cut-and-choose for garbled circuits that are much more efficient than those in the covert and malicious models, with only slightly more overhead than passively secure protocols.2022-05-23T08:19:46+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/607Noise*: A Library of Verified High-Performance Secure Channel Protocol Implementations (Long Version)2022-05-23T08:20:12+00:00Son HoJonathan ProtzenkoAbhishek BichhawatKarthikeyan BhargavanThe Noise protocol framework defines a succinct notation and execution
framework for a large class of 59+ secure channel protocols, some of
which are used in popular applications such as WhatsApp and WireGuard.
We present a verified implementation of a Noise
protocol compiler that takes any Noise protocol, and produces
an optimized C implementation with extensive correctness and security
guarantees. To this end, we formalize the complete Noise stack in
F*, from the low-level cryptographic library to a high-level API.
We also write our compiler in F*, prove that it meets our formal
specification once and for all, and then specialize it on demand for
any given Noise protocol, relying on a novel technique called
hybrid embedding. We thus establish functional correctness,
memory safety and a form of side-channel resistance for the generated
C code for each Noise protocol. We propagate these guarantees to the
high-level API, using defensive dynamic checks to prevent incorrect
uses of the protocol. Finally, we formally state and prove the
security of our Noise code, by building on a symbolic model of
cryptography in F*, and formally link high-level API
security goals stated in terms of security levels to
low-level cryptographic guarantees.
Ours are the first comprehensive verification results for a
protocol compiler that targets C code and the first verified
implementations of any Noise protocol. We evaluate our framework by
generating implementations for all 59 Noise protocols and by comparing
the size, performance, and security of our verified code against other
(unverified) implementations and prior security analyses of Noise.2022-05-23T08:20:12+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/608Practical Provably Secure Flooding for Blockchains2022-05-23T08:20:37+00:00Chen-Da Liu-ZhangChristian MattUeli MaurerGuilherme RitoSøren Eller ThomsenIn recent years, permissionless blockchains have received a lot of attention from both industry and academia, and substantial effort has been spent to develop consensus protocols that are secure under the assumption that less than half (or a third) of a given resource (e.g., stake or computing power) is controlled by corrupted parties. The security proofs of these consensus protocols usually assume the availability of a network functionality guaranteeing that a block sent by an honest party is received by all honest parties within some bounded time. To obtain an overall protocol that is secure under the same corruption assumption, it is therefore necessary to combine the consensus protocol with a network protocol that achieves this property under that assumption. In practice, however, the underlying network is typically implemented by flooding protocols that are not proven to be secure in the setting where a fraction of the considered total weight can be corrupted. This has led to many so-called eclipse attacks on existing protocols and to tailor-made fixes against specific attacks.
To close this apparent gap, we propose a flooding protocol that provably delivers sent messages to all honest parties after a logarithmic number of steps. We prove security in the setting where all parties are publicly assigned a positive weight and the adversary can corrupt parties accumulating up to a constant fraction of the total weight. This can directly be used in the proof-of-stake setting, but is not limited to it. To prove the security of our protocol, we combine known results about the diameter of Erdős–Rényi graphs with reductions between different types of random graphs. We further show that the efficiency of our protocol is asymptotically optimal.
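The logarithmic delivery behavior claimed above can be illustrated with a toy push-based flooding simulation. This is our own illustrative model, not the paper's protocol: the function name and parameters are ours, peers are sampled uniformly rather than by weight, and corruptions are ignored.

```python
import random

def flood(n=200, fanout=5, seed=1, max_rounds=50):
    """Toy push-based flooding: every informed party re-sends the
    message to `fanout` uniformly random peers each round.  Purely
    illustrative -- the paper's protocol selects neighbours according
    to the parties' weights and tolerates corrupted parties."""
    rng = random.Random(seed)
    informed = {0}  # party 0 is the original sender
    rounds = 0
    while len(informed) < n and rounds < max_rounds:
        # Snapshot the number of current senders for this round.
        for _ in range(len(informed)):
            informed |= set(rng.sample(range(n), fanout))
        rounds += 1
    return len(informed), rounds

delivered, rounds = flood()
```

Because the informed set roughly multiplies by the fanout each round, all parties are reached after a number of rounds logarithmic in `n`, matching the qualitative claim of the abstract.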
The practicality of our protocol is supported by extensive simulations for different numbers of parties, weight distributions, and corruption strategies. The simulations confirm our theoretical results and show that messages are delivered quickly regardless of the weight distribution, whereas protocols that are oblivious of the parties' weights completely fail if the weights are unevenly distributed. Furthermore, the average message complexity per party of our protocol is within a small constant factor of that of such a weight-oblivious protocol. Hence, security in a weighted setting essentially comes for free with our techniques.2022-05-23T08:20:37+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/609Optimal Single-Server Private Information Retrieval2022-05-23T08:20:59+00:00Mingxun ZhouWei-Kai LinYiannis TselekounisElaine Shi (random author ordering)We construct a single-server
pre-processing Private Information Retrieval (PIR) scheme with optimal bandwidth and server computation (up to poly-logarithmic factors), assuming hardness of the Learning With Errors (LWE) problem. Our scheme achieves amortized $\widetilde{O}_{\lambda}(\sqrt{n})$ server and client computation and $\widetilde{O}_\lambda(1)$ bandwidth per query, completes in a single roundtrip, and requires $\widetilde{O}_\lambda(\sqrt{n})$ client storage. In particular, we achieve a significant reduction in bandwidth over the state-of-the-art scheme by Corrigan-Gibbs, Henzinger, and Kogan (Eurocrypt'22): their scheme requires as much as $\widetilde{O}_{\lambda}(\sqrt{n})$ bandwidth per query, with comparable
computational and storage overhead as ours.2022-05-23T08:20:59+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/610On the Differential Spectrum of a Differentially $3$-Uniform Power Function2022-05-23T08:21:26+00:00Tingting PangNian LiXiangyong ZengIn this paper, we investigate the cardinality, denoted by $(j_1,j_2,j_3,j_4)_2$, of the intersection of $(\mathcal{C}^{(2)}_{j_1}-1)\cap(\mathcal{C}^{(2)}_{j_2}-2)\cap(\mathcal{C}^{(2)}_{j_3}-3)
\cap(\mathcal{C}^{(2)}_{j_4}-4)$ for $j_1,j_2,j_3,j_4\in\{0,1\}$, where $\mathcal{C}^{(2)}_0, \mathcal{C}^{(2)}_1$ are the cyclotomic classes of order two over the finite field $\mathbb{F}_{p^n}$, $p$ is an odd prime and $n$ is a positive integer. By making the most of the results on cyclotomic classes of orders two and four, as well as the cardinality of the intersection
$(\mathcal{C}^{(2)}_{i_1}-1)\cap(\mathcal{C}^{(2)}_{i_2}-2)\cap(\mathcal{C}^{(2)}_{i_3}-3)$, we compute the values of $(j_1,j_2,j_3,j_4)_2$ in the case of $p=5$, where $i_1,i_2,i_3\in\{0,1\}$. As a consequence, the power function $x^{\frac{5^n-1}{2}+2}$ over $\mathbb{F}_{5^n}$ is shown to be differentially $3$-uniform and its differential spectrum is also completely determined.2022-05-23T08:21:26+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/611Further Cryptanalysis of a Type of RSA Variants2022-05-23T08:22:49+00:00Gongyu ShiGeng WangDawu GuTo enhance the security or the efficiency of the standard RSA cryptosystem, several variants have been proposed based on elliptic curves, Gaussian integers or Lucas sequences. A typical type of these variants, which we call Type-A variants, has the modified Euler totient function $\psi(N)=(p^2-1)(q^2-1)$. In 2018, based on the cubic Pell equation, Murru and Saettone presented a new RSA-like cryptosystem belonging to another type of RSA variants, which we call Type-B variants, since their scheme has $\psi(N)=(p^2+p+1)(q^2+q+1)$. For RSA-like cryptosystems, four key-related attacks have been widely analyzed, namely the small private key attack, the multiple private keys attack, the partial key exposure attack and the small prime difference attack. These attacks are well studied on both standard RSA and Type-A variants. Recently, the small private key attack on Type-B variants has also been analyzed. In this paper, we make further cryptanalysis of Type-B variants; that is, we propose the first theoretical results for the multiple private keys attack, the partial key exposure attack and the small prime difference attack on Type-B variants, and the validity of our attacks is verified by experiments. 
Our results show that for all three attacks, Type-B variants are less secure than standard RSA.2022-05-23T08:22:49+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/612Cryptanalysis of Reduced Round SPEEDY2022-05-23T08:23:19+00:00Raghvendra RohitSantanu SarkarSPEEDY is a family of ultra-low-latency block ciphers proposed by Leander, Moos, Moradi and Rasoolzadeh at TCHES 2021. Although the designers gave some differential/linear distinguishers for reduced rounds, a concrete cryptanalysis considering key-recovery attacks on SPEEDY was completely missing. 
The latter is crucial to understanding the security margin of designs like SPEEDY, which typically use a low number of rounds to achieve low latency. In this work, we present the first third-party cryptanalysis of SPEEDY-$r$-192, where $r \in \{5, 6, 7\}$ is the number of rounds and 192 is the block and key size in bits. We identify cube distinguishers for 2 rounds with data complexities $2^{14}$ and $2^{13}$, while the differential/linear distinguishers provided by the designers have a complexity of $2^{39}$. Notably, we show that there are several such cube distinguishers, and we therefore provide a generic description of them. We also investigate the structural properties of 13-dimensional cubes and give experimental evidence that the partial algebraic normal form of certain state bits after two rounds is always the same. Next, we utilize the 2-round distinguishers to mount a key-recovery attack on 3-round SPEEDY.
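The cube-testing principle behind such distinguishers can be illustrated with a toy example. The function below is our own construction, not SPEEDY's round function: summing a low-degree Boolean function over a cube that covers its highest-degree monomial yields a constant superpoly, independent of the non-cube bits.

```python
from itertools import product

def cube_sum(f, cube_vars, fixed=0):
    """XOR of f over all assignments of the cube variables, with the
    remaining input bits set by `fixed` (toy cube testing)."""
    acc = 0
    for bits in product([0, 1], repeat=len(cube_vars)):
        x = fixed
        for v, b in zip(cube_vars, bits):
            x |= b << v
        acc ^= f(x)
    return acc

def f(x):
    # Toy "cipher output bit" of algebraic degree 3 in 5 input bits.
    b = [(x >> i) & 1 for i in range(5)]
    return (b[0] & b[1] & b[2]) ^ (b[1] & b[3]) ^ b[4]

# Summing over the cube {x0, x1, x2} isolates the coefficient of the
# degree-3 monomial: lower-degree terms cancel (each is XORed an even
# number of times), so the cube sum is 1 for every choice of x3, x4.
sums = {cube_sum(f, [0, 1, 2], fixed=r << 3) for r in range(4)}
```

A real cipher bit behaves like a random function of the inputs once the degree is high enough, so a cube sum that is constant across all non-cube values is exactly the bias a cube distinguisher exploits.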
Our attack requires $2^{17.6}$ data, $2^{25.5}$ bits of memory and $2^{52.5}$ time. Our results show that the practical variant of SPEEDY, i.e., SPEEDY-5-192, has a security margin of only 2 rounds. We believe our work will bring new insights into understanding the security of SPEEDY.2022-05-23T08:23:19+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/613GLUE: Generalizing Unbounded Attribute-Based Encryption for Flexible Efficiency Trade-Offs2022-05-23T08:23:43+00:00Marloes VenemaGreg AlpárCiphertext-policy attribute-based encryption is a versatile primitive that has been considered extensively for securely managing data in practice. Completely unbounded schemes are especially attractive, because they do not restrict the sets of attributes and policies. So far, any such scheme that supports negations in the access policy or has online/offline extensions comes with an inefficient decryption algorithm.
In this work, we propose GLUE (Generalized, Large-universe, Unbounded and Expressive), a novel scheme that allows for an efficient implementation of decryption while supporting both negations and online/offline extensions. We achieve these properties simultaneously by uncovering an underlying dependency between encryption and decryption, which allows for a flexible trade-off in their efficiency. For the security proof, we devise a new technique that enables us to generalize multiple existing schemes. As a result, we obtain a completely unbounded scheme supporting negations that, to the best of our knowledge, outperforms all existing schemes in the decryption algorithm.2022-05-23T08:23:43+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/615Smoothing Codes and Lattices: Systematic Study and New Bounds2022-05-23T08:24:44+00:00Thomas Debris-AlazardLéo DucasNicolas ReschJean-Pierre TillichIn this article we revisit smoothing bounds in parallel between lattices \emph{and} codes. Initially introduced by Micciancio and Regev, these bounds were instantiated with Gaussian distributions and were crucial for arguing the security of many lattice-based cryptosystems. Unencumbered by direct application concerns, we provide a systematic study of how these bounds are obtained for both lattices \emph{and} codes, transferring techniques between the two areas. We also consider various spherically symmetric noise distributions.
We find that the best strategy for a worst-case bound combines Parseval's identity, the Cauchy-Schwarz inequality, and the second linear programming bound; this holds for both codes and lattices, and for all noise distributions at hand. For an average-case analysis, the linear programming bound can be replaced by a tight average count.
This alone gives optimal results for spherically uniform noise over random codes and random lattices. It also improves the previous Gaussian smoothing bound for worst-case lattices, but, surprisingly, yields even better results for uniform noise than for Gaussian noise (or Bernoulli noise for codes).
This counter-intuitive situation can be resolved by an adequate decomposition and truncation of the Gaussian and Bernoulli distributions into a superposition of uniform noise, giving a further improvement in those cases and putting them on par with the uniform ones.2022-05-23T08:24:44+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/617SO-CCA Secure PKE in the Quantum Random Oracle Model or the Quantum Ideal Cipher Model2022-05-23T08:25:22+00:00Shingo SatoJunji ShikataSelective opening (SO) security is one of the most important security notions for public key encryption (PKE) in a multi-user setting. Even if the messages and random coins used in some ciphertexts are leaked, SO security guarantees the confidentiality of the other ciphertexts. Indeed, it has been shown that there exist PKE schemes which meet standard security notions such as indistinguishability against chosen ciphertext attacks (IND-CCA security) but do not meet SO security against chosen ciphertext attacks. Hence, it is important to consider SO security in the multi-user setting. Meanwhile, many researchers have studied cryptosystems in security models where adversaries can submit quantum superposition queries (i.e., quantum queries) to oracles. In particular, IND-CCA secure PKE and KEM schemes in the quantum random oracle model have been intensively studied. In this paper, we show that two kinds of constructions of hybrid encryption schemes meet simulation-based SO security against chosen ciphertext attacks (SIM-SO-CCA security) in the quantum random oracle model or the quantum ideal cipher model. The first scheme is constructed from any IND-CCA secure KEM and any simulatable data encapsulation mechanism (DEM). The second one is constructed from any IND-CCA secure KEM based on the Fujisaki-Okamoto transformation and any strongly unforgeable message authentication code (MAC). 
We can apply any IND-CCA secure KEM scheme to the first one if the underlying DEM scheme meets simulatability, whereas we can apply any strongly unforgeable MAC to the second one if the underlying KEM is based on the Fujisaki-Okamoto transformation.2022-05-23T08:25:22+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/618A simple proof of ARX completeness2022-05-23T08:28:20+00:00Adriano KoleciIn recent years there has been growing interest in ARX ciphers thanks to their performance on low-cost architectures. This work is a short and simple proof that Add, Rotate and Exclusive-OR (ARX) operations generate the permutation group $S_{2^n}$; the proof consists of elementary arguments with minimal use of group theory.2022-05-23T08:28:20+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/619Asynchronous Dynamic Proactive Secret Sharing under Honest Majority: Refreshing Without a Consistent View on Shares2022-05-23T08:28:46+00:00Matthieu RambaudAntoine UrbanWe present the first proactive secret sharing scheme under honest majority which runs purely over pairwise asynchronous channels.
Moreover:
- it has robust reconstruction. In addition, provided a single broadcast in the sharing phase, it commits the dealer to a value;
- it operates under a bare PKI and enables dynamic membership;
- the standard version carries over to the receiver-anonymous (YOSO) model, in which participants speak only once and then erase their memories.
Each refresh of a secret takes $2$ actual message delays and a total of $O(n^4)$ bits sent by the honest players;
- it allows an optimization with $O(n^3)$ bits sent and a latency of $5$ messages.2022-05-23T08:28:46+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/620Synthesizing Quantum Circuits of AES with Lower T-depth and Less Qubits2022-05-23T08:29:09+00:00Zhenyu HuangSiwei SunThe significant progress in the development of quantum computers has made the study of cryptanalysis based on quantum computing an active topic. To accurately estimate the resources required to carry out quantum attacks, the involved quantum algorithms have to be synthesized into quantum circuits with basic quantum gates. In this work, we present several generic synthesis and optimization techniques for circuits implementing the quantum oracles of iterative symmetric-key ciphers, which are commonly employed in quantum attacks based on Grover's and Simon's algorithms. First, a general structure for implementing the round functions of block ciphers in place is proposed. Then, we present novel techniques for synthesizing efficient quantum circuits of linear and non-linear cryptographic building blocks. We apply these techniques to AES and systematically investigate strategies for depth-width trade-offs. Along the way, we derive a quantum circuit
for the AES S-box with provably minimal T-depth, based on some new observations on its classical circuit. As a result, the T-depth and width (number of qubits) required for implementing the quantum circuits of AES are significantly reduced. Compared with the circuit proposed at EUROCRYPT 2020, the T-depth is reduced from 60 to 40 without increasing the width, or to 30 with a slight increase in width. These circuits are fully implemented in Microsoft Q# and the source code is publicly
available. Compared with the circuit proposed at ASIACRYPT 2020, the width of one of our circuits is reduced from 512 to 371, and the Toffoli-depth is reduced from 2016 to 1558 at the same time. In fact, we can reduce the width to 270 at the cost of increased depth. Moreover, a full spectrum of depth-width trade-offs is provided, setting new records for the synthesis and optimization of quantum circuits of AES.
2022-05-23T08:29:09+00:00
https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/621
Caulk: Lookup Arguments in Sublinear Time
2022-05-23T08:29:32+00:00
Arantxa Zapico, Vitalik Buterin, Dmitry Khovratovich, Mary Maller, Anca Nitulescu, Mark Simkin
We present position-hiding linkability for vector commitment schemes: one can prove in zero knowledge that one or $m$ values that comprise commitment cm all belong to the vector of size $N$ committed to in C. Our construction Caulk can be used for membership proofs and lookup arguments and outperforms all existing alternatives in prover time by orders of magnitude.
For both single- and multi-membership proofs, Caulk beats SNARKed Merkle proofs by a factor of 100, even if the latter are instantiated with the Poseidon hash. Asymptotically, our prover needs $O(m^2 + m\log N)$ time to prove a batch of $m$ openings, whereas the proof size is $O(1)$ and the verifier time is $O(\log(\log N))$.
As a lookup argument, Caulk is the first scheme with prover time sublinear in the table size, assuming $O(N\log N)$ preprocessing time and $O(N)$ storage. It can be used as a subprimitive in verifiable computation schemes in order to drastically decrease the lookup overhead.
Our scheme comes with a reference implementation and benchmarks.
2022-05-23T08:29:32+00:00
https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/622
Efficient and Accurate homomorphic comparisons
2022-05-23T08:30:04+00:00
Olive Chakraborty, Martin Zuber
We design and implement a new efficient and accurate fully homomorphic argmin/min or argmax/max comparison operator, which finds application in numerous real-world use cases as a classifier. In particular, we propose two versions of our algorithms using different tools from TFHE's functional bootstrapping toolkit. Our algorithm scales to any number of input data points with linear time complexity and logarithmic noise propagation. Our algorithm is the fastest on the market for non-parallel comparisons with a high degree of accuracy and precision. For the MNIST and SVHN datasets, which work under the PATE framework, our algorithm achieves an accuracy of around 99.95% for both.
2022-05-23T08:30:04+00:00
https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/623
Fast Fully Secure Multi-Party Computation over Any Ring with Two-Thirds Honest Majority
2022-05-23T08:30:52+00:00
Anders Dalskov, Daniel Escudero, Ariel Nof
We introduce a new MPC protocol to securely compute any functionality over an arbitrary black-box finite ring (which may not be commutative), tolerating $t<n/3$ active corruptions while \textit{guaranteeing output delivery} (G.O.D.).
Our protocol is based on replicated secret-sharing, whose share size is known to grow exponentially with the number of parties $n$.
However, even though the internal storage and computation in our protocol remains exponential, the communication complexity of our protocol is \emph{constant}, except for a light constant-round check that is performed at the end before revealing the output.
Furthermore, the amortized communication complexity of our protocol is not only constant, but very small: only $1 + \frac{t-1}{n}<1\frac{1}{3}$ ring elements per party, per multiplication gate over two rounds of interaction.
This improves over the state-of-the-art protocol in the same setting by Furukawa and Lindell (CCS 2019), which has a communication complexity of $2\frac{2}{3}$ \emph{field} elements per party, per multiplication gate, while achieving fairness only.
As an alternative, we also describe a variant of our protocol which has only one round of interaction per multiplication gate on average, and amortized communication cost of $\le 1\frac{1}{2}$ ring elements per party on average for any natural circuit.
Motivated by the fact that the efficiency of distributed protocols is penalized much more by high communication complexity than by local computation/storage, we perform a detailed analysis together with experiments in order to explore how large the number of parties can be before the storage and computation overhead becomes prohibitive.
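To make this storage-versus-communication trade-off concrete, the following sketch (our illustration, not the authors' code) tabulates the per-party share count of $t$-private replicated secret sharing, $\binom{n-1}{t}$ — assuming the standard CNF-style sharing in which each party holds one additive share for every $t$-subset of parties it does not belong to — against the amortized communication cost $1 + \frac{t-1}{n}$ ring elements per multiplication gate stated above:

```python
from math import comb

def replicated_share_count(n: int, t: int) -> int:
    # Each party stores one additive share per t-subset of the other
    # parties, so storage grows exponentially in n for t ~ n/3.
    return comb(n - 1, t)

def amortized_comm(n: int, t: int) -> float:
    # Ring elements sent per party, per multiplication gate: 1 + (t-1)/n,
    # which stays below 1 + 1/3 for all t < n/3.
    return 1 + (t - 1) / n

for n in (4, 10, 16):
    t = (n - 1) // 3  # largest t with t < n/3
    print(n, t, replicated_share_count(n, t), round(amortized_comm(n, t), 3))
```

The table shows why the paper's question is the right one: communication stays tiny and essentially flat, while local storage explodes (e.g., 3003 shares per party already at $n=16$).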
Our results show that our techniques are viable even for a moderate number of parties (e.g., $n>10$).
2022-05-23T08:30:52+00:00
https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/624
Cryptanalysis of Three Quantum Money Schemes
2022-05-23T08:31:12+00:00
Andriyan Bilyk, Javad Doliskani, Zhiyong Gong
We investigate the security assumptions behind three public-key quantum money schemes. Aaronson and Christiano proposed a scheme based on hidden subspaces of the vector space $\mathbb{F}_2^n$ in 2012. It was conjectured by Pena et al. in 2015 that the hard problem underlying the scheme can be solved in quasi-polynomial time. We confirm this conjecture by giving a polynomial-time quantum algorithm for the underlying problem. Our algorithm is based on computing the Zariski tangent space of a random point in the hidden subspace.
Zhandry proposed a scheme based on multivariate hash functions in 2017. We give a polynomial-time quantum algorithm for cloning a money state with high probability. Our algorithm uses the verification circuit of the scheme to produce a banknote from a given serial number.
Kane proposed a scheme based on modular forms in 2018. The underlying hard problem in Kane's scheme is cloning a quantum state that represents an eigenvector of a set of Hecke operators. We give a polynomial-time quantum reduction from this hard problem to a linear algebra problem. The latter problem is much easier to understand, and we hope that our reduction opens new avenues to future cryptanalyses of this scheme.
2022-05-23T08:31:12+00:00
https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/625
Byzantine Fault Tolerance from Weak Certificates
2022-05-23T08:31:53+00:00
Sisi Duan, Haibin Zhang, Xiao Sui, Baohan Huang, Changchun Mu, Gang Di, Xiaoyun Wang
State-of-the-art Byzantine fault-tolerant (BFT) protocols assuming partial synchrony, such as SBFT and HotStuff, use regular certificates obtained from $2f+1$ (partial) signatures. We show in this paper that one can use weak certificates obtained from only $f+1$ signatures to design more robust and much more efficient BFT protocols. We devise Dashing (a family of three HotStuff-style BFT protocols) and Star (a parallel BFT framework).
We begin with Dashing1 that targets both efficiency and robustness using weak certificates. Dashing1 is partition-tolerant and network-adaptive, and does not rely on fallback asynchronous BFT protocols. Dashing2 is a variant of Dashing1 and focuses on performance only. Then we show in Dashing3 how to further enable a fast path by using strong certificates obtained from $3f+1$ signatures, a challenging task we tackled in the paper.
We then leverage weak certificates to build a highly efficient BFT framework (Star) that delivers transactions from $n-f$ replicas using only a single consensus instance in the standard BFT model. Star completely separates bulk data transmission from consensus. Moreover, its data transmission process uses $O(n^2)$ messages only and can be effectively pipelined.
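For orientation, the quorum arithmetic behind the certificate types used in this abstract — in the standard BFT model with $n = 3f+1$ replicas — can be sketched as follows (an illustration of the terminology only, not the Dashing/Star protocol logic):

```python
# Certificate sizes in the standard BFT model with n = 3f + 1 replicas.
def thresholds(f: int) -> dict:
    n = 3 * f + 1
    return {
        "n": n,
        "weak": f + 1,         # at least one signer is guaranteed honest
        "regular": 2 * f + 1,  # any two such quorums share an honest replica
        "strong": 3 * f + 1,   # every replica signed (fast-path certificate)
        "deliver_from": n - f, # replicas Star delivers transactions from
    }

print(thresholds(1))  # f=1: weak=2, regular=3, strong=4, deliver_from=3
```

The point of a weak certificate is the first line: even $f+1$ signatures pin down at least one honest replica, which is enough leverage for the robustness and efficiency gains claimed above.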
We demonstrate that the Dashing protocols achieve 10.7%-29.9% higher peak throughput than HotStuff. Meanwhile, Star, when instantiated with PBFT, is an order of magnitude faster than HotStuff. Furthermore, unlike the Dashing protocols and HotStuff, whose performance degrades as $f$ grows, the peak throughput of Star increases as $f$ grows. When deployed in a WAN with 91 replicas across five continents, Star achieves 243 ktx/sec, 15.8x the throughput of HotStuff.
2022-05-23T08:31:53+00:00
https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/626
The Simplest SAT Model of Combining Matsui's Bounding Conditions with Sequential Encoding Method
2022-05-23T08:32:59+00:00
Senpeng Wang, Dengguo Feng, Bin Hu, Jie Guan, Tairong Shi, Kai Zhang
As the first generic method for finding optimal differential and linear characteristics, Matsui's branch-and-bound search algorithm has played an important role
in evaluating the security of symmetric ciphers. By combining Matsui's bounding conditions with automatic search models, the search efficiency can be improved. All previous methods realize the bounding conditions by adding a set of constraints, which may increase the search complexity of the models. In this paper, by using information theory to quantify the effect of bounding conditions, we give the general form of bounding conditions that can use all the information provided by Matsui's bounding conditions. Then, a new method of combining bounding conditions with the sequential encoding method is proposed. Different from all previous methods, our new method realizes the bounding conditions by removing variables and clauses from Boolean satisfiability problem (SAT) models based on the original sequential encoding method. With the help of some small-size Mixed Integer Linear Programming (MILP) models, we build the simplest SAT model combining Matsui's bounding conditions with the sequential encoding method. Then, we apply our new method to search for the optimal differential and linear characteristics of some SPN, Feistel, and ARX block ciphers. The number of variables and clauses and the solving time of the SAT models
are decreased significantly, and we find some new differential and linear characteristics covering more rounds. For example, the optimal differential probability of full-round GIFT128 is obtained for the first time.
2022-05-23T08:32:59+00:00
https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/627
Secure Hierarchical Deterministic Wallet Supporting Stealth Address
2022-05-23T08:33:30+00:00
Xin Yin, Zhen Liu, Guomin Yang, Guoxing Chen, Haojin Zhu
Over the past decade, cryptocurrency has been undergoing rapid development. The digital wallet, as the tool to store and manage cryptographic keys, is the primary entrance for the public to access cryptocurrency assets.
Hierarchical Deterministic Wallet (HDW), proposed in Bitcoin Improvement Proposal 32 (BIP32), has attracted much attention and been widely used in the community, due to its virtues such as easy backup/recovery, convenient cold-address management, and supporting trust-less audits and applications in hierarchical organizations.
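As background on the key derivation that BIP32-style HDWs build on, here is a simplified sketch of non-hardened child-key derivation (our illustration only: the real BIP32 specification feeds the serialized parent *public* key into HMAC-SHA512, which this sketch replaces with the private key to avoid an elliptic-curve dependency):

```python
import hashlib
import hmac

# Order of the secp256k1 group over which Bitcoin private keys live.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def derive_child(k_parent: int, chain_code: bytes, index: int):
    """Simplified, non-hardened BIP32-style derivation (illustrative only)."""
    data = k_parent.to_bytes(32, "big") + index.to_bytes(4, "big")
    digest = hmac.new(chain_code, data, hashlib.sha512).digest()
    # Left half of the HMAC output tweaks the parent key;
    # right half becomes the child's chain code.
    k_child = (int.from_bytes(digest[:32], "big") + k_parent) % N
    return k_child, digest[32:]

# Derivation is deterministic: the same path always yields the same child,
# which is what makes backup/recovery from a single seed possible.
k0, c0 = 0x1234, b"\x00" * 32
assert derive_child(k0, c0, 7) == derive_child(k0, c0, 7)
```

The additive-tweak structure is also what enables the "easy backup/recovery" and "trust-less audit" properties mentioned above: anyone holding the seed (or, in the real scheme, the parent public key and chain code) can re-derive the whole key hierarchy.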
While HDW allows the wallet owner to generate and manage his keys conveniently, Stealth Address (SA) allows a payer to generate a fresh address (i.e., public key) for the receiver without any interaction, so that users can achieve ``one coin each address'' in a very convenient manner, which is widely regarded as a simple but effective way to protect user privacy. Consequently, SA has also attracted much attention and been widely used in the community.
However, so far there is no secure wallet algorithm that provides the virtues of both HDW and SA. In fact, even for standalone HDW, to the best of our knowledge, there is no strict definition of syntax and models that captures the functionality and security (i.e., safety of coins and privacy of users) requirements that practical scenarios in cryptocurrency impose on wallets. As a result, the existing wallet algorithms either have (potential) security flaws or lack crucial functionality features.
In this work, we formally define the syntax and security models of Hierarchical Deterministic Wallet supporting Stealth Address (HDWSA), capturing the functionality and security (including safety and privacy) requirements imposed by the practice in cryptocurrency, which include all the versatile functionalities that lead to the popularity of HDW and SA as well as all the security guarantees that underlie these functionalities. We propose a concrete HDWSA construction and prove its security in the random oracle model.
We implement our scheme, and the experimental results show that the efficiency is suitable for typical cryptocurrency settings.
2022-05-23T08:33:30+00:00
https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/628
High-Performance Polynomial Multiplication Hardware Accelerators for KEM Saber and NTRU
2022-05-23T08:33:57+00:00
Elizabeth Carter, Pengzhou He, Jiafeng Xie
Alongside the rapid development in building large-scale quantum computers, post-quantum cryptography (PQC) has recently drawn significant attention from the research community, as it has been proven that the existing public-key cryptosystems are vulnerable to quantum attacks. Following this direction, this paper presents a novel implementation of high-performance polynomial multiplication hardware accelerators for key encapsulation mechanism (KEM) Saber and NTRU, two PQC algorithms currently under consideration by the National Institute of Standards and Technology (NIST) PQC standardization process. In total, we have carried out three layers of effort to obtain the proposed work. First, we propose a new Dual Cyclic-Row Oriented Processing (Dual-CROP) technique to build a high-performance polynomial multiplication hardware accelerator for KEM Saber. Then, we extend this hardware accelerator to NTRU with proper innovation and adjustment. Finally, through a series of complexity analyses and implementation-based comparisons, we show that the proposed hardware accelerators obtain better area-time complexities than known existing ones. It is expected that the outcome of this work can impact the ongoing NIST PQC standardization process and can be deployed further to construct efficient cryptoprocessors.
2022-05-23T08:33:57+00:00
https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/629
Feel the Quantum Functioning: Instantiating Generic Multi-Input Functional Encryption from Learning with Errors (extended version)
2022-05-23T08:34:19+00:00
Alexandros Bakas, Antonis Michalas, Eugene Frimpong, Reyhaneh Rabbaninejad
Functional Encryption (FE) allows users who hold a specific decryption key to learn a specific function of encrypted data while the actual plaintexts remain private. While FE is still in its infancy, it is our strong belief that in the years to come, this remarkable cryptographic primitive will have matured to the degree that it becomes an integral part of access control systems, especially cloud-based ones. To this end, we believe it is of great importance to provide not only theoretical and generic constructions but also concrete instantiations of FE schemes from well-studied cryptographic assumptions. Therefore, in this paper, we undertake the task of presenting two instantiations of the generic work presented in [8] from the Decisional Diffie-Hellman (DDH) problem that also satisfy the property of verifiable decryption. Moreover, we present a novel multi-input FE (MIFE) scheme that can be instantiated from Regev's cryptosystem, and thus remains secure even against quantum adversaries. Finally, we provide a multi-party computation (MPC) protocol that allows our MIFE construction to be deployed in the multi-client mode.
2022-05-23T08:34:19+00:00
https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/630
Enforcing fine-grained constant-time policies
2022-05-23T08:35:02+00:00
Basavesh Ammanaghatta Shivakumar, Gilles Barthe, Benjamin Grégoire, Vincent Laporte, Swarn Priya
Cryptographic constant-time (CT) is a popular programming discipline used by cryptographic libraries to protect themselves against timing attacks. The CT discipline aims to enforce that program execution does not leak secrets, where leakage is defined by a formal leakage model. In practice, different leakage models coexist, sometimes even within a single library, both to reflect different architectures and to accommodate different security-efficiency trade-offs. Constant-timeness is popular and can be checked automatically by many tools. However, most sound tools are focused on a baseline (BL) leakage model. In contrast, (sound) verification methods for other leakage models are less developed, in part because these models require modular arithmetic reasoning. In this paper, we develop a systematic, sound approach for enforcing fine-grained constant-time policies beyond the BL model. Our approach combines two main ingredients: a verification infrastructure, which proves that source programs are constant-time, and a compiler infrastructure, which provably preserves constant-timeness for these fine-grained policies. By making these infrastructures parametric in the leakage model, we achieve the first approach that supports fine-grained constant-time policies. We implement the approach in the Jasmin framework for high-assurance cryptography, and we evaluate our approach with examples from the literature: OpenSSL and wolfSSL. We found a bug in OpenSSL and provided a formally verified fix.
2022-05-23T08:35:02+00:00
https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/631
Watermarking PRFs against Quantum Adversaries
2022-05-23T08:35:29+00:00
Fuyuki Kitagawa, Ryo Nishimaki
We initiate the study of software watermarking against quantum adversaries.
A quantum adversary generates a quantum state as pirate software that potentially removes an embedded message from classical marked software.
Extracting an embedded message from quantum pirate software is difficult since measurement could irreversibly alter the quantum state.
In software watermarking against classical adversaries, a message extraction algorithm crucially uses the (input-output) behavior of classical pirate software to extract an embedded message. Even if we instantiate existing watermarking PRFs with quantum-safe building blocks, it is not clear whether they are secure against quantum adversaries due to the quantum-specific property above.
Thus, we need entirely new techniques to achieve software watermarking against quantum adversaries.
In this work, we define secure watermarking PRFs for quantum adversaries (unremovability against quantum adversaries). We also present two watermarking PRFs as follows.
- We construct a privately extractable watermarking PRF against quantum adversaries from the quantum hardness of the learning with errors (LWE) problem. The marking and extraction algorithms use a public parameter and a private extraction key, respectively. The watermarking PRF is unremovable even if adversaries have (the public parameter and) access to the extraction oracle, which returns a result of extraction for a queried quantum circuit.
- We construct a publicly extractable watermarking PRF against quantum adversaries from indistinguishability obfuscation (IO) and the quantum hardness of the LWE problem. The marking and extraction algorithms use a public parameter and a public extraction key, respectively. The watermarking PRF is unremovable even if adversaries have the extraction key (and the public parameter).
We develop a quantum extraction technique to extract information (a classical string) from a quantum state without destroying the state too much.
We also introduce the notion of extraction-less watermarking PRFs as a crucial building block to achieve the results above by combining the tool with our quantum extraction technique.2022-05-23T08:35:29+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/632Recovering Rainbow's Secret Key with a First-Order Fault Attack2022-05-23T08:36:10+00:00Thomas AulbachTobias KovatsJuliane KrämerSoundes MarzouguiRainbow, a multivariate digital signature scheme and third round finalist in NIST's PQC standardization process, is a layered version of the unbalanced oil and vinegar (UOV) scheme.
We introduce two fault attacks, each focusing on one of the secret linear transformations $T$ and $S$ used to hide the structure of the central map in Rainbow. The first fault attack reveals a part of $T$ and we prove that this is enough to achieve a full key recovery with negligible computational effort for all parameter sets of Rainbow. The second one unveils $S$, which can be extended to a full key recovery by the Kipnis-Shamir attack.
Our work exposes the secret transformations used in multivariate signature schemes as an important attack vector for physical attacks, which need further protection.
Our attacks target the optimized Cortex-M4 implementation and require only first-order instruction skips and a moderate amount of faulted signatures.2022-05-23T08:36:10+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/528On Random Sampling of Supersingular Elliptic Curves2022-05-23T09:31:59+00:00Marzio MulaNadir MurruFederico PintoreWe consider the problem of sampling random supersingular elliptic curves over finite fields of cryptographic size (SRS problem). The currently best-known method combines the reduction of a suitable complex multiplication (CM) $j$-invariant and a random walk over some supersingular isogeny graph. Unfortunately, this method is not suitable for numerous cryptographic applications because it gives information about the endomorphism ring of the generated curve. This motivates a stricter version of the SRS problem, requiring that the sampling algorithm gives no information about the endomorphism ring of the output curve (cSRS problem).
In this work we formally define the SRS and cSRS problems, which are both of theoretical interest. We discuss the relevance of the latter also for cryptographic applications, and we provide a self-contained survey of the known approaches to both problems. Those for the cSRS problem work only for small finite fields and have exponential complexity in the characteristic of the base finite field (since they require computing and finding roots of polynomials of large degree), leaving the problem open. In the second part of the paper, we propose and analyse some alternative techniques — based either on the Hasse invariant or on division polynomials — and we explain why they do not readily lead to efficient cSRS algorithms, although they may open promising research directions.2022-05-10T07:56:17+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/633CUDA-Accelerated RNS Multiplication in Word-Wise Homomorphic Encryption Schemes2022-05-23T09:48:02+00:00Shiyu ShenHao YangYu LiuZhe LiuYunlei ZhaoHomomorphic encryption (HE), which allows computation over encrypted data, has often been used to preserve privacy. However, the computationally heavy nature of HE and the complexity of network topologies make the deployment of HE schemes in Internet of Things (IoT) scenarios difficult. In this work, we propose CARM, the first optimized GPU implementation that covers BGV, BFV and CKKS, targeting the acceleration of homomorphic multiplication in heterogeneous IoT systems. We offer constant-time low-level arithmetic with minimal instructions and memory usage, as well as performance- and memory-prioritized configurations. Exploiting a parametric and generic design, we offer various trade-offs between resources and efficiency, yielding a solution suitable for accelerating RNS homomorphic multiplication on both high-performance and embedded GPUs. Through this, we can offer more real-time evaluation results and relieve the computational pressure on cloud devices. We deploy our implementations on two GPUs and achieve up to 378.4×, 234.5×, and 287.2× speedup for homomorphic multiplication of BGV, BFV, and CKKS on Tesla V100S, and 8.8×, 9.2×, and 10.3× on Jetson AGX Xavier, respectively.2022-05-23T09:48:02+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/634Round-Optimal Lattice-Based Threshold Signatures, Revisited2022-05-23T09:48:31+00:00Shweta AgrawalDamien StehleAnshu YadavThreshold signature schemes enable distribution of the signature issuing capability to multiple users, to mitigate the threat of signing key compromise.
Though a classic primitive, these signatures have witnessed a surge of interest in recent times due to their relevance to modern applications like blockchains and cryptocurrencies. In this work, we study round-optimal threshold signatures in the post-quantum regime and improve the only known lattice-based construction, by Boneh et al. [CRYPTO’18], as follows:
• Efficiency. We reduce the amount of noise flooding used in the construction from $2^{\Omega(\lambda)}$ down to $\sqrt{Q}$, where $Q$ is the bound on the number of generated signatures and $\lambda$ is the security parameter. By using lattice hardness assumptions over polynomial rings, this allows us to decrease the signature bit-lengths from $\widetilde{O}(\lambda^3)$ to $\widetilde{O}(\lambda)$, bringing them significantly closer to practice. Our improvement relies on a careful analysis using Rényi divergence rather than statistical distance in the security proof.
• Instantiation. The construction of Boneh et al. requires a standard signature scheme to be evaluated homomorphically. To instantiate this, we provide a homomorphism-friendly variant of Lyubashevsky’s signature [EUROCRYPT ’12] which achieves low circuit depth by being “rejection-free” and uses an optimal, moderate noise flooding of $\sqrt{Q}$, matching the above.
• Towards Adaptive Security. The construction of Boneh et al. satisfies only selective security, where all the corrupted parties must be announced before any signing query is made. We improve this in two ways: in the Random Oracle Model, we obtain partial adaptivity, where signing queries can be made before the corrupted parties are announced but the set of corrupted parties must be announced all at once. In the standard model, we obtain full adaptivity, where parties can be corrupted at any time, but this construction is in a weaker pre-processing model where signers must be provided correlated randomness of length proportional to the number of signatures in an offline preprocessing phase.2022-05-23T09:48:31+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/635Post-Quantum Secure Boot on Vehicle Network Processors2022-05-23T11:14:36+00:00Joppe W. BosBrian CarlsonJoost RenesMarius RotaruDaan SprenkelsGeoffrey P. WatersThe ability to trust a system to act safely and securely strongly relies on the integrity of the software that it runs. To guarantee
authenticity of the software, one can include cryptographic data such as digital signatures on application images that can only be generated by trusted parties. These are typically based on cryptographic primitives such as Rivest-Shamir-Adleman (RSA) or Elliptic-Curve Cryptography (ECC), whose security will be lost whenever a large enough quantum computer is built. For that reason, migration towards Post-Quantum Cryptography (PQC) is necessary. This paper investigates the practical impact of migrating the secure boot flow on a Vehicle Network Processor (S32G274A) towards PQC. We create a low-memory fault-attack-resistant implementation of the Dilithium signature verification algorithm and evaluate its impact on the boot flow.2022-05-23T11:14:36+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/636Integer Syndrome Decoding in the Presence of Noise2022-05-23T18:50:12+00:00Vlad-Florin DragoiBrice ColombierPierre-Louis CayrelVincent GrossoCode-based cryptography received attention after
the NIST started the post-quantum cryptography standardization process in 2016. A central NP-hard problem is the binary syndrome decoding problem, on which the security of many code-based cryptosystems lies. The best known methods to solve this problem all stem from the information-set decoding strategy, first introduced by Prange in 1962. A recent line of work considers augmented versions of this strategy, with hints typically provided by side-channel information. In this work, we consider the integer syndrome decoding problem, where the integer syndrome is available but might be noisy. We study how the performance of the decoder is affected by the noise. We provide experimental results on cryptographic parameters for the BIKE and Classic McEliece cryptosystems, which are finalist and alternate candidates for the third round of the NIST standardization process, respectively.2022-05-23T18:50:12+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/637Conditional Attribute-Based Proxy Re-Encryption and Its Instantiation2022-05-23T18:50:42+00:00Lisha YaoJian WengBimei WangIn attribute-based proxy re-encryption (AB-PRE) and attribute-based conditional proxy re-encryption (AB-CPRE) systems, the proxy transforms a ciphertext associated with policy $f$ into a ciphertext associated with policy $g$, or transforms a ciphertext for a delegator satisfying a fine-grained condition into a ciphertext for a delegatee. However, many practical applications of such PRE schemes require fine-grained access control while keeping flexible delegation. Unfortunately, no existing PRE scheme can handle both of the above scenarios simultaneously. In this work, we introduce the notion of conditional attribute-based proxy re-encryption (CAB-PRE), which enables a proxy to transform a ciphertext associated with policy $f$ into a ciphertext associated with policy $g$ only if it meets the special delegation requirements set by the delegator. We formalize its honest re-encryption attacks (HRA) security model, which implies CPA security, and give a concrete CAB-PRE scheme based on the learning with errors (LWE) assumption. Finally, we show that CAB-PRE implies the AB-PRE and AB-CPRE notions, and propose constructions for both.2022-05-23T18:50:42+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/616Post-Quantum Anonymous One-Sided Authenticated Key Exchange without Random Oracles2022-05-24T01:16:46+00:00Ren IshibashiKazuki YoneyamaAuthenticated Key Exchange (AKE) is a cryptographic protocol to share a common session key among multiple parties. Usually, PKI-based AKE schemes are designed to guarantee secrecy of the session key and mutual authentication. However, in practice, there are many cases where mutual authentication is undesirable, such as in anonymous networks like Tor and Riffle, or difficult to achieve due to certificate management at the user level, such as on the Internet. Goldberg et al. formulated a model of anonymous one-sided AKE which guarantees the anonymity of the client by allowing only the client to authenticate the server, and proposed a concrete scheme. However, existing anonymous one-sided AKE schemes are only known to be secure in the random oracle model. In this paper, we propose generic constructions of anonymous one-sided AKE in the random oracle model and in the standard model, respectively.
Our constructions allow us to construct the first post-quantum anonymous one-sided AKE scheme from isogenies in the standard model.2022-05-23T08:25:02+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/638Impossibilities in Succinct Arguments: Black-box Extraction and More2022-05-24T05:59:47+00:00Matteo CampanelliChaya GaneshHamidreza KhoshakhlaghJanno SiimThe celebrated result by Gentry and Wichs established a theoretical barrier for succinct non-interactive arguments (SNARGs), showing that for (expressive enough) hard-on-average languages we must assume non-falsifiable assumptions. We further investigate those barriers by showing new negative and positive results related to extractability and to the preprocessing model.
1. We first ask the question “are there further barriers to SNARGs that are knowledge-sound (SNARKs) and with a black-box extractor?”. We show it is impossible to have such SNARKs in the standard model. This separates SNARKs in the random oracle model (which can have black-box extraction) and those in the standard model.
2. We find positive results regarding the same question in the non-adaptive setting. Under the existence of SNARGs (without extractability) and from standard assumptions, it is possible to build SNARKs with black-box extractability for a non-trivial subset of NP.
3. On the other hand, we show that (under some mild assumptions) all NP languages cannot have SNARKs with black-box extractability even in the non-adaptive setting.
4. The Gentry-Wichs result does not account for the preprocessing model, under which fall several efficient constructions. We show that also in the preprocessing model it is impossible to construct SNARGs that rely on falsifiable assumptions in a black-box way.
Along the way, we identify a class of non-trivial languages, which we dub “trapdoor languages”, that bypass some of these impossibility results.2022-05-24T05:59:47+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/639Anamorphic Encryption: Private Communication against a Dictator2022-05-24T08:04:32+00:00Giuseppe PersianoDuong Hieu PhanMoti YungCryptosystems have been developed over the years under the typical prevalent setting which assumes that the receiver’s key is kept secure from the adversary, and that the choice of the message to be sent is freely performed by the sender and is kept secure from the adversary as well. Under these fundamental and basic operational assumptions, modern Cryptography has flourished over the last half a century or so, with amazing achievements: new systems (including public-key Cryptography), beautiful and useful models (including security definitions such as semantic security), and new primitives (such as zero-knowledge proofs) have been developed. Furthermore, these fundamental achievements have been translated into actual working systems, and span many of the daily human activities over the Internet.
However, in recent years, there has been ever-growing pressure from many governments to allow the government itself access to keys and messages of encryption systems (under various names: escrow encryption, emergency access, communication decency acts, etc.). Numerous indirect arguments against such policies have been raised, such as "the bad guys can utilize other encryption systems," so all other cryptosystems would have to be declared illegal, or "allowing the government access is an ill-advised policy since it creates a natural weak point in systems security, which may attract others (to masquerade as the government)." It has remained a fundamental open issue, though, to show directly that the above-mentioned efforts by a government (called here “a dictator” for brevity), which mandate breaking of the basic operational assumption (and disallowing other cryptosystems), are, in fact, a futile exercise. This is a direct technical point which needs to be made and has not been made to date.
In this work, as a technical demonstration of the futility of the dictator’s demands, we invent the notion of “Anamorphic Encryption” which shows that even if the dictator gets the keys and the messages used in the system (before anything is sent) and no other system is allowed, there is a covert way within the context of well established public-key cryptosystems for an entity to immediately (with no latency) send piggybacked secure messages which are, in spite of the stringent dictator conditions, hidden from the dictator itself! We feel that this may be an important direct technical argument against the nature of governments’ attempts to police the use of strong cryptographic systems, and we hope to stimulate further works in this direction.2022-05-24T08:04:32+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/598Verifiable and forward private conjunctive keyword search from DIA tree2022-05-24T11:09:23+00:00Laltu SardarSushmita RujIn a dynamic searchable encryption (DSE) scheme, a cloud server can search on encrypted data that the client stores and updates from time to time. Due to information leakage during the search and update phase, DSE schemes are prone to file injection attacks. If during document addition, a DSE scheme does not leak any information about the previous search results, the scheme is said to be forward private. A DSE scheme that supports conjunctive keyword search should be forward private. There has been a fair deal of work on designing forward private DSE schemes in the presence of an honest-but-curious cloud server. However, a malicious cloud server might not run the protocol correctly and still want to be undetected. In a verifiable DSE, the cloud server not only returns the result of a search query but also provides proof that the result is computed correctly.
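A standard way to make such query results verifiable — sketched here with a plain Merkle tree over a sorted key set, not the paper's accumulator-based DIA tree — is to authenticate presence with one leaf, and absence with the adjacent pair that brackets the missing key:

```python
# Minimal Merkle-tree sketch of membership / non-membership proofs over a
# sorted key set (illustrative only; the paper's DIA tree is accumulator-based).
import hashlib

H = lambda b: hashlib.sha256(b).digest()
leaf = lambda k: H(b"leaf" + k.to_bytes(8, "big"))

def build(leaves):
    levels = [leaves]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2: cur = cur + [cur[-1]]          # duplicate odd tail
        levels.append([H(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def prove(levels, i):
    path = []
    for lvl in levels[:-1]:
        if len(lvl) % 2: lvl = lvl + [lvl[-1]]
        path.append((lvl[i ^ 1], i & 1))                # (sibling, am-I-right?)
        i //= 2
    return path

def verify(root, node, path):
    for sib, is_right in path:
        node = H(sib + node) if is_right else H(node + sib)
    return node == root

keys = sorted([3, 7, 19, 42])
levels = build([leaf(k) for k in keys])
root = levels[-1][0]

# membership of 7: one authenticated leaf
assert verify(root, leaf(7), prove(levels, 1))
# non-membership of 10: authenticate the adjacent pair 7 < 10 < 19
i = max(j for j, k in enumerate(keys) if k < 10)
assert keys[i] < 10 < keys[i + 1]
assert verify(root, leaf(keys[i]), prove(levels, i))
assert verify(root, leaf(keys[i + 1]), prove(levels, i + 1))
```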
We design a forward private DSE scheme that supports conjunctive keyword search. At the heart of the construction is our proposed data structure called the dynamic interval accumulation tree (DIA tree). It is an accumulator-based authentication tree that efficiently returns both membership and non-membership proofs. Using the DIA tree, we can convert any single keyword forward private DSE scheme to a verifiable forward private DSE scheme that can support conjunctive queries as well. Our proposed scheme has the same storage as the base DSE scheme and low computational overhead on the client side. We have shown the efficiency of our design by comparing it with existing conjunctive DSE schemes. The comparison also shows that our scheme is suitable for practical use.2022-05-17T13:03:03+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/640Dialektos: Privacy-preserving Smart Contracts2022-05-24T12:06:46+00:00Tadas VaitiekūnasDigital ledger technologies supporting smart contracts usually do not ensure any privacy for user transactions or state. Most solutions to this problem use private network setups, centralized parties, hardware enclaves, or cryptographic primitives that are novel, complex, and computationally expensive. This paper looks into an alternative way of implementing smart contracts. Our construction of a protocol for smart contracts employs an overlay protocol design pattern for decentralized applications, which separates transaction ordering from transaction validation. This enables consensus on application state while revealing only encrypted versions of transactions to the public consensus protocol network. The UTXO-based smart contract model allows partitioning the state of the distributed ledger in such a way that participants need to decrypt and reach consensus only on those transactions which are relevant to them.
We present a security analysis which shows that, assuming the presence of a secure consensus protocol, our construction achieves consensus on UTXO-based transactions while hiding most transaction details from all protocol parties, except the limited subset of parties that need particular transactions to construct their state.2022-05-24T12:06:46+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/641Self-Timed Masking: Implementing First-Order Masked S-Boxes Without Registers2022-05-24T14:00:41+00:00Mateus SimoesLilian BossuetNicolas BruneauVincent GrossoPatrick HaddadPassive physical attacks represent a threat to microelectronics systems by exploiting leakages through side-channels, such as power consumption and electromagnetic radiation. In this context, masking is a sound countermeasure against side-channel attacks: it splits the secret data into several uniformly random shares, achieving independence between the data processing and the secret variable. However, a secure masking scheme incurs additional implementation costs. Furthermore, glitches and early evaluation can temporally weaken a masked implementation in hardware, creating a potential source of exploitable leakages.
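For readers new to masking, the 2-share principle can be seen in a few lines of software; this is a classic Trichina-style masked AND shown purely for illustration, not the paper's register-free dual-rail hardware construction:

```python
# First-order Boolean-masked AND gadget (Trichina-style), illustrating the
# 2-share idea: each product term touches only one share of each input, and
# fresh randomness r masks the recombination.
import secrets

def share(x):
    """Split bit x into two shares whose XOR equals x."""
    r = secrets.randbits(1)
    return r, x ^ r

def masked_and(a0, a1, b0, b1):
    """Return shares (c0, c1) with c0 ^ c1 == (a0 ^ a1) & (b0 ^ b1)."""
    r = secrets.randbits(1)                       # fresh mask
    c0 = r
    c1 = ((((r ^ (a0 & b0)) ^ (a0 & b1)) ^ (a1 & b0)) ^ (a1 & b1))
    return c0, c1

# exhaustive correctness check over all secret inputs
for a in (0, 1):
    for b in (0, 1):
        a0, a1 = share(a)
        b0, b1 = share(b)
        c0, c1 = masked_and(a0, a1, b0, b1)
        assert c0 ^ c1 == a & b
```

In hardware, the order in which the partial products are accumulated matters (it is what the paper's glitch and early-evaluation discussion is about); in this software sketch only the share arithmetic is visible.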
This work shows how to create register-free masking schemes that avoid the early evaluation effect with the help of the dual-rail logic. Moreover, we employ monotonic functions with the purpose of eliminating the occurrence of glitches in combinational circuits. Finally, we evaluate different 2-share masked implementations of the PRESENT and AES S-boxes in a noiseless scenario in order to detect potential first-order leakages and to determine data propagation profiles correlated to the secret variables.2022-05-24T14:00:41+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/184Communication-Efficient BFT Protocols Using Small Trusted Hardware to Tolerate Minority Corruption2022-05-24T17:53:37+00:00Sravya YandamuriIttai AbrahamKartik NayakMichael K. ReiterAgreement protocols for partially synchronous or asynchronous networks tolerate fewer than one-third Byzantine faults. If parties are equipped with trusted hardware that prevents equivocation, then fault tolerance can be improved to fewer than one-half Byzantine faults, but typically at the cost of increased communication complexity. In this work, we present results that use small trusted hardware without worsening communication complexity assuming the adversary controls a fraction of the network that is less than one-half. Our results include a version of HotStuff that retains linear communication complexity in each view and a version of the VABA protocol with quadratic communication, both leveraging trusted hardware to tolerate a minority of corruptions. Our results use expander graphs to achieve efficient communication in a manner that may be of independent interest.
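The "small trusted hardware that prevents equivocation" can be made concrete with a toy increment-and-attest counter: because the trusted component never reuses a sequence number, a Byzantine host cannot get two conflicting messages attested at the same position. The interface and key handling below are illustrative assumptions, not the paper's design:

```python
# Toy "increment-and-attest" trusted counter: binds each attested message to
# a unique, monotonically increasing sequence number, ruling out equivocation.
# Interface names and the shared-key setup are illustrative assumptions.
import hmac, hashlib

class TrustedCounter:
    def __init__(self, key):
        self._key = key
        self._seq = 0

    def attest(self, msg):
        self._seq += 1                      # a seq number is never reused
        tag = hmac.new(self._key, self._seq.to_bytes(8, "big") + msg,
                       hashlib.sha256).digest()
        return self._seq, tag

def verify(key, seq, msg, tag):
    good = hmac.new(key, seq.to_bytes(8, "big") + msg, hashlib.sha256).digest()
    return hmac.compare_digest(good, tag)

key = b"\x00" * 32                          # shared with verifiers via the TEE
tc = TrustedCounter(key)
s1, t1 = tc.attest(b"vote: block A")
assert verify(key, s1, b"vote: block A", t1)
# attesting a conflicting message necessarily consumes a new seq number:
s2, t2 = tc.attest(b"vote: block B")
assert s2 != s1
```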
2021-02-20T17:39:05+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/827TransNet: Shift Invariant Transformer Network for Power Attack2022-05-25T05:24:32+00:00Suvadeep HajraSayandeep SahaManaar AlamDebdeep MukhopadhyayDeep learning (DL) has revolutionized Side Channel Analysis (SCA) in recent years. One of the major advantages of DL in the context of SCA is that it can automatically handle masking and desynchronization countermeasures, even while they are applied simultaneously for a cryptographic implementation. However, the success of the attack strongly depends on the DL model used for the attack. Traditionally, Convolutional Neural Networks (CNNs) have been utilized in this regard. This work proposes to use Transformer Network (TN) for attacking implementations secured with masking and desynchronization. Our choice is motivated by the fact that TN is good at capturing the dependencies among distant points of interest in a power trace. Furthermore, we show that TN can be made shift-invariant which is an important property required to handle desynchronized traces.
Experimental validation on several public datasets establishes that our proposed TN-based model, called TransNet, outperforms the present state-of-the-art on several occasions. Specifically, TransNet outperforms the other methods by a wide margin when the traces are highly desynchronized. Additionally, TransNet shows good attack performance against implementations with desynchronized traces even when it is trained on synchronized traces. The Tensorflow implementation of TransNet is available at https://github.com/suvadeep-iitb/TransNet.2021-06-21T07:50:24+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/643Accelerating the Best Trail Search on AES-Like Ciphers2022-05-25T06:16:09+00:00Seonggyeom KimDeukjo HongJaechul SungSeokhie HongIn this study, we accelerate Matsui's search algorithm to search for the best differential and linear trails of AES-like ciphers. Our acceleration points are twofold. The first exploits the structure and branch number of an AES-like round function to apply strict pruning conditions to Matsui's search algorithm. The second employs permutation characteristics in trail search to reduce the inputs that need to be analyzed. We demonstrate the optimization of the search algorithm by obtaining the best differential and linear trails of existing block ciphers: AES, LED, MIDORI-64, CRAFT, SKINNY, PRESENT, and GIFT. In particular, our search program finds the full-round best differential and linear trails of GIFT-64 (in approx. 1 s and 10 s) and GIFT-128 (in approx. 89 h and 452 h), respectively.
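For intuition, Matsui's branch-and-bound (the algorithm the paper accelerates) can be sketched on a toy target — a "cipher" that just iterates a single 4-bit S-box — where the pruning condition is easy to see. This sketch is not the authors' optimized algorithm, only the classic search skeleton:

```python
# Toy Matsui-style branch-and-bound best differential trail search over an
# iterated 4-bit S-box (the PRESENT S-box), for illustration only.
from math import log2, inf

SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

def transitions(sbox):
    """Map each nonzero input difference to its possible (output difference,
    weight) pairs, where weight = -log2(probability) read off the DDT."""
    ddt = [[0] * 16 for _ in range(16)]
    for dx in range(16):
        for x in range(16):
            ddt[dx][sbox[x] ^ sbox[x ^ dx]] += 1
    return {dx: [(dy, -log2(c / 16)) for dy, c in enumerate(ddt[dx]) if c]
            for dx in range(1, 16)}

def best_weights(rounds):
    """B[n] = weight of the best n-round trail. A partial trail is extended
    only if its weight plus the best achievable weight of the remaining
    rounds (known from shorter searches) can still beat the current bound."""
    T = transitions(SBOX)
    B = [0.0]
    for n in range(1, rounds + 1):
        best = inf
        def rec(r, din, w):
            nonlocal best
            if r == n:
                best = min(best, w)
                return
            for dout, wt in T[din]:
                if w + wt + B[n - r - 1] < best:   # Matsui's pruning condition
                    rec(r + 1, dout, w + wt)
        for din in T:
            rec(0, din, 0.0)
        B.append(best)
    return B

B = best_weights(4)
assert B[1] == 2.0   # PRESENT's S-box has differential uniformity 4 => 2^-2
```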
For a more in-depth application, we leverage the acceleration to investigate the optimal DC/LC resistance that GIFT-variants, called BOGI-based ciphers, can achieve. To this end, we identify all the BOGI-based ciphers and reduce them to 41,472 representatives. Deriving 16-, 32-, 64-, and 128-bit BOGI-based ciphers from the representatives, we obtain their best trails for up to 15, 15, 13, and 11 rounds, respectively. The investigation shows that 12 rounds are the minimum threshold for a 64-bit BOGI-based cipher to prevent efficient trails for DC/LC, whereas GIFT-64 requires 14 rounds. Moreover, it is shown that GIFT can provide better resistance by merely replacing the existing bit permutation. Specifically, the bit permutation variants of GIFT-64 and GIFT-128 require one and two fewer rounds, respectively, to prevent efficient differential and linear trails.2022-05-25T05:27:02+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/644DiLizium 2.0: Revisiting Two-Party Crystals-Dilithium2022-05-25T08:48:40+00:00Peeter LaudNikita SnetkovJelizaveta VakarjukIn recent years there has been increased interest in designing threshold signature schemes. Most recent works focus on constructing threshold versions of the ECDSA or Schnorr signature schemes due to their appealing usage in blockchain technologies. Additionally, a lot of research is being done on cryptographic schemes that are resistant against quantum computer attacks. Presently, the most popular family of post-quantum algorithms is lattice-based cryptography, because its structure allows the creation of cryptographic protocols that go beyond encryption and digital signature schemes.
In this work, we propose a new version of the two-party Crystals-Dilithium signature scheme. The security of our scheme is based on the hardness of the Module-LWE and Module-SIS problems. In our construction, we follow a similar logic to Damgård et al. (PKC 2021) and use an additively homomorphic commitment scheme. However, compared to theirs, our protocol uses signature compression techniques from the original Crystals-Dilithium signature scheme, which makes it closer to the version submitted to the NIST PQC2022-05-25T08:36:33+00:00https://creativecommons.org/licenses/by-nc-nd/4.0/https://creativecommons.org/licenses/by-nc-nd/4.0/https://eprint.iacr.org/2021/158Two-Round Perfectly Secure Message Transmission with Optimal Transmission Rate2022-05-25T08:59:44+00:00Nicolas ReschChen YuanIn the model of Perfectly Secure Message Transmission (PSMT), a sender Alice is connected to a receiver Bob via $n$ parallel two-way channels, and Alice holds an $\ell$ symbol secret that she wishes to communicate to Bob. There is an unbounded adversary Eve that controls $t$ of the channels, where $n=2t+1$. Eve is able to corrupt any symbol sent through the channels she controls, and furthermore may attempt to infer Alice's secret by observing the symbols sent through the channels she controls. The transmission is required to be (a) reliable, i.e., Bob must always be able to recover Alice's secret, regardless of Eve's corruptions; and (b) private, i.e., Eve may not learn anything about Alice's secret. We focus on the two-round model, where Bob is permitted to first transmit to Alice, and then Alice responds to Bob.
In this work we provide upper and lower bounds for the PSMT model when the length of the communicated secret $\ell$ is asymptotically large. Specifically, we first construct a protocol that allows Alice to communicate an $\ell$ symbol secret to Bob by transmitting at most $2(1+o_{\ell \to \infty}(1))n\ell$ symbols. Under a reasonable assumption (which is satisfied by all known efficient two-round PSMT protocols), we complement this with a lower bound showing that $2n\ell$ symbols are necessary for Alice to privately and reliably communicate her secret. This provides strong evidence that our construction is optimal (even up to the leading constant).2021-02-17T10:04:00+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/164Graph-Based Construction for Non-Malleable Codes2022-05-25T12:34:44+00:00Shohei SatakeYujie GuKouichi SakuraiNon-malleable codes are introduced to protect the communication against adversarial tampering of data, as a relaxation of the error-correcting codes and error-detecting codes. To explicitly construct non-malleable codes is a central and challenging problem which has drawn considerable attention and been extensively studied in the past few years. Recently, Rasmussen and Sahai built an interesting connection between non-malleable codes and (non-bipartite) expander graphs, which is the first explicit construction of non-malleable codes based on graph theory other than the typically exploited extractors. So far, there is no other graph-based construction for non-malleable codes yet. In this paper, we aim to explore more connections between non-malleable codes and graph theory. Specifically, we first extend the Rasmussen-Sahai construction to bipartite expander graphs. Accordingly, we establish several explicit constructions for non-malleable codes based on Lubotzky-Phillips-Sarnak Ramanujan graphs and generalized quadrangles, respectively.
It is shown that the resulting codes can either work for a more flexible split-state model or have better code rate in comparison with the existing results.2021-02-17T10:06:46+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/284Lattice-Based Zero-Knowledge Proofs and Applications: Shorter, Simpler, and More General2022-05-25T12:47:25+00:00Vadim LyubashevskyNgoc Khanh NguyenMaxime PlanconWe present a much-improved practical protocol, based on the hardness of Module-SIS and Module-LWE problems, for proving knowledge of a short vector $s$ satisfying $As=t\bmod q$. The currently most-efficient technique for constructing such a proof works by showing that the $\ell_\infty$ norm of $s$ is small.
It creates a commitment to a polynomial vector $m$ whose CRT coefficients are the coefficients of $s$ and then shows that (1) $A\cdot \mathsf{CRT}(m)=t\bmod\,q$ and (2) in the case that we want to prove that the $\ell_\infty$ norm is at most $1$, the polynomial product $(m - 1)\cdot m\cdot(m+1)$ equals $0$. While these schemes are already quite good for practical applications, the requirement of using the CRT embedding, together with being naturally suited only to proving the $\ell_\infty$ norm, somewhat hinders the efficiency of this approach.
In this work, we show that there is a more direct and more efficient way to prove that the coefficients of $s$ have a small $\ell_2$ norm which does not require an equivocation with the $\ell_\infty$ norm, nor any conversion to the CRT representation. We observe that the inner product between two vectors $ r$ and $s$ can be made to appear as a coefficient of a product (or sum of products) between polynomials which are functions of $r$ and $s$. Thus, by using a polynomial product proof system and hiding all but one coefficient, we are able to prove knowledge of the inner product of two vectors modulo $q$. Using a cheap, approximate range proof, one can then lift the proof to be over $\mathbb{Z}$ instead of $\mathbb{Z}_q$. Our protocols for proving short norms work over all (interesting) polynomial rings, but are particularly efficient for rings like $\mathbb{Z}[X]/(X^n+1)$ in which the function relating the inner product of vectors and polynomial products happens to be a ``nice'' automorphism.
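The coefficient trick described above is easy to check numerically: over $\mathbb{Z}[X]/(X^n+1)$, applying the automorphism $\sigma: X \mapsto X^{-1} = -X^{n-1}$ to $r$ makes the constant coefficient of $\sigma(r)\cdot s$ equal $\langle r, s\rangle$. A small sketch (illustrative code, not the paper's proof system):

```python
# Over Z[X]/(X^n + 1): the constant coefficient of sigma(r) * s equals the
# inner product <r, s>, where sigma is the automorphism X -> X^{-1}.
n = 8

def mul_negacyclic(a, b):
    """Product in Z[X]/(X^n + 1), i.e. negacyclic convolution."""
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                c[k] += ai * bj
            else:
                c[k - n] -= ai * bj          # reduce using X^n = -1
    return c

def sigma(a):
    """Apply X -> X^{-1}: since X^{-i} = -X^{n-i} mod (X^n + 1) for i >= 1,
    coefficient a_i moves to degree n - i with a sign flip."""
    out = [a[0]] + [0] * (n - 1)
    for i in range(1, n):
        out[n - i] -= a[i]
    return out

r = [1, -2, 0, 3, 1, 0, -1, 2]
s = [2, 1, 1, -1, 0, 4, 1, 1]
assert mul_negacyclic(sigma(r), s)[0] == sum(x * y for x, y in zip(r, s))
```

Each term $-r_i X^{n-i}\cdot s_i X^i$ reduces to $+r_i s_i X^0$, which is why only the matched indices survive in the constant coefficient.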
The new proof system can be plugged into constructions of various lattice-based privacy primitives in a black-box manner. As examples, we instantiate a verifiable encryption scheme and a group signature scheme which are more than twice as compact as the previously best solutions.We present a much-improved practical protocol, based on the hardness of Module-SIS and Module-LWE problems, for proving knowledge of a short vector $s$ satisfying $As=t\bmod q$. The currently most-efficient technique for constructing such a proof works by showing that the $\ell_\infty$ norm of $s$ is small. It creates a commitment to a polynomial vector $m$ whose CRT coefficients are the coefficients of $s$ and then shows that (1) $A\cdot \mathsf{CRT}(m)=t\bmod\,q$ and (2) in the case that we want to prove that the $\ell_\infty$ norm is at most $1$, the polynomial product $(m - 1)\cdot m\cdot(m+1)$ equals to $0$. While these schemes are already quite good for practical applications, the requirement of using the CRT embedding and only being naturally adapted to proving the $\ell_\infty$-norm, somewhat hinders the efficiency of this approach.
2022-03-07T11:54:41+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/642Statistical Effective Fault Attacks: The other Side of the Coin2022-05-25T13:13:38+00:00Navid VafaeiSara ZareiNasour BagheriMaria EichlsederRobert PrimasHadi SoleimanyThe introduction of Statistical Ineffective Fault Attacks (SIFA) has led to a renewed interest in fault attacks. SIFA requires minimal knowledge of the concrete implementation and is effective even in the presence of common fault or power analysis countermeasures. However, further investigations reveal that undesired and frequent ineffective events, which we refer to as the noise phenomenon, are the bottleneck of SIFA that can considerably diminish its strength. This includes noise associated with the attack’s setup and noise caused by the countermeasures utilized in the implementation. This research aims to address this significant drawback. We present two novel statistical fault attack variants that are far more successful in dealing with these noisy conditions. The first variant is the Statistical Effective Fault Attack (SEFA), which exploits the non-uniform distribution of intermediate variables in circumstances where the induced faults are effective. The idea behind the second proposed method, dubbed Statistical Hybrid Fault Attacks (SHFA), is to take advantage of the biased distributions of both effective and ineffective cases simultaneously.
Our experimental results in various case studies, including noise-free and noisy setups, back up our reasoning that SEFA surpasses SIFA in several instances and that SHFA outperforms both or is at least as efficient as the best of them.
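Concretely, attacks in this family rank key guesses by how far the distribution of a partially decrypted intermediate value deviates from uniform; a statistic commonly used for this in the SIFA literature is the Squared Euclidean Imbalance (SEI), which SEFA would apply over the effective-fault samples. A minimal sketch (the 4-bit domain and names are illustrative assumptions, not taken from this paper):

```python
from collections import Counter

def sei(samples, domain_size=16):
    """Squared Euclidean Imbalance of observed samples vs. the uniform
    distribution over a domain of the given size (e.g., one S-box nibble).
    A correct key guess yields a biased distribution, hence a larger SEI."""
    counts = Counter(samples)
    n = len(samples)
    uniform = 1 / domain_size
    return sum((counts.get(v, 0) / n - uniform) ** 2 for v in range(domain_size))

# A heavily biased sample set scores far above a perfectly uniform one.
assert sei([0] * 8) > sei(list(range(16)))
```

The key guess maximizing this statistic over the collected ciphertexts is taken as the candidate subkey.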
2022-05-25T03:18:46+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2020/1437Round-Optimal and Communication-Efficient Multiparty Computation2022-05-25T15:57:32+00:00Michele CiampiRafail OstrovskyHendrik WaldnerVassilis ZikasTypical approaches for minimizing the round complexity of multiparty computation (MPC) come at the cost of increased communication complexity (CC) or the reliance on setup assumptions. A notable exception is the recent work of Ananth et al. [TCC 2019], which used Functional Encryption (FE) combiners to obtain a round-optimal (two-round) semi-honest MPC in the plain model with a CC proportional to the depth and input-output length of the circuit being computed—we refer to such protocols as circuit scalable. This leaves open the question of obtaining communication-efficient protocols that are secure against malicious adversaries in the plain model, which we present in this work. Concretely, our two main contributions are:
1) We provide a round-preserving black-box compiler that compiles a wide class of MPC protocols into circuit-scalable maliciously secure MPC protocols in the plain model, assuming (succinct) FE combiners.
2) We provide a round-preserving black-box compiler that compiles a wide class of MPC protocols into circuit-independent— i.e., with a CC that depends only on the input-output length of the circuit—maliciously secure MPC protocols in the plain model, assuming Multi-Key Fully-Homomorphic Encryption (MFHE). Our constructions are based on a new compiler that turns a wide class of MPC protocols into k-delayed-input function MPC protocols (a notion we introduce), where the function that is being computed is specified only in the k-th round of the protocol.
As immediate corollaries of our two compilers, we derive (1) the first round-optimal and circuit-scalable maliciously secure MPC protocol, and (2) the first round-optimal and circuit-independent maliciously secure MPC protocol in the plain model. The latter achieves the best to-date CC for a round-optimal maliciously secure MPC protocol. In fact, it is even communication-optimal when the output size of the function being evaluated is smaller than its input size (e.g., for boolean functions). All of our results are based on standard polynomial time assumptions.
2020-11-15T12:17:15+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/1565Practical, Round-Optimal Lattice-Based Blind Signatures2022-05-26T04:31:57+00:00Shweta AgrawalElena KirshanovaDamien StehleAnshu YadavBlind signatures are a fundamental cryptographic primitive with numerous practical applications. While there exist many practical blind signatures from number-theoretic assumptions, the situation is far less satisfactory from post-quantum assumptions. In this work, we provide the first overall practical, lattice-based blind signature, supporting an unbounded number of signature queries and additionally enjoying optimal round complexity. We provide a detailed estimate of parameters achieved -- we obtain a signature of size less than 44KB, for a core-SVP hardness of 109 bits. The run-times of the signer, user and verifier are also very small.
Our scheme relies on the Gentry, Peikert and Vaikuntanathan signature [STOC'08] and non-interactive zero-knowledge proofs for linear relations with small unknowns, which are significantly more efficient than their general purpose counterparts. Its security stems from a new and arguably natural assumption which we introduce, called the one-more-ISIS assumption. This assumption can be seen as a lattice analogue of the one-more-RSA assumption by Bellare et al. [JoC'03]. To gain confidence in our assumption, we provide a detailed analysis of diverse attack strategies.
2021-12-02T02:40:34+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/136Twilight: A Differentially Private Payment Channel Network2022-05-26T08:39:05+00:00Maya DotanSaar TochnerAviv ZoharYossi GiladPayment channel networks (PCNs) provide a faster and cheaper alternative to transactions recorded on the blockchain. Clients can trustlessly establish payment channels with relays by locking coins and then send signed payments that shift coin balances over the network's channels. Although payments are never published, anyone can track a client's payment by monitoring changes in coin balances over the network's channels. We present Twilight, the first PCN that provides a rigorous differential privacy guarantee to its users.
Relays in Twilight run a noisy payment processing mechanism that hides the payments they carry. This mechanism increases the relay's cost, so Twilight combats selfish relays that wish to avoid it using a trusted execution environment (TEE) that ensures they follow its protocol.
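As background on the kind of guarantee involved: the standard way to make a published quantity differentially private is the Laplace mechanism, which perturbs it with noise scaled to sensitivity/epsilon. The sketch below is a generic Laplace-mechanism illustration of that idea, not Twilight's actual payment-processing mechanism:

```python
import math
import random

def laplace_noise(scale, u=None):
    """Sample Laplace(0, scale) by inverse-CDF transform of u in (-0.5, 0.5)."""
    if u is None:
        u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_balance(true_balance, sensitivity, epsilon, u=None):
    """Publish a channel balance perturbed to satisfy epsilon-DP with respect
    to one payment of magnitude at most `sensitivity` (names are illustrative)."""
    return true_balance + laplace_noise(sensitivity / epsilon, u)
```

Smaller epsilon means larger noise, i.e., stronger hiding of individual payments at the cost of less accurate balances; that is the privacy/cost trade-off the abstract refers to.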
The TEE does not store the channel's state, which minimizes the trusted computing base. Crucially, Twilight ensures that even if a relay breaks the TEE's security, it cannot break the integrity of the PCN. We analyze Twilight in terms of privacy and cost and study the trade-off between them. We implement Twilight using Intel's SGX framework and evaluate its performance using relays deployed on two continents. We show that a route consisting of 4 relays handles 820 payments/sec.
2022-02-09T08:58:26+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2020/756Provable Security Analysis of FIDO22022-05-26T11:52:25+00:00Manuel BarbosaAlexandra BoldyrevaShan ChenBogdan WarinschiWe carry out the first provable security analysis of the new FIDO2 protocols, the promising FIDO Alliance's proposal for a standard for passwordless user authentication. Our analysis covers the core components of FIDO2: the W3C’s Web Authentication (WebAuthn) specification and the new Client-to-Authenticator Protocol (CTAP2).
Our analysis is modular. For WebAuthn and CTAP2, in turn, we propose appropriate security models that aim to capture their intended security goals and use the models to analyze their security. First, our proof confirms the authentication security of WebAuthn. Then, we show CTAP2 can only be proved secure in a weak sense; meanwhile we identify a series of its design flaws and provide suggestions for improvement. To withstand stronger yet realistic adversaries, we propose a generic protocol called sPACA and prove its strong security; with proper instantiations sPACA is also more efficient than CTAP2. Finally, we analyze the overall security guarantees provided by FIDO2 and WebAuthn+sPACA based on the security of its components.
We expect that our models and provable security results will help clarify the security guarantees of the FIDO2 protocols. In addition, we advocate the adoption of our sPACA protocol as a substitute for CTAP2, for both stronger security and better performance.
2020-06-21T17:42:08+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/115GMHL: Generalized Multi-Hop Locks for Privacy-Preserving Payment Channel Networks2022-05-26T13:15:28+00:00Zilin LiuAnjia YangJian WengTao LiHuang ZengXiaojian LiangPayment channel networks (PCNs), which not only improve the transaction throughput of blockchains but also enable cross-chain payments, are a very promising solution to the blockchain scalability problem. Most existing PCN constructions focus on either atomicity or privacy properties. Moreover, they are built on specific scripting features of the underlying blockchain such as HTLC, or are tailored to specific signature algorithms like ECDSA and Schnorr. In this work, we devise Generalized Multi-Hop Locks (GMHL) based on an adaptor signature and a randomizable puzzle, which supports both atomicity and privacy preservation (unlinkability). We instantiate GMHL with a concrete design that relies on a Guillou-Quisquater-based adaptor signature and a novel RSA-based randomizable puzzle. Furthermore, we present a generic PCN construction based on GMHL, and formally prove its security in the universal composability framework. This construction only requires the underlying blockchain to perform signature verification, and thus can be applied to various (non-/Turing-complete) blockchains. Finally, we simulate the proposed GMHL instance and compare it with other protocols.
The results show that our construction achieves efficiency comparable to other constructions while retaining the desired functionalities.2022-01-31T07:56:46+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2021/303The More The Merrier: Reducing the Cost of Large Scale MPC2022-05-26T18:29:41+00:00S. Dov GordonDaniel StarinArkady YerukhimovichSecure multi-party computation (MPC) allows multiple parties to perform secure joint computations on their private inputs. Today, applications for MPC are growing, with thousands of parties wishing to build federated machine learning models or trusted setups for blockchains.
To address such scenarios we propose a suite of novel MPC protocols that maximize throughput when run with large numbers of parties. In particular, our protocols have both communication and computation complexity that decrease with the number of parties. Our protocols build on prior protocols based on packed secret-sharing, introducing new techniques to build more efficient computation for general circuits. Specifically, we introduce a new approach for handling linear attacks that arise in protocols using packed secret-sharing and we propose a method for unpacking shared multiplication triples without increasing the asymptotic costs. Compared with prior work, we avoid the $\log |C|$ overhead required when generically compiling circuits of size $|C|$ for use in a SIMD computation, and we improve over folklore ``committee-based'' solutions by a factor of $O(s)$, the statistical security parameter. In practice, our protocol is up to $10X$ faster than any known construction, under a reasonable set of parameters.
2021-03-09T13:47:55+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/614PPRKS: A Privacy Preserving Range Keyword Search Scheme2022-05-27T01:12:03+00:00Yu ZhangZongbin WangTihong QinPrivacy preserving keyword search (PPKS) is investigated in this paper, which aims to ensure the privacy of clients and servers when a database is accessed. Range query has been recognized as a common operation in databases. In this paper, a formal definition of PPKS supporting range query is given, and a scheme (PPRKS) is presented based on Paillier’s cryptosystem. To the best of our knowledge, PPRKS is the only existing scheme that can effectively preserve the privacy of range keyword search. Moreover, it is demonstrated that the security of PPRKS depends on the semantic security of Paillier’s cryptosystem. A detailed performance analysis and a simulation are conducted to verify the practicality of PPRKS. As revealed by the theoretical analysis and the experimental results, the proposed scheme is practical.
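The algebraic property of Paillier's cryptosystem that such constructions rely on is additive homomorphism: multiplying ciphertexts adds the underlying plaintexts. A toy textbook Paillier with deliberately insecure 8-bit primes, for illustration only (this is not the PPRKS protocol itself):

```python
# Toy textbook Paillier (insecure parameters, illustration only).
import math
import random

p, q = 17, 19
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m):
    """Encrypt m < n with fresh randomness r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds plaintexts.
assert dec((enc(5) * enc(7)) % n2) == 12
```

Semantic security of the scheme rests on the decisional composite residuosity assumption, which is why the security of a construction built on it reduces to that assumption.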
2022-05-23T08:24:12+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2020/1488General Properties of Quantum Bit Commitments2022-05-27T07:44:22+00:00Jun YanWhile unconditionally-secure quantum bit commitment (allowing both quantum computation and communication) is impossible, researchers have turned to studying the complexity-based one. A complexity-based canonical (non-interactive) quantum bit commitment scheme is one in which the commitment consists of just a single (quantum) message from the sender to the receiver that can be opened later by uncomputing the commit stage. In this work, we study general properties of complexity-based quantum bit commitments through the lens of canonical quantum bit commitments. Among other results, we in particular obtain the following two:
1. Any complexity-based quantum bit commitment scheme can be converted into the canonical (non-interactive) form (with its sum-binding property preserved).
2. Two flavors of canonical quantum bit commitments are equivalent; that is, canonical computationally-hiding statistically-binding quantum bit commitment exists if and only if the canonical statistically-hiding computationally-binding one exists. Combining this result with the first one, it immediately implies (unconditionally) that complexity-based quantum bit commitment is symmetric.
Canonical quantum bit commitments can be based on quantum-secure one-way functions or pseudorandom quantum states. But in our opinion, the formulation of canonical quantum bit commitment is so clean and simple that itself can be viewed as a plausible complexity assumption as well. We propose to explore canonical quantum bit commitment from perspectives of both quantum cryptography and quantum complexity theory in the future.
2020-11-29T19:14:51+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2020/1510Quantum Computationally Predicate-Binding Commitments with Application in Quantum Zero-Knowledge Arguments for NP2022-05-27T07:47:36+00:00Jun YanA quantum bit commitment scheme realizes bit (rather than qubit) commitment by exploiting quantum communication and quantum computation. In this work, we study the binding property of the quantum string commitment scheme obtained by composing a generic quantum perfectly(resp. statistically)-hiding computationally-binding bit commitment scheme (which can be realized based on quantum-secure one-way permutations(resp. functions)) in parallel. We show that the resulting scheme satisfies a stronger quantum computational binding property, which we call predicate-binding, than the trivial honest-binding. Intuitively and very roughly, the predicate-binding property guarantees that given any inconsistent predicate pair over a set of strings (i.e. no string in this set can satisfy both predicates), if a (claimed) quantum commitment can be opened so that the revealed string satisfies one predicate with certainty, then the same commitment cannot be opened so that the revealed string satisfies the other predicate (except with negligible probability).
As an application, we plug a generic quantum perfectly(resp. statistically)-hiding computationally-binding bit commitment scheme into Blum's zero-knowledge protocol for the NP-complete language Hamiltonian Cycle. The quantum computational soundness of the resulting protocol follows immediately from the quantum computational predicate-binding property of commitments. Combined with the perfect(resp. statistical) zero-knowledge property, which can be similarly established as in previous work, this gives rise to the first quantum perfect(resp. statistical) zero-knowledge argument system (with soundness error 1/2) for all NP languages based solely on quantum-secure one-way permutations(resp. functions).
2020-12-02T10:07:49+00:00https://creativecommons.org/licenses/by/4.0/https://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2022/183Improving Differential-Neural Cryptanalysis with Inception2022-05-27T11:41:04+00:00Liu ZhangZilong WangBaocang WangBoyang WangIn CRYPTO'19, Gohr proposed a new cryptanalysis method by building differential-neural distinguishers with neural networks. Gohr combined a differential-neural distinguisher with a classical differential path and achieved a 12-round (out of 22) key recovery attack on Speck32/64. Chen and Yu improved the accuracy of differential-neural distinguishers by considering derived features from multiple-ciphertext pairs. Bao et al. enhanced the classical differential path by generalizing the concept of neutral bits, thus launching key recovery attacks on 13-round Speck32/64 and 16-round (out of 32) Simon32/64.
Our focus is on improving the accuracy of the distinguisher and on training distinguishers for more rounds using deep learning methods. To capture information in more dimensions, we place multiple parallel convolutional layers with kernels of different sizes in front of the Residual Network used to train the differential-neural distinguisher, inspired by the Inception blocks of GoogLeNet. For Speck32/64, we obtain a 9-round differential-neural distinguisher and significantly improve the accuracy of the 6-, 7-, and 8-round distinguishers. For Simon32/64, we obtain a 12-round differential-neural distinguisher and significantly improve the accuracy of the 9-, 10-, and 11-round distinguishers. In addition, we use neutral bits to ensure that the multiple-ciphertext pairs fed to the neural network follow the same distribution, which is required to successfully launch a key recovery attack.
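The Inception-style front end described in the abstract can be pictured roughly as follows. This is an illustrative NumPy sketch with random stand-in weights, not the authors' trained network: several parallel 1-D convolutions with different kernel sizes are applied to the same input, and their feature maps are stacked as channels before a residual tower would consume them.

```python
import numpy as np

def conv1d(x, kernel):
    # 'same'-padded 1-D convolution of a single channel
    pad = len(kernel) // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(xp[i:i + len(kernel)], kernel)
                     for i in range(len(x))])

def inception_front(x, kernel_sizes=(1, 3, 5), rng=None):
    """Apply parallel convolutions with different kernel sizes to the
    same input and stack the resulting feature maps as channels."""
    rng = np.random.default_rng(0) if rng is None else rng
    outputs = []
    for k in kernel_sizes:
        kernel = rng.standard_normal(k)  # random weights stand in for trained ones
        outputs.append(conv1d(x, kernel))
    return np.stack(outputs)             # shape: (len(kernel_sizes), len(x))

# A 32-bit ciphertext-difference block, as bits:
block = np.array([1, 0, 1, 1] * 8, dtype=float)
features = inception_front(block)
print(features.shape)  # → (3, 32)
```

The point of the parallel kernels is that each size sees bit patterns at a different scale; concatenating them lets the subsequent residual layers pick up multi-scale features that a single kernel size would miss.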
Under the combined effect of these improvements, the time complexity of our 11-, 12-, and 13-round key recovery attacks on Speck32/64 is decreased. Moreover, the success rate of our 12-round key recovery attack reaches 100% over 98 trials. For Simon32/64, we implement a 17-round key recovery attack using the deep learning method for the first time, and we also decrease the time complexity of the 16-round key recovery attack.
2022-02-20T20:16:45+00:00 https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2020/1450 Subversion-Resilient Enhanced Privacy ID 2022-05-27T12:53:18+00:00 Antonio Faonio, Dario Fiore, Luca Nizzardo, Claudio Soriente
Anonymous attestation for secure hardware platforms leverages tailored group signature schemes and assumes the hardware to be trusted.
Yet, there is ever-increasing concern about the trustworthiness of hardware components and embedded systems. Subverted hardware may, for example, use its signatures to exfiltrate identifying information or even the signing key.
In this paper we focus on Enhanced Privacy ID (EPID)---a popular anonymous attestation scheme used in commodity secure hardware platforms like Intel SGX.
We define and instantiate a \emph{subversion resilient} EPID scheme (or SR-EPID).
In a nutshell, SR-EPID provides the same functionality and security guarantees as the original EPID, despite potentially subverted hardware.
In our design, a ``sanitizer'' ensures no covert channel between the hardware and the outside world both during enrollment and during attestation (i.e., when signatures are produced). We design a practical SR-EPID scheme secure against adaptive corruptions and based on a novel combination of malleable NIZKs and hash functions modeled as random oracles.
Our approach has a number of advantages over alternative designs.
Namely, the sanitizer bears no secret information---hence, a memory leak does not erode security. Further, the role of sanitizer may be distributed in a cascade fashion among several parties so that sanitization becomes effective as long as one of the parties has access to a good source of randomness.
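The cascade idea can be illustrated with a toy sketch. This is our own illustration, not the SR-EPID protocol; the modulus and blinding are hypothetical. Each sanitizer in the chain multiplicatively re-randomizes a value, and the output is uniformly blinded as long as at least one sanitizer drew its factor from a good source of randomness.

```python
import random

P = 1_000_003  # toy prime modulus, chosen for illustration only

def sanitize(value, rng):
    """One sanitizer in the cascade: blind the value by a fresh random factor."""
    r = rng.randrange(1, P)
    return (value * r) % P

value = 424242
# Two sanitizers with predictable (subverted) randomness, one honest:
sanitizers = [random.Random(0), random.Random(1), random.SystemRandom()]
for rng in sanitizers:
    value = sanitize(value, rng)
# The product of the blinding factors is uniform in Z_P^* because the honest
# sanitizer's factor is, so a single good sanitizer suffices.
assert 0 < value < P
```

Note how no sanitizer holds any long-term secret: each only contributes fresh randomness, which matches the claim that a memory leak at a sanitizer does not erode security.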
Also, we keep the signing protocol non-interactive, thereby minimizing latency during signature generation.
2020-11-19T09:42:02+00:00 https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/257 Guaranteed Output in $O(\sqrt{n})$ Rounds for Round-Robin Sampling Protocols 2022-05-27T13:33:43+00:00 Ran Cohen, Jack Doerner, Yashvanth Kondi, abhi shelat
We introduce a notion of round-robin secure sampling that captures several protocols in the literature, such as the "powers-of-tau" setup protocol for pairing-based polynomial commitments and zk-SNARKs, and certain verifiable mixnets.
Due to their round-robin structure, protocols of this class inherently require $n$ sequential broadcast rounds, where $n$ is the number of participants.
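For intuition, a round-robin powers-of-tau-style update can be sketched as follows. This is a toy sketch over integers modulo a prime with made-up secrets; real ceremonies run over pairing-friendly elliptic-curve groups. Each participant in turn raises the $i$-th transcript element to the $i$-th power of a fresh secret, so the final transcript encodes powers of the product of all contributions.

```python
# Toy round-robin "powers-of-tau" update over integers modulo a Mersenne prime.
P = 2**61 - 1          # prime modulus; exponents reduce modulo P - 1 (Fermat)
G = 3                  # fixed base element

def contribute(transcript, secret):
    """One participant's turn: element i goes from g^(tau^i) to g^((tau*secret)^i)."""
    return [pow(h, pow(secret, i, P - 1), P)
            for i, h in enumerate(transcript)]

degree = 4
transcript = [G] * (degree + 1)      # initial tau = 1: every element is g
secrets = (12345, 67890, 24680)      # made-up contributions, one per party
for s in secrets:                    # inherently sequential: one round per party
    transcript = contribute(transcript, s)

# The transcript now encodes powers of tau = product of all secrets.
tau = 12345 * 67890 * 24680
assert transcript == [pow(G, pow(tau, i, P - 1), P) for i in range(degree + 1)]
```

Each contribution must see the previous party's output, which is exactly why the naive protocol needs $n$ sequential broadcast rounds.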
We describe how to compile them generically into protocols that require only $O(\sqrt{n})$ broadcast rounds. Our compiled protocols guarantee output delivery against any dishonest majority. This stands in contrast to prior techniques, which require $\Omega(n)$ sequential broadcasts in most cases (and sometimes many more). Our compiled protocols permit a certain amount of adversarial bias in the output, as all sampling protocols with guaranteed output must, due to Cleve's impossibility result (STOC'86). We show that in the context of the aforementioned applications, this bias is harmless.
2022-03-02T14:01:32+00:00 https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2021/1450 Efficient Zero-Knowledge Argument in Discrete Logarithm Setting: Sublogarithmic Proof or Sublinear Verifier 2022-05-28T05:38:08+00:00 Sungwook Kim, Hyeonbum Lee, and Jae Hong Seo
We propose three interactive zero-knowledge arguments for arithmetic circuits of size $N$ in the common random string model, all of which can be made non-interactive via the Fiat-Shamir heuristic in the random oracle model. The first argument features $O(\sqrt{\log N})$ communication and round complexity and $O(N)$ computational complexity for the verifier. The second argument features $O(\log N)$ communication and $O(\sqrt{N})$ computational complexity for the verifier. The third argument features $O(\log N)$ communication and $O(\sqrt{N}\log N)$ computational complexity for the verifier. Unlike the first and second arguments, the third argument does not rely on pairing-friendly elliptic curves.
The soundness of all three arguments is proven under the standard discrete logarithm and/or the double pairing assumption, which is at least as reliable as the decisional Diffie-Hellman assumption.
2021-10-29T18:30:54+00:00 https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/645 Round-Optimal Multi-Party Computation with Identifiable Abort 2022-05-28T09:04:44+00:00 Michele Ciampi, Divya Ravi, Luisa Siniscalchi, and Hendrik Waldner
Secure multi-party computation (MPC) protocols that are resilient to a dishonest majority allow the adversary to obtain the output of the computation while, at the same time, forcing the honest parties to abort. Aumann and Lindell introduced the enhanced notion of security with identifiable abort, which still allows the adversary to trigger an abort but, at the same time, enables the honest parties to agree on the identity of the party that led to the abort. More recently, in Eurocrypt 2016, Garg et al.
showed that, assuming access to a simultaneous message exchange channel for all the parties, at least four rounds of communication are required to securely realize non-trivial functionalities in the plain model.
Following Garg et al., a sequence of works has matched this lower bound, but none of them achieved security with identifiable abort. In this work, we close this gap and show that four rounds of communication are also sufficient to securely realize any functionality with identifiable abort using standard and generic polynomial-time assumptions. To achieve this result, we introduce the new notion of bounded-rewind secure MPC, which guarantees security even against an adversary that performs a mild form of reset attacks. We show how to instantiate this primitive starting from any MPC protocol and assuming trapdoor permutations.
The notion of bounded-rewind secure MPC allows for easier parallel composition of MPC protocols with other (interactive) cryptographic primitives. Therefore, we believe that this primitive can be useful in other contexts in which it is crucial to combine multiple primitives with MPC protocols while keeping the round complexity of the final protocol low.
2022-05-25T15:46:04+00:00 https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2021/1486 Mitaka: a simpler, parallelizable, maskable variant of Falcon 2022-05-28T14:30:28+00:00 Thomas Espitau, Pierre-Alain Fouque, François Gérard, Mélissa Rossi, Akira Takahashi, Mehdi Tibouchi, Alexandre Wallet, and Yang Yu
This work describes the Mitaka signature scheme: a new hash-and-sign
signature scheme over NTRU lattices which can be seen as a variant of
NIST finalist Falcon. It achieves comparable efficiency but is
considerably simpler, online/offline, and easier to parallelize and
protect against side-channels, thus offering significant advantages from
an implementation standpoint. It is also much more versatile in terms of
parameter selection.
We obtain this signature scheme by replacing the FFO lattice Gaussian
sampler in Falcon by the ``hybrid'' sampler of Ducas and Prest, for
which we carry out a detailed and corrected security analysis. In
principle, such a change can result in a substantial security loss, but
we show that this loss can be largely mitigated using new techniques in
key generation that allow us to construct much higher quality lattice
trapdoors for the hybrid sampler relatively cheaply. This new approach
can also be instantiated on a wide variety of base fields, in contrast
with Falcon's restriction to power-of-two cyclotomics.
We also introduce a new lattice Gaussian sampler with the same quality
and efficiency, but which is moreover compatible with the integral matrix
Gram root technique of Ducas et al., allowing us to avoid floating point
arithmetic. This makes it possible to realize the same signature
scheme as Mitaka efficiently on platforms with poor support for
floating point numbers.
Finally, we describe a provably secure masking of Mitaka. More precisely,
we introduce novel gadgets that allow provable masking at any order at much
lower cost than previous masking techniques for Gaussian sampling-based
signature schemes, for cheap and dependable side-channel protection.
2021-11-15T12:45:55+00:00 https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/646 Faster Non-interactive Verifiable Computing 2022-05-28T16:02:53+00:00 Pascal Lafourcade, Gael Marcadet, and Léo Robert
In 1986, A. Yao introduced the notion of garbled circuits, designed to verify the correctness of computations performed on an untrusted server. However, correctness is guaranteed for only one input, meaning that a new garbled circuit must be created for each new input. To address this drawback, in 2010 Gennaro et al. evaluated the garbled circuit homomorphically using a fully homomorphic encryption scheme, allowing the same garbled circuit to be reused for new inputs. Their solution, however, requires encrypting the garbled circuit for every new input. In this paper, we propose a verifiable-computation scheme that verifies the correctness of computations performed by an untrusted server for multiple inputs, where the garbled circuit is homomorphically encrypted only once. Hence, our scheme is faster than Gennaro et al.'s solution: for each new input, we save computation proportional to the size of the circuit representing the function to be computed, at the same security level. The key to this speed-up is to rely on Multi-Key Homomorphic Encryption (MKHE), which lets us encrypt the garbled circuit only once.
2022-05-25T15:58:19+00:00 https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/647 Quantum Implementation and Analysis of DEFAULT 2022-05-28T16:03:52+00:00 Kyungbae Jang, Anubhab Baksi, Jakub Breier, Hwajeong Seo, and Anupam Chattopadhyay
In this paper, we present the quantum implementation and analysis of the recently proposed block cipher DEFAULT. DEFAULT consists of two components, namely DEFAULT-LAYER and DEFAULT-CORE. Two instances of DEFAULT-LAYER are used, one before and one after DEFAULT-CORE (the so-called `sandwich construction').
We discuss the various choices made to keep down the cost of the basic quantum circuit and of the Grover oracle search, and compare the resulting costs with the levels of quantum security specified by the United States' National Institute of Standards and Technology (NIST). All in all, our work fits nicely into the research trend of assessing the quantum vulnerability of symmetric-key ciphers.
2022-05-25T16:13:31+00:00 https://creativecommons.org/licenses/by-nc-sa/4.0/
https://eprint.iacr.org/2022/648 Dynamic Searchable Encryption with Optimal Search in the Presence of Deletions 2022-05-28T16:04:54+00:00 Javad Ghareh Chamani, Dimitrios Papadopoulos, Mohammadamin Karbasforushan, and Ioannis Demertzis
We focus on the problem of Dynamic Searchable Encryption (DSE) with efficient (optimal/quasi-optimal) search in the presence of deletions. Towards that end, we first propose OSSE, the first DSE scheme that achieves asymptotically optimal search time, linear in the result size and independent of any prior deletions, improving the previous state of the art by a multiplicative logarithmic factor. We then propose our second scheme, LLSE, which achieves a sublogarithmic search overhead ($\log\log i_w$, where $i_w$ is the number of prior insertions for a keyword) compared to the optimum achieved by OSSE. While this is slightly worse than our first scheme, it still outperforms prior works, while also achieving faster deletions and asymptotically smaller server storage. Both schemes have standard leakage profiles and are forward-and-backward private. Our experimental evaluation is very encouraging, as it shows that our schemes consistently outperform the prior state-of-the-art DSE by $1.3$-$6.4\times$ in search computation time, while requiring just a single roundtrip to receive the search result.
Even compared with prior, simpler, and very efficient constructions in which all deleted records are returned as part of the result, our OSSE achieves better performance for deletion rates ranging from 45-55%, while the previous state-of-the-art quasi-optimal scheme achieves this only for 65-75% deletion rates.
2022-05-25T18:22:22+00:00 https://creativecommons.org/licenses/by/4.0/
https://eprint.iacr.org/2022/649 IBE with Incompressible Master Secret and Small Identity Secrets 2022-05-28T16:05:33+00:00 Nico Döttling, Sanjam Garg, Sruthi Sekar, and Mingyuan Wang
Side-stepping the protection provided by cryptography, exfiltration attacks are becoming a considerable real-world threat. With the goal of mitigating the exfiltration of cryptographic keys, big-key cryptosystems have been developed over the past few years. These systems come with very large secret keys which are thus hard to exfiltrate. Typically, in such systems, the setup time must be large, as it generates the large secret key. However, the encryption and decryption operations, which must be performed repeatedly, are required to be efficient: the encryption uses only a small public key, and the decryption accesses only small ciphertext-dependent parts of the full secret key. Nonetheless, these schemes require decryption to have access to the entire secret key. Thus, using such big-key cryptosystems necessitates that users carry around large secret keys on their devices, which can be a hassle and in some cases might also render exfiltration easy.
With the goal of removing this problem, in this work we initiate the study of big-key identity-based encryption (bk-IBE). In such a system, the master secret key is allowed to be large, but we require the identity-based secret keys to be short. This allows users to use the short identity-based keys as ephemeral secret keys that can be carried around more easily and that allow decrypting ciphertexts matching a particular identity, e.g. messages that were encrypted on a particular date. In particular:
-We build a new definitional framework for bk-IBE capturing a range of applications. In the case when the exfiltration is small, our definition promises stronger security --- namely, an adversary can break semantic security for only a few identities, proportional to the amount of leakage it gets. In contrast, in the catastrophic case where a large fraction of the master secret key has been exfiltrated, we can still resort to the guarantee that ciphertexts generated for a randomly chosen identity (or an identity with enough entropy) remain protected. We demonstrate how this framework captures the best possible security guarantees.
-We show the first construction of such a bk-IBE offering strong security properties. Our construction is based on standard assumptions on groups with bilinear pairings and brings together techniques from seemingly different contexts such as leakage resilient cryptography, reusable two-round MPC, and laconic oblivious transfer. We expect our techniques to be of independent interest.Side-stepping the protection provided by cryptography, exfiltration attacks are becoming a considerable real-world threat. With the goal of mitigating the exfiltration of cryptographic keys, big-key cryptosystems have been developed over the past few years. These systems come with very large secret keys which are thus hard to exfiltrate. Typically, in such systems, the setup time must be large as it generates the large secret key. However, subsequently, the encryption and decryption operations, that must be performed repeatedly, are required to be efficient. Specifically, the encryption uses only a small public key and the decryption only accesses small ciphertext-dependent parts of the full secret key. Nonetheless, these schemes require decryption to have access to the entire secret key. Thus, using such big-key cryptosystems necessitate that users carry around large secret keys on their devices, which can be a hassle and in some cases might also render exfiltration easy.
With the goal of removing this problem, in this work, we initiate the study of big-key identity-based encryption (bk-IBE). In such a system, the master secret key is allowed to be large but we require that the identity-based secret keys are short. This allows users to use the identity-based short keys as the ephemeral secret keys that can be more easily carried around and allow for decrypting ciphertexts matching a particular identity, e.g. messages that were encrypted on a particular date. In particular:
-We build a new definitional framework for bk-IBE capturing a range of applications. In the case when the exfiltration is small our definition promises stronger security --- namely, an adversary can break semantic security for only a few identities, proportional to the amount of leakage it gets. In contrast, in the catastrophic case where a large fraction of the master secret key has been ex-filtrated, we can still resort to a guarantee that the ciphertexts generated for a randomly chosen identity (or, an identity with enough entropy) remain protected. We demonstrate how this framework captures the best possible security guarantees.
- We show the first construction of such a bk-IBE offering strong security properties. Our construction is based on standard assumptions on groups with bilinear pairings and brings together techniques from seemingly different contexts, such as leakage-resilient cryptography, reusable two-round MPC, and laconic oblivious transfer. We expect our techniques to be of independent interest.

2022-05-25T21:33:28+00:00
https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2022/650
Supersingular Non-Superspecial Abelian Surfaces in Cryptography
2022-05-28T16:07:26+00:00
Jason T. LeGrow, Yan Bo Ti, and Lukas Zobernig

We consider the use of supersingular abelian surfaces in cryptography. Several generalisations of well-known cryptographic schemes and constructions based on supersingular elliptic curves to the 2-dimensional setting of superspecial abelian surfaces have been proposed. The computational assumptions in the superspecial 2-dimensional case can be reduced to the corresponding 1-dimensional problems via a product decomposition, by observing that every superspecial abelian surface is non-simple and separably isogenous to a product of supersingular elliptic curves. Instead, we propose to use supersingular non-superspecial isogeny graphs, where such a product decomposition does not have a computable description via separable isogenies. We study the advantages and investigate the security concerns of the move to supersingular non-superspecial abelian surfaces.

2022-05-26T01:23:01+00:00
https://creativecommons.org/licenses/by-sa/4.0/

https://eprint.iacr.org/2022/651
Revisiting the Efficiency of Asynchronous Multi Party Computation Against General Adversaries
2022-05-28T16:08:09+00:00
Ananya Appan, Anirudh Chandramouli, and Ashish Choudhury

In this paper, we design secure multi-party computation (MPC) protocols in the asynchronous communication setting with optimal resilience. Our protocols are secure against a computationally unbounded malicious adversary, characterized by an adversary structure $\mathcal{Z}$, which enumerates all possible subsets of potentially corrupt parties. Our protocols incur a communication cost of $\mathcal{O}(|\mathcal{Z}|^2)$ and $\mathcal{O}(|\mathcal{Z}|)$ bits per multiplication for perfect and statistical security, respectively. These are the first protocols with this communication complexity; such protocols were previously known only in the synchronous communication setting (Hirt and Tschudi, ASIACRYPT 2013).

2022-05-26T05:42:10+00:00
https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2022/419
Dew: Transparent Constant-sized zkSNARKs
2022-05-28T19:52:23+00:00
Arasu Arun, Chaya Ganesh, Satya Lokam, Tushar Mopuri, and Sriram Sridhar

We construct polynomial commitment schemes
with constant-sized evaluation proofs and logarithmic verification time in the transparent setting. To the best of our knowledge, this is the first result achieving this combination of properties.
Our starting point is a transparent inner product commitment scheme with constant-sized proofs and linear verification. We build on this to construct a polynomial commitment scheme with constant-sized evaluation proofs and logarithmic (in the degree of the polynomial) verification time. Our constructions make use of groups of unknown order, instantiated by class groups. We prove security of our construction in the Generic Group Model (GGM).
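The basic idea of committing to an entire polynomial with a single group element in a group of unknown order can be sketched roughly as follows. This is a minimal toy illustration, not the construction of this paper: it uses the DARK-style trick of encoding the coefficients as one large integer in base $q$, substitutes a small RSA-style modulus for a class group, and omits evaluation proofs entirely. All concrete values (the polynomial, `q`, `N`, `g`) are hypothetical.

```python
# Toy sketch: commit to a polynomial as one element of a group of
# unknown order.  NOT secure and NOT this paper's scheme -- the modulus
# below is tiny and its factorisation is visible; real constructions
# use class groups and add succinct evaluation proofs.

def encode(coeffs, q):
    """Encode coefficients [c0, c1, ...] as the single integer f(q),
    i.e. write them as base-q digits (requires 0 <= c_i < q)."""
    return sum(c * q**i for i, c in enumerate(coeffs))

def decode(z, q, degree):
    """Recover the coefficients from the integer encoding."""
    coeffs = []
    for _ in range(degree + 1):
        z, c = divmod(z, q)
        coeffs.append(c)
    return coeffs

# f(X) = 3 + 5X + 2X^2, with coefficients bounded by q = 1000
coeffs = [3, 5, 2]
q = 1000
z = encode(coeffs, q)       # 2_005_003: digits of z in base q are the coeffs
assert decode(z, q, 2) == coeffs

# The commitment is a single group element, independent of the degree.
N = 3233 * 7919             # toy stand-in for a group of unknown order
g = 2
C = pow(g, z, N)            # constant-size commitment to the whole polynomial
```

The missing (and hard) part is a succinct proof that `z` really encodes a polynomial with bounded coefficients and that a claimed evaluation is consistent with `C`; that is precisely where the extractability subtleties in this line of work arise.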
Using our polynomial commitment scheme to compile an information-theoretic proof system yields Dew -- a transparent and constant-sized zkSNARK (Zero-knowledge Succinct Non-interactive ARguments of Knowledge) with logarithmic verification.
Finally, we show how to recover the result of DARK (Bünz et al., Eurocrypt 2020). DARK presented a succinct transparent polynomial commitment scheme with logarithmic proof size and verification. However, it was recently discovered to have a gap in its security proof (Block et al., CRYPTO 2021).
We recover its extractability based on our polynomial commitment construction, thus obtaining a transparent polynomial commitment scheme with logarithmic proof size and verification under the same assumptions as DARK, but with a prover time that is quadratic.

2022-04-06T12:59:35+00:00
https://creativecommons.org/licenses/by/4.0/