All papers in 2020 (Page 16 of 1620 results)

Last updated:  2020-11-16
The randomized slicer for CVPP: sharper, faster, smaller, batchier
Léo Ducas, Thijs Laarhoven, Wessel P. J. van Woerden
Following the recent line of work on solving the closest vector problem with preprocessing (CVPP) using approximate Voronoi cells, we improve upon previous results in the following ways:
- We derive sharp asymptotic bounds on the success probability of the randomized slicer, by modelling the behaviour of the algorithm as a random walk on the coset of the lattice of the target vector. We thereby solve the open question left by Doulgerakis--Laarhoven--De Weger [PQCrypto 2019] and Laarhoven [MathCrypt 2019].
- We obtain better trade-offs for CVPP and its generalisations (strictly, in certain regimes), both with and without nearest neighbour searching, as a direct result of the above sharp bounds on the success probabilities.
- We show how to reduce the memory requirement of the slicer, and in particular the corresponding nearest neighbour data structures, using ideas similar to those proposed by Becker--Gama--Joux [Cryptology ePrint Archive, 2015]. Using $2^{0.185d + o(d)}$ memory, we can solve a single CVPP instance in $2^{0.264d + o(d)}$ time.
- We further improve on the per-instance time complexities in certain memory regimes, when we are given a sufficiently large batch of CVPP problem instances for the same lattice. Using $2^{0.208d + o(d)}$ memory, we can heuristically solve CVPP instances in $2^{0.234d + o(d)}$ amortized time, for batches of size at least $2^{0.058d + o(d)}$.
Our random walk model for analysing arbitrary-step transition probabilities in complex step-wise algorithms may be of independent interest, both for deriving analytic bounds through convexity arguments and for computing optimal paths numerically with a shortest path algorithm. As a side result, we apply the same random walk model to graph-based nearest neighbour searching, where we improve upon results of Laarhoven [SOCG 2018] by deriving sharp bounds on the success probability of the corresponding greedy search procedure.
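The idea of computing optimal paths numerically with a shortest path algorithm can be illustrated on a toy transition graph: maximizing a product of step probabilities is the same as minimizing the sum of their negative logarithms, so Dijkstra's algorithm applies directly. The graph and probabilities below are purely illustrative and are not taken from the paper.

```python
import heapq
import math

def most_likely_path(graph, start, goal):
    """Dijkstra on -log(probability): the shortest path under these
    weights is the most probable path under the original probabilities."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, p in graph.get(u, []):
            nd = d - math.log(p)  # multiplying probabilities = adding -log weights
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    path.reverse()
    return path, math.exp(-dist[goal])

# Toy state graph: edges carry single-step success probabilities.
graph = {
    "A": [("B", 0.5), ("C", 0.2)],
    "B": [("C", 0.5)],
}
path, prob = most_likely_path(graph, "A", "C")
print(path, round(prob, 3))  # the two-step route (prob 0.25) beats the direct one (0.2)
```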
Last updated:  2020-06-23
Hardness of LWE on General Entropic Distributions
Zvika Brakerski, Nico Döttling
The hardness of the Learning with Errors (LWE) problem is by now a cornerstone of the cryptographic landscape. In many of its applications the so-called ``LWE secret'' is not sampled uniformly, but comes from a distribution with some min-entropy. This variant, known as ``Entropic LWE'', has been studied in a number of works, starting with Goldwasser et al. (ICS 2010). However, so far it was only known how to prove the hardness of Entropic LWE for secret distributions supported inside a ball of small radius. In this work we resolve the hardness of Entropic LWE with arbitrarily long secrets, in the following sense. We show an entropy bound that guarantees the security of arbitrary Entropic LWE. This bound is higher than what is required in the ball-bounded setting, but we show that it is essentially tight. Tightness is shown unconditionally for highly composite moduli, and via a black-box impossibility result for arbitrary moduli. Technically, we show that the entropic hardness of LWE relies on a simple-to-describe lossiness property of the distribution of secrets itself: the probability of recovering a sample $s$ from this distribution, given $s+e$, where $e$ is Gaussian noise (i.e., the quality of the distribution of secrets as an error-correcting code for Gaussian noise). We hope that this characterization will make it easier to derive Entropic LWE results in the future. We also use our techniques to show new results for the ball-bounded setting, essentially showing that, under a strong enough assumption, even polylogarithmic entropy suffices.
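The lossiness quantity described above — how well the secret distribution behaves as an error-correcting code for Gaussian noise — can be estimated numerically for a toy distribution. The two-point support and noise width below are arbitrary choices for illustration, not parameters from the paper.

```python
import random

def recovery_rate(support, sigma, trials=20000, seed=1):
    """Estimate the probability of recovering s ~ Uniform(support)
    from s + e with Gaussian noise e, decoding to the nearest point."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = rng.choice(support)
        noisy = s + rng.gauss(0.0, sigma)
        decoded = min(support, key=lambda t: abs(noisy - t))
        hits += (decoded == s)
    return hits / trials

# Widely separated points decode reliably; closer points are lossier.
print(recovery_rate([0, 8], sigma=2.0))   # ~0.95: error must exceed 4 to fail
print(recovery_rate([0, 2], sigma=2.0))   # ~0.69: error need only exceed 1
```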
Last updated:  2020-02-06
InfoCommit: Information-Theoretic Polynomial Commitment and Verification
Saeid Sahraei, Salman Avestimehr
We introduce InfoCommit, a protocol for polynomial commitment and verification. InfoCommit consists of two phases: an initial commitment phase and an evaluation phase. During the commitment phase, the verifier and the prover engage in a private two-party computation algorithm from which the verifier extracts a private verification key. In the evaluation phase, the verifier is interested in learning the evaluations of the polynomial at several input points. InfoCommit has four main features. Firstly, the verifier is able to detect, with high probability, whether the prover has responded with evaluations of the same polynomial that he initially committed to. Secondly, InfoCommit provides rigorous privacy guarantees for the prover: upon observing the initial commitment and the responses provided by the prover to $m$ evaluation requests, the verifier only learns $O(m^2)$ symbols about the coefficients of the polynomial. Thirdly, the verifiability guarantee is unconditional and requires no trusted party, while ``bounded storage'' is the only assumption underlying the privacy of the algorithm. In particular, both properties hold regardless of the computational power of the two parties. Lastly, InfoCommit is doubly-efficient in the sense that in the evaluation phase, the verifier runs in time $O(\sqrt{d})$ and the prover runs in time $O(d)$, where $d-1$ is the degree of the polynomial.
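The $O(\sqrt{d})$ verifier cost is reminiscent of the standard square-root decomposition of polynomial evaluation: the prover answers with $\sqrt{d}$ partial sums over coefficient chunks, and the verifier recombines them using $\sqrt{d}$ powers of the evaluation point. The sketch below shows only this generic decomposition, not InfoCommit's actual protocol.

```python
# Toy square-root decomposition of polynomial evaluation mod a prime.
# f(x) = sum_j x^(j*m) * g_j(x), where g_j collects m consecutive coefficients.
P = 2**61 - 1  # a Mersenne prime, chosen arbitrarily for the demo

def partial_sums(coeffs, m, x):
    """Prover side: evaluate the m chunk polynomials g_j at x (O(d) work)."""
    chunks = [coeffs[j*m:(j+1)*m] for j in range(len(coeffs) // m)]
    return [sum(c * pow(x, i, P) for i, c in enumerate(chunk)) % P
            for chunk in chunks]

def combine(partials, m, x):
    """Verifier side: recombine with powers x^(j*m) -- O(sqrt(d)) work."""
    return sum(g * pow(x, j*m, P) for j, g in enumerate(partials)) % P

coeffs = list(range(1, 17))  # degree-15 polynomial: d = 16, m = sqrt(d) = 4
x = 123456789
direct = sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
assert combine(partial_sums(coeffs, 4, x), 4, x) == direct
print("square-root decomposition matches direct evaluation")
```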
Last updated:  2020-02-06
Efficient BIKE Hardware Design with Constant-Time Decoder
Andrew Reinders, Rafael Misoczki, Santosh Ghosh, Manoj Sastry
BIKE (Bit-flipping Key Encapsulation) is a promising candidate in the NIST Post-Quantum Cryptography Standardization process. It is a code-based cryptosystem that enjoys a simple definition, well-understood underlying security, and interesting performance. The most critical step in this cryptosystem consists of correcting errors in a QC-MDPC linear code. The BIKE team proposed variants of the Bit-Flipping Decoder for this step for Rounds 1 and 2 of the standardization process. In this paper, we propose an alternative decoder which is more friendly to hardware implementations, achieving latency-area performance comparable to the literature while introducing resilience against power side channels. We also show that our design can accelerate all of the key generation, encapsulation, and decapsulation operations using very few common logic building blocks.
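The bit-flipping step at the heart of QC-MDPC decoding can be sketched in miniature: compute the syndrome, count the unsatisfied parity checks touching each bit, and flip the bits whose count passes a threshold. The tiny circulant code below is an illustrative stand-in, orders of magnitude smaller than BIKE's parameters, and ignores the constant-time concerns the paper addresses.

```python
# Toy bit-flipping decoder on a length-7 circulant parity-check matrix
# (first row has support {0, 1, 3}); real QC-MDPC codes are far larger.
N = 7
ROW = {0, 1, 3}
H = [[1 if (c - r) % N in ROW else 0 for c in range(N)] for r in range(N)]

def bit_flip_decode(word, max_iters=10, threshold=2):
    word = list(word)
    for _ in range(max_iters):
        syndrome = [sum(h * w for h, w in zip(row, word)) % 2 for row in H]
        if not any(syndrome):
            return word  # all parity checks satisfied
        # Count unsatisfied checks touching each bit; flip the heavy bits.
        counts = [sum(H[r][c] and syndrome[r] for r in range(N))
                  for c in range(N)]
        for c in range(N):
            if counts[c] >= threshold:
                word[c] ^= 1
    return None  # decoding failure

# Every single-bit error on the zero codeword is corrected in one round.
for i in range(N):
    err = [0] * N
    err[i] = 1
    assert bit_flip_decode(err) == [0] * N
print("all single-bit errors corrected")
```

Because {0, 1, 3} is a perfect difference set mod 7, the erroneous bit sees all three of its checks fail while every other bit sees exactly one, so a threshold of 2 flips only the right position.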
Last updated:  2020-02-06
Separating Two-Round Secure Computation from Oblivious Transfer
Benny Applebaum, Zvika Brakerski, Sanjam Garg, Yuval Ishai, Akshayaram Srinivasan
We consider the question of minimizing the round complexity of protocols for secure multiparty computation (MPC) with security against an arbitrary number of semi-honest parties. Very recently, Garg and Srinivasan (Eurocrypt 2018) and Benhamouda and Lin (Eurocrypt 2018) constructed such 2-round MPC protocols from minimal assumptions. This was done by showing *a round preserving reduction* to the task of secure *2-party* computation of the oblivious transfer functionality (OT). These constructions made a novel non-black-box use of the underlying OT protocol. The question remained whether this can be done by only making black-box use of 2-round OT. This is of theoretical and potentially also practical value as black-box use of primitives tends to lead to more efficient constructions. Our main result proves that such a black-box construction is impossible, namely that non-black-box use of OT is necessary. As a corollary, a similar separation holds when starting with any 2-party functionality other than OT. As a secondary contribution, we prove several additional results that further clarify the landscape of black-box MPC with minimal interaction. In particular, we complement the separation from 2-party functionalities by presenting a complete 4-party functionality, give evidence for the difficulty of ruling out a complete 3-party functionality and for the difficulty of ruling out black-box constructions of 3-round MPC from 2-round OT, and separate a relaxed ``non-compact'' variant of 2-party *secret sharing* from 2-round OT.
Last updated:  2020-04-27
A Verifiable and Practical Lattice-Based Decryption Mix Net with External Auditing
Xavier Boyen, Thomas Haines, Johannes Mueller
Mix nets are often used to provide privacy in modern security protocols, through shuffling. Some of the most important applications, such as secure electronic voting, require mix nets that are verifiable. In the literature, numerous techniques have been proposed to make mix nets verifiable. Some of them have also been employed for securing real political elections. With the looming possibility of quantum computers and their threat to cryptosystems based on classical hardness assumptions, there is significant pressure to migrate mix nets to post-quantum alternatives. At present, no verifiable and practical post-quantum mix net with external auditing is available as a drop-in replacement of existing constructions. In this paper, we give the first such construction. We propose a verifiable decryption mix net which solely employs practical lattice-based primitives. We formally prove that our mix net provides a high level of verifiability, and even accountability which guarantees that misbehaving mix servers can also be identified. Verification is executed by a (temporarily trusted) public auditor whose role can easily be distributed. To demonstrate practicality for real-world systems, we provide detailed performance benchmarks on our stand-alone implementation based only on the most conservative lattice hardness assumptions.
Last updated:  2020-10-01
A Security Model and Fully Verified Implementation for the IETF QUIC Record Layer
Antoine Delignat-Lavaud, Cédric Fournet, Bryan Parno, Jonathan Protzenko, Tahina Ramananandro, Jay Bosamiya, Joseph Lallemand, Itsaka Rakotonirina, Yi Zhou
We investigate the security of the QUIC record layer, as standardized by the IETF in draft version 30. This version features major differences compared to Google's original protocol and prior IETF drafts. We model packet and header encryption, which uses a custom construction for privacy. To capture its goals, we propose a security definition for authenticated encryption with semi-implicit nonces. We show that QUIC uses an instance of a generic construction parameterized by a standard AEAD-secure scheme and a PRF-secure cipher. We formalize and verify the security of this construction in F*. The proof uncovers interesting limitations of nonce confidentiality, due to the malleability of short headers and the ability to choose the number of least significant bits included in the packet counter. We propose improvements that simplify the proof and increase robustness against strong attacker models. In addition to the verified security model, we also give a concrete functional specification for the record layer, and prove that it satisfies important functionality properties (such as successful decryption of encrypted packets) after fixing additional errors in the draft. We then provide a high-performance implementation of the record layer that we prove to be memory safe, correct with respect to our concrete specification (inheriting its functional correctness properties), and secure with respect to our verified model. To evaluate this component, we develop a provably-safe implementation of the rest of the QUIC protocol. Our record layer achieves nearly 2 GB/s throughput, and our QUIC implementation's performance is within 21% of an unverified baseline.
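For reference, the packet-protection nonce in the IETF QUIC drafts is formed by left-padding the packet number to the length of the AEAD IV and XORing the two; this is the "semi-implicit" nonce construction the security definition targets. The snippet below reproduces that construction with a made-up IV value.

```python
def quic_nonce(iv: bytes, packet_number: int) -> bytes:
    """Left-pad the packet number to the IV length, then XOR with the IV,
    as in IETF QUIC packet protection (semi-implicit nonce)."""
    pn = packet_number.to_bytes(len(iv), "big")
    return bytes(a ^ b for a, b in zip(iv, pn))

iv = bytes.fromhex("6b26114b9cba2b63a9e8dd4f")  # example 12-byte AEAD IV
assert quic_nonce(iv, 0) == iv                  # packet number 0 leaves the IV unchanged
assert quic_nonce(iv, 1)[-1] == iv[-1] ^ 1      # only the last byte differs
print(quic_nonce(iv, 1).hex())
```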
Last updated:  2020-02-06
New Discrete Logarithm Computation for the Medium Prime Case Using the Function Field Sieve
Madhurima Mukhopadhyay, Palash Sarkar, Shashank Singh, Emmanuel Thome
The present work reports progress in discrete logarithm computation for the general medium prime case using the function field sieve algorithm. A new record discrete logarithm computation over a 1051-bit field having a 22-bit characteristic was performed. This computation builds on and implements previously known techniques. Analysis indicates that the relation collection and descent steps are within reach for fields with 32-bit characteristic and moderate extension degrees. It is the linear algebra step which will dominate the computation time for any discrete logarithm computation over such fields. Keywords: finite field, discrete logarithm, function field sieve.
Last updated:  2021-02-02
A Detailed Report on the Overhead of Hardware APIs for Lightweight Cryptography
Patrick Karl, Michael Tempelmeier
The "Competition for Authenticated Encryption: Security, Applicability, and Robustness" (CAESAR) was the first cryptographic competition that required designers to use a mandatory hardware API for their implementations. Recently, a similar hardware API for the NIST Lightweight Cryptography (LWC) project was proposed. Both APIs feature an accompanying development package to help designers implement the API. In this paper, we take an in-depth look at these packages. We analyze the features of both packages, discuss their resource utilization, and demonstrate their impact on Ascon128, SpoC-64, and Gimli implementations on a modern Artix-7 FPGA. Finally, we provide some tweaks and enhancements to further optimize the development package for the LWC API.
Last updated:  2021-01-29
Adaptively Secure Constrained Pseudorandom Functions in the Standard Model
Alex Davidson, Shuichi Katsumata, Ryo Nishimaki, Shota Yamada, Takashi Yamakawa
Constrained pseudorandom functions (CPRFs) allow learning ``constrained'' PRF keys that can evaluate the PRF on a subset of the input space, or based on some predicate. First introduced by Boneh and Waters [AC’13], Kiayias et al. [CCS’13] and Boyle et al. [PKC’14], they have been shown to be a useful cryptographic primitive with many applications. These applications often require CPRFs to be adaptively secure, which allows the adversary to learn PRF values and constrained keys in an arbitrary order. However, there is no known construction of adaptively secure CPRFs based on a standard assumption in the standard model for any non-trivial class of predicates. Moreover, even if we rely on strong tools such as indistinguishability obfuscation (IO), the state-of-the-art construction of adaptively secure CPRFs in the standard model only supports the limited class of NC1 predicates. In this work, we develop new adaptively secure CPRFs for various predicates from different types of assumptions in the standard model. Our results are summarized below.
- We construct adaptively secure and $O(1)$-collusion-resistant CPRFs for $t$-conjunctive normal form ($t$-CNF) predicates from one-way functions (OWFs), where $t$ is a constant. Here, $O(1)$-collusion-resistance means that we can allow the adversary to obtain a constant number of constrained keys. Note that $t$-CNF includes bit-fixing predicates as a special case.
- We construct adaptively secure and single-key CPRFs for inner-product predicates from the learning with errors (LWE) assumption. Here, single-key security means that we only allow the adversary to learn one constrained key. Note that inner-product predicates include $t$-CNF predicates for a constant $t$ as a special case. Thus, this construction supports a more expressive class of predicates than the first construction, though it loses collusion-resistance and relies on a stronger assumption.
- We construct adaptively secure and $O(1)$-collusion-resistant CPRFs for all circuits from the LWE assumption and indistinguishability obfuscation (IO).
The first and second constructions are the first CPRFs for any non-trivial predicates to achieve adaptive security without relying on the random oracle model or strong cryptographic assumptions. Moreover, the first construction is also the first to achieve any notion of collusion-resistance in this setting. In addition, we prove that the first and second constructions satisfy weak $1$-key privacy, which roughly means that a constrained key does not reveal the corresponding constraint. The third construction is an improvement over previous adaptively secure CPRFs for less expressive predicates based on IO in the standard model.
Last updated:  2020-02-04
Blazing Fast OT for Three-Round UC OT Extension
Ran Canetti, Pratik Sarkar, Xiao Wang
Oblivious Transfer (OT) is an important building block for multi-party computation (MPC). Since OT requires expensive public-key operations, efficiency-conscious MPC protocols use an OT extension (OTE) mechanism [Beaver 96, Ishai et al. 03] to provide the functionality of many independent OT instances with the same sender and receiver, using only symmetric-key operations plus a few instances of some base OT protocol. Consequently, there is significant interest in constructing OTE-friendly protocols, namely protocols that, when used as the base OT for OTE, result in extended OT protocols that are both round-efficient and cost-efficient. We present the most efficient OTE-friendly protocol to date. Specifically:
- Our base protocol incurs only 3 exponentiations per instance.
- Our base protocol results in a 3-round extended OT protocol.
- The extended protocol is UC secure in the Observable Random Oracle Model (ROM) under the CDH assumption.
For comparison, the state-of-the-art base OTs that result in 3-round OTE are proven secure only in the programmable ROM, and require 4 exponentiations under Interactive DDH or 6 exponentiations under DDH [Masny-Rindal 19]. We also implement our protocol and benchmark it against the Simplest OT protocol [Chou and Orlandi, Latincrypt 2015], which is the most efficient and widely used OT protocol but is not known to suffice for OTE. The computation cost is roughly the same in both cases. Interestingly, our base OT is also 3 rounds. However, we slightly modify the extension mechanism (which normally adds a round) so as to preserve the number of rounds in our case.
Last updated:  2021-02-02
Fixing the Achilles Heel of E-Voting: The Bulletin Board
Lucca Hirschi, Lara Schmid, David Basin
The results of electronic elections should be verifiable so that any cheating is detected. To support this, many protocols employ an electronic bulletin board (BB) for publishing data that can be read by participants and used for verifiability checks. We demonstrate that the BB is itself a security-critical component that has often been treated too casually in previous designs and analyses. In particular, we present novel attacks on the e-voting protocols Belenios, Civitas, and Helios that violate some of their central security claims under realistic system assumptions. These attacks were outside the scope of prior security analyses as their verifiability notions assume an idealized BB. To enable the analysis of protocols under realistic assumptions about the BB, we introduce a new verifiability definition applicable to arbitrary BBs. We identify a requirement, called final-agreement, and formally prove that it is sufficient and, in most cases, necessary to achieve verifiability. We then propose a BB protocol that satisfies final-agreement under weak, realistic trust assumptions and provide a machine-checked proof thereof. Our protocol can replace existing BBs, enabling verifiability under much weaker trust assumptions.
Last updated:  2020-02-04
Practical Forgeries for ORANGE
Christoph Dobraunig, Florian Mendel, Bart Mennink
We analyze the authenticated encryption algorithm of ORANGE, a submission to the NIST lightweight cryptography standardization process. We show that it is practically possible to craft forgeries out of two observed transmitted messages that encrypt the same plaintext. The authors of ORANGE have confirmed the attack, and they discuss a fix for this attack in their second-round submission of ORANGE to the NIST lightweight cryptography competition.
Last updated:  2020-02-04
One-shot Signatures and Applications to Hybrid Quantum/Classical Authentication
Ryan Amos, Marios Georgiou, Aggelos Kiayias, Mark Zhandry
We define the notion of one-shot signatures, which are signatures where any secret key can be used to sign only a single message, after which it self-destructs. While such signatures are of course impossible classically, we construct one-shot signatures using quantum no-cloning. In particular, we show that such signatures exist relative to a classical oracle, which we can then heuristically obfuscate using known indistinguishability obfuscation schemes. We show that one-shot signatures have numerous applications for hybrid quantum/classical cryptographic tasks, where all communication is required to be classical, but local quantum operations are allowed. Applications include one-time signature tokens, quantum money with classical communication, decentralized blockchain-less cryptocurrency, signature schemes with unclonable secret keys, non-interactive certifiable min-entropy, and more. We thus position one-shot signatures as a powerful new building block for novel quantum cryptographic protocols.
Last updated:  2020-02-04
Relaxed freshness in component authentication
Frank Schuhmacher
We suggest a relaxed freshness paradigm for challenge-response authentication in fields of application where challenger and responder are tightly coupled and authentication takes place in a friendly environment. Replay attacks are not feasible under this premise, and freshness can be relaxed to relative freshness: no refresh is required as long as all previously tested responders were authentic. One field of application is anti-counterfeiting of electronic device components. The main contribution is a formal security proof of an authentication scheme with choked refresh. A practical implication is an increased lifetime of stored challenge-response pairs. This is a considerable advantage for solutions based on hardware intrinsic security. For solutions based on symmetric keys, it opens the possibility for the challenger to use challenge-response pairs instead of secret keys – a cheap way to reduce the risk of key disclosure.
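The choked-refresh policy can be sketched as a small state machine: the challenger keeps reusing its current stored challenge-response pair and advances to a fresh one only after a test has failed. This is one illustrative reading of the relaxed-freshness idea, with made-up names, not the paper's formal construction.

```python
class Challenger:
    """Reuse the current challenge-response pair as long as every previously
    tested responder was authentic; refresh only after a failed test."""
    def __init__(self, crps):
        self.crps = crps          # stored list of (challenge, expected_response)
        self.index = 0

    def authenticate(self, responder):
        challenge, expected = self.crps[self.index]
        ok = responder(challenge) == expected
        if not ok and self.index + 1 < len(self.crps):
            self.index += 1       # choked refresh: advance only on failure
        return ok

crps = [("c1", "r1"), ("c2", "r2")]
challenger = Challenger(crps)
authentic = {"c1": "r1", "c2": "r2"}.get
assert challenger.authenticate(authentic)
assert challenger.index == 0      # pair reused: no refresh was needed
assert not challenger.authenticate(lambda c: "bogus")
assert challenger.index == 1      # the failure consumed the stored pair
print("choked refresh behaves as expected")
```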
Last updated:  2020-02-04
MCU intrinsic group features for component authentication
Frank Schuhmacher
We provide a solution for the authentication of a component, accessory, smart card or tag by a main device via challenge-response tests of two MCU-intrinsic features: progression in the execution of test programs (measured in processor clocks) and peripheral feedback to internal stimulation. The main device will be called the challenger and the other device the responder. Our solution requires that the authentic responders are characterized by a dedicated MCU model and a common responder ID in read-only MCU registers. Its main application is the detection/lock-out of counterfeit batteries, cartridges, sensors, or control units. The solution also serves as a redundant authentication factor in high-security applications, such as payment or conditional access.
Last updated:  2020-11-18
On the Security Goals of White-Box Cryptography
Estuardo Alpirez Bock, Alessandro Amadori, Chris Brzuska, Wil Michiels
We discuss existing and new security notions for white-box cryptography and comment on their suitability for Digital Rights Management and Mobile Payment Applications, the two prevalent use-cases of white-box cryptography. In particular, we put forward indistinguishability for white-box cryptography with hardware-binding (IND-WHW) as a new security notion that we deem central. We also discuss the security property of application-binding and explain the issues faced when defining it as a formal security notion. Based on our proposed notion for hardware-binding, we describe a possible white-box competition setup which assesses white-box implementations w.r.t. hardware-binding. Our proposed competition setup allows us to capture hardware-binding in a practically meaningful way. While some symmetric encryption schemes have been proven to admit plain white-box implementations, we show that not all secure symmetric encryption schemes are white-boxeable in the plain white-box attack scenario, i.e., without hardware-binding. Thus, even strong assumptions such as indistinguishability obfuscation cannot be used to provide secure white-box implementations for arbitrary ciphers. Perhaps surprisingly, our impossibility result does not carry over to the hardware-bound scenario. In particular, Alpirez Bock, Brzuska, Fischlin, Janson and Michiels (ePrint 2019/1014) proved a rather general feasibility result in the hardware-bound model. Equally important, the apparent theoretical distinction between the plain white-box model and the hardware-bound white-box model also translates into practically reduced attack capabilities as we explain in this paper.
Last updated:  2020-02-04
Improved Related-Tweakey Rectangle Attacks on Reduced-round Deoxys-BC-384 and Deoxys-I-256-128
Boxin Zhao, Xiaoyang Dong, Keting Jia, Willi Meier
Deoxys-BC is the core internal tweakable block cipher of the authenticated encryption schemes Deoxys-I and Deoxys-II. Deoxys-II is one of the six schemes in the final portfolio of the CAESAR competition, while Deoxys-I is a third-round candidate. Building on the method proposed by Cid et al. at ToSC 2017 and the BDT technique proposed by Wang and Peyrin at ToSC 2019, we find a new 11-round related-tweakey boomerang distinguisher of Deoxys-BC-384 with probability $2^{-118.4}$, and give a related-tweakey rectangle attack on 13-round Deoxys-BC-384 with a data complexity of $2^{125.2}$ and time complexity of $2^{186.7}$. We then apply it to analyze 13-round Deoxys-I-256-128. This is the first attack on 13-round Deoxys-I-256-128; the previous attack on this version reaches only 12 rounds.
Last updated:  2020-02-04
New Related-Tweakey Boomerang and Rectangle Attacks on Deoxys-BC Including BDT Effect
Boxin Zhao, Xiaoyang Dong, Keting Jia
In the CAESAR competition, Deoxys-I and Deoxys-II are two important authenticated encryption schemes submitted by Jean et al. Recently, Deoxys-II, together with Ascon, ACORN, AEGIS-128, OCB and COLM, has been selected for the final CAESAR portfolio. Notably, Deoxys-II is also the primary choice for the use case ``Defense in depth''. Deoxys-I remains one of the third-round candidates of the CAESAR competition. Both Deoxys-I and Deoxys-II adopt Deoxys-BC-256 and Deoxys-BC-384 as their internal tweakable block ciphers. In this paper, we investigate the security of round-reduced Deoxys-BC-256/-384 and Deoxys-I against related-tweakey boomerang and rectangle attacks with some new boomerang distinguishers. For Deoxys-BC-256, we present 10-round related-tweakey boomerang and rectangle attacks for the popular setting $(|tweak|,|key|)=(128,128)$, which reach one more round than the previous attacks in this setting. Moreover, an 11-round related-tweakey rectangle attack on Deoxys-BC-256 is given for the first time. We also put forward a 13-round related-tweakey boomerang attack in the popular setting $(|tweak|,|key|)=(128,256)$ for Deoxys-BC-384, while the previous attacks in this setting work for at most 12 rounds. In addition, the first 14-round related-tweakey rectangle attack on Deoxys-BC-384 is given for $(|tweak|<98,|key|>286)$, which attacks one more round than before. Besides, we give the first 10-round rectangle attack on the authenticated encryption mode Deoxys-I-128-128, with one more round than before, and we also reduce the complexity of the related-tweakey rectangle attack on 12-round Deoxys-I-256-128 by a factor of $2^{28}$. Our attacks cannot be applied to (round-reduced) Deoxys-II.
Last updated:  2021-02-14
A Survey of Subscription Privacy on the 5G Radio Interface - The Past, Present and Future
Haibat Khan, Keith M. Martin
End-user privacy in mobile telephony systems is nowadays of great interest because of the envisaged hyper-connectivity and the potential of the unprecedented services (virtual reality, machine-type communication, vehicle-to-everything, IoT, etc.) being offered by the new 5G system. This paper reviews the state of subscription privacy in 5G systems. As the work on 5G Release 15 -- the first full set of 5G standards -- has recently been completed, this seems to be an appropriate occasion for such a review. The scope of the privacy study undertaken is limited to the wireless part of the 5G system, which occurs between the service provider's base station and the subscriber's mobile phone. Although 5G offers better privacy guarantees than its predecessors, this work highlights that significant issues remain which need rectifying. We (i) compile the privacy vulnerabilities that already existed in previous mobile telephony generations; (ii) aggregate the privacy improvements offered by the recently finalized 5G standard; and consequently (iii) highlight the privacy issues from previous generations that remain unresolved in 5G Release 15. For completeness, (iv) we also explore new privacy attacks which surfaced after the publication of the 5G standard. To address the identified privacy gaps, we also present future research directions in the form of proposed improvements.
Last updated:  2020-02-04
A direct proof of APN-ness of the Kasami functions
Claude Carlet, Kwang Ho Kim, Sihem Mesnager
Using recent results on solving the equation $X^{2^k+1}+X+a=0$ over a finite field $\mathbb{F}_{2^n}$, we address an open question raised by the first author at WAIFI 2014 concerning the APN-ness of the Kasami functions $x\mapsto x^{2^{2k}-2^k+1}$ with $\gcd(k,n)=1$, $x\in\mathbb{F}_{2^n}$.
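APN-ness of a power function is easy to check exhaustively in small dimension: $F$ is APN iff every nonzero derivative $x \mapsto F(x+a)+F(x)$ is 2-to-1. The brute-force check below confirms this for the Kasami exponent $2^{2k}-2^k+1 = 13$ (taking $k=2$, $n=5$, so $\gcd(k,n)=1$) over $\mathbb{F}_{2^5}$; the reduction polynomial is an arbitrary choice among the irreducible ones.

```python
MOD = 0b100101  # x^5 + x^2 + 1, irreducible over GF(2)
N = 5

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^5), reducing modulo MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> N:
            a ^= MOD
    return r

def gf_pow(x, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, x)
        x = gf_mul(x, x)
        e >>= 1
    return r

def is_apn(exponent):
    """F(x) = x^e is APN iff x -> F(x+a)+F(x) is 2-to-1 for every a != 0."""
    for a in range(1, 1 << N):
        counts = {}
        for x in range(1 << N):
            d = gf_pow(x ^ a, exponent) ^ gf_pow(x, exponent)
            counts[d] = counts.get(d, 0) + 1
        if max(counts.values()) != 2:
            return False
    return True

assert is_apn(13)     # Kasami exponent 2^4 - 2^2 + 1 with k=2, n=5
assert not is_apn(1)  # sanity check: linear maps have constant derivatives
print("x^13 is APN over GF(2^5)")
```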
Last updated:  2020-02-04
Many a Mickle Makes a Muckle: A Framework for Provably Quantum-Secure Hybrid Key Exchange
Benjamin Dowling, Torben Brandt Hansen, Kenneth G. Paterson
Hybrid Authenticated Key Exchange (AKE) protocols combine keying material from different sources (post-quantum, classical, and quantum key distribution (QKD)) to build protocols that are resilient to catastrophic failures of the different components. These failures may be due to advances in quantum computing, implementation vulnerabilities, or our evolving understanding of the quantum (and even classical) security of supposedly quantum-secure primitives. This hybrid approach is a prime candidate for initial deployment of post-quantum-secure cryptographic primitives because it hedges against undiscovered weaknesses. We propose a general framework HAKE for analysing the security of such hybrid AKE protocols. HAKE extends the classical Bellare-Rogaway model for AKE security to encompass forward security, post-compromise security, fine-grained compromise of different cryptographic components, and more. We use the framework to provide a security analysis of a new hybrid AKE protocol named Muckle. This protocol operates in one round trip and leverages the pre-established symmetric keys that are inherent to current QKD designs to provide message authentication, avoiding the need to use expensive post-quantum signature schemes. We provide an implementation of our Muckle protocol, instantiating our generic construction with classical and post-quantum Diffie-Hellman-based algorithmic choices. Finally, we report on benchmarking exercises against our implementation, examining its performance in terms of clock cycles, elapsed wall-time, and additional latency in both LAN and WAN settings.
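The core hedging idea — combining independently sourced secrets so that the final key stays secure if any one source does — is often realized by chaining a KDF over all components. The HKDF-Extract-style sketch below is purely illustrative and is not Muckle's actual key schedule.

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869 style): HMAC-SHA256 keyed by the salt."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def combine_secrets(classical: bytes, post_quantum: bytes, qkd: bytes) -> bytes:
    """Chain the extractor over each component; heuristically, the output
    stays pseudorandom as long as at least one input secret is good."""
    chaining = b"\x00" * 32
    for secret in (classical, post_quantum, qkd):
        chaining = hkdf_extract(chaining, secret)
    return chaining

key = combine_secrets(b"ecdh-secret", b"kem-secret", b"qkd-key")
assert len(key) == 32
# Changing any single component changes the final session key.
assert key != combine_secrets(b"ecdh-secret", b"kem-secret", b"other")
print(key.hex()[:16])
```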
Last updated:  2021-06-23
Improved key recovery on the Legendre PRF
Novak Kaluđerović, Thorsten Kleinjung, Dusan Kostic
We give an algorithm for key recovery of the Legendre pseudorandom function that supersedes the best known algorithms so far. The expected number of operations is $O(\sqrt{p\log{\log{p}}})$ on a $\Theta(\log{p})$-bit word machine, under reasonable heuristic assumptions, and the algorithm requires only $\sqrt[4]{p~{\log^2{p}}\log{\log{p}}}$ oracle queries. If the number of queries $M$ is smaller, the expected number of operations is $\frac{{p}\log{p}\log\log{p}}{M^2}$. We further show that the algorithm works in many different generalisations -- using a different character instead of the Legendre symbol, using the Jacobi symbol, or using a degree-$r$ polynomial in the Legendre symbol numerator. In the latter case we show how to use Möbius transforms to lower the complexity to $O(p^{\operatorname{max}\{r-3,r/2\}}r^2\log{p})$ Legendre symbol computations, and $O(p^{\operatorname{max}\{r-4,r/2\}}r^2\log{p})$ in the case of a reducible polynomial. We also give an $O(\sqrt[3]{p})$ quantum algorithm that does not require a quantum oracle, and comment on the action of the Möbius group in the linear PRF case. On the practical side, we give implementation details of our algorithm. We give the solutions of the $64$-, $74$- and $84$-bit prime challenges for key recovery with $M=2^{20}$ queries posed by Ethereum, of which only the $64$- and $74$-bit challenges had been solved earlier.
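For context, the Legendre PRF maps $x$ to the quadratic-residue bit of $K+x$ modulo $p$, and key recovery means finding $K$ from a string of such bits. The toy brute-force below, with a deliberately tiny prime, only illustrates the problem setup; the paper's algorithm is far more efficient than exhaustive search.

```python
def legendre_bit(a: int, p: int) -> int:
    """0 if a is a quadratic residue mod p (or zero), else 1."""
    return 1 if pow(a, (p - 1) // 2, p) == p - 1 else 0

def prf(key: int, x: int, p: int) -> int:
    return legendre_bit((key + x) % p, p)

p = 10007                      # deliberately tiny; real challenges use 64+ bit primes
secret_key = 4242
stream = [prf(secret_key, x, p) for x in range(32)]  # 32 oracle queries

# Exhaustive search: keep every key consistent with the observed bits.
candidates = [k for k in range(p)
              if all(prf(k, x, p) == bit for x, bit in enumerate(stream))]
assert secret_key in candidates
print(f"{len(candidates)} candidate key(s) out of {p}")
```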
Last updated:  2020-02-04
Research on OpenSSL Elliptic Curves for Compliance with the Russian National Digital Signature Standard
Stanislav S. Malakhov
The survey deals with the elliptic curves implemented in the OpenSSL 1.1.1d software library. The objective of this work is to highlight the elliptic curves which comply with the Russian national digital signature standard, namely GOST R 34.10--2012. For this reason the paper focuses on the OpenSSL elliptic curves over a finite field of prime order and provides the results of testing those curves for compliance with the GOST R 34.10--2012 requirements. Two cases are considered. The first case covers the complete set of restrictions imposed on elliptic curve parameters, whereas the second omits the restriction on the bit length of the number of points on the curve. For both cases the paper presents tables which list the curves tested along with corresponding match marks. The tests were conducted with the Wolfram Mathematica computing system, and the Wolfram Language source code is given in the appendices. Note that the paper does not address the rationale behind the requirements of the standard, nor does it focus on parameter generation issues.
Last updated:  2020-08-06
Fully Distributed Verifiable Random Functions and their Application to Decentralised Random Beacons
David Galindo, Jia Liu, Mihai Ordean, Jin-Mann Wong
We provide the first systematic analysis of two multiparty protocols, namely (Non-Interactive) Fully Distributed Verifiable Random Functions (DVRFs) and Decentralised Random Beacons (DRBs), including their syntax and definitions of robustness and privacy properties. We refine current pseudorandomness definitions for distributed functions and show that the privacy provided by strong pseudorandomness, where an adversary is allowed to make partial function evaluation queries on the challenge value, is strictly better than that provided by standard pseudorandomness, where such adversarial queries are disallowed. We provide two new DVRF instantiations, named DDH-DVRF and GLOW-DVRF, that meet strong pseudorandomness under widely accepted cryptographic assumptions. Additionally we show that a previously sketched DVRF construction that we name Dfinity-DVRF meets the weaker property of standard pseudorandomness. We show the usefulness of our DRB formalism in two different ways. Firstly, we give a rigorous treatment of a folklore generic construction that builds a Decentralised Random Beacon from any DVRF instance and prove that it satisfies robustness and pseudorandomness provided that the original DVRF protocol is secure. Secondly, we capture several existing DRB protocols from academia and industry using our framework, which serves as evidence of its wider applicability. DRBs have recently gained a lot of traction as a key component for leader(s) election in decentralised ledger technologies, and we provide a classification of some of the most prominent. Building on an obvious connection between Dfinity-DVRF and Threshold BLS signatures, we discuss how to transform GLOW-DVRF into a modified Threshold BLS signature scheme with stronger security as an additional contribution of independent value.
Finally, we report on experimental evaluations of the three concrete DVRFs studied under several cryptographic libraries, and we also report preliminary benchmark results on two of the DRBs obtained from the generic DVRF-to-DRB transformation. Our findings are that our two new DRB instantiations are strongly pseudorandom and strongly unbiasable, while exhibiting high performance and linear communication complexity (as the random beacon values are computed in a non-interactive fashion). We conclude that our new DRB instantiations are to our knowledge the most efficient instantiations currently available while enjoying strong and formally proven security properties. Some of our benchmarks can be independently verified as we provide an open source C++ reference implementation of the DVRFs discussed in this work.
Last updated:  2020-02-04
SCloud: Public Key Encryption and Key Encapsulation Mechanism Based on Learning with Errors
Zhongxiang Zheng, Anyu Wang, Haining Fan, Chunhuan Zhao, Chao Liu, Xue Zhang
We propose a new family of public key encryption (PKE) and key encapsulation mechanism (KEM) schemes based on the plain learning with errors (LWE) problem. Two new design techniques are adopted in the proposed scheme, named SCloud: the sampling method and the error-reconciliation mechanism. The new sampling method is obtained by studying the properties of the convolution of a central binomial distribution and a bounded uniform distribution, which achieves higher efficiency and more flexibility w.r.t. the parameter choice. Moreover, it is shown to be more secure against the dual attack owing to its better distinguishing properties. The new error-reconciliation mechanism is constructed by combining binary linear codes and Gray codes. It can reduce the size of the parameters, and thus improve the encryption/decryption efficiency as well as the communication efficiency, by making full use of the encryption space. Based on these two techniques, SCloud can provide various parameter sets for refined security levels.
Last updated:  2020-02-04
On the Profitability of Selfish Mining Against Multiple Difficulty Adjustment Algorithms
Michael Davidson, Tyler Diamond
The selfish mining attack allows cryptocurrency miners to mine more than their "fair share" of blocks, stealing revenue from other miners while reducing the overall security of payments. This malicious strategy has been extensively studied in Bitcoin, but far less attention has been paid to how the strategy may impact other cryptocurrencies. Because selfish mining is an attack against the difficulty adjustment algorithm (DAA) of a cryptocurrency, it may have a different effect when used on coins with different DAAs. In this work, we study the degree to which selfish mining can increase the revenue of miners for a wider variety of cryptocurrencies than have been studied before, including Bitcoin, Litecoin, Bitcoin Cash, Dash, Monero, and Zcash. To do so, we generalize the selfish mining strategy to blockchains with variable difficulty, and use simulations to measure how profitable the strategy is. We find that the other cryptocurrencies under consideration are far more susceptible to selfish mining than Bitcoin is, and that the strategy is profitable for miners with a lower hash rate. We also show that by dishonestly reporting block timestamps, selfish miners can generate enormously disproportionate revenues, up to 2.5 times larger than they would earn through honest mining for some DAAs. For each DAA, we consider what happens when parameters are changed, and suggest parameter sets that would improve the algorithm's resilience against selfish mining.
Last updated:  2020-02-10
A New Paradigm for Public-Key Functional Encryption for Degree-2 Polynomials
Romain Gay
We give the first public-key functional encryption that supports the generation of functional decryption keys for degree-2 polynomials, with succinct ciphertexts, whose semi-adaptive simulation-based security is proven under standard assumptions. At the heart of our new paradigm lies a so-called partially function-hiding functional encryption scheme for inner products, which admits public-key instances, and which is sufficient to build functional encryption for degree-2 polynomials. In doing so, we improve upon prior works, such as the constructions of Lin (CRYPTO 17) and Ananth and Sahai (EUROCRYPT 17), both of which rely on function-hiding inner product FE, which can only exist in the private-key setting. The simplicity of our construction yields the most efficient FE for quadratic functions from standard assumptions (even compared to schemes satisfying a weaker security notion). The interest of our methodology is that the FE for quadratic functions that builds upon any partially function-hiding FE for inner products inherits the security properties of the latter. In particular, we build a partially function-hiding FE for inner products that enjoys simulation security in the semi-adaptive setting, where the challenge sent by the adversary can be chosen adaptively after seeing the public key (but before corrupting functional decryption keys). This is in contrast to the prior public-key FE for quadratic functions of Baltico et al. (CRYPTO 17), which only achieved indistinguishability-based, selective security. As a bonus, we show that we can obtain security against Chosen-Ciphertext Attacks straightforwardly. Even though this is the de facto security notion for encryption, it was not achieved by prior functional encryption schemes for quadratic functions, where the generic Fujisaki-Okamoto transformation (CRYPTO 99) does not apply.
Last updated:  2020-06-22
Overcoming Impossibility Results in Composable Security using Interval-Wise Guarantees
Daniel Jost, Ueli Maurer
Composable security definitions, at times called simulation-based definitions, provide strong security guarantees that hold in any context. However, they are also met with some skepticism due to many impossibility results; goals such as commitments and zero-knowledge that are achievable in a stand-alone sense were shown to be unachievable composably (without a setup) since provably no efficient simulator exists. In particular, in the context of adaptive security, the so-called "simulator commitment problem" arises: once a party gets corrupted, an efficient simulator is unable to be consistent with its pre-corruption outputs. A natural question is whether such impossibility results are unavoidable or only artifacts of frameworks being too restrictive. In this work, we propose a novel type of composable security statement that evades the commitment problem. Our new type is able to express the composable guarantees of schemes that previously did not have a clear composable understanding. To this end, we leverage the concept of system specifications in the Constructive Cryptography framework, capturing the conjunction of several interval-wise guarantees, each specifying the guarantees between two events. We develop the required theory and present the corresponding new composition theorem. We present three applications of our theory. First, we show in the context of symmetric encryption with adaptive corruption how our notion naturally captures the expected confidentiality guarantee---the messages remain confidential until either party gets corrupted---and that it can be achieved by any standard semantically secure scheme (negating the need for non-committing encryption). Second, we present a composable formalization of (so far only known to be standalone secure) commitment protocols, which is instantiable without a trusted setup like a CRS. 
We show it to be sufficient for being used in coin tossing over the telephone, one of the early intuitive applications of commitments. Third, we reexamine a result by Hofheinz, Matt, and Maurer [Asiacrypt'15] implying that IND-ID-CPA security is not the right notion for identity-based encryption, unmasking this claim as an unnecessary framework artifact.
Last updated:  2020-02-04
Enabling Faster Operations for Deeper Circuits in Full RNS Variants of FV-like Somewhat Homomorphic Encryption
Jonathan Takeshita, Matthew Schoenbauer, Ryan Karl, Taeho Jung
Though Fully Homomorphic Encryption (FHE) has been realized, most practical implementations utilize leveled Somewhat Homomorphic Encryption (SHE) schemes, which have limits on the multiplicative depth of the circuits they can evaluate and avoid computationally intensive bootstrapping. Many SHE schemes exist, among which those based on Ring Learning With Errors (RLWE) with operations on large polynomial rings are popular. Of these, variants allowing operations to occur fully in Residue Number Systems (RNS) have been constructed. This optimization allows homomorphic operations directly on RNS components without needing to reconstruct numbers from their RNS representation, making SHE implementations faster and highly parallel. In this paper, we present a set of optimizations to a popular RNS variant of the B/FV encryption scheme that allow for the use of significantly larger ciphertext moduli (e.g., thousands of bits) without increased overhead due to excessive numbers of RNS components or computational overhead, as well as computational optimizations. This allows for the use of larger ciphertext moduli, which leads to a higher multiplicative depth with the same computational overhead. Our experiments show that our optimizations yield runtime improvements of up to 4.48x for decryption and 14.68x for homomorphic multiplication for large ciphertext moduli.
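The RNS idea the paper builds on can be sketched in a few lines: additions and multiplications act componentwise on the residues, and the expensive Chinese remainder reconstruction is only needed when leaving the RNS domain. A toy illustration with small, purely illustrative moduli (real schemes use large NTT-friendly primes):

```python
from functools import reduce

MODULI = [7, 11, 13, 15]  # pairwise coprime RNS base (illustrative only)

def to_rns(x, moduli=MODULI):
    """Split an integer into its residues modulo each base element."""
    return [x % m for m in moduli]

def rns_add(a, b, moduli=MODULI):
    """Addition acts independently on each RNS component."""
    return [(x + y) % m for x, y, m in zip(a, b, moduli)]

def rns_mul(a, b, moduli=MODULI):
    """Multiplication also acts independently on each RNS component."""
    return [(x * y) % m for x, y, m in zip(a, b, moduli)]

def from_rns(r, moduli=MODULI):
    """CRT reconstruction, needed only at the very end, not per operation."""
    M = reduce(lambda x, y: x * y, moduli)
    total = 0
    for ri, mi in zip(r, moduli):
        Mi = M // mi
        total += ri * Mi * pow(Mi, -1, mi)  # requires Python 3.8+
    return total % M

a, b = 123, 456
M = 7 * 11 * 13 * 15
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % M
print("componentwise RNS multiplication matches modular arithmetic")
```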
Last updated:  2020-02-05
Witness Maps and Applications
Suvradip Chakraborty, Manoj Prabhakaran, Daniel Wichs
We introduce the notion of Witness Maps as a cryptographic notion of a proof system. A Unique Witness Map (UWM) deterministically maps all witnesses for an $\mathbf{NP}$ statement to a single representative witness, resulting in a computationally sound, deterministic-prover, non-interactive witness independent proof system. A relaxation of UWM, called Compact Witness Map (CWM), maps all the witnesses to a small number of witnesses, resulting in a ``lossy'' deterministic-prover, non-interactive proof system. We also define a Dual Mode Witness Map (DMWM) which adds an ``extractable'' mode to a CWM. Our main construction is a DMWM for all $\mathbf{NP}$ relations, assuming sub-exponentially secure indistinguishability obfuscation ($i\mathcal{O}$), along with standard cryptographic assumptions. The DMWM construction relies on a CWM and a new primitive called Cumulative All-Lossy-But-One Trapdoor Functions (C-ALBO-TDF), both of which are in turn instantiated based on $i\mathcal{O}$ and other primitives. Our instantiation of a CWM is in fact a UWM; in turn, we show that a UWM implies Witness Encryption. Along the way to constructing UWM and C-ALBO-TDF, we also construct, from standard assumptions, Puncturable Digital Signatures and a new primitive called Cumulative Lossy Trapdoor Functions (C-LTDF). The former improves upon a construction of Bellare et al. (Eurocrypt 2016), who relied on sub-exponentially secure $i\mathcal{O}$ and sub-exponentially secure OWFs. As an application of our constructions, we show how to use a DMWM to construct the first leakage and tamper-resilient signatures with a deterministic signer, thereby solving a decade-old open problem posed by Katz and Vaikuntanathan (Asiacrypt 2009), by Boyle, Segev and Wichs (Eurocrypt 2011), as well as by Faonio and Venturi (Asiacrypt 2016). Our construction achieves the optimal leakage rate of $1 - o(1)$.
Last updated:  2020-02-04
The MILP-Aided Conditional Differential Attack and Its Application to Trivium
Chen-Dong Ye, Tian Tian, Fan-Yang Zeng
Conditional differential attacks were proposed by Knellwolf et al. at ASIACRYPT 2010, targeting cryptographic primitives based on non-linear feedback shift registers. The main idea of conditional differential attacks lies in controlling the propagation of a difference by imposing conditions on public/key variables. In this paper, we improve the conditional differential attack by introducing the mixed integer linear programming (MILP) method to it. Let $J=\{f_i(\boldsymbol{x},\boldsymbol{v})=\gamma_i| 1\le i\le N\}$ be a set of conditions that we want to impose, where $\boldsymbol{x}=(x_1,x_2,\ldots,x_n)$ (resp. $\boldsymbol{v}=(v_1,v_2,\ldots,v_n)$) represents key (resp. public) variables and $\gamma_i \in\{0,1\}$ needs evaluating. Previous automatic conditional differential attacks evaluate $\gamma_1,\gamma_2,\ldots,\gamma_N$ just in order, with a preference for zero. Based on the MILP method, the conditions in $J$ can be analysed together automatically. In particular, to enhance the effect of conditional differential attacks, our MILP models minimize the number of 1's in $\{\gamma_1,\gamma_2,\ldots,\gamma_N\}$ and maximize the number of weak keys. We apply our method to analyse the security of Trivium. As a result, key-recovery attacks are performed up to the 978-round Trivium, and non-randomness is detected up to the 1108-round Trivium of its 1152 rounds, both in the weak-key setting. These are the best known results so far considering the number of rounds, and they can be experimentally verified. We hope the new method will provide insights into conditional differential attacks and the security evaluation of Trivium.
Last updated:  2020-09-12
Streamlet: Textbook Streamlined Blockchains
Benjamin Y Chan, Elaine Shi
In the past five years or so, numerous blockchain projects have made tremendous progress towards improving permissioned consensus protocols (partly due to their promised applications in Proof-of-Stake cryptocurrencies). Although a significant leap has silently taken place in our understanding of consensus protocols, it is rather difficult to navigate this body of work, and knowledge of the new techniques appears scattered. In this paper, we describe an extremely simple and natural paradigm called Streamlet for constructing consensus protocols. Our protocols are inspired by the core techniques that have been uncovered in the past five years of work; but to the best of our knowledge our embodiment is simpler than ever before and we accomplish this by taking a ``streamlining'' idea to its full potential. We hope that our textbook constructions will help to decipher the past five years of work on consensus partly driven by the cryptocurrency community --- in particular, how remarkably simple the new generation of consensus protocols has become in comparison with classical mainstream approaches such as PBFT and Paxos.
Last updated:  2020-02-05
Streamlined Blockchains: A Simple and Elegant Approach (A Tutorial and Survey)
Elaine Shi
A blockchain protocol (also called state machine replication) allows a set of nodes to agree on an ever-growing, linearly ordered log of transactions. The classical consensus literature suggests two approaches for constructing a blockchain protocol: 1) through composition of single-shot consensus instances often called Byzantine Agreement; and 2) through direct construction of a blockchain where there is no clear-cut boundary between single-shot consensus instances. While conceptually simple, the former approach precludes cross-instance optimizations in a practical implementation. This perhaps explains why the latter approach has gained more traction in practice: specifically, well-known protocols such as Paxos and PBFT all follow the direct-construction approach. In this tutorial, we present a new paradigm called “streamlined blockchains” for directly constructing blockchain protocols. This paradigm enables a new family of protocols that are extremely simple and natural: every epoch, a proposer proposes a block extending from a notarized parent chain, and nodes vote if the proposal’s parent chain is not too old. Whenever a block gains enough votes, it becomes notarized. Whenever a node observes a notarized chain with several blocks of consecutive epochs at the end, then the entire chain chopping off a few blocks at the end is final. By varying the parameters highlighted in blue, we illustrate two variants for the partially synchronous and synchronous settings respectively. We present very simple proofs of consistency and liveness. We hope that this tutorial provides a compelling argument why this new family of protocols should be used in lieu of classical candidates (e.g., PBFT, Paxos, and their variants), both in practical implementation and for pedagogical purposes.
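The finalization rule sketched above can be expressed as a small predicate. This is a simplified reading of the abstract's description (a notarized chain whose last few blocks have consecutive epoch numbers is final up to the tip), not the paper's exact rule:

```python
def finalized_prefix(chain_epochs, k=3):
    """Given the epoch numbers of a notarized chain (root to tip), if the
    last k blocks have consecutive epochs, everything except the tip block
    is considered final; otherwise nothing new finalizes.
    Simplified illustration; k=3 mirrors the common streamlined setting."""
    if len(chain_epochs) < k:
        return []
    tail = chain_epochs[-k:]
    if all(tail[i + 1] == tail[i] + 1 for i in range(k - 1)):
        return chain_epochs[:-1]
    return []

print(finalized_prefix([1, 2, 3, 5, 6, 7]))  # → [1, 2, 3, 5, 6]
print(finalized_prefix([1, 2, 4]))           # → []
```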
Last updated:  2024-04-18
Bootstrapping in FHEW-like Cryptosystems
Daniele Micciancio and Yuriy Polyakov
FHEW and TFHE are fully homomorphic encryption (FHE) cryptosystems that can evaluate arbitrary Boolean circuits on encrypted data by bootstrapping after each gate evaluation. The FHEW cryptosystem was originally designed based on standard (Ring, circular secure) LWE assumptions, and its initial implementation was able to run bootstrapping in less than 1 second. The TFHE cryptosystem used somewhat stronger assumptions, such as (Ring, circular secure) LWE over the torus with binary secret distribution, and applied several other optimizations to reduce the bootstrapping runtime to less than 0.1 second. Up to now, the gap between the underlying security assumptions prevented a fair comparison of the cryptosystems for the same security settings. We present a unified framework that includes the original and extended variants of both FHEW and TFHE cryptosystems, and implement it in the open-source PALISADE lattice cryptography library using modular arithmetic. Our analysis shows that the main distinction between the cryptosystems is the bootstrapping procedure used: Alperin-Sheriff--Peikert (AP) for FHEW vs. Gama--Izabachene--Nguyen--Xie (GINX) for TFHE. All other algorithmic optimizations in TFHE apply equally to both cryptosystems. The GINX bootstrapping method makes the use of binary secrets essential, and cannot be directly applied to other secret distributions. In the process of comparing the two schemes, we present a simple, lightweight method to extend GINX bootstrapping (e.g., as employed by TFHE) to ternary uniform and Gaussian secret distributions, which are included in the HE community security standard. Our comparison of the AP and GINX bootstrapping methods for different secret distributions suggests that the TFHE/GINX cryptosystem provides better performance for binary and ternary secrets while FHEW/AP is faster for Gaussian secrets.
We make a recommendation to consider the variants of FHEW and TFHE cryptosystems based on ternary and Gaussian secrets for standardization by the HE community.
Last updated:  2020-01-28
Phantom of the ADAS: Phantom Attacks on Driver-Assistance Systems
Ben Nassi, Dudi Nassi, Raz Ben-Netanel, Yisroel Mirsky, Oleg Drokin, Yuval Elovici
The absence of deployed vehicular communication systems, which prevents the advanced driving assistance systems (ADASs) and autopilots of semi/fully autonomous cars from validating their virtual perception of the physical environment surrounding the car with a third party, has been exploited in various attacks suggested by researchers. Since the application of these attacks comes with a cost (exposure of the attacker's identity), the delicate exposure vs. application balance has held, and attacks of this kind have not yet been encountered in the wild. In this paper, we investigate a new perceptual challenge that causes the ADASs and autopilots of semi/fully autonomous cars to consider depthless objects (phantoms) as real. We show how attackers can exploit this perceptual challenge to apply phantom attacks and change the abovementioned balance, without the need to physically approach the attack scene, by projecting a phantom via a drone equipped with a portable projector or by presenting a phantom on a hacked digital billboard that faces the Internet and is located near roads. We show that the car industry has not considered this type of attack by demonstrating the attack on today's most advanced ADAS and autopilot technologies: the Mobileye 630 PRO and the Tesla Model X, HW 2.5. Our experiments show that when presented with various phantoms, a car's ADAS or autopilot considers the phantoms as real objects, causing these systems to trigger the brakes, steer into the lane of oncoming traffic, and issue notifications about fake road signs. In order to mitigate this attack, we present a model that analyzes a detected object's context, surface, and reflected light, which is capable of detecting phantoms with 0.99 AUC. Finally, we explain why the deployment of vehicular communication systems might reduce attackers' opportunities to apply phantom attacks but won't eliminate them.
Last updated:  2021-09-09
Bandwidth-efficient threshold EC-DSA
Guilhem Castagnos, Dario Catalano, Fabien Laguillaumie, Federico Savasta, Ida Tucker
Threshold signatures allow n parties to share the power of issuing digital signatures so that any coalition of size at least t+1 can sign, whereas groups of t or fewer players cannot. Over the last few years many schemes have addressed the question of realizing efficient threshold variants for the specific case of EC-DSA signatures. In this paper we present new solutions to the problem that aim at reducing the overall bandwidth consumption. Our main contribution is a new variant of the Gennaro and Goldfeder protocol from ACM CCS 2018 that avoids all the required range proofs, while retaining provable security against malicious adversaries in the dishonest-majority setting. Our experiments show that, for all levels of security, our signing protocol reduces the bandwidth consumption of the best previously known secure protocols by factors varying between 4.4 and 9, while key generation is consistently two times less expensive. Furthermore, compared to these same protocols, our signature generation is faster for 192 bits of security and beyond.
Last updated:  2020-10-19
Metal: A Metadata-Hiding File-Sharing System
Weikeng Chen, Raluca Ada Popa
File-sharing systems like Dropbox offer insufficient privacy because a compromised server can see the file contents in the clear. Although encryption can hide such contents from the servers, metadata leakage remains significant. The goal of our work is to develop a file-sharing system that hides metadata---including user identities and file access patterns. Metal is the first file-sharing system that hides such metadata from malicious users and that has a latency of only a few seconds. The core of Metal consists of a new two-server multi-user oblivious RAM (ORAM) scheme, which is secure against malicious users, a metadata-hiding access control protocol, and a capability sharing protocol. Compared with the state-of-the-art malicious-user file-sharing scheme PIR-MCORAM (Maffei et al.'17), which does not hide user identities, Metal hides the user identities and is 500x faster (in terms of amortized latency) or 10^5x faster (in terms of worst-case latency).
Last updated:  2020-04-22
Random Walks and Concurrent Zero-Knowledge
Anand Aiyer, Xiao Liang, Nilu Nalini, Omkant Pandey
The established bounds on the round-complexity of (black-box) concurrent zero-knowledge (cZK) consider adversarial verifiers with complete control over the scheduling of messages of different sessions. Consequently, such bounds only represent a $\textit{worst}$-case study of concurrent schedules, forcing $\widetilde{\Omega}(\log n)$ rounds for $\textit{all}$ protocol sessions. What happens in "average" cases against random schedules? Must all sessions still suffer a large number of rounds? Rosen and Shelat first considered such a possibility, and constructed a cZK protocol that adjusts its round-complexity based on existing network conditions. While they provide experimental evidence for its average-case performance, no provable guarantees are known. In general, a proper framework for studying and understanding average-case schedules for cZK is missing. We present the first theoretical framework for performing such average-case studies. Our framework models the network as a stochastic process where a new session is opened with probability $p$ or an existing session receives the next message with probability $1-p$; the existing session can be chosen either in first-in-first-out (FIFO) or last-in-first-out (LIFO) order. These two orders are fundamental and serve as good upper and lower bounds for other simple variations. We also develop methods for establishing provable average-case bounds for cZK in these models. The bounds in these models turn out to be intimately connected to various properties of one-dimensional random walks that reflect at the origin. Consequently, we establish new and tight asymptotic bounds for such random walks, including: the expected rate of return to the origin, changes of direction, and concentration of "positive" movements. These results may be of independent interest. Our analysis shows that the Rosen-Shelat protocol is highly sensitive to even moderate network conditions, resulting in a large fraction of non-optimal sessions.
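The reflected random walks analyzed here are easy to experiment with numerically. A minimal simulation of the abstract's network model (a new session opens, i.e. the walk steps up, with probability p, otherwise a session closes and the walk steps down); the reflection behaviour at 0 and the statistic counted are our illustrative choices:

```python
import random

def reflected_walk(steps, p, seed=0):
    """One-dimensional random walk on the nonnegative integers that
    reflects at the origin: +1 with probability p, else -1, except at 0
    where the walk is forced back up. Returns the number of visits to
    the origin, one of the quantities whose expected rate the paper bounds."""
    rng = random.Random(seed)
    pos, returns = 0, 0
    for _ in range(steps):
        if pos == 0 or rng.random() < p:
            pos += 1
        else:
            pos -= 1
        if pos == 0:
            returns += 1
    return returns

print(reflected_walk(100_000, 0.5))
```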
We construct a more robust protocol by generalizing the "footer-free" condition of Rosen-Shelat which leads to significant improvements for both FIFO and LIFO models.
Last updated:  2021-05-27
Efficient polynomial commitment schemes for multiple points and polynomials
Dan Boneh, Justin Drake, Ben Fisch, Ariel Gabizon
We present an enhanced version of the Kate, Zaverucha and Goldberg polynomial commitment scheme [KZG, ASIACRYPT 2010] where a single group element can be an opening proof for multiple polynomials each evaluated at a different arbitrary subset of points. As a sample application we ``plug in'' this scheme into the PLONK proving system [GWC, 2019] to obtain improved proof size and prover run time at the expense of additional verifier ${\mathbb{G}}_2$ operations and pairings, and additional ${\mathbb{G}}_2$ SRS elements. We also present a second scheme where the proof consists of two group elements and the verifier complexity is better than previously known batched verification methods for [KZG].
Last updated:  2023-09-05
Better Secret-Sharing via Robust Conditional Disclosure of Secrets
Benny Applebaum, Amos Beimel, Oded Nir, and Naty Peter
A secret-sharing scheme allows a dealer to distribute a secret $s$ among $n$ parties such that only some predefined ``authorized'' sets of parties can reconstruct the secret, and all other ``unauthorized'' sets learn nothing about $s$. For over 30 years, it was known that any (monotone) collection of authorized sets can be realized by a secret-sharing scheme whose shares are of size $2^{n-o(n)}$, and until recently no better scheme was known. In a recent breakthrough, Liu and Vaikuntanathan (STOC 2018) reduced the share size to $2^{0.994n+o(n)}$, which was later improved to $2^{0.892n+o(n)}$ by Applebaum et al. (EUROCRYPT 2019). In this paper we improve the exponent of general secret-sharing schemes down to $0.637$. For the special case of linear secret-sharing schemes, we get an exponent of $0.762$ (compared to $0.942$ of Applebaum et al.). As our main building block, we introduce a new \emph{robust} variant of conditional disclosure of secrets (robust CDS) that achieves unconditional security even under a bounded form of re-usability. We show that the problem of general secret-sharing schemes reduces to robust CDS protocols with sub-exponential overhead and derive our main result by implementing robust CDS with a non-trivial exponent. The latter construction follows by presenting a general immunization procedure that turns standard CDS into a robust CDS.
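As a point of reference for threshold access structures (a special case of the general monotone structures this paper targets), classical Shamir sharing achieves shares of a single field element each. A minimal sketch over an illustrative prime field:

```python
import random

P = 2**61 - 1  # illustrative prime field modulus

def share(secret, n, t, seed=0):
    """Shamir (t+1)-out-of-n sharing: evaluate a random degree-t polynomial
    with constant term `secret` at points 1..n. Baseline for *threshold*
    access structures only, not the paper's general construction."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over GF(P); needs any t+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P  # Python 3.8+
    return secret

shares = share(42, n=5, t=2)
assert reconstruct(shares[:3]) == 42  # any 3 of the 5 shares suffice
print("reconstructed:", reconstruct(shares[:3]))
```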
Last updated:  2020-11-20
Exploring HTTPS Security Inconsistencies: A Cross-Regional Perspective
Eman Salem Alashwali, Pawel Szalachowski, Andrew Martin
If two or more identical HTTPS clients, located at different geographic locations (regions), make an HTTPS request to the same domain (e.g. example.com), on the same day, will they receive the same HTTPS security guarantees in response? Our results give evidence that this is not always the case. We conduct scans for the top 250,000 most visited domains on the Internet, from clients located at five different regions: Australia, Brazil, India, the UK, and the US. Our scans gather data from both the application (URLs and HTTP headers) and transport (servers' selected TLS version, ciphersuite, and certificate) layers. Overall, we find that HTTPS inconsistencies at the application layer are higher than those at the transport layer. We also find that HTTPS security inconsistencies are strongly related to URL and IP diversity among regions, and to a lesser extent to the presence of redirections. Further manual inspection shows that there are several reasons behind URL diversity among regions, such as downgrading to the plain-HTTP protocol, or using different subdomains, different TLDs, or different home page documents. Furthermore, we find that downgrading to plain HTTP is related to websites' regional blocking. We also provide attack scenarios that show how an attacker can benefit from HTTPS security inconsistencies, and introduce a new attack scenario which we call the "region confusion" attack. Finally, based on our observations, we make several recommendations, including the need for testing tools that help domain administrators and users mitigate and detect regional domain inconsistencies, standardising regional domain formats with the same-origin policy (of domains) in mind, standardising secure URL redirections, and avoiding redirections whenever possible.
Last updated:  2020-01-26
Quantum Random Number Generation with the Superconducting Quantum Computer IBM 20Q Tokyo
Kentaro Tamura, Yutaka Shikano
Quantum random number generators (QRNGs) produce theoretically unpredictable random numbers. A typical QRNG is implemented in quantum optics [Herrero-Collantes, M., Garcia-Escartin, J. C.: Quantum Random Number Generators. Rev. Mod. Phys. \textbf{89}, 015004 (2017)]. Quantum computers become QRNGs when given certain programs. The simplest example of such a program applies the Hadamard gate on all qubits and performs measurement. As a result of repeatedly running this program on a 20-qubit superconducting quantum computer (IBM 20Q Tokyo), we obtained a sample with a length of 43,560. However, statistical analysis showed that this sample was biased and correlated. One of the post-processed samples passed statistical tests. To show the effectiveness of post-processing, a larger sample size is required. The present study of quantum random number generation and statistical testing may provide a potential candidate for benchmarking tests of actual quantum computing devices.
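The ``Hadamard on every qubit, then measure'' program mentioned above is easy to simulate classically on an ideal, noiseless device; a minimal statevector sketch (an illustration only, not the authors' IBM 20Q Tokyo setup, where hardware noise produces the observed bias and correlation) might look like:

```python
import numpy as np

def hadamard_qrng(n_qubits, shots, seed=None):
    # Applying a Hadamard gate to each qubit of |0...0> yields a uniform
    # superposition with amplitudes 1/sqrt(2^n), so an ideal measurement
    # returns uniform n-bit strings. A classical sampler stands in for
    # the measurement step here.
    dim = 2 ** n_qubits
    state = np.full(dim, 1 / np.sqrt(dim))   # statevector after the Hadamards
    probs = np.abs(state) ** 2               # Born-rule outcome probabilities
    rng = np.random.default_rng(seed)
    outcomes = rng.choice(dim, size=shots, p=probs)
    return [format(o, f"0{n_qubits}b") for o in outcomes]
```

On real hardware the empirical outcome distribution deviates from uniform, which is exactly what the paper's statistical tests detect.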
Last updated:  2020-01-26
Improved Quantum Circuits for Elliptic Curve Discrete Logarithms
Thomas Häner, Samuel Jaques, Michael Naehrig, Martin Roetteler, Mathias Soeken
We present improved quantum circuits for elliptic curve scalar multiplication, the most costly component in Shor's algorithm to compute discrete logarithms in elliptic curve groups. We optimize low-level components such as reversible integer and modular arithmetic through windowing techniques and more adaptive placement of uncomputing steps, and improve over previous quantum circuits for modular inversion by reformulating the binary Euclidean algorithm. Overall, we obtain an affine Weierstrass point addition circuit that has lower depth and uses fewer T gates than previous circuits. While previous work mostly focuses on minimizing the total number of qubits, we present various trade-offs between different cost metrics including the number of qubits, circuit depth and T-gate count. Finally, we provide a full implementation of point addition in the Q# quantum programming language that allows unit tests and automatic quantum resource estimation for all components.
Last updated:  2020-01-26
Wyner-Ziv reconciliation for key exchange based on Ring-LWE
Charbel Saliba, Laura Luzzi, Cong Ling
We consider a key encapsulation mechanism (KEM) based on ring-LWE where reconciliation is performed on an $N$-dimensional lattice using Wyner-Ziv coding. More precisely, we consider Barnes-Wall lattices and use Micciancio and Nicolosi's bounded distance decoder with polynomial complexity $\mathcal{O}(N \log^2(N))$. We show that in the asymptotic regime for large $N$, the achievable key rate is $\Theta(\log N)$ bits per dimension, while the error probability $P_e$ vanishes exponentially in $N$. Unlike previous works, our scheme does not require a dither.
Last updated:  2020-07-19
Memory-Tight Reductions for Practical Key Encapsulation Mechanisms
Rishiraj Bhattacharyya
The efficiency of a black-box reduction is an important goal of modern cryptography. Traditionally, the time complexity and the success probability were considered the main aspects of efficiency measurements. In CRYPTO 2017, Auerbach et al. introduced the notion of memory-tightness in cryptographic reductions and showed a memory-tight reduction of the existential unforgeability of the RSA-FDH signature scheme. Unfortunately, their techniques do not extend directly to reductions involving intricate RO-programming. The problem seems to be inherent, as all the other existing results on memory-tightness are lower bounds and impossibility results. In fact, Auerbach et al. conjectured that a memory-tight reduction for IND-CCA security of Hashed-ElGamal KEM is impossible. - We refute the above conjecture. Using a simple RO simulation technique, we provide memory-tight reductions of IND-CCA security of the Cramer-Shoup and the ECIES versions of Hashed-ElGamal KEM. - We prove memory-tight reductions for different variants of the Fujisaki-Okamoto transformation. We analyze the modular transformations introduced by Hofheinz, Hövelmanns and Kiltz (TCC 2017). In addition to the constructions involving implicit rejection, we present a memory-tight reduction for the IND-CCA security of the transformation ${\mbox{QFO}_m^\perp}$. Our techniques can tolerate correctness errors, and are applicable to several lattice-based KEM candidates.
Last updated:  2020-12-15
Rolling up sleeves when subversion's in a field?
Daniel R. L. Brown
A nothing-up-my-sleeve number is a cryptographic constant, such as a field size in elliptic curve cryptography, with qualities to assure users against subversion of the number by the system designer. A number with low Kolmogorov descriptional complexity resists being subverted, to the extent that any speculated subversion would leave a trace that cannot be hidden within the short description. The roll programming language, a version of Gödel's 1930s definition of computability, can somewhat objectively quantify the low descriptional complexity (a nothing-up-my-sleeve quality) of a number. For example, curves NIST-P-256, Curve25519, and NIST-P-521 have field sizes with roll programs of 112, 84, and 63 words, respectively.
Last updated:  2020-08-10
Anonymous Symmetric-Key Communication
Fabio Banfi, Ueli Maurer
We study anonymity of probabilistic encryption (pE) and probabilistic authenticated encryption (pAE). We start by providing concise game-based security definitions capturing anonymity for both pE and pAE, and then show that the commonly used notion of indistinguishability from random ciphertexts (IND\$) indeed implies the anonymity notions for both pE and pAE. This is in contrast to a recent work of Chan and Rogaway (Asiacrypt 2019), where it is shown that IND\$-secure nonce-based authenticated encryption can only achieve anonymity if a sophisticated transformation is applied. Moreover, we also show that the Encrypt-then-MAC paradigm is anonymity-preserving, in the sense that if both the underlying probabilistic MAC (pMAC) and pE schemes are anonymous, then so is the resulting pAE scheme. Finally, we provide a composable treatment of anonymity using the constructive cryptography framework of Maurer and Renner (ICS 2011). We introduce adequate abstractions modeling various kinds of anonymous communication channels for many senders and one receiver in the presence of an active man-in-the-middle adversary. Then we show that the game-based notions indeed are anonymity-preserving, in the sense that they imply constructions between such anonymous channels, thus generating authenticity and/or confidentiality as expected, but crucially retaining anonymity if present.
Last updated:  2022-04-21
Anonymous Tokens with Private Metadata Bit
Ben Kreuter, Tancrède Lepoint, Michele Orrù, Mariana Raykova
We present a cryptographic construction for anonymous tokens with private metadata bit, called PMBTokens. This primitive enables an issuer to provide a user with a lightweight, single-use anonymous trust token that can embed a single private bit, which is accessible only to the party who holds the secret authority key and is private with respect to anyone else. Our construction generalizes and extends the functionality of Privacy Pass (PETS’18) with this private metadata bit capability. It is based on the DDH and CTDH assumptions in the random oracle model and provides unforgeability, unlinkability, and privacy for the metadata bit. Both Privacy Pass and PMBTokens rely on non-interactive zero-knowledge proofs (NIZKs). We present new techniques to remove the need for NIZKs, while still achieving unlinkability. We implement our constructions and we report their efficiency costs.
Last updated:  2020-02-26
Post-Quantum Authentication in TLS 1.3: A Performance Study
Dimitrios Sikeridis, Panos Kampanakis, Michael Devetsikiotis
The potential development of large-scale quantum computers is raising concerns among IT and security research professionals due to their ability to solve (elliptic curve) discrete logarithm and integer factorization problems in polynomial time. All currently used public key algorithms would be deemed insecure in a post-quantum (PQ) setting. In response, the National Institute of Standards and Technology (NIST) has initiated a process to standardize quantum-resistant crypto algorithms, focusing primarily on their security guarantees. Since PQ algorithms differ significantly from classical ones, their overall evaluation should not be performed out of context. This work presents a detailed performance evaluation of the NIST signature algorithm candidates and investigates the imposed latency on TLS 1.3 connection establishment under realistic network conditions. In addition, we investigate their impact on TLS session throughput and analyze the trade-off between lengthy PQ signatures and computationally heavy PQ cryptographic operations. Our results demonstrate that the adoption of at least two PQ signature algorithms would be viable with little additional overhead over current signature algorithms. Also, we argue that many NIST PQ candidates can effectively be used for less time-sensitive applications, and provide an in-depth discussion on the integration of PQ authentication in encrypted tunneling protocols, along with the related challenges, improvements, and alternatives. Finally, we propose and evaluate the combination of different PQ signature algorithms across the same certificate chain in TLS. Results show a reduction of the TLS handshake time and a significant increase in a server's TLS tunnel connection rate over using a single PQ signature scheme.
Last updated:  2020-04-09
On Instantiating the Algebraic Group Model from Falsifiable Assumptions
Thomas Agrikola, Dennis Hofheinz, Julia Kastner
We provide a standard-model implementation (of a relaxation) of the algebraic group model (AGM, [Fuchsbauer, Kiltz, Loss, CRYPTO 2018]). Specifically, we show that every algorithm that uses our group is algebraic, and hence ``must know'' a representation of its output group elements in terms of its input group elements. Here, ``must know'' means that a suitable extractor can extract such a representation efficiently. We stress that our implementation relies only on falsifiable assumptions in the standard model, and in particular does not use any knowledge assumptions. As a consequence, our group allows us to transport a number of results obtained in the AGM into the standard model, under falsifiable assumptions. For instance, we show that in our group, several Diffie-Hellman-like assumptions (including computational Diffie-Hellman) are equivalent to the discrete logarithm assumption. Furthermore, we show that our group allows us to prove the Schnorr signature scheme tightly secure in the random oracle model. Our construction relies on indistinguishability obfuscation, and hence should not be considered a practical group itself. However, our results show that the AGM is a realistic computational model (since it can be instantiated in the standard model), and that results obtained in the AGM are also possible with standard-model groups.
Last updated:  2020-06-05
RSA and redactable blockchains
Dima Grigoriev, Vladimir Shpilrain
A blockchain is redactable if a private key holder (e.g. a central authority) can change any single block without violating the integrity of the whole blockchain, but no other party can do that. In this paper, we offer a simple method of constructing redactable blockchains inspired by the ideas underlying the well-known RSA encryption scheme. Notably, our method can be used in conjunction with any reasonable hash function that is used to build a blockchain. Public immutability of a blockchain in our construction is based on the computational hardness of the RSA problem and not on properties of the underlying hash function. Corruption resistance is based on the computational hardness of the discrete logarithm problem.
Last updated:  2020-02-03
Further Clarification on Mantin's Digraph Repetition Bias in RC4
Pranab Chakraborty, Subhamoy Maitra
In this paper we provide a theoretical argument towards an unsolved question related to Mantin's ``Digraph Repetition Bias'' (Eurocrypt 2005) that is observed in the key-stream of RC4. The open question, which concerns the observation that the arrival of four consecutive identical bytes (of the form $AAAA$) in the RC4 key-stream is slightly negatively biased, was posed by Bricout et al. [Des. Codes Cryptogr. (2018) 86:743-770] in 2016. Moreover, for the first time, we consider the ``Reverse Digraph Repetition Bias'' and show that there is a significant negative bias in the arrival of $ABBA$ ($A, B$ two distinct bytes) in the RC4 key-stream.
Last updated:  2020-11-06
Daence: Salsa20 and ChaCha in Deterministic Authenticated Encryption with no noNCEnse
Taylor R Campbell
We present Daence, a deterministic authenticated cipher based on a pseudorandom function family and a universal hash family, similar to SIV. We recommend instances with Salsa20 or ChaCha, and Poly1305, for high performance, high security, and easy deployment.
Last updated:  2020-12-16
Talek: Private Group Messaging with Hidden Access Patterns
Raymond Cheng, William Scott, Elisaweta Masserova, Irene Zhang, Vipul Goyal, Thomas Anderson, Arvind Krishnamurthy, Bryan Parno
Talek is a private group messaging system that sends messages through potentially untrustworthy servers, while hiding both data content and the communication patterns among its users. Talek explores a new point in the design space of private messaging; it guarantees access sequence indistinguishability, which is among the strongest guarantees in the space, while assuming an anytrust threat model, which is only slightly weaker than the strongest threat model currently found in related work. Our results suggest that this is a pragmatic point in the design space, since it supports strong privacy and good performance: we demonstrate a 3-server Talek cluster that achieves a throughput of 9,433 messages/second for 32,000 active users with 1.7-second end-to-end latency. To achieve its security goals without coordination between clients, Talek relies on information-theoretic private information retrieval. To achieve good performance and minimize server-side storage, Talek introduces new techniques and optimizations that may be of independent interest, e.g., a novel use of blocked cuckoo hashing and support for private notifications. The latter provide a private, efficient mechanism for users to learn, without polling, which logs have new messages.
Last updated:  2020-01-21
A Performant, Misuse-Resistant API for Primality Testing
Jake Massimo, Kenneth G. Paterson
Primality testing is a basic cryptographic task. But developers today are faced with complex APIs for primality testing, along with documentation that fails to clearly state the reliability of the tests being performed. This leads to the APIs being used incorrectly in practice, with potentially disastrous consequences. In an effort to overcome this, we present a primality test having a simplest-possible API: the test accepts a number to be tested and returns a Boolean indicating whether the input was composite or probably prime. For all inputs, the output is guaranteed to be correct with probability at least $1 - 2^{-128}$. The test is performant: on random, odd, 1024-bit inputs, it is faster than the default test used in OpenSSL by 17\%. We investigate the impact of our new test on the cost of random prime generation, a key use case for primality testing. The OpenSSL developers have adopted our suggestions in full; our new API and primality test are scheduled for release in OpenSSL 3.0.
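As a hedged illustration of such a simple Boolean API (not the paper's actual OpenSSL test, whose internals are more optimized), Miller-Rabin with 64 independent random bases already achieves a worst-case error bound of $4^{-64} = 2^{-128}$:

```python
import secrets

def is_probable_prime(n, rounds=64):
    # Miller-Rabin with `rounds` random bases: each round wrongly passes a
    # composite with probability at most 1/4, so 64 rounds bound the
    # worst-case error by 4^-64 = 2^-128. Sketch only; not the paper's test.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = 2 + secrets.randbelow(n - 3)     # random base in [2, n-2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                     # a witnesses compositeness
    return True
```

The single-call, Boolean-returning shape is the point: the caller never has to choose iteration counts or interpret error bounds.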
Last updated:  2021-05-31
Dual System in Lattice: Fully Secure ABE from LWE Assumption
Geng Wang, Ming Wan, Zhen Liu, Dawu Gu
Dual system encryption is an important method used in pairing-based cryptography for constructing fully secure IBE, ABE and FE schemes. A long-standing open question is whether there is an analogue of the dual system method in lattices that can be used to prove the full security of lattice-based ABE or FE schemes. We solve this problem in this paper. We do so by introducing a new primitive called approximate inner product encryption (aIPE), an approximate version of the well-known inner product encryption. We show that a fully secure ABE supporting CNF as its access policy can be constructed from a selectively secure aIPE and the LWE assumption. We also point out that the functionality of aIPE is included in FE for arbitrary circuits, which can be constructed from the LWE assumption; hence the full security of our scheme can be based entirely on the hardness of LWE.
Last updated:  2020-01-21
Attack on LAC Key Exchange in Misuse Situation
Aurelien Greuet, Simon Montoya, Guenael Renault
LAC is a Ring Learning With Errors based cryptosystem that was submitted to the NIST call for post-quantum standardization and passed the first round of the submission process. The particularity of LAC is its use of an error-correction code, ensuring a high security level with small key sizes and small ciphertext sizes. The LAC team proposes a CPA-secure cryptosystem, LAC.CPA, and a CCA-secure one, LAC.CCA, obtained by applying the Fujisaki-Okamoto transformation to LAC.CPA. In this paper, we study the security of the LAC Key Exchange (KE) mechanism, using LAC.CPA, in a misuse context: when the same secret key is reused for several key exchanges and an active adversary has access to a mismatch oracle. This oracle indicates information on a possible mismatch at the end of the KE protocol. In this context, we show that an attacker needs at most $8$ queries to the oracle to retrieve one coefficient of a static secret key. This result has been experimentally confirmed using the reference and optimized implementations of LAC. Since our attack can break the CPA version in a misuse context, the Authenticated KE protocol, based on the CCA version, is not impacted. However, this research provides a tight estimation of LAC's resilience against this type of attack.
Last updated:  2020-08-24
Lift-and-Shift: Obtaining Simulation Extractable Subversion and Updatable SNARKs Generically
Behzad Abdolmaleki, Sebastian Ramacher, Daniel Slamanig
Zero-knowledge proofs and in particular succinct non-interactive zero-knowledge proofs (so-called zk-SNARKs) are getting increasingly used in real-world applications, with cryptocurrencies being the prime example. Simulation extractability (SE) is a strong security notion of zk-SNARKs which informally ensures non-malleability of proofs. This property is acknowledged as highly important by leading companies in this field such as Zcash, and its importance is supported by various past attacks against the malleability of cryptographic primitives. Another problematic issue for the practical use of zk-SNARKs is the requirement of a fully trusted setup, as especially for large-scale decentralized applications finding a trusted party that runs the setup is practically impossible. Quite recently, the study of approaches to relax or even remove the trust in the setup procedure, and in particular subversion as well as updatable zk-SNARKs (with the latter being the most promising approach), has been initiated and has received considerable attention since then. Unfortunately, so far SE-SNARKs with the aforementioned properties have only been constructed in an ad-hoc manner, and no generic techniques are available. In this paper we are interested in such generic techniques and therefore first revisit the only available lifting technique, due to Kosba et al. (called COCO), to generically obtain SE-SNARKs. By exploring the design space of many recently proposed SNARK- and STARK-friendly symmetric-key primitives, we thereby achieve significant improvements in the prover computation and proof size. Unfortunately, the COCO framework as well as our improved version (called OCOCO) is not compatible with updatable SNARKs. Consequently, we propose a novel generic lifting transformation called Lamassu. It is built on different underlying ideas compared to COCO (and OCOCO).
In contrast to COCO, it only requires key-homomorphic signatures (which allow shifting keys), covering well-studied schemes such as Schnorr or ECDSA. This makes Lamassu highly interesting, as by using the novel concept of so-called updatable signatures, which we introduce in this paper, we can prove that Lamassu preserves the subversion and in particular the updatable properties of the underlying zk-SNARK. This makes Lamassu the first technique to also generically obtain SE subversion and updatable SNARKs. As its performance compares favorably to OCOCO, Lamassu is an attractive alternative that, in contrast to OCOCO, is based only on well-established cryptographic assumptions.
Last updated:  2020-02-23
Simple Schnorr Signature with Pedersen Commitment as Key
Gary Yu
In a transaction-output-based blockchain system, where each transaction spends UTXOs (previously unspent transaction outputs), a user must provide a signature, or more precisely a \(\textit{scriptSig}\) in Bitcoin, to spend a UTXO, which proves ownership of the spent output. When a Pedersen commitment \(g^xh^a\) or an ElGamal commitment \((g^xh^a,h^x)\) is introduced into the blockchain as a transaction output, to support confidential transactions where the input and output amounts are hidden, prior signature schemes such as the Schnorr signature scheme and its variants do not directly work with the commitment as the public key, since nobody, including the committer, knows the private key of \(g^xh^a\) when $a$ is not zero; that is, no one knows a $c$ such that \(g^c=g^xh^a\). We present a signature scheme that can use \(C=g^xh^a\) as the signature public key for any value of $a$. The signer, starting from a random Pedersen commitment \(R=g^{k_1}h^{k_2}\), generates a bit sequence $e$, computes \(u = k_1 + e \cdot x\) from the stored private key $x$ and the random number $k_1$, computes \(v = k_2 + e \cdot a\) from the committed value $a$ and the random number $k_2$, and finally outputs \(\sigma=(R,u,v)\) as the signature, with the corresponding public key $C$. In turn, the verifier calculates the Pedersen commitment \(S=g^uh^v\) and accepts the signature if \(S=RC^e\). For an electronic signature, the hash value $e$ is calculated from the random Pedersen commitment $R$, the Pedersen commitment $C$, and the message $m$ to be signed. This signature scheme will be very helpful in the design of non-interactive transactions in Mimblewimble.
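The sign/verify equations in the abstract can be prototyped directly. The following toy Python sketch uses a small safe-prime group and SHA-256 as the Fiat-Shamir hash; both are illustrative assumptions, not the paper's concrete instantiation, and real use requires a cryptographic group where the discrete-log relation between $g$ and $h$ is unknown:

```python
import hashlib
import secrets

# Toy group: p = 2q + 1 is a safe prime; g, h generate the order-q
# quadratic-residue subgroup. Parameters are for illustration only.
p, q = 1019, 509
g, h = 4, 9   # 2^2 and 3^2, both quadratic residues mod p

def commit(x, a):
    """Pedersen commitment C = g^x h^a (mod p), used as the public key."""
    return (pow(g, x, p) * pow(h, a, p)) % p

def challenge(R, C, msg):
    """Fiat-Shamir hash e = H(R, C, m), reduced mod the group order."""
    digest = hashlib.sha256(f"{R}|{C}|".encode() + msg).digest()
    return int.from_bytes(digest, "big") % q

def sign(x, a, msg):
    k1, k2 = secrets.randbelow(q), secrets.randbelow(q)
    R = commit(k1, k2)                  # random Pedersen commitment
    e = challenge(R, commit(x, a), msg)
    u = (k1 + e * x) % q                # response for the g-component
    v = (k2 + e * a) % q                # response for the h-component
    return (R, u, v)

def verify(C, msg, sig):
    R, u, v = sig
    e = challenge(R, C, msg)
    return commit(u, v) == (R * pow(C, e, p)) % p   # g^u h^v == R C^e
```

Correctness follows from \(g^u h^v = g^{k_1 + e x} h^{k_2 + e a} = R \cdot (g^x h^a)^e = R C^e\), mirroring the ordinary Schnorr verification but over both generators.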
Last updated:  2022-09-07
Auditable Asymmetric Password Authenticated Public Key Establishment
Antonio Faonio, Maria Isabel Gonzalez Vasco, Claudio Soriente, Hien Thi Thu Truong
Non-repudiation of messages generated by users is a desirable feature in a number of applications ranging from online banking to IoT scenarios. However, it requires certified public keys and usually results in poor usability, as a user must carry around his certificate (e.g., in a smart-card) or must install it in all of his devices. A user-friendly alternative, adopted by several companies and national administrations, is to have a ``cloud-based'' PKI. In a nutshell, each user has a PKI certificate stored at a server in the cloud; users authenticate to the server---via passwords or one-time codes---and ask it to sign messages on their behalf. As such, there is no way for the server to prove to a third party that a signature on a given message was authorized by a user. As the server holds the user's certified key, it might as well have signed arbitrary messages in an attempt to impersonate that user. In other words, a user could deny having signed a message, by claiming that the signature was produced by the server without his consent. The same holds if the secret key is derived deterministically from the user's password, since the server, knowing the password, may still frame the user. In this paper we provide a "password-only" solution to non-repudiation of user messages by introducing Auditable Asymmetric Password Authenticated Public Key Establishment (A2PAKE). This is a PAKE-like protocol that generates an asymmetric key-pair where the public key is output to every participant, but the secret key is private output to just one of the parties (e.g., the user). Further, the protocol can be audited, i.e., given the public key output by a protocol run with a user, the server can prove to a third party that the corresponding secret key is held by that specific user. Thus, if the user signs messages with that secret key, then signatures are non-repudiable.
We provide a universally composable definition of A2PAKE and an instantiation based on a distributed oblivious pseudo-random function. We also develop a prototype implementation of our instantiation and use it to evaluate its performance in realistic settings.
Last updated:  2020-01-21
ARX-KW, a family of key wrapping constructions using SipHash and ChaCha
Satō Shinichi
ARX-KW is a family of key wrapping constructions based on add-rotate-xor primitives: the pseudo-random function SipHash for authentication and the stream cipher ChaCha for confidentiality. This paper presents the ARX-KW family, proposes a specific instantiation, and details the design decisions that were made.
Last updated:  2020-05-14
Learning when to stop: a mutual information approach to fight overfitting in profiled side-channel analysis
Guilherme Perin, Ileana Buhan, Stjepan Picek
Today, deep neural networks are a common choice for conducting profiled side-channel analysis. Such techniques commonly do not require pre-processing, and yet they can break targets protected with countermeasures. Unfortunately, it is not trivial to find neural network hyper-parameters that would result in such top-performing attacks. The hyper-parameter governing the training process is the number of epochs during which the training happens. If the training is too short, the network does not reach its full capacity, while if the training is too long, the network overfits and is not able to generalize to unseen examples. Finding the right moment to stop the training process is particularly difficult for side-channel analysis, as there are no clear connections between the machine learning and side-channel metrics that govern the training and attack phases, respectively. In this paper, we tackle the problem of determining the correct epoch at which to stop the training in deep learning-based side-channel analysis. We explore how information is propagated through the hidden layers of a neural network, which allows us to monitor how training is evolving. We demonstrate that the amount of information, or more precisely, the mutual information transferred to the output layer, can be measured and used as a reference metric to determine the epoch at which the network offers optimal generalization. To validate the proposed methodology, we provide extensive experimental results that confirm the effectiveness of our metric for avoiding overfitting in profiled side-channel analysis.
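The idea of tracking mutual information at the output layer as a stopping metric can be sketched with a simple histogram-based estimator (an illustrative stand-in; the paper's exact estimator and training pipeline are not reproduced here): after each epoch, estimate the MI between a scalar summary of the output-layer activations and the labels, and keep the epoch where the estimate peaks.

```python
import numpy as np

def mutual_information(activations, labels, n_bins=10):
    # Histogram-based estimate of I(T; Y), in bits, between a scalar
    # summary T of the output-layer activations and class labels Y.
    bin_ids = np.digitize(activations,
                          np.histogram_bin_edges(activations, n_bins))
    joint = np.zeros((n_bins + 2, int(labels.max()) + 1))
    for b, y in zip(bin_ids, labels):
        joint[b, y] += 1
    joint /= joint.sum()                    # empirical joint p(t, y)
    px = joint.sum(axis=1, keepdims=True)   # marginal p(t)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())
```

During training one would record this value after every epoch and stop (or restore the saved weights) at the epoch that maximizes it, rather than relying on validation loss alone.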
Last updated:  2020-01-21
On the smoothing parameter and last minimum of random orthogonal lattices
Elena Kirshanova, Huyen Nguyen, Damien Stehlé, Alexandre Wallet
Let $X \in {\mathbb{Z}}^{n \times m}$, with each entry independently and identically distributed from an integer Gaussian distribution. We consider the orthogonal lattice $\Lambda^\perp(X)$ of $X$, i.e., the set of vectors $\mathbf{v} \in {\mathbb{Z}}^m$ such that $X \mathbf{v}= \mathbf{0}$. In this work, we prove probabilistic upper bounds on the smoothing parameter and the $(m-n)$-th minimum of $\Lambda^\perp(X)$. These bounds improve upon, and the techniques build upon, prior works of Agrawal, Gentry, Halevi and Sahai [Asiacrypt'13], and of Aggarwal and Regev [Chicago J. Theoret. Comput. Sci.'16].
Last updated:  2020-01-22
AKCN-E8: Compact and Flexible KEM from Ideal Lattice
Zhengzhong Jin, Yunlei Zhao
A remarkable breakthrough in mathematics in recent years is the proof of the long-standing conjecture that sphere packing (i.e., packing unit balls) in the $E_8$ lattice is optimal in the sense of the best density \cite{V17} for sphere packing in $\mathbb{R}^8$. In this work, based on the $E_8$ lattice code, we design a mechanism for asymmetric key consensus from noise (AKCN), referred to as AKCN-E8, for error correction and key consensus. As a direct application of the AKCN-E8 code, we present a highly practical key encapsulation mechanism (KEM) from the ideal lattice based on the ring learning with errors (RLWE) problem. Compared to the RLWE-based NewHope-KEM \cite{newhope-NIST}, which is a variant of NewHope-Usenix \cite{newhope15} and is now a promising candidate in the second round of the NIST post-quantum cryptography (PQC) standardization competition, our AKCN-E8-KEM has the following advantages: * The size of the shared key is doubled. * More compact ciphertexts, at the same or even higher security level. * More flexible parameter selection for trade-offs among security, ciphertext size and error probability.
Last updated:  2020-03-20
When one vulnerable primitive turns viral: Novel single-trace attacks on ECDSA and RSA
Alejandro Cabrera Aldaya, Billy Bob Brumley
Microarchitecture-based side-channel attacks are common threats nowadays. Intel SGX technology provides strong isolation from an adversarial OS; however, it does not guarantee protection against side-channel attacks. In this paper, we analyze the security of the mbedTLS binary GCD algorithm, an implementation that offers interesting challenges when compared, for example, with OpenSSL, due to the usage of very tight loops in the former. Using practical experiments, we demonstrate that the mbedTLS binary GCD implementation is vulnerable to side-channel analysis using the SGX-Step framework against mbedTLS-based SGX enclaves. We analyze the security of some use cases of this algorithm in this library, resulting in the discovery of a new vulnerability in the ECDSA code path that allows a single-trace attack against this implementation. This vulnerability is interesting for three reasons: * It resides in the implementation of a countermeasure, which makes it more dangerous due to the false sense of security the countermeasure currently offers. * It reduces mbedTLS ECDSA security to an integer factorization problem. * An unexpected GCD call inside the ECDSA code path compromises the countermeasure. We also cover an orthogonal use case, this time inside the mbedTLS RSA code path during the computation of a CRT parameter when loading a private key. The attack also exploits the binary GCD implementation threat, showing how a single vulnerable primitive leads to multiple vulnerabilities. We demonstrate both security threats with end-to-end attacks using 1000 trials each, showing that in both cases single-trace attacks can be achieved with success rates very close to 100%.
Last updated:  2020-04-11
Parameterized Hardware Accelerators for Lattice-Based Cryptography and Their Application to the HW/SW Co-Design of qTESLA
Wen Wang, Shanquan Tian, Bernhard Jungk, Nina Bindel, Patrick Longa, Jakub Szefer
This paper presents a set of efficient and parameterized hardware accelerators that target post-quantum lattice-based cryptographic schemes, including a versatile cSHAKE core, a binary-search CDT-based Gaussian sampler, and a pipelined NTT-based polynomial multiplier, among others. Unlike much of prior work, the accelerators are fully open-sourced, are designed to be constant-time, and can be parameterized at compile-time to support different parameters without the need for re-writing the hardware implementation. These flexible, publicly-available accelerators are leveraged to demonstrate the first hardware-software co-design using RISC-V of the post-quantum lattice-based signature scheme qTESLA with provably secure parameters. In particular, this work demonstrates that the NIST’s Round 2 level 1 and level 3 qTESLA variants achieve over a 40-100x speedup for key generation, about a 10x speedup for signing, and about a 16x speedup for verification, compared to the baseline RISC-V software-only implementation. For instance, this corresponds to execution in 7.7, 34.4, and 7.8 milliseconds for key generation, signing, and verification, respectively, for qTESLA’s level 1 parameter set on an Artix-7 FPGA, demonstrating the feasibility of the scheme for embedded applications.
Last updated:  2020-11-26
Security Analysis Against "A New Encryption Scheme for Multivariate Quadratic Systems"
Yasuhiko Ikematsu, Shuhei Nakamura
A Gröbner basis algorithm computes a good basis for an ideal of a polynomial ring and appears in various situations in cryptography. In particular, it has been used in the security analysis of multivariate public key cryptography (MPKC) and studied for a long time; however, it is still far from completely understood. We consider the algebraic attack using a Gröbner basis algorithm against a new multivariate encryption scheme proposed by Jiahui Chen et al. in Theoretical Computer Science 2020. Their idea for constructing a new scheme was to use the minus and plus modifiers to prevent known attacks, such as the linearization attack. Moreover, they argued that the scheme resists the algebraic attack using a Gröbner basis algorithm. However, in our experiments, the algebraic attack breaks their claimed 80- and 128-bit security parameters in reasonable time. It remains to be understood whether their scheme can avoid such an attack through a slight modification. In this paper, we theoretically describe why the algebraic attack breaks their scheme and give a precise complexity for the algebraic attack. As a result, we demonstrate that the algebraic attack can break the claimed 80- and 128-bit security parameters with complexities of approximately 25 and 32 bits, respectively. Moreover, based on our complexity estimation of the algebraic attack, we conclude that Chen et al.'s scheme is not practical.
Last updated:  2020-01-17
Impossible Differential Cryptanalysis of Reduced-Round Tweakable TWINE
Mohamed Tolba, Muhammad ElSheikh, Amr M. Youssef
Tweakable TWINE (T-TWINE) is a new lightweight tweakable block cipher family proposed by Sakamoto et al. at IWSEC 2019. T-TWINE is the first Tweakable Block Cipher (TBC) built on the Generalized Feistel Structure (GFS). It is based on the TWINE block cipher, with a simple tweak schedule based on SKINNY's tweakey schedule added. Similar to TWINE, it has two versions, namely T-TWINE-80 and T-TWINE-128; both have a block length of 64 bits and employ keys of length 80 and 128 bits, respectively. In this paper, we present impossible differential attacks against reduced-round versions of T-TWINE-80 and T-TWINE-128. First, we present an 18-round impossible differential distinguisher against T-TWINE. Then, using this distinguisher, we attack 25 and 27 rounds of T-TWINE-80 and T-TWINE-128, respectively.
Last updated:  2020-01-17
Low-Latency Hardware Masking with Application to AES
Pascal Sasdrich, Begül Bilgin, Michael Hutter, Mark Marson
During the past two decades there has been a great deal of research published on masked hardware implementations of AES and other cryptographic primitives. Unfortunately, many hardware masking techniques can lead to increased latency compared to unprotected circuits for algorithms such as AES, due to the high degree of the nonlinear functions in their designs. In this paper, we present a hardware masking technique which does not increase the latency for such algorithms. It is based on the LUT-based Masked Dual-Rail with Pre-charge Logic (LMDPL) technique presented at CHES 2014. First, we show 1-glitch extended strong noninterference of a nonlinear LMDPL gadget under the 1-glitch extended probing model. We then use this knowledge to design an AES implementation which computes a full AES-128 operation in 10 cycles and a full AES-256 operation in 14 cycles. We perform practical side-channel analysis of our implementation using the Test Vector Leakage Assessment (TVLA) methodology and analyze univariate as well as bivariate t-statistics to demonstrate its DPA resistance level.
Last updated:  2020-05-07
Delphi: A Cryptographic Inference Service for Neural Networks
Pratyush Mishra, Ryan Lehmkuhl, Akshayaram Srinivasan, Wenting Zheng, Raluca Ada Popa
Many companies provide neural network prediction services to users for a wide range of applications. However, current prediction systems compromise one party's privacy: either the user has to send sensitive inputs to the service provider for classification, or the service provider must store its proprietary neural networks on the user's device. The former harms the personal privacy of the user, while the latter reveals the service provider's proprietary model. We design, implement, and evaluate Delphi, a secure prediction system that allows two parties to execute neural network inference without revealing either party's data. Delphi approaches the problem by simultaneously co-designing cryptography and machine learning. We first design a hybrid cryptographic protocol that improves upon the communication and computation costs over prior work. Second, we develop a planner that automatically generates neural network architecture configurations that navigate the performance-accuracy trade-offs of our hybrid protocol. Together, these techniques allow us to achieve a 22x improvement in online prediction latency compared to the state-of-the-art prior work.
Last updated:  2020-01-17
ISA Extensions for Finite Field Arithmetic - Accelerating Kyber and NewHope on RISC-V
Erdem Alkim, Hülya Evkan, Norman Lahr, Ruben Niederhagen, Richard Petri
We present and evaluate a custom extension to the RISC-V instruction set for finite field arithmetic. The result serves as a very compact approach to software-hardware co-design of PQC implementations in the context of small embedded processors such as smartcards. The extension provides instructions that implement finite field operations with subsequent reduction of the result. As small finite fields are used in various PQC schemes, such instructions can provide a considerable speedup for an otherwise software-based implementation. Furthermore, we create a prototype implementation of the presented instructions for the extendable VexRiscv core, integrate the result into a chip design, and evaluate the design on two different FPGA platforms. The effectiveness of the extension is evaluated by using the instructions to optimize the Kyber and NewHope key-encapsulation schemes. To that end, we also present an optimized software implementation for the standard RISC-V instruction set of the polynomial arithmetic underlying those schemes, which serves as the basis for comparison. Both variants are tuned at the assembly level to optimally use the processor pipelines of contemporary RISC-V CPUs. The results show a speedup for the polynomial arithmetic of up to 85% over the basic software implementation. Using the custom instructions drastically reduces the code and data size of the implementation without introducing runtime-performance penalties, at a small cost in circuit size. When used in the selected schemes, the custom instructions can replace a full general-purpose multiplier to achieve very compact implementations.
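As a sketch of what such a fused "multiply-and-reduce" operation computes, here is a Barrett reduction for Kyber's modulus q = 3329 (our illustration of the operation pattern; the paper's actual instruction encoding and reduction method may differ):

```python
Q = 3329                                   # Kyber's modulus (NewHope uses 12289)
BARRETT_SHIFT = 26
BARRETT_MULT = (1 << BARRETT_SHIFT) // Q   # precomputed floor(2^26 / Q)

def mul_mod_q(a: int, b: int) -> int:
    """Multiply two field elements, then Barrett-reduce the product.
    For products up to (Q-1)^2 the quotient estimate is off by at
    most one, so a single conditional subtraction finishes the job."""
    t = a * b
    quotient = (t * BARRETT_MULT) >> BARRETT_SHIFT   # approximates t // Q
    r = t - quotient * Q
    if r >= Q:
        r -= Q
    return r
```

In hardware, the multiply, shift, and conditional subtraction collapse into one short fixed-latency datapath, which is exactly why a dedicated instruction pays off.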
Last updated:  2020-01-17
Practical Searchable Symmetric Encryption Supporting Conjunctive Queries without Keyword Pair Result Pattern Leakage
Changshe Ma, Yiping Gu, Hongfei Li
The recently proposed searchable symmetric encryption (SSE) scheme HXT improves on OXT by avoiding KPRP leakage, at the cost of increasing storage by two orders of magnitude. In this paper, we reconsider the principle of designing SSE protocols to prevent KPRP leakage. First, we introduce a new primitive called subset membership check (SMC), in which a set is encrypted such that its subset membership can be checked only through a protocol between Sender and Tester. The security of SMC requires that nothing is revealed other than the membership of a subset after each execution of the protocol. We propose a hash-based SMC implementation with efficient computation, communication, and storage. Second, based on the hash-based SMC, we present two practical SSE protocols that support conjunctive queries without KPRP leakage. Our first protocol, called ‘Practical Hidden Cross-Tags’ (PHXT), maintains the same storage size as OXT while preserving the same privacy and functionality as HXT. Our second protocol, called ‘Fast Hidden Cross-Tags’ (FHXT), further optimizes the performance of PHXT by eliminating the expensive Diffie-Hellman-type operations. Compared with HXT, our FHXT reduces the storage size, the server's computational costs, the client's computational costs, and the communication overhead by 96.09%, 98.44%, 79.54%, and 78.57%, respectively.
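The subset-membership idea can be illustrated with a simple keyed-hash construction (a simplification for intuition only: the paper's SMC is an interactive protocol between Sender and Tester with stronger security guarantees, and `HashedSet` is a hypothetical name, not the paper's construction):

```python
import hashlib
import hmac
import secrets

class HashedSet:
    """Illustrative keyed-hash set: elements are stored only as
    HMAC tags, and a candidate subset is checked tag-by-tag."""

    def __init__(self, elements):
        self.key = secrets.token_bytes(32)          # Sender's secret key
        self.tags = {self._tag(e) for e in elements}

    def _tag(self, element: bytes) -> bytes:
        return hmac.new(self.key, element, hashlib.sha256).digest()

    def contains_subset(self, candidate) -> bool:
        # Reveals only whether every candidate element is present.
        return all(self._tag(e) in self.tags for e in candidate)
```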
Last updated:  2020-01-18
New Subquadratic Algorithms for Constructing Lightweight Hadamard MDS Matrices (Full Version)
Tianshuo Cong, Ximing Fu, Xuting Zhou, Yuli Zou, Haining Fan
Maximum Distance Separable (MDS) matrices play a crucial role in designing cryptosystems. In this paper we mainly discuss constructing lightweight Hadamard MDS matrices based on subquadratic multipliers over $GF(2^4)$. We first propose subquadratic Hadamard matrix-vector product (HMVP) formulae and provide two new XOR count metrics. To the best of our knowledge, subquadratic multipliers have not previously been used to construct MDS matrices. Furthermore, combined with the HMVP formulae, we design a construction algorithm to find lightweight Hadamard MDS matrices under our XOR count metric. Applying our algorithms, we find $4 \times 4$ and $8 \times 8$ involutory and non-involutory MDS matrices with the fewest XOR counts reported to date. Experimental results show that our candidates save up to $40.63\%$ and $10.34\%$ of the XOR gates for $8 \times 8$ and $4 \times 4$ matrices over $GF(2^4)$, respectively.
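The subquadratic idea behind HMVP is already visible in the 2x2 case over $GF(2^4)$: a Hadamard matrix [[a,b],[b,a]] times a vector can be computed with three field multiplications instead of four, by sharing one product Karatsuba-style. The sketch below (using the common field polynomial x^4 + x + 1; these are not necessarily the paper's exact formulae) makes this concrete:

```python
def gf16_mul(a: int, b: int, poly: int = 0b10011) -> int:
    """Multiplication in GF(2^4) modulo x^4 + x + 1."""
    r = 0
    for i in range(4):            # carry-less (XOR) schoolbook product
        if (b >> i) & 1:
            r ^= a << i
    for i in range(6, 3, -1):     # reduce degrees 6..4 via x^4 = x + 1
        if (r >> i) & 1:
            r ^= poly << (i - 4)
    return r

def hadamard2_mvp(a, b, x0, x1):
    """[[a,b],[b,a]] * (x0,x1) with 3 multiplications instead of 4:
    the shared product m1 = a*(x0+x1) is reused for both outputs."""
    m1 = gf16_mul(a, x0 ^ x1)
    y0 = m1 ^ gf16_mul(a ^ b, x1)   # = a*x0 + b*x1
    y1 = m1 ^ gf16_mul(a ^ b, x0)   # = b*x0 + a*x1
    return y0, y1
```

Applied recursively to 2^k-dimensional Hadamard matrices, this sharing is what drives the multiplication count below quadratic.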
Last updated:  2020-01-17
On Analysis of Lightweight Stream Ciphers with Keyed Update
Orhun Kara, Muhammed F. Esgin
As the need for lightweight cryptography has grown even more due to the evolution of the Internet of Things, it has become a greater challenge for cryptographers to design ultra-lightweight stream ciphers in compliance with the rule of thumb that the internal state size should be at least twice the key size to defend against generic Time-Memory-Data Tradeoff (TMDT) attacks. However, in 2015, Armknecht and Mikhalev shed new light on designing keystream generators (KSGs), which in turn yield stream ciphers, with small internal states, called KSGs with Keyed Update Function (KSGs with KUF), and gave a concrete construction named Sprout. Currently, though, security analysis of KSGs with KUF in a general setting is almost non-existent. Our contribution in this paper is two-fold. 1) We give a general mathematical setting for KSGs with KUF and, for the first time, generically analyze a class of such KSGs, called KSGs with Boolean Keyed Feedback Function (KSGs with Boolean KFF). In particular, we develop two generic attack algorithms applicable to any KSG with Boolean KFF having almost arbitrary output and feedback functions, where the only requirement is that the secret key incorporation is biased. We introduce an upper bound for the time complexity of the first algorithm. Our extensive experiments validate our algorithms and the assumptions made thereof. 2) We study Sprout to show the effectiveness of our algorithms on a practical instance. A straightforward application of our generic algorithm yields one of the most successful attacks on Sprout.
Last updated:  2020-01-17
Pragmatic Authenticated Key Agreement for IEEE Std 802.15.6
Haibat Khan, Benjamin Dowling, Keith M. Martin
The IEEE Std 802.15.6 is the latest international standard for Wireless Body Area Networks (WBANs). The security of communication in this standard is based upon four elliptic-curve-based key agreement protocols. These protocols have been shown to exhibit serious security vulnerabilities and, surprisingly, do not provide any privacy guarantees. To date, no suitable key agreement protocol has been proposed which fulfils all the requisite objectives for IEEE Std 802.15.6. In this paper, two key agreement protocols are presented which, in addition to being efficient and providing advanced security properties, also offer the essential privacy attributes of anonymity and unlinkability. The protocols are also quantum-safe, as they are independent of any public-key-based operations. We develop a formal security and privacy model in an appropriate complexity-theoretic framework and prove the proposed protocols secure in this model.
Last updated:  2020-01-17
Bypassing Non-Outsourceable Proof-of-Work Schemes Using Collateralized Smart Contracts
Alexander Chepurnoy, Amitabh Saxena
Centralized pools and the renting of mining power are considered sources of possible censorship threats and even 51% attacks on decentralized cryptocurrencies. Non-outsourceable Proof-of-Work schemes have been proposed to tackle these issues. However, folklore suggests that such schemes could potentially be bypassed using escrow mechanisms. In this work, we propose a concrete example of such a mechanism based on collateralized smart contracts. Our approach allows miners to bypass non-outsourceable Proof-of-Work schemes if the underlying blockchain platform supports smart contracts in a sufficiently advanced language. In particular, the language should allow access to the PoW solution. At a high level, our approach requires the miner to lock collateral covering the reward amount, protected by a smart contract that acts as an escrow. The smart contract has logic that allows the pool to collect the collateral as soon as the miner collects any block reward. We propose two variants of the approach, depending on when the collateral is bound to the block solution. Using this, we show how to bypass previously proposed non-outsourceable Proof-of-Work schemes (with the notable exception of strong non-outsourceable schemes) and show how to build mining pools for such schemes.
Last updated:  2020-04-24
Zone Encryption with Anonymous Authentication for V2V Communication
Jan Camenisch, Manu Drijvers, Anja Lehmann, Gregory Neven, Patrick Towa
Vehicle-to-vehicle (V2V) communication systems are currently being prepared for real-world deployment, but they face strong opposition over privacy concerns. Position beacon messages are the main culprit, being broadcast in cleartext and pseudonymously signed up to 10 times per second. So far, no practical solutions have been proposed to encrypt or anonymously authenticate V2V messages. We propose two cryptographic innovations that enhance the privacy of V2V communication. As a core contribution, we introduce zone-encryption schemes, where vehicles generate and authentically distribute encryption keys associated to static geographic zones close to their location. Zone encryption provides security against eavesdropping, and, combined with a suitable anonymous authentication scheme, ensures that messages can only be sent by genuine vehicles, while adding only 224 bytes of cryptographic overhead to each message. Our second contribution is an authentication mechanism fine-tuned to the needs of V2V, called dynamic group signatures with attributes, which allows vehicles to authentically distribute keys. Our instantiation features unlimited locally generated pseudonyms, negligible credential download-and-storage costs, identity recovery by a trusted authority, and compact signatures of 216 bytes at a 128-bit security level.
Last updated:  2021-01-06
BLAZE: Blazing Fast Privacy-Preserving Machine Learning
Arpita Patra, Ajith Suresh
Machine learning tools have illustrated their potential in many significant sectors, such as healthcare and finance, to aid in deriving useful inferences. The sensitive and confidential nature of the data in such sectors raises natural concerns for data privacy. This motivated the area of Privacy-preserving Machine Learning (PPML), where the privacy of data is guaranteed. Typically, ML techniques require large computing power, which leads clients with limited infrastructure to rely on Secure Outsourced Computation (SOC). In the SOC setting, the computation is outsourced to a set of specialized and powerful cloud servers and the service is availed on a pay-per-use basis. In this work, we explore PPML techniques in the SOC setting for widely used ML algorithms: Linear Regression, Logistic Regression, and Neural Networks. We propose BLAZE, a blazing-fast PPML framework in the three-server setting tolerating one malicious corruption over a ring ($\mathbb{Z}_{2^{\ell}}$). BLAZE achieves the stronger security guarantee of fairness (all honest servers get the output whenever the corrupt server obtains it). Leveraging an input-independent preprocessing phase, BLAZE has a fast input-dependent online phase relying on efficient PPML primitives such as: (i) a dot product protocol whose online-phase communication is independent of the vector size, the first of its kind in the three-server setting; (ii) a truncation method that avoids evaluating the expensive Ripple Carry Adder (RCA) circuit and achieves constant round complexity. This improves over the truncation method of ABY3 (Mohassel et al., CCS 2018), which uses an RCA and consumes round complexity of the order of the depth of the RCA (which equals the underlying ring size). Extensive benchmarking of BLAZE for the aforementioned ML algorithms over a 64-bit ring in both WAN and LAN settings shows massive improvements over ABY3.
Concretely, we observe improvements of up to $333\times$ for Linear Regression, $53\times$ for Logistic Regression and $276\times$ for Neural Networks over WAN. Similarly, we show improvements of up to $2610\times$ for Linear Regression, $54\times$ for Logistic Regression and $278\times$ for Neural Networks over LAN.
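Why truncation without a ripple-carry adder circuit is possible at all can be seen from the classic local-share truncation trick (a sketch in the spirit of SecureML's two-party method, not BLAZE's actual three-server protocol; names are illustrative): each party simply shifts its own share, and reconstruction is off by at most one from the true truncated value, with overwhelming probability for inputs far from the ring boundary.

```python
import random

L = 64
MASK = (1 << L) - 1

def share(x: int):
    """Additive 2-out-of-2 secret sharing of x in Z_{2^L}."""
    s1 = random.getrandbits(L)
    return s1, (x - s1) & MASK

def truncate_locally(s1: int, s2: int, d: int) -> int:
    """Each party shifts its own share by d bits; no interaction and
    no adder circuit. Reconstructing the shifted shares yields a
    value within 1 of x >> d (w.h.p. for small positive x)."""
    t1 = s1 >> d
    t2 = ((1 << L) - (((1 << L) - s2) >> d)) & MASK
    return (t1 + t2) & MASK
```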
Last updated:  2020-07-26
Consistency of Proof-of-Stake Blockchains with Concurrent Honest Slot Leaders
Aggelos Kiayias, Saad Quader, Alexander Russell
We improve the fundamental security threshold of eventual consensus Proof-of-Stake (PoS) blockchain protocols under longest-chain rule, reflecting for the first time the positive effect of rounds with concurrent honest leaders. Current analyses of these protocols reduce consistency to the dynamics of an abstract, round-based block creation process that is determined by three probabilities: $p_A$, the probability that a round has at least one adversarial leader; $p_h$, the probability that a round has a single honest leader; and $p_H$, the probability that a round has multiple, but honest, leaders. We present a consistency analysis that achieves the optimal threshold $p_h + p_H > p_A$. This is a first in the literature and can be applied to both the simple synchronous setting and the setting with bounded delays. Moreover, we achieve the optimal consistency error $e^{-\Theta(k)}$ where $k$ is the confirmation time. We also provide an efficient algorithm to explicitly calculate these error probabilities in the synchronous setting. All existing consistency analyses either incur a penalty for rounds with concurrent honest leaders, or treat them neutrally. Specifically, the consistency analyses in Ouroboros Praos (Eurocrypt 2018) and Genesis (CCS 2018) assume that the probability of a uniquely honest round exceeds that of the other two events combined (i.e., $p_h - p_H > p_A$); the analyses in Sleepy Consensus (Asiacrypt 2017) and Snow White (Fin. Crypto 2019) assume that a uniquely honest round is more likely than an adversarial round (i.e., $p_h > p_A$). In addition, previous analyses completely break down when uniquely honest rounds become less frequent, i.e., $p_h < p_A$. These thresholds determine the critical trade-off between the honest majority, network delays, and consistency error. Our new results can be directly applied to improve the consistency guarantees of the existing protocols. 
We complement these results with a consistency analysis in the setting where uniquely honest slots are rare, even letting $p_h = 0$, under the added assumption that honest players adopt a consistent chain selection rule.
Last updated:  2020-06-25
A Compact and Scalable Hardware/Software Co-design of SIKE
Pedro Maat C. Massolino, Patrick Longa, Joost Renes, Lejla Batina
We present efficient and compact hardware/software co-design implementations of the Supersingular Isogeny Key Encapsulation (SIKE) protocol on field-programmable gate arrays (FPGAs). In order to be better equipped for different post-quantum scenarios, our architectures were designed to feature high flexibility, covering all the currently available parameter sets and supporting primes up to 1008 bits. In particular, any of the current SIKE parameters equivalent to the post-quantum security of AES-128/192/256 and SHA3-256 can be selected and run on-the-fly. This security scalability property, together with the small footprint and efficiency of our architectures, makes them ideal for embedded applications in a post-quantum world. In addition, the proposed implementations exhibit regular, constant-time execution, which provides protection against timing and simple side-channel attacks. Our results demonstrate that supersingular isogeny-based primitives such as SIDH and SIKE can indeed be deployed for embedded applications featuring competitive performance. For example, our smallest architecture based on a 128-bit MAC unit takes only 3855 slices, 21 BRAMs and 57 DSPs on a Virtex 7 690T and can perform key generation, encapsulation and decapsulation in 14.2, 24.1 and 25.7 milliseconds for SIKEp434 and in 51.7, 85.4 and 92.1 milliseconds for SIKEp751, respectively.
Last updated:  2020-01-15
Online Performance Evaluation of Deep Learning Networks for Side-Channel Analysis
Damien Robissout, Gabriel Zaid, Brice Colombier, Lilian Bossuet, Amaury Habrard
Deep learning based side-channel analysis has seen a rise in popularity over the last few years. Much work has been done to understand the inner workings of the neural networks used to perform the attacks, and much is still left to do. However, finding a metric suitable for evaluating the capacity of the neural networks is an open problem discussed in many articles. We propose an answer to this problem by introducing an online evaluation metric dedicated to the context of side-channel analysis and use it to perform early stopping on existing convolutional neural networks found in the literature. This metric compares the performance of a network on the training set and on the validation set to detect underfitting and overfitting. Consequently, we improve the performance of the networks by finding their best training epoch, and thus reduce the number of traces used by 30%. The training time is also reduced for most of the networks considered.
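The early-stopping mechanism can be sketched generically (this is plain validation-based early stopping with an overfitting check, not the paper's exact side-channel metric; the function name and `patience` parameter are illustrative):

```python
def best_epoch(train_loss, val_loss, patience=3):
    """Return the epoch with the best validation loss, stopping once
    validation has not improved for `patience` epochs while training
    loss keeps falling: the classic overfitting signature."""
    best_val, best_ep, stale = float("inf"), 0, 0
    for ep, (t, v) in enumerate(zip(train_loss, val_loss)):
        if v < best_val:
            best_val, best_ep, stale = v, ep, 0
        else:
            stale += 1
            overfitting = t < train_loss[best_ep]  # train still improving
            if stale >= patience and overfitting:
                break
    return best_ep
```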
Last updated:  2020-05-13
Bitstream Modification Attack on SNOW 3G
Michail Moraitis, Elena Dubrova
SNOW 3G is one of the core algorithms for confidentiality and integrity in several 3GPP wireless communication standards, including the new Next Generation (NG) 5G. It is believed to be resistant to classical cryptanalysis. In this paper, we show that a key can be extracted from an unprotected FPGA implementation of SNOW 3G by a fault attack. The faults are injected by modifying the content of Look-Up Tables (LUTs) directly in the bitstream. The main challenge is to identify target LUTs whose modification reduces the non-linear state updating function of SNOW 3G to a linear one. We present an algorithm for finding all k-input LUTs implementing a given k-variable Boolean function in the bitstream. We also introduce a key-independent bitstream exploration technique which reduces the complexity of some search tasks from exponential to linear. This idea has not been exploited in previous bitstream modification attacks. Finally, we propose a countermeasure which makes the identification of target LUTs an intractable problem by considerably increasing the number of candidate target LUTs.
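The core matching step, deciding whether a LUT's init bits compute a given Boolean function under some permutation of its input pins, can be sketched naively (for k = 6 LUTs the 6! pin orderings are still cheap to enumerate; this is an illustration, not the paper's optimized search):

```python
from itertools import permutations

def truth_table(f, k):
    """Truth table of a k-variable Boolean function, where index i
    encodes the inputs as x_j = (i >> j) & 1."""
    return tuple(f(*[(i >> j) & 1 for j in range(k)]) for i in range(1 << k))

def implements(lut_init, f, k):
    """True iff a LUT initialized with `lut_init` computes f under
    SOME permutation of its input pins (naive enumeration)."""
    target = truth_table(f, k)
    for perm in permutations(range(k)):
        # Re-index the LUT as if variable j were wired to pin perm[j].
        permuted = tuple(
            lut_init[sum(((i >> j) & 1) << perm[j] for j in range(k))]
            for i in range(1 << k))
        if permuted == target:
            return True
    return False
```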
Last updated:  2020-01-15
Proof-of-Stake Blockchain Protocols with Near-Optimal Throughput
Matthias Fitzi, Peter Gaži, Aggelos Kiayias, Alexander Russell
One of the most significant challenges in the design of blockchain protocols is increasing their transaction-processing throughput. In this work we put forth for the first time a formal execution model that makes it possible to express transaction throughput while supporting formal security arguments regarding persistence and liveness. We then present a protocol in the proof-of-stake setting achieving near-optimal throughput under adaptive active corruption of any minority of the stake.
Last updated:  2020-01-15
Analysis on Aigis-Enc: asymmetrical and symmetrical
Yupu Hu, Siyue Dong, Xingting Dong
Aigis-Enc is an encryption algorithm based on asymmetrical LWE. In this algorithm, a compression process is applied during both key generation and encryption (which is equivalent to adding some LWR noise), and encapsulation is then realized by the FO transformation. It is well known that the FO transformation is not considered when discussing CPA security. On the other hand, since the security reduction of LWR is hard to carry out, it is likewise not considered when discussing the CPA security of Aigis-Enc. Compression must, however, be taken into account when we discuss the decryption failure probability. In other words, when we discuss the CPA security of Aigis-Enc, the compression and the FO transformation are ignored; but when the decryption failure probability is discussed, compression is taken into consideration while the FO transformation remains ignored. Under these assumptions, the Aigis-Enc designers claim that the CPA security of Aigis-Enc is approximately equal to that of the symmetrical LWE scheme of the same scale, while the decryption failure probability of Aigis-Enc is far below that of the symmetrical LWE scheme of the same scale. In this paper, we make a thorough comparison between Aigis-Enc (with the recommended parameters) and the symmetrical LWE encryption scheme of the same scale. Our conclusions are as follows. (1) On CPA security: the former's is 160.898, the latter's is 161.836. (2) On computational complexity: in the key generation phase, the ratio of the former to the latter in the number of samples drawn from the distribution \(\begin{pmatrix} 0 & 1 \\ \frac{1}{2} & \frac{1}{2} \end{pmatrix}\) is 5:4; in the encryption phase, that ratio is 19:14; the other computations remain the same. (3) On decryption failure probability: the former's is $2^{-128.699}$, the latter's is $2^{-67.0582}$. This last comparison seems dramatic, but in fact we can slightly increase the traffic to keep the failure probability unchanged.
In other words, by compressing less we keep the decryption failure probability unchanged. Specifically, we change the parameters \((d_1, d_2, d_3)\) from \((9,9,4)\) to \((10,10,4)\), which means that a large part of the public key remains the same while the small part changes from 9 bits per entry to 10 bits, and a large part of the ciphertext changes from 9 bits per entry to 10 bits while the small part remains the same. Thus, the communication traffic increases by less than $\frac{1}{9}$, while the decryption failure probability stays below $2^{-128.699}$. We also generalize the attacks presented by the designers of Aigis-Enc, including primal attacks and dual attacks; our attacks are more general, simpler, and clearer. With them, we obtain the optimal attacks and "the optimal-optimal attack" on Aigis-Enc and on the symmetrical LWE scheme of the same scale.
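The traffic-versus-failure trade-off comes down to how much rounding noise compression adds. A Kyber-style compress/decompress pair (with an illustrative modulus, not Aigis-Enc's actual parameters) makes the effect of one extra bit concrete:

```python
Q = 7681   # illustrative LWE modulus, not Aigis-Enc's actual parameter

def compress(x: int, d: int) -> int:
    """Round x in Z_Q down to d bits: round(2^d * x / Q) mod 2^d."""
    return (((x << d) + Q // 2) // Q) % (1 << d)

def decompress(c: int, d: int) -> int:
    """Map the d-bit value back: round(Q * c / 2^d)."""
    return (c * Q + (1 << (d - 1))) >> d

def rounding_error(x: int, d: int) -> int:
    """Distance (mod Q) between x and its compress/decompress image."""
    e = (decompress(compress(x, d), d) - x) % Q
    return min(e, Q - e)
```

One extra compression bit roughly halves the worst-case rounding noise, which is exactly the lever used above: keep a bit more (9 to 10 bits per entry), pay slightly more traffic, and the decryption failure probability drops.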
Last updated:  2020-01-13
Constant-round Dynamic Group Key Exchange from RLWE Assumption
Rakyong Choi, Dongyeon Hong, Kwangjo Kim
In this paper, we propose a novel lattice-based group key exchange protocol with dynamic membership. Our protocol is constructed by generalizing the Dutta-Barua protocol to the RLWE setting, inspired by Apon et al.'s recent paper at PQCrypto 2019. We obtain our (static) group key exchange protocol from that of Apon et al. by modifying its third round and computation step. Then, we present both authenticated and dynamic group key exchange protocols with Join and Leave algorithms. The number of rounds for authenticated group key exchange remains the same as for the unauthenticated one. Our protocol is also scalable, in the sense that the number of rounds does not change with the number of group participants. Assuming the hardness of the RLWE problem and the unforgeability of digital signatures, we give a full security proof for the (un-)authenticated (dynamic) group key exchange protocols.
Last updated:  2021-04-30
SkyEye: A Traceable Scheme for Blockchain
Tianjun Ma, Haixia Xu, Peili Li
Many studies focus on blockchain privacy protection. Unfortunately, privacy protection brings some issues (e.g., the money-laundering problem). Tracing users' identities is a critical step in addressing these issues. When each user's identity in the blockchain data is determined, the regulator can carry out regulatory operations (such as Big Data analysis) to decide who should be punished or who should own the lost data. In this paper, we propose SkyEye, a traceable scheme for blockchains that can be applied to a class of blockchain applications. SkyEye enables the regulator to trace users' identities. Moreover, we demonstrate the security of SkyEye under specific cryptographic assumptions. Finally, we implement two prototypes of SkyEye, and evaluate the running time and related data storage requirements by running these prototypes.
Last updated:  2020-02-08
Scalable Open-Vote Network on Ethereum
Mohamed Seifelnasr, Hisham S. Galal, Amr M. Youssef
McCorry et al. (Financial Cryptography 2017) presented the first implementation of a decentralized self-tallying voting protocol on Ethereum. However, their implementation did not scale beyond 40 voters, since all the computations were performed on the smart contract. In this paper, we tackle this problem by delegating the bulk of the computation to an off-chain untrusted administrator in a verifiable manner. Specifically, the administrator tallies the votes off-chain and publishes a Merkle tree that encodes the tallying computation trace. Then, the administrator submits the Merkle tree root and the tally result to the smart contract. Subsequently, the smart contract transitions to an intermediate phase in which at least a single honest voter can contest the administrator's claimed result if it was not computed correctly. Then, in the worst case, the smart contract verifies the dispute at the cost of an elliptic curve point addition and scalar multiplication, plus two Merkle proofs of membership which are logarithmic in the number of voters. This allows our protocol to achieve higher scalability without sacrificing public verifiability or voters' privacy. To assess our protocol, we implemented an open-source prototype on Ethereum and carried out multiple experiments for different numbers of voters. The results of our implementation confirm the scalability and efficiency of our proposed solution, which does not exceed the current block gas limit for any practical number of voters.
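The dispute step relies on standard Merkle proofs of membership. A minimal sketch (illustrative Python, whereas the contract itself would be Solidity, and with SHA-256 standing in for Ethereum's keccak256):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _pad(level):
    # Duplicate the last node when a level has odd size.
    return level + level[-1:] if len(level) % 2 else level

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = _pad(level)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes (plus left/right position) from leaf to root."""
    level, proof = [h(leaf) for leaf in leaves], []
    while len(level) > 1:
        level = _pad(level)
        proof.append((level[index ^ 1], index % 2))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    """The logarithmic membership check the contract performs when a
    voter disputes the claimed tally."""
    node = h(leaf)
    for sibling, is_right in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root
```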
Last updated:  2020-01-13
A New Approach for the Implementation of Binary Matrices Using SLP Applications
Mahdi Sajadieh, Mohsen Mousavi
In this paper, we propose a method for implementing binary matrices with a low XOR cost. First, using a random-iterative method, we obtain a list S from a binary matrix A. Then, based on the list S, we construct a binary matrix B. Next, we find a relation between the implementations of A and B; in other words, from an implementation of the matrix B we get a low-cost implementation of the matrix A. We also show that the implementation of an MDS matrix M depends on the form of the binary matrix used to construct the binary form of M. In addition, we propose a heuristic algorithm to implement MDS matrices. The best result of this paper is an implementation of an 8 × 8 involutory MDS matrix over 8-bit words with 408 XOR gates. The Paar algorithm is used as an SLP application to obtain the implementations in this paper.
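The Paar-style common-subexpression search that such SLP applications perform can be sketched as a greedy heuristic (a simplified version for intuition, not the paper's exact algorithm): repeatedly precompute the XOR of the variable pair shared by the most rows, then count what remains.

```python
from itertools import combinations

def naive_xor_count(matrix):
    """Schoolbook cost: one XOR per row term beyond the first."""
    return sum(sum(row) - 1 for row in matrix if any(row))

def paar_xor_count(matrix):
    """Greedy Paar-style heuristic: replace the most frequent shared
    variable pair with a single precomputed XOR, and repeat."""
    rows = [{j for j, bit in enumerate(row) if bit} for row in matrix]
    gates, next_var = 0, len(matrix[0])
    while True:
        best_freq, best_pair = 1, None
        variables = sorted(set().union(*rows))
        for a, b in combinations(variables, 2):
            freq = sum(1 for r in rows if a in r and b in r)
            if freq > best_freq:
                best_freq, best_pair = freq, (a, b)
        if best_pair is None:              # no pair shared by two rows
            return gates + sum(len(r) - 1 for r in rows if r)
        a, b = best_pair
        gates += 1                         # t = x_a XOR x_b, computed once
        for r in rows:
            if a in r and b in r:
                r -= {a, b}
                r.add(next_var)            # reuse t in every such row
        next_var += 1
```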
Last updated:  2020-01-13
Locally Decodable Codes with Randomized Encoding
Kuan Cheng, Xin Li, Yu Zheng
We initiate a study of locally decodable codes with randomized encoding. Standard locally decodable codes are error correcting codes with a deterministic encoding function and a randomized decoding function, such that any desired message bit can be recovered with good probability by querying only a small number of positions in the corrupted codeword. This allows one to recover any message bit very efficiently in sub-linear or even logarithmic time. Besides this straightforward application, locally decodable codes have also found many other applications such as private information retrieval, secure multiparty computation, and average-case complexity. However, despite extensive research, the tradeoff between the rate of the code and the number of queries is somewhat disappointing. For example, the best known constructions still need super-polynomially long codeword length even with a logarithmic number of queries, and need a polynomial number of queries to achieve a constant rate. In this paper, we show that by using a randomized encoding, in several models we can achieve significantly better rate-query tradeoff. In addition, our codes work for both the standard Hamming errors, and the more general and harder edit errors.
Last updated:  2021-02-17
K-Cipher: A Low Latency, Bit Length Parameterizable Cipher
Michael Kounavis, Sergej Deutsch, Santosh Ghosh, David Durham
We present the design of a novel low-latency, bit-length-parameterizable cipher called the ``K-Cipher''. K-Cipher is particularly useful to applications that need to support ultra-low-latency encryption at arbitrary ciphertext lengths. One can think of a range of networking, gaming and computing applications that may require encrypting data at unusual block lengths for many different reasons, such as to make space for other unencrypted state values. Furthermore, in modern applications, encryption is typically required to complete within stringent time frames in order not to affect performance. K-Cipher has been designed to meet these requirements. In this paper we present the K-Cipher design and specification and discuss its security properties. Our analysis indicates that K-Cipher is secure against both known-ciphertext and adaptive chosen-plaintext adversaries. Finally, we present synthesis results for 32-bit and 64-bit K-Cipher encrypt datapaths. Our results show that the encrypt datapaths complete in no more than 767 psec, or 3 clock cycles at 3.9-4.9 GHz frequencies, with a maximum area requirement of 1875 um^2.
Last updated:  2020-01-13
Differentially-Private Multi-Party Sketching for Large-Scale Statistics
Seung Geol Choi, Dana Dachman-Soled, Mukul Kulkarni, Arkady Yerukhimovich
We consider a scenario where multiple organizations holding large amounts of sensitive data from their users wish to compute aggregate statistics on this data while protecting the privacy of individual users. To support large-scale analytics, we investigate how this privacy can be provided for sketching algorithms running in time sub-linear in the input size. We begin with the well-known LogLog sketch for computing the number of unique elements in a data stream. We show that this algorithm already achieves differential privacy (even without adding any noise) when computed using a private hash function by a trusted curator. Next, we show how to eliminate this requirement of a private hash function by injecting a small amount of noise, allowing us to instantiate an efficient LogLog protocol for the multi-party setting. To demonstrate the practicality of this approach, we run extensive experiments on multiple datasets, including the publicly available IP address data set from the University of Michigan's scans of the internet IPv4 space, to determine the tradeoffs among efficiency, privacy and accuracy of our implementation for varying numbers of parties and input sizes. Finally, we generalize our approach for the LogLog sketch and obtain a general framework for constructing multi-party differentially private protocols for several other sketching algorithms.
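The starting point of the paper is the classical (Durand–Flajolet) LogLog sketch for distinct-element counting. A minimal sketch of the non-private estimator follows; the SHA-256-based hash and the bucket count below are illustrative assumptions, not the paper's instantiation:

```python
import hashlib

def _hash64(x) -> int:
    """Map an item to 64 roughly uniform bits."""
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest()[:8], "big")

class LogLog:
    """Durand-Flajolet LogLog sketch: m buckets, each storing the maximum
    'rank' (position of the first set bit) among hashes routed to it."""
    ALPHA = 0.39701  # asymptotic bias-correction constant, valid for m >= 64

    def __init__(self, b: int = 8):
        self.b = b
        self.m = 1 << b          # number of buckets
        self.M = [0] * self.m    # per-bucket maximum rank

    def add(self, x):
        hv = _hash64(x)
        j = hv & (self.m - 1)    # low b bits choose the bucket
        w = hv >> self.b         # remaining bits determine the rank
        rank = 1
        while rank <= 64 - self.b and not (w & 1):
            w >>= 1
            rank += 1
        self.M[j] = max(self.M[j], rank)

    def estimate(self) -> float:
        # alpha * m * 2^(arithmetic mean of bucket maxima)
        return self.ALPHA * self.m * 2 ** (sum(self.M) / self.m)
```

With b = 8 (m = 256 buckets) the standard error is about 1.30/sqrt(m), i.e. roughly 8%, while the sketch itself occupies only a few hundred bytes regardless of stream length.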
Last updated:  2020-01-30
Verified Security of BLT Signature Scheme
Denis Firsov, Ahto Buldas, Ahto Truu, Risto Laanoja
The majority of real-world applications of digital signatures use timestamping to ensure non-repudiation in the face of possible key revocations. This observation led Buldas, Laanoja, and Truu to a server-assisted digital signature scheme built around cryptographic timestamping. In this paper, we report on machine-checked proofs of existential unforgeability under the chosen-message attack (EUF-CMA) for several variations of the BLT digital signature scheme. The proofs are developed and verified using the EasyCrypt framework, which provides interactive theorem proving supported by state-of-the-art SMT solvers.
Last updated:  2020-01-10
On Roots Factorization for PQC Algorithms
Alexander Maximov
In this paper we consider several methods for the efficient extraction of roots of a polynomial over large finite fields. The computation of such roots is often the performance bottleneck in multivariate quantum-immune cryptosystems, such as the HFEv-based Quartz, Gui, etc. We also discuss a number of techniques for the fast computation of traces as part of the factorization process. These optimization methods can significantly improve the performance of cryptosystems in which root factorization is a component.
Last updated:  2020-10-16
Post-Quantum Secure Architectures for Automotive Hardware Secure Modules
Wen Wang, Marc Stöttinger
The rapid development of information technology in the automotive industry has driven increasing requirements on incorporating security functionalities in the in-vehicle architecture, usually realized by adding a Hardware Secure Module (HSM) to the Electronic Central Unit (ECU). Secure communications can then be enforced by carrying out secret cryptographic computations within the HSM using the embedded hardware accelerators. However, there is no common standard for designing the architecture of an automotive HSM. The automotive industry desires a future common automotive HSM design that not only meets the increasing performance demands, but also defends against future attackers equipped with large-scale quantum computers. The prospect of such quantum computers motivates the investigation of post-quantum cryptography (PQC), which will retain the security of an HSM in the future. We analyzed the candidates in NIST's PQC standardization process and propose new sets of hardware accelerators for the future generation of automotive HSMs. Our evaluation results show that building a post-quantum secure automotive HSM is feasible and can meet the hard requirements imposed by a modern vehicle ECU.
Last updated:  2020-01-09
Single Secret Leader Election
Dan Boneh, Saba Eskandarian, Lucjan Hanzlik, Nicola Greco
In a Single Secret Leader Election (SSLE), a group of participants aim to randomly choose exactly one leader from the group, with the restriction that the identity of the leader will be known to the chosen leader and nobody else. At a later time, the elected leader should be able to publicly reveal her identity and prove that she has won the election. The election process itself should work properly even if many registered users are passive and do not send any messages. Among the many applications of SSLEs, their potential for enabling more efficient proof-of-stake based cryptocurrencies has recently received increased attention. This paper formally defines SSLE schemes and presents three constructions that provide varying security and performance properties. First, as an existence argument, we show how to realize an ideal SSLE using indistinguishability obfuscation. Next, we show how to build SSLE from low-depth threshold fully homomorphic encryption (TFHE) via a construction which can be instantiated with a circuit of multiplicative depth as low as 10, for realistically-sized secret leader elections. Finally, we show a practical scheme relying on DDH that achieves a slightly relaxed notion of security but which boasts extremely lightweight computational requirements.
Last updated:  2020-01-09
The Arwen Trading Protocols (Full Version)
Ethan Heilman, Sebastien Lipmann, Sharon Goldberg
The Arwen Trading Protocols are layer-two blockchain protocols for traders to securely trade cryptocurrencies at a centralized exchange, without ceding custody of their coins to the exchange. Before trading begins, traders deposit their coins in an on-blockchain escrow where the agent of escrow is the blockchain itself. Each trade is backed by the coins locked in escrow. Each trade is fast, because it happens off-blockchain, and secure, because atomic swaps prevent even a hacked exchange from taking custody of a trader's coins. Arwen is designed to work even with the "lowest common denominator" of blockchains, namely Bitcoin-derived coins without SegWit support. As a result, Arwen supports essentially all "Bitcoin-derived" coins, e.g., BTC, LTC, BCH, ZEC, as well as Ethereum. Our protocols support limit and RFQ order types. We implemented our RFQ protocol, which is available for use at arwen.io.
Last updated:  2020-01-09
Threshold Multi-Signature with an Offline Recovery Party
Riccardo Longo, Alessio Meneghetti, Massimiliano Sala
Key custody is a sensitive aspect of cryptocurrencies. The employment of a custodian service together with threshold multi-party signatures helps to manage secret keys more safely and effectively, e.g. allowing the recovery of crypto-assets when users lose their own keys. Advancing from a protocol by Gennaro et al., we propose a protocol with two main properties. First, it allows the recovery party to remain offline during the enrollment of any user, solving the real-life problem of keeping only one trusted third party online. Second, our multi-party signature is compatible with a deterministic derivation of public and private keys.
Last updated:  2020-01-07
Differential Random Fault Attacks on certain CAESAR Stream Ciphers (Supplementary Material)
Kenneth Koon-Ho Wong, Harry Bartlett, Leonie Simpson, Ed Dawson
This document contains supplementary material to the paper with the same title available from the proceedings of the International Conference on Information Security and Cryptology (ICISC) 2019. In this supplementary material, we demonstrate that the random fault attack strategy described in the full paper can be applied to ciphers in the MORUS family, resulting in partial state recovery for these ciphers.
Last updated:  2020-01-14
eSIDH: the revenge of the SIDH
Daniel Cervantes-Vázquez, Eduardo Ochoa-Jiménez, Francisco Rodríguez-Henríquez
The Supersingular Isogeny-based Diffie-Hellman key exchange protocol (SIDH) was introduced by Jao and De Feo in 2011. SIDH operates on supersingular elliptic curves defined over quadratic extension fields of the form GF($p^2$), where $p$ is a large prime of the form $p = 4^{e_A} 3^{e_B} - 1$, with $e_A, e_B$ positive integers such that $4^{e_A} \approx 3^{e_B}.$ In this paper, a variant of the SIDH protocol that we dub extended SIDH (eSIDH) is presented. The eSIDH variant makes use of primes of the form $p = 4^{e_A} \ell_B^{e_B}\ell_C^{e_C} f - 1.$ Here $\ell_B, \ell_C$ are two small prime numbers; $f$ is a cofactor; and $e_A, e_B$ and $e_C$ are positive integers such that $4^{e_A} \approx \ell_B^{e_B}\ell_C^{e_C}.$ We show that for many relevant instantiations of the SIDH protocol, this new family of primes enjoys faster field arithmetic than that associated with traditional SIDH primes. Furthermore, the proposed eSIDH protocol preserves the length and format of SIDH private/public keys, and its richer opportunities for parallelism yield a noticeable speedup factor when implemented on multi-core platforms. Using a single-core SIDH $p_{751}$ implementation as a baseline, a parallel eSIDH $p_{765}$ instantiation yields acceleration factors of $1.05, 1.30$ and $1.41$ when implemented on $k = \{1, 2, 3\}$-core processors. In addition, eSIDH $p_{765}$ yields acceleration factors of $1.050, 1.160$ and $1.162$ when both protocols are implemented on $k = \{1, 2, 3\}$-core processors. To our knowledge, this work reports the first multi-core implementation of SIDH.
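Primes of the eSIDH shape $p = 4^{e_A} \ell_B^{e_B}\ell_C^{e_C} f - 1$ with balanced sides can be searched for directly. A toy sketch of such a search is shown below; the small parameters are purely illustrative assumptions, not the paper's $p_{765}$:

```python
import math
import random

def is_probable_prime(n: int, rounds: int = 30) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def esidh_prime(e_a, l_b, e_b, l_c, e_c, max_f=10000):
    """Find the smallest cofactor f such that p = 4^eA * lB^eB * lC^eC * f - 1
    is (probably) prime, and report the balance ratio of the two sides."""
    base = 4 ** e_a * l_b ** e_b * l_c ** e_c
    for f in range(1, max_f + 1):
        p = base * f - 1
        if is_probable_prime(p):
            balance = math.log2(4 ** e_a) / math.log2(l_b ** e_b * l_c ** e_c)
            return p, f, balance
    return None
```

The returned balance ratio measures how close $\log_2 4^{e_A}$ is to $\log_2(\ell_B^{e_B}\ell_C^{e_C})$, i.e. how well the parameters satisfy the condition $4^{e_A} \approx \ell_B^{e_B}\ell_C^{e_C}$ stated above.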