All papers in 2019 (Page 5 of 1498 results)

Last updated:  2019-09-29
On the Feasibility of Fine-Grained TLS Security Configurations in Web Browsers Based on the Requested Domain Name
Eman Salem Alashwali, Kasper Rasmussen
Most modern web browsers sacrifice optimal TLS security for backward compatibility. They apply coarse-grained TLS configurations that, by default, support legacy versions of the protocol with known design weaknesses as well as weak ciphersuites that provide fewer security guarantees (e.g. no forward secrecy), and they silently fall back to these if the server selects them. This introduces various risks, including downgrade attacks such as the POODLE attack, which exploits the browser's silent fallback mechanism to downgrade the protocol version and then exploit the legacy version's flaws. To achieve a better balance between security and backward compatibility, we propose a mechanism for fine-grained TLS configurations in web browsers based on the sensitivity of the domain name in the HTTPS request, using a whitelisting technique. That is, the browser enforces optimal TLS configurations for connections going to sensitive domains while enforcing default configurations for the rest of the connections. We demonstrate the feasibility of our proposal by implementing a proof-of-concept as a Firefox browser extension. We envision this mechanism as a built-in security feature in web browsers, e.g. a button similar to the “Bookmark” button in Firefox and a standardised HTTP header, to augment browsers' security.
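As a rough illustration of the whitelisting idea (not the authors' Firefox extension), the following Python sketch hardens the client-side TLS configuration only for domains on a hypothetical sensitive-domain list, while leaving default settings for everything else:
```python
# Minimal sketch of per-domain TLS hardening (illustrative only; the paper's
# proof-of-concept is a Firefox extension, not this Python client code).
import ssl

SENSITIVE_DOMAINS = {"bank.example", "mail.example"}  # hypothetical whitelist

def context_for(domain: str) -> ssl.SSLContext:
    ctx = ssl.create_default_context()        # browser-like default configuration
    if domain in SENSITIVE_DOMAINS:
        # Optimal configuration for sensitive domains: no legacy versions,
        # hence no silent fallback to them.
        ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    return ctx

# Usage: wrap the socket with context_for("bank.example") before connecting.
```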
Last updated:  2021-07-14
Full-Threshold Actively-Secure Multiparty Arithmetic Circuit Garbling
Eleftheria Makri, Tim Wood
In this work, we show how to garble arithmetic circuits with full active security in the general multiparty setting, secure in the full-threshold setting (that is, when only one party is assumed honest). Our solution allows interfacing Boolean garbled circuits with arithmetic garbled circuits. Previous works in the arithmetic circuit domain focused on the 2-party setting, or on semi-honest security assuming an honest majority -- notably, the work of Ben-Efraim (Asiacrypt 2018) in the semi-honest, honest-majority security model, which we adapt and extend. As an additional contribution, we improve on Ben-Efraim's selector gate. A selector gate is a gate that, given two arithmetic inputs and one binary input, outputs one of the arithmetic inputs based on the value of the selection bit. Our new construction for the selector gate reduces the communication cost to almost half of that of Ben-Efraim's gate. This result applies both to the semi-honest and to the active security model.
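For intuition, the cleartext functionality of a selector gate is just a multiplexer over a prime field. The sketch below (plain Python, arbitrary modulus) shows only this arithmetic identity; the paper's contribution is an actively secure multiparty garbling of the gate, which is not attempted here:
```python
# Cleartext selector gate: returns x if the selection bit s is 1, else y.
# Functionality only -- no garbling, no secret sharing.
P = 2**61 - 1  # arbitrary prime modulus for the arithmetic wires

def selector(s: int, x: int, y: int) -> int:
    assert s in (0, 1)
    return (s * x + (1 - s) * y) % P

assert selector(1, 7, 42) == 7
assert selector(0, 7, 42) == 42
```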
Last updated:  2022-01-24
Anonymous Transactions with Revocation and Auditing in Hyperledger Fabric
Dmytro Bogatov, Angelo De Caro, Kaoutar Elkhiyaoui, Björn Tackmann
In permissioned blockchain systems, participants are admitted to the network by receiving a credential from a certification authority. Each transaction processed by the network is required to be authorized by a valid participant who authenticates via her credential. Use case settings where privacy is a concern thus require proper privacy-preserving authentication and authorization mechanisms. Anonymous credential schemes allow a user to authenticate while showing only those attributes necessary in a given setting. This makes them a great tool for authorizing transactions in permissioned blockchain systems based on the user's attributes. In most setups, there is one distinct certification authority for each organization in the network. Consequently, the use of plain anonymous credential schemes still leaks the association of a user to the organization that issued her credentials. Camenisch, Drijvers and Dubovitskaya (CCS 2017) therefore suggest the use of a delegatable anonymous credential scheme to also hide that remaining piece of information. In this paper, we propose revocation and auditability - two functionalities that are necessary for real-world adoption - and integrate them into the scheme. We present the complete protocol, its security definition and proof, and provide an open-source implementation. Our distributed-setting performance measurements show that the integration of the scheme with Hyperledger Fabric, while incurring an overhead in comparison to less privacy-preserving solutions, is practical for settings with stringent privacy requirements.
Last updated:  2020-02-09
Proof-of-Burn
Kostis Karantias, Aggelos Kiayias, Dionysis Zindros
Proof-of-burn has been used as a mechanism to destroy cryptocurrency in a verifiable manner. Despite its well known use, the mechanism has not been previously formally studied as a primitive. In this paper, we put forth the first cryptographic definition of what a proof-of-burn protocol is. It consists of two functions: First, a function which generates a cryptocurrency address. When a user sends money to this address, the money is irrevocably destroyed. Second, a verification function which checks that an address is really unspendable. We propose the following properties for burn protocols. Unspendability, which mandates that an address which verifies correctly as a burn address cannot be used for spending; binding, which allows associating metadata with a particular burn; and uncensorability, which mandates that a burn address is indistinguishable from a regular cryptocurrency address. Our definition captures all previously known proof-of-burn protocols. Next, we design a novel construction for burning which is simple and flexible, making it compatible with all existing popular cryptocurrencies. We prove our scheme is secure in the Random Oracle model. We explore the application of destroying value in a legacy cryptocurrency to bootstrap a new one. The user burns coins in the source blockchain and subsequently creates a proof-of-burn, a short string proving that the burn took place, which she then submits to the destination blockchain to be rewarded with a corresponding amount. The user can use a standard wallet to conduct the burn without requiring specialized software, making our scheme user friendly. We propose burn verification mechanisms with different security guarantees, noting that the target blockchain miners do not necessarily need to monitor the source blockchain. Finally, we implement the verification of Bitcoin burns as an Ethereum smart contract and experimentally measure that the gas costs needed for verification are as low as standard Bitcoin transaction fees, illustrating that our scheme is practical.
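To make the two-function interface concrete, here is a toy hash-perturbation instantiation in Python. It follows the flavor of the paper's approach (derive an address-like string from a tag and perturb it so that no corresponding spending key can be known), but the exact encoding below is an assumption for illustration, not the paper's specification or any real address format:
```python
# Toy proof-of-burn interface: GenBurnAddr / BurnVerify (illustrative sketch;
# the bit-flip-on-a-hash encoding is a stand-in, not a faithful construction).
import hashlib

def gen_burn_addr(tag: bytes) -> bytes:
    """Derive an 'address' from a tag and flip one bit so that, heuristically,
    nobody can know a spending key for it."""
    digest = bytearray(hashlib.sha256(tag).digest())
    digest[-1] ^= 0x01            # the perturbation that destroys spendability
    return bytes(digest)

def burn_verify(tag: bytes, addr: bytes) -> bool:
    """Anyone can check that addr really is the burn address bound to tag."""
    return addr == gen_burn_addr(tag)

addr = gen_burn_addr(b"bootstrap-destination-chain-account-42")
assert burn_verify(b"bootstrap-destination-chain-account-42", addr)
```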
Last updated:  2020-05-02
Secure Computation with Preprocessing via Function Secret Sharing
Elette Boyle, Niv Gilboa, Yuval Ishai
We propose a simple and powerful new approach for secure computation with input-independent preprocessing, building on the general tool of function secret sharing (FSS) and its efficient instantiations. Using this approach, we can make efficient use of correlated randomness to compute any type of gate, as long as a function class naturally corresponding to this gate admits an efficient FSS scheme. Our approach can be viewed as a generalization of the "TinyTable" protocol of Damgard et al. (Crypto 2017), where our generalized variant uses FSS to achieve exponential efficiency improvement for useful types of gates. By instantiating this general approach with efficient PRG-based FSS schemes of Boyle et al. (Eurocrypt 2015, CCS 2016), we can implement useful nonlinear gates for equality tests, integer comparison, bit-decomposition and more with optimal online communication and with a relatively small amount of correlated randomness. We also provide a unified and simplified view of several existing protocols in the preprocessing model via the FSS framework. Our positive results provide a useful tool for secure computation tasks that involve secure integer comparisons or conversions between arithmetic and binary representations. These arise in the contexts of approximating real-valued functions, machine-learning classification, and more. Finally, we study the necessity of the FSS machinery that we employ, in the simple context of secure string equality testing. First, we show that any "online-optimal" secure equality protocol implies an FSS scheme for point functions, which in turn implies one-way functions. Then, we show that information-theoretic secure equality protocols with relaxed optimality requirements would follow from the existence of big families of "matching vectors." This suggests that proving strong lower bounds on the efficiency of such protocols would be difficult.
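As a toy illustration of the online phase of an FSS-based equality test, the sketch below uses a deliberately non-succinct "FSS" (additively shared truth tables over a tiny domain) as a stand-in for the PRG-based schemes of Boyle et al.; it shows only the dataflow (mask, open, locally evaluate), not a secure or efficient instantiation:
```python
# Toy FSS-style equality test in the preprocessing model (illustration only).
# The "FSS" is an additively shared truth table over a tiny domain, standing in
# for real PRG-based distributed point functions.
import random

N = 16  # tiny input domain Z_N; real protocols handle large domains succinctly

def preprocess():
    """Dealer: secret-share a random mask r and the point function f_r."""
    r = random.randrange(N)
    r0 = random.randrange(N); r1 = (r - r0) % N           # additive shares of r
    table = [1 if z == r else 0 for z in range(N)]        # f_r(z) = [z == r]
    k0 = [random.randrange(2) for _ in range(N)]
    k1 = [(table[z] - k0[z]) % 2 for z in range(N)]       # shares of the table
    return (r0, k0), (r1, k1)

def online(x, y, key0, key1):
    """Party 0 holds x, party 1 holds y; output shares of the bit [x == y]."""
    (r0, k0), (r1, k1) = key0, key1
    # Each party publishes its masked value; z = x - y + r hides x - y since r is uniform.
    z = ((x + r0) + (-y + r1)) % N
    return k0[z], k1[z]                                   # shares of f_r(z)

key0, key1 = preprocess()
s0, s1 = online(7, 7, key0, key1)
assert (s0 + s1) % 2 == 1      # equal inputs: shared bit reconstructs to 1
key0, key1 = preprocess()
s0, s1 = online(3, 9, key0, key1)
assert (s0 + s1) % 2 == 0
```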
Last updated:  2020-02-21
Is Information-Theoretic Topology-Hiding Computation Possible?
Marshall Ball, Elette Boyle, Ran Cohen, Tal Malkin, Tal Moran
Topology-hiding computation (THC) is a form of multi-party computation over an incomplete communication graph that maintains the privacy of the underlying graph topology. Existing THC protocols consider an adversary that may corrupt an arbitrary number of parties, and rely on cryptographic assumptions such as DDH. In this paper we address the question of whether information-theoretic THC can be achieved by taking advantage of an honest majority. In contrast to the standard MPC setting, this problem has remained open in the topology-hiding realm, even for simple "privacy-free" functions like broadcast, and even when considering only semi-honest corruptions. We uncover a rich landscape of both positive and negative answers to the above question, showing that what types of graphs are used and how they are selected is an important factor in determining the feasibility of hiding topology information-theoretically. In particular, our results include the following. We show that topology-hiding broadcast (THB) on a line with four nodes, secure against a single semi-honest corruption, implies key agreement. This result extends to broader classes of graphs, e.g., THB on a cycle with two semi-honest corruptions. On the other hand, we provide the first feasibility result for information-theoretic THC: for the class of cycle graphs, with a single semi-honest corruption. Given the strong impossibilities, we put forth a weaker definition of distributional-THC, where the graph is selected from some distribution (as opposed to worst-case). We present a formal separation between the definitions, by showing a distribution for which information theoretic distributional-THC is possible, but even topology-hiding broadcast is not possible information-theoretically with the standard definition. We demonstrate the power of our new definition via a new connection to adaptively secure low-locality MPC, where distributional-THC enables parties to "reuse" a secret low-degree communication graph even in the face of adaptive corruptions.
Last updated:  2019-10-22
Quantum Random Oracle Model with Auxiliary Input
Minki Hhan, Keita Xagawa, Takashi Yamakawa
The random oracle model (ROM) is an idealized model where hash functions are modeled as random functions that are only accessible as oracles. Although the ROM has been used for proving many cryptographic schemes, it has (at least) two problems. First, the ROM does not capture quantum adversaries. Second, it does not capture non-uniform adversaries that perform preprocessing. To deal with these problems, Boneh et al. (Asiacrypt'11) proposed using the quantum ROM (QROM) to argue post-quantum security, and Unruh (CRYPTO'07) proposed the ROM with auxiliary input (ROM-AI) to argue security against preprocessing attacks. However, to the best of our knowledge, no work has dealt with the above two problems simultaneously. In this paper, we consider a model that we call the QROM with (classical) auxiliary input (QROM-AI) that deals with the above two problems simultaneously and study the security of cryptographic primitives in this model. That is, we give security bounds for one-way functions, pseudorandom generators, (post-quantum) pseudorandom functions, and (post-quantum) message authentication codes in the QROM-AI. We also study security bounds in the presence of quantum auxiliary inputs. Specifically, we show a security bound for one-wayness of random permutations (instead of random functions) in the presence of quantum auxiliary inputs. This resolves an open problem posed by Nayebi et al. (QIC'15). In the context of complexity theory, this implies $ \mathsf{NP}\cap \mathsf{coNP} \not\subseteq \mathsf{BQP/qpoly}$ relative to a random permutation oracle, which also answers an open problem posed by Aaronson (ToC'05).
Last updated:  2019-09-29
Cerberus Channels: Incentivizing Watchtowers for Bitcoin
Georgia Avarikioti, Orfeas Stefanos Thyfronitis Litos, Roger Wattenhofer
Bitcoin and similar blockchain systems have a limited transaction throughput because each transaction must be processed by all parties, on-chain. Payment channels relieve the blockchain by allowing parties to execute transactions off-chain while maintaining the on-chain security guarantees, i.e., no party can be cheated out of their funds. However, to maintain these guarantees all parties must follow blockchain updates ardently. To alleviate this issue, a channel party can hire a "watchtower" to periodically check the blockchain for fraud on its behalf. However, watchtowers will only do their job properly if there are financial incentives, fees, and punishments. There are known solutions, but these need complex smart contracts, and as such are not applicable to Bitcoin's simple script language. This raises the natural question of whether incentivized watchtowers are at all possible in a system like Bitcoin. In this work, we answer this question affirmatively, by introducing Cerberus channels, an extension of Lightning channels. Cerberus channels reward watchtowers while remaining secure against bribing and collusion; thus participants can safely go offline for an extended period of time. We show that Cerberus channels are correct, and provide a proof-of-concept implementation in the Bitcoin script language.
Last updated:  2019-09-29
Short Paper: XOR Arbiter PUFs have Systematic Response Bias
Nils Wisiol, Niklas Pirnay
We demonstrate that XOR Arbiter PUFs with an even number of arbiter chains have inherently biased responses, even if all arbiter chains are perfectly unbiased. This refutes the belief that XOR Arbiter PUFs are, like Arbiter PUFs, unbiased when ideally implemented, and proves that independently manufactured Arbiter PUFs are not statistically independent. As an immediate result of this work, we suggest using XOR Arbiter PUFs with odd numbers of arbiter chains whenever possible. Furthermore, our analysis technique can be applied to future types of PUF designs and can hence be used to identify design weaknesses, in particular when using Arbiter PUFs as building blocks and when developing designs with challenge pre-processing. Finally, we discuss consequences for the parameter recommendations of the Interpose PUF. Investigating the reason for the systematic bias of XOR Arbiter PUFs, we show that Arbiter PUFs suffer from a systematic uniqueness weakness.
Last updated:  2019-09-29
Low Complexity MDS Matrices Using $GF(2^n)$ SPB or GPB
Xinggu Chen, Haining Fan
While $GF(2^n)$ polynomial bases are widely used in symmetric-key components, e.g. MDS matrices, we show that even lower time/space complexities can be achieved by using $GF(2^n)$ shifted polynomial bases (SPB) or generalized polynomial bases (GPB).
Last updated:  2019-12-07
Lattice-Face Key Infrastructure (LFKI) for Quantum Resistant Computing
Josiah Johnson Umezurike
A new light is shed by exploring a hybrid system designed to exhibit both symmetric and asymmetric properties. LFKI is code-named the end-to-end cryptographic system for cloud, mobile, internet of things (IoT) and devices (ECSMID). Until now, not much had been done on lattice faces as a hybrid cryptographic solution. Herein, we do not restrict ourselves to randomized or deterministic reductions only. We embrace a collective approach to the age-old question of which problems in NP are hard enough to resist a quantum assailant. In particular, non-deterministic reduction is used to show that lattices are interesting hard problems within the set of NP-complete problems. Though the shortest vector problem (SVP) seems promising, it is nearly enough to facilitate and establish a lattice basis; an exception from the prior art [1]. The many configurations of their vertices seem to dismiss the useful properties of the dynamic faces abounding in various constructs. The elements of these faces, in the regions bounded by the vertices and edges, are of great interest to cryptography. When represented as numerical values, they serve as mathematical images of the basis distribution. It is demonstrated that each vector representation has the potential to generate a cryptographically secure number of keys. They follow somewhat rigid rules: deterministic, and yet a chaotic arrangement of the lattice vectors represented within a matrix. A fitting rule is already available with the necessary mechanisms to produce a 1:n relationship of one plaintext to many ciphertexts. The Open Knight Tour (OKT) can easily be modified to absorb larger matrices. We demonstrate that a theoretical quantum circuit has the controls to resist a quantum assailant using continuous noise, both in a quasi-patterned formation and in a random formation of homogeneous input yielding homomorphic outputs.
Last updated:  2020-10-06
KRNC: New Foundations for Permissionless Byzantine Consensus and Global Monetary Stability
Clinton Ehrlich, Anna Guzova
This paper applies biomimetic engineering to the problem of permissionless Byzantine consensus and achieves results that surpass the prior state of the art by four orders of magnitude. It introduces a biologically inspired asymmetric Sybil-resistance mechanism, Proof-of-Balance, which can replace symmetric Proof-of-Work and Proof-of-Stake weighting schemes. The biomimetic mechanism is incorporated into a permissionless blockchain protocol, Key Retroactivity Network Consensus (“KRNC”), which delivers ~40,000 times the security and speed of today’s decentralized ledgers. KRNC allows the fiat money that the public already owns to be upgraded with cryptographic inflation protection, eliminating the problems inherent in bootstrapping new currencies like Bitcoin and Ethereum. The paper includes two independently significant contributions to the literature. First, it replaces the non-structural axioms invoked in prior work with a new formal method for reasoning about trust, liveness, and safety from first principles. Second, it demonstrates how two previously overlooked exploits — book-prize attacks and pseudo-transfer attacks — collectively undermine the security guarantees of all prior permissionless ledgers.
Last updated:  2019-09-25
Cryptanalysis of a Protocol for Efficient Sorting on SHE Encrypted Data
Shyam Murthy, Srinivas Vivek
Sorting on encrypted data using Somewhat Homomorphic Encryption (SHE) schemes is currently inefficient in practice when the number of elements to be sorted is very large. Hence, alternative protocols that can efficiently perform computation and sorting on encrypted data are of interest. Recently, Kesarwani et al. (EDBT 2018) proposed a protocol for efficient sorting on data encrypted using an SHE scheme in a model where one of the two non-colluding servers holds the decryption key. The encrypted data to be sorted is transformed homomorphically by the first server using a randomly chosen monotonic polynomial with possibly large coefficients, and then the non-colluding server holding the decryption key decrypts, sorts, and conveys back the sorted order to the first server without learning the actual values except possibly for the order. In this work we demonstrate an attack on the above protocol that allows the non-colluding server holding the decryption key to recover the original plaintext inputs (up to a constant difference). Although our attack runs in time exponential in the size of the plaintext inputs and the degree of the polynomial (but polynomial in the size of the coefficients), we show that it is feasible for 32-bit inputs, hence accounting for several real-world scenarios. Of independent interest is our algorithm for recovering the integer inputs (up to a constant difference) by observing only the integer polynomial outputs.
Last updated:  2019-09-25
The SPHINCS+ Signature Framework
Daniel J. Bernstein, Andreas Hülsing, Stefan Kölbl, Ruben Niederhagen, Joost Rijneveld, Peter Schwabe
We introduce SPHINCS+, a stateless hash-based signature framework. SPHINCS+ has significant advantages over the state of the art in terms of speed, signature size, and security, and is among the nine remaining signature schemes in the second round of the NIST PQC standardization project. One of our main contributions in this context is a new few-time signature scheme that we call FORS. Our second main contribution is the introduction of tweakable hash functions and a demonstration of how they allow for a unified security analysis of hash-based signature schemes. We give a security reduction for SPHINCS+ using this abstraction and derive secure parameters in accordance with the resulting bound. Finally, we present speed results for our optimized implementation of SPHINCS+ and compare them to SPHINCS-256, Gravity-SPHINCS, and Picnic.
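To give a feel for the few-time scheme, here is a toy FORS-like signature in Python. It keeps the forest-of-Merkle-trees shape (split the message digest into k chunks, reveal one secret leaf plus its authentication path per chunk), but uses plain SHA-256, tiny parameters, and none of the tweakable-hash addressing of the actual SPHINCS+ specification:
```python
# Toy FORS-like few-time signature (illustrative sketch, NOT the SPHINCS+ spec).
import hashlib, os

A, K = 4, 8            # each tree has 2^A leaves; K trees; digest uses K*A bits
T = 1 << A

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def merkle_root_and_paths(leaves):
    """Return the root and, for each leaf index, its authentication path."""
    layers = [leaves]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append([H(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    paths = []
    for idx in range(len(leaves)):
        path, i = [], idx
        for layer in layers[:-1]:
            path.append(layer[i ^ 1])
            i >>= 1
        paths.append(path)
    return layers[-1][0], paths

def keygen():
    sk = [[os.urandom(32) for _ in range(T)] for _ in range(K)]
    roots = [merkle_root_and_paths([H(s) for s in tree])[0] for tree in sk]
    return sk, H(*roots)            # public key compresses all tree roots

def indices(msg):
    d = int.from_bytes(H(msg), 'big')
    return [(d >> (A * i)) & (T - 1) for i in range(K)]

def sign(sk, msg):
    sig = []
    for i, idx in enumerate(indices(msg)):
        _, paths = merkle_root_and_paths([H(s) for s in sk[i]])
        sig.append((sk[i][idx], paths[idx]))   # secret leaf + auth path
    return sig

def verify(pk, msg, sig):
    roots = []
    for (secret, path), idx in zip(sig, indices(msg)):
        node = H(secret)
        for sibling in path:
            node = H(node, sibling) if idx % 2 == 0 else H(sibling, node)
            idx >>= 1
        roots.append(node)
    return H(*roots) == pk

sk, pk = keygen()
assert verify(pk, b"hello", sign(sk, b"hello"))
```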
Last updated:  2019-09-25
Matrix PRFs: Constructions, Attacks, and Applications to Obfuscation
Yilei Chen, Minki Hhan, Vinod Vaikuntanathan, Hoeteck Wee
We initiate a systematic study of pseudorandom functions (PRFs) that are computable by simple matrix branching programs; we refer to these objects as “matrix PRFs”. Matrix PRFs are attractive due to their simplicity, strong connections to complexity theory and group theory, and recent applications in program obfuscation. Our main results are: * We present constructions of matrix PRFs based on the conjectured hardness of some simple computational problems pertaining to matrix products. * We show that any matrix PRF that is computable by a read-c, width w branching program can be broken in time poly(w^c); this means that any matrix PRF based on constant-width matrices must read each input bit omega(log lambda) times. Along the way, we simplify the “tensor switching lemmas” introduced in previous IO attacks. * We show that a subclass of the candidate local-PRG proposed by Barak et al. [Eurocrypt 2018] can be broken using simple matrix algebra. * We show that augmenting the CVW18 IO candidate with a matrix PRF provably immunizes the candidate against all known algebraic and statistical zeroizing attacks, as captured by a new and simple adversarial model.
Last updated:  2019-12-13
Distributed Vector-OLE: Improved Constructions and Implementation
Phillipp Schoppmann, Adrià Gascón, Leonie Reichert, Mariana Raykova
We investigate concretely efficient protocols for distributed oblivious linear evaluation over vectors (Vector-OLE). Boyle et al. (CCS 2018) proposed a protocol for secure distributed pseudorandom Vector-OLE generation using sublinear communication, but they did not provide an implementation. Their construction is based on a variant of the LPN assumption and assumes a distributed key generation protocol for single-point Function Secret Sharing (FSS), as well as an efficient batching scheme to obtain multi-point FSS. We show that this requirement can be relaxed, resulting in a weaker variant of FSS, for which we give an efficient protocol. This allows us to use efficient probabilistic batch codes that were also recently used for batched PIR by Angel et al. (S&P 2018). We construct a full Vector-OLE generator from our protocols, and compare it experimentally with alternative approaches. Our implementation parallelizes very well, and has low communication overhead in practice. For generating a VOLE of size $2^{20}$, our implementation only takes $0.52$s on 32 cores.
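For readers unfamiliar with the target correlation: a Vector-OLE gives one party a scalar x and a vector w, and the other party vectors u, v with w = u*x + v. The sketch below produces this correlation with a trusted dealer, purely to show what the distributed generator emulates; it says nothing about the LPN/FSS-based protocol itself:
```python
# Trusted-dealer mock of the Vector-OLE correlation (illustration only; the
# paper's point is to generate this correlation *without* a trusted dealer).
import random

P = 2**61 - 1   # arbitrary prime field

def deal_vole(length):
    u = [random.randrange(P) for _ in range(length)]
    v = [random.randrange(P) for _ in range(length)]
    x = random.randrange(P)
    w = [(ui * x + vi) % P for ui, vi in zip(u, v)]
    return (u, v), (x, w)      # party A gets (u, v); party B gets (x, w)

(u, v), (x, w) = deal_vole(8)
assert all(wi == (ui * x + vi) % P for ui, vi, wi in zip(u, v, w))
```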
Last updated:  2019-09-24
What's in a Downgrade? A Taxonomy of Downgrade Attacks in the TLS Protocol and Application Protocols Using TLS
Eman Salem Alashwali, Kasper Rasmussen
A number of important real-world protocols including the Transport Layer Security (TLS) protocol have the ability to negotiate various security-related choices such as the protocol version and the cryptographic algorithms to be used in a particular session. Furthermore, some insecure application-layer protocols such as the Simple Mail Transfer Protocol (SMTP) negotiate the use of TLS itself on top of the application protocol to secure the communication channel. These protocols are often vulnerable to a class of attacks known as downgrade attacks, which target this negotiation mechanism. In this paper we create the first taxonomy of TLS downgrade attacks. Our taxonomy classifies possible attacks with respect to four different vectors: the protocol element that is targeted, the type of vulnerability that enables the attack, the attack method, and the level of damage that the attack causes. We base our taxonomy on a thorough analysis of fifteen notable published attacks. Our taxonomy highlights clear and concrete aspects that many downgrade attacks have in common, and allows for a common language, classification, and comparison of downgrade attacks. We demonstrate the application of our taxonomy by classifying the surveyed attacks.
Last updated:  2020-12-22
On the Security of Multikey Homomorphic Encryption
Hyang-Sook Lee, Jeongeun Park
A multikey fully homomorphic encryption (MFHE) scheme enables homomorphic computation on data encrypted under different keys. To decrypt a result ciphertext, all the involved secret keys are required. In the multi-decryptor setting, decryption is a protocol with minimal interaction among parties. However, all prior schemes supporting such a protocol are insecure over a public channel against a passive external adversary who does not join the protocol but can see any public information. Furthermore, the possible adversaries have not been defined clearly. In this paper, we revisit the security of MFHE and present a secure one-round decryption protocol. We apply it to one of the existing schemes and prove that the scheme is secure against possible static adversaries. As an application, we construct a two-round multiparty computation protocol without a common random string.
Last updated:  2019-09-24
OCEAN: A Built-In Replacement for Mining Pools
Raymond Chee, Kartik Chitturi, Edouard Dufour-Sans, Kyle Soska
We propose OCEAN, an alternative miner reward system for blockchains that seeks to discourage pooling by providing a pool's variance mitigation functionality as a blockchain built-in. Our proposal relies on Succinct, Non-interactive Arguments of Knowledge (SNARKs), an advanced modern cryptographic tool that enables anyone to prove complex statements with the proof not growing in size with the complexity of the statement. We expect that blockchains that implement our proposal would see less pooling centralization than what is currently observable in deployed cryptocurrencies.
Last updated:  2019-10-14
Preimages and Collisions for Up to 5-Round Gimli-Hash Using Divide-and-Conquer Methods
Fukang Liu, Takanori Isobe, Willi Meier
The Gimli permutation was proposed at CHES 2017, and the hash mode Gimli-Hash is now included in the Round 2 candidate Gimli in NIST's Lightweight Cryptography Standardization process. In the Gimli document, the security of the Gimli permutation has been intensively investigated. However, little is known about the security of Gimli-Hash. The designers of Gimli have claimed $2^{128}$ security against all attacks on Gimli-Hash, which outputs a 256-bit hash value. Firstly, we present the trivial generic preimage attack on the structure of Gimli-Hash matching the $2^{128}$ security bound in both time and memory complexity. Following such a generic preimage attack framework, we then describe specific preimage attacks on the first 2/3/4/5 rounds and the last 2/3/4 rounds (out of 24) of Gimli-Hash using divide-and-conquer methods. As will be shown, the application of the divide-and-conquer methods benefits greatly from the properties of the SP-box and the linear layer of Gimli. Therefore, this work can also be viewed as a first step toward exploiting specific properties of the SP-box. Finally, we also apply the divide-and-conquer method to a collision attack on up to 5-round Gimli-Hash. Among all the attacks, the preimage attacks on the first and the last 2 rounds of Gimli-Hash are practical. The collision attack on the first 3 rounds of Gimli-Hash is practical. The collision attack and second preimage attack on the last 3 rounds of Gimli-Hash are practical. All practical attacks are experimentally verified. We hope our analysis can advance the understanding of Gimli-Hash.
Last updated:  2019-10-05
When NTT Meets Karatsuba: Preprocess-then-NTT Technique Revisited
Yiming Zhu, Zhen Liu, Yanbin Pan
The Number Theoretic Transform (NTT) technique is widely used in the implementation of cryptographic schemes based on the Ring Learning With Errors problem (RLWE), since it provides an efficient algorithm for multiplication of polynomials over the finite field. However, to employ the NTT, the finite field is required to have a special root of unity, such as an $n$-th root, which makes the modulus $q$ in RLWE large since we need $q\equiv 1\mod 2n$ to ensure such a root exists. At Inscrypt 2018, Zhou et al. proposed a technique called preprocess-then-NTT to reduce the value of the modulus $q$ while the NTT still works, at a time complexity that is asymptotically a constant ($>1$) multiple of that of the original NTT algorithm. In this paper, we revisit the preprocess-then-NTT technique and point out that it can be improved such that its time complexity is asymptotically the same as that of the original NTT algorithm. Moreover, our experiments show that, even compared with the original NTT, our improved algorithm may have some advantages in efficiency.
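To make the role of the $2n$-th root of unity concrete, here is a toy negacyclic NTT multiplication in $Z_q[x]/(x^n+1)$, written as an O(n^2) transform for clarity. It is the textbook NTT (with toy parameters satisfying $q\equiv 1\bmod 2n$), not the preprocess-then-NTT variant discussed above:
```python
# Toy negacyclic NTT multiplication in Z_q[x]/(x^n + 1) (illustration only).
n, q, psi = 4, 17, 2   # q = 1 mod 2n; psi is a primitive 2n-th root of unity mod q
assert pow(psi, 2 * n, q) == 1 and pow(psi, n, q) == q - 1

def ntt(a):
    return [sum(a[i] * pow(psi, (2 * j + 1) * i, q) for i in range(n)) % q
            for j in range(n)]

def intt(A):
    n_inv, psi_inv = pow(n, q - 2, q), pow(psi, q - 2, q)
    return [n_inv * sum(A[j] * pow(psi_inv, (2 * j + 1) * i, q) for j in range(n)) % q
            for i in range(n)]

def schoolbook(a, b):
    """Reference negacyclic (mod x^n + 1) product."""
    c = [0] * n
    for i in range(n):
        for j in range(n):
            if i + j < n:
                c[i + j] = (c[i + j] + a[i] * b[j]) % q
            else:
                c[i + j - n] = (c[i + j - n] - a[i] * b[j]) % q
    return c

a, b = [1, 2, 3, 4], [5, 6, 7, 8]
via_ntt = intt([x * y % q for x, y in zip(ntt(a), ntt(b))])
assert via_ntt == schoolbook(a, b)
```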
Last updated:  2019-09-23
Puncturable Proxy Re-Encryption supporting to Group Messaging Service
Tran Viet Xuan Phuong, Willy Susilo, Jongkil Kim, Guomin Yang, Dongxi Liu
This work envisions a new encryption primitive for many-to-many paradigms such as group messaging systems. Previously, puncturable encryption (PE) was introduced to provide forward security for asynchronous messaging services. However, existing PE schemes were proposed only for one-to-one communication, and cause a significant overhead in a group messaging system. In fact, group communication over PE can only be achieved by having the sender's device encrypt a message multiple times, once for each receiver, which is a burden for resource-restricted devices such as mobile phones or sensor devices. Our proposed scheme enables a message server (i.e., a proxy) to re-encrypt ciphertexts of puncturable encryption, so that computationally heavy operations are delegated to the server, which has more powerful processors and a constant power source. We thus propose a new Puncturable Proxy Re-Encryption (PPRE) scheme. The scheme is inspired by unidirectional proxy re-encryption (UPRE), and achieves forward secrecy through fine-grained revocation of decryption capability by integrating the PE scheme. This paper presents the first forward-secure PPRE for group messaging services. Our scheme is IND-CCA secure under the 3-weak Decisional Bilinear Diffie-Hellman Inversion assumption.
Last updated:  2019-10-04
Adaptively Secure Garbling Schemes for Parallel Computations
Kai-Min Chung, Luowen Qian
We construct the first adaptively secure garbling scheme based on standard public-key assumptions for garbling a circuit $C: \{0, 1\}^n \mapsto \{0, 1\}^m$ that simultaneously achieves a near-optimal online complexity $n + m + \textrm{poly}(\lambda, \log |C|)$ (where $\lambda$ is the security parameter) and \emph{preserves the parallel efficiency} for evaluating the garbled circuit; namely, if the depth of $C$ is $d$, then the garbled circuit can be evaluated in parallel time $d \cdot \textrm{poly}(\log|C|, \lambda)$. In particular, our construction improves over the recent seminal work of Garg et al. (Eurocrypt 2018), which constructs the first adaptively secure garbling scheme with a near-optimal online complexity under the same assumptions, but whose garbled circuit can only be evaluated gate by gate in a sequential manner. Our construction combines their novel idea of linearization with several new ideas to achieve parallel efficiency without compromising online complexity. We take one step further and construct the first adaptively secure garbling scheme for parallel RAM (PRAM) programs under standard assumptions that preserves the parallel efficiency. The only previous such constructions we are aware of rely on strong assumptions like indistinguishability obfuscation. Our construction is based on the work of Garg et al. (Crypto 2018) for adaptively secure garbled RAM, but again introduces several new ideas to handle parallel RAM computation, which may be of independent interest. As an application, this yields the first constant-round secure computation protocol for persistent PRAM programs in the malicious setting from standard assumptions.
Last updated:  2020-07-15
Fractal: Post-Quantum and Transparent Recursive Proofs from Holography
Alessandro Chiesa, Dev Ojha, Nicholas Spooner
We present a new methodology to efficiently realize recursive composition of succinct non-interactive arguments of knowledge (SNARKs). Prior to this work, the only known methodology relied on pairing-based SNARKs instantiated on cycles of pairing-friendly elliptic curves, an expensive algebraic object. Our methodology does not rely on any special algebraic objects and, moreover, achieves new desirable properties: it is *post-quantum* and it is *transparent* (the setup is public coin). We exploit the fact that recursive composition is simpler for SNARKs with *preprocessing*, and the core of our work is obtaining a preprocessing zkSNARK for rank-1 constraint satisfiability (R1CS) that is post-quantum and transparent. We obtain the latter by establishing a connection between holography and preprocessing in the random oracle model, and then constructing a holographic proof for R1CS. We experimentally validate our methodology, demonstrating feasibility in practice.
Last updated:  2022-03-28
Private Information Retrieval with Sublinear Online Time
Henry Corrigan-Gibbs, Dmitry Kogan
We present the first protocols for private information retrieval that allow fast (sublinear-time) database lookups without increasing the server-side storage requirements. To achieve these efficiency goals, our protocols work in an offline/online model. In an offline phase, which takes place before the client has decided which database bit it wants to read, the client fetches a short string from the servers. In a subsequent online phase, the client can privately retrieve its desired bit of the database by making a second query to the servers. By pushing the bulk of the server-side computation into the offline phase (which is independent of the client's query), our protocols allow the online phase to complete very quickly—in time sublinear in the size of the database. Our protocols can provide statistical security in the two-server setting and computational security in the single-server setting. Finally, we prove that, in this model, our protocols are optimal in terms of the trade-off they achieve between communication and running time.
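The toy sketch below illustrates the generic offline/online shape in the two-server setting: offline, the client fetches parities of random sets as hints; online, it queries a punctured set and combines the answer with the stored hint. This conveys only the communication pattern and functional correctness; it deliberately ignores the distributional corrections the actual protocols need for privacy and efficiency, so it should not be read as the paper's construction:
```python
# Toy two-server offline/online PIR shape (functional illustration only; the
# punctured query is NOT distributed correctly for privacy, and parameters are arbitrary).
import random

N = 64
DB = [random.randrange(2) for _ in range(N)]          # public database of bits

def offline_phase(num_hints=200, set_size=8):
    """Client samples random sets; the offline server returns their parities."""
    hints = []
    for _ in range(num_hints):
        S = set(random.sample(range(N), set_size))
        parity = sum(DB[i] for i in S) % 2            # computed by server 1
        hints.append((S, parity))
    return hints

def online_phase(hints, i):
    """Client recovers DB[i] using one punctured query to server 2."""
    for S, parity in hints:
        if i in S:
            answer = sum(DB[j] for j in S - {i}) % 2  # computed by server 2
            return (parity - answer) % 2
    return None   # no hint covers i; real schemes make this negligible

hints = offline_phase()
i = random.randrange(N)
recovered = online_phase(hints, i)
assert recovered is None or recovered == DB[i]
```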
Last updated:  2020-07-02
Non-monotonic Practical ABE with Direct Revocation, Blackbox Traceability, and a Large Attribute Universe
Dirk Thatmann
This work shows all necessary calculations to extend the ``Practical Attribute Based Encryption: Traitor Tracing, Revocation, and Large Universe'' scheme of Liu and Wong with non-monotonic access structures. We ensure that the blackbox traceability property is preserved.
Last updated:  2019-09-23
iUC: Flexible Universal Composability Made Simple
Jan Camenisch, Stephan Krenn, Ralf Kuesters, Daniel Rausch
Proving the security of complex protocols is a crucial and very challenging task. A widely used approach for reasoning about such protocols in a modular way is universal composability. A perfect model for universal composability should provide a sound basis for formal proofs and be very flexible in order to allow for modeling a multitude of different protocols. It should also be easy to use, including useful design conventions for repetitive modeling aspects, such as corruption, parties, sessions, and subroutine relationships, such that protocol designers can focus on the core logic of their protocols. While many models for universal composability exist, including the UC, GNUC, and IITM models, none of them has achieved this ideal goal yet. As a result, protocols cannot be modeled faithfully and/or using these models is a burden rather than a help, often even leading to underspecified protocols and formally incorrect proofs. Given this dire state of affairs, the goal of this work is to provide a framework for universal composability which combines soundness, flexibility, and usability in an unmatched way. Developing such a security framework is a very difficult and delicate task, as the long history of frameworks for universal composability shows. We build our framework, called iUC, on top of the IITM model, which already provides soundness and flexibility while lacking sufficient usability. At the core of iUC is a single simple template for specifying essentially arbitrary protocols in a convenient, formally precise, and flexible way. We illustrate the main features of our framework with example functionalities and realizations.
Last updated:  2019-09-23
Rate-1 Trapdoor Functions from the Diffie-Hellman Problem
Nico Döttling, Sanjam Garg, Mohammad Hajiabadi, Kevin Liu, Giulio Malavolta
Trapdoor functions (TDFs) are one of the fundamental building blocks in cryptography. Studying the underlying assumptions and the efficiency of the resulting instantiations is therefore of both theoretical and practical interest. In this work we improve the input-to-image rate of TDFs based on the Diffie-Hellman problem. Specifically, we present: \begin{enumerate} \item A rate-1 TDF from the computational Diffie-Hellman (CDH) assumption, improving the result of Garg, Gay, and Hajiabadi [EUROCRYPT 2019], which achieved linear-size outputs but with large constants. Our techniques combine non-binary alphabets and high-rate error-correcting codes over large fields. \item A rate-1 deterministic public-key encryption satisfying block-source security from the decisional Diffie-Hellman (DDH) assumption. While this question was recently settled by Döttling et al. [CRYPTO 2019], our scheme is conceptually simpler and concretely more efficient. We demonstrate this fact by implementing our construction. \end{enumerate}
Last updated:  2019-09-23
DLSCA: a Tool for Deep Learning Side Channel Analysis
Martin Brisfors, Sebastian Forsmark
Abstract - Research on Side Channel Analysis (SCA) is very active and progressing at a fast pace. The idea of using Machine Learning (ML), and more recently Deep Learning (DL), to help analyze SCA data has been explored extensively. One issue facing security researchers interested in contributing to this cause is the difficulty of getting started. While replicating previous works with open-source code is not difficult, taking the next steps from there can be daunting. The presented open-source DLSCA tool was created to aid research on DL-based SCA and to help newcomers to DL get started. We hope it will contribute to investigating the strengths and limitations of ML-based SCA. Keywords - Machine Learning, Side Channel Attack, Software Tool
Last updated:  2019-09-23
Secure Delegation of Isogeny Computations and Cryptographic Applications
Robi Pedersen, Osmanbey Uzunkol
We address the problem of speeding up isogeny computation for supersingular elliptic curves over finite fields using untrusted computational resources like third party servers or cloud service providers (CSPs). We first propose new, efficient and secure delegation schemes. This especially enables resource-constrained devices (e.g. smart cards, RFID tags, tiny sensor nodes) to effectively deploy post-quantum isogeny-based cryptographic protocols. To the best of our knowledge, these new schemes are the first attempt to generalize the classical secure delegation schemes for group exponentiations and pairing computation to an isogeny-based post-quantum setting. Then, we apply these secure delegation subroutines to improve the performance of supersingular isogeny-based zero-knowledge proofs of identity. Our experimental results show that, at the 128-bit quantum-security level, the proving party only needs about 3% of the original protocol cost, while the verifying party’s effort is fully reduced to comparison operations. Lastly, we also apply our delegation schemes to decrease the computational cost of the decryption step for the NIST post-quantum standardization candidate SIKE.
Last updated:  2019-09-23
Efficient Private PEZ Protocols for Symmetric Functions
Yoshiki Abe, Mitsugu Iwamoto, Kazuo Ohta
A private PEZ protocol is a variant of secure multi-party computation performed using a (long) PEZ dispenser. The original paper by Balogh et al. presented a private PEZ protocol for computing an arbitrary function with n inputs. This result is interesting, but no follow-up work has been presented since then, to the best of our knowledge. We show herein that it is possible to shorten the initial string (the sequence of candies filled in a PEZ dispenser) and the number of moves (a player pops out a specified number of candies in each move) drastically if the function is symmetric. Concretely, it turns out that the length of the initial string is reduced from O((2^n)!) for general functions in Balogh et al.'s results to O(n * n!) for symmetric functions, and 2^n moves for general functions are reduced to n^2 moves for symmetric functions. Our main idea is to utilize the recursive structure of symmetric functions to construct the protocol recursively. This idea originates from a new initial string we found for a private PEZ protocol for the three-input majority function, which is different from the one with the same length given by Balogh et al. without describing how they derived it.
Last updated:  2020-05-28
Not a Free Lunch but a Cheap Lunch: Experimental Results for Training Many Neural Nets Efficiently
Joey Green, Tilo Burghardt, Elisabeth Oswald
Neural Networks have become a much studied approach in the recent literature on profiled side channel attacks: many articles examine their use and performance in profiled single-target DPA style attacks. In this setting a single neural net is tweaked and tuned based on a training data set. The effort for this is considerable, as there are many hyper-parameters that need to be adjusted. A straightforward, but impractical, extension of such an approach to multi-target DPA style attacks requires deriving and tuning a network architecture for each individual target. Our contribution is to provide the first practical and efficient strategy for training many neural nets in the context of a multi-target attack. We show how to configure a network with a set of hyper-parameters for a specific intermediate (SubBytes) that generalises well to capture the leakage of other intermediates as well. This is interesting because although we can't beat the no free lunch theorem (i.e. we find that different profiling methods excel on different intermediates), we can still get ``good value for money'' (i.e. good classification results across many intermediates with reasonable profiling effort).
Last updated:  2019-09-23
Lattice Trapdoors and IBE from Middle-Product LWE
Alex Lombardi, Vinod Vaikuntanathan, Thuy Duong Vuong
Middle-product learning with errors (MP-LWE) was recently introduced by Rosca, Sakzad, Steinfeld and Stehlé (CRYPTO 2017) as a way to combine the efficiency of Ring-LWE with the more robust security guarantees of plain LWE. While Ring-LWE is at the heart of efficient lattice-based cryptosystems, it involves the choice of an underlying ring which is essentially arbitrary. In other words, the effect of this choice on the security of Ring-LWE is poorly understood. On the other hand, Rosca et al. showed that a new LWE variant, called MP-LWE, is as secure as Polynomial-LWE (another variant of Ring-LWE) over any of a broad class of number fields. They also demonstrated the usefulness of MP-LWE by constructing an MP-LWE based public-key encryption scheme whose efficiency is comparable to Ring-LWE based public-key encryption. In this work, we take this line of research further by showing how to construct Identity-Based Encryption (IBE) schemes that are secure under a variant of the MP-LWE assumption. Our IBE schemes match the efficiency of Ring-LWE based IBE, including a scheme in the random oracle model with keys and ciphertexts of size $\tilde{O}(n)$ (for $n$-bit identities). We construct our IBE scheme following the lattice trapdoors paradigm of [Gentry, Peikert, and Vaikuntanathan, STOC'08]; our main technical contributions are introducing a new leftover hash lemma and instantiating a new variant of lattice trapdoors compatible with MP-LWE. This work demonstrates that the efficiency/security tradeoff gains of MP-LWE can be extended beyond public-key encryption to more complex lattice-based primitives.
Last updated:  2020-01-22
HEAX: An Architecture for Computing on Encrypted Data
M. Sadegh Riazi, Kim Laine, Blake Pelton, Wei Dai
With the rapid increase in cloud computing, concerns surrounding data privacy, security, and confidentiality have also increased significantly. Not only are cloud providers susceptible to internal and external hacks, but in some scenarios data owners also cannot outsource the computation due to privacy laws such as GDPR, HIPAA, or CCPA. Fully Homomorphic Encryption (FHE) is a groundbreaking invention in cryptography that, unlike traditional cryptosystems, enables computation on encrypted data without ever decrypting it. However, the most critical obstacle to deploying FHE at large scale is the enormous computation overhead. In this paper, we present HEAX, a novel hardware architecture for FHE that achieves unprecedented performance improvement. HEAX leverages multiple levels of parallelism, ranging from ciphertext-level to fine-grained modular arithmetic level. Our first contribution is a new highly-parallelizable architecture for the number-theoretic transform (NTT), which can be of independent interest as the NTT is frequently used in many lattice-based cryptography systems. Building on top of the NTT engine, we design a novel architecture for computation on homomorphically encrypted data. We also introduce several techniques to enable an end-to-end, fully pipelined design as well as to reduce on-chip memory consumption. Our implementation on reconfigurable hardware demonstrates 164-268× performance improvement for a wide range of FHE parameters.
Last updated:  2020-08-17
Subversion-Resistant Commitment Schemes: Definitions and Constructions
Karim Baghery
A commitment scheme allows a committer to create a commitment to a secret value, and later open and reveal the secret value in a verifiable manner. In the common reference string model, (equivocal) commitment schemes require a setup phase which is supposed to be done by a trusted third party. Recently, various reports have emerged about the subversion of $\textit{trusted}$ setup phases in mass-surveillance activities; specifically regarding commitment schemes, it was recently discovered that the SwissPost-Scytl mix-net uses a trapdoor commitment scheme which, given the trapdoor, allows undetectably altering the votes and breaking users' privacy [Hae19, LPT19]. Motivated by such news and recent studies on the subversion-resistance of various cryptographic primitives, this work studies the security of commitment schemes in the presence of a maliciously chosen commitment key. To attain a clear understanding of achievable security, we define subversion-resistant variants of the standard notions, called subversion hiding, subversion equivocality, and subversion binding. Then we provide both negative and positive results on constructing subversion-resistant commitment schemes, showing that some combinations of notions are incompatible while presenting subversion-resistant constructions that achieve other combinations.
Last updated:  2022-02-24
Separating Symmetric and Asymmetric Password-Authenticated Key Exchange
Julia Hesse
Password-Authenticated Key Exchange (PAKE) is a method to establish cryptographic keys between two users sharing a low-entropy password. In its asymmetric version, one of the users acts as a server and only stores some function of the password, e.g., a hash. Upon server compromise, the adversary learns H(pw). Depending on the strength of the password, the attacker now has to invest more or less work to reconstruct pw from H(pw). Intuitively, asymmetric PAKE seems more challenging than symmetric PAKE since the latter is not supposed to protect the password upon compromise. In this paper, we provide three contributions: - Separating symmetric and asymmetric PAKE. We prove that a strong assumption like a programmable random oracle is necessary to achieve security of asymmetric PAKE in the Universal Composability (UC) framework. For symmetric PAKE, programmability is not required. Our results also rule out the existence of UC-secure asymmetric PAKE in the CRS model. - Revising the security definition. We identify and close some gaps in the UC security definition of 2-party asymmetric PAKE given by Gentry, MacKenzie and Ramzan (Crypto 2006). For this, we specify a natural corruption model for server compromise attacks. We further remove an undesirable weakness that lets parties wrongly believe in security of compromised session keys. We demonstrate usefulness by proving that the Omega-method proposed by Gentry et al. satisfies our new security notion for asymmetric PAKE. To our knowledge, this is the first formal security proof of the Omega-method in the literature. - Composable multi-party asymmetric PAKE. We showcase how our revisited security notion for 2-party asymmetric PAKE can be used to obtain asymmetric PAKE protocols in the multi-user setting and discuss important aspects for implementing such a protocol.
Last updated:  2019-10-14
A Framework for UC-Secure Commitments from Publicly Computable Smooth Projective Hashing
Behzad Abdolmaleki, Hamidreza Khoshakhlagh, Daniel Slamanig
Hash proof systems or smooth projective hash functions (SPHFs) were proposed by Cramer and Shoup (Eurocrypt'02) and can be seen as a special type of zero-knowledge proof system for a language. While initially used to build efficient chosen-ciphertext secure public-key encryption, they have found numerous applications in several other contexts. In this paper, we revisit the notion of SPHFs and introduce a new feature (a third mode of hashing) that allows computing the hash value of an SPHF with access to neither the witness nor the hashing key, but only to some additional auxiliary information. We call this new type publicly computable SPHFs (PC-SPHFs) and present a formal framework along with concrete instantiations from a large class of SPHFs. We then show that this new tool generically leads to commitment schemes that are secure against adaptive adversaries, assuming erasures, in the Universal Composability (UC) framework, yielding the first UC-secure commitments built from a single SPHF instance. Instantiating our PC-SPHF with an SPHF for labeled Cramer-Shoup encryption gives the currently most efficient non-interactive UC-secure commitment. Finally, we also discuss additional applications to information retrieval based on anonymous credentials that are UC secure against adaptive adversaries.
Last updated:  2021-01-12
Local Proofs Approaching the Witness Length
Noga Ron-Zewi, Ron D. Rothblum
Interactive oracle proofs (IOPs) are a hybrid between interactive proofs and PCPs. In an IOP the prover is allowed to interact with a verifier (like in an interactive proof) by sending relatively long messages to the verifier, who in turn is only allowed to query a few of the bits that were sent (like in a PCP). In this work we construct, for a large class of NP relations, IOPs in which the communication complexity approaches the witness length. More precisely, for any NP relation for which membership can be decided in polynomial-time and bounded polynomial space (e.g., SAT, Hamiltonicity, Clique, Vertex-Cover, etc.) and for any constant $\gamma>0$, we construct an IOP with communication complexity $(1+\gamma) \cdot n$, where $n$ is the original witness length. The number of rounds as well as the number of queries made by the IOP verifier are constant. This result improves over prior works on short IOPs/PCPs in two ways. First, the communication complexity in these short IOPs is proportional to the complexity of verifying the NP witness, which can be polynomially larger than the witness size. Second, even ignoring the difference between witness length and non-deterministic verification time, prior works incur (at the very least) a large constant multiplicative overhead to the communication complexity. In particular, as a special case, we also obtain an IOP for Circuit-SAT with rate approaching 1: the communication complexity is $(1+\gamma) \cdot t$, for circuits of size $t$ and any constant $\gamma>0$. This improves upon the prior state-of-the-art work of Ben-Sasson et al. (ICALP, 2017) who construct an IOP for Circuit-SAT with communication length $c \cdot t$ for a large (unspecified) constant $c \geq 1$. Our proof leverages recent constructions of high-rate locally testable tensor codes. In particular, we bypass the barrier imposed by the low rate of multiplication codes (e.g., Reed-Solomon, Reed-Muller or AG codes) - a core component in all known short PCP/IOP constructions.
Last updated:  2019-11-18
Breaking and Fixing Anonymous Credentials for the Cloud
Ulrich Haböck, Stephan Krenn
In an attribute-based credential (ABC) system, users obtain a digital certificate on their personal attributes, and can later prove possession of such a certificate in an unlinkable way, thereby selectively disclosing chosen attributes to the service provider. Recently, the concept of encrypted ABCs (EABCs) was introduced by Krenn et al. at CANS 2017, where virtually all computation is outsourced to a semi-trusted cloud-provider called wallet, thereby overcoming existing efficiency limitations on the user’s side, and for the first time enabling “privacy-preserving identity management as a service”. While their approach is highly relevant for bringing ABCs into the real world, we present a simple attack allowing the wallet to learn a user's attributes when colluding with another user -- a scenario which is not covered by their modeling but which needs to be considered in practice. We then revise the model and construction of Krenn et al. in various ways, such that the above attack is no longer possible. Furthermore, we also remove existing non-collusion assumptions between wallet and service provider or issuer from their construction. Our protocols are still highly efficient in the sense that the computational effort on the end user side consists of a single exponentiation only, and otherwise efficiency is comparable to the original work of Krenn et al.
Last updated:  2019-09-19
Sharing the LUOV: Threshold Post-Quantum Signatures
Daniele Cozzo, Nigel P. Smart
We examine all of the signature submissions to Round-2 of the NIST PQC ``competition'' in the context of whether one can transform them into threshold signature schemes in a relatively straightforward manner. We conclude that all schemes, except the ones in the MQ family, have significant issues when one wishes to convert them using relatively generic MPC techniques. The lattice-based schemes are hampered by requiring a mix of operations which are suited to both linear secret sharing schemes (LSSS)- and garbled circuits (GC)-based MPC techniques (thus requiring costly transfers between the two paradigms). The Picnic and SPHINCS+ algorithms are hampered by the need to compute a large number of hash function queries on secret data. Of the nine submissions, the two which appear to be most suitable for use in a threshold manner are Rainbow and LUOV, with LUOV requiring fewer rounds and less data storage.
Last updated:  2019-09-18
A New Method for Geometric Interpretation of Elliptic Curve Discrete Logarithm Problem
Daniele Di Tullio, Ankan Pal
In this paper, we intend to study the geometric meaning of the discrete logarithm problem defined over an Elliptic Curve. The key idea is to reduce the Elliptic Curve Discrete Logarithm Problem (EC-DLP) into a system of equations. These equations arise from the intersection of quadric hypersurfaces in an affine space of lower dimension. In cryptography, this interpretation can be used to design attacks on EC-DLP. Presently, the best known attack algorithm having a sub-exponential time complexity is through the implementation of Summation Polynomials and Weil Descent. It is expected that the proposed geometric interpretation can result in faster reduction of the problem into a system of equations. These overdetermined systems of equations are hard to solve. We have used the F4 (Faugère) algorithm and obtained results for primes less than 500,000. Quantum algorithms can expedite the process of solving these over-determined systems of equations. In the absence of fast algorithms for computing summation polynomials, we expect that this could be an alternative. We do not claim that the proposed algorithm would be faster than Shor's algorithm for breaking EC-DLP, but this interpretation could be a candidate as an alternative to the 'summation polynomial attack' in the post-quantum era. Key Words: Elliptic Curve Discrete Logarithm Problem, Intersection of Curves, Gröbner Basis, Vanishing Ideals.
Last updated:  2020-10-13
Privacy-preserving auditable token payments in a permissioned blockchain system
Elli Androulaki, Jan Camenisch, Angelo De Caro, Maria Dubovitskaya, Kaoutar Elkhiyaoui, Björn Tackmann
Token management systems were the first application of blockchain technology and are still the most widely used one. Early implementations such as Bitcoin or Ethereum provide virtually no privacy beyond basic pseudonymity: all transactions are written in plain to the blockchain, which makes them perfectly linkable and traceable. Several more recent blockchain systems, such as Monero or Zerocash, implement improved levels of privacy. Most of these systems target the permissionless setting, just like Bitcoin. Many practical scenarios, in contrast, require token systems to be permissioned, binding the tokens to user identities instead of pseudonymous addresses, and also requiring auditing functionality in order to satisfy regulation such as AML/KYC. We present a privacy-preserving token management system that is designed for permissioned blockchain systems and supports fine-grained auditing. The scheme is secure under computational assumptions in bilinear groups, in the random-oracle model.
Last updated:  2019-09-18
A Study of Persistent Fault Analysis
Andrea Caforio, Subhadeep Banik
Persistent faults mark a new class of injections that perturb lookup tables within block ciphers with the overall goal of recovering the encryption key. Unlike earlier fault types, persistent faults remain intact over many encryptions until the affected device is rebooted, thus allowing an adversary to collect a multitude of correct and faulty ciphertexts. This approach was shown to be an efficient and effective attack against substitution-permutation networks. In this paper, the scope of persistent faults is further broadened and explored. More specifically, we show how to construct a key-recovery attack on generic Feistel schemes in the presence of persistent faults. In a second step, we leverage these faults to reverse-engineer AES- and PRESENT-like ciphers in a chosen-key setting, in which some of the computational layers, like substitution tables, are kept secret. Finally, we propose a novel, dedicated, and low-overhead countermeasure that provides adequate protection for hardware implementations against persistent fault injections.
Last updated:  2019-09-18
Adventures in Supersingularland
Sarah Arpin, Catalina Camacho-Navarro, Kristin Lauter, Joelle Lim, Kristina Nelson, Travis Scholl, Jana Sotáková
In this paper, we study isogeny graphs of supersingular elliptic curves. Supersingular isogeny graphs were introduced as a hard problem into cryptography by Charles, Goren, and Lauter for the construction of cryptographic hash functions. These are large expander graphs, and the hard problem is to find an efficient algorithm for routing, or path-finding, between two vertices of the graph. We consider four aspects of supersingular isogeny graphs, study each thoroughly and, where appropriate, discuss how they relate to one another. First, we consider two related graphs that help us understand the structure: the `spine' $\mathcal{S}$, which is the subgraph of $\mathcal{G}_\ell(\overline{\mathbb{F}_p})$ given by the $j$-invariants in $\mathbb{F}_p$, and the graph $\mathcal{G}_\ell(\mathbb{F}_p)$, in which both curves and isogenies must be defined over $\mathbb{F}_p$. We show how to pass from the latter to the former. The graph $\mathcal{S}$ is relevant for cryptanalysis because routing between vertices in $\mathbb{F}_p$ is easier than in the full isogeny graph. The $\mathbb{F}_p$-vertices are typically assumed to be randomly distributed in the graph, which is far from true. We provide an analysis of the distances of connected components of $\mathcal{S}$. Next, we study the involution on $\mathcal{G}_\ell(\overline{\mathbb{F}_p})$ that is given by the Frobenius of $\mathbb{F}_p$ and give heuristics on how often shortest paths between two conjugate $j$-invariants are preserved by this involution (mirror paths). We also study the related question of what proportion of conjugate $j$-invariants are $\ell$-isogenous for $\ell = 2,3$. We conclude with experimental data on the diameters of supersingular isogeny graphs when $\ell = 2$ and compare this with previous results on diameters of LPS graphs and random Ramanujan graphs.
Last updated:  2019-09-18
Dynamic Searchable Symmetric Encryption with Forward and Stronger Backward Privacy
Cong Zuo, Shi-Feng Sun, Joseph K. Liu, Jun Shao, Josef Pieprzyk
Dynamic Searchable Symmetric Encryption (DSSE) enables a client to perform updates and searches on encrypted data, which makes it very useful in practice. To protect DSSE from the leakage of updates (which can break query or data privacy), two new security notions, forward and backward privacy, have been proposed recently. Although extensive attention has been paid to forward privacy, this is not the case for backward privacy. Backward privacy, first formally introduced by Bost et al., is classified into three types from weak to strong, namely Type-III to Type-I. To the best of our knowledge, however, no practical DSSE scheme without trusted hardware (e.g. SGX) has been proposed so far that achieves strong backward privacy with a constant number of roundtrips between the client and the server. In this work, we present a new DSSE scheme that leverages simple symmetric encryption with homomorphic addition and a bitmap index. The new scheme achieves both forward and backward privacy with one roundtrip. In particular, the backward privacy we achieve in our scheme (denoted by Type-I$^-$) is somewhat stronger than Type-I. Moreover, our scheme is very practical as it involves only lightweight cryptographic operations. To make it scalable for supporting billions of files, we further extend it to a multi-block setting. Finally, we give the corresponding security proofs and an experimental evaluation, which demonstrate the security and practicality of our schemes, respectively.
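To make the combination of a bitmap index with additively homomorphic symmetric encryption more concrete, here is a minimal toy sketch in Python (an illustration of the general idea only, not the paper's actual construction or its leakage guarantees): per keyword, the client keeps a bitmap whose $i$-th bit marks whether file $i$ matches, and the server stores that bitmap masked by an additive one-time pad modulo $2^B$, so updates can be applied by the server as blind additions.

```python
import secrets

B = 16                 # toy number of files; bit i of the bitmap marks file i
MOD = 1 << B

def mask(value, pad):
    """Additively homomorphic 'symmetric encryption': one-time pad mod 2^B."""
    return (value + pad) % MOD

# Client state for keyword w: files 1 and 3 currently match.
bitmap = (1 << 1) | (1 << 3)
pad = secrets.randbelow(MOD)

# Server state: only the masked bitmap.
server_ct = mask(bitmap, pad)

# Update: add file 5 to keyword w.  The client sends the delta masked with a
# fresh pad (and must track which bits are set, so additions never carry);
# the server just adds ciphertexts without learning the contents.
fresh = secrets.randbelow(MOD)
server_ct = (server_ct + mask(1 << 5, fresh)) % MOD
pad = (pad + fresh) % MOD          # client folds the fresh pad into its key

# Search: the client fetches the masked bitmap and removes the pad.
recovered = (server_ct - pad) % MOD
assert recovered == (1 << 1) | (1 << 3) | (1 << 5)
```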
Last updated:  2019-09-18
Truthful and Faithful Monetary Policy for a Stablecoin Conducted by a Decentralised, Encrypted Artificial Intelligence
David Cerezo Sánchez
The Holy Grail of a decentralised stablecoin is achieved on rigorous mathematical frameworks, with proofs of multiple advantageous properties: stability, convergence, truthfulness, faithfulness, and malicious security. These properties could only be attained by the novel and interdisciplinary combination of previously unrelated fields: model predictive control, deep learning, alternating direction method of multipliers (consensus-ADMM), mechanism design, secure multi-party computation, and zero-knowledge proofs. For the first time, this paper proves:
- the feasibility of decentralising the central bank while securely preserving its independence in a decentralised computation setting
- the benefits for price stability of combining mechanism design, provable security, and control theory, unlike the heuristics of previous stablecoins
- the implementation of complex monetary policies on a stablecoin, equivalent to the ones used by central banks and beyond the current fixed rules of cryptocurrencies that hinder their price stability
- methods to circumvent the impossibilities of Guaranteed Output Delivery (G.O.D.) and fairness: standing on truthfulness and faithfulness, we reach G.O.D. and fairness under the assumption of rational parties
As a corollary, a decentralised artificial intelligence is able to conduct the monetary policy of a stablecoin, minimising human intervention.
Last updated:  2020-01-16
Modeling Memory Faults in Signature and Authenticated Encryption Schemes
Marc Fischlin, Felix Günther
Memory fault attacks, inducing errors in computations, have been an ever-evolving threat to cryptographic schemes since their discovery for cryptography by Boneh et al. (Eurocrypt 1997). Initially requiring physical tampering with hardware, the software-based rowhammer attack put forward by Kim et al. (ISCA 2014) enabled fault attacks also through malicious software running on the same host machine. This led to concerning novel attack vectors, for example on deterministic signature schemes, whose approach to avoid dependency on (good) randomness renders them vulnerable to fault attacks. This has been demonstrated in realistic adversarial settings in a series of recent works. However, a unified formalism of different memory fault attacks, one that also enables arguing the security of countermeasures, has so far been missing. In this work, we suggest a generic extension for existing security models that enables a game-based treatment of cryptographic fault resilience. Our modeling specifies exemplary memory fault attack types of different strength, ranging from random bit-flip faults to differential (rowhammer-style) faults to full adversarial control on indicated memory variables. We apply our model first to deterministic signatures to revisit known fault attacks as well as to establish provable guarantees of fault resilience for proposed fault-attack countermeasures. In a second application to nonce-misuse resistant authenticated encryption, we provide the first fault-attack treatment of the SIV mode of operation and give a provably secure fault-resilient variant.
Last updated:  2019-09-18
Improved Cryptanalysis of the KMOV Elliptic Curve Cryptosystem
Abderrahmane Nitaj, Willy Susilo, Joseph Tonien
This paper presents two new improved attacks on the KMOV cryptosystem. KMOV is an encryption algorithm based on elliptic curves over the ring ${\mathbb{Z}}_N$ where $N=pq$ is a product of two large primes of equal bit size. The first attack uses the properties of the convergents of the continued fraction expansion of a specific value derived from the KMOV public key. The second attack is based on Coppersmith's method for finding small solutions of a multivariate polynomial modular equation. Both attacks improve the existing attacks on the KMOV cryptosystem.
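The abstract does not spell out which value's continued-fraction expansion the first attack uses (that is the paper's contribution), but the convergent machinery itself is the same as in Wiener-style attacks on RSA; a minimal Python sketch of that generic machinery, run on a classic toy RSA pair, looks as follows:

```python
from fractions import Fraction

def continued_fraction(a, b):
    """Partial quotients of a/b, computed with the Euclidean algorithm."""
    quotients = []
    while b:
        q, r = divmod(a, b)
        quotients.append(q)
        a, b = b, r
    return quotients

def convergents(quotients):
    """Yield the convergents h_i/k_i of a continued fraction."""
    h_prev, h = 1, quotients[0]
    k_prev, k = 0, 1
    yield Fraction(h, k)
    for q in quotients[1:]:
        h, h_prev = q * h + h_prev, h
        k, k_prev = q * k + k_prev, k
        yield Fraction(h, k)

# In a Wiener-style attack each convergent of e/N is tested as a candidate
# k/d; here we only list the convergents for a small toy RSA pair.
e, N = 17993, 90581
for c in convergents(continued_fraction(e, N)):
    print(c)
```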
Last updated:  2019-09-18
A New Public Key Cryptosystem Based on Edwards Curves
Maher Boudabra, Abderrahmane Nitaj
Elliptic curve cryptography plays a central role in various cryptographic schemes and protocols. For efficiency reasons, Edwards curves and twisted Edwards curves have been introduced. In this paper, we study the properties of twisted Edwards curves on the ring $\mathbb{Z}/n\mathbb{Z}$ where $n=p^rq^s$ is a prime power RSA modulus, propose a new scheme, and study its efficiency and security.
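For reference, the standard twisted Edwards form and its unified addition law (textbook facts over a field; the paper studies the analogous objects over the ring $\mathbb{Z}/n\mathbb{Z}$, where the inversions below only exist for units) are:

```latex
\[
  E_{a,d}:\quad a x^2 + y^2 \;=\; 1 + d x^2 y^2, \qquad a, d \neq 0,\ a \neq d,
\]
\[
  (x_1, y_1) + (x_2, y_2) \;=\;
  \left(
    \frac{x_1 y_2 + y_1 x_2}{1 + d\, x_1 x_2 y_1 y_2},\;
    \frac{y_1 y_2 - a\, x_1 x_2}{1 - d\, x_1 x_2 y_1 y_2}
  \right),
\]
% with neutral element (0, 1) and inverse -(x, y) = (-x, y).
```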
Last updated:  2019-09-18
A New Attack on RSA and Demytko's Elliptic Curve Cryptosystem
Abderrahmane Nitaj, Emmanuel Fouotsa
Let $N=pq$ be an RSA modulus and $e$ be a public exponent. Numerous attacks on RSA exploit the arithmetical properties of the key equation $ed-k(p-1)(q-1)=1$. In this paper, we study the more general equation $eu-(p-s)(q-r)v=w$. We show that when the unknown integers $u$, $v$, $w$, $r$ and $s$ are suitably small and $p-s$ or $q-r$ is factorable using the Elliptic Curve Method for factorization ECM, then one can break the RSA system. As an application, we propose an attack on Demytko's elliptic curve cryptosystem. Our method is based on Coppersmith's technique for solving multivariate polynomial modular equations.
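As background for the key equation mentioned above (standard RSA facts, not the paper's new material), the classical equation arises as follows and is a special case of the generalized one:

```latex
% RSA key generation guarantees  e d \equiv 1 \pmod{\phi(N)},  \phi(N) = (p-1)(q-1),
% so for some integer k
\[
  e d - k (p-1)(q-1) = 1 .
\]
% The generalized equation studied above,
\[
  e u - (p-s)(q-r) v = w ,
\]
% contains this as the special case  u = d,  v = k,  r = s = 1,  w = 1.
```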
Last updated:  2020-03-19
CrypTFlow: Secure TensorFlow Inference
Nishant Kumar, Mayank Rathee, Nishanth Chandran, Divya Gupta, Aseem Rastogi, Rahul Sharma
We present CrypTFlow, a first-of-its-kind system that converts TensorFlow inference code into Secure Multi-party Computation (MPC) protocols at the push of a button. To do this, we build three components. Our first component, Athos, is an end-to-end compiler from TensorFlow to a variety of semi-honest MPC protocols. The second component, Porthos, is an improved semi-honest 3-party protocol that provides significant speedups for TensorFlow-like applications. Finally, to provide maliciously secure MPC protocols, our third component, Aramis, is a novel technique that uses hardware with integrity guarantees to convert any semi-honest MPC protocol into an MPC protocol that provides malicious security. The malicious security of the protocols output by Aramis relies on the integrity of the hardware and the semi-honest security of MPC. Moreover, our system matches the inference accuracy of plaintext TensorFlow. We experimentally demonstrate the power of our system by showing the secure inference of real-world neural networks such as ResNet50 and DenseNet121 over the ImageNet dataset with running times of about 30 seconds for semi-honest security and under two minutes for malicious security. Prior work in the area of secure inference has been limited to semi-honest security of small networks over tiny datasets such as MNIST or CIFAR. Even on MNIST/CIFAR, CrypTFlow outperforms prior work.
Last updated:  2020-12-02
New point compression method for elliptic $\mathbb{F}_{\!q^2}$-curves of $j$-invariant $0$
Dmitrii Koshelev
In the article we propose a new compression method (to $2\lceil \log_2(q) \rceil + 3$ bits) for the $\mathbb{F}_{\!q^2}$-points of an elliptic curve $E_b\!: y^2 = x^3 + b$ (for $b \in \mathbb{F}_{\!q^2}^*$) of $j$-invariant $0$. It is based on $\mathbb{F}_{\!q}$-rationality of some generalized Kummer surface $GK_b$. This is the geometric quotient of the Weil restriction $R_b := \mathrm{R}_{\: \mathbb{F}_{\!q^2}/\mathbb{F}_{\!q}}(E_b)$ under the order $3$ automorphism restricted from $E_b$. More precisely, we apply the theory of conic bundles $\big($i.e., conics over the function field $\mathbb{F}_{\!q}(t)\big)$ to obtain explicit and quite simple formulas of a birational $\mathbb{F}_{\!q}$-isomorphism between $GK_b$ and $\mathbb{A}^{\!2}$. Our point compression method consists in computation of these formulas. To recover (in the decompression stage) the original point from $E_b(\mathbb{F}_{\!q^2}) = R_b(\mathbb{F}_{\!q})$ we find an inverse image of the natural map $R_b \to GK_b$ of degree $3$, i.e., we extract a cubic root in $\mathbb{F}_{\!q}$. For $q \not\equiv 1 \: (\mathrm{mod} \ 27)$ this is just a single exponentiation in $\mathbb{F}_{\!q}$, hence the new method seems to be much faster than the classical one with $x$ coordinate, which requires two exponentiations in $\mathbb{F}_{\!q}$.
Last updated:  2021-10-04
Marlin: Preprocessing zkSNARKs with Universal and Updatable SRS
Alessandro Chiesa, Yuncong Hu, Mary Maller, Pratyush Mishra, Psi Vesely, Nicholas Ward
We present a methodology to construct preprocessing zkSNARKs where the structured reference string (SRS) is universal and updatable. This exploits a novel use of *holography* [Babai et al., STOC 1991], where fast verification is achieved provided the statement being checked is given in encoded form. We use our methodology to obtain a preprocessing zkSNARK where the SRS has linear size and arguments have constant size. Our construction improves on Sonic [Maller et al., CCS 2019], the prior state of the art in this setting, in all efficiency parameters: proving is an order of magnitude faster and verification is thrice as fast, even with smaller SRS size and argument size. Our construction is most efficient when instantiated in the algebraic group model (also used by Sonic), but we also demonstrate how to realize it under concrete knowledge assumptions. We implement and evaluate our construction. The core of our preprocessing zkSNARK is an efficient *algebraic holographic proof* (AHP) for rank-1 constraint satisfiability (R1CS) that achieves linear proof length and constant query complexity.
Last updated:  2019-09-18
The Function-Inversion Problem: Barriers and Opportunities
Henry Corrigan-Gibbs, Dmitry Kogan
The task of function inversion is central to cryptanalysis: breaking block ciphers, forging signatures, and cracking password hashes are all special cases of the function-inversion problem. In 1980, Hellman showed that it is possible to invert a random function $f\colon [N] \to [N]$ in time $T = \widetilde{O}(N^{2/3})$ given only $S = \widetilde{O}(N^{2/3})$ bits of precomputed advice about $f$. Hellman’s algorithm is the basis for the popular “Rainbow Tables” technique (Oechslin, 2003), which achieves the same asymptotic cost and is widely used in practical cryptanalysis. Is Hellman’s method the best possible algorithm for inverting functions with preprocessed advice? The best known lower bound, due to Yao (1990), shows that $ST = \widetilde{\Omega}(N)$, which still admits the possibility of an $S = T = \widetilde{O}(N^{1/2})$ attack. There remains a long-standing and vexing gap between Hellman’s $N^{2/3}$ upper bound and Yao’s $N^{1/2}$ lower bound. Understanding the feasibility of an $S = T = N^{1/2}$ algorithm is cryptanalytically relevant since such an algorithm could perform a key-recovery attack on AES-128 in time $2^{64}$ using a precomputed table of size $2^{64}$. For the past 29 years, there has been no progress either in improving Hellman’s algorithm or in strengthening Yao’s lower bound. In this work, we connect function inversion to problems in other areas of theory to (1) explain why progress may be difficult and (2) explore possible ways forward. Our results are as follows: - We show that *any* improvement on Yao’s lower bound on function-inversion algorithms will imply new lower bounds on depth-two circuits with arbitrary gates. Further, we show that proving strong lower bounds on *non-adaptive* function-inversion algorithms would imply breakthrough circuit lower bounds on linear-size log-depth circuits. - We take first steps towards the study of the *injective* function-inversion problem, which has manifold cryptographic applications. In particular, we show that improved algorithms for breaking PRGs with preprocessing would give improved algorithms for inverting injective functions with preprocessing. - Finally, we show that function inversion is closely related to well-studied problems in communication complexity and data structures. Through these connections we immediately obtain the best known algorithms for problems in these domains.
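To make Hellman's time/space trade-off concrete, here is a self-contained toy sketch (illustrative only; a single table over a random permutation, whereas Hellman's actual method uses many re-randomized tables over a random function): the offline phase stores only chain endpoints, and the online phase walks forward from the target value until it hits a stored endpoint, then replays that chain from its start.

```python
import random

# Toy instance: invert a "black-box" function f on [N].  For simplicity the
# toy f is a random permutation, which avoids chain merges.
N = 2 ** 12
random.seed(1)
perm = list(range(N))
random.shuffle(perm)
def f(x): return perm[x]

T = 2 ** 6            # chain length
M = N // T            # number of chains; the precomputed advice is M endpoints

# Offline phase: walk M chains of length T, store only (endpoint -> start).
table = {}
for i in range(M):
    start = i * T
    x = start
    for _ in range(T):
        x = f(x)
    table[x] = start

def invert(y):
    """Try to find some x with f(x) = y using only the endpoint table."""
    z = y
    for _ in range(T):
        if z in table:                 # candidate chain: replay it from its start
            x = table[z]
            for _ in range(T):
                if f(x) == y:
                    return x
                x = f(x)
        z = f(z)                       # keep walking towards an endpoint
    return None

# Demo on a point that is certainly covered by the table (a chain start).
y = f(0)
assert invert(y) == 0
```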
Last updated:  2019-10-29
Predicate Encryption from Bilinear Maps and One-Sided Probabilistic Rank
Josh Alman, Robin Hui
In predicate encryption for a function $f$, an authority can create ciphertexts and secret keys which are associated with `attributes'. A user with decryption key $K_y$ corresponding to attribute $y$ can decrypt a ciphertext $CT_x$ corresponding to a message $m$ and attribute $x$ if and only if $f(x,y)=0$. Furthermore, the attribute $x$ remains hidden to the user if $f(x,y) \neq 0$. We construct predicate encryption from assumptions on bilinear maps for a large class of new functions, including sparse set disjointness, Hamming distance at most $k$, inner product mod 2, and any function with an efficient Arthur-Merlin communication protocol. Our construction uses a new probabilistic representation of Boolean functions we call `one-sided probabilistic rank,' and combines it with known constructions of inner product encryption in a novel way.
Last updated:  2019-09-18
Verifiable Registration-Based Encryption
Rishab Goyal, Satyanarayana Vusirikala
In a recent work, Garg, Hajiabadi, Mahmoody, and Rahimi (TCC 18) introduced a new encryption framework, which they referred to as Registration-Based Encryption (RBE). The central motivation behind RBE was to provide a novel methodology for solving the well-known key-escrow problem in Identity-Based Encryption (IBE) systems. Informally, in an RBE system there is no private-key generator, unlike in IBE systems; instead, it is replaced with a public key accumulator. Every user in an RBE system samples its own public-secret key pair, and sends the public key to the accumulator for registration. The key accumulator has no secret state, and is only responsible for compressing all the registered user identity-key pairs into a short public commitment. Here the encryptor only requires the compressed parameters along with the target identity, whereas a decryptor requires supplementary key material along with the secret key associated with the registered public key. The initial construction by Garg et al. (TCC 18) based on standard assumptions only provided weak efficiency properties. In a follow-up work by Garg, Hajiabadi, Mahmoody, Rahimi, and Sekar (PKC 19), they gave an efficient RBE construction from standard assumptions. However, both these works considered the key accumulator to be honest, which might be too strong an assumption in real-world scenarios. In this work, we initiate a formal study of RBE systems with malicious key accumulators. To that end, we introduce a strengthening of the RBE framework which we call Verifiable RBE (VRBE). A VRBE system additionally gives the users an extra capability to obtain short proofs from the key accumulator proving correct (and unique) registration for every registered user as well as proving non-registration for any yet unregistered identity. We construct VRBE systems which provide succinct proofs of registration and non-registration from standard assumptions (such as CDH, Factoring, LWE). Our proof systems also naturally allow a much more efficient audit process which can be performed by any non-participating third party as well. A by-product of our approach is that we provide a more efficient RBE construction than that provided in the prior work of Garg et al. (PKC 19). Lastly, we initiate a study on extensions of VRBE to a wider range of access and trust structures.
Last updated:  2019-09-18
Breaking the Bluetooth Pairing – The Fixed Coordinate Invalid Curve Attack
Eli Biham, Lior Neumann
Bluetooth is a widely deployed standard for wireless communications between mobile devices. It uses authenticated Elliptic Curve Diffie-Hellman for its key exchange. In this paper we show that the authentication provided by the Bluetooth pairing protocols is insufficient and does not provide the promised MitM protection. We present a new attack that modifies the y-coordinates of the public keys (while preserving the x-coordinates). The attack compromises the encryption keys of all of the current Bluetooth authenticated pairing protocols, provided both paired devices are vulnerable. Specifically, it successfully compromises the encryption keys of 50% of the Bluetooth pairing attempts, while in the other 50% the pairing of the victims is terminated. The affected vendors have been informed and patched their products accordingly, and the Bluetooth specification has been modified to address the new attack. We named our new attack the “Fixed Coordinate Invalid Curve Attack”. Unlike the well-known “Invalid Curve Attack” of Biehl et al., which recovers the private key by sending multiple specially crafted points to the victim, our attack is a MitM attack which modifies the public keys in a way that lets the attacker deduce the shared secret.
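A toy numerical illustration of the algebraic fact behind the attack (a sketch only, on made-up parameters rather than the Bluetooth curves): the short-Weierstrass addition formulas never use the coefficient $b$, so an injected point $(x, 0)$ lies on *some* curve with the same $a$ and has order 2 there, which is why the derived "shared secret" can take only two values and the attack succeeds in roughly half of the pairing attempts.

```python
# Short-Weierstrass arithmetic over F_p.  Note that the addition/doubling
# formulas use only the coefficient a, never b, which is what the attack
# exploits: (x, 0) lies on *some* curve y^2 = x^3 + a x + b' and has order 2.
p = 2 ** 127 - 1          # a toy prime, not a standardized Bluetooth curve
a = 3
INF = None                # point at infinity

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return INF                          # P + (-P), and also 2 * (x, 0)
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    R = INF
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

# Attacker replaces the y-coordinate of a public key by 0.
evil = (123456789, 0)
# Whatever the victim's secret scalar k is, k * evil is one of two values:
for k in (2, 3, 1000, 1001):
    print(k, mul(k, evil))   # even k -> point at infinity (None), odd k -> evil itself
```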
Last updated:  2019-09-18
A Machine-Checked Proof of Security for AWS Key Management Service
José Bacelar Almeida, Manuel Barbosa, Gilles Barthe, Matthew Campagna, Ernie Cohen, Benjamin Gregoire, Vitor Pereira, Bernardo Portela, Pierre-Yves Strub, Serdar Tasiran
We present a machine-checked proof of security for the domain management protocol of Amazon Web Services' KMS (Key Management Service), a critical security service used throughout AWS and by AWS customers. Domain management is at the core of AWS KMS; it governs the top-level keys that anchor the security of encryption services at AWS. We show that the protocol securely implements an ideal distributed encryption mechanism under standard cryptographic assumptions. The proof is machine-checked in the EasyCrypt proof assistant and is the largest EasyCrypt development to date.
Last updated:  2019-09-18
A Conditional Privacy Preserving Authentication and Multi Party Group Key Establishment Scheme for Real-Time Application in VANETs
Swapnil Paliwal, Anvita Chandrakar
Vehicular Ad-hoc Networks (VANETs) are a cardinal part of intelligent transportation systems (ITS) and render various services in terms of traffic and transport management. A VANET is used to manage growing traffic and to manage data about traffic conditions, weather, road conditions, speed of the vehicle, etc. Even though VANETs are self-sufficient and effective networks, they still suffer from various security and privacy issues. VANETs need to ensure that an adversary is not able to breach user-associated data or delete or modify the exchanged messages for its own gain, as these messages comprise sensitive data. In this paper, we propose an authentication and key-agreement protocol based on cryptographic hash functions, which makes it lightweight and suitable for the VANET environment. Moreover, to enhance the security and reliability of the entire system, the proposed key-agreement protocol makes use of a random session modulus to compute a dynamic session key, i.e., for every session, vehicles generate their session-specific secret moduli, which are then combined to form a common group session key. The formal verification of the proposed work is done using the Real-or-Random oracle model, AVISPA, and BAN logic, while informal security analysis shows that the proposed protocol can withstand various attacks. The simulation results and analysis demonstrate that the proposed work is efficient and has real-time applications in the VANET environment.
Last updated:  2019-09-18
Hardware-Software Co-Design Based Obfuscation of Hardware Accelerators
Abhishek Chakraborty, Ankur Srivastava
Existing logic obfuscation approaches aim to protect hardware design IPs from SAT attack by increasing query count and output corruptibility of a locked netlist. In this paper, we demonstrate the ineffectiveness of such techniques for obfuscating hardware accelerator platforms. Subsequently, we propose a Hardware/software co-design based Accelerator Obfuscation (HSCAO) scheme to provably safeguard the IP of such designs against SAT as well as removal/bypass-type attacks while still maintaining high output corruptibility for applications. The attack resiliency of the HSCAO scheme is achieved by using a sequence of keys to obfuscate instruction encoding for an application. Experimental evaluations utilizing an accelerator simulator demonstrate the effectiveness of our proposed countermeasure.
Last updated:  2019-09-18
Accelerated V2X provisioning with Extensible Processor Platform
Henrique S. Ogawa, Thomas E. Luther, Jefferson E. Ricardini, Helmiton Cunha, Marcos Simplicio Jr., Diego F. Aranha, Ruud Derwig, Harsh Kupwade-Patil
With the burgeoning Vehicle-to-Everything (V2X) communication, security and privacy concerns are paramount. Such concerns are usually mitigated by combining cryptographic mechanisms with a suitable key management architecture. However, cryptographic operations may be quite resource-intensive, placing a considerable burden on the vehicle’s V2X computing unit. To assuage this issue, it is reasonable to use hardware acceleration for common cryptographic primitives, such as block ciphers, digital signature schemes, and key exchange protocols. In this scenario, custom extension instructions can be a plausible option, since they achieve fine-tuned hardware acceleration with a low to moderate logic overhead, while also reducing code size. In this article, we apply this method along with dual-data memory banks for the hardware acceleration of the PRESENT block cipher, as well as for the $F_{2^{255}-19}$ finite field arithmetic employed in cryptographic primitives based on Curve25519 (e.g., EdDSA and X25519). As a result, when compared with a state-of-the-art software-optimized implementation, the performance of PRESENT is improved by a factor of 17 to 34 and code size is reduced by 70%, with only a 4.37% increase in FPGA logic overhead. In addition, we improve the performance of operations over Curve25519 by a factor of ~2.5 when compared to an assembly implementation on a comparable processor, with moderate logic overhead (namely, 9.1%). Finally, we achieve significant performance gains in the V2X provisioning process by leveraging our hardware-accelerated cryptographic primitives.
Last updated:  2019-10-14
Dynamic Searchable Encryption with Access Control
Johannes Blömer, Nils Löken
We present a searchable encryption scheme for dynamic document collections in a multi-user scenario. Our scheme features fine-grained access control to search results, as well as access control to operations such as adding documents to the document collection, or changing individual documents. The scheme features verifiability of search results. Our scheme also satisfies the forward privacy notion crucial for the security of dynamic searchable encryption schemes.
Last updated:  2019-11-22
Card-based Cryptography Meets Formal Verification
Alexander Koch, Michael Schrempp, Michael Kirsten
Card-based cryptography provides simple and practicable protocols for performing secure multi-party computation (MPC) with just a deck of cards. For the sake of simplicity, this is often done using cards with only two symbols, e.g., clubs and hearts. Within this paper, we target the setting where all cards carry distinct symbols, catering for use-cases with commonly available standard decks and a weaker indistinguishability assumption. So far, the literature provides only three protocols and no proofs of non-trivial lower bounds on the number of cards. As such complex proofs (handling very large combinatorial state spaces) tend to be involved and error-prone, we propose using formal verification for finding protocols and proving lower bounds. In this paper, we employ the technique of software bounded model checking (SBMC), which reduces the problem to a bounded state space that is then automatically and exhaustively searched using a SAT solver as a backend. Our contribution is twofold: (a) We identify two protocols for converting between different bit encodings with overlapping bases, and then show them to be card-minimal. This completes the picture of tight lower bounds on the number of cards with respect to runtime behavior and shuffle properties of conversion protocols. For computing AND, we show that there is no protocol with finite runtime using four cards with distinguishable symbols and fixed output encoding, and give a four-card protocol with an expected finite runtime using only random cuts. (b) We provide a general translation of proofs for lower bounds to a bounded model checking framework for automatically finding card- and length-minimal protocols and to give additional confidence in lower bounds. We apply this to validate our method and, as an example, confirm our new AND protocol to have a shortest run for protocols using this number of cards.
Last updated:  2019-09-18
Post-Quantum Variants of ISO/IEC Standards: Compact Chosen Ciphertext Secure Key Encapsulation Mechanism from Isogenies
Kazuki Yoneyama
ISO/IEC standardizes several chosen ciphertext-secure key encapsulation mechanism (KEM) schemes in ISO/IEC 18033-2. However, none of the ISO/IEC KEM schemes is quantum resilient. In this paper, we introduce new isogeny-based KEM schemes (i.e., CSIDH-ECIES-KEM and CSIDH-PSEC-KEM) by modifying Diffie-Hellman-based KEM schemes in ISO/IEC standards. The main advantage of our schemes is compactness. The key size and the ciphertext overhead of our schemes are smaller than those of SIKE, which has been submitted to NIST's post-quantum cryptography standardization, for current security analyses.
Last updated:  2020-06-19
An LLL Algorithm for Module Lattices
Changmin Lee, Alice Pellet-Mary, Damien Stehlé, Alexandre Wallet
The LLL algorithm takes as input a basis of a Euclidean lattice, and, within a polynomial number of operations, it outputs another basis of the same lattice but consisting of rather short vectors. We provide a generalization to R-modules contained in K^n for arbitrary number fields K and dimension n, with R denoting the ring of integers of K. Concretely, we introduce an algorithm that efficiently finds short vectors in rank-n modules when given access to an oracle that finds short vectors in rank-2 modules, and an algorithm that efficiently finds short vectors in rank-2 modules given access to a Closest Vector Problem oracle for a lattice that depends only on K. The second algorithm relies on quantum computations and its analysis is heuristic. In the special case of free modules, we propose a dequantized version of this algorithm.
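The rank-2 building block that the algorithm above treats as an oracle has, for plain integer lattices, a classical analogue in Lagrange-Gauss reduction; a minimal sketch of that elementary $\mathbb{Z}$-lattice version (not the module setting over $R$ studied in the paper) is:

```python
def lagrange_gauss(u, v):
    """Reduce a basis (u, v) of a rank-2 integer lattice to a shortest one."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    while True:
        if dot(v, v) < dot(u, u):
            u, v = v, u                       # keep u as the shorter vector
        m = round(dot(u, v) / dot(u, u))      # nearest-integer Gram coefficient
        if m == 0:
            return u, v                       # basis is Gauss-reduced
        v = (v[0] - m * u[0], v[1] - m * u[1])

# The lattice spanned by (1, 1) and (5, 6) is all of Z^2; reduction finds
# a basis of two vectors of norm 1.
print(lagrange_gauss((1, 1), (5, 6)))
```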
Last updated:  2020-03-11
Sponges Resist Leakage: The Case of Authenticated Encryption
Jean Paul Degabriele, Christian Janson, Patrick Struck
In this work we advance the study of leakage-resilient Authenticated Encryption with Associated Data (AEAD) and lay the theoretical groundwork for building such schemes from sponges. Building on the work of Barwell et al. (ASIACRYPT 2017), we reduce the problem of constructing leakage-resilient AEAD schemes to that of building fixed-input-length function families that retain pseudorandomness and unpredictability in the presence of leakage. Notably, neither property is implied by the other in the leakage-resilient setting. We then show that such a function family can be combined with standard primitives, namely a pseudorandom generator and a collision-resistant hash, to yield a nonce-based AEAD scheme. In addition, our construction is quite efficient in that it requires only two calls to this leakage-resilient function per encryption or decryption call. This construction can be instantiated entirely from the T-sponge to yield a concrete AEAD scheme which we call SLAE. We prove this sponge-based instantiation secure in the non-adaptive leakage setting. SLAE bears many similarities and is indeed inspired by ISAP, which was proposed by Dobraunig et al. at FSE 2017. However, while retaining most of the practical advantages of ISAP, SLAE additionally benefits from a formal security treatment.
Last updated:  2019-09-16
Anonymous AE
John Chan, Phillip Rogaway
The customary formulation of authenticated encryption (AE) requires the decrypting party to supply the correct nonce with each ciphertext it decrypts. To enable this, the nonce is often sent in the clear alongside the ciphertext. But doing this can forfeit anonymity and degrade usability. Anonymity can also be lost by transmitting associated data (AD) or a session-ID (used to identify the operative key). To address these issues, we introduce anonymous AE, wherein ciphertexts must conceal their origin even when they are understood to encompass everything needed to decrypt (apart from the receiver's secret state). We formalize a type of anonymous AE we call anAE, anonymous nonce-based AE, which generalizes and strengthens conventional nonce-based AE, nAE. We provide an efficient construction for anAE, NonceWrap, from an nAE scheme and a blockcipher. We prove NonceWrap secure. While anAE does not address privacy loss through traffic-flow analysis, it does ensure that ciphertexts, now more expansively construed, do not by themselves compromise privacy.
Last updated:  2019-09-19
On Fully Secure MPC with Solitary Output
Shai Halevi, Yuval Ishai, Eyal Kushilevitz, Nikolaos Makriyannis, Tal Rabin
We study the possibility of achieving full security, with guaranteed output delivery, for secure multiparty computation of functionalities where only one party receives output, to which we refer as solitary functionalities. In the standard setting where all parties receive an output, full security typically requires an honest majority; otherwise even just achieving fairness is impossible. However, for solitary functionalities, fairness is clearly not an issue. This raises the following question: Is full security with no honest majority possible for all solitary functionalities? We give a negative answer to this question, by showing the existence of solitary functionalities that cannot be computed with full security. While such a result cannot be proved using fairness based arguments, our proof builds on the classical proof technique of Cleve (STOC 1986) for ruling out fair coin-tossing and extends it in a nontrivial way. On the positive side, we show that full security against any number of malicious parties is achievable for many natural and useful solitary functionalities, including ones for which the multi-output version cannot be realized with full security.
Last updated:  2019-09-11
An efficient and secure ID-based multi-proxy multi-signature scheme based on lattice
Rahim Toluee, Taraneh Eghlidos
Multi-proxy multi-signature schemes are useful in distributed networks, where a group of users could cooperatively delegate their administrative rights to the users of another group, who are authorized to generate the proxy signatures cooperatively on behalf of the original signers. In this paper, we propose an ID-based lattice-based multi-proxy multi-signature (ILMPMS) scheme, which enjoys security against quantum computers and efficiency due to its ID-based framework, linear operations, and the possibility of parallel computations based on lattices. For this purpose, we first propose an ID-based lattice-based multi-signature scheme, used as the underlying signature in our ILMPMS scheme. We prove existential unforgeability of both schemes against adaptive chosen-message attack in the random oracle model based on the hardness of the learning with errors problem over standard lattices.
Last updated:  2019-09-11
How to leverage hardness of constant degree expanding polynomials over R to build iO
Aayush Jain, Huijia Lin, Christian Matt, Amit Sahai
In this work, we introduce and construct $D$-restricted Functional Encryption (FE) for any constant $D \ge 3$, based only on the SXDH assumption over bilinear groups. This generalizes the notion of $3$-restricted FE recently introduced and constructed by Ananth et al. (ePrint 2018) in the generic bilinear group model. A $D=(d+2)$-restricted FE scheme is a secret key FE scheme that allows an encryptor to efficiently encrypt a message of the form $M=(\vec{x},\vec{y},\vec{z})$. Here, $\vec{x}\in F_{p}^{d\times n}$ and $\vec{y},\vec{z}\in F_{p}^n$. Function keys can be issued for a function $f=\Sigma_{\vec{I}=(i_1,..,i_d,j,k)}\ c_{\vec{I}}\cdot \vec{x}[1,i_1] \cdots \vec{x}[d,i_d] \cdot \vec{y}[j]\cdot \vec{z}[k]$ where the coefficients $c_{\vec{I}}\in F_{p}$. Knowing the function key and the ciphertext, one can learn $f(\vec{x},\vec{y},\vec{z})$, if this value is bounded in absolute value by some polynomial in the security parameter and $n$. The security requirement is that the ciphertext hides $\vec{y}$ and $\vec{z}$, although it is not required to hide $\vec{x}$. Thus $\vec{x}$ can be seen as a public attribute. $D$-restricted FE allows for useful evaluation of constant-degree polynomials, while only requiring the SXDH assumption over bilinear groups. As such, it is a powerful tool for leveraging hardness that exists in constant-degree expanding families of polynomials over $\mathbb{R}$. In particular, we build upon the work of Ananth et al. to show how to build indistinguishability obfuscation (iO) assuming only SXDH over bilinear groups, LWE, and assumptions relating to weak pseudorandom properties of constant-degree expanding polynomials over $\mathbb{R}$.
Last updated:  2019-09-11
Approximate Trapdoors for Lattices and Smaller Hash-and-Sign Signatures
Yilei Chen, Nicholas Genise, Pratyay Mukherjee
We study a relaxed notion of lattice trapdoor called approximate trapdoor, which is defined to be able to invert Ajtai's one-way function approximately instead of exactly. The primary motivation of our study is to improve the efficiency of the cryptosystems built from lattice trapdoors, including the hash-and-sign signatures. Our main contribution is to construct an approximate trapdoor by modifying the gadget trapdoor proposed by Micciancio and Peikert. In particular, we show how to use the approximate gadget trapdoor to sample short preimages from a distribution that is simulatable without knowing the trapdoor. The analysis of the distribution uses a theorem (implicitly used in past works) regarding linear transformations of discrete Gaussians on lattices. Our approximate gadget trapdoor can be used together with the existing optimization techniques to improve the concrete performance of the hash-and-sign signature in the random oracle model under (Ring-)LWE and (Ring-)SIS assumptions. Our implementation shows that the sizes of the public-key and signature can be reduced by half from those in schemes built from exact trapdoors.
Last updated:  2019-09-11
Faster Sieving Algorithm for Approximate SVP with Constant Approximation Factors
Divesh Aggarwal, Bogdan Ursu, Serge Vaudenay
There is a large gap between theory and practice in the complexities of sieving algorithms for solving the shortest vector problem in an arbitrary Euclidean lattice. In this paper, we work towards reducing this gap, providing theoretical refinements of the time and space complexity bounds in the context of the approximate shortest vector problem. This is achieved by relaxing the requirements on the AKS algorithm, rather than on the ListSieve, resulting in exponentially smaller bounds starting from $\mu\approx 2$, for constant values of $\mu$. We also explain why these improvements carry over to give the fastest quantum algorithms for the approximate shortest vector problem.
Last updated:  2020-02-07
Quantum LLL with an Application to Mersenne Number Cryptosystems
Marcel Tiepelt, Alan Szepieniec
In this work we analyze the impact of translating the well-known LLL algorithm for lattice reduction into the quantum setting. We present the first (to the best of our knowledge) quantum circuit representation of a lattice reduction algorithm in the form of explicit quantum circuits implementing the textbook LLL algorithm. Our analysis identifies a set of challenges arising from constructing reversible lattice reduction as well as solutions to these challenges. We give a detailed resource estimate with the Toffoli gate count and the number of logical qubits as complexity metrics. As an application, we attack Mersenne number cryptosystems by Groverizing an attack due to Beunardeau et al. that uses LLL as a subprocedure. While Grover's quantum algorithm promises a quadratic speedup over exhaustive search given access to an oracle that distinguishes solutions from non-solutions, we show that in our case, realizing the oracle comes at the cost of a large number of qubits. When an adversary translates the attack by Beunardeau et al. into the quantum setting, the overhead of the quantum LLL circuit may be as large as $2^{52}$ qubits for the textbook implementation and $2^{33}$ for a floating-point variant.
Last updated:  2020-01-11
Efficient Tightly-Secure Structure-Preserving Signatures and Unbounded Simulation-Sound QA-NIZK Proofs
Mojtaba Khalili, Daniel Slamanig
We show how to construct structure-preserving signatures (SPS) and unbounded simulation-sound quasi-adaptive non-interactive zero-knowledge (USS QA-NIZK) proofs with a tight security reduction to simple assumptions, being the first with a security loss of $\mathcal{O}(1)$. Specifically, we present an SPS scheme which is more efficient than existing tightly secure SPS schemes and from an efficiency point of view is even comparable with other non-tight SPS schemes. In contrast to existing work, however, we achieve a security loss of only $\mathcal{O}(1)$, resolving an open problem posed by Abe et al. (CRYPTO 2017). In particular, our tightly secure SPS scheme under the SXDH assumption requires 11 group elements. Moreover, we present the first tightly secure USS QA-NIZK proofs with a security loss of $\mathcal{O}(1)$ which also simultaneously have a compact common reference string and constant-size proofs (5 elements under the SXDH assumption, which is only one element more than the best non-tight USS QA-NIZK). From a technical perspective, we present a novel randomization technique, inspired by the Naor-Yung paradigm and adaptive partitioning, to obtain a randomized pseudorandom function (PRF). In particular, our PRF uses two copies under different keys but with shared randomness. Then we adopt ideas of Kiltz, Pan and Wee (CRYPTO 2015), who base their SPS on a randomized PRF, but in contrast to their non-tight reduction our approach allows us to achieve tight security. Similarly, we construct the first compact USS QA-NIZK proofs adopting techniques from Kiltz and Wee (EUROCRYPT 2015). We believe that the techniques introduced in this paper to obtain tight security with a loss of $\mathcal{O}(1)$ will have value beyond our proposed constructions.
Last updated:  2019-09-11
On Perfect Correctness without Derandomization
Gilad Asharov, Naomi Ephraim, Ilan Komargodski, Rafael Pass
We give a method to transform any indistinguishability obfuscator that suffers from correctness errors into an indistinguishability obfuscator that is $\textit{perfectly}$ correct, assuming hardness of Learning With Errors (LWE). The transformation requires sub-exponential hardness of the obfuscator and of LWE. Our technique also applies to eliminating correctness errors in general-purpose functional encryption schemes, but here it is sufficient to rely on the polynomial hardness of the given scheme and of LWE. Both of our results can be based $\textit{generically}$ on any perfectly correct, single-key, succinct functional encryption scheme (that is, a scheme supporting Boolean circuits where encryption time is a fixed polynomial in the security parameter and the message size), in place of LWE. Previously, Bitansky and Vaikuntanathan (EUROCRYPT ’17) showed how to achieve the same task using a derandomization-type assumption (concretely, the existence of a function with deterministic time complexity $2^{O(n)}$ and non-deterministic circuit complexity $2^{\Omega(n)}$) which is non-game-based and non-falsifiable.
Last updated:  2020-11-21
Optimal-Round Preprocessing-MPC via Polynomial Representation and Distributed Random Matrix
Dor Bitan, Shlomi Dolev
We present preprocessing-MPC schemes of arithmetic functions with optimal round complexity, function-independent correlated randomness, and communication and space complexities that grow linearly with the size of the function. We extend our results to the client-server model and present a scheme which enables a user to outsource the storage of confidential data to $N$ distrusted servers and have the servers perform computations over the encrypted data in a single round of communication. We further extend our results to handle Boolean circuits. All our schemes have perfect passive security against coalitions of up to $N-1$ parties. Our schemes are based on a novel secret sharing scheme, Distributed Random Matrix (DRM), which we present here. The DRM secret sharing scheme supports homomorphic multiplications, and, after a single round of communication, supports homomorphic additions. Our approach deviates from standard conventions of MPC. First, we consider a representation of the function $f$ as a multivariate polynomial (rather than an arithmetic circuit). Second, we divide the problem into two cases. We begin with solving the Non-Vanishing case, in which the inputs are non-zero elements of $F_p$. In this case, our schemes have space complexity $O(nkN)$ and communication complexity $O(nkN^2)$, where $n$ is the size of the input, and $k$ is the number of monomials of the function. Then, we present several solutions for the general case, in which some of the secrets can be zero. In these solutions, the space and communication complexities are either $O(nkN^2 \cdot 2^n)$ and $O(nkN^3 \cdot 2^n)$, or $O(nkN)$ and $O(nkN^2)$, respectively, where $K$ is the size of a modified version of $f$. $K$ is bounded by the square of the maximal possible size of $k$.
Last updated:  2020-08-03
Randomly Choose an Angle from Immense Number of Angles to Rotate Qubits, Compute and Reverse
Dor Bitan, Shlomi Dolev
Homomorphic encryption (HE) schemes enable the processing of encrypted data and may be used by a user to outsource storage and computations to an untrusted server. A plethora of HE schemes has been suggested in the past four decades, based on various assumptions and achieving different attributes. In this work, we assume that the user and server are quantum computers, and look for HE schemes for classical data. We set a high bar of requirements and ask what can be achieved under these requirements. Namely, we look for HE schemes which are efficient, information-theoretically secure, perfectly correct, and which support homomorphic operations in a fully compact and non-interactive way. Fully compact means that decryption costs O(1) time and space. In contrast to the legacy quantum one-time pad scheme, our scheme is computation-agnostic. That is, when delegating computations, the user can remain utterly oblivious to the implementation method chosen by the cloud. We suggest an encryption scheme based on random bases and discuss the homomorphic properties of that scheme. One of the advantages of our scheme is providing better security in the face of weak measurements (WM). Measurements of this kind enable collecting partial information on a quantum state while only slightly disturbing the state. We suggest here a novel QKD scheme based on random bases, which is resilient against WM-based attacks. We demonstrate the usefulness of our scheme in several applications. Notably, we bring up a new concept we call securing entanglement. We look at entangled systems of qubits as a resource, used for carrying out quantum computations, and show how our scheme may be used to guarantee that an entangled system can be used only by its rightful owners. To the best of our knowledge, this concept has not been discussed in previous literature.
Last updated:  2020-06-10
A Simple and Efficient Key Reuse Attack on NTRU Cryptosystem
Jintai Ding, Joshua Deaton, Kurt Schmidt, Vishakha, Zheng Zhang
In 1998, Jeffrey Hoffstein, Jill Pipher, and Joseph H. Silverman introduced the famous NTRU cryptosystem, and called it "A ring-based public key cryptosystem". Actually, it turns out to be a lattice-based cryptosystem that is resistant to Shor's algorithm. There are several modifications to the original NTRU, and two of them were selected as Round-2 candidates of NIST's post-quantum public-key standardization. In this paper, we present a simple attack on the original NTRU scheme. The idea comes from Ding et al.'s key mismatch attack. Essentially, an adversary can find information on the private key of a KEM by not encrypting a message as intended but in a manner which will cause a failure in decryption if the private key is in a certain form. As specified, NTRU has the encrypter generate a random polynomial with "small" coefficients, but we instead have the coefficients be "large". After this, some further work will create an equivalent key.
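For background on why choosing "large" coefficients matters (textbook NTRU in one common formulation, not the paper's new analysis), recall the decryption-failure condition that an attacker-chosen blinding polynomial can violate, which is exactly the kind of signal such key-reuse attacks exploit:

```latex
% Public key h = f^{-1} g mod q; ciphertext for message m with blinding r:
\[
  e \;=\; p\, r\, h + m \pmod{q}.
\]
% Decryption multiplies by the private key f and reduces centered mod q:
\[
  f \cdot e \;=\; p\, r\, g + f\, m \pmod{q},
\]
% which recovers p r g + f m exactly (and hence m mod p) only if every
% coefficient of p r g + f m lies in (-q/2, q/2].  An encrypter who uses a
% "large" r instead of the prescribed small one makes this condition depend
% on f, turning decryption success or failure into information about f.
```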
Last updated:  2020-02-18
Recursive Proof Composition without a Trusted Setup
Sean Bowe, Jack Grigg, Daira Hopwood
Non-interactive arguments of knowledge are powerful cryptographic tools that can be used to demonstrate the faithful execution of arbitrary computations with publicly verifiable proofs. Increasingly efficient protocols have been described in recent years, with verification time and/or communication complexity that is sublinear in the size of the computation being described. These efficiencies can be exploited to realize recursive proof composition: the concept of proofs that attest to the correctness of other instances of themselves, thereby allowing large computational effort to be incrementally verified. All previously known realizations of recursive proof composition have required a trusted setup and cycles of expensive pairing-friendly elliptic curves. We obtain and implement Halo, the first practical example of recursive proof composition without a trusted setup, using the discrete log assumption over normal cycles of elliptic curves. In the process we develop several novel techniques that may be of independent interest.
Last updated:  2019-09-11
Transparent Polynomial Commitment Scheme with Polylogarithmic Communication Complexity
Alexander Vlasov, Konstantin Panarin
We introduce a novel efficient and transparent construction of a polynomial commitment scheme. A polynomial commitment scheme allows one side (the prover) to commit to a polynomial of predefined degree $d$ with a string that can be later used by another side (the verifier) to confirm claimed evaluations of the committed polynomial at specific points. Efficiency means that the communication costs of the interaction between prover and verifier during the protocol are very small compared to sending the whole committed polynomial itself, and are polylogarithmic in our case. Transparency means that our scheme doesn't require any preliminary trusted setup ceremony. We explicitly state that our polynomial commitment scheme is not hiding, although zero knowledge can be achieved at the application level in most of the cases.
Last updated:  2019-09-11
Revisiting the Hybrid attack on sparse and ternary secret LWE
Yongha Son, Jung Hee Cheon
In the practical use of Learning With Errors (LWE) based cryptosystems, it is quite common to choose the secret to be extremely small: one popular choice is a ternary ($\pm 1, 0$) coefficient vector, and some schemes further use a ternary vector having only a small number of nonzero coefficients, which is called a sparse ternary vector. This use of small secrets also benefits attack algorithms against LWE, and currently LWE-based cryptosystems, including homomorphic encryption (HE) schemes, set parameters based on the complexity of those improved attacks. In this work, we revisit Howgrave-Graham's well-known hybrid attack, which was originally designed to solve the NTRU problem, with respect to the sparse ternary secret LWE case, and also refine the previous analysis of the hybrid attack in line with the LWE setting. Moreover, based on our analysis, we estimate the attack complexity of the hybrid attack for several LWE parameters. As a result, we argue that the currently used HE parameters should be raised to maintain the same security level when the hybrid attack is considered; for example, the parameter set $(n, \log q, \sigma) = (65536, 1240, 3.2)$ with Hamming weight of secret key $h = 64,$ which was estimated to satisfy $\ge 128$ bit-security by the previously considered attacks, is newly estimated to provide only $113$ bit-security by the hybrid attack.
Last updated:  2019-09-10
Towards Instantiating the Algebraic Group Model
Julia Kastner, Jiaxin Pan
The Generic Group Model (GGM) is one of the most important tools for analyzing the hardness of a cryptographic problem. Although a proof in the GGM provides a certain degree of confidence in the problem's hardness, it is a rather strong and limited model, since it does not allow an algorithm to exploit any property of the group structure. To bridge the gap between the GGM and the Standard Model, Fuchsbauer, Kiltz, and Loss proposed a model, called the Algebraic Group Model (AGM, CRYPTO 2018). In the AGM, an adversary can take advantage of the group structure, but it needs to provide a representation of its group element outputs, which seems weaker than the GGM but stronger than the Standard Model. Due to this additional information we learn about the adversary, the AGM allows us to derive simple but meaningful security proofs. In this paper, we take the first step to bridge the gap between the AGM and the Standard Model. We instantiate the AGM under Standard Assumptions. More precisely, we construct two algebraic groups under the Knowledge of Exponent Assumption (KEA). In addition to the KEA, our first construction requires symmetric pairings, and our second construction needs an additively homomorphic Non-Interactive Zero-Knowledge (NIZK) argument system, which can be implemented by a standard variant of Diffie-Hellman Assumption in the asymmetric pairing setting. Furthermore, we show that both of our constructions provide cryptographic hardness which can be used to construct secure cryptosystems. We note that the KEA provably holds in the GGM. Our results show that, instead of instantiating the seemingly complex AGM directly, one can try to instantiate the GKEA under falsifiable assumptions in the Standard Model. Thus, our results can serve as a stepping stone towards instantiating the AGM under falsifiable assumptions.
Last updated:  2019-09-12
The Local Forking Lemma and its Application to Deterministic Encryption
Mihir Bellare, Wei Dai, Lucy Li
We bypass impossibility results for the deterministic encryption of public-key-dependent messages, showing that, in this setting, the classical Encrypt-with-Hash scheme provides message-recovery security, across a broad range of message distributions. The proof relies on a new variant of the forking lemma in which the random oracle is reprogrammed on just a single fork point rather than on all points past the fork.
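A toy Python sketch of the Encrypt-with-Hash idea (illustrative only: textbook ElGamal over a small prime stands in for a generic randomized public-key scheme, and the parameters are far too small to be secure): the encryption coins are derived by hashing the public key together with the message, so encryption becomes deterministic while decryption is unchanged.

```python
import hashlib

# Toy parameters; textbook ElGamal stands in for the generic randomized PKE.
p = 2 ** 61 - 1        # a Mersenne prime (toy size only)
g = 3

def keygen(seed=7):
    x = seed % (p - 1)              # toy secret key
    return x, pow(g, x, p)

def ewh_encrypt(pk, m):
    """Encrypt-with-Hash: coins are H(pk, m), so encryption is deterministic."""
    h = hashlib.sha256(f"{pk}|{m}".encode()).digest()
    r = int.from_bytes(h, "big") % (p - 1) or 1
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def decrypt(x, ct):
    c1, c2 = ct
    return (c2 * pow(c1, p - 1 - x, p)) % p

x, pk = keygen()
m = 424242
ct1 = ewh_encrypt(pk, m)
ct2 = ewh_encrypt(pk, m)
assert ct1 == ct2                   # same (pk, m) -> same ciphertext
assert decrypt(x, ct1) == m
```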
Last updated:  2019-09-10
Quantum Algorithms for the Approximate $k$-List Problem and their Application to Lattice Sieving
Elena Kirshanova, Erik Mårtensson, Eamonn W. Postlethwaite, Subhayan Roy Moulik
The Shortest Vector Problem (SVP) is one of the mathematical foundations of lattice based cryptography. Lattice sieve algorithms are amongst the foremost methods of solving SVP. The asymptotically fastest known classical and quantum sieves solve SVP in a \(d\)-dimensional lattice in \(2^{cd + o(d)}\) time steps with \(2^{c'd + o(d)}\) memory for constants \(c, c'\). In this work, we give various quantum sieving algorithms that trade computational steps for memory. We first give a quantum analogue of the classical \(k\)-Sieve algorithm [Herold--Kirshanova--Laarhoven, PKC'18] in the Quantum Random Access Memory (QRAM) model, achieving an algorithm that heuristically solves SVP in \(2^{0.2989d + o(d)}\) time steps using \(2^{0.1395d + o(d)}\) memory. This should be compared to the state-of-the-art algorithm [Laarhoven, Ph.D Thesis, 2015] which, in the same model, solves SVP in \(2^{0.2653d + o(d)}\) time steps and memory. In the QRAM model these algorithms can be implemented using \(poly(d)\) width quantum circuits. Secondly, we frame the \(k\)-Sieve as the problem of \(k\)-clique listing in a graph and apply quantum \(k\)-clique finding techniques to the \(k\)-Sieve. Finally, we explore the large quantum memory regime by adapting parallel quantum search [Beals et al., Proc. Roy. Soc. A'13] to the \(2\)-Sieve and giving an analysis in the quantum circuit model. We show how to heuristically solve SVP in \(2^{0.1037d + o(d)}\) time steps using \(2^{0.2075d + o(d)}\) quantum memory.
Last updated:  2020-09-22
Asynchronous Distributed Key Generation for Computationally-Secure Randomness, Consensus, and Threshold Signatures.
Eleftherios Kokoris-Kogias, Dahlia Malkhi, Alexander Spiegelman
In this paper, we present the first fully asynchronous distributed key generation (ADKG) algorithm as well as the first distributed key generation algorithm that can create keys with a dual $(f,2f+1)$-threshold, which are necessary for scalable consensus (which so far needs a trusted dealer assumption). In order to create a DKG with a dual $(f,2f+1)$-threshold we first answer in the affirmative the open question posed by Cachin et al. of how to create an AVSS protocol with recovery thresholds $ f+1 < k \le 2f+1$, which is of independent interest. Our High-threshold-AVSS (\textit{HAVSS}) uses an asymmetric bi-variate polynomial, where the shared secret is hidden from any set of $k$ nodes, but an honest node that did not participate in the sharing phase can still recover its share with only $n-2f$ shares and hence can contribute to the secret reconstruction. Another building block for ADKG is a novel \textit{Eventually Perfect} Common Coin (EPCC) abstraction and protocol that enables the participants to create a common coin that might fail to agree at most $f+1$ times (even if invoked a polynomial number of times). Using \textit{EPCC} we implement an Eventually Efficient Asynchronous Binary Agreement (EEABA) in which each instance takes $O(n^2)$ bits and $O(1)$ rounds in expectation, except for at most $f+1$ instances which may take $O(n^4)$ bits and $O(n)$ rounds in total. Using EEABA we construct the first fully Asynchronous Distributed Key Generation (ADKG) which has the same overhead and expected runtime as the best partially-synchronous DKG ($O(n^4)$ words, $O(n)$ rounds). As a corollary of our ADKG we can also create the first Validated Asynchronous Byzantine Agreement (VABA) in the authenticated setting that does not need a trusted dealer to set up threshold signatures of degree $n-f$. Our VABA has an overhead of expected $O(n^2)$ words and $O(1)$ time per instance after an initial $O(n^4)$ words and $O(n)$ time bootstrap via ADKG.
Last updated:  2020-11-28
Security Reductions for White-Box Key-Storage in Mobile Payments
Estuardo Alpirez Bock, Chris Brzuska, Marc Fischlin, Christian Janson, Wil Michiels
The goal of white-box cryptography is to provide security even when the cryptographic implementation is executed in adversarially controlled environments. White-box implementations nowadays appear in commercial products such as mobile payment applications, e.g., those certified by Mastercard. Interestingly, there, white-box cryptography is championed as a tool for secure storage of payment tokens, and importantly, the white-boxed storage functionality is bound to a hardware functionality to prevent code-lifting attacks. In this paper, we show that the approach of using hardware binding and obfuscation for secure storage is conceptually sound. Following security specifications by Mastercard, we first define security for a white-box key derivation function (WKDF) that is bound to a hardware functionality. WKDFs with hardware binding model a secure storage functionality, as the WKDFs can in turn be used to derive encryption keys for secure storage. We then provide a proof-of-concept construction of WKDFs based on pseudorandom functions (PRF) and obfuscation. To show that our use of cryptographic primitives is sound, we perform a cryptographic analysis and reduce the security of our WKDF to the cryptographic assumptions of indistinguishability obfuscation and PRF-security. The hardware functionality that our WKDF is bound to is a PRF-like functionality. Obfuscation helps us to hide the secret key used for the verification, essentially emulating a signature functionality as is provided by the Android key store. We rigorously define the required security properties of a hardware-bound white-box payment application (WPAY) for generating and encrypting valid payment requests. We construct a WPAY, which uses a WKDF as a secure building block. We thereby show that a WKDF can be securely combined with any secure symmetric encryption scheme, including those based on standard ciphers such as AES.
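The following sketch only illustrates the general shape of hardware binding and is not the paper's construction: `hardware_prf` is a hypothetical stand-in for the device-bound functionality, HMAC-SHA256 stands in for the PRF, and the obfuscation layer and security reduction that the paper actually relies on are omitted entirely.

```python
import hmac, hashlib

def hardware_prf(label: bytes) -> bytes:
    """Hypothetical stand-in for a device-bound PRF-like functionality
    (e.g. a key kept inside a hardware keystore); here just a fixed-key HMAC."""
    device_key = b'\x01' * 32          # would never leave the hardware in practice
    return hmac.new(device_key, label, hashlib.sha256).digest()

def wkdf(whitebox_key: bytes, label: bytes) -> bytes:
    """Derive a storage key that is useless without BOTH the (white-box
    protected) software key and the hardware response for this label."""
    hw_tag = hardware_prf(label)
    return hmac.new(whitebox_key, hw_tag + label, hashlib.sha256).digest()

k_storage = wkdf(b'\x02' * 32, b'payment-token-store')
print(k_storage.hex())
```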
Last updated:  2019-09-10
A Critical Analysis of ISO 17825 (`Testing methods for the mitigation of non-invasive attack classes against cryptographic modules')
Carolyn Whitnall, Elisabeth Oswald
The ISO standardisation of `Testing methods for the mitigation of non-invasive attack classes against cryptographic modules' (ISO/IEC 17825:2016) specifies the use of the Test Vector Leakage Assessment (TVLA) framework as the sole measure to assess whether or not an implementation of (symmetric) cryptography is vulnerable to differential side-channel attacks. It is the only publicly available standard of this kind, and the first side-channel assessment regime to exclusively rely on a TVLA instantiation. TVLA essentially specifies statistical leakage detection tests with the aim of removing the burden of having to test against an ever increasing number of attack vectors. It offers the tantalising prospect of `conformance testing': if a device passes TVLA, then, one is led to hope, the device would be secure against all (first-order) differential side-channel attacks. In this paper we provide a statistical assessment of the specific instantiation of TVLA in this standard. This task leads us to inquire whether (or not) it is possible to assess the side-channel security of a device via leakage detection (TVLA) only. We find a number of grave issues in the standard and its adaptation of the original TVLA guidelines. We propose some innovations on existing methodologies and finish by giving recommendations for best practice and the responsible reporting of outcomes.
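For context, the core of the TVLA methodology assessed here is a fixed-vs-random Welch t-test computed per time sample over two sets of traces, with |t| > 4.5 the commonly used detection threshold. The sketch below uses simulated traces and only shows that core statistic, not the standard's full procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated power traces: 1000 traces x 200 time samples for each input class.
fixed = rng.normal(0.0, 1.0, size=(1000, 200))
rand  = rng.normal(0.0, 1.0, size=(1000, 200))
fixed[:, 57] += 0.3                      # inject a small artificial leak at sample 57

def welch_t(a, b):
    """Welch's t-statistic per time sample (the core of the TVLA detection test)."""
    ma, mb = a.mean(axis=0), b.mean(axis=0)
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    return (ma - mb) / np.sqrt(va / len(a) + vb / len(b))

t = welch_t(fixed, rand)
THRESHOLD = 4.5                          # conventional TVLA pass/fail threshold
print("samples flagged as leaking:", np.where(np.abs(t) > THRESHOLD)[0])
```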
Last updated:  2019-09-10
Simple and Efficient KDM-CCA Secure Public Key Encryption
Fuyuki Kitagawa, Takahiro Matsuda, Keisuke Tanaka
We propose two efficient public key encryption (PKE) schemes satisfying key dependent message security against chosen ciphertext attacks (KDM-CCA security). The first one is KDM-CCA secure with respect to affine functions. The other one is KDM-CCA secure with respect to polynomial functions. Both of our schemes are based on the KDM-CPA secure PKE schemes proposed by Malkin, Teranishi, and Yung (EUROCRYPT 2011). Although our schemes satisfy KDM-CCA security, their efficiency overheads compared to Malkin et al.'s schemes are very small. Thus, efficiency of our schemes is drastically improved compared to the existing KDM-CCA secure schemes. We achieve our results by extending the construction technique by Kitagawa and Tanaka (ASIACRYPT 2018). Our schemes are obtained via semi-generic constructions using an IND-CCA secure PKE scheme as a building block. We prove the KDM-CCA security of our schemes based on the decisional composite residuosity (DCR) assumption and the IND-CCA security of the building block PKE scheme. Moreover, our security proofs are tight if the IND-CCA security of the building block PKE scheme is tightly reduced to its underlying computational assumption. By instantiating our schemes using existing tightly IND-CCA secure PKE schemes, we obtain the first tightly KDM-CCA secure PKE schemes whose ciphertext consists only of a constant number of group elements.
Last updated:  2020-09-25
COSAC: COmpact and Scalable Arbitrary-Centered Discrete Gaussian Sampling over Integers
Raymond K. Zhao, Ron Steinfeld, Amin Sakzad
The arbitrary-centered discrete Gaussian sampler is a fundamental subroutine in implementing lattice trapdoor sampling algorithms. However, existing approaches typically rely on either a fast implementation of another discrete Gaussian sampler or pre-computations with regard to some specific discrete Gaussian distributions with fixed centers and standard deviations. These approaches may only support sampling from standard deviations within a limited range, or cannot efficiently sample from arbitrary standard deviations determined on-the-fly at run-time. In this paper, we propose a compact and scalable rejection sampling algorithm by sampling from a continuous normal distribution and performing rejection sampling on rounded samples. Our scheme does not require pre-computations related to any specific discrete Gaussian distributions. Our scheme can sample from both arbitrary centers and arbitrary standard deviations determined on-the-fly at run-time. In addition, we show that our scheme only requires a low number of trials close to 2 per sample on average, and our scheme maintains good performance when scaling up the standard deviation. We also provide a concrete error analysis of our scheme based on the Rényi divergence. We implement our sampler and analyse its performance in terms of storage and speed compared to previous results. Our sampler's running time is center-independent and is therefore applicable to implementations of convolution-style lattice trapdoor sampling and identity-based encryption that are resistant against timing side-channel attacks.
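The sketch below is not the authors' algorithm, only a generic rejection sampler in the same spirit: propose from a rounded continuous normal with the desired (arbitrary) center and standard deviation, and accept with a probability that corrects the rounded proposal towards the discrete Gaussian. A standard tail cut bounds the correction factor; the acceptance bound used here is cruder than the paper's, and the code makes no attempt at constant-time behaviour.

```python
import math, random

def discrete_gaussian(c, sigma, tau=6, rng=random):
    """Sample z ~ D_{Z, sigma, c}, tail-cut at |z - c| <= tau*sigma, by rejection
    from a rounded continuous normal.  Illustrative only; not the COSAC algorithm."""
    B = tau * sigma              # tail cut
    delta = B + 0.25             # bounds (y-c)^2 - (z-c)^2 over a rounding cell
    while True:
        y = rng.gauss(c, sigma)  # continuous proposal with the target center/std
        z = round(y)
        if abs(z - c) > B:
            continue
        # Accept with prob exp(((y-c)^2 - (z-c)^2 - delta) / (2 sigma^2)) <= 1,
        # which makes the accepted z proportional to exp(-(z-c)^2 / (2 sigma^2)).
        logp = ((y - c) ** 2 - (z - c) ** 2 - delta) / (2 * sigma * sigma)
        if rng.random() < math.exp(logp):
            return z

print([discrete_gaussian(c=0.37, sigma=4.0) for _ in range(8)])
```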
Last updated:  2020-11-13
On Perfect Correctness in (Lockable) Obfuscation
Rishab Goyal, Venkata Koppula, Satyanarayana Vusirikala, Brent Waters
In a lockable obfuscation scheme a party takes as input a program $P$, a lock value $\alpha$, and a message $m$, and produces an obfuscated program $\tilde{P}$. The obfuscated program can be evaluated on an input $x$ to learn the message $m$ if $P(x) = \alpha$. The security of such schemes states that if $\alpha$ is randomly chosen (independent of $P$ and $m$), then one cannot distinguish an obfuscation of $P$ from a ``dummy'' obfuscation. Existing constructions of lockable obfuscation achieve provable security under the Learning with Errors assumption. One limitation of these constructions is that they achieve only statistical correctness and allow for a possible one-sided error where the obfuscated program could output the message $m$ on some input $x$ where $P(x) \neq \alpha$. In this work we motivate the problem of studying perfect correctness in lockable obfuscation for the case where the party performing the obfuscation might wish to inject a backdoor or hole in correctness. We begin by studying the existing constructions and identify two components that are susceptible to imperfect correctness. The first is in the LWE-based pseudo random generators (PRGs) that are non-injective, while the second is in the last level testing procedure of the core constructions. We address each in turn. First, we build upon previous work to design injective PRGs that are provably secure from the LWE assumption. Next, we design an alternative last level testing procedure that has additional structure to prevent correctness errors. We then provide a surgical proof of security (to avoid redundancy) that connects our construction to the construction by Goyal, Koppula, and Waters (GKW). Specifically, we show how for a random value $\alpha$ an obfuscation under our new construction is indistinguishable from an obfuscation under the existing GKW construction.
Last updated:  2019-09-09
LLL and stochastic sandpile models
Jintai Ding, Seungki Kim, Tsuyoshi Takagi, Yuntao Wang
We introduce stochastic sandpile models which imitate numerous aspects of the practical behavior of the LLL algorithm with compelling accuracy. In addition, we argue that the physics and mathematics of sandpile models provide satisfactory heuristic explanations for many of the mysteries of LLL, and pleasant implications for lattice-based cryptography as a whole. Based on these successes, we suggest a paradigm in which one regards blockwise reduction algorithms as 1-d stochastic self-organized criticality (SOC) models and studies them as such.
Last updated:  2020-03-03
Side-Channel Countermeasures' Dissection and the Limits of Closed Source Security Evaluations
Olivier Bronchain, François-Xavier Standaert
We take advantage of a recently published open source implementation of the AES protected with a mix of countermeasures against side-channel attacks to discuss both the challenges in protecting COTS devices against such attacks and the limitations of closed source security evaluations. The target implementation has been proposed by the French ANSSI (Agence Nationale de la Sécurité des Systèmes d'Information) to stimulate research on the design and evaluation of side-channel secure implementations. It combines additive and multiplicative secret sharings into an affine masking scheme that is additionally mixed with a shuffled execution. Its preliminary leakage assessment did not detect data dependencies with up to 100,000 measurements. We first exhibit the gap between such a preliminary leakage assessment and advanced attacks by showing how a countermeasures' dissection exploiting a mix of dimensionality reduction, multivariate information extraction and key enumeration can recover the full key with less than 2,000 measurements. We then discuss the relevance of open source evaluations for analyzing such implementations efficiently, by showing that certain steps of the attack are hard to automate without implementation knowledge (even with machine learning tools), while performing them manually is trivial. Our findings are not due to design flaws but stem from the general difficulty of preventing side-channel attacks in COTS devices with limited noise. We anticipate that high security on such devices requires significantly more shares.
Last updated:  2020-04-03
SPAE a mode of operation for AES on low-cost hardware
Philippe Elbaz-Vincent, Cyril Hugounenq, Sébastien Riou
We propose SPAE, a single-pass, patent-free, authenticated encryption with associated data (AEAD) mode for AES. The algorithm has been developed to address the needs of a growing trend in IoT systems: storing code and data on a low-cost flash memory external to the main SoC. Existing AEAD algorithms such as OCB, GCM, CCM, EAX and SIV provide the required functionality, however in practice each of them suffers from various drawbacks for this particular use case. Academic contributions such as ASCON and AEGIS-128 are suitable and efficient, however they require the development of new hardware accelerators and they use primitives which are not ‘approved’ by governmental institutions such as NIST, BSI and ANSSI. From a silicon manufacturer's point of view, an efficient AEAD which uses existing AES hardware is much more enticing: AES is already required by most industry standards involving symmetric encryption (GSMA, EMVco, FIDO, Bluetooth and ZigBee, to name a few). This paper exposes the properties of an ideal AEAD for external memory encryption, presents the SPAE algorithm and analyzes various security aspects. The performance of SPAE on actual hardware is better than that of OCB, GCM and CCM.
Last updated:  2019-09-06
Lucente Stabile Atkins (LSA) Cryptosystem (Unbreakable)
Francesco Lucente Stabile, Carey Patrick Atkins
The LSA cryptosystem is an asymmetric encryption algorithm which is based on both group theory and number theory, follows Kerckhoffs’s principle, and relies on a specific case of Gauss’s Generalization of Wilson’s Theorem. Unlike prime factorization based algorithms, the eavesdropping cryptanalyst has no indication that he has successfully decrypted the ciphertext. For this reason, we aim to show that LSA is not only more secure than existing asymmetric algorithms but also has the potential to be significantly computationally faster.
Last updated:  2019-09-05
Threshold Implementations in the Robust Probing Model
Siemen Dhooghe, Svetla Nikova, Vincent Rijmen
Threshold Implementations (TI) are secure algorithmic countermeasures against side-channel attacks in the form of differential power analysis. The strength of TI lies in its minimal algorithmic requirements. These requirements have been studied for more than 10 years and many efficient implementations for symmetric primitives have been proposed. Thus, over the years the practice of protecting implementations has matured; the theory behind threshold implementations, however, has remained the same. In this work, we revise this theory by looking at the properties of correctness, non-completeness, and uniformity as a composable security model. We prove that this model provides first-order and higher-order univariate security in the glitch-robust probing model, which lets us expand the theoretical framework of TI. We first provide a link between uniformity and the notion of non-interference, a known composable security notion built on the probing model. We then relax the notion of non-completeness, which helps the design of secure expansion and compression functions. Lastly, we provide generalisations of the threshold notions to allow for general secret sharing schemes and provide examples of how different sharing schemes affect the security and efficiency of the countermeasure.
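To make correctness and non-completeness concrete, here is the textbook first-order three-share threshold sharing of a single AND gate (a standard example, not a construction from this paper): each output share is computed without touching one index of the input shares, yet the shares recombine to x AND y. Uniformity of the output sharing needs additional care (e.g. fresh randomness), which the sketch omits.

```python
import random

def share3(x, rng=random):
    """Split a bit x into three Boolean shares with x = x1 ^ x2 ^ x3."""
    x1, x2 = rng.getrandbits(1), rng.getrandbits(1)
    return x1, x2, x1 ^ x2 ^ x

def ti_and(xs, ys):
    """First-order TI of z = x & y.  Output share i never uses input shares
    of index i (non-completeness); the XOR of the outputs equals x & y."""
    x1, x2, x3 = xs
    y1, y2, y3 = ys
    z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)   # independent of share index 1
    z2 = (x3 & y3) ^ (x1 & y3) ^ (x3 & y1)   # independent of share index 2
    z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)   # independent of share index 3
    return z1, z2, z3

for x in (0, 1):
    for y in (0, 1):
        zs = ti_and(share3(x), share3(y))
        assert zs[0] ^ zs[1] ^ zs[2] == x & y   # correctness
print("3-share TI AND recombines correctly for all inputs")
```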
Last updated:  2019-09-05
Forkcipher: a New Primitive for Authenticated Encryption of Very Short Messages
Elena Andreeva, Virginie Lallemand, Antoon Purnal, Reza Reyhanitabar, Arnab Roy, Damian Vizar
Highly efficient encryption and authentication of short messages is an essential requirement for enabling security in constrained scenarios such as the CAN FD in automotive systems (max. message size 64 bytes), massive IoT, critical communication domains of 5G, and Narrowband IoT, to mention a few. In addition, one of the NIST lightweight cryptography project requirements is that AEAD schemes shall be “optimized to be efficient for short messages (e.g., as short as 8 bytes)”. In this work we introduce and formalize a novel primitive in symmetric cryptography called a forkcipher. A forkcipher is a keyed function expanding a fixed-length input to a fixed-length output. We define its security as indistinguishability under chosen ciphertext attack. We give a generic construction and validate it via the new iterate-fork-iterate design paradigm. We then propose ForkSkinny as a concrete forkcipher instance with a public tweak and based on SKINNY: a tweakable lightweight block cipher constructed using the TWEAKEY framework. We conduct extensive cryptanalysis of ForkSkinny against classical and structure-specific attacks. We demonstrate the applicability of forkciphers by designing three new provably-secure, nonce-based AEAD modes which offer performance and security tradeoffs and are optimized for efficiency of very short messages. Considering a reference block size of 16 bytes, and ignoring possible hardware optimizations, our new AEAD schemes beat the best SKINNY-based AEAD modes. More generally, we show that forkciphers are suited for lightweight applications dealing with predominantly short messages, while at the same time allowing the handling of arbitrary message sizes. Furthermore, our hardware implementation results show that when we exploit the inherent parallelism of ForkSkinny we achieve the best performance when directly compared with the most efficient mode instantiated with the SKINNY block cipher.
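To illustrate only the iterate-fork-iterate shape (this is emphatically not ForkSkinny; `toy_round` is a made-up Feistel round used so the sketch runs), a forkcipher applies some rounds to the input, then forks the state and finishes two independently keyed round sequences, producing two output blocks from one input block.

```python
import hashlib

def toy_round(state: bytes, round_key: bytes) -> bytes:
    """Made-up 16-byte Feistel round (invertible), standing in for a real cipher round."""
    L, R = state[:8], state[8:]
    f = hashlib.sha256(round_key + R).digest()[:8]
    return R + bytes(a ^ b for a, b in zip(L, f))

def toy_forkcipher(key: bytes, tweak: bytes, block: bytes, r_init=5, r_branch=5):
    """Iterate-fork-iterate: shared initial rounds, then two branches with
    independent round keys, yielding two output blocks C0, C1."""
    def rk(label: bytes, i: int) -> bytes:
        return hashlib.sha256(key + tweak + label + bytes([i])).digest()[:16]

    s = block
    for i in range(r_init):                  # common "iterate" phase
        s = toy_round(s, rk(b'init', i))
    c0, c1 = s, s                            # fork the internal state
    for i in range(r_branch):                # two independent "iterate" phases
        c0 = toy_round(c0, rk(b'branch0', i))
        c1 = toy_round(c1, rk(b'branch1', i))
    return c0, c1

C0, C1 = toy_forkcipher(b'k' * 16, b'tweak', b'0123456789abcdef')
print(C0.hex(), C1.hex())
```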
Last updated:  2019-09-05
Twisted Hessian Isogenies
Thinh Dang, Dustin Moody
Elliptic curves are typically defined by Weierstrass equations. Given a kernel, the well-known Vélu's formula shows how to explicitly write down an isogeny between Weierstrass curves. However, it is not clear how to do the same on other forms of elliptic curves without isomorphisms mapping to and from the Weierstrass form. Previous papers have shown some isogeny formulas for (twisted) Edwards, Huff, and Montgomery forms of elliptic curves. Continuing this line of work, this paper derives an explicit formula for isogenies between elliptic curves in (twisted) Hessian form.
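For reference, the (twisted) Hessian form discussed here is usually written projectively as follows (standard form from the twisted Hessian literature, with $a, d$ chosen so that the curve is non-singular; background only, not a formula reproduced from the paper):

```latex
% Twisted Hessian curve over a field K; a = 1 recovers the classical
% Hessian curve X^3 + Y^3 + Z^3 = d XYZ.
\[
  H_{a,d}:\quad a X^3 + Y^3 + Z^3 = d\,XYZ .
\]
```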
Last updated:  2019-09-05
Boomerang Uniformity of Popular S-box Constructions
Shizhu Tian, Christina Boura, Léo Perrin
In order to study the resistance of a block cipher against boomerang attacks, a tool called the Boomerang Connectivity Table (BCT) for S-boxes was recently introduced. Very little is known today about the properties of this table, especially for bijective S-boxes defined for $n$ variables with $n\equiv 0 \mod{4}$. In this work we study the boomerang uniformity of some popular constructions used for building large S-boxes, e.g. for 8 variables, from smaller ones. We show that the BCTs of all the studied constructions have abnormally high values in some positions. This observation allows us in some cases to link the boomerang properties of an S-box with other well-known cryptanalytic techniques on such constructions, while in other cases it leads to the discovery of new ones. A surprising outcome notably concerns the Feistel and MISTY networks. While these two structures are very similar, their boomerang uniformity can be very different. Next, we investigate the boomerang uniformity under EA-equivalence for the Gold and the inverse functions (as used respectively in MPC-friendly ciphers and the AES), and we prove that the boomerang uniformity is EA-invariant in these cases. Finally, we present an algorithm for inverting a given BCT and provide experimental results on the size of the BCT-equivalence classes for some $4$ and $8$-bit S-boxes.
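As a reminder of the object being studied (the standard BCT definition of Cid et al., EUROCRYPT 2018, not a result from this paper): for an invertible n-bit S-box S, the entry at (Δ, ∇) counts the inputs x with S⁻¹(S(x) ⊕ ∇) ⊕ S⁻¹(S(x ⊕ Δ) ⊕ ∇) = Δ, and the boomerang uniformity is the maximum over nonzero Δ, ∇. A direct quadratic-time computation, using a well-known 4-bit S-box purely as a stand-in:

```python
def bct(sbox):
    """Boomerang Connectivity Table of an invertible S-box given as a list."""
    n = len(sbox)
    inv = [0] * n
    for x, y in enumerate(sbox):
        inv[y] = x
    table = [[0] * n for _ in range(n)]
    for delta in range(n):
        for nabla in range(n):
            table[delta][nabla] = sum(
                1 for x in range(n)
                if inv[sbox[x] ^ nabla] ^ inv[sbox[x ^ delta] ^ nabla] == delta
            )
    return table

# 4-bit example: the PRESENT S-box, used here only as a small stand-in.
S = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD, 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
T = bct(S)
print(T[0][0])                                                    # row/column 0 always equal 2^n
print(max(T[d][v] for d in range(1, 16) for v in range(1, 16)))   # boomerang uniformity
```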
Last updated:  2020-08-24
Middle-Product Learning with Rounding Problem and its Applications
Shi Bai, Katharina Boudgoust, Dipayan Das, Adeline Roux-Langlois, Weiqiang Wen, Zhenfei Zhang
At CRYPTO 2017, Rosca et al. introduced a new variant of the Learning With Errors (LWE) problem, called the Middle-Product LWE (MP-LWE). The hardness of this new assumption is based on the hardness of the Polynomial LWE (P-LWE) problem parameterized by a set of polynomials, making it more secure against the possible weakness of a single defining polynomial. As a cryptographic application, they also provide an encryption scheme based on the MP-LWE problem. In this paper, we propose a deterministic variant of their encryption scheme, which does not need Gaussian sampling and is thus simpler than the original one. Still, it has the same quasi-optimal asymptotic key and ciphertext sizes. The main ingredient for this purpose is the Learning With Rounding (LWR) problem, which has already been used to derandomize LWE-type encryption. The hardness of our scheme is based on a new assumption called Middle-Product Computational Learning With Rounding, an adaptation of the computational LWR problem over rings introduced by Chen et al. at ASIACRYPT 2018. We prove that this new assumption is as hard as the decisional version of MP-LWE and thus benefits from worst-case to average-case hardness guarantees.
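The middle product underlying MP-LWE simply keeps the middle block of coefficients of a polynomial product. A small sketch is below; the exact indexing conventions may differ slightly from the paper's, so treat it as an illustration only.

```python
import numpy as np

def poly_mul(a, b, q):
    """Full product of two coefficient vectors over Z_q (no modulus polynomial)."""
    return np.convolve(a, b) % q

def middle_product(a, b, d, q):
    """Keep the d 'middle' coefficients of a*b, i.e. degrees k .. k+d-1
    with k = (deg window offset) = (len(a) + len(b) - 1 - d) // 2."""
    full = poly_mul(a, b, q)
    k = (len(full) - d) // 2
    return full[k:k + d]

q = 97
rng = np.random.default_rng(3)
a = rng.integers(0, q, size=8)      # degree < 8
s = rng.integers(0, q, size=15)     # degree < 15, so a*s has 22 coefficients
print(middle_product(a, s, 8, q))   # the 8 middle coefficients of the product
```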
Last updated:  2019-09-05
Security of Symmetric Primitives against Key-Correlated Attacks
Aisling Connolly, Pooya Farshim, Georg Fuchsbauer
We study the security of symmetric primitives against key-correlated attacks (KCA), whereby an adversary can arbitrarily correlate keys, messages, and ciphertexts. Security against KCA is required whenever a primitive should securely encrypt key-dependent data, even when it is used under related keys. KCA is a strengthening of the previously considered notions of related-key attack (RKA) and key-dependent message (KDM) security. This strengthening is strict, as we show that 2-round Even–Mansour fails to be KCA secure even though it is both RKA and KDM secure. We provide feasibility results in the ideal-cipher model for KCAs and show that 3-round Even–Mansour is KCA secure under key offsets in the random-permutation model. We also give a natural transformation that converts any authenticated encryption scheme to a KCA-secure one in the random-oracle model. Conceptually, our results allow for a unified treatment of RKA and KDM security in idealized models of computation.
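For readers who have not seen it, the r-round (iterated) Even–Mansour cipher alternates key additions with fixed public permutations; the separation result above concerns its 2- and 3-round variants. A toy sketch with made-up byte permutations standing in for the ideal public permutations (analyses model these as ideal, which the stand-ins obviously are not):

```python
import random

BLOCK = 16

def make_public_perm(seed):
    """A fixed, public permutation of 16-byte blocks: seeded byte substitution
    plus a byte rotation (toy stand-in for an ideal public permutation)."""
    rng = random.Random(seed)
    sbox = list(range(256))
    rng.shuffle(sbox)
    def perm(x: bytes) -> bytes:
        y = bytes(sbox[b] for b in x)
        return y[1:] + y[:1]
    return perm

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def even_mansour(keys, perms, m: bytes) -> bytes:
    """r-round Even-Mansour: P_r(... P_1(m ^ k_0) ^ k_1 ...) ^ k_r."""
    x = xor(m, keys[0])
    for p, k in zip(perms, keys[1:]):
        x = xor(p(x), k)
    return x

perms = [make_public_perm(i) for i in range(3)]      # 3 rounds
keys = [bytes([i + 1]) * BLOCK for i in range(4)]    # k_0 .. k_3 (toy keys)
print(even_mansour(keys, perms, b'attack at dawn!!').hex())
```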