All papers in 2019 (Page 3 of 1498 results)
A constant-rate non-malleable code in the split-state model.
Non-malleable codes, introduced by Dziembowski, Pietrzak and Wichs in ICS 2010, have emerged in the last few years as a fundamental object at the intersection of cryptography and coding theory. Non-malleable codes provide a useful message integrity guarantee in situations where traditional error-correction (and even error-detection) is impossible; for example, when the attacker can completely overwrite the encoded message. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely "unrelated value".
The family which has received the most attention is the family of tampering functions in the so-called (2-part) split-state model: here the message x is encoded into two shares L and R, and the attacker is allowed to arbitrarily tamper with each of L and R individually.
In this work, we give a constant-rate non-malleable code for the tampering family containing so-called 2-lookahead functions and forgetful functions. Combined with the work of Dodis, Kazana and the authors from STOC 2015, this gives the first constant-rate non-malleable code in the split-state model with negligible error.
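As a rough illustration of the split-state model (and not of the authors' construction), the toy Python sketch below shows the shape of the tampering experiment: a message is encoded into two shares L and R, the adversary tampers with each share independently, and the decoded result should be either the original message or an unrelated value. The XOR-based encoding used here is deliberately simple and is itself not non-malleable.

```python
import secrets

N = 16  # share length in bytes (illustrative)

def encode(msg):
    # Toy split-state encoding: L is a random pad, R = L XOR msg.
    # NOTE: this XOR sharing is NOT non-malleable; it only fixes the syntax.
    L = secrets.token_bytes(N)
    R = bytes(a ^ b for a, b in zip(L, msg.ljust(N, b"\x00")))
    return L, R

def decode(L, R):
    return bytes(a ^ b for a, b in zip(L, R))

def tamper_experiment(msg, f, g):
    # Split-state tampering: f acts on L alone, g acts on R alone.
    L, R = encode(msg)
    return decode(f(L), g(R))

# Example tampering functions: flip the first bit of L, leave R untouched.
flip_first_bit = lambda share: bytes([share[0] ^ 1]) + share[1:]
identity = lambda share: share

print(tamper_experiment(b"hello", flip_first_bit, identity))
```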
An Efficient Passive-to-Active Compiler for Honest-Majority MPC over Rings
Multiparty computation (MPC) over rings such as $\mathbb{Z}_{2^{32}}$ or $\mathbb{Z}_{2^{64}}$ has received a great deal of attention recently due to its ease of implementation and attractive performance. Several actively secure protocols over these rings have been implemented, for both the dishonest majority setting and the setting of three parties with one corruption. However, in the honest majority setting, no \emph{concretely} efficient protocol for arithmetic computation over rings has yet been proposed that allows for an \emph{arbitrary} number of parties.
We present a novel compiler for MPC over the ring $\mathbb{Z}_{2^k}$ in the honest majority setting that turns a semi-honest protocol into an actively secure protocol with very little overhead. The communication cost per multiplication is only twice that of the semi-honest protocol, making the resultant actively secure protocol almost as fast.
To demonstrate the efficiency of our compiler, we implement both an optimized 3-party variant (based on replicated secret-sharing), as well as a protocol for $n$ parties (based on a recent protocol from TCC 2019). For the 3-party variant, we obtain a protocol which outperforms the previous state of the art that we can experimentally compare against. Our $n$-party variant is the first implementation for this particular setting, and we show that it performs comparably to the current state of the art over fields.
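For readers unfamiliar with the setting, the sketch below shows 3-party replicated secret sharing over the ring Z_{2^64}, the kind of semi-honest arithmetic that the 3-party variant builds on; the compiler's consistency checks and the actively secure protocol itself are not reproduced here. Parameters and names are illustrative only.

```python
import random

K = 64
MOD = 1 << K  # the ring Z_{2^k}

def share(x):
    # Additive sharing x = x0 + x1 + x2 (mod 2^k); party i holds (x_i, x_{i+1}).
    x0, x1 = random.randrange(MOD), random.randrange(MOD)
    x2 = (x - x0 - x1) % MOD
    shares = [x0, x1, x2]
    return [(shares[i], shares[(i + 1) % 3]) for i in range(3)]

def reconstruct(party_shares):
    # Any two parties together hold all three additive shares.
    (a0, a1), (_, a2) = party_shares[0], party_shares[1]
    return (a0 + a1 + a2) % MOD

def add(xs, ys):
    # Addition is purely local: each party adds its pairs component-wise.
    return [((x0 + y0) % MOD, (x1 + y1) % MOD) for (x0, x1), (y0, y1) in zip(xs, ys)]

xs, ys = share(123456789), share(987654321)
assert reconstruct(add(xs, ys)) == (123456789 + 987654321) % MOD
print("replicated sharing over Z_2^64 ok")
```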
Exploring Energy Efficient Quantum-resistant Signal Processing Using Array Processors
Quantum computers threaten to compromise public-key cryptography schemes such as DSA and ECDSA in polynomial time, which poses an imminent threat to secure signal processing. The cryptography community has responded with the development and standardization of post-quantum cryptography (PQC) algorithms, a class of public-key algorithms based on hard problems that no known quantum algorithms can solve in polynomial time. Ring learning with errors (RLWE) lattice-based cryptographic (LBC) protocols are one of the most promising families of PQC schemes in terms of efficiency and versatility. Two common methods to compute polynomial multiplication, the most compute-intensive routine in RLWE schemes, are convolution and the Number Theoretic Transform (NTT).
In this work, we explore the energy efficiency of polynomial multipliers using systolic architectures for the first time. As an early exploration, we design two high-throughput systolic array polynomial multipliers, one NTT-based and one convolution-based, and compare them to our low-cost sequential (non-systolic) NTT-based multiplier. Our sequential NTT-based multiplier achieves more than 3x speedup over the state-of-the-art FPGA implementation of the polynomial multiplier in the NewHope-Simple key exchange mechanism on a low-cost Artix-7 FPGA.
When synthesized on a Zynq UltraScale+ FPGA, the NTT-based systolic and convolution-based systolic designs achieve on average 1.7x and 7.5x speedup over our sequential NTT-based multiplier respectively, which can lead to generating over 2x more signatures per second by CRYSTALS-Dilithium, a PQC digital signature scheme. These explorations will help designers select the right PQC implementations for making future signal processing applications quantum-resistant.
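To make the workload concrete, the snippet below multiplies two polynomials in Z_q[x]/(x^n + 1), the negacyclic ring used by NewHope-like RLWE schemes, by schoolbook convolution; an NTT-based multiplier computes exactly this product in O(n log n) ring operations instead of O(n^2). The parameters below are illustrative only.

```python
# Schoolbook negacyclic convolution in Z_q[x]/(x^n + 1).
# This is the operation that both the NTT-based and the convolution-based
# multipliers accelerate; n is shortened here for readability.
def poly_mul_negacyclic(a, b, n, q):
    res = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                res[k] = (res[k] + a[i] * b[j]) % q
            else:
                # x^n = -1, so wrapped-around terms are subtracted.
                res[k - n] = (res[k - n] - a[i] * b[j]) % q
    return res

n, q = 8, 12289  # q = 12289 is the NewHope modulus; n = 8 is a toy degree
a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [8, 7, 6, 5, 4, 3, 2, 1]
print(poly_mul_negacyclic(a, b, n, q))
```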
FastSwap: Concretely Efficient Contingent Payments for Complex Predicates
FastSwap is a simple and concretely efficient contingent payment scheme for complex predicates, inspired by FairSwap.
FastSwap only relies on symmetric primitives (in particular symmetric encryption and cryptographic hash functions) and avoids `heavy-weight' primitives such as general ZKP systems.
FastSwap is particularly well-suited for applications where the witness or predicate is large (on the order of MBs / GBs) or expensive to calculate.
Additionally FastSwap allows predicates to be implemented using virtually any computational model (including branching execution), which e.g. enables practitioners to express the predicate in smart contract languages already familiar to them, without an expensive transformation to satisfiability of arithmetic circuits.
The cost of this efficiency during honest execution is a logarithmic number of rounds during
a dispute resolution in the presence of a corrupted party (compared to constant round complexity for existing schemes). Let the witness be of size and the predicate of size , where computing takes steps. In the honest case the off-chain communication complexity is for a small constant , the on-chain communication complexity is for a small constant . In the malicious case the on-chain communication complexity is with small constants. Concretely with suitable optimizations the number of rounds (on-chain transactions) for a computation of steps can be brought to in the honest case with an estimated cost of USD on the Ethereum blockchain and to rounds with an estimated cost of USD in case of a dispute.
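The logarithmic round count in the dispute phase is characteristic of bisection-style dispute games, where the two parties binary-search over a committed execution trace until they isolate a single step that the chain can recheck. The sketch below is a generic illustration of that idea under assumed names, not FastSwap's concrete protocol.

```python
# Generic bisection dispute game (illustrative, not FastSwap's exact protocol).
# The prover claims a trace of intermediate states; the verifier knows the true
# step function. They binary-search for the first disputed step, and only that
# single step has to be re-executed "on chain".
def step(state):                 # one step of the agreed computation
    return state * 3 + 1

def run_trace(start, steps, cheat_at=None):
    trace = [start]
    for i in range(steps):
        nxt = step(trace[-1])
        if cheat_at == i:
            nxt += 7             # prover corrupts one step
        trace.append(nxt)
    return trace

def dispute(honest_trace, claimed_trace):
    lo, hi, rounds = 0, len(claimed_trace) - 1, 0
    # Invariant: the parties agree at index lo and disagree at index hi.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        rounds += 1
        if claimed_trace[mid] == honest_trace[mid]:
            lo = mid
        else:
            hi = mid
    # One re-execution settles the dispute.
    cheated = step(claimed_trace[lo]) != claimed_trace[hi]
    return cheated, rounds

steps = 1 << 10
honest = run_trace(1, steps)
claimed = run_trace(1, steps, cheat_at=700)
print(dispute(honest, claimed))   # (True, 10) for a 2^10-step computation
```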
A trip between creation and destruction of non-commutative public key exchange protocols
Conventional asymmetric key exchange protocols rely on computing elements in commutative groups, where the employed trapdoor permutation is commutative, allowing Alice and Bob to compute the same group element, since changing the order of the variables or elements does not alter the output. The research in this paper focuses on the analysis of key exchange protocols found in non-commutative cryptography, sometimes called group-based cryptography. Variations of these schemes made by the author are also included. Concretely, four schemes are presented using matrices over finite fields and permutation groups, together with all the theory needed to break these schemes, along with pseudo-code and implementations in Mathematica.
Hashing to elliptic curves of $j$-invariant $1728$
This article generalizes the simplified Shallue--van de Woestijne--Ulas (SWU) method of a deterministic finite field mapping to the case of any elliptic $\mathbb{F}_q$-curve $E$ of $j$-invariant $1728$. In comparison with the (classical) SWU method, the simplified SWU method allows one to avoid one quadratic residuosity test in the field $\mathbb{F}_q$, which is a quite painful operation in cryptography with regard to timing attacks. More precisely, in order to derive the mapping we obtain a rational $\mathbb{F}_q$-curve (and its explicit, quite simple, proper $\mathbb{F}_q$-parametrization) on the Kummer surface associated with the direct product $E \times E'$, where $E'$ is the quadratic $\mathbb{F}_q$-twist of $E$. Our approach to finding this curve is based on the fact that every such curve $E$ has a vertical isogeny of degree $2$.
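As a quick sanity check on the curve family in question, the snippet below evaluates the standard j-invariant formula for a short Weierstrass curve y^2 = x^3 + ax + b over a toy prime field and confirms that every curve with b = 0 (and a != 0) has j-invariant 1728; the hashing construction itself is not reproduced here.

```python
# j-invariant of y^2 = x^3 + ax + b over F_p: j = 1728 * 4a^3 / (4a^3 + 27b^2).
p = 10007  # toy prime, for illustration only

def j_invariant(a, b, p):
    num = (1728 * 4 * pow(a, 3, p)) % p
    den = (4 * pow(a, 3, p) + 27 * pow(b, 2, p)) % p
    return (num * pow(den, -1, p)) % p

# Any curve y^2 = x^3 + ax (b = 0) has j-invariant 1728.
for a in (1, 2, 3, 5):
    assert j_invariant(a, 0, p) == 1728
print("all curves y^2 = x^3 + ax have j = 1728")
```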
LizarMong: Excellent Key Encapsulation Mechanism based on RLWE and RLWR
The RLWE family of algorithms submitted to the NIST post-quantum cryptography standardization process each have merits in terms of security, correctness, performance, and bandwidth. However, no single algorithm excels in all respects. Besides, various recent studies have been published that affect security and correctness, such as side-channel attacks and error dependencies. To date, though, no algorithm has fully considered all these aspects. We propose a novel Key Encapsulation Mechanism scheme called LizarMong, which is based on RLizard. LizarMong combines the merits of each algorithm and of state-of-the-art studies. As a result, it achieves up to 85% smaller bandwidth and 3.3 times faster performance compared to RLizard. Compared to the NIST candidate algorithms with similar security, the bandwidth is about 5-42% smaller, and the performance is about 1.2-4.1 times faster. Also, our scheme resists the known side-channel attacks.
Mitigating Leakage in Secure Cloud-Hosted Data Structures: Volume-Hiding for Multi-Maps via Hashing
Volume leakage has recently been identified as a major threat to the security of cryptographic cloud-based data structures by Kellaris et al. [CCS’16] (see also the attacks in Grubbs et al. [CCS’18] and Lacharité et al. [S&P’18]). In this work, we focus on volume-hiding implementations of encrypted multi-maps as first considered by Kamara and Moataz [Eurocrypt’19]. Encrypted multi-maps consist of outsourcing the storage of a multi-map to an untrusted server, such as a cloud storage system, while maintaining the ability to perform private queries. Volume-hiding encrypted multi-maps ensure that the number of responses (volume) for any query remains hidden from the adversarial server. As a result, volume-hiding schemes can prevent leakage attacks that leverage the adversary’s knowledge of the number of query responses to compromise privacy.
We present both conceptual and algorithmic contributions towards volume-hiding encrypted multi-maps. We introduce the first formal definition of volume-hiding leakage functions. In terms of design, we present the first volume-hiding encrypted multi-map dprfMM whose storage and query complexity are both asymptotically optimal. Furthermore, we experimentally show that our construction is practically efficient. Our server storage is smaller than the best previous construction while we improve query complexity by a factor of 10-16x.
In addition, we introduce the notion of differentially private volume-hiding leakage functions which strikes a better, tunable balance between privacy and efficiency. To accompany our new notion, we present a differentially private volume-hiding encrypted multi-map dpMM whose query complexity is the volume of the queried key plus an additional logarithmic factor. This is a significant improvement compared to all previous volume-hiding schemes, whose query overhead was the maximum volume of any key. In natural settings, our construction improves the average query overhead by a factor of 150-240x over the previous best volume-hiding construction, even when considering a small privacy budget.
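To see why volume-hiding is costly in the naive approach, the toy sketch below pads every key of a multi-map to the maximum volume before "encrypting" it, so every query returns the same number of entries; the paper's constructions avoid exactly this worst-case overhead. The names and the placeholder encryption are illustrative only.

```python
import secrets

def pad_to_max_volume(multimap, pad=b"<dummy>"):
    # Naive volume hiding: every key stores exactly max-volume values,
    # so the server sees the same response size for every query.
    max_vol = max(len(vals) for vals in multimap.values())
    return {k: vals + [pad] * (max_vol - len(vals)) for k, vals in multimap.items()}

def toy_encrypt(value):
    # Placeholder for real symmetric encryption; a random mask keeps sizes uniform.
    return bytes(a ^ b for a, b in zip(value.ljust(16, b"\x00"), secrets.token_bytes(16)))

mm = {"alice": [b"r1", b"r2", b"r3", b"r4", b"r5"], "bob": [b"r6"]}
padded = pad_to_max_volume(mm)
encrypted = {k: [toy_encrypt(v) for v in vals] for k, vals in padded.items()}
# Every key now leaks the same volume (5), but a query for "bob" pays a 5x overhead.
print({k: len(v) for k, v in encrypted.items()})
```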
SÉTA: Supersingular Encryption from Torsion Attacks
We present Séta, a new family of public-key encryption schemes with post-quantum security based on isogenies of supersingular elliptic curves. It is constructed from a new family of trapdoor one-way functions, where the inversion algorithm uses Petit's so-called torsion attacks on SIDH to compute an isogeny between supersingular elliptic curves given an endomorphism of the starting curve and images of torsion points. We prove the OW-CPA security of Séta and present an IND-CCA variant using the post-quantum OAEP transformation. Several variants for key generation are explored together with their impact on the selection of parameters, such as the base prime of the scheme. We furthermore formalise an "uber" isogeny assumption framework which aims to generalize computational isogeny problems encountered in schemes including SIDH, CSIDH, OSIDH and ours. Finally, we carefully select parameters to achieve a balance between security and run-times and present experimental results from our implementation.
Trapdoor DDH groups from pairings and isogenies
Trapdoor DDH groups are an appealing cryptographic primitive where DDH instances are hard to solve unless provided with additional information (i.e., a trapdoor). In this paper, we introduce a new trapdoor DDH group construction
using pairings and isogenies of supersingular elliptic curves. The construction solves all shortcomings of previous constructions as identified by Seurin (RSA 2013). We also present partial attacks on a previous construction due to Dent--Galbraith, and we provide a formal security definition of the related notion of ``trapdoor pairings''.
On constant-time QC-MDPC decoding with negligible failure rate
The QC-MDPC code-based KEM Bit Flipping Key Encapsulation (BIKE) is one of the Round-2 candidates of the NIST PQC standardization project. It has a variant that is proved to be IND-CCA secure. The proof models the KEM with some black-box ("ideal") primitives. Specifically, the decapsulation invokes an ideal primitive called "decoder", required to deliver its output with a negligible Decoding Failure Rate (DFR). The concrete instantiation of BIKE substitutes this ideal primitive with a new decoding algorithm called "Backflip", which is shown to have the required negligible DFR. However, it runs in a variable number of steps, and this number depends on the input and on the key. This paper proposes a decoder that has a negligible DFR and also runs in a fixed (and small) number of steps. We propose that the instantiation of BIKE uses this decoder with our recommended parameters. We study the decoder's DFR as a function of the scheme's parameters to obtain a favorable balance between the communication bandwidth and the number of steps that the decoder runs. In addition, we build a constant-time software implementation of the proposed instantiation, and show that its performance characteristics are quite close to those of the IND-CPA variant. Finally, we discuss a subtle gap that needs to be resolved for every IND-CCA secure KEM (BIKE included) where the decapsulation has nonzero failure probability: the difference between the average DFR and the "worst-case" failure probability per key and ciphertext.
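For context, the sketch below runs a plain bit-flipping decoder against a random sparse parity-check matrix for a fixed number of iterations, as a constant-time-friendly decoder must; whether the correct error is recovered within that budget is precisely the decoding-failure event whose rate (DFR) is analysed above. Parameters and the flipping rule are toy choices, nothing like BIKE's.

```python
import random
random.seed(1)

n, r, col_w, t = 200, 100, 5, 5      # toy code parameters, far from BIKE's
ITERATIONS = 10                      # fixed iteration budget, no early exit

# Random sparse parity-check matrix H (r x n) with column weight col_w.
H = [[0] * n for _ in range(r)]
for j in range(n):
    for i in random.sample(range(r), col_w):
        H[i][j] = 1

def mul_H(e):
    # Syndrome H*e over GF(2).
    return [sum(H[i][j] & e[j] for j in range(n)) & 1 for i in range(r)]

def bit_flip(s):
    # Find a low-weight e_hat with H*e_hat = s, using a fixed iteration budget.
    e_hat = [0] * n
    for _ in range(ITERATIONS):
        resid = [x ^ y for x, y in zip(s, mul_H(e_hat))]
        upc = [sum(H[i][j] & resid[i] for i in range(r)) for j in range(n)]
        thr = max(upc)
        if thr == 0:
            continue                 # already decoded; budget still runs to the end
        for j in range(n):
            if upc[j] == thr:
                e_hat[j] ^= 1
    return e_hat

true_error = [0] * n
for j in random.sample(range(n), t):
    true_error[j] = 1
decoded = bit_flip(mul_H(true_error))
# A wrong guess here is a decoding failure, exactly the DFR event discussed above.
print("decoding succeeded:", decoded == true_error)
```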
Threshold Schemes from Isogeny Assumptions
We initiate the study of threshold schemes based on the Hard Homogeneous Spaces (HHS) framework of Couveignes. Quantum-resistant HHS based on supersingular isogeny graphs have recently become usable thanks to the record class group precomputation performed for the signature scheme CSI-FiSh.
Using the HHS equivalent of the technique of Shamir's secret sharing in the exponent, we adapt isogeny-based schemes to the threshold setting. In particular we present threshold versions of the CSIDH public key encryption and the CSI-FiSh signature schemes.
The main highlight is a threshold version of CSI-FiSh which runs almost as fast as the original scheme, for message sizes as low as 1880 B, public key sizes as low as 128 B, and thresholds up to 56; other speed-size-threshold compromises are possible.
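The classical technique being transported here is ordinary Shamir secret sharing with Lagrange reconstruction; the sketch below shows that base technique over a prime field (in the HHS setting, the Lagrange-weighted shares are, roughly speaking, applied through the group action rather than opened). Toy parameters only.

```python
import random

P = 2 ** 127 - 1  # a Mersenne prime, used as a toy field modulus

def share(secret, k, n):
    # Degree-(k-1) polynomial with constant term = secret; share i is f(i).
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

k, n = 3, 5
s = 1234567890
shares = share(s, k, n)
assert reconstruct(random.sample(shares, k)) == s
print("any", k, "of", n, "shares reconstruct the secret")
```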
MatRiCT: Efficient, Scalable and Post-Quantum Blockchain Confidential Transactions Protocol
We introduce MatRiCT, an efficient RingCT protocol for blockchain confidential transactions, whose security is based on ``post-quantum'' (module) lattice assumptions. The proof length of the protocol is around two orders of magnitude shorter than the existing post-quantum proposal, and scales efficiently to large anonymity sets, unlike the existing proposal. Further, we provide the first full implementation of a post-quantum RingCT, demonstrating the practicality of our scheme. In particular, a typical transaction can be generated in a fraction of a second and verified in about 23 ms on a standard PC. Moreover, we show how our scheme can be extended to provide auditability, where a user can select a particular authority from a set of authorities to reveal her identity. The user also has the ability to select no auditing and all these auditing options may co-exist in the same environment.
The key ingredients, introduced in this work, of MatRiCT are 1) the shortest to date scalable ring signature from standard lattice assumptions with no Gaussian sampling required, 2) a novel balance zero-knowledge proof and 3) a novel extractable commitment scheme from (module) lattices. We believe these ingredients to be of independent interest for other privacy-preserving applications such as secure e-voting. Despite allowing 64-bit precision for transaction amounts, our new balance proof, and thus our protocol, does not require a range proof on a wide range (such as 32- or 64-bit ranges), which has been a major obstacle against efficient lattice-based solutions.
Further, we provide new formal definitions for RingCT-like protocols, where the real-world blockchain setting is captured more closely. The definitions are applicable in a generic setting, and thus are believed to contribute to the development of future confidential transaction protocols in general (not only in the lattice setting).
Comparison of proof-of-work based blockchains against federated consensus and proof-of-validation based blockchains
This paper reports the results of a survey of the architecture and functionalities involved in blockchains. Moreover, it reports the results of a comparison between the proof-of-work-based blockchains Bitcoin and Ethereum, the federated-consensus-based blockchain Ripple, and the proof-of-validation-based blockchain Tendermint, along parameters such as peer-to-peer network setup and maintenance, the cryptocurrency involved, details of transaction execution and validation, block creation, block validation, consensus protocol, and application development.
Full-Round Differential Attack on DoT Block Cipher
The lightweight encryption design DoT was published by Patil et al in 2019. It
is based on SPN (substitution permutation network) structure. Its block and key
size are 64-bit and 128-bit respectively. In this paper, we analyse the security of
DoT against differential attack and present a series of differential distinguishers
for full-round DOT. Our analysis proves that DoT we can be distinguished from
a random permutation with probability equal to 2^62. Diffusion layer of DoT is a
combination of byte shuffling, 8-P permutation, 32-bit word shuffling and circular
shift operations. We analyse the security of DoT with and without 8-P permutation
in its diffusion layer. Our results indicate that DoT provides better resistance to
differential attack without using the 8-P permutation.
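Differential distinguishers of this kind are built by chaining high-probability input/output XOR differences through the cipher's S-box; the snippet below computes the difference distribution table (DDT) of a 4-bit S-box (PRESENT's, used purely as a familiar example rather than DoT's) and reports the best non-trivial differential probability.

```python
# Difference distribution table of a 4-bit S-box.
# The S-box below is PRESENT's, shown only as an example; DoT uses its own.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    size = len(sbox)
    table = [[0] * size for _ in range(size)]
    for x in range(size):
        for dx in range(size):
            dy = sbox[x] ^ sbox[x ^ dx]
            table[dx][dy] += 1
    return table

T = ddt(SBOX)
best = max(T[dx][dy] for dx in range(1, 16) for dy in range(16))
# Each entry counts input pairs out of 16, so the probability is count / 16.
print("best non-trivial differential probability: %d/16" % best)
```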
Shorter QA-NIZK and SPS with Tighter Security
Quasi-adaptive non-interactive zero-knowledge proof (QA-NIZK) systems and structure-preserving signature (SPS) schemes are two powerful tools for constructing practical pairing-based cryptographic schemes. Their efficiency directly affects the efficiency of the derived advanced protocols.
We construct more efficient QA-NIZK and SPS schemes with tight security reductions. Our QA-NIZK scheme is the first one that achieves both tight simulation soundness and constant proof size (in terms of number of group elements) at the same time, while the recent scheme from Abe et al. (ASIACRYPT 2018) achieved tight security with proof size linearly depending on the size of the language and the witness. Assuming the hardness of the Symmetric eXternal Diffie-Hellman (SXDH) problem, our scheme contains only 14 elements in the proof and remains independent of the size of the language and the witness. Moreover, our scheme has tighter simulation soundness than the previous schemes.
Technically, we refine and extend a partitioning technique from a recent SPS scheme (Gay et al., EUROCRYPT 2018). Furthermore, we improve the efficiency of the tightly secure SPS schemes by using a relaxation of NIZK proof system for OR languages, called designated-prover NIZK system. Under the SXDH assumption, our SPS scheme contains 11 group elements in the signature, which is shortest among the tight schemes and is the same as an early non-tight scheme (Abe et al., ASIACRYPT 2012). Compared to the shortest known non-tight scheme (Jutla and Roy, PKC 2017), our scheme achieves tight security at the cost of 5 additional elements.
All the schemes in this paper are proven secure based on the Matrix Diffie-Hellman assumptions (Escala et al., CRYPTO 2013). These are a class of assumptions which include the well-known SXDH and DLIN assumptions and provide clean algebraic insights to our constructions. To the best of our knowledge, our schemes achieve the best efficiency among schemes with the same functionality and security properties. This naturally leads to improvement of the efficiency of cryptosystems based on simulation-sound QA-NIZK and SPS.
Breaking the Hidden Irreducible Polynomials Scheme
In 2019 Gómez described a new public key
cryptography scheme based on ideas from multivariate public key cryptography
using hidden irreducible polynomials. We show that the scheme's design has a
flaw which lets an attacker recover the private key directly from the public
key.
Privacy-Preserving Decision Tree Training and Prediction against Malicious Server
Privacy-preserving machine learning enables secure outsourcing of machine learning tasks to an untrusted service provider (server) while preserving the privacy of the user's data (client). Attaining good concrete efficiency for complicated machine learning tasks, such as training decision trees, is one of the challenges in this area. Prior works on privacy-preserving decision trees required the parties to have comparable computational resources, and instructed the client to perform computation proportional to the complexity of the entire task.
In this work we present new protocols for privacy-preserving decision trees, for both training and prediction, achieving the following desirable properties:
1. Efficiency: the client's complexity is independent of the training-set size during training, and of the tree size during prediction.
2. Security: privacy holds against malicious servers.
3. Practical usability: high accuracy, fast prediction, and feasible training demonstrated on standard UCI datasets, encrypted with fully homomorphic encryption.
To the best of our knowledge, our protocols are the first to offer all these properties simultaneously.
The core of our work consists of two technical contributions. First, a new low-degree polynomial approximation for functions, leading to faster protocols for training and prediction on encrypted data. Second, a design of an easy-to-use mechanism for proving privacy against malicious adversaries that is suitable for a wide family of protocols, and in particular, our protocols; this mechanism could be of independent interest.
Partially-Fair Computation from Timed-Release Encryption and Oblivious Transfer
We describe a new protocol to achieve two-party $\epsilon$-fair exchange: at any point in the unfolding of the protocol, the difference in the probabilities of the parties having acquired the desired term is bounded by a value $\epsilon$ that can be made as small as necessary. Our construction uses oblivious transfer and sidesteps previous impossibility results by using a timed-release encryption, which releases its contents only after some lower-bounded time. We show that our protocol can be easily generalized to an $\epsilon$-fair two-party protocol for all functionalities. To our knowledge, this is the first protocol to truly achieve $\epsilon$-fairness for all functionalities. All previous constructions achieving some form of fairness for all functionalities (without relying on a trusted third party) had a strong limitation: the fairness guarantee was only guaranteed to hold if the honest parties are at least as powerful as the corrupted parties and invest a similar amount of resources in the protocol, an assumption which is often not realistic. Our construction does not have this limitation: our protocol provides a clear upper bound on the running time of all parties, and partial fairness holds even if the corrupted parties have much more time or computational power than the honest parties. Interestingly, this shows that a minimal use of timed-release encryption suffices to circumvent an impossibility result of Katz and Gordon regarding $\epsilon$-fair computation for all functionalities, without having to make the (unrealistic) assumption that the honest parties are as computationally powerful as the corrupted parties - this assumption was previously believed to be unavoidable in order to overcome this impossibility result. We present detailed security proofs of the new construction, which are non-trivial and form the core technical contribution of this work.
Fast Secrecy Computation with Multiplication Under the Setting of $N < 2k-1$ using Secret Sharing Scheme
In this paper, we describe two new protocols for secure secrecy computation with information-theoretical security against a semi-honest adversary and a dishonest majority. Typically, unconditionally secure secrecy computation using a secret sharing scheme is considered impossible under the setting of $N < 2k-1$. Therefore, in our previous work, we first took the approach of finding the conditions required for secure secrecy computation under this setting and realized a new technique of conditionally secure secrecy computation. We showed that secrecy computation using a secret sharing scheme can be realized against a semi-honest adversary with the following three preconditions: (1) the value of a secret and a random number used in secrecy multiplication does not include 0; (2) there is a set of shares on 1 that is constructed from random numbers that are unknown to the adversary; and (3) in secrecy computation involving consecutive computation, the position of the shares in a set of shares that are handled by each server is fixed. In this paper, we differentiate the relationship between the parameters of the threshold secret sharing scheme and the number N of servers/parties, and realize secrecy computation of multiplication under the setting of $N < 2k-1$. In addition, we improve the processing speed of our protocol by dividing the computation process into a Preprocessing Phase and a Computation Phase and shifting the cost of communication to the Preprocessing Phase. This technique allows information that does not depend on any of the private values to be generated in advance, significantly reducing the cost of communication in the Computation Phase. For example, for secrecy computation without repetition, the cost of communication can be removed entirely from the Computation Phase. As a result, we realize a method for secrecy computation that is faster compared to conventional methods. In addition, our protocols provide solutions for the aforementioned three preconditions and realize secure secrecy computation without any limitation in terms of usability.
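The split into a communication-heavy Preprocessing Phase and a cheap Computation Phase is the same pattern used by Beaver-triple multiplication, sketched below for two parties with additive shares over a toy prime field; this is a generic illustration of input-independent preprocessing, not the authors' protocol.

```python
import random
P = 2 ** 61 - 1  # toy prime field modulus

def share2(v):
    s0 = random.randrange(P)
    return s0, (v - s0) % P

# --- Preprocessing Phase (input independent): a Beaver triple a*b = c, shared.
a, b = random.randrange(P), random.randrange(P)
a_sh, b_sh, c_sh = share2(a), share2(b), share2(a * b % P)

# --- Computation Phase: multiply secret-shared x and y with one opening round.
x, y = 1234, 5678
x_sh, y_sh = share2(x), share2(y)

# Each party opens its share of d = x - a and e = y - b (the only communication).
d = (x_sh[0] - a_sh[0] + x_sh[1] - a_sh[1]) % P
e = (y_sh[0] - b_sh[0] + y_sh[1] - b_sh[1]) % P

# Local computation of shares of z = x*y:
# z = c + d*b + e*a + d*e, with the public d*e term added by party 0 only.
z_sh = [
    (c_sh[i] + d * b_sh[i] + e * a_sh[i] + (d * e if i == 0 else 0)) % P
    for i in range(2)
]
assert sum(z_sh) % P == (x * y) % P
print("Beaver multiplication ok")
```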
Post-quantum Zero Knowledge in Constant Rounds
We construct a constant-round zero-knowledge classical argument for NP secure against quantum attacks. We assume the existence of Quantum Fully-Homomorphic Encryption and other standard primitives, known based on the Learning with Errors Assumption for quantum algorithms. As a corollary, we also obtain a constant-round zero-knowledge quantum argument for QMA.
At the heart of our protocol is a new no-cloning non-black-box simulation technique.
An IND-CCA-Secure Code-Based Encryption Scheme Using Rank Metric
The use of rank instead of Hamming metric has been proposed to address the main drawback of code-based cryptography: large key sizes. There exist several Key Encapsulation Mechanisms (KEM) and Public Key Encryption (PKE) schemes using rank metric including some submissions to the NIST call for standardization of Post-Quantum Cryptography. In this work, we present an IND-CCA PKE scheme based on the McEliece adaptation to rank metric proposed by Loidreau at PQC 2017. This IND-CCA PKE scheme based on rank metric does not use a hybrid construction KEM + symmetric encryption. Instead, we take advantage of the bigger message space obtained by the different parameters chosen in rank metric, being able to exchange multiple keys in one ciphertext. Our proposal is designed considering some specific properties of the random error generated during the encryption. We prove our proposal IND-CCA-secure in the QROM by using a security notion called disjoint simulatability introduced by Saito et al. in Eurocrypt 2018. Moreover, we provide security bounds by using the semi-oracles introduced by Ambainis et al.
Towards Quantum-Safe VPNs and Internet
Estimating that quantum computers capable of breaking public-key cryptography currently considered safe could exist within 10 years, the threat is already imminent for information that requires secrecy for more than 10 years. The time required to standardize, implement and update existing networks underlines the urgency of adopting quantum-safe cryptography.
In this work, we investigate the trade-off between network and CPU overhead and the security levels defined by NIST. To do so, we integrate adapted OpenSSL libraries into OpenVPN, and perform experiments on a large variety of quantum-safe algorithms for TLS versions 1.2 and 1.3, using OpenVPN and HTTPS independently. We describe the difficulties we encounter with the integration and we report the experimental performance results, comparing setting up the quantum-safe connection with setting up the connection without additional post-quantum cryptography.
Two PQ Signature Use-cases: Non-issues, challenges and potential solutions.
The recent advances and attention to quantum computing have raised serious security concerns among IT professionals. The ability of a quantum computer to efficiently solve (elliptic curve) discrete logarithm, and integer factorization problems poses a threat to current public key exchange, encryption, and digital signature schemes.
Consequently, in 2016 NIST initiated an open call for quantum-resistant crypto algorithms. This process, currently in its second round, has yielded nine signature algorithms for possible standardization. In this work, we evaluate two post-quantum signature use-cases and analyze the signature schemes that seem most appropriate for them. We first consider Hash-Based Signatures for software signing and secure boot. We propose suitable parameters and show that their acceptable performance makes them good candidates for image signing. We then evaluate NIST candidate post-quantum signatures for TLS 1.3. We show that Dilithium and Falcon are the best available options but come with an impact on TLS performance. Finally, we present challenges and potential solutions introduced by these algorithms.
Updatable Oblivious Key Management for Storage Systems
We introduce Oblivious Key Management Systems (KMS) as a more secure alternative to traditional wrapping-based KMS that form the backbone of key management in large-scale data storage deployments. The new system, that builds on Oblivious Pseudorandom Functions (OPRF), hides keys and object identifiers from the KMS, offers unconditional security for key transport, provides key verifiability, reduces storage, and more. Further, we show how to provide all these features in a distributed threshold implementation that enhances protection against server compromise.
We extend this system with updatable encryption capability that supports key updates (known as key rotation) so that upon the periodic change of OPRF keys by the KMS server, a very efficient update procedure allows a client of the KMS service to non-interactively update all its encrypted data to be decryptable only by the new key. This enhances security with forward and post-compromise security, namely, security against future and past compromises, respectively, of the client's OPRF keys held by the KMS. Additionally, and in contrast to traditional KMS, our solution supports public key encryption and dispenses with any interaction with the KMS for data encryption (only decryption by the client requires such communication).
Our solutions build on recent work on updatable encryption but with significant enhancements applicable to the remote KMS setting. In addition to the critical security improvements, our designs are highly efficient and ready for use in practice. We report on experimental implementation and performance.
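For readers unfamiliar with OPRFs, the sketch below shows a Diffie-Hellman blinded-exponentiation OPRF flow (the core of constructions such as 2HashDH, omitting the final output hash) over an insecure toy group: the client blinds H(x), the server applies its key k, and the client unblinds, learning H(x)^k while the server learns nothing about x. This is a generic OPRF illustration, not the paper's exact protocol or its threshold variant.

```python
import hashlib, random

# Toy safe-prime group: p = 2q + 1, working in the order-q subgroup of squares.
p = 2039           # toy parameters only; real deployments use large groups/curves
q = (p - 1) // 2   # 1019, prime

def hash_to_group(x):
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(h, 2, p)  # squaring lands in the order-q subgroup

def oprf_eval(x, server_key):
    # Client: blind H(x) with a random exponent r.
    r = random.randrange(1, q)
    blinded = pow(hash_to_group(x), r, p)
    # Server: apply its secret key k to the blinded element (it sees only `blinded`).
    evaluated = pow(blinded, server_key, p)
    # Client: unblind with r^{-1} mod q, obtaining H(x)^k.
    return pow(evaluated, pow(r, -1, q), p)

k = random.randrange(1, q)
out1 = oprf_eval(b"object-identifier-42", k)
out2 = pow(hash_to_group(b"object-identifier-42"), k, p)  # direct evaluation
assert out1 == out2
print("blinded OPRF evaluation matches direct evaluation")
```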
Rank-metric Encryption on Arm-Cortex M0
Since its invention by McEliece in 1978, cryptography based on Error Correcting Codes (ECC) has suffered from the reputation of not being suitable for constrained devices. Indeed, McEliece's scheme and its variants have large public keys and relatively long ciphertexts.
Recent works on these downsides explored the possible use of ECC based on rank metric instead of Hamming metric.
These codes were introduced in the late 80's to eliminate errors with repeating patterns, regardless of their Hamming weight.
Numerous proposals for the NIST Post-Quantum Cryptography (PQC) competition rely on these codes.
It has been proven that lattice-based cryptography and even hash-based signatures can run on lightweight devices,
but the question remains for code-based cryptography.
In this work, we demonstrate that this is actually possible for rank metric:
we have implemented the encryption operation of 5 schemes based on ECC in rank metric and made them run on an Arm Cortex-M0 processor, the smallest Arm processor available.
We describe the technical difficulties of porting rank-based cryptography to a resource-constrained device while maintaining decent performance and a suitable level of security against side-channel attacks, especially timing attacks.
A Comprehensive Framework for Fair and Efficient Benchmarking of Hardware Implementations of Lightweight Cryptography
In this paper, we propose a comprehensive framework for fair and efficient benchmarking of hardware implementations of lightweight cryptography (LWC). Our framework is centered around the hardware API (Application Programming Interface) for the implementations of lightweight authenticated ciphers, hash functions, and cores combining both functionalities. The major parts of our API include the minimum compliance criteria, interface, and communication protocol supported by the LWC core. The proposed API is intended to meet the requirements of all candidates submitted to the NIST Lightweight Cryptography standardization process, as well as all CAESAR candidates and current authenticated cipher and hash function standards. In order to speed-up the development of hardware implementations compliant with this API, we are making available the LWC Development Package and the corresponding Implementer’s Guide. Equipped with these resources, hardware designers can focus on implementing only a core functionality of a given algorithm. The development package facilitates the communication with external modules, full verification of the LWC core using simulation, and generation of optimized results. The proposed API for lightweight cryptography is a superset of the CAESAR Hardware API, endorsed by the organizers of the CAESAR competition, which was successfully used in the development of over 50 implementations of Round 2 and Round 3 CAESAR candidates. The primary extensions include support for optional hash functionality and the development of cores resistant against side-channel attacks. Similarly, the LWC Development Package is a superset of the part of the CAESAR Development Package responsible for support of Use Case 1 (lightweight) CAESAR candidates. The primary extensions include support for hash functionality, increasing the flexibility of the code shared among all candidates, as well as extended support for the detection of errors preventing the correct operation of cores during experimental testing. Overall, our framework supports (a) fair ranking of candidates in the NIST LWC standardization process from the point of view of their efficiency in hardware before and after the implementation of countermeasures against side-channel attacks, (b) ability to perform benchmarking within the limited time devoted to Round2 and any subsequent rounds of the NIST LWC standardization process, (c) compatibility among implementations of the same algorithm by different designers and (d) fast deployment of the best algorithms in real-life applications.
The Niederreiter cryptosystem and Quasi-Cyclic codes
McEliece and Niederreiter are robust and versatile cryptosystems that work with any linear error-correcting code. They are popular these days because they can be quantum-secure. In this paper, we study the Niederreiter cryptosystem using quasi-cyclic codes. We prove that, if these quasi-cyclic codes satisfy certain conditions, the corresponding Niederreiter cryptosystem is resistant to quantum Fourier sampling attacks based on the hidden subgroup problem. Our proof requires the classification of finite simple groups.
Round-optimal Verifiable Oblivious Pseudorandom Functions From Ideal Lattices
Verifiable Oblivious Pseudorandom Functions (VOPRFs) are protocols that allow a client to learn verifiable pseudorandom function (PRF) evaluations on inputs of their choice. The PRF evaluations are computed by a server using their own secret key. The security of the protocol prevents both the server from learning anything about the client's input, and likewise the client from learning anything about the server's key. VOPRFs have many applications including password-based authentication, secret-sharing, anonymous authentication and efficient private set intersection. In this work, we construct the first round-optimal (online) VOPRF protocol that retains security from well-known subexponential lattice hardness assumptions. Our protocol requires constructions of non-interactive zero-knowledge arguments of knowledge (NIZKAoK). Using recent developments in the area of post-quantum zero-knowledge arguments of knowledge, we show that our VOPRF may be securely instantiated in the quantum random oracle model. We construct such arguments as extensions of prior work in the area of lattice-based zero-knowledge proof systems.
SAVER: SNARK-friendly, Additively-homomorphic, and Verifiable Encryption and decryption with Rerandomization
In the pairing-based zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARK), there often exists a requirement for the proof system to be combined with encryption. As a typical example, a blockchain-based voting system requires the vote to be confidential (using encryption), while verifying voting validity (using zk-SNARKs). In these combined applications, a typical solution is to extend the zk-SNARK circuit to include the encryption code. However, complex cryptographic operations in the encryption algorithm increase the circuit size, which leads to impractically large proving time and CRS size.
In this paper, we propose SNARK-friendly, Additively-homomorphic, and Verifiable Encryption and decryption with Rerandomization or SAVER, which is a novel approach to detach the encryption from the SNARK circuit. The encryption in SAVER holds many useful properties. It is SNARK-friendly: the encryption is conjoined with an existing pairing-based SNARK, in a way that the encryptor can prove pre-defined properties while encrypting the message apart from the SNARK. It is additively-homomorphic: the ciphertext holds a homomorphic property from the ElGamal-based encryption. It is a verifiable encryption: one can verify arbitrary properties of encrypted messages by connecting with the SNARK system. It provides a verifiable decryption: anyone without the secret can still verify that the decrypted message is indeed from the given ciphertext. It provides rerandomization: the proof and the ciphertext can be rerandomized as independent objects so that even the encryptor (or prover) herself cannot identify the origin.
For the representative application, we also propose a Vote-SAVER based on SAVER, which is a novel voting system where voter's secret key lies only with the voter himself. The Vote-SAVER satisfies receipt-freeness (which implies ballot privacy), individual verifiability (which implies non-repudiation), vote verifiability, tally uniqueness, and voter anonymity. The experimental results show that our SAVER with respect to the Vote-SAVER relation yields 0.7s for zk-SNARK proving time and 10ms for encryption, with the CRS size of 16MB.
Repudiable Ring Signature: Stronger Security and Logarithmic-Size
Ring signatures allow a person to generate a signature on behalf of an ad hoc group, and can hide the true identity of the signer among the group. Repudiable ring signatures are a more strongly defined form of ring signatures, which allow every non-signer to prove to others that the signature was not generated by himself.
This paper has two main areas of focus. First, we propose a new requirement for repudiable ring signatures: no one can forge a valid repudiation for others. Second, as a breakthrough, we present the first logarithmic-size repudiable ring signatures which do not rely on a trusted setup or the random oracle model. Specifically, our scheme can be instantiated from standard assumptions, and the size of signatures and repudiations grows only logarithmically in the number of ring members.
Besides, our scheme also provides a new construction of logarithmic-size standard ring signatures.
On the Security of RSA-PSS in the Wild
The RSA Probabilistic Signature Scheme (RSA-PSS) due to Bellare and Rogaway (EUROCRYPT 1996) is a widely deployed signature scheme. In particular it is a suggested replacement for the deterministic RSA Full Domain Hash (RSA-FDH) by Bellare and Rogaway (ACM CCS 1993) and PKCS#1 v1.5 (RFC 2313), as it can provide stronger security guarantees. It has since been shown by Kakvi and Kiltz (EUROCRYPT 2012, Journal of Cryptology 2018) that RSA-FDH provides similar security to that of RSA-PSS, also in the case when RSA-PSS is not randomized. Recently, Jager, Kakvi and May (ACM CCS 2018) showed that PKCS#1 v1.5 provides comparable security to both RSA-FDH and RSA-PSS. However, all these proofs consider each signature scheme in isolation, where in practice this is not the case. The most interesting case is that in TLS 1.3, PKCS#1 v1.5 signatures are still included for reasons of backwards compatibility, meaning both RSA-PSS and PKCS#1 v1.5 signatures are implemented. To save space, the key material is shared between the two schemes, which means the aforementioned security proofs no longer apply. We investigate the security of this joint usage of key material in the context of Sibling Signatures, which were introduced by Camenisch, Drijvers, and Dubovitskaya (ACM CCS 2017). It must be noted that we consider the standardised version of RSA-PSS (IEEE Standard P1363-2000), which deviates from the original scheme considered in all previous papers. We are able to show that this joint usage is indeed secure, and achieves a security level that closely matches that of PKCS#1 v1.5 signatures and that both schemes can be safely used, if the output lengths of the hash functions are chosen appropriately.
Note on the noise growth of the RNS variants of the BFV scheme
In a recent work, Al Badawi et al. have noticed a different behaviour of the noise growth in practice between the two RNS variants of BFV from Bajard et al. and Halevi et al. Their experiments, based on the PALISADE and SEAL libraries, have shown that the multiplicative depth reached in practice by the first one was considerably smaller than that of the second one, while the two are theoretically equivalent in the worst case. Their interpretation of this phenomenon was that the approximations used by Bajard et al. made the expansion factor behave differently than what the Central Limit Theorem would predict.
We have realized that this difference actually comes from the implementation of the SmMRq procedure of Bajard et al. in SEAL and PALISADE, which is slightly different from what Bajard et al. had proposed. In this note we show that, by fixing this small difference, the multiplicative depth of both variants is actually the same in practice.
Last updated: 2020-11-13
WaterCarver: Anonymous Confidential Blockchain System based on Account Model
Account-model-based blockchain systems have gained popularity due to their ease of use and native support for smart contracts. However, the privacy of users has become a major concern and a bottleneck for further adoption, because users are not willing to leave permanent public records online. Conventionally, the privacy of users includes transaction confidentiality and anonymity. While confidentiality can be easily protected using confidential transaction techniques, anonymity can be quite challenging in an account-model-based blockchain system, because every transaction in the system inevitably updates the transaction sender's as well as the receiver's account balance. Even when the privacy of a blockchain system is well protected, however, regulation becomes a new challenge for its further adoption.
In this paper, we introduce a novel transaction-mix protocol, which provides confidentiality and, moreover, anonymity to account-model-based blockchain systems. By leveraging a state-of-the-art verifiable shuffle scheme to construct a shuffle of confidential transactions, we build a practical anonymous confidential blockchain system, WaterCarver, upon the account model with the help of confidential transactions, verifiable shuffle, and zero-knowledge proofs. We further provide an efficient and robust solution for dynamic regulation with multiple regulators. Our regulation method achieves flexibility and perfect forward secrecy. Experiments show that the overall Transactions Per Second (TPS) of our system can reach 600 on a simple desktop.
Resource-Restricted Cryptography: Revisiting MPC Bounds in the Proof-of-Work Era
Traditional bounds on synchronous Byzantine agreement (BA) and secure multi-party computation (MPC) establish that, in the absence of a private correlated-randomness setup such as a PKI, protocols can tolerate up to $t < n/3$ of the parties being malicious. The introduction of ``Nakamoto style'' consensus, based on Proof-of-Work (PoW) blockchains, put forth a somewhat different flavor of BA, showing that even a majority of corrupted parties can be tolerated as long as the majority of the computation resources remain in honest hands. This assumption on honest majority of some resource was also extended to other resources such as stake, space, etc., upon which blockchains achieving Nakamoto-style consensus were built that violated the $t < n/3$ bound in terms of the number of party corruptions. The above state of affairs begs the question of whether the seeming mismatch is due to different goals and models, or whether the resource-restricting paradigm can be generically used to circumvent the lower bound.
In this work we study this question and formally demonstrate
how the above paradigm changes the rules of the game in cryptographic definitions.
First, we abstract the core properties that the resource-restricting paradigm offers by means of a functionality wrapper, in the UC framework, which when applied to a standard point-to-point network restricts the ability (of the adversary) to send new messages. We show that such a wrapped network can be implemented using the resource-restricting paradigm---concretely, using PoWs and honest majority of computing power---and that the traditional impossibility results fail when the parties have access to such a network. Our construction is in the {\em fresh} Common Reference String (CRS) model---i.e., it assumes a CRS which becomes available to the parties at the same time as to the adversary.
We then present constructions for BA and MPC which, given access to such a network, tolerate $t < n/2$ corruptions without assuming a private correlated randomness setup. We also show how to remove the freshness assumption from the CRS by leveraging the power of a random oracle. Our MPC protocol achieves the standard notion of MPC security, where parties might have dedicated roles, as is for example the case in Oblivious Transfer protocols. This is in contrast to existing solutions basing MPC on PoWs, which associate roles to pseudonyms but do not link these pseudonyms with the actual parties.
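As a minimal reminder of the resource being restricted, the snippet below shows the standard proof-of-work puzzle: find a nonce whose hash falls below a target, so the rate at which any party can contribute new messages is bounded by its hashing power. This is a generic PoW illustration, not the paper's wrapper functionality.

```python
import hashlib

def solve_pow(message, difficulty_bits):
    # Find a nonce such that SHA-256(message || nonce) has `difficulty_bits`
    # leading zero bits; expected work is about 2^difficulty_bits hashes.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_pow(message, nonce, difficulty_bits):
    digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

msg = b"broadcast this"
nonce = solve_pow(msg, 16)          # roughly 65k hashes on average
print(nonce, verify_pow(msg, nonce, 16))
```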
Comments on Cryptographic Entropy Measurement
Random data and the entropy contained within it are a critical component of information security.
Minimum entropy is the base measurement defined by NIST and the quantity on which many of their entropy tests are based.
However, minimum entropy does not satisfy the basic requirements for an entropy measurement in either Shannon's original work or Rényi's generalization of entropy.
This document suggests a different way forward with a reclassification of entropy into two classes.
With this differentiation, measurement tools are simplified and allow for more accurate assessments of entropy.
A Practical Model for Collaborative Databases: Securely Mixing, Searching and Computing
We introduce the notion of a Functionally Encrypted Datastore which collects data anonymously from multiple data-owners, stores it encrypted on an untrusted server, and allows untrusted clients to make select-and-compute queries on the collected data. Little coordination and no communication is required among the data-owners or the clients. Our notion is general enough to capture many real world scenarios that require controlled computation on encrypted data, such as is required for contact tracing in the wake of a pandemic. Our leakage and performance profile is similar to that of conventional searchable encryption systems, while the functionality we offer is significantly richer.
In more detail, the client specifies a query as a pair (Q, f) where Q is a filtering predicate which selects some subset of the dataset and f is a function on some computable values associated with the selected data. We provide efficient protocols for various functionalities of practical relevance. We demonstrate the utility, efficiency and scalability of our protocols via extensive experimentation. In particular, we evaluate the efficiency of our protocols in computations relevant to the Genome Wide Association Studies such as Minor Allele Frequency (MAF), Chi-square analysis and Hamming Distance.
On Round-By-Round Soundness and State Restoration Attacks
We show that the recently introduced notion of round-by-round soundness for
interactive proofs (Canetti et al.; STOC 2019) is equivalent to the notion of
soundness against state restoration attacks (Ben-Sasson, Chiesa, and Spooner;
TCC 2016). We also observe that neither notion is implied by the
random-oracle security of the Fiat-Shamir transform.
TI-PUF: Toward Side-Channel Resistant Physical Unclonable Functions
One of the main motivations behind introducing PUFs was their ability to resist physical attacks. Among them, cloning was the major concern of the related scientific literature. Several primitive PUF designs have been introduced to the community, and several machine learning attacks have been shown capable of modeling such constructions. Although a few works have shown how to make use of Side-Channel Analysis (SCA) leakage of PUF constructions to significantly improve the modeling attacks, little attention has been paid to providing corresponding countermeasures.
In this paper, we present a generic technique to operate any PUF primitive in an SCA-secure fashion. We, for the first time, make it possible to apply a provably-secure masking countermeasure – Threshold Implementation (TI) – on a strong PUF design. As a case study, we concentrate on the Interpose PUF, and based on practical experiments on an FPGA prototype, we demonstrate the ability of our construction to prevent the recovery of intermediate values through SCA measurements.
Security and Efficiency Trade-offs for Elliptic Curve Diffie-Hellman at the 128-bit and 224-bit Security Levels
Within the Transport Layer Security (TLS) Protocol Version 1.3, RFC 7748 specifies elliptic curves targeted at the 128-bit and the 224-bit security levels. For the 128-bit security level, the Montgomery curve Curve25519 and its birationally equivalent twisted Edwards curve Ed25519 are specified; for the 224-bit security level, the Montgomery curve Curve448, the Edwards curve Edwards448 (which is isogenous to Curve448) and another Edwards curve which is birationally equivalent to Curve448 are specified. Our first contribution is to provide the presently best known 64-bit assembly implementations of Diffie-Hellman shared secret computation using Curve25519. The main contribution of this work is to propose new pairs of Montgomery-Edwards curves at the 128-bit and the 224-bit security levels. The new curves are nice in the sense that they have very small curve coefficients and base points. Compared to the curves in RFC 7748, the new curves lose two bits of security. The gain is improved efficiency. For Intel processors, we have made different types of implementations of the Diffie-Hellman shared secret computation using the new curves. The new curve at the 128-bit level is faster than Curve25519 for all types of implementations, while the new curve at the 224-bit level is faster than Curve448 for 64-bit sequential implementations using schoolbook multiplication, but slower than Curve448 for vectorized implementations using Karatsuba multiplication. Overall, the new curves provide good alternatives to Curve25519 and Curve448.
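For reference, the sketch below is a straightforward (non-constant-time, unoptimized) Python transcription of the RFC 7748 X25519 Montgomery ladder, checked by the usual Diffie-Hellman consistency test; the paper's contribution is fast assembly implementations and alternative curves, neither of which is reflected here.

```python
import secrets

# X25519 scalar multiplication per RFC 7748 (educational, not constant time).
P = 2 ** 255 - 19
A24 = 121665  # (486662 - 2) / 4 for Curve25519

def clamp(k):
    k &= (1 << 254) - 8   # clear bits 0, 1, 2 and bit 255 and above
    k |= 1 << 254         # set bit 254
    return k

def x25519(k, u):
    k = clamp(k)
    x1, x2, z2, x3, z3, swap = u, 1, 0, u, 1, 0
    for t in range(254, -1, -1):
        kt = (k >> t) & 1
        swap ^= kt
        if swap:
            x2, x3, z2, z3 = x3, x2, z3, z2
        swap = kt
        a, b = (x2 + z2) % P, (x2 - z2) % P
        aa, bb = a * a % P, b * b % P
        e = (aa - bb) % P
        c, d = (x3 + z3) % P, (x3 - z3) % P
        da, cb = d * a % P, c * b % P
        x3, z3 = (da + cb) ** 2 % P, x1 * (da - cb) ** 2 % P
        x2, z2 = aa * bb % P, e * (aa + A24 * e) % P
    if swap:
        x2, x3, z2, z3 = x3, x2, z3, z2
    return x2 * pow(z2, P - 2, P) % P

# Diffie-Hellman consistency check: both sides derive the same shared x-coordinate.
a_priv, b_priv = secrets.randbits(256), secrets.randbits(256)
a_pub, b_pub = x25519(a_priv, 9), x25519(b_priv, 9)
assert x25519(a_priv, b_pub) == x25519(b_priv, a_pub)
print("X25519 shared secrets match")
```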
Secure Pairwise Key Sharing using Geometric Group Key Sharing Method (Full Paper)
In recent years, the concept of the Internet of Things (IoT), a network that enables everyday objects and electronic devices to communicate with each other, has been extensively discussed. There are three main types of communication that can be assumed in an IoT network: unicast, group, and broadcast communication. Apart from that, Hamasaki et al. considered geometric characteristics and proposed a method of geometric group key sharing. Thereafter, Nishigami et al. proposed a key sharing method suitable for sharing a pairwise key, building on the method of Hamasaki et al. However, testing this method, we found that when a node and its fellow nodes are attacked together, the keys of the remaining nodes are leaked. Therefore, in this paper, using the features introduced in geometric group key sharing, we propose a method that enables a pairwise key to be shared securely. In addition, we extend our pairwise key sharing method to group key sharing, in order to efficiently share the pairwise, group, and global keys used in broadcast communication. Finally, we evaluate the efficiency of our proposed method.
Expressive CP-ABE Scheme Satisfying Constant-Size Keys and Ciphertexts
Ciphertext-policy attribute-based encryption (CP-ABE) is a desirable scheme to use in cloud-based applications, especially on IoT devices. Most of these devices are battery-limited and memory-limited, which constrains the design to a robust and straightforward mechanism involving little computation and memory. However, none of the existing systems are both secure and based on conventional cryptosystems. Here we propose a constant-size secret key and constant-size ciphertext scheme based on the RSA cryptosystem, which performs encryption and decryption in O(1) time complexity. We also prove that the scheme is secure and compare it with existing schemes.
Permuted Puzzles and Cryptographic Hardness
A permuted puzzle problem is defined by a pair of distributions $\mathcal{D}_0, \mathcal{D}_1$ over $\Sigma^n$. The problem is to distinguish samples from $\mathcal{D}_0$ and $\mathcal{D}_1$, where the symbols of each sample are permuted by a single secret permutation of $[n]$.
The conjectured hardness of specific instances of permuted puzzle problems was recently used to obtain the first candidate constructions of Doubly Efficient Private Information Retrieval (DE-PIR) (Boyle et al. & Canetti et al., TCC'17). Roughly, in these works the distributions over $\Sigma^n$ are evaluations of either a moderately low-degree polynomial or a random function. This new conjecture seems to be quite powerful, and is the foundation for the first DE-PIR candidates, almost two decades after the question was first posed by Beimel et al. (CRYPTO'00). While permuted puzzles are a natural and general class of problems, their hardness is still poorly understood.
We initiate a formal investigation of the cryptographic hardness of permuted puzzle problems. Our contributions lie in three main directions:
1. Rigorous formalization. We formalize a notion of permuted puzzle distinguishing problems, extending and generalizing the proposed permuted puzzle framework of Boyle et al. (TCC'17).
2. Identifying hard permuted puzzles. We identify natural examples in which a one-time permutation provably creates cryptographic hardness, based on ``standard'' assumptions. In these examples, the original distributions are easily distinguishable, but the permuted puzzle distinguishing problem is computationally hard. We provide such constructions in the random oracle model, and in the plain model under the Decisional Diffie-Hellman (DDH) assumption. We additionally observe that the Learning Parity with Noise (LPN) assumption itself can be cast as a permuted puzzle.
3. Partial lower bound for the DE-PIR problem. We make progress towards better understanding the permuted puzzles underlying the DE-PIR constructions, by showing that a toy version of the problem, introduced by Boyle et al. (TCC'17), withstands a rich class of attacks, namely those that distinguish solely via statistical queries.
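To make the distinguishing game concrete, here is a minimal Python sketch (our illustration with toy distributions, not the authors' formalization): samples are drawn from one of two distributions over fixed-length strings, every sample is shuffled by the same secret permutation, and the adversary must guess which distribution was used.

```python
import random

def permuted_puzzle_sample(dist, perm):
    """Draw one sample from dist and apply the fixed secret permutation to its symbols."""
    x = dist()
    return [x[perm[i]] for i in range(len(x))]

def distinguishing_game(dist0, dist1, adversary, n, num_samples=16):
    """One run of a (toy) permuted puzzle distinguishing game."""
    perm = list(range(n))
    random.shuffle(perm)                      # single secret permutation, reused for all samples
    b = random.randrange(2)                   # secret challenge bit
    dist = dist0 if b == 0 else dist1
    samples = [permuted_puzzle_sample(dist, perm) for _ in range(num_samples)]
    return adversary(samples) == b            # True iff the adversary guesses correctly

# Toy example: dist0 outputs constant strings, dist1 outputs uniformly random strings.
# The permutation does not hide this particular difference, which illustrates that
# hardness (or the lack of it) depends entirely on the pair of distributions.
n = 8
dist0 = lambda: [random.randrange(2)] * n
dist1 = lambda: [random.randrange(2) for _ in range(n)]
adv = lambda samples: 0 if all(len(set(s)) == 1 for s in samples) else 1
wins = sum(distinguishing_game(dist0, dist1, adv, n) for _ in range(1000))
print(f"adversary wins {wins}/1000 games")
```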
Zero-Knowledge Proofs for Set Membership: Efficient, Succinct, Modular
We consider the problem of proving in zero knowledge that an element of a public set satisfies a given property without disclosing the element, i.e., for some , `` and holds''. This problem arises in many applications (anonymous cryptocurrencies, credentials or whitelists) where, for privacy or anonymity reasons, it is crucial to hide certain data while ensuring properties of such data.
We design new \textit{modular} and \textit{efficient} constructions for this problem through new \textit{commit-and-prove zero-knowledge systems for set membership}, i.e. schemes proving for a value that is in a public commitment . We also extend our results to support {\em non-membership proofs}, i.e. proving .
Being commit-and-prove, our solutions can act as plug-and-play modules in statements of the form `` and holds'' by combining our set (non-)membership systems with any other commit-and-prove scheme for . Also, they work with Pedersen commitments over prime order groups which makes them compatible with popular systems such as Bulletproofs or Groth16.
We implemented our schemes as a software library, and tested experimentally their performance. Compared to previous work that achieves similar properties---the clever techniques combining zkSNARKs and Merkle Trees in Zcash---our solutions offer more flexibility, shorter public parameters and -- faster proving time for a set of size .
Cryptanalysis of FRS Obfuscation based on the CLT13 Multilinear Map
We present a classical polynomial time attack against the FRS branching program obfuscator of Fernando-Rasmussen-Sahai (Asiacrypt’17) (with one zerotest parameter), which is robust against all known classical cryptanalyses on obfuscators, when instantiated with the CLT13 multilinear map.
The first step is to recover a plaintext modulus of the CLT13 multilinear map. To achieve this, we apply the algorithm of Coron and Notarnicola (Asiacrypt'19). However, because of parameter issues, the algorithm cannot be used directly. To circumvent this issue, we convert an FRS obfuscator into a new program with a small message space. Through the conversion, we obtain two zerotest parameters and encodings of zero except for two nonzero slots, which are then used to mitigate the parameter constraints of the message-space recovery algorithm.
Next, we propose a cryptanalysis of the FRS obfuscation based on the recovered message space. We show that there exist two functionally equivalent programs whose obfuscated programs are computationally distinguishable. Thus, the FRS scheme does not satisfy the desired security without additional constraints.
Probabilistic Properties of Modular Addition (Extended abstract)
We studied the applicability of differential cryptanalysis to cryptosystems based on the operation of addition modulo . We obtained an estimate (accurate up to an additive constant) of the expected value of the entropy in the rows of the DDT of the corresponding mapping. Moreover, the -th moments of are explored. In particular, asymptotic inequalities that describe the behavior of the values and as were obtained. A simple analytical formula for the size of any given equivalence class was obtained; this formula enabled the entropy distribution to be computed effectively.
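As a concrete illustration of the object studied here, the following Python sketch (toy word size chosen by us) builds the DDT of addition modulo 2^n, viewed as the map (x, y) -> (x + y) mod 2^n with XOR differences, and computes the Shannon entropy of each row.

```python
from math import log2

def addition_ddt(n):
    """DDT of (x, y) -> (x + y) mod 2**n with XOR input/output differences."""
    size = 1 << n
    # Rows are indexed by the input difference pair (da, db); columns by the output difference.
    ddt = [[0] * size for _ in range(size * size)]
    for da in range(size):
        for db in range(size):
            row = ddt[da * size + db]
            for x in range(size):
                for y in range(size):
                    dout = ((x + y) ^ ((x ^ da) + (y ^ db))) & (size - 1)
                    row[dout] += 1
    return ddt

def row_entropy(row):
    """Shannon entropy (in bits) of a normalized DDT row."""
    total = sum(row)
    return -sum((c / total) * log2(c / total) for c in row if c)

ddt = addition_ddt(4)                      # toy word size n = 4
entropies = [row_entropy(r) for r in ddt]
print("average row entropy:", sum(entropies) / len(entropies))
```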
Simplifying Constructions and Assumptions for iO
The existence of secure indistinguishability obfuscators (iO) has far-reaching implications, significantly expanding the scope of problems amenable to cryptographic study. A recent line of work
[Ananth, Jain, and Sahai, 2018; Aggrawal, 2018; Lin and Matt, 2018; Jain, Lin, Matt, and Sahai, 2019] has developed a new theory for building iO from simpler building blocks, and represents the state of the art in constructing iO from succinct and instance-independent assumptions. This line of work has culminated in a construction of iO from four assumptions, consisting of two standard assumptions, namely sub-exponentially secure LWE and SXDH over bilinear groups, and two other pseudorandomness assumptions. The first
assumes weak pseudorandomness properties of generators computable by constant-degree polynomials over the integers, as well as an LWE leakage assumption, introduced by [Jain, Lin, Matt, and Sahai, 2019]. The second assumes the existence of Boolean PRGs with constant block locality [Goldreich 2000, Lin and Tessaro 2017]. In this work, we make the following contributions:
\begin{itemize}
\item We completely remove the need to assume a constant-block-local PRG. This yields a construction of iO based on three assumptions: LWE, SXDH, and a constant-degree perturbation-resilient generator [Jain, Lin, Matt, and Sahai, 2019].
\item Our construction is arguably simpler and more direct than previous constructions. We construct the notion of special homomorphic encoding (SHE) for all from LWE, by adapting techniques from Predicate Encryption [Gorbunov, Vaikuntanathan and Wee, 2015]. Prior to our work, SHE was only known for the class , from Ring LWE [Agrawal and Rosen, 2017]. Our new SHE allows our construction of iO to avoid an intermediate step of bootstrapping via randomized encodings. Indeed, we construct a functional encryption scheme whose ciphertext grows sublinearly only in the output length of the circuits as opposed to their size. This is the first such scheme that does not rely on multilinear maps.
\item Finally, we investigate a main technical concept facilitating this line of work on iO, namely the notion of \emph{partially hiding functional encryption} introduced by [Ananth, Jain, and Sahai 2018]. The partially hiding functional encryption used in these constructions allows an encryptor to encrypt vectors of the form and allows any decryptor with a key for a function to learn . The encryption is allowed to reveal while keeping hidden. Furthermore, the size of the ciphertext should grow linearly in .
We significantly improve the state of the art for partially hiding functional encryption: assuming SXDH over bilinear maps, we construct a partially hiding FE scheme where the function is allowed to be any polynomial-sized arithmetic branching program. Prior to our work, the best partially hiding FE only supported the class of constant-degree polynomials over [Jain, Lin, Matt, and Sahai 2019].
\end{itemize}
Lattice-based Zero-knowledge SNARGs for Arithmetic Circuits
Succinct non-interactive arguments (SNARGs) enable verifying NP computations
with substantially lower complexity than that required
for classical NP verification. In this work, we construct a zero-knowledge
SNARG candidate that relies only on lattice-based assumptions
which are claimed to hold even in the presence of quantum computers.
Central to this new construction is the notion of linear-targeted malleability introduced
by Bitansky et al. (TCC 2013) and the conjecture that
variants of Regev encryption satisfy this property. Then, using
the efficient characterization of NP languages as
Square Arithmetic Programs we build the first quantum-resilient zk-SNARG for
arithmetic circuits with a constant-size proof consisting
of only 2 lattice-based ciphertexts.
Our protocol is designated-verifier, achieves zero-knowledge and has shorter
proofs and shorter CRS than the previous such schemes, e.g. Boneh et al.
(Eurocrypt 2017).
The Price of Active Security in Cryptographic Protocols
We construct the first actively-secure Multi-Party Computation (MPC) protocols with an arbitrary number of parties in the dishonest majority setting, for an arbitrary field F with constant communication overhead over the “passive-GMW” protocol (Goldreich, Micali and Wigderson, STOC ‘87). Our protocols rely on passive implementations of Oblivious Transfer (OT) in the boolean setting and Oblivious Linear function Evaluation (OLE) in the arithmetic setting. Previously, such protocols were only known over sufficiently large fields (Genkin et al. STOC ‘14) or a constant number of parties (Ishai et al. CRYPTO ‘08).
Conceptually, our protocols are obtained via a new compiler from a passively-secure protocol for a distributed multiplication functionality FMULT, to an actively-secure protocol for general functionalities. Roughly, FMULT is parameterized by a linear secret-sharing scheme S, where it takes S-shares of two secrets and returns S-shares of their product.
We show that our compilation is concretely efficient for sufficiently large fields, resulting in an overhead of 2 when securely computing natural circuits. Our compiler has two additional benefits: (1) it can rely on any passive implementation of FMULT, which, besides the standard implementation based on OT (for boolean) and OLE (for arithmetic), allows us to rely on implementations based on threshold cryptosystems (Cramer et al. Eurocrypt '01); and (2) it can rely on weaker-than-passive (i.e., imperfect/leaky) implementations, which in some parameter regimes yield actively-secure protocols with overhead less than 2.
Instantiating this compiler with an “honest-majority” implementation of FMULT, we obtain the first honest-majority protocol with optimal corruption threshold for boolean circuits with constant communication overhead over the best passive protocol (Damgård and Nielsen, CRYPTO ‘07).
Related-key Attack on 5-Round Kuznyechik
The first related-key attack on 3-round (of 9) Kuznyechik with 2-round
(of 8) key schedule was presented in CTCrypt'18. This article describes
a related-key attack on the 5-round cipher with the same key schedule.
The presented attack also has a practical complexity ( operations, memory, related keys) and has been verified in practice.
We obtained this result by simultaneously exploiting the integral properties of the cipher transformations and of the key schedule.
A Comparison of Chi^2-Test and Mutual Information as Distinguisher for Side-Channel Analysis
Masking is known as the most widely studied countermeasure against side-channel analysis attacks.
Since a masked implementation is based on a certain number of shares (referred to as the order of masking), it still exhibits leakages at higher orders.
In order to exploit such leakages, higher-order attacks estimate higher-order statistical moments individually at each order.
Instead, Mutual Information Analysis (MIA), known for more than 10 years, avoids such a moment-based analysis by considering the entire leakage distribution for the key recovery.
Recently, the Chi^2-test has been proposed for leakage detection and as a distinguisher, where the whole distribution of the leakages is likewise analyzed.
In this work, we compare these two schemes to examine their dependency.
Indeed, one of the goals of this research is to conclude whether one can outperform the other.
In addition to a theoretical comparison, we present two case studies and their corresponding practical evaluations.
Both case studies are masked hardware implementations; one is an FPGA-based realization of a threshold implementation of PRESENT, and the other is an AES implementation as a coprocessor on a commercial smart card.
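For readers who want to see the two distinguishers side by side, the following Python sketch (a simulated Hamming-weight leakage model of our own choosing, not either case study) computes a Chi^2 statistic and a mutual-information estimate from the same contingency table of model predictions versus binned leakage, for every key hypothesis of a toy 4-bit S-box.

```python
import numpy as np

SBOX = np.array([0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2])  # PRESENT S-box

def hw(x):
    return bin(int(x)).count("1")

def contingency(predictions, leakage_bins, n_pred, n_bins):
    """Joint histogram of (model prediction, binned leakage)."""
    table = np.zeros((n_pred, n_bins))
    for p, l in zip(predictions, leakage_bins):
        table[p, l] += 1
    return table

def chi2_stat(table):
    """Pearson Chi^2 statistic against the independence hypothesis."""
    expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
    mask = expected > 0
    return np.sum((table[mask] - expected[mask]) ** 2 / expected[mask])

def mi_stat(table):
    """Plug-in mutual information estimate (in bits) from the same table."""
    p = table / table.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / (px @ py)[mask]))

# Simulated traces: leakage = HW(Sbox(plaintext ^ key)) + Gaussian noise.
rng = np.random.default_rng(0)
true_key, n_traces = 7, 5000
pt = rng.integers(0, 16, n_traces)
leak = np.array([hw(SBOX[p ^ true_key]) for p in pt]) + rng.normal(0, 1.0, n_traces)
bins = np.digitize(leak, np.quantile(leak, np.linspace(0, 1, 7)[1:-1]))   # 6 leakage bins

for k in range(16):                                   # both statistics peak at the true key
    pred = np.array([hw(SBOX[p ^ k]) for p in pt])    # hypothesis-dependent model values
    table = contingency(pred, bins, 5, 6)
    print(k, round(chi2_stat(table), 1), round(mi_stat(table), 4))
```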
A Note on Masking Generic Boolean Functions
Masking is a popular countermeasure to protect cryptographic implementations against side-channel attacks (SCA). In the literature, a myriad of proposed masking schemes can be found. They are typically defined by a masked multiplication, since this can serve as a basic building block for any nonlinear algorithm. However, when masking generic Boolean functions of algebraic degree t, it is very inefficient to construct the implementation from masked multiplications only. Further, it is not immediately clear from the description of a masked multiplication how to efficiently implement a masked Boolean function.
In this work, we fill this gap in the literature with a detailed description and investigation of a generic masking methodology for Boolean functions of any degree t at any security order d.
A Note on Our Submission to Track 4 of iDASH 2019
iDASH is a competition soliciting implementations of cryptographic
schemes of interest in the context of biology. In 2019, one track
asked for multi-party computation implementations of training of a
machine learning model suitable for two datasets from cancer
research. In this note, we describe our solution submitted to the
competition. We found that the training can be run on three AWS
c5.9xlarge instances in less than one minute using MPC tolerating one
semi-honest corruption, and less than ten seconds at a slightly lower
accuracy. After seeing some winning solutions, we have lowered this
figure to less than a second.
Observations on the Quantum Circuit of the SBox of AES
In this paper, we propose some improved quantum circuits to implement the Sbox of AES. Our improved quantum circuits are based on the following strategies. First, we try to find the minimum set of intermediate variables that can be used to compute the 8-bit output of the Sbox. Second, we check whether some wires store intermediate variables and remain idle until the end; the number of qubits can then be reduced by reusing such wires. Third, we try to compute the output of the Sbox without ancilla qubits, because the wires storing the output of the Sbox do not need to be cleaned up; this reduces the number of Toffoli gates. Our first quantum circuit only needs 26 qubits and 46 Toffoli gates, while the quantum circuit proposed by Langenberg \emph{et al.} requires 32 qubits and 55 Toffoli gates. Furthermore, we can also construct a second quantum circuit with 22 qubits and 60 Toffoli gates.
A Note on a Static SIDH Protocol
It is well known, due to the adaptive attack by Galbraith, Petit, Shani, and Ti (GPST), that plain SIDH is insecure in the static setting. Recently, Kayacan's preprint "A Note on the Static-Static Key Agreement Protocol from Supersingular Isogenies", ePrint 2019/815, presented two possible fixes. Protocol A (also known as 2-SIDH, a low-degree instantiation of the more general k-SIDH) has been broken by Dobson, Galbraith, LeGrow, Ti, and Zobernig. In this short note we will show how to break Protocol B in one oracle query per private key bit and local complexity.
On The Distinguishability of Ideal Ciphers
We present distinguishing attacks (based on the Birthday Paradox) which show that the use of permutations for a block cipher is insufficient to obtain a security of bits in the Ideal Cipher Model.
The context is that of an Oracle that can provide an Adversary the ciphertexts of a very small number of known plaintexts under a large number of (session) keys and IVs/nonces.
Our attacks distinguish an ideal cipher from a ``perfectly ideal'' block cipher, realised as an Oracle that can always produce new permutations up to the cardinality of the symmetric group on the block space.
The result is that in order to guarantee that an Adversary which is time limited to encryption requests has only a negligible advantage, the cipher needs to express distinct permutations. This seems to contradict a folklore belief about the security of using a block cipher in the multi-key setting, i.e.\ to obtain -bit security it is sufficient to use - or -bit keys depending on the mode of operation and the use case.
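The flavour of such a birthday-style distinguisher can be conveyed with a small Python simulation (toy block and key sizes chosen by us): query a few fixed plaintexts under many fresh session keys; with an ideal cipher drawn from a bounded key space, two sessions eventually reuse the same permutation and return identical ciphertext tuples, whereas an oracle that keeps sampling fresh permutations essentially never repeats.

```python
import random

def fresh_permutation(block_bits):
    perm = list(range(1 << block_bits))
    random.shuffle(perm)
    return perm

def collision_found(new_session, sessions, plaintexts):
    """Return True iff two sessions produced identical ciphertexts for the known plaintexts."""
    seen = set()
    for _ in range(sessions):
        ct = tuple(new_session()[p] for p in plaintexts)
        if ct in seen:
            return True
        seen.add(ct)
    return False

BLOCK_BITS, KEY_BITS = 12, 12
plaintexts = [0, 1, 2, 3]                    # a very small set of known plaintexts

# Ideal cipher: each of the 2**KEY_BITS keys indexes one fixed random permutation,
# so after roughly 2**(KEY_BITS/2) sessions two keys (hence two permutations) collide.
perm_of_key = {}
def ideal_cipher_session():
    key = random.randrange(1 << KEY_BITS)
    if key not in perm_of_key:
        perm_of_key[key] = fresh_permutation(BLOCK_BITS)
    return perm_of_key[key]

# "Perfectly ideal" oracle: every session gets a brand-new permutation, so the
# ciphertext tuples (essentially) never repeat.
def perfectly_ideal_session():
    return fresh_permutation(BLOCK_BITS)

print("ideal cipher:    collision =", collision_found(ideal_cipher_session, 200, plaintexts))
print("perfectly ideal: collision =", collision_found(perfectly_ideal_session, 200, plaintexts))
```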
Non-Profiled Side Channel Attack based on Deep Learning using Picture Trace
In this paper, we suggest a new format for converting side-channel traces in order to fully utilize deep learning schemes. Since many deep learning schemes have been developed for MNIST-style datasets, we convert raw traces based on float or byte data into picture-formatted traces based on position, so that the best performance can be obtained from deep learning schemes. Although overfitting cannot be avoided in our suggestion, the validation accuracy outperforms previous results of deep-learning-based side-channel analysis. Additionally, we provide a novel criterion for attack success or failure based on a statistical confidence level rather than a rule of thumb. Even though the data storage is slightly increased, our approach completely recovers the correct key, in contrast to previous results. Moreover, our scheme has considerable potential to further improve side-channel attacks.
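A minimal sketch of this kind of conversion (our own reading of a position-based picture format, not necessarily the authors' exact pipeline): each time sample of a raw floating-point trace is quantized to a row index, turning the trace into a binary image whose columns are time and whose rows are amplitude levels.

```python
import numpy as np

def trace_to_picture(trace, height=64):
    """Convert a 1-D side-channel trace into a binary 'picture' of shape (height, len(trace)).

    Each column contains a single 1 at the row corresponding to the quantized
    amplitude of that time sample, giving MNIST-style input for an image classifier.
    """
    trace = np.asarray(trace, dtype=float)
    lo, hi = trace.min(), trace.max()
    rows = np.round((trace - lo) / (hi - lo + 1e-12) * (height - 1)).astype(int)
    picture = np.zeros((height, trace.size), dtype=np.uint8)
    picture[rows, np.arange(trace.size)] = 1
    return picture

# Example: a noisy simulated power trace of 256 samples becomes a 64 x 256 image.
rng = np.random.default_rng(1)
raw = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.normal(size=256)
img = trace_to_picture(raw)
print(img.shape, img.sum())                 # (64, 256), one active pixel per column
```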
SIMS : Self Sovereign Identity Management System with Preserving Privacy in Blockchain
Blockchain, which is a useful tool for providing data integrity, has emerged as an alternative to centralized servers. Concentrating on the integrity of the blockchain, many applications have been developed. Specifically, a blockchain can be utilized in proving the user's identity using its strong integrity. However, since all data in the blockchain is publicly available, it can cause privacy problems if the user's identity is stored in the blockchain unencrypted. Although the encryption of the private information can diminish privacy problems in the blockchain, it is difficult to transparently utilize encrypted user information in the blockchain. To provide integrity and privacy of user information simultaneously in the blockchain,
we propose a SIMS (Self-Sovereign Identity Management System) framework based on a zk-SNARK (zero-knowledge Succinct Non-interactive ARgument of Knowledge). In our proposed SIMS, the user information is employed in a privacy-preserving way due to the zero-knowledge property of the zk-SNARK. We construct a SIMS scheme and prove its security. We describe applications of SIMS and demonstrate its practicality through efficient implementations.
Forward and Backward Private DSSE for Range Queries
Owing to its ability to support searches and updates over an encrypted database, dynamic searchable symmetric encryption (DSSE) has received considerable attention recently. To resist leakage-abuse attacks, a secure DSSE scheme usually requires forward and backward privacy. However, the existing forward and backward private DSSE schemes either only support single-keyword queries or require more interaction between the client and the server. In this paper, we first give a new leakage function for range queries, which is more complicated than the one for single-keyword queries. Furthermore, we propose a concrete forward and backward private DSSE scheme by using a refined binary tree data structure. Finally, a detailed security analysis and extensive experiments demonstrate that our proposal is secure and efficient, respectively.
Computationally Modeling User-Mediated Authentication Protocols
User interaction constitutes a largely unexplored field in protocol analysis, even in instances where the user takes an active role as a trusted third party, such as in Internet of Things (IoT) device initialization protocols. Initiating the study of computational analysis of 3-party authentication protocols where one party is a physical user, this work introduces the 3-party possession user mediated authentication (3-PUMA) model. The 3-PUMA model addresses active user participation in a protocol which is designed to authenticate possession of a fixed data string, such as in IoT device commissioning. To demonstrate the 3-PUMA model in practice, we provide a computational analysis of the ISO/IEC 9798-6:2010 standard's Mechanism 7a authentication protocol, which includes a user interface and interaction as well as a device-to-device channel. We show that the security of ISO/IEC 9798-6:2010 Mechanism 7a relies upon a non-standard MAC security notion, which we term existential unforgeability under key collision attacks (EUF-KCA). It is unknown whether any standardized MAC algorithm achieves EUF-KCA security, indicating a potential vulnerability in the standard.
Linear-Regression on Packed Encrypted Data in the Two-Server Model
Developing machine learning models from federated training data, containing many independent samples, is an important task that can significantly enhance the potential applicability and prediction power of learned models. Since single users, like hospitals or individual labs, typically collect data-sets that do not support accurate learning with high confidence, it is desirable to combine data from several users without compromising data privacy.
In this paper, we develop a privacy-preserving solution for learning a linear regression model from data collectively contributed by several parties (``data owners''). Our protocol is based on the protocol of Giacomelli et al. (ACNS 2018), which utilized two non-colluding servers and Linearly Homomorphic Encryption (LHE) to learn regularized linear regression models. Our method uses a different LHE scheme that allows us to significantly reduce both the number and the runtime of homomorphic operations, as well as the total runtime complexity. Another advantage of our protocol is that the underlying LHE scheme is based on a different (and post-quantum secure) security assumption than that of Giacomelli et al.
Our approach leverages the Chinese Remainder Theorem, and Single Instruction Multiple Data representations, to obtain our improved performance. For a 1000 x 40 linear regression task we can learn a model in a total of 3 seconds for the homomorphic operations, compared to more than 100 seconds reported in the literature. Our approach also scales up to larger feature spaces: we implemented a system that can handle a 1000 x 100 linear regression task, investing minutes of server computing time after a more significant offline pre-processing by the data owners. We intend to incorporate our protocol and implementations into a comprehensive system that can handle secure federated learning at larger scales.
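The packing idea can be illustrated independently of the encryption scheme; the sketch below (our toy parameters) uses the Chinese Remainder Theorem to pack several small values into one plaintext integer, so that a single modular addition acts on all slots at once.

```python
from math import prod

MODULI = [257, 263, 269, 271]          # pairwise-coprime slot moduli (toy choice)
M = prod(MODULI)

def crt_pack(values):
    """Pack one small value per slot into a single integer mod M (CRT / SIMD packing)."""
    x = 0
    for v, m in zip(values, MODULI):
        Mi = M // m
        x = (x + v * Mi * pow(Mi, -1, m)) % M
    return x

def crt_unpack(x):
    return [x % m for m in MODULI]

a, b = crt_pack([1, 2, 3, 4]), crt_pack([10, 20, 30, 40])
# One addition modulo M adds all slots component-wise, which is what makes
# packed homomorphic additions on ciphertexts worthwhile.
print(crt_unpack((a + b) % M))          # [11, 22, 33, 44]
```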
QFactory: classically-instructed remote secret qubits preparation
The functionality of classically-instructed remotely prepared random secret qubits was introduced in (Cojocaru et al 2018) as a way to enable classical parties to participate in secure quantum computation and communication protocols. The idea is that a classical party (client) instructs a quantum party (server) to generate a qubit on the server's side that is random, unknown to the server but known to the client. Such a task is only possible under computational assumptions. In this contribution we define a simpler (basic) primitive consisting of only BB84 states, and give a protocol that realizes this primitive and that is secure against the strongest possible adversary (an arbitrarily deviating malicious server). The specific functions used are constructed from known trapdoor one-way functions, so the security of our basic primitive reduces to the hardness of the Learning With Errors problem. We then give a number of extensions building on this basic module: an extension to a larger set of states (including non-Clifford states); proper consideration of the abort case; and verifiability at the module level. The latter is based on ``blind self-testing'', a notion we introduce, prove in a limited setting, and conjecture to hold in the most general case.
Single-Trace Vulnerability of Countermeasures against Instruction-related Timing Attack
In this paper, we show that countermeasures against instruction-related timing attacks, presented at ISPEC 2017 and CHES 2019, are vulnerable to single-trace attacks. The countermeasures use a determiner to make operations that leak timing side-channel information execute in constant time. Since the determiner values fall into two groups according to the secret credentials, the secret credentials can be recovered by clustering the determiner values into two groups.
Physical Cryptography
We recall a series of physical cryptography solutions and provide the reader with relevant security analyses. We mostly turn our attention to describing attack scenarios against schemes solving Yao's millionaires' problem, protocols for comparing information without revealing it and public key cryptosystems based on physical properties of systems.
Efficient Homomorphic Comparison Methods with Optimal Complexity
Comparison of two numbers is one of the most frequently used operations, but it has been a challenging task to efficiently compute the comparison function in homomorphic encryption (HE), which basically supports only addition and multiplication.
Recently, Cheon et al. (Asiacrypt 2019) introduced a new approximate representation of the comparison function with a rational function, and showed that this rational function can be evaluated by an iterative algorithm. Due to this iterative feature, their method achieves a logarithmic computational complexity compared to previous polynomial approximation methods; however, the computational complexity is still not optimal, and the algorithm is quite slow for large-bit inputs in HE implementation.
In this work, we propose new comparison methods with optimal asymptotic complexity based on composite polynomial approximation. The main idea is to systematically design a constant-degree polynomial by identifying the \emph{core properties} that make a composite polynomial converge to the sign function (equivalent to the comparison function) as the number of compositions increases. We additionally introduce an acceleration method that applies a mixed polynomial composition with another polynomial having different properties instead of . Utilizing the devised polynomials and , our new comparison algorithms only require computational complexity to obtain an approximate comparison result of satisfying within error.
The asymptotic optimality results in substantial performance enhancement: our comparison algorithm on encrypted -bit integers for takes milliseconds in amortized running time, which is times faster than the previous work.
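To convey the composite-polynomial idea without any HE machinery, here is a plain Python sketch (our own toy choice of a degree-3 polynomial, not necessarily the paper's construction): a fixed low-degree polynomial that pushes values toward plus or minus 1 is composed with itself, the iterate approaches the sign function on [-1, 1], and that immediately gives an approximate comparison.

```python
def f(x):
    """Degree-3 polynomial mapping [-1, 1] to [-1, 1] with attracting fixed points at +-1."""
    return (3 * x - x ** 3) / 2

def approx_sign(x, depth):
    """Compose f with itself `depth` times; the result approaches sign(x) on [-1, 1]."""
    for _ in range(depth):
        x = f(x)
    return x

def approx_compare(a, b, depth=14):
    """Approximate comparison of a, b in [0, 1): about 1 if a > b, about 0 if a < b."""
    return (approx_sign(a - b, depth) + 1) / 2

print(approx_compare(0.71, 0.70))       # close to 1
print(approx_compare(0.30, 0.65))       # close to 0
```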
Arbitrary Univariate Function Evaluation and Re-Encryption Protocols over Lifted-ElGamal Type Ciphertexts
Homomorphic encryption (HE) is one of the main tools in secure multiparty computation (MPC), and the (elliptic-curve) lifted-ElGamal cryptosystem is certainly the most efficient among the existing HE schemes. However, the combination of MPC with this most efficient HE has rarely appeared in the literature. This is mainly because the major known techniques for (additively) HE-based MPC are not available for this scheme due to its typical restriction that only a plaintext in a small range can be efficiently decrypted.
In this paper, we resolve this problem. With our technique, a Server holding a lifted-ElGamal ciphertext with an unknown small plaintext can obtain a ciphertext for an arbitrary function with just one round of communication with a semi-honest Client (and two rounds with a malicious Client) holding a decryption key, where is kept secret from both parties. This property greatly enlarges the variety of MPC protocols based on the most efficient lifted-ElGamal cryptosystem. As an application, we implemented MPC for the exact edit distance between two encrypted strings; our experiment for strings of length shows that the protocol takes only seconds in LAN environments and about minutes even in WAN environments. Moreover, our technique also applies to other "lifted-ElGamal type" HE schemes and admits different keys/schemes for the original and the resulting ciphertexts. For example, we can securely convert a level-2 (i.e., after multiplication) ciphertext of some two-level HE schemes into a level-1 (i.e., before multiplication) ciphertext, and securely apply arbitrary functions to encrypted plaintexts for some attribute-based HE schemes. This is the first result (even using communication) realizing these two functionalities.
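For background, the following Python sketch shows a toy additively homomorphic lifted-ElGamal scheme over a small schoolbook group (not the elliptic-curve instantiation used in the paper): the message sits in the exponent, ciphertexts add homomorphically, and decryption ends with a brute-force discrete-log search, which is exactly the small-plaintext restriction discussed above.

```python
import random

# Toy group: p = 2q + 1 with q prime; g generates the order-q subgroup of quadratic residues.
p, q, g = 1019, 509, 4

def keygen():
    x = random.randrange(1, q)              # secret key
    return x, pow(g, x, p)                  # (sk, pk = g^x)

def encrypt(pk, m):
    """Lifted ElGamal: the message m sits in the exponent, giving additive homomorphism."""
    r = random.randrange(1, q)
    return pow(g, r, p), (pow(g, m, p) * pow(pk, r, p)) % p

def add(ct1, ct2):
    """Homomorphic addition: component-wise multiplication of ciphertexts."""
    return (ct1[0] * ct2[0]) % p, (ct1[1] * ct2[1]) % p

def decrypt(sk, ct, max_m=500):
    """Recover g^m, then find m by brute-force search -- hence only small plaintexts."""
    c1, c2 = ct
    gm = (c2 * pow(c1, q - sk, p)) % p      # c2 * c1^{-sk}, using c1^q = 1 in the subgroup
    acc = 1
    for m in range(max_m):
        if acc == gm:
            return m
        acc = (acc * g) % p
    raise ValueError("plaintext out of range")

sk, pk = keygen()
ct = add(encrypt(pk, 17), encrypt(pk, 25))
print(decrypt(sk, ct))                       # 42
```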
Efficient Construction of Nominative Signature Secure under Symmetric Key Primitives and Standard Assumptions on Lattice
Nominative signature is a cryptographic primitive where two parties collude to produce a signature. It is a user certification system and has applications in a variety of sectors where the nominee cannot rely heavily on the nominator to validate the nominee's certificate and only targeted entities are allowed to verify the signature on sensitive data. We provide a new construction for nominative signatures from standard assumptions on lattices. Our construction relies on a collision-resistant preimage sampleable function and symmetric-key primitives, namely a collision-resistant pseudorandom function and the zero-knowledge proof system ZKB++ for Boolean circuits. We provide a detailed security analysis and show that our construction achieves unforgeability, invisibility, security against impersonation, and non-repudiation in the existing model. Furthermore, our construction exhibits non-transferability. Security under non-repudiation is achieved in the quantum random oracle model by applying the Unruh transform to ZKB++.
Distinguishing LWE Instances Using Fourier Transform: A Refined Framework and its Applications
As a fundamental tool in lattice-based cryptosystems, discrete Gaussian samplers play an important role in both the efficiency and the security of lattice-based schemes. The approximate discrete rounded Gaussian sampler, the central binomial sampler, and the bounded uniform sampler are three types of error samplers that are commonly used in the design of various schemes. However, known cryptanalytic results about error samplers concentrate on their standard deviations, and no analysis of the distinct structures of the distributions has been proposed. In this paper, we address this problem by considering the dual attack on LWE instances and investigating the Fourier transforms of these distributions. We introduce the concept of local width, which enables a more detailed look at these distributions and the distinguishing advantages. We analyze the dual attack for different distributions and provide a novel measure model to describe the differences.
Within this refined framework, we also propose a novel type of error sampler which can achieve high efficiency, security as well as flexibility.
Linear-Size Constant-Query IOPs for Delegating Computation
We study the problem of delegating computations via interactive proofs that can be probabilistically checked. Known as *interactive oracle proofs* (IOPs), these proofs extend probabilistically checkable proofs (PCPs) to multi-round protocols, and have received much attention due to their application to constructing cryptographic proofs (such as succinct non-interactive arguments).
We prove that a rich class of NEXP-complete problems, which includes machine computations over large fields and succinctly-described arithmetic circuits, has constant-query IOPs with O(T)-size proofs and polylog(T)-time verification for T-size computations. This is the first construction that simultaneously achieves linear-size proofs and fast verification, regardless of query complexity.
An important metric when using IOPs to delegate computations is the cost of producing the proof. The highly-optimized proof length in our construction enables a very efficient prover, with arithmetic complexity O(T log T). Hence this construction is also the first to simultaneously achieve prover complexity O(T log T) and verifier complexity polylog(T).
Transparent SNARKs from DARK Compilers
We construct a new polynomial commitment scheme for univariate and multivariate polynomials over finite fields, with logarithmic size evaluation proofs and verification time, measured in the number of coefficients of the polynomial. The underlying technique is a Diophantine Argument of Knowledge (DARK), leveraging integer representations of polynomials and groups of unknown order. Security is shown from the strong RSA and the adaptive root assumptions. Moreover, the scheme does not require a trusted setup if instantiated with class groups. We apply this new cryptographic compiler to a restricted class of algebraic linear IOPs, which we call Polynomial IOPs, to obtain doubly-efficient public-coin interactive arguments of knowledge for any NP relation with succinct communication. With linear preprocessing, the online verifier's work is logarithmic in the circuit complexity of the relation.
There are many existing examples of Polynomial IOPs (PIOPs) dating back to the first PCP (BFLS, STOC'91).
We present a generic compilation of any PIOP using our DARK polynomial commitment scheme. In particular, compiling the PIOP from PLONK (GWC, ePrint'19), an improvement on Sonic (MBKM, CCS'19), yields a public-coin interactive argument with quasi-linear preprocessing, quasi-linear (online) prover time, logarithmic communication, and logarithmic (online) verification time in the circuit size. Applying Fiat-Shamir results in a SNARK, which we call *Supersonic*.
Supersonic is also concretely efficient with 10KB proofs and under 100ms verification time for circuits with 1 million gates (estimated for 120-bit security). Most importantly, this SNARK is transparent: it does not require a trusted setup. We obtain zk-SNARKs by applying a hiding variant of our polynomial commitment scheme with zero-knowledge evaluations. Supersonic is the first complete zk-SNARK system that has both a practical prover time as well as asymptotically logarithmic proof size and verification time.
The original proof had a significant gap that was discovered by Block et al. (CRYPTO 2021). The new security proof closes the gap and shows that the original protocol with a slightly adjusted parameter is still secure. Towards this goal, we introduce the notion of almost-special-sound protocols which likely has broader applications.
Guess what?! On the impossibility of unconditionally secure public-key encryption
We (once again) refute recurring claims about a public-key encryption scheme that allegedly provides unconditional security.
This is approached from two angles: We give an information-theoretic proof of impossibility, as well as a concrete attack breaking the proposed scheme in essentially no time.
Dynamic Searchable Encryption with Small Client Storage
We study the problem of dynamic searchable encryption (DSE) with forward-and-backward privacy. Many DSE schemes have been proposed recently but the most efficient ones have one limitation: they require maintaining an operation counter for each unique keyword, either stored locally at the client or accessed obliviously (e.g., with an oblivious map) at the server, during every operation. We propose three new schemes that overcome the above limitation and achieve constant permanent client storage with improved performance, both asymptotically and experimentally, compared to prior state-of-the-art works. In particular, our first two schemes adopt a “static-to-dynamic” transformation which eliminates the need for oblivious accesses during searches. Due to this, they are the first practical schemes with minimal client storage and non-interactive search. Our third scheme is the first quasi-optimal forward-and-backward DSE scheme with only a logarithmic overhead for retrieving the query result (independently of previous deletions). While it does require an oblivious access during search in order to keep permanent client storage minimal, its practical performance is up to four orders of magnitude better than the best existing scheme with quasi-optimal search.
Cube Cryptanalysis of Round-Reduced ACORN
The cube attack is one of the most powerful techniques in cryptanalysis of symmetric cryptographic primitives. The basic idea of cube attack is to determine the value of a polynomial in key bits by summing over a cube (a subset of public variables, e.g., plaintext bits or IV bits). If the degree of the polynomial is relatively low, then we can obtain a low-degree equation in key bits, thus may contribute to reducing the complexity of key recovery.
In this paper, we use cube cryptanalysis to analyze the authenticated stream cipher ACORN (one of the 6 algorithms in the final portfolio of the CAESAR competition), and give some new results in both distinguishing attacks and key recovery attacks. Firstly, we give a new method of finding cube testers, which is based on the greedy algorithm of finding cubes, and the numeric mapping method for estimating the algebraic degree of NFSR-based cryptosystems. We apply it to ACORN, and obtain the best practical distinguishing attacks for its 690-round variant using a cube of size 38, and its 706-round variant using a cube of size 46. Then we theoretically analyze the security bound of ACORN via the division property based cube attack. By exploiting the embedded property, we find some new distinguishers for ACORN, so the zero-sum property of the output of its 775-round variant can be observed with a complexity of . Finally, we propose a key recovery attack on ACORN reduced to 772 rounds. The time complexity to recover the linear superpoly of the 123-dimensional cube is . As far as we know, this is the best key recovery attack on round-reduced ACORN. It is also worth noting that this work does not threaten the security of ACORN.
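The core operation behind these attacks, summing an output bit over all assignments of the cube variables to isolate the superpoly, can be sketched in a few lines of Python (with a toy keyed Boolean function of our own choosing in place of ACORN).

```python
from itertools import product

def cube_sum(f, key, iv_len, cube_indices):
    """Sum f(key, iv) over the cube: all 2^|cube| assignments of the cube IV bits,
    with the non-cube IV bits fixed to zero.  Everything is over GF(2), so the
    sum is an XOR and the result is the superpoly evaluated at `key`."""
    acc = 0
    for assignment in product([0, 1], repeat=len(cube_indices)):
        iv = [0] * iv_len
        for idx, bit in zip(cube_indices, assignment):
            iv[idx] = bit
        acc ^= f(key, iv)
    return acc

# Toy keyed Boolean function with a low-degree superpoly hidden behind the cube v0*v1:
#   f = v0*v1*(k0 ^ k2) ^ v0*k1 ^ v2 ^ k0*k1*k2
def toy_cipher_bit(key, iv):
    k0, k1, k2 = key
    v0, v1, v2 = iv
    return (v0 & v1 & (k0 ^ k2)) ^ (v0 & k1) ^ v2 ^ (k0 & k1 & k2)

key = (1, 0, 1)
# Summing over the cube {v0, v1} cancels every term not containing v0*v1 and
# leaves the superpoly k0 ^ k2, here 1 ^ 1 = 0.
print(cube_sum(toy_cipher_bit, key, iv_len=3, cube_indices=[0, 1]))
```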
Analysis of Nakamoto Consensus, Revisited
In the Bitcoin white paper, Nakamoto proposed a very simple Byzantine fault tolerant consensus algorithm that is also known as Nakamoto consensus. Despite its simplicity, some existing analyses of Nakamoto consensus appear to be long and involved. In this technical report, we aim to make such analysis simple and transparent, so that it can be taught to senior undergraduate and graduate students in our institutions. This report is largely based on a 3-hour tutorial given by one of the authors in June 2019.
Practical Volume-Based Attacks on Encrypted Databases
Recent years have seen an increased interest towards strong security primitives for encrypted databases (such as oblivious protocols), that hide the access patterns of query execution, and reveal only the volume of results. However, recent work has shown that even volume leakage can enable the reconstruction of entire columns in the database. Yet, existing attacks rely on a set of assumptions that are unrealistic in practice: for example, they (i) require a large number of queries to be issued by the user, or (ii) assume certain distributions on the queries or underlying data (e.g., that the queries are distributed uniformly at random, or that the database does not contain missing values).
In this work, we present new attacks for recovering the content of individual user queries, assuming no leakage from the system except the number of results and avoiding the limiting assumptions above. Unlike prior attacks, our attacks require only a single query to be issued by the user for recovering the keyword. Furthermore, our attacks make no assumptions about the distribution of issued queries or the underlying data. Instead, our key insight is to exploit the behavior of real-world applications.
We start by surveying 11 applications to identify two key characteristics that can be exploited by attackers -- (i) file injection, and (ii) automatic query replay. We present attacks that leverage these two properties in concert with volume leakage, independent of the details of any encrypted database system. Subsequently, we perform an attack on the real Gmail web client by simulating a server-side adversary. Our attack on Gmail completes within a matter of minutes, demonstrating the feasibility of our techniques. We also present three ancillary attacks for situations when certain mitigation strategies are employed.
Integrita: Protecting View-Consistency in Online Social Network with Federated Servers
Current designs of Online Social Networks (OSN) deploy a centralized architecture in which a central OSN provider handles all the users' read and write requests over the shared data (e.g., a Facebook wall or a group page). Historical incidents demonstrate that such centralization is leveraged for censorship and for violating view consistency: a corrupted provider deliberately displays different views of the shared data to different users. Integrita provides a data-sharing mechanism that protects view consistency by replacing the centralized architecture with a federated-server model consisting of N potentially malicious providers, N - 1 of which can be colluding. The state of the shared data is modeled by an append-only data structure, stored at the server side, which contains the history of all the operations performed by the users. The consistency of users' views of the shared data depends on their access to the intact log of operations. Integrita guarantees that the servers cannot manipulate the log without being detected by the users. Unlike the state of the art, Integrita accomplishes this neither by using storage-inefficient data replication nor by requiring users to exchange their views. Every user, without relying on the presence of other users, can verify whether his operation has been added to the log and is visible to the rest of the users. We introduce and achieve a new level of view consistency named q-detectable consistency, where any inconsistency between users' views cannot remain undetected for more than q operations, with q a function of the number of servers. This level of consistency is stronger than what centralized counterparts offer. Also, our proposal reduces the storage overhead imposed by replication-based solutions by a multiplicative factor of 1/N. Furthermore, the application of Integrita is not limited to OSNs; it can be integrated into any log-based system, e.g., a version control system.
Sub-Linear Privacy-Preserving Near-Neighbor Search
In Near-Neighbor Search (NNS), a client queries a database (held by a server) for the most similar data (near-neighbors) under a certain similarity metric. The Privacy-Preserving variant (PP-NNS) requires that neither the server nor the client learn anything about the other party's data except what can be inferred from the outcome of the NNS. The overwhelming growth in the size of current datasets and the lack of a truly secure server in the online world render the existing solutions impractical, either due to their high computational requirements or due to unrealistic assumptions that potentially compromise privacy. PP-NNS with query time sub-linear in the size of the database was suggested as an open research direction by Li et al. (CCSW'15). In this paper, we provide the first such algorithm, called Secure Locality Sensitive Indexing (SLSI), which has sub-linear query time and the ability to handle honest-but-curious parties. At the heart of our proposal lies a secure binary embedding scheme generated from a novel probabilistic transformation over a locality sensitive hashing family. We provide information-theoretic bounds for the privacy guarantees and support our theoretical claims with substantial empirical evidence on real-world datasets.
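As background for the binary-embedding component, here is a short Python sketch using the standard random-hyperplane (SimHash) LSH family (our illustration, not the paper's exact probabilistic transformation): each vector is mapped to a short bit string whose Hamming distance reflects the angular similarity of the original vectors, so near-neighbors can be found by comparing codes rather than raw data.

```python
import numpy as np

def lsh_embed(points, num_bits, seed=0):
    """Sign-of-random-projection (SimHash) binary embedding of shape (n, num_bits)."""
    rng = np.random.default_rng(seed)
    hyperplanes = rng.normal(size=(points.shape[1], num_bits))
    return (points @ hyperplanes >= 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(42)
base = rng.normal(size=128)
near = base + 0.1 * rng.normal(size=128)          # a true near-neighbor
far = rng.normal(size=128)                        # an unrelated point
codes = lsh_embed(np.vstack([base, near, far]), num_bits=64)
# Similar vectors agree on most bits, so short binary codes can be compared
# instead of the raw (privacy-sensitive) feature vectors.
print(hamming(codes[0], codes[1]), hamming(codes[0], codes[2]))
```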
Probabilistic Data Structures in Adversarial Environments
Probabilistic data structures use space-efficient representations of data in order to (approximately) respond to queries about the data. Traditionally, these structures are accompanied by probabilistic bounds on query-response errors. These bounds implicitly assume benign attack models, in which the data and the queries are chosen non-adaptively, and independent of the randomness used to construct the representation. Yet probabilistic data structures are increasingly used in settings where these assumptions may be violated.
This work provides a provable-security treatment of probabilistic data structures in adversarial environments. We give a syntax that captures a wide variety of in-use structures, and our security notions support derivation of error bounds in the presence of powerful attacks.
We use our formalisms to analyze Bloom filters, counting (Bloom) filters, and count-min sketch data structures. For the traditional versions of these, our security findings are largely negative; however, we show that simple embellishments (e.g., using salts or secret keys) yield structures that provide provable security with little overhead.
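A minimal sketch of such an embellishment (our own keyed-hash variant, with illustrative parameters): deriving the Bloom filter's hash functions from a secret key, so that an adversary who never sees the key cannot precompute inputs that collide into false positives.

```python
import hashlib
import secrets

class KeyedBloomFilter:
    """Bloom filter whose hash functions are derived from a secret key (salt)."""

    def __init__(self, m_bits=1024, k_hashes=5, key=None):
        self.m, self.k = m_bits, k_hashes
        self.key = key if key is not None else secrets.token_bytes(16)
        self.bits = bytearray(m_bits)

    def _positions(self, item: bytes):
        # Each of the k hash functions is SHA-256 keyed with (secret key, function index).
        for i in range(self.k):
            digest = hashlib.sha256(self.key + bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item: bytes):
        return all(self.bits[pos] for pos in self._positions(item))

bf = KeyedBloomFilter()
bf.add(b"alice@example.com")
print(b"alice@example.com" in bf, b"mallory@example.com" in bf)   # True False (w.h.p.)
```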
Side-channel Attacks on Blinded Scalar Multiplications Revisited
In a series of recent articles (from 2011 to 2017), Schindler et al. show that exponent/scalar blinding is not as effective a countermeasure as expected against side-channel attacks targeting RSA modular exponentiation and ECC scalar multiplication. Precisely, these works demonstrate that if an attacker is able to retrieve many randomizations of the same secret, this secret can be fully recovered even when a significant proportion of the blinded secret bits are erroneous. With a focus on ECC, this paper improves on the best results of Schindler et al. in the specific case of structured-order elliptic curves. Our results show that larger blinding material and higher error rates can be successfully handled by an attacker in practice. This study also opens new directions for this line of work through the proposal of a three-step attack process that isolates the attack-critical path (in terms of complexity and success rate) and hence eases the development of future solutions.
Multi-Locking and Perfect Argument Order: Two Major Improvements of Attribute-Based Encryption~(Long Paper)
Attribute-Based Encryption, proposed by Sahai and Waters in 2007, is a set of promising cryptographic schemes that enable various kinds of fine-grained access control on encrypted data. With a unique encryption key, a user is able to encrypt data for a very specific group of recipients that matches a set of attributes contained inside their decryption keys.
In the current scenario, where personal devices share an increasing volume of private data on the web, such encryption algorithms are more than ever a strong alternative to standard encryption algorithms.
In this paper, we propose two major improvements of ABE, namely the Perfect Argument Order Optimization and Multi-Locking. Multi-Locking ABE is an extension of ABE that enables an access-control policy to be shared across an arbitrary number of entities. We also take a step further toward speeding up ABE by providing the ``Perfect Argument Order Optimization'', which generalizes the ``Fixed Argument Optimization'' of Scott et al. to a much wider range of ABE constructions (and in particular to our Multi-Locking ABE).
Based on those two improvements we propose a construction of the first privacy-preserving Cloud service based on ABE, allowing ephemeral accesses to the data.
The Multi-Locking ABE and the Perfect Argument Order Optimization have been successfully integrated to the OpenABE library, providing a speed-up for a variety of ABE constructions.
On the Efficiency of Software Implementations of Lightweight Block Ciphers from the Perspective of Programming Languages
Lightweight block ciphers are primarily designed for resource-constrained devices. However, due to the service requirements of large-scale IoT networks and systems, the need for efficient software implementations cannot be ruled out. A number of studies have compared software implementations of different lightweight block ciphers on a specific platform, but to the best of our knowledge, this is the first attempt to benchmark various software implementations of a single lightweight block cipher across different programming languages and platforms in a cloud architecture. In this paper, we defined six lookup-table-based software implementations for lightweight block ciphers, with characteristics ranging from memory-optimized to throughput-optimized variants. We carried out a thorough analysis of the two costs associated with each implementation (memory and operations) and discussed possible trade-offs in detail. We coded all six types of implementations for three key settings (64, 80, 128 bits) of LED (a lightweight block cipher) in four programming languages (Java, C#, C++, Python). We highlighted the impact of the choice of implementation type, programming language, and platform by benchmarking the seventy-two implementations for throughput and software efficiency on 32- and 64-bit platforms for two major operating systems (Windows & Linux) on the Amazon Web Services Cloud. The results showed that these choices can affect the efficiency of a cryptographic primitive by a factor as high as 400.
A Scalable Blockchain Based Digital Rights Management System
The internet has the main advantages of transparency and sharing, but it also has the disadvantage that digital content is not protected. Due to the online environment, it is not easy to achieve a well-protected Digital Rights Management (DRM) system. Digital content that is freely allowed to spread online has essentially no value: the content provider only gets a one-time profit when they upload their work to a platform and transfer the rights to the production to the platform. The platform is then assumed to hold the rights, but due to the online availability of the content, anyone can download it and make copies. After this, the value of the digital content drops to zero, because value can only be determined by the difficulty of access to the content, and there is no way to track the leakage or the copyright through the spread of the digital material. In this paper, we propose a distributed media transaction framework for digital rights management based on digital watermarking and a scalable blockchain network model. The first generation of blockchain technology suffers from high latency, low throughput, high transaction cost, high energy and computational power consumption, as well as centralization due to mining pools. In this paper, we mainly focus on removing or mitigating these issues in the original blockchain system to make it suitable for our digital rights management model. Our model allows only authorized users to use online content and provides the original multimedia content. The DRM system also manages the digital content and keeps track of content modifications, copyright transfers, and other transaction trails related to the multimedia data. We use digital watermarking to reclaim the uniqueness and copyright ownership of the off-line content once it is leaked.
Automated Search for Block Cipher Differentials: A GPU-Accelerated Branch-and-Bound Algorithm
Differential cryptanalysis of block ciphers requires the identification of differential characteristics with high probability. For block ciphers with large block sizes and many rounds, identifying these characteristics is computationally intensive. The branch-and-bound algorithm was proposed by Matsui to automate this task. Since then, numerous improvements have been made to the branch-and-bound algorithm by bounding the number of active S-boxes, incorporating a meet-in-the-middle approach, and adapting it to various block cipher architectures. Although mixed-integer linear programming (MILP) has been widely used to evaluate the differential resistance of block ciphers, MILP is still inefficient for clustering individual differential characteristics to obtain differentials (also known as the differential effect). The branch-and-bound method is better suited to the task of trail clustering. However, it requires enhancements before being feasible for block ciphers with large block sizes, especially for a large number of rounds. Motivated by the need for a more efficient branch-and-bound algorithm to search for block cipher differentials, we propose a GPU-accelerated branch-and-bound algorithm. The proposed approach substantially increases the performance of the differential cluster search. We were able to derive a branch enumeration and evaluation kernel that is 5.95 times faster than its CPU counterpart. To showcase its practicality, the proposed algorithm is applied to TRIFLE-BC, a 128-bit block cipher. By incorporating a meet-in-the-middle approach with the proposed GPU kernel, we were able to improve the search efficiency (on 20 rounds of TRIFLE-BC) by approximately 58 times compared to the CPU-based approach. Differentials consisting of up to 50 million individual characteristics can be constructed for 20 rounds of TRIFLE, leading to slight improvements of the overall differential probabilities. Even for larger numbers of rounds (43 rounds), the proposed algorithm is still able to construct large clusters of over 500 thousand characteristics. This result demonstrates the practicality of the proposed algorithm in constructing large differentials even for a 128-bit block cipher, and it could be used to improve cryptanalytic results against other block ciphers in the future.
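To illustrate what clustering characteristics into a differential means, here is a deliberately tiny Python sketch (a single 4-bit S-box iterated for a few rounds, with a plain probability-threshold prune instead of Matsui's full bound; nothing here models TRIFLE-BC or the GPU kernel): all characteristics from a fixed input difference to a fixed output difference whose probability stays above a bound are enumerated recursively and summed.

```python
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]   # PRESENT S-box

def ddt_probs(sbox):
    """Transition probabilities Pr[dx -> dy] through one S-box application."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            table[dx][sbox[x] ^ sbox[x ^ dx]] += 1
    return [[c / n for c in row] for row in table]

def cluster(probs, din, dout, rounds, bound):
    """Sum the probabilities of all characteristics din -> ... -> dout over `rounds`
    S-box layers whose running probability stays above `bound`.  The sum of the
    surviving characteristics lower-bounds the differential probability."""
    total = 0.0
    def recurse(diff, r, prob):
        nonlocal total
        if r == rounds:
            if diff == dout:
                total += prob
            return
        for nxt, p in enumerate(probs[diff]):
            if p and prob * p >= bound:                # branch-and-bound style pruning
                recurse(nxt, r + 1, prob * p)
    recurse(din, 0, 1.0)
    return total

probs = ddt_probs(SBOX)
print(cluster(probs, din=0x7, dout=0x1, rounds=4, bound=2 ** -11))
```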
Anonyma: Anonymous Invitation-Only Registration in Malicious Adversarial Model
In invitation-based systems, a new user can register upon having a threshold number of invitations issued by the existing members. The newcomer hands his invitations to the system administrator who verifies whether the invitations are issued by legitimate members. This causes the administrator to be aware of who is invited by whom. However, the inviter-invitee relationship is privacy-sensitive information and can lead to inference attacks where the invitee's profile (e.g., political view or location) can be extracted through the profiles of his inviters. Addressing this problem, we propose Anonyma, an anonymous invitation-based system, where a corrupted administrator, who may even collude with a subset of existing members, is not able to figure out who is invited by whom. We formally define and prove the inviter anonymity as well as unforgeability of invitations against a malicious and adaptive adversary. Our design only incurs a constant cost to authenticate a new registration. This is significantly better than similar works where the generation of invitations and verification of new registration cause an overhead linear in the total number of existing members. Besides, Anonyma is efficiently scalable in the sense that once a user joins the system, the administrator can instantly, and without re-keying the existing members, issue credentials for the newcomer to be able to act as an inviter. We additionally design Anonymax, an anonymous cross-network invitation-based system empowering third-party authentication where the invitations issued by the members of one system can be used for registering to another system.
A New Secure and Efficient Ownership Transfer Protocol based on Quadric Residue and Homomorphic Encryption
In systems equipped with radio frequency identification (RFID) technology, several security concerns may arise when the ownership of a tag should be transferred from one owner to another, e.g., the confidentiality of information related to the old owner or the new owner. Therefore, this transfer is usually done via a security protocol called the ownership transfer protocol. If the ownership of several things together transmitted from one owner to another during a single session, the protocol is referred to as the group ownership transfer protocol.
Lee et al. recently proposed a new group ownership transfer protocol by using cloud server, as a trusted third-party, and based on homomorphic encryption and quadratic residue. In this paper, at first, we explain some important security attacks against this recently proposed RFID group ownership transfer protocol. The success probability of any attack that is presented in this paper is and the complexity is just a run of the protocol.
Zhu et al. also proposed a lightweight anonymous group ownership transfer protocol in order to provide the simultaneous transfer of a group of tags in a multi-owner environment. In this paper, we show that it suffers from a desynchronization attack. The success probability of this attack is 1 and its complexity is only five runs of the group ownership transfer protocol.
In addition, to overcome the security weaknesses of the Lee et al. protocol, we present a new group ownership transfer protocol that is resistant against all known active and passive attacks, including those presented in this paper. Security proofs through informal methods as well as formal methods, namely Burrows-Abadi-Needham (BAN) logic and the Scyther tool, show the proposed protocol's security.
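For readers unfamiliar with the quadratic-residue building block used in this line of protocols, the following small Python sketch shows the generic mechanism with demo-sized parameters (it is not the protocol of Lee et al., Zhu et al., or this paper): a party that knows the factorisation of n = p*q can recover a tag's committed nonce from its square modulo n, which is infeasible for an eavesdropper who only sees n.

    from random import randrange

    # demo primes with p, q = 3 (mod 4); real deployments use much larger primes
    p, q = 10007, 10039
    n = p * q

    def sqrt_mod_prime(a, pr):
        """A square root of a quadratic residue a modulo a prime pr with pr % 4 == 3."""
        return pow(a, (pr + 1) // 4, pr)

    def crt(rp, rq):
        """Combine residues modulo p and q into a residue modulo n."""
        return (rq + q * ((rp - rq) * pow(q, -1, p) % p)) % n

    # Tag side: commit to a secret nonce m by sending c = m^2 mod n.
    m = randrange(2, n)
    c = pow(m, 2, n)

    # Owner side (knows p and q): compute the four square roots of c,
    # one of which is the tag's nonce.
    rp, rq = sqrt_mod_prime(c % p, p), sqrt_mod_prime(c % q, q)
    roots = {crt(rp, rq), crt(-rp % p, rq), crt(rp, -rq % q), crt(-rp % p, -rq % q)}
    print("nonce recovered:", m in roots)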
Exploring Lightweight Efficiency of ForkAES
Recently, the ForkAES construction was proposed by Andreeva et al. for efficiently performing authenticated encryption of very short messages on next-generation IoT devices. The ForkAES tweakable block cipher uses around one and a half AES encryption calls to produce a pair of ciphertexts for any given plaintext. However, the only downside of the construction is that it needs to store an extra 128-bit state in addition to the storage elements required to perform AES encryption. Thus, a hardware implementation of ForkAES requires additional circuit area to accommodate the extra state.
In this paper, we first show that it is possible to implement ForkAES without any additional storage elements beyond those required to implement AES, provided the AES circuit can additionally perform decryption. Such an implementation naturally requires more clock cycles to perform ForkAES operations. We extend the recently proposed Atomic AES v2.0 architecture to realize ForkAES and compare the area-latency trade-offs incurred with and without additional storage. The area of the most compact ForkAES design is about 1.2 times that of AES.
In the second part of the paper we look at another important parameter of lightweight efficiency, namely energy. It is well known that round-based constructions for AES are the most energy efficient ones. We extend the so-called S3K2 construction of Banik et al. (IEEE HOST 2017) to realize ForkAES in an energy-preserving manner and compare the effects of some design choices. The energy consumption of our best ForkAES design is about 2 times that of AES. From a lightweight design perspective, our results hence demonstrate that although ForkAES lives up to its promise in terms of area (roughly 1.5 times that of AES), the same does not hold for its energy consumption.
Swap and Rotate: Lightweight linear layers for SPN-based blockciphers
In CHES 2017, Jean et al. presented a paper on ``Bit-Sliding'' in which the authors proposed lightweight constructions for SPN-based block ciphers such as AES, PRESENT, and SKINNY. The main idea behind these constructions was to reduce the width of the datapath to 1 bit and to reformulate the linear layer of these ciphers so that they require fewer scan flip-flops (which have built-in multiplexer functionality and are therefore larger in area than simple flip-flops). In this paper, we develop their idea further in a few separate directions.
First, we prove that given an arbitrary linear transformation, it is always possible to construct the linear layer using merely 2 scan flip-flops. This points to an optimistic avenue for further GE reductions, yet a straightforward application of the techniques in our proof to PRESENT and GIFT leads to inefficient implementations of the linear layer, as restricting ourselves to the 2-scan-flip-flop setting requires thousands of clock cycles and leads to very high latency.
Equipped with the well-established formalism of permutation groups, we explore whether we can reduce the number of clock cycles to a practical level, i.e. a few hundred, by adding a few more pairs of scan flip-flops. For PRESENT, we show that 4 (resp. 8, 12) scan flip-flops are sufficient to complete the permutation layer in 384 (resp. 256, 128) clock cycles. For GIFT, we show that 4 (resp. 8, 10) scan flip-flops correspond to 320 (resp. 192, 128) clock cycles. Hence our results give a good balance between throughput and circuit area: we are able to construct circuits with reasonable throughput (increasing the latency at most threefold) that are smaller than the state-of-the-art implementations of PRESENT and GIFT.
Finally, in order to provide the best of both worlds (i.e. circuit area and latency), we push our scan flip-flop choices even further to completely eliminate the latency incurred by the permutation layer, without compromising our stringent GE budget. We show that not only are 12 scan flip-flops sufficient to execute the PRESENT permutation in 64 clock cycles, but the same scan flip-flops can also be readily used in a combined encryption-decryption circuit. Our final designs of PRESENT and GIFT beat the records of Jean et al. and Banik et al. in both the latency and the circuit-size metrics. We believe that the techniques presented in our work can also be used to choose bit-sliding-friendly linear-layer permutations for future SPN-based designs.
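The permutation-group observation behind such constructions can be illustrated in a few lines of Python: a datapath that rotates the state by one cell every clock cycle and may optionally activate a single swap between two adjacent cells (the analogue of one pair of scan flip-flops) already reaches every permutation of the state, at the price of a long schedule. The sketch below is a toy 6-cell simulation of this reachability/latency trade-off, not the PRESENT or GIFT schedules from the paper.

    from collections import deque
    from math import factorial

    N = 6   # toy state size; PRESENT and GIFT states are far larger

    def rotate(state):
        """One clock cycle with the swap inactive: cyclic rotation by one cell."""
        return state[1:] + state[:1]

    def swap_rotate(state):
        """One clock cycle with the swap of cells 0 and 1 active, then rotation."""
        s = (state[1], state[0]) + state[2:]
        return s[1:] + s[:1]

    # Breadth-first search over all cell permutations reachable from the identity.
    start = tuple(range(N))
    dist = {start: 0}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for step in (rotate, swap_rotate):
            nxt = step(state)
            if nxt not in dist:
                dist[nxt] = dist[state] + 1
                queue.append(nxt)

    print("reachable permutations:", len(dist), "of", factorial(N))
    print("worst-case schedule length:", max(dist.values()), "clock cycles")

Every permutation turns out to be reachable, but the worst-case schedule with a single swap position is long, consistent with the thousands of clock cycles mentioned above for the 2-scan-flip-flop setting; adding more swap positions (more scan flip-flop pairs) is what brings the schedules down to the cycle counts reported in the abstract.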
Topology-Hiding Computation for Networks with Unknown Delays
Topology-Hiding Computation (THC) allows a set of parties to securely compute a function over an incomplete network without revealing information on the network topology. Since its introduction in TCC'15 by Moran et al., the research on THC has focused on reducing the communication complexity, allowing larger graph classes, and tolerating stronger corruption types.
All of these results consider a fully synchronous model with a known upper bound on the maximal delay of all communication channels. Unfortunately, in any realistic setting this bound has to be extremely large, which makes all fully synchronous protocols inefficient. In the literature on multi-party computation, this is solved by considering the fully asynchronous model. However, THC is unachievable in this model (and even hard to define), leaving even the definition of a meaningful model as an open problem.
The contributions of this paper are threefold. First, we introduce a meaningful model of unknown and random communication delays for which THC is both definable and achievable. The probability distributions of the delays can be arbitrary for each channel, but one needs to make the (necessary) assumption that the delays are independent. The existing fully-synchronous THC protocols do not work in this setting and would, in particular, leak information about the topology. Second, in the model with trusted stateless hardware boxes introduced at Eurocrypt'18 by Ball et al., we present a THC protocol that works for any graph class. Third, we explore what is achievable in the standard model without trusted hardware and present a THC protocol for specific graph types (cycles and trees) secure under the DDH assumption. The speed of all protocols scales with the actual (unknown) delay times, in contrast to all previously known THC protocols whose speed is determined by the assumed upper bound on the network delay.
Adaptive Security of Practical Garbling Schemes
A garbling scheme enables one to garble a circuit C and an input x in such a way that C(x) can be evaluated, but nothing else is revealed. Since the first construction by Yao, there have been tremendous practical efficiency improvements for selectively secure garbling schemes, where the adversary is forced to choose both the input and the circuit to be garbled at the same time. However, in the more realistic setting of adaptive security, where an adversary can choose the input adaptively based on the garbled circuit, not much is known about practical efficiency improvements.
In this work, we initiate the study of practical garbling schemes that are both more efficient than Yao's construction and adaptively secure. We provide insights into characteristics of these schemes and highlight the limitations of current techniques for proving adaptive security in this regime. Furthermore, we present an adaptively secure garbling scheme that garbles XOR gates with 2 and AND gates with 3 ciphertexts per gate, thus providing the first practical garbling scheme with adaptive security based on PRFs whose garbled circuit size is smaller than that of Yao's construction.
On collisions related to an ideal class of order 3 in CSIDH
CSIDH is an isogeny-based key exchange, which is a candidate for post-quantum cryptography. It uses the action of an ideal class group on the Fp-isomorphism classes of supersingular elliptic curves. In CSIDH, the ideal classes are represented by vectors with integer coefficients. The number of ideal classes represented by these vectors determines the security level of CSIDH. Therefore, it is important to investigate the correspondence between the vectors and the ideal classes. Heuristics show that integer vectors in a certain range represent “almost” uniformly all of the ideal classes. However, the precise correspondence between the integer vectors and the ideal classes is still unclear. In this paper, we investigate the correspondence between the ideal classes and the integer vectors and show that the vector (1, . . . , 1) corresponds to an ideal class of order 3. Consequently, the integer vectors in CSIDH have collisions related to this ideal class. Here, we use the word “collision” in the sense of distinct vectors belonging to the same ideal class, i.e., distinct secret keys that correspond to the same public key in CSIDH. We further propose a new ideal representation in CSIDH that does not include these collisions and give formulae for efficiently computing the action of the new representation.
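Spelled out in the standard CSIDH exponent-vector notation (the symbols below are assumed for illustration, not quoted from the paper), the collision has the following shape: a secret vector $(e_1,\dots,e_n)$ represents the ideal class $[\mathfrak{l}_1^{e_1}\cdots\mathfrak{l}_n^{e_n}]$, and if the class represented by $(1,\dots,1)$ has order 3, then
\[
  [\mathfrak{l}_1\cdots\mathfrak{l}_n]^3 = [1]
  \quad\Longrightarrow\quad
  [\mathfrak{l}_1^{e_1}\cdots\mathfrak{l}_n^{e_n}]
  = [\mathfrak{l}_1^{e_1+3}\cdots\mathfrak{l}_n^{e_n+3}],
\]
so the distinct secret vectors $(e_1,\dots,e_n)$ and $(e_1+3,\dots,e_n+3)$ act identically on the base curve and therefore yield the same public key.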
Towards Post-Quantum Secure Symmetric Cryptography: A Mathematical Perspective
We introduce an independent research project on symmetric cryptography with a focus on foreseeable industrial needs and higher post-quantum security compared to currently used symmetric algorithms. It was initiated by the independent IT-Security experts Kovac and Underhill. The result is the new symmetric cryptographic algorithm eAES, which is intended to be a stronger brother of the widely used Advanced Encryption Standard, the standardized version of the Rijndael algorithm.
In this analysis we show that eAES offers an enhanced complexity by a factor ≥ 2^126 against quantum cryptanalysis with Grover's search algorithm, compared to AES with 256-bit keys. Furthermore, we outline the basic facts and open questions regarding quantum algebraic attacks on eAES.
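For orientation, recall the generic Grover baseline (a standard fact, not a claim from the paper): exhaustive quantum key search over a $k$-bit key space needs about
\[
  \frac{\pi}{4}\sqrt{2^{k}} \;\approx\; 2^{k/2}
\]
cipher evaluations, i.e., on the order of $2^{128}$ evaluations for a 256-bit key; the factor of at least $2^{126}$ claimed above is stated relative to Grover-based key search on AES with 256-bit keys.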
Behind multiple trapdoors: A cryptographic commitment scheme for establishing secure communications
This paper introduces a cryptographic commitment scheme using multiple platform groups, where the attacker must solve multiple instances of the Discrete Logarithm Problem to break the scheme. The goal is for Alice and Bob to establish a secret communication after verifying a finite set of values that have been computed using multiple trapdoor-permutation functions. Moreover, applicable cryptanalytic techniques and the procedures that entirely define this scheme are discussed.
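As a generic illustration of the multiple-trapdoor idea (a Pedersen-style commitment replicated over two independent groups, sketched with assumed toy parameters, and not the scheme proposed in the paper): the committed value can only be equivocated by computing a discrete logarithm in every group used.

    from random import randrange

    # Two independent toy groups: the prime-order-q subgroup of Z_p^* for a safe
    # prime p = 2q + 1 (demo-sized; real instantiations use much larger parameters).
    # In each group, g and h generate the subgroup and log_g(h) is meant to be unknown.
    GROUPS = [
        # (p, q, g, h)
        (2039, 1019, 4, 9),
        (2063, 1031, 4, 25),
    ]

    def commit(m):
        """Commit to m by publishing a Pedersen-style commitment in every group."""
        commitments, randomness = [], []
        for p, q, g, h in GROUPS:
            r = randrange(q)
            commitments.append(pow(g, m % q, p) * pow(h, r, p) % p)
            randomness.append(r)
        return commitments, randomness

    def verify(m, commitments, randomness):
        return all(
            c == pow(g, m % q, p) * pow(h, r, p) % p
            for (p, q, g, h), c, r in zip(GROUPS, commitments, randomness)
        )

    m = 42
    coms, rands = commit(m)
    print("opens correctly:      ", verify(m, coms, rands))
    print("rejects another value:", not verify(43, coms, rands))

Binding in each group rests on not knowing the discrete logarithm of h with respect to g, so cheating on the opened value requires solving one such instance per group.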
High-Speed Modular Multipliers for Isogeny-Based Post-Quantum Cryptography
As one of the post-quantum protocol candidates, the supersingular isogeny key encapsulation (SIKE) protocol offers promising public and secret key sizes compared to other candidates. Nevertheless, its considerable computational cost forms the bottleneck and limits its practical applications. Modular multiplication operations occupy a large proportion of the overall computation required by the SIKE protocol, and the VLSI implementation of a high-speed modular multiplier remains a big challenge. In this paper, we propose three improved modular multiplication algorithms based on an unconventional radix for this protocol, all of which cost about 20% fewer computations than the prior art. Besides, a multi-precision scheme is also introduced for the proposed algorithms to improve their scalability in hardware implementation, resulting in three further algorithms. We then present very efficient high-speed modular multiplier architectures for the six algorithms. It is shown that these new architectures can be highly optimized and extensively pipelined to obtain high throughput thanks to the adopted overlapping processing scheme. The FPGA implementation results show that the proposed multipliers without the multi-precision scheme all achieve about 60 times higher throughput than the state-of-the-art design (the FFM2 multiplier), and those with the multi-precision scheme all achieve almost 10 times higher throughput than that design. Meanwhile, each of the multi-precision based designs has almost the same resource consumption as the FFM2 design.
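For reference, the sketch below shows plain radix-$2^k$ Montgomery modular multiplication in Python, the generic baseline operation that hardware multipliers of this kind accelerate; the paper's unconventional-radix algorithms and pipelined architectures are not reproduced here, and the toy modulus merely mimics the SIKE prime shape $2^{a}\,3^{b}-1$.

    def mont_mul(a, b, p, R, p_neg_inv):
        """Montgomery product a*b*R^{-1} mod p, for odd p and R a power of two > p."""
        t = a * b
        m = (t * p_neg_inv) % R          # m * p = -t (mod R)
        u = (t + m * p) // R             # exact division by construction
        return u - p if u >= p else u

    p = 2**8 * 3**5 - 1                  # toy modulus of SIKE-like shape 2^a * 3^b - 1
    R = 1 << p.bit_length()              # Montgomery radix, a power of two above p
    p_neg_inv = (-pow(p, -1, R)) % R     # precomputed -p^{-1} mod R

    to_mont = lambda x: (x * R) % p
    from_mont = lambda x: mont_mul(x, 1, p, R, p_neg_inv)

    a, b = 12345, 54321
    result = from_mont(mont_mul(to_mont(a), to_mont(b), p, R, p_neg_inv))
    print(result == (a * b) % p)         # True: the pipeline computes a*b mod p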
Secure Multi-party Quantum Computation with a Dishonest Majority
The cryptographic task of secure multi-party (classical) computation has received a lot of attention in the last decades. Even in the extreme case where a computation is performed between k mutually distrustful players, and security is required even for the single honest player if all other players are colluding adversaries, secure protocols are known. For quantum computation, on the other hand, protocols allowing arbitrary dishonest majority have only been proven for k = 2. In this work, we generalize the approach taken by Dupuis, Nielsen and Salvail (CRYPTO 2012) in the two-party setting to devise a secure, efficient protocol for multi-party quantum computation for any number of players k, and prove security against up to k − 1 colluding adversaries. The quantum round complexity of the protocol for computing a quantum circuit with g gates acting on w qubits is O((w + g)k). To achieve efficiency, we develop a novel public verification protocol for the Clifford authentication code, and a testing protocol for magic-state inputs, both using classical multi-party computation.
Efficient simulation of random states and random unitaries
We consider the problem of efficiently simulating random quantum states and random unitary operators, in a manner which is convincing to unbounded adversaries with black-box oracle access.
This problem has previously only been considered for restricted adversaries. Against adversaries with an a priori bound on the number of queries, it is well-known that t-designs suffice. Against polynomial-time adversaries, one can use pseudorandom states (PRS) and pseudorandom unitaries (PRU), as defined in a recent work of Ji, Liu, and Song; unfortunately, no provably secure construction is known for PRUs.
In our setting, we are concerned with unbounded adversaries. Nonetheless, we are able to give stateful quantum algorithms which simulate the ideal object in both settings of interest. In the case of Haar-random states, our simulator is polynomial-time, has negligible error, and can also simulate verification and reflection through the simulated state. This yields an immediate application to quantum money: a money scheme which is information-theoretically unforgeable and untraceable. In the case of Haar-random unitaries, our simulator takes polynomial space, but simulates both forward and inverse access with zero error.
These results can be seen as the first significant steps in developing a theory of lazy sampling for random quantum objects.
Authentication in Key-Exchange: Definitions, Relations and Composition
We present a systematic approach to define and study authentication notions in authenticated key-exchange protocols. We propose and use a flexible and expressive predicate-based definitional framework. Our definitions capture key and entity authentication, in both implicit and explicit variants, as well as key and entity confirmation, for authenticated key-exchange protocols. In particular, we capture critical notions in the authentication space such as key-compromise impersonation resistance and security against unknown key-share attacks. We first present and explore these definitions within the Bellare-Rogaway model and then extend them to Canetti-Krawczyk-style models.
We then show two useful applications of our framework. First, we look at the authentication guarantees of three representative protocols to draw several useful lessons for protocol design. The core technical contribution of this paper is then to formally establish that composition of secure implicitly authenticated key-exchange with subsequent confirmation protocols yields explicit authentication guarantees. Without a formal separation of implicit and explicit authentication from secrecy, a proof of this folklore result could not have been established.
Rational isogenies from irrational endomorphisms
In this paper, we introduce a polynomial-time algorithm to compute a connecting ideal between two supersingular elliptic curves over Fp with a common Fp-rational endomorphism ring, given a description of their full endomorphism rings. This algorithm provides a reduction of the security of the CSIDH cryptosystem to the problem of computing endomorphism rings of supersingular elliptic curves. A similar reduction for SIDH appeared at Asiacrypt 2016, but relies on totally different techniques. Furthermore, we also show that any supersingular elliptic curve constructed using the complex-multiplication method can be located precisely in the supersingular isogeny graph by explicitly deriving a path to a known base curve. This result prohibits the use of such curves as a building block for a hash function into the supersingular isogeny graph.
Efficient Redactable Signature and Application to Anonymous Credentials
Let us assume that Alice has received a constant-size signature on a set of messages from some organization. Depending on the situation, Alice might need to disclose, prove relations about, or hide some of these messages. Ideally, the complexity of the corresponding protocols should not depend on the hidden messages. In particular, if Alice wants to disclose only a subset of the messages, then the authenticity of the latter should be verifiable in a number of operations that depends only on the size of that subset.
Many solutions were proposed over the past decades, but they only provide a partial answer to this problem. In particular, we note that they suffer either from the need to prove knowledge of the hidden elements or from the inability to prove that the latter satisfy some relations.
In this paper, we propose a very efficient constant-size redactable signature scheme that addresses all the problems above. Signatures can indeed be redacted to remain valid only on a subset of the signed messages. The resulting redacted signature consists of 4 elements and can be verified with a number of exponentiations essentially proportional to the number of disclosed messages. Different showings of the same signature can moreover be made unlinkable, leading to a very efficient anonymous credentials system.
A note on short invertible ring elements and applications to cyclotomic and trinomial number fields
Ring-SIS based Sigma-protocols require the construction of a challenge set in some ring, usually an order in a number field. These protocols impose various requirements on this set, and constructing a good or even optimal challenge set is a non-trivial task that involves making various trade-offs.
In particular, the challenge set should be 'large', its elements should be 'small', differences of distinct elements should be invertible modulo a rational prime, this prime should be small, and finally primes that split into many factors are favorable. Clearly, these requirements involve certain trade-offs. The splitting behavior and the size of the prime, for example, influence the invertibility condition.
Given an order in an arbitrary number field, this work aims at constructing challenge sets with precisely the above-mentioned properties. Cyclotomic number fields possess some convenient properties, and as a result most protocols are defined over these specific fields. However, recent attacks have shown that these convenient properties might also be of use to an attacker, thereby weakening or even breaking the cryptographic schemes.
In this paper, we show that a known result on constructing challenge sets in cyclotomic number fields (Lyubashevsky and Seiler, 2018) follows from standard Galois theory, thereby simplifying the proof. In addition, this approach leads to a natural generalization from cyclotomic to arbitrary number fields. Along the way we prove a conjectured result on the practical applicability for cyclotomic number fields and prove the optimality of certain constructions. We apply the generalization to construct challenge sets in trinomial number fields, i.e., number fields defined by an irreducible trinomial. Finally, we give a new construction for challenge sets resulting in smaller prime sizes at the cost of slightly increasing the norm of the challenges.
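The invertibility requirement discussed above is governed by the standard Chinese-remainder decomposition; stated in generic notation (assumed here for illustration, not quoted from the paper), for a ring $R = \mathbb{Z}[X]/(f)$ and a rational prime $p$ modulo which $f$ factors into distinct irreducible polynomials,
\[
  R/pR \;\cong\; \mathbb{F}_p[X]/(f_1) \times \cdots \times \mathbb{F}_p[X]/(f_k),
  \qquad f \equiv f_1 \cdots f_k \pmod{p},
\]
so the difference of two challenges is invertible modulo $p$ exactly when its image in every factor is nonzero. Primes that split into fewer factors, each of larger degree, make it harder for a short nonzero element to vanish in one of the factors, which is why the splitting behavior and the size of the prime interact with the size requirement on the challenges.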