All papers in 2016 (Page 3 of 1195 results)

Last updated:  2017-02-15
Leakage-Resilient and Misuse-Resistant Authenticated Encryption
Francesco Berti, François Koeune, Olivier Pereira, Thomas Peters, François-Xavier Standaert
Leakage-resilience and misuse-resistance are two important properties for the deployment of authenticated encryption schemes. They aim at mitigating the impact of implementation flaws due to side-channel leakages and misused randomness. In this paper, we discuss their interactions and incompatibilities. For this purpose, we first show a generic composition mode of a MAC with an encryption scheme that leads to a misuse-resistant authenticated encryption scheme, and also show that misuse-resistance no longer holds in the presence of leakages, even when relying on leakage-resilient MACs and encryption schemes. Next, we argue that full misuse-resistance with leakage may be impossible to achieve with simple primitives such as hash functions and block ciphers. As a result, we formalize a new security notion of ciphertext integrity with misuse and leakage, which seems to be the best that can be achieved in a symmetric cryptographic setting, and describe the first efficient constructions satisfying it.
Last updated:  2017-02-28
Measuring small subgroup attacks against Diffie-Hellman
Luke Valenta, David Adrian, Antonio Sanso, Shaanan Cohney, Joshua Fried, Marcella Hastings, J. Alex Halderman, Nadia Heninger
Several recent standards, including NIST SP 800-56A and RFC 5114, advocate the use of “DSA” parameters for Diffie-Hellman key exchange. While it is possible to use such parameters securely, additional validation checks are necessary to prevent well-known and potentially devastating attacks. In this paper, we observe that many Diffie-Hellman implementations do not properly validate key exchange inputs. Combined with other protocol properties and implementation choices, this can radically decrease security. We measure the prevalence of these parameter choices in the wild for HTTPS, POP3S, SMTP with STARTTLS, SSH, IKEv1, and IKEv2, finding millions of hosts using DSA and other non-“safe” primes for Diffie-Hellman key exchange, many of them in combination with potentially vulnerable behaviors. We examine over 20 open-source cryptographic libraries and applications and observe that until January 2016, not a single one validated subgroup orders by default. We found feasible full or partial key recovery vulnerabilities in OpenSSL, the Exim mail server, the Unbound DNS client, and Amazon’s load balancer, as well as susceptibility to weaker attacks in many other applications.
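To make the missing validation concrete, the following minimal sketch (ours; the function name and structure are illustrative, not taken from any of the surveyed libraries) shows the subgroup-order check for DSA-style parameters (p, q, g), where q is the order of the subgroup generated by g:

    # Sketch of the key-exchange input validation whose absence the
    # paper measures; names are illustrative, not from any library.
    def validate_dh_public_value(y: int, p: int, q: int) -> bool:
        # Range check: rejects the trivial values 0, 1 and p-1.
        if not (1 < y < p - 1):
            return False
        # Subgroup-order check: y must lie in the order-q subgroup.
        # Skipping this exponentiation is exactly the gap reported
        # above; a malicious peer can then submit a small-subgroup
        # element and learn the victim's secret modulo its order.
        return pow(y, q, p) == 1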
Last updated:  2017-04-02
Improving Authenticated Dynamic Dictionaries, with Applications to Cryptocurrencies
Leonid Reyzin, Dmitry Meshkov, Alexander Chepurnoy, Sasha Ivanov
We improve the design and implementation of two-party and three-party authenticated dynamic dictionaries and apply these dictionaries to cryptocurrency ledgers. A public ledger (blockchain) in a cryptocurrency needs to be easily verifiable. However, maintaining a data structure of all account balances, in order to verify whether a transaction is valid, can be quite burdensome: a verifier who does not have the large amount of RAM required for the data structure will perform slowly because of the need to continually access secondary storage. We demonstrate experimentally that authenticated dynamic dictionaries can considerably reduce verifier load. On the other hand, per-transaction proofs generated by authenticated dictionaries increase the size of the blockchain, which motivates us to find a solution with the most compact proofs. Our improvements to the design of authenticated dictionaries reduce proof size and speed up verification by 1.4-2.5 times, making them better suited for the cryptocurrency application. We further show that proofs for multiple transactions in a single block can be compressed together, reducing their total length by approximately an additional factor of 2. We simulate blockchain verification, and show that our verifier can be about 20 times faster than a disk-bound verifier under a realistic transaction load.
Last updated:  2016-10-17
Comparing Sboxes of Ciphers from the Perspective of Side-Channel Attacks
Liran Lerman, Olivier Markowitch, Nikita Veshchikov
Side-channel attacks exploit physical characteristics of implementations of cryptographic algorithms in order to extract sensitive information such as the secret key. These physical attacks are among the most powerful attacks against real-world cryptosystems. This paper analyses the non-linear part (called Sboxes) of ciphers, which is often targeted by implementation attacks. We analyse Sboxes of several candidates that were submitted to the competition on authenticated encryption (CAESAR) as well as several other ciphers. We compare theoretical metrics with results from simulations and with real experiments. In this paper, we demonstrate that, in some contexts, the theoretical metrics provide no information on the resiliency of the Sboxes against side-channel attacks.
Last updated:  2016-11-30
Estimating the cost of generic quantum pre-image attacks on SHA-2 and SHA-3
Matthew Amy, Olivia Di Matteo, Vlad Gheorghiu, Michele Mosca, Alex Parent, John Schanck
We investigate the cost of Grover's quantum search algorithm when used in the context of pre-image attacks on the SHA-2 and SHA-3 families of hash functions. Our cost model assumes that the attack is run on a surface code based fault-tolerant quantum computer. Our estimates rely on a time-area metric that costs the number of logical qubits times the depth of the circuit in units of surface code cycles. As a surface code cycle involves a significant classical processing stage, our cost estimates allow for crude, but direct, comparisons of classical and quantum algorithms. We exhibit a circuit for a pre-image attack on SHA-256 that is approximately $2^{153.8}$ surface code cycles deep and requires approximately $2^{12.6}$ logical qubits. This yields an overall cost of $2^{166.4}$ logical-qubit-cycles. Likewise we exhibit a SHA3-256 circuit that is approximately $2^{146.5}$ surface code cycles deep and requires approximately $2^{20}$ logical qubits for a total cost of, again, $2^{166.5}$ logical-qubit-cycles. Both attacks require on the order of $2^{128}$ queries in a quantum black-box model, hence our results suggest that executing these attacks may be as much as $275$ billion times more expensive than one would expect from the simple query analysis.
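Since the time-area metric is (logical qubits) times (depth in surface code cycles), the quoted figures combine by simple addition of base-2 exponents; a quick arithmetic check (ours, not from the paper):

    # Time-area cost in log2: exponents of depth and qubit count add.
    sha256_cost = 153.8 + 12.6    # = 166.4 log2 logical-qubit-cycles
    sha3_cost   = 146.5 + 20.0    # = 166.5 log2 logical-qubit-cycles
    # Overhead relative to the bare 2^128 query count, in log2:
    print(2 ** (166.4 - 128.0))   # ~3.6e11, the same order of magnitude
                                  # as the 275 billion quoted above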
Last updated:  2018-03-23
Bootstrapping the Blockchain, with Applications to Consensus and Fast PKI Setup
Juan A. Garay, Aggelos Kiayias, Nikos Leonardos, Giorgos Panagiotakos
The Bitcoin backbone protocol [Eurocrypt 2015] extracts basic properties of Bitcoin's underlying {\em blockchain} data structure, such as ``common prefix'' and ``chain quality,'' and shows how fundamental applications including consensus and a robust public transaction ledger can be built on top of them. The underlying assumptions are ``proofs of work'' (POWs), adversarial hashing power strictly less than $1/2$ {\em and} no adversarial pre-computation---or, alternatively, the existence of an unpredictable ``genesis'' block. In this paper we first show how to remove the latter assumption, presenting a ``bootstrapped'' Bitcoin-like blockchain protocol relying on POWs that builds genesis blocks ``from scratch'' in the presence of adversarial pre-computation. Importantly, the round complexity of the genesis block generation process is \emph{independent} of the number of participants. Next, we consider applications of our construction, including a PKI generation protocol and a consensus protocol without trusted setup assuming an honest majority (in terms of computational power). Previous results in the same setting (unauthenticated parties, no trusted setup, POWs) required a round complexity linear in the number of participants.
Last updated:  2020-02-12
Revisiting the Wrong-Key-Randomization Hypothesis
Tomer Ashur, Tim Beyne, Vincent Rijmen
Linear cryptanalysis is considered to be one of the strongest techniques in the cryptanalyst’s arsenal. In most cases, Matsui’s Algorithm 2 is used for the key recovery part of the attack. The success rate analysis of this algorithm is based on an assumption regarding the bias of a linear approximation for a wrong key, known as the wrong-key-randomization hypothesis. This hypothesis was refined by Bogdanov and Tischhauser to take into account the stochastic nature of the bias for a wrong key. We provide further refinements to the analysis of Matsui’s Algorithm 2 by considering sampling without replacement. This paper derives the distribution of the observed bias for wrong keys when sampling is done without replacement and shows that less data are required in this scenario. It also develops formulas for the success probability and the required data complexity when this approach is taken. The formulas predict that the success probability may reach a peak and then decrease as more pairs are considered. We provide a new explanation for this behavior and derive the conditions for encountering it. We empirically verify our results and compare them to previous work.
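The gain from sampling without replacement can be previewed with elementary statistics (a simplified illustration of ours, not the paper's exact derivation): the variance of the empirical bias acquires the finite-population correction factor $(N-n)/(N-1)$, so fewer texts suffice for the same accuracy, and at $n = N$ the bias is known exactly.

    # Toy comparison of the variance of an empirical bias estimate when
    # n of N available plaintext-ciphertext pairs are sampled with vs.
    # without replacement (simplified; not the paper's exact analysis).
    def bias_variance(p, n, N, with_replacement):
        v = p * (1 - p) / n                       # binomial variance
        return v if with_replacement else v * (N - n) / (N - 1)

    N = 2 ** 24                  # toy number of distinct available pairs
    p = 0.5 + 2 ** -11           # probability encoding the linear bias
    for n in (2 ** 20, 2 ** 23, N):
        print(n, bias_variance(p, n, N, True), bias_variance(p, n, N, False))
    # At n = N the without-replacement variance is 0: the bias is exact.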
Last updated:  2016-12-21
Scrypt is Maximally Memory-Hard
Joël Alwen, Binyi Chen, Krzysztof Pietrzak, Leonid Reyzin, Stefano Tessaro
Memory-hard functions (MHFs) are hash algorithms whose evaluation cost is dominated by memory cost. As memory, unlike computation, costs about the same across different platforms, MHFs cannot be evaluated at significantly lower cost on dedicated hardware like ASICs. MHFs have found widespread applications including password hashing, key derivation, and proofs-of-work. This paper focuses on scrypt, a simple candidate MHF designed by Percival, and described in RFC 7914. It has been used within a number of cryptocurrencies (e.g., Litecoin and Dogecoin) and has been an inspiration for Argon2d, one of the winners of the recent password-hashing competition. Despite its popularity, no rigorous lower bounds on its memory complexity are known. We prove that scrypt is optimally memory hard, i.e., its cumulative memory complexity (cmc) in the parallel random oracle model is $\Omega(n^2 w)$, where $w$ and $n$ are the output length and number of invocations of the underlying hash function, respectively. High cmc is a strong security target for MHFs introduced by Alwen and Serbinenko (STOC '15) which implies high memory cost even for adversaries who can amortise the cost over many evaluations and evaluate the underlying hash functions many times in parallel. Our proof is the first showing optimal memory hardness for any MHF. Our result improves both quantitatively and qualitatively upon the recent work by Alwen et al. (EUROCRYPT '16) who proved a weaker lower bound of $\Omega(n^2 w /\log^2 n)$ for a restricted class of adversaries.
Last updated:  2017-09-21
Zero Knowledge Protocols from Succinct Constraint Detection
Eli Ben-Sasson, Alessandro Chiesa, Michael A. Forbes, Ariel Gabizon, Michael Riabzev, Nicholas Spooner
We study the problem of constructing proof systems that achieve both soundness and zero knowledge unconditionally (without relying on intractability assumptions). Known techniques for this goal are primarily *combinatorial*, despite the fact that constructions of interactive proofs (IPs) and probabilistically checkable proofs (PCPs) heavily rely on *algebraic* techniques to achieve their properties. We present simple and natural modifications of well-known "algebraic" IP and PCP protocols that achieve unconditional (perfect) zero knowledge in recently introduced models, overcoming limitations of known techniques.
1. We modify the PCP of Ben-Sasson and Sudan [BS08] to obtain zero knowledge for NEXP in the model of Interactive Oracle Proofs [BCS16,RRR16], where the verifier, in each round, receives a PCP from the prover.
2. We modify the IP of Lund, Fortnow, Karloff, and Nisan [LFKN92] to obtain zero knowledge for #P in the model of Interactive PCPs [KR08], where the verifier first receives a PCP from the prover and then interacts with him.
The simulators in our zero knowledge protocols rely on solving a problem that lies at the intersection of coding theory, linear algebra, and computational complexity, which we call the *succinct constraint detection* problem, and consists of detecting dual constraints with polynomial support size for codes of exponential block length. Our two results rely on solutions to this problem for fundamental classes of linear codes:
* An algorithm to detect constraints for Reed--Muller codes of exponential length. This algorithm exploits the Raz--Shpilka [RS05] deterministic polynomial identity testing algorithm, and shows, to our knowledge, a first connection of algebraic complexity theory with zero knowledge.
* An algorithm to detect constraints for PCPs of Proximity of Reed--Solomon codes [BS08] of exponential degree. This algorithm exploits the recursive structure of the PCPs of Proximity to show that small-support constraints are "locally" spanned by a small number of small-support constraints.
Last updated:  2023-09-04
A Key to Success -- Success Exponents for Side-Channel Distinguishers
Sylvain Guilley, Annelie Heuser, Olivier Rioul
The success rate is the classical metric for evaluating the performance of side-channel attacks. It is generally computed empirically from measurements for a particular device or using simulations. Closed-form expressions of success rate are desirable because they provide an explicit functional dependence on relevant parameters such as number of measurements and signal-to-noise ratio which help to understand the effectiveness of a given attack and how one can mitigate its threat by countermeasures. However, such closed-form expressions involve high-dimensional complex statistical functions that are hard to estimate. In this paper, we define the success exponent (SE) of an arbitrary side-channel distinguisher as the first-order exponent of the success rate as the number of measurements increases. Under fairly general assumptions such as soundness, we give a general simple formula for any arbitrary distinguisher and derive closed-form expressions of it for DoM, CPA, MIA and the optimal distinguisher when the model is known (template attack). For DoM and CPA our results are in line with the literature. Experiments confirm that the theoretical closed-form expression of the SE coincides with the empirically computed one, even for reasonably small numbers of measurements. Finally, we highlight that our study raises many new perspectives for comparing and evaluating side-channel attacks, countermeasures and implementations.
Last updated:  2016-11-03
Fast Arithmetic Modulo $2^xp^y\pm 1$
Joppe W. Bos, Simon Friedberger
We give a systematic overview of techniques to compute efficient arithmetic modulo $2^xp^y\pm 1$. This is useful for computations in the supersingular isogeny Diffie-Hellman (SIDH) key-exchange protocol which is one of the more recent contenders in the post-quantum public-key arena. One of the main computational bottlenecks in this key-exchange protocol is computing modular arithmetic in a finite field defined by a prime of this special shape. Recent implementations already use this special prime shape to speed up the cryptographic implementations but it remains unclear if the choices made are optimal or if one can do better. Our overview shows that in the SIDH setting, where arithmetic over a quadratic extension field is required, the approaches based on Montgomery multiplication are to be preferred. Furthermore, the outcome of our search reveals that there exist moduli which result in even faster implementations.
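For orientation, the textbook Montgomery multiplication underlying the preferred approach looks as follows (a generic sketch of ours, not the specialized routines of the paper); it works for any odd modulus, including the SIDH shape $2^xp^y - 1$:

    # Generic Montgomery multiplication (REDC), shown only to fix
    # notation; the paper's optimizations exploit the special modulus
    # shape on top of this. Requires Python 3.8+ for pow(m, -1, R).
    def montgomery_mul(a, b, m, bits, m_inv):
        """Return a*b*R^{-1} mod m, where R = 2**bits and a, b are in
        Montgomery form (x -> x*R mod m)."""
        R_mask = (1 << bits) - 1
        t = a * b
        k = ((t & R_mask) * m_inv) & R_mask   # k = t * (-m^{-1}) mod R
        u = (t + k * m) >> bits               # exact division by R
        return u - m if u >= m else u

    # Toy SIDH-shaped modulus m = 2^4 * 3^3 - 1 = 431:
    m, bits = 2 ** 4 * 3 ** 3 - 1, 16
    R = 1 << bits
    m_inv = (-pow(m, -1, R)) % R
    to_m = lambda x: (x * R) % m              # into Montgomery form
    from_m = lambda x: montgomery_mul(x, 1, m, bits, m_inv)
    assert from_m(montgomery_mul(to_m(7), to_m(11), m, bits, m_inv)) == 77 % m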
Last updated:  2016-10-15
Hash First, Argue Later: Adaptive Verifiable Computations on Outsourced Data
Dario Fiore, Cédric Fournet, Esha Ghosh, Markulf Kohlweiss, Olga Ohrimenko, Bryan Parno
Proof systems for verifiable computation (VC) have the potential to make cloud outsourcing more trustworthy. Recent schemes enable a verifier with limited resources to delegate large computations and verify their outcome based on succinct arguments: verification complexity is linear in the size of the inputs and outputs (not the size of the computation). However, cloud computing also often involves large amounts of data, which may exceed the local storage and I/O capabilities of the verifier, and thus limit the use of VC. In this paper, we investigate multi-relation hash & prove schemes for verifiable computations that operate on succinct data hashes. Hence, the verifier delegates both storage and computation to an untrusted worker. She uploads data and keeps hashes; exchanges hashes with other parties; verifies arguments that consume and produce hashes; and selectively downloads the actual data she needs to access. Existing instantiations that fit our definition either target restricted classes of computations or employ relatively inefficient techniques. Instead, we propose efficient constructions that lift classes of existing argument schemes for fixed relations to multi-relation hash & prove schemes. Our schemes (1) rely on hash algorithms that run in time linear in the size of the input; (2) enable constant-time verification of arguments on hashed inputs; (3) incur minimal overhead for the prover. Their main benefit is to amortize the linear cost for the verifier across all relations with shared I/O. Concretely, compared to solutions that can be obtained from prior work, our new hash & prove constructions yield a 1,400x speed-up for provers. We also explain how to further reduce the linear verification costs by partially outsourcing the hash computation itself, obtaining a 480x speed-up when applied to existing VC schemes, even on single-relation executions.
Last updated:  2021-05-31
Design Strategies for ARX with Provable Bounds: SPARX and LAX (Full Version)
Daniel Dinu, Léo Perrin, Aleksei Udovenko, Vesselin Velichkov, Johann Großschädl, Alex Biryukov
We present, for the first time, a general strategy for designing ARX symmetric-key primitives with provable resistance against single-trail differential and linear cryptanalysis. The latter has been a long-standing open problem in the area of ARX design. The wide trail design strategy (WTS), that is at the basis of many S-box based ciphers, including the AES, is not suitable for ARX designs due to the lack of S-boxes in the latter. In this paper we address the mentioned limitation by proposing the long trail design strategy (LTS) -- a dual of the WTS that is applicable (but not limited) to ARX constructions. In contrast to the WTS, that prescribes the use of small and efficient S-boxes at the expense of heavy linear layers with strong mixing properties, the LTS advocates the use of large (ARX-based) S-Boxes together with sparse linear layers. With the help of the so-called Long Trail argument, a designer can bound the maximum differential and linear probabilities for any number of rounds of a cipher built according to the LTS. To illustrate the effectiveness of the new strategy, we propose SPARX -- a family of ARX-based block ciphers designed according to the LTS. SPARX has 32-bit ARX-based S-boxes and has provable bounds against differential and linear cryptanalysis. In addition, SPARX is very efficient on a number of embedded platforms. Its optimized software implementation ranks in the top 6 of the most software-efficient ciphers along with SIMON, SPECK, Chaskey, LEA and RECTANGLE. As a second contribution we propose another strategy for designing ARX ciphers with provable properties, that is completely independent of the LTS. It is motivated by a challenge proposed earlier by Wallén and uses the differential properties of modular addition to minimize the maximum differential probability across multiple rounds of a cipher. A new primitive, called LAX, is designed following those principles. LAX partly solves the Wallén challenge.
Last updated:  2016-10-15
Exact Security Analysis of Hash-then-Mask Type Probabilistic MAC Constructions
Avijit Dutta, Ashwin Jha, Mridul Nandi
A probabilistic MAC (message authentication code) is an alternative to a stateful MAC where maintaining internal state may be difficult or unsafe. Usually, the tag of a probabilistic MAC consists of an $m$-bit random coin (also called {\em salt}) and an $n$-bit core-tag depending on the salt. In terms of security, a probabilistic MAC is subject to birthday collisions among salts, an issue absent in stateful MACs. XMACR is an example of a probabilistic MAC which remains secure up to $o(2^{m/2})$ tag-generation queries. To achieve security beyond the birthday bound in $n$, one can naturally use a large salt. For example, $\mathrm{MACRX}_3$ sets $m = 3n$ and provides security up to $o(2^{n})$ tag-generation queries. A large salt may restrict applicability, as it increases the cost of random string generation as well as the size of the overall tag. RWMAC (the randomized version of WMAC) provides similar security with $m = n$, but it uses a PRF (pseudorandom function) over $2n$-bit inputs, which is naturally costlier than one over $n$-bit inputs. Achieving beyond-birthday security using an $n$-bit PRF and an $n$-bit salt is a practical and challenging problem. Minematsu in FSE 2010 proposed Enhanced Hash-then-Mask (EHtM) using an $n$-bit salt and showed its security up to $o(2^{2n/3})$ tag-generation queries. In this paper we revisit this construction and provide an exact security analysis of EHtM. In particular, we show that it has higher security, namely up to $o(2^{3n/4})$ queries, than what was claimed by the designer. Moreover, we demonstrate a single-attempt forgery attack which makes about $2^{3n/4}$ tag-generation queries. XMACR and EHtM follow the hash-then-mask paradigm due to Carter and Wegman. We revisit six possible constructions following the hash-then-mask paradigm and provide an exact security analysis for all of them, some of which, however, were known before.
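For reference, the shape of EHtM as we read it from the literature is tag = (R, F_K1(R) xor F_K2(R xor H(M))): the core tag masks a salted hash with two independent PRF calls. The sketch below is illustrative only, with HMAC-SHA256 truncated to $n$ bits standing in for both the PRF and the hash (a real instantiation would use an almost-XOR-universal hash); it is not the paper's exact instantiation.

    # Illustrative EHtM-shaped tag generation (structure only).
    import hmac, hashlib, os

    N = 16  # n = 128 bits

    def prf(key, x):
        return hmac.new(key, x, hashlib.sha256).digest()[:N]

    def xor(a, b):
        return bytes(s ^ t for s, t in zip(a, b))

    def ehtm_tag(k1, k2, kh, msg):
        r = os.urandom(N)                                   # n-bit salt
        return r, xor(prf(k1, r), prf(k2, xor(r, prf(kh, msg))))

    def ehtm_verify(k1, k2, kh, msg, r, core):
        expected = xor(prf(k1, r), prf(k2, xor(r, prf(kh, msg))))
        return hmac.compare_digest(expected, core)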
Last updated:  2018-04-17
Securing Systems with Scarce Entropy: LWE-Based Lossless Computational Fuzzy Extractor for the IoT
Christopher Huth, Daniela Becker, Jorge Guajardo, Paul Duplys, Tim Güneysu
With the advent of the Internet of Things, lightweight devices necessitate secure and cost-efficient key storage. Since traditional secure key storage is expensive, novel solutions have been developed based on the idea of deriving the key from noisy entropy sources. Such sources, when combined with fuzzy extractors, allow cryptographically strong key derivation. Information-theoretic fuzzy extractors require large amounts of input entropy to account for entropy loss in the key extraction process. It has been shown by Fuller \textit{et al.}~(ASIACRYPT'13) that the entropy loss can be reduced if the requirement is relaxed to computational security based on the hardness of the Learning with Errors problem. Using this computational fuzzy extractor, we show how to construct a device-server authentication system providing outsider chosen perturbation security and pre-application robustness. We present the first implementation of a \emph{lossless} computational fuzzy extractor, where the entropy of the source equals the entropy of the key, on a constrained device. The implementation needs only 1.45KB of SRAM and 9.8KB of Flash memory on an 8-bit microcontroller. Furthermore, we show how a device-server authentication system can be constructed and efficiently implemented in our system. We compare our implementation to existing work in terms of security, while achieving no entropy loss.
Last updated:  2016-10-15
Efficient No-dictionary Verifiable SSE
Wakaha Ogata, Kaoru Kurosawa
In the model of a "no-dictionary" verifiable searchable symmetric encryption (SSE) scheme, a client does not need to keep the set of keywords ${\cal W}$ in the search phase, where ${\cal W}$ is called a dictionary. Still, a malicious server cannot cheat the client by saying that ``your search word $w$ does not exist in the dictionary ${\cal W}$'' when it exists. In the previous such schemes, it takes $O(\log m)$ time for the server to prove that $w \not\in {\cal W}$, where $m=|{\cal W}|$ is the number of keywords. In this paper, we show a generic method to transform any SSE scheme (that is only secure against passive adversaries) into a no-dictionary verifiable SSE scheme. In the transformed scheme, it takes only $O(1)$ time for the server to prove that $w \not\in {\cal W}$.
Last updated:  2016-10-15
TruSpy: Cache Side-Channel Information Leakage from the Secure World on ARM Devices
Ning Zhang, Kun Sun, Deborah Shands, Wenjing Lou, Y. Thomas Hou
As smart, embedded devices are increasingly integrated into our daily life, the security of these devices has become a major concern. The ARM processor family, which powers more than 60% of embedded devices, introduced TrustZone technology to offer security protection via an isolated execution environment called secure world. Caches in TrustZone-enabled processors are extended with a non-secure (NS) bit to indicate whether a cache line is used by the secure world or the normal world. This cache design improves system performance by eliminating the need to perform cache flush during world switches; however, it also enables cache contention between the two worlds. In this work, we present TruSpy, the first study of timing-based cache side-channel information leakage of TrustZone. Our proposed attack exploits the cache contention between normal world and secure world to recover secret information from secure world. Two attacks are proposed in TruSpy, namely, the normal world OS attack and the normal world Android app attack. In the OS-based attack, the attacker is able to access virtual-to-physical address translation and high precision timers. In the Android app-based attack, these tools are unavailable to the attacker, so we devise a novel method that uses the expected channel statistics to allocate memory for cache probing. We also show how an attacker might use the less accurate performance event interface as a timer. Using the T-table based AES implementation in OpenSSL 1.0.1f as an example, we demonstrate that it is possible for a normal world attacker to steal a fine-grained secret from the secure world using a timing-based cache side-channel. We can recover the full AES encryption key via either the OS-based attack or the Android app-based attack. Since our zero-permission TruSpy attack is based on the cache design in TrustZone enabled ARM processors, it poses a significant threat to a wide array of devices. To mitigate the newly discovered threat, we also propose both application-based and system-oriented countermeasures.
Last updated:  2016-10-15
The Reason Why Some Divide-and-Conquer Algorithms Cannot Be Efficiently Implemented
Zhengjun Cao, Lihua Liu
In the literature there are divide-and-conquer algorithms, such as Karatsuba's algorithm and Strassen's algorithm, which play a key role in analyzing the performance of some cryptographic protocols and attacks. But we find that these algorithms are rarely implemented in practice, although their theoretical complexities are attractive. In this paper, we point out that the reason for this phenomenon is that decomposing the original problem into subproblems does not take constant time. This type of problem decomposition causes the amount of data to expand exponentially. In such cases, the time for organizing data (including assigning addresses, writing, and reading), which is conventionally ignored, accumulates significantly.
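Karatsuba's algorithm illustrates the point: a direct recursive implementation (a toy sketch of ours, below) spends real time at every level splitting operands and recombining partial products, costs that the textbook operation count does not charge for:

    # Toy recursive Karatsuba over Python integers, written to expose
    # the per-level bookkeeping (splitting, copying, recombining) that
    # the operation-count analysis conventionally ignores.
    def karatsuba(x, y):
        if x < 2 ** 32 or y < 2 ** 32:
            return x * y                      # base case: direct multiply
        n = max(x.bit_length(), y.bit_length()) // 2
        xh, xl = x >> n, x & ((1 << n) - 1)   # decomposition: new objects
        yh, yl = y >> n, y & ((1 << n) - 1)
        a = karatsuba(xh, yh)
        b = karatsuba(xl, yl)
        c = karatsuba(xh + xl, yh + yl) - a - b
        return (a << 2 * n) + (c << n) + b    # recombination: more copies

    assert karatsuba(3 ** 300, 7 ** 200) == 3 ** 300 * 7 ** 200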
Last updated:  2016-10-12
Testing the Trustworthiness of IC Testing: An Oracle-less Attack on IC Camouflaging
Muhammad Yasin, Ozgur Sinanoglu, Jeyavijayan Rajendran
Test of integrated circuits (ICs) is essential to ensure their quality; the test is meant to prevent defective and out-of-spec ICs from entering the supply chain. The test is conducted by comparing the observed IC output with the expected test responses for a set of test patterns; the test patterns are generated using automatic test pattern generation algorithms. Existing test-pattern generation algorithms aim to achieve higher fault coverage at lower test costs. In an attempt to reduce the size of test data, these algorithms reveal the maximum information about the internal circuit structure. This is realized through sensitizing the internal nets to the outputs as much as possible, unintentionally leaking the secrets embedded in the circuit as well. In this paper, we present HackTest, an attack that extracts secret information generated in the test data, even if the test data does not explicitly contain the secret. HackTest can break the existing intellectual property (IP) protection techniques, such as camouflaging, within two minutes for our benchmarks using only the camouflaged layout and the test data. HackTest applies to all existing camouflaged gate-selection techniques and is successful even in the presence of state-of-the-art test infrastructure, i.e., test data compression circuits. Our attack necessitates that the IC test data generation algorithms be reinforced with security. We also discuss potential countermeasures to prevent HackTest.
Last updated:  2017-06-27
Side channels in deduplication: trade-offs between leakage and efficiency
Frederik Armknecht, Colin Boyd, Gareth T. Davies, Kristian Gjøsteen, Mohsen Toorani
Deduplication removes redundant copies of files or data blocks stored on the cloud. Client-side deduplication, where the client only uploads the file upon the request of the server, provides major storage and bandwidth savings, but introduces a number of security concerns. Harnik et al. (2010) showed how cross-user client-side deduplication inherently gives the adversary access to a (noisy) side-channel that may divulge whether or not a particular file is stored on the server, leading to leakage of user information. We provide formal definitions for deduplication strategies and their security in terms of adversarial advantage. Using these definitions, we provide a criterion for designing good strategies and then prove a bound characterizing the necessary trade-off between security and efficiency.
Last updated:  2016-10-12
On Adaptively Secure Multiparty Computation with a Short CRS
Ran Cohen, Chris Peikert
In the setting of multiparty computation, a set of mutually distrusting parties wish to securely compute a joint function of their private inputs. A protocol is adaptively secure if honest parties might get corrupted \emph{after} the protocol has started. Recently (TCC 2015), three constant-round adaptively secure protocols were presented [CGP15, DKR15, GP15]. All three constructions assume that the parties have access to a \emph{common reference string} (CRS) whose size depends on the function to compute, even when facing semi-honest adversaries. It is unknown whether constant-round adaptively secure protocols exist without assuming access to such a CRS. In this work, we study adaptively secure protocols which only rely on a short CRS that is independent of the function to compute. First, we raise a subtle issue relating to the usage of \emph{non-interactive non-committing encryption} within security proofs in the UC framework, and explain how to overcome it. We demonstrate the problem in the security proof of the adaptively secure oblivious-transfer protocol from [CLOS02] and provide a complete proof of this protocol. Next, we consider the two-party setting where one of the parties has a polynomial-size input domain, yet the other has no constraints on its input. We show that, assuming the existence of adaptively secure oblivious transfer, every deterministic functionality can be computed with adaptive security in a constant number of rounds. Finally, we present a new primitive called \emph{non-committing indistinguishability obfuscation}, and show that this primitive is \emph{complete} for constructing adaptively secure protocols with round complexity independent of the function.
Last updated:  2017-02-24
(Universal) Unconditional Verifiability in E-Voting without Trusted Parties
Gina Gallegos-Garcia, Vincenzo Iovino, Alfredo Rial, Peter B. Roenne, Peter Y. A. Ryan
In e-voting protocol design, cryptographers must balance usability and strong security guarantees, such as privacy and verifiability. In traditional e-voting protocols, privacy is often provided by a trusted authority that learns the votes and computes the tally. Some protocols replace the trusted authority by a set of authorities, and privacy is guaranteed if less than a threshold number of authorities are corrupt. For verifiability, stronger security guarantees are demanded. Typically, corrupt authorities that try to fake the result of the tally must always be detected. To provide verifiability, many e-voting protocols use Non-Interactive Zero-Knowledge proofs (NIZKs). Thanks to their non-interactive nature, NIZKs allow anybody, including third parties that do not participate in the protocol, to verify the correctness of the tally. Therefore, NIZKs can be used to obtain universal verifiability. Additionally, NIZKs also improve usability because they allow voters to cast a vote non-interactively. The disadvantage of NIZKs is that their security is based on setup assumptions such as the common reference string (CRS) or the random oracle model. The former requires a trusted party for the generation of a CRS. The latter, though a popular methodology for designing secure protocols, has been shown to be unsound. In this paper, we address the design of e-voting protocols that provide verifiability without any trust assumptions, where verifiability here is meant without eligibility verification. We show that Non-Interactive Witness-Indistinguishable proofs can be used for this purpose. All our e-voting schemes are private under the Decision Linear assumption, while the verifiability holds unconditionally. We first present a general construction that supports any tally function but with the drawback of representing the computation as a circuit. Then, we show how to efficiently instantiate it for specific types of elections through Groth-Sahai proofs. To our knowledge, this is the first private e-voting scheme with perfect universal verifiability, i.e., one in which the probability of a fake tally not being detected is 0, and with non-interactive protocols that do not rely on trust assumptions.
Last updated:  2016-10-12
Server-Aided Revocable Identity-Based Encryption from Lattices
Khoa Nguyen, Huaxiong Wang, Juanyang Zhang
Server-aided revocable identity-based encryption (SR-IBE), recently proposed by Qin et al. at ESORICS 2015, offers significant advantages over previous user revocation mechanisms in the scope of IBE. In this new system model, almost all the workloads on users are delegated to an untrusted server, and users can compute decryption keys at any time period without having to communicate with either the key generation center or the server. In this paper, inspired by Qin et al.’s work, we design the first SR-IBE scheme from lattice assumptions. Our scheme is more efficient than existing constructions of lattice-based revocable IBE. We prove that the scheme is selectively secure in the standard model, based on the hardness of the Learning with Errors problem. At the heart of our design is a “double encryption” mechanism that enables smooth interactions between the message sender and the server, as well as between the server and the recipient, while ensuring the confidentiality of messages.
Last updated:  2016-10-12
Invariant Subspace Attack Against Midori64 and The Resistance Criteria for S-box Designs
Jian Guo, Jérémy Jean, Ivica Nikolić, Kexin Qiao, Yu Sasaki, Siang Meng Sim
We present an invariant subspace attack on the block cipher Midori64, proposed at Asiacrypt 2015. Our analysis shows that Midori64 has a class of $2^{32}$ weak keys. Under any such key, the cipher can be distinguished with only a single chosen query, and the key can be recovered in $2^{16}$ time with two chosen queries. As both the distinguisher and the key recovery have very low complexities, we confirm our analysis by implementing the attacks. Some tweaks of round constants make Midori64 more resistant to the attacks, but some lead to even larger weak-key classes. To eliminate the dependency on the round constants, we investigate alternative S-boxes for Midori64 that provide a certain level of security against the found invariant subspace attacks, regardless of the choice of the round constants. Our search for S-boxes is enhanced with a dedicated tool which evaluates the depth of any given 4-bit S-box that satisfies certain design criteria. The tool may be of independent interest to future S-box designs.
Last updated:  2017-09-21
Revealing Encryption for Partial Ordering
Helene Haagh, Yue Ji, Chenxing Li, Claudio Orlandi, Yifan Song
We generalize the cryptographic notion of Order Revealing Encryption (ORE) to arbitrary functions and we present a construction that allows one to determine the (partial) ordering of two vectors, i.e., given E(x) and E(y) it is possible to learn whether x is less than or equal to y, whether y is less than or equal to x, or whether x and y are incomparable. This is the first non-trivial example of a Revealing Encryption (RE) scheme with output larger than one bit, and which does not rely on cryptographic obfuscation or multilinear maps.
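Concretely, the functionality revealed about two encrypted vectors is the following four-valued comparison, stated here on plaintexts (the function name is ours, not the paper's):

    # The (partial) ordering an evaluator learns from E(x) and E(y):
    # componentwise comparison with four possible outcomes.
    def partial_order(x, y):
        le = all(a <= b for a, b in zip(x, y))
        ge = all(a >= b for a, b in zip(x, y))
        if le and ge:
            return "equal"
        if le:
            return "x <= y"
        if ge:
            return "y <= x"
        return "incomparable"  # the multi-bit outcome that makes this RE non-trivial

    assert partial_order((1, 2), (2, 3)) == "x <= y"
    assert partial_order((1, 3), (2, 2)) == "incomparable"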
Last updated:  2016-10-12
Authenticated communication from Quantum Readout of PUFs
B. Skoric, P. W. H. Pinkse, A. P. Mosk
Quantum Readout of Physical Unclonable Functions (PUFs) is a recently introduced method for remote authentication of objects. We present an extension of the protocol to enable the authentication of data: a verifier can check if received classical data was sent by the PUF holder. We call this modification QR-d or, in the case of the optical-PUF implementation, QSA-d. We discuss how QSA-d can be operated in a parallel way. We also present a protocol for authenticating quantum states.
Last updated:  2016-10-12
Statistical Analysis for Access-Driven Cache Attacks Against AES
Liwei Zhang, A. Adam Ding, Yunsi Fei, Zhen Hang Jiang
In recent years, side-channel timing attacks utilizing architectural behavior have been applied to cloud settings, presenting a realistic and serious cyber threat. Access-driven cache attacks allow the adversary to observe side-channel leakage (the cache access pattern) of a critical cryptographic implementation to infer the secret key. However, what the attackers observe may deviate from the real cache footprint of the victim process, affecting the effectiveness of cache-based timing attacks using the observed leakage. Various countermeasures, including secure cache and architecture designs, should also be evaluated accurately for their side-channel resilience. To address this need, this paper proposes a mathematical model for access-driven cache attacks, and derives explicit success rate formulas for those attacks. It is the first theoretical model that explicitly considers the misclassification errors for cache access and cache non-access by the victim cryptographic process. We implement several access-driven cache attacks and use our models to evaluate them. We demonstrate that the proposed statistical model predicts the success rate of cache-based timing attacks accurately. We also apply the model to various cache defense architectures for evaluation.
Last updated:  2016-10-10
Garbling Gadgets for Boolean and Arithmetic Circuits
Marshall Ball, Tal Malkin, Mike Rosulek
We present simple, practical, and powerful new techniques for garbled circuits. These techniques result in significant concrete and asymptotic improvements over the state of the art, for several natural kinds of computations. For arithmetic circuits over the integers, our construction results in garbled circuits with {\em free} addition, weighted threshold gates with cost independent of fan-in, and exponentiation by a fixed exponent with cost independent of the exponent. For boolean circuits, our construction gives an {\em exponential} improvement over the state of the art for threshold gates (including AND/OR gates) of high fan-in. Our construction can be efficiently instantiated with practical symmetric-key primitives (e.g., AES), and is proven secure under similar assumptions to that of the Free-XOR garbling scheme (Kolesnikov \& Schneider, ICALP 2008). We give an extensive comparison between our scheme and state-of-the-art garbling schemes applied to boolean circuits.
Last updated:  2016-10-10
System Clock and Power Supply Cross-Checking for Glitch Detection
Pei Luo, Chao Luo, Yunsi Fei
Cryptographic systems are vulnerable to different kinds of fault injection attacks. Clock glitching is one of the most widely used fault injection methods. In this paper, we propose a method to detect glitches in the system clock to counter clock-glitch-based fault attacks. We implement the proposed scheme on a Virtex-5 FPGA and inject clock glitches into it; the results show that the proposed scheme can be easily implemented in both ASICs and FPGAs with very small overhead. Detection results show that the proposed scheme can detect very high-frequency clock glitches with a very high detection rate.
Last updated:  2016-10-10
Faulty Clock Detection for Crypto Circuits Against Differential Fault Analysis Attack
Pei Luo, Yunsi Fei
Clock glitch based Differential Fault Analysis (DFA) attacks are a serious threat to cryptographic devices. Previous error detection schemes for cryptographic devices target improving circuit reliability and cannot resist such DFA attacks. In this paper, we propose a novel faulty clock detection method which can be easily implemented either in FPGAs or in integrated circuits to detect glitches in the system clock. Results show that the proposed method can detect glitches efficiently while requiring very few system resources. It is also highly reconfigurable to tolerate inherent clock jitter, and does not involve complex design work for different process technologies.
Last updated:  2016-10-10
High-speed VLSI implementation of Digit-serial Gaussian normal basis Multiplication over GF($2^m$)
Bahram Rashidi, Sayed Masoud Sayedi, Reza Rezaeian Farashahi
In this paper, by employing the logical effort technique, an efficient and high-speed VLSI implementation of the digit-serial Gaussian normal basis multiplier is presented. It is constructed by using AND, XOR and XOR tree components. To obtain a low-cost implementation with a low number of transistors, the block of AND gates is implemented using NAND gates, based on the property of the XOR gates in the XOR tree. To optimally decrease the delay and increase the drive ability of the circuit, the logical effort method is employed as an efficient method for sizing the transistors. By using this method and also a 4-input XOR gate structure, the circuit is designed for minimum delay. The digit-serial Gaussian normal basis multiplier is implemented over the two binary finite fields GF($2^{163}$) and GF($2^{233}$) in 0.18μm CMOS technology for three different digit sizes. The results show that the proposed structures, compared to previous structures, are improved in terms of delay and area parameters.
Last updated:  2016-12-29
A Cryptographic Proof of Regularity Lemmas: Simpler Unified Proofs and Refined Bounds
Maciej Skorski
In this work we present a short and unified proof for the Strong and Weak Regularity Lemma, based on the cryptographic technique called \emph{low-complexity approximations}. In short, both problems reduce to a task of finding constructively an approximation for a certain target function under a class of distinguishers (test functions), where distinguishers are combinations of simple rectangle-indicators. In our case these approximations can be learned by a simple iterative procedure, which yields a unified and simple proof, achieving for any graph with density $d$ and any approximation parameter $\epsilon$ the partition size
* a tower of 2's of height $O(d\epsilon^{-2})$ for a variant of Strong Regularity,
* a power of 2 with exponent $O(d\epsilon^{-2})$ for Weak Regularity.
The novelty in our proof is as follows: (a) a simple approach which yields both the strong and the weaker variant, and (b) improvements for sparse graphs. At an abstract level, our proof can be seen as a refinement and simplification of the ``analytic'' proof given by Lovasz and Szegedy.
Last updated:  2017-03-21
Practical low data-complexity subspace-trail cryptanalysis of round-reduced PRINCE
Lorenzo Grassi, Christian Rechberger
Subspace trail cryptanalysis is a recently introduced cryptanalysis technique that includes differential, truncated differential, impossible differential, and integral attacks as special cases. In this paper, we consider PRINCE, a widely analyzed block cipher proposed in 2012. After identifying a 2.5-round subspace trail of PRINCE, we present several (truncated differential) attacks on up to 6 rounds of PRINCE. This includes a very practical attack with the lowest data complexity of only 8 plaintexts for 4 rounds, which co-won the final round of the PRINCE challenge in the 4-round chosen-plaintext category. The attacks have been verified using a C implementation. Of independent interest, we consider a variant of PRINCE in which the ShiftRows and MixLayer operations are exchanged in position. In particular, our result shows that the position of the ShiftRows and MixLayer operations influences the security of PRINCE. The same analysis applies to follow-up designs inspired by PRINCE.
Last updated:  2017-08-03
Efficient compression of SIDH public keys
Craig Costello, David Jao, Patrick Longa, Michael Naehrig, Joost Renes, David Urbanik
Supersingular isogeny Diffie-Hellman (SIDH) is an attractive candidate for post-quantum key exchange, in large part due to its relatively small public key sizes. A recent paper by Azarderakhsh, Jao, Kalach, Koziel and Leonardi showed that the public keys defined in Jao and De Feo's original SIDH scheme can be further compressed by around a factor of two, but reported that the performance penalty in utilizing this compression blew the overall SIDH runtime out by more than an order of magnitude. Given that the runtime of SIDH key exchange is currently its main drawback in relation to its lattice- and code-based post-quantum alternatives, an order of magnitude performance penalty for a factor of two improvement in bandwidth presents a trade-off that is unlikely to favor public-key compression in many scenarios. In this paper, we propose a range of new algorithms and techniques that accelerate SIDH public-key compression by more than an order of magnitude, making it roughly as fast as a round of standalone SIDH key exchange, while further reducing the size of the compressed public keys by approximately 12.5%. These improvements enable the practical use of compression, achieving public keys of only 330 bytes for the concrete parameters used to target 128 bits of quantum security, and further strengthen SIDH as a promising post-quantum primitive.
Last updated:  2017-04-07
On Removing Graded Encodings from Functional Encryption
Nir Bitansky, Huijia Lin, Omer Paneth
Functional encryption (FE) has emerged as an outstanding concept. By now, we know that beyond the immediate application to computation over encrypted data, variants with {\em succinct ciphertexts} are so powerful that they yield the full might of indistinguishability obfuscation (IO). Understanding how, and under which assumptions, such succinct schemes can be constructed has become a grand challenge of current research in cryptography. Whereas the first schemes were based themselves on IO, recent progress has produced constructions based on {\em constant-degree graded encodings}. Still, our comprehension of such graded encodings remains limited, as the instantiations given so far have exhibited different vulnerabilities. Our main result is that, assuming LWE, {\em black-box constructions} of {\em sufficiently succinct} FE schemes from constant-degree graded encodings can be transformed to rely on a much better-understood object --- {\em bilinear groups}. In particular, under an {\em über assumption} on bilinear groups, such constructions imply IO in the plain model. The result demonstrates that the exact level of ciphertext succinctness of FE schemes is of major importance. In particular, we draw a fine line between known FE constructions from constant-degree graded encodings, which just fall short of the required succinctness, and the holy grail of basing IO on better-understood assumptions. At the heart of our result are new techniques for removing ideal graded encoding oracles from FE constructions. Complementing the result, for weaker ideal models, namely the generic-group model and the random-oracle model, we show a transformation from {\em collusion-resistant} FE in either of the two models directly to FE (and IO) in the plain model, without assuming bilinear groups.
Last updated:  2017-07-18
A kilobit hidden SNFS discrete logarithm computation
Joshua Fried, Pierrick Gaudry, Nadia Heninger, Emmanuel Thomé
We perform a special number field sieve discrete logarithm computation in a 1024-bit prime field. To our knowledge, this is the first kilobit-sized discrete logarithm computation ever reported for prime fields. This computation took a little over two months of calendar time on an academic cluster using the open-source CADO-NFS software. Our chosen prime $p$ looks random, and $p-1$ has a 160-bit prime factor, in line with recommended parameters for the Digital Signature Algorithm. However, our $p$ has been trapdoored in such a way that the special number field sieve can be used to compute discrete logarithms in $\mathbb{F}_p^*$, yet detecting that $p$ has this trapdoor seems out of reach. Twenty-five years ago, there was considerable controversy around the possibility of backdoored parameters for DSA. Our computations show that trapdoored primes are entirely feasible with current computing technology. We also describe special number field sieve discrete log computations carried out for multiple weak primes found in use in the wild.
Last updated:  2016-10-05
Quantum-Secure Symmetric-Key Cryptography Based on Hidden Shifts
Gorjan Alagic, Alexander Russell
Recent results of Kaplan et al., building on previous work by Kuwakado and Morii, have shown that a wide variety of classically-secure symmetric-key cryptosystems are completely broken when exposed to quantum CPA attacks. In such an attack, the quantum adversary has the ability to query the cryptographic functionality in superposition. The vulnerable cryptosystems include the Even-Mansour block cipher, the three-round Feistel network, the Encrypted-CBC-MAC, and many others. In this work, we study simple algebraic adaptations of such schemes that replace $(\mathbb Z/2)^n$ addition with operations over alternate finite groups--such as $\mathbb Z/{2^n}$--and provide evidence that these adaptations are secure against quantum CPA attacks. These adaptations furthermore retain the classical security properties (and basic structural features) enjoyed by the original schemes. We establish security by treating the (quantum) hardness of the well-studied Hidden Shift problem as a basic cryptographic assumption. We observe that this problem has a number of attractive features in this cryptographic context, including random self-reducibility, hardness amplification, and--in many cases of interest--a reduction from the "search version" to the "decisional version." We then establish, under this assumption, the security of several such hidden-shift adaptations of symmetric-key constructions against quantum CPA attack. We show that a Hidden Shift version of the Even-Mansour block cipher yields a quantum-secure pseudorandom function, and that a Hidden Shift version of the Encrypted CBC-MAC yields a collision-resistant hash function. Finally, we observe that such adaptations frustrate the direct Simon's algorithm-based attacks in more general circumstances, e.g., Feistel networks and slide attacks.
Last updated:  2018-09-13
Impossibility of Simulation Secure Functional Encryption Even with Random Oracles
Shashank Agrawal, Venkata Koppula, Brent Waters
In this work we study the feasibility of achieving simulation security in functional encryption (FE) in the random oracle model. Our main result is negative in that we give a functionality for which it is impossible to achieve simulation security even with the aid of random oracles. We begin by giving a formal definition of simulation security that explicitly incorporates the random oracles. Next, we show a particular functionality for which it is impossible to achieve simulation security. Here messages are interpreted as seeds to a (weak) pseudorandom function family F and private keys are ascribed to points in the domain of the function. On a message s and private key x one can learn F(s,x). We show that there exists an attacker that makes a polynomial number of private key queries followed by a single ciphertext query for which there exists no simulator. Our functionality and attacker access pattern closely matches the standard model impossibility result of Agrawal, Gorbunov, Vaikuntanathan and Wee (CRYPTO 2013). The crux of their argument is that no simulator can succinctly program in the outputs of an unbounded number of evaluations of a pseudorandom function family into a bounded size ciphertext. However, their argument does not apply in the random oracle setting since the oracle acts as an additional conduit of information which the simulator can program. We overcome this barrier by proposing an attacker who decrypts the challenge ciphertext with the secret keys issued earlier without using the random oracle, even though the decryption algorithm may require it. This involves collecting most of the useful random oracle queries in advance, without giving the simulator too many opportunities to program. We note that our negative result contradicts a theorem of De Caro et al. (CRYPTO 2013) (as originally stated) which claims that random oracles can transform any indistinguishability secure functional encryption system into one that is simulation secure. De Caro et al. subsequently revised their work to show such a transformation from a new indistinguishability definition called functional encryption ”for circuits with random oracle gates”. An implication of our result when combined with theirs is that this new definition of functional encryption for circuits with random oracle gates is impossible to achieve even when all algorithms have access to a random oracle. On the flip side, we demonstrate the utility of the random oracle in simulation security. Given only public key encryption and low-depth PRGs we show how to build an FE system that is simulation secure for any poly-time attacker that makes an unbounded number of message queries, but an a-priori bounded number of key queries. This bests what is possible in the standard model where it is only feasible to achieve security for an attacker that is bounded both in the number of key and message queries it makes. We achieve this by creating a system that leverages the random oracle to get one-key security and then adapt previously known techniques to boost the system to resist up to q queries. Finally, we ask whether it is possible to achieve simulation security for an unbounded number of messages and keys, but where all key queries are made after the message queries. We show this too is impossible to achieve using a different twist on our first impossibility result.
Last updated:  2016-10-04
SafeDeflate: compression without leaking secrets
Michał Zieliński
CRIME and BREACH attacks on TLS/SSL leverage the fact that compression ratio is not hidden by encryption in order to recover the content of secrets. We introduce SafeDeflate---a modification of the standard Deflate algorithm whose compression ratio does not leak information about secret tokens. The modification is compatible with existing Deflate and gzip decompressors. We introduce a model in which an attacker can obtain ciphertexts of arbitrary compressed plaintexts containing secret values. We then prove that SafeDeflate is secure in this model.
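The leak that SafeDeflate removes is easy to reproduce with plain Deflate (this demo of ours shows the vulnerable baseline, not SafeDeflate; the secret value is invented): when an attacker-controlled guess is compressed alongside a secret, a correct guess is absorbed by a back-reference and the output, whose length survives encryption, shrinks.

    # CRIME/BREACH-style length oracle against plain Deflate (zlib).
    import zlib

    secret = b"token=f81d4fae7dec"   # hypothetical secret for the demo

    def observed_len(guess):
        # The attacker sees only the length of the compressed (then
        # encrypted) message containing its guess next to the secret.
        return len(zlib.compress(guess + secret))

    right = observed_len(b"token=f81d4fae")   # long match with the secret
    wrong = observed_len(b"token=00000000")   # matches only the prefix
    print(right, wrong)  # the correct guess typically compresses shorter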
Last updated:  2018-06-11
Computing generator in cyclotomic integer rings
Thomas Espitau, Pierre-Alain Fouque, Alexandre Gélin, Paul Kirchner
The Principal Ideal Problem (resp. Short Principal Ideal Problem), abbreviated PIP (resp. SPIP), consists in finding a generator (resp. short generator) of a principal ideal in the ring of integers of a number field. Several lattice-based cryptosystems rely on the presumed hardness of these two problems. Yet, in practice, most of them do not use an arbitrary number field but a power-of-two cyclotomic field. The Smart and Vercauteren fully homomorphic encryption scheme and the multilinear map of Garg, Gentry and Halevi epitomize this common restriction. Recently, Cramer, Ducas, Peikert and Regev showed that solving the SPIP in such cyclotomic rings boils down to solving the PIP. We complete their result with an algorithm that solves PIP in cyclotomic fields in subexponential time $L_{|\Delta_K|} (1/2) = 2^{N^{1/2+o(1)}}$, where $\Delta_K$ denotes the discriminant of the number field and $N$ its degree. This asymptotic complexity should be compared with the one obtained by the method of Biasse and Fieker, which also aims at finding a generator but runs in $L_{|\Delta_K|} (2/3)$. Besides this theoretical improvement, our algorithm makes it possible to recover in practice the secret key of the Smart and Vercauteren scheme for the smallest proposed parameters.
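For reference, the $L$-notation used above is the standard subexponential notation
$$L_{|\Delta_K|}(\alpha) = \exp\Big(O\big((\log |\Delta_K|)^{\alpha}\,(\log\log |\Delta_K|)^{1-\alpha}\big)\Big),$$
so that for power-of-two cyclotomic fields, where $\log |\Delta_K| = N^{1+o(1)}$, the value $L_{|\Delta_K|}(1/2)$ matches the stated $2^{N^{1/2+o(1)}}$.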
Last updated:  2018-01-05
Two Simple Composition Theorems with H-coefficients
Jacques Patarin
We present two new and simple theorems showing that when we compose permutation generators with independent keys, the ``quality'' of CCA security increases. These theorems (Theorems 2 and 5 of this paper) are written in terms of H-coefficients (which are, up to normalization factors, nothing other than transition probabilities). We then use these theorems in the classical analysis of Random Feistel Schemes (i.e. Luby-Rackoff constructions) and compare the results obtained with the bounds obtained with the coupling technique. Finally, we show an interesting difference between 5 and 6 Random Feistel Schemes. With 5 rounds on $2n$ bits $\rightarrow 2n$ bits, when the number of queries $q$ satisfies $\sqrt{2^n} \ll q \ll 2^n$, we have some ``holes'' in the H-coefficient values, i.e. some H values are much smaller than the average value of H. This property of 5 rounds no longer exists at 6 rounds.
Last updated:  2017-02-28
Constant-deposit multiparty lotteries on Bitcoin
Massimo Bartoletti, Roberto Zunino
An active research trend is to exploit the consensus mechanism of cryptocurrencies to secure the execution of distributed applications. In particular, some recent works have proposed fair lotteries which work on Bitcoin. These protocols, however, require a deposit from each player which grows quadratically with the number of players. We propose a fair lottery on Bitcoin which only requires a constant deposit.
Last updated:  2016-10-06
Improving the lower bound on the maximum nonlinearity of 1-resilient Boolean functions and designing functions satisfying all cryptographic criteria
WeiGuo Zhang, Enes Pasalic
In this paper, we improve the lower bound on the maximum nonlinearity of 1-resilient Boolean functions, for $n$ even, by proposing a method of constructing this class of functions that attains the best nonlinearity currently known. Thus for the first time, at least for small values of $n$, the upper bound on nonlinearity can be reached in a deterministic manner, in contrast to the heuristic search methods proposed previously. The nonlinearity of these functions is extremely close to the maximum nonlinearity attained by bent functions, and it might be the case that this is the highest possible nonlinearity of 1-resilient functions. Apart from this theoretical contribution, it turns out that the cryptographic properties of these functions are overall good, apart from their moderate resistance to fast algebraic attacks (FAA). This weakness is repaired by a suitable modification of the original functions, giving a class of balanced functions with almost optimal resistance to FAA whose nonlinearity is better than the nonlinearity achieved by other methods.
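For reference, the benchmark invoked above is the universal nonlinearity bound for Boolean functions on an even number $n$ of variables, attained exactly by bent functions (which, however, are never balanced or resilient):
$$nl(f) \le 2^{n-1} - 2^{n/2-1}.$$
A 1-resilient function necessarily falls below this value, which is why nonlinearity ``extremely close'' to the bent bound is the natural target.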
Last updated:  2017-02-15
Collusion-Resistant Broadcast Encryption with Tight Reductions and Beyond
Linfeng Zhou
The issue of tight security for identity-based encryption schemes (\(\mathsf{IBE}\)) in bilinear groups has been widely investigated and many optimal properties have been achieved. Recently, a tightly secure IBE scheme in bilinear groups under the multi-challenge setting was achieved by Chen et al. (to appear in PKC 2017), and their scheme even achieves constant-size public parameters and is adaptively secure. However, we note that the issue of tight security for broadcast encryption schemes (\(\mathsf{BE}\)) in bilinear groups has received less attention so far. Indeed, current broadcast encryption systems in bilinear groups are either not tightly secure or based on non-static assumptions. In this work we mainly focus on the issue of tight security for standard broadcast encryption schemes\footnote{We utilize the syntax of broadcast encryption schemes under the key-encapsulation setting in this work; it is easily transformed into one under the standard setting.}. We construct the \textit{first} tightly secure broadcast encryption scheme from static assumptions (i.e., decisional subgroup assumptions) in the selective security model by utilizing improved techniques derived from the Déjà Q framework (Eurocrypt 2014, TCC-A 2016). The proof of our construction incurs only \(O(\log n)\) or \(O(\log \lambda)\) security loss, where \(n\) is the number of users in the system and \(\lambda\) is the security parameter. Following this result, we present a tightly secure non-zero inner product encryption scheme (\(\mathsf{NIPE}\)) from decisional subgroup assumptions in the selective security model. This NIPE scheme has the same parameter sizes as our BE scheme and incurs only \(O(\log n)\) or \(O(\log \lambda)\) security loss as well, where \(n\) is the dimension of the inner product space and \(\lambda\) is the security parameter. Finally, we present a tightly secure functional commitment scheme (\(\mathsf{FC}\)) for linear functions, which was introduced by Libert et al. (ICALP 2016). In contrast with their scheme, which suffers \(O(n)\) security loss during the reduction, our FC scheme incurs only \(O(\log n)\) or \(O(\log \lambda)\) security loss.
Last updated:  2017-02-21
ISAP -- Towards Side-Channel Secure Authenticated Encryption
Christoph Dobraunig, Maria Eichlseder, Stefan Mangard, Florian Mendel, Thomas Unterluggauer
Side-channel attacks and in particular differential power analysis (DPA) attacks pose a serious threat to cryptographic implementations. One approach to counteract such attacks is cryptographic schemes based on fresh re-keying. In settings of pre-shared secret keys, such schemes render DPA attacks infeasible by deriving session keys and by ensuring that the attacker cannot collect side-channel leakage on the session key during cryptographic operations with different inputs. While these schemes can be applied to secure standard communication settings, current re-keying approaches are unable to provide protection in settings where the same input needs to be processed multiple times. In this work, we therefore adapt the re-keying approach and present a symmetric authenticated encryption scheme that is secure against DPA attacks and does not have such a usage restriction. This means that our scheme fully complies with the requirements given in the CAESAR call and hence can be used like other nonce-based authenticated encryption schemes without loss of side-channel protection. Its resistance against side-channel analysis is highly relevant for several applications in practice, like bulk storage settings in general and the protection of FPGA bitfiles and firmware images in particular.
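As a rough illustration of the fresh re-keying idea (this is not ISAP itself), here is a minimal Python sketch in which BLAKE2s serves as a hypothetical stand-in for the re-keying function:

    # Each operation uses a fresh session key derived from the pre-shared
    # master key and a nonce, so the long-term key never processes enough
    # varying inputs for a DPA attack to accumulate useful leakage.
    import hashlib, secrets

    MASTER_KEY = secrets.token_bytes(16)   # pre-shared long-term key

    def session_key(nonce: bytes) -> bytes:
        return hashlib.blake2s(nonce, key=MASTER_KEY).digest()

    n1, n2 = secrets.token_bytes(16), secrets.token_bytes(16)
    assert session_key(n1) != session_key(n2)   # fresh key per nonce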
Last updated:  2017-05-25
Revisiting Covert Multiparty Computation
Geoffroy Couteau
Is it feasible for parties to securely evaluate a function on their joint inputs, while hiding not only their private inputs, but even the very fact that they are taking part in the protocol? This intriguing question was given a positive answer in the two-party case at STOC'05, and in the general case at FOCS'07, under the name of covert multiparty computation (CMPC). A CMPC protocol allows $n$ players with inputs $(x_1, \dots, x_n)$ to compute a function $f$ with the following guarantees: if all the parties are taking part in the protocol, and if the result of the computation is favorable to all the parties, then they get to learn $f(x_1, \dots, x_n)$ (and nothing more); else, when the result is not favorable to all the parties, or if some player does not participate in the computation, no one gets to learn anything (and in particular, no player can learn whether any of the other parties was indeed participating in the protocol). While previous works proved the existence of CMPC under standard assumptions, their candidate CMPC protocols were exclusively of theoretical interest. In this work, we revisit the design of CMPC protocols and show that, perhaps surprisingly, this very strong security notion can be achieved essentially for free. More specifically, we show how to build a CMPC protocol out of a standard, state-of-the-art MPC protocol, where both the communication and the computation are the same as in the original protocol, up to an additive factor independent of the size of the circuit. Along the way, we prove two variants of the UC theorem which greatly simplify the design and the security analysis of CMPC protocols.
Last updated:  2017-02-20
Orthogonalized Lattice Enumeration for Solving SVP
Zhongxiang Zheng, Xiaoyun Wang, Guangwu Xu, Yang Yu
In 2014, the orthogonalized integer representation was proposed independently by Ding et al., using a genetic algorithm, and by Fukase et al., using a sampling technique, to solve SVP. Their results are promising. In this paper, we consider sparse orthogonalized integer representations for shortest vectors and propose a new enumeration method, called orthogonalized enumeration, by integrating such a representation. Furthermore, we present a mixed BKZ method, called MBKZ, that alternately applies orthogonalized enumeration and other existing enumeration methods. Compared to the existing ones, our methods have greater efficiency and achieve exponential speedups both in theory and in practice for solving SVP. Implementations of our algorithms have been tested to be effective in solving challenging lattice problems. We also develop new techniques to reduce the enumeration space, which have been demonstrated to be efficient experimentally, though a quantitative analysis of their success probability is not available.
Last updated:  2016-10-01
Functional Encryption for Computational Hiding in Prime Order Groups via Pair Encodings
Jongkil Kim, Willy Susilo, Fuchun Guo, Man Ho Au
Lewko and Waters introduced the computational hiding technique in Crypto'12. In their technique, two computational assumptions that achieve selective and co-selective security proofs lead to adaptive security of an encryption scheme. Later, the pair encoding framework was introduced by Attrapadung in Eurocrypt'14. The pair encoding framework generalises the computational hiding technique for functional encryption (FE). It has been used to achieve a number of new FE schemes, such as FE for regular languages and unbounded attribute-based encryption allowing multi-use of attributes. Nevertheless, Attrapadung's generalised construction of pair encodings for those schemes is adaptively secure only in composite order groups, which leads to efficiency loss. It remains a challenging task to explore constructions in prime order groups for efficiency gains, which leaves a research gap in the existing literature. In this work, we aim to address this drawback by proposing a new generalised construction for pair encodings in prime order groups. Our construction leads to a number of new FE schemes in prime order groups, which have previously been introduced only in composite order groups by Attrapadung.
Last updated:  2016-10-01
Secure Computation in Online Social Networks
Foteini Baldimtsi, Dimitrios Papadopoulos, Stavros Papadopoulos, Alessandra Scafuro, Nikos Triandopoulos
Apart from their numerous other benefits, online social networks (OSNs) allow users to jointly compute on each other’s data (e.g., profiles, geo-locations, medical records, etc.). Privacy issues naturally arise in this setting due to the sensitive nature of the exchanged information. Ideally, nothing about a user’s data should be revealed to the OSN provider or "non-friend" users, and even her "friends" should only learn the output of a joint computation. In this work we propose the first security framework to capture these strong privacy guarantees for general-purpose computation. We also achieve two additional attractive properties: users do not need to be online while their friends compute on their data, and any user value uploaded at the server can be repeatedly used in multiple computations. We formalize our framework in the setting of secure multi-party computation (MPC) and provide two instantiations: the first is a non-trivial adaptation of garbled circuits that converts inputs under different keys to ones under the same key, and the second is based on two-party mixed protocols and involves a novel two-party re-encryption module. We experimentally validate the efficiency of our instantiations using state-of-the-art tools for two concrete use-cases.
Last updated:  2016-10-01
Isogeny graphs of ordinary abelian varieties
Ernest Hunter Brooks, Dimitar Jetchev, Benjamin Wesolowski
Fix a prime number $\ell$. Graphs of isogenies of degree a power of $\ell$ are well-understood for elliptic curves, but not for higher-dimensional abelian varieties. We study the case of absolutely simple ordinary abelian varieties over a finite field. We analyse graphs of so-called $\mathfrak l$-isogenies, establishing that they are (almost) volcanoes in any dimension. Specializing to the case of principally polarizable abelian surfaces, we then exploit this structure to describe graphs of a particular class of isogenies known as $(\ell, \ell)$-isogenies: those whose kernels are maximal isotropic subgroups of the $\ell$-torsion for the Weil pairing. We use these two results to give an algorithm producing a path of computable isogenies from an arbitrary absolutely simple ordinary abelian surface towards one with maximal endomorphism ring, which has immediate consequences for the CM-method in genus 2, for computing explicit isogenies, and for the random self-reducibility of the discrete logarithm problem in genus 2 cryptography.
Last updated:  2016-10-01
Bitsliced Masking and ARM: Friends or Foes?
Wouter de Groot, Kostas Papagiannopoulos, Antonio de La Piedra, Erik Schneider, Lejla Batina
Software-based cryptographic implementations can be vulnerable to side-channel analysis. Masking countermeasures rank among the most prevalent techniques against it, formally ensuring protection against value-based leakages. However, their applicability is hampered by two factors. First, a masking countermeasure involves a computational overhead that can render implementations inefficient. Second, physical effects such as glitches and distance-based leakages can reduce the security order in practice, rendering the masking protection less effective. This paper attempts to address both factors. In order to reduce the computational cost, we implement a high-throughput, bitsliced, 2nd-order masked implementation of the PRESENT cipher, using assembly on the ARM Cortex-M4. The implementation outperforms the current state of the art and is capable of encrypting a 64-bit block of plaintext in 6,532 cycles (excluding RNG), using 1,644 bytes of data RAM and 1,552 bytes of code memory. Second, we analyze experimentally the effectiveness of masking on ARM devices, i.e. we examine the effects of distance-based leakages on the security order of our implementation. We confirm the theoretical model behind distance leakages for the first time on ARM-based architectures.
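As a rough illustration of the masking idea (a sketch of the generic countermeasure, not the paper's ARM assembly), a $d$-th order Boolean masking splits each secret byte into $d+1$ random shares whose XOR is the secret, so that any $d$ shares are jointly independent of it:

    # 2nd-order Boolean masking: x is split into three shares; observing
    # any two of them reveals nothing about x.
    import secrets

    def mask(x: int, order: int = 2, bits: int = 8):
        shares = [secrets.randbits(bits) for _ in range(order)]
        last = x
        for s in shares:
            last ^= s                 # last = x XOR (all random shares)
        return shares + [last]

    def unmask(shares):
        out = 0
        for s in shares:
            out ^= s
        return out

    shares = mask(0xAB)               # three shares for 2nd-order masking
    assert unmask(shares) == 0xAB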
Last updated:  2016-10-01
High throughput in slices: the case of PRESENT, PRINCE and KATAN64 ciphers
Kostas Papagiannopoulos
This paper presents high-throughput assembly implementations of the PRESENT, PRINCE and KATAN64 ciphers for the ATtiny family of AVR microcontrollers. We report throughput records, achieving 2967 clock cycles per block encryption for PRESENT, 1803 cycles for PRINCE and 23671 cycles for KATAN64. In addition, we offer insight into the `slicing' techniques used for high throughput and their application to lightweight cryptographic implementations. We also demonstrate the speed-memory tradeoff by constructing high-throughput implementations with large memory requirements.
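As a rough illustration of the `slicing' idea behind these throughput records, here is a minimal Python sketch: bit $i$ of many independent blocks is packed into one machine word, so a single bitwise instruction acts on all blocks at once:

    # Toy bitsliced NOT over eight 1-bit blocks packed into one word.
    blocks = [1, 0, 1, 1, 0, 0, 1, 0]        # eight independent 1-bit blocks
    slice0 = 0
    for j, b in enumerate(blocks):
        slice0 |= (b & 1) << j               # transpose: one bit per block
    out = slice0 ^ 0xFF                      # one instruction, eight blocks
    assert [(out >> j) & 1 for j in range(8)] == [b ^ 1 for b in blocks]

A real cipher implementation applies the same trick to every gate of the S-box, amortizing each instruction over the register width.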
Last updated:  2017-12-14
High-Throughput Secure Three-Party Computation for Malicious Adversaries and an Honest Majority
Jun Furukawa, Yehuda Lindell, Ariel Nof, Or Weinstein
In this paper, we describe a new protocol for secure three-party computation of any functionality, with an honest majority and a \textit{malicious} adversary. Our protocol has both an information-theoretic and computational variant, and is distinguished by extremely low communication complexity and very simple computation. We start from the recent semi-honest protocol of Araki et al. (ACM CCS 2016) in which the parties communicate only a single bit per AND gate, and modify it to be secure in the presence of malicious adversaries. Our protocol follows the paradigm of first constructing Beaver multiplication triples and then using them to verify that circuit gates are correctly computed. As in previous work (e.g., the so-called TinyOT and SPDZ protocols), we rely on the cut-and-choose paradigm to verify that triples are correctly constructed. We are able to utilize the fact that at most one of three parties is corrupted in order to construct an extremely simple and efficient method of constructing such triples. We also present an improved combinatorial analysis for this cut-and-choose which can be used to achieve improvements in other protocols using this approach.
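As a rough illustration of the Beaver-triple paradigm mentioned above, here is a toy Python sketch with a trusted dealer and additive shares (this illustrates only the multiplication paradigm, not the paper's three-party protocol or its cut-and-choose verification):

    # Multiply secret-shared x and y using a pre-distributed triple
    # (a, b, c) with c = a*b: open d = x - a and e = y - b, then
    # x*y = d*e + d*b + e*a + c, computable locally on shares.
    import random

    P = 2**61 - 1                             # prime modulus

    def share(x, n=3):
        s = [random.randrange(P) for _ in range(n - 1)]
        return s + [(x - sum(s)) % P]

    def reveal(shares):
        return sum(shares) % P

    x, y = 12345, 67890
    a, b = random.randrange(P), random.randrange(P)
    xs, ys = share(x), share(y)
    sa, sb, sc = share(a), share(b), share((a * b) % P)

    d = reveal([(xi - ai) % P for xi, ai in zip(xs, sa)])   # public value
    e = reveal([(yi - bi) % P for yi, bi in zip(ys, sb)])   # public value

    zs = [(d * bi + e * ai + ci) % P for ai, bi, ci in zip(sa, sb, sc)]
    zs[0] = (zs[0] + d * e) % P               # one party adds the constant
    assert reveal(zs) == (x * y) % P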
Last updated:  2017-09-25
Stadium: A Distributed Metadata-Private Messaging System
Nirvan Tyagi, Yossi Gilad, Derek Leung, Matei Zaharia, Nickolai Zeldovich
Private communication over the Internet remains a challenging problem. Even if messages are encrypted, it is hard to deliver them without revealing metadata about which pairs of users are communicating. Scalable anonymity systems, such as Tor, are susceptible to traffic analysis attacks that leak metadata. In contrast, the largest-scale systems with metadata privacy require passing all messages through a small number of providers, requiring a high operational cost for each provider and limiting their deployability in practice. This paper presents Stadium, a point-to-point messaging system that provides metadata and data privacy while scaling its work efficiently across hundreds of low-cost providers operated by different organizations. Much like Vuvuzela, the current largest-scale metadata-private system, Stadium achieves its provable guarantees through differential privacy and the addition of noisy cover traffic. The key challenge in Stadium is limiting the information revealed from the many observable traffic links of a highly distributed system, without requiring an overwhelming amount of noise. To solve this challenge, Stadium introduces techniques for distributed noise generation and differentially private routing as well as a verifiable parallel mixnet design where the servers collaboratively check that others follow the protocol. We show that Stadium can scale to support 4X more users than Vuvuzela using servers that cost an order of magnitude less to operate than Vuvuzela nodes.
Last updated:  2016-10-01
Optimizing Secure Computation Programs with Private Conditionals
Peeter Laud, Alisa Pankova
Secure multiparty computation platforms are often provided with a programming language that allows developers to write privacy-preserving applications without thinking about the underlying cryptography. The control flow of these programs is expensive to hide, hence they typically disallow branching on private values. Application programmers have to specify their programs in terms of the allowed constructions, either using ad-hoc methods to avoid such branchings, or using the general methodology of executing all branches and obliviously selecting the effects of one at the end. There may be compiler support for the latter. The execution of all branches introduces significant computational overhead. If the branches perform similar private operations, then it may make sense to compute repeating patterns only once, even though the necessary bookkeeping also has overheads. In this paper, we propose a program optimization that does exactly that, allowing the overhead of private conditionals to be reduced. The optimization is quite general, and can be applied to various privacy-preserving platforms.
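As a rough illustration of the ``execute all branches and select obliviously'' pattern that this optimization targets, here is a minimal Python sketch; on a real platform the condition bit and both branch results would live in secret-shared form:

    # A private bit c selects between the effects of the two branches
    # arithmetically, with no data-dependent control flow.
    def oblivious_select(c: int, then_val: int, else_val: int) -> int:
        return c * then_val + (1 - c) * else_val

    x = 7
    # Oblivious version of: y = x * 2 if x > 5 else x + 1
    y = oblivious_select(int(x > 5), x * 2, x + 1)   # both branches computed
    assert y == 14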
Last updated:  2016-10-01
A New Class of Differentially 4-uniform Permutations from the Inverse Function
Jian Bai, Dingkang Wang
Differentially 4-uniform permutations on $\mathbb{F}_{2^{2k}}$ with high nonlinearity and algebraic degree are often used in block ciphers and some stream ciphers as Substitution boxes. Recently, Chen et al. (An equivalent condition on the switching construction of differentially 4-uniform permutations on $\mathbb{F}_{2^{2k}}$ from the inverse function, International Journal of Computer Mathematics, DOI:10.1080/00207160.2016.1167884) presented an equivalent condition on the switching construction. More precisely, they presented a sufficient and necessary condition for differentially 4-uniform permutations on $\mathbb{F}_{2^{2k}}$ of the form $G(x)=x^{-1}+f(x^{-1})$, where $f$ is a Boolean function. However, the number of functions satisfying the condition is so enormous that it is difficult to find all of them. In this paper, a new class of such functions is constructed. These functions may provide more options for the design of Substitution boxes.
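For reference, the differential uniformity of a map $F$ on $\mathbb{F}_{2^{2k}}$ is
$$\delta_F = \max_{a \neq 0,\, b} \#\{x \in \mathbb{F}_{2^{2k}} : F(x+a)+F(x)=b\},$$
and $F$ is differentially 4-uniform when $\delta_F = 4$. The inverse function $x \mapsto x^{-1}$ (with $0 \mapsto 0$) itself attains $\delta_F = 4$ in even dimensions, which motivates the switching construction $G(x)=x^{-1}+f(x^{-1})$ above.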
Last updated:  2016-10-29
Fast Actively Secure OT Extension for Short Secrets
Arpita Patra, Pratik Sarkar, Ajith Suresh
Oblivious Transfer (OT) is one of the most fundamental cryptographic primitives, with widespread application in general secure multi-party computation (MPC) as well as in a number of tailored and special-purpose problems of interest such as private set intersection (PSI), private information retrieval (PIR), and contract signing, to name a few. Often the instantiations of OT require prohibitive communication and computation complexity. OT extension protocols were introduced to compute a very large number of OTs, referred to as extended OTs, at the cost of a small number of OTs, referred to as seed OTs. We present a fast OT extension protocol for small secrets in the active setting. Our protocol, when used to produce $1$-out-of-$n$ OTs, outperforms all known actively secure OT extensions. Our protocol is built on the semi-honest secure extension protocol of Kolesnikov and Kumaresan of CRYPTO'13 (referred to as the KK13 protocol henceforth), which is the best known OT extension for short secrets. At the heart of our protocol lies an efficient consistency checking mechanism that relies on the linearity of Walsh-Hadamard (WH) codes. Asymptotically, our protocol adds a communication overhead of $O(\mu \log{\kappa})$ bits over the KK13 protocol irrespective of the number of extended OTs, where $\kappa$ and $\mu$ refer to the computational and statistical security parameters respectively. Concretely, our protocol, when used to generate a large enough number of OTs, adds only $0.011-0.028\%$ communication overhead and $4-6\%$ runtime overhead both in LAN and WAN over the KK13 extension. The runtime overheads drop below $2\%$ when, in addition, the number of inputs of the sender in the extended OTs is large enough. As an application of our proposed extension protocol, we show that it can be used to obtain the most efficient PSI protocol secure against a malicious receiver and a semi-honest sender.
Last updated:  2017-08-24
Key Reconciliation Protocols for Error Correction of Silicon PUF Responses
Brice Colombier, Lilian Bossuet, David Hély, Viktor Fischer
Physical Unclonable Functions (PUFs) are promising primitives for the lightweight authentication of an integrated circuit (IC). Indeed, by extracting an identifier from random process variations, they allow each instance of a design to be uniquely identified. However, the extracted identifiers are not stable enough to be used as is, and hence need to be corrected first. This is currently achieved using error-correcting codes in secure sketches, which generate helper data through a one-time procedure. As an alternative, we propose key reconciliation protocols. This interactive method, originating from quantum key distribution, allows two entities to correct errors in their respective correlated keys by discussing over a public channel. We believe that this can also be used by a device and a remote server to agree on two different responses to the same challenge obtained from the same PUF at different times. This approach has the advantage of requiring very few logic resources on the device side. The information leakage caused by the key reconciliation process is limited and easily computable. Results of implementation on FPGA targets are presented, showing that it is the most lightweight error-correction module to date.
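As a rough illustration of the kind of reconciliation the paper builds on (a Cascade/BBBSS-style round, sketched in Python; the actual protocol and its parameters differ), the two sides compare block parities over the public channel and binary-search a mismatch:

    # One reconciliation round: only parities travel over the public
    # channel, so the leakage is a countable number of parity bits.
    def parity(bits):
        return sum(bits) % 2

    def fix_block(dev, srv, lo, hi):
        # Binary search for a single erroneous bit inside dev[lo:hi].
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if parity(dev[lo:mid]) != parity(srv[lo:mid]):
                hi = mid
            else:
                lo = mid
        dev[lo] ^= 1                       # flip the located bit

    server = [1, 0, 1, 1, 0, 0, 1, 0]      # response regenerated server-side
    device = list(server); device[5] ^= 1  # noisy on-device response
    if parity(device) != parity(server):
        fix_block(device, server, 0, len(device))
    assert device == server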
Last updated:  2019-02-06
Kummer for Genus One over Prime Order Fields
Sabyasachi Karati, Palash Sarkar
This work considers the problem of fast and secure scalar multiplication using curves of genus one defined over a field of prime order. Previous work by Gaudry and Lubicz in 2009 had suggested the use of the associated Kummer line to speed up scalar multiplication. In the present work, we explore this idea in detail. The first task is to obtain an elliptic curve in Legendre form which satisfies necessary security conditions such that the associated Kummer line has small parameters and a base point with small coordinates. It turns out that the ladder step on the Kummer line supports parallelism and can be implemented very efficiently in constant time using the single-instruction multiple-data (SIMD) operations available in modern processors. For the 128-bit security level, this work presents three Kummer lines denoted as $K_1:={\sf KL2519(81,20)}$, $K_2:={\sf KL25519(82,77)}$ and $K_3:={\sf KL2663(260,139)}$ over the three primes $2^{251}-9$, $2^{255}-19$ and $2^{266}-3$ respectively. Implementations of scalar multiplications for all three Kummer lines using Intel intrinsics have been done and the code is publicly available. Timing results on the Skylake and the Haswell processors of Intel indicate that both fixed base and variable base scalar multiplications for $K_1$ and $K_2$ are faster than those achieved by {\sf Sandy2x}, which is a highly optimised SIMD implementation in assembly of the well known {\sf Curve25519}; for example, on Skylake, variable base scalar multiplication on $K_1$ is faster than {\sf Curve25519} by about 30\%. On Skylake, both fixed base and variable base scalar multiplication for $K_3$ are faster than {\sf Sandy2x}; whereas on Haswell, fixed base scalar multiplication for $K_3$ is faster than {\sf Sandy2x} while variable base scalar multiplication for both $K_3$ and {\sf Sandy2x} take roughly the same time. In fact, on Skylake, $K_3$ is both faster and also offers about 5 bits of higher security compared to {\sf Curve25519}. In practical terms, the particular Kummer lines that are introduced in this work are serious candidates for deployment and standardisation. We further illustrate the usefulness of the proposed Kummer lines by instantiating the quotient Digital Signature Algorithm (qDSA) on all the three Kummer lines.
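As a rough illustration of the ladder step discussed above, here is a minimal Python sketch of the Montgomery-ladder structure: every scalar bit triggers exactly one doubling and one differential addition, which is what makes the Kummer-line step uniform and SIMD-friendly. Integer arithmetic stands in for the Kummer-line xDBL/xADD formulas, and a constant-time implementation would replace the branch with a conditional swap:

    def mont_ladder(k: int, P: int, nbits: int) -> int:
        add = lambda A, B: A + B       # placeholder for differential addition
        dbl = lambda A: 2 * A          # placeholder for doubling
        R0, R1 = 0, P                  # 0 plays the identity; R1 - R0 == P
        for i in reversed(range(nbits)):
            if (k >> i) & 1 == 0:
                R0, R1 = dbl(R0), add(R0, R1)
            else:
                R0, R1 = add(R0, R1), dbl(R1)
        return R0                      # equals k * P

    assert mont_ladder(45, 7, 6) == 45 * 7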
Last updated:  2016-09-29
A Comparative S-Index in Factoring RSA Modulus via Lucas Sequences
Nur Azman Abu, Shekh Faisal Abdul-Latip, Muhammad Rezal Kamel Ariffin
General Lucas sequences are practically useful in cryptography. Over the past quarter century, factoring a large RSA modulus into its primes has been one of the most important and most challenging problems in computational number theory. A factoring technique on an RSA modulus is mainly hindered by the strong prime properties. The success in factoring a few large RSA moduli within the last few decades has been due to computing prowess overcoming one strong prime of the modulus. In this paper, some useful properties of Lucas sequences are explored for factoring an RSA modulus. This paper introduces the S-index formation for solving quadratic equations modulo $N$. The S-index pattern is very useful in designing an algorithm to factor an RSA modulus. At any instant in the factoring algorithm, the accumulated result stands independently; in effect, there is no clear direction in which to maneuver, whether to go left or right. The S-index adds another comparative tool for better maneuvering in a factoring process. On one hand, it remains a theoretical challenge to overcome the strong prime properties. On the other hand, it remains a computational challenge to factor an RSA modulus within polynomial running time. This paper proposes an avenue to do both using general Lucas sequences.
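For reference, the general Lucas sequences $U_n(P,Q)$ and $V_n(P,Q)$ with parameters $P$ and $Q$ are defined by the standard recurrences
$$U_0 = 0,\quad U_1 = 1,\quad U_{n+1} = P\,U_n - Q\,U_{n-1}, \qquad V_0 = 2,\quad V_1 = P,\quad V_{n+1} = P\,V_n - Q\,V_{n-1}.$$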
Last updated:  2016-09-29
Linear Complexity of Designs based on Coordinate Sequences of LRS and on Digital Sequences of Matrix/Skew LRS Coordinate Sequences over Galois Ring
Vadim N. Tsypyschev
This article continues the investigation of ways to obtain pseudo-random sequences over a Galois field by generating an LRS over a Galois ring and complicating it. Previous work is available at http://eprint.iacr.org/2016/212 In this work we provide lower estimates of the rank for sequences generated by two different designs based on coordinate sequences of linear recurrent sequences (LRS) of maximal period (MP) over the Galois ring $R=GR(p^n,r)$, $p\ge 5$, $r\ge2$, with numbers $s$ such that $s=kr+2$, $k\in \mathbb{N}_0$, and based on digital sequences of coordinate sequences of matrix/skew MP LRS over such Galois rings.
Last updated:  2016-09-29
Concealing Secrets in Embedded Processors Designs
Hannes Gross, Manuel Jelinek, Stefan Mangard, Thomas Unterluggauer, Mario Werner
Side-channel analysis (SCA) attacks pose a serious threat to embedded systems. So far, research on masking as a countermeasure against SCA has focused merely on cryptographic algorithms, and masking has been implemented only for particular hardware or software implementations. However, the drawbacks of protecting specific implementations are the lack of flexibility in terms of the algorithms used, the impossibility of updating protected hardware implementations, and long development cycles for protecting new algorithms. Furthermore, cryptographic algorithms are usually just one part of an embedded system that operates on informational assets. Protecting only this part of a system is thus not sufficient for most security-critical embedded applications. In this work, we introduce a flexible, SCA-protected processor design based on the open-source V-scale RISC-V processor. The introduced processor design can be synthesized to defeat SCA attacks of arbitrary attack order. Once synthesized, the processor protects the computation on security-sensitive data against side-channel leakage. The benefits of our approach are (1) flexibility and updatability, (2) faster development of SCA-protected systems, (3) transparency for software developers, (4) an arbitrary SCA protection level, and (5) protection not only for cryptographic algorithms, but against leakage in general caused by processing sensitive data.
Last updated:  2016-09-29
Cryptography with Updates
Prabhanjan Ananth, Aloni Cohen, Abhishek Jain
Starting with the work of Bellare, Goldreich and Goldwasser [CRYPTO'94], a rich line of work has studied the design of updatable cryptographic primitives. For example, in an updatable signature scheme, it is possible to efficiently transform a signature over a message into a signature over a related message without recomputing a fresh signature. In this work, we continue this line of research, and perform a systematic study of updatable cryptography. We take a unified approach towards adding updatability features to recently studied cryptographic objects such as attribute-based encryption, functional encryption, witness encryption, indistinguishability obfuscation, and many others that support non-interactive computation over inputs. We, in fact, go further and extend our approach to classical protocols such as zero-knowledge proofs and secure multiparty computation. To accomplish this goal, we introduce a new notion of updatable randomized encodings that extends the standard notion of randomized encodings to incorporate updatability features. We show that updatable randomized encodings can be used to generically transform cryptographic primitives to their updatable counterparts. We provide various definitions and constructions of updatable randomized encodings based on varying assumptions, ranging from one-way functions to compact functional encryption.
Last updated:  2019-01-29
Actively Secure 1-out-of-N OT Extension with Application to Private Set Intersection
Michele Orrù, Emmanuela Orsini, Peter Scholl
This paper describes a 1-out-of-N oblivious transfer (OT) extension protocol with active security, which achieves very low overhead compared with the passively secure protocol of Kolesnikov and Kumaresan (Crypto 2013). Our protocol obtains active security using a consistency check which requires only simple computation and has a communication overhead that is independent of the total number of OTs to be produced. We prove its security in both the random oracle model and the standard model, assuming a variant of correlation robustness. We describe an implementation, which demonstrates that our protocol only incurs an overhead of around 5–30% on top of the passively secure protocol. Random 1-out-of-N OT is a key building block in recent, very efficient, passively secure private set intersection (PSI) protocols. Our random OT extension protocol has the interesting feature that it even works when N is exponentially large in the security parameter, provided that the sender only needs to obtain polynomially many outputs. We show that this can be directly applied to improve the performance of PSI, allowing the core private equality test and private set inclusion subprotocols to be carried out using just a single OT each. This leads to a reduction in communication of up to 3 times for the main component of PSI.
Last updated:  2016-09-28
Mistakes Are Proof That You Are Trying: On Verifying Software Encoding Schemes' Resistance to Fault Injection Attacks
Jakub Breier, Dirmanto Jap, Shivam Bhasin
Software encoding countermeasures are becoming increasingly popular among researchers proposing code-level prevention of the data-dependent leakage that allows an attacker to mount a side-channel attack. Recent trends show that it is possible to design a solution that does not require excessive overhead and yet provides a reasonable security level. However, if the device's leakage is hard to observe, an attacker can simply switch to a different class of physical attacks, such as fault injection attacks. Instead of stacking several layers of countermeasures, it is always more convenient to choose one that provides decent protection against several attack methods. Therefore, in our paper we use our custom-designed code analyzer to formally inspect a recently proposed software encoding countermeasure based on a device-specific encoding function, and compare it with other solutions, based either on balanced look-up tables or on balanced encoding. We also provide an experimental validation, using a laser fault injection setup. Our results show that the device-specific encoding scheme provides good protection against fault injection attacks, being capable of preventing the majority of faults under different fault models.
Last updated:  2017-03-09
Feeding Two Cats with One Bowl: On Designing a Fault and Side-Channel Resistant Software Encoding Scheme (Extended Version)
Jakub Breier, Xiaolu Hou
When it comes to side-channel countermeasures, software encoding schemes are becoming popular and provide a good level of security for general-purpose microcontrollers. However, these schemes are not designed to be fault resistant, and this property is discussed very rarely. Therefore, implementers have to pile up two different countermeasures in order to protect an algorithm against these two popular classes of attacks. In our paper, we discuss the fault resistance properties of encoding schemes in general. We define theoretical bounds that clearly show the possibilities and limitations of encoding-based countermeasures, together with trade-offs between side-channel and fault resistance. Moreover, we simulate several codes with respect to the most popular fault models, using a general-purpose microcontroller assembly implementation. Our algorithm shows how to add fault resistance to an encoding scheme that currently has the best side-channel resistance capabilities. As a result, using automated methods we are able to design a code that provides an optimal trade-off between side-channel and fault resistance.
Last updated:  2019-07-28
Scalable Private Set Intersection Based on OT Extension
Benny Pinkas, Thomas Schneider, Michael Zohner
Private set intersection (PSI) allows two parties to compute the intersection of their sets without revealing any information about items that are not in the intersection. It is one of the best studied applications of secure computation and many PSI protocols have been proposed. However, the variety of existing PSI protocols makes it difficult to identify the solution that performs best in a respective scenario, especially since they were not compared in the same setting. In addition, existing PSI protocols are several orders of magnitude slower than the insecure naive hashing solution which is used in practice. In this work, we review the progress made on PSI protocols and give an overview of existing protocols in various security models. We then focus on PSI protocols that are secure against semi-honest adversaries, take advantage of the most recent efficiency improvements in OT extension, propose significant optimizations to previous PSI protocols, and suggest a new PSI protocol whose run-time is superior to that of existing protocols. We compare the performance of the protocols both theoretically and experimentally, by implementing all protocols on the same platform, give recommendations on which protocol to use in a particular setting, and evaluate the progress on PSI protocols by comparing them to the currently employed insecure naive hashing protocol. We demonstrate the feasibility of our new PSI protocol by processing two sets with a billion elements each.
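For concreteness, the insecure naive hashing baseline mentioned above is simple enough to sketch in a few lines of Python: each party hashes its items and the intersection is computed on the hashes, which is brute-forceable for low-entropy items such as email addresses:

    import hashlib

    def hashed(items):
        return {hashlib.sha256(x.encode()).digest(): x for x in items}

    alice = hashed({"alice@a.com", "bob@b.com", "carol@c.com"})
    bob = hashed({"bob@b.com", "dave@d.com"})
    print([alice[h] for h in alice.keys() & bob.keys()])   # ['bob@b.com']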
Last updated:  2016-09-27
A Generalized Ideal Secret Sharing Scheme
Tanujay Sha
Sharing a secret efficiently amongst a group of participants is not easy, since there is always an adversary or eavesdropper trying to retrieve the secret. In secret sharing schemes, every participant is given a unique share. When the desired group of participants come together and provide their shares, the secret is obtained. For other combinations of shares, a garbage value is returned. A threshold secret sharing scheme was proposed by Shamir and Blakley independently. In such an $(n,t)$ threshold secret sharing scheme, the secret can be obtained when at least $t$ out of $n$ participants contribute their shares. This paper proposes a novel algorithm to reveal the secret only to the subsets of participants belonging to the access structure. This scheme implements totally generalized ideal secret sharing. Unlike threshold secret sharing schemes, this scheme reveals the secret only to the authorized sets of participants, not to an arbitrary set of users with cardinality at least $t$. Since any access structure can be realized with this scheme, it can be exploited to implement various access priorities and access control mechanisms. A major advantage of this scheme over existing ones is that the shares distributed to the participants are totally independent of the secret being shared. Hence, no restrictions are imposed on the scheme, and it finds wider use in real-world applications.
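As a rough illustration of the $(n,t)$ threshold baseline mentioned above, here is a minimal Python sketch of Shamir's scheme (the paper's generalized scheme for arbitrary access structures is different): the secret is the constant term of a random degree-$(t-1)$ polynomial, any $t$ shares interpolate it, and fewer reveal nothing:

    import random

    P = 2**127 - 1                           # prime field modulus

    def split(secret, n, t):
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def combine(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = split(123456789, n=5, t=3)
    assert combine(shares[:3]) == 123456789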
Last updated:  2016-09-28
The complexity of the connected graph access structure on seven participants
Massoud Hadian Dehkordi, Ali Safi
In this paper, we study an important problem in secret sharing: determining the exact value of, or bounds on, the complexity. First, we use the induced-subgraph complexity of a graph $G$ with access structure $\Gamma$ to obtain a lower bound on the complexity of $G$. Second, by applying decomposition techniques, we obtain an upper bound on the complexity of $G$. We determine the exact value of the complexity for each of ten graph access structures on seven participants. Also, we improve the known bounds on the complexity of six graph access structures on seven participants.
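For reference, the complexity (or information ratio) of an access structure $\Gamma$ is standardly defined as
$$\sigma(\Gamma) = \inf \max_{i} \frac{H(S_i)}{H(S)},$$
the infimum taken over all perfect secret sharing schemes realizing $\Gamma$, where $H(S_i)$ is the entropy of participant $i$'s share and $H(S)$ that of the secret; a scheme is ideal when this ratio equals 1.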
Last updated:  2016-09-24
Atomic-AES: A Compact Implementation of the AES Encryption/Decryption Core
Subhadeep Banik, Andrey Bogdanov, Francesco Regazzoni
The implementation of the AES encryption core by Moradi et al. at Eurocrypt 2011 is one of the smallest in terms of gate area. The circuit takes around 2400 gates and operates on an 8-bit datapath. However, this is an encryption-only core, unable to cater to block cipher modes like CBC and ELmD that require access to both the AES encryption and decryption modules. In this paper we investigate whether the basic circuit of Moradi et al. can be tweaked to provide the dual functionality of encryption and decryption (ENC/DEC) while keeping the hardware overhead as low as possible. As a result, we report an 8-bit serialized AES circuit that provides the functionality of both encryption and decryption and occupies around 2645 GE with a latency of 226 cycles. This is a substantial improvement over the next smallest AES ENC/DEC circuit (Grain of Sand) by Feldhofer et al., which takes around 3400 gates and has a latency of over 1000 cycles for both encryption and decryption.
Last updated:  2017-02-24
LIZARD - A Lightweight Stream Cipher for Power-constrained Devices
Matthias Hamann, Matthias Krause, Willi Meier
Time-memory-data (TMD) tradeoff attacks limit the security level of many classical stream ciphers (like $E_0$, A5/1, Trivium, Grain) to $\frac{1}{2}n$, where $n$ denotes the inner state length of the underlying keystream generator. In this paper, we present LIZARD, a lightweight stream cipher for power-constrained devices like passive RFID tags. Its hardware efficiency results from combining a Grain-like design with the $FP(1)$-mode, a recently suggested construction principle for the state initialization of stream ciphers, which offers provable $\frac{2}{3}n$-security against TMD tradeoff attacks aiming at key recovery. LIZARD uses 120-bit keys, 64-bit IVs and has an inner state length of 121 bit. It is supposed to provide 80-bit security against key recovery attacks. LIZARD allows the generation of up to $2^{18}$ keystream bits per key/IV pair, which would be sufficient for many existing communication scenarios like Bluetooth, WLAN or HTTPS.
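For context, the $\frac{1}{2}n$ limit quoted above comes from the classic birthday-type tradeoff of Babbage and Golić: with $M$ memorized states and $T$ online keystream samples, a collision with one of the $2^n$ inner states is expected once
$$T \cdot M \approx 2^n,$$
which balances at $T = M = 2^{n/2}$, i.e. an attack of cost about $2^{n/2}$ and hence a security level of $\frac{1}{2}n$ bits.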
Last updated:  2016-09-24
Secure Channel Injection and Anonymous Proofs of Account Ownership
Liang Wang, Rafael Pass, abhi shelat, Thomas Ristenpart
We introduce secure channel injection (SCI) protocols, which allow one party to insert a private message into another party's encrypted communications. We construct an efficient SCI protocol for communications delivered over TLS, and use it to realize anonymous proofs of account ownership for SMTP servers. This allows alice@mail.com to prove ownership of some email address @mail.com, without revealing ``alice'' to the verifier. We show experimentally that our system works with standard email server implementations as well as Gmail. We go on to extend our basic SCI protocol to realize a ``blind'' certificate authority: the account holder can obtain a valid X.509 certificate binding alice@mail.com to her public key, if she can prove ownership of some email address @mail.com. The authority never learns which email account is used.
Last updated:  2016-09-24
Bit Coincidence Mining Algorithm II
Koh-ichi Nagao
In 2012, Petit et al. showed that, under an algebraic-geometric assumption named the "First Fall Degree Assumption", the complexity of the ECDLP over a binary extension field ${\bf F}_{2^n}$ is in $O(\exp(n^{2/3+o(1)}))$, where $\lim_{n \to \infty} o(1)=0$, and there have been many generalizations and improvements of the complexity of the ECDLP under this assumption. In 2015, the author proposed the bit coincidence mining algorithm, which states that, under a heuristic assumption on the complexity of the XL algorithm, the complexity of the ECDLP of $E/{\bf F}_q$ over an arbitrary finite field, including prime fields, is in $O(\exp(n^{1/2+o(1)}))$, where $n \sim \log_2 \#E({\bf F}_q) \sim \log_2 q$. It is the first (heuristic) algorithm for solving the ECDLP over a prime field in subexponential complexity. In both works, the ECDLP is reduced to solving a large equations system, and under each assumption the complexity of solving the reduced equations system is subexponential (or polynomial). However, the obtained equations systems are too large to solve in practical time and space; these are only complexity results. The XL algorithm is an algorithm for solving a quadratic equations system consisting of $n$ variables and $m$ equations, where $n$ and $m$ are considered as parameters. Let $D=D(n,m)$ be the maximal degree of the polynomials that appear in the computation of solving the equations system by XL. Courtois et al. observe and assume the following: 1) there is a small integer $C_0$ such that $D(n,n+C_0)$ is usually in $O(\sqrt{n})$, and the cost of solving the equations system is in $O(\exp(n^{1/2+o(1)}))$. However, this observation is optimistic, and it must be accompanied by the following assumption: 2) the equations system has a small number of solutions over the algebraic closure (in this draft we assume the number of solutions is 0 or 1). In the previous version of the bit coincidence mining algorithm (2015), the number of solutions of the desired equations system over the algebraic closure is small and can be probabilistically controlled to be 1, so assumption 2) holds indirectly. In the author's view, the reason the XL algorithm, a beautiful heuristic, is not widely used is that a general equations system over a finite field does not satisfy assumption 2) (it has many solutions over the algebraic closure), and its complexity is much larger. In the previous draft, the author showed that the ECDLP of $E({\bf F}_q)$ reduces to solving an equations system consisting of $d-1$ variables and $d+C_0-1$ equations, where $C_0$ is an arbitrary positive integer and $d \sim C_0 \times \log_2 q$. So the complexity of solving the ECDLP is subexponential under the following assumption: a) there is some positive integer $C_0$ independent of $n$ such that, when solving a quadratic equations system consisting of $n$ variables and $m=n+C_0$ equations (under assumption 2)) by the XL algorithm, the maximal degree $D=D(n,m)$ of the polynomials appearing in this routine is in $O(\sqrt{n})$ with high probability. Here, we propose a new algorithm in which the ECDLP of $E({\bf F}_q)$ essentially reduces to solving an equations system consisting of $d-1$ variables and $\frac{b_0}{2}d$ equations, where $b_0(\ge 2)$ is an arbitrary positive integer called the block size and $d \sim (b_0-1)\log_{b_0} q$. We mainly treat the case of block size $b_0=3$, in which the ECDLP essentially reduces to solving an equations system consisting of about $2 \log_3 q$ variables and $3 \log_3 q$ equations, so that the desired assumption 1) always holds. Moreover, the number of solutions (over the algebraic closure) of this equations system can be probabilistically controlled to be 1, so the desired assumption 2) also holds. In the first part of this manuscript, the author presents the algorithm for constructing the equations system to which the ECDLP is reduced, and in the latter part, the author presents ideas and devices for increasing the number of equations, so that the obtained equations system is easily solved by the XL algorithm.
Last updated:  2016-09-24
Attacking embedded ECC implementations through cmov side channels
Erick Nascimento, Lukasz Chmielewski, David Oswald, Peter Schwabe
Side-channel attacks against implementations of elliptic-curve cryptography have been extensively studied in the literature and a large tool-set of countermeasures is available to thwart different attacks in different contexts. The current state of the art in attacks and countermeasures is nicely summarized in multiple survey papers, the most recent one by Danger et al. However, any combination of those countermeasures is ineffective against attacks that require only _a single trace_ and directly target a conditional move (cmov) -- an operation that is at the very foundation of all scalar-multiplication algorithms. This operation can either be implemented through arithmetic operations on registers or through various different approaches that all boil down to loading from or storing to a secret address. In this paper we demonstrate that such an attack is indeed possible for ECC software running on AVR ATmega microcontrollers, using a protected version of the popular $\mu$NaCl library as an example. For the targeted implementations, we are able to recover 99.6% of the key bits for the arithmetic approach and 95.3% of the key bits for the approach based on secret addresses, with confidence levels 76.1% and 78.8%, respectively. All publicly available ECC software for the AVR that we are aware of uses one of the two approaches and is thus in principle vulnerable to our attacks.
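As a rough illustration of the arithmetic flavor of the targeted operation, here is a minimal Python sketch of a branch-free conditional move built from an expanded mask (the very pattern whose power trace the paper shows still leaks the selection bit):

    MASK64 = (1 << 64) - 1

    def cmov(b: int, x: int, y: int) -> int:
        mask = (-b) & MASK64                  # b=1 -> all ones, b=0 -> zero
        return (x & mask) | (y & ~mask & MASK64)

    assert cmov(1, 0xAAAA, 0x5555) == 0xAAAA  # selects x
    assert cmov(0, 0xAAAA, 0x5555) == 0x5555  # selects y

No branch and no secret-dependent address is used, yet the single-trace attacks described above still recover the bit $b$ from the leakage of these register operations.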
Last updated:  2019-08-18
Side-Channel Leakage Evaluation and Detection Based on Communication Theory
Wei Yang, Yuchen Cao, Ke Ma, Hailong Zhang
Side-channel attacks (SCAs) pose a realistic and serious threat to crypto devices. Therefore, evaluating the SCA resilience of a crypto device is important and necessary. The SCA security evaluation criteria include an information-theoretic metric and a security metric. The former, i.e. mutual information (MI), measures the leakage amount of a crypto device. However, because the real leakage distribution of a crypto device is unknown, leakage evaluation is difficult. Commonly, there are two ways to estimate the leakage distribution of a device: non-parametric ones and parametric ones. The former may introduce a large error since the leakage model is not accurate. The latter is more precise since it can profile the leakage model, but may be infeasible in practice. To combine the merits of the two estimation approaches, we bypass the direct estimation of the device's leakage distribution and propose a non-profiling parametric estimation method. We analyze the side channel as a communication channel, and use the average MI of the communication channel to estimate the side-channel MI. Besides, we find that the channel capacity can furnish an upper bound on the leakage amount of the device. Interestingly, based on the communication channel characteristics, we find that if we perform a consistency check on the channel parameters, a leakage detection method can be developed. Furthermore, the proposed method is capable of finding the points of interest (POIs) in leakage traces and introduces few leakage points that cannot be used to mount SCAs. Finally, experiments show the effectiveness of the proposed leakage evaluation and detection methods.
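For reference, the MI metric referred to above measures, for a sensitive intermediate variable $X$ and observed leakage $L$, the quantity
$$I(X;L) = H(X) - H(X \mid L),$$
i.e. how much observing the leakage reduces the uncertainty about the sensitive value; the channel capacity invoked in the paper upper-bounds this quantity over all input distributions.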
Last updated:  2016-09-24
Breaking Cryptographic Implementations Using Deep Learning Techniques
Houssem Maghrebi, Thibault Portigliatti, Emmanuel Prouff
The template attack is the most common and powerful profiled side-channel attack. It relies on a realistic assumption regarding the noise of the device under attack: that the probability density function of the data is a multivariate Gaussian distribution. To relax this assumption, a recent line of research has investigated new profiling approaches, mainly by applying machine learning techniques. The results obtained are commensurate with, and in some particular cases better than, those of the template attack. In this work, we propose to continue this recent line of research by applying more sophisticated profiling techniques based on deep learning. Our experimental results confirm the overwhelming advantages of the resulting new attacks when targeting both unprotected and protected cryptographic implementations.
Last updated:  2016-09-22
Breaking Web Applications Built On Top of Encrypted Data
Paul Grubbs, Richard McPherson, Muhammad Naveed, Thomas Ristenpart, Vitaly Shmatikov
We develop a systematic approach for analyzing client-server applications that aim to hide sensitive user data from untrusted servers. We then apply it to Mylar, a framework that uses multi-key searchable encryption (MKSE) to build Web applications on top of encrypted data. We demonstrate that (1) the Popa-Zeldovich model for MKSE does not imply security against either passive or active attacks; (2) Mylar-based Web applications reveal users’ data and queries to passive and active adversarial servers; and (3) Mylar is generically insecure against active attacks due to system design flaws. Our results show that the problem of securing client-server applications against actively malicious servers is challenging and still unsolved. We conclude with general lessons for the designers of systems that rely on property-preserving or searchable encryption to protect data from untrusted servers.
Last updated:  2020-07-26
Snow White: Robustly Reconfigurable Consensus and Applications to Provably Secure Proof of Stake
Phil Daian, Rafael Pass, Elaine Shi
Decentralized cryptocurrencies have pushed deployments of distributed consensus to more stringent environments than ever before. Most existing protocols rely on proofs-of-work, which require expensive computational puzzles to enforce, imprecisely speaking, “one vote per unit of computation”. The enormous amount of energy wasted by these protocols has been a topic of central debate, and well-known cryptocurrencies have announced it a top priority to move to alternative paradigms. Among the proposed alternative solutions, proofs-of-stake protocols have been of particular interest, where, roughly speaking, the idea is to enforce “one vote per unit of stake”. Although the community has rushed to propose numerous candidates for proofs-of-stake, no existing protocol has offered formal proofs of security, which we believe to be a critical, indispensable ingredient of a distributed consensus protocol, particularly one that is to underlie a high-value cryptocurrency system. In this work, we seek to address the following basic questions: • What kind of functionalities and robustness requirements should a consensus candidate offer to be suitable in a proof-of-stake application? • Can we design a provably secure protocol that satisfies these requirements? To the best of our knowledge, we are the first to formally articulate a set of requirements for consensus candidates for proofs-of-stake. We argue that any consensus protocol satisfying these properties can be used for proofs-of-stake, as long as money does not switch hands too quickly. Moreover, we provide the first consensus candidate that provably satisfies the desired robustness properties.
Last updated:  2017-05-11
The Sleepy Model of Consensus
Rafael Pass, Elaine Shi
The distributed systems literature adopts two primary network models, the synchronous model where honest messages are delivered in the next round, and the partially synchronous (or asynchronous) model where honest messages are subject to unpredictable adversarial delays. In this paper, we show that more nuanced formal models exist beyond the traditional synchrony and asynchrony stratification -- and interestingly, such new models allow us to articulate new robustness properties that traditional models would have failed to capture. More specifically, we articulate a new formal model called “the sleepy model of consensus”, where we classify honest nodes as being either alert or sleepy. Alertness implies that the node is online and has good network connections; whereas sleepiness captures any type of failure or network jitter. We then describe the Sleepy consensus protocol that achieves security as long as at any time, the number of alert nodes outnumber corrupt ones. No classical synchronous or asynchronous protocols attain such robustness guarantees, and yet we show how to leverage Nakamoto’s blockchain protocol, but without proofs-of-work, to achieve these properties, assuming collision resistant hash functions, the existence of a public-key infrastructure and a common reference string.
Last updated:  2017-02-17
Hybrid Consensus: Efficient Consensus in the Permissionless Model
Rafael Pass, Elaine Shi
Consensus, or state machine replication, is a foundational building block of distributed systems and modern cryptography. Consensus in the classical, permissioned setting has been extensively studied in 30 years of distributed systems literature. Recent developments in Bitcoin and other decentralized cryptocurrencies popularized a new form of consensus in a “permissionless” setting, where anyone can join and leave dynamically, and there is no a-priori knowledge of the consensus nodes. Despite this exciting breakthrough, today’s permissionless consensus protocols, often referred to as “blockchains”, are known to have terrible performance, which has resulted in heated, and at times acrimonious, debates in the community. First, we show that, unfortunately, a performance loss is inherent for any protocol that secures against at least 1/3 corruptions in hashpower. Specifically, we formally define a new performance measure called responsiveness, and show that any responsive permissionless consensus protocol cannot tolerate 1/3 or more corruptions in hashpower. Next, we show a tightly matching upper bound. Specifically, we propose a new permissionless consensus protocol called hybrid consensus that is responsive and secures against up to 1/3 corruptions in hashpower. The idea behind hybrid consensus is to bootstrap fast permissionless consensus by combining an inefficient blockchain protocol with a fast permissioned consensus protocol. Hybrid consensus uses the blockchain not to agree on transactions, but to agree on rotating committees which in turn execute permissioned consensus protocols to agree on transactions. While the high-level idea is intuitive, formally instantiating and reasoning about the protocol exposed a multitude of non-trivial technical subtleties and challenges.
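The committee-rotation idea admits a very small sketch: use the miners of a recent window of confirmed blocks as the permissioned committee for the next period. This is a toy of the high-level idea only; the window size and names are invented, and the actual protocol must additionally reason about chain quality, committee handoff and corruption bounds.

    COMMITTEE_SIZE = 5  # invented parameter

    def committee(chain_miners, period):
        # chain_miners: miner identities of confirmed blocks, in chain order.
        # The committee for a period is the set of miners of the matching
        # window of blocks (recent miners approximate recent hashpower).
        start = period * COMMITTEE_SIZE
        return chain_miners[start:start + COMMITTEE_SIZE]

    miners = ["miner-%d" % (i % 7) for i in range(30)]
    print(committee(miners, 2))  # committee drawn from blocks 10..14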
Last updated:  2017-05-25
FruitChains: A Fair Blockchain
Rafael Pass, Elaine Shi
Nakamoto's famous blockchain protocol enables achieving consensus in a so-called permissionless setting---anyone can join (or leave) the protocol execution, and the protocol instructions do not depend on the identities of the players. His ingenious protocol prevents ``sybil attacks'' (where an adversary spawns any number of new players) by relying on computational puzzles (a.k.a. ``moderately hard functions'') introduced by Dwork and Naor (Crypto'92). Recent works by Garay et al (EuroCrypt'15) and Pass et al (manuscript, 2016) demonstrate that this protocol provably achieves consistency and liveness assuming a) honest players control a majority of the computational power in the network, b) the puzzle hardness is appropriately set as a function of the maximum network delay and the total computational power of the network, and c) the computational puzzle is modeled as a random oracle. Assuming honest participation, however, is a strong assumption, especially in a setting where honest players are expected to perform a lot of work (to solve the computational puzzles). In Nakamoto's Bitcoin application of the blockchain protocol, players are incentivized to solve these puzzles by receiving rewards for every ``block'' (of transactions) they contribute to the blockchain. An elegant work by Eyal and Sirer (FinancialCrypt'14), strengthening and formalizing an earlier attack discussed on the Bitcoin forum, demonstrates that a coalition controlling even a minority fraction of the computational power in the network can gain (close to) 2 times its ``fair share'' of the rewards (and transaction fees) by deviating from the protocol instructions. In contrast, in a fair protocol, one would expect players controlling a $\phi$ fraction of the computational resources to reap a $\phi$ fraction of the rewards. In this work, we present a new blockchain protocol---the FruitChain protocol---which satisfies the same consistency and liveness properties as Nakamoto's protocol (assuming an honest majority of the computing power), and additionally is $\delta$-approximately fair: with overwhelming probability, any honest set of players controlling a $\phi$ fraction of computational power is guaranteed to get at least a fraction $(1 - \delta) \phi$ of the blocks (and thus rewards) in any $\Omega(\kappa/\delta)$-length segment of the chain (where $\kappa$ is the security parameter). As a consequence, if this blockchain protocol is used as the ledger underlying a cryptocurrency system, where rewards and transaction fees are evenly distributed among the miners of blocks in a length-$\kappa$ segment of the chain, no coalition controlling less than a majority of the computing power can gain more than a factor $(1 + 3\delta)$ by deviating from the protocol (i.e., honest participation is an $n/2$-coalition-safe $3\delta$-Nash equilibrium). Finally, the FruitChain protocol enables decreasing the variance of mining rewards and as such significantly lessens (or even obliterates) the need for mining pools.
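The fairness guarantee has a simple operational reading that can be checked mechanically: in every window of the prescribed length, an honest coalition's block share must stay above $(1-\delta)\phi$. The sketch below tests that condition on a toy chain in which each block is honest independently with probability $\phi$ -- an idealized model invented for illustration, not the paper's analysis.

    import random

    def fair_in_every_window(chain, phi, delta, window):
        # chain: list of booleans, True when the block was mined honestly.
        bound = (1 - delta) * phi
        return all(sum(chain[i:i + window]) / window >= bound
                   for i in range(len(chain) - window + 1))

    random.seed(1)
    phi, delta = 0.6, 0.2
    chain = [random.random() < phi for _ in range(10000)]
    print(fair_in_every_window(chain, phi, delta, window=500))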
Last updated:  2016-09-22
Transparency Overlays and Applications
Melissa Chase, Sarah Meiklejohn
In this paper, we initiate a formal study of transparency, which in recent years has become an increasingly critical requirement for the systems in which people place trust. We present the abstract concept of a transparency overlay, which can be used in conjunction with any system to give it provable transparency guarantees, and then apply the overlay to two settings: Certificate Transparency and Bitcoin. In the latter setting, we show that the usage of our transparency overlay eliminates the need to engage in mining and allows users to store a single small value rather than the entire blockchain. Our transparency overlay is generically constructed from a signature scheme and a new primitive we call a dynamic list commitment, which in practice can be instantiated using a collision-resistant hash function.
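A rough intuition for a dynamic list commitment is an append-only hash chain: the commitment to a list is a running hash, appending is O(1), and extending a commitment can be checked given only the appended entries. The sketch below is a deliberate simplification under the stated collision-resistance assumption; the paper's primitive additionally supports efficient membership and consistency proofs.

    import hashlib

    def H(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def commit(entries, start=b"\x00" * 32):
        # Fold the entries into a running hash, starting from a previous
        # commitment (or the all-zero value for the empty list).
        c = start
        for e in entries:
            c = H(c + H(e))
        return c

    old = commit([b"cert-1", b"cert-2"])
    new = commit([b"cert-3"], start=old)  # O(1) append, no re-hashing
    assert new == commit([b"cert-1", b"cert-2", b"cert-3"])
    print(new.hex())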
Last updated:  2016-09-22
Computing discrete logarithms in cryptographically-interesting characteristic-three finite fields
Gora Adj, Isaac Canales-Martínez, Nareli Cruz-Cortés, Alfred Menezes, Thomaz Oliveira, Luis Rivera-Zamarripa, Francisco Rodríguez-Henríquez
Since 2013 there have been several developments in algorithms for computing discrete logarithms in small-characteristic finite fields, culminating in a quasi-polynomial algorithm. In this paper, we report on our successful computation of discrete logarithms in the cryptographically-interesting characteristic-three finite field ${\mathbb F}_{3^{6 \cdot 509}}$ using these new algorithms; prior to 2013, it was believed that this field enjoyed a security level of 128 bits. We also show that a recent idea of Guillevic can be used to compute discrete logarithms in the cryptographically-interesting finite field ${\mathbb F}_{3^{6 \cdot 709}}$ using essentially the same resources as we expended on the ${\mathbb F}_{3^{6 \cdot 509}}$ computation. Finally, we argue that discrete logarithms in the finite field ${\mathbb F}_{3^{6 \cdot 1429}}$ can feasibly be computed today; this is significant because this cryptographically-interesting field was previously believed to enjoy a security level of 192 bits.
Last updated:  2017-04-18
Small Field Attack, and Revisiting RLWE-Based Authenticated Key Exchange from Eurocrypt'15
Boru Gong, Yunlei Zhao
Authenticated key exchange (AKE) plays a fundamental role in modern cryptography. Up to now, the HMQV protocol family has been among the most efficient provably secure AKE protocols, and it has been widely standardized and deployed. Given recent advances in quantum computing, it would be highly desirable to develop lattice-based HMQV-analogue protocols for the possible upcoming post-quantum era. An important step towards this goal was recently made by Zhang et al. at Eurocrypt'15. Similar to HMQV, the HMQV-analogue protocols proposed there consist of two variants: a two-pass protocol $\Pi_2$, as well as a one-pass protocol $\Pi_1$ that implies, in turn, a signcryption scheme (named "deniable encryption"). All these protocols are claimed to be provably secure under the ring-LWE (RLWE) assumption. In this work, we propose a new type of attack, referred to as a small field attack (SFA), against the one-pass protocol $\Pi_1$ and its resultant deniable encryption scheme. With SFA, a malicious user can efficiently recover the static private key of an honest victim user in $\Pi_1$ with overwhelming probability. Moreover, the attack is realistic and powerful in practice, in the sense that it is almost impossible for the honest user to prevent, or even detect, it. In addition, we develop some new properties of the CRT basis of $R_q$, which are essential for our small field attack and may be of independent interest. We then revisit the security proof of the two-pass protocol $\Pi_2$, identifying and discussing a gap in Claim 16 of [ZZDS14]. We do not know how to fix this gap, which traces back to some critical differences between the security proof of HMQV and that of its RLWE-based analogue.
Last updated:  2017-02-13
Parallel Implementations of Masking Schemes and the Bounded Moment Leakage Model
Gilles Barthe, François Dupressoir, Sebastian Faust, Benjamin Grégoire, François-Xavier Standaert, Pierre-Yves Strub
In this paper, we provide a necessary clarification of the good security properties that can be obtained from parallel implementations of masking schemes. For this purpose, we first argue that (i) the probing model is not straightforward to interpret, since it more naturally captures the intuitions of serial implementations, and (ii) the noisy leakage model is not always convenient, e.g., when combined with formal methods for the verification of cryptographic implementations. Therefore we introduce a new model, the bounded moment model, that formalizes a weaker notion of security order frequently used in the side-channel literature. Interestingly, we prove that probing security for a serial implementation implies bounded moment security for its parallel counterpart. This result therefore enables an accurate understanding of the links between formal security analyses of masking schemes and experimental security evaluations based on the estimation of statistical moments. Besides its consolidating nature, our work also brings useful technical contributions. First, we describe and analyze refreshing and multiplication algorithms that are well suited for parallel implementations and improve security against multivariate side-channel attacks. Second, we show that simple refreshing algorithms (with linear complexity) that are not secure in the continuous probing model are secure in the continuous bounded moment model. Finally, we discuss the independent leakage assumption required for masking to deliver its security promises, and its specificities related to the serial or parallel nature of an implementation.
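The moment-based view of masking security is easy to see numerically: for a first-order Boolean-masked bit, the first moment of each share's leakage is independent of the secret, while a mixed second-order moment is not. Below is a toy numpy sketch under an idealized value-plus-Gaussian-noise leakage model; the sample size and noise level are invented, and this illustrates the notion of security order only, not the paper's proofs.

    import numpy as np

    rng = np.random.default_rng(0)
    N, SIGMA = 200000, 0.5

    def leak(values):
        # Idealized leakage: the shared value plus Gaussian noise.
        return values + rng.normal(0, SIGMA, values.shape)

    def moments(secret_bit):
        # First-order Boolean masking: secret = s1 XOR s2, s1 uniform.
        s1 = rng.integers(0, 2, N)
        s2 = s1 ^ secret_bit
        l1, l2 = leak(s1.astype(float)), leak(s2.astype(float))
        first = l1.mean()  # first-order moment of one share's leakage
        mixed = ((l1 - l1.mean()) * (l2 - l2.mean())).mean()  # order 2
        return first, mixed

    for b in (0, 1):
        f, m = moments(b)
        print("secret=%d: 1st moment %.3f, mixed 2nd moment %+.3f" % (b, f, m))
    # The first moment is ~0.5 either way, but the mixed second moment
    # flips sign with the secret: secure at order 1, broken at order 2.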
Last updated:  2016-09-19
The Shortest Signatures Ever
Mohamed Saied Emam Mohamed, Albrecht Petzoldt
Multivariate cryptography is one of the main candidates for creating post-quantum public key cryptosystems. Especially in the area of digital signatures, there exist many practical and secure multivariate schemes. In this paper we present a general technique to reduce the signature size of multivariate schemes. Our technique enables us to reduce the signature size of nearly all multivariate signature schemes by 10 to 15% without slowing down the scheme significantly. We can prove that the security of the underlying scheme is not weakened by this modification. Furthermore, the technique enables a further reduction of the signature size when accepting a slightly more costly verification process. This trade-off between signature size and complexity of the verification process cannot be observed for any other class of digital signature schemes. By applying our technique to the Gui signature scheme, we obtain the shortest signatures of all existing digital signature schemes.
Last updated:  2016-09-21
The closest vector problem in tensored root lattices of type A and in their duals
Léo Ducas, Wessel P. J. van Woerden
In this work we consider the closest vector problem (CVP) ---a problem also known as maximum-likelihood decoding--- in the tensor of two root lattices of type A ($A_m \otimes A_n$), as well as in their duals ($A^*_m \otimes A^*_n$). This problem is mainly motivated by {\em lattice based cryptography}, where the cyclotomic rings $\mathbb Z[\zeta_c]$ (resp. its co-different $\mathbb Z[\zeta_c]^\vee$) play a central role, and turn out to be isomorphic as lattices to tensors of $A^*$ lattices (resp. $A$ root lattices). In particular, our results lead to solving CVP in $\mathbb Z[\zeta_c]$ and in $\mathbb Z[\zeta_c]^\vee$ for conductors of the form $c = 2^\alpha p^\beta q^\gamma$ for any two odd primes $p,q$. For the primal case $A_m \otimes A_n$, we provide a full characterization of the Voronoi region in terms of simple cycles in the complete directed bipartite graph $K_{m+1,n+1}$. This leads ---relying on the Bellman-Ford algorithm for negative cycle detection--- to a CVP algorithm running in *polynomial time*. Precisely, our algorithm performs $O(l\ m^2 n^2 \min\{m,n\})$ operations on reals, where $l$ is the number of bits per coordinate of the input target. For the dual case, we use a gluing-construction to solve CVP in sub-exponential time $O(n m^{n+1})$.
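Because the primal-case algorithm reduces CVP to detecting negative cycles, the algorithmic core is classical. Below is a minimal Bellman-Ford negative-cycle detector for a generic directed graph -- the standard subroutine the paper relies on, not its specialized instantiation on $K_{m+1,n+1}$ or the Voronoi-region machinery.

    def has_negative_cycle(n_vertices, edges):
        # edges: (u, v, weight) triples over vertices 0..n_vertices-1.
        # Start all distances at 0 (a virtual source connected to every
        # vertex); if an edge still relaxes after n rounds, some cycle
        # has negative total weight.
        dist = [0.0] * n_vertices
        for round_no in range(n_vertices):
            updated = False
            for u, v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    updated = True
            if not updated:
                return False
            if round_no == n_vertices - 1:
                return True
        return False

    # The cycle 0 -> 1 -> 2 -> 0 has total weight 2 - 4 + 1 = -1 < 0.
    print(has_negative_cycle(3, [(0, 1, 2), (1, 2, -4), (2, 0, 1)]))  # True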
Last updated:  2016-09-19
Multi-core FPGA Implementation of ECC with Homogeneous Co-Z Coordinate Representation
Bo-Yuan Peng, Yuan-Che Hsu, Yu-Jia Chen, Di-Chia Chueh, Chen-Mou Cheng, Bo-Yin Yang
Elliptic Curve Cryptography (ECC) is gaining popularity in recent years. Having short keys and short signatures in particular makes ECC likely to be adopted in numerous internet-of-things (IoT) devices. It is therefore critical to optimize ECC well for both speed and power consumption. Optimization opportunities exist on several different levels: algorithm, architecture, and/or implementation. We combine optimizations at every level in an efficient multi-core FPGA implementation. The core building block for our implementation is a Montgomery multiplier capable of modular additions and multiplications with an arbitrary prime modulus. The size of the prime modulus can also be changed easily, for which we have implemented and tested up to 528-bits used in the NIST P-521 curve. Based on this building block, we have developed a multi-core architecture that supports multiple parallel modular additions, multiplications, and inverses. Efficient ECC group addition and doubling are then built from this foundation. To support a wide variety of curves and at the same time resist timing/power-based side-channel attacks, our scalar multiplication is implemented using the Co-Z ladder due to Hutter, Joye, and Sierra. This approach also allows us to trade off between speed and power consumption by using a different number of Montgomery cores.
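For background on the uniform-operation-sequence ladders the design builds on, here is a minimal x-only Montgomery ladder in Python, using the RFC 7748 Curve25519 arithmetic (without key clamping or encodings). It is a didactic sketch of the ladder structure only -- the paper's implementation instead uses Hutter-Joye-Sierra Co-Z formulas on arbitrary prime moduli in FPGA hardware.

    P = 2**255 - 19
    A24 = 121665  # (486662 - 2) / 4 for Curve25519

    def ladder(k: int, u: int) -> int:
        # Montgomery ladder: one differential addition and one doubling per
        # scalar bit, with the same operation sequence for 0 and 1 bits
        # (up to a swap), which is what thwarts simple timing/power attacks.
        x2, z2, x3, z3 = 1, 0, u, 1
        for i in reversed(range(k.bit_length())):
            bit = (k >> i) & 1
            if bit:  # a real implementation swaps in constant time
                x2, x3, z2, z3 = x3, x2, z3, z2
            a, b = (x2 + z2) % P, (x2 - z2) % P
            c, d = (x3 + z3) % P, (x3 - z3) % P
            da, cb = d * a % P, c * b % P
            x3, z3 = (da + cb) ** 2 % P, u * (da - cb) ** 2 % P
            aa, bb = a * a % P, b * b % P
            x2, z2 = aa * bb % P, (aa - bb) * (aa + A24 * (aa - bb)) % P
            if bit:
                x2, x3, z2, z3 = x3, x2, z3, z2
        return x2 * pow(z2, P - 2, P) % P  # affine u; infinity maps to 0

    u = 9  # base point u-coordinate
    assert ladder(3, ladder(5, u)) == ladder(5, ladder(3, u))  # [15]B both ways
    print(hex(ladder(7, u)))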
Last updated:  2016-09-19
Secure Error-Tolerant Graph Matching Protocols
Kalikinkar Mandal, Basel Alomair, Radha Poovendran
We consider a setting with two parties, each holding a private graph, who wish to jointly compute the structural dissimilarity between their graphs without revealing any information about their private inputs. Graph edit distance (GED) is a widely accepted metric for measuring the dissimilarity of graphs. It measures the minimum cost of transforming one graph into the other by applying graph edit operations. In this paper we present a framework for securely computing approximated GED and, as an example, present a protocol based on a threshold additively homomorphic encryption scheme. We develop several new sub-protocols, such as private maximum computation and optimal assignment protocols, to construct the main protocol. We show that our protocols are secure against semi-honest adversaries. The asymptotic complexity of the protocol is $O(n^5\ell\log^*(\ell))$ where $\ell$ is the bit length of ring elements and $n$ is the number of nodes in the graph.
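The protocol's main building block, additively homomorphic encryption, is easy to demonstrate with textbook Paillier: multiplying ciphertexts adds plaintexts. The sketch below is a plain (non-threshold) toy with deliberately tiny primes -- the paper uses a threshold variant so that neither party can decrypt alone.

    import math, random

    p, q = 1000003, 1000033       # toy primes; real parameters are >= 1024 bits
    n, n2 = p * q, (p * q) ** 2
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                     # standard generator choice
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

    def enc(m: int) -> int:
        r = random.randrange(1, n)
        return pow(g, m, n2) * pow(r, n, n2) % n2

    def dec(c: int) -> int:
        return (pow(c, lam, n2) - 1) // n * mu % n

    c = enc(41) * enc(1) % n2     # homomorphic addition of plaintexts
    assert dec(c) == 42
    print(dec(c))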
Last updated:  2019-12-05
Cut-and-Choose for Garbled RAM
Peihan Miao
Garbled RAM, introduced by Lu and Ostrovsky (Eurocrypt 2013), provides a novel method for secure computation on RAM (Random Access Machine) programs directly. It can be seen as a RAM analogue of Yao's garbled circuits, such that the computational and communication complexity only grow with the running time of the RAM program, avoiding the inefficient process of first converting it into a circuit. It allows for executing multiple RAM programs on a persistent database, but is secure only against semi-honest adversaries. In this work we provide a cut-and-choose technique for garbled RAM. This gives the first constant-round two-party RAM computation protocol secure against malicious adversaries that allows multiple RAM programs to be executed on a persistent database. Our protocol makes black-box use of one-way functions, and the security of our construction is argued in the random oracle model.
Last updated:  2018-10-18
On Basing Search SIVP on NP-Hardness
Tianren Liu
The possibility of basing cryptography on the minimal assumption NP$\nsubseteq$BPP is at the very heart of complexity-theoretic cryptography. The closest we have gotten so far is lattice-based cryptography, whose average-case security is based on the worst-case hardness of approximate shortest vector problems on integer lattices. The state of the art is the construction of a one-way function (and collision-resistant hash function) based on the hardness of the $\tilde{O}(n)$-approximate shortest independent vector problem $\text{SIVP}_{\tilde O(n)}$. Although SIVP is NP-hard in its exact version, Guruswami et al (CCC 2004) showed that $\text{gapSIVP}_{\sqrt{n/\log n}}$ is in NP$\cap$coAM and thus unlikely to be NP-hard. Indeed, any language that can be reduced to $\text{gapSIVP}_{\tilde O(\sqrt n)}$ (under general probabilistic polynomial-time adaptive reductions) is in AM$\cap$coAM by the results of Peikert and Vaikuntanathan (CRYPTO 2008) and Mahmoody and Xiao (CCC 2010). However, none of these results apply to reductions to search problems, still leaving open a ray of hope: can NP be reduced to solving search SIVP with approximation factor $\tilde O(n)$? We eliminate this possibility, by showing that any language that can be reduced to solving search $\text{SIVP}_{\gamma}$ with any approximation factor $\gamma(n) = \omega(n\log n)$ lies in AM$\cap$coAM. As a side product, we show that any language that can be reduced to discrete Gaussian sampling with parameter $\tilde O(\sqrt n)\cdot\lambda_n$ lies in AM$\cap$coAM.
Last updated:  2016-09-19
Generalized Desynchronization Attack on UMAP: Application to RCIA, KMAP, SLAP and SASI$^+$ protocols
Masoumeh Safkhani, Nasour Bagheri
Tian et al. proposed a permutation-based authentication protocol entitled RAPP. However, it was soon shown to suffer from several security threats, such as desynchronization attacks. Following RAPP, several protocols have been proposed in the literature to defeat such attacks. Among them, some protocols suggest keeping a record of old parameters at both the reader and the tag. In this paper we present a generalized version of all such protocols, named GUMAP, and present an efficient desynchronization attack against it. The complexity of our attack is 5 consecutive protocol sessions and its success probability is almost 1. Our attack applies as-is to the recently proposed protocols RCIA, KMAP, SASI$^{+}$ and SLAP. To the best of our knowledge, this is the first report on the vulnerability of these protocols.
Last updated:  2016-09-15
Succinct Predicate and Online-Offline Multi-Input Inner Product Encryptions under Standard Static Assumptions
Pratish Datta, Ratna Dutta, Sourav Mukhopadhyay
This paper presents expressive predicate encryption (PE) systems, namely non-zero inner-product-predicate encryption (NIPPE) and attribute-based encryption (ABE) supporting monotone span programs, achieving the best known parameters among existing similar schemes under well-studied static complexity assumptions. Both constructions are built in the composite-order bilinear group setting and involve only 2 group elements in the ciphertexts. More interestingly, our NIPPE scheme, which additionally features only 1 group element in the decryption keys, is the first to attain succinct ciphertexts and decryption keys simultaneously. For proving selective security of these constructions under the Subgroup Decision assumptions, which are the most standard static assumptions in the composite-order bilinear group setting, we apply the extended version of the elegant Déjà Q framework, which was originally proposed as a general technique for reducing q-type complexity assumptions to their static counterparts. Our work thus demonstrates the power of this framework in overcoming the need for q-type assumptions, which are vulnerable to serious practical attacks, for deriving security of highly expressive PE systems with compact parameters. We further introduce the concept of online-offline multi-input functional encryption (OO-MIFE), which is a crucial advancement towards realizing this highly promising but computationally intensive cryptographic primitive in resource-bounded and power-constrained devices. We also instantiate our notion of OO-MIFE by constructing such a scheme for the multi-input analog of the inner product functionality, which has a wide range of applications in practice. Our OO-MIFE scheme for multi-input inner products is built in asymmetric bilinear groups of prime order and is proven selectively secure under the well-studied k-Linear (k-LIN) assumption.
Last updated:  2016-09-15
From Indifferentiability to Constructive Cryptography (and Back)
Ueli Maurer, Renato Renner
The concept of indifferentiability of systems, a generalized form of indistinguishability, was proposed in 2004 to provide a simplified and generalized explanation of impossibility results like the non-instantiability of random oracles by hash functions due to Canetti, Goldreich, and Halevi (STOC 1998). But indifferentiability is actually a constructive notion, leading to possibility results. For example, Coron {\em et al.} (Crypto 2005) argued that the soundness of the construction $C(f)$ of a hash function from a compression function $f$ can be demonstrated by proving that $C(R)$ is indifferentiable from a random oracle if $R$ is an ideal random compression function. The purpose of this short paper is to describe how the indifferentiability notion was a precursor to the theory of constructive cryptography and thereby to provide a simplified and generalized treatment of indifferentiability as a special type of constructive statement.
Last updated:  2016-09-16
Universally Composable Cryptographic Role-Based Access Control
Bin Liu, Bogdan Warinschi
In cryptographic access control, sensitive data is protected by cryptographic primitives and the desired access structure is enforced through appropriate management of the secret keys. In this paper we study rigorous security definitions for the cryptographic enforcement of Role-Based Access Control (RBAC). We propose the first simulation-based security definition within the framework of Universal Composability (UC). Our definition is natural and intuitively appealing, so we expect that our approach would carry over to other access models. Next, we establish two results that clarify the strength of our definition when compared with existing ones that use the game-based definitional approach. On the positive side, we demonstrate that both read- and write-access guarantees in the sense of game-based security are implied by UC security of an access control system. Perhaps as expected, this result serves as confirmation that the definition we propose is sound. Our main technical result is a proof that simulation-based security requires impractical assumptions on the encryption scheme that is employed. As in other simulation-based settings, the source of inefficiency is the well-known ``commitment problem'', which naturally occurs in the context of cryptographic access control to file systems.
Last updated:  2016-10-18
Distance Bounding based on PUF
Mathilde Igier, Serge Vaudenay
Distance Bounding (DB) is designed to mitigate relay attacks. This paper provides a complete study of the DB protocol of Kleber et al. based on Physical Unclonable Functions (PUFs). We contradict the claim that it resists Terrorist Fraud (TF). We propose some slight modifications to increase the security of the protocol and formally prove TF-resistance, as well as resistance to Distance Fraud (DF) and Man-In-the-Middle attacks (MiM), which include relay attacks.
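The mechanism every DB protocol builds on is the timed rapid-bit-exchange phase: the verifier times each challenge-response round trip and rejects answers that arrive later than the claimed distance physically allows. The toy simulation below illustrates only this generic phase, with invented timings and register-based responses -- it is not the Kleber et al. PUF-based protocol.

    import random

    C = 3e8                            # signal speed, m/s
    MAX_DISTANCE = 10.0                # accept provers within 10 m
    TIME_BOUND = 2 * MAX_DISTANCE / C  # max allowed round-trip time

    def verify(distance_m, knows_registers, n_rounds=32):
        # Legitimate provers hold per-round response registers (r0, r1)
        # agreed during a setup phase; physics lower-bounds the round trip.
        rtt = 2 * distance_m / C
        for _ in range(n_rounds):
            r0, r1 = random.getrandbits(1), random.getrandbits(1)
            challenge = random.getrandbits(1)
            expected = (r0, r1)[challenge]
            answer = expected if knows_registers else random.getrandbits(1)
            if rtt > TIME_BOUND or answer != expected:
                return False
        return True

    print(verify(5.0, True))    # honest nearby prover: accepted
    print(verify(50.0, True))   # honest but relayed/distant: rejected
    print(verify(5.0, False))   # nearby impostor: rejected w.h.p.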
Last updated:  2016-09-15
Quantifying Web Adblocker Privacy
Arthur Gervais, Alexandros Filios, Vincent Lenders, Srdjan Capkun
Web advertisements, an integral part of today's web browsing experience, financially support countless websites. Meaningful advertisements, however, require behavioral targeting, user tracking and profile fingerprinting that raise serious privacy concerns. To counter privacy issues and enhance usability, adblockers emerged as a popular way to filter web requests that do not serve the website's main content. Despite their popularity, little work has focused on quantifying the privacy provisions of adblockers. In this paper, we develop a quantitative approach to objectively compare the privacy of adblockers. We propose a model based on a set of privacy metrics that captures not only the technical web architecture, but also the underlying corporate institutions of the problem across time and geography. We experimentally investigate the effect of various combinations of ad-blocking software and browser settings on 1000 websites. Our results highlight significant differences among adblockers in terms of filtering performance, which is strongly affected by the applied configuration. Besides the ability to judge the filtering capabilities of existing adblockers and their particular configurations, our work provides a general framework to evaluate new adblocker proposals.
Last updated:  2017-01-25
Parallelized Side-Channel Attack Resisted Scalar Multiplication Using q-Based Addition-Subtraction k-chains
Kittiphop Phalakarn, Kittiphon Phalakarn, Vorapong Suppakitpaisarn
This paper presents parallel scalar multiplication techniques for elliptic curve cryptography using q-based addition-subtraction k-chains, which can also effectively resist side-channel attacks. Many techniques have been proposed to improve scalar multiplication, for example double-and-add, NAF, w-NAF, addition chains and addition-subtraction chains. However, these techniques cannot resist side-channel attacks. The Montgomery ladder, random w-NAF and uniform-operation techniques are widely used to prevent side-channel attacks, but their operations are not efficient enough compared to those without side-channel-attack prevention. We have found a new way to use k-chains for this purpose. In this paper, we extend the definition of the k-chain to the q-based addition-subtraction k-chain and modify an algorithm proposed by Jarvinen et al. to generate such chains. We show upper and lower bounds on the chain length, which lead to the computation time of the new chain techniques. The chain techniques are used to reduce the cost of scalar multiplication in parallel. Compared to w-NAF, which is faster than double-and-add and the Montgomery ladder, our q-based addition-subtraction k-chain techniques can reduce addition costs by up to 25.92% using only 3 parallel computing cores. We also discuss the optimization of multiple-operand point addition using the hybrid-double multiplier proposed by Azarderakhsh and Reyhani-Masoleh. The proposed parallel chain techniques can also resist side-channel attacks efficiently.
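For reference, the baseline being improved on -- sequential NAF recoding with double-and-add/subtract evaluation, no parallelism and no side-channel protection -- fits in a few lines. The sketch evaluates k*x over the integers so the chain structure is easy to check; the same sequence of doublings, additions and subtractions would be applied to elliptic-curve points.

    def naf(k):
        # Non-adjacent form: digits in {-1, 0, 1} with no two adjacent
        # nonzeros, so long runs of 1-bits cost fewer additions.
        digits = []
        while k > 0:
            if k & 1:
                d = 2 - (k % 4)  # 1 if k = 1 (mod 4), else -1
                k -= d
            else:
                d = 0
            digits.append(d)
            k >>= 1
        return digits[::-1]      # most significant digit first

    def scalar_mul(k, x):
        # Left-to-right evaluation: double for every digit, then
        # add or subtract x for nonzero digits.
        acc = 0
        for d in naf(k):
            acc *= 2
            if d:
                acc += d * x
        return acc

    assert scalar_mul(55, 7) == 55 * 7
    print(naf(55))  # [1, 0, 0, -1, 0, 0, -1]: 64 - 8 - 1 = 55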
Last updated:  2016-09-14
Physical Unclonable Functions based on Temperature Compensated Ring Oscillators
Sha Tao, Elena Dubrova
Physical unclonable functions (PUFs) are promising hardware security primitives suitable for low-cost cryptographic applications. The ring oscillator (RO) PUF is a well-received silicon PUF solution due to its ease of implementation and entropy evaluation. However, the responses of RO-PUFs are susceptible to environmental changes, in particular to temperature variations. Additionally, a conventional RO-PUF implementation is usually more power-hungry than other PUF alternatives. This paper explores circuit-level techniques to design low-power RO-PUFs with enhanced thermal stability. We introduce a power-efficient approach based on a phase/frequency detector (PFD) to perform pairwise comparisons of ROs. We also propose a temperature-compensated bulk-controlled oscillator and investigate its feasibility and usage in PFD-based RO-PUFs. Evaluation results demonstrate that the proposed techniques can effectively reduce the thermally induced errors in PUF responses while imposing a very low power overhead.
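The failure mode being addressed is easy to model: an RO-PUF response bit is the sign of the frequency difference of an RO pair, and temperature drift that is not perfectly common-mode flips bits whose frequency gap is small. The numpy toy below uses invented numbers and models no PFD or bulk-control circuitry; it only illustrates why thermal compensation matters.

    import numpy as np

    rng = np.random.default_rng(42)
    N_PAIRS = 64

    # Per-pair process-dependent frequency offset (the entropy source), MHz.
    offset = rng.normal(0, 1.0, N_PAIRS)

    def response(temp_delta_c):
        # Temperature drift is mostly common-mode within a pair; the small
        # residual mismatch grows with the temperature excursion.
        mismatch = rng.normal(0, 0.005, N_PAIRS) * temp_delta_c
        return (offset + mismatch) > 0  # bit = sign of frequency difference

    ref = response(0.0)                  # enrollment at nominal temperature
    hot = response(60.0)                 # re-evaluation 60 C warmer
    print("bit error rate at +60C:", np.mean(ref != hot))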
Last updated:  2016-09-14
An efficient somewhat homomorphic encryption scheme based on factorization
Gérald Gavin
Surprisingly, most existing provably secure FHE or SWHE schemes are lattice-based constructions. It is legitimate to ask whether there is a mysterious link between homomorphic encryption and lattices. This paper can be seen as a first (partial) negative answer to this question. We propose a very simple private-key (partially) homomorphic encryption scheme whose security relies on factorization. This encryption scheme deals with a secret multivariate rational function $\phi_D$ defined over $\mathbb{Z}_n$, $n$ being an RSA modulus. An encryption of $x$ is simply a vector $c$ such that $\phi_D(c)=x+\textsf{noise}$. To get homomorphic properties, nonlinear operators are specifically developed. We first prove IND-CPA security in the generic ring model assuming the hardness of factoring. We then extend this model in order to integrate lattice-based cryptanalysis, and we reduce the security of our scheme (in this extended model) to an algebraic condition. This condition is extensively discussed for several choices of parameters. Some of these choices lead to competitive performance with respect to other existing homomorphic encryption schemes. While quantum computers are no longer merely a dream, designing factorization-based cryptographic schemes might appear irrelevant. But it is important to notice that, in our scheme, the factorization of $n$ is not required to decrypt. The factoring assumption simply ensures that solving nonlinear equations or finding non-null polynomials with many roots is difficult. Consequently, the ideas behind our construction could be reused in rings satisfying these properties.