All papers in 2013 (881 results)

Last updated:  2014-04-20
New Speed Records for Montgomery Modular Multiplication on 8-bit AVR Microcontrollers
Zhe Liu, Johann Großschädl
Modular multiplication of large integers is a performance-critical arithmetic operation of many public-key cryptosystems such as RSA, DSA, Diffie-Hellman (DH) and their elliptic curve-based variants ECDSA and ECDH. The computational cost of modular multiplication and related operations (e.g. exponentiation) poses a practical challenge to the widespread deployment of public-key cryptography, especially on embedded devices equipped with 8-bit processors (smart cards, wireless sensor nodes, etc.). In this paper, we describe basic software techniques to improve the performance of Montgomery modular multiplication on 8-bit AVR-based microcontrollers. First, we present a new variant of the widely-used hybrid method for multiple-precision multiplication that is 10.6% faster than the original hybrid technique of Gura et al. Then, we discuss different hybrid Montgomery multiplication algorithms, including Hybrid Finely Integrated Product Scanning (HFIPS), and introduce a novel approach for Montgomery multiplication, which we call Hybrid Separated Product Scanning (HSPS). Finally, we show how to perform the modular subtraction of Montgomery reduction in a regular fashion, without execution of conditional statements, so as to counteract Simple Power Analysis (SPA) attacks. Our AVR implementations of the HFIPS and HSPS methods outperform the Montgomery multiplication of the MIRACL Crypto SDK by 21.6% and 14.3%, respectively, and are twice as fast as the modular multiplication of the TinyECC library.
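As a hedged illustration of the regular final subtraction mentioned above, the following minimal Python sketch (generic textbook Montgomery reduction, not the paper's AVR assembly or its hybrid scanning methods) replaces the conditional subtraction by a masked selection, so that the same operations execute regardless of the operand values:

# Generic Montgomery reduction with a branch-free final subtraction.
# Parameters (assumed): odd modulus N, R = 2^k > N, n_prime = -N^(-1) mod R.
def montgomery_reduce(t, N, R, n_prime):
    """Return t * R^(-1) mod N for 0 <= t < R*N."""
    k = R.bit_length() - 1                     # R = 2^k
    m = ((t & (R - 1)) * n_prime) & (R - 1)    # m = t * n' mod R
    u = (t + m * N) >> k                       # u lies in [0, 2N)
    # Regular final subtraction: always compute u - N and select the result
    # with a mask derived from the sign, instead of a data-dependent branch.
    diff = u - N
    mask = -(diff < 0)                         # -1 if u < N, else 0
    return (u & mask) | (diff & ~mask)

# Toy check: N = 13, R = 16, n_prime = 11; montgomery_reduce(20, 13, 16, 11) == 11.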
Last updated:  2017-10-22
Accelerating Bitcoin's Transaction Processing. Fast Money Grows on Trees, Not Chains
Yonatan Sompolinsky, Aviv Zohar
Bitcoin is a potentially disruptive new crypto-currency based on a decentralized open-source protocol which is gradually gaining popularity. Perhaps the most important question that will affect Bitcoin's success is whether or not it will be able to scale to support the high volume of transactions required from a global currency system. We investigate the restrictions on the rate of transaction processing in Bitcoin as a function of both the bandwidth available to nodes and the network delay, both of which lower the efficiency of Bitcoin's transaction processing. The security analysis done by Bitcoin's creator Satoshi Nakamoto~\cite{nakamoto2008bitcoin} assumes that block propagation delays are negligible compared to the time between blocks---an assumption that does not hold when the protocol is required to process transactions at high rates. We improve upon the original analysis and remove this assumption. Using our results, we are able to give bounds on the number of transactions per second the protocol can handle securely. Building on previously published measurements by Decker and Wattenhofer~\cite{Decker2013Information}, we show these bounds are currently more restrictive by an order of magnitude than the bandwidth needed to stream all transactions. We additionally show how currently planned improvements to the protocol, namely the use of transaction hashes in blocks (instead of complete transaction records), will dramatically alleviate these restrictions. Finally, we present an easily implementable modification to the way Bitcoin constructs its main data structure, the blockchain, that immensely improves security from attackers, especially when the network operates at high rates. This improvement allows for further increases in the number of transactions processed per second. We show that with our proposed modification, significant speedups can be gained in confirmation time of transactions as well. The block generation rate can be securely increased to more than one block per second -- a 600-fold speedup compared to today's rate, while still allowing the network to process many transactions per second.
Last updated:  2015-04-02
New Constructions of Revocable Identity-Based Encryption from Multilinear Maps
Seunghwan Park, Kwangsu Lee, Dong Hoon Lee
A revocation mechanism in cryptosystems for a large number of users is absolutely necessary for maintaining the security of whole systems. A revocable identity-based encryption (RIBE) provides an efficient revocation method in IBE in which a trusted authority periodically broadcasts an update key for non-revoked users and a user can decrypt a ciphertext if his private key is not revoked in the update key. Boldyreva, Goyal, and Kumar (CCS 2008) defined RIBE and proposed an RIBE scheme that uses a tree-based revocation encryption scheme to revoke users' private keys. However, this approach has an inherent limitation in that the number of private key elements and update key elements cannot be constant. In this paper, to overcome this limitation, we devise a new technique for RIBE and propose RIBE schemes with a constant number of private key elements. We achieve the following results: - We first devise a new technique for RIBE that combines a hierarchical IBE (HIBE) scheme and a public-key broadcast encryption (PKBE) scheme by using multilinear maps. In contrast to the previous technique for RIBE, our technique uses a PKBE scheme in bilinear maps for revocation to achieve short private keys and update keys. - Following our new technique for RIBE, we propose an RIBE scheme in three-leveled multilinear maps that combines the HIBE scheme of Boneh and Boyen (EUROCRYPT 2004) and the PKBE scheme of Boneh, Gentry, and Waters (CRYPTO 2005). The private key and update key of our scheme possess a constant number of group elements. We introduce a new complexity assumption in multilinear maps and prove the security of our scheme in the selective revocation list model. - Next, we propose another RIBE scheme with reduced public parameters by combining the HIBE scheme of Boneh and Boyen and the PKBE scheme of Boneh, Waters, and Zhandry (CRYPTO 2014), which uses multilinear maps. Compared with our first RIBE scheme, our second RIBE scheme requires high-leveled multilinear maps since the underlying PKBE scheme is based on high-leveled multilinear maps.
Last updated:  2019-02-05
Succinct Non-Interactive Zero Knowledge for a von Neumann Architecture
Eli Ben-Sasson, Alessandro Chiesa, Eran Tromer, Madars Virza
We build a system that provides succinct non-interactive zero-knowledge proofs (zk-SNARKs) for program executions on a von Neumann RISC architecture. The system has two components: a cryptographic proof system for verifying satisfiability of arithmetic circuits, and a circuit generator to translate program executions to such circuits. Our design of both components improves in functionality and efficiency over prior work, as follows. Our circuit generator is the first to be universal: it does not need to know the program, but only a bound on its running time. Moreover, the size of the output circuit depends additively (rather than multiplicatively) on program size, allowing verification of larger programs. The cryptographic proof system improves proving and verification times, by leveraging new algorithms and a pairing library tailored to the protocol. We evaluated our system for programs with up to 10,000 instructions, running for up to 32,000 machine steps, each of which can arbitrarily access random-access memory; and also demonstrated it executing programs that use just-in-time compilation. Our proofs are 230 bytes long at 80 bits of security, or 288 bytes long at 128 bits of security. Typical verification time is 5 milliseconds, regardless of the original program's running time.
Last updated:  2014-05-20
Efficient Hardware Implementation of MQ Asymmetric Cipher PMI+ on FPGAs
Shaohua Tang, Bo Lv, Guomin Chen, Zhiniang Peng
PMI+ is a Multivariate Quadratic (MQ) public key algorithm used for encryption and decryption operations, and belongs to post-quantum cryptography. In this paper, we design hardware on FPGAs to efficiently implement PMI+. Our main contributions are, firstly, that a hardware architecture for PMI+ encryption and decryption is developed and a description of the corresponding hardware algorithm is given; secondly, that the basic arithmetic units (multiplication, squaring, vector dot product and the power operation) are implemented in full parallel with high efficiency; and thirdly, that an optimized implementation of the core module, including an optimized large power operation, is achieved. The PMI+ encryption and decryption hardware is efficiently realized on FPGA through the above optimizations and improvements. Experiments verify that the designed hardware can complete an encryption operation within 497 clock cycles at a clock frequency of up to 145.6 MHz, and a decryption operation within 438 clock cycles at a clock frequency of up to 37.04 MHz.
Last updated:  2013-12-30
MQ Signature and Proxy Signature Schemes with Exact Security Based on UOV Signature
Shaohua Tang, Jiahui Chen, Lingling Xu, Xiaoyu Li
Multivariate public key cryptography, which relies on MQ (Multivariate Quadratic) problems, is one of the main approaches to guaranteeing the security of communication in the post-quantum world. In this paper, we propose a combined MQ signature scheme based on the UOV (Unbalanced Oil and Vinegar) signature, which remains unbroken when parameters are properly chosen. Our scheme not only reduces the public key size of the UOV signature, but also provides a tighter security bound against chosen-message attacks in the random oracle model. On the other hand, we propose a proxy signature scheme based on our combined signature scheme. Additionally, we give a strict security proof for our proxy signature scheme. Finally, we present experiments for all of our proposed schemes and the baseline schemes. Comparisons with related schemes show that our work has some advantages in performance along with stricter security.
Last updated:  2015-12-21
Public-Key Encryption with Lazy Parties
Kenji Yasunaga
In a public-key encryption scheme, if a sender is not concerned about the security of a message and is unwilling to generate costly randomness, the security of the encrypted message can be compromised. In this work, we characterize such \emph{lazy parties}, who are regarded as honest parties, but are unwilling to perform a costly task when they are not concerned about the security. Specifically, we consider a rather simple setting in which the costly task is to generate randomness used in algorithms, and parties can choose either perfect randomness or a fixed string. We model lazy parties as rational players who behave rationally to maximize their utilities, and define a security game between the parties and an adversary. Since a standard secure encryption scheme does not work in this setting, we provide constructions of secure encryption schemes in various settings.
Last updated:  2013-12-29
Policy-Based Non-interactive Outsourcing of Computation using multikey FHE and CP-ABE
Michael Clear, Ciaran McGoldrick
We consider the problem of outsourced computation that operates on encrypted inputs supplied by multiple independent parties. To facilitate fine-grained access control, it would be desirable if each party could encrypt her input under an appropriate access policy. Moreover, a party should only be authorized to decrypt the result of a computation performed on a set of encrypted inputs if his credentials satisfy the composition of all input policies. There has been limited success so far achieving homomorphic encryption in the functional setting; that is, for primitives such as Ciphertext-Policy Attribute Based Encryption (CP-ABE) and Identity Based Encryption (IBE). We introduce a new primitive that captures homomorphic encryption with support for access policies and policy composition. We then present a generic construction using CP-ABE and multikey Fully-Homomorphic encryption (FHE). Furthermore, we show that a CP-ABE scheme that is homomorphic for circuits of polylogarithmic depth in some parameter $m$ implies a CP-ABE scheme that is homomorphic for circuits of arity $m$ and unbounded depth.
Last updated:  2015-04-27
General Constructions of Rational Secret Sharing with Expected Constant-Round Reconstruction
Akinori Kawachi, Yoshio Okamoto, Keisuke Tanaka, Kenji Yasunaga
We present a general construction of a rational secret-sharing protocol that converts any rational secret-sharing protocol to a protocol with an expected constant-round reconstruction. Our construction can be applied to protocols for synchronous channels, and preserves a strict Nash equilibrium of the original protocol. Combining our construction with an existing protocol, we obtain the first expected constant-round protocol that achieves a strict Nash equilibrium with the optimal coalition resilience $\lceil \frac{n}{2} \rceil - 1$, where $n$ is the number of players. Our construction can be extended to a construction that preserves the \emph{immunity} to unexpectedly behaving players. Then, for any constant $m \geq 1$, we obtain an expected constant-round protocol that achieves a Nash equilibrium with the optimal coalition resilience $\lceil \frac{n}{2} \rceil - m - 1$ in the presence of $m$ unexpectedly behaving players. The same protocol also achieves a strict Nash equilibrium with coalition resilience $1$. We show that our protocol with immunity achieves the optimal coalition resilience among constant-round protocols with immunity with respect to both Nash and strict Nash equilibria.
Last updated:  2014-09-15
Poly-Many Hardcore Bits for Any One-Way Function and a Framework for Differing-Inputs Obfuscation
Mihir Bellare, Igors Stepanovs, Stefano Tessaro
We show how to extract an arbitrary polynomial number of simultaneously hardcore bits from any one-way function. In the case the one-way function is injective or has polynomially-bounded pre-image size, we assume the existence of indistinguishability obfuscation (iO). In the general case, we assume the existence of differing-input obfuscation (diO), but of a form weaker than full auxiliary-input diO. Our construction for injective one-way functions extends to extract hardcore bits on multiple, correlated inputs, yielding new D-PKE schemes. Of independent interest is a definitional framework for differing-inputs obfuscation in which security is parameterized by circuit-sampler classes.
Last updated:  2014-12-19
--Withdrawn--
Xu An Wang, Yunlong Ge, Xiaoyuan Yang
Last updated:  2016-03-13
A Unified Security Model of Authenticated Key Exchange with Specific Adversarial Capabilities
Weiqiang Wen, Libin Wang, Jiaxin Pan
The most widely accepted models in the security proofs of Authenticated Key Exchange protocols are the Canetti-Krawczyk and extended Canetti-Krawczyk models, which admit different adversarial queries with ambiguities and incomparable strength. It is desirable to incorporate specific and powerful adversarial queries into a single unified security model and establish a more practice-oriented security notion. Concerning the security of one-round implicitly authenticated Diffie-Hellman key exchange protocols, we present a unified security model that has many advantages over the previous ones. In the model, a system environment is set up, all adversarial queries are practically interpreted and concretely characterized through the physical environment, and some rigorous rules of secret leakage are also specified. To demonstrate the usability of our model, a new protocol based on the OAKE protocol is proposed, which satisfies the presented strong security notion and attains high efficiency. The protocol is proven secure in the random oracle model under the gap Diffie-Hellman assumption.
Last updated:  2013-12-29
A new class of hyper-bent functions and Kloosterman sums
Chunming Tang, Yanfeng Qi
This paper is devoted to the characterization of hyper-bent functions. Several classes of hyper-bent functions have been studied, such as Charpin and Gong's $\sum\limits_{r\in R}\mathrm{Tr}_{1}^{n} (a_{r}x^{r(2^m-1)})$ and Mesnager's $\sum\limits_{r\in R}\mathrm{Tr}_{1}^{n}(a_{r}x^{r(2^m-1)}) +\mathrm{Tr}_{1}^{2}(bx^{\frac{2^n-1}{3}})$, where $R$ is a set of representatives of the cyclotomic cosets modulo $2^m+1$ of full size $n$ and $a_{r}\in \mathbb{F}_{2^m}$. In this paper, we generalize their results and consider a class of Boolean functions of the form $\sum_{r\in R}\sum_{i=0}^{2}\mathrm{Tr}^n_1(a_{r,i}x^{r(2^m-1)+\frac{2^n-1}{3}i}) +\mathrm{Tr}^2_1(bx^{\frac{2^n-1}{3}})$, where $n=2m$, $m$ is odd, $b\in\mathbb{F}_4$, and $a_{r,i}\in \mathbb{F}_{2^n}$. With the restriction $a_{r,i}\in \mathbb{F}_{2^m}$, we present a characterization of the hyper-bentness of these functions in terms of character sums. Further, we reformulate this characterization in terms of the number of points on hyper-elliptic curves. For some special cases, with the help of Kloosterman sums and cubic sums, we determine the characterization for some hyper-bent functions, including functions with four, six and ten trace terms. Evaluations of Kloosterman sums at three general points are used in the characterization. In fact, our results can be generalized to the general case $a_{r,i}\in \mathbb{F}_{2^n}$, and we explain how this characterizes binomial, trinomial and quadrinomial hyper-bent functions.
Last updated:  2013-12-29
How to Fake Auxiliary Input
Dimitar Jetchev, Krzysztof Pietrzak
Consider a joint distribution $(X,A)$ on a set ${\cal X}\times\{0,1\}^\ell$. We show that for any family ${\cal F}$ of distinguishers $f \colon {\cal X} \times \{0,1\}^\ell \rightarrow \{0,1\}$, there exists a simulator $h \colon {\cal X} \rightarrow \{0,1\}^\ell$ such that \begin{enumerate} \item no function in ${\cal F}$ can distinguish $(X,A)$ from $(X,h(X))$ with advantage $\epsilon$, \item $h$ is only $O(2^{3\ell}\epsilon^{-2})$ times less efficient than the functions in ${\cal F}$. \end{enumerate} For the most interesting settings of the parameters (in particular, the cryptographic case where $X$ has superlogarithmic min-entropy, $\epsilon > 0$ is negligible and ${\cal F}$ consists of circuits of polynomial size), we can make the simulator $h$ \emph{deterministic}. As an illustrative application of this theorem, we give a new security proof for the leakage-resilient stream-cipher from Eurocrypt'09. Our proof is simpler and quantitatively much better than the original proof using the dense model theorem, giving meaningful security guarantees if instantiated with a standard blockcipher like AES. Subsequent to this work, Chung, Lui and Pass gave an interactive variant of our main theorem, and used it to investigate weak notions of Zero-Knowledge. Vadhan and Zheng give a more constructive version of our theorem using their new uniform min-max theorem.
Last updated:  2013-12-29
Theoretical Bitcoin Attacks with less than Half of the Computational Power (draft)
Lear Bahack
A widespread security claim of the Bitcoin system, presented in the original Bitcoin whitepaper, states that the security of the system is guaranteed as long as there is no attacker in possession of half or more of the total computational power used to maintain the system. This claim, however, is proved based on theoretically flawed assumptions. In the paper we analyze two kinds of attacks based on two theoretical flaws: the Block Discarding Attack and the Difficulty Raising Attack. We argue that the current theoretical limit of the attacker's fraction of total computational power essential for the security of the system is in a sense not $\frac{1}{2}$ but a bit less than $\frac{1}{4}$, and outline proposals for protocol changes that can raise this limit to be as close to $\frac{1}{2}$ as we want. The basic idea of the Block Discarding Attack was noted as early as 2010, and was lately independently thought of and analyzed by both the author of this paper and the authors of a recently published pre-print. We thus focus on the major differences of our analysis, and try to explain the unfortunate and surprising coincidence. To the best of our knowledge, the second attack is presented here for the first time.
Last updated:  2013-12-29
LHash: A Lightweight Hash Function (Full Version)
Wenling Wu, Shuang Wu, Lei Zhang, Jian Zou, Le Dong
In this paper, we propose a new lightweight hash function supporting three different digest sizes: 80, 96 and 128 bits, providing preimage security from 64 to 120 bits, and second preimage and collision security from 40 to 60 bits. LHash requires about 817 GE and 1028 GE with a serialized implementation. In faster implementations based on the function $T$, LHash requires 989 GE and 1200 GE with 54 and 72 cycles per block, respectively. Furthermore, its energy consumption evaluated by energy per bit is also remarkable. LHash allows trade-offs to be made among security, speed, energy consumption and implementation costs by adjusting parameters. The design of LHash employs a kind of Feistel-PG structure in the internal permutation, and this structure can utilize permutation layers on nibbles to improve the diffusion speed. The adaptability of LHash in different environments is good, since different versions of LHash share the same basic computing module. The low-area implementation comes from the hardware-friendly S-box and linear diffusion layer. We evaluate the resistance of LHash against known attacks and confirm that LHash provides a good security margin.
Last updated:  2014-06-18
Compact Ring-LWE based Cryptoprocessor
Sujoy Sinha Roy, Frederik Vercauteren, Nele Mentens, Donald Donglong Chen, Ingrid Verbauwhede
In this paper we propose an efficient and compact processor for a ring-LWE based encryption scheme. We present three optimizations for the Number Theoretic Transform (NTT) used for polynomial multiplication: we avoid pre-processing in the negative wrapped convolution by merging it with the main algorithm, we reduce the fixed computation cost of the twiddle factors and propose an advanced memory access scheme. These optimization techniques reduce both the cycle and memory requirements. Finally, we also propose an optimization of the ring-LWE encryption system that reduces the number of NTT operations from five to four resulting in a 20\% speed-up. We use these computational optimizations along with several architectural optimizations to design an instruction-set ring-LWE cryptoprocessor. For dimension 256, our processor performs encryption/decryption operations in 20/9 $\mu s$ on a Virtex 6 FPGA and only requires 1349 LUTs, 860 FFs, 1 DSP-MULT and 2 BRAMs. Similarly for dimension 512, the processor takes 48/21 $\mu s$ for performing encryption/decryption operations and only requires 1536 LUTs, 953 FFs, 1 DSP-MULT and 3 BRAMs. Our processors are therefore more than three times smaller than the current state of the art hardware implementations, whilst running somewhat faster.
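For context, the following minimal Python reference (toy parameters assumed; this is the textbook O(n^2) transform, not the paper's merged, precomputation-free NTT or its memory access scheme) shows the negative wrapped convolution that the NTT optimizations above target:

# Negacyclic polynomial multiplication in Z_q[x]/(x^n + 1) via a naive number
# theoretic transform. psi is a primitive 2n-th root of unity mod q
# (toy parameters: n = 4, q = 17, psi = 2).
def ntt_naive(a, psi, q):
    n = len(a)
    # Evaluate a(x) at the odd powers psi^(2i+1), the roots of x^n + 1 mod q.
    return [sum(a[j] * pow(psi, (2 * i + 1) * j, q) for j in range(n)) % q
            for i in range(n)]

def intt_naive(A, psi, q):
    n = len(A)
    n_inv = pow(n, q - 2, q)        # inverses via Fermat's little theorem (q prime)
    psi_inv = pow(psi, q - 2, q)
    return [(n_inv * sum(A[i] * pow(psi_inv, (2 * i + 1) * j, q) for i in range(n))) % q
            for j in range(n)]

def negacyclic_mul(a, b, psi, q):
    A, B = ntt_naive(a, psi, q), ntt_naive(b, psi, q)
    return intt_naive([x * y % q for x, y in zip(A, B)], psi, q)

# Example: negacyclic_mul([1, 1, 0, 0], [1, 1, 0, 0], 2, 17) == [1, 2, 1, 0],
# i.e. (1 + x)^2 = 1 + 2x + x^2 in Z_17[x]/(x^4 + 1).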
Last updated:  2014-05-12
SNR to Success Rate: Reaching the Limit of Non-Profiling DPA
Suvadeep Hajra, Debdeep Mukhopadhyay
Many profiling power analysis attacks estimate the multivariate probability distribution using a profiling step, and thus can optimally combine the leakages of multiple sample points. Though there exist several approaches, like filtering and Principal Component Analysis, for combining the leakages of multiple sample points in non-profiling DPA, their optimality has rarely been studied. We study the issue of optimally combining the leakages of multiple sample points in non-profiling DPA attacks using a linear function. In this work, our contributions are three-fold: 1) we first derive a relation between the success rate of a CPA attack and the SNR of the power traces, 2) we introduce a multivariate leakage model for the Virtex-5 FPGA device, and 3) using the proposed multivariate leakage model, we devise linear filters to maximize the SNR of the output leakage which, in turn, optimizes the success rate of the CPA attacks in a non-profiling setup.
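As a hedged point of reference for the quantity involved, the following Python sketch shows the standard per-sample SNR estimator for a set of power traces, obtained by partitioning the traces by an intermediate value (the paper's actual contributions, the SNR-to-success-rate relation and the optimal linear filters, are not reproduced here):

# Per-sample SNR estimate: variance of the group means (signal) divided by
# the mean of the within-group variances (noise).
import numpy as np

def snr(traces, values):
    """traces: (N, T) array of power traces; values: length-N array of intermediate values."""
    groups = [traces[values == v] for v in np.unique(values)]
    signal = np.array([g.mean(axis=0) for g in groups]).var(axis=0)
    noise = np.array([g.var(axis=0) for g in groups]).mean(axis=0)
    return signal / noise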
Last updated:  2013-12-29
Near-linear time, Leakage-resilient Key Evolution Schemes from Expander Graphs
Adam Smith, Ye Zhang
We develop new schemes for deterministically updating a stored cryptographic key that provide security against an internal adversary who can control the update computation and leak bounded amounts of information to the outside world. Our schemes are much more efficient than the previous schemes for this model, due to Dziembowski, Kazana and Wichs (CRYPTO 2011). Specifically, our update operation runs in time quasilinear in the key length, rather than quadratic, while offering a similar level of leakage resilience. In order to design our scheme, we strengthen the connections between the model of Dziembowski et al. and ``pebbling games'', showing that random-oracle-based key evolution schemes are secure as long as the graph of the update function's calls to the oracle has appropriate combinatorial properties. This builds on a connection between pebbling and the random oracle model first established by Dwork, Naor and Wee (CRYPTO 2005). Our scheme's efficiency relies on the existence (which we show) of families of ``local'' bipartite expander graphs of constant degree.
Last updated:  2013-12-29
Formal Treatment of Distributed Trust in Electronic Voting
Stephan Neumann, Melanie Volkamer
Electronic voting systems are among the most security critical distributed systems. Different trust concepts are implemented to mitigate the risk of conspiracies endangering security properties. These concepts often render systems very complex, and end users no longer recognize whom they need to trust. Correspondingly, specific trust considerations are necessary to support users. Recently, resilience terms have been proposed in order to express which entities can violate the addressed security properties, in particular by illegal collaborations. However, previous works derived these resilience terms manually. Thus, successful attacks can be missed. Based on this approach, we propose a framework to formally and automatically derive these terms. Our framework comprises a knowledge calculus, which allows us to model knowledge and reason about the knowledge of collaborating election entities. The introduced framework is applied to deduce previously manually derived resilience terms of three remote electronic voting systems, namely Polyas, Helios and the Estonian voting system. Thereby, we were able to discover mistakes in previous derivations.
Last updated:  2015-05-01
How to Delegate Computations: The Power of No-Signaling Proofs
Yael Tauman Kalai, Ran Raz, Ron D. Rothblum
We construct a 1-round delegation scheme (i.e., argument-system) for every language computable in time t=t(n), where the running time of the prover is poly(t) and the running time of the verifier is n*polylog(t). In particular, for every language in P we obtain a delegation scheme with almost linear time verification. Our construction relies on the existence of a sub-exponentially secure computational private information retrieval (PIR) scheme. The proof exploits a curious connection between the problem of computation delegation and the model of multi-prover interactive proofs that are sound against no-signaling (cheating) strategies, a model that was studied in the context of multi-prover interactive proofs with provers that share quantum entanglement, and is motivated by the physical principle that information cannot travel faster than light. For any language computable in time t=t(n), we construct a multi-prover interactive proof (MIP) that is sound against no-signaling strategies, where the running time of the provers is poly(t), the number of provers is polylog(t), and the running time of the verifier is n*polylog(t). In particular, this shows that the class of languages that have polynomial-time MIPs that are sound against no-signaling strategies is exactly EXP. Previously, this class was only known to contain PSPACE. To convert our MIP into a 1-round delegation scheme, we use the method suggested by Aiello et al. (ICALP 2000), which makes use of a PIR scheme. This method lacked a proof of security. We prove that this method is secure assuming the underlying MIP is secure against no-signaling provers.
Last updated:  2013-12-29
Privacy Preserving Enforcement of Sensitive Policies in Outsourced and Distributed Environments
Muhammad Rizwan Asghar
The enforcement of sensitive policies in untrusted environments is still an open challenge for policy-based systems. On the one hand, taking any appropriate security decision requires access to these policies. On the other hand, if such access is allowed in an untrusted environment then confidential information might be leaked by the policies. The key challenge is how to enforce sensitive policies and protect content in untrusted environments. In the context of untrusted environments, we mainly distinguish between outsourced and distributed environments. The most attractive paradigms concerning outsourced and distributed environments are cloud computing and opportunistic networks, respectively. In this dissertation, we present the design, technical and implementation details of our proposed policy-based access control mechanisms for untrusted environments. First of all, we provide full confidentiality of access policies in outsourced environments, where service providers do not learn private information about policies during the policy deployment and evaluation phases. Our proposed architecture is such that we are able to support expressive policies and take into account contextual information before making any access decision. The system entities do not share any encryption keys and even if a user is deleted, the system is still able to perform its operations without requiring any action. For complex user management, we have implemented a policy-based Role-Based Access Control (RBAC) mechanism, where users are assigned roles, roles are assigned permissions and users execute permissions if their roles are active in the session maintained by service providers. Finally, we offer the full-fledged RBAC policies by incorporating role hierarchies and dynamic security constraints. In opportunistic networks, we protect content by specifying expressive access control policies. In our proposed approach, brokers match subscriptions against policies associated with content without compromising privacy of subscribers. As a result, an unauthorised broker neither gains access to content nor learns policies and authorised nodes gain access only if they satisfy fine-grained policies specified by publishers. Our proposed system provides scalable key management in which loosely-coupled publishers and subscribers communicate without any prior contact. Finally, we have developed a prototype of the system that runs on real smartphones and analysed its performance.
Last updated:  2014-06-13
On the Implausibility of Differing-Inputs Obfuscation and Extractable Witness Encryption with Auxiliary Input
Sanjam Garg, Craig Gentry, Shai Halevi, Daniel Wichs
The notion of differing-inputs obfuscation (diO) was introduced by Barak et al. (CRYPTO 2001). It guarantees that, for any two circuits $C_0, C_1$, if it is difficult to come up with an input $x$ on which $C_0(x) \neq C_1(x)$, then it should also be difficult to distinguish the obfuscation of $C_0$ from that of $C_1$. This is a strengthening of indistinguishability obfuscation, where the above is only guaranteed for circuits that agree on all inputs: $C_0(x) = C_1(x)$ for all $x$. Two recent works of Ananth et al. (ePrint 2013) and Boyle et al. (TCC 2014) study the notion of diO in the setting where the attacker is also given some auxiliary information related to the circuits, showing that this notion leads to many interesting applications. In this work, we show that the existence of general-purpose diO with general auxiliary input has a surprising consequence: it implies that a specific circuit $C^*$ with specific auxiliary input $\mathsf{aux}^*$ cannot be obfuscated in a way that hides some specific information. In other words, under the conjecture that such special-purpose obfuscation exists, we show that general-purpose diO cannot exist. We do not know if this special-purpose obfuscation assumption is implied by diO itself, and hence we do not get an unconditional impossibility result. However, the special-purpose obfuscation assumption is a falsifiable assumption which we do not know how to break for candidate obfuscation schemes. Showing the existence of general-purpose diO with general auxiliary input would necessitate showing how to break this assumption. We also show that the special-purpose obfuscation assumption implies the impossibility of extractable witness encryption with auxiliary input, a notion proposed by Goldwasser et al. (CRYPTO 2013). A variant of this assumption also implies the impossibility of ``output-only dependent'' hardcore bits for general one-way functions, as recently constructed by Bellare and Tessaro (ePrint 2013) using diO.
Last updated:  2013-12-29
Using the Joint Distributions of a Cryptographic Function in Side Channel Analysis
Yanis Linge, Cecile Dumas, Sophie Lambert-Lacroix
Side Channel Analysis is now a classic way to retrieve a secret key in the smart-card world. Unfortunately, most of the ensuing attacks require the plaintext or the ciphertext used by the embedded algorithm. In this article, we present a new method for exploiting the leakage of a device without this constraint. Our attack is based on a study of the leakage distribution of internal data of a cryptographic function and can be performed not only at the beginning or the end of the algorithm, but also at every instant that involves the secret key. This paper focuses on the distribution study and the resulting attack. We also propose a way to proceed in a noisy context using smart distances. We validate our proposal with practical results on an AES-128 software implementation on an ATmega2561 and on the DPA contest v4.
Last updated:  2013-12-29
Practical Dual-Receiver Encryption---Soundness, Complete Non-Malleability, and Applications
Sherman S. M. Chow, Matthew Franklin, Haibin Zhang
We reformalize and recast dual-receiver encryption (DRE) proposed in CCS '04, a public-key encryption (PKE) scheme for encrypting to two independent recipients in one shot. We start by defining the crucial soundness property for DRE, which ensures that two recipients will get the same decryption result. While conceptually simple, DRE with soundness turns out to be a powerful primitive for various goals for PKE, such as complete non-malleability (CNM) and plaintext-awareness (PA). We then construct practical DRE schemes without random oracles under the Bilinear Decisional Diffie-Hellman assumption, while prior approaches rely on random oracles or inefficient non-interactive zero-knowledge proofs. Finally, we investigate further applications or extensions of DRE, including DRE with CNM, combined use of DRE and PKE, strengthening two types of PKE schemes with plaintext equality test, off-the-record messaging with a stronger notion of deniability, etc.
Last updated:  2013-12-29
RSA Key Extraction via Low-Bandwidth Acoustic Cryptanalysis
Daniel Genkin, Adi Shamir, Eran Tromer
Many computers emit a high-pitched noise during operation, due to vibration in some of their electronic components. These acoustic emanations are more than a nuisance: they can convey information about the software running on the computer, and in particular leak sensitive information about security-related computations. In a preliminary presentation (Eurocrypt'04 rump session), we have shown that different RSA keys induce different sound patterns, but it was not clear how to extract individual key bits. The main problem was that the acoustic side channel has a very low bandwidth (under 20 kHz using common microphones, and a few hundred kHz using ultrasound microphones), many orders of magnitude below the GHz-scale clock rates of the attacked computers. In this paper we describe a new acoustic cryptanalysis key extraction attack, applicable to GnuPG's current implementation of RSA. The attack can extract full 4096-bit RSA decryption keys from laptop computers (of various models), within an hour, using the sound generated by the computer during the decryption of some chosen ciphertexts. We experimentally demonstrate that such attacks can be carried out, using either a plain mobile phone placed next to the computer, or a more sensitive microphone placed 4 meters away. Beyond acoustics, we demonstrate that a similar low-bandwidth attack can be performed by measuring the electric potential of a computer chassis. A suitably-equipped attacker need merely touch the target computer with his bare hand, or get the required leakage information from the ground wires at the remote end of VGA, USB or Ethernet cables.
Last updated:  2013-12-20
Tightly-Secure Signatures From Lossy Identification Schemes
Michel Abdalla, Pierre-Alain Fouque, Vadim Lyubashevsky, Mehdi Tibouchi
In this paper we present three digital signature schemes with tight security reductions. Our first signature scheme is a particularly efficient version of the short exponent discrete log based scheme of Girault et al. (J. of Cryptology 2006). Our scheme has a tight reduction to the decisional Short Discrete Logarithm problem, while still maintaining the non-tight reduction to the computational version of the problem upon which the original scheme of Girault et al. is based. The second signature scheme we construct is a modification of the scheme of Lyubashevsky (Asiacrypt 2009) that is based on the worst-case hardness of the shortest vector problem in ideal lattices. And the third scheme is a very simple signature scheme that is based directly on the hardness of the Subset Sum problem. We also present a general transformation that converts what we term lossy identification schemes into signature schemes with tight security reductions. We believe that this greatly simplifies the task of constructing and proving the security of such signature schemes.
Last updated:  2013-12-20
Weaknesses in a Recently Proposed RFID Authentication Protocol
Mete Akgün, M. Ufuk Çaǧlayan
Many RFID authentication protocols have been proposed to provide the desired security and privacy levels for RFID systems. Almost all of these protocols are based on symmetric cryptography because of the limited resources of RFID tags. Recently, Cheng et al. proposed an RFID security protocol based on chaotic maps. In this paper, we analyze the security of this protocol and discover its vulnerabilities. We first present a de-synchronization attack in which a passive adversary makes the shared secrets fall out of synchronization by eavesdropping on just one protocol session. We then present a secret disclosure attack in which a passive adversary extracts the secrets of a tag by eavesdropping on just one protocol session. An adversary having the secrets of the tag can launch further attacks.
Last updated:  2014-02-07
Multiple-Use Transferable E-Cash
Pratik Sarkar
E-cash is a concept of electronic cash that allows users to carry money in the form of digital coins. Transactions can be done both offline and online in the absence of a third party/financial institution. This paper proposes an offline model that supports multiple uses of a transferable e-coin. The protocol is based on RSA, digital signatures and a two-step encryption process. In this two-step encryption, the user account details are encrypted in the coin using unique numbers in each step. The first encryption takes place during the successful receipt of the coin, where a receive-end number, unique for every receipt, is used for encryption. The second step of encryption takes place during the successful spending of the coin, where a spending-end receive number, unique for every spending of the coin, is used for encryption. These two unique numbers comprise the major part of the encryption in this model, prevent double spending and preserve user anonymity.
Last updated:  2013-12-17
Automatic Search for Differential Trails in ARX Ciphers (Extended Version)
Alex Biryukov, Vesselin Velichkov
We propose a tool for automatic search for differential trails in ARX ciphers. By introducing the concept of a partial difference distribution table (pDDT) we extend Matsui's algorithm, originally proposed for DES-like ciphers, to the class of ARX ciphers. To the best of our knowledge this is the first application of Matsui's algorithm to ciphers that do not have S-boxes. The tool is applied to the block ciphers TEA, XTEA, SPECK and RAIDEN. For RAIDEN we find an iterative characteristic on all 32 rounds that can be used to break the full cipher using standard differential cryptanalysis. This is the first cryptanalysis of the cipher in a non-related key setting. Differential trails on 9, 10 and 13 rounds are found for SPECK32, SPECK48 and SPECK64 respectively. The 13 round trail covers half of the total number of rounds. These are the first public results on the security analysis of SPECK. For TEA multiple full (i.e. not truncated) differential trails are reported for the first time, while for XTEA we confirm the previous best known trail reported by Hong et al. We also show closed formulas for computing the exact additive differential probabilities of the left and right shift operations. The source code of the tool is publicly available as part of a larger toolkit for the analysis of ARX at the following address: https://github.com/vesselinux/yaarx .
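To illustrate the pDDT notion in isolation, the following brute-force Python toy (feasible only for small word sizes; the paper's tool uses closed formulas and a Matsui-style threshold search rather than exhaustive counting) stores only those differentials of modular addition whose probability exceeds a fixed threshold:

# Toy partial difference distribution table (pDDT) for n-bit modular addition:
# keep only differentials (alpha, beta -> gamma) with probability >= threshold.
def xdp_add(alpha, beta, gamma, n):
    """Exact XOR differential probability of addition mod 2^n, by exhaustive count."""
    mask, count = (1 << n) - 1, 0
    for x in range(1 << n):
        for y in range(1 << n):
            if (((x + y) ^ ((x ^ alpha) + (y ^ beta))) & mask) == gamma:
                count += 1
    return count / float(1 << (2 * n))

def partial_ddt(n=4, threshold=0.25):
    table = {}
    for a in range(1 << n):
        for b in range(1 << n):
            for c in range(1 << n):
                p = xdp_add(a, b, c, n)
                if p >= threshold:
                    table[(a, b, c)] = p
    return table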
Last updated:  2015-04-07
Improved Boomerang Attacks on Round-Reduced SM3 and BLAKE-256
Dongxia Bai, Hongbo Yu, Gaoli Wang, Xiaoyun Wang
In this paper we study the security of the hash functions SM3 and BLAKE-256 against the boomerang attack. SM3 is designed by X. Wang et al. and published by the Chinese Commercial Cryptography Administration Office for the use of electronic certification service systems in China. BLAKE is one of the five finalists of the NIST SHA-3 competition, submitted by J.-P. Aumasson et al. For SM3, we present boomerang distinguishers for the compression function reduced to 34/35/36/37 steps out of 64 steps, with time complexities $2^{31.4}$, $2^{33.6}$, $2^{73.4}$ and $2^{192}$ respectively. Then we show some incompatibility problems that existed in the previous boomerang attacks on SM3. Meanwhile, we launch boomerang attacks on up to 7 and 8 rounds of the keyed permutation of BLAKE-256, which are the first valid $7$-round and $8$-round boomerangs for BLAKE-256. Especially, since our distinguishers on the 34/35-step compression function of SM3 and the 7-round keyed permutation of BLAKE-256 are practical, we are able to obtain boomerang quartets for these attacks. As far as we know, these are the best results against round-reduced SM3 and BLAKE-256.
Last updated:  2013-12-20
Power and Timing Side Channels for PUFs and their Efficient Exploitation
Ulrich Rührmair, Xiaolin Xu, Jan Sölter, Ahmed Mahmoud, Farinaz Koushanfar, Wayne Burleson
We discuss the first power and timing side channels on Strong Physical Unclonable Functions (Strong PUFs) in the literature, and describe their efficient exploitation via adapted machine learning (ML) techniques. Our method is illustrated by the example of the two currently most secure (CCS 2010, IEEE T-IFS 2013) electrical Strong PUFs, the so-called XOR Arbiter PUFs and Lightweight PUFs. It allows us for the first time to tackle these two architectures with a polynomial attack complexity. In greater detail, our power and timing side channels provide information on the single outputs of the many parallel Arbiter PUFs inside an XOR Arbiter PUF or Lightweight PUF. They indicate how many of these single outputs (in sum) were equal to one (and how many were equal to zero) before the outputs entered the final XOR gate. Taken by itself, this side channel information is of little value, since it does not tell which of the single outputs were zero or one, respectively. But we show that if combined with suitably adapted machine learning techniques, it allows very efficient attacks on the two above PUFs, i.e., attacks that merely use linearly many challenge-response pairs and low-degree polynomial computation times. Without countermeasures, the two PUFs can hence no longer be called secure, regardless of their sizes. For comparison, the best-performing pure modeling attacks on the above two PUFs are known to have an exponential complexity (CCS 2010, IEEE T-IFS 2013). The practical viability of our new attacks is first demonstrated by ML experiments on numerically simulated CRPs. We thereby confirm attacks on the two above PUFs for up to 16 XORs and challenge bitlengths of up to 512. Second, we execute a full experimental proof-of-concept for our timing side channel, successfully attacking FPGA implementations of the two above PUF types for 8, 12, and 16 XORs, and bitlengths 64, 128, 256 and 512. In earlier works (CCS 2010, IEEE T-IFS 2013), 8-XOR architectures with bitlength 512 had been explicitly suggested as secure and beyond the reach of foreseeable attacks. Besides the above-mentioned new power and timing side channels, two other central innovations of our paper are our tailor-made, polynomial ML algorithm that integrates the side channel information, and the implementation of Arbiter PUF variants with up to 16 XORs and bitlength 512 in silicon. To our knowledge, such sizes have never been implemented before in the literature. Finally, we discuss efficient countermeasures against our power and timing side channels. They could and should be used to secure future Arbiter PUF generations against the latter.
Last updated:  2013-12-17
Secure Floating-Point Arithmetic and Private Satellite Collision Analysis
Liina Kamm, Jan Willemson
In this paper we show that it is possible and, indeed, feasible to use secure multiparty computation for calculating the probability of a collision between two satellites. For this purpose, we first describe basic floating-point arithmetic operators (addition and multiplication) for multiparty computations. The operators are implemented on the SHAREMIND secure multiparty computation engine. We discuss the implementation details, provide methods for evaluating example elementary functions (inverse, square root, exponentiation of e, error function). Using these primitives, we implement a satellite conjunction analysis algorithm and give benchmark results for the primitives as well as the conjunction analysis itself.
Last updated:  2013-12-17
Pushing the Limit of Non-Profiling DPA using Multivariate Leakage Model
Suvadeep Hajra, Debdeep Mukhopadhyay
Profiling power attacks like the Template attack and Stochastic attack optimize their performance by jointly evaluating the leakages of multiple sample points. However, such multivariate approaches are rare among non-profiling Differential Power Analysis (DPA) attacks, since integrating the leakage of a higher-SNR sample point with the leakage of a lower-SNR sample point might result in a decrease in the overall performance. One of the few successful multivariate approaches is the application of Principal Component Analysis (PCA) to non-profiling DPA. However, PCA also performs sub-optimally in the presence of high noise. In this paper, a multivariate model for an FPGA platform is introduced for improving the performance of non-profiling DPA attacks. The introduction of the proposed model greatly increases the success rate of DPA attacks in the presence of high noise. Experimental results on both simulated power traces and real power traces are also provided as evidence.
Last updated:  2013-12-17
Weakness of Several Identity-based Tripartite Authenticated Key Agreement Protocols
Xi-Jun Lin, Lin Sun
Key agreement allows multi-parties exchanging public information to create a common secret key that is known only to those entities over an insecure network. In recent years, several identity-based authenticated key agreement protocols have been proposed. In this study, we analyze three identity-based tripartite authenticated key agreement protocols. After the analysis, we found that these protocols do not possess the desirable security attributes.
Last updated:  2013-12-31
Ultralightweight cryptography for passive RFID system
Umar Mujahid, M. Najam-ul-islam, Jameel Ahmed
RFID (Radio Frequency Identification) is one of the fastest growing technologies among pervasive systems. Non-line-of-sight capability makes RFID systems much faster than contending systems such as barcodes and magnetic tapes. However, there are some associated security concerns with RFID systems. RFID security has attracted a lot of attention in the last few years, as evinced by the large number of publications (over 2000). In this paper, a brief survey of eminent ultralightweight authentication protocols is presented, and then a four-layer security model, which comprises various passive and active attacks, is proposed. Cryptanalysis of these protocols is also performed under the implications of the proposed security model.
Last updated:  2013-12-30
A new attack on RSA with a composed decryption exponent
Abderrahmane Nitaj, Mohamed Ould Douh
In this paper, we consider an RSA modulus $N=pq$, where the prime factors $p$, $q$ are of the same size. We present an attack on RSA when the decryption exponent $d$ is of the form $d=Md_1+d_0$ where $M$ is a given positive integer and $d_1$ and $d_0$ are two suitably small unknown integers. In 1999, Boneh and Durfee~\cite{BODU} presented an attack on RSA when $d<N^{0.292}$. When $d=Md_1+d_0$, our attack enables one to overcome Boneh and Durfee's bound and to factor the RSA modulus.
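For orientation, the setting can be restated via the standard RSA key equation (a hedged sketch; the precise bounds on $d_1$ and $d_0$ under which the attack succeeds are given in the paper and not reproduced here):
$$e d \equiv 1 \pmod{\phi(N)}, \qquad d = M d_1 + d_0, \qquad \phi(N) = N - (p+q) + 1,$$
so there exists an integer $k$ with $e(M d_1 + d_0) - k\,\phi(N) = 1$. Since $e$ and $M$ are public, the attacker has to recover the comparatively small unknowns $d_1$, $d_0$ and $k$ (with $p+q$ also unknown), the kind of problem targeted by Coppersmith-style lattice techniques such as the Boneh-Durfee attack cited above.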
Last updated:  2017-07-04
How to Keep a Secret: Leakage Deterring Public-key Cryptography
Aggelos Kiayias, Qiang Tang
How is it possible to prevent the sharing of cryptographic functions? This question appears to be fundamentally hard to address since in this setting the owner of the key {\em is} the adversary: she wishes to share a program or device that (potentially only partly) implements her main cryptographic functionality. Given that she possesses the cryptographic key, it is impossible for her to be {\em prevented} from writing code or building a device that uses that key. She may though be {\em deterred} from doing so. We introduce {\em leakage-deterring} public-key cryptographic primitives to address this problem. Such primitives have the feature of enabling the embedding of owner-specific private data into the owner's public-key so that given access to {\em any} (even partially functional) implementation of the primitive, the recovery of the data can be facilitated. We formalize the notion of leakage-deterring in the context of encryption, signature, and identification and we provide efficient generic constructions that facilitate the recoverability of the hidden data while retaining privacy as long as no sharing takes place.
Last updated:  2014-03-28
A generic view on trace-and-revoke broadcast encryption schemes
Dennis Hofheinz, Christoph Striecks
At Eurocrypt 2011, Wee presented a generalization of threshold public key encryption, threshold signatures, and revocation schemes arising from threshold extractable hash proof systems. In particular, he gave instances of his generic revocation scheme from the DDH assumption (which led to the Naor-Pinkas revocation scheme), and from the factoring assumption (which led to a new revocation scheme). We expand on Wee's work in two directions: (a) We propose threshold extractable hash proof instantiations from the "Extended Decisional Diffie-Hellman" (EDDH) assumption due to Hemenway and Ostrovsky (PKC 2012). This in particular yields EDDH-based variants of threshold public key encryption, threshold signatures, and revocation schemes. In detail, this yields a DCR-based revocation scheme. (b) We show that our EDDH-based revocation scheme allows for a mild form of traitor tracing (and, thus, yields a new trace-and-revoke scheme). In particular, compared to Wee's factoring-based scheme, our DCR-based scheme has the advantage that it allows to trace traitors.
Last updated:  2013-12-20
A Study of Goldbach's conjecture and Polignac's conjecture equivalence issues
Jian Ye, Chenglian Liu
The famous Goldbach's conjecture and Polignac's conjecture are two of the unsolved problems in the field of number theory today. As is well known, Goldbach's conjecture and Polignac's conjecture are equivalent. Most of the literature does not address the internal equivalence within Polignac's conjecture. In this paper, we discuss the internal equivalence of Polignac's conjecture, namely that $T_{2k}(x)$ and $T(x)$ are equivalent. Since $T_{2k}(x)\sim T(x)\sim 2c\cdot \frac{x}{(\ln x)^{2}}$, we rewrite and re-express this as $T(x)\sim T_{4}(x)\sim T_{8}(x)\sim T_{16}(x)\sim T_{32}(x)\sim T_{2^{n}}(x)\sim 2c\cdot \frac{x}{(\ln x)^{2}}$, and then connect it with Goldbach's conjecture. Finally, we point out the important role that prime number symmetry plays in these two conjectures.
Last updated:  2014-03-28
Detecting Hidden Leakages
Amir Moradi, Sylvain Guilley, Annelie Heuser
Reducing the entropy of the mask is a technique which has been proposed to mitigate the high performance overhead of masked software implementations of symmetric block ciphers. Rotating S-box Masking (RSM) is an example of such schemes, applied to AES with the purpose of maintaining the security at least against univariate first-order side-channel attacks. This article examines the vulnerability of a realization of such a technique using the side-channel measurements publicly available through DPA contest V4. Our analyses, which focus on exploiting the first-order leakage of the implementation, discover a couple of potential attacks that can recover the secret key. Indeed, the leakage we exploit is due to a design mistake as well as the characteristics of the implementation platform, neither of which was considered during the design of the countermeasure (implemented in naive C code).
Last updated:  2013-12-16
Trust Views for the Web PKI
Johannes Braun, Florian Volk, Johannes Buchmann, Max Mühlhäuser
The steadily growing number of certification authorities (CAs) assigned to the Web Public Key Infrastructure (Web PKI) and trusted by current browsers raises severe security issues. Apart from making it impossible for relying entities to assess whom they actually trust, the current binary trust model implemented in the Web PKI makes each CA a single point of failure. In this paper, we present the concept of trust views to manage variable trust levels for exactly those CAs actually required by a relying entity. This reduces the set of trusted CAs and minimizes the risk of relying on malicious certificates issued due to CA failures or compromises.
Last updated:  2018-01-24
(Efficient) Universally Composable Oblivious Transfer Using a Minimal Number of Stateless Tokens
Seung Geol Choi, Jonathan Katz, Dominique Schröder, Arkady Yerukhimovich, Hong Sheng Zhou
We continue the line of work initiated by Katz (Eurocrypt 2007) on using tamper-proof hardware tokens for universally composable secure computation. As our main result, we show an oblivious-transfer (OT) protocol in which two parties each create and exchange a single, stateless token and can then run an unbounded number of OTs. We also show a more efficient protocol, based only on standard symmetric-key primitives (block ciphers and collision-resistant hash functions), that can be used if a bounded number of OTs suffice. Motivated by this result, we investigate the number of stateless tokens needed for universally composable OT. We prove that our protocol is optimal in this regard for constructions making black-box use of the tokens (in a sense we define). We also show that nonblack-box techniques can be used to obtain a construction using only a single stateless token.
Last updated:  2017-02-21
Lattice Decoding Attacks on Binary LWE
Shi Bai, Steven D. Galbraith
We consider the binary-LWE problem, which is the learning with errors problem when the entries of the secret vector are chosen from $\{ 0, 1\}$ or $\{ -1, 0, 1 \}$ (and the error vector is sampled from a discrete Gaussian distribution). Our main result is an improved lattice decoding algorithm for binary-LWE which first translates the problem to the inhomogeneous short integer solution (ISIS) problem, and then solves the closest vector problem using a re-scaling of the lattice. We also discuss modulus switching as an approach to the problem. Our conclusion is that binary-LWE is easier than general LWE. We give experimental results and theoretical estimates that can be used to choose parameters for binary-LWE to achieve certain security levels.
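As a hedged point of reference for the problem as defined above, a toy Python instance generator (a rounded continuous Gaussian stands in for a proper discrete Gaussian sampler, and all parameter values here are illustrative; this is not the paper's decoding attack):

# Toy binary-LWE instance: b = A s + e mod q with a binary secret s.
import numpy as np

def binary_lwe_instance(n=64, m=128, q=4093, sigma=3.0, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.integers(0, q, size=(m, n))                      # uniform public matrix
    s = rng.integers(0, 2, size=n)                           # secret in {0, 1}^n
    e = np.rint(rng.normal(0.0, sigma, size=m)).astype(int)  # small error terms
    b = (A @ s + e) % q
    return A, b, s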
Last updated:  2014-01-22
An improved compression technique for signatures based on learning with errors
Shi Bai, Steven D. Galbraith
We present a new approach to the compression technique of Lyubashevsky et al. for lattice-based signatures based on learning with errors (LWE). Our ideas seem to be particularly suitable for signature schemes whose security, in the random oracle model, is based on standard worst-case computational assumptions. Our signatures are shorter than any previous proposal for provably-secure signatures based on standard lattice problems: at the 128-bit level we improve signature size from (more than) 16500 bits to around 9000 to 12000 bits.
Last updated:  2014-03-05
Fair Two-Party Computations via Bitcoin Deposits
Marcin Andrychowicz, Stefan Dziembowski, Daniel Malinowski, Łukasz Mazurek
We show how the Bitcoin currency system (with a small modification) can be used to obtain fairness in any two-party secure computation protocol in the following sense: if one party aborts the protocol after learning the output then the other party gets a financial compensation (in bitcoins). One possible application of such protocols is fair contract signing: each party is forced to complete the protocol, or to pay a fine to the other party. We also show how to link the output of this protocol to the Bitcoin currency. More precisely: we show a method to design secure two-party protocols for functionalities that result in a "forced" financial transfer from one party to the other. Our protocols build upon the ideas of our recent paper "Secure Multiparty Computations on Bitcoin" (Cryptology ePrint Archive, Report 2013/784). Compared to that paper, our results are more general, since our protocols allow any function to be computed, while in the previous paper we concentrated only on some specific tasks (commitment schemes and lotteries). On the other hand, as opposed to "Secure Multiparty Computations on Bitcoin", to obtain security we need to modify the Bitcoin specification so that the transactions are "non-malleable" (we discuss this concept in more detail in the paper).
Last updated:  2013-12-17
Identity-Based Key-Encapsulation Mechanism from Multilinear Maps
Hao Wang, Lei Wu, Zhihua Zheng, Yilei Wang
We construct an Identity-Based Key Encapsulation Mechanism (IB-KEM) in a generic "leveled" multilinear map setting and prove its security under the multilinear decisional Diffie-Hellman assumption in the selective-ID model. Then, we translate our IB-KEM to the GGH framework, which defines an "approximate" version of a multilinear group family from ideal lattices, and modify our proof of security to use the GGH graded algebras analogue of multilinear maps.
Last updated:  2014-08-22
A Modular Framework for Building Variable-Input Length Tweakable Ciphers
Thomas Shrimpton, R. Seth Terashima
We present the Protected-IV construction (PIV), a simple, modular method for building variable-input-length tweakable ciphers. At our level of abstraction, many interesting design opportunities surface. For example, there is an obvious pathway to building beyond-birthday-bound secure tweakable ciphers with performance competitive with existing birthday-bound-limited constructions. As part of our design space exploration, we give two fully instantiated PIV constructions, TCT1 and TCT2; the latter is fast and has beyond-birthday-bound security, the former is faster and has birthday-bound security. Finally, we consider a generic method for turning a VIL tweakable cipher (like PIV) into an authenticated encryption scheme that admits associated data, can withstand nonce misuse, and allows for multiple decryption error messages. Thus, the method offers robustness even in the face of certain side-channels and common implementation mistakes.
Last updated:  2013-12-16
Keyless Signatures' Infrastructure: How to Build Global Distributed Hash-Trees
Ahto Buldas, Andres Kroonmaa, Risto Laanoja
Keyless Signatures Infrastructure (KSI) is a globally distributed system for providing time-stamping and server-supported digital signature services. Global per-second hash trees are created and their root hash values published. We discuss some service quality issues that arise in practical implementation of the service and present solutions for avoiding single points of failure and guaranteeing a service with reasonable and stable delay. Guardtime AS has been operating a KSI Infrastructure for 5 years. We summarize how the KSI Infrastructure is built, and the lessons learned during the operational period of the service.
Last updated:  2014-10-14
Verifier-Based Password-Authenticated Key Exchange: New Models and Constructions
Fabrice Benhamouda, David Pointcheval
While password-authenticated key exchange (or PAKE) protocols have been deeply studied, a server corruption remains the main threat, with many concrete cases nowadays. Verifier-based PAKE (or VPAKE) protocols, initially called Augmented-PAKE, have been proposed to limit the impact of any leakage. However, no satisfactory security model has ever been proposed to quantify the actual security of a protocol in the standard model. The unique model proposed so far is an ideal functionality in the universal composability (UC) framework, but is only meaningful in idealized models. In this paper, we first formally define some properties for the transform (password hashing) applied to the password for the storage on the server-side, for an efficient VPAKE use. A tight one-wayness is required to prevent improved password searches. We then enhance the Bellare-Pointcheval-Rogaway game-based model for PAKE to VPAKE protocols, in such a way that it allows a VPAKE protocol to be secure in the standard model. In addition, we show how to further extend this model to handle non-uniform and related passwords, both in case of PAKE and VPAKE. Finally, we propose very efficient constructions of password hashing and VPAKE protocols, which are nearly as efficient as the best PAKE protocols to date.
Last updated:  2013-12-16
Practical Dynamic Searchable Encryption with Small Leakage
Emil Stefanov, Charalampos Papamanthou, Elaine Shi
Dynamic Searchable Symmetric Encryption (DSSE) enables a client to encrypt his document collection in a way that it is still searchable and efficiently updatable. However, all DSSE constructions that have been presented in the literature so far come with several problems: Either they leak a significant amount of information (e.g., hashes of the keywords contained in the updated document) or are inefficient in terms of space or search/update time (e.g., linear in the number of documents). In this paper we revisit the DSSE problem. We propose the first DSSE scheme that achieves the best of both worlds, i.e., both small leakage and efficiency. In particular, our DSSE scheme leaks significantly less information than any other previous DSSE construction and supports both updates and searches in sublinear time in the worst case, maintaining at the same time a data structure of only linear size. We finally provide an implementation of our construction, showing its practical efficiency.
Last updated:  2013-12-16
Provable Security Proofs and their Interpretation in the Real World
Vikram Singh
This paper analyses provable security proofs, using the EDL signature scheme as its case study, and interprets their benefits and drawbacks when applied to the real world. Provable security has been an area of contention. Some, such as Koblitz and Menezes, give little credit to the potential extra security provided and argue that it is a distracting goal. However, others believe that an algorithm with a security proof is superior to one without it, and are prepared to accept the impact to performance that their use might involve. Goldreich has been notable for his defence of the security proof, and for his opposition to the view of Koblitz and Menezes. This paper is designed to help the reader make their own decisions on security proofs. We achieve this by giving an introduction to the typical security model used, then give a description of the EDL signature scheme and its tight reduction to the CDH problem in the Random Oracle Model, then analyse the proof's assumptions, meaning, validity and overhead for real world security.
Last updated:  2015-09-17
Property Preserving Symmetric Encryption Revisited
Sanjit Chatterjee, M. Prem Laxman Das
At EUROCRYPT~2012 Pandey and Rouselakis introduced the notion of property preserving symmetric encryption which enables checking for a property on plaintexts by running a public test on the corresponding ciphertexts. Their primary contributions are: (i) a separation between `find-then-guess' and `left-or-right' security notions; (ii) a concrete construction for left-or-right secure orthogonality testing in composite order bilinear groups. This work undertakes a comprehensive (crypt)analysis of property preserving symmetric encryption on both these fronts. We observe that the quadratic residue based property used in their separation result is a special case of testing equality of one-bit messages, suggest a very simple and efficient deterministic encryption scheme for testing equality and show that the two security notions, find-then-guess and left-or-right, are tightly equivalent in this setting. On the other hand, the separation result easily generalizes for the equality property. So contextualized, we posit that the question of separation between security notions is property-specific and subtler than what the authors envisaged, mandating further critical investigation. Next, we show that given a find-then-guess secure orthogonality preserving encryption of vectors of length 2n, there exists left-or-right secure orthogonality preserving encryption of vectors of length n, giving further evidence that find-then-guess is indeed a meaningful notion of security for property preserving encryption. Finally, we cryptanalyze the scheme for testing orthogonality. A simple distinguishing attack establishes that it does not even achieve the weakest notion considered, selective find-then-guess security. Our main attack extracts out the subgroup elements used to mask the message vector and indicates greater vulnerabilities in the construction beyond indistinguishability. Overall, our work underlines the importance of cryptanalysis in provable security.
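To make the equality-testing discussion above tangible, here is a minimal folklore-style sketch of a deterministic symmetric encryption whose ciphertexts can be publicly compared for equality. It is our own toy illustration using HMAC as a PRF, not the construction analysed in the paper, and it deliberately leaks equality, since that is exactly the property being preserved.

```python
# Toy deterministic "equality-preserving" encryption: Enc(k, m) = PRF(k, m).
# Equality of plaintexts can be tested publicly by comparing ciphertexts.
import hmac, hashlib, os

def keygen() -> bytes:
    return os.urandom(32)

def enc(key: bytes, message: bytes) -> bytes:
    # HMAC-SHA256 used as a PRF; deterministic, so equal plaintexts collide.
    return hmac.new(key, message, hashlib.sha256).digest()

def equal_test(c1: bytes, c2: bytes) -> bool:
    return c1 == c2

key = keygen()
c_a, c_b, c_c = enc(key, b"alice"), enc(key, b"bob"), enc(key, b"alice")
print(equal_test(c_a, c_c), equal_test(c_a, c_b))   # True False
```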
Last updated:  2014-03-05
Is Bitcoin a Decentralized Currency?
Arthur Gervais, Ghassan Karame, Srdjan Capkun, Vedran Capkun
Bitcoin has achieved large-scale acceptance and popularity by promising its users a fully decentralized and low-cost virtual currency system. However, recent incidents and observations are revealing the true limits of decentralization in the Bitcoin system. In this article, we show that the vital operations and decisions that Bitcoin is currently undertaking are not decentralized. More specifically, we show that a limited set of entities currently control the services, decision making, mining, and the incident resolution processes in Bitcoin. We also show that third-party entities can unilaterally decide to “devalue” any specific set of Bitcoin addresses pertaining to any entity participating in the system. Finally, we explore possible avenues to enhance the decentralization in the Bitcoin system.
Last updated:  2013-12-11
Decentralized Traceable Attribute-Based Signatures
Ali El Kaafarani, Essam Ghadafi, Dalia Khader
Attribute-based signatures allow a signer owning a set of attributes to anonymously sign a message w.r.t. some signing policy. A recipient of the signature is convinced that a signer with a set of attributes satisfying the signing policy has indeed produced the signature without learning the identity of the signer or which set of attributes was used in the signing. Traceable attribute-based signatures add anonymity revocation mechanisms to attribute-based signatures whereby a special tracing authority equipped with a secret key is capable of revealing the identity of the signer. Such a feature is important in settings where accountability and abuse prevention are required. In this work, we first provide a formal security model for traceable attribute-based signatures. Our focus is on the more practical case where attribute management is distributed among different authorities rather than relying on a single central authority. By specializing our model to the single attribute authority setting, we overcome some of the shortcomings of the existing model for the same setting. Our second contribution is a generic construction for the primitive which achieves a strong notion of security. Namely, it achieves CCA anonymity and its security holds w.r.t. adaptive adversaries. Moreover, our framework permits expressive signing policies. Finally, we provide some instantiations of the primitive whose security reduces to falsifiable intractability assumptions without relying on idealized assumptions.
Last updated:  2014-01-12
Lower Bounds in the Hardware Token Model
Shashank Agrawal, Prabhanjan Ananth, Vipul Goyal, Manoj Prabhakaran, Alon Rosen
We study the complexity of secure computation in the tamper-proof hardware token model. Our main focus is on non-interactive unconditional two-party computation using bit-OT tokens, but we also study computational security with stateless tokens that have more complex functionality. Our results can be summarized as follows: - There exists a class of functions such that the number of bit-OT tokens required to securely implement them is at least the size of the sender's input. The same applies for the receiver's input size (with a different class of functionalities). - Non-adaptive protocols in the hardware token model imply efficient (decomposable) randomized encodings. This can be interpreted as evidence of the impossibility of non-adaptive protocols for a large class of functions. - There exists a functionality for which there is no protocol in the stateless hardware token model accessing the tokens at most a constant number of times, even when the adversary is computationally bounded. En route to proving our results, we make interesting connections between the hardware token model and well studied notions such as the OT hybrid model, randomized encodings, and obfuscation.
Last updated:  2013-12-06
Secure multi-party data analysis: end user validation and practical experiments
Dan Bogdanov, Liina Kamm, Sven Laur, Pille Pruulmann-Vengerfeldt
Research papers on new secure multi-party computation protocols rarely confirm the need for the developed protocol with its end users. One challenge in the way of such validation is that it is hard to explain the benefits of secure multi-party computation to non-experts. We present a method that we used to explain the application models of secure multi-party computation to a diverse group of end users in several professional areas. In these interviews, we learned that the potential users were curious about the possibility of using secure multi-party computation to share and statistically analyse private data. However, they also had concerns on how the new technology will change the data analysis processes. Inspired by this, we implemented a secure multi-party computation prototype that calculates statistical functions in the same way as popular data analysis packages like R, SAS, SPSS and Stata. Finally, we validated the practical feasibility of this application by conducting an experimental study that combined tax records with education records.
Last updated:  2014-04-24
EPCGen2 Pseudorandom Number Generators: Analysis of J3Gen
Alberto Peinado, Jorge Munilla, Amparo Fúster
This paper analyzes the cryptographic security of J3Gen, a promising pseudorandom number generator for low-cost passive RFID tags. Although J3Gen has been shown to fulfill the randomness criteria set by the EPCglobal Gen2 standard and is intended for security applications, we describe here two cryptanalytic attacks which question its security claims: i) a probabilistic attack based on solving systems of linear equations, and ii) a deterministic attack based on decimation of the output sequence. Numerical results, supported by simulations, show that for the specific recommended values of the configurable parameters, a small number of intercepted output bits is enough to cryptanalyze J3Gen. We then make some recommendations which address these issues.
Last updated:  2013-12-06
Fair and Efficient Secure Multiparty Computation with Reputation Systems
Gilad Asharov, Yehuda Lindell, Hila Zarosim
A reputation system for a set of entities is essentially a list of scores that provides a measure of the reliability of each entity in the set. The score given to an entity can be interpreted (and in the reputation system literature it often is~\cite{FRS}) as the probability that an entity will behave honestly. In this paper, we ask whether or not it is possible to utilize reputation systems for carrying out secure multiparty computation. We provide formal definitions of secure computation in this setting, and carry out a theoretical study of feasibility. We present almost tight results showing when it is and is not possible to achieve \emph{fair} secure computation in our model. We suggest applications for our model in settings where some information about the honesty of other parties is given. This can be preferable to the current situation where either an honest majority is arbitrarily assumed, or a protocol that is secure for a dishonest majority is used and the efficiency and security guarantees (including fairness) of an honest majority are not obtained.
Last updated:  2013-12-13
Another Look at XCB
Debrup Chakraborty, Vicente Hernandez-Jimenez, Palash Sarkar
XCB is a tweakable enciphering scheme (TES) which was first proposed in 2004. The scheme was modified in 2007. We call these two versions of XCB XCBv1 and XCBv2, respectively. XCBv2 was later proposed as a standard for encryption of sector oriented storage media in IEEE-std 1619.2 2010. There is no known proof of security for XCBv1, but the authors provided a concrete security bound for XCBv2 and a ``proof'' justifying the bound. In this paper we show that XCBv2 is not secure as a TES by showing an easy distinguishing attack on it. For XCBv2 to be secure, the message space should contain only messages whose lengths are multiples of the block length of the block cipher. Even for such restricted message spaces, the bound that the authors claim is not justified. We show this by pointing out some errors in the proof. For XCBv2 on full block messages, we provide a new security analysis. The resulting bound that can be proved is much worse than what has been claimed by the authors. Further, we provide the first concrete security bound for XCBv1, which holds for all message lengths. In terms of known security bounds, both XCBv1 and XCBv2 are worse compared to existing alternative TES.
Last updated:  2013-12-06
Leakage Resilient Fully Homomorphic Encryption
Alexandra Berkoff, Feng-Hao Liu
We construct the first leakage resilient variants of fully homomorphic encryption (FHE) schemes. Our leakage model is bounded adaptive leakage resilience. We first construct a leakage- resilient leveled FHE scheme, meaning the scheme is both leakage resilient and homomorphic for all circuits of depth less than some pre-established maximum set at the time of key generation. We do so by applying ideas from recent works analyzing the leakage resilience of public key encryption schemes based on the decision learning with errors (DLWE) assumption to the Gentry, Sahai and Waters ([17]) leveled FHE scheme. We then move beyond simply leveled FHE, removing the need for an a priori maximum circuit depth, by presenting a novel way to combine schemes. We show that by combining leakage resilient leveled FHE with multi-key FHE, it is possible to create a leakage resilient scheme capable of homomorphically evaluating circuits of arbitrary depth, with a bounded number of distinct input ciphertexts.
Last updated:  2014-07-21
Exact Smooth Projective Hash Function based on LWE
Olivier Blazy, Céline Chevalier, Léo Ducas, Jiaxin Pan
Smooth Projective Hash Functions (SPHFs) are one of the base tools used to build interactive protocols, and this notion has led to the construction of numerous protocols enjoying strong security notions, such as security in the Bellare-Pointcheval-Rogaway (BPR) model or even Universal Composability (UC). Yet, the construction of SPHFs has been almost exclusively limited to discrete-logarithm or pairing type assumptions up to now. This stands in contrast with domains such as homomorphic encryption or functional encryption, where Lattice Based Cryptography has already caught up with and overtaken discrete-log/pairing based cryptography. So far, work in the direction of UC based on lattices is almost restricted to a paper from Peikert, Vaikuntanathan, and Waters (Crypto 2008) dealing with Oblivious Transfer in the UC framework, and work in the direction of password-authenticated key exchange protocols (PAKE) to one from Katz and Vaikuntanathan (Asiacrypt 2009) on a 3-round Password-Authenticated Key Exchange, restricted to the BPR model. It seems that dealing with errors in those contexts is not as easy as it is for encryption. In this work, we identify the problem at its source, namely, the lattice version of the Diffie-Hellman key exchange protocol: the key agreement is only approximate. We make explicit a simple folklore trick to obtain true, errorless, one-round key exchange from LWE. We then show that this trick can be adapted to various lattice encryption schemes, leading, with some technicalities, to errorless SPHFs. From there, we derive three new results, namely the first lattice-based instantiations of the following protocols: a one-round PAKE secure in the BPR model, a 3-round PAKE secure in the UC model, and a UC commitment scheme, all of them based on SIS and LWE assumptions.
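The following toy sketch illustrates the "approximate key agreement" issue mentioned above: in an LWE analogue of Diffie-Hellman, the two parties derive values that coincide only up to a small error term, which is exactly what reconciliation or the paper's errorless trick must remove. Parameters and notation are ours, and the trick itself is not reproduced here.

```python
# Toy LWE-style approximate key agreement: both parties derive values that
# differ only by a small error term (illustration only; toy parameters).
import numpy as np

rng = np.random.default_rng(7)
n, q, sigma = 64, 12289, 2.0

A = rng.integers(0, q, size=(n, n))                  # common public matrix

def small_noise(size):
    return np.rint(rng.normal(0, sigma, size=size)).astype(int)

s_alice, e_alice = rng.integers(0, 2, n), small_noise(n)
s_bob,   e_bob   = rng.integers(0, 2, n), small_noise(n)

b_alice = (A @ s_alice + e_alice) % q                # Alice -> Bob
b_bob   = (A.T @ s_bob + e_bob) % q                  # Bob -> Alice

k_alice = int(b_bob @ s_alice % q)                   # ~ s_bob^T A s_alice
k_bob   = int(b_alice @ s_bob % q)                   # ~ s_bob^T A s_alice

diff = (k_alice - k_bob) % q
print("agreement error (centered mod q):", min(diff, q - diff))
```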
Last updated:  2014-12-19
On the Security of Recently Proposed RFID Protocols
Mete Akgün, M. Ufuk Çaǧlayan
RFID authentication protocols should have a secret updating phase in order to protect the privacy of RFID tags against tag tracing attacks. In the literature, there are many lightweight RFID authentication protocols that try to provide key updating with lightweight cryptographic primitives. In this paper, we analyse the security of two recently proposed lightweight RFID authentication protocols against de-synchronization attacks. We show that the secret values shared between the back-end server and any given tag can be easily desynchronised. This weakness stems from deficiencies in the design of these protocols.
Last updated:  2013-12-06
Safe enclosures: towards cryptographic techniques for server protection
Sergiu Bursuc, Julian P. Murphy
Cryptography is generally used to protect sensitive data from an untrusted server. In this paper, we investigate the converse question: can we use cryptography to protect a trusted server from untrusted data? As a first step in this direction, we propose the notion of safe enclosures. Intuitively, a safe enclosure is a cryptographic primitive that encapsulates data in a way that allows some computation to be performed on it, while at the same time protecting the server from malicious data. Furthermore, a safe enclosure should come equipped with a dedicated protocol that implements the enclosing function with unconditional integrity. Otherwise, unguarded data may reach the server. We discuss the novelty of these concepts, propose their formal definition and show several realizations.
Last updated:  2013-12-06
On the Relation of Random Grid, Probabilistic and Deterministic Visual Cryptography
Roberto De Prisco, Alfredo De Santis
Visual cryptography is a special type of secret sharing. Two models of visual cryptography have been independently studied: deterministic visual cryptography, introduced by Naor and Shamir, and random grid visual cryptography, introduced by Kafri and Keren. In the context of the deterministic model, Yang has introduced a third model, the probabilistic visual cryptography model. The connection between the probabilistic and the deterministic models has been explored. In this paper we show that there is a strict relation between the random grid model and the deterministic model. More specifically, we show that to any random grid scheme corresponds a deterministic scheme and vice versa. This allows us to use results known in one model also in the other model. In fact, the random grid model is equivalent to the probabilistic model with no pixel expansion. Exploiting the (many) results known in the deterministic model we are able to improve several schemes and to provide many upper bounds for the random grid model. Exploiting some results known for the random grid model, we are also able to provide new schemes for the deterministic model. A side effect of this paper is that future new results for any one of the two models (random grid and deterministic) should not ignore, and in fact be compared to, the results known in the other model.
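For readers unfamiliar with the random grid model, the classical Kafri-Keren (2,2) scheme is easy to state: one share is a uniformly random grid, the other copies it on white pixels and complements it on black pixels, so stacking (OR-ing) the shares reveals the secret with no pixel expansion. The sketch below is a standard textbook illustration of this scheme, not code from the paper.

```python
# Kafri-Keren (2,2) random grid visual cryptography for a binary image.
# Convention: 0 = white (transparent), 1 = black. Stacking = bitwise OR.
import numpy as np

rng = np.random.default_rng(0)

secret = np.array([[0, 1, 1, 0],
                   [1, 0, 0, 1],
                   [1, 1, 1, 1]])                        # toy binary image

share1 = rng.integers(0, 2, size=secret.shape)           # uniformly random grid
share2 = np.where(secret == 0, share1, 1 - share1)        # copy on white, flip on black

stacked = share1 | share2
# Black secret pixels are always black; white ones stay black only half the time.
print(stacked)
print("black pixels fully covered:", bool(np.all(stacked[secret == 1] == 1)))
```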
Last updated:  2014-07-25
Interactive Encryption and Message Authentication
Yevgeniy Dodis, Dario Fiore
Public-Key Encryption (PKE) and Message Authentication (PKMA, also known as digital signatures) are fundamental cryptographic primitives. Traditionally, both notions are defined as non-interactive (i.e., single-message). In this work, we initiate a rigorous study of (possibly) {\em interactive} PKE and PKMA schemes. We obtain the following results demonstrating the power of interaction to resolve questions which are either open or impossible in the non-interactive setting. Efficiency/Assumptions. One of the most well known open questions in the area of PKE is to build, in a ``black-box way'', so called chosen ciphertext attack (CCA-) secure PKE from chosen plaintext attack (CPA-) secure PKE. In contrast, we show a simple $2$-round CCA-secure PKE from any (non-interactive) CPA-secure PKE (in fact, these primitives turn out to be equivalent). Similarly, although non-interactive PKMA schemes can be inefficiently built from any one-way function, no efficient signature schemes are known from many popular number-theoretic assumptions, such as factoring, CDH or DDH. In contrast, we show an efficient $2$-round PKMA from most popular assumptions, including factoring, CDH and DDH. Advanced Properties. It is well known that no non-interactive signature (resp. encryption) scheme can be {\em deniable} (resp. {\em forward-secure}), since the signature (resp. ciphertext) can later ``serve as an evidence of the sender's consent'' (resp. ``be decrypted if the receiver's key is compromised''). We also formalize a related notion of {\em replay-secure} (necessarily) interactive PKMA (resp. PKE) schemes, where the verifier (resp. encryptor) is assured that the ``current'' message can only be authenticated (resp. decrypted) by the secret key owner {\em now}, as opposed to some time in the past (resp. future). We observe that our 2-round PKMA scheme is both replay-secure and (passively) deniable, and our 2-round PKE scheme is both replay- and forward-secure.
Last updated:  2013-12-15
Fast Prime Field Elliptic Curve Cryptography with 256 Bit Primes
Shay Gueron, Vlad Krasnov
This paper studies software optimization of Elliptic Curve Cryptography with 256-bit prime fields. We propose a constant-time implementation of the NIST and SECG standardized curve P-256, that can be seamlessly integrated into OpenSSL. This accelerates Perfect Forward Secrecy TLS handshakes that use ECDSA and/or ECDHE, and can help improve the efficiency of TLS servers. We report significant performance improvements for ECDSA and ECDH on several architectures. For example, on the latest Intel Haswell microarchitecture, our ECDSA sign is 2.33x faster than OpenSSL’s implementation.
Last updated:  2013-12-06
Iterated group products and leakage resilience against NC^1
Eric Miles
We show that if NC^1 \neq L, then for every element g of the alternating group A_t, circuits of depth O(log t) cannot distinguish between a uniform vector over (A_t)^t with product = g and one with product = identity. Combined with a recent construction by the author and Viola in the setting of leakage-resilient cryptography [STOC '13], this gives a compiler that produces circuits withstanding leakage from NC^1 (assuming NC^1 \neq L). For context, leakage from NC^1 breaks nearly all previous constructions, and security against leakage from P is impossible. We build on work by Cook and McKenzie [J.\ Algorithms '87] establishing the relationship between L = logarithmic space and the symmetric group S_t. Our techniques include a novel algorithmic use of commutators to manipulate the cycle structure of permutations in A_t.
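To make the two distributions in the statement concrete, the snippet below samples a uniform vector of permutations from the alternating group A_t conditioned on a prescribed product: the first t-1 entries are uniform and the last one is fixed so that the product hits the target. This is only an illustration of the objects involved, written with plain list-based permutations of our own design.

```python
# Sample (g_1,...,g_t) uniform over (A_t)^t subject to g_1 * ... * g_t = target.
import random

def compose(p, q):                 # (p*q)(x) = p(q(x))
    return [p[q[x]] for x in range(len(q))]

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return inv

def is_even(p):                    # parity via cycle decomposition
    seen, parity = [False] * len(p), 0
    for i in range(len(p)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j], j, length = True, p[j], length + 1
            parity ^= (length - 1) & 1
    return parity == 0

def random_even_perm(t):
    while True:
        p = list(range(t))
        random.shuffle(p)
        if is_even(p):
            return p

t = 5
target = random_even_perm(t)                          # some fixed g in A_t
gs = [random_even_perm(t) for _ in range(t - 1)]      # first t-1 entries: uniform
partial = list(range(t))
for g in gs:
    partial = compose(partial, g)
gs.append(compose(inverse(partial), target))          # force the product to equal target

check = list(range(t))
for g in gs:
    check = compose(check, g)
print(check == target, all(is_even(g) for g in gs))   # True True
```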
Last updated:  2013-12-09
RDAS: A Symmetric Key Scheme for Authenticated Query Processing in Outsourced Databases
Lil Maria Rodriguez-Henriquez, Debrup Chakraborty
In this paper we address the problem of authenticated query processing in outsourced databases. An authenticated query processing mechanism allows a client to verify the validity of the query responses that it gets from an untrusted and remote server, who stores the client's database on its behalf. We introduce a general framework called RDAS for the problem of authenticated query processing, and define the security goals for this task in line with concrete provable security. We propose several schemes which enable a client to verify both the completeness and correctness of the query responses of a server. All the schemes follow the proposed framework and are provably secure in terms of the proposed security definition. The novelty of the proposed schemes is that they use bitmap indexes as a main component for providing authentication. Bitmap indexes have recently seen a lot of applications for accelerated query processing and many commercial databases implement such indexes. Bitmaps have not previously been used for a security goal. We show that the proposed schemes can match the existing schemes in both functionality and efficiency. We also implement the schemes on a real database and provide extensive experimental studies of them.
Last updated:  2020-06-05
Multi-ciphersuite security of the Secure Shell (SSH) protocol
Florian Bergsma, Benjamin Dowling, Florian Kohlar, Jörg Schwenk, Douglas Stebila
The Secure Shell (SSH) protocol is widely used to provide secure remote access to servers, making it among the most important security protocols on the Internet. We show that the signed-Diffie--Hellman SSH ciphersuites of the SSH protocol are secure: each is a secure authenticated and confidential channel establishment (ACCE) protocol, the same security definition now used to describe the security of Transport Layer Security (TLS) ciphersuites. While the ACCE definition suffices to describe the security of individual ciphersuites, it does not cover the case where parties use the same long-term key with many different ciphersuites: it is common in practice for the server to use the same signing key with both finite field and elliptic curve Diffie--Hellman, for example. While TLS is vulnerable to attack in this case, we show that SSH is secure even when the same signing key is used across multiple ciphersuites. We introduce a new generic multi-ciphersuite composition framework to achieve this result in a black-box way.
Last updated:  2013-12-06
A Note on Bilinear Groups of a Large Composite Order
Zhengjun Cao, Lihua Liu
We remark that the structure of bilinear groups of a large composite order (at least 1024 bits) could make group operations inefficient and lose the advantages of elliptic curve cryptography, which are gained mainly from smaller parameter sizes. As of 2013, the longest parameter recommended by NIST for elliptic curves has 571 bits. From a practical point of view, such an algebraic structure is unlikely to be applicable to cryptographic schemes.
Last updated:  2013-12-06
Constant-Round Black-Box Construction of Composable Multi-Party Computation Protocol
Susumu Kiyoshima, Yoshifumi Manabe, Tatsuaki Okamoto
We present the first general MPC protocol that satisfies the following: (1) the construction is black-box, (2) the protocol is universally composable in the plain model, and (3) the number of rounds is constant. The security of our protocol is proven in angel-based UC security under the assumption of the existence of one-way functions that are secure against sub-exponential-time adversaries and constant-round semi-honest oblivious transfer protocols that are secure against quasi-polynomial-time adversaries. We obtain the MPC protocol by constructing a constant-round CCA-secure commitment scheme in a black-box way under the assumption of the existence of one-way functions that are secure against sub-exponential-time adversaries. To justify the use of such a sub-exponential hardness assumption in obtaining our constant-round CCA-secure commitment scheme, we show that if black-box reductions are used, there does not exist any constant-round CCA-secure commitment scheme under any falsifiable polynomial-time hardness assumptions.
Last updated:  2014-05-22
Formal Analysis of CRT-RSA Vigilant's Countermeasure Against the BellCoRe Attack
Pablo Rauzy, Sylvain Guilley
In our paper at PROOFS 2013, we formally studied a few known countermeasures to protect CRT-RSA against the BellCoRe fault injection attack. However, we left Vigilant's countermeasure and its alleged repaired version by Coron et al. as future work, because the arithmetical framework of our tool was not sufficiently powerful. In this paper we bridge this gap and then use the same methodology to formally study both versions of the countermeasure. We obtain surprising results, which we believe demonstrate the importance of formal analysis in the field of implementation security. Indeed, the original version of Vigilant's countermeasure is actually broken, but not as much as Coron et al. thought it was. As a consequence, the repaired version they proposed can be simplified. It can actually be simplified even further as two of the nine modular verifications happen to be unnecessary. Fortunately, we could formally prove the simplified repaired version to be resistant to the BellCoRe attack, which was considered a "challenging issue" by the authors of the countermeasure themselves.
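As background, the BellCoRe attack that all these countermeasures target is a one-line computation: if a fault corrupts the half of an RSA-CRT signature computed modulo p, then gcd(S - S', N) reveals the other factor q. The toy demonstration below uses tiny made-up parameters and an unprotected signer; it does not implement Vigilant's countermeasure or its repaired versions.

```python
# Toy BellCoRe fault attack on unprotected RSA-CRT signing.
from math import gcd

p, q, e = 1009, 1013, 65537                  # tiny made-up RSA parameters
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))            # private exponent (Python >= 3.8)
m = 123456                                   # message representative

def crt_sign(m, fault_in_p=False):
    s_p = pow(m, d % (p - 1), p)
    s_q = pow(m, d % (q - 1), q)
    if fault_in_p:
        s_p ^= 1                             # single-bit fault in the mod-p half
    h = ((s_p - s_q) * pow(q, -1, p)) % p    # Garner recombination
    return (s_q + q * h) % N

S, S_faulty = crt_sign(m), crt_sign(m, fault_in_p=True)
factor = gcd(S - S_faulty, N)                # correct mod q, wrong mod p => gcd = q
print("recovered factor:", factor, "== q:", factor == q)
```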
Last updated:  2013-12-06
Riding the Saddle Point: asymptotics of the capacity-achieving simple decoder for bias-based traitor tracing
Sarah Ibrahimi, Boris Skoric, Jan-Jaap Oosterwijk
We study the asymptotic-capacity-achieving score function that was recently proposed by Oosterwijk et al. for bias-based traitor tracing codes. For the bias function we choose the Dirichlet distribution with a cutoff. Using Bernstein's inequality and Bennett's inequality, we upper bound the false positive and false negative error probabilities. From these bounds we derive sufficient conditions for the scheme parameters. We solve these conditions in the limit of large coalition size $c_0$ and obtain asymptotic solutions for the cutoff, the sufficient code length and the corresponding accusation threshold. The code length converges to its asymptote approximately as $c_0^{-1/2}$, which is faster than the $c_0^{-1/3}$ of Tardos' score function.
Last updated:  2015-04-02
Secrecy without Perfect Randomness: Cryptography with (Bounded) Weak Sources
Michael Backes, Aniket Kate, Sebastian Meiser, Tim Ruffing
Cryptographic protocols are commonly designed and their security proven under the assumption that the protocol parties have access to perfect (uniform) randomness. Physical randomness sources deployed in practical implementations of these protocols often fall short in meeting this assumption, but instead provide only a steady stream of bits with certain high entropy. Trying to ground cryptographic protocols on such imperfect, weaker sources of randomness has thus far mostly given rise to a multitude of impossibility results, including the impossibility to construct provably secure encryption, commitments, secret sharing, and zero-knowledge proofs based solely on a weak source. More generally, indistinguishability-based properties break down for such weak sources. In this paper, we show that the loss of security induced by using a weak source can be meaningfully quantified if the source is bounded, e.g., for the well-studied Santha-Vazirani (SV) sources. The quantification relies on a novel relaxation of indistinguishability by a quantitative parameter. We call the resulting notion differential indistinguishability in order to reflect its structural similarity to differential privacy. More concretely, we prove that indistinguishability with uniform randomness implies differential indistinguishability with weak randomness. We show that if the amount of weak randomness is limited (e.g., by using it only to seed a PRG), all cryptographic primitives and protocols still achieve differential indistinguishability.
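As a reminder of the source model, a delta-Santha-Vazirani source emits bits whose conditional probability of being 1, given the past, always lies in [1/2 - delta, 1/2 + delta], but may otherwise be chosen adversarially. The toy sampler below is our own illustration of such a source, not code or an adversary from the paper.

```python
# Toy delta-Santha-Vazirani (SV) source: each bit's conditional bias is bounded
# by delta, but may depend on the previously emitted bits.
import random

def sv_source(n_bits, delta):
    bits = []
    for _ in range(n_bits):
        # A simple "adversary": push the next bit toward the parity seen so far.
        bias = delta if sum(bits) % 2 == 0 else -delta
        p_one = 0.5 + bias                  # stays within [1/2 - delta, 1/2 + delta]
        bits.append(1 if random.random() < p_one else 0)
    return bits

print(sv_source(16, delta=0.2))
```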
Last updated:  2014-02-24
Distributed Key Generation for Secure Encrypted Deduplication
Yitao Duan
Large-scale storage systems often attempt to achieve two seemingly conflicting goals: (1) the systems need to reduce the copies of redundant data to save space, a process called deduplication; and (2) users demand encryption of their data to ensure privacy. Conventional encryption makes deduplication on ciphertexts ineffective, as it destroys data redundancy. A line of work, originated from Convergent Encryption [28], and evolved into Message Locked Encryption [12], strives to solve this problem. The latest work, DupLESS [11], proposes a server-aided architecture that provides the strongest privacy. The DupLESS architecture relies on a key server to help the clients generate encryption keys that result in convergent ciphertexts. In this paper, we first provide a rigorous proof of security, in the random oracle model, for the DupLESS architecture, which is lacking in the original paper. Our proof shows that using an additional secret, other than the data itself, for generating encryption keys achieves the best possible security under the current deduplication paradigm. We then introduce a distributed protocol that eliminates the need for a key server and allows less managed systems such as P2P systems to enjoy the high security level. Implementation and evaluation show that the scheme is both robust and practical.
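The starting point of this line of work, convergent encryption, fits in a few lines: the key is a hash of the message itself, so identical plaintexts yield identical ciphertexts that a server can deduplicate. The sketch below is only an illustration of that basic paradigm, using AES-GCM from the Python cryptography package with a fixed nonce (safe here only because every message-derived key encrypts exactly one message); it shows neither DupLESS's server-aided key generation nor the paper's distributed protocol.

```python
# Toy convergent encryption: key = H(message), so equal plaintexts produce
# equal ciphertexts and can be deduplicated by an untrusted storage server.
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

FIXED_NONCE = b"\x00" * 12    # acceptable only because each message-derived key
                              # is used for exactly one distinct message

def convergent_encrypt(message: bytes) -> bytes:
    key = hashlib.sha256(message).digest()           # message-locked key
    return AESGCM(key).encrypt(FIXED_NONCE, message, None)

def convergent_decrypt(message_key: bytes, ciphertext: bytes) -> bytes:
    return AESGCM(message_key).decrypt(FIXED_NONCE, ciphertext, None)

c1 = convergent_encrypt(b"identical backup chunk")
c2 = convergent_encrypt(b"identical backup chunk")
print("deduplicable:", c1 == c2)                     # True: stored only once
```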
Last updated:  2014-06-21
Efficient (Anonymous) Compact HIBE From Standard Assumptions
Somindu C. Ramanna, Palash Sarkar
We present two hierarchical identity-based encryption (HIBE) schemes, denoted as $\ahibe$ and $\hibe$, from Type-3 pairings with constant sized ciphertexts. Scheme $\ahibe$ achieves anonymity while $\hibe$ is non-anonymous. The constructions are obtained by extending the IBE scheme recently proposed by Jutla and Roy (Asiacrypt 2013). Security is based on the standard decisional Symmetric eXternal Diffie-Hellman (SXDH) assumption. In terms of provable security properties, previous direct constructions of constant-size ciphertext HIBE had one or more of the following drawbacks: security in the weaker model of selective-identity attacks; exponential security degradation in the depth of the HIBE; and use of non-standard assumptions. The security arguments for $\ahibe$ and $\hibe$ avoid all of these drawbacks. These drawbacks can also be avoided by obtaining HIBE schemes by specialising schemes for hierarchical inner product encryption; the downside is that the resulting efficiencies are inferior to those of the schemes reported here. Currently, there is no known anonymous HIBE scheme having the security properties of $\ahibe$ and comparable efficiency. An independent work by Chen and Wee describes a non-anonymous HIBE scheme with security claims and efficiency similar to that of $\hibe$; we note though that in comparison to $\hibe$, the Chen-Wee HIBE scheme has larger ciphertexts and less efficient encryption and decryption algorithms. Based on the current state-of-the-art, $\ahibe$ and $\hibe$ are the schemes of choice for efficient implementation of (anonymous) HIBE constructions.
Last updated:  2014-07-03
Proofs of Space: When Space is of the Essence
Giuseppe Ateniese, Ilario Bonacina, Antonio Faonio, Nicola Galesi
Proofs of computational effort were devised to control denial of service attacks. Dwork and Naor (CRYPTO ’92), for example, proposed to use such proofs to discourage spam. The idea is to couple each email message with a proof of work that demonstrates the sender performed some computational task. A proof of work can be either CPU-bound or memory-bound. In a CPU-bound proof, the prover must compute a CPU-intensive function that is easy to check by the verifier. A memory-bound proof, instead, forces the prover to access the main memory several times, effectively replacing CPU cycles with memory accesses. In this paper we put forward a new concept dubbed proof of space. To compute such a proof, the prover must use a specified amount of space, i.e., we are not interested in the number of accesses to the main memory (as in memory-bound proof of work) but rather on the amount of actual memory the prover must employ to compute the proof. We give a complete and detailed algorithmic description of our model. We develop a comprehensive theoretical analysis which uses combinatorial tools from Complexity Theory (such as pebbling games) which are essential in studying space lower bounds.
Last updated:  2013-12-01
Group Signature with relaxed-privacy and revocability for VANET
Mohammad Saiful Islam Mamun, Atsuko Miyaji
This paper adapts a new group signature (GS) scheme to the specific needs of a certain application, e.g., a vehicular ad-hoc network (VANET). The Groth GS is the first efficient GS scheme in the BSZ model with security proofs in the standard model. We modify the Groth GS in order to meet a restricted, but arguably sufficient, set of privacy properties. Although there are some authentication schemes using GS, none of them satisfies all the desirable security and privacy properties: either they follow GSs that rely on the Random Oracle Model, or they are unable to satisfy potential application requirements. In particular: link management, which allows designated entities to link messages, whether they come from the same member or from a certain group of members, without revealing their identities; opening soundness, which prevents malicious accusations by the opener against some honest member of the group; and a revocation system that revokes the privileges of fraudulent members, as in a traditional Public Key Infrastructure (PKI). In order to achieve the aforementioned security properties together, we propose a new GS model where linkability, sound opening and revocability properties are assembled in a single scheme. The novelty of our proposal stems from extending the Groth GS by relaxing its strong privacy properties to a scheme with slightly weaker privacy in order to fit the requirements of an existing VANET application. In addition, we partially minimize the Groth GS scheme to improve efficiency.
Last updated:  2013-12-01
Fully, (Almost) Tightly Secure IBE from Standard Assumptions
Jie Chen, Hoeteck Wee
We present the first fully secure Identity-Based Encryption scheme (IBE) from the standard assumptions where the security loss depends only on the security parameter and is independent of the number of secret key queries. This partially answers an open problem posed by Waters (Eurocrypt 2005). Our construction combines Waters' dual system encryption methodology (Crypto 2009) with the Naor-Reingold pseudo-random function (J. ACM, 2004) in a novel way. The security of our scheme relies on the DLIN assumption in prime-order groups.
Last updated:  2014-02-25
Cryptosystems Resilient to Both Continual Key Leakages and Leakages from Hash Functions
Guangjun Fan, Yongbin Zhou, Chengyu Hu, Dengguo Feng
Yoneyama et al. introduced the Leaky Random Oracle Model (LROM for short) at ProvSec 2008 in order to discuss the security (or insecurity) of cryptographic schemes which use hash functions as building blocks when leakages from pairs of input and output of hash functions occur. This kind of leakage occurs due to various attacks caused by sloppy usage or implementation. Their results showed that this kind of leakage may threaten the security of some cryptographic schemes. However, an important fact is that such attacks would leak not only pairs of input and output of hash functions, but also the secret key. Therefore, LROM is rather limited in the sense that it considers leakages from pairs of input and output of hash functions alone, instead of taking into consideration other possible leakages from the secret key simultaneously. On the other hand, many other leakage models mainly concentrate on leakages from the secret key and ignore leakages from hash functions for cryptographic schemes exploiting hash functions in these leakage models. Some examples show that the above drawbacks of LROM and of other leakage models may cause insecurity of some schemes which are secure in these two kinds of leakage models. In this paper, we present an augmented model of both LROM and some leakage models, in which both the secret key and pairs of input and output of hash functions can be leaked. Furthermore, the secret key can be leaked continually during the whole life cycle of a cryptographic scheme. Hence, our new model is more universal and stronger than LROM and some leakage models (e.g. the only computation leaks model and the bounded memory leakage model). As an application example, we also present a public key encryption scheme which is provably IND-CCA secure in our new model.
Last updated:  2014-02-13
Authenticating Computation on Groups: New Homomorphic Primitives and Applications
Dario Catalano, Antonio Marcedone, Orazio Puglisi
In this paper we introduce new primitives to authenticate computation on data expressed as elements in (cryptographic) groups. As for the case of homomorphic authenticators, our primitives allow verifying the correctness of the computation without having to know the original data set. More precisely, our contributions are two-fold. First, we introduce the notion of linearly homomorphic authenticated encryption with public verifiability and show how to instantiate this primitive (in the random oracle model) to support Paillier's ciphertexts. This immediately yields a very simple and efficient (publicly) verifiable computation mechanism for encrypted (outsourced) data based on Paillier's cryptosystem. As a second result, we show how to construct linearly homomorphic signature schemes to sign elements in bilinear groups (LHSG for short). This type of signature is very similar to (linearly homomorphic) structure preserving ones, but allows for more flexibility, as the signature is explicitly allowed to contain components which are not group elements. In this sense our contributions are as follows. First we show a very simple construction of LHSG that is secure against weak random message attack (RMA). Next we give evidence that RMA-secure LHSG are interesting in their own right by showing applications in the context of on-line/off-line homomorphic and network coding signatures. This notably provides what seem to be the first instantiations of homomorphic signatures achieving on-line/off-line efficiency trade-offs. Finally, we present a generic transform that converts RMA-secure LHSG into ones that achieve full security guarantees.
Last updated:  2013-12-01
Algebraic Properties of the Cube Attack
Frank-M. Quedenfeld, Christopher Wolf
Cube attacks can be used to analyse and break cryptographic primitives that have an easy algebraic description. One example of such a primitive is the stream cipher Trivium. In this article we give a new framework for cubes that are useful in the cryptanalytic context. In addition, we show how the algebraic modelling of a cipher can be greatly improved when taking both cubes and linear equivalences between variables into account. When considering many instances of Trivium, we empirically show a saturation effect, i.e., the number of variables needed to model an attack becomes constant for a given number of rounds. Moreover, we show how to systematically find cubes both for general primitives and specifically for Trivium. For the latter, we have found all cubes up to round 446 and draw some conclusions on their evolution between rounds. All techniques in this article are general and can be applied to any cipher.
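To recall the basic mechanics behind such cubes (a toy illustration of ours, not a Trivium model): summing a Boolean polynomial f(x, k) over all assignments of a chosen cube of public variables yields the cube's superpoly, which for well-chosen cubes is linear in the key bits.

```python
# Toy cube attack mechanics over GF(2):
#   f(x, k) = x0*x1*(k0 + k2) + x0*k1 + x1*x2 + k0*k1
# Summing f over the cube {x0, x1} (with x2 fixed to 0) yields the superpoly k0 + k2.
from itertools import product

def f(x, k):
    return (x[0] & x[1] & (k[0] ^ k[2])) ^ (x[0] & k[1]) ^ (x[1] & x[2]) ^ (k[0] & k[1])

def cube_sum(k, cube_indices, n_public=3):
    acc = 0
    for bits in product([0, 1], repeat=len(cube_indices)):
        x = [0] * n_public                       # non-cube public variables set to 0
        for idx, b in zip(cube_indices, bits):
            x[idx] = b
        acc ^= f(x, k)
    return acc

for k in product([0, 1], repeat=3):              # the identity holds for every key
    assert cube_sum(list(k), [0, 1]) == k[0] ^ k[2]
print("superpoly of cube {x0, x1} is k0 + k2")
```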
Last updated:  2013-12-01
New Insight into the Isomorphism of Polynomials problem IP1S and its Use in Cryptography
Gilles Macario-Rat, Jérôme Plût, Henri Gilbert
This paper investigates the mathematical structure of the ``Isomorphism of Polynomials with One Secret'' problem (IP1S). Our purpose is to understand why, for practical parameter values of IP1S, most random instances are easily solvable (as first observed by Bouillaguet et al.). We show that the structure of the problem is directly linked to the structure of quadratic forms in odd and even characteristic. We describe a completely new method that allows most instances to be solved efficiently. Unlike previous solving techniques, it is not based upon Gröbner basis computations.
Last updated:  2013-12-03
A Generic Chosen-Ciphertext Key-Leakage Secure Public Key Encryption Scheme from Hash Proof System
Rupeng Yang, Qiuliang Xu, Yongbin Zhou, Chengyu Hu, Zuoxia Yu
We present a new generic construction of a public key encryption (PKE) scheme that is secure against a-posteriori chosen-ciphertext $\lambda$-key-leakage attacks (LR-CCA-2 secure) from any universal hash proof system (HPS). Our construction relies only on the existence of universal hash proof systems, which makes our scheme simple, clean and efficient. Furthermore, our construction is a potential way to construct LR-CCA-2 secure PKE schemes from minimal assumptions.
Last updated:  2013-12-01
Tamper Resilient Circuits: The Adversary at the Gates
Aggelos Kiayias, Yiannis Tselekounis
We initiate the investigation of {\em gate}-tampering attacks against cryptographic circuits. Our model is motivated by the plausibility of tampering directly with circuit gates and by the increasing use of {\em tamper resilient gates} among the known constructions that are shown to be resilient against {\em wire-tampering} adversaries. We prove that gate-tampering is {\em strictly} stronger than wire-tampering. On the one hand, we show that there is a gate-tampering strategy that perfectly simulates any given wire-tampering strategy. On the other, we construct families of circuits over which it is impossible for any wire-tampering attacker to simulate a certain gate-tampering attack (that we explicitly construct). We also provide a tamper resilience impossibility result that applies to both gate and wire tampering adversaries and relates the amount of tampering to the depth of the circuit. Finally, we show that defending against gate-tampering attacks is feasible by appropriately abstracting and analyzing the circuit compiler of Ishai et al. \cite{Ishai:2006a} in a manner which may be of independent interest. Specifically, we first introduce a class of compilers that, assuming certain well defined tamper resilience characteristics against a specific class of attackers, can be shown to produce tamper resilient circuits against that same class of attackers. Then, we describe a compiler in this class for which we prove that it possesses the necessary tamper-resilience characteristics against gate-tampering attackers.
Last updated:  2015-06-24
Proofs of Space
Stefan Dziembowski, Sebastian Faust, Vladimir Kolmogorov, Krzysztof Pietrzak
Proofs of work (PoW) have been suggested by Dwork and Naor (Crypto'92) as protection to a shared resource. The basic idea is to ask the service requestor to dedicate some non-trivial amount of computational work to every request. The original applications included prevention of spam and protection against denial of service attacks. More recently, PoWs have been used to prevent double spending in the Bitcoin digital currency system. In this work, we put forward an alternative concept for PoWs -- so-called \emph{proofs of space} (PoS), where a service requestor must dedicate a significant amount of disk space as opposed to computation. We construct secure PoS schemes in the random oracle model (with one additional mild assumption required for the proof to go through), using graphs with high ``pebbling complexity'' and Merkle hash-trees. We discuss some applications, including follow-up work where a decentralized digital currency scheme called Spacecoin is constructed that uses PoS (instead of wasteful PoW like in Bitcoin) to prevent double spending. The main technical contribution of this work is the construction of (directed, loop-free) graphs on $N$ vertices with in-degree $O(\log\log N)$ such that even if one places $\Theta(N)$ pebbles on the nodes of the graph, there's a constant fraction of nodes that needs $\Theta(N)$ steps to be pebbled (where in every step one can put a pebble on a node if all its parents have a pebble).
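One of the building blocks named above, the Merkle hash-tree over the prover's stored labels, is easy to sketch: the prover commits to N values with a single root hash and can later open any leaf with a logarithmic-size authentication path. The toy code below shows only this commitment and opening step (our own naming), not the pebbling-graph construction that is the paper's main contribution.

```python
# Minimal Merkle hash-tree: commit to N values with one root hash, then open a
# single leaf with a log(N)-sized authentication path.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):                         # number of leaves: a power of two
    levels = [[H(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels                               # levels[-1][0] is the root commitment

def open_leaf(levels, index):
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])           # sibling at each level
        index //= 2
    return path

def verify(root, leaf, index, path):
    node = H(leaf)
    for sibling in path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

leaves = [bytes([i]) * 8 for i in range(8)]     # 8 toy stored labels
levels = build_tree(leaves)
proof = open_leaf(levels, 5)
print("leaf 5 verifies:", verify(levels[-1][0], leaves[5], 5, proof))
```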
Last updated:  2013-12-01
Insecurity of An Anonymous Authentication For Privacy-preserving IoT Target-driven Applications
Xi-Jun Lin, Lin Sun
The Internet of Things (IoT) will be formed by smart objects and services interacting autonomously and in real-time. Recently, Alcaide et al. proposed a fully decentralized anonymous authentication protocol for privacy-preserving IoT target-driven applications. Their system is set up by an ad-hoc community of decentralized founding nodes. Nodes can interact, being participants of cyberphysical systems, preserving full anonymity. In this study, we point out that their protocol is insecure. The adversary can cheat the data collectors by impersonating a legitimate user.
Last updated:  2013-12-01
Behind the Scene of Side Channel Attacks
Victor Lomné, Emmanuel Prouff, Thomas Roche
Since the introduction of side channel attacks in the nineties, a large amount of work has been devoted to improving their effectiveness and efficiency. On the one side, general results and conclusions are drawn in theoretical frameworks, but the latter are often set in too ideal a context to capture the full complexity of an attack performed in real conditions. On the other side, practical improvements are proposed for specific contexts, but the big picture is often put aside, which makes them difficult to adapt to different contexts. This paper tries to bridge the gap between both worlds. We specifically investigate what kinds of issues are faced by a security evaluator when performing a state of the art attack. This analysis leads us to focus on the very common situation where the exact time of the sensitive processing is drowned in a large number of leakage points. In this context we propose new ideas to improve the effectiveness and/or efficiency of the three considered attacks. In the particular case of stochastic attacks, we show that the existing literature, essentially developed under the assumption that the exact sensitive time is known, cannot be directly applied when the latter assumption is relaxed. To deal with this issue, we propose an improvement which makes stochastic attacks a real alternative to classical correlation power analysis. Our study is illustrated by various attack experiments performed on several copies of three micro-controllers with different CMOS technologies (respectively 350, 130 and 90 nanometers).
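As a concrete reference point for the attacks discussed above, classical correlation power analysis (CPA) correlates a Hamming-weight model of an S-box output with the measured traces for every key guess. The toy simulation below uses synthetic single-sample traces and a stand-in S-box of our own choosing; it shows only the core statistic, not the paper's multi-point setting or the stochastic attack.

```python
# Toy correlation power analysis (CPA): recover one key byte from simulated
# traces that leak HW(S[p XOR k]) plus Gaussian noise.
import numpy as np

rng = np.random.default_rng(42)

SBOX = rng.permutation(256)                  # stand-in S-box (a real attack uses AES's)
HW = np.array([bin(v).count("1") for v in range(256)])

true_key, n_traces, noise_sd = 0x3C, 2000, 1.0
plaintexts = rng.integers(0, 256, n_traces)
traces = HW[SBOX[plaintexts ^ true_key]] + rng.normal(0, noise_sd, n_traces)

def cpa_best_guess(traces, plaintexts):
    scores = []
    for guess in range(256):
        model = HW[SBOX[plaintexts ^ guess]]     # hypothetical leakage for this guess
        scores.append(abs(np.corrcoef(model, traces)[0, 1]))
    return int(np.argmax(scores))

print("recovered:", hex(cpa_best_guess(traces, plaintexts)), "expected:", hex(true_key))
```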
Last updated:  2013-11-30
A fast integer-based batch full-homomorphic encryption scheme over finite field
Long Zhang, Qiuling Yue
In view of the problem that the plaintext space in existing schemes is too small, this paper presents a new improved scheme obtained by improving the DGHV scheme. The plaintext space of the improved scheme is extended from the finite prime field $F_{2}$ of the original scheme to a finite prime field $F_{p}$. By combining and applying the encryption method of the batch encryption scheme proposed in 2013, the plaintext space is further extended to finite fields $F_{q}$. The new improved scheme encrypts messages by applying modular arithmetic and the Chinese remainder theorem, and the security of the scheme is based on the difficulty of the approximate greatest common divisor problem and the sparse subset sum problem. The improved scheme encrypts fast and has small ciphertexts, so compared with the original scheme it is better suited for practical application.
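A minimal sketch of the DGHV-style symmetric encryption that the abstract generalizes may help: lifting the plaintext space from F_2 to Z_t, a message m is encrypted as c = m + t*r + p*q for a secret large odd p, and decrypted as (c centered-mod p) mod t; addition and multiplication of ciphertexts remain correct while the noise stays below p/2. The parameter sizes below are toy values of ours and carry no security, and the CRT batching step is not shown.

```python
# Toy DGHV-style symmetric encryption with plaintext space Z_t instead of Z_2.
#   Enc(m) = m + t*r + p*q      Dec(c) = (c centered-mod p) mod t
# Homomorphic + and * stay correct while the noise term is below p/2. Toy sizes only.
import random

t = 257                                         # plaintext modulus
p = (1 << 199) | random.getrandbits(199) | 1    # secret key: large odd integer

def enc(m):
    r = random.randrange(-2**40, 2**40)         # small noise
    q = random.randrange(2**300)                # large multiplier of the secret
    return m % t + t * r + p * q

def dec(c):
    centered = ((c + p // 2) % p) - p // 2      # representative in (-p/2, p/2]
    return centered % t

m1, m2 = 123, 200
c1, c2 = enc(m1), enc(m2)
print(dec(c1 + c2) == (m1 + m2) % t)            # homomorphic addition
print(dec(c1 * c2) == (m1 * m2) % t)            # homomorphic multiplication
```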
Last updated:  2013-11-30
Improved Authenticity Bound of EAX, and Refinements
Kazuhiko Minematsu, Stefan Lucks, Tetsu Iwata
EAX is a mode of operation for blockciphers to implement authenticated encryption. The original EAX paper proved that EAX is unforgeable up to $O(2^{n/2})$ data with one verification query. However, this generally guarantees only a rather weak bound for unforgeability under multiple verification queries, i.e., only $O(2^{n/3})$ data is acceptable. This paper provides an improvement over the previous security proof, by showing that EAX is unforgeable up to $O(2^{n/2})$ data with multiple verification queries. Our security proof is based on techniques that appeared in an FSE 2013 paper by Minematsu et al., which studied the security of a variant of EAX called EAX-prime. We also provide some ideas to reduce the complexity of EAX while keeping our new security bound. In particular, EAX needs three blockcipher calls as pre-processing and keeps their results in memory; our proposals effectively reduce these three calls to one. This would be useful when computational power and memory are constrained.
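For readers who want to experiment with EAX as an authenticated-encryption mode, the short usage sketch below relies on the PyCryptodome library (an implementation choice of ours, not something the paper prescribes); it shows the encrypt-and-authenticate and verify-or-reject flow that the unforgeability bound speaks about.

```python
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)
nonce = get_random_bytes(16)

# Encrypt-and-authenticate.
cipher = AES.new(key, AES.MODE_EAX, nonce=nonce)
cipher.update(b"associated data")                        # authenticated but not encrypted
ciphertext, tag = cipher.encrypt_and_digest(b"secret message")

# Verify-then-decrypt; a wrong tag or tampered ciphertext raises ValueError.
verifier = AES.new(key, AES.MODE_EAX, nonce=nonce)
verifier.update(b"associated data")
plaintext = verifier.decrypt_and_verify(ciphertext, tag)
assert plaintext == b"secret message"
```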
Last updated:  2014-05-14
APE: Authenticated Permutation-Based Encryption for Lightweight Cryptography
Elena Andreeva, Begül Bilgin, Andrey Bogdanov, Atul Luykx, Bart Mennink, Nicky Mouha, Kan Yasuda
The domain of lightweight cryptography focuses on cryptographic algorithms for extremely constrained devices. It is very costly to avoid nonce reuse in such environments, because this requires either a secure pseudorandom number generator (PRNG) or non-volatile memory to store a counter. At the same time, many cryptographic schemes actually require the nonce assumption for their security. In this paper, we propose APE as the first permutation-based authenticated encryption scheme that is resistant against nonce misuse. We formally prove that APE is secure, based on the security of the underlying permutation. To decrypt, APE processes the ciphertext blocks in reverse order and uses inverse permutation calls. APE therefore requires a permutation that is efficient for both forward and inverse calls. We instantiate APE with the permutations of three recent lightweight hash function designs: quark, photon, and spongent. For any of these permutations, an implementation that supports both encryption and decryption requires less than 1.9~kGE and 2.8~kGE for 80-bit and 128-bit security levels, respectively.
Last updated:  2013-11-30
Parallelizable and Authenticated Online Ciphers
Elena Andreeva, Andrey Bogdanov, Atul Luykx, Bart Mennink, Elmar Tischhauser, Kan Yasuda
Online ciphers encrypt an arbitrary number of plaintext blocks and output ciphertext blocks which only depend on the preceding plaintext blocks. All online ciphers proposed so far are essentially serial, which significantly limits their performance on parallel architectures such as modern general-purpose CPUs or dedicated hardware. We propose the first parallelizable online cipher, COPE. It performs two calls to the underlying block cipher per plaintext block and is fully parallelizable in both encryption and decryption. COPE is proven secure against chosen-plaintext attacks assuming the underlying block cipher is a strong PRP. We then extend COPE to create COPA, the first parallelizable, online authenticated cipher with nonce-misuse resistance. COPA only requires two extra block cipher calls to provide integrity. The privacy and integrity of the scheme are proven secure assuming the underlying block cipher is a strong PRP. Our implementation with Intel AES-NI on a Sandy Bridge CPU architecture shows that both COPE and COPA are about \textit{5 times faster} than their closest competition: TC1, TC3, and McOE-G. This high factor of advantage emphasizes the paramount role of parallelizability on up-to-date computing platforms.
Last updated:  2016-08-15
Proofs of Data Possession and Retrievability Based on MRD Codes
Shuai Han, Shengli Liu, Kefei Chen, Dawu Gu
Proofs of Data Possession (PoDP) schemes are essential to data outsourcing. A PoDP provides an efficient audit to convince a client that his/her file is available at the storage server, ready for retrieval when needed. An updated version of PoDP is Proofs of Retrievability (PoR), which proves that the client's file can be recovered through interactions with the storage server. We propose a PoDP/PoR scheme based on Maximum Rank Distance (MRD) codes. The client file is encoded block-wise to generate homomorphic tags with the help of an MRD code. In an audit, the storage provider is able to aggregate the blocks and tags into one block and one tag, thanks to the homomorphic property of the tags. The algebraic structure of MRD codewords enables the aggregation to be performed over a binary field, which reduces the storage provider's computation to the highly efficient XOR operation. We also prove two security notions, unforgeability (for PoDP) and soundness (for PoR), using properties of MRD codes. Meanwhile, the storage provider can audit itself to locate and correct errors in the stored data, improving the reliability of the system, again thanks to the MRD code.
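The challenge/aggregate/verify flow of such an audit can be sketched in a few lines. In the toy code below the tag of block $i$ is simply the block XORed with a PRF value, standing in for the MRD-code-based homomorphic tags of the paper; it illustrates only how XOR aggregation keeps the server's and client's work to XOR operations, not the scheme's security.

```python
import hashlib
import hmac
import os
import random
import secrets
from functools import reduce

BLOCK = 32  # bytes per file block

def prf(key: bytes, i: int) -> bytes:
    return hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Client: split the file into blocks and attach one tag per block, then outsource
# both blocks and tags; only `key` is kept locally.
key = secrets.token_bytes(32)
blocks = [os.urandom(BLOCK) for _ in range(64)]
tags = [xor(blocks[i], prf(key, i)) for i in range(len(blocks))]

# Audit: the client challenges a random subset; the server answers with a single
# aggregated block and a single aggregated tag, both computed by XOR.
challenge = random.sample(range(len(blocks)), 10)
agg_block = reduce(xor, (blocks[i] for i in challenge))
agg_tag = reduce(xor, (tags[i] for i in challenge))

# Client: one XOR of PRF values per challenged index suffices to verify the response.
expected = reduce(xor, (prf(key, i) for i in challenge), agg_block)
assert agg_tag == expected
```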
Last updated:  2013-11-30
Improvement of Lin-Tzeng Solution to Yao's Millionaires Problem and Its Cheating Advantage Analysis
Zhengjun Cao, Lihua Liu
In 2005, Lin and Tzeng proposed a solution to Yao's Millionaires problem in the setting of semi-honest parties. At the end of their protocol, only the party (Alice) who is responsible for setting up the system parameters knows the outcome; the protocol does not specify how the other party (Bob) learns the result. In this note, we present an improvement of the Lin-Tzeng solution. It requires that Alice and Bob alternately perform the original protocol twice. Under the reasonable assumption that a participant does not deviate from the prescribed steps before he/she obtains the outcome, Alice and Bob obtain the result almost simultaneously. To the best of our knowledge, this is the first time it has been shown that, in this setting, one participant has only an advantage of $\ln n/n$ in the probability of cheating the other.
Last updated:  2013-11-30
Wide-weak Privacy Preserving RFID Mutual Authentication Protocol
Raghuvir Songhela, Manik Lal Das
Radio Frequency IDentification (RFID) systems are gaining enormous interest in industry due to their vast applications, such as supply chains, access control, inventory, transport, health care, and home appliances. Although tag identification is the primary security goal of an RFID system, privacy is an equally, if not more, important concern because of the pervasiveness of RFID tags. Over the years, many protocols have been proposed for RFID tag identification using different cryptographic primitives. It has been observed that most of them provide tag identification but fail to preserve tag privacy. It has also been proven that public-key primitives are essential for strong privacy and security requirements in RFID systems. In this paper, we present a mutual authentication protocol for RFID systems using elliptic curve arithmetic. Precisely, the proposed protocol provides narrow-strong and wide-weak privacy and resists tracking attacks under a standard complexity assumption. The protocol is compared with related works and found to be more efficient.
Last updated:  2014-07-19
Tree Based Symmetric Key Broadcast Encryption
Sanjay Bhattacherjee, Palash Sarkar
The most influential broadcast encryption (BE) scheme to date was introduced in 2001 by Naor, Naor and Lotspiech (NNL) and is based on binary trees. This paper generalizes the ideas of NNL to obtain BE schemes based on $k$-ary trees for any $k\geq 2$. The treatment is uniform across all $k$ and essentially provides a single scheme which is parameterized by the arity of the underlying tree. We perform an extensive analysis of the header length and user storage of the scheme. It is shown that for a $k$-ary tree with $n$ users out of which $r$ are revoked, the maximum header length is $\min(2r-1,n-r,\lceil n/k\rceil)$. An expression for the expected header length is obtained and it is shown that the expression can be evaluated in $O(r\log n)$ time. Experimental results indicate that for values of $r$ one would expect in applications such as pay TV systems, the average header length decreases as $k$ increases. The number of keys to be stored by any user is shown to be at most $(\chi_k-2)\ell_0(\ell_0+1)/2$, where $\ell_0=\lceil\log_k n\rceil$ and $\chi_k$ is the number of cyclotomic cosets modulo $2^k-1$. In particular, when the number of users is more than $1024$, we prove that the user storage required for $k=3$ is less than that of $k=2$. For higher values of $k$, the user storage is greater than that for binary trees. The option of choosing the value of $k$ provides a designer of a BE system with a wider range of trade-offs between average header length and user storage. The effect of layering on the $k$-ary tree SD scheme is also explored.
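As a small aid for readers comparing parameter choices, the helper below simply evaluates the maximum-header-length bound $\min(2r-1,n-r,\lceil n/k\rceil)$ stated above (for $r\geq 1$); it is a restatement of the formula, not an implementation of the scheme.

```python
from math import ceil

def max_header_length(n: int, r: int, k: int) -> int:
    """Upper bound on the header length for n users, r >= 1 revoked, k-ary tree."""
    return min(2 * r - 1, n - r, ceil(n / k))

# Example: with a million users and 300 revocations, the 2r-1 term dominates.
print(max_header_length(n=1_000_000, r=300, k=3))   # -> 599
```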
Last updated:  2013-11-30
Efficient Leakage-Resilient Signature Schemes in the Generic Bilinear Group Model
Fei Tang, Hongda Li, Qihua Niu, Bei Liang
We extend the techniques of Kiltz et al. (ASIACRYPT 2010) and Galindo et al. (SAC 2012) to construct two efficient leakage-resilient signature schemes. Our schemes are based on the Boneh-Lynn-Shacham (BLS) short signature and the Waters signature scheme, respectively. Both of them are more efficient than Galindo et al.'s scheme, and can tolerate leakage of a $(1-o(1))/2$ fraction of the secret key at every signature invocation. The security of the proposed schemes is proved in the generic bilinear group model (additionally, in our first scheme, which is based on the BLS short signature, a random oracle is needed for the proof).
Last updated:  2014-03-18
Secure Multiparty Computations on Bitcoin
Marcin Andrychowicz, Stefan Dziembowski, Daniel Malinowski, Łukasz Mazurek
Bitcoin is a decentralized digital currency, introduced in 2008, that has recently gained noticeable popularity. Its main features are: (a) it lacks a central authority that controls the transactions, (b) the list of transactions is publicly available, and (c) its syntax allows more advanced transactions than simply transferring money. The goal of this paper is to show how these properties of Bitcoin can be used in the area of secure multiparty computation protocols (MPCs). Firstly, we show that the Bitcoin system provides an attractive way to construct a version of "timed commitments", where the committer has to reveal his secret within a certain time frame or pay a fine. This, in turn, can be used to obtain fairness in some multiparty protocols. Secondly, we introduce a concept of multiparty protocols that work "directly on Bitcoin". Recall that the standard definition of MPCs guarantees only that the protocol "emulates the trusted third party"; ensuring that the inputs are correct and that the outcome is respected is beyond the scope of the definition. Our observation is that the Bitcoin system can be used to go beyond this standard "emulation-based" definition, by constructing protocols that link their inputs and outputs with real Bitcoin transactions. As an instantiation of this idea we construct protocols for secure multiparty lotteries using the Bitcoin currency, without relying on a trusted authority (one of these protocols uses the Bitcoin-based timed commitments mentioned above). Our protocols guarantee fairness for the honest parties no matter how the loser behaves; for example, if one party interrupts the protocol, then her money is transferred to the honest participants. Our protocols are practical (to demonstrate this we performed their transactions in the actual Bitcoin system) and can be used in real life as a replacement for online gambling sites. We think that this paradigm can also have other applications, some of which we discuss.
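The commit/open/timeout logic behind the timed commitments sketched above can be illustrated off-chain in a few lines; the actual Bitcoin transactions and scripts that enforce the deadline and the fine are abstracted away, and all names and parameters below are illustrative.

```python
import hashlib
import secrets
import time

def commit(secret: bytes) -> bytes:
    # The committer publishes only the hash of the secret.
    return hashlib.sha256(secret).digest()

def settle(h: bytes, secret: bytes, deadline: float, now: float) -> str:
    # After the deadline the deposit (the "fine") goes to the receiver; before it,
    # a correct opening returns the deposit to the committer.
    if now > deadline:
        return "timeout: fine paid to the receiver"
    if hashlib.sha256(secret).digest() == h:
        return "opened in time: deposit returned to the committer"
    return "invalid opening"

secret = secrets.token_bytes(32)
h = commit(secret)
deadline = time.time() + 3600          # the committer has one hour to reveal
print(settle(h, secret, deadline, now=time.time()))
```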
Last updated:  2013-11-30
ECC-Based Non-Interactive Deniable Authentication with Designated Verifier
Yalin Chen, Jue-Sam Chou
Recently, researchers have proposed many non-interactive deniable authentication (NIDA) protocols. Most of them claim that their protocols possess full deniability. However, after reviewing them, we found that they either cannot achieve full deniability or suffer from KCI or SKCI attacks; moreover, they lack efficiency, because they are mainly based on the DLP, the factoring problem, or bilinear pairings. Motivated by this observation, and by the fact that ECC provides security equivalent to RSA and DSA with much smaller key sizes, we use the Fiat-Shamir heuristic to propose a novel ECC-based NIDA protocol that achieves full deniability and is more efficient than the previous schemes. Security analyses and efficiency comparisons confirm this. The proposed scheme is therefore more suitable than the others for implementation on low-power mobile devices.
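The Fiat-Shamir heuristic that the construction relies on can be illustrated with a compact non-interactive Schnorr-style proof of knowledge. For brevity the sketch below works in a toy multiplicative group modulo a small safe prime rather than the elliptic-curve group of the paper, and it models neither the designated-verifier nor the deniability machinery; all parameters are illustrative.

```python
import hashlib
import secrets

# Toy safe-prime group: p = 2q + 1 with q prime; g = 4 generates the order-q subgroup.
# Real deployments use much larger groups (or, as in the paper, elliptic curves).
p, q, g = 23, 11, 4

def challenge(*parts: bytes) -> int:
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return int.from_bytes(h.digest(), "big") % q

# Prover: knows x with y = g^x mod p and proves knowledge of x non-interactively.
x = secrets.randbelow(q)
y = pow(g, x, p)

r = secrets.randbelow(q)
t = pow(g, r, p)                                           # commitment
c = challenge(t.to_bytes(4, "big"), y.to_bytes(4, "big"))  # Fiat-Shamir: a hash replaces the verifier
s = (r + c * x) % q                                        # response

# Verifier: recomputes the challenge from (t, y) and checks g^s == t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```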