All papers in 2011 (Page 8 of 714 results)

Last updated:  2011-01-08
Unconditionally Reliable Message Transmission in Directed Neighbour Networks
Shashank Agrawal, Abhinav Mehta, Kannan Srinathan
The problem of unconditionally reliable message transmission (URMT) is to design a protocol which, when run by players in a network, enables a sender S to deliver a message to a receiver R with high probability, even when some players in the network are under the control of an unbounded adversary. Renault and Tomala [JoC2008] gave a characterization of undirected neighbour networks over which URMT tolerating a Byzantine adversary is possible. In this paper, we generalize their result to the case of directed networks.
Last updated:  2011-01-08
Secure Message Transmission In Asynchronous Directed Networks
Shashank Agrawal, Abhinav Mehta, Kannan Srinathan
We study the problem of information-theoretically secure message transmission (SMT) in asynchronous directed networks. In line with the literature, the distrust and failures of the network are captured via a computationally unbounded Byzantine adversary that may corrupt some subset of nodes. We give a characterization of networks over which SMT from sender S to receiver R is possible in both the well-known settings, namely perfect SMT (PSMT) and unconditional SMT (USMT). We distinguish between two variants of USMT: one in which R can output an incorrect message (with small probability) and another in which R never outputs a wrong message, but may choose to abort (with small probability). We also provide efficient protocols for an important class of networks.
Last updated:  2011-01-08
Minimizing Non-interactive Zero-Knowledge Proofs Using Fully Homomorphic Encryption
Jens Groth
A non-interactive zero-knowledge proof can be used to demonstrate the truth of a statement without revealing anything else. It has been shown under standard cryptographic assumptions that non-interactive zero-knowledge proofs of membership exist for all languages in NP. However, known non-interactive zero-knowledge proofs of membership of NP-languages yield proofs that are larger than the corresponding membership witnesses. We investigate the question of minimizing the communication overhead involved in making non-interactive zero-knowledge proofs and show that if fully homomorphic encryption exists then it is possible to minimize the size of non-interactive zero-knowledge proofs and get proofs that are of the same size as the witnesses. Our technique is applicable to many types of non-interactive zero-knowledge proofs. We apply it to both standard non-interactive zero-knowledge proofs and to universally composable non-interactive zero-knowledge proofs. The technique can also be applied outside the realm of non-interactive zero-knowledge proofs, for instance to get witness-size interactive zero-knowledge proofs in the plain model without any setup.
Last updated:  2011-01-06
After-the-Fact Leakage in Public-Key Encryption
Shai Halevi, Huijia Lin
What does it mean for an encryption scheme to be leakage-resilient? Prior formulations require that the scheme remains semantically secure even in the presence of leakage, but only considered leakage that occurs \emph{before the challenge ciphertext is generated}. Although seemingly necessary, this restriction severely limits the usefulness of the resulting notion. In this work we study after-the-fact leakage, namely leakage that the adversary obtains after seeing the challenge ciphertext. We seek a ``natural'' and realizable notion of security, which is usable in higher-level protocols and applications. To this end, we formulate \emph{entropic leakage-resilient PKE}. This notion captures the intuition that as long as the entropy of the encrypted message is higher than the amount of leakage, the message still has some (pseudo) entropy left. We show that this notion is realized by the Naor-Segev constructions (using hash proof systems). We demonstrate that entropic leakage-resilience is useful by showing a simple construction that uses it to get semantic security in the presence of after-the-fact leakage, in a model of bounded memory leakage from a split state.
Last updated:  2011-01-06
Structured Encryption and Controlled Disclosure
Melissa Chase, Seny Kamara
We consider the problem of encrypting structured data (e.g., a web graph or a social network) in such a way that it can be efficiently and privately queried. For this purpose, we introduce the notion of structured encryption which generalizes previous work on symmetric searchable encryption (SSE) to the setting of arbitrarily-structured data. In the context of cloud storage, structured encryption allows a client to encrypt data without losing the ability to query and retrieve it efficiently. Another application, which we introduce in this work, is to the problem of controlled disclosure, where a data owner wishes to grant access to only part of a massive dataset. We propose a model for structured encryption, a formal security definition and several efficient constructions. We present schemes for performing queries on two simple types of structured data, specifically lookup queries on matrix-structured data, and search queries on labeled data. We then show how these can be used to construct efficient schemes for encrypting graph data while allowing for efficient neighbor and adjacency queries. Finally, we consider data that exhibits a more complex structure such as labeled graph data (e.g., web graphs). We show how to encrypt this type of data in order to perform focused subgraph queries, which are used in several web search algorithms. Our construction is based on our labeled data and basic graph encryption schemes and provides insight into how several simpler algorithms can be combined to generate an efficient scheme for more complex queries.
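To make the lookup-query setting concrete, the following is a minimal sketch of encrypting matrix-structured data so that a per-cell token reveals exactly one cell. This is our own toy illustration, not the paper's construction; the class and method names are ours, and the PRF is instantiated with HMAC-SHA256.

```python
import hashlib
import hmac
import os

def _prf(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

class MatrixEncryption:
    """Toy lookup-query scheme for matrix-structured data.

    Each cell (i, j) is stored under a PRF-derived label and encrypted
    with a PRF-derived pad, so a query token reveals that one cell and
    nothing else about the rest of the matrix.
    """
    def __init__(self):
        self.key = os.urandom(32)

    def encrypt(self, matrix):
        store = {}
        for i, row in enumerate(matrix):
            for j, value in enumerate(row):
                cell = f"{i},{j}".encode()
                label = _prf(self.key, b"label:" + cell)
                pad = _prf(self.key, b"pad:" + cell)
                data = value.encode().ljust(32, b"\x00")  # <= 32 bytes
                store[label] = bytes(a ^ b for a, b in zip(data, pad))
        return store

    def token(self, i, j):
        """Client-side: derive the query token for cell (i, j)."""
        cell = f"{i},{j}".encode()
        return (_prf(self.key, b"label:" + cell),
                _prf(self.key, b"pad:" + cell))

    @staticmethod
    def query(store, token):
        """Server-side lookup followed by client-side unpadding."""
        label, pad = token
        data = bytes(a ^ b for a, b in zip(store[label], pad))
        return data.rstrip(b"\x00").decode()
```

A graph scheme in the abstract's spirit could reuse this by storing each adjacency row under a label derived from the vertex name.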
Last updated:  2012-01-05
Progression-Free Sets and Sublinear Pairing-Based Non-Interactive Zero-Knowledge Arguments
Helger Lipmaa
In 2010, Groth constructed the only previously known sublinear-communication NIZK circuit satisfiability argument in the common reference string model. We optimize Groth's argument by, in particular, reducing both the CRS length and the prover's computational complexity from quadratic to quasilinear in the circuit size. We also use a (presumably) weaker security assumption, and have tighter security reductions. Our main contribution is to show that the complexity of Groth's basic arguments is dominated by the quadratic number of monomials in certain polynomials. We collapse the number of monomials to quasilinear by using a recent construction of progression-free sets.
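As a concrete illustration of the combinatorial object involved (not the specific recent construction the paper builds on), the classic base-3 example of a progression-free set: integers whose ternary digits are all 0 or 1 contain no non-trivial 3-term arithmetic progression.

```python
def progression_free_set(bound):
    """Integers below `bound` whose base-3 digits are all 0 or 1.

    Such a set is progression-free: a + c = 2b is carry-free when all
    digits are 0 or 1 (digit sums are at most 2), so it forces
    a_i + c_i = 2*b_i in every digit, hence a = b = c.
    """
    result = []
    for n in range(bound):
        m = n
        while m and m % 3 != 2:  # scan digits until a 2 appears
            m //= 3
        if m == 0:               # no digit was 2
            result.append(n)
    return result

def has_3term_ap(s):
    """Brute-force check for a non-trivial 3-term AP in s."""
    elems = set(s)
    return any(2 * b == a + c
               for a in elems for c in elems if a < c
               for b in elems if a < b < c)
```

The density of such sets is what governs how far the quadratic number of monomials can be collapsed.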
Last updated:  2011-01-05
Computing Elliptic Curve Discrete Logarithms with the Negation Map
Ping Wang, Fangguo Zhang
It is clear that the negation map can be used to speed up the computation of elliptic curve discrete logarithms with the Pollard rho method. However, the random walks defined on the equivalence classes $\{\pm P\}$ of elliptic curve points used by Pollard rho will always get trapped in fruitless cycles. We propose an efficient alternative approach to resolving fruitless cycles. Besides the theoretical analysis, we also examine the performance of the new algorithm in experiments with elliptic curve groups. The results show that the new algorithm with the negation map achieves a speedup by a factor extremely close to $\sqrt{2}$, the best performance one can achieve in theory.
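A toy model of the idea: walking on equivalence classes $\{\pm P\}$ instead of points halves the search space (hence the $\sqrt{2}$ speedup) but allows fruitless cycles. The curve below ($y^2 = x^3 + 2x + 2$ over $\mathbb{F}_{17}$, base point $(5,1)$ of order 19) and the walk details are our own illustrative choices, not the authors' algorithm.

```python
P_MOD, A = 17, 2              # field prime and curve coefficient a

def inv(x):
    return pow(x, P_MOD - 2, P_MOD)

def add(p, q):
    """Affine point addition; None denotes the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if p == q:
        s = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD
    else:
        s = (y2 - y1) * inv(x2 - x1) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def mul(k, p):
    r = None
    for _ in range(k):
        r = add(r, p)
    return r

def canonical(p):
    """Representative of the class {P, -P}: the point with smaller y."""
    if p is None:
        return None
    x, y = p
    return (x, min(y, (P_MOD - y) % P_MOD))

def walk_step(state, teleports):
    """One additive-walk step on classes. When canonical() flips the
    sign and the next step picks the same teleport, the walk enters a
    fruitless 2-cycle; practical fixes detect short cycles and escape,
    e.g. by doubling the trapped point."""
    x, _ = state
    return canonical(add(state, teleports[x % len(teleports)]))
```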
Last updated:  2011-10-20
KISS: A Bit Too Simple
Greg Rose
KISS (`Keep it Simple Stupid') is an efficient pseudo-random number generator specified by G. Marsaglia and A. Zaman in 1993. G. Marsaglia in 1998 posted a C version to various USENET newsgroups, including \texttt{sci.crypt}. Marsaglia himself has never claimed cryptographic security for the KISS generator, but many others have made the intellectual leap and claimed that it is of cryptographic quality. In this paper we show a number of reasons why the generator does not meet the KISS authors' claims, why it is not suitable for use as a stream cipher, and that it is not cryptographically secure. Our best attack requires about 70 words of generated output and a few hours of computation to recover the initial state.
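For reference, a sketch of the KISS design as it appears in a widely circulated later posting by Marsaglia (the 1993 specification the paper analyzes differs in details): two 16-bit multiply-with-carry generators, a 3-shift xorshift register, and a linear congruential generator, combined by xor and addition. The simplicity that makes it fast is exactly what the attack exploits.

```python
MASK32 = 0xFFFFFFFF

class KISS:
    """Toy KISS generator; statistical quality only, NOT secure."""

    def __init__(self, z=362436069, w=521288629,
                 jsr=123456789, jcong=380116160):
        self.z, self.w, self.jsr, self.jcong = z, w, jsr, jcong

    def next(self):
        # two 16-bit multiply-with-carry generators
        self.z = (36969 * (self.z & 0xFFFF) + (self.z >> 16)) & MASK32
        self.w = (18000 * (self.w & 0xFFFF) + (self.w >> 16)) & MASK32
        mwc = ((self.z << 16) + self.w) & MASK32
        # 3-shift register (xorshift)
        self.jsr ^= (self.jsr << 17) & MASK32
        self.jsr ^= self.jsr >> 13
        self.jsr ^= (self.jsr << 5) & MASK32
        # linear congruential generator
        self.jcong = (69069 * self.jcong + 1234567) & MASK32
        return ((mwc ^ self.jcong) + self.jsr) & MASK32
```

Because each component is linear or near-linear, observed output words constrain the small internal state, which is what makes a state-recovery attack from roughly 70 words feasible.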
Last updated:  2011-01-05
Exploring the Limits of Common Coins Using Frontier Analysis of Protocols
Hemanta K. Maji, Pichayoot Ouppaphan, Manoj Prabhakaran, Mike Rosulek
In 2-party secure computation, access to common, trusted randomness is a fundamental primitive. It is widely employed in the setting of computationally bounded players (under various complexity assumptions) to great advantage. In this work we seek to understand the power of trusted randomness, primarily in the computationally unbounded (or information theoretic) setting. We show that a source of common randomness does not add any additional power for secure evaluation of deterministic functions, even when one of the parties has arbitrary influence over the distribution of the common randomness. Further, common randomness helps only in a trivial sense for realizing randomized functions too (namely, it only allows for sampling from publicly fixed distributions), if UC security is required. To obtain these impossibility results, we employ a recently developed protocol analysis technique, which we call the {\em frontier analysis}. This involves analyzing carefully defined ``frontiers'' in a weighted tree induced by the protocol's execution (or executions, with various inputs), and establishing various properties regarding one or more such frontiers. We demonstrate the versatility of this technique by employing carefully chosen frontiers to derive the different results. To analyze randomized functionalities we introduce a frontier argument that involves a geometric analysis of the space of probability distributions. Finally, we relate our results to computational intractability questions. We give an equivalent formulation of the ``cryptomania assumption'' (that there is a semi-honest or standalone secure oblivious transfer protocol) in terms of UC-secure reduction among randomized functionalities. Also, we provide an {\em unconditional result} on the uselessness of common randomness, even in the computationally bounded setting. Our results make significant progress towards understanding the exact power of shared randomness in cryptography. 
To the best of our knowledge, our results are the first to comprehensively characterize the power of large classes of randomized functionalities.
Last updated:  2012-11-23
Is privacy compatible with truthfulness?
David Xiao
In the area of privacy-preserving data mining, a differentially private mechanism intuitively encourages people to share their data because they are at little risk of revealing their own information. However, we argue that this interpretation is incomplete because external incentives are necessary for people to participate in databases, and so data release mechanisms should not only be differentially private but also compatible with incentives, otherwise the data collected may be false. We apply the notion of \emph{truthfulness} from game theory to this problem. In certain settings, it turns out that existing differentially private mechanisms do not encourage participants to report their information truthfully. On the positive side, we exhibit a transformation that takes truthful mechanisms and transforms them into differentially private mechanisms that remain truthful. Our transformation applies to games where the type space is small and the goal is to optimize an insensitive quantity such as social welfare. Our transformation incurs only a small additive loss in optimality, and it is computationally efficient. Combined with the VCG mechanism, our transformation implies that there exists a differentially private, truthful, and approximately efficient mechanism for any social welfare game with small type space. We also study a model where an explicit numerical cost is assigned to the information leaked by a mechanism. We show that in this case, even differential privacy may not be a strong enough notion to motivate people to participate truthfully. We show that mechanisms that release a perturbed histogram of the database may reveal too much information. We also show that, in general, any mechanism that outputs a synopsis that resembles the original database (such as the mechanism of Blum et al. (STOC '08)) may reveal too much information.
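To make the "perturbed histogram" release concrete, here is the standard Laplace mechanism applied to a histogram: under add/remove adjacency each individual affects one bin by 1, so noise of scale $1/\epsilon$ per bin gives $\epsilon$-differential privacy. This is a textbook sketch for context; the paper's point is that even such releases can leak too much once explicit costs are assigned.

```python
import math
import random

def dp_histogram(values, bins, epsilon, rng=None):
    """Release a histogram with Laplace(1/epsilon) noise per bin.

    Under add/remove adjacency the histogram has L1 sensitivity 1,
    so this release is epsilon-differentially private.
    """
    rng = rng or random.Random()
    counts = {b: 0 for b in bins}
    for v in values:
        counts[v] += 1
    scale = 1.0 / epsilon

    def laplace():
        # Laplace(0, scale) via inverse-CDF sampling of a uniform draw
        u = rng.random() - 0.5
        sign = 1 if u >= 0 else -1
        return -scale * sign * math.log(1 - 2 * abs(u))

    return {b: counts[b] + laplace() for b in bins}
```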
Last updated:  2011-01-05
A low-memory algorithm for finding short product representations in finite groups
Gaetan Bisson, Andrew V. Sutherland
We describe a space-efficient algorithm for solving a generalization of the subset sum problem in a finite group $G$, using a Pollard-rho approach. Given an element $z$ and a sequence of elements $S$, our algorithm attempts to find a subsequence of $S$ whose product in $G$ is equal to $z$. For a random sequence $S$ of length $d\log_2 n$, where $n=\#G$ and $d\ge 2$ is a constant, we find that its expected running time is $O(\sqrt{n}\log n)$ group operations (we give a rigorous proof for $d>4$), and it only needs to store $O(1)$ group elements. We consider applications to class groups of imaginary quadratic fields, and to finding isogenies between elliptic curves over a finite field.
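To pin down the problem being solved, here is an exponential-time reference solver for the subset-product problem in $(\mathbb{Z}/n\mathbb{Z})^*$ (our own illustration, not the paper's space-efficient rho walk, which achieves the same with $O(1)$ stored group elements):

```python
from itertools import combinations

def naive_subset_product(z, S, n):
    """Find indices of a non-empty subsequence of S whose product
    is z mod n, by brute force over all subsets."""
    for r in range(1, len(S) + 1):
        for idx in combinations(range(len(S)), r):
            prod = 1
            for i in idx:
                prod = prod * S[i] % n
            if prod == z:
                return idx
    return None
```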
Last updated:  2011-01-05
On the correct use of the negation map in the Pollard rho method
Daniel J. Bernstein, Tanja Lange, Peter Schwabe
Bos, Kaihara, Kleinjung, Lenstra, and Montgomery recently showed that ECDLPs on the 112-bit secp112r1 curve can be solved in an expected time of 65 years on a PlayStation 3. This paper shows how to solve the same ECDLPs at almost twice the speed on the same hardware. The improvement comes primarily from a new variant of Pollard's rho method that fully exploits the negation map without branching, and secondarily from improved techniques for modular arithmetic.
Last updated:  2011-01-05
A Zero-One Law for Secure Multi-Party Computation with Ternary Outputs (full version)
Gunnar Kreitz
There are protocols to privately evaluate any function in the honest-but-curious setting assuming that the honest nodes are in majority. For some specific functions, protocols are known which remain secure even without an honest majority. The seminal work by Chor and Kushilevitz [CK91] gave a complete characterization of Boolean functions, showing that each Boolean function either requires an honest majority, or is such that it can be privately evaluated regardless of the number of colluding nodes. The problem of discovering the threshold for secure evaluation of more general functions remains an open problem. Towards a resolution, we provide a complete characterization of the security threshold for functions with three different outputs. Surprisingly, the zero-one law for Boolean functions extends to Z_3, meaning that each function with range Z_3 either requires honest majority or tolerates up to $n$ colluding nodes.
Last updated:  2016-03-20
Practical Frameworks For $h$-Out-Of-$n$ Oblivious Transfer With Security Against Covert and Malicious Adversaries
Zeng Bing, Tang Xueming, Xu Peng, Jing Jiandu
We present two practical frameworks for $h$-out-of-$n$ oblivious transfer ($OT^{n}_{h}$). The first is secure against covert adversaries, who are not always willing to cheat at any price; its security is proven under the ideal/real simulation paradigm (we call such security fully simulatable security). The second is secure against malicious adversaries, who are always willing to cheat; it provides fully simulatable security for the sender and privacy for the receiver (we call such security one-sided simulatable security). Both frameworks can be instantiated from the decisional Diffie-Hellman (DDH) assumption, the decisional $N$-th residuosity assumption, the decisional quadratic residuosity assumption, and so on. The DDH-based instantiation of our first framework requires the fewest communication rounds and the least computational overhead among existing practical protocols for oblivious transfer with fully simulatable security against covert or malicious adversaries. Our second framework is less efficient than existing practical protocols with one-sided simulatable security against malicious adversaries, but it is the first to handle general $OT^{n}_{h}$ at this security level. Moreover, its DDH-based instantiation is more efficient than existing practical protocols for oblivious transfer with fully simulatable security against malicious adversaries.
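For readers new to oblivious transfer, the following is a textbook-style 1-out-of-2 OT sketch over a toy Schnorr group in the semi-honest model, in the spirit of Naor-Pinkas. It is our own simplified illustration of the primitive, not the paper's $OT^{n}_{h}$ frameworks or their security level; the group parameters are deliberately tiny.

```python
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime; the squares mod p form
# the order-q subgroup generated by g. Far too small for real use.
P, Q, G = 227, 113, 4

def rand_exp():
    return secrets.randbelow(Q - 1) + 1

def ot_demo(m0, m1, sigma):
    """1-out-of-2 OT: the receiver learns m_sigma only, and the
    sender cannot tell which message was chosen. Messages must be
    subgroup elements (powers of G)."""
    # Sender: a random element C whose discrete log the receiver does
    # not know ties the two public keys together.
    C = pow(G, rand_exp(), P)
    # Receiver: knows the secret key only for pk[sigma]; pk[1-sigma]
    # is forced to C / pk[sigma], so its secret key is unknown.
    k = rand_exp()
    pk = [0, 0]
    pk[sigma] = pow(G, k, P)
    pk[1 - sigma] = C * pow(pk[sigma], P - 2, P) % P
    # Sender: verifies the key relation, then ElGamal-encrypts each
    # message under the corresponding public key.
    assert pk[0] * pk[1] % P == C
    cts = []
    for pk_b, m_b in zip(pk, (m0, m1)):
        r = rand_exp()
        cts.append((pow(G, r, P), m_b * pow(pk_b, r, P) % P))
    # Receiver: can decrypt only the chosen ciphertext.
    c1, c2 = cts[sigma]
    return c2 * pow(pow(c1, k, P), P - 2, P) % P
```

Generalizing from 1-out-of-2 to $h$-out-of-$n$ while keeping simulatability is exactly where the frameworks above come in.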