All papers in 2012 (733 results)

Shafi Goldwasser, Yael Kalai, Raluca Ada Popa, Vinod Vaikuntanathan, Nickolai Zeldovich
Garbled circuits, introduced by Yao in the mid 80s, allow computing a function f on an input x without leaking anything about f or x besides f(x). Garbled circuits found numerous applications, but every known construction suffers from one limitation: it offers no security if used on multiple inputs x. In this paper, we construct for the first time reusable garbled circuits. The key building block is a new succinct single-key functional encryption scheme. Functional encryption is an ambitious primitive: given an encryption Enc(x) of a value x, and a secret key sk_f for a function f, anyone can compute f(x) without learning any other information about x. We construct, for the first time, a succinct functional encryption scheme for any polynomial-time function f where succinctness means that the ciphertext size does not grow with the size of the circuit for f, but only with its depth. The security of our construction is based on the intractability of the Learning with Errors (LWE) problem and holds as long as an adversary has access to a single key sk_f (or even an a priori bounded number of keys for different functions). Building on our succinct single-key functional encryption scheme, we show several new applications in addition to reusable garbled circuits, such as a paradigm for general function obfuscation which we call token-based obfuscation, homomorphic encryption for a class of Turing machines where the evaluation runs in input-specific time rather than worst-case time, and a scheme for delegating computation which is publicly verifiable and maintains the privacy of the computation.
Last updated:  2013-05-13
Eduarda S. V. Freire, Dennis Hofheinz, Eike Kiltz, Kenneth G. Paterson
Non-interactive key exchange (NIKE) is a fundamental but much-overlooked cryptographic primitive. It appears as a major contribution in the ground-breaking paper of Diffie and Hellman, but NIKE has remained largely unstudied since then. In this paper, we provide different security models for this primitive and explore the relationships between them. We then give constructions for secure NIKE in the Random Oracle Model based on the hardness of factoring and in the standard model based on the hardness of a variant of the decisional Bilinear Diffie-Hellman Problem for asymmetric pairings. We also study the relationship between NIKE and public key encryption (PKE), showing that a secure NIKE scheme can be generically converted into an IND-CCA secure PKE scheme. This conversion also illustrates the fundamental nature of NIKE in public key cryptography.
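For context, the canonical example of a NIKE is hashed static Diffie-Hellman: each party publishes a single public key, and any two parties derive a shared key from their own secret and the other's public key without exchanging any messages. The Python sketch below is an illustrative toy (the group parameters, identifier handling, and function names are assumptions), not one of the paper's factoring- or pairing-based constructions.

```python
import hashlib
import secrets

# Toy group parameters (assumed for illustration only; not secure choices).
P = 2**127 - 1   # a Mersenne prime used as a toy modulus
G = 5

def keygen():
    """Each party publishes pk = G^sk mod P once; no further interaction is needed."""
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def shared_key(my_sk: int, their_pk: int, id_a: str, id_b: str) -> str:
    """Both parties hash the same Diffie-Hellman value together with the sorted identities."""
    dh = pow(their_pk, my_sk, P)
    ids = "|".join(sorted([id_a, id_b]))
    return hashlib.sha256(ids.encode() + dh.to_bytes(16, "big")).hexdigest()

# Usage: Alice and Bob derive the same key from public keys alone.
alice_sk, alice_pk = keygen()
bob_sk, bob_pk = keygen()
assert shared_key(alice_sk, bob_pk, "alice", "bob") == shared_key(bob_sk, alice_pk, "bob", "alice")
```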
Last updated:  2013-01-01
Gaëtan Leurent
In this work we consider generic algorithms to find near-collisions for a hash function. If we consider only hash computations, it is easy to compute a lower bound for the complexity of near-collision algorithms, and to build a matching algorithm. However, this algorithm needs a lot of memory, and makes more than $2^{n/2}$ memory accesses. Recently, several algorithms have been proposed without this memory requirement; they require more hash evaluations, but the attack is actually more practical. They can be divided into two main categories: some are based on truncation, and some are based on covering codes. In this paper, we give new insight into the generic complexity of a near-collision attack. First, we consider time-memory trade-offs for truncation-based algorithms. For a practical implementation, it seems reasonable to assume that some memory is available, and we show that taking advantage of this memory can significantly reduce the complexity. Second, we show a new method combining truncation and covering codes. The new algorithm is always at least as good as the previous works, and often gives a significant improvement. We illustrate our results by giving a 10-near-collision for MD5: our algorithm has a complexity of $2^{45.4}$ using 1 TB of memory while the best previous
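To illustrate the truncation-based baseline that the abstract builds on, here is a minimal Python sketch under toy parameters (the helper names are assumptions; neither the paper's time-memory trade-offs nor its covering-code method is implemented): colliding on the top n-d bits of an n-bit digest forces the two full digests to differ in at most d bits.

```python
import hashlib
from itertools import count

def trunc_digest(msg: bytes, n: int, d: int) -> int:
    """Return the top n-d bits of an n-bit truncation of the MD5 digest."""
    full = int.from_bytes(hashlib.md5(msg).digest(), "big") >> (128 - n)
    return full >> d  # drop the low d bits

def near_collision(n: int = 32, d: int = 4):
    """Find two messages whose n-bit digests differ in at most d bit positions."""
    seen = {}
    for i in count():
        msg = i.to_bytes(8, "big")
        t = trunc_digest(msg, n, d)
        if t in seen:
            return seen[t], msg   # same top n-d bits, so Hamming distance <= d
        seen[t] = msg

m1, m2 = near_collision()
h1 = int.from_bytes(hashlib.md5(m1).digest(), "big") >> (128 - 32)
h2 = int.from_bytes(hashlib.md5(m2).digest(), "big") >> (128 - 32)
print("Hamming distance:", bin(h1 ^ h2).count("1"))  # at most 4 by construction
```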
Last updated:  2013-01-01
Dalin Chu, Johann Großschädl, Zhe Liu, Volker Müller, Yang Zhang
Wireless Sensor Networks (WSNs) pose a number of unique security challenges that demand innovation in several areas including the design of cryptographic primitives and protocols. Despite recent progress, the efficient implementation of Elliptic Curve Cryptography (ECC) for WSNs is still a very active research topic and techniques to further reduce the time and energy cost of ECC are eagerly sought. This paper presents an optimized ECC implementation that we developed from scratch to comply with the severe resource constraints of 8-bit sensor nodes such as the MICAz and IRIS motes. Our ECC software uses Optimal Prime Fields (OPFs) as underlying algebraic structure and supports two different families of elliptic curves, namely Weierstraß-form and twisted Edwards-form curves. Due to the combination of efficient field arithmetic and fast group operations, we achieve an execution time of $5.8 \cdot 10^6$ clock cycles for a full 158-bit scalar multiplication on an 8-bit ATmega128 microcontroller, which is 2.78 times faster than the widely-used TinyECC library. Our implementation also shows that the energy cost of scalar multiplication on a MICAz (or IRIS) mote amounts to just 19 mJ when using a twisted Edwards curve over a 160-bit OPF. This result compares fairly well with the energy figures of two recently-presented hardware designs of ECC based on twisted Edwards curves.
Last updated:  2013-03-26
Nir Bitansky, Omer Paneth
The traditional notion of {\em program obfuscation} requires that an obfuscation $\tilde{f}$ of a program $f$ computes the exact same function as $f$, but beyond that, the code of $\tilde{f}$ should not leak any information about $f$. This strong notion of {\em virtual black-box} security was shown by Barak et al. (CRYPTO 2001) to be impossible to achieve, for certain {\em unobfuscatable function families}. The same work raised the question of {\em approximate obfuscation}, where the obfuscated $\tilde{f}$ is only required to approximate $f$; that is, $\tilde{f}$ only agrees with $f$ on some input distribution. We show that, assuming {\em trapdoor permutations}, there exist families of {\em robust unobfuscatable functions} for which even approximate obfuscation is impossible. That is, obfuscation is impossible even if the obfuscated $\tilde{f}$ only agrees with $f$ with probability slightly more than $\frac{1}{2}$, on a uniformly sampled input (below $\frac{1}{2}$-agreement, the function obfuscated by $\tilde{f}$ is not uniquely defined). Additionally, we show that, assuming only one-way functions, we can rule out approximate obfuscation where $\tilde{f}$ is not allowed to err, but may refuse to compute $f$ with probability close to $1$. We then demonstrate the power of robust unobfuscatable functions by exhibiting new implications for resettable protocols that so far have been out of our reach. Concretely, we obtain a new non-black-box simulation technique that reduces the assumptions required for resettably-sound zero-knowledge protocols to {\em one-way functions}, as well as reducing the round complexity. We also present a new simplified construction of simultaneously resettable zero-knowledge protocols that does not rely on collision-resistant hashing. Finally, we construct a three-message simultaneously resettable witness-indistinguishable {\em argument of knowledge} (with a non-black-box knowledge extractor). Our constructions are based on a special kind of ``resettable slots'' that are useful for a non-black-box simulator, but not for a resetting prover.
Last updated:  2015-04-23
E. Brier, Q. Fortier, R. Korkikian, K. W. Magld, D. Naccache, G. Ozari de Almeida, A. Pommellet, A. H. Ragab, J. Vuillemin
This paper considers the transfer of digital data over {\sl leaky and noisy} communication channels. We develop defensive strategies exploiting the fact that noise prevents the attacker from accurately measuring leakage. The defense strategy described in this paper pairs each useful data element $k$ with a camouflage value $v$ and simultaneously transmits both $k$ and $v$ over the channel. This releases an emission $e(k,v)$. We wish to select the camouflage values $v(k)$ as a function of $k$ in a way that makes the quantities $e(k,v(k))$ as {\sl indistinguishable} as possible from each other. We model the problem and show that optimal camouflage values can be computed from side-channels under very weak physical assumptions. The proposed technique is hence applicable to a wide range of readily available technologies. We propose algorithms for computing optimal camouflage values when the number of samples per trace is moderate (typically $\leq 6$) and justify our models by a statistical analysis. We also provide experimental results obtained using FPGAs.
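As a toy illustration of the pairing idea, under an assumed Hamming-weight leakage model (a deliberate simplification; the paper instead computes optimal camouflage values from measured side-channel data under much weaker physical assumptions), one can choose v(k) so that every transmitted pair carries the same number of set bits.

```python
def hamming_weight(x: int) -> int:
    return bin(x).count("1")

def camouflage(k: int, width: int = 8) -> int:
    """Bitwise complement as a camouflage value: together, k and v(k) always
    carry exactly `width` set bits, so the combined Hamming weight is constant."""
    return (~k) & ((1 << width) - 1)

# Every pair (k, v(k)) leaks the same total Hamming weight under this model.
for k in range(256):
    assert hamming_weight(k) + hamming_weight(camouflage(k)) == 8
```

The complement is simply the cheapest balancing choice under this assumed model; the paper's point is that the right camouflage values depend on the actual, measured emission function e(k, v).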
Last updated:  2013-01-01
Dana Dachman-Soled
The primitive of deniable encryption was first introduced by Canetti et al. (CRYPTO, 1997). Deniable encryption is a regular public key encryption scheme with the added feature that after running the protocol honestly and transmitting a message $m$, both Sender and Receiver may produce random coins showing that the transmitted ciphertext was an encryption of any message $m'$ in the message space. Deniable encryption is a key tool for constructing incoercible protocols, since it allows a party to send one message and later provide apparent evidence to a coercer that a different message was sent. In addition, deniable encryption may be used to obtain \emph{adaptively}-secure multiparty computation (MPC) protocols and is secure under \emph{selective-opening} attacks. Different flavors such as sender-deniable and receiver-deniable encryption, where only the Sender or Receiver can produce fake random coins, have been considered. Recently, several open questions regarding the feasibility of deniable encryption have been resolved (cf. (O'Neill et al., CRYPTO, 2011), (Bendlin et al., ASIACRYPT, 2011)). A fundamental remaining open question is whether it is possible to construct sender-deniable encryption schemes with super-polynomial security, where an adversary has negligible advantage in distinguishing real and fake openings. The primitive of simulatable public key encryption (PKE), introduced by Damgård and Nielsen (CRYPTO, 2000), is a public key encryption scheme with additional properties that allow oblivious sampling of public keys and ciphertexts. It is one of the low-level primitives used to construct adaptively-secure MPC protocols and was used by O'Neill et al. in their construction of bi-deniable encryption in the multi-distributional model (CRYPTO, 2011). Moreover, the original construction of sender-deniable encryption with polynomial security given by Canetti et al. can be instantiated with simulatable PKE. Thus, a natural question to ask is whether it is possible to construct sender-deniable encryption with \emph{super-polynomial security} from simulatable PKE. In this work, we investigate the possibility of constructing sender-deniable public key encryption from the primitive of simulatable PKE in a black-box manner. We show that, in fact, there is no black-box construction of sender-deniable encryption with super-polynomial security from simulatable PKE. This indicates that the original construction of sender-deniable public key encryption given by Canetti et al. is in some sense optimal, since improving on it will require the use of non-black-box techniques, stronger underlying assumptions, or interaction.
Last updated:  2012-12-28
Marc Obrador, Paz Morillo, Sandra Guasch
This paper presents protocol proposals for two types of schemes, verifiable delegation of computation and remote electronic voting, based on properties of polynomials. Our protocols for verifiable delegation of computation are aimed at the efficient evaluation of polynomials, covering schemes where the polynomial and/or the input are kept secret from the server. Our proposal for remote electronic voting allows verifying that votes are well-formed upon reception at the voting server, with little computational overhead for the voter.
Last updated:  2012-12-30
Qingfeng Cheng
Recently, He et al. (Computers and Mathematics with Applications, 2012, 64(6): 1914-1926) proposed a new efficient certificateless two-party authenticated key agreement protocol. They claimed their protocol was provably secure in the extended Canetti-Krawczyk (eCK) model. In this paper, we show that their protocol is insecure. A Type I adversary who obtains one party's ephemeral private key can impersonate that party to cheat the other party and successfully compute the shared session key. To overcome this weakness, we also propose a simple countermeasure.
Last updated:  2012-12-28
Kenneth G. Paterson, Antigoni Polychroniadou, Dale L. Sibborn
Inspired by cold boot attacks, Heninger and Shacham (Crypto 2009) initiated the study of the problem of how to recover an RSA private key from a noisy version of that key. They gave an algorithm for the case where some bits of the private key are known with certainty. Their ideas were extended by Henecka, May and Meurer (Crypto 2010) to produce an algorithm that works when all the key bits are subject to error. In this paper, we bring a coding-theoretic viewpoint to bear on the problem of noisy RSA key recovery. This viewpoint allows us to cast the previous work as part of a more general framework. In turn, this enables us to explain why the previous algorithms do not solve the motivating cold boot problem, and to design a new algorithm that does (and more). In addition, we are able to use concepts and tools from coding theory -- channel capacity, list decoding algorithms, and random coding techniques -- to derive bounds on the performance of the previous algorithms and our new algorithm.
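For intuition, here is a minimal Python sketch of the Heninger-Shacham-style branch-and-prune recovery that this line of work starts from, in the simple erasure setting where some bits of p and q are known with certainty (the function names and toy parameters are assumptions; the paper's coding-theoretic treatment of error-prone, cold-boot-style keys is not reproduced).

```python
def recover_factors(N: int, p_bits: str, q_bits: str):
    """p_bits/q_bits are LSB-first strings over {'0','1','?'} describing partially
    known factors of N. Returns (p, q) with p*q == N, or None."""
    nbits = len(p_bits)
    candidates = [(1, 1)]  # p and q are odd, so bit 0 of each is 1
    for i in range(1, nbits):
        new = []
        for p, q in candidates:
            c = ((N - p * q) >> i) & 1  # parity of the next bits forced by N = p*q (mod 2^{i+1})
            for a in (0, 1):
                b = (c - a) & 1
                # Prune branches that contradict a known bit of p or q.
                if p_bits[i] != '?' and int(p_bits[i]) != a:
                    continue
                if q_bits[i] != '?' and int(q_bits[i]) != b:
                    continue
                new.append((p | (a << i), q | (b << i)))
        candidates = new
    for p, q in candidates:
        if p * q == N:
            return p, q
    return None

# Toy usage: recover 8-bit primes from N and half of their bits.
p_true, q_true = 251, 239
N = p_true * q_true
p_partial = ''.join(format(p_true, '08b')[::-1][i] if i % 2 == 0 else '?' for i in range(8))
q_partial = ''.join(format(q_true, '08b')[::-1][i] if i % 2 == 0 else '?' for i in range(8))
print(recover_factors(N, p_partial, q_partial))  # (251, 239), up to ordering
```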
Last updated:  2013-01-06
Uncategorized
Yaacov Belenky, Zeev Geyzel, Michael Kara-Ivanov, Avraham Entelis
In order to prevent the SPA (Simple Power Analysis) attack against modular exponentiation algorithms, a multiply-always implementation is generally used. Witteman et al. introduced in \cite{WI} a new cross-correlation power analysis attack against the multiply-always implementation. We suggest two new algorithms that are resistant to this attack and also to other known attacks. The first algorithm is an alternative approach to the exponentiation algorithms used in cryptography, which usually receive as input some representation (e.g., binary) of the exponent. In our approach, both the exponent and the result are functions (not necessarily easily invertible) of the exponentiation algorithm's input. We show that this approach can achieve good performance and that it is also resistant to several known attacks, in particular the cross-correlation power analysis attack. It is particularly relevant for cryptographic schemes in which the private exponent can be chosen arbitrarily. Another exponentiation algorithm that we present here may be preferable for use with RSA in certain settings. It is resistant to the cross-correlation power analysis attack, the C safe-error attack, and other attacks, although it involves squaring operations.
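For reference, a sketch of the standard square-and-multiply-always exponentiation that the abstract takes as its starting point (the paper's two new algorithms are not reproduced here, and the function name is an assumption).

```python
def multiply_always_exp(base: int, exponent: int, modulus: int, bits: int) -> int:
    """Left-to-right modular exponentiation that performs a multiplication at every
    step, so the sequence of operations is independent of the exponent bits."""
    result = 1
    for i in reversed(range(bits)):
        result = (result * result) % modulus     # square every iteration
        product = (result * base) % modulus      # multiply every iteration
        if (exponent >> i) & 1:
            result = product                     # keep the product only when the bit is 1
    return result

# Sanity check against Python's built-in modular exponentiation.
assert multiply_always_exp(7, 0b101101, 1009, 6) == pow(7, 0b101101, 1009)
```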
Last updated:  2012-12-27
Itay Berman, Iftach Haitner, Ilan Komargodski, Moni Naor
The focus of this work is hardness-preserving transformations of somewhat limited pseudorandom functions families (PRFs) into ones with more versatile characteristics. Consider the problem of domain extension of pseudorandom functions: given a PRF that takes as input elements of some domain $\mathcal{U}$, we would like to come up with a PRF over a larger domain. Can we do it with little work and without significantly impacting the security of the system? One approach is to first hash the larger domain into the smaller one and then apply the original PRF. Such a reduction, however, is vulnerable to a "birthday attack": after $\sqrt{|\mathcal{U}|}$ queries to the resulting PRF, a collision (i.e., two distinct inputs having the same hash value) is very likely to occur. As a consequence, the resulting PRF is insecure against an attacker making this number of queries. In this work we show how to go beyond the aforementioned birthday attack barrier by replacing the above simple hashing approach with a variant of cuckoo hashing, a hashing paradigm that resolves collisions in a table by using two hash functions and two tables, cleverly assigning each element to one of the two tables. We use this approach to obtain: (i) a domain extension method that requires just two calls to the original PRF, can withstand as many queries as the original domain size, and has a distinguishing probability that is exponentially small in the amount of non-cryptographic work; and (ii) a security-preserving reduction from non-adaptive to adaptive PRFs.
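The sketch below conveys only the general flavour of hashing the large domain twice and making two calls to the small-domain PRF; it is an assumption-laden toy (HMAC as a stand-in PRF, salted SHA-256 as the public hash) and is not the paper's cuckoo-hashing construction, which relies on carefully chosen highly-independent hash families and a precise analysis.

```python
import hashlib
import hmac

def small_prf(key: bytes, x: bytes) -> bytes:
    """Stand-in for the original PRF over the small domain (HMAC-SHA256 here)."""
    return hmac.new(key, x, hashlib.sha256).digest()

def hash_to_small_domain(salt: bytes, x: bytes, domain_bits: int = 32) -> bytes:
    """Public hash mapping the large domain down to the small PRF domain."""
    h = int.from_bytes(hashlib.sha256(salt + x).digest(), "big") >> (256 - domain_bits)
    return h.to_bytes(domain_bits // 8, "big")

def extended_prf(key: bytes, x: bytes) -> bytes:
    """Two calls to the small-domain PRF on two independent hashes of x,
    combined by XOR, so a collision under one hash alone is not enough."""
    y0 = small_prf(key, b"0" + hash_to_small_domain(b"h0", x))
    y1 = small_prf(key, b"1" + hash_to_small_domain(b"h1", x))
    return bytes(a ^ b for a, b in zip(y0, y1))

print(extended_prf(b"secret key", b"an arbitrarily long input").hex())
```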
Last updated:  2015-10-20
Prabhanjan Ananth, Raghav Bhaskar, Vipul Goyal, Vanishree Rao
The Fiat-Shamir paradigm was proposed as a way to remove interaction from 3-round proof of knowledge protocols and derive secure signature schemes. This generic transformation leads to very efficient schemes and has thus grown quite popular. However, this transformation is proven secure only in the random oracle model. In FOCS 2003, Goldwasser and Kalai showed that this transformation is provably insecure in the standard model by presenting a counterexample of a 3-round protocol, the Fiat-Shamir transformation of which is (although provably secure in the random oracle model) insecure in the standard model, thus showing that the random oracle is uninstantiable. In particular, for every hash function that is used to replace the random oracle, the resulting signature scheme is existentially forgeable. This result was shown by relying on the non-black-box techniques of Barak (FOCS 2001). An alternative to the Fiat-Shamir paradigm was proposed by Fischlin in Crypto 2005. Fischlin's transformation can be applied to any so-called 3-round ``Fiat-Shamir proof of knowledge'' and can be used to derive non-interactive zero-knowledge proofs of knowledge as well as signature schemes. An attractive property of this transformation is that it provides online extractability (i.e., the extractor works without having to rewind the prover). Fischlin remarks that, in comparison to the Fiat-Shamir transformation, his construction tries to ``decouple the hash function from the protocol flow'' and hence the counterexample in the work of Goldwasser and Kalai does not seem to carry over to this setting. In this work, we show a counterexample to Fischlin's transformation. In particular, we construct a 3-round Fiat-Shamir proof of knowledge (on which Fischlin's transformation is applicable), and then present an adversary against both the soundness of the resulting non-interactive zero-knowledge proof and the unforgeability of the resulting signature scheme. Our attacks succeed, except with negligible probability, for any hash function used to instantiate the random oracle, provided that there is an a priori (polynomial) bound on the running time of the hash function. By choosing the right bound, secure instantiation of the Fischlin transformation with most practical cryptographic hash functions can be ruled out. The techniques used in our work are quite unrelated to the ones used in the work of Goldwasser and Kalai. Our primary technique is to bind the protocol flow with the hash function if the code of the hash function is available. We believe that our ideas are of independent interest and may be applicable in other related settings.
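For context, the textbook instantiation of the Fiat-Shamir paradigm turns the 3-round Schnorr identification protocol into a signature scheme by deriving the challenge as a hash of the commitment and the message. The sketch below uses a tiny toy group (an assumption for readability, not a secure parameter choice) and illustrates the paradigm itself, not Fischlin's transformation or the paper's counterexample.

```python
import hashlib
import secrets

# Tiny toy Schnorr group: P = 2*Q + 1 with P, Q prime, and G generating the order-Q subgroup.
Q, P, G = 11, 23, 4

def keygen():
    x = secrets.randbelow(Q)
    return x, pow(G, x, P)

def sign(x: int, msg: bytes):
    k = secrets.randbelow(Q)
    r = pow(G, k, P)        # round-1 commitment of the identification protocol
    c = int.from_bytes(hashlib.sha256(bytes([r]) + msg).digest(), "big") % Q
    s = (k + c * x) % Q     # round-3 response; the hash has replaced the verifier's challenge
    return r, s

def verify(y: int, msg: bytes, sig) -> bool:
    r, s = sig
    c = int.from_bytes(hashlib.sha256(bytes([r]) + msg).digest(), "big") % Q
    return pow(G, s, P) == (r * pow(y, c, P)) % P

x, y = keygen()
assert verify(y, b"hello", sign(x, b"hello"))
```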
Last updated:  2012-12-27
Antoine Joux
Many index calculus algorithms generate multiplicative relations between smoothness basis elements by using a process called {\it sieving}. This process makes it possible to filter potential candidate relations very quickly, without spending too much time on bad candidates. However, from an asymptotic point of view, there is not much difference between sieving and straightforward testing of candidates. The reason is that, even when sieving, a small amount of time is spent on each bad candidate. Thus, asymptotically, the total number of candidates contributes to the complexity. In this paper, we introduce a new technique, {\it pinpointing}, which allows us to construct multiplicative relations much faster, thus reducing the asymptotic complexity of relation construction. Unfortunately, we only know how to implement this technique for finite fields which contain a medium-sized subfield. When applicable, this method improves the asymptotic complexity of the index calculus algorithm in the cases where the sieving phase dominates. In practice, it gives a very interesting boost to the performance of state-of-the-art algorithms. We illustrate the feasibility of the method with discrete logarithm records in medium prime finite fields of sizes 1175~bits and 1425~bits.
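For context, the "straightforward testing" baseline that both sieving and pinpointing improve upon can be sketched as follows; the toy prime, factor base, and function names are assumptions, and neither sieving nor pinpointing is implemented.

```python
def factor_over_base(x: int, factor_base):
    """Return the exponent vector of x over the factor base, or None if x is not smooth."""
    exps = []
    for p in factor_base:
        e = 0
        while x % p == 0:
            x //= p
            e += 1
        exps.append(e)
    return exps if x == 1 else None

def collect_relations(g: int, p: int, factor_base, how_many: int):
    """Find exponents k such that g^k mod p factors completely over the base;
    each hit yields one multiplicative relation among factor-base elements."""
    relations = []
    k = 1
    while len(relations) < how_many:
        exps = factor_over_base(pow(g, k, p), factor_base)
        if exps is not None:
            relations.append((k, exps))
        k += 1
    return relations

# Toy usage: relations for discrete logs modulo a small prime with base {2, 3, 5, 7}.
print(collect_relations(g=5, p=1019, factor_base=[2, 3, 5, 7], how_many=3))
```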
Last updated:  2013-01-07
Yun Song, Zhihui Li
How to construct an ideal multi-secret sharing scheme for general access structures is a difficult problem. In this paper, we solve an open problem recently proposed by Spiez et al. [Finite Fields and Their Applications, 2011(17), 329-342], namely, to design an algorithm that finds privileged coalitions of any length, if such coalitions exist. Furthermore, in terms of privileged coalitions, we show that most of the existing multi-secret sharing schemes based on Shamir threshold secret sharing are not perfect, by analyzing Yang et al.'s scheme and Pang et al.'s scheme. Finally, based on the algorithm mentioned above, we devise an ideal multi-secret sharing scheme for families of access structures, which admits more general authorized sets than a threshold scheme.
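For background, a minimal sketch of plain Shamir (t, n) threshold secret sharing, the building block referenced above (the toy field size and function names are assumptions; this is not the paper's ideal multi-secret scheme for general access structures).

```python
import secrets

PRIME = 2**127 - 1  # toy prime field

def share(secret: int, t: int, n: int):
    """Split `secret` into n shares so that any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = (num * -xm) % PRIME
                den = (den * (xj - xm)) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```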
Last updated:  2012-12-27
Nir Bitansky, Alessandro Chiesa, Yuval Ishai, Rafail Ostrovsky, Omer Paneth
\emph{Succinct non-interactive arguments} (SNARGs) enable verifying NP statements with lower complexity than required for classical NP verification. Traditionally, the focus has been on minimizing the length of such arguments; more recently, researchers have also focused on minimizing verification time, drawing motivation from the problem of delegating computation. A common relaxation is a \emph{preprocessing} SNARG, which allows the verifier to conduct an expensive offline phase that is independent of the statement to be proven later. Recent constructions of preprocessing SNARGs have achieved attractive features: they are publicly-verifiable, proofs consist of only $O(1)$ encrypted (or encoded) field elements, and verification is via arithmetic circuits of size linear in the NP statement. Additionally, these constructions seem to have ``escaped the hegemony'' of probabilistically-checkable proofs (PCPs) as a basic building block of succinct arguments. We present a general methodology for the construction of preprocessing SNARGs, as well as resulting concrete efficiency improvements. Our contribution is three-fold: (1) We introduce and study a natural extension of the interactive proof model that considers \emph{algebraically-bounded} provers; this new setting is analogous to the common study of algebraically-bounded ``adversaries'' in other fields, such as pseudorandomness and randomness extraction. More concretely, in this work we focus on linear (or affine) provers, and provide several constructions of (succinct two-message) \emph{linear-interactive proofs} (LIPs) for NP. Our constructions are based on general transformations applied to both \emph{linear} PCPs (LPCPs) and traditional ``unstructured'' PCPs. (2) We give conceptually simple cryptographic transformations from LIPs to preprocessing SNARGs, whose security can be based on different forms of \emph{linear targeted malleability} (implied by previous knowledge assumptions). Our transformations convert arbitrary (two-message) LIPs into designated-verifier SNARGs, and LIPs with degree-bounded verifiers into publicly-verifiable SNARGs. We also extend our methodology to obtain \emph{zero-knowledge} LIPs and SNARGs. Our techniques yield SNARGs \emph{of knowledge} and thus can benefit from known recursive composition and bootstrapping techniques. (3) Following this methodology, we exhibit several constructions achieving new efficiency features, such as ``single-ciphertext preprocessing SNARGs'' and improved succinctness-soundness tradeoffs. We also offer a new perspective on existing constructions of preprocessing SNARGs, revealing a direct connection of these to LPCPs and LIPs.
Last updated:  2013-09-15
Jan Camenisch, Anna Lysyanskaya, Gregory Neven
Password-authenticated secret sharing (PASS) schemes, first introduced by Bagherzandi et al. at CCS 2011, allow users to distribute data among several servers so that the data can be recovered using a single human-memorizable password, but no single server (or collusion of servers up to a certain size) can mount an off-line dictionary attack on the password or learn anything about the data. We propose a new, universally composable (UC) security definition for the two-server case (2PASS) in the public-key setting that addresses a number of relevant limitations of the previous, non-UC definition. For example, our definition makes no prior assumptions on the distribution of passwords, preserves security when honest users mistype their passwords, and guarantees secure composition with other protocols in spite of the unavoidable non-negligible success rate of online dictionary attacks. We further present a concrete 2PASS protocol and prove that it meets our definition. Given the strong security guarantees, our protocol is surprisingly efficient: in its most efficient instantiation under the DDH assumption in the random-oracle model, it requires fewer than twenty elliptic-curve exponentiations on the user's device. We achieve our results by careful protocol design and by exclusively focusing on the two-server public-key setting.
Last updated:  2012-12-27
Xavier Boyen
We introduce a broad lattice manipulation technique for expressive cryptography, and use it to realize functional encryption for access structures from post-quantum hardness assumptions. Specifically, we build an efficient key-policy attribute-based encryption scheme, and prove its security in the selective sense from learning-with-errors intractability in the standard model.
Last updated:  2012-12-31
Jingyuan Zhao, Meiqin Wang, Jiazhe Chen, Yuliang Zheng
SAFER+ was a candidate block cipher for AES with a 128-bit block size and variable key sizes of 128, 192 or 256 bits. Bluetooth uses customized versions of SAFER+ for security. The numbers of rounds for SAFER+ with key sizes of 128, 192 and 256 bits are 8, 12 and 16, respectively. SAFER++, a variant of SAFER+, was among the cryptographic primitives selected for the second phase of the NESSIE project. The block size is 128 bits and the key size can take either 128 or 256 bits. The number of rounds is 7 for SAFER++/128 and 10 for SAFER++/256. Both ciphers use the PHT as their linear transformation. In this paper, we take advantage of properties of the PHT and S-boxes to identify 3.75-round impossible differentials for SAFER++ and 2.75-round impossible differentials for SAFER+, which result in impossible differential attacks on 4-round SAFER+/128(256), 5-round SAFER++/128 and 5.5-round SAFER++/256. Our attacks significantly improve previously known impossible differential attacks on 3.75-round SAFER+/128(256) and SAFER++/128(256). Our attacks on SAFER+/128(256) and SAFER++/256 represent the best currently known attacks in terms of the number of rounds.
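For context, the 2-point Pseudo-Hadamard Transform (PHT) that serves as the linear mixing element in the SAFER family operates on bytes as (a, b) -> (2a + b, a + b) mod 256; the sketch below shows the transform and its inverse only, not the impossible-differential analysis.

```python
def pht(a: int, b: int):
    """Forward 2-point PHT: (a, b) -> (2a + b, a + b) mod 256."""
    return (2 * a + b) % 256, (a + b) % 256

def pht_inverse(x: int, y: int):
    """Inverse PHT: recovers (a, b) from (2a + b, a + b)."""
    return (x - y) % 256, (2 * y - x) % 256

# Round-trip sanity check on a few byte pairs.
for a, b in [(0, 0), (1, 2), (200, 77), (255, 255)]:
    assert pht_inverse(*pht(a, b)) == (a, b)
```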
Last updated:  2013-01-03
Shweta Agrawal, Craig Gentry, Shai Halevi, Amit Sahai
The classic Leftover Hash Lemma (LHL) is one of the most useful tools in cryptography, and is often used to argue that certain distributions arising from modular subset-sums are close to uniform over some finite domain. Though extremely useful and powerful in general, the applicability of the leftover hash lemma to lattice-based cryptography is limited for two reasons. First, typically the distributions we care about in lattice-based cryptography are {\em discrete Gaussians}, not uniform. Second, the elements chosen from these discrete Gaussian distributions lie in an infinite domain: a lattice rather than a finite field. In this work we prove a ``lattice-world'' analog of the LHL over infinite domains, proving that certain ``generalized subset sum'' distributions are statistically close to well-behaved discrete Gaussian distributions, even without any modular reduction. Specifically, given many vectors $\{\vec x_i\}_{i=1}^m$ from some lattice $L \subset \mathbb{R}^n$, we analyze the probability distribution $\sum_{i=1}^m z_i \vec x_i$ where the integer vector $\vec z \in \mathbb{Z}^m$ is chosen from a discrete Gaussian distribution. We show that when the $\vec x_i$'s are ``random enough'' and the Gaussian from which the $\vec z$'s are chosen is ``wide enough'', then the resulting distribution is statistically close to a near-spherical discrete Gaussian over the lattice $L$. Beyond being interesting in its own right, this ``lattice-world'' analog of the LHL has applications for the new construction of multilinear maps \cite{GGH12}, where it is used to sample discrete Gaussians obliviously. Specifically, given encodings of the $\vec x_i$'s, it is used to produce an encoding of a near-spherical Gaussian distribution over the lattice. We believe that our new lemma will have other applications, and sketch some plausible ones in this work.
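As a purely numerical illustration of the lemma's setup (not its proof), the sketch below samples an integer vector z coordinate-wise from a discrete Gaussian via rejection sampling and outputs the combination sum_i z_i x_i of fixed lattice vectors; all parameter choices and function names are arbitrary assumptions.

```python
import math
import random

def sample_discrete_gaussian(sigma: float, tail: int = 10) -> int:
    """Rejection-sample z in Z with probability proportional to exp(-z^2 / (2 sigma^2))."""
    bound = int(math.ceil(tail * sigma))
    while True:
        z = random.randint(-bound, bound)
        if random.random() < math.exp(-z * z / (2 * sigma * sigma)):
            return z

def gaussian_combination(xs, sigma: float):
    """Return sum_i z_i * x_i with each z_i drawn from a discrete Gaussian."""
    out = [0] * len(xs[0])
    for x in xs:
        z = sample_discrete_gaussian(sigma)
        out = [o + z * xi for o, xi in zip(out, x)]
    return out

# Toy usage: m = 4 vectors in Z^3.
xs = [[1, 0, 2], [0, 3, 1], [5, 1, 0], [2, 2, 2]]
print(gaussian_combination(xs, sigma=4.0))
```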
Last updated:  2013-03-22