**On the Hardness of Learning with Rounding over Small Modulus**

*Andrej Bogdanov and Siyao Guo and Daniel Masny and Silas Richelson and Alon Rosen*

**Abstract: **We show the following reductions from the learning with errors problem (LWE) to the learning with rounding problem (LWR): for efficient algorithms, (1) learning the secret and (2) distinguishing samples from random strings are at least as hard for LWR as they are for LWE, provided the number of samples is no larger than O(q/Bp), where q is the LWR modulus, p is the rounding modulus, and the noise is sampled from any distribution supported over the set {-B,...,B}.
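For concreteness, an LWR sample replaces the random additive noise of an LWE sample with deterministic rounding from Z_q down to Z_p. The following is a minimal Python sketch of both sample types; the toy parameters and the round-half-up convention are illustrative choices, not taken from the paper:

```python
import random

q, p, n, B = 3329, 8, 16, 2  # toy parameters, chosen only for illustration

def round_qp(x):
    """Rounding map Z_q -> Z_p: scale by p/q and round to nearest (half up)."""
    return ((x * p + q // 2) // q) % p

def lwe_sample(s):
    """One LWE sample: (a, <a,s> + e mod q) with noise e in {-B,...,B}."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-B, B)
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

def lwr_sample(s):
    """One LWR sample: (a, round(<a,s>) in Z_p); deterministic given a and s."""
    a = [random.randrange(q) for _ in range(n)]
    b = round_qp(sum(ai * si for ai, si in zip(a, s)) % q)
    return a, b
```

The bound O(q/Bp) in results (1) and (2) limits how many such samples the reduction can handle, since each sample risks a rounding boundary being crossed by noise of magnitude up to B.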

Our second result generalizes a theorem of Alwen, Krenn, Pietrzak and Wichs (CRYPTO 2013) and provides an alternate proof of it. Unlike Alwen et al., we do not impose any number theoretic restrictions on the modulus q. The first result also extends to variants of LWR and LWE over polynomial rings.

As additional results we show that (3) distinguishing any number of LWR samples from random strings is equivalent in hardness to LWE whose noise distribution is uniform over the integers in the range [-q/2p,...,q/2p), provided q is a multiple of p, and (4) the "noise flooding" technique for converting faulty LWE noise to a discrete Gaussian distribution can be applied whenever q = \Omega(B\sqrt{m}).
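The structural fact underlying result (3) can be checked directly: when q is a multiple of p, the rounding map partitions Z_q into p intervals of exactly q/p integers each, so uniform noise of width q/p spans exactly one rounding interval. A small sanity check (toy parameters and round-half-up convention are my choices, not the paper's):

```python
from collections import Counter

q, p = 16, 4  # q a multiple of p, as required by result (3)

def round_qp(x):
    # round-half-up of (p/q)*x, reduced mod p
    return ((x * p + q // 2) // q) % p

# Count how many elements of Z_q round to each element of Z_p.
counts = Counter(round_qp(x) for x in range(q))

# Every value in Z_p has exactly q/p = 4 preimages: a balanced partition.
assert all(c == q // p for c in counts.values())
```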

All our reductions preserve sample complexity and have time complexity at most polynomial in q, the dimension, and the number of samples.

**Category / Keywords: **LWE, LWR

**Original Publication (with minor differences): **TCC 2016a

**Date: **received 31 Jul 2015, last revised 5 Feb 2016

**Contact author: **silas richelson at gmail com

**Available format(s): **PDF | BibTeX Citation

**Version: **20160205:183109

**Short URL: **ia.cr/2015/769
