Our main technique is a novel twist on the classic OT extension of Ishai et al. (Crypto 2003), using an additively key-homomorphic PRF to reduce interaction. We first use this to construct a protocol for a large batch of 1-out-of-$n$ OTs on random inputs, with amortized communication of $o(1)$ bits per OT. Converting these to 1-out-of-2 OTs on chosen strings requires logarithmic communication. The key-homomorphic PRF used in the protocol can be instantiated under the learning with errors assumption with an exponential modulus-to-noise ratio.
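The additive key-homomorphism driving the technique can be illustrated with a toy sketch. The Python snippet below is not the paper's instantiation; the hash-based map $H$, the rounding step, and all parameters are illustrative assumptions. It shows an *almost* key-homomorphic PRF of the form $F_k(x) = \lfloor \langle H(x), k \rangle \rceil_p$, for which $F_{k_1+k_2}(x) = F_{k_1}(x) + F_{k_2}(x) + e$ holds up to a small rounding error $e$.

```python
# Minimal sketch (not the paper's construction) of an *almost* additively
# key-homomorphic PRF in the random-oracle style:
#     F_k(x) = round_P( <H(x), k> mod Q ),
# where H hashes x to a vector over Z_Q.  Key-homomorphism holds up to a
# small additive error e in {0, 1}:  F_{k1+k2}(x) = F_{k1}(x) + F_{k2}(x) + e.
# Parameters are toy values chosen for readability, not for security.

import hashlib
import random

Q = 2**32          # outer modulus
P = 2**8           # rounding modulus (P divides Q)
N = 64             # key dimension

def hash_to_vector(x: bytes) -> list[int]:
    """Hash input x to a length-N vector over Z_Q (models a random oracle)."""
    return [
        int.from_bytes(hashlib.sha256(x + i.to_bytes(4, "big")).digest()[:8], "big") % Q
        for i in range(N)
    ]

def prf(key: list[int], x: bytes) -> int:
    """F_k(x) = round_P(<H(x), k> mod Q), with deterministic floor rounding."""
    a = hash_to_vector(x)
    inner = sum(ai * ki for ai, ki in zip(a, key)) % Q
    return (inner * P) // Q

if __name__ == "__main__":
    k1 = [random.randrange(Q) for _ in range(N)]
    k2 = [random.randrange(Q) for _ in range(N)]
    k_sum = [(a + b) % Q for a, b in zip(k1, k2)]
    x = b"example input"
    lhs = prf(k_sum, x)
    rhs = (prf(k1, x) + prf(k2, x)) % P
    # Almost key-homomorphic: equal up to an additive error in {0, 1} mod P.
    print(lhs, rhs, (lhs - rhs) % P in (0, 1))
```

In an OT-extension context, this approximate homomorphism is what lets correlated PRF keys be combined non-interactively; the small rounding error must then be handled by the protocol, which is where the exponential modulus-to-noise ratio of the LWE instantiation comes in.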
Category / Keywords: oblivious transfer, learning with errors, multi-party computation
Original Publication (in the same form): IACR-PKC-2018
Date: received 8 Jan 2018
Contact author: peter scholl at cs au dk
Version: 20180108:181412
Short URL: ia.cr/2018/036