Cryptology ePrint Archive: Report 1998/014

Randomness versus Fault-Tolerance

Ran Canetti, Eyal Kushilevitz, Rafail Ostrovsky, Adi Rosen

Abstract: We investigate the relations between two major requirements of multiparty protocols: {\em fault tolerance} (or {\em resilience}) and {\em randomness}. Fault tolerance is measured in terms of the maximum number of colluding faulty parties, $t$, that a protocol can withstand and still maintain the privacy of the inputs and the correctness of the outputs (of the honest parties). Randomness is measured in terms of the total number of random bits needed by the parties in order to execute the protocol.

Previously, the upper bound on the amount of randomness required by general constructions for securely computing any non-trivial function $f$ was polynomial both in $n$, the total number of parties, and in the circuit size $C(f)$. This was the state of knowledge even for the special case $t=1$ (i.e., when there is at most one faulty party). In this paper, we show that for any linear-size circuit, and for any number $t < n/3$ of faulty parties, $O(\mathrm{poly}(t) \cdot \log n)$ randomness is sufficient. More generally, we show that for any function $f$ with circuit size $C(f)$, only $O(\mathrm{poly}(t) \cdot \log n + \mathrm{poly}(t) \cdot C(f)/n)$ randomness is needed in order to withstand any coalition of size at most $t$. Furthermore, in our protocol only $t+1$ parties flip coins and the rest of the parties are deterministic. Our results generalize to the case of adaptive adversaries as well.
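As a rough illustration (a reading of the bound stated above, not part of the original abstract), the general bound specializes as follows; the exact polynomial in $t$ hidden by $\mathrm{poly}(t)$ and the constant factors are left unspecified, as in the statement itself:

\begin{align*}
  \text{general bound:}\quad & O\!\left(\mathrm{poly}(t)\cdot\log n \;+\; \mathrm{poly}(t)\cdot\frac{C(f)}{n}\right) \\
  \text{linear-size circuits, } C(f)=O(n):\quad & O\!\left(\mathrm{poly}(t)\cdot\log n\right) \\
  \text{constant } t \text{ (e.g., } t=1) \text{ and } C(f)=O(n):\quad & O(\log n) \text{ random bits in total.}
\end{align*}

In particular, for a constant number of faults the total number of random bits is logarithmic in $n$, in contrast to the previously known bounds, which were polynomial in both $n$ and $C(f)$.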

Category / Keywords: Secure multiparty protocols, Randomness, Limited independence, Composition of protocols.

Publication Info: Appeared in the THEORY OF CRYPTOGRAPHY LIBRARY and has been included in the ePrint Archive.

Date: received Apr 30th, 1998.

Contact author: canetti at watson ibm com

Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | BibTeX Citation
