Paper 2021/1040

MUSE: Secure Inference Resilient to Malicious Clients

Ryan Lehmkuhl, Pratyush Mishra, Akshayaram Srinivasan, and Raluca Ada Popa


The increasing adoption of machine learning inference in applications has led to a corresponding increase in concerns surrounding the privacy guarantees offered by existing mechanisms for inference. Such concerns have motivated the construction of efficient secure inference protocols that allow parties to perform inference without revealing their sensitive information. Recently, there has been a proliferation of such proposals, rapidly improving efficiency. However, most of these protocols assume that the client is semi-honest, that is, the client does not deviate from the protocol; yet in practice, clients are many, have varying incentives, and can behave arbitrarily. To demonstrate that a malicious client can completely break the security of semi-honest protocols, we first develop a new model-extraction attack against many state-of-the-art secure inference protocols. Our attack enables a malicious client to learn model weights with 22x-312x fewer queries than the best black-box model-extraction attack and scales to much deeper networks. Motivated by the severity of our attack, we design and implement MUSE, an efficient two-party secure inference protocol resilient to malicious clients. MUSE introduces a novel cryptographic protocol for conditional disclosure of secrets to switch between authenticated additive secret shares and garbled circuit labels, and an improved Beaver's triple generation procedure which is 8x-12.5x faster than existing techniques. These protocols allow MUSE to push a majority of its cryptographic overhead into a preprocessing phase: compared to the equivalent semi-honest protocol (which is close to state-of-the-art), MUSE's online phase is only 1.7x-2.2x slower and uses 1.4x more communication. Overall, MUSE is 13.4x-21x faster and uses 2x-3.6x less communication than existing secure inference protocols which defend against malicious clients.
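The authenticated additive secret shares and the Beaver's triple generation mentioned above build on a standard MPC primitive: multiplying two secret-shared values with a precomputed triple. Below is a minimal, unauthenticated Python sketch of that primitive with a trusted dealer standing in for the preprocessing phase; the modulus and all function names are illustrative, not MUSE's actual implementation.

```python
import random

P = 2**61 - 1  # illustrative prime field modulus

def share(x):
    """Additively secret-share x between two parties: x = x0 + x1 (mod P)."""
    x0 = random.randrange(P)
    return x0, (x - x0) % P

def beaver_triple():
    """Trusted-dealer stand-in for preprocessing: shares of (a, b, c) with c = a*b."""
    a, b = random.randrange(P), random.randrange(P)
    return share(a), share(b), share((a * b) % P)

def beaver_multiply(x_shares, y_shares):
    """Multiply secret-shared x and y using one Beaver triple.
    The parties open the masked values d = x - a and e = y - b (which leak
    nothing about x and y), then locally derive shares of the product via
    x*y = d*e + d*b + e*a + c."""
    (a0, a1), (b0, b1), (c0, c1) = beaver_triple()
    x0, x1 = x_shares
    y0, y1 = y_shares
    d = (x0 - a0 + x1 - a1) % P  # opened value x - a
    e = (y0 - b0 + y1 - b1) % P  # opened value y - b
    # Only party 0 adds the public d*e term
    z0 = (d * e + d * b0 + e * a0 + c0) % P
    z1 = (d * b1 + e * a1 + c1) % P
    return z0, z1

# Example: multiply 7 and 9 on shares
x0, x1 = share(7)
y0, y1 = share(9)
z0, z1 = beaver_multiply((x0, x1), (y0, y1))
assert (z0 + z1) % P == 63
```

MUSE additionally attaches information-theoretic MACs to such shares (yielding authenticated shares) so that a malicious client cannot tamper with them undetected; that machinery is omitted here for brevity.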

Category
Cryptographic protocols
Publication info
Published elsewhere. Minor revision. USENIX Security 2021
Keywords
privacy-preserving machine learning, secure inference, model extraction
Contact author(s)
ryanleh @ berkeley edu
pratyush @ berkeley edu
raluca popa @ berkeley edu
History
2021-08-21: revised
2021-08-16: received
License
Creative Commons Attribution


BibTeX

      @misc{cryptoeprint:2021/1040,
      author = {Ryan Lehmkuhl and Pratyush Mishra and Akshayaram Srinivasan and Raluca Ada Popa},
      title = {{MUSE}: Secure Inference Resilient to Malicious Clients},
      howpublished = {Cryptology ePrint Archive, Paper 2021/1040},
      year = {2021},
      note = {\url{https://eprint.iacr.org/2021/1040}},
      url = {https://eprint.iacr.org/2021/1040}
      }