Paper 2024/1010

FSSiBNN: FSS-based Secure Binarized Neural Network Inference with Free Bitwidth Conversion

Peng Yang, Harbin Institute of Technology, Shenzhen
Zoe Lin Jiang, Harbin Institute of Technology, Shenzhen, Guangdong Key Laboratory of New Security and Intelligence Technology
Jiehang Zhuang, Harbin Institute of Technology, Shenzhen
Junbin Fang, Jinan University
Siu Ming Yiu, The University of Hong Kong
Xuan Wang, Harbin Institute of Technology, Shenzhen, Guangdong Key Laboratory of New Security and Intelligence Technology
Abstract

Neural network inference as a service enables a cloud server to provide inference services to clients. To ensure the privacy of both the cloud server's model and the client's data, secure neural network inference is essential. Binarized neural networks (BNNs), which use binary weights and activations, are often employed to accelerate inference. However, achieving secure BNN inference with secure multi-party computation (MPC) is challenging because MPC protocols cannot directly operate on values of different bitwidths and require bitwidth conversion. Existing bitwidth conversion schemes expand the bitwidths of weights and activations, leading to significant communication overhead. To address these challenges, we propose FSSiBNN, a secure BNN inference framework featuring free bitwidth conversion based on function secret sharing (FSS). By leveraging FSS, which supports arbitrary input and output bitwidths, we introduce a bitwidth-reduced parameter encoding scheme. This scheme seamlessly integrates bitwidth conversion into FSS-based secure binary activation and max pooling protocols, thereby eliminating this additional communication overhead. Additionally, we enhance communication efficiency by combining and converting multiple BNN layers into fewer matrix multiplication and comparison operations. We precompute matrix multiplication tuples and FSS comparison keys during the offline phase, enabling constant-round online inference. In our experiments, we evaluated various datasets and models, comparing our results with state-of-the-art frameworks. Compared with the two-party framework XONN (USENIX Security '19), FSSiBNN achieves approximately 7$\times$ faster inference times and reduces communication overhead by about 577$\times$. Compared with the three-party frameworks SecureBiNN (ESORICS '22) and FLEXBNN (TIFS '23), FSSiBNN is approximately 2.5$\times$ faster in inference time and reduces communication overhead by 1.3$\times$ to 16.4$\times$.
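To make the role of FSS concrete, below is a minimal Python sketch (not the paper's construction) of function secret sharing for the comparison function f_r(x) = 1 if x < r, else 0, using naive truth-table key shares. Practical schemes such as the distributed comparison functions underlying FSSiBNN compress the keys with a PRG, but the interface is the same: the input bitwidth and the 1-bit output bitwidth are fixed independently at key generation, which is what lets a secure binary activation absorb the bitwidth conversion at no extra communication cost. All names and parameters here are illustrative assumptions.

      # Toy FSS for f_r(x) = 1{x < r}: keys are XOR shares of the truth table.
      # Each key alone is uniformly random, so it reveals nothing about r.
      import secrets

      IN_BITS = 8                  # assumed toy input bitwidth
      DOMAIN = 1 << IN_BITS

      def fss_gen(r):
          """Dealer: split the truth table of f_r into two random shares."""
          k0 = [secrets.randbits(1) for _ in range(DOMAIN)]
          k1 = [k0[x] ^ (1 if x < r else 0) for x in range(DOMAIN)]
          return k0, k1

      def fss_eval(key, x):
          """Party: evaluate its key at a public point x; outputs a 1-bit share."""
          return key[x]

      r = 100
      k0, k1 = fss_gen(r)
      for x in (3, 99, 100, 200):
          # XORing the two parties' outputs reconstructs the comparison bit.
          assert fss_eval(k0, x) ^ fss_eval(k1, x) == (1 if x < r else 0)

Note that the output share is already a single bit: no separate conversion step is needed to move from an arithmetic ring to the binary domain, which is the "free bitwidth conversion" idea in miniature.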
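The offline/online split for the linear layers can likewise be sketched with standard Beaver-style matrix multiplication triples over a small ring. This is a hedged illustration of the precomputation pattern the abstract describes (triples dealt offline, a single round of openings online), not the paper's actual protocol; the ring size, matrix shapes, and function names are assumptions.

      # Beaver matrix-multiplication triples: offline dealing, one-round online.
      import numpy as np

      MOD = 1 << 16            # assumed toy ring Z_{2^16}
      rng = np.random.default_rng(0)

      def share(x):
          """Additively secret-share a matrix over Z_MOD between two parties."""
          r = rng.integers(0, MOD, size=x.shape, dtype=np.int64)
          return r, (x - r) % MOD

      def deal_matmul_triple(m, k, n):
          """Offline phase: dealer samples (A, B, C = A@B) and shares each part."""
          A = rng.integers(0, MOD, size=(m, k), dtype=np.int64)
          B = rng.integers(0, MOD, size=(k, n), dtype=np.int64)
          return share(A), share(B), share((A @ B) % MOD)

      def online_matmul(x_shares, y_shares, triple):
          """Online phase: open masked inputs once, then compute locally."""
          (A0, A1), (B0, B1), (C0, C1) = triple
          # In a real run each party sends its masked share; we open centrally.
          D = (x_shares[0] - A0 + x_shares[1] - A1) % MOD   # opens X - A
          E = (y_shares[0] - B0 + y_shares[1] - B1) % MOD   # opens Y - B
          # Local shares of Z = X @ Y = D@E + D@B + A@E + A@B.
          Z0 = (D @ E + D @ B0 + A0 @ E + C0) % MOD
          Z1 = (D @ B1 + A1 @ E + C1) % MOD
          return Z0, Z1

      X = rng.integers(0, MOD, size=(2, 3), dtype=np.int64)
      Y = rng.integers(0, MOD, size=(3, 2), dtype=np.int64)
      Z0, Z1 = online_matmul(share(X), share(Y), deal_matmul_triple(2, 3, 2))
      assert np.array_equal((Z0 + Z1) % MOD, (X @ Y) % MOD)

Because the openings of X - A and Y - B are the only interaction, the online phase costs a constant number of rounds regardless of network depth once consecutive layers are fused into fewer matrix multiplications and comparisons.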

Metadata
Available format(s)
PDF
Category
Cryptographic protocols
Publication info
Preprint.
Keywords
Secure neural network inference
Binarized neural network
Free bitwidth conversion
Function secret sharing
Contact author(s)
stuyangpeng@stu.hit.edu.cn
zoeljiang@hit.edu.cn
History
2024-06-28: revised
2024-06-21: received
Short URL
https://ia.cr/2024/1010
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2024/1010,
      author = {Peng Yang and Zoe Lin Jiang and Jiehang Zhuang and Junbin Fang and Siu Ming Yiu and Xuan Wang},
      title = {{FSSiBNN}: {FSS}-based Secure Binarized Neural Network Inference with Free Bitwidth Conversion},
      howpublished = {Cryptology {ePrint} Archive, Paper 2024/1010},
      year = {2024},
      url = {https://eprint.iacr.org/2024/1010}
}