Paper 2024/1403
Hard-Label Cryptanalytic Extraction of Neural Network Models
Abstract
The machine learning problem of extracting neural network parameters was proposed nearly three decades ago. Functionally equivalent extraction is a crucial goal for research on this problem. When the adversary has access to the raw output of neural networks, various attacks, including those presented at CRYPTO 2020 and EUROCRYPT 2024, have successfully achieved this goal. However, this goal has not been achieved in the hard-label setting, where the raw output is inaccessible and only the predicted label is observable. In this paper, we propose the first attack that theoretically achieves functionally equivalent extraction under the hard-label setting, which applies to ReLU neural networks. The effectiveness of our attack is validated through practical experiments on a wide range of ReLU neural networks, including neural networks trained on two real benchmarking datasets (MNIST, CIFAR10) widely used in computer vision. For a neural network consisting of $10^5$ parameters, our attack requires only several hours on a single core.
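To make the two access models concrete, the following minimal sketch (illustrative only, not the paper's attack; all weights and function names are hypothetical) contrasts a raw-output oracle, as assumed by the CRYPTO 2020 and EUROCRYPT 2024 attacks, with the hard-label oracle considered in this work, where only the predicted class is returned.

```python
# Minimal sketch of the two query models for a toy ReLU network.
# The weights below stand in for the secret parameters an extraction attack
# tries to recover; they are random placeholders, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(16, 4))   # hidden-layer weights (secret)
b1 = rng.normal(size=16)        # hidden-layer biases  (secret)
W2 = rng.normal(size=(3, 16))   # output-layer weights (secret)
b2 = rng.normal(size=3)         # output-layer biases  (secret)

def raw_output(x: np.ndarray) -> np.ndarray:
    """Raw-output oracle: the adversary sees the full logit vector."""
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU activation
    return W2 @ h + b2

def hard_label(x: np.ndarray) -> int:
    """Hard-label oracle: the adversary sees only the predicted class index."""
    return int(np.argmax(raw_output(x)))

x = rng.normal(size=4)
print(raw_output(x))   # inaccessible in the hard-label setting
print(hard_label(x))   # the only feedback available to a hard-label adversary
```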
Metadata
- Category: Attacks and cryptanalysis
- Publication info: A minor revision of an IACR publication in ASIACRYPT 2024
- Keywords: Cryptanalysis, ReLU Neural Networks, Functionally Equivalent Extraction, Hard-Label
- Contact author(s):
  chenyi2023 @ mail tsinghua edu cn
  xiaoyangdong @ tsinghua edu cn
  guojian @ ntu edu sg
  shenyt22 @ mails tsinghua edu cn
  anyuwang @ tsinghua edu cn
  xiaoyunwang @ tsinghua edu cn
- History
- 2024-09-11: approved
- 2024-09-08: received
- Short URL: https://ia.cr/2024/1403
- License: CC BY
BibTeX
@misc{cryptoeprint:2024/1403,
      author       = {Yi Chen and Xiaoyang Dong and Jian Guo and Yantian Shen and Anyu Wang and Xiaoyun Wang},
      title        = {Hard-Label Cryptanalytic Extraction of Neural Network Models},
      howpublished = {Cryptology {ePrint} Archive, Paper 2024/1403},
      year         = {2024},
      url          = {https://eprint.iacr.org/2024/1403}
}