
Paper 2021/167

Stealing Neural Network Models through the Scan Chain: A New Threat for ML Hardware

Seetal Potluri and Aydin Aysu

Abstract

Stealing trained machine learning (ML) models is a new and growing concern due to the model's development cost. Existing work on ML model extraction either applies a mathematical attack or exploits hardware vulnerabilities such as side-channel leakage. This paper shows, for the first time, a new style of attack on ML models running on embedded devices by abusing the scan-chain infrastructure. We illustrate that having coarse-grained scan-chain access to the non-linear layer outputs is sufficient to steal ML models. To that end, we propose a novel attack inspired by small-signal analysis that applies small perturbations to the input signals, identifies the quiescent operating points, and selectively activates certain neurons. We then couple this with a Linear Constraint Satisfaction based approach to efficiently extract model parameters such as weights and biases. We conduct and automate our attack on neural network inference topologies defined in earlier works. The results show that our attack outperforms the mathematical model extraction proposed at CRYPTO 2020, USENIX 2020, and ICML 2020 by an increase in accuracy of 2^20.7x, 2^50.7x, and 2^33.9x, respectively, and a reduction in queries by 2^6.5x, 2^4.6x, and 2^14.2x, respectively.
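To make the approach concrete, below is a minimal, hypothetical sketch (not the authors' code) of the core idea on a single ReLU neuron y = ReLU(w·x + b): find an operating point where the neuron is active, apply small input perturbations, and solve the resulting linear constraints for the weights and bias. The observe() oracle, input dimension, step size, and least-squares solver are all assumptions standing in for the paper's scan-chain read-out of non-linear layer outputs and its Linear Constraint Satisfaction formulation.

import numpy as np

rng = np.random.default_rng(0)
d = 4                                               # assumed input dimension (illustrative)
w_true, b_true = rng.normal(size=d), rng.normal()   # ground truth, used only by the demo oracle

def observe(x):
    # Stand-in for a coarse-grained scan-chain read-out of one neuron's ReLU output.
    return max(0.0, float(w_true @ x + b_true))

# 1. Find a quiescent operating point where the neuron is active, with some margin,
#    so that small perturbations keep the ReLU in its locally linear region.
x0 = rng.normal(size=d)
while observe(x0) <= 0.1:
    x0 = rng.normal(size=d)

# 2. Apply small perturbations around x0. Each query yields one linear constraint
#    w . x_k + b = y_k, since the neuron stays active for every probed point.
eps = 1e-3
X, y = [x0], [observe(x0)]
for i in range(d):
    xk = x0.copy()
    xk[i] += eps
    X.append(xk)
    y.append(observe(xk))

# 3. Solve the linear constraint system for (w, b); the last column of A multiplies b.
A = np.hstack([np.array(X), np.ones((len(X), 1))])
sol, *_ = np.linalg.lstsq(A, np.array(y), rcond=None)
w_hat, b_hat = sol[:-1], sol[-1]

print(np.allclose(w_hat, w_true, atol=1e-6), np.isclose(b_hat, b_true, atol=1e-6))

In this toy setting the d+1 probed points give an exactly determined linear system, so the recovered parameters match the ground truth up to numerical precision; the paper's contribution is doing this at scale, through the scan chain, for full network topologies.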

Metadata
Available format(s)
PDF
Category
Applications
Publication info
Preprint. MINOR revision.
Keywords
Neural network models, ML Hardware, Model Stealing, Scan-chain
Contact author(s)
spotlur2@ncsu.edu
History
2021-09-23: last of 11 revisions
2021-02-17: received
Short URL
https://ia.cr/2021/167
License
Creative Commons Attribution
CC BY