Paper 2021/167

Stealing Neural Network Models through the Scan Chain: A New Threat for ML Hardware

Seetal Potluri and Aydin Aysu


Stealing trained machine learning (ML) models is a new and growing concern due to the models' development cost. Existing work on ML model extraction either applies a mathematical attack or exploits hardware vulnerabilities such as side-channel leakage. This paper shows, for the first time, a new style of attack on ML models running on embedded devices by abusing the scan-chain infrastructure. We illustrate that having coarse-grained scan-chain access to the non-linear layer outputs is sufficient to steal ML models. To that end, we propose a novel attack inspired by small-signal analysis that applies small perturbations to the input signals, identifies quiescent operating points, and selectively activates certain neurons. We then couple this with a linear-constraint-satisfaction-based approach to efficiently extract model parameters such as weights and biases. We conduct our attack on neural network inference topologies defined in earlier works, and we automate our attack. The results show that our attack outperforms mathematical model extraction proposed in CRYPTO 2020, USENIX 2020, and ICML 2020 by an increase in accuracy of 2^20.7x, 2^50.7x, and 2^33.9x, respectively, and a reduction in queries by 2^6.5x, 2^4.6x, and 2^14.2x, respectively.
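To make the small-signal idea concrete, the following is an illustrative sketch (not the authors' exact method) of how an attacker with oracle access to a layer's post-ReLU outputs could recover its weights and biases. The layer shapes, the `oracle` function, and the probing step size are hypothetical; the point is that around an operating point where every neuron is active, the layer is linear, so finite differences recover the weight columns and one linear constraint yields the bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical victim layer: y = ReLU(W x + b). The attacker only
# observes y (e.g. via scan-chain access to the layer's output register).
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)

def oracle(x):
    return np.maximum(W @ x + b, 0.0)

# 1. Find a quiescent operating point where every neuron is active
#    (with margin), so the layer is linear in a small neighbourhood.
x0 = rng.normal(size=4)
while not np.all(oracle(x0) > 0.01):
    x0 = rng.normal(size=4)

# 2. Small-signal probing: perturb one input at a time; the central
#    finite difference of the output recovers one column of W.
eps = 1e-4
W_hat = np.empty_like(W)
for j in range(4):
    dx = np.zeros(4)
    dx[j] = eps
    W_hat[:, j] = (oracle(x0 + dx) - oracle(x0 - dx)) / (2 * eps)

# 3. With W recovered, the bias follows from one linear constraint:
#    y0 = W x0 + b  =>  b = y0 - W x0.
b_hat = oracle(x0) - W_hat @ x0

print(np.allclose(W_hat, W, atol=1e-6))  # True
print(np.allclose(b_hat, b, atol=1e-6))  # True
```

In the actual attack setting, each oracle query corresponds to scanning out the non-linear layer's output through the scan chain, and the constraints from many operating points are solved jointly rather than one layer in isolation.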

Publication info: Published elsewhere. ICCAD 2021
Keywords: Neural network models, ML Hardware, Model Stealing, Scan-chain
Contact author(s): spotlur2 @ ncsu edu
History: 2021-02-17: received; 2021-09-23: last of 11 revisions
License: Creative Commons Attribution


@misc{cryptoeprint:2021/167,
      author = {Seetal Potluri and Aydin Aysu},
      title = {Stealing Neural Network Models through the Scan Chain: A New Threat for ML Hardware},
      howpublished = {Cryptology ePrint Archive, Paper 2021/167},
      year = {2021},
      note = {\url{https://eprint.iacr.org/2021/167}},
      url = {https://eprint.iacr.org/2021/167}
}