Cryptology ePrint Archive: Report 2021/167

Stealing Neural Network Models through the Scan Chain: A New Threat for ML Hardware

Seetal Potluri and Aydin Aysu

Abstract: Stealing trained machine learning (ML) models is a new and growing concern due to the cost of developing such models. Existing work on ML model extraction either applies a mathematical attack or exploits hardware vulnerabilities such as side-channel leakage. This paper shows, for the first time, a new style of attack on ML models running on embedded devices that abuses the scan-chain infrastructure. We illustrate that having coarse-grained scan-chain access to the non-linear layer outputs is sufficient to steal ML models. To that end, we propose a novel attack inspired by small-signal analysis that applies small perturbations to the input signals, identifies the quiescent operating points, and selectively activates certain neurons. We then couple this with a Linear Constraint Satisfaction-based approach to efficiently extract model parameters such as weights and biases. We conduct our attack on neural network inference topologies defined in earlier works and automate it. The results show that our attack outperforms the mathematical model extraction proposed at CRYPTO 2020, USENIX 2020, and ICML 2020 with an increase in accuracy of 2^20.7x, 2^50.7x, and 2^33.9x, respectively, and a reduction in queries of 2^6.5x, 2^4.6x, and 2^14.2x, respectively.
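
The abstract's core idea, perturbing inputs around a quiescent operating point and solving linear constraints for weights and biases, can be illustrated with a minimal sketch. The code below is not the paper's attack: it assumes a hypothetical oracle query(x) standing in for scan-chain access to a single neuron's output, and merely shows how small-signal probes yield a linear system whose solution is the neuron's weights and bias.

import numpy as np

def recover_neuron(query, n_inputs, x0, eps=1e-3):
    """Recover (w, b) of a neuron y = w.x + b, assuming the neuron stays in its
    active (linear) region for small perturbations around the operating point x0."""
    # Build one linear constraint per probe; unknowns are [w_1..w_n, b].
    A, y = [], []
    # Baseline query at the quiescent operating point.
    A.append(np.append(x0, 1.0))
    y.append(query(x0))
    # Small-signal probes: perturb one input at a time.
    for i in range(n_inputs):
        x = x0.copy()
        x[i] += eps
        A.append(np.append(x, 1.0))
        y.append(query(x))
    # Solve the (n+1)-by-(n+1) linear system in a least-squares sense.
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(y), rcond=None)
    return sol[:-1], sol[-1]  # estimated weights, bias

# Toy usage against a known ground-truth ReLU neuron (hypothetical stand-in
# for observing a non-linear layer output through the scan chain):
w_true = np.array([0.8, -0.3, 0.5, 0.2])
b_true = 0.1
oracle = lambda x: float(max(w_true @ x + b_true, 0.0))
w_est, b_est = recover_neuron(oracle, 4, x0=np.ones(4))
print(np.allclose(w_est, w_true, atol=1e-6), abs(b_est - b_true) < 1e-6)

The operating point x0 is chosen so that the neuron is active; under that assumption the ReLU is locally linear and n+1 queries suffice for this single neuron, which is the intuition behind the query-efficiency figures reported above.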

Category / Keywords: applications / Neural network models, ML Hardware, Model Stealing, Scan-chain

Date: received 15 Feb 2021, last revised 17 Feb 2021

Contact author: spotlur2 at ncsu edu

Available format(s): PDF | BibTeX Citation

Version: 20210218:025442

Short URL: ia.cr/2021/167

