Cryptology ePrint Archive: Report 2017/551

Noise-Tolerant Machine Learning Attacks against Physically Unclonable Functions

Fatemeh Ganji and Shahin Tajik and Jean-Pierre Seifert

Abstract: Along with the evolution of Physically Unclonable Functions (PUFs) as a remedy for the shortcomings of conventional key storage and generation methods, numerous successful attacks against PUFs have been proposed in the literature. Among these, machine learning (ML) attacks, ranging from heuristic approaches to provable algorithms, have attracted great attention. Nevertheless, the issue of dealing with noise has so far not been addressed in this context. Thus, it is not clear to what extent ML attacks can be launched against real-world, noisy PUFs. This paper aims to clarify this issue by focusing on provable ML algorithms, i.e., Probably Approximately Correct (PAC) learning algorithms. We prove that PAC learning algorithms for known PUF families remain effective and applicable even in the presence of noise. Our proof relies heavily on the intrinsic properties of these PUF families, namely the arbiter, Ring Oscillator (RO), and Bistable Ring (BR) PUF families. In addition to this proof, we introduce a new class of ML algorithms that takes advantage of Fourier analysis. We believe that this type of ML algorithm, and the notions related to it, can broaden our understanding of PUFs by offering better measures of security.
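The "Low Degree Algorithm" named in the keywords can be sketched as follows. This is a generic illustration of Fourier-based PAC learning under classification noise, not the paper's actual construction: the target function here is a simple 5-bit majority (a stand-in, not a PUF model), and all parameter values are illustrative. The learner estimates every low-degree Fourier coefficient of the target from noisy examples and predicts with the sign of the resulting low-degree approximation.

```python
import itertools
import random

random.seed(0)

n = 5        # number of input bits, encoded as +1/-1
d = 3        # maximum Fourier degree retained by the learner
noise = 0.1  # probability that each training label is flipped
m = 4000     # number of noisy training examples

# Hypothetical target: majority of the five bits (a noise-stable function).
def target(x):
    return 1 if sum(x) > 0 else -1

def chi(S, x):
    # Parity (Fourier character) over the index set S.
    p = 1
    for i in S:
        p *= x[i]
    return p

# Draw noisy labeled examples: uniform inputs, labels flipped w.p. `noise`.
samples = []
for _ in range(m):
    x = tuple(random.choice((-1, 1)) for _ in range(n))
    y = target(x)
    if random.random() < noise:
        y = -y
    samples.append((x, y))

# Low Degree Algorithm: estimate each coefficient f^(S) = E[f(x)*chi_S(x)]
# for all |S| <= d, directly from the noisy sample mean.
subsets = [S for k in range(d + 1)
           for S in itertools.combinations(range(n), k)]
coef = {S: sum(y * chi(S, x) for x, y in samples) / m for S in subsets}

def hypothesis(x):
    # Predict with the sign of the low-degree Fourier approximation.
    return 1 if sum(coef[S] * chi(S, x) for S in subsets) >= 0 else -1

# Evaluate on the full (noise-free) Boolean cube.
cube = list(itertools.product((-1, 1), repeat=n))
acc = sum(hypothesis(x) == target(x) for x in cube) / len(cube)
print(f"accuracy on all {len(cube)} inputs: {acc:.2f}")
```

The reason this style of learner tolerates noise is that random label flips with rate η merely scale every Fourier coefficient by (1 - 2η), leaving their signs and relative magnitudes intact, so the sign of the low-degree approximation is unchanged given enough samples.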

Category / Keywords: Physically Unclonable Functions, Boolean Analysis, Noise Sensitivity, Low Degree Algorithm, Machine Learning, PAC Learning.

Date: received 6 Jun 2017, last revised 25 Jul 2017

Contact author: fganji at sec t-labs tu-berlin de

Available format(s): PDF | BibTeX Citation

Version: 20170725:194401 (All versions of this report)

Short URL: ia.cr/2017/551

