
Paper 2017/551

Noise-Tolerant Machine Learning Attacks against Physically Unclonable Functions

Fatemeh Ganji and Shahin Tajik and Jean-Pierre Seifert

Abstract

Along with the evolution of Physically Unclonable Functions (PUFs) as a remedy for the shortcomings of conventional key storage and generation methods, numerous successful attacks against PUFs have been proposed in the literature. Among these, machine learning (ML) attacks, ranging from heuristic approaches to provable algorithms, have attracted particular attention. Nevertheless, the issue of dealing with noise has so far not been addressed in this context, and it is therefore unclear to what extent ML attacks can be launched against real-world, noisy PUFs. This paper aims to clarify this issue by focusing on provable ML algorithms, i.e., Probably Approximately Correct (PAC) learning algorithms. We prove that PAC learning algorithms for known PUF families remain effective and applicable even in the presence of noise. Our proof relies heavily on the intrinsic properties of these PUF families, namely the arbiter, Ring Oscillator (RO), and Bistable Ring (BR) PUF families. In addition to this proof, we introduce a new style of ML algorithm that takes advantage of Fourier analysis. We believe that this type of ML algorithm, and the notions related to it, can broaden our understanding of PUFs by offering better measures of security.
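The abstract only names the Fourier-analytic approach; as an informal illustration of the underlying idea, the following minimal Python sketch implements the classic low-degree algorithm (in the style of Linial, Mansour, and Nisan), on which Fourier-based PAC learners build. It is not the paper's construction: the function names, the toy target, and the noise rate below are hypothetical choices for this example. The algorithm estimates every Fourier coefficient of an unknown Boolean function up to degree d from labeled examples and predicts with the sign of the truncated expansion.

```python
# Illustrative sketch of the low-degree algorithm; not the paper's construction.
import itertools
import numpy as np

def estimate_low_degree_fourier(X, y, d):
    """Estimate the Fourier coefficients f_hat(S) = E[f(x) * chi_S(x)] for
    every index set S with |S| <= d, via empirical means over m examples.
    X: (m, n) array over {-1, +1}; y: (m,) array over {-1, +1}."""
    m, n = X.shape
    coeffs = {}
    for k in range(d + 1):
        for S in itertools.combinations(range(n), k):
            # chi_S(x) is the parity (product) of the bits indexed by S;
            # for S = () the empty product is 1, which np.prod returns.
            chi = np.prod(X[:, list(S)], axis=1)
            coeffs[S] = float(np.mean(y * chi))
    return coeffs

def predict(coeffs, x):
    """Hypothesis: sign of the degree-d truncated Fourier expansion at x."""
    total = sum(c * np.prod(x[list(S)]) for S, c in coeffs.items())
    return 1 if total >= 0 else -1

# Toy usage: recover a parity of two hidden bits under 10% label noise.
rng = np.random.default_rng(0)
X = rng.choice([-1, 1], size=(5000, 8))
y = X[:, 0] * X[:, 3]                    # hidden target chi_{0,3}
flip = rng.random(5000) < 0.10           # uniform classification noise
y = np.where(flip, -y, y)
coeffs = estimate_low_degree_fourier(X, y, d=2)
accuracy = np.mean([predict(coeffs, x) == x[0] * x[3] for x in X[:500]])
```

Under uniform label noise at rate eta, each empirical estimate converges to (1 - 2*eta) * f_hat(S), so the heavy low-degree coefficients of the target remain identifiable from polynomially many samples; informally, this attenuation argument is why such Fourier-based learners tolerate noise.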

Metadata
Available format(s)
PDF
Publication info
Preprint. MINOR revision.
Keywords
Physically Unclonable Functions, Boolean Analysis, Noise Sensitivity, Low Degree Algorithm, Machine Learning, PAC Learning
Contact author(s)
fganji@sec.t-labs.tu-berlin.de
History
2017-11-30: last of 2 revisions
2017-06-08: received
Short URL
https://ia.cr/2017/551
License
Creative Commons Attribution
CC BY