Strong PUF Security Metrics and Their Resistance to Machine Learning
PUF ML
Description
In recent years, Physical Unclonable Functions (PUFs) have become a widely studied hardware-based security primitive. Strong PUFs, in particular, are designed to support a large challenge space and are often used in authentication protocols, where a device proves its identity to a remote backend through challenge–response interactions.
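As a rough illustration of such a protocol, the following Python sketch shows the basic shape of CRP-based authentication: the backend stores challenge–response pairs during enrollment and later replays a batch of them, tolerating a few noisy response bits. All names (`enroll`, `authenticate`, the toy device) and parameters are hypothetical and only meant to convey the idea.

```python
import secrets
from functools import lru_cache

N_STAGES = 64        # challenge length in bits (illustrative)
NUM_CHECKS = 128     # stored challenges replayed per authentication round
MAX_ERRORS = 8       # tolerated noisy response bits

@lru_cache(maxsize=None)
def toy_device(challenge):
    """Stand-in for a real strong PUF: a fixed but random 1-bit response."""
    return secrets.randbits(1)

def enroll(device, num_crps=10_000):
    """Backend-side enrollment: query the genuine device and store CRPs."""
    crps = []
    for _ in range(num_crps):
        c = tuple(secrets.randbits(1) for _ in range(N_STAGES))
        crps.append((c, device(c)))
    return crps

def authenticate(crps, device):
    """Replay a batch of stored CRPs; accept if only a few bits differ."""
    batch = [crps.pop() for _ in range(NUM_CHECKS)]   # never reuse a CRP
    errors = sum(device(c) != r for c, r in batch)
    return errors <= MAX_ERRORS

db = enroll(toy_device)
print("genuine device accepted:", authenticate(db, toy_device))
```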
Ideally, the mapping from challenges to responses should be highly complex and difficult to predict. However, many underlying PUF constructions, such as arbiter PUFs and loop PUFs, are linear in nature. This makes them vulnerable to machine learning (ML) attacks, which can efficiently learn and reproduce their behavior.
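To make the linearity point concrete, the following sketch assumes the commonly used additive delay model of an arbiter PUF, in which the response is the sign of a weighted sum over a parity-transformed challenge. Under that assumption, an off-the-shelf logistic regression typically predicts the simulated PUF almost perfectly from a modest number of CRPs; all sizes and seeds below are arbitrary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_stages, n_crps = 64, 20_000

# Additive delay model: each stage contributes a delay difference (weight).
weights = rng.normal(size=n_stages + 1)

def features(challenges):
    """Parity (cumulative-product) transform used in the linear delay model."""
    signs = 1 - 2 * challenges                       # map {0,1} -> {+1,-1}
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((len(challenges), 1))])

challenges = rng.integers(0, 2, size=(n_crps, n_stages))
responses = (features(challenges) @ weights > 0).astype(int)

# Train on 80% of the CRPs, measure prediction accuracy on the rest.
split = int(0.8 * n_crps)
clf = LogisticRegression(max_iter=1000)
clf.fit(features(challenges[:split]), responses[:split])
print("attack accuracy:", clf.score(features(challenges[split:]), responses[split:]))
```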
To improve resistance, several approaches have been proposed. These include introducing nonlinearity during the quantization process (e.g., NMQ) and combining the responses of multiple PUF instances using logical operations. While these techniques can make attacks more difficult, their effectiveness is typically evaluated through empirical trials.
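One frequently studied combination scheme is the XOR arbiter PUF, where the single-bit responses of k independent arbiter chains to the same challenge are XORed into one output bit. The sketch below (same additive delay model assumption as above, illustrative parameters) builds such a construction and shows that the plain logistic regression attack that works against a single chain typically drops towards chance accuracy here, although more powerful models and larger CRP sets can still succeed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_stages, n_crps, k = 64, 20_000, 3

def features(challenges):
    """Parity transform of the additive delay model (see sketch above)."""
    signs = 1 - 2 * challenges
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((len(challenges), 1))])

challenges = rng.integers(0, 2, size=(n_crps, n_stages))
phi = features(challenges)

# k independent arbiter chains; their single-bit responses are XORed.
chain_weights = rng.normal(size=(k, n_stages + 1))
chain_bits = (phi @ chain_weights.T > 0).astype(int)
responses = np.bitwise_xor.reduce(chain_bits, axis=1)

split = int(0.8 * n_crps)
clf = LogisticRegression(max_iter=1000)
clf.fit(phi[:split], responses[:split])
print("plain LR accuracy on XOR PUF:", clf.score(phi[split:], responses[split:]))
# Expect accuracy far below the single-chain case: the XOR of k linear
# thresholds is no longer linearly separable in the transformed feature space.
```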
This raises an important question: how can the security of strong PUFs against ML attacks be assessed systematically? In practice, security is often judged based on whether existing attacks succeed or fail, but small improvements in models or preprocessing can quickly change these outcomes. This makes it difficult to compare different designs or estimate their robustness.
Your seminar work will explore the following questions:
* Is there a way to systematically determine the security of strong PUFs against ML attacks?
* If this is not mathematically possible (or not possible in a generalized way across different constructions): are there practical metrics that give an indication of robustness against ML attacks?
* Why are certain techniques, such as challenge transformations, effective in improving ML attacks, and which types of models are most effective at learning PUF behavior? (A small illustration follows this list.)
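On the last question, a small experiment, again under the additive delay model assumption, hints at why the well-known parity challenge transformation helps the attacker: a linear model trained on raw challenge bits performs markedly worse than the same model trained on the transformed features, which is typically close to perfect. The parameters are again only illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_stages, n_crps = 64, 20_000

def parity_features(challenges):
    """Challenge transformation: phi_i = prod_{j>=i} (1 - 2*c_j), plus a bias."""
    signs = 1 - 2 * challenges
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((len(challenges), 1))])

challenges = rng.integers(0, 2, size=(n_crps, n_stages))
weights = rng.normal(size=n_stages + 1)
responses = (parity_features(challenges) @ weights > 0).astype(int)

# Same classifier, two feature representations of the same CRPs.
split = int(0.8 * n_crps)
for name, X in [("raw challenge bits", challenges.astype(float)),
                ("parity-transformed", parity_features(challenges))]:
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[:split], responses[:split])
    print(name, "accuracy:", round(clf.score(X[split:], responses[split:]), 3))
```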
The following papers will help you as a starting point:
Unified Framework for Qualifying Security Boundary of PUFs Against Machine Learning Attacks
- Hongming Fei, Zilong Hu, Prosanta Gope, and Biplab Sikdar
- https://arxiv.org/pdf/2601.04697
Strong PUF Security Metrics: Sensitivity of Responses to Single Challenge Bit Flips
- Wolfgang Stefani, Fynn Kappelhoff, Martin Gruber, Yu-Neng Wang, Sara Achour, Debdeep Mukhopadhyay, and Ulrich Rührmair
- https://eprint.iacr.org/2024/378.pdf