Master's Theses
SCA of AI Hardware Accelerator
SCA, Neural Networks, Hardware, FPGA
Description
Neural Networks are ubiquitous in everyday life. Speech and face recognition as well as driverless cars are just a few examples where Artificial Neural Networks (ANNs) are used. Training a deep ANN is very time-consuming and computationally expensive, so the intellectual property stored in an ANN is an asset worth protecting. Additionally, implementations on edge devices need to be power-efficient while maintaining high throughput; [1] and [2] are examples of frameworks aiming to fulfill these requirements.
A side-channel attack can be used to extract network parameters such as the number and type of layers as well as weight and bias values. In [3, 4], side-channel attacks on different ANN implementations are demonstrated.
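To give an idea of how such parameters can leak, the sketch below illustrates one basic correlation power analysis (CPA) step that tries to recover a single quantized weight from recorded power or EM traces. It is only a minimal illustration under assumed conditions: the function names, the Hamming-weight leakage model, and the traces/inputs arrays are assumptions, and the attack actually mounted in this work will depend on the concrete accelerator implementation.

import numpy as np

def hamming_weight(value):
    # Number of set bits in the low 8 bits of the (possibly signed) product
    return bin(int(value) & 0xFF).count("1")

def cpa_recover_weight(traces, inputs, candidate_weights):
    # traces:            (n_traces, n_samples) array of power/EM measurements (assumed given)
    # inputs:            (n_traces,) known activations fed into the targeted multiplier
    # candidate_weights: iterable of weight guesses, e.g. range(-128, 128) for 8-bit weights
    traces = np.asarray(traces, dtype=float)
    centered_traces = traces - traces.mean(axis=0)
    best_weight, best_corr = None, 0.0
    for w in candidate_weights:
        # Hypothetical leakage: Hamming weight of the quantized product input * weight_guess
        hyp = np.array([hamming_weight(x * w) for x in inputs], dtype=float)
        hyp -= hyp.mean()
        # Pearson correlation between the hypothesis and every sample point of the traces
        corr = (hyp @ centered_traces) / (
            np.linalg.norm(hyp) * np.linalg.norm(centered_traces, axis=0) + 1e-12
        )
        peak = float(np.max(np.abs(corr)))
        if peak > best_corr:
            best_weight, best_corr = w, peak
    return best_weight, best_corr

A full attack additionally has to locate the relevant multiply-accumulate operations in the traces and repeat such a step for every weight and layer of the network.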
In this work, side-channel attacks on automatically generated implementations of different ANNs shall be performed. This includes a detailed analysis of the side-channel properties of the different implementations.
Start of Thesis: Anytime
References:
[1] M. Blott, T. B. Preußer, N. J. Fraser, G. Gambardella, K. O’Brien, Y. Umuroglu, M. Leeser, and K. Vissers, “FINN-R: An end-to-end deep-learning framework for fast exploration of quantized neural networks,” ACM Transactions on Reconfigurable Technology and Systems (TRETS), vol. 11, no. 3, pp. 1–23, 2018.
[2] Y. Umuroglu and M. Jahre, “Streamlined deployment for quantized neural networks,” arXiv preprint arXiv:1709.04060, 2017.
[3] L. Batina, S. Bhasin, D. Jap, and S. Picek, “CSI NN: Reverse engineering of neural network architectures through electromagnetic side channel,” in 28th USENIX Security Symposium (USENIX Security 19), pp. 515–532, 2019.
[4] A. Dubey, R. Cammarota, and A. Aysu, “BoMaNet: Boolean masking of an entire neural network,” arXiv preprint arXiv:2006.09532, 2020.
Prerequisites
- VHDL/Verilog Knowledge
- Sichere Implementierung Kryptographischer Verfahren (SIKA)
- Python Skills
Contact
manuel.brosch@tum.de or matthias.probst@tum.de
Research Internships (Forschungspraxis)
SCA of AI Hardware Accelerator
SCA, Neural Networks, Hardware, FPGA
This topic is also offered as a research internship. Description, start date, references, prerequisites, and contact are identical to the Master's thesis listing above.