
M.Sc. Matthias Probst

Room: N1008ZG

Research Interests

  • Side Channel Analysis
  • Neural Networks
  • Neuromorphic Hardware (Spiking Neural Networks)

Research positions for students

If one of my research topics catches your interest, feel free to contact me about possible Bachelor's thesis, Master's thesis, or research internship opportunities.

Master's Theses

SCA of AI Hardware Accelerator

Keywords:
SCA, Neural Networks, Hardware, FPGA

Description

Neural networks are ubiquitous in everyday life. Speech and face recognition as well as driverless cars are just some examples where Artificial Neural Networks (ANNs) are used. Training a deep ANN is very time-consuming and computationally expensive, so the intellectual property stored in an ANN is an asset worth protecting. Additionally, implementations on edge devices need to be power-efficient while maintaining a high throughput. [1] and [2] are examples of frameworks aiming to fulfill these requirements.


A side-channel attack can be used to extract network parameters such as the number and type of layers, as well as weight and bias values. In [3, 4], side-channel attacks on different implementations of ANNs are demonstrated.

In this work, a side-channel attack on auto-generated implementations of different ANNs is to be performed. This includes a detailed analysis of the side-channel properties of the different implementations.
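
As a rough illustration of the attack principle, the following Python sketch performs a correlation power analysis (CPA) that recovers a single 8-bit weight of a quantized multiply, assuming a classic Hamming-weight leakage model on the truncated product. The trace layout, the leakage point, and the weight encoding are assumptions made for this sketch only, not properties of any particular accelerator.

import numpy as np

def hamming_weight(x):
    # Hamming-weight leakage model: number of set bits in the low byte
    return np.unpackbits(x.astype(np.uint8)[:, None], axis=1).sum(axis=1)

def cpa_recover_weight(inputs, traces):
    # inputs: (n,) known 8-bit activations fed to the multiplier (uint8)
    # traces: (n, samples) power traces aligned to the multiply operation
    n = traces.shape[0]
    t = (traces - traces.mean(axis=0)) / (traces.std(axis=0) + 1e-12)
    best_guess, best_corr = 0, -1.0
    for guess in range(256):  # enumerate all 8-bit weight candidates
        # Predicted leakage: Hamming weight of the truncated 8-bit product
        pred = hamming_weight((inputs.astype(np.uint16) * guess) & 0xFF).astype(np.float64)
        p = (pred - pred.mean()) / (pred.std() + 1e-12)
        corr = np.abs(p @ t) / n  # Pearson correlation per trace sample
        if corr.max() > best_corr:
            best_corr, best_guess = corr.max(), guess
    return best_guess, best_corr

The same hypothesis-and-correlate loop generalizes to further weights, bias values, and other intermediate results once the corresponding leakage points are identified in the traces.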

Start of Thesis: Anytime


References:

[1] M. Blott, T. B. Preußer, N. J. Fraser, G. Gambardella, K. O’Brien, Y. Umuroglu, M. Leeser, and K. Vissers, “FINN-R: An end-to-end deep-learning framework for fast exploration of quantized neural networks,” ACM Transactions on Reconfigurable Technology and Systems (TRETS), vol. 11, no. 3, pp. 1–23, 2018.
[2] Y. Umuroglu and M. Jahre, “Streamlined deployment for quantized neural networks,” arXiv preprint arXiv:1709.04060, 2017.
[3] L. Batina, S. Bhasin, D. Jap, and S. Picek, “CSI NN: Reverse engineering of neural network architectures through electromagnetic side channel,” in 28th USENIX Security Symposium (USENIX Security 19), pp. 515–532, 2019.
[4] A. Dubey, R. Cammarota, and A. Aysu, “BoMaNet: Boolean masking of an entire neural network,” arXiv preprint arXiv:2006.09532, 2020.

Prerequisites

  • VHDL/Verilog Knowledge
  • Sichere Implementierung Kryptographischer Verfahren (SIKA, Secure Implementation of Cryptographic Algorithms)
  • Python Skills

Contact

manuel.brosch@tum.de or matthias.probst@tum.de

Supervisors:

Matthias Probst, Manuel Brosch

Research Internships (Forschungspraxis)

SCA of AI Hardware Accelerator

Keywords:
SCA, Neural Networks, Hardware, FPGA

Description

Neural networks are ubiquitous in everyday life. Speech and face recognition as well as driverless cars are just some examples where Artificial Neural Networks (ANNs) are used. Training a deep ANN is very time-consuming and computationally expensive, so the intellectual property stored in an ANN is an asset worth protecting. Additionally, implementations on edge devices need to be power-efficient while maintaining a high throughput. [1] and [2] are examples of frameworks aiming to fulfill these requirements.


A side-channel attack can be used to extract network parameters such as the number and type of layers, as well as weight and bias values. In [3, 4], side-channel attacks on different implementations of ANNs are demonstrated.

In this work, a side-channel attack on auto-generated implementations of different ANNs is to be performed. This includes a detailed analysis of the side-channel properties of the different implementations.
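
Before measuring a real FPGA design, it can help to first simulate the expected leakage and check whether the chosen model distinguishes weight hypotheses at all. The short Python sketch below generates noisy Hamming-weight leakage for a fixed-point multiply and compares the correlation of a correct and an incorrect weight hypothesis; the noise level, leakage point, and example weight values are assumptions made for illustration only.

import numpy as np

rng = np.random.default_rng(0)

def hw(x):
    # Hamming weight of the low byte of each element
    return np.unpackbits(np.atleast_1d(x).astype(np.uint8)[:, None], axis=1).sum(axis=1)

def simulate_traces(weight, n_traces, noise_std=2.0):
    # Random 8-bit activations and one noisy leakage sample per multiply
    acts = rng.integers(0, 256, size=n_traces, dtype=np.uint8)
    leak = hw((acts.astype(np.uint16) * weight) & 0xFF).astype(np.float64)
    return acts, leak + rng.normal(0.0, noise_std, size=n_traces)

acts, traces = simulate_traces(weight=0x5A, n_traces=2000)
pred_correct = hw((acts.astype(np.uint16) * 0x5A) & 0xFF)
pred_wrong = hw((acts.astype(np.uint16) * 0x3C) & 0xFF)
print("correct hypothesis:", np.corrcoef(pred_correct, traces)[0, 1])
print("wrong hypothesis:  ", np.corrcoef(pred_wrong, traces)[0, 1])

A similar simulation can be extended to estimate roughly how many traces are needed for a given noise level before any hardware measurement is set up.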

Start of Internship: Anytime


References:

[1] M. Blott, T. B. Preußer, N. J. Fraser, G. Gambardella, K. O’Brien, Y. Umuroglu, M. Leeser, and K. Vissers, “FINN-R: An end-to-end deep-learning framework for fast exploration of quantized neural networks,” ACM Transactions on Reconfigurable Technology and Systems (TRETS), vol. 11, no. 3, pp. 1–23, 2018.
[2] Y. Umuroglu and M. Jahre, “Streamlined deployment for quantized neural networks,” arXiv preprint arXiv:1709.04060, 2017.
[3] L. Batina, S. Bhasin, D. Jap, and S. Picek, “CSI NN: Reverse engineering of neural network architectures through electromagnetic side channel,” in 28th USENIX Security Symposium (USENIX Security 19), pp. 515–532, 2019.
[4] A. Dubey, R. Cammarota, and A. Aysu, “BoMaNet: Boolean masking of an entire neural network,” arXiv preprint arXiv:2006.09532, 2020.

Prerequisites

  • VHDL/Verilog Knowledge
  • Sichere Implementierung Kryptographischer Verfahren (SIKA, Secure Implementation of Cryptographic Algorithms)
  • Python Skills

Contact

manuel.brosch@tum.de or matthias.probst@tum.de

Supervisors:

Matthias Probst, Manuel Brosch

Teaching

Embedded Systems and Security in summer semester 2020 and the winter semesters 2020/21, 2021/22, and 2022/23

Publications

2023

  • Brosch, Manuel and Probst, Matthias and Glaser, Matthias and Sigl, Georg: A Masked Hardware Accelerator for Feed-Forward Neural Networks With Fixed-Point Arithmetic. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2023, pp. 1–14.

2022

  • Brosch, Manuel and Probst, Matthias and Sigl, Georg: Counteract Side-Channel Analysis of Neural Networks by Shuffling. 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), Antwerp, Belgium, IEEE, 2022.

2021

  • Gruber, Michael and Probst, Matthias and Karl, Patrick and Schamberger, Thomas and Tebelmann, Lars and Tempelmeier, Michael and Sigl, Georg: DOMREP – An Orthogonal Countermeasure for Arbitrary Order Side-Channel and Fault Attack Protection. IEEE Transactions on Information Forensics and Security, vol. 16, 2021, pp. 4321–4335.

2020

  • Gruber, M.; Probst, M.; Tempelmeier, M.: Statistical Ineffective Fault Analysis of GIMLI. 2020 IEEE International Symposium on Hardware Oriented Security and Trust (HOST), 2020.

2019

  • Gruber, M. and Probst, M. and Tempelmeier, M.: Persistent Fault Analysis of OCB, DEOXYS and COLM. 2019 Workshop on Fault Diagnosis and Tolerance in Cryptography (FDTC), Atlanta, USA, 2019.