Photo of Manuel Brosch

M.Sc. Manuel Brosch

Research Interests

  • Side-Channel Analysis of Neural Networks, AI Hardware Accelerators and Neuromorphic Hardware
  • Countermeasures against Side-Channel Analysis
  • Secure Implementations of Neural Networks

Open Positions for Students

If you are interested in my research area, feel free to contact me about a possible Bachelor's thesis, Master's thesis, or research internship.

Bachelor's Theses

Enhancing a masked AI Accelerator

Keywords:
SCA, Machine Learning, VHDL, Masking

Description

Artificial Intelligence (AI) is experiencing growing popularity in edge devices. The increasing use of AI on edge devices raises the importance of protecting the Intellectual Property (IP) stored within the algorithm. As an attacker can gain physical access to the device, hardware attacks such as Side-Channel Analysis (SCA) must be considered [1]. SCA uses physical quantities such as the power consumption to extract valuable information about the AI algorithm.

A common technique to counter SCA is masking [2], which introduces random numbers to make intermediate results and the power consumption independent of secret values.
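
To illustrate the principle only (the accelerator in this project uses its own masking scheme in hardware), the following minimal Python sketch shows first-order Boolean masking of a single byte; the secret weight value is hypothetical:

    import secrets

    def mask(value: int) -> tuple[int, int]:
        # Split an 8-bit value into two Boolean shares using a fresh random mask.
        # Each share on its own is uniformly random and therefore independent
        # of the secret value.
        r = secrets.randbelow(256)
        return value ^ r, r

    def unmask(share0: int, share1: int) -> int:
        # Recombining the shares recovers the original value.
        return share0 ^ share1

    secret_weight = 0xA7                    # hypothetical secret weight byte
    s0, s1 = mask(secret_weight)
    assert unmask(s0, s1) == secret_weight

Leakage that depends on only one share reveals nothing about the secret; the difficulty in hardware is to ensure that shares are never combined unintentionally, for example through glitches.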

In this work, an existing FPGA implementation of a neural network accelerator should be extended to execute different types of neural networks.

 

Start: Anytime

References

[1] Lejla Batina, Shivam Bhasin, Dirmanto Jap, and Stjepan Picek. 2019. CSI NN: reverse engineering of neural network architectures through electromagnetic side channel. In Proceedings of the 28th USENIX Conference on Security Symposium (SEC'19). USENIX Association, USA, 515–532.

[2] Konstantinos Athanasiou, Thomas Wahl, A. Ding, and Yunsi Fei. 2022. Masking Feedforward Neural Networks Against Power Analysis Attacks. Proceedings on Privacy Enhancing Technologies 2022, 501–521. doi:10.2478/popets-2022-0025.


Prerequisites

  • VHDL
  • Python

Contact

manuel.brosch@tum.de

Supervisor:

Manuel Brosch

Master's Theses

SCA of AI Hardware Accelerator

Keywords:
SCA, Neural Networks, Hardware, FPGA

Description

Neural networks are ubiquitous in everyday life. Speech and face recognition as well as driverless cars are just some examples where Artificial Neural Networks (ANNs) are used. Training a deep ANN is very time-consuming and computationally expensive. Thus, the intellectual property stored in an ANN is an asset worth protecting. Additionally, implementations on edge devices need to be power-efficient whilst maintaining a high throughput. [1] and [2] are examples of frameworks aiming to fulfill these requirements.


A side-channel attack can be used to extract network parameters such as the number or type of layers, as well as weight and bias values. In [3, 4], side-channel attacks on different implementations of ANNs are demonstrated.
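
As a rough illustration of the idea only (with purely synthetic traces and a hypothetical Hamming-weight leakage model, not our measurement setup), a correlation-based recovery of a single 8-bit weight could look like this in Python:

    import numpy as np

    rng = np.random.default_rng(0)

    def hamming_weight(x: np.ndarray) -> np.ndarray:
        # Hamming weight of the low byte of each element.
        return np.unpackbits(x.astype(np.uint8)[:, None], axis=1).sum(axis=1)

    # Synthetic measurements: the device is assumed to leak the Hamming weight
    # of the low byte of (weight * input), plus Gaussian noise.
    true_weight = 93
    inputs = rng.integers(0, 256, size=5000)
    traces = hamming_weight((true_weight * inputs) & 0xFF) + rng.normal(0.0, 1.0, inputs.size)

    # Correlation power analysis: the weight hypothesis whose leakage model
    # correlates best with the measurements is taken as the recovered weight.
    correlations = [np.corrcoef(hamming_weight((g * inputs) & 0xFF), traces)[0, 1]
                    for g in range(256)]
    print("recovered weight:", int(np.argmax(correlations)))

In the thesis, real traces from the generated FPGA implementations replace the synthetic ones, and the leakage model has to be derived from the actual datapath.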

In this work, a side-channel attack on automatically generated implementations of different ANNs should be performed. This includes a detailed analysis of the side-channel properties of the different implementations.

Start of Thesis: Anytime


References:

[1] M. Blott, T. B. Preußer, N. J. Fraser, G. Gambardella, K. O’Brien, Y. Umuroglu, M. Leeser, and K. Vissers, “FINN-R: An end-to-end deep-learning framework for fast exploration of quantized neural networks,” ACM Transactions on Reconfigurable Technology and Systems (TRETS), vol. 11, no. 3, pp. 1–23, 2018.
[2] Y. Umuroglu and M. Jahre, “Streamlined deployment for quantized neural networks,” arXiv preprint arXiv:1709.04060, 2017.
[3] L. Batina, S. Bhasin, D. Jap, and S. Picek, “CSI NN: Reverse engineering of neural network architectures through electromagnetic side channel,” in 28th USENIX Security Symposium (USENIX Security 19), pp. 515–532, 2019.
[4] A. Dubey, R. Cammarota, and A. Aysu, “BoMaNet: Boolean masking of an entire neural network,” arXiv preprint arXiv:2006.09532, 2020.

Prerequisites

  • VHDL/Verilog Knowledge
  • Sichere Implementierung Kryptographischer Verfahren (SIKA)
  • Python Skills

Contact

manuel.brosch@tum.de or matthias.probst@tum.de

Supervisor:

Manuel Brosch, Matthias Probst

Research Internships (Forschungspraxis)

Enhancing a masked AI Accelerator

Keywords:
SCA, Machine Learning, VHDL, Masking

Description

Artificial Intelligence (AI) is experiencing growing popularity in edge devices. The increasing use of AI on edge devices raises the importance of protecting the Intellectual Property (IP) stored within the algorithm. As an attacker can gain physical access to the device, hardware attacks such as Side-Channel Analysis (SCA) must be considered [1]. SCA uses physical quantities such as the power consumption to extract valuable information about the AI algorithm.

A common technique to counter SCA is masking [2], which introduces random numbers to make intermediate results and the power consumption independent of secret values.

In this work, an existing FPGA implementation of a neural network accelerator should be extended to execute different types of neural networks.

 

Start: Anytime

References

[1] Lejla Batina, Shivam Bhasin, Dirmanto Jap, and Stjepan Picek. 2019. CSI NN: reverse engineering of neural network architectures through electromagnetic side channel. In Proceedings of the 28th USENIX Conference on Security Symposium (SEC'19). USENIX Association, USA, 515–532.

[2] Konstantinos Athanasiou, Thomas Wahl, A. Ding, and Yunsi Fei. 2022. Masking Feedforward Neural Networks Against Power Analysis Attacks. Proceedings on Privacy Enhancing Technologies 2022, 501–521. doi:10.2478/popets-2022-0025.


Prerequisites

  • VHDL
  • Python

Contact

manuel.brosch@tum.de

Supervisor:

Manuel Brosch

SCA of AI Hardware Accelerator

Keywords:
SCA, Neural Networks, Hardware, FPGA

Description

Neural networks are ubiquitous in everyday life. Speech and face recognition as well as driverless cars are just some examples where Artificial Neural Networks (ANNs) are used. Training a deep ANN is very time-consuming and computationally expensive. Thus, the intellectual property stored in an ANN is an asset worth protecting. Additionally, implementations on edge devices need to be power-efficient whilst maintaining a high throughput. [1] and [2] are examples of frameworks aiming to fulfill these requirements.


A side-channel attack can be used to extract network parameters such as the number or type of layers, as well as weight and bias values. In [3, 4], side-channel attacks on different implementations of ANNs are demonstrated.

In this work, a side-channel attack on automatically generated implementations of different ANNs should be performed. This includes a detailed analysis of the side-channel properties of the different implementations.

Start of Thesis: Anytime


References:

[1] M. Blott, T. B. Preußer, N. J. Fraser, G. Gambardella, K. O’Brien, Y. Umuroglu, M. Leeser, and K. Vissers, “FINN-R: An end-to-end deep-learning framework for fast exploration of quantized neural networks,” ACM Transactions on Reconfigurable Technology and Systems (TRETS), vol. 11, no. 3, pp. 1–23, 2018.
[2] Y. Umuroglu and M. Jahre, “Streamlined deployment for quantized neural networks,” arXiv preprint arXiv:1709.04060, 2017.
[3] L. Batina, S. Bhasin, D. Jap, and S. Picek, “CSI NN: Reverse engineering of neural network architectures through electromagnetic side channel,” in 28th USENIX Security Symposium (USENIX Security 19), pp. 515–532, 2019.
[4] A. Dubey, R. Cammarota, and A. Aysu, “BoMaNet: Boolean masking of an entire neural network,” arXiv preprint arXiv:2006.09532, 2020.

Prerequisites

  • VHDL/Verilog Knowledge
  • Sichere Implementierung Kryptographischer Verfahren (SIKA)
  • Python Skills

Contact

manuel.brosch@tum.de or matthias.probst@tum.de

Supervisor:

Manuel Brosch, Matthias Probst

Student Assistant Jobs

Side-Channel Analysis of Error-Correcting Codes for PUFs

Description

Physical Unclonable Functions (PUFs) exploit manufacturing process variations to generate unique signatures. PUFs and error-correcting codes can be combined to reliably generate cryptographically strong keys. However, the implementation of error-correcting codes is prone to physical attacks such as side-channel attacks, which exploit the information leaked during the computation of secret intermediate states to recover the secret key. Therefore, the implementation of error-correcting codes must also include proper countermeasures against side-channel attacks.
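
For intuition only, the following toy Python sketch uses a repetition code in a fuzzy-commitment style construction; the implementations in this project use different, more capable codes, but the key-dependent intermediates that a side-channel attack targets arise in the same way:

    import secrets

    N = 5                                    # repetition factor per key bit

    def enroll(puf_response: list[int], key: list[int]) -> list[int]:
        # Helper data = PUF response XOR repetition-encoded key (fuzzy commitment).
        codeword = [b for b in key for _ in range(N)]
        return [p ^ c for p, c in zip(puf_response, codeword)]

    def reproduce(noisy_response: list[int], helper: list[int]) -> list[int]:
        # XOR with the helper data yields the codeword plus PUF noise; majority
        # voting corrects the errors. These key-dependent intermediates are
        # exactly what a side-channel attack targets.
        received = [p ^ h for p, h in zip(noisy_response, helper)]
        return [int(sum(received[i * N:(i + 1) * N]) > N // 2)
                for i in range(len(received) // N)]

    key = [1, 0, 1, 1]
    puf = [secrets.randbelow(2) for _ in range(len(key) * N)]
    helper = enroll(puf, key)
    noisy = puf.copy(); noisy[3] ^= 1        # one flipped PUF response bit
    assert reproduce(noisy, helper) == key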

The goal of this position is to evaluate the side-channel resistance of a secure implementation of error-correcting codes for PUFs on an FPGA. The work consists of the following steps:

  • Get familiar with currently available implementations of error-correcting codes for PUFs
  • Adapt and improve current implementations (VHDL)
  • Develop a measurement setup for side-channel analysis (Matlab/Python)
  • Perform side-channel analysis using the state-of-the-art EMF measurement equipment in our lab (oscilloscope knowledge + Matlab/Python required); a minimal analysis sketch follows below
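
As a starting point for the analysis step, a non-specific leakage assessment (Welch's t-test) can be sketched in a few lines of Python; the traces below are synthetic placeholders for the oscilloscope measurements, and the fixed-vs-random grouping is just one common choice:

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-ins for recorded traces: two groups (e.g. fixed vs.
    # random input), shape (number of traces, samples per trace).
    group_a = rng.normal(0.0, 1.0, (1000, 500))
    group_a[:, 250] += 0.5                   # artificially injected leakage
    group_b = rng.normal(0.0, 1.0, (1000, 500))

    def welch_t(a: np.ndarray, b: np.ndarray) -> np.ndarray:
        # Welch's t-statistic per time sample (non-specific leakage test).
        return (a.mean(0) - b.mean(0)) / np.sqrt(
            a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))

    t = welch_t(group_a, group_b)
    print("samples exceeding |t| > 4.5:", np.flatnonzero(np.abs(t) > 4.5))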

Prerequisites

The ideal candidate should have:

  • Previous experience in field of digital design (VHDL/Vivado/Xilinx FPGA)
  • Basic knowledge on using lab equipment (e.g. oscilloscope, ...)
  • Basic knowledge in statistics
  • Good programming skills in Matlab/Python
  • Attendance at the lecture “Secure Implementation of Cryptographic Algorithms” is advantageous

 

Contact

Email: m.pehl@tum.de or manuel.brosch@tum.de

Supervisor:

Michael Pehl, Manuel Brosch

Publications

2022

  • Brosch, Manuel; Probst, Matthias; Sigl, Georg: Counteract Side-Channel Analysis of Neural Networks by Shuffling. 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), IEEE, 2022, Antwerp, Belgium.