Security of Neural Network Implementations
Contact: Manuel Brosch, Matthias Probst
In recent years, the deployment of neural networks on edge devices has emerged as a dominant trend in modern computing. Compared to conventional cloud-based inference, where data must be transmitted between the device and remote servers, on-device or "edge-AI" processing improves power efficiency, reduces latency, and enhances data privacy by keeping sensitive information local. However, this paradigm shift introduces new challenges, particularly the need to maintain high inference accuracy and throughput within the tight computational, memory, and energy constraints of embedded systems.

Moreover, the training of high-performance neural networks is a computationally intensive and costly process, often involving large datasets and extensive optimization cycles. As a result, trained models constitute valuable intellectual property and strategic assets that warrant strong protection against unauthorized extraction or tampering. In the edge-AI context, where devices are physically accessible and potentially exposed to adversarial manipulation, ensuring model confidentiality and integrity becomes a critical security concern.
Our research is dedicated to the secure and efficient hardware/software co-design of neural network implementations, with a particular emphasis on understanding and mitigating their side-channel vulnerabilities. By systematically analyzing the side-channel properties of neural accelerators, we aim to quantify their susceptibility to physical attacks and develop lightweight, implementation-aware countermeasures that preserve both performance and security.
Specifically, our research addresses two main directions:
- Side-channel attacks on artificial and spiking neural networks, including multilayer perceptrons (MLPs) and convolutional neural networks (CNNs), to evaluate information leakage and attack feasibility; a minimal attack sketch follows this list.
- Efficient countermeasure design and implementation to enhance resilience against side-channel attacks while maintaining energy efficiency and computational throughput; see the shuffling sketch further below.
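To make the first direction concrete, the following is a minimal sketch of a correlation power analysis (CPA) against a single neuron, assuming a Hamming-weight leakage model and simulated traces. The secret weight value, trace count, and noise level are hypothetical illustration choices, not the setup of our publications.

```python
import numpy as np

rng = np.random.default_rng(0)
SECRET_WEIGHT = 173   # hypothetical 8-bit fixed-point weight the attacker recovers
N_TRACES = 2000       # number of simulated inference runs observed

def hamming_weight(x):
    """Number of set bits per element; the classic power-leakage model."""
    return np.unpackbits(x.astype(np.uint8)[:, None], axis=1).sum(axis=1)

# Attacker-known 8-bit inputs and simulated noisy power samples of the
# low byte of the input*weight product.
inputs = rng.integers(0, 256, N_TRACES, dtype=np.uint16)
leakage = hamming_weight((inputs * SECRET_WEIGHT) & 0xFF)
traces = leakage + rng.normal(0.0, 1.0, N_TRACES)  # additive measurement noise

# Rank every weight hypothesis by correlating its predicted leakage with the
# measured traces; the correct weight yields the strongest correlation.
candidates = np.arange(1, 256)  # skip 0: a zero weight leaks only a constant
scores = [abs(np.corrcoef(hamming_weight((inputs * w) & 0xFF), traces)[0, 1])
          for w in candidates]
print("recovered weight:", int(candidates[np.argmax(scores)]))  # -> 173
```

Given enough traces, the hypothesis matching the secret weight produces the highest correlation; attacks on real accelerators follow the same principle with measured power or electromagnetic traces.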
Through this dual focus, our work contributes to a deeper understanding of the security implications of edge-AI and advances the development of robust, trustworthy neuromorphic and machine-learning hardware.
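As an example of the second direction, the sketch below illustrates the principle behind shuffling: the evaluation order of a layer's neurons is re-randomized on every inference, so a power sample at a fixed point in time no longer corresponds to a fixed weight. The NumPy layer and its dimensions are hypothetical; this is a functional illustration of the principle, not the hardware design from the DATE 2022 paper.

```python
import numpy as np

rng = np.random.default_rng()

def dense_layer_shuffled(x, weights, biases):
    """Dense layer whose neurons are evaluated in a fresh random order."""
    out = np.empty(weights.shape[0], dtype=np.int64)
    for i in rng.permutation(weights.shape[0]):    # new permutation per inference
        out[i] = np.dot(weights[i], x) + biases[i] # MAC sequence for one neuron
    return np.maximum(out, 0)                      # ReLU; result is order-independent

# Hypothetical layer: 4 neurons, 8-element input, small integer weights.
w = rng.integers(-8, 8, (4, 8))
b = rng.integers(-8, 8, 4)
x = rng.integers(0, 16, 8)
print(dense_layer_shuffled(x, w, b))
```

Shuffling leaves each intermediate value unmasked and only randomizes when it is processed, so it raises the attack effort roughly in proportion to the number of shuffled operations; masking, as used in the TVLSI 2023 accelerator, instead splits each intermediate value into randomized shares.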
Selected Publications
Probst, Matthias and Brosch, Manuel and Sigl, Georg: Side-Channel Analysis of Integrate-and-Fire Neurons Within Spiking Neural Networks. IEEE Transactions on Circuits and Systems I: Regular Papers, 2024, 1-13
Brosch, Manuel and Probst, Matthias and Glaser, Matthias and Sigl, Georg: A Masked Hardware Accelerator for Feed-Forward Neural Networks With Fixed-Point Arithmetic. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2023, 1-14
Brosch, Manuel and Probst, Matthias and Sigl, Georg: Counteract Side-Channel Analysis of Neural Networks by Shuffling. 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), IEEE, 2022, Antwerp, Belgium