News

Paper presented at SAIAD Workshop, June 19, 2021



Efficiently deploying learning-based systems on embedded hardware is challenging for various reasons, two of which are considered in this paper: the model's size and its robustness against attacks. Together with our partners from BMW and KIT, we combine adversarial training and model pruning in a joint formulation of the fundamental learning objective during training. This renders the classifier robust against adversarial attacks, enables stronger model compression, and reduces the overall computational effort.

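The announcement itself gives no implementation details. Purely as an illustration, the following PyTorch sketch shows one way adversarial training and magnitude-based pruning during training ("in-train" pruning) might be combined in a single loop. The FGSM attack, the pruning schedule, and all hyperparameters are placeholder assumptions for the sketch, not the formulation used in the paper.

```python
# Minimal sketch (not the paper's exact method): adversarial training combined
# with in-train magnitude pruning. Model, data loader, attack choice, and the
# pruning schedule are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune


def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Generate FGSM adversarial examples (assumes inputs scaled to [0, 1])."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()


def train_robust_pruned(model, loader, epochs=10, target_sparsity=0.5):
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    conv_layers = [m for m in model.modules() if isinstance(m, nn.Conv2d)]

    for epoch in range(epochs):
        # Gradually increase sparsity while training ("in-train" pruning).
        sparsity = target_sparsity * (epoch + 1) / epochs
        for layer in conv_layers:
            prune.l1_unstructured(layer, name="weight", amount=sparsity)

        for x, y in loader:
            x_adv = fgsm_attack(model, x, y)
            optimizer.zero_grad()
            # Joint objective: clean loss plus adversarial loss on the pruned model.
            loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
            loss.backward()
            optimizer.step()

        # Bake the current mask into the weights before the next pruning step.
        for layer in conv_layers:
            prune.remove(layer, "weight")
    return model
```
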
The paper "Adversarial Robust Model Compression using In-Train Pruning" was presented at the SAIAD Workshop on June 19th, 2021.

The authors are Manoj-Rohit Vemparala, Nael Fasfous, Alexander Frickenstein, Sreetama Sarkar, Qi Zhao, Sabine Kuhn, Lukas Frickenstein, Anmol Singh, Christian Unger, Naveen-Shankar Nagaraja, Christian Wressnegger, and Walter Stechele.

The Workshop on Safe Artificial Intelligence for Automated Driving (SAIAD) was held virtually on June 19th, 2021, in conjunction with the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2021): https://sites.google.com/view/saiad2021