Differentially-Private and Robust Federated Learning
Description
Federated learning (FL) is a machine learning paradigm that aims to learn collaboratively from decentralized private data owned by entities referred to as clients. However, due to its decentralized nature, FL is susceptible to poisoning attacks, in which malicious clients try to corrupt the learning process by modifying their data or their local model updates. Moreover, the updates sent by the clients may leak information about their private training data. This thesis aims to investigate existing robust aggregation techniques in FL and to combine them with differential privacy techniques.
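For a first intuition of how the two ingredients fit together, the minimal sketch below simulates one server-side aggregation round: client updates are clipped in norm, combined with a coordinate-wise median as one possible robust aggregator, and perturbed with Gaussian noise in the style of the Gaussian mechanism. The function names, clipping norm, and noise level are illustrative assumptions only; calibrating the noise to the sensitivity of the chosen aggregator is part of what combining the two techniques involves.

import torch


def clip_update(update: torch.Tensor, clip_norm: float) -> torch.Tensor:
    """Bound a client update in L2 norm; clipping limits the sensitivity
    to which the DP noise must be calibrated."""
    scale = min(1.0, clip_norm / (update.norm().item() + 1e-12))
    return update * scale


def robust_private_aggregate(updates, clip_norm=1.0, noise_std=0.1):
    """Clip the client updates, aggregate them with a coordinate-wise median
    (robust to a minority of poisoned updates), and perturb the result with
    Gaussian noise (Gaussian mechanism for differential privacy).
    clip_norm and noise_std are illustrative defaults, not tuned values."""
    clipped = torch.stack([clip_update(u, clip_norm) for u in updates])
    median, _ = clipped.median(dim=0)
    return median + noise_std * torch.randn_like(median)


if __name__ == "__main__":
    torch.manual_seed(0)
    d = 10
    honest = [0.01 * torch.randn(d) for _ in range(8)]
    poisoned = [100.0 * torch.ones(d) for _ in range(2)]  # malicious clients
    print(robust_private_aggregate(honest + poisoned))

The references below discuss other robust aggregators and privacy mechanisms beyond this toy example.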
References
[1] - https://arxiv.org/pdf/2304.09762.pdf
[2] - https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9757841
[3] - https://dl.acm.org/doi/abs/10.1145/3465084.3467919
Prerequisites
- Knowledge of machine learning and gradient descent optimization
- Proficiency in Python and PyTorch
- Undergraduate statistics courses
- Prior knowledge of differential privacy is a plus
Contact
marvin.xhemrishi@tum.de
luis.massny@tum.de