
M.Sc. Christoph Hofmeister

Technische Universität München

Professorship of Coding and Cryptography (Prof. Wachter-Zeh)

Postal address:
Theresienstr. 90
80333 München

Theses

Offered Theses

Private and Secure Federated Learning

Description

In federated learning, a machine learning model is trained on private user data with the help of a central server, the so-called federator. This setting differs from other machine learning settings in that the user data are not shared with the federator, either for privacy reasons or to reduce the communication load of the system.

Even though only intermediate results are shared, extra care is necessary to guarantee data privacy. An additional challenge arises if the system contains malicious users who deviate from the protocol and send corrupt computation results.

The goal of this work is to design, implement and analyze coding- and information-theoretic solutions for privacy and security in federated learning.
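To illustrate the setting, the following is a minimal sketch of one possible federated training loop: each client computes a gradient on its private data and sends only that intermediate result to the federator, which averages the contributions. All names and the least-squares objective are illustrative assumptions, not part of the thesis topic itself.

```python
import numpy as np

def local_gradient(w, X, y):
    # Least-squares gradient computed on a client's private data;
    # the raw data (X, y) never leave the client.
    return 2 * X.T @ (X @ w - y) / len(y)

def federated_round(w, clients, lr=0.1):
    # The federator sees only the clients' gradients (intermediate
    # results), averages them, and takes a gradient step.
    grads = [local_gradient(w, X, y) for X, y in clients]
    return w - lr * np.mean(grads, axis=0)

# Synthetic example: four clients, each holding a private shard of
# noiseless linear data generated from the same ground-truth model.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, clients)
# w now approximates w_true, although no client ever shared its data.
```

Note that only the averaged gradient reaches the model update; the privacy and security questions of the thesis concern exactly what the federator can still infer from these intermediate results.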

Prerequisites

  • Coding Theory (e.g., Channel Coding)
  • Information Theory
  • Machine Learning Basics

Supervisor:

Ongoing Theses

Implementation of a Generic Federated Learning Framework

Description

Since the introduction of federated learning in [1], the body of research on the topic has grown rapidly. In particular, challenges arise with respect to privacy, security, and efficiency. The first half of this research internship aims at implementing a generic framework for simulating decentralized optimization procedures in a federated learning setting. In the second half, the student should use the framework to analyze the performance of selected state-of-the-art schemes.
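A sketch of what such a simulation framework could look like is given below: clients and the federator are separate objects, and the aggregation rule is a pluggable parameter so that different schemes can be compared. The class names, the least-squares local solver, and the default averaging rule are illustrative assumptions, not the internship's actual design.

```python
import numpy as np

class Client:
    """Holds private local data and computes local updates."""
    def __init__(self, X, y):
        self.X, self.y = X, y  # never shared with the federator

    def compute_update(self, w):
        # Local least-squares gradient as a stand-in for an arbitrary
        # local optimization step.
        return 2 * self.X.T @ (self.X @ w - self.y) / len(self.y)

class Federator:
    """Coordinates training; sees only the clients' updates."""
    def __init__(self, clients, aggregate=lambda g: np.mean(g, axis=0)):
        self.clients = clients
        self.aggregate = aggregate  # pluggable aggregation rule

    def run(self, w0, rounds=100, lr=0.1):
        w = w0
        for _ in range(rounds):
            updates = [c.compute_update(w) for c in self.clients]
            w = w - lr * self.aggregate(updates)
        return w

# Demo: five clients with private shards of the same linear model.
rng = np.random.default_rng(0)
w_true = np.array([0.5, 1.5])
clients = []
for _ in range(5):
    X = rng.normal(size=(40, 2))
    clients.append(Client(X, X @ w_true))

w = Federator(clients).run(np.zeros(2))
# Swapping the aggregation rule simulates a different scheme:
w_med = Federator(clients, aggregate=lambda g: np.median(g, axis=0)).run(np.zeros(2))
```

Making the aggregation rule a parameter is one way to let the same simulation loop cover both plain federated averaging and the robust or secure aggregation schemes the internship should benchmark.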

Prerequisites

  • Coding Theory (e.g., Channel Coding)
  • Information Theory
  • Machine Learning Basics
  • Python (Intermediate Level)

Supervisor:

Secure Federated Learning

Description

In the initially proposed federated learning setting [1], the federator observes the partial gradient computations of all clients contributing to a decentralized training procedure. However, clients might send malicious (corrupt) computations to intentionally harm the training process. In this model, security against malicious clients can be ensured by running statistics on the partial results [2, 3]: for example, clients whose results differ significantly from the vast majority of responses can be excluded from the training process.

In recent works, secure aggregation of the partial results was proposed [4]. The goal is to let the federator observe only the sum of all local models, and thereby to enhance the privacy of the clients' data. These works, however, complicate the use of statistics to detect corrupt partial computations, as the federator only observes the aggregated result.

The goal of this research internship is to review the related literature on secure federated learning, including its limitations, and to explore possible approaches that ensure security against potentially corrupt results while preserving the privacy of the clients' data.
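The tension described above can be illustrated with two toy computations. The first uses the coordinate-wise median, a simple robust statistic in the spirit of [2, 3] (not the exact rules from those papers), to blunt one corrupt update. The second shows pairwise-mask secure aggregation in the spirit of [4]: the masks cancel in the sum, so the federator learns only the sum of the updates and can no longer inspect individual results for outliers. All concrete numbers are illustrative assumptions.

```python
import numpy as np

# Three honest client updates and one corrupt one.
updates = [np.array([1.0, 1.0]), np.array([1.1, 0.9]),
           np.array([0.9, 1.1]), np.array([100.0, -100.0])]

# Plain averaging is dragged far off by the single outlier, while the
# coordinate-wise median stays close to the honest values.
mean_agg = np.mean(updates, axis=0)
median_agg = np.median(updates, axis=0)

# Pairwise-mask secure aggregation: clients i < j agree on a random
# mask; client i adds it, client j subtracts it, so every mask cancels
# in the sum and the federator sees only the aggregate.
rng = np.random.default_rng(1)
n = len(updates)
masks = {(i, j): rng.normal(size=2) for i in range(n) for j in range(i + 1, n)}
masked = []
for i in range(n):
    m = updates[i].copy()
    for j in range(n):
        if i < j:
            m += masks[(i, j)]
        elif j < i:
            m -= masks[(j, i)]
    masked.append(m)

agg = np.sum(masked, axis=0)  # equals the sum of the raw updates
```

Here `agg` equals the unmasked sum, but each individual `masked[i]` looks random, so an outlier test like the median above can no longer be applied by the federator alone; reconciling these two mechanisms is the core of the internship.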

[1] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-Efficient Learning of Deep Networks from Decentralized Data,” in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR, vol. 54, pp. 1273–1282, Apr. 2017.

[2] P. Blanchard, E. M. El Mhamdi, R. Guerraoui, and J. Stainer, “Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent,” in Advances in Neural Information Processing Systems, 2017, vol. 30.

[3] Z. Yang and W. U. Bajwa, “ByRDiE: Byzantine-Resilient Distributed Coordinate Descent for Decentralized Learning,” IEEE Transactions on Signal and Information Processing over Networks, vol. 5, no. 4, pp. 611–627, Dec. 2019.

[4] K. Bonawitz et al., “Practical Secure Aggregation for Privacy-Preserving Machine Learning,” in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017. doi: 10.1145/3133956.3133982.

Prerequisites

  • Coding Theory (e.g., Channel Coding)
  • Information Theory

Supervisor: