
M.Sc. Luis Maßny

Technical University of Munich

Associate Professorship of Coding and Cryptography (Prof. Wachter-Zeh)

Postal address

Theresienstr. 90
80333 München

Theses

Available Theses

Private and Secure Federated Learning

Description

In federated learning, a machine learning model is trained on private user data with the help of a central server, the so-called federator. This setting differs from other machine learning settings in that the user data is not shared with the federator, either for privacy reasons or to reduce the communication load of the system; instead, users compute locally and share only intermediate results, such as model updates.

Even though only intermediate results are shared, extra care is necessary to guarantee data privacy, because these results can still leak information about the underlying user data. An additional challenge arises if the system includes malicious users who deviate from the protocol and send corrupted computation results.

The goal of this work is to design, implement and analyze coding- and information-theoretic solutions for privacy and security in federated learning.
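To make the aggregation step concrete, the following Python sketch (purely illustrative; all function and variable names are invented for this example) shows one classical building block, secure aggregation via pairwise additive masking: users add random masks that cancel in the sum, so the federator can recover the aggregate update without seeing any individual one. Practical schemes operate over finite fields and derive the masks from shared keys; plain floats are used here only to demonstrate the cancellation.

    import numpy as np

    rng = np.random.default_rng(0)

    def masked_updates(updates):
        """Mask per-user updates with pairwise one-time pads that cancel in the sum."""
        n = len(updates)
        masked = [u.copy() for u in updates]
        for i in range(n):
            for j in range(i + 1, n):
                mask = rng.normal(size=updates[i].shape)
                masked[i] += mask  # user i adds the pairwise mask ...
                masked[j] -= mask  # ... user j subtracts it, so the sum is unchanged
        return masked

    # Toy example: three users, 4-dimensional model updates.
    updates = [rng.normal(size=4) for _ in range(3)]
    received = masked_updates(updates)

    # The federator sees only masked vectors, yet their sum is the true aggregate.
    print(np.allclose(sum(received), sum(updates)))  # True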

Prerequisites

  • Coding Theory (e.g., Channel Coding)
  • Information Theory
  • Machine Learning Basics


Theses in Progress

Synchronization and Signal Detection in Wireless Communications: A Deep Learning Approach

Description

High-speed mobility poses a significant challenge for any kind of wireless communication. To enable future IoT and autonomous-driving applications in vehicular environments, several standards have been developed by IEEE and 3GPP. Nevertheless, current implementations are unable to achieve reliable high-throughput connectivity with extremely low latency. These shortcomings stem from limitations of the proposed receiver designs, which cannot estimate and track the rapidly changing channel conditions encountered at high speeds and in urban environments.

In this thesis, we propose the use of supervised machine learning for synchronization and signal detection in IEEE 802.11p. Our design aims to reduce complexity while performing at least as well as, and ideally better than, state-of-the-art methods. For a fair comparison, we will run bit-error-rate simulations of traditional signal-detection methods against our ML model on previously collected real channel measurements from a high-speed mobility scenario.
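As a toy illustration of this approach (not the receiver design of the thesis; the channel, features, and parameters below are invented for the example), the Python sketch trains a minimal supervised detector on noisy BPSK samples from a two-tap channel and compares its bit-error rate with a naive hard-decision baseline:

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(num_symbols, snr_db):
        """Toy stand-in for a mobile channel: BPSK through a 2-tap filter plus noise."""
        bits = rng.integers(0, 2, num_symbols)
        symbols = 1.0 - 2.0 * bits                    # BPSK mapping: 0 -> +1, 1 -> -1
        rx = np.convolve(symbols, [1.0, 0.4], mode="same")
        rx += 10 ** (-snr_db / 20) * rng.normal(size=rx.shape)
        # Features: a sliding window of 3 received samples around each symbol.
        X = np.stack([np.roll(rx, 1), rx, np.roll(rx, -1)], axis=1)
        return X, bits

    # Train a logistic-regression detector (a minimal supervised-ML stand-in
    # for a deep model) by plain gradient descent on the cross-entropy loss.
    X_train, y_train = simulate(20000, snr_db=8)
    w = np.zeros(3)
    for _ in range(500):
        p = 1 / (1 + np.exp(-(X_train @ w)))
        w -= 0.1 * X_train.T @ (p - y_train) / len(y_train)

    # Compare bit-error rates against a naive sign detector on the raw sample.
    X_test, y_test = simulate(20000, snr_db=8)
    print("ML detector BER:", np.mean((X_test @ w > 0).astype(int) != y_test))
    print("Baseline BER:   ", np.mean((X_test[:, 1] < 0).astype(int) != y_test))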

Supervisor:

Luis Maßny - Dr. Alberto Viseras (Motius GmbH)

Deep Learning with Differential Privacy

Description

Differential privacy [1] is a privacy notion that is widely used in data analytics. A differentially private algorithm guarantees that the privacy of any individual is not compromised, while it is still possible to learn about the population as a whole.

This concept can be transferred to the domain of machine learning, where a model is trained on potentially sensitive data. In classification tasks, for example, the trained model is often deployed on untrusted devices. Although only the trained model and not the data itself is stored, it has been shown that the model can still leak information about individual training samples. Thus, a learning algorithm is required that preserves the privacy of the training data. Such a differentially private learning algorithm was introduced in [2]; an extension to the federated setting is discussed in [3].
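To make the mechanism of [2] concrete, the Python sketch below implements the core DP-SGD step: each per-example gradient is clipped to an L2 norm bound, and Gaussian noise is added to the summed gradient before the update. The logistic-regression model and all parameter values are toy choices, and the privacy accounting of [2] (the moments accountant) is omitted for brevity.

    import numpy as np

    rng = np.random.default_rng(0)

    def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.1):
        """One DP-SGD step for logistic regression on a mini-batch, following [2]."""
        grads = []
        for xi, yi in zip(X, y):
            p = 1 / (1 + np.exp(-(xi @ w)))
            g = (p - yi) * xi                                   # per-example gradient
            g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))   # clip to L2 norm bound
            grads.append(g)
        # Gaussian noise on the sum bounds any single example's influence.
        noisy_sum = np.sum(grads, axis=0) + rng.normal(scale=noise_mult * clip, size=w.shape)
        return w - lr * noisy_sum / len(X)

    # Toy usage: a linearly separable binary classification task.
    X = rng.normal(size=(256, 5))
    y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(float)
    w = np.zeros(5)
    for _ in range(200):
        batch = rng.choice(len(X), size=32, replace=False)
        w = dp_sgd_step(w, X[batch], y[batch])
    print("train accuracy:", np.mean(((X @ w) > 0) == y))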

[1] Dwork, Cynthia, and Aaron Roth. "The algorithmic foundations of differential privacy." Foundations and Trends® in Theoretical Computer Science 9.3–4 (2014): 211-407.
[2] Abadi, Martin, et al. "Deep learning with differential privacy." Proceedings of the 2016 ACM SIGSAC conference on computer and communications security. 2016.
[3] Geyer, Robin C., Tassilo Klein, and Moin Nabi. "Differentially private federated learning: A client level perspective." arXiv preprint arXiv:1712.07557 (2017).


Prerequisites

Prior knowledge of

  • machine learning
  • probability theory and statistics

Contact

Luis Maßny (luis.massny@tum.de)
