
M.Sc. Luis Maßny

Technische Universität München

Professorship of Coding and Cryptography (Prof. Wachter-Zeh)


Theresienstr. 90
80333 München


Offered Theses

Deep Learning with Differential Privacy


Differential privacy [1] is a security notion that is widely used in data analytics. A differentially private algorithm guarantees that the privacy of any individual is not harmed, while it remains possible to learn useful statistics about the population as a whole.
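As a small illustration of this notion (a sketch, not part of the thesis description), the classic Laplace mechanism from [1] releases a query answer with noise calibrated to the query's sensitivity and the privacy parameter epsilon:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise of scale sensitivity/epsilon."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a count query over a small database.
# Adding or removing one person changes a count by at most 1,
# so the sensitivity of the query is 1.
ages = [23, 35, 41, 29, 52]
noisy_count = laplace_mechanism(len(ages), sensitivity=1.0, epsilon=0.5)
```

A smaller epsilon means stronger privacy but more noise in the released answer.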

This concept can be transferred to the domain of machine learning. In this setting, a model is trained on potentially sensitive data. In classification tasks, for example, the trained model is then deployed on untrusted devices. Although only the trained model and not the data itself is stored, it has been shown that the model can still leak information about individual training data samples. Thus, a learning algorithm is required that preserves the privacy of the training data samples. Such a differentially private learning algorithm has been introduced in [2].
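The core idea of the algorithm in [2] is to clip each per-example gradient and add Gaussian noise before the parameter update. A minimal NumPy sketch of one such step (function names, shapes, and the plain-SGD update are illustrative simplifications of the original algorithm):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One DP-SGD step: clip each per-example gradient, average, add noise."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down so its norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping norm and batch size.
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noise = rng.normal(0.0, sigma, size=avg.shape)
    return params - lr * (avg + noise)
```

The clipping bounds each sample's influence on the update, which is what makes the added Gaussian noise sufficient for a differential-privacy guarantee.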

[1] Dwork, Cynthia, and Aaron Roth. "The Algorithmic Foundations of Differential Privacy." Foundations and Trends® in Theoretical Computer Science 9.3–4 (2014): 211–407.
[2] Abadi, Martín, et al. "Deep Learning with Differential Privacy." Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. 2016.
[3] Geyer, Robin C., Tassilo Klein, and Moin Nabi. "Differentially Private Federated Learning: A Client Level Perspective." arXiv preprint arXiv:1712.07557 (2017).



Prior knowledge on

  • machine learning
  • probability theory and statistics


Luis Maßny


Private and Secure Federated Learning


In federated learning, a machine learning model shall be trained on private user data with the help of a central server, the so-called federator. This setting differs from other machine learning settings in that the user data shall not be shared with the federator for privacy reasons and/or to decrease the communication load of the system.
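A common instantiation of this setting is federated averaging: each user trains locally on its own data and only sends model updates, which the federator averages. A minimal sketch with the local training routine left abstract (all names here are illustrative, not a prescribed method for the thesis):

```python
import numpy as np

def federated_averaging(global_model, client_datasets, local_update, n_rounds):
    """Minimal FedAvg sketch: clients train locally, the federator averages."""
    model = global_model
    for _ in range(n_rounds):
        # Each user computes an updated model on its private data.
        client_models = [local_update(model, data) for data in client_datasets]
        # The federator aggregates the models; the raw data never leaves the users.
        model = np.mean(client_models, axis=0)
    return model
```

Only model parameters cross the network, which is exactly what creates both the communication savings and the privacy questions discussed below.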

Even though only intermediate results are shared, extra care is necessary to guarantee data privacy. An additional challenge arises if the system includes malicious users that deviate from the protocol and send corrupted computation results.
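One classical countermeasure on the privacy side is secure aggregation via pairwise additive masks: each pair of users agrees on a random mask that one adds and the other subtracts, so the masks cancel in the sum and the federator learns only the aggregate. A sketch under the assumption that users can establish shared randomness (this specific scheme is an illustration, not the topic's prescribed solution):

```python
import numpy as np

def pairwise_masks(n_users, dim, rng):
    """Pairwise additive masks r_ij = -r_ji that cancel in the sum."""
    masks = np.zeros((n_users, dim))
    for i in range(n_users):
        for j in range(i + 1, n_users):
            r = rng.normal(size=dim)
            masks[i] += r  # user i adds the shared randomness
            masks[j] -= r  # user j subtracts it
    return masks

# Each user sends x_i + mask_i; individually these look like noise,
# but their sum equals the sum of the true updates.
rng = np.random.default_rng(1)
x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
masked = x + pairwise_masks(3, 2, rng)
```

Note that this sketch only hides individual updates from an honest-but-curious federator; handling malicious users that send corrupted results requires additional redundancy, which is where the coding-theoretic tools come in.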

The goal of this work is to design, implement and analyze coding- and information-theoretic solutions for privacy and security in federated learning.


  • Coding Theory (e.g., Channel Coding)
  • Information Theory
  • Machine Learning Basics


Ongoing Theses

Secure Record Linkage via Multi-Party Computation


Processing massive amounts of collected data with machine learning algorithms often becomes infeasible on a single machine. To cope with the computational requirements, distributed cloud computing was introduced: a large computational task is split into multiple parts and distributed among worker machines to parallelize the computations and speed up the learning process. However, since confidential data must be shared with third parties and the outcome is threatened by potentially corrupted computations, privacy and security have to be ensured. This is particularly critical in medical environments, where individual patients' information is processed.

To motivate the study of these challenges, a competition, the iDASH privacy and security workshop, is hosted every year [1]. This year, the task is to develop a framework that securely links similar patient-related entries stored in different datasets without compromising privacy - for example, to avoid double counting in further processing steps. During this research internship, the student should use multi-party computation tools to develop a framework that complies with these requirements.
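A basic multi-party computation building block for such a task is additive secret sharing: a value is split into random shares so that no single party learns anything, yet the shares reconstruct the value. A sketch of how two parties could test record identifiers for equality without revealing them (the modulus, party count, and identifiers are illustrative; a real protocol would also blind a nonzero difference so that only the match/no-match bit is revealed):

```python
import secrets

P = 2**61 - 1  # prime modulus for the share arithmetic

def share(secret, n_parties):
    """Split `secret` into n additive shares modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo P."""
    return sum(shares) % P

# Two hospitals secret-share the difference of their record identifiers:
# it reconstructs to zero iff the records match, while no single party
# ever sees the other's identifier in the clear.
id_a, id_b = 4711, 4711
diff_shares = share((id_a - id_b) % P, n_parties=3)
records_match = reconstruct(diff_shares) == 0
```

Any proper subset of the shares is uniformly random, which is what makes the individual messages useless to a curious party.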



Prior knowledge on

  • Coding Theory (e.g., Channel Coding)
  • Linear Algebra
  • Information Theory (optional)