
M.Sc. Luis Maßny

Technische Universität München

Professur für Codierung und Kryptographie (Prof. Wachter-Zeh)

Postal Address

Theresienstr. 90
80333 München

Biography

  • Doctoral researcher under the supervision of Prof. Antonia Wachter-Zeh and Dr. Rawad Bitar, Technical University of Munich, since September 2021.
  • Development Engineer
  • M.Sc. Electrical Engineering, Information Technology, and Computer Engineering, RWTH Aachen University, 2020.
  • B.Sc. Electrical Engineering, Information Technology, and Computer Engineering, RWTH Aachen University, 2018.

Research

  • Security and Privacy for Distributed Computing Systems
  • Wireless Communication and Signal Processing
  • Federated Learning
  • Coding Theory
  • Information Theory

Teaching

Theses

Available Theses

Distributed Noise Generation for Secure Over-the-Air Computation with Applications in Federated Learning

Short description:
Over-the-Air (OtA) computation is a promising approach with the potential to drastically reduce the communication overhead of wireless distributed data-processing systems (e.g., Federated Learning). However, this method is prone to eavesdropping, so artificial noise can be employed to secure the communication. An open problem is how to design this artificial noise in a distributed manner among the users.

Description

Novel use cases for mobile communication networks include the aggregation of large amounts of data, which is stored in a distributed manner across network users. For instance, Federated Learning requires the aggregation of machine learning model updates from contributing users.

Over-the-Air (OtA) computation is an approach with the potential to drastically reduce the communication overhead of wireless distributed data-processing systems (e.g., Federated Learning). It exploits the multiple-access property and linearity of the wireless channel to compute sums of pre-processed data directly over the channel. At the same time, this very property gives eavesdroppers ample opportunity to learn about the transmitted signal. If the legitimate receiver is to have exclusive access to the computation result, it is crucial to employ additional security measures.
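
To make the aggregation idea concrete, the following minimal Python sketch simulates OtA summation over an idealized real-valued Gaussian multiple-access channel; the flat channel gains, noise level, and variable names are illustrative assumptions, not part of any specific scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: K users each hold a local value x_k (e.g., one entry
# of a model update) and transmit simultaneously over a real-valued Gaussian
# multiple-access channel. The channel superimposes the signals, so the
# receiver observes the sum plus noise without decoding individual users.
K = 5
x = rng.normal(size=K)        # local data held by the K users
h = np.full(K, 1.0)           # channel gains (assumed known, illustrative)
noise_std = 0.1               # receiver noise level (illustrative)

tx = x / h                                       # pre-processing: invert the channel gain
y = np.sum(h * tx) + noise_std * rng.normal()    # superposition over the channel

print("true sum:    ", x.sum())
print("OtA estimate:", y)
```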

Artificial noise can be employed to secure the communication. This noise is either generated by dedicated users jamming the communication [3], or by jointly designing the noise contribution of each user [1], [2]. The latter approach makes it possible to minimize the distortion at the legitimate receiver, but requires a centrally coordinated noise design. Therefore, an open problem is how to allow for a distributed design of the artificial noise.
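
As a toy illustration of the jointly designed, centrally coordinated noise approach (not the exact scheme of [1] or [2]), the sketch below generates noise contributions that sum to zero, so they cancel in the legitimate receiver's sum but not at an eavesdropper observing the users through different channel gains; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration: the K users add artificial noise terms that are jointly
# designed to sum to zero. The legitimate receiver, which only needs the sum,
# sees no extra distortion, while an eavesdropper observing the users through
# different channel gains cannot cancel the noise.
K = 5
x = rng.normal(size=K)                  # users' data
n = rng.normal(size=K)
n -= n.mean()                           # central coordination: noise sums to zero

g_eve = rng.uniform(0.5, 1.5, size=K)   # eavesdropper's (different) channel gains

y_legit = np.sum(x + n)                 # noise cancels in the legitimate sum
y_eve = np.sum(g_eve * (x + n))         # noise does not cancel for the eavesdropper

print("true sum:             ", x.sum())
print("legitimate receiver:  ", y_legit)
print("eavesdropper observes:", y_eve)
```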

[1] Maßny, Luis, and Antonia Wachter-Zeh. "Secure Over-the-Air Computation using Zero-Forced Artificial Noise." arXiv preprint arXiv:2212.04288 (2022).
[2] Liao, Jialing, Zheng Chen, and Erik G. Larsson. "Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations." arXiv preprint arXiv:2210.02235 (2022).
[3] Yan, Na, et al. "Toward Secure and Private Over-the-Air Federated Learning." arXiv preprint arXiv:2210.07669 (2022).

Prerequisites

  • basic knowledge of statistics and estimation theory
  • basic knowledge of linear wireless channels

Supervisor:

Private and Secure Federated Learning

Description

In federated learning, a machine learning model is trained on private user data with the help of a central server, the so-called federator. This setting differs from other machine learning settings in that the user data are not shared with the federator, either for privacy reasons or to reduce the communication load of the system.

Even though only intermediate results are shared, extra care is necessary to guarantee data privacy. An additional challenge arises if the system includes malicious users that deviate from the protocol and send corrupted computation results.

The goal of this work is to design, implement and analyze coding- and information-theoretic solutions for privacy and security in federated learning.
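
One example of such a coding-theoretic privacy mechanism is secure aggregation via additive secret sharing. The following minimal Python sketch is illustrative only, with an arbitrarily chosen prime modulus and quantized scalar updates; it is not a prescribed solution for the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy secure aggregation via additive secret sharing over a prime field:
# each user splits its (quantized) model update into random shares that sum
# to the update modulo p. The federator only ever sees sums of shares, so the
# aggregate is recovered while individual updates stay hidden.
p = 2**13 - 1                    # prime modulus (illustrative choice)
num_users = 4

updates = rng.integers(0, 100, size=num_users)   # quantized local updates

def share(secret, n, p, rng):
    """Split `secret` into n additive shares modulo p."""
    shares = rng.integers(0, p, size=n - 1)
    last = (secret - shares.sum()) % p
    return np.append(shares, last)

# Each user distributes one share to every user (including itself).
shares = np.array([share(u, num_users, p, rng) for u in updates])

# Every user forwards only the sum of the shares it received to the federator.
partial_sums = shares.sum(axis=0) % p

# The federator adds the partial sums and recovers the aggregate update.
aggregate = partial_sums.sum() % p
print("true sum:     ", updates.sum() % p)
print("secure result:", aggregate)
```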

Prerequisites

  • Coding Theory (e.g., Channel Coding)
  • Information Theory
  • Machine Learning Basics

Supervisor:

Theses in Progress

Pufferfish Privacy and its relation to Differential Privacy

Description

When vast amounts of data are collected and analyzed in a database, the privacy of individuals is an increasing concern. Differential privacy [3] is a well-established and widely accepted privacy notion that quantifies the amount of information leaked about an individual when statistics are retrieved from the database. However, it cannot capture the impact of correlations between individuals' data on the privacy leakage.

Pufferfish privacy [1] has been proposed as an alternative privacy measure that extends differential privacy to more sophisticated and comprehensive privacy requirements. Most importantly, it can quantify privacy in cases where the data of different users are correlated, as is the case in social networks. In [2], the so-called Wasserstein mechanism has been proposed, which achieves pufferfish privacy and resembles the well-known Laplace mechanism for differential privacy.
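
For reference, the following Python sketch shows the classical Laplace mechanism for epsilon-differential privacy [3]; the query, sensitivity, and parameters are illustrative. The Wasserstein mechanism of [2] follows a related idea but calibrates the noise to distances between conditional output distributions under the pufferfish framework.

```python
import numpy as np

rng = np.random.default_rng(3)

# Classical Laplace mechanism for epsilon-differential privacy [3]:
# noise is calibrated to the global sensitivity of the query, i.e., the
# maximum change of the query output when one individual's record changes.
def laplace_mechanism(query_value, sensitivity, epsilon, rng):
    scale = sensitivity / epsilon
    return query_value + rng.laplace(loc=0.0, scale=scale)

# Illustrative example: privately release the mean age in a database of n
# individuals with ages bounded by 100; the sensitivity of the mean is 100 / n.
ages = rng.integers(18, 80, size=200)
n = len(ages)
noisy_mean = laplace_mechanism(ages.mean(), sensitivity=100 / n, epsilon=0.5, rng=rng)

print("true mean: ", ages.mean())
print("noisy mean:", noisy_mean)
```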

The goal of this seminar topic is to understand the difference between differential privacy and pufferfish privacy, and analyze how pufferfish privacy can be achieved by the Wasserstein mechanism.

[1] D. Kifer and A. Machanavajjhala, “Pufferfish: A framework for mathematical privacy definitions,” ACM Trans. Database Syst., vol. 39, no. 1, p. 3:1-3:36, Jan. 2014.

[2] S. Song, Y. Wang, and K. Chaudhuri, “Pufferfish Privacy Mechanisms for Correlated Data.” arXiv, Mar. 12, 2017.

[3] C. Dwork and A. Roth, "The algorithmic foundations of differential privacy," Foundations and Trends® in Theoretical Computer Science, 9(3–4), 211-407, 2014.


Prerequisites

  • knowledge in probability theory and statistics
  • (optional) previous knowledge about differential privacy

Contact

Luis Maßny (luis.massny@tum.de)

Supervisor:

Publications

2022

  • Luis Maßny; Antonia Wachter-Zeh: Secure Over-the-Air Federated Learning. Munich Workshop on Coding and Cryptography, 2022
  • Luis Maßny; Antonia Wachter-Zeh: Secure Over-the-Air Federated Learning. IEEE European School of Information Theory, 2022
  • Luis Maßny; Christoph Hofmeister; Maximilian Egger; Rawad Bitar; Antonia Wachter-Zeh: Nested Gradient Codes for Straggler Mitigation in Distributed Machine Learning. TUM ICE Workshop Raitenhaslach, 2022