M.Sc. Maximilian Egger

Technische Universität München

Professorship of Coding and Cryptography (Prof. Wachter-Zeh)

Postal address:
Theresienstr. 90
80333 München

Biography

Maximilian Egger received the B.Eng. in Electrical Engineering from the University of Applied Sciences Augsburg in 2020 and the M.Sc. in Electrical Engineering and Information Technology from the Technical University of Munich in 2022, both with high distinction (final grades: 1.0). He pursued a dual Bachelor's program accompanied by engineering positions in several hardware and software development departments at Hilti AG. Inspiring collaborations in academia, in industry, and at the German Academic Scholarship Foundation strengthened his motivation to drive scientific progress. As a doctoral researcher at the Institute for Communications Engineering under the supervision of Prof. Dr.-Ing. Antonia Wachter-Zeh, he conducts research in the rapidly growing field of large-scale decentralized computing and federated learning. Sensitive data, potentially corrupted computations, and stochastic environments naturally raise concerns about privacy, security, and efficiency. In his research, he investigates these problems from a coding- and information-theoretic perspective.

Teaching

Coding Theory for Storage and Networks [Summer semester 2022]
Fast Secure and Reliable Coded Computing [Winter semester 2022/23]

Theses

Available theses

Random Walks for Decentralized Learning

Description

Fully decentralized schemes do not require a central entity and have been studied in [1, 2]. These works aim to reach consensus on a desirable machine learning model among all clients. We can mainly distinguish between i) gossip algorithms [3], where clients share their result with all neighbors, naturally leading to a high communication complexity, and ii) random walk approaches such as [4, 5], where the model is communicated only to a specific neighbor until certain convergence criteria are met. Such random walk approaches are used in federated learning to reduce the communication load in the network and at the clients' side.

The main task of the student is to study the work in [5], which additionally accounts for the heterogeneity of the clients' data. Furthermore, the drawbacks and limitations of the proposed approach should be identified.
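For intuition, the following is a minimal, self-contained sketch of the random walk idea: a single model is passed along a ring of clients, each performing one local gradient step before forwarding it to a uniformly chosen neighbor. The ring topology, the least-squares data, and the step size are hypothetical placeholders; the scheme in [5] additionally adapts the walk to the heterogeneity of the clients' data.

```python
# Minimal sketch of random-walk learning on a client graph (illustrative only;
# a uniform random walk with local least-squares SGD steps, not the scheme of [5]).
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 10, 5

# Hypothetical local data: client i holds (A_i, b_i) for the loss ||A_i x - b_i||^2.
A = [rng.standard_normal((20, dim)) for _ in range(n_clients)]
x_true = rng.standard_normal(dim)
b = [A_i @ x_true + 0.01 * rng.standard_normal(20) for A_i in A]

# Ring topology: each client communicates only with its two neighbors.
neighbors = {i: [(i - 1) % n_clients, (i + 1) % n_clients] for i in range(n_clients)}

x = np.zeros(dim)   # the model that performs the walk
node = 0            # the walk starts at client 0
lr = 0.01
for step in range(2000):
    # Local SGD step at the current client.
    grad = 2 * A[node].T @ (A[node] @ x - b[node]) / len(b[node])
    x -= lr * grad
    # Forward the model to a uniformly chosen neighbor.
    node = rng.choice(neighbors[node])

print("distance to optimum:", np.linalg.norm(x - x_true))
```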

[1] J. B. Predd, S. R. Kulkarni, and H. V. Poor, “Distributed learning in wireless sensor networks,” IEEE Signal Process. Mag., vol. 23, no. 4, pp. 56–69, 2006.

[2] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein et al., “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn., vol. 3, no. 1, pp. 1–122, 2011.

[3] S. S. Ram, A. Nedić, and V. V. Veeravalli, “Asynchronous gossip algorithms for stochastic optimization,” in IEEE Conf. Decis. Control, 2009, pp. 3581–3586.

[4] D. Needell, R. Ward, and N. Srebro, “Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm,” Adv. Neural Inf. Process. Syst., vol. 27, 2014.

[5] G. Ayache, V. Dassari, and S. El Rouayheb, “Walk for learning: A random walk approach for federated learning from heterogeneous data,” arXiv preprint arXiv:2206.00737, 2022.

Prerequisites

- Machine Learning and Statistics
- Information Theory

Supervisor:

MAB-Based Efficient Distributed ML on the Cloud

Keywords:
Distributed Machine Learning (ML), Multi-Armed Bandits (MABs), Cloud Simulations (AWS, GCP, ...)

Beschreibung

We consider the problem of running a distributed machine learning algorithm on the cloud. This imposes several challenges. In particular, cloud instances may differ in performance/speed. To fully leverage the performance of the instances, we want to characterize their speeds and preferentially use the fastest ones. To explore the speeds of the instances while exploiting them (i.e., assigning computational tasks), we use the theory of multi-armed bandits (MABs).

The goal of the research internship is to start by implementing the existing theoretical algorithms of [1] and to possibly adapt them based on experimental observations.
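As a rough illustration of the underlying explore-exploit trade-off, the sketch below assigns each round's tasks to the k instances with the highest upper-confidence-bound speed estimates. All quantities (service rates, noise model, k) are hypothetical, and the algorithms in [1] differ in their index and cost model.

```python
# Generic UCB-style heuristic for selecting fast cloud instances
# (illustrative; not the exact algorithm of [1]).
import numpy as np

rng = np.random.default_rng(1)
n_instances, k = 8, 3                            # assign tasks to k instances per round
true_rates = rng.uniform(0.5, 2.0, n_instances)  # unknown service rates

counts = np.zeros(n_instances)     # tasks assigned to each instance so far
mean_rate = np.zeros(n_instances)  # empirical speed estimates

for t in range(1, 1001):
    # Optimism in the face of uncertainty: upper confidence bound per instance.
    bonus = np.sqrt(2 * np.log(t) / np.maximum(counts, 1))
    ucb = np.where(counts == 0, np.inf, mean_rate + bonus)
    chosen = np.argsort(ucb)[-k:]  # exploit the seemingly fastest instances

    for i in chosen:
        # Noisy throughput observation, e.g., tasks completed per second.
        sample = true_rates[i] + 0.1 * rng.standard_normal()
        counts[i] += 1
        mean_rate[i] += (sample - mean_rate[i]) / counts[i]

print("estimated top-k:", np.sort(np.argsort(mean_rate)[-k:]))
print("true top-k:     ", np.sort(np.argsort(true_rates)[-k:]))
```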

[1] M. Egger, R. Bitar, A. Wachter-Zeh, and D. Gündüz, “Efficient Distributed Machine Learning via Combinatorial Multi-Armed Bandits,” submitted to IEEE Journal on Selected Areas in Communications (JSAC), 2022.

Prerequisites

  • Information Theory
  • Machine Learning Basics
  • Python (Intermediate Level)

Supervisor:

Private and Secure Federated Learning

Description

In federated learning, a machine learning model is trained on private user data with the help of a central server, the so-called federator. This setting differs from other machine learning settings in that the user data is not shared with the federator, for privacy reasons and/or to decrease the communication load of the system.

Even though only intermediate results are shared, extra care is necessary to guarantee data privacy. An additional challenge arises if the system includes malicious users who deviate from the protocol and send corrupted computation results.

The goal of this work is to design, implement and analyze coding- and information-theoretic solutions for privacy and security in federated learning.
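To make the privacy mechanism concrete, here is a minimal sketch of pairwise additive masking, the basic idea behind secure aggregation: users mask their updates so that the federator learns only the sum. The modulus, the vector size, and the assumption that users can agree on shared randomness are illustrative simplifications, and tolerating malicious users requires additional machinery.

```python
# Toy secure aggregation via pairwise additive masks (illustrative sketch).
import numpy as np

rng = np.random.default_rng(2)
n_users, dim, q = 4, 6, 2**16  # q: working modulus (hypothetical choice)

# Each user's private model update, quantized to integers mod q.
updates = [rng.integers(0, q, dim) for _ in range(n_users)]

# Users i < j agree on a random mask r_ij; user i adds it, user j subtracts it.
masks = {(i, j): rng.integers(0, q, dim)
         for i in range(n_users) for j in range(i + 1, n_users)}

def masked_update(i):
    m = updates[i].copy()
    for (a, b), r in masks.items():
        if a == i:
            m = (m + r) % q
        elif b == i:
            m = (m - r) % q
    return m

# The federator only ever sees masked updates; the pairwise masks cancel in the sum.
aggregate = sum(masked_update(i) for i in range(n_users)) % q
assert np.array_equal(aggregate, sum(updates) % q)
print("aggregate recovered without revealing any individual update")
```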

Prerequisites

  • Coding Theory (e.g., Channel Coding)
  • Information Theory
  • Machine Learning Basics

Supervisor:

Ongoing theses

Privacy-Preserving Vertical Federated XGBoost on GPU

Description

Gradient boosting tree models are widely used in practical applications, and the optimized implementation XGBoost is one of the most popular machine learning algorithms. For centralized data, it is well supported by existing implementations. When the data is decentralized and privacy must be preserved, federated learning (FL) is used. FL is commonly divided into ‘horizontal FL’ and ‘vertical FL’, which differ in how the dataset is partitioned across clients. Previous works have focused mostly on horizontal FL, while vertical FL remains largely unexplored. Existing works on vertical federated XGBoost have limitations such as intermediate data leakage and computation overhead. GPU acceleration is quite common in machine learning but has not yet been applied to privacy-preserving XGBoost. This project aims to build an efficient privacy-preserving vertical federated XGBoost using GPUs.
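For intuition, the toy snippet below contrasts the two partition types: horizontal FL splits the samples across clients while all clients share the feature space, whereas vertical FL splits the features of the same samples (e.g., a bank and a retailer holding different attributes of the same customers). The array and the split points are arbitrary.

```python
# Toy illustration of horizontal vs. vertical data partitioning in FL.
import numpy as np

X = np.arange(24).reshape(6, 4)   # 6 samples, 4 features (hypothetical dataset)

# Horizontal FL: clients hold disjoint samples over the same features.
horizontal = [X[:3, :], X[3:, :]]

# Vertical FL: clients hold disjoint features of the same samples.
vertical = [X[:, :2], X[:, 2:]]

print([p.shape for p in horizontal])  # [(3, 4), (3, 4)]
print([p.shape for p in vertical])    # [(6, 2), (6, 2)]
```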

Supervisor:

Marvin Xhemrishi, Maximilian Egger - Huawei (Dr.-Ing. Yong Li)

A Framework for Federated Learning with Variable Local Updates

Description

Since the introduction of federated learning in [1], we have observed a rapidly growing body of research. In particular, challenges arise with respect to privacy, security, and efficiency. We build upon an existing generic framework for simulating decentralized optimization procedures in a federated learning setting. With the help of this framework, the student should analyze the performance of selected state-of-the-art schemes and investigate different protocols that utilize local updates, as well as the effect of straggling clients.
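As a point of reference, a minimal FedAvg-style loop with a variable number of local steps per client might look as follows. The synthetic least-squares data, the random step counts, and the learning rate are placeholders and do not reflect the group's framework.

```python
# Minimal FedAvg-style sketch with variable local updates (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
n_clients, dim, lr = 5, 4, 0.05

# Synthetic local datasets: client i holds (A_i, b_i) for a least-squares loss.
x_true = rng.standard_normal(dim)
data = []
for _ in range(n_clients):
    A = rng.standard_normal((30, dim))
    data.append((A, A @ x_true + 0.05 * rng.standard_normal(30)))

x_global = np.zeros(dim)
for rnd in range(50):
    local_models = []
    for A, b in data:
        x = x_global.copy()
        # Variable local work: a straggling client performs fewer local steps.
        for _ in range(rng.integers(1, 6)):
            grad = 2 * A.T @ (A @ x - b) / len(b)
            x -= lr * grad
        local_models.append(x)
    # The federator averages the returned local models.
    x_global = np.mean(local_models, axis=0)

print("distance to optimum:", np.linalg.norm(x_global - x_true))
```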

Supervisor:

Analysis and Modeling of Software Energy Consumption in Consumer Computing Devices

Description

Sustainability is an essential concern in every industrial field today. Thus, the importance of energy-efficient software cannot be overstated. In this Master's thesis, the student examines the hardware-related metrics of a consumer computing device and estimates the energy consumption of specific applications by analyzing these metrics. With this analysis, we aim to find a universally applicable mathematical method that estimates the energy consumption of individual applications from their resource-consumption metrics alone, without the need for an additional power measurement tool. This includes determining which metrics meaningfully affect the energy consumption of a consumer computing device.
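One plausible shape for such a model, sketched below on synthetic data, is a regression from per-application resource metrics to measured power. The chosen metrics, the linear form, and all coefficients are assumptions for illustration, not results of the thesis.

```python
# Hypothetical sketch: fit a linear power model from resource metrics
# during one calibration run with a power meter, then reuse it without one.
import numpy as np

rng = np.random.default_rng(4)
n = 200

# Assumed per-interval metrics: CPU load, memory use, disk I/O, network I/O.
metrics = rng.uniform(0, 1, (n, 4))
true_coeffs = np.array([15.0, 3.0, 5.0, 2.0])   # watts per unit of each metric
idle_power = 10.0
power = idle_power + metrics @ true_coeffs + rng.normal(0, 0.5, n)

# Least-squares fit of power ≈ c0 + metrics @ c.
X = np.hstack([np.ones((n, 1)), metrics])
coeffs, *_ = np.linalg.lstsq(X, power, rcond=None)
print("estimated idle power:", coeffs[0])
print("per-metric coefficients:", coeffs[1:])

# Later, an application's power draw is estimated from its metrics alone.
app_metrics = np.array([0.6, 0.2, 0.1, 0.05])
print("estimated incremental power of the app:", app_metrics @ coeffs[1:], "W")
```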

Supervisor:

Maximilian Egger - Karsten Schörner (Siemens)

Approximate Matrix Multiplication

Description

The rapidly growing amount of data recorded and processed every day makes it necessary to outsource large computations to external workers. In neural network training and inference, matrix multiplication is a core computation. That is why a large body of research is concerned with designing schemes for distributed matrix multiplication that tolerate straggling workers (i.e., slow or even unresponsive workers). While reconstructing the exact result can be costly, approximate schemes can improve efficiency while sacrificing only little accuracy. In this project, the student should study existing approximate matrix multiplication schemes and improve on the achievable rate while maintaining low error rates.
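As background, a classical instance of such a scheme is Monte Carlo matrix multiplication by norm-proportional column-row sampling, sketched below. This is a textbook baseline for approximate multiplication, not one of the distributed straggler-tolerant schemes the project targets.

```python
# Approximate A @ B by sampling s column-row pairs with probability
# proportional to the product of their norms (unbiased Monte Carlo estimator).
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((100, 400))
B = rng.standard_normal((400, 80))
s = 100                              # sampled inner dimensions (out of 400)

# p_k proportional to ||A[:, k]|| * ||B[k, :]|| minimizes the estimator's variance.
p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
p /= p.sum()
idx = rng.choice(A.shape[1], size=s, p=p)

# Sum of rescaled rank-one products; rescaling by 1/(s * p_k) keeps it unbiased.
approx = (A[:, idx] / (s * p[idx])) @ B[idx, :]

exact = A @ B
print("relative Frobenius error:",
      np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```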

Voraussetzungen

- Coding Theory (e.g., Channel Coding)
- Information Theory

Supervisor:

Publications

2022

  • Marvin Xhemrishi, Maximilian Egger, Rawad Bitar: Efficient Private Storage of Sparse Machine Learning Data. 2022 IEEE Information Theory Workshop (ITW), 2022
  • Maximilian Egger: Challenges in Federated Learning - A Brief Overview. TUM ICE Workshop Raitenhaslach, 2022
  • Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz: Efficient Distributed Machine Learning via Combinatorial Multi-Armed Bandits. 2022 IEEE International Symposium on Information Theory (ISIT), 2022
  • Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz: Cost-Efficient Distributed Learning via Combinatorial Multi-Armed Bandits. 2022 Munich Workshop on Coding and Cryptography (MWCC), 2022
  • Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz: Cost-Efficient Distributed Learning via Combinatorial Multi-Armed Bandits. 2022 IEEE European School of Information Theory (ESIT), 2022
  • Maximilian Egger, Thomas Schamberger, Lars Tebelmann, Florian Lippert, Georg Sigl: A Second Look at the ASCAD Databases. 14th International Workshop on Constructive Side-Channel Analysis and Secure Design, 2022