
M.Sc. Maximilian Egger

Technical University of Munich

Associate Professorship of Coding and Cryptography (Prof. Wachter-Zeh)

Postal address

Theresienstr. 90
80333 München

Biography

Maximilian Egger received the B.Eng. in Electrical Engineering from the University of Applied Sciences Augsburg in 2020 and the M.Sc. in Electrical Engineering and Information Technology from the Technical University of Munich in 2022, both with high distinction (final grades: 1.0). He completed a dual bachelor's program accompanied by engineering positions in several hardware and software development departments at Hilti AG. Inspiring collaborations at university, in industry, and within the German Academic Scholarship Foundation strengthened his motivation to drive scientific progress. As a doctoral researcher at the Institute for Communications Engineering under the supervision of Prof. Dr.-Ing. Antonia Wachter-Zeh, he conducts research in the rapidly growing field of large-scale decentralized computing and federated learning. Sensitive data, potentially corrupted computations, and stochastic environments naturally raise concerns about privacy, security, and efficiency. His research investigates these problems from a coding- and information-theoretic perspective.

Teaching

Coding Theory for Storage and Networks [Summer Term 2022]
Fast Secure and Reliable Coded Computing [Winter Term 2022/23]
Nachrichtentechnik (Communications Engineering) [Summer Term 2023]

Theses

Available Theses

Random Walks for Decentralized Learning

Description

Fully decentralized schemes do not require a central entity and have been studied in [1, 2]. These works aim to reach consensus on a desirable machine learning model among all clients. We can mainly distinguish between i) gossip algorithms [3], where clients share their results with all neighbors, which naturally leads to high communication complexity, and ii) random walk approaches such as [4, 5], where the model is passed only to a single selected neighbor until certain convergence criteria are met. Such random walk approaches are used in federated learning to reduce the communication load in the network and on the clients' side.

The main task of the student is to study the work in [5], which additionally accounts for the heterogeneity of the clients' data. Furthermore, the drawbacks and limitations of the proposed approach should be identified.
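To make the setting concrete, the following Python sketch simulates random-walk learning over a client graph. It is only an illustration under assumed ingredients (a ring topology, uniform neighbor selection, and local least-squares objectives), not the algorithm of [5].

import numpy as np

# Toy setup: each client holds a local least-squares objective (A_i, b_i).
# The model performs a random walk over the client graph: at each step, the
# current client runs one local gradient update and forwards the model to a
# uniformly random neighbor (Metropolis-style weights would be a refinement).
rng = np.random.default_rng(0)
n_clients, dim = 5, 3
A = [rng.normal(size=(10, dim)) for _ in range(n_clients)]
x_true = rng.normal(size=dim)
b = [A_i @ x_true + 0.01 * rng.normal(size=10) for A_i in A]

# Ring topology: client i is connected to clients i-1 and i+1.
neighbors = {i: [(i - 1) % n_clients, (i + 1) % n_clients] for i in range(n_clients)}

x = np.zeros(dim)          # model passed along the walk
client, lr = 0, 0.01
for step in range(2000):
    grad = A[client].T @ (A[client] @ x - b[client])  # local gradient
    x -= lr * grad
    client = rng.choice(neighbors[client])            # forward to a random neighbor

print("error:", np.linalg.norm(x - x_true))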

[1] J. B. Predd, S. B. Kulkarni, and H. V. Poor, “Distributed learning in wireless sensor networks,” IEEE Signal Process. Mag., vol. 23, no. 4, pp. 56–69, 2006.

[2] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein et al., “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn., vol. 3, no. 1, pp. 1–122, 2011.

[3] S. S. Ram, A. Nedić, and V. V. Veeravalli, “Asynchronous gossip algorithms for stochastic optimization,” in IEEE Conf. Decis. Control, 2009, pp. 3581–3586.

[4] D. Needell, R. Ward, and N. Srebro, “Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm,” Adv. Neural Inf. Process. Syst., vol. 27, 2014.

[5] G. Ayache, V. Dassari, and S. El Rouayheb, “Walk for learning: A random walk approach for federated learning from heterogeneous data,” arXiv preprint arXiv:2206.00737, 2022.

Prerequisites

  • Machine Learning and Statistics
  • Information Theory

Supervisor:

MAB-Based Efficient Distributed ML on the Cloud

Keywords:
Distributed Machine Learning (ML), Multi-Armed Bandits (MABs), Cloud Simulations (AWS, GCP, ...)

Description

We consider the problem of running a distributed machine learning algorithm on the cloud. This imposes several challenges. In particular, cloud instances may have different speeds. To fully leverage the available compute, we want to characterize the instances' speeds and preferably assign work to the fastest ones. To explore the speeds of the instances while simultaneously exploiting them (i.e., assigning computational tasks), we use the theory of multi-armed bandits (MABs).
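As a rough illustration of this exploration-exploitation idea, the following sketch uses the standard UCB1 index to steer task assignments toward the fastest instance. It is a generic bandit baseline under assumed instance speeds, not the algorithm of [1].

import numpy as np

rng = np.random.default_rng(1)
true_rates = np.array([1.0, 1.5, 0.7, 2.0])   # hypothetical instance speeds (tasks/s)
n = len(true_rates)
counts = np.zeros(n)        # tasks assigned per instance
mean_speed = np.zeros(n)    # empirical speed estimates

for t in range(1, 1001):
    if t <= n:
        arm = t - 1                                         # play every arm once
    else:
        ucb = mean_speed + np.sqrt(2 * np.log(t) / counts)  # UCB1 index
        arm = int(np.argmax(ucb))
    # Observe a noisy speed sample from the chosen instance.
    sample = max(true_rates[arm] + 0.3 * rng.normal(), 1e-3)
    counts[arm] += 1
    mean_speed[arm] += (sample - mean_speed[arm]) / counts[arm]  # running mean

print("assignments per instance:", counts)   # the fastest instance dominates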

The goal of the research internship is to start by implementing existing theoretical algorithms [1] and possibly adapting them based on the experimental observations.

[1] M. Egger, R. Bitar, A. Wachter-Zeh, and D. Gündüz, “Efficient distributed machine learning via combinatorial multi-armed bandits,” submitted to IEEE Journal on Selected Areas in Communications (JSAC), 2022.

Prerequisites

  • Information Theory
  • Machine Learning Basics
  • Python (Intermediate Level)

Supervisor:

Theses in Progress

The Interplay of Fairness and Privacy in Federated Learning

Description

We study the impact of different measures of fairness on the privacy guarantees for individual clients in federated learning.

Supervisor:

Channel Codes for Robust Federated Learning Over the Air

Description

We study the use of channel codes in federated learning with over-the-air computation, so that the sum of codewords over the reals can be efficiently and reliably decoded at the federator to obtain the average of the partial model updates.
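As a minimal toy sketch of why linearity helps here (an illustration, not a concrete scheme from the thesis): with a linear code over the reals, the superposition of the clients' codewords is itself an encoding of the sum of their updates, so the federator can decode the average directly.

import numpy as np

rng = np.random.default_rng(2)
n_clients, dim = 4, 3
G = rng.normal(size=(8, dim))                 # random real-valued linear code (8 > dim adds redundancy)
updates = rng.normal(size=(n_clients, dim))   # clients' partial model updates

# Each client transmits G @ x_i; the wireless channel superimposes the
# transmissions and adds noise, so the federator receives G @ (sum of x_i) + noise.
received = sum(G @ x for x in updates) + 0.01 * rng.normal(size=8)

# Least-squares decoding recovers the sum; dividing by n gives the average.
sum_hat, *_ = np.linalg.lstsq(G, received, rcond=None)
avg_hat = sum_hat / n_clients
print("decoding error:", np.linalg.norm(avg_hat - updates.mean(axis=0)))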

Supervisor:

Bias Variance Trade-Off in Gradient Compression for Federated Learning

Description

We study the bias-variance trade-off of gradient compression in the setting of federated learning. In particular, we investigate whether biased, low-variance estimates of individual clients' gradients can still lead to unbiased averaged gradients while reducing the overall variance, and hence the distortion introduced by the compression scheme.
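For intuition, here is a small sketch contrasting two standard compressors (illustrative, not the scheme studied in the thesis): top-k sparsification is biased but has zero variance, while rand-k rescaled by d/k is unbiased but pays in variance.

import numpy as np

rng = np.random.default_rng(3)
d, k, trials = 100, 10, 5000
g = rng.normal(size=d)  # a fixed "true" gradient

def top_k(v, k):
    # Keep the k largest-magnitude entries (biased, deterministic).
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def rand_k(v, k, rng):
    # Keep k uniformly random entries, rescaled by d/k (unbiased, higher variance).
    out = np.zeros_like(v)
    idx = rng.choice(len(v), size=k, replace=False)
    out[idx] = v[idx] * (len(v) / k)
    return out

rand_samples = np.array([rand_k(g, k, rng) for _ in range(trials)])
print("top-k bias:     ", np.linalg.norm(top_k(g, k) - g))           # nonzero: biased
print("rand-k bias:    ", np.linalg.norm(rand_samples.mean(0) - g))  # close to 0: unbiased
print("rand-k variance:", rand_samples.var(0).sum())                 # the price of unbiasedness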

Supervisor:

Gradient Compression Schemes and Their Interplay with Regularization

Description

We theoretically study the impact of gradient compression schemes on the regularization properties of the underlying machine learning algorithm.

Supervisor:

A Comparison of Selected Gradient Compression Schemes

Description

In this work, we investigate and compare selected gradient compression schemes and their impact on the performance of the underlying machine learning algorithm.

Supervisor:

Privacy-Preserving Vertical Federated XGBoost on GPU

Description

Gradient boosted tree models are widely used in practical applications, and the optimized implementation XGBoost is one of the most popular machine learning algorithms. The centralized setting is already well supported by existing implementations. When the data is decentralized and privacy must be preserved, federated learning (FL) is used. FL is broadly divided into ‘horizontal FL’ and ‘vertical FL,’ which differ in how the datasets are partitioned across clients. Previous work has focused mostly on horizontal FL, while vertical FL remains less explored. Existing approaches to vertical federated XGBoost have limitations such as intermediate data leakage and computational overhead. Moreover, while GPU acceleration is common in machine learning, it has not yet been applied to privacy-preserving XGBoost. This project aims to build an efficient privacy-preserving vertical federated XGBoost using GPUs.
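To clarify the two partition types on an assumed toy dataset: in horizontal FL, each client holds different samples with the same features; in vertical FL, clients hold different feature columns of the same samples.

import numpy as np

X = np.arange(24).reshape(6, 4)   # toy dataset: 6 samples, 4 features

# Horizontal FL: clients hold disjoint rows (samples), all features.
horizontal = [X[:3, :], X[3:, :]]

# Vertical FL: clients hold disjoint columns (features) of the SAME samples,
# so joint training requires exchanging intermediate statistics (e.g., gradient
# histograms in XGBoost) -- exactly the values that must be protected from leakage.
vertical = [X[:, :2], X[:, 2:]]

print([p.shape for p in horizontal])  # [(3, 4), (3, 4)]
print([p.shape for p in vertical])    # [(6, 2), (6, 2)]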

Supervisor:

Marvin Xhemrishi, Maximilian Egger - Huawei (Dr.-Ing. Yong Li)

A Framework for Federated Learning with Variable Local Updates

Description

Since the introduction of federated learning in [1], the body of research has grown rapidly. In particular, challenges arise with respect to privacy, security, and efficiency. We build on an existing generic framework for simulating decentralized optimization procedures in a federated learning setting. Using this framework, the student should analyze the performance of selected state-of-the-art schemes and investigate different protocols that utilize local updates, as well as the effect of straggling clients.
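A minimal sketch of federated averaging with a variable number of local updates per client (under illustrative assumptions: quadratic local objectives, uniform averaging, and straggling modeled as fewer local steps):

import numpy as np

rng = np.random.default_rng(4)
n_clients, dim, lr = 4, 3, 0.05
A = [rng.normal(size=(20, dim)) for _ in range(n_clients)]
x_true = rng.normal(size=dim)
b = [A_i @ x_true for A_i in A]

x_global = np.zeros(dim)
for rnd in range(50):
    local_models = []
    for i in range(n_clients):
        tau = rng.integers(1, 6)        # variable number of local steps (stragglers do fewer)
        x = x_global.copy()
        for _ in range(tau):
            x -= lr * A[i].T @ (A[i] @ x - b[i]) / len(b[i])  # local gradient step
        local_models.append(x)
    x_global = np.mean(local_models, axis=0)  # FedAvg-style aggregation

print("error:", np.linalg.norm(x_global - x_true))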

Supervisor:

Analysis and Modeling of Software Energy Consumption in Consumer Computing Devices

Description

Sustainability is an essential part of every industrial field in today’s world. Thus, the importance of energy-efficient software cannot be overstated. In this master's thesis, the student examines the hardware-related metrics of a consumer computing device and estimates the energy consumption of specific applications by analyzing their metrics. Based on this analysis, we aim to find a universally applicable mathematical method for estimating the energy consumption of individual applications using only their resource-consumption metrics, without the need for an additional power measurement tool. This includes determining which metrics meaningfully affect the energy consumption of a consumer computing device.
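One natural baseline for such a mapping (a hypothetical sketch with fabricated-for-illustration coefficients, not the thesis's method) is a linear regression from resource-utilization metrics to metered power:

import numpy as np

# Hypothetical training data: per-interval resource metrics of an application
# (CPU utilization, memory GB, disk MB/s) and ground-truth power from a meter.
rng = np.random.default_rng(5)
metrics = rng.uniform(0, 1, size=(200, 3))
true_coeffs = np.array([35.0, 4.0, 8.0])      # assumed watts per unit of each metric
idle_power = 12.0
power = idle_power + metrics @ true_coeffs + rng.normal(0, 0.5, size=200)

# Fit power ~ idle + w . metrics via least squares; afterwards, only the
# metrics are needed to estimate an application's power, no meter required.
X = np.hstack([np.ones((200, 1)), metrics])
w, *_ = np.linalg.lstsq(X, power, rcond=None)
print("estimated idle power and coefficients:", np.round(w, 2))

# Energy over the trace = estimated power x interval length (1-second intervals -> joules).
est_energy = (X @ w).sum() * 1.0
print("estimated energy (J):", round(est_energy, 1))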

Supervisor:

Maximilian Egger - Karsten Schörner (Siemens)

Publications

2023

  • Maximilian Egger, Christoph Hofmeister, Antonia Wachter-Zeh, Rawad Bitar: Private Aggregation in Wireless Federated Learning with Heterogeneous Clusters. International Symposium on Information Theory, 2023
  • Maximilian Egger, Marvin Xhemrishi, Antonia Wachter-Zeh, Rawad Bitar: Sparse and Private Distributed Matrix Multiplication with Straggler Tolerance. International Symposium on Information Theory, 2023
  • Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz, Nir Weinberger: Maximal-Capacity Discrete Memoryless Channel Identification. International Symposium on Information Theory, 2023
  • Maximilian Egger, Serge Kas Hanna, Rawad Bitar: Fast and Straggler-Tolerant Distributed SGD with Reduced Computation Load. International Symposium on Information Theory, 2023
  • Thomas Schamberger, Maximilian Egger, Lars Tebelmann: Hide and Seek: Using Occlusion Techniques for Side-Channel Leakage Attribution in CNNs. Artificial Intelligence in Hardware Security Workshop, 2023

2022

  • Marvin Xhemrishi, Maximilian Egger, Rawad Bitar: Efficient Private Storage of Sparse Machine Learning Data. 2022 IEEE Information Theory Workshop (ITW), 2022
  • Maximilian Egger: Challenges in Federated Learning - A Brief Overview. TUM ICE Workshop Raitenhaslach, 2022
  • Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz: Efficient Distributed Machine Learning via Combinatorial Multi-Armed Bandits. 2022 IEEE International Symposium on Information Theory (ISIT), 2022
  • Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz: Cost-Efficient Distributed Learning via Combinatorial Multi-Armed Bandits. 2022 Munich Workshop on Coding and Cryptography (MWCC), 2022
  • Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Deniz Gündüz: Cost-Efficient Distributed Learning via Combinatorial Multi-Armed Bandits. 2022 IEEE European School of Information Theory (ESIT), 2022
  • Maximilian Egger, Thomas Schamberger, Lars Tebelmann, Florian Lippert, Georg Sigl: A Second Look at the ASCAD Databases. 14th International Workshop on Constructive Side-Channel Analysis and Secure Design, 2022