
Dr. Rawad Bitar

Technische Universität München

Professorship of Coding and Cryptography (Prof. Wachter-Zeh)

Postal address:
Theresienstr. 90
80333 München

Biography

I am a postdoctoral researcher working with Prof. Dr.-Ing. Antonia Wachter-Zeh. I obtained my PhD from the ECE department of Rutgers University in January 2020. During my PhD, I held short-term visiting positions at Aalto University, the Technical University of Berlin, and the Chinese University of Hong Kong. In addition, I spent three years as a PhD candidate at the ECE department of the Illinois Institute of Technology (IIT). From August 2014 until January 2020, I was a member of the CSI lab supervised by Prof. Salim El Rouayheb.

I received a master's degree in Information and Communication from the Lebanese University in 2014, after writing my thesis at IIT. I graduated as a Computer and Communication Engineer from the Lebanese University in 2013, after completing my engineering senior project at the Center of Nuclear Science and Science of Matter (CSNSM) in Paris, France, and an internship at the procsim-consulting company at EPFL, Lausanne, Switzerland, in 2012.

Teaching

  • Winter semester 2021/2022: Security in Communications and Networks, jointly with Prof. Dr.-Ing. Antonia Wachter-Zeh
  • Summer semester 2021: Coding Theory for Storage and Networks, jointly with Dr.-Ing. Sven Puchinger
  • Winter semester 2020/2021: Security in Communications and Networks, jointly with Prof. Dr.-Ing. Antonia Wachter-Zeh
  • Summer semester 2020: Coding Theory for Storage and Networks, jointly with Dr. Alessandro Neri

Theses

Offered Theses

MAB-Based Efficient Distributed ML on the Cloud

Keywords:
Distributed Machine Learning (ML), Multi-Armed Bandits (MABs), Cloud Simulations (AWS, GCP, ...)

Description

We consider the problem of running a distributed machine learning algorithm on the cloud. This poses several challenges; in particular, cloud instances may run at different speeds. To fully leverage the instances, we want to characterize their speeds and preferentially use the fastest ones. To explore the speeds of the instances while exploiting them (i.e., while assigning them computational tasks), we use the theory of multi-armed bandits (MABs).

The goal of the research internship is to start by implementing existing theoretical algorithms [1] and possibly to adapt them based on experimental observations.

[1] M. Egger, R. Bitar, A. Wachter-Zeh and D. Gündüz, Efficient Distributed Machine Learning via Combinatorial Multi-Armed Bandits, submitted to IEEE Journal on Selected Areas in Communications (JSAC), 2022.
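To make the explore/exploit trade-off concrete, here is a minimal epsilon-greedy sketch, not the combinatorial algorithm of [1]; the simulated worker speeds, the noise model, and all names are illustrative assumptions:

```python
import random

def simulate_mab_scheduling(mean_speeds, rounds=2000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: each round, assign a task to one worker,
    observe its (noisy) speed, and update a running mean estimate.
    Most of the time the worker with the best estimate is exploited."""
    rng = random.Random(seed)
    n = len(mean_speeds)
    counts = [0] * n
    estimates = [0.0] * n
    for _ in range(rounds):
        if rng.random() < eps:
            i = rng.randrange(n)                       # explore: random worker
        else:
            i = max(range(n), key=lambda j: estimates[j])  # exploit: fastest estimate
        speed = mean_speeds[i] + rng.gauss(0, 0.1)     # noisy speed observation
        counts[i] += 1
        estimates[i] += (speed - estimates[i]) / counts[i]  # running mean
    return estimates, counts

# three simulated instances; index 2 is truly the fastest
est, cnt = simulate_mab_scheduling([1.0, 2.0, 3.0])
```

After enough rounds, the fastest instance receives the bulk of the tasks while the others are only occasionally probed, which is exactly the exploration/exploitation balance described above.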

Prerequisites

  • Information Theory
  • Machine Learning Basics
  • Python (Intermediate Level)

Supervisor:

Private and Secure Federated Learning

Description

In federated learning, a machine learning model is trained on private user data with the help of a central server, the so-called federator. This setting differs from other machine learning settings in that the user data is not shared with the federator, for privacy reasons and/or to reduce the communication load of the system.

Even though only intermediate results are shared, extra care is necessary to guarantee data privacy. An additional challenge arises if the system includes malicious users that breach protocol and send corrupt computation results.

The goal of this work is to design, implement and analyze coding- and information-theoretic solutions for privacy and security in federated learning.
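As one example of the kind of coding-theoretic tool involved, the toy sketch below shows pairwise additive masking for secure aggregation over quantized (integer) updates: each masked update looks random on its own, but the masks cancel in the sum. The construction, modulus, and names are illustrative assumptions, not the scheme to be developed:

```python
import random

def secure_aggregate(updates, modulus=2**16, seed=1):
    """Toy one-shot secure aggregation: every pair of clients (i, j) agrees
    on a random mask that client i adds and client j subtracts, so each
    individual masked update is uniformly random modulo `modulus`,
    while all masks cancel in the sum seen by the federator."""
    rng = random.Random(seed)
    n, dim = len(updates), len(updates[0])
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(dim):
                m = rng.randrange(modulus)
                masked[i][k] = (masked[i][k] + m) % modulus
                masked[j][k] = (masked[j][k] - m) % modulus
    # the federator only ever sums the masked updates
    return [sum(col) % modulus for col in zip(*masked)]

updates = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # quantized client updates
agg = secure_aggregate(updates)                # equals the plain sum
```

In a real protocol the pairwise masks would be derived from shared keys and the scheme would tolerate dropouts; this sketch only captures the cancellation idea.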

Prerequisites

  • Coding Theory (e.g., Channel Coding)
  • Information Theory
  • Machine Learning Basics

Supervisor:

Ongoing Theses

Implementation of a Generic Federated Learning Framework

Description

Since the introduction of federated learning in [1], we can observe a rapidly growing body of research. In particular, we face challenges with respect to privacy, security, and efficiency. The first half of this research internship aims at implementing a generic framework for simulating decentralized optimization procedures in a federated learning setting. During the second half, and with the help of the framework, the student should analyze the performance of selected state-of-the-art schemes.
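At its core, such a simulation framework is a loop of local client updates followed by aggregation at the federator. A minimal single-machine sketch, assuming a toy one-dimensional least-squares objective and plain federated averaging (all names are hypothetical, not the framework to be built):

```python
def local_step(model, data, lr=0.1):
    """One local gradient step for 1-D least squares: minimize (model*x - y)^2."""
    g = sum(2 * (model * x - y) * x for x, y in data) / len(data)
    return model - lr * g

def fedavg(client_data, rounds=50, local_epochs=1):
    """Minimal FedAvg loop: clients train locally on their own data,
    the federator averages the resulting local models."""
    model = 0.0
    for _ in range(rounds):
        local_models = []
        for data in client_data:
            m = model
            for _ in range(local_epochs):
                m = local_step(m, data)        # local training, data stays local
            local_models.append(m)
        model = sum(local_models) / len(local_models)   # aggregation step
    return model

# synthetic clients whose data all follow y = 2*x
clients = [[(1.0, 2.0)], [(2.0, 4.0)], [(0.5, 1.0)]]
w = fedavg(clients)
```

A generic framework would replace `local_step` and the averaging line with pluggable optimizers and aggregation rules, which is what makes comparing state-of-the-art schemes convenient.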

Prerequisites

  • Coding Theory (e.g., Channel Coding)
  • Information Theory
  • Machine Learning Basics
  • Python (Intermediate Level)

Supervisor:

Secure Federated Learning

Description

In the initially proposed federated learning setting [1], the federator observes partial gradient computations of all clients contributing to a decentralized training procedure. However, clients might send malicious (corrupt) computations to harm the training process on purpose. In this model, security against malicious clients can be ensured by running statistics on the partial results [2, 3]. For example, clients whose results differ significantly from the vast majority of responses can be excluded from the training process.

In recent works, secure aggregation of the partial work was proposed [4]. The goal is to let the federator observe only the sum of all local models, and thereby to enhance the privacy of the clients' data. These works, however, complicate the use of statistics to detect corrupt partial computations, as the federator only observes the aggregated result.

The goal of this research internship is to review the related literature on secure federated learning, including its limitations, and to explore possible approaches that ensure security against potentially corrupt results while preserving the privacy of the clients' data.

[1] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-Efficient Learning of Deep Networks from Decentralized Data,” in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR, vol. 54, pp. 1273–1282, Apr. 2017.

[2] P. Blanchard, E. M. El Mhamdi, R. Guerraoui, and J. Stainer, “Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent,” in Advances in Neural Information Processing Systems, 2017, vol. 30.

[3] Z. Yang and W. U. Bajwa, “ByRDiE: Byzantine-Resilient Distributed Coordinate Descent for Decentralized Learning,” IEEE Transactions on Signal and Information Processing over Networks, vol. 5, no. 4, pp. 611–627, Dec. 2019.

[4] K. Bonawitz et al., “Practical Secure Aggregation for Privacy-Preserving Machine Learning,” Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017. doi: 10.1145/3133956.3133982.
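As a simple instance of “running statistics on the partial results,” the sketch below replaces the mean with a coordinate-wise median, which a minority of corrupt clients cannot shift arbitrarily. It is a toy illustration in the spirit of [2, 3], not any of the cited schemes:

```python
def coordinate_median(updates):
    """Byzantine-robust aggregation: take the median of each coordinate
    instead of the mean, so a minority of corrupt updates cannot move
    the aggregate arbitrarily far from the honest majority."""
    def median(vals):
        s = sorted(vals)
        n = len(s)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return [median(col) for col in zip(*updates)]

honest = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1]]
corrupt = [[1000.0, -1000.0]]            # one malicious client
agg = coordinate_median(honest + corrupt)
```

With plain averaging the single corrupt update would dominate the aggregate; the median stays close to the honest updates. Note that this defense needs the federator to see individual results, which is precisely the tension with secure aggregation [4] described above.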

Prerequisites

  • Coding Theory (e.g., Channel Coding)
  • Information Theory

Supervisor:

Testing the Performance of Distributed Coordinate Gradient Descent

Keywords:
Machine learning, distributed computing, stragglers

Description

In this project, we implement distributed coordinate gradient descent in the master/worker setting on a local machine. The goal is to analyze the performance of this algorithm in the presence of stragglers.

The new idea here is to distribute the data redundantly to the workers so that stragglers do not affect the performance of the overall algorithm. We try a new scoring strategy for the coordinates that reflects the repetition of each coordinate among the workers, to guarantee that the master observes the important coordinates at each iteration with high probability.

We test our ideas on the MNIST data set and potentially on CIFAR-10.
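The redundancy idea can be illustrated with a toy single-round simulation, assuming a round-robin assignment of each coordinate to two workers and independently straggling workers; all parameters and names are hypothetical:

```python
import random

def simulate_round(num_coords, num_workers, redundancy=2,
                   straggler_prob=0.3, seed=0):
    """Assign each coordinate to `redundancy` workers in round-robin
    fashion; a coordinate's update reaches the master in this round if
    at least one of its assigned workers is not a straggler."""
    rng = random.Random(seed)
    assignment = {c: [(c * redundancy + r) % num_workers
                      for r in range(redundancy)]
                  for c in range(num_coords)}
    # each worker independently straggles with probability straggler_prob
    stragglers = {w for w in range(num_workers)
                  if rng.random() < straggler_prob}
    received = [c for c, workers in assignment.items()
                if any(w not in stragglers for w in workers)]
    return received, stragglers

received, stragglers = simulate_round(num_coords=10, num_workers=5)
```

Without redundancy, every coordinate held only by a straggler is lost for the round; with replication factor 2, a coordinate is lost only if both of its workers straggle, which happens with probability roughly `straggler_prob**2`.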

Contact

rawad.bitar@tum.de

Supervisors:

Rawad Bitar, Serge Kas Hanna

Secure Record Linkage via Multi-Party Computation

Description

Processing massive amounts of collected data by means of machine learning algorithms often becomes infeasible when carried out on a single machine. To cope with the computational requirements, distributed cloud computing was introduced: a large computational task is split into multiple parts and distributed among worker machines to parallelize the computations and thereby speed up the learning process. However, since confidential data must be shared with third parties and the outcome is threatened by potentially corrupt computations, privacy and security have to be ensured. This is particularly critical in medical environments, where we deal with individual patients' information.

To motivate the study of these challenges, the iDASH privacy and security workshop hosts a competition every year [1]. This year, the task is to develop a framework that securely links similar patient-related entries stored in different datasets without compromising privacy, for example to avoid counting the same patient twice in further processing steps. During this research internship, the student should use multi-party computation tools to develop a framework that complies with these requirements.

[1] http://www.humangenomeprivacy.org/2022/competition-tasks.html
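As one building block such a framework might use, the sketch below shows a toy two-party equality test based on additive secret sharing: revealing a random nonzero multiple of the difference of two record identifiers discloses only whether they match. In a real protocol the randomness would be generated jointly and similarity (not just equality) would be handled; the modulus and all names here are illustrative assumptions:

```python
import random

MOD = 2**61 - 1  # a prime modulus for the arithmetic shares

def share(value, rng):
    """Split `value` into two additive shares modulo MOD."""
    s0 = rng.randrange(MOD)
    return s0, (value - s0) % MOD

def private_equal(a, b, seed=0):
    """Toy two-party equality test: secret-share both record identifiers,
    then reconstruct r*(a-b) for a random nonzero r. The result is 0 iff
    a == b; otherwise it is a uniformly random nonzero value, so the
    actual difference a-b stays hidden."""
    rng = random.Random(seed)
    a0, a1 = share(a, rng)
    b0, b1 = share(b, rng)
    r = rng.randrange(1, MOD)
    # each computing party works only on its own shares
    d0 = (r * (a0 - b0)) % MOD
    d1 = (r * (a1 - b1)) % MOD
    return (d0 + d1) % MOD == 0
```

Linking two datasets then amounts to running such tests (or a batched variant) over candidate record pairs without any party seeing the other dataset in the clear.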

Prerequisites

  • Coding Theory (e.g., Channel Coding)
  • Linear Algebra
  • Information Theory (optional)

Supervisor:

Research

Publications