Senior researcher and lecturer pursuing a habilitation with Prof. Dr.-Ing. Antonia Wachter-Zeh, Prof. Deniz Gündüz, and Prof. Sidharth Jaggi as mentors. I obtained a PhD from the ECE department of Rutgers University in January 2020. During my PhD, I held short-term visiting positions at Aalto University, the Technical University of Berlin, and the Chinese University of Hong Kong. In addition, I spent three years as a PhD candidate at the ECE department of the Illinois Institute of Technology (IIT). From August 2014 until January 2020, I was a member of the CSI lab supervised by Prof. Salim El Rouayheb.
In terms of studies, I received a master's degree in Information and Communication from the Lebanese University in 2014, after completing my thesis at IIT. I graduated as a Computer and Communication Engineer from the Lebanese University in 2013, after completing my engineering senior project at the Center of Nuclear Science and Science of Matter (CSNSM) in Paris, France, and an internship at the procsim-consulting company at EPFL, Lausanne, Switzerland, in 2012.
My research interests center around the privacy, scalability, security, and reliability of distributed systems. The applications may differ, but the goal remains the same: study the theoretical and fundamental limits of innovative storage and computing systems and design codes achieving those limits. My current work is motivated by the following research directions.
Private and secure distributed computing
DNA-based storage systems
Private and secure network coding
DFG (German Research Foundation) grant for a temporary Principal Investigator position, "Private, Secure and Efficient Codes for Distributed Machine Learning". (2023 -- 2026)
EuroTech Visiting Research Programme grant. (March 2023)
In this project, we investigate the interplay between redundancy and straggler tolerance in distributed learning.
The setting is that of a main node distributing computational tasks to available workers as part of a machine learning algorithm, e.g., training a neural network. Waiting for all workers to return their computations suffers from the presence of stragglers, i.e., slow or unresponsive nodes. The effect of the stragglers can be mitigated through the use of redundancy or by leveraging the convergence properties of the machine learning algorithm.
The goal of this work is to characterize when redundancy is needed. To that end, we first aim to analyze the convergence speed with and without redundancy. Then, we aim to design schemes that adaptively increase the redundancy to speed up convergence.
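To make the redundancy idea concrete, below is a minimal Python simulation sketch of redundancy-based straggler mitigation in distributed gradient descent. It is illustrative only and not the scheme analyzed in the references below; all names and parameters (e.g., replication, partial_gradient, the learning rate) are hypothetical choices for the sketch.

```python
# Sketch: cyclic data replication lets the main node take a gradient step
# over (almost) all of the data even when the slowest worker straggles.
# Illustrative assumptions throughout; not the scheme from the papers.
import numpy as np

rng = np.random.default_rng(0)

n_workers, replication = 6, 2          # each shard is stored on 2 workers
n_samples, dim = 600, 5
X = rng.normal(size=(n_samples, dim))  # toy least-squares problem
w_true = rng.normal(size=dim)
y = X @ w_true

# One shard per worker; worker i also stores the shard of worker i+1.
shards = np.array_split(np.arange(n_samples), n_workers)

def partial_gradient(w, idx):
    # Least-squares gradient restricted to the samples in idx.
    Xi, yi = X[idx], y[idx]
    return 2.0 * Xi.T @ (Xi @ w - yi) / len(idx)

w, lr = np.zeros(dim), 0.05
for step in range(100):
    # Simulate response times; the slowest worker is treated as a straggler.
    delays = rng.exponential(size=n_workers)
    responsive = np.argsort(delays)[: n_workers - 1]

    # Thanks to replication, the straggler's shard is still covered by a
    # responsive worker, so no data is lost in this step.
    covered = {(i + j) % n_workers for i in responsive for j in range(replication)}
    grad = np.mean([partial_gradient(w, shards[s]) for s in covered], axis=0)
    w -= lr * grad

print("distance to optimum:", np.linalg.norm(w - w_true))
```

Without replication, the gradient in each step would systematically miss the straggler's data; the trade-off studied in this project is whether that loss slows convergence enough to justify the extra storage and computation.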
R. Bitar, M. Wootters and S. El Rouayheb, Stochastic Gradient Coding for Straggler Mitigation in Distributed Learning, IEEE Journal on Selected Areas in Information Theory (JSAIT), Vol. 1, No. 1, May 2020. arXiv:1905.05383
S. Kas Hanna, R. Bitar, P. Parag, V. Dasari and S. El Rouayheb, Adaptive Stochastic Gradient Descent for Fast and Communication-Efficient Distributed Learning, preprint, arXiv:2208.03134.
Knowledge of the following topics:
Gradient descent and stochastic gradient descent
Independence and motivation to work on a research topic
Experience implementing neural networks is a plus
We consider the problem of running a distributed machine learning algorithm on the cloud. This imposes several challenges. In particular, cloud instances may have different performance/speeds. To fully leverage the performance of the instances, we want to characterize their speeds and preferentially use the fastest ones. To explore the speeds of the instances while exploiting them (i.e., assigning them computational tasks), we use the theory of multi-armed bandits (MABs).
The goal of the research internship is to start by implementing existing theoretical algorithms and possibly adapting them based on experimental observations; a minimal sketch of the bandit-based approach is given below.
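The following is a minimal sketch of a generic UCB-style selection rule for this exploration/exploitation trade-off: in each round, the main node assigns tasks to the k workers with the smallest optimistic (lower confidence bound) estimate of their mean response time. This is a hedged illustration, not the combinatorial MAB algorithm of the reference below; all names and parameters (e.g., horizon, the confidence radius) are assumptions.

```python
# Sketch: learn the fastest workers while using them. Optimism for a
# minimization problem means subtracting the confidence radius.
# Generic UCB-style heuristic; not the algorithm from the paper below.
import numpy as np

rng = np.random.default_rng(1)
n_workers, k, horizon = 10, 3, 2000
true_mean_time = rng.uniform(0.5, 2.0, size=n_workers)  # unknown to the main node

counts = np.zeros(n_workers)    # how often each worker was used
mean_est = np.zeros(n_workers)  # empirical mean response time

for t in range(1, horizon + 1):
    radius = np.sqrt(2 * np.log(t) / np.maximum(counts, 1))
    # Untried workers get lcb = -inf, so each worker is sampled at least once.
    lcb = np.where(counts == 0, -np.inf, mean_est - radius)
    chosen = np.argsort(lcb)[:k]

    # Observe noisy response times of the chosen workers and update the
    # running means incrementally.
    observed = rng.exponential(true_mean_time[chosen])
    counts[chosen] += 1
    mean_est[chosen] += (observed - mean_est[chosen]) / counts[chosen]

print("fastest workers (true):   ", np.argsort(true_mean_time)[:k])
print("fastest workers (learned):", np.argsort(mean_est)[:k])
```

In experiments, the interesting part is how such a rule behaves when response times drift over time or when the number of returned computations per round matters, which is where adaptations of the theoretical algorithms come in.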
M. Egger, R. Bitar, A. Wachter-Zeh and D. Gündüz, Efficient Distributed Machine Learning via Combinatorial Multi-Armed Bandits, submitted to IEEE Journal on Selected Areas in Communications (JSAC), 2022.