Master's Theses

Towards a Digital Twin for Cloud-native Mobile Networks

Description

Cloud computing and microservice-based architectures have empowered businesses to develop new, highly reliable applications that can adapt to variable workloads. In the context of 5G telco applications, both the research community and the industry have been exploring methods of using cluster orchestrators, such as Kubernetes (K8s), in mobile network deployments. More specifically, Multi-access Edge Computing and Fog Computing for 5G networks represent use cases where the principles of cloud computing can be applied, but meeting the requirements (especially regarding latency) proves challenging.

With the increased use of cloud deployments for Radio Access Networks, cluster configuration has gained importance, as an optimized configuration translates into higher performance, increased agility, and better use of resources. Empirical, experience-based human heuristics can improve the cluster configuration; however, they require advanced knowledge about the deployment and the direct intervention of the cluster operator.

The research community is currently exploring the steps towards automated, data-driven cluster configuration: the behavior of the cluster is learned with Machine Learning (ML), the cluster behavior is simulated under different configurations, and an optimizer chooses the best configuration. However, an optimized configuration is only applicable to the real-life cluster if the simulation of the cluster behavior is highly accurate.
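The following minimal Python sketch illustrates this learn-simulate-optimize loop. All names (learn_cluster_model, simulate, optimize) and the trivial averaging "model" are hypothetical stand-ins chosen for illustration; a real digital twin would fit a far richer ML model to recorded cluster metrics.

def learn_cluster_model(traces):
    """Stand-in for an ML model fitted on recorded cluster metrics.

    Here we simply average the observed latency per configuration; a real
    digital twin would train, e.g., a regression or sequence model.
    """
    samples = {}
    for config, latency in traces:
        samples.setdefault(config, []).append(latency)
    return {c: sum(v) / len(v) for c, v in samples.items()}

def simulate(model, config):
    """Predict cluster behavior (here: latency) under a candidate configuration."""
    return model.get(config, float("inf"))  # pessimistic default for unseen configs

def optimize(model, candidates):
    """Choose the configuration with the best predicted latency."""
    return min(candidates, key=lambda c: simulate(model, c))

# Toy usage: (configuration, observed latency in ms) pairs from the live cluster.
traces = [(("cpu-limit", 2), 12.0), (("cpu-limit", 4), 8.5), (("cpu-limit", 4), 9.1)]
model = learn_cluster_model(traces)
print(optimize(model, [("cpu-limit", 2), ("cpu-limit", 4)]))  # -> ('cpu-limit', 4)

The loop only pays off if simulate() tracks the real cluster closely, which is exactly the accuracy question this thesis investigates.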

The goal of this Master's Thesis is to determine the net value added by building a Digital Twin of a K8s cluster. To this end, it compares ML-based models for the simulation of three network functions with traditional, hand-crafted models. First, it implements new network functions, such as the pod scheduler and the load balancer, in an existing simulation framework for K8s clusters.

Second, the thesis compares classical, hand-crafted simulation models with data-driven methods. Namely, after implementing load balancing and pod scheduling in the simulator in the classical way, it also integrates already-trained ML model equivalents of these functions, as sketched below.
In the end, the performance metrics and accuracy of both approaches are compared.
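One plausible way to make the two variants interchangeable is a common interface, so the simulator can swap them and score both against recorded placements from the real cluster. The class and function names below (SchedulerModel, place, placement_accuracy) are illustrative assumptions, not identifiers from the existing framework.

from abc import ABC, abstractmethod

class SchedulerModel(ABC):
    @abstractmethod
    def place(self, pod_cpu, node_free_cpu):
        """Return the index of the node the pod is placed on."""

class HandCraftedScheduler(SchedulerModel):
    """Classical rule: place the pod on the node with the most free CPU."""
    def place(self, pod_cpu, node_free_cpu):
        return max(range(len(node_free_cpu)), key=lambda i: node_free_cpu[i])

class LearnedScheduler(SchedulerModel):
    """Wraps an already-trained model that predicts placements from features."""
    def __init__(self, model):
        self.model = model  # e.g., a fitted scikit-learn classifier
    def place(self, pod_cpu, node_free_cpu):
        return int(self.model.predict([[pod_cpu, *node_free_cpu]])[0])

def placement_accuracy(scheduler, traces):
    """Fraction of recorded real-cluster placements the model reproduces."""
    hits = sum(scheduler.place(cpu, free) == node for cpu, free, node in traces)
    return hits / len(traces)

# Toy check: two recorded placements as (pod_cpu, node_free_cpu, chosen_node).
traces = [(0.5, [1.0, 2.0], 1), (1.0, [4.0, 0.5], 0)]
print(placement_accuracy(HandCraftedScheduler(), traces))  # -> 1.0

Running both implementations over the same traces yields directly comparable accuracy numbers, which is precisely the comparison the thesis carries out.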

Supervisor:

Johannes Zerwas, Patrick Krämer, Navidreza Asadi

Automated Generation of Adversarial Inputs for Data Center Networks

Keywords:
adversarial; datacenter networks

Description

Today's Data Center (DC) networks are facing increasing demands and a plethora of requirements. Factors for this are the rise of Cloud Computing, Virtualization and emerging high data rate applications such as distributed Machine Learning frameworks.
Many network designs and routing algorithms covering different operational goals and requirements have been proposed.
This variety makes it hard for operators to choose the "right" solution.
Recently, some works have proposed mechanisms that automatically generate adversarial inputs to networks or networking algorithms [1,2] in order to identify weak spots, obtain a better view of their performance, and support operators' decision making. However, these works focus on specific scenarios.
The goal of this thesis is to develop or extend such mechanisms so that they can be applied to a wider range of scenarios than before.
The thesis builds upon an existing flow-level simulator in C++ and initial algorithms that generate adversarial inputs for networking problems.
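To make the idea concrete, the Python sketch below frames adversarial input generation as a black-box search: perturb a traffic matrix and keep changes that degrade the performance of the algorithm under test. It is only a conceptual illustration; the actual thesis targets the C++ flow-level simulator, and the cost function here is a toy stand-in for a metric such as flow completion time. Other search strategies, e.g., Bayesian optimization as in [2], fit the same loop.

import copy
import random

def perturb(traffic_matrix, delta=1.0):
    """Randomly shift the demand of one source-destination pair."""
    tm = copy.deepcopy(traffic_matrix)
    n = len(tm)
    i, j = random.randrange(n), random.randrange(n)
    tm[i][j] = max(0.0, tm[i][j] + random.uniform(-delta, delta))
    return tm

def adversarial_search(initial_tm, cost, iterations=1000):
    """Greedy hill climbing: keep perturbations that increase the cost."""
    best_tm, best_cost = initial_tm, cost(initial_tm)
    for _ in range(iterations):
        candidate = perturb(best_tm)
        candidate_cost = cost(candidate)
        if candidate_cost > best_cost:  # higher cost = worse for the algorithm
            best_tm, best_cost = candidate, candidate_cost
    return best_tm, best_cost

def toy_cost(tm):
    """Stand-in for a simulator run; penalizes concentrated (skewed) demand."""
    return max(max(row) for row in tm)

worst_tm, worst_cost = adversarial_search([[1.0, 2.0], [0.5, 1.0]], toy_cost)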

[1] S. Lettner and A. Blenk, "Adversarial Network Algorithm Benchmarking," in Proceedings of the 15th International Conference on Emerging Networking EXperiments and Technologies, Orlando, FL, USA, Dec. 2019, pp. 31–33, doi: 10.1145/3360468.3366779.
[2] J. Zerwas et al., "NetBOA: Self-Driving Network Benchmarking," in Proceedings of the 2019 Workshop on Network Meets AI & ML (NetAI '19), Beijing, China, 2019, pp. 8–14, doi: 10.1145/3341216.3342207.

Prerequisites

- Profound knowledge in C++

Supervisor: