Student Work

We offer students the opportunity to participate actively in interesting, cutting-edge research topics and concrete research projects by conducting their thesis in our group.

We offer topics for your Bachelor's Thesis (BA) and Master's Thesis (MA) so that you can complete your studies with a piece of scientific work. For students of the Department of Electrical and Computer Engineering, we supervise your Forschungspraxis (FP) (research internship / industrial internship) and Ingenieurpraxis (IP) directly at our chair. For students with other specializations, such as Informatics, we offer to supervise your Interdisciplinary Project (IDP) (German: "Interdisziplinäres Projekt (IDP)"). Please contact us directly for more information.

This page also lists open positions for paid student work in projects (Werkstudent / Studentische Hilfskraft (SHK)).

Please note: For some topics, different types of theses are possible. We then adapt the goals to fit the respective requirements and workload (e.g., BA, MA, or internship).

Please note: On this page we also list student theses that are already assigned to students (Ongoing Thesis), so you can get an impression of the range of topics at our chair. Nevertheless, if you are interested in one of the ongoing topics, please do not hesitate to contact the supervisor directly and ask whether follow-up topics are planned. This is often the case.

Open Theses

Working Student for Analysis, Modeling and Simulation of Communication Networks SS2024

Description

The primary responsibilities of a working student include assisting tutors in correcting programming assignments and answering questions in Moodle. The working time is 6-7 hours per week from May to July.

Prerequisites

  • Python knowledge

Contact

polina.kutsevol@tum.de

Supervisor:

Polina Kutsevol

MPTCP vs MPQUIC in a LiFi-WiFi Network

Keywords:
Multipath communication, LiFi, Hardware

Description

Tasks:

  • Review related literature on multipath protocols
  • Set up MPTCP+LiFi-enabled hardware
  • Set up MPQUIC+LiFi-enabled hardware
  • Evaluate and compare different schedulers on a Linux testbed
  • For a Master's thesis, additionally develop a novel scheduler

 

If you are interested in this work, please send an email with a short introduction of yourself along with your CV and grade transcript.

Prerequisites

  • Experience with Linux networking
  • Strong foundation in wireless networking concepts
  • Availability to work in-presence

Supervisor:

Resource Allocation with Multi-Agent Reinforcement Learning

Keywords:
LiFi, Multipath, Reinforcement Learning, Task Offloading

Description

The goal of this thesis is to build a wireless resource allocation framework to optimize task offloading in multi-hop, multi-path networks.

The approach is to develop an optimization problem to allocate network resources to users for task offloading and to solve this problem using Multi-agent Reinforcement Learning.

Related Reading:

Z. Cao, P. Zhou, R. Li, S. Huang and D. Wu, "Multiagent Deep Reinforcement Learning for Joint Multichannel Access and Task Offloading of Mobile-Edge Computing in Industry 4.0," in IEEE Internet of Things Journal, vol. 7, no. 7, pp. 6201-6213, July 2020, doi: 10.1109/JIOT.2020.2968951.
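As a much-simplified illustration of the multi-agent idea (independent learners sharing channels, far simpler than the deep MARL in the reading above), the following sketch lets epsilon-greedy agents learn collision-free channel choices; all class names, parameters, and rewards are illustrative:

```python
import random

class QAgent:
    """Independent epsilon-greedy learner choosing one of n_channels."""
    def __init__(self, n_channels, eps=0.1, alpha=0.1):
        self.q = [0.0] * n_channels
        self.eps, self.alpha = eps, alpha

    def act(self):
        if random.random() < self.eps:
            return random.randrange(len(self.q))
        return self.greedy()

    def greedy(self):
        return max(range(len(self.q)), key=self.q.__getitem__)

    def update(self, action, reward):
        # Stateless (bandit-style) Q-update.
        self.q[action] += self.alpha * (reward - self.q[action])

def train(n_agents=3, n_channels=3, steps=5000, seed=0):
    random.seed(seed)
    agents = [QAgent(n_channels) for _ in range(n_agents)]
    for _ in range(steps):
        choices = [a.act() for a in agents]
        for agent, ch in zip(agents, choices):
            # Reward only if the chosen channel is collision-free.
            agent.update(ch, 1.0 if choices.count(ch) == 1 else 0.0)
    return [a.greedy() for a in agents]

print(train())  # per-agent channel choice after training
```

In the thesis, the single-state bandit would be replaced by a proper state space (queue lengths, channel quality) and a deep policy, but the agent/environment loop stays the same.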

If you are interested in this work, please send me an email with a short introduction of yourself along with your CV and grade transcript.

 

Prerequisites

  • Strong Python programming skills
  • Strong foundation on wireless communications
  • Experience with Reinforcement Learning

 

Contact

hansini.vijayaraghavan@tum.de

Supervisor:

Student Assistant for Wireless Sensor Networks Lab Summer Semester 2024

Description

The Wireless Sensor Networks lab offers the opportunity to develop software solutions for the wireless sensor networking system, targeting innovative applications. For the next semester, a position is available to assist the participants in learning the programming environment and during the project development phase. The lab is planned to be held on-site every Tuesday 15:00 to 17:00.

Prerequisites

  • Solid knowledge in Wireless Communication: PHY, MAC, and network layers.
  • Solid programming skills: C/C++.
  • Linux knowledge.
  • Experience with embedded systems and microcontroller programming knowledge is preferable.

 

Contact

yash.deshpande@tum.de

alexander.wietfeld@tum.de

Supervisor:

Yash Deshpande, Alexander Wietfeld

Demo Implementation: Network Planning For The Future Railway Communications

Keywords:
Demo, GUI, Web
Short Description:
This work consists of implementing a demo for the work on Network Planning For The Future Railway Communications.

Description

This work consists of implementing a demo for the work on Network Planning For The Future Railway Communications.

The idea is to program a web GUI in which users can plan the network and examine its performance under dynamic scenarios.

An example of the expected outcome can be found here.

Please send your CV and transcript of records.

Prerequisites

Basic knowledge of the following:

  • Linux
  • Python
  • Web programming (GUI)
  • GIT

Contact

Supervisor:

Cristian Bermudez Serna

Working Student for the implementation of a web server for a medical robotics application

Description

Tasks:

Implementation of a web server (backend, bridge to robotic control system, frontend) for a medical robotics application within the research project 6G-life. The setup should subsequently be optimized by integrating multiple different inputs, i.e. ultrasound images or camera streams.

 

Please note that while this project is closely related to my research, this position is offered by our project partners at MITI (https://web.med.tum.de/en/miti/home/).

Prerequisites

  • Motivation and independent way of working
  • Interest in medicine and application-oriented research
  • Experience in web server development
  • Ideally knowledge of video transmission technology, GUI design or ROS

 

 

Contact

sven.kolb@tum.de

Supervisor:

Nicolai Kröger - Sven Kolb (MITI)

Working Student for the Implementation of Camera Streams and Development of a Web Server

Description

Tasks:

Implementation of live camera streams for the research project 6G-life. In addition, a web server for a GUI is to be set up, in which, among other things, the camera streams are integrated.

 

Please note that while this work is closely related to my research, this position is offered by our project partners at MITI (https://web.med.tum.de/en/miti/home/).

Prerequisites

  • Motivation and independent way of working
  • Interest in medicine and application-oriented research
  • Experience in web server development
  • Ideally knowledge of camera transmission technology and GUI programming

 

Contact

franziska.jurosch@tum.de

Supervisor:

Nicolai Kröger - Franziska Jurosch (MITI)

Evaluating a Reliability Block Diagram (RBD)

Keywords:
Reliability block diagram, availability

Description

A reliability block diagram is a tool used to evaluate the availability of a system (a network, in our case). However, the existing software tools do not support bidirectional links.

This work aims to build a tool that can evaluate the availability of a network based on the RBD.
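As a starting point, the availability of series and parallel (redundant) blocks in an RBD follows directly from the component availabilities; a minimal sketch:

```python
from math import prod

def series(*avail):
    """All components must work: A = A1 * A2 * ..."""
    return prod(avail)

def parallel(*avail):
    """At least one redundant component works: A = 1 - (1-A1)(1-A2)..."""
    return 1 - prod(1 - a for a in avail)

# Two redundant paths, each a series of two links with availability 0.99:
path = series(0.99, 0.99)
print(parallel(path, path))  # ~0.999604
```

The thesis tool would generalize this to arbitrary (including bidirectional) topologies, where such simple series/parallel reductions no longer suffice.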

Prerequisites

Python and data structures; the Kommunikationsnetze or Communication Network Reliability course would be useful.

Contact

shakthivelu.janardhanan@tum.de

Supervisor:

Shakthivelu Janardhanan

Improving network availability: a minimal cut set approach

Keywords:
availability, reliability, Minimal

Description

A cut set is a set of components that, by failing, causes the system to fail. 

A cut set is minimal if it cannot be reduced without losing its status as a cut set.

 

For a source-destination pair in a network, the ideal conditions for maximum availability are:

a) Least possible number of minimal cut sets.
b) Largest possible size for the minimal cut sets.

 

This work aims to find the best links that, when added to the network, improve the network availability by increasing the size of the minimal cut sets. Two methods are possible: an ILP and a heuristic.
The final result must be a performance comparison of the two methods.
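For small instances, minimal cut sets can be enumerated by brute force, which may serve as a baseline before the ILP and the heuristic; a sketch on an illustrative toy graph (exponential in the number of links, so only for validation):

```python
from itertools import combinations

def connected(nodes, edges, s, t):
    """Reachability from s to t over undirected edges (DFS)."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    seen, stack = {s}, [s]
    while stack:
        for nxt in adj[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt); stack.append(nxt)
    return t in seen

def minimal_cut_sets(nodes, edges, s, t):
    """Brute-force all minimal s-t edge cut sets (small graphs only)."""
    cuts = []
    for k in range(1, len(edges) + 1):  # smallest cut sets found first
        for subset in combinations(edges, k):
            remaining = [e for e in edges if e not in subset]
            if not connected(nodes, remaining, s, t):
                # Keep only if no smaller cut set is contained in it.
                if not any(set(c) <= set(subset) for c in cuts):
                    cuts.append(subset)
    return cuts

nodes = ["s", "a", "t"]
edges = [("s", "a"), ("a", "t"), ("s", "t")]
print(minimal_cut_sets(nodes, edges, "s", "t"))
```

For this triangle there are two minimal cut sets of size two, matching the intuition that adding the direct link ("s", "t") to a bare two-hop path removes all size-one cut sets.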

 

Prerequisites

Communication Network Reliability course, Python, Integer Linear Programming

Contact

shakthivelu.janardhanan@tum.de

Supervisor:

Shakthivelu Janardhanan

Sovereignty of an optical switch

Keywords:
sovereignty, availability, optical switch

Description

An optical switch has multiple subcomponents. Usually, a switch manufacturer purchases raw materials from different subcomponent manufacturers. 
When a network operator purchases switches from different switch manufacturers, there is a possibility that switches from different manufacturers share a common subcomponent. This becomes a vulnerable single point of failure. For example, in the case of laptops, an HP laptop and a Dell laptop can have the same AMD processor.

Our work is related to the trustworthiness of the subcomponent manufacturers. The goals are to:

a) evaluate the trustworthiness of a switch based on its subcomponent manufacturers, and

b) provide guidelines for network operators to choose the most sovereign set of switches possible, avoiding single points of failure.
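A toy sketch of goal b), assuming a hypothetical catalog that maps switch models to subcomponent suppliers (all names invented): it exhaustively picks the set of switches with the fewest shared suppliers, i.e., the fewest common single points of failure.

```python
from itertools import combinations

# Hypothetical catalog: switch model -> set of subcomponent suppliers.
catalog = {
    "sw_A": {"laserco", "fpga1", "psu1"},
    "sw_B": {"laserco", "fpga2", "psu2"},
    "sw_C": {"optix", "fpga2", "psu1"},
    "sw_D": {"optix", "fpga3", "psu3"},
}

def shared_suppliers(models):
    """Suppliers appearing in more than one of the chosen switches."""
    shared = set()
    for a, b in combinations(models, 2):
        shared |= catalog[a] & catalog[b]
    return shared

def most_sovereign(k):
    """Pick k switch models minimizing common suppliers."""
    return min(combinations(catalog, k), key=lambda m: len(shared_suppliers(m)))

print(most_sovereign(2))  # → ('sw_A', 'sw_D')
```

The actual thesis would additionally weight suppliers by trustworthiness instead of treating every shared supplier equally.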

Prerequisites

Python

Contact

shakthivelu.janardhanan@tum.de

Supervisor:

Shakthivelu Janardhanan

Development of a GUI for Monitoring and Debugging a Digital Twin of QKD Networks

Keywords:
GUI
Short Description:
Quantum key distribution (QKD) is a promising technology for providing secure communication even in the presence of powerful quantum computers. Due to its time-dependent behavior and multi-layer architecture, routing policies and network performance parameters can be analyzed by emulation. Our network emulator, based on containers and network function virtualization, allows the analysis of network performance parameters and the optimization of routing policies.

Description

We are looking for a student to build a GUI that simplifies analysis of and interaction with the network emulator. The emulator is based on Containernet and includes QKD-specific network function virtualization. Currently, distributed routing is supported; it will be extended by centralized routing. Monitoring data from active QKD links are fed in to mirror realistic conditions.

  • Build a front-end displaying performance and operational data
  • Build a GUI for dynamically changing secret key rates 

Prerequisites

  • Programming skills in Python
  • Experience in front-end web development
  • Interest in security and practical concepts of guaranteed security

 

Contact

Mario Wenning mario.wenning@tum.de

Supervisor:

Mario Wenning

Development and justification of a physical layer model based on monitoring data for quantum key distribution

Short Description:
Quantum key distribution (QKD) is a promising technology for providing secure communication even in the presence of powerful quantum computers. Although the security of idealized QKD protocols is proven, assessing the practical imperfections that influence security and performance remains an open research question. Vendors of QKD devices usually provide security and performance bounds, but without transparency or proof through published measurement studies. In addition, academia provides theoretical bounds that consider distinct imperfections of limited practical relevance.

Description

We are looking for a student to analyze the monitoring data of QKD devices deployed in a realistic environment. The monitoring data has already been acquired and offers the possibility to investigate the relation between reported performance, vendor specifications, and the influence of the environment. The goal is to gain insights into performance prediction, eavesdropping detection, and influences on security by applying data science, including AI/ML.

Prerequisites

  • Programming skills in Python
  • Experience in data science and statistics
  • Foundation in information theory
  • Interest in security and practical concepts of guaranteed security
  • Willingness to analyze and understand imperfections of quantum transmission

 

Contact

Mario Wenning mario.wenning@tum.de

Supervisor:

Mario Wenning

Sustainable Core Networks in 5G with Performance Guarantees

Keywords:
5G, 5G Edge, UPF, Optimization, Heuristic

Description

With the advent of 5G cellular networks, traffic with more stringent requirements, pertaining to applications like augmented reality, virtual reality, and online gaming, is being served nowadays. However, this comes with an increased energy consumption on both the user and the network side, thereby challenging the sustainability of cellular networks. Furthermore, the in-network computing aspect exacerbates this trend even further.

Hence, it is very important to provide end-to-end sustainability, i.e., to minimize the energy consumption in the network while maintaining performance guarantees, such as the maximum latency each flow should experience. Depending on the traffic load in the network, for example, the operator can decide to shut off certain network components, like User Plane Functions (UPFs) or edge clouds, and reassign their tasks to other entities in order to keep the energy usage low.

In this thesis, the focus will be on the core network. The aforementioned decisions arise as solutions to optimization problems. To that end, the student will formulate optimization problems and solve them either analytically or using an optimization solver (e.g., Gurobi). The other part consists of conducting realistic simulations and demonstrating the improvements achieved with our approach.
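Before moving to a solver such as Gurobi, the decision problem can be prototyped by exhaustive search on a toy instance. The sketch below (all numbers and names hypothetical) keeps on the cheapest set of UPFs that still meets the total capacity demand and a simple worst-case latency bound:

```python
from itertools import combinations

# Hypothetical instance: per-UPF idle power (W), capacity (flows),
# and worst-case added latency (ms) for flows served there.
upfs = {"upf1": (100, 2, 5.0), "upf2": (120, 3, 8.0), "upf3": (90, 2, 12.0)}
n_flows, max_latency = 4, 10.0

def feasible(active):
    capacity = sum(upfs[u][1] for u in active)
    # Pessimistic check: every active UPF must respect the latency bound.
    worst_latency = max((upfs[u][2] for u in active), default=float("inf"))
    return capacity >= n_flows and worst_latency <= max_latency

def min_energy_assignment():
    """Exhaustively pick the cheapest feasible set of UPFs to keep on."""
    best = None
    for r in range(1, len(upfs) + 1):
        for active in combinations(upfs, r):
            if feasible(active):
                power = sum(upfs[u][0] for u in active)
                if best is None or power < best[0]:
                    best = (power, active)
    return best

print(min_energy_assignment())  # → (220, ('upf1', 'upf2'))
```

The real formulation would assign individual flows to UPFs and model per-flow latency, which is exactly what the ILP in the thesis would capture.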

Prerequisites

- Basic understanding of 5G Core Networks and Mobile Edge Computing (MEC).

- Experience with mathematical formulation of optimization problems.

- Programming experience with Python and Gurobi.

Supervisor:

Endri Goshi, Fidan Mehmeti

Setup and maintenance of a Molecular Communication Networks testbed for 6G and beyond

Keywords:
Internet of Bio-Nano-Things, Molecular Communication Networks, 6G, Testbed

Description

Molecular communication (MC) is an alternative to classical electromagnetic wave-based communication, where molecules are used for information exchange. MC is expected to enable in-body networks for future medical applications in the Internet of Bio-Nano Things, a vision for 6G and beyond.

We seek a working student to help us set up and maintain a molecular communication networks testbed at the chair. The testbed will be based on ink molecules transmitted through a water-filled tube system with a background flow. We plan on using spectral absorption measurements to detect the information molecules.

What you will do:

  • Set up the tubes, pumps, sensors, and other components
  • Program microcontrollers to control microfluidic pumps and spectral sensors
  • Debugging and fault detection in both software and hardware
  • Implement data collection methods
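Conceptually, the detection step maps spectral intensity readings to bits via the Beer-Lambert absorbance: high absorbance means ink is present in the tube. A minimal sketch with synthetic numbers (the sensor counts, reference intensity, and threshold are all illustrative):

```python
import math

def absorbance(intensity, reference):
    """Beer-Lambert absorbance A = -log10(I / I0)."""
    return -math.log10(intensity / reference)

def detect_bits(samples, reference, threshold=0.1):
    """Map per-slot intensity readings to bits: high absorbance = ink = 1."""
    return [1 if absorbance(i, reference) > threshold else 0 for i in samples]

# Synthetic readings from a hypothetical spectral sensor (I0 = 1000 counts):
print(detect_bits([990, 600, 980, 550], reference=1000))  # → [0, 1, 0, 1]
```

On the testbed, the same logic would run against real spectral sensor channels, with the threshold calibrated to the background flow.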

Prerequisites

What you need:

  • Interest in future and unconventional communication methods
  • Willingness to learn new things and a hands-on mentality
  • Experience in (low-level) programming (e.g., C/C++), preferably with microcontrollers/Arduinos for controlling sensors

Nice to have:

  • Already worked with spectral sensors
  • Experience with CAD, 3D printing, and soldering

 

Contact

  • sebastian.a.schmidt@tum.de
  • alexander.wietfeld@tum.de

Supervisor:

Sebastian Schmidt, Alexander Wietfeld

Distributed Deep Learning for Video Analytics

Keywords:
Distributed Deep Learning, Distributed Computing, Video Analytics, Edge Computing, Edge AI

Description

In recent years, deep learning-based algorithms have demonstrated superior accuracy in video analysis tasks, and scaling up such models, i.e., designing and training larger models with more parameters, can improve their accuracy even more.

On the other hand, due to strict latency requirements as well as privacy concerns, there is a tendency towards deploying video analysis tasks close to the data sources, i.e., at the edge. However, compared to dedicated cloud infrastructures, edge devices (e.g., smartphones and IoT devices) as well as edge clouds are constrained in terms of compute, memory, and storage resources, which leads to a trade-off between response time and accuracy.

Considering video analysis tasks such as image classification and object detection as the application at the heart of this project, the goal is to evaluate different deep learning model distribution techniques for a scenario of interest.

Supervisor:

Navidreza Asadi

Edge AI in Adversarial Environment: A Simplistic Byzantine Scenario

Keywords:
Distributed Deep Learning, Distributed Computing, Byzantine Attack, Adversarial Inference

Description

This project considers an environment consisting of several low-performance machines connected over a network.

Edge AI has drawn the attention of both academia and industry as a way to bring intelligence to edge devices, enhancing data privacy and reducing latency.

Prior work investigated improving the accuracy-latency trade-off of Edge AI by distributing a model across multiple available and idle machines. Building on top of those works, this project adds one more dimension: a scenario where $f$ out of $n$ contributing nodes are adversarial.

Therefore, for each data sample, an adversary (1) may not provide an output (which can also be considered a faulty node) or (2) may provide an arbitrary (i.e., randomly generated) output.

The goal is to evaluate the robustness of different parallelism techniques in terms of achievable accuracy in the presence of malicious contributors and/or faulty nodes.

Note that contrary to the mainstream existing literature, this project mainly focuses on the inference (i.e., serving) phase of deep learning algorithms, and although robustness of the training phase can be considered as well, it has a much lower priority.
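Assuming an ensemble-style setup in which every contributing node emits a full prediction (one of several possible parallelism schemes, not necessarily the one studied here), the simplest robust aggregation is a majority vote that tolerates both silent and arbitrary answers; a minimal sketch:

```python
from collections import Counter

def robust_vote(predictions):
    """Majority vote over node outputs; None marks a node that did not answer."""
    votes = Counter(p for p in predictions if p is not None)
    return votes.most_common(1)[0][0] if votes else None

# 5 nodes classify one sample; node 4 is silent, node 5 answers arbitrarily.
print(robust_vote(["cat", "cat", "cat", None, "dog"]))  # → cat
```

With $n$ voters, a plain majority vote tolerates up to $f < n/2$ Byzantine nodes per sample; schemes where nodes hold model *parts* rather than full replicas need more elaborate defenses, which is what this project would evaluate.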

Supervisor:

Navidreza Asadi

On the Efficiency of Deep Learning Parallelism Schemes

Keywords:
Distributed Deep Learning, Parallel Computing, Inference, AI Serving

Description

Deep Learning models are becoming increasingly larger so that most of the state-of-the-art model architectures are either too big to be deployed on a single machine or cause performance issues such as undesired delays.

This is not only true for the largest models being deployed in high performance cloud infrastructures but also for smaller and more efficient models that are designed to have fewer parameters (and hence, lower accuracy) to be deployed on edge devices.

That said, this project considers the second environment, where there are multiple resource-constrained machines connected through a network.

Continuing the research towards distributing deep learning models across multiple machines, the objective is to generate more efficient variants/submodels compared to existing deep learning parallelism algorithms.

Note that this project mainly focuses on the inference (i.e., serving) phase of deep learning algorithms, and although efficiency of the training phase can be considered as well, it has a much lower priority.

Supervisor:

Navidreza Asadi

Optimizing Communication Efficiency of Deep Learning Parallelism Techniques in the Inference Phase

Keywords:
Distributed Deep Learning, Parallel Computing, Inference, Communication Efficiency

Description

Deep Learning models are becoming increasingly larger so that most of the state-of-the-art model architectures are either too big to be deployed on a single machine or cause performance issues such as undesired delays. 

This is not only true for the largest models being deployed in high performance cloud infrastructures but also for smaller and more efficient models that are designed to have fewer parameters (and hence, lower accuracy) to be deployed on edge devices.

That said, this project considers the second environment where there are multiple resource constrained machines connected through a network. 

When distributing deep learning models across multiple compute nodes, trying to realize parallelism, certain algorithms (e.g., Model Parallelism) are not able to achieve the desired performance in terms of latency, mainly due to (1) communication cost of intermediate tensors; and (2) inter-operator blocking.

This project consists of multiple sub-projects each can be taken separately.

In the context of Model Parallelism, two potential modifications can be considered: 

  • Pipeline parallelism, by delaying the inference of the first few data samples, assuming a live stream of input data.
  • Finding certain points in deep learning architectures, or modifying the architecture itself, so that for each data sample some sub-parts of the model can be filtered out, thereby reducing the transmitted data while still achieving comparable accuracy.

Class and Variant Parallelism improve inter-node communication significantly. However, the input data needs to be shared between contributing nodes. The goal is to propose a technique to transmit less data, and to find a good trade-off between computation and communication.

Note that this project mainly focuses on the inference (i.e., serving) phase of deep learning algorithms, and although efficiency of the training phase can be considered as well, it has a much lower priority.

Supervisor:

Navidreza Asadi

Load Generation for Benchmarking Kubernetes Autoscaler

Keywords:
Horizontal Pod Autoscaler (HPA), Kubernetes (K8s), Benchmarking

Description

Kubernetes (K8s) has become the de facto standard for orchestrating containerized applications. K8s is an open-source framework which, among many features, provides automated scaling and management of services.

Considering a microservice-based architecture, where each application is composed of multiple independent services (usually each service provides a single functionality), K8s' Horizontal Pod Autoscaler (HPA) can be leveraged to dynamically change the number of instances (also known as Pods) based on workload and incoming request pattern.

The main focus of this project is to benchmark the HPA behavior of a Kubernetes cluster running a microservice-based application with multiple services chained together. That is, there are dependencies between services, and a request sent to one service may trigger one or multiple calls to other services.

This project aims to generate incoming request load patterns that lead to an increase in either the operational cost of the Kubernetes cluster or response time of the requests. This potentially helps to identify corner cases of the algorithm and/or weak spots of the system; hence called adversarial benchmarking.

The applications can be selected from commonly used benchmarks such as DeathStarBench*. The objective is to investigate the dependencies between services and how different sequences of incoming request patterns affect each service as well as the whole system.

* https://github.com/delimitrou/DeathStarBench/blob/master/hotelReservation/README.md
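A load pattern of the kind described above can be sketched as Poisson arrivals with an injected burst; all rates and timings below are illustrative, and a real benchmark would replay these timestamps against the cluster with an HTTP load tool:

```python
import random

def request_times(duration_s, base_rps=10, burst_rps=200,
                  burst_start=30, burst_len=10, seed=0):
    """Poisson arrival timestamps at base_rps with one adversarial burst."""
    random.seed(seed)
    t, times = 0.0, []
    while t < duration_s:
        in_burst = burst_start <= t < burst_start + burst_len
        rate = burst_rps if in_burst else base_rps
        t += random.expovariate(rate)  # exponential inter-arrival time
        times.append(t)
    return times

times = request_times(60)
print(len(times))  # total requests over one minute (~2500 with these rates)
```

Searching over burst position, length, and amplitude, and observing HPA's scaling lag and cost, is then the adversarial part of the benchmark.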

Supervisor:

Navidreza Asadi

Demo implementation: Multi-domain redundant network routing

Keywords:
multi-domain, SDN
Short Description:
This work consists of implementing a multi-domain SDN network.

Description

Software-Defined Networking (SDN) is a network paradigm where the control and data planes are decoupled. The control plane consists of a controller, which manages network functionality and can be deployed on one or multiple servers. The data plane consists of forwarding entities which are instructed by the controller on how to forward traffic.

A network can be divided into multiple domains to ease its management or limit ownership. In multi-domain SDN, each domain has a controller which is responsible for its management. Controllers in different domains cooperate with each other to provide multi-domain end-to-end connectivity.

In this work, the student will receive an abstract topology representing the multi-domain network. This information has to be used to build a virtual network that can be used to test different algorithms. The implementation should include a GUI to visualize the topology and interact with the different elements of the network.

Please send your CV and transcript of records.

Prerequisites

Basic knowledge of the following:

  • Linux
  • Networking/SDN
  • Python
  • Object-Oriented Programming
  • Web programming (GUI)

Contact

Supervisor:

Cristian Bermudez Serna

Working student for the PCN lab

Keywords:
SDN, P4
Short Description:
The Programmable Communication Networks lab offers the opportunity to work on interesting and entertaining projects related to the Software-Defined Networking (SDN) and Programmable Data Planes (PDP) paradigms. For the next semester, a position is available to assist in the lab by supervising students during the lab sessions.

Description

The Programmable Communication Networks lab offers the opportunity to work on interesting and entertaining projects related to the Software-Defined Networking (SDN) and Programmable Data Planes (PDP) paradigms. For the next semester, a position is available to assist in the lab by supervising students during the lab sessions.

Prerequisites

  • Solid knowledge in computer networking (TCP/IP, SDN, P4)
  • Solid knowledge of networking tools and Linux (iperf, ssh, etc.)
  • Good programming skills: C/C++, Python

Contact

Please send your transcript of records and CV to:

Cristian Bermudez Serna - cristian.bermudez-serna@tum.de

Nicolai Kröger - nicolai.kroeger@tum.de

Supervisor:

Cristian Bermudez Serna, Nicolai Kröger, Kaan Aykurt

Ongoing Theses (already assigned)

Bachelor's Theses

Investigation of Various Resource Allocation Granularities in the Frequency Domain of a 5G Radio Access Network

Description

Resource efficiency is a key design principle for future 6G radio systems. To achieve it, a high resource utilization must be accomplished, which can be realized by allocating to each user exactly the number of resources it requires. The number of required resources per user highly depends on the experienced channel conditions, which can be determined by the Channel Quality Indicator (CQI) in the Downlink (DL). To determine the CQIs, reference signals are sent by the Base Station (BS), and the CQI is periodically reported by the User Equipment (UE). CQI reporting can either be done for the entire Bandwidth Part (BWP), i.e., wideband CQI reporting, or for a subgroup of frequency resources, i.e., subband CQI reporting. In 5G Radio Access Networks (RANs), the unit of resource allocation in the frequency domain is a Resource Block Group (RBG), which consists of multiple Physical Resource Blocks (PRBs). The number of PRBs that make up a single RBG is fixed and depends on the employed BWP size [1]. Since the CQI determines how much data can be sent with a single RBG, it highly influences the number of required resources of a user. Hence, the accuracy and granularity of the CQI in terms of the frequency spectrum impact the overall resource utilization that can be achieved. Furthermore, the number of PRBs making up one RBG influences the achieved resource utilization, as the difference between the number of allocated and required resources decreases with a decreasing level of granularity.

The goal of this thesis is to investigate the trade-off between a higher CQI reporting granularity, which comes with an increased signaling overhead, and the increased resource efficiency achieved by a better mapping between the actual channel conditions and the reported CQI value. Moreover, the impact of smaller RBG sizes should be compared against the increase in signaling overhead allowing for a more granular resource allocation. To this end, the student first needs to gain a thorough understanding of the DL reference signals and CQI reporting in 5G as well as the DL resource allocation. This can be achieved by a detailed study of the ETSI 5G standards. Afterwards, the student is expected to conduct extensive simulations for different parameter settings to numerically evaluate the different trade-offs. Optionally, depending on the thesis progress, a similar analysis can be conducted for an UL setup, or measurements can be conducted in a testbed setup that uses OpenAirInterface (OAI) [2].

References

[1] ETSI, "5G; NR; Physical layer procedures for data: 3GPP TS 38.214 version 17.5.0 Release 17," www.etsi.org, 2023. Technical Specification.

[2] OpenAirInterface, "OpenAirInterface | 5G software alliance for democratising wireless innovation," 2024. https://openairinterface.org [Accessed: January 31, 2024].
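For illustration, the fixed mapping from BWP size to nominal RBG size can be tabulated directly. The table below follows our reading of TS 38.214 (nominal RBG size P, configurations 1 and 2) and should be verified against the exact release used in the thesis:

```python
import math

def rbg_size(bwp_prbs, config=1):
    """Nominal RBG size P as a function of the BWP size in PRBs."""
    # (upper BWP size, P for config 1, P for config 2), per TS 38.214.
    table = [(36, 2, 4), (72, 4, 8), (144, 8, 16), (275, 16, 16)]
    for limit, p1, p2 in table:
        if bwp_prbs <= limit:
            return p1 if config == 1 else p2
    raise ValueError("BWP larger than 275 PRBs")

def num_rbgs(bwp_prbs, config=1):
    """Number of RBGs for a BWP starting at PRB 0 (simplified)."""
    return math.ceil(bwp_prbs / rbg_size(bwp_prbs, config))

print(rbg_size(106), num_rbgs(106))  # a 106-PRB BWP → P = 8, 14 RBGs
```

Such a helper makes the trade-off concrete: a smaller P yields more RBGs (finer allocation, more signaling), a larger P fewer RBGs (coarser allocation, less signaling).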

Supervisor:

Modeling and implementation of quantum entangled photon sources

Description

What are the different models for quantum entangled photon sources, and how can they be implemented in a quantum network simulator?

Supervisor:

Benedikt Baier

Measurement Design and Performance Evaluation in srsRAN

Description

Development of a performance test framework for srsRAN and evaluating the performance using this tool.

Supervisor:

Nicolai Kröger

Data plane performance measurements

Keywords:
P4, SDN
Short Description:
This work consists of performing measurements for a given P4 program on different devices.

Description

Software-Defined Networking (SDN) is a network paradigm where the control and data planes are decoupled. The control plane consists of a controller, which manages network functionality and can be deployed on one or multiple servers. The data plane consists of forwarding devices which are instructed by the controller on how to forward traffic.

P4 is a domain-specific programming language which can be used to define the functionality of forwarding devices such as virtual or hardware switches and SmartNICs.

This work consists of performing measurements for a given P4 program on different devices. For that, a small P4-enabled virtual network will be used to perform some measurements. Later, data will also be collected from hardware devices such as switches and SmartNICs. Measurements should be depicted in a GUI for subsequent analysis.

Prerequisites

Basic knowledge of the following:

  • Linux
  • Networking/SDN
  • Python/C
  • Web programming (GUI).

Please send your CV and transcript of records.

Contact

Supervisor:

Cristian Bermudez Serna

Measuring the Throughput of quantized neural networks on P4 devices

Description

Implement a quantized neural network in P4 and evaluate the throughput of feed-forward networks and networks with attention mechanisms on P4 hardware.
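As background, the core of network quantization is mapping float weights to small integers plus a scale factor, which is what makes inference expressible in the integer arithmetic available on P4 targets. A minimal symmetric-quantization sketch, independent of P4, with illustrative values:

```python
def quantize(weights, bits=8):
    """Symmetric uniform quantization: floats -> signed ints + scale."""
    qmax = 2 ** (bits - 1) - 1          # 127 for int8
    max_abs = max(abs(w) for w in weights)
    q = [round(w * qmax / max_abs) for w in weights]
    return q, max_abs / qmax

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.5, -0.25, 0.125, -1.0]
q, scale = quantize(w)
print(q)                     # → [64, -32, 16, -127]
print(dequantize(q, scale))  # approximate reconstruction of w
```

On the P4 target, only the integer weights and integer multiply-accumulate operations remain in the data plane, which is what the throughput measurement would exercise.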

Supervisor:

An SCTP Load Balancer for Kubernetes to aid RAN-Core Communication

Keywords:
5G, SCTP, Kubernetes, RAN, 5G Core, gNB, AMF

Description

Cloud Native deployments of the 5G Core network are gaining increasing interest, and many providers are exploring these options. One of the key technologies that will be used to deploy these networks is Kubernetes (K8s).

In 5G, the NG Application Protocol (NGAP) is used for gNB-AMF (RAN-Core) communication. NGAP uses SCTP as its Transport Layer protocol. In order to load-balance traffic coming from the gNB towards a resilient cluster of AMF instances, an L4 load balancer needs to be deployed in the Kubernetes cluster.

The goal of this project is to develop an SCTP load balancer to be used in a 5G Core network to aid the communication between the RAN and the Core.
The project will be developed using the language Go (https://golang.org/).

Prerequisites

- General knowledge about Mobile Networks (RAN & Core).
- Good knowledge of Cloud Orchestration tools like Kubernetes.
- Strong programming skills. Knowledge of Go (https://golang.org/) is a plus.

Contact

endri.goshi@tum.de

Supervisor:

Endri Goshi

Development of an East/West API for SD-RAN control communication

Description

Software-Defined Radio Access Network (SD-RAN) is receiving a lot of attention in 5G networks, since it offers means for a more flexible and programmable mobile network architecture.

The heart of the SD-RAN architecture is the so-called SD-RAN controller. Initial prototypes have been developed and used in commercial and academic testbeds; however, most solutions contain only a single SD-RAN controller. A single controller is also a single point of failure for the system, not only due to potential controller failures but also due to the high load induced by the devices in the data plane.

To this end, a multi-controller control plane often becomes a reasonable choice. However, it renders the communication among the controllers more challenging, since they often need to exchange control information with each other to keep an up-to-date network state. Unfortunately, there is currently no protocol available for such communication.

The aim of this work is the development and implementation of an East/West API for SD-RAN controller communication in line with 5G standardization. The protocol should enable the exchange of information among SD-RAN controllers regarding UEs, BSs, and wireless channel state, and allow for control plane migration among controllers.
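To make the idea concrete, here is a minimal, purely illustrative sketch of what one east/west message type and its wire encoding could look like; the message name, fields, and JSON encoding are assumptions for illustration, not part of any standard:

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class UEStateUpdate:
    """Hypothetical east/west message: one controller shares UE context."""
    ue_id: str
    serving_bs: str
    cqi: int                 # last reported channel quality indicator
    owner_controller: str    # controller currently owning this UE's state

def encode(msg: UEStateUpdate) -> bytes:
    return json.dumps(asdict(msg)).encode()

def decode(raw: bytes) -> UEStateUpdate:
    return UEStateUpdate(**json.loads(raw))

update = UEStateUpdate("ue-42", "bs-7", cqi=11, owner_controller="ctrl-east")
assert decode(encode(update)) == update   # round-trips over any socket transport
```

Control plane migration would then amount to sending such messages for a batch of UEs and atomically switching the `owner_controller` field.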

Prerequisites

  • Experience with programming languages Python/C++.
  • Experience with socket programming.
  • Knowledge about SDN is a must.
  • Knowledge about 4G/5G networks is a plus.

Supervisor:

Master's Theses

Exploring Multi-Link Operation (MLO) for Reliable Transmissions in Wi-Fi 7 and Beyond

Description

How can industrial automation traffic be efficiently allocated to multiple links in MLO to meet latency requirements?

  • Implementation of State-of-the-Art policies for our Industrial Automation Application using NS3.
  • Analyzing their effect and coming up with a novel policy/classification based on our traffic type.

Supervisor:

Alba Jano, Hansini Vijayaraghavan - Ben Schneider (Siemens)

A novel sparse dMIMO transmission scheme for efficient data communication

Description

For 6G radio systems it is key to achieve high capacity, coverage, and energy efficiency. Distributed massive MIMO (dMIMO) systems are one of the frequently proposed 6G concepts to help meet these challenging requirements. dMIMO is very similar to a JT CoMP system, which has been researched for many years, even though the reported performance gains over classical mMIMO systems are typically small to moderate. There are a number of well-known challenges, such as high inter-cooperation-area interference, channel aging for a high number of involved radio channels, and huge processing complexity due to the large channel matrices for a high number of cooperating transmission points.

To overcome at least some of these issues, a novel so-called 'sparse' dMIMO system has been proposed, where the conventional transmission of user data is replaced by a novel 'start stop bit' transmission scheme. The main feature is the sparse resource usage for data transmissions, which is interesting for dMIMO systems as it can reduce the inter-cooperation-area interference by potentially 98%. Similarly, the complexity of the dMIMO precoding might be reduced by potentially 98%, as most resource elements of a physical resource block will be set to zero, i.e., do not need any precoding.

The scope of the master thesis is to evaluate the novel 6G sparse dMIMO concept, verify the claimed benefits, identify new challenges, and propose implementation concepts overcoming these challenges. Where useful, AI/ML-based solutions should be applied to achieve the highest performance with the lowest complexity.

Supervisor:

Valentin Haider - Wolfgang Zirwas, Bernhard Wegmann, Brenda Vilas-Boas (Nokia)

Evaluation of Time Offset in 5G NR using USRPs

Short Description:
To evaluate the effect of Time Offset between base stations in 5G NR.

Description

The 3GPP standard has established a maximum time synchronization error between base stations; however, an earlier simulation suggests that an indoor 5G network may be able to tolerate a higher value. To validate this result, real hardware is needed to build a small network and test the relevant aspects. Upper error limits serve a useful purpose in many network applications: knowing the maximum permissible error allows developers to derive network requirements and aids in the construction of networks.

Supervisor:

Yash Deshpande

Implementation and comparison of different techniques for disaggregated optical access network planning

Keywords:
PON, ILP
Short Description:
Implementation and comparison of different techniques for efficient and cost-effective planning of disaggregated optical access networks with protection from failures

Description

Implementation and comparison of different techniques for efficient and cost-effective planning of disaggregated optical access networks with protection from failures.

Tasks:

  • Topology generation using Gabriel Graphs
  • Executing heuristic approach from state of the art
  • ILP formulation and implementation, including protection schemes
  • Exploration of ML alternatives
  • Results evaluation

Prerequisites

Knowledge of:

  • Network planning
  • Optical networks
  • ILP formulation
  • Python

Contact

cristian.bermudez-serna@tum.de

Supervisor:

Cristian Bermudez Serna

RTT-guided Route Servers at IXPs

Description

Problem: BGP is performance-agnostic

Solution: incorporate a delay-related metric into the best-path selection process.

Approach: Estimate the round-trip propagation delay to destinations (/24 prefixes) within the routing table of the IXP

Goal: Evaluate if it is possible to outperform BGP’s route selection criterion, in terms of latency, with a measurement-based approach.
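The selection criterion can be illustrated with a toy comparison between BGP's shortest-AS-path tie-breaker and a median-RTT rule; the routes and RTT samples below are invented for illustration:

```python
from statistics import median

# Hypothetical routes toward the same /24 prefix, with probed RTTs in ms
routes = {
    "via AS1": {"as_path_len": 2, "rtt_samples": [48.0, 52.1, 50.3]},
    "via AS2": {"as_path_len": 3, "rtt_samples": [21.4, 19.8, 22.5]},
}

def bgp_best(routes):
    """Classic performance-agnostic criterion: shortest AS path wins."""
    return min(routes, key=lambda r: routes[r]["as_path_len"])

def rtt_best(routes):
    """Measurement-based rule: lowest median round-trip delay wins."""
    return min(routes, key=lambda r: median(routes[r]["rtt_samples"]))
```

Here the shorter AS path hides the slower physical path, which is exactly the mismatch the work would quantify at IXP scale.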

Supervisor:

Maximilian Stephan - Matthias Wichtlhuber (DE-CIX)

LFM Deep Dive: Understanding the Impact on 5G

Keywords:
5G, AKA, LFM, Security

Description

Linkability of Failure Messages (LFM) is a security hole in the Authentication and Key Agreement (AKA) procedure.

The LFM flaw was first reported in 3G [2] and has also been proven to work in 5G [1]. Compared to IMSI catchers, using the flaw to identify nearby subscribers has two limitations. First, the attacker has to know the ID of the person of interest they are looking for. Only subscribers with known IDs can be detected; it is not possible to find the ID of a new subscriber without knowing or guessing it.

Second, LFM requires an attacker to probe every new device that connects to their fake base station for every ID that they are looking for. In addition to probing every new device, the attacker also needs to contact an authentic mobile network to obtain authentication requests for each person of interest.

Due to these limitations, the LFM flaw is less powerful than previously used IMSI catchers. The objective of this project is to examine the scalability and practicability of exploiting the flaw on a larger scale.

Supervisor:

Oliver Zeidler - Julian Sturm

Network Programmability-based Security Mechanisms in Optical Access Networks

Keywords:
Network Programmability, Optical Access Networks, SDN, P4
Short Description:
The advent of Software-Defined Networking (SDN) has revolutionized the way networks are managed and secured. In the context of optical access networks, where performance and security are paramount, it is crucial to develop advanced mechanisms for safeguarding against threats like TCP-SYN flood attacks. This research proposal aims to investigate a novel approach to thwarting such attacks, leveraging SDN controllers and programmable switches, specifically in optical access networks.

Description

The advent of Software-Defined Networking (SDN) has revolutionized the way networks are managed and secured. In the context of optical access networks, where performance and security are paramount, it is crucial to develop advanced mechanisms for safeguarding against threats like TCP-SYN flood attacks. This research proposal aims to investigate a novel approach to thwarting such attacks, leveraging SDN controllers and programmable switches, specifically in optical access networks.

How can SDN controllers and programmable switches be employed to effectively detect and mitigate TCP-SYN flood attacks in optical access networks, using authentication via a modified SYN-ACK exchange and actuating triggers, while maintaining network performance and reliability?

This research aims to contribute to the field of network security and SDN by providing a cutting-edge solution for mitigating TCP-SYN flood attacks in optical access networks. The expected outcomes include:

1. An innovative approach to SYN flood attack mitigation, leveraging SYN-ACK exchange and P4-based actuating triggers.

2. Insights into the performance and scalability of the proposed solution in optical access network scenarios.

3. A comprehensive evaluation of alternative SYN flood attack mitigation techniques, aiding network administrators in selecting the most appropriate method.
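One building block of the modified SYN-ACK exchange could resemble classic SYN cookies, where the switch encodes a verifiable token in the SYN-ACK instead of keeping per-connection state. The sketch below is a simplified Python illustration of that idea (the secret, hash truncation, and time window are arbitrary choices), not the proposed P4 design:

```python
import hashlib
import time

SECRET = b"deployment-secret"   # hypothetical secret shared by the data plane

def syn_cookie(src_ip, src_port, dst_ip, dst_port, t=None):
    """Stateless cookie placed in the SYN-ACK sequence number.

    The switch keeps no per-connection state until the client echoes the
    cookie back, so spoofed SYN floods cannot exhaust its tables.
    """
    t = int(time.time() // 64) if t is None else t     # coarse time window
    msg = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{t}".encode()
    return int.from_bytes(hashlib.sha256(SECRET + msg).digest()[:4], "big")

def valid_ack(src_ip, src_port, dst_ip, dst_port, echoed, t=None):
    """Only clients that completed the real exchange produce a valid echo."""
    return echoed == syn_cookie(src_ip, src_port, dst_ip, dst_port, t)
```

In the P4 setting, the hash would be computed with the target's hash externs, and a valid echo would trigger installation of the flow into the allow table.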

Prerequisites

  • Machine Learning
  • Python and P4 programming
  • Knowledge of Software-Defined Networking

 

Contact

cristian.bermudez-serna@tum.de

Supervisor:

Cristian Bermudez Serna

Exploration of Machine Learning for In-network Prediction and Classification

Keywords:
Machine Learning, P4, SDN
Short Description:
A promising solution is to include a Machine Learning algorithm into the Data Plane. Specifically, Decision Trees (DT) and Random Forests (RF) can be used to do line-rate classification. Since Decision Trees do not require complex mathematical operations, they can be easily deployed into the programmable switches using P4 language. Either a per-packet or a per-flow approach, each with its advantages and its drawbacks, will automate the decision of the switch of how to handle the incoming traffic instead of always forwarding it first to the controller.

Description

Software-defined networks (SDNs) have made data traffic routing much more convenient. The functionality of the additional controller can be used, e.g., for detecting network threats such as DoS attacks, or for load balancing by redirecting data traffic. The initial idea of SDN is that each time a new packet enters the network, it is first forwarded to the controller to be checked. The controller then decides on which route the packet shall be sent inside the network, or tells the network to drop the packet, for instance if it is a threat. Each of the switches then saves this information in its match-action tables. However, this model cannot scale in large networks with thousands or even millions of different packets in transit, since sending every single packet to the controller would add latency.

Therefore, a promising solution is to include a Machine Learning algorithm in this process. Specifically, Decision Trees (DTs) and Random Forests (RFs) can be used for line-rate classification. Since decision trees do not require complex mathematical operations, they can easily be deployed on programmable switches using the P4 language. Either a per-packet or a per-flow approach, each with its advantages and drawbacks, will automate the switch's decision on how to handle incoming traffic instead of always forwarding it to the controller first.

In this master thesis, the realisation of a DT on P4 switches will be tested. First, a functioning DT based on a real traffic dataset will be implemented; both variations (per-packet and per-flow) will be taken into consideration. The second step is to translate the algorithm onto the P4 switches. Afterwards, the prediction performance will be analysed. The final step is to compare the ML approach to the non-ML approach and draw conclusions from the results.
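To illustrate the translation step, the toy example below flattens a hand-written decision tree over two packet features into range-match table entries, which is roughly the shape a control plane would install into a P4 range-match table. The tree, features, and thresholds are invented for illustration:

```python
# A hypothetical trained decision tree over two packet-header features.
# Internal nodes are (feature, threshold, left, right); leaves are actions.
TREE = ("pkt_len", 128,
        ("proto", 6, "drop", "forward"),   # short packets: proto <= 6 -> drop
        "forward")                         # long packets -> forward

def classify(features, node=TREE):
    """Reference software classification (what the controller trains against)."""
    if isinstance(node, str):
        return node
    feat, thr, left, right = node
    return classify(features, left if features[feat] <= thr else right)

def to_range_entries(node, bounds=None, entries=None):
    """Flatten the tree into per-leaf range-match entries for the data plane.
    Each entry maps feature ranges to an action; untested features stay
    at their full range, i.e. a wildcard match."""
    bounds = {} if bounds is None else bounds
    entries = [] if entries is None else entries
    if isinstance(node, str):
        entries.append((dict(bounds), node))
        return entries
    feat, thr, left, right = node
    lo, hi = bounds.get(feat, (0, 2**16 - 1))
    bounds[feat] = (lo, thr)
    to_range_entries(left, bounds, entries)
    bounds[feat] = (thr + 1, hi)
    to_range_entries(right, bounds, entries)
    bounds[feat] = (lo, hi)               # restore for sibling subtrees
    return entries

entries = to_range_entries(TREE)          # one table entry per leaf
```

The per-flow variant would classify on aggregated flow statistics instead of per-packet header fields, with the same table-flattening idea.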

Prerequisites

  • Machine Learning
  • Python and P4 programming
  • Knowledge of Software-Defined Networking

Contact

cristian.bermudez-serna@tum.de

Supervisor:

Cristian Bermudez Serna

Performance Evaluation of a 6G UAM Connected Sensor Fusion System

Description

The master thesis aims to develop a connected sensor fusion system focusing on its application in Urban Air Mobility localization. By gathering data from multiple sensors, the air vehicles (AVs) will be able to better estimate the airspace view and improve their route planning.  The performance of IoT protocols within the context of a 6G system will be assessed. The study also seeks to evaluate the impact of network performance factors, such as delay and packet loss, on the accuracy of the fusion data. Additionally, the thesis will investigate the impact of a semantic-aware transport layer on the performance of the fusion system. Ultimately, the research not only contributes to the advancement of UAM technology but also aligns with the emerging 6G paradigm, offering a more connected and efficient solution for tactical deconfliction in airspace navigation, making it safer and more reliable.

Supervisor:

Polina Kutsevol - Markus Klügel (Airbus)

Remote Monitoring of Correlated Sources Over Random Access Channels in IoT Systems

Description

The thesis studies a Markov model of two correlated sources (X and Y) transmitting data over a wireless channel for remote estimation. The objective is to develop strong theoretical insight for modelling an estimator that optimizes the estimation error and the age of information in a wireless communication system. We aim to study various estimation strategies and to demonstrate the optimal method for the given conditions (such as the correlation between the sources and the Markov model parameters). The implementation is carried out by simulating the above theoretical concepts in MATLAB under different conditions.

Supervisor:

Polina Kutsevol - Dr. Andrea Munari (DLR - Deutsches Zentrum für Luft- und Raumfahrt)

Analysis of UE-initiated Signaling Storms and Their Impact on 5G Network Security

Keywords:
5G, Signaling Storm, UE initiated attacks, DDoS

Description

A signaling storm is a specific type of DDoS attack that emerges from frequent small-scale signaling activities of a group of compromised UEs. Typically, signaling messages are exchanged between UEs and the network for establishing communication sessions and managing network resources. Signaling attacks, however, abuse regular procedures to generate a high number of signaling messages within a short period. The resulting excessive signaling load increases network congestion and consumes resources. In 5G, a UE must send a request to register itself and establish communication with the 5G core. These initial registration request messages contain UE-related information about identity, location, and capabilities.

A recent research internship focused on signaling storms revealed that a flood of initial registration requests can generate significant signaling load and stress the network core. In the scope of that internship, a simulation environment was created using UERANSIM and Open5GS to investigate the impact of repetitive initial registration requests from a botnet comprising hundreds of UEs on control plane resources.

The master thesis involves a comprehensive research study based on this initial observation to identify other UE-initiated signaling attack scenarios that abuse regular UE signaling for registration processes, inter-slice handovers, and mobility handovers. Furthermore, assessing the impact of these scenarios and exploring possible detection methodologies are crucial parts of the intended study.

Motivation: 5G networks are designed to serve three types of connected services: Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communications (URLLC), and Massive Machine-Type Communications (mMTC). The higher throughput, reliable connections, and low-latency capabilities of 5G networks should meet users' requirements for uninterrupted and robust data exchange. Both industry and individual users heavily rely on seamless connectivity. However, numerous studies have shown that 5G networks are vulnerable to signaling threats and DDoS attacks, which are becoming more severe due to the growing number of mobile and IoT devices. Such attacks can increase latency and impact service availability. The majority of the literature on this topic examines potential 5G threats, including signaling storms, and their effect on users; some detection and prevention techniques have even been proposed. Although these studies provide valuable information about signaling storms, it has not been specifically investigated how control plane resources can be exploited by flooding UE-initiated, 5G-protocol-specific requests. This research gap, the lack of concrete statements on how to reproduce signaling attacks, is the main motivation of this study.

Objectives and Research Question: This work will focus on UE-initiated DDoS attacks targeting control plane resources of 5G networks and will investigate whether these attacks can have a severe impact on a practical 5G test setup. Therefore, signaling procedures, particularly those involving the NAS and NGAP protocols, will be explored to identify scenarios for UE-initiated signaling attacks. The characteristics of the identified scenarios will be derived by theoretical analysis. The remaining objectives are reproducing these scenarios in experiments with appropriate simulation tools, evaluating the impact of these attacks on the network and the user experience, and investigating detection solutions for signaling storms.

Challenges: The identified scenarios must be demonstrated and analyzed to study the research question, which poses two main challenges. Designing a simulation environment for realistic attack reproduction is elaborate and requires determining the most suitable solution for simulating the UE, gNB, and 5GC among the existing options. Moreover, the simulation environment cannot completely replace a real 5G network, so there will be some restrictions. The second challenge is therefore to design the experiments in a way that allows deriving general statements about 5G security threats from the observations made during the experiments.

Contribution: This thesis will address the signaling attacks on the control plane of 5G networks by identifying concrete signaling scenarios to generate excessive packet floods, analyzing them, and demonstrating them to assess their impact on the network. The simulation environment will allow reproducing various attacks to derive characteristics of the attacks, which are required for detection by distinguishing between good and malicious communication patterns. Overall, this work will contribute to the improvement of network security.

Supervisor:

Oliver Zeidler, Maximilian Stephan - Tim Niehoff (IPOQUE)

Proactive load-aware wireless resource allocation for sustainable 6G network

Description

The rapid growth of traffic and the number of connected devices in 5G and beyond wireless networks focuses attention on sustainability in the radio access network (RAN). The traffic load and the status of available wireless resources in the network change rapidly, especially in scenarios with a large number of connections and high mobility. This high connectivity is driven by the exponentially increasing number of Internet of Things (IoT) devices connected to the network to support use cases ranging from Industry 4.0 to healthcare.

IoT devices, mainly powered by batteries, are characterized by low cost, low complexity, and limited computational resources. Therefore, extending their lifetime while fulfilling quality of service (QoS) requirements poses a new research challenge. To tackle this problem, context awareness of devices (device type and mobility), combined with the network traffic load, can enhance both wireless resource management and the management of device states. Moreover, to enable awareness of neighboring cells, predicted traffic load information can be exchanged among cells. This affects the decisions of accepting a device or offloading it to a neighboring cell, as well as the device's operating state.

In this thesis, the student will focus on developing and testing a context-aware resource allocation mechanism based on device mobility and traffic load, focusing on decreasing individual devices' energy consumption and reducing processing latency.

Prerequisites

  • Good knowledge of Python and Matlab programming.
  • Good mathematical background.
  • Knowledge of mobile networks.

Contact

alba.jano@tum.de

Supervisor:

Alba Jano

The Analysis of Dual Data Gathering Strategy for Internet-of-Things Devices in Status Update Systems

Description

The analysis of a dual data-gathering strategy for Internet-of-Things (IoT) devices in status update systems offers valuable insights into improving the efficiency and reliability of data collection in IoT environments. This thesis focuses on investigating the dual data gathering strategy, aiming to optimize the performance of status update systems in IoT deployments. The dual data-gathering strategy takes advantage of both local and remote processing capabilities. Using different source servers, this strategy aims to reduce energy consumption and network congestion in status update systems. The anticipated outcomes of this research include a comprehensive understanding of the dual data gathering strategy, mathematical models to analyze its performance, and insights into its practical implementation. These outcomes will not only advance the theoretical understanding of status update systems in IoT but also have practical implications for the design and deployment of IoT networks and applications.

Supervisor:

Polina Kutsevol - Dr. Andrea Munari (DLR - Deutsches Zentrum für Luft- und Raumfahrt)

End-to-End Scheduling in Large-Scale Deterministic Networks

Keywords:
TSN, Scheduling, Industrial Networks
Short Description:
To evaluate APS in TSN Networks

Description

Providing Quality of Service (QoS) to emerging time-sensitive applications such as factory automation, telesurgery, and VR/AR applications is a challenging task [1]. Time Sensitive Networks (TSN) [2] and Deterministic Networks [3] were developed for such applications to guarantee ultra low latency, bounded latency and jitter, and zero congestion loss. The objective of this work is to develop a methodology to guarantee bounded End-to-End (E2E) latency and jitter in large-scale networks.

Prerequisites

C++, Experience with OMNeT++, Knowledge of TSN.

Supervisor:

Yash Deshpande, Philip Diederich - Dr Siyu Tang (Huawei Technologies)

Design and Evaluation of Protocol Exploits for the Vendor-Specific Implementation of Commercial 5G Devices

Description

The goal of this thesis is to evaluate weaknesses of the protocol implementation in commercial smart phones and design attacks correspondingly.

Supervisor:

Nicolai Kröger, Oliver Zeidler - Dominik Brunke

Towards TSN and 5GS Integration: Implementation of TSN AF

Keywords:
5G, TSN, Industrial Networks
Short Description:
Implementing a TSN AF to a 5G core to make the data plane communication deterministic.

Description

Time-Sensitive Networking (TSN) is a set of standards [1] developed by the IEEE 802.1 Task Group to enable Ethernet networks to give Quality of Service (QoS) guarantees for time-sensitive or mission-critical traffic and applications. Various TSN standards provide differing QoS guarantees and require different functions to be implemented in hardware. As devices from multiple vendors need to offer mutually compatible functions, profiles such as IEEE 60802 for Industrial Automation [2] are being defined. These profiles focus on a common set of functions and configurations in order to decrease the complexity which possible variations in standards might create.

Prerequisites

REST API, Knowledge and Experience with 5G systems, Understanding of TSN.

Python.

Supervisor:

Yash Deshpande - Dr. Andreas Zirkler (Siemens AG)

Processing Prioritization in the Medical Context

Description

In future communication systems such as 6G, in-network computing will play a crucial role. In particular, processing units within the network make it possible to run applications such as digital twins close to the end user, leading to lower latencies and overall better performance. However, these processing resources are usually shared among many applications, which potentially leads to worse performance in terms of execution time, throughput, etc. This is especially critical for applications such as autonomous driving, telemedicine, or smart operations. Hence, the processing of more critical applications must be prioritized.

 

In this thesis, the task is to develop and evaluate a prioritization approach for applications. Not only technical aspects play a role in the prioritization, but also ethical, i.e., in this case medical, aspects. This is especially important if applications are equally critical. Suitable real use cases are identified together with our partners at MITI (Hospital "Rechts der Isar"). The prioritization approach then leads to a specified distribution of the processing and networking resources, satisfying the minimum needs of critical applications.

 

The result will be an evaluated prioritization approach for applications in the medical environment.

 

Prerequisites

Motivation, basic networking knowledge, basic programming knowledge

Contact

nicolai.kroeger@tum.de

Supervisor:

Nicolai Kröger, Fidan Mehmeti

Mobility Management for Computation-Intensive Tasks in Cellular Networks with SD-RAN

Description

In previous generations of cellular networks, both data plane and control plane operations were conducted jointly in Radio Access Networks (RANs). With the emergence of Software Defined Networks (SDNs) and their adaptation in RANs, known as SD-RAN, the separation of control from data plane operations became possible for the first time in the 5G RAN, as a paradigm shift in how the assignment of network resources is handled in particular, and how cellular networks operate in general. The control is shifted to centralized units, known as SD-RAN controllers. This brings considerable benefits to the cellular network, because it detaches the monolithic RAN control and enables cooperation among different RAN components, i.e., Base Stations (BSs), improving network performance along multiple dimensions. Depending on the current spread of users (UEs) across BSs and their channel conditions, which the UEs periodically report to their serving BSs and the BSs forward to the SD-RAN controller, the latter can reallocate resources to BSs accordingly. The BSs then perform the resource allocation across their corresponding UEs. Consequently, exploiting this wide network knowledge leads to an overall improved performance, as it allows for optimal allocation decisions.

 

This increased level of flexibility, which arises from having a broader view of the network, can be exploited to improve mobility management in cellular networks. This becomes even more relevant with 6G networks, in which in-network computing is envisioned to be an integral part. Namely, users will send computationally intensive tasks to edge clouds (through their BSs) and wait for results in response. However, as it takes some time until these tasks are run on the cloud, the user might change the serving BS in the meantime, so the handover has to be managed. While a task is being uploaded, performing a handover would be undesirable, as the task would then need to be sent to another edge cloud. Consequently, with centralized knowledge of the whole network (which the SD-RAN controller has), the controller has an extra degree of freedom to avoid frequent handovers by increasing the number of frequency blocks assigned to a user while uploading the task and downloading the results.

 

In this thesis, the goal is to increase the overall network utility by deciding which tasks to serve (each task has its own utility), given the limited network resources in terms of upload bandwidth, download bandwidth, storage in edge clouds, and finite computational capacity. Users, besides sending tasks and receiving results, are assumed to run other applications with given service requirements. The student will formulate optimization problems and solve them either analytically or using an optimization solver such as Gurobi or CVX. The other task is to conduct realistic simulations and show the advantages the developed algorithms offer over benchmarks.
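Before reaching for Gurobi or CVX, the admission decision can be prototyped with a simple greedy baseline that ranks tasks by utility per unit of resource; the task list and the single aggregated resource budget below are hypothetical simplifications of the multi-resource formulation:

```python
def admit_tasks(tasks, capacity):
    """Greedy admission by utility per unit of resource demand.

    A baseline for the exact knapsack-style problem one would hand to an
    optimization solver; tasks are (name, utility, resource_demand) tuples.
    """
    order = sorted(tasks, key=lambda t: t[1] / t[2], reverse=True)
    chosen, used = [], 0
    for name, utility, demand in order:
        if used + demand <= capacity:
            chosen.append(name)
            used += demand
    return chosen

# Hypothetical tasks competing for a single aggregated resource budget
tasks = [("task-A", 10, 4), ("task-B", 6, 2), ("task-C", 3, 3)]
```

The full formulation would carry separate upload, download, storage, and compute constraints per task, which is where an exact solver becomes worthwhile.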


Prerequisites

Good knowledge of Python and interest to learn about mobility management in 5G

Supervisor:

Anna Prado, Fidan Mehmeti

Enhanced Mobility Management in 5G Networks with SD-RAN

Description

 

In pre-5G networks, both data plane and control plane operations were performed jointly in Radio Access Networks (RANs). With the emergence of Software Defined Networks (SDNs) and their adaptation in RANs, known as SD-RAN, the separation of control from data plane operations became possible for the first time in the 5G RAN, as a paradigm shift in how the assignment of network resources is handled in particular, and how cellular networks operate in general. The control is transferred to centralized units, known as SD-RAN controllers. This brings considerable benefits to the mobile network, since it detaches the monolithic RAN control and enables cooperation among different RAN components, i.e., Base Stations (BSs), improving network performance along several dimensions. Depending on the current spread of users (UEs) across BSs and their channel conditions, which the UEs periodically report to their serving BSs and the BSs forward to the SD-RAN controller, the latter can reallocate resources to BSs accordingly. The BSs then perform the resource allocation across their corresponding UEs. As a consequence, exploiting this wide network knowledge leads to an overall improved performance, as it allows for optimal allocation decisions.

 

This increased level of flexibility, which arises from having a broader view of the network, can be exploited to improve mobility management in cellular networks. In previous generations of cellular networks, each BS had its own set of frequencies on which it could transmit. Given that each user receives service from only one BS, the user would decide, depending on the channel conditions with the serving BS and the number of users within the same cell, whether a handover is needed or whether it is better to remain within the same serving area (i.e., to keep receiving service from the same BS). Currently, conditional handovers are the most serious candidate for 5G. However, every handover involves a considerable cost, due to the preparations needed to hand a user over from one BS to another. These unavoidably lead to reductions in data rates and network resources for other users. On the other hand, with centralized knowledge of the whole network (which the SD-RAN controller has), the controller has an extra degree of freedom to avoid frequent handovers by increasing the number of frequency blocks assigned to a user experiencing bad channel conditions. This, of course, depends on the topology of the users at that moment.

 

In this thesis, the focus will be on jointly deciding the resource allocation policy for each user across the entire area of the controller and when to perform handovers, in order to optimize different performance aspects (e.g., provide proportional fairness). To that end, the student will formulate optimization problems and solve them either analytically or using an optimization solver such as Gurobi or CVX. The other part is conducting realistic simulations and showing the advantages the developed algorithms offer over the state of the art.

 

 

Prerequisites

Good knowledge of Python and interest to learn about mobility management in 5G

Supervisor:

Anna Prado, Fidan Mehmeti

Network Intrusion Detection using pre-trained tabular representation models

Keywords:
Machine learning, intrusion detection
Short Description:
Detecting network intrusions using tabular representations and pre-trained machine learning models.

Description

Network Intrusion Detection (NID) is a common topic in cybersecurity. However, finding a solution is not trivial given today's complicated network environments. Often a complex system is needed to process the enormous volume of data stored in databases. This thesis proposes to use Deep Learning (DL) models to tackle the NID problem in a pre-train/fine-tune manner. As the new paradigm of transfer learning, pre-training followed by fine-tuning has achieved huge success in many areas such as vision and NLP. We aim to study whether those trending models still perform well on large-scale structured data such as network security logs. It is plausible to leverage the strong learning ability of DL models to learn table representations and separate anomalous from benign records based on the learned information.

Prerequisites

  • Machine learning knowledge
  • Programming skills (Python, GIT)
  • Computer networking knowledge

Supervisor:

Cristian Bermudez Serna, Hasan Yagiz Özkan - Dr. Haojin Yang (HPI)

Automated Generation of Adversarial Inputs for Data Center Networks

Keywords:
adversarial; datacenter networks

Description

Today's Data Center (DC) networks are facing increasing demands and a plethora of requirements. Factors for this are the rise of Cloud Computing, Virtualization and emerging high data rate applications such as distributed Machine Learning frameworks.
Many network designs and routing algorithms covering different operational goals and requirements have been proposed.
This variety makes it hard for operators to choose the "right" solution.
Recently, some works have proposed mechanisms that automatically generate adversarial inputs to networks or networking algorithms [1,2] in order to identify weak spots, obtain a better view of their performance and support operators' decision making. However, these works focus on specific scenarios.
The goal of this thesis is to develop or extend such mechanisms so that they can be applied to a wider range of scenarios than before.
The thesis builds upon an existing flow-level simulator in C++ and initial algorithms that generate adversarial inputs for networking problems.
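To make the idea of adversarial input generation concrete, here is a minimal sketch on a toy flow-level model (all names, the fixed-path routing and the random-search strategy are illustrative assumptions, not the chair's simulator or the algorithms of [1,2]): the adversary searches for a demand split that maximizes the worst link load.

```python
import random

def link_loads(flows, paths):
    """Sum flow demands onto links, given each flow's fixed path."""
    loads = {}
    for (src, dst, demand) in flows:
        for link in paths[(src, dst)]:
            loads[link] = loads.get(link, 0.0) + demand
    return loads

def adversarial_demands(pairs, paths, budget, iters=500, seed=0):
    """Random-search adversary: split a total demand `budget` across the
    given src-dst pairs so that the maximum link load is as high as possible."""
    rng = random.Random(seed)
    best, best_cost = None, -1.0
    for _ in range(iters):
        weights = [rng.random() for _ in pairs]
        total = sum(weights)
        flows = [(p[0], p[1], budget * w / total)
                 for p, w in zip(pairs, weights)]
        cost = max(link_loads(flows, paths).values())
        if cost > best_cost:
            best, best_cost = flows, cost
    return best, best_cost
```

On two disjoint single-link paths, the search quickly learns to concentrate nearly the whole budget on one pair, driving the worst link load toward the budget itself.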

[1] S. Lettner and A. Blenk, “Adversarial Network Algorithm Benchmarking,” in Proceedings of the 15th International Conference on emerging Networking EXperiments and Technologies, Orlando FL USA, Dec. 2019, pp. 31–33, doi: 10.1145/3360468.3366779.
[2] J. Zerwas et al., “NetBOA: Self-Driving Network Benchmarking,” in Proceedings of the 2019 Workshop on Network Meets AI & ML  - NetAI’19, Beijing, China, 2019, pp. 8–14, doi: 10.1145/3341216.3342207.

Prerequisites

- Profound knowledge in C++

Supervisor:

Towards Log Data-driven Fault Analysis in a Heterogeneous Content Provider Network

Description

Bayerischer Rundfunk (BR) operates a network to deliver content via television, radio and the internet to its users. This requires a highly heterogeneous network. The network monitoring solution for the BR network collects log data from the involved devices and stores it in a central database. Currently, human operators make network management decisions based on a manual review of this log data. This especially includes root cause identification in case of network failures. Such a human-centric process can be tedious and does not scale well with increasing network complexity. In this thesis, the student should perform a thorough analysis of the described data and evaluate the potential for automated processing. The goal is to provide a data-driven approach that significantly supports human operators in identifying root causes of network failures.

Supervisor:

Maximilian Stephan

Reliability Analysis of ONOS Releases based on Code Metrics and SRGM

Description

Software Defined Networking (SDN) separates the control and data planes. The control plane can be considered the brain of the network: it is responsible for configuring flows, finding paths and managing network functionalities like firewalling, load balancing, etc. For this reason, the SDN controller has become complex. Furthermore, it is a large software platform with many contributors of different experience levels. As a result, the code contains many undetected and unresolved bugs. If one of these bugs is activated in the operational state, it may cause performance degradation or even a collapse of the whole system.

SDN serves a broad range of applications with different requirements. Some application areas, like autonomous driving, require high reliability, and performance degradation may cause undesired results. Software Reliability Growth Models (SRGMs) are statistical frameworks based on historical bug reports that are widely used to estimate the reliability of software. Open Network Operating System (ONOS) is an open-source project that has become one of the most popular SDN platforms. Its historical bug reports are openly available in its JIRA issue tracker. Currently ONOS has 23 releases; its first ten versions were investigated with different SRGM models [1], and it was found that different SRGMs fit the bug detection of different versions of ONOS.

Source code metrics are quantitative characteristics of the code. They can describe the size of the code (lines of code), its complexity (McCabe's complexity), etc. They have been used to predict the number of bugs, to identify potential bug locations, etc.

The goal of this work is to analyse the reliability of different ONOS releases. For that purpose, understanding the correlation between the structure of the source code and the bug manifestation process is crucial to predict the future bug manifestation of new releases. First, state-of-the-art research on SRGMs will be reviewed to understand software reliability and SRGMs. Afterwards, the student should implement different SRGMs to fit the error manifestation of every release and compare the results with the mentioned research [1]. Then, different code metrics will be obtained from each ONOS release, and the correlation between SRGMs and code metrics will be revealed. Finally, the reliability of each release will be analyzed with the best-fitting SRGM. The result of this work will be a proposed reliability metric combining SRGMs and code metrics that improves software reliability prediction.
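To illustrate what fitting an SRGM means in practice, the sketch below performs a least-squares fit of the Goel-Okumoto model, one of the classic SRGMs, to cumulative bug counts. The grid-search fitting routine and all names are our own illustrative assumptions, not the method prescribed by the thesis or by [1].

```python
import math

def goel_okumoto(t, a, b):
    """Goel-Okumoto SRGM: expected cumulative bugs detected by time t,
    where a is the total expected bug count and b the detection rate."""
    return a * (1.0 - math.exp(-b * t))

def fit_go(times, counts, a_grid, b_grid):
    """Least-squares fit of the Goel-Okumoto model over a parameter grid."""
    best = None
    for a in a_grid:
        for b in b_grid:
            sse = sum((goel_okumoto(t, a, b) - c) ** 2
                      for t, c in zip(times, counts))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]
```

In a real study one would fit per-release JIRA bug data and compare goodness of fit across several SRGM families rather than a single model.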

 

References

[1] P. Vizarreta, K. Trivedi, B. Helvik, P. Heegaard, W. Kellerer, and C. Mas Machuca, "An empirical study of software reliability in SDN controllers," in 13th International Conference on Network and Service Management (CNSM), 2017.

Supervisor:

Hasan Yagiz Özkan

Reinforcement Learning for joint/dynamic user and slice scheduling in RAN towards 5G

Description

In the Radio Access Network (RAN), the MAC scheduler has largely been inherited across past generations and adapted to fit new networking goals and service requirements. The rapid deployment of new 5G technologies makes upgrading the current schedulers extremely complicated and difficult to improve and maintain. Therefore, finding new solutions for efficient Radio Resource Scheduling (RRS) is necessary to meet the new KPI targets. 5G networks and beyond use the concept of network slicing, forging virtual instances (slices) of the physical infrastructure. Such a heterogeneous network requires a more optimized and dynamic RRS approach. In view of the development of SD-RAN controllers and artificial intelligence, promising new tools such as reinforcement learning can prove useful for this problem.

In this thesis, a data-driven MAC slice scheduler will be implemented that maximizes user utility while learning the optimal slice partitioning ratio. A deep reinforcement learning technique will be used to evaluate radio resource scheduling and slicing in the RAN. The results will be compared with traditional schedulers from the state of the art.
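For readers new to reinforcement learning, the core of the approach can be sketched with a single tabular Q-learning update (the thesis itself targets deep RL; this function and its state/action encoding are purely illustrative assumptions):

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: move Q[s][a] toward the bootstrapped
    target r + gamma * max_a' Q[s_next][a'].

    Q maps each state to a list of action values; alpha is the learning
    rate and gamma the discount factor.
    """
    target = r + gamma * max(Q[s_next])
    Q[s][a] += alpha * (target - Q[s][a])
    return Q[s][a]
```

In a slice scheduler, the state could encode per-slice queue lengths, the action a slice partitioning ratio, and the reward the achieved user utility; deep RL replaces the table with a neural network.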

Supervisor:

Arled Papa - Prof. Navid Nikaein (EURECOM)

vnf2tx: Automating VNF platform operation with Reinforcement Learning

Description

...

Supervisor:

Hierarchical SDN control for Multi-domain TSN Industrial Networks

Description

In this thesis, the student will focus on designing and implementing a hierarchical SDN solution for industrial multi-domain TSN networks.

Contact

kostas.katsalis@huawei.com

Supervisor:

Nemanja Deric - Dr. Kostas Katsalis (Huawei Technologies)

Deliberate Load-Imbalancing in Data Center Networks

Keywords:
Traffic Engineering, Scheduling, Data Center Networks
Short Description:
The goal of this thesis is the implementation and evaluation of an in-dataplane flow scheduling algorithm, based on the online scheduling algorithm IMBAL, in NS3.

Description

Recently, HULA, a scalable load balancing algorithm running in the dataplane, has been proposed. It leverages P4 to estimate the utilization in the network and assigns flows to the least utilized path. This approach can be interpreted as a form of Graham's List algorithm.

In this thesis, the student is tasked to investigate how a different online scheduling algorithm called IMBAL performs compared to HULA. A prototype of IMBAL should be implemented in NS3. The tasks of this thesis are:

  1. Literature research and overview to online scheduling and traffic engineering in data center networks.
  2. Design how IMBAL can be implemented in NS3.
  3. Implementation of IMBAL in NS3.
  4. Evaluation of the implementation in NS3 with production traffic traces and comparison to HULA (a HULA implementation is provided by the chair; implementing it is not part of this thesis).
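As background for step 1, the Graham's List rule that HULA approximates can be sketched in a few lines (this is the classic greedy rule only; IMBAL itself is the subject of the thesis and is deliberately not reproduced here, and the function name is our own):

```python
def grahams_list(demands, n_paths):
    """Graham's List scheduling: assign each arriving flow demand to the
    currently least-loaded path, the greedy rule HULA approximates
    in-network with utilization estimates."""
    loads = [0.0] * n_paths
    assignment = []
    for d in demands:
        i = min(range(n_paths), key=lambda k: loads[k])  # least-loaded path
        loads[i] += d
        assignment.append(i)
    return assignment, loads
```

Comparing such online rules is meaningful because their worst-case makespan guarantees differ, which is exactly what adversarially chosen flow sequences expose.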

Supervisor:

5G-RAN control plane modeling and Core network evaluation

Description

Next generation mobile networks are envisioned to cope with heterogeneous applications with diverse requirements. To this end, 5G is paving the way towards more scalable and higher performing deployments. This leads to a revised architecture, where the majority of the functionalities are implemented as network functions, which could be scaled up/down depending on the application requirements. 

3GPP has already released the 5G architecture overview; however, there exists no actual open-source deployment of the RAN functionalities. This will be crucial for the evaluation of the Core network, both in terms of scalability and performance. In this thesis, the student shall understand the 5G standardization, especially the control plane communication between the RAN and the 5G Core. Further, an initial RAN function compatible with the 5G standards shall be implemented, and an evaluation of the control plane performance will be carried out.

Prerequisites

  • Strong knowledge of the programming languages Python, C++ or Java.
  • Knowledge about mobile networking is necessary.
  • Knowledge about the 4G/5G architecture is a plus.

Supervisor:

Endri Goshi, Arled Papa

Interdisciplinary Projects

Joint radio and computing resource allocation using artificial intelligence algorithms

Description

Mobile Edge Computing (MEC)-enabled 6G networks support low-latency applications running on energy-constrained and computationally limited devices, especially IoT devices. Using the task offloading concept, the devices offload incoming tasks fully or partially to the MEC, depending on the communication and computation resource availability on the device and network side.

6G networks are oriented towards Digital Twins (DTs); therefore, resource allocation and offloading decisions are enhanced with context awareness of the devices, environment, and network. Device context awareness comprises battery state, power consumption, CPU load, and traffic type. Environmental context awareness includes the position of the network components, the mobility patterns, the quality of the wireless channel, and the availability of network resources.

 

In this project, the student will focus on developing and testing an artificial intelligence algorithm for joint allocation of computing and radio resources in a predictive manner, focusing on decreasing the devices' energy consumption and reducing processing latency.

Tasks

 

  • Work with a 6G radio access network simulator to generate the database for a scenario with devices having high energy efficiency and low task-processing latency requirements.
  • Develop a reinforcement learning algorithm for joint allocation of radio and computing resources.
  • Compare the developed model with state-of-the-art approaches.
  • Testing and documentation.

 

Prerequisites

 

  • Good knowledge of Python programming.
  • Good mathematical background.
  • Good knowledge of deep learning/reinforcement learning.

Supervisor:

Alba Jano

Research Internships (Forschungspraxis)

Investigating SDR-Based OpenRAN with LiFi Technology

Description

This internship offers a comprehensive exploration of cutting-edge telecommunications technologies, focusing on the integration of Software-Defined Radios (SDRs) with Light Fidelity (LiFi) technology within the OpenRAN architecture.

Supervisor:

Hansini Vijayaraghavan - Muhammad Asad (aeroLiFi GmbH)

An AI Benchmarking Suite for Microservices-Based Applications

Keywords:
Kubernetes, Deep Learning, Video Analytics, Microservices

Description

In the realm of AI applications, the deployment strategy significantly impacts performance metrics.

This research internship aims to investigate and benchmark AI applications in two predominant deployment configurations: monolithic and microservices-based, specifically within Kubernetes environments.

The central question revolves around understanding how these deployment strategies affect various performance metrics and determining the more efficient configuration. This inquiry is crucial as the deployment strategy plays a pivotal role in the operational efficiency of AI applications.

Currently, the field lacks a comprehensive benchmarking suite that evaluates AI applications from an end-to-end deployment perspective. Our approach includes the development of a benchmarking suite tailored for microservice-based AI applications.

This suite will capture metrics such as CPU/GPU/Memory utilization, interservice communication, end-to-end and per-service latency, and cache misses.

Supervisor:

Navidreza Asadi

Integrating Chameleon in a TSN-5G Framework

Short Description:
Chameleon is a TSN method developed at LKN. This FP will work on developing it for TSN-5G bridges.

Description

 

In this research internship, we want to integrate Chameleon into a TSN-5G network. The overall idea is to implement Chameleon's adaptive source routing and queue-aware routing in a 5G bridge connecting two TSN systems, making the network smarter in adapting to the different flows arriving at the system while still communicating wirelessly and keeping time-sensitive applications synchronized with the scheduled traffic.

 

 

Supervisor:

Yash Deshpande

Evaluation of Time Synchronization in NR BS-BS Interference Scenarios

Short Description:
To evaluate the time offset in aggressor-victim simulations in 5G NR.

Description

The research will utilize the 5G-NR functions available in Matlab for a two-base-station scenario, with multiple UEs connected in both cells. The throughput and block error rate of a UE at the cell edge will be evaluated in multiple simulations for different values of delayed transmission in the interfering base station. The study will also incorporate other BS parameters, such as the BS transmit power and the distance between the two BSs, and analyze their influence on the results obtained.

Supervisor:

Yash Deshpande

Implementation of a Medical Communication Protocol

Description

The next generation of communication networks, namely 6G, will enable a huge variety of medical applications. In the current phase, our research focuses on optimizing the interaction between network and medical application development. For this, we have joined forces with the Minimally Invasive Interdisciplinary Therapeutical Intervention (MITI) group, located at the hospital "Rechts der Isar".

This research internship focuses on one particular use case: mobile monitoring of patients in the operating room. In particular, the vital parameters of the patient are continuously measured and sent to a server. The task of the research internship is to develop and implement the communication protocol between the sensors and the processing server. A basic implementation is already given; however, key components are still missing, which should be implemented and evaluated within the scope of this research internship.
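To give a flavor of what a sensor-to-server protocol involves, here is a sketch of a hypothetical length-prefixed frame for vital parameters. The field layout, names and units are entirely our own assumptions and not the project's actual (confidential) protocol:

```python
import struct

def pack_vitals(seq, heart_rate, spo2):
    """Pack a hypothetical sensor frame: a 2-byte length prefix followed by
    sequence number (uint32), heart rate in bpm (uint16) and SpO2 in %
    (uint8), all big-endian for transport over a byte stream."""
    payload = struct.pack("!IHB", seq, heart_rate, spo2)
    return struct.pack("!H", len(payload)) + payload

def unpack_vitals(frame):
    """Inverse of pack_vitals: read the length prefix, then the fields."""
    (length,) = struct.unpack("!H", frame[:2])
    seq, hr, spo2 = struct.unpack("!IHB", frame[2:2 + length])
    return seq, hr, spo2
```

The length prefix lets the server reassemble frames from a TCP byte stream; a real medical protocol would add timestamps, integrity checks and encryption.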

 

Please note that the real setup is located at MITI. You may have to work there.

Prerequisites

  • Motivation
  • Knowledge about Python and Linux
  • Basic communication (protocol) principles

Supervisor:

Nicolai Kröger - Franziska Jurosch (MITI)

Investigating the AWARE Framework for Automated Workload Autoscaling in Edge Environments

Keywords:
Autoscaling, Kubernetes, Edge Computing

Description

The rapid growth of edge computing has introduced new challenges in managing and scaling workloads in distributed environments while maintaining stable service performance and saving resources. To address this, this research internship aims to explore the feasibility and implications of extending the AWARE framework (Qiu et al., 2023) [1], which has been developed as an automated workload autoscaling solution for production cloud systems, to edge environments.

 

AWARE utilizes tools such as reinforcement learning, meta-learning, and bootstrapping to scale workloads out in the horizontal dimension, by increasing the number of deployment instances, and up in the vertical dimension, by increasing the resources allocated to a deployment instance. We will employ resource-limited edge infrastructures running a lightweight distribution of the Kubernetes (K8s) container orchestration tool; the goal is to gain insights into the performance, adaptability, and limitations of this approach.

Supervisor:

Navidreza Asadi

Developing a Digital Twin for Demand Generation

Keywords:
Demand Forecast, Demand Modeling, Digital Twin, GAN

Description

Modern clusters provide many knobs that cloud administrators can tune to configure their systems. However, different configurations lead to different levels of performance. In order to find an optimized configuration and to try out different "what-if" scenarios, Digital Twins (DTs) have been suggested as a possible solution. A Digital Twin is a virtual model of a physical object or system that describes the behavior of its real peer with high accuracy. DTs leverage historical and real-time data in order to keep up with the real system.

One of the challenges of creating an accurate DT of a cluster is accurately modeling and predicting the demand. Because of the high variability of network traffic, this task has proven challenging in the past, but recent works, such as DoppelGANger [1], enable a modeling of the network traffic volume that better matches reality.

DoppelGANger creates high fidelity synthetic datasets modeled from real traces by leveraging Generative Adversarial Networks (GANs). Moreover, it is able to capture the underlying long-term dependencies and complex multidimensional relationships, surpassing other GAN and non-GAN-based approaches.

Concretely, DoppelGANger has shown improved performance over the state of the art on three different datasets. In this student thesis, we want to explore how DoppelGANger performs on three other datasets (Fifa '95 World Cup, Bibsonomy, NASA) and discover its possible limitations. Additionally, we want to analyze the suitability of using DoppelGANger in a real-time manner.

 

[1] Zinan Lin, Alankar Jain, Chen Wang, Giulia Fanti, and Vyas Sekar. 2020. Using GANs for Sharing Networked Time Series Data: Challenges, Initial Promise, and Open Questions. In Proceedings of the ACM Internet Measurement Conference (IMC '20). Association for Computing Machinery, New York, NY, USA, 464–483. https://doi.org/10.1145/3419394.3423643

Prerequisites

  • Proficiency in Python
  • Interest in data-driven modeling and time-series analysis.
  • Understanding of Machine Learning (ML) concepts and models
  • Experience with the NumPy and Pandas libraries

Contact

razvan.ursu@tum.de

Supervisor:

Razvan-Mihai Ursu

Development and Evaluation of an Intelligent Wireless Resource Management for 5G/6G Downlink Channel

Description

In this work, we will evaluate several additional techniques in 5G/6G toward reliability enhancements, focusing on the Radio Access Network (RAN). The student is expected to first understand and evaluate the concepts via simulations in MATLAB. Then, the techniques will be implemented on the OpenAirInterface (OAI) [1] platform, and we will evaluate the enhancements on our practical 5G testbed setup.

The initial setup will include a mobile robot, 5G Stand-alone communication, and a multi-access edge computing (MEC) system running a machine learning algorithm.

The expected outcome is improvements to the RAN of OAI, including but not limited to wireless channel estimation and equalization and downlink reliability. More details will be provided after the first meeting.

[1] N. Nikaein, M. K. Marina, S. Manickam, A. Dawson, R. Knopp and C. Bonnet, "OpenAirInterface: A flexible platform for 5G research," ACM SIGCOMM Computer Communication Review, vol. 44, no. 5, 2014.

Prerequisites

- Good C/C++ experience

- Good Matlab knowledge

- Medium knowledge of OFDM and wireless channel estimation

- Good Python knowledge is a plus

- Machine learning understanding is a plus

Contact

serkut.ayvasik@tum.de

Supervisor:

Further Development and Implementation of a System for Room Sensitive Measurement of Vital Parameters

Description

As a new and future mobile communications standard, 6G is intended to focus on people. In addition to very low latencies and high data transmission rates, the network will be more reliable, secure and dynamic. These features are particularly in demand in medicine and medical device technology to develop intelligent value-added functions for patients and clinical staff. As part of the BMBF-funded joint project "6G-life", a demonstrator will be developed that uses and requires these network properties to enable new intelligent assistance functions.

 

The aim of this research internship is the further development of a system for environment-aware measurement of vital parameters. For this purpose, a detailed literature review of biosignal processing and data transmission protocols in the medical field will be conducted. Subsequently, an existing measurement setup, which records vital parameters by means of suitable sensors (ECG, temperature, SpO2, blood pressure, ...), is to be further developed. A message architecture is to be designed that allows secure and controllable transmission of the sensor data to computing units located in the network for biosignal processing. The results of the signal processing are to be displayed on an existing GUI.

 

This Research Internship is in cooperation with the MITI research group. Most work may be done at their site.

Prerequisites

  • Motivation and independent working style
  • Knowledge in communication networks and signal processing
  • Knowledge in Python
  • Knowledge in C++ if possible

Supervisor:

Nicolai Kröger - Franziska Jurosch (MITI)

Robustness analysis of Rail Data Network

Keywords:
Reliability, robustness, rail data network
Short Description:
Robustness analysis of optical networks supporting train communications.

Description

Background

Today, low-bandwidth networks such as GSM, providing less than 200 kbps, are used to transmit train control information. Moreover, although trains may use multiple on-board technologies to provide users with an internet connection (e.g., repeaters, access points), these connections are characterized by low throughput (less than 2 Mbps) and frequent service interruptions. This motivates the exploration of networks enabling train control and on-board data communications in mobility scenarios with high reliability and controlled latency.

Research question

The goal of this work is to analyze the robustness of the Rail Data Network and its variants in order to identify the most robust alternative. The topologies and the traffic matrix will be given. The plan is to develop robustness surfaces [1] to understand the strengths and weaknesses of different network combinations.

The results from this work can be useful to gain insight into the requirements of Smart Transportation Systems, which may in turn be useful for cementing the basis of other scenarios such as Autonomous Driving and Tele-Operated Driving.
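A basic building block of such a robustness analysis is measuring how connectivity degrades as elements fail. The sketch below (our own illustrative code, not the robustness-surface method of [1]) computes the fraction of surviving nodes in the largest connected component after a set of node failures:

```python
def lcc_fraction(adj, removed):
    """Fraction of remaining nodes in the largest connected component
    after removing the node set `removed`; `adj` is an adjacency dict.
    A common elementary robustness metric for failure scenarios."""
    alive = [v for v in adj if v not in removed]
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        # depth-first search over the surviving subgraph
        stack, size = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if w not in removed and w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best / len(alive) if alive else 0.0
```

Sweeping this metric over many failure sets of increasing size yields the kind of per-topology robustness profile that robustness surfaces aggregate.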

 References

[1] Manzano, M., Sahneh, F., Scoglio, C., Calle, E. and Marzo, J.L., 2014. Robustness surfaces of complex networks. Scientific reports, 4(1), p.6133.

[2] Digitale Schiene Deutschland. Last visit on 13.12.2021 https://digitale-schiene-deutschland.de/FRMCS-5G-Datenkommunikation

[3] 5G-Rail FRMCS. Last visit on 13.12.2021 https://5grail.eu/frmcs/

Prerequisites

Requirements

Basic knowledge in:

  • Python
  • Communication Network Reliability course at LKN or equivalent knowledge.

Please send your CV and transcript of records.

Contact

  • Shakthivelu Janardhanan - shakthivelu.janardhanan@tum.de
  • Cristian Bermudez Serna - cristian.bermudez-serna@tum.de

Supervisor:

Shakthivelu Janardhanan, Cristian Bermudez Serna

ILP-based network planning for the future railway communications

Keywords:
Network Planning, On-Train Data Communications. Integer Linear Programming
Short Description:
Exploration of mechanisms for handling data communications under the influence of mobility in the German long distance railway system.

Description

This work focuses on the exploration of networks enabling train control and on-board data communications in mobility scenarios. Today, low-bandwidth networks such as GSM, providing less than 200 kbps, are used to transmit train control information. Moreover, although trains may use multiple on-board technologies to provide users with an internet connection (e.g., repeaters, access points), these connections are characterized by low throughput (less than 2 Mbps) and frequent service interruptions.

This work aims at the development of a network planning solution enabling future applications in train mobility scenarios, such as Automatic Train Operation (ATO) [1,2], leveraging cloud technologies and meeting the bandwidth requirements of data-hungry end-user applications. Special attention will be given to the migration of communication services triggered by train mobility patterns. The student is expected to find solutions to the following questions:

  • When to trigger service migrations?

  • Where to migrate services? (i.e., to which data center)

  • How to handle this process? (So that the user does not perceive any interruption)

 Given:

  • Trains mobility patterns

  • Service requirements in terms of bandwidth and delay

  • Network topology

  • Data center locations

 
The results from this work can be useful to gain insight into the requirements of Smart Transportation Systems, which may in turn be useful for cementing the basis of other scenarios such as Autonomous Driving and Tele-Operated Driving.

 [1] Digitale Schiene Deutschland. Last visit on 13.12.2021 https://digitale-schiene-deutschland.de/FRMCS-5G-Datenkommunikation

[2] 5G-Rail FRMCS. Last visit on 13.12.2021 https://5grail.eu/frmcs/

Prerequisites

Basic knowledge in:

  • Integer Linear Programming (ILP), heuristics or Machine Learning (ML).

  • Python

Please send your CV and transcript of records.

 

Contact

Supervisor:

Cristian Bermudez Serna

Machine-learning-based network planning for the future railway communications

Keywords:
Network Planning, On-Train Data Communications. Machine Learning
Short Description:
Exploration of mechanisms for handling data communications under the influence of mobility in the German long distance railway system.

Description

This work focuses on the exploration of networks enabling train control and on-board data communications in mobility scenarios. Today, low-bandwidth networks such as GSM, providing less than 200 kbps, are used to transmit train control information. Moreover, although trains may use multiple on-board technologies to provide users with an internet connection (e.g., repeaters, access points), these connections are characterized by low throughput (less than 2 Mbps) and frequent service interruptions.

This work aims at the development of a network planning solution enabling future applications in train mobility scenarios, such as Automatic Train Operation (ATO) [1,2], leveraging cloud technologies and meeting the bandwidth requirements of data-hungry end-user applications. Special attention will be given to the migration of communication services triggered by train mobility patterns. The student is expected to find solutions to the following questions:

  • When to trigger service migrations?

  • Where to migrate services? (i.e., to which data center)

  • How to handle this process? (So that the user does not perceive any interruption)

 Given:

  • Trains mobility patterns

  • Service requirements in terms of bandwidth and delay

  • Network topology

  • Data center locations

 
The results from this work can be useful to gain insight into the requirements of Smart Transportation Systems, which may in turn be useful for cementing the basis of other scenarios such as Autonomous Driving and Tele-Operated Driving.

 [1] Digitale Schiene Deutschland. Last visit on 13.12.2021 https://digitale-schiene-deutschland.de/FRMCS-5G-Datenkommunikation

[2] 5G-Rail FRMCS. Last visit on 13.12.2021 https://5grail.eu/frmcs/

Prerequisites

Basic knowledge in:

  • Integer Linear Programming (ILP), heuristics or Machine Learning (ML).

  • Python

Please send your CV and transcript of records.

 

Contact

Supervisor:

Cristian Bermudez Serna

Evaluation of traffic model impact on a context-aware power consumption model of user equipment

Keywords:
5G, IIoT, energy, efficiency

Description

Energy efficiency is one of the key performance requirements in 5G networks to ensure user experience. A portion of devices, especially in the Industrial Internet of Things (IIoT), run on limited energy, supported by batteries that are not replaced over the device lifetime.

Therefore, the estimation of power consumption and battery lifetime has recently received increased attention. Multiple context parameters, such as mobility and traffic arrivals, impact a device's power consumption.

In this thesis, the student shall focus on analysing the impact of different traffic models on the power consumption of user equipment. Different source and aggregated traffic models will be implemented, depending on the number of devices in the scenario. The implemented traffic models will be evaluated based on a context-aware power consumption model for the user equipment.
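Two source traffic models commonly contrasted in such studies can be sketched as follows (a minimal illustration with assumed function names and units, not the thesis's actual models): random Poisson arrivals versus the deterministic periodic reporting typical of IIoT sensors.

```python
import random

def poisson_arrivals(rate, horizon, seed=0):
    """Sample a Poisson arrival process: i.i.d. exponential inter-arrival
    times with mean 1/rate, up to the simulation horizon (seconds)."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t >= horizon:
            return times
        times.append(t)

def periodic_arrivals(period, horizon):
    """Deterministic periodic arrivals, typical of IIoT sensor reporting."""
    return [k * period for k in range(1, int(horizon / period) + 1)]
```

Feeding both arrival traces into the same power consumption model exposes how burstiness affects how often the device can enter sleep states.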

Prerequisites

  • Good knowledge of Python and Matlab programming.
  • Good mathematical background.
  • Knowledge of mobile networks.

Supervisor:

Alba Jano

Cost evaluation of a dynamic functional split

Description

Increased interference is one of the main drawbacks of cell densification, which is an important strategy for 5G networks to achieve higher data rates. Function centralization has been proposed as a strategy to counter this problem by letting the physical or scheduling functions coordinate among one another. Nevertheless, the capacity of the fronthaul network limits the feasibility of this strategy, as the throughput required to connect low-level functions is very high. Fortunately, since not every function benefits in the same way from centralization, a more flexible approach can be used. Instead of centralizing all functions, only those providing the highest amount of interference mitigation can be centralized. In addition, the centralization level, or functional split, can be changed at runtime according to the instantaneous network conditions. Nonetheless, it is not fully known how costly it is to deploy and operate a network implementing a dynamic functional split.

In this internship, the cost of a radio access network implementing a dynamic functional split will be evaluated. A simulator already developed at LKN will be used and extended to produce network configurations adapted to the instantaneous user position and activity. Then, off-the-shelf cost models will be improved and used to estimate the deployment and operating cost of the network under multiple scenarios. Furthermore, the conditions on which a dynamic functional split is profitable will be investigated. Improvements on the functional-split selection algorithm will be proposed, such that the operator benefits from enhanced performance without operating at exceedingly costly states. Finally, a model that takes into account the cost of finding and implementing a new functional split will be employed and its results compared to the previous results.

Supervisor:

Jitter Analysis and Comparison of Jitter Algorithms

Description

In electronics and telecommunication, jitter is a significant and undesired factor. The effect of jitter on a signal depends on the nature of the jitter. It is important to characterize jitter and noise sources when the clock frequency is especially prone to jitter, or when debugging failure sources in the transmission of high-speed serial signals. Managing jitter is of utmost importance, and the methods of jitter decomposition have changed considerably over the past years.

 

In a system, jitter has many contributions, and identifying the individual contributors is not easy. Random jitter, in particular, is difficult to isolate on a spectrogram. The waveforms are initially constant, but 1/f (flicker) noise causes considerable disturbance in output measurements at particular frequencies in a system.

 

The task is to understand the difference between jitter calculations based on a step-response estimation and on the dual-Dirac model by comparing the jitter algorithms of the R&S oscilloscope with those of competitor oscilloscopes, and to assess how well jitter decomposition and identification perform.
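
The dual-Dirac model mentioned above extrapolates total jitter at a target BER as TJ(BER) = DJ(dd) + 2·Q(BER)·RJ_rms. A minimal sketch of that formula, ignoring refinements such as the transition density, with purely hypothetical jitter numbers:

```python
from statistics import NormalDist

def total_jitter(dj_pp, rj_rms, ber=1e-12):
    """Dual-Dirac extrapolation: TJ(BER) = DJ(dd) + 2 * Q(BER) * RJ_rms.

    dj_pp:  deterministic jitter in the dual-Dirac sense (seconds)
    rj_rms: RMS of the Gaussian random-jitter component (seconds)
    """
    # Q(BER): one-sided Gaussian tail quantile, ~7.03 at BER = 1e-12.
    q = -NormalDist().inv_cdf(ber)
    return dj_pp + 2 * q * rj_rms

# Hypothetical example: 10 ps DJ, 1 ps RJ at BER = 1e-12
print(total_jitter(10e-12, 1e-12))
```

The sketch also shows the model's main limitation the thesis is to identify: it assumes a purely Gaussian random-jitter tail, so non-Gaussian tails bias the extrapolated TJ.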

 

The tasks in detail are as follows:

  • Set up a waveform simulation environment and extend it to elaborate test cases
  • Run the generated waveforms through the algorithms
  • Analyze and compare the results:
      • Frequency domain
      • Statistically (histogram, etc.)
      • Time domain
      • Consistency of the results
  • Evaluate the estimation of the BER (bit error rate)
  • Identify the limitations of the dual-Dirac model
  • Compare dual-Dirac model results with a calculation based on the step-response estimation
  • Generate new waveforms based on the analysis
  • Summarize findings

Supervisor:

Arled Papa - Mathias Hellwig (Rohde & Schwarz)

Probabilistic Traffic Classification

Keywords:
Probabilistic Graphical Models, Markov Model, Hidden Markov Model, Machine Learning, Traffic Classification
Short Description:
Classification of packet level traces using Markov and Hidden Markov Models.

Description

The goal of this thesis is the classification of packet-level traces using Markov and Hidden Markov Models. The scenario is open-world: traffic of specific web applications should be distinguished from all possible web pages (background traffic). In addition, several pages should be differentiated. Examples include Google Maps, YouTube, Google Search, Facebook, Google Drive, Instagram, Amazon Store, Amazon Prime Video, etc.
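
As a minimal sketch of the Markov-model half of the task (Hidden Markov Models add latent states on top of this), one can train one transition matrix per application over quantized packet sizes and classify a trace by maximum log-likelihood. All traces and the two-symbol alphabet below are hypothetical:

```python
import math

# Sketch: per-class Markov chains over a quantized packet-size alphabet;
# a trace is assigned to the class giving it the highest log-likelihood.

def train(sequences, n_symbols, alpha=1.0):
    """Estimate a transition matrix with Laplace (add-alpha) smoothing."""
    counts = [[alpha] * n_symbols for _ in range(n_symbols)]
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return [[c / sum(row) for c in row] for row in counts]

def loglik(model, seq):
    return sum(math.log(model[a][b]) for a, b in zip(seq, seq[1:]))

def classify(models, seq):
    return max(models, key=lambda name: loglik(models[name], seq))

# Symbols: 0 = small packet, 1 = large packet (hypothetical training traces)
models = {
    "video":  train([[1, 1, 1, 1, 0, 1, 1, 1]], 2),   # mostly large packets
    "search": train([[0, 1, 0, 0, 1, 0, 0, 0]], 2),   # mostly small packets
}
print(classify(models, [1, 1, 1, 0, 1, 1]))
```

The open-world part of the thesis would additionally require a background model or a likelihood threshold to reject traffic that matches no known application.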

Supervisor:

Internships

Data Analysis and Prediction of Optical Network Performance on Open Source Data

Description

The internship aims to analyze open-source optical network data for performance prediction of optical networks and to develop data-driven methods for quality-of-transmission (QoT) estimation.
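
As a minimal, hedged illustration of "data-driven QoT estimation", here is a least-squares fit predicting the SNR of a lightpath from its number of fiber spans; the measurement values are made up, and a real study would use richer features and models:

```python
# Hypothetical monitoring data: lightpath SNR (dB) vs. number of spans.
spans = [2, 4, 6, 8, 10]
snr_db = [24.1, 21.0, 18.2, 15.1, 12.0]

# Closed-form simple linear regression (one feature, pure stdlib).
n = len(spans)
mx = sum(spans) / n
my = sum(snr_db) / n
slope = sum((x - mx) * (y - my) for x, y in zip(spans, snr_db)) / \
        sum((x - mx) ** 2 for x in spans)
intercept = my - slope * mx

def predict(n_spans):
    """SNR estimate for an unseen lightpath with n_spans spans."""
    return slope * n_spans + intercept

print(predict(5))
```

The negative slope captures the expected SNR degradation per added span; the internship would evaluate such predictors on real open datasets.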

Contact

jasper.mueller@adtran.com

Supervisor:

Jasper Konstantin Müller - Jasper Müller (Adtran Networks SE)

Probability parameters of 5G RANs featuring dynamic functional split

Description

The architecture of 5G radio access networks features the division of the base station (gNodeB) into a centralized unit (CU) and a distributed unit (DU). This division enables cost reduction and a better user experience via enhanced interference mitigation. Recent research proposes the possibility of modifying this functional split dynamically, that is, changing at runtime which functions run on the CU and the DU. This has interesting implications for network operation.

In this topic, the student will employ a dedicated simulator developed at LKN to characterize the duration and transition rates of each functional split under multiple variables: population density, mitigation capabilities, mobility, etc. This characterization may then be used in traffic models to predict the network behavior.
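
The characterization step can be sketched as counting, over a simulated state trace, how long the network holds each split and how often it switches between splits. The trace below is hypothetical (splits labeled 1-3, one sample per time step):

```python
from collections import Counter

# Hypothetical simulator output: the active functional split per time step.
trace = [1, 1, 1, 2, 2, 3, 3, 3, 3, 1, 1, 2]

# Transition counts (only actual changes of split) and holding times.
transitions = Counter((a, b) for a, b in zip(trace, trace[1:]) if a != b)
holding = Counter(trace)   # time steps spent in each split

print(dict(transitions))
print({s: holding[s] for s in sorted(holding)})
```

Dividing the transition counts by the holding times yields empirical transition rates, which can then parameterize the traffic models mentioned above.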

Prerequisites

MATLAB, some experience with mobile networks and simulators

Supervisor:

Student Assistant Jobs

Working Student for the Medical Testbed

Description

Future medical applications put stringent requirements on the underlying communication networks in terms of highest availability, maximal throughput, minimal latency, etc. Thus, in the context of the 6G-life project, new networking concepts and solutions are being developed.

For the research on using 6G for medical applications, the communication and the medical side have joined forces: while researchers from the MITI group (Minimally invasive Interdisciplinary Therapeutical Intervention), located at the hospital "Rechts der Isar", focus on the requirements of the medical applications and on collecting the needed patient parameters, it is the task of the researchers at LKN to optimize the network to satisfy the applications' demands. The goal of this joint research work is to have two working medical testbeds located in the hospital to demonstrate the impact and benefits of future 6G networks and concepts for medical applications.

Your task during this work is to implement the communication network for those testbeds. Based on an existing open-access 5G network implementation, you will implement changes according to the progress of the current research. The results of your work, working 6G medical testbeds, will enable researchers to validate their approaches with real-world measurements and make it possible to demonstrate future 6G concepts to research, industry and politics.

In this project, you will gain a deep insight into how communication networks, especially the Radio Access Network (RAN), work and how different aspects are implemented. Additionally, you will understand the current limitations and weaknesses as well as concepts for improvement. Also, you will get some insights into medical topics if interested. As in such a broad topic there are many open research questions, you additionally have the possibility to also write your thesis or complete an internship.

 

Prerequisites

Most important:

  • Motivation and willingness to learn unknown things.
  • Ability to work with various partners (teamwork ability).

 

Of advantage:

  • C/C++ and knowledge about how other programming languages work (Python, etc.)
  • Knowledge about communication networks (especially the RAN), 5G concepts, the P4 language, SDN, Linux.
  • Initiative to bring in own ideas and solutions.

Please note: It is not necessary to know about every topic mentioned above; rather, it is important to be willing to read up on them.

 

Supervisor:

Nicolai Kröger

Working Student for Analysis, Modeling and Simulation of Communication Networks SS2023

Description

The main responsibilities of a working student include assisting tutors in correcting programming assignments and answering questions in Moodle. Working time is 6-7 hours per week in the period from May to July.

Prerequisites

  • Python knowledge

Contact

polina.kutsevol@tum.de

Supervisor:

Alba Jano, Polina Kutsevol

Working Student for Testbed on 5G/6G RAN

Description

The results expected from this work are the enhancement of the 5G/6G testbed setup with additional features in the Radio Access Network (RAN) and Core Network (CN). The work is focused on the OpenAirInterface (OAI) [1] platform, which forms the basis of the testbed setup. The expected outcome is improvements in wireless resource scheduling, focused on the uplink (UL), power management, and core network function management.

[1] N. Nikaein, M. K. Marina, S. Manickam, A. Dawson, R. Knopp and C. Bonnet, "OpenAirInterface: A flexible platform for 5G research," ACM SIGCOMM Computer Communication Review, vol. 44, no. 5, 2014.
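
This is not OAI code, but as a sketch of the kind of resource-scheduling logic the testbed work touches, here is a minimal proportional-fair (PF) selection rule: per TTI, the UE with the highest ratio of instantaneous to average rate is served. The rate values are hypothetical:

```python
# Minimal proportional-fair scheduler sketch (not OAI code): each TTI,
# serve the UE maximizing the PF metric r_i / R_i, which balances
# throughput against fairness to UEs with poor average rates.

def pf_schedule(inst_rates, avg_rates):
    """Return the index of the UE with the highest PF metric."""
    return max(range(len(inst_rates)),
               key=lambda i: inst_rates[i] / avg_rates[i])

# Hypothetical per-TTI rates (Mbit/s) and running average throughputs:
print(pf_schedule([10.0, 8.0, 4.0], [9.0, 4.0, 1.0]))
```

In a real scheduler the average rates are updated with an exponential moving average after each allocation; uplink scheduling additionally has to respect power headroom and buffer status reports.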

Prerequisites

  • Good C/C++ experience
  • Good Python knowledge
  • RAN and CN architecture understanding is a plus

Contact

alba.jano@tum.de, yash.deshpande@tum.de

Supervisor:

Alba Jano, Yash Deshpande

Working Student for Network Delay Measurements

Description

Communication networks in the industrial domain must fulfill a strict set of requirements: they must meet strict latency and bandwidth targets to allow trouble-free operation. Typically, industry relies on purpose-built solutions that can satisfy these requirements.

Recently, the industry has been moving towards Ethernet-based networks for these use cases. This enables the use of commercial off-the-shelf hardware to communicate within the network. However, this hardware will still execute industrial applications and therefore has to meet the same strict requirements as the network. In this project, we consider Linux-based hosts that run the industrial applications. We consider different networking hardware and system configurations to see how they affect performance. The goal is to investigate the overhead of the host.

 

Your tasks within the project are :

  • Measure the Host Latency with different NICs
  • Measure the Host Latency with different Hardware Offloads
  • Tune, configure, and measure the Linux Scheduler to improve performance
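
A minimal software-only sketch of such a host-latency probe: send small UDP datagrams over loopback and time them with a high-resolution clock. Because loopback never leaves the host, the samples mostly reflect stack and scheduling overhead, which is exactly the quantity of interest; packet size and sample count are arbitrary choices:

```python
import socket
import time

# Loopback latency probe: times the path from sendto() on one socket to
# recvfrom() on another socket of the same host (stack overhead only).
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))          # let the OS pick a free port
addr = rx.getsockname()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

samples = []
for _ in range(100):
    t0 = time.perf_counter_ns()
    tx.sendto(b"x" * 64, addr)
    rx.recvfrom(2048)               # blocks until the datagram arrives
    samples.append(time.perf_counter_ns() - t0)

samples.sort()
print("median stack delay: %d ns" % samples[len(samples) // 2])
rx.close()
tx.close()
```

Hardware-timestamping NICs (e.g. via SO_TIMESTAMPING on Linux) would be the next step to separate NIC, driver, and scheduler contributions, which is what the project's measurements target.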

 

You will gain:

  • Experience with Networking Hardware
  • Experience with Hardware Measurements 
  • Experience with Test Automation

 

Please send a short intro of yourself with your CV and transcript of records to us. We are looking forward to meeting you.

 

Prerequisites

  • Familiarity with Linux Console
  • Python
  • C (not required, but a plus)

Contact

philip.diederich@tum.de

Supervisor:

Solving the manufacturer assignment problem to maximise availability of a network using linear programming

Keywords:
availability, manufacturer assignment, Nonlinear program

Description

Availability is the probability that a device performs its required function at a particular instant of time.

In most networks, the components are sourced from different manufacturers and have different availabilities. Network operators prefer having reliable components handle more traffic, as this ensures the robustness of the network. It is therefore essential to assign appropriate manufacturers to the components in the topology, guaranteeing
a) maximum availability, and
b) load balancing on the nodes.

For a fixed topology and known traffic, how can the components be assigned to manufacturers to maximise availability and balance load on nodes?
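
Since the availability of components in series is the product of their availabilities, maximizing it is equivalent to maximizing the sum of log-availabilities, which is what makes a linear-programming formulation possible. The toy brute force below only illustrates the objective and a simple load-balancing cap; all availabilities and sizes are hypothetical:

```python
from itertools import product

# Toy instance: 4 series components, 2 manufacturers with assumed
# availabilities; no manufacturer may supply more than 3 components
# (a crude load-balancing constraint).
avail = {"M1": 0.999, "M2": 0.995}
n_components, max_per_mfr = 4, 3

best, best_a = None, -1.0
for assign in product(avail, repeat=n_components):
    if any(assign.count(m) > max_per_mfr for m in avail):
        continue                    # violates the load-balancing cap
    a = 1.0
    for m in assign:                # series system: availabilities multiply
        a *= avail[m]
    if a > best_a:
        best, best_a = assign, a

print(best, best_a)
```

As expected, the search fills the cap with the most reliable manufacturer and assigns the remainder elsewhere; an (I)LP over log-availabilities scales this to realistic topologies.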

Prerequisites

Mandatory:

  • Communication Network Reliability course/ Optical Networks course at LKN
  • Python

 

Preferred:

  • Knowledge of Linear Programming and/or nonlinear programming

Contact

shakthivelu.janardhanan@tum.de

Supervisor:

Shakthivelu Janardhanan

Working Student for the Implementation of a Medical Testbed

Keywords:
Communication networks, programming
Short Description:
Your goal is to implement a network for critical medical applications based on an existing open-access 5G networking framework as well as the adaptation of this network according to the needs of our research.

Description

 

Future medical applications put stringent requirements on the underlying communication networks in terms of highest availability, maximal throughput, minimal latency, etc. Thus, in the context of the 6G-life project, new networking concepts and solutions are being developed.

For the research on using 6G for medical applications, the communication and the medical side have joined forces: while researchers from the MITI group (Minimally invasive Interdisciplinary Therapeutical Intervention), located at the hospital "Rechts der Isar", focus on the requirements of the medical applications and on collecting the needed patient parameters, it is the task of the researchers at LKN to optimize the network to satisfy the applications' demands. The goal of this joint research work is to have two working medical testbeds located in the hospital to demonstrate the impact and benefits of future 6G networks and concepts for medical applications.

Your task during this work is to implement the communication network for those testbeds. Based on an existing open-access 5G network implementation, you will implement changes according to the progress of the current research. The results of your work, working 6G medical testbeds, will enable researchers to validate their approaches with real-world measurements and make it possible to demonstrate future 6G concepts to research, industry and politics.

In this project, you will gain a deep insight into how communication networks, especially the Radio Access Network (RAN), work and how different aspects are implemented. Additionally, you will understand the current limitations and weaknesses as well as concepts for improvement. Also, you will get some insights into medical topics if interested. As in such a broad topic there are many open research questions, you additionally have the possibility to also write your thesis or complete an internship.

Prerequisites

 

  • Most important: Motivation and willingness to learn unknown things.
  • C/C++ and knowledge about how other programming languages work (Python, etc.) and/or the willingness to work oneself into such languages.
  • Preferred: Knowledge about communication networks (especially the RAN), 5G concepts, the P4 language, SDN, Linux.
  • Initiative to bring in own ideas and solutions.
  • Ability to work with various partners (teamwork ability).

Please note: It is not necessary to know about every topic mentioned above; rather, it is important to be willing to read up on them.

Contact

Supervisor:

Nicolai Kröger

End-to-End Delay Measurements of Linux End Hosts

Description

As preliminary results show, the Linux TCP/IP networking stack introduces considerable delay. The topic of this work is to perform an empirical study of the Linux socket-based transmission approach and to implement a delay-measurement workflow based on existing foundations and repositories.

References:

[1] Where has my time gone?

Prerequisites

Basic knowledge of 

  • Networking and Linux
  • C

Supervisor:

Zikai Zhou

Implementation of a Techno-Economic tool for VLC

Short Description:
Development and implementation in Excel/VBA of visible light communication (VLC) techno-economic tool for IoT services.

Description

Future IoT will need wireless links with high data rates, low latency and reliable connectivity despite the limited radio spectrum. Connected lighting is an interesting infrastructure for IoT services because it enables visible light communication (VLC), i.e., wireless communication using the unlicensed light spectrum. This work aims at developing a tool to perform an economic evaluation of the proposed solution in the particular case of a smart office.

For that purpose, the following tasks will have to be performed:

  • Definition of a high-level framework specifying the different modules that will be implemented as well as the required inputs and the expected outputs of the tool.
  • Development of a cost-evaluation Excel/VBA tool. This tool will make it possible to evaluate different variations of the selected case study and, if possible, to compare different alternative models (e.g., dimensioning) or scenarios (e.g., building types).
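
The eventual tool is Excel/VBA, but the kind of roll-up it would compute can be sketched in a few lines: CAPEX for the VLC-enabled luminaires plus yearly OPEX discounted over the study period. All figures below are purely hypothetical placeholders:

```python
# Hypothetical smart-office instance: the tool's inputs would replace
# these constants per scenario (building type, dimensioning, ...).
n_luminaires = 120
capex_per_unit = 150.0       # EUR per luminaire incl. VLC modem (assumed)
opex_per_unit_year = 10.0    # EUR per luminaire per year (assumed)
years, discount = 5, 0.05    # study period and discount rate (assumed)

capex = n_luminaires * capex_per_unit
# Net present value of the recurring OPEX over the study period.
opex_npv = sum(n_luminaires * opex_per_unit_year / (1 + discount) ** t
               for t in range(1, years + 1))
print(round(capex + opex_npv, 2))
```

In the Excel tool, each of these constants becomes an input cell, so scenarios (e.g., building types) differ only in their parameter sheets.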

Prerequisites

- Excel and VBA

Supervisor:

Implementation of Energy-Aware Algorithms for Service Function Chain Placement

Description

Network Function Virtualization (NFV) is becoming a promising technology in modern networks. A challenging problem is determining the placement of Virtual Network Functions (VNFs). In this work, we plan to implement existing algorithms for embedding VNF chains in NFV-enabled networks.
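
As a hedged illustration (not one of the algorithms from the literature that the work would implement), here is a greedy energy-aware placement sketch: each VNF of a chain goes onto an already-active node if capacity allows, otherwise the node with the most free capacity is powered on, so fewer nodes stay active. Node names, capacities, and the chain are hypothetical:

```python
# Toy instance: node CPU capacities and a service function chain of
# (VNF name, CPU demand) pairs.
capacity = {"n1": 4, "n2": 2, "n3": 4}
chain = [("fw", 2), ("nat", 1), ("dpi", 2)]

used = {n: 0 for n in capacity}
active, placement = set(), {}

for vnf, cpu in chain:
    # Prefer nodes that are already powered on (first-fit among active).
    candidates = [n for n in capacity
                  if n in active and capacity[n] - used[n] >= cpu]
    if not candidates:
        # Power on the feasible node with the most free capacity.
        candidates = [n for n in capacity if capacity[n] - used[n] >= cpu]
        candidates.sort(key=lambda n: capacity[n] - used[n], reverse=True)
    node = candidates[0]
    used[node] += cpu
    active.add(node)
    placement[vnf] = node

print(placement, sorted(active))
```

Consolidating VNFs onto few active nodes is the usual proxy for energy savings; real algorithms additionally account for link bandwidth and latency along the chain.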

Prerequisites

Experience in Python or Java, object oriented programming

Contact

amir.varasteh@tum.de

Supervisor: