Student Work

We offer students the opportunity to participate actively in interesting and cutting-edge research topics and concrete research projects by conducting their thesis in our group.

We offer topics for your Bachelor's Thesis and Master's Thesis to successfully complete your studies with a scientific work. For students of the Department of Electrical and Computer Engineering, we also supervise your Forschungspraxis (research internship / industrial internship) and Ingenieurpraxis directly at our chair. For students with other specializations, such as Informatics, we offer the opportunity to supervise your Interdisciplinary Project (IDP) (German: "Interdisziplinäres Projekt (IDP)"). Please contact us directly for more information.

Please note: On this page we also list student theses that are already assigned to students (Ongoing Theses), so you can get an impression of the range of topics we offer at our Chair. Nevertheless, if you are interested in one of the topics of an already ongoing thesis, please do not hesitate to contact the supervisor directly and ask whether follow-up topics are planned. This is often the case.

Open Theses

Disaggregated optical access network planning: ILP vs. ML

Keywords:
ML, ILP, optical access networks, PON, disaggregated networks

Description

The increase in bandwidth requirements triggered by new services and a growing number of terminals forces network providers to upgrade their networks constantly. The upgrade must of course take cost into account, but it should also consider bandwidth, delay, reliability, and security.

 

This Master's thesis aims at modeling, implementing, and evaluating different access network architectures with a network planning tool. For that purpose, the following tasks will be considered:

    • Define different network architectures based on the state of the art: combining PON and aggregation networks, considering dependability, etc.
    • Learn about and select the most suitable data sources and planning tools: ArcGIS, Gabriel graphs, etc.
    • Implement a planning solution (component placement, fiber/cable layout, etc.) using ILP (a minimal sketch follows below)
    • Implement a planning solution (component placement, fiber/cable layout, etc.) using ML
    • Evaluate and compare the availability and cost of the different architectures.
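
As a rough, hedged illustration of what the ILP-based planning step could look like, the following sketch formulates a toy facility-location-style placement problem with PuLP; the sites, buildings, costs, and capacity values are placeholders, not data from the actual planning tool.

# Minimal sketch of an ILP for component placement, assuming PuLP is available.
# Sites, buildings, costs and capacities below are illustrative placeholders.
import pulp

sites = ["s1", "s2", "s3"]            # candidate locations for splitters/OLTs
buildings = ["b1", "b2", "b3", "b4"]
open_cost = {"s1": 100, "s2": 120, "s3": 90}
fiber_cost = {(s, b): 10 * (i + j + 1)            # placeholder per-connection cost
              for i, s in enumerate(sites)
              for j, b in enumerate(buildings)}

prob = pulp.LpProblem("pon_placement", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", sites, cat="Binary")                  # open site?
x = pulp.LpVariable.dicts("assign", list(fiber_cost), cat="Binary")     # building -> site

# Objective: opening cost + fiber/cable cost
prob += (pulp.lpSum(open_cost[s] * y[s] for s in sites)
         + pulp.lpSum(fiber_cost[s, b] * x[s, b] for (s, b) in fiber_cost))

for b in buildings:                                   # every building is connected once
    prob += pulp.lpSum(x[s, b] for s in sites) == 1
for (s, b) in fiber_cost:                             # only to opened sites
    prob += x[s, b] <= y[s]
for s in sites:                                       # placeholder splitter capacity
    prob += pulp.lpSum(x[s, b] for b in buildings) <= 2

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("opened sites:", [s for s in sites if y[s].value() == 1])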

Example of previous work

 

Shahid, Arslan; Mas Machuca, Carmen: Dimensioning and Assessment of Protected Converged Optical Access Networks. IEEE Communications Magazine Vol. 55, No. 8, 2017

Prerequisites

Python, ML, and ILP formulation.

Contact

PD Dr.-Ing. habil. Carmen Mas Machuca

cmas@tum.de

Supervisor:

Machine-learning-based network planning for the future railway communications

Keywords:
Network Planning, On-Train Data Communications, Machine Learning
Short Description:
Exploration of mechanisms for handling data communications under the influence of mobility in the German long distance railway system.

Description

This work focuses on the exploration of networks enabling train control and on-board data communications under mobility scenarios. Today, low-bandwidth networks such as GSM, providing less than 200 kbps, are used to transmit train control information. Moreover, although trains may use multiple on-board technologies to provide users with an Internet connection (e.g., repeaters, access points), these attempts often fail, as the connections are characterized by low throughput (less than 2 Mbps) and frequent service interruptions.

This work aims at the development of a network planning solution enabling future applications in train mobility scenarios, such as Automatic Train Operation (ATO) [1,2], leveraging cloud technologies and meeting the bandwidth requirements of data-hungry end-user applications. Special attention will be given to the migration of communication services triggered by the trains' mobility patterns. The student is expected to find solutions to the following questions:

  • When to trigger service migrations?

  • Where to migrate services? (i.e., to which data center)

  • How to handle this process? (So that the user does not perceive any interruption)

 Given:

  • Trains' mobility patterns

  • Service requirements in terms of bandwidth and delay

  • Network topology

  • Data center locations
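
As a minimal sketch of the "where to migrate" decision, assuming a NetworkX topology with per-link delays and per-data-center free capacity (all names and numbers below are illustrative placeholders), the target data center could be selected as follows:

# Minimal sketch: pick the migration target data center for a moving train.
# Topology, delays and capacities are illustrative placeholders.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("bs_munich", "dc_munich", 2.0),        # link delay in ms
    ("bs_munich", "dc_nuremberg", 6.0),
    ("bs_ingolstadt", "dc_munich", 4.0),
    ("bs_ingolstadt", "dc_nuremberg", 3.0),
], weight="delay")

dc_free_capacity = {"dc_munich": 10, "dc_nuremberg": 4}   # e.g., free vCPUs

def pick_target_dc(serving_bs, demand, max_delay):
    """Return the feasible data center with the lowest delay, or None."""
    best, best_delay = None, float("inf")
    for dc, free in dc_free_capacity.items():
        if free < demand:
            continue
        try:
            d = nx.shortest_path_length(G, serving_bs, dc, weight="delay")
        except nx.NetworkXNoPath:
            continue
        if d <= max_delay and d < best_delay:
            best, best_delay = dc, d
    return best

# When the mobility pattern predicts a handover to bs_ingolstadt, a service
# needing 2 vCPUs and at most 5 ms delay would be migrated to:
print(pick_target_dc("bs_ingolstadt", demand=2, max_delay=5.0))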

 
The results from this work can provide insight into the requirements of Smart Transportation Systems, which may in turn help cement the basis of other scenarios such as Autonomous Driving and Tele-Operated Driving.

 [1] Digitale Schiene Deutschland. Last visit on 13.12.2021 https://digitale-schiene-deutschland.de/FRMCS-5G-Datenkommunikation

[2] 5G-Rail FRMCS. Last visit on 13.12.2021 https://5grail.eu/frmcs/

Prerequisites

Basic knowledge in:

  • Integer Linear Programming (ILP), heuristics or Machine Learning (ML).

  • Python

Please send your CV and transcript of records.

 

Contact

Supervisor:

Cristian Bermudez Serna

Student Assistant for the Internetkommunikation Lecture in SoSe23

Description

Internetkommunikation offers the opportunity to develop interesting software solutions for technical questions about Internet protocols and mechanisms. For the next semester, a position is available to assist the teaching assistants with the tutorials and the class project.

Prerequisites

  • Knowledge in computer networking
  • Good programming skills in Python

 

Supervisor:

Alexander Griessel

Working Student for Analysis, Modeling and Simulation of Communication Networks SS2023

Description

The main responsibilities of a working student include assisting the tutors with the correction of the programming assignments, as well as answering questions in Moodle. The working time is 6-7 hours per week in the period from May to July.

Prerequisites

  • Python knowledge

Contact

polina.kutsevol@tum.de

Supervisor:

Alba Jano, Polina Kutsevol

Alpha-fair Mobility Management in 5G Networks using Deep Reinforcement Learning

Description

Mobility management in 5G is challenging due to the usage of higher frequencies and the blockage of line-of-sight signals. Moreover, a much higher number of cells is needed at higher frequencies to provide coverage similar to 4G. As a result, the base stations are placed much more densely, and the users experience frequent handovers, which reduce the network capacity. The 5G baseline handover leads to frequent unnecessary handovers that drastically increase the signaling. Moreover, selecting the serving BS for a user is not trivial, since users are often located in the coverage of multiple BSs. Therefore, advanced handover techniques are needed in 5G to ensure smooth network operation.

In this thesis, an optimization problem to provide alpha-fairness in data rates among the users and reduce the handover rate will be considered. This is an NP-hard problem, so we will relax it to obtain an upper bound. Then, a Deep Reinforcement Learning (DRL)-based algorithm will be developed, which finds a near-optimal user-to-BS assignment and resource allocation. Finally, extensive simulations with varied parameters should be performed to evaluate the algorithms using a Python-based simulator. 
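
For reference, the alpha-fair utility that such an objective typically builds on can be computed as in the following sketch (the rates are placeholders); alpha = 0 corresponds to sum-rate, alpha = 1 to proportional fairness, and large alpha approaches max-min fairness:

# Minimal sketch of the alpha-fair utility over user data rates (placeholders).
import numpy as np

def alpha_fair_utility(rates, alpha):
    rates = np.asarray(rates, dtype=float)
    if np.isclose(alpha, 1.0):
        return np.sum(np.log(rates))                      # proportional fairness
    return np.sum(rates ** (1.0 - alpha) / (1.0 - alpha))

user_rates_mbps = [5.0, 20.0, 80.0]
for a in (0.0, 1.0, 2.0):
    print(a, alpha_fair_utility(user_rates_mbps, a))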

 

Prerequisites

  • Good knowledge of Python

  • Interest in learning about mobility management in 5G

  • Motivation to produce results that might go into a paper 

Supervisor:

Anna Prado, Fidan Mehmeti

Operator-revenue Maximization in Beyond-5G Cellular Networks

Description

Starting with 5G, network slicing, i.e., assigning portions of network resources depending on the use case so that users running the same application/service get resources from the same resource pool, introduced a paradigm shift in how cellular networks operate in general. This brought significant advantages to both users and operators by easing the resource allocation process and improving performance.

In 5G, users can be categorized, based on the service they are running, into three broad groups: eMBB, URLLC, and mMTC. There would be a separate Radio Access Network (RAN) slice for each group, so all eMBB users within the same cell would receive their resources from the same slice. The URLLC service type is characterized by the most stringent traffic requirements. Hence, different amounts of resources are needed to enable a satisfying user experience for different types of services, and the problem of proper slice dimensioning arises as a very important issue that needs to be addressed.

Besides splitting RAN resources (slices), given that in-network computing will become an inevitable part of the next generation of cellular networks, splitting computing resources will be very important as well. The computing resources come from edge clouds, which are collocated with base stations. Also, users have different channel conditions, which need to be taken into account. For each admitted user, the traffic requirement has to be satisfied.

Therefore, the operator needs to decide how to slice the RAN resources and, given the available computing resources, how many users of each service type to admit so that its overall revenue is maximized. The candidate needs to formulate an optimization problem. Then, different static and dynamic policies are to be analyzed in order to determine the best one, depending on the computational complexity introduced and the achievable performance in terms of the objective of the optimization problem.
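
A minimal sketch of such an admission problem, assuming PuLP as the solver front end and placeholder revenues, PRB demands, and compute demands, could look as follows; the actual thesis formulation will be considerably richer:

# Minimal sketch of an admission/slicing ILP maximizing operator revenue.
# All demands, budgets and revenues are illustrative placeholders.
import pulp

users = {
    # user: (service, revenue, prbs_needed, compute_needed)
    "u1": ("eMBB", 5, 20, 2),
    "u2": ("eMBB", 5, 25, 2),
    "u3": ("URLLC", 9, 10, 4),
    "u4": ("mMTC", 1, 2, 1),
    "u5": ("URLLC", 9, 12, 4),
}
total_prbs, total_compute = 50, 8

prob = pulp.LpProblem("revenue_max_admission", pulp.LpMaximize)
admit = pulp.LpVariable.dicts("admit", users, cat="Binary")

prob += pulp.lpSum(users[u][1] * admit[u] for u in users)                   # revenue
prob += pulp.lpSum(users[u][2] * admit[u] for u in users) <= total_prbs     # RAN budget
prob += pulp.lpSum(users[u][3] * admit[u] for u in users) <= total_compute  # edge compute

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("admitted:", [u for u in users if admit[u].value() == 1])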

Prerequisites

A good knowledge of any programming language is required.

Supervisor:

Fidan Mehmeti

Processing Prioritization in the Medical Context

Description

In future communication systems such as 6G, in-network computing will play a crucial role. In particular, processing units within the network make it possible to run applications such as digital twins close to the end user, leading to lower latencies and overall better performance. However, these processing resources are usually shared among many applications, which potentially leads to worse performance in terms of execution time, throughput, etc. This is especially critical for applications such as autonomous driving, telemedicine, or smart operations. Hence, the processing of more critical applications must be prioritized.

 

In this thesis, the task is to develop and evaluate a prioritization approach for applications. Not only technical aspects play a role in the prioritization, but also ethical, i.e., in this case medical, aspects. This is especially important if applications are equally critical. For this, suitable real use cases are identified together with our partners at MITI (Hospital "Rechts der Isar"). The prioritization approach then leads to a specified distribution of the processing and networking resources, satisfying the minimum needs of critical applications (a minimal sketch of such a distribution follows below).
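
As a minimal sketch of what such a distribution could look like (application names, minimum demands, and priority weights are placeholders), one simple policy first satisfies each application's minimum and then splits the remainder by priority weight:

# Minimal sketch of a prioritization policy: guarantee minima, then share the
# rest proportionally to priority weights. All numbers are placeholders.
def prioritized_share(capacity, apps):
    """apps: dict name -> (minimum, priority_weight). Returns name -> share."""
    alloc = {a: mn for a, (mn, _) in apps.items()}
    remaining = capacity - sum(alloc.values())
    assert remaining >= 0, "minimum demands exceed capacity"
    total_w = sum(w for _, w in apps.values())
    for a, (_, w) in apps.items():
        alloc[a] += remaining * w / total_w
    return alloc

apps = {"telesurgery": (4.0, 5), "imaging": (2.0, 2), "monitoring": (1.0, 1)}
print(prioritized_share(capacity=10.0, apps=apps))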

 

The result will be an evaluated prioritization approach for applications in the medical environment.

 

Prerequisites

Motivation, basic networking knowledge, basic programming knowledge

Contact

nicolai.kroeger@tum.de

Supervisor:

Nicolai Kröger, Fidan Mehmeti

Mobility Management for Computation-Intensive Tasks in Cellular Networks with SD-RAN

Description

In previous generations of cellular networks, both data plane and control plane operations were conducted jointly in the Radio Access Network (RAN). With the emergence of Software Defined Networking (SDN) and its adaptation to RANs, known as SD-RAN, the separation of the control plane from the data plane became possible for the first time in the 5G RAN, a paradigm shift in how the assignment of network resources is handled in particular and how cellular networks operate in general. The control is shifted to centralized units known as SD-RAN controllers. This brings considerable benefits to the cellular network because it detaches the monolithic RAN control and enables cooperation among different RAN components, i.e., Base Stations (BSs), improving network performance along multiple dimensions. Depending on the current spread of users (UEs) across BSs and their channel conditions, which the UEs periodically report to their serving BSs and the BSs forward to the SD-RAN controller, the controller can reallocate resources to BSs accordingly. The BSs then perform the resource allocation across their corresponding UEs. Consequently, exploiting this network-wide knowledge leads to overall improved performance, as it allows for optimal allocation decisions.

 

This increased level of flexibility, which arises from having a broader view of the network, can be exploited to improve mobility management in cellular networks. This comes into play even more in 6G networks, in which in-network computing is envisioned to be an integral part. Namely, users will send computation-intensive tasks to edge clouds (through their BSs) and wait for results as a response. However, as it takes some time until these tasks are run on the cloud, the user might change the serving BS in the meantime, so handovers have to be managed. While a task is being uploaded, performing a handover is undesirable, as the task would then need to be sent to another edge cloud. Consequently, with centralized knowledge of the whole network, the SD-RAN controller has an extra degree of freedom: to avoid frequent handovers, it can increase the number of frequency blocks assigned to a user while the task is uploaded and while the results are downloaded.

 

In this thesis, the goal is to increase the overall network utility by deciding which tasks to serve (each task has its own utility), given the limited network resources in terms of upload bandwidth, download bandwidth, storage in edge clouds, and finite computational capacity. Besides sending tasks and receiving results, users are assumed to run other applications with given service requirements. The student will formulate optimization problems and solve them either analytically or using an optimization solver, like Gurobi, CVX, etc. The other task is to conduct realistic simulations and show the advantages the developed algorithms offer against benchmarks.
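
As a rough baseline sketch of the task-selection decision (utilities, demands, and budgets below are placeholders), a greedy heuristic could admit tasks by utility per unit of normalized demand; in the thesis itself this would be replaced by a proper optimization model, e.g., in Gurobi or CVX:

# Minimal greedy sketch for deciding which tasks to serve under limited
# uplink/downlink bandwidth, edge storage and compute. Numbers are placeholders.
tasks = [
    # (name, utility, uplink, downlink, storage, cpu)
    ("t1", 10, 5, 2, 1, 3),
    ("t2", 7, 2, 1, 1, 1),
    ("t3", 9, 6, 3, 2, 4),
    ("t4", 4, 1, 1, 1, 1),
]
budget = {"uplink": 10, "downlink": 5, "storage": 3, "cpu": 6}

def greedy_admission(tasks, budget):
    left = dict(budget)
    admitted = []
    def density(t):                       # utility per unit of normalized demand
        _, u, ul, dl, st, cpu = t
        return u / (ul / budget["uplink"] + dl / budget["downlink"]
                    + st / budget["storage"] + cpu / budget["cpu"])
    for name, u, ul, dl, st, cpu in sorted(tasks, key=density, reverse=True):
        need = {"uplink": ul, "downlink": dl, "storage": st, "cpu": cpu}
        if all(left[r] >= need[r] for r in need):
            admitted.append(name)
            for r in need:
                left[r] -= need[r]
    return admitted

print(greedy_admission(tasks, budget))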

 

 

 

 

 

 

Prerequisites

Good knowledge of Python and interest in learning about mobility management in 5G

Supervisor:

Anna Prado, Fidan Mehmeti

Optimal resource allocation for utility maximization in 5G networks

Description

The slice dimensioning for the three types of traffic in 5G (eMBB, URLLC, and mMTC) is the focus of this thesis. Each user, depending on the type of traffic, is characterized by a weight. This could be, e.g., the gain of the operator from serving a given user. We assume that the more stringent the traffic requirement is, the higher the gain for the operator. In this way, the mMTC users' weight would be the lowest, whereas URLLC's would be the highest. Also, users have different channel conditions, which need to be taken into account. For each admitted user, the traffic requirement has to be satisfied. The problem then reduces to deciding what sizes of network slices are to be allocated to each service type, and the corresponding number of users within each slice, so that the utility for the operator is maximized.

Three policies are to be analyzed. The first is motivated by the finite number of Physical Resource Blocks (PRBs) each cell has, so that a brute-force solution is found; the computational complexity of this policy has to be obtained as well. The second, less complex policy is one in which resources are reserved beforehand (i.e., a static policy), based on the ratio between the weight and the resources needed for a user of the given service type (a minimal sketch follows below). Finally, the third policy, based on a heuristic, would decide on the fly how to dimension the RAN slice sizes.
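
A minimal sketch of the static policy, with placeholder weights and per-user PRB demands, could look as follows:

# Minimal sketch of the static slicing policy: reserve PRBs per slice in
# proportion to weight / per-user PRB demand. All numbers are placeholders.
def static_slice_sizes(total_prbs, services):
    """services: name -> (weight, prbs_per_user). Returns name -> reserved PRBs."""
    ratio = {s: w / prbs for s, (w, prbs) in services.items()}
    total = sum(ratio.values())
    return {s: int(total_prbs * r / total) for s, r in ratio.items()}

services = {"eMBB": (3, 10), "URLLC": (8, 5), "mMTC": (1, 1)}
sizes = static_slice_sizes(total_prbs=100, services=services)
users_per_slice = {s: sizes[s] // services[s][1] for s in services}
print(sizes, users_per_slice)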

Prerequisites

A good knowledge of any programming language is required.

Contact

fidan.mehmeti@tum.de

Supervisor:

Fidan Mehmeti

Enhanced Mobility Management in 5G Networks with SD-RAN

Description

 

In pre-5G networks, both data plane and control plane operations were performed jointly in the Radio Access Network (RAN). With the emergence of Software Defined Networking (SDN) and its adaptation to RANs, known as SD-RAN, the separation of the control plane from the data plane became possible for the first time in the 5G RAN, a paradigm shift in how the assignment of network resources is handled in particular and how cellular networks operate in general. The control is transferred to centralized units known as SD-RAN controllers. This brings considerable benefits to the mobile network since it detaches the monolithic RAN control and enables cooperation among different RAN components, i.e., Base Stations (BSs), improving network performance along several dimensions. To that end, depending on the current spread of users (UEs) across BSs and their channel conditions, which the UEs periodically report to their serving BSs and the BSs send to the SD-RAN controller, the controller can reallocate resources to BSs accordingly. The BSs then perform the resource allocation across their corresponding UEs. As a consequence, exploiting this network-wide knowledge leads to overall improved performance, as it allows for optimal allocation decisions.

 

This increased level of flexibility, which arises from having a broader view of the network, can be exploited to improve mobility management in cellular networks. In previous generations of cellular networks, each BS had its own set of frequencies at which it could transmit. Given that each user receives service from only one BS, the user decides, depending on the channel conditions with the serving BS and the number of users within the same cell, whether a handover is needed or whether it is better to remain within the same serving area (i.e., receiving service from the same BS). Currently, conditional handovers are the most serious candidate for 5G. However, every handover involves a considerable cost, due to the preparations that need to be performed to hand a user over from one BS to another. These unavoidably lead to reductions in data rates and in the network resources available for other users. On the other hand, with centralized knowledge of the whole network, which the SD-RAN controller has, the controller has an extra degree of freedom to avoid frequent handovers: it can increase the number of frequency blocks assigned to a user experiencing bad channel conditions. This of course depends on the topology of the users at that moment.

 

In this thesis, the focus will be on jointly deciding on the resource allocation policy for each user across the entire area of the controller and on when to perform a handover, in order to optimize different performance aspects (e.g., provide proportional fairness). To that end, the student will formulate optimization problems and solve them either analytically or using an optimization solver, like Gurobi, CVX, etc. The other part consists of conducting realistic simulations and showing the advantages the developed algorithms offer against the state of the art.
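
For orientation, a minimal single-cell proportional-fair scheduling sketch is shown below; the random channel model is only a placeholder for the rates produced by the actual simulator:

# Minimal proportional-fair scheduler sketch: in each TTI the resource goes to
# the user maximizing instantaneous_rate / average_throughput.
import random

random.seed(1)
users = ["u1", "u2", "u3"]
avg_tput = {u: 1e-6 for u in users}   # small value avoids division by zero
BETA = 0.05                           # averaging window factor

for tti in range(1000):
    inst_rate = {u: random.uniform(1.0, 10.0) for u in users}   # Mbps, placeholder
    scheduled = max(users, key=lambda u: inst_rate[u] / avg_tput[u])
    for u in users:
        served = inst_rate[u] if u == scheduled else 0.0
        avg_tput[u] = (1 - BETA) * avg_tput[u] + BETA * served

print({u: round(r, 2) for u, r in avg_tput.items()})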

 

 

Prerequisites

Good knowledge of Python and interest in learning about mobility management in 5G

Supervisor:

Anna Prado, Fidan Mehmeti

Joint power and PRB allocation in SD-RAN environments in Beyond-5G networks

Keywords:
5G NR, SD-RAN, joint optimization

Description

 

In previous generations of cellular networks, both data plane and control plane operations were performed jointly in the Radio Access Network (RAN). With the emergence of Software Defined Networking (SDN) and its adaptation to RANs, known as SD-RAN, the separation of the control plane from the data plane became possible for the first time in the RAN of 5G networks, a paradigm shift in how the assignment of network resources is handled in particular and how cellular networks operate in general. The control is transferred to centralized units known as SD-RAN controllers. This brings many benefits to the mobile network since it detaches the monolithic RAN control and enables cooperation among RAN components, i.e., Base Stations (BSs), improving network performance along different dimensions.

 

This increased level of flexibility arises from having a broader view of the network, which is provided by the centralized SD-RAN approach. In that way, depending on the current spread of users (UEs) across BSs and their channel conditions, which the UEs periodically report to their serving BSs and the BSs send to the SD-RAN controller, the controller can reallocate resources to BSs accordingly. The BSs then perform the resource allocation across their corresponding UEs. As a consequence, exploiting this network-wide knowledge leads to overall improved performance, as it allows for optimal allocation decisions. As opposed to SD-RAN, in a classical RAN approach each BS has its own fixed set of resources and allocates them to the UEs within its operational area.

 

So far, research in SD-RAN has focused only on allocating the resource blocks (i.e., frequencies) adaptively to the BSs. In this thesis, the focus will be on the joint allocation of both resource blocks and transmission power to the BSs within the area of the controller, in order to optimize different performance aspects. To that end, the student will formulate optimization problems and solve them either analytically or using an optimization solver, like Gurobi, CVX, etc. The other part consists of conducting measurements for different allocation policies in OpenAirInterface.
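
As a simplified illustration of the power-allocation side of the problem, the following sketch performs classic single-cell water-filling over PRBs with different channel gains (gains, noise, and power budget are placeholders); the joint multi-BS PRB-and-power allocation studied in the thesis is considerably more involved:

# Minimal water-filling sketch: split a BS power budget over PRBs with
# different channel gains. Gains, noise and budget are placeholders.
import numpy as np

def waterfill(gains, noise, p_total, iters=60):
    """Return per-PRB powers p_i = max(0, mu - noise/g_i) with sum(p) = p_total."""
    inv = noise / np.asarray(gains, dtype=float)
    lo, hi = 0.0, p_total + inv.max()
    for _ in range(iters):                      # bisection on the water level mu
        mu = (lo + hi) / 2
        p = np.maximum(0.0, mu - inv)
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, lo - inv)

gains = [1.0, 0.5, 0.1, 0.05]                   # per-PRB channel gains (placeholder)
p = waterfill(gains, noise=1.0, p_total=4.0)
rates = np.log2(1 + np.array(gains) * p / 1.0)  # Shannon rate per PRB
print(np.round(p, 3), np.round(rates, 3))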

 

Prerequisites

 

  • Good C/C++ experience
  • Knowledge of OFDMA

 

Contact

serkut.ayvasik@tum.de

fidan.mehmeti@tum.de

Supervisor:

Serkut Ayvasik, Fidan Mehmeti

Optimal and Proactive Communication Resource Allocation

Keywords:
LiFi, Multipath, Optimization, Task Offloading

Description

The goal of the thesis would be to build an Anticipatory or Proactive Wireless Resource Allocation Framework to optimize Multi-hop, Multi-path networks.

The approach is to develop and solve an optimization problem that allocates network resources to users by looking into a window of time in the future. By knowing the channel quality of the users in the future, a better allocation of resources is made possible.
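
A minimal sketch of such a T-step look-ahead allocation, with placeholder rate predictions and a proportional-fair-like metric, could look as follows:

# Minimal sketch of T-step look-ahead allocation based on predicted rates.
# The predictions below are placeholder numbers.
predicted_rate = {          # user -> predicted rate per future slot (Mbps)
    "u1": [8, 1, 1, 9, 2],
    "u2": [3, 4, 5, 3, 4],
}
T = 5
served = {u: 1e-6 for u in predicted_rate}

schedule = []
for t in range(T):
    # proportional-fair-like metric computed on predicted (future) rates
    u_star = max(predicted_rate, key=lambda u: predicted_rate[u][t] / served[u])
    schedule.append(u_star)
    served[u_star] += predicted_rate[u_star][t]

print(schedule, served)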

Related Reading:

Dastgheib, Mohammad Amir, et al. "Mobility-aware resource allocation in VLC networks using T-step look-ahead policy." Journal of Lightwave Technology 36.23 (2018): 5358-5370.

If you are interested in this work, please send me an email with a short introduction of yourself along with your CV and grade transcript.

 

Prerequisites

  • Strong Python programming skills
  • Strong foundation in wireless communications
  • Experience with optimization problems

 

Contact

hansini.vijayaraghavan@tum.de

Supervisor:

Modeling and implementing a simulator for multi-domain wireless networks

Description

Emerging applications such as telemedicine put stringent requirements on the underlying communication network. Furthermore, communication is expected to also happen across different domains. As this cannot be fulfilled easily and efficiently today, the next communication network generation (6G) is currently being researched. 6G follows a holistic networking approach, i.e., it looks not only at individual domains but at the entire network, with the focus on providing end-to-end performance guarantees. The overall network consists of several network types that differ in the devices and technologies used (such as molecular networks, quantum networks, satellite networks, campus networks, etc.).

 

In order to develop new concepts and estimate their performance, it is essential to obtain practical results. However, obtaining measurement results from a complete testbed setup for every case is infeasible. In comparison, a simulation allows varying a broad range of parameters, such as the topology or device parameters, and still achieves results in a reasonable amount of time. Another important aspect is analytic modeling.

 

The goal of this Master's thesis is to implement a packet-based simulator, preferably in C/C++, and to evaluate the functionality of multi-domain networks. The achievable performance guarantees should also be provided.
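
For orientation, the sketch below shows the kind of discrete-event core such a simulator needs (here in Python for brevity, whereas the thesis itself would preferably use C/C++); the topology, link delays, and packet are placeholders:

# Minimal discrete-event simulator core: an event queue ordered by time,
# forwarding one packet along a placeholder path with per-link delays.
import heapq, itertools

class Simulator:
    def __init__(self):
        self.events = []                      # (time, seq, callback, args)
        self.seq = itertools.count()          # tie-breaker for equal times
        self.now = 0.0

    def schedule(self, delay, callback, *args):
        heapq.heappush(self.events, (self.now + delay, next(self.seq), callback, args))

    def run(self):
        while self.events:
            self.now, _, callback, args = heapq.heappop(self.events)
            callback(*args)

sim = Simulator()
link_delay = {("A", "B"): 1.5, ("B", "C"): 2.0}   # ms, placeholder

def arrive(pkt, node, path):
    print(f"t={sim.now:.1f} ms: packet {pkt} at {node}")
    if path:
        nxt = path[0]
        sim.schedule(link_delay[(node, nxt)], arrive, pkt, nxt, path[1:])

sim.schedule(0.0, arrive, "p1", "A", ["B", "C"])   # send p1 along A -> B -> C
sim.run()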

 

Supervisor:

Nicolai Kröger, Fidan Mehmeti

Working Student for Testbed on 5G/6G RAN

Description

The results expected from this work are the enhancement of the 5G/6G testbed setup with additional features in the Radio Access Network (RAN) and Core Network (CN). The work is focused on the OpenAirInterface (OAI) [1] platform, which forms the basis of the testbed setup. The expected outcome is to have improvements in wireless resource scheduling, focused on the uplink (UL), power management, and core network function management.

[1] N. Nikaein, M. K. Marina, S. Manickam, A. Dawson, R. Knopp and C. Bonnet, "OpenAirInterface: A flexible platform for 5G research," ACM SIGCOMM Computer Communication Review, vol. 44, no. 5, 2014.

Prerequisites

  • Good C/C++ experience
  • Good Python knowledge
  • RAN and CN architecture understanding is a plus

Contact

alba.jano@tum.de, yash.deshpande@tum.de

Supervisor:

Alba Jano, Yash Deshpande

Dimensioning a German Train IP-Optical Network with Integer Linear Programming

Keywords:
ILP, optical communications
Short Description:
This thesis consists of modeling and solving the dimensioning problem for a train IP-optical network using Integer Linear Programming (ILP).

Description

Background

Network operators are confronted with continuously increasing QoS (Quality of Service) requirements. The need for Internet bandwidth and low latency is becoming more demanding. Thus, network operators must make appropriate network equipment upgrades to serve the user traffic. A German train IP-optical network faces similar challenges. The network operators of a railway company must stand up to the challenge of the rising traffic requested by the passengers. They must develop a mechanism to quantify the newly needed equipment (i.e., dimensioning) while leveraging current advances in Coherent Pluggable Transceiver (CPT) modules.

Problem Description

In the context of this thesis, you are called to model and solve the dimensioning problem for a train IP-Optical network using Integer Linear Programming (ILP). More specifically, the thesis consists of the following steps:

  • literature research on RMSA (Routing, Modulation and Spectrum Assignment) and  CPT market
  • adaptation of a given ILP model to the current scenario
  • parametric study
  • evaluation and visualization


Acquired Knowledge and Skills

In this thesis you will enrich your knowledge of IP-optical networks and ILP, a general methodology to find optimum solutions to linear problems. You will get an insight into core networking, network services, and modern challenges. Finally, you will learn how to run a parametric sweep simulation and evaluate the results.

Prerequisites

Basic knowledge in:

  • Communication Networks Architecture and Design
  • Programming Experience
  • Julia Language

 

Contact

Please send your CV and transcript of records to:

  • cristian.bermudez-serna@tum.de
  • filippos.christou@ikr.uni-stuttgart.de

Supervisor:

Cristian Bermudez Serna - Filippos Christou (Institute of Communication Networks and Computer Engineering (IKR) - Uni. Stuttgart)

Working Student for Network Delay Measurements

Description

Communication networks in the industrial area must fulfill a strict set of requirements. In particular, they must meet strict latency and bandwidth requirements to allow trouble-free operation. Typically, the industry relies on purpose-built solutions that can satisfy these requirements.

Recently, the industry has been moving towards Ethernet-based networks for its use cases. This enables the use of common off-the-shelf hardware to communicate within the network. However, this hardware will still execute industrial applications and therefore has to meet the same strict requirements as the network. In this project, we consider Linux-based hosts that run the industrial applications. We consider different networking hardware and system configurations to see how they affect performance. The goal is to investigate the overhead of the host.

 

Your tasks within the project are :

  • Measure the Host Latency with different NICs
  • Measure the Host Latency with different Hardware Offloads
  • Tune, configure, and measure the Linux Scheduler to improve performance
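
As a rough illustration of the kind of latency measurement involved, the sketch below times UDP round trips over the loopback interface in user space; real measurements would of course target the NICs, hardware offloads, and scheduler configurations under test:

# Minimal host-latency sketch: UDP round-trip times over loopback,
# timestamped in user space. Port and sample count are placeholders.
import socket, time, statistics, threading

ADDR = ("127.0.0.1", 50007)
N = 1000

def echo_server(n):
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(ADDR)
    for _ in range(n):
        data, peer = srv.recvfrom(64)
        srv.sendto(data, peer)
    srv.close()

threading.Thread(target=echo_server, args=(N,), daemon=True).start()
time.sleep(0.1)                                  # let the server bind first

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rtts = []
for _ in range(N):
    t0 = time.perf_counter_ns()
    cli.sendto(b"ping", ADDR)
    cli.recvfrom(64)
    rtts.append((time.perf_counter_ns() - t0) / 1e3)   # microseconds
cli.close()

print(f"median RTT: {statistics.median(rtts):.1f} us, "
      f"p99: {sorted(rtts)[int(0.99 * N)]:.1f} us")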

 

You will gain:

  • Experience with Networking Hardware
  • Experience with Hardware Measurements 
  • Experience with Test Automation

 

Please send a short intro of yourself with your CV and transcript of records to us. We are looking forward to meeting you.

 

Prerequisites

  • Familiarity with Linux Console
  • Python
  • C (not required, but a plus)

Contact

philip.diederich@tum.de

Supervisor:

Adaptive regenerator location selection in EON using reinforcement learning

Keywords:
Optical network planning, 3R regeneration, Reinforcement learning

Description

Elastic Optical Networks (EONs) provide flexibility in bandwidth allocation, leading to improved spectrum utilization. The signal transmission capability is also improved, as lightpaths with better configurations, i.e., higher data rates and better modulation schemes, can be deployed. The reach of the optical signal is limited by the receivers' capability to successfully receive the signal, subject to the received OSNR. This reach can be extended by using regeneration. Existing works use simple heuristics to find the locations for regeneration. The challenge is to update/increase the regeneration locations based on the current network state. The optimal placement of regenerators and assignment of regeneration locations should help improve spectrum efficiency.

In this work, the student is expected to:

  • Design and implement an adaptive regeneration location selection algorithm using reinforcement learning, which considers the current network state, the available spectrum, and the available routing scenarios (a simple heuristic baseline is sketched below).
  • Evaluate the performance of the algorithm on realistic topologies.
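
As a minimal sketch of the simple heuristic baseline mentioned above (node names, link lengths, and the reach value are placeholders), a greedy placement along a single lightpath could look as follows:

# Minimal greedy baseline: walk along a lightpath and place a 3R regenerator
# whenever the accumulated distance would exceed the optical reach.
# Assumes every individual hop is itself within reach; values are placeholders.
def greedy_regenerator_placement(path_links, reach_km):
    """path_links: list of (node_a, node_b, length_km) along the lightpath."""
    regenerators, accumulated = [], 0.0
    for a, b, length in path_links:
        if accumulated + length > reach_km:
            regenerators.append(a)      # regenerate at the start of this hop
            accumulated = 0.0
        accumulated += length
    return regenerators

path = [("A", "B", 400), ("B", "C", 700), ("C", "D", 900), ("D", "E", 300)]
print(greedy_regenerator_placement(path, reach_km=1500))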

Interested students, please send an email with a short introduction of yourself along with your CV and grade transcript.

Prerequisites

  • Strong Python and Java programming skills
  • Experience in ML techniques (including reinforcement learning)
  • Knowledge of optimization problems and classical methods is preferable

Contact

Saquib Amjad (saquib.amjad@tum.de)

Supervisor:

Saquib Amjad

Working Student for the Implementation of a Medical Testbed

Keywords:
Communication networks, programming
Short Description:
Your goal is to implement a network for critical medical applications based on an existing open-access 5G networking framework as well as the adaptation of this network according to the needs of our research.

Description

 

Future medical applications put stringent requirements on the underlying communication networks in terms of highest availability, maximal throughput, minimal latency, etc. Thus, in the context of the 6G-life project, new networking concepts and solutions are being developed.

For the research on using 6G for medical applications, the communication and medical sides have joined forces: while researchers from the MITI group (Minimally invasive Interdisciplinary Therapeutical Intervention), located at the hospital "Rechts der Isar", focus on the requirements of the medical applications and on collecting the needed patient parameters, it is the task of the researchers at LKN to optimize the network in order to satisfy the applications' demands. The goal of this joint research work is to have two working medical testbeds located in the hospital to demonstrate the impact and benefits of future 6G networks and concepts for medical applications.

Your task in this work is to implement the communication network for those testbeds. Based on an existing open-access 5G network implementation, you will implement changes according to the progress of the current research. The results of your work, working 6G medical testbeds, will enable researchers to validate their approaches with real-world measurements and will allow demonstrating future 6G concepts to research, industry, and politics.

In this project, you will gain deep insight into how communication networks, especially the Radio Access Network (RAN), work and how different aspects are implemented. Additionally, you will understand the current limitations and weaknesses as well as concepts for improvement. You will also get some insight into medical topics if interested. As there are many open research questions in such a broad topic, you additionally have the possibility to write your thesis or complete an internship with us.

Prerequisites

 

  • Most important: Motivation and willingness to learn unknown things.
  • C/C++ and knowledge about how other programming languages work (Python, etc.) and/or the willingness to familiarize yourself with such languages.
  • Preferred: knowledge about communication networks (especially the RAN), 5G concepts, the P4 language, SDN, Linux.
  • Initiative to bring in own ideas and solutions.
  • Ability to work with various partners (teamwork ability).

Please note: It is not necessary to know about every topic mentioned above; rather, it is important to be willing to familiarize yourself with them.

Contact

Supervisor:

Nicolai Kröger

P4Update Improvement

Description

The paper P4Update [1] describes a data plane mechanism for network routing updates. The goal is to ensure consistency and speed at the same time. It uses distance labeling, path segmentation, coordinated verification, and local scheduling to achieve this goal. This work requires the student to know the programmable data plane with P4, how to build a simulated network, and basic performance evaluation. The next task is then to improve the existing skeleton and perform proactive and reactive measurements.

References:
[1] P4Update: Fast and Locally Verifiable Consistent Network Updates in the P4 Data Plane

Prerequisites

SDN, P4, Python, Linux

 

 

Contact

Zikai Zhou

Supervisor:

Zikai Zhou

Design and evaluation of conditional device paging DRX

Keywords:
5G, IIoT, energy efficiency, DRX

Description

Energy Efficiency (EE) has become a key performance indicator for sustainable 5G networks due to the growing number of next-generation mobile devices connected to the network and applications that require preserving energy resources. The relevance of EE increases for Industrial Internet of Things (IIoT) devices, which run on limited energy supplied by batteries that are not replaced over the device lifetime.

Therefore, the development of methods to increase energy efficiency on the device side has received the attention of academia and industry research. Reducing the continuous monitoring of the PDCCH is considered a key factor in increasing device energy efficiency, especially considering the limited resources.

In this thesis, the student shall focus on the implementation and evaluation of a conditional device paging DRX mechanism to reduce PDCCH monitoring. The mechanism will be evaluated in a 5G-based simulator.
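
To give an idea of the trade-off the mechanism targets, the sketch below computes the average power of a simple DRX cycle from its on-duration and cycle length; the power values are illustrative placeholders, not 3GPP figures:

# Minimal DRX energy sketch: average power as a function of the duty cycle.
P_PDCCH_MONITOR_MW = 100.0   # power while monitoring PDCCH (placeholder)
P_SLEEP_MW = 1.0             # power while sleeping (placeholder)

def avg_power_mw(on_duration_ms, drx_cycle_ms):
    duty = on_duration_ms / drx_cycle_ms
    return duty * P_PDCCH_MONITOR_MW + (1 - duty) * P_SLEEP_MW

for cycle in (40, 160, 1280, 2560):          # ms
    print(f"cycle {cycle:>5} ms -> {avg_power_mw(4, cycle):6.2f} mW")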

Prerequisites

  • Good knowledge of Python and Matlab.
  • Knowledge of mobile networks.

Supervisor:

Alba Jano

Ongoing Theses (already assigned)

Bachelor's Theses

Learning to proactively allocate Wireless Resources

Keywords:
LiFi, Reinforcement Learning

Description

The goal of the thesis would be to build an Anticipatory or Proactive Wireless Resource Allocation Framework to optimize Multi-path networks.

The approach is to develop an optimization problem that allocates network resources to users by looking into a window of time in the future and to solve this problem using Reinforcement Learning.

By knowing the channel quality of the users in the future, a better, more optimal allocation of resources is made possible.

Related Reading:

Chen, Weixi, et al. "Proactive 3C Resource Allocation for Wireless Virtual Reality Using Deep Reinforcement Learning." 2021 IEEE Global Communications Conference (GLOBECOM). IEEE, 2021

If you are interested in this work, please send me an email with a short introduction of yourself along with your CV and grade transcript.

 

Prerequisites

  • Strong Python programming skills
  • Strong foundation in wireless communications
  • Experience with Reinforcement Learning

 

Contact

hansini.vijayaraghavan@tum.de

Supervisor:

Multimodal and Redundant Peer to Peer communication for Robots in Noisy Environments

Keywords:
Robot Communication, ROS Communication

Description

In the case at hand, the focus lies on robots that need to exchange actuator commands in real time to be able to perform a transportation task collaboratively. The communication layer needs to exchange up to 1 Mbit between the involved mobile robots with latencies within the real-time boundaries of the task. Jitter is especially harmful whenever control loops are involved and is thus the main concern of this work. Secondly, safety and robustness concerns need to be addressed, as mobile robots operate in areas shared with humans and any harm to humans needs to be reliably prevented.

Prerequisites

C, C++, experience with ROS.

Supervisor:

Yash Deshpande - Markus Weber (Filics GmbH)

Modelling and evaluation of the availability of an optical switch

Keywords:
optical switch, availability
Short Description:
This work requires identifying the primary subcomponents of an optical switch and modeling the switch based on the failure rates of these subcomponents.

Description

Optical switches are an integral part of core networks. In this work, the student must model and evaluate the availability of an optical switch. The tasks are as follows:

  • Identify subcomponents of the optical switch.
  • Find the failure rates of these subcomponents based on data sheets.
  • Model the subcomponents as individual Stochastic Activity Networks (using Mobius).
  • Find the availability of the subcomponent and the switch.

The student is expected to use the Mobius software tool to model the Stochastic Activity Networks. It is a simple tool that can be learned within a week before the internship.
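
For orientation, the sketch below shows how subcomponent availabilities derived from MTBF/MTTR values combine in series and in parallel; the numbers are placeholders, and the actual evaluation will use the SAN model in Mobius instead of these closed-form expressions:

# Minimal availability sketch: A = MTBF / (MTBF + MTTR), combined in series
# (all needed) and in parallel (redundant). All values are placeholders.
def availability(mtbf_h, mttr_h):
    return mtbf_h / (mtbf_h + mttr_h)

def series(avails):            # all subcomponents needed
    p = 1.0
    for a in avails:
        p *= a
    return p

def parallel(avails):          # redundant subcomponents, one suffices
    p = 1.0
    for a in avails:
        p *= (1.0 - a)
    return 1.0 - p

a_switch_fabric = availability(mtbf_h=200_000, mttr_h=6)
a_controller    = availability(mtbf_h=150_000, mttr_h=4)
a_power_supply  = availability(mtbf_h=100_000, mttr_h=2)

# e.g., redundant power supplies in series with fabric and controller
a_total = series([a_switch_fabric, a_controller, parallel([a_power_supply] * 2)])
print(f"switch availability: {a_total:.6f}")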

Prerequisites

Knowledge of any of the following is advantageous.

  • Communication Network Reliability
  • Analysis, Modelling, and Simulation of Networks
  • Optical Networks

Contact

shakthivelu.janardhanan@tum.de

Supervisor:

Shakthivelu Janardhanan

Learning to forward based on the euclidean embedding of graphs

Description

Routing on unstructured graphs is a problem in large data center topologies. Unstructured graphs do not allow an efficient encoding of the Forwarding Information Base with Longest Prefix Matching. An open problem is thus an effective way to forward packets in such networks.

A possible solution could be the spatial navigation of the unstructured graphs with Deep Neural Networks (DNNs). Recent work [1-4] shows that DNNs can learn to navigate in continuous Euclidean spaces. The goal of this thesis is to investigate whether DNNs are also capable of learning forwarding behavior on the Euclidean embedding of graphs. In the Euclidean embedding, nodes are mapped to two-dimensional points, i.e., coordinates. The coordinates can be encoded as part of the IP addresses, e.g., using half-precision floating-point values. The task of the DNN is to predict the outgoing ports a switch should forward a packet on such that the packet travels on one of the shortest paths between the switch and the destination. The DNN makes the prediction using three inputs: the coordinates of the switch, the headings of the ports of the switch, and the coordinates of the destination.

The thesis should evaluate whether it is possible to learn the shortest-path forwarding behavior on small (tens of nodes) and arbitrarily structured graphs based on the graphs' embedding in a two-dimensional Euclidean space. If successful, the thesis should further investigate whether the forwarding behavior generalizes to unseen graphs, and whether it scales to large graphs with hundreds of nodes. For all tasks, the NetworkX library [5] provides the required functionality to generate random graphs and embed them in a Euclidean space using a force-directed algorithm (see the sketch below).
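
A minimal sketch of the data-generation step with NetworkX (graph size, layout parameters, and seeds are placeholders) could look as follows:

# Minimal sketch: embed a random connected graph in 2D and, for every
# (switch, destination) pair, label the neighbor ports on a shortest path.
import networkx as nx
import numpy as np

G = nx.connected_watts_strogatz_graph(30, 4, 0.3, seed=1)
pos = nx.spring_layout(G, dim=2, seed=1)        # force-directed Euclidean embedding

samples = []
for dst in G.nodes:
    dist = nx.single_source_shortest_path_length(G, dst)
    for node in G.nodes:
        if node == dst:
            continue
        nbrs = list(G.neighbors(node))
        headings = [np.asarray(pos[n]) - np.asarray(pos[node]) for n in nbrs]
        # target: indices of ports whose neighbor is one hop closer to dst
        targets = [i for i, n in enumerate(nbrs) if dist[n] == dist[node] - 1]
        samples.append((np.asarray(pos[node]), headings, np.asarray(pos[dst]), targets))

print(len(samples), "training samples, example targets:", samples[0][3])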

 

References: 

[1] C. J. Cueva and X.-X. Wei, "Emergence of grid-like representations by training recurrent neural networks to perform spatial localization," 2018. [Online]. Available: https://openreview.net/forum?id=B17JTOe0-

[2] A. Banino et al., “Vector-based navigation using grid-like representations in artificial agents,” Nature, vol. 557, no. 7705, pp. 429–433, May 2018, doi: 10.1038/s41586-018-0102-6. 

[3] S. Brahmbhatt and J. Hays, “DeepNav: Learning to Navigate Large Cities,” CoRR, vol. abs/1701.09135, 2017, [Online]. Available: http://arxiv.org/abs/1701.09135 

[4] P. Mirowski et al., “Learning to Navigate in Cities Without a Map,” ArXiv e-prints, Mar. 2018. 

[5] Aric A. Hagberg, Daniel A. Schult and Pieter J. Swart, “Exploring network structure, dynamics, and function using NetworkX”, in Proceedings of the 7th Python in Science Conference (SciPy2008), Gäel Varoquaux, Travis Vaught, and Jarrod Millman (Eds), (Pasadena, CA USA), pp. 11–15, Aug 2008

Supervisor:

Data plane performance measurements

Keywords:
P4, SDN
Short Description:
This work consists of performing measurements for a given P4 code on different devices.

Description

Software-Defined Networking (SDN) is a network paradigm in which control and data planes are decoupled. The control plane consists of a controller, which manages network functionality and can be deployed on one or multiple servers. The data plane consists of forwarding devices, which are instructed by the controller on how to forward traffic.

P4 is a domain-specific programming language that can be used to define the functionality of forwarding devices such as virtual or hardware switches and SmartNICs.

This work consists of performing measurements for a given P4 code on different devices. For that, a small P4-enabled virtual network will be used to perform initial measurements. Later, data will also be collected from hardware devices such as switches and SmartNICs. The measurements should be depicted in a GUI for subsequent analysis.

Prerequisites

Basic knowledge of the following:

  • Linux
  • Networking/SDN
  • Python/C
  • Web programming (GUI).

Please send your CV and transcript of records.

Contact

Supervisor:

Cristian Bermudez Serna

Modeling and comparing different aircraft cabin wireless channel models

Keywords:
RF, 3D-Model, ray-tracing, Aircraft Cabin

Description

Wireless communication is heavily dependent on the channel it operates on. Therefore, to simulate wireless transmissions, accurate wireless channel models are needed. The number of channel models for aircraft cabins is quite limited and outdated because of the new materials used in aircraft and the new frequencies available for wireless transmission. Therefore, we want to develop a 3D model of a two-aisle aircraft cabin in Blender to simulate the propagation of electromagnetic waves. The goal of the thesis is to generate a wireless channel description of this model using ray tracing and compare it to an existing single-aisle model.

Prerequisites

  • Review of related literature
  • Convert the existing single-aisle 3D model to a double-aisle model
  • Derive a channel description
  • Compare the channel with an existing single-aisle model
  • Evaluate the results

Supervisor:

Measuring the Throughput of quantized neural networks on P4 devices

Description

Implement a quantized neural network in P4 and evaluate the throughput of feed-forward networks and networks with attention mechanisms on P4 hardware.

Supervisor:

An SCTP Load Balancer for Kubernetes to aid RAN-Core Communication

Keywords:
5G, SCTP, Kubernetes, RAN, 5G Core, gNB, AMF

Description

Cloud-native deployments of the 5G Core network are gaining increasing interest, and many providers are exploring these options. One of the key technologies that will be used to deploy these networks is Kubernetes (k8s).

In 5G, the NG Application Protocol (NGAP) is used for the gNB-AMF (RAN-Core) communication. NGAP uses SCTP as a transport layer protocol. In order to load balance traffic coming from the gNB towards a resilient cluster of AMF instances, an L4 load balancer needs to be deployed in the Kubernetes cluster.

The goal of this project is to develop an SCTP load balancer to be used in a 5G Core network to aid the communication between the RAN and the Core.
The project will be developed using the Go language (https://golang.org/).

Prerequisites

- General knowledge about Mobile Networks (RAN & Core).
- Good knowledge of cloud orchestration tools like Kubernetes.
- Strong programming skills. Knowledge of Go (https://golang.org/) is a plus.

Contact

endri.goshi@tum.de

Supervisor:

Endri Goshi

Development of an East/West API for SD-RAN control communication

Description

Software-Defined Radio Access Network (SD-RAN) is receiving a lot of attention in 5G networks, since it offers means for a more flexible and programmable mobile network architecture.

The heart of the SD-RAN architecture is the so-called SD-RAN controller. Currently, initial prototypes have been developed and used in commercial and academic testbeds. However, most of the solutions only contain a single SD-RAN controller. A single controller also becomes a single point of failure for the system, not only due to potential controller failures but also due to the high load induced by the devices in the data plane.

To this end, a multi-controller control plane often becomes a reasonable choice. However, a multi-controller control plane renders the communication among the controllers more challenging, since they often need to exchange control information with each other to keep an up-to-date network state. Unfortunately, there is currently no protocol available for such communication.

The aim of this work is the development and implementation of an East/West API for SD-RAN controller communication in line with 5G standardization. The protocol should enable the exchange of information among the SD-RAN controllers regarding UEs, BSs, and wireless channel state, and allow for control plane migration among controllers.
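
As a purely illustrative sketch of what such an exchange could look like at the transport level, the code below pushes a JSON state update from one controller to a peer over TCP; the message fields are hypothetical and not taken from any standard:

# Minimal, hypothetical East/West exchange sketch: one controller sends a
# JSON state update (UE/BS/channel info) to a peer over TCP.
import json, socket, threading, time

PEER_ADDR = ("127.0.0.1", 9400)

def peer_controller():
    srv = socket.create_server(PEER_ADDR)
    conn, _ = srv.accept()
    with conn:
        update = json.loads(conn.recv(4096).decode())
        print("peer controller received:", update)
    srv.close()

threading.Thread(target=peer_controller, daemon=True).start()
time.sleep(0.2)                       # give the peer time to start listening

state_update = {                      # hypothetical East/West payload
    "type": "STATE_UPDATE",
    "bs_id": "gnb-17",
    "ues": [{"ue_id": 42, "cqi": 11, "serving_bs": "gnb-17"}],
}
with socket.create_connection(PEER_ADDR) as sock:
    sock.sendall(json.dumps(state_update).encode())
time.sleep(0.2)                       # let the peer print before the script exits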

Prerequisites

  • Experience with programming languages Python/C++.
  • Experience with socket programming.
  • Knowledge about SDN is a must.
  • Knowledge about 4G/5G networks is a plus.

Supervisor:

Traffic classification using graph attention neural networks

Description

...

Supervisor:

Adaptation of a Hardware-in-the-Loop Simulation Setup

Description

...

Supervisor:

Investigation of a Backward-Compatible Data Rate Increase for Proprietary Fire Alarm Bus Technology

Description

The topic of this Bachelor's thesis is the development, optimization, and evaluation of the proprietary fire alarm bus system LSNi1 of Bosch Sicherheitssysteme GmbH in Grasbrunn.

Fire alarm systems are an established and necessary part of building technology and are to be used for new networked IoT services. Due to the ever-increasing requirements of these services, the data rate over the fire alarm bus is to be increased. Currently, the bus system is used for the comparatively data-poor alarm polling of the network elements and their responses. An important point here is not to degrade the system characteristics of the fire alarm system. Such networks are declared safety systems and must therefore be extremely reliable in terms of failure resilience and error detection.

The task is to find a new transmission technique with a higher data rate on the physical layer of the bus and to implement it with the help of a prototype. The optimization of the prototype is to be worked out through calculation and experimentation with system elements. Subsequently, various test setups are tested in the company's own laboratory. The recorded data are evaluated with a Matlab evaluation program developed for this purpose, in order to make clear statements about the functionality of the system. A particular challenge in systems of this kind is the long cable lengths and the large number of system elements on the bus. The evaluated results are to be used to verify to what extent the data rate on the bus can be increased.

Supervisor:

Wolfgang Kellerer - Dr.Tjark Windisch (Bosch Sicherheitssysteme)

Master's Theses

AP Sum Power Minimization of a cfmMIMO Network subject to per-UE Minimum Rate Constraint with ML

Keywords:
cell-free massive MIMO, Machine Learning, Optimization, 6G RAN

Description

Cell-free massive MIMO (cfmMIMO) is a promising system that removes the restriction of traditional MIMO networks that an access point (AP) can only serve user equipment (UE) within its immediate vicinity, i.e., the cell. Instead, a large number of multi-antenna APs distributed over a service area is used to coherently serve UEs in the same time/frequency resource [MLYN16]. [NAYL+17] has shown that cell-free massive MIMO has significantly better performance than conventional small-cell systems, which limit each UE to be served by one AP only. Despite the increased spectral efficiency, some have questioned the energy efficiency of cfmMIMO systems, which might affect their overall practicability, especially as such systems require a large number of backhaul links that can increase the total power consumption to a level that overwhelms the spectral efficiency gains [NTDM+18].

In light of the increasingly active discussions around cfmMIMO energy efficiency, we consider such a system in the uplink configuration for simplicity and assume each AP has a fixed power consumption. We propose to minimize the sum power consumption of all APs with respect to active APs, UE transmit power, and beamformers, subject to UE maximum transmit power constraint and minimum per-UE achieved ergodic rate. For a detailed mathematical description of the optimization problem, please see the attached PDF file.

The rationale behind minimizing AP power consumption instead of UE transmitted/radiated power is that the per-AP power consumption is on the order of kilowatts, while UEs transmit on the order of milliwatts. The power saving potential is simply much larger when optimizing the former.

The student will attempt to solve this combinatorial problem using advanced machine learning methods, specifically looking into Bayesian optimization due to the combinatorial nature of the problem hindering the application of gradient-based methods. Additionally, to preserve scalability and avoid imposing a Gaussian prior onto the objective function, the usage of deep ensembles [LPB17] instead of a Gaussian process could be more appropriate.

Emphasis will be placed on real-time optimization, i.e. to arrive at a decision configuration quickly enough to adapt to changing network behavior. There will also be an option to model the AP's power consumption more realistically and introduce more complexity to the cost function. After optimization, a detailed analysis will be carried out on the results, such as a comparison of the performance of the proposed algorithm on different channel modelling conditions to evaluate the practicability of the algorithm in real-world scenarios.

[MLYN16]  T. L. Marzetta, E. G. Larsson, H. Yang, and H. Q. Ngo, Fundamentals of Massive MIMO. Cambridge, U.K.: Cambridge Univ. Press, 2016.

[NAYL+17] H. Q. Ngo, A. Ashikhmin, H. Yang, E. G. Larsson, and T. L. Marzetta, “Cell-free massive MIMO versus small cells,” IEEE Trans. Wireless Commun., vol. 16, no. 3, pp. 1834– 1850, Mar. 2017.

[NTDM+18] H. Q. Ngo, L.-N. Tran, T. Q. Duong, M. Matthaiou, and E. G. Larsson, "On the Total Energy Efficiency of Cell-Free Massive MIMO," IEEE Transactions on Green Communications and Networking, vol. 2, no. 1, pp. 25–39, March 2018, doi: 10.1109/TGCN.2017.2770215.

[LPB17] B. Lakshminarayanan, A. Pritzel, and C. Blundell, "Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles," arXiv, Nov. 3, 2017. [Online]. Available: http://arxiv.org/abs/1612.01474. [Accessed: March 24, 2023]

 

Supervisor:

Graph-network-based RSSI fingerprinting for localization

Keywords:
Graph Neural Network, Machine Learning, Industrial Indoor positioning
Short Description:
Indoor positioning is a crucial feature for diverse industrial use cases, as it allows for improved monitoring and task automation, thus enhancing production efficiency. Nonetheless, inexpensive positioning deployments based on signal strength measurements struggle to provide good localization accuracy, as these measurements are usually highly variant.

Description

Indoor positioning is a crucial feature for diverse industrial use cases, as it allows for improved monitoring and task automation, thus enhancing production efficiency. Nonetheless, inexpensive positioning deployments based on signal strength measurements struggle to provide good localization accuracy, as these measurements are usually highly variant.

Fingerprinting based on received signal strength indicators (RSSI) has been proposed as a promising approach to mitigate the high variability and lead to relatively high positioning accuracy while keeping deployment costs low. Nonetheless, using RSSI fingerprinting for accurate positioning entails mapping large and noisy vectors of RSSI measurements to specific points in space, which is a challenging task.

Several techniques have been evaluated in the state of the art to perform this mapping: distance estimation followed by trilateration, k-nearest neighbors averaging, radiomap interpolation and error minimization, and, in recent years, also neural networks. By training neural networks with sufficient amounts of accurate data, we can abstract away the complexity of the mapping and still achieve accurate positioning.

Nonetheless, as in other applications, it has been observed that neural networks often struggle at generalizing data beyond those close to the training set, and their complete lack of knowledge about the underlying physical phenomena may result in obviously wrong results that are difficult to prevent with simply more training.

Motivated by these and other related facts, in recent years the concept of graph-network-based learning has emerged. As opposed to neural networks, graph networks use graphs to represent all inputs and outputs of the learning process and apply modified versions of common training approaches to convert input graphs into output graphs. The intention behind this procedure is that the graphs themselves can be defined in such a way as to model our a priori knowledge about the inputs and outputs.

In this thesis, we will investigate the application of graph-network-based machine learning to RSSI fingerprinting for localization, with the intention of incorporating radio propagation and environment models into the learning process. The results will be compared against other state-of-the-art ML approaches to conclude whether graph networks can help in producing more accurate positioning from the same training data.
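
For comparison, a minimal sketch of the classical k-nearest-neighbor fingerprinting baseline (with a synthetic, placeholder radiomap) is shown below:

# Minimal kNN fingerprinting sketch: offline radiomap of RSSI fingerprints,
# online localization by averaging the k closest reference positions.
# The propagation model and all values are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
positions = rng.uniform(0, 50, size=(200, 2))              # reference points (m)
anchors = np.array([[0, 0], [50, 0], [0, 50], [50, 50]])   # AP locations (m)

def rssi(p):   # simple log-distance model with shadowing (placeholder)
    d = np.linalg.norm(anchors - p, axis=1) + 1e-3
    return -40 - 30 * np.log10(d) + rng.normal(0, 2, size=len(anchors))

radiomap = np.array([rssi(p) for p in positions])          # offline fingerprints

def knn_localize(measurement, k=5):
    dist = np.linalg.norm(radiomap - measurement, axis=1)  # RSSI-space distance
    nearest = np.argsort(dist)[:k]
    return positions[nearest].mean(axis=0)                 # average their positions

true_pos = np.array([20.0, 35.0])
estimate = knn_localize(rssi(true_pos))
print("estimate:", estimate, "error:", np.linalg.norm(estimate - true_pos), "m")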

Prerequisites

  • Python/C++/Matlab
  • Machine Learning basics
  • Wireless communication basics
  • Mathematical and analytical skills

Contact

Supervisor:

Cristian Bermudez Serna - Dr. Alberto Martínez Alba (Siemens AG)

Development and Implementation of a Self-organizing Network for Federated Learning

Keywords:
Federated Learning, Gossip Learning, constrained devices
Short Description:
To build decentralized, federated learning on embedded systems

Description

In this thesis, we want to build decentralized FL on embedded systems. Unlike traditional FL, the model is only shared between the local microcontrollers, without a central server. Each microcontroller generates its global model by communicating with its neighbor nodes in a self-organized network. The purpose is to evaluate the training performance while enhancing communication efficiency to cope with the limitations of this exclusive microcontroller platform.
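
As a minimal sketch of the decentralized averaging idea (topology and "models" below are placeholders), pairwise gossip averaging between neighbors could look as follows:

# Minimal gossip-averaging sketch: nodes repeatedly average their model
# parameters with a random neighbor, without any central server.
import random
import numpy as np

random.seed(0)
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}   # self-organized network
models = {n: np.random.default_rng(n).normal(size=4) for n in neighbors}  # local weights

for _ in range(50):
    node = random.choice(list(neighbors))
    peer = random.choice(neighbors[node])
    avg = (models[node] + models[peer]) / 2.0        # pairwise gossip averaging
    models[node], models[peer] = avg, avg.copy()

print({n: np.round(m, 3) for n, m in models.items()})   # models drift towards consensus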

Prerequisites

Wireless communication, experience in Machine Learning, Python. 

Supervisor:

Navidreza Asadi, Yash Deshpande - Lars Wulfert (Fraunhofer IMS)

Sidelink Communication in Industry 4.0 Scenarios

Keywords:
5G, Sidelink, Industrial Communication
Short Description:
A conceptual model with simulations where 5G sidelink is evaluated for robot-robot collaboration

Description

The main idea of the Master Thesis is to study and analyze the standardized resource allocation procedure for the scenario of mode 2 in-coverage for a group of automated guided vehicles cooperatively carrying a piece of factory equipment. Once the analysis is done, another goal is to propose enhancements to the existing autonomous resource allocation procedure to meet the stringent QoS requirements of Industry 4.0 use cases. The simulator can be developed in MATLAB or Python.

Supervisor:

Yash Deshpande - Shubhangi Bhadauria (Siemens AG)

Interoperability of Media Redundancy Protocol and Frame Replication and Elimination for Reliability

Keywords:
Time Sensitive Networking, Industrial Networks
Short Description:
To combine FRER and MRP in a TSN Testbed.

Description

Redundancy for reliability is an important aspect of deterministic networking. A legacy protocol called Media Redundancy protocol and a newer proposal called Frame Replication and Elimination for Reliability are good candidates in this regard.

This thesis will give an overview of all current and relevant redundancy protocols. It will analyze MRP and FRER in depth to find what these protocols need to enable interoperability. Furthermore, a novel concept of network components to solve the interoperability problems regarding redundancy for Industrial Ethernet is presented. It is expected that a component will be configured which enables devices implementing the two different redundancy protocols to communicate. Tests are performed on a testbed to prove the applicability of the theoretical concept. An evaluation of different topologies will also be made. Finally, the metrics of MRP, FRER, and MRP-FRER networks are compared to identify the performance differences that may arise and to determine which configuration still fulfills the real-time requirements, adds benefits for the user, and can therefore be recommended for Ethernet-based communication in automation and control systems.

Prerequisites

C and C++, Knowledge of Routing and Forwarding, Ethernet systems. Experience with TSN is a plus.

Supervisor:

Yash Deshpande, Philip Diederich - Dr. Andreas Zirkler (Siemens AG)

Two Applications Demonstrating the Advantage of Real-Time RAN Intelligent Control

Keywords:
Digital Twin, Latency, RAN Control

Description

The Radio Access Network (RAN) architecture for cellular networks is developing towards exposing an increasing number of data collection and control interfaces. The applications I will develop in my master’s thesis are hosted on a platform named EdgeRIC, which operates as a real-time RAN controller sitting next to the Distributed Units (DU) inside a 5G cellular network architecture. The proximity of the EdgeRIC platform to the transmitting and receiving base stations allows for real-time low latencies when processing information from the RAN. My applications are examples of how RAN control can benefit from this real-time information.

1. Accelerated Digital Twin of the Network

This application will host an emulated copy of a real-world physical RAN. The copy will be an instance of a srsRAN network using computer networking transport channels instead of radio links. The goal is to accelerate the events inside the network copy to more frequently generate data on metrics such as channel quality. When this can be achieved, a MAC scheduling policy using reinforcement learning, for example, could train on this feedback faster than in the real world and the trained policy could be applied to real-world scheduling decisions more quickly.

2. Mitigation of Interference from Neighboring Cells

This application tackles the problem of interference in frequency from neighboring cells by applying a beamformed null signal in the direction of the interfering base station. This should improve throughput for users attached to the original base station. The EdgeRIC platform ensures low latency for computing the null signal.

Supervisor:

Learning to Proactively allocate Communication Resources

Keywords:
LiFi, Multipath, Reinforcement Learning, Task Offloading

Description

The goal of the thesis would be to build an Anticipatory or Proactive Wireless Resource Allocation Framework to optimize Multi-hop, Multi-path networks.

The approach is to develop an optimization problem to allocate network resources to users by looking into a window of time in the future and to solve this problem using Reinforcement Learning.

By knowing the channel quality of the users in the future, a better, more optimal allocation of resources is made possible.
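
A minimal sketch of the look-ahead idea, loosely following the T-step look-ahead policy in the reference below: each user's demand is scheduled in the future slot where its predicted rate is highest. Per-slot capacity coupling is ignored, and all numbers are placeholders.

    import numpy as np

    def lookahead_allocation(predicted_rates, demand, T=3):
        # predicted_rates: (U, T) predicted achievable rates per user and future slot
        # demand:          (U,)   bits each user wants to transmit
        chosen = np.argmax(predicted_rates[:, :T], axis=1)                        # best future slot per user
        served = np.minimum(demand, predicted_rates[np.arange(len(demand)), chosen])
        return chosen, served

    rates = np.array([[10.0, 30.0, 20.0],    # user 0 peaks in slot 1
                      [25.0,  5.0, 15.0]])   # user 1 peaks in slot 0
    print(lookahead_allocation(rates, demand=np.array([20.0, 20.0])))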

Related Reading:

Dastgheib, Mohammad Amir, et al. "Mobility-aware resource allocation in VLC networks using T-step look-ahead policy." Journal of Lightwave Technology 36.23 (2018): 5358-5370.

If you are interested in this work, please send me an email with a short introduction of yourself along with your CV and grade transcript.

 

Prerequisites

  • Strong Python programming skills
  • Strong foundation on wireless communications
  • Experience with Reinforcement Learning

 

Contact

hansini.vijayaraghavan@tum.de

Supervisor:

Towards a Digital Twin for Cloud-native Mobile Networks

Description

Cloud computing and microservice-based architectures have empowered businesses to develop new highly reliable applications that can adapt to variable workloads. In the context of 5G telco applications, both the research community and the industry have been exploring methods of using cluster orchestrators, such as Kubernetes (K8s), in mobile network deployments. More specifically, Multi-access Edge Computing and Fog computing for 5G networks represent use-cases where the principles of cloud computing can be applied, but meeting the requirements (especially regarding latency) proves challenging.

With the increased usage of cloud deployments for Radio Access Networks, cluster configuration has gained importance, as an optimized configuration translates into higher performance, increased agility and better usage of the resources. Empirical, experience-based human heuristics can improve the cluster configuration; however, they require advanced knowledge about the deployment and the direct intervention of the cluster operator.

The research community is currently exploring the steps towards an automated, data-driven cluster configuration: the behavior of the cluster is learned with Machine Learning (ML), the cluster behavior is simulated with different configurations and an optimizer chooses the best configuration. However, an optimized configuration is only applicable to the real-life cluster if the simulation of the cluster behavior is highly accurate.

The goal of this Master's Thesis is to determine the net value added by building a Digital Twin of a K8s cluster, comparing ML-based models of three network functions against traditional hand-crafted models in the simulation. First, it implements new network functions such as the pod scheduler and the load balancer in an existing simulation framework for K8s clusters.

Second, the thesis compares classical, hand-crafted models for the simulation with data-driven methods. Namely, after implementing load balancing and pod scheduling in the simulator in the classical way, it also integrates already trained ML model equivalents of these functions.
In the end, the performance metrics and accuracy of both approaches are compared.

Supervisor:

Johannes Zerwas, Patrick Krämer, Navidreza Asadi

Characterizing Service Disruptions in a Regional Content Provider Network

Description

Bayerischer Rundfunk (BR) operates a network to deliver content via television, radio and the internet to its users. This requires a highly heterogeneous network. The network monitoring solution for the BR-network collects alert data from involved devices and stores it in a central database. Currently, human operators make network management decisions based on a manual review of this data. This especially includes root cause identification in case of network failures. Such a human-centric process can be tedious and does not scale well with increasing network complexity. In this thesis, the student should perform a thorough analysis of the described data and characterize service disruptions to enable automated analysis in the future.

Supervisor:

Maximilian Stephan

MVL: C2C protocol

Description

MVL: C2C protocol

Supervisor:

ML based optimization of optical transport and network configurations

Description

In recent years, innovations in optical transport technology, such as probabilistic constellation shaping or multi-band networks, have led to a significant increase in the complexity of optical network planning and optimization. The use of simple planning heuristics leads to a highly suboptimal utilization of network resources and potentially to underprovisioning of traffic demands. ML-based optimization techniques are promising due to fast computation and approximation of optimal solutions for complex problems. However, currently proposed ML models such as DeepRMSA do not yet support different lightpath configurations and flex-grid scenarios.

This work will extend DeepRMSA to more generalized scenarios and compare it to heuristic-based planning. Furthermore, this work will explore ML-based solutions to lightpath configuration optimization for the case of a large number of possible modulations, enabled by probabilistic constellation shaping.

Supervisor:

Jasper Konstantin Müller - Jasper Müller (ADVA)

Context-aware resource allocation and offloading decisions in a MEC-enabled 6G network

Description

Mobile Edge Computing (MEC) enabled 6G networks support low-latency applications running on energy-constrained and computationally limited devices, especially IoT devices. Using the task offloading concept, the devices offload incoming tasks fully or partially to the MEC, depending on the communication and computation resource availability on the device and network side.

6G networks are oriented towards Digital Twins (DT); therefore, the resource allocation and offloading decisions are enhanced with context-awareness of the devices, environment, and network. The device context-awareness consists of battery state, power consumption, CPU load, and traffic type. Further, the environmental context-awareness includes the position of the network components, the mobility patterns, and the predicted routes. Even though most IoT devices are considered static, there are NR-Light use cases where devices are mobile.

 

In this thesis, the student will focus on developing and testing a context-aware communication and computing resource allocation mechanism, with the aim of decreasing individual devices' energy consumption and reducing processing latency.

Prerequisites

  • Good knowledge of Python and Matlab programming.
  • Good mathematical background.
  • Knowledge of mobile networks.

Contact

alba.jano@tum.de

 

Supervisor:

Alba Jano

Network Intrusion Detection using pre-trained tabular representation models

Keywords:
Machine learning, intrusion detection
Short Description:
Detecting network intrusions using tabular representations and pre-trained machine learning models.

Description

Network Intrusion Detection (NID) is a common topic in cybersecurity. However, it is not trivial to find a solution for today's complicated network environments, and a complex system is often needed to process the enormous volumes of data stored in databases. This thesis proposes to use Deep Learning models to tackle the NID problem in a pre-train/fine-tune manner. As the new paradigm of transfer learning, pre-training followed by fine-tuning has achieved huge success in many areas such as vision and NLP. We aim to study whether those trending models still perform well on large-scale structured data such as network security logs. It is plausible to leverage the strong learning ability of DL models to learn table representations and separate anomalous from benign records based on the learned information.
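
The sketch below illustrates the pre-train/fine-tune workflow on tabular data in PyTorch: an encoder is first pre-trained with a reconstruction objective on unlabeled records (a stand-in for large pre-trained tabular models), then fine-tuned with a classification head for benign vs. anomalous records. All dimensions and data are random placeholders.

    import torch
    import torch.nn as nn

    # Pre-training: learn a tabular representation by reconstructing unlabeled records
    encoder = nn.Sequential(nn.Linear(40, 64), nn.ReLU(), nn.Linear(64, 32))
    decoder = nn.Linear(32, 40)
    pre_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    unlabeled = torch.randn(1024, 40)                 # unlabeled flow/log features
    for _ in range(10):
        loss = nn.functional.mse_loss(decoder(encoder(unlabeled)), unlabeled)
        pre_opt.zero_grad(); loss.backward(); pre_opt.step()

    # Fine-tuning: attach a classification head and train on a small labeled set
    head = nn.Linear(32, 2)
    ft_opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
    labeled, labels = torch.randn(256, 40), torch.randint(0, 2, (256,))   # 0 = benign, 1 = intrusion
    for _ in range(10):
        loss = nn.functional.cross_entropy(head(encoder(labeled)), labels)
        ft_opt.zero_grad(); loss.backward(); ft_opt.step()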

Prerequisites

  • Machine learning knowledge
  • Programming skills (Python, GIT)
  • Computer networking knowledge

Supervisor:

Cristian Bermudez Serna - Dr. Haojin Yang (HPI)

Cost Optimized Optical Network Migration Strategies for Long-Haul Optical Networks

Keywords:
Optical Networks; Multiband communication; network planning; network upgrade; techno-economics

Description

With 5G commercial deployments now a reality, combined with the push for digitalization in a post-COVID era, the use cases and demands for high-speed internet connectivity are ever-increasing. Many use cases are bound by ultra-low latency and high data-rate requirements. This means that the backbone long-haul optical networks also need to be upgraded to support the traffic demands, which will increase in the coming years.

Depending on the dark fiber availability in the region under study, fiber infrastructure can be upgraded by deploying parallel systems and lighting new fibers. However, the commercial deployment of optical communication beyond the C-band is a strong contender to increase capacity without investing in leasing new dark fiber.

In this Master Thesis, we approach this upgrade problem from a transport network operator's perspective and conduct several network planning studies to come up with an upgrade strategy that is beneficial to operators, so that they can continue to offer optical transport services to their clients while maintaining profitability.

Prerequisites

  1. Currently enrolled MSCe or MSEI student
  2. Background in communication networks
  3. Basic knowledge of techno-economics
  4. Knowledge of statistics and basic fundamental research
  5. Independent worker
  6. Good programming skills (Java or Python)

Supervisor:

Sai Kireet Patri - Sai Kireet Patri (ADVA)

Optimized Multi Access Edge Computing in Aeronautical Networks

Keywords:
6G MEC Optimization

Description

Aeronautical applications such as satellite communications and in-aircraft systems have broadened the range of applications for 5G, bringing new opportunities and potential.

In the new 6G era, these industries are envisioned to receive even more attention due to the high coverage possibilities that they provide. Applications such as unmanned aerial vehicles, flying taxis, and moving base stations are just a few to name. In that regard, to support this wide range of applications, new 6G networks call for new and more efficient architectures.

In this thesis, the student shall focus on the analysis of existing architectures for 5G/6G aeronautical applications and shall perform a comparison, defining potentials for the development of new architectures to reduce delay and increase network performance. To this end, a comprehensive analysis is required to define the right metrics for comparison and to identify the potential for improvement. An initial simulator based on the defined metrics is to be created to enable the comparison of similar algorithms in the future.

Prerequisites

  • Good knowledge of simulation environments such as Matlab, Python.
  • Good mathematical background.
  • Knowledge about satellite communications is a plus.

Supervisor:

Implementation of reinforcement learning based MPTCP scheduler

Description

In order to fully utilize the capabilities of a LiFi-RF Heterogeneous network, the client devices should be capable of using multiple network interfaces simultaneously. Thanks to multipath solutions like MPTCP, this is possible. 

The challenge in an MPTCP-enabled heterogeneous network lies in designing a policy to schedule data packets onto the multiple paths with heterogeneous characteristics (e.g., delay, packet loss).

This work involves

  • Re-implementing an existing deep reinforcement learning model of a multipath scheduler
  • Extending the scheduler
  • Evaluating the models extensively in an emulation environment (Mininet)
  • Evaluating the models extensively on the hardware testbed

If you are interested in this work, please send an email with a short introduction of yourself along with your CV and grade transcript.

Prerequisites

  • Strong Python programming skills
  • Experience with reinforcement learning
  • Experience with Linux
  • Experience using Mininet is an advantage

Supervisor:

Implementing and Evaluating a Neural Network-based Routing Protocol

Keywords:
AI, deep learning, eBPF, MPLS Networking

Description

The goal of traffic engineering in communication networks is steering traffic such that an objective is optimized. The objective depends on the application and can be, among others, minimizing the maximum link utilization or minimizing the flow completion time. To optimize any objective, a TE solution maps the current state of the network into a forwarding decision. That is, given the network state, source, and destination, forward this traffic along that path. Currently, the decision making in TE systems is based on simple, hand-crafted algorithms. The reason lies in the strict computational requirements towards any TE algorithm (decisions at (sub)millisecond scale) and the necessity to realize the TE system as a distributed protocol.

Recent work shows that Neural Networks (NNs) can learn a distributed protocol from examples. The NN uses decisions of a TE system and synthesizes a distributed protocol out of those examples. In the process, the NN learns how information on a node should be encoded, which nodes need to exchange information to make decisions, and how to map the exchanged network state into a forwarding decision. For fast inference, the NN that makes the decisions is a fully binarized NN, i.e., input, weights, and activations are binary. While a Binary Neural Network (BNN) accelerates the evaluation, a practical implementation in existing hardware is still missing.

The goal of this thesis is to fill this gap and realize the BNN on a physical system. Your task is to develop a host-based implementation (Appendix C.2 in [1]) using the extended Berkeley Packet Filter (eBPF). Concretely, the eBPF implementation must process update messages, use the NN to make forwarding decisions for outbound traffic, and signal these decisions to the network. The creation and the sending of update messages on switches is not part of the thesis and will be provided. The final deliverable is a small VM-based Clos topology (e.g., two pods of a k=4 fat-tree topology) in which traffic is routed based on the path determined by the NN in the end-hosts.
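
To make the binarized-inference idea concrete, here is a small, self-contained Python sketch of the XNOR/popcount arithmetic a BNN forward pass reduces to; an eBPF program could mirror the same integer operations. It is an illustration only, not the provided NN or its host implementation.

    def binarize(bits):
        # Pack a list of +/-1 values into an integer bitmask (1 encodes +1)
        mask = 0
        for i, b in enumerate(bits):
            if b > 0:
                mask |= 1 << i
        return mask

    def bnn_neuron(x_mask, w_mask, n_bits):
        # XNOR counts matching bits; dot product of +/-1 vectors = 2*matches - n_bits
        matches = bin(~(x_mask ^ w_mask) & ((1 << n_bits) - 1)).count("1")
        return 2 * matches - n_bits

    x = binarize([+1, -1, +1, +1, -1, -1, +1, -1])
    weights = [binarize([+1, +1, -1, +1, -1, +1, -1, -1]),
               binarize([-1, -1, +1, +1, +1, -1, +1, +1]),
               binarize([+1, -1, -1, -1, +1, +1, -1, +1])]
    # Sign activation per output neuron; e.g. the largest pre-activation could select the next hop
    print([1 if bnn_neuron(x, w, 8) >= 0 else -1 for w in weights])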

Supervisor:

Automated Generation of Adversarial Inputs for Data Center Networks

Keywords:
adversarial; datacenter networks

Description

Today's Data Center (DC) networks are facing increasing demands and a plethora of requirements. Factors for this are the rise of Cloud Computing, Virtualization and emerging high data rate applications such as distributed Machine Learning frameworks.
Many network designs and routing algorithms covering different operational goals and requirements have been proposed.
This variety makes it hard for operators to choose the ``right'' solution.
Recently, some works proposed mechanisms that automatically generate adversarial inputs to networks or networking algorithms [1,2] to identify weak spots, in order to get a better view of their performance and help operators' decision making. However, they focus on specific scenarios.
The goal of this thesis is to develop or extend such mechanisms so that they can be applied to a wider range of scenarios than previously possible.
The thesis builds upon an existing flow-level simulator in C++ and initial algorithms that generate adversarial inputs for networking problems.

[1] S. Lettner and A. Blenk, “Adversarial Network Algorithm Benchmarking,” in Proceedings of the 15th International Conference on emerging Networking EXperiments and Technologies, Orlando FL USA, Dec. 2019, pp. 31–33, doi: 10.1145/3360468.3366779.
[2] J. Zerwas et al., “NetBOA: Self-Driving Network Benchmarking,” in Proceedings of the 2019 Workshop on Network Meets AI & ML  - NetAI’19, Beijing, China, 2019, pp. 8–14, doi: 10.1145/3341216.3342207.

Prerequisites

- Profound knowledge in C++

Supervisor:

Deep reinforcement learning based beam alignment pattern in a multi-RAT network

Keywords:
Beamforming, Optimization, Linearization

Description

Beamforming is a method for directional signal transmission and reception. Beamforming works by combining antenna elements such that constructive interference amplifies the signal at particular angles, while destructive interference attenuates the wave at other angles. We formulated an optimization problem to maximize the sum rate of a network containing multiple access points by finding the optimum beam angles. The goal of this work is to implement this mathematical formulation and to solve the optimization problem by simplifying (e.g., linearizing) the formulation.
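
The exact formulation is part of the thesis material and is not reproduced here, but the brute-force Python sketch below conveys the structure of the problem: pick one beam angle per access point to maximize a sum rate under a toy gain/interference model. The gain pattern, angles, and noise value are hypothetical.

    import itertools
    import numpy as np

    def gain(beam_angle, target_angle, beamwidth=np.deg2rad(30)):
        # Toy pattern: high gain inside the main lobe, low gain elsewhere (no wrap-around handling)
        return 10.0 if abs(beam_angle - target_angle) < beamwidth / 2 else 0.5

    def sum_rate(beam_angles, ap_user_angles, noise=1e-3):
        rate = 0.0
        for ap, users in enumerate(ap_user_angles):
            for u in users:
                signal = gain(beam_angles[ap], u)
                interference = sum(gain(beam_angles[o], u)
                                   for o in range(len(beam_angles)) if o != ap)
                rate += np.log2(1 + signal / (interference + noise))
        return rate

    ap_user_angles = [[0.2, 0.4], [2.0, 2.3]]              # two APs, two users each (radians)
    candidates = np.deg2rad(np.arange(0, 360, 15))
    best = max(itertools.product(candidates, repeat=2),
               key=lambda a: sum_rate(a, ap_user_angles))
    print(np.rad2deg(best))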

Prerequisites

- Python or Matlab programming experience
- Knowledge in optimization problems
- Knowledge in Sage or Gurobi (optional)

Supervisor:

Towards Log Data-driven Fault Analysis in a Heterogeneous Content Provider Network

Description

Bayerischer Rundfunk (BR) operates a network to deliver content via television, radio and the internet to its users. This requires a highly heterogeneous network. The network monitoring solution for the BR network collects log data from involved devices and stores it in a central database. Currently, human operators make network management decisions based on a manual review of this log data. This especially includes root cause identification in case of network failures. Such a human-centric process can be tedious and does not scale well with increasing network complexity. In this thesis, the student should perform a thorough analysis of the described data and evaluate the potential for automated processing. The goal is to provide a data-driven approach that significantly supports human operators in identifying root causes in case of network failures.

Supervisor:

Maximilian Stephan

Reliability Analysis of ONOS Releases based on Code Metrics and SRGM

Description

Software Defined Networking (SDN) separates the control and data planes. The control plane can be considered the brain of the network: it is responsible for configuring flows, finding paths, and managing network functionalities like firewalling, load balancing, etc. For this reason, the SDN controller has become complex. Furthermore, it is a large software platform with many contributors of different experience levels. As a result, the code contains many undetected and unresolved bugs. If one of these bugs is activated in the operational state, it may cause performance degradation or even a collapse of the whole system.

SDN serves a broad range of applications with different requirements. Some application areas, like autonomous driving, require high reliability, and performance degradation may cause undesired results. Software Reliability Growth Models (SRGM) are statistical frameworks based on historical bug reports that are widely used to estimate the reliability of a software system. Open Network Operating System (ONOS) is an open source project and has become one of the most popular SDN platforms. Its historical bug reports are openly available in its JIRA issue tracker. Currently, ONOS has 23 releases; its first ten versions have been investigated with different SRGM models [1], and it was found that different SRGMs fit the bug detection of different ONOS versions.

Source code metrics refer to quantitative characteristics of the code. Those metrics can describe the size of the code (lines of code), its complexity (McCabe's complexity), etc. They have been used to predict the number of bugs, identify potential bug locations, etc.

The goal of this work is to analyse the reliability of different ONOS releases. For that purpose, an understanding of the correlation between the structure of the source code and the bug manifestation process is crucial to predict the future bug manifestation of new releases. First, state-of-the-art research on SRGMs will be reviewed to understand software reliability and SRGMs. Afterwards, the student should implement different SRGMs to fit the error manifestation of every release and compare the results with the mentioned research [1]. Then, different code metrics will be obtained from each ONOS release and the correlation between SRGM and code metrics will be revealed. At last, the reliability of each release will be analyzed with the best-fitting SRGM. The result of this work will be a proposed reliability metric combining SRGM and code metrics that improves software reliability prediction.
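
As a small illustration of the SRGM-fitting step, the sketch below fits the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) to a hypothetical cumulative bug-count series with SciPy; the weekly counts are invented and do not describe any real ONOS release.

    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        # Expected cumulative number of detected bugs after time t
        return a * (1.0 - np.exp(-b * t))

    weeks = np.arange(1, 13)
    cum_bugs = np.array([5, 12, 20, 26, 31, 35, 38, 41, 43, 44, 45, 46])   # hypothetical data

    (a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_bugs, p0=[50, 0.1])
    print(f"expected total bugs a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
    print("estimated residual bugs:", round(a_hat - cum_bugs[-1], 1))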

 

References

P. Vizarreta, K. Trivedi, B. Helvik, P. Heegaard, W. Kellerer, and C. Mas Machuca, An empirical study of software reliability in SDN controllers,  13th International  Conference  on  Network  and  Service  Management  (CNSM), 2017.

Supervisor:

Hasan Yagiz Özkan

Reinforcement Learning for joint/dynamic user and slice scheduling in RAN towards 5G

Description

In the Radio Access Network (RAN), the MAC scheduler has largely been inherited across past generations and adapted to fit new networking goals and service requirements. The rapid deployment of new 5G technologies will make upgrading the current schedulers extremely complicated and difficult to improve and maintain. Therefore, finding new solutions for efficient Radio Resource Scheduling (RRS) is necessary to meet the new KPI targets. 5G networks and beyond use the concept of network slicing by forging virtual instances (slices) of the physical infrastructure. A heterogeneous network requires a more optimized and dynamic RRS approach. In view of the development of SD-RAN controllers and artificial intelligence, new promising tools such as reinforcement learning can prove useful for such a problem.

In this thesis, a data-driven MAC slice scheduler will be implemented, that maximizes user utility, while learning the optimal slice partitioning ratio. A deep reinforcement learning technique will be used to evaluate the radio resource scheduling and slicing in RAN. The results will be compared with traditional schedulers from the state-of-the-art.

Supervisor:

Arled Papa - Prof. Navid Nikaein (EURECOM)

vnf2tx: Automating VNF platform operation with Reinforcement Learning

Description

...

Supervisor:

Hierarchical SDN control for Multi-domain TSN Industrial Networks

Description

In this thesis student will focus on designing and implementing a hierarchical SDN solution for industrial multi-domain TSN network.

Contact

kostas.katsalis@huawei.com

Supervisor:

Nemanja Deric - Dr. Kostas Katsalis (Huawei Technologies)

Implementation and Analysis of P4-based 5G User Plane Function

Keywords:
5G, P4, UPF

Description

The 5G cellular networks are the state-of-the-art cellular networks for the coming 10 years. One of the critical network functions in the 5G core system is the evolved packet gateway or User Plane Function (UPF). The UPF is responsible for carrying the users' packets from the base stations to the data network (like the internet). 

On the other hand, P4 is a promising language for programming packet processors. It can be used to program different networking devices (Software/FPGAs/ASICs/...).

Using P4 to implement the UPF has many advantages in terms of flexibility and scalability. In this work, the student will realize/implement the UPF in P4 language. Then, the advantages of this approach, especially in terms of performance gains, will be evaluated.

Prerequisites

-Good programming skills

-Basic Linux knowledge

-Motivation and critical thinking

-Knowledge about LTE, 5G, or P4 is a plus.

Supervisor:

Deliberate Load-Imbalancing in Data Center Networks

Keywords:
Traffic Engineering, Scheduling, Data Center Networks
Short Description:
Goal of this thesis is the implementation and evaluation of an in-dataplane flow scheduling algorithm based on the online scheduling algorithm IMBAL in NS3.

Description

Recently, a scalable load balancing algorithm in the dataplane, HULA, has been proposed that leverages P4 to estimate the utilization in the network and assign flows to the least utilized path. This approach can be interpreted as a form of Graham's List algorithm.
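
For intuition, the sketch below implements the least-utilized-path rule described above as Graham's List scheduling over paths; flow sizes stand in for utilization estimates, and IMBAL would deliberately deviate from this greedy choice.

    import heapq

    def least_loaded_assignment(flow_sizes, n_paths):
        # Assign each arriving flow to the currently least-utilized path (Graham's List)
        heap = [(0.0, p) for p in range(n_paths)]      # (accumulated load, path id)
        heapq.heapify(heap)
        assignment = []
        for size in flow_sizes:
            load, path = heapq.heappop(heap)
            assignment.append(path)
            heapq.heappush(heap, (load + size, path))
        return assignment

    print(least_loaded_assignment([5, 3, 8, 2, 7, 4], n_paths=3))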

In this thesis, the student is tasked to investigate how a different online scheduling algorithm called IMBAL performs compared to HULA. A prototype of IMBAL should be implemented in NS3. The tasks of this thesis are:

  1. Literature research and overview to online scheduling and traffic engineering in data center networks.
  2. Design how IMBAL can be implemented in NS3.
  3. Implementation of IMBAL in NS3.
  4. Evaluation of the implementation in NS3 with production traffic traces and comparison to HULA (a HULA implementation is provided from the chair and its implementation not part of this thesis).

Supervisor:

5G-RAN control plane modeling and Core network evaluation

Description

Next generation mobile networks are envisioned to cope with heterogeneous applications with diverse requirements. To this end, 5G is paving the way towards more scalable and higher performing deployments. This leads to a revised architecture, where the majority of the functionalities are implemented as network functions, which could be scaled up/down depending on the application requirements. 

3GPP has already released the 5G architecture overview, however there exists no actual open source deployment of RAN functionalities. This will be crucial towards the evaluation of the Core network both in terms of scalability and performance. In this thesis, the student shall understand the 5G standardization, especially the control plane communication between the RAN and 5G Core. Further, an initial RAN function compatible with the 5G standards shall be implemented and evaluation of control plane performance will be carried out. 

Prerequisites

  • Strong knowledge on programming languages Python, C++ or Java.
  • Knowledge about mobile networking is necessary.
  • Knowledge about 4G/5G architecture is a plus.

Supervisor:

Endri Goshi, Arled Papa

Interdisciplinary Projects

Optimally scheduling packets with MPTCP for Wireless Heterogeneous Networks

Keywords:
LiFi, Multipath, Optimization, Scheduling

Description

In order to fully utilize the capabilities of a LiFi-RF Heterogeneous network, the client devices should be capable of using multiple network interfaces simultaneously. Thanks to multipath solutions like MPTCP, this is possible. 

The challenge in an MPTCP-enabled heterogeneous network lies in designing a policy to schedule data packets onto the multiple paths with heterogeneous characteristics (e.g., delay, packet loss).

This work involves

  • Designing an MPTCP scheduler that schedules packets optimally to minimize network delay and handle the dynamicity of heterogeneous links
  • Implementing the scheduler in the Linux kernel
  • Performing extensive evaluations with Mininet and hardware

Related Reading:Yang, Wenjun, et al. "Loss-aware throughput estimation scheduler for multi-path TCP in heterogeneous wireless networks." IEEE Transactions on Wireless Communications 20.5 (2021): 3336-3349.

If you are interested in this work, please send an email with a short introduction of yourself along with your CV and grade transcript.

 

Prerequisites

  • Strong Python and C++ programming skills
  • Experience with optimization problems
  • Experience with Linux networking

 

Contact

hansini.vijayaraghavan@tum.de

Supervisor:

Joint radio and computing resource allocation using artificial intelligence algorithms

Description

Mobile Edge Computing (MEC) enabled 6G networks support low-latency applications running on energy-constrained and computationally limited devices, especially IoT devices. Using the task offloading concept, the devices offload incoming tasks fully or partially to the MEC, depending on the communication and computation resource availability on the device and network side.

6G networks are oriented towards Digital Twins (DT); therefore, the resource allocation and offloading decisions are enhanced with context-awareness of the devices, environment, and network. The device context-awareness consists of battery state, power consumption, CPU load, and traffic type. Further, the environmental context-awareness includes the position of the network components, the mobility patterns, the quality of the wireless channel, and the availability of network resources.

 

In this project, the student will focus on developing and testing an artificial intelligence algorithm for the joint allocation of computing and radio resources in a predictive manner, with the aim of decreasing individual devices' energy consumption and reducing processing latency.

Tasks

 

  • Work with a 6G radio access network simulator to generate the database for the scenario with devices having high energy efficiency and low task processing latency requirements.
  • Develop a reinforcement learning algorithm for the joint allocation of radio and computing resources.
  • Compare the developed model with state-of-the-art approaches.
  • Test and documentation.

 

Prerequisites

 

  • Good knowledge of Python programming.
  • Good mathematical background.
  • Good knowledge of deep learning/reinforcement learning.

Supervisor:

Alba Jano

Research Internships (Forschungspraxis)

Deterministic Networking with 5G

Keywords:
Deterministic Networking, Reliability, Network Calculus
Short Description:
To build a TSN-5G UPF bridge.

Description

Time-Sensitive Networking (TSN) denotes a set of standards in IEEE 802.1 that improve determinism in the network. Methods to avoid packet drops and to bound the delay and jitter in TSN networks are provided by network calculus.

 

Meanwhile, fifth-generation wireless communication (5G) is playing a vital role in providing wireless connectivity for industrial communications. Together with TSN technologies, which are implemented for wired connectivity, 5G can provide reliable and fast communications for a large number of services on a common network infrastructure, as well as for time-sensitive applications requiring low-latency communications. The fusion of TSN and 5G communication technologies can bring significant improvements to current industrial practices.

 

 

Supervisor:

Yash Deshpande

Simulation of PTPv2 with configurable delay switch

Keywords:
Time Synchronization, Simulation
Short Description:
Simulation changes to the OMNET++ PTP implementation for configurable delay switches and packet delay filters.

Description

This project aims to implement a simulation of PTP and find out the effect of delay variation. Considering the PTP message exchange, the simulation will be built with OMNeT++. A project on GitHub named ptp-sim is used as a reference during implementation. The main objective is to assign different delays to the PTP messages, meaning the link delay can be set arbitrarily or drawn from a certain distribution during the simulation. For each simulation, the timestamps of the master clock and the ordinary clock are recorded to calculate the synchronization error. Based on the delay distribution parameters and the synchronization error, the effect of delay variation on PTP can be concluded.
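
The synchronization error follows from the standard PTP offset computation over the four exchange timestamps; the toy values below (in microseconds) show how asymmetric link delays bias the estimated offset, which is the effect the simulation is meant to quantify.

    def ptp_offset_and_delay(t1, t2, t3, t4):
        # t1: master sends Sync, t2: slave receives Sync,
        # t3: slave sends Delay_Req, t4: master receives Delay_Req
        offset = ((t2 - t1) - (t4 - t3)) / 2.0            # estimated slave clock offset
        mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
        return offset, mean_path_delay

    # True offset 10 us, symmetric 5 us delays: the offset is recovered exactly
    print(ptp_offset_and_delay(t1=0, t2=15, t3=100, t4=95))
    # Asymmetric delays (5 us down, 9 us up): the estimate is biased by 2 us
    print(ptp_offset_and_delay(t1=0, t2=15, t3=100, t4=99))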

Prerequisites

C++, Data Networking

Supervisor:

Yash Deshpande

Development and Evaluation of Different Concepts for the Use of Coordinated Multipoint

Description

...

Supervisor:

Optimal placement of PDP-capable switches

Keywords:
P4, SDN
Short Description:
This work consists of the optimal placement of PDP-capable switches for PCPP

Description

Software-Defined Networking (SDN) is a network paradigm where control and data planes are decoupled. The control plane consists of a controller, which manages network functionality and can be deployed on one or multiple servers. The data plane consists of forwarding devices which are instructed by the controller on how to forward traffic.

P4 is a domain-specific programming language, which can be used to define the functionality of forwarding devices as virtual or hardware switches and SmartNICs.

This work consists of the optimal placement of PDP-capable switches for PCPP. For that, an ILP formulation and a P4-enabled virtual network will be used to validate the implementation.

Prerequisites

Basic knowledge on the following:

  • Linux
  • Networking/SDN
  • Python/C
  • Web programming (GUI).

Please send your CV and transcript of records.

Contact

Supervisor:

Cristian Bermudez Serna

Implementation of Collective Perception Messages (CPM) in Artery Framework

Description

...

Supervisor:

Forschungspraxis

Description

....

Supervisor:

Dynamic network planning for the future railway communications

Keywords:
Network Planning, On-Train Data Communications
Short Description:
Exploration of mechanisms for handling data communications under the influence of mobility in the German long distance railway system.

Description

This work focuses on the exploration of networks enabling train control and on-board data communications under mobility scenarios. Today, low bandwidth networks such as GSM, providing less than 200 Kbps, are being used to transmit train control information. Moreover, although trains may use multiple on-board technologies to provide users with an internet connection (e.g., repeaters, access points), they fail in this attempt, as these connections are characterized by low throughput (less than 2 Mbps) and frequent service interruptions.

This work aims at the development of a network planning solution enabling future applications in train mobility scenarios such as Automatic Train Operation (ATO) [1,2], leveraging cloud technologies and meeting bandwidth requirements of data-hungry end-users' applications. Here, special attention will be given to the migration of communication services triggered by trains' mobility patterns. It is expected of the student to find solutions to the following questions:

  • When to trigger service migrations?

  • Where to migrate services? (i.e., to which data center)

  • How to handle this process? (So that the user does not perceive any interruption)

 Given:

  • Trains' mobility patterns

  • Service requirements in terms of bandwidth and delay

  • Network topology

  • Data center locations

 
The results from this work can be useful to gain insight into the requirements of Smart Transportation Systems, which may in turn help cement the basis of other scenarios such as Autonomous Driving and Tele-Operated Driving.

 [1] Digitale Schiene Deutschland. Last visit on 13.12.2021 https://digitale-schiene-deutschland.de/FRMCS-5G-Datenkommunikation

[2] 5G-Rail FRMCS. Last visit on 13.12.2021 https://5grail.eu/frmcs/

Prerequisites

Basic knowledge in:

  • Integer Linear Programming (ILP), heuristics or Machine Learning (ML).

  • Python

Please send your CV and transcript of records.

 

Contact

Supervisor:

Cristian Bermudez Serna

Analysis of Time-Sensitive Networking Systems

Description

...

Supervisor:

Evaluation of traffic model impact on a context-aware power consumption model of user equipment

Keywords:
5G, IIoT, energy, efficiency

Description

Energy efficiency is one of the key performance requirements in the 5G network to ensure user experience. A portion of devices, especially in the Industrial Internet of Things (IIoT), run on limited energy, supplied by batteries that are not replaced over the device lifetime.

Therefore, the estimation of the power consumption and battery lifetime has recently received increased attention. Multiple context parameters, such as mobility and traffic arrivals, impact the device's power consumption.

In this thesis, the student shall focus on analysing the impact of different traffic models on the power consumption of user equipment. Different source and aggregated traffic models will be implemented depending on the number of devices in the scenario. The implemented traffic models will be evaluated based on a context-aware power consumption model for the user equipment.
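
The sketch below shows two simple source traffic models of the kind that could be compared, a Poisson process and deterministic periodic reporting, fed into a very coarse wake-up energy estimate; the parameters and energy numbers are placeholders and not the chair's power consumption model.

    import numpy as np

    def poisson_arrivals(rate_per_s, duration_s, rng):
        # Exponential inter-arrival times (Poisson process)
        times, t = [], 0.0
        while True:
            t += rng.exponential(1.0 / rate_per_s)
            if t > duration_s:
                return np.array(times)
            times.append(t)

    def periodic_arrivals(period_s, duration_s):
        # Deterministic periodic reporting, typical for many IIoT sensors
        return np.arange(period_s, duration_s, period_s)

    def energy_mj(arrivals, wakeup_cost_mj=2.0, idle_power_mw=0.1, duration_s=3600):
        # Coarse model: each arrival triggers one transmission wake-up
        return len(arrivals) * wakeup_cost_mj + idle_power_mw * duration_s / 1000.0

    rng = np.random.default_rng(1)
    for name, arr in [("poisson", poisson_arrivals(0.05, 3600, rng)),
                      ("periodic", periodic_arrivals(20.0, 3600))]:
        print(name, len(arr), "arrivals,", round(energy_mj(arr), 1), "mJ (toy model)")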

Prerequisites

  • Good knowledge of Python and Matlab programming.
  • Good mathematical background.
  • Knowledge of mobile networks.

Supervisor:

Alba Jano

Cost evaluation of a dynamic functional split

Description

Increased interference is one of the main drawbacks of cell densification, which is an important strategy for 5G networks to achieve higher data rates. Function centralization has been proposed as a strategy to counter this problem, by letting the physical or scheduling functions coordinate with one another. Nevertheless, the capacity of the fronthaul network limits the feasibility of this strategy, as the throughput required to connect low-level functions is very high. Fortunately, since not every function benefits in the same way from centralization, a more flexible approach can be used. Instead of centralizing all functions, only those providing the highest amount of interference mitigation can be centralized. In addition, the centralization level, or functional split, can be changed at runtime according to the instantaneous network conditions. Nonetheless, it is not fully known how costly it is to deploy and operate a network implementing a dynamic functional split.

In this internship, the cost of a radio access network implementing a dynamic functional split will be evaluated. A simulator already developed at LKN will be used and extended to produce network configurations adapted to the instantaneous user position and activity. Then, off-the-shelf cost models will be improved and used to estimate the deployment and operating cost of the network under multiple scenarios. Furthermore, the conditions on which a dynamic functional split is profitable will be investigated. Improvements on the functional-split selection algorithm will be proposed, such that the operator benefits from enhanced performance without operating at exceedingly costly states. Finally, a model that takes into account the cost of finding and implementing a new functional split will be employed and its results compared to the previous results.

Supervisor:

Software Engineering for Automotive Ethernet

Description

...

Supervisor:

Jitter Analysis and Comparison of Jitter Algorithms

Description

In electronics and telecommunication, jitter is a significant and undesired factor. The effect of jitter on the signal depends on the nature of the jitter. It is important to sample jitter and noise sources when the clock frequency is especially prone to jitter or when one is debugging failure sources in the transmission of high-speed serial signals. Managing jitter is of utmost importance, and the methods of jitter decomposition have changed considerably over the past years.

 

In a system, jitter has many contributors and it is not an easy job to identify them. It is difficult to isolate random jitter on a spectrogram. The waveforms are initially constant, but 1/f noise and flicker noise cause a lot of disturbance when it comes to output measurement at particular frequencies in a system.

 

The task is to understand the difference between jitter calculations based on a step response estimation and on the dual-Dirac model, by comparing the jitter algorithms of the R&S oscilloscope and competing oscilloscopes, and to assess how well jitter decomposition and identification work.

 

The tasks in detail are as follows:

  • Set up a waveform simulation environment and extend it to elaborate test cases
  • Run the generated waveforms through the algorithms
  • Analyze and compare the results: frequency domain, statistically (histogram, etc.), time domain, and consistency of the results
  • Evaluate the estimation of the BER (bit error rate)
  • Identify the limitations of the dual-Dirac model
  • Compare dual-Dirac model results with a calculation based on the step response estimation
  • Generate new waveforms based on the analysis
  • Summarize findings

Supervisor:

Arled Papa - Mathias Hellwig (Rohde Schwarz)

Probabilistic Traffic Classification

Keywords:
Probabilistic Graphical Models, Markov Model, Hidden Markov Model, Machine Learning, Traffic Classification
Short Description:
Classification of packet level traces using Markov and Hidden Markov Models.

Description

The goal of this thesis is the classification of packet-level traces using Markov- and Hidden Markov Model. The scenario is open-world: Traffic of specific web applications should be distinguished from all possible web-pages (background traffic). In addition, several pages should be differentiated. Examples include: Google Maps, Youtube, Google Search, Facebook, Google Drive, Instagram, Amazon Store, Amazon Prime Video, etc.
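
A minimal sketch of the Markov-model part: transition matrices are estimated per application from quantized packet-level traces, and a new trace is assigned to the model with the highest log-likelihood. In the open-world setting, a background model or a likelihood threshold would be added; the traces and state quantization below are invented.

    import numpy as np

    def fit_markov(sequences, n_states):
        # First-order transition matrix with Laplace smoothing
        counts = np.ones((n_states, n_states))
        for seq in sequences:
            for a, b in zip(seq[:-1], seq[1:]):
                counts[a, b] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    def log_likelihood(seq, P):
        return sum(np.log(P[a, b]) for a, b in zip(seq[:-1], seq[1:]))

    # Toy traces over 3 packet-size classes per application
    youtube = [[0, 2, 2, 2, 1, 2, 2], [2, 2, 2, 1, 2, 2, 2]]
    maps    = [[0, 1, 0, 1, 1, 0, 1], [1, 0, 1, 1, 0, 1, 0]]
    models = {"youtube": fit_markov(youtube, 3), "maps": fit_markov(maps, 3)}

    test = [2, 2, 1, 2, 2, 2]
    print(max(models, key=lambda name: log_likelihood(test, models[name])))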

Supervisor:

Internships

IP

Description

...

Supervisor:

Ingenieurpraxis

Description

...

Supervisor:

Probability parameters of 5G RANs featuring dynamic functional split

Description

...

Supervisor:

Probability parameters of 5G RANs featuring dynamic functional split

Description

The architecture of 5G radio access networks features the division of the base station (gNodeB) into a centralized unit (CU) and a distributed unit (DU). This division enables cost reduction and better user experience via enhanced interference mitigation. Recent research proposes the possibility to modify this functional split dynamically, that is, to change at runtime the functions that run on the CU and DU. This has interesting implications for network operation.

In this topic, the student will employ a dedicated simulator developed by LKN to characterize the duration and transition rates of each functional split under multiple variables: population density, mitigation capabilities, mobility, etc. This characterization may be used then on traffic models to predict the network behavior.

Prerequisites

MATLAB, some experience with mobile networks and simulators

Supervisor:

Student Assistant Jobs

Solving the manufacturer assignment problem to maximise availability of a network using centrality metrics

Keywords:
availability, manufacturer assignment, centrality metrics

Description

Availability is the probability that a device performs its required function at a particular instant of time.

In most networks, the components are bought from different manufacturers and have different availabilities. Network operators prefer having reliable components handle more traffic, as this ensures the robustness of the network. So, assigning appropriate manufacturers to the components in the topology to guarantee maximum availability is essential.

In this work, the student uses centrality metrics to identify the critical nodes and assign manufacturers based on these metrics.
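
A minimal sketch of the idea using NetworkX: nodes are ranked by betweenness centrality and the most available manufacturers are assigned to the most central nodes. The topology, manufacturers, and availabilities are hypothetical.

    import networkx as nx

    G = nx.Graph([(0, 1), (1, 2), (2, 3), (1, 3), (3, 4)])           # toy topology
    manufacturers = [("M1", 0.9999), ("M2", 0.999), ("M3", 0.99)]     # sorted best first

    # Nodes that carry more shortest paths are considered more critical
    ranking = sorted(nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1])

    assignment = {}
    for rank, (node, _) in enumerate(ranking):
        idx = min(rank * len(manufacturers) // len(ranking), len(manufacturers) - 1)
        assignment[node] = manufacturers[idx][0]                      # best vendors to central nodes
    print(assignment)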

Prerequisites

Mandatory:

  • Kommunikationsnetze course at LKN
  • Python

Contact

shakthivelu.janardhanan@tum.de

Supervisor:

Shakthivelu Janardhanan

Solving the manufacturer assignment problem to maximise availability of a network using linear programming

Keywords:
availability, manufacturer assignment, Nonlinear program

Description

Availability is the probability that a device performs its required function at a particular instant of time.

In most networks, the components are bought from different manufacturers and have different availabilities. Network operators prefer having reliable components handle more traffic, as this ensures the robustness of the network. So, assigning appropriate manufacturers to the components in the topology, guaranteeing
a) maximum availability, and
b) load balancing on the nodes,
is essential.

For a fixed topology and known traffic, how can the components be assigned to manufacturers to maximise availability and balance load on nodes?
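
One possible modeling trick (an assumption of this sketch, not a prescribed solution): since the availabilities of independent components in series multiply, maximizing the product is equivalent to maximizing the sum of log-availabilities, which keeps the objective linear. The PuLP sketch below uses invented loads, availabilities, and a toy per-manufacturer capacity as the load-balancing constraint.

    import math
    import pulp

    nodes = ["n1", "n2", "n3", "n4"]
    load = {"n1": 40, "n2": 10, "n3": 30, "n4": 20}             # known traffic per node
    avail = {"M1": 0.9999, "M2": 0.999, "M3": 0.99}             # availability per manufacturer
    cap = 60                                                     # max total load per manufacturer

    prob = pulp.LpProblem("manufacturer_assignment", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", (nodes, list(avail)), cat="Binary")

    # Maximize load-weighted sum of log-availabilities (log keeps the objective linear)
    prob += pulp.lpSum(load[n] * math.log(avail[m]) * x[n][m] for n in nodes for m in avail)
    for n in nodes:                                              # one manufacturer per node
        prob += pulp.lpSum(x[n][m] for m in avail) == 1
    for m in avail:                                              # crude load balancing
        prob += pulp.lpSum(load[n] * x[n][m] for n in nodes) <= cap

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({n: next(m for m in avail if x[n][m].value() > 0.5) for n in nodes})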

Prerequisites

Mandatory:

  • Communication Network Reliability course/ Optical Networks course at LKN
  • Python

 

Preferred:

  • Knowledge of Linear Programming and/or nonlinear programming

Contact

shakthivelu.janardhanan@tum.de

Supervisor:

Shakthivelu Janardhanan

Working student for development of GUI to study data center switch availability

Keywords:
reliability, availability, GUI, python
Short Description:
The primary goal is to develop a GUI using python to show the failure characteristics of a data center switch and its subcomponents.

Description

Data centers are built using different types of switches. While building a data center, the availability of switches plays a crucial role in the availability of the data center. In turn, the switch's availability is defined by the quality of the sub-components used. This work focuses on switch availability as a combination of the sub-components' availabilities.

In this work, the student is expected to build a Python GUI where the user can give the failure parameters of the sub-components as input and view the possible failure times and instantaneous availabilities of the sub-components and of the switch itself.

The student will receive the set of parameters and values for this GUI. The work is only to develop the GUI.
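
For orientation, the quantities the GUI would visualize can be sketched as follows, assuming exponentially distributed failures and repairs for each sub-component and a series structure for the switch; the MTBF/MTTR values are placeholders, not the parameters the student will receive.

    import numpy as np

    def instantaneous_availability(t, mtbf, mttr):
        # A(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu) t), starting in the working state
        lam, mu = 1.0 / mtbf, 1.0 / mttr
        return mu / (lam + mu) + lam / (lam + mu) * np.exp(-(lam + mu) * t)

    def switch_availability(t, subcomponents):
        # Series system: the switch works only if every sub-component works
        a = np.ones_like(t, dtype=float)
        for mtbf, mttr in subcomponents:
            a *= instantaneous_availability(t, mtbf, mttr)
        return a

    hours = np.linspace(0, 2000, 5)
    subcomponents = [(50_000, 8), (100_000, 4), (20_000, 12)]   # hypothetical (MTBF, MTTR) in hours
    print(switch_availability(hours, subcomponents))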

Prerequisites

Mandatory: Python, PyQT

Preferred: Analysis, Modeling, and Simulation of Communication Networks; Communication Network Reliability at LKN.

 

Contact

shakthivelu.janardhanan@tum.de

Supervisor:

Shakthivelu Janardhanan

Working Student for Testbed on 5G/6G RAN

Description

In this work, the expected result is to enhance the 5G/6G testbed setup with several additional features, mainly focusing on the Radio Access Network (RAN). The student is expected to work on the OpenAirInterface (OAI) [1] platform, which is the basis of the testbed setup. The expected outcome is improvements to the RAN of OAI, including but not limited to wireless channel estimation and equalization, Uplink (UL) resource scheduling, and power boosting. More details will be provided after the first meeting.

 

[1] N. Nikaein, M. K. Marina, S. Manickam, A. Dawson, R. Knopp and C. Bonnet, "OpenAirInterface: A flexible platform for 5G research," ACM SIGCOMM Computer Communication Review, vol. 44, no. 5, 2014.

Prerequisites

- Good C/C++ experience

- Medium knowledge on OFDM and Wireless Channel Estimation

- Good Python knowledge is a plus

- Machine Learning understanding is a plus

 

Contact

serkut.ayvasik@tum.de

Supervisor:

Multi-domain network implementation

Keywords:
multi-domain, SDN
Short Description:
This works consists on the implementation of a multi-domain SDN network.

Description

Software-Defined Networking (SDN) is a network paradigm where control and data planes are decoupled. The control plane consists of a controller, which manages network functionality and can be deployed on one or multiple servers. The data plane consists of forwarding entities which are instructed by the controller on how to forward traffic.

A network can be divided into multiple domains in order to ease its management or limit ownership. In multi-domain SDN, each domain has a controller which is responsible for its management. Controllers in different domains cooperate with each other, aiming at providing multi-domain end-to-end connectivity.

In this work, the student will receive an abstract topology representing the multi-domain network. This information has to be used to build a virtual network that can be used for testing different algorithms. The implementation should include a GUI in order to visualize the topology and interact with the different elements in the network.
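
A minimal Mininet sketch of such a multi-domain setup (assuming a Mininet environment and two remote controllers already running; all names, IP addresses, and ports are placeholders): each switch is started under its own domain controller, and an inter-domain link connects the two domains.

    #!/usr/bin/env python
    from mininet.net import Mininet
    from mininet.node import RemoteController, OVSSwitch
    from mininet.cli import CLI

    net = Mininet(controller=None, switch=OVSSwitch)
    c1 = net.addController('c1', controller=RemoteController, ip='127.0.0.1', port=6653)
    c2 = net.addController('c2', controller=RemoteController, ip='127.0.0.1', port=6654)

    s1, s2 = net.addSwitch('s1'), net.addSwitch('s2')   # one switch per domain
    h1, h2 = net.addHost('h1'), net.addHost('h2')
    net.addLink(h1, s1)
    net.addLink(h2, s2)
    net.addLink(s1, s2)                                  # inter-domain link

    net.build()
    s1.start([c1])                                       # domain A managed by c1
    s2.start([c2])                                       # domain B managed by c2
    CLI(net)
    net.stop()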

Please send your CV and transcript of records.

Prerequisites

Basic knowledge on the following:

  • Linux
  • Networking/SDN
  • Python
  • Web programming (GUI)

Contact

Supervisor:

Cristian Bermudez Serna

Student Assistant for the Wireless Sensor Networks Lab WS22/23

Description

The Wireless Sensor Networks lab offers the opportunity to develop software solutions for the wireless sensor networking system, targeting innovative applications. For the next semester, a position is available to assist the participants in learning the programming environment and during the project development phase. The lab is planned to be held on-site every Tuesday 15:00 to 17:00.

Prerequisites

  • Solid knowledge in Wireless Communication: PHY, MAC, and network layers.
  • Solid programming skills: C/C++.
  • Linux knowledge.
  • Experience with embedded systems and microcontroller programming knowledge is preferable.

Supervisor:

Yash Deshpande, Hansini Vijayaraghavan

End-to-End Delay Measurements of Linux End Hosts

Description

As preliminary results show, the Linux TCP/IP networking stack introduces high networking delay. The topic of this work is to perform an empirical study on the Linux socket-based transmission approach and implement a delay measurement workflow based on existing foundations and repositories.

References:

[1] Where has my time gone?

Prerequisites

Basic knowledge of 

  • Networking and Linux
  • C

Supervisor:

Zikai Zhou

Implementation of a Techno-Economic tool for VLC

Short Description:
Development and implementation in Excel/VBA of visible light communication (VLC) techno-economic tool for IoT services.

Description

Future IoT will need wireless links with high data rates, low latency and reliable connectivity despite the limited radio spectrum. Connected lighting is an interesting infrastructure for IoT services because it enables visible light communication (VLC), i.e. a wireless communication using unlicensed light spectrum. This work will aim at developing a tool to perform an economic evaluation of the proposed solution in the particular case of a smart office.

For that purpose, the following tasks will have to be performed:

  • Definition of a high-level framework specifying the different modules that will be implemented as well as the required inputs and the expected outputs of the tool.
  • Development of a cost evaluation Excel-VBA tool. This tool will allow to evaluate different variations of the selected case study and if possible, to compare different alternative models (e.g., dimensioning) or scenarios (e.g., building types).

Prerequisites

- Excel and VBA

Supervisor:

Implementation of Energy-Aware Algorithms for Service Function Chain Placement

Description

Network Function Virtualization (NFV) is becoming a promising technology in modern networks. A challenging problem is determining the placement of Virtual Network Functions (VNFs). In this work, we plan to implement existing algorithms for embedding VNF chains in NFV-enabled networks.

Prerequisites

Experience in Python or Java, object oriented programming

Contact

amir.varasteh@tum.de

Supervisor:

Working student for innovative Podcasts for BCN lecture

Short Description:
Programming support for the design of podcasts/apps for the Broadband Communication Networks lecture

Description

The lecture Broadband Communication Networks (Prof. Wolfgang Kellerer) teaches network-related methods of mobile communication: WIFI, 2G to 5G cellular networks, etc. In order to bridge the gap between the methods and real life, innovative teaching concepts shall be developed in the form of short podcasts, where the students learn in short episodes about wireless communication and networking in day-to-day scenarios. In the podcasts, the students should also be introduced to short exercises they can perform on their own.

In order to support the design of these podcasts, Prof. Kellerer is looking for a student experienced in programming and, ideally, in podcast production to help him.

Prerequisites

Very good programming skills; experience with podcasts; experience with app programming (on android/ios)

Supervisor:

Working student for the SDN lab

Description

The Software-Defined Networking (SDN) lab offers the opportunity to work with real hardware on interesting and entertaining projects related to the SDN network paradigm. For the next semester, a position is available to assist the teaching assistants for the lab (definition and preparation of the assignments, preparation of the hardware, etc.).

Prerequisites

  • Solid knowledge in computer networking (TCP/IP, SDN)
  • Solid knowledge of networking tools and Linux (iperf, ssh, etc)
  • Good programming skills: C/C++, Python

Supervisor: