We offer topics for Bachelor's and Master's theses so you can complete your studies with a piece of scientific work. For students of the Department of Electrical and Computer Engineering, we supervise your Forschungspraxis (research internship / industrial internship) and Ingenieurpraxis directly at our chair. For students with other specializations, such as Informatics, we offer to supervise your Interdisciplinary Project (IDP) (German: "Interdisziplinäres Projekt (IDP)"). Please contact us directly for more information.
Optimal resource allocation for utility maximization in 5G networks
Keywords: 5G, network slicing, optimization
This thesis focuses on slice dimensioning for the three 5G traffic types (eMBB, URLLC, and mMTC). Each user, depending on its traffic type, is characterized by a weight; this could be, e.g., the gain the operator obtains by serving that user. We assume that the more stringent the traffic requirement, the higher the gain for the operator: the weight of mMTC users would thus be the lowest and that of URLLC users the highest. Users also experience different channel conditions, which need to be taken into account. For each admitted user, the traffic requirement has to be satisfied. The problem then reduces to deciding how many resources are allocated to each service type's slice, and the corresponding number of users within each slice, so that the operator's utility is maximized. Three policies are to be analyzed. The first is motivated by the finite number of resource blocks each cell has, so that a brute-force solution is found; the computational complexity of this policy has to be determined as well. The second, less complex policy reserves resources beforehand (i.e., it is a static policy) based on the ratio between the weight and the resources needed by a user of the given service type. Finally, the third policy, based on a heuristic, decides on the fly how to dimension the RAN slice sizes.
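As a sketch of the first (brute-force) policy, the search over per-service user counts can be written in a few lines; the weights, per-user resource-block costs, and budget below are hypothetical placeholders, not values from the thesis:

```python
import itertools

def best_allocation(total_rbs, services):
    """Exhaustively search the user counts per service type that maximize
    total operator utility under a resource-block budget.

    services: list of (weight, rbs_per_user, max_users) tuples.
    Returns (best_utility, best_counts).
    """
    ranges = [range(m + 1) for _, _, m in services]
    best = (0, None)
    for counts in itertools.product(*ranges):
        used = sum(n * rb for n, (_, rb, _) in zip(counts, services))
        if used > total_rbs:                 # violates the RB budget
            continue
        utility = sum(n * w for n, (w, _, _) in zip(counts, services))
        if utility > best[0]:
            best = (utility, counts)
    return best

# Hypothetical numbers: (weight, RBs per user, candidate users)
services = [(1, 1, 10),   # mMTC: lowest weight
            (3, 2, 10),   # eMBB
            (5, 4, 10)]   # URLLC: highest weight
print(best_allocation(20, services))
```

The exhaustive search is exponential in the number of service types, which is exactly why the complexity analysis and the two cheaper policies matter.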
In this work, Machine Learning (ML) alternatives will be evaluated to perform in-network prediction and classification. The goal is to develop ML-based applications that help in network optimization, reconfiguration, telemetry and resource allocation.
Some of the tasks include:
ML algorithm implementation
Evaluation in terms of accuracy and resource utilization
If you are interested, please send me your CV and your transcript of records.
Keywords: communication networks, machine learning, cyber security
Communication networks are already key in the everyday life of most people. From social interaction to controlling vital infrastructure, communication networks constitute one of the enabling technologies for the modern world. But this importance comes with challenges. Among others, it makes communication networks the target of attacks. Such attacks can take a variety of forms. Two prominent examples are denial of service and the gain of unauthorized access to machines/data. To securely operate a communication network it is of great importance to detect and mitigate attacks as early as possible.
In this context, the proposed project focuses on the collection and analysis of network traffic. The gained insights should then be utilized to detect malicious behavior. As a first step, your task is to implement an environment where known attacks can be carried out in a secure and observable manner. This especially includes:
setup of virtual machines that can be used as a target.
setup of virtual machines with attack capabilities.
architecture for collecting ground-truth data during the attack phase.
As a second step, your task is to evaluate methods for identifying traffic patterns of different attacks. In this context, a special focus lies on the research question of how so-called Stochastic Block Models (SBMs) can be used to detect specific forms of attacks.
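As a starting point for experimenting with SBMs, a graph can be sampled from the model with plain NumPy; the block sizes and edge probabilities below are illustrative only:

```python
import numpy as np

def sample_sbm(block_sizes, p_matrix, rng):
    """Sample an undirected graph from a Stochastic Block Model.

    block_sizes: number of nodes per block.
    p_matrix[i][j]: edge probability between blocks i and j.
    Returns a symmetric 0/1 adjacency matrix without self-loops.
    """
    labels = np.repeat(np.arange(len(block_sizes)), block_sizes)
    n = labels.size
    probs = np.asarray(p_matrix)[labels[:, None], labels[None, :]]
    upper = rng.random((n, n)) < probs
    adj = np.triu(upper, k=1)            # keep upper triangle, no self-loops
    return (adj | adj.T).astype(int)     # symmetrize

rng = np.random.default_rng(0)
# Two dense "benign" blocks plus sparse cross-links; an attack such as a
# scan could show up as a node with anomalous cross-block connectivity.
adj = sample_sbm([5, 5], [[0.9, 0.05], [0.05, 0.9]], rng)
```

Fitting the block structure to observed traffic graphs (rather than sampling) is the direction the research question points at; this sketch only shows the generative side of the model.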
Within this project, you will gain a broad overview of network security, which is a valuable asset for your future career. You will help us to develop mitigation strategies and thus directly contribute towards the improvement of network security. Of course, it is possible to do a student thesis within the scope of this project.
Please send a short intro of yourself together with your CV and transcript of records to us. We are looking forward to meeting you.
Basic knowledge of Linux
Basic Python programming skills
General understanding of communication networks (especially on packet level; protocols: IP, TCP/UDP)
Beamforming is a method for directional signal transmission and reception. It works by combining antenna elements such that at particular angles constructive interference amplifies the signal, while at other angles destructive interference attenuates the wave. We formulated an optimization problem to maximize the sum rate of a network containing multiple access points by finding the optimum beam angles. This problem is formulated mathematically. The goal of this work is to translate the formulation into code and to solve the optimization problem by simplifying the formulation.
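For intuition, a minimal sketch of the beamforming effect for a single uniform linear array (not the chair's multi-access-point formulation) can be computed as follows; the element count and spacing are example values:

```python
import numpy as np

def array_factor_db(n_elements, spacing_wl, steer_deg, angles_deg):
    """Normalized array-factor magnitude (dB) of a uniform linear array
    steered to steer_deg, evaluated at angles_deg (broadside = 0 deg).
    spacing_wl is the element spacing in wavelengths."""
    k = 2 * np.pi                          # wavenumber times wavelength
    angles = np.radians(angles_deg)
    steer = np.radians(steer_deg)
    n = np.arange(n_elements)[:, None]     # element index, column vector
    phase = k * spacing_wl * n * (np.sin(angles) - np.sin(steer))
    af = np.abs(np.exp(1j * phase).sum(axis=0)) / n_elements
    return 20 * np.log10(np.maximum(af, 1e-12))

angles = np.linspace(-90, 90, 721)
gain = array_factor_db(8, 0.5, steer_deg=20, angles_deg=angles)
best = angles[np.argmax(gain)]   # main lobe lands at the steering angle
```

The sum-rate optimization in the thesis is over many such beam angles jointly, with interference coupling the access points; this snippet only shows how a single beam angle shapes the gain pattern.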
- Python or Matlab programming experience
- Knowledge in optimization problems
- Knowledge in Sage or Gurobi (optional)
Student Assistant for the Wireless Sensor Networks Lab SS22
The Wireless Sensor Networks lab offers the opportunity to develop software solutions for the wireless sensor networking system, targeting innovative applications. For the next semester, a position is available to assist the participants in learning the programming environment and during the project development phase. The lab is planned to be held on-site every Tuesday and Thursday from 15:00 to 17:00.
Solid knowledge in Wireless Communication: PHY, MAC, and network layers.
Solid programming skills: C/C++.
Experience with embedded systems and microcontroller programming knowledge is preferable.
Network Planning for the Future Railway Communications
Keywords: Network Planning, On-Train Data Communications
Short Description: This work focuses on the exploration of networks enabling train control and on-board data communications. Today, low-bandwidth networks such as GSM, providing less than 200 kbps [1,2], are used to transmit train control information. Moreover, although trains may use multiple on-board technologies to provide users with an internet connection (e.g., repeaters, access points), they fail in this attempt, as these connections are characterized by low throughputs (less than 2 Mbps) and frequent service interruptions.
This thesis aims at the development of a network planning solution enabling future applications in train mobility scenarios such as Automatic Train Operation (ATO) [1,2,3], leveraging cloud technologies and meeting the bandwidth requirements of data-hungry end-user applications. Special attention will be given to the planning of the access network, composed of Access Points (APs) and (edge) data centers, along with their connection to the core network, for the German inter-city railway system. The student is expected to find solutions to the following questions:
- Where to place network components such as APs and (edge) data centers?
- How to interconnect the network components?
- Train mobility patterns
- Service requirements in terms of bandwidth, delay, and reliability levels.
- Mobile network operators
- Core network
The results of this master's thesis can provide insight into the requirements of Smart Transportation Systems, which may in turn help cement the basis of other scenarios such as Autonomous Driving and Tele-Operated Driving.
[1] Digitale Schiene Deutschland. Last visited 13.12.2021. https://digitale-schiene-deutschland.de/FRMCS-5G-Datenkommunikation
[2] 5G-Rail FRMCS. Last visited 13.12.2021. https://5grail.eu/frmcs/
[3] Challenges in GSM-R to FRMCS migration. Last visited 13.12.2021. https://www.quattron.com/wp-content/uploads/2020/03/Herausforderungen-bei-der-Migration-von-GSM-R-zu-FRMCS.pdf
Throughout the wireless generations, the wireless system has been separated into individual blocks, such as channel encoder/decoder, modulator, and demodulator, each developed individually. However, finding the optimum of such individual blocks does not guarantee the global optimum of the wireless communication system. To overcome this problem, the novel concept of End-to-End (E2E) learning of communication systems was proposed [1]. This concept uses deep learning techniques to jointly optimize the whole wireless communication system. Although theoretically quite appealing, the practicality of this idea is questionable due to the unavailability of wireless channel knowledge at the transmitter during training. To mitigate the problem, several approaches have been proposed, such as using a generative adversarial net (GAN) to represent the channel effects at the transmitter, as in [2]. In this work, the expected result is to generalize the previous works to the new 5G-NR standard using the Matlab 5G Toolbox framework.
[1] T. O'Shea and J. Hoydis, "An introduction to deep learning for the physical layer," IEEE Transactions on Cognitive Communications and Networking, vol. 3, no. 4, pp. 563–575, 2017.
[2] H. Ye, L. Liang, G. Y. Li, and B. Juang, "Deep learning-based end-to-end wireless communication systems with conditional GANs as unknown channels," IEEE Transactions on Wireless Communications, vol. 19, pp. 3133–3143, 2020.
- Medium experience with Matlab
- Medium experience with Python
- Matlab 5G Toolbox and Wireless Channel Estimation knowledge are a plus
Re-implementation of a Deep Reinforcement Learning-based Multipath Scheduler
In order to fully utilize the capabilities of a LiFi-RF Heterogeneous network, the client devices should be capable of using multiple network interfaces simultaneously. Thanks to multipath solutions like MPTCP, this is possible.
The challenge in an MPTCP-enabled heterogeneous network lies in designing a policy to schedule data packets onto the multiple paths with heterogeneous characteristics (e.g., delay, packet loss).
This work involves
Re-implementing an existing deep reinforcement learning model of a multipath scheduler
Evaluating the models extensively in an emulation environment (Mininet)
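As a toy illustration of the scheduling problem (not the deep reinforcement learning model to be re-implemented), a hand-crafted policy that scores each path by its estimated completion time might look like this; the path parameters are made up:

```python
def schedule_packets(packets, paths):
    """Toy minimum-delay-first scheduler: send each packet on the path
    whose estimated completion time (one-way delay plus queued backlog
    drain time) is smallest. A learned policy would replace this score.

    paths: dict name -> {'delay': ms, 'rate': packets/ms, 'backlog': 0}
    """
    assignment = []
    for pkt in packets:
        name = min(paths, key=lambda p: paths[p]['delay'] +
                   paths[p]['backlog'] / paths[p]['rate'])
        paths[name]['backlog'] += 1          # packet queued on that path
        assignment.append((pkt, name))
    return assignment

# Hypothetical LiFi/RF pair: LiFi is fast until its queue fills up.
paths = {'lifi': {'delay': 2, 'rate': 1.0, 'backlog': 0},
         'rf':   {'delay': 10, 'rate': 1.0, 'backlog': 0}}
plan = schedule_packets(range(12), paths)
```

The sketch shows why static rules struggle: the right split depends on queue state and path dynamics, which is what the reinforcement learning scheduler is meant to learn.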
If you are interested in this work, please send an email with a short introduction of yourself along with your CV and grade transcript.
Short Description: The goal of this thesis is to investigate the patterns exhibited in the decisions of the Chameleon controller.
The Chameleon paper describes a control path algorithm with the goal of achieving predictable latency and high network utilization. Chameleon utilizes path diversity, priority queuing, and route recalculation to outperform the state of the art.
The controller decides if a new flow can be embedded depending on the current network state and the requirements of the new flow. Understanding the decision process of Chameleon is crucial to further improve performance. Therefore, the goal of this thesis is to investigate the decisions of the Chameleon controller. The task of the student is to design a framework to easily generate and store data with Chameleon for further evaluation. After this, the student should evaluate the collected data. The goal here is to find patterns the controller exhibits in its decisions.
Amaury Van Bemten, Nemanja Đerić, Amir Varasteh, Stefan Schmid, Carmen Mas-Machuca, Andreas Blenk, and Wolfgang Kellerer. 2020. Chameleon: Predictable Latency and High Utilization with Queue-Aware and Adaptive Source Routing. In The 16th International Conference on emerging Networking EXperiments and Technologies (CoNEXT '20), December 1–4, 2020, Barcelona, Spain. ACM, New York, NY, USA, 15 pages. https://doi.org/10.1145/3386367.3432879
Joint optimization of caching and slicing for In-Flight Entertainment and Connectivity Services
5G networks are anticipated to support the tremendous growth in traffic demands and the heterogeneity of future applications. In this regard, network slicing paves the way towards programmable and flexible networks. On the other hand, delay-critical applications require an additional performance boost, which solutions like Mobile Edge Computing (MEC) or network caching can provide.
In this thesis, the student shall focus on the algorithmic part of joint network slicing and edge caching. A solution is needed to efficiently manage the resources in order to provide the required performance.
Good knowledge of Python.
Good mathematical background. Basic knowledge of network slicing and optimization algorithms is a plus.
Cloud-native deployments of the 5G core network are gaining increasing interest, and many providers are exploring these options. One of the key technologies that will be used to deploy these networks is Kubernetes (k8s).
In 5G, the NG Application Protocol (NGAP) is used for gNB-AMF (RAN-Core) communication. NGAP uses SCTP as its transport layer protocol. In order to load-balance traffic coming from the gNB towards a resilient cluster of AMF instances, an L4 load balancer needs to be deployed in the Kubernetes cluster.
The goal of this project is to develop an SCTP load balancer to be used in a 5G core network to aid the communication between the RAN and the core. The project will be developed in the Go language (https://golang.org/).
- General knowledge about Mobile Networks (RAN & Core).
- Good knowledge of cloud orchestration tools like Kubernetes.
- Strong programming skills. Knowledge of Go (https://golang.org/) is a plus.
Development of an East/West API for SD-RAN control communication
Software-Defined Radio Access Network (SD-RAN) is receiving a lot of attention in 5G networks, since it offers means for a more flexible and programmable mobile network architecture.
The heart of the SD-RAN architecture is the so-called SD-RAN controller. Currently, initial prototypes have been developed and used in commercial and academic testbeds. However, most of the solutions contain only a single SD-RAN controller. A single controller, though, becomes a single point of failure for the system, not only due to potential controller failures but also due to the high load induced by the devices in the data plane.
To this end, a multi-controller control plane often becomes a reasonable choice. However, a multi-controller control plane renders the communication among the controllers more challenging, since they often need to exchange control information with each other to keep an up-to-date network state. Unfortunately, no protocol is currently available for such communication.
The aim of this work is the development and implementation of an East/West API for SD-RAN controller communication in line with 5G standardization. The protocol should enable the exchange of information among SD-RAN controllers regarding UEs, BSs, and wireless channel state, and allow for control plane migration among controllers.
Investigation of a Backward-Compatible Data Rate Increase for Proprietary Fire Alarm Bus Technology
The topic of this Bachelor's thesis is the development, optimization, and evaluation of the proprietary fire alarm bus system LSNi of Bosch Sicherheitssysteme GmbH in Grasbrunn.
Fire alarm systems are an established, mandatory part of building technology and are to be used for new networked IoT services. Due to the ever-increasing requirements of these services, the data rate over the fire alarm bus is to be increased. At present, the bus system is used for the comparatively data-light alarm polling of the network elements and their responses. An important point is not to degrade the system characteristics of the fire alarm system. Such networks are declared safety systems and must therefore be extremely reliable in terms of failure resilience and error detection.
The task is to find a new transmission technique with a higher data rate on the physical layer of the bus and to implement it in a prototype. The optimization of the prototype is to be worked out through calculation and experimentation with system elements. Subsequently, various test setups will be evaluated in the company's own laboratory. The recorded data will be analyzed with a Matlab evaluation program to be developed, in order to make clear statements about the functionality of the system. A particular challenge in systems of this kind is the long cable lengths and the large number of system elements on the bus. The evaluated results will be used to verify to what extent the data rate on the bus can be increased.
The idea behind this master's thesis is to build on previous research on formal proofs to model congestion control algorithms and to formally prove their behavior and/or provide a scenario where they would fail to perform as intended.
Joint network and edge cloud α-fair resource allocation to vehicular users in a cell for URLLC traffic
In this work, the focus is on vehicular users transmitting URLLC traffic to the base station. Besides the traffic "arriving" at the base station, it has to be processed in the edge cloud, which is assumed to be associated with that base station. There are two types of resources: radio link and edge cloud resources. The first is responsible for "bringing" the traffic to the base station, whereas the second is in charge of processing that information. As we are interested in URLLC traffic, there is a finite deadline by which the information has to be processed at the edge, so the delay comprises the transmission, propagation, and processing delays. Assuming there are multiple users of this same use case, the two types of resources have to be split among them. The objective is to provide an α-fair resource allocation by deciding which physical resource blocks are allocated to which users, and which edge cloud units are dedicated to which user, so that the finite latency requirement is fulfilled. The student would need to solve an integer program.
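A minimal sketch of such an integer program, solved by brute force for a toy instance (two users, a handful of resource units; all numbers and the min-of-resources rate model are illustrative assumptions):

```python
import itertools, math

def alpha_utility(x, alpha):
    """Classic alpha-fair utility; alpha=1 is proportional fairness."""
    if x <= 0:
        return float('-inf')
    return math.log(x) if alpha == 1 else x ** (1 - alpha) / (1 - alpha)

def best_split(n_users, prbs, cloud_units, alpha):
    """Brute-force the small integer program: jointly split PRBs and edge
    cloud units so that the alpha-fair sum utility of the users' effective
    rates is maximal; a user's rate is capped by the scarcer resource."""
    def splits(total):
        return (c for c in itertools.product(range(total + 1),
                                             repeat=n_users)
                if sum(c) == total)
    best = (float('-inf'), None)
    for p in splits(prbs):
        for c in splits(cloud_units):
            rates = [min(pi, ci) for pi, ci in zip(p, c)]
            u = sum(alpha_utility(r, alpha) for r in rates)
            if u > best[0]:
                best = (u, (p, c))
    return best

# alpha = 1 (proportional fairness), 2 users, 6 PRBs, 6 cloud units
u, (prb_split, cloud_split) = best_split(2, 6, 6, alpha=1)
```

For symmetric users the α-fair optimum is the even split of both resources; the actual thesis problem adds per-user channel qualities and the URLLC deadline as constraints, which this sketch omits.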
Implementing and Evaluating a Neural Network-based Routing Protocol
Keywords: AI, deep learning, eBPF, MPLS Networking
The goal of traffic engineering (TE) in communication networks is to steer traffic such that an objective is optimized. The objective depends on the application and can be, among others, minimizing the maximum link utilization or minimizing the flow completion time. To optimize any objective, a TE solution maps the current state of the network into a forwarding decision: given the network state, source, and destination, forward this traffic along that path. Currently, the decision making in TE systems is based on simple, hand-crafted algorithms. The reason lies in the strict computational requirements on any TE algorithm (decisions at (sub)millisecond scale) and the necessity to realize the TE system as a distributed protocol.
Recent work shows that Neural Networks (NNs) can learn a distributed protocol from examples. The NN uses decisions of a TE system and synthesizes a distributed protocol out of those examples. In the process, the NN learns how information on a node should be encoded, which nodes need to exchange information to make decisions, and how to map the exchanged network state into a forwarding decision. For fast inference, the NN that makes the decisions is a fully binarized NN, i.e., input, weights, and activations are binary. While a Binary Neural Network (BNN) accelerates the evaluation, a practical implementation in existing hardware is still missing.
The goal of this thesis is to fill this gap and realize the BNN on a physical system. Your task is to develop a host-based implementation (Appendix C.2 in ) using the extended Berkeley Packet Filter (eBPF). Concretely, the eBPF implementation must process update messages, use the NN to make forwarding decisions for outbound traffic, and signal the forwarding decisions to the network. The creation and sending of update messages on switches is not part of the thesis and will be provided. The final deliverable is a small VM-based Clos topology (e.g., two pods of a k=4 fat-tree topology) in which traffic is routed based on the path determined by the NN in the end-hosts.
Analysis of Mobility and Blockage using Madrid Grid Mobility Model
In this thesis, the Madrid Grid Mobility model is extended to consider geometric blockages. Then the model is compared to the 3GPP blockage model, which is distance and probability-based. In the second part, an optimization problem to maximize network throughput considering mobility KPIs such as handovers is formulated and solved using a solver. Finally, a fast and well-performing heuristic will be developed.
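A geometric blockage check of the kind this extension needs can be sketched as a 2-D line-of-sight test against building footprints (coordinates are illustrative; collinear corner cases are ignored in this sketch):

```python
def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 (the LOS link) properly crosses segment
    q1-q2 (one edge of a blocking building), via orientation tests."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return (orient(p1, p2, q1) != orient(p1, p2, q2) and
            orient(q1, q2, p1) != orient(q1, q2, p2))

def los_blocked(ue, bs, buildings):
    """Hypothetical geometric blockage test: the UE-BS line of sight is
    blocked if it crosses any building edge (2-D footprints)."""
    return any(segments_intersect(ue, bs, e1, e2)
               for edges in buildings for e1, e2 in edges)

# One square building footprint between UE and BS
square = [((40, -10), (40, 10)), ((40, 10), (60, 10)),
          ((60, 10), (60, -10)), ((60, -10), (40, -10))]
blocked = los_blocked((0, 0), (100, 0), [square])   # cuts through it
clear = los_blocked((0, 50), (100, 50), [square])   # passes above it
```

This deterministic test contrasts with the 3GPP model, which draws blockage from a distance-dependent probability instead of actual geometry; comparing the two is the first part of the thesis.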
Automated Generation of Adversarial Inputs for Data Center Networks
Keywords: adversarial inputs, data center networks
Today's Data Center (DC) networks face increasing demands and a plethora of requirements. Factors for this are the rise of cloud computing, virtualization, and emerging high-data-rate applications such as distributed machine learning frameworks. Many proposals for network designs and routing algorithms covering different operational goals and requirements have been made. This variety makes it hard for operators to choose the "right" solution. Recently, some works have proposed mechanisms that automatically generate adversarial inputs to networks or networking algorithms [1,2] to identify weak spots, in order to get a better view of their performance and to support operators' decision making. However, they focus on specific scenarios. The goal of this thesis is to develop or extend such mechanisms so that they can be applied to a wider range of scenarios than previously. The thesis builds upon an existing flow-level simulator in C++ and initial algorithms that generate adversarial inputs for networking problems.
[1] S. Lettner and A. Blenk, "Adversarial Network Algorithm Benchmarking," in Proceedings of the 15th International Conference on emerging Networking EXperiments and Technologies, Orlando, FL, USA, Dec. 2019, pp. 31–33, doi: 10.1145/3360468.3366779.
[2] J. Zerwas et al., "NetBOA: Self-Driving Network Benchmarking," in Proceedings of the 2019 Workshop on Network Meets AI & ML (NetAI'19), Beijing, China, 2019, pp. 8–14, doi: 10.1145/3341216.3342207.
Analysis of Air Traffic Control Communication WANs
Today's air traffic control communication systems use IP-based protocols for voice transmission, voice recording, and signaling. The architecture of ATC communication WANs ranges from small WANs consisting of locations within an airport's premises to large WANs comprising multiple locations country-wide.
When existing towers or control centers that use legacy analog voice systems want to upgrade to a new, IP-based system, it is not known in advance if the existing IP infrastructure meets the requirements to migrate to an IP-based system. This is of particular importance, as ATC communication is considered a critical infrastructure with correspondingly high requirements regarding reliability, availability, and latency.
The goal of the thesis is to investigate different approaches to WAN measurement. Further, a tool is to be developed that analyzes unknown WANs and assesses whether the KPIs for providing high-quality VoIP service are met.
VoIP and ATC infrastructure, Python, C++, Eurocae ED-137 standards
For this thesis, the student will evaluate different deployment schemes for cloud-native 5G network functions and will propose state-migration mechanisms to improve the performance of the system as a whole.
Towards Log Data-driven Fault Analysis in a Heterogeneous Content Provider Network
Bayerischer Rundfunk (BR) operates a network to deliver content via television, radio, and the internet to its users. This requires a highly heterogeneous network. The network monitoring solution for the BR network collects log data from the involved devices and stores it in a central database. Currently, human operators make network management decisions based on a manual review of this log data. This especially includes root cause identification in case of network failures. Such a human-centric process can be tedious and does not scale well with increasing network complexity. In this thesis, the student should perform a thorough analysis of the described data and evaluate the potential for automated processing. The goal is to provide a data-driven approach that significantly supports human operators in identifying root causes in case of network failures.
Reliability Analysis of ONOS Releases based on Code Metrics and SRGM
Software Defined Networking (SDN) separates the control and data planes. The control plane can be considered the brain of the network: it is responsible for configuring flows, finding paths, and managing network functionalities like firewalls, load balancing, etc. For this reason, the SDN controller has become complex. Furthermore, it is a large software platform with many contributors of different experience levels. As a result, the code contains many undetected and unresolved bugs. If one of these bugs is activated in the operational state, it may cause performance degradation or even a collapse of the whole system.
SDN serves a broad range of applications with different requirements. Some application areas, like autonomous driving, require high reliability, and performance degradation may cause undesired results. Software Reliability Growth Models (SRGM) are statistical frameworks based on historical bug reports and are widely used to estimate the reliability of a software system. Open Network Operating System (ONOS) is an open source project and has become one of the most popular SDN platforms. Its historical bug reports are openly available in its JIRA issue tracker. Currently, ONOS has 23 releases; its first ten versions have been investigated with different SRGM models [1], and it was found that different SRGMs fit the bug detection of different ONOS versions.
Source code metrics are quantitative characteristics of the code. Such metrics can describe the size of the code (lines of code), its complexity (McCabe's complexity), etc. They have been used to predict the number of bugs, identify potential bug locations, etc.
The goal of this work is to analyse the reliability of different ONOS releases. For that purpose, an understanding of the correlation between the structure of the source code and the bug manifestation process is crucial to predict the future bug manifestation of new releases. First, state-of-the-art research on SRGMs will be reviewed to understand software reliability and SRGMs. Afterwards, the student should implement different SRGMs to fit the error manifestation of every release and compare the results with the mentioned research [1]. Then, different code metrics will be obtained from each ONOS release, and the correlation between SRGM and code metrics will be revealed. Finally, the reliability of each release will be analyzed with the best-fitting SRGM. The result of this work will be a proposed reliability metric combining SRGM and code metrics that improves software reliability prediction.
[1] P. Vizarreta, K. Trivedi, B. Helvik, P. Heegaard, W. Kellerer, and C. Mas Machuca, "An empirical study of software reliability in SDN controllers," 13th International Conference on Network and Service Management (CNSM), 2017.
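To illustrate what fitting an SRGM involves, here is a minimal sketch of the Goel-Okumoto model fitted to a synthetic bug history by grid search (a real study would use maximum likelihood estimation on the actual JIRA data; all numbers below are made up):

```python
import math

def goel_okumoto(t, a, b):
    """Mean value function of the Goel-Okumoto SRGM: expected cumulative
    number of bugs detected by time t (a = total bugs, b = detection rate)."""
    return a * (1 - math.exp(-b * t))

def fit_go(times, bugs, a_grid, b_grid):
    """Tiny grid-search least-squares fit; returns the (a, b) pair
    minimizing the squared error against the observed bug counts."""
    def sse(a, b):
        return sum((goel_okumoto(t, a, b) - y) ** 2
                   for t, y in zip(times, bugs))
    return min(((a, b) for a in a_grid for b in b_grid),
               key=lambda ab: sse(*ab))

# Synthetic release history: 100 total bugs, detection rate 0.3 per month
times = range(1, 13)
bugs = [goel_okumoto(t, 100, 0.3) for t in times]
a_hat, b_hat = fit_go(times, bugs,
                      a_grid=range(50, 151, 10),
                      b_grid=[i / 100 for i in range(10, 51, 5)])
```

The fitted `a` is the model's estimate of the total (including residual, not yet detected) bugs in a release, which is what makes SRGMs usable as a reliability metric; correlating `a` and `b` with code metrics is the core of the thesis.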
Reinforcement Learning for joint/dynamic user and slice scheduling in RAN towards 5G
In the Radio Access Network (RAN), the MAC scheduler has largely been inherited across past generations and adapted to fit new networking goals and service requirements. The rapid deployment of new 5G technologies will make upgrading current schedulers extremely complicated and difficult to improve and maintain. Therefore, finding new solutions for efficient Radio Resource Scheduling (RRS) is necessary to meet the new KPI targets. 5G networks and beyond use the concept of network slicing, forging virtual instances (slices) of the physical infrastructure. A heterogeneous network requires a more optimized and dynamic RRS approach. In view of the development of SD-RAN controllers and artificial intelligence, new promising tools such as reinforcement learning can prove useful for such a problem.
In this thesis, a data-driven MAC slice scheduler will be implemented that maximizes user utility while learning the optimal slice partitioning ratio. A deep reinforcement learning technique will be used to evaluate radio resource scheduling and slicing in the RAN. The results will be compared with traditional schedulers from the state of the art.
Optimization of Access Point Placement in Beyond 5G LiFi-RF Wireless Heterogeneous Networks
Due to the rapid increase in the number of mobile phones and other handheld devices, enormous pressure is placed on the already overloaded cellular networks to provide the required Quality of Service. Heterogeneous networks, with multiple coexisting and collaborating technologies like 5G, WiFi, and LiFi, have emerged as a good solution to increase capacity.
In order to guarantee optimal coverage, avoid interference and minimize costs, the cellular base stations and WiFi/LiFi access points have to be placed optimally in the environment.
Working knowledge of Python
Sound knowledge of Wireless Communication
Interest to learn new communication technologies
Experience with optimization problems is an advantage
Age of Information-based Multi-Hop Communication using GNU Radio & Software-defined Radios
Keywords: GNU Radio, Software-defined Radios, Age of Information
Age of Information (AoI) is defined as the time elapsed since the generation of the most recently received packet. It has been used as a cross-layer metric in remote monitoring and control scenarios.
AoI-aware networking and protocol design requires modifications within the communication stack. To that end, our department has set up a testbed that consists of multiple software-defined radios (SDRs) programmed in GNU Radio. This enables us to develop our own solutions, e.g., MAC protocols or routing algorithms, to improve AoI performance at the application layer.
The tasks of this project will consist of:
Working with the existing setup, which consists of multiple SDRs programmed with GNU Radio / C++
Scheduling of flows over multi-hop paths between the source and the destination pairs
Performance analysis and comparison of experimental results to analytical ones from the literature
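For reference, the time-average AoI that such experiments measure can be computed from generation/reception timestamps as follows (a simplified sketch that assumes the monitor starts fresh at time 0):

```python
def average_aoi(packets, horizon):
    """Time-average Age of Information at a monitor over [0, horizon].

    packets: (generation_time, reception_time) pairs sorted by reception
    time. The age grows linearly with time and, on reception of a packet
    fresher than anything seen so far, drops to reception - generation.
    Stale (out-of-order) arrivals are ignored.
    """
    area = 0.0               # integral of the age curve
    t = 0.0                  # current time
    age = 0.0                # current age (assumed 0 at time 0)
    newest = float('-inf')   # generation time of freshest packet so far
    for gen, recv in packets:
        if gen <= newest:                   # stale update: age keeps growing
            continue
        dt = recv - t
        area += age * dt + dt * dt / 2      # trapezoid under the age ramp
        age = recv - gen                    # drop to the packet's own age
        t, newest = recv, gen
    dt = horizon - t
    area += age * dt + dt * dt / 2          # tail after the last update
    return area / horizon

# Two updates: generated at t=0 and t=2, received at t=1 and t=3
avg = average_aoi([(0, 1), (2, 3)], horizon=4)
```

The same sawtooth integration underlies the analytical AoI results from the literature that the experimental measurements are to be compared against.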
The project requires:
Solid C/C++ knowledge and low-level programming
Python skills are beneficial
Solid fundamental knowledge in wireless communications
Having worked with practical implementations of wireless communication devices is a great benefit, such as SDRs or sensor motes (some examples are Telosb Motes, Zolertia Re-motes, ...)
Implementation and Analysis of P4-based 5G User Plane Function
Keywords: 5G, P4, UPF
5G cellular networks are the state-of-the-art cellular networks for the coming 10 years. One of the critical network functions in the 5G core system is the evolved packet gateway, or User Plane Function (UPF). The UPF is responsible for carrying user packets from the base stations to the data network (e.g., the internet).
On the other hand, P4 is a promising language for programming packet processors. It can be used to program different networking devices (Software/FPGAs/ASICs/...).
Using P4 to implement the UPF has many advantages in terms of flexibility and scalability. In this work, the student will implement the UPF in the P4 language. Then, the advantages of this approach, especially in terms of performance gains, will be evaluated.
Deliberate Load-Imbalancing in Data Center Networks
Keywords: Traffic Engineering, Scheduling, Data Center Networks
Short Description: The goal of this thesis is the implementation and evaluation of an in-dataplane flow scheduling algorithm based on the online scheduling algorithm IMBAL in NS3.
Recently, HULA, a scalable in-dataplane load balancing algorithm, has been proposed; it leverages P4 to estimate the utilization in the network and assigns flows to the least utilized path. This approach can be interpreted as a form of Graham's List algorithm.
In this thesis, the student is tasked to investigate how a different online scheduling algorithm called IMBAL performs compared to HULA. A prototype of IMBAL should be implemented in NS3. The tasks of this thesis are:
Literature research and overview to online scheduling and traffic engineering in data center networks.
Design how IMBAL can be implemented in NS3.
Implementation of IMBAL in NS3.
Evaluation of the implementation in NS3 with production traffic traces and comparison to HULA (a HULA implementation is provided by the chair and is not part of this thesis).
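The List-scheduling interpretation mentioned above can be sketched in a few lines; the flow demands and path count below are made up, and IMBAL would replace the greedy least-loaded selection rule with its own policy:

```python
# Hedged sketch: flow-to-path assignment as Graham's List scheduling,
# i.e. each arriving flow goes to the currently least-utilized path.
# Flow demands and the number of paths are made-up illustration values.

def assign_flows(flows, num_paths):
    """Greedy List scheduling: assigns each flow (a demand, e.g. in
    Mbit/s) to the least-loaded path; returns per-path load and the
    per-flow path choice."""
    loads = [0.0] * num_paths
    assignment = []
    for demand in flows:
        target = min(range(num_paths), key=lambda p: loads[p])
        loads[target] += demand
        assignment.append(target)
    return loads, assignment

loads, assignment = assign_flows([5, 3, 8, 2, 4], num_paths=2)
print(loads)        # -> [11.0, 11.0]
print(assignment)   # -> [0, 1, 1, 0, 0]
```

An online scheduler like IMBAL would plug in a different selection rule at the `min(...)` step; the evaluation in NS3 then compares the resulting path loads and flow completion behavior.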
5G-RAN control plane modeling and Core network evaluation
Next generation mobile networks are envisioned to cope with heterogeneous applications with diverse requirements. To this end, 5G is paving the way towards more scalable and higher performing deployments. This leads to a revised architecture, where the majority of the functionalities are implemented as network functions, which could be scaled up/down depending on the application requirements.
3GPP has already released the 5G architecture overview; however, there exists no actual open-source deployment of RAN functionalities. Such a deployment will be crucial for evaluating the Core network both in terms of scalability and performance. In this thesis, the student shall understand the 5G standardization, especially the control-plane communication between the RAN and the 5G Core. Further, an initial RAN function compatible with the 5G standards shall be implemented, and an evaluation of control-plane performance will be carried out.
Strong knowledge of programming languages such as Python, C++, or Java.
Towards Digital Network Twins: Can we Machine Learn Network Function Behaviors?
Digital Network Twins can help to improve future network operation and management significantly. A Digital Twin (DT) of a network is a digital representation that is coupled to the real network. It can be used to perform experiments, e.g., to improve the operation of the real network. Running a detailed model of a network as a DT can be quite challenging: either the computational effort is too high to run the model in real time, or the abstraction level is too high, so the DT does not represent the real network closely enough. Here, Machine Learning (ML) could be a solution. Instead of accurately modeling the behavior analytically, an ML approach observes and learns the behavior of a network and its elements and may lead to a model that is less complex to use for a DT. Research in this direction is still in its infancy, however. This research internship should investigate the ability of current ML approaches to learn the behavior of network functions. For this task, Kubernetes' ingress controller, a load balancer (LB), shall be set up as an exemplary network function in a virtual testbed. The setup should further contain traffic generators to benchmark network functions and monitoring installations to observe the behavior of the LB. The collected data should then be used to train an ML model. The core question is how far the behavior of network functions can be learned and abstracted with ML models.
Time synchronisation in multi-point wireless deterministic edges based on the Fine Time Measurement protocol
Keywords: Time Synchronization, Time Sensitive Networking, Wireless Communication
The focus of the research internship concerns the support of the integration and eventually the development of time synchronisation extensions for the "Fine Time Measurement" protocol of IEEE 802.11ax. The work to be carried out concerns experimentation and performance evaluation of time synchronization aspects involving a hybrid Ethernet/TSN/Wi-Fi 6 testbed, where the wireless region integrates multi-AP scenarios.
C, C++; understanding of wireless communications; understanding of time synchronization.
Towards Digital Network Twins: Can we Machine Learn Network Function Behaviors?
Digital Network Twins can help to improve future network operation and management significantly. A Digital Twin of a network is a digital representation that is coupled to the real network. It can be used to perform experiments, e.g., to improve the operation of the real network. Running a detailed model of a network as a Digital Network Twin can be quite challenging: either the computational effort is too high to run the model in real time, or the abstraction level is too high, so the Digital Twin does not represent the real network closely enough. Here, Machine Learning could be a solution. Instead of accurately modeling the behavior analytically, a machine learning approach observes and learns the behavior of a network and its elements and may lead to a model that is less complex to use for a Digital Twin. Research in this direction is still in its infancy, however.
This research internship should investigate the ability of current machine learning approaches to learn the behavior of network functions. For this task, exemplary network functions, e.g., written in P4, should be set up in a virtual and a hardware testbed. The setup should further contain traffic generators to benchmark network functions and monitoring installations to observe the behavior of the network functions. The collected data should then be used to train machine learning models. The core question is how far the behavior of network functions can be learned and abstracted with machine learning models.
Evaluation of traffic model impact on a context-aware power consumption model of user equipment
Keywords: 5G, IIoT, energy, efficiency
Energy efficiency is one of the key performance requirements in the 5G network to ensure user experience. A portion of devices, especially in the Industrial Internet of Things (IIoT), run on a limited energy budget, powered by batteries that are not replaced over the device's lifetime.
Therefore, the estimation of the power consumption and battery lifetime has recently received increased attention. Multiple context parameters, such as mobility and traffic arrivals, impact the device's power consumption.
In this thesis, the student shall focus on analyzing the impact of different traffic models on the power consumption of user equipment. Different source and aggregated traffic models will be implemented depending on the number of devices in the scenario. The implemented traffic models will be evaluated based on a context-aware power consumption model for the user equipment.
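As a toy illustration of how a source traffic model feeds a power consumption estimate, consider Poisson packet arrivals driving a simple two-state (sleep/transmit) power model. All parameters below are hypothetical and not taken from the context-aware model used in the thesis:

```python
import random

# Toy sketch (hypothetical parameters): average UE power under a Poisson
# source traffic model with a two-state power model -- the device sleeps
# at p_sleep mW and spends t_tx seconds at p_tx mW per packet arrival.

def avg_power(rate_pkts_per_s, t_tx=0.01, p_tx=200.0, p_sleep=1.0,
              horizon=1000.0, seed=0):
    """Average power (mW) over `horizon` seconds."""
    rng = random.Random(seed)
    t, active_time = 0.0, 0.0
    while True:
        t += rng.expovariate(rate_pkts_per_s)   # exponential inter-arrivals
        if t > horizon:
            break
        active_time += t_tx
    active_time = min(active_time, horizon)     # ignore burst overlap here
    return (active_time * p_tx + (horizon - active_time) * p_sleep) / horizon
```

Swapping the arrival process (e.g., bursty or aggregated arrivals instead of `expovariate`) changes the active-time share and hence the estimated battery lifetime, which is exactly the sensitivity this thesis analyzes with a far more detailed model.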
Application of Per-Stream Filtering and Policing for stream delay guarantees in TSN
Time-Sensitive Networking (TSN) standards aim at providing stricter quality-of-service (QoS) guarantees than legacy Ethernet. Using the Time-Aware Shaper (TAS) feature of TSN, a pre-planned transmission schedule with dedicated time slots for critical traffic can guarantee very low end-to-end latency and jitter. However, it has several drawbacks, such as the algorithmic complexity of computing a conflict-free schedule, non-trivial re-configuration at runtime, and strict requirements on clock synchronization across the whole network.
As many use cases do not need the outstanding performance of a TAS-scheduled network, and many legacy end stations and applications are not capable of sending packets at a pre-planned point in time, the topic of this research internship is to elaborate on the performance of a strict-priority-based network, which behaves deterministically if the traffic patterns, i.e., the amount of data per time interval, are known for all high-priority traffic in the network.
The goal of this internship is to implement and evaluate an algorithm that computes the worst-case delay for given streams, considering the current network topology and data flow already present in the network.
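For a single strict-priority link, such a worst-case delay can be sketched with a network-calculus-style bound. This is a simplified illustration under assumed token-bucket traffic patterns, not the algorithm to be developed in the internship:

```python
# Hedged sketch of a network-calculus-style worst-case delay bound for
# one stream on a single strict-priority link. Every stream j is assumed
# token-bucket constrained (burst b_j bits, rate r_j bit/s); `l_max` is
# the largest lower-priority frame that may block non-preemptively.

def sp_delay_bound(stream, higher_prio, link_rate, l_max):
    """Worst-case delay (s) of `stream` = (burst, rate) on a link of
    `link_rate` bit/s, given the list of higher-priority (burst, rate)
    streams. The higher-priority rates must leave residual capacity."""
    b, r = stream
    residual = link_rate - sum(rj for _, rj in higher_prio)
    assert residual > r, "link overloaded; no finite bound"
    # Worst case: all equal/higher-priority bursts plus one blocking
    # lower-priority frame drain at the residual rate.
    backlog = sum(bj for bj, _ in higher_prio) + b + l_max
    return backlog / residual

# Example (made-up numbers): 1 Gbit/s link, one higher-priority stream,
# 12 kbit bursts and maximum frame size.
d = sp_delay_bound(stream=(12_000, 1e6), higher_prio=[(12_000, 5e6)],
                   link_rate=1e9, l_max=12_000)
```

The algorithm to be developed would apply such per-hop reasoning along the stream's path through the actual network topology and aggregate the per-hop bounds.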
Knowledge of TSN, TAS
Zikai Zhou, Andreas Blenk - Dr. Andreas Zirkler (Siemens AG (50%), LKN (50%))
Increased interference is one of the main drawbacks of cell densification, which is an important strategy for 5G networks to achieve higher data rates. Function centralization has been proposed as a strategy to counter this problem, by letting the physical or scheduling functions coordinate among one another. Nevertheless, the capacity of the fronthaul network limits the feasibility of this strategy, as the throughput required to connect low-level functions is very high. Fortunately, since not every function benefits in the same way from centralization, a more flexible approach can be used. Instead of centralizing all functions, only those providing the highest amount of interference mitigation can be centralized. In addition, the centralization level, or functional split, can be changed at runtime according to the instantaneous network conditions. Nonetheless, it is not fully known how costly it is to deploy and operate a network implementing a dynamic functional split.
In this internship, the cost of a radio access network implementing a dynamic functional split will be evaluated. A simulator already developed at LKN will be used and extended to produce network configurations adapted to the instantaneous user position and activity. Then, off-the-shelf cost models will be improved and used to estimate the deployment and operating cost of the network under multiple scenarios. Furthermore, the conditions under which a dynamic functional split is profitable will be investigated. Improvements on the functional-split selection algorithm will be proposed, such that the operator benefits from enhanced performance without operating at exceedingly costly states. Finally, a model that takes into account the cost of finding and implementing a new functional split will be employed and its results compared to the previous results.
Jitter Analysis and Comparison of Jitter Algorithms
In electronics and telecommunication, jitter is a significant and undesired factor. The effect of jitter on the signal depends on the nature of the jitter. It is important to sample jitter and noise sources when the clock frequency is especially prone to jitter or when one is debugging failure sources in the transmission of high-speed serial signals. Managing jitter is of utmost importance, and the methods of jitter decomposition have changed considerably over the past years.
In a system, jitter has many contributors, and it is not easy to identify them. Random Jitter in particular is difficult to isolate on a spectrogram. The waveforms are initially constant, but 1/f noise and flicker noise cause considerable disturbance in output measurements at particular frequencies in a system.
The task is to understand the difference between jitter calculations based on a step-response estimation and the dual-Dirac model by comparing the jitter algorithms of the R&S oscilloscope with those of competing oscilloscopes, and to assess how well jitter decomposition and identification perform.
The tasks in detail are as follows.
Setup a waveform simulation environment and extend to elaborate test cases
Run the generated waveforms through the algorithms
Analyze and compare the results:
Statistically (histograms, etc.)
Consistency of the results
Evaluate the estimation of the BER (bit error rate)
Identify the limitations of the dual-Dirac model
Compare dual-Dirac model results with a calculation based on the step-response estimation
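As background for the tasks above: the dual-Dirac model approximates deterministic jitter by two Dirac deltas separated by DJ(δδ) and random jitter by a Gaussian of rms value RJ, giving the total jitter at a target bit error rate as TJ(BER) = DJ(δδ) + 2·Q(BER)·RJ. A minimal Python sketch with made-up values:

```python
from statistics import NormalDist

# Sketch of the dual-Dirac total jitter estimate:
#   TJ(BER) = DJ(dd) + 2 * Q(BER) * RJ_rms,
# where Q(BER) is the one-sided Gaussian tail quantile.

def total_jitter(dj_dd, rj_rms, ber=1e-12):
    """Total jitter (same unit as inputs) at the given BER."""
    q = NormalDist().inv_cdf(1.0 - ber)   # Q(1e-12) is about 7.03
    return dj_dd + 2.0 * q * rj_rms

# Example with made-up values: DJ = 10 ps, RJ = 1 ps rms.
tj = total_jitter(10e-12, 1e-12)   # roughly 24.07 ps at BER 1e-12
```

A step-response-based calculation would estimate the deterministic contribution differently, which is exactly the comparison this internship targets.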
Short Description: Classification of packet level traces using Markov and Hidden Markov Models.
The goal of this thesis is the classification of packet-level traces using Markov and Hidden Markov Models. The scenario is open-world: traffic of specific web applications should be distinguished from all possible web pages (background traffic). In addition, several pages should be differentiated. Examples include: Google Maps, YouTube, Google Search, Facebook, Google Drive, Instagram, Amazon Store, Amazon Prime Video, etc.
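The Markov-chain part of such a classifier can be sketched as follows. The traces and class names below are toy examples; a real open-world setup additionally needs a background-traffic model and a rejection threshold:

```python
import math

# Toy sketch: per-class first-order Markov chains over discretized packet
# features (here symbol 0 = small packet, 1 = large packet), classified
# by maximum log-likelihood. Training traces below are made up.

def train_markov(trace, n_symbols, alpha=1.0):
    """Laplace-smoothed transition matrix from a symbol sequence."""
    counts = [[alpha] * n_symbols for _ in range(n_symbols)]
    for a, b in zip(trace, trace[1:]):
        counts[a][b] += 1
    return [[c / sum(row) for c in row] for row in counts]

def log_likelihood(trace, P):
    return sum(math.log(P[a][b]) for a, b in zip(trace, trace[1:]))

def classify(trace, models):
    """Pick the class whose Markov model maximizes the log-likelihood."""
    return max(models, key=lambda name: log_likelihood(trace, models[name]))

models = {
    "video":  train_markov([1, 1, 1, 0, 1, 1, 1, 1], 2),
    "search": train_markov([0, 0, 1, 0, 0, 0, 1, 0], 2),
}
print(classify([1, 1, 1, 1, 0, 1], models))   # -> video
```

A Hidden Markov Model replaces the observable transition matrix with hidden states and emission probabilities, but the maximum-likelihood classification step stays the same.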
Joint Planning of Optical and Satellite QKD networks
This research internship looks at Quantum Key Distribution (QKD) in a wide-area pan-European network. Different architectures, routing, and spectrum allocation methods need to be considered, keeping in mind both the technological and economic constraints of such a combined deployment.
This work is helpful for optical networking as well as satellite network providers in order to provide secure data transmission to multi-national governmental organizations, banks, and security agencies spread over large regions.
Probability parameters of 5G RANs featuring dynamic functional split
The architecture of 5G radio access networks features the division of the base station (gNodeB) into a centralized unit (CU) and a distributed unit (DU). This division enables cost reduction and a better user experience via enhanced interference mitigation. Recent research proposes the possibility to modify this functional split dynamically, that is, to change at runtime the functions that run on the CU and the DU. This has interesting implications for network operation.
In this topic, the student will employ a dedicated simulator developed at LKN to characterize the duration and transition rates of each functional split under multiple variables: population density, mitigation capabilities, mobility, etc. This characterization may then be used in traffic models to predict the network behavior.
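The characterization of holding times and transition rates can be sketched as follows. The split labels and sojourn times are made up; the LKN simulator would supply real traces:

```python
from collections import defaultdict

# Toy sketch: estimating mean holding times and exit rates of functional
# splits from a simulated state sequence. Labels and durations are made up.

def estimate_rates(trace):
    """`trace` is an ordered list of (split, duration_s) sojourns.
    Returns per-split mean holding time and exit rate (1/s)."""
    time_in = defaultdict(float)
    exits = defaultdict(int)
    for i, (split, dur) in enumerate(trace):
        time_in[split] += dur
        if i < len(trace) - 1:        # the last sojourn is censored
            exits[split] += 1
    return {s: {"mean_holding_s": time_in[s] / max(exits[s], 1),
                "exit_rate_per_s": exits[s] / time_in[s]}
            for s in time_in}

stats = estimate_rates([("split-A", 10.0), ("split-B", 5.0),
                        ("split-A", 30.0), ("split-C", 5.0)])
print(stats["split-A"])   # mean holding 20 s, exit rate 0.05 /s
```

Such per-split holding-time and rate estimates are exactly the parameters a traffic model would consume to predict split transitions.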
MATLAB, some experience with mobile networks and simulators
Student Assistant for the Internetkommunikation Lecture in SoSe22
Internetkommunikation offers the opportunity to develop interesting software solutions for technical questions about internet protocols and mechanisms. For the next semester, a position is available to assist the teaching assistants for the tutorials and class project.
Lightpath Lifecycle Management using TransportPCE and OpenROADM
The OpenROADM Multi-Source Agreement is the current open-source standard for data models in open disaggregated optical line systems. These data models enable optical network operators and software integrators to use open-source SDN controllers to discover, control, and configure optical line systems.
One such controller is the OpenDaylight TransportPCE, which provides southbound RESTCONF APIs to configure the emulated/real devices.
The goal of this working student position is to develop and set up a testbed with emulated OpenROADM-based simulator devices and connect them to the TransportPCE SDN controller. The SDN controller shall further be interfaced with an in-house network planning and orchestration engine to facilitate online configuration of lightpaths.
As preliminary results show, the Linux TCP/IP networking stack introduces a high networking delay. The topic of this work is to perform an empirical study of the Linux socket-based transmission approach and to implement a delay measurement workflow based on existing foundations and repositories.
Short Description: Development and implementation in Excel/VBA of visible light communication (VLC) techno-economic tool for IoT services.
Future IoT will need wireless links with high data rates, low latency and reliable connectivity despite the limited radio spectrum. Connected lighting is an interesting infrastructure for IoT services because it enables visible light communication (VLC), i.e. a wireless communication using unlicensed light spectrum. This work will aim at developing a tool to perform an economic evaluation of the proposed solution in the particular case of a smart office.
For that purpose, the following tasks will have to be performed:
Definition of a high-level framework specifying the different modules that will be implemented as well as the required inputs and the expected outputs of the tool.
Development of a cost evaluation Excel/VBA tool. This tool will make it possible to evaluate different variations of the selected case study and, if possible, to compare different alternative models (e.g., dimensioning) or scenarios (e.g., building types).
Implementation of Energy-Aware Algorithms for Service Function Chain Placement
Network Function Virtualization (NFV) is becoming a promising technology in modern networks. A challenging problem is determining the placement of Virtual Network Functions (VNFs). In this work, we plan to implement existing algorithms for embedding VNF chains in NFV-enabled networks.
Experience in Python or Java, object oriented programming
Working student for innovative Podcasts for BCN lecture
Short Description: Programming support for the design of podcasts/apps for the Broadband Communication Networks lecture
The lecture Broadband Communication Networks (Prof. Wolfgang Kellerer) teaches network-related methods of mobile communication: WIFI, 2G to 5G cellular networks, etc. In order to bridge the gap between the methods and real life, innovative teaching concepts shall be developed in the form of short podcasts, where students learn in short episodes about wireless communication and networking in day-to-day scenarios. In the podcasts, the students should also be introduced to short exercises they can perform on their own.
To support the design of these podcasts, Prof. Kellerer is looking for a student experienced in programming, and ideally with podcasts, to help him.
Very good programming skills; experience with podcasts; experience with app programming (on android/ios)
The Software-Defined Networking (SDN) lab offers the opportunity to work with real hardware on interesting and entertaining projects related to the SDN network paradigm. For the next semester, a position is available to assist the teaching assistants for the lab (definition and preparation of the assignments, preparation of the hardware, etc.).
Solid knowledge in computer networking (TCP/IP, SDN)
Solid knowledge of networking tools and Linux (iperf, ssh, etc.)