Harnessing Large Language Models for Intelligent Wireless Networking
Description
Explore the exciting world of Large Language Models (LLMs) with us!
As LLMs (like GPT-4) transform industries worldwide, this is your chance to be part of this transformative journey in wireless networking.
In this project, you will:
- Re-implement and analyze cutting-edge LLMs, evaluating their strengths, limitations, and specific applications in wireless networking.
- Identify and tailor these models to a unique use case in wireless networking, applying state-of-the-art techniques to solve real-world challenges.
Related Reading:
- Shao, Jiawei, Jingwen Tong, Qiong Wu, Wei Guo, Zijian Li, Zehong Lin, and Jun Zhang. "WirelessLLM: Empowering Large Language Models Towards Wireless Intelligence." arXiv preprint arXiv:2405.17053 (2024).
If you are interested in this work, please send me an email with a short introduction of yourself along with your CV and grade transcript.
Prerequisites
- Strong Python programming skills
- Strong foundation in wireless networking concepts
- Prior experience with machine learning frameworks
Supervisor:
Development of a 5G Multipath TCP Testbed for Multi-Access Network Optimization
Description
Join us in tackling one of the most pressing challenges in mobile networking—managing the growing demand for data and the need for higher performance in modern applications. As single-network connections struggle to keep up, the 3GPP's Access Traffic Steering, Switching, and Splitting (ATSSS) framework offers a solution, enabling devices to dynamically switch between and simultaneously use multiple network types like 5G, LTE, and Wi-Fi.
In this project, you will:
- Develop a cutting-edge 5G testbed that adheres to 3GPP standards.
- Integrate Multipath TCP to enable seamless communication across multiple network interfaces.
- Contribute to the optimization of mobile traffic management, enhancing both performance and reliability in next-generation networks.
This work is a unique opportunity to get hands-on experience with 5G technology and be at the forefront of mobile networking innovation.
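As a first step in such a testbed, MPTCP support can be probed from user space. The sketch below is Linux-specific (kernel 5.6 or newer with MPTCP enabled); the protocol number comes from the kernel UAPI headers, and the fallback path is an assumption about how the testbed might degrade gracefully:

```python
import socket

# IPPROTO_MPTCP is not yet exposed by the socket module on all Python
# versions; the value below is taken from the Linux UAPI headers.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def mptcp_socket():
    """Return an MPTCP stream socket, falling back to plain TCP
    on kernels without MPTCP support."""
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        # Kernel built without CONFIG_MPTCP, or MPTCP disabled via sysctl.
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

s = mptcp_socket()
s.close()
```

Whether the kernel actually establishes additional subflows over 5G, LTE, and Wi-Fi then depends on the configured MPTCP path manager, which is where the ATSSS steering logic would come in.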
Related Reading:
- M. Quadrini, D. Verde, M. Luglio, C. Roseti and F. Zampognaro, "Implementation and Testing of MP-TCP ATSSS in a 5G Multi-Access Configuration," 2023 International Symposium on Networks, Computers and Communications (ISNCC), Doha, Qatar, 2023, pp. 1-6, doi: 10.1109/ISNCC58260.2023.10323859.
If you are interested in this work, please send me an email with a short introduction of yourself along with your CV and grade transcript.
Prerequisites
- Experience with programming in C/C++
- Strong foundation in wireless networking concepts
- Motivation to learn 5G concepts
- Availability to work on-site
Supervisor:
Design and Implementation of an Intelligent Multipath Packet Scheduler
Description
Are you ready to dive into cutting-edge technology that merges LiFi and WiFi networks? Imagine your work enabling devices to seamlessly connect across multiple interfaces, pushing the boundaries of what's possible in wireless communication. With multipath solutions like MPTCP and MPQUIC, the potential is immense—but the challenge is real.
We are looking for a motivated student to design and implement a state-of-the-art wireless-channel-aware packet scheduler. You'll tackle the complex task of scheduling data packets across multiple network paths, each with unique characteristics like delay and packet loss.
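To give a flavor of the scheduling decision involved, here is a toy channel-aware scheduler. The path parameters and the geometric retransmission model are illustrative assumptions, not the project's actual design:

```python
# Two paths with different delay/loss characteristics (made-up values).
paths = [
    {"name": "wifi", "rtt_ms": 20.0, "loss": 0.02},
    {"name": "lifi", "rtt_ms": 5.0,  "loss": 0.10},
]

def expected_delivery_ms(path):
    # Geometric retransmission model: 1 / (1 - loss) attempts on average.
    return path["rtt_ms"] / (1.0 - path["loss"])

def schedule(n_packets):
    """Assign each packet to the path with the lowest estimated
    completion time, accounting for packets already queued on it."""
    backlog = {p["name"]: 0 for p in paths}
    order = []
    for _ in range(n_packets):
        best = min(
            paths,
            key=lambda p: (backlog[p["name"]] + 1) * expected_delivery_ms(p),
        )
        order.append(best["name"])
        backlog[best["name"]] += 1
    return order
```

Under these numbers the fast-but-lossy LiFi path absorbs traffic until its queue makes the WiFi path competitive; a real MPTCP/MPQUIC scheduler would additionally react to live RTT and loss estimates per subflow.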
Related Reading:
- W. Yang, L. Cai, S. Shu, J. Pan and A. Sepahi, "MAMS: Mobility-Aware Multipath Scheduler for MPQUIC," in IEEE/ACM Transactions on Networking, vol. 32, no. 4, pp. 3237-3252, Aug. 2024, doi: 10.1109/TNET.2024.3382269.
If you are interested in this work, please send me an email with a short introduction of yourself along with your CV and grade transcript.
Prerequisites
- Experience with Linux networking
- Strong foundation in wireless networking concepts
- Availability to work on-site
Supervisor:
Modeling a framework to evaluate manufacturer trustworthiness
manufacturer trustworthiness, sovereignty
Description
With the ongoing trade wars, sanctions, and geopolitical influences worldwide, nations are beginning to make their Internet infrastructure more sovereign by removing "high-risk" manufacturers from their networks. However, there are no established guidelines for determining the risk associated with a manufacturer.
This work aims to identify the factors that determine the trustworthiness of a manufacturer and to provide guidelines for evaluating it.
Prerequisites
Courses from LKN and/or background in Corporate economics
Contact
shakthivelu.janardhanan@tum.de
Supervisor:
PyRBD: Development of a frontend for a reliability suite
Reliability block diagram, front-end, minimal cut sets, fault trees
Description
A reliability block diagram (RBD) is a tool used to evaluate the availability of a system (a network, in our case). However, existing software packages do not support bidirectional links.
This work aims to build a tool that visualizes fault trees and minimal cut sets, completing a Python-based reliability suite.
Prerequisites
Python
Contact
shakthivelu.janardhanan@tum.de
Supervisor:
PyRBD: Development of a C++ backend for Reliability Block Diagram evaluation
Reliability block diagram, availability
Description
A reliability block diagram (RBD) is a tool used to evaluate the availability of a system (a network, in our case). However, existing software packages do not support bidirectional links.
This work aims to build a tool that evaluates the availability of a network based on its RBD. The back end should be implemented in C++ and wrapped in a Python function.
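Before committing to the C++ back end, the intended semantics can be pinned down in a few lines of Python. This is only a sketch of classic series/parallel RBD evaluation with made-up component availabilities; the actual back end would also have to handle the bidirectional-link cases the existing packages miss:

```python
from math import prod

def series(avails):
    # A series structure works only if every block works.
    return prod(avails)

def parallel(avails):
    # A parallel structure fails only if every redundant block fails.
    return 1 - prod(1 - a for a in avails)

# Example RBD: two redundant paths, each a series of two components.
path1 = series([0.99, 0.98])
path2 = series([0.995, 0.97])
system_availability = parallel([path1, path2])
```

The C++ implementation of such functions would then be compiled into a shared library and called from Python through a thin ctypes wrapper, as the prerequisites suggest.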
Prerequisites
Proficiency in C++
Basic knowledge of Python ctypes bindings
Contact
shakthivelu.janardhanan@tum.de
Supervisor:
Development of a Medical User Interface for a Telerobotic Examination Suite
Description
Telediagnostics will help bring medical expertise to every corner of the world, even to remote rural areas. The currently researched 6G communication standard will play a crucial role here. Thus, within the scope of the 6G-life project, the aim is to investigate telediagnostic and telerobotic scenarios in detail and to demonstrate their capabilities in practice.
In this research internship, the goal is to develop a graphical user interface (GUI) for an existing telediagnostic/telerobotic testbed provided by our project partner MITI, located at the hospital "Rechts der Isar" (ca. 20 min from TUM by bus). In particular, the task is to implement a GUI based on existing UI/UX concepts such that a doctor can perform a telerobotic diagnosis using the real robots in the setup. For that, different video streams, medical devices, and vital parameters (all already existing) shall be integrated and shown in the GUI. Furthermore, warnings shall be shown in dangerous situations.
Please note that this research internship is in cooperation with MITI. As the robotic setup is located there, working on-site at MITI will be required.
Prerequisites
- Motivation.
- Interest in the medical field.
- Experience in GUI development, especially with Qt/PyQt and video streaming, is helpful but not necessary.
Supervisor:
Working Student for the Implementation of a Medical Testbed
Communication networks, programming
Your goal is to implement a network for critical medical applications based on an existing open-access 5G networking framework as well as the adaptation of this network according to the needs of our research.
Description
Possible start date: 1st August 2024
Future medical applications put stringent requirements on the underlying communication networks in terms of highest availability, maximal throughput, minimal latency, etc. Thus, in the context of the 6G-life project, new networking concepts and solutions are being developed.
For the research of using 6G for medical applications, the communication and the medical side have joined forces: While researchers from the MITI group (Minimally invasive Interdisciplinary Therapeutical Intervention), located at the hospital "Rechts der Isar", focus on the requirements of the medical applications and on collecting the needed patient parameters, it is the task of the researchers at LKN to optimize the network to satisfy the applications' demands. The goal of this joint research work is to have working networks for two medical testbeds located in the hospital, to demonstrate the impact and benefits of future 6G networks and concepts for medical applications.
Your task during this work is to implement the communication network for those testbeds. Based on an existing open-access 5G network implementation, you will implement changes according to the progress of the current research. The results of your work, working 6G medical testbeds, will enable researchers to validate their approaches with real-world measurements and make it possible to demonstrate future 6G concepts to research, industry, and politics.
In this project, you will gain a deep insight into how communication networks, especially the Radio Access Network (RAN), work and how different aspects are implemented. Additionally, you will understand the current limitations and weaknesses as well as concepts for improvement. Also, you will get some insights into medical topics if interested. As in such a broad topic there are many open research questions, you additionally have the possibility to also write your thesis or complete an internship.
Prerequisites
- Most important: Motivation and willingness to learn unknown things.
- C/C++, knowledge of how other programming languages (Python, etc.) work, and/or the willingness to familiarize yourself with such languages.
- Preferred: Knowledge about communication networks (especially the RAN), 5G concepts, the P4 language, SDN, Linux.
- Initiative to bring in own ideas and solutions.
- Ability to work with various partners (teamwork ability).
Please note: It is not necessary to know about every topic mentioned above; it is much more important to be willing to read up on them.
Contact
Supervisor:
Mobile Communication RRC Message Security Analysis
5G, SDR, Security, RAN
Description
This topic involves an analysis of RRC messages in 4G and 5G. Several different kinds of these messages exist, with different functions and levels of information content. The focus should lie on messages related to the connection release. The analysis should consider privacy and security aspects. After the theoretical review and analysis, the practical part should focus on an attack: one security and privacy aspect should be implemented as a proof of concept using open-source hardware and software.
The following things are requested to be designed, implemented, and evaluated (most likely via proof-of-concept) in this thesis:
• Security and availability analysis of specific RRC messages
• Implementation of an attack
• Practical evaluation with testing of commercial smartphones
We will offer you:
• Initial literature
- https://doi.org/10.14722/NDSS.2016.23236
• Smart working environment
• Deep contact to supervisors and a lot of discussions and knowledge exchange
A detailed description of the topic will be formulated with you in initial meetings. Naturally, the report needs to be written according to the university's requirements; detailed documentation and a handover of the complete project with all sources are also expected. Depending on the chosen thesis type, the content will be adapted in complexity.
All applications must be submitted through our application website INTERAMT:
https://interamt.de/koop/app/stelle?id=1103974
Carefully note the information provided on the site to avoid any issues with your application.
Please include
• a short CV
• current overview of your grades
• the keyword "T3-MK-RRC" as comment
in your application.
For any questions or further details regarding this thesis and the application process, please don't hesitate to contact:
• TUM contact: nicolai.kroeger@tum.de, serkut.ayvasik@tum.de
• Forschungreferat T3 (ZITiS), Email: t3@zitis.bund.de
Prerequisites
Knowledge in the following fields is required:
• C/C++
Knowledge in the following fields would be an advantage:
• Mobile Communication 4G, 5G
Contact
• TUM contact: nicolai.kroeger@tum.de, serkut.ayvasik@tum.de
• Forschungreferat T3 (ZITiS), Email: t3@zitis.bund.de
Supervisor:
Mobile Communication Broadcast Message Security Analysis
5G, SDR, Security, RAN
Description
This topic involves an analysis of broadcast messages in 4G and 5G. Several different kinds of these messages exist, with different functions and levels of information content. The analysis should consider privacy and security aspects. After the theoretical review and analysis, the practical part should focus on one aspect of the findings: one security and privacy aspect should be implemented as a proof of concept using open-source hardware and software.
The following things are requested to be designed, implemented, and evaluated (most likely via proof-of-concept) in this thesis:
• Security and privacy analysis of Broadcast Messages
• Implementation of an attack
• Practical evaluation with testing of commercial smartphones
We will offer you:
• Initial literature
- https://dl.acm.org/doi/10.1145/3307334.3326082
• Smart working environment
• Deep contact to supervisors and a lot of discussions and knowledge exchange
A detailed description of the topic will be formulated with you in initial meetings. Naturally, the report needs to be written according to the university's requirements; detailed documentation and a handover of the complete project with all sources are also expected. Depending on the chosen thesis type, the content will be adapted in complexity.
All applications must be submitted through our application website INTERAMT:
https://interamt.de/koop/app/stelle?id=1103974
Carefully note the information provided on the site to avoid any issues with your application.
Please include
• a short CV
• current overview of your grades
• the keyword "T3-MK-BROADCAST" as comment
in your application.
For any questions or further details regarding this thesis and the application process, please don't hesitate to contact:
• TUM contact: nicolai.kroeger@tum.de, serkut.ayvasik@tum.de
• Forschungreferat T3 (ZITiS), Email: t3@zitis.bund.de
Prerequisites
Knowledge in the following fields is required:
• C/C++
Knowledge in the following fields would be an advantage:
• Mobile Communication 4G, 5G
Contact
• TUM contact: nicolai.kroeger@tum.de, serkut.ayvasik@tum.de
• Forschungreferat T3 (ZITiS), Email: t3@zitis.bund.de
Supervisor:
Improving network availability - a minimal cut set approach
availability, reliability, Minimal cut set
Description
A cut set is a set of components whose joint failure causes the system to fail. A cut set is minimal if it cannot be reduced without losing its status as a cut set.
In this work, we aim to improve network availability based on minimal cut sets, employing graph coloring methods to improve the availability of the minimal cut sets.
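As a toy illustration of how minimal cut sets drive availability analysis (all component availabilities and cut sets below are made up, not from the project), the union bound over the cut sets gives an upper bound on system unavailability and points to the cut set worth improving first:

```python
from math import prod

# Illustrative component availabilities and minimal cut sets.
avail = {"a": 0.999, "b": 0.995, "c": 0.99, "d": 0.999}
min_cut_sets = [{"a", "b"}, {"c"}, {"b", "d"}]

# A minimal cut set causes system failure only if all of its components
# fail simultaneously; summing over all cut sets (union bound) yields an
# upper bound on system unavailability.
unavail_bound = sum(prod(1 - avail[c] for c in cs) for cs in min_cut_sets)
system_avail_lower = 1 - unavail_bound

# The cut set with the largest failure probability dominates the bound,
# so hardening its components yields the largest availability gain.
worst = max(min_cut_sets, key=lambda cs: prod(1 - avail[c] for c in cs))
```

In this example the single-component cut set dominates; the graph coloring step in the thesis would then decide how to assign redundancy or protection across such dominating cut sets.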
Prerequisites
Mandatory: Python, the Communication Network Reliability course, and Integer Linear Programming.
Contact
shakthivelu.janardhanan@tum.de
Supervisor:
Working Student for the Implementation of a Medical Testbed
Communication networks, programming
Your goal is to implement a network for critical medical applications based on an existing open-access 5G networking framework as well as the adaptation of this network according to the needs of our research.
Description
Possible start date: 1st October 2024
Future medical applications put stringent requirements on the underlying communication networks in terms of highest availability, maximal throughput, minimal latency, etc. Thus, in the context of the 6G-life project, new networking concepts and solutions are being developed.
For the research of using 6G for medical applications, the communication and the medical side have joined forces: While researchers from the MITI group (Minimally invasive Interdisciplinary Therapeutical Intervention), located at the hospital "Rechts der Isar", focus on the requirements of the medical applications and on collecting the needed patient parameters, it is the task of the researchers at LKN to optimize the network to satisfy the applications' demands. The goal of this joint research work is to have working networks for two medical testbeds located in the hospital, to demonstrate the impact and benefits of future 6G networks and concepts for medical applications.
Your task during this work is to implement the communication network for those testbeds. Based on an existing open-access 5G network implementation, you will implement changes according to the progress of the current research. The results of your work, working 6G medical testbeds, will enable researchers to validate their approaches with real-world measurements and make it possible to demonstrate future 6G concepts to research, industry, and politics.
In this project, you will gain a deep insight into how communication networks, especially the Radio Access Network (RAN), work and how different aspects are implemented. Additionally, you will understand the current limitations and weaknesses as well as concepts for improvement. Also, you will get some insights into medical topics if interested. As in such a broad topic there are many open research questions, you additionally have the possibility to also write your thesis or complete an internship.
Prerequisites
- Most important: Motivation and willingness to learn unknown things.
- C/C++, knowledge of how other programming languages (Python, etc.) work, and/or the willingness to familiarize yourself with such languages.
- Preferred: Knowledge about communication networks (especially the RAN), 5G concepts, the P4 language, SDN, Linux.
- Initiative to bring in own ideas and solutions.
- Ability to work with various partners (teamwork ability).
Please note: It is not necessary to know about every topic mentioned above; it is much more important to be willing to read up on them.
Contact
Supervisor:
Latency and Reliability Guarantees in Multi-domain Networks
Multi-domain networks
Description
One aspect not covered by 5G networks is multi-domain networks, comprising one or more campus networks. These are private networks, including the Radio Access Network and Core Network, that are not owned by cellular operators but run, for example, by a university or a hospital. There will be scenarios in which the transmitter is in a different campus network than the receiver, so the data has to traverse networks operated by different entities.
Given the different operators managing the “transmitter” and “receiver” networks, providing any end-to-end performance guarantees in terms of latency and reliability can pose significant challenges in multi-domain networks. For example, if there is a maximum latency that a packet can tolerate in the communication cycle between the transmitter and receiver, the former experiencing given channel conditions would require a given amount of RAN resources to meet that latency. The receiver, on the other end of the communication path, will most probably experience different channel conditions. Therefore, it will require a different amount of resources to satisfy the end-to-end latency requirement. Finding an optimal resource allocation approach across different networks that would lead to latency and reliability guarantees in a multi-domain network will be the topic of this thesis.
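The budget-splitting idea above can be made concrete with the simplest queueing model. In this sketch, each domain is modelled as an M/M/1 queue and the end-to-end latency budget is split equally; all numbers are illustrative, and finding a better-than-equal split is exactly the optimization the thesis targets:

```python
# Illustrative parameters, not from any measurement.
lam = 100.0    # packet arrival rate (packets/s)
budget = 0.05  # end-to-end mean latency budget (s)

def mm1_delay(mu, lam):
    # Mean sojourn time of an M/M/1 queue (requires mu > lam).
    return 1.0 / (mu - lam)

# Equal split: each of the two domains must deliver budget / 2,
# so each needs service rate mu = lam + 2 / budget.
mu_needed = lam + 2.0 / budget
total_delay = 2 * mm1_delay(mu_needed, lam)
```

A domain with worse channel conditions effectively has a lower achievable service rate per resource unit, so an unequal split of the budget (and hence of the required RAN resources) between transmitter and receiver domains is generally preferable.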
Prerequisites
The approach used to solve these problems will rely on queueing theory. A good knowledge of any programming language is required.
Supervisor:
Decentralized Federated Learning on Constrained IoT Devices
Description
The Internet of Things (IoT) is an increasingly prominent aspect of our daily lives, with connected devices offering unprecedented convenience and efficiency. As we move towards a more interconnected world, ensuring the privacy and security of data generated by these devices is paramount. That is where decentralized federated learning comes in.
Federated Learning (FL) is a machine-learning paradigm that enables multiple parties to collaboratively train a model without sharing their data directly. This thesis focuses on taking FL one step further by removing the need for a central server, allowing IoT devices to directly collaborate in a peer-to-peer manner.
In this project, you will explore and develop decentralized federated learning frameworks specifically tailored for constrained IoT devices with limited computational power, memory, and energy resources. The aim is to design and implement efficient algorithms that can harness the collective power of these devices while ensuring data privacy and device autonomy. This involves tackling challenges related to resource-constrained environments, heterogeneous device capabilities, and maintaining security and privacy guarantees.
The project offers a unique opportunity to contribute to cutting-edge research with real-world impact. Successful outcomes will enable secure and private machine learning on IoT devices, fostering new applications in areas such as smart homes, industrial automation, and wearable health monitoring.
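To make the server-free collaboration concrete, here is a toy sketch of decentralized gossip averaging, a core primitive behind many decentralized FL schemes. The ring topology, mixing weights, and model dimensions are illustrative assumptions, not the framework to be developed:

```python
import numpy as np

rng = np.random.default_rng(0)
n_devices, dim = 4, 3
models = rng.normal(size=(n_devices, dim))  # one local model per device

# Ring topology: each device mixes its model with its two neighbours
# using a doubly stochastic weight matrix (no central server involved).
W = np.zeros((n_devices, n_devices))
for i in range(n_devices):
    W[i, i] = 0.5
    W[i, (i - 1) % n_devices] = 0.25
    W[i, (i + 1) % n_devices] = 0.25

for _ in range(50):          # repeated gossip rounds
    models = W @ models      # each row becomes a weighted neighbour average

# All devices converge towards the global average model.
consensus = models.mean(axis=0)
```

On constrained IoT devices, the interesting part is interleaving local training steps between gossip rounds and compressing the exchanged models to fit the memory and energy budget.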
Responsibilities:
- Literature review on decentralized federated learning, especially in relation to IoT and decentralized systems.
- Design and development of decentralized FL frameworks suitable for constrained IoT devices.
- Implementation and evaluation of the proposed framework using real-world datasets and testbeds.
- Analysis of security and privacy aspects, along with resource utilization.
- Documentation and presentation of findings in a thesis report, possibly leading to publications in top venues.
Requirements:
- Enrollment in a Master's program in Computer Engineering, Computer Science, Electrical Engineering or related fields
- Solid understanding of machine learning algorithms and frameworks (e.g., TensorFlow, PyTorch)
- Proficiency in C and Python programming language
- Experience with IoT devices and embedded systems development
- Excellent analytical skills and a systematic problem-solving approach
Nice to Have:
- Knowledge of cybersecurity and privacy principles
- Familiarity with blockchain or other decentralized technologies
- Interest in distributed computing and edge computing paradigms
Contact
Email: navid.asadi@tum.de
Supervisor:
Attacks on Cloud Autoscaling Mechanisms
Cloud Computing, Kubernetes, autoscaling, low and slow attacks, Horizontal Pod Autoscaler (HPA), Vertical Pod Autoscaler (VPA), cloud security, container orchestration
Description
In the era of cloud-native computing, Kubernetes has emerged as a leading container orchestration platform, enabling seamless scalability and reliability for modern applications.
However, with its widespread adoption comes a new frontier in cybersecurity challenges, particularly low and slow attacks that exploit autoscaling features to disrupt services subtly yet effectively.
This project aims to delve into the intricacies of these attacks, examining their impact on Kubernetes' Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA), and proposing mitigation strategies for more resilient systems.
Responsibilities:
- Conduct a thorough literature review to identify existing knowledge gaps and research on similar attacks.
- Develop methodologies to simulate low and slow attack scenarios on Kubernetes clusters with varying configurations of autoscaling mechanisms.
- Analyze the impact of these attacks on resource utilization, service availability, and overall system performance.
- Evaluate current defense mechanisms and propose novel strategies to enhance the resilience of Kubernetes' autoscaling features.
- Implement and test selected mitigation approaches in a controlled environment.
- Document findings, present a comparative analysis of effectiveness, and discuss implications for future development in cloud security practices.
Requirements:
- A strong background in computer engineering, computer science or a related field.
- Familiarity with Kubernetes architecture and container orchestration concepts.
- Experience in deploying and managing applications on Kubernetes clusters.
- Proficiency in at least one scripting/programming language (e.g., Python, Go).
- Understanding of cloud computing and cybersecurity fundamentals.
Nice to Have:
- Prior research or hands-on experience in cloud security, particularly in the context of Kubernetes.
- Knowledge of network protocols and low-level system interactions.
- Experience with DevOps tools and practices.
Contact
Email: navid.asadi@tum.de
Supervisor:
Working Student/Research Internship - On-Device Training on Microcontrollers
Description
We are seeking a highly motivated and skilled student to replicate a research paper that explores the application of pruning techniques for on-device training on microcontrollers. The original paper demonstrated the feasibility of deploying deep neural networks on resource-constrained devices, and achieved significant reductions in model size and computational requirements while maintaining acceptable accuracy.
Responsibilities:
- Extend our existing framework by implementing the pruning techniques on a microcontroller-based platform (e.g., Arduino, ESP32)
- Replicate the experiments described in the original paper to validate the results
- Evaluate the performance of the pruned models on various benchmark datasets
- Compare the results with the original paper and identify areas for improvement
- Document the replication process, results, and findings in a clear and concise manner
Requirements:
- Strong programming skills in C and Python
- Experience with deep learning frameworks (e.g., TensorFlow, PyTorch) and microcontroller-based platforms
- Familiarity with pruning techniques for neural networks is a plus
- Excellent analytical and problem-solving skills
- Ability to work independently and manage time effectively
- Strong communication and documentation skills
Contact
Email: navid.asadi@tum.de
Supervisor:
Working Student - Machine Learning Serving on Kubernetes
Machine Learning, Kubernetes, Containerization, Docker, Orchestration, Cloud Computing, MLOps, Machine Learning Operations, DevOps, Microservices Architecture
Description
We are seeking an ambitious and forward-thinking working student to join our dynamic team working at the intersection of Machine Learning (ML) and Kubernetes. In this exciting role, you will be immersed in a cutting-edge environment where advanced ML models meet the power of container orchestration through Kubernetes. Your contributions will directly impact the development and optimization of scalable and robust ML serving systems leveraging the benefits of Kubernetes.
If you are a student passionate about both Machine Learning and Kubernetes, we invite you to join us on this exciting journey! We offer the chance to pioneer cutting-edge solutions that leverage the power of these two transformative technologies.
Responsibilities:
- Collaborate with a cross-functional team to design and implement ML workflows on Kubernetes.
- Assist in packaging and deploying ML models as microservices using containers (Docker) and managing them effectively through Kubernetes.
- Optimize resource allocation, scheduling, and scaling strategies for efficient model serving at varying workloads.
- Implement monitoring solutions specific to ML inference tasks within the Kubernetes cluster.
- Troubleshoot and debug issues related to containerized ML applications
- Document best practices, tutorials, and guides on leveraging Kubernetes for ML serving
Requirements:
- Currently enrolled in a Bachelor's or Master's program at the School of CIT
- Strong programming skills in Python with experience in software development lifecycle methodologies.
- Familiarity with machine learning frameworks such as TensorFlow and PyTorch.
- Proficiency in container technologies. Docker and Kubernetes certification would be a plus but not mandatory.
- Experience with cloud computing platforms; e.g., AWS, GCP or Azure.
- Demonstrated ability to work independently with effective time management and strong problem-solving analytical skills.
- Excellent communication and teamwork capabilities.
Nice to Have:
- Kubernetes Certification: Having a valid Kubernetes certification (CKA, CKAD, or CKE) demonstrates your expertise in container orchestration and can be a significant advantage.
- Experience with DevOps and/or MLOps Tools: Familiarity with MLOps tools such as MLflow, Kubeflow, or TensorFlow Extended (TFX) can help you streamline the machine learning workflow and improve collaboration. Experience with OpenTelemetry, Jaeger, Istio, and monitoring tools is a plus.
- Knowledge of Distributed Systems: Understanding distributed systems architecture and design patterns can help you optimize the performance and scalability of your machine learning models.
- Contributions to Open-Source Projects: Having contributed to open-source projects related to Kubernetes, machine learning, or MLOps demonstrates your ability to collaborate with others and adapt to new technologies.
- Familiarity with Agile Methodologies: Knowledge of agile development methodologies such as Scrum or Kanban can help you work efficiently in a fast-paced environment and deliver results quickly.
- Cloud-Native Application Development: Experience with cloud-native application development using frameworks like Cloud Foundry or AWS Cloud Development Kit (CDK) can be beneficial in designing scalable and efficient machine learning workflows.
Contact
Email: navid.asadi@tum.de
Supervisor:
Working Student for the Edge AI Testbed
IoT, Edge Computing, Machine Learning, Measurement, Power Characterization
Description
We are seeking a highly motivated and enthusiastic Working Student to join our team as part of the Edge AI Testbed project. As a Working Student, a key member of our research team, you will contribute to the development and testing of cutting-edge Artificial Intelligence (AI) systems at the edge of the network. You will work closely with our researchers and engineers to design, implement, and evaluate innovative AI solutions that can operate efficiently on resource-constrained edge devices.
Responsibilities:
- Assist in designing and implementing AI models for edge computing
- Develop and test software components for the Edge AI Testbed
- Collaborate with team members to integrate AI models with edge hardware platforms
- Participate in performance optimization and evaluation of AI systems on edge devices
- Contribute to the development of tools and scripts for automated testing and deployment
- Document and report on project progress, results, and findings
If you are a motivated and talented student looking to gain hands-on experience in Edge AI, we encourage you to apply for this exciting opportunity!
Requirements:
- Currently enrolled in a Bachelor's or Master's program at the School of CIT
- Strong programming skills in languages such as Python and C++
- Experience with AI frameworks such as TensorFlow, PyTorch, or Keras
- Familiarity with edge computing platforms and devices (e.g., Raspberry Pi, NVIDIA Jetson)
- Basic knowledge of Linux operating systems and shell scripting
- Excellent problem-solving skills and ability to work independently
- Strong communication and teamwork skills
Nice to Have:
- Experience with containerization using Docker
- Familiarity with cloud computing platforms (e.g., Kubernetes)
- Experience with Apache Ray
- Knowledge of computer vision or natural language processing
- Participation in open-source projects or personal projects related to AI and edge computing
Contact
Email: navid.asadi@tum.de
Supervisor:
An AI Benchmarking Suite for Microservices-Based Applications
Kubernetes, Deep Learning, Video Analytics, Microservices
Description
In the realm of AI applications, the deployment strategy significantly impacts performance metrics.
This research internship aims to investigate and benchmark AI applications in two predominant deployment configurations: monolithic and microservices-based, specifically within Kubernetes environments.
The central question revolves around understanding how these deployment strategies affect various performance metrics and determining the more efficient configuration. This inquiry is crucial as the deployment strategy plays a pivotal role in the operational efficiency of AI applications.
Currently, the field lacks a comprehensive benchmarking suite that evaluates AI applications from an end-to-end deployment perspective. Our approach includes the development of a benchmarking suite tailored for microservice-based AI applications.
This suite will capture metrics such as CPU/GPU/Memory utilization, interservice communication, end-to-end and per-service latency, and cache misses.
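As a minimal sketch of the kind of per-service instrumentation such a suite needs, the snippet below times individual calls and aggregates per-service latency statistics. The service name "detector" and the sleep-based workload are purely illustrative assumptions:

```python
import time
import statistics

class LatencyRecorder:
    """Collects per-service latency samples, as the suite would per microservice."""
    def __init__(self):
        self.samples = {}  # service name -> list of latencies in ms

    def measure(self, service, call):
        """Time one call and record it under the given (hypothetical) service name."""
        start = time.perf_counter()
        result = call()
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        self.samples.setdefault(service, []).append(elapsed_ms)
        return result

    def summary(self, service):
        """Summarize the recorded samples for one service."""
        lat = sorted(self.samples[service])
        return {
            "count": len(lat),
            "p50_ms": statistics.median(lat),
            "max_ms": lat[-1],
        }

rec = LatencyRecorder()
for _ in range(5):
    rec.measure("detector", lambda: time.sleep(0.005))  # stand-in for an inference call
stats = rec.summary("detector")
```

A full suite would additionally scrape CPU/GPU/memory counters and inter-service traffic from the cluster's metrics pipeline rather than only wrapping calls in-process.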
Requirements:
- Familiarity with Kubernetes
- Familiarity with Deep Learning frameworks (e.g., PyTorch or TensorFlow)
- Basics of computer networking
Contact
Email: navid.asadi@tum.de
Supervisor:
Performance Evaluation of Serverless Frameworks
Serverless, Function as a Service, Machine Learning, Distributed ML
Description
Serverless computing is a cloud computing paradigm that separates infrastructure management from software development and deployment. It offers advantages such as low development overhead, fine-grained unmanaged autoscaling, and reduced customer billing. From the cloud provider's perspective, serverless reduces operational costs through multi-tenant resource multiplexing and infrastructure heterogeneity.
However, the serverless paradigm also comes with its challenges. First, a systematic methodology is needed to assess the performance of heterogeneous open-source serverless solutions; to our knowledge, existing surveys lack a thorough comparison between these frameworks. Second, there are inherent challenges associated with the serverless architecture, specifically due to its short-lived and stateless nature.
Requirements:
- Familiarity with Kubernetes
- Basics of computer networking
Contact
Email: navid.asadi@tum.de
Supervisor:
Investigation of Flexibility vs. Sustainability Tradeoffs in 6G
Description
5G networks brought significant performance improvements for different service types, such as augmented reality, virtual reality, online gaming, live video streaming, and robotic surgeries, by providing higher throughput, lower latency, and higher reliability, as well as the possibility to serve a large number of users successfully. However, these improvements do not come without cost. The main consequence of satisfying the stringent traffic requirements of the aforementioned applications is excessive energy consumption.
Therefore, making the cellular networks sustainable, i.e., constraining their power consumption, is of utmost importance in the next generation of cellular networks, i.e., 6G. This goal is of interest mostly to cellular network operators. Of course, while achieving network sustainability, the satisfaction of all traffic requirements, which is of interest to cellular users, must be ensured at all times. While these are opposing goals, a certain balance has to be achieved.
In this thesis, the focus is on the service type known as eMBB (enhanced mobile broadband). These services are latency-tolerant to a certain extent, but sensitive to throughput and its stability; live video streaming is a use case falling into this category. For these applications, on the one hand, higher data rates imply higher energy consumption. On the other hand, users can be satisfied with slightly lower throughput as long as the provided data rate is constant, which is the flexibility that the network operator can exploit. Hence, the question this thesis needs to answer is: what is the optimal trade-off between data rate and energy consumption in a cellular network with eMBB users? To answer it, the entire communication process will be considered, i.e., from the transmitting user through the base station and core network to the receiving end. The student will formulate an optimization problem for this trade-off and solve it both with exact optimization solvers and with simpler algorithms (heuristics) that reduce the solution time without considerably deteriorating system performance.
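One possible way to cast this trade-off, as a toy formulation only (the symbols and constraints below are illustrative assumptions, not the thesis's actual model), is:

```latex
\begin{align*}
\min_{r_1,\dots,r_N}\; & \sum_{i=1}^{N} P(r_i)             && \text{total power cost} \\
\text{s.t.}\;          & r_i \ge r_i^{\min},\; i=1,\dots,N, && \text{per-user eMBB rate requirement} \\
                       & \sum_{i=1}^{N} r_i \le C,          && \text{shared capacity of the cell}
\end{align*}
```

where \(r_i\) is the stable data rate allocated to eMBB user \(i\), \(P(\cdot)\) is an increasing rate-to-power mapping, \(r_i^{\min}\) is the lowest rate user \(i\) still accepts, and \(C\) is the cell capacity.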
Prerequisites
- Good knowledge of any programming language
- Good mathematical and analytical thinking skills
- High level of self-engagement and motivation
Contact
valentin.haider@tum.de
fidan.mehmeti@tum.de
Supervisor:
Network Planning in the Medical Context
Description
In future communication systems such as 6G, in-network computing will play a crucial role. In particular, processing units within the network make it possible to run applications such as digital twins close to the end user, leading to lower latencies and better overall performance.
In this thesis, the task is to develop and evaluate an approach to dimension networking resources such as networking devices and processing units depending on the envisioned medical applications to be executed. This work is in cooperation with our partners at MITI (Hospital „Rechts der Isar“).
The result will be an approach to dimension and plan networks for future medical applications.
Prerequisites
· Motivation
· Ideally some experience in optimization problems
· Basic networking knowledge
· Basic programming skills
Supervisor:
Optimizing the Availability of Medical Applications
Description
In future communication systems such as 6G, in-network computing will play a crucial role. In particular, processing units within the network make it possible to run applications such as digital twins close to the end user, leading to lower latencies and better overall performance.
In this thesis, the task is to develop and evaluate an approach to optimize the availability of medical applications, i.e., modular application functions (MAFs), when executed in the network. For that, suitable real use cases are identified together with our partners at MITI (Hospital "Rechts der Isar"). The optimizing approach then leads to a specified distribution of the processing and networking resources, satisfying the minimum needs of critical applications while considering the needed availability.
The result will be an evaluated placement approach for applications in the medical environment considering the availability.
Prerequisites
· Motivation
· Ideally some experience in solving optimization problems
· Basic networking knowledge
· Basic programming skills
Supervisor:
Minimizing the Power Consumption of Medical Applications
Description
In future communication systems such as 6G, in-network computing will play a crucial role. In particular, processing units within the network make it possible to run applications such as digital twins close to the end user, leading to lower latencies and better overall performance.
In this thesis, the task is to develop and evaluate an approach to minimize the power consumption of medical applications, i.e., modular application functions (MAFs), when executed in the network. For that, suitable real use cases are identified together with our partners at MITI (Hospital "Rechts der Isar"). The optimization approach then leads to a specified distribution of the processing and networking resources, satisfying the minimum needs of critical applications while considering the power consumption.
The result will be an evaluated power minimizing approach for applications in the medical environment.
Prerequisites
· Motivation
· Ideally some experience in solving optimization problems
· Basic networking knowledge
· Basic programming skills
Supervisor:
Processing Prioritization of MAF Chains in the Medical Context
Description
In future communication systems such as 6G, in-network computing will play a crucial role. In particular, processing units within the network make it possible to run applications such as digital twins close to the end user, leading to lower latencies and better overall performance. However, these processing resources are usually shared among many applications, which potentially leads to worse performance in terms of execution time, throughput, etc. This is especially critical for applications such as autonomous driving, telemedicine, or smart operations. Hence, the processing of more critical applications must be prioritized.
In this thesis, the task is to develop and evaluate a prioritization approach for chains of modular medical applications, i.e., modular application functions (MAFs). This extends existing work that focused on the prioritized placement of single MAFs. Suitable real use cases are identified together with our partners at MITI (Hospital "Rechts der Isar"). The prioritization approach then leads to a specified distribution of the processing and networking resources, satisfying the minimum needs of critical applications.
The result will be an evaluated prioritization approach for applications in the medical environment.
Prerequisites
· Motivation
· Ideally some experience in solving optimization problems
· Basic networking knowledge
· Basic programming skills
Supervisor:
Intel's IPU: Starting from the beginning
Description
Intel develops network devices, so-called IPUs, that combine an FPGA with a general-purpose processor. The goal of this Thesis/Position is to get such an IPU (Intel IPU F2000X) up and running and to evaluate its potential by programming a custom IPU application and measuring metrics like latency, throughput, and many more under varying circumstances.
Prerequisites
- Basic Knowledge Linux Terminal
- Basic Knowledge C/C++
- Basic Knowledge of FPGAs
Supervisor:
DPU as Measurement Cards and Load Generators
Description
Data centers experience ever higher and more demanding network loads and traffic. Companies like NVIDIA have developed special networking hardware to meet these demands (the NVIDIA BlueField line-up). These cards promise high throughput and high precision. The features required to achieve this can also be used to turn BlueField cards into measurement cards or load generators.
The goal of this Thesis/Position is to evaluate the performance and feasibility of this approach.
For more information, please contact me directly (philip.diederich@tum.de)
Prerequisites
- Basic Knowledge Linux Terminal
- Basic Knowledge Python
- Basic Knowledge C/C++
Supervisor:
Advancing Real-time Network Simulations to Real World Behaviour
Description
Testing real-time applications and networks is very timing-sensitive, and it is hard to achieve this precision and accuracy in the real world. However, the real world also behaves differently than simulations. Our simulator behaves as the theory dictates and gives us precise timing, but it needs to be tested and extended to behave more like a real network would.
Requirements
- Knowledge of NS-3
- Knowledge of Python
- Knowledge of C/C++
Please contact me for more information (philip.diederich@tum.de)
Supervisor:
Working Student - Real-Time Network Controller for Research
Description
Chameleon is a real-time network controller that guarantees packet latencies for admitted flows. However, Chameleon is designed for high-performance environments. For research and development, a different approach that offers more debugging and extension capabilities would suit us better.
Goals:
- Create Real-time Network Controller
- Controller needs to be easy to debug
- Controller needs to be easy to extend
- Controller needs to have good logging and tracing
Requirements:
- Advanced Knowledge of C/C++
- Advanced Knowledge of Python
Please contact me for more information (philip.diederich@tum.de)
Amaury Van Bemten, Nemanja Đerić, Amir Varasteh, Stefan Schmid, Carmen Mas-Machuca, Andreas Blenk, and Wolfgang Kellerer. 2020. Chameleon: Predictable Latency and High Utilization with Queue-Aware and Adaptive Source Routing. In The 16th International Conference on emerging Networking EXperiments and Technologies (CoNEXT ’20), December 1–4, 2020, Barcelona, Spain. ACM, New York, NY, USA, 15 pages. https://doi.org/10.1145/3386367.3432879
Supervisor:
Controlling Stochastic Network Flows for Real-time Networking
Description
Any data sent in a real-time network is monitored and accounted for. With the help of mathematical frameworks, this allows us to calculate upper bounds on the latency of a flow. These frameworks and controllers often consider hard real-time guarantees, meaning that every packet arrives in time, every time. With soft real-time guarantees, this is not the case: here, we are allowed some leeway.
In this thesis, we want to explore how we can model and admit network flows that have a stochastic nature.
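To illustrate the soft real-time idea, the sketch below estimates a flow's deadline-miss probability by Monte Carlo and admits the flow only if the miss rate stays under a tolerance. The latency distribution (a fixed base delay plus an exponential component) is a toy assumption, not the model to be developed in the thesis:

```python
import random

def deadline_miss_probability(deadline_ms, n_packets=100_000, seed=42):
    """Estimate P(latency > deadline) for a flow with stochastic per-packet latency.

    Toy latency model: 2 ms base delay plus an exponentially distributed
    extra delay with a 1.5 ms mean (illustrative assumption only).
    """
    rng = random.Random(seed)
    base_ms, mean_extra_ms = 2.0, 1.5
    misses = sum(
        1 for _ in range(n_packets)
        if base_ms + rng.expovariate(1.0 / mean_extra_ms) > deadline_ms
    )
    return misses / n_packets

# Soft real-time admission: admit the flow if at most 1% of its packets
# are expected to miss a 10 ms deadline.
p_miss = deadline_miss_probability(10.0)
admit = p_miss <= 0.01
```

A hard real-time controller would instead require a deterministic worst-case bound below the deadline; the stochastic view trades that certainty for higher admissible utilization.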
Please contact me for more information (philip.diederich@tum.de)!
Supervisor:
Working Student: Framework for Testing Realtime Networks
Description
Testing a Network Controller, custom real-time protocols, or verifying simulations with emulations requires a lot of computing effort. This is why we are developing a framework that helps you run parallel networking experiments. This framework also increases the reproducibility of any networking experiment.
The main task of this position is to help develop the general-purpose framework for executing parallel networking experiments.
Tasks:
- Continue developing the Framework for multi server / multi app usage
- Extend Web Capabilities of the Framework
- Automate Starting and Stopping
- Ease-of-use Improvements
- Test the functionality
Requirements:
- Knowledge of Python
- Basic Knowledge of Web-App Development (FastAPI, React, etc.)
- Basic Knowledge of System Architecture Development
Feel free to contact me via email (philip.diederich@tum.de)
Supervisor:
Working Student Infrastructure Service Management
Description
We are seeking a highly motivated and detail-oriented Working Student to join our data center team. As a Working Student, you will assist in the daily operations of our data center, gaining hands-on experience in a fast-paced and dynamic environment.
Responsibilities:
Assist with regular data center tasks, such as:
- Rack and Stack equipment
- Cable Management and organization
- Perform basic troubleshooting and maintenance tasks
- Assist with inventory management
- Monitor data center systems and report any discrepancies or issues
- Create the basis for our Data Center Infrastructure Management
- Develop and maintain documentation of data center procedures and policies
- Perform other duties as required to support the data center operations
Requirements
- Availability to work 8 - 10 hours per week with flexible scheduling to accommodate academic commitments
- Basic knowledge of computer systems, networks, and data center operations
- Basic knowledge in Python
Supervisor:
Student Assistant for Wireless Sensor Networks Lab, Winter Semester 24/25
Description
The Wireless Sensor Networks lab offers the opportunity to develop software solutions for the wireless sensor networking system, targeting innovative applications. For the next semester, a position is available to assist the participants in learning the programming environment and during the project development phase. The lab is planned to be held on-site every Tuesday 15:00 to 17:00.
Prerequisites
- Solid knowledge in Wireless Communication: PHY, MAC, and network layers.
- Solid programming skills: C/C++.
- Linux knowledge.
- Experience with embedded systems and microcontroller programming knowledge is preferable.
Contact
yash.deshpande@tum.de
alexander.wietfeld@tum.de
Supervisor:
Development of a GUI for Monitoring and Debugging a Digital Twin of QKD Networks
GUI
Quantum key distribution (QKD) is a promising technology for providing secure communication even in the presence of powerful quantum computers. Due to its time-dependent behavior and multi-layer architecture, routing policies and network performance parameters are best analyzed by emulation. Our network emulator, based on containers and network function virtualization, allows the analysis of network performance parameters and the optimization of routing policies.
Description
We are looking for a student to build a GUI that simplifies analysis of and interaction with the network emulator. The emulator is based on Containernet and includes QKD-specific network function virtualization. Currently, distributed routing is supported; it will be extended with centralized routing. Monitoring data from active QKD links is fed in to mirror realistic circumstances.
- Build a front-end displaying performance and operational data
- Build a GUI for dynamically changing secret key rates
Prerequisites
- Programming skills in Python
- Experience in front-end web development
- Interest in security and practical concepts of guaranteed security
Contact
Mario Wenning mario.wenning@tum.de
Supervisor:
Distributed Deep Learning for Video Analytics
Distributed Deep Learning, Distributed Computing, Video Analytics, Edge Computing, Edge AI
Description
In recent years, deep learning-based algorithms have demonstrated superior accuracy in video analysis tasks, and scaling up such models, i.e., designing and training larger models with more parameters, can improve their accuracy even further.
On the other hand, due to strict latency requirements as well as privacy concerns, there is a tendency towards deploying video analysis tasks close to the data sources, i.e., at the edge. However, compared to dedicated cloud infrastructures, edge devices (e.g., smartphones and IoT devices) as well as edge clouds are constrained in terms of compute, memory, and storage resources, which consequently leads to a trade-off between response time and accuracy.
Considering video analysis tasks such as image classification and object detection as the application at the heart of this project, the goal is to evaluate different deep learning model distribution techniques for a scenario of interest.
Contact
Email: navid.asadi@tum.de
Supervisor:
Edge AI in Adversarial Environment: A Simplistic Byzantine Scenario
Distributed Deep Learning, Distributed Computing, Byzantine Attack, Adversarial Inference
Description
This project considers an environment consisting of several low-performance machines connected across a network.
Edge AI has drawn the attention of both academia and industry as a way to bring intelligence to edge devices to enhance data privacy as well as latency.
Prior works investigated improving the accuracy-latency trade-off of Edge AI by distributing a model across multiple available and idle machines. Building on top of those works, this project adds one more dimension: a scenario where $f$ out of $n$ contributing nodes are adversarial.
For each data sample, an adversary (1) may not provide an output (it can also be considered a faulty node) or (2) may provide an arbitrary (i.e., randomly generated) output.
The goal is to evaluate the robustness of different parallelism techniques in terms of achievable accuracy in the presence of malicious contributors and/or faulty nodes.
Note that contrary to the mainstream existing literature, this project mainly focuses on the inference (i.e., serving) phase of deep learning algorithms, and although robustness of the training phase can be considered as well, it has a much lower priority.
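As a simplistic illustration of the scenario, the sketch below simulates majority-vote aggregation with $f$ Byzantine nodes out of $n$. The assumption that honest nodes always output the true label is deliberately naive and serves only to show the aggregation logic, not any of the parallelism techniques to be studied:

```python
import random
from collections import Counter

def aggregate(votes):
    """Majority vote over class labels; None outputs (silent nodes) are ignored."""
    valid = [v for v in votes if v is not None]
    if not valid:
        return None
    return Counter(valid).most_common(1)[0][0]

def simulate(n=7, f=2, n_samples=1000, n_classes=10, seed=0):
    """Fraction of samples where the majority vote still equals the true label.

    Toy model: honest nodes always output the true label; each adversary
    either stays silent or emits a random label.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_samples):
        truth = rng.randrange(n_classes)
        honest = [truth] * (n - f)
        byzantine = [None if rng.random() < 0.5 else rng.randrange(n_classes)
                     for _ in range(f)]
        if aggregate(honest + byzantine) == truth:
            correct += 1
    return correct / n_samples

accuracy = simulate()
```

Under model parallelism the picture is harder than this voting sketch suggests: each node holds only a partition of the model, so a single corrupted intermediate tensor can poison the final output rather than being outvoted.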
Contact
Email: navid.asadi@tum.de
Supervisor:
On the Efficiency of Deep Learning Parallelism Schemes
Distributed Deep Learning, Parallel Computing, Inference, AI Serving
Description
Deep Learning models are becoming increasingly larger so that most of the state-of-the-art model architectures are either too big to be deployed on a single machine or cause performance issues such as undesired delays.
This is not only true for the largest models being deployed in high performance cloud infrastructures but also for smaller and more efficient models that are designed to have fewer parameters (and hence, lower accuracy) to be deployed on edge devices.
That said, this project considers the second environment where there are multiple resource constrained machines connected through a network.
Continuing the research towards distributing deep learning models into multiple machines, the objective is to generate more efficient variants/submodels compared to existing deep learning parallelism algorithms.
Note that this project mainly focuses on the inference (i.e., serving) phase of deep learning algorithms, and although efficiency of the training phase can be considered as well, it has a much lower priority.
Contact
Email: navid.asadi@tum.de
Supervisor:
Optimizing Distributed Deep Learning Inference
deep learning, distributed systems, parallel computing, model parallelism, communication overhead reduction, performance evaluation, edge devices
Description
The rapid growth in size and complexity of deep learning models has led to significant challenges in deploying these architectures across resource-constrained machines interconnected through a network. This research project focuses on optimizing the deployment of deep learning models at the edge, where limited computational resources and high-latency networks hinder performance. The main objective is to develop efficient distributed inference techniques that can overcome the limitations of edge devices, ensuring real-time processing and decision-making.
The successful candidate will work on addressing the following challenges:
- Employing model parallelism techniques to distribute workload across compute nodes while minimizing communication overhead associated with exchanging intermediate tensors between nodes.
- Reducing inter-operator blocking to improve overall system throughput.
- Developing efficient compression techniques tailored for deep learning data exchanges to minimize network latency.
- Evaluating the performance of proposed modifications using standard deep learning benchmarks and real-world datasets.
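As one hedged example of a compression technique tailored to intermediate tensors, the sketch below applies uniform 8-bit quantization to a float activation vector, cutting the payload to a quarter of its float32 size at a bounded reconstruction error. The random activations and pure-Python implementation are illustrative only:

```python
import array
import random
import struct

def quantize_int8(tensor):
    """Uniform 8-bit quantization of a float tensor (given as a list of floats).

    Returns (scale, int8 payload bytes); a toy stand-in for compression
    schemes tailored to intermediate activations.
    """
    max_abs = max(abs(x) for x in tensor) or 1.0
    scale = max_abs / 127.0
    q = array.array('b', (round(x / scale) for x in tensor))
    return scale, q.tobytes()

def dequantize_int8(scale, payload):
    """Reconstruct the (lossy) float tensor from the quantized payload."""
    return [scale * v for v in array.array('b', payload)]

random.seed(1)
activations = [random.uniform(-1, 1) for _ in range(1024)]  # fake layer output
scale, payload = quantize_int8(activations)
restored = dequantize_int8(scale, payload)

fp32_bytes = len(struct.pack(f'{len(activations)}f', *activations))
ratio = fp32_bytes / len(payload)          # 4x smaller on the wire
max_err = max(abs(a - b) for a, b in zip(activations, restored))
```

In practice one would weigh such per-tensor quantization against its accuracy impact on downstream layers, which is exactly the kind of evaluation this project targets.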
Responsibilities:
- Implement and evaluate various parallelism techniques, such as model parallelism and variant parallelism, from a communication efficiency perspective.
- Identify and implement mechanisms to minimize the exchange of intermediate tensors between compute nodes, potentially using advanced compression techniques tailored for deep learning data exchanges.
- Conduct comprehensive performance evaluations of proposed modifications using standard deep learning benchmarks and real-world datasets. Assess improvements in latency, resource efficiency, and overall system throughput compared to baseline configurations.
- Write technical reports and publications detailing the research findings.
Requirements:
- Pursuing a Master's degree in the School of CIT
- Strong background in deep learning, distributed systems, and parallel computing.
- Proficiency in Python and experience with deep learning frameworks (e.g., TensorFlow, PyTorch).
- Excellent problem-solving skills and the ability to work independently and collaboratively as part of a team.
- Strong communication and writing skills for technical reports and publications.
Contact
Email: navid.asadi@tum.de
Supervisor:
Working student for the PCN lab
SDN, P4
Description
The Programmable Communication Networks lab offers the opportunity to work on interesting and entertaining projects related to the Software-Defined Networking (SDN) and Programmable Data Planes (PDP) paradigms. For the next semester, a position is available to assist in the lab by supervising students during the lab sessions.
Prerequisites
- Solid knowledge in computer networking (TCP/IP, SDN, P4)
- Solid knowledge of networking tools and Linux (iperf, ssh, etc)
- Good programming skills: C/C++, Python
Contact
Please send your transcript of records and CV to:
Cristian Bermudez Serna - cristian.bermudez-serna@tum.de
Nicolai Kröger - nicolai.kroeger@tum.de