Investigating the Detectability of Hidden Communication in 5G Core Networks
Description
Mobile networks are now ubiquitous and part of our everyday lives. Due to their important role in public security and safety, they are classified as critical infrastructure and need to be protected accordingly. At the same time, 5G shifted from a closed system to a set of microservices designed to be deployed in dynamic environments such as (public) clouds. This large number of involved systems and components increases the risk of infiltration by bad actors through security flaws and supply-chain issues. To understand how a compromised core network can be exploited, we designed a steganography-based system able to execute various attacks and implemented a proof of concept. This framework should now be extended and evaluated against state-of-the-art detection mechanisms.
Objectives:
• Implement the framework in an open-source 5G core network (such as Open5GS, Free5GC, or OpenAirInterface).
• Perform tests on the feasibility of various attacks in this framework.
• Evaluate 5G intrusion detection and prevention approaches described in the literature.
Prerequisites
• Basic understanding of cellular radio communication (such as LTE or 5G NR), specifically their architecture and protocols.
• Basic knowledge of network security.
• Solid knowledge of C/C++ and/or Golang.
Please include:
• a short CV
• a current overview of your grades
in your application.
For any questions or further details regarding this thesis and the application process, please don’t hesitate to contact:
• Julian Sturm (TUM), Email: julian.sturm@tum.de
Supervisor:
Finding and Identifying Publicly Accessible 5G Core Networks
5G, 5G Core, Security, IP
Description
Mobile networks are now ubiquitous and part of our everyday lives. Due to their important role in public security and safety, they are classified as critical infrastructure and need to be protected accordingly. At the same time, 5G shifted from a closed system to a set of microservices designed to be deployed in dynamic environments such as (public) clouds. Previous research shows that critical systems are often identifiable from the internet with little to no protection (Bodenheim et al. 2014). For 5G, however, such data is lacking.
Objectives:
• Develop methods to identify components of open source 5G core networks (such as Open5GS, Free5GC and OpenAirInterface), as well as commercial networks based on their network fingerprint.
• Perform internet scanning to search for publicly accessible networks.
• Evaluate the prevalence of deployed security mechanisms (if scans are successful).
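As a rough illustration of the fingerprinting step, the sketch below maps open ports observed in a scan to candidate core components. The SCTP/UDP ports are 3GPP-registered defaults (NGAP on SCTP 38412, PFCP on UDP 8805, GTP-U on UDP 2152); TCP 7777 is merely the Open5GS SBI default, and the whole mapping is an assumption that real deployments may not follow:

```python
# Sketch: map open ports observed in a scan to likely 5G core components.
# NGAP (SCTP 38412), PFCP (UDP 8805), and GTP-U (UDP 2152) are 3GPP defaults;
# TCP 7777 is only the Open5GS SBI default. Real deployments may differ.

KNOWN_PORTS = {
    ("sctp", 38412): "AMF (NGAP, N2 interface)",
    ("udp", 8805): "SMF/UPF (PFCP, N4 interface)",
    ("udp", 2152): "UPF (GTP-U, N3 interface)",
    ("tcp", 7777): "SBI network function (Open5GS default)",
}

def classify_host(open_ports):
    """Return candidate core roles for a host, given (protocol, port) pairs."""
    return sorted({KNOWN_PORTS[p] for p in open_ports if p in KNOWN_PORTS})
```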
Prerequisites
• Basic understanding of cellular radio communication (such as LTE or 5G NR), specifically their architecture and protocols.
• Solid understanding of IP networks, specifically their architecture and protocols.
• Solid knowledge of Python or another suitable programming language.
Please include:
• a short CV
• a current overview of your grades
in your application.
For any questions or further details regarding this thesis and the application process, please don’t hesitate to contact:
• Julian Sturm (TUM), Email: julian.sturm@tum.de
Supervisor:
Harnessing Large Language Models for Intelligent Wireless Networking
Description
Explore the exciting world of Large Language Models (LLMs) with us!
As LLMs (like GPT-4) transform industries worldwide, this is your chance to be part of this transformative journey in wireless networking.
In this project, you will:
- Re-implement and analyze cutting-edge LLMs, evaluating their strengths, limitations, and specific applications in wireless networking.
- Identify and tailor these models to a unique use case in wireless networking, applying state-of-the-art techniques to solve real-world challenges.
Related Reading:
- Shao, Jiawei, Jingwen Tong, Qiong Wu, Wei Guo, Zijian Li, Zehong Lin, and Jun Zhang. "WirelessLLM: Empowering Large Language Models Towards Wireless Intelligence." arXiv preprint arXiv:2405.17053 (2024).
If you are interested in this work, please send me an email with a short introduction of yourself along with your CV and grade transcript.
Prerequisites
- Strong Python programming skills
- Strong foundation in wireless networking concepts
- Prior experience with machine learning frameworks
Supervisor:
Development of a 5G Multipath TCP Testbed for Multi-Access Network Optimization
Description
Join us in tackling one of the most pressing challenges in mobile networking—managing the growing demand for data and the need for higher performance in modern applications. As single-network connections struggle to keep up, the 3GPP's Access Traffic Steering, Switching, and Splitting (ATSSS) framework offers a solution, enabling devices to dynamically switch between and simultaneously use multiple network types like 5G, LTE, and Wi-Fi.
In this project, you will:
- Develop a cutting-edge 5G testbed that adheres to 3GPP standards.
- Integrate Multipath TCP to enable seamless communication across multiple network interfaces.
- Contribute to the optimization of mobile traffic management, enhancing both performance and reliability in next-generation networks.
This work is a unique opportunity to get hands-on experience with 5G technology and be at the forefront of mobile networking innovation.
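One of the first practical steps in such a testbed is opening MPTCP connections from user space. The sketch below is a hedged example, assuming a Linux kernel >= 5.6 with MPTCP enabled and Python >= 3.10 (which exposes `socket.IPPROTO_MPTCP`); the numeric fallback 262 is the Linux protocol number. It tries MPTCP and falls back to plain TCP:

```python
import socket

# Hedged sketch: open an MPTCP socket where supported (Linux >= 5.6),
# otherwise fall back to plain TCP. Python 3.10+ exposes IPPROTO_MPTCP;
# 262 is the Linux protocol number used as a fallback constant.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def open_stream_socket():
    """Return (sock, is_mptcp); tries MPTCP first, then regular TCP."""
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
        return sock, True
    except OSError:
        # Kernel without MPTCP support, or MPTCP disabled via sysctl.
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM), False
```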
Related Reading:
- M. Quadrini, D. Verde, M. Luglio, C. Roseti and F. Zampognaro, "Implementation and Testing of MP-TCP ATSSS in a 5G Multi-Access Configuration," 2023 International Symposium on Networks, Computers and Communications (ISNCC), Doha, Qatar, 2023, pp. 1-6, doi: 10.1109/ISNCC58260.2023.10323859.
If you are interested in this work, please send me an email with a short introduction of yourself along with your CV and grade transcript.
Prerequisites
- Experience with programming in C/C++
- Strong foundation in wireless networking concepts
- Motivation to learn 5G concepts
- Availability to work in-presence
Supervisor:
Design and Implementation of an Intelligent Multipath Packet Scheduler
Description
Are you ready to dive into cutting-edge technology that merges LiFi and WiFi networks? Imagine your work enabling devices to seamlessly connect across multiple interfaces, pushing the boundaries of what's possible in wireless communication. With multipath solutions like MPTCP and MPQUIC, the potential is immense—but the challenge is real.
We are looking for a motivated student to design and implement a state-of-the-art wireless-channel-aware packet scheduler. You'll tackle the complex task of scheduling data packets across multiple network paths, each with unique characteristics like delay and packet loss.
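To make the scheduling task concrete, here is a minimal channel-aware path choice in the spirit of minRTT-style schedulers. The scoring formula (expected time per successfully delivered packet under independent losses) and the path statistics are illustrative assumptions, not taken from any MPTCP or MPQUIC implementation:

```python
# Illustrative channel-aware path selection. With independent losses, a
# packet needs on average 1/(1 - loss) transmissions, each taking roughly
# one RTT, so each path is scored by rtt_ms / (1 - loss); pick the minimum.

def pick_path(paths):
    """paths: dict name -> {"rtt_ms": float, "loss": float in [0, 1)}."""
    def score(name):
        stats = paths[name]
        return stats["rtt_ms"] / max(1.0 - stats["loss"], 1e-9)
    return min(paths, key=score)
```

A scheduler built on this would re-evaluate the score per packet as the per-path delay and loss estimates change.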
Related Reading:
- W. Yang, L. Cai, S. Shu, J. Pan and A. Sepahi, "MAMS: Mobility-Aware Multipath Scheduler for MPQUIC," in IEEE/ACM Transactions on Networking, vol. 32, no. 4, pp. 3237-3252, Aug. 2024, doi: 10.1109/TNET.2024.3382269.
If you are interested in this work, please send me an email with a short introduction of yourself along with your CV and grade transcript.
Prerequisites
- Experience with Linux networking
- Strong foundation in wireless networking concepts
- Availability to work in-presence
Supervisor:
Mobile Communication RRC Message Security Analysis
5G, SDR, Security, RAN
Description
This topic covers an analysis of RRC messages in 4G and 5G. Several kinds of these messages exist, with different functions and levels of information content. The focus lies on messages related to the connection release, and the analysis should consider privacy and security aspects. After the theoretical review and analysis, the practical part focuses on an attack: one security or privacy aspect is to be implemented as a proof of concept using open-source hardware and software.
The following are to be designed, implemented, and evaluated (most likely via a proof of concept) in this thesis:
• Security and availability analysis of specific RRC messages
• Implementation of an attack
• Practical evaluation with testing of commercial smartphones
We will offer you:
• Initial literature
- https://doi.org/10.14722/NDSS.2016.23236
• Smart working environment
• Close contact with the supervisors, with plenty of discussion and knowledge exchange
A detailed description of the topic will be worked out with you in initial meetings. The report must, of course, be written according to the university's requirements, and the complete project, including detailed documentation and all sources, must be handed over. Depending on the chosen thesis type, the scope of the content will be adapted.
All applications must be submitted through our application website INTERAMT:
https://interamt.de/koop/app/stelle?id=1103974
Carefully note the information provided on the site to avoid any issues with your application.
Please include
• a short CV
• current overview of your grades
• the keyword "T3-MK-RRC" as comment
in your application.
For any questions or further details regarding this thesis and the application process, please don't hesitate to contact:
• TUM contact: nicolai.kroeger@tum.de, serkut.ayvasik@tum.de
• Forschungsreferat T3 (ZITiS), Email: t3@zitis.bund.de
Prerequisites
Knowledge in the following fields is required:
• C/C++
Knowledge in the following fields would be an advantage:
• Mobile Communication 4G, 5G
Supervisor:
Mobile Communication Broadcast Message Security Analysis
5G, SDR, Security, RAN
Description
This topic covers an analysis of broadcast messages in 4G and 5G. Several kinds of these messages exist, with different functions and levels of information content. The analysis should consider privacy and security aspects. After the theoretical review and analysis, the practical part focuses on one aspect of the findings: one security or privacy aspect is to be implemented as a proof of concept using open-source hardware and software.
The following are to be designed, implemented, and evaluated (most likely via a proof of concept) in this thesis:
• Security and privacy analysis of Broadcast Messages
• Implementation of an attack
• Practical evaluation with testing of commercial smartphones
We will offer you:
• Initial literature
- https://dl.acm.org/doi/10.1145/3307334.3326082
• Smart working environment
• Close contact with the supervisors, with plenty of discussion and knowledge exchange
A detailed description of the topic will be worked out with you in initial meetings. The report must, of course, be written according to the university's requirements, and the complete project, including detailed documentation and all sources, must be handed over. Depending on the chosen thesis type, the scope of the content will be adapted.
All applications must be submitted through our application website INTERAMT:
https://interamt.de/koop/app/stelle?id=1103974
Carefully note the information provided on the site to avoid any issues with your application.
Please include
• a short CV
• current overview of your grades
• the keyword "T3-MK-BROADCAST" as comment
in your application.
For any questions or further details regarding this thesis and the application process, please don't hesitate to contact:
• TUM contact: nicolai.kroeger@tum.de, serkut.ayvasik@tum.de
• Forschungsreferat T3 (ZITiS), Email: t3@zitis.bund.de
Prerequisites
Knowledge in the following fields is required:
• C/C++
Knowledge in the following fields would be an advantage:
• Mobile Communication 4G, 5G
Supervisor:
Latency and Reliability Guarantees in Multi-domain Networks
Multi-domain networks
Description
One aspect not covered by 5G networks is multi-domain networks, comprising one or more campus networks. These are private networks, including the Radio Access Network and Core Network, not owned by the cellular operators, e.g., within a university or a hospital. There will be scenarios in which the transmitter is in a different campus network than the receiver, so the data has to traverse networks operated by different entities.
Given the different operators managing the “transmitter” and “receiver” networks, providing any end-to-end performance guarantees in terms of latency and reliability can pose significant challenges in multi-domain networks. For example, if there is a maximum latency that a packet can tolerate in the communication cycle between the transmitter and receiver, the former experiencing given channel conditions would require a given amount of RAN resources to meet that latency. The receiver, on the other end of the communication path, will most probably experience different channel conditions. Therefore, it will require a different amount of resources to satisfy the end-to-end latency requirement. Finding an optimal resource allocation approach across different networks that would lead to latency and reliability guarantees in a multi-domain network will be the topic of this thesis.
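As a minimal starting point for the queueing-theoretic approach, each domain can be modeled as an M/M/1 queue, whose mean sojourn time is 1/(mu - lambda); the end-to-end delay is then the sum across the traversed domains. The two-domain setup and the rates used below are illustrative assumptions:

```python
# Each domain modeled as an M/M/1 queue: mean sojourn time 1/(mu - lam).
# lam: flow arrival rate (packets/s); service_rates: one mu per domain.

def e2e_delay(lam, service_rates):
    """Mean end-to-end delay (s) across independently operated domains."""
    for mu in service_rates:
        if mu <= lam:
            raise ValueError("unstable queue: every mu must exceed lam")
    return sum(1.0 / (mu - lam) for mu in service_rates)

def meets_budget(lam, service_rates, budget_s):
    """True if the end-to-end latency requirement budget_s is satisfied."""
    return e2e_delay(lam, service_rates) <= budget_s
```

The optimization question of the thesis then becomes how to pick the per-domain resource allocations (here, the service rates) at minimum cost while `meets_budget` stays true.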
Prerequisites
The approach used to solve these problems will rely on queueing theory. A good knowledge of any programming language is required.
Supervisor:
Decentralized Federated Learning on Constrained IoT Devices
Description
The Internet of Things (IoT) is an increasingly prominent aspect of our daily lives, with connected devices offering unprecedented convenience and efficiency. As we move towards a more interconnected world, ensuring the privacy and security of data generated by these devices is paramount. That is where decentralized federated learning comes in.
Federated Learning (FL) is a machine-learning paradigm that enables multiple parties to collaboratively train a model without sharing their data directly. This thesis focuses on taking FL one step further by removing the need for a central server, allowing IoT devices to directly collaborate in a peer-to-peer manner.
In this project, you will explore and develop decentralized federated learning frameworks specifically tailored for constrained IoT devices with limited computational power, memory, and energy resources. The aim is to design and implement efficient algorithms that can harness the collective power of these devices while ensuring data privacy and device autonomy. This involves tackling challenges related to resource-constrained environments, heterogeneous device capabilities, and maintaining security and privacy guarantees.
The project offers a unique opportunity to contribute to cutting-edge research with real-world impact. Successful outcomes will enable secure and private machine learning on IoT devices, fostering new applications in areas such as smart homes, industrial automation, and wearable health monitoring.
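To give a flavor of the serverless aggregation involved, the sketch below runs one synchronous gossip-averaging round in which every device averages its (flattened) parameters with those of its direct neighbors; the explicit topology and uniform weighting are simplifying assumptions:

```python
# One synchronous gossip-averaging round: every device replaces its
# parameters by the uniform average over itself and its direct neighbors.
# No central server is involved; the topology is given explicitly.

def gossip_round(models, neighbors):
    """models: node -> flat list of parameters; neighbors: node -> node ids."""
    updated = {}
    for node, params in models.items():
        group = [params] + [models[n] for n in neighbors[node]]
        updated[node] = [sum(vals) / len(group) for vals in zip(*group)]
    return updated
```

Repeated rounds drive all nodes toward consensus on the average model, which is the decentralized counterpart of a server-side FedAvg step.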
Responsibilities:
- Literature review on decentralized federated learning, especially in relation to IoT and decentralized systems.
- Design and development of decentralized FL frameworks suitable for constrained IoT devices.
- Implementation and evaluation of the proposed framework using real-world datasets and testbeds.
- Analysis of security and privacy aspects, along with resource utilization.
- Documentation and presentation of findings in a thesis report, possibly leading to publications in top venues.
Requirements:
- Enrollment in a Master's program in Computer Engineering, Computer Science, Electrical Engineering or related fields
- Solid understanding of machine learning algorithms and frameworks (e.g., TensorFlow, PyTorch)
- Proficiency in C and Python programming language
- Experience with IoT devices and embedded systems development
- Excellent analytical skills and a systematic problem-solving approach
Nice to Have:
- Knowledge of cybersecurity and privacy principles
- Familiarity with blockchain or other decentralized technologies
- Interest in distributed computing and edge computing paradigms
Contact
Email: navid.asadi@tum.de
Supervisor:
Attacks on Cloud Autoscaling Mechanisms
Cloud Computing, Kubernetes, autoscaling, low and slow attacks, Horizontal Pod Autoscaler (HPA), Vertical Pod Autoscaler (VPA), cloud security, container orchestration
Description
In the era of cloud-native computing, Kubernetes has emerged as a leading container orchestration platform, enabling seamless scalability and reliability for modern applications.
However, with its widespread adoption comes a new frontier in cybersecurity challenges, particularly low and slow attacks that exploit autoscaling features to disrupt services subtly yet effectively.
This project aims to delve into the intricacies of these attacks, examining their impact on Kubernetes' Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA), and proposing mitigation strategies for more resilient systems.
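The attack surface can be seen directly in the HPA's documented scaling rule, desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric): a sustained, slight elevation of the metric ratchets the replica count upward at every evaluation. The sketch below omits the HPA's tolerance band and stabilization windows:

```python
import math

# Kubernetes' documented HPA core rule (tolerance band and cooldowns omitted):
#   desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
# Keeping currentMetric just above targetMetric nudges the replica count
# up at every evaluation -- the essence of a "low and slow" attack.

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))
```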
Responsibilities:
- Conduct a thorough literature review to identify existing knowledge gaps and research on similar attacks.
- Develop methodologies to simulate low and slow attack scenarios on Kubernetes clusters with varying configurations of autoscaling mechanisms.
- Analyze the impact of these attacks on resource utilization, service availability, and overall system performance.
- Evaluate current defense mechanisms and propose novel strategies to enhance the resilience of Kubernetes' autoscaling features.
- Implement and test selected mitigation approaches in a controlled environment.
- Document findings, present a comparative analysis of effectiveness, and discuss implications for future development in cloud security practices.
Requirements:
- A strong background in computer engineering, computer science or a related field.
- Familiarity with Kubernetes architecture and container orchestration concepts.
- Experience in deploying and managing applications on Kubernetes clusters.
- Proficiency in at least one scripting/programming language (e.g., Python, Go).
- Understanding of cloud computing and cybersecurity fundamentals.
Nice to Have:
- Prior research or hands-on experience in cloud security, particularly in the context of Kubernetes.
- Knowledge of network protocols and low-level system interactions.
- Experience with DevOps tools and practices.
Contact
Email: navid.asadi@tum.de
Supervisor:
Working Student/Research Internship - On-Device Training on Microcontrollers
Description
We are seeking a highly motivated and skilled student to replicate a research paper that explores the application of pruning techniques for on-device training on microcontrollers. The original paper demonstrated the feasibility of deploying deep neural networks on resource-constrained devices, and achieved significant reductions in model size and computational requirements while maintaining acceptable accuracy.
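As a reference point for the techniques involved, the sketch below implements plain global magnitude pruning, a common baseline in this line of work (whether it matches the paper's exact method is an assumption): the fraction `sparsity` of weights with the smallest absolute value is zeroed out.

```python
# Global magnitude pruning: zero out the fraction `sparsity` of weights
# with the smallest absolute value. A common baseline; whether it matches
# the replicated paper's exact method is an assumption.

def magnitude_prune(weights, sparsity):
    """weights: flat list of floats; sparsity in [0, 1). Returns a new list."""
    k = int(len(weights) * sparsity)          # number of weights to zero
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(order[:k])                     # indices of the k smallest |w|
    return [0.0 if i in drop else w for i, w in enumerate(weights)]
```

On a microcontroller the point of such pruning is that zeroed weights can be skipped or stored sparsely, cutting both memory and multiply-accumulate counts.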
Responsibilities:
- Extend our existing framework by implementing the pruning techniques on a microcontroller-based platform (e.g., Arduino, ESP32)
- Replicate the experiments described in the original paper to validate the results
- Evaluate the performance of the pruned models on various benchmark datasets
- Compare the results with the original paper and identify areas for improvement
- Document the replication process, results, and findings in a clear and concise manner
Requirements:
- Strong programming skills in C and Python
- Experience with deep learning frameworks (e.g., TensorFlow, PyTorch) and microcontroller-based platforms
- Familiarity with pruning techniques for neural networks is a plus
- Excellent analytical and problem-solving skills
- Ability to work independently and manage time effectively
- Strong communication and documentation skills
Contact
Email: navid.asadi@tum.de
Supervisor:
Working Student - Machine Learning Serving on Kubernetes
Machine Learning, Kubernetes, Containerization, Docker, Orchestration, Cloud Computing, MLOps, Machine Learning Operations, DevOps, Microservices Architecture,
Description
We are seeking an ambitious and forward-thinking working student to join our dynamic team working at the intersection of Machine Learning (ML) and Kubernetes. In this exciting role, you will be immersed in a cutting-edge environment where advanced ML models meet the power of container orchestration through Kubernetes. Your contributions will directly impact the development and optimization of scalable and robust ML serving systems leveraging the benefits of Kubernetes.
If you are a student passionate about both Machine Learning and Kubernetes, we invite you to join us on this exciting journey! We offer the chance to pioneer cutting-edge solutions that leverage the power of these two transformative technologies.
Responsibilities:
- Collaborate with a cross-functional team to design and implement ML workflows on Kubernetes.
- Assist in packaging and deploying ML models as microservices using containers (Docker) and managing them effectively through Kubernetes.
- Optimize resource allocation, scheduling, and scaling strategies for efficient model serving at varying workloads.
- Implement monitoring solutions specific to ML inference tasks within the Kubernetes cluster.
- Troubleshoot and debug issues related to containerized ML applications
- Document best practices, tutorials, and guides on leveraging Kubernetes for ML serving
Requirements:
- Currently enrolled in a Bachelor's or Master's program in the School of CIT
- Strong programming skills in Python with experience in software development lifecycle methodologies.
- Familiarity with machine learning frameworks such as TensorFlow and PyTorch.
- Proficiency in container technologies. Docker and Kubernetes certification would be a plus but not mandatory.
- Experience with cloud computing platforms; e.g., AWS, GCP or Azure.
- Demonstrated ability to work independently with effective time management and strong problem-solving analytical skills.
- Excellent communication and teamwork capabilities.
Nice to Have:
- Kubernetes Certification: Having a valid Kubernetes certification (CKA, CKAD, or CKS) demonstrates your expertise in container orchestration and can be a significant advantage.
- Experience with DevOps and/or MLOps Tools: Familiarity with MLOps tools such as MLflow, Kubeflow, or TensorFlow Extended (TFX) can help you streamline the machine learning workflow and improve collaboration. Experience with OpenTelemetry, Jaeger, Istio, and monitoring tools is a plus.
- Knowledge of Distributed Systems: Understanding distributed systems architecture and design patterns can help you optimize the performance and scalability of your machine learning models.
- Contributions to Open-Source Projects: Having contributed to open-source projects related to Kubernetes, machine learning, or MLOps demonstrates your ability to collaborate with others and adapt to new technologies.
- Familiarity with Agile Methodologies: Knowledge of agile development methodologies such as Scrum or Kanban can help you work efficiently in a fast-paced environment and deliver results quickly.
- Cloud-Native Application Development: Experience with cloud-native application development using frameworks like Cloud Foundry or AWS Cloud Development Kit (CDK) can be beneficial in designing scalable and efficient machine learning workflows.
Contact
Email: navid.asadi@tum.de
Supervisor:
Working Student for the Edge AI Testbed
IoT, Edge Computing, Machine Learning, Measurement, Power Characterization
Description
We are seeking a highly motivated and enthusiastic Working Student to join our team as part of the Edge AI Testbed project. As a Working Student, a key member of our research team, you will contribute to the development and testing of cutting-edge Artificial Intelligence (AI) systems at the edge of the network. You will work closely with our researchers and engineers to design, implement, and evaluate innovative AI solutions that can operate efficiently on resource-constrained edge devices.
Responsibilities:
- Assist in designing and implementing AI models for edge computing
- Develop and test software components for the Edge AI Testbed
- Collaborate with team members to integrate AI models with edge hardware platforms
- Participate in performance optimization and evaluation of AI systems on edge devices
- Contribute to the development of tools and scripts for automated testing and deployment
- Document and report on project progress, results, and findings
If you are a motivated and talented student looking to gain hands-on experience in Edge AI, we encourage you to apply for this exciting opportunity!
Requirements:
- Currently enrolled in a Bachelor's or Master's program in the School of CIT
- Strong programming skills in languages such as Python and C++
- Experience with AI frameworks such as TensorFlow, PyTorch, or Keras
- Familiarity with edge computing platforms and devices (e.g., Raspberry Pi, NVIDIA Jetson)
- Basic knowledge of Linux operating systems and shell scripting
- Excellent problem-solving skills and ability to work independently
- Strong communication and teamwork skills
Nice to Have:
- Experience with containerization using Docker
- Familiarity with container orchestration platforms (e.g., Kubernetes)
- Experience with Apache Ray
- Knowledge of computer vision or natural language processing
- Participation in open-source projects or personal projects related to AI and edge computing
Contact
Email: navid.asadi@tum.de
Supervisor:
An AI Benchmarking Suite for Microservices-Based Applications
Kubernetes, Deep Learning, Video Analytics, Microservices
Description
In the realm of AI applications, the deployment strategy significantly impacts performance metrics.
This research internship aims to investigate and benchmark AI applications in two predominant deployment configurations: monolithic and microservices-based, specifically within Kubernetes environments.
The central question revolves around understanding how these deployment strategies affect various performance metrics and determining the more efficient configuration. This inquiry is crucial as the deployment strategy plays a pivotal role in the operational efficiency of AI applications.
Currently, the field lacks a comprehensive benchmarking suite that evaluates AI applications from an end-to-end deployment perspective. Our approach includes the development of a benchmarking suite tailored for microservice-based AI applications.
This suite will capture metrics such as CPU/GPU/Memory utilization, interservice communication, end-to-end and per-service latency, and cache misses.
Prerequisites
- Familiarity with Kubernetes
- Familiarity with Deep Learning frameworks (e.g., PyTorch or TensorFlow)
- Basics of computer networking
Contact
Email: navid.asadi@tum.de
Supervisor:
Performance Evaluation of Serverless Frameworks
Serverless, Function as a Service, Machine Learning, Distributed ML
Description
Serverless computing is a cloud computing paradigm that separates infrastructure management from software development and deployment. It offers advantages such as low development overhead, fine-grained unmanaged autoscaling, and reduced customer billing. From the cloud provider's perspective, serverless reduces operational costs through multi-tenant resource multiplexing and infrastructure heterogeneity.
However, the serverless paradigm also comes with challenges. First, a systematic methodology is needed to assess the performance of heterogeneous open-source serverless solutions; to our knowledge, existing surveys lack a thorough comparison between these frameworks. Second, there are inherent challenges associated with the serverless architecture, specifically due to its short-lived and stateless nature.
Requirements:
- Familiarity with Kubernetes
- Basics of computer networking
Contact
Email: navid.asadi@tum.de
Supervisor:
Investigation of Flexibility vs. Sustainability Tradeoffs in 6G
Description
5G networks brought significant performance improvements for different service types like augmented reality, virtual reality, online gaming, live video streaming, robotic surgeries, etc., by providing higher throughput, lower latency, higher reliability as well as the possibility to successfully serve a large number of users. However, these improvements do not come without any costs. The main consequence of satisfying the stringent traffic requirements of the aforementioned applications is excessive energy consumption.
Therefore, making the cellular networks sustainable, i.e., constraining their power consumption, is of utmost importance in the next generation of cellular networks, i.e., 6G. This goal is of interest mostly to cellular network operators. Of course, while achieving network sustainability, the satisfaction of all traffic requirements, which is of interest to cellular users, must be ensured at all times. While these are opposing goals, a certain balance has to be achieved.
In this thesis, the focus is on the type of services known as eMBB (enhanced mobile broadband). These services are latency-tolerant to a certain extent, but sensitive to throughput and its stability; live video streaming is a use case falling into this category. For these applications, on the one hand, higher data rates imply higher energy consumption. On the other hand, users can be satisfied with a slightly lower throughput as long as the provided data rate is constant, which is exactly the flexibility the network operator can exploit. Hence, the question to be answered in this thesis is: what is the optimal trade-off between data rate and energy consumption in a cellular network with eMBB users? To answer it, the entire communication process will be considered, i.e., from the transmitting user through the base station and core network to the receiving end. The student will formulate an optimization problem, solve it with exact optimization solvers, and also propose simpler algorithms (heuristics) that reduce the solution time without considerably deteriorating system performance.
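The core rate-energy coupling can be illustrated with a Shannon-capacity link model, under which the transmit power needed for rate R over bandwidth B with noise power N is P = N * (2^(R/B) - 1); power grows exponentially in the rate, which is precisely what the operator's flexibility can exploit. The model choice and any concrete figures are illustrative assumptions:

```python
# AWGN link model: minimum transmit power to sustain rate R (bit/s) over
# bandwidth B (Hz) with noise power N (W) is P = N * (2**(R/B) - 1).
# The model and any numbers fed to these functions are illustrative.

def min_power(rate_bps, bandwidth_hz, noise_power_w):
    """Minimum transmit power (W) for the requested rate."""
    return noise_power_w * (2.0 ** (rate_bps / bandwidth_hz) - 1.0)

def power_saving(rate_hi, rate_lo, bandwidth_hz, noise_power_w):
    """Fraction of power saved by serving users at rate_lo instead of rate_hi."""
    p_lo = min_power(rate_lo, bandwidth_hz, noise_power_w)
    p_hi = min_power(rate_hi, bandwidth_hz, noise_power_w)
    return 1.0 - p_lo / p_hi
```

For example, halving the rate from 2 Mbit/s to 1 Mbit/s over 1 MHz cuts the required power by two thirds under this model, hinting at the size of the tradeoff the thesis will quantify rigorously.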
Prerequisites
- Good knowledge of any programming language
- Good mathematical and analytical thinking skills
- High level of self-engagement and motivation
Contact
valentin.haider@tum.de
fidan.mehmeti@tum.de
Supervisor:
Intel's IPU: Starting from the beginning
Description
Intel develops network devices consisting of an FPGA and a general-purpose processor, the so-called IPUs. The goal of this thesis/position is to get such an IPU (Intel IPU F2000X) up and running and to evaluate its potential: program a custom IPU application and evaluate metrics like latency, throughput, and many more under varying circumstances.
Prerequisites
- Basic knowledge of the Linux terminal
- Basic knowledge of C/C++
- Basic knowledge of FPGAs
Supervisor:
DPU as Measurement Cards and Load Generators
Beschreibung
Data centers experience ever higher and more demanding network loads and traffic. Companies like Nvidia have developed special networking hardware to meet these demands (the Nvidia BlueField line-up). These cards promise high throughput and high precision. The features required to achieve this also make BlueField cards usable as measurement cards or as load generators.
The goal of this thesis/position is to evaluate the performance and feasibility of this approach.
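A minimal software pacer illustrates what a load generator fundamentally does; a BlueField card would perform this pacing in hardware with far better precision. The rates below are arbitrary.

```python
# Busy-wait packet pacer: emit `pps` packets per second for `duration_s`.
# This is the software baseline a DPU-based load generator would be
# compared against; `send` is a placeholder for the actual transmit call.
import time

def generate_load(pps, duration_s, send=lambda: None):
    interval = 1.0 / pps
    sent = 0
    start = time.perf_counter()
    while (now := time.perf_counter()) - start < duration_s:
        # Send the next packet once its scheduled departure time passes.
        if now - start >= sent * interval:
            send()
            sent += 1
    return sent

count = generate_load(pps=10000, duration_s=0.1)
print(count)  # close to 1000 packets in 0.1 s on an unloaded machine
```

Even this simple loop shows why precision matters: software pacing jitters with scheduling, whereas hardware timestamping on the card avoids it.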
For more information, please contact me directly (philip.diederich@tum.de)
Voraussetzungen
- Basic knowledge of the Linux terminal
- Basic knowledge of Python
- Basic knowledge of C/C++
Betreuer:
Advancing Real-time Network Simulations to Real World Behaviour
Beschreibung
Testing real-time applications and networks is very timing-sensitive. It is very hard to achieve this precision and accuracy in the real world. However, the real world also behaves differently than simulations do. Our simulator behaves as the theory dictates and gives us this precise timing, but it needs to be tested and extended to behave more like a real network would.
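A minimal illustration of the gap, with purely made-up numbers: the simulator applies the theoretical constant delay, while a real network adds random jitter. One way to extend the simulator is to sample a residual delay from a measured distribution (a log-normal here, purely as a stand-in).

```python
# Deterministic simulated delay vs. a jittery "real-world" delay model.
# The log-normal residual is an illustrative choice, not a measurement.
import random

def simulated_delay_us(base_us=100.0):
    return base_us  # theory: every packet sees exactly the same delay

def realistic_delay_us(base_us=100.0, rng=random.Random(42)):
    return base_us + rng.lognormvariate(0, 1)  # base plus random residual

sim = [simulated_delay_us() for _ in range(1000)]
real = [realistic_delay_us() for _ in range(1000)]

print(max(sim) - min(sim))        # 0.0: perfectly deterministic
print(max(real) - min(real) > 0)  # True: jitter appears
```

Fitting such a residual distribution from real traces and injecting it into NS-3 would be one concrete step towards the goal of this topic.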
Requirements
- Knowledge of NS-3
- Knowledge of Python
- Knowledge of C/C++
Please contact me for more information (philip.diederich@tum.de)
Betreuer:
Working Student - Real-Time Network Controller for Research
Beschreibung
Chameleon is a real-time network controller that guarantees packet latencies for admitted flows. However, Chameleon is designed to work in high-performance environments. For research and development, a different approach that offers more debugging and extension capabilities would suit us better.
Goals:
- Create a real-time network controller
- The controller needs to be easy to debug
- The controller needs to be easy to extend
- The controller needs to have good logging and tracing
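As a sketch of what "easy to debug, good logging" could look like, the skeleton below logs every admission decision. The capacity-based admission test is a placeholder, not Chameleon's queue-aware routing algorithm, and all names are illustrative.

```python
# Minimal admission-controller skeleton with traceable decisions.
# The admission rule (sum of rates vs. link capacity) is a stand-in
# for a real latency-guaranteeing check.
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("rt-controller")

@dataclass
class Flow:
    flow_id: str
    rate_mbps: float
    deadline_ms: float

@dataclass
class Controller:
    capacity_mbps: float
    admitted: list = field(default_factory=list)

    def admit(self, flow: Flow) -> bool:
        used = sum(f.rate_mbps for f in self.admitted)
        ok = used + flow.rate_mbps <= self.capacity_mbps
        # Every decision is logged, so rejected flows can be traced later.
        log.info("flow=%s rate=%.1f deadline=%.1f -> %s",
                 flow.flow_id, flow.rate_mbps, flow.deadline_ms,
                 "ADMIT" if ok else "REJECT")
        if ok:
            self.admitted.append(flow)
        return ok

c = Controller(capacity_mbps=100)
print(c.admit(Flow("f1", 60, 10)))  # True
print(c.admit(Flow("f2", 60, 10)))  # False: would exceed capacity
```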
Requirements:
- Advanced Knowledge of C/C++
- Advanced Knowledge of Python
Please contact me for more information (philip.diederich@tum.de)
Amaury Van Bemten, Nemanja Đerić, Amir Varasteh, Stefan Schmid, Carmen Mas-Machuca, Andreas Blenk, and Wolfgang Kellerer. 2020. Chameleon: Predictable Latency and High Utilization with Queue-Aware and Adaptive Source Routing. In The 16th International Conference on emerging Networking EXperiments and Technologies (CoNEXT '20), December 1–4, 2020, Barcelona, Spain. ACM, New York, NY, USA, 15 pages. https://doi.org/10.1145/3386367.3432879
Betreuer:
Controlling Stochastic Network Flows for Real-time Networking
Beschreibung
Any data sent in a real-time network is monitored and accounted for. This allows us, with the help of mathematical frameworks, to calculate upper bounds on the latency of a flow. These frameworks and controllers often consider hard real-time guarantees, meaning that every packet arrives on time, every time. With soft real-time guarantees this is not the case: here, we are allowed some leeway.
In this thesis, we want to explore how we can model and admit network flows that have a stochastic nature.
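For context, the hard-guarantee case can be made concrete with deterministic network calculus: a token-bucket-constrained flow (burst b, rate r) crossing a rate-latency server (rate R, latency T) has worst-case delay T + b/R whenever r <= R. The helper below just evaluates that textbook bound; a stochastic extension would instead bound the probability of exceeding a delay.

```python
# Deterministic network-calculus delay bound for a token-bucket flow
# served by a rate-latency node. Units: kbit / (Mbit/s) = ms.
def hard_delay_bound_ms(burst_kb, rate_mbps, srv_rate_mbps, srv_latency_ms):
    assert rate_mbps <= srv_rate_mbps, "flow rate must not exceed service rate"
    # Worst case: the full burst is drained at the service rate,
    # on top of the server's own latency term.
    return srv_latency_ms + (burst_kb * 8) / srv_rate_mbps

bound = hard_delay_bound_ms(burst_kb=125, rate_mbps=50,
                            srv_rate_mbps=100, srv_latency_ms=1.0)
print(bound)  # 11.0 ms: 1 ms latency + a 125 KB burst drained at 100 Mbit/s
```

Every packet of the flow is guaranteed to stay below this bound; the thesis topic asks what changes when arrivals are stochastic and occasional violations are tolerable.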
Please contact me for more information (philip.diederich@tum.de).
Betreuer:
Working Student: Framework for Testing Realtime Networks
Beschreibung
Testing a Network Controller, custom real-time protocols, or verifying simulations with emulations requires a lot of computing effort. This is why we are developing a framework that helps you run parallel networking experiments. This framework also increases the reproducibility of any networking experiment.
The main task of this position is to help develop the general-purpose framework for executing parallel networking experiments.
Tasks:
- Continue developing the framework for multi-server / multi-app usage
- Extend the web capabilities of the framework
- Automate starting and stopping
- Ease-of-use improvements
- Test the functionality
Requirements:
- Knowledge of Python
- Basic Knowledge of Web-App Development (FastApi, React etc...)
- Basic Knowledge of System Architecture Development
Feel free to contact me by mail (philip.diederich@tum.de)
Betreuer:
Working Student Infrastructure Service Management
Beschreibung
We are seeking a highly motivated and detail-oriented Working Student to join our data center team. As a Working Student, you will assist in the daily operations of our data center, gaining hands-on experience in a fast-paced and dynamic environment.
Responsibilities:
Assist with regular data center tasks, such as:
- Rack and Stack equipment
- Cable Management and organization
- Perform basic troubleshooting and maintenance tasks
- Assist with inventory management
- Monitor data center systems and report any discrepancies or issues
- Create the basis for our Data Center Infrastructure Management
- Develop and maintain documentation of data center procedures and policies
- Perform other duties as required to support the data center operations
Requirements
- Availability to work 8 - 10 hours per week with flexible scheduling to accommodate academic commitments
- Basic knowledge of computer systems, networks, and data center operations
- Basic knowledge in Python
Betreuer:
Development of a GUI for Monitoring and Debugging a Digital Twin of QKD Networks
GUI
Quantum key distribution (QKD) is a promising technology for providing secure communication even in the presence of powerful quantum computers. Due to its time-dependent behavior and multi-layer architecture, the analysis of routing policies and network performance parameters can be done by emulation. Our network emulator, implemented on the basis of containers and network function virtualization, allows the analysis of network performance parameters and the optimization of routing policies.
Beschreibung
We are looking for a student to build a GUI that simplifies analysis of and interaction with the network emulator. The emulator is based on Containernet and includes QKD-specific network function virtualization. Currently, distributed routing is supported; this will be extended by centralized routing. Monitoring data from active QKD links is fed in to mirror realistic circumstances.
- Build a front-end displaying performance and operational data
- Build a GUI for dynamically changing secret key rates
Voraussetzungen
- Programming skills in Python
- Experience in front-end web development
- Interest in security and practical concepts of guaranteed security
Kontakt
Mario Wenning mario.wenning@tum.de
Betreuer:
Distributed Deep Learning for Video Analytics
Distributed Deep Learning, Distributed Computing, Video Analytics, Edge Computing, Edge AI
Beschreibung
In recent years, deep learning-based algorithms have demonstrated superior accuracy in video analysis tasks, and scaling up such models, i.e., designing and training larger models with more parameters, can improve their accuracy even further.
On the other hand, due to strict latency requirements as well as privacy concerns, there is a tendency towards deploying video analysis tasks close to the data sources, i.e., at the edge. However, compared to dedicated cloud infrastructures, edge devices (e.g., smartphones and IoT devices) as well as edge clouds are constrained in terms of compute, memory, and storage resources, which consequently leads to a trade-off between response time and accuracy.
Considering video analysis tasks such as image classification and object detection as the application at the heart of this project, the goal is to evaluate different deep learning model distribution techniques for a scenario of interest.
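One of the distribution techniques in question, layer-wise model splitting, can be illustrated without any deep learning framework: each "device" hosts a contiguous slice of layers and forwards its intermediate output to the next. The functions below are stand-ins for real DNN layers.

```python
# Layer-wise model partitioning with plain functions as mock layers.
# A real system would split a CNN and ship intermediate tensors over
# the network between edge device and edge server.
def make_layer(weight):
    return lambda x: [weight * v for v in x]

layers = [make_layer(w) for w in (2, 3, 5)]  # stand-in for a 3-layer DNN

def run_on_device(layer_slice, x):
    for layer in layer_slice:
        x = layer(x)
    return x

x = [1.0, 1.0]
intermediate = run_on_device(layers[:2], x)       # "edge device": layers 0-1
output = run_on_device(layers[2:], intermediate)  # "edge server": layer 2
print(output)  # [30.0, 30.0]: identical to running all layers locally
```

The evaluation then turns on where to place the split so that the intermediate tensor is small and both halves fit the devices' resources.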
Kontakt
Email: navid.asadi@tum.de
Betreuer:
Edge AI in Adversarial Environment: A Simplistic Byzantine Scenario
Distributed Deep Learning, Distributed Computing, Byzantine Attack, Adversarial Inference
Beschreibung
This project considers an environment consisting of several low performance machines which are connected together across a network.
Edge AI has drawn the attention of both academia and industry as a way to bring intelligence to edge devices to enhance data privacy as well as latency.
Prior works investigated improving the accuracy-latency trade-off of Edge AI by distributing a model across multiple available and idle machines. Building on top of those works, this project adds one more dimension: a scenario where f out of n contributing nodes are adversarial.
Therefore, for each data sample an adversary (1) may not provide an output (which can also be considered a faulty node) or (2) may provide an arbitrary (i.e., randomly generated) output.
The goal is to evaluate the robustness of different parallelism techniques in terms of achievable accuracy in the presence of malicious contributors and/or faulty nodes.
Note that contrary to the mainstream existing literature, this project mainly focuses on the inference (i.e., serving) phase of deep learning algorithms, and although robustness of the training phase can be considered as well, it has a much lower priority.
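One simple way to picture the scenario, assuming a replicated (ensemble-style) deployment where every node returns a label for the same sample, is majority voting at the aggregator. All names and numbers below are illustrative.

```python
# Toy model of f-out-of-n Byzantine inference: honest nodes return the
# true class, adversaries return a random label or nothing, and the
# aggregator takes a majority vote over the outputs that arrived.
from collections import Counter
import random

def aggregate(outputs):
    """Majority vote over the labels that actually arrived."""
    votes = [o for o in outputs if o is not None]
    return Counter(votes).most_common(1)[0][0] if votes else None

rng = random.Random(0)
true_label, n, f = "cat", 7, 2
outputs = [true_label] * (n - f)                          # honest replicas
outputs += [rng.choice(["dog", None]) for _ in range(f)]  # adversaries

print(aggregate(outputs))  # "cat": the majority survives f < n/2 adversaries
```

Other parallelism schemes (e.g., splitting one model across nodes) lack this redundancy, which is exactly why their robustness under the same f-out-of-n assumption is worth evaluating.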
Kontakt
Email: navid.asadi@tum.de
Betreuer:
On the Efficiency of Deep Learning Parallelism Schemes
Distributed Deep Learning, Parallel Computing, Inference, AI Serving
Beschreibung
Deep learning models are becoming increasingly large, such that most state-of-the-art model architectures are either too big to be deployed on a single machine or cause performance issues such as undesired delays.
This is not only true for the largest models deployed in high-performance cloud infrastructures but also for smaller and more efficient models that are designed to have fewer parameters (and hence lower accuracy) in order to be deployed on edge devices.
That said, this project considers the second environment, where multiple resource-constrained machines are connected through a network.
Continuing the research towards distributing deep learning models into multiple machines, the objective is to generate more efficient variants/submodels compared to existing deep learning parallelism algorithms.
Note that this project mainly focuses on the inference (i.e., serving) phase of deep learning algorithms, and although efficiency of the training phase can be considered as well, it has a much lower priority.
Kontakt
Email: navid.asadi@tum.de
Betreuer:
Optimizing Distributed Deep Learning Inference
deep learning, distributed systems, parallel computing, model parallelism, communication overhead reduction, performance evaluation, edge devices
Beschreibung
The rapid growth in size and complexity of deep learning models has led to significant challenges in deploying these architectures across resource-constrained machines interconnected through a network. This research project focuses on optimizing the deployment of deep learning models at the edge, where limited computational resources and high-latency networks hinder performance. The main objective is to develop efficient distributed inference techniques that can overcome the limitations of edge devices, ensuring real-time processing and decision-making.
The successful candidate will work on addressing the following challenges:
- Employing model parallelism techniques to distribute workload across compute nodes while minimizing communication overhead associated with exchanging intermediate tensors between nodes.
- Reducing inter-operator blocking to improve overall system throughput.
- Developing efficient compression techniques tailored for deep learning data exchanges to minimize network latency.
- Evaluating the performance of proposed modifications using standard deep learning benchmarks and real-world datasets.
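As one concrete instance of "compression techniques tailored for deep learning data exchanges", the sketch below applies plain 8-bit linear quantization to an intermediate tensor before transmission (a pure-Python stand-in; a real system would operate on framework tensors and pick the scheme empirically).

```python
# 8-bit linear (min-max) quantization of an intermediate tensor.
# The receiver needs only the byte payload plus (lo, scale) to restore
# an approximation, shrinking each 32-bit float to one byte.
def quantize(t, bits=8):
    lo, hi = min(t), max(t)
    scale = (hi - lo) / (2**bits - 1) or 1.0  # avoid zero for constant tensors
    return [round((v - lo) / scale) for v in t], lo, scale

def dequantize(q, lo, scale):
    return [lo + v * scale for v in q]

tensor = [0.0, 0.5, 1.0, 2.0]        # toy intermediate activation values
q, lo, scale = quantize(tensor)
restored = dequantize(q, lo, scale)

# The reconstruction error stays within one quantization step.
print(max(abs(a - b) for a, b in zip(tensor, restored)) < scale)  # True
```

Whether such a scheme helps end-to-end depends on whether the saved transfer time outweighs the accuracy loss, which is precisely what the evaluation tasks above would measure.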
Responsibilities:
- Implement and evaluate various parallelism techniques, such as model parallelism and variant parallelism, from a communication efficiency perspective.
- Identify and implement mechanisms to minimize the exchange of intermediate tensors between compute nodes, potentially using advanced compression techniques tailored for deep learning data exchanges.
- Conduct comprehensive performance evaluations of proposed modifications using standard deep learning benchmarks and real-world datasets. Assess improvements in latency, resource efficiency, and overall system throughput compared to baseline configurations.
- Write technical reports and publications detailing the research findings.
Requirements:
- Pursuing a Master's degree in the School of CIT
- Strong background in deep learning, distributed systems, and parallel computing.
- Proficiency in Python and experience with deep learning frameworks (e.g., TensorFlow, PyTorch).
- Excellent problem-solving skills and the ability to work independently and collaboratively as part of a team.
- Strong communication and writing skills for technical reports and publications.
Kontakt
Email: navid.asadi@tum.de