Laufende Arbeiten

Bachelorarbeiten

Parametrization of Humanoid Activity Animations

Beschreibung

The goal of this bachelor thesis is to create a tool for parameterizing human activity animations using parametric 3D curves and to integrate them with inverse kinematics (IK). The tool should allow flexible modification of the animation trajectories. The idea is based on https://studios.disneyresearch.com/2019/10/27/parameterized-animated-activities/
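
As a rough illustration of the parameterization idea, the sketch below evaluates a cubic Bézier curve as a parametric 3D trajectory whose waypoints could then be handed to an IK solver. The control points, sampling density, and function names are illustrative assumptions, not part of an existing tool.

```python
# Minimal sketch: a cubic Bezier curve as a parametric 3D trajectory.
# Control points and names are illustrative, not part of the thesis codebase.
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameters t in [0, 1]."""
    t = np.asarray(t)[:, None]                      # shape (N, 1)
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Four 3D control points could define, e.g., a hand trajectory for a reaching motion.
p0, p1, p2, p3 = map(np.array, ([0, 0, 0], [0.2, 0.4, 0.1], [0.5, 0.4, 0.3], [0.7, 0.0, 0.4]))
trajectory = cubic_bezier(p0, p1, p2, p3, np.linspace(0.0, 1.0, 50))   # (50, 3) waypoints

# Each waypoint would then be passed as an end-effector target to an IK solver;
# editing the control points reshapes the whole animation trajectory.
```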

Kontakt

marsil.zakour@tum.de

Betreuer:

Marsil Zakour

Analysis of Human Activities Using Interactive Segmentation Models

Beschreibung

The goal of this bachelor thesis is to use state-of-the-art interactive video segmentation models such as MiVOS (https://hkchengrex.github.io/MiVOS/) to efficiently segment objects and enrich them with further information about the object state.

Kontakt

marsil.zakour@tum.de

Betreuer:

Marsil Zakour

Task Parameterized Skill Learning from Demonstration

Stichworte:
LfD, imitation learning, haptics

Beschreibung

Task Parameterized Skill Learning from Demonstration

Betreuer:

Maximizing Success on the Streaming Platform YouTube

Beschreibung

...

Betreuer:

Haptic data reduction for the position-position teleoperation control architecture

Stichworte:
teleoperation control, haptics

Beschreibung

Using a teleoperation system with haptic feedback, users can truly immerse themselves into a distant environment, i.e., modify it and execute tasks without physically being present, but with the feeling of being there. A typical teleoperation system with haptic feedback (referred to as a teleoperation system) comprises three main parts: the human operator (OP)/master system, the teleoperator (TOP)/slave system, and the communication link/network in between. During teleoperation, the slave and master devices exchange multimodal sensor information over the communication link. This work aims to develop a haptic data reduction scheme based on a position-position teleoperation architecture and to compare its performance with the classical position-force control architecture.

 

Your work:

(1) Build up a teleoperation system that can switch between position-position and position-force architectures.

(2) Integrate the existing haptic data reduction scheme with the PP architecture (see the sketch below).

(3) Introduce delays and implement an existing passivity-based control scheme to ensure system stability.

(4) Compare the performance difference between the PF and PP architectures.
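
The existing haptic data reduction scheme referred to in task (2) is typically based on a perceptual deadband (Weber threshold). The sketch below shows such a transmit-or-skip rule in its simplest form; the threshold value, signal, and names are assumptions.

```python
# Minimal sketch of a perceptual-deadband data reduction rule (Weber threshold).
# The 10% threshold and variable names are illustrative assumptions.
import numpy as np

DEADBAND = 0.1          # relative Weber threshold (assumed)

def reduce_samples(signal):
    """Return the indices of samples that would be transmitted."""
    transmitted = [0]                    # always send the first sample
    last_sent = signal[0]
    for i, x in enumerate(signal[1:], start=1):
        if abs(x - last_sent) > DEADBAND * abs(last_sent):
            transmitted.append(i)
            last_sent = x
    return transmitted

# Example: a slowly drifting position signal; only perceptually relevant updates are sent.
sig = np.cumsum(np.random.randn(1000) * 0.01) + 1.0
idx = reduce_samples(sig)
print(f"transmitted {len(idx)} of {len(sig)} samples")
```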

Voraussetzungen

C++, MATLAB/Simulink

Betreuer:

Optimization of Saliency Map Creation

Stichworte:
Saliency maps, deep learning, computer vision

Beschreibung

Saliency maps can be interpreted as probability maps that assess a scene's attractiveness and highlight the regions a user is likely to look at. The objective of this thesis is to help create a novel dataset that records the head motions and gaze directions of participants watching 360° videos with varying scene dynamics. This dataset is then to be used to improve state-of-the-art saliency map creation algorithms and make them soft real-time capable. Deep learning has proven to be a robust technique for creating saliency maps. The student is expected either to use pruning techniques to boost the performance of state-of-the-art methods or to develop an own approach that offers a trade-off between accuracy and computational complexity.
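
One possible starting point for the pruning direction is magnitude-based weight pruning as provided by PyTorch. The sketch below prunes the convolution layers of a stand-in network; the model, pruning amount, and subsequent fine-tuning are assumptions.

```python
# Sketch: L1 magnitude pruning of a saliency network's conv layers with PyTorch.
# The tiny Sequential model is a stand-in for whatever state-of-the-art model is chosen.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(                 # placeholder saliency model
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)

for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)  # zero 30% of weights
        prune.remove(module, "weight")                            # make pruning permanent

# After pruning, the model would be fine-tuned on the new 360° gaze dataset and its
# runtime and accuracy compared against the unpruned baseline.
```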

Voraussetzungen

Computer Vision, Machine Learning, C++, Python

Betreuer:

Masterarbeiten

Incorporating AoA information to MPDC fingerprinting for indoor localization

Stichworte:
indoor positioning; fingerprinting; ULA

Beschreibung

Fuse AoA information with multipath delay components for robust indoor localization.
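
As a minimal illustration of fingerprint-based fusion (not the method to be developed), the sketch below matches a fused AoA/delay feature vector against a fingerprint database with weighted k-nearest neighbors; all data, weights, and dimensions are placeholders.

```python
# Sketch: weighted k-NN fingerprint matching on fused AoA + multipath-delay features.
# Database contents, feature weights, and k are illustrative assumptions.
import numpy as np

def knn_position(query, fingerprints, positions, weights, k=3):
    """Estimate a 2D position from the k closest fingerprints (weighted L2 distance)."""
    d = np.sqrt((((fingerprints - query) * weights) ** 2).sum(axis=1))
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-9)                # inverse-distance weighting
    return (positions[nearest] * w[:, None]).sum(axis=0) / w.sum()

# Each fingerprint = [AoA in rad, delay of path 1, delay of path 2, ...] at a known position.
fingerprints = np.random.rand(200, 4)
positions = np.random.rand(200, 2) * 10.0        # reference positions in metres
weights = np.array([2.0, 1.0, 1.0, 1.0])         # emphasise the AoA component (assumed)
print(knn_position(np.random.rand(4), fingerprints, positions, weights))
```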

Kontakt

fabian.seguel@tum.de

Office 2940

Betreuer:

Fabian Esteban Seguel Gonzalez

Optimizing network deployment for AI-aided AoA estimation

Stichworte:
AI; Network deployment; Indoor positioning

Beschreibung

The student will optimize the network architecture through simulations in order to obtain a robust position estimate in an indoor setup.

 

 

Voraussetzungen

matlab

AI/ML

 

Kontakt

fabian.seguel@tum.de

Office 2940

 

Betreuer:

Fabian Esteban Seguel Gonzalez

Robotic task learning from human demonstration using spherical representations

Beschreibung

Autonomous grasping and manipulation are complicated tasks which require precise planning and a high level of scene understanding. Although robot autonomy has been evolving for decades, there is still need for improvement, especially for operating in unstructured environments such as households. Human demonstration can further improve autonomous robot abilities and increase task success in different scenarios. In this thesis we will work on learning from human demonstration to improve robot autonomy.

Voraussetzungen

Required background:

- Digital signal processing

- Computer vision

- Neural networks and other ML algorithms

 

Required abilities:

- Experience with Python or C++

- Experience with Tensorflow or PyTorch

- Motivation to yield a good thesis

 

Kontakt

furkan.kaynar@tum.de

 

(Please provide your CV and transcript in your application)

 

Betreuer:

Hasan Furkan Kaynar

Blood perfusion imaging on the human hand

Beschreibung

Blood perfusion imaging on the human hand

Betreuer:

Rahul Chaudhari

Object detection and pose tracking for Human Activity Understanding

Beschreibung

Object detection and pose tracking for Human Activity Understanding

Betreuer:

Rahul Chaudhari

Developing of a method to solve the flexible job-shop scheduling problem with Graph Neural Networks (GNNs)

Beschreibung

...

Betreuer:

Hasan Furkan Kaynar

3D ground truth capture for hand-object interactions

Beschreibung

The Dex-YCB dataset from NVIDIA and the University of Washington provides visual data of several instances of human hands grasping physical objects. This topic is about augmenting the above dataset with ground-truth information from additional sensors, such as a VR hand glove and contact sensors.

Apart from ground truth for benchmarking purely visual algorithms, the auxiliary sensors can be fused together with visual data for improved reconstruction of hand-object interactions.

Reference: https://dex-ycb.github.io/

Voraussetzungen

  • Interest and experience in working with hardware -- multiview cameras, VR gloves, self-made embedded sensors.
  • Familiarity with ROS (www.ros.org)
  • C++ for data acquisition and python/C++ for data processing

Kontakt

https://www.ei.tum.de/lmt/team/mitarbeiter/chaudhari-rahul/

(Please provide your CV and transcript in your application)

Betreuer:

Rahul Chaudhari

Network Aware Imitation Learning

Stichworte:
Teleoperation, Learning from Demonstration

Beschreibung

When teaching robots remotely via teleoperation, the communication between the human demonstrator and the remote robot learner imposes challenges. Network delay, packet loss, and data compression might cause completely or partially degraded demonstration quality. In this thesis, we would like to make the best out of varying-quality demonstrations provided via teleoperation with haptic feedback. We will use different teleoperation setups to test the developed robot learning approaches.

Voraussetzungen

Requirements:

Experience in C/C++

ROS is a plus

High motivation to learn and conduct research

 

Betreuer:

Decentralized Formation Optimization for Future Swarm Missions

Beschreibung

...

Kontakt

Siwei.Zhang@dlr.de

Betreuer:

Eckehard Steinbach - Dr. Siwei Zhang (DLR)

Depth data analysis for investigating robotic grasp estimates

Beschreibung

Many robotic applications are based on computer vision, which relies on the sensor output. A typical example is semantic scene analysis, with which the robot plans its motions. Many computer vision algorithms are trained in simulation, which may or may not represent the actual sensor data realistically. Physical sensors are imperfect, and the resulting erroneous data may deteriorate the performance of the required tasks. In this thesis, we will analyze the sensor data and estimate its effects on the final robotic task.

Voraussetzungen

Required background:

- Digital signal processing

- Image analysis / Computer vision

- Neural networks and other ML algorithms

 

Required abilities:

- Experience with Python or C++

- Experience with Tensorflow or PyTorch

- Motivation to yield a good thesis

 

Kontakt

furkan.kaynar@tum.de

 

(Please provide your CV and transcript in your application)

 

Betreuer:

Hasan Furkan Kaynar

ToF Inverse Rendering with Deep Learning Techniques

Beschreibung

...

Betreuer:

Hand Pose Estimation Using Multi-View RGB-D Sequences

Stichworte:
Hand Object Interaction, Pose Estimation, Deep Learning

Beschreibung

In this project, the task is to fit a parametric hand mesh model and a set of rigid objects to a sequence from multi-view RGB-D cameras. Models for hand keypoint detection and 6DoF pose estimation of rigid objects have evolved significantly in recent years. Our goal is to utilize such models to estimate the hand and object poses.

Related Work

  1. https://dex-ycb.github.io/
  2. https://www.tugraz.at/institute/icg/research/team-lepetit/research-projects/hand-object-3d-pose-annotation/
  3. https://github.com/hassony2/obman
  4. https://github.com/ylabbe/cosypose

Voraussetzungen

  • Knowledge in computer vision.
  • Experience with segmentation models (e.g., Detectron2)
  • Experience with deep learning frameworks (PyTorch or TensorFlow 2.x).
  • Experience with Pytorch3D is a plus.

Kontakt

marsil.zakour@tum.de

Betreuer:

Marsil Zakour

Generalizable Neural Rendering for Indoor Scene Analysis

Beschreibung

Neural rendering refers to the set of generative deep learning methods that enable the extraction and manipulation of scene properties such as semantic information, geometry, and illumination [1]. The field being relatively new, most of the methods revolve around the idea of representing scene properties implicitly with neural networks. Recent works utilize differentiable rendering to backproject color values from posed images [2]. This idea is further extended by the work titled Neural Radiance Fields (NeRF), which also predicts density [3]. Nevertheless, the very low convergence rate and the requirement of many structured camera viewpoints hinder the way to real-life applications.

In this Master's level research project, the student will investigate how to utilize prior information about scenes. This information will be used in generalizable neural rendering methods, enabling large-scale scene analysis from sparse camera viewpoints.

[1] Tewari, Ayush, et al. "State of the art on neural rendering." 

[2] Niemeyer, Michael, et al. "Differentiable volumetric rendering: Learning implicit 3d representations without 3d supervision."

[3] Mildenhall, Ben, et al. "Nerf: Representing scenes as neural radiance fields for view synthesis."
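
The core rendering step of NeRF [3] is a numerical quadrature of the volume rendering integral along each camera ray. The sketch below shows this compositing step for a single ray, with random densities and colors standing in for the network outputs.

```python
# Sketch: NeRF-style volume rendering along one ray (quadrature of the rendering integral).
# sigma (densities) and rgb (colours) would come from the network; here they are random.
import numpy as np

def render_ray(sigma, rgb, deltas):
    """Composite per-sample colours into a single pixel colour."""
    alpha = 1.0 - np.exp(-sigma * deltas)                          # opacity of each segment
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha))[:-1])  # transmittance T_i
    weights = trans * alpha                                        # w_i = T_i * alpha_i
    return (weights[:, None] * rgb).sum(axis=0), weights

n = 64                                                             # samples along the ray
sigma = np.random.rand(n) * 5.0
rgb = np.random.rand(n, 3)
deltas = np.full(n, 1.0 / n)                                       # uniform sample spacing
color, w = render_ray(sigma, rgb, deltas)
print(color, w.sum())            # w.sum() <= 1, the remainder would go to the background
```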

 

Voraussetzungen

  • Experience in Python
  • Experience in machine learning, data processing and scientific computing frameworks such as: NumPy, SciPy, Tensorflow, Pytorch, Matplotlib, Pandas
  • Experience in multiview geometry

Nice to have

  • Experience with neural rendering frameworks

Kontakt

cem.eteke@tum.de

Betreuer:

Cem Eteke

Robust light-weight semantic segmentation of large-scale 3D scenes

Beschreibung

Semantic segmentation from point clouds is a challenging task, since methods have to extract both local and global features of a scene. Inspired by pioneering works such as PointNet [1], 3D scene modeling from raw point cloud data has seen significant growth [2]. Many of the successful models focus on increasing the capacity and the receptive capabilities of the model. The frameworks used in these works have high memory complexity, high time complexity, or both. A more recent work further expands the receptive field to make the models efficient [3]. However, a larger receptive field may average out small details and may hinder performance when the model is fed with out-of-distribution samples.

In this Master's level research project, the student will investigate how to account for the shortcomings of a large receptive field. The goal is to transfer pretrained lightweight models to out-of-distribution data for the semantic segmentation task.

References

[1] Qi, Charles R., et al. "Pointnet: Deep learning on point sets for 3d classification and segmentation." 

[2] Guo, Yulan, et al. "Deep learning for 3d point clouds: A survey." 

[3] Hu, Qingyong, et al. "Randla-net: Efficient semantic segmentation of large-scale point clouds." 
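
For intuition, RandLA-Net [3] builds its large receptive field from random point subsampling followed by k-nearest-neighbor grouping. The sketch below reproduces only these two sampling steps on a synthetic cloud; the learned feature aggregation is omitted and all sizes are assumptions.

```python
# Sketch: the two sampling steps RandLA-Net [3] relies on, random point subsampling
# and k-nearest-neighbour grouping, without any learned feature aggregation.
import numpy as np

def random_subsample(points, ratio=0.25):
    idx = np.random.choice(len(points), int(len(points) * ratio), replace=False)
    return points[idx]

def knn_groups(centers, points, k=16):
    """For each center, return the indices of its k nearest points."""
    d = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=-1)
    return np.argsort(d, axis=1)[:, :k]

cloud = np.random.rand(2048, 3)             # a raw point cloud (placeholder)
centers = random_subsample(cloud)           # 512 surviving points
groups = knn_groups(centers, cloud)         # (512, 16) neighbourhood indices
print(centers.shape, groups.shape)
```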

 

Voraussetzungen

  • Experience in Python
  • Experience in machine learning, data processing and scientific computing frameworks such as: NumPy, SciPy, Tensorflow, Pytorch, Matplotlib, Pandas
  • Experience in 3D Computer Vision

Nice to have

  • Experience with the ScanNet dataset
  • Experience with the Open3D framework

 

Kontakt

cem.eteke@tum.de

Betreuer:

Cem Eteke

3D Semantic Panoptic Completion and Object Detection for RGB-D Scans

Beschreibung

3D scans are an important data modality in the perception systems of AR/VR and robotics. With the development of commodity RGB-D sensors, 3D scans of real-world scenes have become much easier to obtain. However, the incomplete nature of 3D scans, caused by inherent occlusions due to physical limitations, still impacts the performance of perception systems. Several research communities therefore focus on tasks that expand the incomplete partial scan, including completion, semantic segmentation, and object detection. Deep neural networks need to be designed to predict missing information and extract high-level semantic features.

In summary, this master's thesis aims to take an incomplete partial scan, predict the unseen and missing geometry to obtain a complete high-resolution 3D reconstruction, assign semantic information to each voxel, and then detect objects conditioned on the complete semantic scene. To achieve better performance on real-world tasks, the model will be trained on ScanNet, since it consists of real-world scans.

Betreuer:

Michael Adam - Prof. Niessner ()

Uncertainty Propagation in Camera-based Perception of Autonomous Vehicles

Beschreibung

Scenario simulations are a promising approach to detect limitations of HAD systems. Simulations are a viable means of verifying the decision-making capabilities of HAD vehicles in a multitude of situations. In particular, simulations can verify the behavior of the system in rare but otherwise critical situations that are either too complex or too dangerous to reproduce in the real world. Using simulations in combination with a formalization of safety goals enables search-based testing to automatically identify scenarios in which the perception system cannot meet safety goals due to sensor, system architecture, or environmental limitations. Critical scenarios could be used to identify perceived uncertainty and how it propagates throughout the system architecture. This thesis investigates such a simulation-based approach for a camera-based perception model of HAD vehicles and proposes a scheme to validate the developed perception model.

Kontakt

iwo.kurzidem@iks.fraunhofer.de

Betreuer:

Christopher Kuhn - Iwo Kurzidem (Fraunhofer IKS)

Attentive observation using intensity-assisted segmentation for SLAM in a dynamic environment

Stichworte:
SLAM, ROS, Deep Learning, Segmentation

Beschreibung

Attentive observation using intensity-assisted segmentation for SLAM in a dynamic environment.

 

Betreuer:

A novel energy adaptation scheme for block-based haptic coding in time-delayed teleoperation systems

Beschreibung

In this work, you need to address the following tasks:

1. Improve the block-based haptic coding scheme by considering the delay sensitivities of different block lengths.

2. Improve the energy bank scheme for TDPAs by considering human perception thresholds.

3. Combine 1 and 2 and propose your own intra-block energy adaptation method to improve system performance and perceptual quality (a minimal passivity-observer sketch follows below).
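
For reference, a basic time-domain passivity observer/controller on a single port can be sketched as below; the signals, sampling time, and sign conventions are assumptions and would need to be adapted to the actual energy bank scheme.

```python
# Sketch: a basic time-domain passivity observer (PO) and controller (PC) on one port.
# Force/velocity signals are synthetic; thresholds and sampling time are assumptions.
import numpy as np

T = 0.001                                   # sampling time [s] (assumed 1 kHz)
n = 2000
v = np.sin(np.linspace(0, 4 * np.pi, n))    # velocity at the port
f = -0.8 * v + 0.3 * np.random.randn(n)     # force, partly active (energy generating)

energy = 0.0
f_out = np.empty(n)
for k in range(n):
    energy += f[k] * v[k] * T               # passivity observer: accumulated energy
    f_out[k] = f[k]
    if energy < 0.0 and abs(v[k]) > 1e-6:   # port became active -> dissipate
        alpha = -energy / (T * v[k] ** 2)   # variable damping of the passivity controller
        f_out[k] = f[k] + alpha * v[k]
        energy += alpha * v[k] ** 2 * T     # energy removed by the damper
print("final observed energy:", energy)
```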

Betreuer:

Skill transfer learning for autonomous robots

Kurzbeschreibung:
Transfer a skill from robot A to robot B

Beschreibung

Autonomous robots have achieved high levels of performance and reliability at specific tasks. However, for them to be practical and effective at everyday tasks in our homes and offices, they must be able to learn to perform different tasks over time, and rapidly adapt to new situations.

Learning each task in isolation is an expensive process, requiring large amounts of both time and data. In robotics, this expensive learning process also has secondary costs, such as energy usage and joint fatigue. Furthermore, as robotic hardware evolves or new robots are acquired, these robots must be trained, which is extremely inefficient if performed tabula rasa.

Recent developments in knowledge representation, machine learning, and optimal control provide a potential solution to this problem, enabling robots to minimize the time and cost of learning new tasks by building upon knowledge acquired from other tasks or by other robots. This ability is essential to the development of versatile autonomous robots that can perform a wide variety of tasks and rapidly learn new abilities.

Various aspects of this problem have been addressed by different communities in artificial intelligence and robotics. This thesis will draw on work from these different communities toward the goal of enabling autonomous robots to support a wide variety of tasks, rapidly and robustly learn new abilities, adapt quickly to changing contexts, and collaborate effectively with other robots and humans.

Voraussetzungen

Strong C++

Strong Python

Motivation

Kontakt

edwin.babaians@tum.de

Betreuer:

Edwin Babaians

Illumination of Augmented Reality Content Using a Digital Environment Twin

Beschreibung

...

Betreuer:

Classification of Wafer Patterns Using Machine Learning Methods to Automate Pattern Recognition

Beschreibung

...

Betreuer:

Solid-State LiDAR and Stereo-Camera based SLAM for unstructured planetary-like environments

Stichworte:
Solid-State LiDAR; Stereo-Camera; SLAM

Beschreibung

New developments in solid-state LiDAR technology open up the possibility of integrating range sensors into potentially space-qualifiable perception setups, thanks to mechanical designs with fewer movable parts. The development of a hybrid stereo-camera/LiDAR sensor setup might thereby overcome the disadvantages each technology comes with, such as the limited range of stereo camera setups or the minimum range LiDARs require. This thesis investigates the possibilities of such a new solid-state LiDAR by incorporating it, along with a stereo camera setup and an IMU sensor, into a SLAM system. Foreseen activities might include, but are not limited to, the design and construction of a portable/handheld sensor setup for recording and testing in planetary-like environments, extrinsic calibration of the sensors, integration into a software pipeline, development of a ROS interface, and preliminary mapping tests.

Betreuer:

Mojtaba Karimi - (German Space Agency (DLR))

Introspective Sensor Monitoring for Multimodal Object Detection

Beschreibung

In multimodal object detection, different sensors such as cameras or LIDAR have different strengths that are combined for optimal detection rates. However, different sensors also have different weaknesses. In this thesis, a monitoring model for each individual sensor is trained with previous performances of that sensor. For a new input, the sensor's performance is then predicted based only on the sensory input. The predicted performance score is then used in the subsequent sensor fusion to reduce the impact of challenging sensory readings, allowing the fusion architecture to dynamically adapt and rely more on the other sensors instead.
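
A minimal sketch of the intended fusion step is given below: each sensor's detection confidence is re-weighted by the output of its introspective monitor. The monitor outputs are stubbed with constants; in the thesis they would be learned from previous per-sensor performance.

```python
# Sketch: detection scores of camera and LiDAR re-weighted by per-sensor monitor outputs.
# Monitor scores are hard-coded placeholders for the learned introspective models.
import numpy as np

def fuse(det_scores, monitor_scores):
    """Weighted average of per-sensor detection confidences for one object candidate."""
    w = np.asarray(monitor_scores, dtype=float)
    w = w / w.sum()
    return float(np.dot(w, det_scores))

camera_conf, lidar_conf = 0.9, 0.4          # per-sensor confidence for the same object
camera_monitor, lidar_monitor = 0.2, 0.95   # e.g. camera predicted unreliable (night glare)
print(fuse([camera_conf, lidar_conf], [camera_monitor, lidar_monitor]))   # leans on LiDAR
```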

Voraussetzungen

Experience with Machine Learning and Object Detection

Betreuer:

Closing the sim-to-real gap using haptic teleoperation

Stichworte:
Liquid Pouring, Skill Refinement, Sim-to-Real Gap

Beschreibung

Simulation is a good way of investigating whether a complex behavior can be implemented on service robots. Since we cannot consider all the characteristics of the real-world scenario, a sim-to-real adaptation step is needed to fine-tune the skill learned in simulation. In this project, the scenario is a liquid pouring task that we have already learned in a simulation environment. We will tackle the sim-to-real adaptation problem using haptic teleoperation: an expert demonstrates the skill for additional time, and the robot uses this expert knowledge to fine-tune the skill learned in simulation.

Voraussetzungen

Be familiar with skill refinement techniques. 

Be familiar with haptic teleoperation

Strong background in C++

Strong background in Python

 

 

Kontakt

edwin.babaians@tum.de

Betreuer:

Edwin Babaians

Deep Predictive Attention Controller for LiDAR-Inertial localization and mapping

Stichworte:
SLAM, Sensor Fusion, Deep Learning

Beschreibung

Multidimensional sensory data is computationally expensive for localization algorithms in autonomous drone navigation. Research shows that not all sensory data is equally important during the entire SLAM process for producing a reliable output. An attention control scheme is one effective way to filter out the most valuable sensory data for such a system. A predictive attention model, for instance, can help improve the result of sensor fusion algorithms by concentrating on the most valuable sensory data based on the dynamics of the vehicle motion or a semantic understanding of the environment. The aim of this work is to investigate state-of-the-art attention control models that can be adapted to a multidimensional sensory data acquisition system and to compare them across different modalities.
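
As a minimal sketch of the attention idea (architecture and dimensions are assumptions, not the model to be developed), the module below assigns a learned weight to each sensory stream before fusion.

```python
# Sketch: a learned attention gate over per-sensor feature vectors (dimensions assumed).
import torch
import torch.nn as nn

class SensorAttention(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, feats):                 # feats: (num_sensors, feat_dim)
        weights = torch.softmax(self.score(feats).squeeze(-1), dim=0)   # (num_sensors,)
        fused = (weights.unsqueeze(-1) * feats).sum(dim=0)              # weighted fusion
        return fused, weights

lidar, imu, camera = (torch.randn(64) for _ in range(3))
fused, w = SensorAttention()(torch.stack([lidar, imu, camera]))
print(w)    # how much each modality contributes at this time step
```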

Voraussetzungen

- Strong background in Python and C++ programming

- Solid background in robot control theory

- Be familiar with deep learning frameworks (Tensorflow)

- Be familiar with the robot operating system (ROS)

Kontakt

leox.karimi@tum.de

Betreuer:

Model-based Collision Identification for Real-Time Jaco2 Robot Manipulation

Stichworte:
ROS, Haptics, Teleoperation, Jaco2

Beschreibung

By the advancement of robotics and communication networks such as 5G, telemedicine has become a critical application for remote diagnosis and treatment.

In this project, we want to perform robotic teleoperation using a Sigma 7 haptic master and a Jaco 2 robotic manipulator.

 Tasks:

  • State of the art review and mathematical modeling
  • Jaco2 haptic controller implementation
  • Fault-tolerant (delay, network disconnect) controller design 
  • System evaluation with external force-torque sensor

Voraussetzungen

  • Strong background in C++ programming
  • Solid background in control theory 
  • Be familiar with robot dynamics and kinematics 
  • Be familiar with the robot operating system (ROS) and ROS Control (Optional)

Kontakt

edwin.babaians@tum.de

Betreuer:

Edwin Babaians

Interdisziplinäre Projekte

Extension of an Open-source Autonomous Driving Simulation for German Autobahn Scenarios

Beschreibung

This work can be done in German or English in a team of 2-4 members.
Self-driving cars need to be safe in the interaction with other road users such as motorists, cyclists, and pedestrians. But how can car manufacturers ensure that their self-driving cars are safe around us humans? The only realistic and economic way to test this is simulation.
cogniBIT is a Munich-based startup founded by alumni of TUM and LMU and provides realistic models of all kinds of road users. These models are based on state-of-the-art neurocognitive and sensorimotor research and reproduce human perception, cognition, and action with all their limitations.
In this project the objective is to extend the open-source simulator CARLA (www.carla.org) such that German Autobahn-like scenarios can be simulated.

Tasks:
•    Design an Autobahn scenario using the road description format OpenDRIVE.
•    Adapt the CARLA OpenDRIVE standalone mode (requires C++ knowledge).
•    Design an environment for the scenario using the Unreal Engine 4 Editor.
•    Perform a simulation-based experiment using the German Autobahn scenario and the cogniBIT driver model.

Voraussetzungen

•    C++ knowledge
•    experience with Python is helpful
•    experience with the UE4 editor is helpful
•    interest in autonomous driving and cognitive models

Betreuer:

Markus Hofbauer - Lukas Brostek (cogniBIT)

Forschungspraxis (Research Internships)

Network Aware Shared Control

Stichworte:
Teleoperation, Learning from Demonstration

Beschreibung

When teaching robots remotely via teleoperation, the communication between the human demonstrator and the remote robot learner imposes challenges. Network delay, packet loss, and data compression might cause completely or partially degraded demonstration quality. In this thesis, we would like to make the best out of varying-quality demonstrations provided via teleoperation with haptic feedback. We will test the developed approach with shared control.

Voraussetzungen

Requirements:

Experience in C/C++

ROS is a plus

High motivation to learn and conduct research

 

Betreuer:

Interactive Segmentation with Depth Information

Beschreibung

Image segmentation is a fundamental problem in computer vision. While there are autonomous methods for segmenting a given image, interactive segmentation schemes receive human guidance in addition to the scene information. In this project, we will investigate methods for using depth information to improve segmentation accuracy.
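
One simple way to feed depth and user guidance into a segmentation network is to stack them with the RGB image as additional input channels. The sketch below builds such a 6-channel input; the shapes and the click encoding are assumptions, not a fixed design choice.

```python
# Sketch: building a 6-channel input (RGB + depth + positive/negative click maps)
# for an interactive segmentation network. Shapes and click encoding are assumptions.
import numpy as np

H, W = 480, 640
rgb = np.random.rand(3, H, W).astype(np.float32)          # normalized RGB image
depth = np.random.rand(1, H, W).astype(np.float32)        # normalized depth map

pos_clicks, neg_clicks = [(100, 200)], [(300, 400)]        # (row, col) user clicks
pos_map = np.zeros((1, H, W), np.float32)
neg_map = np.zeros((1, H, W), np.float32)
for r, c in pos_clicks:
    pos_map[0, r, c] = 1.0                                 # often blurred into a Gaussian
for r, c in neg_clicks:
    neg_map[0, r, c] = 1.0

net_input = np.concatenate([rgb, depth, pos_map, neg_map], axis=0)   # (6, H, W)
print(net_input.shape)    # fed to a segmentation backbone with a 6-channel first conv
```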

Voraussetzungen

Basic knowledge of

- Digital signal processing

- Computer vision

- Neural networks and other ML algorithms

 

Familiarity with

- Python or C++

- Tensorflow or PyTorch

 

 

Kontakt

furkan.kaynar@tum.de

 

(Please provide your CV and transcript in your application)

 

Betreuer:

Hasan Furkan Kaynar

Dynamic head-pose estimation for egocentric head-mounted cameras

Beschreibung

Under this topic, the student should evaluate head-pose estimation methods and algorithms for wearable head-mounted cameras.

Kontakt

https://www.ei.tum.de/lmt/team/mitarbeiter/chaudhari-rahul/

(Please provide your CV and transcript in your application)

Betreuer:

Rahul Chaudhari

Analysis of efficient and low-complexity auto-exposure algorithms on multiple hardware platforms

Beschreibung

In this project we are going to study low-complexity, high-performance auto-exposure algorithms for real-time applications. The algorithms will be assessed on different hardware platforms, e.g., CPU, GPU, and FPGA.
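
As a minimal example of a low-complexity approach, the sketch below nudges the exposure time toward a target mean brightness; the target value, gain, and exposure limits are assumptions.

```python
# Sketch: a low-complexity auto-exposure step that moves the exposure time toward a
# target mean brightness. Target, gain, and exposure limits are illustrative assumptions.
import numpy as np

TARGET = 0.5                    # desired mean intensity (normalized 0..1)
GAIN = 0.8                      # proportional update gain
EXP_MIN, EXP_MAX = 1e-4, 1e-1   # exposure time limits [s]

def update_exposure(frame, exposure):
    mean = float(frame.mean())
    ratio = TARGET / max(mean, 1e-6)
    new_exposure = exposure * (1.0 + GAIN * (ratio - 1.0))   # damped multiplicative update
    return float(np.clip(new_exposure, EXP_MIN, EXP_MAX))

frame = np.random.rand(480, 640) * 0.2             # an under-exposed frame
print(update_exposure(frame, exposure=0.002))      # exposure time is increased
```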

Betreuer:

Kai Cui - Kevin Segovia (NavVis)

3D Object Recognition Using Transformers

Beschreibung

In this work, we investigate how transformer architectures such as ViT [1] can be used for multi-view 3D object recognition.

 

[1] Dosovitskiy et al. https://arxiv.org/pdf/2010.11929.pdf
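
A possible baseline (not a prescribed architecture) is to embed each view separately and let a transformer encoder aggregate the view tokens, as sketched below; the per-view backbone is stubbed, and all dimensions and class counts are assumptions.

```python
# Sketch: aggregating per-view embeddings with a transformer encoder for multi-view
# object recognition. The per-view backbone is stubbed; dimensions/classes are assumed.
import torch
import torch.nn as nn

class MultiViewTransformer(nn.Module):
    def __init__(self, dim=256, classes=40):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, dim))  # stub encoder
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.cls = nn.Linear(dim, classes)

    def forward(self, views):                       # views: (B, V, 3, 64, 64)
        b, v = views.shape[:2]
        tokens = self.backbone(views.flatten(0, 1)).view(b, v, -1)   # one token per view
        tokens = self.encoder(tokens)               # views attend to each other
        return self.cls(tokens.mean(dim=1))         # pool views, predict object class

logits = MultiViewTransformer()(torch.randn(2, 12, 3, 64, 64))
print(logits.shape)    # (2, 40)
```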

Voraussetzungen

  • Python
  • Pytorch
  • Knowledge of CNNs
  • Knowledge of transformers

Kontakt

martin.piccolrovazzi@tum.de

Betreuer:

Traffic-Aware View Prioritization for Teleoperated Driving

Stichworte:
Teleoperated Driving, Adaptive Video Streaming

Beschreibung

This work can be done in German or English

Existing teledriving setups use multiple cameras to cover the vehicle's surrounding environment in order to provide the operator with sufficient information about the current traffic situation. However, the importance of individual camera views varies for different driving tasks. Modeling the importance of each camera view according to the current traffic situation can be used in several teledriving applications.

The goal of this project is to create and conduct a user study for measuring the influence of traffic-aware view adaptation. The user study should evaluate the performance of our traffic-aware view adaptation compared to a simple uniform bit budget distribution among all camera views.
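
As a minimal illustration of the non-uniform case, the sketch below splits a total bitrate budget across the camera views proportionally to traffic-aware importance weights while keeping a minimum rate per view; all numbers are assumptions.

```python
# Sketch: splitting a total bitrate budget across camera views according to
# traffic-aware importance weights, with a minimum rate per view. Numbers are assumed.
TOTAL_KBPS = 8000
MIN_KBPS = 300                                    # keep every view at least decodable

def allocate(importance):
    views = list(importance)
    remaining = TOTAL_KBPS - MIN_KBPS * len(views)
    total_w = sum(importance.values())
    return {v: MIN_KBPS + remaining * importance[v] / total_w for v in views}

# Example: left turn ahead, so the front and left views matter most right now.
weights = {"front": 0.5, "left": 0.3, "right": 0.1, "rear": 0.1}
print(allocate(weights))
```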

Tasks

  • Introduction to the existing TELECARLA driving setup [1]
  • Design a driving user study with the CARLA scenario runner
  • Evaluate the results in terms of driving performance, lane invasions, etc.

References

[1] TELECARLA: An Open Source Extension of the CARLA Simulator for Teleoperated Driving Research Using Off-the-Shelf Components, Markus Hofbauer, Christopher B. Kuhn, Goran Petrovic, Eckehard Steinbach; IV 2020

Voraussetzungen

  • Experience with ROS (C++ and Python)

Betreuer:

Model Predictive Controller for Franka Panda

Stichworte:
MPC,NMPC

Beschreibung

In this project, the student will extend the currently developed motion controller for the well-known Franka Panda robot. Finally, he/she will evaluate the planner on the real robot.

Voraussetzungen

Strong C++ background

ROS

Motivation 

 

Kontakt

edwin.babaians@tum.de

Betreuer:

Edwin Babaians

Learning Temporal Knowledge Graphs with Neural Ordinary Differential Equations

Beschreibung

...

Kontakt

zhen.han@campus.lmu.de

Betreuer:

Eckehard Steinbach - Zhen Han (LMU)

Sim-to-Real Gap in Liquid Pouring

Stichworte:
sim-to-real

Beschreibung

We want to investigate which simulation bottlenecks hinder learning the pouring task and how we can tackle this problem. This project involves mostly paper reading; the field of research is skill refinement and domain adaptation. In addition, we will try to implement one of the state-of-the-art methods for teaching by demonstration in order to adapt the simulated skill to the real-world scenario.

Voraussetzungen

Creativity

Motivation

Strong C++ Background

Strong Python Background

Kontakt

edwin.babaians@tum.de

Betreuer:

Edwin Babaians

"Pouring Liquids" dataset development

Stichworte:
Nvidia Flex, Unity3D, Nvidia PhysX 4.0
Kurzbeschreibung:
Using Unity3D and Nvidia Flex plugin, develop a learning environment and model different fluids for teaching pouring tasks to robots.

Beschreibung

The student will develop different liquid characteristics using Nvidia Flex and will add different containers and a particle collision checking system, as well as a ground-truth system for later use in robot teaching.

 

Reference:

https://developer.nvidia.com/flex

https://developer.nvidia.com/physx-sdk

Voraussetzungen

Strong Unity3D background

Familiar with the Nvidia PhysX and Nvidia Flex libraries.

 

Kontakt

edwin.babaians@tum.de

Betreuer:

Edwin Babaians

Analysis and evaluation of DynaSLAM for dynamic object detection

Beschreibung

Investigation of DynaSLAM in terms of real-time capabilities and dynamic object detection.

Betreuer:

Comparison of Driver Situation Awareness with an Eye Tracking based Decision Anticipation Model

Stichworte:
Situation Awareness, Autonomous Driving, Region of Interest Prediction, Eye Tracking

Beschreibung

This work can be done in German or English

The transfer of control to the human driver in autonomous driving requires observation of the human driver. The vehicle has to guarantee that the human driver is aware of the current driving situation. One input source for observing the human driver is the driver's gaze.

The objective of this project is to compare two existing approaches for driver observation [1,2]. While [1] measures the driver's situation awareness (SA), [2] anticipates the driver's decision. As part of a user study, [2] published a gaze dataset. An interesting cross-validation would be the comparison of the SA score generated by [1] and the predicted decision correctness of [2].

Tasks

  • Generate ROI predictions [3] from the dataset of [2]
  • Estimate the driver SA with the model of [1]
  • Compare [1] and [2]
  • (Optional) Extend driving experiments

References

[1] Markus Hofbauer, Christopher Kuhn, Lukas Puettner, Goran Petrovic, and Eckehard Steinbach. Measuring driver situation awareness using region-of-interest prediction and eye tracking. In 22nd IEEE International Symposium on Multimedia (ISM), Naples, Italy, Dec 2020.
[2] Pierluigi Vito Amadori, Tobias Fischer, Ruohan Wang, and Yiannis Demiris. Decision Anticipation for Driving Assistance Systems. June 2020.
[3] Markus Hofbauer, Christopher Kuhn, Jiaming Meng, Goran Petrovic, and Eckehard Steinbach. Multi-view region of interest prediction for autonomous driving using semisupervised labeling. In IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, Sep 2020.

Voraussetzungen

  • Experience with ROS and Python
  • Basic knowledge of Linux

Betreuer:

3D object model reconstruction from RGB-D scenes

Beschreibung

Robots should be able to discover their environments and learn new objects in order to become a part of daily human life. There are still challenges in detecting and recognizing objects in unstructured environments such as households. For robotic grasping and manipulation, knowing the 3D models of objects is beneficial, hence the robot needs to infer the 3D shape of an object upon observation. In this project, we will investigate methods that can infer or produce 3D models of novel objects by observing RGB-D scenes. We will analyze methods to reconstruct 3D information with different arrangements of an RGB-D camera.

Voraussetzungen

  • Basic knowledge of digital signal processing / computer vision
  • Experience with ROS, C++, Python.
  • Experience with Artificial Neural Network libraries or motivation to learn them
  • Motivation to yield a successful work

Kontakt

furkan.kaynar@tum.de

Betreuer:

Hasan Furkan Kaynar

Research on the implementation of an (automated) solution for the analysis of surface impurities on endoscope tubes

Beschreibung

...

Betreuer:

AI-Enhanced Tool for Desk Research – Smart Analytical Engine

Beschreibung

...

Betreuer:

Algorithm evaluation for robot grasping with compliant jaws

Stichworte:
python, ROS, robot grasping
Kurzbeschreibung:
Apply a state-of-the-art contact model for robot grasp planning with a customized physical setup including a KUKA robot arm and a parallel-jaw gripper with compliant materials.

Beschreibung

Model-based grasp planning algorithms depend on friction analysis, since the friction between objects and gripper jaws strongly affects grasp robustness. A state-of-the-art friction analysis algorithm for grasp planning has been evaluated with plastic robot fingers and achieved promising results, but will it still work if the gripper jaws are covered with compliant materials such as rubber and silicone, compared to more advanced contact models?

The task of this work is to create a new dataset and retrain an existing deep network by applying a state-of-the-art contact model for grasp planning.

 

 

Betreuer:

Adaptive LiDAR data update rate control based on motion estimation

Stichworte:
SLAM, Sensor Fusion, ROS

Beschreibung

...

Betreuer:

Perceptual-oriented objective quality assessment for time-delayed teleoperation

Beschreibung

Recent advances in haptic communication cast light onto the promise of full immersion into remote real or virtual environments.

The quality of compressed haptic signals is crucial to fulfill this promise. Traditionally, the quality of haptic signals is evaluated through a series of subjective experiments. So far, only very limited attention has been directed toward developing objective quality measures for haptic communication. In this work, we focus on the compression distortion and the delay compensation distortion that contaminate the force/velocity haptic signals generated by physical interaction with objects in the remote real or virtual environment.

 

Voraussetzungen

 

Kontakt

 

Betreuer:

UWB localization by Kalman filter and particle filter

Beschreibung

...

Betreuer:

Investigating the Potential of Machine Learning to Map Changes in Forest based on Earth Observation

Beschreibung

...

Betreuer:

Ingenieurpraxis

Automation of 3D Scanning

Beschreibung

Setup of a system for the autonomous mapping of indoor environments.

Betreuer:

Martin Piccolrovazzi - (BMW Group)

Recording of Robotic Grasping Failures

Beschreibung

The aim of this project is to collect data from robotic grasping experiments and to create a large-scale labeled dataset. We will conduct experiments while attempting to grasp known or unknown objects autonomously. The complete pipeline includes:

- Estimating grasp poses via computer vision

- Robotic motion planning

- Executing the grasp physically

- Recording necessary data

- Organizing the recorded data into a well-structured dataset

 

Most of the data collection pipeline has already been developed; additions and modifications may be needed.

Voraussetzungen

Useful background:

- Digital signal processing

- Computer vision

- Dataset handling

 

Requirements:

- Experience with Python and ROS

- Motivation to yield a good outcome

Kontakt

furkan.kaynar@tum.de

 

(Please provide your CV and transcript in your application)

 

Betreuer:

Hasan Furkan Kaynar

Studentische Hilfskräfte

Studentische Hilfskraft Praktikum Software Engineering

Stichworte:
Software Engineering, Unit Testing, TDD, C++

Beschreibung

We are looking for a teaching assistant for our new Software Engineering Lab. In this course we explain basic principles of software engineering such as unit testing, test-driven development, and how to collaborate in teams [1].

You will act as a teaching assistant supervising students during the lab sessions while they work on their practical homework. The homework tasks are generally C++ coding exercises in which the students contribute to a common codebase. This means you should have good experience with C++, unit testing, and git, as these will be an essential part of the homework.

References

[1] Winters, Titus, Tom Manshreck, and Hyrum Wright, eds. Software Engineering at Google: Lessons Learned from Programming Over Time. O'Reilly Media, Incorporated, 2020

Voraussetzungen

  • Very good knowledge in C++
  • Experience with unit testing
  • Good understanding of git and collaborative software development

Betreuer:

MATLAB tutor for Digital Signal Processing lecture in summer semester 2022

Beschreibung

Tasks:

  • Help students with the basics of MATLAB (e.g. matrix operations, filtering, image processing, runtime errors)
  • Correct some of the homework problems 
  • Understand the DSP coursework material

 

We offer:

  • Payment according to the working hours and academic qualification
  • The workload is approximately 6 hours per week from May 2022 to August 2022
  • Technische Universität München especially welcomes applications from female applicants

 

Application:

  • Please send your application with a CV and transcript per e-mail to basak.guelecyuez@tum.de 
  • Students who have taken the DSP course are preferred.

Betreuer:

Implementation of teleoperation systems using a Raspberry Pi

Beschreibung

We already have a teleoperation framework running on Windows, in which two haptic devices are connected via the UDP protocol, one as the leader device and the other as the follower.

Your tasks are:

1. Move the framework to a Linux system.

2. Set up a ROS-based virtual teleoperation environment.

3. Port the framework to a Raspberry Pi.
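
For orientation, the sketch below shows what the leader side of a UDP position/force exchange could look like; the addresses, port, rate, and packet layout are assumptions and do not reflect the actual framework's protocol.

```python
# Sketch: leader side of a UDP position/force exchange (addresses, port, and packet
# layout are assumptions, not the existing framework's protocol).
import socket
import struct
import time

FOLLOWER_ADDR = ("192.168.0.20", 5005)       # assumed follower IP/port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(0.002)                       # keep the 1 kHz loop from blocking

for k in range(1000):                        # 1 s of a 1 kHz haptic loop
    position = (0.1, 0.0, 0.05)              # would be read from the leader haptic device
    sock.sendto(struct.pack("!3d", *position), FOLLOWER_ADDR)
    try:
        data, _ = sock.recvfrom(24)          # 3 doubles of force feedback from follower
        fx, fy, fz = struct.unpack("!3d", data)   # would be rendered on the leader device
    except socket.timeout:
        fx = fy = fz = 0.0                   # render zero force if nothing arrives in time
    time.sleep(0.001)
```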

Voraussetzungen

Linux, socket programming (e.g. UDP protocol), C and C++, ROS

Betreuer:

Student Assistant for distributed haptic training system

Stichworte:
server-client, UDP, GUI programming

Beschreibung

Your tasks:

1. Build a server-client telehaptic training system based on the current code.

2. Develop a GUI for the client side.

Required skills:

- C++

- Knowledge about socket programming, e.g., UDP

- GUI programming, e.g., Qt

- Working environment: Windows + Visual Studio



This work has a tight connection with the project Teleoperation over 5G networks and the IEEE standardization P1918.1.1.

https://www.ei.tum.de/lmt/forschung/ausgewaehlte-projekte/dfg-teleoperation-over-5g/

https://www.ei.tum.de/lmt/forschung/ausgewaehlte-projekte/ieee-p191811-haptic-codecs

Betreuer:

Student Assistant for reference software of time-delayed teleoperation with control schemes and haptic codec

Kurzbeschreibung:
In this work, you need to extend the current teleoperation reference software to enable different control schemes and haptic data reduction approaches.

Beschreibung

Your tasks:

1.      Current code refactoring and optimization

2.      New algorithm implementation

 

This work has a tight connection with the project Teleoperation over 5G networks and the IEEE standardization P1918.1.1.

https://www.ei.tum.de/lmt/forschung/ausgewaehlte-projekte/dfg-teleoperation-over-5g/

https://www.ei.tum.de/lmt/forschung/ausgewaehlte-projekte/ieee-p191811-haptic-codecs

Voraussetzungen

 

C++, knowledge of control engineering and communication protocols

Betreuer: