Student Projects and Theses at the Chair of Media Technology

As part of our current research projects, we offer exciting topics for student projects (Ingenieurpraxis, Forschungspraxis, working student positions, IDPs) and final theses (Bachelor's and Master's theses).

Open Topics

GUI for a Plug-and-Play Haptic Interaction System

Keywords:
Teleoperation, GUI, Qt, JavaScript
Short description:
Development of a GUI for a Plug-and-Play Haptic Interaction system that follows the IEEE standard.

Description

Our project aims to build a GUI for a Plug-and-Play (PnP) Haptic Interaction system that follows the IEEE standard. The system provides two main capabilities:

1. Plug and play on the Leader side: when the Leader device disconnects from the system, the Follower device enters a waiting state and holds the initial position it had when it was activated in the system, until a Leader device is re-inserted.

2. Automatic adjustment of device parameters to the specific type of Leader device, to preserve the quality of human perception: on connection, the Leader device transmits its media and interface information (the so-called metadata) to the Follower side and, at the same time, informs the Follower device of the specific model type it is using. If the precision of the Leader device differs from that of the Follower, the Follower device adjusts its parameters according to the received information to adapt to the Leader and transmits its own metadata back to the Leader.

Prerequisites

The requirements of the project are as follows (a minimal GUI sketch is given after the list). For the GUI:

1. The GUI should be implemented in either Qt or JavaScript (Qt is the first choice) on both the Leader and the Follower side.

2. On the Leader side, the GUI should provide the following functions:

1) select the device on the Leader side;

2) show whether the handshake was successful;

3) show the device type used on the Follower side after the handshake;

4) indicate when the Leader device has been disconnected from the system.

3. On the Follower side, the GUI should provide the following functions:

1) select the device on the Follower side;

2) show whether the handshake was successful;

3) if the handshake is successful, show the device type used on the Leader side, adjust the parameters on the Follower side, and then show the adjusted device type;

4) indicate when the Leader device has been disconnected from the system, and then show the initial position of the Follower device in the waiting state.

4. For the adjustments to the message transmission process:

1) implement the PnP adjustment on the Follower side;

2) the message sending order, the interface format, the way data packets are pushed onto stacks, and the decoding function must follow the regulations of the IEEE standard.
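As a starting point, here is a minimal PyQt5 sketch of a Leader-side GUI with device selection and handshake status. The widget names and callbacks are illustrative placeholders and are not prescribed by the IEEE standard:

```python
# Minimal sketch of the Leader-side GUI (PyQt5); device names and
# handshake callbacks are placeholders, not part of the standard.
import sys
from PyQt5.QtWidgets import (QApplication, QComboBox, QLabel,
                             QVBoxLayout, QWidget)

class LeaderGui(QWidget):
    def __init__(self, devices):
        super().__init__()
        self.setWindowTitle("PnP Haptic Interaction - Leader")
        self.device_box = QComboBox()                              # 1) choose device
        self.device_box.addItems(devices)
        self.handshake_label = QLabel("Handshake: pending")        # 2) handshake status
        self.follower_label = QLabel("Follower device: unknown")   # 3) follower type
        layout = QVBoxLayout(self)
        layout.addWidget(self.device_box)
        layout.addWidget(self.handshake_label)
        layout.addWidget(self.follower_label)

    def on_handshake(self, ok, follower_type):
        self.handshake_label.setText(f"Handshake: {'ok' if ok else 'failed'}")
        self.follower_label.setText(f"Follower device: {follower_type}")

    def on_leader_disconnect(self):                                # 4) disconnection
        self.handshake_label.setText("Leader disconnected - follower waiting")

if __name__ == "__main__":
    app = QApplication(sys.argv)
    gui = LeaderGui(["Phantom Omni", "Falcon"])   # placeholder device list
    gui.show()
    sys.exit(app.exec_())
```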

 

Contact

Email address: siwen.liu@tum.de, xiao.xu@tum.de

Supervisors:

Siwen Liu, Xiao Xu

Implementing the Digital Twin of a Factory

Keywords:
Digital Twin, Sensors, Computer Vision, Robotics

Description

A Digital Twin is a virtual representation of an asset, to which it is connected bi-directionally: changes happening in the real asset are shown in the digital asset and vice versa.

In this project, we will create a prototype of the Digital Twin of a factory in NVIDIA Omniverse. We will create a visual representation and update it using sensor data and artificial intelligence.
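As a rough illustration of the bi-directional coupling (independent of the Omniverse API), a digital twin loop could be structured as below; the scene and sensor interfaces are placeholder assumptions:

```python
# Minimal sketch of bi-directional synchronization; the scene dict stands in
# for the real virtual representation (e.g., an Omniverse stage).
class DigitalTwin:
    def __init__(self, scene):
        self.scene = scene

    def sync_from_sensors(self, readings):
        # real -> digital: mirror measured asset poses into the virtual scene
        for asset_id, pose in readings.items():
            self.scene[asset_id] = pose

    def command_real_asset(self, asset_id, command, send_fn):
        # digital -> real: push a change made in the twin back to the asset
        send_fn(asset_id, command)

scene = {}
twin = DigitalTwin(scene)
twin.sync_from_sensors({"robot_arm": (0.10, 0.25, 0.00)})    # pose from a sensor
twin.command_real_asset("conveyor", "start", send_fn=print)  # placeholder transport
print(scene)
```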

Prerequisites

Required:

  • Python knowledge

Recommended (not all of them are needed):

  • Experience in ROS
  • C++ knowledge
  • Robotics knowledge

Contact

diego.prado@tum.de

Supervisor:

Diego Fernandez Prado

Robotic Imitation Learning for Industrial Applications

Keywords:
robotics, machine learning, computer vision, image processing, haptics, ROS

Description

Imitation learning helps robots learn skills robustly and much faster than reinforcement learning alone. In this project, we will use human demonstrations to teach a robot manipulator how to solve different industrial tasks.

To achieve our goal, we will use different sensors (cameras, force/torque, etc.) and employ state-of-the-art machine learning / deep learning techniques, together with image processing.
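To give a flavor of the learning side, here is a minimal behavioral-cloning sketch in PyTorch; the feature and action dimensions are placeholders for whatever the sensors and robot actually provide:

```python
# Sketch: behavioral cloning on recorded demonstrations (PyTorch).
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 7))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

obs = torch.randn(256, 32)   # placeholder: per-step features from cameras / F-T sensor
act = torch.randn(256, 7)    # placeholder: demonstrated joint-velocity commands

for epoch in range(100):
    loss = loss_fn(policy(obs), act)   # imitate the human demonstration
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(float(loss))
```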

Prerequisites

Required:

- Experience in Python

- Some Machine Learning / Deep Learning / Image Processing experience

 

Also beneficial (but not a must):

- ROS experience

- Reinforcement Learning experience

Contact

diego.prado@tum.de

Supervisor:

Diego Fernandez Prado

Synthetic Egocentric Hand-Object Interactions

Description

In this work, you will develop and combine existing state-of-the-art simulation tools in Unreal Engine. You will explore recent state-of-the-art methods for creating synthetic data based on realistic simulation environments and integrate existing hand animations into the simulation to enable hand-object interactions. Furthermore, you will work with environmental 3D point clouds.

References:

https://arxiv.org/pdf/2104.11776.pdf

https://sim2realai.github.io/UnrealROX/
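Since the work also involves environmental 3D point clouds, here is a small Open3D sketch of the kind of point cloud handling involved; random points stand in for a scan exported from the simulator:

```python
# Toy point-cloud handling sketch with Open3D; the random points are a
# placeholder for an environmental scan from the simulation.
import numpy as np
import open3d as o3d

points = np.random.rand(10000, 3)                       # placeholder scan
pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
pcd = pcd.voxel_down_sample(voxel_size=0.05)            # reduce point density
pcd.estimate_normals()                                  # needed by many downstream methods
print(len(pcd.points), "points after downsampling")
```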

Prerequisites

Programming knowledge

C++ is a plus

Contact

Email: constantin.patsch@tum.de

(Supervision possible in German or English)

Supervisor:

Constantin Patsch

Hand-pose based robotic grasp demonstration via mobile devices

Description

 

Although there has been intensive research in robotics for decades, autonomous robotic grasping and manipulation remain challenging under real-life conditions. Autonomous algorithms fail more often in unstructured environments such as households, which limits the practical use of robots in daily human life. In unstructured environments, perception gains importance, and novel, unseen cases often arise in which autonomous algorithms tend to fail. In such cases, human correction or demonstration is needed to increase task performance or to teach robots new abilities. To this end, we will create a user interface on a mobile device that is intuitive to operate via hand poses. At the same time, the interface should provide the data needed to efficiently assist a robot in a daily home environment. The main application will be teleassistance for robotic grasping.
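As an illustration of the hand-pose side only (the actual interface will run on a mobile device, e.g. in Unity), a Python sketch that extracts hand landmarks from a camera stream with MediaPipe could look like this:

```python
# Sketch: extract normalized hand landmarks from a webcam stream with MediaPipe.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        wrist = result.multi_hand_landmarks[0].landmark[0]   # landmark 0 = wrist
        print(f"wrist at ({wrist.x:.2f}, {wrist.y:.2f})")    # normalized image coords
    if cv2.waitKey(1) == 27:                                 # Esc to quit
        break
cap.release()
```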

 

Prerequisites

 

  • Basic knowledge of image processing / computer vision. 
  • Basic coding experience, especially with C#.
  • Experience with Unity game engine.
  • Basic experience with ROS.
  • Motivation to produce successful work.

Contact

furkan.kaynar@tum.de

(Please provide your CV and transcript in your application)

Supervisor:

Hasan Furkan Kaynar

Removal of Dynamic Objects from Indoor Point Clouds

Keywords:
Semantic Segmentation, Occupancy Grid, Point Clouds, Static Maps

Description

The accuracy of point cloud-based indoor localization can be improved by using static maps. One step in the creation of such maps is the removal of dynamic objects and their corresponding traces. In this project, we aim to investigate approaches for dynamic object removal based on either (a) occupancy grid analysis [1, 2] or (b) semantic segmentation [3].
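To make the occupancy-grid idea of [1, 2] concrete, a toy sketch could count how often each voxel is occupied across scans and drop points that fall into rarely occupied (i.e., likely dynamic) voxels; voxel size and thresholds are placeholders:

```python
# Toy occupancy-grid sketch: voxels occupied in only a few scans are
# treated as dynamic, and their points are removed.
import numpy as np

def remove_dynamic(scans, voxel=0.2, min_hits=3):
    keys = [set(map(tuple, np.floor(s / voxel).astype(int))) for s in scans]
    counts = {}
    for k in keys:
        for v in k:
            counts[v] = counts.get(v, 0) + 1        # scans in which voxel is occupied
    static = {v for v, c in counts.items() if c >= min_hits}
    return [s[[tuple(v) in static for v in np.floor(s / voxel).astype(int)]]
            for s in scans]

scans = [np.random.rand(1000, 3) * 10 for _ in range(5)]   # placeholder scans
static_scans = remove_dynamic(scans)
print(len(static_scans[0]), "static points kept in scan 0")
```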

 

References

[1] S. Pagad, D. Agarwal, S. Narayanan, K. Rangan, H. Kim, and G. Yalla, "Robust Method for Removing Dynamic Objects from Point Clouds," 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 10765-10771, doi: 10.1109/ICRA40945.2020.9197168

[2] H. Lim, S. Hwang, and H. Myung, "ERASOR: Egocentric Ratio of Pseudo Occupancy-Based Dynamic Object Removal for Static 3D Point Cloud Map Building," IEEE Robotics and Automation Letters, 6(2), 2021, pp. 2272-2279

[3] M. Arora, L. Wiesmann, X. Chen, and C. Stachniss, "Mapping the Static Parts of Dynamic Scenes from 3D LiDAR Point Clouds Exploiting Ground Segmentation," 2021 European Conference on Mobile Robots (ECMR), 2021, pp. 1-6, doi: 10.1109/ECMR50962.2021.9568799

Prerequisites

  • Python and Git
  • Experience with a deep learning framework (PyTorch, TensorFlow)
  • Interest in Computer Vision and Machine Learning

Contact

Please send your CV and Transcript of Records to:

adam.misik@tum.de

Supervisor:

Adam Misik

Action Segment Prediction

Keywords:
Deep Learning, Transformer, CNN, Video Understanding

Description

This work focuses on video understanding, with a special emphasis on human action segmentation. It will involve self-attention models (such as Transformers), CNNs, and other deep learning architectures.

You will evaluate state-of-the-art approaches on public action segmentation datasets with respect to an action segment prediction problem setting. You can also incorporate your own ideas into the final model pipeline; additional ideas from your side are welcome.
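For orientation, a frame-wise segmentation head on top of precomputed per-frame features (e.g., I3D) can be sketched as follows; the dimensions and class count are placeholders:

```python
# Sketch: Transformer encoder over per-frame features, one action label per frame.
import torch
import torch.nn as nn

class SegmentationModel(nn.Module):
    def __init__(self, feat_dim=2048, num_classes=11):
        super().__init__()
        self.proj = nn.Linear(feat_dim, 256)
        layer = nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(256, num_classes)   # frame-wise classification

    def forward(self, x):                          # x: (batch, frames, feat_dim)
        return self.head(self.encoder(self.proj(x)))

logits = SegmentationModel()(torch.randn(2, 100, 2048))   # -> (2, 100, 11)
print(logits.shape)
```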

 

Related Work:

  • https://arxiv.org/pdf/2110.08568v1.pdf
  • https://arxiv.org/pdf/2106.02036.pdf

Prerequisites

  • Python and PyTorch
  • Basic knowledge of deep learning

Contact

constantin.patsch@tum.de

Supervisor:

Constantin Patsch

A Scene Graph based Refinement for 3D Scene Point Clouds Completion

Keywords:
Scene Completion, Point Clouds, Scene Graph, Object/Relationship detection, Deep Learning

Description

In this work, we want to investigate how scene graphs can help improve scene completion / point cloud completion. The scene graph is generated by detecting objects, attributes, and relationships in 2D RGB images. The first stage of this work is to use a state-of-the-art scene reconstruction framework to construct the scene point clouds. In the second stage, the generated scene graphs are used to fine-tune and improve the constructed scene point clouds.

Prerequisites

  • High motivation to learn and conduct research
  • Good programming skills in Python and PyTorch
  • Basic experience with computer vision

Contact

dong.yang@tum.de

(Please attach your CV and transcript)

Supervisors:

Dong Yang, Xiao Xu

3D object model extraction during teleoperation

Description

 

Robotic teleoperation is often used for tasks at which robots are not yet proficient on their own. During teleoperation, the operator observes the remote scene via a camera. In addition, most robots have a depth sensor that can be used to extract useful information for future tasks.

 

In this project, we will analyze and test situations in which the recorded depth data can be used to extract and reconstruct the 3D model of a novel object grasped and manipulated by the robot.
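The basic reconstruction step is back-projecting a depth image into a point cloud with the pinhole camera model; in this sketch, the intrinsics are placeholders for the robot camera's actual calibration:

```python
# Sketch: back-project a depth image (meters) into a 3D point cloud.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                            # pinhole model
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

points = depth_to_points(np.ones((480, 640)),        # placeholder depth image
                         fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(points.shape)                                  # (307200, 3)
```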

Prerequisites

Necessary and useful background:

- ROS, Python, C++

- Image processing, video processing

 

Additionally:

- Motivation to produce good work

 

Contact

furkan.kaynar@tum.de

(Please provide your CV and transcript in your application)

Supervisor:

Hasan Furkan Kaynar

Implementation of robotic motion planning

Description

The motion planner of a robotic arm must plan the required motion while avoiding collisions and respecting the joint limits of the robot. In this project, we will focus on motion planning for the Panda robot arm using several planners. We will test the OpenRAVE motion planner and compare it to the MoveIt motion planner.

We will also implement and test Cartesian path planning using methods such as the Descartes path planner.
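To give a flavor of the MoveIt side, a minimal moveit_commander snippet for the Panda arm could look like this; it assumes a running ROS/MoveIt setup with the standard panda_moveit_config, and the plan() return signature shown is the one from ROS Noetic:

```python
# Sketch: plan and execute a pose target with MoveIt's Python interface.
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("panda_planning_demo")
group = moveit_commander.MoveGroupCommander("panda_arm")   # from panda_moveit_config

group.set_pose_target([0.4, 0.0, 0.4, 0.0, 3.14159, 0.0])  # x, y, z, roll, pitch, yaw
success, plan, planning_time, error_code = group.plan()    # Noetic return signature
if success:
    group.execute(plan, wait=True)
```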

 

At the end of this project, the student will have learned about the implementation and usage of different motion/path planners.

Prerequisites

Useful background:

- Robotic control

- Experience with ROS

 

Necessary background:

- Experience with C++

Contact

furkan.kaynar@tum.de

(Please provide your CV and transcript in your application)

Supervisor:

Hasan Furkan Kaynar

Multiband evaluation of passive signals for human activity recognition

Keywords:
CSI; HAR; AI
Short description:
Collect RF samples of different human activities and classify them.

Description

The student will use an RF system to collect samples of different activities, and implement classification algorithms to distinguish the activities from CSI, either directly or using time-frequency (T-F) transforms.
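A toy version of such a pipeline might compute time-frequency features with an STFT and feed them to an off-the-shelf classifier; the CSI array shapes and activity labels below are placeholders:

```python
# Sketch: STFT-based features from CSI magnitudes + a simple classifier.
import numpy as np
from scipy.signal import stft
from sklearn.ensemble import RandomForestClassifier

def tf_features(csi, fs=100):
    # csi: (time, subcarriers) magnitude array; average over subcarriers
    f, t, Z = stft(csi.mean(axis=1), fs=fs, nperseg=64)
    return np.abs(Z).mean(axis=1)          # mean spectrum as a feature vector

X = np.array([tf_features(np.random.rand(500, 30)) for _ in range(40)])
y = np.random.randint(0, 4, size=40)       # placeholder: 4 activity classes
clf = RandomForestClassifier().fit(X, y)
print(clf.predict(X[:2]))
```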

Prerequisites

Contact

fabian.seguel@tum.de

Office 2940

Supervisor:

Fabian Esteban Seguel Gonzalez

OCC for indoor positioning and identification in harsh environments

Keywords:
optical camera communications; video tracking
Short description:
Identify and track multiple light sources in a video stream.

Description

Identify and track multiple light sources in a video stream.

The student will record videos containing multiple objects to be tracked in the video stream.
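A simple per-frame detector for bright light sources, on top of which tracking and identification would be built, could look like this in OpenCV:

```python
# Sketch: detect bright blobs (candidate light sources) in a frame.
import cv2
import numpy as np

def detect_lights(frame, thresh=240, min_area=5):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # skip label 0 (background); keep blobs above a minimum area
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] > min_area]

frame = np.zeros((480, 640, 3), np.uint8)
cv2.circle(frame, (100, 100), 10, (255, 255, 255), -1)   # synthetic light source
print(detect_lights(frame))                              # [(100.0, 100.0)]
```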

Prerequisites

Python

Signal/image processing

 

Contact

fabian.seguel@tum.de

Office 2940

Supervisor:

Fabian Esteban Seguel Gonzalez

Sub-band analysis for indoor positioning: extracting robust features

Keywords:
OFDMA; CSI; indoor positioning
Short description:
Set up a MIMO-OFDMA link and collect CSI samples for indoor positioning.

Description

The student will set up a transmission scheme based on MIMO technology and OFDMA.

Samples are then taken at different points in an indoor environment, and the signal characteristics are processed to estimate the position of the mobile node.
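One common baseline for the position-estimation step is fingerprinting with a k-nearest-neighbors regressor; the feature dimensions and reference points below are placeholders:

```python
# Sketch: CSI fingerprinting with kNN regression for position estimation.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

X_train = np.random.rand(100, 64)   # placeholder CSI features per reference point
y_train = np.random.rand(100, 2)    # (x, y) positions of the reference points

knn = KNeighborsRegressor(n_neighbors=3).fit(X_train, y_train)
print(knn.predict(np.random.rand(1, 64)))   # estimated position of a new sample
```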

Prerequisites

Python

Wireless communications, with a focus on channel state information and OFDMA MIMO systems

Knowledge of USRPs is not required but is a plus

Contact

fabian.seguel@tum.de

Office 2940

Supervisor:

Fabian Esteban Seguel Gonzalez

High-level Robotic Teleoperation via Scene Editing

Description

Autonomous grasping and manipulation are complicated tasks that require precise planning and a high level of scene understanding. Although robot autonomy has been evolving for decades, there is still a need for improvement, especially for operation in unstructured environments such as households. Human demonstrations can further improve autonomous robot abilities and increase task success in different scenarios. In this thesis, we will work on user interaction methods for describing a robotic task by modifying the viewed scene.

 

Prerequisites

Useful background:

- 3D Human-computer interfaces

- Game Design

- Digital signal processing

 

Required abilities:

- Experience with Unity and ROS

- Motivation to produce good work

Contact

furkan.kaynar@tum.de

(Please provide your CV and transcript in your application)

Supervisor:

Hasan Furkan Kaynar

Robotic grasp learning from human demonstrations

Keywords:

Description

Autonomous grasping and manipulation are complicated tasks that require precise planning and a high level of scene understanding. Although robot autonomy has been evolving for decades, there is still a need for improvement, especially for operation in unstructured environments such as households. Human demonstrations can further improve autonomous robot abilities and increase task success in different scenarios. In this thesis, we will work on learning from human demonstrations to improve robot autonomy.

Prerequisites

Required background:

- Digital signal processing

- Computer vision

- Neural networks and other ML algorithms

 

Required abilities:

- Experience with Python or C++

- Experience with Tensorflow or PyTorch

- Motivation to produce a good thesis

Contact

furkan.kaynar@tum.de

(Please provide your CV and transcript in your application)

Supervisor:

Hasan Furkan Kaynar

Jacobian Null-Space Energy Dissipation TDPA for Redundant Robots in Teleoperation

Keywords:
Teleoperation, Robotics, Control Theory

Description

Teleoperation Systems

 

Bilateral teleoperation with haptic feedback provides its users with a new dimension of immersion in virtual or remote environments. This technology enables a great variety of applications in robotics and virtual reality, such as remote surgery and industrial digital twins [1]. Figure 1 shows a generalized human-in-the-loop teleoperation system with kinesthetic feedback, where the operator commands a remote/virtual robot to explore the environment and experiences the interactive force feedback through the haptic interface.

Teleoperation systems face many challenges caused by unpredictable environment changes, time-delayed feedback, limited network capacity, etc. [2]. These issues inevitably distort the force feedback signal, degrading the transparency and stability of the system. Over the past decades, many control algorithms and hardware architectures have been developed to tackle these problems [3].

Time Domain Passivity Approach (TDPA)

 

TDPA is a passivity-based control scheme that ensures the stability of teleoperation systems in the presence of communication delays [4] (see Figure 2). It abstracts two-port networks from the haptic system and observes the energy flow between the networks. The passivity condition is maintained by dissipating the extra energy generated by non-passive networks. The original TDPA suffers from position drift and feedback force jumps [5]; one reason for the position drift is that the energy generated by the delayed communication is dissipated in the task space of the robots.
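The core passivity observer / passivity controller step of TDPA can be sketched as follows (impedance-type port, one degree of freedom; the signs, threshold, and sample values are purely illustrative):

```python
# Sketch: 1-DOF passivity observer (PO) and passivity controller (PC).
# The PC adds adaptive damping to the force whenever the observed
# port energy becomes negative (non-passive behavior).
def tdpa_step(f, v, E, dt):
    E += f * v * dt                    # PO: integrate power at the network port
    if E < 0.0 and abs(v) > 1e-6:      # port became active (non-passive)
        alpha = -E / (v * v * dt)      # PC: damper that dissipates the excess
        f += alpha * v                 # modified force command
        E = 0.0
    return f, E

E = 0.0
for f, v in [(1.0, 0.2), (-2.0, 0.3), (0.5, -0.1)]:   # toy force/velocity samples
    f, E = tdpa_step(f, v, E, dt=0.001)
    print(round(f, 3), round(E, 6))
```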

 

Jacobian Null-Space for Redundant Robots

 

Many robot mechanisms have redundant degrees of freedom (rDOFs), meaning they have more joints than the number of dimensions of their task or configuration space. The null space of the Jacobian spans these redundant dimensions, which can be exploited to dissipate extra energy by damping the null-space motion without affecting the task space [5].
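The key construction is the Jacobian null-space projector; a small numerical sketch at the velocity level:

```python
# Sketch: velocity-level null-space projector of a redundant arm.
import numpy as np

J = np.random.rand(6, 7)                  # 6D task space, 7-DOF redundant arm
N = np.eye(7) - np.linalg.pinv(J) @ J     # null-space projector
qdot_null = N @ np.random.rand(7)         # joint motion projected into the null space
print(np.allclose(J @ qdot_null, 0.0))    # True: no effect in the task space
```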

Your Task and Target

In this work, we aim to improve the performance of TDPA by dissipating the energy generated by time delay and other factors in the Jacobian null space of kinematically redundant robots. With the Jacobian null-space method, we can avoid dissipating energy in the task space and thus alleviate position drift and force distortion while keeping the system passive. For more information, please refer to previous work [7-9].

In this master's internship, your work will include:

1. surveying the related algorithms,

2. constructing the simulation environment,

3. experimenting with the state-of-the-art Jacobian null-space TDPA method,

4. analyzing system passivity in Cartesian task space, joint space, null space, etc.

Prerequisites

Requirements

All requirements are recommended but not mandatory. However, you will need extra effort to catch up if you are unfamiliar with the following topics:

1. Basic knowledge of robotics and control theory is favorable.

2. Experience with robotics simulation software and platforms is favorable.

3. C++, Matlab, and Python will be the primary working languages; basic knowledge of one or more of them is highly recommended.

Contact

zican.wang@tum.de

xiao.xu@tum.de

Supervisors:

Zican Wang, Xiao Xu

Learning-based human-robot shared autonomy

Keywords:
robot learning, shared control

Description

In shared-control teleoperation, robot intelligence and human input can be blended to improve task performance and reduce the human workload. In this topic, we would like to investigate how to combine human input and robot intelligence effectively so as to ultimately achieve full robot autonomy. We will employ robot-learning-from-demonstration approaches, where we provide task demonstrations via teleoperation.

We aim to test the developed algorithms in simulation and on a Franka Emika robot arm.
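The blending idea itself can be written in one line; how the arbitration factor alpha is chosen or learned is exactly what this topic investigates:

```python
# Sketch: linear arbitration of human and robot commands.
import numpy as np

def blend(u_human, u_robot, alpha):
    """alpha = 1.0: pure teleoperation; alpha = 0.0: full robot autonomy."""
    return alpha * u_human + (1.0 - alpha) * u_robot

u = blend(np.array([0.10, 0.00, 0.20]),   # human end-effector velocity command
          np.array([0.00, 0.05, 0.25]),   # robot policy command
          alpha=0.7)
print(u)
```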

Requirements:

Basic experience in C/C++

ROS is a plus

High motivation to learn and conduct research

Supervisor:

Scene graph generation models analysis using Visual Genome benchmark and self-recorded datasets

Description

Scene graphs were first proposed in [1] as a data structure that describes the object instances in a scene and the relationships between these objects. The nodes in a scene graph represent the detected target objects, whereas the edges denote the detected pairwise relationships. A complete scene graph can represent the detailed semantics of a dataset of scenes.

Scene graph generation (SGG) is a visual detection task for building structured scene graphs. In this work, we would like to compare three traditional SGG models: Neural Motifs [2], Graph R-CNN [3], and IMP [4]. We will train and evaluate the models on the Visual Genome dataset and other commonly used datasets. In addition, a self-recorded dataset will be annotated and then utilized.

[1]: J. Johnson, et al. "Image retrieval using scene graphs."

[2]: R. Zellers, et al. "Neural motifs: Scene graph parsing with global context."

[3]: J. Yang, et al. "Graph r-cnn for scene graph generation."

[4]: D. Xu, et al. "Scene graph generation by iterative message passing."
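As a minimal illustration of the data structure itself (the labels below are invented examples, not from Visual Genome):

```python
# Sketch: a scene graph as object nodes plus (subject, predicate, object) edges.
objects = {0: "person", 1: "horse", 2: "hat"}
relations = [(0, "riding", 1), (0, "wearing", 2)]

for s, p, o in relations:
    print(f"{objects[s]} --{p}--> {objects[o]}")
```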

Prerequisites

  • Good programming skills (Python, GUI design)
  • Knowledge of Ubuntu/Linux and PyTorch
  • Knowledge of computer vision and neural networks
  • Motivation to learn and conduct research

Contact

dong.yang@tum.de

Supervisors:

Dong Yang, Xiao Xu

Generalized Robot Learning from Demonstration

Keywords:
LfD, imitation learning, Franka Arm

Description

In this work, we would like to design a generative model for robot learning from demonstration. We will consider a table-cleaning task, determine varying task conditions, and apply a generative model such that the robot can generalize to novel, unseen conditions. We aim to use both simulation and a Franka arm to test our algorithms.
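One simple generative baseline is a Gaussian mixture model fitted over demonstrated condition-action vectors, from which motions for new conditions can be sampled; the dimensions below are placeholders:

```python
# Sketch: GMM over demonstrations as a generative LfD baseline.
import numpy as np
from sklearn.mixture import GaussianMixture

demos = np.random.rand(200, 5)             # placeholder: [condition | action] vectors
gmm = GaussianMixture(n_components=4, random_state=0).fit(demos)
candidates, _ = gmm.sample(3)              # generate motions for novel conditions
print(candidates.shape)
```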

Prerequisites

Requirements:

Experience in C/C++

ROS is a plus

High motivation to learn and conduct research

 

Supervisor:

Robotic task learning from human demonstration

Description

Autonomous grasping and manipulation are complicated tasks that require precise planning and a high level of scene understanding. Although robot autonomy has been evolving for decades, there is still a need for improvement, especially for operation in unstructured environments such as households. Human demonstrations can further improve autonomous robot abilities and increase task success in different scenarios. In this thesis, we will work on learning from human demonstrations to improve robot autonomy.

Prerequisites

Required background:

- Digital signal processing

- Computer vision

- Neural networks and other ML algorithms

 

Required abilities:

- Experience with Python or C++

- Experience with Tensorflow or PyTorch

- Motivation to produce a good thesis

 

Contact

furkan.kaynar@tum.de

(Please provide your CV and transcript in your application)

Supervisor:

Hasan Furkan Kaynar

 

Important information on preparing the written report and on giving presentations at LMT, as well as templates for PowerPoint and LaTeX, can be found here.