Open Theses

Real-time Multi-sensor Processing Framework Based on ROS

Description

Multi-sensor data can provide rich environmental information for robots. In practical applications, the sensor data must be processed in real time and in a synchronized manner. In this work, the student will design a ROS-based sensor data acquisition and processing framework and deploy it on an existing robot platform. The sensors involved in this project are an RGB-D camera, a millimeter-wave radar, a LiDAR, and an IMU. Since these sensors run on different clocks, their timestamps deviate from one another; the student needs to perform a unified clock calibration so that the timestamps of the collected data are consistent, stream the data to the robot platform in real time, and process it into the required formats, such as point clouds and RGB images.
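A typical building block for such a framework in ROS is approximate-time synchronization of the sensor topics. The following minimal sketch illustrates the idea; the topic names (/camera/color/image_raw, /camera/depth/image_raw, /lidar/points, /radar/points, /imu/data) and the 20 ms tolerance are assumptions for illustration, not the configuration of the actual platform.

    #!/usr/bin/env python
    # Sketch: approximate-time synchronization of multiple sensor topics in ROS 1.
    # Topic names, message types, and tolerances are illustrative assumptions.
    import rospy
    import message_filters
    from sensor_msgs.msg import Image, Imu, PointCloud2

    def synced_callback(rgb, depth, lidar, radar, imu):
        # All messages in one bundle have header stamps within `slop` seconds of each other.
        stamps = [m.header.stamp.to_sec() for m in (rgb, depth, lidar, radar, imu)]
        rospy.loginfo("synced bundle, timestamp spread: %.4f s", max(stamps) - min(stamps))
        # ... hand the bundle to downstream processing (point clouds, RGB images, ...)

    if __name__ == "__main__":
        rospy.init_node("multi_sensor_sync")
        rgb_sub   = message_filters.Subscriber("/camera/color/image_raw", Image)
        depth_sub = message_filters.Subscriber("/camera/depth/image_raw", Image)
        lidar_sub = message_filters.Subscriber("/lidar/points", PointCloud2)
        radar_sub = message_filters.Subscriber("/radar/points", PointCloud2)
        imu_sub   = message_filters.Subscriber("/imu/data", Imu)
        # Bundle messages whose timestamps lie within 20 ms of each other.
        sync = message_filters.ApproximateTimeSynchronizer(
            [rgb_sub, depth_sub, lidar_sub, radar_sub, imu_sub],
            queue_size=20, slop=0.02)
        sync.registerCallback(synced_callback)
        rospy.spin()

Note that such message-level synchronization only aligns timestamps that already refer to a common clock; the clock calibration between the sensor drivers (e.g., via NTP/PTP or a hardware trigger) has to happen before this step.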

Prerequisites

  • Strong familiarity with ROS, C++, and Python programming
  • Experience with hardware and sensors
  • Basic knowledge of robotics

Contact

mengchen.xiong@tum.de

dong.yang@tum.de

(Please attach your CV and transcript)

Supervisor:

Mengchen Xiong, Dong Yang

Ongoing Theses

Master's Theses

Digital twin of a position-force teleoperation framework in Nvidia Omniverse

Description

NVIDIA Omniverse is a platform that enables researchers to create custom 3D pipelines and simulate large virtual environments quickly and conveniently. With GPU acceleration, it renders these environments accurately and immersively. In this work, we aim to create a digital twin of a position-force teleoperation framework in Omniverse, in which a haptic input device controls the remote robot arm.
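Independently of the rendering back end, the core data flow of such a digital twin is: read the pose of the haptic input device, map it into the robot workspace, and send it as a command to the remote arm and its simulated counterpart, while measured contact forces are fed back to the device. The sketch below covers only the position channel as a hypothetical ROS relay; the topic names (/haptic/pose, /arm/target_pose), the frame name, and the scaling factor are assumptions, not part of the existing framework.

    #!/usr/bin/env python
    # Sketch of the position channel of a position-force teleoperation loop in ROS 1.
    # Topic names, frame id, and workspace scaling are illustrative assumptions.
    import rospy
    from geometry_msgs.msg import PoseStamped

    SCALE = 2.0  # map the small haptic workspace to the larger robot workspace

    class PositionRelay(object):
        def __init__(self):
            self.pub = rospy.Publisher("/arm/target_pose", PoseStamped, queue_size=10)
            rospy.Subscriber("/haptic/pose", PoseStamped, self.on_haptic_pose)

        def on_haptic_pose(self, msg):
            cmd = PoseStamped()
            cmd.header.stamp = rospy.Time.now()
            cmd.header.frame_id = "arm_base"  # assumed base frame of the robot arm
            cmd.pose.position.x = SCALE * msg.pose.position.x
            cmd.pose.position.y = SCALE * msg.pose.position.y
            cmd.pose.position.z = SCALE * msg.pose.position.z
            cmd.pose.orientation = msg.pose.orientation  # orientation passed through
            self.pub.publish(cmd)

    if __name__ == "__main__":
        rospy.init_node("teleop_position_relay")
        PositionRelay()
        rospy.spin()

In the digital twin, the same command topic could be consumed by the simulated arm (e.g., via the Omniverse/Isaac Sim ROS bridge), so that the virtual and real manipulators track the same operator input.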

Prerequisites

  • Good Programming Skills (Python, C++)
  • Knowledge about Ubuntu/Linux/ROS
  • Motivation to learn and conduct research

Contact

dong.yang@tum.de

Please attach your CV and transcript

Supervisor:

Dong Yang, Xiao Xu

Research Internships (Forschungspraxis)

Evaluation of Scene Graph Generation Models using VG Dataset and Self-recorded Dataset

Description

Scene graphs were first proposed [1] as a data structure that describes the object instances in a scene and the relationships between these objects. The nodes in the scene graph represent the detected target objects, whereas the edges denote the detected pairwise relationships. A complete scene graph can thus represent the detailed semantics of a scene.
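As a concrete (toy) illustration of this data structure, a scene graph can be stored as a set of object nodes together with (subject, predicate, object) edges over those nodes; the classes and the example scene below are illustrative and not taken from any particular SGG codebase.

    # Toy scene graph: nodes are detected objects, edges are labeled pairwise relationships.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class ObjectNode:
        node_id: int
        label: str                                  # e.g. "person", "horse"
        bbox: Tuple[float, float, float, float]     # (x1, y1, x2, y2)

    @dataclass
    class SceneGraph:
        nodes: List[ObjectNode] = field(default_factory=list)
        edges: List[Tuple[int, str, int]] = field(default_factory=list)  # (subject_id, predicate, object_id)

        def triplets(self):
            labels = {n.node_id: n.label for n in self.nodes}
            return [(labels[s], p, labels[o]) for s, p, o in self.edges]

    # Example scene: "person riding horse", "horse on grass"
    g = SceneGraph(
        nodes=[ObjectNode(0, "person", (10, 20, 80, 200)),
               ObjectNode(1, "horse", (60, 90, 300, 260)),
               ObjectNode(2, "grass", (0, 220, 400, 300))],
        edges=[(0, "riding", 1), (1, "on", 2)])
    print(g.triplets())  # [('person', 'riding', 'horse'), ('horse', 'on', 'grass')]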

Scene graph generation (SGG) is a visual detection task for building structured scene graphs. In this work, we would like to compare three classical SGG models: Neural Motifs [2], Graph R-CNN [3], and IMP [4]. We will train and evaluate the models on the Visual Genome (VG) dataset and other commonly used datasets. In addition, a self-recorded dataset will be annotated and used for evaluation; a minimal sketch of the triplet-based evaluation is given after the references.

[1]: J. Johnson, et al. "Image retrieval using scene graphs."

[2]: R. Zellers, et al. "Neural motifs: Scene graph parsing with global context."

[3]: J. Yang, et al. "Graph R-CNN for scene graph generation."

[4]: D. Xu, et al. "Scene graph generation by iterative message passing."
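For the model comparison, SGG is typically scored with recall@K over predicted (subject, predicate, object) triplets. The helper below is a simplified, model-agnostic sketch (it matches label triplets only and ignores the bounding-box IoU matching used in the full protocol); the function and variable names are illustrative and not taken from the cited codebases.

    # Simplified recall@K over scene graph triplets for a single image.
    def triplet_recall_at_k(predicted, ground_truth, k):
        """predicted: list of (subject, predicate, object, score) tuples;
        ground_truth: iterable of (subject, predicate, object) triplets."""
        top_k = sorted(predicted, key=lambda t: t[3], reverse=True)[:k]
        hits = {t[:3] for t in top_k} & set(ground_truth)
        return len(hits) / float(len(ground_truth)) if ground_truth else 0.0

    gt = {("person", "riding", "horse"), ("horse", "on", "grass")}
    pred = [("person", "riding", "horse", 0.9),
            ("person", "near", "horse", 0.7),
            ("horse", "on", "grass", 0.4)]
    print(triplet_recall_at_k(pred, gt, k=2))  # 0.5: one of the two ground-truth triplets recovered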

Prerequisites

  • Good Programming Skills (Python)
  • Knowledge about Ubuntu/Linux/PyTorch
  • Knowledge about computer vision and neural networks
  • Motivation to learn and conduct research

Contact

dong.yang@tum.de

Supervisor:

Dong Yang, Xiao Xu