Laufende Arbeiten

Bachelorarbeiten

EIT-Based Hand Gesture Recognition

Stichworte:
eit, dsp, cv, deep-learning, machine-learning, hand, hand-object, hoi

Beschreibung

Electrical Impedance Tomography (EIT) is an imaging technique that estimates the impedance of human body tissues by passing an alternating current through pairs of electrodes and measuring the resulting voltages at the remaining electrode pairs.

The inverse problem aims to reconstruct a cross-section tomographic image of the body part given the measurements.

Wearable EIT devices have been applied successfully to hand gesture classification and have yielded high-accuracy machine learning models [1][2].

The goal of the project is to research and test possible calibration approaches for sufficient and reproducible EIT measurement results (similar to [3]), as well as to explore hand gesture classification based on the impedance values measured on a human forearm [1][2].
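As a rough illustration of the linearized inverse problem (not the project's actual reconstruction code; the sensitivity matrix, shapes, and regularization weight below are placeholder assumptions), a one-step Tikhonov-regularized solve in Python could look like this:

    import numpy as np

    # Illustrative linearized EIT reconstruction with placeholder data.
    # J: precomputed sensitivity (Jacobian) matrix mapping conductivity
    # changes per mesh element to boundary voltage changes.
    # dv: measured voltage differences relative to a reference frame.
    def reconstruct(J, dv, lam=1e-2):
        """One-step Tikhonov solve: ds = (J^T J + lam*I)^(-1) J^T dv."""
        n = J.shape[1]
        return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ dv)

    rng = np.random.default_rng(0)
    J = rng.standard_normal((208, 576))   # 208 measurements, 576 elements
    dv = rng.standard_normal(208)
    ds = reconstruct(J, dv)               # conductivity change per element

In practice, dedicated EIT libraries such as pyEIT provide mesh generation and more sophisticated solvers.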

 

We provide the wearable EIT band that is being developed in collaboration with Enari GmbH and the necessary computer vision building blocks for dataset collection.

The attached figure shows the image reconstruction pipeline.

[1] Y. Zhang and C. Harrison, "Tomo: Wearable, Low-Cost Electrical Impedance Tomography for Hand Gesture Recognition," in Proc. ACM Symposium on User Interface Software and Technology (UIST), 2015, pp. 167-173, doi: 10.1145/2807442.2807480.

[2] D. Jiang, Y. Wu and A. Demosthenous, "Hand Gesture Recognition Using Three-Dimensional Electrical Impedance Tomography," in IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 67, no. 9, pp. 1554-1558, Sept. 2020, doi: 10.1109/TCSII.2020.3006430.

 

[3] Y. Zhang, R. Xiao, and C. Harrison, "Advancing Hand Gesture Recognition with High Resolution Electrical Impedance Tomography," in Proc. ACM Symposium on User Interface Software and Technology (UIST), 2016, pp. 843-850.

Voraussetzungen

  • Knowledge of Python
  • Knowledge of Digital Signal Processing or Deep Learning

Kontakt

marsil.zakour@tum.de and stefan.haegele@tum.de

Betreuer:

Marsil Zakour, Stefan Hägele

Evaluation of Inverse Rendering using Multi-View RGB-D data

Beschreibung

  • The core idea is to detect illumination in a pre-defined scene (Digital Twin) and adapt the moving objects in the simulation.

    In this work, the student would have to:

    • Create a sensor setup
    • Perform inverse rendering
    • Show lighting changes in the room
    • Estimate novel views

Voraussetzungen

Preferable

  • Experience with Git
  • Python (Pytorch)
  • Nvidia Omniverse

Kontakt

driton.salihu@tum.de

Betreuer:

Driton Salihu

Multiband evaluation of passive signals for human activity recognition

Stichworte:
CSI; HAR; AI
Kurzbeschreibung:
Collect RF samples of different activities and classify them from CSI

Beschreibung

The student must use an RF system to collect samples of different activities and
implement classification algorithms to distinguish the activities from CSI, either directly or using time-frequency (T-F) transforms.
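As a hedged sketch of the T-F route (random placeholder data and an assumed 100 Hz sampling rate; a real pipeline would use recorded CSI streams and proper train/test splits), one could compute spectrogram features and fit a simple classifier:

    import numpy as np
    from scipy.signal import stft
    from sklearn.svm import SVC

    def tf_features(csi_amplitude, fs=100.0):
        """Flattened spectrogram magnitude of one CSI subcarrier stream."""
        _, _, Z = stft(csi_amplitude, fs=fs, nperseg=64)
        return np.abs(Z).ravel()

    # Placeholder data: 20 recordings of 1000 samples, two activity labels.
    rng = np.random.default_rng(1)
    X = np.stack([tf_features(rng.standard_normal(1000)) for _ in range(20)])
    y = rng.integers(0, 2, size=20)
    clf = SVC().fit(X, y)
    print(clf.predict(X[:3]))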

Voraussetzungen


Kontakt

fabian.seguel@tum.de

Office 2940

Betreuer:

Fabian Esteban Seguel Gonzalez

Vital Sign Monitoring Using Multi-resolution Analysis and Machine Learning

Stichworte:
CSI; HAR; AI
Kurzbeschreibung:
Obtain vital signs of a patient using a radar system embedded in a hospital bed

Beschreibung

The student must use a radar system to obtain the vital signs of a patient.
The system must be embedded in a hospital bed.
Vital signs such as breathing rate and heart rate (HR) will be targeted; other applications can be discussed.
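For illustration only, the breathing rate can be estimated from the (unwrapped) radar phase by an FFT peak search restricted to an assumed respiration band of 0.1-0.5 Hz; all signal parameters below are synthetic placeholders:

    import numpy as np

    def breathing_rate_bpm(phase, fs):
        """Estimate breathing rate (breaths/min) from a radar phase signal."""
        spectrum = np.abs(np.fft.rfft(phase - phase.mean()))
        freqs = np.fft.rfftfreq(len(phase), d=1.0 / fs)
        band = (freqs >= 0.1) & (freqs <= 0.5)   # typical respiration band
        return 60.0 * freqs[band][np.argmax(spectrum[band])]

    # Synthetic example: 0.25 Hz breathing (15 breaths/min) sampled at 20 Hz.
    fs = 20.0
    t = np.arange(0, 60, 1 / fs)
    phase = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(len(t))
    print(breathing_rate_bpm(phase, fs))   # ~15.0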

Voraussetzungen


Kontakt

fabian.seguel@tum.de

Office 2940

Betreuer:

Fabian Esteban Seguel Gonzalez

Learning-based human-robot shared autonomy

Stichworte:
robot learning, shared control

Beschreibung

In shared control teleoperation, robot intelligence and human input can be blended to achieve improved task performance and a reduced human workload. In this topic, we would like to investigate how to combine human input and robot intelligence effectively so as to ultimately achieve full robot autonomy. We will employ robot learning from demonstration approaches, where we provide task demonstrations using teleoperation.

We aim to test the developed algorithms in simulation and on a Franka Emika robot arm.
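One common baseline for such blending is a linear arbitration between the operator command and the learned policy output; the sketch below is purely illustrative (the actual blending law is part of the research question, and all values are placeholders):

    import numpy as np

    def blend_commands(u_human, u_robot, alpha):
        """Linear arbitration: alpha=1.0 is full human control,
        alpha=0.0 is full robot autonomy."""
        return alpha * u_human + (1.0 - alpha) * u_robot

    u_h = np.array([0.10, 0.00, -0.05])   # operator Cartesian velocity command
    u_r = np.array([0.12, 0.02, -0.04])   # policy output for the same state
    for alpha in (1.0, 0.5, 0.0):         # authority gradually shifts to the robot
        print(alpha, blend_commands(u_h, u_r, alpha))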

Requirements:

Basic experience in C/C++

ROS is a plus

High motivation to learn and conduct research

Betreuer:

Maximizing Success on the Streaming Platform YouTube

Beschreibung

...

Betreuer:

Haptic data reduction for a position-position teleoperation control architecture

Stichworte:
teleoperation control, haptics

Beschreibung

Using a teleoperation system with haptic feedback, users can truly immerse themselves into a distant environment, i.e., modify it and execute tasks without being physically present but with the feeling of being there. A typical teleoperation system with haptic feedback (referred to as a teleoperation system) comprises three main parts: the human operator (OP)/master system, the teleoperator (TOP)/slave system, and the communication link/network in between. During teleoperation, the slave and master devices exchange multimodal sensor information over the communication link. This work aims to develop a haptic data reduction scheme based on a position-position (PP) teleoperation architecture and to compare its performance with the classical position-force (PF) control architecture.

 

Your work:

(1) Build up a teleoperation system that can switch between position-position and position-force architectures.

(2) Integrate the existing haptic data reduction scheme with the PP architecture.

(3) Introduce delays and implement an existing passivity-based control scheme to ensure system stability.

(4) Compare the performance difference between the PF and PP architectures.
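For reference, a minimal sketch of the perceptual (Weber-fraction) deadband principle that underlies kinesthetic data reduction is shown below; the threshold k and the receiver's hold strategy are placeholder assumptions:

    class DeadbandReducer:
        """Transmit a haptic sample only if it deviates from the last sent
        value by more than a Weber-fraction threshold k."""
        def __init__(self, k=0.1):
            self.k = k
            self.last_sent = None

        def update(self, sample):
            """Return the sample if it must be transmitted, else None."""
            if self.last_sent is None or \
               abs(sample - self.last_sent) > self.k * abs(self.last_sent):
                self.last_sent = sample
                return sample
            return None   # receiver holds (or extrapolates) the last value

    reducer = DeadbandReducer(k=0.1)
    signal = [1.00, 1.05, 1.08, 1.25, 1.26, 0.90]
    sent = [s for s in signal if reducer.update(s) is not None]
    print(sent)   # [1.0, 1.25, 0.9] -> fewer packets than samples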

Voraussetzungen

C++, MATLAB/Simulink

Betreuer:

Optimization of Saliency Map Creation

Stichworte:
Saliency maps, deep learning, computer vision

Beschreibung

Saliency maps can be interpreted as probability maps that assess a scene's attractiveness and highlight the regions a user is likely to look at. The objective of this thesis is to help create a novel dataset that records the head motions and gaze directions of participants watching 360° videos with varying dynamics in the scene. This dataset is then to be used to improve state-of-the-art saliency map creation algorithms and make them soft real-time capable. Deep learning has proved to be a robust technique for creating saliency maps. The student is supposed to either use pruning techniques to boost the performance of state-of-the-art methods or to develop their own approach that delivers a trade-off between accuracy and computational complexity.
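As a small illustration of the pruning route (a toy two-layer head rather than a real saliency network; the 30% sparsity level is an arbitrary assumption), magnitude-based pruning in PyTorch looks like this:

    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    # Hypothetical tiny saliency head; real models are much deeper.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
    )

    # Remove the 30% smallest-magnitude weights of every conv layer.
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=0.3)
            prune.remove(module, "weight")   # make the sparsity permanent

    saliency = model(torch.rand(1, 3, 224, 224))   # (1, 1, 224, 224) map in [0, 1]
    print(saliency.shape)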

Voraussetzungen

Computer Vision, Machine Learning, C++, Python

Betreuer:

Masterarbeiten

Validation of Pose Estimation Algorithm with Synthetic Data Generated in Game Engines

Beschreibung

The aim of this thesis is to examine whether synthetic test data generated with game engines provide the necessary level of detail and realism to be meaningfully utilized in the context of a critical evaluation of algorithms, such as those used for body pose estimation.

Your tasks:

  • Research on state-of-the-art pose estimation algorithms
  • Creation of virtual test scenarios relevant to occupant safety in Unreal Engine 5
  • Generation of photorealistic synthetic test data including ground truth
  • Quantitative investigation/validation of an established pose estimation algorithm using the above test data

Kontakt

Zhifan Ni (zhifan.ni@tum.de)

Betreuer:

Zhifan Ni - Robin Konhäuser (robin.konhaeuser@iav.de) (IAV Vehicle Safety GmbH)

Real-time registration of noisy, incomplete and partially-occluded 3D pointclouds

Beschreibung

This topic is about the registration of 3D pointclouds belonging to certain objects in the scene, rather than about registering different pointclouds of the scene itself.

State-of-the-art (SOTA) pointcloud registration models/algorithms should be first reviewed, and promising candidates should be selected for evaluation based on the criteria listed below.

  • The method must work in real-time (at least 25 frames per second) for at least 5 different objects at the same time.
  • The method must be robust to noise in the point clouds; they come from an Intel RealSense D435 RGB+Depth camera.
  • The method must be able to robustly track the objects of interest even if they are occluded partially by other objects.

The best-suited method must then be extended or improved in a novel way, or a completely novel method should be developed.

Both classical and Deep Learning-based methods must be considered.
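As a classical baseline against which learned candidates can be compared, point-to-point ICP in Open3D takes only a few lines (the file names and the 2 cm correspondence threshold below are placeholder assumptions):

    import numpy as np
    import open3d as o3d

    # Two partially overlapping object point clouds (placeholder files).
    source = o3d.io.read_point_cloud("object_frame_t0.ply")
    target = o3d.io.read_point_cloud("object_frame_t1.ply")

    result = o3d.pipelines.registration.registration_icp(
        source, target, max_correspondence_distance=0.02, init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    print(result.fitness, result.inlier_rmse)
    print(result.transformation)   # 4x4 rigid transform: source -> target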

Related work:

  • DeepGMR: https://github.com/wentaoyuan/deepgmr
  • 3D Object Tracking with Transformer: https://github.com/3bobo/lttr

 

Voraussetzungen

  • First experiences with 3D data processing / Computer Vision
  • Python programming, ideally also familiarity with C++
  • Familiarity with Linux and the command line

Betreuer:

Rahul Chaudhari

Learning 3D skeleton animations of animals from videos

Beschreibung

Under this topic, the student should investigate how to learn 3D skeleton animations of animals from videos. The 2D skeleton should first be extracted automatically from a video. A state-of-the-art 3D animal shape+pose model (SMAL, see references below) should then be fitted to the skeleton.
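The fitting step essentially minimizes a 2D reprojection error over the model's pose parameters. The sketch below shows only that optimization pattern; model_joints is a hypothetical stand-in for a real SMAL-style model, and all shapes, intrinsics, and detections are placeholders:

    import torch

    def project(points_3d, K):
        """Pinhole projection of (N, 3) points with intrinsics K."""
        uvw = points_3d @ K.T
        return uvw[:, :2] / uvw[:, 2:3]

    K = torch.tensor([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
    joints_2d = torch.rand(20, 2) * 400            # detected 2D skeleton (fake)
    pose = torch.zeros(105, requires_grad=True)    # SMAL-like pose parameters

    def model_joints(pose):                        # stand-in for the real model
        return pose[:60].reshape(20, 3) + torch.tensor([0.0, 0.0, 2.0])

    opt = torch.optim.Adam([pose], lr=0.05)
    for _ in range(200):
        opt.zero_grad()
        loss = ((project(model_joints(pose), K) - joints_2d) ** 2).mean()
        loss.backward()
        opt.step()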

References

  • https://smal.is.tue.mpg.de/index.html
  • https://smalr.is.tue.mpg.de/
  • https://github.com/silviazuffi/smalr_online
  • https://github.com/silviazuffi/gloss_skeleton
  • https://github.com/silviazuffi/smalst
  • https://github.com/benjiebob/SMALify
  • https://github.com/benjiebob/SMALViewer
  • https://bmvc2022.mpi-inf.mpg.de/0848.pdf

Dataset

  • https://research.google.com/youtube8m/explore.html
  • https://youtube-vos.org/dataset/vos/
  • https://data.vision.ee.ethz.ch/cvl/youtube-objects/
  • https://blog.roboflow.com/youtube-video-computer-vision/
  • https://github.com/gtoderici/sports-1m-dataset/ (this dataset seems to provide raw videos from YT)
  • https://github.com/pandorgan/APT-36K
  • https://calvin-vision.net/datasets/tigdog/: contains all the videos, the behavior labels, the landmarks, and the segmentation masks for all three object classes (dog, horse, tiger)
  • https://github.com/hellock/WLD (raw videos)
  • https://sutdcv.github.io/Animal-Kingdom/
  • https://sites.google.com/view/animal-pose/

Voraussetzungen

- Background in Computer Vision, Optimization techniques, and Deep Learning

- Python programming

Betreuer:

Rahul Chaudhari

Real-time Multi-View Visual SLAM

Beschreibung

How can a SLAM system utilize a multi-camera rig efficiently, robustly, and quickly?

Betreuer:

Scene Graph-based Indoor Localization

Stichworte:
3D Computer Vision, Deep Learning, Indoor Localization

Beschreibung

This thesis investigates 3D scene graph representations and deep learning for localization in complex indoor environments.

Voraussetzungen

  • Python and Git
  • Experience with a deep learning framework (Pytorch, Tensorflow)
  • Interest in Computer Vision and Machine Learning

Betreuer:

Adam Misik

A perceptual-based rate scalable haptic offline coding scheme

Beschreibung

The goal is to develop a perception-based, rate-scalable coding scheme for offline kinesthetic data compression, building on previous studies.

Betreuer:

Robust Hand-Object Pose estimation from Multi-view 2D Keypoints

Beschreibung

Hand-object pose estimation is a challenging task due to multiple factors like occlusion and ambiguity in pose recovery. To overcome these issues, multi-view camera systems are used.

Using 2D keypoint detectors for hands and objects, such as YOLOv8-pose and MMPose, we can lift the 2D detections to 3D. However, the detections are usually noisy, and some keypoints may be missing.

We want to utilize deep learning methods for smoothing, inpainting, and lifting these detections to 3D in order to estimate the pose of the corresponding hands and objects.

The task is formulated as follows:

Given a sequence of noisy 2D keypoints for human hands and an object, captured from calibrated camera views, estimate a smooth trajectory of the hand and object poses using a deep learning model.
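Before any learned smoothing, a per-frame 3D estimate can be obtained from the calibrated views by standard linear (DLT) triangulation; a minimal sketch with two toy cameras and a single keypoint:

    import numpy as np

    def triangulate_dlt(projections, points_2d):
        """DLT triangulation of one keypoint from >= 2 calibrated views.
        projections: list of 3x4 camera matrices; points_2d: list of (u, v)."""
        A = []
        for P, (u, v) in zip(projections, points_2d):
            A.append(u * P[2] - P[0])
            A.append(v * P[2] - P[1])
        _, _, Vt = np.linalg.svd(np.asarray(A))
        X = Vt[-1]
        return X[:3] / X[3]   # inhomogeneous 3D point

    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X = np.array([0.0, 0.0, 4.0, 1.0])                  # ground-truth point
    uv = [(P @ X)[:2] / (P @ X)[2] for P in (P1, P2)]   # ideal detections
    print(triangulate_dlt([P1, P2], uv))                # ~[0, 0, 4]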

 

Voraussetzungen

  • Python
  • Knowledge about Deep Learning
  • Knowledge about Pytorch
  • Previous Knowledge about 3D data processing is a plus.

Kontakt

marsil.zakour@tum.de

Betreuer:

Marsil Zakour

Interactive story generation with visual input

Beschreibung

Conventional stories for children of ages 3–6 years are static, independent of the medium (text, video, audio). We aim to make stories interactive by giving the user control over characters, objects, scenes, and timing. This will lead to the construction of novel, unique, and personalized stories situated in (partially) familiar environments. We restrict this objective to specific domains consisting of a coherent body of work, such as the children’s book series “Meine Freundin Conni”. The challenges in this thesis include finding a suitable knowledge representation for the domain, learning that representation automatically, and inferring a novel storyline over that representation with active user interaction. In this direction, both neural as well as symbolic approaches should be explored.

So far we have implemented a text-based interactive story generation system based on Large Language Models. In this thesis, the text input modality should be replaced by visual input. In particular, the story should be driven by real-world motion of figurines and objects, rather than an abstract textual description of the scene and its dynamics.

 

Voraussetzungen

- First experiences with 2D/3D Computer Vision and Computer Graphics

- Familiarity with AI incl. Deep Learning (university courses / practical experience)

- Programming in Python

 

Betreuer:

Rahul Chaudhari

Leveraging Multimodal Data for Scan2CAD-based 3D Reconstruction

Stichworte:
3D Computer Vision, Deep Learning, Scan-to-CAD

Beschreibung

3D reconstruction of indoor environments is essential for various applications, such as virtual reality, simulation, and robotics. The Scan2CAD approach is a state-of-the-art method for 3D reconstruction and modeling based on point clouds and CAD models [1]. However, it is primarily focused on geometric structure and may not capture the color, texture, or material properties of objects within the scene. This can limit its usefulness in applications where object appearance is important.

The proposed master thesis aims to improve Scan2CAD-based reconstruction by using multiple modalities, such as RGB images and CAD models. By using multiple modalities, the accuracy of Scan2CAD-based reconstruction can be further improved. The thesis can draw inspiration from approaches presented in [2, 3, 4].

References

[1] Avetisyan, Armen, et al. "Scan2cad: Learning cad model alignment in rgb-d scans." Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition. 2019.

[2] Wald, Johanna, et al. "Rio: 3d object instance re-localization in changing indoor environments." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.

[3] Gümeli, Can, Angela Dai, and Matthias Nießner. "ROCA: robust CAD model retrieval and alignment from a single image." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.

[4] Siddiqui, Yawar, et al. "Texturify: Generating textures on 3d shape surfaces." Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part III. Cham: Springer Nature Switzerland, 2022.

Voraussetzungen

  • Python and Git
  • Experience with a deep learning framework (Pytorch, Tensorflow)
  • Interest in Computer Vision and Machine Learning

Kontakt

Please send your CV and Transcript of Records to:

adam.misik@tum.de

Betreuer:

Adam Misik

Deep Learning models for zero-shot object detection and segmentation

Beschreibung

In the world of computer vision, data labeling holds immense significance for training powerful machine learning models. Accurate annotations provide the foundation for teaching algorithms to understand visual information effectively. However, data labeling in computer vision poses unique challenges, including the complexity of visual data, the need for precise annotations, and handling large-scale datasets. Overcoming these challenges is crucial for enabling computer vision systems to extract valuable insights, identify objects, and revolutionize a wide range of industries.

Therefore, the development of automatic annotation pipelines for 2D and 3D labeling in various tasks is crucial, leveraging recent advancements in computer vision to enable automatic, efficient and accurate labeling of visual data.

This master thesis will focus on automatically labeling images and videos, specifically generating 2D/3D labels (i.e., 2D/3D bounding boxes and segmentation masks). The automatic labeling pipeline has to generalize to any type of images and videos, such as household objects, toys, indoor/outdoor environments, etc.

The automatic labeling pipeline will be developed based on zero-shot detection and segmentation models such as GroundingDINO and segment-anything, in addition to similar methods (see Awesome Segment Anything). Additionally, the labeling pipeline including the used models will be implemented in the autodistill code base, and the performance will be tested by training and evaluating some smaller target models for specific tasks.
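A sketch of the intended autodistill workflow is given below (the API follows the autodistill examples at the time of writing and should be checked against the current documentation; the ontology, folders, and target model are placeholder assumptions):

    from autodistill.detection import CaptionOntology
    from autodistill_grounded_sam import GroundedSAM
    from autodistill_yolov8 import YOLOv8

    # Map free-text prompts for the zero-shot base model to class names.
    ontology = CaptionOntology({"a household object": "object", "a toy": "toy"})

    # Label a folder of unannotated images with GroundingDINO + SAM.
    base_model = GroundedSAM(ontology=ontology)
    dataset = base_model.label(input_folder="./images", output_folder="./dataset")

    # Distill the zero-shot labels into a small, fast target model.
    target_model = YOLOv8("yolov8n.pt")
    target_model.train("./dataset/data.yaml", epochs=50)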

Sub-tasks:

  • Automatic generation of 2D labels for images and videos, such as 2D bounding boxes and segmentation masks (see Grounded-Segment-Anything, segment-any-moving, Segment-and-Track-Anything).
  • Automatic generation of 3D labels for images and videos, such as 3D bounding boxes and segmentation masks (see 3D-Box-Segment-Anything, SegmentAnything3D, segment-any-moving, Segment-and-Track-Anything).
  • Implement a 2D/3D labeling tool to modify and improve the automatic 2D/3D labels (see DLTA-AI).
  • The automatic labeling pipeline, in addition to the used base models and some target models, has to be implemented in the autodistill code base to enable easy end-to-end labeling, training, and deployment for various tasks such as 2D/3D object detection and segmentation.
  • Comprehensive overview of the performance and limitations of current zero-shot models for automatic labeling in tasks such as 2D/3D object detection and segmentation.
  • Suggestions for future work to overcome the limitations of the used methods.

Bonus tasks:

  • Adding image augmentation and editing methods to the labeling pipeline and tool to generate more data (see EditAnything).
  • Implement one-shot labeling methods to generate labels for unique objects (see Personalize-SAM and Matcher).

Voraussetzungen

Interest and first experiences in Computer Vision, Deep Learning, Python programming, 3D data.

Betreuer:

Rahul Chaudhari

VR-based 3D synthetic data generation for interactive Computer Vision tasks

Beschreibung

Under this topic, the student will extend our existing VR-based synthetic data generation tool for Hand-Object interactions. Furthermore, the student will generate synthetic data using this tool and evaluate state-of-the-art Computer Vision and Deep Learning models for tracking Hand-Object Interactions in 3D.

Voraussetzungen

  • Strong familiarity with Python programming
  • Interest and first experiences in Computer Graphics, VR, Computer Vision, and Deep Learning.
  • Ideally also interest and experience in Blender 3D software

Betreuer:

Rahul Chaudhari

iOS app for tracking objects using RGB and depth data

Beschreibung

This topic is about the development of an iPhone app for tracking objects in the environment using data from the device's RGB and depth sensors.

Voraussetzungen

  • Good programming experience with C++ and Python
  • Ideally, experience building iOS apps with Swift and/or Unity ARFoundation
  • This topic is only suitable for you if you have a recent personal Mac development device (ideally at least a MacBook Pro with Apple Silicon M1) and at least an iPhone 12 Pro with a LiDAR depth sensor

Betreuer:

Rahul Chaudhari

Digital twin of a position-force teleoperation framework in Nvidia Omniverse

Beschreibung

NVIDIA Omniverse is a platform that enables researchers to create custom 3D pipelines and simulate large virtual environments in a fast and convenient manner. It can render the environments very accurately and immersively with the help of GPU acceleration. In this work, we aim to create a digital twin of a teleoperation framework in Omniverse, in which we can use the haptic input device to control the remote robot arm.

Voraussetzungen

  • Good Programming Skills (Python, C++)
  • Knowledge about Ubuntu/Linux/ROS
  • Motivation to learn and conduct research

Kontakt

dong.yang@tum.de

Please attach your CV and transcript

Betreuer:

Dong Yang, Xiao Xu

Global Camera Localization in Lidar Maps

Stichworte:
Contrastive Learning, Localization, Camera, Lidar, Point Clouds

Beschreibung

Visual localization is a fundamental problem in computer vision that applies to applications such as robotics, autonomous driving, or augmented reality. A common approach to visual localization is based on matching 2D features of an image query with points in a previously acquired 3D map [1]. A shortcoming of this approach is the viewpoint dependency of the query features, which leads to poor results when the viewing angle between the query and the map varies. Other effects, such as photometric inconsistencies, limit the potential of using 2D image features for localization in a 3D map.

 

Recently, localization based on point cloud to point cloud matching has been introduced [2,3]. Once a map is created using a 3D sensor such as lidar, the device can be localized by directly matching a query lidar point cloud with a previously created global map. The advantage of this approach is the higher robustness against viewpoint variations and the direct depth information available in both the 3D map and the 3D query [1].

 

A common assumption in visual localization approaches is that the same modality is used for both query and map generation. However, this is often not the case, especially for devices used for robotics and augmented reality. In order to perform point cloud-based localization for these common cases, cross-source point cloud retrieval and registration must be addressed. In this work, such an approach is investigated.

 

References

 

[1] T. Caselitz, B. Steder, M. Ruhnke, and W. Burgard, “Monocular camera localization in 3D LiDAR maps,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct. 2016, pp. 1926–1931. doi: 10.1109/IROS.2016.7759304.

 

[2] J. Du, R. Wang, and D. Cremers, “DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization,” in Computer Vision – ECCV 2020, vol. 12349, A. Vedaldi, H. Bischof, T. Brox, and J.-M. Frahm, Eds. Cham: Springer International Publishing, 2020, pp. 744–762. doi: 10.1007/978-3-030-58548-8_43.

 

[3] J. Komorowski, M. Wysoczanska, and T. Trzcinski, “EgoNN: Egocentric Neural Network for Point Cloud Based 6DoF Relocalization at the City Scale.” arXiv, Oct. 24, 2021. Accessed: Oct. 31, 2022. [Online]. Available: http://arxiv.org/abs/2110.12486


Voraussetzungen

  • Python and Git
  • Experience with SLAM
  • Experience with a deep learning framework (Pytorch, Tensorflow)
  • Interest in Computer Vision and Machine Learning

Kontakt

Please send your CV and Transcript of Records to:

adam.misik@tum.de

Betreuer:

Adam Misik

Uncertainty Quantification for Deep Learning-based Point Cloud Registration

Stichworte:
Uncertainty Quantification, Point Cloud Registration, Bayesian Inference, Deep Learning

Beschreibung

The problem of registering point clouds can be reduced to estimating a Euclidean transformation between two sets of 3D points [1]. Once the transformation is estimated, it can be used to register two point clouds in a common coordinate system.

Applications of point cloud registration include 3D reconstruction, localization, or change detection. However, these applications rely on a high similarity between point clouds and do not account for disturbances in the form of noise, occlusions, or outliers. Such defects degrade the quality of the point cloud and thus the accuracy of the registration-dependent application. One approach to deal with these effects is to quantify the registration uncertainty. The general idea is to use uncertainty as a guide for point cloud registration quality. If the uncertainty is too high, a new registration iteration or re-scanning is needed.

In this project, we investigate uncertainty quantification for current learning-based approaches to point cloud registration [1, 2, 3]. First, several methods for uncertainty quantification are selected [4]. Of particular interest are approaches based on Bayesian inference. The approaches are then modified to fit current point cloud registration frameworks and evaluated against benchmark datasets such as ModelNet or ShapeNet. In the evaluation, different types of scan perturbations need to be tested.
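One of the simplest candidates from [4] that transfers directly to existing registration networks is Monte-Carlo dropout; a minimal sketch (the tiny head below merely stands in for a registration network's pose-regression branch):

    import torch
    import torch.nn as nn

    def mc_dropout_predictions(model, x, n_samples=30):
        """Keep dropout active at test time and treat the spread of the
        sampled outputs as an epistemic-uncertainty estimate."""
        model.train()   # activates dropout (a simple, common approximation)
        with torch.no_grad():
            samples = torch.stack([model(x) for _ in range(n_samples)])
        return samples.mean(dim=0), samples.std(dim=0)

    # Toy regression head standing in for a pose-regression branch.
    head = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                         nn.Dropout(0.2), nn.Linear(128, 6))
    features = torch.randn(1, 64)
    mean_pose, pose_std = mc_dropout_predictions(head, features)
    print(mean_pose.shape, pose_std.mean())   # high std -> re-register/re-scan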

References

[1] Huang, Xiaoshui, et al. A Comprehensive Survey on Point Cloud Registration. arXiv:2103.02690, arXiv, 5 Mar. 2021. arXiv.org, http://arxiv.org/abs/2103.02690.

[2] Yuan, Wentao, et al. DeepGMR: Learning Latent Gaussian Mixture Models for Registration. arXiv:2008.09088, arXiv, 20 Aug. 2020. arXiv.org, http://arxiv.org/abs/2008.09088.

[3] Huang, Shengyu, et al. “PREDATOR: Registration of 3D Point Clouds with Low Overlap.” 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2021, pp. 4265–74. DOI.org (Crossref), https://doi.org/10.1109/CVPR46437.2021.00425.

[4] Abdar, Moloud, et al. “A Review of Uncertainty Quantification in Deep Learning: Techniques, Applications and Challenges.” Information Fusion, vol. 76, Dec. 2021, pp. 243–97. ScienceDirect, https://doi.org/10.1016/j.inffus.2021.05.008.


Voraussetzungen

  • Python and Git
  • Experience with a deep learning framework (Pytorch, Tensorflow)
  • Interest in Computer Vision and Machine Learning

Kontakt

Please send your CV and Transcript of Records to:

adam.misik@tum.de

 

Betreuer:

Adam Misik

Hand Pose Estimation Using Multi-View RGB-D Sequences

Stichworte:
Hand Object Interaction, Pose Estimation, Deep Learning

Beschreibung

In this project, the task is to fit a parametric hand mesh model and a set of rigid objects to a sequence from multi-view RGB-D cameras. Existing models for hand keypoint detection and 6DoF pose estimation of rigid objects have evolved significantly in recent years. Our goal is to utilize such models to estimate the hand and object poses.

Related Work

  1. https://dex-ycb.github.io/
  2. https://www.tugraz.at/institute/icg/research/team-lepetit/research-projects/hand-object-3d-pose-annotation/
  3. https://github.com/hassony2/obman
  4. https://github.com/ylabbe/cosypose

Voraussetzungen

  • Knowledge of computer vision.
  • Experience with segmentation models (e.g., Detectron2).
  • Experience with the deep learning frameworks PyTorch or TensorFlow (2.x).
  • Experience with Pytorch3D is a plus.

Kontakt

marsil.zakour@tum.de

Betreuer:

Marsil Zakour

Attentive observation using intensity-assisted segmentation for SLAM in a dynamic environment

Stichworte:
SLAM, ROS, Deep Learning, Segmentation

Beschreibung

Attentive observation using intensity-assisted segmentation for SLAM in a dynamic environment.

 

Betreuer:

Illumination of Augmented Reality Content using a Digital Environment Twin

Beschreibung

...

Betreuer:

Classification of Wafer Patterns Using Machine Learning Methods to Automate Pattern Recognition

Beschreibung

...

Betreuer:

Solid-State LiDAR and Stereo-Camera based SLAM for unstructured planetary-like environments

Stichworte:
Solid-State LiDAR; Stereo-Camera; SLAM

Beschreibung

New developments in solid-state LiDAR technology open up the possibility of integrating range sensors into space-qualifiable perception setups, thanks to mechanical designs with fewer movable parts. Thereby, the development of a hybrid stereo-camera/LiDAR sensor setup might overcome the disadvantages each technology comes with, such as the limited range of stereo camera setups or the minimum range LiDARs need. This thesis investigates such a new solid-state LiDAR's possibilities by incorporating it along with a stereo camera setup and an IMU sensor into a SLAM system. Foreseen activities might include, but are not limited to, the design and construction of a portable/handheld sensor setup for recording and testing in planetary-like environments, extrinsic calibration of the sensors, integration into a software pipeline, development of a ROS interface, and preliminary mapping tests.

Betreuer:

Mojtaba Karimi - (German Space Agency (DLR))

Deep Predictive Attention Controller for LiDAR-Inertial localization and mapping

Stichworte:
SLAM, Sensor Fusion, Deep Learning

Beschreibung

Multidimensional sensory data is computationally expensive for localization algorithms in autonomous drone navigation. Research shows that not all sensory data is equally important during the entire SLAM process for producing a reliable output. An attention control scheme is one effective way to filter out the most valuable sensory data for such a system. A predictive attention model, for instance, can help us improve the result of sensor fusion algorithms by concentrating on the most valuable sensory data based on the dynamics of the vehicle motion or a semantic understanding of the environment. The aim of this work is to investigate state-of-the-art attention control models that can be adapted to multidimensional sensory data acquisition systems and to compare them across different modalities.

Voraussetzungen

- Strong background in Python and C++ programming

- Solid background in robot control theory

- Be familiar with deep learning frameworks (Tensorflow)

- Be familiar with the robot operating system (ROS)

Kontakt

leox.karimi@tum.de

Betreuer:

Model based Collision Identification for Real-Time Jaco2 Robot Manipulation 

Stichworte:
ROS, Haptics, Teleoperation, Jaco2

Beschreibung

With the advancement of robotics and communication networks such as 5G, telemedicine has become a critical application for remote diagnosis and treatment.

In this project, we want to perform robotic teleoperation using a Sigma 7 haptic master and a Jaco 2 robotic manipulator.

 Tasks:

  • State of the art review and mathematical modeling
  • Jaco2 haptic controller implementation
  • Fault-tolerant (delay, network disconnect) controller design 
  • System evaluation with external force-torque sensor

Voraussetzungen

  • Strong background in C++ programming
  • Solid background in control theory 
  • Be familiar with robot dynamics and kinematics 
  • Be familiar with the robot operating system (ROS) and ROS Control (Optional)

Kontakt

edwin.babaians@tum.de

Betreuer:

Edwin Babaians

Interdisziplinäre Projekte

Impact of Instance Segmentation Modality on the Accuracy of Action Recognition Models from Ego-Perspective Views

Beschreibung

The goal of this project is to use interactive segmentation methods to collect data for instance segmentation models and then analyze the impact of the instance segmentation modality on the performance of action detection networks.

 

 

Kontakt

marsil.zakour@tum.de

Betreuer:

Marsil Zakour

Extension of an Open-source Autonomous Driving Simulation for German Autobahn Scenarios

Beschreibung

This work can be done in German or English in a team of 2-4 members.
Self-driving cars need to be safe in the interaction with other road users such as motorists, cyclists, and pedestrians. But how can car manufacturers ensure that their self-driving cars are safe with us humans? The only realistic and economic way to test this is to use simulation.
cogniBIT is a Munich-based startup founded by alumni of TUM and LMU and provides realistic models of all kinds of road users. These models are based on state-of-the-art neurocognitive and sensorimotor research and reproduce human perception, cognition, and action with all their limitations.
In this project the objective is to extend the open-source simulator CARLA (www.carla.org) such that German Autobahn-like scenarios can be simulated.

Tasks:
•    Design an Autobahn scenario using the road description format OpenDRIVE.
•    Adapt the CARLA OpenDRIVE standalone mode (requires C++ knowledge).
•    Design an environment for the scenario using the Unreal Engine 4 Editor.
•    Perform a simulation-based experiment using the German Autobahn scenario and the cogniBIT driver model.

Voraussetzungen

•    C++ knowledge
•    experience with Python is helpful
•    experience with the UE4 editor is helpful
•    interest in autonomous driving and cognitive models

Betreuer:

Markus Hofbauer - Lukas Brostek (cogniBIT)


Forschungspraxis (Research Internships)

Refining 3D Hand-Object Reconstruction via Elastomer Model

Beschreibung

To model the interaction of hand and object, not only is a separate estimation of hand and object required, but the contact between hand and object must also be taken into account. Significant progress has been made in modeling isolated hands and objects from RGB images. However, modeling the contact between a human hand and an object within a single image requires much effort because of occlusions. In this work, we propose a method for the reconstruction of hands and objects in 3D based on elastomer models. This method simulates the hand-object (HO) contact based on the elastic energy of the elastomer model. At the same time, it imitates the deformation of soft hand tissue using the concept of elastic modulus, such that a more physically plausible grasp can be formed. Aside from that, an optimizer is applied to improve the HO interaction under the supervision of ground truth. The whole framework is constructed in an end-to-end manner. Several commonly used benchmarks show that the method leads to a better reconstruction result and produces more physically plausible hand and object estimations.

Betreuer:

Xinguo He

Surface Material Recognition using Object Category Prior with Vision Language Model

Beschreibung

In our 6G Digital Twin setup, we need to recognize the surface material of objects in the environment. Currently, we are using a YOLOv8 model to perform object segmentation. If we could extend such an object segmentation model to material segmentation tasks, we could save computation resources. Of course, we can fine-tune the classification head of those object segmentation models on a surface material dataset. However, this would ignore the object category we have already detected. Another possible approach is to apply a vision language model, such as CLIP, and use prompts like "a wooden surface of a table" to leverage the prior knowledge of object categories.
In this Forschungspraxis, we will first explore state-of-the-art works in open-vocabulary semantic segmentation and understand how they deal with segmentation masks. Then, we will adapt such models to material segmentation, which may involve mask adaptation, feature alignment, prompt engineering, etc.

 

Example: F. Liang et al., Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP, CVPR 2023, https://jeff-liangf.github.io/projects/ovseg/
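A minimal sketch of the prompt-based idea using the Hugging Face CLIP interface is given below; the model checkpoint, prompt template, and material list are assumptions to be refined during the project:

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Prompts combine the detected object category with candidate materials.
    materials = ["wooden", "metallic", "plastic", "glass", "fabric"]
    prompts = [f"a {m} surface of a table" for m in materials]
    image = Image.open("table_crop.jpg")   # crop from a YOLOv8 segmentation mask

    inputs = processor(text=prompts, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image   # shape (1, num_prompts)
    probs = logits.softmax(dim=-1).squeeze(0)
    print(materials[probs.argmax()], probs.max().item())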

Voraussetzungen

Basic programming knowledge is required, preferably in Python.

Experience with PyTorch and popular object detection models (YOLOv8, Detectron2, ...) is a plus.

Interest in Computer Vision and Multimodal Models.

Kontakt

zhifan.ni@tum.de

Betreuer:

Zhifan Ni

Feature enhancement based human-object detection

Kurzbeschreibung:
Human-object interaction, Feature pre-processing, VAE

Beschreibung

Human-object interaction (HOI) detection is currently a popular research topic. It requires us to spatially distinguish human-object interactions in images. However, the current stage of feature extraction can be further optimized. This task will explore how to improve the performance of HOI detection, starting with feature extraction and optimization.

Voraussetzungen

- Computer vision

- Human-object interaction prediction

- Deep Learning

- Transformer

Kontakt

yuankai.wu@tum.de

Betreuer:

Yuankai Wu

Generative Hands Object Interactions Using Diffusion Models

Stichworte:
deep-learning, diffusion, stable-diffusion, action, smpl-x,mano, hand-object-interaction

Beschreibung

Recently, there has been increasing success in the generation of human motion and object grasps. At the same time, an increasing number of datasets capture human hands' interaction with surrounding objects in addition to action labels.

 

One advantage of diffusion models is that they can easily be conditioned on different types of input, like text embeddings and control parameters.

Your tasks will include exploring the existing models and implementing a hand-object interaction diffusion model.
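At its core, training such a model reduces to the standard DDPM noise-prediction objective, conditioned on, e.g., a text or action embedding; a minimal sketch with placeholder shapes and a dummy network:

    import torch
    import torch.nn.functional as F

    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)

    def diffusion_loss(model, x0, cond):
        """Sample a timestep, noise x0, and train the model to predict the noise."""
        t = torch.randint(0, T, (x0.shape[0],))
        eps = torch.randn_like(x0)
        ab = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))
        x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps   # forward noising
        return F.mse_loss(model(x_t, t, cond), eps)

    # x0 could be a window of MANO hand poses plus object 6DoF trajectories.
    model = lambda x, t, c: x * 0.0   # dummy; a real model is e.g. a transformer
    loss = diffusion_loss(model, torch.randn(8, 64, 99), torch.randn(8, 512))
    print(loss.item())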

Datasets We could use: 

  •  https://hoi4d.github.io/
  •  https://taeinkwon.com/projects/h2o/
  •  https://dex-ycb.github.io/

Motion Generation Models

  • https://guytevet.github.io/mdm-page/
  • https://goal.is.tue.mpg.de/

Voraussetzungen

  • Experience and interest in deep learning research.
  • Knowledge and experience with Pytorch.

Kontakt

marsil.zakour@tum.de

Betreuer:

Marsil Zakour

Monocular RGB-based Digital Twin

Beschreibung

Using monocular RGB data to reconstruct a 3D interior environment with CAD-based reconstruction.

Voraussetzungen

Git, Python, PyTorch

Kontakt

driton.salihu@tum.de

Betreuer:

Driton Salihu

Human-robot interaction using vision-based human-object interaction prediction

Kurzbeschreibung:
Human-object interaction, human-robot interaction

Beschreibung

We use a vision-based solution to locate the target object for the robot and send the desired object back to the operator, completing the whole human-robot interaction process.

Voraussetzungen

- Panda arm

- Computer vision

- Human-object interaction prediction

- Grasping

Betreuer:

Yuankai Wu

Camera-Lidar Dataset for Localization Tasks

Stichworte:
Camera, Lidar, Dataset Creation, SLAM, Machine Learning

Beschreibung

In this project, we will create a camera-lidar dataset. The dataset can be used for the improvement of visual localization tasks. The idea is to extend existing localization datasets with camera-lidar correspondence point clouds [1].  

For the generation of the lidar submaps, we will use the procedure proposed in [2]. For the camera submaps, we will use visual SLAM and extract the 3D reconstruction [3]. Both the lidar and camera submaps will then be linked based on odometry or timestamp information provided with the localization dataset.

References

[1] Maddern, Will, et al. "1 year, 1000 km: The Oxford RobotCar dataset." The International Journal of Robotics Research 36.1 (2017): 3-15.

[2] Uy, Mikaela Angelina, and Gim Hee Lee. "Pointnetvlad: Deep point cloud based retrieval for large-scale place recognition." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.

[3] Mur-Artal, Raul, Jose Maria Martinez Montiel, and Juan D. Tardos. "ORB-SLAM: a versatile and accurate monocular SLAM system." IEEE transactions on robotics 31.5 (2015): 1147-1163.

Voraussetzungen

  • Python and Git
  • C++ basics
  • Interest in SLAM and Computer Vision 

Kontakt

Please send your CV and Transcript of Records to:

adam.misik@tum.de

Betreuer:

Adam Misik

GAN-based subjective haptic signal quality assessment database augmentation and enlargement methods

Beschreibung

In this project, the student will research and implement a novel GAN-based approach for the augmentation and enlargement of a subjective haptic signal quality assessment database. Subjective experiments will also be conducted to evaluate the result of the automatic data expansion.

Betreuer:

Zican Wang

Handshake for Plug-and-Play Haptic Interaction system

Stichworte:
Teleoperation, GUI, Qt, JavaScript
Kurzbeschreibung:
Our project aims to implement a handshake communication protocol for a Plug-and-Play Haptic Interaction system according to the IEEE standard.

Beschreibung

Our project aims to implement a handshake communication protocol for a Plug-and-Play Haptic Interaction system according to the IEEE standard. For the system, the main achievements are:

 

1.     Plug and play on the Leader side: When the Leader device disconnects from the system, the Follower device will switch to the waiting state and remain in the initial position it had when it was activated in the system, until the next re-insertion of the Leader device.

2.     Automatic adjustment of device parameters according to the specific type of Leader device to guarantee the performance of human perception: First of all, when connecting, the Leader device will transmit its media and interface information (so-called metadata) to the Follower side, and at the same time it will inform the Follower device of the specific model type it is using. If the Leader device has a different precision from the Follower device, the Follower device will adjust its parameters according to the received information to adapt to the Leader and transmit its own metadata back to the Leader.

   For the adjustments to the message transmission process:

1).   Achieve the PnP adjustment on the follower side.

 

2).   The message sending order, the format of the interface, the mode of pushing data packets into stacks, and the decoding function should obey the regulations of the IEEE standard.

 

Voraussetzungen

C/C++

socket programming

Visual Studio IDE

Kontakt

Email address: siwen.liu@tum.de, xiao.xu@tum.de

Betreuer:

Siwen Liu, Xiao Xu

GUI for Plug-and-Play Haptic Interaction system

Stichworte:
Teleoperation, GUI, Qt, JavaScript
Kurzbeschreibung:
Our project aims to build a GUI for a Plug-and-Play Haptic Interaction system according to the IEEE standard.

Beschreibung

Our project aims to build a GUI for a Plug-and-Play Haptic Interaction system according to the IEEE standard. For the system, the main achievements are:

 

1.     Plug and play on the Leader side: When the Leader device disconnects from the system, the Follower device will switch to the waiting state and remain in the initial position it had when it was activated in the system, until the next re-insertion of the Leader device.

2.     Automatic adjustment of device parameters according to the specific type of Leader device to guarantee the performance of human perception: First of all, when connecting, the Leader device will transmit its media and interface information (so-called metadata) to the Follower side, and at the same time it will inform the Follower device of the specific model type it is using. If the Leader device has a different precision from the Follower device, the Follower device will adjust its parameters according to the received information to adapt to the Leader and transmit its own metadata back to the Leader.

 

 

Voraussetzungen

The requirements of our project are as follows. For the GUI part:

1.   The GUI should be implemented in either Qt or JavaScript (Qt is the first choice) on both the Leader and Follower sides.

2.     For the Leader side, the GUI should provide these functions:

1). Choose the device on the Leader side.

2). Show whether the handshake is successful or not.

3). Show the device type used on the Follower side after the handshake.

4). Show when the Leader device is disconnected from the system.

3.    For the Follower side, the GUI should provide these functions:

1). Choose the device on the Follower side.

2). Show whether the handshake is successful or not.

3). Show the device type used on the Leader side, adjust the parameters on the Follower side, and then show the adjusted device type if the handshake is successful.

4). Show when the Leader device is disconnected from the system, and then show the initial position of the Follower device in the waiting state.

4.    For the adjustments to the message transmission process:

1).   Achieve the PnP adjustment on the follower side.

 

2).   The message sending order, the format of the interface, the mode of pushing data packets into stacks, and the decoding function should obey the regulations of the IEEE standard.

 

Kontakt

Email address: siwen.liu@tum.de, xiao.xu@tum.de

Betreuer:

Siwen Liu, Xiao Xu

Inverse Rendering in a Digital Twin for Augmented Reality

Stichworte:
Digital Twin, Illumination, HDR

Beschreibung

The task is to build an end-to-end pipeline for illumination estimation inside a digital twin.

Finally, an AR application can also be created.

Possible References

[1] https://arxiv.org/pdf/1905.02722.pdf

[2] https://arxiv.org/pdf/1906.07370.pdf

[3] https://arxiv.org/pdf/2011.10687.pdf

Voraussetzungen

  • Python (Pytorch)
  • Experience with Git

Kontakt

driton.salihu@tum.de

Betreuer:

Driton Salihu

Network Aware Shared Control

Stichworte:
Teleoperation, Learning from Demonstration

Beschreibung

In this thesis, we would like to make the best out of demonstrations of varying quality. We will test the developed approach with shared control.

Voraussetzungen

Requirements:

Experience in C/C++

ROS is a plus

High motivation to learn and conduct research

 

Betreuer:

Optimization of 3D Object Detection Procedures for Indoor Environments

Stichworte:
3D Object Detection, 3D Point Clouds, Digital Twin, Optimization

Beschreibung

3D object detection has been a major task for point cloud-based 3D reconstruction of indoor environments. Current research has focused on achieving low inference times for 3D object detection. While this is desirable, many use cases do not profit from it, especially those involving a pre-defined static Digital Twin for AR and robotics applications; this decreases the incentive to sacrifice accuracy for low inference time.

As such, this thesis will follow the approach of [1] (in this work, based only on point cloud data) to generate proposals for the layout and objects of a scene through, for example, [2]/[3], and to use some form of optimization algorithm (reinforcement learning, genetic algorithms) to converge to the correct solution.

Further, for more geometrically reasonable results, a relationship graph neural network, as in [4], would be applied in the pipeline.

References

[1] Hampali, Shreyas, et al. “Monte Carlo Scene Search for 3D Scene Understanding.” 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2021): 13799-13808. https://arxiv.org/abs/2103.07969

[2] Chen, Xiaoxue, Hao Zhao, Guyue Zhou, and Ya-Qin Zhang. “PQ-Transformer: Jointly Parsing 3D Objects and Layouts From Point Clouds.” IEEE Robotics and Automation Letters  7 (2022): 2519-2526. https://arxiv.org/abs/2109.05566

[3] Qi, C., Or Litany, Kaiming He and Leonidas J. Guibas. “Deep Hough Voting for 3D Object Detection in Point Clouds.” 2019 IEEE/CVF International Conference on Computer Vision (ICCV)  (2019): 9276-9285. https://arxiv.org/abs/1904.09664

[4] Avetisyan, Armen, Tatiana Khanova, Christopher Bongsoo Choy, Denver Dash, Angela Dai and Matthias Nießner. “SceneCAD: Predicting Object Alignments and Layouts in RGB-D Scans.” ArXiv  abs/2003.12622 (2020): n. pag. https://arxiv.org/abs/2003.12622

 

Voraussetzungen

  • Python (Pytorch)
  • Experience with Git
  • Knowledge in working with 3D Point Clouds (preferable)
  • Knowledge about optimization methods (preferable)

Kontakt

driton.salihu@tum.de

Betreuer:

Driton Salihu

Learning Temporal Knowledge Graphs with Neural Ordinary Differential Equations

Beschreibung

...

Kontakt

zhen.han@campus.lmu.de

Betreuer:

Eckehard Steinbach - Zhen Han (LMU)

Sim-to-Real Gap in Liquid Pouring

Stichworte:
sim-to-real

Beschreibung

We want to investigate what the simulation bottlenecks are in learning the pouring task and how we can tackle this problem. This project is mostly paper reading; the field of research is skill refinement and domain adaptation. In addition, we will try to implement one of the state-of-the-art methods for teaching by demonstration in order to adapt the simulated skill to the real-world scenario.

Voraussetzungen

Creativity

Motivation

Strong C++ background

Strong Python background

Kontakt

edwin.babaians@tum.de

Betreuer:

Edwin Babaians

"Pouring Liquids" dataset development

Stichworte:
Nvidia Flex, Unity3D, Nvidia PhysX 4.0
Kurzbeschreibung:
Using Unity3D and Nvidia Flex plugin, develop a learning environment and model different fluids for teaching pouring tasks to robots.

Beschreibung

The student will develop different liquid characteristics using Nvidia Flex and will add different containers and a particle collision checking system. In addition, a ground-truth system will be created for later use in robot teaching.

 

Reference:

https://developer.nvidia.com/flex

https://developer.nvidia.com/physx-sdk

Voraussetzungen

Strong Unity3D background

Familiar with the Nvidia PhysX and Nvidia Flex libraries.

 

Kontakt

edwin.babaians@tum.de

Betreuer:

Edwin Babaians

Analysis and evaluation of DynaSLAM for dynamic object detection

Beschreibung

Investigation of DynaSLAM in terms of real-time capabilities and dynamic object detection.

Betreuer:

Comparison of Driver Situation Awareness with an Eye Tracking based Decision Anticipation Model

Stichworte:
Situation Awareness, Autonomous Driving, Region of Interest Prediction, Eye Tracking

Beschreibung

This work can be done in German or English

The transmission of control to the human driver in autonomous driving requires the observation of the human driver. The vehicle has to guarantee that the human driver is aware of the current driving situation. One input source for observing the human driver is based on the driver's gaze.

The objective of this project is to compare two existing approaches for driver observation [1,2]. While [1] measures the driver's situation awareness (SA), [2] anticipates the driver's decisions. As part of a user study, [2] published a gaze dataset. An interesting cross-validation would be the comparison of the SA score generated by [1] and the predicted decision correctness of [2].

Tasks

  • Generate ROI predictions [3] from the dataset of [2]
  • Estimate the driver SA with the model of [1]
  • Compare [1] and [2]
  • (Optional) Extend driving experiments

References

[1] Markus Hofbauer, Christopher Kuhn, Lukas Puettner, Goran Petrovic, and Eckehard Steinbach. Measuring driver situation awareness using region-of-interest prediction and eye tracking. In 22nd IEEE International Symposium on Multimedia (ISM), Naples, Italy, Dec 2020.
[2] Pierluigi Vito Amadori, Tobias Fischer, Ruohan Wang, and Yiannis Demiris. Decision Anticipation for Driving Assistance Systems. June 2020.
[3] Markus Hofbauer, Christopher Kuhn, Jiaming Meng, Goran Petrovic, and Eckehard Steinbach. Multi-view region of interest prediction for autonomous driving using semisupervised labeling. In IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, Sep 2020.

Voraussetzungen

  • Experience with ROS and Python
  • Basic knowledge of Linux

Betreuer:

3D object model reconstruction from RGB-D scenes

Beschreibung

Robots should be able to discover their environments and learn new objects in order to become a part of daily human life. There are still challenges in detecting and recognizing objects in unstructured environments like households. For robotic grasping and manipulation, knowing the 3D models of objects is beneficial; hence, the robot needs to infer the 3D shape of an object upon observation. In this project, we will investigate methods that can infer or produce 3D models of novel objects by observing RGB-D scenes. We will analyze methods to reconstruct 3D information with different arrangements of an RGB-D camera.


Voraussetzungen

  • Basic knowledge of digital signal processing / computer vision
  • Experience with ROS, C++, Python.
  • Experience with Artificial Neural Network libraries or motivation to learn them
  • Motivation to yield a successful work

Kontakt

furkan.kaynar@tum.de

Betreuer:

Hasan Furkan Kaynar

Research on the implementation of an (automated) solution for the analysis of surface impurities on endoscope tubes

Beschreibung

...

Betreuer:

AI-Enhanced Tool for Desk Research – Smart Analytical Engine

Beschreibung

...

Betreuer:

Algorithm evaluation for robot grasping with compliant jaws

Stichworte:
python, ROS, robot grasping
Kurzbeschreibung:
Apply state-of-the-art contact model for robot grasp planning with a customized physical setup including a KUKA robot arm and a parallel-jaw gripper with compliant materials.

Beschreibung

Model-based grasp planning algorithms depend on friction analysis, since the friction between objects and gripper jaws highly affects grasp robustness. A state-of-the-art friction analysis algorithm for grasp planning was evaluated with plastic robot fingers and achieved promising results, but will it work if the grippers are fitted with compliant materials such as rubber and silicone, compared to more advanced contact models?

The task of this work is to create a new dataset and retrain an existing deep network by applying a state-of-the-art contact model for grasp planning.

 

 

Betreuer:

Adaptive LiDAR data update rate control based on motion estimation

Stichworte:
SLAM, Sensor Fusion, ROS

Beschreibung

...

Betreuer:

Perceptual-oriented objective quality assessment for time-delayed teleoperation

Beschreibung

Recent advances in haptic communication hold the promise of full immersion into remote real or virtual environments.

The quality of compressed haptic signals is crucial to fulfilling this promise. Traditionally, the quality of haptic signals is evaluated through a series of subjective experiments. So far, only very limited attention has been directed toward developing objective quality measures for haptic communication. In this work, we focus on the compression distortion and the delay compensation distortion that contaminate the force/velocity haptic signals generated by physical interaction with objects in the remote real or virtual environment.

 

Voraussetzungen

 

Kontakt

 

Betreuer:

UWB localization by Kalman filter and particle filter

Beschreibung

...

Betreuer:

Investigating the Potential of Machine Learning to Map Changes in Forest based on Earth Observation

Beschreibung

...

Betreuer:

Ingenieurpraxis

Recording of Robotic Grasping Failures

Beschreibung

The aim of this project is to collect data from robotic grasping experiments and create a large-scale labeled dataset. We will conduct experiments while attempting to grasp known or unknown objects autonomously. The complete pipeline includes:

- Estimating grasp poses via computer vision

- Robotic motion planning

- Executing the grasp physically

- Recording necessary data

- Organizing the recorded data into a well-structured dataset

 

Most of the data collection pipeline has already been developed; additions and modifications may be needed.
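
For illustration, a single grasp attempt could be stored as a structured record like the one below (a hypothetical schema; the fields used by the existing pipeline may differ):

    # Hypothetical record for one grasp attempt; the actual fields of the
    # existing pipeline may differ.
    import json
    import os

    grasp_record = {
        "attempt_id": 42,
        "object_id": "mug_03",
        "grasp_pose": {                      # 6-DoF grasp pose, robot base frame
            "position": [0.45, -0.10, 0.12],
            "orientation_quat": [0.0, 0.707, 0.0, 0.707],
        },
        "sensor_data": {
            "rgb_image": "attempt_42/rgb.png",
            "depth_image": "attempt_42/depth.png",
        },
        "label": "failure",                  # success / failure
        "failure_mode": "slip_during_lift",  # empty for successful grasps
    }

    os.makedirs("attempt_42", exist_ok=True)
    with open("attempt_42/meta.json", "w") as f:
        json.dump(grasp_record, f, indent=2)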

Voraussetzungen

Useful background:

- Digital signal processing

- Computer vision

- Dataset handling

 

Requirements:

- Experience with Python and ROS

- Motivation to achieve a good outcome

Kontakt

furkan.kaynar@tum.de

 

(Please provide your CV and transcript in your application)

 

Betreuer:

Hasan Furkan Kaynar

Studentische Hilfskräfte

Real-time Multi-sensor Processing Framework Based on ROS

Beschreibung

Multi-sensor data can provide rich environmental information for robots. In practical applications, it is necessary to ensure real-time and synchronous processing of the sensor data. In this work, the student will design a ROS-based sensor data acquisition and processing framework and deploy it on an existing robot platform. Specifically, the sensors involved in this project include an RGB-D camera, a millimeter-wave radar, a LiDAR, and an IMU. Clock deviations exist between the different sensors. The student needs to calibrate the clocks uniformly so that the timestamps of the data collected by the sensors are consistent, transmit the collected data to the robot platform in real time, and process them into the required formats, such as point clouds, RGB images, etc.
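
For instance, in ROS 1, approximate time synchronization of two of these sensor streams could look like the minimal sketch below (topic names are hypothetical placeholders):

    #!/usr/bin/env python
    # Minimal ROS 1 sketch: pair an RGB image and a LiDAR point cloud by
    # timestamp. Topic names are hypothetical placeholders.
    import rospy
    import message_filters
    from sensor_msgs.msg import Image, PointCloud2

    def callback(image_msg, cloud_msg):
        # Both messages arrive with timestamps within the allowed slop.
        rospy.loginfo("Synced pair: image %s / cloud %s",
                      image_msg.header.stamp, cloud_msg.header.stamp)

    rospy.init_node("multi_sensor_sync")
    image_sub = message_filters.Subscriber("/camera/color/image_raw", Image)
    cloud_sub = message_filters.Subscriber("/lidar/points", PointCloud2)

    # Queue size 10, maximum timestamp difference 50 ms.
    sync = message_filters.ApproximateTimeSynchronizer(
        [image_sub, cloud_sub], queue_size=10, slop=0.05)
    sync.registerCallback(callback)
    rospy.spin()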

Voraussetzungen

  • Strong familiarity with ROS, C++, and Python programming
  • Experience with hardware and sensors
  • Basic knowledge of robotics

Kontakt

mengchen.xiong@tum.de

dong.yang@tum.de

(Please attach your CV and transcript)

Betreuer:

Mengchen Xiong, Dong Yang

HiWi / Working Student for Blender tasks

Stichworte:
3D, blender, python
Kurzbeschreibung:
This is a working student position for a variety of tasks in the Blender environment: 3D modelling of characters/objects, character rigging, animation, interactive rendering, etc. Part of the job is to automate certain workflows or tasks in Blender using the Blender Python API.

Beschreibung

This is a working student position for a variety of tasks in the Blender environment:

  • 3D modelling of characters / objects,
  • character rigging,
  • animation,
  • interactive rendering, etc.
  • Part of the job is to automate certain workflows or tasks in Blender using the Blender Python API (see the sketch below).
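
As a small example of this kind of automation, the following sketch uses the Blender Python API to batch-render an animation. It would be run from within Blender (e.g. blender --background --python render.py); the output path and frame range are placeholders:

    # Minimal sketch: automate rendering via the Blender Python API (bpy).
    # Output path and frame range are hypothetical placeholders.
    import bpy

    scene = bpy.context.scene
    scene.frame_start = 1
    scene.frame_end = 50
    scene.render.filepath = "/tmp/render/frame_"
    scene.render.image_settings.file_format = "PNG"

    # Render all frames of the animation to the configured output path.
    bpy.ops.render.render(animation=True)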

 

Voraussetzungen

  • Strong interest in 3D Computer Graphics and Gaming.
  • Very strong familiarity with Blender
  • Comfortable programming in Python
  • Ideally: also familiarity with development environments on Linux and Windows.

Please send a description of your interest and experience regarding the above points together with your application.

Kontakt

https://www.ce.cit.tum.de/lmt/team/mitarbeiter/chaudhari-rahul/

Betreuer:

Rahul Chaudhari

Studentische Hilfskraft Praktikum Software Engineering

Stichworte:
Software Engineering, Unit Testing, TDD, C++

Beschreibung

We are looking for a student teaching assistant for our new Software Engineering Lab. In this course we explain basic principles of software engineering such as unit testing, test-driven development, and how to collaborate in teams [1].

You will act as a teaching assistant and supervise students during the lab sessions while they work on their practical homework. The homework tasks are generally C++ coding exercises in which the students contribute to a common codebase. This means you should have good experience with C++, unit testing, and git, as these will be an essential part of the homework.
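
To illustrate the kind of principle taught in the course, below is a minimal unit-test sketch (written in Python for brevity and consistency with the other examples on this page; the lab itself uses C++, where the same idea applies with a framework such as GoogleTest). The function under test is a hypothetical example:

    # Minimal unit-test sketch; the function under test is hypothetical.
    import unittest

    def clamp(value, low, high):
        """Restrict value to the closed interval [low, high]."""
        return max(low, min(value, high))

    class TestClamp(unittest.TestCase):
        def test_value_inside_range_is_unchanged(self):
            self.assertEqual(clamp(5, 0, 10), 5)

        def test_value_below_range_is_clamped_to_low(self):
            self.assertEqual(clamp(-3, 0, 10), 0)

        def test_value_above_range_is_clamped_to_high(self):
            self.assertEqual(clamp(99, 0, 10), 10)

    if __name__ == "__main__":
        unittest.main()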

References

[1] Winters, Titus, Tom Manshreck, and Hyrum Wright, eds. Software Engineering at Google: Lessons Learned from Programming Over Time. O'Reilly Media, Incorporated, 2020

Voraussetzungen

  • Very good knowledge in C++
  • Experience with unit testing
  • Good understanding of git and collaborative software development

Betreuer:

MATLAB tutor for Digital Signal Processing lecture in summer semester 2022

Beschreibung

Tasks:

  • Help students with the basics of MATLAB (e.g. matrix operations, filtering, image processing, runtime errors)
  • Correct some of the homework problems 
  • Understand the DSP coursework material

 

We offer:

  • Payment according to the working hours and academic qualification
  • The workload is approximately 6 hours per week from May 2022 to August 2022
  • Technische Universität München especially welcomes applications from female applicants

 

Application:

  • Please send your application with a CV and transcript per e-mail to basak.guelecyuez@tum.de 
  • Students who have taken the DSP course are preferred.

Betreuer:

Implementation of teleoperation systems using Raspberry Pi

Beschreibung

We already have a teleoperation framework running on Windows, in which two haptic devices are connected via the UDP protocol, one acting as the leader device and the other as the follower (a minimal UDP exchange is sketched after the task list below).

Your tasks are:

1. Port the framework to Linux.

2. Set up a ROS-based virtual teleoperation environment.

3. Port the framework to a Raspberry Pi.
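
The sketch below shows a minimal leader-to-follower UDP exchange; the port number and payload format are hypothetical, not the actual framework's protocol:

    # Minimal sketch of a leader->follower UDP exchange; port number and
    # payload format are hypothetical placeholders.
    import socket
    import struct

    FOLLOWER_ADDR = ("127.0.0.1", 9000)

    # Follower side: bind first so the datagram is not lost.
    follower = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    follower.bind(FOLLOWER_ADDR)

    # Leader side: send a 3-DoF position sample as three doubles.
    leader = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    leader.sendto(struct.pack("!ddd", 0.1, 0.2, 0.3), FOLLOWER_ADDR)

    # Follower side: receive and unpack the sample.
    x, y, z = struct.unpack("!ddd", follower.recv(1024))
    print(f"follower received position: ({x}, {y}, {z})")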

Voraussetzungen

Linux, socket programming (e.g. UDP protocol), C and C++, ROS

Betreuer:


Student Assistant for distributed haptic training system

Stichworte:
server-client, UDP, GUI programming

Beschreibung

Your tasks:

1. Build a server-client telehaptic training system based on the current code.

2. Develop a GUI for the client side.

Required skills:

- C++

- Knowledge of socket programming, e.g. UDP

- GUI programming, e.g. Qt

- Working environment: Windows + Visual Studio



This work is closely connected to the project Teleoperation over 5G Networks and the IEEE standardization activity P1918.1.1.

https://www.ei.tum.de/lmt/forschung/ausgewaehlte-projekte/dfg-teleoperation-over-5g/

https://www.ei.tum.de/lmt/forschung/ausgewaehlte-projekte/ieee-p191811-haptic-codecs

Betreuer:

Student Assistant for reference software of time-delayed teleoperation with control schemes and haptic codec

Kurzbeschreibung:
In this work, you will extend the current teleoperation reference software to support different control schemes and haptic data reduction approaches.

Beschreibung

Your tasks:

1. Refactor and optimize the current code.

2. Implement new algorithms, e.g. control schemes and haptic data reduction approaches (one common approach is sketched below).
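
A widely used haptic data reduction approach in this context is perceptual deadband coding: a new haptic sample is transmitted only if it deviates from the last transmitted sample by more than a perceptual threshold. A minimal sketch follows (the threshold value is a placeholder; the actual codecs in the reference software may differ):

    # Minimal sketch of perceptual deadband coding: transmit a sample only
    # if it deviates from the last transmitted one by more than a relative
    # threshold (Weber fraction). The threshold value is a placeholder.
    def deadband_reduce(samples, deadband=0.1):
        transmitted = []
        last_sent = None
        for s in samples:
            if last_sent is None or abs(s - last_sent) > deadband * abs(last_sent):
                transmitted.append(s)
                last_sent = s
        return transmitted

    signal = [1.00, 1.02, 1.05, 1.20, 1.21, 0.90, 0.89]
    print(deadband_reduce(signal))  # [1.0, 1.2, 0.9] -> fewer packets sent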

 

This work is closely connected to the project Teleoperation over 5G Networks and the IEEE standardization activity P1918.1.1.

https://www.ei.tum.de/lmt/forschung/ausgewaehlte-projekte/dfg-teleoperation-over-5g/

https://www.ei.tum.de/lmt/forschung/ausgewaehlte-projekte/ieee-p191811-haptic-codecs

Voraussetzungen

 

C++, knowledge of control engineering and communication protocols

Betreuer: