Student Theses and Projects at the Chair of Media Technology

As part of our current research projects, we offer exciting topics for student projects (Ingenieurpraxis, Forschungspraxis, working student positions, IDPs) and final theses (Bachelor's and Master's theses).

Open Topics

EIT-Based Hand Gesture Recognition

Keywords:
eit, dsp, cv, deep-learning, machine-learning, hand, hand-object, hoi

Description

Electrical Impedance Tomography (EIT) is an imaging technique that estimates the impedance of human body tissue by passing an alternating current through pairs of electrodes and measuring the resulting voltages and currents at other electrode pairs.

The inverse problem aims to reconstruct a cross-sectional tomographic image of the body part from these measurements.
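
For intuition, the standard linearized approach to difference imaging can be sketched in a few lines. The sensitivity (Jacobian) matrix J below is a random placeholder; in practice it comes from a forward model of the electrode setup:

```python
# Minimal sketch of one-step Tikhonov-regularized difference imaging,
# assuming a precomputed sensitivity matrix J that maps conductivity
# changes to boundary voltage changes. J is random here for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_pixels = 192, 576                 # e.g. 16 electrodes, coarse mesh
J = rng.standard_normal((n_meas, n_pixels))

sigma_true = 0.1 * rng.standard_normal(n_pixels)
v_ref = rng.standard_normal(n_meas)         # baseline voltage frame
v_meas = v_ref + J @ sigma_true             # frame with conductivity change

# Solve min ||J s - dv||^2 + lam ||s||^2 for the conductivity change s.
dv = v_meas - v_ref
lam = 1e-2
sigma = np.linalg.solve(J.T @ J + lam * np.eye(n_pixels), J.T @ dv)
print(sigma.shape)  # per-pixel change, reshaped into a tomographic image later
```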

Wearable EIT devices have been applied successfully to hand gesture classification, yielding high-accuracy machine learning models [1][2].

The goal of the project is to research and test calibration approaches that yield sufficient and reproducible EIT measurements (similar to [3]), and then to address hand gesture classification based on the measured impedance values of a human forearm [1][2].
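
As a starting point, the classification side can be prototyped with a standard classifier on per-frame impedance vectors, roughly along the lines of [1] (which used an SVM). Everything below, including the data shapes, is an illustrative assumption:

```python
# A hedged sketch of gesture classification from EIT frames; feature
# vectors are the raw impedance measurements per frame. Synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.standard_normal((400, 192))         # 400 frames x 192 measurements
y = rng.integers(0, 8, size=400)            # 8 hypothetical hand gestures

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))   # ~chance level on random data
```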


We provide the wearable EIT band that is being developed in collaboration with Enari GmbH and the necessary computer vision building blocks for dataset collection.

The attached figure shows the image reconstruction pipeline.

[1] Zhang, Y., & Harrison, C. (2015). Tomo: Wearable, Low-Cost Electrical Impedance Tomography for Hand Gesture Recognition. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (pp. 167–173). https://doi.org/10.1145/2807442.2807480

[2] D. Jiang, Y. Wu and A. Demosthenous, "Hand Gesture Recognition Using Three-Dimensional Electrical Impedance Tomography," in IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 67, no. 9, pp. 1554-1558, Sept. 2020, doi: 10.1109/TCSII.2020.3006430.


[3] Zhang, Y., Xiao, R., & Harrison, C. (2016). Advancing Hand Gesture Recognition with High Resolution Electrical Impedance Tomography. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (pp. 843–850). Association for Computing Machinery.

Prerequisites

  • Knowledge of Python
  • Knowledge of Digital Signal Processing or Deep Learning

Contact

marsil.zakour@tum.de and stefan.haegele@tum.de

Supervisors:

Marsil Zakour, Stefan Hägele

Securing Audio with AI and Blockchain: A Study of Digital Watermarking Techniques

Description

This thesis project will examine the integration of artificial intelligence (AI) and blockchain technology for digital watermarking of audio. Digital watermarking is a technique used to embed hidden information, such as ownership or copyright information, into digital audio files. The goal of this project is to develop new AI-based techniques for digital watermarking that can be secured and protected using blockchain technology.
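
For orientation, a classical (non-AI) baseline is spread-spectrum watermarking: a pseudorandom carrier keyed by a secret seed is added at low amplitude and detected by correlation. An AI-based embedder/detector as targeted in this thesis would replace this hand-designed scheme; all parameters below are illustrative:

```python
# Minimal spread-spectrum audio watermark sketch: embed a keyed carrier,
# detect by correlating the signal with the same carrier.
import numpy as np

def embed(audio, seed, strength=0.02):
    rng = np.random.default_rng(seed)
    carrier = rng.choice([-1.0, 1.0], size=audio.shape)
    return audio + strength * carrier

def detect(audio, seed, threshold=0.01):
    rng = np.random.default_rng(seed)
    carrier = rng.choice([-1.0, 1.0], size=audio.shape)
    score = float(np.mean(audio * carrier))   # correlation with the carrier
    return score > threshold, score

signal = np.random.default_rng(2).standard_normal(48_000)  # 1 s at 48 kHz
marked = embed(signal, seed=42)
print(detect(marked, seed=42))   # (True, ~0.02)
print(detect(signal, seed=42))   # (False, ~0.00)
```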

Prerequisites:

  • Strong background in signal processing and digital audio
  • Familiarity with machine learning and AI techniques
  • Basic understanding of blockchain technology and its applications
  • Experience with programming languages such as Python and JavaScript
  • Strong analytical and problem-solving skills
  • Strong written and verbal communication skills

This project is an exciting opportunity to work at the intersection of AI and blockchain, where you will have the chance to apply your skills and knowledge to the development of new technologies that could have a significant impact on the audio industry. You will be working with an innovative startup in the heart of Silicon Valley, where you will have the opportunity to contribute to the development of cutting-edge technology. If you are passionate about AI, blockchain, and signal processing and are looking for a challenging and rewarding research experience, this thesis project is for you!

Please send your CV and Transcript of Records, and briefly explain why you are interested in this topic.

Contact

tamay@sureel.io

Supervisors:

Eckehard Steinbach, Dr.-Ing. Tamay Aykut (Sureel)

Securing Images/Videos with AI and Blockchain: A Study of Digital Watermarking Techniques

Description

This thesis project will examine the integration of artificial intelligence (AI) and blockchain technology for digital watermarking of images and videos. Digital watermarking is a technique used to embed hidden information, such as ownership or copyright information, into image or video files. The goal of this project is to develop new AI-based techniques for digital watermarking that can be secured and protected using blockchain technology.
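
As a point of comparison for the AI-based methods to be developed, a classical baseline embeds a bit by nudging a mid-frequency DCT coefficient and decodes it from the coefficient's sign. The coefficient position and strength below are illustrative assumptions:

```python
# Classical (non-AI) image watermark sketch: one bit per image via a
# mid-frequency 2D-DCT coefficient; decoding reads the coefficient sign.
import numpy as np
from scipy.fft import dctn, idctn

COEFF = (12, 9)          # hypothetical mid-frequency position

def embed_bit(img, bit, strength=40.0):
    c = dctn(img.astype(float), norm="ortho")
    c[COEFF] = strength if bit else -strength
    return idctn(c, norm="ortho")

def decode_bit(img):
    return dctn(img.astype(float), norm="ortho")[COEFF] > 0

img = np.random.default_rng(3).uniform(0, 255, (256, 256))
print(decode_bit(embed_bit(img, 1)), decode_bit(embed_bit(img, 0)))  # True False
```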

Prerequisites:

  • Strong background in signal processing and digital images
  • Familiarity with machine learning and AI techniques
  • Basic understanding of blockchain technology and its applications
  • Experience with programming languages such as Python and JavaScript
  • Strong analytical and problem-solving skills
  • Strong written and verbal communication skills

This project is an exciting opportunity to work at the intersection of AI and blockchain, where you will have the chance to apply your skills and knowledge to the development of new technologies that could have a significant impact on the media industry. You will be working with an innovative startup in the heart of Silicon Valley, where you will have the opportunity to contribute to the development of cutting-edge technology. If you are passionate about AI, blockchain, and signal processing and are looking for a challenging and rewarding research experience, this thesis project is for you!

Please send your CV and Transcript of Records, and briefly explain why you are interested in this topic.

Contact

tamay@sureel.io

Supervisors:

Eckehard Steinbach, Dr.-Ing. Tamay Aykut (Sureel)

Unlocking the Potential of AI and Blockchain: Generative Multimedia

Description

We are excited to offer a unique and innovative thesis project that combines cutting-edge technology with digital multimedia. As an "AI and Blockchain Generative Media Researcher," you will have the opportunity to explore the potential of using AI algorithms to generate one-of-a-kind pieces of media content and using blockchain technology to protect the rights of the original creators via smart licensing mechanisms.

This project is an external thesis with a startup in San Francisco, which will give you the chance to work with industry experts and gain valuable experience in a startup environment. This is not just about writing a thesis; it is about making a real-world impact on the media technology industry. You will have the chance to conduct research, explore new possibilities, and create something truly unique.

Prerequisites:

  • Strong background in signal processing and digital images
  • Familiarity with machine learning and AI techniques
  • Basic understanding of blockchain technology and its applications
  • Experience with programming languages such as Python and JavaScript
  • Strong analytical and problem-solving skills
  • Strong written and verbal communication skills

Don't miss this opportunity to be part of a revolutionary project that combines your passion for computer science and art. Apply now and take the first step in unlocking the potential of AI and blockchain technology in generative art, with the added bonus of gaining valuable experience working with a startup in San Francisco.

Please send your CV and Transcript of Records, and briefly explain why you are interested in this topic.

Contact

tamay@sureel.io

Supervisors:

Eckehard Steinbach, Dr.-Ing. Tamay Aykut (Sureel)

Decentralized DRM for Multimedia: Blockchain-Powered Encryption and Encoding

Description

We are excited to offer a cutting-edge thesis project that explores the potential of using blockchain technology to decentralize Digital Rights Management (DRM) in the music and video streaming industry. As a "Web3 DRM Researcher," you will have the opportunity to investigate the use of encryption and encoding techniques, powered by blockchain technology, to secure and protect digital content in a decentralized way.

This project will involve research on the current state of DRM technology used by companies like Spotify and Netflix, and the challenges they face in ensuring the security and protection of digital content. You will then explore the potential of blockchain technology to address these challenges, and investigate the implementation of encryption and encoding techniques to secure and protect digital content in a decentralized manner.
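
One plausible architecture, sketched below under our own assumptions (the posting does not prescribe one), is envelope encryption: the media itself is encrypted with a symmetric cipher such as AES-GCM, and only the small content key is handed to the blockchain-based licensing layer:

```python
# Minimal sketch of the encryption side of a decentralized DRM flow.
# The on-chain license layer is out of scope here; the `cryptography`
# package provides the AESGCM primitive used below.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

content = b"media segment bytes ..."
key = AESGCM.generate_key(bit_length=256)   # this key goes to the license layer
nonce = os.urandom(12)
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, content, b"asset-id-0001")
assert aesgcm.decrypt(nonce, ciphertext, b"asset-id-0001") == content
```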

Prerequisites:

  • Strong background in signal processing and audio/video encryption/encoding
  • Familiarity with machine learning and AI techniques
  • Basic understanding of blockchain technology and its applications
  • Experience with programming languages such as Python and JavaScript
  • Strong analytical and problem-solving skills
  • Strong written and verbal communication skills

This is an exciting opportunity for a student to work on a cutting-edge project that has the potential to make a real-world impact on the music and video streaming industry. Apply now and take the first step in decentralizing web3 DRM using blockchain technology.

Please send your CV and Transcript of Records, and briefly explain why you are interested in this topic.

Contact

tamay@sureel.io

Supervisors:

Eckehard Steinbach, Dr.-Ing. Tamay Aykut (Sureel)

Automatic Hand-Eye-Calibration (Detecting Camera Pose)

Description

For vision-based robotic task planning, it is crucial to know the pose of the camera in order to relate image points to world coordinates for the robotic task. Existing calibration techniques typically require manual work. In this project, we will work on automatic hand-eye calibration to make the calibration process easier.
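
The classical, manual baseline we want to automate is directly available in OpenCV: given synchronized gripper poses (from robot kinematics) and calibration-target poses (from the camera), cv2.calibrateHandEye recovers the fixed camera-to-gripper transform. The sketch below verifies this on consistent synthetic poses; an automatic method would replace the target detection step:

```python
# Synthetic hand-eye calibration check with OpenCV's calibrateHandEye.
import cv2
import numpy as np

rng = np.random.default_rng(0)

def rand_pose():
    R, _ = cv2.Rodrigues(rng.uniform(-1, 1, 3))
    T = np.eye(4); T[:3, :3] = R; T[:3, 3] = rng.uniform(-0.5, 0.5, 3)
    return T

X = rand_pose()                 # unknown gripper<-camera transform (ground truth)
T_bt = rand_pose()              # fixed base<-target transform
Rg, tg, Rc, tc = [], [], [], []
for _ in range(10):
    G = rand_pose()             # base<-gripper pose from robot kinematics
    C = np.linalg.inv(G @ X) @ T_bt   # camera<-target pose consistent with G, X
    Rg.append(G[:3, :3]); tg.append(G[:3, 3])
    Rc.append(C[:3, :3]); tc.append(C[:3, 3])

R_est, t_est = cv2.calibrateHandEye(Rg, tg, Rc, tc,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print(np.allclose(R_est, X[:3, :3], atol=1e-4),
      np.allclose(t_est.ravel(), X[:3, 3], atol=1e-4))   # True True
```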

Prerequisites

  • Experience with Python
  • Basic knowledge of digital signal processing / image processing


Contact

furkan.kaynar@tum.de

Please provide your CV and Transcript of Records with your application.

Supervisor:

Hasan Furkan Kaynar

Camera-Lidar Dataset for Localization Tasks

Keywords:
Camera, Lidar, Dataset Creation, SLAM, Machine Learning

Description

In this project, we will create a camera-lidar dataset that can be used to improve visual localization. The idea is to extend existing localization datasets with camera-lidar correspondence point clouds [1].

For the generation of the lidar submaps, we will use the procedure proposed in [2]. For the camera submaps, we will use visual SLAM and extract the 3D reconstruction [3]. The lidar and camera submaps will then be linked based on odometry or timestamp information provided with the localization dataset.
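
The linking step itself can be as simple as nearest-neighbour matching on timestamps; a minimal sketch with made-up timestamps:

```python
# Pair each camera submap with the lidar submap closest in time,
# assuming per-submap timestamps as provided by datasets like [1].
import numpy as np

cam_ts = np.array([10.00, 10.52, 11.01, 11.49])     # camera submap times (s)
lidar_ts = np.array([9.98, 10.49, 11.03, 11.55])    # lidar submap times (s)

idx = np.abs(lidar_ts[None, :] - cam_ts[:, None]).argmin(axis=1)
pairs = list(zip(range(len(cam_ts)), idx))
print(pairs)   # [(0, 0), (1, 1), (2, 2), (3, 3)]
```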

References

[1] Maddern, Will, et al. "1 year, 1000 km: The Oxford RobotCar dataset." The International Journal of Robotics Research 36.1 (2017): 3-15.

[2] Uy, Mikaela Angelina, and Gim Hee Lee. "Pointnetvlad: Deep point cloud based retrieval for large-scale place recognition." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.

[3] Mur-Artal, Raul, Jose Maria Martinez Montiel, and Juan D. Tardos. "ORB-SLAM: a versatile and accurate monocular SLAM system." IEEE transactions on robotics 31.5 (2015): 1147-1163.

Prerequisites

  • Python and Git
  • C++ basics
  • Interest in SLAM and Computer Vision 

Contact

Please send your CV and Transcript of Records to:

adam.misik@tum.de

Supervisor:

Adam Misik

Multimodal Learning for Localization Tasks

Keywords:
Camera, Lidar, Point Clouds, Deep Learning

Description

The field of multimodal learning has gained attention given the multisensor configurations of robotic systems and the advantages of sensor fusion [1]. In this work, we focus on combining camera and lidar data. There are several existing approaches that combine both sensors for object detection, pedestrian classification, or lane segmentation [1,2,3].

Our goal is to combine RGB images and lidar point clouds for localization tasks. The idea is to use a global lidar point cloud as a map and perform localization based on a single RGB image. The localization can then be solved via a common embedding space, as presented in [4,5].
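
A minimal sketch of such a shared embedding objective, in the spirit of [4,5]: image and point-cloud encoders (placeholders here) map into a common space and are trained with a symmetric InfoNCE loss over matching pairs. Dimensions and temperature are illustrative assumptions:

```python
# Symmetric InfoNCE loss for cross-modal (image, point cloud) embeddings.
import torch
import torch.nn.functional as F

def info_nce(img_emb, pcd_emb, tau=0.07):
    img_emb = F.normalize(img_emb, dim=1)
    pcd_emb = F.normalize(pcd_emb, dim=1)
    logits = img_emb @ pcd_emb.t() / tau          # B x B similarity matrix
    targets = torch.arange(img_emb.size(0))       # i-th image matches i-th cloud
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = info_nce(torch.randn(8, 256, requires_grad=True),
                torch.randn(8, 256, requires_grad=True))
loss.backward()  # would update both encoders in a real training loop
```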

References

[1] Melotti, Gledson, Cristiano Premebida, and Nuno Gonçalves. "Multimodal deep-learning for object recognition combining camera and LIDAR data." 2020 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC). IEEE, 2020.

[2] Zhang, Xinyu, et al. "Channel attention in LiDAR-camera fusion for lane line segmentation." Pattern Recognition 118 (2021): 108020.

[3] Asvadi, Alireza, et al. "Multimodal vehicle detection: fusing 3D-LIDAR and color camera data." Pattern Recognition Letters 115 (2018): 20-29.

[4] Cattaneo, D., et al. “Global Visual Localization in LiDAR-Maps through Shared 2D-3D Embedding Space.” 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 4365–71. IEEE Xplore, https://doi.org/10.1109/ICRA40945.2020.9196859.

[5] Jiang, Peng, and Srikanth Saripalli. "Contrastive Learning of Features between Images and LiDAR." 2022 IEEE 18th International Conference on Automation Science and Engineering (CASE). IEEE, 2022.


Prerequisites

  • Python and Git
  • Experience with a deep learning framework (PyTorch, TensorFlow)
  • Interest in Computer Vision and Machine Learning

Contact

Please send your CV and Transcript of Records to:

adam.misik@tum.de

Supervisor:

Adam Misik

Radar based indoor positioning and tracking

Keywords:
radar, indoor positioning, tracking
Short description:
Passive positioning

Description

The main task is to create a GUI that provides online positioning and tracking.

Prerequisites

Python or MATLAB

Contact

fabian.seguel@tum.de

Office 2940

Supervisor:

Fabian Esteban Seguel Gonzalez

A perceptual-based rate scalable haptic coding scheme

Description

The goal is to develop an offline haptic coding scheme based on previous studies.

More details coming soon.

Prerequisites

MATLAB or Python

Signal processing background

Supervisor:

Selective Sensor Fusion Strategies for Depth Estimation in Fog Environment

Keywords:
Sensor Fusion, Depth Estimation, Fog Environment

Description

Deep learning-based depth estimation has been studied extensively for perceiving and understanding the surrounding environment. Due to the physical limitations of individual sensors and the sensitivity of their measurements to scene characteristics and environmental conditions, depth estimation from a single sensor type is insufficient in many applications. To tackle this issue, the fusion of multiple sensor modalities has been studied as a promising solution, especially in foggy environments.

In this work, the student will investigate selective sensor fusion strategies (camera, LiDAR, and radar) under different fog densities using deep learning-based methods.
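
One possible form of "selective" fusion, sketched under our own assumptions, is a learned per-pixel gate that weights the camera, LiDAR, and radar branches, so the network can down-weight, e.g., the camera features as fog density grows:

```python
# Gated fusion of per-modality feature maps; encoders are placeholders.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, ch):
        super().__init__()
        # Predict one softmax weight per modality at every pixel.
        self.gate = nn.Sequential(nn.Conv2d(3 * ch, 3, 1), nn.Softmax(dim=1))

    def forward(self, cam, lidar, radar):            # each: B x ch x H x W
        w = self.gate(torch.cat([cam, lidar, radar], dim=1))  # B x 3 x H x W
        return w[:, 0:1] * cam + w[:, 1:2] * lidar + w[:, 2:3] * radar

fused = GatedFusion(32)(torch.randn(2, 32, 64, 64),
                        torch.randn(2, 32, 64, 64),
                        torch.randn(2, 32, 64, 64))
print(fused.shape)   # torch.Size([2, 32, 64, 64])
```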

Prerequisites

  • High motivation to learn and conduct research
  • Good programming skills in Python, PyTorch, and Linux
  • Basic experience with deep learning and neural networks

Contact

mengchen.xiong@tum.de

(Please attach your CV and transcript to your application)

Supervisor:

Hand-Object Interaction Action Segmentation

Keywords:
Deep Learning, Computer Vision, Video Understanding

Description

In this work, you will investigate the action segmentation problem in an unsupervised setting. You will become familiar with object detection networks (e.g., YOLO) and with segmenting videos when no labels are available. This work focuses on current computer vision and deep learning techniques, with programming in Python. Overall, it will broaden your deep learning knowledge, which is particularly helpful for later work in the area of deep learning and computer vision.
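
As a simple unsupervised baseline, per-frame features (e.g., pooled detections or CNN embeddings) can be clustered and consecutive frames with the same cluster id merged into segments. The features below are random placeholders:

```python
# Unsupervised action segmentation baseline: k-means on frame features,
# then merge consecutive equal labels into (start, end, cluster) segments.
import numpy as np
from sklearn.cluster import KMeans

feats = np.random.default_rng(4).standard_normal((300, 128))  # 300 frames
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(feats)

segments, start = [], 0
for t in range(1, len(labels) + 1):
    if t == len(labels) or labels[t] != labels[start]:
        segments.append((start, t - 1, int(labels[start])))
        start = t
print(segments[:3])   # [(start_frame, end_frame, cluster_id), ...]
```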

References:

https://arxiv.org/pdf/1506.02640.pdf

https://arxiv.org/pdf/2103.11264.pdf

Prerequisites

  • Deep learning knowledge
  • Python

Contact

constantin.patsch@tum.de

Supervision is possible in German and English.

Supervisor:

Constantin Patsch

Uncertainty Quantification for Deep Learning-based Point Cloud Registration

Keywords:
Uncertainty Quantification, Point Cloud Registration, Bayesian Inference, Deep Learning

Description

The problem of registering point clouds can be reduced to estimating a Euclidean transformation between two sets of 3D points [1]. Once the transformation is estimated, it can be used to register two point clouds in a common coordinate system.

Applications of point cloud registration include 3D reconstruction, localization, or change detection. However, these applications rely on a high similarity between point clouds and do not account for disturbances in the form of noise, occlusions, or outliers. Such defects degrade the quality of the point cloud and thus the accuracy of the registration-dependent application. One approach to deal with these effects is to quantify the registration uncertainty. The general idea is to use uncertainty as a guide for point cloud registration quality. If the uncertainty is too high, a new registration iteration or re-scanning is needed.

In this project, we investigate uncertainty quantification for current learning-based approaches to point cloud registration [1, 2, 3]. First, several methods for uncertainty quantification are selected [4]. Of particular interest are approaches based on Bayesian inference. The approaches are then modified to fit current point cloud registration frameworks and evaluated against benchmark datasets such as ModelNet or ShapeNet. In the evaluation, different types of scan perturbations need to be tested.
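
One of the Bayesian-flavoured options surveyed in [4] is Monte-Carlo dropout: dropout stays active at test time, and the spread of the predicted transform over repeated passes serves as the uncertainty estimate. The registration head below is a placeholder MLP, not one of the frameworks from [1-3]:

```python
# MC-dropout sketch for registration uncertainty.
import torch
import torch.nn as nn

head = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Dropout(0.2),
                     nn.Linear(256, 6))   # 6-DoF pose (axis-angle + translation)
head.train()                              # keep dropout stochastic at test time

feat = torch.randn(1, 512)                # fused feature of two point clouds
with torch.no_grad():
    samples = torch.stack([head(feat) for _ in range(50)])  # 50 x 1 x 6

mean_pose = samples.mean(dim=0)
uncertainty = samples.std(dim=0)          # high std -> re-register or re-scan
print(mean_pose.shape, uncertainty.shape)
```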

References

[1] Huang, Xiaoshui, et al. A Comprehensive Survey on Point Cloud Registration. arXiv:2103.02690, arXiv, 5 Mar. 2021. arXiv.org, http://arxiv.org/abs/2103.02690.

[2] Yuan, Wentao, et al. DeepGMR: Learning Latent Gaussian Mixture Models for Registration. arXiv:2008.09088, arXiv, 20 Aug. 2020. arXiv.org, http://arxiv.org/abs/2008.09088.

[3] Huang, Shengyu, et al. “PREDATOR: Registration of 3D Point Clouds with Low Overlap.” 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2021, pp. 4265–74. DOI.org (Crossref), https://doi.org/10.1109/CVPR46437.2021.00425.

[4] Abdar, Moloud, et al. “A Review of Uncertainty Quantification in Deep Learning: Techniques, Applications and Challenges.” Information Fusion, vol. 76, Dec. 2021, pp. 243–97. ScienceDirect, https://doi.org/10.1016/j.inffus.2021.05.008.


Prerequisites

  • Python and Git
  • Experience with a deep learning framework (PyTorch, TensorFlow)
  • Interest in Computer Vision and Machine Learning

Contact

Please send your CV and Transcript of Records to:

adam.misik@tum.de


Supervisor:

Adam Misik

Implementing the Digital Twin of a Factory

Keywords:
Digital Twin, Sensors, Computer Vision, Robotics

Description

A Digital Twin is a virtual representation of an asset, to which it is connected bi-directionally: changes in the real asset are reflected in the digital asset and vice versa.

In this project, we will create a prototype for the Digital Twin of a factory in Nvidia Omniverse. We will create a visual representation and update it using sensor data and artificial intelligence.

Prerequisites

Required:

  • Python knowledge

Recommended (not all of them):

  • Experience in ROS
  • C++ knowledge
  • Robotics knowledge

Contact

diego.prado@tum.de

Supervisor:

Diego Fernandez Prado

Robotic Imitation Learning for Industrial Applications

Keywords:
robotics, machine learning, computer vision, image processing, haptics, ROS

Description

Imitation learning helps robots learn skills robustly and much faster than reinforcement learning alone. In this project, we will use human demonstrations to teach a robot manipulator to solve different industrial tasks.

To achieve our goal, we will use different sensors (cameras, force/torque, etc.) and employ state-of-the-art machine learning and deep learning techniques together with image processing.
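
The simplest instance of this idea is behavioural cloning: regress the demonstrated actions from the observations. The policy network and the (observation, action) arrays below are placeholders:

```python
# Behavioural cloning sketch: supervised regression onto demo actions.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 7))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

obs = torch.randn(256, 64)      # demo observations (images would be encoded)
act = torch.randn(256, 7)       # demonstrated joint/Cartesian commands

for _ in range(10):             # a few gradient steps for illustration
    loss = nn.functional.mse_loss(policy(obs), act)
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```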

Prerequisites

Required:

  • Experience in Python
  • Some machine learning / deep learning / image processing experience

Also beneficial (but not a must):

  • ROS experience
  • Reinforcement learning experience

Contact

diego.prado@tum.de

Supervisor:

Diego Fernandez Prado

Hand-pose based robotic grasp demonstration via mobile devices

Description

Although robotics has been an area of intensive research for decades, autonomous robotic grasping and manipulation remain challenging under real-life conditions. Autonomous algorithms fail more often in unstructured environments such as households, which limits the practical use of robots in daily human life. In unstructured environments perception gains importance, and novel, unseen cases often cause autonomous algorithms to fail. In such cases, human correction or demonstration is needed to increase task performance or to teach new abilities to robots. To this end, we will create a user interface on a mobile device that is intuitive to operate via hand poses. At the same time, the interface should provide the data necessary to efficiently assist a robot in a daily home environment. The main application will be teleassistance for robotic grasping.

Prerequisites

  • Basic knowledge of image processing / computer vision. 
  • Basic coding experience, especially with C#.
  • Experience with Unity game engine.
  • Basic experience with ROS.
  • Motivation to produce successful work.


Contact

furkan.kaynar@tum.de

(Please provide your CV and transcript with your application.)

Supervisor:

Hasan Furkan Kaynar

Action Segment Prediction

Keywords:
Deep Learning, Transformer, CNN, Video Understanding

Description

This work focuses on video understanding, with particular emphasis on human action segmentation. It will involve self-attention models (such as Transformers), CNNs, and other deep learning architectures.

You will evaluate state-of-the-art approaches on public action segmentation datasets with respect to an action segment prediction problem setting. Furthermore, you can incorporate your own ideas into the final model pipeline; additional ideas from your side are welcome.

Related Work:

  • https://arxiv.org/pdf/2110.08568v1.pdf
  • https://arxiv.org/pdf/2106.02036.pdf

Prerequisites

  • Python and PyTorch
  • Basic knowledge of deep learning

Contact

constantin.patsch@tum.de

Supervisor:

Constantin Patsch

A Scene Graph based Refinement for 3D Scene Point Clouds Completion

Keywords:
Scene Completion, Point Clouds, Scene Graph, Object/Relationship detection, Deep Learning

Description

In this work, we want to investigate how scene graphs can help improve scene and point cloud completion. The scene graph is generated by detecting objects, attributes, and relationships in 2D RGB images. The first stage of this work is to exploit a state-of-the-art scene reconstruction framework to construct the scene point clouds. In the second stage, the generated scene graphs are used to fine-tune and improve the constructed scene point clouds.
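
For concreteness, a minimal sketch of the scene-graph representation assumed here: nodes carry object labels and attributes, edges carry relationships, as predicted by a 2D scene-graph generation network:

```python
# Scene graph as typed nodes plus (subject, predicate, object) triples.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    attributes: list = field(default_factory=list)

graph = {
    "nodes": {0: Node("chair", ["wooden"]), 1: Node("table", ["round"])},
    "edges": [(0, "next to", 1)],          # (subject, predicate, object)
}
print(graph["edges"])
```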

Prerequisites

  • High motivation to learn and conduct research
  • Good programming skills in Python and PyTorch
  • Basic experience with computer vision

Contact

dong.yang@tum.de

(Please attach your CV and transcript)

Supervisors:

Dong Yang, Xiao Xu

Implementation of robotic motion planning

Description

The motion planner of a robotic arm must plan the required motion while avoiding collisions and respecting the joint limits of the robot. In this project, we will focus on motion planning for the Panda robot arm using several planners. We will test the OpenRAVE motion planner and compare it with the MoveIt motion planner.

We will also implement and test Cartesian path planning using methods such as the Descartes path planner.
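
For reference, a hedged sketch of the MoveIt side using its Python interface; it only runs inside a configured ROS environment, and the group name "panda_arm" and the named target "ready" follow the standard panda_moveit_config:

```python
# Plan and execute a motion to a named target with moveit_commander.
import sys
import moveit_commander
import rospy

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("panda_motion_demo")
group = moveit_commander.MoveGroupCommander("panda_arm")

group.set_pose_reference_frame("panda_link0")
group.set_named_target("ready")          # named joint state from the SRDF
plan_success = group.go(wait=True)       # plan + execute, collision-checked
group.stop()
```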

At the end of this project, the student will have learned about the implementation and usage of different motion and path planners.

Prerequisites

Useful background:

  • Robotic control
  • Experience with ROS

Necessary background:

  • Experience with C++


Contact

furkan.kaynar@tum.de

(Please provide your CV and transcript with your application.)

Supervisor:

Hasan Furkan Kaynar

OCC for indoor positioning and identification in harsh environments

Keywords:
optical camera communications; video tracking
Short description:
Identify and track multiple light sources in a video stream

Description

Identify and track multiple light sources in a video stream.

The student will record videos of multiple objects to be tracked in the video stream.
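
A minimal sketch of the detection step: threshold bright pixels per frame and extract light-source candidates via connected components; frame-to-frame association (e.g., nearest neighbour on centroids) would follow. The frame here is random noise:

```python
# Detect bright-spot candidates in one frame with OpenCV.
import cv2
import numpy as np

frame = np.random.default_rng(5).uniform(0, 255, (480, 640)).astype(np.uint8)
_, mask = cv2.threshold(frame, 240, 255, cv2.THRESH_BINARY)
n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)

for i in range(1, n):                       # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] > 5:      # drop single-pixel noise
        print("light source at", centroids[i])
```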

Prerequisites

Python

Signal/image processing

Contact

fabian.seguel@tum.de

Office 2940

Supervisor:

Fabian Esteban Seguel Gonzalez

Sub-band analysis for indoor positioning: extracting robust features

Keywords:
OFDMA; CSI; indoor positioning
Short description:
Collect CSI samples at reference points and extract robust features for indoor positioning

Description

The student must set up a transmission scheme based on MIMO technology and OFDMA.

The student will take samples at different points inside an indoor environment and process the signal characteristics to estimate the position of the mobile node.
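
The sub-band idea can be sketched in a few lines: split the per-subcarrier CSI magnitudes into bands and keep per-band statistics as a fingerprint for each measurement point. The CSI values and band layout below are illustrative assumptions:

```python
# Sub-band CSI feature extraction on synthetic data.
import numpy as np

rng = np.random.default_rng(6)
csi = rng.standard_normal(64) + 1j * rng.standard_normal(64)  # 64 subcarriers

bands = np.abs(csi).reshape(8, 8)                # 8 sub-bands of 8 carriers
features = np.concatenate([bands.mean(axis=1), bands.std(axis=1)])
print(features.shape)   # (16,) fingerprint for this measurement position
```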

Prerequisites

Python

Wireless communications with a focus on channel state information and OFDMA MIMO systems

Knowledge of USRPs is not required, but is a plus

Contact

fabian.seguel@tum.de

Office 2940

Supervisor:

Fabian Esteban Seguel Gonzalez

Development of a Zoom Chatbot for Virtual Audience Feedback

Description

Virtual conference systems provide an alternative to physical meetings and have significantly grown in importance over the last years. However, larger events require the audience to be muted to avoid an accumulation of background noise and distorted audio. While this is sufficient for unidirectional meetings, many types of meetings strongly rely on the feedback of their audience, for example in the performing arts.
In this project, we want to extend Zoom sessions with a simple chatbot that collects the audience participation of each user through a straightforward button interface. The system then renders the overall audience feedback based on the state collected from each user. The project combines signal and audio processing with the chance to gain practical experience in app development and SDKs.
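
The aggregation logic itself is language-neutral; the sketch below is in Python for brevity, while the actual bot would use the Zoom SDK from Node.js. The button states and the rendering rule are assumptions:

```python
# Aggregate per-user button feedback into an overall distribution.
from collections import Counter

feedback = {}                              # user_id -> current button state

def on_button(user_id, state):             # called by the chatbot callback
    feedback[user_id] = state

def render():
    counts = Counter(feedback.values())
    total = max(len(feedback), 1)
    return {state: n / total for state, n in counts.items()}

on_button("alice", "applause"); on_button("bob", "applause"); on_button("eve", "laugh")
print(render())   # {'applause': 0.666..., 'laugh': 0.333...}
```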


Prerequisites

  • Good knowledge of Node.js/JavaScript
  • Experience with Git
  • Experience with Zoom SDK would be a plus

Supervisor:

High-level Robotic Teleoperation via Scene Editing

Description

Autonomous grasping and manipulation are complicated tasks that require precise planning and a high level of scene understanding. Although robot autonomy has been evolving for decades, there is still room for improvement, especially for operation in unstructured environments like households. Human demonstration can further improve autonomous robot abilities and increase task success in different scenarios. In this thesis, we will work on user interaction methods for describing a robotic task by modifying the viewed scene.


Prerequisites

Useful background:

  • 3D human-computer interfaces
  • Game design
  • Digital signal processing

Required abilities:

  • Experience with Unity and ROS
  • Motivation to produce good work


Contact

furkan.kaynar@tum.de

(Please provide your CV and transcript with your application.)

Supervisor:

Hasan Furkan Kaynar

Robotic grasp learning from human demonstrations


Description

Autonomous grasping and manipulation are complicated tasks that require precise planning and a high level of scene understanding. Although robot autonomy has been evolving for decades, there is still room for improvement, especially for operation in unstructured environments like households. Human demonstration can further improve autonomous robot abilities and increase task success in different scenarios. In this thesis, we will work on learning from human demonstrations to improve robot autonomy.

Prerequisites

Required background:

  • Digital signal processing
  • Computer vision
  • Neural networks and other ML algorithms

Required abilities:

  • Experience with Python or C++
  • Experience with TensorFlow or PyTorch
  • Motivation to produce a good thesis


Contact

furkan.kaynar@tum.de

(Please provide your CV and transcript with your application.)

Supervisor:

Hasan Furkan Kaynar

Jacobian Null-Space Energy Dissipation TDPA for Redundant Robots in Teleoperation

Keywords:
Teleoperation, Robotics, Control Theory

Description

Teleoperation Systems

Bilateral teleoperation with haptic feedback provides its users with a new dimension of immersion in virtual or remote environments. This technology enables a great variety of applications in robotics and virtual reality, such as remote surgery and industrial digital twins [1]. Figure 1 shows a generalized human-in-the-loop teleoperation system with kinesthetic feedback, where the operator commands a remote/virtual robot to explore the environment and experiences the interactive force feedback through the haptic interface.

Teleoperation systems face many challenges caused by unpredictable environment changes, time-delayed feedback, limited network capacity, etc. [2]. These issues inevitably distort the force feedback signal, degrading the transparency and stability of the system. In the past decades, many control algorithms and hardware architectures have been developed to tackle these problems [3].

Time Domain Passivity Approach (TDPA)

TDPA is a passivity-based control scheme that ensures the stability of teleoperation systems in the presence of communication delays [4] (see Figure 2). It abstracts two-port networks from the haptic system and observes the energy flow between the networks. Passivity is maintained by dissipating the extra energy generated by non-passive networks. The original TDPA suffers from position drift and feedback force jumps [5]; one reason for the position drift is that the energy generated by the delayed communication is dissipated in the task space of the robots.
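
The core mechanism can be sketched for a single network port: a passivity observer integrates the energy flow, and a variable damper injects dissipation whenever the observed energy would go negative. Signals here are random placeholders:

```python
# Passivity observer (PO) + passivity controller (PC) sketch for one port.
import numpy as np

dt, E = 1e-3, 0.0
rng = np.random.default_rng(8)
for _ in range(1000):
    f, v = rng.standard_normal(), rng.standard_normal()   # force, velocity
    E += f * v * dt                       # passivity observer: energy flow
    if E < 0.0 and abs(v) > 1e-9:
        alpha = -E / (v * v * dt)         # damping that dissipates the deficit
        f += alpha * v                    # passivity controller modifies force
        E = 0.0
print("final observed energy:", E)        # never negative after the PC acts
```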

Jacobian Null Space for Redundant Robots

Many robot mechanisms have redundant degrees of freedom (rDOFs), meaning they have more joints than the dimension of their task or configuration space. The null space of the Jacobian represents these redundant dimensions, which can be exploited to dissipate extra energy by damping the null-space motion without affecting the task space [5].
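
A minimal numeric sketch of that projection: with the Moore-Penrose pseudoinverse, the projector N = I - pinv(J) J maps any joint velocity or torque into the null space, so damping applied through N provably produces no task-space motion:

```python
# Null-space damping via the Jacobian null-space projector.
import numpy as np

rng = np.random.default_rng(9)
J = rng.standard_normal((6, 7))          # 6-D task space, 7-joint redundant arm
qdot = rng.standard_normal(7)

N = np.eye(7) - np.linalg.pinv(J) @ J    # Jacobian null-space projector
tau_damp = -N @ (2.0 * qdot)             # damp only the null-space motion

print(np.allclose(J @ (N @ qdot), 0))    # True: no task-space velocity effect
```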

Your Task and Target

In this work, we aim to improve the performance of TDPA by dissipating the energy generated by time delay and other factors in the Jacobian null space of kinematically redundant robots. With the Jacobian null-space method, we avoid dissipating energy in the task space, which alleviates position drift and force distortion while keeping the system passive. For more information, see previous work [7-9].

In this master's internship, your work will include:

1. Surveying the related algorithms
2. Constructing the simulation environment
3. Experimenting with the state-of-the-art Jacobian null-space TDPA method
4. Analyzing system passivity in Cartesian task space, joint space, null space, etc.

Prerequisites

All requirements are recommended but not mandatory; however, you will need extra effort to catch up if you are unfamiliar with the following topics:

1. Basic knowledge of robotics and control theory is favorable.
2. Experience with robotics simulation software and platforms is favorable.
3. C++, Matlab, and Python will be the primary working languages; basic knowledge of one or more of them is highly recommended.

Contact

zican.wang@tum.de

xiao.xu@tum.de

Supervisors:

Zican Wang, Xiao Xu


Important information on preparing the written report and giving presentations at LMT, as well as templates for PowerPoint and LaTeX, is summarized here.