High-level Robotic Teleoperation via Scene Editing
Description
Autonomous grasping and manipulation are complex tasks that require precise planning and a high level of scene understanding. Although robot autonomy has been advancing for decades, there is still room for improvement, especially for operation in unstructured environments such as households. Human demonstration can further improve a robot's autonomous abilities and increase task success across different scenarios. In this thesis, we will work on user interaction methods for specifying a robotic task by editing the viewed scene.
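To make the interaction pipeline concrete, here is a minimal sketch of how an object pose edited in the 3D interface could be sent to the robot side as a task goal. The topic name, reference frame, and message layout are assumptions for illustration only; defining the actual interface is part of the thesis work.

```python
#!/usr/bin/env python
# Sketch: publish the pose of an object after the user has moved it in the
# 3D interface, so the robot side can treat it as the desired goal placement.
# Topic name and frame are hypothetical.
import rospy
from geometry_msgs.msg import PoseStamped


def publish_edited_pose(x, y, z, qx, qy, qz, qw):
    rospy.init_node("scene_edit_publisher")
    pub = rospy.Publisher("/scene_edit/object_goal_pose", PoseStamped, queue_size=1)
    rospy.sleep(0.5)  # give the publisher time to connect

    msg = PoseStamped()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = "world"  # assumed reference frame
    msg.pose.position.x = x
    msg.pose.position.y = y
    msg.pose.position.z = z
    msg.pose.orientation.x = qx
    msg.pose.orientation.y = qy
    msg.pose.orientation.z = qz
    msg.pose.orientation.w = qw
    pub.publish(msg)


if __name__ == "__main__":
    # Example: the user dragged the object 30 cm to the left in the edited scene.
    publish_edited_pose(0.4, -0.3, 0.1, 0.0, 0.0, 0.0, 1.0)
```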
Prerequisites
Useful background:
- 3D Human-computer interfaces
- Game Design
- Digital signal processing
Required abilities:
- Experience with Unity and ROS
- Motivation to produce good work
Contact
furkan.kaynar@tum.de
(Please provide your CV and transcript in your application)
Supervisor:
Robotic grasp learning from human demonstrations
Description
Autonomous grasping and manipulation are complex tasks that require precise planning and a high level of scene understanding. Although robot autonomy has been advancing for decades, there is still room for improvement, especially for operation in unstructured environments such as households. Human demonstration can further improve a robot's autonomous abilities and increase task success across different scenarios. In this thesis, we will work on learning from human demonstration to improve robot autonomy.
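One possible direction, sketched here only as an assumption rather than the fixed method of the thesis: a network that regresses a grasp pose from a scene observation and is supervised by human-demonstrated grasps. The observation size and the data below are placeholders.

```python
# Minimal PyTorch sketch: regress a grasp pose (3D position + quaternion) from a
# flattened scene observation, supervised by human-demonstrated grasps.
import torch
import torch.nn as nn

class GraspPoseNet(nn.Module):
    def __init__(self, obs_dim=1024):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.position = nn.Linear(128, 3)     # grasp position (x, y, z)
        self.orientation = nn.Linear(128, 4)  # grasp orientation (quaternion)

    def forward(self, obs):
        h = self.backbone(obs)
        pos = self.position(h)
        quat = nn.functional.normalize(self.orientation(h), dim=-1)  # unit quaternion
        return pos, quat

# Placeholder demonstration data (would come from recorded human grasps).
obs = torch.randn(64, 1024)
demo_pos = torch.randn(64, 3)
demo_quat = nn.functional.normalize(torch.randn(64, 4), dim=-1)

model = GraspPoseNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(10):
    pred_pos, pred_quat = model(obs)
    # Position error plus a quaternion distance term (sign-invariant).
    loss = nn.functional.mse_loss(pred_pos, demo_pos) + \
           (1.0 - (pred_quat * demo_quat).sum(dim=-1).abs()).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```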
Prerequisites
Required background:
- Digital signal processing
- Computer vision
- Neural networks and other ML algorithms
Required abilities:
- Experience with Python or C++
- Experience with TensorFlow or PyTorch
- Motivation to produce a good thesis
Contact
furkan.kaynar@tum.de
(Please provide your CV and transcript in your application)
Supervisor:
Reinforcement Learning for Estimating Virtual Fixture Geometry to Improve Robotic Manipulation
Description
Robotic teleoperation is often used to accomplish complex tasks remotely with a human in the loop. In cases where the task requires very precise manipulation, virtual fixtures can be used to restrict and guide the motion of the robot's end effector while the operator teleoperates it. In this thesis, we will analyze how the geometry of virtual fixtures depends on the scene and the task, and use reinforcement learning to estimate ideal virtual fixture model parameters. At the end of the thesis, the performance can be evaluated in user experiments.
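As a rough illustration, an RL formulation could treat the fixture parameters as the action and a task-dependent score as the reward. The sketch below assumes a toy gymnasium environment with placeholder dynamics and reward, trained with stable-baselines3 PPO; the actual state, action, and reward design is part of the thesis.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class VirtualFixtureEnv(gym.Env):
    """Toy environment: the agent picks virtual fixture parameters for one
    teleoperation episode; the reward trades off task error against how much
    the fixture constrains the operator. All dynamics are placeholders."""

    def __init__(self):
        super().__init__()
        # Observation: a simplified scene/task descriptor (placeholder, 8-D).
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(8,), dtype=np.float32)
        # Action: [fixture radius, fixture stiffness], both normalized to [0, 1].
        self.action_space = spaces.Box(0.0, 1.0, shape=(2,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self._task = self.np_random.uniform(-1.0, 1.0, size=8).astype(np.float32)
        return self._task, {}

    def step(self, action):
        radius, stiffness = float(action[0]), float(action[1])
        # Placeholder reward: a tighter, stiffer fixture reduces task error,
        # but over-constraining the operator is penalized.
        task_error = radius * (1.0 - stiffness) + 0.1 * self.np_random.normal()
        over_constraint = (1.0 - radius) * stiffness
        reward = -task_error - 0.5 * over_constraint
        # One fixture decision per episode in this toy setup.
        return self._task, float(reward), True, False, {}

if __name__ == "__main__":
    from stable_baselines3 import PPO
    model = PPO("MlpPolicy", VirtualFixtureEnv(), verbose=0)
    model.learn(total_timesteps=2_000)
```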
Prerequisites
Useful background:
- Machine learning (Reinforcement Learning)
- Robotic simulation
Requirements:
- Experience with Python and deep learning frameworks (PyTorch, TensorFlow, ...)
- Experience with an RL framework
- Motivation to produce good results
Contact
furkan.kaynar@tum.de
diego.prado@tum.de
(Please provide your CV and transcript in your application)
Supervisor:
Robotic task learning from human demonstration
Description
Autonomous grasping and manipulation are complex tasks that require precise planning and a high level of scene understanding. Although robot autonomy has been advancing for decades, there is still room for improvement, especially for operation in unstructured environments such as households. Human demonstration can further improve a robot's autonomous abilities and increase task success across different scenarios. In this thesis, we will work on learning from human demonstration to improve robot autonomy.
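One way such a pipeline could start, given purely as an illustrative assumption: behavior cloning, where a policy network is trained to reproduce state-action pairs recorded from human demonstrations. Dimensions and data below are placeholders.

```python
# Minimal behavior-cloning sketch in PyTorch: a policy network maps the robot
# state to an action and is trained on state-action pairs recorded from human
# demonstrations.
import torch
import torch.nn as nn

state_dim, action_dim = 16, 7  # e.g. proprioception + object features -> joint command

policy = nn.Sequential(
    nn.Linear(state_dim, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, action_dim),
)

# Placeholder demonstration dataset (would come from recorded teleoperation).
demo_states = torch.randn(512, state_dim)
demo_actions = torch.randn(512, action_dim)

loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(demo_states, demo_actions),
    batch_size=64, shuffle=True,
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(10):
    for states, actions in loader:
        loss = nn.functional.mse_loss(policy(states), actions)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```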
Prerequisites
Required background:
- Digital signal processing
- Computer vision
- Neural networks and other ML algorithms
Required abilities:
- Experience with Python or C++
- Experience with TensorFlow or PyTorch
- Motivation to produce a good thesis
Contact
furkan.kaynar@tum.de
(Please provide your CV and transcript in your application)