Open Theses

High-level Robotic Teleoperation via Scene Editing

Description

Autonomous grasping and manipulation are complex tasks that require precise planning and a high level of scene understanding. Although robot autonomy has been evolving for decades, there is still room for improvement, especially for operation in unstructured environments such as households. Human demonstration can further improve a robot's autonomous abilities and increase task success across different scenarios. In this thesis, we will work on user interaction methods for describing a robotic task by modifying the viewed scene.
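
As a rough illustration of the intended interaction, the sketch below forwards an object pose edited by the user to the robot side over ROS; the topic name and the idea of sending one PoseStamped per edited object are assumptions for illustration, not the thesis design.

```python
# Illustrative sketch: forwarding a user's scene edit (a new object pose)
# to the robot side over ROS. The topic name and message flow are assumed.
import rospy
from geometry_msgs.msg import PoseStamped

def publish_scene_edit(pub, frame_id, position, orientation):
    """Send the target pose the user assigned to an object in the edited scene."""
    msg = PoseStamped()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = frame_id
    msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = position
    (msg.pose.orientation.x, msg.pose.orientation.y,
     msg.pose.orientation.z, msg.pose.orientation.w) = orientation
    pub.publish(msg)

if __name__ == '__main__':
    rospy.init_node('scene_edit_publisher')
    pub = rospy.Publisher('/scene_edit/object_pose', PoseStamped, queue_size=1)
    rospy.sleep(0.5)  # give subscribers time to connect before publishing
    # Example: the user dragged an object 10 cm along x in the viewed scene.
    publish_scene_edit(pub, 'map', (0.10, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0))
```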

 

Prerequisites

Useful background:

- 3D Human-computer interfaces

- Game Design

- Digital signal processing

 

Required abilities:

- Experience with Unity and ROS

- Motivation to produce good work

 

 

Contact

furkan.kaynar@tum.de

 

(Please provide your CV and transcript in your application)

 

 

Supervisor:

Hasan Furkan Kaynar

Robotic grasp learning from human demonstrations

Description

Autonomous grasping and manipulation are complex tasks that require precise planning and a high level of scene understanding. Although robot autonomy has been evolving for decades, there is still room for improvement, especially for operation in unstructured environments such as households. Human demonstration can further improve a robot's autonomous abilities and increase task success across different scenarios. In this thesis, we will work on learning from human demonstration to improve robot autonomy.
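
As one possible starting point, the sketch below shows a minimal behavior-cloning baseline in PyTorch, assuming demonstrations are already available as (state, action) pairs; the dimensions, network, and random stand-in data are placeholders.

```python
# Minimal behavior-cloning sketch: regress demonstrated actions from states.
# Dimensions and the demonstration data below are placeholders.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 32, 7  # e.g. scene features -> end-effector command

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for a real demonstration dataset.
states = torch.randn(256, STATE_DIM)
actions = torch.randn(256, ACTION_DIM)

for epoch in range(100):
    pred = policy(states)          # imitate the demonstrated action
    loss = loss_fn(pred, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```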

Prerequisites

Required background:

- Digital signal processing

- Computer vision

- Neural networks and other ML algorithms

 

Required abilities:

- Experience with Python or C++

- Experience with Tensorflow or PyTorch

- Motivation to produce a good thesis

 

 

Contact

furkan.kaynar@tum.de

 

(Please provide your CV and transcript in your application)

 

 

Supervisor:

Hasan Furkan Kaynar

Reinforcement Learning for Estimating Virtual Fixture Geometry to Improve Robotic Manipulation

Description

Robotic teleoperation is often used to accomplish complex tasks remotely with a human in the loop. In cases where the task requires very precise manipulation, virtual fixtures can be used to restrict and guide the motion of the robot's end effector while the operator teleoperates. In this thesis, we will analyze the geometry of virtual fixtures depending on the scene and task, and use reinforcement learning to estimate ideal virtual fixture model parameters. At the end of the thesis, the performance can be evaluated in user experiments.
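
To make the learning setup concrete, the toy sketch below frames fixture estimation as a Gym-style environment whose action is the fixture geometry itself; the parameterization (guidance-tube radius and stiffness) and the reward shaping are illustrative assumptions, not the thesis design.

```python
# Toy Gym-style environment: the agent's action is the virtual-fixture
# geometry itself (here a guidance tube given by radius and stiffness).
import numpy as np

class VirtualFixtureEnv:
    def reset(self):
        # Observation: task/scene features; here just a random goal position.
        self.goal = np.random.uniform(-0.2, 0.2, size=3)
        return self.goal.copy()

    def step(self, action):
        radius, stiffness = action  # fixture parameters proposed by the agent
        # Placeholder teleoperation outcome: a tighter, stiffer tube lowers
        # end-effector error but increases the load on the operator.
        tracking_error = 0.05 * radius + 0.01 / (stiffness + 1e-3)
        operator_effort = 0.1 * stiffness
        reward = -(tracking_error + operator_effort)
        return self.goal.copy(), reward, True, {}  # one proposal per episode

env = VirtualFixtureEnv()
obs = env.reset()
_, reward, done, _ = env.step(np.array([0.02, 5.0]))  # radius [m], stiffness
```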

Prerequisites

Useful background:

- Machine learning (Reinforcement Learning)

- Robotic simulation

 

Requirements:

- Experience with Python and deep learning frameworks (PyTorch, TensorFlow, etc.)

- Experience with an RL framework

- Motivation to produce a good outcome

 

Contact

furkan.kaynar@tum.de

diego.prado@tum.de

(Please provide your CV and transcript in your application)

 

Supervisor:

Hasan Furkan Kaynar, Diego Fernandez Prado

Robotic task learning from human demonstration

Description

Autonomous grasping and manipulation are complex tasks that require precise planning and a high level of scene understanding. Although robot autonomy has been evolving for decades, there is still room for improvement, especially for operation in unstructured environments such as households. Human demonstration can further improve a robot's autonomous abilities and increase task success across different scenarios. In this thesis, we will work on learning from human demonstration to improve robot autonomy.

Prerequisites

Required background:

- Digital signal processing

- Computer vision

- Neural networks and other ML algorithms

 

Required abilities:

- Experience with Python or C++

- Experience with Tensorflow or PyTorch

- Motivation to produce a good thesis

 

Contact

furkan.kaynar@tum.de

 

(Please provide your CV and transcript in your application)

 

Supervisor:

Hasan Furkan Kaynar

Ongoing Theses

Master's Theses

Robotic task learning from human demonstration using spherical representations

Description

Autonomous grasping and manipulation are complex tasks that require precise planning and a high level of scene understanding. Although robot autonomy has been evolving for decades, there is still room for improvement, especially for operation in unstructured environments such as households. Human demonstration can further improve a robot's autonomous abilities and increase task success across different scenarios. In this thesis, we will work on learning from human demonstration to improve robot autonomy.

Prerequisites

Required background:

- Digital signal processing

- Computer vision

- Neural networks and other ML algorithms

 

Required abilities:

- Experience with Python or C++

- Experience with Tensorflow or PyTorch

- Motivation to produce a good thesis

 

Contact

furkan.kaynar@tum.de

 

(Please provide your CV and transcript in your application)

 

Supervisor:

Hasan Furkan Kaynar

Development of a method to solve the flexible job-shop scheduling problem with Graph Neural Networks (GNNs)

Description

...

Supervisor:

Hasan Furkan Kaynar

Depth data analysis for investigating robotic grasp estimates

Description

Many robotic applications are based on computer vision, which relies on the sensor output. A typical example is semantic scene analysis, which the robot uses to plan its motions. Many computer vision algorithms are trained in simulation, which may or may not represent the actual sensor data realistically. Physical sensors are imperfect, and erroneous output data may degrade the performance of the required tasks. In this thesis, we will analyze the sensor data and estimate its effects on the final robotic task.
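
One simple way to study such effects is to corrupt clean simulated depth with a parametric noise model and compare downstream results on the clean and corrupted versions. The sketch below uses an assumed depth-dependent Gaussian noise plus pixel-dropout model, loosely inspired by typical RGB-D error characteristics.

```python
# Sketch: corrupt a clean (simulated) depth map with a simple noise model.
# The quadratic depth-dependent noise and dropout rate are assumptions.
import numpy as np

def corrupt_depth(depth, sigma_base=0.001, sigma_scale=0.0025, dropout=0.02, rng=None):
    rng = rng or np.random.default_rng()
    # Axial noise growing with distance (roughly quadratic for many sensors).
    noisy = depth + rng.normal(0.0, sigma_base + sigma_scale * depth**2)
    # Random invalid pixels, as caused by occlusions and reflective surfaces.
    noisy[rng.random(depth.shape) < dropout] = 0.0
    return noisy

clean = np.full((480, 640), 1.5)  # flat wall at 1.5 m, stand-in for sim output
noisy = corrupt_depth(clean)
print('mean abs. error [m]:', np.abs(noisy - clean)[noisy > 0].mean())
```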

Prerequisites

Required background:

- Digital signal processing

- Image analysis / Computer vision

- Neural networks and other ML algorithms

 

Required abilities:

- Experience with Python or C++

- Experience with Tensorflow or PyTorch

- Motivation to produce a good thesis

 

Contact

furkan.kaynar@tum.de

 

(Please provide your CV and transcript in your application)

 

Supervisor:

Hasan Furkan Kaynar

Research Internships (Forschungspraxis)

Interactive Segmentation with Depth Information

Description

Image segmentation is a fundamental problem in computer vision. While there are fully automatic methods for segmenting a given image, interactive segmentation schemes receive human guidance in addition to the scene information. In this project, we will investigate methods for using depth information to improve segmentation accuracy.
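
As one illustration of how depth and guidance could enter a model, the sketch below stacks RGB, depth, and click-derived distance maps into a single network input. The 6-channel layout and the distance-transform encoding of user clicks are common choices in interactive segmentation, used here purely as assumptions.

```python
# Sketch: build a 6-channel input (RGB + depth + positive/negative click maps).
import numpy as np

def build_input(rgb, depth, pos_clicks, neg_clicks, truncate=255.0):
    h, w, _ = rgb.shape

    def click_map(clicks):
        # Encode clicks as a truncated distance map over the image.
        m = np.full((h, w), truncate, dtype=np.float32)
        ys, xs = np.mgrid[0:h, 0:w]
        for (cy, cx) in clicks:
            m = np.minimum(m, np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2))
        return m / truncate

    return np.dstack([
        rgb.astype(np.float32) / 255.0,       # 3 channels: appearance
        depth[..., None].astype(np.float32),  # 1 channel: geometry
        click_map(pos_clicks)[..., None],     # 1 channel: "object" guidance
        click_map(neg_clicks)[..., None],     # 1 channel: "background" guidance
    ])

x = build_input(np.zeros((480, 640, 3), np.uint8),
                np.ones((480, 640), np.float32),
                pos_clicks=[(240, 320)], neg_clicks=[])
print(x.shape)  # (480, 640, 6)
```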

Prerequisites

Basic knowledge of

- Digital signal processing

- Computer vision

- Neural networks and other ML algorithms

 

Familiarity with

- Python or C++

- Tensorflow or PyTorch

 

 

Contact

furkan.kaynar@tum.de

 

(Please provide your CV and transcript in your application)

 

Supervisor:

Hasan Furkan Kaynar

3D object model reconstruction from RGB-D scenes

Description

To become part of daily human life, robots should be able to explore their environments and learn new objects. Detecting and recognizing objects in unstructured environments, such as households, remains challenging. For robotic grasping and manipulation, knowing the 3D models of objects is beneficial, so the robot needs to infer the 3D shape of an object upon observation. In this project, we will investigate methods that infer or produce 3D models of novel objects by observing RGB-D scenes. We will analyze methods for reconstructing 3D information with different arrangements of an RGB-D camera.
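
A core building block here is lifting a depth image to a 3D point cloud via the pinhole camera model, as in the sketch below; the intrinsic parameters are placeholders and would come from the camera's calibration in practice.

```python
# Minimal back-projection sketch: depth image -> 3D point cloud (pinhole model).
# Intrinsics are placeholders; real values come from camera calibration.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (us - cx) * z / fx          # X = (u - cx) * Z / fx
    y = (vs - cy) * z / fy          # Y = (v - cy) * Z / fy
    pts = np.dstack([x, y, z]).reshape(-1, 3)
    return pts[pts[:, 2] > 0]       # drop invalid (zero-depth) pixels

depth = np.full((480, 640), 1.0, dtype=np.float32)  # stand-in depth frame [m]
cloud = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)
```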

Prerequisites

- Basic knowledge of digital signal processing / computer vision

- Experience with ROS, C++, Python

- Experience with artificial neural network libraries, or motivation to learn them

- Motivation to produce successful work

Contact

furkan.kaynar@tum.de

Supervisor:

Hasan Furkan Kaynar

Internships

Recording of Robotic Grasping Failures

Description

The aim of this project is to collect data from robotic grasping experiments and create a large-scale labeled dataset. We will conduct experiments in which the robot attempts to grasp known or unknown objects autonomously. The complete pipeline includes:

- Estimating grasp poses via computer vision

- Robotic motion planning

- Executing the grasp physically

- Recording necessary data

- Organizing the recorded data into a well-structured dataset

 

Most of the data collection pipeline has already been developed; additions and modifications may be needed.
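
As an illustration of what a well-structured dataset entry could look like, the sketch below serializes one grasp attempt to JSON; all field names are assumptions about what the recorded data might include, not the project's actual schema.

```python
# Sketch of one possible per-attempt record for the grasping dataset.
# Field names are assumptions about what "necessary data" could include.
import json
import time

def make_grasp_record(attempt_id, object_id, grasp_pose, success,
                      rgb_path, depth_path):
    return {
        'attempt_id': attempt_id,
        'timestamp': time.time(),
        'object_id': object_id,    # known-object label or 'unknown'
        'grasp_pose': grasp_pose,  # [x, y, z, qx, qy, qz, qw], robot base frame
        'success': success,        # grasp outcome label
        'rgb_path': rgb_path,      # sensor data stored alongside the metadata
        'depth_path': depth_path,
    }

record = make_grasp_record(1, 'mug_03', [0.4, 0.0, 0.2, 0, 0, 0, 1],
                           success=False, rgb_path='rgb/000001.png',
                           depth_path='depth/000001.png')
with open('attempt_000001.json', 'w') as f:
    json.dump(record, f, indent=2)
```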

Prerequisites

Useful background:

- Digital signal processing

- Computer vision

- Dataset handling

 

Requirements:

- Experience with Python and ROS

- Motivation to produce a good outcome

Contact

furkan.kaynar@tum.de

 

(Please provide your CV and transcript in your application)

 

Supervisor:

Hasan Furkan Kaynar