Michael Meidinger, M.Sc.
Research Associate
Technical University of Munich
TUM School of Computation, Information and Technology
Chair of Integrated Systems
Arcisstr. 21
80333 München
Germany
Phone: +49.89.289.23871
Fax: +49.89.289.28323
Building: N1 (Theresienstr. 90)
Room: N2114
Email: michael.meidinger(at)tum.de
Curriculum vitae
- Since 2023: Doctoral Candidate at LIS
- 2021 - 2023: M.Sc. Electrical Engineering and Information Technology, Technical University of Munich
- 2018 - 2021: B.Sc. Electrical Engineering and Information Technology, Technical University of Munich
- Tutor for the semester-break course Digitaltechnik (2019 - 2023); working student at ASC Sensors (2020 - 2022)
Research
My research interests include chiplet architectures, in particular their interconnects and application-specific smart functionality beyond pure data transmission, as well as reinforcement learning (RL) for MPSoC runtime optimization and for autonomous driving.
If no thesis is currently advertised, or if you are interested in another topic, you are welcome to send me an unsolicited email.
Ongoing Theses
Duckietown – Combined RL-Based Steering and Speed Control
Description
At LIS, we leverage the Duckietown hardware and software ecosystem to experiment with our reinforcement learning (RL) agents, known as learning classifier tables (LCTs), as part of the Duckiebot control system. More information is available on the Duckietown project website.
In previous work, the default PID controllers for steering and speed control were each replaced independently by an LCT RL agent. Each modified version of the Duckiebot control system matches its default counterpart in driving performance while using only slightly more computational resources. With some additional changes to the image processing pipeline, we also improved the accuracy of state measurements, e.g., the distance to the center of the lane. However, the two separate agents have not yet been combined, nor have they been exposed to more complex driving scenarios.
This thesis aims to have the Duckiebot's driving controlled entirely by RL. The student can achieve this either by merging our two RL agents into one or via a multi-agent approach. Since their selected actions affect each other, the agents cannot be treated as entirely independent: for example, a stronger heading angle correction is necessary to avoid leaving the lane when accelerating in curves.
The first step in this thesis will be to analyze the existing agents and investigate different concepts for their combination. When implementing the selected approach(es), new states and actions will likely extend the ruleset(s), which will demand more complex reward and Q-value update functions. Ideally, code optimizations will decrease the minimum possible RL cycle period. The student will compare the new system configuration to those with the existing separate agents and the baseline PID controller version regarding driving performance and computational resource utilization. A potential extension is the integration of more complex scenarios, such as intersections, pedestrian crossings, or traffic lights.
If the current LCT agents do not yield satisfactory results, the student could also explore other agent approaches, such as deep RL. As such methods are generally more resource-hungry, they might only be feasible by offloading parts of the control system to the Jetson Nano's GPU.
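As a rough illustration of the joint action space mentioned above, the sketch below shows a plain tabular Q-learning update over combined (steering, speed) actions. All names, action values, and hyperparameters are illustrative assumptions and do not reflect the actual LCT implementation:

```python
# Hypothetical sketch: tabular Q-learning over a joint steering/speed
# action space, so the agent can learn interactions such as slowing
# down in curves. Values below are assumed for illustration only.
import random
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount factor (assumed)

# Joint action space: each action is a (steering_correction, speed) pair.
ACTIONS = [(steer, speed) for steer in (-1, 0, 1) for speed in (0.2, 0.4)]

q_table = defaultdict(float)  # maps (state, action) -> Q-value

def update(state, action, reward, next_state):
    """Standard Q-learning update over the joint action space."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (
        reward + GAMMA * best_next - q_table[(state, action)]
    )

def select_action(state, epsilon=0.1):
    """Epsilon-greedy action selection."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])
```

A merged agent of this kind replaces two independent rulesets with one over the product of both action sets, which is exactly why the reward and Q-value update functions become more complex.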
Prerequisites
- Familiarity with RL, Python, ROS, and computer vision
- Structured way of working and strong problem-solving skills
- Interest in autonomous driving and robotics
Duckietown - DuckieVisualizer Extension and System Maintenance
Description
At LIS, we leverage the Duckietown hardware and software ecosystem to experiment with our reinforcement learning (RL) agents, known as learning classifier tables (LCTs), as part of the Duckiebot control system. More information is available on the Duckietown project website.
In previous work, we developed a tool called DuckieVisualizer to monitor our Duckiebots, evaluate their driving performance, and visualize and interact with the actively learning RL agents.
This student assistant position involves extending the tool and its interfaces on the robot side with further features, e.g., more complex learning algorithms or driving statistics. The underlying camera processing program should also be ported from Matlab to a faster programming language to enable real-time robot tracking. Furthermore, more robust Duckiebot identification mechanisms should be considered.
Besides these extensions to the DuckieVisualizer, the student will also take on general system maintenance tasks. These may cover both the Duckiebots' hardware and their software stack, for example merging different sub-projects and exploring quality-of-life improvements to the Docker-based build process. Another task will be to help newly starting students set up their development environment and assist them in their first steps. Finally, the student can get involved in expanding our track and adding new components, e.g., intersections or duckie pedestrian crossings.
Prerequisites
- Understanding of networking and computer vision
- Experience with Python, ROS, and GUI development
- Familiarity with Docker and Git
- Structured way of working and strong problem-solving skills
- Interest in autonomous driving and robotics
Contact
michael.meidinger@tum.de
Duckietown - Improved Distance Measurement
Description
At LIS, we leverage the Duckietown hardware and software ecosystem to experiment with our reinforcement learning (RL) agents, known as learning classifier tables (LCTs), as part of the Duckiebot control system. More information is available on the Duckietown project website.
We use a Duckiebot's Time-of-Flight (ToF) sensor to measure the distance to objects in front of the robot. This allows it to stop before crashing into obstacles. The distance measurement is also used in our platooning mechanism. When another Duckiebot is detected via its rear dot pattern, the robot can adjust its speed to follow the other Duckiebot at a given distance.
Unfortunately, the measurement region of the integrated ToF sensor is very narrow: it only detects objects reliably within a cone of about 5 degrees in front of the robot. Objects outside this cone, whether too far to the side or too high or low, do not reflect the emitted laser beam back to the sensor's collector, leading to crashes. The distance measurement is also fairly noisy, with accuracy decreasing at longer distances, at larger angular offsets from the sensor, and on uneven reflection surfaces. As a result, the distance to the other Duckiebot is often measured incorrectly in platooning mode, causing the robot to react with unexpected maneuvers and lose track of the leading robot.
In this student assistant project, the student will investigate how to resolve these issues. After analyzing the current setup, different sensors and their placement on the robot's front should be considered. Adding a new sensor to the Duckiebot system will require a suitable driver and some hardware adaptations. Finally, the student will integrate the improved distance measurement setup into our Python/ROS-based autonomous driving pipeline, evaluate it in terms of measurement region and accuracy, and compare the new setup to the baseline.
These modifications should allow us to avoid crashes more reliably and enhance our platooning mode, which will be helpful for further development, especially when moving to more difficult-to-navigate environments, e.g., tracks with intersections and sharp turns.
Prerequisites
- Basic understanding of sensor technology and data transmission protocols
- Experience or motivation to familiarize yourself with Python and ROS
- Structured way of working and strong problem-solving skills
- Interest in autonomous driving and robotics
Contact
michael.meidinger@tum.de
Completed Theses
Bachelor's Theses
Master's Theses
Research Internships (Forschungspraxis)
Seminars
Student Assistant Jobs