Michael Meidinger, M.Sc.
Research Associate
Technische Universität München
TUM School of Computation, Information and Technology
Lehrstuhl für Integrierte Systeme
Arcisstr. 21
80333 München
Phone: +49.89.289.23871
Fax: +49.89.289.28323
Building: N1 (Theresienstr. 90)
Room: N2114
Email: michael.meidinger@tum.de
Curriculum Vitae
- Since 2023: Doctoral candidate at LIS
- 2021 - 2023: M.Sc. in Electrical Engineering and Information Technology, TU München
- 2018 - 2021: B.Sc. in Electrical Engineering and Information Technology, TU München
- Tutor for the Digitaltechnik holiday course (2019 - 2023); working student at ASC Sensors (2020 - 2022)
Research
My research interests include chiplet architectures, in particular their interconnect and application-specific smart functionality beyond pure data transmission, as well as reinforcement learning for runtime optimization of MPSoCs and for autonomous driving.
Offered Theses
If no thesis is currently advertised, or if you are interested in a different topic, feel free to send me an email on your own initiative.
Ongoing Theses
Extended SystemC Model for Design Space Exploration of a Chiplet-Based System
Description
In the BCDC project, a working group at TUM collaborates on designing a RISC-V-based chiplet demonstration chip, at least two instances of which will be connected via an interposer to emulate a system of interconnected chiplets. At LIS, we work on a high-performance, low-latency chiplet interconnect with additional application-specific features managed by a smart protocol controller. It closes the gap between the underlying physical layer, which handles data transmission across the interposer, and the system bus, which attaches the inter-chiplet interface to the other components of the demonstration chip.
In previous work, a high-level simulation of our system has been set up using SystemC Transaction-Level Modeling (TLM). The model represents chiplets, an additional FPGA, and their interconnect in a configurable manner. The user can record a wide range of statistics for the evaluation of different system configurations and design details. This follow-up Master’s thesis serves to extend the model to bring it closer to a real system and to enhance its functionality as a design exploration tool.
As a first step, the overall architecture of the chiplet model is to be reworked. A more realistic system bus modeled on AXI4, including bursts and congestion-handling mechanisms, is to be implemented. Other parts of the TUM demonstration chip, such as hardware accelerators with local memory, are to be added.
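To illustrate the burst semantics such a bus model needs to capture, here is a minimal conceptual sketch in Python (the actual model is written in SystemC TLM; the names and the outstanding-transaction limit below are hypothetical):

```python
from dataclasses import dataclass

# Conceptual sketch of AXI4-style burst semantics; the real model is SystemC TLM.
@dataclass
class BurstTransaction:
    addr: int       # start address of the burst
    beat_size: int  # bytes transferred per beat
    num_beats: int  # burst length (AXI4 allows up to 256 beats for INCR bursts)

    def beat_addresses(self):
        """Addresses accessed by an incrementing (INCR) burst."""
        return [self.addr + i * self.beat_size for i in range(self.num_beats)]

# Congestion handling could be modeled as back-pressure on outstanding bursts:
MAX_OUTSTANDING = 4  # hypothetical limit

def can_issue(outstanding: int) -> bool:
    # Stall new bursts once the limit is reached.
    return outstanding < MAX_OUTSTANDING
```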
The capabilities of the interconnect protocol should be extended, e.g., by read/write operations to other memories besides the main chiplet RAM, DMA support, or application-specific extra features as provided by the smart protocol controller. This process will be based on a more formal specification of the interconnect protocol. The modeling of the interconnect itself should also involve further details specific to the investigated standards and custom solutions.
With a more complex chiplet model, more configuration options should be offered. An example could be the instantiation of a pure memory chiplet. The type of interconnect between individual chiplets should also be configurable beyond the currently supported parameters. All of these added possibilities will rely on an overhauled inter-chiplet routing mechanism.
Furthermore, the existing user code applications should be supplemented by ones utilizing the added hardware accelerators or benefiting from multi-core operation.
On the side of usability improvements, further statistics should be collected, automatically processed, and visualized to simplify the evaluation of the explored system configurations. Ultimately, the simulation should help identify the benefits and drawbacks of these configurations and support a future HDL implementation.
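As a rough idea of what the automated processing step could look like, assuming the simulator dumps per-transaction statistics to a CSV file (the file name and column names below are placeholders, not the tool's actual output format):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical post-processing of simulator output; the file name and the
# columns "config" and "latency_ns" are placeholders for illustration.
df = pd.read_csv("chiplet_sim_stats.csv")

# Compare latency distributions across the explored system configurations.
print(df.groupby("config")["latency_ns"].describe())

df.boxplot(column="latency_ns", by="config")
plt.ylabel("transaction latency [ns]")
plt.savefig("latency_comparison.png")
```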
Prerequisites
- Understanding of chiplet architectures, especially their interconnect
- Experience with SystemC TLM
- Structured and independent way of working and strong problem-solving skills
Contact
michael.meidinger@tum.de
Duckietown - DuckieVisualizer Extension and System Maintenance
Description
At LIS, we leverage the Duckietown hardware and software ecosystem to experiment with our reinforcement learning (RL) agents, known as learning classifier tables (LCTs), as part of the Duckiebot control system. More information on Duckietown can be found at https://www.duckietown.org/.
In previous work, we developed a tool called DuckieVisualizer to monitor our Duckiebots, evaluate their driving performance, and visualize and interact with the actively learning RL agents.
This student assistant position will involve extending the tool and its interfaces on the robot side with further features, e.g., support for more complex learning algorithms or additional driving statistics. The underlying camera processing program should also be ported from Matlab to a faster programming language to enable real-time robot tracking. Furthermore, more robust Duckiebot identification mechanisms should be considered.
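As one possible direction for more robust identification, fiducial markers could replace or complement the current mechanism; a minimal sketch using OpenCV's ArUco module (the marker dictionary is an assumption, and opencv-contrib-python with OpenCV >= 4.7 is required):

```python
import cv2

# Hypothetical ArUco-based Duckiebot identification sketch.
# Requires opencv-contrib-python (OpenCV >= 4.7 for the ArucoDetector API);
# the dictionary choice (4x4 markers, 50 IDs) is an assumption.
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict)

def identify_bots(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    # Each detected marker ID would map to one Duckiebot.
    return [] if ids is None else ids.flatten().tolist()
```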
Besides these extensions to the DuckieVisualizer, the student will also take on some general system maintenance tasks. This may include the Duckiebots' hardware and their software stack, for example, merging different sub-projects and looking into quality-of-life improvements to the build process using Docker. Another task will be to help new students set up their development environment and to assist them with their first steps. Finally, the student can get involved in expanding our track and adding new components, e.g., intersections or duckie pedestrian crossings.
Prerequisites
- Understanding of networking and computer vision
- Experience with Python, ROS, and GUI development
- Familiarity with Docker and Git
- Structured way of working and strong problem-solving skills
- Interest in autonomous driving and robotics
Contact
michael.meidinger@tum.de
Duckietown - Improved Distance Measurement
Description
At LIS, we leverage the Duckietown hardware and software ecosystem to experiment with our reinforcement learning (RL) agents, known as learning classifier tables (LCTs), as part of the Duckiebot control system. More information on Duckietown can be found at https://www.duckietown.org/.
We use a Duckiebot's Time-of-Flight (ToF) sensor to measure the distance to objects in front of the robot. This allows it to stop before crashing into obstacles. The distance measurement is also used in our platooning mechanism. When another Duckiebot is detected via its rear dot pattern, the robot can adjust its speed to follow the other Duckiebot at a given distance.
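Conceptually, the speed adjustment in platooning can be as simple as a proportional controller on the distance error; a minimal sketch (the target gap, gain, and clamping range are hypothetical values, not our actual controller):

```python
# Hypothetical proportional follow controller; gains and limits are placeholders.
TARGET_DISTANCE_M = 0.30  # desired gap to the leading Duckiebot (assumption)
KP = 1.5                  # proportional gain (assumption)

def follow_speed(measured_distance_m: float, base_speed: float) -> float:
    error = measured_distance_m - TARGET_DISTANCE_M
    # Drive faster when too far behind, slower when too close.
    speed = base_speed + KP * error
    return max(0.0, min(speed, 2 * base_speed))  # clamp to a safe range
```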
Unfortunately, the measurement region of the integrated ToF sensor is very narrow. It only detects objects reliably in a cone of about 5 degrees in front of the robot. Objects outside this cone, either too far to the side or too high/low, cannot reflect the emitted laser beam back to the sensor's collector, leading to crashes. The distance measurement is also fairly noisy, with accuracy decreasing at longer distances, larger angular offsets from the sensor, and on uneven reflecting surfaces. This means that the distance to the other Duckiebot is often not measured correctly in platooning mode, causing the robot to react with unexpected maneuvers and to lose track of the leading robot.
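A common first mitigation for such noise is temporal filtering of the raw readings; a minimal sketch using a rolling median (the window size is an assumption to be tuned):

```python
from collections import deque
from statistics import median

# Hypothetical rolling-median filter to suppress ToF outliers.
WINDOW = 5  # number of recent readings; an assumption to be tuned
_readings = deque(maxlen=WINDOW)

def filtered_distance(raw_m: float) -> float:
    _readings.append(raw_m)
    return median(_readings)
```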
In this student assistant project, the student will investigate how to resolve these issues. After analyzing the current setup, different sensors and their position on the robot's front should be considered. A suitable driver and some hardware adaptations will be required to add a new sensor to the Duckiebot system. Finally, they will integrate the improved distance measurement setup in our Python/ROS-based autonomous driving pipeline, evaluate it in terms of measurement region and accuracy, and compare the new setup to the baseline.
These modifications should allow us to avoid crashes more reliably and enhance our platooning mode, which will be helpful for further development, especially when moving to more difficult-to-navigate environments, e.g., tracks with intersections and sharp turns.
Prerequisites
- Basic understanding of sensor technology and data transmission protocols
- Experience with, or motivation to familiarize yourself with, Python and ROS
- Structured way of working and strong problem-solving skills
- Interest in autonomous driving and robotics
Contact
michael.meidinger@tum.de
Duckietown - Real-Time Object Recognition for Autonomous Driving
Description
At LIS, we leverage the Duckietown hardware and software ecosystem to experiment with our reinforcement learning (RL) agents, known as learning classifier tables (LCTs), as part of the Duckiebot control system. More information on Duckietown can be found at https://www.duckietown.org/.
In previous work, an algorithm to detect obstacles in the path of a Duckiebot was developed. It uses the robot's Time-of-Flight (ToF) sensor to detect general obstacles in front of it. It can also specifically detect duckies in the camera image, which is primarily used for lane detection, by creating a color-matching mask with bounding rectangles within a certain size range and position on the track. If the robot detects any obstacle in its path, it slows down, then stops and waits for the obstacle to disappear if it is too close.
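In OpenCV terms, the color-matching step could look roughly like this (the HSV bounds and size thresholds are illustrative, not the pipeline's actual values):

```python
import cv2
import numpy as np

# Rough sketch of color-matching duckie detection; the HSV bounds and
# size thresholds are illustrative, not the pipeline's actual values.
def detect_duckies(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Yellow color mask (bounds are an assumption to be tuned).
    mask = cv2.inRange(hsv, np.array([20, 100, 100]), np.array([35, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if 10 < w < 200 and 10 < h < 200:  # plausible duckie size in pixels
            boxes.append((x, y, w, h))
    return boxes
```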
While this algorithm works well for large obstacles right in front of the ToF sensor and for duckies on straight tracks, it struggles to detect obstacles in other scenarios. The ToF sensor only covers a narrow measurement region and misses objects that are off-center or too high/low. Its measurements are also not very reliable, especially at longer distances. The camera-based recognition sometimes confuses duckies with the track's equally yellow centerline. It can also fail to detect them due to blind spots (caused by a dynamic region of interest, not an actual blind spot of the camera) when driving on a curved track.
We also want to include other objects besides duckies in our recognition algorithm, e.g., stop lines or traffic signs at intersections. Since the camera approach is tuned to the size and color of duckies, manual effort would be needed to extend it to different objects. Therefore, in this Bachelor's thesis, we want to overhaul our object recognition to be more reliable and detect various objects in different scenarios. A good approach could be using the YOLO (You Only Look Once) algorithm, a single-pass real-time object detection algorithm based on a convolutional neural network. There is also some related work around this topic in the Duckietown community.
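As an illustration of how compact YOLO-based inference can be, here is a sketch using the ultralytics package (the package choice and pretrained weights are assumptions; detecting duckies and Duckietown traffic signs would require training on a custom, labeled dataset):

```python
from ultralytics import YOLO

# Hypothetical sketch using the ultralytics package; the model file is a
# generic pretrained starting point, not a Duckietown-specific model.
model = YOLO("yolov8n.pt")

def detect_objects(frame_bgr):
    results = model(frame_bgr)[0]  # single-image inference
    detections = []
    for box in results.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        label = results.names[int(box.cls)]
        detections.append((label, float(box.conf), (x1, y1, x2, y2)))
    return detections
```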
The student will start with a continued analysis of the problems of the previous method and literature research regarding viable object detection algorithms. Afterward, they will implement and integrate the selected algorithm into the existing framework. Depending on the approach, some training will be necessary. Furthermore, they will evaluate the detection pipeline for accuracy in different scenarios, latency, and resource utilization. Once a reliable detection system is established, our system can be extended to more complex behavior, such as circumnavigating duckies or responding to traffic signs at intersections.
Prerequisites
- Experience with Python and, ideally, ROS
- Familiarity with neural networks and computer vision
- Structured way of working and strong problem-solving skills
- Interest in autonomous driving
Contact
michael.meidinger@tum.de
Duckietown - Improved RL-based Vehicle Steering
Description
At LIS, we leverage the Duckietown hardware and software ecosystem to experiment with our reinforcement learning (RL) agents, known as learning classifier tables (LCTs), as part of the Duckiebots' control system (https://www.ce.cit.tum.de/lis/forschung/aktuelle-projekte/duckietown-lab/).
More information on Duckietown can be found at https://www.duckietown.org/.
In previous work, an LCT agent to steer Duckiebots was developed using only the angular heading error as the system state. In this Bachelor's thesis, the vehicle steering agent should be improved and its functionality extended.
Starting from the existing Python/ROS implementation of the RL agent and our image processing pipeline, multiple parts of the system should be enhanced. On the environment side, the detection of the lateral offset from the lane center should be made more reliable. This will require an analysis of the current problems and some adaptations in the pipeline, possibly including hardware changes.
With more reliable lane offset values, the agent's state observation can be extended to include them, allowing us to move further from the default PID control towards a purely RL-based steering approach. This will involve modifications to the rule population, the reward function, and potentially the learning method. Different configurations are to be implemented and evaluated in terms of their resulting performance and efficiency.
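The LCT internals are project-specific, but the general shape of such a table-based update can be sketched as Q-learning over a discretized (heading error, lateral offset) state; all constants below are hypothetical, and the actual LCT rule population, reward, and update differ in detail:

```python
import random
from collections import defaultdict

# Hypothetical tabular Q-learning sketch; not the actual LCT implementation.
ACTIONS = [-0.4, -0.2, 0.0, 0.2, 0.4]  # candidate steering commands (placeholder)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # assumed hyperparameters

q = defaultdict(float)  # (state, action) -> value

def discretize(heading_err: float, lateral_off: float):
    # Coarse binning of the continuous observations into table entries.
    return (round(heading_err, 1), round(lateral_off, 1))

def select_action(state):
    if random.random() < EPSILON:  # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])  # exploit

def reward(heading_err: float, lateral_off: float) -> float:
    # Penalize deviation from the lane center and from straight heading.
    return -(abs(heading_err) + abs(lateral_off))

def update(state, action, r, next_state):
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (r + GAMMA * best_next - q[(state, action)])
```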
The thesis aims to shift the vehicle steering entirely to the RL agent, ideally reducing the effort for manual parameter tuning while being comparable in driving performance and computation effort.
Prerequisites
- Experience with Python and, ideally, ROS
- Basic knowledge of reinforcement learning
- Structured way of working and problem-solving skills
Contact
michael.meidinger@tum.de
Completed Theses
Bachelor's Theses
Master's Theses
Research Internships (Forschungspraxis)
Seminars
Student Assistants