Seminars
A Comparison of Recent Memory Prefetching Techniques
Description
DRAM modules are indispensable for modern computer architectures. Their main advantages are a simple cell design with only one transistor per bit and a high memory density.
However, DRAM accesses are rather slow and require a dedicated DRAM controller that coordinates the read and write accesses to the DRAM as well as the refresh cycles.
To reduce the effective DRAM access latency, the cache hierarchy can be extended by dedicated hardware access predictors that preload data into the caches before it is actually accessed.
The goal of this seminar is to study and compare cache-level prefetching mechanisms and access predictors, including several optimizations, and to present their benefits and use cases. A starting point into the literature will be provided.
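To make the idea of a cache-level access predictor concrete, the following is a minimal sketch of a classic stride prefetcher in Python. The class, its fields, and the example instruction address are illustrative assumptions, not taken from the seminar literature; real prefetchers are implemented in hardware and differ in table organization and confidence handling.

```python
# Minimal sketch of a per-PC stride prefetcher (illustrative only).
# For each load instruction (identified by its PC) it tracks the stride between
# successive accessed addresses and, once the stride is stable, predicts the
# next addresses to preload into the cache.

class StridePrefetcher:
    def __init__(self, degree=2):
        self.table = {}        # PC -> (last_addr, last_stride, confidence)
        self.degree = degree   # how many blocks to prefetch ahead

    def access(self, pc, addr):
        """Return a list of addresses to prefetch for this access."""
        last_addr, last_stride, conf = self.table.get(pc, (addr, 0, 0))
        stride = addr - last_addr
        if stride == last_stride and stride != 0:
            conf = min(conf + 1, 3)      # saturating confidence counter
        else:
            conf = max(conf - 1, 0)
        self.table[pc] = (addr, stride, conf)
        if conf >= 2:                    # only prefetch once the pattern is stable
            return [addr + stride * i for i in range(1, self.degree + 1)]
        return []

# Example: a load walking an array with a fixed stride of 64 bytes.
pf = StridePrefetcher()
for a in range(0x1000, 0x1400, 64):
    print(hex(a), [hex(p) for p in pf.access(pc=0x400123, addr=a)])
```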
Prerequisites
B.Sc. in Electrical Engineering or a similar degree
Contact
Oliver Lenke
o.lenke@tum.de
Supervisor:
Asynchronous Design Using Standard EDA Tools
Description
Asynchronous logic has several advantages over conventional, clocked circuits, which make it attractive for certain application areas such as networks-on-chip, mixed-mode electronics, and arithmetic processors. Furthermore, a properly designed asynchronous circuit may offer both better performance and significantly lower power consumption than a synchronous equivalent.
Modern EDA tools, however, are not optimised for asynchronous design. This unfortunately complicates everything from architectural description to synthesis, implementation, verification, and testing. A major concern is that most tools rely on global clocks both for optimisation and for timing checks. For asynchronous circuits, where all functional blocks are self-timed, this means the tools cannot use clock constraints to optimise the critical path, thereby nullifying any speed advantage. More critically, EDA tools are not even guaranteed to produce functioning netlists. Therefore, in order to produce and test asynchronous circuits of non-trivial complexity, the standard design flow must be modified to take the characteristics of asynchronous logic into account.
For this seminar, the student should survey the state of the art in asynchronous logic design and testing with current industry-standard EDA tools, and identify which design-flow modifications are required to produce robust and efficient asynchronous circuits.
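As a small illustration of what "self-timed" means for tooling, below is a behavioral Python model of a Muller C-element, the state-holding gate used in many asynchronous handshake circuits. This is a generic textbook construct sketched under our own naming, not tied to any particular EDA flow or library cell.

```python
# Behavioral model of a Muller C-element (illustrative only).
# The output switches to 1 only when both inputs are 1, to 0 only when both
# are 0, and otherwise holds its previous value. This state-holding behavior
# without a clock is exactly what clock-oriented synthesis and timing tools
# tend to handle poorly.

class CElement:
    def __init__(self):
        self.out = 0

    def update(self, a, b):
        if a == 1 and b == 1:
            self.out = 1
        elif a == 0 and b == 0:
            self.out = 0
        # otherwise: keep the previous output value
        return self.out

c = CElement()
for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
    print(a, b, "->", c.update(a, b))
```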
Supervisor:
Simulation of Chiplet-based Systems
Description
With technology nodes approaching their physical limits, keeping up with Moore's law becomes increasingly difficult. As a strategy to enable further scaling, chiplet-based architectures will likely become more prevalent, as they offer benefits regarding development effort and manufacturing yield.
Even when reusing IP, creating an entire multi-chiplet system is still a complicated task. Following a top-down approach, a high-level simulation can help design the system architecture before moving to the register-transfer level. Since most available simulators cater to classical SoCs, setting up a simulation for chiplet-based systems may require special attention when selecting a framework and additional effort in adapting it.
This seminar work should investigate what needs to be considered when simulating chiplet-based systems compared to SoCs, which simulation frameworks are viable, and what challenges simulating chiplets, and especially their interconnect, brings.
A starting point for literature could be the following paper:
https://dl.acm.org/doi/abs/10.1145/3477206.3477459
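To give a flavour of the abstraction level such a high-level simulation might start from, here is a minimal Python latency model that distinguishes on-die hops from die-to-die (D2D) crossings. All constants and function names are assumptions made up for illustration, not figures from the paper above or from any real interconnect.

```python
# Minimal sketch of a high-level latency model for a chiplet-based system.
# Each request crosses zero or more die-to-die (D2D) links, which add latency
# and serialisation delay on top of the on-die network. (Illustrative only.)

ON_DIE_HOP_NS = 2.0    # assumed per-hop latency inside a chiplet
D2D_LINK_NS   = 8.0    # assumed latency of one die-to-die crossing
D2D_BW_GBPS   = 64.0   # assumed die-to-die link bandwidth (Gbit/s)

def request_latency(on_die_hops, d2d_crossings, payload_bytes):
    """Estimate the end-to-end latency of one request in nanoseconds."""
    serialisation = payload_bytes * 8 / D2D_BW_GBPS   # bits / (Gbit/s) = ns
    return (on_die_hops * ON_DIE_HOP_NS
            + d2d_crossings * (D2D_LINK_NS + serialisation))

# The same 64-byte cache-line request, served locally vs. from a remote chiplet:
print("local  :", request_latency(on_die_hops=4, d2d_crossings=0, payload_bytes=64), "ns")
print("remote :", request_latency(on_die_hops=6, d2d_crossings=2, payload_bytes=64), "ns")
```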
Contact
michael.meidinger@tum.de
Supervisor:
An Overview of Service Migration in Modern Edge Computer Networks
Description
In modern Edge computer networks, applications and services must adhere to service-level agreements (SLAs) such as low latency or a minimum throughput. Depending on demand and resource availability, these services have to be migrated between compute nodes to keep meeting these SLAs.
Service migration is a critical aspect of Edge computing, enabling the movement of services closer to the data source or the end users for improved performance and reduced latency. However, it comes with its own set of challenges, such as maintaining service continuity and managing resource constraints. Migration involves checkpointing and restarting the applications (potentially running in containers), as well as moving their data from one compute node to another. This data movement could be further accelerated with RDMA technology.
This seminar should provide an overview of the technologies required for service migration and explore recent hardware and software improvements for low-latency service migration.
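The following is a minimal sketch of stop-and-copy checkpoint/restore migration in Python. It is purely illustrative: the toy service, the pickle-based checkpoint, and the in-process "transfer" stand in for what real Edge platforms do with container checkpointing (e.g. CRIU) and network or RDMA transfers; none of the names refer to a real migration framework.

```python
# Minimal sketch of stop-and-copy service migration (illustrative only).
# The "service" is a plain Python object whose state is serialised to bytes,
# shipped, and restored on the target node; real systems checkpoint whole
# containers and move the image over the network, possibly via RDMA.

import pickle
import time

class CounterService:
    """Toy stateful service standing in for a containerised application."""
    def __init__(self):
        self.requests_served = 0

    def handle_request(self):
        self.requests_served += 1

def migrate(service):
    """Checkpoint the service, 'transfer' it, and restore it on the target."""
    t0 = time.perf_counter()
    checkpoint = pickle.dumps(service)    # checkpoint: freeze state to bytes
    restored = pickle.loads(checkpoint)   # restore on the destination node
    downtime_ms = (time.perf_counter() - t0) * 1000
    return restored, len(checkpoint), downtime_ms

svc = CounterService()
for _ in range(10):
    svc.handle_request()

new_svc, size, downtime = migrate(svc)
print(f"migrated {size} bytes, downtime {downtime:.3f} ms, "
      f"state preserved: {new_svc.requests_served} requests")
```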
Contact
marco.liess@tum.de