CeCaS

CeCaS is a supra-regional research project funded by the BMBF to develop a "Central Car Server" for future automated, connected and electrified vehicles. The project network consists of numerous industrial partners, accompanied by several academic research groups.

Overarching objective: an automotive supercomputing platform, i.e. a powerful Central Car Server concept based on new automotive-qualified high-performance processors in FinFET technology, supported by application-specific accelerators and an adaptive automotive software stack for highly automated, connected vehicles.

At the Technical University of Munich, three chairs (TUM-AIR, TUM-LIS, TUM-SEC) are involved in the CeCaS project network, contributing in the areas of model-based development, requirements management, software architecture, memory technology, and security.

 

Contribution of LIS

TUM-LIS is developing approaches for intelligent prefetching and write-back of data by the memory controller to increase the performance of the automotive processor. In addition, a prediction model for future address and data accesses is being investigated using machine learning methods such as reinforcement learning.
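As a rough illustration of this direction, the following C sketch shows a minimal reinforcement-style predictor: it keeps a value for each candidate stride, rewards strides that would have predicted the observed access, and greedily uses the best-valued stride to guess the next address. The stride candidates, learning rate and reward scheme are assumptions made for illustration only, not the prediction model under development in CeCaS.

/* Minimal sketch of a reinforcement-style next-address predictor.
 * Illustrative only: stride candidates, learning rate and reward
 * scheme are assumptions, not the CeCaS prediction model. */
#include <stdint.h>
#include <stdio.h>

#define NUM_STRIDES 4
static const int64_t strides[NUM_STRIDES] = {4, 8, 64, 4096};
static double q[NUM_STRIDES];        /* learned value per stride */
static const double alpha = 0.2;     /* learning rate            */

/* Greedy policy: predict the next address with the best-valued stride. */
static uint64_t predict_next(uint64_t last_addr)
{
    int best = 0;
    for (int i = 1; i < NUM_STRIDES; i++)
        if (q[i] > q[best])
            best = i;
    return last_addr + (uint64_t)strides[best];
}

/* Reward every stride that would have predicted the observed access. */
static void update(uint64_t last_addr, uint64_t observed_addr)
{
    for (int i = 0; i < NUM_STRIDES; i++) {
        double reward =
            (last_addr + (uint64_t)strides[i] == observed_addr) ? 1.0 : 0.0;
        q[i] += alpha * (reward - q[i]);
    }
}

int main(void)
{
    /* Toy trace of sequential 64-byte accesses. */
    uint64_t trace[] = {0x1000, 0x1040, 0x1080, 0x10C0, 0x1100};
    uint64_t last = trace[0];
    for (int i = 1; i < 5; i++) {
        printf("predicted 0x%llx, actual 0x%llx\n",
               (unsigned long long)predict_next(last),
               (unsigned long long)trace[i]);
        update(last, trace[i]);
        last = trace[i];
    }
    return 0;
}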

The current approach provides a wrapper layer around the DDR controller that realizes this functionality. It reduces the access latencies to external volatile and non-volatile main memories by adaptively prefetching data and instructions into fast on-chip SRAM and by intelligently writing modified data back from the SRAM to the external main memory.
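To make the idea more tangible, the following C model mimics such a wrapper in software: a few SRAM-resident blocks sit in front of a DRAM array, a read additionally fetches the next block as a simple prefetch, and modified blocks are written back to DRAM when they are evicted. Block size, buffer size and the direct-mapped organization are illustrative assumptions, not the actual wrapper design.

/* Software model of a wrapper-style prefetch buffer in front of a DRAM.
 * A minimal sketch under assumed parameters; not the CeCaS wrapper itself. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define BLOCK_WORDS  16              /* words per buffered block      */
#define NUM_BLOCKS    8              /* blocks held in on-chip SRAM   */
#define DRAM_WORDS 4096

static uint32_t dram[DRAM_WORDS];    /* models external main memory   */

typedef struct {
    int      valid, dirty;
    uint32_t tag;                    /* block index in DRAM           */
    uint32_t data[BLOCK_WORDS];      /* models fast on-chip SRAM      */
} block_t;

static block_t sram[NUM_BLOCKS];

/* Ensure the block containing word 'addr' resides in SRAM: write back a
 * dirty victim if needed, then fetch the requested block from DRAM. */
static block_t *fetch_block(uint32_t addr)
{
    uint32_t tag  = addr / BLOCK_WORDS;
    block_t *slot = &sram[tag % NUM_BLOCKS];

    if (!slot->valid || slot->tag != tag) {
        if (slot->valid && slot->dirty)  /* write-back of modified data */
            memcpy(&dram[slot->tag * BLOCK_WORDS], slot->data,
                   sizeof slot->data);
        memcpy(slot->data, &dram[tag * BLOCK_WORDS], sizeof slot->data);
        slot->valid = 1;
        slot->dirty = 0;
        slot->tag   = tag;
    }
    return slot;
}

uint32_t mem_read(uint32_t addr)
{
    block_t *b = fetch_block(addr);
    if (addr + BLOCK_WORDS < DRAM_WORDS) /* simple next-block prefetch */
        fetch_block(addr + BLOCK_WORDS);
    return b->data[addr % BLOCK_WORDS];
}

void mem_write(uint32_t addr, uint32_t value)
{
    block_t *b = fetch_block(addr);
    b->data[addr % BLOCK_WORDS] = value;
    b->dirty = 1;                        /* mark for later write-back */
}

int main(void)
{
    mem_write(100, 42);
    printf("read back: %u\n", mem_read(100));
    return 0;
}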

In the work on the wrapper layer, we cooperate with TUM-SEC, who investigate suitable lightweight techniques for transparent on-the-fly encryption and decryption of data stored in external memory to prevent unauthorized access, as well as error-correcting codes.

Workflow

In the CeCaS project we take a two-sided approach. On the one hand, we examine various implementation concepts and approaches with a SystemC-based simulation model together with our partners. On the other hand, we are working on an FPGA implementation, which offers a lower level of abstraction and thus enables even more precise analyses. In both areas there are often topics for student work.

 

 

Involved Researchers

Open Student Work

Fine-Granular Page Preloading Mechanism on an FPGA Prototype

Keywords:
VHDL, C Programming, Distributed Memory, Data Migration, Task Migration, Hardware Accelerator

Description

Its main advantages, a simple design with only one transistor per bit and a high memory density, make DRAM omnipresent in most computer architectures. However, DRAM accesses are rather slow and require a dedicated DRAM controller that coordinates the read and write accesses to the DRAM as well as the refresh cycles. To reduce the DRAM access latency, memory prefetching is a common technique that fetches data prior to its actual usage. However, this requires sophisticated prediction algorithms in order to prefetch the right data at the right time.


The goal of this thesis is to refine an existing DRAM preloading mechanism on an FPGA-based prototype platform. Instead of preloading a whole memory page in a single atomic operation, the refinement should lead to fine-granular page preloading, i.e. loading multiple small fractions of a page step by step while allowing regular memory accesses to be prioritized in between.
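The intended behaviour can be sketched in C as follows: the page is moved in small chunks, and pending regular accesses are served between the chunks. The chunk size and the demand-access hooks are placeholders; on the prototype, the mechanism itself is implemented in VHDL.

/* Conceptual sketch of fine-granular page preloading: the page is copied
 * in small chunks, and pending regular accesses are served between chunks.
 * Chunk size and the demand-access hooks are illustrative placeholders. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

#define PAGE_BYTES  4096
#define CHUNK_BYTES   64                /* fraction of a page moved per step */

static uint8_t dram_page[PAGE_BYTES];   /* source page in external DRAM  */
static uint8_t sram_page[PAGE_BYTES];   /* destination in on-chip SRAM   */

static int demand_pending = 0;          /* set by the regular access path */

static void serve_demand_access(void)
{
    /* Placeholder: the wrapper would serve the pending CPU access here. */
    demand_pending = 0;
}

void preload_page(uint8_t *dst, const uint8_t *src)
{
    for (size_t off = 0; off < PAGE_BYTES; off += CHUNK_BYTES) {
        while (demand_pending)          /* regular accesses win */
            serve_demand_access();
        memcpy(dst + off, src + off, CHUNK_BYTES);
    }
}

int main(void)
{
    memset(dram_page, 0xAB, sizeof dram_page);
    preload_page(sram_page, dram_page);
    printf("first byte after preload: 0x%02X\n", (unsigned)sram_page[0]);
    return 0;
}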


Towards this goal, you'll complete the following tasks:
1. Understanding the existing Memory Access and Preloading mechanism
2. VHDL implementation of the refined preloading functionalities
3. Write and execute small bare-metal test programs (see the sketch after this list)
4. Analyse and discuss the performance results
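For task 3, a bare-metal test program could look roughly like the following C sketch, which reads a buffer once without and once after preloading and compares the elapsed cycles. read_cycles() and trigger_preload() are hypothetical placeholders that would have to be mapped to the prototype's cycle counter and preload interface.

/* Sketch of a small bare-metal test: walk a buffer and measure the access
 * time with and without preloading. The stubs below only keep the sketch
 * compilable; they are assumptions, not project code. */
#include <stdint.h>
#include <stdio.h>

#define BUF_WORDS 1024
static volatile uint32_t buffer[BUF_WORDS];   /* region placed in DRAM */

static uint64_t read_cycles(void)
{
    static uint64_t fake = 0;                 /* placeholder cycle counter */
    return fake++;
}

/* Hypothetical trigger for the preloading hardware; name is illustrative. */
static void trigger_preload(volatile void *addr) { (void)addr; }

static uint64_t measure_pass(void)
{
    uint64_t start = read_cycles();
    uint32_t sum = 0;
    for (int i = 0; i < BUF_WORDS; i++)
        sum += buffer[i];                     /* generate read accesses */
    (void)sum;
    return read_cycles() - start;
}

int main(void)
{
    uint64_t cold = measure_pass();           /* without preloading */
    trigger_preload(buffer);
    uint64_t warm = measure_pass();           /* after preloading   */
    printf("cycles without preload: %llu, with preload: %llu\n",
           (unsigned long long)cold, (unsigned long long)warm);
    return 0;
}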

Prerequisites

  • Good Knowledge about MPSoCs
  • Good VHDL skills
  • Good C programming skills
  • High motivation
  • Self-responsible workstyle

Contact

Oliver Lenke

o.lenke@tum.de

Supervisor:

Oliver Lenke

Current Student Work

Design and Implementation of a Memory Prefetching Mechanism on an FPGA Prototype

Keywords:
VHDL, C Programming, Distributed Memory, Data Migration, Task Migration, Hardware Accelerator

Description

Its main advantages, a simple design with only one transistor per bit and a high memory density, make DRAM omnipresent in most computer architectures. However, DRAM accesses are rather slow and require a dedicated DRAM controller that coordinates the read and write accesses to the DRAM as well as the refresh cycles. To reduce the DRAM access latency, memory prefetching is a common technique that fetches data prior to its actual usage. However, this requires sophisticated prediction algorithms in order to prefetch the right data at the right time.
The goal of this thesis is to design and implement a DRAM preloading mechanism in an existing FPGA-based prototype platform and to evaluate the design appropriately.
Towards this goal, you'll complete the following tasks:
1. Understanding the existing Memory Access mechanism
2. VHDL implementation of the preloading functionalities
3. Write and execute small bare-metal test programs
4. Analyse and discuss the performance results

Prerequisites

  • Good Knowledge about MPSoCs
  • Good VHDL skills
  • Good C programming skills
  • High motivation
  • Self-responsible workstyle

Contact

Oliver Lenke

o.lenke@tum.de

Supervisor:

Oliver Lenke

Completed Student Work

Supervisor:

Oliver Lenke


Student:

Ali Emre Heybeli