Seminars
Exploring the AMBA AXI Bus Protocol
Description
This seminar examines ARM's Advanced Microcontroller Bus Architecture (AMBA), a widely adopted standard for on-chip communication. AMBA comprises a family of protocols, each tailored to different performance requirements. The specific focus of this seminar is the Advanced eXtensible Interface (AXI) protocol. The aim is to thoroughly investigate its architecture, to analyze the employed handshake mechanisms, and to draw comparisons to the other AMBA protocols as well as to non-AMBA protocols.
Key aspects of the seminar include:
- The starting point is the technical specification of the AXI protocol in [1]. In addition, a comprehensive literature review shall be conducted to identify publications that provide detailed analyses of the AXI protocol as well as alternative bus protocols and comparisons between them.
- The first step is to summarize the functionality of the AXI protocol with a special focus on the handshakes it employs.
- As a next step, the AXI protocol shall be compared with other bus protocols—both AMBA and non-AMBA protocols—with an emphasis on the differences and similarities in their handshake mechanisms.
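The VALID/READY handshake that AXI uses on its channels can be illustrated with a small cycle-level toy model. The Python sketch below is illustrative only (the function name and timing model are inventions, not part of the specification): a transfer completes in any cycle in which both VALID and READY are high, and the sender must hold VALID, with a stable payload, until that happens.

```python
# Toy cycle-by-cycle model of an AXI-style VALID/READY handshake on a single
# channel. Simplified sketch: no reset, no inter-channel dependencies.

def simulate_channel(payloads, ready_pattern):
    """Drive `payloads` through one channel; `ready_pattern` gives the
    receiver's READY signal per cycle. Returns completed (cycle, payload)
    transfers. A transfer completes only when VALID and READY are both high
    in the same cycle; the sender may not retract VALID while waiting."""
    transfers = []
    idx = 0                            # next payload to send
    for cycle, ready in enumerate(ready_pattern):
        valid = idx < len(payloads)    # sender asserts VALID while data remains
        if valid and ready:            # handshake: both high in the same cycle
            transfers.append((cycle, payloads[idx]))
            idx += 1
    return transfers

# Receiver stalls in cycles 1 and 3; the sender holds VALID until accepted.
print(simulate_channel(["A", "B", "C"], [True, False, True, False, True]))
# [(0, 'A'), (2, 'B'), (4, 'C')]
```

Note that the sender never waits for READY before asserting VALID, which is exactly the dependency rule AXI imposes to avoid deadlock.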
The overall findings of this seminar shall be compiled into a concise, 4-page paper and presented in an EDA seminar.
Bibliography:
[1] AMBA AXI Protocol Specification: https://developer.arm.com/documentation/ihi0022/l/?lang=en
Contact
Please contact: Natalie.Simson@infineon.com
Supervisor:
Placement of Systolic Arrays for Neural Network Accelerators
Description
Systolic arrays are a proven architecture for parallel processing across various applications, offering design flexibility, scalability, and high efficiency. With the growing importance of neural networks in many areas, there is a need for efficient processing of the underlying computations, such as matrix multiplications and convolutions. These computations can be executed with a high degree of parallelism on neural network accelerators utilizing systolic arrays.
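To illustrate how such computations map onto the hardware, the following Python sketch simulates a small output-stationary systolic array computing C = A·B. The skewing scheme and cycle count follow one common textbook formulation and are not tied to any particular accelerator design.

```python
# Toy simulation of an n x n output-stationary systolic array computing
# C = A @ B. The processing element (PE) at (i, j) accumulates C[i][j],
# multiplying the A value arriving from the left by the B value arriving
# from above, then forwarding both operands to its neighbours.

def systolic_matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    a_reg = [[0] * n for _ in range(n)]   # A operand currently in PE (i, j)
    b_reg = [[0] * n for _ in range(n)]   # B operand currently in PE (i, j)
    for t in range(3 * n - 2):            # enough cycles to drain the array
        # Operands move one PE to the right / downwards per cycle.
        for i in range(n):
            for j in range(n - 1, 0, -1):
                a_reg[i][j] = a_reg[i][j - 1]
        for j in range(n):
            for i in range(n - 1, 0, -1):
                b_reg[i][j] = b_reg[i - 1][j]
        # Skewed feeding at the edges: row i (column j) is delayed by i (j)
        # cycles, so A[i][k] and B[k][j] meet in PE (i, j) at t = i + j + k.
        for i in range(n):
            k = t - i
            a_reg[i][0] = A[i][k] if 0 <= k < n else 0
        for j in range(n):
            k = t - j
            b_reg[0][j] = B[k][j] if 0 <= k < n else 0
        # Every PE performs one multiply-accumulate per cycle.
        for i in range(n):
            for j in range(n):
                C[i][j] += a_reg[i][j] * b_reg[i][j]
    return C

print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The regularity that placement approaches exploit is visible here: every PE is identical and communicates only with its immediate neighbours.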
Like any application-specific integrated circuit (ASIC) or field-programmable gate array (FPGA) design, neural network accelerators go through the standard phases of chip design. However, treating systolic array hardware designs the same way as any other design may lead to suboptimal results, since exploiting the regular structure of systolic arrays can yield better solution quality [1].
Relevant works for this seminar topic include that of Fang et al. [2], where a regular placement is used as an initial solution and then iteratively improved using the RePlAce [3] placement algorithm. The placement of systolic arrays on FPGAs is discussed by Hu et al. [4], where the processing elements of the systolic array are placed on the DSP columns more efficiently than by the default placement of commercial placement tools.
In this seminar, you will investigate different macro and cell placement approaches, focusing on methods that specifically consider systolic array placement. If you have questions regarding this topic, please feel free to contact me.
[1] S. I. Ward et al., "Structure-Aware Placement Techniques for Designs With Datapaths," in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 32, no. 2, pp. 228-241, Feb. 2013, doi: https://doi.org/10.1109/TCAD.2012.2233862
[2] D. Fang, B. Zhang, H. Hu, W. Li, B. Yuan and J. Hu, "Global Placement Exploiting Soft 2D Regularity," in ACM Transactions on Design Automation of Electronic Systems, vol. 30, no. 2, pp. 1-21, Jan. 2025, doi: https://doi.org/10.1145/3705729
[3] C. -K. Cheng, A. B. Kahng, I. Kang and L. Wang, "RePlAce: Advancing Solution Quality and Routability Validation in Global Placement," in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 38, no. 9, pp. 1717-1730, Sept. 2019, doi: https://doi.org/10.1109/TCAD.2018.2859220
[4] H. Hu, D. Fang, W. Li, B. Yuan and J. Hu, "Systolic Array Placement on FPGAs," 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD), San Francisco, CA, USA, 2023, pp. 1-9, doi: https://doi.org/10.1109/ICCAD57390.2023.10323742
Contact
benedikt.schaible@tum.de
Supervisor:
Physical/Analog Foundation Models
Description
Physical neural networks (PNNs) are a class of neural-like networks that make use of analogue physical systems to perform computations. Although at present confined to small-scale laboratory demonstrations, PNNs could one day transform how artificial intelligence (AI) calculations are performed. Could we train AI models many orders of magnitude larger than present ones? Could we perform model inference locally and privately on edge devices? Research over the past few years has shown that the answer to these questions is probably "yes, with enough research".
Because PNNs can make use of analogue physical computations more directly, flexibly and opportunistically than traditional computing hardware, they could change what is possible and practical for AI systems. Achieving this, however, will require notable progress, rethinking both how AI models work and how they are trained, primarily by considering the problems through the constraints of the underlying hardware physics.
To train PNNs, backpropagation-based and backpropagation-free approaches are now being explored. These methods have various trade-offs and, so far, no method has been shown to scale to large models with the same performance as the backpropagation algorithm widely used in deep learning today. However, this picture has been rapidly changing, and a diverse ecosystem of training techniques provides clues for how PNNs may one day be used to create both more efficient and larger-scale realizations of present-scale AI models.
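As a toy illustration of one backpropagation-based strategy for PNNs (in the spirit of physics-aware training), the sketch below runs the forward pass through a simulated noisy "physical" system while taking the gradient from an idealized digital surrogate. The one-parameter system, the noise model, and the learning rate are all invented for this example.

```python
# One-parameter toy of surrogate-based PNN training: the forward pass runs
# through a noisy stand-in for the analogue hardware, while the gradient is
# computed from an idealized digital model of the same system. The target
# map y = 2*x, the noise level, and the learning rate are arbitrary choices.
import random

def physical_forward(w, x):
    # Stand-in for the physical system: the ideal map w*x plus device noise.
    return w * x + random.gauss(0.0, 0.01)

def surrogate_grad(x, y_measured, y_target):
    # d/dw of the loss 0.5*(y - y_target)^2, using the *measured* physical
    # output but the surrogate's analytic derivative dy/dw = x.
    return (y_measured - y_target) * x

random.seed(0)
w = 0.0
for _ in range(200):                  # fit the noisy system to y = 2*x
    x = random.uniform(-1.0, 1.0)
    y = physical_forward(w, x)
    w -= 0.1 * surrogate_grad(x, y, 2.0 * x)
print(round(w, 2))                    # close to 2.0 despite the device noise
```

The key point is that gradients never need to be measured inside the physical system itself, only its outputs, which is what makes such schemes attractive for hardware that cannot be differentiated directly.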
Based on: Momeni, Ali, et al. "Training of physical neural networks." Nature 645.8079 (2025): 53-61.
Also see: Büchel, Julian, et al. "Analog Foundation Models." arXiv preprint arXiv:2505.09663 (2025).
Contact
ch.wolters@tum.de
Supervisor:
Hardware–Software Co-Design for Neuro-Symbolic Computing
Description
The rapid progress of artificial intelligence (AI) has led to the emergence of a highly promising field known as neuro-symbolic (NeSy) computing. This approach combines the strengths of neural networks, which excel at data-driven learning, with the reasoning capabilities of symbolic AI. Neuro-symbolic models have the potential to overcome the limitations of each approach individually, resulting in interpretable and explainable AI systems that can reason over complex knowledge bases, learn from limited and/or noisy data, and generalize well. However, the exploration of NeSy AI from a system perspective remains limited. This seminar targets an in-depth analysis of state-of-the-art hardware-software co-design techniques for NeSy AI and discusses the associated challenges in improving system efficiency for heterogeneous computing.
Based on: X. Yang et al., "Neuro-Symbolic Computing: Advancements and Challenges in Hardware–Software Co-Design," in IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 71, no. 3, pp. 1683-1689, March 2024, doi: 10.1109/TCSII.2023.3336251.
Contact
ch.wolters@tum.de
Supervisor:
The ZSim Performance Simulator
Description
Performance simulation is a crucial step in modern design-space exploration, enabling the identification of optimal systems. ZSim allows for fast and accurate simulation at the microarchitectural level and targets huge, thousand-core systems.
"Architectural simulation is time-consuming, and the trend
towards hundreds of cores is making sequential simulation
even slower. Existing parallel simulation techniques either
scale poorly due to excessive synchronization, or sacrifice ac-
curacy by allowing event reordering and using simplistic con-
tention models. As a result, most researchers use sequential
simulators and model small-scale systems with 16-32 cores.
With 100-core chips already available, developing simulators
that scale to thousands of cores is crucial.
We present three novel techniques that, together, make
thousand-core simulation practical. First, we speed up de-
tailed core models (including OOO cores) with instruction-
driven timing models that leverage dynamic binary trans-
lation. Second, we introduce bound-weave, a two-phase
parallelization technique that scales parallel simulation on
multicore hosts efficiently with minimal loss of accuracy.
Third, we implement lightweight user-level virtualization
to support complex workloads, including multiprogrammed,
client-server, and managed-runtime applications, without
the need for full-system simulation, sidestepping the lack
of scalable OSs and ISAs that support thousands of cores.
We use these techniques to build zsim, a fast, scalable,
and accurate simulator. On a 16-core host, zsim models a
1024-core chip at speeds of up to 1,500 MIPS using simple
cores and up to 300 MIPS using detailed OOO cores, 2-3 or-
ders of magnitude faster than existing parallel simulators.
Simulator performance scales well with both the number
of modeled cores and the number of host cores. We vali-
date zsim against a real Westmere system on a wide variety
of workloads, and find performance and microarchitectural
events to be within a narrow range of the real system." - Daniel Sanchez and Christos Kozyrakis: "ZSim: Fast and Accurate Microarchitectural Simulation of Thousand-Core Sytsems" 2013
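To make the bound-weave idea more tangible, here is a deliberately simplified Python sketch: a "bound" phase simulates each core independently with zero-contention latencies, and a "weave" phase replays the recorded accesses in timestamp order against a single shared port. The interval structure, the contention model, and all numbers are invented for illustration and are far simpler than ZSim's actual implementation.

```python
# Toy two-phase (bound-weave-style) simulation of cores sharing one memory port.

def bound_phase(core_traces):
    """Phase 1: simulate every core independently (trivially parallelizable).
    `core_traces` holds, per core, the zero-contention latency of each of its
    memory accesses. The recorded timestamps are lower bounds, since
    contention can only delay an access, never speed it up."""
    events = []
    for core_id, latencies in enumerate(core_traces):
        t = 0
        for latency in latencies:
            t += latency
            events.append((t, core_id))
    return sorted(events)             # weave phase needs global time order

def weave_phase(events, service_time=2):
    """Phase 2: replay accesses in order, modeling a single shared port."""
    port_free = 0                     # cycle at which the port is idle again
    delays = []
    for arrival, core_id in events:
        start = max(arrival, port_free)        # queue behind earlier accesses
        port_free = start + service_time
        delays.append((core_id, start - arrival))  # contention delay added
    return delays

events = bound_phase([[1, 3], [2]])   # core 0 accesses at t=1,4; core 1 at t=2
print(weave_phase(events))            # [(0, 0), (1, 1), (0, 1)]
```

The real bound-weave algorithm iterates this over short intervals and feeds the weave-phase delays back into the next interval's bound phase; this sketch shows only a single interval.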
Contact
conrad.foik@tum.de
Supervisor:
GPU-accelerated RTL Simulation
Description
t.b.d.
Contact
johannes.geier@tum.de
Supervisor:
Dynamic Neural Networks for Adaptive Inference
Description
Deep Neural Networks (DNNs) have shown high predictive performance on various tasks. However, the large compute requirements of DNNs restrict their potential deployment on embedded devices with limited resources.
Dynamic Neural Networks (DyNNs) are a class of neural networks that can adapt their structure, parameters, or computation graph based on the input data. Unlike conventional DNNs, which have a fixed architecture once trained, DyNNs offer greater efficiency and adaptability. In particular, DyNNs can reduce latency, memory usage, and energy consumption during inference by activating only the necessary subset of their structure, depending on the difficulty of the input data.
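One widely studied DyNN mechanism, early exiting, can be sketched in a few lines of Python: intermediate classifier heads let easy inputs leave the network before the expensive later stages run. The stages, heads, and threshold below are stand-in callables invented for illustration, not a real network.

```python
# Minimal sketch of early-exit dynamic inference: after each stage, a cheap
# intermediate head produces a class distribution; if its confidence passes
# the threshold, the remaining (more expensive) stages are skipped entirely.

def early_exit_infer(x, stages, heads, threshold=0.9):
    """Run stages in order; return (predicted class, stages executed)."""
    for executed, (stage, head) in enumerate(zip(stages, heads), start=1):
        x = stage(x)                  # pay for this block's computation
        probs = head(x)               # cheap intermediate classifier
        conf = max(probs)
        if conf >= threshold:         # confident enough: exit early
            break
    return probs.index(conf), executed

# Toy pipeline: each "stage" raises a score, each "head" turns it into a
# two-class distribution whose top confidence grows with the score.
stages = [lambda s: s + 0.3] * 3
heads = [lambda s: [min(s, 1.0), max(1.0 - s, 0.0)]] * 3
print(early_exit_infer(0.4, stages, heads))   # (0, 2): exited after 2 of 3 stages
```

The per-input cost is what makes such networks "dynamic": harder inputs run more stages, easier ones fewer, at identical trained weights.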
The survey in [1] provides an overview of DyNN methods up to the year 2021. This seminar topic covers a literature review of more recent DyNN methods, with a focus on DyNNs for computer vision tasks (cf. Sections 2 and 3 in [1]) and their training methodologies (cf. Section 5 in [1]). You are expected to find 3-4 recent papers on this topic and to review and compare their methods, including their advantages and drawbacks.
[1] Han, Yizeng, et al. "Dynamic neural networks: A survey." IEEE Transactions on Pattern Analysis and Machine Intelligence 44.11 (2021): 7436-7456.
Contact
mikhael.djajapermana@tum.de
Supervisor:
Innovative Memory Architectures in DNN Accelerators
Description
With the growing complexity of neural networks, more efficient and faster processing solutions are vital to enable the widespread use of artificial intelligence. Systolic arrays are among the most popular architectures for energy-efficient and high-throughput DNN hardware accelerators.
While many works implement DNN accelerators using systolic arrays on FPGAs, several application-specific integrated circuit (ASIC) designs from industry and academia have also been presented [1-3]. Such accelerators place high demands on memory accesses, both in terms of data availability and latency hiding; innovative memory architectures can enable more efficient data access, reducing latency and bridging the gap towards even more powerful DNN accelerators.
One example is the Eyeriss v2 ASIC [1], which uses a distributed Global Buffer (GB) layout tailored to the demands of its row-stationary systolic array dataflow.
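To see why on-chip buffering matters, a back-of-the-envelope Python model can compare off-chip traffic for a matrix multiply with and without tiled reuse from an on-chip buffer. The tile shape and the output-stationary assumption are illustrative choices, not a model of any particular accelerator.

```python
# Rough model of DRAM traffic (in operand words) for C[M,N] = A[M,K] @ B[K,N].

def dram_words_untiled(M, K, N):
    # Worst case, no on-chip reuse: every multiply-accumulate fetches
    # both of its operands from off-chip DRAM.
    return 2 * M * K * N

def dram_words_tiled(M, K, N, T):
    # Output-stationary tiling with a buffer holding one T x K strip of A
    # and one K x T strip of B: each T x T tile of C streams its strips
    # from DRAM once, so every fetched word is reused T times on chip.
    tiles = (M // T) * (N // T)       # assumes T divides M and N evenly
    return tiles * (T * K + K * T)    # one A-strip + one B-strip per tile

M = K = N = 64
print(dram_words_untiled(M, K, N) // dram_words_tiled(M, K, N, 8))  # reuse factor: 8
```

The traffic reduction equals the tile size T, which is why accelerator memory hierarchies are co-designed with the dataflow: a larger buffer permits larger tiles and proportionally fewer DRAM accesses.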
In this seminar, a survey of state-of-the-art DNN accelerator designs and design frameworks shall be created, focusing on their memory hierarchy.
References and Further Resources:
[1] Y. -H. Chen, T. -J. Yang, J. Emer and V. Sze. 2019 "Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices," in IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 9, no. 2, pp. 292-308, June 2019, doi: https://doi.org/10.1109/JETCAS.2019.2910232
[2] Yunji Chen, Tianshi Chen, Zhiwei Xu, Ninghui Sun, and Olivier Temam. 2016. "DianNao family: energy-efficient hardware accelerators for machine learning." In Commun. ACM 59, 11 (November 2016), 105–112. https://doi.org/10.1145/2996864
[3] Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, et al. 2017. "In-Datacenter Performance Analysis of a Tensor Processing Unit." In Proceedings of the 44th Annual International Symposium on Computer Architecture (ISCA '17). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3079856.3080246
[4] Rui Xu, Sheng Ma, Yang Guo, and Dongsheng Li. 2023. A Survey of Design and Optimization for Systolic Array-based DNN Accelerators. ACM Comput. Surv. 56, 1, Article 20 (January 2024), 37 pages. https://doi.org/10.1145/3604802
[5] Bo Wang, Sheng Ma, Shengbai Luo, Lizhou Wu, Jianmin Zhang, Chunyuan Zhang, and Tiejun Li. 2024. "SparGD: A Sparse GEMM Accelerator with Dynamic Dataflow." ACM Trans. Des. Autom. Electron. Syst. 29, 2, Article 26 (March 2024), 32 pages. https://doi.org/10.1145/3634703
Contact
benedikt.schaible@tum.de