Jonas Kantic, M.Sc.
Research Associate
Technische Universität München
Faculty of Electrical Engineering and Information Technology
Chair of Integrated Systems
Arcisstr. 21
80333 München
Tel.: +49.89.289.22962
Fax: +49.89.289.28323
Building: N1 (Theresienstr. 90)
Room: N2118
Email: jonas.kantic@tum.de
Curriculum Vitae
Education
- 2017 - 2020 Master's Studies in Technical Informatics, Leibniz University Hannover
- 2018 - 2019 Chinese Language, Beijing Foreign Studies University, Beijing
- 2013 - 2017 Bachelor's Studies in Technical Informatics, Leibniz University Hannover
Work Experience
- Since 2022 PhD student at the Chair of Integrated Systems, Technical University of Munich
- 2021 Regular Research Assistant, Institute of Microelectronic Systems, Leibniz University Hannover
- 2019 - 2020 Internship: BMW China Services, Beijing
- 2014 - 2020 Student Research Assistant, Institute of Microelectronic Systems, Leibniz University Hannover
Open Student Work
Current Student Work
Leveraging Sparsity in CNN Accelerators for Efficient Edge AI Inference
CNN, Accelerator, Sparsity, Systolic Array
Description
Convolutional Neural Networks (CNNs) are spreading into all sectors, and their use is no longer
limited to powerful computers: they are increasingly deployed inside embedded
devices with limited capabilities.
However, running such complex architectures inside resource-constrained devices
is challenging since they are expensive in terms of memory usage, energy
consumption, execution speed, etc. Therefore, continuous research is being
conducted to devise optimization techniques in CNN accelerators for efficient
inference.
One property that can be exploited inside CNNs is sparsity, i.e., the fraction of zero
values occurring in a computation. In theory, zeros do not need to be calculated since they
do not contribute to the final result. This introduces an opportunity to optimize
the computations performed in a CNN accelerator and improve its efficiency.
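As a minimal illustration of this idea (hypothetical weight values, plain NumPy rather than accelerator hardware): skipping all zero weights produces exactly the same result as the dense computation, while performing far fewer multiply-accumulates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weight matrix with roughly 70% of entries pruned to zero.
weights = rng.standard_normal((64, 64))
weights[rng.random(weights.shape) < 0.7] = 0.0

sparsity = np.mean(weights == 0.0)  # fraction of zero entries

activations = rng.standard_normal(64)
dense = weights @ activations       # dense matrix-vector product

# Sparse variant: iterate only over non-zero weights.
sparse = np.zeros(64)
for r, c in zip(*np.nonzero(weights)):
    sparse[r] += weights[r, c] * activations[c]

assert np.allclose(dense, sparse)   # zeros never contribute to the result
print(f"sparsity: {sparsity:.0%}")
```

In hardware, the analogous saving comes from gating or skipping the multiply-accumulate units fed with zero operands, rather than from a software loop.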
The motivation behind this Master’s thesis is to investigate the architectures used
inside modern CNN hardware accelerators and to define architecture changes to
leverage sparse computations for improved efficiency.
The work packages of the Master’s thesis include:
- Research state-of-the-art architectures used in CNN hardware accelerators.
- Research techniques to improve hardware efficiency by leveraging sparsity.
- Design architecture changes for an existing CNN accelerator to optimize sparse calculations.
- Implement the architecture changes in RTL.
- Evaluate the new accelerator architecture against the original one.
Contact
frieder.jespers@nxp.com
Supervisor:
Memory Capacity in Echo State Networks
Reservoir Computing, ESN, Memory Capacity
Description
In this seminar work, the memory capacity of echo state networks shall be analyzed by reviewing relevant literature. The following questions indicate possible topics for discussion and analysis:
- How can memory in echo state networks be characterized and differentiated?
- How can the memory capacity be measured?
- Which parameters and/or architectural design decisions have an effect on the memory capacity?
The following papers may be used as a starting point:
- Jaeger: "Short term memory in echo state networks"
- Verstraeten et al.: "Memory versus Non-Linearity in Reservoirs"
- Rodan and Tino: "Minimum complexity echo state network"
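As an informal starting point, Jaeger's short-term memory capacity (the sum over delays k of the squared correlation between the delayed input u(t-k) and a trained linear readout) can be estimated numerically. The reservoir size, spectral radius, and input scaling below are illustrative assumptions, not prescribed values:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, washout = 50, 2000, 200

# Random reservoir rescaled to spectral radius 0.9 (a common heuristic choice).
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-0.5, 0.5, N)

u = rng.uniform(-1, 1, T)            # i.i.d. input signal
x = np.zeros((T, N))
for t in range(1, T):
    x[t] = np.tanh(W @ x[t - 1] + w_in * u[t])

X, U = x[washout:], u[washout:]
mc = 0.0
for k in range(1, 2 * N):            # delays k = 1 .. 2N-1
    target = U[:-k]                  # u(t - k)
    states = X[k:]                   # x(t), aligned with the delayed input
    w_out = np.linalg.lstsq(states, target, rcond=None)[0]
    y = states @ w_out
    r = np.corrcoef(y, target)[0, 1]
    mc += r ** 2                     # squared correlation for delay k

print(f"estimated memory capacity: {mc:.1f} (theoretical bound: {N})")
```

Jaeger's result bounds the memory capacity of an N-unit linear ESN by N; comparing this bound with the estimate for nonlinear (tanh) reservoirs is one possible entry point for the discussion.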
Prerequisites
To successfully carry out this seminar work, you should:
- work independently and in a self-organized manner
- have strong reading and writing comprehension of scientific papers
- perform structured literature research
Contact
Supervisor:
Hyperdimensional Computing and Integer Echo State Networks
Hyperdimensional Computing, Reservoir Computing, Echo State Networks
In this seminar work, the student is asked to discuss Hyperdimensional Computing and its application in Reservoir Computing.
Description
Echo State Networks (ESNs) are a recent approach to adaptive, AI-based time series processing and analysis. In contrast to classical deep neural networks, they consist of only three layers: an input layer, the reservoir, and an output layer. This simplicity makes ESNs a promising architecture for efficient hardware implementations.
Hyperdimensional Computing is a mathematical framework for computing in distributed representations with high-dimensional random vectors and neural symbolic representation.
Recently, the reservoir of an ESN has been realized as a hypervector of n-bit integers based on hyperdimensional computing, resulting in an approximation of ESNs called Integer Echo State Networks (intESN) [1].
In this seminar work, the student shall summarize the hyperdimensional computing framework as well as its application to ESNs in the form of the intESN.
Relevant Paper:
[1] Kleyko, D., Frady, E. P., Kheffache, M., & Osipov, E. (2017). Integer Echo State Networks: Efficient Reservoir Computing for Digital Hardware. arXiv. DOI: https://doi.org/10.1109/TNNLS.2020.3043309
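A rough sketch of the integer state update described above may look as follows; the hypervector size, the clipping threshold kappa, and the input quantization scheme are illustrative assumptions, not the exact formulation of [1]:

```python
import numpy as np

rng = np.random.default_rng(2)
n, kappa, T = 1000, 7, 100   # hypervector size, clipping threshold, time steps

# Item memory: one random bipolar hypervector per input quantization level
# (illustrative choice of 11 levels in [-1, 1]).
levels = np.linspace(-1, 1, 11)
item_memory = rng.choice([-1, 1], size=(len(levels), n))

def encode(u):
    """Map a scalar input to the bipolar hypervector of its nearest level."""
    return item_memory[np.argmin(np.abs(levels - u))]

x = np.zeros(n, dtype=np.int64)      # integer reservoir state
for u in rng.uniform(-1, 1, T):
    x = np.roll(x, 1) + encode(u)    # permute (cyclic shift), then add the input
    x = np.clip(x, -kappa, kappa)    # clipping keeps the state in [-kappa, kappa]

print(x[:10])
```

The appeal for digital hardware is visible even in this sketch: the state consists of small bounded integers, and the update needs only shifts, additions, and saturation instead of floating-point nonlinearities.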
Prerequisites
To successfully carry out this seminar work, you should:
- work independently and in a self-organized manner
- have strong reading and writing comprehension of scientific papers
- perform structured literature research
Contact
Supervisor:
Hybrid Cellular Automata in Reservoir Computing
Cellular Automata, Reservoir Computing, ReCA
In this student work, hybrid CA-based reservoirs shall be analyzed, implemented, and evaluated.
Description
Introduction
Reservoir Computing (RC) is a promising and efficient computing framework that has been derived from neural networks and is especially suitable for time series data. In contrast to deep neural networks, which stack several layers one after the other, RC models only have three layers: an input layer, a reservoir, and an output layer.
Initially, the reservoir consisted of several recurrently connected neurons; this model is called an Echo State Network (ESN). However, other reservoir implementations have since been developed and employed, because any dynamic system can serve as a reservoir within the RC framework.
Among the simplest types of dynamic systems are elementary Cellular Automata (CA). Acting on a regular grid of cells, each cell of a CA changes its state over time according to a simple predefined local rule. Despite their simplicity, CA can exhibit rich and complex behavior. Even elementary CA have been shown to serve effectively as the reservoir in RC models. A modification of this approach is to use hybrid CA, in which not every cell adheres to the same rule.
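The hybrid CA idea can be sketched in a few lines of plain NumPy (TensorFlow omitted for brevity); the particular choice of rules 90 and 150 and their alternation across the grid is an illustrative assumption:

```python
import numpy as np

def rule_table(rule_number):
    """Lookup table: 3-bit neighborhood index -> next state (Wolfram numbering)."""
    return np.array([(rule_number >> i) & 1 for i in range(8)], dtype=np.uint8)

def step(cells, rules):
    """One synchronous update; rules[i] is the lookup table of cell i."""
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    idx = (left << 2) | (cells << 1) | right   # neighborhood as 3-bit index
    return np.array([rules[i][idx[i]] for i in range(len(cells))], dtype=np.uint8)

rng = np.random.default_rng(3)
n = 32
cells = rng.integers(0, 2, n).astype(np.uint8)

# Hybrid CA: alternate rules 90 and 150 across the grid.
rules = [rule_table(90) if i % 2 == 0 else rule_table(150) for i in range(n)]
for _ in range(16):
    cells = step(cells, rules)
print(cells)
```

In a ReCA-style setup, the input would be projected onto the cell grid and the concatenated CA states over several iterations would form the feature vector for a trained linear readout; the sketch above only shows the reservoir dynamics themselves.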
Tasks
In this work, the tasks may include:
- To implement a hybrid CA-based reservoir for RC models in Python using the TensorFlow framework
- To evaluate the hybrid CA-based RC model and compare it with regular CA reservoirs as well as ESNs
- Optional: To analyze the dynamic behavior of hybrid CA in terms of cycles and transients, and compare it with homogeneous (regular) CA
Prerequisites
In order to successfully carry out this work, you should:
- be able to work independently and in a self-organized manner
- have strong mathematical skills; preferably have knowledge of finite fields / Galois fields
- have good practice in programming with Python and the TensorFlow framework
- have profound expertise in machine learning principles
Contact
Jonas Kantic | Room: N2118 | Tel: +49.89.289.22962 | E-Mail: jonas.kantic@tum.de
Supervisor:
Completed Student Work
Publications
(No documents in this view)
Preprints
J. Kantic, F. C. Legl, W. Stechele, and J. Hermann. "ReLiCADA - Reservoir Computing using Linear Cellular Automata Design Algorithm" in arXiv, 2023, eprint: arXiv:2308.11522, DOI: https://doi.org/10.48550/arXiv.2308.11522.
Publications (Pre-TUM)
Journals
Klein, Simon Christian, Kantic, Jonas, and Blume, Holger. "Fixed Point Analysis Workflow for Efficient Design of Convolutional Neural Networks in Hearing Aids" in Current Directions in Biomedical Engineering, vol. 7, no. 2, 2021, pp. 787-790, DOI: https://doi.org/10.1515/cdbme-2021-2201.