AkiSens

Adaptive AI-Based Real-Time Analysis of High-Frequency Sensor Data

Introduction

AkiSens is a joint research project of the Technical University of Munich and IfTA GmbH, funded by the Bavarian Ministry of Economic Affairs, Regional Development and Energy within the Bavarian Collaborative Research Program (BayVFP) and managed by VDI/VDE Innovation + Technik GmbH.
The goal of this research project is to develop AI-based methods that are suitable for the real-time analysis of high-frequency sensor data.

Description

Nowadays, high-frequency environmental sensing is on the rise. More and more machines and devices, from industrial facilities and power plants to consumer devices, integrate a large variety of sensors that enable them to collect data about their environment and state of operation. High-frequency sensor signals in particular, as generated, for example, by laser sensors with sampling rates of several million samples per second, are valuable for the operation of these devices.

In order to extract meaningful information from the raw sensor data, sophisticated and adaptive algorithms are needed to process and interpret the data and to derive actions based on the analysis results. This adaptive sensor signal processing can be accomplished with AI- and machine-learning-based methods, enabling a device to detect and classify certain situations and to react accordingly. Neural-network-based approaches have recently dominated the leaderboards of many machine learning applications, achieving state-of-the-art results.

However, due to their deeply stacked architectures, typical implementations of neural network models, including deep fully-connected neural networks, recurrent neural networks, Long Short-Term Memory networks, and Transformers, are not suitable for time-critical real-time processing of high-frequency sensor signals, as they cannot run several million inferences per second.
Thus, implementing AI methods in hardware and applying them to such high-frequency data remains challenging and requires further research.

Application

A prominent example considered in this research project is the processing of laser sensor data measured at two gears attached to the two ends of a turbine shaft, as depicted in the picture above.
While the turbine shaft and the attached gears rotate during operation, the two laser sensors detect the gears' positions, resulting in square-wave signals, each sampled at 4 MHz.
During the turbine's operation, the shaft can be subject to torsional oscillations and vibrations, which, beyond a certain level, can damage the turbine. Hence, a highly accurate measurement of the torque and of the torsional oscillations based on the two sensor signals is desired.
However, the shape of the sensor signals can vary drastically under the influence of shaft movements, material wear, and dirt particles, so that, for example, simple thresholding techniques are insufficient and classic, deterministic signal processing methods no longer suffice. AI-based methods can be exploited in this scenario to overcome these issues and to adapt to changing operating conditions.
In order to meet the real-time requirement, the signal processing should happen on an FPGA, which requires the considered models to be simple to implement and highly parallelizable.
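
To make the limitation of such deterministic baselines concrete, the following simplified Python sketch (an illustrative example with idealized, noise-free square-wave signals and hypothetical helper names and parameters, not the project's actual method) estimates the twist angle from the timing offset between corresponding rising edges of the two gear signals, i.e., exactly the kind of fixed-threshold edge detection that breaks down under the signal variations described above:

    import numpy as np

    # Illustrative, idealized baseline: estimate shaft torsion from the timing
    # offset between the rising edges of the two gear sensor signals,
    # detected with a fixed threshold.

    FS = 4e6  # sampling rate of each laser sensor in Hz (4 MHz)

    def rising_edges(signal, threshold=0.5):
        """Return the sample indices where the square signal crosses the threshold upwards."""
        above = signal > threshold
        return np.flatnonzero(~above[:-1] & above[1:]) + 1

    def twist_angle_deg(sig_a, sig_b, rpm):
        """Estimate the mean twist angle between the two gears in degrees."""
        edges_a = rising_edges(sig_a)
        edges_b = rising_edges(sig_b)
        n = min(len(edges_a), len(edges_b))
        dt = np.mean((edges_b[:n] - edges_a[:n]) / FS)  # mean edge-timing offset in seconds
        return dt / (60.0 / rpm) * 360.0                # one revolution takes 60 / rpm seconds

Once the pulse shapes vary with shaft movement, wear, and dirt, both the fixed threshold and the naive pairing of edges become unreliable, which is precisely what motivates the adaptive, AI-based approach investigated in this project.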

Our Contributions

In the context of this research project, we develop and improve Reservoir Computing models as an adaptive AI method for high-frequency sensor signal processing on FPGAs. We aim to optimize the developed models both for simplicity of implementation and for effectiveness, resulting in a well-balanced compromise between throughput and accuracy.
In our research, we are dealing with the following topics:

  • Reservoir Computing and Echo State Networks as feasible architectures for machine learning models with very high inference rates
  • Cellular Automata as simple reservoirs in Reservoir Computing models, and the analysis of their dynamics when described as linear mappings over Galois fields and rings (see the sketch after this list)
  • Implementation of such models on FPGAs
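
As a small illustration of the second point, the sketch below shows, for the elementary rule 90 with periodic boundary (an arbitrary example, using plain NumPy), that one update step of a linear elementary CA can be written as a matrix-vector product over the Galois field GF(2), i.e., an ordinary matrix multiplication followed by a reduction modulo 2:

    import numpy as np

    # Rule 90 with periodic boundary: the next state of cell i is the XOR of its
    # two neighbours. Since XOR is addition in GF(2), one update step equals a
    # circulant matrix-vector product modulo 2.

    N = 16  # number of cells (arbitrary example size)

    def rule90_step(state):
        """One rule-90 update via explicit neighbour XOR."""
        return np.roll(state, 1) ^ np.roll(state, -1)

    def rule90_matrix(n):
        """Circulant update matrix over GF(2): ones on the two off-diagonals."""
        A = np.zeros((n, n), dtype=np.uint8)
        for i in range(n):
            A[i, (i - 1) % n] = 1
            A[i, (i + 1) % n] = 1
        return A

    x = np.random.default_rng(0).integers(0, 2, size=N, dtype=np.uint8)
    assert np.array_equal(rule90_step(x), (rule90_matrix(N) @ x) % 2)  # both views agree

Describing the CA update as a linear map in this way is what allows its cycle and transient structure to be analyzed algebraically.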

Open Student Work

Current Student Work

Leveraging Sparsity in CNN Accelerators for Efficient Edge AI Inference

Keywords:
CNN, Accelerator, Sparsity, Systolic Array

Description

The applications of Convolutional Neural Networks (CNNs) are spreading across all sectors, and their use is no longer limited to powerful computers: they are increasingly deployed on embedded devices with limited capabilities. However, running such complex architectures on resource-constrained devices is challenging, since they are expensive in terms of memory usage, energy consumption, and execution time. Therefore, ongoing research aims to devise optimization techniques for CNN accelerators that enable efficient inference.
One property that can be exploited inside CNNs is sparsity, i.e., the fraction of zeros among the operands of a computation. In principle, zeros do not need to be processed, since they do not contribute to the final result, which opens an opportunity to optimize the computations performed in a CNN accelerator and improve its efficiency.
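
As a toy illustration of this idea (not tied to any particular accelerator architecture; all names are made up for the example), the Python sketch below computes a dot product, the core operation of a convolution, while skipping multiplications in which either operand is zero and counting how many multiply-accumulate (MAC) operations are actually performed:

    import numpy as np

    def sparse_dot(weights, activations):
        """Dot product that skips zero operands, as a sparsity-aware MAC unit would.

        Returns the result together with the number of MACs actually performed,
        so the saving compared to the dense case (len(weights) MACs) is visible.
        """
        result = 0.0
        macs = 0
        for w, a in zip(weights, activations):
            if w != 0.0 and a != 0.0:   # zero operands contribute nothing
                result += w * a
                macs += 1
        return result, macs

    # Example: pruned weights and ReLU activations are often highly sparse.
    w = np.array([0.5, 0.0, -1.2, 0.0, 0.3, 0.0])
    a = np.array([1.0, 2.0,  0.0, 0.0, 4.0, 1.0])
    value, macs = sparse_dot(w, a)
    print(value, "computed with", macs, "of", len(w), "MACs")  # 1.7 computed with 2 of 6 MACs

A hardware accelerator aims to realize this kind of saving in its datapath, e.g., by compressing operands or by gating and skipping MAC operations when a zero is encountered.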

The motivation behind this Master's thesis is to investigate the architectures used in modern CNN hardware accelerators and to define architectural changes that leverage sparse computations for improved efficiency.
The work packages of the Master's thesis include:

  1. Research state-of-the-art architectures used in CNN hardware accelerators.
  2. Research techniques to improve hardware efficiency by leveraging sparsity.
  3. Design architecture changes for an existing CNN accelerator to optimize sparse calculations.
  4. Implement the architecture changes in RTL.
  5. Evaluate the new accelerator architecture against the original one.


Contact

frieder.jespers@nxp.com

Supervisors:

Jonas Kantic, Frieder Jespers (NXP Semiconductors)

Memory Capacity in Echo State Networks

Keywords:
Reservoir Computing, ESN, Memory Capacity
Short description:
In this seminar work, the memory capacity of echo state networks shall be analyzed by reviewing relevant literature.

Description

In this seminar work, the memory capacity of echo state networks shall be analyzed by reviewing relevant literature. The following questions indicate possible topics for discussion and analysis:

  • How can memory in echo state networks be characterized and differentiated?
  • How can the memory capacity be measured? (a small measurement sketch is given after this list)
  • Which parameters and/or architectural design decisions have an effect on the memory capacity?
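
As a pointer for the second question, the following simplified NumPy sketch (with arbitrary hyperparameters; the definition follows Jaeger's short-term memory capacity) trains, for each delay k, a linear readout to reconstruct the input from k steps earlier and sums the squared correlations between target and reconstruction over all delays:

    import numpy as np

    # Simplified memory-capacity measurement for a random ESN reservoir:
    # MC = sum over delays k of the squared correlation between the delayed
    # input u(t - k) and its best linear reconstruction from the reservoir state.

    rng = np.random.default_rng(0)
    N, T, max_delay = 50, 5000, 40      # reservoir size, time steps, delays (arbitrary)

    W_in = rng.uniform(-0.1, 0.1, size=N)
    W = rng.normal(0, 1, size=(N, N))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius to 0.9

    u = rng.uniform(-0.5, 0.5, size=T)                # i.i.d. input sequence
    X, x = np.zeros((T, N)), np.zeros(N)
    for t in range(T):
        x = np.tanh(W @ x + W_in * u[t])
        X[t] = x

    washout, mc = 100, 0.0
    for k in range(1, max_delay + 1):
        target = u[washout - k : T - k]               # u(t - k)
        states = X[washout:T]
        w_out, *_ = np.linalg.lstsq(states, target, rcond=None)
        mc += np.corrcoef(states @ w_out, target)[0, 1] ** 2   # MC_k
    print("memory capacity is approximately", mc)

How this estimate changes with the reservoir size, the spectral radius, the input scaling, or the reservoir topology is exactly what the questions above ask about.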

The following papers may be used as a starting point:

  1. Jaeger: "Short term memory in echo state networks"
  2. Verstraeten et al.: "Memory versus Non-Linearity in Reservoirs"
  3. Rodan and Tino: "Minimum complexity echo state network"


Prerequisites

To successfully carry out this seminar work, you should:

  • be able to work independently and in a self-organized manner
  • have strong skills in reading and comprehending scientific papers, as well as in scientific writing
  • be able to perform structured literature research


Contact

Jonas Kantic

Room: N2118
Tel.: +49.89.289.22962
E-Mail: jonas.kantic@tum.de


Supervisor:

Hyperdimensional Computing and Integer Echo State Networks

Keywords:
Hyperdimensional Computing, Reservoir Computing, Echo State Networks
Short description:
In this seminar work, the student is asked to discuss Hyperdimensional Computing and its application in Reservoir Computing.

Description

Echo State Networks (ESNs) are a recent approach to adaptive, AI-based time series processing and analysis. In contrast to classical deep neural networks, they consist of only three layers: an input layer, the reservoir, and an output layer. This makes ESNs a promising architecture for efficient hardware implementations.
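
To make this three-layer structure concrete, the following minimal NumPy sketch (with arbitrary, purely illustrative hyperparameters and task) implements the classical ESN state update with fixed random input and reservoir weights and trains only the linear readout by least squares:

    import numpy as np

    # Minimal classical ESN: fixed random input and reservoir weights,
    # only the linear readout (output layer) is trained.

    rng = np.random.default_rng(1)
    N = 100                                   # reservoir size (illustrative)
    W_in = rng.uniform(-0.5, 0.5, size=N)     # input layer
    W = rng.normal(0, 1, size=(N, N))         # reservoir
    W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))   # common echo-state heuristic

    def run_reservoir(u):
        """Collect the reservoir states for a one-dimensional input sequence u."""
        x, states = np.zeros(N), np.zeros((len(u), N))
        for t, u_t in enumerate(u):
            x = np.tanh(W @ x + W_in * u_t)   # reservoir state update
            states[t] = x
        return states

    # Toy task: predict u(t + 1) from the reservoir state at time t.
    u = np.sin(0.1 * np.arange(2001))
    X, y = run_reservoir(u[:-1]), u[1:]
    W_out, *_ = np.linalg.lstsq(X, y, rcond=None)   # train the readout
    print("training MSE:", np.mean((X @ W_out - y) ** 2))

Because only the readout is trained and the state update is a fixed, highly parallel operation, this structure maps well to hardware.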

Hyperdimensional Computing is a mathematical framework for computing with distributed representations, based on high-dimensional random vectors and neuro-symbolic representations.

Recently, the reservoir of an ESN has been realized as a hypervector of n-bit integers based on hyperdimensional computing, resulting in an approximation of ESNs called Integer Echo State Networks (intESN) [1].
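
The following rough sketch illustrates the integer reservoir update of the intESN as described in [1], in a deliberately simplified form (the exact input encoding, clipping threshold, and readout training follow the paper and may differ from this approximation): the previous state hypervector is cyclically shifted, a bipolar hypervector representing the quantized input is added, and the result is clipped to a small integer range.

    import numpy as np

    # Illustrative intESN-style reservoir update (simplified from [1]):
    # an integer state hypervector, a cyclic shift as "recurrence", a bipolar
    # item memory for the quantized input, and clipping instead of tanh.

    rng = np.random.default_rng(2)
    D = 1000                  # hypervector dimension (illustrative)
    KAPPA = 3                 # clipping threshold, keeps entries small integers
    LEVELS = 16               # quantization levels for a scalar input in [-1, 1]

    # One random bipolar hypervector per quantization level (item memory).
    item_memory = rng.choice([-1, 1], size=(LEVELS, D)).astype(np.int8)

    def encode(u):
        """Map a scalar input in [-1, 1] to the bipolar hypervector of its level."""
        level = int(np.clip((u + 1) / 2 * (LEVELS - 1), 0, LEVELS - 1))
        return item_memory[level]

    def intesn_step(x, u):
        """x(n) = clip(shift(x(n - 1)) + encode(u(n))), all in small integers."""
        return np.clip(np.roll(x, 1) + encode(u), -KAPPA, KAPPA)

    x = np.zeros(D, dtype=np.int16)
    for u in np.sin(0.05 * np.arange(200)):
        x = intesn_step(x, u)
    print("state entries stay within", x.min(), "to", x.max())

Since the state consists only of small integers and the update needs no multiplications, such a reservoir is attractive for digital hardware, which is the motivation discussed in [1].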

In this seminar work, the student shall summarize the hyperdimensional computing framework and discuss its application to ESNs in the form of the intESN.

Relevant Paper:

[1] Kleyko, D., Frady, E. P., Kheffache, M., and Osipov, E. (2017). "Integer Echo State Networks: Efficient Reservoir Computing for Digital Hardware." https://doi.org/10.1109/TNNLS.2020.3043309


Prerequisites

To successfully carry out this seminar work, you should:

  • be able to work independently and in a self-organized manner
  • have strong skills in reading and comprehending scientific papers, as well as in scientific writing
  • be able to perform structured literature research


Contact

Jonas Kantic

Room: N2118
Tel.: +49.89.289.22962
E-Mail: jonas.kantic@tum.de


Supervisor:

Hybrid Cellular Automata in Reservoir Computing

Keywords:
Cellular Automata, Reservoir Computing, ReCA
Short description:
In this student work, hybrid CA-based reservoirs shall be analyzed, implemented, and evaluated.

Description

Introduction

Reservoir Computing (RC) is a promising and efficient computing framework that has been derived from neural networks and is especially suitable for time series data. In contrast to deep neural networks, which stack several layers one after the other, RC models only have three layers: an input layer, a reservoir, and an output layer.
In its original formulation, the reservoir consists of several recurrently connected neurons, and the model is called an Echo State Network (ESN). However, other reservoir implementations have been developed and employed, since any dynamical system can serve as a reservoir within the RC framework.

Among the simplest types of dynamical systems are elementary Cellular Automata (CA). Acting on a regular grid of cells, each cell of a CA changes its state over time according to a simple, predefined local rule. Despite their simplicity, CA can exhibit rich and complex behavior, and simple elementary CA have been shown to serve effectively as the reservoir in RC models. A modification of this approach is to use hybrid CA, in which not every cell adheres to the same rule.
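
As a possible starting point for the implementation task below, the following NumPy sketch (an illustrative example; the actual reservoir design, boundary handling, and the coupling to the RC readout are left to the student work) performs one update step of a hybrid elementary CA in which every cell carries its own Wolfram rule number:

    import numpy as np

    # One update step of a hybrid elementary CA: unlike a uniform CA,
    # every cell may follow a different Wolfram rule.

    def hybrid_ca_step(state, rules):
        """state: binary array of cell states; rules: per-cell Wolfram rule numbers (0..255)."""
        left, right = np.roll(state, 1), np.roll(state, -1)   # periodic boundary
        neighbourhood = (left << 2) | (state << 1) | right    # 3-bit pattern index 0..7
        # Bit number `neighbourhood` of each cell's rule gives that cell's next state.
        return (rules >> neighbourhood) & 1

    rng = np.random.default_rng(3)
    n_cells = 32
    state = rng.integers(0, 2, size=n_cells, dtype=np.uint8)
    rules = rng.choice([90, 150], size=n_cells).astype(np.uint8)  # example mix of two linear rules
    state = hybrid_ca_step(state, rules)

For a reservoir, several such steps would be iterated per input, the input would be injected into (part of) the cell states, and the collected states would feed a trained linear readout, as in the ReCA literature.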

Tasks

In this work, the tasks may include:

  • To implement a hybrid CA-based reservoir for RC models in Python using the TensorFlow framework
  • To evaluate the hybrid CA-based RC model and compare it with regular CA reservoirs as well as ESNs
  • Optional: To analyze the dynamic behavior of hybrid CA in terms of cycles and transients, and compare it with homogeneous (regular) CA

Prerequisites

In order to successfully carry out this work, you should:

  • be able to work independently and in a self-organized manner
  • have strong mathematical skills; preferably, have knowledge of finite fields / Galois fields
  • have good practical programming skills in Python and the TensorFlow framework
  • have profound expertise in machine learning principles

Contact

Jonas Kantic

Room: N2118
Tel.: +49.89.289.22962
E-Mail: jonas.kantic@tum.de

Supervisor:

Completed Student Work


Publications

(No documents in this view)

Preprints

J. Kantic, F. C. Legl, W. Stechele, and J. Hermann, "ReLiCADA - Reservoir Computing using Linear Cellular Automata Design Algorithm," arXiv preprint arXiv:2308.11522, 2023. DOI: https://doi.org/10.48550/arXiv.2308.11522