Compilation of Available Open-Source Software
Stable Gaussian Process based Tracking Control of Lagrangian Systems
High-performance tracking control can only be achieved if a good model of the dynamics is available. However, such a model is often difficult to obtain from first-order physics alone. In this paper, we develop a data-driven control law that ensures closed-loop stability of Lagrangian systems. For this purpose, we use Gaussian Process regression for the feedforward compensation of the unknown dynamics of the system. The gains of the feedback part are adapted based on the uncertainty of the learned model. Thus, the feedback gains are kept low as long as the learned model describes the true system with sufficient precision. We show how to select a suitable gain-adaptation law that incorporates the uncertainty of the model to guarantee a globally bounded tracking error. A simulation with a robot manipulator demonstrates the efficacy of the proposed control law.
The code for the publication "Stable Gaussian Process based Tracking Control of Lagrangian Systems" by Thomas Beckers, Jonas Umlauft, Dana Kulic, Sandra Hirche published at the IEEE Conference on Decision and Control (CDC) 2018 is available here.
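The core idea can be sketched numerically. The following is an illustrative example only, not the published implementation: a GP fitted to noisy samples of an unknown scalar dynamics term provides a feedforward compensation, and the proportional feedback gain grows with the GP's predictive standard deviation. The dynamics, kernel parameters, and gain law below are assumptions chosen for illustration.

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel with unit prior variance (illustrative)."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, 30)                        # training inputs
y = np.sin(3.0 * X) + 0.01 * rng.standard_normal(30)  # noisy samples of unknown f

K = rbf(X, X) + 1e-4 * np.eye(30)
w = np.linalg.solve(K, y)

def gp_predict(x):
    kx = rbf(np.atleast_1d(x), X)
    mu = float(kx @ w)
    var = 1.0 - float(kx @ np.linalg.solve(K, kx.T))  # posterior variance
    return mu, np.sqrt(max(var, 0.0))

def control(x, x_des, kp_min=1.0, kp_scale=10.0):
    """GP feedforward plus uncertainty-adapted proportional gain."""
    mu, sigma = gp_predict(x)
    kp = kp_min + kp_scale * sigma   # low gain where the model is trusted
    return -mu + kp * (x_des - x), kp

_, kp_near = control(0.0, 0.0)   # inside the training region: low gain
_, kp_far = control(10.0, 0.0)   # far from data: high uncertainty, high gain
```

Near the training data the predictive uncertainty is small and the gain stays close to its minimum; far from data the prior variance dominates and the gain increases accordingly.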
Learning Stable Gaussian Process State Space Models
Data-driven nonparametric models gain importance as control systems are increasingly applied in domains where classical system identification is difficult, e.g., because of the system's complexity, sparse training data or its probabilistic nature. Gaussian process state space models (GP-SSM) are a data-driven approach which requires only high-level prior knowledge like smoothness characteristics. Prior known properties like stability are also often available but rarely exploited during modeling. The enforcement of stability using control Lyapunov functions allows incorporating this prior knowledge, but requires a data-driven Lyapunov function search. Therefore, we propose the use of Sum of Squares programming to enforce convergence of GP-SSMs and compare the performance to other approaches on a real-world handwriting motion dataset.
The code for the publication "Learning Stable Gaussian Process State Space Models" by Jonas Umlauft, Armin Lederer and Sandra Hirche published at the IEEE American Control Conference (ACC) 2018 is available here.
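For intuition, a GP-SSM in its simplest form fits a one-step transition map from trajectory data and is then rolled out autonomously. The sketch below is illustrative only: the dynamics g is a hypothetical example, and unlike the paper, no stability constraint (such as the Sum of Squares condition) is enforced; the rollout converges here only because g itself is stable.

```python
import numpy as np

g = lambda x: 0.8 * x + 0.1 * np.sin(x)   # hypothetical stable dynamics

Xin = np.linspace(-2.0, 2.0, 25)          # training inputs
Xout = g(Xin)                             # noiseless one-step targets

def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

# GP regression weights (unit prior variance, small jitter for conditioning)
w = np.linalg.solve(rbf(Xin, Xin) + 1e-6 * np.eye(25), Xout)

def step(x):
    """Posterior mean of the learned one-step map x_{k+1} = g(x_k)."""
    return float(rbf(np.atleast_1d(x), Xin) @ w)

x = 1.5
for _ in range(100):                      # roll out the learned model
    x = step(x)
```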
Uncertainty-based Human Trajectory Tracking with Stable Gaussian Process State Space Models
Data-driven approaches are well suited to represent human motion because arbitrarily complex trajectories can be captured. Gaussian process state space models make it possible to encode human motion while quantifying uncertainty due to missing data. Such human motion models are relevant for many application domains such as learning by demonstration and motion prediction in human-robot collaboration. For goal-directed tasks it is essential to impose stability constraints on the model representing the human motion. Motivated by learning by demonstration applications, this paper proposes an uncertainty-based control Lyapunov function approach for goal-directed trajectory tracking. We exploit the model fidelity, which is related to the location of the training and test data: our approach actively steers into regions with more demonstration data and thus higher model certainty. This achieves accurate reproduction of the human motion independent of the initial condition, and we show that generated trajectories are uniformly globally asymptotically stable. The approach is validated in a nonlinear learning by demonstration task where human-demonstrated motions are reproduced by the learned dynamical system, and higher precision than competitive state-of-the-art methods is achieved.
The code for the publication "Uncertainty-based Human Trajectory Tracking with Stable Gaussian Process State Space Models" by Lukas Pöhler, Jonas Umlauft and Sandra Hirche published at the IFAC Conference on Cyber-Physical & Human Systems (CPHS) 2018 is available here.
An Uncertainty-based Control Lyapunov Approach for Control-affine Systems Modeled by Gaussian Processes
Data-driven approaches in control allow for identification of highly complex dynamical systems with minimal prior knowledge. However, properly incorporating model uncertainty in the design of a stabilizing control law remains challenging. Therefore, this letter proposes a control Lyapunov function framework which semiglobally asymptotically stabilizes a partially unknown fully actuated control-affine system with high probability. We propose an uncertainty-based control Lyapunov function which utilizes the model fidelity estimate of a Gaussian process model to drive the system in areas near training data with low uncertainty. We show that this behavior maximizes the probability that the system is stabilized in the presence of power constraints using equivalence to dynamic programming. A simulation on a nonlinear system is provided.
The code for the publication "An Uncertainty-based Control Lyapunov Approach for Control-affine Systems Modeled by Gaussian Processes" by Jonas Umlauft, Lukas Pöhler and Sandra Hirche published in IEEE Control Systems Letters (L-CSS) and at the IEEE Conference on Decision and Control (CDC) in 2018 is available here.
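The "drive toward low-uncertainty areas" behavior can be illustrated with a toy planner. This is a loose conceptual sketch, not the letter's control law: demonstration data lies on a diagonal line, a crude GP-style variance proxy penalizes distance from that data, and at each step a candidate that trades off goal distance against model uncertainty is selected. All quantities (kernel length scale, penalty weight, step size) are illustrative assumptions.

```python
import numpy as np

# Hypothetical training data along the diagonal from (0,0) to (1,1)
Xtr = np.linspace(0.0, 1.0, 20).reshape(-1, 1) @ np.array([[1.0, 1.0]])

def sigma2(x, ell=0.2):
    """Crude variance proxy: unit prior variance minus the largest
    kernel correlation with any training point (illustrative only)."""
    d2 = np.sum((Xtr - x) ** 2, axis=1)
    return 1.0 - np.exp(-0.5 * d2 / ell**2).max()

goal = np.array([1.0, 1.0])
x = np.array([0.3, 0.0])                  # start off the demonstrated path
for _ in range(60):
    # candidate steps in 16 directions; cost = goal distance + uncertainty
    angles = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
    cands = [x + 0.05 * np.array([np.cos(a), np.sin(a)]) for a in angles]
    x = min(cands, key=lambda c: np.linalg.norm(c - goal) + 2.0 * sigma2(c))
```

Because the uncertainty penalty decreases toward the demonstration data, the trajectory first moves onto the diagonal and then follows it to the goal, mimicking the qualitative behavior described in the abstract.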
Feedback Linearization based on Gaussian Processes with event-triggered Online Learning
Combining control engineering with nonparametric modeling techniques from machine learning makes it possible to control systems without an analytic description using data-driven models. Most existing approaches separate learning, i.e. the system identification based on a fixed dataset, and control, i.e. the execution of the model-based control law. This separation makes the performance highly sensitive to the initial selection of training data and possibly requires very large datasets. This article proposes a learning feedback linearizing control law using online closed-loop identification. The employed Gaussian process model updates its training data only if the model uncertainty becomes too large. This event-triggered online learning ensures high data efficiency and thereby reduces the computational complexity, which is a major barrier for using Gaussian processes under real-time constraints. We propose safe forgetting strategies for data points to adhere to a budget constraint and to further increase data efficiency. We show asymptotic stability for the tracking error under the proposed event-triggering law and illustrate the effective identification and control in simulation.
The code for the publication "Feedback Linearization based on Gaussian Processes with event-triggered Online Learning" by Jonas Umlauft and Sandra Hirche accepted to the IEEE Transactions on Automatic Control (TAC) in 2019 is available here.
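The event-trigger idea itself is simple to sketch: a data point is added to the GP training set only when the predictive standard deviation at the current input exceeds a threshold. The snippet below is an illustrative toy, not the paper's trigger or forgetting strategy; the threshold, budget, and oldest-point-drop rule are assumptions.

```python
import numpy as np

ELL, SN2, THRESH, BUDGET = 0.5, 1e-4, 0.3, 50

def k(a, b):
    return np.exp(-0.5 * ((np.asarray(a)[:, None] - np.asarray(b)[None, :]) / ELL) ** 2)

X, y = [], []                                # online training set

def predict(x):
    if not X:
        return 0.0, 1.0                      # prior mean 0, prior std 1
    Xa = np.array(X)
    K = k(Xa, Xa) + SN2 * np.eye(len(X))
    kx = k([x], Xa)
    mu = float(kx @ np.linalg.solve(K, np.array(y)))
    var = 1.0 - float(kx @ np.linalg.solve(K, kx.T))
    return mu, np.sqrt(max(var, 0.0))

def maybe_update(x, fx):
    """Event trigger: store (x, f(x)) only if model uncertainty is too large."""
    _, sigma = predict(x)
    if sigma > THRESH:
        if len(X) >= BUDGET:                 # simple forgetting: drop oldest point
            X.pop(0); y.pop(0)
        X.append(x); y.append(fx)
        return True
    return False

f = lambda x: np.sin(3.0 * x)
added = [maybe_update(x, f(x)) for x in np.linspace(-2.0, 2.0, 100)]
```

Scanning 100 closely spaced inputs, only a small fraction triggers an update, because once a region is covered the predictive uncertainty there stays below the threshold. This is the data-efficiency mechanism the abstract describes.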
Uniform Error Bounds for Gaussian Process Regression with Application to Safe Control
Data-driven models are subject to model errors due to limited and noisy training data. Key to the application of such models in safety-critical domains is the quantification of their model error. Gaussian processes provide such a measure and uniform error bounds have been derived, which allow safe control based on these models. However, existing error bounds require restrictive assumptions. In this paper, we employ the Gaussian process distribution and continuity arguments to derive a novel uniform error bound under weaker assumptions. Furthermore, we demonstrate how this distribution can be used to derive probabilistic Lipschitz constants and analyze the asymptotic behavior of our bound. Finally, we derive safety conditions for the control of unknown dynamical systems based on Gaussian process models and evaluate them in simulations of a robotic manipulator.
The code for the publication "Uniform Error Bounds for Gaussian Process Regression with Application to Safe Control" by Armin Lederer, Jonas Umlauft and Sandra Hirche accepted to the Conference on Neural Information Processing Systems (NeurIPS) in 2019 is available here.
Koc-TUM Physical Human-X Interaction Repository
This repository contains real (i.e. not simulated) haptic interaction data collected from both Human-Human and Human-Robot dyads in joint object manipulation scenarios. Both scenarios were realized at the ITR and supported in part by the DFG excellence-initiative research cluster "Cognition for Technical Systems - CoTeSys" (www.cotesys.org). The scenarios involve two agents carrying a large table on ball casters in a laboratory setting. Force and position information is recorded during interaction.
The dataset is collected as a result of joint research between Technische Universität München (ITR) and Koc University (Robotics and Mechatronics Laboratory and Intelligent User Interfaces Laboratory). The copyright of the data remains with these institutions.
Please note that this dataset is available for research purposes only. If you use the dataset, please cite the following paper:
A. Mörtl, M. Lawitzky, A. Kucukyilmaz, M. Sezgin, C. Basdogan, S. Hirche, "The Role of Roles: Physical Cooperation between Humans and Robots," in International Journal of Robotics Research (IJRR), vol. 31 (13), 2012, pp. 1657-1675 [BibTeX], [avi]
The experimental procedure and the details of data collection can be found in the aforementioned paper. Should you have any queries, please direct them to Ayse Kucukyilmaz via e-mail.
Koc-TUM pHRI (physical Human-Robot Interaction) Dataset
This set consists of data collected from 18 human-robot dyads. For each dyad, the data is recorded at 1 kHz, smoothed by low-pass filtering with a 15 Hz cutoff frequency, and stored as a Matlab struct.
Downloads
- Readme.txt (2.10 KB)
- Interaction data
- KocTUM_HR_data.zip (2.54 GB)
- Videos
- KocTUM_HR_videos.zip (1.43 GB)
- Source code for generating video files
- simulateTrialsHRI.zip (7.73 KB)
Koc-TUM pHHI (physical Human-Human Interaction) Dataset
This set consists of data collected from 6 human-human dyads. For each dyad, the data is low-pass filtered, downsampled to 25 Hz, and aligned. Motion is estimated by a Kalman filter fusing gyroscope and visual tracking data.
Downloads
- Readme.txt (1.57 KB)
- Interaction data
- KocTUM_HH_data.zip (1.34 MB)
- Videos
- KocTUM_HH_videos.zip (135 MB)
- Source code for generating video files
- simulateTrialsHHI.zip (7.88 KB)