Emerging Control Techniques

Robust Model-Free Control Method for Robot Manipulators

The control of robot manipulators has been investigated extensively over the past several decades, and the robustness and tracking accuracy of the controllers continue to attract considerable attention. The problem remains challenging owing to the inherent nonlinear complexity, strong coupling between joints, external disturbances, and unmodeled uncertainties of robot manipulators. For the computed torque method, backstepping, and other model-based methods, the tracking performance deteriorates under external disturbances such as friction. Techniques such as disturbance observers and neural networks can identify the disturbances online and thereby strengthen the robustness of the controller. However, these approaches require online parameter identification, which increases computational complexity.
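For illustration, the computed torque law mentioned above can be sketched for a one-degree-of-freedom pendulum; unmodeled viscous friction then leaves a residual tracking error. All parameters, gains, and the friction coefficient below are hypothetical:

```python
import numpy as np

# Computed-torque (inverse dynamics) control for a 1-DOF pendulum:
#   M(q) q'' + g(q) = tau,  with  tau = M(q)(q''_d + Kd*e' + Kp*e) + g(q).
# The law cancels the modeled dynamics; unmodeled friction directly
# degrades tracking, which motivates disturbance-robust designs.

M, L, G = 1.0, 1.0, 9.81          # mass, length, gravity (hypothetical)
Kp, Kd = 100.0, 20.0              # PD gains on the tracking error

def computed_torque(q, dq, q_d, dq_d, ddq_d):
    inertia = M * L**2
    gravity = M * G * L * np.sin(q)
    e, de = q_d - q, dq_d - dq
    return inertia * (ddq_d + Kd * de + Kp * e) + gravity

# Track a sinusoidal reference; the plant has viscous friction the
# controller does not model.
dt, T = 1e-3, 5.0
friction = 0.5                    # unmodeled disturbance coefficient
q, dq = 0.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    q_d, dq_d, ddq_d = np.sin(t), np.cos(t), -np.sin(t)
    tau = computed_torque(q, dq, q_d, dq_d, ddq_d)
    ddq = (tau - friction * dq - M * G * L * np.sin(q)) / (M * L**2)
    dq += ddq * dt
    q += dq * dt

print("residual tracking error:", abs(np.sin(T) - q))
```

The residual error scales roughly with the friction torque divided by the proportional gain, which is why observer-based compensation or robust model-free designs are attractive.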


In this research, we aim to develop a novel robust model-free control method that improves tracking precision and achieves sub-optimal tracking performance in the presence of unknown external disturbances and uncertainties. We also take state and controller constraints into consideration to guarantee the safety of the system.

Contact / Publication details: Yongchao Wang


System Identification and Adaptive Control

Minimizing Constraint Violation Probability in MPC

System uncertainty can be handled in different ways within MPC. Robust MPC, as the name indicates, robustly accounts for the uncertainty, often resulting in conservative solutions. Stochastic MPC yields more efficient solutions by permitting a small probability of constraint violation, based on a predefined risk parameter.

In contrast to Robust MPC and Stochastic MPC, we propose an MPC method (CVPM-MPC) that minimizes the probability that a constraint is violated while also optimizing other control objectives. The proposed method is capable of dealing with changing uncertainty and does not require choosing a risk parameter. CVPM-MPC can be regarded as a link between Robust and Stochastic MPC.
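The CVPM principle can be illustrated with a one-step scalar example (this is an illustrative sketch, not the algorithm of the paper below; the dynamics, input bounds, and noise level are hypothetical): first find the inputs that minimize the constraint violation probability, then select among them according to a secondary control objective.

```python
import numpy as np
from scipy.stats import norm

# One-step sketch of the CVPM idea for x+ = a*x + b*u + w, w ~ N(0, sigma^2):
# 1) minimize P(x+ > x_max) over the admissible inputs,
# 2) among the minimizers, pick the input closest to a nominal target.

a, b, sigma = 1.0, 1.0, 0.5       # scalar dynamics and noise (hypothetical)
x_max, u_ref = 2.0, 0.0           # state constraint and nominal input target
x = 1.8                           # current state, close to the bound

u_grid = np.linspace(-1.0, 1.0, 201)            # bounded input set
p_viol = 1.0 - norm.cdf((x_max - a * x - b * u_grid) / sigma)

p_min = p_viol.min()
candidates = u_grid[p_viol <= p_min + 1e-9]     # violation-minimizing inputs
u_star = candidates[np.argmin(np.abs(candidates - u_ref))]

print("input:", u_star, " violation probability:", p_min)
```

Note that no risk parameter is chosen: the violation probability is minimized outright, and the secondary objective only breaks ties among the minimizers.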

Contact: Tim Brüdigam, Michael Fink

Publications:

Brüdigam, T.; Gaßmann, V.; Wollherr, D.; Leibold, M.: Minimization of constraint violation probability in model predictive control. Int J Robust Nonlinear Control, 2021, 1-33 [Volltext (DOI)] [Volltext (mediaTUM)]

Distributed Control

Multi-Vehicle Coupled System with Interactions based on Distributed Control Framework

Recent improvements in vehicle automation and communication systems provide a good basis for the study of multi-vehicle systems. Stochastic Model Predictive Control (SMPC) is used to control autonomous vehicles because of its ability to handle constraints in a non-conservative way. Current research on controlling autonomous vehicles with SMPC focuses on the behavior of an individual SMPC-controlled vehicle; how the other vehicles react to its behavior is normally ignored. However, in real traffic, all vehicles tend to react to the other vehicles in their environment.

In our multi-vehicle system, the interactions between vehicles are taken into consideration by introducing a distributed SMPC framework, where all vehicles are controlled by SMPC and able to react to each other's behavior. Our current framework works well in most scenarios, but cyclic traffic behaviors might occur in some of them: two vehicles simultaneously try to occupy the same stretch of road, or simultaneously yield it to each other.

In the future, we will focus on finding methods to mitigate or even eliminate these cyclic behaviors.

Contact / Publication details: Ni Dang

Dynamical Processes over Social Networks: Modeling, Analysis and Control

Social networks constituted by social agents and their social relations are ubiquitous in our daily lives. Dynamic processes over social networks, which are highly related to our social activities and decision making, are prominent research topics in both theory and practice.

In this research, two typical social-network-based dynamic processes, information epidemics and opinion dynamics, are inspected with the aim of filling the gap between social network analysis and control theory. Analogous to epidemics spreading in a population, information epidemics describe information diffusion in social networks. The existence of the endemic and disease-free equilibria is thoroughly studied, as well as their stability conditions. Additionally, the desired diffusion performance is achieved via a novel optimal control framework, which is promising for solving the open problem of optimal control for information epidemics. Apart from diffusion processes, opinion dynamics, which describe the evolution of individual opinions under social influence, are inspected from a control-theoretic point of view. We focus on opinion dynamics over social networks with cooperative-competitive interactions and address the existence question: under what conditions does a distributed protocol exist such that the opinions polarize, reach consensus, or are neutralized? Particular emphasis is placed on the joint impact of the dynamical properties of (both homogeneous and heterogeneous) individual opinions and the interaction topology with respect to static diffusive coupling protocols.
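A standard networked SIS model illustrates the kind of information-epidemic dynamics and equilibria discussed above (the example network, rates, and initial condition are hypothetical):

```python
import numpy as np

# Networked SIS ("information epidemic") model: x_i is the probability
# that agent i is informed, A is the adjacency matrix.
#   dx_i/dt = -delta * x_i + beta * (1 - x_i) * sum_j A_ij x_j
# Below the threshold beta * lambda_max(A) / delta = 1 the disease-free
# equilibrium x = 0 is stable; above it an endemic equilibrium emerges.

def simulate_sis(A, beta, delta, x0, dt=0.01, steps=5000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        dx = -delta * x + beta * (1.0 - x) * (A @ x)
        x = np.clip(x + dt * dx, 0.0, 1.0)
    return x

# Small undirected example network (hypothetical).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
x0 = [0.1, 0.0, 0.0, 0.0]

x_sub = simulate_sis(A, beta=0.1, delta=1.0, x0=x0)   # below threshold
x_super = simulate_sis(A, beta=1.0, delta=1.0, x0=x0) # above threshold

print("sub-threshold max state:", x_sub.max())     # information dies out
print("super-threshold min state:", x_super.min()) # endemic equilibrium
```

Control enters by shaping beta, delta, or the topology A so that the diffusion settles at the desired equilibrium.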

Contact / Publication details: Yuhong Chen

Applications

Adaptive and Learning Control of Hybrid Systems

Many physical systems are characterized by hybrid phenomena, namely continuous state evolution and discrete mode switching. The main challenges in designing controllers for hybrid systems are uncertainties and the switching behavior. If the uncertainty is large or the system parameters vary, a single fixed controller may not be able to stabilize the whole system. In such cases, the uncertainty and varying parameters should be captured by learning approaches, and the controller is required to be adaptive. Furthermore, the switching behavior makes stabilizing the whole system even harder.

The goal of this work is to develop intelligent control approaches, which are also compatible with learning methods, for hybrid systems with uncertainties. Applications include robot impacts (walking, jumping, pick-and-place), soft robotics, flight control, etc.

 

Contact / Publication details: Tong Liu

Human-aware robot trajectory planning for safe and efficient HRI

In human-robot interaction, the robot not only needs to satisfy physical constraints due to the limits of its mechanical system but also has to guarantee safety for itself and the humans in its surroundings. Therefore, the robot has to evaluate the available actions, interpret human motion with respect to the tasks the human may currently be executing, and select and execute its own action accordingly. Such safe interaction also requires planning legible robot motions that can easily be understood by the human partner.

 

One of the key aims of this research is to develop novel motion planning methods that enable the robot to proactively interact with human partners. We aim to develop a probabilistic-inference-based, continuous-time motion planning framework that incorporates future human actions and safety constraints for effective HRI in a human-robot assembly setup.

Contact / Publication details: Salman Bari

Stochastic Model Predictive Control in Autonomous Driving

Autonomous vehicles face the challenge of providing efficient transportation while safely maneuvering in an uncertain environment. Uncertainties arise in various forms, mainly because the ego vehicle is unable to perfectly predict future motion of surrounding vehicles, cyclists, and pedestrians. For example, the ego vehicle must be prepared for a sudden vehicle lane change maneuver or unforeseen jaywalking pedestrians.

We develop Model Predictive Control (MPC) methods to advance automated driving. MPC effectively handles constraints that the vehicle must meet, e.g., lane boundaries, speed limits, and collision avoidance. We tackle uncertainties in the environment by applying Stochastic Model Predictive Control (SMPC). Accounting for all uncertainties may result in overly conservative driving, greatly limiting performance, especially in urban driving situations; in some scenarios, guaranteeing collision avoidance against all uncertainties is even impossible. In SMPC, the constraints are adapted, yielding chance constraints. These are not required to hold at all times; instead, a probability parameter specifies the desired probability of constraint satisfaction under the system uncertainty. In other words, a lower probability parameter permits more frequent constraint violations but increases performance.
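For a Gaussian disturbance, the chance-constraint mechanism reduces to a deterministically tightened bound, and the effect of the probability parameter becomes explicit (the lane bound and standard deviation below are hypothetical numbers):

```python
from scipy.stats import norm

# A chance constraint P(y + w <= y_max) >= p with w ~ N(0, sigma^2)
# is equivalent to the tightened deterministic constraint
#   y <= y_max - norm.ppf(p) * sigma.
# Lowering p shrinks the safety margin, trading safety for performance.

y_max, sigma = 3.5, 0.3   # lane boundary [m] and prediction std dev (hypothetical)

for p in (0.99, 0.95, 0.80):
    margin = norm.ppf(p) * sigma
    print(f"p = {p:.2f}: plan with y <= {y_max - margin:.3f} m")
```

The planner then solves an ordinary deterministic MPC problem with the tightened bound, which is what makes chance constraints computationally attractive.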

In our research, we focus on safety and efficiency for automated vehicle trajectory planning with SMPC. By combining SMPC with failsafe trajectory planning, the advantage of optimistic vehicle behavior with SMPC is combined with the safety guarantees of failsafe trajectory planning. Further contributions include grid-based SMPC approaches, specifically accounting for maneuver uncertainty and maneuver execution uncertainty of vehicles. Multistage SMPC is also adopted to plan the motion of the ego vehicle, explicitly differentiating between short-term and long-term decisions, each with its own required features and timescale.

Contact: Tim Brüdigam, Tommaso Benciolini

Publications:

Benciolini, T.; Brüdigam, T.; Leibold, M.: Multistage Stochastic Model Predictive Control for Urban Automated Driving. 24th IEEE International Conference on Intelligent Transportation Systems, 2021 [Volltext (DOI)] [Volltext (mediaTUM)]

Brüdigam, T.; Olbrich, M.; Wollherr, D.; Leibold, M.: Stochastic Model Predictive Control with a Safety Guarantee for Automated Driving. IEEE Transactions on Intelligent Vehicles, 2021, 1-1 [Volltext (DOI)]

Brüdigam, T.; di Luzio, F.; Pallottino, L.; Wollherr, D.; Leibold, M.: Grid-Based Stochastic Model Predictive Control for Trajectory Planning in Uncertain Environments. 23rd IEEE International Conference on Intelligent Transportation Systems, 2020 [Volltext (DOI)] [Volltext (mediaTUM)]

Brüdigam, T.; Olbrich, M.; Leibold, M.; Wollherr, D.: Stochastic Model Predictive Control with a Safety Guarantee for Automated Driving. 21st IEEE International Conference on Intelligent Transportation Systems, 2018 [Volltext (DOI)] [Volltext (mediaTUM)]


 

Extending the MPC Prediction Horizon

A long prediction horizon in MPC is often beneficial. However, a long prediction horizon with a detailed prediction model quickly becomes computationally challenging. We provide different adaptations to MPC in order to take advantage of long prediction horizons while keeping the computational effort manageable.

These adaptations are based on two ideas:

  1. A simple system model is used for long-term predictions (with a detailed short-term prediction model).
  2. The sampling time is increased along the horizon, resulting in a non-uniformly spaced MPC prediction horizon.

In addition, these adaptations are combined with methods from Robust MPC and Stochastic MPC to account for potential model uncertainty and disturbances.
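A minimal sketch of the second idea, a non-uniformly spaced prediction horizon, with short uniform steps near the present and geometrically growing steps further out (step sizes, counts, and the growth factor are hypothetical):

```python
# Non-uniformly spaced MPC prediction horizon: dense short-term steps
# (where a detailed model is used) followed by geometrically growing
# long-term steps (where a simple model suffices).

def horizon_grid(dt_short, n_short, growth, n_long):
    """Return the prediction time points of a non-uniform MPC horizon."""
    times, t, dt = [0.0], 0.0, dt_short
    for _ in range(n_short):          # uniform short-term segment
        t += dt
        times.append(t)
    for _ in range(n_long):           # geometrically growing long-term segment
        dt *= growth
        t += dt
        times.append(t)
    return times

grid = horizon_grid(dt_short=0.1, n_short=5, growth=1.5, n_long=5)
print([round(t, 4) for t in grid])
```

Here 10 prediction steps cover roughly 2.5 s, whereas a uniform 0.1 s grid would need 25 steps for the same horizon length; the optimization problem stays small while the look-ahead grows.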

Contact: Tim Brüdigam, Johannes Teutsch

Publications:

Brüdigam, T.; Teutsch, J.; Wollherr, D.; Leibold, M.; Buss, M.: Probabilistic model predictive control for extended prediction horizons. at - Automatisierungstechnik, vol. 69, no. 9, 2021, pp. 759-770 [Volltext (DOI)]

Brüdigam, T.; Prader, D.; Wollherr, D.; Leibold, M.: Model Predictive Control with Models of Different Granularity and a Non-uniformly Spaced Prediction Horizon. American Control Conference (ACC), 2021 [Volltext (DOI)]

Brüdigam, T.; Teutsch, J.; Wollherr, D.; Leibold, M.: Combined Robust and Stochastic Model Predictive Control for Models of Different Granularity. 21st IFAC World Congress, 2020 [Volltext (DOI)] [Volltext (mediaTUM)]



Adaptive Action Selection in Human-Robot Collaboration

The main focus of this research area is on industrial assembly processes with mixed human-robot teams, which are typically well understood and defined in advance. In order to interact seamlessly with humans, an autonomous agent is required to use the basic rules of the ongoing task to plan ahead over the individual possibilities granted to each agent and to adapt its actions on the fly. The main challenge when interacting with a human is that, unlike robots, humans do not always follow the same sequence of actions, even when a detailed plan is provided. This has to be incorporated into the planning, allocation, and execution phases accordingly. It is therefore of utmost importance to analyze the mutual interference of the agents' actions. The sequence of actions can then be adjusted to the human coworkers, unlike in classic robot planning, where a robot follows a predetermined sequence of actions.

We propose an assembly planning and execution framework that incorporates well-understood methods from interactive game theory, optimal planning, and multi-agent reinforcement learning to grant robots the ability not only to adapt to, but to actually cooperate with, human co-workers in joint assembly processes.

Contact / Publication details: Volker Gabler

Safe Learning for Event Driven Hybrid Systems

While learning techniques have achieved impressive performance in various control tasks, the intermediate policies obtained during the learning process may be unsafe and hence lead the system to dangerous behavior. In the literature, there are mainly two ways to impose safety on learning algorithms: one is to modify the cost function, and the other is to guide the exploration process.

We aim to utilize insights from invariance control and supervisory control to impose safety guarantees on learning-based controllers in event-driven hybrid systems. A supervisor is constructed based on a safe set obtained from reachability analysis or an invariance function; this supervisor guides the learning process to ensure that the system remains inside the safe region while the learning-based controller searches for an optimal policy.
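A minimal sketch of such a supervisory safety filter on a toy integrator system (the dynamics, safe set, and fallback action are hypothetical illustrations, not the group's method): the learner proposes an action, and the supervisor overrides it whenever the predicted next state would leave the safe set.

```python
import random

# Supervisory safety filter for learning-based control on a toy system.
# Safe set: |x| <= x_safe. The exploring policy is a stand-in (random
# actions); the supervisor keeps the closed loop inside the safe set.

x_safe = 1.0

def step(x, u, dt=0.1):
    return x + dt * u                       # toy integrator dynamics

def supervisor(x, u_learned):
    if abs(step(x, u_learned)) <= x_safe:   # proposed action stays safe
        return u_learned
    return -x                               # fallback: steer toward the origin

random.seed(0)
x = 0.0
for _ in range(1000):
    u_learned = random.uniform(-5.0, 5.0)   # stand-in for an exploring policy
    x = step(x, supervisor(x, u_learned))
    assert abs(x) <= x_safe                 # invariance holds at every step

print("state stayed in the safe set:", abs(x) <= x_safe)
```

The fallback action here makes the safe set invariant (|x + 0.1(-x)| = 0.9|x| <= x_safe), which is exactly the role the safe set from reachability analysis or an invariance function plays in the full method.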

Contact / Publication details: Zhehua Zhou