Exploration in hard-to-reach terrain using visual and proprioceptive data in Valles Marineris (VIPE)

Funding agency: BMWi
Duration: 3 years, 01.05.2015 – 30.04.2018
Partners: German Research Center for Artificial Intelligence GmbH (DFKI), NavVis GmbH
Contact: Dominik Van Opdenbosch, Adrian Garcea

Scope of the project

The goal of this basic research project is the effective exploration of hard-to-reach terrain in the Valles Marineris on Mars. The project closes a gap in the robot swarm of the DLR Space Administration's Valles Marineris Explorer initiative: the swarm already includes a combination of rovers and aircraft, but these offer no solution for exploring steep slopes and caves. To this end, the development of a hominid robot platform is to be advanced so that it can be integrated into the existing swarm.

To achieve this efficiently and cost-effectively, the project builds on expertise and hardware developed in previous projects. In particular, it is planned to advance the development of the four-legged walking robot "Charlie", which was created in the project "iStruct". Thanks to its lightweight, highly integrated design, its agility, and its tactile sensors, it is ideally suited to traversing difficult terrain.

To keep the weight of the robot platform low (important both for agility and for the transport costs of the system), only a few lightweight sensors should be installed. Visual navigation is well suited here: it relies on lightweight, passive sensors and, thanks to the high redundancy in the image data, enables reliable position estimation. In contrast to radio-based positioning, no line of sight to other swarm participants is required. However, for the requirements of such a complementary robot platform described above, position estimation based on continuous visual odometry with a stereo camera, as used on the rover and flight systems, is not sufficient: in poorly lit areas, the exposure times would be too long or would require continuous, and thus resource-intensive, illumination.
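Why exposure times explode in dark areas can be seen with a back-of-the-envelope calculation: at fixed aperture and sensor gain, image brightness is proportional to scene illuminance times exposure time, so the required time scales inversely with illuminance. The sketch below illustrates this scaling law only; all numbers are illustrative and not project measurements.

```python
def required_exposure_time(t_ref, lux_ref, lux_scene):
    """Scale a reference exposure time to a scene with different
    illuminance, assuming fixed aperture and sensor gain.
    Image brightness ~ illuminance * exposure time, so the time
    needed for equal brightness scales with lux_ref / lux_scene."""
    return t_ref * lux_ref / lux_scene

# Illustrative: 10 ms suffices at 1000 lx (overcast daylight);
# the same image at 10 lx (deep shadow) needs a 1 s exposure.
t_dark = required_exposure_time(t_ref=0.01, lux_ref=1000.0, lux_scene=10.0)
```

An exposure of this length is incompatible with continuous odometry on a moving robot (motion blur), but unproblematic for a panorama recorded while stationary.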

The aim of the project is therefore to explore a novel visual positioning and mapping approach that increases the distance between the required images by using a 360° panoramic camera, so that only a few shots suffice for navigation over several meters. This makes it possible to record the individual panoramic images while the robot is stationary, using a long exposure time. Furthermore, the visual navigation approach should recognize previously visited places robustly, despite the often poorly distinguishable environmental texture. The loops closed in this way can be detected and taken into account to reduce the position drift, also relative to other swarm participants. The resulting map data is to be exchanged and fused within the swarm as part of its network intelligence, creating a decentralized visual and geometric map. Energy-efficient approaches are to be found for processing the camera data, which was a fundamental requirement already in the projects "NAVVIS" I and II. The key technologies and foundations developed in those projects are prerequisites for the development of this visual positioning and mapping technology.
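The place-recognition step described above can be sketched in a few lines. This is a minimal illustration, not the project's method: it assumes each panoramic keyframe has been reduced to a single global appearance descriptor (e.g. an aggregated feature histogram), compares new keyframes against sufficiently old ones by cosine similarity, and flags a loop closure when the similarity exceeds a threshold. All names and parameters are hypothetical.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two descriptor vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class LoopCloser:
    """Stores one global descriptor per panoramic keyframe and flags
    revisited places by appearance similarity."""

    def __init__(self, sim_threshold=0.9, min_gap=3):
        self.sim_threshold = sim_threshold
        self.min_gap = min_gap          # skip the most recent keyframes
        self.descriptors = []

    def add_keyframe(self, desc):
        """Insert a new keyframe descriptor; return the index of a
        matching earlier keyframe (loop closure) or None."""
        desc = np.asarray(desc, dtype=float)
        # only keyframes at least min_gap steps in the past are candidates
        candidates = self.descriptors[: max(0, len(self.descriptors) - self.min_gap + 1)]
        best_idx, best_sim = None, self.sim_threshold
        for i, past in enumerate(candidates):
            s = cosine_sim(desc, past)
            if s > best_sim:
                best_idx, best_sim = i, s
        self.descriptors.append(desc)
        return best_idx
```

In a full system, a detected match would then be verified geometrically and fed into the map optimization as a loop-closure constraint to reduce drift.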

While this visual navigation approach is intended to enable accurate positioning over large areas even in low-light conditions, a complementary approach is needed to determine the change in position between successive panoramas. Especially in rugged terrain, it is crucial to place the legs of the walking robot precisely on stable surfaces. To make this possible, a proprioceptive approach is to be researched that uses tactile sensors to record the body's pose and motion in space and convert them into position information. This is a prerequisite for motion planning and reactive motion control, which make it possible to overcome obstacles. By fusing the tactile data with visually perceived surface structures such as edges and crevices, the two technologies complement each other into a very promising approach for reliably traversing difficult terrain.
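The core idea of proprioceptive dead reckoning for a legged robot can be illustrated with a simple 2-D leg-odometry step. The sketch assumes (hypothetically, this is not the project's implementation) that forward kinematics from joint angles yields the positions of the feet currently in stance, expressed in the body frame: since stance feet are fixed in the world, their apparent motion in the body frame is the negative of the body's own motion.

```python
import numpy as np

def leg_odometry_step(stance_feet_before, stance_feet_after, heading):
    """One dead-reckoning step of a simple 2-D leg odometry.

    stance_feet_before/after: (N, 2) arrays of stance-foot positions
    in the body frame, e.g. computed from joint angles via forward
    kinematics at two consecutive time steps.
    heading: current body yaw in radians.
    Returns the estimated world-frame displacement of the body.
    """
    before = np.asarray(stance_feet_before, dtype=float)
    after = np.asarray(stance_feet_after, dtype=float)
    # Stance feet are fixed in the world, so the body displacement in
    # the body frame is the negative of the mean apparent foot motion.
    d_body = -(after - before).mean(axis=0)
    # Rotate the body-frame displacement into the world frame.
    c, s = np.cos(heading), np.sin(heading)
    R = np.array([[c, -s], [s, c]])
    return R @ d_body

# Illustrative step: both stance feet appear to slip 0.1 m backwards
# in the body frame, so the body advanced 0.1 m along its heading.
step = leg_odometry_step([[0.3, 0.2], [0.3, -0.2]],
                         [[0.2, 0.2], [0.2, -0.2]],
                         heading=0.0)
```

Integrating such steps over time yields the relative position estimate between panoramas; fusing it with visually perceived structures such as edges would then anchor the drift-prone dead reckoning.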

The technologies to be explored in the project are also of great use on Earth in numerous applications, such as personal navigation, service robotics, cave exploration, and civil protection (Fukushima, earthquake areas, etc.). However, these settings impose additional requirements. A further aim of the project is therefore to explore the extensions necessary for central transfer applications of these key technologies, in order to make them usable terrestrially.