Sensor Integration for Mono-SLAM

Simultaneous localization and mapping (SLAM) uses cameras as sensors for pose estimation. Precise and stable pose estimates are necessary for any augmented reality application.

Smart devices are equipped with cameras as well as inertial measurement units (IMUs) such as gyroscopes and accelerometers. Monocular SLAM approaches, however, rely mainly on image features.

Inertial sensors can aid monocular SLAM efficiently: they improve tracking accuracy and robustness [1] while increasing the computational complexity only slightly.
Our approach can be integrated into any visual SLAM algorithm, since it only requires a preliminary estimate of the vision-derived camera pose.
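
A minimal sketch of what such a pre-estimate buys, assuming the gyroscope rates have already been rotated into the camera frame: the vision-derived orientation can be propagated forward with the integrated angular rates before the filter update. The struct and function names below are illustrative, not the API shipped in Sensor_SLAM.zip.

    #include <Eigen/Dense>
    #include <Eigen/Geometry>
    #include <vector>

    struct GyroSample {
        double dt;               // integration interval in seconds
        Eigen::Vector3d omega;   // angular rate in rad/s, expressed in the camera frame
    };

    // Propagate a vision-derived orientation forward with integrated gyroscope rates.
    Eigen::Quaterniond propagateOrientation(Eigen::Quaterniond q,
                                             const std::vector<GyroSample>& gyro) {
        for (const GyroSample& s : gyro) {
            Eigen::Vector3d dTheta = s.omega * s.dt;   // incremental rotation vector
            double angle = dTheta.norm();
            if (angle > 1e-12) {
                Eigen::Quaterniond dq(Eigen::AngleAxisd(angle, dTheta / angle));
                q = (q * dq).normalized();             // right-multiply the body-frame increment
            }
        }
        return q;
    }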

We achieve data fusion with an unscented Kalman filter (UKF). Before the camera and sensor data can be combined, the camera and the inertial sensors have to be calibrated.
Usually, the IMU is not located at the same position as the camera. Thus, in the case of an embedded device such as a smart device, the IMU's position has to be estimated relative to the camera center.
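
Because a quaternion state cannot be perturbed like an ordinary vector, a common scheme for a quaternion-based UKF keeps the 3x3 orientation covariance in the rotation-vector (tangent) space and builds sigma points by perturbing the mean quaternion with the columns of the scaled Cholesky factor. The following is a minimal sketch of that sigma-point step for an orientation-only state, using Eigen; the state layout, the spread parameter, and all names are assumptions, not the implementation contained in Sensor_SLAM.zip.

    #include <Eigen/Dense>
    #include <Eigen/Geometry>
    #include <vector>

    // Map a rotation vector (axis * angle) to a unit quaternion.
    Eigen::Quaterniond expQuat(const Eigen::Vector3d& v) {
        double angle = v.norm();
        if (angle < 1e-12) return Eigen::Quaterniond::Identity();
        return Eigen::Quaterniond(Eigen::AngleAxisd(angle, v / angle));
    }

    // Generate the 2n+1 sigma points around the mean orientation (n = 3 here).
    std::vector<Eigen::Quaterniond> sigmaPoints(const Eigen::Quaterniond& mean,
                                                const Eigen::Matrix3d& cov,
                                                double lambda = 1.0) {
        const int n = 3;
        Eigen::Matrix3d scaled = (n + lambda) * cov;    // UKF scaling of the covariance
        Eigen::Matrix3d L = scaled.llt().matrixL();     // Cholesky factor
        std::vector<Eigen::Quaterniond> points;
        points.push_back(mean);                         // central sigma point
        for (int i = 0; i < n; ++i) {
            Eigen::Vector3d delta = L.col(i);           // tangent-space perturbation
            points.push_back((mean * expQuat(delta)).normalized());
            points.push_back((mean * expQuat(-delta)).normalized());
        }
        return points;
    }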

We provide the following code:

  1. Data fusion based on the unscented Kalman filter
  2. C++ implementation of a quaternion-based unscented Kalman filter
  3. IMU sensor calibration
  4. Camera-to-IMU calibration (a rotation-alignment sketch follows this list)
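
As a hedged illustration of the rotational part of the camera-to-IMU calibration: if camera and IMU observe the same relative rotations over matching time intervals, their rotation-vector representations are related by the unknown extrinsic rotation, which a least-squares (Kabsch/SVD) alignment can recover. Function and variable names are illustrative and not taken from Sensor_SLAM.zip; the position offset between IMU and camera center mentioned above would have to be estimated separately.

    #include <Eigen/Dense>
    #include <Eigen/Geometry>
    #include <Eigen/SVD>
    #include <vector>

    // Convert a relative rotation to its rotation-vector (axis * angle) form.
    Eigen::Vector3d logRot(const Eigen::Quaterniond& q) {
        Eigen::AngleAxisd aa(q);
        return aa.angle() * aa.axis();
    }

    // Estimate the camera-from-IMU rotation from paired relative rotations
    // (both vectors are assumed to have the same length and ordering).
    Eigen::Matrix3d estimateCameraImuRotation(
            const std::vector<Eigen::Quaterniond>& camDeltas,
            const std::vector<Eigen::Quaterniond>& imuDeltas) {
        Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
        for (size_t k = 0; k < camDeltas.size(); ++k) {
            // The camera-frame rotation vector should equal R_ci times the IMU-frame one.
            H += logRot(camDeltas[k]) * logRot(imuDeltas[k]).transpose();
        }
        Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
        Eigen::Matrix3d R = svd.matrixU() * svd.matrixV().transpose();
        if (R.determinant() < 0.0) {                    // enforce a proper rotation
            Eigen::Matrix3d D = Eigen::Matrix3d::Identity();
            D(2, 2) = -1.0;
            R = svd.matrixU() * D * svd.matrixV().transpose();
        }
        return R;                                       // maps IMU-frame vectors into the camera frame
    }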

The following libraries are necessary:

- OpenCV 2.4.8
- libCVD (requires pthreads for compilation)
- Eigen
- Boost 1.55
- levmar 2.6

Download Sensor_SLAM.zip

Please contact mmk@ei.tum.de to obtain the password.

[1] P. Tiefenbacher, T. Schulze and G. Rigoll, “Off-the-Shelf Sensor Integration for mono-SLAM on Smart Devices,” in Proc. CVPRW, IEEE, 2015.