Bayesian Fusion for SLAM


Overview

This is a research project I worked on together with Prof. Stefan Leutenegger at the Dyson Robotics Lab at Imperial College London. Our paper was accepted to IEEE RA-L and presented at ICRA 2018.

We developed a Bayesian formulation for fusing depth camera measurements into a volumetric occupancy octree grid. Our approach particularly targets SLAM applications and allows direct surface extraction from the octree. We prototyped efficient real-time implementations for the CPU and for CUDA-enabled GPUs. The resulting octree grid also enables efficient path planning with OMPL.
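To give a flavour of what Bayesian occupancy fusion means in practice, here is a minimal sketch of the standard log-odds update used by many occupancy-grid systems. This is an illustrative simplification, not the exact formulation from our paper: each voxel stores its occupancy in log-odds form, and every depth-derived measurement is fused by adding the measurement's log-odds, which is exactly Bayes' rule under an independence assumption.

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def fuse(prior_logodds, measurement_prob):
    """Bayesian fusion of one occupancy measurement into a voxel.

    Adding log-odds is equivalent to multiplying likelihoods, i.e. a
    Bayes update assuming measurements are conditionally independent.
    """
    return prior_logodds + logit(measurement_prob)

def probability(logodds):
    """Convert log-odds back to a probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

# Example: a voxel starts unknown (p = 0.5, log-odds 0) and is
# observed as occupied twice, each observation with p = 0.7.
l = 0.0
for _ in range(2):
    l = fuse(l, 0.7)
print(round(probability(l), 3))  # prints 0.845
```

In a real system the same update runs per voxel along each camera ray, with log-odds typically clamped to keep the map responsive to dynamic changes; an octree stores these values sparsely so that large free or unknown regions cost almost nothing.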


Demo

Below you can see our algorithm running on the ICL-NUIM Living Room Dataset [1]. The top-left shows the reconstructed 3D model, the top-right the RGB image, the bottom-left the depth image, and the bottom-right the tracking error.


[1] A. Handa, T. Whelan, J. B. McDonald and A. J. Davison. A Benchmark for RGB-D Visual Odometry, 3D Reconstruction and SLAM. IEEE International Conference on Robotics and Automation (ICRA), 2014.