
Autonomous Navigation in 3D Environments Based on Depth Camera Data

We developed an integrated approach to robot localization, obstacle mapping, and path planning in 3D environments based on data from an onboard consumer-grade depth camera. Our system builds on state-of-the-art techniques for environment modeling and localization, which we extended to depth camera data. Our approach runs in real time, maintains a 3D environment representation, and estimates the robot's pose in 6D. As our experiments show, the depth camera is well suited for robust localization and reliable obstacle avoidance in complex indoor environments.
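The localization technique itself is not detailed on this page; as an illustration only, depth-based pose estimation is commonly implemented with Monte Carlo localization (a particle filter). The sketch below collapses the problem to one dimension for brevity: the corridor geometry, noise model, and all names are assumptions made for this sketch, not the project's actual implementation.

```python
import math
import random

# Minimal 1-D Monte Carlo localization sketch (illustrative only; the actual
# system estimates a full 6D pose against a 3D environment model).

TRUE_X = 2.0    # hypothetical true robot position along a corridor (assumed)
WALL_AT = 5.0   # hypothetical wall position: depth reading = WALL_AT - x
SIGMA = 0.1     # assumed depth-noise standard deviation

def measure(x):
    """Simulated depth measurement toward the wall."""
    return WALL_AT - x

def likelihood(expected, observed, sigma=SIGMA):
    """Gaussian measurement likelihood."""
    return math.exp(-0.5 * ((expected - observed) / sigma) ** 2)

def mcl_step(particles, observed):
    """Weight each particle by its measurement likelihood, then resample."""
    weights = [likelihood(measure(p), observed) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0.0, 4.0) for _ in range(500)]
observed = measure(TRUE_X)  # noise-free observation, for simplicity
for _ in range(3):
    # diffuse particles (stand-in for a motion model), then correct
    particles = [p + random.gauss(0.0, 0.05) for p in particles]
    particles = mcl_step(particles, observed)

estimate = sum(particles) / len(particles)
print(round(estimate, 2))  # should land near TRUE_X = 2.0
```

The same weight-and-resample loop carries over to 6D; only the state vector and the measurement model (raycasting depth images into the 3D model) change.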

The video below shows our Nao humanoid equipped with an ASUS Xtion Pro Live on top of its head. The robot estimates its 6D pose in a static 3D model of the environment based on the depth data. At the same time, it constructs a 3D obstacle map from the depth data for obstacle avoidance. To allow for real-time performance, the robot updates the map from sensor data at 6 Hz. The learned octree-based representation (OctoMap) is then used for real-time planning of collision-free paths.
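As a rough illustration of the OctoMap-style map update described above, the sketch below integrates one depth ray into a log-odds occupancy map. It uses a flat voxel dictionary instead of a real octree, and the resolution, log-odds increments, and clamping bounds are assumed values for the sketch, not the project's parameters.

```python
import math
from collections import defaultdict

# Simplified occupancy-map update in the spirit of OctoMap's log-odds rule
# (flat voxel dict instead of an octree; all constants are assumptions).

RES = 0.05                    # voxel edge length in metres (assumed)
L_HIT, L_MISS = 0.85, -0.4    # assumed log-odds increments for hit/miss
L_MIN, L_MAX = -2.0, 3.5      # assumed clamping bounds on the log-odds

def key(p):
    """Discretize a 3D point to an integer voxel key."""
    return tuple(int(math.floor(c / RES)) for c in p)

def update(grid, voxel, occupied):
    """Log-odds update with clamping, as in occupancy grid mapping."""
    delta = L_HIT if occupied else L_MISS
    grid[voxel] = min(L_MAX, max(L_MIN, grid[voxel] + delta))

def integrate_ray(grid, origin, endpoint):
    """Mark voxels along the ray as free and the endpoint as occupied
    (coarse point-sampled raycast, not a proper voxel traversal)."""
    steps = int(max(abs(e - o) for o, e in zip(origin, endpoint)) / RES) + 1
    for i in range(steps):
        t = i / steps
        p = tuple(o + t * (e - o) for o, e in zip(origin, endpoint))
        update(grid, key(p), occupied=False)
    update(grid, key(endpoint), occupied=True)

grid = defaultdict(float)     # log-odds 0.0 == unknown (p = 0.5)
integrate_ray(grid, (0.0, 0.0, 0.5), (1.0, 0.0, 0.5))  # one depth ray

# occupancy probability of the endpoint voxel: sigmoid of its log-odds
p_occ = 1.0 - 1.0 / (1.0 + math.exp(grid[key((1.0, 0.0, 0.5))]))
print(round(p_occ, 2))
```

In the real system, every depth image contributes thousands of such rays per update, and the octree lets unobserved space stay unallocated while the planner queries occupancy at arbitrary resolution.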

