We have developed an efficient approach to obstacle avoidance for
humanoid robots based on monocular images. Our approach relies on
ground-plane estimation and trains visual classifiers using color
and texture information in a self-supervised way. During
navigation, the classifiers are automatically updated and applied to
the image stream to decide which areas are traversable. From this
information, the robot can compute a two-dimensional occupancy grid
map of the environment and use it for planning collision-free paths.
As we illustrate in thorough experiments with a real humanoid, the
classification results are highly accurate and the resulting
occupancy map enables the robot to reliably avoid obstacles during
navigation.
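The core idea of the self-supervised training described above can be illustrated with a minimal sketch: pixels from an image region known to be floor (e.g. the area directly in front of the robot, confirmed by ground-plane estimation) provide the training labels for a simple color model, which is then applied to the whole image to separate traversable from non-traversable areas. The class, histogram model, and all constants below are illustrative, not the actual implementation used in the papers.

```python
import numpy as np

class TraversabilityClassifier:
    """Self-supervised color classifier: a hue histogram accumulated
    from image patches labeled 'floor' by the robot itself, used to
    decide per pixel whether the area looks traversable."""

    def __init__(self, bins=16):
        self.bins = bins
        self.floor_hist = np.zeros(bins)

    def train(self, hue_patch):
        # Accumulate a hue histogram from a patch labeled "floor".
        h, _ = np.histogram(hue_patch, bins=self.bins, range=(0, 256))
        self.floor_hist += h

    def classify(self, hue_image, threshold=0.01):
        # A pixel counts as traversable if its hue bin carries enough
        # probability mass in the learned floor model.
        p = self.floor_hist / max(self.floor_hist.sum(), 1)
        idx = (hue_image.astype(int) * self.bins) // 256
        return p[idx] > threshold

# Toy example: floor pixels have hue ~30, an obstacle has hue ~120.
clf = TraversabilityClassifier()
clf.train(np.full((10, 10), 30))   # self-supervised floor sample
image = np.full((4, 4), 30)
image[1:3, 1:3] = 120              # obstacle region
mask = clf.classify(image)
print(mask[0, 0], mask[1, 1])      # floor classified True, obstacle False
```

In the actual system the model is updated continuously during navigation and combines color with texture cues; the sketch keeps only the color-histogram part for brevity.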
- Vision-based Humanoid Navigation Using Self-Supervised Obstacle Detection.
D. Maier, C. Stachniss, and M. Bennewitz.
In: International Journal of Humanoid Robotics, DOI: 10.1142/S0219843613500163, 2013.
- Appearance-Based Traversability Classification in
Monocular Images Using Iterative Ground Plane Estimation.
D. Maier and M. Bennewitz.
In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012.
- Self-supervised Obstacle Detection for Humanoid
Navigation Using Monocular Vision and Sparse Laser Data.
D. Maier and M. Bennewitz.
In: Proceedings of the IEEE International Conference on Robotics & Automation (ICRA), 2011.
The videos below show how our Nao humanoid trains the visual classifiers in a self-supervised fashion during navigation. The learned classifiers are applied to the stream of camera images to discriminate obstacles from the floor. Based on the traversable area, the robot builds an occupancy map for collision-free navigation.
In the first video, the robot uses data from its 2D laser scanner to guide the training; in the second video, the robot needs only its RGB camera data and odometry information.
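The occupancy map mentioned above is typically maintained with a log-odds update: grid cells that project onto floor-classified pixels become more likely free, cells behind obstacle-classified pixels become more likely occupied. The following is a minimal sketch of that bookkeeping; the function names and the update constants are illustrative assumptions, not taken from the papers.

```python
import numpy as np

def update_grid(grid, cells_free, cells_occ, l_free=-0.4, l_occ=0.85):
    """Log-odds occupancy update from one classified image: cells
    observed as floor lower the log-odds, cells observed as obstacle
    raise it. Constants are illustrative."""
    for (r, c) in cells_free:
        grid[r, c] += l_free
    for (r, c) in cells_occ:
        grid[r, c] += l_occ
    return grid

def occupancy_prob(grid):
    # Convert log-odds back to occupancy probability.
    return 1.0 / (1.0 + np.exp(-grid))

grid = np.zeros((5, 5))   # log-odds 0 means unknown (probability 0.5)
grid = update_grid(grid, cells_free=[(0, 0)], cells_occ=[(2, 2)])
p = occupancy_prob(grid)
print(p[0, 0], p[2, 2])   # free cell drops below 0.5, occupied rises above
```

A path planner can then treat cells above a probability threshold as blocked and search for collision-free paths on the remaining free space.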