
Learning Reliable and Efficient Navigation with a Humanoid

Reliable and efficient navigation with a humanoid robot is a difficult task. First, motion commands are executed rather inaccurately due to backlash in the joints and foot slippage. Second, the observations are typically highly affected by noise because the camera shakes while the robot walks. Localization performance therefore degrades while the robot moves, and the uncertainty about its pose grows. As a result, reliable and efficient execution of a navigation task can no longer be guaranteed, since the robot's pose estimate may not correspond to its true location. We developed a reinforcement learning approach to select appropriate navigation actions for a humanoid robot equipped with a camera for localization. The robot learns to reach its destination reliably and as fast as possible, choosing actions that account for motion drift and trading off velocity, i.e., fast walking movements, against localization accuracy. Extensive simulated and real-world experiments with a humanoid robot demonstrate that our learned policy significantly outperforms a hand-optimized navigation strategy.
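The page does not detail the learning formulation, but the core trade-off can be illustrated with a toy tabular Q-learning sketch: a robot on a one-dimensional corridor chooses between fast walking (more progress, growing pose uncertainty), slow walking, and stopping to observe (no progress, uncertainty reset). All states, actions, transition probabilities, and rewards below are invented for illustration; the actual state space, action set, and reward function used in the paper differ.

```python
import random

# Toy corridor MDP (all numbers are illustrative assumptions, not from the paper)
N_CELLS, N_UNC = 10, 3          # corridor cells, discretized pose-uncertainty levels
GOAL = N_CELLS - 1
FAST, SLOW, OBSERVE = 0, 1, 2   # hypothetical action set

def step(cell, unc, action, rng):
    """Transition model: fast walking makes progress but raises pose
    uncertainty; at high uncertainty, motion may slip (no progress);
    observing (standing still to localize) resets uncertainty."""
    if action == OBSERVE:
        return cell, 0, -1.0                      # pay only the time cost
    stride = 2 if action == FAST else 1
    if unc == 2 and rng.random() < 0.6:
        stride = 0                                # slip when poorly localized
    unc = min(unc + (1 if action == FAST else 0), N_UNC - 1)
    cell = min(cell + stride, GOAL)
    return cell, unc, (20.0 if cell == GOAL else -1.0)

def train(episodes=5000, alpha=0.2, gamma=0.95, eps=0.1, seed=1):
    """Standard epsilon-greedy tabular Q-learning over (cell, uncertainty)."""
    rng = random.Random(seed)
    Q = [[[0.0] * 3 for _ in range(N_UNC)] for _ in range(N_CELLS)]
    for _ in range(episodes):
        cell, unc = 0, 0
        for _ in range(60):
            a = (rng.randrange(3) if rng.random() < eps
                 else max(range(3), key=lambda x: Q[cell][unc][x]))
            nc, nu, r = step(cell, unc, a, rng)
            target = r + (0.0 if nc == GOAL else gamma * max(Q[nc][nu]))
            Q[cell][unc][a] += alpha * (target - Q[cell][unc][a])
            cell, unc = nc, nu
            if cell == GOAL:
                break
    return Q

def greedy_rollout(Q, seed=2, max_steps=40):
    """Execute the learned policy greedily from the start cell."""
    rng = random.Random(seed)
    cell, unc, steps = 0, 0, 0
    while cell != GOAL and steps < max_steps:
        a = max(range(3), key=lambda x: Q[cell][unc][x])
        cell, unc, _ = step(cell, unc, a, rng)
        steps += 1
    return cell, steps
```

In this toy model, walking slowly the whole way takes nine steps, while interleaving fast walking with occasional localization stops reaches the goal in about seven, which is the kind of policy the learner converges to: speed is worth buying only as long as localization stays good enough to avoid slippage.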

Related publication:

Learning Reliable and Efficient Navigation with a Humanoid.
S. Oßwald, A. Hornung, and M. Bennewitz.
In: Proceedings of the IEEE International Conference on Robotics & Automation (ICRA), 2010.


The video below shows our Nao humanoid robot navigating in our corridor environment, executing the policy learned with reinforcement learning in order to reach the goal quickly and reliably. In addition to the external view, the video shows the robot's estimated state with the corresponding uncertainty ellipse as well as the camera view with the detected and integrated features.
