Project: Vision based robot navigation
Researchers: Rosenhahn, B., Grest, D., Sommer, G.

This page contains some research results on vision based robot navigation. We had several student projects dealing with vision based self-localization of mobile robots.
As shown on the previous page, we are interested in visual self-localization of mobile robots. The motivation for this research topic is that the accumulated odometric data of the robot movements are too inexact for robust navigation. In contrast to ultrasonic based navigation, the use of visual information is becoming more and more popular. We use a mobile robot (a B21) to navigate with respect to an a priori known landmark. The tasks we solve in this context are the combination of image feature extraction, pose estimation, matching and path planning.
The vision modules are the following:
- Image feature extraction: a modified Hough transformation algorithm extracts the landmark features from the camera images.
- Matching and pose estimation: we use the odometric data of the robot to obtain a tracking situation and apply a local search strategy to estimate the correspondences and the pose simultaneously. Please visit the 2D-3D Pose Estimation Project page for more details about pose estimation.
- Position update: after successful matching and pose estimation we update the position of the mobile robot. The algorithm even works with partially occluded images and missing or non-extractable wedges of the landmark.
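To illustrate the feature extraction step, the following is a minimal sketch of a standard line-detecting Hough transform over a set of edge points; the page does not describe the details of the modified algorithm, so the function name, parameters, and voting scheme here are generic textbook choices, not the project's actual implementation:

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0, rho_max=200.0):
    """Vote for lines rho = x*cos(theta) + y*sin(theta) in a
    (theta, rho) accumulator and return the strongest line.
    Generic sketch; the project's modified variant is not specified."""
    n_rho = int(2 * rho_max / rho_res) + 1
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int(round((rho + rho_max) / rho_res))
            if 0 <= r < n_rho:
                acc[t][r] += 1  # each point votes for all lines through it
    # the accumulator cell with the most votes is the dominant line
    votes, t, r = max((acc[t][r], t, r)
                      for t in range(n_theta) for r in range(n_rho))
    return math.pi * t / n_theta, r * rho_res - rho_max, votes
```

Because every edge point votes independently, peaks in the accumulator survive even when parts of the landmark are occluded, which is one reason Hough-style extraction suits this setting.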
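The tracking loop above (odometric prediction, local correspondence search, position update) can be sketched in a heavily simplified 2D form. The project's actual method is 2D-3D pose estimation as described on the linked project page; this toy version only shows the control flow, and all function names, the search radius, and the mean-residual correction are illustrative assumptions:

```python
import math

def predict_pose(pose, d_forward, d_theta):
    """Dead-reckoning prediction of (x, y, heading) from odometry."""
    x, y, th = pose
    th += d_theta
    return (x + d_forward * math.cos(th),
            y + d_forward * math.sin(th),
            th)

def match_and_correct(pose, landmark_pts, observed_pts, radius=1.0):
    """Local search: pair each observed point (already expressed in the
    world frame via the predicted pose) with the nearest known landmark
    point within `radius`, then shift the pose by the mean residual.
    Unmatched observations are simply skipped, which is what makes the
    update tolerant of occluded or missing landmark parts."""
    x, y, th = pose
    dx = dy = 0.0
    n = 0
    for ox, oy in observed_pts:
        best, best_d = None, radius
        for lx, ly in landmark_pts:
            d = math.hypot(lx - ox, ly - oy)
            if d < best_d:
                best, best_d = (lx, ly), d
        if best is not None:
            dx += best[0] - ox
            dy += best[1] - oy
            n += 1
    if n:  # correct only the translation in this simplified sketch
        x += dx / n
        y += dy / n
    return (x, y, th)
```

A usage example: predict one step forward from the origin, then correct against two known landmark points of which only one is actually observed (the other is treated as occluded) plus a measurement offset that the update removes.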