Abstract
Active cameras provide a mobile robot with the ability to fixate and track features over a wide field of view. However, their use emphasises serial attention, focussing on a succession of scene features, and raises the question of how best to direct this attention to provide localisation information. This paper describes a fully automatic system able to detect, store and track suitable landmark features during goal-directed navigation. The robot chooses which of the available landmarks to track at a given time to best improve its position knowledge, and decides when to search for new features. Localisation performance improves on that achieved using odometry alone and shows significant advantages over passive structure-from-motion techniques. Rigorous consideration is given to the propagation of uncertainty in the estimates of the positions of the robot and scene features as the robot moves, fixates and shifts fixation. The paper shows how these estimates are inherently coupled in any map-building system, and how features can reliably be re-found after periods of neglect, mitigating the "motion drift" problem often encountered in structure-from-motion algorithms.
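The coupling of robot and feature estimates that the abstract describes is characteristic of covariance-based map building: robot and landmark positions live in one state vector whose full covariance matrix carries cross-terms, so measuring any one landmark tightens the robot estimate and, through the cross-covariances, the other landmarks too. The following is a minimal sketch of that idea, not the authors' actual formulation: it assumes a planar robot, 2-D point landmarks observed as relative positions, and an illustrative selection rule (`select_feature`) that fixates the landmark with the largest predicted innovation covariance, i.e. the measurement expected to be most informative.

```python
import numpy as np

def measurement_jacobian(n_features, i):
    # Relative observation of landmark i: z = x_feature_i - x_robot.
    # State layout: [robot_x, robot_y, f0_x, f0_y, f1_x, f1_y, ...]
    H = np.zeros((2, 2 + 2 * n_features))
    H[:, 0:2] = -np.eye(2)
    H[:, 2 + 2 * i: 4 + 2 * i] = np.eye(2)
    return H

def select_feature(P, n_features, R):
    # Illustrative heuristic: fixate the landmark whose predicted
    # measurement is currently most uncertain (largest innovation
    # covariance S), since measuring it yields the greatest reduction
    # in the coupled robot/map uncertainty.
    scores = []
    for i in range(n_features):
        H = measurement_jacobian(n_features, i)
        S = H @ P @ H.T + R
        scores.append(np.trace(S))
    return int(np.argmax(scores))

def ekf_update(x, P, z, i, R):
    # Standard Kalman update on the joint robot-and-map state: the
    # gain K spreads the correction through the cross-covariances,
    # so the robot and every coupled landmark estimate improve together.
    n_features = (len(x) - 2) // 2
    H = measurement_jacobian(n_features, i)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

For example, with two landmarks and a robot whose position covariance dominates, one fixation update on the selected landmark shrinks the robot's covariance block, which is the effect the paper exploits when re-finding features after periods of neglect.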
© 1998 Springer-Verlag Berlin Heidelberg
Cite this paper
Davison, A.J., Murray, D.W. (1998). Mobile robot localisation using active vision. In: Burkhardt, H., Neumann, B. (eds) Computer Vision — ECCV’98. ECCV 1998. Lecture Notes in Computer Science, vol 1407. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0054781
DOI: https://doi.org/10.1007/BFb0054781
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-64613-6
Online ISBN: 978-3-540-69235-5