
Assessing Image Features for Vision-Based Robot Positioning

  • Published in: Journal of Intelligent and Robotic Systems 30 (2001)

Abstract

The development of any robotics application relying on visual information always raises the key question of which image features are most informative about the motion to be performed. In this paper, we address this question in the context of visual robot positioning, where a neural network is used to learn the mapping between image features and robot movements, and global image descriptors are preferred to local geometric features. Using a statistical measure of variable interdependence called Mutual Information, we select the subsets of image features most relevant for determining pose variations along each of the six degrees of freedom (dof's) of camera motion. Four families of global features are considered: geometric moments, eigenfeatures, Local Feature Analysis vectors, and a novel feature called Pose-Image Covariance vectors. The experimental results described show the quantitative and qualitative benefits of performing this feature selection prior to training the neural network: fewer network inputs are needed, thus considerably shortening training times; the dof's that would yield larger errors can be determined beforehand, so that more informative features can be sought; and the order of the features selected for each dof often admits an intuitive explanation, which in turn helps to provide insights for devising features tailored to each dof.
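To make the selection procedure concrete, here is a minimal Python sketch of Mutual-Information-based feature ranking in the spirit of the paper. It is an illustration, not the authors' implementation: raw geometric moments up to third order stand in for the candidate global features, MI is estimated by simple histogram binning, and the function names (geometric_moments, mutual_information, rank_features_for_dof) and the bin count are hypothetical choices.

```python
# Minimal sketch (illustrative, not the authors' code): rank candidate global
# image features by their estimated Mutual Information (MI) with one pose
# parameter (a single dof of camera motion).
import numpy as np

def geometric_moments(img, order=3):
    """Raw geometric moments m_pq = sum_{x,y} x^p y^q I(x,y) for p+q <= order."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return np.array([np.sum((xs ** p) * (ys ** q) * img)
                     for p in range(order + 1)
                     for q in range(order + 1 - p)])

def mutual_information(f, d, bins=16):
    """Histogram estimate of MI (in nats) between feature values f and dof values d."""
    joint, _, _ = np.histogram2d(f, d, bins=bins)
    pxy = joint / joint.sum()                  # joint distribution P(f, d)
    px = pxy.sum(axis=1, keepdims=True)        # marginal P(f)
    py = pxy.sum(axis=0, keepdims=True)        # marginal P(d)
    nz = pxy > 0                               # restrict to nonzero cells to avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

def rank_features_for_dof(images, dof_values, bins=16):
    """Sort feature indices by decreasing MI with one pose dof."""
    feats = np.array([geometric_moments(im) for im in images])  # samples x features
    mi = np.array([mutual_information(feats[:, j], dof_values, bins)
                   for j in range(feats.shape[1])])
    return np.argsort(mi)[::-1], mi
```

In the paper's setting, such a ranking would be computed once per dof, and the top-ranked features for that dof would become the inputs of the corresponding neural network, shrinking both the input layer and the training time.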




Cite this article

Wells, G., Torras, C. Assessing Image Features for Vision-Based Robot Positioning. Journal of Intelligent and Robotic Systems 30, 95–118 (2001). https://doi.org/10.1023/A:1008198321503