
Weighted attentional blocks for probabilistic object tracking

  • Original Article
  • The Visual Computer

Abstract

In this paper, we represent the target object with multiple attentional blocks, reflecting findings on selective visual attention in human perception. The attentional blocks are extracted from the saliency map with a branch-and-bound search, which also determines the weight of each block. An independent particle filter tracks each attentional block, and the per-block tracking results are combined in a linear weighting scheme to obtain the location of the entire target object. The attentional blocks are then propagated to the object location found in each new frame, and the state of the most likely particle in each block is updated with its new propagated position. In addition, to avoid error accumulation caused by appearance variations, the object template and the positions of the attentional blocks are adaptively updated during tracking. Experimental results show that the proposed algorithm efficiently tracks salient objects and better handles partial occlusions and large variations in appearance.
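As a minimal sketch of the fusion step described above (not the authors' implementation), the following Python fragment assumes each attentional block already has its own particle set and a saliency-derived weight, estimates each block's position, and combines the block estimates by linear weighting into one object-center estimate. The function names, the fixed block-to-center offsets, the posterior-mean estimate (the paper keeps the most likely particle per block), and the toy numbers are all illustrative assumptions; particle propagation and the adaptive template update are omitted.

```python
# Minimal illustrative sketch (not the authors' implementation) of the
# linear weighting scheme: each attentional block is tracked by its own
# particle set, and the per-block estimates are fused into one
# object-center estimate using saliency-derived block weights.
import numpy as np


def estimate_block_position(particles, likelihoods):
    """Position estimate for one attentional block from its particle set.

    particles   : (N, 2) array of candidate (x, y) block positions
    likelihoods : (N,) array of unnormalized observation likelihoods

    The posterior mean is used here for simplicity; the paper instead
    keeps the state of the most likely particle in each block.
    """
    w = likelihoods / likelihoods.sum()
    return w @ particles


def combine_blocks(block_positions, block_offsets, block_weights):
    """Linearly combine block estimates into an object-center estimate.

    block_positions : per-block (x, y) estimates
    block_offsets   : assumed fixed offsets from each block to the center
    block_weights   : saliency-derived block weights (need not sum to 1)
    """
    w = np.asarray(block_weights, dtype=float)
    w /= w.sum()
    centers = np.asarray(block_positions) + np.asarray(block_offsets)
    return w @ centers


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy blocks at (90, 50) and (110, 50) belonging to an object
    # centered at (100, 50); the offsets map each block to that center.
    block_centers = [(90.0, 50.0), (110.0, 50.0)]
    offsets = [(10.0, 0.0), (-10.0, 0.0)]
    weights = [0.7, 0.3]  # e.g. from the saliency-based block extraction
    estimates = []
    for c in block_centers:
        particles = rng.normal(c, 2.0, size=(200, 2))
        lik = np.exp(-0.5 * np.sum((particles - np.asarray(c)) ** 2, axis=1))
        estimates.append(estimate_block_position(particles, lik))
    print("estimated object center:", combine_blocks(estimates, offsets, weights))
```

Normalizing the block weights before fusion keeps the combined estimate inside the convex hull of the block votes, which is one simple way to realize the weighted voting the abstract describes.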





Acknowledgements

This research was supported by the National Natural Science Foundation of China (61232011, 61202294), the National Key Technology R&D Program (No. 2011BAH27B01, 2011BHA16B08), the NSFC-Guangdong Joint Fund (No. U1135005, U1201252), the Industry-Academy-Research Project of Guangdong (No. 2011A091000032), and the Special Foundation of Industry Development for Biology, Internet, New Energy and New Material of Shenzhen (No. JC201104220324A).

Author information

Corresponding author

Correspondence to Xiaonan Luo.


About this article

Cite this article

Wu, H., Li, G. & Luo, X. Weighted attentional blocks for probabilistic object tracking. Vis Comput 30, 229–243 (2014). https://doi.org/10.1007/s00371-013-0823-3
