
Performance Analysis of Synthetic Events via Visual Object Trackers

  • Conference paper
  • First Online:
Intelligent Computing (SAI 2024)

Part of the book series: Lecture Notes in Networks and Systems ((LNNS,volume 1018))


Abstract

This study investigated the effectiveness of synthetic events in enhancing the accuracy of visual object trackers, particularly in scenarios where conventional RGB data struggles, such as rapidly moving objects, motion blur, and varying lighting conditions; these cases highlight the potential of event cameras for tracking applications. Synthetic events were generated from RGB Visual Object Tracking (VOT) datasets using the v2e toolbox and post-processed in the iniVation Dynamic Vision (DV) software. The post-processed event data were then fused with the original RGB data, and tracking performance was evaluated with the PyTracking library. The results showed a notable increase in tracking efficacy when the post-processed synthetic events were integrated with RGB data. In conclusion, synthetically generated events can augment current state-of-the-art (SOTA) VOT frameworks with minimal Neural Network (NN) adjustments.
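The pipeline described above (RGB frames → synthetic events → post-processing → fusion with RGB → tracker evaluation) can be sketched in miniature. The snippet below is an illustrative assumption, not the paper's implementation: it accumulates DVS-style `(x, y, polarity, timestamp)` events into a normalized event frame and blends it with a grayscale frame by simple weighted averaging. The function names and the fusion weight `alpha` are hypothetical; the actual work uses the v2e toolbox and DV software for these stages.

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate signed event polarities into an image-like array.

    Each event is a tuple (x, y, polarity, timestamp); ON events add +1
    and OFF events add -1 at their pixel location.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    for x, y, polarity, _t in events:
        frame[y, x] += 1.0 if polarity > 0 else -1.0
    # Normalize from [-span, +span] to [0, 1] so the event frame can be
    # fused with an intensity image; an all-zero frame maps to mid-gray.
    span = np.abs(frame).max()
    if span > 0:
        frame = (frame / span + 1.0) / 2.0
    else:
        frame[:] = 0.5
    return frame

def fuse(rgb_gray, event_frame, alpha=0.5):
    """Blend a grayscale frame with an event frame (both in [0, 1])."""
    return alpha * rgb_gray + (1.0 - alpha) * event_frame

# Toy usage: two ON events and one OFF event on a 4x4 sensor.
events = [(1, 1, 1, 0.00), (2, 2, 1, 0.01), (3, 0, 0, 0.02)]
ev = events_to_frame(events, 4, 4)
fused = fuse(np.full((4, 4), 0.5, dtype=np.float32), ev)
```

A fused frame like `fused` could then be fed to an RGB tracker in place of the raw frame, which is the spirit of the minimal-adjustment fusion the abstract describes.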



Notes

  1. https://github.com/uzh-rpg/event-based_vision_resources
  2. https://inivation.gitlab.io/dv/dv-docs/
  3. https://inivation.com/wp-content/uploads/2020/05/White-Paper-May-2020.pdf
  4. https://github.com/visionml/pytracking



Acknowledgement

This publication acknowledges the support provided by the Khalifa University of Science and Technology under Faculty Start-Up grants FSU-2022-003 Award No. 8474000401.

Author information

Corresponding author

Correspondence to Mohamad Alansari.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Alansari, M., AlRemeithi, H., Alansari, S., Werghi, N., Javed, S. (2024). Performance Analysis of Synthetic Events via Visual Object Trackers. In: Arai, K. (eds) Intelligent Computing. SAI 2024. Lecture Notes in Networks and Systems, vol 1018. Springer, Cham. https://doi.org/10.1007/978-3-031-62269-4_26
