Abstract
This study investigated the effectiveness of synthetic events in improving the accuracy of visual object trackers, particularly in scenarios where conventional RGB data struggles, such as fast-moving objects, motion blur, and varying lighting conditions, highlighting the potential of event cameras for tracking applications. Synthetic events were generated from RGB Visual Object Tracking (VOT) datasets using the v2e toolbox and post-processed in the iniVation Dynamic Vision (DV) software. The post-processed event data was then fused with the original RGB data, and tracking performance was evaluated with the PyTracking library. The results showed a notable increase in tracking accuracy when post-processed synthetic events were integrated with RGB data. In conclusion, synthetically generated events can augment current state-of-the-art (SOTA) VOT frameworks with minimal Neural Network (NN) adjustments.
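To make the fusion step concrete, the sketch below is a minimal, hypothetical illustration (not the authors' implementation): asynchronous events in (x, y, timestamp, polarity) form, such as those produced by v2e, are accumulated over one RGB exposure window into a normalized event frame and alpha-blended with the corresponding RGB frame before being passed to a tracker. All function names, the blend weight, and the toy data are illustrative assumptions.

```python
# Minimal illustrative sketch (hypothetical names, not the authors' code):
# accumulate (x, y, t, polarity) events into an event frame and fuse with RGB.
import numpy as np

def events_to_frame(events: np.ndarray, height: int, width: int) -> np.ndarray:
    """Accumulate an (N, 4) array of (x, y, t, polarity) events into a
    single-channel image of signed event counts, normalized to [0, 1]."""
    frame = np.zeros((height, width), dtype=np.float32)
    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    pol = events[:, 3].astype(np.float32)  # +1 for ON events, -1 for OFF
    np.add.at(frame, (ys, xs), pol)        # signed per-pixel event counts
    peak = np.abs(frame).max()
    return frame / (2 * peak) + 0.5 if peak > 0 else frame + 0.5

def fuse_rgb_events(rgb: np.ndarray, event_frame: np.ndarray,
                    alpha: float = 0.6) -> np.ndarray:
    """Alpha-blend an (H, W, 3) RGB frame with a single-channel event frame."""
    ev3 = np.repeat(event_frame[..., None], 3, axis=2)
    return alpha * rgb.astype(np.float32) / 255.0 + (1.0 - alpha) * ev3

# Toy usage: fuse 10 random events with a flat gray frame at a common
# DVS resolution (346 x 260).
h, w = 260, 346
rng = np.random.default_rng(0)
events = np.stack([rng.integers(0, w, 10), rng.integers(0, h, 10),
                   rng.random(10), rng.choice([-1.0, 1.0], 10)], axis=1)
fused = fuse_rgb_events(np.full((h, w, 3), 128, dtype=np.uint8),
                        events_to_frame(events, h, w))
```

In a full pipeline, each event window would be time-aligned with the RGB exposure and the fused tensor fed to a tracker; the fixed blend weight used here is an arbitrary simplification of how an RGB-event fusion network would weigh the two modalities.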
Acknowledgement
The authors acknowledge the support provided by the Khalifa University of Science and Technology under Faculty Start-Up grant FSU-2022-003 (Award No. 8474000401).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Alansari, M., AlRemeithi, H., Alansari, S., Werghi, N., Javed, S. (2024). Performance Analysis of Synthetic Events via Visual Object Trackers. In: Arai, K. (eds) Intelligent Computing. SAI 2024. Lecture Notes in Networks and Systems, vol 1018. Springer, Cham. https://doi.org/10.1007/978-3-031-62269-4_26
DOI: https://doi.org/10.1007/978-3-031-62269-4_26
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-62268-7
Online ISBN: 978-3-031-62269-4
eBook Packages: Intelligent Technologies and Robotics (R0)