Abstract
To address the problem of counting multiple moving vehicles in complex traffic environments, this paper proposes a detection and tracking method based on YOLO (You Only Look Once) and Deep Sort, and evaluates its performance on the public UA-DETRAC dataset and two self-collected datasets. The YOLOv4 algorithm is first used to detect each moving vehicle, and the Deep Sort algorithm is then adopted to track the multiple target vehicles. Experimental results show that moving vehicles can be effectively detected and tracked in real time under different traffic conditions, including daytime, nighttime, rainy, and crowded scenes: the proposed method reaches 93% average detection accuracy at a tracking speed of 20 fps and copes with varied traffic and weather conditions.
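For readers who want a concrete picture of the pipeline summarized above, the short Python sketch below shows one way a per-frame YOLOv4 detector could feed a Deep Sort tracker, with unique track IDs used to count vehicles. The YoloV4Detector and DeepSortTracker wrappers and the count_vehicles helper are hypothetical placeholders introduced here for illustration, not the authors' implementation; only OpenCV's video-reading calls are real API.

    import cv2  # OpenCV, used only for reading video frames


    class YoloV4Detector:
        """Hypothetical wrapper around a YOLOv4 model (placeholder)."""

        def detect(self, frame):
            # Expected to return a list of vehicle detections for the frame,
            # e.g. tuples of (x, y, w, h, confidence, class_id).
            raise NotImplementedError


    class DeepSortTracker:
        """Hypothetical wrapper around a Deep Sort tracker (placeholder)."""

        def update(self, detections, frame):
            # Expected to associate detections with existing tracks using
            # motion (Kalman filter) and appearance cues, returning tuples
            # of (track_id, x, y, w, h).
            raise NotImplementedError


    def count_vehicles(video_path, detector, tracker):
        """Detect, track, and count vehicles via the number of unique track IDs."""
        cap = cv2.VideoCapture(video_path)
        seen_ids = set()
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            detections = detector.detect(frame)         # YOLOv4 detection step
            tracks = tracker.update(detections, frame)  # Deep Sort tracking step
            for track_id, *_ in tracks:
                seen_ids.add(track_id)                  # a new track ID counts as a new vehicle
        cap.release()
        return len(seen_ids)

In practice the placeholders would be backed by a YOLOv4 inference backend (e.g. Darknet or an OpenCV DNN model) and a Deep Sort implementation combining a Kalman-filter motion model with an appearance-embedding association metric, as described in the paper.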
Acknowledgements
We acknowledge funding from the Hainan Provincial Natural Science Foundation of China (No. 620RC558), the Natural Science Foundation Project of CQCSTC (No. cstc2018jcyj AX0398), and the Science Project of Hainan University (KYQD(ZR)20022).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Yan, J., Wang, Z., Li, Y., Chen, Z., Chen, X., Lou, L. (2022). Detecting and Tracking the Moving Vehicles Based on Deep Learning. In: Jansen, T., Jensen, R., Mac Parthaláin, N., Lin, CM. (eds) Advances in Computational Intelligence Systems. UKCI 2021. Advances in Intelligent Systems and Computing, vol 1409. Springer, Cham. https://doi.org/10.1007/978-3-030-87094-2_32
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-87093-5
Online ISBN: 978-3-030-87094-2
eBook Packages: Intelligent Technologies and Robotics, Intelligent Technologies and Robotics (R0)