{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,8,31]],"date-time":"2024-08-31T07:30:33Z","timestamp":1725089433875},"reference-count":33,"publisher":"MDPI AG","issue":"14","license":[{"start":{"date-parts":[[2023,7,13]],"date-time":"2023-07-13T00:00:00Z","timestamp":1689206400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100016258","name":"Institute of Civil Military Technology Cooperation","doi-asserted-by":"publisher","award":["UM22311RD3"],"id":[{"id":"10.13039\/501100016258","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"Although Short-Wave Infrared (SWIR) sensors have advantages in terms of robustness in bad weather and low-light conditions, the SWIR images have not been well studied for automated object detection and tracking systems. The majority of previous multi-object tracking studies have focused on pedestrian tracking in visible-spectrum images, but tracking different types of vehicles is also important in city-surveillance scenarios. In addition, the previous studies were based on high-computing-power environments such as GPU workstations or servers, but edge computing should be considered to reduce network bandwidth usage and privacy concerns in city-surveillance scenarios. In this paper, we propose a fast and effective multi-object tracking method, called Multi-Class Distance-based Tracking (MCDTrack), on SWIR images of city-surveillance scenarios in a low-power and low-computation edge-computing environment. Eight-bit integer quantized object detection models are used, and simple distance and IoU-based similarity scores are employed to realize effective multi-object tracking in an edge-computing environment. Our MCDTrack is not only superior to previous multi-object tracking methods but also shows high tracking accuracy of 77.5% MOTA and 80.2% IDF1 although the object detection and tracking are performed on the edge-computing device. 
Our study results indicate that a robust city-surveillance solution can be developed in an edge-computing environment with low-frame-rate SWIR images.<\/jats:p>","DOI":"10.3390\/s23146373","type":"journal-article","created":{"date-parts":[[2023,7,14]],"date-time":"2023-07-14T04:49:30Z","timestamp":1689310170000},"page":"6373","source":"Crossref","is-referenced-by-count":2,"title":["Multi-Object Tracking on SWIR Images for City Surveillance in an Edge-Computing Environment"],"prefix":"10.3390","volume":"23","author":[{"ORCID":"http:\/\/orcid.org\/0000-0003-3745-886X","authenticated-orcid":false,"given":"Jihun","family":"Park","sequence":"first","affiliation":[{"name":"A2Mind Inc., Daejeon 34087, Republic of Korea"}]},{"ORCID":"http:\/\/orcid.org\/0009-0000-0179-0639","authenticated-orcid":false,"given":"Jinseok","family":"Hong","sequence":"additional","affiliation":[{"name":"A2Mind Inc., Daejeon 34087, Republic of Korea"}]},{"given":"Wooil","family":"Shim","sequence":"additional","affiliation":[{"name":"A2Mind Inc., Daejeon 34087, Republic of Korea"}]},{"ORCID":"http:\/\/orcid.org\/0009-0001-2384-1326","authenticated-orcid":false,"given":"Dae-Jin","family":"Jung","sequence":"additional","affiliation":[{"name":"A2Mind Inc., Daejeon 34087, Republic of Korea"}]}],"member":"1968","published-online":{"date-parts":[[2023,7,13]]},"reference":[{"key":"ref_1","first-page":"58","article-title":"Robust drone detection with static VIS and SWIR cameras for day and night counter-UAV","volume":"Volume 11166","year":"2019","journal-title":"Proceedings of the Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies III"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"500","DOI":"10.5916\/jamet.2020.44.6.500","article-title":"Deep learning-based drone detection with SWIR cameras","volume":"44","author":"Park","year":"2020","journal-title":"J. Adv. Mar. Eng. Technol. JAMET"},{"key":"ref_3","first-page":"65","article-title":"A real-time SWIR-image-based gas leak detection and localization system","volume":"Volume 11129","author":"Alhmoudi","year":"2019","journal-title":"Infrared Sensors, Devices, and Applications IX"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Pavlovi\u0107, M.S., Milanovi\u0107, P.D., Stankovi\u0107, M.S., Peri\u0107, D.B., Popadi\u0107, I.V., and Peri\u0107, M.V. (2022). Deep Learning Based SWIR Object Detection in Long-Range Surveillance Systems: An Automated Cross-Spectral Approach. Sensors, 22.","DOI":"10.3390\/s22072562"},{"key":"ref_5","first-page":"187","article-title":"What good is SWIR? Passive day comparison of VIS, NIR, and SWIR","volume":"Volume 8706","author":"Driggers","year":"2013","journal-title":"Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XXIV"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Sun, P., Jiang, Y., Yu, D., Weng, F., Yuan, Z., Luo, P., Liu, W., and Wang, X. (2022, January 23\u201327). Bytetrack: Multi-object tracking by associating every detection box. Proceedings of the Computer Vision\u2013ECCV 2022: 17th European Conference, Tel Aviv, Israel. Proceedings, Part XXII.","DOI":"10.1007\/978-3-031-20047-2_1"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Wojke, N., Bewley, A., and Paulus, D. (2017, January 17\u201320). Simple Online and Realtime Tracking with a Deep Association Metric. 
Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China.","DOI":"10.1109\/ICIP.2017.8296962"},{"key":"ref_8","unstructured":"Rankin, A.L., and Matthies, L.H. (2008). Daytime Mud Detection for Unmanned Ground Vehicle Autonomous Navigation, California Inst of Technology Pasadena Jet Propulsion Lab. Technical Report."},{"key":"ref_9","first-page":"746","article-title":"Long-range night\/day human identification using active-SWIR imaging","volume":"Volume 8704","author":"Lemoff","year":"2013","journal-title":"Infrared Technology and Applications XXXIX"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Kandylakis, Z., Vasili, K., and Karantzalos, K. (2019). Fusing multimodal video data for detecting moving objects\/targets in challenging indoor and outdoor scenes. Remote Sens., 11.","DOI":"10.3390\/rs11040446"},{"key":"ref_11","unstructured":"Ren, S., He, K., Girshick, R., and Sun, J. (2015, January 7\u201312). Faster r-cnn: Towards real-time object detection with region proposal networks. Proceedings of the Advances in neural information processing systems, Montreal, QC, Canada."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Redmon, J., and Farhadi, A. (2017, January 21\u201326). YOLO9000: Better, faster, stronger. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.690"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., and Zitnick, C.L. (2014, January 6\u201312). Microsoft coco: Common objects in context. Proceedings of the European Conference on Computer Vision, Zurich, Switzerland.","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"303","DOI":"10.1007\/s11263-009-0275-4","article-title":"The pascal visual object classes (voc) challenge","volume":"88","author":"Everingham","year":"2010","journal-title":"Int. J. Comput. Vis."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016, January 27\u201330). You Only Look Once: Unified, Real-Time Object Detection. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.91"},{"key":"ref_16","unstructured":"Redmon, J., and Farhadi, A. (2018). Yolov3: An incremental improvement. arXiv."},{"key":"ref_17","unstructured":"Bochkovskiy, A., Wang, C.Y., and Liao, H.Y.M. (2020). YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv."},{"key":"ref_18","unstructured":"Jocher, G. (2023, July 11). YOLOv5 by Ultralytics. Available online: https:\/\/docs.ultralytics.com\/models\/yolov5\/#supported-tasks."},{"key":"ref_19","unstructured":"Ge, Z., Liu, S., Wang, F., Li, Z., and Sun, J. (2021). YOLOX: Exceeding YOLO Series in 2021. arXiv."},{"key":"ref_20","unstructured":"Wang, C.Y., Bochkovskiy, A., and Liao, H.Y.M. (2022). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv."},{"key":"ref_21","unstructured":"Zhu, X., Su, W., Lu, L., Li, B., Wang, X., and Dai, J. (2020). Deformable detr: Deformable transformers for end-to-end object detection. arXiv."},{"key":"ref_22","unstructured":"Zong, Z., Song, G., and Liu, Y. (2022). DETRs with Collaborative Hybrid Assignments Training. 
arXiv."},{"key":"ref_23","unstructured":"Chen, Q., Wang, J., Han, C., Zhang, S., Li, Z., Chen, X., Chen, J., Wang, X., Han, S., and Zhang, G. (2022). Group detr v2: Strong object detector with encoder-decoder pretraining. arXiv."},{"key":"ref_24","unstructured":"Zhang, H., Li, F., Liu, S., Zhang, L., Su, H., Zhu, J., Ni, L., and Shum, H.Y. (2022). Dino: Detr with improved denoising anchor boxes for end-to-end object detection. arXiv."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Bewley, A., Ge, Z., Ott, L., Ramos, F., and Upcroft, B. (2016, January 25\u201328). Simple online and realtime tracking. Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA.","DOI":"10.1109\/ICIP.2016.7533003"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"35","DOI":"10.1115\/1.3662552","article-title":"A new approach to linear filtering and prediction problems","volume":"82","author":"Kalman","year":"1960","journal-title":"J. Basic Eng."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"83","DOI":"10.1002\/nav.3800020109","article-title":"The Hungarian method for the assignment problem","volume":"2","author":"Kuhn","year":"1955","journal-title":"Nav. Res. Logist. Q."},{"key":"ref_28","unstructured":"Wang, Y.H. (2022). SMILEtrack: SiMIlarity LEarning for Multiple Object Tracking. arXiv."},{"key":"ref_29","unstructured":"Aharon, N., Orfaig, R., and Bobrovsky, B.Z. (2022). BoT-SORT: Robust associations multi-pedestrian tracking. arXiv."},{"key":"ref_30","unstructured":"Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., and Devin, M. (2015). TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. arXiv."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"261","DOI":"10.1038\/s41592-019-0686-2","article-title":"SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python","volume":"17","author":"Virtanen","year":"2020","journal-title":"Nat. Methods"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"246309","DOI":"10.1155\/2008\/246309","article-title":"Evaluating multiple object tracking performance: The clear mot metrics","volume":"2008","author":"Bernardin","year":"2008","journal-title":"EURASIP J. Image Video Process."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Ristani, E., Solera, F., Zou, R., Cucchiara, R., and Tomasi, C. (2016, January 8\u201316). Performance measures and a data set for multi-target, multi-camera tracking. Proceedings of the Computer Vision\u2013ECCV 2016 Workshops, Amsterdam, The Netherlands. Proceedings, Part II.","DOI":"10.1007\/978-3-319-48881-3_2"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/14\/6373\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,7,14]],"date-time":"2023-07-14T05:12:40Z","timestamp":1689311560000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/14\/6373"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,7,13]]},"references-count":33,"journal-issue":{"issue":"14","published-online":{"date-parts":[[2023,7]]}},"alternative-id":["s23146373"],"URL":"https:\/\/doi.org\/10.3390\/s23146373","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,7,13]]}}}