{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,1,16]],"date-time":"2025-01-16T05:36:27Z","timestamp":1737005787868,"version":"3.33.0"},"reference-count":58,"publisher":"MDPI AG","issue":"9","license":[{"start":{"date-parts":[[2024,4,25]],"date-time":"2024-04-25T00:00:00Z","timestamp":1714003200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["42274020"],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Science and Technology Planning Project of Jiangsu Province","award":["BE2023692"]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["41874006"],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"With the development of simultaneous positioning and mapping technology in the field of automatic driving, the current simultaneous localization and mapping scheme is no longer limited to a single sensor and is developing in the direction of multi-sensor fusion to enhance the robustness and accuracy. In this study, a localization and mapping scheme named LVI-fusion based on multi-sensor fusion of camera, lidar and IMU is proposed. Different sensors have different data acquisition frequencies. To solve the problem of time inconsistency in heterogeneous sensor data tight coupling, the time alignment module is used to align the time stamp between the lidar, camera and IMU. The image segmentation algorithm is used to segment the dynamic target of the image and extract the static key points. At the same time, the optical flow tracking based on the static key points are carried out and a robust feature point depth recovery model is proposed to realize the robust estimation of feature point depth. Finally, lidar constraint factor, IMU pre-integral constraint factor and visual constraint factor together construct the error equation that is processed with a sliding window-based optimization module. 
Experimental results show that the proposed algorithm has competitive accuracy and robustness.<\/jats:p>","DOI":"10.3390\/rs16091524","type":"journal-article","created":{"date-parts":[[2024,4,26]],"date-time":"2024-04-26T07:23:47Z","timestamp":1714116227000},"page":"1524","source":"Crossref","is-referenced-by-count":4,"title":["LVI-Fusion: A Robust Lidar-Visual-Inertial SLAM Scheme"],"prefix":"10.3390","volume":"16","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-0237-3663","authenticated-orcid":false,"given":"Zhenbin","family":"Liu","sequence":"first","affiliation":[{"name":"School of Environment Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China"}]},{"given":"Zengke","family":"Li","sequence":"additional","affiliation":[{"name":"School of Environment Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China"}]},{"given":"Ao","family":"Liu","sequence":"additional","affiliation":[{"name":"School of Environment Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China"}]},{"given":"Kefan","family":"Shao","sequence":"additional","affiliation":[{"name":"School of Environment Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China"}]},{"given":"Qiang","family":"Guo","sequence":"additional","affiliation":[{"name":"School of Environment Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China"}]},{"given":"Chuanhao","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Environment Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China"}]}],"member":"1968","published-online":{"date-parts":[[2024,4,25]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1309","DOI":"10.1109\/TRO.2016.2624754","article-title":"Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age","volume":"32","author":"Cadena","year":"2016","journal-title":"IEEE Trans. Robot."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"6408","DOI":"10.1109\/JSEN.2020.3038432","article-title":"Attention-SLAM: A Visual Monocular SLAM Learning From Human Gaze","volume":"21","author":"Li","year":"2021","journal-title":"IEEE Sens. J."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Debeunne, C., and Vivet, D. (2020). A review of visual-Lidar fusion based simultaneous localization and mapping. Sensors, 20.","DOI":"10.3390\/s20072068"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/TRO.2016.2597321","article-title":"On-Manifold Preintegration for Real-Time Visual\u2014Inertial Odometry","volume":"33","author":"Forster","year":"2017","journal-title":"IEEE Trans. Robot."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Tao, Y., He, Y., and Ma, X. (2021, January 24\u201326). SLAM Method Based on Multi-Sensor Information Fusion. Proceedings of the 2021 International Conference on Computer Network, Electronic and Automation (ICCNEA), Xi\u2019an, China.","DOI":"10.1109\/ICCNEA53019.2021.00070"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Yu, H., Wang, Q., Yan, C., Feng, Y., Sun, Y., and Li, L. (2024). DLD-SLAM: RGB-D Visual Simultaneous Localisation and Mapping in Indoor Dynamic Environments Based on Deep Learning. 
Remote Sens., 16.","DOI":"10.3390\/rs16020246"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Huletski, A., Kartashov, D., and Krinkin, K. (2015, January 9\u201314). Evaluation of the modern visual SLAM methods. Proceedings of the 2015 Artificial Intelligence and Natural Language and Information Extraction, Social Media and Web Search FRUCT Conference (AINL-ISMW FRUCT), St. Petersburg, Russia.","DOI":"10.1109\/AINL-ISMW-FRUCT.2015.7382963"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Shan, T., Englot, B., and Ratti, C. (2021\u20135, January 30). LVI-SAM: Tightly-Coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping. Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi\u2019an, China.","DOI":"10.1109\/ICRA48506.2021.9561996"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"2266","DOI":"10.1109\/LRA.2021.3138527","article-title":"M2DGR: A Multi-Sensor and Multi-Scenario SLAM Dataset for Ground Robots","volume":"7","author":"Yin","year":"2022","journal-title":"IEEE Robot. Auto Let."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"2","DOI":"10.1007\/s10846-022-01582-8","article-title":"Camera, LiDAR and Multi-modal SLAM Systems for Autonomous Ground Vehicles: A Survey","volume":"105","author":"Chghaf","year":"2022","journal-title":"J. Intell. Robot. Syst."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"1052","DOI":"10.1109\/TPAMI.2007.1049","article-title":"MonoSLAM: Real-Time Single Camera SLAM","volume":"29","author":"Davison","year":"2007","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Klein, G., and Murray, D. (2007, January 13\u201316). Parallel Tracking and Mapping for Small AR Workspaces. Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan.","DOI":"10.1109\/ISMAR.2007.4538852"},{"key":"ref_13","first-page":"1147","article-title":"ORB-SLAM: A Versatile and Accurate Monocular SLAM System","volume":"31","author":"Montiel","year":"2017","journal-title":"IEEE Trans. Robot."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Rublee, E., Rabaud, V., and Konolige, K. (2011, January 6\u201313). ORB: An efficient alternative to SIFT or SURF. Proceedings of the IEEE International Conference on Computer Vision, ICCV 2011, Barcelona, Spain.","DOI":"10.1109\/ICCV.2011.6126544"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"1255","DOI":"10.1109\/TRO.2017.2705103","article-title":"ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras","volume":"33","year":"2017","journal-title":"IEEE Trans. Robot."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Forster, C., Pizzoli, M., and Scaramuzza, D. (June, January 31). SVO: Fast semi-direct monocular visual odometry. Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China.","DOI":"10.1109\/ICRA.2014.6906584"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Engel, J., Thomas, S., and Cremers, D. (2014, January 6\u201312). Lsd-Salm: Large-Scale Direct Monocular Salm. Proceedings of the European Conference on Computer Vision, Cham, Switzerland.","DOI":"10.1007\/978-3-319-10605-2_54"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"611","DOI":"10.1109\/TPAMI.2017.2658577","article-title":"Direct Sparse Odometry","volume":"40","author":"Engel","year":"2018","journal-title":"IEEE Trans. 
Pattern Anal. Mach. Intell."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Mourikis, A.I., and Roumeliotis, S.I. (2007, January 10\u201314). A Multi-State Constraint Kalman Filter for Vision-aided Inertial Navigation. Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Rome, Italy.","DOI":"10.1109\/ROBOT.2007.364024"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"314","DOI":"10.1177\/0278364914554813","article-title":"Keyframe-based visual\u2013inertial odometry using nonlinear optimization","volume":"34","author":"Leutenegger","year":"2015","journal-title":"Int. J. Robot. Res."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"1004","DOI":"10.1109\/TRO.2018.2853729","article-title":"VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator","volume":"34","author":"Qin","year":"2018","journal-title":"IEEE Trans. Robot."},{"key":"ref_22","unstructured":"Qin, T., Pan, J., and Cao, S. (2019). A general optimization-based framework for local odometry estimation with multiple sensors. arXiv."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"1874","DOI":"10.1109\/TRO.2021.3075644","article-title":"Orb-slam3: An accurate open-source library for visual, visual\u2013inertial, and multimap slam","volume":"37","author":"Campos","year":"2021","journal-title":"IEEE Trans. Robot."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Hess, W., Kohler, D., and Rapp, H. (2016, January 16\u201321). Real-time loop closure in 2D LIDAR SLAM. Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden.","DOI":"10.1109\/ICRA.2016.7487258"},{"key":"ref_25","first-page":"1","article-title":"LOAM: Lidar Odometry and Mapping in Real-time","volume":"2","author":"Zhang","year":"2014","journal-title":"Robot. Sci. Syst."},{"key":"ref_26","unstructured":"Qin, T., and Cao, S. (2024, April 23). A-LOAM. Available online: https:\/\/github.com\/HKUST-Aerial-Robotics\/A-LOAM."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Shan, T., and Englot, B. (2018, January 1\u20135). LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain. Proceedings of the 2018 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain.","DOI":"10.1109\/IROS.2018.8594299"},{"key":"ref_28","unstructured":"Kimm, G. (2024, April 23). SC-LeGO-LOAM. Available online: https:\/\/gitee.com\/zhankun3280\/lslidar_c16_lego_loam."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Kim, G., and Kim, A. (2018, January 1\u20135). Scan Context: Egocentric Spatial Descriptor for Place Recognition within 3D Point Cloud Map. Proceedings of the 2018 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain.","DOI":"10.1109\/IROS.2018.8593953"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Zhao, S., Fang, Z., and Li, H. (2019, January 3\u20138). A Robust Laser-Inertial Odometry and Mapping Method for Large-Scale Highway Environments. Proceedings of the 2019 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China.","DOI":"10.1109\/IROS40897.2019.8967880"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Ye, H., Chen, Y., and Liu, M. (2019, January 20\u201324). Tightly Coupled 3D Lidar Inertial Odometry and Mapping. 
Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada.","DOI":"10.1109\/ICRA.2019.8793511"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Shan, T., Englot, B., and Meyers, D. (2020, October 24\u2013January 24). LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping. Proceedings of the 2020 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA.","DOI":"10.1109\/IROS45743.2020.9341176"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Qin, C., Ye, H., and Pranata, C.E. (2020, May 31\u2013August 31). LINS: A Lidar-Inertial State Estimator for Robust and Efficient Navigation. Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France.","DOI":"10.1109\/ICRA40945.2020.9197567"},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"3317","DOI":"10.1109\/LRA.2021.3064227","article-title":"FAST-LIO: A Fast, Robust Lidar-Inertial Odometry Package by Tightly-Coupled Iterated Kalman Filter","volume":"6","author":"Xu","year":"2021","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"2053","DOI":"10.1109\/TRO.2022.3141876","article-title":"FAST-LIO2: Fast Direct Lidar-Inertial Odometry","volume":"38","author":"Xu","year":"2022","journal-title":"IEEE Trans. Robot."},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"4861","DOI":"10.1109\/LRA.2022.3152830","article-title":"Faster-LIO: Lightweight Tightly Coupled Lidar-Inertial Odometry Using Parallel Sparse Incremental Voxels","volume":"7","author":"Bai","year":"2022","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Graeter, J., Wilczynski, A., and Lauer, M. (2018, October 1\u20135). LIMO: Lidar-Monocular Visual Odometry. Proceedings of the 2018 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain.","DOI":"10.1109\/IROS.2018.8594394"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Zhang, J., and Singh, S. (2015, May 26\u201330). Visual-Lidar odometry and mapping: Low-drift, robust, and fast. Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA.","DOI":"10.1109\/ICRA.2015.7139486"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"1231","DOI":"10.1177\/0278364913491297","article-title":"Vision meets robotics: The KITTI dataset","volume":"32","author":"Geiger","year":"2013","journal-title":"Int. J. Robot. Res."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Shao, W., Vijayarangan, S., and Li, C. (2019, November 3\u20138). Stereo Visual Inertial Lidar Simultaneous Localization and Mapping. Proceedings of the 2019 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China.","DOI":"10.1109\/IROS40897.2019.8968012"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Zuo, X., Geneva, P., and Lee, W. (2019, November 3\u20138). LIC-Fusion: Lidar-Inertial-Camera Odometry. Proceedings of the 2019 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China.","DOI":"10.1109\/IROS40897.2019.8967746"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Zuo, X. (2020, October 24\u2013January 24). LIC-Fusion 2.0: Lidar-Inertial-Camera Odometry with Sliding-Window Plane-Feature Tracking. 
Proceedings of the 2020 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA.","DOI":"10.1109\/IROS45743.2020.9340704"},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"1004","DOI":"10.1109\/LRA.2021.3056380","article-title":"Unified Multi-Modal Landmark Tracking for Tightly Coupled Lidar-Visual-Inertial Odometry","volume":"6","author":"Wisth","year":"2021","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"7469","DOI":"10.1109\/LRA.2021.3095515","article-title":"R2LIVE: A Robust, Real-Time, Lidar-Inertial-Visual Tightly-Coupled State Estimator and Mapping","volume":"6","author":"Lin","year":"2021","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Lin, J., and Zheng, C. (2022, May 23\u201327). R3LIVE: A Robust, Real-time, RGB-colored, Lidar-Inertial-Visual tightly-coupled state Estimation and mapping package. Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA.","DOI":"10.1109\/ICRA46639.2022.9811935"},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Zheng, C. (2022, October 23\u201327). FAST-LIVO: Fast and Tightly-coupled Sparse-Direct Lidar-Inertial-Visual Odometry. Proceedings of the 2022 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan.","DOI":"10.1109\/IROS47612.2022.9981107"},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Wang, C.Y., Bochkovskiy, A., and Liao, H.Y.M. (2023, June 17\u201324). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. Proceedings of the 2023 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.00721"},{"key":"ref_48","unstructured":"Lin, J., Chen, W.M., Lin, Y., Cohn, J., and Han, S. (2020). MCUNet: Tiny Deep Learning on IoT Devices. arXiv."},{"key":"ref_49","unstructured":"Lyu, R. (2024, April 23). Nanodet-Plus: Super Fast and High Accuracy Lightweight Anchor-Free Object Detection Model. Available online: https:\/\/github.com\/RangiLyu\/nanodet."},{"key":"ref_50","unstructured":"Ge, Z., Liu, S., and Wang, F. (2021). YOLOX: Exceeding YOLO series in 2021. arXiv."},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"110","DOI":"10.1016\/j.procs.2019.08.147","article-title":"MobileNet convolutional neural networks and support vector machines for palmprint recognition","volume":"157","author":"Michele","year":"2019","journal-title":"Procedia Comput. Sci."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Zhang, X., Zhou, X., and Lin, M. (2018, June 18\u201322). ShuffleNet: An extremely efficient convolutional neural network for mobile devices. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00716"},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Han, K., Wang, Y., and Tian, Q. (2020, June 13\u201319). GhostNet: More features from cheap operations. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00165"},{"key":"ref_54","unstructured":"Targ, S., Almeida, D., and Lyman, K. (2016). ResNet in ResNet: Generalizing residual architectures. arXiv."},{"key":"ref_55","doi-asserted-by":"crossref","unstructured":"Yu, F., Wang, D., and Shelhamer, E. (2018, June 18\u201323). 
Deep layer aggregation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00255"},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Wang, C.Y., Liao, H.Y.M., and Wu, Y.H. (2020, June 14\u201319). CSPNet: A new backbone that can enhance learning capability of CNN. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA.","DOI":"10.1109\/CVPRW50498.2020.00203"},{"key":"ref_57","unstructured":"Sol\u00e0, J. (2017). Quaternion kinematics for the error-state Kalman filter. arXiv."},{"key":"ref_58","doi-asserted-by":"crossref","first-page":"217","DOI":"10.1007\/s00190-014-0771-3","article-title":"Review and principles of PPP-RTK methods","volume":"89","author":"Teunissen","year":"2015","journal-title":"J. Geod."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/16\/9\/1524\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,1,15]],"date-time":"2025-01-15T19:20:43Z","timestamp":1736968843000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/16\/9\/1524"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,4,25]]},"references-count":58,"journal-issue":{"issue":"9","published-online":{"date-parts":[[2024,5]]}},"alternative-id":["rs16091524"],"URL":"https:\/\/doi.org\/10.3390\/rs16091524","relation":{},"ISSN":["2072-4292"],"issn-type":[{"type":"electronic","value":"2072-4292"}],"subject":[],"published":{"date-parts":[[2024,4,25]]}}}
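
The record above is a standard Crossref "works" message: an envelope with "status" and "message-type", whose "message" object carries the work's DOI, title, authors, funders, license and the 58-entry reference list. As a minimal sketch of how such a record can be retrieved and inspected (an illustration added here, not part of the deposited record; it assumes only the public Crossref REST API route https://api.crossref.org/works/<DOI> and the Python standard library, and reads only field names visible in the record above):

import json
import urllib.request

DOI = "10.3390/rs16091524"  # DOI of the LVI-Fusion paper, taken from the record above
url = "https://api.crossref.org/works/" + DOI

# Fetch the works record; Crossref returns the same envelope seen above.
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)

assert payload["status"] == "ok"  # envelope status field
work = payload["message"]         # the work object itself

print(work["title"][0])                      # LVI-Fusion: A Robust Lidar-Visual-Inertial SLAM Scheme
print(work["DOI"], work["reference-count"])  # 10.3390/rs16091524 58

# Walk the reference list; each entry carries a "key" and, where deposited, a "DOI".
for ref in work["reference"][:3]:
    print(ref["key"], ref.get("DOI", "(no DOI deposited)"))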