An Efficient Approach to Initialization of Visual-Inertial Navigation System using Closed-Form Solution for Autonomous Robots

Journal of Intelligent & Robotic Systems

Abstract

A visual-inertial navigation system using a single camera and an IMU requires accurate initialization without increasing the processing cost and complexity for real-time deployment. The processing cost in existing solutions can be traced to gyroscope bias estimation using 1) closed-form solutions (Martinelli, Int. J. Comput. Vis. 106(2), 138–152, 2014; Kaiser et al., IEEE Robot. Autom. Lett. 2(1), 18–25, 2017) and 2) loosely coupled schemes using visual-inertial alignment (Mur-Artal and Tardós, IEEE Robot. Autom. Lett. 2(2), 796–803, 2017; Qin et al., IEEE Trans. Robot. 34(4), 1004–1020, 2018). The complexity arises from the non-linear nature of the gyroscope bias estimation problem, which is solved either by directly attacking the non-linear, non-convex problem or by decoupling the vision and IMU measurements using linear models. Existing termination conditions are based on the condition number or the covariance of the estimated variables, which vary from one experiment to another. The present paper improves the closed-form solution, achieving higher accuracy at a lower processing cost per frame. The proposed method separates the gyroscope bias estimation from the closed-form solution using tightly coupled but linear models, reducing the number of variables in the closed-form solution. This paper also addresses a problem inherent to closed-form solutions, which require sufficient motion in the initialization window together with a minimum number of common features. Towards this, a novel method of propagating past information into the present initialization window is presented. This reduces the total processing cost per frame by limiting the initialization window to 10 frames (≈ 1 s) without compromising the motion inside the window. We also present a common and intuitive termination criterion that is independent of the experiment scenario. This increases the robustness of the initialization by rejecting erroneous solutions. The proposed method is evaluated on sequences from the EuRoC Micro Aerial Vehicle (MAV) dataset (Burri et al., 2016). We compare the proposed method with a recently proposed loosely coupled method, showing improved accuracy, processing cost, and robustness in the initialization.
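To make the structure of such initialization problems concrete, below is a minimal sketch (not the paper's method) of the classic linear visual-inertial alignment that loosely coupled initializers of the kind cited above solve: a single least-squares system recovering metric scale, gravity, and per-frame velocities from up-to-scale camera positions and IMU preintegration terms. The function name and interface are hypothetical, rotations and IMU biases are assumed to be already handled, and all quantities are assumed to be expressed in a common world frame.

import numpy as np

def align_visual_inertial(p_c, dt, dp, dv):
    """Hypothetical sketch: recover metric scale s, gravity g, and
    per-frame velocities v_k from K up-to-scale camera positions
    p_c (K x 3), frame intervals dt (length K-1), and IMU preintegrated
    position/velocity increments dp, dv (each (K-1) x 3), all assumed
    expressed in a common world frame with rotations and biases handled.

    Constraints between consecutive frames k and k+1:
        s * (p_c[k+1] - p_c[k]) = v_k * dt[k] + 0.5 * g * dt[k]**2 + dp[k]
        v_{k+1}                 = v_k + g * dt[k] + dv[k]
    Unknowns x = [s, g (3), v_0 (3), ..., v_{K-1} (3)].
    """
    K = len(p_c)
    n = 1 + 3 + 3 * K                  # s, g, and K velocity vectors
    A = np.zeros((6 * (K - 1), n))     # two 3-vector equations per frame pair
    b = np.zeros(6 * (K - 1))
    I3 = np.eye(3)
    for k in range(K - 1):
        r = 6 * k
        # Position rows: s*(p_c[k+1]-p_c[k]) - 0.5*dt^2*g - dt*v_k = dp[k]
        A[r:r+3, 0] = p_c[k + 1] - p_c[k]
        A[r:r+3, 1:4] = -0.5 * dt[k] ** 2 * I3
        A[r:r+3, 4 + 3*k:7 + 3*k] = -dt[k] * I3
        b[r:r+3] = dp[k]
        # Velocity rows: -dt*g - v_k + v_{k+1} = dv[k]
        A[r+3:r+6, 1:4] = -dt[k] * I3
        A[r+3:r+6, 4 + 3*k:7 + 3*k] = -I3
        A[r+3:r+6, 7 + 3*k:10 + 3*k] = I3
        b[r+3:r+6] = dv[k]
    x, _, _, sv = np.linalg.lstsq(A, b, rcond=None)
    cond = sv[0] / sv[-1]              # basis for a termination test
    return x[0], x[1:4], x[4:].reshape(K, 3), cond

With K = 10 frames (matching the ≈ 1 s window in the abstract), A has 54 rows and 34 columns. When the motion inside the window is insufficient, the smallest singular value of A collapses and the condition number blows up, which is why condition-number-based termination tests of the kind discussed above appear in this literature.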

References

  1. Martinelli, A.: Closed-form solution of visual-inertial structure from motion. Int. J. Comput. Vis. 106(2), 138–152 (2014). https://doi.org/10.1007/s11263-013-0647-7

  2. Kaiser, J., Martinelli, A., Fontana, F., Scaramuzza, D.: Simultaneous state initialization and gyroscope bias calibration in visual inertial aided navigation. IEEE Robot. Autom. Lett. 2(1), 18–25 (2017). https://doi.org/10.1109/LRA.2016.2521413

  3. Mur-Artal, R., Tardós, J.D.: Visual-inertial monocular SLAM with map reuse. IEEE Robot. Autom. Lett. 2(2), 796–803 (2017). https://doi.org/10.1109/LRA.2017.2653359

  4. Qin, T., Li, P., Shen, S.: VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 34(4), 1004–1020 (2018). https://doi.org/10.1109/TRO.2018.2853729

  5. Burri, M., Nikolic, J., Gohl, P., Schneider, T., Rehder, J., Omari, S., Achtelik, M.W., Siegwart, R.: The EuRoC micro aerial vehicle datasets. Int. J. Robot. Res. (2016). https://doi.org/10.1177/0278364915620033

  6. Araar, O., Aouf, N., Vitanov, I.: Vision based autonomous landing of multirotor UAV on moving platform. J. Intell. Robot. Syst. 85(2), 369–384 (2017). https://doi.org/10.1007/s10846-016-0399-z

  7. Lin, Y., Gao, F., Qin, T., Gao, W., Liu, T., Wu, W., Yang, Z., Shen, S.: Autonomous aerial navigation using monocular visual-inertial fusion. J. Field Robot. 35(1), 23–51 (2018). https://doi.org/10.1002/rob.21732

  8. Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: Proceedings of the 7th International Joint Conference on Artificial Intelligence - Volume 2, IJCAI’81, pp. 674–679. Morgan Kaufmann Publishers Inc., San Francisco. http://dl.acm.org/citation.cfm?id=1623264.1623280 (1981)

  9. Häne, C., Heng, L., Lee, G., Fraundorfer, F., Furgale, P., Sattler, T., Pollefeys, M.: 3D visual perception for self-driving cars using a multi-camera system: Calibration, mapping, localization, and obstacle detection. Image Vis. Comput. (2017). https://doi.org/10.1016/j.imavis.2017.07.003

  10. Häne, C., Sattler, T., Pollefeys, M.: Obstacle detection for self-driving cars using only monocular cameras and wheel odometry. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5101–5108. https://doi.org/10.1109/IROS.2015.7354095 (2015)

  11. Shi, J., Tomasi, C.: Good features to track. In: 1994 Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 593–600. https://doi.org/10.1109/CVPR.1994.323794 (1994)

  12. Maimone, M., Cheng, Y., Matthies, L.: Two years of visual odometry on the Mars Exploration Rovers. J. Field Robot. 24(3), 169–186 (2007). https://doi.org/10.1002/rob.20184

  13. Schuster, M.J., Brunner, S.G., Bussmann, K., Büttner, S., Dömel, A., Hellerer, M., Lehner, H., Lehner, P., Porges, O., Reill, J., Riedel, S., Vayugundla, M., Vodermayer, B., Bodenmüller, T., Brand, C., Friedl, W., Grixa, I., Hirschmüller, H., Kaßecker, M., Márton, Z.-C., Nissler, C., Ruess, F., Suppa, M., Wedler, A.: Towards autonomous planetary exploration. J. Intell. Robot. Syst. 93(3), 461–494 (2019). https://doi.org/10.1007/s10846-017-0680-9

  14. Marder-Eppstein, E.: Project Tango, pp. 25–25 (2016). https://doi.org/10.1145/2933540.2933550

  15. Engel, J., Koltun, V., Cremers, D.: Direct sparse odometry. IEEE Trans. Pattern Anal. Mach. Intell. 40(3), 611–625 (2018). https://doi.org/10.1109/TPAMI.2017.2658577

  16. Mur-Artal, R., Montiel, J.M.M., Tardós, J.D.: ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 31(5), 1147–1163 (2015). https://doi.org/10.1109/TRO.2015.2463671

  17. Klein, G., Murray, D.: Parallel tracking and mapping for small AR workspaces. In: 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 225–234. https://doi.org/10.1109/ISMAR.2007.4538852 (2007)

  18. Forster, C., Pizzoli, M., Scaramuzza, D.: SVO: Fast semi-direct monocular visual odometry. In: 2014 IEEE International Conference on Robotics and Automation (ICRA) (2014). https://doi.org/10.1109/ICRA.2014.6906584

  19. Martinelli, A.: Vision and IMU data fusion: Closed-form solutions for attitude, speed, absolute scale, and bias determination. IEEE Trans. Robot. 28(1), 44–60 (2012). https://doi.org/10.1109/TRO.2011.2160468

  20. Qin, T., Shen, S.: Robust initialization of monocular visual-inertial estimation on aerial robots. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4225–4232. https://doi.org/10.1109/IROS.2017.8206284 (2017)

  21. Mourikis, A.I., Roumeliotis, S.I.: A multi-state constraint Kalman filter for vision-aided inertial navigation. In: Proceedings 2007 IEEE International Conference on Robotics and Automation, pp. 3565–3572. https://doi.org/10.1109/ROBOT.2007.364024 (2007)

  22. Hesch, J.A., Kottas, D.G., Bowman, S.L., Roumeliotis, S.I.: Consistency analysis and improvement of vision-aided inertial navigation. IEEE Trans. Robot. 30(1), 158–176 (2014). https://doi.org/10.1109/TRO.2013.2277549

  23. Lynen, S., Achtelik, M.W., Weiss, S., Chli, M., Siegwart, R.: A robust and modular multi-sensor fusion approach applied to MAV navigation. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3923–3929. https://doi.org/10.1109/IROS.2013.6696917 (2013)

  24. Forster, C., Carlone, L., Dellaert, F., Scaramuzza, D.: On-manifold preintegration for real-time visual–inertial odometry. IEEE Trans. Robot. 33(1), 1–21 (2017). https://doi.org/10.1109/TRO.2016.2597321

  25. Leutenegger, S., Lynen, S., Bosse, M., Siegwart, R., Furgale, P.: Keyframe-based visual-inertial odometry using nonlinear optimization. Int. J. Robot. Res. 34(3), 314–334 (2015). https://doi.org/10.1177/0278364914554813

  26. Sibley, G., Matthies, L., Sukhatme, G.: Sliding window filter with application to planetary landing. J. Field Robot. 27(5), 587–608 (2010). https://doi.org/10.1002/rob.20360

  27. Shen, S., Mulgaonkar, Y., Michael, N., Kumar, V.: Initialization-free monocular visual-inertial state estimation with application to autonomous MAVs. In: Hsieh, M.A., Khatib, O., Kumar, V. (eds.) Experimental Robotics: The 14th International Symposium on Experimental Robotics, pp. 211–227. Springer International Publishing, Cham (2016). https://doi.org/10.1007/978-3-319-23778-7_15

  28. Yang, Z., Shen, S.: Monocular visual-inertial state estimation with online initialization and camera-IMU extrinsic calibration. IEEE Trans. Autom. Sci. Eng. 14(1), 39–51 (2017). https://doi.org/10.1109/TASE.2016.2550621

  29. Zhang, Z., Scaramuzza, D.: A tutorial on quantitative trajectory evaluation for visual(-inertial) odometry. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7244–7251 (2018). https://doi.org/10.1109/IROS.2018.8593941

  30. Campos, C., Montiel, J.M.M., Tardós, J.D.: Fast and robust initialization for visual-inertial SLAM. In: 2019 International Conference on Robotics and Automation (ICRA), pp. 1288–1294. https://doi.org/10.1109/ICRA.2019.8793718 (2019)

  31. Domínguez-Conti, J., Yin, J., Alami, Y., Civera, J.: Visual-inertial SLAM initialization: A general linear formulation and a gravity-observing non-linear optimization. In: 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 37–45. https://doi.org/10.1109/ISMAR.2018.00027 (2018)

  32. Eigen: Solving sparse linear systems (2016). https://eigen.tuxfamily.org/dox/group_TopicSparseSystems.html

  33. Nützi, G., Weiss, S., Scaramuzza, D., Siegwart, R.: Fusion of IMU and vision for absolute scale estimation in monocular SLAM. J. Intell. Robot. Syst. 61(1), 287–299 (2011). https://doi.org/10.1007/s10846-010-9490-z

  34. Furgale, P., Rehder, J., Siegwart, R.: Unified temporal and spatial calibration for multi-sensor systems. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo. https://doi.org/10.1109/IROS.2013.6696514 (2013)

  35. Rehder, J., Nikolic, J., Schneider, T., Hinzmann, T., Siegwart, R.: Extending Kalibr: Calibrating the extrinsics of multiple IMUs and of individual axes. In: 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 4304–4311. https://doi.org/10.1109/ICRA.2016.7487628 (2016)

  36. Forster, C., Carlone, L., Dellaert, F., Scaramuzza, D.: IMU preintegration on manifold for efficient visual-inertial maximum-a-posteriori estimation. In: Proceedings of Robotics: Science and Systems, Rome. https://doi.org/10.15607/rss.2015.xi.006 (2015)

  37. Crassidis, J.L.: Sigma-point Kalman filtering for integrated GPS and inertial navigation. IEEE Trans. Aerosp. Electron. Syst. 42(2), 750–756 (2006). https://doi.org/10.1109/TAES.2006.1642588

Acknowledgements

We gratefully acknowledge the UGC-NET JRF fellowship and CSIR - National Aerospace Laboratories for supporting this work, carried out as part of the PhD program.

Author information

Corresponding author

Correspondence to Bharadwaja Yathirajam.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Yathirajam, B., Sevoor Meenakshisundaram, V. & Challaghatta Muniyappa, A. An Efficient Approach to Initialization of Visual-Inertial Navigation System using Closed-Form Solution for Autonomous Robots. J Intell Robot Syst 101, 59 (2021). https://doi.org/10.1007/s10846-021-01313-5

  • DOI: https://doi.org/10.1007/s10846-021-01313-5

Keywords

Navigation