Distance estimation with semantic segmentation and edge detection of surround view images

  • Original Research Paper
  • Published in: Intelligent Service Robotics

A Correction to this article was published on 28 February 2024.

Abstract

This paper presents a method for obtaining 2D distance data from a robot's surround-view camera system. By converting semantic segmentation images into a bird's-eye view, the location of the traversable region can be determined. However, because this depends entirely on segmentation performance, noise may appear at the boundary between the traversable region and obstacles when objects or environments are outside the training data. Therefore, instead of assigning a hard class label to each pixel, we retain the per-class probability distribution, which yields a probability distribution for the boundary between the traversable region and obstacles. From this distribution, the boundary is extracted as edges in each image and transformed via the bird's-eye view into accurate (x, y) coordinates, giving the obstacle positions seen by each camera. Compared with LiDAR measurements, the results show an error of about 5%, and we confirmed that a localization algorithm can obtain the robot's global pose from the estimated distances.
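
A minimal sketch of the boundary-extraction step described above, assuming the segmentation network outputs per-class logits of shape (C, H, W). The traversable-class index and the Sobel-based gradient score are illustrative assumptions, not the authors' implementation:

    import cv2
    import numpy as np

    def softmax(logits):
        # logits: (C, H, W) raw class scores from the segmentation network
        e = np.exp(logits - logits.max(axis=0, keepdims=True))
        return e / e.sum(axis=0, keepdims=True)

    def boundary_from_probabilities(logits, traversable_class=0):
        # Keep the full class distribution rather than an argmax label map,
        # so uncertain pixels are not forced into a hard class.
        p_trav = softmax(logits)[traversable_class].astype(np.float32)
        # The traversable/obstacle boundary is where the traversable
        # probability changes fastest; use its gradient magnitude as a
        # per-pixel boundary score.
        gx = cv2.Sobel(p_trav, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(p_trav, cv2.CV_32F, 0, 1, ksize=3)
        return np.hypot(gx, gy)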

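Continuing the sketch above, the bird's-eye-view step could map boundary pixels to ground-plane (x, y) coordinates with a homography. The 3x3 matrix H is assumed to come from the extrinsic calibration of each surround-view camera, and the 0.5 threshold in the usage comment is arbitrary; none of these names appear in the paper:

    def pixels_to_ground(edge_pixels, H):
        # edge_pixels: (N, 2) array of (u, v) image coordinates
        # H: 3x3 homography from image pixels to ground-plane coordinates
        pts = edge_pixels.reshape(-1, 1, 2).astype(np.float32)
        ground = cv2.perspectiveTransform(pts, H)  # applies H and the perspective divide
        return ground.reshape(-1, 2)

    # Hypothetical usage: threshold the boundary map and project the
    # surviving pixels to get 2D obstacle positions for one camera.
    # edge_map = boundary_from_probabilities(logits)
    # vs, us = np.nonzero(edge_map > 0.5 * edge_map.max())
    # obstacle_xy = pixels_to_ground(np.stack([us, vs], axis=1), H)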

Data availability

All data in this paper are available.


Funding

This study was supported by the Research Program funded by SeoulTech (Seoul National University of Science and Technology).

Author information

Contributions

All authors contributed to the methodology conception and design. JJ developed the main algorithm and performed the experiments, HL collected the data and trained the semantic segmentation model, and CL analyzed the results and reviewed the manuscript.

Corresponding author

Correspondence to Chibum Lee.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Consent to participate

Not applicable

Consent for publication

Not applicable

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original online version of this article was revised: the funding note has been corrected.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Jung, J., Lee, H. & Lee, C. Distance estimation with semantic segmentation and edge detection of surround view images. Intel Serv Robotics 16, 633–641 (2023). https://doi.org/10.1007/s11370-023-00486-2


Keywords

Navigation