End-to-End Probabilistic Depth Perception and 3D Obstacle Avoidance using POMDP

  • Short Paper
  • Published in: Journal of Intelligent & Robotic Systems

Abstract

In most real-world applications, noisy and incomplete information about the robot's surroundings is inevitable due to imperfections in the onboard sensors. Perception and control are therefore closely coupled problems that must be addressed together to plan safe robot maneuvers efficiently. This paper proposes a method to generate robot actions directly from a sequence of depth images. It builds on the Artificial Potential Field (APF) approach, in which a robot action is obtained by combining the attractive and repulsive actions generated by the goal and the obstacles, respectively. This article considers the perception uncertainty that arises when estimating an obstacle's location relative to the robot. The repulsive action generation is formulated as a Partially Observable Markov Decision Process (POMDP). A Particle Filter (PF) is used to estimate and track valid scene points within the robot's sensing horizon from an imperfect depth image stream. The most probable candidates for an occupied region are used to generate a velocity action that minimizes the repulsive potential at each time instant. Approximately optimal solutions to the POMDP are obtained using the QMDP technique, which allows the computationally expensive operations to be performed offline, before a robot run. Consequently, suitable repulsive actions are generated onboard the robot, each time an image is received, in a computationally feasible way. An attractive action, obtained as the negative gradient of the attractive potential, is then added to the repulsive action to produce the final robot action at every time step. Lastly, the robustness and reliability of this approach are demonstrated closed-loop on a quadrotor UAV equipped with a depth camera. The experiments also show that the method is computationally efficient and can run on a variety of platforms with limited onboard resources.
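
As a rough illustration of the particle-filter step described above, the sketch below runs one predict-weight-resample cycle over hypothesized 3D obstacle points against points back-projected from a depth image. The function name, the isotropic Gaussian likelihood, and all parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_update(particles, weights, depth_points, meas_std=0.1, motion_std=0.05):
    """One predict-weight-resample cycle over 3D obstacle-point hypotheses.

    particles:    (N, 3) hypothesized occupied points (camera frame)
    weights:      (N,) normalized importance weights
    depth_points: (M, 3) points back-projected from the current depth image
    """
    n = len(particles)
    # Predict: diffuse hypotheses to account for robot/scene motion.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Weight: likelihood of each hypothesis under its nearest measurement
    # (a simple isotropic Gaussian stand-in for a real depth-noise model).
    d = np.min(np.linalg.norm(particles[:, None, :] - depth_points[None, :, :],
                              axis=2), axis=1)
    weights = weights * np.exp(-0.5 * (d / meas_std) ** 2)
    weights /= weights.sum()
    # Resample only when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```

The highest-weight hypotheses then serve as the most probable candidates for an occupied region feeding the repulsive-action policy.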
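The QMDP approximation splits the work exactly as the abstract describes: the fully observable action values are computed offline by value iteration, and the online step merely weights them by the current belief. The sketch below assumes a generic discretized model; `T`, `R`, and the function names are hypothetical placeholders.

```python
import numpy as np

def value_iteration(T, R, gamma=0.95, eps=1e-6):
    """Offline: compute Q_MDP(s, a) for a discretized obstacle-state model.

    T: (A, S, S) transition probabilities; R: (S, A) rewards.
    """
    A, S, _ = T.shape
    Q = np.zeros((S, A))
    while True:
        V = Q.max(axis=1)                          # V(s) = max_a Q(s, a)
        Q_new = R + gamma * np.einsum('ast,t->sa', T, V)
        if np.abs(Q_new - Q).max() < eps:
            return Q_new
        Q = Q_new

def qmdp_action(belief, Q):
    """Online: argmax_a sum_s b(s) Q(s, a), cheap enough to run per image."""
    return int(np.argmax(belief @ Q))
```

At runtime, `belief` would be the particle weights aggregated over the discretized states, so the expensive dynamic programming never runs onboard.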
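Finally, the overall command adds the attractive action to the repulsive action produced by the POMDP policy. A minimal sketch, assuming a quadratic attractive potential and an illustrative speed clamp (the paper does not specify these details):

```python
import numpy as np

def attractive_action(pos, goal, k_att=1.0, v_max=1.0):
    """Negative gradient of the assumed quadratic potential
    U_att = 0.5 * k_att * ||pos - goal||^2, clamped to v_max."""
    v = -k_att * (pos - goal)
    speed = np.linalg.norm(v)
    return v if speed <= v_max else v * (v_max / speed)

def total_action(pos, goal, v_rep):
    """Final velocity command: attractive plus POMDP-derived repulsive action."""
    return attractive_action(pos, goal) + v_rep
```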

Acknowledgements

This work was supported through the DARPA Subterranean Challenge, cooperative agreement number HR0011-18-2-0043.

Author information

Contributions

Conceptualization, review, and editing: Shakeeb Ahmad, Zachary N. Sunberg, J. Sean Humbert. Investigation, software, and original draft preparation: Shakeeb Ahmad. Funding acquisition and supervision: J. Sean Humbert.

Corresponding author

Correspondence to Shakeeb Ahmad.

Ethics declarations

Conflicts of interest / Competing interests

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Availability of data and material

The autonomous flight videos are posted at https://youtu.be/oUyh_vGSgJg.

Code availability

The software is available at https://github.com/shakeebbb/apf_pf.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Ahmad, S., Sunberg, Z.N. & Humbert, J.S. End-to-End Probabilistic Depth Perception and 3D Obstacle Avoidance using POMDP. J Intell Robot Syst 103, 33 (2021). https://doi.org/10.1007/s10846-021-01489-w

Keywords

Navigation