Review of Learning-Based Robotic Manipulation in Cluttered Environments
Abstract
1. Introduction
2. Essential Reinforcement Learning Terminologies
3. The Review Protocol Methodology
- Duplicate and irrelevant papers were eliminated by screening the article titles and abstracts.
- The full texts of the articles retained after this first screening were then read, and the articles were classified into taxonomic groups.
4. Numerical Analysis of Final Set of Articles
5. Critical Review
5.1. The Removal of Objects Task
5.1.1. Sole-Grasping Policy
5.1.2. Suction-Based Grasping
5.1.3. Multifunctional Gripper-Based Grasping
5.1.4. Synergy of Two Primitive Actions
5.2. Assembly and Rearrangement Task
5.2.1. Assembly Task
5.2.2. Rearrangement Task
5.3. Object Retrieval and Singulation Task
5.3.1. Object Retrieval Task
5.3.2. Singulation Task
6. Challenges and Future Directions
6.1. The Challenge of Sole-Grasping Policy
6.2. The Challenge of Synergizing Two Actions
6.3. The Challenge of Assembly and Rearrangement of Objects
6.4. The Challenge of Object Retrieval and Singulation Task
6.5. The Challenge of Grasping Deformable, Transparent, Black, and Shiny Objects
7. Recommendations
8. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Rocha, L.F.; Ferreira, M.; Santos, V.; Paulo Moreira, A. Object recognition and pose estimation for industrial applications: A cascade system. Robot. Comput. Integr. Manuf. 2014, 30, 605–621. [Google Scholar] [CrossRef]
- Marwan, Q.M.; Chua, S.C.; Kwek, L.C. Comprehensive Review on Reaching and Grasping of Objects in Robotics. Robotica 2021, 39, 1849–1882. [Google Scholar] [CrossRef]
- Kappassov, Z.; Corrales, J.A.; Perdereau, V. Tactile sensing in dexterous robot hands—Review. Rob. Auton. Syst. 2015, 74, 195–220. [Google Scholar] [CrossRef]
- Saudabayev, A.; Varol, H.A. Sensors for robotic hands: A survey of state of the art. IEEE Access 2015, 3, 1765–1782. [Google Scholar] [CrossRef]
- Luo, S.; Bimbo, J.; Dahiya, R.; Liu, H. Robotic tactile perception of object properties: A review. Mechatronics 2017, 48, 54–67. [Google Scholar] [CrossRef]
- Zou, L.; Ge, C.; Wang, Z.J.; Cretu, E.; Li, X. Novel tactile sensor technology and smart tactile sensing systems: A review. Sensors 2017, 17, 2653. [Google Scholar] [CrossRef]
- Chi, C.; Sun, X.; Xue, N.; Li, T.; Liu, C. Recent progress in technologies for tactile sensors. Sensors 2018, 18, 948. [Google Scholar] [CrossRef]
- Honarpardaz, M.; Tarkian, M.; Ölvander, J.; Feng, X. Finger design automation for industrial robot grippers: A review. Rob. Auton. Syst. 2017, 87, 104–119. [Google Scholar] [CrossRef]
- Hughes, J.; Culha, U.; Giardina, F.; Guenther, F.; Rosendo, A.; Iida, F. Soft manipulators and grippers: A review. Front. Robot. AI 2016, 3, 1–12. [Google Scholar] [CrossRef]
- Shintake, J.; Cacucciolo, V.; Floreano, D.; Shea, H. Soft Robotic Grippers. Adv. Mater. 2018, 30, e1707035. [Google Scholar] [CrossRef]
- Terrile, S.; Argüelles, M.; Barrientos, A. Comparison of different technologies for soft robotics grippers. Sensors 2021, 21, 3253. [Google Scholar] [CrossRef] [PubMed]
- Li, Y.; Krahn, J.; Menon, C. Bioinspired Dry Adhesive Materials and Their Application in Robotics: A Review. J. Bionic Eng. 2016, 13, 181–199. [Google Scholar] [CrossRef]
- Gorissen, B.; Reynaerts, D.; Konishi, S.; Yoshida, K.; Kim, J.W.; De Volder, M. Elastic Inflatable Actuators for Soft Robotic Applications. Adv. Mater. 2017, 29, 1–14. [Google Scholar] [CrossRef] [PubMed]
- Ersen, M.; Oztop, E.; Sariel, S. Cognition-Enabled Robot Manipulation in Human Environments: Requirements, Recent Work, and Open Problems. IEEE Robot. Autom. Mag. 2017, 24, 108–122. [Google Scholar] [CrossRef]
- Billard, A.; Kragic, D. Trends and challenges in robot manipulation. Science 2019, 364, eaat8414. [Google Scholar] [CrossRef]
- Rantoson, R.; Bartoli, A. A 3D deformable model-based framework for the retrieval of near-isometric flattenable objects using Bag-of-Visual-Words. Comput. Vis. Image Underst. 2018, 167, 89–108. [Google Scholar] [CrossRef]
- Saeedvand, S.; Mandala, H.; Baltes, J. Hierarchical deep reinforcement learning to drag heavy objects by adult-sized humanoid robot. Appl. Soft Comput. 2021, 110, 107601. [Google Scholar] [CrossRef]
- Ahn, G.; Park, M.; Park, Y.-J.; Hur, S. Interactive Q-Learning Approach for Pick-and-Place Optimization of the Die Attach Process in the Semiconductor Industry. Math. Probl. Eng. 2019, 2019, 4602052. [Google Scholar] [CrossRef]
- Mohammed, M.Q.; Chung, K.L.; Chyi, C.S. Pick and Place Objects in a Cluttered Scene Using Deep Reinforcement Learning. Int. J. Mech. Mechatron. Eng. IJMME 2020, 20, 50–57. [Google Scholar]
- Lan, X.; Qiao, Y.; Lee, B. Towards Pick and Place Multi Robot Coordination Using Multi-agent Deep Reinforcement Learning. In Proceedings of the 2021 7th International Conference on Automation, Robotics and Applications (ICARA), Prague, Czech Republic, 4–6 February 2021; pp. 85–89. [Google Scholar] [CrossRef]
- Mohammed, M.Q.; Chung, K.L.; Chyi, C.S. Review of Deep Reinforcement Learning-Based Object Grasping: Techniques, Open Challenges, and Recommendations. IEEE Access 2020, 8, 178450–178481. [Google Scholar] [CrossRef]
- Nguyen, H.; La, H. Review of Deep Reinforcement Learning for Robot Manipulation. In Proceedings of the 2019 Third IEEE International Conference on Robotic Computing (IRC), Naples, Italy, 25–27 February 2019; pp. 590–595. [Google Scholar] [CrossRef]
- Lobbezoo, A.; Qian, Y.; Kwon, H.J. Reinforcement learning for pick and place operations in robotics: A survey. Robotics 2021, 10, 105. [Google Scholar] [CrossRef]
- Panzer, M.; Bender, B. Deep reinforcement learning in production systems: A systematic literature review. Int. J. Prod. Res. 2022, 60, 4316–4341. [Google Scholar] [CrossRef]
- Cordeiro, A.; Rocha, L.F.; Costa, C.; Costa, P.; Silva, M.F. Bin Picking Approaches Based on Deep Learning Techniques: A State-of-the-Art Survey. In Proceedings of the 2022 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Santa Maria da Feira, Portugal, 29–30 April 2022; pp. 110–117. [Google Scholar] [CrossRef]
- Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction, 2nd ed.; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
- François-Lavet, V.; Henderson, P.; Islam, R.; Bellemare, M.G.; Pineau, J. An Introduction to Deep Reinforcement Learning; NOW: Hanover, MA, USA, 2018; Volume 11, pp. 219–354. [Google Scholar]
- Pajarinen, J.; Kyrki, V. Robotic manipulation of multiple objects as a POMDP. Artif. Intell. 2017, 247, 213–228. [Google Scholar] [CrossRef]
- Abolghasemi, P.; Bölöni, L. Accept Synthetic Objects as Real: End-to-End Training of Attentive Deep Visuomotor Policies for Manipulation in Clutter. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 6506–6512. [Google Scholar]
- Zeng, A.; Yu, K.-T.; Song, S.; Suo, D.; Walker, E.; Rodriguez, A.; Xiao, J. Multi-view self-supervised deep learning for 6D pose estimation in the Amazon Picking Challenge. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 1383–1386. [Google Scholar]
- Song, S.; Zeng, A.; Lee, J.; Funkhouser, T. Grasping in the Wild: Learning 6DoF Closed-Loop Grasping From Low-Cost Demonstrations. IEEE Robot. Autom. Lett. 2020, 5, 4978–4985. [Google Scholar] [CrossRef]
- Mohammed, M.Q.; Kwek, L.C.; Chua, S.C. Learning Pick to Place Objects using Self-supervised Learning with Minimal Training Resources. Int. J. Adv. Comput. Sci. Appl. 2021, 12, 493–499. [Google Scholar] [CrossRef]
- Mohammed, M.Q.; Kwek, L.C.; Chua, S.C.; Alandoli, E.A. Color Matching Based Approach for Robotic Grasping. In Proceedings of the 2021 International Congress of Advanced Technology and Engineering (ICOTEN), Taiz, Yemen, 4–5 July 2021; pp. 1–8. [Google Scholar]
- Florence, P.R.; Manuelli, L.; Tedrake, R. Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation. arXiv 2018, 1–12. arXiv:1806.08756v2. [Google Scholar]
- Song, Y.; Fei, Y.; Cheng, C.; Li, X.; Yu, C. UG-Net for Robotic Grasping using Only Depth Image. In Proceedings of the 2019 IEEE International Conference on Real-time Computing and Robotics (RCAR), Irkutsk, Russia, 4–9 August 2019; pp. 913–918. [Google Scholar]
- Chen, X.; Ye, Z.; Sun, J.; Fan, Y.; Hu, F.; Wang, C.; Lu, C. Transferable Active Grasping and Real Embodied Dataset. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 3611–3618. [Google Scholar]
- Corona, E.; Pumarola, A.; Alenyà, G.; Moreno-Noguer, F.; Rogez, G. GanHand: Predicting Human Grasp Affordances in Multi-Object Scenes. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 5030–5040. [Google Scholar]
- Kalashnikov, D.; Irpan, A.; Pastor, P.; Ibarz, J.; Herzog, A.; Jang, E.; Quillen, D.; Holly, E.; Kalakrishnan, M.; Vanhoucke, V.; et al. QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation. In Proceedings of the 2nd Conference on Robot Learning, PMLR 87, Zürich, Switzerland, 29–31 October 2018; pp. 651–673. [Google Scholar]
- Wu, B.; Akinola, I.; Allen, P.K. Pixel-Attentive Policy Gradient for Multi-Fingered Grasping in Cluttered Scenes. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 1789–1796. [Google Scholar]
- Wada, K.; Kitagawa, S.; Okada, K.; Inaba, M. Instance Segmentation of Visible and Occluded Regions for Finding and Picking Target from a Pile of Objects. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 2048–2055. [Google Scholar]
- Murali, A.; Mousavian, A.; Eppner, C.; Paxton, C.; Fox, D. 6-DOF Grasping for Target-driven Object Manipulation in Clutter. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 6232–6238. [Google Scholar]
- Sundermeyer, M.; Mousavian, A.; Triebel, R.; Fox, D. Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 13438–13444. [Google Scholar]
- Berscheid, L.; Rühr, T.; Kröger, T. Improving Data Efficiency of Self-supervised Learning for Robotic Grasping. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 2125–2131. [Google Scholar]
- Berscheid, L.; Friedrich, C.; Kröger, T. Robot Learning of 6 DoF Grasping using Model-based Adaptive Primitives. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 4474–4480. [Google Scholar]
- Lou, X.; Yang, Y.; Choi, C. Collision-Aware Target-Driven Object Grasping in Constrained Environments. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 6364–6370. [Google Scholar]
- Corsaro, M.; Tellex, S.; Konidaris, G. Learning to Detect Multi-Modal Grasps for Dexterous Grasping in Dense Clutter. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 4647–4653. [Google Scholar]
- Wu, B.; Akinola, I.; Gupta, A.; Xu, F.; Varley, J.; Watkins-Valls, D.; Allen, P.K. Generative Attention Learning: A “GenerAL” framework for high-performance multi-fingered grasping in clutter. Auton. Robots 2020, 44, 971–990. [Google Scholar] [CrossRef]
- Lundell, J.; Verdoja, F.; Kyrki, V. DDGC: Generative Deep Dexterous Grasping in Clutter. IEEE Robot. Autom. Lett. 2021, 6, 6899–6906. [Google Scholar] [CrossRef]
- Morrison, D.; Corke, P.; Leitner, J. Learning robust, real-time, reactive robotic grasping. Int. J. Rob. Res. 2020, 39, 183–201. [Google Scholar] [CrossRef]
- Wada, K.; Okada, K.; Inaba, M. Joint learning of instance and semantic segmentation for robotic pick-and-place with heavy occlusions in clutter. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 9558–9564. [Google Scholar]
- Hasegawa, S.; Wada, K.; Kitagawa, S.; Uchimi, Y.; Okada, K.; Inaba, M. GraspFusion: Realizing Complex Motion by Learning and Fusing Grasp Modalities with Instance Segmentation. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 7235–7241. [Google Scholar]
- Kim, T.; Park, Y.; Park, Y.; Suh, I.H. Acceleration of Actor-Critic Deep Reinforcement Learning for Visual Grasping in Clutter by State Representation Learning Based on Disentanglement of a Raw Input Image. arXiv 2020, 1–8. arXiv:2002.11903v1. [Google Scholar]
- Fujita, M.; Domae, Y.; Noda, A.; Garcia Ricardez, G.A.; Nagatani, T.; Zeng, A.; Song, S.; Rodriguez, A.; Causo, A.; Chen, I.M.; et al. What are the important technologies for bin picking? Technology analysis of robots in competitions based on a set of performance metrics. Adv. Robot. 2020, 34, 560–574. [Google Scholar] [CrossRef]
- Mitash, C.; Bekris, K.E.; Boularias, A. A self-supervised learning system for object detection using physics simulation and multi-view pose estimation. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 545–551. [Google Scholar]
- Kitagawa, S.; Wada, K.; Hasegawa, S.; Okada, K.; Inaba, M. Multi-Stage Learning of Selective Dual-Arm Grasping Based on Obtaining and Pruning Grasping Points Through the Robot Experience in the Real World. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 7123–7130. [Google Scholar]
- Shao, Q.; Hu, J.; Wang, W.; Fang, Y.; Liu, W.; Qi, J.; Ma, J. Suction Grasp Region Prediction Using Self-supervised Learning for Object Picking in Dense Clutter. In Proceedings of the 2019 IEEE 5th International Conference on Mechatronics System and Robots (ICMSR), Singapore, 3–5 May 2019; pp. 7–12. [Google Scholar]
- Han, M.; Liu, W.; Pan, Z.; Xue, T.; Shao, Q.; Ma, J.; Wang, W. Object-Agnostic Suction Grasp Affordance Detection in Dense Cluster Using Self-Supervised Learning. arXiv 2019, 1–6. arXiv:1906.02995v1. [Google Scholar]
- Cao, H.; Zeng, W.; Wu, I. Reinforcement Learning for Picking Cluttered General Objects with Dense Object Descriptors. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 6358–6364. [Google Scholar]
- Zeng, A.; Song, S.; Yu, K.-T.; Donlon, E.; Hogan, F.R.; Bauza, M.; Ma, D.; Taylor, O.; Liu, M.; Romo, E.; et al. Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 3750–3757. [Google Scholar]
- Liu, H.; Deng, Y.; Guo, D.; Fang, B.; Sun, F.; Yang, W. An Interactive Perception Method for Warehouse Automation in Smart Cities. IEEE Trans. Ind. Informatics 2021, 17, 830–838. [Google Scholar] [CrossRef]
- Deng, Y.; Guo, X.; Wei, Y.; Lu, K.; Fang, B.; Guo, D.; Liu, H.; Sun, F. Deep Reinforcement Learning for Robotic Pushing and Picking in Cluttered Environment. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 619–626. [Google Scholar]
- Liu, H.; Yuan, Y.; Deng, Y.; Guo, X.; Wei, Y.; Lu, K.; Fang, B.; Guo, D.; Sun, F. Active Affordance Exploration for Robot Grasping. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Heidelberg/Berlin, Germany, 2019; pp. 426–438. [Google Scholar] [CrossRef]
- Yen-Chen, L.; Zeng, A.; Song, S.; Isola, P.; Lin, T.-Y. Learning to See before Learning to Act: Visual Pre-training for Manipulation. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 7286–7293. [Google Scholar]
- Zeng, A.; Song, S.; Welker, S.; Lee, J.; Rodriguez, A.; Funkhouser, T. Learning Synergies Between Pushing and Grasping with Self-Supervised Deep Reinforcement Learning. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4238–4245. [Google Scholar]
- Chen, Y.; Ju, Z.; Yang, C. Combining Reinforcement Learning and Rule-based Method to Manipulate Objects in Clutter. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–6. [Google Scholar]
- Berscheid, L.; Meißner, P.; Kröger, T. Robot Learning of Shifting Objects for Grasping in Cluttered Environments. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 612–618. [Google Scholar]
- Ni, P.; Zhang, W.; Zhang, H.; Cao, Q. Learning efficient push and grasp policy in a totebox from simulation. Adv. Robot. 2020, 34, 873–887. [Google Scholar] [CrossRef]
- Yang, Z.; Shang, H. Robotic pushing and grasping knowledge learning via attention deep Q-learning network. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Heidelberg/Berlin, Germany, 2020; pp. 223–234. [Google Scholar] [CrossRef]
- Mohammed, M.Q.; Kwek, L.C.; Chua, S.C.; Aljaloud, A.S.; Al-dhaqm, A.; Al-mekhlafi, Z.G.; Mohammed, B.A. Deep reinforcement learning-based robotic grasping in clutter and occlusion. Sustainability 2021, 13, 13686. [Google Scholar] [CrossRef]
- Lu, N.; Lu, T.; Cai, Y.; Wang, S. Active Pushing for Better Grasping in Dense Clutter with Deep Reinforcement Learning. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; pp. 1657–1663. [Google Scholar]
- Goodrich, B.; Kuefler, A.; Richards, W.D. Depth by Poking: Learning to Estimate Depth from Self-Supervised Grasping. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 10466–10472. [Google Scholar]
- Yang, Y.; Ni, Z.; Gao, M.; Zhang, J.; Tao, D. Collaborative Pushing and Grasping of Tightly Stacked Objects via Deep Reinforcement Learning. IEEE/CAA J. Autom. Sin. 2021, 9, 135–145. [Google Scholar] [CrossRef]
- Kiatos, M.; Sarantopoulos, I.; Koutras, L.; Malassiotis, S.; Doulgeri, Z. Learning Push-Grasping in Dense Clutter. IEEE Robot. Autom. Lett. 2022, 7, 8783–8790. [Google Scholar] [CrossRef]
- Lu, N.; Cai, Y.; Lu, T.; Cao, X.; Guo, W.; Wang, S. Picking out the Impurities: Attention-based Push-Grasping in Dense Clutter. Robotica 2022, 1–16. [Google Scholar] [CrossRef]
- Peng, G.; Liao, J.; Guan, S.; Yang, J.; Li, X. A pushing-grasping collaborative method based on deep Q-network algorithm in dual viewpoints. Sci. Rep. 2022, 12, 3927. [Google Scholar] [CrossRef]
- Serhan, B.; Pandya, H.; Kucukyilmaz, A.; Neumann, G. Push-to-See: Learning Non-Prehensile Manipulation to Enhance Instance Segmentation via Deep Q-Learning. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 1513–1519. [Google Scholar]
- Ren, D.; Ren, X.; Wang, X.; Digumarti, S.T.; Shi, G. Fast-Learning Grasping and Pre-Grasping via Clutter Quantization and Q-map Masking. arXiv 2021, 1–8. arXiv:2107.02452v1. [Google Scholar]
- Gualtieri, M.; ten Pas, A.; Platt, R. Pick and Place Without Geometric Object Models. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 7433–7440. [Google Scholar]
- Berscheid, L.; Meißner, P.; Kröger, T. Self-Supervised Learning for Precise Pick-and-Place Without Object Model. IEEE Robot. Autom. Lett. 2020, 5, 4828–4835. [Google Scholar] [CrossRef]
- Su, Y.-S.; Lu, S.-H.; Ser, P.-S.; Hsu, W.-T.; Lai, W.-C.; Xie, B.; Huang, H.-M.; Lee, T.-Y.; Chen, H.-W.; Yu, L.-F.; et al. Pose-Aware Placement of Objects with Semantic Labels-Brandname-based Affordance Prediction and Cooperative Dual-Arm Active Manipulation. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 4760–4767. [Google Scholar]
- Zhao, W.; Chen, W. Hierarchical POMDP planning for object manipulation in clutter. Rob. Auton. Syst. 2021, 139, 103736. [Google Scholar] [CrossRef]
- Hundt, A.; Killeen, B.; Greene, N.; Wu, H.; Kwon, H.; Paxton, C.; Hager, G.D. “Good Robot!”: Efficient Reinforcement Learning for Multi-Step Visual Tasks with Sim to Real Transfer. IEEE Robot. Autom. Lett. 2020, 5, 6724–6731. [Google Scholar] [CrossRef]
- Li, R.; Jabri, A.; Darrell, T.; Agrawal, P. Towards Practical Multi-Object Manipulation using Relational Reinforcement Learning. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 4051–4058. [Google Scholar]
- Huang, E.; Jia, Z.; Mason, M.T. Large-scale multi-object rearrangement. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 211–218. [Google Scholar]
- Yuan, W.; Hang, K.; Kragic, D.; Wang, M.Y.; Stork, J.A. End-to-end nonprehensile rearrangement with deep reinforcement learning and simulation-to-reality transfer. Rob. Auton. Syst. 2019, 119, 119–134. [Google Scholar] [CrossRef]
- Song, H.; Haustein, J.A.; Yuan, W.; Hang, K.; Wang, M.Y.; Kragic, D.; Stork, J.A. Multi-Object Rearrangement with Monte Carlo Tree Search: A Case Study on Planar Nonprehensile Sorting. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 9433–9440. [Google Scholar]
- Rouillard, T.; Howard, I.; Cui, L. Autonomous Two-Stage Object Retrieval Using Supervised and Reinforcement Learning. In Proceedings of the 2019 IEEE International Conference on Mechatronics and Automation (ICMA), Tianjin, China, 4–7 August 2019; pp. 780–786. [Google Scholar]
- Chen, C.; Li, H.-Y.; Zhang, X.; Liu, X.; Tan, U.-X. Towards Robotic Picking of Targets with Background Distractors using Deep Reinforcement Learning. In Proceedings of the 2019 WRC Symposium on Advanced Robotics and Automation (WRC SARA), Beijing, China, 21–22 August 2019; pp. 166–171. [Google Scholar]
- Novkovic, T.; Pautrat, R.; Furrer, F.; Breyer, M.; Siegwart, R.; Nieto, J. Object Finding in Cluttered Scenes Using Interactive Perception. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 8338–8344. [Google Scholar]
- Yang, Y.; Liang, H.; Choi, C. A Deep Learning Approach to Grasping the Invisible. IEEE Robot. Autom. Lett. 2020, 5, 2232–2239. [Google Scholar] [CrossRef]
- Zuo, G.; Tong, J.; Wang, Z.; Gong, D. A Graph-Based Deep Reinforcement Learning Approach to Grasping Fully Occluded Objects. Cognit. Comput. 2022. [Google Scholar] [CrossRef]
- Fujita, Y.; Uenishi, K.; Ummadisingu, A.; Nagarajan, P.; Masuda, S.; Castro, M.Y. Distributed Reinforcement Learning of Targeted Grasping with Active Vision for Mobile Manipulators. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 9712–9719. [Google Scholar]
- Andrychowicz, M.; Wolski, F.; Ray, A.; Schneider, J.; Fong, R.; Welinder, P.; McGrew, B.; Tobin, J.; Abbeel, P.; Zaremba, W. Hindsight experience replay. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5049–5059. [Google Scholar]
- Kurenkov, A.; Taglic, J.; Kulkarni, R.; Dominguez-Kuhne, M.; Garg, A.; Martín-Martín, R.; Savarese, S. Visuomotor mechanical search: Learning to retrieve target objects in clutter. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 8408–8414. [Google Scholar]
- Huang, B.; Guo, T.; Boularias, A.; Yu, J. Interleaving Monte Carlo Tree Search and Self-Supervised Learning for Object Retrieval in Clutter. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 625–632. [Google Scholar]
- Kumar, K.N.; Essa, I.; Ha, S. Graph-based Cluttered Scene Generation and Interactive Exploration using Deep Reinforcement Learning. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 7521–7527. [Google Scholar]
- Danielczuk, M.; Angelova, A.; Vanhoucke, V.; Goldberg, K. X-Ray: Mechanical search for an occluded object by minimizing support of learned occupancy distributions. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 9577–9584. [Google Scholar]
- Deng, Y.; Guo, D.; Guo, X.; Zhang, N.; Liu, H.; Sun, F. MQA: Answering the Question via Robotic Manipulation. In Proceedings of the Robotics: Science and Systems (RSS 2021), New York, NY, USA, 27 June–1 July 2021; pp. 1–10. [Google Scholar]
- Xu, K.; Yu, H.; Lai, Q.; Wang, Y.; Xiong, R. Efficient learning of goal-oriented push-grasping synergy in clutter. IEEE Robot. Autom. Lett. 2021, 6, 6337–6344. [Google Scholar] [CrossRef]
- Huang, B.; Han, S.D.; Yu, J.; Boularias, A. Visual Foresight Trees for Object Retrieval From Clutter With Nonprehensile Rearrangement. IEEE Robot. Autom. Lett. 2022, 7, 231–238. [Google Scholar] [CrossRef]
- Bejjani, W.; Agboh, W.C.; Dogar, M.R.; Leonetti, M. Occlusion-Aware Search for Object Retrieval in Clutter. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021), Prague, Czech Republic, 27 September–1 October 2021; pp. 1–8. [Google Scholar]
- Cheong, S.; Cho, B.Y.; Lee, J.; Lee, J.; Kim, D.H.; Nam, C.; Kim, C.; Park, S. Obstacle rearrangement for robotic manipulation in clutter using a deep Q-network. Intell. Serv. Robot. 2021, 14, 549–561. [Google Scholar] [CrossRef]
- Bejjani, W.; Papallas, R.; Leonetti, M.; Dogar, M.R. Planning with a Receding Horizon for Manipulation in Clutter Using a Learned Value Function. In Proceedings of the 2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids), Beijing, China, 6–9 November 2018; pp. 1–9. [Google Scholar]
- Bejjani, W.; Dogar, M.R.; Leonetti, M. Learning Physics-Based Manipulation in Clutter: Combining Image-Based Generalization and Look-Ahead Planning. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 6562–6569. [Google Scholar]
- Bejjani, W.; Leonetti, M.; Dogar, M.R. Learning image-based Receding Horizon Planning for manipulation in clutter. Rob. Auton. Syst. 2021, 138, 103730. [Google Scholar] [CrossRef]
- Wu, P.; Chen, W.; Liu, H.; Duan, Y.; Lin, N.; Chen, X. Predicting Grasping Order in Clutter Environment by Using Both Color Image and Points Cloud. In Proceedings of the 2019 WRC Symposium on Advanced Robotics and Automation (WRC SARA), Beijing, China, 21–22 August 2019; pp. 197–202. [Google Scholar]
- Papallas, R.; Dogar, M.R. Non-Prehensile Manipulation in Clutter with Human-In-The-Loop. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 6723–6729. [Google Scholar]
- Papallas, R.; Cohn, A.G.; Dogar, M.R. Online replanning with human-in-The-loop for non-prehensile manipulation in clutter-A trajectory optimization based approach. IEEE Robot. Autom. Lett. 2020, 5, 5377–5384. [Google Scholar] [CrossRef]
- Kiatos, M.; Malassiotis, S. Robust object grasping in clutter via singulation. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 1596–1600. [Google Scholar]
- Sarantopoulos, I.; Kiatos, M.; Doulgeri, Z.; Malassiotis, S. Split Deep Q-Learning for Robust Object Singulation. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 6225–6231. [Google Scholar]
- Sarantopoulos, I.; Kiatos, M.; Doulgeri, Z.; Malassiotis, S. Total Singulation With Modular Reinforcement Learning. IEEE Robot. Autom. Lett. 2021, 6, 4117–4124. [Google Scholar] [CrossRef]
- Tekden, A.E.; Erdem, A.; Erdem, E.; Asfour, T.; Ugur, E. Object and Relation Centric Representations for Push Effect Prediction. arXiv 2021, 1–12. arXiv:2102.02100. [Google Scholar]
- Won, J.; Park, Y.; Yi, B.-J.; Suh, I.H. Object Singulation by Nonlinear Pushing for Robotic Grasping. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 2402–2407. [Google Scholar]
- Kiatos, M.; Malassiotis, S.; Sarantopoulos, I. A Geometric Approach for Grasping Unknown Objects With Multifingered Hands. IEEE Trans. Robot. 2021, 37, 735–746. [Google Scholar] [CrossRef]
- Mahler, J.; Liang, J.; Niyaz, S.; Aubry, M.; Laskey, M.; Doan, R.; Liu, X.; Ojea, J.A.; Goldberg, K. Dex-Net 2.0: Deep learning to plan Robust grasps with synthetic point clouds and analytic grasp metrics. In Proceedings of the 2017 Robotics: Science and Systems (RSS), Cambridge, MA, USA, 12–16 July 2017; pp. 1–10. [Google Scholar]
- Mousavian, A.; Eppner, C.; Fox, D. 6-DOF GraspNet: Variational grasp generation for object manipulation. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 2901–2910. [Google Scholar]
- Iriondo, A.; Lazkano, E.; Ansuategi, A. Affordance-based grasping point detection using graph convolutional networks for industrial bin-picking applications. Sensors 2021, 21, 816. [Google Scholar] [CrossRef]
- Cheng, B.; Wu, W.; Tao, D.; Mei, S.; Mao, T.; Cheng, J. Random Cropping Ensemble Neural Network for Image Classification in a Robotic Arm Grasping System. IEEE Trans. Instrum. Meas. 2020, 69, 6795–6806. [Google Scholar] [CrossRef]
- D’Avella, S.; Tripicchio, P.; Avizzano, C.A. A study on picking objects in cluttered environments: Exploiting depth features for a custom low-cost universal jamming gripper. Robot. Comput. Integr. Manuf. 2020, 63, 101888. [Google Scholar] [CrossRef]
- Wang, J.; Hu, C.; Wang, Y.; Zhu, Y. Dynamics Learning With Object-Centric Interaction Networks for Robot Manipulation. IEEE Access 2021, 9, 68277–68288. [Google Scholar] [CrossRef]
- Uc-Cetina, V.; Navarro-Guerrero, N.; Martin-Gonzalez, A.; Weber, C.; Wermter, S. Survey on reinforcement learning for language processing. arXiv 2021, 1–33. arXiv:2104.05565v1. [Google Scholar] [CrossRef]
- Sajjan, S.; Moore, M.; Pan, M.; Nagaraja, G.; Lee, J.; Zeng, A.; Song, S. Clear Grasp: 3D Shape Estimation of Transparent Objects for Manipulation. In Proceedings of the IEEE International Conference on Robotics and Automation, Paris, France, 31 May–31 August 2020; pp. 3634–3642. [Google Scholar]
- Hu, Z.; Han, T.; Sun, P.; Pan, J.; Manocha, D. 3-D Deformable Object Manipulation Using Deep Neural Networks. IEEE Robot. Autom. Lett. 2019, 4, 4255–4261. [Google Scholar] [CrossRef]
- Wang, X.; Jiang, X.; Zhao, J.; Wang, S.; Liu, Y.-H. Grasping Objects Mixed with Towels. IEEE Access 2020, 8, 129338–129346. [Google Scholar] [CrossRef]
- Tran, L.V.; Lin, H.-Y. BiLuNetICP: A Deep Neural Network for Object Semantic Segmentation and 6D Pose Recognition. IEEE Sens. J. 2021, 21, 11748–11757. [Google Scholar] [CrossRef]
- Xu, Z.; Wu, J.; Zeng, A.; Tenenbaum, J.; Song, S. DensePhysNet: Learning Dense Physical Object Representations Via Multi-Step Dynamic Interactions. arXiv 2019, 1–10. [Google Scholar] [CrossRef]
- Zakka, K.; Zeng, A.; Lee, J.; Song, S. Form2Fit: Learning Shape Priors for Generalizable Assembly from Disassembly. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 9404–9410. [Google Scholar]
- Wang, C.; Lin, P. Q-PointNet: Intelligent Stacked-Objects Grasping Using a RGBD Sensor and a Dexterous Hand. In Proceedings of the 2020 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Boston, MA, USA, 6–9 July 2020; pp. 601–606. [Google Scholar]
- Ni, P.; Zhang, W.; Zhu, X.; Cao, Q. PointNet++ Grasping: Learning An End-to-end Spatial Grasp Generation Algorithm from Sparse Point Clouds. In Proceedings of the IEEE International Conference on Robotics and Automation, Paris, France, 31 May–31 August 2020; pp. 3619–3625. [Google Scholar]
- Wu, B.; Akinola, I.; Varley, J.; Allen, P. MAT: Multi-Fingered Adaptive Tactile Grasping via Deep Reinforcement Learning. arXiv 2019, 1–20. arXiv:1909.04787v2. [Google Scholar]
- Schnieders, B.; Palmer, G.; Luo, S.; Tuyls, K. Fully convolutional one-shot object segmentation for industrial robotics. In Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), Montreal, QC, Canada, 13–17 May 2019; pp. 1161–1169. [Google Scholar]
- Morrison, D.; Leitner, J.; Corke, P. Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach. In Proceedings of the Robotics: Science and Systems XIV (RSS 2018), Pittsburgh, PA, USA, 26–30 June 2018; pp. 1–10. [Google Scholar]
- Calandra, R.; Owens, A.; Upadhyaya, M.; Yuan, W.; Lin, J.; Adelson, E.H.; Levine, S. The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes? In Proceedings of the Conference on Robot Learning (CoRL), Mountain View, CA, USA, 13–15 November 2017; pp. 1–10. [Google Scholar]
- Eitel, A.; Hauff, N.; Burgard, W. Self-supervised Transfer Learning for Instance Segmentation through Physical Interaction. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Macau, China, 3–8 November 2019; pp. 4020–4026. [Google Scholar]
- Li, A.; Danielczuk, M.; Goldberg, K. One-Shot Shape-Based Amodal-to-Modal Instance Segmentation. In Proceedings of the 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Hong Kong, China, 20–21 August 2020; pp. 1375–1382. [Google Scholar]
- Nematollahi, I.; Mees, O.; Hermann, L.; Burgard, W. Hindsight for foresight: Unsupervised structured dynamics models from physical interaction. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 25–29 October 2020; pp. 5319–5326. [Google Scholar]
Terms | Definition |
---|---|
Reinforcement learning (RL) | A branch of machine learning concerned with how agents should act in an environment to maximize a notion of cumulative future reward. |
Markov decision process (MDP) | MDPs give a general framework for sequential decision making, and the dynamics of an MDP are defined by a probability distribution. An MDP serves as the conceptual framework for RL because it allows the RL interaction process to be expressed in probabilistic terms. |
Q-table | A basic table in which the maximum expected future reward for each state-action pair is computed. |
State-value function | The expected return from a given state under a specific policy. |
Q-function | Given a state-action pair, a Q-function generates Q-values using either state-value functions or state-action value functions. |
Action value | The expected return from a given state when a specific action is taken is called the value of that action. |
Policy | A policy describes an agent's behavior by mapping its current state to a probability distribution over actions. |
Optimal policy | A policy that is as good as or better than every other policy; its value function thus attains the greatest possible value in every state. |
Policy types | The deterministic policy and the stochastic policy are the two most common types of policies observed in the RL domain. |
Epsilon-greedy | During exploitation, epsilon-greedy picks the action that maximizes the current value estimate, while during exploration it chooses an action uniformly at random. The action with the highest value is called the "greedy action", and the other actions are called "non-greedy actions". |
Discount factor (γ) | Expresses how important rewards in the far future are to an RL agent compared with rewards in the near future, 0 ≤ γ ≤ 1. |
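The terms in the table above (Q-table, epsilon-greedy action selection, and the discount factor) can be tied together with a minimal tabular Q-learning sketch. The toy one-dimensional "corridor" environment, the constant values, and the function names below are illustrative assumptions for exposition only; they do not come from any of the reviewed papers.

```python
import random

N_STATES = 5          # states 0..4; reaching state 4 yields reward 1
ACTIONS = [1, -1]     # move right or left
ALPHA = 0.1           # learning rate
GAMMA = 0.9           # discount factor, 0 <= gamma <= 1
EPSILON = 0.1         # exploration probability

# Q-table: estimated maximum expected future reward per (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def epsilon_greedy(state):
    """With probability epsilon explore (uniform random action);
    otherwise exploit by taking the greedy action with the highest Q-value."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def step(state, action):
    """Toy dynamics: deterministic move; reward 1 only at the goal state."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        action = epsilon_greedy(state)
        next_state, reward, done = step(state, action)
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy at the start state moves toward the goal.
print(max(ACTIONS, key=lambda a: Q[(0, a)]))  # prints 1
```

Note how the discount factor shapes the learned values: the converged Q-value at the start state approaches γ³ ≈ 0.73 rather than 1, because the reward lies three steps in the future.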
| Methodology | Drawbacks | Gripper | Ref. |
|---|---|---|---|
| | | Parallel-jaw finger | [30] |
| | | Parallel-jaw finger | [38] |
| | | Parallel-jaw finger | [34] |
| | | Suction grasp multifunctional gripper | [40] |
| | | Parallel-jaw finger | [43] |
| | | Multi-finger gripper | [39] |
| | | Parallel-jaw finger (dual-arm robot) | [35] |
| | | Parallel-jaw finger | [31] |
| | | Parallel-jaw finger | [36] |
| | | Parallel-jaw finger | [29] |
| | | Multi-finger (human hand) | [37] |
| | | Parallel-jaw finger | [52] |
| | | Parallel-jaw finger | [41] |
| | | Tested on a range of parallel-jaw and multi-finger robot hands | [47] |
| | | Parallel-jaw finger | [49] |
| | | Parallel-jaw finger | [53] |
| | | Parallel-jaw finger | [45] |
| | | Multi-finger | [48] |
| Ref. | Challenge | Method | Weakness | Gripper | Mechanism | Success Rate |
|---|---|---|---|---|---|---|
| [65] | Grasping of objects placed in well-organized shapes | Deep Q-learning | | Parallel-jaw finger | Push-to-grasp | 80.3% |
| [67] | Grasping of objects aligned with the bin wall or boundaries | Deep Q-learning | | Parallel-jaw finger | Shift-to-grasp | 91.7% |
| [62,63,61] | Grasping of objects placed among highly random cluttered objects | DQN | | Multifunctional gripper | Push-to-grasp | 77% |
| [71] | Grasping of objects in well-organized shapes | Deep Q-learning | | Parallel-jaw finger | Push-to-grasp | 83.1% |
| [72] | Grasping of objects in cluttered bins | Deep Q-learning | | Suction cup | Poke-to-grasp | N/A |
| [66] | Grasping of objects in well-organized shapes | Twin delayed deep deterministic policy gradient | | Parallel-jaw finger | Push-to-grasp | 73.5% |
| [69] | Grasping of objects placed randomly in clutter | Attention DQN | | Parallel-jaw finger | Push-to-grasp | 73.5% |
| [68] | Grasping of objects aligned with the bin wall or boundaries | Deep Q-learning | | Parallel-jaw finger | Push-to-grasp | 74.6% |
| [78] | Grasping of objects in well-organized shapes | Duelling DDQN | | Parallel-jaw finger | Push-to-grasp | 94% |
| Ref. | Method | Weakness | Gripper | Mechanism | Success Rate |
|---|---|---|---|---|---|
| [89] | | | Parallel-jaw finger | Push-to-grasp | N/A |
| [90] | | | Parallel-jaw finger | Push-to-grasp | 97.3% |
| [91] | | | Parallel-jaw finger | Push-to-grasp | 86.0% |
| [100] | | | Parallel-jaw finger | Push-to-grasp | 83.1% |
| [101] | | | Parallel-jaw finger | Push-to-grasp | 98.5% |
| [103] | | | Parallel-jaw finger | Pick-to-place | 74–95% |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Mohammed, M.Q.; Kwek, L.C.; Chua, S.C.; Al-Dhaqm, A.; Nahavandi, S.; Eisa, T.A.E.; Miskon, M.F.; Al-Mhiqani, M.N.; Ali, A.; Abaker, M.; et al. Review of Learning-Based Robotic Manipulation in Cluttered Environments. Sensors 2022, 22, 7938. https://doi.org/10.3390/s22207938