Abstract
Robotic grasping of unknown objects in cluttered scenes is already well established, mainly due to advances in Deep Learning methods. A major drawback of these methods is their need for large amounts of real-world training data. Furthermore, the resulting networks are not interpretable in the sense that it is unclear why certain grasp attempts fail. To make robotic grasping traceable and to simplify the overall model, we propose dividing the complex task of finding stable grasp points into three simpler tasks. The first task is to find all grasp points at which the gripper can be lowered onto the table without colliding with the object. The second task is to determine, for the grasp points and gripper parameters from the first step, how the object moves while the gripper is closed. Finally, in the third step, it is predicted for all grasp points from the second step whether the object slips out of the gripper during lifting. This decomposition makes it possible to understand for each grasp point why it is stable and, just as importantly, why others are unstable or infeasible. In this study we focus on the second task, the prediction of the physical interaction between gripper and object while the gripper is closed. We investigate different Convolutional Neural Network (CNN) architectures and identify the architecture(s) that best predict this physical interaction in image space. The training data for the experiments are generated in the robot and physics simulator V-REP.
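The paper does not publish code, so the following is only an illustrative sketch of what the second task amounts to: a PyTorch encoder-decoder CNN that takes a top-down scene image together with grasp parameters and outputs a predicted image of the scene after the gripper has closed. The class name, layer sizes, image resolution, and the choice of three grasp parameters are all assumptions for the sketch, not the architecture investigated in the paper.

```python
import torch
import torch.nn as nn

class MovementPredictionCNN(nn.Module):
    """Hypothetical encoder-decoder CNN: scene image + grasp parameters
    -> predicted post-grasp image.  Not the architecture from the paper."""

    def __init__(self, grasp_dim: int = 3):
        super().__init__()
        # Encoder: compress a 64x64 grayscale image to a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
        )
        # Grasp parameters are tiled and concatenated as extra channels.
        self.fuse = nn.Conv2d(32 + grasp_dim, 32, 1)
        # Decoder: upsample back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),  # predicted image, pixel values in [0, 1]
        )

    def forward(self, image: torch.Tensor, grasp: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(image)                        # (B, 32, 16, 16)
        b, _, h, w = feat.shape
        g = grasp.view(b, -1, 1, 1).expand(-1, -1, h, w)  # broadcast parameters
        feat = torch.relu(self.fuse(torch.cat([feat, g], dim=1)))
        return self.decoder(feat)

# Toy forward pass with random tensors standing in for simulator images.
model = MovementPredictionCNN()
pre_grasp = torch.rand(4, 1, 64, 64)   # batch of pre-grasp scene images
grasp_params = torch.rand(4, 3)        # e.g. (x, y, gripper orientation)
predicted = model(pre_grasp, grasp_params)
print(predicted.shape)  # torch.Size([4, 1, 64, 64])
```

Broadcasting the grasp parameters as extra feature-map channels is one common way to condition an image-to-image network on an action, in the spirit of action-conditional video prediction; whether the paper conditions its CNNs this way is not stated in the abstract.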
Acknowledgements
This work was supported by the EFRE-NRW funding programme “Forschungsinfrastrukturen” (grant no. 34.EFRE-0300119).
Copyright information
© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Schwan, C., Schenck, W. (2020). Visual Movement Prediction for Stable Grasp Point Detection. In: Iliadis, L., Angelov, P., Jayne, C., Pimenidis, E. (eds) Proceedings of the 21st EANN (Engineering Applications of Neural Networks) 2020 Conference. EANN 2020. Proceedings of the International Neural Networks Society, vol 2. Springer, Cham. https://doi.org/10.1007/978-3-030-48791-1_5
DOI: https://doi.org/10.1007/978-3-030-48791-1_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-48790-4
Online ISBN: 978-3-030-48791-1