Abstract
Our research explores new modalities that allow robots to acquire skills quickly and in a user-friendly manner. In this work we present a novel active interface with perception and projection capabilities that simplifies the skill transfer process. The interface lets humans and robots interact with each other in the same environment through visual feedback. During the learning process, the real workspace serves as a tangible interface that helps the user understand what the robot has learned so far, displays information about the task, and provides feedback and guidance. The user can thus incrementally visualize and assess the learner's state while staying focused on the skill transfer, without disrupting the continuity of the teaching interaction. We also propose a proof of concept, as a core element of the architecture, based on an experimental setup in which a pico-projector and an RGB-depth sensor are mounted on the end-effector of a 7-DOF robotic arm.
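Projecting task information onto the real workspace, as described above, requires mapping 3D scene points into projector pixel coordinates. The sketch below illustrates this with a standard pinhole model; the intrinsic matrix `K` and pose `(R, t)` are hypothetical placeholder values for illustration, not the paper's actual calibration.

```python
import numpy as np

# Hypothetical projector intrinsics (focal lengths and principal point in pixels).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical pose of the projector in the workspace frame:
# identity rotation, offset 0.5 m along the optical axis.
R = np.eye(3)
t = np.array([0.0, 0.0, 0.5])

def project_point(p_world):
    """Map a 3D workspace point (meters) to projector pixel coordinates."""
    p_cam = R @ p_world + t          # transform into the projector frame
    uvw = K @ p_cam                  # apply the pinhole projection
    return uvw[:2] / uvw[2]          # perspective divide

# A point on the optical axis lands at the principal point.
print(project_point(np.array([0.0, 0.0, 0.5])))  # -> [320. 240.]
```

With a calibrated projector (and a depth sensor providing the 3D points), the same mapping lets the system draw learned trajectories or guidance cues directly on the objects involved in the task.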
De Tommaso, D., Calinon, S. & Caldwell, D.G. A Tangible Interface for Transferring Skills. Int J of Soc Robotics 4, 397–408 (2012). https://doi.org/10.1007/s12369-012-0154-y