Abstract
Computationally efficient task execution is crucial for autonomous mobile robots with limited on-board computational resources. Most robot control approaches assume a fixed state and action representation, and use a single algorithm to map states to actions. However, not all situations in a given task require equally complex algorithms or equally detailed state and action representations. The main motivation for this work is to reduce the computational footprint of performing a task by allowing the robot to run simpler algorithms whenever possible, resorting to a more complex algorithm only when needed. We contribute the Multi-Resolution Task Execution (MRTE) algorithm, which utilizes human feedback to learn a mapping from a given state to an appropriate detail resolution, where each resolution consists of a state and action representation together with an algorithm that maps states to actions at that resolution. The robot learns a policy from human demonstration to switch between different detail resolutions as needed, favoring lower detail resolutions to reduce the computational cost of task execution. We then present the Model Plus Correction (M+C) algorithm, which improves the performance of an algorithm using corrective human feedback without modifying the algorithm itself. Finally, we introduce the Multi-Resolution Model Plus Correction (MRM+C) algorithm as a combination of MRTE and M+C. MRM+C learns from human demonstration how to select an appropriate detail resolution to operate at in a given state. Furthermore, it allows the teacher to provide corrective demonstration at different detail resolutions to improve overall task execution performance. We provide formal definitions of the MRTE, M+C, and MRM+C algorithms, and show how they relate to the general robot control problem and the Learning from Demonstration (LfD) approach. We present experimental results demonstrating the effectiveness of the proposed methods on a goal-directed humanoid obstacle avoidance task.
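To make the control loop described above concrete, the following is a minimal sketch, in Python, of how a multi-resolution controller with corrective demonstration could be structured. It is an illustrative reading of the abstract, not the authors' implementation: the names `Resolution`, `select_resolution`, `corrected_action`, and the nearest-neighbor reuse of corrections are all assumptions introduced here for exposition.

```python
# Illustrative sketch of the MRTE/MRM+C control loop described in the abstract.
# All names and the nearest-neighbor correction scheme are assumptions for
# exposition, not the authors' actual implementation.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Resolution:
    """One detail resolution: a state abstraction, a controller, and its cost."""
    abstract: Callable[[tuple], tuple]  # raw state -> representation at this resolution
    policy: Callable[[tuple], tuple]    # state representation -> action
    cost: float                         # relative computational cost (lower is cheaper)

def select_resolution(state, resolutions: List[Resolution], demo_classifier):
    """MRTE: pick the cheapest resolution the human-taught classifier deems adequate.

    demo_classifier(state, level) -> True if this resolution suffices in `state`;
    it stands in for the switching policy learned from human demonstration.
    """
    for level, res in enumerate(sorted(resolutions, key=lambda r: r.cost)):
        if demo_classifier(state, level):
            return res
    return resolutions[-1]  # fall back to the most detailed resolution

def corrected_action(state, resolution: Resolution,
                     corrections: List[Tuple[tuple, tuple]]):
    """M+C: model output plus a reused human correction, model left unmodified."""
    s_abs = resolution.abstract(state)
    base = resolution.policy(s_abs)
    if corrections:
        # Reuse the correction recorded nearest to the current abstracted state.
        _, delta = min(
            corrections,
            key=lambda c: sum((a - b) ** 2 for a, b in zip(c[0], s_abs)),
        )
        return tuple(a + d for a, d in zip(base, delta))
    return base
```

In this reading, MRM+C is the composition of the two functions: `select_resolution` chooses where on the cost/detail spectrum to operate, and `corrected_action` overlays teacher corrections recorded at that resolution onto the underlying controller's output.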
Cite this article
Meriçli, Ç., Veloso, M. & Akın, H.L. Multi-resolution Corrective Demonstration for Efficient Task Execution and Refinement. Int J of Soc Robotics 4, 423–435 (2012). https://doi.org/10.1007/s12369-012-0159-6