Abstract
When scaling up RL to large continuous domains with imperfect representations and hierarchical structure, we often try applying algorithms that are proven to converge in small finite domains, and then just hope for the best. This talk will advocate instead designing algorithms that adhere to the constraints, and indeed take advantage of the opportunities, that might come with the problem at hand. Drawing on several different research threads within the Learning Agents Research Group at UT Austin, I will discuss four types of issues that arise from these constraints and opportunities: 1) Representation – choosing the algorithm for the problem’s representation and adapting the representation to fit the algorithm; 2) Interaction – with other agents and with human trainers; 3) Synthesis – of different algorithms for the same problem and of different concepts in the same algorithm; and 4) Mortality – the opportunity to improve learning based on past experience and the constraint that one can’t explore exhaustively.
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Stone, P. (2012). Invited Talk: PRISM – Practical RL: Representation, Interaction, Synthesis, and Mortality. In: Sanner, S., Hutter, M. (eds) Recent Advances in Reinforcement Learning. EWRL 2011. Lecture Notes in Computer Science, vol 7188. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29946-9_3
DOI: https://doi.org/10.1007/978-3-642-29946-9_3
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-29945-2
Online ISBN: 978-3-642-29946-9
eBook Packages: Computer Science, Computer Science (R0)