Abstract
We study the problem of learning stochastic actions in propositional, factored environments, and more specifically the problem of identifying STRIPS-like effects from transitions in which they are ambiguous. We give an unbiased, maximum-likelihood approach, and show that maximally likely actions can be computed efficiently from observations. We also discuss how this study can be used to extend an RL approach for actions with independent effects to one for actions with correlated effects.
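To make the notion of ambiguity concrete, the sketch below shows how two different STRIPS-like effects can explain the same observed transition, and how a maximum-likelihood objective over candidate effects can nonetheless rank effect distributions once more transitions are seen. This is an illustrative toy example, not the authors' algorithm; the domain, function names, and candidate effects are all hypothetical.

```python
# Illustrative sketch (not the paper's algorithm): ambiguity of STRIPS-like effects
# in a single transition, and a maximum-likelihood objective over candidate effects.
from math import log

def apply_effect(state, effect):
    """Apply a STRIPS-like effect (a partial assignment) to a propositional state."""
    new_state = dict(state)
    new_state.update(effect)
    return new_state

def consistent_effects(s, s_next, candidates):
    """Candidate effects that could have produced the transition (s, s_next)."""
    return [e for e in candidates if apply_effect(s, e) == s_next]

def log_likelihood(transitions, candidates, probs):
    """Log-likelihood of a distribution `probs` over candidate effects.
    A transition may be explained by several (ambiguous) effects, so its
    probability is the sum of the probabilities of all consistent ones."""
    total = 0.0
    for s, s_next in transitions:
        p = sum(probs[i] for i, e in enumerate(candidates)
                if apply_effect(s, e) == s_next)
        if p == 0.0:
            return float("-inf")
        total += log(p)
    return total

if __name__ == "__main__":
    # Toy transition: 'clear' flips to True while 'on' is already True in s,
    # so an effect setting 'on' to True is indistinguishable from one that does not.
    s = {"on": True, "clear": False}
    s_next = {"on": True, "clear": True}
    candidates = [{"clear": True}, {"on": True, "clear": True}]
    print(consistent_effects(s, s_next, candidates))   # both candidates are consistent

    # A second transition starting from 'on' = False disambiguates the two candidates.
    transitions = [(s, s_next),
                   ({"on": False, "clear": False}, {"on": False, "clear": True})]
    print(log_likelihood(transitions, candidates, [0.5, 0.5]))  # approx. -0.69
    print(log_likelihood(transitions, candidates, [1.0, 0.0]))  # 0.0, the more likely model
```

On these two transitions, putting all probability mass on the effect that only sets 'clear' is the maximally likely choice, which is the kind of conclusion the paper's efficient maximum-likelihood computation is meant to draw from observed data.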
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Lesner, B., Zanuttini, B. (2012). Handling Ambiguous Effects in Action Learning. In: Sanner, S., Hutter, M. (eds) Recent Advances in Reinforcement Learning. EWRL 2011. Lecture Notes in Computer Science, vol 7188. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29946-9_9
DOI: https://doi.org/10.1007/978-3-642-29946-9_9
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-29945-2
Online ISBN: 978-3-642-29946-9