Abstract
In the exploration phase of reinforcement learning, a process of trial and error is needed to discover actions that yield better rewards from the environment. A uniform pseudorandom number generator is usually employed to drive this exploration. However, it is known that a chaotic source can also provide a random-like sequence similar to that of a stochastic source. In this paper we employ a chaotic generator in the exploration phase of reinforcement learning in a nondeterministic maze problem, and we obtain promising results on this maze problem.
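The idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the logistic map (a commonly used chaotic generator) as the chaotic source, and a toy five-state corridor standing in for the maze; the map parameters, the epsilon value, and the environment are all illustrative choices.

```python
def logistic_map(x0=0.3141, r=4.0):
    """Chaotic source: iterates x_{n+1} = r*x*(1-x).
    For r = 4.0 the iterates fill (0, 1) with a random-like sequence."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

def choose_action(q_row, chaos, epsilon=0.2):
    """Epsilon-greedy selection driven by the chaotic sequence
    in place of a uniform pseudorandom generator."""
    if next(chaos) < epsilon:
        # explore: map the next chaotic value onto an action index
        return int(next(chaos) * len(q_row)) % len(q_row)
    # exploit: greedy action
    return max(range(len(q_row)), key=lambda a: q_row[a])

def step(state, action):
    """Toy 'maze': states 0..4 on a line, goal at state 4,
    actions {0: left, 1: right}; reward 1 on reaching the goal."""
    nxt = max(0, min(4, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == 4 else 0.0)

def train(episodes=200, alpha=0.5, gamma=0.9):
    """Tabular Q-learning with chaotic exploration."""
    q = [[0.0, 0.0] for _ in range(5)]
    chaos = logistic_map()
    for _ in range(episodes):
        s = 0
        while s != 4:
            a = choose_action(q[s], chaos)
            s2, r = step(s, a)
            # standard Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
```

After training, the greedy policy prefers moving right in every non-terminal state, showing that the deterministic chaotic sequence can serve the same role as a stochastic exploration source in this setting.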
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Beigi, A., Mozayani, N., Parvin, H. (2011). Chaotic Exploration Generator for Evolutionary Reinforcement Learning Agents in Nondeterministic Environments. In: Dobnikar, A., Lotrič, U., Šter, B. (eds) Adaptive and Natural Computing Algorithms. ICANNGA 2011. Lecture Notes in Computer Science, vol 6594. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-20267-4_26
Print ISBN: 978-3-642-20266-7
Online ISBN: 978-3-642-20267-4