Abstract
The paper reports on an experiment in which a Knowledge-Based Reinforcement Learning (KB-RL) method was compared to a Neural Network (NN) approach in solving a classical Artificial Intelligence (AI) task. In contrast to NNs, which require a substantial amount of data to learn a good policy, the KB-RL method seeks to encode human knowledge into the solution, considerably reducing the amount of data required. By means of Reinforcement Learning (RL), KB-RL learns to optimize the model and to improve the system's output. Furthermore, KB-RL offers the advantage of a clear explanation of the decisions taken, as well as transparent reasoning behind the solution.
The goal of the reported experiment was to compare the performance of the KB-RL method with that of the Neural Network and to explore the capability of KB-RL to deliver a strong solution for such AI tasks. The results show that, within the designed setting, KB-RL outperformed the NN and learned a better policy from the available data. These results support the view that Artificial Intelligence can benefit from the discovery and study of alternative approaches, potentially extending the frontiers of AI.
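The abstract describes KB-RL only at a high level: expert knowledge is encoded into the solution, and reinforcement learning refines which knowledge to apply. The sketch below is a purely illustrative, hypothetical reading of that idea, not the paper's actual mechanism: it assumes that when several hand-written rules apply to a state, an epsilon-greedy tabular Q-learning selector learns which one to fire. The names Rule and RuleSelector, and the toy reward loop, are invented for illustration.

```python
# Illustrative sketch only. Assumption: conflicting expert rules are resolved
# by tabular Q-learning; nothing here is taken verbatim from the paper.

import random
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass(frozen=True)
class Rule:
    """One piece of encoded human knowledge: a name and an applicability test."""
    name: str
    applies: Callable[[str], bool]  # does this rule fire in the given state?


class RuleSelector:
    """Epsilon-greedy Q-learning over which applicable rule to execute."""

    def __init__(self, rules: List[Rule], alpha: float = 0.1,
                 gamma: float = 0.95, epsilon: float = 0.1) -> None:
        self.rules = rules
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q: Dict[Tuple[str, str], float] = defaultdict(float)

    def select(self, state: str) -> Rule:
        # Restrict the choice to rules the knowledge base says are applicable.
        candidates = [r for r in self.rules if r.applies(state)] or self.rules
        if random.random() < self.epsilon:
            return random.choice(candidates)
        return max(candidates, key=lambda r: self.q[(state, r.name)])

    def update(self, state: str, rule: Rule, reward: float,
               next_state: str) -> None:
        # Standard one-step Q-learning update on the (state, rule) value.
        best_next = max((self.q[(next_state, r.name)] for r in self.rules),
                        default=0.0)
        target = reward + self.gamma * best_next
        self.q[(state, rule.name)] += self.alpha * (target - self.q[(state, rule.name)])


if __name__ == "__main__":
    # Toy usage: two expert rules, a fabricated reward, a short training loop.
    rules = [
        Rule("expand", lambda s: s in ("early", "mid")),
        Rule("defend", lambda s: True),
    ]
    selector = RuleSelector(rules)
    for _ in range(200):
        state = random.choice(["early", "mid", "late"])
        rule = selector.select(state)
        reward = 1.0 if (state != "late") == (rule.name == "expand") else 0.0
        selector.update(state, rule, reward, state)
    print({k: round(v, 2) for k, v in selector.q.items()})
```

Because the learned quantities are values attached to named, human-authored rules rather than weights of a network, such a selector retains the explainability property the abstract emphasizes: every decision can be traced back to the rule that produced it.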