This paper proposes novel methods for providing characters in fighting action games with intelligence using neural networks. First, we consider how a character learns the basic game rules while fighting randomly acting opponents. Since each action in a typical fighting action game takes more than one time unit, the result of a character's action is revealed not immediately but several time units later. We therefore evaluate the fitness of a decision by the relative score change it causes: whenever the scores of the fighting characters change, the decision responsible for the change is identified, and the neural network is trained with the score difference together with the input and output values that induced that decision. Second, we address how to cope with opponents that act according to predefined action patterns; the opponents' past actions are used to find the optimal counter-actions for those patterns. Lastly, a method for learning moving actions is proposed. To evaluate the performance of the proposed algorithms, we implement a simple fighting action game in which the proposed intelligent character (IC) fights opponent characters (OCs) that act either randomly or with predefined action patterns. The results show that the IC learns the game rules and discovers the optimal counter-actions for the opponents' action patterns by itself.
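The paper itself gives no source code; the minimal sketch below (Python with NumPy) only illustrates the delayed, score-driven training loop the abstract describes: decisions are buffered, and when a score change is observed several time units later, the decision held responsible is trained with the relative score change. The class name FightingBrain, the network size, the learning rate, and the assumption that the oldest unresolved decision caused the change are our own illustrative choices, not details from the paper.

import numpy as np

class FightingBrain:
    """One-hidden-layer network that picks actions and is trained only when
    a score change is later observed (delayed credit, as in the abstract)."""

    def __init__(self, n_inputs, n_actions, hidden=16, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (n_inputs, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, n_actions))
        self.lr = lr
        self.pending = []  # (input, chosen action) pairs awaiting their outcome

    def _forward(self, x):
        h = np.tanh(x @ self.w1)                   # hidden activations
        y = 1.0 / (1.0 + np.exp(-(h @ self.w2)))   # sigmoid action scores
        return h, y

    def act(self, state):
        """Choose an action and remember the decision that produced it."""
        _, y = self._forward(state)
        action = int(np.argmax(y))
        self.pending.append((state, action))
        return action

    def on_score_change(self, score_delta):
        """Train on the decision assumed to have caused the observed change."""
        if not self.pending:
            return
        state, action = self.pending.pop(0)
        h, y = self._forward(state)
        target = y.copy()
        # Push the chosen action's score toward 1 for a favourable relative
        # score change, toward 0 for an unfavourable one.
        target[action] = 1.0 if score_delta > 0 else 0.0
        err = target - y
        grad_out = err * y * (1.0 - y)             # sigmoid derivative
        grad_hid = (grad_out @ self.w2.T) * (1.0 - h ** 2)
        self.w2 += self.lr * np.outer(h, grad_out)
        self.w1 += self.lr * np.outer(state, grad_hid)

# Usage sketch: the input vector might encode the distance to the opponent and
# the opponent's recent actions (the abstract suggests past actions are useful
# for learning counter-actions to predefined patterns).
brain = FightingBrain(n_inputs=4, n_actions=3)
chosen = brain.act(np.array([0.5, 0.0, 1.0, 0.0]))
brain.on_score_change(+10)  # reward arrives several time units later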
Byeong Heon CHO, Sung Hoon JUNG, Yeong Rak SEONG, Ha Ryoung OH, "Exploiting Intelligence in Fighting Action Games Using Neural Networks," IEICE TRANSACTIONS on Information, vol. E89-D, no. 3, pp. 1249-1256, March 2006, doi: 10.1093/ietisy/e89-d.3.1249.
URL: https://globals.ieice.org/en_transactions/information/10.1093/ietisy/e89-d.3.1249/_p
@ARTICLE{e89-d_3_1249,
author={Byeong Heon CHO and Sung Hoon JUNG and Yeong Rak SEONG and Ha Ryoung OH},
journal={IEICE TRANSACTIONS on Information},
title={Exploiting Intelligence in Fighting Action Games Using Neural Networks},
year={2006},
volume={E89-D},
number={3},
pages={1249-1256},
keywords={},
doi={10.1093/ietisy/e89-d.3.1249},
ISSN={1745-1361},
month={March},
}
TY - JOUR
TI - Exploiting Intelligence in Fighting Action Games Using Neural Networks
T2 - IEICE TRANSACTIONS on Information
SP - 1249
EP - 1256
AU - Byeong Heon CHO
AU - Sung Hoon JUNG
AU - Yeong Rak SEONG
AU - Ha Ryoung OH
PY - 2006
DO - 10.1093/ietisy/e89-d.3.1249
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E89-D
IS - 3
JA - IEICE TRANSACTIONS on Information
Y1 - March 2006
ER -