{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,8,7]],"date-time":"2024-08-07T01:12:20Z","timestamp":1722993140419},"reference-count":12,"publisher":"Fuji Technology Press Ltd.","issue":"5","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["JACIII","J. Adv. Comput. Intell. Intell. Inform."],"published-print":{"date-parts":[[2017,9,20]]},"abstract":"This study discusses important factors for zero communication, multi-agent cooperation by comparing different modified reinforcement learning methods. The two learning methods used for comparison were assigned different goal selections for multi-agent cooperation tasks. The first method is called Profit Minimizing Reinforcement Learning (PMRL); it forces agents to learn how to reach the farthest goal, and then the agent closest to the goal is directed to the goal. The second method is called Yielding Action Reinforcement Learning (YARL); it forces agents to learn through a Q-learning process, and if the agents have a conflict, the agent that is closest to the goal learns to reach the next closest goal. To compare the two methods, we designed experiments by adjusting the following maze factors: (1) the location of the start point and goal; (2) the number of agents; and (3) the size of maze. The intensive simulations performed on the maze problem for the agent cooperation task revealed that the two methods successfully enabled the agents to exhibit cooperative behavior, even if the size of the maze and the number of agents change. The PMRL mechanism always enables the agents to learn cooperative behavior, whereas the YARL mechanism makes the agents learn cooperative behavior over a small number of learning iterations. In zero communication, multi-agent cooperation, it is important that only agents that have a conflict cooperate with each other.<\/jats:p>","DOI":"10.20965\/jaciii.2017.p0917","type":"journal-article","created":{"date-parts":[[2017,9,20]],"date-time":"2017-09-20T21:09:00Z","timestamp":1505941740000},"page":"917-929","source":"Crossref","is-referenced-by-count":3,"title":["Comparison Between Reinforcement Learning Methods with Different Goal Selections in Multi-Agent Cooperation"],"prefix":"10.20965","volume":"21","author":[{"given":"Fumito","family":"Uwano","sequence":"first","affiliation":[]},{"name":"The University of Electro-Communications 1-5-1 Chofugaoka, Chofu-shi, Tokyo, Japan","sequence":"first","affiliation":[]},{"given":"Keiki","family":"Takadama","sequence":"additional","affiliation":[]}],"member":"8550","published-online":{"date-parts":[[2017,9,20]]},"reference":[{"key":"key-10.20965\/jaciii.2017.p0917-1","unstructured":"K.-H. Park, Y.-J. Kim, and J.-H. Kim, \u201cModular Q-learning Based Multi-Agent Cooperation for Robot Soccer,\u201d Robotics and Autonomous System, pp. 3026-3033, 2015."},{"key":"key-10.20965\/jaciii.2017.p0917-2","doi-asserted-by":"crossref","unstructured":"M. Camara, O. Bonham-Carter, and J. Jumadinova, \u201cA Multi-agent System with Reinforcement Learning Agents for Biomedical Text Mining,\u201d Proc. of the 6th ACM Conf. on Bioinformatics, Computational Biology and Health Informatics, BCB\u201915, pp. 634-643, NY, USA, ACM, 2015.","DOI":"10.1145\/2808719.2812596"},{"key":"key-10.20965\/jaciii.2017.p0917-3","unstructured":"H. Iima and Y. Kuroe, \u201cSwarm Reinforcement Learning Methods Improving Certainty of Learning for a Multi-Robot Formation Problem,\u201d CEC, pp. 
3026-3033, May 2015."},{"key":"key-10.20965\/jaciii.2017.p0917-4","doi-asserted-by":"crossref","unstructured":"Y. Ichikawa and K. Takadama, \u201cDesigning Internal Reward of Reinforcement Learning Agents in Multi-step Dilemma Problem,\u201d J. Adv. Comput. Intell. Intell. Inform. (JACIII), Vol.17, No.6, pp. 926-931, 2013.","DOI":"10.20965\/jaciii.2013.p0926"},{"key":"key-10.20965\/jaciii.2017.p0917-5","unstructured":"M. Elidrisi, N. Johnson, M. Gini, and J. Crandall, \u201cFast Adaptive Learning in Repeated Stochastic Games by Game Abstraction,\u201d AAMAS, pp. 1141-1148, May 2014."},{"key":"key-10.20965\/jaciii.2017.p0917-6","unstructured":"K. J. Prabuchandran, A. N. H. Kumar, and S. Bhatnagar, \u201cMultiagent reinforcement learning for traffic signal control,\u201d In Intelligent Transportation Systems (ITSC), 2014 IEEE 17th Int. Conf. on, pp. 2529-2534, Oct 2014."},{"key":"key-10.20965\/jaciii.2017.p0917-7","doi-asserted-by":"crossref","unstructured":"M. Tan, \u201cMulti-Agent Reinforcement Learning: Independent vs. Cooperative Agents,\u201d Proc. of the 10th Int. Conf. on Machine Learning, pp. 330-337, Morgan Kaufmann, 1993.","DOI":"10.1016\/B978-1-55860-307-3.50049-6"},{"key":"key-10.20965\/jaciii.2017.p0917-8","unstructured":"K. V. Karl Tuyls and T. Lenaerts, \u201cA Selection-Mutation Model for Q-learning in Multi-Agent Systems,\u201d Robotics and Autonomous System, pp. 3026-3033, May 2015."},{"key":"key-10.20965\/jaciii.2017.p0917-9","doi-asserted-by":"crossref","unstructured":"E. M. d. Cote, A. Lazaric, and M. Restelli, \u201cLearning to Cooperate in Multi-Agent Social Dilemmas,\u201d AAMAS, pp. 783-785, May 2006.","DOI":"10.1145\/1160633.1160770"},{"key":"key-10.20965\/jaciii.2017.p0917-10","doi-asserted-by":"crossref","unstructured":"F. Uwano and K. Takadama, \u201cCommunication-Less Cooperative Q-Learning Agents in Maze Problem,\u201d pp. 453-467, Springer International Publishing, Cham, 2017.","DOI":"10.1007\/978-3-319-49049-6_33"},{"key":"key-10.20965\/jaciii.2017.p0917-11","unstructured":"R. S. Sutton and A. G. Barto, \u201cIntroduction to Reinforcement Learning,\u201d MIT Press, Cambridge, MA, USA, 1st edition, 1998."},{"key":"key-10.20965\/jaciii.2017.p0917-12","unstructured":"C. J. Watkins, \u201cLearning from Delayed Rewards,\u201d Ph.D. thesis, King\u2019s College, 1989."}],"container-title":["Journal of Advanced Computational Intelligence and Intelligent Informatics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.fujipress.jp\/main\/wp-content\/themes\/Fujipress\/phyosetsu.php?ppno=JACII002100050017","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2019,10,3]],"date-time":"2019-10-03T15:38:18Z","timestamp":1570117098000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.fujipress.jp\/jaciii\/jc\/jacii002100050917"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2017,9,20]]},"references-count":12,"journal-issue":{"issue":"5","published-online":{"date-parts":[[2017,9,20]]},"published-print":{"date-parts":[[2017,9,20]]}},"URL":"https:\/\/doi.org\/10.20965\/jaciii.2017.p0917","relation":{},"ISSN":["1883-8014","1343-0130"],"issn-type":[{"type":"electronic","value":"1883-8014"},{"type":"print","value":"1343-0130"}],"subject":[],"published":{"date-parts":[[2017,9,20]]}}}
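
Both PMRL and YARL, as described in the abstract above, are modifications of standard tabular Q-learning (Watkins [12]; Sutton and Barto [11]). For orientation, the following minimal sketch shows only that baseline update rule, not the goal-selection mechanisms of PMRL or YARL; the one-dimensional corridor environment, the reward of 1.0, and the parameter values are assumptions chosen purely for illustration.

# Minimal tabular Q-learning sketch (the base algorithm of Watkins [12],
# Sutton and Barto [11]) that PMRL and YARL build on. The goal-selection
# mechanisms of the article are NOT implemented here; the corridor
# environment, reward, and parameters are illustrative assumptions.
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate
N_STATES, GOAL = 8, 7                    # corridor of 8 cells; the goal is the rightmost cell
ACTIONS = (-1, +1)                       # move left / move right

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move within the corridor; reward 1.0 only when the goal is reached."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def choose_action(state):
    """Epsilon-greedy selection with random tie-breaking."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(200):
    s = 0
    for _ in range(1000):                # step cap keeps each episode finite
        a = choose_action(s)
        s2, r, done = step(s, a)
        # Q-learning update: Q(s,a) <- Q(s,a) + alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2
        if done:
            break

On top of this baseline, PMRL and YARL would additionally change which goal a conflicting agent is trained to reach (the farthest goal for PMRL, the next closest goal for the yielding agent in YARL); that goal-selection layer is precisely the part this sketch deliberately omits.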