{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,1,18]],"date-time":"2025-01-18T20:10:22Z","timestamp":1737231022754,"version":"3.33.0"},"reference-count":10,"publisher":"Wiley","issue":"6","license":[{"start":{"date-parts":[[2007,3,21]],"date-time":"2007-03-21T00:00:00Z","timestamp":1174435200000},"content-version":"vor","delay-in-days":4462,"URL":"http:\/\/onlinelibrary.wiley.com\/termsAndConditions#vor"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Systems & Computers in Japan"],"published-print":{"date-parts":[[1995,1]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Several studies have proposed accelerating learning by error back\u2010propagation through a parameter called the gain. In those studies, however, the acceleration effect is evaluated only numerically, and there is no theoretical analysis of the effect of the gain on the learning process.<\/jats:p><jats:p>This paper points out that those studies can also be realized by methods that do not introduce the gain, and presents a detailed analysis of the effect of the gain from a unified viewpoint. The following properties are revealed. The error back\u2010propagation method with a constant gain can be reduced to the ordinary error back\u2010propagation method without the gain. When a dynamic gain is introduced, however, the method can be reduced to neither the steepest descent method nor the momentum method without the gain. Furthermore, it is shown that there exists a characteristic superellipse that determines the behavior of the gain. By analyzing this characteristic superellipse, a theoretical basis is provided for the instability of the method introducing the dynamic gain.<\/jats:p><jats:p>This paper thus treats the methods with and without the gain, which have previously been considered independently, from a unified viewpoint. The effect of the gain on the learning process is analyzed, which should help in developing new learning methods.<\/jats:p>","DOI":"10.1002\/scj.4690260605","type":"journal-article","created":{"date-parts":[[2007,7,8]],"date-time":"2007-07-08T08:49:55Z","timestamp":1183884595000},"page":"49-58","source":"Crossref","is-referenced-by-count":0,"title":["Analysis of the error back\u2010propagation learning algorithms with gain"],"prefix":"10.1002","volume":"26","author":[{"given":"Qi","family":"Jia","sequence":"first","affiliation":[]},{"given":"Katsuyuki","family":"Hagiwara","sequence":"additional","affiliation":[]},{"given":"Shiro","family":"Usui","sequence":"additional","affiliation":[]},{"given":"Naohiro","family":"Toda","sequence":"additional","affiliation":[]}],"member":"311","published-online":{"date-parts":[[2007,3,21]]},"reference":[{"first-page":"318","volume-title":"Parallel Distributed Processing I","author":"Rumelhart D. E.","key":"e_1_2_1_2_2"},{"key":"e_1_2_1_3_2","doi-asserted-by":"publisher","DOI":"10.1007\/BF00332914"},{"key":"e_1_2_1_4_2","doi-asserted-by":"publisher","DOI":"10.1016\/0893-6080(88)90003-2"},{"key":"e_1_2_1_5_2","unstructured":"D. C.Plaut S. J.NowlanandG. E.Hinton.Experiments on learning by back\u2010propagation. Carnegie\u2010Mellon Univ. Comput. Sci. Dept. Tech. Rep. CMU\u2010CS\u201086\u2013126 (1986)."},{"key":"e_1_2_1_6_2","unstructured":"S. J.Nowlan.Gain variation in recurrent error propagation networks. Toronto Univ. Comp. Sci. Dept. Tech. Rep. CRG\u2010TR\u201088 (1988)."},{"key":"e_1_2_1_7_2","doi-asserted-by":"publisher","DOI":"10.1162\/neco.1990.2.2.226"},{"key":"e_1_2_1_8_2","doi-asserted-by":"publisher","DOI":"10.1109\/21.101159"},{"key":"e_1_2_1_9_2","doi-asserted-by":"crossref","unstructured":"J. M.Zurada.Lambda learning rule for feedforward neural networks. ICNN'93 San Francisco pp.1808\u20131811(1993).","DOI":"10.1109\/ICNN.1993.298831"},{"issue":"8","key":"e_1_2_1_10_2","first-page":"1179","article-title":"A study of initial value setting in back\u2010propagation learning algorithm in neural network","volume":"73","author":"Jia Q.","year":"1990","journal-title":"Trans. (D\u2010II) I.E.I.C.E., Japan"},{"key":"e_1_2_1_11_2","doi-asserted-by":"publisher","DOI":"10.1002\/scj.4690221009"}],"container-title":["Systems and Computers in Japan"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/api.wiley.com\/onlinelibrary\/tdm\/v1\/articles\/10.1002%2Fscj.4690260605","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1002\/scj.4690260605","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,1,18]],"date-time":"2025-01-18T19:45:20Z","timestamp":1737229520000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1002\/scj.4690260605"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[1995,1]]},"references-count":10,"journal-issue":{"issue":"6","published-print":{"date-parts":[[1995,1]]}},"alternative-id":["10.1002\/scj.4690260605"],"URL":"https:\/\/doi.org\/10.1002\/scj.4690260605","archive":["Portico"],"relation":{},"ISSN":["0882-1666","1520-684X"],"issn-type":[{"type":"print","value":"0882-1666"},{"type":"electronic","value":"1520-684X"}],"subject":[],"published":{"date-parts":[[1995,1]]}}}