Abstract
While the Resilient Backpropagation (RPROP) method can converge to a solution extremely quickly, it suffers from the local minima problem. In this paper, a fast and reliable learning algorithm for multi-layer artificial neural networks is proposed. The learning model has two phases: an RPROP phase and a gradient ascent phase. Alternating between the two phases helps the network escape local minima. The proposed algorithm is tested on several benchmark problems. On all of these problems, the method is shown to escape local minima and to converge faster than backpropagation with momentum and simulated annealing techniques.
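The abstract describes alternating a standard RPROP (descent) phase with a short gradient ascent phase whenever training appears stuck. The following is a minimal sketch of that two-phase idea, not the authors' implementation: the stall criterion, ascent step size and step count, the iRPROP-style handling of sign flips, and the toy one-dimensional objective are all illustrative assumptions standing in for a multi-layer network.

```python
import numpy as np

def rprop_with_ascent(grad_fn, loss_fn, w,
                      eta_plus=1.2, eta_minus=0.5,
                      delta_init=0.1, delta_min=1e-6, delta_max=50.0,
                      ascent_steps=5, ascent_lr=0.05,
                      stall_patience=20, epochs=500):
    """Alternate an RPROP descent phase with short gradient-ascent bursts.

    The stall criterion and the ascent hyper-parameters are assumptions
    made for this sketch; the abstract does not specify them.
    """
    delta = np.full_like(w, delta_init)   # per-weight step sizes
    prev_grad = np.zeros_like(w)
    best_loss, stall = np.inf, 0

    for _ in range(epochs):
        g = grad_fn(w)
        sign_change = prev_grad * g
        # Grow steps where the gradient sign is stable, shrink where it flipped.
        delta = np.where(sign_change > 0, np.minimum(delta * eta_plus, delta_max), delta)
        delta = np.where(sign_change < 0, np.maximum(delta * eta_minus, delta_min), delta)
        g = np.where(sign_change < 0, 0.0, g)      # skip the update after a sign flip
        w = w - np.sign(g) * delta                 # RPROP phase: sign-based descent
        prev_grad = g

        loss = loss_fn(w)
        if loss < best_loss - 1e-9:
            best_loss, stall = loss, 0
        else:
            stall += 1
        if stall >= stall_patience:
            # Gradient ascent phase: briefly climb the error surface to get
            # out of the suspected local minimum, then resume RPROP.
            for _ in range(ascent_steps):
                w = w + ascent_lr * grad_fn(w)
            prev_grad = np.zeros_like(w)
            stall = 0
    return w, best_loss


# Toy multimodal objective: starting near the shallow minimum around w = 1.13,
# the ascent phase can push the search toward the deeper minimum near w = -1.30.
loss_fn = lambda w: float(np.sum(w**4 - 3 * w**2 + w))
grad_fn = lambda w: 4 * w**3 - 6 * w + 1
w, best = rprop_with_ascent(grad_fn, loss_fn, np.array([1.5]))
print(w, best)
```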
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Wang, X., Wang, H., Dai, G., Tang, Z. (2006). A Reliable Resilient Backpropagation Method with Gradient Ascent. In: Huang, DS., Li, K., Irwin, G.W. (eds) Computational Intelligence. ICIC 2006. Lecture Notes in Computer Science, vol 4114. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-37275-2_31
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-37274-5
Online ISBN: 978-3-540-37275-2