Abstract
A mathematical framework for the convergence analysis of the well-known Quickprop method is described. Furthermore, we propose a modification of this method that exhibits improved convergence speed and stability while alleviating the need for heuristically tuned learning parameters. Simulations are conducted to compare and evaluate the performance of the new modified Quickprop algorithm against various popular training algorithms. The results of the experiments indicate that the increased convergence rates achieved by the proposed algorithm do not compromise its generalization capability or stability.
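For context, the classic Quickprop rule approximates each weight's error curve by a parabola and takes a secant step toward the parabola's minimum, falling back to plain gradient descent where the step is undefined and capping step growth with a heuristic factor. The sketch below illustrates that baseline rule only, under stated assumptions: the function name is illustrative, lr and mu are the usual heuristic parameters the paper seeks to alleviate, and the globally convergent safeguards proposed here are not reproduced.

```python
import numpy as np

def quickprop_step(w, grad, prev_grad, prev_dw, lr=0.1, mu=1.75):
    """One iteration of the classic Quickprop weight update (a sketch,
    not the modified method of this paper)."""
    denom = prev_grad - grad
    # Secant step toward the minimum of the local parabola, where defined
    usable = (prev_dw != 0.0) & (denom != 0.0)
    secant = grad * prev_dw / np.where(usable, denom, 1.0)
    dw = np.where(usable, secant, -lr * grad)
    # Heuristic growth limit: a step may grow at most mu times the previous one
    cap = mu * np.abs(prev_dw)
    dw = np.where(usable & (np.abs(dw) > cap), np.sign(dw) * cap, dw)
    return w + dw, dw
```

In a training loop, the returned dw is passed back in as prev_dw on the next call; seeding the first call with prev_dw = np.zeros_like(w) makes the rule start from an ordinary gradient-descent step.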
Cite this article
Vrahatis, M.N., Magoulas, G.D. & Plagianakos, V.P. Globally Convergent Modification of the Quickprop Method. Neural Processing Letters 12, 159–170 (2000). https://doi.org/10.1023/A:1009661729970