Abstract
Neural network training requires a large number of learning epochs, and an appropriate learning rate is critical to overall training performance. Under a weight-update algorithm, a learning rate that is too low makes the network learn slowly, while one that is too high causes the weights and the error function to diverge. To optimize the model parameters, this paper presents a theoretical and empirical analysis of the learning rate in neural network modeling, applied to stock price prediction, and suggests an increasing-learning-rate approach for practice. The effect of the momentum factor on speeding up convergence during network training is also investigated.
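The weight-update rule the abstract refers to can be illustrated with a minimal sketch of gradient descent with a momentum term. This is a generic illustration, not the authors' specific training procedure; the quadratic test function and all parameter values below are assumptions chosen for demonstration.

```python
import numpy as np

def train(grad_fn, w0, lr=0.1, momentum=0.9, epochs=200):
    """Gradient descent with momentum:
    v <- momentum * v - lr * grad(w);  w <- w + v.
    A low lr slows learning; a too-high lr makes w diverge;
    momentum accumulates past updates to speed convergence.
    """
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)  # velocity (accumulated update)
    for _ in range(epochs):
        v = momentum * v - lr * grad_fn(w)
        w = w + v
    return w

# Toy example: minimize f(w) = ||w||^2 / 2, whose gradient is w itself.
w_star = train(lambda w: w, w0=[1.0, -2.0], lr=0.1, momentum=0.9)
```

For this quadratic, the iterate spirals toward the minimum at the origin; raising `lr` far enough pushes the update map's eigenvalues outside the unit circle and the weights diverge, which is the instability the abstract warns about.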
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
Cite this paper
Ke, J., Liu, X., Wang, G. (2008). Theoretical and Empirical Analysis of the Learning Rate and Momentum Factor in Neural Network Modeling for Stock Prediction. In: Kang, L., Cai, Z., Yan, X., Liu, Y. (eds) Advances in Computation and Intelligence. ISICA 2008. Lecture Notes in Computer Science, vol 5370. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-92137-0_76
DOI: https://doi.org/10.1007/978-3-540-92137-0_76
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-92136-3
Online ISBN: 978-3-540-92137-0
eBook Packages: Computer Science (R0)