Abstract
Multilayer neural networks trained with supervised learning aim to minimize the error between the correct answers and the outputs produced by the network. The weights of the network are adjusted at each iteration, and after a sufficient number of epochs the adjusted weights yield outputs close to the correct answers. Besides the current error, errors accumulated over past iterations are also used to update the weights. This resembles the integral action in control theory, although in machine learning the technique is known as momentum. Control theory employs one further technique to achieve faster tracking: derivative action. In this research, we added the missing derivative action to the training algorithm and obtained promising results: training with derivative action achieved a 3.8-fold speedup compared to the momentum method.
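The control-theory analogy can be made concrete with a PID-style weight update: the proportional term is the current gradient (plain gradient descent), the integral term is a decaying accumulation of past gradients (classical momentum), and the derivative term reacts to the change between successive gradients. The Python sketch below illustrates this idea only; it is not the authors' implementation, and the gains k_p, k_i, k_d, the decay factor beta, and all function names are hypothetical.

    import numpy as np

    def pid_step(w, grad, state, k_p=0.1, k_i=0.05, k_d=0.02, beta=0.9):
        """One PID-style weight update (illustrative; all gains are hypothetical).

        Proportional: the current gradient, as in plain gradient descent.
        Integral:     a decaying accumulation of past gradients, i.e. momentum.
        Derivative:   the change between the current and the previous gradient.
        """
        state["integral"] = beta * state["integral"] + grad  # integral (momentum) term
        derivative = grad - state["prev_grad"]               # derivative term
        state["prev_grad"] = grad
        return w - (k_p * grad + k_i * state["integral"] + k_d * derivative)

    # Toy demo: minimize f(w) = ||w||^2 / 2, whose gradient is simply w.
    w = np.array([2.0, -3.0])
    state = {"integral": np.zeros_like(w), "prev_grad": np.zeros_like(w)}
    for _ in range(50):
        w = pid_step(w, w, state)
    print(w)  # converges toward the origin

With the small gains chosen here the toy system is stable; the derivative term damps the oscillations that a pure momentum update would introduce, which is the effect the paper attributes to derivative action.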
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Gürhanlı, A., Çevik, T., Çevik, N. (2019). Effect of Derivative Action on Back-Propagation Algorithms. In: Abraham, A., Gandhi, N., Pant, M. (eds) Innovations in Bio-Inspired Computing and Applications. IBICA 2018. Advances in Intelligent Systems and Computing, vol 939. Springer, Cham. https://doi.org/10.1007/978-3-030-16681-6_2
DOI: https://doi.org/10.1007/978-3-030-16681-6_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-16680-9
Online ISBN: 978-3-030-16681-6
eBook Packages: Intelligent Technologies and Robotics (R0)