Abstract
Extreme learning machines are single-hidden-layer feedforward neural networks in which training is restricted to the output weights in order to achieve fast learning with good performance. The success of learning strongly depends on the random initialization of the hidden-layer parameters. To overcome the problem of unsuitable initialization ranges, a novel and efficient pretraining method that adapts extreme learning machines to the task at hand is presented. The pretraining drives the hidden neurons toward desired output distributions, which leads to better performance and less dependence on the size of the hidden layer.
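To make the scheme concrete, the following is a minimal Python sketch of an extreme learning machine with a distribution-targeting pretraining step: the random input weights stay fixed, each hidden neuron's activation slope and bias are fitted so that its outputs roughly follow a chosen target distribution, and only the output weights are solved for in closed form. The sigmoid activation, the exponential target distribution with mean mu, the ridge parameter, and all names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def fit_elm_with_pretraining(X, Y, n_hidden=100, mu=0.2, ridge=1e-6, seed=None):
    """Hedged sketch: ELM readout learning plus a per-neuron pretraining
    step that shapes hidden outputs toward an assumed exponential
    distribution. Not a reference implementation of the paper's method."""
    rng = np.random.default_rng(seed)
    N, n_in = X.shape

    # Random input weights, fixed after initialization (standard ELM).
    W_in = rng.uniform(-1.0, 1.0, size=(n_in, n_hidden))
    S = X @ W_in                       # synaptic input to each hidden neuron

    # Pretraining: per neuron, fit slope a and bias b so that the sorted
    # sigmoid outputs approximately match sorted samples drawn from the
    # desired distribution (empirical quantile matching).
    a = np.ones(n_hidden)
    b = np.zeros(n_hidden)
    for j in range(n_hidden):
        t = rng.exponential(mu, size=N)
        t = np.clip(t, 1e-3, 1.0 - 1e-3)      # keep targets inside (0, 1)
        t.sort()
        s = np.sort(S[:, j])
        # Solve a*s + b = logit(t) in the least-squares sense.
        A = np.column_stack([s, np.ones(N)])
        sol, *_ = np.linalg.lstsq(A, np.log(t / (1.0 - t)), rcond=None)
        a[j], b[j] = sol

    H = 1.0 / (1.0 + np.exp(-(S * a + b)))    # pretrained hidden activations

    # Only the output weights are learned, via ridge-regularized least squares.
    W_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
    return W_in, a, b, W_out
```

At test time the same fixed transformation is applied before the learned readout, e.g. `Yhat = 1/(1 + np.exp(-(X_test @ W_in * a + b))) @ W_out`. The exponential target here is one plausible choice (it encourages sparse hidden activations); any other target distribution could be plugged in the same way.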
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Neumann, K., Steil, J.J. (2011). Batch Intrinsic Plasticity for Extreme Learning Machines. In: Honkela, T., Duch, W., Girolami, M., Kaski, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2011. ICANN 2011. Lecture Notes in Computer Science, vol 6791. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21735-7_42
DOI: https://doi.org/10.1007/978-3-642-21735-7_42
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-21734-0
Online ISBN: 978-3-642-21735-7