Abstract
In this paper, we propose a new information-theoretic method called "information enhancement learning" to realize competitive learning and self-organizing maps, together with a computational method to detect the importance of input variables and to find the optimal input variables. Information enhancement learning uses three types of enhanced information: self-enhancement, collective enhancement, and local enhancement. With self-enhancement and collective enhancement, we can realize self-organizing maps, while local enhanced information is used to detect the importance of input units or input variables. The variance of local information is then used to determine the optimal values of the enhanced information. We applied the method to an artificial data set. On this problem, information enhancement learning produced self-organizing maps close to those produced by the conventional SOM, and the importance of input variables detected by local enhanced information corresponded to the importance obtained by directly computing errors.
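The paper's enhancement equations are not reproduced on this page, but the general idea the abstract describes, reading off the importance of each input variable from what a competitive network learns, can be illustrated with a minimal sketch. The code below is a hypothetical toy example, not the paper's method: it trains plain winner-take-all competitive learning on two clusters and uses a simple proxy for importance (how strongly the learned prototypes differ on each variable, relative to that variable's overall spread). All names and the importance proxy are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: variable 0 carries the cluster structure; variable 1 is noise.
X = np.vstack([
    rng.normal([-2.0, 0.0], 0.5, (50, 2)),
    rng.normal([ 2.0, 0.0], 0.5, (50, 2)),
])

# Plain winner-take-all competitive learning with two units.
# Initialize from one sample of each half to avoid dead units in this sketch.
W = X[[0, 50]].copy()
for _ in range(30):
    for x in X[rng.permutation(len(X))]:
        w = np.argmin(((W - x) ** 2).sum(axis=1))  # winning unit
        W[w] += 0.1 * (x - W[w])                   # move winner toward input

# Proxy importance: variance of the learned prototypes along each variable,
# normalized by that variable's variance in the data. A variable the network
# uses to separate the clusters scores high; a noise variable scores low.
importance = W.var(axis=0) / X.var(axis=0)
print(importance.argmax())  # → 0 (the cluster-bearing variable)
```

This reproduces, in miniature, the kind of agreement the abstract reports: an importance measure derived from the trained network singles out the same variable that a direct error-based comparison would.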
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Kamimura, R. (2009). Information Enhancement Learning: Local Enhanced Information to Detect the Importance of Input Variables in Competitive Learning. In: Palmer-Brown, D., Draganova, C., Pimenidis, E., Mouratidis, H. (eds) Engineering Applications of Neural Networks. EANN 2009. Communications in Computer and Information Science, vol 43. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-03969-0_9
DOI: https://doi.org/10.1007/978-3-642-03969-0_9
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-03968-3
Online ISBN: 978-3-642-03969-0