Abstract
Evolutionary ensembles with negative correlation learning (EENCL), which combines negative correlation learning [1] with evolutionary learning, was proposed for learning and designing neural network ensembles [2]. The idea of EENCL is to regard the population of neural networks as an ensemble, and the evolutionary process as the design of the ensemble. EENCL used fitness sharing based on the covering set, which does not measure similarity within the population accurately. In this paper, a fitness sharing scheme based on mutual information is introduced into EENCL to evolve a diverse and cooperative population. The effectiveness of this evolutionary learning approach was tested on two real-world problems. The paper also analyzes negative correlation learning in terms of mutual information on a regression task under different noise conditions.
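The abstract states the idea but gives no formulas. The following is a minimal sketch of mutual-information-based fitness sharing, assuming the Gaussian estimate of pairwise mutual information between member outputs, I(F_i, F_j) = -1/2 ln(1 - rho_ij^2), and a hypothetical sharing rule that divides a member's raw fitness by its total mutual information with the rest of the population. The function names, the sharing rule, and all parameter values are illustrative, not taken from the paper.

import numpy as np

def pairwise_mutual_information(outputs):
    """Estimate mutual information between each pair of ensemble
    members' outputs under a Gaussian assumption:
        I(F_i, F_j) = -0.5 * ln(1 - rho_ij^2),
    where rho_ij is the correlation coefficient of the outputs.
    `outputs` has shape (n_networks, n_samples)."""
    rho = np.corrcoef(outputs)
    # Clip to avoid log(0) when two members are (nearly) identical.
    rho2 = np.clip(rho ** 2, 0.0, 1.0 - 1e-12)
    mi = -0.5 * np.log(1.0 - rho2)
    np.fill_diagonal(mi, 0.0)  # ignore self-information
    return mi

def shared_fitness(errors, mi, beta=1.0):
    """Hypothetical sharing rule (not the paper's exact formula):
    penalize a member's raw fitness (inverse error) by its total
    mutual information with the rest of the population, so that
    diverse members are favoured."""
    raw = 1.0 / (1.0 + errors)           # raw fitness from error
    niche = 1.0 + beta * mi.sum(axis=1)  # MI-based niche count
    return raw / niche

# Toy usage: four "networks" predicting 100 regression targets.
rng = np.random.default_rng(0)
targets = rng.normal(size=100)
outputs = targets + 0.3 * rng.normal(size=(4, 100))
outputs[1] = outputs[0] + 1e-3 * rng.normal(size=100)  # near-duplicate member
errors = ((outputs - targets) ** 2).mean(axis=1)
mi = pairwise_mutual_information(outputs)
print(shared_fitness(errors, mi))  # the near-duplicate pair gets depressed fitness

In this sketch, the near-duplicate member shares high mutual information with its twin, so both receive reduced shared fitness even though their raw errors are small; this is the intended effect of replacing covering-set sharing with a similarity measure that is computed directly from member outputs.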
References
Y. Liu and X. Yao. Simultaneous training of negatively correlated neural networks in an ensemble. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 29(6):716–725, 1999.
Y. Liu, X. Yao, and T. Higuchi. Evolutionary ensembles with negative correlation learning. IEEE Transactions on Evolutionary Computation, 4(4):380–387, 2000.
Y. Liu and X. Yao. Towards designing neural network ensembles by evolution. In Parallel Problem Solving from Nature—PPSN V: Proceedings of the Fifth International Conference on Parallel Problem Solving from Nature, volume 1498 of Lecture Notes in Computer Science, pages 623–632. Springer-Verlag, Berlin, 1998.
D. E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA, 1989.
J. C. A. van der Lubbe. Information Theory. Prentice-Hall International, Inc., 2nd edition, 1999.
R. T. Clemen and R. L. Winkler. Limits for the precision and value of information from dependent sources. Operations Research, 33:427–442, 1985.
D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. I, pages 318–362. MIT Press, Cambridge, MA, 1986.
R. A. Jacobs. Bias/variance analyses of mixture-of-experts architectures. Neural Computation, 9:369–383, 1997.
D. B. Fogel. Evolutionary Computation: Towards a New Philosophy of Machine Intelligence. IEEE Press, New York, NY, 1995.
D. Michie, D. J. Spiegelhalter, and C. C. Taylor. Machine Learning, Neural and Statistical Classification. Ellis Horwood Limited, London, 1994.
M. Stone. Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society, Series B, 36:111–147, 1974.
Copyright information
© 2002 Springer-Verlag Berlin Heidelberg
Cite this paper
Liu, Y., Yao, X. (2002). Learning and Evolution by Minimization of Mutual Information. In: Guervós, J.J.M., Adamidis, P., Beyer, HG., Schwefel, HP., Fernández-Villacañas, JL. (eds) Parallel Problem Solving from Nature — PPSN VII. PPSN 2002. Lecture Notes in Computer Science, vol 2439. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45712-7_48
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-44139-7
Online ISBN: 978-3-540-45712-1