Abstract
Taking the probabilistic approach to neural networks within the framework of statistical pattern recognition, we approximate class-conditional probability distributions by finite mixtures of product components. The mixture components can be interpreted as probabilistic neurons in neurophysiological terms; in this respect, however, the fixed probabilistic description conflicts with the well-known short-term dynamic properties of biological neurons. We show that some parameters of probabilistic neural networks (PNNs) can be “released” for the sake of such dynamic processes without destroying statistically correct decision making. In particular, we can iteratively adapt the mixture component weights, or modify the input pattern, in order to facilitate correct recognition.
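The first of these two mechanisms, recurrent adaptation of the mixture component weights, can be illustrated with a minimal sketch. The Python fragment below assumes binary input patterns and Bernoulli product components; the function names, the fixed iteration count, and the parameter values are illustrative assumptions, not taken from the paper, and the update shown (replacing the weights by the component posteriors, in the spirit of iterated weighting) is a sketch of the idea rather than the authors' exact rule.

```python
import numpy as np

def component_likelihoods(x, mu):
    """Product (Bernoulli) components:
    F(x|m) = prod_n mu_mn^x_n * (1 - mu_mn)^(1 - x_n).

    x  : binary pattern, shape (N,)
    mu : component parameters, shape (M, N)
    """
    return np.prod(mu ** x * (1.0 - mu) ** (1 - x), axis=1)

def iterated_weighting(x, w0, mu, n_iter=10):
    """Recurrent adaptation of mixture weights for one input pattern.

    Each step replaces the weights by the normalized component
    posteriors q(m|x) ~ w_m * F(x|m), so the weights gradually
    concentrate on the components that match the pattern best.
    Hypothetical sketch, not the paper's exact update.
    """
    w = w0.copy()
    f = component_likelihoods(x, mu)      # F(x|m), fixed for a given x
    for _ in range(n_iter):
        q = w * f
        w = q / q.sum()                   # posterior becomes new weight vector
    return w

# Illustrative usage with made-up parameters (M=3 components, N=4 inputs).
rng = np.random.default_rng(0)
mu = rng.uniform(0.1, 0.9, size=(3, 4))  # Bernoulli parameters per component
w0 = np.full(3, 1.0 / 3.0)               # uniform initial weights
x = np.array([1, 0, 1, 1])
print(iterated_weighting(x, w0, mu))
```

Iterating this update drives the weight vector toward the components best matching the given pattern; the class decision can then be made with the adapted weights in the usual Bayes formula.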
Copyright information
© 2007 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Grim, J., Hora, J. (2007). Recurrent Bayesian Reasoning in Probabilistic Neural Networks. In: de Sá, J.M., Alexandre, L.A., Duch, W., Mandic, D. (eds) Artificial Neural Networks – ICANN 2007. ICANN 2007. Lecture Notes in Computer Science, vol 4668. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74690-4_14
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-74689-8
Online ISBN: 978-3-540-74690-4