Abstract
Most methods for combining classifiers rely on the assumption that the experts to be combined make uncorrelated errors. Unfortunately, this assumption is difficult to satisfy in practice, and its violation degrades the performance attainable by any combination strategy. We address this problem by explicitly modeling the dependencies among the experts, estimating the joint probability distribution of the classifier outputs and the true class. In this paper we propose a new weighted majority vote rule that uses the joint probabilities of each class as weights for combining the classifier outputs. A Bayesian network automatically infers the joint probability distribution for each class. The final decision takes into account both the votes received by each class and the statistical behavior of the classifiers. Experimental results confirm the effectiveness of the proposed method.
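The weighted vote described above can be illustrated with a minimal sketch. This is a generic weighted majority vote, not the paper's exact rule: here the per-classifier, per-class weights are a hypothetical lookup table, whereas in the paper they are derived from class-conditional joint probabilities inferred by a Bayesian network.

```python
from collections import defaultdict

def weighted_majority_vote(votes, weights):
    """Combine classifier outputs by a weighted majority vote.

    votes   : list of class labels, one per classifier
    weights : weights[j][c] is the weight given to classifier j when
              it votes for class c (a placeholder reliability table;
              the paper obtains these from joint probabilities
              estimated by a Bayesian network)
    """
    scores = defaultdict(float)
    for j, c in enumerate(votes):
        scores[c] += weights[j].get(c, 0.0)
    # final decision: the class with the highest weighted support
    return max(scores, key=scores.get)

# hypothetical example: three classifiers, two classes
votes = ["A", "B", "B"]
weights = [
    {"A": 0.9, "B": 0.5},  # classifier 0 is highly reliable on class A
    {"A": 0.6, "B": 0.4},
    {"A": 0.6, "B": 0.4},
]
print(weighted_majority_vote(votes, weights))  # -> "A" (0.9 vs 0.8)
```

Note that class A wins despite receiving fewer raw votes, because the statistical behavior of the classifiers (encoded in the weights) outweighs the simple vote count, which is the intuition behind the proposed rule.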
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Cordella, L.P., De Stefano, C., Fontanella, F., Scotto di Freca, A. (2013). A Weighted Majority Vote Strategy Using Bayesian Networks. In: Petrosino, A. (eds) Image Analysis and Processing – ICIAP 2013. ICIAP 2013. Lecture Notes in Computer Science, vol 8157. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-41184-7_23
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-41183-0
Online ISBN: 978-3-642-41184-7