Abstract
This paper proposes a voting strategy for knowledge integration and decision-making systems under information uncertainty. As ensemble learning methods have recently attracted growing attention from both academia and industry, it is critical to understand the fundamental problem of voting strategy for such learning methodologies. Motivated by the signal-to-noise ratio (SNR) concept, we propose a method that votes optimally according to the knowledge level of each hypothesis. A mathematical framework based on gradient analysis is used to find the optimal weights, and the resulting voting algorithm, BoostVote, is presented in detail. Simulations on synthetic and real-world data sets, with comparisons to existing voting rules, demonstrate the effectiveness of the proposed method.
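The abstract describes weighting each hypothesis's vote by its knowledge level. The paper's actual SNR-derived weights are not given in the abstract, so the sketch below illustrates the general idea with the standard AdaBoost-style weighting rule w = ½·ln((1−e)/e) as a stand-in: hypotheses with lower validation error receive larger voting weight. The function name and interface are illustrative assumptions, not the paper's BoostVote algorithm.

```python
import numpy as np

def weighted_vote(predictions, errors, classes=(0, 1)):
    """Generic accuracy-weighted majority vote (illustrative sketch).

    predictions: (n_hypotheses, n_samples) array of predicted labels.
    errors:      per-hypothesis error rates estimated on validation data.

    Each hypothesis gets weight 0.5 * ln((1 - e) / e), the familiar
    AdaBoost rule, so a more reliable hypothesis contributes more to
    the final decision -- a proxy for SNR-based knowledge-level weights.
    """
    predictions = np.asarray(predictions)
    # Clip error rates away from 0 and 1 to keep the log finite.
    errors = np.clip(np.asarray(errors, dtype=float), 1e-10, 1 - 1e-10)
    weights = 0.5 * np.log((1 - errors) / errors)

    # Accumulate the weighted vote mass for each candidate class.
    scores = np.zeros((len(classes), predictions.shape[1]))
    for i, c in enumerate(classes):
        scores[i] = ((predictions == c) * weights[:, None]).sum(axis=0)
    return np.array(classes)[scores.argmax(axis=0)]
```

With this rule, one strong hypothesis (error 0.1) can outvote two weak ones (errors 0.4 and 0.45), since its single weight exceeds their combined weights; a plain majority vote would decide the other way.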
© 2008 Springer-Verlag Berlin Heidelberg
He, H., Cao, Y., Wen, J., Cheng, S. (2008). A Boost Voting Strategy for Knowledge Integration and Decision Making. In: Sun, F., Zhang, J., Tan, Y., Cao, J., Yu, W. (eds) Advances in Neural Networks - ISNN 2008. ISNN 2008. Lecture Notes in Computer Science, vol 5263. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87732-5_53
Print ISBN: 978-3-540-87731-8
Online ISBN: 978-3-540-87732-5