
A Boost Voting Strategy for Knowledge Integration and Decision Making

  • Conference paper
Advances in Neural Networks - ISNN 2008 (ISNN 2008)

Part of the book series: Lecture Notes in Computer Science ((LNTCS,volume 5263))


Abstract

This paper proposes a voting strategy for knowledge integration and decision making systems under information uncertainty. As ensemble learning methods have recently attracted growing attention from both academia and industry, it is critical to understand the fundamental problem of how votes should be combined in such learning methodologies. Motivated by the signal-to-noise ratio (SNR) concept, we propose a method that votes optimally according to the knowledge level of each hypothesis. A mathematical framework based on gradient analysis is used to find the optimal weights, and a voting algorithm, BoostVote, is presented in detail. Simulation analyses on synthetic data and real-world data sets, with comparison to existing voting rules, demonstrate the effectiveness of this method.
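The full BoostVote derivation is in the paper itself, but the core idea the abstract describes, weighting each hypothesis's vote by how much "signal" (accuracy) it carries relative to its "noise" (error), can be sketched generically. The sketch below is an illustration under assumptions, not the paper's algorithm: the function names are invented, and the log-odds weight formula is a common choice for accuracy-weighted voting (cf. AdaBoost's alpha), not necessarily the one derived in the paper.

```python
import numpy as np

def snr_weights(accuracies):
    # Map each hypothesis's accuracy p to a weight that grows with its
    # signal (p) relative to its noise (1 - p). The log-odds form
    # 0.5 * ln(p / (1 - p)) is a standard choice for weighted voting;
    # it is an assumption here, not the paper's derived formula.
    p = np.clip(np.asarray(accuracies, dtype=float), 1e-6, 1 - 1e-6)
    return 0.5 * np.log(p / (1.0 - p))

def weighted_vote(predictions, weights, n_classes):
    # predictions: (n_hypotheses, n_samples) array of predicted labels.
    # Each hypothesis adds its weight to the score of the class it voted
    # for; the class with the highest total score wins per sample.
    scores = np.zeros((n_classes, predictions.shape[1]))
    for pred, w in zip(predictions, weights):
        for c in range(n_classes):
            scores[c] += w * (pred == c)
    return scores.argmax(axis=0)

# Three hypotheses with different (hypothetical) knowledge levels.
preds = np.array([
    [0, 1, 1, 0],   # hypothesis 1, accuracy 0.90 -> largest weight
    [0, 0, 1, 1],   # hypothesis 2, accuracy 0.60
    [1, 1, 0, 0],   # hypothesis 3, accuracy 0.55 -> smallest weight
])
w = snr_weights([0.90, 0.60, 0.55])
print(weighted_vote(preds, w, n_classes=2))  # -> [0 1 1 0]
```

The accurate hypothesis dominates: its weight (about 1.10) outvotes the other two combined (about 0.30), so the ensemble follows hypothesis 1 on every sample where the weaker two disagree with it.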




Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

He, H., Cao, Y., Wen, J., Cheng, S. (2008). A Boost Voting Strategy for Knowledge Integration and Decision Making. In: Sun, F., Zhang, J., Tan, Y., Cao, J., Yu, W. (eds) Advances in Neural Networks - ISNN 2008. ISNN 2008. Lecture Notes in Computer Science, vol 5263. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87732-5_53


  • DOI: https://doi.org/10.1007/978-3-540-87732-5_53

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-87731-8

  • Online ISBN: 978-3-540-87732-5

