
Improving Classification Performance by Combining Multiple TAN Classifiers

  • Conference paper
  • In: Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing (RSFDGrC 2003)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2639)


Abstract

Boosting is an effective classifier combination method that can improve the classification performance of an unstable learning algorithm, but it yields little improvement for a stable one. TAN, Tree-Augmented Naive Bayes, is a tree-like Bayesian network. The standard TAN learning algorithm produces a stable TAN classifier, whose accuracy is therefore difficult to improve by boosting. In this paper, a new TAN learning algorithm called GTAN and a TAN classifier combination method called Boosting-MultiTAN are presented. Experimental comparisons with the standard TAN classifier show that Boosting-MultiTAN achieves higher classification accuracy on most data sets.
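The GTAN algorithm and the exact Boosting-MultiTAN procedure are not reproduced on this page, but the combination scheme the abstract describes follows the usual boosting pattern: repeatedly reweight the training examples, fit a base classifier to the weighted sample, and combine the fitted classifiers by weighted vote. The sketch below illustrates that pattern in the style of AdaBoost.M1; the function names are illustrative, and the weighted discrete naive Bayes base learner is a hypothetical stand-in, since a full TAN/GTAN learner is beyond the scope of this sketch.

```python
import numpy as np

def train_nb(X, y, w, n_classes, n_values):
    # Weighted discrete naive Bayes: a hypothetical stand-in for the
    # paper's GTAN base learner (not reproduced here).
    n_feats = X.shape[1]
    prior = np.zeros(n_classes)
    cond = np.ones((n_feats, n_values, n_classes))  # Laplace smoothing
    for c in range(n_classes):
        prior[c] = w[y == c].sum()
        for j in range(n_feats):
            for v in range(n_values):
                cond[j, v, c] += w[(y == c) & (X[:, j] == v)].sum()
    cond /= cond.sum(axis=1, keepdims=True)   # P(x_j = v | class)
    prior /= prior.sum()

    def predict(Xq):
        logp = np.log(prior) + sum(np.log(cond[j, Xq[:, j], :])
                                   for j in range(n_feats))
        return np.argmax(logp, axis=1)
    return predict

def boost(X, y, n_classes, n_values, rounds=10):
    # AdaBoost.M1-style loop: reweight examples, refit, weighted vote.
    n = len(y)
    w = np.full(n, 1.0 / n)                 # uniform initial weights
    hyps, alphas = [], []
    for _ in range(rounds):
        h = train_nb(X, y, w, n_classes, n_values)
        miss = h(X) != y
        eps = w[miss].sum()                 # weighted training error
        if eps == 0 or eps >= 0.5:          # base learner too strong or too weak
            break
        beta = eps / (1.0 - eps)
        w *= np.where(miss, 1.0, beta)      # shrink weights of correct examples
        w /= w.sum()
        hyps.append(h)
        alphas.append(np.log(1.0 / beta))   # vote weight of this round

    def predict(Xq):
        votes = np.zeros((len(Xq), n_classes))
        for h, a in zip(hyps, alphas):
            votes[np.arange(len(Xq)), h(Xq)] += a
        return np.argmax(votes, axis=1)
    return predict

# Toy usage on random discrete data.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 5))
y = (X[:, 0] + X[:, 1] > 2).astype(int)
clf = boost(X, y, n_classes=2, n_values=3)
print("training accuracy:", (clf(X) == y).mean())
```

The key property the paper exploits is that boosting only helps when successive rounds produce *different* base classifiers; a deterministic TAN learner would yield near-identical trees each round, which is why the abstract introduces GTAN to generate varied TAN classifiers for the ensemble.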





Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Shi, H., Wang, Z., Huang, H. (2003). Improving Classification Performance by Combining Multiple TAN Classifiers. In: Wang, G., Liu, Q., Yao, Y., Skowron, A. (eds) Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing. RSFDGrC 2003. Lecture Notes in Computer Science (LNAI), vol 2639. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-39205-X_105


  • DOI: https://doi.org/10.1007/3-540-39205-X_105

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-14040-5

  • Online ISBN: 978-3-540-39205-7

  • eBook Packages: Springer Book Archive
