
Smooth Boosting Using an Information-Based Criterion

  • Conference paper
Algorithmic Learning Theory (ALT 2006)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4264)


Abstract

Smooth boosting algorithms are variants of boosting methods that handle only smooth distributions over the data. They are provably noise-tolerant and can be used in the “boosting by filtering” scheme, which is suitable for learning from huge datasets. However, current smooth boosting algorithms leave room for improvement: among non-smooth boosting algorithms, Real AdaBoost and InfoBoost can perform more efficiently than typical boosting algorithms by using an information-based criterion for choosing hypotheses. In this paper, we propose a new smooth boosting algorithm with another information-based criterion, one based on the Gini index. We show that it inherits the advantages of both approaches, smooth boosting and information-based hypothesis selection.
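As a rough illustration of the two ingredients the abstract combines (a sketch of the general idea, not the algorithm proposed in the paper), the Python fragment below uses a MadaBoost-style weight truncation to keep the distribution smooth, meaning every weight is capped so that no single example can dominate the reweighted sample, and it selects weak hypotheses by an information-style gain. For a binary distribution with positive-label probability p, the Gini index is 2p(1-p); here the squared weighted edge serves as a simple stand-in criterion. All names (gini_gain, best_stump, smooth_boost) and the exact update rule are illustrative assumptions.

```python
import numpy as np

def gini_gain(D, y, pred):
    """Score a {-1,+1}-valued hypothesis under distribution D.
    The squared weighted edge is a simple stand-in for a
    Gini-index-based gain (larger = more informative)."""
    edge = np.sum(D * y * pred)        # weighted correlation with labels
    return edge ** 2

def best_stump(X, y, D):
    """Exhaustively pick the threshold stump maximizing the gain under D."""
    best_g, best_params = -1.0, None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] <= thr, 1, -1)
                g = gini_gain(D, y, pred)
                if g > best_g:
                    best_g, best_params = g, (j, thr, sign)
    j, thr, sign = best_params
    return lambda Xq: sign * np.where(Xq[:, j] <= thr, 1, -1)

def smooth_boost(X, y, rounds=50):
    """Boosting with MadaBoost-style truncated exponential weights:
    each weight is capped at 1, so the induced distribution stays
    smooth instead of concentrating on a few (possibly noisy) examples."""
    m = len(y)
    margins = np.zeros(m)              # cumulative margins y_i * F(x_i)
    ensemble = []                      # list of (alpha, hypothesis) pairs
    for _ in range(rounds):
        w = np.minimum(1.0, np.exp(-margins))   # truncation = smoothness
        D = w / w.sum()
        h = best_stump(X, y, D)
        pred = h(X)
        edge = float(np.clip(np.sum(D * y * pred), -0.999999, 0.999999))
        if abs(edge) < 1e-9:           # no remaining advantage over guessing
            break
        alpha = 0.5 * np.log((1 + edge) / (1 - edge))
        margins += alpha * y * pred
        ensemble.append((alpha, h))
    return lambda Xq: np.sign(sum(a * h(Xq) for a, h in ensemble))
```

A toy run under the same assumptions:

```python
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
f = smooth_boost(X, y, rounds=20)
print("training accuracy:", (f(X) == y).mean())
```

Capping the weights at 1 is what keeps the distribution “smooth”: unlike plain AdaBoost, whose exponential weights can grow arbitrarily large on hard or noisy examples, the truncated weights prevent any single example from dominating, which is the kind of property that the noise-tolerance and boosting-by-filtering results rely on.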




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hatano, K. (2006). Smooth Boosting Using an Information-Based Criterion. In: Balcázar, J.L., Long, P.M., Stephan, F. (eds.) Algorithmic Learning Theory. ALT 2006. Lecture Notes in Computer Science, vol. 4264. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11894841_25


  • DOI: https://doi.org/10.1007/11894841_25

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-46649-9

  • Online ISBN: 978-3-540-46650-5

  • eBook Packages: Computer Science, Computer Science (R0)
