
Can AdaBoost.M1 Learn Incrementally? A Comparison to Learn++ Under Different Combination Rules

  • Conference paper
Artificial Neural Networks – ICANN 2006 (ICANN 2006)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4131)


Abstract

We previously introduced Learn++, inspired in part by the ensemble-based AdaBoost algorithm, for incrementally learning from new data, including new concept classes, without forgetting what has been learned previously. In this effort, we compare the incremental learning performance of Learn++ and AdaBoost under several combination schemes, including their native weighted majority voting. We show on several databases that changing AdaBoost's distribution update rule from a hypothesis-based update to an ensemble-based update yields significantly more efficient incremental learning, regardless of the rule used to combine the classifiers.
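The distinction the abstract draws can be sketched in a few lines. This is a minimal illustration rather than the authors' implementation: the toy labels, the predictions, and the beta value are hypothetical, and only the multiplicative weight update itself follows the standard AdaBoost form. The two update rules differ solely in *whose* predictions drive the update: the latest weak hypothesis h_t (AdaBoost.M1) or the composite ensemble hypothesis H_t (Learn++).

```python
import numpy as np

y = np.array([0, 0, 1, 1, 1])          # toy true labels
D = np.full(len(y), 1.0 / len(y))      # uniform initial distribution

def update_distribution(D, y, pred, beta):
    """Scale the weight of every correctly classified example by
    beta (< 1), then renormalize -- the AdaBoost-style update."""
    D = D * np.where(pred == y, beta, 1.0)
    return D / D.sum()

# Hypothesis-based update (AdaBoost.M1): pred is the output of the
# single most recent weak hypothesis h_t.
h_t_pred = np.array([0, 1, 1, 1, 0])   # h_t misses examples 1 and 4
D_hyp = update_distribution(D, y, h_t_pred, beta=0.5)

# Ensemble-based update (Learn++): pred is the output of the composite
# hypothesis H_t (the weighted majority vote of h_1..h_t), so weight
# concentrates on instances the *ensemble* still gets wrong -- e.g.
# instances of a newly introduced class -- rather than on instances
# only the latest weak learner missed.
H_t_pred = np.array([0, 0, 1, 1, 0])   # the ensemble misses only example 4
D_ens = update_distribution(D, y, H_t_pred, beta=0.5)
```

Under the ensemble-based rule, example 4 (the ensemble's only error) ends up carrying the largest share of the distribution, which is what steers subsequent classifiers toward genuinely novel data.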





Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Mohammed, H.S., Leander, J., Marbach, M., Polikar, R. (2006). Can AdaBoost.M1 Learn Incrementally? A Comparison to Learn++ Under Different Combination Rules. In: Kollias, S.D., Stafylopatis, A., Duch, W., Oja, E. (eds) Artificial Neural Networks – ICANN 2006. ICANN 2006. Lecture Notes in Computer Science, vol 4131. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11840817_27


  • DOI: https://doi.org/10.1007/11840817_27

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-38625-4

  • Online ISBN: 978-3-540-38627-8

  • eBook Packages: Computer Science, Computer Science (R0)
