
An Error-Based Measure for Concept Drift Detection and Characterization

  • Conference paper
Learning and Intelligent Optimization (LION 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14286)


Abstract

Continual learning is an increasingly studied field that aims to mitigate catastrophic forgetting in online machine learning tasks. In this article, we propose a prediction error measure for continual learning that detects concept drift induced by incoming data before the learning step. In addition, we assess this measure's ability to characterize the drift. To this end, we propose an algorithm that computes the measure on a data stream while also estimating concept drift, and we then compute correlation coefficients between this estimate and our measurement using time series analysis. To validate our proposal, we run experiments on simulated streams of metadata collected from an industrial dataset of real conversation data. The results show that the proposed measure constitutes a reliable criterion for concept drift detection. They also show that the measure makes it possible to characterize the drift with respect to the components of the stream.
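
The abstract outlines a test-then-train error measurement combined with a cross-correlation analysis between the measured error and an estimate of the drift. The sketch below is a minimal illustration of that general idea under stated assumptions, not the authors' algorithm: it uses a hypothetical synthetic stream with a single abrupt drift, scikit-learn's SGDClassifier as a stand-in online learner, and NumPy's np.correlate for the time-series correlation; the drift position, window size, and all variable names are illustrative.

```python
# Minimal sketch (not the paper's implementation): record a per-step
# prediction error with a test-then-train loop, then cross-correlate the
# smoothed error series with an external drift estimate.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Hypothetical stream with one abrupt concept drift at t = 1000:
# the labelling rule changes, so the previously learned concept breaks.
T, d = 2000, 5
X = rng.normal(size=(T, d))
w_before = rng.normal(size=d)
w_after = -w_before
y = np.where(np.arange(T) < 1000, X @ w_before > 0, X @ w_after > 0).astype(int)

model = SGDClassifier(loss="log_loss")
errors = np.zeros(T)

for t in range(T):
    x_t, y_t = X[t : t + 1], y[t : t + 1]
    if t == 0:
        # The first call must declare the label set.
        model.partial_fit(x_t, y_t, classes=np.array([0, 1]))
        continue
    # Test-then-train: measure the error on x_t *before* learning from it.
    errors[t] = float(model.predict(x_t)[0] != y[t])
    model.partial_fit(x_t, y_t)

# Smooth the raw 0/1 errors into a sliding-window error rate.
win = 50
error_rate = np.convolve(errors, np.ones(win) / win, mode="same")

# Stand-in drift estimate (here, simply the known drift location); in the
# paper this role is played by the drift estimated on the stream itself.
drift_estimate = np.zeros(T)
drift_estimate[1000 : 1000 + win] = 1.0

# Cross-correlation between the centred series: a clear peak suggests the
# error measure reacts to the drift, and its lag locates that reaction.
a = error_rate - error_rate.mean()
b = drift_estimate - drift_estimate.mean()
xcorr = np.correlate(a, b, mode="full") / (a.std() * b.std() * T)
lag = int(np.argmax(xcorr)) - (T - 1)
print(f"peak cross-correlation {xcorr.max():.2f} at lag {lag}")
```

In this toy setting the cross-correlation peak should fall near lag zero; this kind of lagged correlation between the error series and per-component drift estimates is what would support characterizing the drift relative to components of the stream.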


Notes

  1. https://riverml.xyz/0.14.0/.


Author information

Corresponding author

Correspondence to Antoine Bugnicourt.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Bugnicourt, A., Mokadem, R., Morvan, F., Bebeshina, N. (2023). An Error-Based Measure for Concept Drift Detection and Characterization. In: Sellmann, M., Tierney, K. (eds) Learning and Intelligent Optimization. LION 2023. Lecture Notes in Computer Science, vol 14286. Springer, Cham. https://doi.org/10.1007/978-3-031-44505-7_17

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-44505-7_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44504-0

  • Online ISBN: 978-3-031-44505-7

  • eBook Packages: Computer Science, Computer Science (R0)
