
Causes of Catastrophic Forgetting in Class-Incremental Semantic Segmentation

  • Conference paper
Computer Vision – ACCV 2022 (ACCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13847)


Abstract

Class-incremental learning for semantic segmentation (CiSS) is a highly active research field that aims to update a semantic segmentation model by sequentially learning new semantic classes. A major challenge in CiSS is overcoming the effects of catastrophic forgetting, which describes the sudden drop in accuracy on previously learned classes after the model is trained on a new set of classes. Despite recent advances in mitigating catastrophic forgetting, the underlying causes of forgetting specifically in CiSS are not well understood. Therefore, in a set of experiments and representational analyses, we demonstrate that the semantic shift of the background class and a bias towards the new classes are the major causes of forgetting in CiSS. Furthermore, we show that both causes manifest themselves mostly in the deeper classification layers of the network, while the early layers of the model are not affected. Finally, we demonstrate how both causes are effectively mitigated by utilizing the information contained in the background, with the help of knowledge distillation and an unbiased cross-entropy loss.
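The mitigation mentioned in the abstract can be illustrated with a short sketch. The PyTorch code below is illustrative only; the function names, tensor shapes, and exact formulations are assumptions and not the authors' implementation. It shows one common way to make the cross-entropy unbiased with respect to the background in an incremental step: for pixels labelled as background in the current ground truth, the probability mass of the background and all previously learned classes is pooled, so the model is not pushed to relabel old-class pixels as background. A plain pixel-wise distillation term on the old model's output classes is sketched alongside.

```python
import torch
import torch.nn.functional as F


def unbiased_cross_entropy(logits: torch.Tensor,
                           targets: torch.Tensor,
                           num_old_classes: int) -> torch.Tensor:
    """Cross-entropy for an incremental step in which pixels of previously
    learned classes are labelled as background (class 0) in the current
    ground truth. Ignore-index handling is omitted for brevity.

    logits:  (B, C, H, W) scores over background + old + new classes
    targets: (B, H, W) integer labels of the current step
    num_old_classes: number of classes learned in previous steps
                     (excluding the background class)
    """
    log_probs = F.log_softmax(logits, dim=1)                       # (B, C, H, W)

    # For background pixels, any of {background, old classes} is a valid
    # explanation, so their probability mass is pooled into the target term.
    old_and_bkg = list(range(num_old_classes + 1))
    log_p_pooled = torch.logsumexp(log_probs[:, old_and_bkg], dim=1)  # (B, H, W)

    # Standard per-pixel negative log-likelihood ...
    nll = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    # ... with the pooled term substituted for background pixels.
    nll = torch.where(targets == 0, -log_p_pooled, nll)
    return nll.mean()


def distillation_loss(new_logits: torch.Tensor,
                      old_logits: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """Pixel-wise knowledge distillation restricted to the output
    distribution of the old model (background + old classes only)."""
    num_old = old_logits.shape[1]
    log_p_new = F.log_softmax(new_logits[:, :num_old] / temperature, dim=1)
    p_old = F.softmax(old_logits / temperature, dim=1)
    return -(p_old * log_p_new).sum(dim=1).mean()
```

In practice the distillation term is weighted against the unbiased cross-entropy in the overall training objective, and the exact unbiased variants evaluated in the paper may differ from this sketch.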


Notes

  1. As the focus of this paper is to understand the general causes of forgetting in CiSS, we leave the study of the impact of different splits, more classes and longer task sequences to future work.

  2. The confusion matrices are shown in the supplementary material.


Acknowledgments

The research leading to these results is funded by the German Federal Ministry for Economic Affairs and Climate Action within the project “KI Delta Learning” (Förderkennzeichen 19A19013T). The authors would like to thank the consortium for the successful cooperation.

Author information


Corresponding author

Correspondence to Tobias Kalb.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 4564 KB)


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Kalb, T., Beyerer, J. (2023). Causes of Catastrophic Forgetting in Class-Incremental Semantic Segmentation. In: Wang, L., Gall, J., Chin, TJ., Sato, I., Chellappa, R. (eds) Computer Vision – ACCV 2022. ACCV 2022. Lecture Notes in Computer Science, vol 13847. Springer, Cham. https://doi.org/10.1007/978-3-031-26293-7_22


  • DOI: https://doi.org/10.1007/978-3-031-26293-7_22


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-26292-0

  • Online ISBN: 978-3-031-26293-7

  • eBook Packages: Computer Science, Computer Science (R0)
