Abstract
Limited training data and annotation shortages are major challenges for developing automated medical image analysis systems. As a potential solution, self-supervised learning (SSL) has attracted increasing attention from the community. The key component of SSL is the proxy task, which defines the supervisory signal and drives learning toward effective feature representations. However, most SSL approaches focus on a single proxy task, which limits the expressive power of the learned features and therefore degrades the network's generalization capacity. We hereby propose two aggregation strategies that exploit complementarity in different forms to boost the robustness of self-supervised features. First, we propose a principled framework for multi-task aggregative self-supervised learning from limited medical samples, which forms a unified representation by exploiting the feature complementarity among different proxy tasks. Second, in self-aggregative SSL, we propose to self-complement an existing proxy task with an auxiliary loss function based on the linear centered kernel alignment (CKA) metric, which explicitly encourages the learning of features not covered by the proxy task at hand, further boosting modeling capability. Extensive experiments on 2D and 3D medical image classification tasks under limited data and annotation scenarios confirm that the proposed aggregation strategies consistently improve classification accuracy.
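To make the self-aggregative component concrete, the sketch below computes the linear centered kernel alignment (CKA) similarity of Kornblith et al. (2019) between two feature matrices and adds it as an auxiliary penalty, so the auxiliary branch is discouraged from duplicating what the existing proxy task already captures. This is a minimal PyTorch sketch under our own assumptions, not the authors' exact formulation; the names `linear_cka`, `feat_main`, `feat_aux`, and `lambda_cka` are illustrative.

```python
import torch

def linear_cka(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Linear CKA similarity between feature matrices of shape (n, d1) and (n, d2)."""
    # Center each feature dimension across the batch.
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    # Linear CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F).
    cross = torch.norm(y.t() @ x, p='fro') ** 2
    self_x = torch.norm(x.t() @ x, p='fro')
    self_y = torch.norm(y.t() @ y, p='fro')
    return cross / (self_x * self_y + 1e-8)

# Hypothetical training step: penalizing the CKA similarity between the
# proxy-task features and the auxiliary features pushes the auxiliary branch
# toward directions the proxy task leaves uncovered.
feat_main = torch.randn(32, 128)                      # stand-in for proxy-task features
feat_aux = torch.randn(32, 128, requires_grad=True)   # stand-in for auxiliary features
proxy_loss = torch.tensor(0.0)                        # placeholder for the proxy-task loss
lambda_cka = 0.1                                      # assumed weighting coefficient
total_loss = proxy_loss + lambda_cka * linear_cka(feat_main, feat_aux)
total_loss.backward()
```

Minimizing the CKA term drives the two feature sets toward low mutual similarity, matching the stated goal of covering what the original proxy task misses.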
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Zhu, J., Li, Y., Ding, L., Zhou, S.K. (2022). Aggregative Self-supervised Feature Learning from Limited Medical Images. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022. MICCAI 2022. Lecture Notes in Computer Science, vol 13438. Springer, Cham. https://doi.org/10.1007/978-3-031-16452-1_6
DOI: https://doi.org/10.1007/978-3-031-16452-1_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-16451-4
Online ISBN: 978-3-031-16452-1