Abstract
Existing interactive segmentation methods leverage automatic segmentation and user interactions for label refinement, significantly reducing the annotation workload compared to manual annotation. However, these methods lack quick adaptability to ambiguous and noisy data, which is a challenge in CT volumes containing lung lesions from COVID-19 patients. In this work, we propose an adaptive multi-scale online likelihood network (MONet) that adaptively learns in a data-efficient online setting from both an initial automatic segmentation and user interactions providing corrections. We achieve adaptive learning by proposing an adaptive loss that extends the influence of user-provided interactions to neighboring regions with similar features. In addition, we propose a data-efficient probability-guided pruning method that discards uncertain and redundant labels in the initial segmentation to enable efficient online training and inference. Our proposed method was evaluated by an expert in a blinded comparative study on a COVID-19 lung lesion annotation task in CT. Our approach achieved a 5.86% higher Dice score and a 24.67% lower perceived NASA-TLX workload score than the state of the art. Source code is available at: https://github.com/masadcv/MONet-MONAILabel.
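The following is a minimal NumPy sketch of the probability-guided pruning idea described in the abstract: it keeps only labels on which the initial automatic segmentation is confident and subsamples them to remove redundancy before online training. It assumes the initial segmentation is available as per-voxel foreground probabilities; the confidence threshold, the random subsampling scheme, and the function name prune_initial_labels are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch of probability-guided label pruning: discard uncertain
# voxels and subsample confident ones so online training stays lightweight.
import numpy as np

def prune_initial_labels(fg_prob, conf_thresh=0.9, keep_ratio=0.25, seed=0):
    """Return a sparse label map: 1 = foreground, 0 = background, -1 = ignored.

    fg_prob: float array of per-voxel foreground probabilities in [0, 1].
    conf_thresh: voxels are treated as certain only if their probability is
                 above conf_thresh (foreground) or below 1 - conf_thresh (background).
    keep_ratio: fraction of certain voxels retained to drop redundant labels.
    """
    rng = np.random.default_rng(seed)
    labels = np.full(fg_prob.shape, -1, dtype=np.int8)  # -1 = uncertain / discarded

    certain_fg = fg_prob >= conf_thresh
    certain_bg = fg_prob <= (1.0 - conf_thresh)

    for value, mask in ((1, certain_fg), (0, certain_bg)):
        idx = np.flatnonzero(mask)
        if idx.size == 0:
            continue
        # Randomly keep a subset of the confident voxels of this class.
        keep = rng.choice(idx, size=max(1, int(keep_ratio * idx.size)), replace=False)
        labels.flat[keep] = value

    return labels

# Example: prune labels from a toy probability volume before online training.
probs = np.random.rand(32, 64, 64).astype(np.float32)
sparse_labels = prune_initial_labels(probs)
print((sparse_labels == 1).sum(), (sparse_labels == 0).sum(), (sparse_labels == -1).sum())
```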
References
Asad, M., Dorent, R., Vercauteren, T.: FastGeodis: fast generalised geodesic distance transform. arXiv preprint arXiv:2208.00001 (2022)
Asad, M., Fidon, L., Vercauteren, T.: ECONet: Efficient convolutional online likelihood network for scribble-based interactive segmentation. In: Medical Imaging with Deep Learning (2022)
Boykov, Y.Y., Jolly, M.-P.: Interactive graph cuts for optimal boundary and region segmentation of objects in N-D images. In: Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001, pp. 105–112 (2001)
Budd, S., Robinson, E.C., Kainz, B.: A survey on active learning and human-in-the-loop deep learning for medical image analysis. Med. Image Anal. 71, 102062 (2021)
Chassagnon, G., et al.: AI-Driven CT-based quantification, staging and short-term outcome prediction of COVID-19 pneumonia. arXiv preprint arXiv:2004.12852 (2020)
Çiçek, Ö., Abdulkadir, A., Lienkamp, S.S., Brox, T., Ronneberger, O.: 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: Ourselin, S., Joskowicz, L., Sabuncu, M.R., Unal, G., Wells, W. (eds.) MICCAI 2016. LNCS, vol. 9901, pp. 424–432. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46723-8_49
Diaz-Pinto, A., et al.: MONAI Label: a framework for AI-assisted interactive labeling of 3D medical images. arXiv preprint arXiv:2203.12362 (2022)
Gonzalez, C., Gotkowski, K., Bucher, A., Fischbach, R., Kaltenborn, I., Mukhopadhyay, A.: Detecting when pre-trained nnU-Net models fail silently for COVID-19 lung lesion segmentation. In: de Bruijne, M., et al. (eds.) MICCAI 2021. LNCS, vol. 12907, pp. 304–314. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-87234-2_29
Hart, S.G.: NASA-Task Load Index (NASA-TLX); 20 years later. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, pp. 904–908 (2006)
Ho, Y., Wookey, S.: The real-world-weight cross-entropy loss function: modeling the costs of mislabeling. IEEE Access 8, 4806–4813 (2019)
Kukar, M., Kononenko, I., et al.: Cost-sensitive learning with neural networks. In: ECAI, pp. 8–94 (1998)
Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440 (2015)
Loshchilov, I., Hutter, F.: SGDR: stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983 (2016)
Luo, X., et al.: MIDeepSeg: minimally interactive segmentation of unseen objects from medical images using deep learning. Med. Image Anal. 72, 102102 (2021)
McGrath, H., et al.: Manual segmentation versus semi-automated segmentation for quantifying vestibular schwannoma volume on MRI. Int. J. Comput. Assist. Radiol. Surg. 15, 1445–1455 (2020)
McLaren, T.A., Gruden, J.F., Green, D.B.: The bullseye sign: a variant of the reverse halo sign in COVID-19 pneumonia. Clin. Imaging 68, 191–196 (2020)
MONAI Consortium: MONAI: Medical Open Network for AI (2020). https://github.com/Project-MONAI/MONAI
Rajchl, M., et al.: DeepCut: object segmentation from bounding box annotations using convolutional neural networks. IEEE Trans. Med. Imaging 36(2), 674–683 (2016)
Ramkumar, A., et al.: Using GOMS and NASA-TLX to evaluate human-computer interaction process in interactive segmentation. Int. J. Human-Computer Interact. 33(2), 123–134 (2017)
Revel, M.-P., et al.: Study of thoracic CT in COVID-19: the STOIC project. Radiology 301(1), E361–E370 (2021)
Roth, H., et al.: Rapid Artificial Intelligence Solutions in a Pandemic-The COVID-19-20 Lung CT Lesion Segmentation Challenge (2021)
Rubin, G.D., et al.: The role of chest imaging in patient management during the COVID-19 pandemic: a multinational consensus statement from the Fleischner Society. Radiology 296(1), 172–180 (2020)
Tilborghs, S., et al.: Comparative study of deep learning methods for the automatic segmentation of lung, lesion and lesion type in CT scans of COVID-19 patients. arXiv preprint arXiv:2007.15546 (2020)
Wang, G., et al.: Dynamically balanced online random forests for interactive scribble-based segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 352–360 (2016)
Wang, G., et al.: DeepIGeoS: a deep interactive geodesic framework for medical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 41(7), 1559–1572 (2018)
Wang, G., et al.: Interactive medical image segmentation using deep learning with image-specific fine tuning. IEEE Trans. Med. Imaging 37(7), 1562–1573 (2018)
Wang, G., et al.: A noise-robust framework for automatic segmentation of COVID-19 pneumonia lesions from CT images. IEEE Trans. Med. Imaging 39(8), 2653–2663 (2020)
Williams, H., et al.: Interactive segmentation via deep learning and b-spline explicit active surfaces. In: de Bruijne, M., et al. (eds.) MICCAI 2021. LNCS, vol. 12901, pp. 315–325. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-87193-2_30
Acknowledgment
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 101016131 (icovid project). This work was also supported by core and project funding from the Wellcome/EPSRC [WT203148/Z/16/Z; NS/A000049/1; WT101957; NS/A000027/1]. This project utilized scribbles-based interactive segmentation tools from the open-source MONAI Label project (https://github.com/Project-MONAI/MONAILabel) [7].
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Asad, M. et al. (2023). Adaptive Multi-scale Online Likelihood Network for AI-Assisted Interactive Segmentation. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14221. Springer, Cham. https://doi.org/10.1007/978-3-031-43895-0_53
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-43894-3
Online ISBN: 978-3-031-43895-0
eBook Packages: Computer Science, Computer Science (R0)