Abstract
We propose a solution for the BraTS22 challenge that builds on our previous submission, the Optimized U-Net method. This year we focused on improving the model architecture and the training schedule. The proposed method further improves scores on both our internal cross-validation and the challenge validation data. The mean Dice scores on the validation set are: ET 0.8381, TC 0.8802, WT 0.9292, and the mean Hausdorff95 distances are: ET 14.460, TC 5.840, WT 3.594.
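For context, Dice measures the volumetric overlap between the predicted and reference masks of each tumor sub-region (ET: enhancing tumor, TC: tumor core, WT: whole tumor), and Hausdorff95 is the 95th percentile of the symmetric surface distances between the two masks. Below is a minimal NumPy sketch of the Dice computation; it is illustrative only, not the challenge's official evaluation code, and the function and variable names are our own:

    import numpy as np

    def dice_score(pred, target, eps=1e-7):
        # Volumetric overlap between two binary masks: 2*|P & G| / (|P| + |G|).
        # eps guards against division by zero when both masks are empty.
        pred, target = pred.astype(bool), target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

    # Usage: given binary volumes for one case, average over the sub-regions.
    # mean_dice = np.mean([dice_score(pred[r], ref[r]) for r in ("ET", "TC", "WT")])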
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Futrega, M., Marcinkiewicz, M., Ribalta, P. (2023). Tuning U-Net for Brain Tumor Segmentation. In: Bakas, S., et al. (eds.) Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes 2022. Lecture Notes in Computer Science, vol. 13769. Springer, Cham. https://doi.org/10.1007/978-3-031-33842-7_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-33841-0
Online ISBN: 978-3-031-33842-7