Abstract
In medical image synthesis, the development of robust and reliable baseline methods is crucial given the complexity and variability of existing techniques. Despite advances with architectures such as GANs and diffusion models, no clear state of the art has yet been established. This paper introduces a versatile adaptation of the nnU-Net framework as a robust baseline for both cross-modality synthesis and image inpainting tasks. Known for its superior performance in segmentation challenges, nnU-Net’s automatic configuration and parameter optimization capabilities have been adapted to these new applications. We evaluate the method on two use cases: pelvis MR-to-CT translation using the SynthRAD2023 challenge dataset, and local synthesis using the BraTS 2023 inpainting challenge dataset. Standard synthesis metrics (MAE, MSE, SSIM, and PSNR) show that our adapted nnU-Net outperforms GAN-based methods such as pix2pixHD and ranks among the best methods in both challenges. We recommend this adapted nnU-Net as a new benchmark for medical image translation and inpainting tasks, and we provide our implementations for public use on GitHub.
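Since the evaluation rests on these four metrics, a minimal sketch of how they are typically computed on a volume pair may be useful. It uses NumPy and scikit-image; the helper name, the synthetic test data, and the HU intensity range are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def synthesis_metrics(reference: np.ndarray, synthetic: np.ndarray, data_range: float) -> dict:
    """Compute MAE, MSE, SSIM, and PSNR between a reference volume
    (e.g. a real CT) and a synthetic one, over a shared intensity range."""
    diff = reference.astype(np.float64) - synthetic.astype(np.float64)
    mae = float(np.mean(np.abs(diff)))
    mse = float(np.mean(diff ** 2))
    # SSIM and PSNR both need the intensity range to be specified for float images.
    ssim = structural_similarity(reference, synthetic, data_range=data_range)
    psnr = peak_signal_noise_ratio(reference, synthetic, data_range=data_range)
    return {"MAE": mae, "MSE": mse, "SSIM": ssim, "PSNR": psnr}

# Hypothetical example: a 3D CT volume clipped to [-1024, 3000] HU
# (data range 4024) compared against a noisy stand-in for a synthetic CT.
ct = np.random.uniform(-1024, 3000, size=(64, 64, 64))
sct = ct + np.random.normal(0, 20, size=ct.shape)
print(synthesis_metrics(ct, sct, data_range=4024.0))
```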
References
F. Isensee, P.F. Jaeger, S.A. Kohl et al.: nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods 18(2), 203-211 (2021)
F. Isensee, T. Wald, C. Ulrich et al.: nnU-Net revisited: a call for rigorous validation in 3D medical image segmentation. arXiv preprint arXiv:2404.09556 (2024)
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu et al.: Generative adversarial nets. In: Advances in Neural Information Processing Systems, pp. 2672-2680 (2014)
P. Isola, J. Zhu, T. Zhou, A. Efros: Image-to-image translation with conditional adversarial networks. CVPR, pp. 5967-5976 (2017)
T. Wang, M. Liu, J. Zhu et al.: High-resolution image synthesis and semantic manipulation with conditional GANs. CVPR, pp. 8798-8807 (2018)
J. Johnson, A. Alahi, L. Fei-Fei: Perceptual losses for real-time style transfer and super-resolution. ECCV 2016 (2016)
S. Chen, K. Ma, Y. Zheng: Med3D: transfer learning for 3D medical image analysis. arXiv preprint arXiv:1904.00625 (2019)
A. Thummerer, E. van der Bijl, A. Galapon Jr et al.: SynthRAD2023 Grand Challenge dataset: generating synthetic CT for radiotherapy. Medical Physics 50(7), 4664-4674 (2023)
E. Huijben, M.L. Terpstra, A. Galapon Jr et al.: Generating synthetic computed tomography for radiotherapy: SynthRAD2023 challenge report. arXiv preprint arXiv:2403.08447 (2024)
R. Rombach, A. Blattmann, D. Lorenz et al.: High-resolution image synthesis with latent diffusion models. CVPR, pp. 10684-10695 (2022)
J. Wolterink, A. Dinkla, M. Savenije et al.: Deep MR to CT synthesis using unpaired data (2017)
Y. Lei, J. Harms, T. Wang et al.: MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks. Medical Physics (2019)
A. Longuefosse, J. Raoul, I. Benlala et al.: Generating high-resolution synthetic CT from lung MRI with ultrashort echo times: initial evaluation in cystic fibrosis. Radiology 308(1), e230052 (2023)
F. Kofler, F. Meissen, F. Steinbauer et al.: The Brain Tumor Segmentation (BraTS) Challenge 2023: local synthesis of healthy brain tissue via inpainting. arXiv preprint arXiv:2305.08992 (2023)
A. Durrer, J. Wolleb, F. Bieder et al.: Denoising diffusion models for 3D healthy brain tissue inpainting. arXiv preprint arXiv:2403.14499 (2024)
R. Zhu, X. Zhang, H. Pang et al.: Advancing brain tumor inpainting with generative models. arXiv preprint arXiv:2402.01509 (2024)
A. Lugmayr, M. Danelljan, A. Romero et al.: RePaint: inpainting using denoising diffusion probabilistic models. CVPR, pp. 11461-11471 (2022)
Acknowledgments
This work was granted access to the HPC resources of IDRIS under the allocations 2022-AD011013848R1 and 2022-AD011013926 made by GENCI.
This work benefited from the support of the project HoliBrain of the French National Research Agency (ANR-23-CE45-0020-01).
This project is supported by the Precision and global vascular brain health institute funded by the France 2030 investment plan as part of the IHU3 initiative (ANR-23-IAHU-0001).
This study received financial support from the French government in the framework of the University of Bordeaux’s France 2030 program/RRI “IMPACT and the PEPR StratifyAging”.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Longuefosse, A. et al. (2025). Adapted nnU-Net: A Robust Baseline for Cross-Modality Synthesis and Medical Image Inpainting. In: Fernandez, V., Wolterink, J.M., Wiesner, D., Remedios, S., Zuo, L., Casamitjana, A. (eds) Simulation and Synthesis in Medical Imaging. SASHIMI 2024. Lecture Notes in Computer Science, vol 15187. Springer, Cham. https://doi.org/10.1007/978-3-031-73281-2_3
DOI: https://doi.org/10.1007/978-3-031-73281-2_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-73280-5
Online ISBN: 978-3-031-73281-2
eBook Packages: Computer Science, Computer Science (R0)