
Parameter Efficient Fine Tuning for Multi-scanner PET to PET Reconstruction

  • Conference paper
  • First Online:
Medical Image Computing and Computer Assisted Intervention – MICCAI 2024 (MICCAI 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15007)

Abstract

Reducing scan time in Positron Emission Tomography (PET) imaging while maintaining high image quality is crucial for minimizing patient discomfort and radiation exposure. Because medical imaging datasets are small and image distributions differ across scanners, fine-tuning pre-trained models in a parameter-efficient yet effective manner is increasingly attractive. Motivated by the potential of Parameter-Efficient Fine-Tuning (PEFT), we leverage PEFT to mitigate the limited-data and GPU-resource constraints of multi-scanner setups. In this paper, we introduce PETITE, Parameter Efficient Fine-Tuning for MultI-scanner PET to PET REconstruction, which identifies the optimal combination of PEFT methods when they are applied independently to the encoder and decoder components of each model architecture. To the best of our knowledge, this study is the first to systematically explore the efficacy of diverse PEFT techniques for medical image reconstruction with prevalent encoder-decoder models. In particular, the investigation yields an intriguing insight: treating the encoder and decoder separately and mixing different PEFT methods, which we call Mix-PEFT, brings further improvements. Using multi-scanner PET datasets from five different scanners, we extensively evaluate cross-scanner scan-time-reduction performance (i.e., a model pre-trained on one scanner is fine-tuned on a different scanner) across 21 feasible Mix-PEFT combinations to derive the optimal PETITE configuration. We show that PETITE, training fewer than 1% of the parameters, performs on par with full fine-tuning (i.e., 100% of the parameters). Code is available at: https://github.com/MICV-yonsei/PETITE.
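
To make the Mix-PEFT idea in the abstract concrete, the sketch below shows, in PyTorch, one hypothetical way to fine-tune the encoder and decoder of a frozen encoder-decoder model with different PEFT methods: LoRA on the encoder's linear layers and bottleneck adapters appended to the decoder blocks. The toy model, layer choices, rank, and adapter size are illustrative assumptions only and do not reproduce the authors' PETITE implementation; see the linked repository for the official code.

```python
# Minimal, hypothetical Mix-PEFT sketch: freeze a pre-trained encoder-decoder
# model, then fine-tune the encoder with LoRA and the decoder with adapters.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank (LoRA) update."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)


class Adapter(nn.Module):
    """Small bottleneck adapter with a residual connection."""

    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))


def apply_mix_peft(model: nn.Module, feat_dim: int) -> nn.Module:
    """Freeze all weights; LoRA on encoder Linears, adapters after decoder blocks."""
    for p in model.parameters():
        p.requires_grad_(False)

    # One PEFT method for the encoder: replace each Linear with a LoRA-wrapped version.
    for module in list(model.encoder.modules()):
        for name, child in list(module.named_children()):
            if isinstance(child, nn.Linear):
                setattr(module, name, LoRALinear(child))

    # A different PEFT method for the decoder: append a bottleneck adapter to each block.
    model.decoder = nn.Sequential(
        *(nn.Sequential(block, Adapter(feat_dim)) for block in model.decoder)
    )
    return model


if __name__ == "__main__":
    # Toy stand-in for a pre-trained encoder-decoder reconstruction network.
    model = nn.Module()
    model.encoder = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))
    model.decoder = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))
    model = apply_mix_peft(model, feat_dim=64)

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable parameters: {trainable}/{total} ({trainable / total:.1%})")
```

On this toy model the trainable fraction is far above 1%; the sub-1% figure quoted in the abstract refers to the full-scale reconstruction models evaluated in the paper, where the frozen backbone dominates the parameter count.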

Y. Kim and G. Choi—Equal contribution.

Acknowledgments

This work was supported in part by the IITP grant 2020-0-01361 (AI Graduate School Program at Yonsei University) and by NRF grants RS-2023-00262002 and RS-2023-00219019, funded by the Korean Government (MSIT).

Author information

Corresponding author

Correspondence to Seong Jae Hwang.

Ethics declarations

Disclosure of Interests

The authors have no competing interests.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 113 KB)

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Kim, Y., Choi, G., Hwang, S.J. (2024). Parameter Efficient Fine Tuning for Multi-scanner PET to PET Reconstruction. In: Linguraru, M.G., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2024. MICCAI 2024. Lecture Notes in Computer Science, vol 15007. Springer, Cham. https://doi.org/10.1007/978-3-031-72104-5_50

  • DOI: https://doi.org/10.1007/978-3-031-72104-5_50

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72103-8

  • Online ISBN: 978-3-031-72104-5

  • eBook Packages: Computer Science, Computer Science (R0)
