
GLAN: GAN Assisted Lightweight Attention Network for Biomedical Imaging Based Diagnostics


Abstract

Manual assessment in biomedical imaging based diagnostics is limited because it is time-consuming and subjective. Bio-inspired diagnostic applications on embedded and mobile devices are becoming more popular, as they overcome these limitations and aid early detection and diagnosis. The neural theory of visual attention posits that processing resources are dedicated to the more important information, saving resources and improving performance. Moreover, adversarial learning can potentially alleviate the various biases of the human cognitive system. The limited performance of current lightweight network approaches can be attributed to the absence of these properties of human cognition. Accordingly, we introduce GLAN, a lightweight attention-based network that is particularly well suited to embedded and mobile devices. GLAN’s encoder-decoder design follows lightweight architecture design principles. To improve performance, a twofold strategy is adopted. First, we equip the encoder and the decoder with lightweight attention mechanisms to increase their focus and improve segmentation performance. Second, adversarial learning is employed together with augmentation to increase the generalization ability of the lightweight attention network. We evaluated GLAN on three applications: lung segmentation, digestive tract polyp segmentation, and optic disc segmentation. GLAN is highly competitive in segmentation performance while comprehensively outperforming recent alternatives in computational requirements; specifically, it requires 94.17%, 78.18%, 80.95%, and 67.56% fewer parameters than four recent lightweight alternatives. This advocates its application for real-time biomedical imaging diagnostics on embedded and mobile devices in clinical settings.
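The abstract names two ingredients: lightweight attention inside an encoder-decoder, and GAN-assisted training. As a rough illustration only, the minimal PyTorch sketch below shows one common way to realize each ingredient: a depthwise-separable encoder block gated by squeeze-and-excitation-style channel attention, and a single adversarial training step in which a discriminator scores image-mask pairs. The module names, channel handling, and the adversarial loss weight are assumptions made for illustration; they do not reproduce the published GLAN architecture.

```python
# Hedged sketch (not the published GLAN design): lightweight channel attention
# over depthwise-separable convolutions, plus one GAN-assisted update step.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style gate: cheap, channel-wise attention."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))      # global average pool -> per-channel weights
        return x * w[:, :, None, None]       # re-weight feature channels


class LiteEncoderBlock(nn.Module):
    """Depthwise-separable convolution followed by channel attention."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.attn = ChannelAttention(out_ch)

    def forward(self, x):
        return self.attn(F.relu(self.pointwise(self.depthwise(x))))


def adversarial_step(segmenter, discriminator, opt_s, opt_d, image, mask, adv_weight=0.01):
    """One GAN-assisted update. Assumes `segmenter` returns mask logits with the
    same spatial size as `mask`, and `discriminator` scores (image, mask) pairs
    concatenated along the channel axis."""
    pred = segmenter(image)

    # Discriminator update: real (image, ground-truth) pairs vs. predicted pairs.
    d_real = discriminator(torch.cat([image, mask], dim=1))
    d_fake = discriminator(torch.cat([image, pred.detach()], dim=1))
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Segmenter update: pixel-wise loss plus a small adversarial term.
    d_fake = discriminator(torch.cat([image, pred], dim=1))
    loss_s = (F.binary_cross_entropy_with_logits(pred, mask) +
              adv_weight * F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)))
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
    return loss_s.item(), loss_d.item()
```

In a full pipeline this step would sit inside the usual training loop, with the augmented training images mentioned in the abstract feeding both networks.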



Data Availability

The MC Chest X-ray dataset [31] employed in this study is available at the Open-i repository and can be requested through the contact at https://openi.nlm.nih.gov/faq#faq-tb-coll. The CVC-ClinicDB dataset [32] used in this study is available at https://polyp.grand-challenge.org/CVCClinicDB/ as part of the endoscopic vision sub-challenge. The IDRiD dataset [33], which includes images obtained from eye examinations of diabetic patients in Nanded, Maharashtra, India, is available on IEEE DataPort and can be accessed at https://ieee-dataport.org/open-access/indian-diabetic-retinopathy-image-dataset-idrid by IEEE DataPort users.

Notes

  1. The dataset can be requested at http://archive.nlm.nih.gov/repos/chestImages.php.

  2. The dataset can be accessed from https://polyp.grand-challenge.org/CVCClinicDB/

  3. Implementation available at https://github.com/DengPingFan/PraNet

  4. The leaderboard results are available at https://idrid.grand-challenge.org/Leaderboard/

References

  1. Nie D, Wang L, Xiang L, Zhou S, Adeli E. Difficulty-aware attention network with confidence learning for medical image segmentation. In: AAAI Conference on Artificial Intelligence. 2019;33:1085-92.

  2. Park H, Lee HJ, Kim HG, Ro YM, Shin D, Lee SR, et al. Endometrium segmentation on transvaginal ultrasound image using key-point discriminator. Med Phys. 2019;46(9):3974–84.

  3. Maninis KK, Pont-Tuset J, Arbeláez P, Van Gool L. Deep Retinal Image Understanding. In: Ourselin S, Joskowicz L, Sabuncu MR, Unal G, Wells W, editors. Medical Image Computing and Computer-Assisted Intervention - MICCAI 2016. Cham: Springer International Publishing. 2016. p. 140–8.

  4. Guo S, Wang K, Kang H, Zhang Y, Gao Y, Li T. BTS-DSN: Deeply supervised neural network with short connections for retinal vessel segmentation. Int J Med Inform. 2019;126:105–13.

  5. Jonathan L, Evan S, Trevor D. Fully convolutional networks for semantic segmentation. In: IEEE Conference On Computer Vision and Pattern Recognition. 2015:3431–3440.

  6. Yan Z, Yang X, Cheng KT. A Three-Stage Deep Learning Model for Accurate Retinal Vessel Segmentation. IEEE J Biomed Health Inform. 2019;23(4):1427–36.

  7. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Medical Image Computing and Computer-Assisted Intervention. 2015;234–41.

  8. Badrinarayanan V, Kendall A, Cipolla R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans Pattern Anal Mach Intell. 2017;39(12):2481–95.

  9. Ji L, Jiang X, Gao Y, Fang Z, Cai Q, Wei Z. ADR-Net: context extraction network based on M-Net for medical image segmentation. Med Phys. 2020;47(9):4254–64.

  10. Chen J, Lu Y, Yu Q, Luo X, Adeli E, Wang Y, et al. TransUNet: Transformers make strong encoders for medical image segmentation. arXiv preprint arXiv:2102.04306. 2021.

  11. Reiß S, Seibold C, Freytag A, Rodner E, Stiefelhagen R. Every annotation counts: Multi-label deep supervision for medical image segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021;9532–42.

  12. Ni J, Wu J, Tong J, Chen Z, Zhao J. GC-Net: Global context network for medical image segmentation. Comput Methods Programs Biomed. 2020;190:105121.

  13. Goyal M, Reeves ND, Rajbhandari S, Yap MH. Robust Methods for Real-Time Diabetic Foot Ulcer Detection and Localization on Mobile Devices. IEEE J Biomed Health Inform. 2019;23(4):1730–41.

  14. Yamada M, Saito Y, Imaoka H, Saiko M, Yamada S, Kondo H, et al. Development of a real-time endoscopic image diagnosis support system using deep learning technology in colonoscopy. Sci Rep. 2019;9(1):14465–9.

  15. Guo X, Khalid MA, Domingos I, Michala AL, Adriko M, Rowel C, et al. Smartphone-based DNA diagnostics for malaria detection using deep learning for local decision support and blockchain technology for security. Nat Electron. 2021;4(8):615–24.

  16. Vaze S, Xie W, Namburete AIL. Low-Memory CNNs Enabling Real-Time Ultrasound Segmentation Towards Mobile Deployment. IEEE J Biomed Health Inform. 2020;24(4):1059–69.

  17. Zhang L, Shi L, Cheng JCY, Chu WCW, Yu SCH. LPAQR-Net: Efficient vertebra segmentation from biplanar whole-spine radiographs. IEEE J Biomed Health Inform. 2021;25(7):2710–21.

  18. Bundesen C, Habekost T, Kyllingsbæk S. A neural theory of visual attention: bridging cognition and neurophysiology. Psychol Rev. 2005;112(2):291–328.

  19. Hosseini H, Poovendran R. Semantic Adversarial Examples. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 2018;1695–16955.

  20. Zhou S, Nie D, Adeli E, Yin J, Lian J, Shen D. High-resolution encoder-decoder networks for low-contrast medical image segmentation. IEEE Trans Image Process. 2019;29:461–75.

  21. Iqbal A, Sharif M, Khan MA, Nisar W, Alhaisoni M. FF-UNet: a U-Shaped deep convolutional neural network for multimodal biomedical image segmentation. Cogn Comput. 2022;14:1287–302. Available from: https://doi.org/10.1007/s12559-022-10038-y.

  22. Valanarasu JMJ, Oza P, Hacihaliloglu I, Patel VM. Medical transformer: Gated axial-attention for medical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer. 2021;36–46.

  23. Xue Y, Xu T, Zhang H, Long LR, Huang X. SegAN: adversarial network with multi-scale L1 loss for medical image segmentation. Neuroinformatics. 2018;16(3):383–92.

  24. Howard A, Sandler M, Chu G, Chen LC, Chen B, Tan M, et al. Searching for MobileNetV3. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2019.

  25. Wu Y, Xia Y, Song Y, Zhang D, Liu D, Zhang C, et al. Vessel-Net: retinal vessel segmentation under multi-path supervision. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer. 2019;264-72.

  26. Romera E, Álvarez JM, Bergasa LM, Arroyo R. ERFNet: Efficient residual factorized ConvNet for real-time semantic segmentation. IEEE Trans Intell Transp Syst. 2018;19(1):263–72.

  27. Laibacher T, Weyde T, Jalali S. M2U-Net: Effective and efficient retinal vessel segmentation for real-world applications. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 2019.

  28. Ma N, Zhang X, Zheng HT, Sun J. ShuffleNet V2: Practical guidelines for efficient CNN architecture design. In: Proceedings of the European Conference on Computer Vision (ECCV). 2018.

  29. Son J, Park SJ, Jung KH. Towards accurate segmentation of retinal vessels and the optic disc in fundoscopic images with generative adversarial networks. J Digit Imaging. 2019;32(3):499–512.

  30. Lata K, Dave M, Nishanth KN. Image-to-image translation using generative adversarial network. In: 2019 3rd International Conference on Electronics, Communication and Aerospace Technology (ICECA). 2019;186–9.

  31. Jaeger S, Candemir S, Antani S, Wáng YXJ, Lu PX, Thoma G. Two public chest X-ray datasets for computer-aided screening of pulmonary diseases. Quant Imaging Med Surg. 2014;4(6). Available from: https://qims.amegroups.com/article/view/5132.

  32. Bernal J, Sánchez FJ, Fernández-Esparrach G, Gil D, Rodríguez C, Vilariño F. WM-DOVA maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians. Comput Med Imaging Graph. 2015;43:99–111.

  33. Porwal P, Pachade S, Kokare M, Deshmukh G, Son J, Bae W, et al. IDRiD: Diabetic retinopathy - segmentation and grading challenge. Med Image Anal. 2020;59:101561. Available from: https://www.sciencedirect.com/science/article/pii/S1361841519301033.

  34. Kingma DP, Ba J. Adam: A Method for Stochastic Optimization. In: Bengio Y, LeCun Y, editors. 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings; 2015. Available from: http://arxiv.org/abs/1412.6980.

  35. Fan D, Ji G, Zhou T, Chen G, Fu H, Shen J, et al. PraNet: Parallel Reverse Attention Network for Polyp Segmentation. In: Med Image Comput Comput Assisted Intervention. 2020;263–73.

  36. Jha D, Smedsrud PH, Riegler MA, Johansen D, Lange TD, Halvorsen P, et al. ResUNet++: An Advanced Architecture for Medical Image Segmentation. In: 2019 IEEE International Symposium on Multimedia (ISM). 2019;225–30.

  37. Fan D, Cheng M, Liu Y, Li T, Borji A. Structure-Measure: A New Way to Evaluate Foreground Maps. In: 2017 IEEE International Conference on Computer Vision. 2017; 4558–67.

  38. Howard A, Sandler M, Chu G, Chen LC, Chen B, Tan M, et al. Searching for mobilenetv3. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019;1314–24.

  39. Laibacher T, Weyde T, Jalali S. M2U-Net: Effective and Efficient Retinal Vessel Segmentation for Real-World Applications. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2019, Long Beach, CA, USA, June 16-20, 2019;115–24.

  40. Souza JC, Bandeira Diniz JO, Ferreira JL, França da Silva GL, Corrêa Silva A, de Paiva AC. An automatic method for lung segmentation and reconstruction in chest X-ray using deep neural networks. Comput Methods Programs Biomed. 2019;177:285–96. Available from: https://www.sciencedirect.com/science/article/pii/S0169260719303517.

  41. Zhou Z, Rahman Siddiquee MM, Tajbakhsh N, Liang J. UNet++: A Nested U-Net Architecture for Medical Image Segmentation. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. 2018;3–11.

  42. Fang Y, Chen C, Yuan Y, Tong K. Selective Feature Aggregation Network with Area-Boundary Constraints for Polyp Segmentation. In: Medical Image Computing and Computer Assisted Intervention. 2019;302–10.

  43. Sarhan A, Al-Khaz’Aly A, Gorner A, Swift A, Rokne J, Alhajj R, et al. Utilizing transfer learning and a customized loss function for optic disc segmentation from retinal images. In: Proceedings of the Asian Conference on Computer Vision (ACCV). 2020.

  44. Hasan MK, Alam MA, Elahi MTE, Roy S, Martí R. DRNet: Segmentation and localization of optic disc and Fovea from diabetic retinopathy image. Artif Intell Med. 2021;111:102001. Available from: https://www.sciencedirect.com/science/article/pii/S0933365720312665.

  45. Paszke A, Chaurasia A, Kim S, Culurciello E. ENet: A deep neural network architecture for real-time semantic segmentation. arXiv preprint arXiv:1606.02147. 2016.

Author information

Corresponding author

Correspondence to Tariq M. Khan.

Ethics declarations

Research Involving Human and Animal Participants

This article does not contain any studies with human participants or animals performed by any of the authors.

Competing Interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Naqvi, S.S., Langah, Z.A., Khan, H.A. et al. GLAN: GAN Assisted Lightweight Attention Network for Biomedical Imaging Based Diagnostics. Cogn Comput 15, 932–942 (2023). https://doi.org/10.1007/s12559-023-10131-w
