
On Identifying Effective Investigations with Feature Finding Using Explainable AI: An Ophthalmology Case Study

  • Conference paper
Artificial Intelligence in Medicine (AIME 2024)

Abstract

Over-investigation is a longstanding challenge in contemporary healthcare. Drawing on feature selection techniques from Explainable AI (XAI), we conceptualise medical investigation decisions as a “feature finding” problem in machine learning, using XAI feature attribution to select the most effective investigations for each patient. Focusing on ophthalmology, we apply this framework to identify the investigations needed to diagnose eye conditions on a per-patient basis. Our results show that the algorithm identifies recommended investigations that align with clinical judgment. Our contributions are twofold: modelling the selection of optimal medical investigations as a feature finding problem, and introducing an algorithm for computing these optimal investigations.
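
To make the framing concrete, the minimal sketch below illustrates the general idea under stated assumptions: a toy diagnostic model is trained over candidate investigations (features), a per-patient attribution score is computed for each investigation, and the highest-scoring investigations are recommended. The model, investigation names, occlusion-style attribution, and cutoff here are all illustrative assumptions, not the paper's algorithm, which builds on XAI feature-attribution methods.

    # Hypothetical sketch of "feature finding": rank candidate investigations
    # by their attribution to one patient's prediction. All names and the
    # occlusion-style attribution are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Toy data: 200 patients x 5 candidate investigations; synthetic diagnosis.
    investigations = ["visual_acuity", "tonometry", "OCT", "fundus_photo", "perimetry"]
    X = rng.normal(size=(200, len(investigations)))
    y = (X[:, 1] + X[:, 2] > 0).astype(int)

    model = RandomForestClassifier(random_state=0).fit(X, y)
    baseline = X.mean(axis=0)  # stand-in value for "investigation not performed"

    def attribution(patient):
        """Occlusion attribution: shift in the predicted probability when one
        investigation's value is masked with its population mean."""
        p_full = model.predict_proba(patient[None, :])[0, 1]
        scores = np.empty(len(patient))
        for j in range(len(patient)):
            masked = patient.copy()
            masked[j] = baseline[j]  # occlude investigation j
            scores[j] = abs(p_full - model.predict_proba(masked[None, :])[0, 1])
        return scores

    # Rank investigations for one patient; recommend the most influential ones.
    scores = attribution(X[0])
    ranked = sorted(zip(investigations, scores), key=lambda t: -t[1])
    top_k = 2  # illustrative cutoff (cf. the 50%/75% thresholds in the paper's note)
    print("Recommended investigations:", [name for name, _ in ranked[:top_k]])

Any per-patient attribution method (e.g. SHAP or LIME) could replace the occlusion score here; the essential step is ranking investigations by their contribution to the individual prediction rather than by a global feature importance.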

This work is supported by the Ministry of Education, Singapore (Grant RG17/22).


Notes

  1. We also experimented with 50% and 75%, but the resulting concordance with diagnostic guidance remained consistent.


Author information


Correspondence to Xiuyi Fan.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Kumar, R.S. et al. (2024). On Identifying Effective Investigations with Feature Finding Using Explainable AI: An Ophthalmology Case Study. In: Finkelstein, J., Moskovitch, R., Parimbelli, E. (eds) Artificial Intelligence in Medicine. AIME 2024. Lecture Notes in Computer Science, vol 14845. Springer, Cham. https://doi.org/10.1007/978-3-031-66535-6_34

  • DOI: https://doi.org/10.1007/978-3-031-66535-6_34

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-66534-9

  • Online ISBN: 978-3-031-66535-6

  • eBook Packages: Computer Science; Computer Science (R0)
