Abstract
Over-investigation is a longstanding challenge in contemporary healthcare. We conceptualise medical investigation decisions as a “feature finding” problem in machine learning and employ feature attribution techniques from Explainable AI (XAI) to select the most effective investigations for each patient. Focusing on ophthalmology, we apply this framework to identify, on an individual basis, the investigations needed to diagnose eye conditions. Our results show that the algorithm’s recommended investigations align well with clinical judgment. Our contributions are twofold: modelling the selection of optimal medical investigations as a feature finding problem, and an algorithm for computing those investigations.
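The core idea above — ranking candidate investigations by per-patient feature attribution — can be sketched as follows. This is a minimal illustration, not the authors’ implementation: it assumes a toy linear diagnostic model, for which the exact Shapley attribution of feature i is w_i · (x_i − E[x_i]); the investigation names, weights, and patient values are invented for the example.

```python
# Toy per-patient "feature finding": rank investigations by the magnitude
# of their attribution to a linear diagnostic model's output.
# All names and numbers below are hypothetical illustrations.

INVESTIGATIONS = ["visual_acuity", "intraocular_pressure",
                  "oct_scan", "fundus_photo"]

# Fixed weights of an assumed (already trained) linear diagnostic model.
WEIGHTS = {"visual_acuity": -1.2, "intraocular_pressure": 2.0,
           "oct_scan": 1.5, "fundus_photo": 0.3}

# Population baseline (mean feature values); attributions are measured
# against this baseline, as in SHAP for linear models.
BASELINE = {"visual_acuity": 0.8, "intraocular_pressure": 16.0,
            "oct_scan": 0.0, "fundus_photo": 0.0}

def attributions(patient):
    """Exact Shapley value of each feature under a linear model:
    w_i * (x_i - E[x_i])."""
    return {f: WEIGHTS[f] * (patient[f] - BASELINE[f])
            for f in INVESTIGATIONS}

def recommend(patient, k=2):
    """Select the k investigations with the largest absolute attribution,
    i.e. those most influential for this patient's diagnosis."""
    attr = attributions(patient)
    return sorted(INVESTIGATIONS, key=lambda f: abs(attr[f]),
                  reverse=True)[:k]

# Example patient: low acuity, raised intraocular pressure, abnormal OCT.
patient = {"visual_acuity": 0.3, "intraocular_pressure": 28.0,
           "oct_scan": 1.0, "fundus_photo": 0.0}
print(recommend(patient))  # ['intraocular_pressure', 'oct_scan']
```

In practice the attribution step would come from a model-agnostic explainer such as SHAP or LIME applied to the trained diagnostic model, rather than a closed-form linear rule; the selection step (top-k by attribution magnitude) is the same.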
This work is supported by the Ministry of Education, Singapore (Grant RG17/22).
Notes
1. We also experimented with 50% and 75%, but the resulting concordance with diagnostic guidance remained consistent.
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Kumar, R.S. et al. (2024). On Identifying Effective Investigations with Feature Finding Using Explainable AI: An Ophthalmology Case Study. In: Finkelstein, J., Moskovitch, R., Parimbelli, E. (eds) Artificial Intelligence in Medicine. AIME 2024. Lecture Notes in Computer Science(), vol 14845. Springer, Cham. https://doi.org/10.1007/978-3-031-66535-6_34
Print ISBN: 978-3-031-66534-9
Online ISBN: 978-3-031-66535-6