Abstract
Artificial intelligence supports healthcare professionals with predictive modeling and is transforming clinical decision-making. This study addresses the need for fairness and explainability in healthcare AI to ensure equitable outcomes across diverse patient demographics. Focusing on the prediction of sepsis-related mortality, we propose a method that first learns a performance-optimized predictive model and then applies transfer learning to produce a model with better fairness. Our method also introduces a novel permutation-based feature importance algorithm that elucidates how much each feature contributes to the fairness of predictions. Unlike existing explainability methods, which explain feature contributions to predictive performance, our method bridges the gap in understanding how each feature contributes to fairness. This advancement is pivotal given sepsis's high mortality rate and its role in one-third of hospital deaths. Our method not only helps identify and mitigate biases in the predictive model but also fosters trust among healthcare stakeholders by improving the transparency and fairness of model predictions, thereby contributing to more equitable and trustworthy healthcare delivery.
C.-H. Chang and X. Wang contributed equally to this work.
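The permutation-based fairness importance described in the abstract can be illustrated with a short sketch. This is a minimal illustration of the general idea under stated assumptions, not the authors' exact algorithm: it assumes a scikit-learn-style classifier, a binary sensitive attribute encoded as 0/1, and the equalized-odds gap (the larger of the group differences in true- and false-positive rates) as the fairness metric. All function names here are hypothetical.

import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    # Largest group difference in true-positive and false-positive rates
    # (assumes a binary sensitive attribute encoded as 0/1).
    gaps = []
    for label in (0, 1):
        rates = [y_pred[(group == g) & (y_true == label)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

def fairness_permutation_importance(model, X, y, group, n_repeats=10, seed=0):
    # Analogue of Breiman-style permutation importance, scored against a
    # fairness metric instead of accuracy: permute one feature at a time
    # and record how much the fairness gap changes. A positive mean delta
    # means the intact feature helps keep predictions fair.
    rng = np.random.default_rng(seed)
    base = equalized_odds_gap(y, model.predict(X), group)
    scores = []
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature's association with outcomes
            deltas.append(equalized_odds_gap(y, model.predict(Xp), group) - base)
        scores.append(float(np.mean(deltas)))
    return scores

Ranking features by these scores indicates, in this sketch, which inputs a model relies on to maintain fairness; the fairness metric could equally be swapped for demographic parity or another group criterion.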
Acknowledgement
This work was supported in part by the National Science Foundation under Grants IIS-1741306 and IIS-2235548, and by the Department of Defense under Grant W91XWH-05-1-023. This material is based upon work supported by (while serving at) the National Science Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Chang, C.-H., Wang, X., Yang, C.C. (2024). Explainable AI for Fair Sepsis Mortality Predictive Model. In: Finkelstein, J., Moskovitch, R., Parimbelli, E. (eds.) Artificial Intelligence in Medicine. AIME 2024. Lecture Notes in Computer Science, vol. 14845. Springer, Cham. https://doi.org/10.1007/978-3-031-66535-6_29
Print ISBN: 978-3-031-66534-9
Online ISBN: 978-3-031-66535-6