
Explainable AI for Fair Sepsis Mortality Predictive Model

  • Conference paper
Artificial Intelligence in Medicine (AIME 2024)

Abstract

Artificial intelligence supports healthcare professionals with predictive modeling and is transforming clinical decision-making. This study addresses the need for fairness and explainability in healthcare AI to ensure equitable outcomes across diverse patient demographics. Focusing on the prediction of sepsis-related mortality, we propose a method that first learns a performance-optimized predictive model and then applies transfer learning to produce a model with improved fairness. Our method also introduces a novel permutation-based feature importance algorithm that elucidates each feature's contribution to the fairness of the model's predictions. Unlike existing explainability methods, which concentrate on explaining feature contributions to predictive performance, our method uniquely bridges the gap in understanding how each feature contributes to fairness. This advancement is pivotal given sepsis's high mortality rate and its role in one-third of hospital deaths. Our method not only helps identify and mitigate biases in the predictive model but also fosters trust among healthcare stakeholders by improving the transparency and fairness of model predictions, contributing to more equitable and trustworthy healthcare delivery.
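The abstract describes a permutation-based feature importance algorithm that attributes fairness, rather than predictive accuracy, to individual features. The paper's exact algorithm and fairness metric are not reproduced here, so the following is only a minimal illustrative sketch under assumptions: it permutes one feature column at a time and records the change in a demographic-parity gap. The metric choice and the names `demographic_parity_gap` and `fairness_permutation_importance` are hypothetical, not the authors'.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in mean predicted positive rate between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def fairness_permutation_importance(model, X, group, n_repeats=10, seed=0):
    """For each feature, permute its column and measure the mean change in the
    fairness gap. A negative score means permuting the feature shrinks the gap
    (the feature contributes to unfairness); a positive score means the feature
    supports fairer predictions."""
    rng = np.random.default_rng(seed)
    base_gap = demographic_parity_gap(model.predict(X), group)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature's link to the outcome
            deltas.append(demographic_parity_gap(model.predict(Xp), group) - base_gap)
        importances[j] = np.mean(deltas)
    return importances
```

This mirrors classic permutation importance (Breiman-style), with the scored quantity swapped from an accuracy metric to a group-fairness gap; any fairness criterion, such as an equalized-odds gap, could be substituted for demographic parity.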

C.-H. Chang and X. Wang—Both authors contributed equally.



Acknowledgement

This work was supported in part by the National Science Foundation under the Grants IIS-1741306 and IIS-2235548, and by the Department of Defense under the Grant DoD W91XWH-05-1-023. This material is based upon work supported by (while serving at) the National Science Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Author information

Corresponding author

Correspondence to Christopher C. Yang.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Chang, C.-H., Wang, X., Yang, C.C. (2024). Explainable AI for Fair Sepsis Mortality Predictive Model. In: Finkelstein, J., Moskovitch, R., Parimbelli, E. (eds.) Artificial Intelligence in Medicine. AIME 2024. Lecture Notes in Computer Science, vol. 14845. Springer, Cham. https://doi.org/10.1007/978-3-031-66535-6_29

  • DOI: https://doi.org/10.1007/978-3-031-66535-6_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-66534-9

  • Online ISBN: 978-3-031-66535-6

  • eBook Packages: Computer Science (R0)
