Algorithmic Fairness in Healthcare Data with Weighted Loss and Adversarial Learning

  • Conference paper

Intelligent Systems and Applications (IntelliSys 2023)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 824)


Abstract

Fairness with respect to sensitive or protected attributes such as race, gender, and age group is of great importance in the healthcare domain, and group fairness is one of the principal criteria. However, most existing mitigation techniques focus on tuning the training algorithm while overlooking the possibility that the training data itself is the primary source of biased outcomes. In this work, we address two sensitive attributes (age group and gender) with empirical evaluations of systemic inflammatory response syndrome (SIRS) classification on a dataset extracted from electronic health records (EHRs), with the goal of improving equity in outcomes. Machine learning (ML)-based technologies are increasingly prevalent in hospitals; frameworks are therefore needed that account for performance trade-offs across sensitive patient attributes during model training and allow organizations to use their ML resources in ways that are aware of potential fairness and equity issues. To this end, we experiment with several strategies to reduce disparities in algorithmic performance with respect to gender and age group. We combine a sample- and label-balancing technique based on a weighted loss with adversarial learning on an observational cohort derived from EHRs to obtain a “fair” SIRS classification model with minimized discrepancy in error rates across groups. We show experimentally that our strategy can align the distribution of SIRS classification outcomes across several groups simultaneously for models built from high-dimensional EHR data.
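As a rough illustration of the approach described in the abstract, the sketch below combines a class-weighted cross-entropy loss (label balancing) with an adversary that tries to predict the sensitive attribute (e.g., gender or age group) from the classifier's hidden representation, connected through a gradient-reversal layer. This is a minimal sketch under assumed design choices, not the authors' implementation; the layer sizes, the names FairSIRSClassifier and training_step, and the weighting scheme are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): weighted task loss +
# gradient-reversal adversary on the sensitive attribute.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips and scales gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class FairSIRSClassifier(nn.Module):
    def __init__(self, n_features, n_sensitive_groups, hidden=64, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.sirs_head = nn.Linear(hidden, 2)                   # SIRS vs. no SIRS
        self.adversary = nn.Linear(hidden, n_sensitive_groups)  # predicts e.g. age group

    def forward(self, x):
        h = self.encoder(x)
        y_logits = self.sirs_head(h)
        # The adversary sees the representation only through the gradient-reversal
        # layer, so minimizing its loss pushes the encoder to hide group information.
        a_logits = self.adversary(GradReverse.apply(h, self.lambd))
        return y_logits, a_logits


def training_step(model, optimizer, x, y, a, class_weights):
    """One optimization step: class-weighted task loss plus (reversed) adversarial loss."""
    task_loss_fn = nn.CrossEntropyLoss(weight=class_weights)  # label balancing
    adv_loss_fn = nn.CrossEntropyLoss()
    y_logits, a_logits = model(x)
    loss = task_loss_fn(y_logits, y) + adv_loss_fn(a_logits, a)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Example usage with random data (shapes only):
# model = FairSIRSClassifier(n_features=40, n_sensitive_groups=2)
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# x = torch.randn(32, 40); y = torch.randint(0, 2, (32,)); a = torch.randint(0, 2, (32,))
# training_step(model, opt, x, y, a, class_weights=torch.tensor([1.0, 4.0]))
```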

ELISE STUDY GROUP: Louisa Bode\(^{\;a}\); Marcel Mast\(^{\;a}\); Antje Wulff\(^{\;a ,\; d}\); Michael Marschollek\(^{\;a}\); Sven Schamer\(^{\;b}\); Henning Rathert\(^{\;b}\); Thomas Jack\(^{\;b}\); Philipp Beerbaum\(^{\;b}\); Nicole Rübsamen\(^{\;c}\); Julia Böhnke\(^{\;c}\); André Karch\(^{\;c}\); Pronaya Prosun Das\(^{\;e}\); Lena Wiese\(^{\;e}\); Christian Groszweski-Anders\(^{\;f}\); Andreas Haller\(^{\;f}\); Torsten Frank\(^{\;f}\)

\(^{a}\)Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Hannover, Germany.

\(^{b}\)Department of Pediatric Cardiology and Intensive Care Medicine, Hannover Medical School, Hannover, Germany.

\(^{c}\)Institute of Epidemiology and Social Medicine, University of Muenster, Muenster, Germany.

\(^{d}\)Big Data in Medicine, Department of Health Services Research, School of Medicine and Health Sciences, Carl von Ossietzky University Oldenburg, Oldenburg, Germany.

\(^{e}\)Research Group Bioinformatics, Fraunhofer Institute for Toxicology and Experimental Medicine, Hannover, Germany.

\(^{f}\)medisite GmbH, Hannover, Germany.



Acknowledgment

The ELISE project is partially funded by the Federal Ministry of Health (Grant No. 2520DAT66A). This work was also partially supported by the Fraunhofer Internal Programs under Grant No. Attract 042-601000. Ethics approval for the use of routine data was given by the Ethics Committee of Hannover Medical School (approval number 9819_BO_S_2021). We would like to thank our colleagues from the MHH Information Technology (MIT) department of Hannover Medical School for their support.

Author information


Corresponding author

Correspondence to Pronaya Prosun Das.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Das, P.P., Mast, M., Wiese, L., Jack, T., Wulff, A., ELISE STUDY GROUP. (2024). Algorithmic Fairness in Healthcare Data with Weighted Loss and Adversarial Learning. In: Arai, K. (eds) Intelligent Systems and Applications. IntelliSys 2023. Lecture Notes in Networks and Systems, vol 824. Springer, Cham. https://doi.org/10.1007/978-3-031-47715-7_18
