J Am Med Inform Assoc. 2021 Oct 12;28(11):2393-2403. doi: 10.1093/jamia/ocab148.

Bias and fairness assessment of a natural language processing opioid misuse classifier: detection and mitigation of electronic health record data disadvantages across racial subgroups


Hale M Thompson et al. J Am Med Inform Assoc.

Abstract

Objectives: To assess fairness and bias of a previously validated machine learning opioid misuse classifier.

Materials and methods: Two experiments were conducted with the classifier's original (n = 1000) and external validation (n = 53 974) datasets from 2 health systems. Bias was assessed by testing for differences in type II error rates across racial/ethnic subgroups (Black, Hispanic/Latinx, White, Other) using bootstrapped 95% confidence intervals. A local surrogate model was estimated to interpret the classifier's predictions by race and averaged globally across the datasets. Subgroup analyses and post-hoc recalibrations were conducted in an attempt to mitigate biased metrics.
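
As a simplified illustration of the bias assessment described above (not the study's code), the following Python sketch computes each racial/ethnic subgroup's false negative rate with a bootstrapped 95% percentile confidence interval. The data frame and its column names ("race", "opioid_misuse", "classifier_pred") are hypothetical.

import numpy as np
import pandas as pd

def false_negative_rate(y_true, y_pred):
    # FN / (FN + TP): the share of true positives the classifier misses.
    positives = y_true == 1
    if positives.sum() == 0:
        return np.nan
    return float(np.mean(y_pred[positives] == 0))

def bootstrap_fnr_by_subgroup(df, group_col="race", n_boot=2000, alpha=0.05, seed=0):
    # Point estimate and bootstrap percentile CI of the FNR for each subgroup.
    rng = np.random.default_rng(seed)
    rows = {}
    for group, sub in df.groupby(group_col):
        y_true = sub["opioid_misuse"].to_numpy()
        y_pred = sub["classifier_pred"].to_numpy()
        boots = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(sub), len(sub))  # resample with replacement
            boots.append(false_negative_rate(y_true[idx], y_pred[idx]))
        lo, hi = np.nanpercentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        rows[group] = {"FNR": false_negative_rate(y_true, y_pred),
                       "CI_lower": lo, "CI_upper": hi}
    return pd.DataFrame(rows).T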

Results: We identified bias in the false negative rate (FNR = 0.32) of the Black subgroup compared to the FNR (0.17) of the White subgroup. Top features included "heroin" and "substance abuse" across subgroups. Post-hoc recalibrations eliminated bias in FNR with minimal changes in other subgroup error metrics. The Black FNR subgroup had higher risk scores for readmission and mortality than the White FNR subgroup, and a higher mortality risk score than the Black true positive subgroup (P < .05).
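
The top features reported above come from local surrogate interpretation averaged globally (see Materials and methods). A minimal sketch of that idea, assuming a LIME text explainer and a hypothetical predict_proba_fn that maps a list of note texts to class probabilities; this is not the authors' pipeline.

from collections import defaultdict
from lime.lime_text import LimeTextExplainer

def global_top_features(notes, predict_proba_fn, num_features=10):
    # Explain each note locally, then average absolute weights across notes
    # to obtain a global feature ranking for a subgroup's notes.
    explainer = LimeTextExplainer(class_names=["no misuse", "opioid misuse"])
    totals = defaultdict(float)
    counts = defaultdict(int)
    for note in notes:
        explanation = explainer.explain_instance(
            note, predict_proba_fn, num_features=num_features)
        for feature, weight in explanation.as_list():
            totals[feature] += abs(weight)
            counts[feature] += 1
    ranked = sorted(totals, key=lambda f: totals[f] / counts[f], reverse=True)
    return [(f, totals[f] / counts[f]) for f in ranked]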

Discussion: The Black FNR subgroup had the greatest severity of disease and risk for poor outcomes. Similar features for predicting opioid misuse appeared across subgroups, yet inequities persisted. Post-hoc techniques mitigated bias in the type II error rate without introducing substantial type I error. From model design through deployment, bias and data disadvantages should be systematically addressed.
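
The cut point adjustment shown in Figure 5 can be illustrated, in simplified form, by selecting a decision threshold per subgroup so that each subgroup's false negative rate falls at or below a target. The column names and target value below are hypothetical, and this is not the study's recalibration procedure.

import numpy as np

def threshold_for_target_fnr(scores, y_true, target_fnr):
    # Largest cut point on a 0-1 grid whose FNR does not exceed the target,
    # where a case is flagged positive when its score is >= the cut point.
    positive_scores = scores[y_true == 1]
    for cut in np.linspace(1.0, 0.0, 101):
        if np.mean(positive_scores < cut) <= target_fnr:
            return float(cut)
    return 0.0

def subgroup_cut_points(df, target_fnr, group_col="race"):
    # One cut point per subgroup; "misuse_prob" and "opioid_misuse" are hypothetical columns.
    return {group: threshold_for_target_fnr(sub["misuse_prob"].to_numpy(),
                                            sub["opioid_misuse"].to_numpy(),
                                            target_fnr)
            for group, sub in df.groupby(group_col)}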

Conclusion: Standardized, transparent bias assessments are needed to improve trustworthiness in clinical machine learning models.

Keywords: bias and fairness; interpretability; machine learning; natural language processing; opioid use disorder; structural racism.


Figures

Figure 1. Plot of bias and fairness point estimates with bootstrapped 95% confidence intervals for the NLP opioid misuse classifier’s predictions for the external validation cohort.

Figure 2. NLP opioid misuse classifier’s top features for positive cases in original development dataset (2007-2017).

Figure 3. NLP opioid misuse classifier’s top features for positive cases in Black subgroup of external validation dataset (2017–2019).

Figure 4. NLP opioid misuse classifier’s top features for positive cases in White subgroup of external validation dataset (2017–2019).

Figure 5. Plot of bias and fairness metrics of the NLP opioid misuse classifier’s prediction for the external validation cohort with cut point adjustment by subgroup.

Figure 6. Plot of bias and fairness metrics of the NLP opioid misuse classifier’s prediction for the external validation cohort after model recalibration by subgroup.

