Public attitudes value interpretability but prioritize accuracy in Artificial Intelligence

Anne-Marie Nussberger et al. Nat Commun. 2022 Oct 3;13(1):5821.
doi: 10.1038/s41467-022-33417-3.

Abstract

As Artificial Intelligence (AI) proliferates across important social institutions, many of the most powerful AI systems available are difficult to interpret for end-users and engineers alike. Here, we sought to characterize public attitudes towards AI interpretability. Across seven studies (N = 2475), we demonstrate robust and positive attitudes towards interpretable AI among non-experts that generalize across a variety of real-world applications and follow predictable patterns. Participants value interpretability positively across different levels of AI autonomy and accuracy, and rate interpretability as more important for AI decisions involving high stakes and scarce resources. Crucially, when AI interpretability trades off against AI accuracy, participants prioritize accuracy over interpretability under the same conditions driving positive attitudes towards interpretability in the first place: amidst high stakes and scarce resources. These attitudes could drive a proliferation of AI systems making high-impact ethical decisions that are difficult to explain and understand.

Conflict of interest statement

The authors declare no competing interests.

Figures

Fig. 1. Attitudes towards interpretability across real-world AI applications.
The joyplot visualizes the distributions of interpretability ratings, averaged across the recommend and decide versions. Participants (N = 170) responded to the question “how important is it that the AI in this application is explainable, even if it performs accurately?” on a 5-point rating scale (1 = not at all important, 5 = extremely important).
Fig. 2. Exemplary instructions from Study 2.
Schematic representation of the instructions for the vaccine application with its four versions. Each version was shown on a separate page, with the same general scenario described at the top. The depicted bolding and underlining correspond to the formatting shown to participants.
Fig. 3. Results for Study 2.
Participants’ responses from Study 2 (N = 84) to the question “In this case, how important is it that the AI is explainable?” on a continuous slider-scale from “not at all important” (1) to “extremely important” (5). All panels show the jittered raw data, its density, the point estimate of the mean with its 95% confidence intervals, and interquartile ranges; all grouped by stakes (indicated by fill colour; low stakes = yellow, high stakes = red) and scarcity (indicated on the x-axes). In summary, participants rated interpretability as more important in high-stakes and high-scarcity situations. The main effects of stakes and scarcity were not qualified by an interaction. a Data aggregated across all five applications; triangle-shaped data points represent averages for the five applications. b–f Non-aggregated data for each individual application; circle-shaped data points represent individual responses.
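
The raincloud-style panels described above (jittered raw ratings, a density curve, and the mean with its 95% confidence interval, grouped by stakes and scarcity) can be approximated with standard plotting tools. The following is a minimal sketch, not the authors' analysis code: the ratings, condition means, and the normal-approximation confidence interval are all simulated or assumed purely to illustrate the plot format.

```python
# Minimal sketch of a raincloud-style panel: jittered raw ratings ("rain"),
# a half-density ("cloud"), and the mean with a 95% confidence interval,
# grouped by stakes and scarcity. All ratings below are SIMULATED for
# illustration only; they are not the study data.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 1-5 interpretability ratings for the four stakes x scarcity cells.
conditions = {
    ("low scarcity", "low stakes"):   rng.normal(3.2, 0.8, 84),
    ("low scarcity", "high stakes"):  rng.normal(3.8, 0.7, 84),
    ("high scarcity", "low stakes"):  rng.normal(3.5, 0.8, 84),
    ("high scarcity", "high stakes"): rng.normal(4.1, 0.6, 84),
}

fig, ax = plt.subplots(figsize=(7, 4))
for i, ((scarcity, stakes), ratings) in enumerate(conditions.items()):
    ratings = np.clip(ratings, 1, 5)  # keep ratings on the 1-5 scale
    colour = "gold" if stakes == "low stakes" else "firebrick"

    # Jittered raw data points.
    jitter = rng.uniform(-0.12, 0.12, ratings.size)
    ax.scatter(i + jitter, ratings, s=8, alpha=0.4, color=colour)

    # Half-violin density estimate next to the points.
    grid = np.linspace(1, 5, 200)
    density = stats.gaussian_kde(ratings)(grid)
    ax.fill_betweenx(grid, i + 0.18, i + 0.18 + density * 0.4,
                     color=colour, alpha=0.6)

    # Mean with a normal-approximation 95% confidence interval.
    mean = ratings.mean()
    ci = 1.96 * ratings.std(ddof=1) / np.sqrt(ratings.size)
    ax.errorbar(i, mean, yerr=ci, fmt="o", color="black", capsize=4)

ax.set_xticks(range(len(conditions)))
ax.set_xticklabels([f"{scarcity}\n{stakes}" for scarcity, stakes in conditions])
ax.set_ylabel("Importance of interpretability (1-5)")
ax.set_ylim(0.8, 5.2)
plt.tight_layout()
plt.show()
```

Substituting the actual per-condition ratings for the simulated arrays would reproduce the layout of a single panel; repeating it per application would give the multi-panel figure.
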
Fig. 4. Results for Study 3A.
Participants’ responses from Study 3A (N = 261) to the question “How important is it that the given AI model is explainable?” on continuous slider-scales from “not at all important” (1) to “extremely important” (5). For each AI model with a given level of accuracy, there was a separate slider-scale. Panels show the jittered raw data, its density, the point estimate of the mean with its 95% confidence intervals, and interquartile ranges. Overall, there was a slight tendency for participants to rate interpretability as less important for more accurate models. a Data aggregated across all five applications; triangle-shaped data points represent averages for each of the five applications. b–f Non-aggregated data for each individual application; circle-shaped data points represent individual responses.
Fig. 5. Dependent variable and results for Studies 3B and 3C.
a The dependent variable: participants were asked to move the slider to the position representing their preference in the interpretability-accuracy tradeoff. The order of the attributes, and hence the direction of the slider, was counterbalanced across participants. b Tradeoff preferences from Study 3B (N = 112; within-subjects design), aggregated across all five applications. c Tradeoff preferences from Study 3C (N = 1344; between-subjects design), aggregated across all five applications.
