Abstract
The interdisciplinary field of explainable artificial intelligence (XAI) aims to foster human understanding of black-box machine learning models through explanation-generating methods. In practice, Shapley explanations are widely used. However, they are typically presented as visualizations, leaving their interpretation to the user; as a result, even ML experts have difficulty interpreting them appropriately. Combining visual cues with textual rationales, on the other hand, has been shown to facilitate understanding and communicative effectiveness. Further, the social sciences suggest that explanation is a social and iterative process between explainer and explainee, so interactivity should be a guiding principle in the design of explanation facilities. Therefore, we (i) briefly review prior research on interactivity and naturalness in XAI, (ii) design and implement the interactive explanation interface SHAPRap, which provides local and global Shapley explanations in an accessible format, and (iii) evaluate our prototype in a formative user study with 16 participants in a loan application scenario. We believe that interactive explanation facilities that provide multiple levels of explanation offer a promising approach for empowering humans to better understand a model’s behavior and its limitations at a local as well as a global level. With our work, we inform designers of XAI systems about human-centric ways to tailor explanation interfaces to end users.
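To make the kind of explanations described above concrete, the following is a minimal, illustrative sketch (not the SHAPRap implementation) of how local Shapley values for a single loan applicant could be estimated, aggregated into a global importance ranking, and verbalized as a short textual rationale. The synthetic data, feature names, and the sampling-based estimator are assumptions made for illustration only.

```python
# Illustrative sketch only: sampling-based Shapley estimation on synthetic loan data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["income", "loan_amount", "credit_history", "dependents"]  # hypothetical

# Synthetic stand-in for a loan dataset and a simple risk model.
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] - X[:, 1] + 2 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def shapley_values(f, x, background, n_samples=100):
    """Monte-Carlo Shapley estimate for one instance x.

    For each sampled permutation, features of x are revealed one by one on top of a
    randomly drawn background row; the change in f when feature j is revealed is
    its marginal contribution for that coalition.
    """
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_samples):
        order = rng.permutation(d)
        z = background[rng.integers(len(background))].copy()
        for j in order:
            before = f(z.reshape(1, -1))[0]
            z[j] = x[j]
            after = f(z.reshape(1, -1))[0]
            phi[j] += after - before
    return phi / n_samples

def predict_risk(X_):
    return model.predict_proba(X_)[:, 1]

# Local explanation for one applicant.
x = X[0]
phi = shapley_values(predict_risk, x, background=X)

# Global explanation: mean absolute Shapley value over a small sample of applicants.
global_importance = np.mean(
    [np.abs(shapley_values(predict_risk, xi, background=X, n_samples=20)) for xi in X[:10]],
    axis=0,
)

# Verbalize the local explanation as a short textual rationale.
top = int(np.argmax(np.abs(phi)))
direction = "increases" if phi[top] > 0 else "decreases"
print(f"Predicted default risk: {predict_risk(x.reshape(1, -1))[0]:.0%}. "
      f"'{features[top]}' {direction} the risk the most (contribution {phi[top]:+.2f}).")
print("Global importance ranking:", [features[i] for i in np.argsort(-global_importance)])
```

In practice a dedicated library such as SHAP would replace the sampling loop; the sketch only shows how local attributions, their global aggregation, and a templated rationale relate to each other.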
Notes
- 1.
- 2.
- 3. We re-framed the Loan_Status column to represent the default risk and the Credit_History column to represent a negative item on a credit report (a hypothetical sketch of this re-framing follows these notes).
- 4. Level 1: "I understand which features the AI has access to and what the AI predicts as an output." Level 4: "I understand which features are more important than others for the AI prediction." Level 7: "I understand how much individual feature values influence the AI prediction and which feature values depend on others."
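As a concrete illustration of the re-framing described in note 3, the following hypothetical pandas sketch flips the label and credit-history semantics; the column encodings ('Y'/'N' and 1.0/0.0) are assumptions and not taken from the paper.

```python
import pandas as pd

# Hypothetical toy data; column names follow note 3, encodings are assumed.
df = pd.DataFrame({
    "Loan_Status": ["Y", "N", "Y"],
    "Credit_History": [1.0, 0.0, 1.0],
})

# Re-frame the approval label as a default-risk label: a rejected application
# ('N') is treated as the positive (risky) class.
df["Default_Risk"] = (df["Loan_Status"] == "N").astype(int)

# Re-frame Credit_History so that 0 (no clean history) flags a negative item
# on the credit report.
df["Negative_Credit_Item"] = (df["Credit_History"] == 0).astype(int)

print(df[["Default_Risk", "Negative_Credit_Item"]])
```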
Copyright information
© 2021 IFIP International Federation for Information Processing
About this paper
Cite this paper
Chromik, M. (2021). Making SHAP Rap: Bridging Local and Global Insights Through Interaction and Narratives. In: Ardito, C., et al. (eds.) Human-Computer Interaction – INTERACT 2021. Lecture Notes in Computer Science, vol. 12933. Springer, Cham. https://doi.org/10.1007/978-3-030-85616-8_37
DOI: https://doi.org/10.1007/978-3-030-85616-8_37
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-85615-1
Online ISBN: 978-3-030-85616-8