
Making SHAP Rap: Bridging Local and Global Insights Through Interaction and Narratives

  • Conference paper
Human-Computer Interaction – INTERACT 2021 (INTERACT 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12933)

Abstract

The interdisciplinary field of explainable artificial intelligence (XAI) aims to foster human understanding of black-box machine learning models through explanation-generating methods. In practice, Shapley explanations are widely used. However, they are often presented as visualizations and thus leave their interpretation to the user. As such, even ML experts have difficulties interpreting them appropriately. On the other hand, combining visual cues with textual rationales has been shown to facilitate understanding and communicative effectiveness. Further, the social sciences suggest that explanations are a social and iterative process between the explainer and the explainee. Thus, interactivity should be a guiding principle in the design of explanation facilities. Therefore, we (i) briefly review prior research on interactivity and naturalness in XAI, (ii) design and implement the interactive explanation interface SHAPRap that provides local and global Shapley explanations in an accessible format, and (iii) evaluate our prototype in a formative user study with 16 participants in a loan application scenario. We believe that interactive explanation facilities that provide multiple levels of explanation offer a promising approach for empowering humans to better understand a model’s behavior and its limitations on a local as well as a global level. With our work, we inform designers of XAI systems about human-centric ways to tailor explanation interfaces to end users.
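
To make the abstract's distinction between local and global Shapley explanations concrete, here is a minimal, hypothetical sketch using the open-source shap package (see the Notes below). The synthetic loan data, the gradient-boosting model, and all variable names are illustrative assumptions, not the authors' SHAPRap implementation.

```python
# Minimal sketch (not the authors' SHAPRap code): the local and global Shapley
# explanations that an explanation interface could translate into narratives.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical stand-in for the loan-application data used in the study.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "ApplicantIncome": rng.normal(5000, 1500, 500),
    "LoanAmount": rng.normal(150, 40, 500),
    "Credit_History": rng.integers(0, 2, 500),
})
y = ((X["LoanAmount"] / X["ApplicantIncome"] > 0.03)
     & (X["Credit_History"] == 0)).astype(int)  # arbitrary synthetic "default risk" label

model = GradientBoostingClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)
explanation = explainer(X)  # Shapley values for every instance

# Local explanation: feature contributions for one applicant.
shap.plots.waterfall(explanation[0])

# Global explanation: mean absolute Shapley value per feature across all applicants.
shap.plots.bar(explanation)
```

Such raw Shapley values are the input that an interface like SHAPRap would, per the abstract, render as interactive visual and textual explanations rather than as bare plots.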

Notes

  1. github.com/slundberg/shap.

  2. datahack.analyticsvidhya.com/contest/practice-problem-loan-prediction-iii/.

  3. We re-framed the Loan_Status column to represent the default risk and the Credit_History column to represent a negative item on a credit report (see the sketch after these notes).

  4. Level 1: "I understand which features the AI has access to and what the AI predicts as an output." Level 4: "I understand which features are more important than others for the AI prediction." Level 7: "I understand how much individual feature values influence the AI prediction and which feature values depend on others."
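
The re-framing described in note 3 could look like the following hypothetical pandas sketch; the column names come from the public loan-prediction dataset in note 2, but the file name and the exact 0/1 encodings are assumptions.

```python
import pandas as pd

# Hypothetical sketch of the re-framing in note 3; "train.csv" and the exact
# encodings are assumptions, not taken from the paper.
df = pd.read_csv("train.csv")  # loan-prediction data, see note 2

# Loan_Status ("Y"/"N") re-framed as a default-risk label, and Credit_History
# (1 = credit history meets guidelines) re-framed as a negative item on the report.
df["Default_Risk"] = (df["Loan_Status"] == "N").astype(int)
df["Negative_Credit_Item"] = (df["Credit_History"] == 0).astype(int)
```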

Author information

Corresponding author

Correspondence to Michael Chromik.

Copyright information

© 2021 IFIP International Federation for Information Processing

About this paper

Cite this paper

Chromik, M. (2021). Making SHAP Rap: Bridging Local and Global Insights Through Interaction and Narratives. In: Ardito, C., et al. (eds.) Human-Computer Interaction – INTERACT 2021. INTERACT 2021. Lecture Notes in Computer Science, vol. 12933. Springer, Cham. https://doi.org/10.1007/978-3-030-85616-8_37

  • DOI: https://doi.org/10.1007/978-3-030-85616-8_37

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-85615-1

  • Online ISBN: 978-3-030-85616-8

  • eBook Packages: Computer Science, Computer Science (R0)
