Human-AI Interaction Paradigm for Evaluating Explainable Artificial Intelligence | SpringerLink
Human-AI Interaction Paradigm for Evaluating Explainable Artificial Intelligence

  • Conference paper
HCI International 2022 Posters (HCII 2022)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1580)

Abstract

This article proposes a framework and corresponding paradigm for evaluating the explanations provided by explainable artificial intelligence (XAI). It argues for the need for evaluation paradigms: different people performing different tasks in different contexts will react differently to different explanations. It reviews previous research on evaluating XAI explanations and identifies the main contribution of this work: a flexible paradigm that researchers can use to evaluate XAI models, rather than a fixed list of factors. The article then outlines a framework that posits causal relationships between five key factors: mental models, probability estimates, trust, knowledge, and performance. It then describes a paradigm consisting of training, testing, and evaluation phases. The work is discussed in relation to predictive models, guidelines for XAI developers, and adaptive explainable artificial intelligence: a recommender system capable of predicting the preferred explanations for a specific domain expert on a particular task.
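
As a minimal illustrative sketch (not taken from the paper itself), the framework's five factors can be modelled as nodes in a directed causal graph, with a simple traversal showing how an effect on one factor would propagate to the others. The specific edges below are placeholder assumptions, since the abstract names the factors but does not enumerate their causal links.

    from collections import deque

    # The five factors named in the framework, as nodes of a directed causal
    # graph. The edges are placeholder assumptions, for illustration only;
    # the abstract does not specify which factor influences which.
    CAUSAL_EDGES = {
        "knowledge": ["mental models"],
        "mental models": ["probability estimates", "trust"],
        "probability estimates": ["performance"],
        "trust": ["performance"],
        "performance": [],
    }

    def downstream(factor):
        """Return the factors causally downstream of `factor` under the assumed edges."""
        seen, queue = set(), deque([factor])
        while queue:
            for nxt in CAUSAL_EDGES[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    # Example: under these assumed edges, an explanation that improves a user's
    # knowledge should also shift their mental model, probability estimates,
    # trust, and ultimately task performance.
    print(downstream("knowledge"))

In the paradigm's training, testing, and evaluation phases, each factor would be measured before and after exposure to an explanation, so that causal paths like the ones assumed above can be tested rather than presupposed.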

Author information

Corresponding author

Correspondence to Matija Franklin.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Franklin, M., Lagnado, D. (2022). Human-AI Interaction Paradigm for Evaluating Explainable Artificial Intelligence. In: Stephanidis, C., Antona, M., Ntoa, S. (eds) HCI International 2022 Posters. HCII 2022. Communications in Computer and Information Science, vol 1580. Springer, Cham. https://doi.org/10.1007/978-3-031-06417-3_54

  • DOI: https://doi.org/10.1007/978-3-031-06417-3_54

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-06416-6

  • Online ISBN: 978-3-031-06417-3

  • eBook Packages: Computer Science, Computer Science (R0)
