
Study on the Helpfulness of Explainable Artificial Intelligence

  • Conference paper
  • Explainable Artificial Intelligence (xAI 2024)

Abstract

Explainable Artificial Intelligence (XAI) is essential for building advanced machine-learning-powered applications, especially in critical domains such as medical diagnostics or autonomous driving. Legal, business, and ethical requirements motivate the use of effective XAI, but the growing number of available methods makes it challenging to pick the right ones. Moreover, because explanations are highly context-dependent, measuring the effectiveness of XAI methods without users reveals only limited information and leaves out human factors such as the ability to understand an explanation. We propose to evaluate XAI methods via the user's ability to successfully perform a proxy task, designed such that good performance indicates that the explanation provides helpful information. In other words, we address the helpfulness of XAI for human decision-making. We further conducted a user study on state-of-the-art methods, which revealed differences in their ability to generate trust and skepticism and in users' ability to correctly judge whether an AI decision is right. Based on these results, we strongly recommend using and extending this approach for more objective, human-centered user studies that measure XAI performance in an end-to-end fashion.
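The proxy-task idea above can be made concrete with a small scoring sketch. This is purely illustrative and assumes a setup in which each trial shows a participant a model prediction together with an explanation, and the participant judges whether the prediction is correct; the names (`Trial`, `evaluate_responses`) are hypothetical and not taken from the paper or its accompanying repository.

```python
# Illustrative sketch of scoring a proxy task: participants judge, with the help
# of an explanation, whether the AI decision was correct. Names and data
# structures are assumptions for this example, not the authors' implementation.
from dataclasses import dataclass
from typing import List


@dataclass
class Trial:
    prediction_correct: bool   # ground truth: was the AI decision actually right?
    user_says_correct: bool    # participant's judgment after seeing the explanation


def evaluate_responses(trials: List[Trial]) -> dict:
    """Score how well participants separate right from wrong AI decisions."""
    tp = sum(t.prediction_correct and t.user_says_correct for t in trials)
    tn = sum(not t.prediction_correct and not t.user_says_correct for t in trials)
    fp = sum(not t.prediction_correct and t.user_says_correct for t in trials)
    fn = sum(t.prediction_correct and not t.user_says_correct for t in trials)
    total = len(trials)
    return {
        "accuracy": (tp + tn) / total if total else 0.0,
        "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,  # trust in correct decisions
        "specificity": tn / (tn + fp) if (tn + fp) else 0.0,  # skepticism toward wrong ones
    }


if __name__ == "__main__":
    demo = [Trial(True, True), Trial(False, False), Trial(False, True), Trial(True, True)]
    print(evaluate_responses(demo))
```

In this framing, sensitivity reflects justified trust in correct AI decisions and specificity reflects justified skepticism toward wrong ones, matching the trust/skepticism distinction made in the abstract.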

We express our gratitude to Fraunhofer Heinrich-Hertz-Institute for financially supporting our work.

Notes

  1. Complete link: https://github.com/tlabarta/helpfulnessofxai.

Author information

Corresponding author

Correspondence to Tobias Labarta.

Ethics declarations

Disclosure of Interests

The authors declare no conflict of interest.

Appendices

Appendix A Additional Visualizations

Fig. 6. Mean accuracy based on educational background, machine learning experience in years, self-assessment of the usefulness of machine learning experience, and self-assessment of visual impairment.

Fig. 7. Convergence of accuracy over the number of participants.

Fig. 8. Difference in accuracy between VGG16 and AlexNet.

Fig. 9. Summary of means for accuracy, sensitivity, and specificity for all examined XAI methods compared to the random baseline.
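For reference, the three metrics summarized in Fig. 9 follow the standard confusion-matrix definitions. Assuming, as an illustration (this is not stated in the caption), that the positive class is "the AI decision is correct", they read:

```latex
% Standard confusion-matrix metrics; TP/TN/FP/FN count participant judgments
% against the ground-truth correctness of the AI decision.
\[
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}.
\]
```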

Appendix B Demographic Overview of Participants

(See Table 4)

Table 4. Results of demographic questions

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Labarta, T., Kulicheva, E., Froelian, R., Geißler, C., Melman, X., von Klitzing, J. (2024). Study on the Helpfulness of Explainable Artificial Intelligence. In: Longo, L., Lapuschkin, S., Seifert, C. (eds) Explainable Artificial Intelligence. xAI 2024. Communications in Computer and Information Science, vol 2156. Springer, Cham. https://doi.org/10.1007/978-3-031-63803-9_16

  • DOI: https://doi.org/10.1007/978-3-031-63803-9_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-63802-2

  • Online ISBN: 978-3-031-63803-9

  • eBook Packages: Computer Science, Computer Science (R0)
