Quantifying the Demand for Explainability

  • Conference paper
Human-Computer Interaction – INTERACT 2021 (INTERACT 2021)

Abstract

Software that uses Artificial Intelligence technology such as Machine Learning is becoming ubiquitous, with even more applications ahead. Yet the very nature of these systems makes it hard to understand how they operate, creating a demand for explanations. While many approaches have been and are being developed, it remains unclear how strong this demand is across different domains, application types, and user groups. To assess this, we introduce a novel survey scale to quantify the demand for explainability. We also apply this scale to an exemplary set of applications, novel and traditional, in surveys with 212 participants, showing that interest in explainability is high in general, for intelligent systems but also for traditional software. While this validates the heightened interest in explainability, it also raises further questions, e.g. where we can find synergies or how intelligent systems require different explanations compared to traditional but equally complex software.

Notes

  1. https://www.rdocumentation.org/packages/psych/.
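
The footnote above refers to the R psych package, which provides psychometric routines such as reliability and factor analysis and was presumably used to analyse the proposed scale. As an illustration only, and not the authors' actual analysis, the following minimal Python sketch computes Cronbach's alpha for hypothetical Likert-scale responses; the cronbach_alpha helper and the data are invented for this example.

```python
# Minimal, hypothetical sketch: Cronbach's alpha for a set of Likert-scale items,
# a common step when validating a new survey scale. The data and helper below are
# invented for illustration; the paper itself points to the R `psych` package.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                                # number of scale items
    item_variances = items.var(axis=0, ddof=1)        # per-item sample variance
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 6 participants rating 4 items on a 5-point scale.
responses = np.array([
    [5, 4, 5, 4],
    [3, 3, 4, 3],
    [4, 4, 4, 5],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 4, 3, 3],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```

By convention, an alpha around 0.7 or above is typically read as acceptable internal consistency for a newly developed scale.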

Author information


Corresponding author

Correspondence to Thomas Weber.


Copyright information

© 2021 IFIP International Federation for Information Processing

About this paper

Cite this paper

Weber, T., Hußmann, H., Eiband, M. (2021). Quantifying the Demand for Explainability. In: Ardito, C., et al. (eds.) Human-Computer Interaction – INTERACT 2021. INTERACT 2021. Lecture Notes in Computer Science, vol 12933. Springer, Cham. https://doi.org/10.1007/978-3-030-85616-8_38

  • DOI: https://doi.org/10.1007/978-3-030-85616-8_38

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-85615-1

  • Online ISBN: 978-3-030-85616-8

  • eBook Packages: Computer Science, Computer Science (R0)
