The Wildcard XAI: from a Necessity, to a Resource, to a Dangerous Decoy

  • Conference paper
Explainable and Transparent AI and Multi-Agent Systems (EXTRAAMAS 2024)

Abstract

Interest in Explainable Artificial Intelligence (henceforth XAI) has grown among researchers and AI developers in recent years. Indeed, the development of highly interactive technologies that collaborate closely with users has made explainability a necessity: it is meant to reduce the mistrust and the sense of unpredictability that AI can create, especially among non-experts. Moreover, the potential of XAI as a valuable resource has been recognized, since it can make intelligent systems more user-friendly and reduce the negative impact of black-box systems. Building on these considerations, the paper discusses the potential dangers of large language models (LLMs) that generate explanations to support the outcomes they produce. While these models may give users the illusion of control over the system's responses, their effects are in fact persuasive rather than explanatory. It is therefore argued that XAI, appropriately regulated, should be a resource that empowers users of AI systems, and that merely apparent explanations should be flagged as such, to avoid misleading users or circumventing their judgment.



Acknowledgments

This work is partially supported by the Joint Doctorate grant agreement No. 814177 LAST-JD-Rights of Internet of Everything and by the CHIST-ERA grant CHIST-ERA19-XAI-005, funded through (i) the Swiss National Science Foundation (G.A. 20CH21_195530) and (ii) the Luxembourg National Research Fund (G.A. INTER/CHIST/19/14589586).

Author information


Correspondence to Rachele Carli.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Carli, R., Calvaresi, D. (2024). The Wildcard XAI: from a Necessity, to a Resource, to a Dangerous Decoy. In: Calvaresi, D., et al. (eds.) Explainable and Transparent AI and Multi-Agent Systems. EXTRAAMAS 2024. Lecture Notes in Computer Science, vol. 14847. Springer, Cham. https://doi.org/10.1007/978-3-031-70074-3_13


  • DOI: https://doi.org/10.1007/978-3-031-70074-3_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-70073-6

  • Online ISBN: 978-3-031-70074-3

  • eBook Packages: Computer Science, Computer Science (R0)
