Abstract
Artificial intelligence (AI) systems are increasingly adopted for decision support, behavioral change, and assistance in daily activities and decisions. It is therefore increasingly necessary to focus on design and interaction that, in addition to being functional, foster users’ acceptance and trust. Human-computer interaction (HCI) and human-robot interaction (HRI) studies have increasingly examined how communication means and interfaces can be exploited to enact deception. Despite the literal meaning often attributed to the term, deception does not always denote a merely manipulative intent. The expression “banal deception” has been theorized to refer specifically to design strategies that aim to facilitate interaction. Advances in explainable AI (XAI) could serve as a technical means to minimize the risk of distortive effects on people’s perceptions and will. However, this paper argues that both how explanations are provided and what they contain can exacerbate deceptive dynamics or even manipulate the end user. To avoid such consequences, this analysis proposes legal principles to which explanations must conform in order to mitigate the side effects of deception in HCI/HRI. These principles are made enforceable by assessing the impact of deception on end users through the concept of vulnerability (understood here as the rationalization of the inviolable right of human dignity) and through the control measures implemented in the given systems.
Acknowledgments
This work is partially supported by the Joint Doctorate grant agreement No. 814177 LAST-JD-Rights of Internet of Everything and by the CHIST-ERA grant CHIST-ERA19-XAI-005, funded by (i) the Swiss National Science Foundation (G.A. 20CH21_195530), (ii) the Italian Ministry for Universities and Research, (iii) the Luxembourg National Research Fund (G.A. INTER/CHIST/19/14589586), and (iv) the Scientific and Technological Research Council of Turkey (TÜBİTAK, G.A. 120N680).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Carli, R., Calvaresi, D. (2023). Reinterpreting Vulnerability to Tackle Deception in Principles-Based XAI for Human-Computer Interaction. In: Calvaresi, D., et al. (eds.) Explainable and Transparent AI and Multi-Agent Systems. EXTRAAMAS 2023. Lecture Notes in Computer Science, vol. 14127. Springer, Cham. https://doi.org/10.1007/978-3-031-40878-6_14
DOI: https://doi.org/10.1007/978-3-031-40878-6_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-40877-9
Online ISBN: 978-3-031-40878-6
eBook Packages: Computer Science, Computer Science (R0)