Abstract
The analysis presented here focuses on the interaction between social robots and humans. It is argued that, despite the multidisciplinary debate on the theme, social robots must ontologically be deemed objects. Pleasant design and the simulation of intelligence, as well as of social and emotional competences, help convey acceptability and foster interaction. However, they may also lead to forms of manipulation that can affect users’ will and undermine their physical and psychological integrity. This raises the need for a legal framework able to guarantee a truly human-centred development of new technologies and to ensure the protection of the people involved in the interaction. Therefore, the recent European proposal for a regulation, the Artificial Intelligence Act, is examined. In particular, the section on prohibited practices is critically analysed so as to highlight the controversial aspects of such an approach. Finally, human dignity is suggested as a balancing principle to address the issues related to user manipulation in the human-robot interaction domain.
Cite this paper
Bertolini, A., Carli, R. (2022). Human-Robot Interaction and User Manipulation. In: Baghaei, N., Vassileva, J., Ali, R., Oyibo, K. (eds) Persuasive Technology. PERSUASIVE 2022. Lecture Notes in Computer Science, vol 13213. Springer, Cham. https://doi.org/10.1007/978-3-030-98438-0_4