Abstract
Written vignettes are often used in experiments to probe differences in how participants interpret the behaviour and mental states of artificial autonomous actors (henceforth A-bots) compared with human actors. A recurring result in this body of research is that mental-state attributions to A-bots closely resemble those made to humans. This paper reports the results of a short measure consisting of four questions. We find that asking participants whether A-bots can feel pain or pleasure, whether they deserve rights, and whether they would make good parents yields clear differences between the human and A-bot groups. By including these questions, experimenters can be more confident that participants construct mental representations of A-bots differently from those of humans.
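The group comparison described in the abstract can be sketched as a simple effect-size computation on per-condition ratings. This is a minimal illustrative sketch only: the Likert ratings, the probe wording, and the `cohens_d` helper are assumptions for demonstration, not the paper's actual data or analysis.

```python
from statistics import mean, stdev

# Hypothetical 1-7 Likert ratings for one probe question
# ("Can this actor feel pain?"), one list per condition.
human_ratings = [6, 7, 6, 5, 7, 6, 7, 5]
abot_ratings = [2, 1, 3, 2, 1, 2, 3, 1]

def cohens_d(a, b):
    """Standardised mean difference between two independent groups."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

d = cohens_d(human_ratings, abot_ratings)
print(f"mean human = {mean(human_ratings):.2f}, "
      f"mean A-bot = {mean(abot_ratings):.2f}, d = {d:.2f}")
```

A large standardised difference on such probes would suggest participants are representing the A-bot condition differently from the human condition, which is the check the measure is designed to provide.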
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Ashton, H., Franklin, M. (2022). A Method to Check that Participants Really are Imagining Artificial Minds When Ascribing Mental States. In: Stephanidis, C., Antona, M., Ntoa, S., Salvendy, G. (eds) HCI International 2022 – Late Breaking Posters. HCII 2022. Communications in Computer and Information Science, vol 1655. Springer, Cham. https://doi.org/10.1007/978-3-031-19682-9_59
Print ISBN: 978-3-031-19681-2
Online ISBN: 978-3-031-19682-9