Abstract
Machine agency, defined as the ability of machines to act autonomously and interact with users, is becoming increasingly significant in human-machine interaction research, particularly in relation to ChatGPT and other generative artificial intelligence (AI) tools. In this paper, we present an initial extension of S. Shyam Sundar’s theory of machine agency, specifically in the context of human-AI communication. We review existing literature on this topic and use real-life news reports on ChatGPT from November 2022 to April 2023 to illustrate the factors that shape people’s perceptions of machine agency as either positive or negative, including ethical alignment, privacy, transparency, social inclusiveness, human autonomy, and well-being. We propose a more explicit differentiation between “good” and “bad” machine agency, a conceptualization that can enhance our understanding of the complexities of human-AI communication. We believe this approach can contribute to the development of guidelines and best practices for using generative AI tools and similar AI technologies. Finally, this conceptualization may help the public better benefit from generative AI and recognize its risks.
References
Future of Life Institute Homepage: https://futureoflife.org/open-letter/pause-giant-ai-experiments/. Last accessed 11 Apr 2023
Manyika, J., et al.: Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation. McKinsey Global Institute (2017)
Szollosy, M.: Freud, Frankenstein and our fear of robots: projection in our cultural perception of technology. AI & Soc. 32(3), 433–439 (2016). https://doi.org/10.1007/s00146-016-0654-7
McClure, P.K.: “You’re fired”, says the robot: the rise of automation in the workplace, technophobes, and fears of unemployment. Soc. Sci. Comput. Rev. 36(2), 139–156 (2018). https://doi.org/10.1177/0894439317698637
Cave, S., Dihal, K.: Hopes and fears for intelligent machines in fiction and reality. Nat. Mach. Intell. 1(2), 74–78 (2019). https://doi.org/10.1038/s42256-019-0020-9
Sundar, S.S.: Rise of machine agency: a framework for studying the psychology of human–AI interaction (HAII). J. Comput.-Mediated Commun. 25(1), 74–88 (2020). https://doi.org/10.1093/jcmc/zmz026
Sundar, S.S., Lee, E.-J.: Rethinking communication in the era of artificial intelligence. Hum. Commun. Res. 48(3), 379–385 (2022). https://doi.org/10.1093/hcr/hqac014
Guzman, A.L., Lewis, S.C.: Artificial intelligence and communication: a human–machine communication research agenda. New Media Soc. 22(1), 70–86 (2019). https://doi.org/10.1177/1461444819858691
Fox, J., Gambino, A.: Relationship development with humanoid social robots: applying interpersonal theories to human-robot interaction. Cyberpsychol. Behav. Soc. Netw. 24(5), 294–299 (2021). https://doi.org/10.1089/cyber.2020.0181
Gibbs, J., Kirkwood, G., Fang, C., Wilkenfeld, J.N.: Negotiating agency and control: theorizing human-machine communication from a structurational perspective. Human-Mach. Commun. 2, 153–171 (2021). https://doi.org/10.30658/hmc.2.8
Laapotti, T., Raappana, M.: Algorithms and organizing. Hum. Commun. Res. 48(3), 491–515 (2022). https://doi.org/10.1093/hcr/hqac013
OpenAI Homepage: https://openai.com/blog/chatgpt. Last accessed 10 May 2023
OpenAI Homepage: https://openai.com/research/gpt-4. Last accessed 10 May 2023
Markov, T., et al.: A Holistic Approach to Undesired Content Detection in the Real World (2022). http://arxiv.org/abs/2208.03274
Fogg, B.J.: Persuasive technology: using computers to change what we think and do. Ubiquity 2002(December), 2 (2002). https://doi.org/10.1145/764008.763957
Krügel, S., Ostermaier, A., Uhl, M.: ChatGPT’s inconsistent moral advice influences users’ judgment. Sci. Rep. 13(1), 4569 (2023). https://doi.org/10.1038/s41598-023-31341-0
OpenAI Homepage: https://openai.com/charter. Last accessed 10 May 2023
Beer, D.: The social power of algorithms. Inf. Commun. Soc. 20(1), 1–13 (2017). https://doi.org/10.1080/1369118X.2016.1216147
van Dis, E.A.M., Bollen, J., Zuidema, W., van Rooij, R., Bockting, C.L.: ChatGPT: five priorities for research. Nature 614(7947), 224–226 (2023). https://doi.org/10.1038/d41586-023-00288-7
Floridi, L.: AI as agency without intelligence: on ChatGPT, large language models, and other generative models. Philos. Technol. (2023). https://doi.org/10.2139/ssrn.4358789
Brandtzaeg, P.B., Skjuve, M., Følstad, A.: My AI friend: how users of a social chatbot understand their human-AI friendship. Hum. Commun. Res. 48(3), 404–429 (2022). https://doi.org/10.1093/hcr/hqac008
Fitzpatrick, K.K., Darcy, A., Vierhile, M.: Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment. Health 4(2), e19 (2017). https://doi.org/10.2196/mental.7785
Stephens, T.N., Joerin, A., Rauws, M., Werk, L.N.: Feasibility of pediatric obesity and prediabetes treatment support through Tess, the AI behavioral coaching chatbot. Transl. Behav. Med. 9(3), 440–447 (2019). https://doi.org/10.1093/tbm/ibz043
Skjuve, M., Følstad, A., Fostervold, K.I., Brandtzaeg, P.B.: My chatbot companion – a study of human-chatbot relationships. Int. J. Hum.-Comput. Stud. 149, 102601 (2021). https://doi.org/10.1016/j.ijhcs.2021.102601
Ta, V., et al.: User experiences of social support from companion chatbots in everyday contexts: thematic analysis. J. Med. Internet Res. 22(3), e16235 (2020). https://doi.org/10.2196/16235
Brandtzæg, P.B., Skjuve, M., Dysthe, K.K., Følstad, A.: When the social becomes non-human: young people’s perception of social support in chatbots. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–13. Association for Computing Machinery, New York, NY, United States (2021). https://doi.org/10.1145/3411764.3445318
Dwivedi, Y.K., et al.: “So what if ChatGPT wrote it?” multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manage. 71, 102642 (2023). https://doi.org/10.1016/j.ijinfomgt.2023.102642
Ruane, E., Birhane, A., Ventresque, A.: Conversational AI: social and ethical considerations. In: Irish Conference on Artificial Intelligence and Cognitive Science, pp. 104–115 (2019)
Larsson, S., et al.: Sustainable AI: An Inventory of the State of Knowledge of Ethical, Social, and Legal Challenges Related to Artificial Intelligence (2019). https://lucris.lub.lu.se/ws/portalfiles/portal/62833751/Larson_et_al_2019_SUSTAINABLE_AI_web_ENG_05.pdf
Mikalef, P., Conboy, K., Lundström, J.E., Popovič, A.: Thinking responsibly about responsible AI and ‘the dark side’ of AI. Eur. J. Inf. Syst. 31(3), 257–268 (2022). https://doi.org/10.1080/0960085X.2022.2026621
Skjuve, M., Brandtzæg, P.B., Følstad, A.: Why People Use ChatGPT (2023). https://doi.org/10.2139/ssrn.4376834
European Commission: https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html. Last accessed 21 May 2023
New York Times: https://www.nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html. Last accessed 9 June 2023
Wired: https://www.wired.com/story/red-teaming-gpt-4-was-valuable-violet-teaming-will-make-it-better/. Last accessed 9 June 2023
Yahoo News: https://au.news.yahoo.com/ai-tech-chatgpt-improve-inclusion-000952663.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuYmluZy5jb20v&guce_referrer_sig=AQAAAAiczrRdseP88CK3UiNE3tyHisJvzRpzJbotcTrKUe8vE7t3ysN21g_yeJlaQQBbEtVrDZSWIrNGfAofPAOogesEFcg-5HoG33_GKly126MbJYAjDvmghzcon24saDdhOEZ1XkPv3m-C_nIoSbGttegbIV1X35Y0K4D8gG6pFfWU. Last accessed 9 June 2023
Vox: https://www.vox.com/recode/2022/12/7/23498694/ai-artificial-intelligence-chat-gpt-openai. Last accessed 9 June 2023
The Guardian: https://www.theguardian.com/commentisfree/2022/dec/08/the-guardian-view-on-chatgpt-an-eerily-good-human-impersonator. Last accessed 9 June 2023
Bloomberg: https://www.bloomberg.com/opinion/articles/2022-12-17/chatgpt-holds-promise-and-peril-bloomberg-opinion-digest#xj4y7vzkg. Last accessed 9 June 2023
BuzzFeed: https://www.buzzfeednews.com/article/fjollaarifi/chatgpt-ai-for-therapy-mental-health. Last accessed 9 June 2023
Insider: https://www.insider.com/chat-gpt-successor-gpt-4-can-help-doctors-save-lives-2023-4#:~:text=GPT-4%20is%20the%20latest%20AI%20technology%20released%20from,lives%2C%20but%20shouldn%27t%20be%20used%20without%20human%20supervision. Last accessed 9 June 2023
The Telegraph: https://www.telegraph.co.uk/news/2023/04/28/chatgpt-better-bedside-manner-empathy-than-doctors/. Last accessed 9 June 2023
Noy, S., Zhang, W.: Experimental evidence on the productivity effects of generative artificial intelligence. SSRN Electron. J. (2023). https://doi.org/10.2139/ssrn.4375283
Johnson, S.B., King, A.J., Warner, E.L., Aneja, S., Kann, B.H., Bylund, C.L.: Using ChatGPT to evaluate cancer myths and misconceptions: artificial intelligence and cancer information. JNCI Cancer Spectr. 7(2), pkad015 (2023). https://doi.org/10.1093/jncics/pkad015
Borji, A.: A Categorical Archive of ChatGPT Failures, pp. 1–41 (2023). http://arxiv.org/abs/2302.03494
New York Times: https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html. Last accessed 29 June 2023
Yahoo News: https://www.yahoo.com/news/bings-chatbot-compared-associated-press-114209531.html. Last accessed 9 June 2023
Reuters: https://www.reuters.com/technology/australian-mayor-readies-worlds-first-defamation-lawsuit-over-chatgpt-content-2023-04-05/. Last accessed 9 June 2023
The Telegraph: https://www.telegraph.co.uk/news/2023/04/09/chatgpt-artificial-intelligence-terrorism-terror-attack/. Last accessed 9 June 2023
New York Post: https://nypost.com/2023/04/11/ai-bot-chaosgpt-tweet-plans-to-destroy-humanity-after-being-tasked/. Last accessed 9 June 2023
Financial Times: https://www.ft.com/content/3ce7ed9d-df95-4f5f-a3c7-ec8398ce9c50. Last accessed 9 June 2023
The Economic Times: https://economictimes.indiatimes.com/tech/technology/canada-opens-investigation-into-ai-firm-behind-chatgpt/articleshow/99258321.cms. Last accessed 9 June 2023
Vice: https://www.vice.com/en/article/5d3naz/openai-is-now-everything-it-promised-not-to-be-corporate-closed-source-and-for-profit. Last accessed 9 June 2023
inews.co.uk: https://inews.co.uk/news/chatgpt-secrecy-concerns-experts-ai-companies-copyright-lawsuits-2236444. Last accessed 9 June 2023
Bloomberg: https://www.bloomberg.com/news/newsletters/2022-12-08/chatgpt-open-ai-s-chatbot-is-spitting-out-biased-sexist-results. Last accessed 9 June 2023
Insider: https://www.insider.com/chatgpt-is-like-many-other-ai-models-rife-with-bias-2023-1. Last accessed 9 June 2023
USA Today: https://eu.usatoday.com/story/tech/2023/02/09/woke-chatgpt-conservatives-bias/11215353002/. Last accessed 9 June 2023
Vox: https://www.vox.com/future-perfect/23674696/chatgpt-ai-creativity-originality-homogenization. Last accessed 9 June 2023
The Atlantic: https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/. Last accessed 9 June 2023
The Washington Post: https://www.washingtonpost.com/technology/2023/02/14/chatgpt-dan-jailbreak/. Last accessed 9 June 2023
New York Times: https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html. Last accessed 9 June 2023
CNN: https://edition.cnn.com/2023/03/29/tech/chatgpt-ai-automation-jobs-impact-intl-hnk/index.html. Last accessed 9 June 2023
Vice: https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says. Last accessed 9 June 2023
Skjuve, M., Følstad, A., Brandtzæg, P.B.: A longitudinal study of self-disclosure in human–chatbot relationships. Interact. Comput. 35(1), 24–39 (2023). https://doi.org/10.1093/iwc/iwad022
Harcup, T., O’Neill, D.: What is news? News values revisited (again). Journal. Stud. 18(12), 1470–1488 (2017). https://doi.org/10.1080/1461670X.2016.1150193
Acknowledgement
This research is partly financed by the Norwegian Media Authority and the research project An AI-Powered Society.
Author’s note: ChatGPT was employed in this article to summarize and rephrase some parts of the document. The ChatGPT output was then manually edited and reviewed by the authors.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Brandtzaeg, P.B., You, Y., Wang, X., Lao, Y. (2023). “Good” and “Bad” Machine Agency in the Context of Human-AI Communication: The Case of ChatGPT. In: Degen, H., Ntoa, S., Moallem, A. (eds) HCI International 2023 – Late Breaking Papers. HCII 2023. Lecture Notes in Computer Science, vol 14059. Springer, Cham. https://doi.org/10.1007/978-3-031-48057-7_1
DOI: https://doi.org/10.1007/978-3-031-48057-7_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-48056-0
Online ISBN: 978-3-031-48057-7