Abstract
As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke’s (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate—and defend—what is known as the “blank check” proposal. According to this proposal, a person activating a robot could willingly make themselves answerable for whatever events ensue, even if those events stem from the robot’s autonomous decision(s). This blank check solution was originally proposed in the context of automated warfare (Champagne & Tonkens, 2015), but we extend it to cover all robots. We argue that, because moral answerability in the blank check is accepted voluntarily and before bad outcomes are known, it proves superior to alternative ways of assigning blame. We end by highlighting how, in addition to being just, this self-initiated and prospective moral answerability for robot harm provides deterrence that the four other stances cannot match.
Data Availability
Not applicable.
References
Abbott, R. (2020). The reasonable robot: Artificial intelligence and the law. Cambridge University Press.
van Baalen, S., & Boon, M. (2019). Epistemology for interdisciplinary research – shifting philosophical paradigms of science. European Journal for Philosophy of Science, 9(1), 1–28. https://doi.org/10.1007/s13194-018-0242-4.
Barnett, R. E. (1977). Restitution: A new paradigm of criminal justice. Ethics, 87(4), 279–301. https://doi.org/10.1086/292043.
Behdadi, D., & Munthe, C. (2020). A normative approach to artificial moral agency. Minds and Machines, 30(2), 195–218. https://doi.org/10.1007/s11023-020-09525-8.
Berber, A., & Srećković, S. (2023). When something goes wrong: Who is responsible for errors in ML decision-making? AI & Society. https://doi.org/10.1007/s00146-023-01640-1.
Bernáth, L. (2021). Can autonomous agents without phenomenal consciousness be morally responsible? Philosophy & Technology, 34(4), 1363–1382. https://doi.org/10.1007/s13347-021-00462-7.
Bernstein, S. (2017). Causal proportions and moral responsibility. In D. Shoemaker (Ed.), Oxford studies in agency and responsibility, volume 4 (pp. 165–182). Oxford University Press. https://doi.org/10.1093/oso/9780198805601.003.0009.
Brooks, S. K., & Greenberg, N. (2021). Psychological impact of being wrongfully accused of criminal offences: A systematic literature review. Medicine, Science and the Law, 61(1), 44–54. https://doi.org/10.1177/0025802420949069.
Brożek, B., & Jakubiec, M. (2017). On the legal responsibility of autonomous machines. Artificial Intelligence and Law, 25(3), 293–304. https://doi.org/10.1007/s10506-017-9207-8.
Cappuccio, M. L., Peeters, A., & McDonald, W. (2019). Sympathy for Dolores: Moral consideration for robots based on virtue and recognition. Philosophy & Technology, 33(1), 9–31. https://doi.org/10.1007/s13347-019-0341-y.
Cappuccio, M. L., Sandoval, E. B., Mubin, O., Obaid, M., & Velonaki, M. (2021). Robotics aids for character building: More than just another enabling condition. International Journal of Social Robotics, 13(1), 1–5. https://doi.org/10.1007/s12369-021-00756-y.
Carson, H. L. (1917). The trial of animals and insects: A little known chapter of mediæval jurisprudence. Proceedings of the American Philosophical Society, 56(5), 410–415.
Cernea, M. V. (2017). The ethical troubles of future warfare: On the prohibition of autonomous weapon systems. Annals of the University of Bucharest Philosophy Series, 66(2), 67–89.
Chalmers, D. J. (1996). The conscious mind: In search of a fundamental theory. Oxford University Press.
Champagne, M. (2021). The mandatory ontology of robot responsibility. Cambridge Quarterly of Healthcare Ethics, 30(3), 448–454. https://doi.org/10.1017/S0963180120000997.
Champagne, M., & Tonkens, R. (2015). Bridging the responsibility gap in automated warfare. Philosophy & Technology, 28(1), 125–137. https://doi.org/10.1007/s13347-013-0138-3.
Chandler, D. (2018). Distributed responsibility: Moral agency in a non-linear world. In C. Ulbert, P. Finkenbusch, E. Sondermann, & T. Debiel (Eds.), Moral agency and the politics of responsibility (pp. 182–195). Routledge. https://doi.org/10.4324/9781315201399.
Chomanski, B. (2021). Liability for robots: Sidestepping the gaps. Philosophy & Technology, 34(4), 1013–1032. https://doi.org/10.1007/s13347-021-00448-5.
Coeckelbergh, M. (2009). Virtual moral agency, virtual moral responsibility: On the moral significance of the appearance, perception, and performance of artificial agents. AI & Society, 24(2), 181–189. https://doi.org/10.1007/s00146-009-0208-3.
Coeckelbergh, M. (2019). Moved by machines: Performance metaphors and philosophy of technology. Routledge.
Coeckelbergh, M. (2020). Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and Engineering Ethics, 26(4), 2051–2068. https://doi.org/10.1007/s11948-019-00146-8.
Coghlan, S., Vetere, F., Waycott, J., & Neves, B. B. (2019). Could social robots make us kinder or crueller to humans and animals? International Journal of Social Robotics, 11(5), 741–751. https://doi.org/10.1007/s12369-019-00583-2.
Conradie, N., Kempt, H., & Königs, P. (2022). Introduction to the topical collection on AI and responsibility. Philosophy & Technology, 35(4), article 97. https://doi.org/10.1007/s13347-022-00583-7.
Danaher, J. (2016). Robots, law and the retribution gap. Ethics and Information Technology, 18(4), 299–309. https://doi.org/10.1007/s10676-016-9403-3.
Santoni de Sio, F., & Mecacci, G. (2021). Four responsibility gaps with artificial intelligence: Why they matter and how to address them. Philosophy & Technology, 34(4), 1057–1084. https://doi.org/10.1007/s13347-021-00450-x.
Dennett, D. C. (1987). The intentional stance. MIT Press.
Dennett, D. C. (1997). When HAL kills, who’s to blame? Computer ethics. In D. G. Stork (Ed.), HAL’s legacy: 2001’s computer as dream and reality (pp. 351–365). MIT Press.
Dennett, D. C. (2023). The problem with counterfeit people. The Atlantic, May 16.
Di Nucci, E. (2018). Sexual rights, disability and sex robots. In J. Danaher, & N. McArthur (Eds.), Robot sex: Social and ethical implications (pp. 73–88). MIT Press.
Elish, M. C. (2019). Moral crumple zones: Cautionary tales in human-robot interaction. Engaging Science, Technology, and Society, 5, 40–60. https://doi.org/10.17351/ests2019.260.
Enoch, D. (2012). Being responsible, taking responsibility, and penumbral agency. In U. Heuer, & G. Lang (Eds.), Luck, value, and commitment: Themes from the ethics of Bernard Williams (pp. 95–132). Oxford University Press.
Feinberg, J. (1965). The expressive function of punishment. The Monist, 49(3), 397–423. https://doi.org/10.5840/monist196549326.
Fischer, J. M., & Ravizza, M. (2000). Responsibility and control: A theory of moral responsibility. Cambridge University Press.
Fricker, M. (2016). What’s the point of blame? A paradigm based explanation. Noûs, 50(1), 165–183. https://doi.org/10.1111/nous.12067.
Gerdes, A. (2018). Lethal autonomous weapon systems and responsibility gaps. Philosophy Study, 8(5), 231–239. https://doi.org/10.17265/2159-5313/2018.05.004.
Gless, S., Silverman, E., & Weigend, T. (2016). If robots cause harm, who is to blame? Self-driving cars and criminal liability. New Criminal Law Review, 19(3), 412–436. https://doi.org/10.1525/nclr.2016.19.3.412.
Gogoshin, D. L. (2021). Robot responsibility and moral community. Frontiers in Robotics and AI, 8, article 768092. https://doi.org/10.3389/frobt.2021.768092.
Gunkel, D. J. (2018a). The other question: Can and should robots have rights? Ethics and Information Technology, 20(2), 87–99. https://doi.org/10.1007/s10676-017-9442-4.
Gunkel, D. J. (2018b). Robot rights. MIT Press.
Gunkel, D. J. (2020). Mind the gap: Responsible robotics and the problem of responsibility. Ethics and Information Technology, 22(4), 307–320. https://doi.org/10.1007/s10676-017-9428-2.
Hage, J. (2017). Theoretical foundations for the responsibility of autonomous agents. Artificial Intelligence and Law, 25(3), 255–271. https://doi.org/10.1007/s10506-017-9208-7.
Hansson, S. O. (2023). Who is responsible if the car itself is driving? In D. P. Michelfelder (Ed.), Test-driving the future: Autonomous vehicles and the ethics of technological change (pp. 43–58). Rowman and Littlefield.
Hart, H. L. A. (2008). Punishment and responsibility: Essays in the philosophy of law. Oxford University Press.
Hayenhjelm, M., & Wolff, J. (2012). The moral problem of risk impositions: A survey of the literature. European Journal of Philosophy, 20(S1), E26–E51. https://doi.org/10.1111/j.1468-0378.2011.00482.x.
Hew, P. C. (2014). Artificial moral agents are infeasible with foreseeable technologies. Ethics and Information Technology, 16(3), 197–206. https://doi.org/10.1007/s10676-014-9345-6.
Himmelreich, J. (2019). Responsibility for killer robots. Ethical Theory and Moral Practice, 22(3), 731–747. https://doi.org/10.1007/s10677-019-10007-9.
James, W. (1898). Philosophical conceptions and practical results. University Chronicle, 1(4), 287–310.
Joyce, R. (2001). The myth of morality. Cambridge University Press.
Kazman, S. (1990). Deadly overcaution: FDA’s drug approval process. Journal of Regulation and Social Costs, 1(1), 35–54.
Kiener, M. (2022). Can we bridge AI’s responsibility gap at will? Ethical Theory and Moral Practice, 25(4), 575–593. https://doi.org/10.1007/s10677-022-10313-9.
Kneer, M., & Stuart, M. T. (2021). Playing the blame game with robots. In HRI ’21 Companion: Companion of the 2021 ACM/IEEE international conference on human-robot interaction (pp. 407–411). https://doi.org/10.1145/3434074.3447202.
Köhler, S., Roughley, N., & Sauer, H. (2018). Technologically blurred accountability? Technology, responsibility gaps and the robustness of our everyday conceptual scheme. In C. Ulbert, P. Finkenbusch, E. Sondermann, & T. Debiel (Eds.), Moral agency and the politics of responsibility (pp. 51–68). Routledge. https://doi.org/10.4324/9781315201399.
Kraaijeveld, S. R. (2020). Debunking (the) retribution (gap). Science and Engineering Ethics, 26(3), 1315–1328. https://doi.org/10.1007/s11948-019-00148-6.
Kraaijeveld, S. R. (2021). Experimental philosophy of technology. Philosophy & Technology, 34(4), 993–1012. https://doi.org/10.1007/s13347-021-00447-6.
Kühler, M. (2020). Technological moral luck. In B. Beck, & M. Kühler (Eds.), Technology, anthropology, and dimensions of responsibility (pp. 115–132). J. B. Metzler Verlag. https://doi.org/10.1007/978-3-476-04896-7_9.
Lemley, M. A., & Casey, B. (2019). Remedies for robots. The University of Chicago Law Review, 86(5), 1311–1396. https://doi.org/10.2139/ssrn.3223621.
Lévinas, E. (1985). Ethics and infinity: Conversations with Philippe Nemo. Trans. R. A. Cohen. Duquesne University Press.
Lévinas, E. (1998). Discovering existence with Husserl. Trans. R. A. Cohen & M. B. Smith. Northwestern University Press.
Lima, G., Grgić-Hlača, N., & Cha, M. (2021). Human perceptions on moral responsibility of AI: A case study in AI-assisted bail decision-making. In Proceedings of the 2021 CHI conference on human factors in computing systems, article 235. https://doi.org/10.1145/3411764.3445260.
Lima, G., Grgić-Hlača, N., & Cha, M. (2023). Blaming humans and machines: What shapes people’s reactions to algorithmic harm. In Proceedings of the 2023 CHI conference on human factors in computing systems. Association for Computing Machinery. https://doi.org/10.1145/3544548.3580953.
Malle, B. F., Guglielmo, S., & Monroe, A. E. (2014). A theory of blame. Psychological Inquiry, 25(2), 147–186. https://doi.org/10.1080/1047840X.2014.877340.
Mamak, K. (2022). Should violence against robots be banned? International Journal of Social Robotics, 14(4), 1057–1066. https://doi.org/10.1007/s12369-021-00852-z.
Mason, E. (2019). Ways to be blameworthy: Rightness, wrongness, and responsibility. Oxford University Press.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183. https://doi.org/10.1007/s10676-004-3422-1.
Napoleon, V. R. (2009). Ayook: Gitksan legal order, law, and legal theory. Doctoral dissertation, University of Victoria, Canada.
Oimann, A. (2023). The responsibility gap and LAWS: A critical mapping of the debate. Philosophy & Technology, 36(5), article 5. https://doi.org/10.1007/s13347-023-00605-y.
Oldridge, D. (2005). Strange histories: The trial of the pig, the walking dead, and other matters of fact from the medieval and renaissance worlds. Routledge.
Parfit, D. (1984). Reasons and persons. Clarendon Press.
Restivo, S. (2017). Sociology, science, and the end of philosophy: How society shapes brains, gods, maths, and logics. Palgrave Macmillan.
Royakkers, L., & Olsthoorn, P. (2018). Lethal military robots: Who is responsible when things go wrong? In R. Luppicini (Ed.), The changing scope of technoethics in contemporary society (pp. 106–123). IGI Global. https://doi.org/10.4018/978-1-5225-5094-5.ch006.
Sætra, H. S. (2021). Challenging the neo-anthropocentric relational approach to robot rights. Frontiers in Robotics and AI, 8, article 744426. https://doi.org/10.3389/frobt.2021.744426.
Sartorio, C. (2007). Causation and responsibility. Philosophy Compass, 2(5), 749–765. https://doi.org/10.1111/j.1747-9991.2007.00097.x.
Scanlon, T. M. (2008). Moral dimensions: Permissibility, meaning, blame. Harvard University Press.
Shoemaker, D. (2011). Attributability, answerability, and accountability: Toward a wider theory of moral responsibility. Ethics, 121(3), 603–632. https://doi.org/10.1086/659003.
Smith, A. M. (2007). On being responsible and holding responsible. The Journal of Ethics, 11(4), 465–484. https://doi.org/10.1007/s10892-005-7989-5.
Smith, N., & Vickers, D. (2021). Statistically responsible artificial intelligences. Ethics and Information Technology, 23(3), 483–493. https://doi.org/10.1007/s10676-021-09591-1.
Søvik, A. O. (2022). How a non-conscious robot could be an agent with capacity for morally responsible behaviour. AI and Ethics, 2(4), 789–800. https://doi.org/10.1007/s43681-022-00140-0.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77. https://doi.org/10.1111/j.1468-5930.2007.00346.x.
Sparrow, R. (2017). Robots, rape, and representation. International Journal of Social Robotics, 9(4), 465–477. https://doi.org/10.1007/s12369-017-0413-z.
Sparrow, R. (2021). Virtue and vice in our relationships with robots: Is there an asymmetry and how might it be explained? International Journal of Social Robotics, 13(1), 23–29. https://doi.org/10.1007/s12369-020-00631-2.
Stenseke, J. (2022a). Interdisciplinary confusion and resolution in the context of moral machines. Science and Engineering Ethics, 28(3), 1–17. https://doi.org/10.1007/s11948-022-00378-1.
Stenseke, J. (2022b). The morality of artificial friends in Ishiguro’s Klara and the Sun. Journal of Science Fiction and Philosophy, 5, 1–18.
Strawson, P. F. (2008). Freedom and resentment and other essays. Routledge.
Stuart, M. T., & Kneer, M. (2021). Guilty artificial minds: Folk attributions of mens rea and culpability to artificially intelligent agents. Proceedings of the Association for Computing Machinery Conference on Human-Computer Interaction, 5(CSCW2), article 363. https://doi.org/10.1145/3479507.
Taddeo, M., & Blanchard, A. (2022). Accepting moral responsibility for the actions of autonomous weapons systems—a moral gambit. Philosophy & Technology, 35(3), 1–24. https://doi.org/10.1007/s13347-022-00571-x.
Theodorou, A., & Dignum, V. (2020). Towards ethical and socio-legal governance in AI. Nature Machine Intelligence, 2(1), 10–12. https://doi.org/10.1038/s42256-019-0136-y.
Tigard, D. (2021a). Artificial moral responsibility: How we can and cannot hold machines responsible. Cambridge Quarterly of Healthcare Ethics, 30(3), 435–447. https://doi.org/10.1017/S0963180120000985.
Tigard, D. (2021b). There is no techno-responsibility gap. Philosophy & Technology, 34(3), 589–607. https://doi.org/10.1007/s13347-020-00414-7.
Tollon, F. (2021). The artificial view: Toward a non-anthropocentric account of moral patiency. Ethics and Information Technology, 23(2), 147–155. https://doi.org/10.1007/s10676-020-09540-4.
Turner, J. (2018). Robot rules: Regulating artificial intelligence. Palgrave Macmillan.
van de Poel, I., Royakkers, L., & Zwart, S. D. (2015). Moral responsibility and the problem of many hands. Routledge.
Watson, G. (2004). Agency and answerability: Selected essays. Oxford University Press.
Wolf, S. (2001). The moral of moral luck. Philosophical Exchange, 31(1), 5–19. http://hdl.handle.net/20.500.12648/3203.
Wolf, S. (2011). Blame, Italian style. In R. J. Wallace, R. Kumar, & S. Freeman (Eds.), Reasons and recognition: Essays on the philosophy of T. M. Scanlon (pp. 332–347). Oxford University Press.
Funding
No funds, grants, or other financial support were received by MC or RT during the preparation of this manuscript.
Author information
Contributions
MC and RT both contributed to this manuscript.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
MC and RT consent to have this manuscript published.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Champagne, M., Tonkens, R. A Comparative Defense of Self-Initiated Prospective Moral Answerability for Autonomous Robot Harm. Sci Eng Ethics 29, 27 (2023). https://doi.org/10.1007/s11948-023-00449-x