Abstract
I discuss the realizability and the ethical ramifications of Machine Ethics, from a number of different perspectives: I label these the anthropocentric, infocentric, biocentric and ecocentric perspectives. Each of these approaches takes a characteristic view of the position of humanity relative to other aspects of the designed and the natural worlds—or relative to the possibilities of ‘extra-human’ extensions to the ethical community. In the course of the discussion, a number of key issues emerge concerning the relation between technology and ethics, and the nature of what it is to have moral status. Some radical challenges to certain technological presuppositions and ramifications of the infocentric approach will be discussed. Notwithstanding the obvious tensions between the infocentric perspective on one side and the biocentric and ecocentric perspectives on the other, we will see that there are also striking parallels in the way that each of these three approaches generates challenges to an anthropocentric ethical hegemony, and possible scope for some degree of convergence.
Notes
See Torrance (2008, 2009). It should be pointed out that these are roles that an individual may play, and that clearly a single person can occupy both roles at the same time—for instance if an earthquake victim (who is a moral recipient in that she is, or ought to be, the object of others’ moral concern) also performs heroic acts in saving other victims’ lives (and thus, as someone whose behaviour is to be morally commended, occupies in this respect the role of moral producer). Also the two kinds of role may partially overlap, as they perhaps do in the concept of ‘respect’: if I respect you for your compassion and concern for justice (moral producer), I may see you as therefore deserving of particular consideration (moral recipient).
Apart from Infocentrism, these concepts are discussed, for example, in Curry (2006). Like the area of informatics-based ethics, that of bio- and eco-based ethics offers an important debate with exclusively humanity-centred approaches to ethical value. The exploration of the parallels between these different areas, each of which supports a sustained critique of anthropic views, is one of the key themes of the present paper.
We only consider secular approaches here: clearly religion-based approaches of different sorts offer other interesting perspectives on the bounds of the moral constituency, but it is not possible to include them within the scope of the present discussion.
In connection with the ethical status of non-human living organisms, a characteristic, and much-quoted, expression of ethical anthropocentrism is to be found in Kant, on the question of the treatment of animals. Kant believed that animals could be treated only as means rather than as ends; thus, as he put it, ‘we have no immediate duties to animals; our duties toward them are indirect duties to humanity.’ Kant (1997: 212); see also Frey (1980).
This idea can be understood in at least two ways: either as the claim that moral thinking is primarily cognitive or intellectual in nature, rather than emotional or affective (the former view being associated with ethical rationalists such as Kant); or with the more modern view that information (in some sense of the term based on computer science, information-theory or similar fields) is a key determinant of moral value. (Luciano Floridi defends a view of the latter sort—see, for example, Floridi 2008a, b). Certainly the second of these two variants, and possibly also the first, more ancient, one may be seen as being embraced by the infocentric approach.
Notions of autonomy are notoriously difficult to define. As a rough and ready way of marking the difference between operational and ethical autonomy, a vehicle that can navigate without a human driver either on board or in tele-control may be considered as operationally autonomous, without being ethically autonomous. Perhaps, if its object-avoidance system enables it to reliably avoid colliding with human pedestrians, cyclists, etc., then that might go some way to qualifying it for the latter description.
See, for example, Sparrow (2007), for a discussion of the issue of shifting blame from human to machine in the specific domain of autonomous robots deployed in a theatre of war; a lot of the issues concerning moral responsibility in this domain apply more widely to other domains where artificial agents may be employed.
As suggested earlier, it seems plausible to suppose that sentience—the ability to experience pleasure, pain, conscious emotion, perceptual qualia, etc.—plays a key role as a determinant of whether a being is a fit target for moral respect (i.e. of moral receptivity). But sentience may not be an exclusive determinant of moral receptivity. Many people believe that the remains of deceased people ought to be treated with respect: this is accepted even by those who strongly believe that bodily death is an end to experience and is accepted even in the case of those corpses for whom there are no living relations or dear ones who would be hurt by those corpses being treated without respect. Other examples of attributing (something like) moral concern to non-sentient entities will be considered later.
The field of Machine Consciousness in some ways mirrors that of ME. As in the latter, Machine Consciousness includes the practical development of artificial models of aspects of consciousness, or even attempts to instantiate consciousness in robots, as well as broader philosophical discussions of the scope and limitations of such a research programme. For discussion of the relations between machine consciousness and ethics, with implications for machine ethics, see Torrance (2000, 2007).
Thus Dennett, commenting on the Cog Project in the AI Lab at MIT, which had the explicit aim of producing a ‘conscious’ robot, wrote: “more than a few participants in the Cog project are already musing about what obligations they might come to have to Cog, over and above their obligations to the Cog team” (1998: 169).
On successors to the human race, see in particular (Dietrich 2007; Bostrom 2004, 2005). Dietrich argues that replacement of the human race by superintelligences would be a good thing, as the human race is a net source of moral evil in the world, due to ineradicable evolutionary factors. This does seem to be a very extreme, and markedly anti-anthropic, version of this kind of view; fortunately there are other writers who are concerned to ensure that possible artificial superintelligences can cohabit in a friendly and compassionate way with humans (Yudkowsky 2001, 2008; Goertzel 2006).
See Sparrow and Sparrow (2006) for a critical view of robot care of the elderly.
Calverley (2005) gives an interesting account of how rights for robots might be supported as an extension of biocentric arguments offered in favour of granting rights to non-human animals.
However, this terminology is not necessarily used by dark green eco-theorists. For an influential anticipation of dark green ecology see Leopold (1949).
For Floridi, the four revolutions were ushered in, respectively, by Copernicus, Darwin, Freud and Turing.
I am grateful to Ron Chrisley for useful insights on this point.
David Abram suggests, on the contrary, that it is the advent of alphabetic, phonetic writing, and all the technologies that came in its train, that was a key factor in the loss of primitive experience of nature. Have the books (including Abram’s) that followed alphabetization been of net benefit to mankind and/or to nature?
References
Abram D (1996) The spell of the sensuous: perception and language in a more-than-human world. Random House, NY
Abram D (2010) Becoming animal: an earthly cosmology. Pantheon Books, NY
Aleksander I (2005) The world in my mind, my mind in the world: key mechanisms of consciousness in humans, animals and machines. Imprint Academic, Thorverton
Anderson M, Anderson SL (eds) (2011) Machine ethics. Cambridge University Press, NY
Bentham J (1823) Introduction to the principles of morals and legislation, 2nd edn. Reprinted 1907, Clarendon Press, Oxford
Bostrom N (2000) When machines outsmart humans. Futures 35(7):759–764
Bostrom N (2004) The future of human evolution. In: Tandy C (ed) Death and anti-death: two hundred years after Kant; fifty years after Turing. Ria University Press, Palo Alto, pp 339–371
Bostrom N (2005) The ethics of superintelligent machines. In: Smit I, Wallach W, Lasker G (eds) Symposium on cognitive, emotive and ethical aspects of decision-making in humans and artificial intelligence, InterSymp 05, IIAS Press, Windsor
Calverley D (2005) Android science and the animal rights movement: are there analogies? In: Proceedings of CogSci-2005 workshop, Cognitive Science Society, Stresa, pp 127–136
Chalmers D (2010) The singularity: a philosophical analysis. J Conscious Stud 17(9–10):7–65
Curry P (2006) Ecological ethics: an introduction. Polity Press, Cambridge
De Jaegher H (2008) Social understanding through direct perception? yes, by interacting. Conscious Cogn 18:535–542
De Jaegher H, Di Paolo E (2007) Participatory sense-making: an enactive approach to social cognition. Phenomenol Cogn Sci 6(4):485–507
De Waal F (2006) Primates and philosophers: how morality evolved. Princeton U.P., Oxford
Dennett DC (1978) Why you can’t make a computer that feels pain. In: Brainstorms: philosophical essays on mind and psychology. MIT Press, Cambridge, pp 190–232
Dennett DC (1998) The practical requirements for making a conscious robot. In: Dennett DC (ed) Brainchildren: essays on designing minds. Penguin Books, London, pp 153–170
Di Paolo E (2005) Autopoiesis, adaptivity, teleology, agency. Phenomenol Cogn Sci 4:97–125
Dietrich E (2007) After the humans are gone. J Exp Theor Artif Intell 19(1):55–67
Ernste H (2004) The pragmatism of life in poststructuralist times. Environ Plan A 36:437–450
Floridi L (2008a) Artificial intelligence’s new frontier: artificial companions and the fourth revolution. Metaphilosophy 39(4–5):651–655
Floridi L (2008b) Information ethics, its nature and scope. In: Weckert J, Van den Hoven J (eds) Moral philosophy and information technology. Cambridge U.P., Cambridge, pp 40–65
Floridi L (2008c) Information ethics: a reappraisal. Ethics Inf Technol 10(2–3):189–204
Franklin S (1995) Artificial minds. MIT Press, Boston
Frey RG (1980) Interests and rights: the case against animals. Clarendon Press, Oxford
Gallagher S (2001) The practice of mind: theory, simulation or primary interaction? J Conscious Stud 8(5–7):83–108
Gallagher S (2008) Direct perception in the intersubjective context. Conscious Cogn 17:535–543
Goertzel B (2006) Ten years to a positive singularity (if we really, really try). Talk to Transvision 2006, Helsinki. http://www.goertzel.org/papers/tenyears.htm
Haikonen P (2003) The cognitive approach to conscious machines. Imprint Academic, Thorverton
Holland O (ed) (2003) Special issue on machine consciousness. J Conscious Stud 10(4–5)
Jonas H (1966/2001) The phenomenon of life: toward a philosophical biology. Northwestern University Press, Evanston (originally published by Harper & Row (NY) in 1966)
Joy B (2000) Why the future doesn’t need us. Wired 8(04). http://www.wired.com/wired/archive/8.04/joy_pr.html
Kant I (1997) In: Heath P, Schneewind JB (eds) Lectures on ethics, Cambridge U.P., Cambridge
Kurzweil R (2001) One half of an argument (response to Lanier 2000). The edge (online publication), 8.4.01. http://www.edge.org/3rd_culture/kurzweil/kurzweil_index.html
Kurzweil R (2005) The singularity is near: when humans transcend biology. Viking Press, NY
LaChat M (2004) “Playing God” and the construction of artificial persons. In: Smit I, Wallach W, Lasker G (eds) Symposium on cognitive, emotive and ethical aspects of decision-making in humans and artificial intelligence, InterSymp 04, IIAS Press, Windsor
Lanier J (2000) One half of a manifesto. The Edge (online publication), 11.11.00. http://www.edge.org/3rd_culture/lanier/lanier_index.html
Leopold A (1949) The land ethic. In: A sand county almanac with sketches here and there, Oxford University Press, New York, pp 201 ff
Lovelock J (1979) Gaia: a new look at life on earth. Oxford U.P., Oxford
Lovelock J (2006) The revenge of Gaia: why the earth is fighting back, and how we can still save humanity. Allen Lane, London
Maturana H, Varela F (1980) Autopoiesis and cognition: the realization of the living. D. Reidel Publishing, Dordrecht
Midgley M (1978) Beast and man: the roots of human nature. Cornell U.P., Ithaca
Moor J (2006) The nature, importance and difficulty of machine ethics. IEEE Intell Syst 21(4):18–21
Moravec H (1988) Mind children: the future of robot and human intelligence. Harvard U.P., Cambridge
Naess A (1973) The shallow and the deep long-range ecology movements. Inquiry 16:95–100
Naess A, Sessions G (1984) Basic principles of deep ecology. Ecophilosophy 6:3–7
Regan T (1983) The case for animal rights. University of California Press, Berkeley
Singer P (1977) Animal liberation. Granada, London
Singer P (2011) The expanding circle: ethics, evolution and moral progress. Princeton U.P., Princeton. (Revised edition of Singer P (1981), The expanding circle: ethics and sociobiology. Farrar, Strauss and Giroux, NY)
Singer P, Sagan A (2009) When robots have feelings. The Guardian, Tues 15 December 2009. http://www.guardian.co.uk/commentisfree/2009/dec/14/rage-against-machines-robots
Sloman A (1978) The computer revolution in philosophy: philosophy, science and models of mind. Harvester Press, Hassocks
Sparrow R (2007) Killer robots. J Appl Philos 24(1):62–77
Sparrow R, Sparrow L (2006) In the hands of machines? the future of aged care. Minds Mach 16(2):141–161
Sylvan R, Bennett D (1994) The greening of ethics: from human chauvinism to deep-green theory. White Horse Press, Cambridge
Thompson E (2007) Mind in life: biology, phenomenology and the sciences of mind. Harvard U.P., Cambridge
Torrance S (2000) Towards an ethics for epersons. In: Proceedings of AISB’00 symposium on AI, ethics and (Quasi-) human rights, University of Birmingham
Torrance S (2007) Two conceptions of machine phenomenality. J Conscious Stud 14(7):154–166
Torrance S (2008) Ethics, consciousness and artificial agents. Artif Intell Soc 22(4):495–521
Torrance S (2009) Will robots have their own ethics? Philosophy Now (72). http://www.philosophynow.org/issues/72/Will_Robots_Need_Their_Own_Ethics. Accessed 22 Apr 2012
Torrance S (2011) Would a super-intelligent AI necessarily be (super-) conscious? In: Chrisley R, Clowes R, Torrance S (eds) Proceedings of machine consciousness symposium at the AISB’11 convention, University of York, pp 67–74
Torrance S, Roche D (2011) Does an artificial agent need to be conscious to have ethical status? In: van den Berg B, Klaming L (eds) Technologies on the stand: legal and ethical questions in neuroscience and robotics. Wolf Legal Publishers, Nijmegen, pp 285–310
Torrance S, Clowes R, Chrisley R (eds) (2007) Machine consciousness: embodiment and imagination. Special issue of J Conscious Stud 14(4)
Trevarthen C, Reddy V (2007) Consciousness in infants. In: Velmans M, Schneider S (eds) The Blackwell companion to consciousness. Blackwell, Oxford, pp 41–57
Vinge V (1993) The coming technological singularity: how to survive in the post-human era. Whole Earth Rev Winter 1993. http://www.wholeearth.com/uploads/2/File/documents/technological_singularity.pdf. Accessed 22 Apr 2012
Wallach W, Allen C (2009) Moral machines: teaching robots right from wrong. Oxford U.P., Oxford
Wilson EO (1984) Biophilia. Harvard U.P., Cambridge
Wilson EO (1994) The diversity of life. Penguin, Harmondsworth
Wright R (1994) The moral animal: evolutionary psychology and everyday life. Pantheon Books, NY
Yudkowsky ES (2001) Creating friendly AI. http://www.singinst.org/upload/CFAI.html
Yudkowsky ES (2008) Cognitive biases potentially affecting judgement of global risks. In: Bostrom N, Cirkovic M (eds) Global catastrophic risks. Oxford U.P., Oxford, pp 91–119
Acknowledgments
I would like to thank the following for helpful discussion: Mark Bishop, Margaret Boden, David Calverley, Ron Chrisley, Robert Clowes, Mark Coeckelbergh, Anna Dumitriu, Tom Froese, Pietro Pavese, John Pickering, Denis Roche, Aaron Sloman, Susan Stuart, Wendell Wallach, Blay Whitby. The formation of this paper was assisted by visits funded by the British Academy and the EUCogII network.
Additional information
The following chapter was previously published, under the title ‘Machine Ethics and the Idea of a More-Than-Human Moral World’, in Machine Ethics, edited by Michael Anderson and Susan Leigh Anderson, © 2011 Cambridge University Press, pages 115–137. It is reproduced here, with some amendments, and a new introductory section, by permission of Cambridge University Press.
Cite this article
Torrance, S. Artificial agents and the expanding ethical circle. AI & Soc 28, 399–414 (2013). https://doi.org/10.1007/s00146-012-0422-2