Abstract
Rapid progress in robotics and AI potentially poses major challenges to several roles traditionally reserved for human agents: core human competences such as autonomy, agency, and responsibility might one day apply to artificial systems as well. I will give an overview of the philosophical discipline of robot ethics through the phenomenon of responsibility as a crucial human competence. In a first step I will ask about the traditional understanding of the term “responsibility” and formulate a minimal definition that includes only the necessary etymological elements as the ‘lowest common denominator’ of the responsibility concept: responsibility as the ability to answer is a normative concept that rests on the assumption that the responsible subject in question is equipped with a specific psycho-motivational constitution. In a second step I will outline my understanding of the discipline of robot ethics, in order to ask in a third step how to ascribe responsibility in human-machine interaction. For these purposes I will elaborate on my concept of responsibility networks.
Copyright information
© 2018 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Loh (née Sombetzki), J. (2018). On Building Responsible Robots. In: Karafillidis, A., Weidner, R. (eds) Developing Support Technologies. Biosystems & Biorobotics, vol 23. Springer, Cham. https://doi.org/10.1007/978-3-030-01836-8_9
DOI: https://doi.org/10.1007/978-3-030-01836-8_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-01835-1
Online ISBN: 978-3-030-01836-8