Ethical Regulation of Robots Must Be Embedded in Their Operating Systems

  • Chapter
A Construction Manual for Robots' Ethical Systems

Part of the book series: Cognitive Technologies (COGTECH)

Abstract

The authors argue that unless computational deontic logics (or, for that matter, any other class of systems for mechanizing moral and/or legal principles) for achieving ethical control of future AIs and robots are woven into the operating-system level of such artifacts, such control will be at best dangerously brittle.
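
To make the architectural claim concrete, here is a deliberately tiny Python sketch (ours, with hypothetical names; not the authors' system): an ethical substrate interposed at the operating system's dispatch layer, so that no requested action reaches the actuators without a deontic permissibility check that application-level code cannot bypass.

    # Toy ethical substrate at the dispatch layer (all names hypothetical).

    FORBIDDEN = {"administer_overdose", "deceive_patient"}  # stand-in norm base

    def deontically_permitted(action: str) -> bool:
        """Stand-in for a call to a real deontic-logic prover."""
        return action not in FORBIDDEN

    def dispatch(action: str) -> str:
        # Every action request passes through this substrate-level gate;
        # code above the OS layer has no path around it.
        if not deontically_permitted(action):
            return "BLOCKED: " + action + " violates the norm base"
        return "EXECUTED: " + action

    print(dispatch("fetch_medication"))     # EXECUTED: fetch_medication
    print(dispatch("administer_overdose"))  # BLOCKED: administer_overdose ...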

Notes

  1.

    The authors of this chapter are deeply grateful to OFAI for the opportunity to discuss robot ethics in a lively and wonderfully productive workshop in Vienna, and to both ONR and AFOSR for support that enables the rigorous pursuit of robot moral reasoning.

  2.

    In keeping with [5].

  3.

    Not unlike the current-day Robot Operating System (ROS).

  4.

    Alternatively, given that meth is still illegal, COLT could decide to concoct an equally addictive new drug not covered by standing law.

  5.

    For example, techniques for replacing the specification and operation of Turing machines with suitably constructed first-order theories, and the Curry–Howard isomorphism (a minimal illustration appears after these notes).

  6.

    For example, versions of it allow the generation of Chisholm’s Paradox; see [4] and the reconstruction following these notes.

  7.

    This may not always be proper.

  8.

    The source code for this example can be downloaded from https://github.com/naveensundarg/EthicalSubstrate.

  9.

    Under joint development by the HRI Lab (Scheutz) at Tufts University, the RAIR Lab (Bringsjord & Govindarajulu) and the Social Interaction Lab (Si) at RPI, with contributions on the psychology side from Bertram Malle of Brown University. In addition to these investigators, the project includes two consultants: John Mikhail of Georgetown University Law School and Joshua Knobe of Yale University. The project is sponsored by a MURI grant from the Office of Naval Research in the United States. We describe here the logic-based ethical engineering designed and carried out by Bringsjord and Govindarajulu of the RAIR Lab (though in the final section (Sect. 5.5) we point to the need to link deontic logic to emotions, with help from Si).

  10.

    Of course, the technical substance of our hierarchy approach would presumably provide elements useful in the approach advocated in the present position paper.

  11.

    UIMA has found considerable success as the backbone of IBM’s famous Watson system [7], which in 2011, to much fanfare (at least in the USA), beat the best human players in the game of Jeopardy!.

  12.

    Though written rather long ago, [17] is still a wonderful introduction to conditional logic as a subfield of formal logic. In the final analysis, sophisticated moral reasoning can only be accurately modeled in formal logics that include conditionals much more expressive and nuanced than the material conditional. For example, even the well-known trolley-problem cases (in which, to save multiple lives, one can either redirect a train, killing one person in the process, or directly stop the train by throwing someone in front of it), which are not exactly complicated, require counterfactuals when analyzed informally but systematically, as shown, e.g., by Mikhail [16]; see the illustration following these notes.

  13.

    We are here pointing to the labor-shortage problem. For an approach to the technical challenge of program verification based on proof-checking, in which, assuming that programs are recast as proof finders, program verification becomes straightforward (at least programmatically speaking), see [1]. In this approach, traditional program verification is needed only for the one small piece of code that implements proof-checking (a toy version appears after these notes).

  14.

    Govindarajulu’s [10] dissertation marks a contribution to the so-called harder half of the crowdsourcing direction. Again, the “easier half,” which apparently is what DARPA has hitherto spent money to address, is to use games to allow nonexperts playing them to generate specifications corresponding to code. The harder half is devoted to proving that such specifications are indeed true with respect to the associated code. In Govindarajulu’s novel games, to play is to find proofs that specifications do in fact hold of programs. Interested readers have only to search the internet for ‘Catabot Rescue’.
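
Illustration for note 5 (ours, not the chapter's): under the Curry–Howard correspondence a program is a proof of the proposition expressed by its type. In Lean, for instance:

    -- Curry–Howard, minimally: the program below *is* a proof.
    -- Its type, read as the proposition A → (B → A), is proved by the
    -- lambda term that inhabits it.
    theorem k_axiom (A B : Prop) : A → (B → A) :=
      fun a _ => a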
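
Illustration for note 6: Chisholm's Paradox, in the standard textbook reconstruction (cf. the discussion linked in [15]), shows that a set of intuitively consistent obligations becomes inconsistent in standard deontic logic (SDL):

    % Chisholm's set: intuitively consistent, yet inconsistent in SDL.
    \begin{align*}
    &(1)\;\mathbf{O}g                          && \text{Jones ought to go help his neighbors.}\\
    &(2)\;\mathbf{O}(g \rightarrow t)          && \text{It ought to be that if he goes, he tells them he is coming.}\\
    &(3)\;\neg g \rightarrow \mathbf{O}\neg t  && \text{If he does not go, he ought not tell them.}\\
    &(4)\;\neg g                               && \text{He does not go.}
    \end{align*}
    % From (1), (2) and the K-schema O(p -> q) -> (Op -> Oq): Ot.
    % From (3) and (4) by modus ponens: O(not t).
    % With the D-schema Op -> not O(not p), Ot and O(not t) are jointly
    % contradictory, though premises (1)-(4) seem perfectly coherent.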
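
Illustration for note 12 (ours): the material conditional is true whenever its antecedent is false, so it cannot express the counterfactual judgments the trolley cases turn on:

    % Paradox of material implication: a false antecedent validates everything.
    \neg p \models (p \rightarrow q), \qquad \neg p \models (p \rightarrow \neg q)
    % With p = "the bystander is thrown in front of the train" false at the
    % actual world, both "p -> the five are saved" and "p -> the five die"
    % come out true. The analysis instead needs a counterfactual,
    %   p \mathrel{\Box\!\!\rightarrow} q,
    % evaluated at the closest p-worlds, where q and not-q cannot both hold.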
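
Illustration for note 13: a toy Python version of the one component that would still need traditional verification, assuming the simplest possible proof format (modus ponens over supplied premises). This is our sketch of the idea, not the checker of [1].

    # If programs emit proofs of their own specifications, only this small
    # checker need be trusted. Formulas are atoms ("p") or implications
    # ("->", p, q); a proof is a list of steps, each either a premise or
    # an application of modus ponens to two earlier steps.

    def check(proof, premises, goal):
        """Return True iff `proof` derives `goal` from `premises`."""
        derived = []
        for step in proof:
            if step[0] == "premise":
                formula = step[1]
                if formula not in premises:
                    return False
            elif step[0] == "mp":
                i, j = step[1], step[2]  # indices of earlier steps
                if i >= len(derived) or j >= len(derived):
                    return False
                impl, antecedent = derived[i], derived[j]
                if not (isinstance(impl, tuple) and impl[0] == "->"
                        and impl[1] == antecedent):
                    return False
                formula = impl[2]
            else:
                return False
            derived.append(formula)
        return bool(derived) and derived[-1] == goal

    # Example: a three-step proof of q from p and p -> q.
    assert check([("premise", ("->", "p", "q")),
                  ("premise", "p"),
                  ("mp", 0, 1)],
                 ["p", ("->", "p", "q")], "q")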

References

  1. Arkoudas, K., Bringsjord, S.: Computers, justification, and mathematical knowledge. Mind Mach. 17(2), 185–202 (2007). http://kryten.mm.rpi.edu/ka_sb_proofs_offprint.pdf

  2. Bringsjord, S.: The logicist manifesto: at long last let logic-based AI become a field unto itself. J. Appl. Log. 6(4), 502–525 (2008). http://kryten.mm.rpi.edu/SB_LAI_Manifesto_091808.pdf

  3. Bringsjord, S., Govindarajulu, N.S.: Toward a modern geography of minds, machines, and math. In: Müller, V.C. (ed.) Philosophy and Theory of Artificial Intelligence. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol. 5, pp. 151–165. Springer, New York (2013). http://www.springerlink.com/content/hg712w4l23523xw5

  4. Bringsjord, S., Arkoudas, K., Bello, P.: Toward a general logicist methodology for engineering ethically correct robots. IEEE Intell. Syst. 21(4), 38–44 (2006). http://kryten.mm.rpi.edu/bringsjord_inference_robot_ethics_preprint.pdf

  5. Bringsjord, S., Bringsjord, A., Bello, P.: Belief in the singularity is fideistic. In: Eden, A., Moor, J., Søraker, J., Steinhart, E. (eds.) The Singularity Hypothesis, pp. 395–408. Springer, New York (2013)

  6. Ferrucci, D., Lally, A.: UIMA: an architectural approach to unstructured information processing in the corporate research environment. Nat. Lang. Eng. 10, 327–348 (2004)

  7. Ferrucci, D., Brown, E., Chu-Carroll, J., Fan, J., Gondek, D., Kalyanpur, A., Lally, A., Murdock, W., Nyberg, E., Prager, J., Schlaefer, N., Welty, C.: Building Watson: an overview of the DeepQA project. AI Mag. 31, 59–79 (2010). http://www.stanford.edu/class/cs124/AIMagzine-DeepQA.pdf

  8. Fitting, M., Mendelsohn, R.L.: First-Order Modal Logic, vol. 277, Kluwer, Dordrecht (1998)

  9. Goble, L. (ed.): The Blackwell Guide to Philosophical Logic. Blackwell Publishing, Oxford (2001)

  10. Govindarajulu, N.S.: Uncomputable games: games for crowdsourcing formal reasoning. Ph.D. thesis, Rensselaer Polytechnic Institute (2013)

  11. Greco, G., Greco, S., Zumpano, E.: A logical framework for querying and repairing inconsistent databases. IEEE Trans. Knowl. Data Eng. 15(6), 1389–1408 (2003)

  12. Hardegree, G.: Introduction to Modal Logic. Online textbook, available as of February 2012 at http://people.umass.edu/gmhwww/511/text.htm (2011)

  13. Klein, G.: A formally verified OS kernel. Now what? In: Kaufmann, M., Paulson, L.C. (eds.) Interactive Theorem Proving. Lecture Notes in Computer Science, vol. 6172, pp. 1–7. Springer, Berlin/Heidelberg (2010)

  14. Klein, G., Elphinstone, K., Heiser, G., Andronick, J., Cock, D., Derrin, P., Elkaduwe, D., Engelhardt, K., Kolanski, R., Norrish, M., Sewell, T., Tuch, H., Winwood, S.: seL4: formal verification of an OS Kernel. In: Proceedings of the ACM SIGOPS 22nd Symposium on Operating Systems Principles, SOSP ’09, pp. 207–220. ACM, New York (2009)

  15. McNamara, P.: Deontic logic. In: Zalta, E. (ed.) The Stanford Encyclopedia of Philosophy, Fall 2010 edn. (2010). The section of the article discussing a dyadic system is available at: http://plato.stanford.edu/entries/logic-deontic/chisholm.html

  16. Mikhail, J.: Elements of Moral Cognition: Rawls’ Linguistic Analogy and the Cognitive Science of Moral and Legal Judgment, Kindle edn. Cambridge University Press, Cambridge (2011)

  17. Nute, D.: Conditional logic. In: Gabbay, D., Guenthner, F. (eds.) Handbook of Philosophical Logic Volume II: Extensions of Classical Logic, pp. 387–439. D. Reidel, Dordrecht (1984)

  18. Schermerhorn, P., Kramer, J., Brick, T., Anderson, D., Dingler, A., Scheutz, M.: DIARC: A testbed for natural human-robot interactions. In: Proceedings of AAAI 2006 Mobile Robot Workshop (2006)

  19. Si, M., Marsella, S., Pynadath, D.: Modeling appraisal in theory of mind reasoning. J. Agent Multi-Agent Syst. 20, 14–31 (2010)

  20. Stickel, M.E.: SNARK - SRI’s new automated reasoning kit. http://www.ai.sri.com/~stickel/snark.html (2008). Retrieved on July 26, 2013

Author information

Corresponding author

Correspondence to Naveen Sundar Govindarajulu.

Copyright information

© 2015 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Govindarajulu, N.S., Bringsjord, S. (2015). Ethical Regulation of Robots Must Be Embedded in Their Operating Systems. In: Trappl, R. (ed.) A Construction Manual for Robots' Ethical Systems. Cognitive Technologies. Springer, Cham. https://doi.org/10.1007/978-3-319-21548-8_5

  • DOI: https://doi.org/10.1007/978-3-319-21548-8_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-21547-1

  • Online ISBN: 978-3-319-21548-8

  • eBook Packages: Computer Science (R0)
