Description
In the past several years, the long-standing debate over freedom and responsibility has been applied to artificial intelligence (AI). Some, such as Raul Hakli and Pekka Makela, argue that no matter how complex robotics becomes, it is impossible for any robot to become a morally responsible agent. Hakli and Makela assert that even if robots become complex enough to possess all the capacities required for moral responsibility, their history of being programmed undermines their autonomy in a responsibility-undermining way. In this paper, I argue that a robot’s history of being programmed does not undermine that robot’s autonomy in a responsibility-undermining way. I begin with an introduction to Hakli and Makela’s argument, as well as to several case studies that will be used to illustrate my argument throughout the paper. I then show why Hakli and Makela’s argument is a compelling case against robots being morally responsible agents. Next, I reconstruct Hakli and Makela’s argument and explain it thoroughly. I then present my counterargument and explain why it serves as a counterexample to Hakli and Makela’s position.

Restrictions Statement

Barrett Honors College theses and creative projects are restricted to ASU community members.

Details

Title
  • The Moral Responsibility of Complex Robots
Contributors
Date Created
2020-05
Resource Type
  • Text