Matching Items (2)
Description
There is a widely held assumption that a good chief executive in the business world will be a good chief executive in government. In the past, many Chief Executives in the government have had either military or congressional experience. President Ulysses S. Grant was a General, President Zachary Taylor was a Major General, and President Dwight D. Eisenhower was a Commanding General. President Herbert Hoover served as Secretary of Commerce and contributed to the Treaty of Versailles, and therefore cannot be criticized for lacking practical government experience. On the other hand, with many well-known entrepreneurs, people tend to focus on those individuals' business achievements and assume that such success can be transitioned from business to politics. However, I would argue that this is generally not the case.
Contributors: Guerrero, Ismael (Author) / Watson, Jeffrey (Thesis director) / Broberg, Gregory (Committee member) / Historical, Philosophical & Religious Studies (Contributor) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-12
Description
In the past several years, the long-standing debate over freedom and responsibility has been applied to artificial intelligence (AI). Some, such as Raul Hakli and Pekka Makela, argue that no matter how complex robotics becomes, it is impossible for any robot to become a morally responsible agent. Hakli and Makela assert that even if robots become complex enough to possess all the capacities required for moral responsibility, their history of being programmed undermines their autonomy in a responsibility-undermining way. In this paper, I argue that a robot’s history of being programmed does not undermine that robot’s autonomy in a responsibility-undermining way. I begin the paper with an introduction to Hakli and Makela’s argument, as well as to several case studies that I use to explain my argument throughout the paper. I then show why Hakli and Makela’s argument is a compelling case against robots being able to be morally responsible agents. Next, I lay out Hakli and Makela’s argument and explain it thoroughly. I then present my counterargument and explain why it is a counterexample to Hakli and Makela’s argument.
Contributors: Anderson, Troy David (Author) / Khoury, Andrew (Thesis director) / Watson, Jeffrey (Committee member) / Historical, Philosophical & Religious Studies (Contributor) / College of Integrative Sciences and Arts (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05