Matching Items (4)
Description
The use of Artificial Intelligence in assistive systems is growing in application and efficiency. From self-driving cars to medical and surgical robots to unsupervised industrial co-robots, the use of AI and robotics to eliminate human error in high-stress environments and to perform automated tasks is advancing society’s status quo. The understanding of co-robotics has expanded not only in the industrial world but in research as well. The National Science Foundation (NSF) defines a co-robot as “...a robot whose main purpose is to work with people or other robots to accomplish a goal” (NSF, 1). The latest iteration of its National Robotics Initiative, NRI-2.0, focuses on creating co-robots optimized for ‘scalability, customizability, lowering barriers to entry, and societal impact’ (NSF, 1). While many avenues have been explored for implementing co-robotics to create more efficient processes and sustainable lifestyles, this project focused on societal-impact co-robotics in the field of human safety and well-being. Introducing a co-robotics and computer-vision AI solution for first responder assistance would bring greater awareness and efficiency to public safety. Real-time identification techniques would extend first responders’ range of awareness in high-stress situations. A combination of environmental features collected through sensors (camera and radar) could be used to identify people and objects in environments where visual impairment and obstruction are high (e.g., burning buildings, smoke-filled rooms, etc.). Information about situational conditions (environmental readings, locations of other occupants, etc.) could be transmitted to first responders in emergencies, maximizing situational awareness.
This would not only aid first responders in evaluating emergency situations but would also provide data that helps determine the most effective course of action for a given situation.
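The abstract does not specify how camera and radar readings would be combined. One minimal sketch, assuming (hypothetically) that each sensor yields 2-D detections in the responder's frame with a confidence score, pairs nearby detections so that agreement between sensors raises confidence even when smoke degrades one of them:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float          # meters, relative to the responder
    y: float
    confidence: float

def fuse_detections(camera_hits, radar_hits, max_gap=1.0):
    """Pair camera and radar detections that fall within max_gap meters
    of each other; a pairing averages positions and boosts confidence
    that an occupant is actually present at that location."""
    fused = []
    for c in camera_hits:
        for r in radar_hits:
            dist = ((c.x - r.x) ** 2 + (c.y - r.y) ** 2) ** 0.5
            if dist <= max_gap:
                fused.append(Detection((c.x + r.x) / 2,
                                       (c.y + r.y) / 2,
                                       min(1.0, c.confidence + r.confidence)))
    return fused
```

The names and thresholds here are illustrative assumptions, not the project's actual design; a real system would also need to transmit the fused detections to the responder's display.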
Contributors: Scott, Kylel D (Author) / Benjamin, Victor (Thesis director) / Liu, Xiao (Committee member) / Engineering Programs (Contributor) / College of Integrative Sciences and Arts (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-12
Description
In the past several years, the long-standing debate over freedom and responsibility has been applied to artificial intelligence (AI). Some, such as Raul Hakli and Pekka Makela, argue that no matter how complex robotics becomes, it is impossible for any robot to become a morally responsible agent. Hakli and Makela assert that even if robots become complex enough to possess all the capacities required for moral responsibility, their history of being programmed compromises their autonomy in a responsibility-undermining way. In this paper, I argue that a robot’s history of being programmed does not undermine that robot’s autonomy in this way. I begin the paper with an introduction to Hakli and Makela’s argument, as well as to several case studies that will be used to explain my argument throughout the paper. I then show why Hakli and Makela’s argument is a compelling case against robots’ being morally responsible agents. Next, I reconstruct Hakli and Makela’s argument and explain it thoroughly. Finally, I present my counterargument and explain why it is a counterexample to Hakli and Makela’s.
Contributors: Anderson, Troy David (Author) / Khoury, Andrew (Thesis director) / Watson, Jeffrey (Committee member) / Historical, Philosophical & Religious Studies (Contributor) / College of Integrative Sciences and Arts (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Traditional Reinforcement Learning (RL) assumes that policies are learned with respect to the reward available from the environment, but learning in a complex domain sometimes requires wisdom that comes from a wide range of experience. In behavior-based robotics, it is observed that a complex behavior can be described by a combination of simpler behaviors. It is tempting to apply a similar idea, combining simpler behaviors in a meaningful way to tailor the complex combination. Such an approach would enable faster learning and modular design of behaviors. Complex behaviors can in turn be combined with other behaviors to create even more advanced behaviors, resulting in a rich set of possibilities. As in RL, the combined behavior can keep evolving by interacting with the environment. The requirement of this method is to specify a reasonable set of simple behaviors. In this research, I present an algorithm that combines behaviors such that the resulting behavior has characteristics of each individual behavior. This approach is inspired by behavior-based robotics, such as the subsumption architecture and motor schema-based design. The combination algorithm outputs n weights to combine behaviors linearly. The weights are state-dependent and change dynamically at every step of an episode. The idea is tested on discrete and continuous environments such as OpenAI’s “Lunar Lander” and “Bipedal Walker”. Results are compared with related domains such as multi-objective RL, hierarchical RL, transfer learning, and basic RL. The combination of behaviors proves to be a novel way of learning that helps the agent achieve the required characteristics. Because a combination is learned for a given state, the agent learns faster and more efficiently than with similar approaches. The agent demonstrates the characteristics of multiple behaviors, which helps it learn and adapt to the environment.
Future directions are also suggested as possible extensions to this research.
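The core mechanism the abstract describes, n state-dependent weights combining behavior outputs linearly, can be sketched as follows. The two toy behaviors and the hand-written weight function are hypothetical stand-ins for illustration, not the thesis's actual learned components:

```python
import numpy as np

def combine_behaviors(state, behaviors, weight_fn):
    """Blend the action proposals of simple behaviors with state-dependent
    weights, in the spirit of motor schema-style combination."""
    actions = np.array([b(state) for b in behaviors])  # shape (n, action_dim)
    w = np.asarray(weight_fn(state), dtype=float)      # shape (n,)
    w = w / w.sum()                                    # normalize weights
    return w @ actions                                 # weighted sum of actions

# Hypothetical simple behaviors for a 1-D thrust action (e.g. a lander):
hover   = lambda s: np.array([0.5])   # maintain altitude
descend = lambda s: np.array([0.0])   # cut thrust
# Hand-written weights: favor descent as altitude s[0] shrinks.
# In the thesis these weights are learned and change at every step.
weights = lambda s: [s[0], 1.0 - s[0]]
```

At altitude 1.0 the blend returns the hover action (0.5), at altitude 0.0 the descend action (0.0), and in between a smooth mixture, which is the "characteristics of each individual behavior" property the abstract aims for.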
Contributors: Vora, Kevin Jatin (Author) / Zhang, Yu (Thesis advisor) / Yang, Yezhou (Committee member) / Praharaj, Sarbeswar (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

This thesis proposes a new steering system for agricultural machinery with the aim of improving the automation capabilities of farming robots. Accurate and reliable autonomous machinery has the potential to significantly improve the efficiency of farming operations, but the existing systems for performing one of the most essential automation functions, autonomous steering to keep machinery on the proper course, each have drawbacks that limit their usability in various scenarios. To address these issues, a new lidar-based system was developed for automatic steering in a typical farm field. This approach uses a two-dimensional lidar unit to scan the ground in front of the robot, detecting farm tracks, a common feature of many farm fields, and steering along them. The system was implemented and evaluated, with results demonstrating that it is capable of providing accurate steering corrections.
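The abstract does not reproduce the detection method. One minimal sketch of the general idea, under the assumption (not stated in the thesis) that a downward-angled 2-D scan reveals a track as points lying below the nominal ground plane, estimates the track's lateral offset for a steering correction:

```python
import numpy as np

def track_offset(angles_rad, ranges_m, mount_height=1.2):
    """Estimate the lateral offset (meters) of a depressed farm track
    from one downward-facing 2-D lidar scan. Angles are measured from
    vertical; points deeper than the nominal ground plane are taken to
    belong to the track. The sign of the offset gives the steering
    direction; None means no track is visible (hold current heading)."""
    lateral = ranges_m * np.sin(angles_rad)            # sideways position
    depth = ranges_m * np.cos(angles_rad) - mount_height  # below ground level
    track = lateral[depth > 0.05]                      # 5 cm depth threshold
    if track.size == 0:
        return None
    return float(track.mean())                         # center of the track
```

The mounting geometry, depth threshold, and function name are all illustrative assumptions; the thesis's actual system may segment tracks quite differently.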

Contributors: Brauer, Jude (Author) / Mehlhase, Alexandra (Thesis director) / Heinrichs, Robert (Committee member) / Barrett, The Honors College (Contributor) / Software Engineering (Contributor) / College of Integrative Sciences and Arts (Contributor)
Created: 2023-05