This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations and theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.


Description

What if there were a way to integrate prosthetics seamlessly with the human body, and robots could help improve the lives of children with disabilities? With physical human-robot interaction appearing in many aspects of life, including industrial, medical, and social settings, how these robots interact with humans becomes even more important: how smoothly the robot can interact with a person determines how safe and efficient the relationship will be. This thesis investigates an adaptive control method that allows a robot to adapt to the human's actions based on the interaction force, making the relationship more effortless and less strained when the robot has a different goal than the human, as framed in game theory, by using multiple techniques that adapt the system. A few potential applications include robots in physical therapy, manufacturing robots that can adapt to a changing environment, and robots that teach people something new, such as dancing or learning how to walk again after surgery.

The experience gained is an understanding of how a system's cost function works, including the tracking error, the speed of the system, the robot's effort, and the human's effort. This two-agent system results in a two-agent adaptive impedance model with an input for each agent. That leads to a nontraditional linear quadratic regulator (LQR) that must be separated and then added back together, yielding a traditional LQR. This experience can be used in the future to help build better safety protocols for manufacturing robots, and the knowledge gained from this research could be used to develop technologies that allow a robot to adapt in order to help counteract human error.
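
As an illustration of how such a cost function feeds into an LQR design, the sketch below builds a toy single-degree-of-freedom impedance model with two inputs (one for the robot, one for the human) and solves for a combined feedback gain with SciPy. All matrices and weights are placeholder values chosen for the example, not the model identified in the thesis.

import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 1-DOF impedance model: state x = [position error, velocity].
m, b, k = 1.0, 2.0, 10.0          # illustrative mass, damping, stiffness
A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])
# Two inputs: the force applied by the robot and the force applied by the human.
B = np.array([[0.0, 0.0],
              [1.0 / m, 1.0 / m]])

# Q penalizes tracking error and speed; R penalizes each agent's effort.
Q = np.diag([50.0, 1.0])
R = np.diag([1.0, 5.0])           # e.g., weight the human's effort more heavily

# Solving the algebraic Riccati equation for the combined two-input system
# gives one feedback law u = -K x, whose rows are the robot's and human's gains.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

print("Combined feedback gain K:\n", K)

Stacking both agents' inputs into a single input matrix is what turns the two separate effort terms into one standard LQR problem, loosely mirroring the "separated and then added back together" step described above.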
Contributors: Bell, Rebecca C (Author) / Zhang, Wenlong (Thesis advisor) / Chiou, Erin (Committee member) / Aukes, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Riding a bicycle requires accurately performing several tasks, such as balancing and navigation, which may be difficult or even impossible for persons with disabilities. These difficulties may be partly alleviated by providing active balance and steering assistance to the rider. In order to provide this assistance while maintaining free maneuverability, it is necessary to measure the position of the rider on the bicycle and to understand the rider's intent. Applying autonomy to bicycles also has the potential to address some of the challenges posed by traditional automobiles, including CO2 emissions, land use for roads and parking, pedestrian safety, high ownership cost, and difficulty traversing narrow or partially obstructed paths.

The Smart Bike research platform provides a set of sensors and actuators designed to aid in understanding human-bicycle interaction and to provide active balance control to the bicycle. The platform consists of two specially outfitted bicycles, one with force and inertial measurement sensors and the other with robotic steering and a control moment gyroscope, along with the associated software for collecting useful data and running controlled experiments. Each bicycle operates as a self-contained embedded system, which can be used for untethered field testing or can be linked to a remote user interface for real-time monitoring and configuration. Testing with both systems reveals promising capability for applications in human-bicycle interaction and robotics research.
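
As a rough, hypothetical illustration of the self-contained embedded setup described above, the sketch below samples placeholder sensor values and streams them to a remote monitoring interface over UDP. The sensor helpers, network address, and message format are assumptions made for the example, not the platform's actual software.

import json
import socket
import time

MONITOR_ADDR = ("192.168.1.50", 9000)   # assumed address of the remote UI

def read_imu():
    # Placeholder: the real system would query the inertial measurement unit here.
    return {"roll": 0.0, "pitch": 0.0, "yaw_rate": 0.0}

def read_force():
    # Placeholder for the force sensors on the instrumented bicycle.
    return {"left_grip": 0.0, "right_grip": 0.0}

def telemetry_loop(rate_hz=50):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        packet = {"t": time.time(), "imu": read_imu(), "force": read_force()}
        sock.sendto(json.dumps(packet).encode(), MONITOR_ADDR)
        time.sleep(1.0 / rate_hz)

if __name__ == "__main__":
    telemetry_loop()

Keeping the sampling loop independent of any connected client is one way a node can run untethered in the field and still stream data whenever a remote interface happens to be listening.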
Contributors: Bush, Jonathan Ernest (Author) / Zhang, Wenlong (Thesis advisor) / Heinrichs, Robert (Thesis advisor) / Sandy, Douglas (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

Self-driving cars are a long-standing ambition for many AI scientists and engineers. In the last decade alone, self-driving cars such as Google Waymo, Tesla Autopilot, and Uber vehicles have been roaming the streets of many cities. In this rapidly expanding field, researchers all over the world are attempting to develop safer and more efficient AI agents that can navigate our cities. However, driving is a very complex task to master even for a human, let alone the challenge of developing robots to do the same. It requires attention to, and input from, the car's surroundings, and it is nearly impossible to program all the factors affecting this complex task. As a solution, imitation learning was introduced, wherein an agent learns a policy mapping observations to actions from demonstrations given by humans. Through imitation learning, one can easily teach self-driving cars the expected behavior in many scenarios. Despite their autonomous nature, humans undeniably play a vital role in the development and execution of safe and trustworthy self-driving cars, and hence form the strongest link in this application of human-robot interaction. Several approaches have been taken to incorporate this link between humans and self-driving cars, one of which involves communicating a human's navigational instructions to the self-driving car. This communicative channel gives humans control over the agent's decisions as well as the ability to guide it in real time. This work explores the ability of imitation learning to create a self-driving agent that follows natural-language instructions given by humans and grounded in descriptions of objects in the environment. The proposed model architecture can handle latent temporal context in these instructions, making the agent capable of making multiple decisions along its course. The work shows promising results that push the boundaries of natural-language instructions and their complexity in navigating self-driving cars through towns.
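
To make the imitation-learning setup concrete, the sketch below shows a minimal instruction-conditioned behavioral-cloning step in PyTorch: a recurrent encoder summarizes the instruction tokens, and the policy regresses its actions onto human demonstrations. The dimensions, network layout, and placeholder tensors are illustrative assumptions, not the architecture proposed in the thesis.

import torch
import torch.nn as nn

class InstructionConditionedPolicy(nn.Module):
    """Maps (observation, instruction) pairs to driving actions."""
    def __init__(self, obs_dim=64, vocab_size=1000, embed_dim=32, action_dim=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # The GRU carries latent temporal context across the instruction tokens.
        self.instr_encoder = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.policy = nn.Sequential(
            nn.Linear(obs_dim + embed_dim, 128),
            nn.ReLU(),
            nn.Linear(128, action_dim),   # e.g., steering and throttle
        )

    def forward(self, obs, instr_tokens):
        _, h = self.instr_encoder(self.embed(instr_tokens))
        return self.policy(torch.cat([obs, h[-1]], dim=-1))

# One behavioral-cloning update on placeholder data standing in for demonstrations.
policy = InstructionConditionedPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
obs = torch.randn(8, 64)                 # placeholder observation features
instr = torch.randint(0, 1000, (8, 12))  # placeholder instruction token ids
expert_actions = torch.randn(8, 2)       # placeholder demonstrated actions

optimizer.zero_grad()
loss = nn.functional.mse_loss(policy(obs, instr), expert_actions)
loss.backward()
optimizer.step()

Conditioning the policy on a recurrent summary of the instruction is one simple way to carry the latent temporal context mentioned above, so the agent can make several decisions along its route from a single instruction.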
Contributors: Moudhgalya, Nithish B (Author) / Amor, Hani Ben (Thesis advisor) / Baral, Chitta (Committee member) / Yang, Yezhou (Committee member) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2021