Matching Items (5)

Description

What if there were a way to integrate prosthetics seamlessly with the human body, and robots could help improve the lives of children with disabilities? With physical human-robot interaction appearing in many aspects of life, including industrial, medical, and social settings, how these robots interact with humans becomes ever more important. How smoothly the robot can interact with a person determines how safe and efficient the relationship will be. This thesis investigates an adaptive control method that allows a robot to adapt to the human's actions based on the interaction force, making the relationship less effortful and less strained when the robot has a different goal than the human, as framed in game theory, using multiple techniques that adapt the system. A few applications include robots in physical therapy, manufacturing robots that adapt to a changing environment, and robots teaching people something new, such as dancing or learning how to walk again after surgery.

The experience gained is an understanding of how a system's cost function works, including the tracking error, the speed of the system, the robot's effort, and the human's effort. This two-agent system results in a two-agent adaptive impedance model with an input for each agent. That leads to a nontraditional linear quadratic regulator (LQR) that must be separated and then recombined, yielding a traditional LQR. This experience can be used to help build better safety protocols for manufacturing robots, and in the future the knowledge gained from this research could be used to develop technologies that allow a robot to adapt in order to help counteract human error.
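
As a rough illustration of the cost structure described above, the sketch below poses a quadratic cost over tracking error, robot effort, and human effort for a hypothetical single-degree-of-freedom impedance model, stacks the two agents' inputs, and solves the resulting standard LQR with SciPy. The system matrices and weights are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical single-DOF impedance model: state x = [position error, velocity]
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])      # stiffness/damping terms (illustrative values)
B_r = np.array([[0.0], [1.0]])    # robot input channel
B_h = np.array([[0.0], [0.8]])    # human input channel

# Quadratic cost: tracking error, system speed, robot effort, human effort
Q   = np.diag([10.0, 1.0])        # state weights
R_r = np.array([[1.0]])           # robot effort weight
R_h = np.array([[2.0]])           # human effort weight

# Stacking both inputs turns the two-agent problem into a standard LQR
B = np.hstack([B_r, B_h])
R = np.block([[R_r, np.zeros((1, 1))],
              [np.zeros((1, 1)), R_h]])

P = solve_continuous_are(A, B, Q, R)   # solve the Riccati equation
K = np.linalg.solve(R, B.T @ P)        # combined feedback gain, one row per agent
print("Robot gain:", K[0], "Human gain:", K[1])
```

Splitting the combined gain back into per-agent rows loosely mirrors the idea of separating the two-agent regulator and then adding the pieces back together.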
Contributors: Bell, Rebecca C (Author) / Zhang, Wenlong (Thesis advisor) / Chiou, Erin (Committee member) / Aukes, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

As robots become more prevalent, the need is growing for efficient yet stable control systems for applications with humans in the loop. As such, it is a challenge for scientists and engineers to develop robust and agile systems that are capable of detecting instability in teleoperated systems. Despite how much research has been done to characterize the spatiotemporal parameters of human arm motions for reaching and grasping, not much has been done to characterize the behavior of human arm motion in response to control errors in a system. The scope of this investigation is human corrective actions in response to error in an anthropomorphic teleoperated robot limb. Characterizing human corrective actions contributes to the development of control strategies that are capable of mitigating potential instabilities inherent in human-machine control interfaces. Characterization of human corrective actions requires the simulation of a teleoperated anthropomorphic armature and the comparison of a human subject's arm kinematics, in response to error, against the human arm kinematics without error. This was achieved using OpenGL software to simulate a teleoperated robot arm and an NDI motion tracking system to acquire the subject's arm position and orientation. Error was intermittently and programmatically introduced to the virtual robot's joints as the subject attempted to reach for several targets located around the arm. The comparison of error-free human arm kinematics to error-prone human arm kinematics revealed the addition of a bell-shaped velocity peak in the human subject's tangential velocity profile. The size, extent, and location of the additional velocity peak depended on target location and joint angle error. Some joint angle and target location combinations do not produce an additional peak but simply maintain the end effector velocity at a low value until the target is reached. Additional joint angle error parameters and degrees of freedom are needed to continue this investigation.
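
To make the velocity-profile observation concrete, the sketch below superimposes a second bell-shaped speed bump, modeled here with a minimum-jerk form, on a nominal reach. This is one simple way to picture the corrective action described above; the minimum-jerk shape, durations, and amplitudes are assumptions for illustration, not the study's fitted model.

```python
import numpy as np

def min_jerk_speed(t, T, D):
    """Bell-shaped tangential speed of a minimum-jerk reach of duration T and distance D."""
    s = np.clip(t / T, 0.0, 1.0)
    return (D / T) * (30 * s**2 - 60 * s**3 + 30 * s**4)

def corrective_speed(t, t_err, T_corr, D_corr):
    """Additional bell-shaped peak triggered by a joint-angle error introduced at time t_err."""
    return min_jerk_speed(t - t_err, T_corr, D_corr)

t = np.linspace(0.0, 1.5, 301)
v_nominal  = min_jerk_speed(t, T=1.0, D=0.3)                   # error-free reach
v_observed = v_nominal + corrective_speed(t, 0.6, 0.5, 0.1)    # reach plus correction
```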
Contributors: Bevilacqua, Vincent Frank (Author) / Artemiadis, Panagiotis (Thesis director) / Santello, Marco (Committee member) / Trimble, Steven (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created: 2013-05
Description

Riding a bicycle requires accurately performing several tasks, such as balancing and navigation, which may be difficult or even impossible for persons with disabilities. These difficulties may be partly alleviated by providing active balance and steering assistance to the rider. In order to provide this assistance while maintaining free maneuverability, it is necessary to measure the position of the rider on the bicycle and to understand the rider's intent. Applying autonomy to bicycles also has the potential to address some of the challenges posed by traditional automobiles, including CO2 emissions, land use for roads and parking, pedestrian safety, high ownership cost, and difficulty traversing narrow or partially obstructed paths.

The Smart Bike research platform provides a set of sensors and actuators designed to aid in understanding human-bicycle interaction and to provide active balance control to the bicycle. The platform consists of two specially outfitted bicycles, one with force and inertial measurement sensors and the other with robotic steering and a control moment gyroscope, along with the associated software for collecting useful data and running controlled experiments. Each bicycle operates as a self-contained embedded system, which can be used for untethered field testing or can be linked to a remote user interface for real-time monitoring and configuration. Testing with both systems reveals promising capability for applications in human-bicycle interaction and robotics research.
Contributors: Bush, Jonathan Ernest (Author) / Zhang, Wenlong (Thesis advisor) / Heinrichs, Robert (Thesis advisor) / Sandy, Douglas (Committee member) / Arizona State University (Publisher)
Created: 2020
Description

Self-driving cars are a long-standing ambition of many AI scientists and engineers. In the last decade alone, self-driving cars from Google Waymo, Tesla Autopilot, Uber, and others have been roaming the streets of many cities. In this rapidly expanding field, researchers all over the world are attempting to develop safer and more efficient AI agents that can navigate our cities. However, driving is a very complex task to master even for a human, let alone the challenge of developing robots to do the same. It requires attention to and inputs from the car's surroundings, and it is nearly impossible to program all the possible factors affecting this complex task. As a solution, imitation learning was introduced, wherein agents learn a policy mapping observations to actions through demonstrations given by humans. Through imitation learning, one can readily teach self-driving cars the expected behavior in many scenarios. Despite their autonomous nature, humans undeniably play a vital role in the development and operation of safe and trustworthy self-driving cars, and hence form the strongest link in this application of human-robot interaction. Several approaches have been taken to incorporate this link between humans and self-driving cars, one of which involves communicating humans' navigational instructions to self-driving cars. This communicative channel gives humans control over the agent's decisions as well as the ability to guide it in real time. In this work, the ability of imitation learning to create a self-driving agent that can follow natural language instructions, given by humans and based on descriptions of environmental objects, is explored. The proposed model architecture can handle latent temporal context in these instructions, making the agent capable of making multiple decisions along its course. The work shows promising results that push the boundaries of natural language instructions and their complexity in navigating self-driving cars through towns.
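
For readers unfamiliar with instruction-conditioned imitation learning, the sketch below shows a generic behavioral-cloning setup in PyTorch: an observation encoder and an instruction embedding feed a recurrent layer that preserves temporal context, and the policy is trained to regress expert driving actions. The layer sizes, tokenization, and architecture are illustrative assumptions and do not reproduce the thesis's proposed model.

```python
import torch
import torch.nn as nn

class InstructionConditionedPolicy(nn.Module):
    """Illustrative behavioral-cloning policy: encodes observations and a tokenized
    instruction, keeps temporal context with a GRU, and outputs driving commands."""
    def __init__(self, obs_dim=64, vocab_size=1000, embed_dim=32, hidden_dim=128, act_dim=2):
        super().__init__()
        self.obs_enc = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
        self.text_enc = nn.EmbeddingBag(vocab_size, embed_dim)          # mean-pooled tokens
        self.gru = nn.GRU(hidden_dim + embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, act_dim)                       # e.g. steer, throttle

    def forward(self, obs_seq, instr_tokens, h0=None):
        # obs_seq: (batch, time, obs_dim); instr_tokens: (batch, num_tokens)
        text = self.text_enc(instr_tokens)                               # (batch, embed_dim)
        text = text.unsqueeze(1).expand(-1, obs_seq.size(1), -1)         # repeat over time
        x = torch.cat([self.obs_enc(obs_seq), text], dim=-1)
        out, h = self.gru(x, h0)
        return self.head(out), h

# Behavioral cloning: regress the expert's actions from demonstrations (dummy data here)
policy = InstructionConditionedPolicy()
obs = torch.randn(4, 10, 64)                  # dummy observation sequences
instr = torch.randint(0, 1000, (4, 6))        # dummy tokenized instructions
expert_actions = torch.randn(4, 10, 2)
pred, _ = policy(obs, instr)
loss = nn.functional.mse_loss(pred, expert_actions)
loss.backward()
```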
Contributors: Moudhgalya, Nithish B (Author) / Amor, Hani Ben (Thesis advisor) / Baral, Chitta (Committee member) / Yang, Yezhou (Committee member) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Bicycles are already used for daily transportation by a large share of the world's population and provide a partial solution for many issues facing the world today. The low environmental impact of bicycling combined with the reduced requirement for road and parking spaces makes bicycles a good choice for transportation over short distances in urban areas. Bicycle riding has also been shown to improve overall health and increase life expectancy. However, riding a bicycle may be inconvenient or impossible for persons with disabilities due to the complex and coordinated nature of the task. Automated bicycles provide an interesting area of study for human-robot interaction, due to the number of contact points between the rider and the bicycle. The goal of the Smart Bike project is to provide a platform for future study of the physical interaction between a semi-autonomous bicycle robot and a human rider, with possible applications in rehabilitation and autonomous vehicle research.

This thesis presents the development of two balance control systems, which utilize actively controlled steering and a control moment gyroscope to stabilize the bicycle at high and low speeds. These systems may also be used to introduce disturbances, which can be useful for studying human reactions. The effectiveness of the steering balance control system is verified through testing with a PID controller in an outdoor environment. Also presented is the development of a force-sensitive bicycle seat which provides feedback used to estimate the pose of the rider on the bicycle. The relationship between seat force distribution and rider pose is demonstrated with a motion capture experiment. A corresponding software system is developed for balance control and sensor integration, with inputs from the rider, the internal balance and steering controller, and a remote operator.
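
As a minimal sketch of the steering-based balance idea, the loop below runs a PID controller that commands steering to drive the measured roll angle toward upright. The gains, sample time, and the read_roll_angle/command_steering interfaces are hypothetical placeholders, not the platform's actual tuned controller or API.

```python
import time

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def balance_loop(read_roll_angle, command_steering, dt=0.01):
    """read_roll_angle() and command_steering() stand in for the bicycle's IMU and
    steering-motor interfaces, which are not specified here."""
    controller = PID(kp=12.0, ki=0.5, kd=1.5)   # illustrative gains
    while True:
        roll = read_roll_angle()                 # radians, 0 = upright
        steer_cmd = controller.update(-roll, dt) # steer to reduce the lean
        command_steering(steer_cmd)
        time.sleep(dt)
```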
Contributors: Bush, Jonathan Ernest (Author) / Zhang, Wenlong (Thesis director) / Sandy, Douglas (Committee member) / Software Engineering (Contributor) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05