Matching Items (18)

HA-MRA: A Human-Aware Multi-Robot Architecture

Description

This thesis describes a multi-robot architecture that allows teams of robots to work with humans to complete tasks. The multi-agent architecture was built using the Robot Operating System (ROS) and Python. The architecture was designed modularly, allowing the use of different planners and robots, and the system automatically replans when robots connect or disconnect. The system was demonstrated on two real robots, a Fetch and a PeopleBot, by conducting a surveillance task on the fifth floor of the Computer Science building at Arizona State University. The second part of the system includes extensions for teaming with humans. An Android application was created to serve as the interface between the system and human teammates. This application provides a way for the system to communicate with humans in the loop. In addition, it sends the location of each human teammate to the system so that goal recognition can be performed, which enables the generation of human-aware plans. This capability was demonstrated in a mock search and rescue scenario in which the Fetch located a missing teammate.
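The replan-on-connect/disconnect behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not the thesis's implementation: the `Coordinator` class and its round-robin allocation are hypothetical stand-ins for the actual ROS nodes and planner.

```python
# Hypothetical sketch: a coordinator reassigns pending goals whenever the
# set of connected robots changes, so no goal is stranded on a robot that
# has disconnected. Round-robin allocation stands in for a real planner.

class Coordinator:
    def __init__(self, goals):
        self.goals = list(goals)      # e.g. surveillance waypoints
        self.robots = set()           # currently connected robot ids
        self.assignments = {}         # robot id -> assigned goals

    def replan(self):
        robots = sorted(self.robots)
        self.assignments = {r: [] for r in robots}
        if not robots:
            return
        for i, goal in enumerate(self.goals):
            self.assignments[robots[i % len(robots)]].append(goal)

    def connect(self, robot):
        self.robots.add(robot)
        self.replan()                 # replan when a robot joins

    def disconnect(self, robot):
        self.robots.discard(robot)
        self.replan()                 # replan when a robot drops out

coord = Coordinator(["room_501", "room_502", "hallway_5"])
coord.connect("fetch")
coord.connect("peoplebot")
coord.disconnect("peoplebot")         # all goals fall back to the Fetch
```

In a real ROS system the connect/disconnect events would arrive as topic or service callbacks rather than direct method calls.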

Date Created
  • 2017-05

Development of a Game of Logic for Investigating of Trust in Human Robot Interaction

Description

As robotics technology advances, robots are being created for use in situations where they collaborate with humans on complex tasks. For this collaboration to be safe and successful, it is important to understand what causes humans to trust robots more or less during a collaborative task. This research project aims to investigate human-robot trust through a collaborative game of logic that a human and a robot can play together. This thesis details the development of a game of logic for that purpose, based on a popular game in AI research called 'Wumpus World'. The original Wumpus World game offered low interactivity and was played by humans alone. In this project, the Wumpus World game is modified for a high degree of interactivity with a human player, while also allowing the game to be played simultaneously by an AI algorithm.
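The core percept logic that Wumpus World is built on can be sketched briefly. This is a generic illustration of the classic game, not the thesis's modified version; the grid layout and positions below are made up for the example.

```python
# Classic Wumpus World percepts on an assumed 4x4 grid: the agent senses a
# stench next to the Wumpus, a breeze next to a pit, and a glitter on the
# gold square, and must infer which squares are safe from those cues.

WUMPUS, PIT, GOLD = (1, 3), (3, 1), (2, 3)   # hypothetical positions

def neighbors(cell):
    x, y = cell
    return [(x + dx, y + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 1 <= x + dx <= 4 and 1 <= y + dy <= 4]

def percepts(cell):
    """Percepts a player (human or AI) receives on a given square."""
    p = set()
    if WUMPUS in neighbors(cell):
        p.add("stench")
    if PIT in neighbors(cell):
        p.add("breeze")
    if cell == GOLD:
        p.add("glitter")
    return p
```

Both a human player and an AI algorithm can consume the same `percepts` output, which is what makes simultaneous play of the kind described above possible.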

Date Created
  • 2018-05

An Investigation of Human Error Correction in Anthropomorphic Robotic Armatures

Description

As robots become more prevalent, the need is growing for efficient yet stable control systems for applications with humans in the loop. As such, it is a challenge for scientists and engineers to develop robust and agile systems that are capable of detecting instability in teleoperated systems. Although much research has been done to characterize the spatiotemporal parameters of human arm motions for reaching and grasping, little has been done to characterize the behavior of human arm motion in response to control errors in a system. The scope of this investigation is to characterize human corrective actions in response to error in an anthropomorphic teleoperated robot limb. Characterizing human corrective actions contributes to the development of control strategies that are capable of mitigating potential instabilities inherent in human-machine control interfaces. This characterization requires the simulation of a teleoperated anthropomorphic armature and the comparison of a human subject's arm kinematics in response to error against the same subject's arm kinematics without error. This was achieved using OpenGL software to simulate a teleoperated robot arm and an NDI motion tracking system to acquire the subject's arm position and orientation. Error was intermittently and programmatically introduced to the virtual robot's joints as the subject attempted to reach for several targets located around the arm. The comparison of error-free human arm kinematics to error-prone human arm kinematics revealed the addition of a bell-shaped velocity peak in the human subject's tangential velocity profile. The size, extent, and location of the additional velocity peak depended on target location and joint angle error. Some joint angle and target location combinations do not produce an additional peak but simply maintain the end effector velocity at a low value until the target is reached. Additional joint angle error parameters and degrees of freedom are needed to continue this investigation.
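The analysis described above, computing a tangential velocity profile and looking for an extra bell-shaped peak, can be sketched as follows. The velocity data here are synthetic stand-ins for the motion-capture recordings, and the peak-counting rule is a simplification of whatever analysis the thesis actually used.

```python
# Sketch: tangential (path) speed from sampled 3-D end-effector positions,
# plus a simple count of interior local maxima in the speed profile. An
# error-induced correction shows up as a second bell-shaped peak.
import math

def tangential_velocity(positions, dt):
    """Speed along the path from successive 3-D position samples."""
    return [math.dist(a, b) / dt for a, b in zip(positions, positions[1:])]

def count_peaks(speeds):
    """Count interior local maxima in a velocity profile."""
    return sum(1 for i in range(1, len(speeds) - 1)
               if speeds[i - 1] < speeds[i] > speeds[i + 1])

# Synthetic profiles: a single bell for error-free reaching, and a
# profile with an extra corrective peak when joint error was injected.
error_free = [0.0, 0.2, 0.6, 1.0, 0.6, 0.2, 0.0]
with_error = [0.0, 0.2, 0.6, 1.0, 0.4, 0.7, 0.3, 0.0]
```

Comparing `count_peaks` across the two profiles distinguishes the single-peak reach from the reach-plus-correction pattern the study observed.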

Date Created
  • 2013-05

Automated Bicycle Human-in-the-Loop Control

Description

Bicycles are already used for daily transportation by a large share of the world's population and provide a partial solution for many issues facing the world today. The low environmental impact of bicycling combined with the reduced requirement for road and parking spaces makes bicycles a good choice for transportation over short distances in urban areas. Bicycle riding has also been shown to improve overall health and increase life expectancy. However, riding a bicycle may be inconvenient or impossible for persons with disabilities due to the complex and coordinated nature of the task. Automated bicycles provide an interesting area of study for human-robot interaction, due to the number of contact points between the rider and the bicycle. The goal of the Smart Bike project is to provide a platform for future study of the physical interaction between a semi-autonomous bicycle robot and a human rider, with possible applications in rehabilitation and autonomous vehicle research.

This thesis presents the development of two balance control systems, which utilize actively controlled steering and a control moment gyroscope to stabilize the bicycle at high and low speeds. These systems may also be used to introduce disturbances, which can be useful for studying human reactions. The effectiveness of the steering balance control system is verified through testing with a PID controller in an outdoor environment. Also presented is the development of a force-sensitive bicycle seat, which provides feedback used to estimate the pose of the rider on the bicycle. The relationship between seat force distribution and rider pose is demonstrated with a motion capture experiment. A corresponding software system is developed for balance control and sensor integration, with inputs from the rider, the internal balance and steering controller, and a remote operator.
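A PID steering balance loop of the kind mentioned above can be sketched in a few lines. The gains and the roll-error input below are illustrative assumptions, not values from the thesis; the idea is simply that the controller steers into a fall to recover balance.

```python
# Minimal PID sketch (assumed gains): steering command computed from the
# roll-angle error, so a lean to one side produces a steer toward that side.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

controller = PID(kp=2.0, ki=0.1, kd=0.05, dt=0.01)
steer_cmd = controller.update(0.1)   # 0.1 rad roll error this timestep
```

In practice the roll error would come from an IMU at each control tick, and the output would drive the steering actuator; disturbances could be injected by simply adding an offset to `steer_cmd`.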

Date Created
  • 2019-05

Robotic augmentation of human locomotion for high speed running

Description

Human running requires extensive training and conditioning for an individual to maintain high speeds (greater than 10 mph) for an extended duration of time. Studies have shown that running at peak speeds generates a high metabolic cost due to the use of large muscle groups in the legs associated with the human gait cycle. Applying supplemental external and internal forces to the human body during the gait cycle has been shown to decrease the metabolic cost of walking, allowing individuals to carry additional weight and walk farther distances. Significant research has been conducted to reduce the metabolic cost of walking; however, there are few, if any, documented studies that focus specifically on reducing the metabolic cost associated with high-speed running. Three mechanical systems were designed to work in concert with the human user to decrease metabolic cost and increase the range and speeds at which a human can run.

The design methods focus on mathematical modeling, simulation, and metabolic cost. Mathematical modeling and simulations are used to aid the design of the robotic systems, and metabolic testing serves as the final analysis to determine the true effectiveness of the robotic prototypes. Metabolic data (VO2) measure the volumetric consumption of oxygen per minute per unit mass (ml/min/kg). Metabolic testing consists of analyzing the oxygen consumption of a test subject while performing a task naturally and then comparing that data with the analyzed oxygen consumption of the same task performed while using an assistive device.
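The natural-versus-assisted comparison described above reduces to a simple percent-reduction computation over the VO2 measurements. The numbers below are illustrative only, not data from the thesis.

```python
# Sketch of the metabolic comparison: percent reduction in oxygen
# consumption (VO2, ml/min/kg) with the device relative to natural running.

def metabolic_reduction(vo2_natural, vo2_assisted):
    """Percent reduction in VO2 relative to the natural (no-device) task."""
    return 100.0 * (vo2_natural - vo2_assisted) / vo2_natural

reduction = metabolic_reduction(vo2_natural=50.0, vo2_assisted=45.0)  # 10.0
```

A positive result means the device lowered the metabolic cost; a negative one would mean the device's own weight or resistance made running more expensive.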

Three devices were designed and tested to augment high-speed running. The first device, AirLegs V1, is a mostly aluminum exoskeleton with two pneumatic linear actuators connecting the lower back directly to the user's thighs, allowing the device to induce a torque on the leg by pushing and pulling on the user's thigh during running. The device also makes use of two smaller pneumatic linear actuators which drive cables connecting to small lever arms at the back of the heel, inducing a torque at the ankles. The second device, AirLegs V2, is also pneumatically powered but is a soft-suit version of the first device. It uses cables to transmit the forces created by actuators mounted vertically on the user's back; these cables then connect to the back of the user's knees, resulting in greater flexibility and range of motion of the legs. The third device, a jet pack, produces an external force against the user's torso to propel the user forward and upward, making it easier to run. Third-party testing, pilot demonstrations, and timed trials have demonstrated that all three devices effectively reduce the metabolic cost of running below that of natural running with no device.

Date Created
  • 2014

Mere exposure effect on uncanny feelings toward virtual characters and robots

Description

As technology advances, so does the concern that the humanlike virtual characters and android robots being created today will fall into the uncanny valley. The current study aims to determine whether uncanny feelings toward modern virtual characters and robots can be significantly affected by the mere exposure effect. Previous research shows that mere exposure can increase positive feelings toward novel stimuli (Zajonc, 1968). It is therefore predicted that repeated exposure to virtual characters and robots that possess uncanny traits will cause them to be rated significantly less uncanny after being viewed multiple times.

Date Created
  • 2014

Physical Human-Bicycle Interfaces for Robotic Balance Assistance

Description

Riding a bicycle requires accurately performing several tasks, such as balancing and navigation, which may be difficult or even impossible for persons with disabilities. These difficulties may be partly alleviated by providing active balance and steering assistance to the rider. In order to provide this assistance while maintaining free maneuverability, it is necessary to measure the position of the rider on the bicycle and to understand the rider's intent. Applying autonomy to bicycles also has the potential to address some of the challenges posed by traditional automobiles, including CO2 emissions, land use for roads and parking, pedestrian safety, high ownership cost, and difficulty traversing narrow or partially obstructed paths.

The Smart Bike research platform provides a set of sensors and actuators designed to aid in understanding human-bicycle interaction and to provide active balance control to the bicycle. The platform consists of two specially outfitted bicycles, one with force and inertial measurement sensors and the other with robotic steering and a control moment gyroscope, along with the associated software for collecting useful data and running controlled experiments. Each bicycle operates as a self-contained embedded system, which can be used for untethered field testing or can be linked to a remote user interface for real-time monitoring and configuration. Testing with both systems reveals promising capability for applications in human-bicycle interaction and robotics research.
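The seat-force measurement mentioned above lends itself to a center-of-pressure computation for estimating rider lean. The load-cell layout, positions, and force values below are hypothetical, chosen only to illustrate the idea.

```python
# Sketch (hypothetical sensor layout): the center of pressure over several
# seat load cells indicates which way the rider is leaning, a simple proxy
# for rider pose on the bicycle.

def center_of_pressure(cells):
    """cells: list of ((x, y) cell position in m, force in N) pairs."""
    total = sum(f for _, f in cells)
    x = sum(p[0] * f for p, f in cells) / total
    y = sum(p[1] * f for p, f in cells) / total
    return x, y

# Rider leaning right: more force on the +x (right-hand) cells.
cop = center_of_pressure([((-0.05, 0.0), 80.0), ((0.05, 0.0), 120.0),
                          ((-0.05, 0.1), 80.0), ((0.05, 0.1), 120.0)])
```

A balance controller could treat a sustained lateral shift in the center of pressure as rider intent, or as a disturbance to compensate for.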

Date Created
  • 2020

A high level language for human robot interaction

Description

While developing autonomous intelligent robots has been the goal of many research programs, a more practical application involving intelligent robots is the formation of teams consisting of both humans and robots. An example of such an application is search and rescue operations, where robots commanded by humans are sent to environments too dangerous for humans. For such human-robot interaction, natural language is considered a good communication medium, as it allows humans with little training in the robot's internal language to command and interact with the robot. However, any natural language communication from the human needs to be translated to a formal language that the robot can understand. Similarly, before the robot can communicate (in natural language) with the human, it needs to formulate its communiqué in some formal language, which then gets translated into natural language. In this thesis, I develop a high-level language for communication between humans and robots and demonstrate various aspects of it through a robotics simulation. These language constructs borrow some ideas from action execution languages and are grounded with respect to simulated human-robot interaction transcripts.
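The natural-to-formal translation step described above can be sketched with a toy pattern-based parser. The grammar, action names, and function below are hypothetical illustrations, not the language developed in the thesis.

```python
# Toy sketch: a constrained natural-language command is parsed into a
# formal action term the robot can execute, in the spirit of action
# execution languages. Only two command shapes are recognized here.
import re

def parse_command(text):
    """Translate 'go to X' / 'pick up X' into a formal action term."""
    patterns = [
        (r"go to (\w+)", "goto"),
        (r"pick up (\w+)", "pickup"),
    ]
    for pattern, action in patterns:
        m = re.fullmatch(pattern, text.strip().lower())
        if m:
            return f"{action}({m.group(1)})"
    return None   # unrecognized command: ask the human to rephrase

cmd = parse_command("Go to kitchen")   # "goto(kitchen)"
```

The reverse direction, rendering a formal term back into natural language for the human, would be a simple template fill over the same action vocabulary.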

Date Created
  • 2012

Robots that anticipate pain: anticipating physical perturbations from visual cues through deep predictive models

Description

To ensure system integrity, robots need to proactively avoid any unwanted physical perturbation that may cause damage to the underlying hardware. In this thesis work, we investigate a machine learning approach that allows robots to anticipate impending physical perturbations from perceptual cues. In contrast to other approaches that require knowledge about sources of perturbation to be encoded before deployment, our method is based on experiential learning. Robots learn to associate visual cues with subsequent physical perturbations and contacts. In turn, these extracted visual cues are then used to predict potential future perturbations acting on the robot. To this end, we introduce a novel deep network architecture which combines multiple sub-networks for dealing with robot dynamics and perceptual input from the environment. We present a self-supervised approach for training the system that does not require any labeling of training data. Extensive experiments in a human-robot interaction task show that a robot can learn to predict physical contact by a human interaction partner without any prior information or labeling. Furthermore, the network is able to successfully predict physical contact from depth stream input, traditional video input, or both modalities together.
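The self-supervised labeling idea described above can be sketched as follows. The function name, force threshold, and data are illustrative assumptions: the point is only that the robot's own force readings, not human annotators, supply the training targets for the visual predictor.

```python
# Sketch: pair each visual frame with whether a contact follows within
# `horizon` steps, using the robot's own force sensor as the label source,
# so no manual labeling of the video stream is needed.

def make_training_pairs(frames, contact_forces, horizon, threshold):
    """Return (frame, contact-imminent?) pairs for supervised training."""
    pairs = []
    for t, frame in enumerate(frames):
        window = contact_forces[t + 1:t + 1 + horizon]
        label = any(f > threshold for f in window)
        pairs.append((frame, label))
    return pairs

frames = ["f0", "f1", "f2", "f3"]
forces = [0.0, 0.0, 5.0, 0.0]        # a contact occurs at step 2
pairs = make_training_pairs(frames, forces, horizon=2, threshold=1.0)
```

A deep network would then be trained on these pairs to map visual frames (depth, RGB, or both) to the contact-imminent label.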

Date Created
  • 2017

Human factors analysis of automated planning technologies for human-robot teaming

Description

Humans and robots need to work together as a team to accomplish certain shared goals due to the limitations of current robot capabilities. Human assistance is required to accomplish the tasks, as human capabilities are often better suited for certain tasks and complement robot capabilities in many situations. Given the necessity of human-robot teams, it has long been assumed that for the robotic agent to be an effective team member, it must be equipped with automated planning technologies that help it achieve the goals delegated to it by its human teammates, as well as deduce its own goals to proactively support its human counterparts by inferring their goals. However, there has not been any systematic evaluation of the accuracy of this claim.

In my thesis, I perform a human factors analysis of the effectiveness of such automated planning technologies for remote human-robot teaming. In the first part of my study, I investigate the effectiveness of automated planning in remote human-robot teaming scenarios. In the second part, I investigate the effectiveness of a proactive robot assistant in remote human-robot teaming scenarios.

Both investigations are conducted in a simulated urban search and rescue (USAR) scenario in which human-robot teams are deployed during the early phases of an emergency response to explore all areas of the disaster scene. Through both studies, I evaluate how effective automated planning technology is at helping human-robot teams move closer to human-human teams. I utilize both objective measures (such as accuracy and time spent on primary and secondary tasks, Robot Attention Demand, etc.) and a set of subjective Likert-scale questions (on situation awareness, immediacy, etc.) to investigate the trade-offs between different types of remote human-robot teams. The results from both studies suggest that intelligent robots with automated planning capability and proactive support ability are welcomed in general.
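One of the objective measures named above, Robot Attention Demand (RAD), is commonly computed in the HRI literature as the fraction of task time the operator spends attending to the robot rather than to the primary task. A minimal sketch, with illustrative numbers:

```python
# Sketch of the Robot Attention Demand (RAD) measure: the fraction of total
# task time the human operator spends attending to the robot. Lower RAD
# leaves more attention free for the primary and secondary tasks.

def robot_attention_demand(time_on_robot, total_task_time):
    """Fraction of total task time spent attending to the robot."""
    return time_on_robot / total_task_time

rad = robot_attention_demand(time_on_robot=90.0, total_task_time=360.0)  # 0.25
```

Comparing RAD across team configurations (with and without automated planning or proactive support) is one way to quantify how much the planning technology offloads the human teammate.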

Date Created
  • 2015