Matching Items (72)
Description

Although current urban search and rescue (USAR) robots are little more than remotely controlled cameras, the end goal is for them to work alongside humans as trusted teammates. Natural language communications and performance data are collected as a team of humans works to carry out a simulated search and rescue task in an uncertain virtual environment. Conditions are tested emulating a remotely controlled robot versus an intelligent one. Differences in performance, situation awareness, trust, workload, and communications are measured. The Intelligent robot condition resulted in higher levels of performance and operator situation awareness (SA).
Contributors: Bartlett, Cade Earl (Author) / Cooke, Nancy J. (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Wu, Bing (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

As robotic technology and its various uses grow steadily more complex and ubiquitous, humans are coming into increasing contact with robotic agents. A large portion of such contact is cooperative interaction, where both humans and robots are required to work on the same application towards achieving common goals. These application scenarios are characterized by a need to leverage the strengths of each agent as part of a unified team to reach those common goals. To ensure that the robotic agent is truly a contributing team-member, it must exhibit some degree of autonomy in achieving goals that have been delegated to it. Indeed, a significant portion of the utility of such human-robot teams derives from the delegation of goals to the robot, and autonomy on the part of the robot in achieving those goals. In order to be considered truly autonomous, the robot must be able to make its own plans to achieve the goals assigned to it, with only minimal direction and assistance from the human.

Automated planning provides the solution to this problem -- indeed, one of the main motivations that underpinned the beginnings of the field of automated planning was to provide planning support for Shakey the robot with the STRIPS system. For a long time, however, automated planners suffered from scalability issues that precluded their application to real-world, real-time robotic systems. Recent decades have seen those issues gradually recede, and fast planning systems are now the norm rather than the exception. However, some of these advances in speed and scalability have been achieved by ignoring or abstracting away challenges that real-world integrated robotic systems must confront.

In this work, the problem of planning for human-robot teaming is introduced. The central idea -- the use of automated planning systems as mediators in such human-robot teaming scenarios -- and the main challenges inspired by real-world scenarios that must be addressed in order to make such planning seamless are presented: (i) Goals which can be specified or changed at execution time, after the planning process has completed; (ii) Worlds and scenarios where the state changes dynamically while a previous plan is executing; (iii) Models that are incomplete and can be changed during execution; and (iv) Information about the human agent's plan and intentions that can be used for coordination. These challenges are compounded by the fact that the human-robot team must execute in an open world, rife with dynamic events and other agents, and in a manner that encourages the exchange of information between the human and the robot. As an answer to these challenges, implemented solutions and a fielded prototype that combines all of those solutions into one planning system are discussed. Results from running this prototype in real-world scenarios are presented, and extensions to some of the solutions are offered as appropriate.
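As a rough illustration of challenges (i) and (ii) above -- goals arriving and state changing while a previous plan executes -- the following minimal Python sketch shows a monitor-and-replan loop. The planner, executor, and goal handling here are simple stand-ins for illustration only, not the planning system described in the thesis.

# Minimal sketch of a replanning loop for open-world execution.
# toy_planner, execute, and the goal queue are hypothetical stand-ins.

from collections import deque

def toy_planner(state, goals):
    """Stub planner: emits one 'achieve' action per outstanding goal."""
    return deque(("achieve", g) for g in goals if g not in state)

def execute(action, state):
    """Stub executor: applying an action simply asserts the goal fact."""
    state.add(action[1])

def run(initial_state, initial_goals, incoming_goals):
    state, goals = set(initial_state), set(initial_goals)
    plan = toy_planner(state, goals)
    while goals - state:
        # New goals may arrive at execution time; fold them in and replan.
        if incoming_goals:
            goals.add(incoming_goals.pop(0))
            plan = toy_planner(state, goals)
        if not plan:
            plan = toy_planner(state, goals)
        execute(plan.popleft(), state)
    return state

print(run({"at_base"}, {"room1_searched"}, ["victim_reported"]))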
Contributors: Talamadupula, Kartik (Author) / Kambhampati, Subbarao (Thesis advisor) / Baral, Chitta (Committee member) / Liu, Huan (Committee member) / Scheutz, Matthias (Committee member) / Smith, David E. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Humans and robots need to work together as a team to accomplish certain shared goals due to the limitations of current robot capabilities. Human assistance is required to accomplish these tasks because human capabilities are often better suited for certain tasks and complement robot capabilities in many situations. Given the necessity of human-robot teams, it has long been assumed that for the robotic agent to be an effective team member, it must be equipped with automated planning technologies that help it achieve the goals delegated to it by its human teammates, as well as deduce its own goals so it can proactively support its human counterparts by inferring their goals. However, there has not been any systematic evaluation of the accuracy of this claim.

In my thesis, I perform a human factors analysis of the effectiveness of such automated planning technologies for remote human-robot teaming. In the first part of my study, I investigate the effectiveness of automated planning in remote human-robot teaming scenarios. In the second part, I investigate the effectiveness of a proactive robot assistant in remote human-robot teaming scenarios.

Both investigations are conducted in a simulated urban search and rescue (USAR) scenario in which human-robot teams are deployed during the early phases of an emergency response to explore all areas of the disaster scene. Through both studies, I evaluate how effective automated planning technology is in helping human-robot teams move closer to human-human teams. I utilize both objective measures (such as accuracy and time spent on primary and secondary tasks, Robot Attention Demand, etc.) and a set of subjective Likert-scale questions (on situation awareness, immediacy, etc.) to investigate the trade-offs between different types of remote human-robot teams. The results from both studies suggest that intelligent robots with automated planning capability and proactive support are welcomed in general.
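For background, Robot Attention Demand is commonly defined in the HRI metrics literature as the fraction of task time the operator must devote to the robot; one standard formulation (given here as general context, not necessarily this study's exact operationalization) is

\[ \mathrm{RAD} = \frac{IE}{IE + NT} \]

where IE is the interaction effort (time spent attending to the robot) and NT is the neglect tolerance (time the robot can be safely ignored).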
Contributors: Narayanan, Vignesh (Author) / Kambhampati, Subbarao (Thesis advisor) / Zhang, Yu (Thesis advisor) / Cooke, Nancy J. (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

A critical challenge in the design of AI systems that operate with humans in the loop is to be able to model the intentions and capabilities of the humans, as well as their beliefs and expectations of the AI system itself. This allows the AI system to be "human-aware" -- i.e., the human task model enables it to envisage desired roles of the human in joint action, while the human mental model allows it to anticipate how its own actions are perceived from the point of view of the human. In my research, I explore how these concepts of human-awareness manifest themselves in the scope of planning or sequential decision making with humans in the loop. To this end, I will show (1) how the AI agent can leverage the human task model to generate symbiotic behavior; and (2) how the introduction of the human mental model in the deliberative process of the AI agent allows it to generate explanations for a plan or resort to explicable plans when explanations are not desired. The latter is in addition to traditional notions of human-aware planning, which typically use the human task model alone, and thus enables a new suite of capabilities for a human-aware AI agent. Finally, I will explore how the AI agent can leverage emerging mixed-reality interfaces to realize effective channels of communication with the human in the loop.
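As a toy illustration of the explicability idea sketched above (not the thesis's actual formulation), the snippet below scores candidate robot plans by a simple cost plus a penalty for how far they deviate from the plan the human expects, then picks the best one. All names and the cost proxy are hypothetical.

# Toy explicable-plan selection: balance plan cost against distance
# from the plan the human expects. Illustrative only.

def edit_distance(plan_a, plan_b):
    """Count position-wise action mismatches between two plans."""
    n = max(len(plan_a), len(plan_b))
    return sum(1 for i in range(n)
               if i >= len(plan_a) or i >= len(plan_b) or plan_a[i] != plan_b[i])

def most_explicable(candidates, human_expected, alpha=1.0):
    """Pick the candidate minimizing (length-as-cost) + alpha * distance to expectation."""
    return min(candidates,
               key=lambda p: len(p) + alpha * edit_distance(p, human_expected))

robot_options = [["unlock", "open", "enter"], ["break_window", "enter"]]
human_expectation = ["unlock", "open", "enter"]
print(most_explicable(robot_options, human_expectation))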
Contributors: Chakraborti, Tathagata (Author) / Kambhampati, Subbarao (Thesis advisor) / Talamadupula, Kartik (Committee member) / Scheutz, Matthias (Committee member) / Ben Amor, Hani (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

The field of robotics is rapidly expanding, and with it, the methods of teaching and introducing students must also advance alongside new technologies. There is a challenge in robotics education, especially at the high school level, to expose students to more modern and practical robots. One way to bridge this gap is through human-robot interaction, which offers a more hands-on and impactful experience that leaves students more interested in pursuing the field. Our project is a Robotic Head Kit that can be used in an educational setting to teach the electrical, mechanical, programming, and psychological concepts involved. We took an existing robot head prototype and advanced it further so that it can be easily assembled while still maintaining human complexity. Our research for this project dove into the electronics, mechanics, software, and even the psychological barriers present in order to advance the existing head design. The kit we have developed combines the field of robotics with psychology to create and add more life-like features and functionality to the robot, nicknamed "James Junior." The goal of our Honors Thesis was initially to fix the electrical, mechanical, and software problems present. We were then tasked with running tests with high school students to validate our assembly instructions while gathering their observations and feedback about the robot's programmed reactions and emotions. The electrical problems were solved with custom PCBs designed to power and program the existing servo motors on the head. A new set of assembly instructions was written, and modifications to the 3D-printed parts were made for the kit. In software, existing code was improved to implement a user interface via keypad and joystick, giving students control of the robot head they construct themselves. The results of our tests showed that we were not only successful in creating an intuitive robot head kit that could be easily assembled by high school students, but also successful in programming human-like expressions that could be emotionally perceived by the students.
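As a rough sketch of the kind of joystick-to-servo mapping such a user interface might use (the value ranges and the example channel are illustrative assumptions, not the kit's actual code):

# Illustrative joystick-to-servo mapping for a robot head; ranges are assumed.

def joystick_to_angle(raw, raw_min=0, raw_max=1023, angle_min=0.0, angle_max=180.0):
    """Linearly map a raw joystick reading to a servo angle, clamped to the servo's range."""
    raw = max(raw_min, min(raw_max, raw))
    span = (raw - raw_min) / (raw_max - raw_min)
    return angle_min + span * (angle_max - angle_min)

# Example: a centered stick reading drives a (hypothetical) neck-pan servo to mid-travel.
print(joystick_to_angle(512))   # ~90 degrees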
Contributors: Rathke, Benjamin (Co-author) / Rivera, Gerardo (Co-author) / Sodemann, Angela (Thesis director) / Itagi, Manjunath (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

Many researchers aspire to create robotics systems that assist humans in common office tasks, especially by taking over delivery and messaging tasks. For meaningful interactions to take place, a mobile robot must be able to identify the humans it interacts with and communicate successfully with them. It must also be able to successfully navigate the office environment. While mobile robots are well suited for navigating and interacting with elements inside a deterministic office environment, attempting to interact with human beings in an office environment remains a challenge due to the limits on the amount of cost-efficient compute power onboard the robot. In this work, I propose the use of remote cloud services to offload intensive interaction tasks. I detail the interactions required in an office environment and discuss the challenges faced when implementing a human-robot interaction platform in a stochastic office environment. I also experiment with cloud services for facial recognition, speech recognition, and environment navigation and discuss my results. As part of my thesis, I have implemented a human-robot interaction system utilizing cloud APIs into a mobile robot, enabling it to navigate the office environment, identify humans within the environment, and communicate with these humans.
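As a generic sketch of offloading an intensive recognition task to a remote service as described above (the endpoint URL and response field are hypothetical placeholders, not the specific cloud APIs used in this work):

# Generic cloud-offload sketch: send an image to a remote recognition service.
# The endpoint and response schema are hypothetical placeholders.

import requests

def identify_person(image_path, endpoint="https://example.com/api/face-recognition"):
    """Upload an image and return the identifier the (assumed) service reports."""
    with open(image_path, "rb") as f:
        response = requests.post(endpoint, files={"image": f}, timeout=10)
    response.raise_for_status()
    return response.json().get("person_id")   # assumed response field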
Created: 2017-05
Description

Preventive maintenance is a practice that has become popular in recent years, largely due to the increased dependency on electronics and other mechanical systems in modern technologies. The main idea of preventive maintenance is to take care of maintenance-type issues before they fully appear or cause disruption of processes and daily operations. One of the most important parts is being able to predict and anticipate failures in the system, in order to make sure that those are fixed before they turn into large issues. One specific area where preventive maintenance is a very big part of daily activity is the automotive industry. Automobile owners are encouraged to take their cars in for maintenance on a routine schedule (based on mileage or time), or when their car signals that there is an issue (low oil levels, for example). Although this level of maintenance is enough when people are in charge of cars, the rise of autonomous vehicles, specifically self-driving cars, changes that. Now, instead of a human being able to look at a car and diagnose any issues, the car needs to be able to do this itself. The objective of this project was to create such a system.

The Electronics Preventive Maintenance System (EPMS) is an internal system that is designed to meet all these criteria and more. The EPMS is comprised of a central computer which monitors all major electronic components in an autonomous vehicle through the use of standard off-the-shelf sensors. The central computer compiles the sensor data and is able to sort and analyze the readings. The filtered data is run through several mathematical models, each of which diagnoses issues in different parts of the vehicle. The data for each component in the vehicle is compared to pre-set operating conditions, which are set so as to encompass all normal ranges of output. If the sensor data is outside the margins, the warning and deviation are recorded and a severity level is calculated. In addition to the individual component models, there is also a vehicle-wide model, which predicts how necessary maintenance is for the vehicle. All of these results are analyzed by a simple heuristic algorithm, and a decision is made about the vehicle's health status, which is sent out to the Fleet Management System.

This system allows for accurate, effortless monitoring of all parts of an autonomous vehicle, as well as predictive modeling that allows the system to determine maintenance needs. With this system, human inspectors are no longer necessary for a fleet of autonomous vehicles. Instead, the Fleet Management System is able to oversee inspections, and the system operator is able to set parameters to decide when to send cars for maintenance. All the models used for the sensor and component analysis are tailored specifically to the vehicle; the models and operating margins are created using empirical data collected during normal testing operations. The system is modular and can be used in a variety of different vehicle platforms, including underwater autonomous vehicles and aerial vehicles.
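A minimal sketch of the kind of margin check and severity scoring described above (the operating ranges, component names, and severity formula are illustrative assumptions, not the system's calibrated models):

# Illustrative margin check for a preventive-maintenance monitor.
# Operating ranges and readings are made-up example values.

OPERATING_RANGES = {"battery_temp_c": (10.0, 45.0), "motor_current_a": (0.0, 30.0)}

def check_component(name, reading):
    """Flag a reading outside its pre-set range and score the deviation."""
    low, high = OPERATING_RANGES[name]
    if low <= reading <= high:
        return {"component": name, "ok": True, "severity": 0.0}
    deviation = (low - reading) if reading < low else (reading - high)
    severity = deviation / (high - low)      # deviation relative to the allowed band
    return {"component": name, "ok": False, "severity": round(severity, 3)}

print(check_component("battery_temp_c", 52.0))   # out of range -> nonzero severity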
Contributors: Mian, Sami T. (Author) / Collofello, James (Thesis director) / Chen, Yinong (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

This thesis proposes the concept of soft robotic supernumerary limbs to assist the wearer in the execution of tasks, whether it be to share loads or replace an assistant. These controllable extra arms are made using soft robotics to reduce the weight and cost of the device, and are not limited in size and location to the user's arm as with exoskeletal devices. Soft robotics differ from traditional robotics in that they are made using soft materials such as silicone elastomers rather than hard materials such as metals or plastics. This thesis presents the design, fabrication, and testing of the arm, including the joints and the actuators to move them, as well as the design and fabrication of the human-body interface to unite man and machine. This prototype utilizes two types of pneumatically driven actuators, pneumatic artificial muscles and fiber-reinforced actuators, to actuate the elbow and shoulder joints, respectively. The robotic limb is mounted at the waist on a backpack frame to avoid interfering with the wearer's biological arm. Through testing and evaluation, this prototype device proves the feasibility of soft supernumerary limbs, and opens up opportunities for further development in the field.
Contributors: Olson, Weston Roscoe (Author) / Polygerinos, Panagiotis (Thesis director) / Zhang, Wenlong (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

This thesis focused on understanding how humans visually perceive swarm behavior through the use of swarm simulations and gaze tracking. The goal of this project was to determine the visual patterns subjects display while observing and supervising a swarm, as well as to determine what swarm characteristics affect these patterns. As an ultimate goal, it was hoped that this research would contribute to optimizing human-swarm interaction for the design of human supervisory controllers for swarms.

To achieve the stated goals, two investigations were conducted. First, subjects' gaze was tracked while they observed a simulated swarm as it moved across the screen. This swarm varied in size, disturbance level in the position of the agents, speed, and path curvature. Second, subjects were asked to play a supervisory role as they watched a swarm move across the screen toward targets. The subjects determined whether a collision would occur and with which target, while their responses as well as their gaze were tracked.

In the case of the observational role, a model of human gaze was created. This was embodied in a second-order model similar to that of a spring-mass-damper system. This model was similar across subjects and stable. In the case of the supervisory role, inherent weaknesses in human perception were found, such as the inability to predict the future position of agents following curved paths. These findings are discussed in depth within the thesis. Overall, the results presented suggest that understanding human perception of swarms offers a new approach to the problem of swarm control. The ability to adapt controls to these strengths and weaknesses could lead to great strides in reducing the number of operators required to control a single UAV, resulting in a move towards one-person operation of a swarm.
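For context, a second-order gaze model of the kind mentioned above is typically written in the standard spring-mass-damper form, shown here as a generic illustration with the tracked swarm position as the input (this is not necessarily the exact parameterization fitted in the thesis):

\[ \ddot{g}(t) + 2\zeta\omega_n\,\dot{g}(t) + \omega_n^2\,g(t) = \omega_n^2\,s(t) \]

where g(t) is the gaze position, s(t) is the tracked swarm position, \zeta is the damping ratio, and \omega_n is the natural frequency.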
Contributors: Whitton, Elena Michelle (Author) / Artemiadis, Panagiotis (Thesis director) / Berman, Spring (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created: 2015-05
Description

Robotic rehabilitation for upper-limb post-stroke recovery is a developing technology. However, there are major issues in the implementation of this type of rehabilitation, issues which decrease its efficacy. Two of the major solutions currently being explored for the upper-limb post-stroke rehabilitation problem are the use of socially assistive rehabilitative robots, which directly interact with patients, and the use of exoskeleton-based systems of rehabilitation. While there is great promise in both of these techniques, they currently lack sufficient efficacy to objectively justify their costs. The overall efficacy of both of these techniques is about the same as that of conventional therapy, yet each has higher overhead costs than conventional therapy does. However, there are associated long-term cost savings in each case, meaning that the actual current viability of either of these techniques is somewhat nebulous. In both cases, the problems which decrease technique viability are largely related to joint action, the interaction between robot and human in completing specific tasks, and to issues in robot adaptability that make joint action difficult. As such, the largest part of current research into rehabilitative robotics aims to make robots behave in more "human-like" manners or to bypass the joint action problem entirely.
Contributors: Ramakrishna, Vijay Kambhampati (Author) / Helms Tillery, Stephen (Thesis director) / Buneo, Christopher (Committee member) / Barrett, The Honors College (Contributor) / Economics Program in CLAS (Contributor) / W. P. Carey School of Business (Contributor) / School of Life Sciences (Contributor)
Created: 2015-05