Matching Items (2,679)
Description
What if there were a way to integrate prosthetics seamlessly with the human body, and robots could help improve the lives of children with disabilities? With physical human-robot interaction appearing in many areas of life, including industry, medicine, and social settings, how these robots interact with humans becomes even more important. How smoothly a robot can interact with a person therefore determines how safe and efficient the relationship will be. This thesis investigates an adaptive control method that allows a robot to adapt to the human's actions based on the interaction force, making the relationship less effortful and less strained when the robot has a different goal than the human, as framed in game theory, using several techniques that adapt the system. Applications include robots for physical therapy, manufacturing robots that can adapt to a changing environment, and robots that teach people something new, such as dancing or learning to walk again after surgery.

The experience gained includes an understanding of how a system's cost function works, including the tracking error, the speed of the system, the robot's effort, and the human's effort. This two-agent system results in a two-agent adaptive impedance model with an input for each agent. That leads to a nontraditional linear quadratic regulator (LQR) that must be separated and then recombined, yielding a traditional LQR. This experience can be used in the future to help build better safety protocols for manufacturing robots, and the knowledge gained from this research could be used to develop technologies that allow a robot to adapt in order to counteract human error.
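A minimal sketch of how such a two-input LQR can be posed and then recombined into a standard one, assuming a simple linear impedance model with separate human and robot force channels; the matrices, weights, and variable names below are illustrative assumptions, not the formulation used in the thesis:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 2-state impedance model (tracking error, velocity); values are illustrative only.
A = np.array([[0.0, 1.0],
              [-10.0, -2.0]])
B_h = np.array([[0.0], [1.0]])   # assumed human force input channel
B_r = np.array([[0.0], [1.0]])   # assumed robot force input channel

# Stacking the two inputs turns the two-agent problem into one traditional LQR.
B = np.hstack([B_h, B_r])
Q = np.diag([100.0, 1.0])        # weights on tracking error and system speed
R = np.diag([1.0, 0.5])          # weights on human effort and robot effort

# Solve the algebraic Riccati equation for the combined problem,
# then split the stacked gain back into one feedback law per agent.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
K_h, K_r = K[0:1, :], K[1:2, :]   # u_h = -K_h x, u_r = -K_r x
```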
Contributors: Bell, Rebecca C (Author) / Zhang, Wenlong (Thesis advisor) / Chiou, Erin (Committee member) / Aukes, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Human-robot interaction has expanded immensely within dynamic environments. The goals of human-robot interaction are to increase productivity, efficiency, and safety. For the integration of human-robot interaction to be seamless and effective, humans must be willing to trust the capabilities of assistive robots. A major priority for human-robot interaction research should be to understand how human dyads have historically been effective in joint-task settings, which will help ensure that the same goals can be met in human-robot settings. The aim of the present study was to examine human dyads and the effects of an unexpected interruption. Interpersonal and individual levels of trust were studied in order to draw appropriate conclusions. Seventeen dyads of undergraduate and graduate students were recruited from Arizona State University and assigned to either a surprise condition or a baseline condition. Participants individually completed two surveys to capture their dispositional and individual levels of trust. The findings showed that participants' levels of interpersonal trust were average. Surprisingly, participants in the surprise condition afterwards showed moderate to high levels of dyad trust, suggesting that participants became more reliant on their partners when interrupted by a surprising event. Future studies will apply this knowledge to human-robot interaction, in order to mimic the seamless team interaction shown in historically effective human dyads.
Contributors: Shaw, Alexandra Luann (Author) / Chiou, Erin (Thesis advisor) / Cooke, Nancy J. (Committee member) / Craig, Scotty (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Providing the user with a good user experience is complex and involves multiple factors. One factor that can affect the user experience is animation. Animation can be tricky to get right and needs to be understood by designers: animations that are too fast might not accomplish anything, while animations that are too slow can hold the user up and cause frustration.

This study explores the subject of animation and its speed by trying to answer the following questions: 1) Do people notice whether an animation is present? 2) Does animation affect the enjoyment of a transition? 3) If animation does affect enjoyment, what is the effect of different animation speeds?

The study was conducted using three prototypes of an application for ordering bottled water, in which the transitions between different brands of bottled water were animated at 0 ms, 300 ms, and 650 ms. A survey was conducted to see whether participants could spot any difference between the prototypes and, if they did, which one they preferred.

It was found that most people did not notice any difference between the prototypes. Even those who did notice a difference had no preference for a particular animation speed.
Contributors: Ijari, Kusum (Author) / Branaghan, Russell (Thesis advisor) / Chiou, Erin (Committee member) / Roscoe, Rod (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Highly automated vehicles require drivers to remain aware enough to take over during critical events. Driver distraction is a key factor that prevents drivers from reacting adequately, so there is a need for an alert that helps drivers regain situational awareness and act quickly and successfully should a critical event arise. This study examines two aspects of alerts that could help facilitate driver takeover: mode (auditory and tactile) and direction (towards and away). Auditory alerts appear to be somewhat more effective than tactile alerts, though both modes produce significantly faster reaction times than no alert. Alerts moving towards the driver also appear to be more effective than alerts moving away from the driver. Future research should examine how multimodal alerts differ from single-mode alerts, and whether higher-fidelity alerts influence takeover times.
Contributors: Brogdon, Michael A (Author) / Gray, Robert (Thesis advisor) / Branaghan, Russell (Committee member) / Chiou, Erin (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
This study was undertaken to ascertain to what degree, if any, virtual reality training was superior to monitor-based training. Analyzing the results with a 2x3 ANOVA showed little difference between training delivered through virtual reality and training delivered through a monitor. The data did suggest that training involving richly textured environments might be more beneficial under virtual reality conditions; however, nothing significant was found in the analysis. Significance might be obtainable by comparing a higher-fidelity virtual reality set-up to a monitor condition.
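For readers unfamiliar with the analysis, the sketch below shows how a 2x3 factorial ANOVA of this kind can be run in Python; the factor names (display mode, environment texture), the synthetic scores, and the statsmodels-based workflow are assumptions for illustration, not the thesis's actual data or pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Synthetic stand-in data: 2 display modes x 3 texture levels, 10 scores per cell.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "display": np.repeat(["vr", "monitor"], 30),
    "texture": np.tile(np.repeat(["low", "medium", "high"], 10), 2),
    "score": rng.normal(70, 10, 60),
})

# Fit a linear model with both main effects and their interaction,
# then print the 2x3 ANOVA table.
model = smf.ols("score ~ C(display) * C(texture)", data=df).fit()
print(anova_lm(model, typ=2))
```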
Contributors: Whitson, Richard (Author) / Gray, Robert (Thesis advisor) / Branaghan, Russell (Committee member) / Chiou, Erin (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Allocating tasks for a day's or week's schedule is known to be a challenging and difficult problem, and it intensifies many-fold in multi-agent settings. A planner or group of planners who decide such a task-assignment schedule must have a comprehensive perspective on (1) the entire array of tasks to be scheduled, (2) constraints such as the importance and ordering of tasks, and (3) the individual abilities of the operators. One example of this kind of scheduling is the crew scheduling done for astronauts who will spend time at the International Space Station (ISS). The schedule for the ISS crew is decided before the mission starts: human planners take part in the decision-making process to determine the timing of activities over multiple days for multiple crew members. Given the unpredictability of individual assignments and the limitations associated with the various operators, deciding upon a satisfactory timetable is a challenging task. The objective of the current work is to develop an automated decision assistant that helps human planners come up with an acceptable task schedule for the crew, while also ensuring that human planners remain in the driver's seat throughout the decision-making process.

The decision assistant makes use of automated planning technology to assist human planners. The guidelines of Naturalistic Decision Making (NDM) and human-in-the-loop decision making were followed to make sure that the human is always in the driver's seat. The use cases considered are standard situations that come up during decision-making in crew scheduling. The effectiveness of the automated decision assistance was evaluated by setting it up for domain experts on a comparable domain: scheduling courses for master's students. The results of the user study evaluating the effectiveness of the automated decision support were subsequently published.
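As a very rough illustration of the kinds of constraints the abstract describes (task priority, ordering, and operator abilities), here is a toy greedy allocator in Python; the task names, skills, crew members, and the greedy strategy itself are all hypothetical and far simpler than the automated planning technology used in the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    priority: int        # higher value = schedule earlier
    duration: float      # hours
    required_skill: str

@dataclass
class CrewMember:
    name: str
    skills: set
    hours_free: float = 8.0
    assigned: list = field(default_factory=list)

def greedy_schedule(tasks, crew):
    """Assign each task, in priority order, to a qualified crew member with time left."""
    for task in sorted(tasks, key=lambda t: -t.priority):
        for member in sorted(crew, key=lambda m: -m.hours_free):
            if task.required_skill in member.skills and member.hours_free >= task.duration:
                member.assigned.append(task.name)
                member.hours_free -= task.duration
                break
    return {m.name: m.assigned for m in crew}

# Hypothetical example: two crew members, two prioritized tasks.
crew = [CrewMember("A", {"eva", "maintenance"}), CrewMember("B", {"science"})]
tasks = [Task("filter swap", 3, 1.5, "maintenance"), Task("plant experiment", 2, 2.0, "science")]
print(greedy_schedule(tasks, crew))
```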
Contributors: Mishra, Aditya Prasad (Author) / Kambhampati, Subbarao (Thesis advisor) / Chiou, Erin (Committee member) / Demakethepalli Venkateswara, Hemanth Kumar (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Previous literature was reviewed in an effort to further investigate the link between a cell phone's notification level and its effects on driver distraction. Mind-wandering has been suggested as an explanation for distraction and has previously been operationalized with oculomotor movement. Mind-wandering's definition is debated, but in this research it was defined as off-task thoughts that occur because the task does not require full cognitive capacity. Drivers were asked to operate a driving simulator and follow audio turn-by-turn directions while experiencing each of three cell phone notification levels: Control (no texts), Airplane (texts with no notifications), and Ringer (audio notifications). Measures of Brake Reaction Time, Headway Variability, and Average Speed were used to operationalize driver distraction. Drivers showed higher Brake Reaction Time and Headway Variability and lower Average Speed in both experimental conditions compared to the Control condition, which is consistent with previous research implying a distracted state. Oculomotor movement was measured as the percentage of time the participant spent looking at the road; there was no significant difference between the conditions on this measure. The results indicate that, even when the driver is not interacting with a cell phone, an audio notification is not required to induce a state of distraction. This phenomenon could not be linked to mind-wandering.
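The sketch below shows one way the distraction measures named above (Brake Reaction Time and Headway Variability) could be computed from simulator logs; the column layout, sampling rate, and pedal threshold are assumptions for illustration, not the study's actual processing pipeline.

```python
import numpy as np

def brake_reaction_time(t, lead_brake_on, brake_pedal, threshold=0.1):
    """Seconds from lead-vehicle brake onset to the driver's first pedal press above threshold."""
    onset = t[np.argmax(lead_brake_on > 0)]
    pressed = t[(t >= onset) & (brake_pedal > threshold)]
    return pressed[0] - onset if pressed.size else np.nan

def headway_variability(headway_m):
    """Standard deviation of headway distance over the drive."""
    return float(np.std(headway_m))

# Synthetic 10 Hz log: lead vehicle brakes at t = 12 s, driver presses the pedal at about 13.2 s.
t = np.arange(0.0, 30.0, 0.1)
lead_brake_on = (t > 12.0).astype(int)
brake_pedal = np.where(t > 13.2, 0.5, 0.0)
headway_m = 30.0 + 2.0 * np.sin(t / 3.0)
print(brake_reaction_time(t, lead_brake_on, brake_pedal), headway_variability(headway_m))
```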
Contributors: Radina, Earl (Author) / Gray, Robert (Thesis advisor) / Chiou, Erin (Committee member) / Branaghan, Russell (Committee member) / Arizona State University (Publisher)
Created: 2019