Matching Items (8)
Description
Human-robot interaction has expanded immensely within dynamic environments. The goals of human-robot interaction are to increase productivity, efficiency, and safety. For the integration of human-robot interaction to be seamless and effective, humans must be willing to trust the capabilities of assistive robots. A major priority for human-robot interaction should therefore be to understand how human dyads have historically been effective within a joint-task setting, so that these goals can also be met in human-robot settings. The aim of the present study was to examine human dyads and the effects of an unexpected interruption, studying participants' interpersonal and individual levels of trust in order to draw appropriate conclusions. Seventeen undergraduate and graduate dyads were recruited from Arizona State University and assigned to either a surprise condition or a baseline condition. Participants individually completed two surveys to capture their dispositional and individual levels of trust. The findings showed that participants' levels of interpersonal trust were average. Surprisingly, participants in the surprise condition showed moderate to high levels of dyad trust afterwards, indicating that they became more reliant on their partners when interrupted by a surprising event. Future studies will apply this knowledge to human-robot interaction in order to mimic the seamless team interaction shown in historically effective dyads, specifically human team interaction.
Contributors: Shaw, Alexandra Luann (Author) / Chiou, Erin (Thesis advisor) / Cooke, Nancy J. (Committee member) / Craig, Scotty (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Previous literature was reviewed in an effort to further investigate the link between the notification level of a cell phone and its effects on driver distraction. Mind-wandering has been suggested as an explanation for distraction and has previously been operationalized with oculomotor movement. Mind-wandering's definition is debated, but in this research it was defined as off-task thoughts that occur because the task does not require full cognitive capacity. Drivers were asked to operate a driving simulator and follow audio turn-by-turn directions while experiencing each of three cell phone notification levels: Control (no texts), Airplane (texts with no notifications), and Ringer (audio notifications). Measures of Brake Reaction Time, Headway Variability, and Average Speed were used to operationalize driver distraction. Drivers showed higher Brake Reaction Time and Headway Variability, along with lower Average Speed, in both experimental conditions compared with the Control condition. This is consistent with previous research in the field, implying a distracted state. Oculomotor movement was measured as the percentage of time the participant spent looking at the road; there was no significant difference between conditions on this measure. The results of this research indicate that, even while not interacting with a cell phone, no audio notification is required to induce a state of distraction. This phenomenon could not be linked to mind-wandering.
Contributors: Radina, Earl (Author) / Gray, Robert (Thesis advisor) / Chiou, Erin (Committee member) / Branaghan, Russell (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Reading partners' actions correctly is essential for successful coordination, but interpretation does not always reflect reality. Attribution biases, such as the self-serving and correspondence biases, lead people to misinterpret their partners' actions and falsely assign blame after an unexpected event. These biases thus further influence people's trust in their partners, including machine partners. The increasing capabilities and complexity of machines allow them to work physically alongside humans; however, these improvements may interfere with people's ability to accurately calibrate their trust in machines and their capabilities, which requires an understanding of how attribution biases affect human-machine coordination. Specifically, the current thesis explores how the development of trust in a partner is influenced by attribution biases and people's assignment of blame for a negative outcome. This study can also suggest how a machine partner should be designed to react to environmental disturbances and report the appropriate level of information about external conditions.
Contributors: Hsiung, Chi-Ping (M.S.) (Author) / Chiou, Erin (Thesis advisor) / Cooke, Nancy J. (Thesis advisor) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
While various collision warning studies in driving have been conducted, only a handful have investigated the effectiveness of warnings with a distracted driver. Across four experiments, the present study aimed to address this apparent gap in the literature on distracted drivers and warning effectiveness, specifically by studying various warnings presented to drivers while they were operating a smartphone. Experiment One attempted to understand which smartphone tasks (text vs. image; self-paced vs. other-paced) are the most distracting to a driver. Experiment Two compared the effectiveness of different smartphone-based applications (apps) for mitigating driver distraction. Experiment Three investigated the effects of informative auditory and tactile warnings designed to convey directional information (moving towards or away) to a distracted driver. Lastly, Experiment Four extended the research into the area of autonomous driving by investigating the effectiveness of different auditory take-over request signals. Novel to both Experiments Three and Four was that the warnings were delivered from the source of the distraction (i.e., either a sound triggered at the smartphone's location or a vibration given on the wrist of the hand holding the smartphone). This warning placement was an attempt to break the driver's attentional focus on the smartphone and to determine how best to re-orient the driver in order to improve the driver's situational awareness (SA). The overall goal was to explore these novel methods of improving SA so drivers may more quickly and appropriately respond to a critical event.
Contributors: McNabb, Jaimie Christine (Author) / Gray, Dr. Rob (Thesis advisor) / Branaghan, Dr. Russell (Committee member) / Becker, Dr. Vaughn (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
With the growing prevalence of autonomous vehicles, it is important to understand the relationship between autonomous vehicles and the other drivers around them. More specifically, how does one's knowledge about autonomous vehicles (AVs) affect positive and negative affect towards driving in their presence? Furthermore, how does trust in autonomous vehicles correlate with those emotions? These questions were addressed by conducting a survey to measure participants' positive affect, negative affect, and trust when driving in the presence of autonomous vehicles. Participants were issued a pretest measuring existing knowledge of autonomous vehicles, followed by measures of affect and trust. After completing this pretest portion of the study, participants were given information about how autonomous vehicles work and were then presented with a posttest identical to the pretest. The educational intervention had no effect on positive or negative affect, though there was a positive relationship between positive affect and trust and a negative relationship between negative affect and trust. These findings will be used to inform future research on trust and autonomous vehicles using a test bed developed at Arizona State University, which allows researchers to examine the behavior of multiple participants at the same time and to include autonomous vehicles in studies.
Contributors: Martin, Sterling (Author) / Cooke, Nancy J. (Thesis advisor) / Chiou, Erin (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Human-robot teams (HRTs) have seen more frequent use over the past few years, specifically in the context of Search and Rescue (SAR) environments. Trust is an important factor in the success of HRTs, and both trust and reliance must be appropriately calibrated for the human operator to work faultlessly with a robot teammate. In highly complex and time-restrictive environments, such as a search and rescue mission following a disaster, uncertainty information may be given by the robot in the form of confidence to help properly calibrate trust and reliance. This study seeks to examine the impact that confidence information may have on trust and how it may help calibrate reliance in complex HRTs. Trust and reliance data were gathered using a simulated SAR task environment in which participants received confidence information from the robot for one of two missions. Results from this study indicated that trust was higher when participants received confidence information from the robot; however, no clear relationship between confidence and reliance was found. The findings from this study can be used to further improve human-robot teaming in search and rescue tasks.
Contributors: Wolff, Alexandra (Author) / Cooke, Nancy J (Thesis advisor) / Chiou, Erin (Committee member) / Gray, Rob (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The current study aims to explore factors affecting trust in human-drone collaboration. A gap exists in research surrounding civilian drone use and the role of trust in human-drone interaction and collaboration. Specifically, existing research lacks an explanation of the relationship between drone pilot experience, trust, trust-related behaviors, and other factors. Using two dimensions of trust in human-automation teams, purpose and performance, the effects of experience on drone design and trust are studied to explore factors that may contribute to such a model. An online survey was conducted to examine civilian drone operators' experience, familiarity, expertise, and trust in commercially available drones. It was predicted that factors of prior experience (familiarity, self-reported expertise) would have a significant effect on trust in drones. The choice to use or exclude the drone propellers in a search-and-identify scenario, paired with the pilots' experience with drones, would further confirm the relevance of the trust dimensions of purpose versus performance in the human-drone relationship. If the pilot has a positive sense of purpose and benevolence with the drone, the pilot trusts that the drone has a positive intent towards them and the task; if the pilot has trust in the performance of the drone, they ascertain that the drone has the skill to do the task. The researcher found no significant differences between mean trust scores across levels of familiarity, but did find some interaction between self-reported expertise, familiarity, and trust. Future research should further explore more concrete measures of situational participant factors, such as self-confidence and expertise, to understand their role in civilian pilots' trust in their drones.
Contributors: Niichel, Madeline Kathleen (Author) / Chiou, Erin (Thesis advisor) / Cooke, Nancy J. (Committee member) / Craig, Scotty (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The purpose of this review is to determine how to measure and assess human trust in medical technology. A systematic literature review was selected as the path to understanding the landscape of trust measurement to date. I started by creating a method for systematically reading through related studies in databases, then summarizing the results and concluding with a recommended design for the upcoming study. This required searching several databases and learning the advanced search methods for each in order to determine which databases provided the most relevant results. From there, I examined the results, keeping track of them in a spreadsheet. The first pass filtered out results that did not include detailed methods of measuring trust. The second pass took detailed notes on the remaining studies, tracking authors, participants, subjects, methods, instruments, issues, limitations, analytics, and validation. After summarizing the results, discussing trends in them, and noting limitations, a conclusion was devised. The recommendation is to use an uncompressed self-reported questionnaire with 4-10 questions on a six-point Likert scale, with reversed scales throughout. Though the studies analyzed were specific to medical settings, this method can work outside the medical setting for measuring human trust.
Contributors: Gaugler, Grady (Author) / Chiou, Erin (Thesis director) / Craig, Scotty (Committee member) / Dean, Herberger Institute for Design and the Arts (Contributor) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05