Matching Items (4)
Description
As automation becomes more prevalent in society, systems involving interactive human-automation control grow more frequent. Previous studies have shown accountability to be a valuable way of eliciting human engagement and reducing various biases, but those studies involved the presence of an authority figure during the research. The current research sought to explore the effect of accountability in the absence of an authority figure. To do this, 40 participants played a microworld simulation. Half were told they would be interviewed after the simulation, and half were told data were not being collected. Eleven dependent variables were collected (accountability, number of resources shared, player score, agent score, combined score, and the six measures of the NASA Task Load Index), of which statistically significant effects were found for number of resources shared, player score, and agent score. While not conclusive, the results suggest that accountability affects human-automation interactions even in the absence of an authority figure. Future research should seek a reliable way to measure accountability and examine how long accountability effects last.
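The abstract does not name the statistical test used for the group comparisons, so the following is only an illustrative sketch: it assumes one row per participant, a hypothetical file name, hypothetical column names, and a per-variable Welch's t-test between the two accountability conditions on the three dependent variables reported as significant.

```python
# Illustrative sketch only; test choice, file, and column names are assumptions.
import pandas as pd
from scipy import stats

df = pd.read_csv("microworld_results.csv")  # hypothetical data: one row per participant

dependent_vars = ["resources_shared", "player_score", "agent_score"]  # DVs reported as significant
interview = df[df["condition"] == "interview"]          # told they would be interviewed
no_data = df[df["condition"] == "no_data_collected"]    # told no data were collected

for dv in dependent_vars:
    t, p = stats.ttest_ind(interview[dv], no_data[dv], equal_var=False)  # Welch's t-test
    print(f"{dv}: t = {t:.2f}, p = {p:.3f}")
```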
Contributors: Wilkins, Adam (Author) / Chiou, Erin K. (Thesis advisor) / Gray, Robert (Committee member) / Craig, Scotty (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Chatbots are widely deployed in customer service today, yet research to appropriately refine their conversational abilities remains insufficient. Chatbots are favored for their ability to handle simple and typical user requests, but they have proven prone to conversational breakdowns. This study examined how repair strategies used to combat conversational breakdowns, in a simple versus complex task setting, affected user experience. Thirty participants were recruited and assigned to six groups in a two-by-three between-subjects factorial design. Participants were assigned one of two tasks (simple or complex) and one of three repair strategies (repeat, confirmation, or options). A Wizard-of-Oz approach was used to simulate a chatbot that participants interacted with to complete a task in a hypothetical setting. Participants completed the task with this researcher-controlled chatbot as it intentionally broke down multiple times, each time recovering with the assigned repair strategy. Participants then rated their user experience with the chatbot. An Analysis of Covariance (ANCOVA) was run with task duration as a covariate. Findings indicate that task difficulty had a significant effect, with the simple task producing more positive reported user experience, whereas the particular repair strategy had no effect. This suggests that simpler tasks lead to a more positive user experience and that the more time spent on a task, the less positive the experience. Overall, results on the effects of task difficulty and repair strategies on user experience were only partially consistent with previous literature.
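The abstract names the design (2 x 3 between-subjects) and the analysis (ANCOVA with task duration as covariate). Below is a minimal sketch of how such an analysis could be run; the data file and column names (ux, task, repair, duration) are assumptions, not the thesis's actual variable names.

```python
# Minimal ANCOVA sketch; file and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("chatbot_study.csv")  # hypothetical data: one row per participant

# 2 x 3 between-subjects factors (task, repair) with task duration as a covariate
model = smf.ols("ux ~ C(task) * C(repair) + duration", data=df).fit()
ancova_table = sm.stats.anova_lm(model, typ=2)  # Type II sums of squares
print(ancova_table)
```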
Contributors: Rios, Aaron (Author) / Cooke, Nancy J. (Thesis advisor) / Gutzwiller, Robert S. (Committee member) / Chiou, Erin K. (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Team communication shapes team coordination strategies and situations, as well as how teammates perceive one another. In human-machine teams, these perceptions affect how people trust and anthropomorphize their machine counterparts, which in turn affects future team communication, forming a feedback loop. This thesis investigates how personifying and objectifying content in human-machine team communication relates to team performance and perceptions in a simulated remotely piloted aircraft system task environment. A total of 46 participants, grouped into teams of two, were assigned unique roles and teamed with a synthetic pilot agent that was in reality a trained confederate following a script. Quantities of verbal personifications and objectifications were compared to questionnaire responses about participants' perceived trust and anthropomorphism of the synthetic pilot, as well as to team performance. It was hypothesized that verbal personifications would positively correlate with reflective trust, anthropomorphism, and team performance, and that verbal objectifications would negatively correlate with the same measures. It was also predicted that verbal personifications would decrease over time as human teammates interacted more with the machine teammate, and that verbal objectifications would increase. Verbal personifications were not found to correlate with trust and anthropomorphism outside of perceptions related to gender, although patterns of change in the navigator's personifications coincided with a co-calibration of trust between the navigator and the photographer. Results supported the prediction that verbal objectifications are negatively correlated with trust and anthropomorphism of a teammate. Significant relationships between verbal personifications or objectifications and team performance were not found. This study supports the notion that people verbally personify machines to ease communication when necessary, and that the same processes that underlie tendencies to personify machines may be reciprocally related to those that influence team trust. Overall, this study provides evidence that personifying and objectifying language in human-machine team communication is a viable candidate for measuring the perceptions and states of teams, even in highly restricted communication environments.
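The analysis described is correlational: utterance counts compared against questionnaire scores and performance. The sketch below illustrates one way such correlations might be computed; Spearman's rho is assumed here as a reasonable choice for count data (the thesis may have used a different test), and the file and column names are hypothetical.

```python
# Correlational sketch; test choice, file, and column names are assumptions.
import pandas as pd
from scipy import stats

df = pd.read_csv("hmt_communication.csv")  # hypothetical data: one row per participant

for predictor in ["personifications", "objectifications"]:          # verbal counts
    for outcome in ["trust_score", "anthropomorphism_score", "team_score"]:
        rho, p = stats.spearmanr(df[predictor], df[outcome])
        print(f"{predictor} vs {outcome}: rho = {rho:.2f}, p = {p:.3f}")
```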
Contributors: Cohen, Myke C. (Author) / Cooke, Nancy J. (Thesis advisor) / Chiou, Erin K. (Committee member) / Amazeen, Polemnia G. (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Human-agent teams (HATs) are expected to play a larger role in future command and control systems, where resilience is critical for team effectiveness. The question of how HATs interact to be effective in both normal and unexpected situations is worthy of further examination. Exploratory behaviors are one way that adaptive systems discover opportunities to expand and refine their performance. In this study, team interaction exploration is examined in a HAT composed of a human navigator, a human photographer, and a synthetic pilot while they perform a remotely piloted aerial reconnaissance task. Failures in automation and in the synthetic pilot's autonomy were injected throughout ten missions as roadblocks. Teams were clustered by performance into high-, middle-, and low-performing groups. It was hypothesized that high-performing teams would exchange more text messages containing unique content or sender-recipient combinations than middle- and low-performing teams, and that teams would exchange fewer unique messages over time. The results indicate that high-performing teams had more unique team interactions than middle-performing teams. Additionally, teams generally had more exploratory team interactions in the first session of missions than in the second session. Implications and suggestions for future work are discussed.
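The abstract does not detail the clustering procedure, so the following sketch simply splits teams into performance terciles and compares counts of unique message interactions across groups; the data file and column names are assumptions for illustration only.

```python
# Illustrative grouping sketch; clustering method, file, and column names are assumptions.
import pandas as pd

teams = pd.read_csv("hat_missions.csv")  # hypothetical data: one row per team per mission

team_perf = teams.groupby("team_id")["performance"].mean()
terciles = pd.qcut(team_perf, 3, labels=["low", "middle", "high"])  # performance terciles

unique_msgs = teams.groupby("team_id")["unique_interactions"].sum()
summary = pd.DataFrame({"group": terciles, "unique_interactions": unique_msgs})
print(summary.groupby("group", observed=True)["unique_interactions"].mean())
```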
Contributors: Lematta, Glenn Joseph (Author) / Chiou, Erin K. (Thesis advisor) / Cooke, Nancy J. (Committee member) / Roscoe, Rod D. (Committee member) / Arizona State University (Publisher)
Created: 2019