This collection includes most ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about each dissertation or thesis includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 2 of 2

Description
Chatbots are widely deployed in customer service, yet there is insufficient research to appropriately refine their conversational abilities. Chatbots are favored for their ability to handle simple and typical user requests, but they have proven prone to conversational breakdowns. This study examined how the use of repair strategies to address conversational breakdowns in a simple versus complex task setting affected user experience. Thirty participants were recruited and assigned to six groups in a two-by-three between-subjects factorial design. Each participant was assigned one of two tasks (simple or complex) and one of three repair strategies (repeat, confirmation, or options). A Wizard-of-Oz approach was used to simulate a chatbot that participants interacted with to complete a task in a hypothetical setting. The researcher-controlled chatbot intentionally broke the conversation multiple times and then repaired it with the assigned repair strategy. Participants then reported their user experience with the chatbot. An analysis of covariance (ANCOVA) was run with task duration as a covariate. Findings indicate that task difficulty had a significant effect on reported user experience, with the simple task producing more positive ratings, whereas the particular repair strategy had no effect. This suggests that simpler tasks lead to a more positive user experience and that the more time spent on a task, the less positive the experience. Overall, the effects of task difficulty and repair strategies on user experience were only partially consistent with previous literature.
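A minimal sketch of the analysis described above (a 2 x 3 between-subjects ANCOVA with task duration as a covariate), written with statsmodels. The data are randomly generated placeholders and all column names (task, repair, duration, ux_score) are illustrative assumptions, not the study's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 30  # thirty participants, five per cell of the 2 x 3 design

# Placeholder dataset: one row per participant with condition assignments,
# task duration (covariate), and a user-experience score.
data = pd.DataFrame({
    "task": np.repeat(["simple", "complex"], n // 2),                  # task difficulty
    "repair": np.tile(["repeat", "confirmation", "options"], n // 3),  # repair strategy
    "duration": rng.normal(300, 60, n),   # task duration in seconds (placeholder)
    "ux_score": rng.normal(4.0, 1.0, n),  # user-experience rating (placeholder)
})

# User experience modeled from task difficulty, repair strategy, and their
# interaction, controlling for task duration.
model = ols("ux_score ~ C(task) * C(repair) + duration", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```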
Contributors: Rios, Aaron (Author) / Cooke, Nancy J. (Thesis advisor) / Gutzwiller, Robert S. (Committee member) / Chiou, Erin K. (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
It is difficult to imagine a society that does not utilize teams. At the same time, teams need to evolve to meet today’s challenges of the ever-increasing proliferation of data and complexity. It may be useful to add artificially intelligent (AI) agents to team up with humans. As AI agents are integrated into the team, the first study asks what roles AI agents can take. It investigates this question by asking whether an AI agent can take the role of a facilitator and, in turn, improve planning outcomes by facilitating team processes. Results indicate that the human facilitator was significantly better than the AI facilitator at reducing cognitive biases such as groupthink, anchoring, and information pooling, as well as at increasing decision quality and score. Additionally, participants viewed the AI facilitator negatively and ignored its inputs compared to the human facilitator. Yet participants in the AI Facilitator condition performed significantly better than participants in the No Facilitator condition, illustrating that having an AI facilitator was better than having no facilitator at all. The second study explores whether artificial social intelligence (ASI) agents can take the role of advisors and subsequently improve team processes and mission outcomes during a simulated search-and-rescue mission. The results of this study indicate that although ASI advisors can successfully advise teams, they use a significantly greater number of taskwork interventions than teamwork interventions. Additionally, this study served to identify what the ASI advisors got right compared to the human advisor and vice versa. Implications and future directions are discussed.
Contributors: Buchanan, Verica (Author) / Cooke, Nancy J. (Thesis advisor) / Gutzwiller, Robert S. (Committee member) / Roscoe, Rod D. (Committee member) / Arizona State University (Publisher)
Created: 2023