This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations and theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Description
As intelligent agents become pervasive in our lives, they are expected not only to achieve tasks alone but also to engage in tasks with humans in the loop. In such cases, the human naturally forms an understanding of the agent, which affects their perception of the agent's behavior. However, such an understanding inevitably deviates from the ground truth, for reasons such as the human's lack of understanding of the domain or misunderstanding of the agent's capabilities. These differences create a mismatch between the human's expectation of the agent's behavior and the agent's optimal behavior, thereby biasing the human's assessment of the agent's performance. In this dissertation, I focus on cases where these differences arise from a biased belief about domain dynamics. In particular, I investigate the impact of such a biased belief on the agent's decision-making process in two problem settings, from a learning perspective. In the first setting, the agent must accomplish a task alone but infers the human's objectives from the human's feedback on the agent's behavior in the environment. In such a case, the human's biased feedback could mislead the agent into learning a reward function that results in a sub-optimal and potentially undesired policy. In the second setting, the agent must accomplish a task with a human observer. Because the agent's optimal behavior may not match the human's expectation under the biased belief, the agent's optimal behavior may be viewed as inexplicable, leading to degraded performance and loss of trust. Consequently, this dissertation proposes approaches that (1) endow the agent with the ability to be aware of the human's biased belief while inferring the human's objectives and thereby (2) neutralize the impact of the model differences in a reinforcement learning framework, and (3) behave explicably by reconciling the human's expectation and optimality during decision-making.
Contributors: Gong, Ze (Author) / Zhang, Yu (Thesis advisor) / Amor, Hani Ben (Committee member) / Kambhampati, Subbarao (Committee member) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2022
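
To make the first setting in the abstract above concrete, here is a minimal, hypothetical sketch (not code from the dissertation): an agent infers which candidate reward function the human intends from approve/disapprove feedback, and it scores that feedback under the human's biased transition model rather than the true dynamics. The chain MDP, the Boltzmann feedback model, and all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain MDP: 4 states in a row, 2 actions (0 = left, 1 = right).
n_states, n_actions, gamma = 4, 2, 0.9

def chain_dynamics(slip):
    """Transition tensor T[a, s, s']: the action moves one step in its
    direction but fails (stays in place) with probability `slip`."""
    T = np.zeros((n_actions, n_states, n_states))
    for s in range(n_states):
        left, right = max(s - 1, 0), min(s + 1, n_states - 1)
        T[0, s, left] += 1 - slip
        T[0, s, s] += slip
        T[1, s, right] += 1 - slip
        T[1, s, s] += slip
    return T

true_T = chain_dynamics(slip=0.1)   # actual environment dynamics
human_T = chain_dynamics(slip=0.6)  # human's biased belief: actions often fail

def q_values(T, r, iters=200):
    """Q-values from plain value iteration under dynamics T and reward r."""
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = r[None, :] + gamma * (T @ V)  # Q[a, s]
        V = Q.max(axis=0)
    return Q

def feedback_likelihood(T, r, s, a, approved, beta=5.0):
    """Probability that a Boltzmann-rational human, who evaluates actions
    under dynamics T and reward r, approves action a in state s."""
    Q = q_values(T, r)
    p = np.exp(beta * Q[a, s]) / np.exp(beta * Q[:, s]).sum()
    return p if approved else 1.0 - p

# Two candidate reward hypotheses: the goal is at the left end or the right end.
candidates = [np.array([1., 0., 0., 0.]), np.array([0., 0., 0., 1.])]
true_reward = candidates[1]

# Simulate feedback from a human whose approvals reflect their biased model.
data = []
for _ in range(30):
    s, a = rng.integers(n_states), rng.integers(n_actions)
    p = feedback_likelihood(human_T, true_reward, s, a, approved=True)
    data.append((s, a, rng.random() < p))

def posterior(model_T):
    """Posterior over candidate rewards when feedback is scored under model_T."""
    log_p = np.zeros(len(candidates))
    for i, r in enumerate(candidates):
        for s, a, approved in data:
            log_p[i] += np.log(feedback_likelihood(model_T, r, s, a, approved))
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

print("posterior, aware of the biased belief:", posterior(human_T))
print("posterior, assuming the true dynamics:", posterior(true_T))
```

The only difference between the two posteriors is which transition model is used to interpret the human's feedback; that gap is what the abstract refers to as a biased belief about domain dynamics.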