Description

The rapid increase in the volume and complexity of data has accelerated the spread of Artificial Intelligence (AI) applications, primarily as intelligent machines, in everyday life. Providing explanations is considered an imperative ability for an AI agent in a human-robot teaming framework, as explanations convey the rationale behind the agent's decision-making. The validity of AI models is therefore constrained by their ability to explain their decision-making rationale. On the other hand, AI agents cannot perceive the social situations that human experts recognize using their background knowledge, particularly in domains such as cybersecurity and the military. Social behavior depends on situation awareness, and efficient human-AI collaboration relies on interpretability, transparency, and fairness. Consequently, the human remains an essential element in planning, especially when the problem's constraints are difficult to express to an agent in a dynamic setting.

This dissertation first develops model-based explanation generation approaches that predict where the human teammate would misunderstand the plan and generate explanations accordingly. The robot's generated explanations, or its interactive explicable behavior, manage the human teammate's cognitive workload and increase overall team situation awareness throughout human-robot interaction. The dissertation then focuses on a rule-based model that preserves the team's collaborative engagement, exploring essential aspects of facilitator-agent design. In addition to recognizing where discrepancies might arise in the plan, focusing on the decision-making process provides insight into the reasons behind conflicts between the human's expectations and the robot's behavior. Employing a rule-based framework shifts the focus from assisting an individual (human) teammate to helping the team interactively while maintaining collaboration. Concentrating on teaming, in turn, provides the opportunity to recognize cognitive biases that skew teammates' expectations and affect interaction behavior.

This dissertation investigates how to maintain collaborative engagement and cognitive readiness for collaborative planning tasks, and it lays out a planning framework centered on the human teammate's cognitive ability to understand machine-provided explanations while collaborating on a planning task. To that end, it explores the design of an AI facilitator that helps a team plan collaboratively on a challenging task, mitigates teaming biases, and communicates effectively. It also investigates how certain cognitive biases affect the task outcome and shape the utility function. The facilitator's role is to support goal alignment, consensus on planning strategies, utility management, and effective communication, and to mitigate biases.

Details

Title
• Towards Human-Machine Symbiosis: Design for Effective AI Facilitation
Contributors
Date Created
• 2021
Resource Type
• Text
Note
• Partial requirement for: Ph.D., Arizona State University, 2021
• Field of study: Computer Science
