Matching Items (10)
Description
Current work in planning assumes that user preferences and/or domain dynamics are completely specified in advance, and aims to search for a single solution plan that satisfies them. In many real-world scenarios, however, providing a complete specification of user preferences and domain dynamics is a time-consuming and error-prone task. More often than not, a user may provide no knowledge, or at best partial knowledge, of her preferences with respect to a desired plan. Similarly, a domain writer may only be able to determine certain parts, not all, of the model of some actions in a domain. Such modeling issues require new concepts of what a solution should be, and novel techniques for solving the problem. When user preferences are incomplete, rather than presenting a single plan, the planner must instead provide a set of plans containing one or more plans that are similar to the one the user prefers. This research first proposes the use of different measures to capture the quality of such plan sets: domain-independent distance measures based on plan elements if no knowledge of the user preferences is given, or the Integrated Preference Function measure when incomplete knowledge of such preferences is provided. It then investigates various heuristic approaches to generating plan sets in accordance with these measures, and presents empirical results demonstrating the promise of the methods. The second part of this research addresses planning problems with incomplete domain models, specifically those annotated with possible preconditions and effects of actions. It formalizes the notion of plan robustness, capturing the probability that a plan succeeds during execution, and proposes a method of assessing plan robustness based on weighted model counting. Two approaches for synthesizing robust plans are introduced. The first compiles the robust plan synthesis problem into a conformant probabilistic planning problem. The second approximates the robustness measure with lower and upper bounds, incorporating them into a stochastic local search that estimates the distance heuristic to a goal state. The resulting planner outperforms a state-of-the-art planner that can handle incomplete domain models in both plan quality and planning time.
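To make the idea of a domain-independent, plan-element-based distance concrete, here is a minimal sketch (an illustrative example, not the dissertation's exact measure) that compares two plans by the sets of actions they contain; the action names are hypothetical:

```python
def action_set_distance(plan_a, plan_b):
    """Jaccard-style distance between two plans, based on the sets of
    actions they contain (one of several possible plan-element bases:
    actions, states visited, or causal links)."""
    a, b = set(plan_a), set(plan_b)
    if not a and not b:
        return 0.0  # two empty plans are identical
    return 1.0 - len(a & b) / len(a | b)

# Two example plans sharing one action out of four distinct actions.
p1 = ["pick_up", "move", "drop"]
p2 = ["move", "recharge"]
print(action_set_distance(p1, p2))  # 0.75
```

A measure like this could then score a plan set by, for example, the minimum or average pairwise distance among its members, rewarding diversity when nothing is known about the user's preferences.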
Contributors: Nguyễn, Tuấn Anh (Author) / Kambhampati, Subbarao (Thesis advisor) / Baral, Chitta (Committee member) / Do, Minh (Committee member) / Lee, Joohyung (Committee member) / Smith, David E. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
As robotic technology and its various uses grow steadily more complex and ubiquitous, humans are coming into increasing contact with robotic agents. A large portion of such contact is cooperative interaction, where both humans and robots are required to work on the same application towards achieving common goals. These application scenarios are characterized by a need to leverage the strengths of each agent as part of a unified team to reach those common goals. To ensure that the robotic agent is truly a contributing team-member, it must exhibit some degree of autonomy in achieving goals that have been delegated to it. Indeed, a significant portion of the utility of such human-robot teams derives from the delegation of goals to the robot, and autonomy on the part of the robot in achieving those goals. In order to be considered truly autonomous, the robot must be able to make its own plans to achieve the goals assigned to it, with only minimal direction and assistance from the human.

Automated planning provides the solution to this problem -- indeed, one of the main motivations that underpinned the beginnings of the field of automated planning was to provide planning support for Shakey the robot with the STRIPS system. For a long time, however, automated planners suffered from scalability issues that precluded their application to real-world, real-time robotic systems. Recent decades have seen a gradual resolution of those issues, and fast planning systems are now the norm rather than the exception. However, some of these advances in speed and scalability have been achieved by ignoring or abstracting out challenges that real-world integrated robotic systems must confront.

In this work, the problem of planning for human-robot teaming is introduced. The central idea -- the use of automated planning systems as mediators in such human-robot teaming scenarios -- and the main challenges, inspired by real-world scenarios, that must be addressed in order to make such planning seamless are presented: (i) goals that can be specified or changed at execution time, after the planning process has completed; (ii) worlds and scenarios where the state changes dynamically while a previous plan is executing; (iii) models that are incomplete and can be changed during execution; and (iv) information about the human agent's plan and intentions that can be used for coordination. These challenges are compounded by the fact that the human-robot team must execute in an open world, rife with dynamic events and other agents, and in a manner that encourages the exchange of information between the human and the robot. As an answer to these challenges, implemented solutions and a fielded prototype that combines all of those solutions into one planning system are discussed. Results from running this prototype in real-world scenarios are presented, and extensions to some of the solutions are offered as appropriate.
Contributors: Talamadupula, Kartik (Author) / Kambhampati, Subbarao (Thesis advisor) / Baral, Chitta (Committee member) / Liu, Huan (Committee member) / Scheutz, Matthias (Committee member) / Smith, David E. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Automated planning problems classically involve finding a sequence of actions that transforms an initial state into some state satisfying a conjunctive set of goals with no temporal constraints. But in many real-world problems, the best plan may involve satisfying only a subset of goals or missing defined goal deadlines. For example, this may be required when goals are logically conflicting, or when there are time or cost constraints such that achieving all goals on time may be too expensive. In this case, goals and deadlines must be declared as soft. I call these partial satisfaction planning (PSP) problems. In this work, I focus on particular types of PSP problems, where goals are given a quantitative value based on whether (or when) they are achieved. The objective is to find a plan with the best quality. A first challenge is in finding adequate goal representations that capture common types of goal achievement rewards and costs. One popular representation is to give a single reward for each goal of a planning problem. I expand on this approach by allowing users to directly introduce utility dependencies, so that the reward for achieving a goal can change based on which other goals the plan achieves. Next, I introduce time-dependent goal costs, where a plan incurs a penalty if it achieves a goal past a specified deadline. To solve PSP problems with goal utility dependencies, I look at using state-of-the-art methodologies currently employed for classical planning problems involving heuristic search. In doing so, one faces the challenge of simultaneously determining the best set of goals and the best plan to achieve them. This is complicated by utility dependencies defined by a user and cost dependencies within the plan. To address this, I introduce a set of heuristics based on combinations of relaxed plans and integer programming formulations. Further, I explore an approach to improve search through learning techniques, using automatically generated state features to find new states from which to search. Finally, the investigation into handling time-dependent goal costs leads to an improved search technique derived from observations based on solving discretized approximations of cost functions.
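The net-benefit view behind these PSP problems can be sketched roughly as follows (the goal names, rewards, and dependency bonus below are hypothetical illustrations, not the dissertation's encoding): a plan is scored by the rewards of the goals it achieves, plus utility-dependency bonuses for achieving certain goal combinations, minus the plan's cost.

```python
def net_benefit(achieved, utilities, dependencies, plan_cost):
    """Score a plan: individual goal rewards, plus utility-dependency
    bonuses for goal combinations achieved together, minus plan cost."""
    value = sum(utilities[g] for g in achieved)
    for goal_set, bonus in dependencies.items():
        if goal_set <= achieved:  # bonus fires only if every goal in the set holds
            value += bonus
    return value - plan_cost

# Hypothetical rover-style goals: a dependency makes the pair worth more
# together than the sum of the individual rewards.
utilities = {"photo": 10, "sample": 25}
dependencies = {frozenset({"photo", "sample"}): 15}
print(net_benefit({"photo", "sample"}, utilities, dependencies, plan_cost=20))  # 30
```

Under this objective, a planner may rationally abandon a goal whose achievement cost exceeds its marginal contribution to the score, which is exactly the goal-selection problem the heuristics above must reason about.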
Contributors: Benton, J (Author) / Kambhampati, Subbarao (Thesis advisor) / Baral, Chitta (Committee member) / Do, Minh B. (Committee member) / Smith, David E. (Committee member) / Langley, Pat (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The 21st-century professional or knowledge worker spends much of the working day engaging others through electronic communication. The modes of communication available to knowledge workers have rapidly increased due to advances in computerized technology: conference and video calls, instant messaging, e-mail, social media, podcasts, audio books, webinars, and much more. Professionals who think for a living express feelings of stress about their ability to respond and fear missing critical tasks or information as they attempt to wade through all the electronic communication that floods their inboxes. Although many electronic communication tools compete for the attention of the contemporary knowledge worker, most professionals use an electronic personal information management (PIM) system, more commonly known as an e-mail application and often the ubiquitous Microsoft Outlook program. The aim of this research was to provide knowledge workers with solutions for managing the influx of electronic communication that arrives daily by studying the workers in their working environment. This dissertation represents a quest to understand the current strategies knowledge workers use to manage their e-mail, and whether modifying e-mail management strategies can have an impact on productivity and stress levels for these professionals. Today’s knowledge workers rarely work entirely alone, justifying the importance of also exploring methods to improve electronic communications within teams.
Contributors: Counts, Virginia (Author) / Parrish, Kristen (Thesis advisor) / Allenby, Braden (Thesis advisor) / Landis, Amy (Committee member) / Cooke, Nancy J. (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Reasoning about the activities of cyber threat actors is critical to defending against cyber attacks. However, this task is difficult for a variety of reasons. In simple terms, it is difficult to determine who the attacker is, what the attacker's goals are, and how they will carry out their attacks. These three questions essentially entail understanding the attacker's use of deception, the capabilities available, and the intent of launching the attack. These three issues are highly interrelated. If an adversary can hide their intent, they can better deceive a defender. If an adversary's capabilities are not well understood, then determining their goals becomes difficult, as the defender is uncertain whether they have the necessary tools to accomplish them. However, the understanding of these aspects is also mutually supportive: if we have a clear picture of capabilities, intent can better be deciphered; if we understand intent and capabilities, a defender may be able to see through deception schemes.

In this dissertation, I present three pieces of work that tackle these questions to obtain a better understanding of cyber threats. First, we introduce a new reasoning framework to address deception. We evaluate the framework by building a dataset from the DEFCON capture-the-flag exercise to identify the person or group responsible for a cyber attack. We demonstrate that the framework not only handles cases of deception but also provides transparent decision making in identifying the threat actor. The second task uses a cognitive learning model to determine the intent, i.e., the goals of the threat actor on the target system. The third task looks at understanding the capabilities of threat actors to target systems by identifying at-risk systems from hacker discussions on darkweb websites. To achieve this task, we gather discussions from more than 300 darkweb websites relating to malicious hacking.
Contributors: Nunes, Eric (Author) / Shakarian, Paulo (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Baral, Chitta (Committee member) / Cooke, Nancy J. (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The Internet is a major source of online news content. Online news is a form of large-scale narrative text with rich, complex contents that embed deep meanings (facts, strategic communication frames, and biases) for shaping and transitioning standards, values, attitudes, and beliefs of the masses. Currently, this body of narrative text remains untapped due, in large part, to human limitations. The human ability to comprehend rich text and extract hidden meanings is far superior to known computational algorithms but remains unscalable. In this research, computational treatment is given to online news framing for exposing a deeper level of expressivity coined “double subjectivity,” as characterized by its cumulative amplification effects. A visual language is offered for extracting spatial and temporal dynamics of double subjectivity that may give insight into social influence on critical issues, such as environmental, economic, or political discourse. This research offers the benefits of 1) scalability for processing hidden meanings in big data and 2) visibility of the entire network dynamics over time and space, giving users insight into the current status and future trends of mass communication.
Contributors: Cheeks, Loretta H. (Author) / Gaffar, Ashraf (Thesis advisor) / Wald, Dara M. (Committee member) / Ben Amor, Hani (Committee member) / Doupe, Adam (Committee member) / Cooke, Nancy J. (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
The increasing role of highly automated and intelligent systems as team members has started a paradigm shift from human-human teaming to Human-Autonomy Teaming (HAT). However, moving from human-human teaming to HAT is challenging. Teamwork requires skills that are often missing in robots and synthetic agents. It is possible that adding a synthetic agent as a team member may lead teams to demonstrate different coordination patterns, resulting in differences in team cognition and ultimately team effectiveness. The theory of Interactive Team Cognition (ITC) emphasizes the importance of team interaction behaviors over the collection of individual knowledge. In this dissertation, Nonlinear Dynamical Methods (NDMs) were applied to capture characteristics of overall team coordination and communication behaviors. The findings supported the hypothesis that coordination stability is related to team performance in a nonlinear manner, with optimal performance associated with moderate stability coupled with flexibility. Thus, mechanisms need to be built into HATs so that they demonstrate moderately stable and flexible coordination behavior to achieve team-level goals under routine and novel task conditions.
Contributors: Demir, Mustafa, Ph.D. (Author) / Cooke, Nancy J. (Thesis advisor) / Bekki, Jennifer (Committee member) / Amazeen, Polemnia G. (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
What makes a human, artificial intelligence, and robot team (HART) succeed despite unforeseen challenges in a complex sociotechnical world? Are there personalities that are better suited for HARTs facing the unexpected? Only recently has resilience been considered specifically at the team level, and few studies have addressed team resilience for HARTs. Team resilience here is defined as the ability of a team to reorganize team processes to rebound, or morph, to overcome an unforeseen challenge. A distinction from the individual, group, or organizational aspects of resilience for teams is how team resilience trades off with team interdependent capacity. The following study collected data from 28 teams composed of two human participants (recruited from a university populace) and a synthetic teammate (played by an experienced experimenter). Each team completed a series of six reconnaissance missions presented to them in a Minecraft world. The research aim was to identify how to better integrate synthetic teammates for high-risk, high-stress dynamic operations to boost HART performance and HART resilience. All team communications were conducted orally over Zoom. The primary manipulation was the communication given by the synthetic teammate (a between-subjects manipulation with two conditions, Task and Task+): the Task teammate communicated only the essentials, while the Task+ teammate also offered clear and concise communications of its own capabilities and limitations. Performance and resilience were measured using a primary mission task score (based upon how many tasks teams completed), time-based measures (such as how long it took to recognize a problem or reorder team processes), and a subjective team resilience score (calculated from participant responses to a survey prompt). The research findings suggest that the clear and concise reminders from Task+ enhanced HART performance and HART resilience during high-stress missions in which the teams were challenged by novel events. An exploratory study regarding which personalities may correlate with these improved performance metrics indicated that the Big Five traits of extraversion and conscientiousness were positively correlated, whereas neuroticism was negatively correlated, with higher HART performance and HART resilience. Future integration of synthetic teammates must consider the types of communications that will be offered to maximize HART performance and HART resilience.
Contributors: Graham, Hudson D. (Author) / Cooke, Nancy J. (Thesis advisor) / Gray, Robert (Committee member) / Holder, Eric (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
In settings where a human and an embodied AI (artificially intelligent) agent coexist, the AI agent has to be capable of reasoning with the human's preconceived notions about the environment as well as with the human's perception limitations. In addition, it should be capable of communicating intentions and objectives effectively to the human-in-the-loop. While acting in the presence of human observers, the AI agent can synthesize interpretable behaviors like explicable, legible, and assistive behaviors by accounting for the human's mental model (inclusive of her sensor model) in its reasoning process. This thesis will study different behavior synthesis algorithms that focus on improving the interpretability of the agent's behavior in the presence of a human observer. Further, this thesis will study how environment redesign strategies can be leveraged to improve the overall interpretability of the agent's behavior. At times, the agent's environment may also contain purely adversarial entities or mixed entities (i.e., adversarial as well as cooperative entities) that are trying to infer information from the AI agent's behavior. In such settings, it is crucial for the agent to exhibit obfuscatory behavior that prevents sensitive information from falling into the hands of the adversarial entities. This thesis will show that it is possible to synthesize interpretable as well as obfuscatory behaviors using a single underlying algorithmic framework.
Contributors: Kulkarni, Anagha (Author) / Kambhampati, Subbarao (Thesis advisor) / Kamar, Ece (Committee member) / Smith, David E. (Committee member) / Srivastava, Siddharth (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
It is difficult to imagine a society that does not utilize teams. At the same time, teams need to evolve to meet today’s challenges of the ever-increasing proliferation of data and complexity. It may be useful to add artificially intelligent (AI) agents to team up with humans. As AI agents are integrated into the team, the first study asks what roles AI agents can take. It investigates this issue by asking whether an AI agent can take the role of a facilitator and, in turn, improve planning outcomes by facilitating team processes. Results indicate that the human facilitator was significantly better than the AI facilitator at reducing cognitive biases such as groupthink, anchoring, and information pooling, as well as at increasing decision quality and score. Additionally, participants viewed the AI facilitator negatively and ignored its inputs compared to the human facilitator. Yet, participants in the AI Facilitator condition performed significantly better than participants in the No Facilitator condition, illustrating that having an AI facilitator was better than having no facilitator at all. The second study explores whether artificial social intelligence (ASI) agents can take the role of advisors and subsequently improve team processes and mission outcomes during a simulated search-and-rescue mission. The results of this study indicate that although ASI advisors can successfully advise teams, they also use a significantly greater number of taskwork interventions than teamwork interventions. Additionally, this study served to identify what the ASI advisors got right compared to the human advisor and vice versa. Implications and future directions are discussed.
Contributors: Buchanan, Verica (Author) / Cooke, Nancy J. (Thesis advisor) / Gutzwiller, Robert S. (Committee member) / Roscoe, Rod D. (Committee member) / Arizona State University (Publisher)
Created: 2023