Matching Items (10)
Description
Although current urban search and rescue (USAR) robots are little more than remotely controlled cameras, the end goal is for them to work alongside humans as trusted teammates. Natural language communications and performance data are collected as a team of humans works to carry out a simulated search and rescue task in an uncertain virtual environment. Conditions emulating a remotely controlled robot versus an intelligent one are tested, and differences in performance, situation awareness, trust, workload, and communications are measured. The intelligent robot condition resulted in higher levels of performance and operator situation awareness (SA).
Contributors: Bartlett, Cade Earl (Author) / Cooke, Nancy J. (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Wu, Bing (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Different logic-based knowledge representation formalisms have different limitations either with respect to expressivity or with respect to computational efficiency. First-order logic, which is the basis of Description Logics (DLs), is not suitable for defeasible reasoning due to its monotonic nature. The nonmonotonic formalisms that extend first-order logic, such as circumscription and default logic, are expressive but lack efficient implementations. The nonmonotonic formalisms that are based on the declarative logic programming approach, such as Answer Set Programming (ASP), have efficient implementations but are not expressive enough for representing and reasoning with open domains. This dissertation uses the first-order stable model semantics, which extends both first-order logic and ASP, to relate circumscription to ASP, and to integrate DLs and ASP, thereby partially overcoming the limitations of the formalisms. By exploiting the relationship between circumscription and ASP, well-known action formalisms, such as the situation calculus, the event calculus, and Temporal Action Logics, are reformulated in ASP. The advantages of these reformulations are shown with respect to the generality of the reasoning tasks that can be handled and with respect to the computational efficiency. The integration of DLs and ASP presented in this dissertation provides a framework for integrating rules and ontologies for the semantic web. This framework enables us to perform nonmonotonic reasoning with DL knowledge bases. Observing the need to integrate action theories and ontologies, the above results are used to reformulate the problem of integrating action theories and ontologies as a problem of integrating rules and ontologies, thus enabling us to use the computational tools developed in the context of the latter for the former.
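To make the stable model semantics at the heart of this work concrete, here is a minimal, illustrative Python sketch (not code from the dissertation) that brute-forces the stable models of a tiny ground program via the Gelfond-Lifschitz reduct. It shows the defeasible "birds fly by default" behavior that circumscription and ASP are both used to capture; all identifiers in it are hypothetical.

```python
from itertools import chain, combinations

# Each ground rule is (head, positive_body, negative_body); facts have empty bodies.

def least_model(definite_rules):
    """Least model of a negation-free (definite) program, computed by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos, _ in definite_rules:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(program, candidate):
    """Gelfond-Lifschitz check: drop rules blocked by the candidate, strip the
    remaining negative bodies, and test whether the candidate is the least model."""
    reduct = [(h, pos, ()) for h, pos, neg in program if not (set(neg) & candidate)]
    return least_model(reduct) == candidate

def stable_models(program):
    """Enumerate all stable models of a small ground program by brute force."""
    atoms = {h for h, _, _ in program} | {a for _, p, n in program for a in chain(p, n)}
    subsets = chain.from_iterable(combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
    return [set(s) for s in subsets if is_stable(program, set(s))]

# A default: birds fly unless they are known to be abnormal (negation as failure).
birds = [
    ("bird(tweety)", (), ()),
    ("flies(tweety)", ("bird(tweety)",), ("abnormal(tweety)",)),
]
print(stable_models(birds))                                   # one model: bird(tweety), flies(tweety)
print(stable_models(birds + [("abnormal(tweety)", (), ())]))  # flies(tweety) is no longer concluded
```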
Contributors: Palla, Ravi (Author) / Lee, Joohyung (Thesis advisor) / Baral, Chitta (Committee member) / Kambhampati, Subbarao (Committee member) / Lifschitz, Vladimir (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Humans and robots need to work together as a team to accomplish certain shared goals due to the limitations of current robot capabilities. Human assistance is required to accomplish these tasks, as human capabilities are often better suited for certain tasks and complement robot capabilities in many situations. Given the necessity of human-robot teams, it has long been assumed that for a robotic agent to be an effective team member, it must be equipped with automated planning technologies that help it achieve the goals delegated to it by its human teammates and deduce its own goals so that it can proactively support its human counterparts by inferring their goals. However, there has been no systematic evaluation of the accuracy of this claim.

In my thesis, I perform a human factors analysis of the effectiveness of such automated planning technologies for remote human-robot teaming. In the first part of my study, I investigate the effectiveness of automated planning in remote human-robot teaming scenarios. In the second part, I investigate the effectiveness of a proactive robot assistant in remote human-robot teaming scenarios.

Both investigations are conducted in a simulated urban search and rescue (USAR) scenario in which the human-robot teams are deployed during the early phases of an emergency response to explore all areas of the disaster scene. Through both studies, I evaluate how effective automated planning technology is in helping human-robot teams move closer to human-human teams. I use both objective measures (such as accuracy and time spent on primary and secondary tasks, Robot Attention Demand, etc.) and a set of subjective Likert-scale questions (on situation awareness, immediacy, etc.) to investigate the trade-offs between different types of remote human-robot teams. The results from both studies suggest that intelligent robots with automated planning capability and proactive support ability are welcomed in general.
Contributors: Narayanan, Vignesh (Author) / Kambhampati, Subbarao (Thesis advisor) / Zhang, Yu (Thesis advisor) / Cooke, Nancy J. (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Reasoning about the activities of cyber threat actors is critical to defend against cyber attacks. However, this task is difficult for a variety of reasons. In simple terms, it is difficult to determine who the attacker is, what the attacker’s goals are, and how they will carry out their attacks. These three questions essentially entail understanding the attacker’s use of deception, the capabilities available, and the intent of launching the attack. These three issues are highly inter-related. If an adversary can hide their intent, they can better deceive a defender. If an adversary’s capabilities are not well understood, then determining their goals becomes difficult, as the defender is uncertain whether they have the necessary tools to accomplish them. However, the understanding of these aspects is also mutually supportive. If we have a clear picture of capabilities, intent can be better deciphered. If we understand intent and capabilities, a defender may be able to see through deception schemes.

In this dissertation, I present three pieces of work that tackle these questions to obtain a better understanding of cyber threats. First, we introduce a new reasoning framework to address deception. We evaluate the framework by building a dataset from a DEFCON capture-the-flag exercise to identify the person or group responsible for a cyber attack. We demonstrate that the framework not only handles cases of deception but also provides transparent decision making in identifying the threat actor. The second task uses a cognitive learning model to determine the intent, that is, the goals of the threat actor on the target system. The third task looks at understanding the capabilities of threat actors to target systems by identifying at-risk systems from hacker discussions on darkweb websites. To achieve this task, we gather discussions from more than 300 darkweb websites relating to malicious hacking.
Contributors: Nunes, Eric (Author) / Shakarian, Paulo (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Baral, Chitta (Committee member) / Cooke, Nancy J. (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Knowledge representation and reasoning is a prominent subject of study within the field of artificial intelligence concerned with representing knowledge symbolically in a way that facilitates automated reasoning about it. In real-world domains, it is often necessary to perform defeasible reasoning when representing the default behaviors of systems. Answer Set Programming is a widely used knowledge representation framework that is well suited to such reasoning tasks and has been successfully applied to practical domains thanks to efficient computation through grounding (a process that replaces variables with variable-free terms) and propositional solvers similar to SAT solvers. However, some domains, such as those requiring reasoning about continuous time or resources, pose a challenge for grounding-based methods.
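As a rough illustration of what grounding means and why it can become a bottleneck, the short Python sketch below (illustrative only, not code from the dissertation) naively grounds a single rule schema over a small domain; the number of ground rules grows as the domain size raised to the number of variables.

```python
from itertools import product

def ground(schema, variables, constants):
    """Naive grounding: substitute every combination of constants for the
    variables of a rule schema, yielding variable-free (propositional) rules."""
    for values in product(constants, repeat=len(variables)):
        yield schema.format(**dict(zip(variables, values)))

# One rule schema over a three-constant domain already yields 3 ** 2 = 9 ground rules;
# with large domains (e.g., numeric arguments) this blow-up is the grounding bottleneck.
schema = "reach({X},{Y}) :- edge({X},{Y})."
for rule in ground(schema, ["X", "Y"], ["a", "b", "c"]):
    print(rule)
```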

To address these domains, there have been several proposals to achieve efficiency through loose integrations with efficient declarative solvers such as constraint solvers or satisfiability modulo theories solvers. While these approaches successfully avoid substantial grounding, the loose integration makes them unsuitable for performing defeasible reasoning on functions. As a result, this expressive reasoning on functions must either be performed using predicates to simulate the functions or in a way that is not elaboration tolerant. Neither compromise is reasonable: the former suffers from the grounding bottleneck when domains are large, as is often the case in real-world domains, while the latter requires encodings to be non-trivially modified for each elaboration.

This dissertation presents a novel framework called Answer Set Programming Modulo Theories (ASPMT) that is a tight integration of the stable model semantics and satisfiability modulo theories. This framework both supports defeasible reasoning about functions and alleviates the grounding bottleneck. Combining the strengths of Answer Set Programming and satisfiability modulo theories enables efficient continuous reasoning while still supporting rich reasoning features such as reasoning about defaults and reasoning in domains with incomplete knowledge. The framework is realized in two prototype implementations called MVSM and ASPMT2SMT, and the latter was recently incorporated into a non-monotonic spatial reasoning system. To define the semantics of this framework, we extend the first-order stable model semantics by Ferraris, Lee, and Lifschitz to allow "intensional functions" and provide analyses of the theoretical properties of this new formalism and of its relationships to existing approaches.
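To give a feel for the satisfiability-modulo-theories side of such an integration, the sketch below uses the off-the-shelf z3-solver Python package as a stand-in SMT backend (it is not the MVSM or ASPMT2SMT code named above, and the scenario is hypothetical) to solve constraints over real-valued quantities, the kind of continuous reasoning that grounding-based ASP alone handles poorly.

```python
from z3 import Real, Solver, sat

# Real-valued quantities are handled directly by the theory solver;
# no grounding over the (infinite) domain of reals is required.
level_before, level_after, duration = Real("level_before"), Real("level_after"), Real("duration")

s = Solver()
s.add(level_before == 10)                             # initial tank level (hypothetical)
s.add(duration > 0, duration <= 4)                    # the draining action lasts up to 4 time units
s.add(level_after == level_before - 2.5 * duration)   # continuous effect: drain 2.5 units per time unit
s.add(level_after >= 0)                               # the tank level cannot go negative

if s.check() == sat:
    print(s.model())   # one satisfying assignment for duration and the two levels
```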
Contributors: Bartholomew, Michael James (Author) / Lee, Joohyung (Thesis advisor) / Bazzi, Rida (Committee member) / Colbourn, Charles (Committee member) / Fainekos, Georgios (Committee member) / Lifschitz, Vladimir (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The Internet is a major source of online news content. Online news is a form of large-scale narrative text with rich, complex contents that embed deep meanings (facts, strategic communication frames, and biases) for shaping and transitioning the standards, values, attitudes, and beliefs of the masses. Currently, this body of narrative text remains untapped due, in large part, to human limitations: the human ability to comprehend rich text and extract hidden meanings is far superior to that of known computational algorithms but does not scale. In this research, computational treatment is given to online news framing to expose a deeper level of expressivity, coined “double subjectivity,” characterized by its cumulative amplification effects. A visual language is offered for extracting the spatial and temporal dynamics of double subjectivity, which may give insight into social influence on critical issues such as environmental, economic, or political discourse. This research offers the benefits of 1) scalability for processing hidden meanings in big data and 2) visibility of the entire network dynamics over time and space, giving users insight into the current status and future trends of mass communication.
Contributors: Cheeks, Loretta H. (Author) / Gaffar, Ashraf (Thesis advisor) / Wald, Dara M. (Committee member) / Ben Amor, Hani (Committee member) / Doupe, Adam (Committee member) / Cooke, Nancy J. (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
The increasing role of highly automated and intelligent systems as team members has started a paradigm shift from human-human teaming to Human-Autonomy Teaming (HAT). However, moving from human-human teaming to HAT is challenging, because teamwork requires skills that are often missing in robots and synthetic agents. Adding a synthetic agent as a team member may lead teams to demonstrate different coordination patterns, resulting in differences in team cognition and ultimately team effectiveness. The theory of Interactive Team Cognition (ITC) emphasizes the importance of team interaction behaviors over the collection of individual knowledge. In this dissertation, Nonlinear Dynamical Methods (NDMs) were applied to capture characteristics of overall team coordination and communication behaviors. The findings supported the hypothesis that coordination stability is related to team performance in a nonlinear manner, with optimal performance associated with moderate stability coupled with flexibility. Thus, HATs need mechanisms for demonstrating moderately stable yet flexible coordination behavior to achieve team-level goals under both routine and novel task conditions.
Contributors: Demir, Mustafa, Ph.D. (Author) / Cooke, Nancy J. (Thesis advisor) / Bekki, Jennifer (Committee member) / Amazeen, Polemnia G. (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
What makes a human, artificial intelligence, and robot team (HART) succeed despite unforeseen challenges in a complex sociotechnical world? Are there personalities that are better suited for HARTs facing the unexpected? Only recently has resilience been considered specifically at the team level, and few studies have addressed team resilience for HARTs. Team resilience here is defined as the ability of a team to reorganize team processes to rebound or morph to overcome an unforeseen challenge. What distinguishes it from the individual, group, or organizational aspects of resilience is how team resilience trades off with team interdependent capacity. The present study collected data from 28 teams, each composed of two human participants (recruited from a university populace) and a synthetic teammate (played by an experienced experimenter). Each team completed a series of six reconnaissance missions presented to them in a Minecraft world. The research aim was to identify how to better integrate synthetic teammates for high-risk, high-stress dynamic operations to boost HART performance and HART resilience. All team communications were conducted orally over Zoom. The primary manipulation was the communication given by the synthetic teammate (between-subjects, Task or Task+): Task communicated only the essentials, and Task+ offered clear and concise communications of its own capabilities and limitations. Performance and resilience were measured using a primary mission task score (based upon how many tasks teams completed), time-based measures (such as how long it took to recognize a problem or reorder team processes), and a subjective team resilience score (calculated from participant responses to a survey prompt). The research findings suggest that the clear and concise reminders from Task+ enhanced HART performance and HART resilience during high-stress missions in which the teams were challenged by novel events. An exploratory study regarding which personalities may correlate with these improved performance metrics indicated that the Big Five traits of extraversion and conscientiousness were positively correlated, whereas neuroticism was negatively correlated, with higher HART performance and HART resilience. Future integration of synthetic teammates must consider the types of communications that will be offered to maximize HART performance and HART resilience.
Contributors: Graham, Hudson D. (Author) / Cooke, Nancy J. (Thesis advisor) / Gray, Robert (Committee member) / Holder, Eric (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Human-agent teams (HATs) are expected to play a larger role in future command and control systems, where resilience is critical for team effectiveness. The question of how HATs interact to be effective in both normal and unexpected situations is worthy of further examination. Exploratory behaviors are one way that adaptive systems discover opportunities to expand and refine their performance. In this study, team interaction exploration is examined in a HAT composed of a human navigator, a human photographer, and a synthetic pilot while they perform a remotely piloted aerial reconnaissance task. Failures in automation and in the synthetic pilot’s autonomy were injected throughout ten missions as roadblocks. Teams were clustered by performance into high-, middle-, and low-performing groups. It was hypothesized that high-performing teams would exchange more text messages containing unique content or sender-recipient combinations than middle- and low-performing teams, and that teams would exchange fewer unique messages over time. The results indicate that high-performing teams had more unique team interactions than middle-performing teams. Additionally, teams generally had more exploratory team interactions in the first session of missions than in the second session. Implications and suggestions for future work are discussed.
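As a rough sketch of the interaction-exploration measure described here, the snippet below is a hypothetical illustration (not the study's actual scoring code) of counting messages whose sender-recipient combination or content has not been seen before in a mission log.

```python
# Hypothetical illustration: count "exploratory" messages, i.e., messages whose
# sender-recipient pairing or content has not appeared earlier in the team's log.
messages = [
    {"sender": "navigator", "recipient": "pilot", "text": "waypoint updated"},
    {"sender": "pilot", "recipient": "photographer", "text": "approaching target"},
    {"sender": "navigator", "recipient": "pilot", "text": "waypoint updated"},  # repeat
]

seen_pairs, seen_texts = set(), set()
unique_interactions = 0
for message in messages:
    pair = (message["sender"], message["recipient"])
    if pair not in seen_pairs or message["text"] not in seen_texts:
        unique_interactions += 1
    seen_pairs.add(pair)
    seen_texts.add(message["text"])

print(unique_interactions)  # 2: only the third message repeats both pairing and content
```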
Contributors: Lematta, Glenn Joseph (Author) / Chiou, Erin K. (Thesis advisor) / Cooke, Nancy J. (Committee member) / Roscoe, Rod D. (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
It is difficult to imagine a society that does not utilize teams. At the same time, teams need to evolve to meet today’s challenges of the ever-increasing proliferation of data and complexity. It may be useful to add artificially intelligent (AI) agents to team up with humans. As AI agents are integrated into teams, the first study asks what roles AI agents can take. It investigates this issue by asking whether an AI agent can take the role of a facilitator and, in turn, improve planning outcomes by facilitating team processes. Results indicate that the human facilitator was significantly better than the AI facilitator at reducing cognitive biases such as groupthink, anchoring, and information pooling, as well as at increasing decision quality and score. Additionally, participants viewed the AI facilitator negatively and ignored its inputs compared to the human facilitator. Yet participants in the AI Facilitator condition performed significantly better than participants in the No Facilitator condition, illustrating that having an AI facilitator was better than having no facilitator at all. The second study explores whether artificial social intelligence (ASI) agents can take the role of advisors and subsequently improve team processes and mission outcomes during a simulated search-and-rescue mission. The results of this study indicate that although ASI advisors can successfully advise teams, they also use a significantly greater number of taskwork interventions than teamwork interventions. Additionally, this study served to identify what the ASI advisors got right compared to the human advisor and vice versa. Implications and future directions are discussed.
Contributors: Buchanan, Verica (Author) / Cooke, Nancy J. (Thesis advisor) / Gutzwiller, Robert S. (Committee member) / Roscoe, Rod D. (Committee member) / Arizona State University (Publisher)
Created: 2023