Description
Answer Set Programming (ASP) is one of the most prominent and successful knowledge representation paradigms. The success of ASP is due to its expressive non-monotonic modeling language and its efficient computational methods originating from building propositional satisfiability solvers. The wide adoption of ASP has motivated several extensions to its modeling language in order to enhance expressivity, such as incorporating aggregates and interfaces with ontologies. Also, in order to overcome the grounding bottleneck of computation in ASP, there is increasing interest in integrating ASP with other computing paradigms, such as Constraint Programming (CP) and Satisfiability Modulo Theories (SMT). Due to the non-monotonic nature of the ASP semantics, such enhancements turned out to be non-trivial and the existing extensions are not fully satisfactory. We observe that one main reason for the difficulties is rooted in the propositional semantics of ASP, which is limited in handling first-order constructs (such as aggregates and ontologies) and functions (such as constraint variables in CP and SMT) in natural ways. This dissertation presents a unifying view on these extensions by viewing them as instances of formulas with generalized quantifiers and intensional functions. We extend the first-order stable model semantics by Ferraris, Lee, and Lifschitz to allow generalized quantifiers, which cover aggregates, DL-atoms, constraints, and SMT theory atoms as special cases. Using this unifying framework, we study and relate different extensions of ASP. We also present a tight integration of ASP with SMT, based on which we enhance the action language C+ to handle reasoning about continuous changes. Our framework yields a systematic approach to studying and extending non-monotonic languages.
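The non-monotonic stable model semantics underlying ASP can be illustrated with a minimal brute-force sketch (a hypothetical toy for exposition, not any of the systems described in this work): a candidate interpretation is stable exactly when it equals the least model of its Gelfond-Lifschitz reduct, and adding a fact can retract an earlier conclusion.

```python
from itertools import chain, combinations

# Toy normal logic program: each rule is (head, positive_body, negative_body).
# Encodes the classic default "birds fly unless known to be abnormal":
#   flies :- bird, not abnormal.
#   bird.
rules = [
    ("flies", ["bird"], ["abnormal"]),
    ("bird", [], []),
]

def atoms(program):
    found = set()
    for head, pos, neg in program:
        found.add(head)
        found.update(pos)
        found.update(neg)
    return found

def reduct(program, candidate):
    # Gelfond-Lifschitz reduct: drop rules whose negative body intersects
    # the candidate set, then delete the remaining negative literals.
    return [(head, pos) for head, pos, neg in program
            if not (set(neg) & candidate)]

def least_model(definite):
    # Least model of a negation-free program: iterate the
    # immediate-consequence operator to a fixpoint.
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models(program):
    # Brute force over all candidate sets: a candidate is stable iff it
    # equals the least model of its own reduct.
    universe = sorted(atoms(program))
    subsets = chain.from_iterable(
        combinations(universe, r) for r in range(len(universe) + 1))
    return [set(s) for s in subsets
            if least_model(reduct(program, set(s))) == set(s)]

print([sorted(m) for m in stable_models(rules)])
# -> [['bird', 'flies']]

# Adding the fact `abnormal.` non-monotonically retracts `flies`:
print([sorted(m) for m in stable_models(rules + [("abnormal", [], [])])])
# -> [['abnormal', 'bird']]
```

Real ASP solvers avoid this exponential enumeration through grounding plus SAT-style conflict-driven search; the sketch only captures the semantics.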
Contributors: Meng, Yunsong (Author) / Lee, Joohyung (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Baral, Chitta (Committee member) / Fainekos, Georgios (Committee member) / Lifschitz, Vladimir (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Different logic-based knowledge representation formalisms have different limitations either with respect to expressivity or with respect to computational efficiency. First-order logic, which is the basis of Description Logics (DLs), is not suitable for defeasible reasoning due to its monotonic nature. The nonmonotonic formalisms that extend first-order logic, such as circumscription and default logic, are expressive but lack efficient implementations. The nonmonotonic formalisms that are based on the declarative logic programming approach, such as Answer Set Programming (ASP), have efficient implementations but are not expressive enough for representing and reasoning with open domains. This dissertation uses the first-order stable model semantics, which extends both first-order logic and ASP, to relate circumscription to ASP, and to integrate DLs and ASP, thereby partially overcoming the limitations of the formalisms. By exploiting the relationship between circumscription and ASP, well-known action formalisms, such as the situation calculus, the event calculus, and Temporal Action Logics, are reformulated in ASP. The advantages of these reformulations are shown with respect to the generality of the reasoning tasks that can be handled and with respect to the computational efficiency. The integration of DLs and ASP presented in this dissertation provides a framework for integrating rules and ontologies for the semantic web. This framework enables us to perform nonmonotonic reasoning with DL knowledge bases. Observing the need to integrate action theories and ontologies, the above results are used to reformulate the problem of integrating action theories and ontologies as a problem of integrating rules and ontologies, thus enabling us to use the computational tools developed in the context of the latter for the former.
Contributors: Palla, Ravi (Author) / Lee, Joohyung (Thesis advisor) / Baral, Chitta (Committee member) / Kambhampati, Subbarao (Committee member) / Lifschitz, Vladimir (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Goal specification is an important aspect of designing autonomous agents. A goal does not only refer to the set of states for the agent to reach; it also defines restrictions on the paths the agent should follow. Temporal logics are widely used in goal specification. However, they lack the ability to represent goals in non-deterministic domains, goals that change non-monotonically, and goals with preferences. This dissertation defines new goal specification languages by extending temporal logics to address these issues. The first issue considered is goal specification in non-deterministic domains, in which an agent following a policy leads to a set of paths. A logic is proposed to distinguish the paths of the agent from all paths in the domain. In addition, to address the need to compare policies in order to find the best ones, a language capable of quantifying over policies is proposed. As the policy structures of agents play an important role in goal specification, languages are also defined with respect to different policy structures. Moreover, after an agent is given an initial goal, the agent may change its expectations or the domain may change, so previously specified goals may need to be further updated, revised, partially retracted, or even completely changed. Non-monotonic goal specification languages that can make these changes in an elaboration-tolerant manner are needed. Two languages that rely on labeling sub-formulas and connecting multiple rules are developed to address non-monotonicity in goal specification. Also, agents may have preference relations among sub-goals, and these relations may change as agents achieve other sub-goals. By nesting a comparison operator with other temporal operators, a language with dynamic preferences is proposed. Various goals that cannot be expressed in other languages are expressed in the proposed languages. Finally, plans are given for some goals specified in the proposed languages.
Contributors: Zhao, Jicheng (Author) / Baral, Chitta (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Lee, Joohyung (Committee member) / Lifschitz, Vladimir (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
It is difficult to imagine a society that does not utilize teams. At the same time, teams need to evolve to meet today’s challenges of the ever-increasing proliferation of data and complexity. It may be useful to add artificially intelligent (AI) agents to team up with humans. Then, as AI agents are integrated into the team, the first study asks what roles AI agents can take. The first study investigates this issue by asking whether an AI agent can take the role of a facilitator and, in turn, improve planning outcomes by facilitating team processes. Results indicate that the human facilitator was significantly better than the AI facilitator at reducing cognitive biases such as groupthink, anchoring, and information pooling, as well as increasing decision quality and score. Additionally, participants viewed the AI facilitator negatively and ignored its inputs compared to the human facilitator. Yet, participants in the AI Facilitator condition performed significantly better than participants in the No Facilitator condition, illustrating that having an AI facilitator was better than having no facilitator at all. The second study explores whether artificial social intelligence (ASI) agents can take the role of advisors and subsequently improve team processes and mission outcome during a simulated search-and-rescue mission. The results of this study indicate that although ASI advisors can successfully advise teams, they also use a significantly greater number of taskwork interventions than teamwork interventions. Additionally, this study served to identify what the ASI advisors got right compared to the human advisor and vice versa. Implications and future directions are discussed.
Contributors: Buchanan, Verica (Author) / Cooke, Nancy J. (Thesis advisor) / Gutzwiller, Robert S. (Committee member) / Roscoe, Rod D. (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
What makes a human, artificial intelligence, and robot team (HART) succeed despite unforeseen challenges in a complex sociotechnical world? Are there personalities that are better suited for HARTs facing the unexpected? Only recently has resilience been considered specifically at the team level, and few studies have addressed team resilience for HARTs. Team resilience here is defined as the ability of a team to reorganize team processes to rebound or morph to overcome an unforeseen challenge. A distinction from the individual, group, or organizational aspects of resilience for teams is how team resilience trades off with team interdependent capacity. The following study collected data from 28 teams composed of two human participants (recruited from a university populace) and a synthetic teammate (played by an experienced experimenter). Each team completed a series of six reconnaissance missions presented to them in a Minecraft world. The research aim was to identify how to better integrate synthetic teammates for high-risk, high-stress dynamic operations to boost HART performance and HART resilience. All team communications were conducted orally over Zoom. The primary manipulation was the communication given by the synthetic teammate (between-subjects, Task or Task+): Task only communicated the essentials, and Task+ offered clear and concise communications of its own capabilities and limitations. Performance and resilience were measured using a primary mission task score (based upon how many tasks teams completed), time-based measures (such as how long it took to recognize a problem or reorder team processes), and a subjective team resilience score (calculated from participant responses to a survey prompt). The research findings suggest the clear and concise reminders from Task+ enhanced HART performance and HART resilience during high-stress missions in which the teams were challenged by novel events.
An exploratory study regarding what personalities may correlate with these improved performance metrics indicated that the Big Five trait taxonomies of extraversion and conscientiousness were positively correlated, whereas neuroticism was negatively correlated with higher HART performance and HART resilience. Future integration of synthetic teammates must consider the types of communications that will be offered to maximize HART performance and HART resilience.
Contributors: Graham, Hudson D. (Author) / Cooke, Nancy J. (Thesis advisor) / Gray, Robert (Committee member) / Holder, Eric (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Human-robot interaction has expanded immensely within dynamic environments. The goals of human-robot interaction are to increase productivity, efficiency, and safety. In order for the integration of human-robot interaction to be seamless and effective, humans must be willing to trust the capabilities of assistive robots. A major priority for human-robot interaction should be to understand how human dyads have historically been effective within a joint-task setting; this will help ensure that all goals can be met in human-robot settings. The aim of the present study was to examine human dyads and the effects of an unexpected interruption. Humans’ interpersonal and individual levels of trust were studied in order to draw appropriate conclusions. Seventeen dyads of undergraduate and graduate students were recruited from Arizona State University. Participants were assigned to either a surprise condition or a baseline condition. Participants individually took two surveys in order to accurately measure their dispositional and individual levels of trust. The findings showed that participant levels of interpersonal trust were average. Surprisingly, participants in the surprise condition afterwards showed moderate to high levels of dyad trust. This effect showed that participants became more reliant on their partners when interrupted by a surprising event. Future studies will apply this knowledge to human-robot interaction, in order to mimic the seamless team interaction shown in historically effective dyads, specifically human team interaction.
Contributors: Shaw, Alexandra Luann (Author) / Chiou, Erin (Thesis advisor) / Cooke, Nancy J. (Committee member) / Craig, Scotty (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Reasoning about the activities of cyber threat actors is critical to defend against cyber attacks. However, this task is difficult for a variety of reasons. In simple terms, it is difficult to determine who the attacker is, what the desired goals of the attacker are, and how they will carry out their attacks. These three questions essentially entail understanding the attacker’s use of deception, the capabilities available, and the intent of launching the attack. These three issues are highly inter-related. If an adversary can hide their intent, they can better deceive a defender. If an adversary’s capabilities are not well understood, then determining what their goals are becomes difficult, as the defender is uncertain whether they have the necessary tools to accomplish them. However, the understanding of these aspects is also mutually supportive. If we have a clear picture of capabilities, intent can better be deciphered. If we understand intent and capabilities, a defender may be able to see through deception schemes.

In this dissertation, I present three pieces of work to tackle these questions and obtain a better understanding of cyber threats. First, we introduce a new reasoning framework to address deception. We evaluate the framework by building a dataset from a DEFCON capture-the-flag exercise to identify the person or group responsible for a cyber attack. We demonstrate that the framework not only handles cases of deception but also provides transparent decision making in identifying the threat actor. The second task uses a cognitive learning model to determine the intent, i.e., the goals of the threat actor on the target system. The third task looks at understanding the capabilities of threat actors to target systems by identifying at-risk systems from hacker discussions on darkweb websites. To achieve this task, we gather discussions from more than 300 darkweb websites relating to malicious hacking.
Contributors: Nunes, Eric (Author) / Shakarian, Paulo (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Baral, Chitta (Committee member) / Cooke, Nancy J. (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Reading partners’ actions correctly is essential for successful coordination, but interpretation does not always reflect reality. Attribution biases, such as self-serving and correspondence biases, lead people to misinterpret their partners’ actions and falsely assign blame after an unexpected event. These biases further influence people’s trust in their partners, including machine partners. The increasing capabilities and complexity of machines allow them to work physically with humans. However, these improvements may interfere with people’s ability to accurately calibrate their trust in machines and machine capabilities, which requires an understanding of how attribution biases affect human-machine coordination. Specifically, the current thesis explores how the development of trust in a partner is influenced by attribution biases and by people’s assignment of blame for a negative outcome. This study can also suggest how a machine partner should be designed to react to environmental disturbances and to report the appropriate level of information about external conditions.
Contributors: Hsiung, Chi-Ping (M.S.) (Author) / Chiou, Erin (Thesis advisor) / Cooke, Nancy J. (Thesis advisor) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Human-agent teams (HATs) are expected to play a larger role in future command and control systems where resilience is critical for team effectiveness. The question of how HATs interact to be effective in both normal and unexpected situations is worthy of further examination. Exploratory behaviors are one way that adaptive systems discover opportunities to expand and refine their performance. In this study, team interaction exploration is examined in a HAT composed of a human navigator, a human photographer, and a synthetic pilot while they perform a remotely-piloted aerial reconnaissance task. Failures in automation and in the synthetic pilot’s autonomy were injected throughout ten missions as roadblocks. Teams were clustered by performance into high-, middle-, and low-performing groups. It was hypothesized that high-performing teams would exchange more text messages containing unique content or sender-recipient combinations than middle- and low-performing teams, and that teams would exchange fewer unique messages over time. The results indicate that high-performing teams had more unique team interactions than middle-performing teams. Additionally, teams generally had more exploratory team interactions in the first session of missions than in the second session. Implications and suggestions for future work are discussed.
Contributors: Lematta, Glenn Joseph (Author) / Chiou, Erin K. (Thesis advisor) / Cooke, Nancy J. (Committee member) / Roscoe, Rod D. (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Knowledge representation and reasoning is a prominent subject of study within the field of artificial intelligence, concerned with the symbolic representation of knowledge in such a way as to facilitate automated reasoning about this knowledge. Often in real-world domains, it is necessary to perform defeasible reasoning when representing the default behaviors of systems. Answer Set Programming is a widely-used knowledge representation framework that is well-suited for such reasoning tasks and has been successfully applied to practical domains due to efficient computation through grounding--a process that replaces variables with variable-free terms--and propositional solvers similar to SAT solvers. However, some domains pose a challenge for grounding-based methods, such as domains requiring reasoning about continuous time or resources.

To address these domains, there have been several proposals to achieve efficiency through loose integrations with efficient declarative solvers, such as constraint solvers or satisfiability modulo theories solvers. While these approaches successfully avoid substantial grounding, due to the loose integration they are not suitable for performing defeasible reasoning about functions. As a result, this expressive reasoning about functions must either be performed using predicates to simulate the functions or in a way that is not elaboration tolerant. Neither compromise is reasonable: the former suffers from the grounding bottleneck when domains are large, as is often the case in real-world domains, while the latter necessitates non-trivially modifying encodings for each elaboration.

This dissertation presents a novel framework called Answer Set Programming Modulo Theories (ASPMT), a tight integration of the stable model semantics and satisfiability modulo theories. This framework both supports defeasible reasoning about functions and alleviates the grounding bottleneck. Combining the strengths of Answer Set Programming and satisfiability modulo theories enables efficient continuous reasoning while still supporting rich reasoning features such as reasoning about defaults and reasoning in domains with incomplete knowledge. The framework is realized in two prototype implementations, MVSM and ASPMT2SMT, and the latter was recently incorporated into a non-monotonic spatial reasoning system. To define the semantics of the framework, we extend the first-order stable model semantics by Ferraris, Lee, and Lifschitz to allow "intensional functions" and provide analyses of the theoretical properties of this new formalism and of its relationships to existing approaches.
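The grounding bottleneck that ASPMT aims to alleviate can be sketched with a small, hypothetical illustration (the rule and helper here are made up for exposition): grounding substitutes every constant for every variable, so a rule's ground instances multiply as the domain grows, which quickly becomes infeasible over large numeric domains.

```python
from itertools import product

# Hypothetical sketch of grounding: every variable in a rule is replaced
# by every constant in the domain, so a rule with v variables over a
# domain of n constants yields n**v ground instances.
# Rule sketch: p(X, Y) :- q(X), r(Y).

def ground_substitutions(variables, domain):
    """Enumerate all variable-free substitutions for a rule."""
    return [dict(zip(variables, values))
            for values in product(domain, repeat=len(variables))]

print(len(ground_substitutions(["X", "Y"], range(10))))   # 100
print(len(ground_substitutions(["X", "Y"], range(100))))  # 10000

# Over a numeric domain of a million values the same two-variable rule
# would ground to 10**12 instances; a symbolic (SMT-style) treatment of
# such constraints avoids enumerating them at all.
```

This is why a tight integration with SMT solvers, which handle numeric constraints symbolically, sidesteps the blowup that grounding-based computation incurs.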
Contributors: Bartholomew, Michael James (Author) / Lee, Joohyung (Thesis advisor) / Bazzi, Rida (Committee member) / Colbourn, Charles (Committee member) / Fainekos, Georgios (Committee member) / Lifschitz, Vladimir (Committee member) / Arizona State University (Publisher)
Created: 2016