Description
TaxiWorld is a MATLAB simulation of a city with a fleet of taxis that operate within it, with the goal of transporting passengers to their destinations. The size of the city, the number of available taxis, and the frequency and general locations of fare appearances can all be set on a scenario-by-scenario basis. The taxis must attempt to service the fares as quickly as possible by picking each one up and carrying it to its drop-off location. The TaxiWorld scenario is formally modeled using both Decentralized Partially-Observable Markov Decision Processes (Dec-POMDPs) and Multi-agent Markov Decision Processes (MMDPs). The purpose of developing formal models is to learn how to build and use formal Markov models that can be given to planners to solve for optimal policies in problem domains. However, finding optimal solutions for Dec-POMDPs is NEXP-complete, so an empirical algorithm was also developed as an improvement to the method already in use on the simulator, and the two methods were compared in identical scenarios to determine which is more effective. The empirical method is, of course, not optimal; rather, it attempts to account for some of the most important factors in order to achieve an acceptable level of effectiveness while retaining a reasonable level of computational complexity for online solving.
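As a rough illustration of what such a formal Markov model looks like, the sketch below encodes a one-taxi, one-fare instance as a small MDP and solves it with value iteration. It is written in Python for exposition only; the grid size, reward values, and discount factor are assumptions, not the thesis's actual formulation.

```python
# Illustrative MDP-style encoding of a tiny TaxiWorld instance (one taxi,
# one recurring fare on a small grid). State, action, and reward choices
# here are assumptions for exposition, not the thesis's actual model.
import itertools

GRID = 3                      # 3x3 city grid (assumed size)
CELLS = list(itertools.product(range(GRID), range(GRID)))
ACTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0), "STAY": (0, 0)}
PICKUP, DROPOFF = (0, 0), (2, 2)   # assumed fare origin and destination

def step(taxi, carrying, action):
    """Deterministic transition: move the taxi, pick up / drop off the fare."""
    dx, dy = ACTIONS[action]
    nxt = (min(max(taxi[0] + dx, 0), GRID - 1),
           min(max(taxi[1] + dy, 0), GRID - 1))
    if not carrying and nxt == PICKUP:
        return nxt, True, 0.0            # fare picked up
    if carrying and nxt == DROPOFF:
        return nxt, False, 10.0          # fare delivered: reward (assumed value)
    return nxt, carrying, -1.0           # per-step cost encourages speed

# Value iteration over the joint (position, carrying-fare) state space.
V = {(c, k): 0.0 for c in CELLS for k in (False, True)}
for _ in range(100):
    for (c, k) in V:
        V[(c, k)] = max(r + 0.95 * V[(nc, nk)]
                        for nc, nk, r in (step(c, k, a) for a in ACTIONS))

print(V[((1, 1), False)])  # value of starting mid-grid with no passenger
```

A Dec-POMDP version of the same domain would additionally restrict each taxi to local observations, which is what makes optimal solving NEXP-complete.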
Contributors: White, Christopher (Author) / Kambhampati, Subbarao (Thesis advisor) / Gupta, Sandeep (Committee member) / Varsamopoulos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The increasing popularity of Twitter renders improved trustworthiness and relevance assessment of tweets much more important for search. However, given the limitations on the size of tweets, it is hard to extract ranking measures from a tweet's content alone. I propose a method of ranking tweets by generating a reputation score for each tweet that is based not just on content, but also on additional information from the Twitter ecosystem of users, tweets, and the web pages that tweets link to. This information is obtained by modeling the Twitter ecosystem as a three-layer graph. The reputation score powers two novel methods of ranking tweets by propagating the reputation over an agreement graph based on the tweets' content similarity. Additionally, I show how the agreement graph helps counter tweet spam. An evaluation of my method, RAProp, on 16 million tweets from the TREC 2011 Microblog Dataset shows that it doubles the precision over baseline Twitter Search and achieves higher precision than the current state-of-the-art method. I present a detailed internal empirical evaluation of RAProp in comparison to several alternative approaches I propose, as well as an external evaluation against the current state of the art.
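The sketch below illustrates the general flavor of propagating reputation over a content-agreement graph; the bag-of-words cosine measure, similarity threshold, and damping factor are illustrative assumptions rather than RAProp's actual design.

```python
# Illustrative propagation of per-tweet reputation scores over an "agreement"
# graph built from content similarity. The cosine measure, threshold, and
# damping factor are assumptions for exposition, not RAProp's actual choices.
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bags of words."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def propagate(tweets, seed_scores, threshold=0.3, damping=0.85, iters=20):
    """Spread seed reputation along edges whose similarity exceeds threshold."""
    n = len(tweets)
    edges = [(i, j, cosine(tweets[i], tweets[j]))
             for i in range(n) for j in range(n) if i != j]
    edges = [(i, j, w) for i, j, w in edges if w >= threshold]
    scores = list(seed_scores)
    for _ in range(iters):
        incoming = [0.0] * n
        for i, j, w in edges:
            incoming[j] += w * scores[i]
        scores = [(1 - damping) * s0 + damping * inc
                  for s0, inc in zip(seed_scores, incoming)]
    return scores

tweets = ["earthquake hits city center", "major earthquake in city center",
          "buy cheap watches now"]
print(propagate(tweets, seed_scores=[1.0, 0.5, 0.1]))
```

Note how the off-topic, spam-like tweet acquires no agreement edges and therefore receives almost no propagated reputation, which is the intuition behind using the agreement graph to counter spam.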
Contributors: Ravikumar, Srijith (Author) / Kambhampati, Subbarao (Thesis advisor) / Davulcu, Hasan (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Different logic-based knowledge representation formalisms have different limitations, either with respect to expressivity or with respect to computational efficiency. First-order logic, which is the basis of Description Logics (DLs), is not suitable for defeasible reasoning due to its monotonic nature. The nonmonotonic formalisms that extend first-order logic, such as circumscription and default logic, are expressive but lack efficient implementations. The nonmonotonic formalisms based on the declarative logic programming approach, such as Answer Set Programming (ASP), have efficient implementations but are not expressive enough for representing and reasoning with open domains. This dissertation uses the first-order stable model semantics, which extends both first-order logic and ASP, to relate circumscription to ASP and to integrate DLs and ASP, thereby partially overcoming the limitations of these formalisms. By exploiting the relationship between circumscription and ASP, well-known action formalisms, such as the situation calculus, the event calculus, and Temporal Action Logics, are reformulated in ASP. The advantages of these reformulations are shown with respect to the generality of the reasoning tasks that can be handled and with respect to computational efficiency. The integration of DLs and ASP presented in this dissertation provides a framework for integrating rules and ontologies for the semantic web. This framework enables us to perform nonmonotonic reasoning with DL knowledge bases. In view of the need to integrate action theories and ontologies, the above results are used to reformulate the problem of integrating action theories and ontologies as a problem of integrating rules and ontologies, thus enabling us to use the computational tools developed in the context of the latter for the former.
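For readers unfamiliar with the stable model semantics, the brute-force sketch below checks candidate sets of a small propositional program against the Gelfond-Lifschitz reduct. It is purely expository: the dissertation works with the first-order generalization, and real ASP solvers do not enumerate candidates this way.

```python
# Brute-force check of the (propositional) stable model semantics, for
# exposition only; the dissertation's subject is the first-order
# generalization. A rule is (head, positive_body, negative_body).
from itertools import chain, combinations

def minimal_model(rules):
    """Least model of a negation-free program, by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos, _ in rules:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models(rules):
    atoms = {a for h, p, n in rules for a in [h, *p, *n]}
    models = []
    for cand in chain.from_iterable(combinations(sorted(atoms), r)
                                    for r in range(len(atoms) + 1)):
        cand = set(cand)
        # Gelfond-Lifschitz reduct: drop rules whose negative body meets the
        # candidate, then delete the remaining (now vacuous) negative bodies.
        reduct = [(h, p, ()) for h, p, n in rules if not (set(n) & cand)]
        if minimal_model(reduct) == cand:
            models.append(cand)
    return models

# p :- not q.    q :- not p.    (the classic program with two stable models)
program = [("p", (), ("q",)), ("q", (), ("p",))]
print(stable_models(program))   # [{'p'}, {'q'}]
```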
Contributors: Palla, Ravi (Author) / Lee, Joohyung (Thesis advisor) / Baral, Chitta (Committee member) / Kambhampati, Subbarao (Committee member) / Lifschitz, Vladimir (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Automated planning problems classically involve finding a sequence of actions that transforms an initial state into some state satisfying a conjunctive set of goals, with no temporal constraints. But in many real-world problems, the best plan may involve satisfying only a subset of the goals or missing defined goal deadlines. For example, this may be required when goals are logically conflicting, or when there are time or cost constraints such that achieving all goals on time may be too expensive. In this case, goals and deadlines must be declared as soft. I call these partial satisfaction planning (PSP) problems. In this work, I focus on particular types of PSP problems, where goals are given a quantitative value based on whether (or when) they are achieved. The objective is to find the plan with the best quality. A first challenge is in finding adequate goal representations that capture common types of goal achievement rewards and costs. One popular representation is to give a single reward to each goal of a planning problem. I expand on this approach by allowing users to directly introduce utility dependencies, so that the reward for achieving a goal can change based on the other goals a plan achieves. Afterward, I introduce time-dependent goal costs, where a plan incurs a penalty if it achieves a goal past a specified deadline. To solve PSP problems with goal utility dependencies, I look at using state-of-the-art methodologies currently employed for classical planning problems involving heuristic search. In doing so, one faces the challenge of simultaneously determining the best set of goals and the best plan to achieve them. This is complicated by the utility dependencies defined by a user and cost dependencies within the plan. To address this, I introduce a set of heuristics based on relaxed plans and integer programming formulations. Further, I explore an approach to improve search through learning techniques, using automatically generated state features to find new states from which to search. Finally, the investigation into handling time-dependent goal costs leads to an improved search technique derived from observations on solving discretized approximations of cost functions.
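To make the goal-selection challenge concrete, the sketch below brute-forces the best goal subset under assumed rewards, utility dependencies, and additive cost estimates. The goal names, numbers, and additive cost model are assumptions for exposition; the thesis relies on relaxed-plan heuristics and integer programming rather than enumeration.

```python
# Illustrative selection of the best goal subset under utility dependencies:
# net benefit = individual rewards + dependency rewards - estimated plan cost.
# Brute force over subsets, for exposition only.
from itertools import chain, combinations

goal_utility = {"g1": 50, "g2": 30, "g3": 20}             # assumed rewards
dependency_utility = {frozenset({"g1", "g2"}): 25,        # bonus if both hold
                      frozenset({"g2", "g3"}): -40}       # conflicting pair
goal_cost = {"g1": 20, "g2": 15, "g3": 10}                # assumed cost estimates

def net_benefit(subset):
    s = frozenset(subset)
    reward = sum(goal_utility[g] for g in s)
    reward += sum(u for dep, u in dependency_utility.items() if dep <= s)
    return reward - sum(goal_cost[g] for g in s)   # cost model is additive here

subsets = chain.from_iterable(combinations(goal_utility, r)
                              for r in range(len(goal_utility) + 1))
best = max(subsets, key=net_benefit)
print(best, net_benefit(best))   # the subset a PSP planner should commit to
```

Here the dependency bonus makes {g1, g2} worth more than the sum of its parts, while the penalty on {g2, g3} keeps the all-goals subset from winning, which is exactly why goal and plan selection must be done jointly.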
Contributors: Benton, J (Author) / Kambhampati, Subbarao (Thesis advisor) / Baral, Chitta (Committee member) / Do, Minh B. (Committee member) / Smith, David E. (Committee member) / Langley, Pat (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Building computational models of human problem solving has been a longstanding goal in Artificial Intelligence research. Theories of cognitive architecture have addressed this issue by embedding models of problem solving within them. This thesis presents an extended account of human problem solving and describes its implementation within one such theory, the ICARUS cognitive architecture. The document begins by reviewing the standard theory of problem solving, along with how previous versions of ICARUS have incorporated and expanded on it. Next it discusses some limitations of the existing mechanism and proposes four extensions that eliminate these limitations, elaborate the framework along interesting dimensions, and bring it into closer alignment with human problem-solving abilities. After this, it presents evaluations on four domains that establish the benefits of these extensions. The results demonstrate the system's generality and its ability to solve problems across varied domains. In closing, the thesis outlines related work and notes promising directions for additional research.
Contributors: Trivedi, Nishant (Author) / Langley, Patrick W (Thesis advisor) / VanLehn, Kurt (Committee member) / Kambhampati, Subbarao (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This thesis describes a multi-robot architecture that allows teams of robots to work with humans to complete tasks. The multi-agent architecture was built using the Robot Operating System (ROS) and Python. The architecture was designed modularly, allowing the use of different planners and robots, and the system automatically replans when robots connect or disconnect. The system was demonstrated on two real robots, a Fetch and a PeopleBot, by conducting a surveillance task on the fifth floor of the Computer Science building at Arizona State University. The second part of the system comprises extensions for teaming with humans. An Android application was created to serve as the interface between the system and human teammates. This application provides a way for the system to communicate with humans in the loop. In addition, it sends location information about the human teammates to the system so that goal recognition can be performed. This goal recognition allows the generation of human-aware plans. The capability was demonstrated in a mock search-and-rescue scenario using the Fetch to locate a missing teammate.
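A minimal sketch of the replan-on-connect/disconnect behavior follows. The round-robin waypoint allocation, class names, and waypoint labels are stand-ins invented for exposition; the actual system delegates allocation to a pluggable planner over ROS.

```python
# Illustrative replanning trigger: whenever the set of connected robots
# changes, surveillance waypoints are redistributed among whoever remains.
# The round-robin allocation is a stand-in for the architecture's pluggable
# planner, not the actual ROS implementation.

WAYPOINTS = ["hall_A", "hall_B", "lab_1", "lab_2", "stairwell"]

class Team:
    def __init__(self):
        self.robots = {}            # robot name -> assigned waypoints

    def replan(self):
        names = sorted(self.robots)
        self.robots = {n: [] for n in names}
        for i, wp in enumerate(WAYPOINTS):        # round-robin assignment
            self.robots[names[i % len(names)]].append(wp)

    def connect(self, name):
        self.robots[name] = []
        self.replan()               # automatic replan on connect

    def disconnect(self, name):
        self.robots.pop(name, None)
        if self.robots:
            self.replan()           # automatic replan on disconnect

team = Team()
team.connect("fetch")
team.connect("peoplebot")
print(team.robots)                  # waypoints split across both robots
team.disconnect("peoplebot")
print(team.robots)                  # fetch inherits the full patrol route
```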
Contributors: Saba, Gabriel Christer (Author) / Kambhampati, Subbarao (Thesis director) / Doupé, Adam (Committee member) / Chakraborti, Tathagata (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
The future will be replete with Artificial Intelligence (AI) based agents closely collaborating with humans. Although it is challenging to construct such systems for real-world conditions, the Intelligent Tutoring System (ITS) community has proposed several techniques to work closely with students. However, there is a need to extend these systems outside the controlled environment of the classroom. More recently, the Human-Aware Planning (HAP) community has developed generalized AI techniques for collaborating with humans and providing personalized support or guidance to the collaborators. In this thesis, the lessons learned from the ITS community are extended to construct such human-aware systems for real-world domains and evaluate them with real stakeholders. First, the applicability of HAP to ITS is demonstrated by modeling the behavior in a classroom and a state-of-the-art tutoring system called Dragoon. These techniques are then extended to provide decision support to a human teammate, and the effectiveness of the framework is evaluated through ablation studies on supporting students in constructing their plan of study. The results show that these techniques are helpful and can support users in their tasks. In the third section of the thesis, an ITS scenario of asking questions (or problems) in active environments is modeled by constructing questions that elicit a human teammate's model of understanding. The framework is evaluated through a user study, whose results show that the queries can be used to elicit the human teammate's mental model.
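The sketch below illustrates one way such model-eliciting queries could be chosen: ask the question whose possible answers best discriminate among candidate mental models. The entropy criterion and the toy candidate models are assumptions made for exposition, not the thesis's actual elicitation method.

```python
# Illustrative query selection for eliciting a teammate's mental model: ask
# the question whose answers best split the candidate models. The entropy
# criterion and toy data below are assumptions for exposition only.
import math

# Each candidate model of the teammate answers each question differently.
candidate_models = {
    "novice":  {"q_precondition": "no",  "q_goal": "yes"},
    "partial": {"q_precondition": "yes", "q_goal": "yes"},
    "expert":  {"q_precondition": "yes", "q_goal": "yes"},
}

def answer_entropy(question):
    """Entropy of the answer distribution across the candidate models."""
    counts = {}
    for model in candidate_models.values():
        counts[model[question]] = counts.get(model[question], 0) + 1
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# q_goal gets the same answer from every candidate (entropy 0), so it teaches
# us nothing; q_precondition splits the candidates and should be asked first.
best = max(["q_precondition", "q_goal"], key=answer_entropy)
print(best)   # q_precondition
```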
Contributors: Grover, Sachin (Author) / Kambhampati, Subbarao (Thesis advisor) / Smith, David (Committee member) / Srivastava, Siddharth (Committee member) / VanLehn, Kurt (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Machine learning models, and neural networks in particular, are well known for being inscrutable in nature. From image classification tasks and generative techniques for data augmentation to general-purpose natural language models, neural networks are currently the algorithm of preference riding the top of the current artificial intelligence (AI) wave, having experienced the greatest boost in popularity of any machine learning solution. However, due to their inscrutable design based on the optimization of millions of parameters, it is difficult to understand how their decisions are influenced, or why (and when) they fail. While some works aim at explaining neural network decisions or making systems inherently interpretable, the great majority of state-of-the-art machine learning works prioritize performance over interpretability, effectively becoming black boxes. Hence, there is still uncertainty in the decision boundaries of these already-deployed solutions, whose predictions should still be analyzed and taken with care. This becomes even more important when these models are used in sensitive scenarios such as medicine, criminal justice, settings with inherent social biases, or settings where egregious mispredictions can negatively impact the system or human trust down the line. Thus, the aim of this work is to provide a comprehensive analysis of the failure modes of state-of-the-art neural networks in three domains: large image classifiers and their misclassifications, generative adversarial networks used for data augmentation, and transformer networks applied to structured representations and reasoning about actions and change.
Contributors: Olmo Hernandez, Alberto (Author) / Kambhampati, Subbarao (Thesis advisor) / Liu, Huan (Committee member) / Li, Baoxin (Committee member) / Sengupta, Sailik (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Recent breakthroughs in Artificial Intelligence (AI) have brought the dream of developing and deploying complex AI systems that can potentially transform everyday life closer to reality than ever before. However, the growing realization that there might soon be people from all walks of life using and working with these systems has also spurred a lot of interest in ensuring that AI systems can efficiently and effectively work and collaborate with their intended users. Chief among the efforts in this direction has been the pursuit of imbuing these agents with the ability to provide intuitive and useful explanations regarding their decisions and actions to end-users. In this dissertation, I will describe various works that I have done in the area of explaining sequential decision-making problems, and I will frame the discussion of my work within a broader framework for understanding and analyzing explainable AI (XAI). My works herein tackle many of the core challenges related to explaining automated decisions to users, including (1) techniques to address asymmetry in knowledge between the user and the system, (2) techniques to address asymmetry in inferential capabilities, and (3) techniques to address vocabulary mismatch. The dissertation will also describe the works I have done in generating interpretable behavior and policy summarization. I conclude this dissertation by using the framework of human-aware explanation as a lens to analyze and understand the current landscape of explainable planning.
Contributors: Sreedharan, Sarath (Author) / Kambhampati, Subbarao (Thesis advisor) / Kim, Been (Committee member) / Smith, David E (Committee member) / Srivastava, Siddharth (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
As intelligent agents become pervasive in our lives, they are expected not only to achieve tasks alone but also to engage in tasks with humans in the loop. In such cases, the human naturally forms an understanding of the agent, which affects their perception of the agent's behavior. However, such an understanding inevitably deviates from the ground truth, due to reasons such as the human's lack of understanding of the domain or misunderstanding of the agent's capabilities. Such differences result in a mismatch between the human's expectation of the agent's behavior and the agent's optimal behavior, thereby biasing the human's assessment of the agent's performance. In this dissertation, I focus on the case where these differences are due to a biased belief about domain dynamics. I investigate the impact of such a biased belief on the agent's decision-making process in two different problem settings, from a learning perspective. In the first setting, the agent is tasked to accomplish a task alone but must infer the human's objectives from the human's feedback on the agent's behavior in the environment. In such a case, the human's biased feedback could mislead the agent into learning a reward function that results in a sub-optimal and, potentially, undesired policy. In the second setting, the agent must accomplish a task with a human observer. Given that the agent's optimal behavior may not match the human's expectation due to the biased belief, it may be viewed as inexplicable, leading to degraded performance and loss of trust. Consequently, this dissertation proposes approaches that (1) endow the agent with the ability to be aware of the human's biased belief while inferring the human's objectives, thereby (2) neutralizing the impact of the model differences in a reinforcement learning framework, and (3) make the agent behave explicably by reconciling the human's expectation and optimality during decision-making.
Contributors: Gong, Ze (Author) / Zhang, Yu (Thesis advisor) / Amor, Hani Ben (Committee member) / Kambhampati, Subbarao (Committee member) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2022