Matching Items (5)

Description
Knowledge Representation (KR) is one of the prominent approaches to Artificial Intelligence (AI) that is concerned with representing knowledge in a form that computer systems can utilize to solve complex problems. Answer Set Programming (ASP), based on the stable model semantics, is a widely-used KR framework that facilitates elegant and efficient representations for many problem domains that require complex reasoning.

However, while ASP is effective on deterministic problem domains, it is not suitable for applications involving quantitative uncertainty, for example, those that require probabilistic reasoning. Furthermore, it is hard to incorporate information that can be statistically induced from data into ASP problem modeling.

This dissertation presents the language LP^MLN, a probabilistic extension of the stable model semantics with the concept of weighted rules, inspired by Markov Logic. An LP^MLN program defines a probability distribution over "soft" stable models, which need not satisfy all rules: the more rules a stable model satisfies, and the larger their weights, the higher its probability. LP^MLN combines the advantages of ASP and Markov Logic in a single framework, allowing problems that require both logical and probabilistic reasoning to be represented in an intuitive and elaboration-tolerant way.
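
As a rough sketch of how such a distribution can be defined, in the style of Markov Logic (the notation below is an illustrative assumption, not quoted from the dissertation): writing Π_I for the set of rules of a weighted program Π that an interpretation I satisfies, and calling I a soft stable model when it is a stable model of Π_I,

```latex
% Sketch of a Markov Logic-style distribution over soft stable models;
% the notation is illustrative, not quoted from the dissertation.
\[
  W_\Pi(I) = \exp\Bigl(\sum_{w:R \,\in\, \Pi_I} w\Bigr),
  \qquad
  P_\Pi(I) = \frac{W_\Pi(I)}{\sum_{J \in \mathrm{SM}[\Pi]} W_\Pi(J)},
\]
% where $\mathrm{SM}[\Pi]$ denotes the set of soft stable models of $\Pi$.
```

so unsatisfied rules lower a model's weight rather than ruling it out entirely.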

This dissertation establishes formal relations between LP^MLN and several other formalisms, discusses inference and weight learning algorithms under LP^MLN, and presents systems implementing the algorithms. LP^MLN systems can be used to compute other languages translatable into LP^MLN.

The advantage of LP^MLN for probabilistic reasoning is illustrated by a probabilistic extension of the action language BC+, called pBC+, defined as a high-level notation of LP^MLN for describing transition systems. Various probabilistic reasoning tasks about transition systems, especially probabilistic diagnosis, can be modeled in pBC+ and computed using LP^MLN systems. pBC+ is further extended with the notion of utility, through a decision-theoretic extension of LP^MLN, and related to Markov Decision Processes (MDPs) in terms of policy optimization problems. pBC+ can be used to represent (PO)MDPs in a succinct and elaboration-tolerant way, enabling planning with (PO)MDP algorithms in action domains whose description requires rich KR constructs, such as recursive definitions and indirect effects of actions.
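
To make the MDP side of this correspondence concrete, below is a minimal value-iteration sketch for the kind of policy optimization problem mentioned above; the two-state repair domain and all numbers are invented for illustration and are not from the dissertation.

```python
# Minimal MDP value iteration, illustrating the policy-optimization
# problem that pBC+ is related to. The toy domain is invented here.

def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """P[s][a] -> list of (next_state, prob); R[s][a] -> reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            # Extract a greedy policy from the converged values.
            return V, {
                s: max(actions, key=lambda a: R[s][a]
                       + gamma * sum(p * V[t] for t, p in P[s][a]))
                for s in states
            }

# Toy domain: a machine that is "ok" or "broken".
states, actions = ["ok", "broken"], ["run", "repair"]
P = {"ok":     {"run": [("ok", 0.8), ("broken", 0.2)], "repair": [("ok", 1.0)]},
     "broken": {"run": [("broken", 1.0)],              "repair": [("ok", 1.0)]}}
R = {"ok": {"run": 10.0, "repair": 0.0}, "broken": {"run": 0.0, "repair": -5.0}}
V, policy = value_iteration(states, actions, P, R)
print(policy)  # e.g. {'ok': 'run', 'broken': 'repair'}
```
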
Contributors: Wang, Yi (Author) / Lee, Joohyung (Thesis advisor) / Baral, Chitta (Committee member) / Kambhampati, Subbarao (Committee member) / Natarajan, Sriraam (Committee member) / Srivastava, Siddharth (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
This dissertation introduces and examines Soft Curved Reconfigurable Anisotropic Mechanisms (SCRAMs) as a solution to actuation, manufacturing, and modeling challenges in the field of soft robotics, with the aim of facilitating the broader implementation of soft robots in various industries. SCRAM systems utilize the curved geometry of thin elastic structures to tackle these challenges in soft robots. SCRAM devices can modify their dynamic behavior by incorporating reconfigurable anisotropic stiffness, thereby enabling locomotion patterns tailored to specific tasks. This approach simplifies the actuation of robots, resulting in lighter, more flexible, cost-effective, and safer soft robotic systems.

This dissertation demonstrates the potential of SCRAM devices through several case studies. These studies investigate virtual joints and shape change propagation in tubes, as well as anisotropic dynamic behavior in vibrational soft twisted beams, demonstrating interesting locomotion patterns achievable with simple actuation mechanisms. The dissertation also addresses modeling and simulation challenges by introducing a reduced-order modeling approach that enables fast and accurate simulations of soft robots and is compatible with existing rigid-body simulators. Additionally, it investigates the prototyping processes of SCRAM devices and offers a comprehensive framework for their development.

Overall, this dissertation demonstrates the potential of SCRAM devices to overcome actuation, modeling, and manufacturing challenges in soft robotics. The concepts and approaches presented have implications for industries that require cost-effective, adaptable, and safe robotic systems; SCRAM devices pave the way for the widespread application of soft robots in diverse domains.
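
As a rough illustration of the reduced-order idea, the sketch below approximates a thin elastic beam as a chain of rigid links coupled by torsional springs, one common way such models can plug into rigid-body simulators; the discretization and all parameters are illustrative assumptions, not the dissertation's model.

```python
# Reduced-order sketch: a thin elastic beam as N rigid links coupled by
# torsional springs (pseudo-rigid-body style). Illustrative only.
import math

def tip_deflection(E, I, L, F, n_links=20, iters=200):
    k = n_links * E * I / L          # per-joint torsional stiffness ~ EI/dl
    dl = L / n_links
    theta = [0.0] * n_links          # absolute link angles
    for _ in range(iters):           # fixed point: spring torque = load torque
        for i in range(n_links):
            # horizontal lever arm from joint i to the tip, for a vertical load F
            arm = sum(dl * math.cos(theta[j]) for j in range(i, n_links))
            prev = theta[i - 1] if i > 0 else 0.0
            theta[i] = prev + F * arm / k
    # vertical tip deflection
    return sum(dl * math.sin(t) for t in theta)

# Small-deflection check against Euler-Bernoulli: delta = F L^3 / (3 E I)
E, I, L, F = 2.0e9, 1.0e-10, 0.3, 0.05   # illustrative values
print(tip_deflection(E, I, L, F), F * L**3 / (3 * E * I))
```

In the small-deflection limit this chain converges to the Euler-Bernoulli answer, which the final line checks against.
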
Contributors: Jiang, Yuhao (Author) / Aukes, Daniel (Thesis advisor) / Berman, Spring (Committee member) / Lee, Hyunglae (Committee member) / Marvi, Hamidreza (Committee member) / Srivastava, Siddharth (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Recent breakthroughs in Artificial Intelligence (AI) have brought the dream of developing and deploying complex AI systems that can potentially transform everyday life closer to reality than ever before. However, the growing realization that there might soon be people from all walks of life using and working with these systems has also spurred a lot of interest in ensuring that AI systems can efficiently and effectively work and collaborate with their intended users. Chief among the efforts in this direction has been the pursuit of imbuing these agents with the ability to provide intuitive and useful explanations regarding their decisions and actions to end-users. In this dissertation, I will describe various works that I have done in the area of explaining sequential decision-making problems. Furthermore, I will frame the discussion of my work within a broader framework for understanding and analyzing explainable AI (XAI). My works herein tackle many of the core challenges related to explaining automated decisions to users, including (1) techniques to address asymmetry in knowledge between the user and the system, (2) techniques to address asymmetry in inferential capabilities, and (3) techniques to address vocabulary mismatch. The dissertation will also describe the works I have done on generating interpretable behavior and policy summarization. I will conclude this dissertation by using the framework of human-aware explanation as a lens to analyze and understand the current landscape of explainable planning.
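
One way to picture the knowledge-asymmetry techniques in (1) is model reconciliation: search for a smallest set of updates to the human's model that makes the agent's decision make sense. The model encoding and the toy validity check below are illustrative assumptions, not the dissertation's formulation.

```python
# Sketch of a model-reconciliation style explanation: find a smallest
# set of updates to the human's model under which the agent's plan is
# valid. Encoding and validity check are illustrative assumptions.
from itertools import combinations

def explain(agent_model, human_model, plan_is_valid):
    """Return a minimal set of (add/remove) updates to the human model."""
    adds = agent_model - human_model      # facts the human is missing
    dels = human_model - agent_model      # facts the human wrongly believes
    candidates = [("add", f) for f in adds] + [("remove", f) for f in dels]
    for size in range(len(candidates) + 1):   # smallest explanations first
        for updates in combinations(candidates, size):
            updated = set(human_model)
            for op, fact in updates:
                (updated.add if op == "add" else updated.discard)(fact)
            if plan_is_valid(updated):
                return updates
    return None

# Toy example: the plan uses a door the human believes is locked.
agent = {"door_unlocked", "has_key"}
human = {"door_locked", "has_key"}
valid = lambda m: "door_unlocked" in m and "door_locked" not in m
print(explain(agent, human, valid))
# -> (('add', 'door_unlocked'), ('remove', 'door_locked'))
```
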
Contributors: Sreedharan, Sarath (Author) / Kambhampati, Subbarao (Thesis advisor) / Kim, Been (Committee member) / Smith, David E (Committee member) / Srivastava, Siddharth (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
With improvements in automation and system capabilities, human responsibilities in those advanced systems can become more complicated; greater situational awareness and performance may be asked of human agents in roles such as fail-safe operators. This phenomenon of automation improvements requiring more from humans in the loop is connected to the well-known "paradox of automation". Unfortunately, humans have cognitive limitations that can constrain a person's performance on a task. If one considers human cognitive limitations when designing solutions or policies for human agents, then better results are possible. The focus of this dissertation is on improving human involvement in planning and execution for Sequential Decision Making (SDM) problems. Existing work already considers incorporating humans into planning and execution in SDM, but with limited consideration for cognitive limitations. The work herein focuses on how to improve human involvement through problems in motion planning, planning interfaces, Markov Decision Processes (MDPs), and human-team scheduling. This is done by first discussing the human modeling assumptions currently used in the literature and their shortcomings, and then tackling a set of problems by considering problem-specific human cognitive limitations, such as those associated with memory and inference, as well as by using lessons from fields such as cognitive ergonomics.
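
As one illustrative example of folding a cognitive limitation into such a problem, the sketch below assigns tasks to team members while capping each person's task-type switches, a stand-in for a memory- or attention-related switch cost; the greedy rule and encoding are assumptions for illustration, not the dissertation's algorithm.

```python
# Illustrative human-team scheduling sketch: assign tasks greedily while
# limiting task-type switches per person (a proxy for cognitive switch cost).
def schedule(tasks, people, max_switches=1):
    """tasks: list of (name, type); returns {person: [task names]}."""
    plan = {p: [] for p in people}
    last_type = {p: None for p in people}
    switches = {p: 0 for p in people}
    for name, ttype in tasks:
        # Prefer someone already on this task type, then the least-loaded
        # person who can still afford a switch.
        def cost(p):
            switch = last_type[p] is not None and last_type[p] != ttype
            return (switch and switches[p] >= max_switches, switch, len(plan[p]))
        p = min(people, key=cost)
        if last_type[p] is not None and last_type[p] != ttype:
            switches[p] += 1
        plan[p].append(name)
        last_type[p] = ttype
    return plan

tasks = [("t1", "inspect"), ("t2", "inspect"), ("t3", "repair"),
         ("t4", "repair"), ("t5", "inspect")]
print(schedule(tasks, ["alice", "bob"]))
# -> {'alice': ['t1', 't3', 't4'], 'bob': ['t2', 't5']}
```
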
Contributors: Gopalakrishnan, Sriram (Author) / Kambhampati, Subbarao (Thesis advisor) / Srivastava, Siddharth (Committee member) / Scheutz, Matthias (Committee member) / Zhang, Yu (Tony) (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
In settings where a human and an embodied AI (artificially intelligent) agent coexist, the AI agent has to be capable of reasoning with the human's preconceived notions about the environment as well as with the human's perception limitations. In addition, it should be capable of communicating intentions and objectives effectively to the human-in-the-loop. While acting in the presence of human observers, the AI agent can synthesize interpretable behaviors such as explicable, legible, and assistive behaviors by accounting for the human's mental model (inclusive of her sensor model) in its reasoning process. This thesis will study different behavior synthesis algorithms that focus on improving the interpretability of the agent's behavior in the presence of a human observer. Further, this thesis will study how environment redesign strategies can be leveraged to improve the overall interpretability of the agent's behavior. At times, the agent's environment may also consist of purely adversarial entities or mixed entities (i.e., adversarial as well as cooperative entities) that are trying to infer information from the AI agent's behavior. In such settings, it is crucial for the agent to exhibit obfuscatory behavior that prevents sensitive information from falling into the hands of the adversarial entities. This thesis will show that it is possible to synthesize interpretable as well as obfuscatory behaviors using a single underlying algorithmic framework.
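
A miniature sketch of the obfuscatory idea (the grid setting and scoring rule are illustrative assumptions, not the thesis's formulation): prefer actions that keep making progress toward as many candidate goals as possible, so an observer cannot single out the true goal until late.

```python
# Goal obfuscation in miniature: choose moves consistent with many
# candidate goals, breaking ties toward the true goal. Illustrative only.

def dist(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])   # Manhattan distance

def obfuscating_step(pos, true_goal, candidate_goals):
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    def score(m):
        nxt = (pos[0] + m[0], pos[1] + m[1])
        # how many candidate goals this move makes progress toward
        plausible = sum(dist(nxt, g) < dist(pos, g) for g in candidate_goals)
        # primary: stay plausible for many goals; secondary: approach the true one
        return (plausible, -dist(nxt, true_goal))
    best = max(moves, key=score)
    return (pos[0] + best[0], pos[1] + best[1])

pos, true_goal = (0, 0), (4, 0)
candidates = [(4, 0), (4, 4)]   # the observer cannot tell these apart early on
while pos != true_goal:
    pos = obfuscating_step(pos, true_goal, candidates)
    print(pos)  # each step makes progress toward both goals, so the
                # path stays ambiguous until the agent actually arrives
```
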
Contributors: Kulkarni, Anagha (Author) / Kambhampati, Subbarao (Thesis advisor) / Kamar, Ece (Committee member) / Smith, David E. (Committee member) / Srivastava, Siddharth (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2021