Autonomously Learning World-Model Representations For Efficient Robot Planning

Description

In today's world, robotic technology has become increasingly prevalent across various fields such as manufacturing, warehouses, delivery, and household applications. Planning is crucial for robots to solve various tasks in such difficult domains. However, most robots rely heavily on humans for the world models that enable planning. Consequently, not only is it expensive to create such world models, since it requires human experts who understand both the domain and the robot's limitations, but these models may also be biased by human embodiment, which can be limiting for robots whose kinematics are not human-like. This thesis answers the fundamental question: Can we learn such world models automatically? This research shows that we can learn complex world models directly from unannotated and unlabeled demonstrations containing only the configurations of the robot and the objects in the environment. The core contributions of this thesis are the first known approaches for i) task and motion planning that explicitly handles stochasticity, ii) automatically inventing neuro-symbolic state and action abstractions for deterministic and stochastic motion planning, and iii) automatically inventing relational and interpretable world models in the form of symbolic predicates and actions. This thesis also presents thorough and rigorous empirical experimentation. With experiments in both simulated and real-world settings, this thesis demonstrates the efficacy and robustness of automatically learned world models and their ability to generalize beyond situations encountered during training.
Date Created
2024
Agent

QPMeL: Quantum Polar Metric Learning

Description

Deep metric learning has recently shown extremely promising results in the classical data domain, creating well-separated feature spaces. This idea was also adapted to quantum computers via Quantum Metric Learning (QMeL). QMeL consists of a two-step process: a classical model first compresses the data to fit into the limited number of qubits, then a Parameterized Quantum Circuit (PQC) is trained to create better separation in Hilbert space. However, on Noisy Intermediate Scale Quantum (NISQ) devices, QMeL solutions result in high circuit width and depth, both of which limit scalability. The proposed Quantum Polar Metric Learning (QPMeL) uses a classical model to learn the parameters of the polar form of a qubit. A shallow PQC with Ry and Rz gates is then utilized to create the state, along with a trainable layer of ZZ(θ)-gates to learn entanglement. The circuit also computes fidelity via a SWAP test for the proposed Fidelity Triplet Loss function, which is used to train both classical and quantum components. When compared to QMeL approaches, QPMeL achieves 3X better multi-class separation while using only half the number of gates and depth. QPMeL is also shown to outperform classical networks with similar configurations, presenting a promising avenue for future research on fully classical models with quantum loss functions.
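The polar-form encoding and SWAP-test fidelity described above can be illustrated with a small state-vector simulation. This is a minimal NumPy sketch, not the thesis's actual circuit code; the function names and the exact triplet-loss formulation are illustrative assumptions.

```python
import numpy as np

def polar_qubit(theta, phi):
    """Qubit prepared by Ry(theta) then Rz(phi) on |0>:
    cos(theta/2)|0> + e^{i phi} sin(theta/2)|1> (up to a global phase)."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

def fidelity(a, b):
    """Squared overlap |<a|b>|^2 -- the quantity a SWAP test estimates."""
    return float(abs(np.vdot(a, b)) ** 2)

def fidelity_triplet_loss(anchor, positive, negative, margin=0.5):
    """Triplet-style loss on fidelities: push the anchor-positive fidelity
    above the anchor-negative fidelity by at least the margin."""
    return max(0.0, margin + fidelity(anchor, negative) - fidelity(anchor, positive))
```

Because fidelity is a simple function of two angle pairs, gradients can flow back through the classical model that produces (theta, phi), which is what lets one training loop update both components.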
Date Created
2024
Agent

Adapting Robotic Systems to User Control

Description

In this work, I propose to bridge the gap between human users and adaptive control of robotic systems. The goal is to enable robots to consider user feedback and adjust their behaviors. A critical challenge with designing such systems is that users are often non-experts, with limited knowledge about the robot's hardware and dynamics. In the domain of human-robot interaction, there exist different modalities for conveying information regarding the desired behavior of the robot, the most commonly used being demonstrations and preferences. While it is challenging for non-experts to provide demonstrations of robot behavior, works that consider preferences expressed as trajectory rankings lead to users providing noisy and possibly conflicting information, resulting in slow adaptation or system failures. The end user can be expected to become familiar with the dynamics and how they relate to their desired objectives through repeated interactions with the system. However, due to inadequate knowledge about the system dynamics, the user would likely find it challenging to provide feedback on all dimensions of the system's behavior at all times. Thus, the key innovation of this work is to enable users to provide partial instead of completely specified preferences, in contrast to traditional methods that learn from fully specified user preferences. In particular, I consider partial preferences in the form of preferences over plant dynamic parameters, for which I propose Adaptive User Control (AUC) of robotic systems. I leverage the correlations between the observed and hidden parameter preferences to deal with incompleteness. I use a sparse Gaussian Process Latent Variable Model formulation to learn hidden variables that represent the relationships between the observed and hidden preferences over the system parameters. This model is trained using Stochastic Variational Inference with a distributed loss formulation.
I evaluate AUC in a custom drone-swarm environment and several domains from DeepMind control suite. I compare AUC with the state-of-the-art preference-based reinforcement learning methods that are utilized with user preferences. Results show that AUC outperforms the baselines substantially in terms of sample and feedback complexity.
Date Created
2023
Agent

Todas! An Introduction to Mahjong through the Filipino Style of Play

Description


This project is a video game implementation of the Filipino ruleset of Mahjong for the purpose of increasing awareness of the Mahjong game and Filipino culture. The game, titled Todas!, is built from scratch using various free resources and contains a Tutorial for teaching players the basics of the game and a Multiplayer mode that enables remote gameplay for up to four people.

Date Created
2023-05
Agent

The Application of Rollout-Style Search to Decision-Making in the Game of Chess

Description


This paper explores the inner workings of algorithms that computers may use to play Chess. First, we discuss the classical Alpha-Beta algorithm and several improvements, including Quiescence Search, Transposition Tables, and more. Next, we examine the state-of-the-art Monte Carlo Tree Search algorithm and relevant optimizations. After that, we consider a recent algorithm that transforms Alpha-Beta into a “Rollout” search, blending it with Monte Carlo Tree Search under the rollout paradigm. We then discuss our C++ Chess Engine, Homura, and explain its implementation of a hybrid algorithm combining Alpha-Beta with MCTS. Finally, we show that Homura can play master-level Chess at a strength currently exceeding that of our backtracking Alpha-Beta.
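The classical Alpha-Beta algorithm that the paper starts from can be sketched as a generic fail-hard negamax search. This is a minimal illustration against a hypothetical game interface (`evaluate`, `moves`, `play` are assumed callbacks), not Homura's C++ implementation.

```python
import math

def alpha_beta(state, depth, alpha, beta, evaluate, moves, play):
    """Fail-hard negamax Alpha-Beta: returns a score from the
    perspective of the side to move at `state`."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    for move in legal:
        # Negate and swap the window: the child's score is from the
        # opponent's perspective.
        score = -alpha_beta(play(state, move), depth - 1,
                            -beta, -alpha, evaluate, moves, play)
        if score >= beta:
            return beta          # cutoff: the opponent will avoid this line
        alpha = max(alpha, score)
    return alpha
```

Quiescence search and transposition tables are refinements layered on top of this core loop: the former replaces the `depth == 0` evaluation with a capture-only search, and the latter caches `(state, depth)` results to prune repeated work.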

Date Created
2023-05
Agent

Foundations of Human-Aware Explanations for Sequential Decision-Making Problems

Description

Recent breakthroughs in Artificial Intelligence (AI) have brought the dream of developing and deploying complex AI systems that can potentially transform everyday life closer to reality than ever before. However, the growing realization that there might soon be people from all walks of life using and working with these systems has also spurred a lot of interest in ensuring that AI systems can efficiently and effectively work and collaborate with their intended users. Chief among the efforts in this direction has been the pursuit of imbuing these agents with the ability to provide intuitive and useful explanations regarding their decisions and actions to end-users. In this dissertation, I will describe various works that I have done in the area of explaining sequential decision-making problems. Furthermore, I will frame the discussions of my work within a broader framework for understanding and analyzing explainable AI (XAI). My works herein tackle many of the core challenges related to explaining automated decisions to users, including (1) techniques to address asymmetry in knowledge between the user and the system, (2) techniques to address asymmetry in inferential capabilities, and (3) techniques to address vocabulary mismatch. The dissertation will also describe the works I have done in generating interpretable behavior and policy summarization. I will conclude this dissertation by using the framework of human-aware explanation as a lens to analyze and understand the current landscape of explainable planning.
Date Created
2022
Agent

What Do You Want Me To Do? Addressing Model Differences for Human-Aware Decision-Making from A Learning Perspective

Description

As intelligent agents become pervasive in our lives, they are expected to not only achieve tasks alone but also engage in tasks with humans in the loop. In such cases, the human naturally forms an understanding of the agent, which affects their perception of the agent's behavior. However, such an understanding inevitably deviates from the ground truth due to reasons such as the human's lack of understanding of the domain or misunderstanding of the agent's capabilities. Such differences result in a mismatch between the human's expectation of the agent's behavior and the agent's optimal behavior, thereby biasing the human's assessment of the agent's performance. In this dissertation, I focus on cases where these differences are due to a biased belief about domain dynamics. I especially investigate the impact of such a biased belief on the agent's decision-making process in two different problem settings from a learning perspective. In the first setting, the agent is tasked to accomplish a task alone but must infer the human's objectives from the human's feedback on the agent's behavior in the environment. In such a case, the human's biased feedback could mislead the agent into learning a reward function that results in a sub-optimal and, potentially, undesired policy. In the second setting, the agent must accomplish a task with a human observer. Given that the agent's optimal behavior may not match the human's expectation due to the biased belief, the agent's optimal behavior may be viewed as inexplicable, leading to degraded performance and loss of trust. Consequently, this dissertation proposes approaches that (1) endow the agent with the ability to be aware of the human's biased belief while inferring the human's objectives, (2) neutralize the impact of the model differences in a reinforcement learning framework, and (3) behave explicably by reconciling the human's expectation and optimality during decision-making.
Date Created
2022
Agent

Max Markov Chain

Description

High-order Markov chains are useful in a variety of situations. However, these processes are limited in the complexity of the domains they can model. In complex domains, Markov models can require hundreds of gigabytes of RAM, leading to the need for a parsimonious model. In this work, I present the Max Markov Chain (MMC), a robust model for estimating high-order datasets using only first-order parameters. High-order Markov chains (HMC) and Markov approximations (MTDg) struggle to scale to large state spaces due to the exponentially growing number of parameters required to model these domains. MMC can accurately approximate these models using only first-order parameters, provided the domain fulfills the MMC assumption. MMC naturally has better sample efficiency and the desired spatial and computational advantages over HMCs and approximate HMCs. I present evidence demonstrating the effectiveness of MMC in a variety of domains and compare its performance with HMCs and Markov approximations. Human behavior is inherently complex and challenging to model. Due to the high number of parameters required for traditional Markov models, the excessive computing requirements make real-time human simulation computationally expensive and impractical. I argue that in certain situations, the behavior of humans follows that of a sparsely connected Markov model. In this work, I focus on the subset of Markov models that are just that: sparsely connected.
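The parameter savings can be pictured with a small sketch of the first-order machinery involved. This is a hypothetical illustration, not the thesis's exact MMC formulation: it estimates only a first-order transition table, then predicts by following the single most confident transition available from the recent history, so storage stays quadratic in the number of states rather than exponential in the order.

```python
from collections import Counter, defaultdict

def fit_first_order(sequence):
    """Estimate first-order transition probabilities P(next | current)
    from a single state sequence by counting adjacent pairs."""
    counts = defaultdict(Counter)
    for prev, curr in zip(sequence, sequence[1:]):
        counts[prev][curr] += 1
    return {s: {t: c / sum(cnt.values()) for t, c in cnt.items()}
            for s, cnt in counts.items()}

def max_predict(history, trans):
    """Illustrative 'max' rule: among the states in the recent history,
    follow the single most confident first-order transition."""
    candidates = [(p, t) for s in history for t, p in trans.get(s, {}).items()]
    return max(candidates)[1] if candidates else None
```

A genuine order-n chain over k states needs on the order of k^(n+1) parameters; the table above needs at most k^2, which is where the claimed spatial advantage comes from.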
Date Created
2022
Agent

Human-Aware AI Methods for Active Teaming

Description

The future will be replete with Artificial Intelligence (AI) based agents closely collaborating with humans. Although it is challenging to construct such systems for real-world conditions, the Intelligent Tutoring System (ITS) community has proposed several techniques to work closely with students. However, there is a need to extend these systems outside the controlled environment of the classroom. More recently, the Human-Aware Planning (HAP) community has developed generalized AI techniques for collaborating with humans and providing personalized support or guidance to the collaborators. In this thesis, the learnings from the ITS community are extended to construct such human-aware systems for real-world domains and evaluate them with real stakeholders. First, the applicability of HAP to ITS is demonstrated by modeling the behavior in a classroom and a state-of-the-art tutoring system called Dragoon. Then these techniques are extended to provide decision support to a human teammate, and the effectiveness of the framework is evaluated through ablation studies on supporting students in constructing their plan of study. The results show that these techniques are helpful and can support users in their tasks. In the third section of the thesis, an ITS scenario of asking questions (or problems) in active environments is modeled by constructing questions to elicit a human teammate's model of understanding. The framework is evaluated through a user study, where the results show that the queries can be used for eliciting the human teammate's mental model.
Date Created
2022
Agent

Analyzing Failure Modes of Inscrutable Machine Learning Models

Description

Machine learning models, and neural networks in particular, are well known for being inscrutable in nature. From image classification tasks and generative techniques for data augmentation, to general-purpose natural language models, neural networks are currently the algorithm of preference riding the top of the current artificial intelligence (AI) wave, having experienced the greatest boost in popularity over any other machine learning solution. However, due to their inscrutable design based on the optimization of millions of parameters, it is extremely difficult to understand how their decisions are influenced, or why (and when) they fail. While some works aim at explaining neural network decisions or making systems inherently interpretable, the great majority of state-of-the-art machine learning works prioritize performance over interpretability, effectively becoming black boxes. Hence, there is still uncertainty in the decision boundaries of these already deployed solutions, whose predictions should still be analyzed and taken with care. This becomes even more important when these models are used in sensitive scenarios such as medicine, criminal justice, settings with inherent social biases, or settings where egregious mispredictions can negatively impact the system or human trust down the line. Thus, the aim of this work is to provide a comprehensive analysis of the failure modes of state-of-the-art neural networks from three domains: large image classifiers and their misclassifications, generative adversarial networks when used for data augmentation, and transformer networks applied to structured representations and reasoning about actions and change.
Date Created
2022
Agent