Matching Items (7)
Description
Reinforcement learning (RL) is a powerful methodology for teaching autonomous agents complex behaviors and skills. A critical component in most RL algorithms is the reward function -- a mathematical function that provides numerical estimates for desirable and undesirable states. Typically, the reward function must be hand-designed by a human expert and, as a result, the scope of a robot's autonomy and its ability to safely explore and learn in new and unforeseen environments is constrained by the specifics of the designed reward function. In this thesis, I design and implement a stateful collision anticipation model with powerful predictive capability, building on research in sequential data modeling and modern recurrent neural networks. I also develop deep reinforcement learning methods whose rewards are generated by self-supervised training and intrinsic signals. The main objective is to work towards the development of resilient robots that can learn to anticipate and avoid damaging interactions by combining visual and proprioceptive cues from internal sensors. The introduced solutions are inspired by pain pathways in humans and animals, because such pathways are known to guide decision-making processes and promote self-preservation. A new "robot dodge ball" benchmark is introduced in order to test the validity of the developed algorithms in dynamic environments.
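As a rough illustration of the self-supervised reward idea described above (a sketch, not the thesis's actual implementation), the snippet below derives an intrinsic reward from a recurrent collision-anticipation model; the `CollisionAnticipator` class, its interface, and the reward shaping are all illustrative assumptions.

```python
# Hypothetical sketch: deriving an intrinsic, pain-like reward from a
# stateful recurrent model that anticipates collisions. The class name,
# architecture, and reward shaping are assumptions for illustration.
import torch
import torch.nn as nn

class CollisionAnticipator(nn.Module):
    """Predicts collision probability from a sequence of fused visual
    and proprioceptive observations."""
    def __init__(self, obs_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.rnn = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, time, obs_dim) -> per-sequence collision probability
        out, _ = self.rnn(obs_seq)
        return torch.sigmoid(self.head(out[:, -1]))

def intrinsic_reward(model: CollisionAnticipator,
                     history: torch.Tensor) -> float:
    """Reward the agent for keeping anticipated damage low: the reward
    grows more negative as a collision becomes more likely."""
    with torch.no_grad():
        p_collision = model(history.unsqueeze(0)).item()
    return -p_collision
```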
Contributors: Richardson, Trevor W (Author) / Ben Amor, Heni (Thesis advisor) / Yang, Yezhou (Committee member) / Srivastava, Siddharth (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
In this thesis, a new approach to learning-based planning is presented where critical regions of an environment with low probability measure are learned from a given set of motion plans. Critical regions are learned using convolutional neural networks (CNN) to improve sampling processes for motion planning (MP).

In addition to an identification network, a new sampling-based motion planner, Learn and Link, is introduced. This planner leverages critical regions to overcome the limitations of uniform sampling while still maintaining guarantees of correctness inherent to sampling-based algorithms. Learn and Link is evaluated against planners from the Open Motion Planning Library (OMPL) on an extensive suite of challenging navigation planning problems. This work shows that critical areas of an environment are learnable, and can be used by Learn and Link to solve MP problems with far less planning time than existing sampling-based planners.
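The core idea of biasing a sampler toward learned critical regions, while keeping a uniform fallback for completeness, can be sketched as follows; the 2D disc representation of regions and the 0.5 mixing weight are illustrative assumptions, not Learn and Link's actual parameters.

```python
# Illustrative sketch of critical-region-biased sampling. The region
# representation (disc centers and radii) and the mixing weight are
# assumptions, not the planner's actual design.
import random

def sample_configuration(critical_regions, bounds, bias=0.5):
    """With probability `bias`, sample near a learned critical region;
    otherwise sample uniformly, preserving the probabilistic completeness
    that uniform sampling provides in the limit."""
    if critical_regions and random.random() < bias:
        cx, cy, radius = random.choice(critical_regions)
        return (cx + random.uniform(-radius, radius),
                cy + random.uniform(-radius, radius))
    (xmin, xmax), (ymin, ymax) = bounds
    return (random.uniform(xmin, xmax), random.uniform(ymin, ymax))
```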
Contributors: Molina, Daniel, M.S (Author) / Srivastava, Siddharth (Thesis advisor) / Li, Baoxin (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
This research introduces Roblocks, a user-friendly system for learning Artificial Intelligence (AI) planning concepts using mobile manipulator robots. It uses a visual programming interface based on block-structured programming to make AI planning concepts easier to grasp for those who are new to robotics and AI planning. Users accomplish desired tasks by dynamically assembling puzzle-shaped blocks that encode the robot's possible actions, allowing them to carry out tasks like navigation, planning, and manipulation by connecting blocks instead of writing code. Roblocks has two levels: in the first, users rearrange a jumbled set of a plan's actions into the correct order so that a given goal is achieved; in the second, users select actions of their choice, but at each step only the actions applicable in the current state are offered, pruning the vast space of possible actions down to those that are truly feasible and relevant. Both levels include a simulation in which the user's plan is executed. Moreover, if the user's plan is invalid or fails to achieve the given goal condition, an explanation for the failure is provided in plain English, making it easier for everyone (especially non-roboticists) to understand the cause of the failure.
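The second level's "offer only applicable actions" behavior corresponds to a standard precondition check; a minimal STRIPS-style sketch is shown below, where the action format and example actions are illustrative assumptions rather than Roblocks' actual encoding.

```python
# Minimal STRIPS-style sketch of offering only the actions whose
# preconditions hold in the current state. The action format and the
# example actions are assumptions, not Roblocks' actual encoding.
def applicable_actions(state: set, actions: list) -> list:
    """Return actions whose preconditions are satisfied by `state`.
    Each action has 'pre' (required facts), 'add', and 'del' (effects)."""
    return [a for a in actions if a["pre"] <= state]

def apply_action(state: set, action: dict) -> set:
    """Advance the state by one chosen action block."""
    return (state - action["del"]) | action["add"]

# Example: 'place' only becomes available after a successful 'pick'.
pick = {"name": "pick", "pre": {"hand-empty"}, "add": {"holding"}, "del": {"hand-empty"}}
place = {"name": "place", "pre": {"holding"}, "add": {"hand-empty"}, "del": {"holding"}}
state = {"hand-empty"}
print([a["name"] for a in applicable_actions(state, [pick, place])])  # ['pick']
```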
Contributors: Dave, Chirav (Author) / Srivastava, Siddharth (Thesis advisor) / Hsiao, Ihan (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
In order for a robot to solve complex tasks in the real world, it needs to compute discrete, high-level strategies that can be translated into continuous movement trajectories. These problems become increasingly difficult as the number of objects and domain constraints grows, and as the degrees of freedom of robotic manipulator arms increase.

The first part of this thesis develops and investigates new methods for addressing these problems through hierarchical task and motion planning for manipulation with a focus on autonomous construction of free-standing structures using precision-cut planks. These planks can be arranged in various orientations to design complex structures; reliably and autonomously building such structures from scratch is computationally intractable due to the long planning horizon and the infinite branching factor of possible grasps and placements that the robot could make.

An abstract representation is developed for this class of problems, and it is shown how pose generators can be used to autonomously compute feasible robot motion plans for constructing a given structure. The approach was evaluated in simulation and on a real ABB YuMi robot. The results show that hierarchical planning algorithms can effectively overcome the computational barriers to solving such problems.
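One way to picture the interplay between the abstract plan and the pose generators is the refinement loop sketched below; `pose_generator`, `motion_plan`, and the `end_state` attribute are hypothetical stand-ins, not the thesis's actual interfaces.

```python
# Hedged sketch of hierarchical refinement: realize each abstract step by
# sampling candidate poses and querying a continuous motion planner.
# `pose_generator`, `motion_plan`, and `traj.end_state` are hypothetical.
def refine(abstract_plan, state, pose_generator, motion_plan, max_tries=20):
    """Return motion trajectories realizing `abstract_plan`, or None if
    some step cannot be refined within `max_tries` pose samples."""
    trajectories = []
    for step in abstract_plan:
        for _ in range(max_tries):
            pose = pose_generator(step, state)  # candidate grasp/placement
            traj = motion_plan(state, pose)     # continuous feasibility check
            if traj is not None:
                trajectories.append(traj)
                state = traj.end_state
                break
        else:
            return None  # no feasible refinement found for this step
    return trajectories
```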

The second part of this thesis proposes a deep learning-based algorithm to identify critical regions for motion planning, and further investigates whether these learned critical regions can be translated into high-level landmark actions for automated planning.
Contributors: Kumar, Kislay (Author) / Srivastava, Siddharth (Thesis advisor) / Zhang, Yu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Many real-world planning problems can be modeled as Markov Decision Processes (MDPs) which provide a framework for handling uncertainty in outcomes of action executions. A solution to such a planning problem is a policy that handles possible contingencies that could arise during execution. MDP solvers typically construct policies for a problem instance without re-using information from previously solved instances. Research in generalized planning has demonstrated the utility of constructing algorithm-like plans that reuse such information. However, using such techniques in an MDP setting has not been adequately explored.

This thesis presents a novel approach for learning generalized partial policies that can be used to solve problems with different object names and/or object quantities, using very few example policies for learning. The approach uses abstraction for state representation, which allows the identification of patterns in solutions, such as loops, that are agnostic to problem-specific properties. This thesis also presents theoretical results on the uniqueness and succinctness of the policies computed under such a representation. The presented algorithm can be used as a fast, yet greedy and incomplete, method for policy computation, falling back to a complete policy search algorithm when needed. Extensive empirical evaluation on discrete MDP benchmarks shows that this approach generalizes effectively and is often able to solve problems much faster than existing state-of-the-art discrete MDP solvers. Finally, the practical applicability of this approach is demonstrated by incorporating it into an anytime stochastic task and motion planning framework to successfully construct free-standing tower structures using Keva planks.
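The "greedy but incomplete, with a complete fallback" usage pattern described above can be sketched roughly as follows; `generalized_policy`, `complete_solver`, and the state interface are hypothetical stand-ins for the thesis's actual components.

```python
# Rough sketch: follow a generalized partial policy while it applies,
# falling back to a complete solver when it does not. The state interface
# (`is_goal`, `apply`) and both callables are hypothetical.
def solve(state, generalized_policy, complete_solver, max_steps=1000):
    """Return a plan to the goal, or None if the step budget runs out."""
    plan = []
    for _ in range(max_steps):
        if state.is_goal():
            return plan
        action = generalized_policy(state)  # abstraction-based lookup
        if action is None:                  # partial policy has no match
            return plan + complete_solver(state)
        plan.append(action)
        state = state.apply(action)
    return None
```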
Contributors: Kala Vasudevan, Deepak (Author) / Srivastava, Siddharth (Thesis advisor) / Zhang, Yu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Most planning agents assume complete knowledge of the domain, which may not hold in scenarios where certain domain knowledge is missing. Such gaps could be due to design flaws or could arise from domain ramifications or qualifications. In these cases, planning algorithms can produce highly undesirable behaviors. Planning with incomplete domain knowledge is more challenging than planning under partial observability, in the sense that the agent is unaware that the missing knowledge even exists, as opposed to it being merely unobservable or partially observable. That is the difference between known unknowns and unknown unknowns.

In this thesis, I introduce and formulate this as the problem of Domain Concretization, the inverse of the extensively studied problem of domain abstraction. I present a solution that starts from the incomplete domain model provided to the agent by the designer and uses teacher traces from human users to determine the set of candidate models under a minimalistic model assumption. A robust plan is then generated to maximize the probability of success under the set of candidate models. In addition to a standard search formulation in the model space, I propose a sample-based search method and an online version of it to improve search time. The solution was evaluated on various International Planning Competition domains, where incompleteness was introduced by deleting certain predicates from the complete domain model, and on a robot simulation domain to illustrate its effectiveness in handling incomplete domain knowledge. The results show that the plans generated by the algorithm achieve a higher success rate without significantly increasing action cost.
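The robust-plan criterion (maximize probability of success over the candidate model set) can be sketched as below, assuming a uniform prior over candidate models; `simulate(plan, model)` is a hypothetical stand-in for checking whether a plan achieves the goal under one candidate model.

```python
# Illustrative sketch of selecting the most robust plan across a set of
# candidate domain models. `simulate` is a hypothetical predicate that
# reports whether `plan` achieves the goal under a given model.
def most_robust_plan(candidate_plans, candidate_models, simulate):
    """Return the plan with the highest success rate, assuming a uniform
    prior over the candidate models."""
    def success_rate(plan):
        return sum(simulate(plan, m) for m in candidate_models) / len(candidate_models)
    return max(candidate_plans, key=success_rate)
```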
Contributors: Sharma, Akshay (Author) / Zhang, Yu (Thesis advisor) / Fainekos, Georgios (Committee member) / Srivastava, Siddharth (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Understanding the limits and capabilities of an AI system is essential for the safe and effective use of modern AI systems. In the query-based AI assessment paradigm, a personalized assessment module queries a black-box AI system on behalf of a user and returns a user-interpretable model of the AI system’s capabilities. This thesis develops this paradigm to learn interpretable action models of simulator-based agents. Two types of agents are considered: the first uses high-level actions, where the user’s vocabulary captures the simulator state perfectly; the second operates on low-level actions, where the user’s vocabulary captures only an abstraction of the simulator state. Methods are developed to interface the assessment module with these agents. Empirical results show that the method is capable of learning interpretable models of agents operating in a range of domains.
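The query-and-filter loop at the heart of this paradigm can be sketched as follows; the `agent.respond` and `model.predicts` interfaces are illustrative assumptions, not the thesis's actual API.

```python
# Hedged sketch of query-based assessment: pose queries to a black-box
# agent and keep only the candidate action models consistent with its
# responses. `agent.respond` and `model.predicts` are hypothetical.
def assess(agent, candidate_models, queries):
    """Return the candidate models that agree with every observed response."""
    for query in queries:
        response = agent.respond(query)  # e.g., executed in a simulator
        candidate_models = [m for m in candidate_models
                            if m.predicts(query, response)]
    return candidate_models
```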
Contributors: Marpally, Shashank Rao (Author) / Srivastava, Siddharth (Thesis advisor) / Zhang, Yu (Committee member) / Fainekos, Georgios E (Committee member) / Arizona State University (Publisher)
Created: 2021