Matching Items (21)
Description
Reinforcement learning (RL) is a powerful methodology for teaching autonomous agents complex behaviors and skills. A critical component in most RL algorithms is the reward function -- a mathematical function that provides numerical estimates for desirable and undesirable states. Typically, the reward function must be hand-designed by a human expert and, as a result, the scope of a robot's autonomy and ability to safely explore and learn in new and unforeseen environments is constrained by the specifics of the designed reward function. In this thesis, I design and implement a stateful collision anticipation model with powerful predictive capability based upon my research on sequential data modeling and modern recurrent neural networks. I also develop deep reinforcement learning methods whose rewards are generated by self-supervised training and intrinsic signals. The main objective is to work towards the development of resilient robots that can learn to anticipate and avoid damaging interactions by combining visual and proprioceptive cues from internal sensors. The introduced solutions are inspired by pain pathways in humans and animals, because such pathways are known to guide decision-making processes and promote self-preservation. A new "robot dodge ball" benchmark is introduced in order to test the validity of the developed algorithms in dynamic environments.
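As a rough, illustrative sketch of the kind of intrinsic-reward loop described above (not the thesis implementation), the following assumes hypothetical `env`, `agent`, and `anticipation_model` objects, with the reward generated from the model's anticipated collision probability rather than a hand-designed function:

```python
# Illustrative sketch only: an RL episode whose reward is an intrinsic signal
# derived from a learned collision-anticipation model instead of a
# hand-designed reward function. `env`, `agent`, and `anticipation_model` are
# hypothetical placeholder objects, not components from the thesis, and
# `env.step` is a placeholder interface (not a specific RL library).
def run_episode(env, agent, anticipation_model, collision_penalty=1.0):
    obs = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = agent.act(obs)
        next_obs, done = env.step(action)
        # Intrinsic reward: penalize states the model anticipates will lead
        # to a damaging interaction (predicted probability in [0, 1]).
        p_collision = anticipation_model.predict(next_obs)
        reward = -collision_penalty * p_collision
        agent.update(obs, action, reward, next_obs, done)
        total_reward += reward
        obs = next_obs
    return total_reward
```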
Contributors: Richardson, Trevor W. (Author) / Ben Amor, Heni (Thesis advisor) / Yang, Yezhou (Committee member) / Srivastava, Siddharth (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
In this thesis, a new approach to learning-based planning is presented where critical regions of an environment with low probability measure are learned from a given set of motion plans. Critical regions are learned using convolutional neural networks (CNN) to improve sampling processes for motion planning (MP).

In addition to an identification network, a new sampling-based motion planner, Learn and Link, is introduced. This planner leverages critical regions to overcome the limitations of uniform sampling while still maintaining guarantees of correctness inherent to sampling-based algorithms. Learn and Link is evaluated against planners from the Open Motion Planning Library (OMPL) on an extensive suite of challenging navigation planning problems. This work shows that critical areas of an environment are learnable, and can be used by Learn and Link to solve MP problems with far less planning time than existing sampling-based planners.
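As a hedged sketch of the general idea of biasing a sampler toward learned critical regions (the sampling ratio, region representation, and helpers below are illustrative assumptions, not the Learn and Link algorithm):

```python
import random

# Illustrative sketch: biasing a sampling-based motion planner toward learned
# critical regions. The bias ratio, region format, and helpers are assumptions
# for illustration; they do not reproduce Learn and Link.
def biased_sample(critical_regions, bounds, bias=0.5):
    """Return a configuration drawn from a critical region with
    probability `bias`, and uniformly from the workspace otherwise."""
    if critical_regions and random.random() < bias:
        region = random.choice(critical_regions)   # region = (low, high) box
        return [random.uniform(lo, hi) for lo, hi in zip(*region)]
    return [random.uniform(lo, hi) for lo, hi in zip(*bounds)]

# Example usage with a 2-D workspace and one learned critical region.
workspace = ([0.0, 0.0], [10.0, 10.0])
regions = [([4.0, 4.0], [5.0, 6.0])]
q = biased_sample(regions, workspace, bias=0.5)
```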
Contributors: Molina, Daniel, M.S. (Author) / Srivastava, Siddharth (Thesis advisor) / Li, Baoxin (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Knowledge Representation (KR) is one of the prominent approaches to Artificial Intelligence (AI) that is concerned with representing knowledge in a form that computer systems can utilize to solve complex problems. Answer Set Programming (ASP), based on the stable model semantics, is a widely-used KR framework that facilitates elegant and efficient representations for many problem domains that require complex reasoning.

However, while ASP is effective on deterministic problem domains, it is not suitable for applications involving quantitative uncertainty, for example, those that require probabilistic reasoning. Furthermore, it is hard to incorporate information that can be statistically induced from data into ASP problem modeling.

This dissertation presents the language LP^MLN, which is a probabilistic extension of the stable model semantics with the concept of weighted rules, inspired by Markov Logic. An LP^MLN program defines a probability distribution over "soft" stable models, which need not satisfy every rule; the more rules a model satisfies, and the larger their weights, the higher its probability. LP^MLN takes advantage of both ASP and Markov Logic in a single framework, allowing the representation of problems that require both logical and probabilistic reasoning in an intuitive and elaboration-tolerant way.
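As a hedged illustration of this idea (the exact LP^MLN semantics is defined formally in the dissertation), the probability of a soft stable model can be written as a normalized exponential of the total weight of the rules it satisfies:

```latex
% Illustrative formulation only; the dissertation gives the precise definition.
W_\Pi(I) = \exp\!\Big(\textstyle\sum_{\,w:R \,\in\, \Pi,\ I \,\models\, R} w\Big),
\qquad
P_\Pi(I) = \frac{W_\Pi(I)}{\sum_{J \in \mathrm{SSM}(\Pi)} W_\Pi(J)}
```

where w : R ranges over the weighted rules of the program Π and SSM(Π) denotes its set of soft stable models.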

This dissertation establishes formal relations between LP^MLN and several other formalisms, discusses inference and weight learning algorithms under LP^MLN, and presents systems implementing the algorithms. LP^MLN systems can be used to compute other languages translatable into LP^MLN.

The advantage of LP^MLN for probabilistic reasoning is illustrated by a probabilistic extension of the action language BC+, called pBC+, defined as a high-level notation of LP^MLN for describing transition systems. Various probabilistic reasoning tasks about transition systems, especially probabilistic diagnosis, can be modeled in pBC+ and computed using LP^MLN systems. pBC+ is further extended with the notion of utility, through a decision-theoretic extension of LP^MLN, and related to Markov Decision Processes (MDPs) in terms of policy optimization problems. pBC+ can be used to represent (PO)MDPs in a succinct and elaboration-tolerant way, which enables planning with (PO)MDP algorithms in action domains whose description requires rich KR constructs, such as recursive definitions and indirect effects of actions.
Contributors: Wang, Yi (Author) / Lee, Joohyung (Thesis advisor) / Baral, Chitta (Committee member) / Kambhampati, Subbarao (Committee member) / Natarajan, Sriraam (Committee member) / Srivastava, Siddharth (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Recent advancements in external memory based neural networks have shown promise in solving tasks that require precise storage and retrieval of past information. Researchers have applied these models to a wide range of tasks that have algorithmic properties but have not applied these models to real-world robotic tasks. In this thesis, we present memory-augmented neural networks that synthesize robot navigation policies which (a) encode long-term temporal dependencies, (b) make decisions in partially observed environments, and (c) quantify the uncertainty inherent in the task. We extract information about the temporal structure of a task via imitation learning from human demonstration and evaluate the performance of the models on control policies for a robot navigation task. Experiments are performed in partially observed environments in both simulation and the real world.
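The following is a minimal, hedged sketch of a recurrent, memory-augmented navigation policy trained by imitation (behavioral cloning) on demonstrated observation-action sequences; the architecture, sizes, and training details are illustrative assumptions, not the models evaluated in the thesis:

```python
import torch
import torch.nn as nn

# Illustrative sketch: a recurrent (memory-augmented) navigation policy trained
# by behavioral cloning on demonstrated observation/action sequences.
class RecurrentPolicy(nn.Module):
    def __init__(self, obs_dim, hidden_dim, num_actions):
        super().__init__()
        self.memory = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, obs_seq):
        # obs_seq: (batch, time, obs_dim); the recurrent state carries
        # information across time steps, which is what lets the policy act
        # under partial observability.
        features, _ = self.memory(obs_seq)
        return self.head(features)              # (batch, time, num_actions)

policy = RecurrentPolicy(obs_dim=16, hidden_dim=64, num_actions=4)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One behavioral-cloning step on a (synthetic) batch of demonstrations.
demo_obs = torch.randn(8, 20, 16)               # (batch, time, obs_dim)
demo_actions = torch.randint(0, 4, (8, 20))     # expert action indices
logits = policy(demo_obs)
loss = loss_fn(logits.reshape(-1, 4), demo_actions.reshape(-1))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```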
Contributors: Srivatsav, Nambi (Author) / Ben Amor, Heni (Thesis advisor) / Srivastava, Siddharth (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Classical planning is a field of Artificial Intelligence concerned with allowing autonomous agents to make reasonable decisions in complex environments. This work investigates the application of deep learning and planning techniques, with the aim of constructing generalized plans capable of solving multiple problem instances. We construct a Deep Neural Network that, given an abstract problem state, predicts both (i) the best action to be taken from that state and (ii) the generalized “role” of the object being manipulated. The neural network was tested on two classical planning domains: the blocks world domain and the logistics domain. Results show that neural networks are capable of making such predictions with high accuracy, indicating a promising new framework for approaching generalized planning problems.
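A hedged sketch of a network with the two prediction heads described above, one for the action and one for the object's generalized role; the input encoding, layer sizes, and head definitions are illustrative assumptions rather than the architecture used in this work:

```python
import torch
import torch.nn as nn

# Illustrative sketch: a shared trunk with two output heads, one predicting the
# next action and one predicting the generalized "role" of the manipulated
# object. Encoding and sizes are assumptions for illustration only.
class GeneralizedPlanNet(nn.Module):
    def __init__(self, state_dim, num_actions, num_roles, hidden_dim=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.action_head = nn.Linear(hidden_dim, num_actions)  # head (i)
        self.role_head = nn.Linear(hidden_dim, num_roles)      # head (ii)

    def forward(self, abstract_state):
        h = self.trunk(abstract_state)
        return self.action_head(h), self.role_head(h)

net = GeneralizedPlanNet(state_dim=32, num_actions=6, num_roles=4)
action_logits, role_logits = net(torch.randn(1, 32))
```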
Contributors: Nakhleh, Julia Blair (Author) / Srivastava, Siddharth (Thesis director) / Fainekos, Georgios (Committee member) / Computer Science and Engineering Program (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
This dissertation introduces and examines Soft Curved Reconfigurable Anisotropic Mechanisms (SCRAMs) as a solution to address actuation, manufacturing, and modeling challenges in the field of soft robotics, with the aim of facilitating the broader implementation of soft robots in various industries. SCRAM systems utilize the curved geometry of thin elastic structures to tackle these challenges in soft robots. SCRAM devices can modify their dynamic behavior by incorporating reconfigurable anisotropic stiffness, thereby enabling tailored locomotion patterns for specific tasks. This approach simplifies the actuation of robots, resulting in lighter, more flexible, cost-effective, and safer soft robotic systems. This dissertation demonstrates the potential of SCRAM devices through several case studies. These studies investigate virtual joints and shape change propagation in tubes, as well as anisotropic dynamic behavior in vibrational soft twisted beams, effectively demonstrating interesting locomotion patterns that are achievable using simple actuation mechanisms. The dissertation also addresses modeling and simulation challenges by introducing a reduced-order modeling approach. This approach enables fast and accurate simulations of soft robots and is compatible with existing rigid body simulators. Additionally, this dissertation investigates the prototyping processes of SCRAM devices and offers a comprehensive framework for the development of these devices. Overall, this dissertation demonstrates the potential of SCRAM devices to overcome actuation, modeling, and manufacturing challenges in soft robotics. The innovative concepts and approaches presented have implications for various industries that require cost-effective, adaptable, and safe robotic systems. SCRAM devices pave the way for the widespread application of soft robots in diverse domains.
Contributors: Jiang, Yuhao (Author) / Aukes, Daniel (Thesis advisor) / Berman, Spring (Committee member) / Lee, Hyunglae (Committee member) / Marvi, Hamidreza (Committee member) / Srivastava, Siddharth (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
While there are many existing systems which take natural language descriptions and use them to generate images or text, few systems exist to generate 3D renderings or environments based on natural language. Most of those systems are very limited in scope and require precise, predefined language to work, or large, well-tagged datasets for their models. In this project I attempt to apply concepts in NLP and procedural generation to a system which can generate a rough 3D scene estimate of a natural language description, using a free-use database of models. The primary objective of this system, rather than a completely accurate representation, is to generate a useful or interesting result. Such a system could assist designers who use 3D scenes or environments in their work.
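A toy, hedged sketch of the general pipeline such a system might follow: match nouns in the description against a model database, then assign rough positions with a simple heuristic. The database entries, parsing, and placement rule below are invented for illustration and are not the project's implementation:

```python
import random

# Illustrative sketch only: map a natural language description to rough 3D
# placements by keyword matching against a tiny model database. The database
# entries and placement heuristic are invented for illustration.
MODEL_DB = {
    "table": "models/table.obj",
    "chair": "models/chair.obj",
    "lamp":  "models/lamp.obj",
}

def rough_scene(description, spacing=2.0):
    """Return (model_path, (x, y, z)) placements for recognized nouns."""
    words = description.lower().replace(",", " ").split()
    scene, x = [], 0.0
    for word in words:
        # Very naive plural handling for the toy vocabulary above.
        key = word[:-1] if word.endswith("s") and word[:-1] in MODEL_DB else word
        if key in MODEL_DB:
            jitter = random.uniform(-0.3, 0.3)
            scene.append((MODEL_DB[key], (x, 0.0, jitter)))
            x += spacing
    return scene

print(rough_scene("a table with two chairs and a lamp"))
```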
Contributors: Hann, Jacob R. (Author) / Kobayashi, Yoshihiro (Thesis director) / Srivastava, Siddharth (Committee member) / Computer Science and Engineering Program (Contributor) / Computing and Informatics Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Many real-world problems rely on the collaboration of multiple agents. Making plans for these multiple agents such that the goal state can be achieved becomes more and more difficult as the number of objects to consider increases: the increase in the number of objects results in an exponential increase in the time and space required to find a viable plan. By mapping each agent onto a team, creating an abstract plan, and applying the abstract plan to the concrete problem, we can produce plans that reach the goal state more quickly than by solving the concrete problem directly. This is demonstrated by applying this method to multiple problems in a custom domain dubbed the "garden" domain.
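A hedged sketch of the abstract-then-refine idea described above: group agents into teams, plan over the teams, then expand each abstract step into concrete per-agent actions. The data structures and helper functions are illustrative assumptions, not the thesis implementation:

```python
# Illustrative sketch: plan over teams (the abstraction), then expand each
# abstract step into concrete per-agent actions. `team_of`, `abstract_planner`,
# and `refine_step` are hypothetical callables supplied by the caller.
def plan_with_team_abstraction(agents, team_of, abstract_planner, refine_step):
    # Group concrete agents into abstract teams.
    teams = {}
    for agent in agents:
        teams.setdefault(team_of(agent), []).append(agent)

    # Plan over the (few) teams instead of the (many) agents,
    # e.g. abstract_plan = [("team_a", "water_plot_1"), ...].
    abstract_plan = abstract_planner(sorted(teams))

    # Refine each abstract step into concrete actions for the team's members.
    concrete_plan = []
    for team, task in abstract_plan:
        concrete_plan.extend(refine_step(task, teams[team]))
    return concrete_plan
```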
Contributors: Atkinson, Kyle (Author) / Srivastava, Siddharth (Thesis director) / Shah, Naman (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-05
Description
This research introduces Roblocks, a user-friendly system for learning Artificial Intelligence (AI) planning concepts using mobile manipulator robots. It uses a visual programming interface based on block-structured programming to make AI planning concepts easier to grasp for those who are new to robotics and AI planning. Users accomplish desired tasks by dynamically populating puzzle-shaped blocks that encode the robot’s possible actions, allowing them to carry out tasks like navigation, planning, and manipulation by connecting blocks instead of writing code. Roblocks has two levels: in the first level, users rearrange a jumbled set of plan actions into the correct order so that a given goal can be achieved. In the second level, they select actions of their choice, but at each step only the actions applicable in the current state are made available, pruning the vast number of possible actions down to those that are truly feasible and relevant. Both levels include a simulation in which the user’s plan is executed. Moreover, if the user’s plan is invalid or fails to achieve the given goal condition, an explanation for the failure is provided in plain English. This makes it easier for everyone (especially for non-roboticists) to understand the cause of the failure.
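A hedged sketch of how "only actions applicable in the current state" can be computed in a STRIPS-style setting, where an action is applicable when its preconditions hold in the current state; the action representation below is an illustrative assumption, not Roblocks code:

```python
# Illustrative sketch: filtering actions to those applicable in the current
# state, STRIPS-style. The action format is an assumption for illustration.
ACTIONS = {
    "pick(cup)":   {"pre": {"handempty", "at(table)"},
                    "add": {"holding(cup)"}, "del": {"handempty"}},
    "goto(table)": {"pre": set(), "add": {"at(table)"}, "del": {"at(door)"}},
}

def applicable_actions(state, actions=ACTIONS):
    """Return the names of actions whose preconditions hold in `state`."""
    return [name for name, a in actions.items() if a["pre"] <= state]

state = {"handempty", "at(door)"}
print(applicable_actions(state))   # only "goto(table)" is offered to the user
```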
Contributors: Dave, Chirav (Author) / Srivastava, Siddharth (Thesis advisor) / Hsiao, Ihan (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
In order for a robot to solve complex tasks in the real world, it needs to compute discrete, high-level strategies that can be translated into continuous movement trajectories. These problems become increasingly difficult with increasing numbers of objects and domain constraints, as well as with the increasing degrees of freedom of robotic manipulator arms.

The first part of this thesis develops and investigates new methods for addressing these problems through hierarchical task and motion planning for manipulation with a focus on autonomous construction of free-standing structures using precision-cut planks. These planks can be arranged in various orientations to design complex structures; reliably and autonomously building such structures from scratch is computationally intractable due to the long planning horizon and the infinite branching factor of possible grasps and placements that the robot could make.

An abstract representation is developed for this class of problems, and it is shown how pose generators can be used to autonomously compute feasible robot motion plans for constructing a given structure. The approach was evaluated through simulation and on a real ABB YuMi robot. Results show that hierarchical algorithms for planning can effectively overcome the computational barriers to solving such problems.
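A hedged sketch of what a pose generator interface can look like in a task and motion planning setting: a generator lazily yields candidate placement poses, and the planner tests them for feasibility one by one. The pose format and feasibility check below are illustrative assumptions, not the thesis implementation:

```python
import itertools
import random

# Illustrative sketch: a pose generator that lazily yields candidate placement
# poses for a plank, which a task-and-motion planner can test one by one.
def placement_generator(target_region, num_yaw_samples=8):
    """Yield candidate (x, y, yaw) placement poses inside `target_region`."""
    (xmin, ymin), (xmax, ymax) = target_region
    yaw_choices = [i * 3.14159 / num_yaw_samples for i in range(num_yaw_samples)]
    for _ in itertools.count():
        yield (random.uniform(xmin, xmax),
               random.uniform(ymin, ymax),
               random.choice(yaw_choices))

def first_feasible_pose(generator, is_feasible, max_tries=100):
    """Return the first pose the motion planner accepts, or None."""
    for pose in itertools.islice(generator, max_tries):
        if is_feasible(pose):
            return pose
    return None

gen = placement_generator(((0.0, 0.0), (1.0, 0.5)))
pose = first_feasible_pose(gen, is_feasible=lambda p: p[0] > 0.2)
```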

The second part of this thesis proposes a deep learning-based algorithm for identifying critical regions for motion planning, and further investigates whether these learned critical regions can be used to learn high-level landmark actions for automated planning.
Contributors: Kumar, Kislay (Author) / Srivastava, Siddharth (Thesis advisor) / Zhang, Yu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2019