Matching Items (3)

Description
Multi-robotic-arm collaboration means controlling several robotic arms so that they work together on the same task. During the collaboration, the agent is required to avoid all possible collisions between every part of the robotic arms. Thus, incentivizing collaboration and preventing collisions are the two principles the agent follows during the training process. Nowadays, more and more applications, both in industry and in daily life, require at least two arms rather than a single arm. A dual-arm robot satisfies the needs of many more types of tasks, such as folding clothes at home, making a hamburger on a grill, or picking and placing a product in a warehouse. The applications in this thesis all involve object pushing: the focus is on training the agent to learn to push an object away as far as possible. Reinforcement Learning (RL), a type of Machine Learning (ML), is used to train the agent to generate optimal actions. Deep Deterministic Policy Gradient (DDPG) and Hindsight Experience Replay (HER) are the two RL methods used in this thesis.
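
As an illustrative aside, the core HER idea, relabeling a failed pushing episode with the goal that was actually achieved, can be sketched in a few lines. The environment fields, reward threshold, and function names below are assumptions for illustration, not the thesis implementation.

```python
# Minimal sketch of "final-goal" Hindsight Experience Replay (illustrative only).
# Failed pushing episodes are relabeled so the object's final position is treated
# as if it had been the goal, turning sparse-reward failures into useful data
# for an off-policy learner such as DDPG.
import numpy as np

def her_relabel(episode, reward_fn):
    """Return the original transitions plus copies relabeled with the goal
    actually achieved at the end of the episode."""
    relabeled = []
    achieved_final = episode[-1]["achieved_goal"]
    for t in episode:
        relabeled.append(t)                 # keep the original transition
        new_t = dict(t)
        new_t["goal"] = achieved_final      # pretend the final position was the goal
        new_t["reward"] = reward_fn(t["achieved_goal"], achieved_final)
        relabeled.append(new_t)
    return relabeled

# Toy usage: sparse reward is 0 when the object is within 5 cm of the goal, else -1.
reward_fn = lambda achieved, goal: 0.0 if np.linalg.norm(achieved - goal) < 0.05 else -1.0
episode = [
    {"obs": np.zeros(3), "action": np.ones(2),
     "goal": np.array([1.0, 0.0, 0.0]),
     "achieved_goal": np.array([0.2, 0.0, 0.0]), "reward": -1.0},
    {"obs": np.zeros(3), "action": np.ones(2),
     "goal": np.array([1.0, 0.0, 0.0]),
     "achieved_goal": np.array([0.4, 0.0, 0.0]), "reward": -1.0},
]
buffer = her_relabel(episode, reward_fn)  # feed these transitions to a DDPG replay buffer
```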
Contributors: Lin, Steve (Author) / Ben Amor, Hani (Thesis advisor) / Redkar, Sangram (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Humans have an excellent ability to analyze and process information from multiple domains. They also possess the ability to apply the same decision-making process when a situation is similar to their previous experience.

Inspired by humans' ability to remember past experiences and apply them when a similar situation occurs, the research community has attempted to augment neural networks with external memory to store previously learned information. The community has also developed mechanisms for domain-specific weight switching, so that a single model can handle multiple domains. Notably, these two research directions have developed independently, and the goal of this dissertation is to combine their capabilities.

This dissertation introduces a neural network module augmented with two external memories: one that allows the network to read and write information, and another that performs domain-specific weight switching. Two learning tasks are proposed to investigate the model's performance: solving a sequence of mathematical operations and identifying actions based on a color sequence. A wide range of experiments on these two tasks verifies the model's learning capabilities.
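
For illustration only, the two mechanisms named above, a readable and writable external memory plus per-domain weight switching, might look roughly like the sketch below. The layer sizes, write rule, and class name are assumptions, not the dissertation's actual architecture.

```python
# Toy sketch (assumptions, not the dissertation's model): an external memory
# read by soft attention and updated with a simple moving-average write, plus
# a bank of domain-specific output heads selected by a domain index.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAugmentedNet(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim, mem_slots=16, num_domains=2):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hid_dim)
        # External memory slots, written in forward() below.
        self.register_buffer("memory", torch.zeros(mem_slots, hid_dim))
        # One output head per domain; indexing by domain_id is the "weight switching" step.
        self.heads = nn.ModuleList(
            [nn.Linear(2 * hid_dim, out_dim) for _ in range(num_domains)])

    def forward(self, x, domain_id):
        h = torch.tanh(self.encoder(x))                 # (B, H) encoded input
        attn = F.softmax(h @ self.memory.t(), dim=-1)   # (B, S) read weights
        read = attn @ self.memory                       # (B, H) memory readout
        with torch.no_grad():                           # toy write step
            write = attn.mean(0).unsqueeze(1) * h.mean(0, keepdim=True)
            self.memory.mul_(0.99).add_(0.01 * write)
        return self.heads[domain_id](torch.cat([h, read], dim=-1))

net = MemoryAugmentedNet(in_dim=8, hid_dim=32, out_dim=4)
y = net(torch.randn(5, 8), domain_id=0)  # switch output weights per domain/task
```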
Contributors: Patel, Deep Chittranjan (Author) / Ben Amor, Hani (Thesis advisor) / Banerjee, Ayan (Committee member) / McDaniel, Troy (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Robot-assisted devices for gait rehabilitation have not penetrated clinical settings to a degree proportionate to the developments in this field. A possible reason is that these devices are developed and evaluated from a predominantly engineering perspective. One way to mitigate this is to further incorporate the principles of neurophysiology into the development of these systems. To that end, this research proposes a method for grounded evaluation of three machine learning algorithms, to gain insight into which modeling approaches are able to both replicate therapist assistance and emulate therapist strategies. The algorithms evaluated are ordinary least squares regression (OLS), Gaussian process regression (GPR), and inverse reinforcement learning (IRL). The results show that grounded evaluation provides evidence about the algorithms at a higher resolution, and that GPR is likely the most accurate algorithm for replicating therapist assistance and emulating therapist adaptation strategies.
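
As a purely illustrative sketch of the GPR approach named above, the snippet below fits a Gaussian process that maps a gait state to an assistance signal and returns a predictive uncertainty. The synthetic data, feature choice, and kernel are assumptions, not the thesis setup.

```python
# Illustrative only: Gaussian process regression from gait state to therapist
# assistance, the kind of model the abstract compares against OLS and IRL.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Hypothetical gait state: [hip angle, knee angle, gait-phase fraction]
X = rng.uniform(-1.0, 1.0, size=(200, 3))
# Hypothetical noisy therapist assistance signal (stand-in for recorded data)
y = 2.0 * np.sin(np.pi * X[:, 2]) - 0.5 * X[:, 0] + 0.1 * rng.standard_normal(200)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-2),
                               normalize_y=True)
gpr.fit(X, y)
mean, std = gpr.predict(X[:5], return_std=True)  # predicted assistance + uncertainty
```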
Contributors: Smith, Mason Owen (Author) / Zhang, Wenlong (Thesis advisor) / Ben Amor, Hani (Committee member) / Sugar, Thomas (Committee member) / Arizona State University (Publisher)
Created: 2021