Matching Items (3)
Description
Trajectory forecasting is used in many fields, such as vehicle trajectory prediction, stock market price prediction, and human motion prediction. The capability to reason about human behavior is also an important aspect of human-robot interaction. In human motion prediction, implicitly learning and reproducing human behavior is the major challenge. This work compares some of the recent advances that take a phenomenological approach to trajectory prediction.

The work focuses on generating future events or trajectories based on data observed across many time intervals. In particular, it presents and compares machine learning models that generate various human handwriting trajectories. Although the behavior of every individual is unique, it is still possible to broadly generalize and learn the underlying human behavior from current observations in order to predict future writing trajectories. This enables the machine or robot to generate future handwriting trajectories given an initial trajectory from the individual, thereby helping the person complete the rest of the letter or curve. This work tests and compares the performance of Conditional Variational Autoencoder and Sinusoidal Representation Network models on handwriting trajectory prediction and reconstruction.
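As a rough illustration of the second model family mentioned above, the sketch below implements the forward pass of a Sinusoidal Representation Network (SIREN), which maps a time coordinate to a 2-D pen position through sine-activated layers. The layer sizes, random weights, and the `omega0` frequency scale are illustrative placeholders; a trained model fitting an actual handwriting trajectory would learn these weights.

```python
import numpy as np

def siren_layer(x, W, b, omega0=30.0):
    """One SIREN layer: sinusoidal activation applied to an affine map."""
    return np.sin(omega0 * (x @ W + b))

def siren_forward(t, weights, biases, omega0=30.0):
    """Map a time coordinate t to a 2-D pen position (x, y).
    Weights here are random placeholders, not a trained handwriting model."""
    h = t
    for W, b in zip(weights[:-1], biases[:-1]):
        h = siren_layer(h, W, b, omega0)
    return h @ weights[-1] + biases[-1]  # linear output layer

rng = np.random.default_rng(0)
dims = [1, 32, 32, 2]  # t -> hidden -> hidden -> (x, y)
weights = [rng.uniform(-1, 1, (i, o)) / i for i, o in zip(dims[:-1], dims[1:])]
biases = [np.zeros(o) for o in dims[1:]]

t = np.linspace(0, 1, 100).reshape(-1, 1)  # 100 timesteps
traj = siren_forward(t, weights, biases)
print(traj.shape)  # (100, 2): a 2-D trajectory over 100 timesteps
```

The periodic activations are what make this architecture well suited to smooth, oscillatory signals such as pen strokes.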
ContributorsKota, Venkata Anil (Author) / Ben Amor, Hani (Thesis advisor) / Venkateswara, Hemanth Kumar Demakethepalli (Committee member) / Redkar, Sangram (Committee member) / Arizona State University (Publisher)
Created2021
Description
Reinforcement Learning (RL) algorithms have made remarkable contributions in the field of robotics and in training human-like agents. Evolutionary Algorithms (EA), on the other hand, remain underexplored in the robotics field, even though they have excellent potential to perform well. In this thesis, RL algorithms such as Q-learning and Deep Deterministic Policy Gradient (DDPG), as well as Evolutionary Algorithms such as the Harmony Search Algorithm (HSA), are tested on a customized Penalty Kick Robot environment. Experiments are conducted with both discrete and continuous action spaces for the penalty kick agent. The main goal is to identify which algorithm suits best in which scenario. Furthermore, a goalkeeper agent is introduced to block the ball from reaching the goal post using a multi-agent learning algorithm.
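The discrete-action case can be sketched with tabular Q-learning. The 3-state, 3-action "penalty kick" environment below is an illustrative stand-in (not the thesis's actual environment): the state encodes where the goalkeeper leaves a gap, and reward 1 is earned only when the kick angle matches it.

```python
import numpy as np

n_states, n_actions = 3, 3
rng = np.random.default_rng(42)
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    # Reward 1 only when the kick angle (action) matches the keeper gap (state).
    reward = 1.0 if action == state else 0.0
    next_state = int(rng.integers(n_states))
    return next_state, reward

state = 0
for _ in range(5000):
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))  # explore
    else:
        action = int(np.argmax(Q[state]))      # exploit
    next_state, reward = step(state, action)
    # Standard Q-learning temporal-difference update.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                 - Q[state, action])
    state = next_state

print(np.argmax(Q, axis=1))  # greedy policy; ideally matches each gap
```

Continuous action spaces (e.g. an exact kick angle and force) are where tabular methods break down and actor-critic approaches like DDPG take over.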
ContributorsTrivedi, Maitry Ronakbhai (Author) / Amor, Heni Ben (Thesis advisor) / Redkar, Sangram (Thesis advisor) / Sugar, Thomas (Committee member) / Arizona State University (Publisher)
Created2021
Description
Multi-robotic-arm collaboration means controlling multiple robotic arms so that they work together on the same task. During the collaboration, the agent is required to avoid all possible collisions between each part of the robotic arms. Thus, incentivizing collaboration and preventing collisions are the two principles the agent follows during training. Nowadays, more and more applications, both in industry and in daily life, require at least two arms rather than a single arm. A dual-arm robot satisfies many more types of tasks, such as folding clothes at home, making a hamburger on a grill, or picking and placing products in a warehouse. The applications in this thesis all involve object pushing. The thesis focuses on training the agent to learn to push an object away as far as possible. Reinforcement Learning (RL), a type of Machine Learning (ML), is utilized to train the agent to generate optimal actions. Deep Deterministic Policy Gradient (DDPG) and Hindsight Experience Replay (HER) are the two RL methods used in this thesis.
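The core idea behind HER, which pairs naturally with sparse-reward pushing tasks like the one described above, is goal relabeling: a failed episode is replayed as if the position the object actually reached had been the goal all along. The sketch below shows the "final" relabeling strategy; the pushing-task details (goal tolerance, observation shapes) are illustrative assumptions, not the thesis's setup.

```python
import numpy as np

def push_reward(achieved_goal, desired_goal, tol=0.05):
    # Sparse reward: 0 if the object is within tol of the goal, else -1.
    return 0.0 if np.linalg.norm(achieved_goal - desired_goal) < tol else -1.0

def her_relabel(episode):
    """episode: list of (obs, action, achieved_goal, desired_goal) tuples.
    Returns original transitions plus hindsight copies whose goal is the
    position the object actually ended up at."""
    final_achieved = episode[-1][2]
    relabeled = []
    for obs, action, achieved, desired in episode:
        relabeled.append((obs, action, push_reward(achieved, desired), desired))
        relabeled.append((obs, action,
                          push_reward(achieved, final_achieved), final_achieved))
    return relabeled

# A failed 3-step episode: the object never reaches the desired goal at x=1.0.
goal = np.array([1.0, 0.0])
episode = [
    (np.zeros(4), np.zeros(2), np.array([0.1, 0.0]), goal),
    (np.zeros(4), np.zeros(2), np.array([0.2, 0.0]), goal),
    (np.zeros(4), np.zeros(2), np.array([0.3, 0.0]), goal),
]
transitions = her_relabel(episode)
rewards = [r for _, _, r, _ in transitions]
print(rewards)  # [-1.0, -1.0, -1.0, -1.0, -1.0, 0.0]
```

The last hindsight copy becomes a success, giving the off-policy learner (here, DDPG) a non-trivial reward signal even when the original goal was never reached.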
ContributorsLin, Steve (Author) / Ben Amor, Hani (Thesis advisor) / Redkar, Sangram (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created2023