This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.

Description
Reinforcement learning (RL) is a powerful methodology for teaching autonomous agents complex behaviors and skills. A critical component in most RL algorithms is the reward function -- a mathematical function that provides numerical estimates for desirable and undesirable states. Typically, the reward function must be hand-designed by a human expert and, as a result, the scope of a robot's autonomy and ability to safely explore and learn in new and unforeseen environments is constrained by the specifics of the designed reward function. In this thesis, I design and implement a stateful collision anticipation model with powerful predictive capability based upon my research of sequential data modeling and modern recurrent neural networks. I also develop deep reinforcement learning methods whose rewards are generated by self-supervised training and intrinsic signals. The main objective is to work towards the development of resilient robots that can learn to anticipate and avoid damaging interactions by combining visual and proprioceptive cues from internal sensors. The introduced solutions are inspired by pain pathways in humans and animals, because such pathways are known to guide decision-making processes and promote self-preservation. A new "robot dodge ball" benchmark is introduced in order to test the validity of the developed algorithms in dynamic environments.
Contributors: Richardson, Trevor W (Author) / Ben Amor, Heni (Thesis advisor) / Yang, Yezhou (Committee member) / Srivastava, Siddharth (Committee member) / Arizona State University (Publisher)
Created: 2018
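
As a rough illustration of the intrinsic-reward idea described in the abstract above, the following is a minimal PyTorch sketch (not the thesis's actual code; the model class, sensor dimensionality, and reward shaping are illustrative assumptions): a small LSTM scores the probability of an imminent collision from a window of fused visual/proprioceptive features, and its negated output serves as a pain-like reward signal.

```python
# Hedged sketch of a self-supervised collision-anticipation reward.
# AnticipationLSTM, intrinsic_reward, and the 12-dim sensor vector are
# illustrative assumptions, not the author's implementation.
import torch
import torch.nn as nn

class AnticipationLSTM(nn.Module):
    """Predicts near-future collision probability from a sequence of
    fused visual/proprioceptive feature vectors."""
    def __init__(self, obs_dim: int = 12, hidden_dim: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, time, obs_dim)
        out, _ = self.lstm(obs_seq)
        # Score collision likelihood from the final hidden state.
        return torch.sigmoid(self.head(out[:, -1]))

def intrinsic_reward(model: AnticipationLSTM, obs_seq: torch.Tensor) -> torch.Tensor:
    """Pain-inspired signal: anticipated collisions are penalized, so the
    agent is pushed to steer away before any damage actually occurs."""
    with torch.no_grad():
        p_collision = model(obs_seq)
    return -p_collision.squeeze(-1)

# Usage: score a batch of two 20-step sensor histories.
model = AnticipationLSTM()
obs = torch.randn(2, 20, 12)
print(intrinsic_reward(model, obs))  # tensor of shape (2,)
```

Penalizing anticipated rather than experienced collisions is what would let an agent learn avoidance before damage occurs, mirroring the pain-pathway analogy in the abstract.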
Description
In a pursuit-evasion setup where one group of agents tracks down another adversarial group, vision-based algorithms have been known to make use of techniques such as Linear Dynamic Estimation to determine the probable future location of an evader in a given environment. This helps a pursuer attain an edge over the evader that has conventionally benefited from the uncertainty of the pursuit. The pursuer can utilize this knowledge to enable a faster capture of the evader, as opposed to a pursuer that only knows the evader's current location. Inspired by the function of dorsal anterior cingulate cortex (dACC) neurons in natural predators, a predictive model built with an encoder-decoder Long Short-Term Memory (LSTM) network is proposed to produce a more accurate estimate of the evader's future location. This enables an even quicker capture of a target when compared to previously used filtering-based methods. The effectiveness of the approach is evaluated by setting up these agents in an environment based in the Modular Open Robots Simulation Engine (MORSE). Cross-domain adaptability of the method, without the explicit need to retrain the prediction model, is demonstrated by evaluating it in another domain.
Contributors: Godbole, Sumedh (Author) / Yang, Yezhou (Thesis advisor) / Srivastava, Siddharth (Committee member) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2021
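
To make the prediction step concrete, here is a minimal encoder-decoder LSTM sketch in PyTorch for the evader-forecasting role described in the abstract above (the class name, dimensions, and autoregressive decoding scheme are illustrative assumptions, not the author's implementation): the encoder summarizes the evader's observed 2-D trajectory, and the decoder rolls that state forward to emit a multi-step forecast of future positions.

```python
# Hedged sketch of an encoder-decoder LSTM trajectory forecaster.
# Seq2SeqPredictor and all dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class Seq2SeqPredictor(nn.Module):
    def __init__(self, pos_dim: int = 2, hidden_dim: int = 32, horizon: int = 10):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(pos_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(pos_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, pos_dim)

    def forward(self, past: torch.Tensor) -> torch.Tensor:
        # past: (batch, T_obs, 2) observed evader positions.
        _, state = self.encoder(past)   # summarize the observed history
        step = past[:, -1:, :]          # seed the decoder with the last position
        preds = []
        for _ in range(self.horizon):   # autoregressive rollout
            dec_out, state = self.decoder(step, state)
            step = self.out(dec_out)    # next predicted position
            preds.append(step)
        return torch.cat(preds, dim=1)  # (batch, horizon, 2)

# Usage: forecast 10 future positions from 15 observed ones.
model = Seq2SeqPredictor()
future = model(torch.randn(4, 15, 2))
print(future.shape)  # torch.Size([4, 10, 2])
```

Unlike a linear dynamic estimator, a learned sequence model of this kind can in principle capture nonlinear evasive maneuvers, which is the advantage over filtering-based methods that the abstract describes.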