This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.
In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.
Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.
This work improves the quality of the solution to the sparse rewards problem by combining reinforcement learning (RL) with knowledge-rich planning. Classical methods for coping with sparse rewards during reinforcement learning modify the reward landscape so as to better guide the learner. In contrast, this work combines RL with a planner in order to utilize other information about the environment. Because the scope for representing environmental information is limited in RL, this work integrates a model-free learning algorithm, temporal difference (TD) learning, with a Hierarchical Task Network (HTN) planner to accommodate rich environmental information in the algorithm. In the perpetual sparse rewards problem, rewards reemerge within a fixed interval of time after being collected, so there is no well-defined goal state to serve as an exit condition. Incorporating planning in the learning algorithm not only improves the quality of the solution; it also avoids the ambiguity of encoding a profit-maximization goal when a planning algorithm alone is used to solve this problem. By occasionally consulting the HTN planner, the algorithm is nudged toward the optimal solution. In this work, I demonstrate an on-policy algorithm that improves the quality of the solution over vanilla reinforcement learning. The objective of this work has been to observe the capacity of the synthesized algorithm to find optimal policies that maximize rewards while remaining aware of the environment and of other agents in the vicinity.
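As a rough illustration of the approach summarized in this abstract, the sketch below pairs an on-policy TD (SARSA-style) learner with an occasional call to a planner in a toy perpetual sparse-reward task. Everything here, including the corridor environment, the plan_action stub, and the hyperparameters, is an illustrative assumption rather than the dissertation's actual HTN planner or domain.

import random
from collections import defaultdict

class PerpetualRewardLine:
    """Toy 1-D corridor: a sparse reward reappears a fixed number of steps
    after it is collected, so there is no terminal goal state."""
    actions = (-1, +1)
    horizon = 200

    def __init__(self, size=10, reward_cell=9, respawn_after=20):
        self.size, self.reward_cell, self.respawn_after = size, reward_cell, respawn_after

    def reset(self):
        self.cooldown = 0
        return 0

    def step(self, state, action):
        next_state = min(max(state + action, 0), self.size - 1)
        self.cooldown = max(self.cooldown - 1, 0)
        reward = 0.0
        if next_state == self.reward_cell and self.cooldown == 0:
            reward, self.cooldown = 1.0, self.respawn_after
        return next_state, reward

def plan_action(state):
    # Stand-in for querying an HTN planner; a real planner would return an
    # action derived from task decomposition and rich domain knowledge.
    return +1 if state < 9 else -1

def sarsa_with_planner(env, episodes=300, alpha=0.1, gamma=0.99,
                       epsilon=0.1, plan_prob=0.05):
    Q = defaultdict(float)  # state-action value estimates

    def policy(state):
        if random.random() < plan_prob:        # occasional planner suggestion
            return plan_action(state)
        if random.random() < epsilon:          # epsilon-greedy exploration
            return random.choice(env.actions)
        return max(env.actions, key=lambda a: Q[(state, a)])

    for _ in range(episodes):
        state = env.reset()
        action = policy(state)
        for _ in range(env.horizon):           # fixed interval, no exit condition
            next_state, reward = env.step(state, action)
            next_action = policy(next_state)
            # On-policy TD (SARSA) update
            Q[(state, action)] += alpha * (
                reward + gamma * Q[(next_state, next_action)] - Q[(state, action)])
            state, action = next_state, next_action
    return Q

Q = sarsa_with_planner(PerpetualRewardLine())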
Vehicles traverse granular media through complex reactions with large numbers of small particles. Many approaches rely on empirical trends derived from wheeled vehicles in well-characterized media. However, the environments of numerous bodies such as Mars or the Moon are primarily composed of fines, called regolith, which require different design considerations. This dissertation discusses research aimed at understanding the role and function of empirical, computational, and theoretical granular physics approaches as they apply to helical geometries, their envelope of applicability, and the development of new laws. First, a static Archimedes screw submerged in granular material (glass beads) is analyzed using two methods: Granular Resistive Force Theory (RFT), an empirically derived set of equations based on fluid-dynamic superposition principles, and Discrete Element Method (DEM) simulations, a particle-based modeling approach. Dynamic experiments further confirm the computational method with multi-body dynamics (MBD)-DEM co-simulations. Granular Scaling Laws (GSL), a set of physics relationships based on non-dimensional analysis, are utilized for gravity-modified environments. A testing chamber to contain a lunar analogue, BP-1, is developed and built. An investigation of straight and helical grousered wheels in both silica sand and BP-1 is performed to examine general GSL applicability for lunar purposes. Mechanical power draw and velocity prediction by GSL show non-trivial but predictable deviation. BP-1 properties are characterized and applied to an MBD-DEM environment for the first time. MBD-DEM simulation results between Earth gravity and lunar gravity show good agreement with theoretical predictions for both power and velocity. The experimental deviation is further investigated and found to have a mass-dependent component driven by granular sinkage and engagement. Finally, a robust set of helical granular scaling laws (HGSL) is derived. The granular-dynamics scaling of three-dimensional screw-driven mobility reduces to a theory similar to the wheeled scaling laws, provided the screw is radially continuous. The new laws are validated in BP-1, with results showing very close agreement with predictions. A gravity-variant version of these laws is validated with MBD-DEM simulations. The results of the dissertation suggest that GSL, HGSL, and MBD-DEM give reasonable approximations for predicting rover mobility in lunar environments, given adequate granular engagement.
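To make the non-dimensional reasoning behind the scaling laws concrete, the sketch below shows only the dimensional-analysis backbone: expressing drive rate, velocity, and power in characteristic units built from vehicle mass M, length L, and gravity g (Froude-type scaling). The scale_factors helper and the numbers are illustrative assumptions; the GSL and HGSL derived in the dissertation carry additional conditions, such as a radially continuous screw and adequate granular engagement.

import math

def scale_factors(M, L, g):
    """Characteristic scales from mass M (kg), length L (m), gravity g (m/s^2)."""
    return {
        "time":     math.sqrt(L / g),          # characteristic time
        "omega":    math.sqrt(g / L),          # drive-rate scale
        "velocity": math.sqrt(g * L),          # translational velocity scale
        "power":    M * g * math.sqrt(g * L),  # mechanical power scale
    }

# Illustrative example: the same vehicle (same M and L) on Earth vs. the Moon.
earth = scale_factors(M=10.0, L=0.15, g=9.81)
moon  = scale_factors(M=10.0, L=0.15, g=1.62)

print(f"lunar/terrestrial velocity scale: {moon['velocity'] / earth['velocity']:.2f}")  # ~0.41
print(f"lunar/terrestrial power scale:    {moon['power'] / earth['power']:.3f}")        # ~0.067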