Matching Items (3)
Description
The ability of aerial manipulators to stay aloft while interacting with dynamic environments is critical for successful in situ data acquisition in arboreal environments. One widely used platform attaches a six-degree-of-freedom manipulator to a quadcopter or octocopter and samples a tree leaf by holding the system in a hover while the arm pulls the leaf for a sample. Other systems consist of a simpler quadcopter with a fixed mechanical device that physically cuts the leaf while the system is manually piloted. Neither of these common methods accounts or compensates for the variation in the inherent dynamics of the arboreal-aerial manipulator interaction. This research proposes force and velocity feedback methods to control an aerial manipulation platform while still allowing waypoint navigation within the workspace. These methods require minimal knowledge of the system and its dynamic parameters. This thesis outlines the Robot Operating System (ROS) based Open Autonomous Air Vehicle (OpenUAV) simulations performed on the proposed three-degree-of-freedom redundant aerial manipulation platform.
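
To illustrate the kind of force and velocity feedback the abstract describes, the sketch below shows a minimal admittance-style loop that turns a sensed contact force into an end-effector velocity command. The gains, desired pull force, and `velocity_command` helper are illustrative assumptions, not the thesis's controller or its OpenUAV/ROS interfaces.

```python
# Hypothetical sketch of a force/velocity feedback loop for an aerial
# manipulator end effector; gains, the desired force, and the command
# shaping are assumptions for illustration only.
import numpy as np

K_F = 0.8                                # force-feedback gain (assumed)
K_V = 0.5                                # velocity-damping gain (assumed)
F_DESIRED = np.array([0.0, 0.0, 2.0])    # target pull force on the leaf, N (assumed)

def velocity_command(f_measured, v_measured):
    """Map force error and current velocity to an end-effector velocity command."""
    force_error = F_DESIRED - np.asarray(f_measured)
    # Admittance-style law: move along the force error, damp existing motion.
    return K_F * force_error - K_V * np.asarray(v_measured)

# Example: 1.2 N of sensed pull along z while drifting slightly in x.
cmd = velocity_command([0.0, 0.0, 1.2], [0.05, 0.0, 0.0])
print(cmd)  # velocity setpoint handed to the manipulator/vehicle controller
```

In a ROS-based pipeline, a setpoint like this would typically be published to the manipulator or vehicle controller at a fixed rate.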
Contributors: Cohen, Daniel (Author) / Das, Jnaneshwar (Thesis advisor) / Marvi, Hamidreza (Committee member) / Saldaña, David (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
This work improves the quality of the solution to the sparse rewards problem by combining reinforcement learning (RL) with knowledge-rich planning. Classical methods for coping with sparse rewards during reinforcement learning modify the reward landscape so as to better guide the learner. In contrast, this work combines RL with a planner in order to utilize other information about the environment. Because the scope for representing environmental information is limited in RL, this work couples a model-free learning algorithm, temporal difference (TD) learning, with a Hierarchical Task Network (HTN) planner to accommodate rich environmental information in the algorithm. In the perpetual sparse rewards problem, rewards reemerge after being collected within a fixed interval of time, so there is no well-defined goal state to serve as an exit condition. Incorporating planning in the learning algorithm not only improves the quality of the solution, but also avoids the ambiguity of encoding a profit-maximization goal when using a planning algorithm alone to solve this problem. By occasionally consulting the HTN planner, the algorithm nudges the policy toward the optimal solution. This work demonstrates an on-policy algorithm that improves the quality of the solution over vanilla reinforcement learning. The objective has been to observe the capacity of the synthesized algorithm to find optimal policies that maximize rewards while maintaining awareness of the environment and of other agents in the vicinity.
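
As a rough illustration of pairing on-policy TD learning with an occasional planner query, the sketch below keeps a tabular value estimate and hands action selection to a planner callback every few steps. The hyperparameters, the `PLAN_EVERY` schedule, and the planner callback are assumptions for illustration; the thesis's HTN integration is not reproduced here.

```python
# Minimal sketch: on-policy TD learning that occasionally defers action
# selection to a planner. Hyperparameters and the planning schedule are
# illustrative assumptions, not the thesis's algorithm.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # assumed learning hyperparameters
PLAN_EVERY = 10                          # consult the planner every N steps (assumed)

Q = defaultdict(float)                   # Q[(state, action)] value table

def epsilon_greedy(state, actions):
    """Follow the current policy with epsilon-greedy exploration."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def td_step(state, action, reward, next_state, next_action):
    """On-policy TD (SARSA-style) update toward the observed transition."""
    target = reward + GAMMA * Q[(next_state, next_action)]
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])

def choose_action(step, state, actions, planner):
    """Occasionally let the knowledge-rich planner pick the action."""
    if step % PLAN_EVERY == 0:
        return planner(state, actions)
    return epsilon_greedy(state, actions)
```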
Contributors: Nandan, Swastik (Author) / Pavlic, Theodore (Thesis advisor) / Das, Jnaneshwar (Thesis advisor) / Berman, Spring (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Rock traits (grain size, shape, and orientation) are fundamental indicators of geologic processes, including geomorphology and active tectonics. Fault zone evolution, fault slip rates, and earthquake timing are informed by examinations of discontinuities in the displacements of the Earth's surface at fault scarps. Fault scarps indicate the structure of fault zones (e.g., fans, relay ramps, and double faults) as well as the surface-process response to the deformation, and can thus indicate the activity of the fault zone and its potential hazard. "Rocky" fault scarps are unusual because they share characteristics of bedrock and alluvial fault scarps. The Volcanic Tablelands in Bishop, CA offer a natural laboratory with an array of rocky fault scarps. A machine learning approach, a mask region-based convolutional neural network (Mask R-CNN), segments an orthophoto to identify individual particles along a specific rocky fault scarp. The resulting rock traits for thousands of particles along the scarp are used to develop conceptual models for rocky scarp geomorphology and evolution. In addition to rocky scarp classification, these tools may be useful in many sedimentary and volcanological applications for particle mapping and characterization.
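
As a hypothetical example of turning segmentation output into rock traits, the sketch below measures size and orientation for each particle in a binary mask using scikit-image region properties. The pixel scale, field names, and the assumption that masks come from a Mask R-CNN model are illustrative, not the thesis's pipeline.

```python
# Hypothetical post-processing sketch: given a binary particle mask (e.g.
# exported from an instance segmentation model), derive per-particle rock
# traits with scikit-image. The pixel-to-meter scale is an assumed value.
import numpy as np
from skimage.measure import label, regionprops

PIXEL_SIZE_M = 0.02  # orthophoto ground sample distance in meters (assumed)

def particle_traits(mask):
    """Return grain size, axis lengths, and orientation for each particle."""
    traits = []
    for region in regionprops(label(mask)):
        traits.append({
            "major_axis_m": region.major_axis_length * PIXEL_SIZE_M,
            "minor_axis_m": region.minor_axis_length * PIXEL_SIZE_M,
            "orientation_rad": region.orientation,  # angle of the major axis
            "area_m2": region.area * PIXEL_SIZE_M ** 2,
        })
    return traits

# Example with a toy mask containing one elongated particle.
toy = np.zeros((50, 50), dtype=np.uint8)
toy[20:25, 10:40] = 1
print(particle_traits(toy))
```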
Contributors: Scott, Tyler (Author) / Arrowsmith, Ramon (Thesis advisor) / Das, Jnaneshwar (Committee member) / DeVecchio, Duane (Committee member) / Arizona State University (Publisher)
Created: 2020