Matching Items (18)

Description

One of the main challenges in planetary robotics is to traverse the shortest path through a set of waypoints. The shortest distance between any two waypoints is a direct linear traversal. Oftentimes, there are physical restrictions that prevent a rover from traversing straight to a waypoint. Thus, knowledge of the terrain is needed prior to traversal. The Digital Terrain Model (DTM) provides information about the terrain along with waypoints for the rover to traverse. However, traversing a set of waypoints linearly is burdensome, as the rovers would constantly need to modify their orientation as they successively approach waypoints. Although there are various solutions to this problem, this research proposes smooth traversal of the rover using splines as a quick and easy way to traverse a set of waypoints. In addition, a rover was used to compare the smoothness of the linear traversal with that of the spline interpolations. The data collected illustrate that spline traversals had a lower rate of change in velocity over time, indicating that the rover performed more smoothly than with linear paths.
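As a rough illustration of the spline-versus-linear comparison described above, the sketch below fits a cubic spline through a handful of 2D waypoints and reports the largest heading change along the sampled path. The waypoint values, the SciPy cubic spline, and the chord-length parameterization are illustrative choices, not details taken from the thesis.

```python
# Hedged sketch: fit a smooth spline through a set of 2D waypoints and inspect
# how gradually the heading changes along it. A linear traversal changes
# heading abruptly only at the waypoints, forcing the rover to stop and turn.
import numpy as np
from scipy.interpolate import CubicSpline

waypoints = np.array([[0.0, 0.0], [2.0, 1.0], [4.0, 0.5], [6.0, 2.0], [8.0, 1.0]])

# Parameterize by cumulative chord length so the spline is well conditioned.
d = np.concatenate(([0.0], np.cumsum(np.linalg.norm(np.diff(waypoints, axis=0), axis=1))))
spline = CubicSpline(d, waypoints, axis=0)

s = np.linspace(0.0, d[-1], 200)
path = spline(s)                                  # densely sampled smooth path
deltas = np.diff(path, axis=0)
headings = np.unwrap(np.arctan2(deltas[:, 1], deltas[:, 0]))

print("max heading change per step (rad):", np.abs(np.diff(headings)).max())
```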
Contributors: Kamasamudram, Anurag (Author) / Saripalli, Srikanth (Thesis advisor) / Fainekos, Georgios (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

With robots being used extensively in various areas, a certain degree of robot autonomy has always been found desirable. In applications like planetary exploration, autonomous path planning and navigation are considered essential. But every now and then, a need to modify the robot's operation arises: a need for a human to provide it some supervisory parameters that modify the degree of autonomy or allocate extra tasks to the robot. In this regard, this thesis presents an approach to accept and incorporate such human inputs and modify the navigation functions of the robot accordingly. Concepts such as applying kinematic constraints while planning paths, traversing unknown areas with the intent of maximizing field of view, and performing complex tasks on command have been examined and implemented. The approaches have been tested in the Robot Operating System (ROS), using robots such as the iRobot Create and the PR2. Simulations and experimental demonstrations have shown that this approach is feasible for solving some of the existing problems and that it can certainly pave the way for further research on enhancing functionality.
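One concrete way to read "applying kinematic constraints while planning paths" is to let a human-supplied parameter bound how sharply the robot may turn between path segments. The sketch below is a minimal, hypothetical version of such a check; the function name, the turn limit, and the candidate path are illustrative and not taken from the thesis.

```python
# Hedged sketch: a supervisory parameter (maximum allowed turning angle between
# consecutive path segments) constrains which candidate paths a planner accepts.
import math

def path_satisfies_turn_limit(path, max_turn_rad):
    """Return True if no joint between consecutive segments exceeds the limit."""
    for (x0, y0), (x1, y1), (x2, y2) in zip(path, path[1:], path[2:]):
        a = math.atan2(y1 - y0, x1 - x0)
        b = math.atan2(y2 - y1, x2 - x1)
        turn = abs((b - a + math.pi) % (2 * math.pi) - math.pi)  # wrap to [0, pi]
        if turn > max_turn_rad:
            return False
    return True

# A human supervisor could tighten or relax the limit at run time (e.g. via a
# ROS parameter); the planner simply discards candidate paths that violate it.
candidate = [(0, 0), (1, 0), (2, 1), (2, 2)]
print(path_satisfies_turn_limit(candidate, max_turn_rad=math.radians(60)))
```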
Contributors: Vemprala, Sai Hemachandra (Author) / Saripalli, Srikanth (Thesis advisor) / Fainekos, Georgios (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

This project develops a new method to generate GPS waypoints for better terrain-mapping efficiency using a UAV. To create a map of a desired terrain, a UAV is used to capture images at particular GPS locations. These images are then stitched together to form a complete map of the terrain. To generate a good map from image stitching, the images should have a certain percentage of overlap between them. In high-wind conditions, a UAV may not capture an image at the desired GPS location, which in turn interferes with the desired percentage of overlap between images, both frontal and sideways, causing discrepancies when stitching the images together. The exact GPS locations at which the images were captured can be found in the flight logs stored in the ground control station and the autopilot board. The objective is to examine the flight logs and identify the waypoints at which the UAV may have strayed from the desired flight path. If there are locations where the flight strayed from the intended path, the code should generate a new set of waypoints for a correction flight. This will save time in stitching the images together, making the whole process faster and more efficient.
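A minimal sketch of the waypoint-correction idea, under the assumption that the flight log can be reduced to one logged capture position per planned waypoint; the tolerance value, coordinate frame, and data layout are illustrative rather than from the project.

```python
# Hedged sketch: compare logged capture positions against the planned waypoints
# and emit a correction list for every point that drifted too far to preserve
# the desired image overlap.
import numpy as np

def correction_waypoints(planned, logged, tolerance_m):
    """Return planned waypoints whose logged capture position drifted too far."""
    planned, logged = np.asarray(planned, float), np.asarray(logged, float)
    errors = np.linalg.norm(planned - logged, axis=1)  # per-waypoint drift in metres
    return planned[errors > tolerance_m]

planned = [[0, 0], [0, 30], [0, 60], [0, 90]]          # local ENU coordinates (m)
logged  = [[0.5, 1], [6.0, 28], [0.8, 61], [7.5, 92]]  # where the UAV actually shot
print(correction_waypoints(planned, logged, tolerance_m=5.0))
```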
Contributors: Ghadage, Prasannakumar Prakashrao (Author) / Saripalli, Srikanth (Thesis advisor) / Berman, Spring M (Thesis advisor) / Thangavelautham, Jekanthan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Advancements in computer vision and machine learning have added a new dimension to remote sensing applications with the aid of imagery analysis techniques. Applications such as autonomous navigation and terrain classification, which make use of image classification techniques, are challenging problems, and research is still being carried out to find better solutions. In this thesis, a novel method is proposed which uses image registration techniques to provide better image classification. This method reduces the classification error rate by registering the images with previously obtained images before performing classification. The motivation is that images obtained in the same region that need to be classified will not differ significantly in their characteristics. Hence, registration provides an image that matches the previously obtained image more closely, thus providing better classification. To illustrate that the proposed method works, the naïve Bayes and iterative closest point (ICP) algorithms are used for the image classification and registration stages, respectively. This implementation was tested extensively in simulation using synthetic images and using a real-life dataset, the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) dataset. The results show that the ICP algorithm does help naïve Bayes achieve better classification, reducing the error rate by an average of about 10% on the synthetic data and by about 7% on the actual datasets used.
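The sketch below mimics this pipeline on toy 2D feature points: a few nearest-neighbour ICP iterations register the new data to a previously labelled reference, and Gaussian naive Bayes then classifies the registered points. The data, the 2D point-set formulation, and the scikit-learn/SciPy calls are stand-ins for the thesis's actual image-based implementation.

```python
# Hedged sketch: align new observations to a labelled reference with a simple
# ICP, then classify the aligned points with naive Bayes.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.naive_bayes import GaussianNB

def icp_2d(src, dst, iters=20):
    """Rigidly align 2D point set `src` to `dst` (nearest-neighbour ICP)."""
    src = src.copy()
    tree = cKDTree(dst)
    for _ in range(iters):
        matched = dst[tree.query(src)[1]]                # closest reference points
        mu_s, mu_d = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_d))
        R = Vt.T @ U.T                                   # Kabsch rotation
        if np.linalg.det(R) < 0:                         # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_d
    return src

rng = np.random.default_rng(0)
reference = rng.normal(size=(200, 2))
theta = np.radians(10)
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
new_view = reference @ rot.T + 0.5                       # same scene, shifted/rotated

aligned = icp_2d(new_view, reference)

# Train naive Bayes on the already-labelled reference features, then classify
# the registered features instead of the raw ones.
labels = (reference[:, 0] > 0).astype(int)               # toy "terrain" labels
clf = GaussianNB().fit(reference, labels)
print("accuracy after registration:", clf.score(aligned, labels))
```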
Contributors: Muralidhar, Ashwini (Author) / Saripalli, Srikanth (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Multi-sensor fusion is a fundamental problem in robot perception. For a robot to operate in a real-world environment, multiple sensors are often needed. Thus, fusing data from various sensors accurately is vital for robot perception. In the first part of this thesis, the problem of fusing information from a LIDAR, a color camera, and a thermal camera to build RGB-Depth-Thermal (RGBDT) maps is investigated. An algorithm that solves a non-linear optimization problem to compute the relative pose between the cameras and the LIDAR is presented. The relative pose estimate is then used to find the color and thermal texture of each LIDAR point. Next, the various sources of error that can cause the mis-coloring of a LIDAR point after the cross-calibration are identified. Theoretical analyses of these errors reveal that the coloring errors due to noisy LIDAR points, errors in the estimation of the camera matrix, and errors in the estimation of the translation between the sensors disappear with distance, but errors in the estimation of the rotation between the sensors cause the coloring error to increase with distance.
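The coloring step described above amounts to projecting each LIDAR point into the camera with the estimated extrinsics and intrinsics and sampling the pixel it lands on; the sketch below shows that projection with placeholder calibration values rather than results from the thesis.

```python
# Hedged sketch: project a LIDAR point into the color image using camera
# intrinsics K and LIDAR-to-camera extrinsics (R, t), then look up its pixel.
import numpy as np

def color_of_point(p_lidar, K, R, t, image):
    """Return the image color at the projection of a 3D LIDAR point, or None."""
    p_cam = R @ p_lidar + t                    # transform into the camera frame
    if p_cam[2] <= 0:                          # point is behind the camera
        return None
    u, v, w = K @ p_cam
    col, row = int(round(u / w)), int(round(v / w))
    h, wd = image.shape[:2]
    return image[row, col] if 0 <= row < h and 0 <= col < wd else None

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # placeholder intrinsics
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])                  # placeholder extrinsics
image = np.zeros((480, 640, 3), dtype=np.uint8)
print(color_of_point(np.array([0.1, 0.0, 5.0]), K, R, t, image))
```

Under this model the abstract's observation is intuitive: a small extrinsic rotation error corresponds to an angular offset of the projection ray, so the resulting 3D coloring error grows with the range of the point.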

On a robot (vehicle) with multiple sensors, sensor fusion algorithms allow us to represent the data in the vehicle frame. But data acquired temporally in the vehicle frame needs to be registered in a global frame to obtain a map of the environment. Mapping techniques involving the Iterative Closest Point (ICP) algorithm and the Normal Distributions Transform (NDT) assume that a good initial estimate of the transformation between the 3D scans is available. This restricts the ability to stitch maps that were acquired at different times. Mapping can become flexible if maps that were acquired temporally can be merged later. To this end, the second part of this thesis focuses on developing an automated algorithm that fuses two maps by finding a congruent set of five points forming a pyramid.
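Once a congruent set of corresponding points has been found in the two maps, the aligning rigid transform follows in closed form; the sketch below uses the standard Kabsch/Horn solution on five made-up correspondences. Finding the congruent five-point "pyramid" itself, which is the contribution described above, is not shown here.

```python
# Hedged sketch: recover the rigid transform aligning two maps from five
# corresponding points (closed-form least squares, Kabsch/Horn).
import numpy as np

def rigid_transform(A, B):
    """Return R, t such that R @ A[i] + t approximately equals B[i]."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    cA, cB = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - cA).T @ (B - cB))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cB - R @ cA

A = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1.5]])
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])   # 90-degree yaw
B = A @ R_true.T + np.array([2.0, -1.0, 0.5])           # the same points in map B
R, t = rigid_transform(A, B)
print(np.allclose(A @ R.T + t, B))
```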

Mapping has various application domains beyond robot navigation. The third part of this thesis considers a unique application domain where the surface displacements caused by an earthquake are to be recovered using pre- and post-earthquake LIDAR data. A technique to recover the 3D surface displacements is developed, and the results are presented on real earthquake datasets: the El Mayor-Cucapah earthquake, Mexico, 2010, and the Fukushima earthquake, Japan, 2011.
Contributors: Krishnan, Aravindhan K (Author) / Saripalli, Srikanth (Thesis advisor) / Klesh, Andrew (Committee member) / Fainekos, Georgios (Committee member) / Thangavelautham, Jekan (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

The ability of aerial manipulators to stay aloft while interacting with dynamic environments is critical for successful in situ data acquisition in arboreal environments. One widely used platform utilizes a six-degree-of-freedom manipulator attached to a quadcopter or octocopter to sample a tree leaf, maintaining the system in a hover while the arm pulls the leaf for a sample. Other systems consist of a simpler quadcopter with a fixed mechanical device that physically cuts the leaf while the system is manually piloted. Neither of these common methods accounts for or compensates for the inherent dynamics arising from the arboreal-aerial manipulator interaction. This research proposes force and velocity feedback methods to control an aerial manipulation platform while still allowing waypoint navigation within the workspace. Using these methods requires minimal knowledge of the system and its dynamic parameters. This thesis outlines the Robot Operating System (ROS) based Open Autonomous Air Vehicle (OpenUAV) simulations performed on the proposed three-degree-of-freedom redundant aerial manipulation platform.
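As a hedged illustration of what a force/velocity feedback law for such a platform might look like, the sketch below blends a waypoint-tracking velocity with an admittance-style term driven by the sensed interaction force; the gains, saturation limit, and class interface are hypothetical and not taken from the thesis.

```python
# Hedged sketch: an admittance-style law turns the measured leaf-interaction
# force into a velocity correction, so large pulling forces make the platform
# yield instead of fighting the contact.
import numpy as np

class AdmittanceVelocityController:
    def __init__(self, gain=0.05, v_max=0.3):
        self.gain = gain            # m/s of compliance per newton of sensed force
        self.v_max = v_max          # saturate commands for safety

    def velocity_command(self, v_waypoint, force_meas):
        """Blend the waypoint-tracking velocity with a force-compliance term."""
        v_comply = -self.gain * np.asarray(force_meas)   # yield along the force
        v_cmd = np.asarray(v_waypoint) + v_comply
        norm = np.linalg.norm(v_cmd)
        return v_cmd if norm <= self.v_max else v_cmd * (self.v_max / norm)

ctrl = AdmittanceVelocityController()
print(ctrl.velocity_command(v_waypoint=[0.2, 0.0, 0.0], force_meas=[3.0, 0.0, -1.0]))
```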
Contributors: Cohen, Daniel (Author) / Das, Jnaneshwar (Thesis advisor) / Marvi, Hamidreza (Committee member) / Saldaña, David (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Technological progress in robot sensing, design, and fabrication, and the availability of open-source software frameworks such as the Robot Operating System (ROS), are advancing the applications of swarm robotics from toy problems to real-world tasks such as surveillance, precision agriculture, search-and-rescue, and infrastructure inspection. These applications will require the development of robot controllers and system architectures that scale well with the number of robots and that are robust to robot errors and failures. To achieve this, one approach is to design decentralized robot control policies that require only local sensing and local, ad-hoc communication. In particular, stochastic control policies can be designed that are agnostic to individual robot identities and do not require a priori information about the environment or sophisticated computation, sensing, navigation, or communication capabilities. This dissertation presents novel swarm control strategies with these properties for detecting and mapping static targets, which represent features of interest, in an unknown, bounded, obstacle-free environment. The robots move on a finite spatial grid according to the time-homogeneous transition probabilities of a Discrete-Time Discrete-State (DTDS) Markov chain model, and they exchange information with other robots within their communication range using a consensus (agreement) protocol. This dissertation extends theoretical guarantees on multi-robot consensus over fixed and time-varying communication networks with known connectivity properties to consensus over networks that have Markovian switching dynamics and no presumed connectivity. It develops such swarm consensus strategies for detecting a single feature in the environment, tracking multiple features, and reconstructing a discrete distribution of features modeled as an occupancy grid map. The proposed consensus approaches are validated in numerical simulations and in 3D physics-based simulations of quadrotors in Gazebo. The scalability of the proposed approaches is examined through extensive numerical simulation studies over different swarm populations and environment sizes.
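The sketch below combines the two ingredients named above on a toy problem: robots perform a random walk on a finite grid (a time-homogeneous DTDS Markov chain) and repeatedly average a scalar detection estimate with whichever robots happen to be within communication range. The grid size, swarm size, observation rule, and averaging weights are illustrative; the dissertation's actual consensus protocol and its guarantees are not reproduced here.

```python
# Hedged sketch: random-walk motion on a grid plus local averaging consensus
# among robots currently within communication range.
import numpy as np

rng = np.random.default_rng(1)
GRID, N, COMM_RANGE, FEATURE = 10, 8, 2.0, np.array([7, 7])

pos = rng.integers(0, GRID, size=(N, 2))          # robot cells on the grid
estimate = np.zeros(N)                            # each robot's detection estimate

for step in range(200):
    # Markov-chain motion: move to a random 4-neighbour cell (stay if blocked).
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    nxt = pos + moves[rng.integers(0, 4, size=N)]
    ok = np.all((nxt >= 0) & (nxt < GRID), axis=1)
    pos[ok] = nxt[ok]

    # Local observation: a robot standing on the feature sets its estimate to 1.
    estimate[np.all(pos == FEATURE, axis=1)] = 1.0

    # Consensus: average with neighbours currently inside communication range.
    dists = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    adjacency = (dists <= COMM_RANGE).astype(float)   # includes self (distance 0)
    weights = adjacency / adjacency.sum(axis=1, keepdims=True)
    estimate = weights @ estimate

print("swarm estimates after 200 steps:", np.round(estimate, 3))
```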
Contributors: Shirsat, Aniket (Author) / Berman, Spring (Thesis advisor) / Lee, Hyunglae (Committee member) / Marvi, Hamid (Committee member) / Saripalli, Srikanth (Committee member) / Gharavi, Lance (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Autonomous robots have tremendous potential to assist humans in environmental monitoring tasks. In order to generate meaningful data for humans to analyze, the robots need to collect accurate data and develop a reliable representation of the environment. This is achieved by employing scalable and robust navigation and mapping algorithms that facilitate acquiring and understanding data collected from the array of on-board sensors. To this end, this thesis presents navigation and mapping algorithms for autonomous robots that enable robot navigation in complex environments and develop a real-time semantic map of the environment, respectively. The first part of the thesis presents a novel navigation algorithm for an autonomous underwater vehicle that can maintain a fixed distance from the coral terrain while following a human diver. Following a human diver ensures that the robot visits all important sites in the coral reef, while maintaining a constant distance from the terrain reduces heteroscedasticity in the measurements. This algorithm was tested on three different synthetic terrains, including a model of a real coral reef in Hawaii. The second part of the thesis presents a dense semantic surfel mapping technique, built on top of a popular surfel mapping algorithm, that can generate meaningful maps in real time. A semantic mask from a depth-aligned RGB-D camera was used to assign labels to the surfels, which were then probabilistically updated with multiple measurements. The mapping algorithm was tested with simulated data from an RGB-D camera and the results were analyzed.
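The probabilistic label update mentioned above can be pictured as each surfel carrying a discrete distribution over classes that is multiplied by an observation likelihood and renormalized whenever a new semantic mask covers it; the sketch below uses made-up class names and a simple symmetric confusion model rather than the thesis's parameters.

```python
# Hedged sketch: Bayesian fusion of repeated semantic-mask labels for one surfel.
import numpy as np

CLASSES = ["coral", "sand", "water"]
HIT = 0.8                                    # assumed probability the mask label is correct
MISS = (1.0 - HIT) / (len(CLASSES) - 1)      # remaining mass spread over other classes

def update_surfel(label_probs, observed_class):
    """Multiply the prior by the observation likelihood and renormalize."""
    likelihood = np.full(len(CLASSES), MISS)
    likelihood[CLASSES.index(observed_class)] = HIT
    posterior = label_probs * likelihood
    return posterior / posterior.sum()

surfel = np.full(len(CLASSES), 1.0 / len(CLASSES))   # uninformative prior
for obs in ["coral", "coral", "sand", "coral"]:      # labels from successive masks
    surfel = update_surfel(surfel, obs)
print(dict(zip(CLASSES, np.round(surfel, 3))))
```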
Contributors: Antervedi, Lakshmi Gana Prasad (Author) / Das, Jnaneshwar (Thesis advisor) / Martin, Roberta E (Committee member) / Marvi, Hamid (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Despite the rapid adoption of robotics and machine learning in industry, their application to scientific studies remains under-explored. Combining industry-driven advances with scientific exploration provides new perspectives and a greater understanding of the planet and its environmental processes. Focusing on rock detection, mapping, and dynamics analysis, I present technical approaches and scientific results from developing robotics and machine learning technologies for geomorphology and seismic hazard analysis. I demonstrate an interdisciplinary research direction that pushes the frontiers of both robotics and the geosciences, with potential translational contributions to commercial applications for hazard monitoring and prospecting. To understand the effects of rocky fault scarp development on rock trait distributions, I present a data-processing pipeline that utilizes unpiloted aerial vehicles (UAVs) and deep learning to segment densely distributed rocks across several orders of magnitude. Quantification and correlation analysis of rock trait distributions demonstrate a statistical approach for geomorphology studies. Fragile geological features such as precariously balanced rocks (PBRs) provide upper-bound ground-motion constraints for hazard analysis. I develop an offboard method and an onboard method, complementary to each other, for PBR searching and mapping. Using deep learning, the offboard method segments PBRs in point clouds reconstructed from UAV surveys. The onboard method equips a UAV with edge-computing devices and stereo cameras, enabling onboard machine learning for real-time PBR search, detection, and mapping during surveillance. The offboard method provides an efficient solution for finding PBR candidates in existing point clouds, which is useful for field reconnaissance. The onboard method emphasizes mapping individual PBRs with their complete visible surface features, such as basal contacts with pedestals, critical geometry for analyzing fragility. After PBRs are mapped, I investigate PBR dynamics by building a virtual shake robot (VSR) that simulates ground motions to test PBR overturning. The VSR demonstrates that ground-motion directions and niches are important factors determining PBR fragility, factors that were rarely considered in previous studies. The VSR also enables PBR large-displacement studies by tracking the trajectory of a toppled PBR, presenting novel methods of rockfall hazard zoning. I also build a real mini shake robot, providing a physical means to validate the simulation experiments in the VSR.
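For intuition about PBR fragility, the classic quasi-static rigid-block criterion says rocking begins once horizontal ground acceleration exceeds g * tan(alpha), where alpha is the angle between the vertical and the line from the rocking point to the center of mass; the sketch below evaluates that threshold for a few slenderness values. This is far simpler than the dynamic, direction-dependent simulations run by the virtual shake robot and is offered only as a baseline reference.

```python
# Hedged sketch: quasi-static rocking-initiation threshold for a rigid block.
import math

def rocking_threshold(alpha_deg, g=9.81):
    """Peak horizontal ground acceleration (m/s^2) needed to initiate rocking."""
    return g * math.tan(math.radians(alpha_deg))

for alpha in (10, 20, 35):   # smaller alpha means a more slender, more fragile rock
    print(f"alpha={alpha:2d} deg -> PGA threshold = {rocking_threshold(alpha):.2f} m/s^2")
```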
Contributors: Chen, Zhiang (Author) / Arrowsmith, Ramon (Thesis advisor) / Das, Jnaneshwar (Thesis advisor) / Bell, James (Committee member) / Berman, Spring (Committee member) / Christensen, Philip (Committee member) / Whipple, Kelin (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

This work has improved the quality of the solution to the sparse rewards problem by combining reinforcement learning (RL) with knowledge-rich planning. Classical methods for coping with sparse rewards during reinforcement learning modify the reward landscape so as to better guide the learner. In contrast, this work combines RL with a planner in order to utilize other information about the environment. As the scope for representing environmental information is limited in RL, this work couples a model-free learning algorithm, temporal difference (TD) learning, with a Hierarchical Task Network (HTN) planner to accommodate rich environmental information in the algorithm. In the perpetual sparse rewards problem, rewards reemerge after being collected within a fixed interval of time, culminating in the lack of a well-defined goal state to serve as an exit condition for the problem. Incorporating planning in the learning algorithm not only improves the quality of the solution, but also avoids the ambiguity of encoding a goal of maximizing profit when using only a planning algorithm to solve this problem. By occasionally invoking the HTN planner, the algorithm provides the adjustment needed to move toward the optimal solution. In this work, I have demonstrated an on-policy algorithm that improves the quality of the solution over vanilla reinforcement learning. The objective of this work has been to observe the capacity of the synthesized algorithm to find optimal policies that maximize rewards, awareness of the environment, and awareness of the presence of other agents in the vicinity.
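A minimal sketch of the TD-plus-planner combination on a toy chain environment: the behavior policy occasionally defers to a planner's suggested action and otherwise acts epsilon-greedily, while an on-policy SARSA update learns from the transitions actually taken. The environment, the stand-in planner, and all hyperparameters are illustrative; the thesis's HTN planner and perpetual-reward setting are not reproduced here.

```python
# Hedged sketch: on-policy TD (SARSA) learning on a 1-D chain with a sparse
# terminal reward, where the behavior policy sometimes defers to a planner.
import random

N_STATES, GOAL = 12, 11
ALPHA, GAMMA, EPS, PLAN_PROB = 0.1, 0.95, 0.1, 0.2
ACTIONS = (-1, +1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def planner_action(state):
    """Stand-in planner: always suggests moving toward the rewarding state."""
    return +1 if state < GOAL else -1

def behavior_action(state):
    """Planner-guided with probability PLAN_PROB, otherwise epsilon-greedy."""
    if random.random() < PLAN_PROB:
        return planner_action(state)
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(500):
    s, a = 0, behavior_action(0)
    while s != GOAL:
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0            # sparse reward at the goal
        a_next = behavior_action(s_next)
        # On-policy TD update along the trajectory actually followed.
        Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s_next, a_next)] - Q[(s, a)])
        s, a = s_next, a_next

print("value of stepping right from the start state:", round(Q[(0, +1)], 3))
```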
Contributors: Nandan, Swastik (Author) / Pavlic, Theodore (Thesis advisor) / Das, Jnaneshwar (Thesis advisor) / Berman, Spring (Committee member) / Arizona State University (Publisher)
Created: 2022