Matching Items (15)
Filtering by
- All Subjects: robotics
- Genre: Masters Thesis
- Creators: Das, Jnaneshwar
- Creators: Turaga, Pavan
Description
One of the main challenges in planetary robotics is to traverse the shortest path through a set of waypoints. The shortest distance between any two waypoints is a direct linear traversal. Oftentimes, however, physical restrictions prevent a rover from traversing straight to a waypoint, so knowledge of the terrain is needed prior to traversal. The Digital Terrain Model (DTM) provides information about the terrain along with waypoints for the rover to traverse. However, traversing a set of waypoints linearly is burdensome, as the rover would constantly need to modify its orientation as it successively approaches waypoints. Although there are various solutions to this problem, this research proposes smooth traversal using splines as a quick and easy way to traverse a set of waypoints. In addition, a rover was used to compare the smoothness of linear traversal with that of spline interpolation. The data collected illustrate that spline traversals had a lower rate of change in velocity over time, indicating that the rover performed more smoothly than with linear paths.
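The abstract does not reproduce the spline formulation; as an illustrative sketch (the function name and the choice of a Catmull-Rom spline are assumptions, not necessarily the thesis's implementation), a smooth path through a set of waypoints can be generated like this:

```python
import numpy as np

def catmull_rom_path(waypoints, samples_per_segment=20):
    """Interpolate a smooth path through 2D waypoints with a Catmull-Rom
    spline (one possible spline choice; the thesis does not specify which
    spline family was used). The curve passes through every waypoint."""
    pts = np.asarray(waypoints, dtype=float)
    # Pad endpoints so the curve spans the first and last waypoints.
    padded = np.vstack([pts[0], pts, pts[-1]])
    path = []
    for i in range(1, len(padded) - 2):
        p0, p1, p2, p3 = padded[i - 1], padded[i], padded[i + 1], padded[i + 2]
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            t2, t3 = t * t, t * t * t
            point = 0.5 * ((2 * p1) + (-p0 + p2) * t
                           + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                           + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)
            path.append(point)
    path.append(padded[-2])  # end exactly at the final waypoint
    return np.array(path)
```

Because the tangent varies continuously along such a curve, the rover's heading changes gradually at each waypoint instead of turning sharply, which is the smoothness effect the abstract describes.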
ContributorsKamasamudram, Anurag (Author) / Saripalli, Srikanth (Thesis advisor) / Fainekos, Georgios (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created2013
Description
With robots being used extensively in various areas, a certain degree of robot autonomy has always been found desirable. In applications like planetary exploration, autonomous path planning and navigation are considered essential. Every now and then, however, a need arises to modify the robot's operation: a human must provide supervisory parameters that modify the degree of autonomy or allocate extra tasks to the robot. In this regard, this thesis presents an approach to accept and incorporate such human inputs and modify the navigation functions of the robot accordingly. Concepts such as applying kinematic constraints while planning paths, traversing unknown areas with the intent of maximizing the field of view, and performing complex tasks on command have been examined and implemented. The approaches have been tested in the Robot Operating System (ROS), using robots such as the iRobot Create and the PR2. Simulations and experimental demonstrations have shown that this approach is feasible for solving some of the existing problems and that it can pave the way for further research into enhancing functionality.
ContributorsVemprala, Sai Hemachandra (Author) / Saripalli, Srikanth (Thesis advisor) / Fainekos, Georgios (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created2013
Description
Fisheye cameras are special cameras that have a much larger field of view than conventional cameras. The large field of view comes at the price of non-linear distortions introduced near the boundaries of the images captured by such cameras. Despite this drawback, they are used increasingly in computer vision, robotics, reconnaissance, astrophotography, surveillance, and automotive applications. The images captured by such cameras can be corrected for distortion if the cameras are calibrated and the distortion function is determined. Calibration also allows fisheye cameras to be used in tasks involving metric scene measurement, metric scene reconstruction, and other simultaneous localization and mapping (SLAM) algorithms.
This thesis presents a calibration toolbox (FisheyeCDC Toolbox) that implements a collection of some of the most widely used techniques for fisheye camera calibration in one package. This enables an inexperienced user to calibrate his/her own camera without a theoretical understanding of computer vision and camera calibration. This thesis also explores applications of calibration such as distortion correction and 3D reconstruction.
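As background for the distortion described above (a sketch of the ideal equidistant model, one common fisheye model; the toolbox itself implements several calibration techniques and may use different models):

```python
import math

def fisheye_radius(theta, f):
    """Equidistant fisheye projection: image radius grows linearly with the
    ray angle theta, so the field of view can exceed 180 degrees."""
    return f * theta

def pinhole_radius(theta, f):
    """Ideal pinhole projection: radius grows with tan(theta), diverging as
    theta approaches 90 degrees -- hence the narrower usable field of view."""
    return f * math.tan(theta)

def undistort_to_pinhole(r_fisheye, f):
    """Map a fisheye image radius back to the pinhole radius it would have
    had, i.e. the distortion-correction step once f is known from calibration."""
    theta = r_fisheye / f  # invert the equidistant model
    return f * math.tan(theta)
```

The two projections agree near the image center and diverge increasingly toward the boundary, which is exactly where the non-linear distortion appears.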
ContributorsKashyap Takmul Purushothama Raju, Vinay (Author) / Karam, Lina (Thesis advisor) / Turaga, Pavan (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created2014
Description
Advancements in computer vision and machine learning have added a new dimension to remote sensing applications with the aid of imagery analysis techniques. Applications such as autonomous navigation and terrain classification, which make use of image classification techniques, are challenging problems, and research is still being carried out to find better solutions. In this thesis, a novel method is proposed which uses image registration techniques to provide better image classification. This method reduces the classification error rate by registering incoming images against previously obtained images before performing classification. The motivation is that images obtained in the same region, which need to be classified, will not differ significantly in characteristics; registration therefore yields an image that matches the previously obtained image more closely, enabling better classification. To illustrate that the proposed method works, the naïve Bayes and iterative closest point (ICP) algorithms are used for the image classification and registration stages respectively. This implementation was tested extensively in simulation using synthetic images and on a real-world dataset, the Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) dataset. The results show that ICP registration improves naïve Bayes classification, reducing the error rate by an average of about 10% on the synthetic data and about 7% on the actual datasets used.
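The registration stage can be sketched as follows (translation-only ICP in pure NumPy; this is an illustrative simplification, since the thesis's actual implementation would also estimate rotation and handle noise):

```python
import numpy as np

def icp_translation(source, target, iterations=20):
    """Estimate the translation aligning `source` to `target` by iterating
    two steps: match each moved source point to its nearest target point,
    then shift by the mean offset of the matches (translation-only ICP)."""
    src = np.asarray(source, dtype=float)
    tgt = np.asarray(target, dtype=float)
    shift = np.zeros(src.shape[1])
    for _ in range(iterations):
        moved = src + shift
        # Brute-force nearest neighbor in the target set for each moved point.
        d = np.linalg.norm(moved[:, None, :] - tgt[None, :, :], axis=2)
        matches = tgt[np.argmin(d, axis=1)]
        shift += (matches - moved).mean(axis=0)
    return shift
```

Once the new image is aligned to the previously classified one, per-pixel features line up across frames, which is what lets the naïve Bayes classifier do better.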
ContributorsMuralidhar, Ashwini (Author) / Saripalli, Srikanth (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created2011
Description
The ability of aerial manipulators to stay aloft while interacting with dynamic environments is critical for successful in situ data acquisition in arboreal environments. One widely used platform attaches a six-degree-of-freedom manipulator to a quadcopter or octocopter and samples a tree leaf by holding the system in a hover while the arm pulls the leaf free. Other systems comprise a simpler quadcopter with a fixed mechanical device that physically cuts the leaf while the system is manually piloted. Neither of these common methods accounts or compensates for the varying dynamics inherent in the arboreal-aerial manipulator interaction. This research proposes force and velocity feedback methods to control an aerial manipulation platform while still allowing waypoint navigation within the workspace. These methods require minimal knowledge of the system and its dynamic parameters. This thesis outlines the Robot Operating System (ROS) based Open Autonomous Air Vehicle (OpenUAV) simulations performed on the proposed three-degree-of-freedom redundant aerial manipulation platform.
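As an illustrative toy (not the thesis's controller; the gains and the spring-like contact model are assumptions), a force/velocity feedback loop of the kind described can be sketched in one dimension:

```python
def force_velocity_loop(f_desired, stiffness, gain=0.5, dt=0.01, steps=2000):
    """Toy 1-D force/velocity feedback: command end-effector velocity in
    proportion to the force error while a spring-like branch pushes back.
    All parameters are illustrative, not taken from the thesis."""
    x = 0.0  # end-effector displacement into the branch
    for _ in range(steps):
        f_measured = stiffness * x           # spring-like contact force
        v_command = gain * (f_desired - f_measured)
        x += v_command * dt                  # integrate the commanded velocity
    return stiffness * x                     # final contact force
```

The appeal of this structure is the point the abstract makes: the loop regulates the contact force without knowing the branch stiffness in advance, requiring only the force measurement itself.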
ContributorsCohen, Daniel (Author) / Das, Jnaneshwar (Thesis advisor) / Marvi, Hamidreza (Committee member) / Saldaña, David (Committee member) / Arizona State University (Publisher)
Created2022
Description
Simultaneous localization and mapping (SLAM) has traditionally relied on low-level geometric or optical features. However, these feature-based SLAM methods often struggle with featureless or repetitive scenes. Additionally, low-level features may not provide sufficient information for robot navigation and manipulation, leaving robots without a complete understanding of the 3D spatial world. Higher-level information is necessary to address these limitations. Fortunately, recent developments in learning-based 3D reconstruction allow robots not only to detect semantic meaning, but also to recognize the 3D structure of objects from a few images. By incorporating this 3D structural information, SLAM can be improved from a low-level approach to a structure-aware approach. This work proposes a novel approach to multi-view 3D reconstruction using a recurrent transformer. The approach allows robots to accumulate information from multiple views and encode it into a compact latent space. The resulting latent representations are then decoded to produce 3D structural landmarks, which can be used to improve robot localization and mapping.
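As a toy illustration of accumulating multiple views into one compact latent (a GRU-style gated update in NumPy standing in for the recurrent transformer; the weights, dimensions, and update rule here are arbitrary assumptions, not the thesis's architecture):

```python
import numpy as np

def accumulate_views(view_features, latent_dim=8, seed=0):
    """Fold a sequence of per-view feature vectors into one compact latent
    with a GRU-style gated update: each view partially overwrites the
    running latent, so information from all views accumulates."""
    rng = np.random.default_rng(seed)
    d = view_features[0].shape[0]
    Wz = rng.normal(scale=0.1, size=(latent_dim, d))          # update-gate weights
    Wh = rng.normal(scale=0.1, size=(latent_dim, d))          # candidate weights
    Uh = rng.normal(scale=0.1, size=(latent_dim, latent_dim)) # recurrent weights
    h = np.zeros(latent_dim)
    for x in view_features:
        z = 1.0 / (1.0 + np.exp(-(Wz @ x)))   # how much this view updates the latent
        h_cand = np.tanh(Wh @ x + Uh @ h)     # candidate latent from this view
        h = (1.0 - z) * h + z * h_cand        # gated accumulation
    return h
```

A decoder (not sketched here) would then map this latent back to 3D structural landmarks, as the abstract describes.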
ContributorsHuang, Chi-Yao (Author) / Yang, Yezhou (Thesis advisor) / Turaga, Pavan (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created2023
Description
Autonomous robots have tremendous potential to assist humans in environmental monitoring tasks. In order to generate meaningful data for humans to analyze, the robots need to collect accurate data and develop a reliable representation of the environment. This is achieved by employing scalable and robust navigation and mapping algorithms that facilitate acquiring and understanding data collected from the array of on-board sensors. To this end, this thesis presents navigation and mapping algorithms for autonomous robots that enable robot navigation in complex environments and develop a real-time semantic map of the environment, respectively. The first part of the thesis presents a novel navigation algorithm for an autonomous underwater vehicle that can maintain a fixed distance from the coral terrain while following a human diver. Following a human diver ensures that the robot visits all important sites in the coral reef, while maintaining a constant distance from the terrain reduces heteroscedasticity in the measurements. This algorithm was tested on three different synthetic terrains, including a real model of a coral reef in Hawaii. The second part of the thesis presents a dense semantic surfel mapping technique, built on top of a popular surfel mapping algorithm, that can generate meaningful maps in real time. A semantic mask from a depth-aligned RGB-D camera was used to assign labels to the surfels, which were then probabilistically updated with multiple measurements. The mapping algorithm was tested with simulated data from an RGB-D camera and the results were analyzed.
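The probabilistic label update can be sketched as per-surfel Bayesian fusion of semantic-mask observations (the hit_likelihood value and the uniform confusion model are assumptions, not the thesis's exact scheme):

```python
import numpy as np

def update_surfel_label(prob, observed_label, hit_likelihood=0.8):
    """Bayesian update of one surfel's categorical label distribution given a
    single semantic-mask observation. hit_likelihood is the assumed chance
    the mask label is correct; the rest is spread over the other classes."""
    prob = np.asarray(prob, dtype=float)
    n = len(prob)
    likelihood = np.full(n, (1.0 - hit_likelihood) / (n - 1))
    likelihood[observed_label] = hit_likelihood
    posterior = prob * likelihood       # Bayes rule: prior times likelihood
    return posterior / posterior.sum()  # renormalize
```

Starting from a uniform prior, a handful of consistent observations drives the distribution sharply toward the repeated label, while a single noisy mask frame cannot flip a well-established one.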
ContributorsAntervedi, Lakshmi Gana Prasad (Author) / Das, Jnaneshwar (Thesis advisor) / Martin, Roberta E (Committee member) / Marvi, Hamid (Committee member) / Arizona State University (Publisher)
Created2021
Description
This work has improved the quality of the solution to the sparse rewards problem by combining reinforcement learning (RL) with knowledge-rich planning. Classical methods for coping with sparse rewards during reinforcement learning modify the reward landscape so as to better guide the learner. In contrast, this work combines RL with a planner in order to utilize other information about the environment. As the scope for representing environmental information is limited in RL, this work combines a model-free learning algorithm, temporal difference (TD) learning, with a Hierarchical Task Network (HTN) planner to accommodate rich environmental information in the algorithm. In the perpetual sparse rewards problem, rewards reemerge after being collected within a fixed interval of time, so there is no well-defined goal state to serve as an exit condition. Incorporating planning in the learning algorithm not only improves the quality of the solution, but also avoids the ambiguity of encoding a goal of maximizing profit when using only a planning algorithm to solve this problem. By occasionally invoking the HTN planner, the algorithm is nudged toward the optimal solution. In this work, I have demonstrated an on-policy algorithm that improves the quality of the solution over vanilla reinforcement learning. The objective of this work has been to observe the capacity of the synthesized algorithm to find optimal policies that maximize rewards while remaining aware of the environment and of other agents in the vicinity.
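A minimal sketch of the idea (tabular SARSA, an on-policy TD method, on a toy chain MDP, with a stub planner standing in for the HTN planner; all parameters and the environment are illustrative, not the thesis's setup):

```python
import random

def sarsa_with_planner(n_states=6, episodes=400, alpha=0.3, gamma=0.9,
                       epsilon=0.1, plan_every=5, seed=1):
    """Tabular SARSA on a chain MDP with the reward at the right end. Every
    plan_every-th decision, a stub 'planner' picks the action instead of the
    learner, nudging exploration toward the goal despite sparse rewards."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    decisions = 0

    def pick(s):
        nonlocal decisions
        decisions += 1
        if decisions % plan_every == 0:
            return 1                               # planner: goal lies rightward
        if rng.random() < epsilon:
            return rng.randrange(2)                # explore
        return 0 if q[s][0] > q[s][1] else 1       # exploit (ties go right)

    goal = n_states - 1
    for _ in range(episodes):
        s = 0
        a = pick(s)
        while s != goal:
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == goal else 0.0
            if s2 == goal:
                target, a2 = r, 0                  # terminal: no bootstrap
            else:
                a2 = pick(s2)
                target = r + gamma * q[s2][a2]     # on-policy TD target
            q[s][a] += alpha * (target - q[s][a])
            s, a = s2, a2
    return q
```

The planner's occasional overrides play the role the abstract describes: they steer the learner toward reward without reshaping the reward landscape itself.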
ContributorsNandan, Swastik (Author) / Pavlic, Theodore (Thesis advisor) / Das, Jnaneshwar (Thesis advisor) / Berman, Spring (Committee member) / Arizona State University (Publisher)
Created2022
Description
Navigation and mapping in GPS-denied environments, such as coal mines or dilapidated buildings filled with smog or particulate matter, pose a significant challenge due to the limitations of conventional LiDAR or vision systems. There is therefore a need for a navigation algorithm and mapping strategy that do not use vision systems but are still able to explore and map the environment. The map can further be used by first responders and cave explorers to access the environments.
This thesis presents the design of a collision-resilient Unmanned Aerial Vehicle
(UAV), XPLORER that utilizes a novel navigation algorithm for exploration and
simultaneous mapping of the environment. The real-time navigation algorithm uses
the onboard Inertial Measurement Units (IMUs) and arm bending angles for contact
estimation and employs an Explore and Exploit strategy. Additionally, the quadrotor
design is discussed, highlighting its improved stability over the previous design.
The generated map of the environment can be utilized by autonomous vehicles to
navigate the environment. The navigation algorithm is validated in multiple real-time
experiments in different scenarios consisting of concave and convex corners and circular
objects. Furthermore, the developed mapping framework can serve as an auxiliary
input for map generation along with conventional LiDAR or vision-based mapping
algorithms.
Both the navigation and mapping algorithms are designed to be modular, making them compatible with conventional UAVs as well. This research contributes to the
development of navigation and mapping techniques for GPS-denied environments,
enabling safer and more efficient exploration of challenging territories.
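As a toy illustration of contact estimation from arm bending angles (the threshold, arm layout, and bend-weighted averaging are assumptions for illustration, not XPLORER's actual algorithm):

```python
import math

def estimate_contact(bend_angles_deg, arm_headings_deg, threshold_deg=5.0):
    """Infer contact from compliant-arm bending: any arm bent beyond the
    threshold is flagged as in contact, and the contact direction is the
    bend-weighted mean of the flagged arms' headings."""
    vx = vy = 0.0
    contacts = []
    for bend, heading in zip(bend_angles_deg, arm_headings_deg):
        if bend > threshold_deg:
            contacts.append(heading)
            h = math.radians(heading)
            vx += bend * math.cos(h)   # weight each heading by how far the arm bent
            vy += bend * math.sin(h)
    if not contacts:
        return None, []                # free flight: no contact detected
    return math.degrees(math.atan2(vy, vx)) % 360.0, contacts
```

An explore-and-exploit strategy of the kind described can then treat each detected contact direction as a local surface normal estimate to be followed or mapped.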
ContributorsPandian Saravanakumaran, Aravind Adhith (Author) / Zhang, Wenlong (Thesis advisor) / Das, Jnaneshwar (Committee member) / Berman, Spring (Committee member) / Arizona State University (Publisher)
Created2023
Description
Acrobatic maneuvers of quadrotors present unique challenges concerning trajectory generation, control, and execution. Specifically, the flip maneuver requires dynamically
feasible trajectories and precise control. Various factors, including rotor dynamics,
thrust allocation, and control strategies, influence the successful execution of flips.
This research introduces an approach for autonomously tracking optimal trajectories to execute flip maneuvers while ensuring system stability. Model Predictive Control (MPC) is used to design the controller, enabling the quadrotor to plan and execute optimal trajectories in real time while accounting for dynamic constraints and environmental factors.
The utilization of predictive models enables the quadrotor to anticipate and adapt to
changes during aggressive maneuvers.
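A minimal sketch of the receding-horizon idea behind MPC (an unconstrained linear MPC on a double integrator standing in for the quadrotor dynamics; a real flip controller handles nonlinear attitude dynamics and input constraints, so this is illustration only):

```python
import numpy as np

def mpc_step(A, B, Q, R, x, horizon=20):
    """One receding-horizon step: solve the finite-horizon LQR problem by
    backward Riccati recursion, then return only the first control input.
    Re-solving this at every step from the current state is the core of MPC."""
    P = Q.copy()
    K_first = None
    for _ in range(horizon):
        K_first = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K_first)
    return -K_first @ x

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # double-integrator dynamics
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])                # penalize position and velocity error
R = np.array([[0.1]])                   # penalize control effort

x = np.array([1.0, 0.0])                # start 1 m from the setpoint, at rest
for _ in range(200):
    u = mpc_step(A, B, Q, R, x)         # plan over the horizon, keep first input
    x = A @ x + B @ u                   # simulate the plant one step
```

Because the horizon is re-solved from the measured state at every step, the controller anticipates and adapts to disturbances, which is the property the paragraph above attributes to predictive models.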
Simulation-based evaluations were conducted in the ROS and Gazebo environments.
These evaluations provide valuable insights into the quadrotor’s behavior, response
time, and tracking accuracy. Additionally, real-time flight experiments utilizing state-
of-the-art flight controllers, such as the PixHawk 4, and companion computers, like
the Hardkernel Odroid, validate the effectiveness of the proposed control algorithms
in practical scenarios. The conducted experiments also demonstrate the successful
execution of the proposed approach.
The outcomes of this research contribute to the advancement of quadrotor technology, particularly in acrobatic maneuverability. This opens up possibilities for executing
maneuvers with precise timing, such as slingshot probe releases during flips. Moreover,
this research demonstrates the efficacy of MPC controllers in achieving autonomous
probe throws within no-fly zone environments while maintaining an accurate desired
range. Field applications of this research include probe deployment into volcanic plumes or challenging-to-access rocky fault scarps, and imaging of sites of interest along flight paths through rolling or pitching maneuvers of the quadrotor, using sensors such as cameras or spectrometers mounted on the quadrotor belly.
ContributorsJain, Saransh (Author) / Das, Jnaneshwar (Thesis advisor) / Zhang, Wenlong (Committee member) / Marvi, Hamid (Committee member) / Arizona State University (Publisher)
Created2023