This collection includes both ASU Theses and Dissertations, submitted by graduate students, and the Barrett, Honors College theses submitted by undergraduate students. 

Description
Multi-sensor fusion is a fundamental problem in robot perception. For a robot to operate in a real-world environment, multiple sensors are often needed, so fusing data from the various sensors accurately is vital for robot perception. In the first part of this thesis, the problem of fusing information from a LIDAR, a color camera, and a thermal camera to build RGB-Depth-Thermal (RGBDT) maps is investigated. An algorithm that solves a non-linear optimization problem to compute the relative pose between the cameras and the LIDAR is presented. The relative pose estimate is then used to find the color and thermal texture of each LIDAR point. Next, the various sources of error that can cause the mis-coloring of a LIDAR point after the cross-calibration are identified. Theoretical analyses of these errors reveal that the coloring errors due to noisy LIDAR points, errors in the estimation of the camera matrix, and errors in the estimation of the translation between the sensors diminish with distance, but errors in the estimation of the rotation between the sensors cause the coloring error to grow with distance.
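The texturing step described above amounts to projecting each LIDAR point into a camera image using the estimated extrinsics and sampling the pixel there. The following is a minimal sketch of that projection, not the thesis's algorithm; the function name and the pinhole-model assumptions (intrinsic matrix `K`, rotation `R` and translation `t` from LIDAR to camera frame, no lens distortion) are illustrative.

```python
import numpy as np

def color_lidar_point(p_lidar, R, t, K, image):
    """Project a 3D LIDAR point into a camera image and sample its color.

    p_lidar: (3,) point in the LIDAR frame
    R, t:    3x3 rotation and (3,) translation from LIDAR to camera frame
    K:       3x3 camera intrinsic matrix
    image:   HxWx3 color image
    Returns the color at the projected pixel, or None if out of view.
    """
    p_cam = R @ p_lidar + t              # transform into the camera frame
    if p_cam[2] <= 0:                    # point is behind the camera
        return None
    uv = K @ p_cam                       # homogeneous pixel coordinates
    u = int(round(uv[0] / uv[2]))        # perspective division
    v = int(round(uv[1] / uv[2]))
    h, w = image.shape[:2]
    if 0 <= u < w and 0 <= v < h:
        return image[v, u]
    return None
```

This form also makes the error analysis intuitive: a translation error in `t` is a fixed offset that shrinks relative to `p_cam[2]` after the perspective division, while a rotation error in `R` displaces the point by an amount proportional to its range.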

On a robot (vehicle) with multiple sensors, sensor fusion algorithms allow the data to be represented in the vehicle frame. But data acquired over time in the vehicle frame must be registered in a global frame to obtain a map of the environment. Mapping techniques based on the Iterative Closest Point (ICP) algorithm and the Normal Distributions Transform (NDT) assume that a good initial estimate of the transformation between the 3D scans is available, which restricts the ability to stitch together maps acquired at different times. Mapping becomes more flexible if maps acquired at different times can be merged later. To this end, the second part of this thesis develops an automated algorithm that fuses two maps by finding a congruent set of five points forming a pyramid.
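To see why ICP needs a good initial estimate, consider what one iteration does: it pairs each source point with its nearest target point, then solves for the rigid transform that best aligns the pairs. If the scans start far apart, the nearest-neighbor pairing is wrong and the method converges to a bad local minimum. A minimal sketch of one such iteration (brute-force matching plus a Kabsch/SVD pose solve) follows; this is a generic textbook ICP step, not the thesis's five-point algorithm.

```python
import numpy as np

def icp_step(source, target):
    """One ICP iteration: nearest-neighbor matching, then the rigid
    transform (R, t) that best aligns the matched pairs in a
    least-squares sense (Kabsch algorithm).

    source: (N, 3) array of points to align
    target: (M, 3) array of reference points
    Returns (R, t) such that R @ p + t moves source toward target.
    """
    # Nearest-neighbor correspondences (brute force, for clarity only)
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[np.argmin(d, axis=1)]

    # Kabsch: center both sets, SVD of the cross-covariance matrix
    src_c, dst_c = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_c).T @ (matched - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction so R is a proper rotation (det = +1)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = dst_c - R @ src_c
    return R, t
```

The correspondence step is the weak link: it is only reliable when the initial misalignment is small relative to the point spacing, which is exactly the limitation that a correspondence-free congruent-set search avoids.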

Mapping has various application domains beyond robot navigation. The third part of this thesis considers a unique application domain in which the surface displacements caused by an earthquake are recovered using pre- and post-earthquake LIDAR data. A technique to recover the 3D surface displacements is developed, and results are presented on real earthquake datasets: the El Mayor-Cucapah earthquake, Mexico, 2010, and the Fukushima earthquake, Japan, 2011.
Contributors: Krishnan, Aravindhan K (Author) / Saripalli, Srikanth (Thesis advisor) / Klesh, Andrew (Committee member) / Fainekos, Georgios (Committee member) / Thangavelautham, Jekan (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Antibiotic resistance is a very important issue that threatens mankind. As bacteria are becoming resistant to multiple antibiotics, many common antibiotics will soon become ineffective. The inefficiency of current diagnostic methods is an important cause of antibiotic resistance: because they are relatively slow, treatment plans are often based on physicians' experience rather than on test results, and so have a high chance of being inaccurate or suboptimal. This creates a need for faster, point-of-care (POC) methods that can provide results in a few hours. Motivated by recent advances in computer vision methods, three projects have been developed for bacteria identification and antibiotic susceptibility tests (AST), with the goal of speeding up the diagnostic process. The first two projects focus on obtaining features from optical microscopy, such as bacteria shape and motion patterns, to distinguish active from inactive cells. The results show their potential as novel methods for AST, able to deliver results within a window of 30 minutes to 3 hours, a much faster time frame than the gold-standard approach based on cell culture, which takes at least half a day to complete. The last project focuses on the identification task, combining large volume light scattering microscopy (LVM) and deep learning to distinguish bacteria from urine particles. The developed setup is suitable for point-of-care applications, as a large volume can be viewed at a time, avoiding the need for cell culturing or enrichment. This is a significant gain compared to cell culturing methods. The accuracy of the deep learning system is higher than chance and outperforms a traditional machine learning system by up to 20%.
Contributors: Iriya, Rafael (Author) / Turaga, Pavan (Thesis advisor) / Wang, Shaopeng (Committee member) / Grys, Thomas (Committee member) / Zhang, Yanchao (Committee member) / Arizona State University (Publisher)
Created: 2020