This collection includes both ASU Theses and Dissertations, submitted by graduate students, and the Barrett, Honors College theses submitted by undergraduate students. 

Description
Reliable extraction of human pose features that are invariant to view angle and body shape changes is critical for advancing human movement analysis. In this dissertation, multifactor analysis techniques, including multilinear analysis and multifactor Gaussian process methods, have been exploited to extract such invariant pose features from video data by decomposing the key contributing factors, such as pose, view angle, and body shape, in the generation of the image observations. Experimental results have shown that the pose features extracted using the proposed methods exhibit excellent invariance to changes in view angle and body shape. Furthermore, using the proposed invariant multifactor pose features, a suite of simple yet effective algorithms has been developed to solve the movement recognition and pose estimation problems. Using these algorithms, excellent human movement analysis results have been obtained, most of them superior to those obtained from state-of-the-art algorithms on the same testing datasets. Moreover, a number of key movement analysis challenges, including robust online gesture spotting and multi-camera gesture recognition, have also been addressed in this research. To this end, an online gesture spotting framework has been developed to automatically detect and learn non-gesture movement patterns, improving gesture localization and recognition from continuous data streams using a hidden Markov network. In addition, the optimal data fusion scheme has been investigated for multi-camera gesture recognition, and decision-level camera fusion using the product rule has been found to be optimal for gesture recognition with multiple uncalibrated cameras. Furthermore, the challenge of optimal camera selection in multi-camera gesture recognition has also been tackled, and a measure to quantify the complementary strength across cameras has been proposed. Experimental results on a real-life gesture recognition dataset have shown that the optimal camera combinations identified by the proposed complementary measure consistently lead to the best gesture recognition results.
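As a concrete illustration of the decision-level camera fusion mentioned above, the sketch below applies the product rule to per-camera class posteriors; the camera count, gesture classes, and probability values are hypothetical, not taken from the dissertation's experiments.

```python
import numpy as np

def product_rule_fusion(camera_posteriors):
    """Fuse per-camera gesture-class posteriors with the product rule.

    camera_posteriors: array of shape (n_cameras, n_classes); each row is one
    camera's posterior distribution over gesture classes.
    Returns the index of the gesture class with the highest fused score.
    """
    # Sum of logs equals the log of the product; working in log space avoids
    # numerical underflow when many cameras are combined.
    log_scores = np.sum(np.log(camera_posteriors + 1e-12), axis=0)
    return int(np.argmax(log_scores))

# Hypothetical posteriors from three uncalibrated cameras over four gestures.
posteriors = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.40, 0.35, 0.15, 0.10],
    [0.55, 0.20, 0.15, 0.10],
])
print(product_rule_fusion(posteriors))  # fused decision: gesture 0
```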
Contributors: Peng, Bo (Author) / Qian, Gang (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Many products undergo several stages of testing ranging from tests on individual components to end-item tests. Additionally, these products may be further "tested" via customer or field use. The later failure of a delivered product may in some cases be due to circumstances that have no correlation with the product's inherent quality. However, at times, there may be cues in the upstream test data that, if detected, could serve to predict the likelihood of downstream failure or performance degradation induced by product use or environmental stresses. This study explores the use of downstream factory test data or product field reliability data to infer data mining or pattern recognition criteria onto manufacturing process or upstream test data by means of support vector machines (SVM) in order to provide reliability prediction models. In concert with a risk/benefit analysis, these models can be utilized to drive improvement of the product or, at least, via screening to improve the reliability of the product delivered to the customer. Such models can be used to aid in reliability risk assessment based on detectable correlations between the product test performance and the sources of supply, test stands, or other factors related to product manufacture. As an enhancement to the usefulness of the SVM or hyperplane classifier within this context, L-moments and the Western Electric Company (WECO) Rules are used to augment or replace the native process or test data used as inputs to the classifier. As part of this research, a generalizable binary classification methodology was developed that can be used to design and implement predictors of end-item field failure or downstream product performance based on upstream test data that may be composed of single-parameter, time-series, or multivariate real-valued data. Additionally, the methodology provides input parameter weighting factors that have proved useful in failure analysis and root cause investigations as indicators of which of several upstream product parameters have the greater influence on the downstream failure outcomes.
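A minimal sketch of the kind of pipeline described above, pairing sample L-moments computed from upstream test traces with an SVM classifier (scikit-learn assumed); the traces, labels, and feature choice are invented for illustration and do not reproduce the study's data or its WECO-rule augmentation.

```python
import numpy as np
from sklearn.svm import SVC

def sample_l_moments(x):
    """First four sample L-moments of a 1-D sample via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) / ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    # Location, scale, L-skewness, L-kurtosis used as classifier features.
    return np.array([l1, l2, l3 / l2, l4 / l2])

# Hypothetical upstream test traces (rows) labeled by later field outcome.
rng = np.random.default_rng(0)
good = rng.normal(1.0, 0.05, size=(40, 200))   # stable traces, no field failure
bad = rng.normal(1.0, 0.20, size=(40, 200))    # noisier traces that later fail
X = np.array([sample_l_moments(t) for t in np.vstack([good, bad])])
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([sample_l_moments(rng.normal(1.0, 0.18, size=200))]))  # likely [1]
```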
Contributors: Mosley, James (Author) / Morrell, Darryl (Committee member) / Cochran, Douglas (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Roberts, Chell (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The North American Monsoon System (NAMS) contributes ~55% of the annual rainfall in the Chihuahuan Desert during the summer months. Relatively frequent, intense storms during the NAMS increase soil moisture, reduce surface temperature, and lead to runoff in ephemeral channels. Quantifying these processes, however, is difficult due to the sparse nature of coordinated observations. In this study, I present results from a field network of rain gauges (n = 5), soil probes (n = 48), channel flumes (n = 4), and meteorological equipment in a small desert shrubland watershed (~0.05 km²) in the Jornada Experimental Range. Using this high-resolution network, I characterize the temporal and spatial variability of rainfall, soil conditions, and channel runoff within the watershed from June 2010 to September 2011, covering two NAMS periods. In addition, CO2, water, and energy measurements at an eddy covariance tower quantify seasonal, monthly, and event-scale changes in land-atmosphere states and fluxes. Results from this study indicate a strong seasonality in water and energy fluxes, with a reduction in Bowen ratio (B, the ratio of sensible to latent heat flux) from winter (B = 14) to summer (B = 3.3). This reduction is tied to shallow soil moisture availability during the summer (s = 0.040 m³/m³) as compared to the winter (s = 0.004 m³/m³). During the NAMS, I analyzed four consecutive rainfall-runoff events to quantify the soil moisture and channel flow responses and how water availability impacted the land-atmosphere fluxes. Spatial hydrologic variations during events occur over distances as short as ~15 m. The field network also allowed comparisons of several approaches to estimating evapotranspiration (ET). I found a more accurate ET estimate (a reduction of mean absolute error by 38%) when using distributed soil moisture data, as compared to a standard water balance approach based on the tower site. In addition, the use of spatially varied soil moisture data yielded a more reasonable relationship between ET and soil moisture, an important parameterization in many hydrologic models. These analyses illustrate the value of high-resolution sampling for quantifying seasonal fluxes in desert shrublands and the improvements such sampling offers in closing the water balance of small watersheds.
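The two quantities anchoring the seasonal comparison above, the Bowen ratio and a water-balance ET estimate, reduce to very short calculations; the sketch below uses hypothetical flux and storage numbers, not measurements from the Jornada network.

```python
def bowen_ratio(sensible_flux_w_m2, latent_flux_w_m2):
    """Bowen ratio B = H / LE (sensible over latent heat flux)."""
    return sensible_flux_w_m2 / latent_flux_w_m2

def water_balance_et(rainfall_mm, runoff_mm, soil_storage_change_mm):
    """ET from a simple storage balance: ET = P - Q - dS (mm over the interval)."""
    return rainfall_mm - runoff_mm - soil_storage_change_mm

print(bowen_ratio(210.0, 15.0))           # dry winter-like conditions -> B = 14
print(bowen_ratio(120.0, 36.0))           # monsoon conditions -> B ~ 3.3
print(water_balance_et(25.0, 3.0, 10.0))  # 12 mm of ET over the event
```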
Contributors: Templeton, Ryan (Author) / Vivoni, Enrique R (Thesis advisor) / Mays, Larry (Committee member) / Fox, Peter (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Spotlight mode synthetic aperture radar (SAR) imaging involves a tomographic reconstruction from projections, necessitating acquisition of large amounts of data in order to form a moderately sized image. Since typical SAR sensors are hosted on mobile platforms, it is common to have limitations on SAR data acquisition, storage and communication that can lead to data corruption and a resulting degradation of image quality. It is convenient to consider corrupted samples as missing, creating a sparsely sampled aperture. A sparse aperture would also result from compressive sensing, which is a very attractive concept for data intensive sensors such as SAR. Recent developments in sparse decomposition algorithms can be applied to the problem of SAR image formation from a sparsely sampled aperture. Two modified sparse decomposition algorithms are developed, based on well known existing algorithms and modified to be practical in application on modest computational resources. The two algorithms are demonstrated on real-world SAR images. Algorithm performance with respect to super-resolution, noise, coherent speckle and target/clutter decomposition is explored. These algorithms yield more accurate image reconstruction from sparsely sampled apertures than classical spectral estimators. At the current state of development, sparse image reconstruction using these two algorithms requires about two orders of magnitude more processing time than classical SAR image formation.
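The dissertation's two modified algorithms are not reproduced here, but the sketch below shows the generic idea of sparse reconstruction from a sparsely sampled aperture using iterative soft-thresholding (ISTA) on a toy 1-D scene; the scene, sampling ratio, and regularization weight are all assumptions for illustration.

```python
import numpy as np

def ista(A, y, lam=0.05, step=None, iters=200):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(iters):
        z = x - step * (A.conj().T @ (A @ x - y))        # gradient step
        mag = np.abs(z)
        # Complex soft-thresholding: shrink magnitudes, keep phases.
        x = np.where(mag > 0, z / np.maximum(mag, 1e-12), 0) * np.maximum(mag - step * lam, 0.0)
    return x

# Hypothetical sparsely sampled aperture: keep 40% of the phase-history samples.
rng = np.random.default_rng(1)
n = 128
x_true = np.zeros(n, dtype=complex)
x_true[[20, 64, 100]] = [1.0, 0.7, 0.5]          # three point scatterers
F = np.fft.fft(np.eye(n)) / np.sqrt(n)           # full (unitary) measurement operator
keep = np.sort(rng.choice(n, size=int(0.4 * n), replace=False))
A = F[keep, :]                                   # sparsely sampled aperture
y = A @ x_true + 0.01 * rng.standard_normal(len(keep))
x_hat = ista(A, y)
print(np.argsort(np.abs(x_hat))[-3:])            # indices of the three strongest recovered reflectors
```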
Contributors: Werth, Nicholas (Author) / Karam, Lina (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In the last few years, significant advances in nanofabrication have allowed tailoring of structures and materials at a molecular level, enabling nanofabrication with precise control of dimensions and organization at molecular length scales, a development leading to significant advances in nanoscale systems. Although the direction of progress seems to follow the path of microelectronics, the fundamental physics of a nanoscale system changes more rapidly than in microelectronics as the size scale is decreased. The changes in length, area, and volume ratios due to reduction in size alter the relative influence of various physical effects determining the overall operation of a system in unexpected ways. One such category of nanofluidic structures demonstrating unique ionic and molecular transport characteristics is nanopores. Nanopores derive their unique transport characteristics from the electrostatic interaction of nanopore surface charge with aqueous ionic solutions. In this doctoral research, cylindrical nanopores, in single and array configurations, were fabricated in silicon-on-insulator (SOI) using a combination of electron beam lithography (EBL) and reactive ion etching (RIE). The fabrication method presented is compatible with standard semiconductor foundries and allows fabrication of nanopores with desired geometries and precise dimensional control, providing near-ideal and isolated physical modeling systems for studying ion transport at the nanometer scale. Ion transport through nanopores was characterized by measuring the ionic conductances of nanopore arrays of various diameters over a wide range of concentrations of aqueous hydrochloric acid (HCl) solutions. Measured ionic conductances demonstrated two distinct regimes, governed by surface charge interactions at low ionic concentrations and by nanopore geometry at high ionic concentrations. Field-effect modulation of ion transport through nanopore arrays, in a fashion similar to semiconductor transistors, was also studied. Using ionic conductance measurements, it was shown that the concentration of ions in the nanopore volume was significantly changed when a gate voltage was applied to the nanopore arrays, hence controlling their transport. Based on the ion transport results, single nanopores were used to demonstrate their application as nanoscale particle counters, using polystyrene nanobeads monodispersed in aqueous HCl solutions of different molarities. Effects of field-effect modulation on particle transition events were also demonstrated.
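A common way to rationalize the two conductance regimes described above is a simple bulk-plus-surface model for a charged cylindrical pore; the sketch below evaluates that textbook model with assumed pore dimensions and surface charge density, and is not the device model or data from this dissertation.

```python
import numpy as np

E = 1.602e-19     # elementary charge (C)
NA = 6.022e23     # Avogadro's number (1/mol)
MU_H = 36.2e-8    # H+ mobility (m^2/(V*s))
MU_CL = 7.9e-8    # Cl- mobility (m^2/(V*s))

def pore_conductance(c_molar, d, L, sigma_s=20e-3):
    """Conductance (S) of one cylindrical pore: bulk term plus surface-charge term."""
    n = c_molar * 1e3 * NA                                 # ion pairs per m^3
    g_bulk = (np.pi * d**2 / (4 * L)) * (MU_H + MU_CL) * n * E
    g_surface = (np.pi * d / L) * MU_H * sigma_s           # counter-ion sheet along the wall
    return g_bulk + g_surface

# Conductance flattens at low molarity (surface charge dominates) and scales
# with concentration at high molarity (geometry/bulk dominates).
for c in [1e-6, 1e-4, 1e-2, 1.0]:                          # HCl molarity sweep
    print(f"{c:.0e} M -> {pore_conductance(c, d=100e-9, L=200e-9):.3e} S")
```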
Contributors: Joshi, Punarvasu (Author) / Thornton, Trevor J (Thesis advisor) / Goryll, Michael (Thesis advisor) / Spanias, Andreas (Committee member) / Saraniti, Marco (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
For synthetic aperture radar (SAR) image formation processing, the chirp scaling algorithm (CSA) has gained considerable attention mainly because of its excellent target focusing ability, optimized processing steps, and ease of implementation. In particular, unlike the range Doppler and range migration algorithms, the CSA is easy to implement since it does not require interpolation, and it can be used on both stripmap and spotlight SAR systems. Another transform that can be used to enhance SAR image formation processing is the fractional Fourier transform (FRFT). This transform has been recently introduced to the signal processing community, and it has shown many promising applications in the realm of SAR signal processing, specifically because of its close association with the Wigner distribution and ambiguity function. The objective of this work is to improve the application of the FRFT in order to enhance the implementation of the CSA for SAR processing. This will be achieved by processing real phase-history data from the RADARSAT-1 satellite, a multi-mode SAR platform operating in the C-band and providing imagery with resolution between 8 and 100 meters at incidence angles of 10 through 59 degrees. The phase-history data will be processed into imagery using the conventional chirp scaling algorithm. The results will then be compared with those from a new implementation of the CSA based on the FRFT, combined with traditional SAR focusing techniques, to enhance the algorithm's focusing ability and thereby increase the peak-to-sidelobe ratio of the focused targets. The FRFT can also be used to provide focusing enhancements at extended ranges.
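Since the comparison above is quantified through the peak-to-sidelobe ratio of focused targets, a small helper for computing that metric from a focused range profile is sketched below; the sinc-shaped profile in the example is synthetic, and this is not code from the thesis.

```python
import numpy as np

def peak_to_sidelobe_ratio_db(profile):
    """PSLR (dB) of a focused point-target profile: main-lobe peak versus the
    highest sidelobe outside the main lobe (bounded by the first nulls)."""
    p = np.abs(np.asarray(profile, dtype=float))
    k = int(np.argmax(p))
    # Walk outward from the peak to the first nulls that bound the main lobe.
    left = k
    while left > 0 and p[left - 1] < p[left]:
        left -= 1
    right = k
    while right < len(p) - 1 and p[right + 1] < p[right]:
        right += 1
    sidelobes = np.concatenate([p[:left], p[right + 1:]])
    return 20 * np.log10(p[k] / sidelobes.max())

# Synthetic impulse response: an ideal sinc-shaped compressed pulse.
x = np.linspace(-8, 8, 801)
print(round(peak_to_sidelobe_ratio_db(np.sinc(x)), 1))  # about 13.3 dB for a sinc
```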
Contributors: Northrop, Judith (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Spanias, Andreas (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Local municipalities in the Phoenix Metropolitan Area have voiced an interest in purchasing alternate source water with lower DBP precursors. Along the primary source is a hydroelectric dam from which water would be diverted. This project is an assessment of optimizing the potential blends of source water delivered to a water treatment plant in an effort to enable it to more readily meet DBP regulations. To perform this analysis, existing water treatment models were used in conjunction with historic water quality sampling data to predict the chemical usage necessary to meet DBP regulations. A retrospective analysis of the summer months of 2007 indicated that, by optimizing the source water, the WTP could reduce treatment costs by an average of 30% over the four-month period, amounting to overall treatment savings of $154 per MG ($82 per AF).
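The kind of blend optimization described above can be illustrated with a simple screening calculation; all of the concentrations, unit costs, and the compliance target in the sketch below are hypothetical placeholders, not values from this project.

```python
import numpy as np

TOC_PRIMARY = 4.0      # mg/L DBP precursor (TOC) in the primary source (assumed)
TOC_ALT = 2.2          # mg/L in the alternate, dam-released source (assumed)
COST_PRIMARY = 250.0   # $/MG purchase + treatment cost, primary source (assumed)
COST_ALT = 420.0       # $/MG for the alternate source (assumed)
TOC_TARGET = 3.0       # blended TOC needed to reliably meet DBP rules (assumed)

best = None
for f in np.linspace(0, 1, 101):               # f = fraction of alternate water
    toc = f * TOC_ALT + (1 - f) * TOC_PRIMARY  # blended precursor concentration
    cost = f * COST_ALT + (1 - f) * COST_PRIMARY
    if toc <= TOC_TARGET and (best is None or cost < best[1]):
        best = (f, cost)

print(f"cheapest compliant blend: {best[0]:.0%} alternate water at ${best[1]:.0f}/MG")
```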
Contributors: Rice, Jacelyn (Author) / Westerhoff, Paul (Thesis advisor) / Fox, Peter (Committee member) / Hristovski, Kiril (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Disinfection byproducts (DBPs) are the result of reactions between natural organic matter (NOM) and a disinfectant. The formation and speciation of DBPs are largely dependent on the disinfectant used and on the NOM concentration and composition. This study examined the use of photocatalysis with titanium dioxide for the oxidation and removal of DBP precursors (NOM) and the inhibition of DBP formation. Water samples were collected from various points in the treatment process, treated with photocatalysis, and chlorinated to analyze the implications for total trihalomethane (TTHM) and five haloacetic acid (HAA5) formation. The three sub-objectives of this study were: the comparison of enhanced and standard coagulation to photocatalysis for the removal of DBP precursors; the analysis of photocatalysis and the characterization of organic matter using size exclusion chromatography, fluorescence spectroscopy, and excitation-emission matrices; and the analysis of photocatalysis before GAC filtration. The trends for each objective were consistent, including reductions in DBP precursors measured as dissolved organic carbon (DOC) concentration and UV absorbance at 254 nm. Both of these parameters decreased with increased photocatalytic treatment, which could be due in part to the adsorption of NOM to, as well as its oxidation on, the TiO2 surface. This resulted in lower THM and HAA concentrations at the Medium and High photocatalytic treatment levels. However, at the No UV exposure and Low photocatalytic treatment levels, where oxidation reactions were inherently incomplete, there was an increase in THM and HAA formation potential, in most cases significantly greater than that of the raw water or Control samples. The size exclusion chromatography (SEC) results suggest that photocatalysis preferentially degrades the higher molecular mass fraction of NOM, releasing lower molecular mass (LMM) compounds that have not been completely oxidized. The molecular weight distributions could explain the THM and HAA formation potentials that decreased in the No UV exposure samples but increased at the Low photocatalytic treatment levels. The use of photocatalysis before GAC adsorption appears to increase the bed life of the contactors; however, higher photocatalytic treatment levels have been shown to completely mineralize NOM and would therefore not require additional GAC adsorption after photocatalysis.
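The precursor measurements mentioned above (DOC and UV254) lend themselves to a couple of routine calculations, shown in the sketch below with made-up numbers; the SUVA index is a standard derived metric included only as an illustration, not as a result from this study.

```python
def percent_removal(raw, treated):
    """Percent reduction of a precursor surrogate relative to the raw water."""
    return 100.0 * (raw - treated) / raw

def suva_254(uv254_per_cm, doc_mg_per_l):
    """Specific UV absorbance, L/(mg*m): UV254 (1/cm) / DOC (mg/L) * 100."""
    return uv254_per_cm / doc_mg_per_l * 100.0

raw = {"doc": 3.5, "uv254": 0.080}          # hypothetical raw water: mg/L, 1/cm
high_treat = {"doc": 1.4, "uv254": 0.018}   # hypothetical "High" photocatalysis level

print(percent_removal(raw["doc"], high_treat["doc"]))     # DOC removal, %
print(suva_254(raw["uv254"], raw["doc"]),                 # aromaticity before...
      suva_254(high_treat["uv254"], high_treat["doc"]))   # ...and after treatment
```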
Contributors: Daugherty, Erin (Author) / Abbaszadegan, Morteza (Thesis advisor) / Fox, Peter (Committee member) / Mayer, Brooke (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
To address sustainability issues in wastewater treatment (WWT), Siemens Water Technologies (SWT) has designed a "hybrid" process that couples common activated sludge (AS) and anaerobic digestion (AD) technologies with the novel concepts of AD sludge recycle and biosorption. At least 85% of the hybrid's AD sludge is recycled to the AS process, providing additional sorbent for influent particulate chemical oxygen demand (PCOD) biosorption in contact tanks. Biosorbed PCOD is transported to the AD, where it is converted to methane. The aim of this study is to provide mass balance and microbial community analysis (MCA) of SWT's two hybrid and one conventional pilot plant trains, and mathematical modeling of the hybrid process including a novel model of biosorption. A detailed mass balance was performed on each tank and the overall system. The mass balance data support the conclusion that the hybrid process is more sustainable: it produces 1.5 to 5.5 times more methane and 50 to 83% less sludge than the conventional train. The hybrid's superior performance is driven by solid retention times (SRTs) 4 to 8 times longer than those of the conventional trains. However, the conversion of influent COD to methane was low, at 15 to 22%, and neither train exhibited significant nitrification or denitrification. Data were inconclusive as to the role of biosorption in the processes. MCA indicated the presence of Archaea and nitrifiers throughout both systems; however, it is inconclusive how active the Archaea and nitrifiers are under anoxic, aerobic, and anaerobic conditions. Mathematical modeling confirms the hybrid process produces 4 to 20 times more methane and 20 to 83% less sludge than the conventional train under various operating conditions. Neither process removes more than 25% of the influent nitrogen or converts more than 13% to nitrogen gas, due to biomass washout in the contact tank and short SRTs in the stabilization tank. In addition, a mathematical relationship was developed to describe PCOD biosorption through adsorption to biomass and floc entrapment. Ultimately, process performance is more heavily influenced by the higher AD SRTs attained when sludge is recycled through the system and less influenced by the inclusion of biosorption kinetics.
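Two of the quantities driving the comparison above, solids retention time and methane production, come from simple mass-balance arithmetic; the sketch below uses hypothetical inputs, with the theoretical yield of 0.35 L CH4 per g COD removed (at STP) as the only standard constant.

```python
def srt_days(solids_in_system_kg, solids_wasted_kg_per_day):
    """Solids retention time = solids inventory / solids wasting rate."""
    return solids_in_system_kg / solids_wasted_kg_per_day

def methane_l_per_day(cod_to_digester_kg_per_day, conversion_fraction):
    """Methane production from the COD routed to the digester and converted,
    using the theoretical 0.35 L CH4 per g COD at STP."""
    return cod_to_digester_kg_per_day * 1000.0 * conversion_fraction * 0.35

print(srt_days(solids_in_system_kg=120.0, solids_wasted_kg_per_day=6.0))        # 20 d
print(methane_l_per_day(cod_to_digester_kg_per_day=10.0, conversion_fraction=0.20))  # 700 L/d
```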
Contributors: Young, Michelle Nichole (Author) / Rittmann, Bruce E. (Thesis advisor) / Fox, Peter (Committee member) / Krajmalnik-Brown, Rosa (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Image resolution limits the extent to which zooming enhances clarity, restricts the size at which digital photographs can be printed, and, in the context of medical images, can prevent a diagnosis. Interpolation is the supplementing of known data with estimated values based on a function or model involving some or all of the known samples. The selection of the contributing data points, and the specifics of how they are used to define the interpolated values, influences how effectively the interpolation algorithm is able to estimate the underlying, continuous signal. The main contributions of this dissertation are threefold: 1) reframing edge-directed interpolation of a single image as an intensity-based registration problem; 2) providing an analytical framework for intensity-based registration using control grid constraints; and 3) quantitatively assessing the new, single-image enlargement algorithm based on analytical intensity-based registration. In addition to single-image resizing, the new methods and analytical approaches were extended to address a wide range of applications including volumetric (multi-slice) image interpolation, video deinterlacing, motion detection, and atmospheric distortion correction. Overall, the new approaches generate results that reflect the underlying signals more accurately than less computationally demanding approaches, while imposing lower processing requirements and fewer restrictions than methods of comparable accuracy.
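The registration-based enlargement developed in the dissertation is not reproduced here; as a point of reference, the sketch below implements plain bilinear enlargement, the kind of less computationally demanding baseline such methods are typically compared against.

```python
import numpy as np

def bilinear_enlarge(img, factor):
    """Bilinear enlargement of a 2-D grayscale image by a given zoom factor."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, int(h * factor))   # target rows mapped to source coords
    xs = np.linspace(0, w - 1, int(w * factor))   # target cols mapped to source coords
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Blend the four nearest known samples with distance-based weights.
    top = (1 - wx) * img[np.ix_(y0, x0)] + wx * img[np.ix_(y0, x1)]
    bot = (1 - wx) * img[np.ix_(y1, x0)] + wx * img[np.ix_(y1, x1)]
    return (1 - wy) * top + wy * bot

img = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_enlarge(img, 2).shape)   # (8, 8)
```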
Contributors: Zwart, Christine M. (Author) / Frakes, David H (Thesis advisor) / Karam, Lina (Committee member) / Kodibagkar, Vikram (Committee member) / Spanias, Andreas (Committee member) / Towe, Bruce (Committee member) / Arizona State University (Publisher)
Created: 2013