This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 21 - 30 of 72

Description
Many products undergo several stages of testing ranging from tests on individual components to end-item tests. Additionally, these products may be further "tested" via customer or field use. The later failure of a delivered product may in some cases be due to circumstances that have no correlation with the product's inherent quality. However, at times, there may be cues in the upstream test data that, if detected, could serve to predict the likelihood of downstream failure or performance degradation induced by product use or environmental stresses. This study explores the use of downstream factory test data or product field reliability data to infer data mining or pattern recognition criteria onto manufacturing process or upstream test data by means of support vector machines (SVM) in order to provide reliability prediction models. In concert with a risk/benefit analysis, these models can be utilized to drive improvement of the product or, at least, via screening to improve the reliability of the product delivered to the customer. Such models can be used to aid in reliability risk assessment based on detectable correlations between the product test performance and the sources of supply, test stands, or other factors related to product manufacture. As an enhancement to the usefulness of the SVM or hyperplane classifier within this context, L-moments and the Western Electric Company (WECO) Rules are used to augment or replace the native process or test data used as inputs to the classifier. As part of this research, a generalizable binary classification methodology was developed that can be used to design and implement predictors of end-item field failure or downstream product performance based on upstream test data that may be composed of single-parameter, time-series, or multivariate real-valued data. Additionally, the methodology provides input parameter weighting factors that have proved useful in failure analysis and root cause investigations as indicators of which of several upstream product parameters have the greater influence on the downstream failure outcomes.
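
As a rough illustration of the kind of pipeline described above, the sketch below computes two sample L-moments and two Western Electric style control-chart flags from an upstream test trace and feeds them to a support-vector classifier. It is a minimal stand-in, not the thesis methodology: the synthetic data, the feature set, and the choice of WECO rules are assumptions made for the example.

```python
import numpy as np
from sklearn.svm import SVC

def l_moments(x):
    """First two sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum((np.arange(1, n) / (n - 1)) * x[1:]) / n
    return b0, 2.0 * b1 - b0                        # (l1, l2)

def weco_flags(x, center, sigma):
    """Two Western Electric style flags: a point beyond 3-sigma, and a run
    of 8 consecutive points on the same side of the center line."""
    z = (np.asarray(x, dtype=float) - center) / sigma
    beyond_3sigma = int(np.any(np.abs(z) > 3.0))
    runs = np.convolve(np.sign(z), np.ones(8), mode="valid")
    run_of_8 = int(np.any(np.abs(runs) == 8))
    return beyond_3sigma, run_of_8

def features(trace, center, sigma):
    l1, l2 = l_moments(trace)
    return [l1, l2, *weco_flags(trace, center, sigma)]

# hypothetical data: upstream test traces whose spread depends on a hidden
# downstream pass/fail label (stand-in for real factory and field data)
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
traces = [rng.normal(0.0, 1.0 + 0.8 * yi, size=50) for yi in y]
X = np.array([features(t, center=0.0, sigma=1.0) for t in traces])
clf = SVC(kernel="linear").fit(X, y)
# with a linear kernel, clf.coef_ exposes per-feature weights, loosely analogous
# to the input-parameter weighting factors mentioned in the abstract
```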
ContributorsMosley, James (Author) / Morrell, Darryl (Committee member) / Cochran, Douglas (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Roberts, Chell (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created2011
Description
For synthetic aperture radar (SAR) image formation processing, the chirp scaling algorithm (CSA) has gained considerable attention mainly because of its excellent target focusing ability, optimized processing steps, and ease of implementation. In particular, unlike the range Doppler and range migration algorithms, the CSA is easy to implement since it does not require interpolation, and it can be used on both stripmap and spotlight SAR systems. Another transform that can be used to enhance the processing of SAR image formation is the fractional Fourier transform (FRFT). This transform has been recently introduced to the signal processing community, and it has shown many promising applications in the realm of SAR signal processing, specifically because of its close association to the Wigner distribution and ambiguity function. The objective of this work is to improve the application of the FRFT in order to enhance the implementation of the CSA for SAR processing. This will be achieved by processing real phase-history data from the RADARSAT-1 satellite, a multi-mode SAR platform operating in the C-band, providing imagery with resolution between 8 and 100 meters at incidence angles of 10 through 59 degrees. The phase-history data will be processed into imagery using the conventional chirp scaling algorithm. The results will then be compared using a new implementation of the CSA based on the use of the FRFT, combined with traditional SAR focusing techniques, to enhance the algorithm's focusing ability, thereby increasing the peak-to-sidelobe ratio of the focused targets. The FRFT can also be used to provide focusing enhancements at extended ranges.
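
The abstract measures focusing quality through the peak-to-sidelobe ratio; the short sketch below shows one common way to compute that metric for a one-dimensional cut through a focused point target. It is a generic illustration, not code from the thesis, and the mainlobe-bounding heuristic is an assumption.

```python
import numpy as np

def pslr_db(cut):
    """Peak-to-sidelobe ratio (highest sidelobe relative to the peak, in dB)
    for a 1-D magnitude cut through a focused point target."""
    mag = np.abs(np.asarray(cut))
    p = int(np.argmax(mag))
    # walk outward from the peak to the first nulls bounding the mainlobe
    left = p
    while left > 0 and mag[left - 1] < mag[left]:
        left -= 1
    right = p
    while right < len(mag) - 1 and mag[right + 1] < mag[right]:
        right += 1
    sidelobes = np.concatenate([mag[:left], mag[right + 1:]])
    return 20.0 * np.log10(sidelobes.max() / mag[p])

# example: an unweighted sinc response has a PSLR of roughly -13 dB
print(round(pslr_db(np.sinc(np.linspace(-8.0, 8.0, 1601))), 1))
```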
ContributorsNorthrop, Judith (Author) / Papandreou-Suppappola, Antonia (Thesis advisor) / Spanias, Andreas (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created2011
Description
Spotlight mode synthetic aperture radar (SAR) imaging involves a tomographic reconstruction from projections, necessitating acquisition of large amounts of data in order to form a moderately sized image. Since typical SAR sensors are hosted on mobile platforms, it is common to have limitations on SAR data acquisition, storage and communication that can lead to data corruption and a resulting degradation of image quality. It is convenient to consider corrupted samples as missing, creating a sparsely sampled aperture. A sparse aperture would also result from compressive sensing, which is a very attractive concept for data-intensive sensors such as SAR. Recent developments in sparse decomposition algorithms can be applied to the problem of SAR image formation from a sparsely sampled aperture. Two modified sparse decomposition algorithms are developed, based on well-known existing algorithms, modified to be practical in application on modest computational resources. The two algorithms are demonstrated on real-world SAR images. Algorithm performance with respect to super-resolution, noise, coherent speckle and target/clutter decomposition is explored. These algorithms yield more accurate image reconstruction from sparsely sampled apertures than classical spectral estimators. At the current state of development, sparse image reconstruction using these two algorithms requires about two orders of magnitude greater processing time than classical SAR image formation.
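
The two modified algorithms themselves are not given here; as a hedged stand-in, the sketch below applies generic iterative shrinkage-thresholding (ISTA) to recover a sparse 1-D scene from a partially observed Fourier aperture, which captures the sparse-aperture reconstruction setting the abstract describes.

```python
import numpy as np

def ista_partial_fourier(y, mask, lam=0.05, n_iter=300):
    """Minimize 0.5*||M F x - y||^2 + lam*||x||_1 for a 1-D scene x, where F is
    the unitary DFT and the 0/1 mask M keeps only the sampled aperture bins."""
    A  = lambda x: mask * np.fft.fft(x, norm="ortho")       # forward model
    At = lambda r: np.fft.ifft(mask * r, norm="ortho")      # its adjoint
    x = np.zeros(mask.size, dtype=complex)
    for _ in range(n_iter):
        z = x - At(A(x) - y)                                # gradient step (step size 1 is
        mag = np.abs(z)                                     # safe since ||M F|| <= 1)
        x = z * np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)   # complex soft threshold
    return x

# synthetic example: a few bright point scatterers, 40% of the aperture retained
rng = np.random.default_rng(1)
scene = np.zeros(256, dtype=complex)
scene[rng.choice(256, size=5, replace=False)] = rng.normal(size=5) + 1j * rng.normal(size=5)
mask = (rng.random(256) < 0.4).astype(float)
y = mask * np.fft.fft(scene, norm="ortho")
estimate = ista_partial_fourier(y, mask)                    # sparse reconstruction
```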
ContributorsWerth, Nicholas (Author) / Karam, Lina (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created2011
Description
With increased usage of green energy, the number of photovoltaic arrays used in power generation is increasing rapidly. Many of the arrays are located at remote locations where faults that occur within the array often go unnoticed and unattended for long periods of time. Technicians sent to rectify the faults have to spend a large amount of time determining the location of the fault manually. Automated monitoring systems are needed to obtain the information about the performance of the array and detect faults. Such systems must monitor the DC side of the array in addition to the AC side to identify non-catastrophic faults. This thesis focuses on two of the requirements for DC side monitoring of an automated PV array monitoring system. The first part of the thesis quantifies the advantages of obtaining higher resolution data from a PV array on detection of faults. Data for the monitoring system can be gathered for the array as a whole or from additional places within the array, such as individual modules and ends of strings. The fault detection rates and false positive rates are compared for array level, string level and module level PV data. Monte Carlo simulations are performed using PV array models developed in Simulink and MATLAB for fault and no-fault cases. The second part describes a graphical user interface (GUI) that can be used to visualize the PV array for module level monitoring system information. A demonstration GUI is built in MATLAB using data obtained from a PV array test facility in Tempe, AZ. Visualizations are implemented to display information about the array as a whole or individual modules and locate faults in the array.
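
As a toy analogue of the Monte Carlo comparison mentioned above, the sketch below contrasts array-level and module-level threshold detection on a deliberately simplistic statistical module model; the numbers and the model are assumptions for illustration only, not the Simulink/MATLAB PV array models used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
N_MODULES, N_TRIALS, THRESH = 60, 2000, 3.0       # 3-sigma detection threshold

def trial(fault):
    power = rng.normal(200.0, 5.0, N_MODULES)     # per-module power output (W)
    if fault:
        power[rng.integers(N_MODULES)] *= 0.7     # one degraded module
    z_module = np.abs(power - 200.0) / 5.0                              # module-level statistics
    z_array = np.abs(power.sum() - 200.0 * N_MODULES) / (5.0 * np.sqrt(N_MODULES))
    return z_module.max() > THRESH, z_array > THRESH

det = np.array([trial(True) for _ in range(N_TRIALS)])    # trials with a fault present
fa  = np.array([trial(False) for _ in range(N_TRIALS)])   # fault-free trials
print("module-level: detection %.2f, false alarm %.3f" % (det[:, 0].mean(), fa[:, 0].mean()))
print("array-level:  detection %.2f, false alarm %.3f" % (det[:, 1].mean(), fa[:, 1].mean()))
```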
ContributorsKrishnan, Venkatachalam (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Ayyanar, Raja (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created2012
Description
In this thesis, we consider the problem of fast and efficient indexing techniques for time sequences which evolve on manifold-valued spaces. Using manifolds is a convenient way to work with complex features that often do not live in Euclidean spaces. However, computing standard notions of geodesic distance, mean, etc. can get very involved due to the underlying non-linearity associated with the space. As a result, a complex task such as manifold sequence matching would require a very large number of computations, making it hard to use in practice. We believe that one can devise smart approximation algorithms for several classes of such problems which take into account the geometry of the manifold and maintain the favorable properties of the exact approach. This problem has several applications in areas of human activity discovery and recognition, where several features and representations are naturally studied in a non-Euclidean setting. We propose a novel solution to the problem of indexing manifold-valued sequences: an intrinsic approach that maps sequences to a symbolic representation. This is shown to enable the deployment of fast and accurate algorithms for activity recognition, motif discovery, and anomaly detection. Toward this end, we present generalizations of key concepts of piece-wise aggregation and symbolic approximation for the case of non-Euclidean manifolds. Experiments show that one can replace expensive geodesic computations with much faster symbolic computations with little loss of accuracy in activity recognition and discovery applications. The proposed methods are ideally suited for real-time systems and resource-constrained scenarios.
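
The thesis generalizes piecewise aggregation and symbolic approximation to manifold-valued data; the sketch below shows only the Euclidean building blocks being generalized (classic PAA followed by SAX-style quantization), under assumed segment and alphabet sizes.

```python
import numpy as np
from scipy.stats import norm

def paa(x, n_segments):
    """Mean of each of n_segments equal-length pieces of a 1-D series."""
    return np.array([seg.mean() for seg in np.array_split(np.asarray(x, float), n_segments)])

def sax(x, n_segments=16, alphabet=4):
    """Map a series to a short symbol string via z-normalization, PAA, and
    Gaussian-quantile breakpoints (classic SAX)."""
    x = np.asarray(x, float)
    z = (x - x.mean()) / (x.std() + 1e-12)
    breakpoints = norm.ppf(np.arange(1, alphabet) / alphabet)   # equiprobable bins
    return "".join(chr(ord("a") + int(np.searchsorted(breakpoints, v)))
                   for v in paa(z, n_segments))

# two similar random walks map to similar strings, enabling cheap symbolic matching
rng = np.random.default_rng(0)
s1 = np.cumsum(rng.normal(size=400))
s2 = s1 + rng.normal(scale=0.1, size=400)
print(sax(s1), sax(s2))
```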
ContributorsAnirudh, Rushil (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2012
Description
Recent advances in camera architectures and associated mathematical representations now enable compressive acquisition of images and videos at low data-rates. While most computer vision applications of today are composed of conventional cameras, which collect a large amount of redundant data, and power-hungry embedded systems, which compress the collected data for further processing, compressive cameras offer the advantage of direct acquisition of data in the compressed domain and hence readily promise to find applicability in computer vision, particularly in environments hampered by limited communication bandwidths. However, despite the significant progress in theory and methods of compressive sensing, little headway has been made in developing systems for such applications by exploiting the merits of compressive sensing. In such a setting, we consider the problem of activity recognition, which is an important inference problem in many security and surveillance applications. Since all successful activity recognition systems involve detection of humans followed by recognition, a potential fully functioning system motivated by compressive cameras would involve tracking the human, which requires the reconstruction of at least the initial few frames to detect the human. Once the human is tracked, the recognition part of the system requires only the features to be extracted from the tracked sequences, which can be the reconstructed images or the compressed measurements of such sequences. However, it is desirable in resource-constrained environments that these features be extracted from the compressive measurements without reconstruction. Motivated by this, in this thesis, we propose a framework for understanding activities as a non-linear dynamical system, and propose a robust, generalizable feature that can be extracted directly from the compressed measurements without reconstructing the original video frames. The proposed feature is termed recurrence texture and is motivated by recurrence analysis of non-linear dynamical systems. We show that it is possible to obtain discriminative features directly from the compressed stream and show its utility in recognition of activities at very low data rates.
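
As a rough illustration of the idea, the sketch below builds a recurrence matrix directly from a sequence of compressed measurement vectors; features such as the recurrence texture described above are derived from matrices of this kind. The random-projection measurement model and thresholding rule are assumptions for the example.

```python
import numpy as np

def recurrence_matrix(Y, eps_quantile=0.2):
    """Binary recurrence plot: R[i, j] = 1 if frames i and j are close."""
    Y = np.asarray(Y, float)                        # shape (n_frames, n_measurements)
    d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    eps = np.quantile(d, eps_quantile)              # data-driven closeness threshold
    return (d <= eps).astype(np.uint8)

# hypothetical compressed stream: random projections of a periodic "activity"
rng = np.random.default_rng(0)
t = np.linspace(0, 6 * np.pi, 120)
frames = np.stack([np.sin(t + p) for p in np.linspace(0, 2, 30)], axis=1)   # (120, 30) toy video
Phi = rng.normal(size=(8, 30)) / np.sqrt(30)        # 8 compressive measurements per frame
Y = frames @ Phi.T                                  # (120, 8) compressed measurement stream
R = recurrence_matrix(Y)                            # periodic motion shows up as diagonal bands
```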
ContributorsKulkarni, Kuldeep Sharad (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Frakes, David (Committee member) / Arizona State University (Publisher)
Created2012
Description
Approximately 1% of the world population suffers from epilepsy. Continuous long-term electroencephalographic (EEG) monitoring is the gold standard for recording epileptic seizures and assisting in the diagnosis and treatment of patients with epilepsy. However, this process still requires that seizures are visually detected and marked by experienced and trained electroencephalographers. The motivation for the development of an automated seizure detection algorithm in this research was to assist physicians in such a laborious, time-consuming and expensive task. Seizures in the EEG vary in duration (seconds to minutes), morphology and severity (clinical to subclinical, occurrence rate) within the same patient and across patients. The task of seizure detection is also made difficult due to the presence of movement and other recording artifacts. An early approach towards the development of automated seizure detection algorithms, utilizing both EEG changes and clinical manifestations, resulted in a sensitivity of 70-80% and 1 false detection per hour. Approaches based on artificial neural networks have improved the detection performance at the cost of algorithm training. Measures of nonlinear dynamics, such as Lyapunov exponents, have been applied successfully to seizure prediction. Within the framework of this MS research, a seizure detection algorithm based on measures of linear and nonlinear dynamics, i.e., the adaptive short-term maximum Lyapunov exponent (ASTLmax) and the adaptive Teager energy (ATE), was developed and tested. The algorithm was tested on long-term (0.5-11.7 days) continuous EEG recordings from five patients (3 with intracranial and 2 with scalp EEG) and a total of 56 seizures, producing a mean sensitivity of 93% and mean specificity of 0.048 false positives per hour. The developed seizure detection algorithm is data-adaptive, training-free and patient-independent. It is expected that this algorithm will assist physicians in reducing the time spent on detecting seizures, lead to faster and more accurate diagnosis, better evaluation of treatment, and possibly to better treatments if it is incorporated online and in real time with advanced neuromodulation therapies for epilepsy.
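
The sketch below shows the discrete Teager energy operator, one of the two measures named above, applied to a synthetic single-channel EEG trace; the fixed threshold is a toy stand-in and not the adaptive, patient-independent scheme developed in the thesis.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator: psi[x](n) = x(n)^2 - x(n-1) * x(n+1)."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# hypothetical single-channel EEG: background noise plus a 10-second burst of
# high-amplitude rhythmic activity standing in for an ictal segment
fs = 200
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1.0 / fs)
eeg = rng.normal(scale=10.0, size=t.size)
eeg[20 * fs:30 * fs] += 200.0 * np.sin(2 * np.pi * 10.0 * t[20 * fs:30 * fs])
energy = teager_energy(eeg)
smoothed = np.convolve(energy, np.ones(fs) / fs, mode="same")    # 1-second moving average
detected = smoothed > 10.0 * np.median(smoothed)                 # crude, non-adaptive rule
```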
ContributorsVenkataraman, Vinay (Author) / Jassemidis, Leonidas (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Arizona State University (Publisher)
Created2012
Description
The ease of use of mobile devices and tablets by students has generated a lot of interest in the area of engineering education. By using mobile technologies in signal analysis and applied mathematics, undergraduate-level courses can broaden the scope and effectiveness of technical education in classrooms. Current mobile devices have abundant memory and powerful processors, in addition to providing interactive interfaces. Therefore, these devices can support the implementation of non-trivial signal processing algorithms. Several existing visual programming environments, such as Java Digital Signal Processing (J-DSP), are built using the platform-independent infrastructure of Java applets. These enable students to perform signal-processing exercises over the Internet. However, some mobile devices do not support Java applets. Furthermore, mobile simulation environments rely heavily on establishing robust Internet connections with a remote server where the processing is performed. The interactive Java Digital Signal Processing tool (iJDSP) has been developed as a graphical mobile app on iOS devices (iPads, iPhones and iPod touches). In contrast to existing mobile applications, iJDSP has the ability to execute simulations directly on the mobile devices, and is a completely stand-alone application. In addition to a substantial set of signal processing algorithms, iJDSP has a highly interactive graphical interface where block diagrams can be constructed using a simple drag-and-drop procedure. Functions such as visualization of the convolution operation, and an interface to wireless sensors, have been developed. The convolution module animates the process of the continuous and discrete convolution operations, including time-shift and integration, so that users can observe and learn intuitively. The current set of DSP functions in the application enables students to perform simulation exercises on continuous and discrete convolution, the z-transform, filter design and the Fast Fourier Transform (FFT). The interface to wireless sensors in iJDSP allows users to import data from wireless sensor networks and use the rich suite of functions in iJDSP for data processing. This allows users to perform operations such as localization, activity detection and data fusion. The exercises and the iJDSP application were evaluated by senior-level students at Arizona State University (ASU), and the results of those assessments are analyzed and reported in this thesis.
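
The convolution module's animation follows the usual flip-shift-multiply-sum view of discrete convolution; the sketch below reproduces those steps in plain Python (it is not iJDSP code) so each output sample can be inspected term by term.

```python
import numpy as np

def convolution_steps(x, h):
    """Yield (n, partial products, y[n]) for y = x * h, one output at a time,
    mirroring the flip-shift-multiply-sum view of discrete convolution."""
    x, h = list(map(float, x)), list(map(float, h))
    for n in range(len(x) + len(h) - 1):
        terms = [x[k] * h[n - k]                  # h[n-k]: flipped, shifted copy of h
                 for k in range(len(x)) if 0 <= n - k < len(h)]
        yield n, terms, sum(terms)

x, h = [1.0, 2.0, 3.0], [1.0, 0.5]
for n, terms, y_n in convolution_steps(x, h):
    print(f"n={n}: sum({terms}) = {y_n}")
print(np.convolve(x, h))                          # same result: [1.  2.5  4.  1.5]
```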
ContributorsHu, Shuang (Author) / Spanias, Andreas (Thesis advisor) / Tsakalis, Kostas (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created2012
Description
Fully distributed wireless sensor networks (WSNs) without a fusion center have advantages such as scalability in network size and energy efficiency in communications. Each sensor shares its data only with neighbors and then achieves global consensus quantities by in-network processing. This dissertation considers robust distributed parameter estimation methods, seeking global consensus on parameters of adaptive learning algorithms and statistical quantities.

A diffusion adaptation strategy with nonlinear transmission is proposed. The nonlinearity is motivated by the necessity for bounded transmit power, as sensors need to iteratively communicate with each other energy-efficiently. Despite the nonlinearity, it is shown that the algorithm performs close to the linear case, with the added advantage of power savings. This dissertation also discusses convergence properties of the algorithm in the mean and the mean-square sense.
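
A hedged sketch of the idea: adapt-then-combine diffusion LMS over a ring network in which each node transmits a bounded nonlinearity of its intermediate estimate. The tanh nonlinearity, the network, and the step size are assumptions for illustration; this is not the dissertation's exact transmission scheme or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, MU, STEPS = 10, 3, 0.02, 2000              # nodes, parameter size, step size, iterations
w_true = rng.normal(scale=0.2, size=M)           # kept small so tanh stays near-linear

# ring network: each node averages itself and its two neighbours (doubly stochastic)
A = np.zeros((N, N))
for i in range(N):
    A[i, [i, (i - 1) % N, (i + 1) % N]] = 1.0 / 3.0

W = np.zeros((N, M))                             # per-node parameter estimates
for _ in range(STEPS):
    U = rng.normal(size=(N, M))                  # streaming regressors, one row per node
    d = U @ w_true + 0.1 * rng.normal(size=N)    # noisy local measurements
    psi = W + MU * (d - np.sum(U * W, axis=1))[:, None] * U    # adapt: local LMS step
    W = A @ np.tanh(psi)                         # combine bounded transmissions; for small
                                                 # estimates this stays close to linear diffusion
print(np.round(W.mean(axis=0), 3), np.round(w_true, 3))
```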

Often, the average is used to measure the central tendency of sensed data over a network. When there are outliers in the data, however, the average can be highly biased. Alternative metrics that are robust against outliers are the median, the mode, and the trimmed mean. Quantiles generalize the median, and they can also be used to form the trimmed mean. A consensus-based distributed quantile estimation algorithm is proposed and applied, through simulation, to finding trimmed means, medians, maximum or minimum values, and to the identification of outliers. It is shown that the estimated quantities are asymptotically unbiased and converge toward the sample quantile in the mean-square sense. Step-size sequences with proper decay rates are also discussed for the convergence analysis.
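
A minimal sketch of the flavor of such an algorithm: each node runs a stochastic-approximation quantile update on its own stream with a decaying step size and averages with its neighbors. The graph, step-size sequence, and data model are assumptions; the dissertation's exact recursion and convergence analysis are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, STEPS = 12, 0.5, 5000                      # 12 nodes estimating the median (p = 0.5)

# ring-graph combination matrix (doubly stochastic)
A = np.zeros((N, N))
for i in range(N):
    A[i, [i, (i - 1) % N, (i + 1) % N]] = 1.0 / 3.0

x = np.zeros(N)                                  # per-node quantile estimates
for k in range(1, STEPS + 1):
    y = rng.standard_cauchy(N) + 2.0             # heavy-tailed local data, true median 2.0
    step = 5.0 / k ** 0.7                        # decaying step size
    x = A @ x + step * (P - (y <= x))            # consensus average + quantile correction
print(np.round(x, 2))                            # all nodes settle near the median, 2.0
```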

Another measure of central tendency is the mode, which represents the most probable value and is also robust to outliers and other contaminations in the data. The proposed distributed mode estimation algorithm achieves a global mode by recursively shifting the conditional mean of the measurement data until it converges to a stationary point of the estimated density function. It is also possible to estimate the mode by utilizing a grid vector together with a kernel density estimator: the densities are estimated at each grid point, while the points are updated until they converge to a global mode.
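
The core recursion is mean shift: repeatedly replace the current point with the kernel-weighted conditional mean of the data around it until it settles at a density mode. The single-node sketch below illustrates that step with a Gaussian kernel; the distributed/consensus machinery and the grid-based variant described above are omitted.

```python
import numpy as np

def mean_shift_mode(data, x0, bandwidth=0.5, n_iter=100, tol=1e-6):
    """Recursively replace x with the kernel-weighted conditional mean."""
    x = float(x0)
    for _ in range(n_iter):
        w = np.exp(-0.5 * ((data - x) / bandwidth) ** 2)    # Gaussian kernel weights
        x_new = np.sum(w * data) / np.sum(w)                # conditional mean near x
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# bimodal data: the mode tracks the dominant cluster near 0, whereas the sample
# mean is pulled toward 4 by the smaller cluster
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 0.3, 700), rng.normal(4.0, 0.3, 300)])
print(round(mean_shift_mode(data, x0=float(np.median(data))), 2), round(float(data.mean()), 2))
```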
ContributorsLee, Jongmin (Electrical engineer) (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Spanias, Andreas (Thesis advisor) / Tsakalis, Konstantinos (Committee member) / Reisslein, Martin (Committee member) / Arizona State University (Publisher)
Created2017
Description
Modern machine learning systems leverage data and features from multiple modalities to gain more predictive power. In most scenarios, the modalities are vastly different and the acquired data are heterogeneous in nature. Consequently, building highly effective fusion algorithms is at the core of achieving improved model robustness and inferencing performance. This dissertation focuses on representation learning approaches as the fusion strategy. Specifically, the objective is to learn a shared latent representation that jointly exploits the structural information encoded in all modalities, such that a straightforward learning model can be adopted to obtain the prediction.

We first consider sensor fusion, a typical multimodal fusion problem critical to building a pervasive computing platform. A systematic fusion technique is described to support both multiple sensors and descriptors for activity recognition. Targeted at learning the optimal combination of kernels, Multiple Kernel Learning (MKL) algorithms have been successfully applied to numerous fusion problems, including many in computer vision. Utilizing the MKL formulation, we next describe an auto-context algorithm for learning image context via fusion with low-level descriptors. Furthermore, a principled fusion algorithm using deep learning to optimize kernel machines is developed. By bridging deep architectures with kernel optimization, this approach leverages the benefits of both paradigms and is applied to a wide variety of fusion problems.
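
As a small illustration of the kernel-fusion setup (not the MKL or deep kernel optimization algorithms themselves), the sketch below combines per-modality RBF kernels with fixed weights and trains an SVM on the precomputed fused kernel; the modalities and the weights are assumptions for the example.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)
X_accel = rng.normal(size=(n, 6)) + y[:, None]           # hypothetical modality 1
X_audio = rng.normal(size=(n, 20)) + 0.5 * y[:, None]    # hypothetical modality 2

beta = [0.6, 0.4]                                         # fixed kernel weights; true MKL learns these
K = beta[0] * rbf_kernel(X_accel) + beta[1] * rbf_kernel(X_audio)

tr, te = np.arange(0, 150), np.arange(150, 200)
clf = SVC(kernel="precomputed").fit(K[np.ix_(tr, tr)], y[tr])
print("accuracy:", clf.score(K[np.ix_(te, tr)], y[te]))
```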

In many real-world applications, the modalities exhibit highly specific data structures, such as time sequences and graphs, and consequently special design of the learning architecture is needed. In order to improve temporal modeling for multivariate sequences, we developed two architectures centered around attention models. A novel clinical time series analysis model is proposed for several critical problems in healthcare. Another model, coupled with a triplet ranking loss as a metric learning framework, is described to better solve speaker diarization. Compared to state-of-the-art recurrent networks, these attention-based multivariate analysis tools achieve improved performance while having lower computational complexity. Finally, in order to perform community detection on multilayer graphs, a fusion algorithm is described that derives node embeddings from word embedding techniques and also exploits the complementary relational information contained in each layer of the graph.
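
A minimal sketch of the basic mechanism behind such attention models: soft attention pooling over a multivariate sequence, with a scoring vector that would normally be learned. Shapes and parameters are illustrative assumptions, not those of the dissertation's architectures.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_pool(H, v):
    """H: (T, d) hidden features over T time steps; v: (d,) scoring vector.
    Returns a weighted summary of the sequence and the attention weights."""
    scores = H @ v                       # one relevance score per time step
    alpha = softmax(scores)              # attention weights sum to 1
    return alpha @ H, alpha              # (d,) context vector, (T,) weights

rng = np.random.default_rng(0)
H = rng.normal(size=(50, 8))             # e.g. features from 50 clinical time steps
v = rng.normal(size=8)                   # learned in practice; random here
context, alpha = attention_pool(H, v)
```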
ContributorsSong, Huan (Author) / Spanias, Andreas (Thesis advisor) / Thiagarajan, Jayaraman (Committee member) / Berisha, Visar (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Arizona State University (Publisher)
Created2018