Matching Items (4)
Description
Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled for use at each time step. Various metrics have been suggested to determine which sensors to use; one such metric is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffice to recover the initial state. However, to employ observability for sensor scheduling, the binary definition must be expanded so that one can measure how observable a system is with a particular measurement scheme, i.e., one needs a metric of observability. Most methods utilizing an observability metric address sensor selection rather than sensor scheduling. In this dissertation we present a new approach that utilizes observability for sensor scheduling, employing the condition number of the observability matrix as the metric and using column subset selection to create an algorithm that chooses which sensors to use at each time step. To this end we use a rank-revealing QR factorization algorithm to select sensors. Several numerical experiments demonstrate the performance of the proposed scheme.
Contributors: Ilkturk, Utku (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Renaut, Rosemary (Committee member) / Armbruster, Dieter (Committee member) / Arizona State University (Publisher)
Created: 2015
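The column-subset-selection idea in this abstract can be sketched in a few lines. This is not the dissertation's implementation, only a minimal illustration: it assumes a discrete-time system x_{k+1} = A x_k with measurement matrix C whose rows correspond to individual sensors, and applies SciPy's column-pivoted QR (a common rank-revealing QR variant) to the transposed observability matrix so that pivoting selects well-conditioned rows. The function name `select_sensors` and the greedy row-to-sensor mapping are hypothetical.

```python
import numpy as np
from scipy.linalg import qr

def select_sensors(A, C, k, horizon):
    """Pick k sensors (rows of C) whose measurements keep the
    observability matrix well conditioned, via column-pivoted QR
    on the transposed observability matrix."""
    n = A.shape[0]
    m = C.shape[0]
    # Stack the observability matrix O = [C; CA; ...; C A^(horizon-1)].
    blocks = []
    Ak = np.eye(n)
    for _ in range(horizon):
        blocks.append(C @ Ak)
        Ak = A @ Ak
    O = np.vstack(blocks)
    # Column subset selection on O^T chooses rows of O (sensor/time
    # pairs); the pivot order greedily favors good conditioning.
    _, _, piv = qr(O.T, pivoting=True)
    # Row i of O comes from sensor i % m; keep the first k distinct sensors.
    chosen = []
    for idx in piv:
        s = idx % m
        if s not in chosen:
            chosen.append(s)
        if len(chosen) == k:
            break
    return chosen
```

In practice the selection would be re-run at each scheduling step as the horizon advances; here a single call simply illustrates the pivoting mechanism.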
Description
Machine learning (ML) has played an important role in several modern technological innovations and has become an important tool for researchers in various fields of interest. Beyond engineering, ML techniques have spread across fields of study such as health care, medicine, diagnostics, social science, finance, and economics. These techniques require data to train the algorithms, model a complex system, and make predictions based on that model. The development of sophisticated sensors has made it easier to collect the large volumes of data used to form and test hypotheses with ML. The promising results obtained using ML have opened up new research opportunities across various disciplines, and this dissertation is a manifestation of that. Here, several unique studies are presented, from which valuable inferences have been drawn about real-world complex systems. Each study has its own motivation and relevance to the real world. An ensemble of signal processing (SP) and ML techniques is explored in each study. This dissertation provides the detailed systematic approach and discusses the results achieved in each study. The inferences drawn from each study play a vital role in areas of science and technology and are worth further investigation. This dissertation also provides a set of useful SP and ML tools for researchers in various fields of interest.
Contributors: Dutta, Arindam (Author) / Bliss, Daniel W (Thesis advisor) / Berisha, Visar (Committee member) / Richmond, Christ (Committee member) / Corman, Steven (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Deconvolution of noisy data is an ill-posed problem and requires some form of regularization to stabilize its solution. Tikhonov regularization is the most common method used, but it depends on the choice of a regularization parameter λ, which must generally be estimated using one of several common methods. These methods can be computationally intensive, so I consider their behavior when only a portion of the sampled data is used. I show that the results of these methods converge as the sampling resolution increases, and use this to suggest a method of downsampling to estimate λ. I then present numerical results showing that this method can be feasible, and propose future avenues of inquiry.
Contributors: Hansen, Jakob Kristian (Author) / Renaut, Rosemary (Thesis director) / Cochran, Douglas (Committee member) / Barrett, The Honors College (Contributor) / School of Music (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
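The pipeline this abstract describes can be sketched as follows. This is an illustrative sketch, not the thesis's code: it solves the Tikhonov problem through the regularized normal equations, scores candidate λ values with generalized cross-validation (GCV, one of the common parameter-choice methods alluded to), and evaluates that score on a downsampled portion of the data. The names `tikhonov`, `gcv_score`, and `pick_lambda`, and the simple strided downsampling, are assumptions made for illustration.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov solution x = argmin ||Ax - b||^2 + lam^2 ||x||^2,
    via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def gcv_score(A, b, lam):
    """GCV score m * ||(I - A_lam) b||^2 / trace(I - A_lam)^2,
    where A_lam is the Tikhonov influence matrix."""
    m, n = A.shape
    influence = A @ np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T)
    resid = (np.eye(m) - influence) @ b
    denom = np.trace(np.eye(m) - influence)
    return m * (resid @ resid) / denom**2

def pick_lambda(A, b, lams, stride=2):
    """Estimate lam from downsampled data (every stride-th sample),
    mimicking the downsampling strategy described in the abstract."""
    As, bs = A[::stride], b[::stride]
    scores = [gcv_score(As, bs, lam) for lam in lams]
    return lams[int(np.argmin(scores))]
```

Forming the influence matrix explicitly is only viable for small problems; for larger ones the GCV trace term is usually estimated stochastically or via an SVD/GSVD of A.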
Description
Power spectral analysis is a fundamental aspect of signal processing used in the detection and estimation of various signal features. Signals spaced closely in frequency are problematic and lead analysts to miss crucial details in the data. The Capon and Bartlett methods are non-parametric filterbank approaches to power spectrum estimation (PSE). The Capon algorithm is known as the "adaptive" approach to PSE because its filter impulse responses are adapted to the characteristics of the data. The Bartlett method is known as the "conventional" approach and has a fixed, deterministic filter. Both techniques rely on the Sample Covariance Matrix (SCM). The first objective of this project is to analyze the origins and characteristics of the Capon and Bartlett methods to understand their abilities to resolve signals closely spaced in frequency. Given both methods' reliance on the SCM, there is novelty in combining them through their cross-coherence. The second objective of this project is to analyze the performance of the Capon-Bartlett cross spectrum. This study involves MATLAB simulations of known test cases and comparisons with approximate theoretical predictions.
Contributors: Yoshiyama, Cassidy (Author) / Richmond, Christ (Thesis director) / Bliss, Daniel (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
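The two estimators named in this abstract have compact closed forms that can be sketched directly (shown here in Python rather than the project's MATLAB, purely for illustration). For an M-point uniform sampling with steering vector a(ω) and sample covariance matrix R, the Bartlett estimate is aᴴRa/M² and the Capon estimate is 1/(aᴴR⁻¹a); the helper names below are hypothetical.

```python
import numpy as np

def steering(M, w):
    """Steering vector a(w) = [1, e^{jw}, ..., e^{jw(M-1)}]."""
    return np.exp(1j * w * np.arange(M))

def sample_covariance(X):
    """SCM from an M x N matrix of N snapshots: R = X X^H / N."""
    return X @ X.conj().T / X.shape[1]

def bartlett_psd(R, w):
    """Conventional (Bartlett) estimate with the fixed filter a(w)/M."""
    a = steering(R.shape[0], w)
    return np.real(a.conj() @ R @ a) / R.shape[0] ** 2

def capon_psd(R, w):
    """Adaptive (Capon/MVDR) estimate 1 / (a^H R^{-1} a)."""
    a = steering(R.shape[0], w)
    return np.real(1.0 / (a.conj() @ np.linalg.solve(R, a)))
```

A convenient sanity check: for white noise, R = σ²I, both estimators reduce to σ²/M at every frequency, while for closely spaced sinusoids the data-adaptive Capon filter typically resolves peaks that the fixed Bartlett filter smears together.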