Matching Items (6)
Description
Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled to be used at each time step. To determine which sensors to use, various metrics have been suggested. One possible such metric is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffice to recover the initial state. However, to employ observability for sensor scheduling, the binary definition needs to be expanded so that one can measure how observable a system is with a particular measurement scheme, i.e., one needs a metric of observability. Most methods utilizing an observability metric address sensor selection rather than sensor scheduling. In this dissertation we present a new approach that utilizes observability for sensor scheduling by employing the condition number of the observability matrix as the metric and using column subset selection to create an algorithm that chooses which sensors to use at each time step. To this end we use a rank-revealing QR factorization algorithm to select sensors. Several numerical experiments demonstrate the performance of the proposed scheme.
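As a minimal sketch of the column-subset-selection idea (illustrative only, not the dissertation's implementation; all function and variable names here are invented for the example), the pivot order of a rank-revealing QR factorization can be used to rank candidate sensor rows of a stacked observability matrix by how much well-conditioned new information each contributes:

```python
import numpy as np

def pivot_order(M):
    """Column pivot order of a rank-revealing QR: repeatedly pick the column
    with the largest residual norm, then deflate it (a Gram-Schmidt step)."""
    M = M.astype(float).copy()
    order = []
    for _ in range(min(M.shape)):
        norms = np.linalg.norm(M, axis=0)
        j = int(np.argmax(norms))
        order.append(j)
        q = M[:, j] / (norms[j] + 1e-15)
        M -= np.outer(q, q @ M)            # remove the chosen direction
    return order

def select_sensors(C, A, k, steps=3):
    """Pick k of the m candidate sensors (rows of C) whose stacked
    observability rows C, CA, CA^2, ... are most linearly independent."""
    n = A.shape[0]
    blocks, Ak = [], np.eye(n)
    for _ in range(steps):
        blocks.append(C @ Ak)
        Ak = Ak @ A
    O = np.vstack(blocks)                  # (steps*m) x n candidate rows
    chosen = []
    for p in pivot_order(O.T):             # pivots index rows of O
        s = p % C.shape[0]                 # map back to a sensor index
        if s not in chosen:
            chosen.append(s)
        if len(chosen) == k:
            break
    return chosen
```

Comparing `np.linalg.cond` of the observability matrix built from the chosen rows against that of an arbitrary subset then gives a condition-number scheduling metric in the spirit described above.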
Contributors: Ilkturk, Utku (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Renaut, Rosemary (Committee member) / Armbruster, Dieter (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Inverse problems model real-world phenomena from data, where the data are often noisy and models contain errors. This leads to instabilities, multiple solution vectors and thus ill-posedness. To solve ill-posed inverse problems, regularization is typically used as a penalty function to induce stability and allow for the incorporation of a priori information about the desired solution. In this thesis, high-order regularization techniques are developed for image and function reconstruction from noisy or misleading data. Specifically, the incorporation of the Polynomial Annihilation operator allows for the accurate exploitation of the sparse representation of each function in the edge domain.

This dissertation tackles three main problems through the development of novel reconstruction techniques: (i) reconstructing one and two dimensional functions from multiple measurement vectors using variance based joint sparsity when a subset of the measurements contain false and/or misleading information, (ii) approximating discontinuous solutions to hyperbolic partial differential equations by enhancing typical solvers with l1 regularization, and (iii) reducing model assumptions in synthetic aperture radar image formation, specifically for the purpose of speckle reduction and phase error correction. While the common thread tying these problems together is the use of high order regularization, the defining characteristics of each of these problems create unique challenges.

Fast and robust numerical algorithms are also developed so that these problems can be solved efficiently without requiring fine tuning of parameters. Indeed, the numerical experiments presented in this dissertation strongly suggest that the new methodology provides more accurate and robust solutions to a variety of ill-posed inverse problems.
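To illustrate the edge-domain sparsity this abstract relies on, here is a toy sketch (not the dissertation's code) using the second-order finite-difference stencil, which on a uniform grid acts as a second-order annihilation operator: it annihilates locally linear data, so its output is sparse except near jumps.

```python
import numpy as np

# A piecewise-linear test function: slope 2 everywhere, jump of -1 at x = 0.5.
x = np.linspace(0, 1, 200)
f = np.where(x < 0.5, 2 * x, 2 * x - 1.0)

# Second-order annihilation stencil: exactly zero on linear pieces,
# O(1) only in the cells straddling the jump -- the "edge domain" sparsity.
L = np.diff(f, n=2)
support = np.flatnonzero(np.abs(L) > 1e-8)   # nonzeros cluster at the jump
```

The support of `L` consists of just the two stencils that straddle the discontinuity, which is what makes an l1 penalty on such an operator effective.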
Contributors: Scarnati, Theresa (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Gardner, Carl (Committee member) / Sanders, Toby (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The concentration factor edge detection method was developed to compute the locations and values of jump discontinuities in a piecewise-analytic function from its first few Fourier series coefficients. The method approximates the singular support of a piecewise smooth function using an altered Fourier conjugate partial sum. The accuracy and characteristic features of the resulting jump function approximation depend on these filters, known as concentration factors. Recent research showed that these concentration factors could be designed using a flexible iterative framework, improving upon the overall accuracy and robustness of the method, especially in the case where some Fourier data are untrustworthy or altogether missing. Hypothesis testing (HT) methods were used to determine how well the original concentration factor method could locate edges using noisy Fourier data. This thesis combines the iterative design aspect of concentration factor design and hypothesis testing by presenting a new algorithm that incorporates multiple concentration factors into one statistical test, which proves more effective at determining jump discontinuities than the previous HT methods. This thesis also examines how the quantity and location of Fourier data affect the accuracy of HT methods. Numerical examples are provided.
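A hedged sketch of the basic concentration factor method (illustrative; the particular test function, N, and normalization constant are choices made for this example, not from the thesis): the conjugate partial sum weighted by the first-order trigonometric factor concentrates at the jumps of a function known only through its Fourier coefficients.

```python
import numpy as np

N = 64
k = np.arange(-N, N + 1)

# Exact Fourier coefficients of a test function: f = 1 on (0, pi/2) and 0
# elsewhere on [-pi, pi), so f has jumps of +1 at x = 0 and -1 at x = pi/2.
fhat = np.zeros(2 * N + 1, dtype=complex)
nz = k != 0
fhat[nz] = (1 - np.exp(-1j * k[nz] * np.pi / 2)) / (2j * np.pi * k[nz])

# First-order trigonometric concentration factor, normalized by Si(pi) ~ 1.85194.
sigma = np.pi * np.sin(np.pi * np.abs(k) / N) / 1.85194

# Conjugate-sum jump approximation: i * sum_k sign(k) sigma(|k|/N) fhat_k e^{ikx};
# it tends to zero away from edges and to the jump value [f](x) at each edge.
x = np.linspace(-np.pi, np.pi, 1024, endpoint=False)
jump = np.real(1j * np.exp(1j * np.outer(x, k)) @ (np.sign(k) * sigma * fhat))
```

Peaks of `jump` near 0 and pi/2 recover both the locations and the signed values (+1 and -1) of the two discontinuities.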
Contributors: Lubold, Shane Michael (Author) / Gelb, Anne (Thesis director) / Cochran, Doug (Committee member) / Viswanathan, Aditya (Committee member) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
In applications such as Magnetic Resonance Imaging (MRI), data are acquired as Fourier samples. Since the underlying images are only piecewise smooth, standard reconstruction techniques will yield the Gibbs phenomenon, which can lead to misdiagnosis. Although filtering will reduce the oscillations at jump locations, it can often have the adverse effect of blurring at these critical junctures, which can also lead to misdiagnosis. Incorporating prior information into reconstruction methods can help reconstruct a sharper solution. For example, compressed sensing (CS) algorithms exploit the expected sparsity of some features of the image. In this thesis, we develop a method to exploit the sparsity in the edges of the underlying image. We design a convex optimization problem that exploits this sparsity to provide an approximation of the underlying image. Our method successfully reduces the Gibbs phenomenon with only minimal "blurring" at the discontinuities. In addition, we see a high rate of convergence in smooth regions.
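A minimal sketch of this kind of convex edge-sparsity reconstruction in one dimension (illustrative, not the thesis's exact formulation; the ADMM splitting, parameter values, and function names are choices made for this example): minimize a Fourier-data fidelity term plus an l1 penalty on the first difference. Because both the DFT and the circular difference operator diagonalize in the Fourier basis, the x-update is a pointwise division.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_recon(yhat, mask, lam=0.01, rho=1.0, iters=800):
    """min_x (1/2n)||M(Fx - yhat)||^2 + lam ||Dx||_1 by ADMM, with F the DFT,
    M a 0/1 frequency mask (the DC coefficient must be sampled) and D the
    circular first difference."""
    n = mask.size
    d = np.exp(2j * np.pi * np.arange(n) / n) - 1.0   # eigenvalues of D
    D = lambda v: np.real(np.fft.ifft(d * np.fft.fft(v)))
    x = np.real(np.fft.ifft(mask * yhat))             # zero-filled start
    z, u = D(x), np.zeros(n)
    for _ in range(iters):
        # x-update: normal equations are diagonal in the Fourier basis.
        rhs = mask * yhat + rho * np.conj(d) * np.fft.fft(z - u)
        x = np.real(np.fft.ifft(rhs / (mask + rho * np.abs(d) ** 2)))
        z = soft(D(x) + u, lam / rho)                 # edge-sparsity prox
        u += D(x) - z                                 # dual ascent
    return x
```

On undersampled Fourier data of a piecewise-constant signal this removes the Gibbs oscillations that a zero-filled inverse DFT exhibits.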
Contributors: Wasserman, Gabriel Kanter (Author) / Gelb, Anne (Thesis director) / Cochran, Doug (Committee member) / Archibald, Rick (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Description
Imaging technologies such as Magnetic Resonance Imaging (MRI) and Synthetic Aperture Radar (SAR) collect Fourier data and then process the data to form images. Because images are piecewise smooth, the Fourier partial sum (i.e., direct inversion of the Fourier data) yields a poor approximation, with spurious oscillations forming at the interior edges of the image and reduced accuracy overall. This is the well-known Gibbs phenomenon, and many attempts have been made to rectify its effects. Previous algorithms exploited the sparsity of edges in the underlying image as a constraint with which to optimize for a solution with reduced spurious oscillations. While the sparsity enforcing algorithms are fairly effective, they are sensitive to several issues, including undersampling and noise. Because of the piecewise nature of the underlying image, we theorize that projecting the solution onto the wavelet basis would increase the overall accuracy. Thus in this investigation we develop an algorithm that continues to exploit the sparsity of edges in the underlying image while also seeking to represent the solution using the wavelet rather than Fourier basis. Our method successfully decreases the effect of the Gibbs phenomenon and provides a good approximation for the underlying image. The primary advantages of our method are its robustness to undersampling and to perturbations in the optimization parameters.
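The motivation for swapping the representation basis can be seen in a toy comparison (illustrative code, not the thesis implementation): a piecewise-smooth signal needs only a handful of Haar wavelet coefficients, while its Fourier expansion has slowly decaying coefficients at essentially every frequency.

```python
import numpy as np

def haar(v):
    """Full multilevel orthonormal Haar transform; len(v) must be a power of 2."""
    v = v.astype(float).copy()
    out = []
    while v.size > 1:
        a = (v[0::2] + v[1::2]) / np.sqrt(2)   # averages (coarse scale)
        d = (v[0::2] - v[1::2]) / np.sqrt(2)   # details (fine scale)
        out.append(d)
        v = a
    out.append(v)
    return np.concatenate(out[::-1])

f = np.repeat([0.0, 1.0, -0.5, 2.0], 64)       # piecewise constant, n = 256
w = haar(f)
fhat = np.fft.fft(f) / np.sqrt(f.size)         # unitary-scaled DFT

haar_big = np.sum(np.abs(w) > 1e-8)            # nonzero wavelet coefficients
four_big = np.sum(np.abs(fhat) > 1e-8)         # nonzero Fourier coefficients
```

Both transforms are norm-preserving, so the comparison of nonzero counts is fair; the wavelet side is dramatically sparser, which is what an l1-regularized reconstruction can exploit.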
Contributors: Fan, Jingjing (Co-author) / Mead, Ryan (Co-author) / Gelb, Anne (Thesis director) / Platte, Rodrigo (Committee member) / Archibald, Richard (Committee member) / School of Music (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2015-12
Description
High-order methods are known for their accuracy and computational performance when applied to solving partial differential equations and have widespread use in representing images compactly. Nonetheless, high-order methods have difficulty representing functions containing discontinuities or functions having slow spectral decay in the chosen basis. Certain sensing techniques such as MRI and SAR provide data in terms of Fourier coefficients, and thus prescribe a natural high-order basis. The field of compressed sensing has introduced a set of techniques based on $\ell^1$ regularization that promote sparsity and facilitate working with functions having discontinuities. In this dissertation, high-order methods and $\ell^1$ regularization are used to address three problems: reconstructing piecewise smooth functions from sparse and noisy Fourier data, recovering edge locations in piecewise smooth functions from sparse and noisy Fourier data, and reducing time-stepping constraints when numerically solving certain time-dependent hyperbolic partial differential equations.
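A generic workhorse behind this style of $\ell^1$ regularization is iterative soft thresholding. The following is a hedged sketch of that standard algorithm (illustrative; the dissertation's solvers are more sophisticated, and the problem sizes and parameters here are invented for the example).

```python
import numpy as np

def ista(A, b, lam=0.02, iters=3000):
    """min_x 0.5*||Ax - b||^2 + lam*||x||_1 by iterative soft thresholding
    (proximal gradient descent with step 1/L)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)              # gradient of the smooth part
        v = x - g / L                      # gradient step
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)   # l1 prox
    return x
```

With an underdetermined Gaussian measurement matrix and a sparse ground truth, this recovers both the support and (up to a small shrinkage bias) the values of the nonzero entries.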
Contributors: Denker, Dennis (Author) / Gelb, Anne (Thesis advisor) / Archibald, Richard (Committee member) / Armbruster, Dieter (Committee member) / Boggess, Albert (Committee member) / Platte, Rodrigo (Committee member) / Sanders, Toby (Committee member) / Arizona State University (Publisher)
Created: 2016