Matching Items (17)
Description
This dissertation involves three problems that are all related by the use of the singular value decomposition (SVD) or generalized singular value decomposition (GSVD). The specific problems are (i) derivation of a generalized singular value expansion (GSVE), (ii) analysis of the properties of the chi-squared method for regularization parameter selection in the case of nonnormal data and (iii) formulation of a partial canonical correlation concept for continuous time stochastic processes. The finite dimensional SVD has an infinite dimensional generalization to compact operators. However, the form of the finite dimensional GSVD developed in, e.g., Van Loan does not extend directly to infinite dimensions as a result of a key step in the proof that is specific to the matrix case. Thus, the first problem of interest is to find an infinite dimensional version of the GSVD. One such GSVE for compact operators on separable Hilbert spaces is developed. The second problem concerns regularization parameter estimation. The chi-squared method for nonnormal data is considered. A form of the optimized regularization criterion that pertains to measured data or signals with nonnormal noise is derived. Large sample theory for phi-mixing processes is used to derive a central limit theorem for the chi-squared criterion that holds under certain conditions. Departures from normality manifest in the need for a possibly different scale factor in the normalization than would be used under the assumption of normality. The consequences of our large sample work are illustrated by empirical experiments. For the third problem, a new approach is examined for studying the relationships between a collection of functional random variables. The idea is based on the work of Sunder that provides mappings to connect the elements of algebraic and orthogonal direct sums of subspaces in a Hilbert space.
When combined with a key isometry associated with a particular Hilbert space indexed stochastic process, this leads to a useful formulation for situations that involve the study of several second order processes. In particular, using our approach with two processes provides an independent derivation of the functional canonical correlation analysis (CCA) results of Eubank and Hsing. For more than two processes, a rigorous derivation of the functional partial canonical correlation analysis (PCCA) concept that applies to both finite and infinite dimensional settings is obtained.
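The chi-squared selection principle can be illustrated with a minimal numerical sketch (not the dissertation's code). One common form of the criterion chooses the Tikhonov parameter so that the regularization functional at its minimizer matches its expected value, here taken to be the number of data points m under unit-variance normal noise; the test matrix and noise model below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 50
A = rng.standard_normal((m, n)) @ np.diag(0.9 ** np.arange(n))  # ill-conditioned test matrix
x_true = rng.standard_normal(n)
b = A @ x_true + rng.standard_normal(m)        # additive noise ~ N(0, 1)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
beta = U.T @ b
perp2 = b @ b - beta @ beta                    # energy of b outside range(A)

def J(lam):
    """Tikhonov functional at its minimizer, in closed form via the SVD."""
    return np.sum(beta ** 2 * lam ** 2 / (s ** 2 + lam ** 2)) + perp2

# Chi-squared principle (one common form): J at the minimizer should match its
# expected value m under unit-variance noise; J is increasing in lam, so bisect.
lo, hi = 1e-8, 1e8
for _ in range(200):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if J(mid) < m else (lo, mid)
lam_chi2 = np.sqrt(lo * hi)
x_lam = Vt.T @ (s * beta / (s ** 2 + lam_chi2 ** 2))
```

Bisection suffices here because the functional is monotone in the parameter, so the target value is crossed exactly once.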
Contributors: Huang, Qing (Author) / Eubank, Randall (Thesis advisor) / Renaut, Rosemary (Thesis advisor) / Cochran, Douglas (Committee member) / Gelb, Anne (Committee member) / Young, Dennis (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled for use at each time step. To determine which sensors to use, various metrics have been suggested. One such metric is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffice to recover the initial state. However, to employ observability for sensor scheduling, this binary definition must be expanded so that one can measure how observable a system is under a particular measurement scheme, i.e., one needs a metric of observability. Most methods utilizing an observability metric address sensor selection rather than sensor scheduling. In this dissertation we present a new approach that employs observability for sensor scheduling, using the condition number of the observability matrix as the metric and column subset selection to create an algorithm that chooses which sensors to use at each time step. To this end we use a rank-revealing QR factorization algorithm to select sensors. Several numerical experiments demonstrate the performance of the proposed scheme.
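The scheduling idea can be sketched in a few lines: at each time step, a column-pivoted (rank-revealing) QR factorization picks a well-conditioned subset of sensor rows. The system, sensor matrix, and greedy per-step strategy below are illustrative assumptions, not the dissertation's algorithm.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(1)
n, p, k = 4, 10, 4                              # states, candidate sensors, sensors per step
A = rng.standard_normal((n, n)) / np.sqrt(n)    # state transition matrix
C = rng.standard_normal((p, n))                 # each row = one candidate sensor

def select_sensors(A, C, k, steps=3):
    """Greedy sketch: at each step, pick k sensor rows by column-pivoted QR on
    (C @ A^t)^T, i.e. choose well-conditioned rows of that observability block."""
    chosen = []
    At = np.eye(A.shape[0])
    for _ in range(steps):
        block = C @ At                          # candidate measurement rows at this step
        _, _, piv = qr(block.T, pivoting=True)  # column pivots = preferred sensor rows
        chosen.append(sorted(piv[:k].tolist()))
        At = At @ A
    return chosen

schedule = select_sensors(A, C, k)
```

The pivoting order returned by `scipy.linalg.qr` greedily maximizes the conditioning of the selected columns, which is the role a rank-revealing QR plays in the scheme described above.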
Contributors: Ilkturk, Utku (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Renaut, Rosemary (Committee member) / Armbruster, Dieter (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
A specific species of the genus Geobacter exhibits useful electrical properties when processing a molecule often found in wastewater. A team at ASU including Dr. César Torres and Dr. Sudeep Popat used that species to create a special type of solid oxide fuel cell we refer to as a microbial fuel cell. Possible chemical processes and properties of the reactions used by the Geobacter are investigated indirectly by taking Electrochemical Impedance Spectroscopy measurements of the electrode-electrolyte interface of the microbial fuel cell to obtain the fuel cell's complex impedance at specific frequencies. Investigating the multiple polarization processes that give rise to the measured impedance values is difficult to do directly, so the distribution function of relaxation times (DRT) is examined instead. The DRT is related to the measured complex impedance values through a general, non-physical equivalent circuit model. That model is originally given in terms of a Fredholm integral equation with a non-square-integrable kernel, which makes the inverse problem of determining the DRT from the impedance measurements ill-posed. The original integral equation is rewritten in new variables as an equation relating the complex impedance to the convolution of a function based upon the original integral kernel with a related but separate distribution function, which we call the convolutional distribution function. This convolutional equation is solved by reducing the convolution to a pointwise product using the Fourier transform and then solving the inverse problem by pointwise division and application of a filter function (equivalent to regularization). The inverse Fourier transform is then taken to obtain the convolutional distribution function.
In the literature the convolutional distribution function is then examined, and certain values of a specific, less general equivalent circuit model are calculated from it, from which aspects of the original chemical processes are inferred. We attempted instead to determine the original DRT directly from the calculated convolutional distribution function. This method proved less useful in practice because of values fixed at the time of the experiment, which meant the original DRT could be recovered only in a window that would not normally contain the desired information. This limits any attempt to extend the solution for the convolutional distribution function to the original DRT. Further research may determine a method for interpreting the convolutional distribution function without an equivalent circuit model, as is done with the regularization method used to solve directly for the original DRT.
Contributors: Baker, Robert Simpson (Author) / Renaut, Rosemary (Thesis director) / Kostelich, Eric (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Deconvolution of noisy data is an ill-posed problem and requires some form of regularization to stabilize its solution. Tikhonov regularization is the most common method used, but it depends on the choice of a regularization parameter λ, which must generally be estimated using one of several common methods. These methods can be computationally intensive, so I consider their behavior when only a portion of the sampled data is used. I show that the results of these methods converge as the sampling resolution increases, and use this to suggest a method of downsampling to estimate λ. I then present numerical results showing that this method can be feasible, and propose future avenues of inquiry.
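A sketch of the downsampling idea, with a discrepancy-style parameter choice rule standing in for the thesis's selection methods: the same continuous deblurring problem is discretized at two resolutions, and the two λ estimates land close together. All problem constants are illustrative assumptions.

```python
import numpy as np

def blur_problem(n, sigma, rng):
    """Hypothetical 1-D Gaussian deblurring problem on [0, 1] with n samples."""
    t = (np.arange(n) + 0.5) / n
    A = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 0.05 ** 2)) / n
    x = np.exp(-(t - 0.4) ** 2 / 0.01) + 0.5 * np.sin(2 * np.pi * t)
    return A, A @ x + sigma * rng.standard_normal(n)

def discrepancy_lambda(A, b, sigma, tau=1.2):
    """Bisect for the lambda whose Tikhonov residual matches tau * m * sigma^2
    (tau slightly above 1 is a common safety factor)."""
    U, s, _ = np.linalg.svd(A)
    beta = U.T @ b
    target = tau * len(b) * sigma ** 2
    def residual2(lam):
        return np.sum((lam ** 2 / (s ** 2 + lam ** 2) * beta) ** 2)
    lo, hi = 1e-10, 1e2
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if residual2(mid) < target else (lo, mid)
    return np.sqrt(lo * hi)

rng = np.random.default_rng(3)
lam_coarse = discrepancy_lambda(*blur_problem(100, 1e-3, rng), 1e-3)
lam_fine = discrepancy_lambda(*blur_problem(400, 1e-3, rng), 1e-3)
```

Because the residual is monotone in λ, bisection on a geometric grid is robust, and the coarse-problem estimate can be reused on the fine problem at a fraction of the cost.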
Contributors: Hansen, Jakob Kristian (Author) / Renaut, Rosemary (Thesis director) / Cochran, Douglas (Committee member) / Barrett, The Honors College (Contributor) / School of Music (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
The solution of the linear system of equations $Ax\approx b$ arising from the discretization of an ill-posed integral equation with a square integrable kernel is considered. The solution by means of Tikhonov regularization, in which $x$ is found as the minimizer of $J(x)=\{ \|Ax -b\|_2^2 + \lambda^2 \|L x\|_2^2\}$, introduces the unknown regularization parameter $\lambda$, which trades off the fidelity of the solution data fit against its smoothing norm, determined by the choice of $L$. The Generalized Discrepancy Principle (GDP) and the Unbiased Predictive Risk Estimator (UPRE) are methods for finding $\lambda$ given prior conditions on the noise in the measurements $b$. Here we consider the case $L=I$, and hence use the relationship between the singular value expansion and the singular value decomposition for square integrable kernels to prove that the GDP and UPRE estimates yield a convergent sequence for $\lambda$ with increasing problem size. Hence the estimate of $\lambda$ for a large problem may be found by down-sampling to a smaller problem, or to a set of smaller problems, and applying these estimators more efficiently on the smaller problems. In consequence, the large-scale problem can be solved in a single step with the parameter found from the down-sampled problem(s).
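The convergence claim can be illustrated for the UPRE: the sketch below (an assumption-laden toy, not the thesis code) computes the UPRE-selected $\lambda$ via the SVD for the same underlying blur problem at two problem sizes and compares the two estimates.

```python
import numpy as np

def blur_problem(n, sigma, rng):
    """Hypothetical 1-D Gaussian deblurring problem on [0, 1] with n samples."""
    t = (np.arange(n) + 0.5) / n
    A = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 0.03 ** 2)) / n
    x = np.exp(-(t - 0.4) ** 2 / 0.01) + 0.5 * np.sin(2 * np.pi * t)
    return A, A @ x + sigma * rng.standard_normal(n)

def upre_lambda(A, b, sigma, grid):
    """Minimize the unbiased predictive risk estimator over a grid of lambdas."""
    U, s, _ = np.linalg.svd(A)
    beta = U.T @ b
    m = len(b)
    def upre(lam):
        resid2 = np.sum((lam ** 2 / (s ** 2 + lam ** 2) * beta) ** 2)
        trace = np.sum(s ** 2 / (s ** 2 + lam ** 2))   # trace of the influence matrix
        return resid2 + 2 * sigma ** 2 * trace - m * sigma ** 2
    return grid[int(np.argmin([upre(g) for g in grid]))]

rng = np.random.default_rng(4)
grid = np.logspace(-6, 1, 150)
lam_small = upre_lambda(*blur_problem(100, 1e-2, rng), 1e-2, grid)
lam_large = upre_lambda(*blur_problem(400, 1e-2, rng), 1e-2, grid)
```

With the quadrature weight built into the matrix, the discrete singular values approximate the continuous ones, which is the mechanism behind the convergence of the parameter estimates.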
Contributors: Horst, Michael Jacob (Author) / Renaut, Rosemary (Thesis director) / Cochran, Douglas (Committee member) / Wang, Yang (Committee member) / Barrett, The Honors College (Contributor) / School of Music (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Description
Multiple-channel detection is considered in the context of a sensor network where data can be exchanged directly between sensor nodes that share a common edge in the network graph. Optimal statistical tests used for signal source detection with multiple noisy sensors, such as the Generalized Coherence (GC) estimate, use pairwise measurements from every pair of sensors in the network and are thus only applicable when the network graph is completely connected, or when data are accumulated at a common fusion center. This thesis presents and exploits a new method that uses maximum-entropy techniques to estimate measurements between pairs of sensors that are not in direct communication, thereby enabling the use of the GC estimate in incompletely connected sensor networks. The research in this thesis culminates in a main conjecture supported by statistical tests regarding the topology of the incomplete network graphs.
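For concreteness, a sketch of the generalized coherence estimate itself, computed as one minus the determinant of the Gram matrix of the normalized channel vectors; the three-channel data model below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 512                                    # samples per channel

def gc_estimate(channels):
    """GC estimate: 1 - det(R), where R is the Gram matrix of the normalized
    channels; it lies in [0, 1] and grows when the channels share a signal."""
    X = np.array([x / np.linalg.norm(x) for x in channels])
    return 1.0 - np.linalg.det(X @ X.T)

noise_only = [rng.standard_normal(n) for _ in range(3)]
s = rng.standard_normal(n)                 # common signal present in every channel
signal_present = [s + 0.5 * rng.standard_normal(n) for _ in range(3)]

gc_h0 = gc_estimate(noise_only)
gc_h1 = gc_estimate(signal_present)
```

Since R is built from all pairwise inner products, every entry must be available, which is why the thesis's maximum-entropy estimation of the missing pairs is needed before the GC estimate can be formed on an incompletely connected network.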
Contributors: Crider, Lauren Nicole (Author) / Cochran, Douglas (Thesis director) / Renaut, Rosemary (Committee member) / Kosut, Oliver (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2014-05
Description
The properties of divergence-free vector field interpolants are explored on uniform and scattered nodes, along with their application to fluid flow problems. These interpolants may be applied to physical problems that require the approximant to have zero divergence, such as the velocity field in the incompressible Navier-Stokes equations and the magnetic and electric fields in Maxwell's equations. In addition, the methods studied here are meshfree, and are suitable for problems defined on complex domains, where mesh generation is computationally expensive or inaccurate, or for problems where the data is only available at scattered locations.

The contributions of this work include a detailed comparison between standard and divergence-free radial basis approximations, a study of the Lebesgue constants for divergence-free approximations and their dependence on node placement, and an investigation of the flat limit of divergence-free interpolants. Finally, numerical solvers for the incompressible Navier-Stokes equations in primitive variables are implemented using discretizations based on traditional and divergence-free kernels. The numerical results are compared to reference solutions obtained with a spectral method.
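A common construction of such divergence-free interpolants uses a matrix-valued kernel built from a scalar radial function, Phi = (-Laplacian I + grad grad^T) phi, whose columns are divergence-free vector fields. A minimal sketch with a Gaussian phi (shape parameter chosen arbitrarily), plus a numerical check of the divergence-free property:

```python
import numpy as np

eps = 2.0   # illustrative shape parameter

def div_free_kernel(x, y):
    """2x2 divergence-free kernel derived from phi = exp(-eps^2 (x^2 + y^2)):
    entries are [-phi_yy, phi_xy; phi_xy, -phi_xx], written out analytically."""
    E = np.exp(-eps ** 2 * (x ** 2 + y ** 2))
    return np.array([
        [(2 * eps ** 2 - 4 * eps ** 4 * y ** 2) * E, 4 * eps ** 4 * x * y * E],
        [4 * eps ** 4 * x * y * E, (2 * eps ** 2 - 4 * eps ** 4 * x ** 2) * E],
    ])

# Check numerically (central differences) that the first column has zero
# divergence at a sample point: d/dx Phi[0,0] + d/dy Phi[1,0] = 0.
h, (x0, y0) = 1e-5, (0.3, -0.2)
div_col0 = ((div_free_kernel(x0 + h, y0)[0, 0] - div_free_kernel(x0 - h, y0)[0, 0])
            + (div_free_kernel(x0, y0 + h)[1, 0] - div_free_kernel(x0, y0 - h)[1, 0])) / (2 * h)
```

Interpolating with linear combinations of these kernel columns then yields a vector field that is divergence-free by construction.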
Contributors: Araujo Mitrano, Arthur (Author) / Platte, Rodrigo (Thesis advisor) / Wright, Grady (Committee member) / Welfert, Bruno (Committee member) / Gelb, Anne (Committee member) / Renaut, Rosemary (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Tikhonov regularization for projected solutions of large-scale ill-posed problems is considered. The Golub-Kahan iterative bidiagonalization is used to project the problem onto a subspace, and regularization is then applied to find a subspace approximation to the full problem. Determination of the regularization parameter for the projected problem by unbiased predictive risk estimation, generalized cross validation, and discrepancy principle techniques is investigated. It is shown that the regularization parameter obtained by the unbiased predictive risk estimator can provide a good estimate which can be used for a full problem that is moderately to severely ill-posed. A similar analysis provides the weight parameter for the weighted generalized cross validation such that the approach is also useful in these cases, and also explains why the generalized cross validation without weighting is not always useful. All results are independent of whether systems are over- or underdetermined. Numerical simulations for standard one-dimensional test problems and two-dimensional data, for both image restoration and tomographic image reconstruction, support the analysis and validate the techniques. The size of the projected problem is found using an extension of a noise revealing function for the projected problem [I. Hnětynková, M. Plešinger, and Z. Strakoš, BIT Numer. Math., 49 (2009), pp. 669-696]. Furthermore, an iteratively reweighted regularization approach for edge preserving regularization is extended for projected systems, providing stabilization of the solutions of the projected systems and reducing dependence on the determination of the size of the projected subspace.
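The projection step can be sketched as follows: a few Golub-Kahan bidiagonalization steps reduce A to a small bidiagonal matrix B, and Tikhonov regularization is applied to the projected problem. The test problem, subspace size, and fixed λ below are illustrative assumptions (choosing them well is precisely the subject of the paper).

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, k = 200, 100, 15
t = np.linspace(0, 1, n)
A = np.exp(-(np.linspace(0, 1, m)[:, None] - t[None, :]) ** 2 / 0.005) / n  # blur operator
x_true = np.sin(np.pi * t)
b = A @ x_true + 1e-4 * rng.standard_normal(m)

# Golub-Kahan bidiagonalization (no reorthogonalization, for brevity):
# A V_k = U_{k+1} B_k with B_k lower bidiagonal and U_{k+1}^T b = beta1 e1.
U = np.zeros((m, k + 1))
V = np.zeros((n, k))
B = np.zeros((k + 1, k))
beta1 = np.linalg.norm(b)
U[:, 0] = b / beta1
v = A.T @ U[:, 0]
for i in range(k):
    if i > 0:
        v = A.T @ U[:, i] - B[i, i - 1] * V[:, i - 1]
    B[i, i] = np.linalg.norm(v)
    V[:, i] = v / B[i, i]
    u = A @ V[:, i] - B[i, i] * U[:, i]
    B[i + 1, i] = np.linalg.norm(u)
    U[:, i + 1] = u / B[i + 1, i]

# Tikhonov on the small projected problem: min ||B y - beta1 e1||^2 + lam^2 ||y||^2.
lam = 1e-3
rhs = np.zeros(k + 1)
rhs[0] = beta1
y = np.linalg.solve(B.T @ B + lam ** 2 * np.eye(k), B.T @ rhs)
x_proj = V @ y
```

The regularized solve now involves only a (k+1) x k bidiagonal matrix, so parameter-selection rules can be applied cheaply to the projected problem.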

Contributors: Renaut, Rosemary (Author)
Created: 2017-03-08
Description
Solving partial differential equations on surfaces has many applications, including modeling chemical diffusion, pattern formation, geophysics, and texture mapping. This dissertation presents two techniques for solving time-dependent partial differential equations on various surfaces using the partition of unity method. A novel spectral cubed sphere method that utilizes the windowed Fourier technique is presented and used both for approximating functions on spherical domains and for solving partial differential equations. The spectral cubed sphere method is applied to solve the transport equation as well as the diffusion equation on the unit sphere. The second approach is a partition of unity method with local radial basis function approximations. This technique is also used to explore the effect of the node distribution, as it is well known that node choice plays an important role in the accuracy and stability of an approximation. A greedy algorithm is implemented to generate good interpolation nodes using the column-pivoted QR factorization. The partition of unity radial basis function method is applied to solve the diffusion equation on the sphere as well as a system of reaction-diffusion equations on multiple surfaces, including the surface of a red blood cell, a torus, and the Stanford bunny. The accuracy and stability of both methods are investigated.
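The greedy node-selection step can be sketched with a column-pivoted QR factorization on a flat (planar) candidate set; the kernel, shape parameter, and node counts below are illustrative, and the curved surfaces used in the dissertation are not reproduced here.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(7)
cand = rng.uniform(-1, 1, size=(400, 2))       # fine candidate set in a square
eps = 3.0                                      # illustrative shape parameter

def rbf_matrix(X, Y):
    """Gaussian RBF evaluation matrix between point sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps ** 2 * d2)

k = 60
Phi = rbf_matrix(cand, cand)
_, _, piv = qr(Phi, pivoting=True)             # greedy column (= node) selection
nodes = cand[piv[:k]]

# Interpolate a smooth test function on the selected nodes and check the
# resulting interpolant at all the candidate points.
f = lambda X: np.cos(2 * X[:, 0]) * np.sin(X[:, 1])
coef = np.linalg.solve(rbf_matrix(nodes, nodes), f(nodes))
err = np.abs(rbf_matrix(cand, nodes) @ coef - f(cand)).max()
```

The QR pivots pick columns that stay as linearly independent as possible, which translates into well-spread, well-conditioned interpolation nodes.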
Contributors: Islas, Genesis Juneiva (Author) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Espanol, Malena (Committee member) / Kao, Ming-Hung (Committee member) / Renaut, Rosemary (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
This thesis addresses the problem of approximating analytic functions over general and compact multidimensional domains. Although the methods we explore can be used in complex domains, most of the tests are performed on the interval $[-1,1]$ and the square $[-1,1]\times[-1,1]$. Using Fourier and polynomial frame approximations on an extended domain, well-conditioned methods can be formulated. In particular, these methods provide exponential decay of the error down to a finite but user-controlled tolerance $\epsilon>0$. Additionally, this thesis explores two implementations of the frame approximation: a singular value decomposition (SVD)-regularized least-squares fit as described by Adcock and Shadrin in 2022, and a column and row selection method that leverages QR factorizations to reduce the data needed in the approximation. Moreover, strategies to reduce the complexity of the approximation problem by exploiting randomized linear algebra in low-rank algorithms are also explored, including the AZ algorithm described by Coppe and Huybrechs in 2020.
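A minimal sketch of the SVD-regularized least-squares frame fit: a function on $[-1,1]$ is approximated with Fourier modes periodic on an extended interval, and singular values below a tolerance are discarded. The extension length, mode count, and tolerance below are illustrative assumptions.

```python
import numpy as np

# Fourier extension / frame approximation on [-1, 1] with modes periodic on
# [-T, T], solved by truncated-SVD least squares with tolerance tol.
T, nmodes, npts, tol = 2.0, 41, 200, 1e-10
x = np.linspace(-1, 1, npts)
ks = np.arange(-(nmodes // 2), nmodes // 2 + 1)
A = np.exp(1j * np.pi * np.outer(x, ks) / T) / np.sqrt(2 * T)

f = np.exp(x) * np.cos(3 * x)              # smooth but non-periodic on [-1, 1]

# Truncated-SVD least squares: discard singular values below tol * s_max,
# which regularizes the ill-conditioned frame system.
U, s, Vh = np.linalg.svd(A, full_matrices=False)
keep = s > tol * s[0]
coef = Vh[keep].conj().T @ ((U[:, keep].conj().T @ f) / s[keep])
approx = np.real(A @ coef)
err = np.abs(approx - f).max()
```

Oversampling (more points than modes) together with the truncation keeps the coefficients bounded, so the error decays down to a level set by the tolerance rather than blowing up with the frame's ill-conditioning.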
Contributors: Guo, Maosheng (Author) / Platte, Rodrigo (Thesis advisor) / Espanol, Malena (Committee member) / Renaut, Rosemary (Committee member) / Arizona State University (Publisher)
Created: 2023