Matching Items (13)

Practicality of the Convolutional Solution Method of the Polarization Estimation Inverse Problem for Solid Oxide Fuel Cells

Description

A specific species of the genus Geobacter exhibits useful electrical properties when processing a molecule often found in wastewater. A team at ASU including Dr. César Torres and Dr. Sudeep Popat used that species to create a special type of solid oxide fuel cell we refer to as a microbial fuel cell. The possible chemical processes and properties of the reactions used by the Geobacter are investigated indirectly by taking Electrochemical Impedance Spectroscopy measurements of the electrode-electrolyte interface of the microbial fuel cell to obtain the value of the fuel cell's complex impedance at specific frequencies. The multiple polarization processes which give rise to the measured impedance values are difficult to investigate directly, so the distribution function of relaxation times (DRT) is examined instead. The DRT is related to the measured complex impedance values using a general, non-physical equivalent circuit model. That model is originally given in terms of a Fredholm integral equation with a non-square-integrable kernel, which makes the inverse problem of determining the DRT from the impedance measurements ill-posed. By a change of variables, the original integral equation is rewritten as an equation relating the complex impedance to the convolution of a function based upon the original integral kernel with a related but separate distribution function, which we call the convolutional distribution function. This convolutional equation is solved by reducing the convolution to a pointwise product using the Fourier transform and then solving the inverse problem by pointwise division and application of a filter function (equivalent to regularization); the inverse Fourier transform then recovers the convolutional distribution function. In the literature, the convolutional distribution function is then examined, and certain parameters of a specific, less general equivalent circuit model are calculated, from which aspects of the original chemical processes are derived. We instead attempted to determine the original DRT directly from the calculated convolutional distribution function. This method proved less useful in practice: certain values fixed at the time of the experiment mean the original DRT can be recovered only within a window that would not normally contain the desired information. This limits any attempt to extend the solution for the convolutional distribution function to the original DRT. Further research may determine a method for interpreting the convolutional distribution function without an equivalent circuit model, as is done with the regularization method used to solve directly for the original DRT.
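
As a concrete illustration of the inversion step just described, the following Python sketch deconvolves synthetic data by pointwise division in Fourier space, with a low-pass filter standing in for regularization. The sech-shaped kernel, the Gaussian test distribution, the noise level, and the filter cutoff are all illustrative assumptions, not values from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1024
    dx = 20.0 / n
    x = (np.arange(n) - n // 2) * dx               # log-frequency-type variable

    kernel = 1.0 / (2.0 * np.cosh(x))              # illustrative convolution kernel
    gamma_true = np.exp(-((x - 1.0) ** 2) / 0.1)   # synthetic convolutional distribution

    # Forward model formed in Fourier space: data = (kernel convolved with gamma) + noise.
    K = np.fft.fft(np.fft.ifftshift(kernel)) * dx  # transform of the centered kernel
    data = np.real(np.fft.ifft(K * np.fft.fft(gamma_true)))
    data = data + 1e-4 * rng.standard_normal(n)

    # Inverse step: pointwise division, then a low-pass filter acting as the regularizer.
    freq = np.fft.fftfreq(n, d=dx)
    filt = 1.0 / (1.0 + (freq / 0.5) ** 10)        # cutoff 0.5 is a tuning choice
    gamma_rec = np.real(np.fft.ifft(filt * np.fft.fft(data) / K))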

Date Created
  • 2016-05

Maximum Entropy Surrogation in Multiple Channel Signal Detection

Description

Multiple-channel detection is considered in the context of a sensor network where data can be exchanged directly between sensor nodes that share a common edge in the network graph. Optimal statistical tests used for signal source detection with multiple noisy sensors, such as the Generalized Coherence (GC) estimate, use pairwise measurements from every pair of sensors in the network and are thus only applicable when the network graph is completely connected, or when data are accumulated at a common fusion center. This thesis presents and exploits a new method that uses maximum-entropy techniques to estimate measurements between pairs of sensors that are not in direct communication, thereby enabling the use of the GC estimate in incompletely connected sensor networks. The research in this thesis culminates in a main conjecture supported by statistical tests regarding the topology of the incomplete network graphs.
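
A minimal sketch of the surrogation idea, under the simplifying assumptions of real Gaussian data and a single missing network edge: the unobserved pairwise entry of the unit-diagonal coherence matrix is filled with the determinant-maximizing (maximum-entropy) value, after which the GC statistic 1 - det(C) is formed as usual. The network layout and all names are illustrative, not from the thesis.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)
    M, N = 4, 256                                # 4 sensors, 256 samples per channel
    s = rng.standard_normal(N)                   # common signal to be detected
    X = 0.5 * s + rng.standard_normal((M, N))    # noisy sensor channels

    C = np.corrcoef(X)                           # unit-diagonal normalized Gram matrix

    # Suppose sensors 0 and 3 share no edge, so C[0, 3] cannot be measured.
    def negdet(c):
        Cc = C.copy()
        Cc[0, 3] = Cc[3, 0] = c
        return -np.linalg.det(Cc)

    # Maximum-entropy surrogate: the completion that maximizes det(C).
    c_me = minimize_scalar(negdet, bounds=(-0.999, 0.999), method="bounded").x
    C[0, 3] = C[3, 0] = c_me

    gc_estimate = 1.0 - np.linalg.det(C)         # GC detection statistic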

Date Created
  • 2014-05

Downsampling for Efficient Parameter Choice in Ill-Posed Deconvolution Problems

Description

Deconvolution of noisy data is an ill-posed problem and requires some form of regularization to stabilize its solution. Tikhonov regularization is the most common method used, but it depends on the choice of a regularization parameter λ, which must generally be estimated using one of several common methods. These methods can be computationally intensive, so I consider their behavior when only a portion of the sampled data is used. I show that the results of these methods converge as the sampling resolution increases, and use this to suggest a method of downsampling to estimate λ. I then present numerical results showing that this method can be feasible, and propose future avenues of inquiry.
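
The following Python sketch shows the flavor of such a scheme, assuming a Gaussian blur operator and generalized cross validation (GCV) as the parameter-choice method; the operator, problem sizes, and noise level are illustrative, and whether the coarse-grid λ transfers depends on the discretization being scaled consistently across resolutions.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def blur_matrix(n, width=0.05):
        t = np.linspace(0, 1, n)
        A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * width**2))
        return A / A.sum(axis=1, keepdims=True)   # row-normalized, resolution-consistent

    def gcv_lambda(A, b):
        # GCV-minimizing Tikhonov parameter via the SVD of A.
        U, s, _ = np.linalg.svd(A)
        beta = U.T @ b
        def gcv(lam):
            resid = np.sum((lam**2 / (s**2 + lam**2) * beta) ** 2)
            trace = A.shape[0] - np.sum(s**2 / (s**2 + lam**2))
            return resid / trace**2
        return minimize_scalar(gcv, bounds=(1e-8, 1.0), method="bounded").x

    rng = np.random.default_rng(2)
    n = 1024
    t = np.linspace(0, 1, n)
    b = blur_matrix(n) @ np.sin(2 * np.pi * t) ** 2 + 1e-3 * rng.standard_normal(n)

    # Estimate lambda on a downsampled problem, then solve the full one with it.
    m = 128
    lam = gcv_lambda(blur_matrix(m), b[:: n // m])
    A = blur_matrix(n)
    x = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)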

Date Created
  • 2015-05

Validity of down-sampling data for regularization parameter estimation when solving large-scale ill-posed inverse problems

Description

The solution of the linear system of equations $Ax\approx b$ arising from the discretization of an ill-posed integral equation with a square integrable kernel is considered. The solution by means of Tikhonov regularization, in which $x$ is found as the minimizer of $J(x)= \|Ax-b\|_2^2 + \lambda^2 \|Lx\|_2^2$, introduces the unknown regularization parameter $\lambda$, which trades off the fidelity of the data fit against the smoothing norm of the solution, the latter determined by the choice of $L$. The Generalized Discrepancy Principle (GDP) and the Unbiased Predictive Risk Estimator (UPRE) are methods for finding $\lambda$ given prior conditions on the noise in the measurements $b$. Here we consider the case $L=I$, and hence use the relationship between the singular value expansion and the singular value decomposition for square integrable kernels to prove that the GDP and UPRE estimates yield a convergent sequence for $\lambda$ with increasing problem size. Hence the estimate of $\lambda$ for a large problem may be found by down-sampling to a smaller problem, or to a set of smaller problems, and applying these estimators more efficiently on the smaller problems. In consequence, the large-scale problem can be solved immediately, in a single step, with the parameter found from the down-sampled problem(s).
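
As one concrete instance, here is a sketch of the UPRE estimator for the case $L=I$ with known noise variance, computed through the SVD; the search interval and the bounded scalar minimizer are implementation assumptions, not prescriptions from the thesis.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def upre_lambda(A, b, sigma2):
        # Minimize the unbiased predictive risk estimator over lambda, using
        # the SVD filter factors phi_i = s_i^2 / (s_i^2 + lambda^2).
        m = A.shape[0]
        U, s, _ = np.linalg.svd(A, full_matrices=False)
        beta = U.T @ b
        resid0 = b @ b - beta @ beta              # residual outside the range of A
        def upre(lam):
            phi = s**2 / (s**2 + lam**2)
            resid = np.sum(((1 - phi) * beta) ** 2) + resid0
            return resid / m + (2 * sigma2 / m) * np.sum(phi) - sigma2
        return minimize_scalar(upre, bounds=(1e-10, 1e2), method="bounded").x

By the convergence result above, applying such an estimator to a down-sampled $A$ and $b$ yields a parameter that can then be used directly for the full-size problem.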

Date Created
  • 2014-05

Application of the χ² principle and unbiased predictive risk estimator for determining the regularization parameter in 3-D focusing gravity inversion

Description

The χ² principle and the unbiased predictive risk estimator are used to determine optimal regularization parameters in the context of 3-D focusing gravity inversion with the minimum support stabilizer. At each iteration of the focusing inversion the minimum support stabilizer is determined and then the fidelity term is updated using the standard form transformation. The solution of the resulting Tikhonov functional is found efficiently using the singular value decomposition of the transformed model matrix, which also provides for efficient determination of the updated regularization parameter at each step. Experimental 3-D simulations using synthetic data of a dipping dike and a cube anomaly demonstrate that both parameter estimation techniques outperform the Morozov discrepancy principle for determining the regularization parameter. Smaller relative errors of the reconstructed models are obtained with fewer iterations. Data acquired over the Gotvand dam site in the south-west of Iran are used to validate use of the methods for inversion of practical data and provide good estimates of anomalous structures within the subsurface.
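
A skeletal Python version of the focusing iteration, assuming for brevity a fixed regularization parameter where the paper re-estimates it at each step by the UPRE or χ² principle; the values of λ and β and the problem sizes are placeholders.

    import numpy as np

    def focusing_inversion(A, d, lam=1e-2, beta=1e-3, iters=10):
        m = np.zeros(A.shape[1])
        for _ in range(iters):
            w = np.sqrt(m**2 + beta**2)    # inverse of the minimum support weights
            Aw = A * w[None, :]            # standard form transformation: A W^{-1}
            U, s, Vt = np.linalg.svd(Aw, full_matrices=False)
            y = Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ d))   # Tikhonov solution
            m = w * y                      # map back to model space: m = W^{-1} y
        return m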

Date Created
  • 2015-01-01

Hybrid and Iteratively Reweighted Regularization by Unbiased Predictive Risk and Weighted GCV for Projected Systems

Description

Tikhonov regularization for projected solutions of large-scale ill-posed problems is considered. The Golub-Kahan iterative bidiagonalization is used to project the problem onto a subspace, and regularization is then applied to find a subspace approximation to the full problem. Determination of the regularization parameter for the projected problem by unbiased predictive risk estimation, generalized cross validation, and discrepancy principle techniques is investigated. It is shown that the regularization parameter obtained by the unbiased predictive risk estimator can provide a good estimate which can be used for a full problem that is moderately to severely ill-posed. A similar analysis provides the weight parameter for the weighted generalized cross validation such that the approach is also useful in these cases, and also explains why the generalized cross validation without weighting is not always useful. All results are independent of whether systems are over- or underdetermined. Numerical simulations for standard one-dimensional test problems and two-dimensional data, for both image restoration and tomographic image reconstruction, support the analysis and validate the techniques. The size of the projected problem is found using an extension of a noise-revealing function for the projected problem [I. Hnětynková, M. Plešinger, and Z. Strakoš, BIT Numer. Math., 49 (2009), pp. 669-696]. Furthermore, an iteratively reweighted regularization approach for edge-preserving regularization is extended for projected systems, providing stabilization of the solutions of the projected systems and reducing dependence on the determination of the size of the projected subspace.
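
A compact Python sketch of the project-then-regularize approach, with full reorthogonalization and a fixed subspace size k; in the paper, k comes from the noise-revealing function and the parameter from UPRE or weighted GCV rather than being passed in as below.

    import numpy as np

    def gkb_tikhonov(A, b, k, lam):
        # Golub-Kahan bidiagonalization: A V_k = U_{k+1} B_k, u_1 = b / ||b||.
        m, n = A.shape
        U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
        beta1 = np.linalg.norm(b)
        U[:, 0] = b / beta1
        for j in range(k):
            v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
            v -= V[:, :j] @ (V[:, :j].T @ v)          # reorthogonalize for stability
            B[j, j] = np.linalg.norm(v); V[:, j] = v / B[j, j]
            u = A @ V[:, j] - B[j, j] * U[:, j]
            u -= U[:, : j + 1] @ (U[:, : j + 1].T @ u)
            B[j + 1, j] = np.linalg.norm(u); U[:, j + 1] = u / B[j + 1, j]
        # Tikhonov regularization of the small (k+1) x k projected problem.
        rhs = np.zeros(k + 1); rhs[0] = beta1
        y = np.linalg.solve(B.T @ B + lam**2 * np.eye(k), B.T @ rhs)
        return V @ y                                   # subspace approximation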

Date Created
  • 2017-03-08

Non-negatively constrained least squares and parameter choice by the residual periodogram for the inversion of electrochemical impedance spectroscopy data

Description

The inverse problem associated with electrochemical impedance spectroscopy requiring the solution of a Fredholm integral equation of the first kind is considered. If the underlying physical model is not clearly determined, the inverse problem needs to be solved using a regularized linear least squares problem that is obtained from the discretization of the integral equation. For this system, it is shown that the model error can be made negligible by a change of variables and by extending the effective range of quadrature. This change of variables serves as a right preconditioner that significantly improves the condition of the system. Still, to obtain feasible solutions the additional constraint of non-negativity is required. Simulations with artificial, but realistic, data demonstrate that the use of non-negatively constrained least squares with a smoothing norm provides higher quality solutions than those obtained without the non-negative constraint. Using higher-order smoothing norms also reduces the error in the solutions. The L-curve and residual periodogram parameter choice criteria, which are used for parameter choice with regularized linear least squares, are successfully adapted to be used for the non-negatively constrained Tikhonov least squares problem. Although these results have been verified within the context of the analysis of electrochemical impedance spectroscopy, there is no reason to suppose that they would not be relevant within the broader framework of solving Fredholm integral equations for other applications.
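
For a given regularization parameter, the constrained problem described above reduces to a standard non-negative least squares solve on a stacked system, as in this Python sketch; the first-difference operator is one choice of smoothing norm L (higher-order operators are analogous), and the sweep over λ for the residual periodogram or L-curve criterion is omitted.

    import numpy as np
    from scipy.optimize import nnls

    def nn_tikhonov(A, b, lam):
        # Solve min ||Ax - b||^2 + lam^2 ||L x||^2 subject to x >= 0
        # by stacking [A; lam L] and calling an NNLS solver.
        n = A.shape[1]
        L = np.diff(np.eye(n), axis=0)        # first-derivative smoothing operator
        A_aug = np.vstack([A, lam * L])
        b_aug = np.concatenate([b, np.zeros(L.shape[0])])
        x, _ = nnls(A_aug, b_aug)
        return x

Scanning λ over a grid and applying the chosen parameter test to the residual b - Ax(λ) completes the method.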

Date Created
  • 2015-04-15

Global Optimization Using Piecewise Linear Approximation

Description

Global optimization (programming) has been attracting the attention of researchers for almost a century. Linear programming (LP) and mixed integer linear programming (MILP) were well studied in the early stages of the field, and MILP methods and software tools have improved in efficiency in the past few years. They are now fast and robust even for problems with millions of variables. Therefore, it is desirable to use MILP software to solve mixed integer nonlinear programming (MINLP) problems. For an MINLP problem to be solved by an MILP solver, its nonlinear functions must be transformed to linear ones. The most common method for this transformation is piecewise linear approximation (PLA). This dissertation summarizes the types of optimization and the most important tools and methods, and discusses the PLA tool in depth. PLA is performed using nonuniform partitioning of the domain of the variables involved in the function to be approximated. Partial PLA models that approximate only parts of a complicated optimization problem are also introduced. Computational experiments show that nonuniform partitioning and partial PLA can be beneficial.
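
A small Python sketch of the nonuniform-partitioning idea: breakpoints are placed by equidistributing an estimate of the function's curvature, so that segments concentrate where the function bends most. This placement heuristic and all names are illustrative, not the dissertation's algorithm; in an MILP model the selected segments would then be encoded with binary or SOS2 variables.

    import numpy as np

    def nonuniform_breakpoints(f, a, b, k, grid=2048):
        # Equidistribute cumulative |f''| so breakpoints cluster in curved regions.
        t = np.linspace(a, b, grid)
        curv = np.abs(np.gradient(np.gradient(f(t), t), t))
        cdf = np.cumsum(curv + 1e-9)
        cdf /= cdf[-1]
        return np.interp(np.linspace(0, 1, k), cdf, t)

    f = lambda x: np.exp(-x) * np.sin(4 * x)
    xb = nonuniform_breakpoints(f, 0.0, 3.0, k=12)

    x_test = np.linspace(0.0, 3.0, 1000)
    pla = np.interp(x_test, xb, f(xb))        # evaluate the piecewise linear model
    max_err = np.max(np.abs(pla - f(x_test)))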

Date Created
  • 2020

Some topics concerning the singular value decomposition and generalized singular value decomposition

Description

This dissertation involves three problems that are all related by the use of the singular value decomposition (SVD) or generalized singular value decomposition (GSVD). The specific problems are (i) derivation of a generalized singular value expansion (GSVE), (ii) analysis of the properties of the chi-squared method for regularization parameter selection in the case of nonnormal data, and (iii) formulation of a partial canonical correlation concept for continuous-time stochastic processes. The finite-dimensional SVD has an infinite-dimensional generalization to compact operators. However, the form of the finite-dimensional GSVD developed in, e.g., Van Loan does not extend directly to infinite dimensions as a result of a key step in the proof that is specific to the matrix case. Thus, the first problem of interest is to find an infinite-dimensional version of the GSVD. One such GSVE for compact operators on separable Hilbert spaces is developed. The second problem concerns regularization parameter estimation. The chi-squared method for nonnormal data is considered. A form of the optimized regularization criterion that pertains to measured data or signals with nonnormal noise is derived. Large-sample theory for phi-mixing processes is used to derive a central limit theorem for the chi-squared criterion that holds under certain conditions. Departures from normality are seen to manifest in the need for a possibly different scale factor in normalization rather than what would be used under the assumption of normality. The consequences of our large-sample work are illustrated by empirical experiments. For the third problem, a new approach is examined for studying the relationships between a collection of functional random variables. The idea is based on the work of Sunder that provides mappings to connect the elements of algebraic and orthogonal direct sums of subspaces in a Hilbert space. When combined with a key isometry associated with a particular Hilbert-space-indexed stochastic process, this leads to a useful formulation for situations that involve the study of several second-order processes. In particular, using our approach with two processes provides an independent derivation of the functional canonical correlation analysis (CCA) results of Eubank and Hsing. For more than two processes, a rigorous derivation of the functional partial canonical correlation analysis (PCCA) concept that applies to both finite and infinite dimensional settings is obtained.
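
For the second problem, the computation behind the chi-squared method is simple enough to sketch: with L = I and known noise variance, λ is chosen so that the Tikhonov functional at its minimizer equals its expected degrees of freedom, here taken to be m for normal noise. The SVD route and the root-finding bracket below are implementation assumptions; the dissertation's point is that nonnormal data may require a different scale factor in this normalization.

    import numpy as np
    from scipy.optimize import brentq

    def chi2_lambda(A, b, sigma2):
        # Solve J(lambda) = m, where J is the whitened Tikhonov functional at
        # its minimizer; J is monotone in lambda, so a scalar root-finder
        # suffices (assumes the bracket below straddles the root).
        m = A.shape[0]
        U, s, _ = np.linalg.svd(A, full_matrices=False)
        beta2 = (U.T @ b) ** 2
        b_perp2 = b @ b - beta2.sum()        # component outside the range of A
        def J(lam):
            return (np.sum(lam**2 * beta2 / (s**2 + lam**2)) + b_perp2) / sigma2 - m
        return brentq(J, 1e-10, 1e6)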

Date Created
  • 2012

Optimal sampling for linear function approximation and high-order finite difference methods over complex regions

Description

I focus on algorithms that generate good sampling points for function approximation. In 1D, it is well known that polynomial interpolation using equispaced points is unstable. On the other hand, using Chebyshev nodes provides both stable and highly accurate points for polynomial interpolation. In higher-dimensional complex regions, optimal sampling points are not known explicitly. This work presents robust algorithms that find good sampling points in complex regions for polynomial interpolation, least-squares, and radial basis function (RBF) methods. The quality of these nodes is measured using the Lebesgue constant. I also consider optimal sampling for constrained optimization, used to solve PDEs where boundary conditions must be imposed. Furthermore, I extend the scope of the problem to include finding near-optimal sampling points for high-order finite difference methods. These high-order finite difference methods can be implemented using either piecewise polynomials or RBFs.
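
The Lebesgue constant mentioned above is straightforward to estimate numerically; this Python sketch contrasts equispaced and Chebyshev nodes in 1D (the evaluation grid and n = 20 are arbitrary choices).

    import numpy as np

    def lebesgue_constant(nodes, grid=5000):
        # max over x of sum_j |l_j(x)| for the Lagrange basis on the given nodes
        x = np.linspace(-1.0, 1.0, grid)
        L = np.ones((nodes.size, grid))
        for j, xj in enumerate(nodes):
            for k, xk in enumerate(nodes):
                if k != j:
                    L[j] *= (x - xk) / (xj - xk)
        return np.abs(L).sum(axis=0).max()

    n = 20
    equi = np.linspace(-1.0, 1.0, n)
    cheb = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))   # Chebyshev points
    print(lebesgue_constant(equi), lebesgue_constant(cheb))   # equispaced blows up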

Date Created
  • 2019