Description
Many products undergo several stages of testing ranging from tests on individual components to end-item tests. Additionally, these products may be further "tested" via customer or field use. The later failure of a delivered product may in some cases be due to circumstances that have no correlation with the product's inherent quality. However, at times, there may be cues in the upstream test data that, if detected, could serve to predict the likelihood of downstream failure or performance degradation induced by product use or environmental stresses. This study explores the use of downstream factory test data or product field reliability data to infer data mining or pattern recognition criteria onto manufacturing process or upstream test data by means of support vector machines (SVM) in order to provide reliability prediction models. In concert with a risk/benefit analysis, these models can be utilized to drive improvement of the product or, at least, via screening to improve the reliability of the product delivered to the customer. Such models can be used to aid in reliability risk assessment based on detectable correlations between the product test performance and the sources of supply, test stands, or other factors related to product manufacture. As an enhancement to the usefulness of the SVM or hyperplane classifier within this context, L-moments and the Western Electric Company (WECO) Rules are used to augment or replace the native process or test data used as inputs to the classifier. As part of this research, a generalizable binary classification methodology was developed that can be used to design and implement predictors of end-item field failure or downstream product performance based on upstream test data that may be composed of single-parameter, time-series, or multivariate real-valued data. 
Additionally, the methodology provides input-parameter weighting factors that have proved useful in failure analysis and root-cause investigations as indicators of which of several upstream product parameters have the greatest influence on the downstream failure outcomes.
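As a minimal sketch of one ingredient mentioned above, the first two sample L-moments can summarize an upstream test distribution before it is fed to a classifier. This is not the dissertation's implementation; the function name and test data are illustrative, using Hosking's probability-weighted-moment estimators.

```python
def sample_l_moments(data):
    """First two sample L-moments (l1, l2) via Hosking's unbiased
    probability-weighted moments: l1 = b0 (the mean), l2 = 2*b1 - b0."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    # b1 weights each order statistic by its 0-based rank i over (n - 1)
    b1 = sum((i / (n - 1)) * x[i] for i in range(1, n)) / n
    return b0, 2 * b1 - b0
```

Here l1 measures location and l2 measures dispersion; because they are linear in the order statistics, they are more robust to outliers than the variance, which is one reason they are attractive as engineered features.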
ContributorsMosley, James (Author) / Morrell, Darryl (Committee member) / Cochran, Douglas (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Roberts, Chell (Committee member) / Spanias, Andreas (Committee member) / Arizona State University (Publisher)
Created2011
Description
This thesis considers the application of basis pursuit to several problems in system identification. After reviewing some key results in the theory of basis pursuit and compressed sensing, numerical experiments are presented that explore the application of basis pursuit to the black-box identification of linear time-invariant (LTI) systems with both finite (FIR) and infinite (IIR) impulse responses, temporal systems modeled by ordinary differential equations (ODE), and spatio-temporal systems modeled by partial differential equations (PDE). For LTI systems, the experimental results illustrate existing theory for identification of LTI FIR systems. It is seen that basis pursuit does not identify sparse LTI IIR systems, but it does identify alternate systems with nearly identical magnitude response characteristics when there are small numbers of non-zero coefficients. For ODE systems, the experimental results are consistent with earlier research for differential equations that are polynomials in the system variables, illustrating feasibility of the approach for small numbers of non-zero terms. For PDE systems, it is demonstrated that basis pursuit can be applied to system identification, along with a comparison in performance with another existing method. In all cases the impact of measurement noise on identification performance is considered, and it is empirically observed that high signal-to-noise ratio is required for successful application of basis pursuit to system identification problems.
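The sparse FIR identification setting above can be sketched with a toy surrogate. True basis pursuit is an equality-constrained l1 linear program; the iterative soft-thresholding (ISTA) loop below solves the closely related LASSO problem with a small penalty, which behaves similarly when the residual is near zero. All names, the input sequence, and the tolerances are illustrative, not from the thesis.

```python
def soft(v, thr):
    """Soft-thresholding: the proximal map of the l1 norm."""
    return (v - thr) if v > thr else (v + thr) if v < -thr else 0.0

def conv_matrix(u, m):
    """Toeplitz matrix A such that A @ h is the full convolution u * h, len(h) == m."""
    n = len(u) + m - 1
    return [[u[r - c] if 0 <= r - c < len(u) else 0.0 for c in range(m)] for r in range(n)]

def ista(A, b, lam=1e-4, iters=3000):
    """Iterative soft-thresholding for min_h 0.5*||A h - b||^2 + lam*||h||_1,
    a common surrogate for basis pursuit when lam is small."""
    n, m = len(A), len(A[0])
    # 1/||A||_F^2 is a safe (conservative) step size bound
    step = 1.0 / sum(A[i][j] ** 2 for i in range(n) for j in range(m))
    h = [0.0] * m
    for _ in range(iters):
        r = [sum(A[i][j] * h[j] for j in range(m)) - b[i] for i in range(n)]   # residual
        g = [sum(A[i][j] * r[i] for i in range(n)) for j in range(m)]          # gradient A^T r
        h = [soft(h[j] - step * g[j], step * lam) for j in range(m)]
    return h
```

Driving a known sparse impulse response with an input sequence and handing the resulting Toeplitz system to `ista` recovers the two nonzero taps, mirroring the FIR experiments described in the abstract.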
ContributorsThompson, Robert C. (Author) / Platte, Rodrigo (Thesis advisor) / Gelb, Anne (Committee member) / Cochran, Douglas (Committee member) / Arizona State University (Publisher)
Created2012
Description
This dissertation involves three problems that are all related by the use of the singular value decomposition (SVD) or generalized singular value decomposition (GSVD). The specific problems are (i) derivation of a generalized singular value expansion (GSVE), (ii) analysis of the properties of the chi-squared method for regularization parameter selection in the case of nonnormal data and (iii) formulation of a partial canonical correlation concept for continuous time stochastic processes. The finite dimensional SVD has an infinite dimensional generalization to compact operators. However, the form of the finite dimensional GSVD developed in, e.g., Van Loan does not extend directly to infinite dimensions as a result of a key step in the proof that is specific to the matrix case. Thus, the first problem of interest is to find an infinite dimensional version of the GSVD. One such GSVE for compact operators on separable Hilbert spaces is developed. The second problem concerns regularization parameter estimation. The chi-squared method for nonnormal data is considered. A form of the optimized regularization criterion that pertains to measured data or signals with nonnormal noise is derived. Large sample theory for phi-mixing processes is used to derive a central limit theorem for the chi-squared criterion that holds under certain conditions. Departures from normality are seen to manifest in the need for a possibly different scale factor in normalization rather than what would be used under the assumption of normality. The consequences of our large sample work are illustrated by empirical experiments. For the third problem, a new approach is examined for studying the relationships between a collection of functional random variables. The idea is based on the work of Sunder that provides mappings to connect the elements of algebraic and orthogonal direct sums of subspaces in a Hilbert space. 
When combined with a key isometry associated with a particular Hilbert space indexed stochastic process, this leads to a useful formulation for situations that involve the study of several second order processes. In particular, using our approach with two processes provides an independent derivation of the functional canonical correlation analysis (CCA) results of Eubank and Hsing. For more than two processes, a rigorous derivation of the functional partial canonical correlation analysis (PCCA) concept that applies to both finite and infinite dimensional settings is obtained.
ContributorsHuang, Qing (Author) / Eubank, Randall (Thesis advisor) / Renaut, Rosemary (Thesis advisor) / Cochran, Douglas (Committee member) / Gelb, Anne (Committee member) / Young, Dennis (Committee member) / Arizona State University (Publisher)
Created2012
Description
Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled for use at each time step. To determine which sensors to use, various metrics have been suggested; one possibility is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffices to recover the initial state. However, to employ observability for sensor scheduling, the binary definition must be expanded so that one can measure how observable a system is under a particular measurement scheme, i.e., one needs a metric of observability. Most methods utilizing an observability metric address sensor selection rather than sensor scheduling. In this dissertation we present a new approach that employs the condition number of the observability matrix as the metric and uses column subset selection to choose which sensors to use at each time step. To this end we use a rank-revealing QR factorization algorithm to select sensors. Several numerical experiments demonstrate the performance of the proposed scheme.
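The column-subset-selection idea can be sketched with a greedy pivoted Gram-Schmidt pass, a simplified stand-in for a full rank-revealing QR: repeatedly pick the column with the largest residual norm, then deflate the rest. Function names and the toy matrix are illustrative, not taken from the dissertation.

```python
def greedy_column_select(A, k):
    """Pick k column indices of A (list of rows) by greedy pivoted Gram-Schmidt,
    favoring columns that keep the selected submatrix well conditioned."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]  # residual columns
    chosen = []
    for _ in range(k):
        norms = [sum(v * v for v in c) for c in cols]
        for j in chosen:
            norms[j] = -1.0  # exclude already-selected columns
        p = max(range(n), key=lambda j: norms[j])
        chosen.append(p)
        q = [v / norms[p] ** 0.5 for v in cols[p]]  # normalized pivot direction
        for j in range(n):  # deflate the remaining columns against q
            if j not in chosen:
                d = sum(q[i] * cols[j][i] for i in range(m))
                cols[j] = [cols[j][i] - d * q[i] for i in range(m)]
    return chosen
```

On a matrix with two duplicate columns, the greedy pass skips the redundant copy, which is exactly the behavior one wants when redundant sensors would leave the observability matrix poorly conditioned.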
ContributorsIlkturk, Utku (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Renaut, Rosemary (Committee member) / Armbruster, Dieter (Committee member) / Arizona State University (Publisher)
Created2015
Description
Scattering from random rough surfaces has been of interest for decades. Several methods have been proposed to solve this problem, and the Kirchhoff approximation (KA) and the small perturbation method (SPM) are among the most popular. Both methods provide accurate results for first-order scattering, but their range of validity is limited, and the cross-polarization scattering coefficient is zero for both unless they are carried out to higher orders. Furthermore, the higher-order formulations are complicated, and multiple scattering and shadowing are neglected in these classic methods.

Extensions of these two methods have been made in order to fix these problems. However, they are usually complicated and problem specific. While the small slope approximation is one of the most widely used methods to bridge KA and SPM, it is not easy to implement in a general form. The two-scale model can be employed to solve scattering problems for a tilted perturbation plane, but its range of validity is limited.

A new model is proposed in this thesis to deal with the cross-polarization scattering phenomenon on perfectly electrically conducting random surfaces. An integral equation is adopted in this model. While the integral equation method is often combined with numerical methods to solve for the scattering coefficient, the proposed model solves the integral equation iteratively by analytic approximation. We utilize some approximations on the randomness of the surface and obtain an explicit expression. It is shown that this expression agrees with the SPM method at second order.
ContributorsCao, Jiahao (Author) / Pan, George (Thesis advisor) / Balanis, Constantine A (Committee member) / Cochran, Douglas (Committee member) / Arizona State University (Publisher)
Created2017
Description
Inverse problems model real-world phenomena from data, where the data are often noisy and models contain errors. This leads to instabilities, multiple solution vectors and thus ill-posedness. To solve ill-posed inverse problems, regularization is typically used as a penalty function to induce stability and allow for the incorporation of a priori information about the desired solution. In this thesis, high order regularization techniques are developed for image and function reconstruction from noisy or misleading data. Specifically, the incorporation of the Polynomial Annihilation operator allows for the accurate exploitation of the sparse representation of each function in the edge domain.

This dissertation tackles three main problems through the development of novel reconstruction techniques: (i) reconstructing one and two dimensional functions from multiple measurement vectors using variance based joint sparsity when a subset of the measurements contain false and/or misleading information, (ii) approximating discontinuous solutions to hyperbolic partial differential equations by enhancing typical solvers with l1 regularization, and (iii) reducing model assumptions in synthetic aperture radar image formation, specifically for the purpose of speckle reduction and phase error correction. While the common thread tying these problems together is the use of high order regularization, the defining characteristics of each of these problems create unique challenges.

Fast and robust numerical algorithms are also developed so that these problems can be solved efficiently without requiring fine tuning of parameters. Indeed, the numerical experiments presented in this dissertation strongly suggest that the new methodology provides more accurate and robust solutions to a variety of ill-posed inverse problems.
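The edge-domain sparsity mentioned above can be illustrated with a toy example: on a uniform grid, the order-2 polynomial annihilation operator reduces (up to scaling) to the familiar [1, -2, 1] second-difference stencil, which annihilates locally linear data so that only genuine kinks and jumps survive. The function and data below are illustrative, not from the dissertation.

```python
def second_differences(vals):
    """Apply the [1, -2, 1] stencil to uniformly sampled data; the result is
    exactly zero wherever the data are locally linear."""
    return [vals[i - 1] - 2 * vals[i] + vals[i + 1] for i in range(1, len(vals) - 1)]

# Piecewise linear function with a single kink at x = 0
xs = [-1 + 0.25 * i for i in range(9)]
vals = [x if x < 0 else 3 * x for x in xs]
d = second_differences(vals)
```

Every entry of `d` vanishes except the one straddling the kink, so the function is sparse in the edge domain even though it is dense in the sample domain, which is precisely what an l1 penalty on the annihilation output exploits.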
ContributorsScarnati, Theresa (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Gardner, Carl (Committee member) / Sanders, Toby (Committee member) / Arizona State University (Publisher)
Created2018
Description
There is an ever-growing need for broadband conformal antennas, not only to reduce the number of antennas required to cover a broad range of frequencies (VHF-UHF) but also to reduce the visual and RF signatures associated with communication systems. In many applications, antennas need to be very close to, or embedded inside, low-impedance media. However, for conventional metal and dielectric antennas to operate efficiently in such environments, either a very narrow bandwidth must be tolerated, or enough loss added to expand the bandwidth, or they must be placed one quarter of a wavelength above the conducting surface. The latter is not always possible since in the HF through low UHF bands, critical to military and security functions, this quarter-wavelength requirement would result in impractically large antennas.

Despite an error based on a false assumption in the 1950s, which severely underestimated the efficiency of magneto-dielectric antennas, recently demonstrated magnetic antennas have been shown to exhibit extraordinary efficiency in conformal applications. Whereas conventional metal-and-dielectric antennas carrying radiating electric currents suffer a significant disadvantage when placed conformal to the conducting surface of a platform, because they induce opposing image currents in the surface, magnetic antennas carrying radiating magnetic currents have no such limitation. Their magnetic currents produce co-linear image currents in electrically conducting surfaces.

However, the permeable antennas built to date have not yet attained the wide bandwidth expected because the magnetic-flux-channels carrying the wave have not been designed to guide the wave near the speed of light at all frequencies. Instead, they tend to lose the wave by a leaky fast-wave mechanism at low frequencies or they over-bind a slow-wave at high frequencies. In this dissertation, we have studied magnetic antennas in detail and presented the design approach and apparatus required to implement a flux-channel carrying the magnetic current wave near the speed of light over a very broad frequency range which also makes the design of a frequency independent antenna (spiral) possible. We will learn how to construct extremely thin conformal antennas, frequency-independent permeable antennas, and even micron-sized antennas that can be embedded inside the brain without damaging the tissue.
ContributorsYousefi, Tara (Author) / Diaz, Rodolfo E (Thesis advisor) / Cochran, Douglas (Committee member) / Goodnick, Stephen (Committee member) / Pan, George (Committee member) / Arizona State University (Publisher)
Created2017
Description
Electric field imaging allows for a low cost, compact, non-invasive, non-ionizing alternative to other methods of imaging. It has many promising industrial applications including security, safely imaging power lines at construction sites, finding sources of electromagnetic interference, geo-prospecting, and medical imaging. The work presented in this dissertation concerns low frequency electric field imaging: the physics, hardware, and various methods of achieving it.

Electric fields have historically been notoriously difficult to work with due to how intrinsically noisy the data is in electric field sensors. As a first contribution, an in-depth study demonstrates just how prevalent electric field noise is. In field tests, various cables were placed underneath power lines. Despite being shielded, the 60 Hz power line signal readily penetrated several types of cables.

The challenges of high noise levels were largely addressed by connecting the output of an electric field sensor to a lock-in amplifier. Using the more accurate means of collecting electric field data, D-dot sensors were arrayed in a compact grid to resolve electric field images as a second contribution. This imager has successfully captured electric field images of live concealed wires and electromagnetic interference.

An active method was developed as a third contribution. In this method, distortions created by objects when placed in a known electric field are read. This expands the domain of what can be imaged because the object does not need to be a time-varying electric field source. Images of dielectrics (e.g. bodies of water) and DC wires were captured using this new method.

The final contribution uses a collection of one-dimensional electric field images, i.e. projections, to reconstruct a two-dimensional image. This was achieved using algorithms based in computed tomography such as filtered backprojection. An algebraic approach was also used to enforce sparsity regularization with the L1 norm, further improving the quality of some images.
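The smear-and-sum principle behind backprojection can be shown with a deliberately minimal two-angle, unfiltered version (the dissertation uses filtered backprojection over many projection angles; this sketch, with illustrative names and data, only conveys the core idea).

```python
def backproject(row_sums, col_sums):
    """Unfiltered backprojection from two orthogonal 1-D projections:
    smear each projection back across the grid and sum the contributions."""
    return [[r + c for c in col_sums] for r in row_sums]

def argmax2d(img):
    """Location of the largest pixel in a 2-D list-of-lists image."""
    best = (0, 0)
    for i, row in enumerate(img):
        for j, v in enumerate(row):
            if v > img[best[0]][best[1]]:
                best = (i, j)
    return best
```

For a scene with a single bright source, the two smeared projections reinforce only at the source location, so the backprojection peaks there; filtering and additional angles sharpen this blur into a faithful reconstruction.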
ContributorsChung, Hugh Emanuel (Author) / Allee, David R. (Thesis advisor) / Cochran, Douglas (Committee member) / Aberle, James T (Committee member) / Phillips, Stephen M (Committee member) / Arizona State University (Publisher)
Created2017
Description
Solving partial differential equations on surfaces has many applications including modeling chemical diffusion, pattern formation, geophysics and texture mapping. This dissertation presents two techniques for solving time dependent partial differential equations on various surfaces using the partition of unity method. A novel spectral cubed sphere method that utilizes the windowed Fourier technique is presented and used for both approximating functions on spherical domains and solving partial differential equations. The spectral cubed sphere method is applied to solve the transport equation as well as the diffusion equation on the unit sphere. The second approach is a partition of unity method with local radial basis function approximations. This technique is also used to explore the effect of the node distribution as it is well known that node choice plays an important role in the accuracy and stability of an approximation. A greedy algorithm is implemented to generate good interpolation nodes using the column pivoting QR factorization. The partition of unity radial basis function method is applied to solve the diffusion equation on the sphere as well as a system of reaction-diffusion equations on multiple surfaces including the surface of a red blood cell, a torus, and the Stanford bunny. Accuracy and stability of both methods are investigated.
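A local radial basis function approximation of the kind used in the partition of unity method can be sketched in one dimension: build the Gaussian-kernel Gram matrix on the nodes, solve for the weights, and evaluate the resulting interpolant. The solver, shape parameter, and test function are illustrative assumptions, not the dissertation's setup.

```python
from math import exp

def solve(M, b):
    """Gaussian elimination with partial pivoting on an augmented copy."""
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n + 1):
                A[r][c] -= f * A[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (A[k][n] - sum(A[k][c] * x[c] for c in range(k + 1, n))) / A[k][k]
    return x

def rbf_interpolant(nodes, fvals, eps=5.0):
    """Gaussian RBF interpolant s(x) = sum_j w_j * exp(-(eps * (x - x_j))^2)."""
    phi = lambda r: exp(-((eps * r) ** 2))
    gram = [[phi(xi - xj) for xj in nodes] for xi in nodes]
    w = solve(gram, fvals)
    return lambda x: sum(wj * phi(x - xj) for wj, xj in zip(w, nodes))
```

The interpolant reproduces the data at the nodes and approximates the function in between; as the abstract notes, how well this works depends strongly on the node distribution, which motivates the greedy QR-based node selection studied in the dissertation.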
ContributorsIslas, Genesis Juneiva (Author) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Espanol, Malena (Committee member) / Kao, Ming-Hung (Committee member) / Renaut, Rosemary (Committee member) / Arizona State University (Publisher)
Created2021
Description
During the inversion of discrete linear systems, noise in data can be amplified and result in meaningless solutions. To combat this effect, characteristics of solutions that are considered desirable are mathematically implemented during inversion. This is a process called regularization. The influence of the provided prior information is controlled by the introduction of non-negative regularization parameter(s). Many methods are available for both the selection of appropriate regularization parameters and the inversion of the discrete linear system. Generally, for a single problem there is just one regularization parameter. Here, a learning approach is considered to identify a single regularization parameter based on the use of multiple data sets described by a linear system with a common model matrix. The situation with multiple regularization parameters that weight different spectral components of the solution is considered as well. To obtain these multiple parameters, standard methods are modified for identifying the optimal regularization parameters. Modifications of the unbiased predictive risk estimation, generalized cross validation, and the discrepancy principle are derived for finding spectral windowing regularization parameters. These estimators are extended for finding the regularization parameters when multiple data sets with common system matrices are available. Statistical analysis of these estimators is conducted for real and complex transformations of data. It is demonstrated that spectral windowing regularization parameters can be learned from these new estimators applied for multiple data and with multiple windows. Numerical experiments evaluating these new methods demonstrate that these modified methods, which do not require the use of true data for learning regularization parameters, are effective and efficient, and perform comparably to a supervised learning method based on estimating the parameters using true data.
The theoretical developments are validated for one and two dimensional image deblurring. It is verified that the obtained estimates of spectral windowing regularization parameters can be used effectively on validation data sets that are separate from the training data, and do not require known data.
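The spectral viewpoint above can be illustrated with a minimal single-window sketch (illustrative names and values, not the dissertation's estimators): Tikhonov filtering in the SVD basis, with the single regularization parameter chosen by the discrepancy principle via bisection on a logarithmic grid.

```python
def tikhonov_spectral(s, d, lam):
    """Tikhonov filter in the SVD (spectral) basis: x_i = s_i * d_i / (s_i^2 + lam)."""
    return [si * di / (si * si + lam) for si, di in zip(s, d)]

def residual_norm(s, d, lam):
    """Norm of the data misfit ||diag(s) x(lam) - d|| in spectral coordinates."""
    x = tikhonov_spectral(s, d, lam)
    return sum((si * xi - di) ** 2 for si, xi, di in zip(s, x, d)) ** 0.5

def discrepancy_lambda(s, d, noise_level, lo=1e-12, hi=1e12, iters=200):
    """Discrepancy principle: bisect in log-lambda until the residual norm
    matches the estimated noise level (the residual grows with lambda)."""
    for _ in range(iters):
        mid = (lo * hi) ** 0.5
        if residual_norm(s, d, mid) < noise_level:
            lo = mid
        else:
            hi = mid
    return (lo * hi) ** 0.5
```

Assigning a separate parameter to each group of singular values turns this scalar search into the spectral windowing setting the abstract describes, with one such parameter learned per window.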
ContributorsByrne, Michael John (Author) / Renaut, Rosemary (Thesis advisor) / Cochran, Douglas (Committee member) / Espanol, Malena (Committee member) / Jackiewicz, Zdzislaw (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created2023