Matching Items (28)
Description
Many methods of passive flow control rely on changes to surface morphology. Roughening surfaces to induce boundary layer transition to turbulence and in turn delay separation is a powerful approach to lowering drag on bluff bodies. While the broad influence of roughness and other means of passive flow control in delaying separation on bluff bodies is known, the basic mechanisms are not well understood. Of particular interest for the current work is understanding the role of surface dimpling on boundary layers. A computational approach is employed, and the study has two main goals. The first is to understand and advance the numerical methodology utilized for the computations. The second is to shed some light on the details of how surface dimples distort boundary layers and cause transition to turbulence. Simulations are performed of the flow over a simplified configuration: a boundary layer over a dimpled flat plate. The flow is computed by direct numerical simulation of the Navier-Stokes equations, with an immersed boundary representing the dimpled surface. The dimple geometry is fixed: a spherical depression in the flat plate with a depth-to-diameter ratio of 0.1. The dimples are arranged in staggered rows, with the dimple centers spaced one diameter apart in both the spanwise and streamwise directions. The simulations are conducted for both two and three staggered rows of dimples. Flow variables are normalized at the inlet by the dimple depth, and the Reynolds number, based on freestream velocity and inlet boundary layer thickness, is 4000. First- and second-order statistics show that the turbulent boundary layers agree well with channel flow and zero-pressure-gradient flat-plate boundary layers in the viscous sublayer and the buffer layer, but deviate farther from the wall. The transition to turbulence forced by the dimples is unlike that of a naturally transitioning flow, of a small perturbation such as trip tape in experimental flows, or of noise in the inlet condition in computational flows.
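The abstract fixes the dimple geometry completely (spherical depressions with depth-to-diameter ratio 0.1, staggered rows with centers one diameter apart), so the surface can be written down directly. The following is a minimal sketch of that height field, of the kind an immersed boundary representation would sample; the grid, names, and the spherical-cap radius formula are illustrative assumptions, not code from the thesis.

```python
import numpy as np

D = 1.0          # dimple diameter
depth = 0.1 * D  # depth-to-diameter ratio of 0.1, as in the abstract
R = (depth**2 + (D / 2)**2) / (2 * depth)  # radius of the sphere cutting the depression

def plate_height(x, z):
    """Surface height: 0 on the flat plate, negative inside a dimple, for
    staggered rows of spherical dimples with centers one diameter apart in
    the streamwise (x) and spanwise (z) directions (assumed layout)."""
    row = np.floor(x / D)                 # streamwise row index
    zs = z - 0.5 * D * (row % 2)          # stagger alternate rows by half a spacing
    dx = (x % D) - 0.5 * D                # offsets from the nearest dimple center
    dz = (zs % D) - 0.5 * D
    r = np.hypot(dx, dz)
    y = np.zeros_like(r)
    inside = r <= D / 2
    y[inside] = (R - depth) - np.sqrt(R**2 - r[inside]**2)
    return y

# sample the surface on a grid, e.g. to mark immersed-boundary points
x, z = np.meshgrid(np.linspace(0, 3 * D, 300), np.linspace(0, 2 * D, 200))
print(plate_height(x, z).min())  # about -0.1, the dimple depth
```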
Contributors: Gutierrez-Jensen, Jeremiah J (Author) / Squires, Kyle (Thesis advisor) / Herrmann, Marcus (Committee member) / Gelb, Anne (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The theme of this work is the development of fast numerical algorithms for sparse optimization, together with their applications in medical imaging and in source localization using sensor array processing. With the recently proposed theory of Compressive Sensing (CS), the $\ell_1$ minimization problem has attracted increasing attention for its ability to exploit sparsity, and traditional interior point methods encounter computational difficulties when solving CS applications. In the first part of this work, a fast algorithm based on the augmented Lagrangian method for solving the large-scale TV-$\ell_1$ regularized inverse problem is proposed. Specifically, by taking advantage of the separable structure, the original problem can be approximated via the sum of a series of simple functions with closed form solutions. A preconditioner for solving the block Toeplitz with Toeplitz block (BTTB) linear system is proposed to accelerate the computation. An in-depth discussion of the rate of convergence and of the optimal parameter selection criteria is given. Numerical experiments test the performance and the robustness of the proposed algorithm over a wide range of parameter values. Applications of the algorithm in magnetic resonance (MR) imaging and a comparison with other existing methods are included. The second part of this work applies the TV-$\ell_1$ model to source localization using sensor arrays. The array output is reformulated as a sparse waveform via an over-complete basis, and the properties of the $\ell_p$-norm in detecting sparsity are studied. An algorithm is proposed for minimizing the resulting non-convex problem. Numerical experiments show that the proposed algorithm, with the aid of the $\ell_p$-norm, can resolve closely distributed sources with higher accuracy than other existing methods.
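In splittings of this kind, the closed-form subproblem solutions referred to above are typically soft-thresholding (shrinkage) operators. The following sketch shows the standard anisotropic and isotropic shrinkage formulas as an illustration of the separable structure; it is not the dissertation's implementation, and the function names are invented.

```python
import numpy as np

def shrink(v, tau):
    """Closed-form minimizer of tau*||u||_1 + 0.5*||u - v||_2^2:
    the soft-thresholding operator that handles the l1 term."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def shrink_iso(V, tau):
    """Isotropic counterpart for the TV term: shrink each gradient
    vector (a row of V) toward zero by tau in Euclidean norm."""
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    return np.maximum(norms - tau, 0.0) * V / np.maximum(norms, 1e-12)
```

Within each augmented Lagrangian iteration, updates of this form handle the TV and $\ell_1$ terms, leaving a linear system (BTTB in the MR imaging setting) for the image update, which is where the proposed preconditioner enters.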
Contributors: Shen, Wei (Author) / Mittelmann, Hans D (Thesis advisor) / Renaut, Rosemary A. (Committee member) / Jackiewicz, Zdzislaw (Committee member) / Gelb, Anne (Committee member) / Ringhofer, Christian (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Structural features of canonical wall-bounded turbulent flows are described using several techniques, including proper orthogonal decomposition (POD). The canonical wall-bounded turbulent flows of channels, pipes, and flat-plate boundary layers include physics important to a wide variety of practical fluid flows with a minimum of geometric complications. Yet significant questions remain about the form of their turbulent motions, how those motions organize into very long motions, and how they relate to vortical structures. POD extracts highly energetic structures from flow fields and is one tool for further understanding the turbulence physics. A variety of direct numerical simulations provide velocity fields suitable for detailed analysis. Since POD modes require significant interpretation, this study begins with wall-normal, one-dimensional POD for a set of turbulent channel flows. Important features of the modes and their scaling are interpreted in light of flow physics, leading also to a method of synthesizing one-dimensional POD modes. Properties of a pipe flow simulation are then studied via several methods. The presence of very long streamwise motions is assessed using a number of statistical quantities, including energy spectra, which are compared to experiments. Further properties of energy spectra, including their relation to fictitious forces associated with mean Reynolds stress, are considered in depth. After reviewing salient features of turbulent structures previously observed in relevant experiments, structures in the pipe flow are examined in greater detail. A variety of methods reveal the organization patterns of structures in instantaneous fields and their associated vortical structures. Properties of POD modes for a boundary layer flow are also considered. Finally, the very wide modes that occur when computing POD modes in all three canonical flows are compared. The results demonstrate that POD extracts structures relevant to characterizing wall-bounded turbulent flows. However, significant care is necessary in interpreting POD results; the modes can be categorized according to their self-similarity. Additional analysis techniques reveal how smaller motions organize in characteristic patterns to compose very long motions in pipe flows. The very large scale motions are observed to contribute large fractions of the turbulent kinetic energy and Reynolds stress. The associated vortical structures possess characteristics of hairpins, but are commonly distorted from pristine hairpin geometries.
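For orientation, POD modes can be computed from a snapshot matrix via the SVD: the left singular vectors are the spatial modes, and the squared singular values give each mode's energy. The snippet below is a generic method-of-snapshots sketch with synthetic data, not the dissertation's wall-normal, one-dimensional formulation.

```python
import numpy as np

def pod_modes(snapshots):
    """POD of a (n_points, n_snapshots) matrix of fluctuating velocity
    fields, one snapshot per column, mean already removed."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = s**2 / np.sum(s**2)   # fraction of kinetic energy per mode
    return U, energy

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 50))     # stand-in for simulation snapshots
X -= X.mean(axis=1, keepdims=True)      # subtract the temporal mean
modes, energy = pod_modes(X)
print(energy[:5])                       # energy captured by the leading modes
```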
Contributors: Baltzer, Jon Ronald (Author) / Adrian, Ronald J (Thesis advisor) / Calhoun, Ronald (Committee member) / Gelb, Anne (Committee member) / Herrmann, Marcus (Committee member) / Squires, Kyle D (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This thesis considers the application of basis pursuit to several problems in system identification. After reviewing some key results in the theory of basis pursuit and compressed sensing, numerical experiments are presented that explore the application of basis pursuit to the black-box identification of linear time-invariant (LTI) systems with both finite (FIR) and infinite (IIR) impulse responses, temporal systems modeled by ordinary differential equations (ODE), and spatio-temporal systems modeled by partial differential equations (PDE). For LTI systems, the experimental results illustrate existing theory for the identification of LTI FIR systems. Basis pursuit does not identify sparse LTI IIR systems, but it does identify alternate systems with nearly identical magnitude response characteristics when the number of non-zero coefficients is small. For ODE systems, the experimental results are consistent with earlier research on differential equations that are polynomials in the system variables, illustrating the feasibility of the approach for small numbers of non-zero terms. For PDE systems, it is demonstrated that basis pursuit can be applied to system identification, along with a performance comparison with an existing method. In all cases the impact of measurement noise on identification performance is considered, and it is empirically observed that a high signal-to-noise ratio is required for successful application of basis pursuit to system identification problems.
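A minimal version of the LTI FIR experiment can be set up in a few lines: form the convolution (Toeplitz) matrix from the input and recover the sparse impulse response by $\ell_1$ minimization. The sketch below uses ISTA on the LASSO relaxation as a stand-in for a basis pursuit solver; the dimensions, sparsity pattern, and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)

n = 64                                   # FIR length
h_true = np.zeros(n)
h_true[[3, 17, 40]] = [1.0, -0.6, 0.3]   # sparse impulse response

m = 200                                  # number of output samples
u = rng.standard_normal(m)               # excitation input
A = toeplitz(u, np.r_[u[0], np.zeros(n - 1)])  # convolution matrix
y = A @ h_true + 0.01 * rng.standard_normal(m)

# ISTA for  min_h 0.5*||A h - y||_2^2 + lam*||h||_1
lam = 0.1
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
h = np.zeros(n)
for _ in range(2000):
    v = h - A.T @ (A @ h - y) / L
    h = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)

print(np.flatnonzero(np.abs(h) > 0.05))  # recovered support, ideally [3, 17, 40]
```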
Contributors: Thompson, Robert C. (Author) / Platte, Rodrigo (Thesis advisor) / Gelb, Anne (Committee member) / Cochran, Douglas (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This dissertation involves three problems that are all related by the use of the singular value decomposition (SVD) or generalized singular value decomposition (GSVD). The specific problems are (i) derivation of a generalized singular value expansion (GSVE), (ii) analysis of the properties of the chi-squared method for regularization parameter selection in the case of nonnormal data, and (iii) formulation of a partial canonical correlation concept for continuous time stochastic processes. The finite dimensional SVD has an infinite dimensional generalization to compact operators. However, the form of the finite dimensional GSVD developed in, e.g., Van Loan does not extend directly to infinite dimensions, because a key step in the proof is specific to the matrix case. Thus, the first problem of interest is to find an infinite dimensional version of the GSVD. One such GSVE, for compact operators on separable Hilbert spaces, is developed. The second problem concerns regularization parameter estimation. The chi-squared method for nonnormal data is considered. A form of the optimized regularization criterion that pertains to measured data or signals with nonnormal noise is derived. Large sample theory for phi-mixing processes is used to derive a central limit theorem for the chi-squared criterion that holds under certain conditions. Departures from normality manifest as the need for a possibly different scale factor in the normalization than would be used under the assumption of normality. The consequences of this large sample work are illustrated by empirical experiments. For the third problem, a new approach is examined for studying the relationships between a collection of functional random variables. The idea is based on the work of Sunder, which provides mappings that connect the elements of algebraic and orthogonal direct sums of subspaces in a Hilbert space. When combined with a key isometry associated with a particular Hilbert space indexed stochastic process, this leads to a useful formulation for situations that involve the study of several second order processes. In particular, applying this approach to two processes provides an independent derivation of the functional canonical correlation analysis (CCA) results of Eubank and Hsing. For more than two processes, a rigorous derivation of the functional partial canonical correlation analysis (PCCA) concept that applies to both finite and infinite dimensional settings is obtained.
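For orientation, the chi-squared principle for Tikhonov regularization can be viewed as a root-finding problem: choose the parameter so that the noise-whitened regularized functional matches its expected value under normal noise. The sketch below takes the degrees of freedom as m for simplicity and uses bisection on the monotone cost; the dissertation's contribution concerns how this criterion and its scale factor change for nonnormal data.

```python
import numpy as np

def chi2_lambda(A, b, sigma, lam_lo=1e-6, lam_hi=1e3, iters=60):
    """Find lam so that J(lam) = ||A x - b||^2/sigma^2 + lam^2*||x||^2,
    evaluated at the Tikhonov minimizer x, equals its nominal expected
    value m (a simplified normal-noise chi-squared criterion)."""
    m, n = A.shape

    def J(lam):
        x = np.linalg.solve(A.T @ A / sigma**2 + lam**2 * np.eye(n),
                            A.T @ b / sigma**2)
        return np.linalg.norm(A @ x - b)**2 / sigma**2 + lam**2 * (x @ x)

    lo, hi = lam_lo, lam_hi
    for _ in range(iters):               # J(lam) is nondecreasing in lam
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if J(mid) < m else (lo, mid)
    return np.sqrt(lo * hi)

rng = np.random.default_rng(4)
A = rng.standard_normal((80, 40))
b = A @ rng.standard_normal(40) + 0.1 * rng.standard_normal(80)
print(chi2_lambda(A, b, sigma=0.1))      # selected regularization parameter
```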
Contributors: Huang, Qing (Author) / Eubank, Randall (Thesis advisor) / Renaut, Rosemary (Thesis advisor) / Cochran, Douglas (Committee member) / Gelb, Anne (Committee member) / Young, Dennis (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled for use at each time step. To determine which sensors to use, various metrics have been suggested. One such metric is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffice to recover the initial state. However, to employ observability for sensor scheduling, the binary definition needs to be expanded so that one can measure how observable a system is with a particular measurement scheme, i.e., one needs a metric of observability. Most methods utilizing an observability metric address sensor selection rather than sensor scheduling. In this dissertation we present a new approach that utilizes observability for sensor scheduling by employing the condition number of the observability matrix as the metric and using column subset selection to choose which sensors to use at each time step. To this end, a rank-revealing QR factorization algorithm is used to select sensors. Several numerical experiments demonstrate the performance of the proposed scheme.
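The column subset selection step can be sketched with a pivoted (rank-revealing) QR: build the observability matrix over a horizon, let the QR pivots rank the candidate measurement rows, and report the condition number of the selected submatrix. This is a condensed illustration, not the dissertation's per-time-step scheduling algorithm; the system matrices and horizon below are arbitrary.

```python
import numpy as np
from scipy.linalg import qr

def select_measurements(A, C, horizon, k):
    """Rank rows of the observability matrix O = [C; CA; ...; CA^(h-1)]
    with a rank-revealing QR of O^T and keep the first k pivots."""
    n = A.shape[0]
    blocks, Ak = [], np.eye(n)
    for _ in range(horizon):
        blocks.append(C @ Ak)
        Ak = A @ Ak
    O = np.vstack(blocks)
    _, _, piv = qr(O.T, pivoting=True)   # pivots order the measurements
    chosen = np.sort(piv[:k])
    return chosen, np.linalg.cond(O[chosen])

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) / 2      # 4-state system (illustrative)
C = rng.standard_normal((3, 4))          # 3 candidate sensors
rows, kappa = select_measurements(A, C, horizon=5, k=6)
print(rows, kappa)                       # chosen (sensor, time) rows; condition number
```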
Contributors: Ilkturk, Utku (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Renaut, Rosemary (Committee member) / Armbruster, Dieter (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Inverse problems model real world phenomena from data, where the data are often noisy and the models contain errors. This leads to instabilities, multiple solution vectors and thus ill-posedness. To solve ill-posed inverse problems, regularization is typically used as a penalty function to induce stability and allow for the incorporation of a priori information about the desired solution. In this thesis, high order regularization techniques are developed for image and function reconstruction from noisy or misleading data. Specifically, the incorporation of the Polynomial Annihilation operator allows for the accurate exploitation of the sparse representation of each function in the edge domain.

This dissertation tackles three main problems through the development of novel reconstruction techniques: (i) reconstructing one and two dimensional functions from multiple measurement vectors using variance based joint sparsity when a subset of the measurements contain false and/or misleading information, (ii) approximating discontinuous solutions to hyperbolic partial differential equations by enhancing typical solvers with $\ell_1$ regularization, and (iii) reducing model assumptions in synthetic aperture radar image formation, specifically for the purpose of speckle reduction and phase error correction. While the common thread tying these problems together is the use of high order regularization, the defining characteristics of each of these problems create unique challenges.

Fast and robust numerical algorithms are also developed so that these problems can be solved efficiently without requiring fine-tuning of parameters. Indeed, the numerical experiments presented in this dissertation strongly suggest that the new methodology provides more accurate and robust solutions to a variety of ill-posed inverse problems.
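To make the regularization concrete, the sketch below solves a small high-order $\ell_1$-regularized denoising problem by ADMM, with an order-3 finite-difference matrix standing in for the Polynomial Annihilation operator (the actual PA transform is more general, handling nonuniform data and arbitrary order). All names and parameter values are illustrative assumptions.

```python
import numpy as np

def diff_operator(n, order=3):
    """Simplified stand-in for the PA transform: an order-th difference
    matrix, which annihilates low-degree polynomials and is large at jumps."""
    L = np.eye(n)
    for _ in range(order):
        L = np.diff(L, axis=0)
    return L

def l1_highorder_denoise(y, lam=0.5, order=3, rho=1.0, iters=300):
    """ADMM for  min_x 0.5*||x - y||^2 + lam*||L x||_1."""
    n = len(y)
    L = diff_operator(n, order)
    z = np.zeros(L.shape[0])
    u = np.zeros_like(z)
    M = np.eye(n) + rho * L.T @ L
    for _ in range(iters):
        x = np.linalg.solve(M, y + rho * L.T @ (z - u))
        w = L @ x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # shrinkage
        u += L @ x - z
    return x

t = np.linspace(0, 1, 200)
clean = np.where(t < 0.5, t**2, 1.0 - t)           # piecewise smooth, one jump
y = clean + 0.05 * np.random.default_rng(3).standard_normal(t.size)
x = l1_highorder_denoise(y)                        # denoised, jump preserved
```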
Contributors: Scarnati, Theresa (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Gardner, Carl (Committee member) / Sanders, Toby (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The detection and characterization of transients in signals is important in many wide-ranging applications, from computer vision to audio processing. Edge detection on images is typically realized using small, local, discrete convolution kernels, but this is not possible when samples are measured directly in the frequency domain. The concentration factor edge detection method was therefore developed to realize an edge detector directly from spectral data. This thesis explores the possibility of detecting edges from the phase of the spectral data alone, that is, without the magnitude of the sampled spectral data. Prior work has demonstrated that the spectral phase contains particularly important information about underlying features in a signal. Furthermore, the concentration factor method yields some insight into the detection of edges in spectral phase data. An iterative design approach was taken to realize an edge detector using only the spectral phase data, also allowing for the design of an edge detector when phase data are intermittent or corrupted. Problem formulations showing the power of the design approach are given throughout. A post-processing scheme relying on the difference of multiple edge approximations yields a strong edge detector which is shown to be resilient to noisy and intermittent phase data. Lastly, a thresholding technique is applied to give an explicit, enhanced edge detector ready to be used. Examples are demonstrated throughout on both signals and images.
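Purely as an illustration of the premise that the phase carries edge locations, the sketch below normalizes each Fourier coefficient to unit magnitude, discarding the magnitudes, and then applies a standard concentration factor edge sum. The unit-magnitude normalization is an assumption made here for demonstration; it is not the detector designed in the thesis.

```python
import numpy as np

M, N = 512, 64
x = np.linspace(-np.pi, np.pi, M, endpoint=False)
f = np.where(np.abs(x) < 1.0, 1.0, 0.0)        # box signal, jumps at x = -1 and 1
k = np.arange(-N, N + 1)
fhat = np.exp(-1j * np.outer(k, x)) @ f / M    # Fourier coefficients (quadrature)

phase = fhat / np.maximum(np.abs(fhat), 1e-15) # keep phase only (assumed setup)
sigma = np.pi * np.abs(k) / N                  # first-order polynomial factor
edge = np.real((1j * np.sign(k) * sigma * phase) @ np.exp(1j * np.outer(k, x)))
# 'edge' spikes near x = -1 and x = 1; the magnitudes are no longer calibrated
# to the jump heights, so a thresholding step would classify the true jumps.
```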
Contributors: Reynolds, Alexander Bryce (Author) / Gelb, Anne (Thesis director) / Cochran, Douglas (Committee member) / Viswanathan, Adityavikram (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Physical limitations of Magnetic Resonance Imaging (MRI) introduce different errors in the image reconstruction process. The discretization and truncation of data under the discrete Fourier transform cause oscillations near jump discontinuities, a phenomenon known as the Gibbs effect. Using Gaussian-based approximations rather than the discrete Fourier transform to reconstruct images serves to diminish the Gibbs effect slightly, especially when coupled with filtering. Additionally, a simplifying assumption is made that, during signal collection, the amount of transverse magnetization decay at a point does not depend on that point's position in space. Though this methodology significantly reduces operational run-time, it nonetheless introduces geometric error, which can be mitigated using Single-Shot PARSE (SS-PARSE).
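The Gibbs effect and the benefit of filtering are easy to reproduce: reconstruct a step from truncated Fourier data and compare the raw partial sum with a filtered one. The snippet below applies a simple Gaussian filter to the coefficients as a stand-in for the thesis's Gaussian-based approximations; the filter width is an arbitrary choice.

```python
import numpy as np

M, N = 1024, 32
x = np.linspace(-np.pi, np.pi, M, endpoint=False)
f = np.sign(x)                                    # square wave: jump at x = 0
k = np.arange(-N, N + 1)
fhat = np.exp(-1j * np.outer(k, x)) @ f / M       # truncated Fourier coefficients

recon = lambda w: np.real((w * fhat) @ np.exp(1j * np.outer(k, x)))
raw = recon(np.ones_like(k))                      # oscillates near the jump (Gibbs)
filt = recon(np.exp(-9.0 * (k / N) ** 2))         # Gaussian filter damps the ringing

print(raw.max(), filt.max())  # roughly 1.18 (Gibbs overshoot) vs. close to 1.0
```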
Contributors: Neufer, Ian Douglas (Author) / Platte, Rodrigo (Thesis director) / Gelb, Anne (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
The concentration factor edge detection method was developed to compute the locations and values of jump discontinuities in a piecewise-analytic function from its first few Fourier series coefficients. The method approximates the singular support of a piecewise smooth function using an altered Fourier conjugate partial sum. The accuracy and characteristic features of the resulting jump function approximation depend on the filters applied in this sum, known as concentration factors. Recent research showed that these concentration factors can be designed using a flexible iterative framework, improving the overall accuracy and robustness of the method, especially in the case where some Fourier data are untrustworthy or altogether missing. Hypothesis testing (HT) methods have been used to determine how well the original concentration factor method locates edges from noisy Fourier data. This thesis combines the iterative design of concentration factors with hypothesis testing by presenting a new algorithm that incorporates multiple concentration factors into one statistical test, which proves more effective at determining jump discontinuities than the previous HT methods. This thesis also examines how the quantity and location of the Fourier data affect the accuracy of the HT methods. Numerical examples are provided.
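For reference, the core of the concentration factor method fits in a few lines: form the generalized conjugate partial sum $i \sum_k \mathrm{sgn}(k)\,\sigma(|k|/N)\,\hat{f}_k e^{ikx}$, which concentrates at the jumps of $f$ and approximates their heights. The example below uses the admissible first-order polynomial factor $\sigma(\eta) = \pi\eta$, one choice among many; the test function and N are illustrative.

```python
import numpy as np

M, N = 1024, 40
x = np.linspace(-np.pi, np.pi, M, endpoint=False)
f = np.where(np.abs(x) < 1.0, np.cos(x) + 1.5, np.cos(x))  # +1.5 jump at x=-1, -1.5 at x=+1
k = np.arange(-N, N + 1)
fhat = np.exp(-1j * np.outer(k, x)) @ f / M                # Fourier coefficients

sigma = np.pi * np.abs(k) / N                              # concentration factor
jump = np.real((1j * np.sign(k) * sigma * fhat) @ np.exp(1j * np.outer(k, x)))

i = np.argmax(jump)
print(x[i], jump[i])   # approximately (-1, 1.5): jump location and height
```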
Contributors: Lubold, Shane Michael (Author) / Gelb, Anne (Thesis director) / Cochran, Doug (Committee member) / Viswanathan, Aditya (Committee member) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05