Matching Items (11)

Description
Many methods of passive flow control rely on changes to surface morphology. Roughening surfaces to induce boundary layer transition to turbulence, and in turn delay separation, is a powerful approach to lowering drag on bluff bodies. While the broad influence of roughness and other means of passive flow control on delaying separation from bluff bodies is known, the basic mechanisms are not well understood. Of particular interest for the current work is understanding the role of surface dimpling on boundary layers. A computational approach is employed and the study has two main goals. The first is to understand and advance the numerical methodology utilized for the computations. The second is to shed some light on the details of how surface dimples distort boundary layers and cause transition to turbulence. Simulations are performed of the flow over a simplified configuration: a boundary layer over a dimpled flat plate. The flow is modeled using an immersed boundary as a representation of the dimpled surface, together with direct numerical simulation of the Navier-Stokes equations. The dimple geometry is fixed: a spherical depression in the flat plate with a depth-to-diameter ratio of 0.1. The dimples are arranged in staggered rows, with the centers of the dimples spaced one diameter apart in both the spanwise and streamwise directions. The simulations are conducted for both two and three staggered rows of dimples. Flow variables are normalized at the inlet by the dimple depth, and the Reynolds number is 4000 (based on freestream velocity and inlet boundary layer thickness). First- and second-order statistics show that the turbulent boundary layers correlate well with channel flow and zero-pressure-gradient flat-plate boundary layers in the viscous sublayer and the buffer layer, but deviate farther from the wall. The transition to turbulence forced by the dimples is unlike the transition seen in a naturally transitioning flow, transition caused by a small perturbation such as trip tape in experiments, or transition triggered by noise in the inlet condition in computations.
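As an aside, the dimple geometry described above is simple enough to write down explicitly. The following minimal sketch (in Python, with illustrative names; the coordinate conventions and the half-diameter spanwise offset between alternating rows are assumptions not spelled out in the abstract) constructs the plate surface that an immersed boundary representation would capture:

    import numpy as np

    def dimpled_plate_height(x, z, centers, D=1.0, d=0.1):
        """Surface height y(x, z) of a flat plate (y = 0) with spherical
        dimples of diameter D and depth d = 0.1*D centered at the (x, z)
        points in `centers`; y = -d at each dimple center."""
        R = (D**2 / 4.0 + d**2) / (2.0 * d)   # spherical-cap radius (1.3*D for d/D = 0.1)
        y = np.zeros_like(x, dtype=float)
        for xc, zc in centers:
            r2 = (x - xc)**2 + (z - zc)**2
            cap = (R - d) - np.sqrt(np.maximum(R**2 - r2, 0.0))  # 0 at the rim, -d at the center
            y = np.where(r2 <= (D / 2.0)**2, np.minimum(y, cap), y)
        return y

    # Staggered rows with centers one diameter apart in the streamwise (x) and
    # spanwise (z) directions; alternating rows offset by half a diameter (assumed).
    D = 1.0
    centers = [(i * D, j * D + (0.5 * D if i % 2 else 0.0))
               for i in range(3) for j in range(-4, 5)]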
Contributors: Gutierrez-Jensen, Jeremiah J (Author) / Squires, Kyle (Thesis advisor) / Herrmann, Marcus (Committee member) / Gelb, Anne (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Structural features of canonical wall-bounded turbulent flows are described using several techniques, including proper orthogonal decomposition (POD). The canonical wall-bounded turbulent flows of channels, pipes, and flat-plate boundary layers include physics important to a wide variety of practical fluid flows with a minimum of geometric complications. Yet significant questions remain about the form of their turbulent motions, how those motions organize to compose very long motions, and their relationship to vortical structures. POD extracts highly energetic structures from flow fields and is one tool to further understand the turbulence physics. A variety of direct numerical simulations provide velocity fields suitable for detailed analysis. Since POD modes require significant interpretation, this study begins with wall-normal, one-dimensional POD for a set of turbulent channel flows. Important features of the modes and their scaling are interpreted in light of flow physics, also leading to a method of synthesizing one-dimensional POD modes. Properties of a pipe flow simulation are then studied via several methods. The presence of very long streamwise motions is assessed using a number of statistical quantities, including energy spectra, which are compared to experiments. Further properties of energy spectra, including their relation to fictitious forces associated with mean Reynolds stress, are considered in depth. After reviewing salient features of turbulent structures previously observed in relevant experiments, structures in the pipe flow are examined in greater detail. A variety of methods reveal organization patterns of structures in instantaneous fields and their associated vortical structures. Properties of POD modes for a boundary layer flow are considered. Finally, very wide modes that occur when computing POD modes in all three canonical flows are compared. The results demonstrate that POD extracts structures relevant to characterizing wall-bounded turbulent flows. However, significant care is necessary in interpreting POD results, for which modes can be categorized according to their self-similarity. Additional analysis techniques reveal the organization of smaller motions in characteristic patterns to compose very long motions in pipe flows. The very large scale motions are observed to contribute large fractions of turbulent kinetic energy and Reynolds stress. The associated vortical structures possess characteristics of hairpins, but are commonly distorted from pristine hairpin geometries.
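As background, the snapshot form of POD can be computed directly from the SVD of a matrix of velocity snapshots. The sketch below is a generic illustration only; the one-dimensional wall-normal POD used here, and the averaging over homogeneous directions it requires, are not shown:

    import numpy as np

    def snapshot_pod(snapshots):
        """Snapshot POD via the SVD. `snapshots` is an (n_points, n_snapshots)
        array of fluctuating velocity fields (mean already subtracted).
        Returns orthonormal spatial modes (columns), the energy captured by
        each mode, and the temporal coefficients."""
        U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
        modes = U                              # spatial POD modes
        energies = s**2 / snapshots.shape[1]   # modal energies
        coeffs = s[:, None] * Vt               # temporal coefficients a_k(t)
        return modes, energies, coeffs

The modes are ordered by energy, which is what makes POD useful for extracting the most energetic structures discussed above.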
Contributors: Baltzer, Jon Ronald (Author) / Adrian, Ronald J (Thesis advisor) / Calhoun, Ronald (Committee member) / Gelb, Anne (Committee member) / Herrmann, Marcus (Committee member) / Squires, Kyle D (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This thesis considers the application of basis pursuit to several problems in system identification. After reviewing some key results in the theory of basis pursuit and compressed sensing, numerical experiments are presented that explore the application of basis pursuit to the black-box identification of linear time-invariant (LTI) systems with both finite (FIR) and infinite (IIR) impulse responses, temporal systems modeled by ordinary differential equations (ODE), and spatio-temporal systems modeled by partial differential equations (PDE). For LTI systems, the experimental results illustrate existing theory for identification of LTI FIR systems. It is seen that basis pursuit does not identify sparse LTI IIR systems, but it does identify alternative systems with nearly identical magnitude response characteristics when there are small numbers of non-zero coefficients. For ODE systems, the experimental results are consistent with earlier research for differential equations that are polynomials in the system variables, illustrating feasibility of the approach for small numbers of non-zero terms. For PDE systems, it is demonstrated that basis pursuit can be applied to system identification, and its performance is compared with that of another existing method. In all cases the impact of measurement noise on identification performance is considered, and it is empirically observed that a high signal-to-noise ratio is required for successful application of basis pursuit to system identification problems.
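To make the LTI FIR case concrete, the sketch below (illustrative only; the function name, solver, and parameters are not taken from the thesis) recovers a sparse impulse response by approximately solving the basis pursuit denoising/LASSO problem with iterative soft thresholding:

    import numpy as np
    from scipy.linalg import toeplitz

    def identify_fir_ista(u, y, n_taps, lam=1e-3, n_iter=5000):
        """Estimate a sparse FIR impulse response h from input u and output
        y ~ conv(u, h) by approximately solving
            min_h  0.5*||A h - y||_2^2 + lam*||h||_1
        with iterative soft thresholding (ISTA)."""
        A = toeplitz(u, np.r_[u[0], np.zeros(n_taps - 1)])  # convolution matrix, shape (len(u), n_taps)
        L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of the data-term gradient
        h = np.zeros(n_taps)
        for _ in range(n_iter):
            z = h - A.T @ (A @ h - y) / L                        # gradient step on the data term
            h = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold
        return h

The regularization weight lam plays the role of the noise tolerance in basis pursuit denoising; larger measurement noise requires larger values, consistent with the signal-to-noise observations above.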
Contributors: Thompson, Robert C. (Author) / Platte, Rodrigo (Thesis advisor) / Gelb, Anne (Committee member) / Cochran, Douglas (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This dissertation involves three problems that are all related by the use of the singular value decomposition (SVD) or generalized singular value decomposition (GSVD). The specific problems are (i) derivation of a generalized singular value expansion (GSVE), (ii) analysis of the properties of the chi-squared method for regularization parameter selection in the case of nonnormal data and (iii) formulation of a partial canonical correlation concept for continuous time stochastic processes. The finite dimensional SVD has an infinite dimensional generalization to compact operators. However, the form of the finite dimensional GSVD developed in, e.g., Van Loan does not extend directly to infinite dimensions as a result of a key step in the proof that is specific to the matrix case. Thus, the first problem of interest is to find an infinite dimensional version of the GSVD. One such GSVE for compact operators on separable Hilbert spaces is developed. The second problem concerns regularization parameter estimation. The chi-squared method for nonnormal data is considered. A form of the optimized regularization criterion that pertains to measured data or signals with nonnormal noise is derived. Large sample theory for phi-mixing processes is used to derive a central limit theorem for the chi-squared criterion that holds under certain conditions. Departures from normality are seen to manifest in the need for a possibly different scale factor in normalization rather than what would be used under the assumption of normality. The consequences of our large sample work are illustrated by empirical experiments. For the third problem, a new approach is examined for studying the relationships between a collection of functional random variables. The idea is based on the work of Sunder that provides mappings to connect the elements of algebraic and orthogonal direct sums of subspaces in a Hilbert space. When combined with a key isometry associated with a particular Hilbert space indexed stochastic process, this leads to a useful formulation for situations that involve the study of several second order processes. In particular, using our approach with two processes provides an independent derivation of the functional canonical correlation analysis (CCA) results of Eubank and Hsing. For more than two processes, a rigorous derivation of the functional partial canonical correlation analysis (PCCA) concept that applies to both finite and infinite dimensional settings is obtained.
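For reference, one common statement of the finite dimensional GSVD alluded to above (following Van Loan) is: for $A \in \mathbb{R}^{m \times n}$ with $m \ge n$ and $B \in \mathbb{R}^{p \times n}$, there exist orthogonal $U$, $V$ and a nonsingular $X$ such that

    U^{T} A X = C = \operatorname{diag}(c_1, \dots, c_n), \qquad
    V^{T} B X = S = \operatorname{diag}(s_1, \dots, s_q), \quad q = \min(p, n),

with $c_i, s_i \ge 0$, the generalized singular values being the ratios $c_i / s_i$. The GSVE developed in the dissertation extends a decomposition of this type to compact operators on separable Hilbert spaces.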
Contributors: Huang, Qing (Author) / Eubank, Randall (Thesis advisor) / Renaut, Rosemary (Thesis advisor) / Cochran, Douglas (Committee member) / Gelb, Anne (Committee member) / Young, Dennis (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled to be used at each time step. To determine which sensors to use, various metrics have been suggested. One possible such metric is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffice to recover the initial state. However, to employ observability for sensor scheduling, the binary definition needs to be expanded so that one can measure how observable a system is with a particular measurement scheme, i.e., one needs a metric of observability. Most methods utilizing an observability metric address sensor selection rather than sensor scheduling. In this dissertation we present a new approach that utilizes observability for sensor scheduling by employing the condition number of the observability matrix as the metric and using column subset selection to create an algorithm that chooses which sensors to use at each time step. To this end we use a rank-revealing QR factorization algorithm to select sensors. Several numerical experiments are used to demonstrate the performance of the proposed scheme.
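The two ingredients named above can be combined in a few lines. The sketch below (a simplification with illustrative names, not the scheduling algorithm developed in the dissertation) uses SciPy's column-pivoted QR to rank candidate sensors and reports the condition number of the resulting observability matrix:

    import numpy as np
    from scipy.linalg import qr

    def observability_matrix(A, C, steps):
        """Stack C, CA, ..., C A^(steps-1)."""
        blocks, Ak = [], np.eye(A.shape[0])
        for _ in range(steps):
            blocks.append(C @ Ak)
            Ak = Ak @ A
        return np.vstack(blocks)

    def select_sensors_rrqr(A, C, n_active):
        """Rank the rows of C (one per candidate sensor) with a rank-revealing,
        column-pivoted QR of C^T, keep the first n_active sensors, and report
        the condition number of the resulting observability matrix."""
        _, _, piv = qr(C.T, pivoting=True)        # pivot order ranks the sensors
        chosen = np.sort(piv[:n_active])
        O = observability_matrix(A, C[chosen], steps=A.shape[0])
        return chosen, np.linalg.cond(O)

Repeating a selection of this kind at every time step, with the condition number tracking how well conditioned the scheduled measurements leave the problem, mirrors the ingredients described above.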
Contributors: Ilkturk, Utku (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Renaut, Rosemary (Committee member) / Armbruster, Dieter (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Inverse problems model real world phenomena from data, where the data are often noisy and models contain errors. This leads to instabilities, multiple solution vectors, and thus ill-posedness. To solve ill-posed inverse problems, regularization is typically used as a penalty function to induce stability and allow for the incorporation of a priori information about the desired solution. In this dissertation, high order regularization techniques are developed for image and function reconstruction from noisy or misleading data. Specifically, the incorporation of the Polynomial Annihilation operator allows for the accurate exploitation of the sparse representation of each function in the edge domain.

This dissertation tackles three main problems through the development of novel reconstruction techniques: (i) reconstructing one and two dimensional functions from multiple measurement vectors using variance based joint sparsity when a subset of the measurements contain false and/or misleading information, (ii) approximating discontinuous solutions to hyperbolic partial differential equations by enhancing typical solvers with l1 regularization, and (iii) reducing model assumptions in synthetic aperture radar image formation, specifically for the purpose of speckle reduction and phase error correction. While the common thread tying these problems together is the use of high order regularization, the defining characteristics of each of these problems create unique challenges.

Fast and robust numerical algorithms are also developed so that these problems can be solved efficiently without requiring fine tuning of parameters. Indeed, the numerical experiments presented in this dissertation strongly suggest that the new methodology provides more accurate and robust solutions to a variety of ill-posed inverse problems.
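To illustrate the variance based joint sparsity idea in problem (i), the sketch below (a hypothetical helper; the exact weighting and normalization used in the dissertation may differ) turns the disagreement between measurement vectors into pointwise weights for a weighted l1 penalty:

    import numpy as np

    def variance_based_weights(sparse_reps, eps=1e-8):
        """`sparse_reps` is an (n_vectors, n_points) array holding the
        sparse-domain (e.g., PA-transformed) representation of the
        reconstruction obtained from each measurement vector. Locations where
        the vectors disagree, which is where false or misleading measurements
        show up, receive smaller weights."""
        v = np.var(sparse_reps, axis=0)              # pointwise variance across vectors
        return 1.0 / (1.0 + v / (v.mean() + eps))    # weights in (0, 1], small where variance is large

The weights then multiply the l1 term pointwise, so the regularization leans most heavily on locations where all measurement vectors agree.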
Contributors: Scarnati, Theresa (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Gardner, Carl (Committee member) / Sanders, Toby (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The retina is the lining in the back of the eye responsible for vision. When light photons hit the retina, the photoreceptors within the retina respond by sending impulses to the optic nerve, which connects to the brain. If there is injury to the eye or hereditary retinal problems, the retina can become detached. Detachment leads to loss of nutrients, such as oxygen and glucose, to the cells in the eye and causes cell death. Sometimes the retina is able to be surgically reattached. If the photoreceptor cells have not died and the reattachment is successful, then these cells are able to regenerate their outer segments (OS), which are essential for their functionality and vitality. In this work we explore how the regrowth of the photoreceptor cells in a healthy eye after retinal detachment can lead to a deeper understanding of how eye cells take up nutrients and regenerate. This work uses a mathematical model for a healthy eye in conjunction with data for photoreceptor regrowth and decay. The parameters of the healthy eye model are estimated from the data, and ranges centered +/- 10% around these estimates are used for sensitivity analysis. Using parameter estimation and sensitivity analysis, we can better understand how certain processes represented by these parameters change within the model as a result of retinal detachment. A deeper understanding of photoreceptor death and growth can help the greater scientific community address currently irreversible conditions that lead to blindness, such as retinal detachment. The analysis in this work shows that maximizing the carrying capacity of the trophic pool and the rate of RdCVF, as well as minimizing nutrient withdrawal by the rods and cones from the trophic pool, results in both the most regrowth and the least cell death after retinal detachment.
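The +/- 10% sensitivity analysis described above can be sketched as a generic one-at-a-time parameter scan; in the sketch below, `model` and `output` are placeholders for the thesis's healthy-eye equations and quantity of interest, which are not reproduced here:

    def sensitivity_scan(model, output, params, delta=0.10):
        """Perturb each estimated parameter by +/- 10%, rerun the model, and
        record the relative change in a scalar output (for example, the
        photoreceptor outer-segment length at the end of the simulation)."""
        base = output(model(params))
        results = {}
        for name, value in params.items():
            rel_changes = []
            for sign in (-1, +1):
                perturbed = dict(params, **{name: value * (1 + sign * delta)})
                rel_changes.append((output(model(perturbed)) - base) / base)
            results[name] = rel_changes   # [relative change at -10%, at +10%]
        return results

The scan gives the relative influence of each process represented by the parameters without assuming anything about the underlying equations.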
Contributors: Goldman, Miriam Ayla (Author) / Camacho, Erika (Thesis director) / Wirkus, Stephen (Committee member) / School of Mathematical and Natural Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
The properties of divergence-free vector field interpolants are explored on uniform and scattered nodes, along with their application to fluid flow problems. These interpolants may be applied to physical problems that require the approximant to have zero divergence, such as the velocity field in the incompressible Navier-Stokes equations and the magnetic and electric fields in Maxwell's equations. In addition, the methods studied here are meshfree, and are suitable for problems defined on complex domains, where mesh generation is computationally expensive or inaccurate, or for problems where the data is only available at scattered locations.

The contributions of this work include a detailed comparison between standard and divergence-free radial basis approximations, a study of the Lebesgue constants for divergence-free approximations and their dependence on node placement, and an investigation of the flat limit of divergence-free interpolants. Finally, numerical solvers for the incompressible Navier-Stokes equations in primitive variables are implemented using discretizations based on traditional and divergence-free kernels. The numerical results are compared to reference solutions obtained with a spectral method.
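For context, a standard way to construct divergence-free interpolants of this kind (not necessarily the construction used in this work) is through matrix-valued kernels built from a scalar radial function $\varphi$,

    \Phi_{\mathrm{div}}(\mathbf{x}) = \left( \nabla \nabla^{T} - \Delta I \right) \varphi(\lVert \mathbf{x} \rVert),

whose columns are divergence-free, so that interpolants of the form $\mathbf{u}(\mathbf{x}) = \sum_j \Phi_{\mathrm{div}}(\mathbf{x} - \mathbf{x}_j)\,\mathbf{c}_j$ satisfy $\nabla \cdot \mathbf{u} = 0$ by construction, regardless of where the nodes $\mathbf{x}_j$ are placed.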
Contributors: Araujo Mitrano, Arthur (Author) / Platte, Rodrigo (Thesis advisor) / Wright, Grady (Committee member) / Welfert, Bruno (Committee member) / Gelb, Anne (Committee member) / Renaut, Rosemary (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Edge detection plays a significant role in signal processing and image reconstruction applications, where it is used to identify important features in the underlying signal or image. In some of these applications, such as magnetic resonance imaging (MRI), data are sampled in the Fourier domain. When the data are sampled uniformly, a variety of algorithms can be used to efficiently extract the edges of the underlying images. However, in cases where the data are sampled non-uniformly, such as in non-Cartesian MRI, standard inverse Fourier transformation techniques are no longer suitable. Methods exist for handling these types of sampling patterns, but they are often ill-equipped for cases where the data are highly non-uniform. This thesis further develops an existing approach to discontinuity detection, the use of concentration factors. Previous research shows that the concentration factor technique can successfully determine jump discontinuities in non-uniform data. However, as the sampling distribution diverges further from uniformity, the efficacy of the identification degrades. This thesis proposes a method for reverse-engineering concentration factors specifically tailored to non-uniform data by employing the finite Fourier frame approximation. Numerical results indicate that this design method produces concentration factors which can more precisely identify jump locations than those previously developed.
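For uniformly sampled Fourier data, the concentration factor approach takes a simple form. The sketch below (using the trigonometric concentration factor, one common choice rather than the factors designed in this thesis) approximates the jump function from coefficients in the convention $f(x) = \sum_k \hat f_k e^{ikx}$:

    import numpy as np
    from scipy.special import sici

    def jump_approximation(fhat, x):
        """Approximate the jump function [f](x) from uniform Fourier
        coefficients fhat[k], k = -N..N (fhat[0] corresponds to k = -N), via
            S_N[f](x) = i * sum_k sgn(k) * sigma(|k|/N) * fhat_k * exp(i*k*x)
        with the trigonometric factor sigma(eta) = pi*sin(pi*eta)/Si(pi)."""
        N = (len(fhat) - 1) // 2
        k = np.arange(-N, N + 1)
        Si_pi = sici(np.pi)[0]                                  # Si(pi) ~ 1.852
        sigma = np.pi * np.sin(np.pi * np.abs(k) / N) / Si_pi   # concentration factor
        modes = np.exp(1j * np.outer(np.atleast_1d(x), k))      # (len(x), 2N+1)
        return np.real(1j * modes @ (np.sign(k) * sigma * fhat))

S_N[f] concentrates at the jump locations and tends to zero where f is smooth. The contribution of this thesis is, in effect, to design sigma when the coefficients are available only at non-uniform modes, via the finite Fourier frame approximation.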
Contributors: Moore, Rachael (Author) / Gelb, Anne (Thesis director) / Davis, Jacqueline (Committee member) / Barrett, The Honors College (Contributor)
Created: 2015-05
Description
High-order methods are known for their accuracy and computational performance when applied to solving partial differential equations and have widespread use in representing images compactly. Nonetheless, high-order methods have difficulty representing functions containing discontinuities or functions having slow spectral decay in the chosen basis. Certain sensing techniques such as MRI and SAR provide data in terms of Fourier coefficients, and thus prescribe a natural high-order basis. The field of compressed sensing has introduced a set of techniques based on $\ell^1$ regularization that promote sparsity and facilitate working with functions having discontinuities. In this dissertation, high-order methods and $\ell^1$ regularization are used to address three problems: reconstructing piecewise smooth functions from sparse and noisy Fourier data, recovering edge locations in piecewise smooth functions from sparse and noisy Fourier data, and reducing time-stepping constraints when numerically solving certain time-dependent hyperbolic partial differential equations.
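The first two problems are naturally posed as $\ell^1$-regularized recovery problems of the generic form

    \min_{\mathbf{f}} \; \frac{1}{2} \big\lVert \mathcal{F}_{\Omega} \mathbf{f} - \hat{\mathbf{f}} \big\rVert_2^2 + \lambda \lVert L \mathbf{f} \rVert_1,

where $\mathcal{F}_{\Omega}$ maps a candidate function to the sampled Fourier modes, $\hat{\mathbf{f}}$ is the sparse, noisy data, $L$ is a sparsifying (for example, edge-detecting) transform, and $\lambda > 0$ balances data fidelity against sparsity; the third problem incorporates a similar $\ell^1$ penalty within the time-stepping scheme. The specific operators, constraints, and solvers differ from problem to problem.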
Contributors: Denker, Dennis (Author) / Gelb, Anne (Thesis advisor) / Archibald, Richard (Committee member) / Armbruster, Dieter (Committee member) / Boggess, Albert (Committee member) / Platte, Rodrigo (Committee member) / Sanders, Toby (Committee member) / Arizona State University (Publisher)
Created: 2016