Matching Items (27)
Description
Locomotion of microorganisms is commonly observed in nature, and some aspects of their motion can be replicated by synthetic motors. Synthetic motors rely on a variety of propulsion mechanisms, including auto-diffusiophoresis, auto-electrophoresis, and bubble generation. Regardless of the source of the locomotion, the motion of any motor can be characterized by its translational and rotational velocity and effective diffusivity. In a uniform environment the long-time motion of a motor can be fully characterized by the effective diffusivity. In this work it is shown that when motors possess both translational and rotational velocity, the motor transitions from a short-time diffusivity to a long-time diffusivity at a time of π/ω. The short-time diffusivities are two to three orders of magnitude larger than the diffusivity of a Brownian sphere of the same size, increase linearly with concentration, and scale as v²/(2ω). The measured long-time diffusivities are five times lower than the short-time diffusivities, scale as v²/(2D_r[1 + (ω/D_r)²]), and exhibit a maximum as a function of concentration. The sensitivity of a colloid's velocity and effective diffusivity to its local environment (e.g., fuel concentration) suggests that the motors can accumulate in a bounded system, analogous to biological chemokinesis. Chemokinesis is the non-uniform equilibrium concentration that arises from a bounded random walk of swimming organisms in a chemical concentration gradient. For non-swimming particles we term this response diffusiokinesis. We show that particles that migrate only by Brownian thermal motion are capable of achieving a non-uniform pseudo-equilibrium distribution in a diffusivity gradient. This concentration is the result of a bounded random-walk process in which, at any given time, a larger percentage of particles can be found in regions of low diffusivity than in regions of high diffusivity.
Individual particles are not trapped in any given region but at equilibrium the net flux between regions is zero. For Brownian particles the gradient in diffusivity is achieved by creating a viscosity gradient in a microfluidic device. The distribution of the particles is described by the Fokker-Planck equation for variable diffusivity. The strength of the probe concentration gradient is proportional to the strength of the diffusivity gradient and inversely proportional to the mean probe diffusivity in the channel in accordance with the no flux condition at steady state. This suggests that Brownian colloids, natural or synthetic, will concentrate in a bounded system in response to a gradient in diffusivity and that the magnitude of the response is proportional to the magnitude of the gradient in diffusivity divided by the mean diffusivity in the channel.
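The bounded-random-walk mechanism described in this abstract can be illustrated with a minimal one-dimensional simulation (a hedged sketch, not the dissertation's model: the diffusivity profile, parameters, and Itô discretization below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusivity(x):
    # Illustrative linear diffusivity profile on [0, 1]:
    # low D near x = 0, high D near x = 1.
    return 0.1 + 0.9 * x

n_particles, n_steps, dt = 5000, 5000, 1e-3
x = rng.uniform(0.0, 1.0, n_particles)  # start uniformly distributed

for _ in range(n_steps):
    # Ito-discretized Brownian step with position-dependent diffusivity.
    x += np.sqrt(2.0 * diffusivity(x) * dt) * rng.standard_normal(n_particles)
    # Reflecting boundaries keep the random walk bounded.
    x = np.abs(x)
    x = np.where(x > 1.0, 2.0 - x, x)

frac_low_D = np.mean(x < 0.5)
print(f"fraction in low-diffusivity half: {frac_low_D:.2f}")
```

Under this Itô convention the steady-state density is proportional to 1/D(x), so most particles are found in the low-diffusivity half, consistent with the bounded-random-walk picture above; no individual particle is trapped, and the net flux between regions vanishes at steady state.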
Contributors
Marine, Nathan Arasmus (Author) / Posner, Jonathan D (Thesis advisor) / Adrian, Ronald J (Committee member) / Frakes, David (Committee member) / Phelan, Patrick E (Committee member) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created
2013
Description
Many methods of passive flow control rely on changes to surface morphology. Roughening surfaces to induce boundary layer transition to turbulence, and in turn delay separation, is a powerful approach to lowering drag on bluff bodies. While the broad influence of roughness and other means of passive flow control on delaying separation from bluff bodies is known, the basic mechanisms are not well understood. Of particular interest for the current work is understanding the role of surface dimpling on boundary layers. A computational approach is employed, and the study has two main goals. The first is to understand and advance the numerical methodology utilized for the computations. The second is to shed some light on the details of how surface dimples distort boundary layers and cause transition to turbulence. Simulations are performed of the flow over a simplified configuration: a boundary layer over a dimpled flat plate. The flow is modeled using an immersed boundary as a representation of the dimpled surface, along with direct numerical simulation of the Navier-Stokes equations. The dimple geometry used is fixed and is that of a spherical depression in the flat plate with a depth-to-diameter ratio of 0.1. The dimples are arranged in staggered rows, with the centers of the dimple bottoms spaced one diameter apart in both the spanwise and streamwise directions. The simulations are conducted for both two and three staggered rows of dimples. Flow variables are normalized at the inlet by the dimple depth, and the Reynolds number is specified as 4000 (based on freestream velocity and inlet boundary layer thickness). First- and second-order statistics show that the turbulent boundary layers correlate well with channel flow and zero-pressure-gradient flat-plate boundary layer flows in the viscous sublayer and the buffer layer, but deviate farther from the wall.
The forcing of transition to turbulence by the dimples is unlike the transition caused by a naturally transitioning flow, a small perturbation such as trip tape in experimental flows, or noise in the inlet condition for computational flows.
Contributors
Gutierrez-Jensen, Jeremiah J (Author) / Squires, Kyle (Thesis advisor) / Hermann, Marcus (Committee member) / Gelb, Anne (Committee member) / Arizona State University (Publisher)
Created
2011
Description
The theme of this work is the development of fast numerical algorithms for sparse optimization, as well as their applications in medical imaging and source localization using sensor array processing. Due to the recently proposed theory of Compressive Sensing (CS), the $\ell_1$ minimization problem has attracted increasing attention for its ability to exploit sparsity. Traditional interior point methods encounter computational difficulties in solving CS applications. In the first part of this work, a fast algorithm based on the augmented Lagrangian method for solving the large-scale TV-$\ell_1$ regularized inverse problem is proposed. Specifically, by taking advantage of the separable structure, the original problem can be approximated via the sum of a series of simple functions with closed-form solutions. A preconditioner for solving the block Toeplitz with Toeplitz block (BTTB) linear system is proposed to accelerate the computation. An in-depth discussion of the rate of convergence and the optimal parameter selection criteria is given. Numerical experiments are used to test the performance and robustness of the proposed algorithm over a wide range of parameter values. Applications of the algorithm in magnetic resonance (MR) imaging and a comparison with other existing methods are included. The second part of this work is the application of the TV-$\ell_1$ model to source localization using sensor arrays. The array output is reformulated as a sparse waveform via an over-complete basis, and the $\ell_p$-norm's properties in detecting sparsity are studied. An algorithm is proposed for minimizing the resulting non-convex problem. According to the results of numerical experiments, the proposed algorithm, with the aid of the $\ell_p$-norm, can resolve closely distributed sources with higher accuracy than other existing methods.
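The "simple functions with closed-form solutions" that splitting and augmented Lagrangian schemes exploit typically include an $\ell_1$ proximal subproblem, whose minimizer is elementwise soft-thresholding. A generic sketch (not the dissertation's algorithm; the problem sizes and names are illustrative), here driving a few proximal-gradient (ISTA) iterations:

```python
import numpy as np

def soft_threshold(v, tau):
    """Closed-form minimizer of tau*||x||_1 + 0.5*||x - v||_2^2, elementwise."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# A small proximal-gradient (ISTA) run on  min 0.5*||Ax - b||^2 + lam*||x||_1,
# the kind of l1 subproblem that splitting methods solve repeatedly
# via the closed-form prox above.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
b = A @ x_true

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = largest eigenvalue of A^T A

def objective(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

x = np.zeros(80)
obj_start = objective(x)
for _ in range(200):
    # Gradient step on the smooth term, then the l1 prox (soft-thresholding).
    x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
obj_end = objective(x)
```

With the step size bounded by 1/L, each iteration is guaranteed not to increase the composite objective, which is the basic convergence mechanism such first-order CS solvers rely on.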
Contributors
Shen, Wei (Author) / Mittlemann, Hans D (Thesis advisor) / Renaut, Rosemary A. (Committee member) / Jackiewicz, Zdzislaw (Committee member) / Gelb, Anne (Committee member) / Ringhofer, Christian (Committee member) / Arizona State University (Publisher)
Created
2011
Description
Structural features of canonical wall-bounded turbulent flows are described using several techniques, including proper orthogonal decomposition (POD). The canonical wall-bounded turbulent flows of channels, pipes, and flat-plate boundary layers include physics important to a wide variety of practical fluid flows with a minimum of geometric complications. Yet significant questions remain regarding the form of their turbulent motions, the way these motions organize to compose very long motions, and their relationship to vortical structures. POD extracts highly energetic structures from flow fields and is one tool for further understanding the turbulence physics. A variety of direct numerical simulations provide velocity fields suitable for detailed analysis. Since POD modes require significant interpretation, this study begins with wall-normal, one-dimensional POD for a set of turbulent channel flows. Important features of the modes and their scaling are interpreted in light of the flow physics, leading also to a method of synthesizing one-dimensional POD modes. Properties of a pipe flow simulation are then studied via several methods. The presence of very long streamwise motions is assessed using a number of statistical quantities, including energy spectra, which are compared to experiments. Further properties of energy spectra, including their relation to fictitious forces associated with the mean Reynolds stress, are considered in depth. After reviewing salient features of turbulent structures previously observed in relevant experiments, structures in the pipe flow are examined in greater detail. A variety of methods reveal the organization patterns of structures in instantaneous fields and their associated vortical structures. Properties of POD modes for a boundary layer flow are also considered. Finally, very wide modes that occur when computing POD modes in all three canonical flows are compared. The results demonstrate that POD extracts structures relevant to characterizing wall-bounded turbulent flows.
However, significant care is necessary in interpreting POD results; the modes can be categorized according to their self-similarity. Additional analysis techniques reveal the organization of smaller motions in characteristic patterns to compose very long motions in pipe flows. The very large scale motions are observed to contribute large fractions of the turbulent kinetic energy and Reynolds stress. The associated vortical structures possess characteristics of hairpins, but are commonly distorted from pristine hairpin geometries.
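The snapshot-POD procedure underlying analyses like these can be sketched in a few lines: stack flow snapshots as columns, subtract the mean, and take an SVD; the left singular vectors are spatial POD modes and the squared singular values rank their energy content. (A generic illustration on synthetic data, not the dissertation's solver; the two-structure signal below is an assumption for demonstration.)

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "velocity field" snapshots: n_points spatial samples, n_snap snapshots.
n_points, n_snap = 200, 50
xs = np.linspace(0.0, 2.0 * np.pi, n_points)
ts = rng.uniform(0.0, 10.0, n_snap)

# Two coherent structures plus weak noise.
snapshots = (np.outer(np.sin(xs), np.cos(ts))
             + 0.3 * np.outer(np.sin(2 * xs), np.sin(2 * ts))
             + 0.01 * rng.standard_normal((n_points, n_snap)))

# Remove the mean field, then SVD: columns of `modes` are spatial POD modes.
fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
modes, sing_vals, _ = np.linalg.svd(fluct, full_matrices=False)
energy = sing_vals**2 / np.sum(sing_vals**2)  # fractional energy per mode
```

The modes come out orthonormal with energies sorted in decreasing order, and for this synthetic field the first two modes capture nearly all of the fluctuation energy, mirroring how POD isolates the most energetic structures in real wall-bounded flows.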
Contributors
Baltzer, Jon Ronald (Author) / Adrian, Ronald J (Thesis advisor) / Calhoun, Ronald (Committee member) / Gelb, Anne (Committee member) / Herrmann, Marcus (Committee member) / Squires, Kyle D (Committee member) / Arizona State University (Publisher)
Created
2012
Description
This thesis considers the application of basis pursuit to several problems in system identification. After reviewing some key results in the theory of basis pursuit and compressed sensing, numerical experiments are presented that explore the application of basis pursuit to the black-box identification of linear time-invariant (LTI) systems with both finite (FIR) and infinite (IIR) impulse responses, temporal systems modeled by ordinary differential equations (ODE), and spatio-temporal systems modeled by partial differential equations (PDE). For LTI systems, the experimental results illustrate existing theory for identification of LTI FIR systems. It is seen that basis pursuit does not identify sparse LTI IIR systems, but it does identify alternate systems with nearly identical magnitude response characteristics when there are small numbers of non-zero coefficients. For ODE systems, the experimental results are consistent with earlier research for differential equations that are polynomials in the system variables, illustrating feasibility of the approach for small numbers of non-zero terms. For PDE systems, it is demonstrated that basis pursuit can be applied to system identification, along with a comparison in performance with another existing method. In all cases the impact of measurement noise on identification performance is considered, and it is empirically observed that high signal-to-noise ratio is required for successful application of basis pursuit to system identification problems.
Contributors
Thompson, Robert C. (Author) / Platte, Rodrigo (Thesis advisor) / Gelb, Anne (Committee member) / Cochran, Douglas (Committee member) / Arizona State University (Publisher)
Created
2012
Description
This dissertation involves three problems that are all related by the use of the singular value decomposition (SVD) or generalized singular value decomposition (GSVD). The specific problems are (i) derivation of a generalized singular value expansion (GSVE), (ii) analysis of the properties of the chi-squared method for regularization parameter selection in the case of nonnormal data and (iii) formulation of a partial canonical correlation concept for continuous time stochastic processes. The finite dimensional SVD has an infinite dimensional generalization to compact operators. However, the form of the finite dimensional GSVD developed in, e.g., Van Loan does not extend directly to infinite dimensions as a result of a key step in the proof that is specific to the matrix case. Thus, the first problem of interest is to find an infinite dimensional version of the GSVD. One such GSVE for compact operators on separable Hilbert spaces is developed. The second problem concerns regularization parameter estimation. The chi-squared method for nonnormal data is considered. A form of the optimized regularization criterion that pertains to measured data or signals with nonnormal noise is derived. Large sample theory for phi-mixing processes is used to derive a central limit theorem for the chi-squared criterion that holds under certain conditions. Departures from normality are seen to manifest in the need for a possibly different scale factor in normalization rather than what would be used under the assumption of normality. The consequences of our large sample work are illustrated by empirical experiments. For the third problem, a new approach is examined for studying the relationships between a collection of functional random variables. The idea is based on the work of Sunder that provides mappings to connect the elements of algebraic and orthogonal direct sums of subspaces in a Hilbert space. 
When combined with a key isometry associated with a particular Hilbert space indexed stochastic process, this leads to a useful formulation for situations that involve the study of several second order processes. In particular, using our approach with two processes provides an independent derivation of the functional canonical correlation analysis (CCA) results of Eubank and Hsing. For more than two processes, a rigorous derivation of the functional partial canonical correlation analysis (PCCA) concept that applies to both finite and infinite dimensional settings is obtained.
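For reference, the finite dimensional matrix GSVD alluded to above, in the diagonal form usually associated with Van Loan and with Paige and Saunders, can be written as follows; the GSVE developed in the dissertation extends this structure to compact operators on separable Hilbert spaces (the notation here is a standard textbook form, not necessarily the dissertation's):

```latex
Given $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{p \times n}$,
\[
A = U C X^{T}, \qquad B = V S X^{T}, \qquad C^{T}C + S^{T}S = I_{n},
\]
where $U$ and $V$ are orthogonal, $X$ is nonsingular, and $C$, $S$ are
nonnegative diagonal; the generalized singular values are
$\gamma_{i} = c_{i}/s_{i}$.
```

The step that fails to carry over to infinite dimensions is matrix-specific, which is what motivates the operator-theoretic GSVE construction described above.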
Contributors
Huang, Qing (Author) / Eubank, Randall (Thesis advisor) / Renaut, Rosemary (Thesis advisor) / Cochran, Douglas (Committee member) / Gelb, Anne (Committee member) / Young, Dennis (Committee member) / Arizona State University (Publisher)
Created
2012
Description
Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled for use at each time step. To determine which sensors to use, various metrics have been suggested. One possible metric is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffice to recover the initial state. However, to employ observability for sensor scheduling, the binary definition needs to be expanded so that one can measure how observable a system is with a particular measurement scheme, i.e., one needs a metric of observability. Most methods utilizing an observability metric address sensor selection rather than sensor scheduling. In this dissertation we present a new approach that utilizes observability for sensor scheduling by employing the condition number of the observability matrix as the metric and using column subset selection to create an algorithm that chooses which sensors to use at each time step. To this end we use a rank-revealing QR factorization algorithm to select sensors. Several numerical experiments demonstrate the performance of the proposed scheme.
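The column-subset-selection step described above can be sketched with a pivoted (rank-revealing) QR factorization: pivoting orders the columns by how much new information each contributes, and the first k pivots identify a well-conditioned subset. (A generic illustration; the dimensions and the direct use of QR pivots as the selection rule are assumptions, not the dissertation's exact algorithm.)

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(4)

# Rows of M play the role of candidate sensor measurements
# (e.g., rows of an observability-type matrix); select k of them.
n_state, n_candidates, k = 5, 20, 5
M = rng.standard_normal((n_candidates, n_state))

# Pivoted QR on M^T orders candidate rows by decreasing "new information".
_, _, piv = qr(M.T, pivoting=True)
selected = piv[:k]          # indices of the k chosen sensors
M_sel = M[selected, :]      # measurement matrix for the chosen subset

# Conditioning of the selected subset, the metric the scheduling uses.
cond_selected = np.linalg.cond(M_sel)
```

Because the pivoting greedily picks the column of largest remaining norm after orthogonalization, the selected submatrix is guaranteed full rank whenever the candidate set is, which keeps the observability-based metric finite.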
Contributors
Ilkturk, Utku (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Renaut, Rosemary (Committee member) / Armbruster, Dieter (Committee member) / Arizona State University (Publisher)
Created
2015
Description
Inverse problems model real-world phenomena from data, where the data are often noisy and the models contain errors. This leads to instabilities, multiple solution vectors, and thus ill-posedness. To solve ill-posed inverse problems, regularization is typically used as a penalty function to induce stability and allow for the incorporation of a priori information about the desired solution. In this thesis, high order regularization techniques are developed for image and function reconstruction from noisy or misleading data. Specifically, incorporating the Polynomial Annihilation operator allows for accurate exploitation of the sparse representation of each function in the edge domain.

This dissertation tackles three main problems through the development of novel reconstruction techniques: (i) reconstructing one and two dimensional functions from multiple measurement vectors using variance based joint sparsity when a subset of the measurements contain false and/or misleading information, (ii) approximating discontinuous solutions to hyperbolic partial differential equations by enhancing typical solvers with l1 regularization, and (iii) reducing model assumptions in synthetic aperture radar image formation, specifically for the purpose of speckle reduction and phase error correction. While the common thread tying these problems together is the use of high order regularization, the defining characteristics of each of these problems create unique challenges.

Fast and robust numerical algorithms are also developed so that these problems can be solved efficiently without requiring fine tuning of parameters. Indeed, the numerical experiments presented in this dissertation strongly suggest that the new methodology provides more accurate and robust solutions to a variety of ill-posed inverse problems.
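The "sparse representation in the edge domain" exploited throughout this work can be illustrated with the simplest polynomial-annihilation-style operator, a second-order difference: it annihilates locally linear data exactly, so applying it to a piecewise-smooth signal leaves zeros everywhere except at edges and kinks. (A toy illustration of the sparsifying idea, not the high-order PA operator of the dissertation.)

```python
import numpy as np

# Piecewise-linear signal with a single kink at index 5: slope 1, then slope 3.
i = np.arange(11, dtype=float)
signal = np.where(i < 5, i, 5.0 + 3.0 * (i - 5.0))

# The second-order difference annihilates degree-1 polynomials exactly,
# so only the kink survives in the "edge domain".
edge_domain = np.diff(signal, n=2)
nonzero = np.flatnonzero(np.abs(edge_domain) > 1e-12)
```

The transformed signal is sparse (a single nonzero entry at the kink), which is exactly the structure that an l1 penalty on the edge-domain representation rewards; the actual polynomial annihilation operator generalizes this stencil to arbitrary order and nonuniform grids.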
Contributors
Scarnati, Theresa (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Gardner, Carl (Committee member) / Sanders, Toby (Committee member) / Arizona State University (Publisher)
Created
2018
Description
Over the past three decades, particle image velocimetry (PIV) has grown continuously to become an informative and robust experimental tool for fluid mechanics research. Compared to the early stage of PIV development, the dynamic range of PIV has been improved by about an order of magnitude (Adrian, 2005; Westerweel et al., 2013). Further improvement requires a breakthrough innovation, which constitutes the main motivation of this dissertation. N-pulse particle image velocimetry-accelerometry (N-pulse PIVA, where N ≥ 3) is a promising technique in this regard. It employs bursts of N pulses to gain advantages in both spatial and temporal resolution. The performance improvement from N-pulse PIVA is studied using particle tracking (i.e., N-pulse PTVA), and it is shown that an enhancement of at least another order of magnitude is achievable. Furthermore, the capability of N-pulse PIVA to measure unsteady acceleration and force is demonstrated in the context of an oscillating cylinder interacting with the surrounding fluid. The cylinder motion, the fluid velocity and acceleration, and the fluid force exerted on the cylinder are successfully measured. In addition, a key issue of multi-camera registration for the implementation of N-pulse PIVA is addressed, with an accuracy of 0.001 pixel. Subsequently, two applications of N-pulse PTVA to complex flows and turbulence are presented. A novel 8-pulse PTVA analysis is developed and validated to accurately resolve particle unsteady drag in post-shock flows. It is found that the particle drag is substantially elevated above the standard drag due to flow unsteadiness, and a new drag correlation incorporating particle Reynolds number and unsteadiness is desirable once the uncertainty arising from non-uniform particle size is removed. Next, the estimation of turbulence statistics utilizes the ensemble average of 4-pulse PTV data within a small domain of an optimally determined size.
The estimation of mean velocity, mean velocity gradient and isotropic dissipation rate are presented and discussed by means of synthetic turbulence, as well as a tomographic measurement of turbulent boundary layer. The results indicate the superior capability of the N-pulse PTV based method to extract high-spatial-resolution high-accuracy turbulence statistics.
Contributors
Ding, Liuyang (Author) / Adrian, Ronald J (Thesis advisor) / Frakes, David (Committee member) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Peet, Yulia (Committee member) / Arizona State University (Publisher)
Created
2018
Description
Rapid expansion of dense beds of fine, spherical particles subjected to rapid depressurization is studied in a vertical shock tube. As the particle bed is unloaded, a high-speed video camera captures the dramatic evolution of the particle bed structure. Pressure transducers are used to measure the dynamic pressure changes during the particle bed expansion process. Image processing, signal processing, and Particle Image Velocimetry techniques are used to examine the relationships between particle size, initial bed height, bed expansion rate, and gas velocities.

The gas-particle interface and the particle bed as a whole expand and evolve in stages. First, the bed swells nearly homogeneously for a very brief period of time (< 2 ms). Shortly afterward, the interface begins to develop instabilities as it continues to rise, with particles nearest the wall rising more quickly. Meanwhile, the bed fractures into layers and then breaks down further into cellular-like structures. The rate at which the structural evolution occurs is shown to be dependent on particle size. Additionally, the rate of the overall bed expansion is shown to be dependent on particle size and initial bed height.

Taller particle beds and beds composed of smaller-diameter particles are found to be associated with faster bed-expansion rates, as measured by the velocity of the gas-particle interface. However, the expansion wave travels more slowly through these same beds. It was also found that higher gas velocities above the gas-particle interface, measured via particle image velocimetry (PIV), were associated with particle beds composed of larger-diameter particles. The gas dilation between the shock tube diaphragm and the particle bed interface is more dramatic when the distance between the gas-particle interface and the diaphragm is decreased, as is the case for taller beds.

To further elucidate the complexities of this multiphase compressible flow, simple OpenFOAM (Weller, 1998) simulations of the shock tube experiment were performed and compared to the measured bed expansion rates, pressure fluctuations, and gas velocities. In all cases, the trends relating bed height and particle diameter to expansion rates, pressure fluctuations, and gas velocities matched well between experiments and simulations. In most cases, the experimentally measured bed rise rates and the simulated bed rise rates matched reasonably well at early times. The trends and overall values of the pressure fluctuations and gas velocities matched well between the experiments and simulations, shedding light on the effects each parameter has on the overall flow.
Contributors
Zunino, Heather (Author) / Adrian, Ronald J (Thesis advisor) / Clarke, Amanda (Committee member) / Chen, Kangping (Committee member) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created
2019