Matching Items (78)
Description
This thesis outlines the development of a vector retrieval technique, based on data assimilation, for a coherent Doppler LIDAR (Light Detection and Ranging). A detailed analysis of the Optimal Interpolation (OI) technique for vector retrieval is presented. Through several modifications to the OI technique, it is shown that the modified technique results in significant improvement in velocity retrieval accuracy. These modifications include changes to innovation covariance partitioning, covariance binning, and analysis increment calculation. It is observed that the modified technique makes retrievals with better accuracy, preserves local information better, and compares well with tower measurements. In order to study the error of representativeness and the vector retrieval error, a lidar simulator was constructed. Using the lidar simulator, a thorough sensitivity analysis of the lidar measurement process and vector retrieval is carried out. The error of representativeness as a function of scales of motion and the sensitivity of vector retrieval to look angle are quantified. Using the modified OI technique, a study of nocturnal flow in Owens Valley, CA, was carried out to identify and understand uncharacteristic events on the night of March 27, 2006. Observations from 1030 UTC to 1230 UTC (0230 to 0430 local time) on March 27, 2006, are presented. Lidar observations show complex and uncharacteristic flows, such as sudden bursts of westerly cross-valley wind mixing with the dominant up-valley wind. Model results from the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS®) and other in situ instrumentation are used to corroborate and complement these observations. The modified OI technique is used to identify uncharacteristic and extreme flow events at a wind development site. Estimates of turbulence and shear from this technique are compared to tower measurements.
A formulation for equivalent wind speed in the presence of variations in wind speed and direction, combined with shear, is developed and used to determine wind energy content in the presence of turbulence.
Contributors: Choukulkar, Aditya (Author) / Calhoun, Ronald (Thesis advisor) / Mahalov, Alex (Committee member) / Kostelich, Eric (Committee member) / Huang, Huei-Ping (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Advances in experimental techniques have allowed for investigation of molecular dynamics at ever smaller temporal and spatial scales. There is currently a varied and growing body of literature which demonstrates the phenomenon of \emph{anomalous diffusion} in physics, engineering, and biology. In particular, many diffusive-type processes in the cell have been observed to follow a power-law scaling, $\propto t^\alpha$, of the mean square displacement of a particle. This contrasts with the expected linear behavior of particles undergoing normal diffusion. \emph{Anomalous sub-diffusion} ($\alpha<1$) has been attributed to factors such as cytoplasmic crowding of macromolecules and trap-like structures in the subcellular environment non-linearly slowing the diffusion of molecules. Compared to normal diffusion, signaling molecules in these constrained spaces can be more concentrated at the source and more diffuse at longer distances, potentially affecting the signaling dynamics. As diffusion at the cellular scale is a fundamental mechanism of cellular signaling, and additionally is an implicit underlying mathematical assumption of many canonical models, a closer look at models of anomalous diffusion is warranted. Approaches in the literature include derivations of fractional differential diffusion equations (FDE) and continuous time random walks (CTRW). However, these approaches are typically based on \emph{ad-hoc} assumptions on time- and space-jump distributions. We apply recent developments in asymptotic techniques on collisional kinetic equations to develop a FDE model of sub-diffusion due to trapping regions and investigate the nature of the space/time probability distributions associated with trapping regions.
This approach both contrasts with and complements the stochastic CTRW approach by positing more physically realistic underlying assumptions on the motion of particles and their interactions with trapping regions, and by additionally allowing varying assumptions to be applied individually to the traps and to the particle kinetics.
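The defining relation above, mean square displacement $\propto t^\alpha$, can be checked numerically. The sketch below (all names and parameter values are illustrative, not taken from the thesis) estimates the exponent by a log-log fit of the ensemble MSD; ordinary Brownian motion recovers $\alpha \approx 1$, while a sub-diffusive process would give $\alpha < 1$.

```python
import numpy as np

def msd_exponent(paths, dt=1.0):
    """Fit alpha in MSD(t) ~ t**alpha from an ensemble of 1-D trajectories."""
    msd = np.mean((paths - paths[:, :1]) ** 2, axis=0)  # ensemble MSD
    t = dt * np.arange(1, paths.shape[1])
    alpha, _ = np.polyfit(np.log(t), np.log(msd[1:]), 1)  # log-log slope
    return alpha

rng = np.random.default_rng(0)
# Normal diffusion: independent Gaussian increments -> MSD grows linearly
brownian = np.cumsum(rng.normal(size=(2000, 500)), axis=1)
alpha = msd_exponent(brownian)   # close to 1 for normal diffusion
```

Trap-dominated walks, such as those modeled by the FDE approach above, would replace the Gaussian increments with heavy-tailed waiting times and drive the fitted exponent below one.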
Contributors: Holeva, Thomas Matthew (Author) / Ringhofer, Christian (Thesis advisor) / Baer, Steve (Thesis advisor) / Crook, Sharon (Committee member) / Gardner, Carl (Committee member) / Taylor, Jesse (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
In 1968, phycologist M.R. Droop published his famous discovery on the functional relationship between growth rate and internal nutrient status of algae in chemostat culture. The simple notion that growth is directly dependent on intracellular nutrient concentration is useful for understanding the dynamics in many ecological systems. The cell quota in particular lends itself to ecological stoichiometry, which is a powerful framework for mathematical ecology. Three models are developed based on the cell quota principle in order to demonstrate its applications beyond chemostat culture.

First, a data-driven model is derived for neutral lipid synthesis in green microalgae with respect to nitrogen limitation. This model synthesizes several established frameworks in phycology and ecological stoichiometry. The model demonstrates how the cell quota is a useful abstraction for understanding the metabolic shift to neutral lipid production that is observed in certain oleaginous species.

Next, a producer-grazer model is developed based on the cell quota model and nutrient recycling. The model incorporates a novel feedback loop to account for animal toxicity due to the accumulation of nitrogen waste. The model exhibits rich, complex dynamics, which leave several open mathematical questions.

Lastly, disease dynamics in vivo are in many ways analogous to those of an ecosystem, giving natural extensions of the cell quota concept to disease modeling. Prostate cancer can be modeled within this framework, with androgen the limiting nutrient and the prostate and cancer cells as competing species. Here the cell quota model provides a useful abstraction for the dependence of cellular proliferation and apoptosis on androgen and the androgen receptor. Androgen ablation therapy is often used for patients in biochemical recurrence or late-stage disease progression and is in general initially effective. However, for many patients the cancer eventually develops resistance months to years after treatment begins. Understanding how and predicting when hormone therapy facilitates evolution of resistant phenotypes has immediate implications for treatment. Cell quota models for prostate cancer can be useful tools for this purpose and motivate applications to other diseases.
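The Droop relation underlying all three models ties growth to the internal quota: $\mu(Q) = \mu_{\max}(1 - q_{\min}/Q)$, so growth vanishes at the subsistence quota and saturates above it. A minimal batch-culture sketch of this idea follows; all parameter values are illustrative assumptions, not taken from the thesis.

```python
MU_MAX, Q_MIN = 1.2, 0.05   # max growth rate (/day), subsistence quota (assumed)
V_MAX, K_S = 0.3, 0.1       # Michaelis-Menten uptake parameters (assumed)

def droop_growth(Q):
    """Droop growth rate: zero at the subsistence quota, saturating above it."""
    return MU_MAX * (1.0 - Q_MIN / max(Q, Q_MIN))

def batch_culture(x=0.01, Q=0.1, S=1.0, dt=0.01, days=20.0):
    """Forward-Euler batch run: biomass x, internal quota Q, external nutrient S."""
    for _ in range(int(days / dt)):
        v = V_MAX * S / (K_S + S)      # per-biomass nutrient uptake
        mu = droop_growth(Q)
        x, Q, S = (x + dt * mu * x,          # biomass grows at the quota-set rate
                   Q + dt * (v - mu * Q),    # quota: uptake minus growth dilution
                   max(S - dt * v * x, 0.0)) # external nutrient is drawn down
    return x, Q, S

x_f, Q_f, S_f = batch_culture()   # growth stalls as Q falls toward Q_MIN
```

The run reproduces the qualitative chemostat-free behavior: biomass increases while the external nutrient is depleted, and the internal quota relaxes toward the subsistence value, at which growth shuts off.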
Contributors: Packer, Aaron (Author) / Kuang, Yang (Thesis advisor) / Nagy, John (Committee member) / Smith, Hal (Committee member) / Kostelich, Eric (Committee member) / Kang, Yun (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
It is possible, in a properly controlled environment such as industrial metrology, to make significant headway against the usual real-world constraints on image-based position measurement using the techniques of image registration, and to achieve repeatable feature measurements on the order of 0.3% of a pixel, about an order of magnitude improvement over conventional real-world performance. These measurements are then used as inputs for a model-optimal, model-agnostic smoothing procedure applied to the calibration of a laser scribe and to the online tracking of a velocimeter from video input. Using appropriate smooth interpolation to increase the effective sample density can reduce uncertainty and improve estimates. Using the proper negative offset of the template function creates a convolution with higher local curvature than either the template or the target function, which allows improved center-finding. Using the Akaike Information Criterion with a smoothing spline, it is possible to perform a model-optimal smooth on scalar measurements without knowing the underlying model, and to determine the function describing the uncertainty in that optimal smooth. An example of the empirical derivation of the parameters of a rudimentary Kalman filter from this smooth is then provided and tested. Using the techniques of Exploratory Data Analysis and the "Formulize" genetic algorithm tool to convert the spline models into more accessible analytic forms resulted in a stable, properly generalized Kalman filter whose performance and simplicity exceed "textbook" implementations. Validation of the measurement shows that, in the analytic case, it leads to arbitrary precision in feature measurement; in a reasonable test case using the proposed methods, a consistent maximum error of around 0.3% of the length of a pixel was achieved; and in practice, using pixels 700 nm in size, feature position was located to within ±2 nm.
Robust applicability is demonstrated by the measurement of indicator position for a King model 2-32-G-042 rotameter.
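The empirically tuned filter itself is not specified in the abstract; as a generic illustration of the structure being calibrated, here is a minimal scalar Kalman filter with a random-walk state model. The process variance `q` and measurement variance `r` are exactly the two quantities one would derive empirically, e.g. from a model-optimal spline smooth; the values below are illustrative.

```python
import random

def kalman_1d(measurements, q=1e-4, r=0.01, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter with a random-walk state model."""
    x, p, estimates = x0, p0, []
    for z in measurements:
        p += q                   # predict: state drifts as a random walk
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # correct with the innovation z - x
        p *= 1.0 - k             # updated error variance
        estimates.append(x)
    return estimates

random.seed(1)
noisy = [5.0 + random.gauss(0.0, 0.1) for _ in range(200)]
estimate = kalman_1d(noisy)[-1]   # settles near the true level of 5.0
```

With `q` much smaller than `r`, the steady-state gain is small and the filter averages over many samples, which is the behavior a "textbook" implementation would be compared against.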
Contributors: Munroe, Michael R (Author) / Phelan, Patrick (Thesis advisor) / Kostelich, Eric (Committee member) / Mahalov, Alex (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Numerical simulations are very helpful in understanding the physics of the formation of structure and galaxies. However, it is sometimes difficult to interpret model data with respect to observations, partly due to the difficulties and background noise inherent in observation. The goal here is to attempt to bridge this gap between simulation and observation by rendering the model output in image format, which is then processed by tools commonly used in observational astronomy. Images are synthesized in various filters by folding the output of cosmological simulations of gas dynamics with star formation and dark matter with the Bruzual-Charlot stellar population synthesis models. A variation of the Virgo-Gadget numerical simulation code is used with the hybrid gas and stellar formation models of Springel and Hernquist (2003). Outputs taken at various redshifts are stacked to create a synthetic view of the simulated star clusters. Source Extractor (SExtractor) is used to find groupings of stellar populations, which are considered as galaxies or galaxy building blocks, and photometry is used to estimate the rest-frame luminosities and distribution functions. With further refinements, this is expected to provide support for missions such as JWST, as well as to probe what additional physics is needed to model the data. The results show good agreement in many respects with observed properties of the galaxy luminosity function (LF) over a wide range of high redshifts. In particular, the slope (alpha) when fitted to the standard Schechter function shows excellent agreement, both in value and in evolution with redshift, when compared with observation. Discrepancies of other properties with observation are seen to be a result of limitations of the simulation and of additional feedback mechanisms that are needed.
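The standard Schechter function referenced for the luminosity-function fits has the form $\phi(L)\,\mathrm{d}L = \phi^* (L/L^*)^{\alpha} e^{-L/L^*}\,\mathrm{d}L/L^*$: a power law of slope $\alpha$ with an exponential cutoff above the characteristic luminosity $L^*$. A short numerical check (parameter values are illustrative, not the thesis's fitted values) that the faint end reduces to a pure power law of slope $\alpha$:

```python
import numpy as np

def schechter(L, phi_star=1.0, L_star=1.0, alpha=-1.7):
    """Schechter luminosity function: power law of slope alpha with an
    exponential cutoff above the characteristic luminosity L_star."""
    x = np.asarray(L, dtype=float) / L_star
    return (phi_star / L_star) * x ** alpha * np.exp(-x)

# At the faint end (L << L*) the exponential is ~1, so the log-log slope
# between two faint luminosities recovers alpha:
faint_slope = np.log(schechter(1e-3) / schechter(1e-4)) / np.log(10.0)
```

Fitting this form to synthetic source counts is what yields the slope (alpha) compared against observation in the abstract above.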
Contributors: Morgan, Robert (Author) / Windhorst, Rogier A (Thesis advisor) / Scannapieco, Evan (Committee member) / Rhoads, James (Committee member) / Gardner, Carl (Committee member) / Belitsky, Andrei (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Factory production is stochastic in nature, with time-varying input and output processes that are non-stationary stochastic processes. Hence, the principal quantities of interest are random variables. Typical modeling of such behavior involves numerical simulation and statistical analysis. A deterministic closure model leading to a second order model for the product density and product speed has previously been proposed. The resulting partial differential equations (PDE) are compared to discrete event simulations (DES) that simulate factory production as a time-dependent M/M/1 queuing system. Three fundamental scenarios for the time-dependent influx are studied: an instantaneous step up/down of the mean arrival rate; an exponential step up/down of the mean arrival rate; and periodic variation of the mean arrival rate. It is shown that the second order model, in general, yields significant improvement over current first order models. Specifically, the agreement between the DES and the PDE for the step up, and for periodic forcing that is not too rapid, is very good. Adding diffusion to the PDE further improves the agreement. The analysis also points to fundamental open issues regarding the deterministic modeling of low signal-to-noise-ratio regimes of some stochastic processes, and the possibility of resonance in deterministic models that is not present in the original stochastic process.
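The M/M/1 queue used as the reference DES can be simulated with a short event loop. This sketch is a generic textbook discrete-event simulation, not the thesis code; it time-averages the number of jobs in the system, which for stationary input with utilization $\rho = \lambda/\mu$ should approach the exact value $\rho/(1-\rho)$ (the time-dependent scenarios above simply replace the constant arrival rate with a time-varying one).

```python
import random

def mm1_mean_in_system(lam, mu, horizon, seed=0):
    """Discrete-event M/M/1 simulation; returns the time-averaged
    number of jobs in the system over [0, horizon]."""
    rng = random.Random(seed)
    t, n, area = 0.0, 0, 0.0
    next_arrival = rng.expovariate(lam)
    next_departure = float("inf")          # empty system: no departure pending
    while t < horizon:
        t_next = min(next_arrival, next_departure, horizon)
        area += n * (t_next - t)           # accumulate n(t) dt
        t = t_next
        if t >= horizon:
            break
        if t == next_arrival:
            n += 1
            next_arrival = t + rng.expovariate(lam)
            if n == 1:                     # server was idle: start service
                next_departure = t + rng.expovariate(mu)
        else:
            n -= 1
            next_departure = (t + rng.expovariate(mu)) if n else float("inf")
    return area / horizon

# Stationary check: rho = 0.5, exact mean number in system = rho/(1-rho) = 1
L_sim = mm1_mean_in_system(lam=0.5, mu=1.0, horizon=50000.0)
```

The PDE closure models in the thesis are judged by how well their density and speed fields reproduce exactly this kind of DES statistic under time-dependent influx.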
Contributors: Wienke, Matthew (Author) / Armbruster, Dieter (Thesis advisor) / Jones, Donald (Committee member) / Platte, Rodrigo (Committee member) / Gardner, Carl (Committee member) / Ringhofer, Christian (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Swarms of animals (fish, birds, locusts, etc.) are a common occurrence, but their coherence and method of organization pose a major question for mathematics and biology. The Vicsek model and the Attraction-Repulsion model are two models that have been proposed to explain the emergence of collective motion. A major issue for the Vicsek model is that its particles are not attracted to each other, leaving the swarm with alignment in velocity but without spatial coherence. Restricting the particles to a bounded domain generates global spatial coherence of swarms while maintaining velocity alignment. While individual particles are specularly reflected at the boundary, the swarm as a whole is not. As a result, new dynamical swarming solutions are found.

The Attraction-Repulsion model, set with a long-range attraction and short-range repulsion interaction potential, typically stabilizes to a well-studied flock steady-state solution. The particles of a flock remain spatially coherent but have no spatial bound and explore all space. A bounded domain with specularly reflecting walls traps the particles within a specific region. A fundamental refraction law for a swarm impacting on a planar boundary is derived. The swarm reflection varies from specular for a swarm dominated by kinetic energy to inelastic for a swarm dominated by potential energy. Inelastic collisions lead to alignment with the wall and to damped pulsating oscillations of the swarm. The fundamental refraction law provides a one-dimensional iterative map that allows for a prediction and analysis of the trajectory of the center of mass of a flock in a channel and in a square domain.

The wall-collision analysis is extended to a scattering experiment by setting two identical flocks on a collision course. The two-particle dynamics is studied analytically and shows a transition from scattering (diverging flocks) to bound states in the form of oscillations or parallel motions. Numerical studies of colliding flocks show the same transition, where the bound states become either a single translating flock or a rotating mill.
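A minimal version of the Vicsek dynamics with specular wall reflection, as described above, can be sketched as follows. Box size, speed, interaction radius, and noise amplitude are illustrative choices, not the thesis's parameters; each step aligns every particle with the mean heading of its neighbours, adds angular noise, moves it, and reflects it at the walls.

```python
import numpy as np

def vicsek_step(pos, theta, box=10.0, speed=0.1, radius=1.0, eta=0.1, rng=None):
    """One update of a 2-D Vicsek model in a box with specular walls."""
    rng = rng or np.random.default_rng(0)
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    nbr = d2 < radius ** 2                      # neighbour mask (includes self)
    mean_theta = np.arctan2((np.sin(theta) * nbr).sum(1),
                            (np.cos(theta) * nbr).sum(1))
    theta = mean_theta + eta * rng.uniform(-np.pi, np.pi, len(theta))
    pos = pos + speed * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    for k in (0, 1):                            # specular reflection per axis
        lo, hi = pos[:, k] < 0.0, pos[:, k] > box
        pos[lo, k] *= -1.0                      # fold position back inside
        pos[hi, k] = 2.0 * box - pos[hi, k]
        flip = lo | hi                          # flip the heading component
        theta[flip] = np.pi - theta[flip] if k == 0 else -theta[flip]
    return pos, theta

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, (100, 2))
theta = rng.uniform(-np.pi, np.pi, 100)
for _ in range(50):
    pos, theta = vicsek_step(pos, theta, rng=rng)
```

Individual particles reflect specularly here, yet as the abstract notes the swarm as a whole need not, which is what produces the new bounded-domain swarming solutions.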
Contributors: Thatcher, Andrea (Author) / Armbruster, Hans (Thesis advisor) / Motsch, Sebastien (Committee member) / Ringhofer, Christian (Committee member) / Platte, Rodrigo (Committee member) / Gardner, Carl (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The tools developed for investigating dynamical systems have provided critical understanding of a wide range of physical phenomena. Here these tools are used to gain further insight into scalar transport and how it is affected by mixing. The aim of this research is to investigate the efficiency of several partitioning methods that demarcate flow fields into dynamically distinct regions, and the correlation of finite-time statistics from the advection-diffusion equation with these regions.

For autonomous systems, invariant manifold theory can be used to separate the system into dynamically distinct regions. Although there is no equivalent theory for nonautonomous systems, a similar analysis can be done. Systems with general time dependence must resort to finite-time transport barriers for partitioning; these barriers are the edges of Lagrangian coherent structures (LCS), the analogs of the stable and unstable manifolds of invariant manifold theory. Using the coherent structures of a flow to analyze the statistics of trapping, flight, and residence times, the signatures of anomalous diffusion are obtained.

This research also investigates the use of linear models for approximating the elements of the covariance matrix of nonlinear flows, and then applying the covariance matrix approximation over coherent regions. The first- and second-order moments can fully describe an ensemble evolution in linear systems; however, there is no direct method for nonlinear systems. The problem is compounded by the fact that the moments of nonlinear flows typically do not have analytic representations, so direct numerical simulations would be needed to obtain the moments throughout the domain. To circumvent these computations, the nonlinear system is approximated by many linear systems for which analytic expressions for the moments exist. The parameters introduced in the linear models are obtained locally from the nonlinear deformation tensor.
Contributors: Walker, Phillip (Author) / Tang, Wenbo (Thesis advisor) / Kostelich, Eric (Committee member) / Mahalov, Alex (Committee member) / Moustaoui, Mohamed (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Inverse problems model real-world phenomena from data, where the data are often noisy and the models contain errors. This leads to instabilities, multiple solution vectors, and thus ill-posedness. To solve ill-posed inverse problems, regularization is typically used as a penalty function to induce stability and to allow for the incorporation of a priori information about the desired solution. In this thesis, high order regularization techniques are developed for image and function reconstruction from noisy or misleading data. Specifically, the incorporation of the Polynomial Annihilation operator allows for the accurate exploitation of the sparse representation of each function in the edge domain.

This dissertation tackles three main problems through the development of novel reconstruction techniques: (i) reconstructing one and two dimensional functions from multiple measurement vectors using variance based joint sparsity when a subset of the measurements contain false and/or misleading information, (ii) approximating discontinuous solutions to hyperbolic partial differential equations by enhancing typical solvers with l1 regularization, and (iii) reducing model assumptions in synthetic aperture radar image formation, specifically for the purpose of speckle reduction and phase error correction. While the common thread tying these problems together is the use of high order regularization, the defining characteristics of each of these problems create unique challenges.

Fast and robust numerical algorithms are also developed so that these problems can be solved efficiently without requiring fine tuning of parameters. Indeed, the numerical experiments presented in this dissertation strongly suggest that the new methodology provides more accurate and robust solutions to a variety of ill-posed inverse problems.
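The basic building block behind $l_1$-regularized solvers of the kind mentioned in (ii) is the soft-thresholding (proximal) operator; in the simplest denoising setting it is the exact solution. A small generic sketch, unrelated to the dissertation's specific Polynomial-Annihilation-based operators:

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||x||_1: shrink each entry toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Pure denoising, min_x 0.5*||x - y||^2 + lam*||x||_1, is solved exactly by
# one soft-threshold: small (noise-level) entries are set to zero.
y = np.array([3.0, 0.05, -2.0, 0.01])
x = soft_threshold(y, 0.1)   # -> [2.9, 0.0, -1.9, 0.0]
```

In the edge domain, where the signals of interest are sparse, iterating this step inside a solver (ISTA/FISTA-style) is what promotes the sparse representations exploited above.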
Contributors: Scarnati, Theresa (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Gardner, Carl (Committee member) / Sanders, Toby (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Predicting resistant prostate cancer is critical for lowering medical costs and improving the quality of life of advanced prostate cancer patients. I formulate, compare, and analyze two mathematical models that aim to forecast future levels of prostate-specific antigen (PSA). I accomplish these tasks by employing clinical data of locally advanced prostate cancer patients undergoing androgen deprivation therapy (ADT). I demonstrate that the inverse problem of parameter estimation might be too complicated, and that simply relying on data fitting can give incorrect conclusions, since the estimated parameter values carry large errors and the parameters might be unidentifiable. I provide confidence intervals for the forecasts using data assimilation via an ensemble Kalman filter. Using the ensemble Kalman filter, I perform dual estimation of parameters and state variables to test the prediction accuracy of the models. Finally, I present a novel model with time delay and a delay-dependent parameter. I provide a geometric stability result to study the behavior of this model and show that the inclusion of time delay may improve the accuracy of predictions. I also demonstrate with clinical data that the inclusion of the delay-dependent parameter facilitates the identification and estimation of parameters.
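The ensemble Kalman filter analysis step used for such dual state/parameter estimation can be sketched generically. This is a textbook stochastic (perturbed-observation) EnKF, not the thesis implementation; ensemble size, observation operator, and variances are illustrative.

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_var, rng):
    """Stochastic EnKF analysis step.
    ensemble: (n_members, n_state); H: (n_obs, n_state) observation map."""
    X = ensemble
    A = X - X.mean(axis=0)                       # ensemble anomalies
    P = A.T @ A / (len(X) - 1)                   # sample forecast covariance
    S = H @ P @ H.T + obs_var * np.eye(H.shape[0])
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), (len(X), H.shape[0]))
    return X + (perturbed - X @ H.T) @ K.T       # analysis ensemble

rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, (500, 1))           # 500 members, scalar state
post = enkf_update(prior, np.array([4.0]), np.eye(1), 0.25, rng)
post_mean = float(post.mean())                   # pulled most of the way to 4
```

Dual estimation augments the state vector with the model parameters so that the same update simultaneously corrects both; the spread of the analysis ensemble is what yields the forecast confidence intervals mentioned above.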
Contributors: Baez, Javier (Author) / Kuang, Yang (Thesis advisor) / Kostelich, Eric (Committee member) / Crook, Sharon (Committee member) / Gardner, Carl (Committee member) / Nagy, John (Committee member) / Arizona State University (Publisher)
Created: 2017