Matching Items (45)
Description
There has been important progress in understanding ecological dynamics through the development of the theory of ecological stoichiometry. This fast-growing theory provides new constraints and mechanisms that can be formulated into mathematical models. Stoichiometric models incorporate the effects of both food quantity and food quality into a single framework that produces rich dynamics. While the effects of nutrient deficiency on consumer growth are well understood, recent discoveries in ecological stoichiometry suggest that consumer dynamics are affected not only by insufficient food nutrient content (low phosphorus (P): carbon (C) ratio) but also by excess food nutrient content (high P:C). This phenomenon, known as the stoichiometric knife edge, in which animal growth is reduced not only by food with low P content but also by food with high P content, needs to be incorporated into mathematical models. Here we present Lotka-Volterra type models to investigate the growth response of Daphnia to algae of varying P:C ratios. Using a nonsmooth system of two ordinary differential equations (ODEs), we formulate the first model to incorporate the phenomenon of the stoichiometric knife edge. We then extend this stoichiometric model by mechanistically deriving and tracking free P in the environment. The resulting full knife edge model is a nonsmooth system of three ODEs. Bifurcation analysis and numerical simulations of the full model, which explicitly tracks phosphorus, lead to quantitatively different predictions than previous models that neglect to track free nutrients. The full model shows that the grazer population is sensitive to excess nutrient concentrations, as a dynamical free nutrient pool induces extreme changes in grazer population density.
These modeling efforts provide insight into the effects of excess nutrient content on grazer dynamics and deepen our understanding of the effects of stoichiometry on the mechanisms governing population dynamics and the interactions between trophic levels.
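The flavor of a knife-edge model can be conveyed with a toy Lotka-Volterra sketch in which the grazer's conversion efficiency is a non-monotone function of food P:C. All functional forms and parameter values below are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

def knife_edge(Q, Q_opt=0.03, width=0.02):
    # Non-monotone growth efficiency: reduced at both low and high food P:C ratio Q
    return np.exp(-((Q - Q_opt) / width) ** 2)

def simulate(Q, b=1.2, K=1.5, c=0.8, d=0.25, e_max=0.7, T=200.0, dt=0.01):
    """Producer x and grazer y with Lotka-Volterra interactions; the
    grazer's conversion efficiency depends on the fixed food quality Q."""
    x, y = 0.5, 0.3
    e = e_max * knife_edge(Q)
    for _ in range(int(round(T / dt))):
        dx = b * x * (1 - x / K) - c * x * y
        dy = e * c * x * y - d * y
        x += dt * dx
        y += dt * dy
    return x, y
```

With these assumed parameters the grazer persists only at intermediate food quality and collapses when the P:C ratio is either too low or too high, mimicking the knife-edge response.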
ContributorsPeace, Angela (Author) / Kuang, Yang (Thesis advisor) / Elser, James J (Committee member) / Baer, Steven (Committee member) / Tang, Wenbo (Committee member) / Kang, Yun (Committee member) / Arizona State University (Publisher)
Created2014
Description
Presented is a study on the chemotaxis reaction process and its relation with flow topology. The effect of coherent structures in turbulent flows is characterized by studying nutrient uptake and the advantage that motile bacteria receive over non-motile bacteria. Variability is found to be dependent on the initial location of the scalar impurity and can be tied to Lagrangian coherent structures through recent advances in the identification of finite-time transport barriers. The advantage is relatively small for initial nutrient found within high-stretching regions of the flow, while nutrient within elliptic structures provides the greatest advantage for motile species. How the flow field and the relevant flow topology lead to such a relation is analyzed.
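Finite-time transport barriers of the kind mentioned above are commonly diagnosed with finite-time Lyapunov exponent (FTLE) fields. As a hedged illustration (a steady saddle flow, not the study's turbulent flow), the FTLE of u = (x, -y) is exactly 1, which a direct computation recovers:

```python
import numpy as np

def flow_map(x0, y0, T=1.0, dt=1e-3):
    # Advect one particle through the steady saddle flow u = (x, -y) with forward Euler
    x, y = x0, y0
    for _ in range(int(round(T / dt))):
        x, y = x + dt * x, y - dt * y
    return x, y

def ftle_field(xs, ys, T=1.0):
    h = xs[1] - xs[0]
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    FX, FY = np.empty_like(X), np.empty_like(Y)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            FX[i, j], FY[i, j] = flow_map(X[i, j], Y[i, j], T)
    # Flow-map gradient by finite differences, then the largest
    # eigenvalue of the Cauchy-Green tensor C = F^T F
    dFXdx, dFXdy = np.gradient(FX, h, h)
    dFYdx, dFYdy = np.gradient(FY, h, h)
    ftle = np.empty_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            F = np.array([[dFXdx[i, j], dFXdy[i, j]],
                          [dFYdx[i, j], dFYdy[i, j]]])
            ftle[i, j] = np.log(np.linalg.eigvalsh(F.T @ F)[-1]) / (2 * T)
    return ftle
```

Ridges of such a field in an unsteady flow mark the hyperbolic (high-stretching) structures that the study contrasts with elliptic regions.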
Contributors: Jones, Kimberly (Author) / Tang, Wenbo (Thesis advisor) / Kang, Yun (Committee member) / Jones, Donald (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Earth-system models describe the interacting components of the climate system and technological systems that affect society, such as communication infrastructures. Data assimilation addresses the challenge of state specification by incorporating system observations into the model estimates. In this research, a particular data assimilation technique called the Local Ensemble Transform Kalman Filter (LETKF) is applied to the ionosphere, which is a domain of practical interest due to its effects on infrastructures that depend on satellite communication and remote sensing. This dissertation consists of three main studies that propose strategies to improve space-weather specification during ionospheric extreme events, but are generally applicable to Earth-system models.

Topic I applies the LETKF to estimate ion density with an idealized model of the ionosphere, given noisy synthetic observations of varying sparsity. Results show that the LETKF yields accurate estimates of the ion density field and unobserved components of neutral winds even when the observation density is spatially sparse (2% of grid points) and there are large levels (40%) of Gaussian observation noise.

Topic II proposes a targeted observing strategy for data assimilation, which uses the influence matrix diagnostic to target errors in chosen state variables. This strategy is applied in observing system experiments, in which synthetic electron density observations are assimilated with the LETKF into the Thermosphere-Ionosphere-Electrodynamics Global Circulation Model (TIEGCM) during a geomagnetic storm. Results show that assimilating targeted electron density observations yields on average about a 60%–80% reduction in electron density error within a 600 km radius of the observed location, compared to the 15% reduction obtained with randomly placed vertical profiles.

Topic III proposes a methodology to account for systematic model bias arising from errors in parametrized solar and magnetospheric inputs. This strategy is applied with the TIEGCM during a geomagnetic storm, and is used to estimate the spatiotemporal variations of bias in electron density predictions during the transitionary phases of the geomagnetic storm. Results show that this strategy reduces error in 1-hour predictions of electron density by about 35% and 30% in polar regions during the main and relaxation phases of the geomagnetic storm, respectively.
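A minimal, unlocalized ensemble transform Kalman filter update conveys the core of the LETKF analysis step; this generic sketch follows the standard ETKF formulation and is not the dissertation's ionospheric implementation:

```python
import numpy as np

def sqrtm_spd(A):
    # Symmetric square root of a symmetric positive-definite matrix
    vals, vecs = np.linalg.eigh(A)
    return vecs @ np.diag(np.sqrt(vals)) @ vecs.T

def etkf_update(X, y, H, R):
    """One ensemble transform Kalman filter analysis step (no localization).
    X: (n, k) ensemble of model states; y: (m,) observations;
    H: (m, n) observation operator; R: (m, m) observation-error covariance."""
    k = X.shape[1]
    xb = X.mean(axis=1)
    Xp = X - xb[:, None]                    # background perturbations
    Yp = H @ Xp                             # perturbations in observation space
    Rinv = np.linalg.inv(R)
    Pa = np.linalg.inv((k - 1) * np.eye(k) + Yp.T @ Rinv @ Yp)
    wa = Pa @ Yp.T @ Rinv @ (y - H @ xb)    # weights for the analysis mean
    Wa = sqrtm_spd((k - 1) * Pa)            # weights for the analysis perturbations
    xa = xb + Xp @ wa                       # analysis mean
    return xa[:, None] + Xp @ Wa            # analysis ensemble
```

The "local" part of the LETKF amounts to applying this update independently at each grid point using only nearby observations, which is what makes the method parallelizable for large Earth-system models.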
Contributors: Durazo, Juan, Ph.D. (Author) / Kostelich, Eric J. (Thesis advisor) / Mahalov, Alex (Thesis advisor) / Tang, Wenbo (Committee member) / Moustaoui, Mohamed (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The dynamics of a fluid flow inside 2D square and 3D cubic cavities under various configurations were simulated and analyzed using a spectral code I developed. This code was validated against known studies in the 3D lid-driven cavity. It was then used to explore the various dynamical behaviors close to the onset of instability of the steady-state flow, and explain in the process the mechanism underlying an intermittent bursting previously observed. A fairly complete bifurcation picture emerged, using a combination of computational tools such as selective frequency damping, edge-state tracking and subspace restriction.

The code was then used to investigate the flow in a 2D square cavity under stable temperature stratification, an idealized version of a lake with warmer water at the surface compared to the bottom. The governing equations are the Navier-Stokes equations under the Boussinesq approximation. Simulations were done over a wide range of parameters of the problem quantifying the driving velocity at the top (e.g. wind) and the strength of the stratification. Particular attention was paid to the mechanisms associated with the onset of instability of the base steady state, and the complex nontrivial dynamics occurring beyond onset, where the presence of multiple states leads to a rich spectrum of states, including homoclinic and heteroclinic chaos.

A third configuration investigates the flow dynamics of a fluid in a rapidly rotating cube subjected to small-amplitude modulations. The responses were quantified by global helicity and energy measures, and various peak responses associated with resonances with intrinsic eigenmodes of the cavity and/or internal retracing beams were clearly identified for the first time. A novel approach to compute the eigenmodes is also described, making accessible a whole catalog of these with various properties and dynamics. When the small-amplitude modulation does not align with the rotation axis (precession), we show that a new set of eigenmodes is primarily excited as the angular velocity increases, while triadic resonances may occur once the nonlinear regime kicks in.
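Selective frequency damping, one of the tools mentioned above, can be illustrated on a toy unstable oscillator: the system is forced toward a low-pass-filtered copy of itself, which leaves steady states unchanged but damps the instability. The parameter values here are assumptions for the sketch, not those used with the spectral code:

```python
# Unstable oscillator dq/dt = lam*q with Re(lam) > 0, stabilized by SFD
lam = 0.1 + 1.0j        # growth rate + frequency (illustrative values)
chi, Delta = 1.0, 2.0   # damping amplitude and filter time scale (assumed)
dt = 0.01

q, qbar = 1.0 + 0.0j, 0.0 + 0.0j
for _ in range(20000):                 # integrate to t = 200
    dq = lam * q - chi * (q - qbar)    # forced system
    dqbar = (q - qbar) / Delta         # low-pass-filtered copy
    q += dt * dq
    qbar += dt * dqbar
# q is driven to the unstable steady state q = 0
```

Because the forcing vanishes when q = qbar, any steady state of the original system is also a steady state of the damped one, which is what makes SFD useful for converging to unstable base flows.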
Contributors: Wu, Ke (Author) / Lopez, Juan (Thesis advisor) / Welfert, Bruno (Thesis advisor) / Tang, Wenbo (Committee member) / Platte, Rodrigo (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
A semi-implicit, fourth-order time-filtered leapfrog numerical scheme is investigated for accuracy and stability, and applied to several test cases, including one-dimensional advection and diffusion, the anelastic equations to simulate the Kelvin-Helmholtz instability, and the global shallow water spectral model to simulate the nonlinear evolution of twin tropical cyclones. The leapfrog scheme leads to computational modes in the solutions to highly nonlinear systems, and time-filters are often used to damp these modes. The proposed filter damps the computational modes without appreciably degrading the physical mode. Its performance in these metrics is superior to the second-order time-filtered leapfrog scheme developed by Robert and Asselin.
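For comparison, the second-order Robert-Asselin filtered leapfrog that the proposed scheme improves upon can be sketched on a simple decay equation (the filter coefficient nu = 0.05 is a typical assumed value, not one from the thesis):

```python
def ra_leapfrog(f, y0, dt, steps, nu=0.05):
    """Leapfrog integration of dy/dt = f(y) with the Robert-Asselin filter."""
    y_prev = y0
    y_curr = y0 + dt * f(y0)   # forward-Euler start-up step
    for _ in range(steps - 1):
        y_next = y_prev + 2 * dt * f(y_curr)
        # the filter damps the spurious computational mode of leapfrog
        y_prev = y_curr + nu * (y_prev - 2 * y_curr + y_next)
        y_curr = y_next
    return y_curr

y = ra_leapfrog(lambda u: -u, 1.0, 0.01, 100)   # ≈ exp(-1)
```

The drawback motivating higher-order filters is visible in the update: the same correction that removes the computational mode also slightly damps the physical mode and degrades the scheme's formal accuracy.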
Created: 2016-05
Description
Honey bees (Apis mellifera) are responsible for pollinating nearly 80% of all pollinated plants, meaning humans depend on honey bees to pollinate many staple crops. The success or failure of a colony is vital to global food production. There are various complex factors that can contribute to a colony's failure, including pesticides. Neonicotinoids are a class of pesticides that has been widely used in recent years. In this study we concern ourselves with pesticides and their impact on honey bee colonies. Previous investigations that we draw significant inspiration from include Khoury et al.'s "A Quantitative Model of Honey Bee Colony Population Dynamics," Henry et al.'s "A Common Pesticide Decreases Foraging Success and Survival in Honey Bees," and Brown's "Mathematical Models of Honey Bee Populations: Rapid Population Decline." In this project we extend a mathematical model to investigate the impact of pesticides on a honey bee colony, with birth and death rates dependent on pesticide exposure, and we examine how these death rates influence the growth of a colony. Our studies have found an equilibrium point that depends on pesticides. Trace amounts of pesticide are detrimental, as they affect not only death rates but birth rates as well.
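A model in the spirit of Khoury et al.'s hive/forager framework can be sketched as follows; the parameter values, and the assumption that pesticide exposure acts through the forager death rate m, are illustrative rather than the thesis's fitted model:

```python
def colony(m, L=2000.0, w=27000.0, alpha=0.25, sigma=0.75, T=200.0, dt=0.01):
    """Hive bees H and foragers F over T days; m is the forager death
    rate, which pesticide exposure is assumed to increase."""
    H, F = 16000.0, 8000.0
    for _ in range(int(round(T / dt))):
        N = H + F
        E = L * N / (w + N)            # eclosion of new hive bees
        R = alpha - sigma * F / N      # recruitment of hive bees to foraging
        dH = E - H * R
        dF = H * R - m * F
        H += dt * dH
        F += dt * dF
    return H, F
```

With these assumed parameters the colony settles to a healthy equilibrium at low forager mortality but collapses once m exceeds a threshold, the kind of pesticide-dependent equilibrium behavior the project investigates.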
Contributors: Salinas, Armando (Author) / Vaz, Paul (Thesis director) / Jones, Donald (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
A Guide to Financial Mathematics is a comprehensive and easy-to-use study guide for students preparing for one of the first actuarial exams, Exam FM. While there are many resources available to students studying for these exams, this guide is free to students and approaches the material in a way similar to how it is presented in class at ASU. The guide is available to students and professors in the new Actuarial Science degree program offered by ASU. There are twelve chapters, including financial calculator tips, detailed notes, examples, and practice exercises. Included at the end of the guide is a list of referenced material.
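As a taste of the Exam FM material such a guide covers, the present value of an annuity-immediate follows directly from the standard formula; the example below is a generic illustration, not an excerpt from the guide:

```python
def annuity_pv(i, n):
    """Present value of an n-payment annuity-immediate of 1 per period
    at effective interest rate i per period: a_n = (1 - v**n) / i, v = 1/(1+i)."""
    v = 1.0 / (1.0 + i)
    return (1.0 - v ** n) / i

# e.g. 10 annual payments of 1 at 5% effective: annuity_pv(0.05, 10) ≈ 7.7217
```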
Contributors: Dougher, Caroline Marie (Author) / Milovanovic, Jelena (Thesis director) / Boggess, May (Committee member) / Barrett, The Honors College (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
Covering subsequences with sets of permutations arises in many applications, including event-sequence testing. Given a set of subsequences to cover, one is often interested in knowing the minimum number of permutations required to cover each subsequence, and in finding an explicit construction of such a set of permutations that has size close to or equal to the minimum possible. The construction of such permutation coverings has proven to be computationally difficult. While many examples for permutations of small length have been found, and strong asymptotic behavior is known, there are few explicit constructions for permutations of intermediate lengths. Most of these are generated from scratch using greedy algorithms. We explore a different approach here. Starting with a set of permutations with the desired coverage properties, we compute local changes to individual permutations that retain the total coverage of the set. By choosing these local changes so as to make one permutation less "essential" in maintaining the coverage of the set, our method attempts to make a permutation completely non-essential, so that it can be removed without sacrificing total coverage. We develop a post-optimization method to do this and present results on sequence covering arrays and other types of permutation covering problems demonstrating that it is surprisingly effective.
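For strength two (ordered pairs), the coverage property is easy to check directly; the sketch below is a generic illustration (not the thesis's post-optimization method) and also verifies the classic observation that a permutation together with its reverse covers every ordered pair:

```python
def covers_all_pairs(perms, n):
    """Check that every ordered pair (a, b) with a != b from {0, ..., n-1}
    appears (not necessarily consecutively) in some permutation in perms."""
    covered = set()
    for p in perms:
        for i in range(len(p)):
            for j in range(i + 1, len(p)):
                covered.add((p[i], p[j]))
    return len(covered) == n * (n - 1)

ident = list(range(6))
assert covers_all_pairs([ident, ident[::-1]], 6)   # a permutation plus its reverse suffices
assert not covers_all_pairs([ident], 6)            # one permutation covers only half the pairs
```

A coverage check of this kind is the primitive that both greedy construction and post-optimization rely on: a permutation is "non-essential" exactly when removing it leaves this predicate true.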
Contributors: Murray, Patrick Charles (Author) / Colbourn, Charles (Thesis director) / Czygrinow, Andrzej (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Physics (Contributor)
Created: 2014-12
Description
Deconvolution of noisy data is an ill-posed problem, and requires some form of regularization to stabilize its solution. Tikhonov regularization is the most common method used, but it depends on the choice of a regularization parameter λ which must generally be estimated using one of several common methods. These methods can be computationally intensive, so I consider their behavior when only a portion of the sampled data is used. I show that the results of these methods converge as the sampling resolution increases, and use this to suggest a method of downsampling to estimate λ. I then present numerical results showing that this method can be feasible, and propose future avenues of inquiry.
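A small synthetic deconvolution experiment illustrates why the regularization is needed; the blur width, noise level, and the fixed λ below are assumed values for the sketch, not those studied in the thesis:

```python
import numpy as np

n = 50
t = np.arange(n)
# Gaussian blur operator: a severely ill-conditioned convolution matrix
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 3.0 ** 2))
A /= A.sum(axis=1, keepdims=True)

x_true = np.sin(2 * np.pi * t / n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-3 * rng.standard_normal(n)   # blurred, noisy data

x_naive = np.linalg.solve(A, b)   # unregularized: noise is amplified enormously
lam = 1e-2                        # regularization parameter (fixed here, not estimated)
x_tik = np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)
```

Here x_tik tracks x_true while x_naive is dominated by amplified noise; choosing λ well, rather than fixing it by hand, is exactly the parameter-estimation problem the downsampling method addresses.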
Contributors: Hansen, Jakob Kristian (Author) / Renaut, Rosemary (Thesis director) / Cochran, Douglas (Committee member) / Barrett, The Honors College (Contributor) / School of Music (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
This paper focuses on the Szemerédi regularity lemma, a result in the field of extremal graph theory. The lemma says that every graph can be partitioned into a bounded number of roughly equal parts such that the edges between most pairs of parts are distributed in a fairly uniform way. Definitions and notation will be established, leading to explorations of three proofs of the regularity lemma: a version of the original proof, a "Pythagoras" proof utilizing elementary geometry, and a proof utilizing concepts of spectral graph theory. This paper is intended to supplement the proofs with background information about the concepts utilized. Furthermore, it is hoped that this paper will serve as another resource for students and others beginning study of the regularity lemma.
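The ε-regularity condition at the heart of the lemma can be checked by brute force on tiny examples; the helper below is purely illustrative, since verifying the condition directly is exponential in the set sizes:

```python
import math
from itertools import combinations

def density(adj, A, B):
    # Edge density d(A, B) between disjoint vertex sets; adj is a set of frozenset edges
    edges = sum(1 for a in A for b in B if frozenset((a, b)) in adj)
    return edges / (len(A) * len(B))

def is_eps_regular(adj, A, B, eps):
    """Brute-force eps-regularity check (feasible only for tiny sets):
    every X ⊆ A, Y ⊆ B with |X| >= eps|A| and |Y| >= eps|B|
    must satisfy |d(X, Y) - d(A, B)| <= eps."""
    d = density(adj, A, B)
    for x in range(math.ceil(eps * len(A)), len(A) + 1):
        for y in range(math.ceil(eps * len(B)), len(B) + 1):
            for X in combinations(A, x):
                for Y in combinations(B, y):
                    if abs(density(adj, X, Y) - d) > eps:
                        return False
    return True
```

A complete bipartite pair is ε-regular for any ε, while a pair whose edges all concentrate on half of one side fails the condition, matching the intuition of "uniformly distributed" edges.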
Contributors: Byrne, Michael John (Author) / Czygrinow, Andrzej (Thesis director) / Kierstead, Hal (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Chemistry and Biochemistry (Contributor)
Created: 2015-05