Matching Items (1,006)
151944-Thumbnail Image.png
Description
The atomization of a liquid jet by a high-speed cross-flowing gas has many applications, such as gas turbines and augmentors. The mechanisms by which the liquid jet initially breaks up, however, are not well understood. Experimental studies suggest the dependence of spray properties on operating conditions and nozzle geometry. Detailed numerical simulations can offer better understanding of the underlying physical mechanisms that lead to the breakup of the injected liquid jet. In this work, detailed numerical simulation results of turbulent liquid jets injected into turbulent gaseous cross flows for different density ratios are presented. A finite-volume, balanced-force, fractional-step flow solver is employed to solve the Navier-Stokes equations and is coupled to a Refined Level Set Grid method to follow the phase interface. To enable the simulation of atomization of high-density-ratio fluids, we ensure discrete consistency between the solution of the conservative momentum equation and the level set based continuity equation by employing the Consistent Rescaled Momentum Transport (CRMT) method. The impact of different inflow jet boundary conditions on jet properties, including jet penetration, is analyzed, and results are compared to those obtained experimentally by Brown & McDonell (2006). In addition, instability analysis is performed to find the most dominant instability mechanism that causes the liquid jet to break up. Linear instability analysis is achieved using linear theories for Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and non-linear analysis is performed using our flow solver with different inflow jet boundary conditions.
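The two linear theories named above have standard inviscid dispersion relations; as a hedged reference (textbook form with surface tension σ and gravity g, not necessarily the dissertation's notation — fluid 1 lies above fluid 2, and s is the temporal growth rate of a perturbation with wavenumber k):

```latex
% Kelvin--Helmholtz: streams with velocities U_1, U_2 and densities \rho_1, \rho_2
s^2 = \frac{k^2\,\rho_1\rho_2\,(U_1 - U_2)^2}{(\rho_1+\rho_2)^2}
    - \frac{g k\,(\rho_2 - \rho_1)}{\rho_1 + \rho_2}
    - \frac{\sigma k^3}{\rho_1 + \rho_2}

% Rayleigh--Taylor (U_1 = U_2), heavy fluid \rho_h over light fluid \rho_l,
% with Atwood number \mathcal{A} = (\rho_h - \rho_l)/(\rho_h + \rho_l)
s^2 = g k\,\mathcal{A} - \frac{\sigma k^3}{\rho_h + \rho_l}
```

In both cases surface tension stabilizes short wavelengths (the −σk³ term), so the competition between the shear and buoyancy terms selects a most unstable wavenumber, which is what the instability analysis compares against the simulated breakup.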
ContributorsGhods, Sina (Author) / Herrmann, Marcus (Thesis advisor) / Squires, Kyle (Committee member) / Chen, Kangping (Committee member) / Huang, Huei-Ping (Committee member) / Tang, Wenbo (Committee member) / Arizona State University (Publisher)
Created2013
151840-Thumbnail Image.png
Description
Urbanization and infrastructure development often bring dramatic changes in the surface and groundwater regimes. These changes in moisture content may be particularly problematic when subsurface soils are moisture-sensitive, such as expansive soils. Residential foundations such as slab-on-ground may be built on unsaturated expansive soils and therefore have to resist the deformations associated with changes in moisture content (matric suction) in the soil. The problem is more pronounced in arid and semi-arid regions, where drying periods followed by a wet season result in large changes in soil suction. Moisture content change causes volume change in expansive soil, which can cause serious damage to structures. In order to mitigate these ill effects, various mitigation methods are adopted. The most commonly adopted method in the US is the removal and replacement of upper soils in the profile. The remove-and-replace method, although heavily used, is not well understood with regard to its impact on the depth of soil wetting or near-surface differential soil movements. In this study the effectiveness of the remove-and-replace method is examined. A parametric study is performed with various removal and replacement materials to obtain the optimal replacement depths and the best material. The depth of wetting and the heave caused in an expansive soil profile under climatic conditions and common irrigation scenarios are studied for arid regions. Soil suction changes and associated soil deformations are analyzed using finite element codes for unsaturated flow and stress/deformation, SVFlux and SVSolid, respectively.
The effectiveness and fundamental mechanisms at play in remove-and-replace mitigation of expansive soils are studied, including (1) its role in reducing the depth and degree of wetting, (2) its effect in reducing the overall heave potential, and (3) its effectiveness in pushing the seat of movement deeper within the soil profile to reduce differential soil surface movements. Various non-expansive replacement layers and different surface-flux boundary conditions are analyzed, and the concept of an optimal depth and soil is introduced. General observations are made concerning the efficacy of remove and replace as a mitigation method.
ContributorsBharadwaj, Anushree (Author) / Houston, Sandra L. (Thesis advisor) / Welfert, Bruno (Thesis advisor) / Zapata, Claudia E (Committee member) / Arizona State University (Publisher)
Created2013
152845-Thumbnail Image.png
Description
There has been important progress in understanding ecological dynamics through the development of the theory of ecological stoichiometry. This fast-growing theory provides new constraints and mechanisms that can be formulated into mathematical models. Stoichiometric models incorporate the effects of both food quantity and food quality into a single framework that produces rich dynamics. While the effects of nutrient deficiency on consumer growth are well understood, recent discoveries in ecological stoichiometry suggest that consumer dynamics are not only affected by insufficient food nutrient content (low phosphorus (P): carbon (C) ratio) but also by excess food nutrient content (high P:C). This phenomenon, known as the stoichiometric knife edge, in which animal growth is reduced not only by food with low P content but also by food with high P content, needs to be incorporated into mathematical models. Here we present Lotka-Volterra type models to investigate the growth response of Daphnia to algae of varying P:C ratios. Using a nonsmooth system of two ordinary differential equations (ODEs), we formulate the first model to incorporate the phenomenon of the stoichiometric knife edge. We then extend this stoichiometric model by mechanistically deriving and tracking free P in the environment. The resulting full knife edge model is a nonsmooth system of three ODEs. Bifurcation analysis and numerical simulations of the full model, which explicitly tracks phosphorus, lead to quantitatively different predictions than previous models that neglect to track free nutrients. The full model shows that the grazer population is sensitive to excess nutrient concentrations, as a dynamical free nutrient pool induces extreme grazer population density changes.
These modeling efforts provide insight on the effects of excess nutrient content on grazer dynamics and deepen our understanding of the effects of stoichiometry on the mechanisms governing population dynamics and the interactions between trophic levels.
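The nonsmooth "knife edge" idea above can be sketched as a minimal producer-grazer pair in which the grazer's conversion efficiency is capped both when the food P:C ratio is too low and when it is too high. Everything here is an illustrative stand-in (the function names, the parameters `theta` and `Q_hat`, and the Holling type-II grazing term are assumptions, not the dissertation's calibrated model):

```python
# Hypothetical sketch of a nonsmooth "knife edge" growth response:
# efficiency drops when food P:C (Q) is too low AND when it is too high.

def knife_edge_efficiency(Q, e_max=0.8, theta=0.03, Q_hat=0.06):
    """Grazer growth efficiency vs food P:C ratio Q (nonsmooth min form)."""
    low_P_limit = Q / theta        # P-limited growth when food quality is poor
    high_P_limit = Q_hat / Q       # reduced growth when P is in excess
    return e_max * min(1.0, low_P_limit, high_P_limit)

def simulate(Q, x0=0.5, y0=0.2, b=1.2, K=1.5, c=0.8, d=0.25,
             dt=0.01, steps=5000):
    """Forward-Euler integration of a minimal producer (x) - grazer (y) pair."""
    x, y = x0, y0
    for _ in range(steps):
        f = c * x / (1.0 + x)                      # Holling type-II grazing
        dx = b * x * (1.0 - x / K) - f * y         # logistic producer growth
        dy = knife_edge_efficiency(Q) * f * y - d * y
        x, y = max(x + dt * dx, 0.0), max(y + dt * dy, 0.0)
    return x, y
```

The efficiency curve peaks at an intermediate Q, which is the qualitative signature of the knife edge: `knife_edge_efficiency` returns its maximum at Q between `theta` and `Q_hat` and declines on both sides.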
ContributorsPeace, Angela (Author) / Kuang, Yang (Thesis advisor) / Elser, James J (Committee member) / Baer, Steven (Committee member) / Tang, Wenbo (Committee member) / Kang, Yun (Committee member) / Arizona State University (Publisher)
Created2014
153049-Thumbnail Image.png
Description
Obtaining high-quality experimental designs to optimize statistical efficiency and data quality is quite challenging for functional magnetic resonance imaging (fMRI). The primary fMRI design issue is the selection of the best sequence of stimuli based on a statistically meaningful optimality criterion. Some previous studies have provided guidance and powerful computational tools for obtaining good fMRI designs. However, these results are mainly for basic experimental settings with simple statistical models. In this work, a type of modern fMRI experiment is considered, in which the design matrix of the statistical model depends not only on the selected design, but also on the experimental subject's probabilistic behavior during the experiment. The design matrix is thus uncertain at the design stage, making it difficult to select good designs. By taking this uncertainty into account, a very efficient approach for obtaining high-quality fMRI designs is developed in this study. The proposed approach is built upon an analytical result and an efficient computer algorithm. It is shown through case studies that the proposed approach can outperform an existing method in terms of both computing time and the quality of the obtained designs.
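The basic design-selection problem can be pictured as a toy optimality search over binary stimulus sequences. This sketch scores each candidate design by A-optimality (trace of the inverse information matrix) after convolving the sequence with a crude hemodynamic response; the HRF shape, sequence length, and two-column model are illustrative assumptions and ignore the subject-dependent uncertainty the dissertation actually addresses:

```python
import itertools

def toy_hrf(t):
    # crude single-bump stand-in for a hemodynamic response: t * exp(-t)
    return t * 2.718281828459045 ** (-t)

def design_matrix(seq, hrf_len=8):
    """Convolve a 0/1 stimulus sequence with the toy HRF; add an intercept."""
    h = [toy_hrf(i) for i in range(hrf_len)]
    n = len(seq)
    conv = [sum(h[j] * seq[i - j] for j in range(hrf_len) if 0 <= i - j < n)
            for i in range(n)]
    return [[c, 1.0] for c in conv]

def a_optimality(X):
    """Trace of (X'X)^{-1} for a two-column X: smaller = better."""
    a = sum(r[0] * r[0] for r in X)
    b = sum(r[0] * r[1] for r in X)
    d = sum(r[1] * r[1] for r in X)
    det = a * d - b * b
    if det <= 1e-12:
        return float("inf")        # singular design, e.g. all-zero sequence
    return (a + d) / det

def best_design(n=12):
    """Exhaustive search over all binary stimulus sequences of length n."""
    best, best_score = None, float("inf")
    for seq in itertools.product((0, 1), repeat=n):
        score = a_optimality(design_matrix(seq))
        if score < best_score:
            best, best_score = seq, score
    return best, best_score
```

Exhaustive search is only feasible for tiny n; the point of the efficient algorithms the abstract refers to is precisely to avoid this enumeration for realistic sequence lengths.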
ContributorsZhou, Lin (Author) / Kao, Ming-Hung (Thesis advisor) / Reiser, Mark R. (Committee member) / Stufken, John (Committee member) / Welfert, Bruno (Committee member) / Arizona State University (Publisher)
Created2014
153290-Thumbnail Image.png
Description
Pre-Exposure Prophylaxis (PrEP) is any medical or public health procedure used before exposure to the disease-causing agent; its purpose is to prevent, rather than treat or cure, a disease. Most commonly, PrEP refers to an experimental HIV-prevention strategy that would use antiretrovirals to protect HIV-negative people from HIV infection. A deterministic mathematical model of HIV transmission is developed to evaluate the public-health impact of oral PrEP interventions, and to compare PrEP effectiveness with respect to different evaluation methods. The effects of demographic, behavioral, and epidemic parameters on the PrEP impact are studied in a multivariate sensitivity analysis. Most of the published models of HIV intervention impact assume that the number of individuals joining the sexually active population per year is constant or proportional to the total population. In the second part of this study, three models are presented and analyzed to study the PrEP intervention, with constant, linear, and logistic recruitment rates, and how different demographic assumptions can affect the evaluation of PrEP is studied. When provided with data, least-squares fitting or similar approaches can often be used to determine a single set of approximate parameter values that make the model fit the data best. However, least-squares fitting only provides point estimates and does not indicate how strongly the data support these particular estimates. Therefore, in the third part of this study, Bayesian parameter estimation is applied to fit the ODE model to the related HIV data. Starting from a set of prior distributions for the parameters as an initial guess, Bayes' formula can be applied to obtain a set of posterior distributions that makes the model best fit the observed data. Evaluating the posterior distribution often requires the integration of high-dimensional functions, which is usually difficult to calculate numerically.
Therefore, the Markov chain Monte Carlo (MCMC) method is used to approximate the posterior distribution.
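The Bayesian fitting step can be sketched with a minimal Metropolis-Hastings sampler. A one-parameter exponential-growth model stands in for the HIV ODE system; the prior support, step size, and noise level are illustrative assumptions, not values from the dissertation:

```python
import math, random

def log_likelihood(r, data, y0=1.0, sigma=0.1):
    """Gaussian log-likelihood (up to a constant) of y(t) = y0 * exp(r t)."""
    return sum(-0.5 * ((y - y0 * math.exp(r * t)) / sigma) ** 2
               for t, y in data)

def metropolis(data, n_samples=2000, step=0.05, seed=0):
    """Random-walk Metropolis sampler for r, with a flat prior on [0, 1]."""
    rng = random.Random(seed)
    r = 0.5                          # initial guess inside the prior support
    samples = []
    for _ in range(n_samples):
        r_new = r + rng.gauss(0.0, step)
        if 0.0 <= r_new <= 1.0:      # flat prior: reject outside support
            # accept with probability min(1, posterior ratio)
            if math.log(rng.random() + 1e-300) < (
                    log_likelihood(r_new, data) - log_likelihood(r, data)):
                r = r_new
        samples.append(r)
    return samples
```

After discarding a burn-in prefix, the remaining samples approximate the posterior of r, so the chain yields an interval estimate rather than the single point estimate that least-squares fitting provides.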
ContributorsZhao, Yuqin (Author) / Kuang, Yang (Thesis advisor) / Taylor, Jesse (Committee member) / Armbruster, Dieter (Committee member) / Tang, Wenbo (Committee member) / Kang, Yun (Committee member) / Arizona State University (Publisher)
Created2014
Description
In many classification problems data samples cannot be collected easily, for example in drug trials, biological experiments, and studies on cancer patients. In many situations the data set size is small and there are many outliers. When classifying such data, for example cancer versus normal patients, the consequences of misclassification are arguably more important than for any other data type, because the data point could be a cancer patient, or the classification decision could help determine which gene might be over-expressed and perhaps a cause of cancer. These misclassifications are typically higher in the presence of outlier data points. The aim of this thesis is to develop a maximum-margin classifier that is suited to address the lack of robustness of discriminant-based classifiers (like the Support Vector Machine (SVM)) to noise and outliers. The underlying notion is to adopt and develop a natural loss function that is more robust to outliers and more representative of the true loss function of the data. It is demonstrated experimentally that SVMs are indeed susceptible to outliers and that the new classifier developed here, coined Robust-SVM (RSVM), is superior to all studied classifiers on the synthetic datasets. It is superior to the SVM on both the synthetic and experimental data from biomedical studies and is competitive with a classifier derived along similar lines when real-life data examples are considered.
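One common route to such a robust loss is capping the hinge so an outlier's influence saturates. This is only an illustrative stand-in, since the abstract does not specify RSVM's actual loss function; the cap value is an assumption:

```python
def hinge(margin):
    """Standard SVM hinge loss: grows without bound as the margin worsens,
    so a single far-away outlier can dominate the objective."""
    return max(0.0, 1.0 - margin)

def ramp(margin, cap=2.0):
    """Capped ("ramp") hinge: an outlier's contribution saturates at `cap`,
    limiting its pull on the decision boundary."""
    return min(hinge(margin), cap)
```

For instance, a mislabeled point with margin -10 contributes 11.0 to a hinge-loss objective but only 2.0 to the ramp objective, while both losses agree on well-classified and mildly violating points.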
ContributorsGupta, Sidharth (Author) / Kim, Seungchan (Thesis advisor) / Welfert, Bruno (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2011
153936-Thumbnail Image.png
Description
Presented is a study of the chemotaxis reaction process and its relation to flow topology. The effect of coherent structures in turbulent flows is characterized by studying nutrient uptake and the advantage received by motile bacteria over non-motile bacteria. Variability is found to depend on the initial location of the scalar impurity and can be tied to Lagrangian coherent structures through recent advances in the identification of finite-time transport barriers. The advantage is relatively small for initial nutrient found within high-stretching regions of the flow, while nutrient within elliptic structures provides the greatest advantage for motile species. How the flow field and the relevant flow topology lead to such a relation is analyzed.
ContributorsJones, Kimberly (Author) / Tang, Wenbo (Thesis advisor) / Kang, Yun (Committee member) / Jones, Donald (Committee member) / Arizona State University (Publisher)
Created2015
156214-Thumbnail Image.png
Description
The tools developed for investigating dynamical systems have provided critical understanding of a wide range of physical phenomena. Here these tools are used to gain further insight into scalar transport, and how it is affected by mixing. The aim of this research is to investigate the efficiency of several different partitioning methods that demarcate flow fields into dynamically distinct regions, and the correlation of finite-time statistics from the advection-diffusion equation to these regions.

For autonomous systems, invariant manifold theory can be used to separate the system into dynamically distinct regions. Despite there being no equivalent method for nonautonomous systems, a similar analysis can be done. Systems with general time dependence must resort to using finite-time transport barriers for partitioning; these barriers are the edges of Lagrangian coherent structures (LCS), the analog of the stable and unstable manifolds of invariant manifold theory. Using the coherent structures of a flow to analyze the statistics of trapping, flight, and residence times, signatures of anomalous diffusion are obtained.

This research also investigates the use of linear models for approximating the elements of the covariance matrix of nonlinear flows, and then applying the covariance matrix approximation over coherent regions. The first- and second-order moments can be used to fully describe an ensemble evolution in linear systems; however, there is no direct method for nonlinear systems. The problem is only compounded by the fact that the moments for nonlinear flows typically do not have analytic representations; therefore, direct numerical simulations would be needed to obtain the moments throughout the domain. To circumvent these many computations, the nonlinear system is approximated as many linear systems for which analytic expressions for the moments exist. The parameters introduced in the linear models are obtained locally from the nonlinear deformation tensor.
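The deformation-tensor machinery behind these LCS diagnostics can be sketched with a finite-time Lyapunov exponent (FTLE) computation. The steady saddle flow u = (x, -y) below is an illustrative test case, chosen because its exact FTLE is 1 everywhere; the integrator, step sizes, and flow are assumptions, not the dissertation's setup:

```python
import math

def velocity(x, y):
    # steady linear saddle flow: stretching along x, compression along y
    return x, -y

def advect(x, y, T=1.0, dt=1e-3):
    """Forward-Euler particle integration of the velocity field."""
    for _ in range(int(T / dt)):
        u, v = velocity(x, y)
        x, y = x + dt * u, y + dt * v
    return x, y

def ftle(x, y, T=1.0, h=1e-4):
    """Largest FTLE via a central-difference flow-map gradient and the
    Cauchy-Green strain tensor C = F^T F."""
    xr, yr = advect(x + h, y, T); xl, yl = advect(x - h, y, T)
    xu, yu = advect(x, y + h, T); xd, yd = advect(x, y - h, T)
    # flow-map Jacobian F from neighboring trajectories
    f11, f12 = (xr - xl) / (2 * h), (xu - xd) / (2 * h)
    f21, f22 = (yr - yl) / (2 * h), (yu - yd) / (2 * h)
    # symmetric 2x2 Cauchy-Green tensor and its largest eigenvalue
    c11 = f11 * f11 + f21 * f21
    c12 = f11 * f12 + f21 * f22
    c22 = f12 * f12 + f22 * f22
    lam_max = 0.5 * (c11 + c22 + math.sqrt((c11 - c22) ** 2 + 4 * c12 ** 2))
    return math.log(lam_max) / (2.0 * T)
```

Ridges of the FTLE field computed this way over a grid of initial conditions approximate the finite-time transport barriers (LCS edges) used for partitioning, and the same flow-map Jacobian F is the local deformation tensor from which the linear-model parameters would be read off.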
ContributorsWalker, Phillip (Author) / Tang, Wenbo (Thesis advisor) / Kostelich, Eric (Committee member) / Mahalov, Alex (Committee member) / Moustaoui, Mohamed (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created2018
156420-Thumbnail Image.png
Description
The Kuramoto model is an archetypal model for studying synchronization in groups of nonidentical oscillators, where each oscillator is imbued with its own natural frequency and coupled to other oscillators through a network of interactions. As the coupling strength increases, there is a bifurcation to complete synchronization, where all oscillators move with the same frequency and show a collective rhythm. Kuramoto-like dynamics are considered a relevant model for instabilities of the AC power grid, which operates in synchrony under standard conditions but exhibits, in a state of failure, segmentation of the grid into desynchronized clusters.

In this dissertation the minimum coupling strength required to ensure total frequency synchronization in a Kuramoto system, called the critical coupling, is investigated. For coupling strengths below the critical coupling, clusters of oscillators form where oscillators within a cluster are on average oscillating with the same long-term frequency. A unified order-parameter-based approach is developed to create approximations of the critical coupling. Some of the new approximations provide strict lower bounds for the critical coupling. In addition, these approximations allow for predictions of the partially synchronized clusters that emerge in the bifurcation from the synchronized state.

Merging the order parameter approach with graph-theoretical concepts leads to a characterization of this bifurcation as a weighted graph partitioning problem on arbitrary networks, which then leads to an optimization problem that can efficiently estimate the partially synchronized clusters. Numerical experiments on random Kuramoto systems show the high accuracy of these methods. An interpretation of the methods in the context of power systems is provided.
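The order parameter at the center of this approach can be sketched with a minimal mean-field Kuramoto simulation. The oscillator count, evenly spaced frequencies, initial phases, and coupling values below are illustrative assumptions:

```python
import cmath, math

def order_parameter(thetas):
    """Magnitude r of the complex order parameter (1/N) * sum(exp(i*theta));
    r near 1 means strong synchrony, r near 0 means incoherence."""
    return abs(sum(cmath.exp(1j * t) for t in thetas) / len(thetas))

def simulate(K, N=20, T=50.0, dt=0.01):
    """Forward-Euler integration of the mean-field (all-to-all) Kuramoto model
    with coupling strength K."""
    omegas = [-0.5 + i / (N - 1) for i in range(N)]   # frequencies in [-0.5, 0.5]
    thetas = [0.1 * i for i in range(N)]              # deterministic initial phases
    for _ in range(int(T / dt)):
        r_cplx = sum(cmath.exp(1j * t) for t in thetas) / N
        r, psi = abs(r_cplx), cmath.phase(r_cplx)
        # mean-field form: dtheta_i/dt = omega_i + K * r * sin(psi - theta_i)
        thetas = [t + dt * (w + K * r * math.sin(psi - t))
                  for t, w in zip(thetas, omegas)]
    return order_parameter(thetas)
```

Sweeping K and watching where r jumps from near zero to near one gives a numerical estimate of the critical coupling for this frequency distribution, which is the quantity the analytical approximations bound.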
ContributorsGilg, Brady (Author) / Armbruster, Dieter (Thesis advisor) / Mittelmann, Hans (Committee member) / Scaglione, Anna (Committee member) / Strogatz, Steven (Committee member) / Welfert, Bruno (Committee member) / Arizona State University (Publisher)
Created2018
157495-Thumbnail Image.png
Description
Lidar has demonstrated its utility in meteorological studies, wind resource assessment, and wind farm control. More recently, lidar has gained widespread attention for autonomous vehicles.

The first part of the dissertation begins with an application of a coherent Doppler lidar to wind gust characterization for wind farm control. This application focuses on wind gusts on a scale from 100 m to 1000 m. A detecting and tracking algorithm is proposed to extract gusts from a wind field and track their movement. The algorithm was implemented for a three-hour, two-dimensional wind field retrieved from the measurements of a coherent Doppler lidar. The Gaussian distribution of the gust spanwise deviation from the streamline was demonstrated. Size dependency of gust deviations is discussed. A prediction model estimating the impact of gusts with respect to arrival time and the probability of arrival locations is introduced. The prediction model was applied to a virtual wind turbine array, and estimates are given for which wind turbines would be impacted.

The second part of this dissertation describes a Time-of-Flight lidar simulation. The lidar simulation includes a laser source module, a propagation module, a receiver module, and a timing module. A two-dimensional pulse model is introduced in the laser source module. The sampling rate for the pulse model is explored. The propagation module accounts for beam divergence, target characteristics, atmosphere, and optics. The receiver module contains models of noise and analog filters in a lidar receiver. The effect of analog filters on the signal behavior was investigated. The timing module includes a Time-to-Digital Converter (TDC) module and an Analog-to-Digital Converter (ADC) module. In the TDC module, several walk-error compensation methods for leading-edge detection and multiple timing algorithms were modeled and tested on simulated signals. In the ADC module, a benchmark (BM) timing algorithm is proposed. A Neyman-Pearson (NP) detector was implemented in the time domain and the frequency domain (fast Fourier transform (FFT) approach). The FFT approach with frequency-domain zero-padding improves the timing resolution. The BM algorithm was tested on simulated signals, and the NP detector was evaluated on both simulated signals and measurements from a prototype lidar (Bhaskaran, 2018).
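The walk error that the compensation methods target can be illustrated with a simple leading-edge threshold detector on a simulated Gaussian return. The pulse shape, sample spacing, and threshold here are assumptions, not the dissertation's models:

```python
import math

def gaussian_pulse(t, t0, width=5e-9, amp=1.0):
    """Toy Gaussian return pulse centered at time t0."""
    return amp * math.exp(-((t - t0) / width) ** 2)

def leading_edge_time(samples, dt, threshold):
    """Time of the first upward threshold crossing, refined by linear
    interpolation between the two straddling samples."""
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:
            frac = (threshold - samples[i - 1]) / (samples[i] - samples[i - 1])
            return (i - 1 + frac) * dt
    return None
```

A weaker return of identical shape crosses a fixed threshold later than a strong one, so the reported range drifts with received amplitude; that amplitude-dependent bias is the walk error that compensation methods (and amplitude-independent schemes like constant-fraction discrimination) are designed to remove.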
ContributorsZhou, Kai (Author) / Calhoun, Ronald (Thesis advisor) / Chen, Kangping (Committee member) / Tang, Wenbo (Committee member) / Peet, Yulia (Committee member) / Krishnamurthy, Raghavendra (Committee member) / Arizona State University (Publisher)
Created2019