Matching Items (1,094)

Description
For CFD validation, hypersonic flow fields are simulated and compared with experimental data specifically designed to recreate conditions encountered by hypersonic vehicles. Simulated flow fields over a cone-ogive with flare at Mach 7.2 are compared with experimental data from the NASA Ames Research Center 3.5" hypersonic wind tunnel. A parametric study of turbulence models is presented and concludes that the k-kl-omega transition and SST transition turbulence models give the best correlation. Downstream of the flare's shock wave, good correlation is found for all boundary layer profiles, with slight discrepancies in static temperature near the surface. Simulated flow fields over a blunt cone with flare above Mach 10 are compared with experimental data from the CUBRC LENS hypervelocity shock tunnel. The lack of vibrational non-equilibrium calculations causes discrepancies in heat flux near the leading edge. Temperature profiles, where non-equilibrium effects are dominant, are compared with the dissociation of molecules to show the effect of dissociation on static temperature.

Following the validation studies is a parametric analysis of a hypersonic inlet from Mach 6 to 20. Compressor performance is investigated for numerous cowl leading edge locations at speeds up to Mach 10. The variable-cowl study shows positive trends in compressor performance parameters across a range of Mach numbers, arising from maximizing the intake of compressed flow. A change in shock wave formation at different Mach numbers produces a phenomenon inside the cowl that negatively influences total pressure recovery. The hypersonic inlet is also investigated at different altitudes to study the effects of Reynolds number, and consequently turbulent viscous effects, on compressor performance. Turbulent boundary layer separation is identified as the cause of the change in compressor performance parameters with Reynolds number; this effect would not be noticeable if laminar flow were assumed. Mach numbers up to 20 are investigated to study the effects of vibrational and chemical non-equilibrium on compressor performance, and a direct impact of dissociation on the trends in kinetic energy efficiency and compressor efficiency is found.
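
As a rough illustration of how the compressor performance parameters mentioned above relate to one another, the sketch below evaluates a standard isentropic relation between total pressure recovery and kinetic energy efficiency for an adiabatic inlet. It assumes a calorically perfect gas, so it deliberately ignores the vibrational and chemical non-equilibrium effects discussed in the abstract; the gamma value and the sample recovery of 0.4 are illustrative assumptions, not results from the thesis simulations.

```python
GAMMA = 1.4  # calorically perfect air; neglects the vibrational/chemical non-equilibrium noted above

def kinetic_energy_efficiency(mach0, pressure_recovery, gamma=GAMMA):
    """Kinetic energy efficiency of an adiabatic inlet from its total pressure
    recovery (pt_throat / pt_freestream): the throat flow is expanded isentropically
    back to freestream static pressure and the recovered kinetic energy is compared
    with the freestream kinetic energy."""
    k = (gamma - 1.0) / gamma
    p0_over_pt0 = (1.0 + 0.5 * (gamma - 1.0) * mach0**2) ** (-1.0 / k)
    p0_over_pt2 = p0_over_pt0 / pressure_recovery
    return (1.0 - p0_over_pt2**k) / (1.0 - p0_over_pt0**k)

# At a fixed total pressure recovery, the kinetic energy efficiency rises with Mach number.
for m0 in (6.0, 8.0, 10.0):
    print(m0, round(kinetic_energy_efficiency(m0, pressure_recovery=0.4), 4))
```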
ContributorsOliden, Daniel (Author) / Lee, Tae-Woo (Thesis advisor) / Peet, Yulia (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created2013
Description
The atomization of a liquid jet by a high speed cross-flowing gas has many applications such as gas turbines and augmentors. The mechanisms by which the liquid jet initially breaks up, however, are not well understood. Experimental studies suggest the dependence of spray properties on operating conditions and nozzle geometry. Detailed numerical simulations can offer better understanding of the underlying physical mechanisms that lead to the breakup of the injected liquid jet. In this work, detailed numerical simulation results of turbulent liquid jets injected into turbulent gaseous cross flows for different density ratios are presented. A finite volume, balanced force fractional step flow solver is employed to solve the Navier-Stokes equations and is coupled to a Refined Level Set Grid method to follow the phase interface. To enable the simulation of atomization of high density ratio fluids, we ensure discrete consistency between the solution of the conservative momentum equation and the level set based continuity equation by employing the Consistent Rescaled Momentum Transport (CRMT) method. The impact of different inflow jet boundary conditions on different jet properties, including jet penetration, is analyzed and results are compared to those obtained experimentally by Brown & McDonell (2006). In addition, instability analysis is performed to find the most dominant instability mechanism that causes the liquid jet to break up. Linear instability analysis is carried out using linear theories for the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and non-linear analysis is performed using our flow solver with different inflow jet boundary conditions.
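
For readers unfamiliar with the linear theories referred to above, the following sketch evaluates the classical inviscid dispersion relation for a flat interface, combining the Kelvin-Helmholtz shear term, the Rayleigh-Taylor buoyancy term, and capillary stabilization. The flat-interface geometry and the fluid properties below are illustrative assumptions, not the jet-in-crossflow configuration, conditions, or analysis used in this work.

```python
import numpy as np

def interface_growth_rate(k, rho1, rho2, u1, u2, sigma, g=9.81):
    """Temporal growth rate of a wavenumber-k disturbance on a flat interface
    between two inviscid streams (fluid 1 below, fluid 2 above, gravity downward):
    Kelvin-Helmholtz shear term + Rayleigh-Taylor buoyancy term + capillary term."""
    rho_sum = rho1 + rho2
    shear = k**2 * rho1 * rho2 * (u1 - u2) ** 2 / rho_sum**2  # destabilizing
    buoyancy = -g * k * (rho1 - rho2) / rho_sum               # stabilizing if the heavy fluid is below
    capillary = -sigma * k**3 / rho_sum                       # always stabilizing
    s_squared = shear + buoyancy + capillary
    return np.sqrt(np.maximum(s_squared, 0.0))                # zero where the mode is stable

# Example: water sheared by high-speed air (gravity neglected); the most-amplified
# wavenumber suggests the primary interfacial wavelength.
k = np.linspace(1e3, 5e6, 5000)            # 1/m
s = interface_growth_rate(k, rho1=1000.0, rho2=12.0, u1=10.0, u2=150.0,
                          sigma=0.072, g=0.0)
print("most amplified wavelength [micron]:", 2e6 * np.pi / k[np.argmax(s)])
```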
ContributorsGhods, Sina (Author) / Herrmann, Marcus (Thesis advisor) / Squires, Kyle (Committee member) / Chen, Kangping (Committee member) / Huang, Huei-Ping (Committee member) / Tang, Wenbo (Committee member) / Arizona State University (Publisher)
Created2013
Description
Climate change has been one of the major issues of global economic and social concern in the past decade. To quantitatively predict global climate change, the Intergovernmental Panel on Climate Change (IPCC) of the United Nations has organized a multi-national effort to use global atmosphere-ocean models to project anthropogenically induced climate changes in the 21st century. The computer simulations performed with those models and archived by the Coupled Model Intercomparison Project - Phase 5 (CMIP5) form the most comprehensive quantitative basis for predicting global environmental changes on decadal-to-centennial time scales. While the CMIP5 archives have been widely used for policy making, the inherent biases in the models have not been systematically examined. The main objective of this study is to validate the CMIP5 simulations of the 20th century climate against observations to quantify the biases and uncertainties in state-of-the-art climate models. Specifically, this work focuses on three major features in the atmosphere: the jet streams over the North Pacific and Atlantic Oceans, the low level jet (LLJ) stream over central North America which affects the weather in the United States, and the near-surface wind field over North America which is relevant to energy applications. The errors in the model simulations of those features are systematically quantified and the uncertainties in future predictions are assessed for stakeholders to use in climate applications. Additional atmospheric model simulations are performed to determine the sources of the errors in climate models. The results reject the popular idea that errors in the sea surface temperature, due to an inaccurate ocean circulation, contribute to the errors in the major atmospheric jet streams.
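
A bias study of this kind typically reduces to a small set of area-weighted statistics comparing a model climatology with an observational one on a common grid. The sketch below shows one plausible set of such diagnostics (bias, RMSE, pattern correlation) applied to synthetic fields; the grid, the toy zonal-wind climatologies, and the metric choices are assumptions for illustration, not the datasets or diagnostics of this study.

```python
import numpy as np

def validation_stats(model, obs, lat):
    """Area-weighted bias, RMSE, and pattern correlation between a model field
    and an observational field on the same (lat, lon) grid."""
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(model)
    w = w / w.sum()
    diff = model - obs
    bias = np.sum(w * diff)
    rmse = np.sqrt(np.sum(w * diff**2))
    m_anom = model - np.sum(w * model)
    o_anom = obs - np.sum(w * obs)
    corr = np.sum(w * m_anom * o_anom) / np.sqrt(np.sum(w * m_anom**2) * np.sum(w * o_anom**2))
    return bias, rmse, corr

# Toy example: a "model" 200-hPa zonal-wind climatology that is 2 m/s too strong
# and shifted 5 degrees poleward relative to "observations".
lat = np.linspace(-90.0, 90.0, 73)
lon = np.linspace(0.0, 357.5, 144)
obs = 40.0 * np.exp(-((lat[:, None] - 35.0) / 12.0) ** 2) * np.ones((lat.size, lon.size))
model = 42.0 * np.exp(-((lat[:, None] - 40.0) / 12.0) ** 2) * np.ones((lat.size, lon.size))
print(validation_stats(model, obs, lat))
```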
ContributorsKulkarni, Sujay (Author) / Huang, Huei-Ping (Thesis advisor) / Calhoun, Ronald (Committee member) / Peet, Yulia (Committee member) / Arizona State University (Publisher)
Created2014
Description
There has been important progress in understanding ecological dynamics through the development of the theory of ecological stoichiometry. This fast growing theory provides new constraints and mechanisms that can be formulated into mathematical models. Stoichiometric models incorporate the effects of both food quantity and food quality into a single framework that produces rich dynamics. While the effects of nutrient deficiency on consumer growth are well understood, recent discoveries in ecological stoichiometry suggest that consumer dynamics are affected not only by insufficient food nutrient content (low phosphorus (P): carbon (C) ratio) but also by excess food nutrient content (high P:C). This phenomenon, known as the stoichiometric knife edge, in which animal growth is reduced not only by food with low P content but also by food with high P content, needs to be incorporated into mathematical models. Here we present Lotka-Volterra type models to investigate the growth response of Daphnia to algae of varying P:C ratios. Using a nonsmooth system of two ordinary differential equations (ODEs), we formulate the first model to incorporate the phenomenon of the stoichiometric knife edge. We then extend this stoichiometric model by mechanistically deriving and tracking free P in the environment. The resulting full knife edge model is a nonsmooth system of three ODEs. Bifurcation analysis and numerical simulations of the full model, which explicitly tracks phosphorus, lead to quantitatively different predictions than previous models that neglect to track free nutrients. The full model shows that the grazer population is sensitive to excess nutrient concentrations, as a dynamical free nutrient pool induces extreme grazer population density changes. These modeling efforts provide insight into the effects of excess nutrient content on grazer dynamics and deepen our understanding of the effects of stoichiometry on the mechanisms governing population dynamics and the interactions between trophic levels.
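
To make the minimum-function (nonsmooth) structure of such models concrete, here is a minimal Lotka-Volterra type producer-grazer sketch in which the grazer's conversion efficiency is reduced both when food is P-poor and when food is P-rich. The functional forms, parameter values, and the specific knife-edge term are illustrative placeholders, not the calibrated two- or three-ODE models formulated in the thesis.

```python
from scipy.integrate import solve_ivp

# Illustrative producer-grazer model with Liebig-style minimum functions.
P_TOTAL = 0.03   # total phosphorus in the closed system (mg P / L)
K = 1.5          # producer carrying capacity in carbon terms (mg C / L)
THETA = 0.03     # grazer P:C quota
Q_MIN = 0.0038   # minimal producer P:C quota
B = 1.2          # producer maximal growth rate (1/day)
E_MAX = 0.8      # maximal conversion efficiency
D = 0.25         # grazer loss rate (1/day)
C = 0.8          # linear (Lotka-Volterra type) ingestion coefficient

def rhs(t, z):
    x, y = z                              # producer, grazer (mg C / L)
    q = (P_TOTAL - THETA * y) / x         # producer P:C quota (P not in grazers is in producers)
    dx = B * x * (1.0 - x / min(K, (P_TOTAL - THETA * y) / Q_MIN)) - C * x * y
    # Grazer growth is reduced when food is P-poor (q < THETA) and, capturing the
    # knife edge, reduced again when food is P-rich (q > THETA).
    efficiency = E_MAX * min(1.0, q / THETA, THETA / q)
    dy = efficiency * C * x * y - D * y
    return [dx, dy]

sol = solve_ivp(rhs, (0.0, 200.0), [0.5, 0.25], max_step=0.1)
print("final producer and grazer densities:", sol.y[:, -1])
```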
ContributorsPeace, Angela (Author) / Kuang, Yang (Thesis advisor) / Elser, James J (Committee member) / Baer, Steven (Committee member) / Tang, Wenbo (Committee member) / Kang, Yun (Committee member) / Arizona State University (Publisher)
Created2014
Description
Stereolithography (STL) files are widely used in diverse fields as a means of describing complex geometries through surface triangulations. The stereolithography output is the result of either experimental measurements or computer-aided design. Oftentimes, stereolithography outputs obtained from experimental measurements are prone to noise, surface irregularities, and holes in an otherwise closed surface.

A general method for denoising and adaptively smoothing these dirty stereolithography files is proposed. Unlike existing approaches, this method smooths the dirty surface representation using the well-established level set method. The level of smoothing and denoising can be set per requirement through input parameters. Once the surface representation is smoothed as desired, it can be extracted as a standard level set scalar isosurface.
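
A minimal sketch of the pipeline described above, sampling a level set (signed distance) field of a triangulated surface on a Cartesian grid, smoothing it, and extracting the zero isosurface, might look as follows. It uses the trimesh, scipy, and scikit-image packages as stand-ins (the thesis implementation and its adaptive, parameter-controlled level set smoothing are not reproduced here); the Gaussian filter plays the role of the tunable smoothing step, the file name is hypothetical, and the signed-distance query assumes the input surface is close enough to watertight for the sign to be meaningful.

```python
import numpy as np
import trimesh
from scipy.ndimage import gaussian_filter
from skimage import measure

# Hypothetical input file; any closed (or nearly closed) triangulated surface will do.
mesh = trimesh.load("noisy_scan.stl")

# Sample a signed-distance level set field of the surface on a uniform Cartesian grid.
pad = 0.05 * mesh.extents.max()
lo, hi = mesh.bounds[0] - pad, mesh.bounds[1] + pad
n = 64
axes = [np.linspace(lo[i], hi[i], n) for i in range(3)]
X, Y, Z = np.meshgrid(*axes, indexing="ij")
pts = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
phi = trimesh.proximity.signed_distance(mesh, pts).reshape(n, n, n)

# Stand-in for the smoothing/denoising step: a Gaussian filter of the level set
# field, with sigma playing the role of the user-set smoothing parameter.
phi_smooth = gaussian_filter(phi, sigma=1.5)

# Extract the smoothed surface as the zero isosurface of the level set scalar.
spacing = [(hi[i] - lo[i]) / (n - 1) for i in range(3)]
verts, faces, _, _ = measure.marching_cubes(phi_smooth, level=0.0, spacing=spacing)
trimesh.Trimesh(vertices=verts + lo, faces=faces).export("smoothed.stl")
```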

The approach presented in this thesis is also coupled to a fully unstructured Cartesian mesh generation library with built-in localized adaptive mesh refinement (AMR) capabilities, thereby ensuring lower computational cost while also providing sufficient resolution. Future work will focus on implementing tetrahedral cuts to the base hexahedral mesh structure in order to extract a fully unstructured hexahedra-dominant mesh describing the STL geometry, which can be used for fluid flow simulations.
ContributorsKannan, Karthik (Author) / Herrmann, Marcus (Thesis advisor) / Peet, Yulia (Committee member) / Frakes, David (Committee member) / Arizona State University (Publisher)
Created2014
Description
Pre-Exposure Prophylaxis (PrEP) is any medical or public health procedure used before exposure to the disease-causing agent; its purpose is to prevent, rather than treat or cure, a disease. Most commonly, PrEP refers to an experimental HIV-prevention strategy that would use antiretrovirals to protect HIV-negative people from HIV infection. A deterministic mathematical model of HIV transmission is developed to evaluate the public-health impact of oral PrEP interventions and to compare PrEP effectiveness with respect to different evaluation methods. The effects of demographic, behavioral, and epidemic parameters on the PrEP impact are studied in a multivariate sensitivity analysis. Most published models of HIV intervention impact assume that the number of individuals joining the sexually active population per year is constant or proportional to the total population. In the second part of this study, three models are presented and analyzed to study the PrEP intervention, with constant, linear, and logistic recruitment rates, to examine how different demographic assumptions can affect the evaluation of PrEP. When data are available, least-squares fitting or similar approaches can often be used to determine a single set of approximate parameter values that make the model fit the data best. However, least-squares fitting only provides point estimates and does not provide information on how strongly the data support these particular estimates. Therefore, in the third part of this study, Bayesian parameter estimation is applied to fit the ODE model to the related HIV data. Starting with a set of prior distributions for the parameters as an initial guess, Bayes' formula can be applied to obtain a set of posterior distributions for the parameters that make the model fit the observed data best. Evaluating the posterior distribution often requires the integration of high-dimensional functions, which is usually difficult to calculate numerically. Therefore, the Markov chain Monte Carlo (MCMC) method is used to approximate the posterior distribution.
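
The Bayesian fitting step described above can be sketched with a random-walk Metropolis sampler wrapped around an ODE solve. The simple SI-type transmission model, flat positive priors, Gaussian likelihood, and synthetic "data" below are placeholder assumptions, not the HIV/PrEP models, priors, or datasets analyzed in this study; the sketch only illustrates how a posterior over ODE parameters can be approximated by MCMC.

```python
import numpy as np
from scipy.integrate import solve_ivp

def model(t_obs, beta, gamma):
    """Infected fraction from a simple SI-type transmission model (placeholder)."""
    def rhs(t, y):
        s, i = y
        return [-beta * s * i, beta * s * i - gamma * i]
    return solve_ivp(rhs, (t_obs[0], t_obs[-1]), [0.99, 0.01], t_eval=t_obs).y[1]

rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 30.0, 16)
data = model(t_obs, 0.5, 0.2) + rng.normal(0.0, 0.01, t_obs.size)   # synthetic observations

def log_posterior(theta, sigma=0.01):
    beta, gamma = theta
    if beta <= 0.0 or gamma <= 0.0:            # flat priors on the positive axis (assumption)
        return -np.inf
    resid = data - model(t_obs, beta, gamma)   # Gaussian measurement-error likelihood
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis: accept a proposal with probability min(1, posterior ratio).
theta = np.array([0.3, 0.3])
lp = log_posterior(theta)
samples = []
for _ in range(5000):
    proposal = theta + rng.normal(0.0, 0.02, 2)
    lp_prop = log_posterior(proposal)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = proposal, lp_prop
    samples.append(theta)
samples = np.array(samples[1000:])             # discard burn-in
print("posterior mean (beta, gamma):", samples.mean(axis=0))
```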
ContributorsZhao, Yuqin (Author) / Kuang, Yang (Thesis advisor) / Taylor, Jesse (Committee member) / Armbruster, Dieter (Committee member) / Tang, Wenbo (Committee member) / Kang, Yun (Committee member) / Arizona State University (Publisher)
Created2014
Description
With a ground-based Doppler lidar on the upwind side of a wind farm in the Tehachapi Pass of California, radial wind velocity measurements were collected in repeating sector sweeps, scanning up to 10 kilometers away. This region consists of complex terrain, with the scans made between mountains. The dataset was used to study techniques for short-term forecasting of wind power, by correlating changes in energy content, and of turbulence intensity, by tracking spatial variance, in the wind ahead of the wind farm. A ramp event was also captured and its propagation tracked.

Orthogonal horizontal wind vectors were retrieved from the radial velocities using a sector Velocity Azimuth Display method. Streamlines were plotted to determine potential sites for correlating upstream wind speed with wind speed at downstream locations near the wind farm. A "virtual wind turbine" was "placed" at locations along a streamline by using the time-series velocity data at each location as the input to a modeled wind turbine, to determine the extractable energy content at that location. The relationship between this time-dependent energy content upstream and near the wind farm was studied. By correlating the energy content at each upstream location, based on a time shift estimated from advection at the mean wind speed, several fits were evaluated. A prediction of the downstream energy content was produced by shifting the power output in time and applying the best-fit function. This method made predictions of the power near the wind farm several minutes in advance, and predictions were also made up to an hour in advance for a large ramp event. The Magnitude Absolute Error and Standard Deviation are presented for the predictions based on each selected upstream location.
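
A minimal sketch of the sector Velocity Azimuth Display retrieval mentioned above: the radial velocity at one range gate is modeled as the projection of a horizontally homogeneous wind onto each beam direction, and the horizontal components are recovered by least squares over the sector. The azimuth/elevation conventions, the neglect of vertical velocity, and the synthetic sweep below are simplifying assumptions, not the processing applied to the Tehachapi dataset.

```python
import numpy as np

def sector_vad(radial_vel, azimuth_deg, elevation_deg):
    """Least-squares retrieval of the horizontal wind components (u east, v north)
    from Doppler-lidar radial velocities collected over a sector of azimuths at a
    single range gate, assuming horizontally homogeneous flow and negligible
    vertical velocity across the sector."""
    az = np.deg2rad(azimuth_deg)
    el = np.deg2rad(elevation_deg)
    # Radial-velocity model: v_r = u sin(az) cos(el) + v cos(az) cos(el)
    design = np.column_stack([np.sin(az) * np.cos(el), np.cos(az) * np.cos(el)])
    (u, v), *_ = np.linalg.lstsq(design, radial_vel, rcond=None)
    return u, v

# Synthetic check: a 60-degree sector sweep through a 12 m/s westerly wind.
az = np.linspace(240.0, 300.0, 31)         # degrees clockwise from north
true_u, true_v, el = 12.0, 0.0, 3.0        # m/s, m/s, degrees elevation
vr = (true_u * np.sin(np.deg2rad(az)) + true_v * np.cos(np.deg2rad(az))) * np.cos(np.deg2rad(el))
vr += np.random.default_rng(1).normal(0.0, 0.3, az.size)   # measurement noise
print(sector_vad(vr, az, el))              # recovers approximately (12, 0)
```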
ContributorsMagerman, Beth (Author) / Calhoun, Ronald (Thesis advisor) / Peet, Yulia (Committee member) / Huang, Huei-Ping (Committee member) / Krishnamurthy, Raghavendra (Committee member) / Arizona State University (Publisher)
Created2014
Description
Presented is a study of the chemotaxis reaction process and its relation to flow topology. The effect of coherent structures in turbulent flows is characterized by studying nutrient uptake and the advantage that motile bacteria gain over non-motile bacteria. The variability is found to depend on the initial location of the scalar impurity and can be tied to Lagrangian coherent structures through recent advances in the identification of finite-time transport barriers. The advantage is relatively small when the initial nutrient lies within high-stretching regions of the flow, while nutrient within elliptic structures provides the greatest advantage for motile species. How the flow field and the relevant flow topology lead to this relation is analyzed.
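
The finite-time transport barriers referred to above are commonly identified as ridges of the finite-time Lyapunov exponent (FTLE) field, with low-FTLE vortex cores corresponding to elliptic regions. The sketch below computes an FTLE field for the standard time-periodic double-gyre flow as a generic stand-in; the flow, grid resolution, and integration time are illustrative assumptions, not the flow fields or diagnostics used in this study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# FTLE field for the standard time-periodic double-gyre flow, a common stand-in
# for identifying finite-time transport barriers (hyperbolic LCS ridges).
A, EPS, OMEGA = 0.1, 0.25, 2.0 * np.pi / 10.0
T_INT = 15.0   # integration (advection) time

def velocity(t, xy):
    x, y = xy
    s = EPS * np.sin(OMEGA * t)
    f = s * x**2 + (1.0 - 2.0 * s) * x
    dfdx = 2.0 * s * x + (1.0 - 2.0 * s)
    return [-np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y),
             np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx]

def flow_map(x0, y0):
    sol = solve_ivp(velocity, (0.0, T_INT), [x0, y0], rtol=1e-6, atol=1e-9)
    return sol.y[:, -1]

nx, ny = 50, 25
xs, ys = np.linspace(0.0, 2.0, nx), np.linspace(0.0, 1.0, ny)
adv = np.array([[flow_map(x, y) for y in ys] for x in xs])   # advected grid, shape (nx, ny, 2)

# Deformation gradient by finite differences of the flow map; the largest
# eigenvalue of the right Cauchy-Green tensor C = F^T F gives the FTLE.
dxf = np.gradient(adv[..., 0], xs, ys, axis=(0, 1))
dyf = np.gradient(adv[..., 1], xs, ys, axis=(0, 1))
ftle = np.zeros((nx, ny))
for i in range(nx):
    for j in range(ny):
        F = np.array([[dxf[0][i, j], dxf[1][i, j]],
                      [dyf[0][i, j], dyf[1][i, j]]])
        lam_max = np.linalg.eigvalsh(F.T @ F).max()
        ftle[i, j] = np.log(lam_max) / (2.0 * T_INT)
print("FTLE range over the domain:", ftle.min(), ftle.max())
```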
ContributorsJones, Kimberly (Author) / Tang, Wenbo (Thesis advisor) / Kang, Yun (Committee member) / Jones, Donald (Committee member) / Arizona State University (Publisher)
Created2015
Description
The tools developed for investigating dynamical systems have provided critical understanding of a wide range of physical phenomena. Here these tools are used to gain further insight into scalar transport and how it is affected by mixing. The aim of this research is to investigate the efficiency of several different partitioning methods that demarcate flow fields into dynamically distinct regions, and the correlation of finite-time statistics from the advection-diffusion equation with these regions.

For autonomous systems, invariant manifold theory can be used to separate the system into dynamically distinct regions. Although there is no equivalent method for nonautonomous systems, a similar analysis can be performed. Systems with general time dependence must instead be partitioned using finite-time transport barriers; these barriers are the edges of Lagrangian coherent structures (LCS), the analogs of the stable and unstable manifolds of invariant manifold theory. Using the coherent structures of a flow to analyze the statistics of trapping, flight, and residence times, the signatures of anomalous diffusion are obtained.

This research also investigates the use of linear models to approximate the elements of the covariance matrix of nonlinear flows, applying the covariance matrix approximation over coherent regions. The first- and second-order moments can be used to fully describe the evolution of an ensemble in linear systems; however, there is no direct method for nonlinear systems. The problem is compounded by the fact that the moments of nonlinear flows typically do not have analytic representations, so direct numerical simulations would be needed to obtain the moments throughout the domain. To circumvent these many computations, the nonlinear system is approximated as many linear systems for which analytic expressions for the moments exist. The parameters introduced in the linear models are obtained locally from the nonlinear deformation tensor.
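
The local linearization idea in the last paragraph can be illustrated as follows: about a given initial condition, the deformation gradient F of the flow map is estimated by finite differences, and the covariance of a small ensemble advected by the nonlinear flow is approximated by the linear-model pushforward F Sigma0 F^T (diffusion is omitted here). The two-dimensional toy velocity field, the initial covariance, and the finite-difference step are illustrative assumptions, not the flows or linear models developed in the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Compare the linear-model (deformation-gradient) prediction of an advected
# ensemble covariance with a direct Monte Carlo estimate for a toy nonlinear flow.
def velocity(t, xy):
    x, y = xy
    return [np.sin(y), np.sin(2.0 * x)]   # steady, nonlinear toy velocity field

def flow_map(p0, T=1.0):
    return solve_ivp(velocity, (0.0, T), p0, rtol=1e-8).y[:, -1]

p0 = np.array([0.3, 0.7])                 # ensemble mean initial condition
sigma0 = 0.01 * np.eye(2)                 # small initial ensemble covariance

# Deformation gradient F = d(flow_map)/d(initial condition) by central differences;
# these are the parameters the local linear model inherits from the nonlinear flow.
h = 1e-4
F = np.column_stack([(flow_map(p0 + h * e) - flow_map(p0 - h * e)) / (2.0 * h)
                     for e in np.eye(2)])

cov_linear = F @ sigma0 @ F.T             # linear-model pushforward of the covariance
rng = np.random.default_rng(0)
ens = rng.multivariate_normal(p0, sigma0, size=2000)
cov_mc = np.cov(np.array([flow_map(p) for p in ens]).T)
print("linear model:\n", cov_linear)
print("Monte Carlo:\n", cov_mc)
```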
ContributorsWalker, Phillip (Author) / Tang, Wenbo (Thesis advisor) / Kostelich, Eric (Committee member) / Mahalov, Alex (Committee member) / Moustaoui, Mohamed (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created2018