Matching Items (63)
Description

The main purpose of this project is to create a method for determining the absolute position of an accelerometer. Acceleration and angular speed were obtained from an accelerometer attached to a vehicle as it moved around. Because the orientation of the accelerometer changes as the vehicle moves, a rotation matrix is applied to the data based on the angular change at each time step. The angular change and the distance traveled are obtained using trapezoidal approximations of the corresponding integrals. The method was first validated against simple sets of "true" data, i.e., explicitly known data sets to which the results could be compared. An analysis of how different time steps and noise levels affect the error was then performed, yielding an optimal time step of 0.1 s that was used for the actual tests: a stationary test for calibration, a straight-line test as a simple verification, and a closed-loop test to assess accuracy. The graphs for these tests give no indication of the actual paths, so the final results show only that the data from this accelerometer are too noisy and inaccurate for the method to be used with this sensor. Future work would be to test different ways of obtaining more accurate data and then use them to verify the method, for example by using more sensors to interpolate the data, reducing noise with a different sensor, or adding a filter. If the method then proves accurate enough, it could be implemented in control systems.
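The integration scheme described in the abstract can be sketched in a few lines. The following is a generic planar dead-reckoning illustration, not the project's code; the function dead_reckon, the single yaw-rate channel, and all parameter values are assumptions made for the example.

```python
import numpy as np

def dead_reckon(acc_body, yaw_rate, dt, v0=(0.0, 0.0)):
    """Planar dead reckoning: integrate body-frame acceleration and yaw rate
    into a world-frame path using trapezoidal approximations of the integrals."""
    n = len(yaw_rate)
    # heading from trapezoidal integration of the angular rate
    theta = np.concatenate(([0.0], np.cumsum(0.5 * dt * (yaw_rate[1:] + yaw_rate[:-1]))))
    # rotate each body-frame acceleration sample into the world frame
    c, s = np.cos(theta), np.sin(theta)
    acc_world = np.column_stack([c * acc_body[:, 0] - s * acc_body[:, 1],
                                 s * acc_body[:, 0] + c * acc_body[:, 1]])
    # trapezoidal integration: acceleration -> velocity -> position
    vel = np.zeros((n, 2))
    pos = np.zeros((n, 2))
    vel[0] = v0
    vel[1:] = vel[0] + np.cumsum(0.5 * dt * (acc_world[1:] + acc_world[:-1]), axis=0)
    pos[1:] = pos[0] + np.cumsum(0.5 * dt * (vel[1:] + vel[:-1]), axis=0)
    return pos

# synthetic check: constant yaw rate with centripetal body-frame acceleration
# should trace (approximately) a circle of radius speed / omega
dt = 0.1
t = np.arange(0.0, 20.0, dt)
omega, speed = 0.3, 2.0
yaw_rate = np.full_like(t, omega)
acc_body = np.column_stack([np.zeros_like(t), np.full_like(t, speed * omega)])
path = dead_reckon(acc_body, yaw_rate, dt, v0=(speed, 0.0))
print(path[-1])
```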

Contributors: Horner, Devon (Author) / Kostelich, Eric (Thesis director) / Crook, Sharon (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2023-05
Description

High-dimensional systems are difficult to model and predict. The underlying mechanisms of such systems are too complex to be fully understood with limited theoretical knowledge and/or physical measurements. Nevertheless, reduced-order models have been widely used to study high-dimensional systems because they are practical and efficient to develop and implement. Although model errors (biases) are inevitable for reduced-order models, these models can still prove useful for developing real-world applications. Evaluation and validation of idealized models are indispensable to that end. Data assimilation and uncertainty quantification provide a way to assess the performance of a reduced-order model: real data and a dynamical model are combined in a data assimilation framework to generate corrected model forecasts of a system, and uncertainties in the model forecasts and the observations are quantified in each data assimilation cycle to provide optimal updates that are representative of the real dynamics. In this research, data assimilation is applied to assess the performance of two reduced-order models. The first model is developed for predicting prostate cancer treatment response under intermittent androgen suppression therapy. A sequential data assimilation scheme, the ensemble Kalman filter (EnKF), is used to quantify uncertainties in model predictions using clinical data of individual patients provided by the Vancouver Prostate Center. The second model is developed to study what causes changes in the state of the stratospheric polar vortex. Two data assimilation schemes, EnKF and ES-MDA (ensemble smoother with multiple data assimilation), are used to validate the qualitative properties of the model using ECMWF (European Center for Medium-Range Weather Forecasts) reanalysis data. In both studies, the reduced-order model is able to reproduce the data patterns and provides insight into the underlying mechanisms. However, significant model errors are also diagnosed for both models from the results of the data assimilation schemes, suggesting specific improvements to the reduced-order models.
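For readers unfamiliar with the EnKF mentioned above, a minimal textbook-style stochastic (perturbed-observation) analysis step looks roughly like the sketch below; it is not the code used in either study, and the linear observation operator H, the noise covariance R, and the toy data are placeholder assumptions.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """One stochastic EnKF analysis step (perturbed-observations form).
    X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation;
    H: (n_obs, n_state) linear observation operator; R: (n_obs, n_obs) obs covariance."""
    n_obs, n_ens = len(y), X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                      # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # perturb the observation for each ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
    return X + K @ (Y - H @ X)                     # analysis ensemble

# toy usage: estimate a 2-state system from one noisy scalar observation
rng = np.random.default_rng(0)
X = rng.normal(size=(2, 50))        # forecast ensemble: 2 states, 50 members
H = np.array([[1.0, 0.0]])          # observe the first state only
R = np.array([[0.1]])
Xa = enkf_update(X, np.array([0.8]), H, R, rng)
print(Xa.mean(axis=1))              # analysis mean is pulled toward the observation
```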
Contributors: Wu, Zhimin (Author) / Kostelich, Eric (Thesis advisor) / Moustaoui, Mohamed (Thesis advisor) / Jones, Chris (Committee member) / Espanol, Malena (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

This work presents a thorough analysis of reconstruction of global wave fields (governed by the inhomogeneous wave equation and the Maxwell vector wave equation) from sensor time series data of the wave field. Three major problems are considered. First, an analysis of circumstances under which wave fields can be fully reconstructed from a network of fixed-location sensors is presented. It is proven that, in many cases, wave fields can be fully reconstructed from a single sensor, but that such reconstructions can be sensitive to small perturbations in sensor placement. Generally, multiple sensors are necessary. The next problem considered is how to obtain a global approximation of an electromagnetic wave field in the presence of an amplifying noisy current density from sensor time series data. This type of noise, described in terms of a cylindrical Wiener process, creates a nonequilibrium system, derived from Maxwell's equations, where variance increases with time. In this noisy system, longer observation times do not generally provide more accurate estimates of the field coefficients. The mean squared error of the estimates can be decomposed into a sum of the squared bias and the variance. As the observation time $\tau$ increases, the bias decreases as $\mathcal{O}(1/\tau)$ but the variance increases as $\mathcal{O}(\tau)$. The contrasting time scales imply the existence of an "optimal" observing time (the bias-variance tradeoff). An iterative algorithm is developed to construct global approximations of the electric field using the optimal observing times. Lastly, the effect of sensor acceleration is considered. When the sensor location is fixed, measurements of wave fields composed of plane waves are almost periodic and so can be written in terms of a standard Fourier basis. When the sensor is accelerating, the resulting time series is no longer almost periodic. This phenomenon is related to the Doppler effect, where a time transformation must be performed to obtain the frequency and amplitude information from the time series data. To obtain frequency and amplitude information from accelerating sensor time series data in a general inhomogeneous medium, a randomized algorithm is presented. The algorithm is analyzed and example wave fields are reconstructed.
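The bias-variance tradeoff described above can be made concrete with a toy calculation: if the squared bias decays like $(b/\tau)^2$ and the variance grows like $v\tau$, the mean squared error is minimized at $\tau^* = (2b^2/v)^{1/3}$. The constants b and v in the sketch below are placeholders, not values from the dissertation.

```python
import numpy as np

def optimal_observing_time(b, v):
    """Minimize MSE(tau) = (b/tau)**2 + v*tau, a toy stand-in for a bias term
    decaying as O(1/tau) and a variance term growing as O(tau).
    Setting d(MSE)/dtau = -2*b**2/tau**3 + v = 0 gives tau* = (2*b**2/v)**(1/3)."""
    return (2.0 * b**2 / v) ** (1.0 / 3.0)

# illustrative constants only
b, v = 1.0, 0.05
tau_star = optimal_observing_time(b, v)
taus = np.linspace(0.5, 20.0, 400)
mse = (b / taus) ** 2 + v * taus
assert abs(taus[np.argmin(mse)] - tau_star) < 0.1   # numerical minimum matches the formula
print(f"optimal observing time ~ {tau_star:.2f}")
```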
Contributors: Barclay, Bryce Matthew (Author) / Mahalov, Alex (Thesis advisor) / Kostelich, Eric J (Thesis advisor) / Moustaoui, Mohamed (Committee member) / Motsch, Sebastien (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Advancements to a dual scale Large Eddy Simulation (LES) modeling approach for immiscible turbulent phase interfaces are presented. In the dual scale LES approach, a high resolution auxiliary grid, used to capture a fully resolved interface geometry realization, is linked to an LES grid that solves the filtered Navier-Stokes equations. Exact closure of the sub-filter interface terms is provided by explicitly filtering the fully resolved quantities from the auxiliary grid. Reconstructing a fully resolved velocity field to advance the phase interface requires modeling several sub-filter effects, including shear and accelerational instabilities and phase change. Two sub-filter models were developed to generate these sub-filter hydrodynamic instabilities: an Orr-Sommerfeld model and a Volume-of-Fluid (VoF) vortex sheet method. The Orr-Sommerfeld sub-filter model was found to be incompatible with the dual scale approach, since it is unable to generate interface rollup and a process to separate filtered and sub-filter scales could not be established. A novel VoF vortex sheet method was therefore proposed, since prior vortex methods have demonstrated interface rollup and, following the LES methodology, the vortex sheet strength can be decomposed into its filtered and sub-filter components. During its development, the VoF vortex sheet method was tested on a variety of classical hydrodynamic instability problems, compared against prior work and linear theory, and verified using Direct Numerical Simulations (DNS). An LES-consistent approach to coupling the VoF vortex sheet with the LES filtered equations is presented and compared against DNS. Finally, a sub-filter phase change model is proposed and assessed in the dual scale LES framework with an evaporating interface subjected to decaying homogeneous isotropic turbulence. Results are compared against DNS, and the interplay between surface tension forces and evaporation is discussed.
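As a generic illustration of the filtered/sub-filter decomposition that underlies the dual scale approach (not the dissertation's solver), a one-dimensional explicit top-hat filter can be used to split a field into a resolved part and a sub-filter remainder:

```python
import numpy as np

def box_filter(phi, width):
    """Explicit top-hat filter of `width` grid points, with periodic wrap."""
    kernel = np.ones(width) / width
    # periodic padding so the filtered field has the same length as phi
    padded = np.concatenate([phi[-(width // 2):], phi, phi[:width - width // 2 - 1]])
    return np.convolve(padded, kernel, mode="valid")

# a field with a large-scale wave plus fine-scale fluctuations
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
phi = np.sin(x) + 0.2 * np.sin(24.0 * x)

phi_bar = box_filter(phi, width=16)   # resolved (filtered) component
phi_sub = phi - phi_bar               # sub-filter component that must be modeled
print(f"rms resolved: {phi_bar.std():.3f}, rms sub-filter: {phi_sub.std():.3f}")
```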
Contributors: Goodrich, Austin Chase (Author) / Herrmann, Marcus (Thesis advisor) / Dahm, Werner (Committee member) / Kim, Jeonglae (Committee member) / Huang, Huei-Ping (Committee member) / Kostelich, Eric (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

A pneumonia-like illness, coined COVID-19 and caused by SARS-CoV-2, emerged late in 2019 and produced a devastating global pandemic on a scale not seen since the 1918/1919 influenza pandemic. This dissertation contributes deeper qualitative insights into the transmission dynamics and control of the disease in the United States. A basic mathematical model, which incorporates the key pertinent epidemiological features of SARS-CoV-2 and is fitted to observed COVID-19 data, was designed and used to assess the population-level impacts of vaccination and face mask usage in mitigating the burden of the pandemic in the United States. Conditions for the existence and asymptotic stability of the various equilibria of the model were derived. The model was shown to undergo a vaccine-induced backward bifurcation when the associated reproduction number is less than one. Conditions for achieving vaccine-derived herd immunity were derived for three of the four FDA-approved vaccines (namely the Pfizer, Moderna, and Johnson & Johnson vaccines), and the vaccination coverage level needed to achieve it decreases with increasing coverage of moderately and highly effective face masks. It was also shown that using face masks as a singular intervention strategy could lead to the elimination of the pandemic if moderate or highly effective masks are prioritized, and that elimination prospects are greatly enhanced if the vaccination program is combined with a face mask strategy that emphasizes the use of moderate to highly effective masks with at least moderate coverage. The model was extended in Chapter 3 to allow for the assessment of the impacts of waning and boosting of vaccine-derived and natural immunity against the BA.1 Omicron variant of SARS-CoV-2. It was shown that vaccine-derived herd immunity can be achieved in the United States via a vaccination-boosting strategy which entails fully vaccinating at least 72% of the susceptible populace. Boosting of vaccine-derived immunity was shown to be more beneficial than boosting of natural immunity. Overall, this study showed that the prospects of eliminating the pandemic in the United States were highly promising using the two intervention measures.
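The vaccine-derived herd immunity conditions referenced above rest on a standard threshold argument; a minimal sketch, assuming a single vaccine of efficacy `efficacy` and a basic reproduction number R0 (illustrative values only, not the fitted estimates from the dissertation), is:

```python
def herd_immunity_coverage(R0, efficacy):
    """Minimum vaccination coverage f so that the effective reproduction number
    R0 * (1 - efficacy * f) drops below 1, i.e. f > (1 - 1/R0) / efficacy.
    Returns None if the threshold cannot be reached even at 100% coverage."""
    f = (1.0 - 1.0 / R0) / efficacy
    return f if f <= 1.0 else None

# illustrative values only
for name, eff in [("high-efficacy vaccine", 0.95), ("moderate-efficacy vaccine", 0.70)]:
    f = herd_immunity_coverage(R0=3.0, efficacy=eff)
    print(name, "->", "unreachable" if f is None else f"{100 * f:.0f}% coverage")
```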
Contributors: Safdar, Salman (Author) / Gumel, Abba (Thesis advisor) / Kostelich, Eric (Committee member) / Kang, Yun (Committee member) / Fricks, John (Committee member) / Espanol, Malena (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

The planetary boundary layer (PBL) is the lowest part of the troposphere and is directly influenced by surface forcing. Anthropogenic modification from natural to urban environments, characterized by increased impervious surfaces, anthropogenic heat emission, and a three-dimensional building morphology, affects land-atmosphere interactions in the urban boundary layer (UBL). Ample research has demonstrated the effect of landscape modifications on the development and modulation of the near-surface urban heat island (UHI). However, despite potential implications for air quality, precipitation patterns, and aviation operations, considerably less attention has been given to impacts on regional-scale wind flow. This dissertation, composed of three peer-reviewed manuscripts, fills a fundamental gap in urban climate research by investigating individual and combined impacts of urbanization, heat adaptation strategies, and projected climate change on UBL dynamics. Paper 1 uses medium-resolution Weather Research and Forecasting (WRF) climate simulations to assess contemporary and future impacts across the Conterminous US (CONUS). Results indicate that projected urbanization and climate change are expected to increase summer daytime UBL height in the eastern CONUS. Heat adaptation strategies are expected to reduce summer daytime UBL depth by several hundred meters, increase both daytime and nighttime static stability, and induce stronger subsidence, especially in the southwestern US. Paper 2 investigates urban modifications to contemporary wind circulation in the complex terrain of the Phoenix Metropolitan Area (PMA) using high-resolution WRF simulations. The built environment of the PMA decreases wind flow in the evening and nighttime inertial sublayer and produces a UHI-induced circulation of limited vertical extent that modulates the background flow. During daytime, greater urban sensible heat flux dampens the urban roughness-induced drag effect by promoting a deeper, more mixed UBL. Paper 3 extends the investigation to future scenarios, showing that, overall, climate change is expected to reduce wind speed across the PMA. Projected increased soil moisture is expected to intensify katabatic winds and weaken anabatic winds along steeper slopes. Urban development is expected to obstruct nighttime wind flow across areas of urban expansion and increase turbulence in the westernmost UBL. This dissertation advances the understanding of regional-scale UBL dynamics and highlights challenges and opportunities for future research.
Contributors: Brandi, Aldo (Author) / Georgescu, Matei (Thesis advisor) / Broadbent, Ashley (Committee member) / Moustaoui, Mohamed (Committee member) / Sailor, David (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

This thesis focuses on turbulent bluff body wakes in incompressible and compressible flows. An incompressible wake flow past an axisymmetric body of revolution at a diameter-based Reynolds number Re=5000 is investigated via direct numerical simulation. This is followed by the development of a compressible solver using a split-form discontinuous Galerkin spectral element method framework with shock capturing. In the study of incompressible wake flows, three dominant coherent vortical motions are identified in the wake: the vortex shedding motion with frequency St=0.27, the bubble pumping motion with St=0.02, and the very-low-frequency (VLF) motions originating in the very near wake of the body with frequencies St=0.002 and 0.005. The very-low-frequency motion is associated with a slow precession of the wake barycenter. The vortex shedding pattern is demonstrated to follow a reflectional symmetry breaking mode, with the detachment location rotating continuously and making a full circle over one vortex shedding period. The VLF radial motion with St=0.005 originates as an m = 1 mode but later transitions into an m = 2 mode in the intermediate wake. Proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD) are further performed to analyze the spatial structures associated with the dominant coherent motions. Results of the POD and DMD analyses are consistent with the results of the azimuthal Fourier analysis. To extend the current incompressible code to compressible flows, a computational methodology is developed using a high-order approximation of the compressible Navier-Stokes equations with discontinuities. The methodology is based on a split discretization framework with a summation-by-parts operator. An entropy viscosity method and a subcell finite volume method are implemented to capture discontinuities. The developed high-order split-form shock-capturing methodology is subjected to a series of evaluations on cases ranging from subsonic to hypersonic and from one-dimensional to three-dimensional. The Taylor-Green vortex case and the supersonic sphere wake case demonstrate the capability to handle three-dimensional turbulent flows with and without the presence of shocks. It is also shown that higher-order approximations yield smaller errors than lower-order approximations for the same number of total degrees of freedom.
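For context on the modal analyses mentioned above, a minimal snapshot POD via the singular value decomposition is sketched below; it is a generic example on synthetic data, not the thesis code.

```python
import numpy as np

def snapshot_pod(snapshots):
    """Snapshot POD via the thin SVD.
    snapshots: (n_points, n_snapshots) array, one flow field per column.
    Returns spatial modes, singular values, and temporal coefficients."""
    fluctuations = snapshots - snapshots.mean(axis=1, keepdims=True)  # subtract mean flow
    U, s, Vt = np.linalg.svd(fluctuations, full_matrices=False)
    return U, s, np.diag(s) @ Vt          # modes, mode energies, time coefficients

# toy data: two traveling-wave structures plus weak noise
rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0 * np.pi, 200)
t = np.linspace(0.0, 10.0, 150)
field = (np.outer(np.sin(x), np.cos(2.0 * t))
         + 0.3 * np.outer(np.sin(3.0 * x), np.sin(5.0 * t))
         + 0.01 * rng.normal(size=(200, 150)))
modes, energy, coeffs = snapshot_pod(field)
print("energy captured by first two modes:",
      (energy[:2] ** 2).sum() / (energy ** 2).sum())
```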
Contributors: Zhang, Fengrui (Author) / Peet, Yulia (Thesis advisor) / Kostelich, Eric (Committee member) / Kim, Jeonglae (Committee member) / Hermann, Marcus (Committee member) / Adrian, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Glioblastoma Multiforme is a prevalent and aggressive brain tumor. It has an average 5-year survival rate of 6% and an average survival time of 14 months. Using patient-specific MRI data from the Barrow Neurological Institute, this thesis investigates the impact of parameter manipulation on reaction-diffusion models for predicting and simulating glioblastoma growth. The study aims to explore key factors influencing tumor morphology and to contribute to enhancing prediction techniques for treatment.
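Reaction-diffusion models of this kind are typified by the Fisher-KPP equation, du/dt = D u_xx + rho u (1 - u); the sketch below is a generic one-dimensional explicit-Euler illustration with made-up parameters, not the patient-calibrated model from the thesis.

```python
import numpy as np

def simulate_fisher_kpp(D=0.1, rho=0.5, L=20.0, nx=400, dt=0.01, steps=1000):
    """1D Fisher-KPP tumor-growth proxy: du/dt = D*u_xx + rho*u*(1 - u),
    explicit Euler in time, central differences in space, crude no-flux boundaries."""
    dx = L / (nx - 1)
    assert D * dt / dx**2 < 0.5, "explicit scheme stability limit"
    x = np.linspace(0.0, L, nx)
    u = 0.1 * np.exp(-((x - L / 2) ** 2))        # small initial tumor-density bump
    for _ in range(steps):
        lap = np.empty_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        lap[0], lap[-1] = lap[1], lap[-2]        # no-flux boundary treatment
        u = u + dt * (D * lap + rho * u * (1 - u))
    return x, u

x, u = simulate_fisher_kpp()
inside = x[u > 0.5]
print(f"region with u > 0.5 spans x = {inside.min():.1f} to {inside.max():.1f}")
```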
Contributors: Shayegan, Tara (Author) / Kostelich, Eric (Thesis director) / Kuang, Yang (Committee member) / Barrett, The Honors College (Contributor) / School of Human Evolution & Social Change (Contributor)
Created: 2024-05
Description

The main objective of mathematical modeling is to connect mathematics with other scientific fields. Developing predictive models helps us understand the behavior of biological systems, and by testing models one can relate mathematics to real-world experiments. To validate predictions numerically, one has to compare them with experimental data sets. Mathematical models can be split into two groups: microscopic and macroscopic. Microscopic models describe the motion of so-called agents (e.g., cells, ants) that interact with their neighbors. At large scales, the interactions among these agents give rise to collective structures such as flocking and swarming. One of the key questions is how to relate the particular interactions among agents to the overall emergent structures. Macroscopic models are precisely designed to describe the evolution of such large-scale structures. They are usually given as partial differential equations describing the time evolution of a density distribution (instead of tracking each individual agent). For instance, reaction-diffusion equations are used to model glioma cells and to predict tumor growth. This dissertation aims at developing such a framework to better understand the complex behavior of foraging ants and glioma cells.
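A microscopic model in the sense described above can be as simple as a Vicsek-type alignment rule, in which each agent adopts the average heading of its neighbors plus noise and, at large scale, flocking emerges. The sketch below is a generic textbook example, not a model from the dissertation.

```python
import numpy as np

def vicsek_step(pos, theta, speed=0.05, radius=0.5, noise=0.1, box=5.0, rng=None):
    """One step of a minimal Vicsek-type alignment model in a periodic box.
    Each agent adopts the mean heading of agents within `radius`, plus noise."""
    if rng is None:
        rng = np.random.default_rng()
    # pairwise displacements with periodic wrap
    diff = pos[:, None, :] - pos[None, :, :]
    diff -= box * np.round(diff / box)
    neighbors = (diff ** 2).sum(-1) < radius ** 2          # includes self
    # circular mean of neighbor headings
    mean_sin = (neighbors * np.sin(theta)[None, :]).sum(1)
    mean_cos = (neighbors * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + noise * rng.uniform(-np.pi, np.pi, len(theta))
    pos = (pos + speed * np.column_stack([np.cos(theta), np.sin(theta)])) % box
    return pos, theta

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 5.0, size=(200, 2))
theta = rng.uniform(-np.pi, np.pi, 200)
for _ in range(300):
    pos, theta = vicsek_step(pos, theta, rng=rng)
# order parameter near 1 indicates aligned (flocking) motion, near 0 disordered motion
print("polar order:", np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))
```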
Contributors: Jamous, Sara Sami (Author) / Motsch, Sebastien (Thesis advisor) / Armbruster, Dieter (Committee member) / Camacho, Erika (Committee member) / Moustaoui, Mohamed (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

I focus on algorithms that generate good sampling points for function approximation. In 1D, it is well known that polynomial interpolation using equispaced points is unstable. On the other hand, using Chebyshev nodes provides both stable and highly accurate points for polynomial interpolation. In higher-dimensional complex regions, optimal sampling points are not known explicitly. This work presents robust algorithms that find good sampling points in complex regions for polynomial interpolation, least-squares, and radial basis function (RBF) methods. The quality of these nodes is measured using the Lebesgue constant. I also consider optimal sampling for constrained optimization, used to solve PDEs, where boundary conditions must be imposed. Furthermore, I extend the scope of the problem to include finding near-optimal sampling points for high-order finite difference methods. These high-order finite difference methods can be implemented using either piecewise polynomials or RBFs.
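The 1D contrast cited above, unstable equispaced interpolation versus well-behaved Chebyshev nodes, can be quantified by estimating the Lebesgue constant (the maximum over the interval of the sum of absolute Lagrange basis functions); the following is a generic sketch, not the dissertation's algorithm.

```python
import numpy as np

def lebesgue_constant(nodes, n_eval=2000):
    """Estimate the Lebesgue constant of polynomial interpolation at `nodes`
    by maximizing sum_j |ell_j(x)| over a fine grid on [-1, 1]."""
    x = np.linspace(-1.0, 1.0, n_eval)
    L = np.zeros_like(x)
    for j, xj in enumerate(nodes):
        others = np.delete(nodes, j)
        # Lagrange basis polynomial ell_j evaluated on the grid
        ell = np.prod((x[:, None] - others) / (xj - others), axis=1)
        L += np.abs(ell)
    return L.max()

n = 20
equispaced = np.linspace(-1.0, 1.0, n + 1)
chebyshev = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))  # Chebyshev points of the first kind
print("equispaced:", lebesgue_constant(equispaced))   # grows exponentially with n
print("chebyshev :", lebesgue_constant(chebyshev))    # grows only logarithmically with n
```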
Contributors: Liu, Tony (Author) / Platte, Rodrigo B (Thesis advisor) / Renaut, Rosemary (Committee member) / Kaspar, David (Committee member) / Moustaoui, Mohamed (Committee member) / Motsch, Sebastien (Committee member) / Arizona State University (Publisher)
Created: 2019