Matching Items (8)

Description
Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled for use at each time step. Various metrics have been suggested for determining which sensors to use. One such metric is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffice to recover the initial state. However, to employ observability for sensor scheduling, the binary definition must be expanded so that one can measure how observable a system is under a particular measurement scheme; that is, one needs a metric of observability. Most methods utilizing an observability metric address sensor selection rather than sensor scheduling. In this dissertation we present a new approach that uses observability for sensor scheduling by employing the condition number of the observability matrix as the metric and using column subset selection to create an algorithm that chooses which sensors to use at each time step. To this end we use a rank-revealing QR factorization algorithm to select sensors. Several numerical experiments demonstrate the performance of the proposed scheme.
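The core idea named in this abstract, column subset selection via a rank-revealing QR factorization, can be sketched in a few lines. This is an illustrative simplification, not the dissertation's algorithm: the function name, the greedy per-step selection, and the toy matrices below are assumptions, and SciPy's column-pivoted QR stands in for a full rank-revealing QR.

```python
import numpy as np
from scipy.linalg import qr

def select_sensors(C, A, n_steps, k):
    """Pick k sensor rows per time step via column-pivoted QR.

    C : (m, n) matrix of all candidate sensors, A : (n, n) dynamics.
    Returns the chosen row indices per step, the scheduled
    observability matrix, and its condition number (the metric).
    """
    rows, blocks = [], []
    Ak = np.eye(A.shape[0])
    for _ in range(n_steps):
        M = C @ Ak                       # candidate measurement rows at this step
        # column-pivoted QR on M^T ranks the rows of M
        _, _, piv = qr(M.T, pivoting=True)
        idx = np.sort(piv[:k])
        rows.append(idx)
        blocks.append(M[idx])
        Ak = Ak @ A
    O = np.vstack(blocks)                # observability matrix of the schedule
    return rows, O, np.linalg.cond(O)
```

A well-conditioned `O` indicates that the scheduled measurements recover the initial state stably.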
ContributorsIlkturk, Utku (Author) / Gelb, Anne (Thesis advisor) / Platte, Rodrigo (Thesis advisor) / Cochran, Douglas (Committee member) / Renaut, Rosemary (Committee member) / Armbruster, Dieter (Committee member) / Arizona State University (Publisher)
Created2015
Description
The tools developed for investigating dynamical systems have provided critical understanding of a wide range of physical phenomena. Here these tools are used to gain further insight into scalar transport and how it is affected by mixing. The aim of this research is to investigate the efficiency of several partitioning methods that demarcate flow fields into dynamically distinct regions, and the correlation of finite-time statistics from the advection-diffusion equation with these regions.

For autonomous systems, invariant manifold theory can be used to separate the system into dynamically distinct regions. Although there is no equivalent theory for nonautonomous systems, a similar analysis can be done. Systems with general time dependence must instead be partitioned using finite-time transport barriers; these barriers are the edges of Lagrangian coherent structures (LCS), the analog of the stable and unstable manifolds of invariant manifold theory. Using the coherent structures of a flow to analyze the statistics of trapping, flight, and residence times, the signatures of anomalous diffusion are obtained.

This research also investigates the use of linear models for approximating the elements of the covariance matrix of nonlinear flows, applying the covariance matrix approximation over coherent regions. The first- and second-order moments fully describe an ensemble evolution in linear systems; however, there is no direct method for nonlinear systems. The problem is compounded by the fact that the moments of nonlinear flows typically do not have analytic representations, so direct numerical simulations would be needed to obtain the moments throughout the domain. To circumvent these computations, the nonlinear system is approximated by many linear systems for which analytic expressions for the moments exist. The parameters introduced in the linear models are obtained locally from the nonlinear deformation tensor.
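The linear-model idea described above can be illustrated on a toy flow: propagate the covariance with the local Jacobian (the deformation tensor), Sigma -> J Sigma J^T, and compare against a Monte Carlo ensemble pushed through the nonlinear map. The flow map and all constants here are illustrative assumptions, not the systems studied in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow(x, dt=0.1):
    # toy nonlinear flow map (illustrative, pendulum-like)
    return x + dt * np.array([x[1], -np.sin(x[0])])

def jacobian(x, dt=0.1):
    # deformation (Jacobian) of the flow map at x
    return np.eye(2) + dt * np.array([[0.0, 1.0],
                                      [-np.cos(x[0]), 0.0]])

x0 = np.array([0.3, 0.1])
Sigma0 = 1e-4 * np.eye(2)

# linear-model prediction of the covariance: J Sigma0 J^T
J = jacobian(x0)
Sigma_lin = J @ Sigma0 @ J.T

# Monte Carlo "truth" from the nonlinear map
ens = rng.multivariate_normal(x0, Sigma0, size=20000)
ens1 = np.array([flow(x) for x in ens])
Sigma_mc = np.cov(ens1.T)
```

For a tight initial ensemble the linear prediction closely matches the nonlinear Monte Carlo covariance; the approximation degrades as the ensemble spread or integration time grows.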
ContributorsWalker, Phillip (Author) / Tang, Wenbo (Thesis advisor) / Kostelich, Eric (Committee member) / Mahalov, Alex (Committee member) / Moustaoui, Mohamed (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created2018
Description
Earth-system models describe the interacting components of the climate system and technological systems that affect society, such as communication infrastructures. Data assimilation addresses the challenge of state specification by incorporating system observations into the model estimates. In this research, a particular data assimilation technique called the Local Ensemble Transform Kalman Filter (LETKF) is applied to the ionosphere, which is a domain of practical interest due to its effects on infrastructures that depend on satellite communication and remote sensing. This dissertation consists of three main studies that propose strategies to improve space-weather specification during ionospheric extreme events, but are generally applicable to Earth-system models:

Topic I applies the LETKF to estimate ion density with an idealized model of the ionosphere, given noisy synthetic observations of varying sparsity. Results show that the LETKF yields accurate estimates of the ion density field and unobserved components of neutral winds even when the observation density is spatially sparse (2% of grid points) and there are large levels (40%) of Gaussian observation noise.

Topic II proposes a targeted observing strategy for data assimilation, which uses the influence matrix diagnostic to target errors in chosen state variables. This strategy is applied in observing system experiments, in which synthetic electron density observations are assimilated with the LETKF into the Thermosphere-Ionosphere-Electrodynamics Global Circulation Model (TIEGCM) during a geomagnetic storm. Results show that assimilating targeted electron density observations yields on average about a 60%–80% reduction in electron density error within a 600 km radius of the observed location, compared to the 15% reduction obtained with randomly placed vertical profiles.

Topic III proposes a methodology to account for systematic model bias arising from errors in parametrized solar and magnetospheric inputs. This strategy is applied with the TIEGCM during a geomagnetic storm, and is used to estimate the spatiotemporal variations of bias in electron density predictions during the transitionary phases of the geomagnetic storm. Results show that this strategy reduces the error in 1-hour predictions of electron density by about 35% and 30% in polar regions during the main and relaxation phases of the geomagnetic storm, respectively.
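For readers unfamiliar with ensemble filters, a minimal sketch of an ensemble Kalman analysis step is given below. The LETKF itself adds localization and a deterministic ensemble transform; what follows is the simpler stochastic (perturbed-observation) variant, shown only to illustrate how an ensemble is updated by observations. The function name and toy dimensions are assumptions.

```python
import numpy as np

def enkf_analysis(E, y, H, R, rng):
    """Stochastic (perturbed-observation) ensemble Kalman analysis step.

    E : (n, N) forecast ensemble, y : (m,) observation,
    H : (m, n) observation operator, R : (m, m) obs-error covariance.
    Returns the analysis ensemble.
    """
    n, N = E.shape
    xbar = E.mean(axis=1, keepdims=True)
    X = (E - xbar) / np.sqrt(N - 1)            # state anomalies
    Y = H @ X                                   # observation-space anomalies
    K = X @ Y.T @ np.linalg.inv(Y @ Y.T + R)    # ensemble Kalman gain
    D = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, size=N).T          # perturbed observations
    return E + K @ (D - H @ E)
```

After the update, the ensemble mean moves toward the observation and the ensemble spread contracts, quantifying the reduced uncertainty.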
ContributorsDurazo, Juan, Ph.D (Author) / Kostelich, Eric J. (Thesis advisor) / Mahalov, Alex (Thesis advisor) / Tang, Wenbo (Committee member) / Moustaoui, Mohamed (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created2018
Description
High-dimensional systems are difficult to model and predict. The underlying mechanisms of such systems are too complex to be fully understood with limited theoretical knowledge and/or physical measurements. Nevertheless, reduced-order models have been widely used to study high-dimensional systems because they are practical and efficient to develop and implement. Although model errors (biases) are inevitable for reduced-order models, these models can still prove useful for developing real-world applications. Evaluation and validation of idealized models are indispensable for developing useful applications. Data assimilation and uncertainty quantification provide a way to assess the performance of a reduced-order model. Real data and a dynamical model are combined in a data assimilation framework to generate corrected model forecasts of a system. Uncertainties in model forecasts and observations are also quantified in a data assimilation cycle to provide optimal updates that are representative of the real dynamics. In this research, data assimilation is applied to assess the performance of two reduced-order models. The first model is developed for predicting prostate cancer treatment response under intermittent androgen suppression therapy. A sequential data assimilation scheme, the ensemble Kalman filter (EnKF), is used to quantify uncertainties in model predictions using clinical data of individual patients provided by the Vancouver Prostate Center. The second model is developed to study what causes changes in the state of the stratospheric polar vortex. Two data assimilation schemes, the EnKF and the ensemble smoother with multiple data assimilation (ES-MDA), are used to validate the qualitative properties of the model using ECMWF (European Center for Medium-Range Weather Forecasts) reanalysis data. In both studies, the reduced-order model is able to reproduce the data patterns and provide insight into the underlying mechanisms. However, significant model errors are also diagnosed for both models from the results of the data assimilation schemes, which suggests specific improvements to the reduced-order models.
ContributorsWu, Zhimin (Author) / Kostelich, Eric (Thesis advisor) / Moustaoui, Mohamed (Thesis advisor) / Jones, Chris (Committee member) / Espanol, Malena (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created2021
Description
This work presents a thorough analysis of the reconstruction of global wave fields (governed by the inhomogeneous wave equation and the Maxwell vector wave equation) from sensor time series data of the wave field. Three major problems are considered. First, an analysis of the circumstances under which wave fields can be fully reconstructed from a network of fixed-location sensors is presented. It is proven that, in many cases, wave fields can be fully reconstructed from a single sensor, but that such reconstructions can be sensitive to small perturbations in sensor placement. Generally, multiple sensors are necessary. The next problem considered is how to obtain a global approximation of an electromagnetic wave field in the presence of an amplifying noisy current density from sensor time series data. This type of noise, described in terms of a cylindrical Wiener process, creates a nonequilibrium system, derived from Maxwell's equations, in which variance increases with time. In this noisy system, longer observation times do not generally provide more accurate estimates of the field coefficients. The mean squared error of the estimates can be decomposed into a sum of the squared bias and the variance. As the observation time τ increases, the bias decreases as O(1/τ) but the variance increases as O(τ). The contrasting time scales imply the existence of an "optimal" observing time (the bias-variance tradeoff). An iterative algorithm is developed to construct global approximations of the electric field using the optimal observing times. Lastly, the effect of sensor acceleration is considered. When the sensor location is fixed, measurements of wave fields composed of plane waves are almost periodic and so can be written in terms of a standard Fourier basis. When the sensor is accelerating, the resulting time series is no longer almost periodic. This phenomenon is related to the Doppler effect, and a time transformation must be performed to obtain frequency and amplitude information from the time series data. To obtain frequency and amplitude information from accelerating sensor time series data in a general inhomogeneous medium, a randomized algorithm is presented. The algorithm is analyzed and example wave fields are reconstructed.
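The bias-variance tradeoff described in this abstract can be made concrete with a toy objective: if the squared bias scales as (b/τ)² and the variance as cτ, the mean squared error is minimized at τ* = (2b²/c)^(1/3). The constants b and c below are illustrative stand-ins for the problem-dependent factors in the dissertation.

```python
def optimal_observing_time(b, c):
    """Minimize MSE(tau) = (b/tau)**2 + c*tau over tau > 0.

    Setting d(MSE)/d(tau) = -2*b**2/tau**3 + c = 0 gives
    tau* = (2*b**2/c)**(1/3).
    """
    return (2.0 * b**2 / c) ** (1.0 / 3.0)

def mse(tau, b, c):
    """Squared bias plus variance at observing time tau."""
    return (b / tau) ** 2 + c * tau
```

Observing longer than τ* lets the growing variance dominate; observing shorter leaves the bias dominant.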
ContributorsBarclay, Bryce Matthew (Author) / Mahalov, Alex (Thesis advisor) / Kostelich, Eric J (Thesis advisor) / Moustaoui, Mohamed (Committee member) / Motsch, Sebastien (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created2023
Description
The main objective of mathematical modeling is to connect mathematics with other scientific fields. Developing predictive models helps us understand the behavior of biological systems. By testing models, one can relate mathematics to real-world experiments. To validate predictions numerically, one compares them with experimental data sets. Mathematical models can be split into two groups: microscopic and macroscopic. Microscopic models describe the motion of so-called agents (e.g., cells, ants) that interact with their surrounding neighbors. At large scales, the interactions among these agents form special structures such as flocks and swarms. One of the key questions is how to relate the particular interactions among agents to the overall emerging structures. Macroscopic models are designed precisely to describe the evolution of such large structures. They are usually given as partial differential equations describing the time evolution of a density distribution (instead of tracking each individual agent). For instance, reaction-diffusion equations are used to model glioma cells and to predict tumor growth. This dissertation aims to develop such a framework to better understand the complex behavior of foraging ants and glioma cells.
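As a concrete example of the macroscopic class mentioned above, a reaction-diffusion equation of Fisher-KPP type can be stepped with a simple explicit scheme. This is a generic illustration, not the dissertation's glioma or ant models; the grid, parameters, and boundary handling are assumptions.

```python
import numpy as np

def fisher_kpp_step(u, dx, dt, D=1.0, r=1.0):
    """One explicit Euler step of u_t = D*u_xx + r*u*(1 - u)
    (Fisher-KPP) with no-flux (Neumann) boundaries."""
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2       # reflecting boundary
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    return u + dt * (D * lap + r * u * (1.0 - u))

# initial density bump on [0, 10]; dt respects dt <= dx**2 / (2*D)
x = np.linspace(0.0, 10.0, 101)
u = np.exp(-(x - 5.0) ** 2)
for _ in range(200):
    u = fisher_kpp_step(u, dx=0.1, dt=0.002)
```

The logistic reaction term drives the density toward the carrying capacity 1 while diffusion spreads the front outward, the basic mechanism behind reaction-diffusion tumor-growth models.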
ContributorsJamous, Sara Sami (Author) / Motsch, Sebastien (Thesis advisor) / Armbruster, Dieter (Committee member) / Camacho, Erika (Committee member) / Moustaoui, Mohamed (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created2019
Description
I focus on algorithms that generate good sampling points for function approximation. In 1D, it is well known that polynomial interpolation using equispaced points is unstable, whereas Chebyshev nodes provide stable and highly accurate points for polynomial interpolation. In higher-dimensional complex regions, optimal sampling points are not known explicitly. This work presents robust algorithms that find good sampling points in complex regions for polynomial interpolation, least-squares, and radial basis function (RBF) methods. The quality of these nodes is measured using the Lebesgue constant. I also consider optimal sampling for constrained optimization, used to solve PDEs where boundary conditions must be imposed. Furthermore, I extend the scope of the problem to finding near-optimal sampling points for high-order finite difference methods, which can be implemented using either piecewise polynomials or RBFs.
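The 1D comparison underlying this abstract, equispaced versus Chebyshev nodes measured by the Lebesgue constant, can be reproduced in a few lines. The node count and sampling resolution below are illustrative choices, not taken from the dissertation; the Lebesgue function is estimated by dense sampling rather than computed exactly.

```python
import numpy as np

def lebesgue_constant(nodes, n_eval=2000):
    """Estimate the Lebesgue constant of polynomial interpolation
    at the given nodes on [-1, 1] by sampling the Lebesgue function
    (the sum of absolute values of the Lagrange basis polynomials)."""
    x = np.linspace(-1.0, 1.0, n_eval)
    L = np.zeros_like(x)
    for j in range(len(nodes)):
        others = np.delete(nodes, j)
        # j-th Lagrange basis polynomial evaluated at all sample points
        lj = np.prod((x[:, None] - others) / (nodes[j] - others), axis=1)
        L += np.abs(lj)
    return L.max()

n = 20
equi = np.linspace(-1.0, 1.0, n + 1)
cheb = np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev (extreme) points
```

For 21 points the Chebyshev constant stays near 3, while the equispaced constant is in the thousands, which is the instability the abstract refers to.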
ContributorsLiu, Tony (Author) / Platte, Rodrigo B (Thesis advisor) / Renaut, Rosemary (Committee member) / Kaspar, David (Committee member) / Moustaoui, Mohamed (Committee member) / Motsch, Sebastien (Committee member) / Arizona State University (Publisher)
Created2019
Description
This dissertation develops a second order accurate approximation to the magnetic resonance (MR) signal model used in the PARSE (Parameter Assessment by Retrieval from Single Encoding) method to recover information about the reciprocal of the spin-spin relaxation time function (R2*) and frequency offset function (w) in addition to the typical steady-state transverse magnetization (M) from single-shot magnetic resonance imaging (MRI) scans. Sparse regularization on an approximation to the edge map is used to solve the associated inverse problem. Several studies are carried out for both one- and two-dimensional test problems, including comparisons to the first order approximation method, as well as the first order approximation method with joint sparsity across multiple time windows enforced. The second order accurate model provides increased accuracy while reducing the amount of data required to reconstruct an image when compared to piecewise constant in time models. A key component of the proposed technique is the use of fast transforms for the forward evaluation. It is determined that the second order model is capable of providing accurate single-shot MRI reconstructions, but requires an adequate coverage of k-space to do so. Alternative data sampling schemes are investigated in an attempt to improve reconstruction with single-shot data, as current trajectories do not provide ideal k-space coverage for the proposed method.
ContributorsJesse, Aaron Mitchel (Author) / Platte, Rodrigo (Thesis advisor) / Gelb, Anne (Committee member) / Kostelich, Eric (Committee member) / Mittelmann, Hans (Committee member) / Moustaoui, Mohamed (Committee member) / Arizona State University (Publisher)
Created2019