Matching Items (83)
Description
This work presents a thorough analysis of the reconstruction of global wave fields (governed by the inhomogeneous wave equation and the Maxwell vector wave equation) from sensor time series data of the wave field. Three major problems are considered. First, an analysis of circumstances under which wave fields can be fully reconstructed from a network of fixed-location sensors is presented. It is proven that, in many cases, wave fields can be fully reconstructed from a single sensor, but that such reconstructions can be sensitive to small perturbations in sensor placement. Generally, multiple sensors are necessary. The next problem considered is how to obtain a global approximation of an electromagnetic wave field in the presence of an amplifying noisy current density from sensor time series data. This type of noise, described in terms of a cylindrical Wiener process, creates a nonequilibrium system, derived from Maxwell’s equations, where variance increases with time. In this noisy system, longer observation times do not generally provide more accurate estimates of the field coefficients. The mean squared error of the estimates can be decomposed into a sum of the squared bias and the variance. As the observation time $\tau$ increases, the bias decreases as $\mathcal{O}(1/\tau)$ but the variance increases as $\mathcal{O}(\tau)$. These contrasting time scales imply the existence of an "optimal" observing time (the bias-variance tradeoff). An iterative algorithm is developed to construct global approximations of the electric field using the optimal observing times. Lastly, the effect of sensor acceleration is considered. When the sensor location is fixed, measurements of wave fields composed of plane waves are almost periodic and so can be written in terms of a standard Fourier basis. When the sensor is accelerating, the resulting time series is no longer almost periodic. This phenomenon is related to the Doppler effect, where a time transformation must be performed to obtain the frequency and amplitude information from the time series data. To obtain frequency and amplitude information from accelerating sensor time series data in a general inhomogeneous medium, a randomized algorithm is presented. The algorithm is analyzed and example wave fields are reconstructed.
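Since the two scalings are stated explicitly, the optimal observing time follows from a one-line calculus argument; here $a$ and $b$ are illustrative placeholders for the unspecified bias and variance constants, not quantities from the dissertation:
$$\mathrm{MSE}(\tau) \approx \frac{a^2}{\tau^2} + b\,\tau, \qquad \frac{d}{d\tau}\,\mathrm{MSE}(\tau) = -\frac{2a^2}{\tau^3} + b = 0 \quad\Longrightarrow\quad \tau^* = \left(\frac{2a^2}{b}\right)^{1/3}.$$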
Contributors: Barclay, Bryce Matthew (Author) / Mahalov, Alex (Thesis advisor) / Kostelich, Eric J (Thesis advisor) / Moustaoui, Mohamed (Committee member) / Motsch, Sebastien (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
This thesis addresses the problem of approximating analytic functions over general and compact multidimensional domains. Although the methods we explore can be used in complex domains, most of the tests are performed on the interval $[-1,1]$ and the square $[-1,1]\times[-1,1]$. Using Fourier and polynomial frame approximations on an extended domain, well-conditioned methods can be formulated. In particular, these methods provide exponential decay of the error down to a finite but user-controlled tolerance $\epsilon>0$. Additionally, this thesis explores two implementations of the frame approximation: a singular value decomposition (SVD)-regularized least-squares fit as described by Adcock and Shadrin in 2022, and a column and row selection method that leverages QR factorizations to reduce the data needed in the approximation. Moreover, strategies to reduce the complexity of the approximation problem by exploiting randomized linear algebra in low-rank algorithms are also explored, including the AZ algorithm described by Coppe and Huybrechs in 2020.
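The SVD-regularized least-squares fit can be sketched in a few lines; the code below is a minimal illustration of the idea (a Fourier frame on an extended interval, solved by truncated SVD with tolerance $\epsilon$), with the extension factor, grid sizes, and test function chosen for illustration rather than taken from the thesis:

```python
# Minimal sketch of a Fourier-extension frame fit on [-1, 1]; all parameter
# choices below are illustrative assumptions, not the thesis's settings.
import numpy as np

def fourier_extension_fit(f, n_modes=40, n_samples=200, T=2.0, eps=1e-12):
    """Fit f on [-1, 1] with Fourier modes periodic on [-T, T] (a frame, not a basis)."""
    x = np.linspace(-1.0, 1.0, n_samples)            # sample points in the physical domain
    k = np.arange(-n_modes, n_modes + 1)             # frequencies of the extended basis
    A = np.exp(1j * np.pi * np.outer(x, k) / T)      # severely ill-conditioned collocation matrix
    # Truncated-SVD least squares: discard singular values below eps * sigma_max.
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    keep = s > eps * s[0]
    c = Vh[keep].conj().T @ ((U[:, keep].conj().T @ f(x)) / s[keep])
    return lambda t: np.real(np.exp(1j * np.pi * np.outer(t, k) / T) @ c)

approx = fourier_extension_fit(lambda x: np.exp(x) * np.sin(5 * x))
t = np.linspace(-1, 1, 1000)
err = np.max(np.abs(approx(t) - np.exp(t) * np.sin(5 * t)))
```

The collocation matrix is ill-conditioned precisely because the extended modes form a frame rather than a basis on $[-1,1]$; discarding singular values below the tolerance is what keeps the computation well-conditioned, at the price of an error floor near $\epsilon$.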
Contributors: Guo, Maosheng (Author) / Platte, Rodrigo (Thesis advisor) / Espanol, Malena (Committee member) / Renaut, Rosemary (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Ferrofluidic microrobots have emerged as promising tools for minimally invasive medical procedures, leveraging their unique properties to navigate through complex fluids and reach otherwise inaccessible regions of the human body, thereby enabling new applications in areas such as targeted drug delivery, tissue engineering, and diagnostics. This dissertation develops a model-predictive controller for the external magnetic manipulation of ferrofluid microrobots. Several experiments are performed to illustrate the adaptability and generalizability of the control algorithm to changes in system parameters, including the three-dimensional reference trajectory, the velocity of the workspace fluid, and the size, orientation, deformation, and velocity of the microrobotic droplet. A linear time-invariant control system governing the dynamics of locomotion is derived and used as the constraints of a least squares optimal control algorithm to minimize the projected error between the actual trajectory and the desired trajectory of the microrobot. The optimal control problem is implemented after time discretization using quadratic programming. In addition to demonstrating generalizability and adaptability, the accuracy of the control algorithm is analyzed for several different types of experiments. The experiments are performed in a workspace with a static surrounding fluid and extended to a workspace with fluid flowing through it. The results suggest that the proposed control algorithm could enable new capabilities for ferrofluidic microrobots, opening up new opportunities for applications in minimally invasive medical procedures, lab-on-a-chip, and microfluidics.
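As a rough sketch of the horizon least-squares idea behind such a controller (unconstrained, with a toy two-state linear model standing in for the droplet dynamics; the thesis instead solves a constrained quadratic program):

```python
# Minimal receding-horizon least-squares sketch for x_{k+1} = A x_k + B u_k.
# A, B, the horizon N, and the reference are illustrative assumptions.
import numpy as np

def mpc_step(A, B, x0, ref, N=10, rho=0.01):
    n, m = B.shape
    # Prediction matrices: x_k = A^k x0 + sum_j A^(k-1-j) B u_j.
    F = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
    G = np.zeros((N * n, N * m))
    for k in range(1, N + 1):
        for j in range(k):
            G[(k-1)*n:k*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k-1-j) @ B
    # Least squares: min ||G u - (ref_stack - F x0)||^2 + rho ||u||^2.
    rhs = np.tile(ref, N) - F @ x0
    H = np.vstack([G, np.sqrt(rho) * np.eye(N * m)])
    u = np.linalg.lstsq(H, np.concatenate([rhs, np.zeros(N * m)]), rcond=None)[0]
    return u[:m]                                   # apply only the first input (receding horizon)

A = np.array([[1.0, 0.1], [0.0, 0.9]])             # toy position/velocity model
B = np.array([[0.0], [0.1]])
u0 = mpc_step(A, B, x0=np.zeros(2), ref=np.array([1.0, 0.0]))
```

Re-solving this problem at every time step with the newly measured state is what gives the scheme its adaptability to disturbances such as changes in the surrounding fluid velocity.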
Contributors: Skowronek, Elizabeth Olga (Author) / Marvi, Hamidreza (Thesis advisor) / Berman, Spring (Committee member) / Platte, Rodrigo (Committee member) / Xu, Zhe (Committee member) / Lee, Hyunglae (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Balancing temporal shortages of renewable energy with natural gas for the generation of electricity is a challenge for dispatchers. This is compounded by the recent proposal of blending cleanly-produced hydrogen into natural gas networks. To introduce the concepts of gas flow, this thesis begins by linearizing the partial differential equations (PDEs) that govern the flow of natural gas in a single pipe. The solution of the linearized PDEs is used to investigate wave attenuation and characterize critical operating regions where linearization is applicable. The nonlinear PDEs for a single gas are extended to mixtures of gases with the addition of a PDE that governs the conservation of composition. The gas mixture formulation is developed for general gas networks that can inject or withdraw arbitrary time-varying mixtures of gases into or from the network at arbitrarily specified nodes, while being influenced by time-varying control actions of compressor units. The PDE formulation is discretized in space to form a nonlinear control system of ordinary differential equations (ODEs), which is used to prove that homogeneous mixtures are well-behaved and heterogeneous mixtures may be ill-behaved in the sense of monotone-ordering of solutions. Numerical simulations are performed to compute interfaces that delimit monotone and periodic system responses. The ODE system is used as the constraints of an optimal control problem (OCP) to minimize the expended energy of compressors. Moreover, the ODE system for the natural gas network is linearized and used as the constraints of a linear OCP. The OCPs are digitally implemented as optimization problems following the discretization of the time domain. The optimization problems are applied to pipelines and small test networks. Some qualitative and computational applications, including linearization error analysis and transient responses, are also investigated.
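A minimal sketch of what a linearized single-pipe model looks like after spatial discretization, assuming a damped-acoustics form $p_t = -c^2 q_x$, $q_t = -p_x - kq$ on a staggered grid; the equations and every parameter value here are illustrative stand-ins, not the thesis's formulation:

```python
# Staggered-grid, explicit-in-time sketch of damped linear acoustics in a pipe.
# Wave speed, friction rate, pipe length, and forcing are illustrative.
import numpy as np

c, kf, L, nx = 350.0, 0.05, 50e3, 200     # wave speed [m/s], friction [1/s], length [m], cells
dx = L / nx
dt = 0.8 * dx / c                         # explicit step under the CFL limit
p = np.zeros(nx)                          # pressure perturbation at cell centers
q = np.zeros(nx + 1)                      # flux perturbation at cell interfaces (staggered)

for n in range(5000):
    p[0] = np.sin(2 * np.pi * n * dt / 600.0)                 # prescribed inlet pressure signal
    q[1:-1] -= dt * ((p[1:] - p[:-1]) / dx + kf * q[1:-1])    # momentum: q_t = -p_x - k q
    q[-1] = q[-2]                                             # crude outflow condition
    p[1:] -= dt * c**2 * (q[2:] - q[1:-1]) / dx               # continuity: p_t = -c^2 q_x
# the inlet wave decays as it propagates, illustrating friction-driven attenuation
```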
Contributors: Baker, Luke Silas (Author) / Armbruster, Dieter (Thesis advisor) / Zlotnik, Anatoly (Committee member) / Herty, Michael (Committee member) / Platte, Rodrigo (Committee member) / Milner, Fabio (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
The variable projection method has been developed as a powerful tool for solving separable nonlinear least squares problems. It has proven effective in cases where the underlying model consists of a linear combination of nonlinear functions, such as exponential functions. In this thesis, a modified version of the variable projection method is employed to address a challenging semi-blind deconvolution problem involving mixed Gaussian kernels. The aim is to recover the original signal accurately while estimating the mixed Gaussian kernel utilized during the convolution process. The numerical results obtained through the implementation of the proposed algorithm are presented. These results highlight the method’s ability to approximate the true signal successfully. However, accurately estimating the mixed Gaussian kernel remains a challenging task. The implementation details, specifically focusing on constructing a simplified Jacobian for the Gauss-Newton method, are explored. This contribution enhances the understanding and practicality of the approach.
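The core variable projection step is easy to sketch. The example below uses a sum of exponentials rather than the thesis's mixed-Gaussian deconvolution model, and a finite-difference Jacobian rather than the simplified Jacobian discussed above:

```python
# Variable projection for a separable model y ~ Phi(alpha) c: eliminate the
# linear coefficients c with a least-squares solve, then run a nonlinear
# solver on the reduced residual only. Model and data are illustrative.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 2, 100)
alpha_true = np.array([1.0, 4.0])
Phi = lambda a: np.exp(-np.outer(t, a))          # columns phi_j(t) = exp(-a_j t)
y = Phi(alpha_true) @ np.array([2.0, -1.0])      # synthetic noiseless data

def reduced_residual(a):
    P = Phi(a)
    c = np.linalg.lstsq(P, y, rcond=None)[0]     # optimal linear part for this alpha
    return P @ c - y                             # projected residual (I - P P^+) y

sol = least_squares(reduced_residual, x0=np.array([0.5, 3.0]))
c_hat = np.linalg.lstsq(Phi(sol.x), y, rcond=None)[0]
```

Optimizing only over the nonlinear parameters shrinks the search space and is what makes the projected formulation attractive for problems like the semi-blind deconvolution above.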
Contributors: Dworaczyk, Jordan Taylor (Author) / Espanol, Malena (Thesis advisor) / Welfert, Bruno (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Advancements to a dual scale Large Eddy Simulation (LES) modeling approach for immiscible turbulent phase interfaces are presented. In the dual scale LES approach, a high resolution auxiliary grid, used to capture a fully resolved interface geometry realization, is linked to an LES grid that solves the filtered Navier-Stokes equations. Exact closure of the sub-filter interface terms is provided by explicitly filtering the fully resolved quantities from the auxiliary grid. Reconstructing a fully resolved velocity field to advance the phase interface requires modeling several sub-filter effects, including shear and accelerational instabilities and phase change. Two sub-filter models were developed to generate these sub-filter hydrodynamic instabilities: an Orr-Sommerfeld model and a Volume-of-Fluid (VoF) vortex sheet method. The Orr-Sommerfeld sub-filter model was found to be incompatible with the dual scale approach, since it is unable to generate interface rollup and a process to separate filtered and sub-filter scales could not be established. A novel VoF vortex sheet method was therefore proposed, since prior vortex methods have demonstrated interface rollup and, following the LES methodology, the vortex sheet strength can be decomposed into its filtered and sub-filter components. During its development, the VoF vortex sheet method was tested with a variety of classical hydrodynamic instability problems, compared against prior work and linear theory, and verified using Direct Numerical Simulations (DNS). An LES consistent approach to coupling the VoF vortex sheet with the LES filtered equations is presented and compared against DNS. Finally, a sub-filter phase change model is proposed and assessed in the dual scale LES framework with an evaporating interface subjected to decaying homogeneous isotropic turbulence. Results are compared against DNS, and the interplay between surface tension forces and evaporation is discussed.
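For reference, a minimal sketch of the kind of classical desingularized point-vortex (vortex-blob) discretization that exhibits the interface rollup mentioned above; this is not the thesis's VoF vortex sheet method, and the smoothing parameter, circulation, and initial perturbation are illustrative:

```python
# Krasny-style vortex-blob sketch: Lagrangian markers on a perturbed sheet,
# advected by the delta-smoothed Biot-Savart sum. All parameters illustrative.
import numpy as np

n, delta, dt = 200, 0.1, 0.005
theta = np.linspace(0, 1, n, endpoint=False)
x = theta.copy()                                  # initially flat sheet on [0, 1)
y = 0.01 * np.sin(2 * np.pi * theta)              # small sinusoidal perturbation
gamma = np.full(n, 1.0 / n)                       # uniform circulation per marker

def velocity(x, y):
    dx = x[:, None] - x[None, :]
    dy = y[:, None] - y[None, :]
    r2 = dx**2 + dy**2 + delta**2                 # delta-smoothing regularizes the kernel
    u = -np.sum(gamma * dy / r2, axis=1) / (2 * np.pi)
    v = np.sum(gamma * dx / r2, axis=1) / (2 * np.pi)
    return u, v

for _ in range(400):                              # forward Euler in time
    u, v = velocity(x, y)
    x += dt * u
    y += dt * v                                   # markers wind up into a rollup spiral
```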
Contributors: Goodrich, Austin Chase (Author) / Herrmann, Marcus (Thesis advisor) / Dahm, Werner (Committee member) / Kim, Jeonglae (Committee member) / Huang, Huei-Ping (Committee member) / Kostelich, Eric (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
A pneumonia-like illness emerged late in 2019 (coined COVID-19), caused by SARS-CoV-2, causing a devastating global pandemic on a scale not seen since the 1918/1919 influenza pandemic. This dissertation contributes deeper qualitative insights into the transmission dynamics and control of the disease in the United States. A basic mathematical model, which incorporates the key pertinent epidemiological features of SARS-CoV-2 and was fitted using observed COVID-19 data, was designed and used to assess the population-level impacts of vaccination and face mask usage in mitigating the burden of the pandemic in the United States. Conditions for the existence and asymptotic stability of the various equilibria of the model were derived. The model was shown to undergo a vaccine-induced backward bifurcation when the associated reproduction number is less than one. Conditions for achieving vaccine-derived herd immunity were derived for three of the four FDA-approved vaccines (namely the Pfizer, Moderna, and Johnson & Johnson vaccines), and the vaccination coverage level needed to achieve it decreases with increasing coverage of moderately and highly effective face masks. It was also shown that using face masks as a singular intervention strategy could lead to the elimination of the pandemic if moderate or highly effective masks are prioritized. Pandemic elimination prospects are greatly enhanced if the vaccination program is combined with a face mask use strategy that emphasizes the use of moderate to highly effective masks with at least moderate coverage. The model was extended in Chapter 3 to allow for the assessment of the impacts of waning and boosting of vaccine-derived and natural immunity against the BA.1 Omicron variant of SARS-CoV-2. It was shown that vaccine-derived herd immunity can be achieved in the United States via a vaccination-boosting strategy which entails fully vaccinating at least 72% of the susceptible populace. Boosting of vaccine-derived immunity was shown to be more beneficial than boosting of natural immunity. Overall, this study showed that the prospects of the elimination of the pandemic in the United States were highly promising using the two intervention measures.
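As a point of reference for the threshold conditions mentioned above, the classical herd-immunity calculation (a textbook simplification, not the dissertation's model-specific condition) with basic reproduction number $\mathcal{R}_0$ and vaccine efficacy $\varepsilon_v$ gives the critical coverage
$$\mathcal{R}_0\left(1 - \varepsilon_v f_v\right) = 1 \quad\Longrightarrow\quad f_v^* = \frac{1}{\varepsilon_v}\left(1 - \frac{1}{\mathcal{R}_0}\right),$$
so, for example, $\mathcal{R}_0 = 2.5$ and $\varepsilon_v = 0.9$ require covering about 67% of the population; interventions that lower the effective $\mathcal{R}_0$, such as face mask use, lower this threshold, consistent with the trends described above.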
Contributors: Safdar, Salman (Author) / Gumel, Abba (Thesis advisor) / Kostelich, Eric (Committee member) / Kang, Yun (Committee member) / Fricks, John (Committee member) / Espanol, Malena (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
During the inversion of discrete linear systems, noise in data can be amplified and result in meaningless solutions. To combat this effect, characteristics of solutions that are considered desirable are mathematically implemented during inversion. This is a process called regularization. The influence of the provided prior information is controlled by the introduction of non-negative regularization parameter(s). Many methods are available for both the selection of appropriate regularization parameters and the inversion of the discrete linear system. Generally, for a single problem there is just one regularization parameter. Here, a learning approach is considered to identify a single regularization parameter based on the use of multiple data sets described by a linear system with a common model matrix. The situation with multiple regularization parameters that weight different spectral components of the solution is considered as well. To obtain these multiple parameters, standard methods are modified for identifying the optimal regularization parameters. Modifications of the unbiased predictive risk estimation, generalized cross validation, and the discrepancy principle are derived for finding spectral windowing regularization parameters. These estimators are extended for finding the regularization parameters when multiple data sets with common system matrices are available. Statistical analysis of these estimators is conducted for real and complex transformations of data. It is demonstrated that spectral windowing regularization parameters can be learned from these new estimators applied to multiple data sets and with multiple windows. Numerical experiments evaluating these new methods demonstrate that these modified methods, which do not require the use of true data for learning regularization parameters, are effective and efficient, and perform comparably to a supervised learning method based on estimating the parameters using true data. The theoretical developments are validated for one- and two-dimensional image deblurring. It is verified that the obtained estimates of spectral windowing regularization parameters can be used effectively on validation data sets that are separate from the training data, and do not require known data.
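As a point of reference for the estimators discussed above, here is a minimal single-parameter sketch: Tikhonov regularization with the parameter chosen by ordinary generalized cross validation via the SVD (the thesis generalizes such estimators to spectral windows and multiple data sets; the toy system below is illustrative):

```python
# Tikhonov solution of min ||A x - b||^2 + lam^2 ||x||^2 with lam chosen
# by minimizing the GCV function, computed cheaply through the SVD of A.
import numpy as np
from scipy.optimize import minimize_scalar

def tikhonov_gcv(A, b):
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b                                # data in the left singular basis
    def gcv(log_lam):
        lam = np.exp(log_lam)
        f = s**2 / (s**2 + lam**2)                # Tikhonov filter factors
        resid = np.sum(((1.0 - f) * beta) ** 2)   # residual within range(A)
        return resid / (len(b) - np.sum(f)) ** 2  # GCV function (up to a constant factor)
    lam = np.exp(minimize_scalar(gcv, bounds=(-20.0, 5.0), method="bounded").x)
    x = Vh.T @ (s / (s**2 + lam**2) * beta)       # filtered SVD solution
    return x, lam

A = np.vander(np.linspace(0, 1, 40), 12)          # mildly ill-conditioned toy system
b = A @ np.ones(12) + 1e-3 * np.random.default_rng(1).standard_normal(40)
x, lam = tikhonov_gcv(A, b)
```

Replacing the single filter-factor curve with per-window parameters is, roughly, what the spectral windowing extension above does.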
Contributors: Byrne, Michael John (Author) / Renaut, Rosemary (Thesis advisor) / Cochran, Douglas (Committee member) / Espanol, Malena (Committee member) / Jackiewicz, Zdzislaw (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
The main objective of this work is to study novel stochastic modeling applications to cybersecurity aspects across three dimensions: loss, attack, and detection. First, motivated by recent spatial stochastic models with cyber insurance applications, the first and second moments of the size of a typical cluster of bond percolation on finite graphs are studied. More precisely, given a finite graph whose edges are independently open with the same probability $p$ and a vertex $x$ chosen uniformly at random, the goal is to find the first and second moments of the number of vertices in the cluster of open edges containing $x$. Exact expressions for the first and second moments of the size distribution of a bond percolation cluster are derived for essential building blocks of hybrid graphs: the ring, the path, the random star, and regular graphs. Upper bounds for the moments are obtained by using a coupling argument to compare the percolation model with branching processes when the graph is the random rooted tree with a given offspring distribution and a given finite radius. Second, the Petri net modeling framework for performance analysis is well established; extensions provide enough flexibility to examine, via simulation, the behavior of a permissioned blockchain platform in the context of an ongoing cyberattack. The relationship between system performance and cyberattack configuration is analyzed. The simulations vary the blockchain’s parameters and network structure, revealing, through their impact on system performance, the factors that contribute positively or negatively to a Sybil attack. Lastly, the ability of denoising diffusion probabilistic models (DDPMs) to perform synthetic tabular data augmentation is studied. DDPMs surpass generative adversarial networks in improving computer vision classification tasks and in image generation (for example, Stable Diffusion). Recent research and open-source implementations point to a strong quality of synthetic tabular data generation for classification and regression tasks. Unfortunately, the present state of the literature concerning tabular data augmentation with DDPMs for classification is lacking. Further, cyber datasets commonly have highly unbalanced distributions, complicating training. Here, synthetic tabular data augmentation is investigated with cyber datasets, and performance on machine learning classification tasks, measured by well-known metrics, improves with augmentation and balancing.
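The quantity studied in the first part can be checked empirically in a few lines; below is a minimal Monte Carlo sketch for the ring (cycle graph), with the graph size, edge probability, and trial count chosen for illustration:

```python
# Bond percolation on a ring: open each edge with probability p, pick a
# uniformly random vertex, and estimate the first two moments of the size
# of its open cluster. On the ring the cluster is just the maximal run of
# open edges through the chosen vertex.
import numpy as np
rng = np.random.default_rng(0)

def cluster_size_moments(n=50, p=0.5, trials=20000):
    sizes = np.empty(trials)
    for t in range(trials):
        open_edge = rng.random(n) < p            # edge i joins vertex i and (i+1) % n
        x = rng.integers(n)                      # uniformly random root vertex
        size, j = 1, x
        while open_edge[j] and size < n:         # walk clockwise over open edges
            j = (j + 1) % n
            size += 1
        j = (x - 1) % n
        while open_edge[j] and size < n:         # walk counterclockwise over open edges
            j = (j - 1) % n
            size += 1
        sizes[t] = size
    return sizes.mean(), (sizes**2).mean()

m1, m2 = cluster_size_moments()                  # compare against the exact ring formulas
```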
Contributors: La Salle, Axel (Author) / Lanchier, Nicolas (Thesis advisor) / Jevtic, Petar (Thesis advisor) / Motsch, Sebastien (Committee member) / Boscovic, Dragan (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
This thesis focuses on turbulent bluff body wakes in incompressible and compressible flows. An incompressible wake flow past an axisymmetric body of revolution at a diameter-based Reynolds number Re=5000 is investigated via a direct numerical simulation. It is followed by the development of a compressible solver using a split-form discontinuous Galerkin spectral element method framework with shock capturing. In the study on incompressible wake flows, three dominant coherent vortical motions are identified in the wake: the vortex shedding motion with the frequency of St=0.27, the bubble pumping motion with St=0.02, and the very-low-frequency (VLF) motion originating in the very near wake of the body with the frequencies St=0.002 and 0.005. The very-low-frequency motion is associated with a slow precession of the wake barycenter. The vortex shedding pattern is demonstrated to follow a reflectional symmetry breaking mode, with the detachment location rotating continuously and making a full circle over one vortex shedding period. The VLF radial motion with St=0.005 originates as an m = 1 mode, but later transitions into an m = 2 mode in the intermediate wake. Proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD) are further performed to analyze the spatial structures associated with the dominant coherent motions. Results of the POD and DMD analysis are consistent with the results of the azimuthal Fourier analysis. To extend the current incompressible code to solve compressible flows, a computational methodology is developed using a high-order approximation for the compressible Navier-Stokes equations with discontinuities. The methodology is based on a split discretization framework with a summation-by-parts operator. An entropy viscosity method and a subcell finite volume method are implemented to capture discontinuities. The developed high-order split-form shock-capturing methodology is subjected to a series of evaluations on cases from subsonic to hypersonic, and from one-dimensional to three-dimensional. The Taylor-Green vortex case and the supersonic sphere wake case show the capability to handle three-dimensional turbulent flows without and with the presence of shocks. It is also shown that higher-order approximations yield smaller errors than lower-order approximations for the same number of total degrees of freedom.
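The snapshot POD used in the modal analysis above reduces to an SVD of the mean-subtracted snapshot matrix; a minimal sketch on toy traveling-wave data (illustrative, not the thesis's three-dimensional wake fields):

```python
# Snapshot POD via the SVD: columns of X are flow snapshots in time.
import numpy as np

def pod(X, r):
    X = X - X.mean(axis=1, keepdims=True)       # subtract the mean field
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / np.sum(s**2)                # fraction of variance captured per mode
    return U[:, :r], energy[:r], Vh[:r]         # spatial modes, energies, temporal coefficients

# toy example: a traveling wave sampled at 200 points over 100 time steps
x = np.linspace(0, 2 * np.pi, 200)[:, None]
t = np.linspace(0, 10, 100)[None, :]
modes, energy, coeffs = pod(np.sin(5 * x - 2 * np.pi * t), r=4)
```

A traveling wave concentrates its energy in a pair of leading modes whose temporal coefficients oscillate in quadrature, the same signature sought when POD is paired with DMD to extract frequencies.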
Contributors: Zhang, Fengrui (Author) / Peet, Yulia (Thesis advisor) / Kostelich, Eric (Committee member) / Kim, Jeonglae (Committee member) / Herrmann, Marcus (Committee member) / Adrian, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2022