Matching Items (42)
Description
This thesis addresses the problem of approximating analytic functions over general and compact multidimensional domains. Although the methods we explore can be used in complex domains, most of the tests are performed on the interval $[-1,1]$ and the square $[-1,1]\times[-1,1]$. Using Fourier and polynomial frame approximations on an extended domain, well-conditioned methods can be formulated. In particular, these methods provide exponential decay of the error down to a finite but user-controlled tolerance $\epsilon>0$. Additionally, this thesis explores two implementations of the frame approximation: a singular value decomposition (SVD)-regularized least-squares fit as described by Adcock and Shadrin in 2022, and a column and row selection method that leverages QR factorizations to reduce the data needed in the approximation. Moreover, strategies to reduce the complexity of the approximation problem by exploiting randomized linear algebra in low-rank algorithms are also explored, including the AZ algorithm described by Coppe and Huybrechs in 2020.
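To make the regularized frame idea concrete, the following Python sketch fits an analytic but non-periodic function on $[-1,1]$ with a Fourier frame that is periodic on the extended interval $[-\gamma,\gamma]$, using a truncated-SVD least-squares solve with tolerance $\epsilon$. The sample counts, extension factor, and test function are illustrative assumptions, not choices made in the thesis.

    import numpy as np

    def fourier_extension_fit(f, n=40, m=160, gamma=2.0, eps=1e-12):
        """Fit f on [-1, 1] with a Fourier frame on [-gamma, gamma] via truncated-SVD least squares."""
        x = np.cos(np.pi * np.arange(m + 1) / m)         # Chebyshev samples in [-1, 1]
        k = np.arange(-n, n + 1)                          # frame frequencies
        A = np.exp(1j * np.pi * np.outer(x, k) / gamma)   # ill-conditioned collocation matrix
        U, s, Vh = np.linalg.svd(A, full_matrices=False)
        keep = s > eps * s[0]                             # drop singular values below the tolerance
        c = Vh[keep].conj().T @ ((U[:, keep].conj().T @ f(x)) / s[keep])
        return k, c

    def evaluate(k, c, x, gamma=2.0):
        return np.exp(1j * np.pi * np.outer(x, k) / gamma) @ c

    f = lambda x: 1.0 / (1.0 + 25.0 * x**2)               # analytic, but not periodic on [-1, 1]
    k, c = fourier_extension_fit(f)
    xt = np.linspace(-1.0, 1.0, 500)
    print("max error:", np.max(np.abs(evaluate(k, c, xt) - f(xt))))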
Contributors: Guo, Maosheng (Author) / Platte, Rodrigo (Thesis advisor) / Espanol, Malena (Committee member) / Renaut, Rosemary (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Ferrofluidic microrobots have emerged as promising tools for minimally invasive medical procedures, leveraging their unique properties to navigate through complex fluids and reach otherwise inaccessible regions of the human body, thereby enabling new applications in areas such as targeted drug delivery, tissue engineering, and diagnostics. This dissertation develops a model-predictive controller for the external magnetic manipulation of ferrofluid microrobots. Several experiments are performed to illustrate the adaptability and generalizability of the control algorithm to changes in system parameters, including the three-dimensional reference trajectory, the velocity of the workspace fluid, and the size, orientation, deformation, and velocity of the microrobotic droplet. A linear time-invariant control system governing the dynamics of locomotion is derived and used as the constraints of a least squares optimal control algorithm to minimize the projected error between the actual trajectory and the desired trajectory of the microrobot. The optimal control problem is implemented after time discretization using quadratic programming. In addition to demonstrating generalizability and adaptability, the accuracy of the control algorithm is analyzed for several different types of experiments. The experiments are performed in a workspace with a static surrounding fluid and extended to a workspace with fluid flowing through it. The results suggest that the proposed control algorithm could enable new capabilities for ferrofluidic microrobots, opening up new opportunities for applications in minimally invasive medical procedures, lab-on-a-chip, and microfluidics.
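As a rough illustration of the receding-horizon idea, the Python sketch below condenses an unconstrained linear MPC problem for a discretized LTI model into one least-squares solve. The double-integrator dynamics, horizon, and weights are stand-in assumptions; the droplet model, the state and input constraints, and the QP solver used in the experiments are not reproduced here.

    import numpy as np

    def mpc_step(A, B, x0, x_ref, horizon=20, u_weight=1e-2):
        """One receding-horizon step: min_U ||Theta U + Phi x0 - X_ref||^2 + u_weight ||U||^2."""
        n, m = B.shape
        Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(horizon)])
        Theta = np.zeros((n * horizon, m * horizon))
        for i in range(horizon):
            for j in range(i + 1):
                Theta[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
        rhs = (x_ref - (Phi @ x0).reshape(horizon, n)).ravel()
        lhs = np.vstack([Theta, np.sqrt(u_weight) * np.eye(m * horizon)])
        rhs = np.concatenate([rhs, np.zeros(m * horizon)])
        U = np.linalg.lstsq(lhs, rhs, rcond=None)[0]
        return U[:m]                                      # apply only the first input

    dt = 0.05                                             # toy double integrator standing in
    A = np.array([[1.0, dt], [0.0, 1.0]])                 # for the droplet dynamics
    B = np.array([[0.0], [dt]])
    x = np.array([0.0, 0.0])
    x_ref = np.tile([1.0, 0.0], (20, 1))                  # drive position to 1, velocity to 0
    for _ in range(100):
        x = A @ x + B @ mpc_step(A, B, x, x_ref)
    print("final state:", x)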
Contributors: Skowronek, Elizabeth Olga (Author) / Marvi, Hamidreza (Thesis advisor) / Berman, Spring (Committee member) / Platte, Rodrigo (Committee member) / Xu, Zhe (Committee member) / Lee, Hyunglae (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Balancing temporal shortages of renewable energy with natural gas for the generation of electricity is a challenge for dispatchers. This is compounded by the recent proposal of blending cleanly-produced hydrogen into natural gas networks. To introduce the concepts of gas flow, this thesis begins by linearizing the partial differential equations (PDEs) that govern the flow of natural gas in a single pipe. The solution of the linearized PDEs is used to investigate wave attenuation and characterize critical operating regions where linearization is applicable. The nonlinear PDEs for a single gas are extended to mixtures of gases with the addition of a PDE that governs the conservation of composition. The gas mixture formulation is developed for general gas networks that can inject or withdraw arbitrary time-varying mixtures of gases into or from the network at arbitrarily specified nodes, while being influenced by time-varying control actions of compressor units. The PDE formulation is discretized in space to form a nonlinear control system of ordinary differential equations (ODEs), which is used to prove that homogeneous mixtures are well-behaved and heterogeneous mixtures may be ill-behaved in the sense of monotone-ordering of solutions. Numerical simulations are performed to compute interfaces that delimit monotone and periodic system responses. The ODE system is used as the constraints of an optimal control problem (OCP) to minimize the expended energy of compressors. Moreover, the ODE system for the natural gas network is linearized and used as the constraints of a linear OCP. The OCPs are digitally implemented as optimization problems following the discretization of the time domain. The optimization problems are applied to pipelines and small test networks. Some qualitative and computational applications, including linearization error analysis and transient responses, are also investigated.
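For orientation, the Python sketch below integrates a method-of-lines discretization of the linearized single-pipe equations $p_t = -c^2 q_x$, $q_t = -p_x - \alpha q$ for pressure and flux perturbations about a steady operating state, with a prescribed inlet pressure perturbation and a fixed outlet flux. The wave speed, friction coefficient, pipe length, and boundary data are illustrative assumptions rather than parameters from the thesis, and gas mixtures and compressors are omitted.

    import numpy as np
    from scipy.integrate import solve_ivp

    c, alpha, L, N = 350.0, 0.5, 50e3, 200        # wave speed [m/s], friction [1/s], length [m], nodes
    x = np.linspace(0.0, L, N)
    dx = x[1] - x[0]

    def rhs(t, y):
        p, q = y[:N], y[N:]
        dpdt = -c**2 * np.gradient(q, dx)         # p_t = -c^2 q_x
        dqdt = -np.gradient(p, dx) - alpha * q    # q_t = -p_x - alpha q
        # boundary conditions: sinusoidal inlet pressure perturbation, constant outlet flux
        dpdt[0] = 1e5 * (2 * np.pi / 3600.0) * np.cos(2 * np.pi * t / 3600.0)
        dqdt[-1] = 0.0
        return np.concatenate([dpdt, dqdt])

    y0 = np.zeros(2 * N)                          # start at the steady state (zero perturbation)
    sol = solve_ivp(rhs, (0.0, 3600.0), y0, method="BDF", rtol=1e-6)
    print("outlet pressure perturbation at final time [Pa]:", sol.y[N - 1, -1])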
Contributors: Baker, Luke Silas (Author) / Armbruster, Dieter (Thesis advisor) / Zlotnik, Anatoly (Committee member) / Herty, Michael (Committee member) / Platte, Rodrigo (Committee member) / Milner, Fabio (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
The variable projection method has been developed as a powerful tool for solving separable nonlinear least squares problems. It has proven effective in cases where the underlying model consists of a linear combination of nonlinear functions, such as exponential functions. In this thesis, a modified version of the variable projection method is employed to address a challenging semi-blind deconvolution problem involving mixed Gaussian kernels. The aim is to recover the original signal accurately while estimating the mixed Gaussian kernel utilized during the convolution process. The numerical results obtained through the implementation of the proposed algorithm are presented. These results highlight the method's ability to approximate the true signal successfully. However, accurately estimating the mixed Gaussian kernel remains a challenging task. The implementation details, specifically focusing on constructing a simplified Jacobian for the Gauss-Newton method, are explored. This contribution enhances the understanding and practicality of the approach.
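The projection step itself is easy to demonstrate. The Python sketch below applies variable projection to a toy sum-of-exponentials model rather than the mixed-Gaussian deconvolution problem of the thesis: the linear coefficients are eliminated by an inner least-squares solve, and the outer solver (here with a finite-difference Jacobian instead of the simplified analytic Jacobian discussed above) sees only the nonlinear parameters.

    import numpy as np
    from scipy.optimize import least_squares

    def design(t, alpha):
        return np.exp(-np.outer(t, alpha))            # columns phi_j(t) = exp(-alpha_j t)

    def projected_residual(alpha, t, y):
        Phi = design(t, alpha)
        c, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # inner linear least-squares solve
        return Phi @ c - y                            # residual after projecting out c

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 5.0, 200)
    alpha_true, c_true = np.array([0.7, 3.0]), np.array([2.0, -1.0])
    y = design(t, alpha_true) @ c_true + 0.01 * rng.standard_normal(t.size)

    fit = least_squares(projected_residual, x0=[0.3, 1.5], args=(t, y))
    c_hat, *_ = np.linalg.lstsq(design(t, fit.x), y, rcond=None)
    print("alpha estimate:", fit.x, " c estimate:", c_hat)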
Contributors: Dworaczyk, Jordan Taylor (Author) / Espanol, Malena (Thesis advisor) / Welfert, Bruno (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
During the inversion of discrete linear systems, noise in data can be amplified and result in meaningless solutions. To combat this effect, characteristics of solutions that are considered desirable are mathematically implemented during inversion. This is a process called regularization. The influence of the provided prior information is controlled by the introduction of non-negative regularization parameter(s). Many methods are available for both the selection of appropriate regularization parameters and the inversion of the discrete linear system. Generally, for a single problem there is just one regularization parameter. Here, a learning approach is considered to identify a single regularization parameter based on the use of multiple data sets described by a linear system with a common model matrix. The situation with multiple regularization parameters that weight different spectral components of the solution is considered as well. To obtain these multiple parameters, standard methods are modified for identifying the optimal regularization parameters. Modifications of the unbiased predictive risk estimation, generalized cross validation, and the discrepancy principle are derived for finding spectral windowing regularization parameters. These estimators are extended for finding the regularization parameters when multiple data sets with common system matrices are available. Statistical analysis of these estimators is conducted for real and complex transformations of data. It is demonstrated that spectral windowing regularization parameters can be learned from these new estimators applied for multiple data and with multiple windows. Numerical experiments evaluating these new methods demonstrate that these modified methods, which do not require the use of true data for learning regularization parameters, are effective and efficient, and perform comparably to a supervised learning method based on estimating the parameters using true data. The theoretical developments are validated for one and two dimensional image deblurring. It is verified that the obtained estimates of spectral windowing regularization parameters can be used effectively on validation data sets that are separate from the training data, and do not require known data.
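As a single-parameter baseline for the spectral-windowing estimators described above, the Python sketch below picks one Tikhonov regularization parameter by generalized cross validation (GCV) using the SVD of a toy deblurring matrix. The blur kernel, noise level, and search interval are illustrative assumptions, and the windowed, multiple-data-set estimators of the thesis are not reproduced.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def tikhonov_gcv(A, b):
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        beta = U.T @ b
        m = b.size

        def gcv(log_lam):
            lam = np.exp(log_lam)
            f = s**2 / (s**2 + lam**2)                # Tikhonov filter factors
            resid2 = np.sum(((1.0 - f) * beta) ** 2)  # ||(I - A A_lam^+) b||^2
            return resid2 / (m - np.sum(f)) ** 2      # GCV functional

        lam = np.exp(minimize_scalar(gcv, bounds=(-30.0, 5.0), method="bounded").x)
        return Vt.T @ (s / (s**2 + lam**2) * beta), lam

    n = 100                                           # toy 1D Gaussian-blur problem
    t = np.linspace(-1.0, 1.0, n)
    A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.01)
    A /= A.sum(axis=1, keepdims=True)
    x_true = (np.abs(t) < 0.4).astype(float)
    b = A @ x_true + 1e-3 * np.random.default_rng(1).standard_normal(n)
    x_hat, lam = tikhonov_gcv(A, b)
    print("lambda:", lam, " relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))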
Contributors: Byrne, Michael John (Author) / Renaut, Rosemary (Thesis advisor) / Cochran, Douglas (Committee member) / Espanol, Malena (Committee member) / Jackiewicz, Zdzislaw (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
The main objective of this work is to study novel stochastic modeling applications to cybersecurity aspects across three dimensions: loss, attack, and detection. First, motivated by recent spatial stochastic models with cyber insurance applications, the first and second moments of the size of a typical cluster of bond percolation on finite graphs are studied. More precisely, given a finite graph in which edges are independently open with the same probability $p$ and a vertex $x$ chosen uniformly at random, the goal is to find the first and second moments of the number of vertices in the cluster of open edges containing $x$. Exact expressions for the first and second moments of the size distribution of a bond percolation cluster are derived on essential building blocks of hybrid graphs: the ring, the path, the random star, and regular graphs. Upper bounds for the moments are obtained by using a coupling argument to compare the percolation model with branching processes when the graph is a random rooted tree with a given offspring distribution and a given finite radius. Second, the Petri net modeling framework for performance analysis is well established, and its extensions provide enough flexibility to examine, via simulation, the behavior of a permissioned blockchain platform under an ongoing cyberattack. The relationship between system performance and cyberattack configuration is analyzed: the simulations vary the blockchain's parameters and network structure, revealing the factors that contribute positively or negatively to a Sybil attack through their impact on system performance. Lastly, the ability of denoising diffusion probabilistic models (DDPMs) to augment synthetic tabular data is studied. DDPMs surpass generative adversarial networks in improving computer vision classification tasks and in image generation (for example, stable diffusion). Recent research and open-source implementations point to a strong quality of synthetic tabular data generation for classification and regression tasks. However, the literature on tabular data augmentation with DDPMs for classification remains sparse, and cyber datasets commonly have highly unbalanced class distributions that complicate training. Synthetic tabular data augmentation is therefore investigated on cyber datasets, and performance on well-known machine-learning classification metrics improves with augmentation and balancing.
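As a quick numerical companion to the cluster-size moments discussed above, the Python sketch below estimates the first and second moments of the open cluster containing a uniformly chosen vertex under bond percolation on the ring $C_n$ by Monte Carlo. The values of $n$, $p$, and the number of trials are illustrative assumptions, and the exact expressions derived in the dissertation are not reproduced.

    import numpy as np

    def cluster_size_on_ring(n, p, rng):
        """Size of the open cluster containing vertex 0 on the cycle C_n
        (by symmetry, vertex 0 stands in for a uniformly chosen vertex)."""
        edge_open = rng.random(n) < p      # edge i joins vertices i and (i + 1) mod n
        if edge_open.all():
            return n
        size, i, j = 1, 0, n - 1
        while edge_open[i]:                # walk right from vertex 0
            size += 1
            i += 1
        while edge_open[j]:                # walk left from vertex 0
            size += 1
            j -= 1
        return size

    rng = np.random.default_rng(0)
    sizes = np.array([cluster_size_on_ring(20, 0.5, rng) for _ in range(200_000)])
    print("E[|C|] ~", sizes.mean(), "  E[|C|^2] ~", (sizes**2).mean())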
Contributors: La Salle, Axel (Author) / Lanchier, Nicolas (Thesis advisor) / Jevtic, Petar (Thesis advisor) / Motsch, Sebastien (Committee member) / Boscovic, Dragan (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Signaling cascades transduce signals received on the cell membrane to the nucleus. While noise filtering, ultra-sensitive switches, and signal amplification have all been shown to be features of such signaling cascades, it is not understood why cascades typically show three or four layers. Using singular perturbation theory, Michaelis-Menten type equations are derived for open enzymatic systems. When these equations are organized into a cascade, it is demonstrated that the output signal as a function of time becomes sigmoidal with the addition of more layers. Furthermore, it is shown that the activation time will speed up to a point, after which more layers become superfluous. It is shown that three layers create a reliable sigmoidal response progress curve from a wide variety of time-dependent signaling inputs arriving at the cell membrane, suggesting that natural selection may have favored signaling cascades as a parsimonious solution to the problem of generating switch-like behavior in a noisy environment.
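A minimal numerical illustration of the layering effect: in the Python sketch below each layer activates the next through Michaelis-Menten-type kinetics, and the time at which the output first reaches half-maximum is reported as the number of layers grows. The rate constants, saturation constant, and input signal are illustrative assumptions rather than the equations derived in the thesis.

    import numpy as np
    from scipy.integrate import solve_ivp

    def cascade_rhs(t, x, u, a=5.0, b=1.0, K=0.1):
        """x[i] is the active fraction in layer i; layer 0 is driven by the input u(t)."""
        upstream = np.concatenate([[u(t)], x[:-1]])
        return a * upstream * (1.0 - x) / (K + 1.0 - x) - b * x / (K + x)

    u = lambda t: 0.5 * (1.0 + np.tanh(5.0 * (t - 1.0)))   # smoothed step arriving at the membrane
    tt = np.linspace(0.0, 10.0, 2000)
    for layers in (1, 3, 5):
        sol = solve_ivp(cascade_rhs, (0.0, 10.0), np.zeros(layers), args=(u,),
                        dense_output=True, rtol=1e-8)
        out = sol.sol(tt)[-1]                               # response of the final layer
        t_half = tt[np.argmax(out >= 0.5)]
        print(f"{layers} layer(s): output reaches half-maximum near t = {t_half:.2f}")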
Contributors: Young, Jonathan Trinity (Author) / Armbruster, Dieter (Thesis advisor) / Platte, Rodrigo (Committee member) / Nagy, John (Committee member) / Baer, Steven (Committee member) / Taylor, Jesse (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
High-order methods are known for their accuracy and computational performance when applied to solving partial differential equations and have widespread use in representing images compactly. Nonetheless, high-order methods have difficulty representing functions containing discontinuities or functions having slow spectral decay in the chosen basis. Certain sensing techniques such as MRI and SAR provide data in terms of Fourier coefficients, and thus prescribe a natural high-order basis. The field of compressed sensing has introduced a set of techniques based on $\ell^1$ regularization that promote sparsity and facilitate working with functions having discontinuities. In this dissertation, high-order methods and $\ell^1$ regularization are used to address three problems: reconstructing piecewise smooth functions from sparse and noisy Fourier data, recovering edge locations in piecewise smooth functions from sparse and noisy Fourier data, and reducing time-stepping constraints when numerically solving certain time-dependent hyperbolic partial differential equations.
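The role of $\ell^1$ regularization can be seen in a stripped-down setting. The Python sketch below recovers a sparse vector from noisy, subsampled Fourier measurements with the iterative soft-thresholding algorithm (ISTA). In the dissertation the unknowns are piecewise smooth functions and their edges rather than sparse vectors, so the problem sizes, sampling pattern, and penalty weight here are purely illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 80, 8
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    rows = rng.choice(n, m, replace=False)                  # sampled frequencies
    F = np.fft.fft(np.eye(n), axis=0)[rows] / np.sqrt(n)    # subsampled, normalized DFT matrix
    b = F @ x_true + 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

    lam, step = 0.01, 1.0                                   # ||F||_2 <= 1, so a unit step is safe
    x = np.zeros(n, dtype=complex)
    for _ in range(500):                                    # ISTA: gradient step + soft threshold
        g = x - step * (F.conj().T @ (F @ x - b))
        x = np.exp(1j * np.angle(g)) * np.maximum(np.abs(g) - step * lam, 0.0)
    print("relative error:", np.linalg.norm(x.real - x_true) / np.linalg.norm(x_true))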
Contributors: Denker, Dennis (Author) / Gelb, Anne (Thesis advisor) / Archibald, Richard (Committee member) / Armbruster, Dieter (Committee member) / Boggess, Albert (Committee member) / Platte, Rodrigo (Committee member) / Saders, Toby (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The main objective of mathematical modeling is to connect mathematics with other scientific fields. Developing predictive models helps us understand the behavior of biological systems, and by testing models one can relate mathematics to real-world experiments. To validate predictions numerically, one has to compare them with experimental data sets. Mathematical models can be split into two groups: microscopic and macroscopic. Microscopic models describe the motion of so-called agents (e.g., cells, ants) that interact with their surrounding neighbors. At large scales, the interactions among these agents give rise to special structures such as flocking and swarming, and one of the key questions is to relate the particular interactions among agents to the overall emerging structures. Macroscopic models are designed precisely to describe the evolution of such large structures. They are usually given as partial differential equations describing the time evolution of a density distribution (instead of tracking each individual agent). For instance, reaction-diffusion equations are used to model glioma cells and are being used to predict tumor growth. This dissertation aims at developing such a framework to better understand the complex behavior of foraging ants and glioma cells.
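As a concrete instance of the macroscopic viewpoint, the Python sketch below advances a one-dimensional reaction-diffusion equation of Fisher-KPP type, $u_t = D u_{xx} + \rho u(1-u)$, of the kind used for glioma cell density. The diffusivity, growth rate, domain, and initial condition are illustrative assumptions, not parameters from the dissertation.

    import numpy as np

    D, rho = 0.1, 1.0                                   # diffusivity and growth rate (illustrative)
    L, n, dt, steps = 20.0, 400, 0.001, 10_000
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    u = np.exp(-((x - L / 2.0) ** 2))                   # localized initial cell density

    for _ in range(steps):                              # explicit Euler in time, central differences in space
        lap = np.empty_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        lap[0] = 2.0 * (u[1] - u[0]) / dx**2            # zero-flux boundaries
        lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
        u = u + dt * (D * lap + rho * u * (1.0 - u))

    print("invaded length (u > 0.5):", dx * np.count_nonzero(u > 0.5))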
Contributors: Jamous, Sara Sami (Author) / Motsch, Sebastien (Thesis advisor) / Armbruster, Dieter (Committee member) / Camacho, Erika (Committee member) / Moustaoui, Mohamed (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
This dissertation develops a second order accurate approximation to the magnetic resonance (MR) signal model used in the PARSE (Parameter Assessment by Retrieval from Single Encoding) method to recover information about the reciprocal of the spin-spin relaxation time function (R2*) and frequency offset function (w) in addition to the typical steady-state transverse magnetization (M) from single-shot magnetic resonance imaging (MRI) scans. Sparse regularization on an approximation to the edge map is used to solve the associated inverse problem. Several studies are carried out for both one- and two-dimensional test problems, including comparisons to the first order approximation method, as well as the first order approximation method with joint sparsity across multiple time windows enforced. The second order accurate model provides increased accuracy while reducing the amount of data required to reconstruct an image when compared to piecewise constant in time models. A key component of the proposed technique is the use of fast transforms for the forward evaluation. It is determined that the second order model is capable of providing accurate single-shot MRI reconstructions, but requires an adequate coverage of k-space to do so. Alternative data sampling schemes are investigated in an attempt to improve reconstruction with single-shot data, as current trajectories do not provide ideal k-space coverage for the proposed method.
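The structure of the underlying signal model can be sketched in a few lines of Python: in a single-shot acquisition each k-space sample is collected at its own time, so the decay R2*, the frequency offset w, and the magnetization M all enter the data. The one-dimensional discretization, parameter values, and sampling times below are illustrative assumptions, and the second order approximation and sparse-regularized inversion developed in the dissertation are not reproduced.

    import numpy as np

    n = 128
    x = np.linspace(-0.5, 0.5, n, endpoint=False)
    dx = x[1] - x[0]

    M = (np.abs(x) < 0.25).astype(float)                  # transverse magnetization magnitude
    R2s = 30.0 * (np.abs(x) < 0.25)                       # reciprocal spin-spin relaxation time [1/s]
    w = 10.0 * x                                          # frequency offset [Hz]

    k = np.arange(-n // 2, n // 2)                        # sampled frequencies
    t = 5e-4 * (np.arange(n) + 1)                         # acquisition time of each k-space sample [s]

    # s(k_j) = sum_m M_m exp(-(R2s_m + 2 pi i w_m) t_j) exp(-2 pi i k_j x_m) dx
    phase = np.exp(-2j * np.pi * np.outer(k, x))
    decay = np.exp(-(R2s[None, :] + 2j * np.pi * w[None, :]) * t[:, None])
    s = (phase * decay) @ M * dx
    print("signal magnitude at k = 0:", np.abs(s[n // 2]))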
Contributors: Jesse, Aaron Mitchel (Author) / Platte, Rodrigo (Thesis advisor) / Gelb, Anne (Committee member) / Kostelich, Eric (Committee member) / Mittelmann, Hans (Committee member) / Moustaoui, Mohamed (Committee member) / Arizona State University (Publisher)
Created: 2019