Matching Items (6)
152694-Thumbnail Image.png
Description
In the field of infectious disease epidemiology, the assessment of model robustness outcomes plays a significant role in the identification, reformulation, and evaluation of preparedness strategies aimed at limiting the impact of catastrophic events (pandemics or the deliberate release of biological agents), in the management of disease prevention strategies, and in the identification and evaluation of control or mitigation measures. The research in this dissertation focuses on the comparison and assessment of the role of exponentially distributed waiting times versus generalized non-exponential parametric distributions for the infectious period on the quantitative and qualitative outcomes generated by Susceptible-Infectious-Removed (SIR) models. Specifically, Gamma distributed infectious periods are considered in the three research projects, following the applications found in (Bailey 1964, Anderson 1980, Wearing 2005, Feng 2007, Yan 2008, Lloyd 2009, Vergu 2010). i) The first project focuses on the influence of input model parameters, such as the transmission rate and the mean and variance of the Gamma distributed infectious period, on disease prevalence, the peak epidemic size and its timing, the final epidemic size, the epidemic duration, and the basic reproduction number. Global uncertainty and sensitivity analyses are carried out using a deterministic SIR model. The quantitative effects and qualitative relations between input model parameters and outcome variables are established using Latin Hypercube Sampling (LHS) together with partial rank correlation coefficient (PRCC) and Spearman rank correlation coefficient (RCC) sensitivity indices.
We learned that for relatively low (R0 close to one) to high (mean R0 equal to 15) transmissibility, the variance of the Gamma distribution for the infectious period, an input parameter of the deterministic age-of-infection SIR model, is key (statistically significant) to the predictability of epidemiological variables such as the epidemic duration and the peak size and timing of the prevalence of infectious individuals; therefore, for the predictability of these variables, it is preferable to use a nonlinear system of Volterra integral equations (VIEs) rather than a nonlinear system of ordinary differential equations (ODEs). The predictability of epidemiological variables such as the final epidemic size and the basic reproduction number is unaffected by (or independent of) the variance of the Gamma distribution for the infectious period, and therefore the choice of nonlinear system for the description of the SIR model (VIEs or ODEs) is irrelevant; although, for practical purposes, with the aim of lowering the complexity and the number of operations in the numerical methods, a nonlinear system of ordinary differential equations is preferred. The main contribution lies in the development of a model-based decision tool that helps determine when SIR models given in terms of Volterra integral equations are equivalent to, or better suited than, SIR models that only consider exponentially distributed infectious periods. ii) The second project addresses the question of whether or not there is sufficient evidence to conclude that two empirical distributions for a single epidemiological outcome, one generated using a stochastic SIR model with exponentially distributed infectious periods and the other with non-exponentially distributed infectious periods, are statistically dissimilar. The stochastic formulations are modeled via a continuous-time Markov chain model. The statistical hypothesis test is conducted using the non-parametric Kolmogorov-Smirnov test.
We found evidence that, for low to moderate transmissibility, all empirical distribution pairs (generated from exponential and non-exponential distributions) for each of the epidemiological quantities considered are statistically dissimilar. The research in this project helps determine whether weakening the exponential distribution assumption must be considered in the estimation of probabilities of events defined from the empirical distribution of specific random variables. iii) The third project involves the assessment of the effect of exponentially distributed infectious periods on input parameter estimates and the associated outcome variable predictions. Quantities unaffected by the use of exponentially distributed infectious periods within low transmissibility scenarios include the prevalence peak time, final epidemic size, epidemic duration, and basic reproduction number; for high transmissibility scenarios, only the prevalence peak time and final epidemic size are unaffected. An application designed to determine from incidence data whether there is sufficient statistical evidence to conclude that the infectious period distribution should not be modeled by an exponential distribution is developed, along with a method for estimating explicitly specified non-exponential parametric probability density functions for the infectious period from epidemiological data. The methodologies presented in this dissertation may be applicable to models where waiting times are used to model transitions between stages, a process that is common in the study of life-history dynamics of many ecological systems.
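The hypothesis test in the second project can be illustrated with a self-contained sketch (not the dissertation's actual CTMC simulations): two samples stand in for a single epidemiological outcome, one under an exponentially distributed infectious period and one under a Gamma distribution with the same hypothetical mean, and a two-sample Kolmogorov-Smirnov test checks for dissimilarity.

```python
import math
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: sup_x |F_a(x) - F_b(x)|."""
    a, b = sorted(a), sorted(b)
    ia = ib = 0
    d = 0.0
    while ia < len(a) and ib < len(b):
        if a[ia] <= b[ib]:
            ia += 1
        else:
            ib += 1
        d = max(d, abs(ia / len(a) - ib / len(b)))
    return d

random.seed(1)
mean_infectious = 5.0  # hypothetical mean infectious period, in days
n = 4000
# Exponential infectious periods vs Gamma (shape 4) with the same mean.
exp_sample = [random.expovariate(1.0 / mean_infectious) for _ in range(n)]
gam_sample = [random.gammavariate(4.0, mean_infectious / 4.0) for _ in range(n)]

d = ks_statistic(exp_sample, gam_sample)
# Large-sample two-sample critical value at the 5% level.
critical = 1.358 * math.sqrt((n + n) / (n * n))
print(d, critical, d > critical)
```

With samples this large and shapes this different, the statistic comfortably exceeds the critical value, echoing the finding of statistical dissimilarity between exponential and non-exponential outcome distributions.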
ContributorsMorales Butler, Emmanuel J (Author) / Castillo-Chavez, Carlos (Thesis advisor) / Aparicio, Juan P (Thesis advisor) / Camacho, Erika T (Committee member) / Kang, Yun (Committee member) / Arizona State University (Publisher)
Created2014
162238-Thumbnail Image.png
Description
Understanding the evolution of opinions is a delicate task, as the dynamics of how individuals change their opinions through interactions with others are unclear.
ContributorsWeber, Dylan (Author) / Motsch, Sebastien (Thesis advisor) / Lanchier, Nicolas (Committee member) / Platte, Rodrigo (Committee member) / Armbruster, Dieter (Committee member) / Fricks, John (Committee member) / Arizona State University (Publisher)
Created2021
Description

The Mack model and the Bootstrap Over-Dispersed Poisson model have long been the primary modeling tools used by actuaries and insurers to forecast losses. With the emergence of faster computational technology, new and novel methods to calculate and simulate data are more applicable than ever before. This paper explores the use of various Bayesian Markov chain Monte Carlo (MCMC) models recommended by Glenn Meyers and compares the results to the simulated data from the Mack model and the Bootstrap Over-Dispersed Poisson model. Although the Mack model and the Bootstrap Over-Dispersed Poisson model are accurate to a certain degree, newer models could be developed that may yield better results. However, a general concern is that no single model is able to reflect underlying information that only an individual with intimate knowledge of the data would know. Thus, the purpose of this paper is not to identify one model that works for all applicable data, but to propose various models, each with pros and cons, and to suggest ways they can be improved upon.
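For context, the deterministic chain-ladder point estimate that underlies the Mack model can be sketched in a few lines; the cumulative loss triangle below is invented for illustration and is not from the thesis.

```python
# Hypothetical cumulative loss triangle: rows = accident years,
# columns = development periods (later periods not yet observed).
triangle = [
    [100.0, 150.0, 165.0, 170.0],
    [110.0, 168.0, 185.0],
    [120.0, 180.0],
    [130.0],
]

n_dev = len(triangle[0])
# Volume-weighted development factors f_k = sum_i C[i][k+1] / sum_i C[i][k],
# using only accident years observed in both periods.
factors = []
for k in range(n_dev - 1):
    rows = [r for r in triangle if len(r) > k + 1]
    factors.append(sum(r[k + 1] for r in rows) / sum(r[k] for r in rows))

# Project each accident year to ultimate with the remaining factors.
ultimates = []
for r in triangle:
    u = r[-1]
    for k in range(len(r) - 1, n_dev - 1):
        u *= factors[k]
    ultimates.append(u)

# Reserve = projected ultimate minus latest observed cumulative loss.
reserves = [u - r[-1] for u, r in zip(ultimates, triangle)]
print(factors, ultimates, sum(reserves))
```

The Mack model adds distribution-free standard errors around exactly these point estimates, and the bootstrap ODP model resamples residuals of the same chain-ladder fit, which is why this projection is the natural common baseline.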

ContributorsZhang, Zhaobo (Author) / Zicarelli, John (Thesis director) / Milovanovic, Jelena (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created2023-05
189358-Thumbnail Image.png
Description
The main objective of this work is to study novel stochastic modeling applications to cybersecurity aspects across three dimensions: loss, attack, and detection. First, motivated by recent spatial stochastic models with cyber insurance applications, the first and second moments of the size of a typical cluster of bond percolation on finite graphs are studied. More precisely, given a finite graph where edges are independently open with the same probability $p$ and a vertex $x$ chosen uniformly at random, the goal is to find the first and second moments of the number of vertices in the cluster of open edges containing $x$. Exact expressions for the first and second moments of the size distribution of a bond percolation cluster are derived on essential building blocks of hybrid graphs: the ring, the path, the random star, and regular graphs. Upper bounds for the moments are obtained via a coupling argument comparing the percolation model with branching processes when the graph is a random rooted tree with a given offspring distribution and a given finite radius. Second, the Petri net modeling framework for performance analysis is well established, and its extensions provide enough flexibility to examine, via simulation, the behavior of a permissioned blockchain platform in the context of an ongoing cyberattack. The relationship between system performance and cyberattack configuration is analyzed. The simulations vary the blockchain's parameters and network structure, revealing the factors that contribute positively or negatively to a Sybil attack through their impact on system performance. Lastly, the ability of denoising diffusion probabilistic models (DDPMs) to augment synthetic tabular data is studied. DDPMs surpass generative adversarial networks in improving computer vision classification tasks and image generation, for example, Stable Diffusion.
Recent research and open-source implementations point to strong quality of synthetic tabular data generation for classification and regression tasks. Unfortunately, the literature on tabular data augmentation with DDPMs for classification remains sparse. Further, cyber datasets commonly have highly unbalanced distributions, complicating training. Synthetic tabular data augmentation is therefore investigated with cyber datasets, and the performance of well-known machine learning classification metrics improves with augmentation and balancing.
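For intuition on the percolation results, the first and second moments of the cluster size on a small ring can be computed exactly by brute-force enumeration over all edge configurations. This is feasible only for small graphs and is offered as an illustrative check, not the dissertation's closed-form derivations.

```python
from itertools import combinations

def cluster_moments_ring(n, p):
    """Exact E[|C|] and E[|C|^2] for the open cluster containing a fixed
    vertex of the cycle C_n under bond percolation(p).  By symmetry the
    fixed vertex is equivalent to one chosen uniformly at random."""
    edges = [(i, (i + 1) % n) for i in range(n)]
    m1 = m2 = 0.0
    for k in range(n + 1):
        for open_edges in combinations(edges, k):
            weight = p ** k * (1 - p) ** (n - k)
            # Depth-first search from vertex 0 over the open edges.
            adj = {v: [] for v in range(n)}
            for u, v in open_edges:
                adj[u].append(v)
                adj[v].append(u)
            seen, stack = {0}, [0]
            while stack:
                u = stack.pop()
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        stack.append(v)
            size = len(seen)
            m1 += weight * size
            m2 += weight * size ** 2
    return m1, m2

m1, m2 = cluster_moments_ring(6, 0.5)
print(m1, m2, m2 - m1 ** 2)  # mean, second moment, variance of cluster size
```

Enumeration costs 2^n configurations, which is exactly why exact expressions on rings, paths, stars, and regular graphs, as derived in the dissertation, are valuable.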
ContributorsLa Salle, Axel (Author) / Lanchier, Nicolas (Thesis advisor) / Jevtic, Petar (Thesis advisor) / Motsch, Sebastien (Committee member) / Boscovic, Dragan (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created2023
161246-Thumbnail Image.png
Description
With demand for increased efficiency and a smaller carbon footprint, power system operators are striving to improve their modeling, down to the individual consumer device, paving the way for higher production and consumption efficiencies and increased renewable generation without sacrificing system reliability. This dissertation explores two lines of research. The first part looks at stochastic continuous-time power system scheduling, where the goal is to better capture system ramping characteristics in order to address increased variability and uncertainty. The second part starts by developing aggregate population models for residential Demand Response (DR), focusing on storage devices, Electric Vehicles (EVs), Deferrable Appliances (DAs), and Thermostatically Controlled Loads (TCLs). The characteristics of such population aggregates are then explored, such as their resemblance to energy storage devices, with particular attention given to how such aggregate models can be considered approximately convex even if the individual resource models are not. Armed with an approximately convex aggregate model for DR, its interface with present-day energy markets is explored, including directions the market could take to better accommodate such devices, for the benefit not only of the prosumer but of the system as a whole.
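As a toy sketch of the aggregation idea (not the dissertation's models, and with made-up device parameters): a fleet of heterogeneous storage-like resources can be summarized as a single "virtual battery" whose limits are sums of the per-device limits. Exact aggregation becomes harder once intertemporal energy dynamics and losses enter, which is where the approximate-convexity analysis becomes relevant.

```python
from dataclasses import dataclass

@dataclass
class Battery:
    p_max: float  # max charge/discharge power (kW), hypothetical
    e_max: float  # usable energy capacity (kWh), hypothetical
    e0: float     # currently stored energy (kWh), hypothetical

def aggregate(fleet):
    """Sum per-device limits into one virtual battery, an outer
    approximation of the exact aggregate flexibility set."""
    return Battery(
        p_max=sum(b.p_max for b in fleet),
        e_max=sum(b.e_max for b in fleet),
        e0=sum(b.e0 for b in fleet),
    )

fleet = [Battery(5.0, 13.5, 6.0), Battery(7.0, 10.0, 2.5), Battery(3.0, 6.0, 6.0)]
vb = aggregate(fleet)
print(vb)  # Battery(p_max=15.0, e_max=29.5, e0=14.5)
```

A market operator can then dispatch the single virtual battery instead of thousands of devices, at the cost of the approximation error the dissertation analyzes.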
ContributorsHreinsson, Kári (Author) / Scaglione, Anna (Thesis advisor) / Hedman, Kory (Committee member) / Zhang, Junshan (Committee member) / Alizadeh, Mahnoosh (Committee member) / Arizona State University (Publisher)
Created2020
161250-Thumbnail Image.png
Description
Inside cells, axonal and dendritic transport by motor proteins is a process responsible for supplying cargo, such as vesicles and organelles, to support neuronal function. Motor proteins achieve transport through a cycle of chemical and mechanical processes. Particle tracking experiments are used to study this intracellular cargo transport by recording multi-dimensional, discrete cargo position trajectories over time. However, due to experimental limitations, much of the mechanochemical process cannot be directly observed, making mathematical modeling and statistical inference essential tools for identifying the underlying mechanisms. The cargo movement during transport is modeled using a switching stochastic differential equation framework that involves classification into one of three proposed hidden regimes, each characterized by different levels of velocity and stochasticity. The equations are presented as a state-space model with Markovian properties. Through a stochastic expectation-maximization algorithm, statistical inference can be made based on the observed trajectory. Regime predictions and particle location predictions are calculated through an auxiliary particle filter and particle smoother, and parameters are then estimated through maximum likelihood. Diagnostics are proposed that can assess model performance and can therefore also serve as model selection criteria. Model selection is used to find the most accurate regime models and the optimal number of regimes for a given motor-cargo system. A method for incorporating a second positional dimension is also introduced. These methods are tested on both simulated data and different types of experimental data.
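A stripped-down, discrete-time analogue of the filtering step may help fix ideas (purely illustrative; the dissertation works with continuous-time switching SDEs, an auxiliary particle filter, and a smoother). A hidden three-regime Markov chain with hypothetical drift values sets the velocity of a noisy position, and a bootstrap particle filter recovers filtered regime probabilities.

```python
import math
import random

random.seed(7)

drifts = (-1.0, 0.0, 1.5)  # hypothetical per-regime velocities
sigma = 0.2                # hypothetical process noise scale
tau = 0.1                  # hypothetical measurement noise scale
stay = 0.95                # probability of remaining in the current regime

def step_regime(r):
    """One transition of the hidden regime Markov chain."""
    if random.random() < stay:
        return r
    return random.choice([s for s in range(3) if s != r])

# Simulate a noisy cargo trajectory.
T, x, regime = 60, 0.0, 2
obs = []
for _ in range(T):
    regime = step_regime(regime)
    x += drifts[regime] + sigma * random.gauss(0.0, 1.0)
    obs.append(x + tau * random.gauss(0.0, 1.0))

# Bootstrap particle filter over (regime, position) pairs.
N = 500
particles = [(random.randrange(3), 0.0) for _ in range(N)]
for y in obs:
    moved, weights = [], []
    for r, px in particles:
        r = step_regime(r)                                    # propagate regime
        px = px + drifts[r] + sigma * random.gauss(0.0, 1.0)  # propagate position
        moved.append((r, px))
        # Gaussian observation likelihood; tiny floor guards against underflow.
        weights.append(math.exp(-0.5 * ((y - px) / tau) ** 2) + 1e-300)
    total = sum(weights)
    # Filtered regime probabilities at this time step.
    post = [sum(w for (r, _), w in zip(moved, weights) if r == k) / total
            for k in range(3)]
    particles = random.choices(moved, weights=weights, k=N)   # resample

print("final true regime:", regime, "filtered probabilities:", post)
```

The stochastic EM algorithm in the dissertation wraps a filter and smoother of this flavor inside an outer loop that re-estimates the drift, noise, and switching parameters by maximum likelihood.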
ContributorsCrow, Lauren (Author) / Fricks, John (Thesis advisor) / McKinley, Scott (Committee member) / Hahn, Paul R (Committee member) / Reiser, Mark (Committee member) / Cheng, Dan (Committee member) / Arizona State University (Publisher)
Created2021