Matching Items (40)
Description
This thesis explores and explains a stochastic model in Evolutionary Game Theory introduced by Dr. Nicolas Lanchier. The model is a continuous-time Markov chain that maps the two-dimensional lattice into the strategy space {1,2}. At every vertex in the grid there is exactly one player whose payoff is determined by its strategy and the strategies of its neighbors. Update times are exponential random variables with parameters equal to the absolute value of the respective cells' payoffs. The model is connected to an ordinary differential equation known as the replicator equation. This differential equation is analyzed to find its fixed points and their stability. Then, by simulating the model using Java code and observing the change in dynamics that results from varying the parameters of the payoff matrix, the stochastic model's phase diagram is compared to the replicator equation's phase diagram to see what effect local interactions and stochastic update times have on the evolutionary stability of strategies. It is revealed that, in the stochastic model, altruistic strategies can be evolutionarily stable, and selfish strategies are only evolutionarily stable if they are more selfish than their opposing strategy. This contrasts with the replicator equation, where selfishness is always evolutionarily stable and altruism never is.
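The two-strategy replicator dynamics compared against the stochastic model can be sketched numerically. The following is a minimal Euler-step sketch; the payoff matrix A below is a hypothetical example, not a parameter set from the thesis.

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of the two-strategy replicator equation:
    dx/dt = x (1 - x) (f1(x) - f2(x)), with fi the mean payoffs."""
    f1 = A[0, 0] * x + A[0, 1] * (1 - x)   # mean payoff of strategy 1
    f2 = A[1, 0] * x + A[1, 1] * (1 - x)   # mean payoff of strategy 2
    return x + dt * x * (1 - x) * (f1 - f2)

# Hypothetical payoff matrix; strategy 2 strictly dominates strategy 1 here.
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])
x = 0.9                                     # initial fraction playing strategy 1
for _ in range(10_000):
    x = replicator_step(x, A)
print(round(x, 3))                          # 0.0 -- the dominated strategy dies out
```

Varying the entries of A and recording which strategy survives reproduces, for the mean-field equation, the kind of phase-diagram exploration described in the abstract.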
Contributors: Wehn, Austin Brent (Author) / Lanchier, Nicolas (Thesis director) / Kang, Yun (Committee member) / Motsch, Sebastien (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of International Letters and Cultures (Contributor)
Created: 2013-12
Description
The objective of this paper is to find and describe trends in the fast Fourier transformed accelerometer data that can be used to predict the mechanical failure of large vacuum pumps used in industrial settings, such as providing drinking water. Using three-dimensional plots of the data, this paper suggests how a model can be developed to predict the mechanical failure of vacuum pumps.
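As a rough illustration of the approach, a fast Fourier transform exposes the dominant frequencies in a vibration signal. The signal below is synthetic, standing in for real accelerometer data; the 120 Hz component plays the role of a hypothetical fault tone.

```python
import numpy as np

# Synthetic stand-in for accelerometer data: a 50 Hz machine vibration plus a
# weaker 120 Hz component (a hypothetical fault tone), sampled at 1 kHz.
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(signal))           # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)   # frequency axis in Hz
print(freqs[np.argmax(spectrum)])                # 50.0 -- the dominant frequency
```

Tracking how such spectral peaks grow or drift over time is one way trends of the kind described above could feed a failure-prediction model.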
Contributors: Halver, Grant (Author) / Taylor, Tom (Thesis director) / Tsakalis, Konstantinos (Committee member) / Fricks, John (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
There are multiple mathematical models for alignment of individuals moving within a group. In a first class of models, individuals tend to relax their velocity toward the average velocity of other nearby neighbors. These models are motivated by the flocking behavior exhibited by birds. Another class of models has been introduced to describe rapid changes of individual velocity, referred to as jumps, which better describe the behavior of smaller agents (e.g. locusts, ants). In this second class of models, individuals randomly choose to align with another nearby individual, matching velocities. There are several open questions concerning these two types of behavior: which behavior is the most efficient for creating a flock (i.e. for converging toward the same velocity)? Will flocking still emerge when the number of individuals approaches infinity? Analysis of these models shows that, in the homogeneous case where all individuals are capable of interacting with each other, the variance of the velocities in both the jump model and the relaxation model decays to 0 exponentially for any nonzero number of individuals. This implies the individuals in the system converge to an absorbing state where all individuals share the same velocity; therefore individuals converge to a flock even as the number of individuals approaches infinity. Further analysis focused on the case where interactions between individuals were determined by an adjacency matrix. The second eigenvalue of the Laplacian of this adjacency matrix (denoted λ2) provides a lower bound on the rate of decay of the variance. When λ2 is nonzero, the system is said to converge to a flock almost surely. Furthermore, when the adjacency matrix is generated by a random graph, such that connections between individuals are formed with probability p (where 0 < p < 1), λ2 is almost surely nonzero when p > 1/N. λ2 is a good estimator of the rate of convergence of the system, in comparison to the value of p used to generate the adjacency matrix.
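The quantity λ2 (the algebraic connectivity) can be sketched directly. The snippet below assumes NumPy and an Erdős–Rényi random graph G(n, p) as the interaction structure; it is an illustration, not the thesis's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def lambda_2(n, p):
    """Second-smallest eigenvalue of the graph Laplacian of an
    Erdos-Renyi random graph G(n, p) -- the lambda_2 of the text."""
    A = np.triu((rng.random((n, n)) < p).astype(float), 1)
    A = A + A.T                            # symmetric adjacency, no self-loops
    L = np.diag(A.sum(axis=1)) - A         # graph Laplacian
    return np.sort(np.linalg.eigvalsh(L))[1]

# lambda_2 of the complete graph on n vertices is exactly n; sparser random
# graphs typically have much smaller lambda_2, i.e. slower flocking.
print(lambda_2(50, 1.0))   # approximately 50.0 (complete graph)
print(lambda_2(50, 0.1))   # typically much smaller (0 if the graph is disconnected)
```

A nonzero value certifies connectivity, and its size bounds how fast the velocity variance decays.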

Contributors: Trent, Austin L. (Author) / Motsch, Sebastien (Thesis director) / Lanchier, Nicolas (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Cancer modeling has attracted a lot of attention in recent years. Modeling the behavior of cancer cells has proven to be a difficult task, since little is known about the "rules" a cell follows. Existing models for cancer cells can be generalized into two categories: macroscopic models, which study the tumor structure as a whole, and microscopic models, which focus on the behavior of individual cells. Both modeling strategies strive toward the same goal of creating a model that can be validated with experimental data and is reliable for predicting tumor growth. In order to achieve this goal, models must be developed based on certain rules that tumor structures follow. This paper introduces how such rules can be implemented in a mathematical model, using individual-based modeling as an example.
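The flavor of individual-based modeling can be conveyed with a toy example. The division rule below (each occupied site spawns into a random neighbor with a fixed probability, on a periodic grid) is a hypothetical illustration, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(2)

def grow(grid, p_div=0.3):
    """One step of a toy individual-based tumor model: each occupied cell
    divides into a random neighboring site with probability p_div."""
    n = grid.shape[0]
    new = grid.copy()
    for i, j in zip(*np.nonzero(grid)):
        if rng.random() < p_div:
            di = rng.choice([-1, 0, 1])
            dj = rng.choice([-1, 0, 1])
            new[(i + di) % n, (j + dj) % n] = 1   # occupy (no-op if occupied)
    return new

grid = np.zeros((50, 50), dtype=int)
grid[25, 25] = 1                                   # single seed cell
for _ in range(40):
    grid = grow(grid)
print(int(grid.sum()))                             # tumor size after 40 steps
```

Changing the local rule (division probability, neighborhood, death, contact inhibition) changes the emergent tumor shape, which is the sense in which "rules" are implemented in such models.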
Contributors: Han, Zimo (Author) / Motsch, Sebastien (Thesis director) / Moustaoui, Mohamed (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
As the impacts of climate change worsen in the coming decades, natural hazards are expected to increase in frequency and intensity, leading to increased loss and risk to human livelihood. The spatio-temporal statistical approaches developed and applied in this dissertation highlight the ways in which hazard data can be leveraged to understand loss trends, build forecasts, and study societal impacts of losses. Specifically, this work makes use of the Spatial Hazard Events and Losses Database, which is an unparalleled source of loss data for the United States. The first portion of this dissertation develops accurate loss baselines that are crucial for mitigation planning, infrastructure investment, and risk communication. This is accomplished through a stationarity analysis of county-level losses following a normalization procedure. A wide variety of studies employ loss data without addressing stationarity assumptions or the possibility of spurious regression. This work enables the statistically rigorous application of such loss time series to modeling applications. The second portion of this work develops a novel matrix variate dynamic factor model for spatio-temporal loss data stratified across multiple correlated hazards or perils. The developed model is employed to analyze and forecast losses from convective storms, which constitute some of the highest losses covered by insurers. Adopting a factor-based approach, forecasts are achieved despite the complex and often unobserved underlying drivers of these losses. The developed methodology extends the literature on dynamic factor models to matrix variate time series. Specifically, a covariance structure is imposed that is well suited to spatio-temporal problems while significantly reducing model complexity. The model is fit via the EM algorithm and Kalman filter. The third and final part of this dissertation investigates the impact of compounding hazard events on state and regional migration in the United States.
Any attempt to capture trends in climate related migration must account for the inherent uncertainties surrounding climate change, natural hazard occurrences, and socioeconomic factors. For this reason, I adopt a Bayesian modeling approach that enables the explicit estimation of the inherent uncertainty. This work can provide decision-makers with greater clarity regarding the extent of knowledge on climate trends.
Contributors: Boyle, Esther Sarai (Author) / Jevtic, Petar (Thesis advisor) / Lanchier, Nicolas (Thesis advisor) / Lan, Shiwei (Committee member) / Cheng, Dan (Committee member) / Fricks, John (Committee member) / Gall, Melanie (Committee member) / Cutter, Susan (Committee member) / McNicholas, Paul (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
This work presents a thorough analysis of reconstruction of global wave fields (governed by the inhomogeneous wave equation and the Maxwell vector wave equation) from sensor time series data of the wave field. Three major problems are considered. First, an analysis of circumstances under which wave fields can be fully reconstructed from a network of fixed-location sensors is presented. It is proven that, in many cases, wave fields can be fully reconstructed from a single sensor, but that such reconstructions can be sensitive to small perturbations in sensor placement. Generally, multiple sensors are necessary. The next problem considered is how to obtain a global approximation of an electromagnetic wave field in the presence of an amplifying noisy current density from sensor time series data. This type of noise, described in terms of a cylindrical Wiener process, creates a nonequilibrium system, derived from Maxwell's equations, where variance increases with time. In this noisy system, longer observation times do not generally provide more accurate estimates of the field coefficients. The mean squared error of the estimates can be decomposed into a sum of the squared bias and the variance. As the observation time τ increases, the bias decreases as O(1/τ) but the variance increases as O(τ). The contrasting time scales imply the existence of an "optimal" observing time (the bias-variance tradeoff). An iterative algorithm is developed to construct global approximations of the electric field using the optimal observing times. Lastly, the effect of sensor acceleration is considered. When the sensor location is fixed, measurements of wave fields composed of plane waves are almost periodic and so can be written in terms of a standard Fourier basis. When the sensor is accelerating, the resulting time series is no longer almost periodic.
This phenomenon is related to the Doppler effect, where a time transformation must be performed to obtain the frequency and amplitude information from the time series data. To obtain frequency and amplitude information from accelerating sensor time series data in a general inhomogeneous medium, a randomized algorithm is presented. The algorithm is analyzed and example wave fields are reconstructed.
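The bias-variance tradeoff in observation time can be illustrated numerically. With bias scaling as a/τ and variance as b·τ, the mean squared error is MSE(τ) = (a/τ)² + b·τ, minimized at τ* = (2a²/b)^(1/3); the constants a and b below are hypothetical.

```python
import numpy as np

# Numerical illustration of the bias-variance tradeoff described above.
# Bias ~ a/tau and variance ~ b*tau, so MSE(tau) = (a/tau)**2 + b*tau.
a, b = 2.0, 0.5   # hypothetical constants

def mse(tau):
    return (a / tau) ** 2 + b * tau

tau_grid = np.linspace(0.1, 20, 100_000)
tau_numeric = tau_grid[np.argmin(mse(tau_grid))]
tau_closed = (2 * a ** 2 / b) ** (1 / 3)   # from d(MSE)/d(tau) = 0
print(tau_numeric, tau_closed)             # both close to 2.52
```

Observing for longer than τ* lets the growing variance dominate; shorter, and the bias does.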
Contributors: Barclay, Bryce Matthew (Author) / Mahalov, Alex (Thesis advisor) / Kostelich, Eric J (Thesis advisor) / Moustaoui, Mohamed (Committee member) / Motsch, Sebastien (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Artificial Intelligence (AI) is a rapidly advancing field with the potential to impact every aspect of society, including the inventive practices of science and technology. The creation of new ideas, devices, or methods, commonly known as inventions, is typically viewed as a process of combining existing knowledge. To understand how AI can transform scientific and technological inventions, it is essential to comprehend how such combinatorial inventions have emerged in the development of AI. This dissertation aims to investigate three aspects of combinatorial inventions in AI using data-driven and network analysis methods. Firstly, how knowledge is combined to generate new scientific publications in AI; secondly, how technical components are combined to create new AI patents; and thirdly, how organizations create new AI inventions by integrating knowledge within organizational and industrial boundaries. Using an AI publication dataset of nearly 300,000 AI publications and an AI patent dataset of almost 260,000 AI patents granted by the United States Patent and Trademark Office (USPTO), this study found that scientific research related to AI is predominantly driven by combining existing knowledge in highly conventional ways, which also results in the most impactful publications. Similarly, incremental improvements and refinements that rely on existing knowledge rather than radically new ideas are the primary driver of AI patenting. Nonetheless, AI patents combining new components tend to disrupt citation networks and hence future inventive practices more than those that involve only existing components. To examine AI organizations' inventive activities, an analytical framework called the Combinatorial Exploitation and Exploration (CEE) framework was developed to measure how much an organization accesses and discovers knowledge while working within organizational and industrial boundaries.
With a dataset of nearly 500 AI organizations that have continuously contributed to AI technologies, the research shows that AI organizations favor exploitative over exploratory inventions. However, local exploitation tends to peak within the first five years and remain stable, while exploratory inventions grow gradually over time. Overall, this dissertation offers empirical evidence regarding how inventions in AI have emerged and provides insights into how combinatorial characteristics relate to AI inventions’ quality. Additionally, the study offers tools to assess inventive outcomes and competence.
Contributors: Wang, Jieshu (Author) / Maynard, Andrew (Thesis advisor) / Lobo, Jose (Committee member) / Michael, Katina (Committee member) / Motsch, Sebastien (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Tracking disease cases is an essential task in public health; however, tracking the number of cases of a disease may be difficult because not every infection can be recorded by public health authorities. Notably, this may happen with whole-country measles case reports, even in countries with robust registration systems. Eilertson et al. (2019) propose using a state-space model combined with maximum likelihood methods for estimating measles transmission. A Bayesian approach that uses particle Markov chain Monte Carlo (pMCMC) is proposed to estimate the parameters of the non-linear state-space model developed in Eilertson et al. (2019) and similar previous studies. This dissertation illustrates the performance of this approach by calculating posterior estimates of the model parameters and predictions of the unobserved states in simulations and case studies. Also, iterated filtering (IF2) is used as a support method to verify the Bayesian estimation and to inform the selection of prior distributions. In the second half of the thesis, a birth-death process is proposed to model the unobserved population size of a disease vector. This model studies the effect of a disease vector's population size on a second affected population. The second population follows a non-homogeneous Poisson process when conditioned on the vector process, with a transition rate given by a scaled version of the vector population. The observation model also measures a potential threshold event when the host species' population size surpasses a certain level, yielding a higher transmission rate. A maximum likelihood procedure is developed for this model, which combines particle filtering with the Minorize-Maximization (MM) algorithm and extends the work of Crawford et al. (2014).
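The particle filter at the heart of pMCMC can be sketched in a few lines. The bootstrap filter below runs on a toy linear-Gaussian state-space model (an AR(1) state observed with noise), not the measles model; it is an illustration of the general mechanism only.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(y, n_particles=2000):
    """Bootstrap particle filter for the toy model
    x_t = 0.9 x_{t-1} + N(0, 0.5^2), y_t = x_t + N(0, 1)."""
    x = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for obs in y:
        x = 0.9 * x + rng.normal(0.0, 0.5, n_particles)   # propagate particles
        w = np.exp(-0.5 * (obs - x) ** 2)                  # Gaussian observation likelihood
        w /= w.sum()
        estimates.append(np.sum(w * x))                    # weighted filtered mean
        x = rng.choice(x, size=n_particles, p=w)           # multinomial resampling
    return np.array(estimates)

# Simulate data from the same model, then filter it.
true_x, y = [0.0], []
for _ in range(50):
    true_x.append(0.9 * true_x[-1] + rng.normal(0.0, 0.5))
    y.append(true_x[-1] + rng.normal(0.0, 1.0))
est = particle_filter(np.array(y))
print(np.corrcoef(est, true_x[1:])[0, 1])  # positive: filtered means track the state
```

In pMCMC, this filter additionally returns an unbiased likelihood estimate that drives a Metropolis-Hastings update of the model parameters.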
Contributors: Martinez Rivera, Wilmer Osvaldo (Author) / Fricks, John (Thesis advisor) / Reiser, Mark (Committee member) / Zhou, Shuang (Committee member) / Cheng, Dan (Committee member) / Lan, Shiwei (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
A leading crisis in the United States is the opioid use disorder (OUD) epidemic. Opioid overdose deaths have been increasing, with over 100,000 deaths due to overdose from April 2020 to April 2021. This dissertation presents two mathematical models to address illicit OUD (IOUD), treatment, and recovery within an epidemiological framework. In the first model, individuals remain in the recovery class unless they relapse. Due to the limited availability of specialty treatment facilities for individuals with OUD, a saturation treatment function was incorporated. The second model is an extension of the first, where a casual user class and its corresponding specialty treatment class were added. U.S. population data were scaled to a population of 200,000 to find parameter estimates. While the first model used the heroin-only dataset, the second model used both the heroin and all-illicit-opioids datasets. Backward bifurcation was found in the first IOUD model for realistic parameter values. Additionally, bistability was observed in the second IOUD model with the heroin-only dataset. This result implies that it would be beneficial to increase the availability of treatment. An alarming effect of the high overdose death rate was discovered: by 2038, the disease-free equilibrium would be the only stable equilibrium. This consequence is concerning because although the goal is for the epidemic to end, it would be preferable to end it through treatment rather than overdose. The IOUD model with a casual user class, its sensitivity results, and the comparison of parameters for both datasets showed the importance of not overlooking the influence that casual users have in driving the all-illicit-opioid epidemic. Casual users stay in the casual user class longer and do not go to treatment as quickly as the users of the heroin epidemic.
Another result was that the users of the all-illicit opioids were going to the recovered class by means other than specialty treatment. However, the relapse rates for those individuals were much more significant than in the heroin-only epidemic. The results above from analyzing these models may inform health and policy officials, leading to more effective treatment options and prevention efforts.
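The role of a saturating treatment term can be seen in a stripped-down caricature of such a model. The two-compartment Euler simulation below (U = fraction using, T = fraction in treatment, with treatment uptake b·U/(1 + c·U)) uses entirely hypothetical rates and is not the dissertation's model or parameter estimates.

```python
# Toy Euler simulation of a use/treatment model with a saturating
# treatment term, in the spirit of the saturation treatment function above.
def step(U, T, dt=0.01, lam=0.4, b=0.3, c=5.0, d=0.2, r=0.1):
    treat = b * U / (1 + c * U)                    # saturating uptake of treatment
    dU = lam * U * (1 - U - T) - treat + r * T     # new users - treated + relapse
    dT = treat - (d + r) * T                       # in treatment - recovered - relapse
    return U + dt * dU, T + dt * dT

U, T = 0.1, 0.0                                    # initial fractions
for _ in range(200_000):                           # integrate to t = 2000
    U, T = step(U, T)
print(round(U, 2), round(T, 2))                    # settles near U = 0.74, T = 0.16
```

Because the uptake term flattens out for large U, treatment capacity effectively saturates; this nonlinearity is what makes phenomena like backward bifurcation and bistability possible in the full models.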
Contributors: Cole, Sandra (Author) / Wirkus, Stephen (Thesis advisor) / Gardner, Carl (Committee member) / Lanchier, Nicolas (Committee member) / Camacho, Erika (Committee member) / Fricks, John (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The high uncertainty of renewables introduces more dynamics to power systems. The conventional way of monitoring and controlling power systems is no longer reliable. New strategies are needed to ensure the stability and reliability of power systems. This work aims to assess the use of machine learning methods in analyzing data from renewable-integrated power systems to aid the decision-making of electricity market participants. Specifically, the work studies the cases of electricity price forecasting, solar panel detection, and how to constrain machine learning methods to obey domain knowledge. Chapter 2 proposes to diversify the data source to ensure a more accurate electricity price forecast. Specifically, the proposed two-stage method, namely the rerouted method, learns two types of mapping rules: the mapping between historical wind power and historical price, and the forecasting rule for wind generation. Based on the two rules, we forecast the price via the forecasted generation and the learned mapping between power and price. An extensive numerical comparison gives guidance for choosing proper machine learning methods and proves the effectiveness of the proposed method. Chapter 3 proposes to integrate advanced data compression techniques into machine learning algorithms to either improve the prediction accuracy or accelerate the computation speed. New semi-supervised learning and one-class classification methods are proposed based on autoencoders to compress the data while refining the nonlinear data representation of human behavior and solar behavior. The numerical results show robust detection accuracy, laying down the foundation for managing distributed energy resources in distribution grids. Guidance is also provided to determine the proper machine learning methods for the solar detection problem.
Chapter 4 proposes to integrate different types of domain knowledge-based constraints into basic neural networks to guide the model selection and enhance interpretability. A hybrid model is proposed to penalize derivatives and alter the structure to improve the performance of a neural network. We verify the performance improvement of introducing prior knowledge-based constraints on both synthetic and real data sets.
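The idea of penalizing derivatives to inject domain knowledge can be illustrated on a much simpler model class. The sketch below fits a polynomial instead of a neural network and penalizes its (finite-difference) second derivative; it is a hypothetical stand-in for the hybrid model described above, not the dissertation's method.

```python
import numpy as np

# Derivative-penalized least squares: fit a degree-7 polynomial to noisy data
# while penalizing curvature, minimizing ||X w - y||^2 + lam * ||D w||^2.
rng = np.random.default_rng(3)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, 50)

X = np.vander(x, 8)                                  # polynomial features
h = x[1] - x[0]
D = np.diff(np.eye(50), n=2, axis=0) @ X / h ** 2    # second-derivative operator
lam = 1e-5                                           # smoothness penalty strength
w = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)  # ridge-style normal equations
err = np.abs(X @ w - y).mean()
print(err)                                           # small: the constrained fit tracks the data
```

Raising lam trades data fidelity for smoothness; in the neural-network setting, the analogous penalty is added to the training loss to keep the learned function consistent with prior physical knowledge.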
Contributors: Luo, Shuman (Author) / Weng, Yang (Thesis advisor) / Lei, Qin (Committee member) / Fricks, John (Committee member) / Qin, Jiangchao (Committee member) / Arizona State University (Publisher)
Created: 2022