Matching Items (19)
Description
This thesis explores and explains a stochastic model in Evolutionary Game Theory introduced by Dr. Nicolas Lanchier. The model is a continuous-time Markov chain that maps the two-dimensional lattice into the strategy space {1,2}. At every vertex of the grid there is exactly one player, whose payoff is determined by its strategy and the strategies of its neighbors. Update times are exponential random variables with parameters equal to the absolute value of the respective cells' payoffs. The model is connected to an ordinary differential equation known as the replicator equation, which is analyzed to find its fixed points and their stability. Then, by simulating the model using Java code and observing the changes in dynamics that result from varying the parameters of the payoff matrix, the stochastic model's phase diagram is compared to the replicator equation's phase diagram to see what effect local interactions and stochastic update times have on the evolutionary stability of strategies. The simulations reveal that in the stochastic model altruistic strategies can be evolutionarily stable, and selfish strategies are evolutionarily stable only if they are more selfish than the opposing strategy. This contrasts with the replicator equation, where selfishness is always evolutionarily stable and altruism never is.
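For a two-strategy game, the replicator equation reduces to a scalar ODE whose fixed points can be written down directly. The sketch below illustrates the fixed-point analysis described above; the payoff matrix entries are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def replicator_rhs(x, A):
    """Replicator dynamics for a 2-strategy game with payoff matrix A.
    x is the fraction of strategy-1 players; returns dx/dt."""
    f1 = A[0, 0] * x + A[0, 1] * (1 - x)   # payoff to strategy 1
    f2 = A[1, 0] * x + A[1, 1] * (1 - x)   # payoff to strategy 2
    return x * (1 - x) * (f1 - f2)

def interior_fixed_point(A):
    """Interior fixed point x* where both strategies earn equal payoff,
    if it lies in (0, 1); otherwise None."""
    denom = (A[0, 0] - A[1, 0]) + (A[1, 1] - A[0, 1])
    if denom == 0:
        return None
    x = (A[1, 1] - A[0, 1]) / denom
    return x if 0 < x < 1 else None

# Illustrative payoff matrix (Hawk-Dove-type game; values are assumptions)
A = np.array([[1.0, 3.0], [2.0, 1.0]])
x_star = interior_fixed_point(A)   # equal-payoff mixture; 2/3 for this matrix
```

The boundary points x = 0 and x = 1 are always fixed; stability of each is read off from the sign of the right-hand side nearby.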
Contributors: Wehn, Austin Brent (Author) / Lanchier, Nicolas (Thesis director) / Kang, Yun (Committee member) / Motsch, Sebastien (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of International Letters and Cultures (Contributor)
Created: 2013-12
Description
There are multiple mathematical models for the alignment of individuals moving within a group. In a first class of models, individuals relax their velocity toward the average velocity of nearby neighbors; these models are motivated by the flocking behavior exhibited by birds. Another class of models has been introduced to describe rapid changes of individual velocity, referred to as jumps, which better describe the behavior of smaller agents (e.g., locusts, ants). In this second class, individuals randomly choose to align with another nearby individual, matching velocities. Several questions concerning these two types of behavior remain open: which behavior is the most efficient for creating a flock (i.e., for converging toward the same velocity)? Will flocking still emerge as the number of individuals approaches infinity? Analysis of these models shows that, in the homogeneous case where all individuals are capable of interacting with each other, the variance of the velocities in both the jump model and the relaxation model decays to 0 exponentially for any nonzero number of individuals. This implies the individuals in the system converge to an absorbing state in which all individuals share the same velocity, so the individuals converge to a flock even as the number of individuals approaches infinity. Further analysis focused on the case where interactions between individuals are determined by an adjacency matrix. The second eigenvalue of the Laplacian of this adjacency matrix (denoted λ2) provides a lower bound on the rate of decay of the variance. When λ2 is nonzero, the system is said to converge to a flock almost surely. Furthermore, when the adjacency matrix is generated by a random graph, such that connections between individuals are formed with probability p (where 0 < p < 1), … 1/N. λ2 is a good estimator of the rate of convergence of the system, in comparison to the value of p used to generate the adjacency matrix.
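The quantity in question, the second-smallest eigenvalue of the graph Laplacian (the algebraic connectivity), is straightforward to compute numerically. A minimal sketch for an Erdős–Rényi interaction graph, with illustrative values of N and p:

```python
import numpy as np

rng = np.random.default_rng(0)

def erdos_renyi_adjacency(n, p, rng):
    """Symmetric 0/1 adjacency matrix of G(n, p), no self-loops."""
    upper = rng.random((n, n)) < p
    A = np.triu(upper, k=1)
    return (A | A.T).astype(float)

def algebraic_connectivity(A):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A.
    It is positive iff the interaction graph is connected, the condition
    tied to almost-sure flocking in the analysis above."""
    L = np.diag(A.sum(axis=1)) - A
    eig = np.linalg.eigvalsh(L)   # eigenvalues sorted ascending
    return eig[1]

A = erdos_renyi_adjacency(50, 0.3, rng)
lam2 = algebraic_connectivity(A)   # > 0 when the graph is connected
```

For the complete graph on n vertices, the Laplacian eigenvalues are 0 and n, so λ2 = n; sparser graphs give smaller λ2 and hence a weaker lower bound on the decay rate.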

Contributors: Trent, Austin L. (Author) / Motsch, Sebastien (Thesis director) / Lanchier, Nicolas (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Cancer modeling has attracted a lot of attention in recent years. Modeling the behavior of cancer cells has proven to be a difficult task, since little is known about the "rules" a cell follows. Existing models for cancer cells can be grouped into two categories: macroscopic models, which study the tumor structure as a whole, and microscopic models, which focus on the behavior of individual cells. Both modeling strategies strive toward the same goal: creating a model that can be validated with experimental data and is reliable for predicting tumor growth. To achieve this goal, models must be developed based on certain rules that tumor structures follow. This paper introduces how such rules can be implemented in a mathematical model, using individual-based modeling as an example.
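The individual-based approach can be illustrated with a toy model: cells sit on a lattice, and each cell divides into a random empty neighbor with some probability per update. Every rule and parameter below is an illustrative assumption, not the thesis's actual model:

```python
import random

def step(cells, size, p_divide, rng):
    """One update of a toy individual-based tumor model on a size x size
    grid: each occupied site divides into a random empty von Neumann
    neighbor with probability p_divide.  All rules are illustrative."""
    new = set(cells)
    for (x, y) in cells:
        if rng.random() < p_divide:
            nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            empty = [(i, j) for (i, j) in nbrs
                     if 0 <= i < size and 0 <= j < size and (i, j) not in new]
            if empty:
                new.add(rng.choice(empty))
    return new

rng = random.Random(42)
cells = {(10, 10)}                 # a single seeded cell at the center
for _ in range(30):
    cells = step(cells, 21, 0.5, rng)
# the colony grows outward from the seed into a compact cluster
```

Richer rules (death, mutation, nutrient dependence) slot into the same update loop, which is what makes the individual-based strategy flexible.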
Contributors: Han, Zimo (Author) / Motsch, Sebastien (Thesis director) / Moustaoui, Mohamed (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
This work presents a thorough analysis of reconstruction of global wave fields (governed by the inhomogeneous wave equation and the Maxwell vector wave equation) from sensor time series data of the wave field. Three major problems are considered. First, an analysis of circumstances under which wave fields can be fully reconstructed from a network of fixed-location sensors is presented. It is proven that, in many cases, wave fields can be fully reconstructed from a single sensor, but that such reconstructions can be sensitive to small perturbations in sensor placement. Generally, multiple sensors are necessary. The next problem considered is how to obtain a global approximation of an electromagnetic wave field in the presence of an amplifying noisy current density from sensor time series data. This type of noise, described in terms of a cylindrical Wiener process, creates a nonequilibrium system, derived from Maxwell’s equations, where variance increases with time. In this noisy system, longer observation times do not generally provide more accurate estimates of the field coefficients. The mean squared error of the estimates can be decomposed into a sum of the squared bias and the variance. As the observation time $\tau$ increases, the bias decreases as $\mathcal{O}(1/\tau)$ but the variance increases as $\mathcal{O}(\tau)$. The contrasting time scales imply the existence of an ``optimal'' observing time (the bias-variance tradeoff). An iterative algorithm is developed to construct global approximations of the electric field using the optimal observing times. Lastly, the effect of sensor acceleration is considered. When the sensor location is fixed, measurements of wave fields composed of plane waves are almost periodic and so can be written in terms of a standard Fourier basis. When the sensor is accelerating, the resulting time series is no longer almost periodic. 
This phenomenon is related to the Doppler effect, where a time transformation must be performed to obtain the frequency and amplitude information from the time series data. To obtain frequency and amplitude information from accelerating sensor time series data in a general inhomogeneous medium, a randomized algorithm is presented. The algorithm is analyzed and example wave fields are reconstructed.
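The bias-variance tradeoff described above can be made concrete with a toy mean-squared-error model: bias ~ a/τ and variance ~ bτ give MSE(τ) = (a/τ)² + bτ, minimized at τ* = (2a²/b)^(1/3). The prefactors a and b below are illustrative assumptions, not the problem's actual constants:

```python
import numpy as np

def optimal_observing_time(a, b):
    """Minimizer of the toy MSE(tau) = (a / tau)**2 + b * tau, modeling the
    bias-variance tradeoff above (bias ~ a/tau, variance ~ b*tau).
    Setting dMSE/dtau = -2 a^2 / tau^3 + b = 0 gives tau* = (2 a^2 / b)^(1/3)."""
    return (2 * a**2 / b) ** (1 / 3)

a, b = 1.0, 0.5                     # illustrative prefactors
tau_star = optimal_observing_time(a, b)

# cross-check against a brute-force grid search over observing times
taus = np.linspace(0.1, 10, 10_000)
mse = (a / taus) ** 2 + b * taus
tau_grid = taus[np.argmin(mse)]
```

Observing longer than τ* lets the growing variance dominate; observing shorter leaves too much bias, which is why the iterative algorithm targets the optimal observing time.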
Contributors: Barclay, Bryce Matthew (Author) / Mahalov, Alex (Thesis advisor) / Kostelich, Eric J (Thesis advisor) / Moustaoui, Mohamed (Committee member) / Motsch, Sebastien (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Artificial Intelligence (AI) is a rapidly advancing field with the potential to impact every aspect of society, including the inventive practices of science and technology. The creation of new ideas, devices, or methods, commonly known as inventions, is typically viewed as a process of combining existing knowledge. To understand how AI can transform scientific and technological inventions, it is essential to comprehend how such combinatorial inventions have emerged in the development of AI. This dissertation aims to investigate three aspects of combinatorial inventions in AI using data-driven and network analysis methods: firstly, how knowledge is combined to generate new scientific publications in AI; secondly, how technical components are combined to create new AI patents; and thirdly, how organizations create new AI inventions by integrating knowledge within organizational and industrial boundaries. Using an AI publication dataset of nearly 300,000 AI publications and an AI patent dataset of almost 260,000 AI patents granted by the United States Patent and Trademark Office (USPTO), this study found that scientific research related to AI is predominantly driven by combining existing knowledge in highly conventional ways, which also results in the most impactful publications. Similarly, incremental improvements and refinements that rely on existing knowledge rather than radically new ideas are the primary driver of AI patenting. Nonetheless, AI patents combining new components tend to disrupt citation networks, and hence future inventive practices, more than those that involve only existing components. To examine AI organizations' inventive activities, an analytical framework called the Combinatorial Exploitation and Exploration (CEE) framework was developed to measure how much an organization accesses and discovers knowledge while working within organizational and industrial boundaries.
With a dataset of nearly 500 AI organizations that have continuously contributed to AI technologies, the research shows that AI organizations favor exploitative over exploratory inventions. However, local exploitation tends to peak within the first five years and remain stable, while exploratory inventions grow gradually over time. Overall, this dissertation offers empirical evidence regarding how inventions in AI have emerged and provides insights into how combinatorial characteristics relate to AI inventions’ quality. Additionally, the study offers tools to assess inventive outcomes and competence.
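Citation-network disruption of the kind mentioned above is often quantified with measures in the spirit of the CD index: among later patents citing a focal patent, count those that bypass the focal patent's own references (disrupting) versus those that cite both (consolidating). The toy data and the simplified formula below are illustrative assumptions, not the dissertation's actual metric:

```python
def disruption_index(focal_refs, citers):
    """Toy disruption-style measure: among patents citing the focal patent,
    those citing none of its references count as disrupting (n_f), those
    citing at least one count as consolidating (n_b).
    Returns (n_f - n_b) / (n_f + n_b); data and formula are illustrative."""
    refs = set(focal_refs)
    n_f = sum(1 for c in citers if not (set(c) & refs))
    n_b = sum(1 for c in citers if set(c) & refs)
    return (n_f - n_b) / (n_f + n_b) if (n_f + n_b) else 0.0

# hypothetical focal patent citing r1, r2; four later patents cite it,
# each list holding that citer's other citations
focal_refs = ["r1", "r2"]
citers = [["r1"], [], ["r2", "x"], ["y"]]
cd = disruption_index(focal_refs, citers)   # two of each -> 0.0
```

Values near +1 indicate the focal invention displaced its predecessors in later citation behavior; values near -1 indicate it consolidated them.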
Contributors: Wang, Jieshu (Author) / Maynard, Andrew (Thesis advisor) / Lobo, Jose (Committee member) / Michael, Katina (Committee member) / Motsch, Sebastien (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
The main objective of this work is to study novel stochastic modeling applications to cybersecurity across three dimensions: loss, attack, and detection. First, motivated by recent spatial stochastic models with cyber insurance applications, the first and second moments of the size of a typical cluster of bond percolation on finite graphs are studied. More precisely, given a finite graph whose edges are independently open with the same probability $p$ and a vertex $x$ chosen uniformly at random, the goal is to find the first and second moments of the number of vertices in the cluster of open edges containing $x$. Exact expressions for these moments are derived for essential building blocks of hybrid graphs: the ring, the path, the random star, and regular graphs. Upper bounds for the moments are obtained via a coupling argument comparing the percolation model with branching processes when the graph is a random rooted tree with a given offspring distribution and a given finite radius. Second, the Petri net modeling framework for performance analysis is well established, and its extensions provide enough flexibility to examine, via simulation, the behavior of a permissioned blockchain platform in the context of an ongoing cyberattack. The relationship between system performance and cyberattack configuration is analyzed. The simulations vary the blockchain's parameters and network structure, revealing the factors that contribute positively or negatively to a Sybil attack through their impact on system performance. Lastly, the ability of denoising diffusion probabilistic models (DDPMs) to perform synthetic tabular data augmentation is studied. DDPMs surpass generative adversarial networks in computer vision classification tasks and image generation (for example, Stable Diffusion).
Recent research and open-source implementations point to strong quality of synthetic tabular data generation for classification and regression tasks. Unfortunately, the literature on tabular data augmentation with DDPMs for classification is lacking. Further, cyber datasets commonly have highly unbalanced distributions, which complicates training. Synthetic tabular data augmentation is investigated on cyber datasets, and the performance on well-known machine learning classification metrics improves with augmentation and balancing.
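For the percolation part, the first moment on the ring admits a closed form: a vertex at arc-distance k from $x$ on the cycle $C_n$ is connected to $x$ with probability $p^k + p^{n-k} - p^n$ (one open arc on either side, inclusion-exclusion). The sketch below, a sanity check rather than the thesis's derivation, compares this formula with Monte Carlo:

```python
import random

def ring_cluster_size(n, p, rng):
    """Size of the open cluster containing a uniformly chosen vertex of
    the cycle C_n, where each edge is open independently with prob p."""
    edges = [rng.random() < p for _ in range(n)]   # edge i joins i and i+1 mod n
    if all(edges):
        return n
    x = rng.randrange(n)
    size, i = 1, x
    while edges[i % n]:                 # walk right while edges stay open
        size += 1
        i += 1
    i = x - 1
    while edges[i % n]:                 # walk left while edges stay open
        size += 1
        i -= 1
    return size

def ring_first_moment(n, p):
    """Exact E[|C(x)|] on C_n via inclusion-exclusion over the two arcs."""
    return 1 + sum(p**k + p**(n - k) - p**n for k in range(1, n))

rng = random.Random(1)
n, p, trials = 10, 0.5, 100_000
mc = sum(ring_cluster_size(n, p, rng) for _ in range(trials)) / trials
exact = ring_first_moment(n, p)        # Monte Carlo estimate mc should be close
```

The same Monte Carlo scaffold extends to the path, star, and regular graphs once the cluster-exploration step is adapted to each topology.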
Contributors: La Salle, Axel (Author) / Lanchier, Nicolas (Thesis advisor) / Jevtic, Petar (Thesis advisor) / Motsch, Sebastien (Committee member) / Boscovic, Dragan (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Understanding the evolution of opinions is a delicate task, as the dynamics of how one changes their opinion based on interactions with others are unclear.
Contributors: Weber, Dylan (Author) / Motsch, Sebastien (Thesis advisor) / Lanchier, Nicolas (Committee member) / Platte, Rodrigo (Committee member) / Armbruster, Dieter (Committee member) / Fricks, John (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
The main part of this work establishes existence, uniqueness and regularity properties of measure-valued solutions of a nonlinear hyperbolic conservation law with non-local velocities. Major challenges stem from in- and out-fluxes containing nonzero pure-point parts which cause discontinuities of the velocities. This part is preceded, and motivated, by an extended study which proves that an associated optimal control problem has no optimal $L^1$-solutions that are supported on short time intervals.

The hyperbolic conservation law considered here is a well-established model for a highly re-entrant semiconductor manufacturing system. Prior work established well-posedness for $L^1$-controls and states, and existence of optimal solutions for $L^2$-controls, states, and control objectives. The results on measure-valued solutions presented here reduce to the existing literature in the case of initial state and in-flux being absolutely continuous measures. The surprising well-posedness (in the face of measures containing nonzero pure-point part and discontinuous velocities) is directly related to characteristic features of the model that capture the highly re-entrant nature of the semiconductor manufacturing system.
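The standard form of this re-entrant manufacturing model in the literature (due to Armbruster, Degond, and Ringhofer) is the nonlocal conservation law below; this is a hedged reconstruction from that literature, not a quotation from the thesis:

```latex
\partial_t \rho(t,x) + \partial_x\bigl(\lambda(W(t))\,\rho(t,x)\bigr) = 0,
  \qquad x \in (0,1),\ t > 0,
\qquad
W(t) = \int_0^1 \rho(t,x)\,dx,
\qquad
\lambda(W) = \frac{1}{1+W},
```

with in-flux $u(t) = \lambda(W(t))\,\rho(t,0)$ and out-flux $y(t) = \lambda(W(t))\,\rho(t,1)$. The velocity $\lambda$ depends only on the total load $W(t)$, which is what encodes the highly re-entrant nature of the factory: every item in the system slows down every other item.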

More specifically, the optimal control problem is to minimize an $L^1$-functional that measures the mismatch between actual and desired accumulated out-flux. The focus is on the transition between equilibria with eventually zero backlog. In the case of a step up to a larger equilibrium, the in-flux not only needs to increase to match the higher desired out-flux, but also needs to increase the mass in the factory and to make up for the backlog caused by an inverse response of the system. The optimality results obtained confirm the heuristic inference that the optimal solution should be an impulsive in-flux, but this is no longer in the space of $L^1$-controls.

The need for impulsive controls motivates the change of the setting from $L^1$-controls and states to controls and states that are Borel measures. The key strategy is to temporarily abandon the Eulerian point of view and first construct Lagrangian solutions. The final section proposes a notion of weak measure-valued solutions and proves existence and uniqueness of such solutions.

In the case of the in-flux containing a nonzero pure-point part, the weak solution cannot depend continuously on time with respect to any norm. However, using semi-norms related to the flat norm, a weaker form of continuity of solutions with respect to time is proven. It is conjectured that a similar weak continuous dependence on initial data also holds with respect to a variant of the flat norm.
Contributors: Gong, Xiaoqian, Ph.D (Author) / Kawski, Matthias (Thesis advisor) / Kaliszewski, Steven (Committee member) / Motsch, Sebastien (Committee member) / Smith, Hal (Committee member) / Thieme, Horst (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
I focus on algorithms that generate good sampling points for function approximation. In 1D, it is well known that polynomial interpolation using equispaced points is unstable. On the other hand, using Chebyshev nodes provides both stable and highly accurate points for polynomial interpolation. In higher dimensional complex regions, optimal sampling points are not known explicitly. This work presents robust algorithms that find good sampling points in complex regions for polynomial interpolation, least-squares, and radial basis function (RBF) methods. The quality of these nodes is measured using the Lebesgue constant. I will also consider optimal sampling for constrained optimization, used to solve PDEs, where boundary conditions must be imposed. Furthermore, I extend the scope of the problem to include finding near-optimal sampling points for high-order finite difference methods. These high-order finite difference methods can be implemented using either piecewise polynomials or RBFs.
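The Lebesgue constant mentioned above, $\max_x \sum_j |\ell_j(x)|$ over the Lagrange cardinal functions, is easy to estimate numerically and makes the equispaced-versus-Chebyshev contrast vivid. A sketch (not the thesis's algorithm) using the barycentric form:

```python
import numpy as np

def lebesgue_constant(nodes, n_eval=2000):
    """Estimate the Lebesgue constant max_x sum_j |l_j(x)| for polynomial
    interpolation at `nodes` in [-1, 1], evaluating the Lagrange cardinal
    functions in barycentric form on a fine grid."""
    nodes = np.asarray(nodes, dtype=float)
    diff = nodes[:, None] - nodes[None, :]
    np.fill_diagonal(diff, 1.0)
    w = 1.0 / diff.prod(axis=1)            # barycentric weights
    xs = np.linspace(-1, 1, n_eval)
    xs = xs[np.abs(xs[:, None] - nodes).min(axis=1) > 1e-12]  # skip the nodes
    terms = w / (xs[:, None] - nodes)      # w_j / (x - x_j)
    lam = np.abs(terms).sum(axis=1) / np.abs(terms.sum(axis=1))
    return lam.max()

n = 12
equi = np.linspace(-1, 1, n + 1)               # equispaced nodes
cheb = np.cos(np.pi * np.arange(n + 1) / n)    # Chebyshev (Gauss-Lobatto) nodes
lam_equi = lebesgue_constant(equi)             # grows exponentially in n
lam_cheb = lebesgue_constant(cheb)             # grows only logarithmically in n
```

Already at degree 12 the equispaced constant is well into the tens while the Chebyshev constant stays near 2, which is the 1D instability the abstract refers to.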
Contributors: Liu, Tony (Author) / Platte, Rodrigo B (Thesis advisor) / Renaut, Rosemary (Committee member) / Kaspar, David (Committee member) / Moustaoui, Mohamed (Committee member) / Motsch, Sebastien (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
I investigate two models of interacting agent systems: the first is motivated by the flocking and swarming behaviors in biological systems, while the second models opinion formation in social networks. In each setting, I define natural notions of convergence (to a ``flock'' and to a ``consensus'', respectively) and study the convergence properties of each in the limit as $t \rightarrow \infty$. Specifically, I provide sufficient conditions for the convergence of both models and conduct numerical experiments to study the resulting solutions.
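The simplest instance of both settings is the all-to-all linear alignment model, where each agent relaxes its state toward the group mean; the spread decays exponentially while the mean is conserved. A minimal sketch with illustrative parameter values:

```python
import numpy as np

def consensus_step(v, dt, kappa):
    """One explicit Euler step of the all-to-all alignment model
    dv_i/dt = kappa * (mean(v) - v_i): each agent relaxes toward the
    group average.  Parameter values here are illustrative."""
    return v + dt * kappa * (v.mean() - v)

rng = np.random.default_rng(3)
v = rng.normal(size=20)          # 20 agents with random initial states
mean0 = v.mean()
for _ in range(200):
    v = consensus_step(v, dt=0.1, kappa=1.0)
# deviations from the mean shrink by (1 - dt * kappa) per step, so the
# agents converge to a common value equal to the conserved initial mean
```

Heterogeneous interaction weights replace the global mean with a weighted neighborhood average, which is where the sufficient conditions for convergence become nontrivial.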
Contributors: Theisen, Ryan (Author) / Motsch, Sebastien (Thesis advisor) / Lanchier, Nicholas (Committee member) / Kostelich, Eric (Committee member) / Arizona State University (Publisher)
Created: 2018