Matching Items (12)
136330-Thumbnail Image.png
Description
We model communication among social insects as an interacting particle system in which individuals perform one of two tasks and neighboring sites anti-mimic one another. Parameters of our model are a probability of defection in (0, 1) and a relative cost c_i > 0 to the individual performing task i. We examine this process on complete graphs, bipartite graphs, and the integers, answering questions about the relationship between communication, defection rates, and the division of labor. Assuming the division of labor is ideal when exactly half of the colony is performing each task, we find that on some bipartite graphs and the integers it can eventually be made arbitrarily close to optimal if defection rates are sufficiently small. On complete graphs the fraction of individuals performing each task is also closest to one half when there is no defection, but is bounded by a constant dependent on the relative costs of each task.
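The anti-mimicry dynamics on the integers can be illustrated with a toy simulation on a ring (a finite stand-in for the integer lattice). This is a minimal sketch, not the thesis's exact construction: the update rule, the interpretation of defection (re-choosing a task uniformly at random), and all parameter values are assumptions for illustration.

```python
import random

def simulate_anti_mimicry(n_sites=200, steps=20000, eps=0.01, seed=1):
    """Toy anti-mimicry dynamics on a ring.  Each site performs task 0
    or 1; an updated site anti-mimics a random neighbor with probability
    1 - eps, and defects (picks a task uniformly at random, an assumed
    interpretation) with probability eps."""
    rng = random.Random(seed)
    tasks = [rng.randint(0, 1) for _ in range(n_sites)]
    for _ in range(steps):
        i = rng.randrange(n_sites)
        if rng.random() < eps:                    # defection
            tasks[i] = rng.randint(0, 1)
        else:                                     # anti-mimic a neighbor
            j = (i + rng.choice((-1, 1))) % n_sites
            tasks[i] = 1 - tasks[j]
    return sum(tasks) / n_sites                   # fraction doing task 1

frac = simulate_anti_mimicry()
```

With a small defection probability the fraction of sites performing each task stays near one half, echoing the near-optimal division of labor described above.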
Contributors: Arcuri, Alesandro Antonio (Author) / Lanchier, Nicolas (Thesis director) / Kang, Yun (Committee member) / Fewell, Jennifer (Committee member) / Barrett, The Honors College (Contributor) / School of International Letters and Cultures (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
137559-Thumbnail Image.png
Description
Serge Galam's voting systems and public debate models are used to model voting behaviors of two competing opinions in democratic societies. Galam assumes that individuals in the population are independently in favor of one opinion with a fixed probability p, making the initial number of that type of opinion a binomial random variable. This analysis revisits Galam's models from the point of view of the hypergeometric random variable by assuming the initial number of individuals in favor of an opinion is a fixed deterministic number. This assumption is more realistic, especially when analyzing small populations. Evolution of the models is based on majority rules, with a bias introduced when there is a tie. For the hierarchical voting system model, in order to derive the probability that opinion +1 would win, the analysis was done by reversing time and assuming that an individual in favor of opinion +1 wins. Then, working backwards, we counted the number of configurations at the next lowest level that could induce each possible configuration at the level above, and continued this process until reaching the bottom level, i.e., the initial population. Using this method, we were able to derive an explicit formula for the probability that an individual in favor of opinion +1 wins given any initial count of that opinion, for any group size greater than or equal to three. For the public debate model, we counted the total number of individuals in favor of opinion +1 at each time step and used this variable to define a random walk. Then, we used first-step analysis to derive an explicit formula for the probability that an individual in favor of opinion +1 wins given any initial count of that opinion for group sizes of three. The spatial public debate model evolves based on the proportional rule. For the spatial model, the most natural graphical representation to construct the process results in a model that is not mathematically tractable.
Thus, we defined a different graphical representation that is mathematically equivalent to the first graphical representation, but in this model it is possible to define a dual process that is mathematically tractable. Using this graphical representation we prove clustering in 1D and 2D and coexistence in higher dimensions following the same approach as for the voter model interacting particle system.
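The hierarchical majority rule is easy to state in its simpler binomial form (the thesis works with the hypergeometric version, fixing initial counts rather than a probability p). For groups of three, majority rule can never tie, so no tie-breaking bias is needed, and the probability that opinion +1 wins at the top of the hierarchy follows by iterating one level-to-level recursion:

```python
def galam_level(p):
    """Probability that a group of three elects opinion +1 when each
    member independently holds +1 with probability p (majority rule;
    odd group sizes cannot tie, so no bias is required)."""
    return p**3 + 3 * p**2 * (1 - p)

def winning_probability(p0, levels):
    """Iterate the group-of-three majority rule up `levels` levels of
    the hierarchy, starting from initial support p0."""
    p = p0
    for _ in range(levels):
        p = galam_level(p)
    return p

p_win = winning_probability(0.6, 10)
```

Starting from 60% support, ten levels of the hierarchy amplify the majority to near certainty; 0, 1/2, and 1 are the fixed points of the recursion.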
Contributors: Taylor, Nicole Robyn (Co-author) / Lanchier, Nicolas (Co-author, Thesis director) / Smith, Hal (Committee member) / Hurlbert, Glenn (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2013-05
132857-Thumbnail Image.png
Description
Predictive analytics have been used in a wide variety of settings, including healthcare,
sports, banking, and other disciplines. We use predictive analytics and modeling to
determine the impact of certain factors that increase the probability of a successful
fourth down conversion in the Power 5 conferences. The logistic regression models
predict the likelihood of going for fourth down with a 64% or more probability based on
2015-17 data obtained from ESPN’s college football API. Offense type, though important, is not directly measurable and was incorporated as a random effect. We found that distance to go,
play type, field position, and week of the season were key leading covariates in
predictability. On average, our model performed as much as 14% better than coaches
in 2018.
Contributors: Blinkoff, Joshua Ian (Co-author) / Voeller, Michael (Co-author) / Wilson, Jeffrey (Thesis director) / Graham, Scottie (Committee member) / Dean, W.P. Carey School of Business (Contributor) / Department of Information Systems (Contributor) / Department of Management and Entrepreneurship (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
132858-Thumbnail Image.png
Description
Predictive analytics have been used in a wide variety of settings, including healthcare, sports, banking, and other disciplines. We use predictive analytics and modeling to determine the impact of certain factors that increase the probability of a successful fourth down conversion in the Power 5 conferences. The logistic regression models predict the likelihood of going for fourth down with a 64% or more probability based on 2015-17 data obtained from ESPN’s college football API. Offense type, though important, is not directly measurable and was incorporated as a random effect. We found that distance to go, play type, field position, and week of the season were key leading covariates in predictability. On average, our model performed as much as 14% better than coaches in 2018.
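A fixed-effects sketch of such a logistic regression is shown below on synthetic data; the covariate names mirror the ones the study reports, but the data, coefficients, and use of the 64% threshold are illustrative only. A true random effect for offense type would require a mixed model (e.g. a mixed GLM in statsmodels) rather than scikit-learn.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical covariates standing in for the study's ESPN data:
# distance to go (yards), field position (yards from own goal), week.
X = np.column_stack([
    rng.integers(1, 11, n),      # distance to go
    rng.integers(1, 100, n),     # field position
    rng.integers(1, 15, n),      # week of season
])
# Synthetic outcome: shorter distance -> higher conversion chance.
logit = 1.5 - 0.4 * X[:, 0] + rng.normal(0, 1, n)
y = (logit > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
probs = model.predict_proba(X)[:, 1]
go_for_it = probs >= 0.64        # the paper's 64% decision threshold
```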
Contributors: Voeller, Michael Jeffrey (Co-author) / Blinkoff, Josh (Co-author) / Wilson, Jeffrey (Thesis director) / Graham, Scottie (Committee member) / Department of Information Systems (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
133482-Thumbnail Image.png
Description
Cryptocurrencies have become one of the most fascinating forms of currency and economics due to their fluctuating values and lack of centralization. This project attempts to use machine learning methods to effectively model in-sample data for Bitcoin and Ethereum using rule induction methods. The dataset is cleaned by removing entries with missing data. A new column is created to measure price difference, giving a more accurate analysis of the change in price. Eight relevant variables are selected using cross validation: the total number of bitcoins, the total size of the blockchain, the hash rate, mining difficulty, revenue from mining, transaction fees, the cost of transactions, and the estimated transaction volume. The in-sample data is modeled using a simple tree fit, first with one variable and then with eight. Using all eight variables, the in-sample model and data have a correlation of 0.6822657. The in-sample model is improved by first applying bootstrap aggregation (also known as bagging) to fit 400 decision trees to the in-sample data using one variable. Then the random forests technique is applied to the data using all eight variables. This results in a correlation between the model and data of 0.9443413. The random forests technique is then applied to an Ethereum dataset, resulting in a correlation of 0.6904798. Finally, an out-of-sample model is created for Bitcoin and Ethereum using random forests, with a benchmark correlation of 0.03 for financial data. The correlation between the training model and the testing data for Bitcoin was 0.06957639, while for Ethereum the correlation was -0.171125. In conclusion, it is confirmed that cryptocurrencies can have accurate in-sample models by applying the random forests method to a dataset. Out-of-sample modeling is more difficult, but in some cases performs better than is typical for financial data. It should also be noted that cryptocurrency data has similar properties to other related financial datasets, suggesting future potential for modeling cryptocurrency within the financial world.
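The random-forests workflow and the in-sample correlation check can be sketched as follows on synthetic data; the eight features here are placeholders, not the actual blockchain covariates, and the signal is fabricated for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 400
# Synthetic stand-ins for the eight blockchain covariates.
X = rng.normal(size=(n, 8))
# Synthetic daily price difference driven by a few of the features.
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.5 * X[:, 6] + rng.normal(0, 0.5, n)

# 400 trees, mirroring the number of bagged trees used in the study.
forest = RandomForestRegressor(n_estimators=400, random_state=0).fit(X, y)
in_sample_corr = np.corrcoef(forest.predict(X), y)[0, 1]
```

As the abstract notes, strong in-sample correlation is easy to achieve this way; the honest test is correlating predictions against held-out out-of-sample data.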
Contributors: Browning, Jacob Christian (Author) / Meuth, Ryan (Thesis director) / Jones, Donald (Committee member) / McCulloch, Robert (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
135086-Thumbnail Image.png
Description
We study two models of a competitive game in which players continuously receive points and wager them in one-on-one battles. In each model the loser of a battle has their points reset, while what the winner receives is what sets the two models apart. In the knockout model the winner receives no new points, while in the winner-takes-all model the points that the loser had are added to the winner's total. Recurrence properties are assessed for both models: the knockout model is recurrent except for the all-zero state, and the winner-takes-all model is transient, but retains some aspect of recurrence. In addition, we study the population-level allocation of points; for the winner-takes-all model we show explicitly that the proportion of individuals having any number j of points, j = 0, 1, ..., approaches a stationary distribution that can be computed recursively. Graphs of numerical simulations are included to exemplify the results proved.
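A toy simulation of the winner-takes-all model, under the assumption (not stated in the abstract) that each battle's winner is chosen uniformly at random. Note that battles conserve points, so after T rounds in which each of N players earns one point the total is exactly N·T.

```python
import random

def simulate_winner_takes_all(n_players=50, rounds=2000, seed=7):
    """Toy winner-takes-all dynamics.  Each round every player earns one
    point, then two randomly chosen players battle: the winner (picked
    uniformly at random here, an assumed rule) absorbs the loser's
    points and the loser resets to zero."""
    rng = random.Random(seed)
    points = [0] * n_players
    for _ in range(rounds):
        for i in range(n_players):
            points[i] += 1                      # everyone earns a point
        a, b = rng.sample(range(n_players), 2)  # two distinct players
        winner, loser = (a, b) if rng.random() < 0.5 else (b, a)
        points[winner] += points[loser]         # winner takes all
        points[loser] = 0                       # loser resets
    return points

pts = simulate_winner_takes_all()
```

Tallying the empirical proportion of players with j points from runs like this is one way to visualize the stationary distribution the thesis computes recursively.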
Contributors: VanKirk, Maxwell Joshua (Author) / Lanchier, Nicolas (Thesis director) / Foxall, Eric (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2016-12
134943-Thumbnail Image.png
Description
Prostate cancer is the second most common kind of cancer in men. Fortunately, it has a 99% survival rate. To achieve such a survival rate, a variety of aggressive therapies are used to treat prostate cancers that are caught early. Androgen deprivation therapy (ADT) is one such therapy, given to patients in cycles. This study attempted to analyze which factors in a group of 79 patients caused them to persevere with or discontinue the treatment. This was done using naïve Bayes classification, a machine-learning algorithm. The algorithm identified high testosterone as an indicator of a patient persevering with the treatment, but failed to achieve statistically significant predictive accuracy.
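A naïve Bayes sketch on synthetic data of the same cohort size; the feature names and the link between testosterone and persistence are constructed for illustration, not taken from the study's data.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n = 79  # same cohort size as the study; the data here are synthetic
# Hypothetical clinical features: testosterone level and PSA.
testosterone = rng.normal(400, 150, n)
psa = rng.normal(4, 2, n)
X = np.column_stack([testosterone, psa])
# Synthetic label: higher testosterone -> more likely to continue ADT,
# mirroring the indicator the study reports.
y = (testosterone + rng.normal(0, 150, n) > 400).astype(int)

clf = GaussianNB().fit(X, y)
accuracy = clf.score(X, y)
```

With only 79 patients, training accuracy like this is easy to inflate, which is consistent with the study's finding that prediction rates were not statistically significant.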
Contributors: Millea, Timothy Michael (Author) / Kostelich, Eric (Thesis director) / Kuang, Yang (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
137637-Thumbnail Image.png
Description
The Axelrod Model is an agent-based adaptive model that shows the effects of a mechanism of convergent social influence. Do local convergences generate global polarization? Will it be possible for all differences between individuals in a population comprised of neighbors to disappear? There are many mechanisms to approach this issue; the Axelrod Model is one of them.
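A standard Axelrod-model sketch: agents on a grid carry a vector of cultural features, interact with a neighbor with probability equal to their similarity, and copy one feature on which they differ. The grid size, number of features, and traits per feature below are arbitrary illustrative choices.

```python
import random

def axelrod_step(grid, L, F, rng):
    """One update of the Axelrod model on an L x L torus: pick an agent
    and a random neighbor; with probability equal to their cultural
    similarity, the agent copies one feature on which they differ."""
    i, j = rng.randrange(L), rng.randrange(L)
    di, dj = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    ni, nj = (i + di) % L, (j + dj) % L
    a, b = grid[i][j], grid[ni][nj]
    same = [k for k in range(F) if a[k] == b[k]]
    diff = [k for k in range(F) if a[k] != b[k]]
    if diff and rng.random() < len(same) / F:
        k = rng.choice(diff)
        a[k] = b[k]

L, F, q = 10, 5, 3          # grid side, features per agent, traits per feature
rng = random.Random(42)
grid = [[[rng.randrange(q) for _ in range(F)] for _ in range(L)] for _ in range(L)]
for _ in range(50000):
    axelrod_step(grid, L, F, rng)
# Count the distinct cultures that survive.
cultures = {tuple(grid[i][j]) for i in range(L) for j in range(L)}
```

Running this with different F and q illustrates the model's central question: similarity-driven imitation shrinks local differences, yet distinct cultural domains can persist globally.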
Contributors: Yu, Yili (Author) / Lanchier, Nicolas (Thesis director) / Kang, Yun (Committee member) / Brooks, Dan (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Finance (Contributor)
Created: 2013-05
153557-Thumbnail Image.png
Description
The purpose of this research is to efficiently analyze certain data provided and to see if a useful trend can be observed as a result. This trend can be used to analyze certain probabilities. There are three main pieces of data being analyzed in this research: the δ of the call and put options, the %B value of the stock, and the amount of time until expiration of the stock option. The %B value is the most important. The purpose of analyzing the data is to see the relationship between the variables and, given certain values, the probability that the trade makes money. This result will be used in finding the probability that certain trades make money over a period of time.

Since options are so dependent on probability, this research specifically analyzes stock options rather than stocks themselves. Stock options have value like stocks, except options are leveraged. The most common model used to calculate the value of an option is the Black-Scholes Model [1]. There are five main variables the Black-Scholes Model uses to calculate the overall value of an option. These variables are θ, δ, γ, v, and ρ. The variable θ is the rate of change in the price of the option due to time decay, δ is the rate of change of the option’s price due to the stock’s changing value, γ is the rate of change of δ, v represents the rate of change of the value of the option in relation to the stock’s volatility, and ρ represents the rate of change in the value of the option in relation to the interest rate [2]. In this research, the %B value of the stock is analyzed along with the time until expiration of the option. All options have the same δ. This is due to the fact that all the options analyzed in this experiment are less than two months from expiration, and the value of δ reveals how far in or out of the money an option is.
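Under the Black-Scholes model (assuming no dividends), δ has the closed form Φ(d1) for a call and Φ(d1) − 1 for a put, which a short computation makes concrete:

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_delta(S, K, r, sigma, T):
    """Black-Scholes delta of a European call and put (no dividends).
    S: spot, K: strike, r: risk-free rate, sigma: volatility, T: years."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    call_delta = norm_cdf(d1)
    return call_delta, call_delta - 1.0   # put delta via put-call parity

# At-the-money example: delta near 0.54 for the call, -0.46 for the put.
call_delta, put_delta = bs_delta(S=100, K=100, r=0.0, sigma=0.2, T=1.0)
```

The sign and magnitude of δ indicate how far in or out of the money an option is, which is the sense in which the text uses it.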

The machine learning technique used to analyze the data and the probability is support vector machines. Support vector machines analyze data that can be classified into one of two or more groups and attempt to find a pattern in the data in order to develop a model which reliably classifies similar, future data into the correct group. This is used to analyze the outcome of stock options.
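A support vector machine sketch of the trade-classification idea on synthetic data; the features (%B and days to expiration) follow the text, but the labels and decision boundary are fabricated for illustration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300
# Hypothetical features mirroring the study's inputs (synthetic data,
# not the actual options dataset).
percent_b = rng.uniform(0, 1, n)           # %B value of the stock
days_left = rng.integers(1, 60, n)         # days until expiration (< 2 months)
X = np.column_stack([percent_b, days_left])
# Synthetic label: trades with mid-range %B more often profitable.
y = (np.abs(percent_b - 0.5) + rng.normal(0, 0.1, n) < 0.25).astype(int)

# Scale features before the RBF kernel, since the two are on
# very different numeric ranges.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
train_accuracy = clf.score(X, y)
```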

Contributors: Reeves, Michael (Author) / Richa, Andrea (Thesis advisor) / McCarville, Daniel R. (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2015
132766-Thumbnail Image.png
Description
This paper proposes that voter decision making is determined by more than just the policy positions adopted by the candidates in the election as proposed by Anthony Downs (1957). Using a vector valued voting model proposed by William Foster (2014), voter behavior can be described by a mathematical model. Voters assign scores to candidates based on both policy and non-policy considerations; voters then decide which candidate they support based on which has a higher candidate score. The traditional assumption that most of the population will vote is replaced by a function describing the probability of voting based on the candidate scores assigned by individual voters. If a voter's likelihood of voting is not certain, but rather modelled by a sigmoid curve, it has radical implications for party decisions and actions taken during an election cycle. The model also includes a significant interaction term between the candidate scores and the differential between the scores, which enhances the Downsian model. The thesis is presented in a similar manner to Downs' original presentation, including several allegorical and hypothetical examples of the model in action. The results of the model reveal that single issue voters can have a significant impact on election outcomes, and that the weight of non-policy considerations is high enough that political parties would spend large sums of money on campaigning. Future research will include creating an experiment to verify the interaction terms, as well as adjusting the model for individual costs so that more empirical analysis may be completed.
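The sigmoid turnout idea can be sketched numerically; every functional form below (the logistic curve, its parameters, and the score-differential interaction) is a hypothetical stand-in for illustration, not Foster's actual specification.

```python
from math import exp

def turnout_probability(score, midpoint=5.0, steepness=1.0):
    """Sigmoid turnout curve: the probability a voter casts a ballot
    rises smoothly with the score of their preferred candidate."""
    return 1.0 / (1.0 + exp(-steepness * (score - midpoint)))

def voter_decision(score_a, score_b, interaction=0.1):
    """Toy decision rule: the voter supports whichever candidate scores
    higher, and turns out with a probability that grows with that score
    plus an interaction with the score differential."""
    diff = abs(score_a - score_b)
    top = max(score_a, score_b)
    p_vote = turnout_probability(top + interaction * top * diff)
    supports_a = score_a > score_b
    return supports_a, p_vote

supports_a, p_vote = voter_decision(7.0, 4.0)
```

Because turnout is no longer certain, a party can gain votes either by raising its own candidate's score or by widening the differential, which is the mechanism behind the campaign-spending conclusion above.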
Contributors: Coulter, Jarod Maxwell (Author) / Foster, William (Thesis director) / Goegan, Brian (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Economics (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05