Matching Items (7)

Description

The purpose of this thesis is to examine the events surrounding the creation of the oboe and its rapid spread throughout Europe during the mid- to late seventeenth century. The first section describes similar instruments that existed for thousands of years before the invention of the oboe. The following sections examine reasons and methods for the oboe's invention, as well as possible causes of its migration from its starting place in France to other European countries and, eventually, many other places around the world. I conclude that the oboe was invented to suit the needs of composers in the court of Louis XIV, and that it was brought to other countries by French performers who left France for many reasons, including to escape the authority of composer Jean-Baptiste Lully and, in some cases, to promote French culture abroad.
Contributors: Cook, Mary Katherine (Author) / Schuring, Martin (Thesis director) / Micklich, Albie (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Music (Contributor)
Created: 2015-05
Description

We model communication among social insects as an interacting particle system in which individuals perform one of two tasks and neighboring sites anti-mimic one another. The parameters of our model are a probability of defection in (0, 1) and a relative cost c_i > 0 to the individual performing task i. We examine this process on complete graphs, bipartite graphs, and the integers, answering questions about the relationship between communication, defection rates, and the division of labor. Assuming the division of labor is ideal when exactly half of the colony is performing each task, we find that on some bipartite graphs and on the integers it can eventually be made arbitrarily close to optimal if defection rates are sufficiently small. On complete graphs the fraction of individuals performing each task is also closest to one half when there is no defection, but is bounded by a constant that depends on the relative costs of each task.
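To make the dynamics above concrete, here is a minimal Python sketch of an anti-mimicry process on the complete graph. It is an illustration only, not the thesis's exact construction: the defection probability p, the cost pair costs standing in for c_0 and c_1, and the assumption that a defecting individual switches to the cheaper task are placeholders chosen for the example.

```python
import random

def simulate(n=200, p=0.05, costs=(1.0, 1.5), steps=50_000, seed=0):
    """Toy anti-mimicry dynamics on the complete graph (illustrative only).

    Each site performs task 0 or 1. At every step a random individual is
    picked; with probability p it defects to the cheaper task (an assumed
    interpretation of defection), otherwise it anti-mimics a uniformly
    chosen neighbor by adopting the opposite task.
    """
    rng = random.Random(seed)
    tasks = [rng.randint(0, 1) for _ in range(n)]
    cheap = 0 if costs[0] <= costs[1] else 1
    for _ in range(steps):
        i = rng.randrange(n)
        if rng.random() < p:              # defection
            tasks[i] = cheap
        else:                             # anti-mimic: do the opposite of a neighbor
            j = rng.randrange(n - 1)
            j = j if j < i else j + 1     # any other site on the complete graph
            tasks[i] = 1 - tasks[j]
    return sum(tasks) / n                 # fraction of the colony on task 1

if __name__ == "__main__":
    print("fraction on task 1:", simulate())
```

In this toy version, setting p = 0 keeps the returned fraction hovering near one half, echoing the abstract's observation that the division of labor is closest to optimal in the absence of defection.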
Contributors: Arcuri, Alesandro Antonio (Author) / Lanchier, Nicolas (Thesis director) / Kang, Yun (Committee member) / Fewell, Jennifer (Committee member) / Barrett, The Honors College (Contributor) / School of International Letters and Cultures (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description

Serge Galam's voting systems and public debate models are used to model the voting behavior of two competing opinions in democratic societies. Galam assumes that individuals in the population are independently in favor of one opinion with a fixed probability p, making the initial number of that type of opinion a binomial random variable. This analysis revisits Galam's models from the point of view of the hypergeometric random variable by assuming the initial number of individuals in favor of an opinion is a fixed deterministic number. This assumption is more realistic, especially when analyzing small populations. Evolution of the models is based on majority rules, with a bias introduced when there is a tie. For the hierarchical voting system model, in order to derive the probability that opinion +1 wins, the analysis was done by reversing time and assuming that an individual in favor of opinion +1 wins. Then, working backwards, we counted the number of configurations at the next lowest level that could induce each possible configuration at the level above, and continued this process until reaching the bottom level, i.e., the initial population. Using this method, we derived an explicit formula for the probability that opinion +1 wins given any initial count of that opinion, for any group size greater than or equal to three. For the public debate model, we counted the total number of individuals in favor of opinion +1 at each time step and used this variable to define a random walk. Then, we used first-step analysis to derive an explicit formula for the probability that opinion +1 wins given any initial count of that opinion for groups of size three. The spatial public debate model evolves according to the proportional rule. For the spatial model, the most natural graphical representation of the process results in a model that is not mathematically tractable, so we defined a different graphical representation that is mathematically equivalent to the first but for which it is possible to define a dual process that is tractable. Using this graphical representation we prove clustering in 1D and 2D and coexistence in higher dimensions, following the same approach as for the voter model interacting particle system.
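As an illustration of the hierarchical voting system described above, the sketch below estimates by Monte Carlo the probability that opinion +1 wins when the initial count of +1 individuals is a fixed deterministic number, with groups of size three. The tie-breaking direction (relevant only for even group sizes), the function names, and the Monte Carlo approach are assumptions of this example; the thesis itself derives an exact formula by working backwards through the hierarchy.

```python
import random

def galam_round(opinions, group_size, rng, tie_winner=+1):
    """One level of the hierarchy: shuffle the population, split it into
    groups, and let each group elect a representative by majority rule.
    Ties (possible only for even group sizes) go to `tie_winner`."""
    rng.shuffle(opinions)
    reps = []
    for k in range(0, len(opinions), group_size):
        group = opinions[k:k + group_size]
        s = sum(group)
        reps.append(tie_winner if s == 0 else (+1 if s > 0 else -1))
    return reps

def win_probability(n_plus, n_total, group_size=3, trials=20_000, seed=0):
    """Monte Carlo estimate of P(opinion +1 wins) starting from exactly
    `n_plus` individuals in favor of +1 out of `n_total` (a deterministic
    initial count, i.e. the hypergeometric point of view). `n_total` is
    assumed to be a power of `group_size` so the hierarchy closes neatly."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        opinions = [+1] * n_plus + [-1] * (n_total - n_plus)
        while len(opinions) > 1:
            opinions = galam_round(opinions, group_size, rng)
        wins += opinions[0] == +1
    return wins / trials

if __name__ == "__main__":
    # e.g. 14 of 27 individuals initially hold opinion +1, groups of three
    print(win_probability(14, 27))
```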
Contributors: Taylor, Nicole Robyn (Co-author) / Lanchier, Nicolas (Co-author, Thesis director) / Smith, Hal (Committee member) / Hurlbert, Glenn (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2013-05
Description

Cryptocurrencies have become one of the most fascinating forms of currency and economics due to their fluctuating values and lack of centralization. This project attempts to use machine learning methods to effectively model in-sample data for Bitcoin and Ethereum using rule induction methods. The dataset is cleaned by removing entries with missing data. A new column is created to measure the price difference, allowing a more accurate analysis of the change in price. Eight relevant variables are selected using cross validation: the total number of bitcoins, the total size of the blockchain, the hash rate, mining difficulty, revenue from mining, transaction fees, the cost of transactions, and the estimated transaction volume. The in-sample data is modeled using a simple tree fit, first with one variable and then with all eight. Using all eight variables, the in-sample model and data have a correlation of 0.6822657. The in-sample model is improved by first applying bootstrap aggregation (also known as bagging) to fit 400 decision trees to the in-sample data using one variable, and then applying the random forests technique to the data using all eight variables. This results in a correlation between the model and data of 0.9443413. The random forests technique is then applied to an Ethereum dataset, resulting in a correlation of 0.6904798. Finally, an out-of-sample model is created for Bitcoin and Ethereum using random forests, against a benchmark correlation of 0.03 for financial data. The correlation between the training model and the testing data was 0.06957639 for Bitcoin and -0.171125 for Ethereum. In conclusion, cryptocurrencies can be modeled accurately in sample by applying the random forests method to a dataset. Out-of-sample modeling is more difficult, but in some cases performs better than is typical for financial data. Cryptocurrency data also shares properties with other related financial datasets, suggesting future potential for modeling cryptocurrency systems within the financial world.
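A rough sketch of the random-forests step described above is given below, using scikit-learn. The CSV path, the market_price column used to form the price-difference target, and the feature column names are hypothetical stand-ins for the eight variables listed in the abstract; the in-sample and out-of-sample figures are computed the way the abstract reports them, as the correlation between model predictions and observed values.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical column names standing in for the eight selected variables
FEATURES = ["total_bitcoins", "blockchain_size", "hash_rate", "difficulty",
            "miners_revenue", "transaction_fees", "cost_per_transaction",
            "estimated_transaction_volume"]

def fit_and_score(csv_path="bitcoin.csv"):
    df = pd.read_csv(csv_path).dropna()            # drop entries with missing data
    df["price_diff"] = df["market_price"].diff()   # new column: change in price
    df = df.dropna()
    X_train, X_test, y_train, y_test = train_test_split(
        df[FEATURES], df["price_diff"], test_size=0.2, shuffle=False)
    model = RandomForestRegressor(n_estimators=400, random_state=0)
    model.fit(X_train, y_train)
    in_sample = np.corrcoef(model.predict(X_train), y_train)[0, 1]
    out_of_sample = np.corrcoef(model.predict(X_test), y_test)[0, 1]
    return in_sample, out_of_sample

if __name__ == "__main__":
    print(fit_and_score())
```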
Contributors: Browning, Jacob Christian (Author) / Meuth, Ryan (Thesis director) / Jones, Donald (Committee member) / McCulloch, Robert (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

We study two models of a competitive game in which players continuously receive points and wager them in one-on-one battles. In each model the loser of a battle has their points reset, while what the winner receives is what sets the two models apart. In the knockout model the winner receives no new points, while in the winner-takes-all model the loser's points are added to the winner's total. Recurrence properties are assessed for both models: the knockout model is recurrent except for the all-zero state, while the winner-takes-all model is transient but retains some aspects of recurrence. In addition, we study the population-level allocation of points; for the winner-takes-all model we show explicitly that the proportion of individuals holding any number j of points, j = 0, 1, ..., approaches a stationary distribution that can be computed recursively. Graphs of numerical simulations are included to illustrate the results proved.
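The sketch below simulates a toy version of the two wagering models to look at the population-level point distribution. Several details are assumptions made for the example rather than the thesis's definitions: every player earns one point per step, battles pair two uniformly chosen players, and the battle winner is decided by a fair coin flip.

```python
import random
from collections import Counter

def simulate(n_players=100, steps=100_000, winner_takes_all=True, seed=0):
    """Toy wagering game (assumed details): each step every player gains one
    point, then two distinct players battle; the winner is chosen by a fair
    coin flip, the loser's points are reset to zero, and in the
    winner-takes-all variant the winner absorbs the loser's points."""
    rng = random.Random(seed)
    pts = [0] * n_players
    for _ in range(steps):
        pts = [p + 1 for p in pts]               # everyone continuously earns points
        i, j = rng.sample(range(n_players), 2)   # one-on-one battle
        winner, loser = (i, j) if rng.random() < 0.5 else (j, i)
        if winner_takes_all:
            pts[winner] += pts[loser]            # winner absorbs the wagered points
        pts[loser] = 0                           # loser is reset
    return Counter(pts)                          # empirical distribution of point counts

if __name__ == "__main__":
    dist = simulate()
    print(sorted(dist.items())[:10])             # how many players hold j points, small j
```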
Contributors: VanKirk, Maxwell Joshua (Author) / Lanchier, Nicolas (Thesis director) / Foxall, Eric (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2016-12
Description

The Axelrod Model is an agent-based adaptive model. It shows the effects of a mechanism of convergent social influence. Do local convergences generate global polarization? Will it be possible for all differences between individuals in a population comprised of neighbors to disappear? There are many mechanisms for approaching this issue; the Axelrod Model is one of them.
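For reference, the following sketch implements the standard Axelrod dynamics on a periodic L x L lattice: each site carries F cultural features with q possible traits, interacts with a random neighbor with probability equal to the fraction of features they already share, and on interaction copies one feature on which they still differ. The lattice size, F, q, and the number of steps are arbitrary choices for the example; counting the distinct surviving cultures gives a crude look at consensus versus polarization.

```python
import random

def axelrod(L=20, F=5, q=10, steps=200_000, seed=0):
    """Axelrod's model of cultural dissemination on a periodic L x L lattice."""
    rng = random.Random(seed)
    culture = {(x, y): [rng.randrange(q) for _ in range(F)]
               for x in range(L) for y in range(L)}

    def neighbors(x, y):
        return [((x + dx) % L, (y + dy) % L)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

    for _ in range(steps):
        site = (rng.randrange(L), rng.randrange(L))
        nbr = rng.choice(neighbors(*site))
        a, b = culture[site], culture[nbr]
        shared = sum(x == y for x, y in zip(a, b))
        # interact with probability equal to the cultural similarity
        if 0 < shared < F and rng.random() < shared / F:
            k = rng.choice([i for i in range(F) if a[i] != b[i]])
            a[k] = b[k]                      # local convergence: copy one differing feature
    # one surviving culture = global consensus; several = polarized cultural regions
    return len({tuple(v) for v in culture.values()})

if __name__ == "__main__":
    print("distinct cultures remaining:", axelrod())
```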
Contributors: Yu, Yili (Author) / Lanchier, Nicolas (Thesis director) / Kang, Yun (Committee member) / Brooks, Dan (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Department of Finance (Contributor)
Created: 2013-05
Description

This paper proposes that voter decision making is determined by more than just the policy positions adopted by the candidates in an election, as proposed by Anthony Downs (1957). Using a vector-valued voting model proposed by William Foster (2014), voter behavior can be described by a mathematical model. Voters assign scores to candidates based on both policy and non-policy considerations, and then decide which candidate they support based on which has the higher candidate score. The traditional assumption that most of the population will vote is replaced by a function describing the probability of voting based on the candidate scores assigned by individual voters. If the voter's likelihood of voting is not certain, but rather modeled by a sigmoid curve, this has radical implications for party decisions and actions taken during an election cycle. The model also includes a significant interaction term between the candidate scores and the differential between the scores, which extends the Downsian model. The thesis is presented in a manner similar to Downs' original work, including several allegorical and hypothetical examples of the model in action. The results of the model reveal that single-issue voters can have a significant impact on election outcomes, and that the weight of non-policy considerations is high enough that political parties would spend large sums of money on campaigning. Future research will include designing an experiment to verify the interaction terms, as well as adjusting the model for individual costs so that more empirical analysis may be completed.
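The sketch below illustrates the general shape of such a model: each voter scores both candidates on policy and non-policy considerations, and a sigmoid of the preferred candidate's score and the score differential gives the probability of turning out at all. The weights, the sigmoid parameters, and the way the interaction term enters are placeholders invented for this example, not Foster's actual specification.

```python
import math
import random

def candidate_score(policy, non_policy, w_policy=1.0, w_other=1.0):
    """Voter's score for a candidate: a weighted sum of policy and
    non-policy considerations (weights are illustrative)."""
    return w_policy * policy + w_other * non_policy

def turnout_probability(score_a, score_b, steepness=1.5, midpoint=1.0):
    """Sigmoid turnout: the chance of voting rises with the preferred
    candidate's score and with the differential between the two scores
    (a stand-in for the interaction term described in the abstract)."""
    x = max(score_a, score_b) + abs(score_a - score_b)
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

def simulate_election(n_voters=10_000, seed=0):
    """Draw random voter evaluations and tally votes among those who turn out."""
    rng = random.Random(seed)
    votes = {"A": 0, "B": 0}
    for _ in range(n_voters):
        s_a = candidate_score(rng.gauss(0.5, 0.3), rng.gauss(0.5, 0.3))
        s_b = candidate_score(rng.gauss(0.5, 0.3), rng.gauss(0.5, 0.3))
        if rng.random() < turnout_probability(s_a, s_b):   # abstain otherwise
            votes["A" if s_a > s_b else "B"] += 1
    return votes

if __name__ == "__main__":
    print(simulate_election())
```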
Contributors: Coulter, Jarod Maxwell (Author) / Foster, William (Thesis director) / Goegan, Brian (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Economics (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05