Matching Items (10)

Probabilistic Voting: An Addition to the Downs Two-Party Voting Model

Description

This paper proposes that voter decision making is determined by more than just the policy positions adopted by the candidates in the election, as proposed by Anthony Downs (1957). Using a vector-valued voting model proposed by William Foster (2014), voter behavior can be described mathematically. Voters assign scores to candidates based on both policy and non-policy considerations, then decide which candidate they support based on which has the higher score. The traditional assumption that most of the population will vote is replaced by a function describing the probability of voting based on the candidate scores assigned by individual voters. If a voter's likelihood of voting is not certain, but instead modeled by a sigmoid curve, there are radical implications for party decisions and actions taken during an election cycle. The model also includes a significant interaction term between the candidate scores and the differential between the scores, which enhances the Downsian model. The thesis is presented in a manner similar to Downs' original work, including several allegorical and hypothetical examples of the model in action. The results of the model reveal that single-issue voters can have a significant impact on election outcomes, and that the weight of non-policy considerations is high enough that political parties would spend large sums of money on campaigning. Future research will include designing an experiment to verify the interaction terms, as well as adjusting the model for individual costs so that more empirical analysis may be completed.
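The sigmoid turnout mechanism described above can be sketched in a few lines of Python. This is an illustrative reading of the model, not Foster's actual specification; the steepness parameter k and the choice to base turnout on the preferred candidate's score are assumptions.

```python
import math

def turnout_probability(score_a, score_b, k=1.0):
    # Sigmoid turnout: the higher the voter's score for their preferred
    # candidate, the more likely they are to vote at all (assumption:
    # turnout depends on the preferred candidate's score; k sets steepness).
    preferred = max(score_a, score_b)
    return 1.0 / (1.0 + math.exp(-k * preferred))

def vote(score_a, score_b):
    # The voter supports whichever candidate has the higher score.
    return "A" if score_a > score_b else "B"
```

Under a rule like this, a voter who is lukewarm about both candidates rarely turns out, which is the source of the model's departure from the Downsian certainty-of-voting assumption.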

Date Created
2019-05

Utilizing Machine Learning Methods to Model Cryptocurrency

Description

Cryptocurrencies have become one of the most fascinating forms of currency and economics due to their fluctuating values and lack of centralization. This project attempts to use machine learning methods to effectively model in-sample data for Bitcoin and Ethereum using rule induction methods. The dataset is cleaned by removing entries with missing data, and a new column is created to measure price difference, allowing a more accurate analysis of the change in price. Eight relevant variables are selected using cross-validation: the total number of bitcoins, the total size of the blockchain, the hash rate, mining difficulty, revenue from mining, transaction fees, the cost of transactions, and the estimated transaction volume. The in-sample data is modeled using a simple tree fit, first with one variable and then with all eight. Using all eight variables, the in-sample model and data have a correlation of 0.6822657. The in-sample model is improved by first applying bootstrap aggregation (also known as bagging) to fit 400 decision trees to the in-sample data using one variable. Then the random forests technique is applied to the data using all eight variables, resulting in a correlation between the model and data of 0.9443413. The random forests technique is then applied to an Ethereum dataset, yielding a correlation of 0.6904798. Finally, an out-of-sample model is created for Bitcoin and Ethereum using random forests, against a benchmark correlation of 0.03 for financial data. The correlation between the training model and the testing data was 0.06957639 for Bitcoin and -0.171125 for Ethereum. In conclusion, cryptocurrencies can be modeled accurately in-sample by applying the random forests method to a dataset. Out-of-sample modeling is more difficult, but in some cases performs better than is typical for financial data. It should also be noted that cryptocurrency data has properties similar to other financial datasets, suggesting future potential for modeling cryptocurrency within the financial world.
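The correlations quoted above measure agreement between model predictions and observed values. Assuming the standard Pearson coefficient (the thesis does not spell out its implementation), the metric can be computed as:

```python
import math

def pearson_corr(xs, ys):
    # Pearson correlation between model predictions and observed values,
    # the goodness-of-fit measure reported for each model.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Values near 1 indicate predictions that move almost in lockstep with the data; values near 0, as in the out-of-sample benchmarks, indicate little linear relationship.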

Date Created
2018-05

Linear Modeling for Insurance Ratemaking/Reserving: Modeling Loss Development Factors for Catastrophe Claims

Description

Catastrophe events occur rather infrequently, but upon their occurrence can lead to colossal losses for insurance companies. Due to their size and volatility, catastrophe losses are often treated separately from other insurance losses. In fact, many property and casualty insurance companies feature a department or team which focuses solely on modeling catastrophes. Setting reserves for catastrophe losses is difficult due to their unpredictable and often long-tailed nature. Determining loss development factors (LDFs) to estimate the ultimate loss amounts for catastrophe events is one method for setting reserves. In an attempt to help Company XYZ set more accurate reserves, the research conducted focuses on estimating LDFs for catastrophes which have already occurred and been settled. Furthermore, the research describes the process used to build a linear model in R to estimate LDFs for Company XYZ's closed catastrophe claims from 2001 to 2016. This linear model was used to predict a catastrophe's LDFs based on the age in weeks of the catastrophe during the first year. Back-testing was also performed, as was a comparison between the estimated ultimate losses and actual losses. Future research considerations are also proposed.
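The thesis builds its model in R; the underlying LDF arithmetic can be sketched language-agnostically (here in Python, with hypothetical numbers rather than Company XYZ's data):

```python
def loss_development_factors(cumulative_losses):
    # Age-to-age LDFs: the ratio of cumulative losses at each successive
    # evaluation age (here, weeks since the catastrophe occurred).
    return [later / earlier
            for earlier, later in zip(cumulative_losses, cumulative_losses[1:])]

def ultimate_loss(latest_loss, remaining_ldfs):
    # Ultimate loss = latest reported loss developed by the product of the
    # remaining age-to-age factors (the age-to-ultimate factor).
    factor = 1.0
    for f in remaining_ldfs:
        factor *= f
    return latest_loss * factor
```

Age-to-age factors chain multiplicatively, so predicted LDFs at early ages translate directly into an estimated ultimate loss for reserving.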

Date Created
2018-05

A Model for the Division of Labor Through Network Interactions

Description

We model communication among social insects as an interacting particle system in which individuals perform one of two tasks and neighboring sites anti-mimic one another. Parameters of our model are a probability of defection ε ∈ (0, 1) and a relative cost c_i > 0 to the individual performing task i. We examine this process on complete graphs, bipartite graphs, and the integers, answering questions about the relationship between communication, defection rates, and the division of labor. Assuming the division of labor is ideal when exactly half of the colony is performing each task, we find that on some bipartite graphs and on the integers it can eventually be made arbitrarily close to optimal if defection rates are sufficiently small. On complete graphs the fraction of individuals performing each task is also closest to one half when there is no defection, but is bounded by a constant dependent on the relative costs of each task.
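A minimal simulation sketch of the anti-mimicry dynamic, in Python. The precise transition rates in the thesis may differ, so the update rule below (anti-mimic with probability 1 − ε, defect by copying with probability ε) is an illustrative assumption:

```python
import random

def anti_mimic_step(tasks, neighbors, eps=0.05, rng=random):
    # One asynchronous update: individual i looks at a random neighbor j
    # and, with probability 1 - eps, performs the task j is NOT doing
    # (anti-mimicry); with probability eps it defects and copies j.
    i = rng.randrange(len(tasks))
    j = rng.choice(neighbors[i])
    tasks[i] = tasks[j] if rng.random() < eps else 1 - tasks[j]
    return tasks
```

On a complete graph every pair of sites are neighbors, which is the easiest case to experiment with numerically.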

Date Created
2015-05

Galam's Voting Systems and Public Debate Models Revisited

Description

Serge Galam's voting systems and public debate models are used to model voting behaviors of two competing opinions in democratic societies. Galam assumes that individuals in the population are independently in favor of one opinion with a fixed probability p, making the initial number of that type of opinion a binomial random variable. This analysis revisits Galam's models from the point of view of the hypergeometric random variable by assuming the initial number of individuals in favor of an opinion is a fixed deterministic number. This assumption is more realistic, especially when analyzing small populations. Evolution of the models is based on majority rules, with a bias introduced when there is a tie. For the hierarchical voting system model, in order to derive the probability that opinion +1 would win, the analysis was done by reversing time and assuming that an individual in favor of opinion +1 wins. Then, working backwards, we counted the number of configurations at the next lowest level that could induce each possible configuration at the level above, and continued this process until reaching the bottom level, i.e., the initial population. Using this method, we were able to derive an explicit formula for the probability that an individual in favor of opinion +1 wins given any initial count of that opinion, for any group size greater than or equal to three. For the public debate model, we counted the total number of individuals in favor of opinion +1 at each time step and used this variable to define a random walk. Then, we used first-step analysis to derive an explicit formula for the probability that an individual in favor of opinion +1 wins given any initial count of that opinion for group sizes of three. The spatial public debate model evolves based on the proportional rule. For the spatial model, the most natural graphical representation of the process results in a model that is not mathematically tractable. Thus, we defined a different graphical representation that is mathematically equivalent to the first, but in which it is possible to define a dual process that is mathematically tractable. Using this graphical representation we prove clustering in 1D and 2D and coexistence in higher dimensions, following the same approach as for the voter model interacting particle system.
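One level of the hierarchical majority-rule dynamic can be sketched as follows. This is a simulation-style illustration; the thesis derives the winning probability analytically, and the tie-breaking bias shown only matters for even group sizes:

```python
def group_winner(group, tie_breaker=1):
    # Majority rule within one group of opinions (+1 or -1); ties are
    # broken in favor of opinion `tie_breaker` (the model's bias).
    plus = sum(1 for v in group if v == 1)
    minus = len(group) - plus
    if plus > minus:
        return 1
    if minus > plus:
        return -1
    return tie_breaker

def next_level(population, r=3):
    # One step up the hierarchy: partition the current level into groups
    # of size r; each group sends its majority opinion to the level above.
    return [group_winner(population[i:i + r])
            for i in range(0, len(population), r)]
```

Iterating next_level until one opinion remains mirrors the bottom-to-top evolution whose probabilities the thesis computes by working backwards in time.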

Date Created
2013-05

Homeward Bound: An Overview of Continuing Care at Home

Description

AARP estimates that 90% of seniors wish to remain in their homes during retirement. Seniors need assistance as they age; historically, they have received it from family members, nursing homes, or Continuing Care Retirement Communities. For seniors not wanting any of these options, there have been very few alternatives. Now, the emergence of the Continuing Care at Home (CCaH) program is providing hope for a different method of elder care moving forward. CCaH programs offer services such as skilled nursing care, care coordination, emergency response systems, aid with personal and health care, and transportation. Such services allow seniors to continue to live in their own homes with assistance as their health deteriorates over time. Currently, only 30 CCaH programs exist. With the growth of the elderly population in the coming years, this model seems poised for growth.

Date Created
2019-05

REVIEW OF THE AXELROD MODEL

Description

The Axelrod Model is an agent-based adaptive model that shows the effects of a mechanism of convergent social influence. Do local convergences generate global polarization? Will it be possible for all differences between individuals in a population comprised of neighbors to disappear? There are many mechanisms to approach this issue; the Axelrod Model is one of them.
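A minimal sketch of the Axelrod dynamic in Python, for concreteness; the number of features and traits and the ring topology are illustrative choices rather than the review's specification:

```python
import random

def similarity(a, b):
    # Fraction of cultural features on which two agents agree.
    return sum(x == y for x, y in zip(a, b)) / len(a)

def axelrod_step(cultures, rng=random):
    # One interaction on a ring of agents: pick an agent and one of its
    # two neighbors; with probability equal to their similarity, the
    # agent adopts the neighbor's trait on a feature where they differ.
    n = len(cultures)
    i = rng.randrange(n)
    j = (i + rng.choice([-1, 1])) % n
    a, b = cultures[i], cultures[j]
    if rng.random() < similarity(a, b):
        diffs = [k for k in range(len(a)) if a[k] != b[k]]
        if diffs:
            k = rng.choice(diffs)
            a[k] = b[k]
```

Interaction becomes more likely the more similar two neighbors already are; this is precisely the convergent-influence mechanism that can nonetheless freeze global differences in place when neighbors share no features at all.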

Date Created
2013-05

A Study of Two Models of Competitive Interaction

Description

We study two models of a competitive game in which players continuously receive points and wager them in one-on-one battles. In each model the loser of a battle has their points reset, while what the winner receives is what sets the two models apart. In the knockout model the winner receives no new points, while in the winner-takes-all model the points that the loser had are added to the winner's total. Recurrence properties are assessed for both models: the knockout model is recurrent except for the all-zero state, and the winner-takes-all model is transient, but retains some aspects of recurrence. In addition, we study the population-level allocation of points; for the winner-takes-all model we show explicitly that the proportion of individuals having any number j of points, j = 0, 1, ..., approaches a stationary distribution that can be computed recursively. Graphs of numerical simulations are included to illustrate the results proved.
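The two transfer rules can be contrasted in a small sketch (the tie-breaking choice below is hypothetical, and the continuous arrival of points between battles is elided):

```python
def battle(points, i, j, winner_takes_all=False):
    # Players i and j wager their points; the higher total wins (here,
    # ties go to the lower index, an illustrative choice). The loser's
    # points reset to zero; in the winner-takes-all variant the winner
    # also absorbs the loser's points, while in the knockout variant
    # the winner gains nothing.
    win, lose = (i, j) if points[i] >= points[j] else (j, i)
    if winner_takes_all:
        points[win] += points[lose]
    points[lose] = 0
    return points
```

The absorption step is what drives the winner-takes-all model's transience: points concentrate rather than merely being destroyed.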

Date Created
2016-12

Automating by Developing Model Components for the Insurance Ratemaking Actuarial Procedures

Description

The objective of this study is to build a model using R and RStudio that automates ratemaking procedures for Company XYZ's actuaries in their commercial general liability pricing department. The purpose and importance of this objective is to allow actuaries to work more efficiently and effectively by using a model that outputs the results they would otherwise have had to code and calculate on their own. Instead of spending time producing these results, the actuaries can analyze the findings, strategize accordingly, and communicate with business partners. The model was built from R code that was later transformed with Shiny, an R package that allows for building interactive web applications. The final result is a Shiny app that first takes in multiple datasets from Company XYZ's data warehouse and displays different views of the data in order for actuaries to make selections on development and trend methods. The app outputs the re-created ratemaking exhibits showing the resulting developed and trended loss and premium, as well as the experience-based indicated rate level change based on the prior selections. The ratemaking process and Shiny app functionality are detailed in this report.

Date Created
2022-05

Looking at COVID-19 as a Factor in Insurance Loss Reserving Models

Description

A factor accounting for the COVID-19 pandemic was added to a generalized linear model to more accurately predict unpaid claims. COVID-19 has affected not just healthcare, but all sectors of the economy. Because of this, whether or not an automobile insurance claim was filed during the pandemic needs to be taken into account when estimating unpaid claims. Reserve-estimating functions such as glmReserve from the "ChainLadder" package in the statistical software R were tested first, but because they proved insufficient, a manual approach to building the model turned out to be the most effective method. Using the glm function, a model was built that emulates linear regression with a factor for COVID-19. The effects of such a model are analyzed based on effectiveness and interpretability. A model such as this would prove useful for future calculations, especially as society returns to a "normal" state.
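The original fit used R's glm; with a Gaussian family and identity link, adding a COVID-19 indicator reduces to ordinary least squares with a dummy variable, sketched here in Python (the data and variable names are hypothetical):

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small linear system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_ols(X, y):
    # Normal equations (X'X) beta = X'y; each row of X is a design row,
    # e.g. [1.0, development_age, covid_flag].
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    return solve(XtX, Xty)
```

The fitted coefficient on the indicator column is the estimated shift in unpaid claims attributable to a claim being filed during the pandemic, holding the other predictors fixed.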

Date Created
2022-05