Matching Items (16)

Description

This thesis studies and analyzes the fund allocation processes adopted by different states in the United States to reduce the impact of the COVID-19 virus. Seven states and their funding methodologies were compared against the case counts within each state. The study also developed a physical distancing index based on three significant attributes; this index was then compared to expenditures and case counts to support decision making. Finally, a regression model was developed to analyze and compare how different states' case counts played out against the model's predictions and the risk index.
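
A minimal sketch, assuming hypothetical variable names and invented figures, of the kind of state-level regression described above (the thesis's actual data, states, and model may differ):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical state-level records; the thesis's actual data covers seven
# states' expenditures, case counts, and a physical distancing index.
df = pd.DataFrame({
    "state": ["AZ", "CA", "NY", "TX", "FL", "WA", "IL"],
    "expenditure": [1.2, 5.8, 4.1, 3.9, 2.7, 1.5, 2.9],        # $B allocated (invented)
    "distancing_index": [0.42, 0.61, 0.70, 0.38, 0.35, 0.66, 0.58],
    "case_count": [880, 3600, 2100, 2900, 2300, 410, 1300],    # per 100k (invented)
})

# Regress case counts on funding and the distancing index, mirroring the
# comparison of case counts against expenditure and the risk index.
fit = smf.ols("case_count ~ expenditure + distancing_index", data=df).fit()
print(fit.params)
```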

Contributors: Jaisinghani, Shaurya (Author) / Mirchandani, Pitu (Thesis director) / Clough, Michael (Committee member) / McCarville, Daniel R. (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

By the von Neumann min-max theorem, a two-person zero-sum game with finitely many pure strategies has a unique value for each player (summing to zero), and each player has a non-empty set of optimal mixed strategies. If the payoffs are independent, identically distributed (iid) uniform(0,1) random variables, then with probability one, both players have unique optimal mixed strategies utilizing the same number of pure strategies with positive probability (Jonasson 2004). The pure strategies with positive probability in the unique optimal mixed strategies are called saddle squares. In 1957, Goldman evaluated the probability of a saddle point (a 1 by 1 saddle square), a result rediscovered by many authors including Thorp (1979). Thorp gave two proofs of the probability of a saddle point, one using combinatorics and one using a beta integral. In 1965, Falk and Thrall investigated the integrals required for the probabilities of a 2 by 2 saddle square for 2 × n and m × 2 games with iid uniform(0,1) payoffs, but they were not able to evaluate the integrals. This dissertation generalizes Thorp's beta integral proof of Goldman's probability of a saddle point, establishing an integral formula for the probability that an m × n game with iid uniform(0,1) payoffs has a k by k saddle square (k ≤ m, n). Additionally, the probabilities of a 2 by 2 and a 3 by 3 saddle square for a 3 × 3 game with iid uniform(0,1) payoffs are found. For these, the 14 integrals observed by Falk and Thrall are dissected into 38 disjoint domains, and the integrals are evaluated using the basic properties of the dilogarithm function. The final results for the probabilities of a 2 by 2 and a 3 by 3 saddle square in a 3 × 3 game are linear combinations of 1, π², and ln(2) with rational coefficients.
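
For a concrete feel for the setting, here is a short Monte Carlo sketch (not from the dissertation) estimating the probability of a saddle point, i.e. a 1 by 1 saddle square, in an m × n game with iid uniform(0,1) payoffs, compared against Goldman's closed form m!n!/(m+n−1)!:

```python
import numpy as np
from math import factorial

def has_saddle_point(a):
    # An entry is a saddle point if it is the minimum of its row and the
    # maximum of its column; ties have probability zero for continuous payoffs.
    row_mins = a.min(axis=1, keepdims=True)
    col_maxs = a.max(axis=0, keepdims=True)
    return bool(np.any((a == row_mins) & (a == col_maxs)))

def estimate(m, n, trials=200_000, seed=0):
    rng = np.random.default_rng(seed)
    return sum(has_saddle_point(rng.random((m, n))) for _ in range(trials)) / trials

m, n = 3, 3
exact = factorial(m) * factorial(n) / factorial(m + n - 1)  # Goldman (1957)
print(f"Monte Carlo: {estimate(m, n):.4f}   m!n!/(m+n-1)!: {exact:.4f}")
```
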
Contributors: Manley, Michael (Author) / Kadell, Kevin W. J. (Thesis advisor) / Kao, Ming-Hung (Committee member) / Lanchier, Nicolas (Committee member) / Lohr, Sharon (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Parallel Monte Carlo applications require the pseudorandom numbers used on each processor to be independent in a probabilistic sense. The TestU01 software package is the standard testing suite for detecting stream dependence and other properties that make certain pseudorandom generators ineffective in parallel (as well as serial) settings. TestU01 employs two basic schemes for testing parallel-generated streams. The first applies serial tests to the individual streams and then tests the resulting p-values for uniformity. The second turns all the parallel-generated streams into one long vector and then applies serial tests to the resulting concatenated stream. Various forms of stream dependence can be missed by each approach because neither one fully addresses the multivariate nature of the accumulated data when generators are run in parallel. This dissertation identifies these potential faults in the parallel testing methodologies of TestU01 and investigates two different methods to better detect inter-stream dependencies: correlation-motivated multivariate tests and tests based on vector time series. These methods have been implemented in an extension to TestU01 built in C++, and the unique aspects of this extension are discussed. A variety of different generation scenarios are then examined using the TestU01 suite in concert with the extension. This enhanced software package is found to better detect certain forms of inter-stream dependencies than the original TestU01 suites of tests.
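
A minimal Python sketch of the two schemes (TestU01 itself is a C library; this is not its API), with a simple chi-square frequency test standing in for a TestU01 serial test:

```python
import numpy as np
from scipy import stats

def serial_pvalue(stream, bins=16):
    # A simple chi-square frequency test standing in for a TestU01
    # serial test; returns a single p-value for one stream.
    counts, _ = np.histogram(stream, bins=bins, range=(0.0, 1.0))
    expected = len(stream) / bins
    chi2 = ((counts - expected) ** 2 / expected).sum()
    return stats.chi2.sf(chi2, df=bins - 1)

rng = np.random.default_rng(42)
streams = [rng.random(10_000) for _ in range(64)]   # 64 "parallel" streams

# Scheme 1: test each stream separately, then test the p-values for uniformity.
pvals = [serial_pvalue(s) for s in streams]
print("KS test on per-stream p-values:", stats.kstest(pvals, "uniform").pvalue)

# Scheme 2: concatenate all streams into one long vector and test that.
print("serial test on concatenated stream:", serial_pvalue(np.concatenate(streams)))
```
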
Contributors: Ismay, Chester (Author) / Eubank, Randall (Thesis advisor) / Young, Dennis (Committee member) / Kao, Ming-Hung (Committee member) / Lanchier, Nicolas (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

League of Legends is a Multiplayer Online Battle Arena (MOBA) game. MOBA games are generally formatted such that two teams of five, each player controlling a character (champion), try to take each other's base as quickly as possible. With about 70 million players, League of Legends is currently number one in the digital entertainment industry, with $1.63 billion of revenue in 2015. This research analysis focuses on the niche of the "Jungler" role across different tiers of players in League of Legends. I uncovered differences in player strategy that may explain the achievement of high rank, using data aggregation through Riot Games' API, data slicing with time-sensitive data, random sampling, clustering by tiers, graphical techniques to display the clusters, distribution analysis and, finally, a comprehensive factor analysis of the data's implications.
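
A hedged sketch of the clustering step, with invented jungler features standing in for the match data pulled through Riot Games' API:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-player jungler features (the real features come from
# match data via Riot Games' API):
# [ganks before 10 min, jungle CS at 10 min, objectives taken].
rng = np.random.default_rng(0)
features = rng.random((200, 3)) * np.array([6.0, 80.0, 4.0])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
for k, c in enumerate(kmeans.cluster_centers_):
    print(f"cluster {k}: ganks={c[0]:.1f}, cs@10={c[1]:.1f}, objectives={c[2]:.1f}")
```
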
Contributors: Poon, Alex (Author) / Clark, Joseph (Thesis director) / Simon, Alan (Committee member) / Department of Information Systems (Contributor) / Department of Management (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

Over the course of six months, we have worked in partnership with Arizona State University and a leading producer of semiconductor chips in the United States market (referred to as the "Company"), lending our skills in finance, statistics, model building, and external insight. We attempt to design models that help predict how much time it takes to implement a cost-saving project. These projects had previously been considered only on the merit of cost savings; by adding the dimension of time, we hope to forecast implementation time from a number of variables. With such a forecast, we can then apply it to an expense project prioritization model which relates time and cost savings together, compares many different projects simultaneously, and returns a series of present value calculations over different ranges of time. The goal is twofold: to assist with an accurate prediction of a project's time to implementation, and to provide a basis to compare different projects based on their present values, ultimately helping to reduce the Company's manufacturing costs and improve gross margins. We believe this approach, and the research conducted toward this goal, is most valuable for the Company. Two coaches from the Company have provided assistance and clarified our questions when necessary throughout our research. In this paper, we begin by defining the problem, setting an objective, and establishing a checklist to monitor our progress. Next, our attention shifts to the data: making observations, trimming the dataset, and framing and scoping the variables to be used in the analysis portion of the paper. Before forming a hypothesis, we perform a preliminary statistical analysis of certain individual variables to enrich our variable selection process. After the hypothesis, we run multiple linear regressions with project duration as the dependent variable. After regression analysis and a test for robustness, we shift our focus to an intuitive model based on rules of thumb. We relate these models to an expense project prioritization tool developed using Microsoft Excel software. Our deliverables to the Company come in the form of (1) an intuitive rules-of-thumb model and (2) an expense project prioritization tool.
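
A toy sketch of the present-value logic such a prioritization model might use (the actual deliverable is an Excel tool; the savings figures, horizon, and discount rate here are invented):

```python
# Toy present-value comparison: each project has annual savings and a
# predicted implementation time; rank projects by discounted savings.
def present_value(annual_savings, months_to_implement, horizon_years=3, rate=0.10):
    start = months_to_implement / 12   # savings begin only after implementation
    return sum(annual_savings / (1 + rate) ** (start + t) for t in range(horizon_years))

# Hypothetical projects: (annual savings in $, predicted months to implement).
projects = {"A": (120_000, 4), "B": (200_000, 14), "C": (90_000, 2)}
for name in sorted(projects, key=lambda p: present_value(*projects[p]), reverse=True):
    print(name, f"PV = ${present_value(*projects[name]):,.0f}")
```
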
Contributors: Al-Assi, Hashim (Co-author) / Chiang, Robert (Co-author) / Liu, Andrew (Co-author) / Ludwick, David (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Barrett, The Honors College (Contributor) / Department of Information Systems (Contributor) / Department of Finance (Contributor) / Department of Economics (Contributor) / Department of Supply Chain Management (Contributor) / School of Accountancy (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / WPC Graduate Programs (Contributor)
Created: 2015-05
Description

In baseball, a starting pitcher has historically been a more durable pitcher capable of lasting long into games without tiring. For the entire history of Major League Baseball, these pitchers have been expected to last six innings or more into a game before being replaced. However, with the advances in statistics and sabermetrics and their gradual acceptance by professional coaches, the role of the starting pitcher is beginning to change. Teams are experimenting with replacing starters sooner, challenging the traditional role of the starting pitcher. The goal of this study is to use statistical analysis to determine whether there is an exact point at which a team would benefit from replacing a starting or relief pitcher with another pitcher. We use logistic stepwise regression to predict the likelihood of a team scoring a run, given the current game situation, with and without a substitution.
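
A hedged sketch of the modeling step, using plain logistic regression on placeholder game-state features (the thesis uses stepwise selection on real play-by-play data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder game states; real play-by-play data would replace these.
# Features: [inning, pitch count, times through the order, substitution (0/1)].
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.integers(1, 10, 5000),     # inning
    rng.integers(20, 120, 5000),   # pitch count
    rng.integers(1, 4, 5000),      # times through the order
    rng.integers(0, 2, 5000),      # substitution indicator
])
y = rng.integers(0, 2, 5000)       # placeholder label: run scored this inning

model = LogisticRegression().fit(X, y)
state = [[7, 95, 3, 1]]            # 7th inning, 95 pitches, 3rd time through, sub made
print("P(run | state):", model.predict_proba(state)[0, 1])
```
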
Contributors: Buckley, Nicholas J (Author) / Samara, Marko (Thesis director) / Lanchier, Nicolas (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description

Exchange-traded funds (ETFs) are in many ways similar to more traditional closed-end mutual funds, although they differ in one crucial way. ETFs rely on a creation and redemption feature to achieve their functionality, and this mechanism is designed to minimize the deviations that occur between the ETF's listed price and the net asset value of the ETF's underlying assets. While this does cause ETF deviations to be generally lower than those of their mutual fund counterparts, as our paper explores, this process does not eliminate these deviations completely. This article builds off an earlier paper by Engle and Sarkar (2006) that investigates these properties of premiums (discounts) of ETFs from their fair market value, and looks to see whether these premia have changed in the last 10 years. Our paper then diverges from the original and takes a deeper look specifically into the standard deviations of these premia.

Our findings show that over 70% of an ETF's standard deviation of premia can be explained through a linear combination of two variables: a categorical variable (Domestic [US], Developed, Emerging) and a discrete variable (time difference from the US). This paper also finds that more traditional metrics such as market cap, ETF price volatility, and even third-party market indicators such as the economic freedom index and the investment freedom index are insignificant predictors of an ETF's standard deviation of premia when combined with the categorical variable. These findings differ somewhat from the existing literature, which indicates that these factors should have a significant impact on the predictive ability of an ETF's standard deviation of premia.
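
A minimal sketch of this regression on synthetic data, assuming premium = (price − NAV) / NAV and the two explanatory variables described above:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic cross-section of ETFs; the response is each fund's standard
# deviation of daily premia, where premium = (price - NAV) / NAV.
rng = np.random.default_rng(7)
n = 150
region = rng.choice(["US", "Developed", "Emerging"], size=n)
tz_diff = np.where(region == "US", 0, rng.integers(5, 14, size=n))   # hours from US
std_premium = 0.002 + 0.0004 * tz_diff + rng.normal(0.0, 0.001, n)   # invented

df = pd.DataFrame({"std_premium": std_premium, "region": region, "tz_diff": tz_diff})
fit = smf.ols("std_premium ~ C(region) + tz_diff", data=df).fit()
print(fit.rsquared)   # share of variation explained by the two variables
print(fit.params)
```
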
Contributors: Zhang, Jingbo (Co-author) / Henning, Thomas (Co-author) / Simonson, Mark (Thesis director) / Licon, L. Wendell (Committee member) / Department of Finance (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description

The goal of this research paper is to analyze how we define economic success and how that definition affects large corporations and consumers. This paper asks: What do we define as a good economy? What metrics are currently utilized? And how do perceptions of a good economy influence politics? Overall, the research seeks to identify common economic and financial fallacies held by the average citizen and to offer alternative methods of presenting socio-economic information to consumers. Consumers play a major role in the market, and the information they receive has a considerable impact on their behaviors. Determining why the present form of economic analysis is used is the first step in finding ways to improve the system. Observing past political and economic trends and relating them to current issues is necessary for finding future solutions.
Contributors: Tosca, Carlos (Author) / Brian, Jennifer (Thesis director) / Sadusky, Brian (Committee member) / Department of Information Systems (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

There are multiple mathematical models for the alignment of individuals moving within a group. In a first class of models, individuals tend to relax their velocity toward the average velocity of other nearby neighbors; these models are motivated by the flocking behavior exhibited by birds. Another class of models has been introduced to describe rapid changes of individual velocity, referred to as jumps, which better describe the behavior of smaller agents (e.g. locusts, ants). In this second class of models, individuals randomly choose to align with another nearby individual, matching velocities. There are several open questions concerning these two types of behavior: which behavior is the most efficient for creating a flock (i.e. for converging toward the same velocity)? Will flocking still emerge when the number of individuals approaches infinity? Analysis of these models shows that, in the homogeneous case where all individuals are capable of interacting with each other, the variance of the velocities in both the jump model and the relaxation model decays to 0 exponentially for any nonzero number of individuals. This implies that the individuals in the system converge to an absorbing state where all individuals share the same velocity; therefore individuals converge to a flock even as the number of individuals approaches infinity. Further analysis focused on the case where interactions between individuals are determined by an adjacency matrix. The second eigenvalue of the Laplacian of this adjacency matrix (denoted λ2) provides a lower bound on the rate of decay of the variance. When λ2 is nonzero, the system is said to converge to a flock almost surely. Furthermore, when the adjacency matrix is generated by a random graph, such that connections between individuals are formed with probability p (where 0 < p ≤ 1), flocking still occurs provided p exceeds 1/N. λ2 is a good estimator of the rate of convergence of the system, in comparison to the value of p used to generate the adjacency matrix.
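
As a quick illustration (a sketch, not the thesis code), one can simulate the relaxation model on an Erdős–Rényi graph and compare λ2 of the Laplacian with the observed decay of the velocity variance:

```python
import numpy as np

# Relaxation model dv/dt = -(1/N) L v on an Erdos-Renyi graph: compare
# lambda_2 of the Laplacian L with the observed decay of velocity variance.
rng = np.random.default_rng(3)
N, p, dt, steps = 100, 0.1, 0.01, 2000

A = np.triu((rng.random((N, N)) < p).astype(float), 1)
A = A + A.T                                   # symmetric adjacency, no self-loops
L = np.diag(A.sum(axis=1)) - A                # graph Laplacian
lam2 = np.sort(np.linalg.eigvalsh(L))[1]      # second eigenvalue (algebraic connectivity)

v = rng.normal(size=N)                        # initial 1-D velocities
for _ in range(steps):
    v = v - dt * (L @ v) / N                  # Euler step of the relaxation dynamics
print(f"lambda_2 = {lam2:.3f}, final velocity variance = {v.var():.2e}")
```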

Contributors: Trent, Austin L. (Author) / Motsch, Sebastien (Thesis director) / Lanchier, Nicolas (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

Over the past several decades, analytics have become more and more prevalent in the game of baseball. Statistics are used in nearly every facet of the game, and each team develops its own processes, hoping to gain a competitive advantage over the rest of the league. One area of the game that has struggled to produce definitive analytics is amateur scouting. This project seeks to address that problem through the creation of a new statistic, the Valued Plate Appearance Index (VPI). The problem is identified through analysis performed to determine whether any correlation exists between performances in the country's top amateur baseball league, the Cape Cod League, and performances in Major League Baseball. After several stats were analyzed, almost no correlation was found between the two. This essentially means that teams have no way to statistically analyze Cape Cod League performance and project future statistics; an inherent contextual error in these amateur statistics prevents them from correlating. The project seeks to close that contextual gap and create concrete, encompassing values to illustrate a player's offensive performance in the Cape League. To address this, data was collected from the 2017 CCBL season. In addition to VPI, Valued Plate Appearance Approach (VPA) and Valued Plate Appearance Result (VPR) were created to better depict a player's all-around performance in each plate appearance. VPA values the quality of a player's approach in each plate appearance; VPR values the quality of the contact result, excluding factors out of the hitter's control. This statistic isolates player performance and eliminates luck that cannot normally be taken into account. The paper concludes by segmenting players from the 2017 CCBL into four groups that project how they will perform as they transition into professional baseball. These groups and these statistics could be essential tools in the evaluation and projection of amateur players by Major League clubs for years to come.
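
A minimal sketch of the correlation check described above, with placeholder OPS values standing in for the real paired CCBL and MLB performances:

```python
import numpy as np
from scipy import stats

# Placeholder OPS values standing in for paired CCBL/MLB performances;
# the thesis's finding was that such correlations are close to zero.
rng = np.random.default_rng(5)
ccbl_ops = rng.normal(0.780, 0.060, 40)   # Cape Cod League OPS (placeholder)
mlb_ops = rng.normal(0.720, 0.050, 40)    # later MLB OPS (placeholder)

r, pval = stats.pearsonr(ccbl_ops, mlb_ops)
print(f"Pearson r = {r:.2f} (p = {pval:.2f})")
```
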
Contributors: Lothrop, Joseph Kent (Author) / Eaton, John (Thesis director) / McIntosh, Daniel (Committee member) / Department of Information Systems (Contributor) / Department of Marketing (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-12