Matching Items (20)

148169-Thumbnail Image.png
Description

This thesis was conducted to study and analyze the fund allocation processes adopted by different states in the United States to reduce the impact of the Covid-19 virus. Seven states and their funding methodologies were compared against the case counts within each state. The study also focused on the development of a physical distancing index based on three significant attributes; this index was then compared to expenditures and case counts to support decision making. A regression model was developed to analyze how each state's case counts tracked against the model's predictions and the risk index.
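The comparison the abstract describes (case counts regressed on spending and a distancing index) can be sketched with ordinary least squares. All figures below are invented for illustration; the thesis's actual data and model are not reproduced here.

```python
import numpy as np

# Invented data for seven hypothetical states; the real thesis data is not public.
spending = np.array([4.1, 3.2, 5.0, 2.8, 3.9, 4.5, 3.0])    # $B allocated
distancing = np.array([0.7, 0.5, 0.8, 0.4, 0.6, 0.9, 0.5])  # index in [0, 1]
cases = np.array([90.0, 130.0, 70.0, 160.0, 100.0, 60.0, 140.0])  # per 10k residents

# Fit cases ~ b0 + b1*spending + b2*distancing by ordinary least squares.
X = np.column_stack([np.ones_like(spending), spending, distancing])
beta, *_ = np.linalg.lstsq(X, cases, rcond=None)
predicted = X @ beta
residuals = cases - predicted
```

Each state's actual case count can then be compared against `predicted` to see which states over- or under-performed the model.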

Contributors: Jaisinghani, Shaurya (Author) / Mirchandani, Pitu (Thesis director) / Clough, Michael (Committee member) / McCarville, Daniel R. (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
135605-Thumbnail Image.png
Description
An application called "Productivity Heatmap" was created with this project, with the goal of allowing users to track how productive they are over the course of a day and week. Input is collected through scheduled prompts separated by 30 minutes to 4 hours, depending on preference. The result is a heat map colored according to a user's productivity at particular times of each day during the week. The aim is to let users visualize when they are most productive, given that every individual has different habits and life patterns. This application was made entirely in Google's Android Studio environment using Java and XML, with SQLite used for database management. The application runs on any Android device and was designed to balance providing useful information to the user with maintaining an attractive and intuitive interface. This thesis explores the creation of a functional mobile application for mass distribution, with a particular set of end users in mind, namely college students. Many challenges in the form of learning a new development environment were encountered and overcome, as explained in the report. The application created is a core-functionality proof of concept of a much larger personal project: creating a versatile and useful mobile application for student use. The principles covered are the creation of a mobile application, meeting requirements specified by others, and investigating the interest generated by such a concept. Beyond this thesis, testing will be done and future enhancements will be made for mass-market consumption.
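The day-by-hour aggregation behind a heat map like this can be sketched in a few lines of SQL. This is a hypothetical illustration only: the actual app is written in Java for Android, and the table and column names below are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE responses (
    day_of_week INTEGER,   -- 0 = Monday .. 6 = Sunday
    hour        INTEGER,   -- 0..23
    rating      INTEGER    -- self-reported productivity, 1..5
)""")
conn.executemany("INSERT INTO responses VALUES (?, ?, ?)",
                 [(0, 9, 4), (0, 9, 5), (0, 14, 2), (2, 9, 3)])

# one heat-map cell per (day, hour): the average rating reported there
heatmap = conn.execute("""SELECT day_of_week, hour, AVG(rating)
                          FROM responses
                          GROUP BY day_of_week, hour""").fetchall()
```

Each row of `heatmap` maps directly to one colored cell in the weekly grid.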
Contributors: Weser, Matthew Paul (Author) / Nelson, Brian (Thesis director) / Balasooriya, Janaka (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
135606-Thumbnail Image.png
Description
League of Legends is a Multiplayer Online Battle Arena (MOBA) game. MOBA games generally pit two teams of five against each other, each player controlling a character (champion), with each team trying to take the other's base as quickly as possible. Currently, with about 70 million players, League of Legends is number one in the digital entertainment industry, with $1.63 billion of revenue in 2015. This research analysis focuses on the niche of the "Jungler" role across different tiers of players in League of Legends. I uncovered differences in player strategy that may explain the achievement of high rank, using data aggregation through Riot Games' API, data slicing of time-sensitive data, random sampling, clustering by tier, graphical techniques to display the clusters, distribution analysis, and finally a comprehensive factor analysis of the data's implications.
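A minimal sketch of the tier-clustering idea, using a hand-rolled k-means on two invented jungler features. Real data would come from Riot Games' match API; the feature names and values below are assumptions for illustration.

```python
import numpy as np

# Two invented clouds of jungler stats: (ganks by 10 min, camps cleared by 10 min).
rng = np.random.default_rng(0)
low_tier = rng.normal([2.0, 8.0], 0.5, size=(50, 2))
high_tier = rng.normal([5.0, 14.0], 0.5, size=(50, 2))
points = np.vstack([low_tier, high_tier])

def kmeans(points, k=2, iters=20):
    centers = points[[0, -1]].copy()  # seed one center from each end (k=2 sketch)
    for _ in range(iters):
        # assign each player to its nearest center, then recompute the centers
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(points)
```

With well-separated playstyles, the recovered clusters line up with the tier groups, which is the pattern the analysis looks for in real match data.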
Contributors: Poon, Alex (Author) / Clark, Joseph (Thesis director) / Simon, Alan (Committee member) / Department of Information Systems (Contributor) / Department of Management (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
137620-Thumbnail Image.png
Description
The area of real-time baseball statistics presents several challenges that can be addressed using mobile devices. In order to accurately record real-time statistics, it is necessary to present the user with a concise interface that can be used to quickly record the necessary data during in-game events. In this project, we use a mobile application to address this by separating out the required input into pre-game and in-game inputs. We also explore the use of a mobile application to leverage crowd sourcing techniques, which address the challenge of accuracy and precision in subjective real-time statistics.
Contributors: Van Egmond, Eric David (Author) / Tadayon-Navabi, Farideh (Thesis director) / Wilkerson, Kelly (Committee member) / Gorla, Mark (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2013-05
136255-Thumbnail Image.png
Description
Over the course of six months, we worked in partnership with Arizona State University and a leading producer of semiconductor chips in the United States market (referred to as the "Company"), lending our skills in finance, statistics, model building, and external insight. We attempt to design models that help predict how much time it takes to implement a cost-saving project. These projects had previously been considered only on the merit of their cost savings; by adding the dimension of time, we hope to forecast implementation time as a function of a number of variables. With such a forecast, we can then apply it to an expense project prioritization model that relates time and cost savings together, compares many different projects simultaneously, and returns a series of present value calculations over different ranges of time. The goal is twofold: assist with an accurate prediction of a project's time to implementation, and provide a basis to compare different projects based on their present values, ultimately helping to reduce the Company's manufacturing costs and improve gross margins. We believe this approach, and the research conducted toward this goal, is most valuable for the Company. Two coaches from the Company provided assistance and clarified our questions when necessary throughout our research. In this paper, we begin by defining the problem, setting an objective, and establishing a checklist to monitor our progress. Next, our attention shifts to the data: making observations, trimming the dataset, and framing and scoping the variables to be used in the analysis portion of the paper. Before forming a hypothesis, we perform a preliminary statistical analysis of certain individual variables to enrich our variable selection process. After the hypothesis, we run multiple linear regressions with project duration as the dependent variable. Following regression analysis and a test for robustness, we shift our focus to an intuitive model based on rules of thumb. We relate these models to an expense project prioritization tool developed in Microsoft Excel. Our deliverables to the Company come in the form of (1) a rules-of-thumb intuitive model and (2) an expense project prioritization tool.
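The core of a prioritization model that relates time and cost savings is a present-value calculation: savings are discounted more heavily the longer a project takes to implement. The sketch below is an assumption-laden illustration, not the Company's tool; the project names, savings figures, discount rate, and horizon are all invented.

```python
def present_value(annual_savings, months_to_implement, annual_rate=0.08, horizon_years=5):
    """PV of annual savings that begin once implementation finishes."""
    start = months_to_implement / 12.0
    return sum(annual_savings / (1 + annual_rate) ** (start + year)
               for year in range(1, horizon_years + 1))

# Two invented projects, ranked by the discounted value of their savings.
projects = {
    "upgrade etch tool": present_value(500_000, 6),
    "rework test flow": present_value(400_000, 2),
}
ranked = sorted(projects, key=projects.get, reverse=True)
```

Because longer implementation pushes the savings stream further into the future, a project with smaller nominal savings can outrank a slower one once time is priced in.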
Contributors: Al-Assi, Hashim (Co-author) / Chiang, Robert (Co-author) / Liu, Andrew (Co-author) / Ludwick, David (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Barrett, The Honors College (Contributor) / Department of Information Systems (Contributor) / Department of Finance (Contributor) / Department of Economics (Contributor) / Department of Supply Chain Management (Contributor) / School of Accountancy (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / WPC Graduate Programs (Contributor)
Created: 2015-05
131478-Thumbnail Image.png
Description
The process of cooking a turkey is a yearly task that families undertake to deliver a delicious centerpiece to a Thanksgiving meal. While other dishes accompany and comprise the traditional Thanksgiving supper, creating a turkey that satisfies the tastes of all guests is difficult, as preferences vary. Over the years, many cooking methods and preparation variations have come to light. This thesis studies these cooking methods and preparation variations, as well as their effects on the crispiness of the skin, the juiciness of the meat, the tenderness of the meat, and the overall taste, to simplify the choices home cooks have in preparing a turkey that best fits their tastes. Testing and evaluation reveal that among deep-frying, grilling, and oven-roasting a turkey, a number of preparation variations show statistically significant changes relative to omitting them. For crispiness, fried turkeys are statistically superior, scoring about 1.5 points higher than other cooking methods on a 5-point scale. For juiciness, the best preparation variation was using an oven bag, with the oven-roasted turkey scoring about 4.5 points on a 5-point scale. For tenderness, multiple methods are excellent, the best three preparation variations in order being spatchcocking, brining, and using an oven bag; each scores just under 4 out of 5. Finally, testing reaffirms that judges tend to have different subjective tastes, with different perceptions and opinions on some criteria while statistically agreeing on others: there was 67% agreement among judges on crispiness and tenderness, but only 17% agreement on juiciness. Evaluation of these cooking methods, as well as their respective preparation variations, addresses the question of which methods are worthwhile endeavors for cooks.
Contributors: Vance, Jarod (Co-author) / Lacsa, Jeremy (Co-author) / Green, Matthew (Thesis director) / Taylor, David (Committee member) / Chemical Engineering Program (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
132394-Thumbnail Image.png
Description
In baseball, a starting pitcher has historically been a more durable pitcher capable of lasting long into games without tiring. For the entire history of Major League Baseball, these pitchers have been expected to last 6 innings or more into a game before being replaced. However, with advances in statistics and sabermetrics and their gradual acceptance by professional coaches, the role of the starting pitcher is beginning to change. Teams are experimenting with replacing starters sooner, challenging the traditional role of the starting pitcher. The goal of this study is to use statistical analyses to determine whether there is an exact point at which a team would benefit from replacing a starting or relief pitcher with another pitcher. We use logistic stepwise regression to predict the likelihood of a team scoring a run if a substitution is made or not made, given the current game situation.
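The substitution question can be illustrated with a toy logistic fit. This is not the thesis's stepwise model: the single predictor (pitch count) and every data point below are invented, and the fit uses plain gradient descent rather than stepwise selection.

```python
import numpy as np

# Invented outcomes: did a run score this inning, by the pitcher's pitch count?
pitch_count = np.array([20, 40, 60, 75, 85, 95, 105, 115], dtype=float)
run_scored = np.array([0, 0, 0, 1, 0, 1, 1, 1], dtype=float)

# Fit P(run) = sigmoid(w0 + w1 * count/100) by gradient descent on log-loss.
X = np.column_stack([np.ones_like(pitch_count), pitch_count / 100.0])
w = np.zeros(2)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ w))              # current predicted probabilities
    w -= 0.5 * X.T @ (p - run_scored) / len(run_scored)

def p_run(count):
    return 1.0 / (1.0 + np.exp(-(w[0] + w[1] * count / 100.0)))
```

Comparing `p_run` with the substitution made versus not made, per game state, is the kind of decision rule the study evaluates.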
Contributors: Buckley, Nicholas J (Author) / Samara, Marko (Thesis director) / Lanchier, Nicolas (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
132832-Thumbnail Image.png
Description
Exchange traded funds (ETFs) are in many ways similar to more traditional closed-end mutual funds, although they differ in a crucial way. ETFs rely on a creation and redemption feature to achieve their functionality, and this mechanism is designed to minimize the deviations that occur between an ETF's listed price and the net asset value of its underlying assets. While this does cause ETF deviations to be generally lower than those of their mutual fund counterparts, as our paper explores, the process does not eliminate these deviations completely. This article builds off an earlier paper by Engle and Sarkar (2006) that investigates these premiums (discounts) of ETFs from their fair market value, and looks to see whether these premia have changed in the last 10 years. Our paper then diverges from the original and takes a deeper look into the standard deviations of these premia specifically.

Our findings show that over 70% of an ETF's standard deviation of premia can be explained through a linear combination of two variables: a categorical variable (Domestic [US], Developed, Emerging) and a discrete variable (time difference from the US). This paper also finds that more traditional metrics such as market cap, ETF price volatility, and even third-party market indicators such as the economic freedom index and investment freedom index are insignificant predictors of an ETF's standard deviation of premia when combined with the categorical variable. These findings differ somewhat from existing literature, which indicates that these factors should have a significant impact on the predictive ability of an ETF's standard deviation of premia.
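The two-variable model the findings describe can be sketched as a regression on region dummies plus the time difference from the US market. All numbers below are invented for illustration; the paper's actual sample and estimates are not reproduced.

```python
import numpy as np

# Six hypothetical ETFs: region category, time difference from the US (hours),
# and the standard deviation of their premia (invented values).
regions = ["US", "Developed", "Emerging", "US", "Developed", "Emerging"]
tz_diff = np.array([0.0, 6.0, 12.0, 0.0, 9.0, 13.0])
sd_premia = np.array([0.05, 0.30, 0.55, 0.06, 0.41, 0.60])

# Region dummies (US is the baseline) alongside the time-difference regressor.
dummies = np.column_stack([
    [r == "Developed" for r in regions],
    [r == "Emerging" for r in regions],
]).astype(float)
X = np.column_stack([np.ones(len(regions)), dummies, tz_diff])
beta, *_ = np.linalg.lstsq(X, sd_premia, rcond=None)

fitted = X @ beta
r_squared = 1 - ((sd_premia - fitted) ** 2).sum() / ((sd_premia - sd_premia.mean()) ** 2).sum()
```

The `r_squared` value plays the role of the "over 70% explained" figure in the findings, though here it reflects only the toy data.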
Contributors: Zhang, Jingbo (Co-author) / Henning, Thomas (Co-author) / Simonson, Mark (Thesis director) / Licon, L. Wendell (Committee member) / Department of Finance (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
131125-Thumbnail Image.png
Description
The goal of this research paper is to analyze how we define economic success and how that definition affects large corporations and consumers. This paper asks the questions: What do we define as a good economy? What metrics are currently utilized? And how do perceptions of a good economy influence politics? Overall, the research seeks to identify common economic and financial fallacies held by the average citizen and to offer alternative methods of presenting socio-economic information to consumers. Consumers play a major role in the market, and the information they receive has a considerable impact on their behavior. Determining why the present methods of economic analysis are used is the first step in finding ways to improve the system. Observing past political and economic trends and relating them to current issues is necessary for finding future solutions.
Contributors: Tosca, Carlos (Author) / Brian, Jennifer (Thesis director) / Sadusky, Brian (Committee member) / Department of Information Systems (Contributor) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
133482-Thumbnail Image.png
Description
Cryptocurrencies have become one of the most fascinating forms of currency and economics due to their fluctuating values and lack of centralization. This project attempts to use machine learning methods to effectively model in-sample data for Bitcoin and Ethereum using rule induction methods. The dataset is cleaned by removing entries with missing data, and a new column is created to measure the price difference, allowing a more accurate analysis of the change in price. Eight relevant variables are selected using cross validation: the total number of bitcoins, the total size of the blockchain, the hash rate, mining difficulty, revenue from mining, transaction fees, the cost of transactions, and the estimated transaction volume. The in-sample data is modeled using a simple tree fit, first with one variable and then with all eight. Using all eight variables, the in-sample model and data have a correlation of 0.6822657. The in-sample model is improved by first applying bootstrap aggregation (also known as bagging) to fit 400 decision trees to the in-sample data using one variable. Then the random forests technique is applied to the data using all eight variables, resulting in a correlation between the model and data of 0.9443413. The random forests technique is then applied to an Ethereum dataset, resulting in a correlation of 0.6904798. Finally, an out-of-sample model is created for Bitcoin and Ethereum using random forests, with a benchmark correlation of 0.03 for financial data. The correlation between the training model and the testing data for Bitcoin was 0.06957639, while for Ethereum the correlation was -0.171125. In conclusion, it is confirmed that cryptocurrencies can have accurate in-sample models by applying the random forests method to a dataset. However, out-of-sample modeling is more difficult, though in some cases better than is typical for financial data. It should also be noted that cryptocurrency data has properties similar to other related financial datasets, suggesting future potential for system modeling of cryptocurrency within the financial world.
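The bagging step the abstract describes (averaging many trees fit on bootstrap resamples) can be sketched with depth-1 regression stumps on a single invented feature. This illustrates the technique only; it is not the thesis code, and the hash-rate and price values are fabricated for the demo.

```python
import numpy as np

# Invented single-feature dataset: price change steps up with hash rate, plus noise.
rng = np.random.default_rng(1)
hash_rate = np.sort(rng.uniform(0, 10, 200))
price_diff = np.where(hash_rate > 5, 3.0, 1.0) + rng.normal(0, 0.3, 200)

def fit_stump(x, y):
    # Choose the threshold on a coarse grid that minimizes within-side squared error.
    best = None
    for t in np.linspace(x.min(), x.max(), 25):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    return best[1:]

# Bootstrap-aggregate 400 stumps, then predict by averaging their outputs.
stumps = []
for _ in range(400):
    idx = rng.integers(0, 200, 200)
    stumps.append(fit_stump(hash_rate[idx], price_diff[idx]))

def predict(x):
    return np.mean([lo if x <= t else hi for t, lo, hi in stumps])

model_corr = np.corrcoef([predict(v) for v in hash_rate], price_diff)[0, 1]
```

`model_corr` is the in-sample model-to-data correlation in the same sense as the figures quoted in the abstract, computed here on toy data.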
Contributors: Browning, Jacob Christian (Author) / Meuth, Ryan (Thesis director) / Jones, Donald (Committee member) / McCulloch, Robert (Committee member) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05