Matching Items (17)
Description
The NFL is one of the largest and most influential industries in the world. In America, few companies have a stronger hold on the culture or create such a phenomenon from year to year. This project aimed to develop a strategy that helps an NFL team be as successful as possible by defining which positions are most important to a team's success. Data from fifteen years of NFL games was collected, and information on every player in the league was analyzed. First, a benchmark describing an average team was established; then every player in the NFL was compared to that average. Using ordinary least squares linear regression, this project defines a model that shows each position's importance. Finally, once such a model had been established, the focus turned to the NFL draft, where the goal was to find a strategy for where each position should be drafted so that it is most likely to give the best payoff, based on the results of the regression in part one.
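A minimal sketch of the kind of OLS regression the abstract describes, assuming a hypothetical dataset in which each row is a team-season and each positional column measures that position group's performance relative to the league-average benchmark; the file name, column names, and position groupings are illustrative, not the thesis's actual data.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per team-season; positional columns measure
# performance relative to the league-average benchmark described above.
df = pd.read_csv("team_seasons.csv")
positions = ["qb", "rb", "wr", "ol", "dl", "lb", "db"]

X = sm.add_constant(df[positions])  # intercept ~ wins of an average team
y = df["wins"]

model = sm.OLS(y, X).fit()          # ordinary least squares
print(model.params.sort_values())   # larger coefficient ~ more important position
```

The fitted coefficients then feed the draft question: positions whose coefficients buy the most wins per unit of above-average play are the ones worth drafting early.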
Contributors: Balzer, Kevin Ryan (Author) / Goegan, Brian (Thesis director) / Dassanayake, Maduranga (Committee member) / Barrett, The Honors College (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
The concentration factor edge detection method was developed to compute the locations and values of jump discontinuities in a piecewise-analytic function from its first few Fourier series coefficients. The method approximates the singular support of a piecewise smooth function using an altered Fourier conjugate partial sum. The accuracy and characteristic features of the resulting jump function approximation depend on these filters, known as concentration factors. Recent research showed that these concentration factors could be designed using a flexible iterative framework, improving upon the overall accuracy and robustness of the method, especially in the case where some Fourier data are untrustworthy or altogether missing. Hypothesis testing methods were used to determine how well the original concentration factor method could locate edges using noisy Fourier data. This thesis combines the iterative aspect of concentration factor design and hypothesis testing by presenting a new algorithm that incorporates multiple concentration factors into one statistical test, which proves more effective at determining jump discontinuities than the previous hypothesis testing methods. This thesis also examines how the quantity and location of Fourier data affect the accuracy of hypothesis testing methods. Numerical examples are provided.
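For readers unfamiliar with the method, here is a minimal sketch of the basic (non-iterative) concentration factor edge detector using the first-order polynomial factor sigma(eta) = pi * eta; the thesis's iteratively designed factors and hypothesis tests are not reproduced here.

```python
import numpy as np

N = 40                                          # number of Fourier modes used
x = np.linspace(-np.pi, np.pi, 1024, endpoint=False)
f = ((x > -1.0) & (x < 0.5)).astype(float)      # jumps of +1 at x=-1, -1 at x=0.5

def fhat(k):
    # Fourier coefficient (1/2pi) * integral of f(x) e^{-ikx}, via the grid mean
    return np.mean(f * np.exp(-1j * k * x))

def sigma(eta):
    return np.pi * eta                          # first-order polynomial factor

# Altered (conjugate) partial sum: i * sum over k of sgn(k) sigma(|k|/N) fhat(k) e^{ikx}
jump = np.zeros_like(x, dtype=complex)
for k in range(1, N + 1):
    s = sigma(k / N)
    jump += 1j * s * (fhat(k) * np.exp(1j * k * x) - fhat(-k) * np.exp(-1j * k * x))
jump = jump.real                                # approximates the jump function [f](x)

print("edges near:", x[np.argmax(jump)], x[np.argmin(jump)])  # ~ -1.0 and 0.5
```

The sum concentrates at the singular support: away from the jumps it tends to zero, and at each jump it converges to the jump's height, which is what makes it usable as an edge detector.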
Contributors: Lubold, Shane Michael (Author) / Gelb, Anne (Thesis director) / Cochran, Doug (Committee member) / Viswanathan, Aditya (Committee member) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Over the course of six months, we have worked in partnership with Arizona State University and a leading producer of semiconductor chips in the United States market (referred to as the "Company"), lending our skills in finance, statistics, model building, and external insight. We attempt to design models that help predict how much time it takes to implement a cost-saving project. These projects had previously been considered only on the merit of their cost savings; by adding the dimension of time, we hope to forecast implementation time as a function of a number of variables. With such a forecast, we can then apply it to an expense project prioritization model which relates time and cost savings together, compares many different projects simultaneously, and returns a series of present value calculations over different ranges of time. The goal is twofold: assist with an accurate prediction of a project's time to implementation, and provide a basis to compare different projects based on their present values, ultimately helping to reduce the Company's manufacturing costs and improve gross margins. We believe this approach, and the research conducted toward this goal, is most valuable for the Company. Two coaches from the Company have provided assistance and clarified our questions when necessary throughout our research. In this paper, we begin by defining the problem, setting an objective, and establishing a checklist to monitor our progress. Next, our attention shifts to the data: making observations, trimming the dataset, and framing and scoping the variables to be used for the analysis portion of the paper. Before creating a hypothesis, we perform a preliminary statistical analysis of certain individual variables to enrich our variable selection process. After the hypothesis, we run multiple linear regressions with project duration as the dependent variable. After regression analysis and a test for robustness, we shift our focus to an intuitive model based on rules of thumb. We relate these models to an expense project prioritization tool developed using Microsoft Excel software. Our deliverables to the Company come in the form of (1) a rules-of-thumb intuitive model and (2) an expense project prioritization tool.
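A hedged sketch of the two-part logic the abstract lays out: a duration regression followed by a present value ranking. The column names, regression formula, and discount rate below are illustrative assumptions, not the Company's data or the final model.

```python
import pandas as pd
import statsmodels.formula.api as smf

projects = pd.read_csv("cost_projects.csv")   # hypothetical historical projects

# (1) multiple linear regression with project duration as the dependent variable
fit = smf.ols("duration_months ~ dept + headcount + capex_required",
              data=projects).fit()
projects["pred_duration"] = fit.predict(projects)

# (2) present value of monthly savings realized once a project is implemented
r = 0.08 / 12                                 # assumed monthly discount rate

def present_value(monthly_saving, start_month, horizon=60):
    return sum(monthly_saving / (1 + r) ** t
               for t in range(int(start_month), horizon))

projects["pv"] = [present_value(s, m)
                  for s, m in zip(projects["monthly_saving"],
                                  projects["pred_duration"])]
print(projects.sort_values("pv", ascending=False).head())  # prioritized projects
```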
Contributors: Al-Assi, Hashim (Co-author) / Chiang, Robert (Co-author) / Liu, Andrew (Co-author) / Ludwick, David (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Barrett, The Honors College (Contributor) / Department of Information Systems (Contributor) / Department of Finance (Contributor) / Department of Economics (Contributor) / Department of Supply Chain Management (Contributor) / School of Accountancy (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / WPC Graduate Programs (Contributor)
Created: 2015-05
Description
Coherent vortices are ubiquitous structures in natural flows that affect the mixing and transport of substances, momentum, and energy. Being able to detect these coherent structures is important for pollutant mitigation, ecological conservation, and many other applications. In recent years, mathematical criteria and algorithms have been developed to extract these coherent structures in turbulent flows. In this study, we apply these tools to extract important coherent structures and analyze their statistical properties as well as their implications for the kinematics and dynamics of the flow. Such information will aid the representation of small-scale nonlinear processes that large-scale models of natural processes may not be able to resolve.
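The abstract does not name its specific extraction criteria, so as a hedged illustration, here is one standard Eulerian vortex criterion, Okubo-Weiss, applied to a gridded 2-D velocity field: rotation-dominated regions (W < 0) mark candidate vortex cores.

```python
import numpy as np

def okubo_weiss(u, v, dx, dy):
    """W = s_n^2 + s_s^2 - omega^2; W < 0 flags rotation-dominated (vortex) regions."""
    du_dy, du_dx = np.gradient(u, dy, dx)   # rows are y, columns are x
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    s_n = du_dx - dv_dy                     # normal strain
    s_s = dv_dx + du_dy                     # shear strain
    omega = dv_dx - du_dy                   # relative vorticity
    return s_n**2 + s_s**2 - omega**2

# Synthetic test: a single Gaussian vortex should be flagged near its center.
y, x = np.mgrid[-2:2:200j, -2:2:200j]
psi = np.exp(-(x**2 + y**2))                # streamfunction
dx_ = dy_ = 4 / 199
u = np.gradient(psi, dy_, axis=0)           # u = d(psi)/dy
v = -np.gradient(psi, dx_, axis=1)          # v = -d(psi)/dx
W = okubo_weiss(u, v, dx_, dy_)
print((W < 0).sum(), "grid points flagged as vortex core")
```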
Contributors: Cass, Brentlee Jerry (Author) / Tang, Wenbo (Thesis director) / Kostelich, Eric (Committee member) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Exchange traded funds (ETFs) are in many ways similar to more traditional closed-end mutual funds, although they differ in a crucial way. ETFs rely on a creation and redemption feature to achieve their functionality, and this mechanism is designed to minimize the deviations that occur between the ETF's listed price and the net asset value of the ETF's underlying assets. However, while this does cause ETF deviations to be generally lower than those of their mutual fund counterparts, as our paper explores, the process does not eliminate these deviations completely. This article builds off an earlier paper by Engle and Sarkar (2006) that investigates these properties of premiums (discounts) of ETFs from their fair market value, and looks to see whether these premia have changed in the last 10 years. Our paper then diverges from the original and takes a deeper look into the standard deviations of these premia specifically.

Our findings show that over 70% of an ETF's standard deviation of premia can be explained through a linear combination of two variables: a categorical variable (Domestic [US], Developed, Emerging) and a discrete variable (time difference from the US). This paper also finds that more traditional metrics such as market cap, ETF price volatility, and even third-party market indicators such as the economic freedom index and investment freedom index are insignificant predictors of an ETF's standard deviation of premia when combined with the categorical variable. These findings differ somewhat from existing literature, which indicates that these factors should be significant predictors of an ETF's standard deviation of premia.
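A minimal sketch of the premium calculation and the two-variable model for its standard deviation; the file and column names are hypothetical stand-ins, and the premium definition follows the usual (price - NAV)/NAV convention rather than any specific construction from Engle and Sarkar (2006).

```python
import pandas as pd
import statsmodels.formula.api as smf

etf = pd.read_csv("etf_daily.csv")           # hypothetical: ticker, price, nav, ...
etf["premium"] = (etf["price"] - etf["nav"]) / etf["nav"]

# one row per fund: std dev of daily premia plus the two explanatory variables
funds = etf.groupby("ticker").agg(
    sd_premium=("premium", "std"),
    market=("market", "first"),              # Domestic [US] / Developed / Emerging
    tz_offset=("tz_offset", "first"),        # time difference from the US, hours
).reset_index()

fit = smf.ols("sd_premium ~ C(market) + tz_offset", data=funds).fit()
print(fit.rsquared)                          # the paper reports R^2 above 0.70
```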
Contributors: Zhang, Jingbo (Co-author) / Henning, Thomas (Co-author) / Simonson, Mark (Thesis director) / Licon, L. Wendell (Committee member) / Department of Finance (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
The object of the present study is to examine methods by which the company can optimize its costs on third-party suppliers who oversee other third-party trade labor. The third parties in the scope of this study are suspected to overstaff their workforce, thus overcharging the company. We introduce a complex spreadsheet model that proposes a proper project staffing level based on key qualitative variables and statistics. Using the model outputs, the thesis team proposes a headcount solution for the company and identifies problem areas to focus on going forward. All sources of information come from company proprietary and confidential documents.
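Since the model itself is proprietary, the following is only a hedged sketch of the kind of staffing arithmetic such a model might perform: derive a proposed headcount from observed workload under an assumed productive-hours rate, then compare it to the headcount being billed. Every name and constant here is illustrative.

```python
import pandas as pd

crews = pd.read_csv("supplier_crews.csv")    # hypothetical stand-in for the data
HOURS_PER_FTE_MONTH = 160                    # assumed productive hours per head

crews["proposed_headcount"] = (
    crews["monthly_work_orders"] * crews["avg_hours_per_order"]
    / HOURS_PER_FTE_MONTH
).round()
crews["overstaffing"] = crews["billed_headcount"] - crews["proposed_headcount"]
print(crews.sort_values("overstaffing", ascending=False).head())  # problem areas
```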
Contributors: Loo, Andrew (Co-author) / Brennan, Michael (Co-author) / Sheiner, Alexander (Co-author) / Hertzel, Michael (Thesis director) / Simonson, Mark (Committee member) / Barrett, The Honors College (Contributor) / Department of Information Systems (Contributor) / Department of Finance (Contributor) / Department of Supply Chain Management (Contributor) / WPC Graduate Programs (Contributor) / School of Accountancy (Contributor)
Created: 2014-05
Description
We seek a comprehensive measure of the economic prosperity of persons with disabilities. We survey the current literature and identify the major economic indicators used to describe the socioeconomic standing of persons with disabilities. We then develop a methodology for constructing a statistically valid composite index of these indicators, and build this index using data from the 2014 American Community Survey. Finally, we provide context for further use and development of the index and describe an example application of the index in practice.
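A minimal sketch of one common construction for such a composite index, assuming z-score standardization with equal weights; the thesis's actual indicator set, weighting, and validation scheme are not reproduced, and the column names are illustrative.

```python
import pandas as pd

acs = pd.read_csv("acs_2014_indicators.csv")  # hypothetical ACS extract
indicators = ["employment_rate", "median_earnings", "poverty_rate_inverted"]

# standardize each indicator to a z-score, then average with equal weights
z = (acs[indicators] - acs[indicators].mean()) / acs[indicators].std()
acs["prosperity_index"] = z.mean(axis=1)

print(acs[["state", "prosperity_index"]].sort_values("prosperity_index"))
```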
Contributors: Theisen, Ryan (Co-author) / Helms, Tyler (Co-author) / Lewis, Paul (Thesis director) / Reiser, Mark (Committee member) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of Politics and Global Studies (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
Our research encompassed the prospect draft in baseball and looked at what type of player teams drafted to maximize value. We wanted to know which position returned the best value to the team that drafted them, and which level is safer to draft players from: college or high school. We looked at draft data from 2006-2010 for the first ten rounds of players selected. Because there is only a monetary cap on players drafted in the first ten rounds, we restricted our data to these players. Once we set up the parameters, we compiled a spreadsheet of these players with both their signing bonuses and their wins above replacement (WAR). This allowed us to see how much a team was spending per win at the major league level. After the data was compiled, we made pivot tables and graphs to visually represent our data and better understand the numbers. We found that the worst position MLB teams could draft was high school second baseman; they returned the lowest WAR of any player group we looked at. In general, though, high school players were more costly to sign and had lower WARs than their college counterparts, making them, on average, a worse value pick. The best position to pick was college shortstop. Shortstops had the trifecta of the best signability of all players, one of the highest WARs, and the lowest signing bonuses; these are three of the main factors you want in a draft pick, and college shortstops ranked near the top in all three categories. This research can help give guidelines to Major League teams as they select players in the draft. While there will always be exceptions to trends, by following the enclosed research, teams can minimize risk in the draft.
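A minimal sketch of the cost-per-win pivot described above, assuming a compiled spreadsheet with one row per drafted player; the column names and the WAR floor are illustrative choices, not the study's exact treatment.

```python
import pandas as pd

draft = pd.read_csv("draft_2006_2010.csv")   # hypothetical compiled draft data

# floor WAR at a small positive value so bonus-per-win stays defined
draft["cost_per_war"] = draft["signing_bonus"] / draft["war"].clip(lower=0.1)

pivot = draft.pivot_table(index="position",
                          columns="level",    # "HS" vs. "College"
                          values=["war", "signing_bonus", "cost_per_war"],
                          aggfunc="mean")
print(pivot)   # e.g., high school 2B at the bottom, college SS at the top
```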
Contributors: Valentine, Robert (Co-author) / Johnson, Ben (Co-author) / Eaton, John (Thesis director) / Goegan, Brian (Committee member) / Department of Finance (Contributor) / Department of Economics (Contributor) / Department of Information Systems (Contributor) / School of Accountancy (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
Over the past several decades, analytics have become more and more prevalent in the game of baseball. Statistics are used in nearly every facet of the game. Each team develops its own processes, hoping to gain a competitive advantage over the rest of the league. One area of the game that has struggled to produce definitive analytics is amateur scouting. This project seeks to resolve that problem through the creation of a new statistic, the Valued Plate Appearance Index (VPI). The problem is identified through analysis performed to determine whether any correlation exists between performances in the country's top amateur baseball league, the Cape Cod League, and performances in Major League Baseball. After several stats were analyzed, almost no correlation was found between the two. This essentially means that teams have no way to statistically analyze Cape Cod League performance and project future statistics. An inherent contextual error in these amateur statistics prevents them from correlating. The project seeks to close that contextual gap and create concrete, encompassing values to illustrate a player's offensive performance in the Cape League. To solve this problem, data was collected from the 2017 CCBL season. In addition to VPI, Valued Plate Appearance Approach (VPA) and Valued Plate Appearance Result (VPR) were created to better depict a player's all-around performance in each plate appearance. VPA values the quality of a player's approach in each plate appearance. VPR values the quality of the contact result, excluding factors outside the hitter's control. This statistic isolates player performance and eliminates luck that normally cannot be taken into account. This paper results in the segmentation of players from the 2017 CCBL into four different groups, which project how they will perform as they transition into professional baseball. These groups and the new statistics could be essential tools in the evaluation and projection of amateur players by Major League clubs for years to come.
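A minimal sketch of the correlation check that motivated VPI: pair each player's Cape Cod League stat with his later MLB stat and test for a linear relationship. The column names are hypothetical, and the VPI/VPA/VPR formulas themselves, which are the project's own contribution, are not reproduced here.

```python
import pandas as pd
from scipy.stats import pearsonr

pairs = pd.read_csv("ccbl_to_mlb.csv")   # one row per player with both stints
for stat in ["avg", "obp", "slg"]:
    r, p = pearsonr(pairs[f"ccbl_{stat}"], pairs[f"mlb_{stat}"])
    print(f"{stat}: r = {r:.2f}, p = {p:.3f}")   # near-zero r matches the finding
```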
Contributors: Lothrop, Joseph Kent (Author) / Eaton, John (Thesis director) / McIntosh, Daniel (Committee member) / Department of Information Systems (Contributor) / Department of Marketing (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-12
Description
Campaign finance regulation has drastically changed since the founding of the Republic. Originally, few laws regulated how much could be contributed to political campaigns and who could make contributions. One by one, Congress passed laws to limit the possibility of corruption, for example by banning the solicitation of federal workers and banning contributions from corporations. As the United States moved into the 20th century, regulations became more robust, with more accountability. The modern structure of campaign finance regulation was established in the 1970s with legislation like the Federal Election Campaign Act and Supreme Court rulings like Buckley v. Valeo. Since then, the Court has moved increasingly to strike down campaign finance laws it sees as limiting First Amendment free speech. However, Arizona is one of a handful of states that established a system of publicly financed campaigns at the statewide and legislative levels. Passed in 1998, Proposition 200 attempted to limit the influence of money in politics. For my research, I hypothesized that a public financing system like the Arizona Citizens Clean Elections Commission (CCEC) would lead to Democrats running with public funds more than Republicans, women running clean more than men, and rural candidates running clean more than urban ones, and that Democrats, women, and rural candidates who ran clean would win in higher proportions than if they ran traditional campaigns. After compiling data from the CCEC and the National Institute on Money in State Politics, I found that Democrats do run with public funds in statistically higher proportions than Republicans, but when they do, they lose in higher proportions than Democrats who run traditionally. Female candidates ran clean at a statistically higher proportion only from 2002 to 2008, after which the difference was not statistically significant. Across all year ranges, women who ran with public money lost in higher proportions than women who ran traditionally. Similarly, rural candidates ran clean at a statistically higher proportion only from 2002 to 2008. However, they lost at higher proportions only from 2002 to 2008, rather than across the whole range as with women and Democratic candidates.
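For claims of the form "group A ran clean in statistically higher proportions than group B," the standard tool is a two-proportion z-test; here is a hedged sketch with placeholder counts rather than the study's actual data.

```python
from statsmodels.stats.proportion import proportions_ztest

clean_runs = [120, 60]    # placeholder: publicly funded candidates per party
candidates = [200, 180]   # placeholder: total candidates per party

# one-sided test: is the first group's proportion larger than the second's?
z, p = proportions_ztest(count=clean_runs, nobs=candidates, alternative="larger")
print(f"z = {z:.2f}, p = {p:.4f}")   # small p -> statistically higher proportion
```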
Contributors: Marshall, Austin Tyler (Author) / Herrera, Richard (Thesis director) / Jones, Ruth (Committee member) / Economics Program in CLAS (Contributor) / School of Politics and Global Studies (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12