Matching Items (52)

Description

This paper explores the consequences of cleaning rescue ropes with common disinfectants and cleansers in order to assess their usability in cleaning ropes contaminated with bloodborne pathogens. Using a modified version of an industry-standard testing procedure and in-depth statistical analysis, it characterizes the effect each chemical has on the mechanical properties of the rope. The experiment measured the strength and elastic properties of rope core fibers soaked in different chemicals and at different concentration levels. The data show that certain common solutions for cleaning equipment are, in fact, damaging to the equipment and thus dangerous to the users. Even products marketed for climbing ropes were found to be potentially hazardous. The results also demonstrate a curious phenomenon occurring within the washing process that causes a shift in the elastic properties of the fibers, prompting additional research. Further work is needed to expand the breadth and depth of these results and to make effective recommendations to the rope industry and rescue professionals regarding rope care and maintenance.
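The abstract does not specify which statistical procedure was used; as an illustration of how effects of chemical and concentration on fiber strength might be tested, here is a minimal sketch of a two-way ANOVA on invented data (the chemical names, concentration levels, and strength values are hypothetical).

```python
# Hypothetical illustration only: a two-way ANOVA of fiber breaking strength
# against cleaning chemical and concentration. All data values are invented.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "chemical": ["water", "water", "bleach", "bleach", "detergent", "detergent"] * 4,
    "concentration": (["low"] * 6 + ["high"] * 6) * 2,
    "strength_kN": [11.2, 11.0, 9.1, 8.8, 10.4, 10.1,
                    11.1, 10.9, 7.9, 7.6, 9.8, 9.5,
                    11.3, 11.1, 9.0, 8.9, 10.2, 10.0,
                    11.0, 10.8, 7.8, 7.7, 9.7, 9.6],
})

# Fit a linear model with an interaction term and print the ANOVA table.
model = ols("strength_kN ~ C(chemical) * C(concentration)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```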
Contributors: Denike, Andrew Nicholas (Author) / Middleton, James (Thesis director) / Liao, Yabin (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

While former New York Yankees pitcher Goose Gossage unleashed his tirade on the deterioration of the unwritten rules of baseball and nerds ruining the sport about halfway through my writing of the paper, sentiments like his were inspiration for my topic: the evolution of statistics and data in baseball. By telling the story of how baseball data and statistics have evolved, my goal was to also demonstrate how they have been intertwined since the beginning—which would essentially mean that nerds have always been ruining the sport (if you subscribe to that kind of thought).

In the quest to showcase this, it was necessary to document how baseball prospers from numbers and numbers prosper from baseball. The relationship between the two is mutualistic. Furthermore, an all-encompassing historical look at how data and statistics in baseball have matured was a critical portion of the paper. A metric such as batting average went from a radical new measure that posed a threat to the status quo to a fiercely cherished statistic suddenly being unseated by advanced analytics, showing that the creation of the new and the destruction of the old have been incessant. Innovators like Pete Palmer, Dick Cramer and Bill James played a large role in this process in the 1980s. Computers aided their effort and, when paired with the Internet, extended the ability to crunch data to an even larger sector of the population. The unveiling of Statcast at the commencement of the 2015 season showed just how much potential there is for measuring previously unquantifiable baseball acts.

Essentially, there will always be people who lament the presence of data and statistics in baseball. Despite this, the evolution story indicates baseball and numbers will be intertwined into the future, likely to an even greater extent than ever before, as technology and new philosophies become increasingly integrated into front offices and clubhouses.
Contributors: Garcia, Jacob Michael (Author) / Kurland, Brett (Thesis director) / Doig, Stephen (Committee member) / Jackson, Victoria (Committee member) / Walter Cronkite School of Journalism and Mass Communication (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

This paper analyzes the Phoenix Suns' shooting patterns in real NBA games and compares them to the "NBA 2k16" Suns' shooting patterns. Data was collected from the first five Suns' games of the 2015-2016 season and the same games played in "NBA 2k16". The findings of this paper indicate that "NBA 2k16" utilizes statistical findings to model its gameplay. It was also determined that "NBA 2k16" modeled the shooting patterns of the Suns in the first five games of the 2015-2016 season very closely. Both the real Suns' games and the "NBA 2k16" Suns' games showed a higher probability of success for shots taken in the first eight seconds of the shot clock than in the last eight seconds of the shot clock. Similarly, both game types illustrated a trend that the probability of success for a shot increases as a player holds onto the ball longer. This result was not expected for either game type; however, "NBA 2k16" modeled the findings consistently with the real Suns' games. The video game modeled the Suns with significantly more passes per possession than the real Suns' games, while also showing a trend that more passes per possession have a significant effect on the outcome of the shot. This trend was not present in the real Suns' games; however, the literature supports this finding. Also, "NBA 2k16" did not correctly model the allocation of team shots for each player; however, the differences were found only in bench players. Lastly, "NBA 2k16" did not correctly allocate shots across the seven regions for Eric Bledsoe, but there was no evidence indicating that the game did not correctly model the allocation of shots for the other starters, or the probability of success across the regions.
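The abstract does not name the specific tests behind these comparisons; one plausible way to check whether "NBA 2k16" allocates a player's shots across the seven court regions like the real games is a chi-square comparison, sketched below on invented counts.

```python
# Hypothetical illustration: comparing a player's shot allocation across seven
# court regions in real games vs. "NBA 2k16" with a chi-square test.
# The counts are invented, not the thesis's data.
import numpy as np
from scipy.stats import chisquare

real_counts = np.array([14, 9, 6, 11, 8, 5, 12])   # real-game shots per region
game_counts = np.array([10, 12, 4, 13, 7, 6, 13])  # video-game shots per region

# Scale the real-game distribution to the video game's total so the
# expected frequencies sum to the observed total.
expected = real_counts / real_counts.sum() * game_counts.sum()
stat, p_value = chisquare(f_obs=game_counts, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
```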
Contributors: Harrington, John P. (Author) / Armbruster, Dieter (Thesis director) / Kamarianakis, Ioannis (Committee member) / Chemical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

In the words of W. Edwards Deming, "the central problem in management and in leadership is failure to understand the information in variation." While many quality management programs propose the institution of technical training in advanced statistical methods, this paper proposes that by understanding the fundamental information behind statistical theory, and by minimizing bias and variance while fully utilizing the available information about the system at hand, one can make valuable, accurate predictions about the future. Combining this knowledge with the work of quality gurus W. E. Deming, Eliyahu Goldratt, and Dean Kashiwagi yields a framework for making valuable predictions for continuous improvement. After this information is synthesized, it is concluded that the best way to make accurate, informative predictions about the future is to "balance the present and future," seeing the future through the lens of the present and thus minimizing bias, variance, and risk.
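As a generic illustration of the bias-variance idea the thesis builds on (not code from the thesis itself), the sketch below estimates the bias and variance of two simple estimators of a mean by simulation; the shrinkage factor and all numbers are arbitrary assumptions.

```python
# Generic illustration of bias vs. variance: compare two estimators of a
# normal mean over many simulated samples. All values are invented.
import numpy as np

rng = np.random.default_rng(0)
true_mean, n, trials = 10.0, 20, 5000

sample_means = np.empty(trials)
shrunk_means = np.empty(trials)
for i in range(trials):
    x = rng.normal(true_mean, 3.0, size=n)
    sample_means[i] = x.mean()          # unbiased, higher variance
    shrunk_means[i] = 0.8 * x.mean()    # biased toward zero, lower variance

for name, est in [("sample mean", sample_means), ("shrunk mean", shrunk_means)]:
    bias = est.mean() - true_mean
    print(f"{name}: bias = {bias:+.3f}, variance = {est.var():.3f}")
```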
Contributors: Synodis, Nicholas Dahn (Author) / Kashiwagi, Dean (Thesis director, Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description

The NFL is one of the largest and most influential industries in the world. In America there are few companies that have a stronger hold on the culture or that create such a phenomenon from year to year. This project aimed to develop a strategy that helps an NFL team be as successful as possible by defining which positions are most important to a team's success. Data from fifteen years of NFL games was collected, and information on every player in the league was analyzed. First, a benchmark describing an average team was established, and every player in the NFL was compared to that average. Based on the properties of linear regression using ordinary least squares, this project defines a model that shows each position's importance. Finally, once such a model had been established, the focus turned to the NFL draft, where the goal was to find a strategy for where each position should be drafted so that it is most likely to give the best payoff based on the results of the regression in part one.
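A hedged sketch of the kind of ordinary least squares model described might look like the following; the position groupings, variable names, and data are illustrative assumptions, not the thesis's actual specification.

```python
# Hypothetical sketch: team win percentage regressed on position-group ratings
# expressed relative to the league average. Column names and data are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
teams = 32 * 15  # 32 teams over 15 seasons
df = pd.DataFrame({
    "qb_above_avg": rng.normal(0, 1, teams),
    "ol_above_avg": rng.normal(0, 1, teams),
    "db_above_avg": rng.normal(0, 1, teams),
})
# Synthetic response: quarterback weighted most heavily, plus noise.
df["win_pct"] = (0.5 + 0.08 * df["qb_above_avg"] + 0.04 * df["ol_above_avg"]
                 + 0.03 * df["db_above_avg"] + rng.normal(0, 0.05, teams))

X = sm.add_constant(df[["qb_above_avg", "ol_above_avg", "db_above_avg"]])
model = sm.OLS(df["win_pct"], X).fit()
print(model.params)  # estimated importance of each position group
```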
Contributors: Balzer, Kevin Ryan (Author) / Goegan, Brian (Thesis director) / Dassanayake, Maduranga (Committee member) / Barrett, The Honors College (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description

The concentration factor edge detection method was developed to compute the locations and values of jump discontinuities in a piecewise-analytic function from its first few Fourier series coefficients. The method approximates the singular support of a piecewise smooth function using an altered Fourier conjugate partial sum. The accuracy and characteristic features of the resulting jump function approximation depend on these filters, known as concentration factors. Recent research showed that these concentration factors could be designed using a flexible iterative framework, improving upon the overall accuracy and robustness of the method, especially in the case where some Fourier data are untrustworthy or altogether missing. Hypothesis testing methods were used to determine how well the original concentration factor method could locate edges using noisy Fourier data. This thesis combines the iterative design of concentration factors with hypothesis testing by presenting a new algorithm that incorporates multiple concentration factors into one statistical test, which proves more effective at determining jump discontinuities than the previous hypothesis testing methods. This thesis also examines how the quantity and location of Fourier data affect the accuracy of these methods. Numerical examples are provided.
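As background for the method named above, here is a minimal sketch of the classical (noise-free, non-iterative) concentration factor jump approximation, assuming the standard normalization for the factors and a first-order polynomial factor σ(η) = πη applied to a square wave; the iterative design and hypothesis-testing layers developed in the thesis are not reproduced.

```python
# Sketch of the basic concentration factor jump approximation applied to a
# square wave whose Fourier coefficients are known analytically. Uses the
# first-order polynomial factor sigma(eta) = pi * eta (an assumption).
import numpy as np

N = 64                                   # number of Fourier modes on each side
x = np.linspace(-np.pi, np.pi, 512, endpoint=False)
k = np.arange(-N, N + 1)

# Fourier coefficients of f(x) = sign(x) on [-pi, pi): nonzero only for odd k.
denom = 1j * np.pi * k + (k == 0)        # guard against division by zero at k = 0
fhat = np.where(k % 2 != 0, (1 - (-1.0) ** k) / denom, 0.0)

sigma = np.pi * np.abs(k) / N            # polynomial concentration factor
jump_approx = np.real(
    (1j * np.sign(k) * sigma * fhat) * np.exp(1j * np.outer(x, k))
).sum(axis=1)

# The approximation concentrates at the discontinuity: it is ~2 at x = 0
# (the true jump of sign(x)) and near 0 away from the edges.
print("value at x = 0:   ", jump_approx[np.argmin(np.abs(x))])
print("value at x = 1.5: ", jump_approx[np.argmin(np.abs(x - 1.5))])
```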
Contributors: Lubold, Shane Michael (Author) / Gelb, Anne (Thesis director) / Cochran, Doug (Committee member) / Viswanathan, Aditya (Committee member) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

Over the course of six months, we have worked in partnership with Arizona State University and a leading producer of semiconductor chips in the United States market (referred to as the "Company"), lending our skills in finance, statistics, model building, and external insight. We attempt to design models that help predict how much time it takes to implement a cost-saving project. These projects had previously been considered only on the merit of cost savings, but with an added dimension of time, we hope to forecast time according to a number of variables. With such a forecast, we can then apply it to an expense project prioritization model which relates time and cost savings together, compares many different projects simultaneously, and returns a series of present value calculations over different ranges of time. The goal is twofold: assist with an accurate prediction of a project's time to implementation, and provide a basis to compare different projects based on their present values, ultimately helping to reduce the Company's manufacturing costs and improve gross margins. We believe this approach, and the research found toward this goal, is most valuable for the Company. Two coaches from the Company have provided assistance and clarified our questions when necessary throughout our research. In this paper, we begin by defining the problem, setting an objective, and establishing a checklist to monitor our progress. Next, our attention shifts to the data: making observations, trimming the dataset, framing and scoping the variables to be used for the analysis portion of the paper. Before creating a hypothesis, we perform a preliminary statistical analysis of certain individual variables to enrich our variable selection process. After the hypothesis, we run multiple linear regressions with project duration as the dependent variable. After regression analysis and a test for robustness, we shift our focus to an intuitive model based on rules of thumb. We relate these models to an expense project prioritization tool developed using Microsoft Excel software. Our deliverables to the Company come in the form of (1) a rules of thumb intuitive model and (2) an expense project prioritization tool.
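The Company's actual regression and prioritization tool are not reproduced here; the sketch below only illustrates, with an invented discount rate, durations, and savings, how a predicted implementation time and annual savings could be combined into a present value for ranking projects.

```python
# Hypothetical sketch of ranking cost-saving projects by present value once an
# implementation time has been predicted. Discount rate, durations, and savings
# are invented; the Company's actual model is not reproduced here.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    predicted_months_to_implement: float
    annual_savings: float

def present_value(p: Project, annual_rate: float = 0.10, horizon_years: int = 5) -> float:
    """Discount each year's savings, delayed by the predicted implementation time."""
    delay_years = p.predicted_months_to_implement / 12.0
    return sum(
        p.annual_savings / (1 + annual_rate) ** (delay_years + t)
        for t in range(1, horizon_years + 1)
    )

projects = [
    Project("Tool recalibration", 3, 400_000),
    Project("Line consolidation", 14, 900_000),
    Project("Scrap reduction", 7, 550_000),
]
for p in sorted(projects, key=present_value, reverse=True):
    print(f"{p.name}: PV = ${present_value(p):,.0f}")
```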
Contributors: Al-Assi, Hashim (Co-author) / Chiang, Robert (Co-author) / Liu, Andrew (Co-author) / Ludwick, David (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Barrett, The Honors College (Contributor) / Department of Information Systems (Contributor) / Department of Finance (Contributor) / Department of Economics (Contributor) / Department of Supply Chain Management (Contributor) / School of Accountancy (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / WPC Graduate Programs (Contributor)
Created: 2015-05
Description

Coherent vortices are ubiquitous structures in natural flows that affect mixing and transport of substances and momentum/energy. Being able to detect these coherent structures is important for pollutant mitigation, ecological conservation and many other applications. In recent years, mathematical criteria and algorithms have been developed to extract these coherent structures in turbulent flows. In this study, we will apply these tools to extract important coherent structures and analyze their statistical properties as well as their implications on the kinematics and dynamics of the flow. Such information will aid the representation of small-scale nonlinear processes that large-scale models of natural processes may not be able to resolve.
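The abstract does not name the specific extraction criteria used; as one common example of such a criterion, the sketch below computes the Okubo-Weiss parameter on a synthetic two-dimensional velocity field and flags rotation-dominated regions as candidate vortex cores.

```python
# One common vortex-identification criterion (the Okubo-Weiss parameter), shown
# on a synthetic 2-D cellular flow. The thesis's specific criteria and flow data
# are not reproduced here; this is only an illustrative sketch.
import numpy as np

n = 128
x = np.linspace(0, 2 * np.pi, n)
X, Y = np.meshgrid(x, x, indexing="xy")

# Synthetic cellular flow: a grid of counter-rotating vortices.
u = -np.cos(X) * np.sin(Y)
v = np.sin(X) * np.cos(Y)

dx = x[1] - x[0]
du_dy, du_dx = np.gradient(u, dx, dx)   # axis 0 is y, axis 1 is x for "xy" indexing
dv_dy, dv_dx = np.gradient(v, dx, dx)

# Okubo-Weiss: W = (normal strain)^2 + (shear strain)^2 - (vorticity)^2.
normal_strain = du_dx - dv_dy
shear_strain = dv_dx + du_dy
vorticity = dv_dx - du_dy
W = normal_strain**2 + shear_strain**2 - vorticity**2

vortex_fraction = (W < 0).mean()  # rotation-dominated (vortex core) regions
print(f"fraction of domain flagged as vortex cores: {vortex_fraction:.2f}")
```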
Contributors: Cass, Brentlee Jerry (Author) / Tang, Wenbo (Thesis director) / Kostelich, Eric (Committee member) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description

Exchange traded funds (ETFs) are in many ways similar to more traditional closed-end mutual funds, although they differ in a crucial way. ETFs rely on a creation and redemption feature to achieve their functionality, and this mechanism is designed to minimize the deviations that occur between the ETF's listed price and the net asset value of the ETF's underlying assets. However, while this does cause ETF deviations to be generally lower than those of their mutual fund counterparts, as our paper explores, this process does not eliminate these deviations completely. This article builds off an earlier paper by Engle and Sarkar (2006) that investigates these properties of premiums (discounts) of ETFs from their fair market value, and looks to see whether these premia have changed in the last 10 years. Our paper then diverges from the original and takes a deeper look into the standard deviations of these premia specifically.

Our findings show that over 70% of an ETF's standard deviation of premia can be explained through a linear combination of two variables: a categorical variable (Domestic [US], Developed, Emerging) and a discrete variable (time difference from the US). This paper also finds that more traditional metrics such as market cap, ETF price volatility, and even third-party market indicators such as the economic freedom index and investment freedom index are insignificant predictors of an ETF's standard deviation of premia when combined with the categorical variable. These findings differ somewhat from the existing literature, which indicates that these factors should be significant predictors of an ETF's standard deviation of premia.
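A hedged sketch of the kind of linear model described, regressing the standard deviation of premia on a market category and a time difference from the US, is shown below; the data values and column names are invented for illustration.

```python
# Hypothetical sketch: an ETF's standard deviation of premia regressed on a
# market category and the time-zone difference from the US. Data are invented.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "premium_sd": [0.12, 0.10, 0.45, 0.52, 0.80, 0.95, 0.14, 0.48, 0.88],
    "category":   ["Domestic", "Domestic", "Developed", "Developed",
                   "Emerging", "Emerging", "Domestic", "Developed", "Emerging"],
    "hours_from_us": [0, 0, 6, 8, 12, 13, 0, 7, 11],
})

# Categorical market variable plus the time difference from US trading hours.
model = smf.ols("premium_sd ~ C(category) + hours_from_us", data=data).fit()
print(model.params)
print(f"R-squared: {model.rsquared:.2f}")
```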
Contributors: Zhang, Jingbo (Co-author) / Henning, Thomas (Co-author) / Simonson, Mark (Thesis director) / Licon, L. Wendell (Committee member) / Department of Finance (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description

In the last decade, the population of honey bees across the globe has declined sharply, leaving scientists and beekeepers to wonder why. Among all nations, the United States has seen some of the greatest declines in the last 10-plus years. Without a definite explanation, the term Colony Collapse Disorder (CCD) was coined to explain the sudden and sharp decline of the honey bee colonies that beekeepers were experiencing. Colony losses have been rising above expected averages over the years, and during the winter season losses are even more severe than what is normally considered acceptable. There are some possible explanations pointing towards meteorological variables, diseases, and even pesticide usage. Despite the cause of CCD being unknown, thousands of beekeepers have reported their losses, and even numbers of infected colonies and colonies under certain stressors in the most recent years. Using the data reported to the United States Department of Agriculture (USDA), as well as weather data collected by the National Oceanic and Atmospheric Administration (NOAA) and the National Centers for Environmental Information (NCEI), regression analysis was used to find relationships between stressors in honey bee colonies, meteorological variables, and colony collapses during the winter months. The regression analysis focused on the winter season, or quarter 4 of the year, which includes the months of October, November, and December. In the model, the response variable was the percentage of colonies lost in quarter 4. Through the model, it was concluded that certain weather thresholds and the percentage increase of colonies under certain stressors were related to colony loss.
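The thesis's actual dataset and model specification are not reproduced here; the sketch below shows, on invented numbers, the general form of such a regression of quarter-4 colony loss on a stressor percentage and meteorological variables.

```python
# Hypothetical sketch of the regression described: quarter-4 percentage of
# colonies lost modeled on a stressor percentage and weather variables.
# All numbers and variable names are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "pct_colonies_lost_q4": [8.5, 12.1, 15.3, 9.8, 18.2, 11.4, 14.7, 20.1, 10.3, 16.8],
    "pct_varroa_stressed":  [20, 31, 42, 25, 55, 28, 40, 60, 22, 47],
    "avg_min_temp_c":       [2.1, -1.4, -3.8, 1.0, -6.2, 0.3, -2.9, -7.5, 1.8, -4.4],
    "total_precip_mm":      [80, 95, 60, 110, 50, 88, 72, 45, 105, 66],
})

model = smf.ols(
    "pct_colonies_lost_q4 ~ pct_varroa_stressed + avg_min_temp_c + total_precip_mm",
    data=data,
).fit()
print(model.params)
```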
Contributors: Vasquez, Henry Antony (Author) / Zheng, Yi (Thesis director) / Saffell, Erinanne (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05