Matching Items (44)
Description
In the words of W. Edwards Deming, "the central problem in management and in leadership is failure to understand the information in variation." While many quality management programs propose instituting technical training in advanced statistical methods, this paper proposes that by understanding the fundamental information behind statistical theory, and by minimizing bias and variance while fully utilizing the available information about the system at hand, one can make valuable, accurate predictions about the future. Combining this knowledge with the work of quality gurus W. E. Deming, Eliyahu Goldratt, and Dean Kashiwagi yields a framework for making valuable predictions for continuous improvement. After this information is synthesized, it is concluded that the best way to make accurate, informative predictions about the future is to "balance the present and future," seeing the future through the lens of the present and thus minimizing bias, variance, and risk.
Contributors: Synodis, Nicholas Dahn (Author) / Kashiwagi, Dean (Thesis director, Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
The NFL is one of the largest and most influential industries in the world. In America, few companies have a stronger hold on the culture or create such a phenomenon from year to year. This project aimed to develop a strategy that helps an NFL team be as successful as possible by defining which positions are most important to a team's success. Data from fifteen years of NFL games was collected, and information on every player in the league was analyzed. First, a benchmark describing an average team was established, and every player in the NFL was compared to that average. Using the properties of linear regression with ordinary least squares, this project defines a model that shows each position's importance. Finally, once such a model had been established, the focus turned to the NFL draft, where the goal was to find a strategy for where each position should be drafted so that it is most likely to give the best payoff based on the results of the regression in part one.
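A minimal sketch of the kind of ordinary least squares position-importance model the abstract describes (the feature names, the point-differential response, and the synthetic data are illustrative assumptions, not the thesis's actual dataset):

```python
# Hypothetical sketch: regress team point differential on position-level
# performance measured relative to the league-average player.
import numpy as np
import pandas as pd
import statsmodels.api as sm

n = 480  # e.g., 32 teams x 15 seasons (assumed)
games = pd.DataFrame({
    "qb_above_avg":  np.random.randn(n),
    "ol_above_avg":  np.random.randn(n),
    "wr_above_avg":  np.random.randn(n),
    "def_above_avg": np.random.randn(n),
})
point_diff = (2.5 * games["qb_above_avg"] + 1.0 * games["ol_above_avg"]
              + 0.8 * games["wr_above_avg"] + 1.5 * games["def_above_avg"]
              + np.random.randn(n))

X = sm.add_constant(games)            # intercept plays the role of the "average" benchmark team
model = sm.OLS(point_diff, X).fit()   # ordinary least squares, as in the abstract
print(model.params)                   # larger coefficients suggest more important positions
```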
Contributors: Balzer, Kevin Ryan (Author) / Goegan, Brian (Thesis director) / Dassanayake, Maduranga (Committee member) / Barrett, The Honors College (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
The concentration factor edge detection method was developed to compute the locations and values of jump discontinuities in a piecewise-analytic function from its first few Fourier series coefficients. The method approximates the singular support of a piecewise smooth function using an altered Fourier conjugate partial sum. The accuracy and characteristic features of the resulting jump function approximation depend on these filters, known as concentration factors. Recent research showed that these concentration factors could be designed using a flexible iterative framework, improving upon the overall accuracy and robustness of the method, especially in the case where some Fourier data are untrustworthy or altogether missing. Hypothesis testing (HT) methods were used to determine how well the original concentration factor method could locate edges using noisy Fourier data. This thesis combines the iterative design of concentration factors and hypothesis testing by presenting a new algorithm that incorporates multiple concentration factors into one statistical test, which proves more effective at determining jump discontinuities than the previous HT methods. This thesis also examines how the quantity and location of Fourier data affect the accuracy of HT methods. Numerical examples are provided.
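For reference, a standard formulation of the concentration factor jump approximation from the literature (the generalized conjugate partial sum of Gelb and Tadmor; sign and normalization conventions vary, so treat this as a sketch rather than the exact form used in the thesis):

```latex
% Generalized conjugate partial sum with concentration factor \sigma;
% it approximates the jump function [f](x) = f(x^+) - f(x^-).
\[
  \tilde{S}_N^{\sigma}[f](x)
  \;=\; i \sum_{0 < |k| \le N} \operatorname{sgn}(k)\,
        \sigma\!\left(\tfrac{|k|}{N}\right) \hat{f}_k\, e^{ikx}
  \;\longrightarrow\; [f](x) \quad \text{as } N \to \infty,
\]
where $\hat{f}_k$ are the Fourier coefficients of $f$ and the concentration
factor $\sigma$ controls the accuracy and oscillatory behavior of the
approximation near and away from the jumps.
```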
Contributors: Lubold, Shane Michael (Author) / Gelb, Anne (Thesis director) / Cochran, Doug (Committee member) / Viswanathan, Aditya (Committee member) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Over the course of six months, we have worked in partnership with Arizona State University and a leading producer of semiconductor chips in the United States market (referred to as the "Company"), lending our skills in finance, statistics, model building, and external insight. We attempt to design models that help predict how much time it takes to implement a cost-saving project. These projects had previously been evaluated only on the merit of their cost savings; by adding the dimension of time, we hope to forecast implementation time as a function of a number of variables. Such a forecast can then be applied to an expense project prioritization model, which relates time and cost savings, compares many different projects simultaneously, and returns a series of present value calculations over different ranges of time. The goal is twofold: assist with an accurate prediction of a project's time to implementation, and provide a basis to compare different projects based on their present values, ultimately helping to reduce the Company's manufacturing costs and improve gross margins. We believe this approach, and the research conducted toward this goal, is most valuable for the Company. Two coaches from the Company have provided assistance and clarified our questions when necessary throughout our research. In this paper, we begin by defining the problem, setting an objective, and establishing a checklist to monitor our progress. Next, our attention shifts to the data: making observations, trimming the dataset, and framing and scoping the variables used in the analysis portion of the paper. Before creating a hypothesis, we perform a preliminary statistical analysis of certain individual variables to enrich our variable selection process. After the hypothesis, we run multiple linear regressions with project duration as the dependent variable. After regression analysis and a test for robustness, we shift our focus to an intuitive model based on rules of thumb. We relate these models to an expense project prioritization tool developed using Microsoft Excel software. Our deliverables to the Company come in the form of (1) an intuitive rules-of-thumb model and (2) an expense project prioritization tool.
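A minimal sketch of the time-versus-present-value prioritization idea described above (the discount rate, horizon, and field names are assumptions for illustration, not the Company's actual model or tool):

```python
# Hypothetical sketch: rank cost-saving projects by the present value of their
# savings, where savings only begin after the predicted implementation time.
from dataclasses import dataclass

ANNUAL_DISCOUNT_RATE = 0.10          # assumed hurdle rate
HORIZON_MONTHS = 36                  # assumed evaluation window

@dataclass
class Project:
    name: str
    monthly_savings: float                 # savings realized each month once live
    predicted_months_to_implement: float   # e.g., output of the duration regression

def present_value(p: Project) -> float:
    """Discount monthly savings that start after the implementation delay."""
    r = ANNUAL_DISCOUNT_RATE / 12.0
    pv = 0.0
    for m in range(1, HORIZON_MONTHS + 1):
        if m > p.predicted_months_to_implement:
            pv += p.monthly_savings / (1.0 + r) ** m
    return pv

projects = [Project("Project A", 12_000, 4.0), Project("Project B", 20_000, 14.0)]
for p in sorted(projects, key=present_value, reverse=True):
    print(f"{p.name}: PV = ${present_value(p):,.0f}")
```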
Contributors: Al-Assi, Hashim (Co-author) / Chiang, Robert (Co-author) / Liu, Andrew (Co-author) / Ludwick, David (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Barrett, The Honors College (Contributor) / Department of Information Systems (Contributor) / Department of Finance (Contributor) / Department of Economics (Contributor) / Department of Supply Chain Management (Contributor) / School of Accountancy (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / WPC Graduate Programs (Contributor)
Created: 2015-05
Description
A thorough understanding of the key concepts of logic is critical for student success. Logic is often not explicitly taught as its own subject in modern curriculums, which results in misconceptions among students as to what comprises logical reasoning. In addition, current standardized testing schemes often promote teaching styles which emphasize students' abilities to memorize set problem-solving methods over their capacities to reason abstractly and creatively. These phenomena, in tandem with halting progress in United States education compared to other developed nations, suggest that implementing logic courses in public schools and universities can better prepare students for professional careers and beyond. In particular, logic is essential for mathematics students as they transition from calculation-based courses to theoretical, proof-based classes. Many students find this adjustment difficult, and existing university-level courses which emphasize the technical aspects of symbolic logic do not fully bridge the gap between these two different approaches to mathematics. As a step towards resolving this problem, this project proposes a logic course which integrates historical, technical, and interdisciplinary investigations to present logic as a robust and meaningful subject warranting independent study. This course is designed with mathematics students in mind, with particular emphasis on different formulations of deductively valid proof schemes. Additionally, this class can either be taught before existing logic classes in an effort to gradually expose students to logic over an extended period of time, or it can replace current logic courses as a more holistic introduction to the subject. The first section of the course investigates historical developments in studies of argumentation and logic throughout different civilizations; specifically, the works of ancient China, ancient India, ancient Greece, medieval Europe, and modernity are investigated. Along the way, several important themes are highlighted within appropriate historical contexts; these are often presented in an ad hoc way in courses emphasizing technical features of symbolic logic. After the motivations for modern symbolic logic are established, the key technical features of symbolic logic are presented, including: logical connectives, truth tables, logical equivalence, derivations, predicates, and quantifiers. Potential obstacles in students' understandings of these ideas are anticipated, and resolution methods are proposed. Finally, examples of how ideas of symbolic logic are manifested in many modern disciplines are presented. In particular, key concepts in game theory, computer science, biology, grammar, and mathematics are reformulated in the context of symbolic logic. By combining the three perspectives of historical context, technical aspects, and practical applications of symbolic logic, this course will ideally make logic a more meaningful and accessible subject for students.
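As a small illustration of the truth-table and logical-equivalence material listed above, here is a minimal sketch (the choice of De Morgan's law as the example is ours, not necessarily one used in the proposed course):

```python
# Truth table verifying the logical equivalence  not(p and q)  ==  (not p) or (not q).
from itertools import product

print(f"{'p':>6} {'q':>6} {'not(p and q)':>14} {'(not p) or (not q)':>20}")
for p, q in product([True, False], repeat=2):
    lhs = not (p and q)
    rhs = (not p) or (not q)
    print(f"{p!s:>6} {q!s:>6} {lhs!s:>14} {rhs!s:>20}")
    assert lhs == rhs   # the two columns agree on every row, so the formulas are equivalent
```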
Contributors: Ryba, Austin (Author) / Vaz, Paul (Thesis director) / Jones, Donald (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / School of Historical, Philosophical and Religious Studies (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Coherent vortices are ubiquitous structures in natural flows that affect the mixing and transport of substances, momentum, and energy. Being able to detect these coherent structures is important for pollutant mitigation, ecological conservation, and many other applications. In recent years, mathematical criteria and algorithms have been developed to extract these coherent structures in turbulent flows. In this study, we will apply these tools to extract important coherent structures and analyze their statistical properties as well as their implications for the kinematics and dynamics of the flow. Such information will aid the representation of small-scale nonlinear processes that large-scale models of natural processes may not be able to resolve.
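One widely used vortex-identification criterion of the kind the abstract refers to is the Okubo-Weiss parameter; the thesis does not specify which criteria it applies, so the sketch below is only an assumed example. Regions where rotation dominates strain have W < 0 and are flagged as vortex cores.

```python
# Hypothetical sketch: Okubo-Weiss parameter W = s_n^2 + s_s^2 - omega^2
# on a gridded 2-D velocity field (u, v); W < 0 marks vortex-dominated regions.
import numpy as np

def okubo_weiss(u, v, dx, dy):
    dudx, dudy = np.gradient(u, dx, dy, axis=(0, 1))
    dvdx, dvdy = np.gradient(v, dx, dy, axis=(0, 1))
    normal_strain = dudx - dvdy      # s_n
    shear_strain = dvdx + dudy       # s_s
    vorticity = dvdx - dudy          # omega
    return normal_strain**2 + shear_strain**2 - vorticity**2

# Example: a single Gaussian vortex should give W < 0 near its core.
x = np.linspace(-1, 1, 200); y = np.linspace(-1, 1, 200)
X, Y = np.meshgrid(x, y, indexing="ij")
psi = np.exp(-(X**2 + Y**2) / 0.1)       # streamfunction
dx, dy = x[1] - x[0], y[1] - y[0]
u = np.gradient(psi, dy, axis=1)         # u = d(psi)/dy
v = -np.gradient(psi, dx, axis=0)        # v = -d(psi)/dx
W = okubo_weiss(u, v, dx, dy)
print("W at vortex core:", W[100, 100])  # expected to be negative
```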
Contributors: Cass, Brentlee Jerry (Author) / Tang, Wenbo (Thesis director) / Kostelich, Eric (Committee member) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Exchange traded funds (ETFs) are in many ways similar to more traditional closed-end mutual funds, although they differ in a crucial way. ETFs rely on a creation and redemption feature to achieve their functionality, and this mechanism is designed to minimize the deviations that occur between the ETF's listed price and the net asset value of the ETF's underlying assets. However, while this does cause ETF deviations to be generally lower than those of their mutual fund counterparts, as our paper explores, this process does not eliminate these deviations completely. This article builds off an earlier paper by Engle and Sarkar (2006) that investigates these properties of ETF premiums (discounts) relative to fair market value, and looks at whether these premia have changed in the last 10 years. Our paper then diverges from the original and takes a deeper look into the standard deviations of these premia specifically.

Our findings show that over 70% of an ETF's standard deviation of premia can be explained through a linear combination of two variables: a categorical variable (Domestic [US], Developed, Emerging) and a discrete variable (time difference from the US). This paper also finds that more traditional metrics such as market cap, ETF price volatility, and even third-party market indicators such as the economic freedom index and the investment freedom index are insignificant predictors of an ETF's standard deviation of premia when combined with the categorical variable. These findings differ somewhat from the existing literature, which indicates that these factors should have a significant impact on the predictive ability of an ETF's standard deviation of premia.
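A minimal sketch of the kind of regression this result describes, with the standard deviation of premia explained by a market-category dummy and a time-zone offset (the column names, toy data, and use of statsmodels are assumptions for illustration, not the authors' actual dataset or code):

```python
# Hypothetical sketch: explain the std. dev. of ETF premia with a categorical
# market classification and the time difference from US trading hours.
import pandas as pd
import statsmodels.formula.api as smf

etfs = pd.DataFrame({
    "premia_std":    [0.12, 0.10, 0.45, 0.55, 0.90, 1.10],
    "category":      ["Domestic", "Domestic", "Developed",
                      "Developed", "Emerging", "Emerging"],
    "hours_from_us": [0, 0, 6, 8, 9, 12],
})

# C(category) expands into dummy variables for Developed/Emerging vs. Domestic.
model = smf.ols("premia_std ~ C(category) + hours_from_us", data=etfs).fit()
print(model.params)
print("R-squared:", model.rsquared)   # the thesis reports >70% of variation explained
```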
Contributors: Zhang, Jingbo (Co-author) / Henning, Thomas (Co-author) / Simonson, Mark (Thesis director) / Licon, L. Wendell (Committee member) / Department of Finance (Contributor) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
In the last decade, the population of honey bees across the globe has declined sharply, leaving scientists and beekeepers to wonder why. Among all nations, the United States has seen some of the greatest declines in the last 10-plus years. Without a definite explanation, the term Colony Collapse Disorder (CCD) was coined to describe the sudden and sharp decline of honey bee colonies that beekeepers were experiencing. Colony collapses have risen above expected averages over the years, and losses during the winter season are even more severe than what is normally considered acceptable. Some possible explanations point towards meteorological variables, diseases, and even pesticide usage. Despite the cause of CCD being unknown, thousands of beekeepers have reported their losses, and, in the most recent years, even the numbers of infected colonies and colonies under certain stressors. Using the data reported to the United States Department of Agriculture (USDA), as well as weather data collected by the National Oceanic and Atmospheric Administration (NOAA) and the National Centers for Environmental Information (NCEI), regression analysis was used to find relationships between stressors in honey bee colonies, meteorological variables, and colony collapses during the winter months. The regression analysis focused on the winter season, or quarter 4 of the year, which includes the months of October, November, and December. In the model, the response variable was the percentage of colonies lost in quarter 4. Through the model, it was concluded that certain weather thresholds and the percentage increase of colonies under certain stressors were related to colony loss.
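A minimal sketch of a quarter-4 loss regression of the sort described (all variable names, the temperature threshold, and the data are invented for illustration; the thesis's actual variables and thresholds are not given here):

```python
# Hypothetical sketch: Q4 (Oct-Dec) colony loss percentage regressed on a
# cold-weather threshold indicator and the share of colonies under a stressor.
import pandas as pd
import statsmodels.formula.api as smf

q4 = pd.DataFrame({
    "pct_colonies_lost": [8.5, 12.0, 15.5, 9.0, 18.2, 14.1, 10.3, 16.8],
    "mean_min_temp_f":   [34, 28, 22, 36, 19, 25, 31, 21],
    "pct_varroa_stress": [20, 35, 40, 18, 48, 38, 25, 45],
})
# Indicator for quarters whose mean minimum temperature falls below an
# assumed threshold of 30 degrees Fahrenheit.
q4["below_threshold"] = (q4["mean_min_temp_f"] < 30).astype(int)

model = smf.ols("pct_colonies_lost ~ below_threshold + pct_varroa_stress",
                data=q4).fit()
print(model.params)
```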
Contributors: Vasquez, Henry Antony (Author) / Zheng, Yi (Thesis director) / Saffell, Erinanne (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Analytic research on basketball games is growing quickly, specifically in the National Basketball Association. This paper explored the development of this analytic research and found that there has been a focus on individual player metrics and a dearth of quantitative team characterizations and evaluations. Consequently, this paper continued the exploratory research of Fewell and Armbruster's "Basketball teams as strategic networks" (2012), which modeled basketball teams as networks and used metrics to characterize team strategy in the NBA's 2010 playoffs. Individual players and outcomes were the nodes, and passes and actions were the links. This paper used data recorded from playoff games of the two 2012 NBA finalists: the Miami Heat and the Oklahoma City Thunder. The same metrics that Fewell and Armbruster used were explained and then calculated using this data. The offensive networks of these two teams during the playoffs were analyzed and interpreted using other data and qualitative characterizations of the teams' strategies; the paper found that the calculated metrics largely matched our qualitative characterizations of the teams. The validity of the metrics in this paper and in Fewell and Armbruster's paper was then discussed, and modeling basketball teams as multiple-order Markov chains rather than as networks was explored.
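A minimal sketch of the pass-network representation described above, using a toy weighted, directed network (the players, pass counts, and the particular metrics shown are illustrative assumptions, not data or metrics from the thesis):

```python
# Hypothetical sketch: a weighted, directed pass network with players and a
# shot outcome as nodes; compute simple measures of ball movement.
import networkx as nx

passes = [
    ("PG", "SG", 42), ("PG", "SF", 31), ("SG", "PG", 25),
    ("SF", "PF", 18), ("PF", "C", 12), ("SG", "shot", 30),
    ("SF", "shot", 22), ("C", "shot", 10),
]
G = nx.DiGraph()
G.add_weighted_edges_from(passes)

# Degree centrality: how involved each node is in ball movement.
print(nx.degree_centrality(G))
# Weighted out-degree: total passes (or shots) originating from each player.
print(dict(G.out_degree(weight="weight")))
```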
Contributors: Mohanraj, Hariharan (Co-author) / Choi, David (Co-author) / Armbruster, Dieter (Thesis director) / Fewell, Jennifer (Committee member) / Brooks, Daniel (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2013-05
Description
Previous research discusses students' difficulties in grasping an operational understanding of covariational reasoning. In this study, I interviewed four undergraduate students in calculus and pre-calculus classes to determine their ways of thinking when working on an animated covariation problem. With previous studies in mind and with the use of technology, I devised an interview method, which I structured using multiple phases of pre-planned support. With these interviews, I gathered information about two main aspects of students' thinking: how students think when attempting to reason covariationally, and which of the identified ways of thinking are most propitious for the development of an understanding of covariational reasoning. I will discuss how, based on interview data, one of the five identified ways of thinking about covariational reasoning is highly propitious, while the other four are somewhat less propitious.
Contributors: Whitmire, Benjamin James (Author) / Thompson, Patrick (Thesis director) / Musgrave, Stacy (Committee member) / Moore, Kevin C. (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / T. Denny Sanford School of Social and Family Dynamics (Contributor)
Created: 2014-05