Matching Items (19)

Description
The NFL is one of the largest and most influential industries in the world. In America, few companies have a stronger hold on American culture or generate such a phenomenon from year to year. This project aimed to develop a strategy that helps an NFL team be as successful as possible by defining which positions are most important to a team's success. Data from fifteen years of NFL games was collected, and information on every player in the league was analyzed. First, a benchmark describing an average team was established, and every player in the NFL was then compared to that average. Using ordinary least squares linear regression, this project aims to define a model that shows each position's importance. Finally, once such a model had been established, the focus turned to the NFL draft, with the goal of finding a strategy for where each position should be drafted so that it is most likely to give the best payoff based on the results of the regression in part one.
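
As a rough illustration of the approach described above, the sketch below fits an ordinary least squares regression of a team-level success measure on position-level performance scores. The data file and column names are hypothetical placeholders, and the thesis's benchmark construction is not reproduced here.

```python
# Hypothetical sketch: OLS regression of team success on position-level scores.
# The file and column names ("wins", "qb_score", ...) are placeholders, not the
# thesis's actual variables.
import pandas as pd
import statsmodels.api as sm

teams = pd.read_csv("nfl_team_seasons.csv")   # one row per team-season
positions = ["qb_score", "rb_score", "wr_score", "ol_score", "dl_score", "db_score"]

X = sm.add_constant(teams[positions])         # intercept plus position scores
y = teams["wins"]                             # team success measure

model = sm.OLS(y, X).fit()
print(model.summary())                        # coefficient magnitude ~ position importance
```
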
Contributors: Balzer, Kevin Ryan (Author) / Goegan, Brian (Thesis director) / Dassanayake, Maduranga (Committee member) / Barrett, The Honors College (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
This paper analyzes responses to a survey using a modified fourfold pattern of preference to determine whether implicit information, once made explicit, is practically significant in nudging irrational decision makers toward more rational decisions. For each of the four questions from the fourfold pattern, respondents chose between two scenarios, with an option for indifference, and expected value served as the implicit information. Respondents were then asked about their familiarity with expected value and given the same four questions again, this time with the expected value of each scenario stated explicitly. Respondents were asked to report whether their answers had changed and whether the addition of the explicit information was the reason for that change. The addition of explicit information in the form of expected value proved practically significant, with ~90% of respondents who changed their answers giving it as the reason. In the implicit section of the survey, three of the four questions had a response majority for the lower-expected-value answer. In the explicit section, all four questions had a response majority for the higher-expected-value answer. In moving from the implicit to the explicit section, for each question, the lower-expected-value scenario saw a decrease in its share of responses, while the higher-expected-value scenario and the indifference option both saw an increase.
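
A quick sketch of the implicit information at issue: the expected value of a gamble is the probability-weighted average of its payoffs. The probabilities and dollar amounts below are illustrative, not the survey's actual scenarios.

```python
# Illustrative expected-value calculation for a two-outcome gamble.
# Probabilities and payoffs are made up; they are not the survey's scenarios.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(p * x for p, x in outcomes)

risky = [(0.95, 10_000), (0.05, 0)]   # 95% chance of $10,000, else nothing
sure = [(1.00, 9_000)]                # guaranteed $9,000

print(expected_value(risky))   # 9500.0 -> the higher-expected-value option
print(expected_value(sure))    # 9000.0
```
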
Contributors: Johnson, Matthew (Author) / Goegan, Brian (Thesis director) / Foster, William (Committee member) / School of Sustainability (Contributor) / Economics Program in CLAS (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
This study examines the economic impact of the opioid crisis in the United States. Focusing primarily on the years 2007-2018, I gathered data from the Census Bureau, Centers for Disease Control, and Kaiser Family Foundation to examine the relative impact of a one-dollar increase in GDP per capita on opioid-related death rates. Using a fixed-effects panel data design, I regressed deaths on GDP per capita while holding the following constant: population, U.S. retail opioid prescriptions per 100 people, annual average unemployment rate, percent of the population that is Caucasian, and percent of the population that is male. I found that GDP per capita and opioid-related deaths are negatively correlated, meaning that as opioid deaths rise, GDP per capita falls. This finding is important because opioid overdose harms society, with U.S. life expectancy consistently dropping as opioid death rates rise. Increasing awareness of this topic can help prevent misuse and contribute to an overall reduction in opioid-related deaths.
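
A minimal sketch of the fixed-effects design described above, implemented with state and year dummies in statsmodels; the file name, variable names, and clustering choice are assumptions, and the study's actual coding may differ.

```python
# Hypothetical fixed-effects panel regression: opioid deaths on GDP per capita with
# the listed controls plus state and year fixed effects. Names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("state_year_panel.csv")   # one row per state-year

model = smf.ols(
    "opioid_deaths ~ gdp_per_capita + population + rx_per_100 + unemployment"
    " + pct_white + pct_male + C(state) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["state"]})

print(model.params["gdp_per_capita"])   # sign gives the direction of the association
```
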
Contributors: Ravi, Ritika Lisa (Author) / Goegan, Brian (Thesis director) / Hill, John (Committee member) / Department of Economics (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
More than 40% of all U.S. opioid overdose deaths in 2016 involved a prescription opioid, with more than 46 people dying every day from overdoses involving prescription opioids (CDC, 2017). Over the years, lawmakers have implemented policies and laws to address the opioid epidemic, and many of these vary from state to state. This study lays out the basic guidelines of common pieces of legislation. It also examines relationships between six state-specific prescribing or preventative laws and associated changes in opioid-related deaths using a longitudinal cross-state study design (2007-2015). Specifically, it uses a linear regression to examine changes in state-specific rates of opioid-related deaths after implementation of specific policies, and whether states implementing these policies saw smaller increases than states without them. Initial findings show that three policies have a statistically significant association with opioid-related overdose deaths: Good Samaritan Laws, Standing Order Laws, and Naloxone Liability Laws. Paradoxically, all three policies correlated with an increase in opioid overdose deaths between 2007 and 2016. However, after correcting for the potentially spurious relationship between state-specific timing of policy implementation and death rates, two policies have a statistically significant association (alpha < 0.05) with opioid overdose death rates. First, Naloxone Liability Laws were significantly associated with changes in opioid-related deaths, correlating with a 0.33 log increase in opioid overdose death rates, or a 29% increase; this equates to about 1.39 more deaths per year per 100,000 people. Second, legislation that allows third-party naloxone prescriptions correlated with a 0.33 log decrease in opioid overdose death rates, or a 29% decrease; this equates to 1.39 fewer deaths per year per 100,000 people.
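
The sketch below shows, under assumed variable names, the general shape of the longitudinal cross-state regression described above: log opioid death rates regressed on policy indicators with state and year effects. It is not the study's actual specification.

```python
# Hypothetical cross-state policy regression: log opioid overdose death rates on
# policy indicators with state and year fixed effects. All names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("state_policy_panel.csv")    # one row per state-year
df["log_death_rate"] = np.log(df["opioid_death_rate"])

model = smf.ols(
    "log_death_rate ~ good_samaritan + standing_order + naloxone_liability"
    " + third_party_naloxone + C(state) + C(year)",
    data=df,
).fit()

# A coefficient b on a policy dummy shifts log deaths by b; exp(b) - 1 converts
# that shift into a proportional change in the death rate.
print(model.params.filter(like="naloxone"))
```
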
Contributors: Davis, Joshua Alan (Author) / Hruschka, Daniel (Thesis director) / Gaughan, Monica (Committee member) / School of Human Evolution & Social Change (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
This paper proposes that voter decision making is determined by more than just the policy positions adopted by the candidates in an election, as proposed by Anthony Downs (1957). Using a vector-valued voting model proposed by William Foster (2014), voter behavior can be described mathematically. Voters assign scores to candidates based on both policy and non-policy considerations, then decide which candidate they support based on which has the higher score. The traditional assumption that most of the population will vote is replaced by a function describing the probability of voting based on the candidate scores assigned by individual voters. If a voter's likelihood of voting is not certain but is instead modeled by a sigmoid curve, this has radical implications for party decisions and actions taken during an election cycle. The model also includes a significant interaction term between the candidate scores and the differential between the scores, which enhances the Downsian model. The thesis is presented in a manner similar to Downs' original work, including several allegorical and hypothetical examples of the model in action. The results reveal that single-issue voters can have a significant impact on election outcomes, and that the weight of non-policy considerations is high enough that political parties would spend large sums of money on campaigning. Future research will include designing an experiment to verify the interaction terms, as well as adjusting the model for individual costs so that more empirical analysis can be completed.
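
Foster's exact functional forms are not reproduced here, but the sketch below illustrates the ingredients named above under stated assumptions: candidate scores that combine policy and non-policy considerations, a sigmoid turnout probability, and an interaction between the score levels and their differential. Every function and parameter value is an illustrative placeholder.

```python
# Illustrative scoring-and-turnout sketch (not Foster's actual model): a voter scores
# two candidates, turnout probability follows a sigmoid in those scores, and an
# interaction term couples the score levels with the gap between them.
import math

def candidate_score(policy_fit, nonpolicy_appeal, w_policy=0.6, w_nonpolicy=0.4):
    return w_policy * policy_fit + w_nonpolicy * nonpolicy_appeal

def turnout_probability(s1, s2, gamma=0.5, k=1.0):
    # Interaction: higher scores and a larger gap both raise the stake in voting.
    stake = max(s1, s2) + gamma * (s1 + s2) * abs(s1 - s2)
    return 1.0 / (1.0 + math.exp(-k * (stake - 1.0)))   # sigmoid turnout curve

s_a = candidate_score(policy_fit=0.8, nonpolicy_appeal=0.3)
s_b = candidate_score(policy_fit=0.4, nonpolicy_appeal=0.9)
print("votes for:", "A" if s_a > s_b else "B")
print("turnout probability:", round(turnout_probability(s_a, s_b), 3))
```
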
Contributors: Coulter, Jarod Maxwell (Author) / Foster, William (Thesis director) / Goegan, Brian (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Economics (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Beginning with the publication of Moneyball by Michael Lewis in 2003, the use of sabermetrics, the application of statistical analysis to baseball records, has exploded in major league front offices. Executives Billy Beane, Paul DePodesta, and Theo Epstein are notable figures who have successfully incorporated sabermetrics into their teams' philosophies, resulting in playoff appearances and championship success. The competitive market of baseball, once dominated by the collusion of owners, now promotes innovative thought to analytically develop competitive advantages. The tiered economic payrolls of Major League Baseball (MLB) have created an environment in which large-market teams are capable of "buying" championships through the acquisition of the best available talent in free agency, and small-market teams are pushed to "build" championships through the drafting and systematic farming of high-school and college-level players. The use of sabermetrics supports both models of success, buying and building, by determining a player's productivity without bias. The objective of this paper is to develop a regression-based predictive model that Major League Baseball teams can use to forecast the MLB career average offensive performance of college baseball players from specific conferences. The development of this model required multiple tasks: I. Data was obtained from The Baseball Cube, a baseball records database providing both college and MLB data. II. Modifications were applied to the data to adjust for year-to-year formatting, a missing variable for seasons played, and the presence of missing values, and to correct league identifiers. III. Multiple offensive productivity models capable of handling the obtained dataset and the regression forecasting technique were evaluated. IV. SAS software was used to create the regression models and analyze the residuals for any irregularities or normality violations. The results of this paper find that there is a relationship between Division 1 collegiate baseball conferences and average career offensive productivity in Major League Baseball, with the SEC most accurately reflecting performance.
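
The thesis built its models in SAS; as a hedged Python analogue, the sketch below regresses a career offensive measure on conference dummies and college-level covariates. The file and column names are placeholders, not the actual Baseball Cube fields.

```python
# Python analogue (the thesis used SAS) of a regression forecasting MLB career
# average offensive performance from college conference. Names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

players = pd.read_csv("college_to_mlb.csv")   # one row per player: college + MLB stats

model = smf.ols(
    "mlb_career_ops ~ C(conference) + college_ops + seasons_played",
    data=players,
).fit()

print(model.summary())   # conference dummies capture conference-level differences
# Residual diagnostics analogous to the SAS workflow would follow here, e.g.
# plotting model.resid against model.fittedvalues and checking normality.
```
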
Contributors: Badger, Mathew Bernard (Author) / Goegan, Brian (Thesis director) / Eaton, John (Committee member) / Department of Economics (Contributor) / Department of Marketing (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
Our research encompassed baseball's prospect draft and looked at what type of player teams should draft to maximize value. We wanted to know which position returned the best value to the team that drafted it, and which level, college or high school, is safer to draft players from. We looked at draft data from 2006-2010 for the first ten rounds of players selected; because there is only a monetary cap on players drafted in the first ten rounds, we restricted our data to these players. Once we set the parameters, we compiled a spreadsheet of these players with both their signing bonuses and their wins above replacement (WAR). This allowed us to see how much a team was spending per win at the major league level. After the data was compiled, we made pivot tables and graphs to visually represent our data and better understand the numbers. We found that the worst position MLB teams could draft is high school second baseman; they returned the lowest WAR of any players we looked at. In general, high school players were more costly to sign and had lower WARs than their college counterparts, making them, on average, a worse pick in terms of value. The best position to pick was college shortstop. College shortstops had the trifecta of the best signability of all players, along with one of the highest WARs and one of the lowest signing bonuses; these are three of the main factors you want in a draft pick, and they ranked near the top in all three categories. This research can help provide guidelines to major league teams as they select players in the draft. While there will always be exceptions to trends, by following the enclosed research teams can minimize risk in the draft.
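
A minimal pandas sketch of the pivot-table step described above: mean WAR and mean signing bonus by position and draft level, plus dollars spent per win. The file and column names are placeholders, not the actual spreadsheet fields.

```python
# Hypothetical pivot-table analysis of draft value by position and level.
import pandas as pd

draft = pd.read_csv("draft_2006_2010_rounds_1_10.csv")   # first-ten-round picks

value_table = draft.pivot_table(
    values=["war", "signing_bonus"],
    index="position",
    columns="level",          # "College" or "High School"
    aggfunc="mean",
)

# Dollars spent per win above replacement; lower means better draft value.
cost_per_war = value_table["signing_bonus"] / value_table["war"]
print(cost_per_war.round(0))
```
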
Contributors: Valentine, Robert (Co-author) / Johnson, Ben (Co-author) / Eaton, John (Thesis director) / Goegan, Brian (Committee member) / Department of Finance (Contributor) / Department of Economics (Contributor) / Department of Information Systems (Contributor) / School of Accountancy (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
Longitudinal data involving multiple subjects are common in medical and social science areas. I consider generalized linear mixed models (GLMMs) applied to such longitudinal data, and the problem of searching for optimal designs under such models. In this case, based on optimal design theory, the optimality criteria depend on the estimated parameters, which leads to local optimality. Moreover, the information matrix under a GLMM does not have a closed-form expression. My dissertation includes three topics related to this design problem. The first part is searching for locally optimal designs under GLMMs with longitudinal data. I apply the penalized quasi-likelihood (PQL) method to approximate the information matrix and compare it with several other approximations to show the superiority of PQL. Under different local parameters and design restrictions, locally D- and A-optimal designs are constructed based on the approximation. An interesting finding is that locally optimal designs sometimes apply different designs to different subjects. Finally, the robustness of these locally optimal designs is discussed. In the second part, an unknown observational covariate is added to the previous model. With an unknown observational variable in the experiment, expected optimality criteria are considered. Under different assumptions about the unknown variable and different parameter settings, locally optimal designs are constructed and discussed. In the last part, Bayesian optimal designs are considered under logistic mixed models. Considering different priors on the local parameters, Bayesian optimal designs are generated. Bayesian design under such a model is usually computationally expensive; the running time in this dissertation is optimized to an acceptable amount while maintaining accurate results. I also discuss the robustness of these Bayesian optimal designs, which is the motivation for applying such an approach.
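
For reference, the locally D- and A-optimal design criteria mentioned above can be written in their standard textbook form, where M(ξ, θ) denotes the (approximated) information matrix of design ξ at the assumed parameter value θ; this is the generic formulation, not the dissertation's specific notation.

```latex
% Standard local D- and A-optimality criteria (textbook form).
% M(\xi, \theta): information matrix of design \xi at assumed parameter value \theta.
\begin{align*}
\xi^{*}_{D} &= \arg\max_{\xi} \, \det M(\xi, \theta)
  && \text{(D-optimality: minimize the generalized variance)} \\
\xi^{*}_{A} &= \arg\min_{\xi} \, \operatorname{tr}\!\bigl( M(\xi, \theta)^{-1} \bigr)
  && \text{(A-optimality: minimize the average variance of the estimates)}
\end{align*}
```
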
Contributors: Shi, Yao (Author) / Stufken, John (Thesis advisor) / Kao, Ming-Hung (Thesis advisor) / Lan, Shiwei (Committee member) / Pan, Rong (Committee member) / Reiser, Mark (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Uncertainty Quantification (UQ) is crucial in assessing the reliability of predictive models that make decisions for human experts in a data-rich world. The Bayesian approach to UQ for inverse problems has gained popularity. However, addressing UQ in high-dimensional inverse problems is challenging due to the computational intensity and inefficiency of Markov Chain Monte Carlo (MCMC) based Bayesian inference methods. Consequently, the first primary focus of this thesis is enhancing efficiency and scalability for UQ in inverse problems. On the other hand, the omnipresence of spatiotemporal data, particularly in areas like traffic analysis, underscores the need to effectively address inverse problems with spatiotemporal observations. Conventional solutions often overlook spatial or temporal correlations, resulting in underutilization of spatiotemporal interactions for parameter learning. Appropriately modeling spatiotemporal observations in inverse problems thus forms another pivotal research avenue. In terms of UQ methodologies, the calibration-emulation-sampling (CES) scheme has emerged as effective for large-dimensional problems. I introduce a novel CES approach that employs deep neural network (DNN) models during the emulation and sampling phases. This approach not only enhances computational efficiency but also diminishes sensitivity to variations in the training set. The newly devised "Dimension-Reduced Emulative Autoencoder Monte Carlo (DREAM)" algorithm scales Bayesian UQ up to thousands of dimensions in physics-constrained inverse problems. The algorithm's effectiveness is exemplified through elliptic and advection-diffusion inverse problems. In the realm of spatiotemporal modeling, I propose to use spatiotemporal Gaussian processes (STGP) for likelihood modeling and spatiotemporal Besov processes (STBP) for prior modeling. These approaches highlight the efficacy of incorporating spatial and temporal information for enhanced parameter estimation and UQ. Additionally, the superiority of STGP over static and time-averaged methods is demonstrated on a time-dependent advection-diffusion partial differential equation (PDE) and three chaotic ordinary differential equations (ODEs). Expanding upon the Besov process (BP), a method known for sparsity promotion and edge preservation, STBP is introduced to capture spatial data features and model temporal correlations by replacing the random coefficients in the series expansion with stochastic time functions following a Q-exponential process (Q-EP). This advantage is showcased in dynamic computerized tomography (CT) reconstructions through comparison with the classic STGP and a time-uncorrelated approach.
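
For orientation, the generic Bayesian inverse problem underlying the CES scheme can be written as below, with a forward map and Gaussian observation noise; this is the standard formulation, not the thesis's exact notation.

```latex
% Generic Bayesian inverse problem (standard formulation, not the thesis's notation).
% u: unknown parameter, G: forward map, y: observations, \Gamma: noise covariance,
% \pi_0: prior.
\begin{align*}
y &= G(u) + \eta, \qquad \eta \sim \mathcal{N}(0, \Gamma), \\
\pi(u \mid y) &\propto
  \exp\!\Bigl( -\tfrac{1}{2} \bigl\| \Gamma^{-1/2}\bigl(y - G(u)\bigr) \bigr\|^{2} \Bigr)\, \pi_0(u).
\end{align*}
% CES-type schemes replace repeated evaluations of G inside MCMC with a trained
% emulator (here a DNN) after a calibration stage, then sample the emulated posterior.
```
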
Contributors: Li, Shuyi (Author) / Lan, Shiwei (Thesis advisor) / Hahn, Paul (Committee member) / McCulloch, Robert (Committee member) / Dan, Cheng (Committee member) / Lopes, Hedibert (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
In this work, the author analyzes quantitative and structural aspects of Bayesian inference using Markov kernels, Wasserstein metrics, and Kantorovich monads. In particular, the author shows the following main results: first, that Markov kernels can be viewed as Borel measurable maps with values in a Wasserstein space; second, that the Disintegration Theorem can be interpreted as a literal equality of integrals using an original theory of integration for Markov kernels; third, that the Kantorovich monad can be defined for Wasserstein metrics of any order; and finally, that, under certain assumptions, a generalized Bayes’s Law for Markov kernels provably leads to convergence of the expected posterior distribution in the Wasserstein metric. These contributions provide a basis for studying further convergence, approximation, and stability properties of Bayesian inverse maps and inference processes using a unified theoretical framework that bridges between statistical inference, machine learning, and probabilistic programming semantics.
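
For readers unfamiliar with the metric referenced above, the order-p Wasserstein distance between probability measures μ and ν on a metric space (X, d) has the standard definition below; this is the textbook form, not notation specific to this work.

```latex
% Standard definition of the order-p Wasserstein distance (textbook form).
% \Gamma(\mu, \nu): the set of couplings of \mu and \nu on X \times X.
\[
W_{p}(\mu, \nu) =
  \left( \inf_{\gamma \in \Gamma(\mu, \nu)}
    \int_{X \times X} d(x, y)^{p} \, \mathrm{d}\gamma(x, y) \right)^{1/p},
  \qquad p \ge 1.
\]
```
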
Contributors: Eikenberry, Keenan (Author) / Cochran, Douglas (Thesis advisor) / Lan, Shiwei (Thesis advisor) / Dasarathy, Gautam (Committee member) / Kotschwar, Brett (Committee member) / Shahbaba, Babak (Committee member) / Arizona State University (Publisher)
Created: 2023