Matching Items (38)
Description
By the von Neumann min-max theorem, a two-person zero-sum game with finitely many pure strategies has a unique value for each player (summing to zero), and each player has a non-empty set of optimal mixed strategies. If the payoffs are independent, identically distributed (iid) uniform (0,1) random variables, then with probability one, both players have unique optimal mixed strategies utilizing the same number of pure strategies with positive probability (Jonasson 2004). The pure strategies with positive probability in the unique optimal mixed strategies are called saddle squares. In 1957, Goldman evaluated the probability of a saddle point (a 1 by 1 saddle square), a result rediscovered by many authors including Thorp (1979). Thorp gave two proofs of the probability of a saddle point, one using combinatorics and one using a beta integral. In 1965, Falk and Thrall investigated the integrals required for the probabilities of a 2 by 2 saddle square for 2 × n and m × 2 games with iid uniform (0,1) payoffs, but they were not able to evaluate the integrals. This dissertation generalizes Thorp's beta integral proof of Goldman's probability of a saddle point, establishing an integral formula for the probability that an m × n game with iid uniform (0,1) payoffs has a k by k saddle square (k ≤ min(m, n)). Additionally, the probabilities of a 2 by 2 and a 3 by 3 saddle square for a 3 × 3 game with iid uniform (0,1) payoffs are found. For these, the 14 integrals observed by Falk and Thrall are dissected into 38 disjoint domains, and the integrals are evaluated using the basic properties of the dilogarithm function. The final results for the probabilities of a 2 by 2 and a 3 by 3 saddle square in a 3 × 3 game are linear combinations of 1, π², and ln(2) with rational coefficients.
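Goldman's 1957 result referenced above gives the probability of a saddle point in closed form as m!n!/(m + n - 1)! for an m × n game with iid continuous payoffs. The following minimal sketch, included here purely as an illustration and not taken from the dissertation, checks that formula by Monte Carlo for uniform (0,1) payoffs.

```python
import numpy as np
from math import factorial

def has_saddle_point(A):
    """True if some entry is both the minimum of its row and the maximum of its column."""
    row_min = A.min(axis=1, keepdims=True)   # shape (m, 1)
    col_max = A.max(axis=0, keepdims=True)   # shape (1, n)
    return bool(np.any((A == row_min) & (A == col_max)))

def saddle_point_probability(m, n, trials=100_000, seed=0):
    """Monte Carlo estimate of P(saddle point) for iid uniform (0,1) payoffs."""
    rng = np.random.default_rng(seed)
    hits = sum(has_saddle_point(rng.uniform(size=(m, n))) for _ in range(trials))
    return hits / trials

m, n = 3, 3
estimate = saddle_point_probability(m, n)
exact = factorial(m) * factorial(n) / factorial(m + n - 1)   # Goldman's formula
print(f"Monte Carlo: {estimate:.4f}   m!n!/(m+n-1)!: {exact:.4f}")   # both close to 0.3
```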
Contributors: Manley, Michael (Author) / Kadell, Kevin W. J. (Thesis advisor) / Kao, Ming-Hung (Committee member) / Lanchier, Nicolas (Committee member) / Lohr, Sharon (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Many longitudinal studies, especially in clinical trials, suffer from missing data issues. Most estimation procedures assume that the missing values are ignorable or missing at random (MAR). However, this assumption is an unrealistic simplification and is implausible for many cases. For example, suppose an investigator is examining the effect of treatment on depression. Subjects are scheduled with doctors on a regular basis and asked questions about recent emotional situations. Patients who are experiencing severe depression are more likely to miss an appointment and leave the data missing for that particular visit. Data that are not missing at random may produce biased results if the missing mechanism is not taken into account, because the missing mechanism is related to the unobserved responses. Data are said to be non-ignorably missing if the probabilities of missingness depend on quantities that might not be included in the model. Classical pattern-mixture models for non-ignorable missing values are widely used for longitudinal data analysis because they do not require explicit specification of the missing mechanism: the data are stratified according to a variety of missing patterns and a model is specified for each stratum. However, this usually results in under-identifiability, because many stratum-specific parameters must be estimated even though the eventual interest is usually in the marginal parameters. Pattern-mixture models also have the drawback that a large sample is usually required. In this thesis, two studies are presented. The first study is motivated by an open problem from pattern-mixture models. Simulation studies in this part show that the information in the missing data indicators can be well summarized by a simple continuous latent structure, indicating that a large number of missing data patterns may be accounted for by a simple latent factor. The findings obtained in the first study lead to a novel model, the continuous latent factor model (CLFM). The second study develops the CLFM, which is used to model the joint distribution of missing values and longitudinal outcomes. The proposed CLFM is feasible even for small-sample applications. The estimation theory is presented in detail, including estimation techniques from both frequentist and Bayesian perspectives. Model performance and evaluation are studied through designed simulations and three applications. Simulation and application settings range from a correctly specified missing data mechanism to a mis-specified mechanism and include different sample sizes from longitudinal studies. Among the three applications, an AIDS study includes non-ignorable missing values; the Peabody Picture Vocabulary Test data give no indication of the missing data mechanism and are used for a sensitivity analysis; and the Growth of Language and Early Literacy Skills in Preschoolers with Developmental Speech and Language Impairment study has fully complete data and is used to conduct a robustness analysis. The CLFM is shown to provide more precise estimators, specifically for intercept- and slope-related parameters, compared with Roy's latent class model and the classic linear mixed model. This advantage is more pronounced for small sample sizes, where Roy's model has difficulty with estimation convergence. The proposed CLFM is also robust when missing data are ignorable, as demonstrated through the study on Growth of Language and Early Literacy Skills in Preschoolers.
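As a toy illustration of the non-ignorable (MNAR) dropout described above, and not of the CLFM itself, the following sketch simulates a longitudinal score whose chance of being missing rises with the current, possibly unobserved value, so the observed-data mean is biased. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_visits = 200, 5

# Toy longitudinal outcome (think of a depression score): subject effect + time trend + noise.
subject_effect = rng.normal(0, 1, size=(n_subjects, 1))
y = subject_effect + 0.3 * np.arange(n_visits) + rng.normal(0, 1, size=(n_subjects, n_visits))

# Non-ignorable (MNAR) missingness: the chance of skipping a visit rises with the current,
# possibly unobserved score, so missingness depends on exactly what is missing.
p_miss = 1 / (1 + np.exp(-(y - 1.5)))
observed = rng.uniform(size=y.shape) > p_miss
y_observed = np.where(observed, y, np.nan)

print("missing rate:", round(1 - observed.mean(), 3))
print("mean of all scores:     ", round(float(y.mean()), 3))
print("mean of observed scores:", round(float(np.nanmean(y_observed)), 3))  # biased low under MNAR
```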
Contributors: Zhang, Jun (Author) / Reiser, Mark R. (Thesis advisor) / Barber, Jarrett (Thesis advisor) / Kao, Ming-Hung (Committee member) / Wilson, Jeffrey (Committee member) / St Louis, Robert D. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Parallel Monte Carlo applications require the pseudorandom numbers used on each processor to be independent in a probabilistic sense. The TestU01 software package is the standard testing suite for detecting stream dependence and other properties that make certain pseudorandom generators ineffective in parallel (as well as serial) settings. TestU01 employs two basic schemes for testing parallel-generated streams. The first applies serial tests to the individual streams and then tests the resulting p-values for uniformity. The second turns all the parallel-generated streams into one long vector and then applies serial tests to the resulting concatenated stream. Various forms of stream dependence can be missed by each approach because neither one fully addresses the multivariate nature of the accumulated data when generators are run in parallel. This dissertation identifies these potential faults in the parallel testing methodologies of TestU01 and investigates two different methods to better detect inter-stream dependencies: correlation-motivated multivariate tests and tests based on vector time series. These methods have been implemented in an extension to TestU01 built in C++ and the unique aspects of this extension are discussed. A variety of different generation scenarios are then examined using the TestU01 suite in concert with the extension. This enhanced software package is found to better detect certain forms of inter-stream dependencies than the original TestU01 suite of tests.
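A rough Python analogue of the first TestU01 scheme described above is sketched below; it is an illustration rather than TestU01 itself, and the stream construction (independently seeded NumPy generators) is invented. It applies a serial goodness-of-fit test to each stream and then tests the resulting p-values for uniformity, which is exactly the step that never examines dependence across streams.

```python
import numpy as np
from scipy import stats

# 64 "parallel" streams from independently seeded generators (an invented setup).
streams = [np.random.default_rng(seed).uniform(size=10_000) for seed in range(64)]

# First level: a serial goodness-of-fit test (Kolmogorov-Smirnov vs. Uniform(0,1)) per stream.
p_values = np.array([stats.kstest(x, "uniform").pvalue for x in streams])

# Second level: if each stream is individually well behaved, the p-values should themselves
# look Uniform(0,1).  Note this step never examines dependence *across* streams.
second_level = stats.kstest(p_values, "uniform")
print("second-level KS p-value:", round(second_level.pvalue, 4))
```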
Contributors: Ismay, Chester (Author) / Eubank, Randall (Thesis advisor) / Young, Dennis (Committee member) / Kao, Ming-Hung (Committee member) / Lanchier, Nicolas (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Value-added models (VAMs) are used by many states to assess the contributions of individual teachers and schools to students' academic growth. The generalized persistence VAM, one of the most flexible in the literature, estimates the "value added" by individual teachers to their students' current and future test scores by employing a mixed model with a longitudinal database of test scores. There is concern, however, that missing values, which are common in longitudinal student scores, can bias value-added assessments, especially when the models serve as a basis for personnel decisions such as promoting or dismissing teachers, as they do in some states. Certain types of missing data require that the VAM be modeled jointly with the missingness process in order to obtain unbiased parameter estimates. This dissertation studies two problems. First, the flexibility and multimembership random effects structure of the generalized persistence model lead to computational challenges that have limited the model's availability. To this point, no methods have been developed for scalable maximum likelihood estimation of the model. An EM algorithm to compute maximum likelihood estimates efficiently is developed, making use of the sparse structure of the random effects and error covariance matrices. The algorithm is implemented in the package GPvam in R statistical software. Illustrations of the gains in computational efficiency achieved by the estimation procedure are given. Furthermore, to address the presence of potentially nonignorable missing data, a flexible correlated random effects model is developed that extends the generalized persistence model to jointly model the test scores and the missingness process, allowing the process to depend on both students and teachers. The joint model gives the ability to test the sensitivity of the VAM to the presence of nonignorable missing data. Estimation of the model is challenging due to the non-hierarchical dependence structure and the resulting intractable high-dimensional integrals. Maximum likelihood estimation of the model is performed using an EM algorithm with fully exponential Laplace approximations for the E step. The methods are illustrated with data from university calculus classes and with data from standardized test scores from an urban school district.
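To give a rough feel (in Python, not the R package GPvam mentioned above) for why the random-effects design in a persistence VAM is sparse, the toy sketch below builds a score-by-teacher membership matrix in which each score row touches only the teachers a student has had up to that year. The sizes and the simple "current plus prior teachers" layout are invented for this sketch and are not the generalized persistence model itself.

```python
import numpy as np
from scipy import sparse

n_students, n_years, teachers_per_year = 6, 2, 2
rng = np.random.default_rng(0)
# Year 0 teachers are {0, 1}, year 1 teachers are {2, 3}, and so on.
teacher_of = np.column_stack([
    rng.integers(year * teachers_per_year, (year + 1) * teachers_per_year, size=n_students)
    for year in range(n_years)
])

rows, cols = [], []
for s in range(n_students):
    for year in range(n_years):
        score_row = s * n_years + year
        for past_year in range(year + 1):   # the year-`year` score loads on current and prior teachers
            rows.append(score_row)
            cols.append(teacher_of[s, past_year])

Z = sparse.csr_matrix((np.ones(len(rows)), (rows, cols)),
                      shape=(n_students * n_years, n_years * teachers_per_year))
print(Z.toarray())   # mostly zeros: row s*n_years + year has exactly year + 1 nonzero entries
```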
Contributors: Karl, Andrew (Author) / Lohr, Sharon L (Thesis advisor) / Yang, Yan (Thesis advisor) / Kao, Ming-Hung (Committee member) / Montgomery, Douglas C. (Committee member) / Wilson, Jeffrey R (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
When analyzing longitudinal data it is essential to account both for the correlation inherent in the repeated measures of the responses and for the correlation arising from the feedback between the responses at a particular time and the predictors at other times. A generalized method of moments (GMM) approach for estimating the regression coefficients in longitudinal data is presented. The appropriate and valid estimating equations associated with the time-dependent covariates are identified, thus providing substantial gains in efficiency over generalized estimating equations (GEE) with the independent working correlation. Identifying the estimating equations for computation is of utmost importance. This paper provides a technique for identifying the relevant estimating equations through a generalized method of moments. I develop an approach that makes use of all the valid estimating equations necessary with each time-dependent and time-independent covariate. Moreover, my approach does not assume that feedback is always present over time, or present at the same degree. I fit the GMM correlated logistic regression model in SAS with PROC IML. I examine two datasets for illustrative purposes. I look at rehospitalization in a Medicare database. I revisit data regarding the relationship between the body mass index and future morbidity among children in the Philippines. These datasets allow me to compare my results with some earlier methods of analysis.
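The following is a bare-bones Python sketch of the GMM idea itself (the thesis fits its model in SAS with PROC IML): stack moment conditions for a logistic model and minimize the quadratic form of their sample averages. The simulated data, the identity weight matrix, and the choice to keep every moment are all simplifications for illustration; the thesis's contribution is precisely the selection of which moments remain valid with time-dependent covariates.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated logistic data (invented for the sketch).
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

def gmm_objective(beta, X, y, W=None):
    """Quadratic form gbar' W gbar of the averaged moment conditions x_i * (y_i - p_i)."""
    p = 1 / (1 + np.exp(-X @ beta))
    gbar = X.T @ (y - p) / len(y)
    W = np.eye(len(gbar)) if W is None else W
    return gbar @ W @ gbar

fit = minimize(gmm_objective, x0=np.zeros(X.shape[1]), args=(X, y), method="BFGS")
print("GMM estimates:", fit.x.round(3), "  true:", beta_true)
```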
Contributors: Yin, Jianqiong (Author) / Wilson, Jeffrey Wilson (Thesis advisor) / Reiser, Mark R. (Committee member) / Kao, Ming-Hung (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
A least total triangle area method was proposed by Teissier (1948) for fitting a straight line to data from a pair of variables without treating either variable as the dependent variable, while allowing each of the variables to have measurement error. This method is commonly called Reduced Major Axis (RMA) regression and is often used instead of Ordinary Least Squares (OLS) regression. Results for confidence intervals, hypothesis testing, and asymptotic distributions of coefficient estimates in the bivariate case are reviewed. A generalization of RMA to more than two variables for fitting a plane to data is obtained by minimizing the sum of a function of the volumes obtained by drawing, from each data point, lines parallel to each coordinate axis to the fitted plane (Draper and Yang 1997; Goodman and Tofallis 2003). Generalized RMA results for the multivariate case obtained by Draper and Yang (1997) are reviewed and some investigations of multivariate RMA are given. A linear model is proposed that does not specify a dependent variable and allows for errors in the measurement of each variable. Coefficients in the model are estimated by minimization of the function of the volumes previously mentioned. Methods for obtaining coefficient estimates are discussed and simulations are used to investigate the distribution of coefficient estimates. The effects of sample size, sampling error, and correlation among variables on the estimates are studied. Bootstrap methods are used to obtain confidence intervals for model coefficients. Residual analysis is considered for assessing model assumptions. Outlier and influential case diagnostics are developed and a forward selection method is proposed for subset selection of model variables. A real data example is provided that uses the methods developed. Topics for further research are discussed.
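In the bivariate case reviewed above, the RMA slope has a simple closed form: the sign of the correlation times the ratio of sample standard deviations, sign(r)·s_y/s_x, with the fitted line passing through the sample means. The short Python sketch below (an illustration with simulated data, not code from the dissertation) compares it with the attenuated OLS slope when x carries measurement error.

```python
import numpy as np

def rma_fit(x, y):
    """Bivariate reduced major axis fit: slope = sign(r) * sd(y)/sd(x), line through the means."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return intercept, slope

rng = np.random.default_rng(2)
true_x = rng.normal(size=300)
x = true_x + rng.normal(scale=0.5, size=300)                 # x observed with measurement error
y = 1.0 + 2.0 * true_x + rng.normal(scale=0.5, size=300)     # noisy response

ols_slope = np.polyfit(x, y, 1)[0]
_, rma_slope = rma_fit(x, y)
print(f"OLS slope (attenuated): {ols_slope:.3f}   RMA slope: {rma_slope:.3f}")
```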
Contributors: Li, Jingjin (Author) / Young, Dennis (Thesis advisor) / Eubank, Randall (Thesis advisor) / Reiser, Mark R. (Committee member) / Kao, Ming-Hung (Committee member) / Yang, Yan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The NFL is one of the largest and most influential industries in the world. In America there are few companies that have a stronger hold on the culture or that create such a phenomenon from year to year. This project aimed to develop a strategy that helps an NFL team be as successful as possible by determining which positions are most important to a team's success. Data from fifteen years of NFL games were collected, and information on every player in the league was analyzed. First, a benchmark describing an average team was established, and every player in the NFL was compared to that average. Using linear regression with ordinary least squares, this project defines a model that shows each position's importance. Finally, once such a model had been established, the focus turned to the NFL draft, with the goal of finding where each position should be drafted so that it is most likely to give the best payoff based on the results of the regression in part one.
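A hypothetical illustration of the kind of regression described above: team wins regressed on position-group performance relative to league average, with the coefficients read as a rough measure of positional importance. All numbers, position groupings, and the win model below are invented for the sketch and are not the thesis's data or results.

```python
import numpy as np

rng = np.random.default_rng(3)
positions = ["QB", "RB", "WR", "OL", "DL", "LB", "DB"]
n_team_seasons = 15 * 32                                   # fifteen seasons of 32 teams

# Invented position-group ratings relative to league average, and an invented win model.
X = rng.normal(size=(n_team_seasons, len(positions)))
true_weights = np.array([3.0, 0.8, 1.2, 1.5, 1.4, 1.0, 1.3])
wins = 8 + X @ true_weights + rng.normal(scale=2, size=n_team_seasons)

# Ordinary least squares: each coefficient is wins added per unit of above-average play.
design = np.column_stack([np.ones(n_team_seasons), X])
coef, *_ = np.linalg.lstsq(design, wins, rcond=None)
for pos, b in zip(positions, coef[1:]):
    print(f"{pos}: {b:+.2f} wins per unit of above-average play")
```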
Contributors: Balzer, Kevin Ryan (Author) / Goegan, Brian (Thesis director) / Dassanayake, Maduranga (Committee member) / Barrett, The Honors College (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05
Description
This paper analyzes responses to a survey using a modified fourfold pattern of preference to determine whether implicit information, once made explicit, is practically significant in nudging irrational decision makers toward more rational decisions. Respondents chose between two scenarios, with an option for indifference, for each of the four questions from the fourfold pattern, with expected value serving as the implicit information. Respondents were then asked about their familiarity with expected value and given the same four questions again, but with the expected value for each scenario explicitly stated. Respondents were asked to give feedback on whether their answers had changed and whether the addition of the explicit information was the reason for that change. The results found the addition of explicit information in the form of expected value to be practically significant, with roughly 90% of respondents who changed their answers giving it as the reason. In the implicit section of the survey, three of the four questions had a majority of responses favoring the lower expected value scenario over the alternative. In the explicit section of the survey, all four questions had a majority of responses favoring the higher expected value scenario. In moving from the implicit to the explicit section, for each question the lower expected value scenario lost response share, while the higher expected value scenario and the indifference option both gained.
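As a worked illustration of the expected value information that was left implicit and then made explicit, the snippet below computes the expected value of a simple gamble against a certain payment. The probabilities and dollar amounts are invented here, since the survey's actual scenarios are not reproduced in the abstract.

```python
def expected_value(prob_win, payoff, otherwise=0.0):
    """Expected value of a two-outcome gamble."""
    return prob_win * payoff + (1 - prob_win) * otherwise

# Invented scenarios, roughly in the spirit of a fourfold-pattern question:
gamble  = expected_value(0.05, 10_000)   # 5% chance of $10,000  -> EV = $500
certain = expected_value(1.00, 450)      # a sure $450           -> EV = $450
print(f"gamble EV: ${gamble:,.0f}   certain option EV: ${certain:,.0f}")
```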
Contributors: Johnson, Matthew (Author) / Goegan, Brian (Thesis director) / Foster, William (Committee member) / School of Sustainability (Contributor) / Economics Program in CLAS (Contributor) / Dean, W.P. Carey School of Business (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
This study examines the economic impact of the opioid crisis in the United States. Primarily testing the years 2007-2018, I gathered data from the Census Bureau, Centers for Disease Control, and Kaiser Family Foundation in order to examine the relative impact of a one-dollar increase in GDP per capita on the death rates caused by opioids. By implementing a fixed-effects panel data design, I regressed deaths on GDP per capita while holding the following constant: population, U.S. retail opioid prescriptions per 100 people, annual average unemployment rate, percent of the population that is Caucasian, and percent of the population that is male. I found that GDP per capita and opioid-related deaths are negatively correlated, meaning that with every additional person dying from opioids, GDP per capita decreases. This finding is important because opioid overdose is harmful to society, and U.S. life expectancy is consistently dropping as opioid death rates rise. Increasing awareness of this topic can help prevent misuse and contribute to an overall reduction in opioid-related deaths.
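A minimal sketch of the fixed-effects panel design described above is given below, using invented data and placeholder column names rather than the actual Census, CDC, and Kaiser Family Foundation series; it simply shows how state and year dummies supply the two-way fixed effects around the GDP per capita slope.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented state-year panel with placeholder column names.
rng = np.random.default_rng(4)
states = [f"S{i}" for i in range(10)]
years = range(2007, 2019)
df = pd.DataFrame([(s, y) for s in states for y in years], columns=["state", "year"])
df["gdp_per_capita"] = rng.normal(50_000, 8_000, size=len(df))
df["unemployment"] = rng.uniform(3, 10, size=len(df))
df["opioid_death_rate"] = 15 - 0.0001 * df["gdp_per_capita"] + rng.normal(0, 2, size=len(df))

# State and year dummies supply the two-way fixed effects; the GDP slope is then
# identified from within-state variation over time.
fit = smf.ols("opioid_death_rate ~ gdp_per_capita + unemployment + C(state) + C(year)",
              data=df).fit()
print(fit.params["gdp_per_capita"])
```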
Contributors: Ravi, Ritika Lisa (Author) / Goegan, Brian (Thesis director) / Hill, John (Committee member) / Department of Economics (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
More than 40% of all U.S. opioid overdose deaths in 2016 involved a prescription opioid, with more than 46 people dying every day from overdoses involving prescription opioids (CDC, 2017). Over the years, lawmakers have implemented policies and laws to address the opioid epidemic, and many of these vary from state to state. This study lays out the basic guidelines of common pieces of legislation. It also examines relationships between 6 state-specific prescribing or preventative laws and associated changes in opioid-related deaths using a longitudinal cross-state study design (2007-2015). Specifically, it uses a linear regression to examine changes in state-specific rates of opioid-related deaths after implementation of specific policies, and whether states implementing these policies saw smaller increases than states without these policies. Initial key findings of this study show that three policies have a statistically significant association with opioid-related overdose deaths: Good Samaritan Laws, Standing Order Laws, and Naloxone Liability Laws. Paradoxically, all three policies correlated with an increase in opioid overdose deaths between 2007 and 2016. However, after correcting for the potential spurious relationship between state-specific timing of policy implementation and death rates, two policies have a statistically significant association (alpha < 0.05) with opioid overdose death rates. First, Naloxone Liability Laws were significantly associated with changes in opioid-related deaths and were correlated with a 0.33 log increase in opioid overdose death rates, or a 29% increase. This equates to about 1.39 more deaths per year per 100,000 people. Second, legislation that allows for 3rd Party Naloxone prescriptions correlated with a 0.33 log decrease in opioid overdose death rates, or a 29% decrease. This equates to 1.39 fewer deaths per year per 100,000 people.
Contributors: Davis, Joshua Alan (Author) / Hruschka, Daniel (Thesis director) / Gaughan, Monica (Committee member) / School of Human Evolution & Social Change (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05