Matching Items (55)
Description
By the von Neumann min-max theorem, a two-person zero-sum game with finitely many pure strategies has a unique value for each player (summing to zero), and each player has a non-empty set of optimal mixed strategies. If the payoffs are independent, identically distributed (iid) uniform (0,1) random variables, then with probability one, both players have unique optimal mixed strategies utilizing the same number of pure strategies with positive probability (Jonasson 2004). The pure strategies with positive probability in the unique optimal mixed strategies are called saddle squares. In 1957, Goldman evaluated the probability of a saddle point (a 1 by 1 saddle square), a result rediscovered by many authors including Thorp (1979). Thorp gave two proofs of the probability of a saddle point, one using combinatorics and one using a beta integral. In 1965, Falk and Thrall investigated the integrals required for the probabilities of a 2 by 2 saddle square for 2 × n and m × 2 games with iid uniform (0,1) payoffs, but they were not able to evaluate the integrals. This dissertation generalizes Thorp's beta integral proof of Goldman's probability of a saddle point, establishing an integral formula for the probability that an m × n game with iid uniform (0,1) payoffs has a k by k saddle square (k ≤ min(m, n)). Additionally, the probabilities of a 2 by 2 and a 3 by 3 saddle square for a 3 × 3 game with iid uniform (0,1) payoffs are found. For these, the 14 integrals observed by Falk and Thrall are dissected into 38 disjoint domains, and the integrals are evaluated using the basic properties of the dilogarithm function. The final results for the probabilities of a 2 by 2 and a 3 by 3 saddle square in a 3 × 3 game are linear combinations of 1, π², and ln(2) with rational coefficients.
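For intuition, Goldman's saddle-point probability for an m × n game with iid continuous payoffs is m!n!/(m + n − 1)!, which equals 6 · 6/5! = 0.3 for a 3 × 3 game. A minimal Monte Carlo sketch (hypothetical Python, not code from the dissertation) corroborates this:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)

def has_saddle_point(A):
    # A saddle point is an entry that is the minimum of its row
    # and the maximum of its column (a 1 by 1 saddle square).
    row_min = A.min(axis=1, keepdims=True)
    col_max = A.max(axis=0, keepdims=True)
    return bool(np.any((A == row_min) & (A == col_max)))

m, n, trials = 3, 3, 200_000
hits = sum(has_saddle_point(rng.random((m, n))) for _ in range(trials))
print(hits / trials)                                       # empirical estimate
print(factorial(m) * factorial(n) / factorial(m + n - 1))  # 0.3 exactly
```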
ContributorsManley, Michael (Author) / Kadell, Kevin W. J. (Thesis advisor) / Kao, Ming-Hung (Committee member) / Lanchier, Nicolas (Committee member) / Lohr, Sharon (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created2011
Description
Many longitudinal studies, especially in clinical trials, suffer from missing data issues. Most estimation procedures assume that the missing values are ignorable or missing at random (MAR). However, this assumption is an unrealistic simplification and is implausible in many cases. For example, suppose an investigator is examining the effect of a treatment on depression. Subjects are scheduled with doctors on a regular basis and asked questions about recent emotional situations. Patients experiencing severe depression are more likely to miss an appointment, leaving the data missing for that particular visit. Data that are not missing at random may produce biased results if the missing mechanism is not taken into account, because the missing mechanism is related to the unobserved responses. Data are said to be non-ignorably missing if the probability of missingness depends on quantities that might not be included in the model. Classical pattern-mixture models for non-ignorable missing values are widely used for longitudinal data analysis because they do not require explicit specification of the missing mechanism: the data are stratified according to a variety of missing patterns and a model is specified for each stratum. However, this usually results in under-identifiability, because many stratum-specific parameters must be estimated even though the eventual interest is usually in the marginal parameters; pattern-mixture models also have the drawback that a large sample is usually required. In this thesis, two studies are presented. The first study is motivated by an open problem from pattern-mixture models. Simulation studies in this part show that information in the missing data indicators can be well summarized by a simple continuous latent structure, indicating that a large number of missing data patterns may be accounted for by a simple latent factor. The findings of the first study lead to a novel model, the continuous latent factor model (CLFM). The second study develops the CLFM, which is used to model the joint distribution of missing values and longitudinal outcomes. The proposed CLFM is feasible even for small-sample applications. The detailed estimation theory, including estimation techniques from both frequentist and Bayesian perspectives, is presented. Model performance and evaluation are studied through designed simulations and three applications. Simulation and application settings range from a correctly specified missing data mechanism to a mis-specified mechanism, and include different sample sizes from longitudinal studies. Among the three applications, an AIDS study includes non-ignorable missing values; the Peabody Picture Vocabulary Test data give no indication of the missing data mechanism and are used for a sensitivity analysis; and the Growth of Language and Early Literacy Skills in Preschoolers with Developmental Speech and Language Impairment study has fully complete data and is used to conduct a robustness analysis. The CLFM is shown to provide more precise estimators, specifically for intercept- and slope-related parameters, compared with Roy's latent class model and the classic linear mixed model. This advantage is more pronounced when the sample size is small, where Roy's model has difficulty achieving estimation convergence. The proposed CLFM is also robust when missing data are ignorable, as demonstrated through the study on Growth of Language and Early Literacy Skills in Preschoolers.
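A toy simulation (hypothetical Python, not from the thesis) makes the depression example concrete: when the probability of missing a visit increases with the unobserved depression score itself, naive complete-case averages systematically understate severity.

```python
import numpy as np

rng = np.random.default_rng(3)
n, visits = 500, 5
t = np.arange(visits)

# True trajectories: treatment lowers depression scores over time.
scores = 10 - 0.5 * t + rng.normal(0, 2, (n, visits))

# Non-ignorable mechanism: the more depressed, the likelier to miss the visit.
p_miss = 1 / (1 + np.exp(-(scores - 10)))
observed = np.where(rng.random((n, visits)) < p_miss, np.nan, scores)

print(np.nanmean(observed, axis=0))  # complete-case means: biased low
print(scores.mean(axis=0))           # true means at each visit
```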
ContributorsZhang, Jun (Author) / Reiser, Mark R. (Thesis advisor) / Barber, Jarrett (Thesis advisor) / Kao, Ming-Hung (Committee member) / Wilson, Jeffrey (Committee member) / St Louis, Robert D. (Committee member) / Arizona State University (Publisher)
Created2013
Description
Parallel Monte Carlo applications require the pseudorandom numbers used on each processor to be independent in a probabilistic sense. The TestU01 software package is the standard testing suite for detecting stream dependence and other properties that make certain pseudorandom generators ineffective in parallel (as well as serial) settings. TestU01 employs two basic schemes for testing parallel generated streams. The first applies serial tests to the individual streams and then tests the resulting p-values for uniformity. The second turns all the parallel generated streams into one long vector and then applies serial tests to the resulting concatenated stream. Various forms of stream dependence can be missed by each approach because neither one fully addresses the multivariate nature of the accumulated data when generators are run in parallel. This dissertation identifies these potential faults in the parallel testing methodologies of TestU01 and investigates two different methods to better detect inter-stream dependencies: correlation-motivated multivariate tests and tests based on vector time series. These methods have been implemented in an extension to TestU01 built in C++, and the unique aspects of this extension are discussed. A variety of generation scenarios are then examined using the TestU01 suite in concert with the extension. This enhanced software package is found to better detect certain forms of inter-stream dependence than the original TestU01 suites of tests.
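A minimal analog of the two schemes (hypothetical Python, with a Kolmogorov-Smirnov test standing in for TestU01's far richer batteries, which are implemented in C):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_streams, stream_len = 64, 10_000
streams = rng.random((n_streams, stream_len))

# Scheme 1: serial test per stream, then test the p-values for uniformity.
pvals = [stats.kstest(s, "uniform").pvalue for s in streams]
print("two-level p-value:", stats.kstest(pvals, "uniform").pvalue)

# Scheme 2: concatenate the streams and apply one serial test.
print("concatenated p-value:", stats.kstest(streams.ravel(), "uniform").pvalue)
```

Neither scheme examines cross-stream dependence directly, which is the gap the dissertation's multivariate and vector-time-series tests are designed to fill.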
ContributorsIsmay, Chester (Author) / Eubank, Randall (Thesis advisor) / Young, Dennis (Committee member) / Kao, Ming-Hung (Committee member) / Lanchier, Nicolas (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created2013
Description
Value-added models (VAMs) are used by many states to assess the contributions of individual teachers and schools to students' academic growth. The generalized persistence VAM, one of the most flexible in the literature, estimates the "value added" by individual teachers to their students' current and future test scores by employing a mixed model with a longitudinal database of test scores. There is concern, however, that missing values, which are common in longitudinal student scores, can bias value-added assessments, especially when the models serve as a basis for personnel decisions, such as promoting or dismissing teachers, as they are being used in some states. Certain types of missing data require that the VAM be modeled jointly with the missingness process in order to obtain unbiased parameter estimates. This dissertation studies two problems. First, the flexibility and multimembership random effects structure of the generalized persistence model lead to computational challenges that have limited the model's availability; to this point, no methods have been developed for scalable maximum likelihood estimation of the model. An EM algorithm that computes maximum likelihood estimates efficiently is developed, making use of the sparse structure of the random effects and error covariance matrices. The algorithm is implemented in the R package GPvam, and illustrations of the gains in computational efficiency achieved by the estimation procedure are given. Furthermore, to address the presence of potentially nonignorable missing data, a flexible correlated random effects model is developed that extends the generalized persistence model to jointly model the test scores and the missingness process, allowing the process to depend on both students and teachers. The joint model gives the ability to test the sensitivity of the VAM to the presence of nonignorable missing data. Estimation of the model is challenging due to the non-hierarchical dependence structure and the resulting intractable high-dimensional integrals. Maximum likelihood estimation of the model is performed using an EM algorithm with fully exponential Laplace approximations for the E step. The methods are illustrated with data from university calculus classes and with standardized test scores from an urban school district.
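The generalized persistence model itself is too involved to sketch here, but the shape of an EM algorithm for variance components can be conveyed with a toy one-way random-effects model (hypothetical Python; the actual implementation is the R package GPvam):

```python
import numpy as np

rng = np.random.default_rng(5)
G, n = 100, 8                                  # groups and scores per group
b = rng.normal(0, np.sqrt(2.0), G)             # random effects, variance 2
y = 50.0 + b[:, None] + rng.normal(0, 1.0, (G, n))

mu, tau2, sig2 = y.mean(), 1.0, 1.0            # initial values
for _ in range(200):
    # E step: posterior variance and mean of each random effect.
    v = 1.0 / (n / sig2 + 1.0 / tau2)
    m = v * (y - mu).sum(axis=1) / sig2
    # M step: closed-form updates of the variance components and intercept.
    tau2 = np.mean(m**2 + v)
    sig2 = np.mean((y - mu - m[:, None])**2 + v)
    mu = np.mean(y - m[:, None])

print(mu, tau2, sig2)                          # near 50, 2, 1
```

The sparse-matrix techniques in GPvam serve the same E step, but for crossed, multimembership random effects rather than a single grouping factor.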
ContributorsKarl, Andrew (Author) / Lohr, Sharon L (Thesis advisor) / Yang, Yan (Thesis advisor) / Kao, Ming-Hung (Committee member) / Montgomery, Douglas C. (Committee member) / Wilson, Jeffrey R (Committee member) / Arizona State University (Publisher)
Created2012
Description
When analyzing longitudinal data, it is essential to account both for the correlation inherent in the repeated measures of the responses and for the correlation realized through the feedback between the responses at a particular time and the predictors at other times. A generalized method of moments (GMM) approach for estimating the coefficients in such longitudinal models is presented. The appropriate and valid estimating equations associated with the time-dependent covariates are identified, providing substantial gains in efficiency over generalized estimating equations (GEE) with the independent working correlation. Identifying the estimating equations for computation is of utmost importance, and this work provides a technique for identifying the relevant estimating equations through a general method of moments. I develop an approach that makes use of all the valid estimating equations necessary for each time-dependent and time-independent covariate. Moreover, my approach does not assume that feedback is always present over time, or present to the same degree. I fit the GMM correlated logistic regression model in SAS with PROC IML. I examine two datasets for illustrative purposes: rehospitalization in a Medicare database, and data regarding the relationship between body mass index and future morbidity among children in the Philippines. These datasets allow me to compare my results with those from some earlier methods of analysis.
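A stripped-down illustration of the GMM machinery (hypothetical Python; the thesis fits the model in SAS with PROC IML, and its contribution is the selection of valid moment conditions for time-dependent covariates, which this sketch omits): stack one moment condition per covariate, E[x_j(y − p(β))] = 0, and minimize the quadratic form of the sample moments.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

def g(beta):
    # Sample moment conditions: mean of x_j * (y - p) for each covariate j.
    p = 1 / (1 + np.exp(-X @ beta))
    return X.T @ (y - p) / n

def Q(beta):
    # GMM objective with identity weight matrix.
    return g(beta) @ g(beta)

fit = minimize(Q, np.zeros(2), method="BFGS")
print(fit.x)  # close to beta_true
```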
ContributorsYin, Jianqiong (Author) / Wilson, Jeffrey (Thesis advisor) / Reiser, Mark R. (Committee member) / Kao, Ming-Hung (Committee member) / Arizona State University (Publisher)
Created2012
Description
A least total area of triangle method was proposed by Teissier (1948) for fitting a straight line to data from a pair of variables without treating either variable as the dependent variable, while allowing each of the variables to have measurement errors. This method is commonly called Reduced Major Axis (RMA) regression and is often used instead of Ordinary Least Squares (OLS) regression. Results for confidence intervals, hypothesis testing, and asymptotic distributions of coefficient estimates in the bivariate case are reviewed. A generalization of RMA to more than two variables, for fitting a plane to data, is obtained by minimizing the sum of a function of the volumes obtained by drawing, from each data point, lines parallel to each coordinate axis to the fitted plane (Draper and Yang 1997; Goodman and Tofallis 2003). Generalized RMA results for the multivariate case obtained by Draper and Yang (1997) are reviewed, and some investigations of multivariate RMA are given. A linear model is proposed that does not specify a dependent variable and allows for errors in the measurement of each variable. Coefficients in the model are estimated by minimization of the function of the volumes previously mentioned. Methods for obtaining coefficient estimates are discussed, and simulations are used to investigate the distribution of coefficient estimates. The effects of sample size, sampling error, and correlation among variables on the estimates are studied. Bootstrap methods are used to obtain confidence intervals for model coefficients. Residual analysis is considered for assessing model assumptions. Outlier and influential case diagnostics are developed, and a forward selection method is proposed for subset selection of model variables. A real data example illustrates the methods developed, and topics for further research are discussed.
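In the bivariate case the RMA slope has the closed form sign(r) · s_y/s_x, with intercept ȳ − b·x̄; a short sketch (hypothetical Python, not the thesis code) fits the line and bootstraps a percentile interval for the slope, in the spirit of the bootstrap intervals described above:

```python
import numpy as np

def rma_fit(x, y):
    # Reduced major axis: slope = sign(corr) * (s_y / s_x).
    b = np.sign(np.corrcoef(x, y)[0, 1]) * y.std(ddof=1) / x.std(ddof=1)
    return y.mean() - b * x.mean(), b

rng = np.random.default_rng(7)
xi = rng.normal(size=200)                      # latent predictor
x = xi + rng.normal(scale=0.3, size=200)       # measured with error
y = 2.0 + 1.5 * xi + rng.normal(scale=0.5, size=200)

a, b = rma_fit(x, y)
boot = []
for _ in range(2000):                          # percentile bootstrap for slope
    idx = rng.integers(0, len(x), len(x))
    boot.append(rma_fit(x[idx], y[idx])[1])
print(b, np.percentile(boot, [2.5, 97.5]))
```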
ContributorsLi, Jingjin (Author) / Young, Dennis (Thesis advisor) / Eubank, Randall (Thesis advisor) / Reiser, Mark R. (Committee member) / Kao, Ming-Hung (Committee member) / Yang, Yan (Committee member) / Arizona State University (Publisher)
Created2012
Description
In order to discover whether Company X's current system of local trucking is the most efficient and cost-effective way to move freight between sites in the Western U.S., we compare the current system to various alternatives to see whether there are potential avenues for Company X to create or implement an improved, cost-saving freight movement system.
ContributorsPicone, David (Co-author) / Krueger, Brandon (Co-author) / Harrison, Sarah (Co-author) / Way, Noah (Co-author) / Simonson, Mark (Thesis director) / Hertzel, Michael (Committee member) / Barrett, The Honors College (Contributor) / Department of Supply Chain Management (Contributor) / Department of Finance (Contributor) / Economics Program in CLAS (Contributor) / School of Accountancy (Contributor) / W. P. Carey School of Business (Contributor) / Sandra Day O'Connor College of Law (Contributor)
Created2015-05
Description
The current model of revenue generation for some free-to-play video games is preventing the companies controlling them from growing, but with a few changes in approach these issues could be alleviated. A new style of video game, called a MOBA (Multiplayer Online Battle Arena), has emerged in the past few years, bringing with it a new style of generating wealth. Contrary to past gaming models, where users must either purchase the game outright, view advertisements, or purchase items to gain a competitive advantage, MOBAs require no payment of any kind. These are free-to-play computer games that provide users with all the tools necessary to compete with anyone free of charge; no competitive advantage can be purchased. This leaves users only one way to provide money to the company: optional purchases of purely aesthetic items, bought only if the buyer wishes to see their character in a different set of attire. The genre's best in show, called League of Legends (LOL), has spearheaded this method of revenue generation. Fortunately for LOL, its popularity has reached levels never before seen in video games: the world championships had more viewers than Game 7 of the NBA Finals (Dorsey). The player base alone is enough to keep the company afloat currently, but the fact that only 3.75% of players are converted into revenue is alarming. Each player brings the company an average of $1.32, or 30% of what some other free-to-play games earn per user (Comparing MMO). It is this low per-player income that has caused Riot Games, the developer of LOL, to state that their e-sports division is not currently profitable. To resolve this issue, LOL must take on a more aggressive marketing plan. Advertisements for the NBA Finals cost $460,000 for 30 seconds, and LOL should aim for ads in this range (Lombardo). With an average of 3 million people logged on at any time, 90% of players being male, and 85% being between the ages of 16 and 30, advertising via this game would appeal to many companies, making a deal easy to strike (LOL infographic 2012). The idea also appeals to players: 81% of players surveyed said that an advertisement on the client that allows for the option to place an order would improve or not impact their experience. Moving forward with this, the gaming client would be updated to contain both an option to order pizza and an advertisement for Mountain Dew. This type of advertising was determined based on community responses through a sequence of survey questions. These small adjustments to the game would allow LOL to generate enough income for Riot Games to expand into other areas of the e-sports industry.
ContributorsSeip, Patrick (Co-author) / Zhao, BoNing (Co-author) / Kashiwagi, Dean (Thesis director) / Kashiwagi, Jacob (Committee member) / Barrett, The Honors College (Contributor) / Sandra Day O'Connor College of Law (Contributor) / Department of Economics (Contributor) / Department of Supply Chain Management (Contributor)
Created2015-05
Description
This report is a summary of a long-term project completed by Ido Gilboa for his Honors Thesis. The purpose of the project is to determine whether an arbitrage between different crypto-currency exchanges exists, and whether it is possible to act upon such triangular arbitrage. Bitcoin, the specific crypto-currency this report focuses on, has become a household name, yet most do not understand its origin and patterns. The report details the process of collecting data from different sources and manipulating it in order to run the algorithms, explains the meaning behind the algorithms, presents the results and important statistics found, and states the conclusions of the project. In addition, the report discusses financial terms such as triangular arbitrage as well as information systems concepts such as sockets and server communication. The project was completed with the assistance of Dr. Sunil Wahal and Dr. Daniel Mazzola, professors in the W. P. Carey School of Business. The project stretched over a long period of time, spanning from early 2013 to fall 2015.
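For readers unfamiliar with the term, triangular arbitrage exploits inconsistent quoted rates around a cycle of currencies: converting around the cycle yields a profit when the product of the rates, net of fees, exceeds one. A toy check (hypothetical Python with made-up rates; the actual project streamed live data from exchanges over sockets):

```python
# Hypothetical quoted rates for one cycle: USD -> BTC -> EUR -> USD.
rates = {("USD", "BTC"): 1 / 24950.0,   # BTC bought per USD
         ("BTC", "EUR"): 23100.0,       # EUR received per BTC
         ("EUR", "USD"): 1.09}          # USD received per EUR
fee = 0.0025                            # assumed per-trade fee

amount = 1.0                            # start with 1 USD
for leg in [("USD", "BTC"), ("BTC", "EUR"), ("EUR", "USD")]:
    amount *= rates[leg] * (1 - fee)

print("profit per USD:", amount - 1)    # positive => the cycle is profitable
```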
ContributorsGilboa, Ido (Author) / Wahal, Sunil (Thesis director) / Mazzola, Daniel (Committee member) / Department of Information Systems (Contributor) / Department of Supply Chain Management (Contributor) / Barrett, The Honors College (Contributor)
Created2015-12
Description
This paper takes a look at developing a technological start-up revolving around the world of health and fitness. The entire process is documented, starting from the ideation phase and continuing on to product testing and market research. The research focuses on identifying a target market for a 24/7 fitness service that connects clients with personal trainers. It is a good study of the steps needed to create a business, and it serves as a learning tool for how to bring a product to market.
ContributorsHeck, Kyle (Co-author) / Mitchell, Jake (Co-author) / Korczynski, Brian (Co-author) / Peck, Sidnee (Thesis director) / Eaton, John (Committee member) / Barrett, The Honors College (Contributor) / Department of Finance (Contributor) / Department of Economics (Contributor) / Department of Management (Contributor) / Department of Psychology (Contributor) / Department of Supply Chain Management (Contributor) / School of Accountancy (Contributor) / W. P. Carey School of Business (Contributor)
Created2014-05