Matching Items (64)

Description

This thesis project was conducted as part of The Founder’s Lab initiative, sponsored by the W. P. Carey School of Business. The program groups three students together and tasks them with creating a business idea, conducting the research needed to bring the concept to life, and exploring different aspects of business, with the end goal of gaining traction. The product we were given to work with through this process was Hot Head, an engineering capstone project concept. Hot Head is a sustainable and innovative solution to the water waste issue that is very prominent in the United States. To bring the Hot Head idea to life, we researched topics ranging from the Hot Head life cycle to plausible personas who might have an interest in the Hot Head product. This paper outlines the journey to gaining traction via a marketing campaign and exposure of our brand on several platforms, with a specific interest in website traffic. Our research draws mainly on primary sources, gathering the opinions of potential buyers through surveys and focus groups. The paper concludes with some possible future steps that could be taken if this project were continued.

Contributors: Goodall, Melody Anne (Co-author) / Rote, Jennifer (Co-author) / Lozano Porras, Mariela (Co-author) / Byrne, Jared (Thesis director) / Sebold, Brent (Committee member) / Department of Finance (Contributor) / Department of Economics (Contributor) / Dean, W.P. Carey School of Business (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05

Description

When you are sitting at the terminal waiting for your flight or taking the bus to work, have you ever thought about who used your seat last? More importantly, have you ever thought about the last time that seat was cleaned? Sadly, there is no way to know whether it was properly sanitized in the last hour, yesterday, in the last week, or even last month. Especially during these tough times, everyone wants to be assured that they are always in a safe and healthy environment. Through the Founders Lab, our team is collaborating with an engineering capstone team to bring automated seat-cleaning technology to market. The product is a custom-designed seat cover that is tear-resistant and provides a sanitary surface for anyone to sit on. When someone leaves the seat, a pressure sensor is triggered and the cover is replaced with a secondary cover stored in a UV-irradiated container. The waterproof fabric and internal filters prevent spills and food crumbs from remaining when users change. The motivation for bringing this product to market is the unsanitary conditions in many high-traffic areas. The technology can be implemented in public transportation, restaurants, sports stadiums, and much more. It will instantly improve the efficiency of sanitation for many businesses and keep a promise to users that they will never bring home something they sat on. #Safeseating

Contributors: Jawahar, Nandita (Co-author) / Yang, Tiger (Co-author) / Nimmagadda, Viraj (Co-author) / Byrne, Jared (Thesis director) / Sebold, Brent (Committee member) / Department of Finance (Contributor) / Department of Information Systems (Contributor) / School of Community Resources and Development (Contributor) / Department of Supply Chain Management (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05

Description

Dreadnought is a free-to-play multiplayer flight simulation in which two teams of 8 players each compete against one another to complete an objective. Each player controls a large-scale spaceship, various aspects of which can be customized to improve a player’s performance in a game. One such aspect is Officer Briefings, which are passive abilities that grant ships additional capabilities. Two of these Briefings, known as Retaliator and Get My Good Side, have strong synergy when used together, which has led the Dreadnought community to claim that the Briefings are too powerful and should be rebalanced to be more in line with the power levels of other Briefings. This study collected gameplay data with and without the use of these specific Officer Briefings to determine their precise impact on gameplay. Linear correlation matrices and inference on two means were used to determine performance impact. It was found that, although these Officer Briefings do improve an individual player’s performance in a game, they do not have a consistent impact on the player’s team performance, and that these Officer Briefings are therefore not in need of rebalancing.
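
The core of the analysis described above is standard two-sample inference. The sketch below illustrates it in Python with a Welch two-sample t-test and a linear correlation matrix; all numbers are hypothetical placeholders, not the thesis data.

```python
# A minimal sketch of the study's statistical approach: a two-sample test on
# mean player score with and without the two Officer Briefings equipped.
# All data below are hypothetical placeholders, not the thesis data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
score_with = rng.normal(52000, 8000, size=30)     # per-game scores, Briefings equipped
score_without = rng.normal(46000, 8000, size=30)  # per-game scores, Briefings absent

# Welch's t-test for a difference in means (unequal variances assumed)
t_stat, p_val = stats.ttest_ind(score_with, score_without, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# Linear correlation between individual score and a hypothetical team outcome
team_win = rng.integers(0, 2, size=30).astype(float)  # 1 = team won that game
print(np.corrcoef(np.vstack([score_with, team_win])))
```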

Contributors: Jacobs, Max I. (Author) / Schneider, Laurence (Thesis director) / Tran, Samantha (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05

Description

It is common in the analysis of data to provide a goodness-of-fit test to assess the performance of a model. In the analysis of contingency tables, goodness-of-fit statistics are frequently employed when modeling social science, educational or psychological data where the interest is often directed at investigating the association among multi-categorical variables. Pearson's chi-squared statistic is well-known in goodness-of-fit testing, but it is sometimes considered to produce an omnibus test, as it gives little guidance to the source of poor fit once the null hypothesis is rejected. However, its components can provide powerful directional tests. In this dissertation, orthogonal components are used to develop goodness-of-fit tests for models fit to the counts obtained from the cross-classification of multi-category dependent variables. Ordinal categories are assumed. Orthogonal components defined on marginals are obtained when analyzing multi-dimensional contingency tables through the use of the QR decomposition. A subset of these orthogonal components can be used to construct limited-information tests that allow one to identify the source of lack of fit and provide an increase in power compared to Pearson's test. These tests can address the adverse effects that arise when data are sparse. The tests rely on the set of first- and second-order marginals jointly, the set of second-order marginals only, and the random forest method, a popular algorithm for modeling large complex data sets. The performance of these tests is compared to the likelihood ratio test as well as to tests based on orthogonal polynomial components. The derived goodness-of-fit tests are evaluated with studies for detecting two- and three-way associations that are not accounted for by a categorical variable factor model with a single latent variable. In addition, the tests are used to investigate the case when the model misspecification involves parameter constraints for large and sparse contingency tables. The methodology proposed here is applied to data from the 38th round of the State Survey conducted by the Institute for Public Policy and Social Research at Michigan State University (2005). The results illustrate the use of the proposed techniques in the context of a sparse data set.
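
To make the construction concrete, the sketch below decomposes Pearson's X² for a small two-way table into directional components via a QR decomposition of contrast vectors, in the spirit of the marginal-based components described above. The table, the linear scores, and the independence model are illustrative assumptions, not the dissertation's actual tests.

```python
# A minimal, illustrative sketch (not the dissertation's exact construction):
# decompose Pearson's X^2 into directional components via QR.
import numpy as np

observed = np.array([[30, 20, 10],
                     [25, 30, 25],
                     [10, 20, 30]], dtype=float)           # hypothetical 3x3 table
n = observed.sum()
expected = np.outer(observed.sum(1), observed.sum(0)) / n  # independence model

r = ((observed - expected) / np.sqrt(expected)).ravel()    # standardized residuals
X2 = r @ r                                                 # Pearson's statistic

# Linear row/column scores for the ordinal categories; QR orthonormalizes the
# contrasts so each squared component measures lack of fit in one direction.
row = np.repeat([-1.0, 0.0, 1.0], 3)
col = np.tile([-1.0, 0.0, 1.0], 3)
M = np.column_stack([row, col, row * col])
Q, _ = np.linalg.qr(M)
z = Q.T @ r
print(f"X^2 = {X2:.2f}")
print("squared components:", np.round(z**2, 2))  # sum is bounded above by X^2
```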
Contributors: Milovanovic, Jelena (Author) / Young, Dennis (Thesis advisor) / Reiser, Mark R. (Thesis advisor) / Wilson, Jeffrey (Committee member) / Eubank, Randall (Committee member) / Yang, Yan (Committee member) / Arizona State University (Publisher)
Created: 2011

Description

Parallel Monte Carlo applications require the pseudorandom numbers used on each processor to be independent in a probabilistic sense. The TestU01 software package is the standard testing suite for detecting stream dependence and other properties that make certain pseudorandom generators ineffective in parallel (as well as serial) settings. TestU01 employs two basic schemes for testing parallel generated streams. The first applies serial tests to the individual streams and then tests the resulting p-values for uniformity. The second turns all the parallel generated streams into one long vector and then applies serial tests to the resulting concatenated stream. Various forms of stream dependence can be missed by each approach because neither one fully addresses the multivariate nature of the accumulated data when generators are run in parallel. This dissertation identifies these potential faults in the parallel testing methodologies of TestU01 and investigates two different methods to better detect inter-stream dependencies: correlation-motivated multivariate tests and vector time series based tests. These methods have been implemented in an extension to TestU01 built in C++, and the unique aspects of this extension are discussed. A variety of different generation scenarios are then examined using the TestU01 suite in concert with the extension. This enhanced software package is found to better detect certain forms of inter-stream dependence than the original TestU01 test suites.
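
The two TestU01 schemes are easy to mimic in a few lines. The sketch below uses a Kolmogorov-Smirnov test as a stand-in serial test and NumPy generators as stand-in streams; it does not call the TestU01 C API. The final correlation check hints at the kind of cross-stream structure both schemes can miss.

```python
# A minimal Python illustration of the two TestU01-style schemes described
# above. The generator and stream setup are placeholders, not TestU01 calls.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
streams = [rng.random(10_000) for _ in range(16)]  # 16 "parallel" streams

# Scheme 1: serial-test each stream, then test the p-values for uniformity.
p_values = [stats.kstest(s, "uniform").pvalue for s in streams]
print("scheme 1 (second-level test):", stats.kstest(p_values, "uniform").pvalue)

# Scheme 2: concatenate all streams into one long vector and test it once.
concatenated = np.concatenate(streams)
print("scheme 2 (concatenated stream):", stats.kstest(concatenated, "uniform").pvalue)

# Neither scheme examines cross-stream structure directly; a pairwise
# correlation check is one simple multivariate complement.
corr = np.corrcoef(np.vstack(streams))
print("max |inter-stream correlation|:", np.abs(corr - np.eye(16)).max())
```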
Contributors: Ismay, Chester (Author) / Eubank, Randall (Thesis advisor) / Young, Dennis (Committee member) / Kao, Ming-Hung (Committee member) / Lanchier, Nicolas (Committee member) / Reiser, Mark R. (Committee member) / Arizona State University (Publisher)
Created: 2013

Description

A least total area of triangle method was proposed by Teissier (1948) for fitting a straight line to data from a pair of variables without treating either variable as the dependent variable while allowing each of the variables to have measurement errors. This method is commonly called Reduced Major Axis (RMA) regression and is often used instead of Ordinary Least Squares (OLS) regression. Results for confidence intervals, hypothesis testing and asymptotic distributions of coefficient estimates in the bivariate case are reviewed. A generalization of RMA to more than two variables for fitting a plane to data is obtained by minimizing the sum of a function of the volumes obtained by drawing, from each data point, lines parallel to each coordinate axis to the fitted plane (Draper and Yang 1997; Goodman and Tofallis 2003). Generalized RMA results for the multivariate case obtained by Draper and Yang (1997) are reviewed and some investigations of multivariate RMA are given. A linear model is proposed that does not specify a dependent variable and allows for errors in the measurement of each variable. Coefficients in the model are estimated by minimization of the function of the volumes previously mentioned. Methods for obtaining coefficient estimates are discussed and simulations are used to investigate the distribution of coefficient estimates. The effects of sample size, sampling error and correlation among variables on the estimates are studied. Bootstrap methods are used to obtain confidence intervals for model coefficients. Residual analysis is considered for assessing model assumptions. Outlier and influential case diagnostics are developed and a forward selection method is proposed for subset selection of model variables. A real data example is provided that uses the methods developed. Topics for further research are discussed.
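
For the bivariate case reviewed above, the RMA estimates have a simple closed form: the slope is sign(r) · s_y/s_x, and the line passes through the sample means. A minimal sketch, with illustrative data containing measurement error in both variables:

```python
# A minimal sketch of bivariate reduced major axis (RMA) regression, the
# starting point of the dissertation. Data below are illustrative only.
import numpy as np

def rma_fit(x, y):
    """Return (slope, intercept) of the RMA line through the data."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)  # sign(r) * s_y / s_x
    intercept = y.mean() - slope * x.mean()             # line through the means
    return slope, intercept

# Illustrative data with measurement error in both variables
rng = np.random.default_rng(1)
truth = rng.normal(size=200)
x = truth + rng.normal(scale=0.3, size=200)
y = 2.0 * truth + 1.0 + rng.normal(scale=0.3, size=200)
print(rma_fit(x, y))  # slope near 2, intercept near 1
```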
Contributors: Li, Jingjin (Author) / Young, Dennis (Thesis advisor) / Eubank, Randall (Thesis advisor) / Reiser, Mark R. (Committee member) / Kao, Ming-Hung (Committee member) / Yang, Yan (Committee member) / Arizona State University (Publisher)
Created: 2012

Description

This dissertation involves three problems that are all related by the use of the singular value decomposition (SVD) or generalized singular value decomposition (GSVD). The specific problems are (i) derivation of a generalized singular value expansion (GSVE), (ii) analysis of the properties of the chi-squared method for regularization parameter selection in the case of nonnormal data and (iii) formulation of a partial canonical correlation concept for continuous time stochastic processes. The finite dimensional SVD has an infinite dimensional generalization to compact operators. However, the form of the finite dimensional GSVD developed in, e.g., Van Loan does not extend directly to infinite dimensions as a result of a key step in the proof that is specific to the matrix case. Thus, the first problem of interest is to find an infinite dimensional version of the GSVD. One such GSVE for compact operators on separable Hilbert spaces is developed. The second problem concerns regularization parameter estimation. The chi-squared method for nonnormal data is considered. A form of the optimized regularization criterion that pertains to measured data or signals with nonnormal noise is derived. Large sample theory for phi-mixing processes is used to derive a central limit theorem for the chi-squared criterion that holds under certain conditions. Departures from normality are seen to manifest in the need for a possibly different scale factor in normalization rather than what would be used under the assumption of normality. The consequences of our large sample work are illustrated by empirical experiments. For the third problem, a new approach is examined for studying the relationships between a collection of functional random variables. The idea is based on the work of Sunder that provides mappings to connect the elements of algebraic and orthogonal direct sums of subspaces in a Hilbert space. When combined with a key isometry associated with a particular Hilbert space indexed stochastic process, this leads to a useful formulation for situations that involve the study of several second order processes. In particular, using our approach with two processes provides an independent derivation of the functional canonical correlation analysis (CCA) results of Eubank and Hsing. For more than two processes, a rigorous derivation of the functional partial canonical correlation analysis (PCCA) concept that applies to both finite and infinite dimensional settings is obtained.
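
As a finite-dimensional anchor for the third problem, canonical correlations can be computed as the singular values of the product of orthonormal bases for the two centered data matrices. A minimal sketch with illustrative data; the functional and partial extensions developed in the dissertation are far more general:

```python
# A minimal finite-dimensional sketch of canonical correlation analysis via
# the SVD, the linear-algebra engine behind all three problems above.
# The data matrices are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 500
shared = rng.normal(size=(n, 2))  # common latent signal linking X and Y
X = shared @ rng.normal(size=(2, 4)) + 0.5 * rng.normal(size=(n, 4))
Y = shared @ rng.normal(size=(2, 3)) + 0.5 * rng.normal(size=(n, 3))

def orthobasis(A):
    """Orthonormal basis for the column space of the centered matrix."""
    U, _, _ = np.linalg.svd(A - A.mean(axis=0), full_matrices=False)
    return U

# Singular values of Ux^T Uy are the sample canonical correlations.
rho = np.linalg.svd(orthobasis(X).T @ orthobasis(Y), compute_uv=False)
print(np.round(rho, 3))  # two large values from the shared 2-D signal
```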
Contributors: Huang, Qing (Author) / Eubank, Randall (Thesis advisor) / Renaut, Rosemary (Thesis advisor) / Cochran, Douglas (Committee member) / Gelb, Anne (Committee member) / Young, Dennis (Committee member) / Arizona State University (Publisher)
Created: 2012

Description

The NFL is one of the largest and most influential industries in the world. In America, few companies have a stronger hold on the culture or create such a phenomenon from year to year. This project aimed to develop a strategy that helps an NFL team be as successful as possible by defining which positions are most important to a team's success. Data from fifteen years of NFL games were collected, and information on every player in the league was analyzed. First, a benchmark describing an average team was established, and every player in the NFL was compared to that average. Using the properties of linear regression with ordinary least squares, the project defines a model that shows each position's importance. Finally, once the model was established, the focus turned to the NFL draft, where the goal was to find a strategy for where each position should be drafted so that it is most likely to give the best payoff based on the regression results from part one.
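
The regression framing described above can be sketched in a few lines: team wins regressed by ordinary least squares on position-level performance relative to league average. The positions, coefficients, and data below are hypothetical placeholders, not the project's actual data or model.

```python
# A minimal OLS sketch of the position-importance model described above.
# Positions, coefficients, and data are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(7)
positions = ["QB", "RB", "WR", "OL", "DL", "LB", "DB"]
n_teams, n_seasons = 32, 15
X = rng.normal(size=(n_teams * n_seasons, len(positions)))  # performance vs. average
true_beta = np.array([3.0, 0.8, 1.2, 1.5, 1.4, 1.0, 1.3])   # hypothetical importance
wins = 8 + X @ true_beta + rng.normal(scale=2.0, size=X.shape[0])

# Ordinary least squares: beta_hat = (X'X)^{-1} X'y, computed here via lstsq
A = np.column_stack([np.ones(X.shape[0]), X])
beta_hat, *_ = np.linalg.lstsq(A, wins, rcond=None)
for name, b in zip(["intercept"] + positions, beta_hat):
    print(f"{name:>9}: {b:+.2f}")  # larger coefficients = more important positions
```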
Contributors: Balzer, Kevin Ryan (Author) / Goegan, Brian (Thesis director) / Dassanayake, Maduranga (Committee member) / Barrett, The Honors College (Contributor) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05

Description

In the words of W. Edwards Deming, "the central problem in management and in leadership is failure to understand the information in variation." While many quality management programs propose the institution of technical training in advanced statistical methods, this paper proposes that by understanding the fundamental information behind statistical theory, and by minimizing bias and variance while fully utilizing the available information about the system at hand, one can make valuable, accurate predictions about the future. Combining this knowledge with the work of quality gurus W. E. Deming, Eliyahu Goldratt, and Dean Kashiwagi, a framework for making valuable predictions for continuous improvement is developed. After this information is synthesized, it is concluded that the best way to make accurate, informative predictions about the future is to "balance the present and future," seeing the future through the lens of the present and thus minimizing bias, variance, and risk.
Contributors: Synodis, Nicholas Dahn (Author) / Kashiwagi, Dean (Thesis director, Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2015-05

Description

Analytic research on basketball games is growing quickly, specifically in the National Basketball Association. This paper explored the development of this analytic research and discovered that there has been a focus on individual player metrics and a dearth of quantitative team characterizations and evaluations. Consequently, this paper continued the exploratory research of Fewell and Armbruster's "Basketball teams as strategic networks" (2012), which modeled basketball teams as networks and used metrics to characterize team strategy in the NBA's 2010 playoffs. Individual players and outcomes were nodes, and passes and actions were the links. This paper used data recorded from playoff games of the two 2012 NBA finalists: the Miami Heat and the Oklahoma City Thunder. The same metrics that Fewell and Armbruster used were explained and then calculated using this data. The offensive networks of these two teams during the playoffs were analyzed and interpreted using other data and qualitative characterizations of the teams' strategies; the paper found that the calculated metrics largely matched our qualitative characterizations of the teams. The validity of the metrics in this paper and in Fewell and Armbruster's paper was then discussed, and modeling basketball teams as multiple-order Markov chains rather than as networks was explored.
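
The network representation is straightforward to reproduce. The sketch below builds a small weighted, directed pass network with networkx and computes two of the kinds of metrics used to characterize strategy; the pass counts are hypothetical, not the recorded playoff data.

```python
# A minimal sketch of the network representation used above: players and
# outcomes as nodes, passes and actions as weighted directed edges.
# The pass counts are hypothetical placeholders.
import networkx as nx

passes = [  # (from, to, count) for one hypothetical lineup
    ("PG", "SG", 42), ("PG", "SF", 31), ("SG", "PG", 25),
    ("SF", "PF", 18), ("PF", "C", 12), ("SG", "shot", 30),
    ("SF", "shot", 22), ("C", "shot", 15),
]
G = nx.DiGraph()
G.add_weighted_edges_from(passes)

# Two strategy-characterizing metrics: degree centrality (how involved each
# node is) and weighted out-degree (how the ball is distributed).
print(nx.degree_centrality(G))
print(dict(G.out_degree(weight="weight")))
```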
Contributors: Mohanraj, Hariharan (Co-author) / Choi, David (Co-author) / Armbruster, Dieter (Thesis director) / Fewell, Jennifer (Committee member) / Brooks, Daniel (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2013-05