Description
Though the likelihood is a useful tool for obtaining estimates of regression parameters, it is not readily available in the fit of hierarchical binary data models. The correlated observations negate the opportunity to have a joint likelihood when fitting hierarchical logistic regression models. Inferences for the regression and covariance parameters, as well as the intraclass correlation coefficients, are usually obtained through the conditional likelihood. In those cases, one resorts to the Laplace approximation and large-sample theory for point and interval estimates such as Wald-type confidence intervals and profile likelihood confidence intervals. These methods rely on distributional assumptions and large-sample theory; however, when dealing with small hierarchical datasets they often result in severe bias or non-convergence. I present a generalized quasi-likelihood approach and a generalized method of moments approach; neither relies on distributional assumptions, only on moments of the response. As an alternative to the typical large-sample-theory approach, I present bootstrapping of hierarchical logistic regression models, which provides more accurate interval estimates for small binary hierarchical data. These methods substitute computation for the traditional Wald-type and profile likelihood confidence intervals. I use a latent variable approach with a new split bootstrap method for estimating intraclass correlation coefficients when analyzing binary data obtained from a three-level hierarchical structure. It is especially useful with small sample sizes and is easily extended to multiple levels. Comparisons are made to existing approaches through both theoretical justification and simulation studies.
Further, I demonstrate my findings through an analysis of three numerical examples: one based on cancer-in-remission data, one related to China's antibiotic abuse study, and a third related to teacher effectiveness in schools in a state in the southwestern US.
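The split bootstrap itself is not reproduced here, but the general idea of replacing large-sample intervals with resampling can be sketched with an ordinary cluster bootstrap around an ANOVA-type intraclass correlation estimate. All data below are simulated and hypothetical, and the estimator shown is the standard one-way ANOVA ICC, not the latent-variable method of the dissertation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical clustered binary data: 15 clusters of 8 observations each,
# generated with a shared cluster effect so responses within a cluster correlate.
n_clusters, m = 15, 8
cluster_effect = rng.normal(0, 1.0, n_clusters)
p = 1 / (1 + np.exp(-(0.3 + cluster_effect)))
data = rng.binomial(1, p[:, None], size=(n_clusters, m)).astype(float)

def anova_icc(y):
    """One-way ANOVA estimator of the intraclass correlation."""
    k, n = y.shape
    grand = y.mean()
    msb = n * ((y.mean(axis=1) - grand) ** 2).sum() / (k - 1)
    msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Cluster bootstrap: resample whole clusters with replacement so the
# within-cluster correlation structure is preserved, then take
# percentile bounds as the interval estimate.
boot = np.array([
    anova_icc(data[rng.integers(0, n_clusters, n_clusters)])
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"ICC estimate {anova_icc(data):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Resampling at the cluster level, rather than the observation level, is what makes the interval respect the hierarchical structure.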
Contributors: Wang, Bei (Author) / Wilson, Jeffrey R (Thesis advisor) / Kamarianakis, Ioannis (Committee member) / Reiser, Mark R. (Committee member) / St Louis, Robert (Committee member) / Zheng, Yi (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Predicting resistant prostate cancer is critical for lowering medical costs and improving the quality of life of advanced prostate cancer patients. I formulate, compare, and analyze two mathematical models that aim to forecast future levels of prostate-specific antigen (PSA). I accomplish these tasks by employing clinical data of locally advanced prostate cancer patients undergoing androgen deprivation therapy (ADT). I demonstrate that the inverse problem of parameter estimation may be too complicated, and that simply relying on data fitting can give incorrect conclusions, since the estimated parameter values carry large errors and some parameters may be unidentifiable. I provide confidence intervals for forecasts using data assimilation via an ensemble Kalman filter. Using the ensemble Kalman filter, I perform dual estimation of parameters and state variables to test the prediction accuracy of the models. Finally, I present a novel model with time delay and a delay-dependent parameter. I provide a geometric stability result to study the behavior of this model and show that the inclusion of time delay may improve the accuracy of predictions. I also demonstrate with clinical data that the inclusion of the delay-dependent parameter facilitates the identification and estimation of parameters.
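As an illustration of the data-assimilation machinery, here is a minimal sketch of a single ensemble Kalman filter analysis step on a hypothetical two-component state. The state names, numbers, and the single observation below are illustrative stand-ins; the actual PSA dynamics models and clinical data of the dissertation are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state: [PSA level, androgen level] -- names are illustrative only.
n_ens, n_state = 50, 2
ensemble = rng.normal([10.0, 5.0], [2.0, 1.0], size=(n_ens, n_state))

H = np.array([[1.0, 0.0]])      # observation operator: we observe PSA only
obs, obs_var = 12.0, 0.5 ** 2   # one noisy PSA measurement

# EnKF analysis step: form the Kalman gain from the ensemble sample
# covariance, then update each member against a perturbed observation.
X = ensemble - ensemble.mean(axis=0)
P = X.T @ X / (n_ens - 1)                 # sample covariance of the state
S = H @ P @ H.T + obs_var                 # innovation covariance
K = P @ H.T @ np.linalg.inv(S)            # Kalman gain, shape (2, 1)
perturbed = obs + rng.normal(0, np.sqrt(obs_var), n_ens)
analysis = ensemble + (perturbed[:, None] - ensemble @ H.T) @ K.T
```

The unobserved second component is still updated through its sample covariance with PSA, which is the mechanism that lets the filter estimate states (and, in augmented form, parameters) that are not directly measured.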
Contributors: Baez, Javier (Author) / Kuang, Yang (Thesis advisor) / Kostelich, Eric (Committee member) / Crook, Sharon (Committee member) / Gardner, Carl (Committee member) / Nagy, John (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Generalized Linear Models (GLMs) are widely used for modeling responses with non-normal error distributions. When the values of the covariates in such models are controllable, finding an optimal (or at least efficient) design could greatly facilitate the work of collecting and analyzing data. In fact, many theoretical results are obtained on a case-by-case basis, while in other situations, researchers also rely heavily on computational tools for design selection.

Three topics are investigated in this dissertation, each focusing on one type of GLM. Topic I considers GLMs with factorial effects and one continuous covariate. Factors may interact with one another, and there is no restriction on the possible values of the continuous covariate. The locally D-optimal design structures for such models are identified, and results for obtaining smaller optimal designs using orthogonal arrays (OAs) are presented. Topic II considers GLMs with multiple covariates under the assumptions that all but one covariate are bounded within specified intervals and that interaction effects among the bounded covariates may exist. An explicit formula for D-optimal designs is derived, and OA-based smaller D-optimal designs for models with one or two two-factor interactions are constructed. Topic III considers multiple-covariate logistic models in which all covariates are nonnegative and there is no interaction among them. Two types of D-optimal design structures are identified, and their global D-optimality is proved using the celebrated equivalence theorem.
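For intuition about why such designs are "locally" optimal, the classic one-covariate logistic case can be sketched numerically. The designs in the dissertation are far more general; the parameter guesses and the symmetric two-point restriction below are purely illustrative:

```python
import numpy as np

# Locally D-optimal design for a one-covariate logistic model,
# eta = b0 + b1*x, at assumed values b0 = 0, b1 = 1 (the "local" guess).
b0, b1 = 0.0, 1.0

def info_det(points):
    """Determinant of the Fisher information of an equally weighted design."""
    M = np.zeros((2, 2))
    for x in points:
        p = 1 / (1 + np.exp(-(b0 + b1 * x)))
        w = p * (1 - p)                 # GLM weight at this design point
        f = np.array([1.0, x])
        M += w * np.outer(f, f) / len(points)
    return np.linalg.det(M)

# Grid search over symmetric two-point designs {-c, c}.
grid = np.linspace(0.1, 4.0, 400)
best = max(grid, key=lambda c: info_det([-c, c]))
print(best)  # close to 1.5434, where p is roughly 0.176 / 0.824
```

Changing `b0, b1` moves the optimal support points, which is exactly why locally optimal designs for GLMs depend on the assumed parameter values.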
Contributors: Wang, Zhongsheng (Author) / Stufken, John (Thesis advisor) / Kamarianakis, Ioannis (Committee member) / Kao, Ming-Hung (Committee member) / Reiser, Mark R. (Committee member) / Zheng, Yi (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Coherent vortices are ubiquitous structures in natural flows that affect the mixing and transport of substances, momentum, and energy. Being able to detect these coherent structures is important for pollutant mitigation, ecological conservation, and many other applications. In recent years, mathematical criteria and algorithms have been developed to extract these coherent structures in turbulent flows. In this study, we apply these tools to extract important coherent structures and analyze their statistical properties as well as their implications for the kinematics and dynamics of the flow. Such information will aid the representation of small-scale nonlinear processes that large-scale models of natural processes may not be able to resolve.
Contributors: Cass, Brentlee Jerry (Author) / Tang, Wenbo (Thesis director) / Kostelich, Eric (Committee member) / Department of Information Systems (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
In the last decade, the population of honey bees across the globe has declined sharply, leaving scientists and beekeepers to wonder why. Among all nations, the United States has seen some of the greatest declines over the past decade and more. Without a definite explanation, the term Colony Collapse Disorder (CCD) was coined to describe the sudden and sharp decline of honey bee colonies that beekeepers were experiencing. Colony collapses have been rising above expected averages over the years, and losses during the winter season are even more severe than what is normally acceptable. Possible explanations point toward meteorological variables, diseases, and even pesticide usage. Although the cause of CCD remains unknown, thousands of beekeepers have reported their losses, and in the most recent years even the numbers of infected colonies and colonies under certain stressors. Using the data reported to the United States Department of Agriculture (USDA), as well as weather data collected by the National Oceanic and Atmospheric Administration (NOAA) and the National Centers for Environmental Information (NCEI), regression analysis was used to investigate relationships between stressors in honey bee colonies, meteorological variables, and colony collapses during the winter months. The analysis focused on the winter season, or quarter 4 of the year, comprising October, November, and December. In the model, the response variable was the percentage of colonies lost in quarter 4. The model indicated that certain weather thresholds and the percentage increase of colonies under certain stressors were related to colony loss.
Contributors: Vasquez, Henry Antony (Author) / Zheng, Yi (Thesis director) / Saffell, Erinanne (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Problems related to alcohol consumption impose not only extra economic expenses but also costs to the health of both drinkers and non-drinkers, through harm caused directly and indirectly by alcohol consumption. Investigating predictors of and reasons for alcohol-related problems is important, as such problems could be prevented by quitting or limiting consumption of alcohol. We were interested in predicting alcohol-related problems using multiple linear regression and regression trees, and in comparing the regressions to the tree. Impaired control, anxiety sensitivity, mother permissiveness, father permissiveness, gender, and age were included as predictors. The data comprised participants (n = 835) sampled from students at Arizona State University. A multiple linear regression without interactions, a multiple linear regression with two-way interactions and squared terms, and a regression tree were fit and compared. The regressions and the tree gave similar results, with multiple interactions of variables predicting alcohol-related problems. Overall, the tree was easier to interpret than the regressions; the regressions provided specific predicted alcohol-related-problem scores, whereas the tree formed large groups, each with a single predicted score. Nevertheless, the tree still predicted alcohol-related problems nearly as well as, if not better than, the regressions.
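The two methods' core fitting steps can be sketched on simulated stand-in data (the study's actual variables, scores, and tree software are not reproduced): ordinary least squares versus the greedy split search that a regression tree applies recursively. A single split is shown, which is why trees naturally capture threshold effects that a line smooths over:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data standing in for the study's measures:
# x = a predictor score, y = alcohol-related problems score,
# simulated with a threshold effect plus a weak linear trend.
x = rng.uniform(0, 10, 300)
y = 2.0 * (x > 6) + 0.1 * x + rng.normal(0, 0.5, 300)

# Linear regression fit by least squares.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
sse_lin = ((y - X @ beta) ** 2).sum()

def best_split_sse(x, y):
    """One greedy split -- the step a regression tree repeats recursively:
    pick the threshold minimizing the within-group squared error."""
    best = (np.inf, None)
    for t in np.unique(x)[1:]:
        left, right = y[x < t], y[x >= t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        best = min(best, (sse, t))
    return best

sse_tree, threshold = best_split_sse(x, y)
```

On data with a genuine threshold, the single split already beats the straight line, mirroring the abstract's observation that the tree can predict nearly as well as, or better than, the regressions while remaining easy to read.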
Contributors: Voorhies, Kirsten Reed (Author) / McCulloch, Robert (Thesis director) / Zheng, Yi (Committee member) / Patock-Peckham, Julie (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
The Pearson and likelihood ratio statistics are commonly used to test goodness of fit for models applied to data from a multinomial distribution. When data come from a table formed by cross-classification of a large number of variables, the common statistics may have low power and an inaccurate Type I error level due to sparseness in the cells of the table. The GFfit statistic can be used to examine model fit in subtables. It is proposed to assess model fit using a new version of the GFfit statistic, based on orthogonal components of the Pearson chi-square, as a diagnostic for examining fit on two-way subtables. However, with variables that have a large number of categories and a small sample size, even the GFfit statistic may suffer from low power and an inaccurate Type I error level due to sparseness in the two-way subtable. In this dissertation, the theoretical and empirical power of the GFfit statistic are studied. A method based on subsets of the orthogonal components of the GFfit statistic on the subtables is developed to improve its performance. Simulation results for power and Type I error rate in several different cases, along with comparisons to other diagnostics, are presented.
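The building block of such diagnostics, a Pearson chi-square statistic on a two-way subtable, can be sketched as follows. The counts are hypothetical, independence stands in for the fitted model's marginal predictions, and the orthogonal-component decomposition itself is not reproduced:

```python
import numpy as np

# Observed two-way subtable (hypothetical counts) for a pair of items,
# pooled over the remaining variables of a larger cross-classification.
observed = np.array([[30, 10],
                     [20, 40]])

# Expected counts under the no-association (independence) hypothesis.
row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row @ col / observed.sum()

# Pearson X^2 for the subtable; GFfit-style diagnostics decompose such
# statistics into orthogonal components, one per degree of freedom.
x2 = ((observed - expected) ** 2 / expected).sum()
df = (observed.shape[0] - 1) * (observed.shape[1] - 1)
```

Sparseness bites precisely when some `expected` cells are tiny, inflating individual terms of the sum, which motivates working with subsets of components rather than the whole statistic.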
Contributors: Zhu, Junfei (Author) / Reiser, Mark R. (Thesis advisor) / Stufken, John (Committee member) / Zheng, Yi (Committee member) / St Louis, Robert (Committee member) / Kao, Ming-Hung (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
This article proposes a new information-based subdata selection (IBOSS) algorithm, the Squared Scaled Distance Algorithm (SSDA). It is based on the invariance of the determinant of the information matrix under orthogonal transformations, especially rotations. Extensive simulation results show that the new IBOSS algorithm retains the nice asymptotic properties of IBOSS and gives a larger determinant of the subdata information matrix. It has the same order of time complexity as the D-optimal IBOSS algorithm; however, it exploits vectorized calculation, avoiding explicit loops, and is approximately six times as fast as the D-optimal IBOSS algorithm in R. The robustness of SSDA is studied from three aspects: nonorthogonality, inclusion of interaction terms, and variable misspecification. A new, accurate variable selection algorithm is proposed to help implement IBOSS algorithms when a large number of variables are present with only a few important variables among them. By aggregating results from random subsamples, this variable selection algorithm is much more accurate than the LASSO method applied to the full data. Since its time complexity depends on the number of variables only, it is also very computationally efficient when the number of variables is fixed and not massively large as n increases. More importantly, by using subsamples it solves the problem that the full data cannot be stored in memory when a dataset is too large.
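The baseline D-optimal IBOSS rule that SSDA is compared against can be sketched as follows: for each covariate in turn, keep the points with the most extreme values among the rows not yet selected. The data are simulated, and SSDA's rotation-based selection is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(7)

# Full data: n points, p covariates (hypothetical).
n, p, k = 10_000, 3, 120            # k = subdata size; requires k >= 2p
X = rng.normal(size=(n, p))

# D-optimal IBOSS selection: for each covariate, keep the r rows with the
# smallest and the r rows with the largest values among rows not yet taken.
r = k // (2 * p)
available = np.arange(n)
selected = []
for j in range(p):
    order = np.argsort(X[available, j])            # positions within `available`
    take = np.concatenate([order[:r], order[-r:]])
    selected.append(available[take])               # record the original row ids
    available = np.delete(available, take)         # remove them from the pool
subdata = X[np.concatenate(selected)]
```

Because only sorting along each covariate is needed, the cost is roughly O(np) per pass, which is what makes IBOSS-style selection feasible when the full data barely fits in memory.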
Contributors: Zheng, Yi (Author) / Stufken, John (Thesis advisor) / Reiser, Mark R. (Committee member) / McCulloch, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017
Description

Adaptive therapy exploits competitive interactions between resistant and sensitive cells, keeping some sensitive cells in order to control tumor burden, with the aim of increasing overall survival and time to progression. The use of adaptive therapy to treat breast cancer, ovarian cancer, and pancreatic cancer in preclinical models has shown significant results in controlling tumor growth. The purpose of this thesis is to draft a protocol to study adaptive therapy in a preclinical model of breast cancer using MCF7, estrogen receptor-positive cells that have evolved resistance to fulvestrant and palbociclib (MCF7 R). In this study, we used two protocols: drug dose adjustment and intermittent therapy. The MCF7 R cell lines were injected into the mammary fat pads of 11-month-old NOD/SCID gamma (NSG) mice (18 mice), which were then treated with gemcitabine.

The results of this experiment did not provide complete information because of the short-term treatments. In addition, we saw an increase in the tumor size of a few of the treated mice, which could be due to the metabolism of the drug at that age or to differences in injection times. Therefore, these adaptive therapy protocols on hormone-refractory breast cancer cell lines will be repeated on young, 6-week-old mice, injecting the cell lines at the same time for all mice, which helps make the results more consistent and accurate.

Contributors: Conti, Aviona (Author) / Maley, Carlo (Thesis director) / Blattman, Joseph (Committee member) / Seyedi, Sareh (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Career information for degrees in statistics and data science, organized around frequently asked questions and twelve major categories of interest: arts, business, education, engineering, environment, government, law, medicine, science, social science, sports, and technology.

Contributors: Derby-Lawson, Lili (Author) / Zheng, Yi (Thesis director) / Zhang, Helen (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / School of International Letters and Cultures (Contributor) / Economics Program in CLAS (Contributor) / School of Sustainability (Contributor)
Created: 2023-05