Matching Items (8)
Description
Random Forests is a statistical learning method that has been proposed for propensity score estimation models involving complex interactions among the covariates, nonlinear relationships with the covariates, or both. In this dissertation I conducted a simulation study to examine the effects of three Random Forests model specifications in propensity score analysis. The results suggested that, depending on the nature of the data, optimal specification of (1) the decision rule for selecting the covariate and its split value in a Classification Tree, (2) the number of covariates randomly sampled for selection, and (3) the method of estimating Random Forests propensity scores could potentially produce an unbiased average treatment effect estimate after propensity score weighting by the odds. Compared to a logistic regression estimation model using the true propensity score model, Random Forests had the additional advantage of producing an unbiased estimated standard error and correct statistical inference for the average treatment effect. The relationship between balance on the covariates' means and the bias of the average treatment effect estimate was examined both within and between conditions of the simulation. Within conditions, across repeated samples there was no noticeable correlation between the covariates' mean differences and the magnitude of bias in the average treatment effect estimate for covariates that were imbalanced before adjustment. Between conditions, small mean differences of covariates after propensity score adjustment were not sensitive enough to identify the optimal Random Forests model specification for propensity score analysis.
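The weighting-by-the-odds adjustment described in this abstract can be sketched in a few lines. This is a toy illustration, not the dissertation's code: the propensity scores come from a known logistic model rather than a Random Forests fit, and all parameter values are invented.

```python
import math
import random

random.seed(1)

# Weighting by the odds targets the average treatment effect on the
# treated (ATT): treated units get weight 1, controls get e / (1 - e),
# which reweights the control group to the treated covariate distribution.
def att_weight(treated, e):
    return 1.0 if treated else e / (1.0 - e)

n = 5000
rows = []
for _ in range(n):
    x = random.gauss(0, 1)                      # confounder
    e = 1 / (1 + math.exp(-0.5 * x))            # true propensity score
    t = 1 if random.random() < e else 0
    y = 2.0 * t + 1.5 * x + random.gauss(0, 1)  # true treatment effect = 2.0
    rows.append((t, e, y))

treated = [(e, y) for t, e, y in rows if t]
controls = [(e, y) for t, e, y in rows if not t]
y_treated = sum(y for _, y in treated) / len(treated)
w = [att_weight(0, e) for e, _ in controls]
y_control = sum(wi * y for (_, y), wi in zip(controls, w)) / sum(w)
att = y_treated - y_control  # close to 2.0, up to sampling error
```

Without the weights, the simple treated-control mean difference would absorb the confounding through x; the odds weights remove it because the true propensity score is used here.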
ContributorsCham, Hei Ning (Author) / Tein, Jenn-Yun (Thesis advisor) / Enders, Stephen G (Thesis advisor) / Enders, Craig K. (Committee member) / Mackinnon, David P (Committee member) / Arizona State University (Publisher)
Created2013
Description
Understanding how adherence affects outcomes is crucial when developing and assigning interventions. However, interventions are often evaluated by conducting randomized experiments and estimating intent-to-treat effects, which ignore the treatment actually received. Dose-response effects can supplement intent-to-treat effects when participants are offered the full dose but many receive only a partial dose due to nonadherence. Using these data, we can estimate the magnitude of the treatment effect at different levels of adherence, which serve as a proxy for different levels of treatment. In this dissertation, I conducted Monte Carlo simulations to evaluate when linear dose-response effects can be accurately and precisely estimated in randomized experiments comparing a no-treatment control condition to a treatment condition with partial adherence. Specifically, I evaluated the performance of confounder adjustment and instrumental variable methods when their assumptions were met (Study 1) and when their assumptions were violated (Study 2). In Study 1, the confounder adjustment and instrumental variable methods provided unbiased estimates of the dose-response effect across sample sizes (200, 500, 2,000) and adherence distributions (uniform, right skewed, left skewed). The adherence distribution affected power for the instrumental variable method. In Study 2, the confounder adjustment method provided unbiased or minimally biased estimates of the dose-response effect under no or weak (but not moderate or strong) unobserved confounding. The instrumental variable method provided extremely biased estimates of the dose-response effect under violations of the exclusion restriction (no direct effect of treatment assignment on the outcome), although less severe violations of the exclusion restriction remain to be investigated.
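The instrumental variable logic in this abstract can be sketched with a toy simulation. This is an invented example, not the dissertation's study design: randomized assignment serves as the instrument for dose received, and all coefficients are made up.

```python
import random

random.seed(2)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

n = 20000
z, d, y = [], [], []
for _ in range(n):
    zi = random.randint(0, 1)   # random assignment (the instrument)
    u = random.gauss(0, 1)      # unobserved confounder of dose and outcome
    # Controls receive no dose; treated participants adhere only partially,
    # and their adherence depends on the confounder u.
    di = zi * min(1.0, max(0.0, 0.6 + 0.2 * u + random.gauss(0, 0.1)))
    yi = 1.5 * di + 1.0 * u + random.gauss(0, 1)  # true dose-response = 1.5
    z.append(zi); d.append(di); y.append(yi)

naive = cov(d, y) / cov(d, d)  # "as-treated" slope, biased upward by u
iv = cov(z, y) / cov(z, d)     # instrumental variable (Wald) estimate
```

The naive slope mixes the dose effect with confounding; the Wald ratio recovers the true dose-response effect because assignment is randomized and, in this toy setup, affects the outcome only through dose (the exclusion restriction).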
ContributorsMazza, Gina L (Author) / Grimm, Kevin J. (Thesis advisor) / West, Stephen G. (Thesis advisor) / Mackinnon, David P (Committee member) / Tein, Jenn-Yun (Committee member) / Arizona State University (Publisher)
Created2018
Description
The goal of diagnostic assessment is to discriminate between groups. In many cases, a binary decision is made conditional on a cut score from a continuous scale. Psychometric methods can improve assessment by modeling a latent variable using item response theory (IRT), and IRT scores can subsequently be used to determine a cut score using receiver operating characteristic (ROC) curves. Psychometric methods provide reliable and interpretable scores, but the prediction of the diagnosis is not the primary product of the measurement process. In contrast, machine learning methods, such as regularization or binary recursive partitioning, can build a model from the assessment items to predict the probability of diagnosis. Machine learning predicts the diagnosis directly, but does not provide an inferential framework to explain why item responses are related to the diagnosis. It remains unclear whether psychometric and machine learning methods have comparable accuracy or if one method is preferable in some situations. In this study, Monte Carlo simulation methods were used to compare psychometric and machine learning methods on diagnostic classification accuracy. Results suggest that classification accuracy of psychometric models depends on the diagnostic-test correlation and prevalence of diagnosis. Also, machine learning methods that reduce prediction error have inflated specificity and very low sensitivity compared to the data-generating model, especially when prevalence is low. Finally, machine learning methods that use ROC curves to determine probability thresholds have comparable classification accuracy to the psychometric models as sample size, number of items, and number of item categories increase. Therefore, results suggest that machine learning models could provide a viable alternative for classification in diagnostic assessments. Strengths and limitations for each of the methods are discussed, and future directions are considered.
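The ROC-based cut score step mentioned in this abstract can be sketched directly. This is a hypothetical example with hand-made scores, not the study's data: it searches observed score values for the threshold maximizing Youden's J (sensitivity + specificity - 1), one common ROC criterion.

```python
# Toy continuous scores (e.g., estimated IRT scores) and true diagnoses.
scores = [-1.8, -1.2, -0.7, -0.3, 0.1, 0.4, 0.9, 1.3, 1.7, 2.2]
diagnosis = [0, 0, 0, 0, 0, 1, 0, 1, 1, 1]

def youden(cut):
    # Sensitivity: diagnosed cases at or above the cut.
    sens = sum(1 for s, d in zip(scores, diagnosis) if d and s >= cut) / sum(diagnosis)
    # Specificity: non-cases below the cut.
    spec = sum(1 for s, d in zip(scores, diagnosis) if not d and s < cut) / (
        len(diagnosis) - sum(diagnosis))
    return sens + spec - 1

best = max(scores, key=youden)
print(best)  # -> 0.4
```

With these toy numbers the cut 0.4 classifies all four cases correctly while misclassifying one non-case, giving the highest J of any observed threshold.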
ContributorsGonzález, Oscar (Author) / Mackinnon, David P (Thesis advisor) / Edwards, Michael C (Thesis advisor) / Grimm, Kevin J. (Committee member) / Zheng, Yi (Committee member) / Arizona State University (Publisher)
Created2018
Description
Time metric is an important consideration for all longitudinal models because it can influence the interpretation of estimates, parameter estimate accuracy, and model convergence in longitudinal models with latent variables. Currently, the literature on latent difference score (LDS) models does not discuss the importance of time metric. Furthermore, there is little research using simulations to investigate LDS models. This study examined the influence of time metric on model estimation, interpretation, parameter estimate accuracy, and convergence in LDS models using empirical simulations. Results indicated that for a time structure with a true time metric where participants had different starting points and unequally spaced intervals, LDS models fit with a restructured and less informative time metric produced biased parameter estimates. However, models examined using the true time metric were less likely to converge than models using the restructured time metric, likely due to missing data. Where participants had different starting points but equally spaced intervals, LDS models fit with a restructured time metric produced biased estimates of intercept means, but all other parameter estimates were unbiased; models examined using the true time metric again converged less often than models using the restructured time metric, due to missing data. The findings of this study support prior research on time metric in longitudinal models, and further research should examine these findings under alternative conditions. The importance of these findings for substantive researchers is discussed.
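The LDS change process underlying this abstract can be sketched as a data-generating rule. This is a minimal, invented illustration of the univariate dual-change form (y[t] = y[t-1] + change, where change combines a constant latent slope and a proportional effect of the previous score), not the study's simulation code.

```python
# Hedged sketch of a univariate latent difference score trajectory:
#   dy[t] = alpha * slope + beta * y[t-1]
#   y[t]  = y[t-1] + dy[t]
# All parameter values below are invented for illustration.
def lds_trajectory(intercept, slope, beta, waves, alpha=1.0):
    y = [intercept]
    for _ in range(waves - 1):
        dy = alpha * slope + beta * y[-1]  # latent difference at this wave
        y.append(y[-1] + dy)
    return y

traj = lds_trajectory(intercept=5.0, slope=2.0, beta=-0.2, waves=5)
print([round(v, 2) for v in traj])  # -> [5.0, 6.0, 6.8, 7.44, 7.95]
```

Because beta is negative, each difference shrinks as the score rises, producing the decelerating growth typical of LDS applications; the wave index here is the "time metric" whose restructuring the study investigates.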
ContributorsO'Rourke, Holly P (Author) / Grimm, Kevin J. (Thesis advisor) / Mackinnon, David P (Thesis advisor) / Chassin, Laurie (Committee member) / Aiken, Leona S. (Committee member) / Arizona State University (Publisher)
Created2016
Description
Collider effects pose a major problem in psychological research. Colliders are third variables that bias the relationship between an independent and dependent variable when (1) the composition of a research sample is restricted by the scores on a collider variable or (2) researchers adjust for a collider variable in their statistical analyses. Both cases interfere with the accuracy and generalizability of statistical results. Despite their importance, collider effects remain relatively unknown in the social sciences. This research introduces both the conceptual and the mathematical foundation for collider effects and demonstrates how to calculate a collider effect and test it for statistical significance. Simulation studies examined the efficiency and accuracy of the collider estimation methods and tested the viability of Thorndike’s Case III equation as a potential solution to correcting for collider bias in cases of biased sample selection.
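The biased-sample-selection case described in this abstract is easy to demonstrate by simulation. This is a toy example, not the thesis's study: two independent variables both cause a collider, and restricting the sample on the collider manufactures a spurious correlation between them.

```python
import math
import random

random.seed(4)

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    sa = math.sqrt(sum((v - ma) ** 2 for v in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / (sa * sb)

n = 20000
x = [random.gauss(0, 1) for _ in range(n)]           # independent of y
y = [random.gauss(0, 1) for _ in range(n)]
c = [xi + yi + random.gauss(0, 1) for xi, yi in zip(x, y)]  # collider

r_full = corr(x, y)  # near zero: no real x-y relationship
# Restrict the sample on the collider (e.g., only high scorers enroll):
kept = [(xi, yi) for xi, yi, ci in zip(x, y, c) if ci > 1.0]
r_selected = corr([p[0] for p in kept], [p[1] for p in kept])
```

In the selected subsample, knowing x is large makes a large y less necessary to explain a high c, so r_selected turns clearly negative even though x and y are unrelated in the population.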
ContributorsLamp, Sophia Josephine (Author) / Mackinnon, David P (Thesis advisor) / Anderson, Samantha F (Committee member) / Edwards, Michael C (Committee member) / Arizona State University (Publisher)
Created2021
Description
The Partition of Variance (POV) method is a simple way to identify large sources of variation in manufacturing systems. The method partitions the variance by estimating the variance of the group means (between variance) and the mean of the group variances (within variance). The project shows that the method correctly identifies the variance source when compared to the ANOVA method. The variance estimators deteriorate when varying degrees of non-normality are introduced through simulation; however, the POV method is shown to be a more stable measure of variance in the aggregate. The POV method also provides non-negative, stable estimates for interaction when compared to the ANOVA method, and it is more stable particularly in low-sample-size situations. Based on these findings, it is suggested that POV is not a replacement for more complex analysis methods but rather a supplement to them. POV is ideal for preliminary analysis due to its ease of implementation, simplicity of interpretation, and lack of dependency on statistical analysis packages or statistical knowledge.
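The between/within decomposition at the heart of POV can be computed in a few lines. This is a hedged sketch with invented measurements from three hypothetical tools, not the thesis's data or code.

```python
from statistics import mean, pvariance

# Toy manufacturing data: the same measurement taken on parts from
# three tools. Tool names and values are invented for illustration.
groups = {
    "tool_A": [10.1, 9.9, 10.0, 10.2],
    "tool_B": [12.0, 11.8, 12.1, 12.1],
    "tool_C": [10.0, 10.1, 9.8, 10.1],
}

# Between variance: variance of the group means.
between = pvariance([mean(g) for g in groups.values()])
# Within variance: mean of the group variances.
within = mean(pvariance(g) for g in groups.values())
```

Here the between component dwarfs the within component, flagging tool-to-tool differences (tool_B runs high) as the dominant variance source, which is exactly the kind of quick screening the abstract recommends POV for.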
ContributorsLittle, David John (Author) / Borror, Connie (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Broatch, Jennifer (Committee member) / Arizona State University (Publisher)
Created2015
Description
Statistical inference from mediation analysis applies to populations; however, researchers and clinicians may be interested in making inferences about individual clients or small, localized groups of people. Person-oriented approaches focus on the differences between people, or latent groups of people, to ask how individuals differ across variables, and can help researchers avoid ecological fallacies when making inferences about individuals. Traditional variable-oriented mediation assumes the population undergoes a homogeneous reaction to the mediating process. However, mediation is also described as an intra-individual process where each person passes from a predictor, through a mediator, to an outcome (Collins, Graham, & Flaherty, 1998). Configural frequency mediation (CFM) is a person-oriented analysis of contingency tables that has not been well studied or implemented since its introduction in the literature (von Eye, Mair, & Mun, 2010; von Eye, Mun, & Mair, 2009). The purpose of this study is to describe CFM and investigate its statistical properties while comparing it to traditional and causal inference mediation methods. The results of this study show that joint significance mediation tests result in better Type I error rates but limit the person-oriented interpretations of CFM. Although the estimators for logistic regression and causal mediation are different, both perform well in terms of Type I error and power, although the causal estimator had higher bias than expected, which is discussed in the limitations section.
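The joint significance test mentioned in this abstract has a one-line decision rule, and its conservative Type I error behavior is easy to verify by simulation. This is a hedged sketch under an idealized assumption (independent standard normal z statistics for the two paths under the complete null), not the study's simulation.

```python
import random

random.seed(7)

# Joint significance: declare mediation when both the a path (X -> M)
# and the b path (M -> Y, controlling X) are individually significant.
def joint_significance(z_a, z_b, crit=1.96):
    return abs(z_a) >= crit and abs(z_b) >= crit

# Under the complete null with independent z statistics, the rejection
# rate should be about .05 * .05 = .0025, far below the nominal .05.
reps = 100_000
rejections = sum(
    joint_significance(random.gauss(0, 1), random.gauss(0, 1))
    for _ in range(reps)
)
rate = rejections / reps
```

This conservatism under the complete null is one reason the abstract reports "better" Type I error rates for joint significance tests relative to the CFM alternatives.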
ContributorsSmyth, Heather Lynn (Author) / Mackinnon, David P (Thesis advisor) / Grimm, Kevin J. (Committee member) / Edwards, Michael C (Committee member) / Arizona State University (Publisher)
Created2019
Description
This research explores tests for statistical suppression. Suppression is a statistical phenomenon whereby the magnitude of an effect becomes larger when another variable is added to the regression equation. From a causal perspective, suppression occurs when there is inconsistent mediation or negative confounding. Several different estimators of suppression are evaluated conceptually and in a statistical simulation study imposing suppression and non-suppression conditions. For each estimator without an existing standard error formula, one was derived in order to conduct significance tests and build confidence intervals. Overall, two of the estimators were biased and had poor coverage; a third worked well but had inflated Type I error rates when the population model was complete mediation. As a result of analyzing these three tests, a fourth was considered in the late stages of the project and showed promising results that address the concerns with the other tests. When the tests were applied to real data, they gave similar and consistent results.
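The inconsistent-mediation form of suppression described in this abstract can be demonstrated with a toy simulation. All coefficients here are invented: the indirect path (0.6 * -0.7 = -0.42) nearly cancels the direct effect (0.5), so the x-y slope looks tiny until the mediator is added.

```python
import random

random.seed(8)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / (len(a) - 1)

n = 20000
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.6 * xi + random.gauss(0, 1) for xi in x]                      # a path
y = [0.5 * xi - 0.7 * mi + random.gauss(0, 1) for xi, mi in zip(x, m)]  # b, c'

# Total effect of x on y: 0.5 + (0.6 * -0.7) = 0.08 -- close to zero.
simple = cov(x, y) / cov(x, x)

# Partial slope for x in y ~ x + m, via the two-predictor OLS formula.
det = cov(x, x) * cov(m, m) - cov(x, m) ** 2
adjusted = (cov(x, y) * cov(m, m) - cov(m, y) * cov(x, m)) / det
```

Adding m makes the x coefficient jump from about 0.08 to about 0.5, the signature magnitude increase that the suppression estimators in this research are built to quantify.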
ContributorsMuniz, Felix (Author) / Mackinnon, David P (Thesis advisor) / Anderson, Samantha F. (Committee member) / McNeish, Daniel M (Committee member) / Arizona State University (Publisher)
Created2020