Description
Random Forests is a statistical learning method that has been proposed for propensity score estimation models involving complex interactions among the covariates, nonlinear relationships of the covariates, or both. In this dissertation I conducted a simulation study to examine the effects of three Random Forests model specifications in propensity score analysis. The results suggested that, depending on the nature of the data, optimal specification of (1) the decision rule used to select a covariate and its split value in a classification tree, (2) the number of covariates randomly sampled for selection, and (3) the method of estimating Random Forests propensity scores could produce an unbiased average treatment effect estimate after propensity score weighting by the odds. Compared to a logistic regression estimation model using the true propensity score model, Random Forests had the additional advantage of producing an unbiased standard error estimate and correct statistical inference for the average treatment effect. The relationship between balance on the covariates' means and the bias of the average treatment effect estimate was examined both within and between simulation conditions. Within conditions, across repeated samples there was no noticeable correlation between the covariates' mean differences and the magnitude of bias in the average treatment effect estimate for covariates that were imbalanced before adjustment. Between conditions, small mean differences of covariates after propensity score adjustment were not sensitive enough to identify the optimal Random Forests model specification for propensity score analysis.
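To make the estimation and weighting steps concrete, here is a minimal sketch on simulated data, using scikit-learn's RandomForestClassifier as a stand-in for whatever software the dissertation used; the data-generating model and all settings are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))                          # covariates
logit = X[:, 0] + X[:, 1] * X[:, 2]                  # interaction + nonlinearity
z = rng.binomial(1, 1 / (1 + np.exp(-logit)))        # treatment assignment
y = 0.5 * z + X[:, 0] + rng.normal(size=n)           # outcome; true effect = 0.5

# Estimate propensity scores; out-of-bag predictions limit overfitting.
# max_features plays the role of "number of covariates randomly sampled".
rf = RandomForestClassifier(n_estimators=500, max_features=2,
                            oob_score=True, random_state=0).fit(X, z)
ps = np.clip(rf.oob_decision_function_[:, 1], 0.01, 0.99)

# Weighting by the odds: treated units get weight 1, control units get
# ps / (1 - ps); this targets the effect of treatment on the treated.
w = np.where(z == 1, 1.0, ps / (1 - ps))
effect = y[z == 1].mean() - np.average(y[z == 0], weights=w[z == 0])
print(effect)
```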
Contributors: Cham, Hei Ning (Author) / Tein, Jenn-Yun (Thesis advisor) / West, Stephen G. (Thesis advisor) / Enders, Craig K. (Committee member) / Mackinnon, David P (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Understanding how adherence affects outcomes is crucial when developing and assigning interventions. However, interventions are often evaluated by conducting randomized experiments and estimating intent-to-treat effects, which ignore actual treatment received. Dose-response effects can supplement intent-to-treat effects when participants are offered the full dose but many only receive a partial dose due to nonadherence. Using these data, we can estimate the magnitude of the treatment effect at different levels of adherence, which serve as a proxy for different levels of treatment. In this dissertation, I conducted Monte Carlo simulations to evaluate when linear dose-response effects can be accurately and precisely estimated in randomized experiments comparing a no-treatment control condition to a treatment condition with partial adherence. Specifically, I evaluated the performance of confounder adjustment and instrumental variable methods when their assumptions were met (Study 1) and when their assumptions were violated (Study 2). In Study 1, the confounder adjustment and instrumental variable methods provided unbiased estimates of the dose-response effect across sample sizes (200, 500, 2,000) and adherence distributions (uniform, right skewed, left skewed). The adherence distribution affected power for the instrumental variable method. In Study 2, the confounder adjustment method provided unbiased or minimally biased estimates of the dose-response effect under no or weak (but not moderate or strong) unobserved confounding. The instrumental variable method provided extremely biased estimates of the dose-response effect under violations of the exclusion restriction (no direct effect of treatment assignment on the outcome), though less severe violations of the exclusion restriction should be investigated.
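As a sketch of the two estimators compared here, the following simulates a randomized experiment with partial adherence and an unobserved confounder of dose and outcome; all names and the data-generating model are illustrative only, not the dissertation's code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
assign = rng.binomial(1, 0.5, n)                    # randomized assignment
u = rng.normal(size=n)                              # unobserved confounder
dose = assign * np.clip(rng.uniform(size=n) + 0.3 * u, 0, 1)  # partial adherence
y = 1.0 * dose + 0.5 * u + rng.normal(size=n)       # true dose-response effect = 1.0

# Confounder adjustment: regress the outcome on dose received
# (biased here because u is unobserved and therefore omitted).
adj = sm.OLS(y, sm.add_constant(dose)).fit()

# Instrumental variable (manual 2SLS): randomized assignment instruments for
# dose received; point estimate only, dedicated 2SLS software corrects the SEs.
stage1 = sm.OLS(dose, sm.add_constant(assign)).fit()
iv = sm.OLS(y, sm.add_constant(stage1.fittedvalues)).fit()
print(adj.params[1], iv.params[1])
```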
Contributors: Mazza, Gina L (Author) / Grimm, Kevin J. (Thesis advisor) / West, Stephen G. (Thesis advisor) / Mackinnon, David P (Committee member) / Tein, Jenn-Yun (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The goal of diagnostic assessment is to discriminate between groups. In many cases, a binary decision is made conditional on a cut score from a continuous scale. Psychometric methods can improve assessment by modeling a latent variable using item response theory (IRT), and IRT scores can subsequently be used to determine a cut score using receiver operating characteristic (ROC) curves. Psychometric methods provide reliable and interpretable scores, but the prediction of the diagnosis is not the primary product of the measurement process. In contrast, machine learning methods, such as regularization or binary recursive partitioning, can build a model from the assessment items to predict the probability of diagnosis. Machine learning predicts the diagnosis directly, but does not provide an inferential framework to explain why item responses are related to the diagnosis. It remains unclear whether psychometric and machine learning methods have comparable accuracy or if one method is preferable in some situations. In this study, Monte Carlo simulation methods were used to compare psychometric and machine learning methods on diagnostic classification accuracy. Results suggest that classification accuracy of psychometric models depends on the diagnostic-test correlation and prevalence of diagnosis. Also, machine learning methods that reduce prediction error have inflated specificity and very low sensitivity compared to the data-generating model, especially when prevalence is low. Finally, machine learning methods that use ROC curves to determine probability thresholds have comparable classification accuracy to the psychometric models as sample size, number of items, and number of item categories increase. Therefore, results suggest that machine learning models could provide a viable alternative for classification in diagnostic assessments. Strengths and limitations for each of the methods are discussed, and future directions are considered.
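A hedged sketch of the machine-learning arm of this comparison — regularized logistic regression on item responses with an ROC-derived probability threshold — on invented data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
n, n_items = 1000, 10
theta = rng.normal(size=n)                                  # latent trait
items = (rng.normal(size=(n, n_items)) < theta[:, None]).astype(int)
diag = rng.binomial(1, 1 / (1 + np.exp(-(theta - 1.5))))    # low prevalence

# Regularized logistic regression predicts diagnosis from item responses.
model = LogisticRegressionCV(Cs=10, penalty="l2").fit(items, diag)
prob = model.predict_proba(items)[:, 1]

# A default 0.5 cut inflates specificity and starves sensitivity when
# prevalence is low; an ROC-based threshold (Youden's J) rebalances them.
fpr, tpr, thresholds = roc_curve(diag, prob)
cut = thresholds[np.argmax(tpr - fpr)]
pred = (prob >= cut).astype(int)
```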
Contributors: González, Oscar (Author) / Mackinnon, David P (Thesis advisor) / Edwards, Michael C (Thesis advisor) / Grimm, Kevin J. (Committee member) / Zheng, Yi (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Previous research has shown that functional mixed-effects models and traditional mixed-effects models perform similarly when recovering mean and individual trajectories, although the traditional mixed-effects models recovered the underlying mean curves more accurately (Fine, Suk, & Grimm, 2019). That project generated data following a parametric structure. This paper extends that work by comparing nonlinear mixed-effects models and functional mixed-effects models on their ability to recover underlying trajectories generated from an inherently nonparametric process. The paper introduces readers to nonlinear mixed-effects models and functional mixed-effects models and then presents a simulation study in which the mean and random-effects structure of the simulated data were generated using B-splines. The accuracy of the recovered curves was examined under various conditions, including sample size, number of time points per curve, and measurement design. Results showed that the functional mixed-effects models recovered the underlying mean curve, and in general the underlying individual curves, more accurately than the nonlinear mixed-effects models. Progesterone cycle data from Brumback and Rice (1998) were then analyzed to demonstrate the utility of both models; the two models performed similarly on these data.
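The data-generating idea — curves built from a B-spline basis at both the mean and person level — can be sketched as follows; knots, coefficients, and noise scales are invented for illustration.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 25)                          # time points per curve
k = 3                                              # cubic B-splines
knots = np.concatenate([[0.0] * k, np.linspace(0, 1, 6), [1.0] * k])
n_basis = len(knots) - k - 1                       # = 8 basis functions

mean_coef = rng.normal(size=n_basis)               # fixed-effect coefficients

# Each person's curve = spline(mean + person-specific deviation) + noise,
# so both the mean and the random-effects structure are nonparametric.
n_people = 50
curves = np.stack([
    BSpline(knots, mean_coef + rng.normal(scale=0.3, size=n_basis), k)(t)
    + rng.normal(scale=0.1, size=t.size)
    for _ in range(n_people)
])
```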
Contributors: Fine, Kimberly L (Author) / Grimm, Kevin J. (Thesis advisor) / Edwards, Michael C (Committee member) / O'Rourke, Holly (Committee member) / McNeish, Dan (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
To make meaningful comparisons on a construct of interest across groups or over time, measurement invariance needs to hold for at least a subset of the observed variables that define the construct. Often, chi-square difference tests are used to test for measurement invariance. However, these statistics are affected by sample size such that larger samples yield a greater prevalence of significant tests. Thus, other measures of non-invariance would be useful aids to the decision process. For this dissertation project, I proposed four new effect size measures of measurement non-invariance and conducted a Monte Carlo simulation study to evaluate their properties and behavior, along with those of an already existing effect size measure of non-invariance. The effect size measures were evaluated on bias, variability, and consistency, and the factors that affected their values were analyzed. All studied effect sizes were consistent, but three were biased under certain conditions. Further work is needed to establish benchmarks for the unbiased effect sizes.
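The evaluation criteria translate directly into a Monte Carlo loop; the sketch below uses a generic standardized mean difference as a stand-in effect size (the dissertation's proposed measures are not reproduced here).

```python
import numpy as np

def smd(rng, n):
    """Stand-in effect size: standardized mean difference, true value 0.3."""
    g1, g2 = rng.normal(0.0, 1.0, n), rng.normal(0.3, 1.0, n)
    pooled = np.sqrt((g1.var(ddof=1) + g2.var(ddof=1)) / 2)
    return (g2.mean() - g1.mean()) / pooled

def mc_evaluate(estimator, truth, n, reps=1000, seed=4):
    rng = np.random.default_rng(seed)
    est = np.array([estimator(rng, n) for _ in range(reps)])
    return {"bias": est.mean() - truth, "variability": est.std(ddof=1)}

# Consistency shows up as bias and variability shrinking with sample size.
for n in (100, 1000, 10000):
    print(n, mc_evaluate(smd, truth=0.3, n=n))
```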
Contributors: Gunn, Heather J (Author) / Grimm, Kevin J. (Thesis advisor) / Edwards, Michael C (Thesis advisor) / Tein, Jenn-Yun (Committee member) / Anderson, Samantha F. (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Researchers who conduct longitudinal studies are inherently interested in studying individual and population changes over time (e.g., mathematics achievement, subjective well-being). To answer such research questions, models of change (e.g., growth models) make the assumption of longitudinal measurement invariance. In many applied situations, key constructs are measured by a collection of ordered-categorical indicators (e.g., Likert scale items). To evaluate longitudinal measurement invariance with ordered-categorical indicators, a set of hierarchical models can be sequentially tested and compared. If the statistical tests of measurement invariance fail to be supported for one of the models, it is useful to have a method with which to gauge the practical significance of the differences in measurement model parameters over time. Drawing on studies of latent growth models and second-order latent growth models with continuous indicators (e.g., Kim & Willson, 2014a; 2014b; Leite, 2007; Wirth, 2008), this study examined the performance of a potential sensitivity analysis to gauge the practical significance of violations of longitudinal measurement invariance for ordered-categorical indicators using second-order latent growth models. The change in the estimate of the second-order growth parameters following the addition of an incorrect level of measurement invariance constraints at the first-order level was used as an effect size for measurement non-invariance. This study investigated how sensitive the proposed sensitivity analysis was to different locations of non-invariance (i.e., non-invariance in the factor loadings, the thresholds, and the unique factor variances) given a sufficient sample size. This study also examined whether the sensitivity of the proposed sensitivity analysis depended on a number of other factors including the magnitude of non-invariance, the number of non-invariant indicators, the number of non-invariant occasions, and the number of response categories in the indicators.
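One way to write the proposed effect size (notation mine, not taken from the dissertation) is the shift in a second-order growth parameter estimate when an incorrect level of first-order measurement invariance constraints is imposed:

```latex
% gamma-hat is a second-order growth parameter (e.g., the latent slope mean);
% the subscripts label the two fitted models being compared.
\[
  \Delta_{\hat{\gamma}}
    \;=\;
  \hat{\gamma}_{\text{incorrectly constrained}} - \hat{\gamma}_{\text{correct}}
\]
```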
Contributors: Liu, Yu, Ph.D (Author) / West, Stephen G. (Thesis advisor) / Tein, Jenn-Yun (Thesis advisor) / Green, Samuel (Committee member) / Grimm, Kevin J. (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Time metric is an important consideration for all longitudinal models because it can influence the interpretation of estimates, parameter estimate accuracy, and model convergence in longitudinal models with latent variables. Currently, the literature on latent difference score (LDS) models does not discuss the importance of time metric, and there is little simulation research investigating LDS models. This study examined the influence of time metric on model estimation, interpretation, parameter estimate accuracy, and convergence in LDS models using empirical simulations. Results indicated that for a time structure in which participants had different starting points and unequally spaced intervals, LDS models fit with a restructured, less informative time metric produced biased parameter estimates; however, models fit with the true time metric were less likely to converge than models using the restructured metric, likely because of missing data. Where participants had different starting points but equally spaced intervals, LDS models fit with a restructured time metric produced biased estimates of intercept means while all other parameter estimates were unbiased, and models fit with the true time metric again converged less often because of missing data. These findings support prior research on time metric in longitudinal models, and further research should examine them under alternative conditions. The importance of these findings for substantive researchers is discussed.
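For readers new to LDS models, a common dual change score specification (standard in this literature; notation illustrative) shows why the time metric matters — the latent differences are indexed by the occasions t, so restructuring the metric changes which intervals the differences span:

```latex
% y_{t,i}: score for person i at occasion t; g_i: constant change factor;
% beta: proportional change parameter.
\[
  y_{t,i} = y_{t-1,i} + \Delta y_{t,i},
  \qquad
  \Delta y_{t,i} = \alpha\, g_i + \beta\, y_{t-1,i}
\]
```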
Contributors: O'Rourke, Holly P (Author) / Grimm, Kevin J. (Thesis advisor) / Mackinnon, David P (Thesis advisor) / Chassin, Laurie (Committee member) / Aiken, Leona S. (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Time-to-event analysis, or equivalently survival analysis, deals with two variables simultaneously: when an event occurs (time information) and whether the event occurrence is observed during the observation period (censoring information). In behavioral and social sciences, the event of interest usually does not lead to a terminal state such as death; other outcomes can be collected after the event, so the survival variable can serve as a predictor as well as an outcome in a study. One example is a survival-mediator model. In a single survival-mediator model, an independent variable X predicts a survival variable M, which in turn predicts a continuous outcome Y. The survival-mediator model consists of two regression equations: X predicting M (the M-regression), and M and X simultaneously predicting Y (the Y-regression). To estimate the regression coefficients of the survival-mediator model, Cox regression is used for the M-regression. Ordinary least squares regression is used for the Y-regression with complete case analysis, under the assumption that censored values of M are missing completely at random, so that the Y-regression is unbiased. In this dissertation research, different measures of the indirect effect were proposed and a simulation study was conducted to compare the performance of different indirect effect test methods. Bias-corrected bootstrapping produced high Type I error rates as well as low parameter coverage rates in some conditions. In contrast, the Sobel test produced low Type I error rates as well as high parameter coverage rates in some conditions. The bootstrap of the natural indirect effect produced low Type I error and low statistical power when the censoring proportion was non-zero. Percentile bootstrapping, the distribution of the product, and the joint-significance test showed the best performance. Statistical analysis of the survival-mediator model is discussed, and two indirect effect measures, the ab-product and the natural indirect effect, are compared. Limitations and future directions of the simulation study are discussed. Last, an interpretation of the survival-mediator model for a made-up empirical data set is provided to clarify the meaning of the quantities in the model.
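The two regressions can be sketched with lifelines and statsmodels on invented data; only the ab-product is computed here, and everything (variable names, data-generating model) is illustrative rather than the dissertation's setup.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 500
x = rng.binomial(1, 0.5, n)
m = rng.exponential(np.exp(-0.5 * x))          # X raises the hazard of M
censor = rng.exponential(2.0, n)
t = np.minimum(m, censor)                      # observed time
e = (m <= censor).astype(int)                  # event indicator
y = 0.4 * m + 0.2 * x + rng.normal(size=n)     # continuous outcome

df = pd.DataFrame({"x": x, "t": t, "e": e, "m": m, "y": y})

# M-regression: Cox model of the survival mediator on X.
a = CoxPHFitter().fit(df[["x", "t", "e"]],
                      duration_col="t", event_col="e").params_["x"]

# Y-regression: OLS on complete (uncensored) cases, leaning on the
# MCAR-censoring assumption described above.
cc = df[df.e == 1]
b = sm.OLS(cc["y"], sm.add_constant(cc[["m", "x"]])).fit().params["m"]
print("ab-product:", a * b)
```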
Contributors: Kim, Han Joe (Author) / Mackinnon, David P. (Thesis advisor) / Tein, Jenn-Yun (Thesis advisor) / West, Stephen G. (Committee member) / Grimm, Kevin J. (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Collider effects pose a major problem in psychological research. Colliders are third variables that bias the relationship between an independent and dependent variable when (1) the composition of a research sample is restricted by the scores on a collider variable or (2) researchers adjust for a collider variable in their statistical analyses. Both cases interfere with the accuracy and generalizability of statistical results. Despite their importance, collider effects remain relatively unknown in the social sciences. This research introduces both the conceptual and the mathematical foundation for collider effects and demonstrates how to calculate a collider effect and test it for statistical significance. Simulation studies examined the efficiency and accuracy of the collider estimation methods and tested the viability of Thorndike’s Case III equation as a potential solution to correcting for collider bias in cases of biased sample selection.
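The core phenomenon is easy to reproduce; the snippet below shows both collider mechanisms named in the abstract (invented data, statsmodels for the regressions).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 10_000
x = rng.normal(size=n)
y = rng.normal(size=n)                  # independent of x by construction
c = x + y + rng.normal(size=n)          # collider: a common effect of x and y

# (2) Adjusting for the collider induces a spurious negative x-y association.
naive = sm.OLS(y, sm.add_constant(x)).fit().params[1]                        # ~ 0
adjusted = sm.OLS(y, sm.add_constant(np.column_stack([x, c]))).fit().params[1]

# (1) Restricting the sample by collider scores biases the slope the same way.
keep = c > np.median(c)
selected = sm.OLS(y[keep], sm.add_constant(x[keep])).fit().params[1]
print(naive, adjusted, selected)
```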
Contributors: Lamp, Sophia Josephine (Author) / Mackinnon, David P (Thesis advisor) / Anderson, Samantha F (Committee member) / Edwards, Michael C (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
This research explores tests for statistical suppression. Suppression is a statistical phenomenon whereby the magnitude of an effect becomes larger when another variable is added to the regression equation. From a causal perspective, suppression occurs when there is inconsistent mediation or negative confounding. Several estimators of suppression are evaluated conceptually and in a statistical simulation study in which suppression and non-suppression conditions were imposed. For each estimator without an existing standard error formula, one was derived in order to conduct significance tests and build confidence intervals. Overall, two of the estimators were biased and had poor coverage; a third worked well but had inflated Type I error rates when the population model was complete mediation. As a result of analyzing these three tests, a fourth was considered in the late stages of the project and showed promising results that address the concerns with the other tests. When the tests were applied to real data, they gave similar and consistent results.
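One natural estimator of suppression — hypothetical here, not necessarily one of the estimators studied — is the change in the X coefficient when the third variable enters the regression; the sketch below imposes an inconsistent-mediation condition on invented data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1000
x = rng.normal(size=n)
m = -0.6 * x + rng.normal(size=n)              # inconsistent mediation: a < 0
y = 0.5 * x + 0.6 * m + rng.normal(size=n)     # b > 0, direct effect > 0

total = sm.OLS(y, sm.add_constant(x)).fit().params[1]                           # ~ 0.14
partial = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[1]   # ~ 0.50

# The x effect grows in magnitude once m is added: suppression.
print(partial - total)
```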
Contributors: Muniz, Felix (Author) / Mackinnon, David P (Thesis advisor) / Anderson, Samantha F. (Committee member) / McNeish, Daniel M (Committee member) / Arizona State University (Publisher)
Created: 2020