Matching Items (4)
Description
To make meaningful comparisons on a construct of interest across groups or over time, measurement invariance needs to hold for at least a subset of the observed variables that define the construct. Chi-square difference tests are often used to test for measurement invariance; however, these statistics are affected by sample size, such that larger samples yield a greater prevalence of significant tests. Thus, other measures of non-invariance would be useful aids to the decision process. For this dissertation project, I proposed four new effect size measures of measurement non-invariance and conducted a Monte Carlo simulation study to evaluate their properties and behavior, along with those of an existing effect size measure of non-invariance. The effect size measures were evaluated based on bias, variability, and consistency, and the factors that affected their values were analyzed. All studied effect sizes were consistent, but three were biased under certain conditions. Further work is needed to establish benchmarks for the unbiased effect sizes.
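The chi-square difference test mentioned in the abstract compares a restricted (invariant) model against a freer model. A minimal Python sketch, using hypothetical fit statistics rather than values from the dissertation, looks like:

```python
# Hedged sketch of a chi-square difference test for nested invariance models.
# The fit statistics below are hypothetical, not taken from the study.
from scipy.stats import chi2

def chisq_diff_test(chisq_restricted, df_restricted, chisq_free, df_free):
    """Return the chi-square difference statistic, its df, and p-value."""
    d_chisq = chisq_restricted - chisq_free  # restricted model fits no better
    d_df = df_restricted - df_free
    p = chi2.sf(d_chisq, d_df)
    return d_chisq, d_df, p

# Hypothetical values: configural (free) vs. metric (restricted) model.
# Note the sample-size sensitivity the abstract describes: the model
# chi-square grows roughly with (N - 1) * minimized misfit, so the same
# degree of non-invariance yields larger statistics at larger N.
stat, df, p = chisq_diff_test(chisq_restricted=58.4, df_restricted=40,
                              chisq_free=45.1, df_free=34)
```

Here the difference statistic is referred to a chi-square distribution with 6 degrees of freedom; a significant result would reject the more restricted (invariant) specification.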
Contributors: Gunn, Heather J (Author) / Grimm, Kevin J. (Thesis advisor) / Edwards, Michael C (Thesis advisor) / Tein, Jenn-Yun (Committee member) / Anderson, Samantha F. (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Researchers who conduct longitudinal studies are inherently interested in studying individual and population changes over time (e.g., mathematics achievement, subjective well-being). To answer such research questions, models of change (e.g., growth models) make the assumption of longitudinal measurement invariance. In many applied situations, key constructs are measured by a collection of ordered-categorical indicators (e.g., Likert scale items). To evaluate longitudinal measurement invariance with ordered-categorical indicators, a set of hierarchical models can be sequentially tested and compared. If the statistical tests of measurement invariance fail to be supported for one of the models, it is useful to have a method with which to gauge the practical significance of the differences in measurement model parameters over time. Drawing on studies of latent growth models and second-order latent growth models with continuous indicators (e.g., Kim & Willson, 2014a; 2014b; Leite, 2007; Wirth, 2008), this study examined the performance of a potential sensitivity analysis to gauge the practical significance of violations of longitudinal measurement invariance for ordered-categorical indicators using second-order latent growth models. The change in the estimate of the second-order growth parameters following the addition of an incorrect level of measurement invariance constraints at the first-order level was used as an effect size for measurement non-invariance. This study investigated how sensitive the proposed sensitivity analysis was to different locations of non-invariance (i.e., non-invariance in the factor loadings, the thresholds, and the unique factor variances) given a sufficient sample size. 
This study also examined whether the sensitivity of the proposed analysis depended on a number of other factors, including the magnitude of non-invariance, the number of non-invariant indicators, the number of non-invariant occasions, and the number of response categories in the indicators.
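The effect size described above, the change in a second-order growth parameter estimate after imposing an incorrect level of first-order invariance constraints, can be sketched roughly as follows. The standardization shown and all numbers are illustrative assumptions, not the study's actual computation or results:

```python
# Hedged sketch: quantify non-invariance as the change in a second-order
# growth parameter (e.g., mean slope) when incorrect first-order invariance
# constraints are added. The standardization by the correct model's SD is
# an assumption for illustration; all values are hypothetical.
def growth_param_effect_size(est_correct, est_constrained, sd_correct):
    """Standardized change in a growth parameter across specifications."""
    return (est_constrained - est_correct) / sd_correct

# Hypothetical mean-slope estimates from the correctly specified model
# versus the model with incorrect invariance constraints.
es = growth_param_effect_size(est_correct=0.50, est_constrained=0.38,
                              sd_correct=0.20)
```

In practice the two estimates would come from fitting nested second-order latent growth models (e.g., in lavaan or Mplus) with and without the questionable constraints, and a large standardized change would flag practically significant non-invariance.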
Contributors: Liu, Yu, Ph.D (Author) / West, Stephen G. (Thesis advisor) / Tein, Jenn-Yun (Thesis advisor) / Green, Samuel (Committee member) / Grimm, Kevin J. (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Latent profile analysis (LPA), a type of finite mixture model, has grown in popularity due to its ability to detect latent classes, or unobserved subgroups, within a sample. Though numerous methods exist to determine the correct number of classes, past research has repeatedly demonstrated that no one method is consistently the best, as each tends to struggle under specific conditions. Recently, the likelihood incremental percentage per parameter (LI3P), a method using a new approach, was proposed and tested, yielding promising initial results. To evaluate this new method more thoroughly, this study simulated 50,000 datasets, manipulating factors such as sample size, class distance, number of items, and number of classes. After its performance was evaluated on the simulated data, the LI3P was applied to LPA models fit to an empirical dataset to illustrate the method's application. Results indicate the LI3P performs in line with standard class enumeration techniques and primarily reflects class separation and the number of classes.
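The dissertation defines the exact LI3P formula; a plausible reading of the name, a percentage likelihood increment per additional free parameter when moving from a k-class to a (k+1)-class solution, can be sketched as below. Both the formula and the log-likelihood values are assumptions for illustration only, not the published measure:

```python
# Hedged sketch of a "likelihood increment per parameter" index for class
# enumeration. The exact LI3P formula is defined in the dissertation; this
# computation is an illustrative assumption, and all values are hypothetical.
def incremental_pct_per_param(ll_k, ll_k1, n_params_k, n_params_k1):
    """Percent log-likelihood improvement per additional free parameter
    when adding one more latent class."""
    pct_gain = 100.0 * (ll_k1 - ll_k) / abs(ll_k)
    return pct_gain / (n_params_k1 - n_params_k)

# Hypothetical log-likelihoods and parameter counts for 2- vs. 3-class LPA
index = incremental_pct_per_param(ll_k=-5250.0, ll_k1=-5100.0,
                                  n_params_k=13, n_params_k1=20)
```

The intuition is that adding a class always improves the likelihood, so an enumeration index should penalize that improvement by the number of parameters spent to obtain it; a small per-parameter gain suggests the extra class is not worth retaining.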
Contributors: Houpt, Russell Paul (Author) / Grimm, Kevin J (Thesis advisor) / McNeish, Daniel (Committee member) / Edwards, Michael C (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Psychologists report effect sizes in randomized controlled trials to facilitate interpretation and inform clinical or policy guidance. Because commonly used effect size measures (e.g., the standardized mean difference) are not sensitive to heterogeneous treatment effects, methodologists have suggested an alternative effect size, δ, a between-subjects causal parameter describing the probability that the outcome of a random participant in the treatment group is better than the outcome of another random participant in the control group. Although this effect size is useful, researchers could mistakenly use δ to describe its within-subject analogue, ψ, the probability that an individual will do better under the treatment than the control. Hand's paradox describes the situation where ψ and δ fall on opposing sides of 0.5: δ may imply most are helped, whereas the (unknown) underlying ψ indicates that most are harmed by the treatment. The current study used Monte Carlo simulations to investigate plausible situations under which Hand's paradox does and does not occur, tracked the magnitude of the discrepancy between ψ and δ, and explored whether the size of the discrepancy could be reduced with a relevant covariate. The findings suggested that although the paradox should not occur under bivariate normal data conditions in the population, it could still arise in individual samples. The magnitude of the discrepancy between ψ and δ depended on both the size of the average treatment effect and the underlying correlation between the potential outcomes, ρ. Smaller effects led to larger discrepancies when ρ < 0 and when ρ = 1, whereas larger effects led to larger discrepancies when 0 < ρ < 1. It was useful to consider a relevant covariate when calculating ψ and δ: although the two remained discrepant within covariate levels, results indicated that conditioning on relevant covariates is still useful for describing heterogeneous treatment effects.
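The distinction between the two probabilities can be made concrete with a small Monte Carlo sketch in the bivariate normal setting the abstract describes. The effect size (a 0.3 SD mean shift) and the correlation ρ between potential outcomes are illustrative choices, not values from the study:

```python
# Hedged sketch: estimate psi = P(Y1 > Y0) for the same person
# (within-subject) and delta = P(Y1_i > Y0_j) for two independent people
# (between-subjects) from simulated bivariate-normal potential outcomes.
# The mean shift (0.3 SD) and rho = -0.5 are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
rho = -0.5                       # correlation between potential outcomes
mean = [0.3, 0.0]                # treatment shifts the mean by 0.3 SD
cov = [[1.0, rho], [rho, 1.0]]
y1, y0 = rng.multivariate_normal(mean, cov, size=n).T

psi = np.mean(y1 > y0)                     # within-subject probability
delta = np.mean(y1 > rng.permutation(y0))  # between-subjects probability
```

Under bivariate normality both probabilities land on the same side of 0.5 (here both just above it), consistent with the abstract's finding that the paradox should not occur in the population under these conditions; ψ and δ nonetheless differ because the within-subject difference Y1 - Y0 has a variance that depends on ρ.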
Contributors: Liu, Xinran (Author) / Anderson, Samantha F (Thesis advisor) / McNeish, Daniel (Committee member) / MacKinnon, David (Committee member) / Arizona State University (Publisher)
Created: 2023