Matching Items (6)

The impact of partial measurement invariance on between-group comparisons of latent means for a second-order factor

Description

A simulation study was conducted to explore the influence of partial loading invariance and partial intercept invariance on the latent mean comparison of the second-order factor within a higher-order confirmatory factor analysis (CFA) model. Noninvariant loadings or intercepts were generated at one or both levels of the second-order CFA model. The numbers and directions of differences in noninvariant loadings or intercepts were also manipulated, along with total sample size and the effect size of the second-order factor mean difference. Data were analyzed using correct and incorrect specifications of noninvariant loadings and intercepts. Results summarized across the 5,000 replications in each condition included Type I error rates and power for the chi-square difference test and the Wald test of the second-order factor mean difference, estimation bias and efficiency for this latent mean difference, and means of the standardized root mean square residual (SRMR) and the root mean square error of approximation (RMSEA).

When the model was correctly specified, no obvious estimation bias was observed; when the model was misspecified by constraining noninvariant loadings or intercepts to be equal, the latent mean difference was overestimated if the direction of the difference in loadings or intercepts was consistent with the direction of the latent mean difference, and underestimated otherwise. Increasing the number of noninvariant loadings or intercepts resulted in larger estimation bias if these noninvariant loadings or intercepts were constrained to be equal. Power to detect the latent mean difference was influenced by estimation bias and the estimated variance of the difference in the second-order factor mean, in addition to sample size and effect size. Constraining more parameters to be equal between groups (even when unequal in the population) led to a decrease in the variance of the estimated latent mean difference, which increased power somewhat. Finally, RMSEA was very sensitive in detecting misspecification due to improper equality constraints in all conditions, including those with a nonzero latent mean difference, but SRMR did not increase as expected when noninvariant parameters were constrained.
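
The chi-square difference test and RMSEA mentioned above follow standard formulas. A minimal sketch, using made-up model chi-squares, degrees of freedom, and sample size rather than values from the study:

```python
import math
from scipy.stats import chi2

def chi_square_difference(chi2_constrained, df_constrained,
                          chi2_free, df_free, alpha=0.05):
    """Likelihood-ratio (chi-square difference) test between nested models.

    The constrained model (with equality constraints across groups) is nested
    in the freely estimated model, so under the null of invariance the
    difference in chi-squares is itself chi-square distributed.
    """
    diff = chi2_constrained - chi2_free
    ddf = df_constrained - df_free
    p = chi2.sf(diff, ddf)
    return diff, ddf, p, p < alpha

def rmsea(chi2_stat, df, n):
    """Root mean square error of approximation for a single-group model."""
    return math.sqrt(max(chi2_stat - df, 0.0) / (df * (n - 1)))

# Illustrative numbers: constrained model chi2 = 140.0 on 60 df,
# freely estimated model chi2 = 120.0 on 55 df, n = 500.
diff, ddf, p, reject = chi_square_difference(140.0, 60, 120.0, 55)
fit = rmsea(140.0, 60, n=500)
```

Multi-group software often rescales RMSEA by the number of groups; the single-group form above is the common textbook version.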

Date Created
  • 2016

Approaches to studying measurement invariance in multilevel data with a level-1 grouping variable

Description

Measurement invariance exists when a scale functions equivalently across people and is therefore essential for making meaningful group comparisons. Often, measurement invariance is examined with independent and identically distributed data; however, there are times when the participants are clustered within units, creating dependency in the data. Researchers have taken different approaches to address this dependency when studying measurement invariance (e.g., Kim, Kwok, & Yoon, 2012; Ryu, 2014; Kim, Yoon, Wen, Luo, & Kwok, 2015), but there are no comparisons of the various approaches. The purpose of this master's thesis was to investigate measurement invariance in multilevel data when the grouping variable was a level-1 variable, using five different approaches. Publicly available data from the Early Childhood Longitudinal Study-Kindergarten Cohort (ECLS-K) were used as an illustrative example. The construct of early behavior, which was made up of four teacher-rated behavior scales, was evaluated for measurement invariance in relation to gender. In the specific case of this illustrative example, the statistical conclusions of the five approaches were in agreement (i.e., the loading of the externalizing item and the intercept of the approaches to learning item were not invariant). Simulation work should be done to investigate in which situations the conclusions of these approaches diverge.
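
The dependency created by clustering can be quantified with the standard Kish design effect, deff = 1 + (m - 1) * ICC, where m is the average cluster size and ICC is the intraclass correlation. A small sketch with illustrative numbers (not values from the ECLS-K example):

```python
def design_effect(cluster_size, icc):
    """Kish design effect: inflation of sampling variance due to clustering."""
    return 1.0 + (cluster_size - 1) * icc

def effective_sample_size(n_total, cluster_size, icc):
    """Number of independent observations the clustered sample is 'worth'."""
    return n_total / design_effect(cluster_size, icc)

# Twenty children per classroom with a modest ICC of .10 already cuts the
# effective sample size to roughly a third of the nominal n.
deff = design_effect(20, 0.10)
n_eff = effective_sample_size(1000, 20, 0.10)
```

This is why treating clustered observations as independent understates standard errors, and why the five approaches compared in the thesis handle the dependency explicitly.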

Date Created
  • 2016

A study of statistical power and type I errors in testing a factor analytic model for group differences in regression intercepts

Description

In the past, it has been assumed that measurement and predictive invariance are consistent, so that if one form of invariance holds the other should also hold. However, some studies have shown that both forms of invariance hold jointly only under certain conditions, such as factorial invariance and invariance in the common factor variances. The present research examined Type I errors and the statistical power of a method that detects violations of the factorial invariant model in the presence of group differences in regression intercepts, under different sample sizes and numbers of predictors (one or two). Data were simulated under two models: in model A only differences in the factor means were allowed, while model B violated invariance. A factorial invariant model was fitted to the data. Type I error was defined as the proportion of samples in which the hypothesis of invariance was incorrectly rejected, and statistical power was defined as the proportion of samples in which the hypothesis of factorial invariance was correctly rejected. In the one-predictor case, the results show that the chi-square statistic has low power to detect violations of the model. Unexpected and systematic results were obtained regarding negative unique variance in the predictor; it is proposed that negative unique variance in the predictor can be used as an indication of measurement bias instead of the chi-square fit statistic with sample sizes of 500 or more. The two-predictor case showed greater power. In both cases, Type I error rates were as expected. The implications of the results and some suggestions for increasing the power of the method are provided.
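
The Type I error and power definitions above are rejection proportions over replications. Under standard asymptotics, the fit statistic is central chi-square when the fitted model holds and approximately noncentral chi-square when it is misspecified; a sketch with made-up degrees of freedom and noncentrality, not the study's design:

```python
import numpy as np
from scipy.stats import chi2, ncx2

rng = np.random.default_rng(2010)
n_reps, df, alpha = 5000, 10, 0.05
crit = chi2.ppf(1 - alpha, df)

# Model A: invariance holds, so the fit statistic is central chi-square;
# the rejection rate estimates the Type I error.
stats_null = rng.chisquare(df, size=n_reps)
type_i_error = np.mean(stats_null > crit)

# Model B: invariance is violated; the statistic is approximately noncentral
# chi-square, with noncentrality reflecting the size of the misspecification.
stats_alt = ncx2.rvs(df, nc=15.0, size=n_reps, random_state=rng)
power = np.mean(stats_alt > crit)
```

With the nominal alpha of .05, the null rejection rate should hover near .05, and the rejection rate under the violated model estimates power for that degree of misspecification.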

Date Created
  • 2010

Longitudinal factor structure of the Wechsler Intelligence Scale for Children-Fourth Edition in a referred sample

Description

Standardized intelligence tests are some of the most widely used tests by psychologists. Of these, clinicians most frequently use the Wechsler scales of intelligence. The most recent version of this test for children is the Wechsler Intelligence Scale for Children - Fourth Edition (WISC-IV); given the multiple test revisions that have occurred with the WISC, it is essential to address evidence regarding the structural validity of the test; specifically, whether the internal structure of the test corresponds with the structure of the theoretical construct being measured. The current study is the first to investigate the factor structure of the WISC-IV across time for the same individuals. Factorial invariance of the WISC-IV was investigated using a group of 352 students eligible for psychoeducational evaluations, tested on average 2.8 years apart. One research question was addressed: Does the structure of the WISC-IV remain invariant for the same individuals across time? Using structural equation modeling methods with a four-factor oblique model of the WISC-IV, this study found invariance at the configural and weak levels and partial invariance at the strong and strict levels. This indicated that the overall factor structure remained the same at test and retest, with equal factor loadings at both time points. Three subtest intercepts (BD, CD, and SI) were not equivalent across test and retest; additionally, four subtest error variances (BD, CD, SI, and SS) were not equivalent across test and retest. These results indicate that the WISC-IV measures the same constructs equally well across time, and differences in an individual's cognitive profile can be interpreted as reflecting change in the underlying constructs across time rather than variations in the test itself. This allows clinicians to be more confident in interpreting changes in the overall cognitive profile of individuals across time.
However, this study's results did not indicate that an individual's test scores should be compared across time. Overall, it was concluded that there is partial measurement invariance of the WISC-IV across time, with invariance of all factor loadings, invariance of all but three intercepts, and invariance of all but four item error variances.
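
The configural/weak/strong/strict sequence above is a fixed hierarchy of nested equality constraints, tested forward until a step fails. A small sketch of that stepping logic (the helper function and its name are illustrative, not part of the study):

```python
# Each level adds equality constraints on top of the previous one; freeing a
# few offending parameters at a failed step yields "partial" invariance.
INVARIANCE_LEVELS = [
    ("configural", "same factor structure at test and retest"),
    ("weak",       "equal factor loadings"),
    ("strong",     "equal loadings and intercepts"),
    ("strict",     "equal loadings, intercepts, and error variances"),
]

def highest_level_achieved(passed):
    """Return the last level in the sequence whose test was passed.

    `passed` maps level name -> bool; stepping stops at the first failure,
    mirroring the usual forward testing of nested invariance models.
    """
    achieved = None
    for name, _ in INVARIANCE_LEVELS:
        if passed.get(name, False):
            achieved = name
        else:
            break
    return achieved

# The pattern reported above: full invariance through the weak level, with
# only partial invariance at the strong and strict levels.
result = highest_level_achieved(
    {"configural": True, "weak": True, "strong": False, "strict": False})
```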

Date Created
  • 2012

Assessing measurement invariance and latent mean differences with bifactor multidimensional data in structural equation modeling

Description

Investigation of measurement invariance (MI) commonly assumes correct specification of dimensionality across multiple groups. Although research shows that violation of the dimensionality assumption can cause bias in model parameter estimation for single-group analyses, little research on this issue has been conducted for multiple-group analyses. This study explored the effects of mismatch in dimensionality between data and analysis models in multiple-group analyses at the population and sample levels. Datasets were generated using a bifactor model with different factor structures and were analyzed with bifactor and single-factor models to assess the effects of misspecification on assessments of MI and latent mean differences. As baseline models, the bifactor models fit the data well and showed minimal bias in latent mean estimation; however, the low convergence rates when fitting bifactor models to data with complex structures and small sample sizes were a concern. The effects of fitting the misspecified single-factor models on the assessments of MI and latent means differed by the bifactor structure underlying the data. For data following one general factor and one group factor affecting a small set of indicators, the effects of ignoring the group factor in the analysis model on tests of MI and latent mean differences were mild. In contrast, for data following one general factor and several group factors, oversimplifying the analysis model led to inaccurate conclusions regarding MI assessment and latent mean estimation.
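
A bifactor population covariance matrix of the kind used to generate such data can be built directly from the loading matrix. The structure below (nine indicators, one group factor on the last three, loadings of .6 and .5) is purely illustrative and is not the design of this study:

```python
import numpy as np

# Hypothetical bifactor structure: 9 indicators, one general factor plus one
# group factor on the last 3 indicators; all values are illustrative only.
general = np.full(9, 0.6)
group = np.zeros(9)
group[6:] = 0.5
Lambda = np.column_stack([general, group])      # 9 x 2 loading matrix

# Orthogonal factors, as is standard for bifactor models.
Phi = np.eye(2)
uniqueness = 1.0 - (Lambda ** 2).sum(axis=1)    # keep indicator variances at 1
Theta = np.diag(uniqueness)

Sigma = Lambda @ Phi @ Lambda.T + Theta         # model-implied covariance

# Fitting a single-factor model to data generated from Sigma ignores the
# group factor -- the kind of misspecification studied above.
rng = np.random.default_rng(2018)
data = rng.multivariate_normal(np.zeros(9), Sigma, size=500)
```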

Date Created
  • 2018

Evaluation of five effect size measures of measurement non-invariance for continuous outcomes

Description

To make meaningful comparisons on a construct of interest across groups or over time, measurement invariance needs to exist for at least a subset of the observed variables that define the construct. Often, chi-square difference tests are used to test for measurement invariance; however, these statistics are affected by sample size, such that larger sample sizes are associated with a greater prevalence of significant tests. Thus, other measures of non-invariance would be beneficial to aid the decision process. For this dissertation project, I proposed four new effect size measures of measurement non-invariance and conducted a Monte Carlo simulation study to evaluate their properties and behavior, along with those of an already existing effect size measure of non-invariance. The effect size measures were evaluated based on bias, variability, and consistency, and the factors that affected their values were analyzed. All studied effect sizes were consistent, but three were biased under certain conditions. Further work is needed to establish benchmarks for the unbiased effect sizes.
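
Effect size measures of non-invariance generally express the difference between two groups' item response functions on a standardized metric. The function below is an illustrative sketch of that general logic only; it is not one of the five measures evaluated in the dissertation, and all parameter values are made up:

```python
import numpy as np

def noninvariance_effect_size(load_ref, int_ref, load_foc, int_foc,
                              factor_mean, factor_sd, pooled_item_sd):
    """Illustrative effect size for non-invariance of a single continuous item.

    Averages the difference between the two groups' expected item scores over
    a normal factor distribution and scales it by the pooled item standard
    deviation. Returns a signed and an unsigned version.
    """
    # Grid of factor values with normal-density weights.
    eta = np.linspace(factor_mean - 4 * factor_sd,
                      factor_mean + 4 * factor_sd, 401)
    w = np.exp(-0.5 * ((eta - factor_mean) / factor_sd) ** 2)
    w /= w.sum()
    # Difference in expected item score between groups at each factor value.
    diff = (int_ref + load_ref * eta) - (int_foc + load_foc * eta)
    signed = np.sum(w * diff) / pooled_item_sd
    unsigned = np.sum(w * np.abs(diff)) / pooled_item_sd
    return signed, unsigned

# Made-up parameters: a loading difference of .2 and an intercept
# difference of .1 between the reference and focal groups.
signed, unsigned = noninvariance_effect_size(
    load_ref=0.8, int_ref=0.0, load_foc=0.6, int_foc=0.1,
    factor_mean=0.0, factor_sd=1.0, pooled_item_sd=1.0)
```

The unsigned version cannot be smaller than the absolute signed version; opposite-direction differences that cancel in the signed average still register in the unsigned one.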

Date Created
  • 2019