Matching Items (7)
Description
Standardized intelligence tests are some of the most widely used tests by psychologists. Of these, clinicians most frequently use the Wechsler scales of intelligence. The most recent version of this test for children is the Wechsler Intelligence Scale for Children - Fourth Edition (WISC-IV); given the multiple test revisions that have occurred with the WISC, it is essential to address evidence regarding the structural validity of the test; specifically, whether the internal structure of the test corresponds with the structure of the theoretical construct being measured. The current study is the first to investigate the factor structure of the WISC-IV across time for the same individuals. Factorial invariance of the WISC-IV was investigated using a group of 352 students eligible for psychoeducational evaluations who were tested, on average, 2.8 years apart. One research question was addressed: Does the structure of the WISC-IV remain invariant for the same individuals across time? Using structural equation modeling methods for a four-factor oblique model of the WISC-IV, this study found invariance at the configural and weak levels and partial invariance at the strong and strict levels. This indicated that the overall factor structure remained the same at test and retest, with equal precision of the factor loadings at both time points. Three subtest intercepts (BD, CD, and SI) were not equivalent across test and retest; additionally, four subtest error variances (BD, CD, SI, and SS) were not equivalent across test and retest. These results indicate that the WISC-IV measures the same constructs equally well across time, and differences in an individual's cognitive profile can be safely interpreted as reflecting change in the underlying construct across time rather than variations in the test itself. This allows clinicians to be more confident in interpreting changes in an individual's overall cognitive profile across time. However, the results did not indicate that an individual's scores on the noninvariant subtests should be compared across time. Overall, it was concluded that there is partial measurement invariance of the WISC-IV across time, with invariance of all factor loadings, invariance of all but three intercepts, and invariance of all but four subtest error variances.
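The configural, weak, strong, and strict models described above are typically compared with chi-square difference (likelihood-ratio) tests between nested models. A minimal sketch of that comparison, using entirely hypothetical fit statistics (none of these numbers come from the study):

```python
from scipy.stats import chi2

def chisq_diff_test(chisq_restricted, df_restricted, chisq_free, df_free):
    """Chi-square difference test between nested invariance models.

    The more restricted model (e.g., strong invariance with equal
    intercepts) has the larger df; a significant result means the added
    equality constraints worsen fit, so full invariance is rejected.
    """
    delta_chisq = chisq_restricted - chisq_free
    delta_df = df_restricted - df_free
    p = chi2.sf(delta_chisq, delta_df)
    return delta_chisq, delta_df, p

# Hypothetical statistics for a weak (free intercepts) vs. strong
# (equal intercepts) model comparison -- illustrative values only.
d_chi, d_df, p = chisq_diff_test(chisq_restricted=312.4, df_restricted=210,
                                 chisq_free=280.1, df_free=200)
print(d_chi, d_df, p)  # reject full strong invariance if p < .05
```

When the omnibus test rejects, constraints are then released one parameter at a time, which is how the partially noninvariant intercepts and error variances above would be identified.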
Contributors: Richerson, Lindsay Patricia (Author) / Watkins, Marley W. (Thesis advisor) / Balles, John R (Thesis advisor) / Lynch, Christa S (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Investigation of measurement invariance (MI) commonly assumes correct specification of dimensionality across multiple groups. Although research shows that violation of the dimensionality assumption can cause bias in model parameter estimation for single-group analyses, little research on this issue has been conducted for multiple-group analyses. This study explored the effects of mismatch in dimensionality between data and analysis models in multiple-group analyses at the population and sample levels. Datasets were generated using a bifactor model with different factor structures and were analyzed with bifactor and single-factor models to assess the effects of misspecification on assessments of MI and latent mean differences. As baseline models, the bifactor models fit the data well and had minimal bias in latent mean estimation. However, the low convergence rates when fitting bifactor models to data with complex structures and small sample sizes were a concern. On the other hand, the effects of fitting the misspecified single-factor models on the assessments of MI and latent means differed by the bifactor structure underlying the data. For data following one general factor and one group factor affecting a small set of indicators, the effects of ignoring the group factor in the analysis model on tests of MI and latent mean differences were mild. In contrast, for data following one general factor and several group factors, oversimplification of the analysis model can lead to inaccurate conclusions regarding MI assessment and latent mean estimation.
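The data-generating setup described above can be sketched with a small bifactor simulation: each indicator loads on a general factor, and a subset also loads on an orthogonal group factor. All structure and loading values here are hypothetical choices for illustration, not the study's design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bifactor structure: 9 indicators, one general factor,
# one group factor loading only on the last 3 indicators.
n, p = 1000, 9
lam_g = np.full(p, 0.6)      # general-factor loadings
lam_s = np.zeros(p)
lam_s[6:] = 0.5              # group-factor loadings on a small subset
g = rng.standard_normal(n)   # general factor scores
s = rng.standard_normal(n)   # group factor scores, orthogonal to g
# residual SDs chosen so each indicator has unit total variance
e = rng.standard_normal((n, p)) * np.sqrt(1 - lam_g**2 - lam_s**2)

y = np.outer(g, lam_g) + np.outer(s, lam_s) + e

# Fitting a single-factor model to y would ignore lam_s; per the
# abstract, this is relatively harmless when the group factor touches
# few indicators, but not when several group factors are present.
print(y.shape, y.var(axis=0).round(1))
```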
Contributors: Xu, Yuning (Author) / Green, Samuel (Thesis advisor) / Levy, Roy (Committee member) / Thompson, Marilyn (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
To make meaningful comparisons on a construct of interest across groups or over time, measurement invariance needs to exist for at least a subset of the observed variables that define the construct. Often, chi-square difference tests are used to test for measurement invariance. However, these statistics are affected by sample size such that larger sample sizes are associated with a greater prevalence of significant tests. Thus, using other measures of non-invariance to aid in the decision process would be beneficial. For this dissertation project, I proposed four new effect size measures of measurement non-invariance and conducted a Monte Carlo simulation study to evaluate their properties and behavior, along with those of an existing effect size measure of non-invariance. The effect size measures were evaluated based on bias, variability, and consistency. Additionally, the factors that affected the values of the effect size measures were analyzed. All studied effect sizes were consistent, but three were biased under certain conditions. Further work is needed to establish benchmarks for the unbiased effect sizes.
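The abstract does not define its four proposed measures, but the general idea of an effect size for non-invariance can be illustrated with a generic standardized intercept difference (analogous in spirit to Cohen's d). This is an illustrative measure only, not one of the measures proposed in the dissertation:

```python
def intercept_noninvariance_es(tau_ref, tau_focal, pooled_sd):
    """Generic effect size for intercept non-invariance: the group
    difference in an item's estimated intercept, standardized by the
    item's pooled SD. Unlike a chi-square test, its magnitude does not
    grow with sample size. Illustrative only -- not a measure from
    this dissertation."""
    return (tau_focal - tau_ref) / pooled_sd

# Hypothetical intercept estimates for one item in two groups.
es = intercept_noninvariance_es(tau_ref=2.0, tau_focal=2.3, pooled_sd=1.2)
print(round(es, 3))  # 0.25
```

Evaluating such a measure in a Monte Carlo study, as described above, amounts to checking whether its average over replications matches the population value (bias), how much it fluctuates (variability), and whether it converges to the population value as n grows (consistency).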
Contributors: Gunn, Heather J (Author) / Grimm, Kevin J. (Thesis advisor) / Edwards, Michael C (Thesis advisor) / Tein, Jenn-Yun (Committee member) / Anderson, Samantha F. (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
A simulation study was conducted to explore the influence of partial loading invariance and partial intercept invariance on the latent mean comparison of the second-order factor within a higher-order confirmatory factor analysis (CFA) model. Noninvariant loadings or intercepts were generated at one or both levels of a second-order CFA model. The numbers and directions of differences in noninvariant loadings or intercepts were also manipulated, along with total sample size and effect size of the second-order factor mean difference. Data were analyzed using correct and incorrect specifications of noninvariant loadings and intercepts. Results summarized across the 5,000 replications in each condition included Type I error rates and powers for the chi-square difference test and the Wald test of the second-order factor mean difference, estimation bias and efficiency for this latent mean difference, and means of the standardized root mean square residual (SRMR) and the root mean square error of approximation (RMSEA).

When the model was correctly specified, no obvious estimation bias was observed; when the model was misspecified by constraining noninvariant loadings or intercepts to be equal, the latent mean difference was overestimated if the direction of the difference in loadings or intercepts was consistent with the direction of the latent mean difference, and underestimated otherwise. Increasing the number of noninvariant loadings or intercepts resulted in larger estimation bias if these noninvariant loadings or intercepts were constrained to be equal. Power to detect the latent mean difference was influenced by estimation bias and the estimated variance of the difference in the second-order factor mean, in addition to sample size and effect size. Constraining more parameters to be equal between groups (even when unequal in the population) led to a decrease in the variance of the estimated latent mean difference, which increased power somewhat. Finally, RMSEA was very sensitive in detecting misspecification due to improper equality constraints in all conditions, including those with a nonzero latent mean difference, but SRMR did not increase as expected when noninvariant parameters were constrained.
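The two fit indices contrasted above can be computed directly from standard quantities. Under one common convention, RMSEA is derived from the model chi-square, its df, and n, and SRMR is the root mean square of the residuals between the sample and model-implied correlation matrices over the unique elements. All input values below are hypothetical, not results from the simulation:

```python
import numpy as np

def rmsea(chisq, df, n):
    """Root mean square error of approximation (a common ML formula)."""
    return np.sqrt(max(chisq - df, 0) / (df * (n - 1)))

def srmr(sample_corr, implied_corr):
    """Standardized root mean square residual over the unique elements
    (lower triangle including the diagonal, in a correlation metric).
    Conventions vary slightly across software."""
    idx = np.tril_indices_from(sample_corr)
    resid = sample_corr[idx] - implied_corr[idx]
    return np.sqrt(np.mean(resid**2))

# Hypothetical fit statistics and 3-variable correlation matrices.
print(round(rmsea(chisq=180.0, df=60, n=500), 4))
S = np.eye(3); S[0, 1] = S[1, 0] = 0.5        # sample correlations
Sigma = np.eye(3); Sigma[0, 1] = Sigma[1, 0] = 0.4  # model-implied
print(round(srmr(S, Sigma), 4))
```

This construction makes the reported contrast plausible: RMSEA reacts to chi-square misfit produced by improper equality constraints, while SRMR only moves when the constraints distort the implied covariances enough to leave visible residuals.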
Contributors: Liu, Yixing (Author) / Thompson, Marilyn (Thesis advisor) / Green, Samuel (Committee member) / Levy, Roy (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
In the past, it has been assumed that measurement and predictive invariance are consistent, so that if one form of invariance holds the other should also hold. However, studies have shown that the two forms of invariance hold simultaneously only under certain conditions, such as factorial invariance and invariance in the common factor variances. The present research examined Type I error rates and the statistical power of a method that detects violations of the factorial invariant model in the presence of group differences in regression intercepts, under different sample sizes and numbers of predictors (one or two). Data were simulated under two models: model A allowed differences only in the factor means, while model B violated invariance. A factorial invariant model was fitted to the data. Type I error was defined as the proportion of samples in which the hypothesis of invariance was incorrectly rejected, and statistical power as the proportion of samples in which the hypothesis of factorial invariance was correctly rejected. In the one-predictor case, the results showed that the chi-square statistic has low power to detect violations of the model. Unexpected and systematic results were obtained regarding negative unique variance in the predictor; it is proposed that negative unique variance in the predictor can be used as an indication of measurement bias, instead of the chi-square fit statistic, with sample sizes of 500 or more. The two-predictor case showed greater power. In both cases, Type I error rates were as expected. The implications of the results and some suggestions for increasing the power of the method are provided.
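The rejection-rate definitions used above are standard Monte Carlo quantities: simulate many samples, test each, and count rejections. A minimal sketch with a plain t-test standing in for the invariance test (the model, sample sizes, and effect size are hypothetical, not the study's design):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

def rejection_rate(delta, n_per_group=100, reps=2000, alpha=0.05):
    """Proportion of replications rejecting H0. With delta = 0 the data
    satisfy H0, so this estimates the Type I error rate; with delta > 0
    it estimates power. A t-test stands in for the chi-square test of
    factorial invariance described in the abstract."""
    rejections = 0
    for _ in range(reps):
        a = rng.standard_normal(n_per_group)
        b = rng.standard_normal(n_per_group) + delta
        if ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    return rejections / reps

print(rejection_rate(0.0))   # ~alpha when H0 is true (Type I error)
print(rejection_rate(0.5))   # power: well above alpha for this effect
```

A well-calibrated test keeps the first rate near alpha; the "low power" finding above corresponds to the second rate staying far below 1 even for meaningful violations.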
Contributors: Aguilar, Margarita Olivera (Author) / Millsap, Roger E. (Thesis advisor) / Aiken, Leona S. (Committee member) / Enders, Craig K. (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
Quality in early childhood education (ECE) is central to equitable child development and preparation for formal schooling and has been widely studied by researchers and of interest to policy makers. As the federal pre-k program, Head Start is a key ECE context in which to understand quality and its implications for equity. One central measure of classroom quality, the Classroom Assessment Scoring System (CLASS), is used in policy-making and funding decisions to study the impact of quality on children's school readiness. The CLASS is a measure of teacher-child interactional quality, but measurement invariance across teacher race/ethnicity has yet to be examined for this measure in the published literature. Additionally, patterns of classroom quality and the sociocultural context of classrooms as predictors of children's social skills and approaches to learning have yet to be examined. Using anti-racist early childhood education theory and a nationally representative Head Start sample, the Family and Child Experiences Survey 2009 cohort, I conducted two studies to address these gaps. In the first study, I investigated the measurement invariance of the CLASS across teacher race/ethnicity (Black, Latine, White). I found evidence of partial strong invariance, with only one noninvariant parameter for Black teachers, suggesting that means may be compared across teacher race/ethnicity. However, the implications of these findings must be interpreted through an equity lens, and quality measures should work to include equity indicators explicitly. In the second study, I examined patterns of classroom quality indicated by the CLASS and (1) dual language learner (DLL) composition and (2) child demographics and teacher-child demographic match as predictors of school readiness outcomes. I found evidence of three profiles of classroom quality; DLL composition did not significantly predict profile membership. Further, the profile with higher levels of negative climate and moderate emotional support and classroom organization negatively predicted children's social skills and approaches to learning. Applying anti-racist ECE theory, these studies suggest that the CLASS does not sufficiently address equity in ECE but may be used with Black, Latine, and White teachers, and that low quality should be addressed through intervention to prevent negative outcomes for children.
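Quality profiles like those described above are usually extracted with latent profile (finite mixture) models. As a much simpler stand-in, plain k-means clustering conveys the idea of grouping classrooms by their scores on quality domains. Everything here is hypothetical: the domain scores, the cluster centers, and the use of k-means in place of a mixture model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical classroom scores on three CLASS-like domains
# (emotional support, classroom organization, negative climate),
# simulated around three made-up quality profiles.
centers = np.array([[6.0, 5.5, 1.0],   # higher quality, low negativity
                    [4.5, 4.0, 1.5],   # moderate quality
                    [4.0, 3.5, 2.5]])  # moderate with more negativity
X = np.vstack([c + 0.3 * rng.standard_normal((50, 3)) for c in centers])

def kmeans(X, k, iters=50):
    """Plain k-means: a simplified stand-in for the latent profile
    (mixture) models typically used to extract quality profiles."""
    mu = X[rng.choice(len(X), k, replace=False)]  # init at data points
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - mu) ** 2).sum(-1), axis=1)
        mu = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, mu

labels, mu = kmeans(X, 3)
print(np.bincount(labels, minlength=3))
```

A true latent profile analysis would additionally estimate class proportions and within-profile variances and allow probabilistic membership, which is what supports testing predictors of profile membership such as DLL composition.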
Contributors: Alexander, Brittany L. (Author) / Yoo, Hyung C (Thesis advisor) / Meek, Shantel (Thesis advisor) / Edyburn, Kelly (Committee member) / Herrera, Manuela J. (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Measurement invariance exists when a scale functions equivalently across people and is therefore essential for making meaningful group comparisons. Often, measurement invariance is examined with independent and identically distributed data; however, there are times when the participants are clustered within units, creating dependency in the data. Researchers have taken different approaches to address this dependency when studying measurement invariance (e.g., Kim, Kwok, & Yoon, 2012; Ryu, 2014; Kim, Yoon, Wen, Luo, & Kwok, 2015), but there are no comparisons of the various approaches. The purpose of this master's thesis was to investigate measurement invariance in multilevel data when the grouping variable was a level-1 variable using five different approaches. Publicly available data from the Early Childhood Longitudinal Study-Kindergarten Cohort (ECLS-K) was used as an illustrative example. The construct of early behavior, which was made up of four teacher-rated behavior scales, was evaluated for measurement invariance in relation to gender. In the specific case of this illustrative example, the statistical conclusions of the five approaches were in agreement (i.e., the loading of the externalizing item and the intercept of the approaches to learning item were not invariant). Simulation work should be done to investigate in which situations the conclusions of these approaches diverge.
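The dependency this abstract is concerned with can be made concrete by simulating clustered data and estimating the intraclass correlation (ICC), the share of variance attributable to clusters. The cluster counts and variance components below are hypothetical, chosen only to illustrate the calculation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical clustered data: students nested within classrooms, as
# in ECLS-K. The shared cluster effect u creates the within-cluster
# dependency that naive single-level invariance analyses ignore.
n_clusters, n_per = 100, 20
u = rng.standard_normal(n_clusters) * np.sqrt(0.2)        # between-cluster
e = rng.standard_normal((n_clusters, n_per)) * np.sqrt(0.8)  # within
y = u[:, None] + e  # total variance ~ 0.2 + 0.8 = 1

# ANOVA-style variance-component estimates and the ICC:
within = y.var(axis=1, ddof=1).mean()
between = y.mean(axis=1).var(ddof=1) - within / n_per
icc = between / (between + within)
print(round(icc, 3))  # should land near the true 0.2 / (0.2 + 0.8)
```

A nonzero ICC is what motivates the multilevel approaches compared in the thesis; with a level-1 grouping variable like gender, both members of a comparison group can sit inside the same cluster, so the group samples are not independent.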
Contributors: Gunn, Heather (Author) / Grimm, Kevin J. (Thesis advisor) / Aiken, Leona S. (Committee member) / Suk, Hye Won (Committee member) / Arizona State University (Publisher)
Created: 2016