Matching Items (4)
Description
Standardized intelligence tests are among the tests most widely used by psychologists. Of these, clinicians most frequently use the Wechsler scales of intelligence. The most recent version of this test for children is the Wechsler Intelligence Scale for Children - Fourth Edition (WISC-IV). Given the multiple revisions of the WISC, it is essential to address evidence regarding the structural validity of the test; specifically, whether the internal structure of the test corresponds with the structure of the theoretical construct being measured. The current study is the first to investigate the factor structure of the WISC-IV across time for the same individuals. Factorial invariance of the WISC-IV was investigated using a group of 352 students eligible for psychoeducational evaluations who were tested, on average, 2.8 years apart. One research question was addressed: Does the structure of the WISC-IV remain invariant for the same individuals across time? Using structural equation modeling methods for a four-factor oblique model of the WISC-IV, this study found invariance at the configural and weak levels and partial invariance at the strong and strict levels, indicating that the overall factor structure remained the same at test and retest, with equal factor loadings at both time points. Three subtest intercepts (BD, CD, and SI) were not equivalent across test and retest; additionally, four subtest error variances (BD, CD, SI, and SS) were not equivalent across test and retest. These results indicate that the WISC-IV measures the same constructs equally well across time, and that differences in an individual's cognitive profile can be safely interpreted as reflecting change in the underlying construct across time rather than variations in the test itself. This allows clinicians to be more confident in interpreting changes in the overall cognitive profiles of individuals across time. However, this study's results did not indicate that an individual's test scores should be compared across time. Overall, it was concluded that there is partial measurement invariance of the WISC-IV across time, with invariance of all factor loadings, invariance of all but three intercepts, and invariance of all but four subtest error variances.
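The invariance levels described above form a sequence of nested models (configural, weak, strong, strict), each adding equality constraints across test and retest, and adjacent levels are typically compared with a chi-square difference test. The Python sketch below illustrates only that comparison step; the fit statistics are placeholders, not values from this study.

```python
from scipy.stats import chi2

# Hypothetical chi-square fit statistics for nested longitudinal invariance
# models of a four-factor oblique structure (illustrative values only).
models = [
    ("configural", 612.4, 398),  # same pattern of loadings at both time points
    ("weak",       625.1, 408),  # loadings constrained equal across time
    ("strong",     668.9, 418),  # loadings + intercepts constrained equal
    ("strict",     702.3, 428),  # loadings + intercepts + error variances equal
]

# Each model is compared with the previous, less constrained one: the
# difference in chi-square is itself chi-square distributed with df equal
# to the difference in degrees of freedom.
for (name_0, chisq_0, df_0), (name_1, chisq_1, df_1) in zip(models, models[1:]):
    d_chisq = chisq_1 - chisq_0
    d_df = df_1 - df_0
    p = chi2.sf(d_chisq, d_df)
    verdict = "retained" if p > .05 else "rejected (consider partial invariance)"
    print(f"{name_0} -> {name_1}: diff chi2 = {d_chisq:.1f}, diff df = {d_df}, p = {p:.3f}, {verdict}")
```

Partial invariance, as found here at the strong and strict levels, corresponds to freeing the specific intercepts or error variances that fail such a comparison while retaining the remaining constraints.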
Contributors: Richerson, Lindsay Patricia (Author) / Watkins, Marley W. (Thesis advisor) / Balles, John R. (Thesis advisor) / Lynch, Christa S. (Committee member) / Arizona State University (Publisher)
Created: 2012

Description
Investigation of measurement invariance (MI) commonly assumes correct specification of dimensionality across multiple groups. Although research shows that violation of the dimensionality assumption can cause bias in model parameter estimation for single-group analyses, little research on this issue has been conducted for multiple-group analyses. This study explored the effects of mismatch in dimensionality between data and analysis models in multiple-group analyses at the population and sample levels. Datasets were generated using a bifactor model with different factor structures and were analyzed with bifactor and single-factor models to assess the effects of misspecification on assessments of MI and latent mean differences. As baseline models, the bifactor models fit the data well and showed minimal bias in latent mean estimation; however, the low convergence rates when fitting bifactor models to data with complex structures and small sample sizes were a concern. On the other hand, the effects of fitting the misspecified single-factor models on the assessments of MI and latent means differed by the bifactor structure underlying the data. For data following one general factor and one group factor affecting a small set of indicators, the effects of ignoring the group factor in the analysis model on the tests of MI and latent mean differences were mild. In contrast, for data following one general factor and several group factors, oversimplification of the analysis model can lead to inaccurate conclusions regarding MI assessment and latent mean estimation.
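As a simplified picture of the misspecification being studied, the sketch below generates data from a small bifactor population model (one general factor plus one group factor on a few indicators) and then fits a single-factor model that ignores the group factor, using scikit-learn's FactorAnalysis. The loadings, sample size, and number of indicators are arbitrary illustrative choices, not the simulation conditions of this study.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500  # illustrative sample size

# Population bifactor structure: 8 indicators all loading on a general
# factor; the last 3 also load on a group factor (arbitrary values).
general = np.full(8, 0.6)
group = np.array([0, 0, 0, 0, 0, 0.4, 0.4, 0.4])

g = rng.normal(size=(n, 1))              # general factor scores
s = rng.normal(size=(n, 1))              # group factor scores, orthogonal to g
e = rng.normal(scale=0.6, size=(n, 8))   # unique/error variance
data = g @ general[None, :] + s @ group[None, :] + e

# Misspecified analysis model: extract a single factor from the same data,
# ignoring the group factor.
fa = FactorAnalysis(n_components=1).fit(data)
print("single-factor loadings:", np.round(fa.components_.ravel(), 2))
# The indicators sharing the group factor carry extra common variance,
# which distorts the loadings recovered by the single-factor model.
```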
Contributors: Xu, Yuning (Author) / Green, Samuel (Thesis advisor) / Levy, Roy (Committee member) / Thompson, Marilyn (Committee member) / Arizona State University (Publisher)
Created: 2018

Description
To make meaningful comparisons on a construct of interest across groups or over time, measurement invariance needs to exist for at least a subset of the observed variables that define the construct. Often, chi-square difference tests are used to test for measurement invariance. However, these statistics are affected by sample size, such that larger sample sizes are associated with a greater prevalence of significant tests. Thus, using other measures of non-invariance to aid in the decision process would be beneficial. For this dissertation project, I proposed four new effect size measures of measurement non-invariance and conducted a Monte Carlo simulation study to evaluate their properties and behavior, along with those of an already existing effect size measure of non-invariance. The effect size measures were evaluated based on bias, variability, and consistency. Additionally, the factors that affected the value of each effect size measure were analyzed. All of the studied effect sizes were consistent, but three were biased under certain conditions. Further work is needed to establish benchmarks for the unbiased effect sizes.
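The sample-size sensitivity noted above follows from the chi-square statistic being approximately (N - 1) times the minimized fit function value, so the same fixed amount of non-invariance yields a larger, and eventually significant, difference test as N grows; this is the behavior that motivates supplementing the test with effect size measures. A brief sketch of that relationship (the misfit value and degrees of freedom are arbitrary):

```python
from scipy.stats import chi2

# Fixed, arbitrary amount of misfit introduced by constraining a few
# non-invariant parameters (difference in minimized fit function values).
delta_fmin = 0.02
delta_df = 5

# The chi-square difference is roughly (N - 1) * delta_fmin, so the identical
# degree of non-invariance becomes "significant" once N is large enough.
for n in (100, 250, 500, 1000, 5000):
    d_chisq = (n - 1) * delta_fmin
    p = chi2.sf(d_chisq, delta_df)
    print(f"N = {n:>5}: diff chi2 = {d_chisq:6.1f}, p = {p:.4f}")
```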
Contributors: Gunn, Heather J. (Author) / Grimm, Kevin J. (Thesis advisor) / Edwards, Michael C. (Thesis advisor) / Tein, Jenn-Yun (Committee member) / Anderson, Samantha F. (Committee member) / Arizona State University (Publisher)
Created: 2019

Description
Quality in early childhood education (ECE) is central to equitable child development and preparation for formal schooling, and it has been widely studied by researchers and of interest to policymakers. As the federal pre-k program, Head Start is a key ECE context in which to understand quality and its implications for equity. One central measure of classroom quality, the Classroom Assessment Scoring System (CLASS), is used in policy-making and funding decisions to study the impact of quality on children's school readiness. The CLASS is a measure of teacher-child interactional quality, but measurement invariance across teacher race/ethnicity has yet to be examined for this measure in the published literature. Additionally, patterns of classroom quality and the sociocultural context of classrooms as predictors of children's social skills and approaches to learning have yet to be examined. Using anti-racist early childhood education theory and a nationally representative Head Start sample, the Family and Child Experiences Survey 2009 cohort, I conducted two studies to address these gaps. In the first study, I investigated the measurement invariance of the CLASS across teacher race/ethnicity (Black, Latine, White). I found evidence of partial strong invariance, with only one non-invariant parameter for Black teachers, suggesting that means may be compared across teacher race/ethnicity. However, the implications of these findings must be interpreted through an equity lens, and quality measures should work to include equity indicators explicitly. In the second study, I examined patterns of classroom quality indicated by the CLASS and dual language learner (DLL) composition, and examined these patterns in combination with child demographics and teacher-child demographic match as predictors of school readiness outcomes. I found evidence of three profiles of classroom quality, and DLL composition did not significantly predict profile membership. Further, the profile with higher levels of negative climate and moderate emotional support and classroom organization negatively predicted child social skills and approaches to learning. Applying anti-racist ECE theory, these studies suggest that the CLASS does not sufficiently address equity in ECE but may be used with Black, Latine, and White teachers, and that low quality should be addressed through intervention to prevent negative outcomes for children.
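The classroom-quality profiles in the second study are the kind of result produced by a latent profile (finite mixture) analysis of classroom scores. The abstract does not name the estimation approach, so the sketch below is only a generic illustration: it fits Gaussian mixture models with one to five profiles to simulated CLASS-like domain scores and selects a solution by BIC; the variables and data are placeholders, not FACES 2009 values.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Placeholder data standing in for classroom-level CLASS domain scores:
# emotional support, classroom organization, instructional support, and
# negative climate (columns and means are illustrative only).
scores = np.vstack([
    rng.normal([6.0, 6.0, 3.0, 1.2], 0.4, size=(120, 4)),  # higher-quality profile
    rng.normal([5.0, 5.0, 2.3, 1.5], 0.4, size=(120, 4)),  # moderate profile
    rng.normal([4.2, 4.5, 2.0, 2.4], 0.4, size=(60, 4)),   # moderate, elevated negativity
])

# Fit mixtures with 1-5 profiles and compare BIC to choose a solution.
fits = {k: GaussianMixture(n_components=k, random_state=0).fit(scores) for k in range(1, 6)}
best_k = min(fits, key=lambda k: fits[k].bic(scores))
print("BIC by number of profiles:", {k: round(m.bic(scores), 1) for k, m in fits.items()})
print("selected number of profiles:", best_k)
print("profile means:\n", np.round(fits[best_k].means_, 2))
```

Profile membership could then be related to predictors such as DLL composition, and the profiles used as predictors of child outcomes, mirroring the structure of the analyses described above.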
Contributors: Alexander, Brittany L. (Author) / Yoo, Hyung C. (Thesis advisor) / Meek, Shantel (Thesis advisor) / Edyburn, Kelly (Committee member) / Herrera, Manuela J. (Committee member) / Arizona State University (Publisher)
Created: 2022