Matching Items (24)

Description
Item response theory (IRT) and related latent variable models represent modern psychometric theory, the successor to classical test theory in psychological assessment. While IRT has become prevalent in the assessment of ability and achievement, it has not been widely embraced by clinical psychologists. This appears due, in part, to psychometrists' use of unidimensional models despite evidence that psychiatric disorders are inherently multidimensional. The construct validity of unidimensional and multidimensional latent variable models was compared to evaluate the utility of modern psychometric theory in clinical assessment. Archival data consisting of 688 outpatients' presenting concerns, psychiatric diagnoses, and item-level responses to the Brief Symptom Inventory (BSI) were extracted from files at a university mental health clinic. Confirmatory factor analyses revealed that models with oblique factors and/or item cross-loadings better represented the internal structure of the BSI in comparison to a strictly unidimensional model. The models were generally equivalent in their ability to account for variance in criterion-related validity variables; however, bifactor models demonstrated superior validity in differentiating between mood and anxiety disorder diagnoses. Multidimensional IRT analyses showed that the orthogonal bifactor model partitioned distinct, clinically relevant sources of item variance. Similar results were also achieved through multivariate prediction with an oblique simple structure model. Receiver operating characteristic curves confirmed improved sensitivity and specificity through multidimensional models of psychopathology. Clinical researchers are encouraged to consider these and other comprehensive models of psychological distress.
Contributors: Thomas, Michael Lee (Author) / Lanyon, Richard (Thesis advisor) / Barrera, Manuel (Committee member) / Levy, Roy (Committee member) / Millsap, Roger (Committee member) / Arizona State University (Publisher)
Created: 2011
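As a toy illustration of the idea in the abstract above, namely that an orthogonal bifactor structure partitions each item's variance into general, specific, and unique sources, the sketch below simulates standardized items from a general distress factor plus an orthogonal group factor and reports the implied decomposition. All loadings, the number of items, and the variable names are hypothetical, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 688                                   # sample size borrowed from the abstract's archival data
lam_g = np.array([0.70, 0.60, 0.65, 0.50])  # hypothetical general-factor loadings
lam_s = np.array([0.40, 0.45, 0.00, 0.00])  # hypothetical group-factor loadings (e.g., somatic items)
psi = 1 - lam_g**2 - lam_s**2               # unique variances so each item has unit total variance

general = rng.standard_normal(n)            # orthogonal latent factors
group = rng.standard_normal(n)
errors = rng.standard_normal((n, 4)) * np.sqrt(psi)
items = np.outer(general, lam_g) + np.outer(group, lam_s) + errors

# Variance decomposition implied by the orthogonal bifactor model, item by item.
for j in range(4):
    print(f"item {j}: general {lam_g[j]**2:.2f}, specific {lam_s[j]**2:.2f}, unique {psi[j]:.2f}")
print("empirical item variances:", items.var(axis=0).round(2))
```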
Description
In the current context of fiscal austerity and neo-colonial criticism, the discipline of religious studies has been challenged to critically assess its teaching methods and to articulate its relevance in the modern university setting. Responding to these needs, this dissertation explores the educational outcomes of religious studies curricula for undergraduate students. The research employs a robust quantitative methodology designed to assess the impact of the courses while controlling for a number of covariates. Based on data collected from pre- and post-course surveys of a combined 1,116 students enrolled at Arizona State University (ASU) and two area community colleges, the research examines student change across five outcomes: attributional complexity, multi-religious awareness, commitment to social justice, individual religiosity, and a newly developed set of neo-colonial measures. The sample was taken in the Fall of 2009 from courses including Religions of the World and introductory Islamic studies courses, with a control group consisting of engineering and political science students. The findings were mixed. From the "virtues of the humanities" standpoint, selected within-group changes showed statistically significant positive shifts, but comparisons across the course groups and the control group revealed no statistically significant differences after controlling for key variables. The students' pre-course survey score was the best predictor of their post-course survey score. With respect to the neo-colonial critiques, the null findings suggest that those critiques overstate the pedagogical impact of such courses in the classroom.
Contributors: Lewis, Bret (Author) / Gereboff, Joel (Thesis advisor) / Foard, James (Committee member) / Levy, Roy (Committee member) / Woodward, Mark (Committee member) / Arizona State University (Publisher)
Created: 2011
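The pre/post design with covariate control described in the abstract above is commonly analyzed with an ANCOVA-style regression of the post-course score on course group plus the pre-course score. The sketch below shows that general approach with statsmodels on invented data; the group labels, sample size, and simulated effect are illustrative assumptions, not the study's actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
pre = rng.normal(50, 10, n)                              # hypothetical pre-course survey scores
group = rng.choice(["world_religions", "islam", "control"], n)
post = 0.8 * pre + rng.normal(0, 8, n)                   # post scores driven mainly by pre scores

df = pd.DataFrame({"pre": pre, "post": post, "group": group})

# Post-course score regressed on course group, controlling for the pre-course score.
model = smf.ols("post ~ pre + C(group, Treatment('control'))", data=df).fit()
print(model.summary())
```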
Description
The purpose of this study was to investigate the effect of complex structure on dimensionality assessment in compensatory and noncompensatory multidimensional item response theory (MIRT) models of assessment data, using dimensionality assessment procedures based on conditional covariances (i.e., DETECT) and a factor analytic approach (i.e., NOHARM). The DETECT-based methods typically outperformed the NOHARM-based methods in both two-dimensional (2D) and three-dimensional (3D) compensatory MIRT conditions. The DETECT-based methods yielded high proportions of correct dimensionality decisions, especially when correlations were .60 or smaller, the data exhibited 30% or less complexity, and the sample size was larger. As complexity increased and sample size decreased, performance typically diminished. As complexity increased, it also became more difficult to label the resulting sets of items from DETECT in terms of the dimensions. DETECT was consistent in classifying simple items but less consistent in classifying complex items. Of the three NOHARM-based methods, χ²G/D and ALR generally outperformed RMSR. χ²G/D was more accurate when N = 500 and complexity levels were 30% or lower. As the number of items increased, ALR performance improved at a correlation of .60 and 30% or less complexity. When the data followed a noncompensatory MIRT model, the NOHARM-based methods, specifically χ²G/D and ALR, were the most accurate of all five methods. The marginal proportions for labeling sets of items as dimension-like were typically low, suggesting that the methods generally failed to label two (three) sets of items as dimension-like in 2D (3D) noncompensatory situations. The DETECT-based methods were more consistent in classifying simple items across complexity levels, sample sizes, and correlations. However, as complexity and correlation levels increased, classification rates for all methods decreased. In most conditions, the DETECT-based methods classified complex items as consistently as or more consistently than the NOHARM-based methods. In particular, as complexity, the number of items, and the true dimensionality increased, the DETECT-based methods were notably more consistent than any NOHARM-based method. Despite DETECT's consistency, when data follow a noncompensatory MIRT model the NOHARM-based methods should be preferred over the DETECT-based methods for assessing dimensionality, owing to DETECT's poor performance in identifying the true dimensionality.
Contributors: Svetina, Dubravka (Author) / Levy, Roy (Thesis advisor) / Gorin, Joanna S. (Committee member) / Millsap, Roger (Committee member) / Arizona State University (Publisher)
Created: 2011
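The conditional-covariance idea behind DETECT, referenced in the abstract above, can be illustrated by computing, for each item pair, the covariance of the two items conditional on the rest score (the total on the remaining items): pairs measuring the same dimension tend to show positive conditional covariances, while pairs from different dimensions tend to show negative ones. The sketch below does this for simulated two-dimensional compensatory data; it is a simplified illustration of the statistic's core ingredient, not the DETECT procedure itself, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n_persons, n_items = 2000, 6
theta = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], n_persons)
# Simple structure: first three items load on dimension 1, last three on dimension 2.
a = np.zeros((n_items, 2))
a[:3, 0] = 1.2
a[3:, 1] = 1.2
prob = 1 / (1 + np.exp(-(theta @ a.T)))            # compensatory 2PL-type response probabilities
resp = (rng.random((n_persons, n_items)) < prob).astype(int)

def conditional_cov(x, i, j):
    """Weighted average of within-stratum covariances of items i and j, conditioning on the rest score."""
    rest = x.sum(axis=1) - x[:, i] - x[:, j]
    covs, weights = [], []
    for s in np.unique(rest):
        grp = x[rest == s]
        if len(grp) > 1:
            covs.append(np.cov(grp[:, i], grp[:, j])[0, 1])
            weights.append(len(grp))
    return np.average(covs, weights=weights)

print("same dimension (items 0, 1):", round(conditional_cov(resp, 0, 1), 4))
print("different dimensions (items 0, 4):", round(conditional_cov(resp, 0, 4), 4))
```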
Description
Students with traumatic brain injury (TBI) sometimes experience impairments that can adversely affect educational performance. Consequently, school psychologists may be needed to help determine whether a TBI diagnosis is warranted (i.e., in compliance with the Individuals with Disabilities Education Improvement Act, IDEIA) and to suggest accommodations to assist those students. This analogue study investigated whether school psychologists provided with more comprehensive psychoeducational evaluations of a student with TBI were more successful in detecting TBI, made more TBI-related accommodations, and were more confident in their decisions. To test these hypotheses, 76 school psychologists were randomly assigned to one of three groups that received increasingly comprehensive levels of psychoeducational evaluation embedded in a cumulative folder of a hypothetical student whose history included a recent head injury and TBI-compatible school problems. As expected, school psychologists who received a more comprehensive psychoeducational evaluation were more likely to make a TBI educational diagnosis, but the effect size was not strong, and the predictive value came from the difference between the first and third groups. Likewise, school psychologists receiving more comprehensive evaluation data produced more accommodations related to student needs and felt more confident in those accommodations, but significant differences were not found at all levels of evaluation. Contrary to expectations, however, providing more comprehensive information failed to engender more confidence in decisions about TBI educational diagnoses. Concluding that a TBI is present may itself facilitate accommodations; school psychologists who judged that the student warranted a TBI educational diagnosis produced more TBI-related accommodations. The findings suggest the importance of training school psychologists to interpret neuropsychological test results in order to aid educational diagnosis and to increase confidence in their use.
Contributors: Hildreth, Lisa Jane (Author) / Hildreth, Lisa J (Thesis advisor) / Wodrich, David (Committee member) / Levy, Roy (Committee member) / Lavoie, Michael (Committee member) / Arizona State University (Publisher)
Created: 2012
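One way to test whether the likelihood of a TBI educational diagnosis differs across the three evaluation-comprehensiveness groups in a design like the one above is a chi-square test on the 3 x 2 table of diagnosis decisions, with Cramér's V as an effect-size measure. The counts below are invented purely for illustration; they are not the study's data, and this is not presented as the dissertation's analysis.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: evaluation levels 1-3; columns: [TBI diagnosis made, not made]. Hypothetical counts.
table = np.array([[ 8, 17],
                  [12, 14],
                  [18,  7]])

chi2, p, dof, expected = chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))   # effect size for an r x c table
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}, Cramer's V = {cramers_v:.2f}")
```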
Description
In order to analyze data from an instrument administered at multiple time points, it is common practice to form composites of the items at each wave and to fit a longitudinal model to the composites. The advantage of using composites of items is that smaller sample sizes are required, in contrast to second-order models that include both the measurement relationships and the structural relationships among the variables. However, the use of composites assumes that longitudinal measurement invariance holds; that is, it is assumed that the relationships among the items and the latent variables remain constant over time. Previous studies of latent growth models (LGM) have shown that when longitudinal metric invariance is violated, the parameter estimates are biased and mistaken conclusions about growth can be drawn. The purpose of the current study was to examine the impact of non-invariant loadings and non-invariant intercepts on two longitudinal models: the LGM and the autoregressive quasi-simplex model (AR quasi-simplex). A second purpose was to determine whether there are conditions in which researchers can reach adequate conclusions about stability and growth even in the presence of violations of invariance. A Monte Carlo simulation study was conducted to achieve these purposes. The method consisted of generating items under a linear curve of factors model (COFM) or under the AR quasi-simplex. Composites of the items were formed at each time point and analyzed with a linear LGM or an AR quasi-simplex model. The results showed that the AR quasi-simplex model yielded biased path coefficients only in the conditions with large violations of invariance, and the fit of the AR quasi-simplex was not affected by violations of invariance. In general, the growth parameter estimates of the LGM were biased under violations of invariance. Further, in the presence of non-invariant loadings, the rejection rates of the hypothesis of linear growth increased as the proportion of non-invariant items and the magnitude of the violations of invariance increased. A discussion of the results and limitations of the study is provided, along with general recommendations.
Contributors: Olivera-Aguilar, Margarita (Author) / Millsap, Roger E. (Thesis advisor) / Levy, Roy (Committee member) / MacKinnon, David (Committee member) / West, Stephen G. (Committee member) / Arizona State University (Publisher)
Created: 2013
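The data-generating side of a simulation like the one above can be sketched as follows: item responses at each wave come from a common factor that grows linearly over time, a subset of loadings drifts at later waves (violating metric invariance), and wave-specific composites are formed by averaging the items. Everything here (number of items, loading values, size of the drift) is a hypothetical assumption that only illustrates the general setup, not the study's actual conditions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, waves, n_items = 500, 4, 6
loadings = np.full((waves, n_items), 0.7)
loadings[2:, :2] += 0.3            # two items become more discriminating at later waves (non-invariance)

intercept = rng.normal(0, 1, n)    # person-specific latent intercepts
slope = rng.normal(0.5, 0.3, n)    # person-specific latent slopes (linear growth in the factor)

composites = np.empty((n, waves))
for t in range(waves):
    eta = intercept + slope * t                                   # latent factor at wave t
    items = eta[:, None] * loadings[t] + rng.normal(0, 0.6, (n, n_items))
    composites[:, t] = items.mean(axis=1)                         # wave-specific composite score

print("composite means by wave:", composites.mean(axis=0).round(2))
```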
Description
The Culture-Language Interpretive Matrix (C-LIM) is a new tool hypothesized to help practitioners accurately determine whether students who are administered an IQ test are culturally and linguistically different from the normative comparison group (i.e., different) or culturally and linguistically similar to the normative comparison group and possibly have Specific Learning Disabilities (SLD) or other neurocognitive disabilities (i.e., disordered). Diagnostic utility statistics were used to test the ability of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) C-LIM to accurately identify students from a referred sample of English language learners (ELLs) (n = 86) for whom Spanish was the primary language spoken at home and a sample of students from the WISC-IV normative sample (n = 2,033) as either culturally and linguistically different from the WISC-IV normative sample or culturally and linguistically similar to the WISC-IV normative sample. WISC-IV scores from three paired comparison groups were analyzed using receiver operating characteristic (ROC) curves: (a) ELLs with SLD and the WISC-IV normative sample, (b) ELLs without SLD and the WISC-IV normative sample, and (c) ELLs with SLD and ELLs without SLD. The ROC analyses yielded area under the curve (AUC) values ranging between 0.51 and 0.53 for the comparison between ELLs with SLD and the WISC-IV normative sample, between 0.48 and 0.53 for the comparison between ELLs without SLD and the WISC-IV normative sample, and between 0.49 and 0.55 for the comparison between ELLs with SLD and ELLs without SLD. These values indicate that the C-LIM has low diagnostic accuracy in differentiating between a sample of ELLs and the WISC-IV normative sample. Currently available evidence does not support use of the C-LIM in applied practice.
Contributors: Styck, Kara M (Author) / Watkins, Marley W. (Thesis advisor) / Levy, Roy (Thesis advisor) / Balles, John (Committee member) / Arizona State University (Publisher)
Created: 2012
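The diagnostic-utility analysis summarized above amounts to asking how well a score separates two known groups, which an ROC curve and its AUC quantify; AUC values near 0.50 indicate chance-level discrimination, exactly the range reported for the C-LIM. The sketch below computes an ROC and AUC with scikit-learn on simulated scores; the group sizes mirror those in the abstract, but the scores themselves are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(4)
# 1 = referred ELL student, 0 = normative-sample student (hypothetical labels and scores).
labels = np.concatenate([np.ones(86), np.zeros(2033)])
scores = np.concatenate([rng.normal(0.1, 1, 86),      # nearly overlapping score distributions,
                         rng.normal(0.0, 1, 2033)])   # so AUC lands close to 0.5 (poor discrimination)

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)      # full curve, e.g., for plotting sensitivity/specificity
print(f"AUC = {auc:.2f}")                             # values near 0.50 indicate chance-level classification
```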
Description
The last two decades have seen growing awareness of and emphasis on the replication of empirical findings. While this literature is large, very little of it has focused on or considered the interaction of replication and psychometrics. This is unfortunate given that sound measurement is crucial when considering the complex constructs studied in psychological research. If the psychometric properties of a scale fail to replicate, then inferences made using scores from that scale are questionable at best. In this dissertation, I begin to address replication issues in factor analysis – a widely used psychometric method in psychology. After noticing inconsistencies across results for studies that factor-analyzed the same scale, I sought to gain a better understanding of what replication means in factor analysis as well as address issues that affect the replicability of factor analytic models. With this work, I take steps toward integrating factor analysis into the broader replication discussion. Ultimately, the goal of this dissertation was to highlight the importance of psychometric replication and bring attention to its role in fostering a more replicable scientific literature.
Contributors: Manapat, Patrick D. (Author) / Edwards, Michael C. (Thesis advisor) / Anderson, Samantha F. (Thesis advisor) / Grimm, Kevin J. (Committee member) / Levy, Roy (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The proliferation of intensive longitudinal datasets has necessitated the development of analytical techniques that are flexible and accessible to researchers collecting dyadic or individual data. Dynamic structural equation models (DSEMs), as implemented in Mplus, provide the flexibility researchers require by combining components of multilevel modeling, structural equation modeling, and time series analysis. This dissertation project presents a simulation study that evaluates the performance of categorical DSEM using a probit link function across different numbers of clusters (N = 50 or 200), numbers of timepoints (T = 14, 28, or 56), numbers of categories on the outcome (2, 3, or 5), and distributions of responses on the outcome (symmetric/approximately normal, skewed, or uniform) for both univariate and multivariate models (representing individual data and dyadic longitudinal Actor-Partner Interdependence Model data, respectively). The 3- and 5-category model conditions were also evaluated as continuous DSEMs across the same cluster, timepoint, and distribution conditions to assess the extent to which ignoring the categorical nature of the outcome affected model performance. Results indicated that previously suggested minimums for the number of clusters and timepoints, drawn from studies evaluating DSEM performance with continuous outcomes, are not large enough to produce unbiased and adequately powered models in categorical DSEM. The distribution of responses on the outcome did not have a noticeable impact on model performance for categorical DSEM, but it did affect model performance when a continuous DSEM was fit to the same datasets. Ignoring the categorical nature of the outcome led to underestimated effects across parameters and conditions and produced large Type I error rates in the N = 200 cluster conditions.
Contributors: Savord, Andrea (Author) / McNeish, Daniel (Thesis advisor) / Grimm, Kevin J (Committee member) / Iida, Masumi (Committee member) / Levy, Roy (Committee member) / Arizona State University (Publisher)
Created: 2023
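A minimal version of the data-generating process for a two-level categorical DSEM of the kind studied above can be sketched as follows: each cluster has its own mean, the latent continuous response follows a first-order autoregressive process over timepoints, and ordinal categories are obtained by cutting that latent variable at thresholds, consistent with a probit link (standard normal innovations). The cluster count, number of timepoints, AR coefficient, and thresholds below are illustrative assumptions, not the study's conditions, and no estimation is shown.

```python
import numpy as np

rng = np.random.default_rng(5)
n_clusters, timepoints = 200, 28
phi = 0.3                                   # within-cluster autoregressive effect
thresholds = np.array([-0.5, 0.5])          # two thresholds -> 3 ordinal categories

data = np.empty((n_clusters, timepoints), dtype=int)
for i in range(n_clusters):
    mu = rng.normal(0, 1)                   # random cluster mean (between-level variance)
    latent = 0.0
    for t in range(timepoints):
        latent = phi * latent + rng.normal(0, 1)          # AR(1) latent process with normal innovations
        y_star = mu + latent                              # latent continuous response
        data[i, t] = np.searchsorted(thresholds, y_star)  # probit-style cut into categories 0..2

print("category proportions:", np.bincount(data.ravel()) / data.size)
```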
Description
Dynamic Bayesian networks (DBNs; Reye, 2004) are a promising tool for modeling student proficiency under rich measurement scenarios (Reichenberg, in press). These scenarios often present assessment conditions far more complex than those seen with more traditional assessments and require assessment arguments and psychometric models capable of integrating those complexities. Unfortunately, DBNs remain understudied and their psychometric properties relatively unknown. If the apparent strengths of DBNs are to be leveraged, the body of literature surrounding their properties and use needs to be expanded. To this end, the current work explored the properties of DBNs under a variety of realistic psychometric conditions. A two-phase Monte Carlo simulation study was conducted to evaluate parameter recovery for DBNs using maximum likelihood estimation with the Netica software package. Phase 1 included a limited number of conditions and was exploratory in nature, while Phase 2 included a larger and more targeted complement of conditions. Manipulated factors included sample size, measurement quality, test length, and the number of measurement occasions. Results suggested that measurement quality has the most prominent impact on estimation quality, with more distinct performance categories yielding better estimation. While increasing sample size tended to improve estimation, there were a limited number of conditions under which greater sample size led to more estimation bias; an exploration of this phenomenon is included. From a practical perspective, parameter recovery appeared to be sufficient with samples as small as N = 400, as long as measurement quality was not poor and at least three items were present at each measurement occasion. Tests consisting of only a single item required exceptional measurement quality to adequately recover model parameters. The study was somewhat limited by potentially software-specific issues as well as a non-comprehensive collection of experimental conditions. Further research should replicate and potentially expand the current work using other software packages, including exploring alternative estimation methods (e.g., Markov chain Monte Carlo).
Contributors: Reichenberg, Raymond E (Author) / Levy, Roy (Thesis advisor) / Eggum-Wilkens, Natalie (Thesis advisor) / Iida, Masumi (Committee member) / DeLay, Dawn (Committee member) / Arizona State University (Publisher)
Created: 2018
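The kind of model examined above can be illustrated with a minimal two-state dynamic Bayesian network: a binary proficiency variable evolves across measurement occasions according to a learning (transition) probability, and items at each occasion are observed with slip and guess probabilities. The sketch below only generates data from such a network with made-up parameter values; it does not reproduce the Netica-based estimation used in the study.

```python
import numpy as np

rng = np.random.default_rng(6)
n_students, occasions, items_per_occasion = 400, 3, 3
p_initial = 0.3          # P(proficient at occasion 1)
p_learn = 0.4            # P(non-proficient -> proficient) between occasions
slip, guess = 0.1, 0.2   # measurement parameters linking proficiency to item responses

responses = np.empty((n_students, occasions, items_per_occasion), dtype=int)
for s in range(n_students):
    proficient = rng.random() < p_initial
    for t in range(occasions):
        if t > 0 and not proficient:
            proficient = rng.random() < p_learn           # latent transition (no forgetting assumed)
        p_correct = (1 - slip) if proficient else guess
        responses[s, t] = rng.random(items_per_occasion) < p_correct

print("proportion correct by occasion:", responses.mean(axis=(0, 2)).round(2))
```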
Description
Investigation of measurement invariance (MI) commonly assumes correct specification of dimensionality across multiple groups. Although research shows that violation of the dimensionality assumption can cause bias in model parameter estimation for single-group analyses, little research on this issue has been conducted for multiple-group analyses. This study explored the effects of mismatch in dimensionality between data and analysis models in multiple-group analyses at the population and sample levels. Datasets were generated using a bifactor model with different factor structures and were analyzed with bifactor and single-factor models to assess misspecification effects on assessments of MI and latent mean differences. As baseline models, the bifactor models fit the data well and showed minimal bias in latent mean estimation. However, the low convergence rates when fitting bifactor models to data with complex structures and small sample sizes were a concern. The effects of fitting misspecified single-factor models on the assessments of MI and latent means differed by the bifactor structure underlying the data. For data following one general factor and one group factor affecting a small set of indicators, the effects of ignoring the group factor in the analysis models on tests of MI and latent mean differences were mild. In contrast, for data following one general factor and several group factors, oversimplified analysis models can lead to inaccurate conclusions regarding MI assessment and latent mean estimation.
Contributors: Xu, Yuning (Author) / Green, Samuel (Thesis advisor) / Levy, Roy (Committee member) / Thompson, Marilyn (Committee member) / Arizona State University (Publisher)
Created: 2018