Description
In order to analyze data from an instrument administered at multiple time points, it is common practice to form composites of the items at each wave and to fit a longitudinal model to the composites. The advantage of using composites of items is that smaller sample sizes are required in contrast to second-order models that include the measurement and the structural relationships among the variables. However, the use of composites assumes that longitudinal measurement invariance holds; that is, it is assumed that the relationships among the items and the latent variables remain constant over time. Previous studies conducted on latent growth models (LGM) have shown that when longitudinal metric invariance is violated, the parameter estimates are biased and that mistaken conclusions about growth can be made. The purpose of the current study was to examine the impact of non-invariant loadings and non-invariant intercepts on two longitudinal models: the LGM and the autoregressive quasi-simplex model (AR quasi-simplex). A second purpose was to determine if there are conditions in which researchers can reach adequate conclusions about stability and growth even in the presence of violations of invariance. A Monte Carlo simulation study was conducted to achieve these purposes. The method consisted of generating items under a linear curve of factors model (COFM) or under the AR quasi-simplex. Composites of the items were formed at each time point and analyzed with a linear LGM or an AR quasi-simplex model. The results showed that the AR quasi-simplex model yielded biased path coefficients only in the conditions with large violations of invariance. The fit of the AR quasi-simplex was not affected by violations of invariance. In general, the growth parameter estimates of the LGM were biased under violations of invariance.
Further, in the presence of non-invariant loadings, the rejection rates of the hypothesis of linear growth increased as the proportion of non-invariant items and the magnitude of violations of invariance increased. A discussion of the results and limitations of the study is provided, as well as general recommendations.
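The mechanism described above can be illustrated with a minimal simulation sketch: items are generated from a linear latent growth process, one item's loading drifts across waves (a metric-invariance violation), and the composite means bend away from a straight line. All parameter values here are hypothetical and do not reproduce the study's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
n, waves, n_items = 2000, 4, 6

# Latent linear growth: random intercepts and slopes (hypothetical values)
intercept = rng.normal(0.0, 1.0, n)
slope = rng.normal(0.5, 0.2, n)
eta = intercept[:, None] + slope[:, None] * np.arange(waves)   # n x waves

# Loadings are invariant except item 1, whose loading drifts over time
loadings = np.full((waves, n_items), 0.7)
loadings[:, 0] = 0.7 + 0.2 * np.arange(waves)   # metric-invariance violation

composites = np.empty((n, waves))
for t in range(waves):
    items = eta[:, [t]] * loadings[t] + rng.normal(0.0, 0.5, (n, n_items))
    composites[:, t] = items.mean(axis=1)

# Under full invariance the composite means would grow linearly;
# the drifting loading makes the successive wave-to-wave gaps widen instead
means = composites.mean(axis=0)
gaps = np.diff(means)
print(means.round(3), gaps.round(3))
```

A linear LGM fit to these composites would mistake the widening gaps for nonlinearity, which is the bias the abstract describes.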
Contributors: Olivera-Aguilar, Margarita (Author) / Millsap, Roger E. (Thesis advisor) / Levy, Roy (Committee member) / MacKinnon, David (Committee member) / West, Stephen G. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Including a covariate can increase power to detect an effect between two variables. Although previous research has studied power in mediation models, the extent to which the inclusion of a mediator will increase the power to detect a relation between two variables has not been investigated. The first study identified situations where the empirical and analytical power of two tests of significance for a single-mediator model was greater than the power of a bivariate significance test. Results from the first study indicated that including a mediator increased statistical power in small samples with large effects and in large samples with small effects. Next, a study was conducted to assess when power was greater for a significance test for a two-mediator model than for a bivariate significance test. Results indicated that including two mediators increased power in small samples when both specific mediated effects were large and in large samples when both specific mediated effects were small. Implications of the results and directions for future research are then discussed.
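A rough sketch of this comparison, assuming a single-mediator model with no direct effect and contrasting a joint significance test of the two paths against a simple bivariate slope test (all effect sizes hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def slope_z(x, y):
    # z statistic for the slope in a simple regression of y on x
    n = len(x)
    b = np.cov(x, y, bias=True)[0, 1] / x.var()
    resid = y - y.mean() - b * (x - x.mean())
    se = np.sqrt(resid.var(ddof=2) / (n * x.var()))
    return b / se

def empirical_power(n, a, b_path, reps=1000, crit=1.96):
    joint = biv = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)           # mediator
        y = b_path * m + rng.normal(size=n)      # outcome; no direct x -> y effect
        joint += abs(slope_z(x, m)) > crit and abs(slope_z(m, y)) > crit
        biv += abs(slope_z(x, y)) > crit
    return joint / reps, biv / reps

# Medium-sized paths: the joint significance test of the mediated effect
# detects the x -> y relation far more often than the bivariate x -> y test
p_joint, p_biv = empirical_power(n=100, a=0.39, b_path=0.39)
print(p_joint, p_biv)
```

Because the bivariate test targets the small product of the two paths while the joint test targets each medium-sized path separately, the mediator-based test gains power, in line with the pattern the abstract reports.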
Contributors: O'Rourke, Holly Patricia (Author) / Mackinnon, David P (Thesis advisor) / Enders, Craig K. (Committee member) / Millsap, Roger (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The purpose of this dissertation was to develop a Compassionate Communication Scale (CCS) by conducting a series of studies. The first study used qualitative data to identify and develop initial scale items. A series of follow-up studies used exploratory factor analysis to investigate the underlying structure of the CCS. A three-factor structure emerged, which included: compassionate conversation, such as listening, letting the distressed person disclose feelings, and making empathetic remarks; compassionate touch, such as holding someone's hand or patting someone's back; and compassionate messaging, such as posting an encouraging message on a social networking site or sending a sympathetic email. The next study tested convergent and divergent validity by determining how the three forms of compassionate communication associate with various traits. Compassionate conversation was positively related to compassion, empathetic concern, perspective taking, emotional intelligence, social expressivity, emotional expressivity, and benevolence, and negatively related to verbal aggressiveness and narcissism. Compassionate touch was positively correlated with compassion, empathetic concern, perspective taking, emotional intelligence, social expressivity, emotional expressivity, and benevolence, and uncorrelated with verbal aggressiveness and narcissism. Finally, compassionate messaging was positively correlated with social expressivity and emotional expressivity, and uncorrelated with verbal aggressiveness and narcissism. The next study focused on cross-validation and criterion-related validity. Correlations showing that self-reports of a person's compassionate communication were positively related to a friend or romantic partner's report of that person's compassionate communication provided cross-validation. The test for criterion-related validity examined whether compassionate communication predicts relational satisfaction.
Regression analyses revealed that people were more relationally satisfied when they perceived themselves to use compassionate conversation, when they perceived their partner to use compassionate conversation, and when their partner reported using compassionate conversation. This finding did not extend to compassionate touch or compassionate messaging. In fact, in one regression analysis, people reported more relational satisfaction when they perceived that their partners used high levels of compassionate conversation and low levels of compassionate touch. Overall, the analyses suggest that of the three forms of compassionate communication, compassionate conversation is most strongly related to relational satisfaction. Taken together, this series of studies provides initial evidence for the validity of the CCS.
Contributors: Ramos Salazar, Leslie (Author) / Guerrero, Laura (Thesis advisor) / Roberto, Anthony (Committee member) / Millsap, Roger (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex performance assessment within a digital-simulation educational context grounded in theories of cognition and learning. BN models were manipulated along two factors: latent variable dependency structure and number of latent classes. Distributions of posterior predictive p-values (PPP-values) served as the primary outcome measure and were summarized in graphical presentations, by median values across replications, and by proportions of replications in which the PPP-values were extreme. An effect size measure for PPMC was introduced as a supplemental numerical summary to the PPP-value. Consistent with previous PPMC research, all investigated fit functions tended to perform conservatively, but the Standardized Generalized Dimensionality Discrepancy Measure (SGDDM), Yen's Q3, and the Hierarchy Consistency Index (HCI) only mildly so. Adequate power to detect at least some types of misfit was demonstrated by SGDDM, Q3, HCI, the Item Consistency Index (ICI), and to a lesser extent Deviance, while proportion correct (PC), a chi-square-type item-fit measure, the Ranked Probability Score (RPS), and Good's Logarithmic Scale (GLS) were powerless across all investigated factors. Bivariate SGDDM and Q3 were found to provide powerful and detailed feedback for all investigated types of misfit.
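The PPP-value logic can be sketched in miniature with a conjugate normal model rather than a Bayesian network (a deliberately toy setup; the parameter-dependent discrepancy used here also illustrates the conservative behavior noted above, with PPP-values concentrating near .5 when the model fits):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy PPMC: normal data with known unit variance, diffuse conjugate prior
# on the mean (illustrative only; not the BN models of the study)
y = rng.normal(0.0, 1.0, size=60)
n, prior_var = len(y), 100.0
post_var = 1.0 / (n + 1.0 / prior_var)
post_mean = post_var * y.sum()

mu = rng.normal(post_mean, np.sqrt(post_var), size=2000)   # posterior draws
y_rep = rng.normal(mu[:, None], 1.0, size=(2000, n))       # replicated data sets

# Parameter-dependent discrepancy D(y, mu) = (mean(y) - mu)^2, compared
# between realized and posterior-predicted data at each draw
realized = (y.mean() - mu) ** 2
predicted = (y_rep.mean(axis=1) - mu) ** 2
ppp = np.mean(predicted >= realized)
print(round(ppp, 2))   # near 0.5 when the model fits the data
```

Extreme PPP-values (near 0 or 1) would flag misfit; summarizing their distribution across replications is what the study does at scale with its BN discrepancy measures.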
Contributors: Crawford, Aaron (Author) / Levy, Roy (Thesis advisor) / Green, Samuel (Committee member) / Thompson, Marilyn (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Research methods based on the frequentist philosophy use prior information in a priori power calculations and when determining the necessary sample size for the detection of an effect, but not in statistical analyses. Bayesian methods incorporate prior knowledge into the statistical analysis in the form of a prior distribution. When prior information about a relationship is available, the estimates obtained could differ drastically depending on the choice of Bayesian or frequentist method. Study 1 in this project compared the performance of five methods for obtaining interval estimates of the mediated effect in terms of coverage, Type I error rate, empirical power, interval imbalance, and interval width at N = 20, 40, 60, 100 and 500. In Study 1, Bayesian methods with informative prior distributions performed almost identically to Bayesian methods with diffuse prior distributions, and had more power than normal theory confidence limits, lower Type I error rates than the percentile bootstrap, and coverage, interval width, and imbalance comparable to normal theory, percentile bootstrap, and the bias-corrected bootstrap confidence limits. Study 2 evaluated whether a Bayesian method with true parameter values as prior information outperforms the other methods. The findings indicate that with true values of parameters as the prior information, Bayesian credibility intervals with informative prior distributions have more power, less imbalance, and narrower intervals than Bayesian credibility intervals with diffuse prior distributions, normal theory, percentile bootstrap, and bias-corrected bootstrap confidence limits. Study 3 examined how much power increases when increasing the precision of the prior distribution by a factor of ten for either the action or the conceptual path in mediation analysis.
Power generally increases with increases in precision, but there are many sample size and parameter value combinations where a tenfold increase in precision does not lead to a substantial increase in power.
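The contrast between diffuse and informative prior distributions can be sketched with a conjugate normal model (hypothetical values; the study's mediation models are more complex than a single mean):

```python
import numpy as np

rng = np.random.default_rng(3)

def posterior(y, prior_mean, prior_var, error_var=1.0):
    # Conjugate normal-normal update for a mean with known error variance
    n = len(y)
    post_var = 1.0 / (n / error_var + 1.0 / prior_var)
    post_mean = post_var * (y.sum() / error_var + prior_mean / prior_var)
    return post_mean, post_var

y = rng.normal(0.3, 1.0, size=40)   # data centred on a true value of 0.3

for label, pv in [("diffuse", 1000.0), ("informative", 0.01)]:
    m, v = posterior(y, prior_mean=0.3, prior_var=pv)
    lo, hi = m - 1.96 * np.sqrt(v), m + 1.96 * np.sqrt(v)
    print(f"{label:11s} 95% interval width = {hi - lo:.3f}")
```

With the prior centred on the true value, the informative posterior is narrower, so the credibility interval excludes zero more often, which is the power advantage described above.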
Contributors: Miocevic, Milica (Author) / Mackinnon, David P. (Thesis advisor) / Levy, Roy (Committee member) / West, Stephen G. (Committee member) / Enders, Craig (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The current study employed item difficulty modeling procedures to evaluate the feasibility of potential generative item features for nonword repetition. Specifically, the extent to which the manipulated item features affect the theoretical mechanisms that underlie nonword repetition accuracy was estimated. Generative item features were based on the phonological loop component of Baddeley's model of working memory, which addresses phonological short-term memory (Baddeley, 2000, 2003; Baddeley & Hitch, 1974). Using researcher-developed software, nonwords were generated to adhere to the phonological constraints of Spanish. Thirty-six nonwords were chosen based on the set of item features identified by the proposed cognitive processing model. Using a planned missing data design, two hundred fifteen Spanish-English bilingual children were administered 24 of the 36 generated nonwords. Multiple regression and explanatory item response modeling techniques (e.g., the linear logistic test model, LLTM; Fischer, 1973) were used to estimate the impact of item features on item difficulty. The final LLTM included three item radicals and two item incidentals. Results indicated that the LLTM-predicted item difficulties were highly correlated with the Rasch item difficulties (r = .89) and accounted for a substantial amount of the variance in item difficulty (R2 = .79). The findings are discussed in terms of validity evidence in support of using the phonological loop component of Baddeley's model (2000) as a cognitive processing model for nonword repetition items and the feasibility of using the proposed radical structure as an item blueprint for the future generation of nonword repetition items.
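The LLTM idea of predicting item difficulty from item features can be sketched as a least-squares regression of simulated difficulties on a feature matrix. The actual LLTM constrains the difficulties inside the Rasch likelihood, so this is only an approximation, with hypothetical features and weights:

```python
import numpy as np

rng = np.random.default_rng(4)

# 36 items scored on 3 binary features ("radicals"); difficulty is a
# weighted sum of the features plus noise -- all values hypothetical
n_items = 36
features = rng.integers(0, 2, size=(n_items, 3)).astype(float)
true_weights = np.array([0.8, -0.5, 1.1])
difficulty = features @ true_weights + rng.normal(0.0, 0.15, n_items)

# Least-squares recovery of the feature weights (the LLTM imposes such a
# linear structure on item difficulty within the Rasch model)
X = np.column_stack([np.ones(n_items), features])
est, *_ = np.linalg.lstsq(X, difficulty, rcond=None)
r = np.corrcoef(X @ est, difficulty)[0, 1]
print(est.round(2), round(r, 2))
```

A high correlation between feature-predicted and observed difficulties, analogous to the study's r = .89, is the evidence that the feature set could serve as an item-generation blueprint.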
Contributors: Morgan, Gareth Philip (Author) / Gorin, Joanna (Thesis advisor) / Levy, Roy (Committee member) / Gray, Shelley (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The purpose of this study was to investigate the effect of complex structure on dimensionality assessment in compensatory and noncompensatory multidimensional item response theory (MIRT) models of assessment data using dimensionality assessment procedures based on conditional covariances (i.e., DETECT) and a factor analytic approach (i.e., NOHARM). The DETECT-based methods typically outperformed the NOHARM-based methods in both two-dimensional (2D) and three-dimensional (3D) compensatory MIRT conditions. The DETECT-based methods yielded a high proportion correct, especially when correlations were .60 or smaller, data exhibited 30% or less complexity, and sample sizes were larger. As the complexity increased and the sample size decreased, performance typically diminished. As the complexity increased, it also became more difficult to label the resulting sets of items from DETECT in terms of the dimensions. DETECT was consistent in the classification of simple items, but less consistent in the classification of complex items. Of the three NOHARM-based methods, χ2G/D and ALR generally outperformed RMSR. χ2G/D was more accurate when N = 500 and complexity levels were 30% or lower. As the number of items increased, ALR performance improved at a correlation of .60 and 30% or less complexity. When the data followed a noncompensatory MIRT model, the NOHARM-based methods, specifically χ2G/D and ALR, were the most accurate of all five methods. The marginal proportions for labeling sets of items as dimension-like were typically low, suggesting that the methods generally failed to label two (three) sets of items as dimension-like in 2D (3D) noncompensatory situations. The DETECT-based methods were more consistent in classifying simple items across complexity levels, sample sizes, and correlations. However, as complexity and correlation levels increased, the classification rates for all methods decreased.
In most conditions, the DETECT-based methods classified complex items as consistently as or more consistently than the NOHARM-based methods. In particular, as complexity, the number of items, and the true dimensionality increased, the DETECT-based methods were notably more consistent than any NOHARM-based method. Despite DETECT's consistency, when data follow a noncompensatory MIRT model, the NOHARM-based methods should be preferred over the DETECT-based methods for assessing dimensionality, due to the poor performance of DETECT in identifying the true dimensionality.
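The conditional-covariance intuition behind DETECT can be sketched as follows: after linearly partialling the total score out of simple-structure two-dimensional data, same-dimension item pairs retain positive covariance while cross-dimension pairs turn negative. All parameter values are hypothetical, and the linear partialling is only a rough stand-in for DETECT's conditioning:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simple-structure 2D compensatory data: items 1-10 load on dimension 1,
# items 11-20 on dimension 2, latent correlation .30 (hypothetical values)
n = 5000
theta = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], size=n)
load = np.zeros((20, 2))
load[:10, 0] = 1.2
load[10:, 1] = 1.2
prob = 1.0 / (1.0 + np.exp(-theta @ load.T))   # 2PL with zero difficulties
resp = (rng.random((n, 20)) < prob).astype(float)

# Linearly partial the total score out of each item as a rough proxy for
# conditioning on the latent composite
total = resp.sum(axis=1)
cov_all = np.cov(resp.T, total)
b = cov_all[-1, :20] / cov_all[-1, -1]
resid = resp - resp.mean(axis=0) - np.outer(total - total.mean(), b)
c = np.cov(resid.T)

within = (c[:10, :10].sum() - np.trace(c[:10, :10])) / 90    # same-dimension pairs
between = c[:10, 10:].mean()                                 # cross-dimension pairs
print(round(within, 4), round(between, 4))
```

DETECT exploits exactly this sign pattern, weighting conditional covariances by cluster membership to score candidate dimensional partitions.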
Contributors: Svetina, Dubravka (Author) / Levy, Roy (Thesis advisor) / Gorin, Joanna S. (Committee member) / Millsap, Roger (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In investigating mediating processes, researchers usually use randomized experiments and linear regression or structural equation modeling to determine if the treatment affects the hypothesized mediator and if the mediator affects the targeted outcome. However, randomizing the treatment will not yield accurate causal path estimates unless certain assumptions are satisfied. Since randomization of the mediator may not be plausible for most studies (i.e., the mediator status is not randomly assigned, but self-selected by participants), both the direct and indirect effects may be biased by confounding variables. The purpose of this dissertation is (1) to investigate the extent to which traditional mediation methods are affected by confounding variables and (2) to assess the statistical performance of several modern methods to address confounding variable effects in mediation analysis. This dissertation first reviewed the theoretical foundations of causal inference in statistical mediation analysis, modern statistical analysis for causal inference, and then described different methods to estimate causal direct and indirect effects in the presence of two post-treatment confounders. A large simulation study was designed to evaluate the extent to which ordinary regression and modern causal inference methods are able to obtain correct estimates of the direct and indirect effects when confounding variables that are present in the population are not included in the analysis. Five methods were compared in terms of bias, relative bias, mean square error, statistical power, Type I error rates, and confidence interval coverage to test how robust the methods are to the violation of the no unmeasured confounders assumption and confounder effect sizes. The methods explored were linear regression with adjustment, inverse propensity weighting, inverse propensity weighting with truncated weights, sequential g-estimation, and a doubly robust sequential g-estimation. 
Results showed that in estimating the direct and indirect effects, in general, sequential g-estimation performed the best in terms of bias, Type I error rates, power, and coverage across different confounder effect sizes, direct effect sizes, and sample sizes when all confounders were included in the estimation. When one of the two confounders was omitted from the estimation process, in general, none of the methods had acceptable relative bias in the simulation study. Omitting one of the confounders from estimation corresponded to the common case in mediation studies where no measure of a confounder is available but a confounder may affect the analysis. Failing to measure potential post-treatment confounder variables in a mediation model leads to biased estimates regardless of the analysis method used and emphasizes the importance of sensitivity analysis for causal mediation analysis.
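One of the compared methods, inverse propensity weighting, can be sketched for a binary mediator whose uptake is confounded by a post-treatment variable (hypothetical effect sizes, and the true propensities are used for clarity; the study's full design also covers g-estimation and doubly robust variants):

```python
import numpy as np

rng = np.random.default_rng(6)

n = 20000
x = rng.binomial(1, 0.5, n)                        # randomized treatment
u = rng.normal(size=n)                             # post-treatment confounder
p_m = 1.0 / (1.0 + np.exp(-(0.5 * x + 1.0 * u)))   # mediator uptake depends on u
m = rng.binomial(1, p_m)
y = 0.4 * m + 0.2 * x + 0.8 * u + rng.normal(size=n)   # true m -> y effect is 0.4

def wls_slope(m, y, x, w):
    # Weighted least squares of y on [1, m, x]; returns the coefficient of m
    X = np.column_stack([np.ones_like(y), m, x])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[1]

naive = wls_slope(m, y, x, np.ones(n))             # confounded estimate
w = m / p_m + (1 - m) / (1 - p_m)                  # inverse propensity weights
ipw = wls_slope(m, y, x, w)
print(round(naive, 2), round(ipw, 2))
```

Weighting by the inverse of the mediator's propensity breaks the mediator-confounder association in the pseudo-population, so the weighted regression recovers the m-to-y path that the unweighted regression overstates. With estimated rather than true propensities, weight truncation (as in the study) guards against extreme weights.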
Contributors: Kisbu Sakarya, Yasemin (Author) / Mackinnon, David Peter (Thesis advisor) / Aiken, Leona (Committee member) / West, Stephen (Committee member) / Millsap, Roger (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Item response theory (IRT) and related latent variable models represent modern psychometric theory, the successor to classical test theory in psychological assessment. While IRT has become prevalent in the assessment of ability and achievement, it has not been widely embraced by clinical psychologists. This appears due, in part, to psychometrists' use of unidimensional models despite evidence that psychiatric disorders are inherently multidimensional. The construct validity of unidimensional and multidimensional latent variable models was compared to evaluate the utility of modern psychometric theory in clinical assessment. Archival data consisting of 688 outpatients' presenting concerns, psychiatric diagnoses, and item level responses to the Brief Symptom Inventory (BSI) were extracted from files at a university mental health clinic. Confirmatory factor analyses revealed that models with oblique factors and/or item cross-loadings better represented the internal structure of the BSI in comparison to a strictly unidimensional model. The models were generally equivalent in their ability to account for variance in criterion-related validity variables; however, bifactor models demonstrated superior validity in differentiating between mood and anxiety disorder diagnoses. Multidimensional IRT analyses showed that the orthogonal bifactor model partitioned distinct, clinically relevant sources of item variance. Similar results were also achieved through multivariate prediction with an oblique simple structure model. Receiver operating characteristic curves confirmed improved sensitivity and specificity through multidimensional models of psychopathology. Clinical researchers are encouraged to consider these and other comprehensive models of psychological distress.
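The diagnostic-differentiation argument can be sketched with hypothetical factor scores: two groups equally elevated on a general distress factor cannot be separated on it, while a group-specific factor can, as a Mann-Whitney estimate of the ROC area shows (values invented for illustration, not drawn from the BSI data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical factor scores for two diagnostic groups: both elevated on a
# general distress factor, only one elevated on a specific (group) factor
n = 400
anxiety = rng.normal(0.0, 1.0, size=(n, 2))   # columns: [general, specific]
mood = rng.normal(0.0, 1.0, size=(n, 2))
anxiety[:, 0] += 1.0
mood[:, 0] += 1.0       # equal elevation: the general factor cannot differentiate
anxiety[:, 1] += 0.8    # the specific factor separates the anxiety group

def auc(pos, neg):
    # Mann-Whitney estimate of the area under the ROC curve
    diffs = pos[:, None] - neg[None, :]
    return np.mean(diffs > 0) + 0.5 * np.mean(diffs == 0)

auc_general = auc(anxiety[:, 0], mood[:, 0])
auc_specific = auc(anxiety[:, 1], mood[:, 1])
print(round(auc_general, 2), round(auc_specific, 2))
```

This is the bifactor model's advantage in miniature: partitioning general distress from disorder-specific variance yields scores with better sensitivity and specificity for differential diagnosis.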
Contributors: Thomas, Michael Lee (Author) / Lanyon, Richard (Thesis advisor) / Barrera, Manuel (Committee member) / Levy, Roy (Committee member) / Millsap, Roger (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The use of exams for classification purposes has become prevalent across many fields, including professional assessment for employment screening and standards-based testing in educational settings. Classification exams assign individuals to performance groups based on the comparison of their observed test scores to a pre-selected criterion (e.g., masters vs. nonmasters in dichotomous classification scenarios). The successful use of exams for classification purposes assumes at least minimal levels of accuracy of these classifications. Classification accuracy is an index that reflects the rate of correct classification of individuals into the category that contains their true ability score. Traditional methods estimate classification accuracy via methods which assume that true scores follow a four-parameter beta-binomial distribution. Recent research suggests that Item Response Theory may be a preferable alternative framework for estimating examinees' true scores and may return more accurate classifications based on these scores. It was hypothesized that test length, the location of the cut score, the distribution of items, and the distribution of examinee ability would impact the recovery of accurate estimates of classification accuracy. The current simulation study manipulated these factors to assess their potential influence on classification accuracy. Observed classification as masters vs. nonmasters, true classification accuracy, estimated classification accuracy, BIAS, and RMSE were analyzed. In addition, Analysis of Variance tests were conducted to determine whether an interrelationship existed between levels of the four manipulated factors. Results showed small values of estimated classification accuracy and increased BIAS in accuracy estimates with few items, mismatched distributions of item difficulty and examinee ability, and extreme cut scores. A significant four-way interaction between manipulated variables was observed.
In addition to interpretations of these findings and explanation of potential causes for the recovered values, recommendations that inform practice and avenues for future research are provided.
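The notion of classification accuracy under an IRT model can be sketched by simulating Rasch-type responses and comparing observed pass/fail status against classification based on true (expected) scores. This is a hypothetical setup, not the study's design:

```python
import numpy as np

rng = np.random.default_rng(8)

# Rasch-type responses: 20 items, difficulties spread around the cut score
n, n_items, cut = 10000, 20, 10
theta = rng.normal(0.0, 1.0, n)
b = np.linspace(-2.0, 2.0, n_items)
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b)))
score = (rng.random((n, n_items)) < p).sum(axis=1)

# "True" classification from the expected score given ability; observed
# classification from the realized number-correct score
true_master = p.sum(axis=1) >= cut
obs_master = score >= cut
accuracy = np.mean(true_master == obs_master)
print(round(accuracy, 3))   # well above chance, below 1.0
```

Misclassification concentrates among examinees whose true scores sit near the cut, which is why the abstract's factors (test length, cut location, and the match between item difficulty and ability distributions) drive the accuracy of both the classifications and their estimates.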
Contributors: Kunze, Katie (Author) / Gorin, Joanna (Thesis advisor) / Levy, Roy (Thesis advisor) / Green, Samuel (Committee member) / Arizona State University (Publisher)
Created: 2013