Matching Items (23)

Impact of violations of longitudinal measurement invariance in latent growth models and autoregressive quasi-simplex models

Description

In order to analyze data from an instrument administered at multiple time points, it is common practice to form composites of the items at each wave and to fit a longitudinal model to the composites. The advantage of using composites of items is that smaller sample sizes are required, in contrast to second-order models that include both the measurement and the structural relationships among the variables. However, the use of composites assumes that longitudinal measurement invariance holds; that is, it is assumed that the relationships among the items and the latent variables remain constant over time. Previous studies of latent growth models (LGM) have shown that when longitudinal metric invariance is violated, the parameter estimates are biased and mistaken conclusions about growth can be reached. The purpose of the current study was to examine the impact of non-invariant loadings and non-invariant intercepts on two longitudinal models: the LGM and the autoregressive quasi-simplex model (AR quasi-simplex). A second purpose was to determine whether there are conditions in which researchers can reach adequate conclusions about stability and growth even in the presence of violations of invariance. A Monte Carlo simulation study was conducted to achieve these purposes. The method consisted of generating items under a linear curve of factors model (COFM) or under the AR quasi-simplex. Composites of the items were formed at each time point and analyzed with a linear LGM or an AR quasi-simplex model. The results showed that the AR quasi-simplex model yielded biased path coefficients only in the conditions with large violations of invariance. The fit of the AR quasi-simplex was not affected by violations of invariance. In general, the growth parameter estimates of the LGM were biased under violations of invariance.
Further, in the presence of non-invariant loadings, the rejection rates of the hypothesis of linear growth increased as the proportion of non-invariant items and the magnitude of violations of invariance increased. A discussion of the results and limitations of the study is provided, as well as general recommendations.
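The core manipulation described above, forming composites from items whose loadings change across waves, can be sketched with a toy data generator. This is an illustrative sketch with made-up loadings, sample size, and error variances, not the study's actual design: it shows how a single non-invariant loading shifts the composite's variance across waves even when the factor distribution is identical.

```python
import random
import statistics

random.seed(1)

def generate_wave(n, loadings, factor_sd=1.0, error_sd=0.5):
    """Generate item responses y_ij = loading_j * eta_i + e_ij for one wave."""
    data = []
    for _ in range(n):
        eta = random.gauss(0.0, factor_sd)
        items = [lam * eta + random.gauss(0.0, error_sd) for lam in loadings]
        data.append(items)
    return data

n = 5000
loadings_w1 = [0.8, 0.8, 0.8, 0.8]   # invariant baseline at wave 1
loadings_w2 = [0.8, 0.8, 0.8, 0.4]   # one non-invariant loading at wave 2

composite_w1 = [sum(items) for items in generate_wave(n, loadings_w1)]
composite_w2 = [sum(items) for items in generate_wave(n, loadings_w2)]

# With identical factor distributions, the composite variance still shifts
# across waves because a loading changed: apparent "change" in the composite
# that is really measurement non-invariance.
var1 = statistics.variance(composite_w1)
var2 = statistics.variance(composite_w2)
print(round(var1, 2), round(var2, 2))
```

A longitudinal model fit to these composites would attribute the wave-to-wave shift to the construct rather than to the measurement model, which is the mechanism behind the biased estimates the study reports.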

Date Created
  • 2013

Estimating causal direct and indirect effects in the presence of post-treatment confounders: a simulation study

Description

In investigating mediating processes, researchers usually use randomized experiments and linear regression or structural equation modeling to determine whether the treatment affects the hypothesized mediator and whether the mediator affects the targeted outcome. However, randomizing the treatment will not yield accurate causal path estimates unless certain assumptions are satisfied. Since randomization of the mediator may not be plausible for most studies (i.e., mediator status is not randomly assigned, but self-selected by participants), both the direct and indirect effects may be biased by confounding variables. The purpose of this dissertation is (1) to investigate the extent to which traditional mediation methods are affected by confounding variables and (2) to assess the statistical performance of several modern methods that address confounding variable effects in mediation analysis. This dissertation first reviewed the theoretical foundations of causal inference in statistical mediation analysis and modern statistical analysis for causal inference, and then described different methods to estimate causal direct and indirect effects in the presence of two post-treatment confounders. A large simulation study was designed to evaluate the extent to which ordinary regression and modern causal inference methods are able to obtain correct estimates of the direct and indirect effects when confounding variables that are present in the population are not included in the analysis. Five methods were compared in terms of bias, relative bias, mean square error, statistical power, Type I error rates, and confidence interval coverage to test how robust the methods are to violations of the no-unmeasured-confounders assumption and to different confounder effect sizes. The methods explored were linear regression with adjustment, inverse propensity weighting, inverse propensity weighting with truncated weights, sequential g-estimation, and doubly robust sequential g-estimation.
Results showed that in estimating the direct and indirect effects, sequential g-estimation generally performed the best in terms of bias, Type I error rates, power, and coverage across different confounder effect sizes, direct effect sizes, and sample sizes when all confounders were included in the estimation. When one of the two confounders was omitted from the estimation process, none of the methods had acceptable relative bias in the simulation study. Omitting one of the confounders from estimation corresponds to the common case in mediation studies where no measure of a confounder is available but the confounder may affect the analysis. Failing to measure potential post-treatment confounder variables in a mediation model leads to biased estimates regardless of the analysis method used, which emphasizes the importance of sensitivity analysis for causal mediation analysis.
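One of the compared approaches, inverse propensity weighting, can be sketched for the simplified case of a binary mediator. Everything here is hypothetical: the data-generating coefficients, the randomized treatment X, the post-treatment confounder C, and the assumption that the mediator's propensity is known rather than estimated. The point is to show mechanically how weighting by the inverse of the mediator's propensity removes the confounder's influence on the mediator-outcome path, which a naive comparison does not.

```python
import math
import random

random.seed(7)

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical model: randomized treatment X, post-treatment confounder C
# (affected by X, affecting both M and Y), binary mediator M, outcome Y
# with a true M -> Y effect of b = 1.0.
n, b_true = 50000, 1.0
rows = []
for _ in range(n):
    x = random.randint(0, 1)
    c = random.gauss(0.3 * x, 1.0)
    p_m = logistic(0.5 * x + 1.0 * c)          # mediator propensity
    m = 1 if random.random() < p_m else 0
    y = b_true * m + 0.5 * x + 1.0 * c + random.gauss(0.0, 0.5)
    rows.append((x, c, m, y, p_m))

def mean_diff(rows, weighted):
    """Estimate E[Y|M=1] - E[Y|M=0] in the X=1 arm, optionally IPW-weighted."""
    sums = {0: [0.0, 0.0], 1: [0.0, 0.0]}     # m -> [sum of w*y, sum of w]
    for x, c, m, y, p_m in rows:
        if x != 1:
            continue
        w = (1.0 / (p_m if m == 1 else 1.0 - p_m)) if weighted else 1.0
        sums[m][0] += w * y
        sums[m][1] += w
    return sums[1][0] / sums[1][1] - sums[0][0] / sums[0][1]

naive = mean_diff(rows, weighted=False)   # confounded by C, overestimates b
ipw = mean_diff(rows, weighted=True)      # reweights so M is independent of C
print(round(naive, 2), round(ipw, 2))
```

In practice the propensity must be estimated, and extreme weights motivate the truncated-weights variant the study also examines.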

Date Created
  • 2013

Obtaining accurate estimates of the mediated effect with and without prior information

Description

Research methods based on the frequentist philosophy use prior information in a priori power calculations and when determining the necessary sample size for the detection of an effect, but not in statistical analyses. Bayesian methods incorporate prior knowledge into the statistical analysis in the form of a prior distribution. When prior information about a relationship is available, the estimates obtained can differ drastically depending on the choice of Bayesian or frequentist method. Study 1 in this project compared the performance of five methods for obtaining interval estimates of the mediated effect in terms of coverage, Type I error rate, empirical power, interval imbalance, and interval width at N = 20, 40, 60, 100, and 500. In Study 1, Bayesian methods with informative prior distributions performed almost identically to Bayesian methods with diffuse prior distributions, and had more power than normal theory confidence limits, lower Type I error rates than the percentile bootstrap, and coverage, interval width, and imbalance comparable to normal theory, percentile bootstrap, and bias-corrected bootstrap confidence limits. Study 2 evaluated whether a Bayesian method with true parameter values as prior information outperforms the other methods. The findings indicate that with true parameter values as the prior information, Bayesian credibility intervals with informative prior distributions have more power, less imbalance, and narrower intervals than Bayesian credibility intervals with diffuse prior distributions, normal theory, percentile bootstrap, and bias-corrected bootstrap confidence limits. Study 3 examined how much power increases when the precision of the prior distribution for either the action or the conceptual path in mediation analysis is increased by a factor of ten.
Power generally increases with increases in precision, but there are many sample size and parameter value combinations for which a tenfold increase in precision does not lead to a substantial increase in power.
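The mechanics behind informative versus diffuse priors can be sketched with a conjugate-normal update, where posterior precision is the sum of prior precision and data precision. The path estimate, standard error, and prior settings below are made-up illustrations, not values from the study, but the sketch shows why an informative prior centered near the truth shortens the credibility interval.

```python
import math

def posterior_interval(prior_mean, prior_var, data_mean, data_var_of_mean):
    """Conjugate normal update for a mean with known sampling variance;
    returns a 95% credibility interval."""
    prior_prec = 1.0 / prior_var
    data_prec = 1.0 / data_var_of_mean
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * data_mean)
    half = 1.96 * math.sqrt(post_var)
    return post_mean - half, post_mean + half

# Hypothetical path coefficient: sample estimate 0.30 with standard error 0.10.
diffuse = posterior_interval(0.0, 100.0, 0.30, 0.10 ** 2)      # near-flat prior
informative = posterior_interval(0.30, 0.01, 0.30, 0.10 ** 2)  # prior at the true value

def width(interval):
    return interval[1] - interval[0]

print(round(width(diffuse), 3), round(width(informative), 3))
```

A narrower interval around a nonzero effect translates directly into higher power to exclude zero, which is the pattern Study 2 reports for true-value priors.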

Date Created
  • 2014

Stability of grammaticality judgments in German-English code-switching

Description

Code-switching, a bilingual language phenomenon that may be defined as the concurrent use of two or more languages by fluent speakers, is frequently misunderstood and stigmatized. Given that the majority of the world's population is bilingual rather than monolingual, the study of code-switching provides a fundamental window into human cognition and the systematic structural outcomes of language contact. Intra-sentential code-switching is said to occur systematically, constrained by the lexicons of each respective language. In order to access information about the acceptability of certain switches, linguists often elicit grammaticality judgments from bilingual informants. In current linguistic research, grammaticality judgment tasks are often scrutinized on account of the instability of responses to individual sentences. Although this claim is largely motivated by research on monolingual strings under a variety of conditions, the stability of code-switched grammaticality judgment data given by bilingual informants has yet to be systematically investigated. By comparing grammaticality judgment data from three groups of German-English bilinguals, Group A (N=50), Group B (N=34), and Group C (N=40), this thesis investigates the stability of grammaticality judgments in code-switching over time, as well as potential differences between judgments of spoken and written code-switching stimuli. Using a web-based survey, informants were asked to rate each code-switched token. The results of a correlated-groups t test attest to the stability of code-switched judgment data over time (p = .271) and to the validity of the methodologies currently in place.
Furthermore, results from the study indicated no statistically significant difference between spoken and written judgment data, as computed with an independent-groups t test (p = .186), contributing a valuable fact to the body of data-collection practices in bilingualism research. Results from this study indicate that there are significant differences attributable to language dominance for specific token types, as calculated using an ANOVA. However, when group composite scores of all tokens were used, the ANOVA returned a non-significant value of .234, suggesting that bilinguals with differing language dominance profiles rank the tokens in a similar manner. The findings from this study should help clarify current practices in code-switching research.

Date Created
  • 2011

A study of statistical power and type I errors in testing a factor analytic model for group differences in regression intercepts

Description

In the past, it has been assumed that measurement invariance and predictive invariance are consistent, so that if one form of invariance holds the other should also hold. However, some studies have shown that both forms of invariance hold simultaneously only under certain conditions, such as factorial invariance and invariance in the common factor variances. The present research examined Type I error rates and the statistical power of a method that detects violations of the factorial invariant model in the presence of group differences in regression intercepts, under different sample sizes and different numbers of predictors (one or two). Data were simulated under two models: model A allowed only differences in the factor means, while model B violated invariance. A factorial invariant model was fitted to the data. The Type I error rate was defined as the proportion of samples in which the hypothesis of invariance was incorrectly rejected, and statistical power was defined as the proportion of samples in which the hypothesis of factorial invariance was correctly rejected. In the one-predictor case, the results show that the chi-square statistic has low power to detect violations of the model. Unexpected and systematic results were obtained regarding negative unique variance in the predictor. It is proposed that negative unique variance in the predictor can be used as an indication of measurement bias instead of the chi-square fit statistic with sample sizes of 500 or more. The two-predictor case showed larger power. In both cases Type I error rates were as expected. The implications of the results and some suggestions for increasing the power of the method are provided.
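The rejection-rate definitions used above can be illustrated with a toy simulation. The chi-square-type statistic and shift parameter here are illustrative stand-ins, not the study's actual fit statistic or design: under no violation the rejection rate estimates the Type I error, and under a violation it estimates power.

```python
import random

random.seed(3)

def rejection_rate(n_reps, shift, crit=3.841, df=1):
    """Proportion of replications whose chi-square-type statistic exceeds crit.

    With shift=0 this estimates the Type I error rate against the nominal .05
    level; with shift>0 the statistic is noncentral and the proportion
    estimates power, growing with the size of the violation."""
    rejections = 0
    for _ in range(n_reps):
        stat = sum(random.gauss(shift, 1.0) ** 2 for _ in range(df))
        if stat > crit:
            rejections += 1
    return rejections / n_reps

type1 = rejection_rate(20000, shift=0.0)   # should be near the nominal .05
power = rejection_rate(20000, shift=2.0)   # increases with violation magnitude
print(round(type1, 3), round(power, 3))
```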

Date Created
  • 2010

Planned missing data in mediation analysis

Description

This dissertation examines a planned missing data design in the context of mediational analysis. The study considered a scenario in which the high cost of an expensive mediator limited sample size, but in which less expensive mediators could be gathered on a larger sample size. Simulated multivariate normal data were generated from a latent variable mediation model with three observed indicator variables, M1, M2, and M3. Planned missingness was implemented on M1 under the missing completely at random mechanism. Five analysis methods were employed: latent variable mediation model with all three mediators as indicators of a latent construct (Method 1), auxiliary variable model with M1 as the mediator and M2 and M3 as auxiliary variables (Method 2), auxiliary variable model with M1 as the mediator and M2 as a single auxiliary variable (Method 3), maximum likelihood estimation including all available data but incorporating only mediator M1 (Method 4), and listwise deletion (Method 5).

The main outcome of interest was empirical power to detect the mediated effect. The main effects of mediation effect size, sample size, and missing data rate performed as expected, with power increasing for increasing mediation effect sizes, increasing sample sizes, and decreasing missing data rates. Consistent with expectations, power was greatest for analysis methods that included all three mediators, and power decreased with analysis methods that included less information. Across all design cells relative to the complete data condition, Method 1 with 20% missingness on M1 produced only a 2.06% loss in power for the mediated effect; with 50% missingness, a 6.02% loss; and with 80% missingness, only an 11.86% loss. Method 2 exhibited a 20.72% power loss at 80% missingness, even though the total amount of data utilized was the same as for Method 1. Methods 3 through 5 exhibited greater power loss. Compared to an average power loss of 11.55% across all levels of missingness for Method 1, average power losses for Methods 3, 4, and 5 were 23.87%, 29.35%, and 32.40%, respectively. In conclusion, planned missingness in a multiple mediator design may permit higher quality characterization of the mediator construct at feasible cost.
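The planned-missingness manipulation can be sketched as follows. The variable names mirror the description (M1, M2, M3), but the data, sample size, and rate are made up. The sketch contrasts the effective sample size under listwise deletion (Method 5) with the full sample retained by methods that use all available data (Methods 1 through 4).

```python
import random

random.seed(11)

n = 1000
# Hypothetical complete data: three mediator indicators per participant.
data = [{"M1": random.gauss(0, 1), "M2": random.gauss(0, 1), "M3": random.gauss(0, 1)}
        for _ in range(n)]

def impose_mcar(data, var, rate):
    """Set `var` to None for a randomly chosen `rate` fraction of cases,
    i.e., missing completely at random (MCAR)."""
    out = [dict(row) for row in data]
    for i in random.sample(range(len(out)), int(rate * len(out))):
        out[i][var] = None
    return out

missing50 = impose_mcar(data, "M1", 0.50)   # 50% planned missingness on M1

listwise_n = sum(1 for row in missing50 if row["M1"] is not None)
available_n = len(missing50)   # FIML / auxiliary-variable methods keep every case
print(listwise_n, available_n)
```

Because the missingness is by design and MCAR, methods that retain all cases recover most of the lost information, which is why the reported power losses for Method 1 stay small even at high missingness rates.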

Date Created
  • 2015

The structure of cyber and traditional aggression: an integrated conceptualization

Description

The phenomenon of cyberbullying has captured the attention of educators and researchers alike, as it has been associated with multiple aversive outcomes, including suicide. Young people today have easy access to computer-mediated communication (CMC) and frequently use it to harass one another -- a practice that many researchers have equated with cyberbullying. However, there is great disagreement among researchers as to whether intentional harmful actions carried out by way of CMC constitute cyberbullying, and some authors have argued that "cyber-aggression" is a more accurate term for this phenomenon. Disagreement over cyberbullying's definition and methodological inconsistencies, including the choice of questionnaire items, have resulted in highly variable results across cyberbullying studies. Researchers agree, however, that cyber and traditional forms of aggression are closely related phenomena, and have suggested that they may be extensions of one another. This research developed a comprehensive set of items to span cyber-aggression's content domain in order to 1) fully address all types of cyber-aggression, and 2) assess the interrelated nature of cyber and traditional aggression. These items were administered to 553 middle school students in a central Illinois school district. Results from confirmatory factor analyses suggested that cyber-aggression is best conceptualized as integrated with traditional aggression, and that cyber and traditional aggression share two dimensions: direct-verbal and relational aggression. Additionally, results indicated that all forms of aggression are a function of general aggressive tendencies. This research identified two synthesized models combining cyber and traditional aggression into a shared framework that demonstrated excellent fit to the item data.

Date Created
  • 2013

Posterior predictive model checking in Bayesian networks

Description

This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex performance assessment within a digital-simulation educational context grounded in theories of cognition and learning. BN models were manipulated along two factors: latent variable dependency structure and number of latent classes. Distributions of posterior predictive p-values (PPP-values) served as the primary outcome measure and were summarized in graphical presentations, by median values across replications, and by proportions of replications in which the PPP-values were extreme. An effect size measure for PPMC was introduced as a supplemental numerical summary to the PPP-value. Consistent with previous PPMC research, all investigated fit functions tended to perform conservatively, but the Standardized Generalized Dimensionality Discrepancy Measure (SGDDM), Yen's Q3, and the Hierarchy Consistency Index (HCI) only mildly so. Adequate power to detect at least some types of misfit was demonstrated by SGDDM, Q3, HCI, the Item Consistency Index (ICI), and, to a lesser extent, Deviance, while proportion correct (PC), a chi-square-type item-fit measure, the Ranked Probability Score (RPS), and Good's Logarithmic Scale (GLS) were powerless across all investigated factors. Bivariate SGDDM and Q3 were found to provide powerful and detailed feedback for all investigated types of misfit.
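The PPP-value machinery itself can be sketched for a deliberately simple case. The normal-mean model and the variance discrepancy below are illustrative stand-ins for the BN models and fit functions in the study: a PPP-value is the proportion of replicated datasets, drawn from the posterior predictive distribution, whose discrepancy is at least as large as the realized one, with values near 0 or 1 flagging misfit.

```python
import math
import random
import statistics

random.seed(5)

# Hypothetical observed data that a normal(mu, 1) model fits poorly:
# its spread is wider than the model allows.
observed = [random.gauss(0.0, 2.0) for _ in range(100)]

def ppp_value(observed, n_draws=500):
    """PPP-value for the discrepancy 'sample variance' under a normal(mu, 1)
    model with a flat prior on mu (posterior: mu ~ normal(xbar, 1/n))."""
    n = len(observed)
    xbar = statistics.mean(observed)
    realized = statistics.variance(observed)
    extreme = 0
    for _ in range(n_draws):
        mu = random.gauss(xbar, 1.0 / math.sqrt(n))       # posterior draw
        replicated = [random.gauss(mu, 1.0) for _ in range(n)]
        if statistics.variance(replicated) >= realized:   # compare discrepancies
            extreme += 1
    return extreme / n_draws

ppp = ppp_value(observed)
print(ppp)
```

Here the replicated variances cluster near 1 while the realized variance is near 4, so the PPP-value lands near 0, the "extreme" outcome tallied across replications in the study.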

Date Created
  • 2014

Communicating with compassion: the exploratory factor analysis and primary validation process of the Compassionate Communication Scale

Description

The purpose of this dissertation was to develop a Compassionate Communication Scale (CCS) by conducting a series of studies. The first study used qualitative data to identify and develop initial scale items. A series of follow-up studies used exploratory factor analysis to investigate the underlying structure of the CCS. A three-factor structure emerged: compassionate conversation, such as listening, letting the distressed person disclose feelings, and making empathetic remarks; compassionate touch, such as holding someone's hand or patting someone's back; and compassionate messaging, such as posting an encouraging message on a social networking site or sending a sympathetic email. The next study tested convergent and divergent validity by determining how the three forms of compassionate communication associate with various traits. Compassionate conversation was positively related to compassion, empathetic concern, perspective taking, emotional intelligence, social expressivity, emotional expressivity, and benevolence, and negatively related to verbal aggressiveness and narcissism. Compassionate touch was positively correlated with compassion, empathetic concern, perspective taking, emotional intelligence, social expressivity, emotional expressivity, and benevolence, and uncorrelated with verbal aggressiveness and narcissism. Finally, compassionate messaging was positively correlated with social expressivity and emotional expressivity, and uncorrelated with verbal aggressiveness and narcissism. The next study focused on cross-validation and criterion-related validity. Cross-validation was supported by correlations showing that self-reports of a person's compassionate communication were positively related to a friend's or romantic partner's report of that person's compassionate communication. The test for criterion-related validity examined whether compassionate communication predicts relational satisfaction.
Regression analyses revealed that people were more relationally satisfied when they perceived themselves to use compassionate conversation, when they perceived their partner to use compassionate conversation, and when their partner reported using compassionate conversation. This finding did not extend to compassionate touch or compassionate messaging. In fact, in one regression analysis, people reported more relational satisfaction when they perceived that their partners used high levels of compassionate conversation and low levels of compassionate touch. Overall, the analyses suggest that of the three forms of compassionate communication, compassionate conversation is most strongly related to relational satisfaction. Taken together, this series of studies provides initial evidence for the validity of the CCS.

Date Created
  • 2013

Nonword item generation: predicting item difficulty in nonword repetition

Description

The current study employs item difficulty modeling procedures to evaluate the feasibility of potential generative item features for nonword repetition. Specifically, the extent to which the manipulated item features affect the theoretical mechanisms that underlie nonword repetition accuracy was estimated. Generative item features were based on the phonological loop component of Baddeley's model of working memory, which addresses phonological short-term memory (Baddeley, 2000, 2003; Baddeley & Hitch, 1974). Using researcher-developed software, nonwords were generated to adhere to the phonological constraints of Spanish. Thirty-six nonwords were chosen based on the set of item features identified by the proposed cognitive processing model. Using a planned missing data design, 215 Spanish-English bilingual children were administered 24 of the 36 generated nonwords. Multiple regression and explanatory item response modeling techniques (e.g., the linear logistic test model, LLTM; Fischer, 1973) were used to estimate the impact of item features on item difficulty. The final LLTM included three item radicals and two item incidentals. Results indicated that the LLTM-predicted item difficulties were highly correlated with the Rasch item difficulties (r = .89) and accounted for a substantial amount of the variance in item difficulty (R2 = .79). The findings are discussed in terms of validity evidence in support of using the phonological loop component of Baddeley's model (2000) as a cognitive processing model for nonword repetition items and the feasibility of using the proposed radical structure as an item blueprint for the future generation of nonword repetition items.
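The item difficulty modeling idea can be sketched in miniature. An LLTM decomposes Rasch item difficulties into a weighted sum of item features; this sketch approximates that idea with a simple least-squares fit of difficulty on a single feature count. The items, feature counts, and difficulties below are entirely hypothetical, chosen only to show how a feature-based model can explain most of the variance in item difficulty.

```python
# Hypothetical nonword items: (count of difficulty-driving features, Rasch difficulty).
items = [
    (0, -1.2), (0, -0.9), (1, -0.4), (1, -0.2), (2, 0.1),
    (2, 0.3), (3, 0.8), (3, 1.0), (4, 1.4), (4, 1.6),
]
features = [f for f, _ in items]
difficulty = [d for _, d in items]

mean_f = sum(features) / len(features)
mean_d = sum(difficulty) / len(difficulty)
cov = sum((f - mean_f) * (d - mean_d) for f, d in items)
var_f = sum((f - mean_f) ** 2 for f in features)
var_d = sum((d - mean_d) ** 2 for d in difficulty)

slope = cov / var_f                      # estimated weight per feature
r_squared = cov ** 2 / (var_f * var_d)   # variance in difficulty explained

print(round(slope, 2), round(r_squared, 2))
```

A high r-squared here plays the role of the reported R2 = .79: it indicates that the posited features, and hence the blueprint built from them, capture what makes items hard.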

Date Created
  • 2011