Matching Items (7)

Impact of violations of longitudinal measurement invariance in latent growth models and autoregressive quasi-simplex models

Description

To analyze data from an instrument administered at multiple time points, it is common practice to form composites of the items at each wave and to fit a longitudinal model to the composites. The advantage of using composites of items is that smaller sample sizes are required in contrast to second-order models that include both the measurement and the structural relationships among the variables. However, the use of composites assumes that longitudinal measurement invariance holds; that is, it is assumed that the relationships among the items and the latent variables remain constant over time. Previous studies of latent growth models (LGM) have shown that when longitudinal metric invariance is violated, the parameter estimates are biased and mistaken conclusions about growth can be drawn. The purpose of the current study was to examine the impact of non-invariant loadings and non-invariant intercepts on two longitudinal models: the LGM and the autoregressive quasi-simplex model (AR quasi-simplex). A second purpose was to determine whether there are conditions in which researchers can reach adequate conclusions about stability and growth even in the presence of violations of invariance. A Monte Carlo simulation study was conducted to achieve these purposes. The method consisted of generating items under a linear curve of factors model (COFM) or under the AR quasi-simplex. Composites of the items were formed at each time point and analyzed with a linear LGM or an AR quasi-simplex model. The results showed that the AR quasi-simplex model yielded biased path coefficients only in the conditions with large violations of invariance, and its fit was not affected by violations of invariance. In general, the growth parameter estimates of the LGM were biased under violations of invariance.
Further, in the presence of non-invariant loadings, the rejection rates of the hypothesis of linear growth increased with both the proportion of non-invariant items and the magnitude of the violations of invariance. A discussion of the results and limitations of the study is provided, as well as general recommendations.
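The data-generation step described in this abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the study's actual code: the six items, four waves, 0.8 baseline loading, and 0.3-per-wave loading drift on two items are all assumed values chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_items, n_waves = 500, 6, 4

# Each person's latent factor follows a linear trajectory across waves.
intercepts = rng.normal(0.0, 1.0, n)
slopes = rng.normal(0.5, 0.3, n)

composites = np.empty((n, n_waves))
for t in range(n_waves):
    eta = intercepts + slopes * t              # latent factor score at wave t
    loadings = np.full(n_items, 0.8)
    loadings[:2] += 0.3 * t                    # invariance violation: two items' loadings drift over time
    items = np.outer(eta, loadings) + rng.normal(0.0, 0.6, size=(n, n_items))
    composites[:, t] = items.mean(axis=1)      # observed composite score per wave
```

A longitudinal model (an LGM or AR quasi-simplex) would then be fit to `composites`, which silently treats the drifting loadings as if they were constant across waves.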

Date Created
  • 2013

Model criticism for growth curve models via posterior predictive model checking

Description

Although models for describing longitudinal data have become increasingly sophisticated, the criticism of even foundational growth curve models remains challenging. The challenge arises from the need to disentangle data-model misfit at multiple and interrelated levels of analysis. Using posterior predictive model checking (PPMC)—a popular Bayesian framework for model criticism—the performance of several discrepancy functions was investigated in a Monte Carlo simulation study. The discrepancy functions of interest included two types of conditional concordance correlation (CCC) functions, two types of R2 functions, two types of standardized generalized dimensionality discrepancy (SGDDM) functions, the likelihood ratio (LR), and the likelihood ratio difference test (LRT). Key outcomes included effect sizes of the design factors on the realized values of discrepancy functions, distributions of posterior predictive p-values (PPP-values), and the proportion of extreme PPP-values.

In terms of the realized values, the behavior of the CCC and R2 functions was generally consistent with prior research. However, as diagnostics, these functions were extremely conservative even when some aspect of the data was unaccounted for. In contrast, the conditional SGDDM (SGDDMC), LR, and LRT were generally sensitive to the underspecifications investigated in this work on all outcomes considered. Although the proportions of extreme PPP-values for these functions tended to increase in null situations for non-normal data, this behavior may have reflected the true misfit that resulted from the specification of normal prior distributions. Importantly, the LR and, to a greater extent, the SGDDMC exhibited some potential for untangling the sources of data-model misfit. Owing to the connections of growth curve models to the more fundamental frameworks of multilevel modeling, structural equation models with a mean structure, and Bayesian hierarchical models, the results of the current work may have broader implications that warrant further research.
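The PPMC machinery behind PPP-values can be sketched generically: for each posterior draw, simulate a replicated dataset and compare its discrepancy to that of the observed data. The toy normal-mean model, the variance discrepancy, and all numeric settings below are assumptions for illustration, not the discrepancy functions studied here.

```python
import numpy as np

rng = np.random.default_rng(1)

def ppmc_pvalue(y, posterior_draws, discrepancy, simulate):
    """Posterior predictive p-value: the proportion of posterior draws for which
    the discrepancy of replicated data meets or exceeds that of the observed data."""
    extreme = 0
    for theta in posterior_draws:
        y_rep = simulate(theta, rng)
        if discrepancy(y_rep, theta) >= discrepancy(y, theta):
            extreme += 1
    return extreme / len(posterior_draws)

# Toy illustration: normal mean model with known variance, sample variance as discrepancy.
y = rng.normal(0.0, 1.0, 100)
draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), 200)  # crude posterior for the mean
ppp = ppmc_pvalue(
    y, draws,
    discrepancy=lambda data, mu: np.var(data),
    simulate=lambda mu, rng: rng.normal(mu, 1.0, 100),
)
```

PPP-values near 0.5 suggest the discrepancy is well reproduced; values piling up near 0 or 1 flag the kind of misfit the functions above are designed to detect.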

Date Created
  • 2015

Time metric in latent difference score models

Description

Time metric is an important consideration for all longitudinal models because it can influence the interpretation of estimates, parameter estimate accuracy, and model convergence in longitudinal models with latent variables. Currently, the literature on latent difference score (LDS) models does not discuss the importance of time metric. Furthermore, there is little research using simulations to investigate LDS models. This study examined the influence of time metric on model estimation, interpretation, parameter estimate accuracy, and convergence in LDS models using empirical simulations. Results indicated that for a time structure with a true time metric in which participants had different starting points and unequally spaced intervals, LDS models fit with a restructured and less informative time metric yielded biased parameter estimates. However, models fit with the true time metric were less likely to converge than models fit with the restructured time metric, likely because of missing data. When participants had different starting points but equally spaced intervals, LDS models fit with a restructured time metric yielded biased estimates of the intercept means, while all other parameter estimates were unbiased; models fit with the true time metric again converged less often than those fit with the restructured time metric, likewise because of missing data. The findings of this study support prior research on time metric in longitudinal models, and further research should examine these findings under alternative conditions. The importance of these findings for substantive researchers is discussed.
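For context, a common univariate latent difference score specification (a generic sketch; the notation is illustrative and not necessarily the parameterization used in this study) expresses each observed score as a latent level plus error, with change between adjacent occasions carried by a latent difference:

```latex
y_{t} = \eta_{t} + \epsilon_{t}, \qquad
\eta_{t} = \eta_{t-1} + \Delta\eta_{t}, \qquad
\Delta\eta_{t} = \alpha s + \beta \, \eta_{t-1}
```

Here $s$ is a constant-change (slope) factor and $\beta$ a proportional-change parameter. Because $\Delta\eta_{t}$ is defined between adjacent occasions, how those occasions are spaced, i.e., the time metric, directly shapes what the change parameters mean.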

Date Created
  • 2016

Determining appropriate sample sizes and their effects on key parameters in longitudinal three-level models

Description

Through a two-study simulation design with different design conditions (the sample size at level 1 (L1) was set to 3, the level 2 (L2) sample size ranged from 10 to 75, the level 3 (L3) sample size ranged from 30 to 150, the intraclass correlation (ICC) ranged from 0.10 to 0.50, and model complexity ranged from one to three predictors), this study intends to provide general guidelines about adequate sample sizes at all three levels under varying ICC conditions for a viable three-level HLM analysis (e.g., reasonably unbiased and accurate parameter estimates). The data-generating parameters were obtained using a large-scale longitudinal data set from North Carolina, provided by the National Center on Assessment and Accountability for Special Education (NCAASE). I discuss ranges of sample sizes that are inadequate or adequate with respect to convergence, absolute bias, relative bias, root mean squared error (RMSE), and coverage of individual parameter estimates. The current study, with the help of a detailed two-part simulation design covering various sample sizes, levels of model complexity, and ICCs, provides various options for adequate sample sizes under different conditions. This study emphasizes that adequate sample sizes at L1, L2, and L3 can be adjusted according to different interests in parameter estimates and different ranges of acceptable absolute bias, relative bias, RMSE, and coverage. Under different model complexity and varying ICC conditions, this study aims to help researchers identify the L1, L2, or L3 sample size, or some combination of them, as the source of variation in absolute bias, relative bias, RMSE, or coverage proportions for a given parameter estimate. This assists researchers in making better decisions when selecting adequate sample sizes for a three-level HLM analysis.
A limitation of the study was the use of a single distribution for the dependent and explanatory variables; different types of distributions might result in different sample size recommendations.
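The evaluation criteria named above can be computed from simulation replicates in a few lines. This helper is an illustrative sketch, not the study's code; the function name, the example inputs, and the 95% normal-theory intervals are assumptions.

```python
import numpy as np

def evaluate(estimates, ses, true_value, z=1.96):
    """Summarize simulation replicates for one parameter estimate:
    absolute bias, relative bias, RMSE, and 95% CI coverage."""
    estimates = np.asarray(estimates, dtype=float)
    ses = np.asarray(ses, dtype=float)
    abs_bias = estimates.mean() - true_value
    covered = (estimates - z * ses <= true_value) & (true_value <= estimates + z * ses)
    return {
        "abs_bias": abs_bias,
        "rel_bias": abs_bias / true_value,
        "rmse": float(np.sqrt(np.mean((estimates - true_value) ** 2))),
        "coverage": float(covered.mean()),
    }

# Three hypothetical replicate estimates of a parameter whose true value is 0.50.
summary = evaluate([0.52, 0.48, 0.55], [0.05, 0.05, 0.05], true_value=0.50)
```

In a full study these summaries would be tabulated per parameter across every L1/L2/L3 sample size, ICC, and model-complexity cell.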

Date Created
  • 2016

Friends of my enemies: a longitudinal investigation into supply base management

Description

In this dissertation research, I expand the definition of the supply network to include the buying firm’s competitors. Just as one buyer-supplier relationship impacts all other relationships within the network, the presence of competitor-supplier relationships must also impact the focal buying firm. Therefore, the concept of a “competitive network” made up of a focal firm, its competitors, and all of their combined suppliers is introduced. Utilizing a unique longitudinal dataset, this research explores how organic structural changes within this new, many-to-many supply network impact firm performance. The investigation begins by studying the change in the number of suppliers used by global auto manufacturers between 2004 and 2013. Following the Great Recession of 2008-09, firms have been growing the number of suppliers at more than twice the rate at which they had been reducing suppliers just a few years prior. The second phase of the research explores the structural changes to the network resulting from this explosive growth in the number of suppliers. The final investigation examines a different flow, financial flow, and evaluates its association with firm performance. Overall, this dissertation research demonstrates the value of aggregating individual supply networks into a macro-network defined as the competitive network. From this view, no one firm is able to control the structure of the network, and changes in structure directly impact firm performance. A new metric is introduced that captures subtle changes in buyer-supplier relationships and relates significantly to firm performance. The analyses expand the body of knowledge through the use of longitudinal datasets and uncover otherwise overlooked dynamics within supply networks over the past decade.

Date Created
  • 2016

Comparison of methods for estimating longitudinal indirect effects

Description

Mediation analysis is used to investigate how an independent variable, X, is related to an outcome variable, Y, through a mediator variable, M (MacKinnon, 2008). Even if X represents a randomized intervention, it is difficult to make a cause-and-effect inference regarding indirect effects without making no-unmeasured-confounding assumptions under the potential outcomes framework (Holland, 1988; MacKinnon, 2008; Robins & Greenland, 1992; VanderWeele, 2015), using longitudinal data to determine the temporal order of M and Y (MacKinnon, 2008), or both. The goals of this dissertation were to (1) define all indirect and direct effects in a three-wave longitudinal mediation model using the causal mediation formula (Pearl, 2012), (2) analytically compare traditional estimators (ANCOVA, difference score, and residualized change score) to the potential outcomes-defined indirect effects, and (3) use a Monte Carlo simulation to compare the performance of regression and potential outcomes-based methods for estimating longitudinal indirect effects, and apply the methods to an empirical dataset. The results of the causal mediation formula revealed that the potential outcomes definitions of indirect effects are equivalent to the product-of-coefficients estimators in a three-wave longitudinal mediation model with linear and additive relations. Analytical comparisons demonstrated that the ANCOVA, difference score, and residualized change score models’ estimates of the two time-specific indirect effects differ as a function of the respective mediator-outcome relations at each time point. The traditional model that performed best in terms of the evaluation criteria in the Monte Carlo study was the ANCOVA model, and the best-performing potential outcomes method was sequential G-estimation. Implications and future directions are discussed.
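The product-of-coefficients idea referenced in the results can be illustrated with a toy single-wave X → M → Y chain. The data-generating values and the use of ordinary least squares here are assumptions for the example, not the dissertation's three-wave models.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Simulate a simple mediation chain (a toy stand-in for one time-specific path).
x = rng.binomial(1, 0.5, n).astype(float)         # randomized intervention
m = 0.5 * x + rng.normal(0.0, 1.0, n)             # true a-path = 0.5
y = 0.4 * m + 0.1 * x + rng.normal(0.0, 1.0, n)   # true b-path = 0.4, direct effect = 0.1

def ols(design, outcome):
    """Least-squares coefficients for a design matrix and outcome vector."""
    coef, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    return coef

ones = np.ones(n)
a = ols(np.column_stack([ones, x]), m)[1]      # regress M on X: a-path
b = ols(np.column_stack([ones, x, m]), y)[2]   # regress Y on X and M: b-path
indirect = a * b                               # product-of-coefficients estimate (true value 0.2)
```

In the three-wave case each time-specific indirect effect is a product of the corresponding wave-to-wave paths, which is where the traditional estimators start to diverge.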

Date Created
  • 2018

Handling sparse and missing data in functional data analysis: a functional mixed-effects model approach

Description

This paper investigates a relatively new analysis method for longitudinal data in the framework of functional data analysis. This approach treats longitudinal data as so-called sparse functional data. The first section of the paper introduces functional data and the general ideas of functional data analysis. The second section discusses the analysis of longitudinal data in the context of functional data analysis, with attention to the unique characteristics of longitudinal data, in particular sparseness and missing data. The third section introduces functional mixed-effects models that can handle these characteristics. The next section describes a preliminary simulation study conducted to examine the performance of a functional mixed-effects model under various conditions. An extended simulation study was then carried out to evaluate the estimation accuracy of a functional mixed-effects model. Specifically, the accuracy of the estimated trajectories was examined under various conditions, including different types of missing data and varying levels of sparseness.
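To make "sparse functional data" concrete, the following sketch simulates subjects observed at only a few irregular time points and pools the observations to estimate a mean curve. The sine-shaped truth, the cubic basis, and all settings are illustrative assumptions; a real functional mixed-effects model would also estimate smooth subject-level deviations, not just the mean.

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects, max_obs = 50, 5

# Sparse functional data: each subject is observed at only a few irregular times.
times, values = [], []
for _ in range(n_subjects):
    t = np.sort(rng.uniform(0.0, 1.0, rng.integers(2, max_obs + 1)))
    shift = rng.normal(0.0, 0.3)                   # subject-specific random intercept
    y = np.sin(2 * np.pi * t) + shift + rng.normal(0.0, 0.2, t.size)
    times.append(t)
    values.append(y)

# Pool all observations and estimate the mean function with a cubic fit,
# a stand-in for the basis-expansion step of a functional mixed-effects model.
t_all = np.concatenate(times)
y_all = np.concatenate(values)
basis = np.vander(t_all, 4)                        # cubic polynomial basis
coef, *_ = np.linalg.lstsq(basis, y_all, rcond=None)
mean_curve = np.vander(np.linspace(0.0, 1.0, 101), 4) @ coef
```

Pooling across subjects is what lets the model borrow strength: no individual curve is observable from two to five points, but the shared mean function is.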

Date Created
  • 2016