A Monte Carlo simulation was used to generate data from the contextual multilevel model, varying sample size, effect size, and the intraclass correlation (ICC) of the predictor variable. The effects of these simulation factors on parameter bias, parameter variability, and standard error accuracy were assessed. Parameter estimates were generally unbiased. Power to detect the slope variance and the contextual effect exceeded 80% in most conditions, the exceptions being some of the smaller sample size conditions. Type I error rates for the contextual effect were also inflated in some of the smaller sample size conditions. Conclusions and future directions are discussed.
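As an illustration of the kind of simulation described above, here is a minimal sketch in Python assuming NumPy. All function names, parameter values, and design choices (balanced clusters, OLS point estimates rather than a full mixed-model fit) are illustrative, not the study's actual code; in the contextual model, the within-cluster slope and the cluster-mean slope are estimated jointly, and their difference is the contextual effect.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_once(J=100, n=20, icc_x=0.3, gamma_within=0.3, gamma_between=0.7,
                  tau=0.3, sigma=1.0, rng=rng):
    # Level-2 and level-1 components of the predictor give ICC(x) = icc_x
    mu_j = rng.normal(0, np.sqrt(icc_x), J)
    x = mu_j[:, None] + rng.normal(0, np.sqrt(1 - icc_x), (J, n))
    xbar = x.mean(axis=1, keepdims=True)          # observed cluster means
    u_j = rng.normal(0, tau, (J, 1))              # random intercepts
    e = rng.normal(0, sigma, (J, n))              # level-1 residuals
    y = gamma_within * (x - xbar) + gamma_between * xbar + u_j + e
    # OLS on the two fixed effects; point estimates are unbiased for this
    # balanced design, though valid SEs would require a mixed model
    X = np.column_stack([np.ones(J * n),
                         (x - xbar).ravel(),
                         np.broadcast_to(xbar, (J, n)).ravel()])
    beta, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
    return beta[1], beta[2]

reps = np.array([simulate_once() for _ in range(200)])
bias = reps.mean(axis=0) - np.array([0.3, 0.7])
print("mean bias (within, between):", bias)
```

Across 200 replications the mean bias of both slopes should be close to zero, consistent with the abstract's finding that parameter estimates were generally unbiased; varying `J`, `n`, and `icc_x` would reproduce the factor manipulation.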
Method: Ninety 4- and 5-year-old Navajo preschoolers with language impairment (LI) and with typically developing (TD) language were selected. Children completed the PEARL, which measured language comprehension and production using pretest and posttest scores along with a modifiability scale. Children also completed the Clinical Evaluation of Language Fundamentals Preschool-2 (CELF Preschool-2) and provided language samples. A Navajo speech-language pathologist confirmed the participants' diagnoses. Research assistants administered the pretest, briefly taught the principles of narrative structure (story grammar, language complexity, and episode), and evaluated response to learning using an index of modifiability.
Results: Discriminant analysis indicated that PEARL pretest scores differentiated the two ability groups with 89% accuracy; posttest scores also discriminated with 89% accuracy, and modifiability scores with 100% accuracy. The story grammar subtest was the best predictor at both pretest and posttest, although modifiability scores were the strongest predictor of group membership.
Conclusion: Findings indicate that the PEARL is a promising assessment for accurately differentiating Navajo preschool children with LI from those with TD language. The PEARL's recommended pretest cut score over-identified Navajo children with TD language; a new cut score was therefore recommended.
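To make the classification logic concrete: two-group discriminant analysis projects each child's scores onto a single discriminant axis and classifies by a cut point. The sketch below, assuming NumPy, runs a Fisher linear discriminant on simulated pretest/posttest/modifiability scores; the group means, spreads, and sample sizes are made up for illustration and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated scores (pretest, posttest, modifiability) for two hypothetical
# groups; all means and SDs are invented for illustration only
n = 45
li = rng.normal([10.0, 13.0, 2.0], [2.0, 2.0, 0.8], (n, 3))   # lower-scoring group
td = rng.normal([12.5, 15.5, 3.0], [2.0, 2.0, 0.8], (n, 3))   # higher-scoring group
X = np.vstack([li, td])
y = np.repeat([0, 1], n)

# Fisher two-group linear discriminant: pooled covariance, then project
m0, m1 = li.mean(axis=0), td.mean(axis=0)
S = (np.cov(li.T) + np.cov(td.T)) / 2          # pooled within-group covariance
w = np.linalg.solve(S, m1 - m0)                # discriminant direction
threshold = w @ (m0 + m1) / 2                  # midpoint cut score on the axis
pred = (X @ w > threshold).astype(int)
accuracy = (pred == y).mean()
print(f"classification accuracy: {accuracy:.2f}")
```

Moving `threshold` trades sensitivity against over-identification, which is the same trade-off behind the abstract's revised cut score.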
We constructed an 11-arm, walk-through human radial-arm maze (HRAM) as a translational instrument for comparing existing methodology in rodent and human learning and memory research. The HRAM serves as an intermediary between the classic rat radial-arm maze (RAM) and standard human neuropsychological and cognitive tests. We show that the HRAM is a useful instrument for examining working memory ability, exploring the relationships between rodent and human models of memory and cognition, and evaluating factors that contribute to human navigational ability. One hundred fifty-seven participants were tested on the HRAM, and their scores were compared to performance on a standard cognitive battery focused on episodic memory, working memory capacity, and visuospatial ability. We found that errors on the HRAM increased as working memory demand rose, mirroring the pattern typically seen in rodents, and that performance on this task appears consistent with Miller's classic description of a processing-inclusive human working memory capacity of 7 ± 2 items. Regression analysis revealed that measures of working memory capacity and visuospatial ability accounted for a large proportion of the variance in HRAM scores, whereas measures of episodic memory and general intelligence were not significant predictors of HRAM performance. We present the HRAM as a novel instrument for measuring navigational behavior in humans, as is traditionally done in basic science studies of rodent learning and memory, thereby providing a useful tool for connecting and translating between human and rodent models of cognitive functioning.
The purpose of this study was to examine how adding more indicators or a covariate influences the performance of latent class analysis (LCA). In a Monte Carlo simulation study of 2- and 3-class models, we varied the sample size (100 ≤ N ≤ 2000), the number and quality of binary indicators (4 to 12 indicators with conditional response probabilities of [0.3, 0.7], [0.2, 0.8], or [0.1, 0.9]), and the strength of covariate effects (zero, small, medium, large). The results suggested that, in general, a larger sample size, more indicators, higher quality indicators, and a larger covariate effect lead to more converged and proper replications, fewer boundary parameter estimates, and less parameter bias. Interactions among these study factors demonstrated how using more or higher quality indicators, or a larger covariate effect, can sometimes compensate for a small sample size. Including a covariate appeared generally beneficial, although the covariate parameters themselves showed relatively large bias. These results offer practitioners designing an LCA study useful guidance on the factors that lead to better or worse LCA performance.
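The setup above can be sketched for one cell of such a design: generate binary responses from a known 2-class model and recover the parameters with an EM estimator. This is a minimal illustration assuming NumPy; the initialization, iteration count, and parameter values are arbitrary choices, and it omits the covariate and the model-selection steps a real LCA study would include.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generate data from a 2-class model: 6 binary indicators of quality [0.2, 0.8]
N, K, C = 2000, 6, 2
true_pi = np.array([0.5, 0.5])                  # class proportions
true_p = np.array([[0.8] * K, [0.2] * K])       # P(item = 1 | class)
z = rng.choice(C, N, p=true_pi)
Y = (rng.random((N, K)) < true_p[z]).astype(float)

# Initialize responsibilities by splitting on each person's endorsement rate
post = np.zeros((N, C))
post[np.arange(N), (Y.mean(axis=1) > Y.mean()).astype(int)] = 1.0

for _ in range(100):
    # M-step: class proportions and item-response probabilities
    pi = post.mean(axis=0)
    p = np.clip((post.T @ Y) / post.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    # E-step: posterior class probabilities for each response pattern
    log_post = Y @ np.log(p).T + (1 - Y) @ np.log1p(-p).T + np.log(pi)
    log_post -= log_post.max(axis=1, keepdims=True)
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)

# Align labels: put the higher-endorsement class first
order = np.argsort(-p.mean(axis=1))
p, pi = p[order], pi[order]
print(np.round(p, 2), np.round(pi, 2))
```

With N = 2000 and well-separated response probabilities, the recovered `p` and `pi` sit close to the generating values; shrinking `N`, reducing `K`, or moving `true_p` toward [0.3, 0.7] degrades recovery, which is the pattern the simulation factors probe.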