Matching Items (3)
Description
This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex performance assessment within a digital-simulation educational context grounded in theories of cognition and learning. BN models were manipulated along two factors: latent variable dependency structure and number of latent classes. Distributions of posterior predictive p-values (PPP-values) served as the primary outcome measure and were summarized in graphical presentations, by median values across replications, and by the proportion of replications in which the PPP-values were extreme. An effect size measure for PPMC was introduced as a supplemental numerical summary to the PPP-value. Consistent with previous PPMC research, all investigated fit functions tended to perform conservatively, but the Standardized Generalized Dimensionality Discrepancy Measure (SGDDM), Yen's Q3, and the Hierarchy Consistency Index (HCI) only mildly so. Adequate power to detect at least some types of misfit was demonstrated by SGDDM, Q3, HCI, the Item Consistency Index (ICI), and, to a lesser extent, Deviance, while proportion correct (PC), a chi-square-type item-fit measure, the Ranked Probability Score (RPS), and Good's Logarithmic Scale (GLS) lacked power across all investigated factors. Bivariate SGDDM and Q3 were found to provide powerful and detailed feedback for all investigated types of misfit.
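As a rough illustration of the PPMC logic summarized above (not the dissertation's own code), the sketch below computes a PPP-value for an arbitrary discrepancy measure: for each posterior draw, a replicated data set is simulated and the discrepancy realized on the replicated data is compared with the discrepancy realized on the observed data. The function names (`simulate_rep`, `discrepancy`) and the generic interface are hypothetical placeholders; the specific measures studied (SGDDM, Q3, HCI, ICI, and so on) are not implemented here.

```python
import numpy as np

def ppp_value(y_obs, posterior_draws, simulate_rep, discrepancy, seed=0):
    """Generic posterior predictive p-value (PPP-value) sketch.

    For each posterior draw theta, simulate a replicated data set and
    record whether the replicated discrepancy meets or exceeds the
    realized (observed-data) discrepancy. The PPP-value is the
    proportion of draws for which this happens; values near 0 or 1
    flag data-model misfit, values near 0.5 suggest adequate fit.
    """
    rng = np.random.default_rng(seed)
    draws = list(posterior_draws)
    exceed = 0
    for theta in draws:
        y_rep = simulate_rep(theta, rng)          # replicated data under theta
        if discrepancy(y_rep, theta) >= discrepancy(y_obs, theta):
            exceed += 1
    return exceed / len(draws)
```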
Contributors: Crawford, Aaron (Author) / Levy, Roy (Thesis advisor) / Green, Samuel (Committee member) / Thompson, Marilyn (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Lexical diversity (LD) has been used in a wide range of applications, producing a rich history in the field of speech-language pathology. However, for clinicians and researchers, identifying a robust measure to quantify LD has been challenging. Recently, sophisticated techniques have been developed that purport to measure LD. Each is based on its own theoretical assumptions and employs different computational machinery. Therefore, it is not clear to what extent these techniques produce valid scores and how they relate to each other. Further, in the field of speech-language pathology, researchers and clinicians often use different methods to elicit various types of discourse, and it is an empirical question whether inferences drawn from analyzing one type of discourse relate and generalize to other types. The current study examined a corpus of four types of discourse (procedures, eventcasts, storytelling, recounts) from 442 adults. Using four techniques (D; Maas; measure of textual lexical diversity, MTLD; moving-average type-token ratio, MATTR), LD scores were estimated for each type. Subsequently, the data were modeled using structural equation modeling to uncover their latent structure. Results indicated that two estimation techniques (MATTR and MTLD) generated scores that were stronger indicators of the LD of the language samples. For the other two techniques, results were consistent with the presence of method factors that represented construct-irrelevant sources. A hierarchical factor analytic model indicated that a common factor underlay all combinations of discourse types and estimation techniques and was interpreted as a general construct of LD. Two discourse types (storytelling and eventcasts) were significantly stronger indicators of the underlying trait. These findings supplement our understanding of the validity of scores generated by different estimation techniques. Further, they enhance our knowledge of how productive vocabulary manifests itself across types of discourse that impose different cognitive and linguistic demands. They also offer clinicians and researchers a point of reference regarding techniques that measure the LD of a language sample and little else, and types of discourse that might be most informative for measuring the LD of individuals.
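As a point of reference for one of the estimation techniques named above, the moving-average type-token ratio (MATTR), here is a minimal sketch of the computation: the type-token ratio is taken in a fixed-length window that slides one token at a time, and the window ratios are averaged. The window length of 50 tokens and the example text are assumptions for illustration, not values taken from the study.

```python
def mattr(tokens, window=50):
    """Moving-average type-token ratio (MATTR).

    Slides a fixed-length window across the token sequence one token
    at a time, computes the type-token ratio (unique types / window
    length) in each window, and returns the mean of those ratios.
    """
    if len(tokens) < window:
        # Sample shorter than the window: fall back to a plain TTR.
        return len(set(tokens)) / len(tokens)
    ratios = [
        len(set(tokens[i:i + window])) / window
        for i in range(len(tokens) - window + 1)
    ]
    return sum(ratios) / len(ratios)

# Example: MATTR of a short token list with a small window.
sample = "the dog chased the cat and the cat chased the dog".split()
print(mattr(sample, window=5))
```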
Contributors: Fergadiotis, Gerasimos (Author) / Wright, Heather H (Thesis advisor) / Katz, Richard (Committee member) / Green, Samuel (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Research suggests that some children with primary language impairment (PLI) have difficulty with certain aspects of executive function; however, most studies examining executive function have been conducted using tasks that require children to use language to complete the task. As a result, it is unclear whether poor performance on executive function tasks was due to language impairment, to executive function deficits, or to both. The purpose of this study was to evaluate whether preschoolers with PLI have deficits in executive function by comprehensively examining inhibition, updating, and mental set shifting using tasks that do and do not require language to complete them.

Twenty-two four- and five-year-old preschoolers with PLI and 30 age-matched preschoolers with typical development (TD) completed two sets of computerized executive function tasks that measured inhibition, updating, and mental set shifting. The first set of tasks was language-based and the second was visually based. This permitted us to test the hypothesis that poor performance on executive function tasks results from poor executive function rather than from language impairment. A series of one-way analyses of covariance (ANCOVAs) was completed to test whether there was a significant between-group difference on each task after controlling for attention scale scores; in each analysis, the between-group factor was group and the covariate was the attention scale score.
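A minimal sketch of the ANCOVA described in this paragraph, assuming a hypothetical data frame with one row per child (the column names `score`, `group`, and `attention` are placeholders, and the data here are synthetic), might look like the following; it uses statsmodels rather than whatever software the study actually employed.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per child with a task score, group
# membership (PLI vs. TD), and an attention scale score (covariate).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": ["PLI"] * 22 + ["TD"] * 30,
    "attention": rng.normal(50, 10, 52),
    "score": rng.normal(100, 15, 52),
})

# One-way ANCOVA: test the between-group effect on the task score
# after adjusting for the attention covariate.
model = smf.ols("score ~ attention + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```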

Results showed that preschoolers with PLI had difficulties on a broad range of linguistic and visual executive function tasks, even with scores on an attention measure covaried. Executive function deficits were found for linguistic inhibition, linguistic and visual updating, and linguistic and visual mental set shifting. Overall, the findings add to evidence showing that the executive functioning deficits of children with PLI are not limited to the language domain but are more general in nature. Implications for early assessment and intervention are discussed.
Contributors: Yang, Huichun (Author) / Gray, Shelley (Thesis advisor) / Restrepo, Maria (Committee member) / Azuma, Tamiko (Committee member) / Green, Samuel (Committee member) / Arizona State University (Publisher)
Created: 2015