Matching Items (11)

Description
This study tested the effects of two kinds of cognitive, domain-based preparation tasks on learning outcomes after engaging in a collaborative activity with a partner. The collaborative learning method of interest was termed "preparing-to-interact," and is supported in theory by the Preparation for Future Learning (PFL) paradigm and the Interactive-Constructive-Active-Passive (ICAP) framework. The current work combined these two cognitive-based approaches to design collaborative learning activities that can serve as alternatives to existing methods, which carry limitations and challenges. The "preparing-to-interact" method avoids the need for training students in specific collaboration skills or guiding/scripting their dialogic behaviors, while providing the opportunity for students to acquire the prior knowledge necessary for maximizing their discussions towards learning. The study used a 2x2 experimental design, investigating the factors of Preparation (No Prep and Prep) and Type of Activity (Active and Constructive) on deep and shallow learning. The sample was community college students in introductory psychology classes; the domain tested was "memory," in particular, concepts related to the process of remembering/forgetting information. Results showed that Preparation was a significant factor affecting deep learning, while shallow learning was not affected differently by the interventions. With time-on-task and content equalized across all conditions, preparing individually on the task and then discussing the content with a partner produced deeper learning than engaging in the task jointly for the duration of the learning period. Type of Activity was not a significant factor in learning outcomes; however, exploratory analyses showed evidence of Constructive-type behaviors leading to deeper learning of the content. Additionally, a novel method of multilevel analysis (MLA) was used to examine the data to account for the dependency between partners within dyads. This work showed that "preparing-to-interact" is a way to maximize the benefits of collaborative learning: when students are first cognitively prepared, they seem to make more efficient use of discussion and engage more deeply with the content during learning, leading to deeper knowledge of the content. Additionally, in using MLA to account for subject nonindependence, this work introduces new questions about the validity of statistical analyses for dyadic data.
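
As a rough, hypothetical sketch of the kind of multilevel analysis described above (not the author's actual model), partners can be treated as nested within dyads so that their scores are not assumed independent; the variable names and file name below are illustrative assumptions.

```python
# Hypothetical sketch: a mixed-effects model with dyad as the grouping factor,
# so the shared variance between partners is absorbed by a dyad-level random intercept.
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to contain one row per student:
#   deep_score   - deep-learning posttest score
#   prep         - 1 if the student prepared individually first, else 0
#   constructive - 1 if the dyad completed the Constructive-type task, else 0
#   dyad         - identifier shared by the two partners
df = pd.read_csv("dyad_scores.csv")  # hypothetical file

model = smf.mixedlm("deep_score ~ prep * constructive", data=df, groups=df["dyad"])
result = model.fit()
print(result.summary())  # fixed effects for Preparation, Type of Activity, and their interaction
```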
Contributors: Lam, Rachel Jane (Author) / Nakagawa, Kathryn (Thesis advisor) / Green, Samuel (Committee member) / Stamm, Jill (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The use of exams for classification purposes has become prevalent across many fields, including professional assessment for employment screening and standards-based testing in educational settings. Classification exams assign individuals to performance groups based on the comparison of their observed test scores to a pre-selected criterion (e.g., masters vs. nonmasters in dichotomous classification scenarios). The successful use of exams for classification purposes assumes at least minimal levels of accuracy of these classifications. Classification accuracy is an index that reflects the rate of correct classification of individuals into the category that contains their true ability score. Traditional methods estimate classification accuracy via methods that assume true scores follow a four-parameter beta-binomial distribution. Recent research suggests that Item Response Theory may be a preferable alternative framework for estimating examinees' true scores and may return more accurate classifications based on these scores. Researchers hypothesized that test length, the location of the cut score, the distribution of items, and the distribution of examinee ability would impact the recovery of accurate estimates of classification accuracy. The current simulation study manipulated these factors to assess their potential influence on classification accuracy. Observed classification as masters vs. nonmasters, true classification accuracy, estimated classification accuracy, BIAS, and RMSE were analyzed. In addition, Analysis of Variance tests were conducted to determine whether an interrelationship existed between levels of the four manipulated factors. Results showed small values of estimated classification accuracy and increased BIAS in accuracy estimates with few items, mismatched distributions of item difficulty and examinee ability, and extreme cut scores. A significant four-way interaction between manipulated variables was observed. In addition to interpretations of these findings and explanations of potential causes for the recovered values, recommendations that inform practice and avenues for future research are provided.
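
To make the classification-accuracy index concrete, here is a minimal, hypothetical simulation sketch (the Rasch-type items, raw cut score, and all parameter values are illustrative assumptions, not the study's design): the classification implied by each examinee's true score is compared with the classification based on the observed score.

```python
# Hypothetical sketch: estimate classification accuracy by simulation.
# True ability -> true classification; observed (error-laden) score -> observed classification.
import numpy as np

rng = np.random.default_rng(42)
n_examinees, n_items, cut = 5000, 40, 28            # illustrative test length and raw cut score

theta = rng.normal(0.0, 1.0, n_examinees)            # latent ability
b = rng.normal(0.0, 1.0, n_items)                    # item difficulties (Rasch-type)
p_correct = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

true_score = p_correct.sum(axis=1)                   # expected number-correct (true score)
observed = (rng.random((n_examinees, n_items)) < p_correct).sum(axis=1)

true_master = true_score >= cut
observed_master = observed >= cut

accuracy = np.mean(true_master == observed_master)   # proportion classified consistently with true status
print(f"Estimated classification accuracy: {accuracy:.3f}")
```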
Contributors: Kunze, Katie (Author) / Gorin, Joanna (Thesis advisor) / Levy, Roy (Thesis advisor) / Green, Samuel (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Possible selves researchers have uncovered many issues associated with current possible selves measures. For instance, one of the best-known possible selves measures, Oyserman's (2004) open-ended possible selves measure, has proven difficult to score reliably and also involves laborious scoring procedures. Therefore, this study was initiated to develop a closed-ended measure, called the Persistent Academic Possible Selves Scale for Adolescents (PAPSS), that meets these challenges. The PAPSS integrates possible selves theories (personal and social identities) and educational psychology (self-regulation in social cognitive theory). Four hundred ninety-five junior high and high school students participated in the validation study of the PAPSS. I conducted confirmatory factor analyses (CFA) to compare the fit of a baseline model to that of the hypothesized models using Mplus version 7 (Muthén & Muthén, 2012). A weighted least squares means and variance adjusted (WLSMV) estimation method was used to handle the multivariate nonnormality of the ordered categorical data. The final PAPSS has validity evidence based on its internal structure. The factor structure is composed of three goal-driven factors, one self-regulated factor that focuses on peers, and four self-regulated factors that emphasize the self. Oyserman's (2004) open-ended questionnaire was used to explore evidence of convergent validity. Many issues regarding Oyserman's (2003) instructions were found during the coding of academic plausibility. It was difficult to distinguish hidden academic possible selves and strategies from non-academic possible selves and strategies. Also, interpersonal-related strategies were overweighted in the scoring process compared to interpersonal-related academic possible selves. The results revealed that all of the academic goal-related factors in the PAPSS are significantly and positively related to academic plausibility. However, the self-regulated factors in the PAPSS are not. The correlations between the self-regulated factors and academic plausibility do not provide evidence of convergent validity. Theoretical and methodological explanations for the test results are discussed.
Contributors: Lee, Ji Eun (Author) / Husman, Jenefer (Thesis advisor) / Green, Samuel (Committee member) / Millsap, Roger (Committee member) / Brem, Sarah (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This study looked at ways of understanding how schoolyards might act as meaningful places in children's developing sense of identity and possibility. Photographs and other images, such as historical photographs and maps, were used to look at how built environments outside of school reflect demographic and social differences within one southwest city. Intersections of children's worlds with various socio-political communities, woven into and through schooling, were examined for evidence of ways that schools act as the embodiment of a community's values: they are the material and observable effects of resource-allocation decisions. Scholarly materials were consulted to relate the images to existing theories of place and its effect on children, to theories of the hidden curriculum and its relationship to social reproduction, and to the nature of visual representation as a form of data rather than strictly a means of illustrating other forms of data. The focus of the study was on identifying appropriate research methods for investigating ways to understand the importance of the material worlds of school and childhood. Using a combination of visual and narrative approaches to contribute to our understanding of those material worlds, I sought to expose areas of inequity and class differences in the ways that children experience schooling, as evidenced by differences in the material environment. Using a mixed-methods approach, created and found images from four Tempe schools were coded for categories of material culture, such as fences, trees, views from the playground, and scenes from walking in the neighborhood. Findings were connected to a rich body of knowledge in areas such as theories of space and place, the nature of the hidden curriculum, visual culture, and visual research methods, including mapping. Familiar aspects of schooling were exposed in different ways, linking past decisions made by adults to their continuing effects on children today. In this way I arrived at an expanded and enriched understanding of the present worlds of children as communicated through the material environment. Visually examining children's worlds, by looking at the material artifacts of the everyday worlds that children experience at school and including the child's-eye view in decision processes, has promise for moving decision makers away from strictly analytical and impersonal approaches to decisions about schooling children of the future. I proposed that by weighting the data points used in decision-making processes about schooling differently than is currently done, and by paying closer attention to the possible longer-term effects of place for all children, not just a few, there is potential to improve the quality of life for today's children and tomorrow's adults.
Contributors: Walsum, Joyce Van (Author) / Margolis, Eric M. (Thesis advisor) / Green, Samuel (Thesis advisor) / Collins, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The structural validity of the WJ-III Cognitive was investigated using the GIA-Extended Battery test scores of 529 six- to thirteen-year-old students referred for a psychoeducational evaluation. The results of an exploratory factor analysis revealed that 11 of the 14 tests loaded on their expected factors. For the factors Gc, Gf, Gs, and Gv, both tests associated with each factor loaded highly; for Gsm, Glr, and Ga, only one test associated with each factor loaded highly. Obtained congruence coefficients supported the similarity between the factors Gs, Gf, Gc, Glr, and Gv for the current referred sample and the normative factor structure. Gsm and Ga were not found to be similar. The WJ-III Cognitive structure established in the normative sample was not fully replicated in this referred sample. The Schmid-Leiman orthogonalization procedure identified a higher-order factor structure with a second-order, general ability factor, g, which accounted for approximately 38.4% of common variance and 23.1% of total variance among the seven first-order factors. However, g accounted for more variance in both associated tests for only the orthogonal first-order factor Gf. In contrast, the Gc and Gs factors accounted for more variance than the general factor for both of their respective tests. The Gsm, Glr, Ga, and Gv factors accounted for more variance than g for one of the two tests associated with each factor. The outcome indicates that Gc, Gf, Gs, and Gv were supported and thus are factors that can likely be utilized in assessment, while Gsm, Glr, and Ga were not supported by this study. Additionally, results indicate that interpretation of the WJ-III scores should not ignore the global ability factor.
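
For readers unfamiliar with the Schmid-Leiman orthogonalization, a minimal, hypothetical numpy sketch of the transformation follows; the loading and correlation values are illustrative, and the second-order loadings are approximated here by the first principal component of the factor correlation matrix, a simplification of a proper second-order factor analysis.

```python
# Hypothetical sketch (not the study's analysis): Schmid-Leiman orthogonalization
# given first-order pattern loadings and factor correlations from an oblique EFA.
import numpy as np

lambda1 = np.array([            # illustrative pattern loadings: 6 tests, 3 first-order factors
    [0.7, 0.1, 0.0],
    [0.6, 0.0, 0.1],
    [0.1, 0.8, 0.0],
    [0.0, 0.7, 0.1],
    [0.0, 0.1, 0.6],
    [0.1, 0.0, 0.7],
])
phi = np.array([                # illustrative first-order factor correlations
    [1.0, 0.5, 0.4],
    [0.5, 1.0, 0.6],
    [0.4, 0.6, 1.0],
])

# Second-order loadings on a single g factor, approximated by the first
# principal component of phi (a simplification).
eigvals, eigvecs = np.linalg.eigh(phi)
gamma = np.sqrt(eigvals[-1]) * np.abs(eigvecs[:, -1])

# Schmid-Leiman transformation:
sl_general = lambda1 @ gamma                    # test loadings on g
sl_group = lambda1 * np.sqrt(1.0 - gamma**2)    # residualized group-factor loadings

# Proportion of common variance attributable to g:
common_var = np.sum(sl_general**2) + np.sum(sl_group**2)
print("g share of common variance:", np.sum(sl_general**2) / common_var)
```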
Contributors: Strickland, Tracy Nicole (Author) / Watkins, Marley (Thesis advisor) / Caterino, Linda (Thesis advisor) / Green, Samuel (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Two models of motivation are prevalent in the literature on sport and exercise participation (Deci & Ryan, 1991; Vallerand, 1997, 2000). Both models are grounded in self-determination theory (Deci & Ryan, 1985; Ryan & Deci, 2000) and consider the relationship between intrinsic, extrinsic, and amotivation in explaining behavior choice and outcomes. Both models articulate the relationship between need satisfaction (i.e., autonomy, competence, relatedness; Deci & Ryan, 1985, 2000; Ryan & Deci, 2000) and various cognitive, affective, and behavioral outcomes as a function of self-determined motivation. Despite these comprehensive models, inconsistencies remain between the theories and their practical applications. The purpose of my study was to examine alternative theoretical models of intrinsic, extrinsic, and amotivation using the Sport Motivation Scale-6 (SMS-6; Mallett et al., 2007) to more thoroughly study the structure of motivation and the practical utility of using such a scale to measure motivation among runners. Confirmatory factor analysis was used to evaluate eight alternative models. After these models showed unsatisfactory fit, exploratory factor analysis was conducted post hoc to further examine the measurement structure of motivation. A three-factor structure of general motivation, external accolades, and isolation/solitude explained motivation best, although high cross-loadings of items suggest the structure of this construct still lacks clarity. Future directions for modifying item content and re-examining the structure, as well as limitations of this study, are discussed.
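
A minimal, hypothetical sketch of this kind of post hoc exploratory factor analysis with an oblique rotation and a simple cross-loading check follows; the package choice, file name, and 0.30 cutoff are illustrative assumptions rather than the study's procedure.

```python
# Hypothetical sketch: post hoc EFA with an oblique rotation and a check for
# items loading above a threshold on more than one factor (cross-loadings).
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("sms6_items.csv")   # hypothetical file of item responses

fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
cross_loading = (loadings.abs() > 0.30).sum(axis=1) > 1   # illustrative 0.30 cutoff
print(loadings.round(2))
print("Items with cross-loadings:", list(loadings.index[cross_loading]))
```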
Contributors: Kube, Erin (Author) / Thompson, Marilyn (Thesis advisor) / Tracey, Terence (Thesis advisor) / Green, Samuel (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Investigation of measurement invariance (MI) commonly assumes correct specification of dimensionality across multiple groups. Although research shows that violation of the dimensionality assumption can cause bias in model parameter estimation for single-group analyses, little research on this issue has been conducted for multiple-group analyses. This study explored the effects of mismatch in dimensionality between data and analysis models with multiple-group analyses at the population and sample levels. Datasets were generated using a bifactor model with different factor structures and were analyzed with bifactor and single-factor models to assess misspecification effects on assessments of MI and latent mean differences. As baseline models, the bifactor models fit the data well and had minimal bias in latent mean estimation. However, the low convergence rates of fitting bifactor models to data with complex structures and small sample sizes were a concern. On the other hand, the effects of fitting the misspecified single-factor models on the assessments of MI and latent means differed by the bifactor structures underlying the data. For data following one general factor and one group factor affecting a small set of indicators, the effects of ignoring the group factor in analysis models on the tests of MI and latent mean differences were mild. In contrast, for data following one general factor and several group factors, oversimplifications of analysis models can lead to inaccurate conclusions regarding MI assessment and latent mean estimation.
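
As a hypothetical illustration of the data-generation step described above (the dimensions, loading values, and sample size are illustrative assumptions, not the study's conditions), continuous indicator data can be simulated from a bifactor population model with numpy:

```python
# Hypothetical sketch: generate indicator data from a bifactor model
# (one general factor plus orthogonal group factors) for one group.
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 9                                   # sample size and number of indicators (illustrative)

general = np.full(p, 0.6)                       # loadings on the general factor
group = np.zeros((p, 3))                        # three group factors, three indicators each
for k in range(3):
    group[3 * k:3 * (k + 1), k] = 0.4           # group-factor loadings

loadings = np.column_stack([general, group])    # p x 4 loading matrix
uniqueness = 1.0 - (loadings**2).sum(axis=1)    # residual variances for unit-variance indicators

factors = rng.normal(size=(n, loadings.shape[1]))            # orthogonal factor scores
errors = rng.normal(scale=np.sqrt(uniqueness), size=(n, p))  # unique-factor scores
data = factors @ loadings.T + errors            # n x p indicator data for one group
```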
Contributors: Xu, Yuning (Author) / Green, Samuel (Thesis advisor) / Levy, Roy (Committee member) / Thompson, Marilyn (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
A simulation study was conducted to explore the influence of partial loading invariance and partial intercept invariance on the latent mean comparison of the second-order factor within a higher-order confirmatory factor analysis (CFA) model. Noninvariant loadings or intercepts were generated at the first-order level, the second-order level, or both levels of a second-order CFA model. The numbers and directions of differences in noninvariant loadings or intercepts were also manipulated, along with total sample size and the effect size of the second-order factor mean difference. Data were analyzed using correct and incorrect specifications of noninvariant loadings and intercepts. Results summarized across the 5,000 replications in each condition included Type I error rates and power for the chi-square difference test and the Wald test of the second-order factor mean difference, estimation bias and efficiency for this latent mean difference, and means of the standardized root mean square residual (SRMR) and the root mean square error of approximation (RMSEA).

When the model was correctly specified, no obvious estimation bias was observed; when the model was misspecified by constraining noninvariant loadings or intercepts to be equal, the latent mean difference was overestimated if the direction of the difference in loadings or intercepts was consistent with the direction of the latent mean difference, and vice versa. Increasing the number of noninvariant loadings or intercepts resulted in larger estimation bias if these noninvariant loadings or intercepts were constrained to be equal. Power to detect the latent mean difference was influenced by estimation bias and the estimated variance of the difference in the second-order factor mean, in addition to sample size and effect size. Constraining more parameters to be equal between groups, even when they were unequal in the population, led to a decrease in the variance of the estimated latent mean difference, which increased power somewhat. Finally, RMSEA was very sensitive in detecting misspecification due to improper equality constraints in all conditions in the current scenario, including those with a nonzero latent mean difference, but SRMR did not increase as expected when noninvariant parameters were constrained.
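
A small, hypothetical sketch of how per-replication results might be reduced to the quantities reported above (bias, RMSE, and the rejection rate corresponding to Type I error or power); the function, its inputs, and the randomly generated placeholder values are illustrative, not the study's code.

```python
# Hypothetical sketch: summarize simulation replications into bias, RMSE,
# and the rejection rate of a test of the latent mean difference.
import numpy as np

def summarize(estimates, p_values, true_difference, alpha=0.05):
    estimates = np.asarray(estimates)
    p_values = np.asarray(p_values)
    bias = estimates.mean() - true_difference
    rmse = np.sqrt(np.mean((estimates - true_difference) ** 2))
    rejection_rate = np.mean(p_values < alpha)   # Type I error if true_difference == 0, power otherwise
    return {"bias": bias, "rmse": rmse, "rejection_rate": rejection_rate}

# Illustrative use with randomly generated placeholder replication results:
rng = np.random.default_rng(1)
est = rng.normal(0.5, 0.1, size=5000)            # estimated latent mean differences
pvals = rng.uniform(size=5000)                   # p-values from, e.g., a Wald test
print(summarize(est, pvals, true_difference=0.5))
```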
Contributors: Liu, Yixing (Author) / Thompson, Marilyn (Thesis advisor) / Green, Samuel (Committee member) / Levy, Roy (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex performance assessment within a digital-simulation educational context grounded in theories of cognition and learning. BN models were manipulated along two factors: latent variable dependency structure and number of latent classes. Distributions of posterior predictive p-values (PPP-values) served as the primary outcome measure and were summarized in graphical presentations, by median values across replications, and by proportions of replications in which the PPP-values were extreme. An effect size measure for PPMC was introduced as a supplemental numerical summary to the PPP-value. Consistent with previous PPMC research, all investigated fit functions tended to perform conservatively, but the Standardized Generalized Dimensionality Discrepancy Measure (SGDDM), Yen's Q3, and the Hierarchy Consistency Index (HCI) only mildly so. Adequate power to detect at least some types of misfit was demonstrated by SGDDM, Q3, HCI, the Item Consistency Index (ICI), and, to a lesser extent, Deviance, while proportion correct (PC), a chi-square-type item-fit measure, the Ranked Probability Score (RPS), and Good's Logarithmic Scale (GLS) were powerless across all investigated factors. Bivariate SGDDM and Q3 were found to provide powerful and detailed feedback for all investigated types of misfit.
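
To illustrate the general mechanics of a PPP-value (a generic, hypothetical sketch with a toy model, not the study's BN models or discrepancy measures): for each posterior draw, data are replicated from the model and the discrepancy of the replicated data is compared with that of the observed data.

```python
# Hypothetical sketch: compute a posterior predictive p-value (PPP-value)
# for a generic discrepancy measure within a PPMC framework.
import numpy as np

def ppp_value(observed, posterior_draws, simulate_data, discrepancy, rng):
    """PPP = proportion of draws for which the replicated-data discrepancy
    is at least as large as the observed-data discrepancy."""
    exceed = 0
    for params in posterior_draws:
        replicated = simulate_data(params, rng)          # data replicated under this draw
        if discrepancy(replicated, params) >= discrepancy(observed, params):
            exceed += 1
    return exceed / len(posterior_draws)

# Illustrative use with a toy normal-mean model and a variance-based discrepancy:
rng = np.random.default_rng(2)
observed = rng.normal(0, 1, size=100)
draws = rng.normal(observed.mean(), 0.1, size=200)       # stand-in for posterior draws of the mean
sim = lambda mu, rng: rng.normal(mu, 1, size=100)
disc = lambda data, mu: np.var(data)                     # toy discrepancy measure
print(ppp_value(observed, draws, sim, disc, rng))
```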
Contributors: Crawford, Aaron (Author) / Levy, Roy (Thesis advisor) / Green, Samuel (Committee member) / Thompson, Marilyn (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Structural equation modeling is potentially useful for assessing mean differences between groups on latent variables (i.e., factors). However, to evaluate these differences accurately, the parameters of the indicators of these latent variables must be specified correctly. The focus of the current research is on the specification of between-group equality constraints on the loadings and intercepts of indicators. These equality constraints are referred to as invariance constraints. Previous simulation studies in this area focused on fitting a particular model to data that were generated to have various levels and patterns of non-invariance. Results from these studies were interpreted from a viewpoint of assumption violation rather than model misspecification. In contrast, the current study investigated analysis models with varying number of invariance constraints given data that were generated based on a model with indicators that were invariant, partially invariant, or non-invariant. More broadly, the current simulation study was conducted to examine the effect of correctly or incorrectly imposing invariance constraints as well as correctly or incorrectly not imposing invariance constraints on the assessment of factor mean differences. The results indicated that different types of analysis models yield different results in terms of Type I error rates, power, bias in estimation of factor mean difference, and model fit. Benefits and risks are associated with imposing or reducing invariance constraints on models. In addition, model fit or lack of fit can lead to wrong decisions concerning invariance constraints.
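
In the generic notation for multiple-group factor models (an illustration of the constraints in question, not the study's exact specification), the invariance constraints on loadings and intercepts can be written as follows:

```latex
x_{ig} = \tau_g + \Lambda_g \, \xi_{ig} + \delta_{ig}, \qquad
\underbrace{\Lambda_1 = \Lambda_2}_{\text{loading invariance}}, \qquad
\underbrace{\tau_1 = \tau_2}_{\text{intercept invariance}}
```

Here $x_{ig}$ are the observed indicators, $\tau_g$ the intercepts, $\Lambda_g$ the loadings, and $\xi_{ig}$ the factors for person $i$ in group $g$; correctly or incorrectly imposing, or not imposing, these between-group equality constraints before comparing factor means is the choice the study examines.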
Contributors: Xu, Yuning (Author) / Green, Samuel (Thesis advisor) / Levy, Roy (Committee member) / Lai, Keke (Committee member) / Arizona State University (Publisher)
Created: 2014