Matching Items (51)
Description

To analyze data from an instrument administered at multiple time points, it is common practice to form composites of the items at each wave and to fit a longitudinal model to the composites. The advantage of using composites of items is that smaller sample sizes are required than for second-order models that include both the measurement and the structural relationships among the variables. However, the use of composites assumes that longitudinal measurement invariance holds; that is, it is assumed that the relationships among the items and the latent variables remain constant over time. Previous studies of latent growth models (LGM) have shown that when longitudinal metric invariance is violated, the parameter estimates are biased and mistaken conclusions about growth can be made. The purpose of the current study was to examine the impact of non-invariant loadings and non-invariant intercepts on two longitudinal models: the LGM and the autoregressive quasi-simplex model (AR quasi-simplex). A second purpose was to determine whether there are conditions in which researchers can reach adequate conclusions about stability and growth even in the presence of violations of invariance. A Monte Carlo simulation study was conducted to address both purposes. The method consisted of generating items under a linear curve of factors model (COFM) or under the AR quasi-simplex. Composites of the items were formed at each time point and analyzed with a linear LGM or an AR quasi-simplex model. The results showed that the AR quasi-simplex model yielded biased path coefficients only in the conditions with large violations of invariance. The fit of the AR quasi-simplex was not affected by violations of invariance. In general, the growth parameter estimates of the LGM were biased under violations of invariance. Further, in the presence of non-invariant loadings, the rejection rates of the hypothesis of linear growth increased as both the proportion of non-invariant items and the magnitude of the violations of invariance increased. A discussion of the results and limitations of the study is provided, along with general recommendations.
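To make the composite-forming step described above concrete, the Python sketch below simulates item scores at several waves under a linear latent growth process with an optional loading-invariance violation and then forms unit-weighted composites at each wave. The loading values, sample size, error variance, and the simple least-squares slope summary are hypothetical illustrations, not the generating models or conditions used in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(7)
n_persons, n_waves, n_items = 500, 4, 6

# Latent growth factors for each person (hypothetical means and SDs).
intercept = rng.normal(50.0, 5.0, size=n_persons)
slope = rng.normal(2.0, 1.0, size=n_persons)

def simulate_wave(eta, loadings, error_sd=3.0):
    """Item scores for one wave: item = loading * latent score + error."""
    return eta[:, None] * loadings[None, :] + rng.normal(0.0, error_sd, size=(eta.size, loadings.size))

composites = np.empty((n_persons, n_waves))
for t in range(n_waves):
    loadings = np.ones(n_items)
    if t >= 2:                 # hypothetical violation: two loadings drift at later waves
        loadings[:2] = 0.6
    eta_t = intercept + slope * t
    composites[:, t] = simulate_wave(eta_t, loadings).mean(axis=1)  # unit-weighted composite

# Crude per-person slope of the composites over time (a summary, not an LGM fit).
time = np.arange(n_waves)
design = np.column_stack([np.ones(n_waves), time])
betas = np.linalg.lstsq(design, composites.T, rcond=None)[0]
print("mean composite slope:", betas[1].mean())  # biased relative to the generating mean slope of 2
```

When the hypothetical violation is switched on, the composites no longer track the latent variable on a constant scale across waves, which is the mechanism by which composite-based growth estimates become biased.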
Contributors: Olivera-Aguilar, Margarita (Author) / Millsap, Roger E. (Thesis advisor) / Levy, Roy (Committee member) / MacKinnon, David (Committee member) / West, Stephen G. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Possible selves researchers have uncovered many issues associated with current possible selves measures. For instance, one of the best-known possible selves measures, Oyserman's (2004) open-ended possible selves measure, has proven difficult to score reliably and involves laborious scoring procedures. This study was therefore initiated to develop a closed-ended measure, the Persistent Academic Possible Selves Scale for Adolescents (PAPSS), that addresses these challenges. The PAPSS integrates possible selves theories (personal and social identities) and educational psychology (self-regulation in social cognitive theory). Four hundred ninety-five junior high and high school students participated in the validation study of the PAPSS. I conducted confirmatory factor analyses (CFA) to compare fit for a baseline model to the hypothesized models using Mplus version 7 (Muthén & Muthén, 2012). Weighted least squares means and variance adjusted (WLSMV) estimation was used to handle multivariate nonnormality of the ordered categorical data. The final PAPSS has validity evidence based on its internal structure. The factor structure is composed of three goal-driven factors, one self-regulated factor that focuses on peers, and four self-regulated factors that emphasize the self. Oyserman's (2004) open-ended questionnaire was used to explore evidence of convergent validity. Many issues regarding Oyserman's (2003) instructions were found during the coding of academic plausibility. It was difficult to distinguish hidden academic possible selves and strategies from non-academic possible selves and strategies. Also, interpersonally related strategies were overweighted in the scoring process relative to interpersonally related academic possible selves. The results showed that all of the academic goal-related factors in the PAPSS are significantly and positively related to academic plausibility; however, the self-regulated factors in the PAPSS are not. The correlations between the self-regulated factors and academic plausibility therefore do not provide evidence of convergent validity. Theoretical and methodological explanations for these results are discussed.
Contributors: Lee, Ji Eun (Author) / Husman, Jenefer (Thesis advisor) / Green, Samuel (Committee member) / Millsap, Roger (Committee member) / Brem, Sarah (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Dimensionality assessment is an important component of evaluating item response data. Existing approaches to evaluating common assumptions of unidimensionality, such as DIMTEST (Nandakumar & Stout, 1993; Stout, 1987; Stout, Froelich, & Gao, 2001), have been shown to work well under large-scale assessment conditions (e.g., large sample sizes and item pools; see, e.g., Froelich & Habing, 2007). It remains to be seen how such procedures perform in the context of small-scale assessments characterized by relatively small sample sizes and/or short tests. The fact that some procedures come with minimum allowable values for characteristics of the data, such as the number of items, may even render them unusable for some small-scale assessments. Other measures designed to assess dimensionality do not come with such limitations and, as such, may perform better under conditions that do not lend themselves to evaluation via statistics that rely on asymptotic theory. The current work aimed to evaluate the performance of one such metric, the standardized generalized dimensionality discrepancy measure (SGDDM; Levy & Svetina, 2011; Levy, Xu, Yel, & Svetina, 2012), under both large- and small-scale testing conditions. A Monte Carlo study was conducted to compare the performance of DIMTEST and the SGDDM statistic in evaluating assumptions of unidimensionality in item response data under a variety of conditions, with an emphasis on the examination of these procedures in small-scale assessments. Similar to previous research, increases in either test length or sample size resulted in increased power. The DIMTEST procedure appeared to be a conservative test of the null hypothesis of unidimensionality. The SGDDM statistic exhibited rejection rates near the nominal rate of .05 under unidimensional conditions, though the reliability of these results may have been less than optimal due to high sampling variability resulting from a relatively limited number of replications. Power values were at or near 1.0 for many of the multidimensional conditions. It was only when the sample size was reduced to N = 100 that the two approaches diverged in performance. Results suggested that both procedures may be appropriate for sample sizes as low as N = 250 and tests as short as J = 12 (SGDDM) or J = 19 (DIMTEST). When used as a diagnostic tool, SGDDM may be appropriate with as few as N = 100 cases combined with J = 12 items. The study was somewhat limited in that it did not include any complex factorial designs, nor was the strength of the item discrimination parameters or the correlation between factors manipulated. It is recommended that further research be conducted with the inclusion of these factors, as well as an increase in the number of replications when using the SGDDM procedure.
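The data-generating step of a Monte Carlo study of this kind can be sketched briefly. In the Python code below, dichotomous responses are simulated from a compensatory two-dimensional 2PL with simple structure; a latent correlation of 1.0 gives an essentially unidimensional condition, while smaller correlations give multidimensional ones. The parameter ranges, sample size, and test length are hypothetical and are not the dissertation's actual conditions.

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_responses(n_persons, n_items, corr):
    """Dichotomous responses from a compensatory two-dimensional 2PL with simple
    structure; corr = 1.0 collapses to an essentially unidimensional model."""
    z1, z2 = rng.normal(size=n_persons), rng.normal(size=n_persons)
    theta = np.column_stack([z1, corr * z1 + np.sqrt(1.0 - corr ** 2) * z2])
    a = np.zeros((n_items, 2))                          # simple-structure discriminations
    a[: n_items // 2, 0] = rng.uniform(0.8, 2.0, n_items // 2)
    a[n_items // 2:, 1] = rng.uniform(0.8, 2.0, n_items - n_items // 2)
    b = rng.normal(0.0, 1.0, n_items)                   # difficulty-like intercepts
    p = 1.0 / (1.0 + np.exp(-(theta @ a.T - b)))        # response probabilities
    return (rng.uniform(size=p.shape) < p).astype(int)

unidim = simulate_responses(n_persons=250, n_items=12, corr=1.0)   # unidimensional condition
twodim = simulate_responses(n_persons=250, n_items=12, corr=0.4)   # multidimensional condition
print(unidim.mean(axis=0), twodim.mean(axis=0))                    # item proportion-correct values
```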
Contributors: Reichenberg, Ray E (Author) / Levy, Roy (Thesis advisor) / Thompson, Marilyn S. (Thesis advisor) / Green, Samuel B. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Students with traumatic brain injury (TBI) sometimes experience impairments that can adversely affect educational performance. Consequently, school psychologists may be needed to help determine whether a TBI diagnosis is warranted (i.e., in compliance with the Individuals with Disabilities Education Improvement Act, IDEIA) and to suggest accommodations to assist those students. This analogue study investigated whether school psychologists provided with more comprehensive psychoeducational evaluations of a student with TBI were more likely to detect TBI, to make TBI-related accommodations, and to be confident in their decisions. To test these hypotheses, 76 school psychologists were randomly assigned to one of three groups that received increasingly comprehensive levels of psychoeducational evaluation embedded in a cumulative folder of a hypothetical student whose history included a recent head injury and TBI-compatible school problems. As expected, school psychologists who received a more comprehensive psychoeducational evaluation were more likely to make a TBI educational diagnosis, although the effect size was not strong and the predictive value came from the difference between the first and third groups. Likewise, school psychologists receiving more comprehensive evaluation data produced more accommodations related to student needs and felt more confident in those accommodations, but significant differences were not found at all levels of evaluation. Contrary to expectations, however, providing more comprehensive information failed to engender more confidence in decisions about TBI educational diagnoses. Concluding that a TBI is present may itself facilitate accommodations; school psychologists who judged that the student warranted a TBI educational diagnosis produced more TBI-related accommodations. The findings suggest the importance of training school psychologists in the interpretation of neuropsychological test results to aid in educational diagnosis and to increase confidence in their use.
Contributors: Hildreth, Lisa Jane (Author) / Hildreth, Lisa J (Thesis advisor) / Wodrich, David (Committee member) / Levy, Roy (Committee member) / Lavoie, Michael (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Science, Technology, Engineering & Mathematics (STEM) careers have been touted as critical to the success of our nation and as providing important opportunities for access and equity for underrepresented minorities (URMs). Because community colleges serve a diverse population and a large share of the undergraduates currently enrolled in college, they are well situated to help address increasing STEM workforce demands. Geoscience is a discipline that draws great interest but has very low representation of URMs among its majors. What factors influence a student's decision to major in the geosciences, and do community college students differ from research university students in the factors that influence these decisions? Through a survey design combined with classroom observations, structural equation modeling was employed to predict a student's intent to persist in introductory geology from expectancy for success in the geology class, math self-concept, and interest in the content. A measure of classroom pedagogy was also used to determine whether the instructor played a role in predicting student intent to persist. The targeted population was introductory geology students participating in the Geoscience Affective Research NETwork (GARNET) project, a national sample of students enrolled in introductory geology courses. Results from the SEM analysis indicated that interest was the primary predictor of a student's intent to persist in the geosciences for both community college and research university students. In addition, self-efficacy appeared to be mediated by interest within these models. Classroom pedagogy affected how much interest was needed to predict intent to persist: as classrooms became more student-centered, less interest was required to predict intent to persist. Lastly, math self-concept did not predict student intent to persist in the geosciences; however, it did share variance with self-efficacy and control-of-learning beliefs, indicating it may have a moderating effect on student interest and self-efficacy. The implication of this work is that, while community college students and research university students differ in demographics and content preparation, student-centered instruction continues to be the best way to support students' interest in the sciences. Future work includes examining how math self-concept may play a role in longitudinal persistence in the geosciences.
Contributors: Kraft, Katrien J. van der Hoeven (Author) / Husman, Jenefer (Thesis advisor) / Semken, Steven (Thesis advisor) / Baker, Dale R. (Committee member) / McConnell, David (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

The semiconductor field of photovoltaics (PV) has experienced tremendous growth, requiring curricula to consider ways to promote student success. One major barrier to success students may face when learning PV is the development of misconceptions. The purpose of this work was to determine the presence and prevalence of misconceptions students may have for three PV semiconductor phenomena: diffusion, drift, and excitation. These phenomena are emergent: the individual entities in each phenomenon interact and aggregate to form a self-organizing pattern that can be observed at a higher level. Learners develop a different type of misconception for such phenomena, an emergent misconception. Participants (N = 41) completed a written protocol. The pilot study utilized half of these protocols (n = 20) to determine the presence of both general and emergent misconceptions for the three phenomena. Once the presence of both general and emergent misconceptions was confirmed, all protocols (N = 41) were analyzed to determine the presence and prevalence of general and emergent misconceptions and to note any relationships among these misconceptions (full study). Through written protocol analysis of participants' responses, numerous codes emerged from the data for both general and emergent misconceptions. General and emergent misconceptions were found in 80% and 55% of participants' responses, respectively. General misconceptions indicated limited understandings of chemical bonding, electricity and magnetism, energy, and the nature of science. Participants also described the phenomena using teleological, predictable, and causal traits, indicating that they held misconceptions regarding the emergent aspects of the phenomena. For both general and emergent misconceptions, relationships were observed between similar misconceptions within and across the three phenomena, and differences in misconceptions were observed across the phenomena. Overall, the presence and prevalence of both general and emergent misconceptions indicate that learners have limited understandings of the physical and emergent mechanisms underlying the phenomena. Although additional work is required, the identification of specific misconceptions can be used to enhance semiconductor and PV course content. Specifically, changes can be made to the curriculum to limit the formation of misconceptions and to promote conceptual change.
Contributors: Nelson, Katherine G (Author) / Brem, Sarah K. (Thesis advisor) / Mckenna, Ann F (Thesis advisor) / Hilpert, Jonathan (Committee member) / Honsberg, Christiana (Committee member) / Husman, Jenefer (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex performance assessment within a digital-simulation educational context grounded in theories of cognition and learning. BN models were manipulated along two factors: latent variable dependency structure and number of latent classes. Distributions of posterior predictive p-values (PPP-values) served as the primary outcome measure and were summarized in graphical presentations, by median values across replications, and by proportions of replications in which the PPP-values were extreme. An effect size measure for PPMC was introduced as a supplemental numerical summary to the PPP-value. Consistent with previous PPMC research, all investigated fit functions tended to perform conservatively, but the Standardized Generalized Dimensionality Discrepancy Measure (SGDDM), Yen's Q3, and the Hierarchy Consistency Index (HCI) only mildly so. Adequate power to detect at least some types of misfit was demonstrated by SGDDM, Q3, HCI, the Item Consistency Index (ICI), and, to a lesser extent, Deviance, while proportion correct (PC), a chi-square-type item-fit measure, the Ranked Probability Score (RPS), and Good's Logarithmic Scale (GLS) were powerless across all investigated factors. Bivariate SGDDM and Q3 were found to provide powerful and detailed feedback for all investigated types of misfit.
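For readers unfamiliar with PPMC, the bookkeeping behind a PPP-value is compact: for each posterior draw, a discrepancy computed with the observed data is compared with the same discrepancy computed on data replicated from that draw. The Python sketch below uses a generic helper and a toy discrepancy (observed-score variance) purely for illustration; it is not the dissertation's BN models or fit functions.

```python
import numpy as np

def ppp_value(realized, predicted):
    """Posterior predictive p-value: the proportion of posterior draws for which the
    discrepancy on replicated data meets or exceeds the discrepancy on observed data."""
    return np.mean(np.asarray(predicted) >= np.asarray(realized))

# Toy illustration with observed-score variance as the discrepancy measure.
rng = np.random.default_rng(3)
y_obs = rng.normal(0.0, 1.5, size=200)                 # "observed" data
n_draws = 1000
realized = np.empty(n_draws)
predicted = np.empty(n_draws)
for s in range(n_draws):
    sigma_s = rng.uniform(0.8, 1.2)                    # stand-in for a posterior draw of sigma
    y_rep = rng.normal(0.0, sigma_s, size=y_obs.size)  # replicated data under that draw
    realized[s] = np.var(y_obs)                        # discrepancy on observed data (parameter-free here)
    predicted[s] = np.var(y_rep)                       # discrepancy on replicated data
print(ppp_value(realized, predicted))                  # values near 0 or 1 flag misfit
```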
Contributors: Crawford, Aaron (Author) / Levy, Roy (Thesis advisor) / Green, Samuel (Committee member) / Thompson, Marilyn (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

The study examined how ATFIND, Mantel-Haenszel, SIBTEST, and Crossing SIBTEST function when items in the dataset are modelled to differentially advantage a lower-ability focal group over a higher-ability reference group. The primary purpose of the study was to examine ATFIND's usefulness as a valid subtest selection tool, but it also explored the influence of DIF items, item difficulty, and the presence of multiple examinee populations with different ability distributions on both ATFIND's selection of the assessment test (AT) and partitioning test (PT) lists and on all three differential item functioning (DIF) analysis procedures. The results of SIBTEST were also combined with those of Crossing SIBTEST, as might be done in practice.

ATFIND was found to be a less-than-effective matching subtest selection tool with DIF items that are modelled unidimensionally. If an item was modelled with uniform DIF, or if it had a referent difficulty parameter in the medium range, it was selected slightly more often for the AT list than for the PT list, and these trends increased as sample size increased. All three DIF analyses, and the combined SIBTEST and Crossing SIBTEST, generally performed less well as DIF contaminated the matching subtest, as well as when DIF was modelled less severely or when the focal group ability distribution was skewed. While the combined SIBTEST and Crossing SIBTEST had the highest power among the DIF analyses, it also had Type I error rates that were sometimes extremely high.
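As background on one of the procedures examined above, the Mantel-Haenszel DIF statistic pools 2 x 2 tables of group by item correctness across strata of a matching score, typically the total test score. The Python sketch below computes the MH common odds ratio and its ETS delta-scale transformation for a single studied item; the data arrays and function name are hypothetical and unrelated to the dissertation's simulation code.

```python
import numpy as np

def mantel_haenszel_dif(item, group, matching_score):
    """Mantel-Haenszel common odds ratio for one studied item, pooling 2 x 2 tables
    (group x correctness) across strata of the matching score.

    item: 0/1 correctness; group: 0 = reference, 1 = focal; matching_score: e.g., total score.
    """
    item, group, score = map(np.asarray, (item, group, matching_score))
    num = den = 0.0
    for k in np.unique(score):
        s = score == k
        a = np.sum(s & (group == 0) & (item == 1))   # reference correct
        b = np.sum(s & (group == 0) & (item == 0))   # reference incorrect
        c = np.sum(s & (group == 1) & (item == 1))   # focal correct
        d = np.sum(s & (group == 1) & (item == 0))   # focal incorrect
        t = a + b + c + d
        if t > 0:
            num += a * d / t
            den += b * c / t
    alpha = num / den                  # > 1 favors the reference group on this item
    delta = -2.35 * np.log(alpha)      # ETS delta-scale MH D-DIF
    return alpha, delta

# Illustrative call on fabricated arrays (values are arbitrary, for shape only).
rng = np.random.default_rng(5)
group = rng.integers(0, 2, size=400)
score = rng.integers(0, 21, size=400)
item = (rng.uniform(size=400) < 0.5 + 0.02 * (score - 10) - 0.05 * group).astype(int)
print(mantel_haenszel_dif(item, group, score))
```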
Contributors: Scott, Lietta Marie (Author) / Levy, Roy (Thesis advisor) / Green, Samuel B (Thesis advisor) / Gorin, Joanna S (Committee member) / Williams, Leila E (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Research methods based on the frequentist philosophy use prior information in a priori power calculations and in determining the sample size necessary to detect an effect, but not in the statistical analysis itself. Bayesian methods incorporate prior knowledge into the statistical analysis in the form of a prior distribution. When prior information about a relationship is available, the estimates obtained can differ drastically depending on the choice of Bayesian or frequentist method. Study 1 in this project compared the performance of five methods for obtaining interval estimates of the mediated effect in terms of coverage, Type I error rate, empirical power, interval imbalance, and interval width at N = 20, 40, 60, 100, and 500. In Study 1, Bayesian methods with informative prior distributions performed almost identically to Bayesian methods with diffuse prior distributions, and had more power than normal theory confidence limits, lower Type I error rates than the percentile bootstrap, and coverage, interval width, and imbalance comparable to normal theory, percentile bootstrap, and bias-corrected bootstrap confidence limits. Study 2 evaluated whether a Bayesian method with true parameter values as prior information outperforms the other methods. The findings indicate that with true parameter values as the prior information, Bayesian credibility intervals with informative prior distributions have more power, less imbalance, and narrower intervals than Bayesian credibility intervals with diffuse prior distributions, normal theory, percentile bootstrap, and bias-corrected bootstrap confidence limits. Study 3 examined how much power increases when the precision of the prior distribution for either the action path or the conceptual path in mediation analysis is increased by a factor of ten. Power generally increases with increases in precision, but there are many sample size and parameter value combinations for which a tenfold increase in precision does not lead to a substantial increase in power.
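As a point of reference for one of the frequentist comparison methods named above, the percentile bootstrap interval for the mediated effect a*b can be sketched in a few lines of Python. The simulated data, parameter values, and plain least-squares fits below are hypothetical illustrations, not the dissertation's simulation design or conditions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a single-mediator model X -> M -> Y (hypothetical parameter values).
n = 100
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)                 # "a" (action) path = 0.4
y = 0.3 * m + 0.1 * x + rng.normal(size=n)       # "b" (conceptual) path = 0.3, direct effect = 0.1

def indirect_effect(x, m, y):
    """a*b from two OLS regressions: M on X, then Y on M and X."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), m, x]), y, rcond=None)[0][1]
    return a * b

boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, size=n)             # resample cases with replacement
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])

lower, upper = np.percentile(boot, [2.5, 97.5])
print(f"point estimate = {indirect_effect(x, m, y):.3f}, "
      f"95% percentile bootstrap interval = [{lower:.3f}, {upper:.3f}]")
```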
Contributors: Miocevic, Milica (Author) / Mackinnon, David P. (Thesis advisor) / Levy, Roy (Committee member) / West, Stephen G. (Committee member) / Enders, Craig (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

The primary objective of this study was to develop the Perceived Control of the Attribution Process Scale (PCAPS), a measure of metacognitive beliefs about causality, or perceived control of the attribution process. The PCAPS included two subscales: perceived control of attributions (PCA) and awareness of the motivational consequences of attributions (AMC). Study 1 (a pilot study) generated scale items, explored suitable measurement formats, and provided initial evidence for the validity of an event-specific version of the scale. Study 2 achieved several outcomes. Study 2a provided strong evidence for the validity and reliability of the PCA and AMC subscales and showed that they represent separate constructs. Study 2b demonstrated the predictive validity of the scale and provided support for the perceived control of the attribution process model; it revealed that those who adopt these beliefs are significantly more likely to experience autonomy and well-being. Study 2c revealed that these constructs are influenced by context, yet they lead to adaptive outcomes regardless of this contextual specificity. These findings suggest that there are individual differences in metacognitive beliefs about causality and that these differences have measurable motivational implications.
Contributors: Fishman, Evan Jacob (Author) / Nakagawa, Kathryn (Committee member) / Husman, Jenefer (Committee member) / Graham, Steve (Committee member) / Moore, Elsie (Committee member) / Arizona State University (Publisher)
Created: 2014