Matching Items (4)

Description
This study investigated the possibility of item parameter drift (IPD) in a calculus placement examination administered to approximately 3,000 students at a large university in the United States. A single form of the exam was administered continuously for a period of two years, possibly allowing later examinees to have prior knowledge of specific items on the exam. An analysis of IPD was conducted to explore evidence of possible item exposure. Two assumptions concerning item exposure were made: 1) item recall and item exposure are positively correlated, and 2) item exposure results in the items becoming easier over time. Special consideration was given to two contextual item characteristics: 1) item location within the test, specifically items at the beginning and end of the exam, and 2) the use of an associated diagram. The hypotheses stated that these item characteristics would make the items easier to recall and, therefore, more likely to be exposed, resulting in item drift. BILOG-MG 3 was used to calibrate the items and assess for IPD. No evidence was found to support the hypotheses that the items located at the beginning of the test or with an associated diagram drifted as a result of item exposure. Three items among the last ten on the exam drifted significantly and became easier, consistent with item exposure. However, in this study, the possible effects of item exposure could not be separated from the effects of other potential factors such as speededness, curriculum changes, better test preparation on the part of subsequent examinees, or guessing.
Contributors: Krause, Janet (Author) / Levy, Roy (Thesis advisor) / Thompson, Marilyn (Thesis advisor) / Gorin, Joanna (Committee member) / Arizona State University (Publisher)
Created: 2012
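The drift analysis described in the abstract above was carried out with an IRT calibration in BILOG-MG 3. As a rough illustration of the underlying idea only, the sketch below screens each item for exposure-driven drift by comparing its proportion-correct between an early and a late administration window with a two-proportion z-test; the function name, variables, and simulated data are hypothetical and do not reproduce the study's method or data.

```python
# Simplified screen for item parameter drift (IPD): compare each item's
# proportion-correct between an early and a late administration window.
# The study calibrated items with BILOG-MG 3 under an IRT model; this
# classical two-proportion z-test is only a rough stand-in, and the
# variable names and simulated data are hypothetical.
import numpy as np
from scipy.stats import norm

def drift_screen(responses, period):
    """responses: (n_examinees, n_items) 0/1 matrix;
    period: length-n array, 0 = early window, 1 = late window."""
    early = responses[period == 0]
    late = responses[period == 1]
    n1, n2 = len(early), len(late)
    results = []
    for j in range(responses.shape[1]):
        p1, p2 = early[:, j].mean(), late[:, j].mean()
        pooled = (early[:, j].sum() + late[:, j].sum()) / (n1 + n2)
        se = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p2 - p1) / se                      # positive z => item got easier
        p_value = 2 * norm.sf(abs(z))
        results.append((j, p1, p2, z, p_value))
    return results

# Example with simulated data: item 0 drifts easier in the late window.
rng = np.random.default_rng(0)
period = np.repeat([0, 1], 1500)
probs = np.full((3000, 5), 0.6)
probs[period == 1, 0] = 0.75                    # exposure-like drift on item 0
responses = rng.binomial(1, probs)
for j, p1, p2, z, p in drift_screen(responses, period):
    print(f"item {j}: early={p1:.2f} late={p2:.2f} z={z:+.2f} p={p:.3f}")
```

A large positive z for an item indicates it became easier in the later window, the pattern the study associated with possible item exposure.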
Description
The current study analyzed existing data, collected under a previous U.S. Department of Education Reading First grant, to investigate the strength of the relationship between scores on the first- through third-grade Dynamic Indicators of Basic Early Literacy Skills - Oral Reading Fluency (DIBELS-ORF) test and scores on a reading comprehension test (TerraNova-Reading) administered at the conclusion of second and third grade. Participants were sixty-five English Language Learners (ELLs) learning to read in a school district adjacent to the U.S.-Mexico border. DIBELS-ORF and TerraNova-Reading scores were provided by the school district, which administers the assessments in accordance with state and federal mandates to monitor early literacy skill development. Bivariate correlation results indicate moderate-to-strong positive correlations between DIBELS-ORF scores and TerraNova-Reading performance that strengthened between grades one and three. Results suggest that the concurrent relationship between oral reading fluency scores and performance on standardized and high-stakes measures of reading comprehension may be different among ELLs as compared to non-ELLs during first and second grade. However, by third grade the correlations approximate those reported in previous non-ELL studies. This study also examined whether the Peabody Picture Vocabulary Test (PPVT), a receptive vocabulary measure, could explain any additional variance in second- and third-grade TerraNova-Reading performance beyond that explained by the DIBELS-ORF. The PPVT was individually administered by researchers collecting data under a Reading First research grant prior to the current study. Receptive vocabulary was found to be a strong predictor of reading comprehension among ELLs, and largely overshadowed the predictive ability of the DIBELS-ORF during first grade. Results suggest that receptive vocabulary scores, used in conjunction with the DIBELS-ORF, may be useful for identifying beginning ELL readers who are at risk for third-grade reading failure as early as first grade.
Contributors: Millett, Joseph Ridge (Author) / Atkinson, Robert (Thesis advisor) / Blanchard, Jay (Committee member) / Thompson, Marilyn (Committee member) / Christie, James (Committee member) / Arizona State University (Publisher)
Created: 2011
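The incremental-variance question in the abstract above is typically answered with a hierarchical (nested-model) regression: fit TerraNova-Reading on DIBELS-ORF alone, then on DIBELS-ORF plus PPVT, and compare R². The minimal sketch below illustrates that comparison; the ordinary-least-squares helper and the simulated scores are assumptions for illustration, not the study's data or exact analysis.

```python
# Hierarchical (nested-model) regression sketch: does PPVT receptive vocabulary
# add explained variance in TerraNova-Reading beyond DIBELS-ORF alone?
# Variable names and the simulated data are illustrative only.
import numpy as np

def r_squared(X, y):
    """R^2 from an OLS fit, with an intercept column prepended to X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(1)
n = 65                                  # matches the study's sample size
dibels = rng.normal(50, 15, n)          # oral reading fluency (hypothetical wcpm)
ppvt = rng.normal(100, 15, n)           # receptive vocabulary standard score
terranova = 0.4 * dibels + 0.5 * ppvt + rng.normal(0, 10, n)

r2_step1 = r_squared(dibels.reshape(-1, 1), terranova)
r2_step2 = r_squared(np.column_stack([dibels, ppvt]), terranova)
print(f"R^2 (DIBELS-ORF only):          {r2_step1:.3f}")
print(f"R^2 (DIBELS-ORF + PPVT):        {r2_step2:.3f}")
print(f"Delta R^2 attributable to PPVT: {r2_step2 - r2_step1:.3f}")
```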
Description
Proponents of current educational reform initiatives emphasize strict accountability, the standardization of curriculum and pedagogy, and the use of standardized tests to measure student learning and indicate teacher, administrator, and school performance. As a result, professional learning communities have emerged as a platform for teachers to collaborate with one another in order to improve their teaching practices, increase student achievement, and promote continuous school improvement. The primary purpose of this inquiry was to investigate how teachers respond to working in professional learning communities in which the discourses privilege the practice of regularly comparing evidence of students' learning and results. A second purpose was to raise questions about how the current focus on standardization, assessment, and accountability impacts teachers, their interactions and relationships with one another, their teaching practices, and school culture. Participants in this qualitative, ethnographic inquiry included fifteen teachers working within Green School District (a pseudonym). Initial interviews were conducted with all teachers, and responses were categorized in a typology borrowed from Barone (2008). Data analysis involved attending to the behaviors and experiences of these teachers, and the meanings these teachers associated with those behaviors and events. Teachers of GSD responded differently to the various layers of expectations and pressures inherent in the policies and practices in education today. The experiences of the teachers from GSD confirm the body of research that illuminates the challenges and complexity of working in collaborative forms of professional development, situated within the present era of accountability. Looking through lenses privileged by critical theorists, this study examined important intended and unintended consequences inherent in the educational practices of standardization and accountability. The inquiry revealed that a focus on certain "results" and the demand to achieve short-term gains may impede the creation of successful, collaborative professional learning communities.
Contributors: Benson, Karen (Author) / Barone, Thomas (Thesis advisor) / Berliner, David (Committee member) / Enz, Billie (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The use of exams for classification purposes has become prevalent across many fields, including professional assessment for employment screening and standards-based testing in educational settings. Classification exams assign individuals to performance groups based on the comparison of their observed test scores to a pre-selected criterion (e.g., masters vs. nonmasters in dichotomous classification scenarios). The successful use of exams for classification purposes assumes at least minimal levels of accuracy of these classifications. Classification accuracy is an index that reflects the rate of correct classification of individuals into the category that contains their true ability score. Traditional approaches estimate classification accuracy via methods that assume true scores follow a four-parameter beta-binomial distribution. Recent research suggests that Item Response Theory may be a preferable alternative framework for estimating examinees' true scores and may return more accurate classifications based on these scores. Researchers hypothesized that test length, the location of the cut score, the distribution of items, and the distribution of examinee ability would impact the recovery of accurate estimates of classification accuracy. The current simulation study manipulated these factors to assess their potential influence on classification accuracy. Observed classification as masters vs. nonmasters, true classification accuracy, estimated classification accuracy, BIAS, and RMSE were analyzed. In addition, Analysis of Variance tests were conducted to determine whether an interrelationship existed between levels of the four manipulated factors. Results showed small values of estimated classification accuracy and increased BIAS in accuracy estimates with few items, mismatched distributions of item difficulty and examinee ability, and extreme cut scores. A significant four-way interaction between manipulated variables was observed. In addition to interpretations of these findings and explanations of potential causes for the recovered values, recommendations that inform practice and avenues of future research are provided.
Contributors: Kunze, Katie (Author) / Gorin, Joanna (Thesis advisor) / Levy, Roy (Thesis advisor) / Green, Samuel (Committee member) / Arizona State University (Publisher)
Created: 2013
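As a minimal illustration of the quantity this study estimated, the sketch below simulates 2PL item responses, recovers ability with EAP scoring on a quadrature grid, classifies examinees as masters or nonmasters against a cut score on the ability scale, and reports the agreement between true and estimated classifications (true classification accuracy). The test length, cut score, item-parameter distributions, and examinee distribution are illustrative placeholders, not the study's simulation conditions, and the sketch does not reproduce the beta-binomial comparison or BILOG-style calibration.

```python
# Monte Carlo sketch of IRT-based classification accuracy under a 2PL model:
# classification accuracy = proportion of examinees whose estimated
# classification (master/nonmaster) matches the classification implied by
# their true ability. All design values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_examinees, n_items, cut = 2000, 30, 0.0

# 2PL item parameters: discrimination a, difficulty b
a = rng.lognormal(0.0, 0.3, n_items)
b = rng.normal(0.0, 1.0, n_items)

theta_true = rng.normal(0.0, 1.0, n_examinees)
p = 1.0 / (1.0 + np.exp(-a * (theta_true[:, None] - b)))   # response probabilities
responses = rng.binomial(1, p)

# EAP ability estimates on a quadrature grid with a standard normal prior
grid = np.linspace(-4, 4, 81)
prior = np.exp(-0.5 * grid**2)
p_grid = 1.0 / (1.0 + np.exp(-a[None, :] * (grid[:, None] - b[None, :])))
loglik = responses @ np.log(p_grid).T + (1 - responses) @ np.log(1 - p_grid).T
post = np.exp(loglik - loglik.max(axis=1, keepdims=True)) * prior
theta_eap = (post * grid).sum(axis=1) / post.sum(axis=1)

# Classify against the cut score and compute the agreement rate
true_master = theta_true >= cut
obs_master = theta_eap >= cut
accuracy = np.mean(true_master == obs_master)
print(f"True classification accuracy (agreement rate): {accuracy:.3f}")
```

Shorter tests, cut scores far from the bulk of the ability distribution, and mismatched item-difficulty and ability distributions would all be expected to lower this agreement rate, which is the pattern the abstract reports.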