Matching Items (406)
Contributors: Ward, Geoffrey Harris (Performer) / ASU Library. Music Library (Publisher)
Created: 2018-03-18
Description
Students with traumatic brain injury (TBI) sometimes experience impairments that can adversely affect educational performance. Consequently, school psychologists may be needed to help determine whether a TBI diagnosis is warranted (i.e., in compliance with the Individuals with Disabilities Education Improvement Act, IDEIA) and to suggest accommodations to assist those students. This analogue study investigated whether school psychologists provided with more comprehensive psychoeducational evaluations of a student with TBI were more successful in detecting TBI and in making TBI-related accommodations, and more confident in their decisions. To test these hypotheses, 76 school psychologists were randomly assigned to one of three groups that received increasingly comprehensive levels of psychoeducational evaluation embedded in the cumulative folder of a hypothetical student whose history included a recent head injury and TBI-compatible school problems. As expected, school psychologists who received a more comprehensive psychoeducational evaluation were more likely to make a TBI educational diagnosis, but the effect size was not strong, and the predictive value came from the variance between the first and third groups. Likewise, school psychologists receiving more comprehensive evaluation data produced more accommodations related to student needs and felt more confidence in those accommodations, but significant differences were not found at all levels of evaluation. Contrary to expectations, however, providing more comprehensive information failed to engender more confidence in decisions about TBI educational diagnoses. Concluding that a TBI is present may itself facilitate accommodations: school psychologists who judged that the student warranted a TBI educational diagnosis produced more TBI-related accommodations.
These findings suggest the importance of training school psychologists to interpret neuropsychological test results, both to aid in educational diagnosis and to increase confidence in their use.
Contributors: Hildreth, Lisa Jane (Author) / Hildreth, Lisa J (Thesis advisor) / Wodrich, David (Committee member) / Levy, Roy (Committee member) / Lavoie, Michael (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Traditional design education consists of three phases: perceptual, transitional, and professional. This study explored three independent variables (IVs) as predictors of success in the Transitional Phase of a visual communication design (VCD) program: (a) prior academic performance (as reported by GPA); (b) cognitive style (assessed with Peterson, Deary, and Austin's Verbal Imagery Cognitive Styles Test [VICS] and Extended Cognitive Style Analysis-Wholistic Analytic Test [E-CSA-WA]); and (c) learning style (assessed with Kolb's Learning Style Inventory [LSI] 3.1). To address the research problem and hypothesis, this study examined (a) the relationships of academic performance, cognitive style, and learning style with visual communication design students' performance in the Transitional Phase; (b) the cognitive style and learning style preferences of visual communication design students as compared with other samples; and (c) how the resulting knowledge can be used to improve instructional design for the Transitional Phase in VCD programs. Multiple regression analysis revealed that studio GPA predicted 9% of the variance in Transitional Phase performance. No other variables were statistically significant predictors of Transitional Phase performance. However, ANOVA and t tests revealed statistically significant and suggestive relationships among components of the independent variables that indicate avenues for future study. The results are discussed in the context of style-based learning theory and the cognitive apprenticeship approach to instructional design.
Contributors: Murdock, John Boardman (Author) / Sanft, Alfred C (Thesis advisor) / Patel, Mookesh (Thesis advisor) / Weed, Andrew (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This study investigated the internal factor structure of the English Language Development Assessment (ELDA) using confirmatory factor analysis. ELDA is an English language proficiency test developed by a consortium of multiple states and is used to identify and reclassify English language learners in kindergarten through grade 12. Scores on item parcels based on the standards tested in the four domains of reading, writing, listening, and speaking were used for the analyses. Five factor models were tested: a single-factor model, a correlated two-factor model, a correlated four-factor model, a second-order factor model, and a bifactor model. The results indicate that the four-factor, second-order, and bifactor models fit the data well. The four-factor model hypothesized constructs for reading, writing, listening, and speaking. The second-order model hypothesized a second-order English language proficiency factor as well as the four lower-order factors of reading, writing, listening, and speaking. The bifactor model hypothesized a general English language proficiency factor as well as the four domain-specific factors of reading, writing, listening, and speaking. Chi-square difference tests indicated that the bifactor model best explains the factor structure of the ELDA. The results from this study are consistent with findings in the literature about the multifactorial nature of language but differ from conclusions about the factor structures reported in previous studies. The overall proficiency levels on the ELDA give more weight to the reading and writing sections of the test than to the speaking and listening sections. This study has implications for the rules used to determine proficiency levels and recommends the use of conjunctive scoring, in which all constructs are weighted equally, contrary to current practice.
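The model comparisons described in this abstract rest on the chi-square difference (likelihood-ratio) test for nested models. As an illustrative sketch only, with hypothetical fit statistics (the abstract reports none), the mechanics can be shown in a few lines; the helper below covers even degrees-of-freedom differences, for which the chi-square survival function has a closed form:

```python
import math

def chi2_sf_even_df(x: float, df: int) -> float:
    """Chi-square survival function P(X > x) for even df, via the
    closed form exp(-x/2) * sum_{k=0}^{df/2 - 1} (x/2)^k / k!."""
    if df <= 0 or df % 2 != 0:
        raise ValueError("this helper handles even, positive df only")
    half = x / 2.0
    term, total = 1.0, 1.0
    for k in range(1, df // 2):
        term *= half / k
        total += term
    return math.exp(-half) * total

def chi2_difference_test(chi2_restricted, df_restricted, chi2_general, df_general):
    """Likelihood-ratio (chi-square difference) test for nested models,
    e.g. a second-order model (restricted) nested in a bifactor model
    (general). A small p-value favors retaining the general model."""
    delta_chi2 = chi2_restricted - chi2_general
    delta_df = df_restricted - df_general
    return delta_chi2, delta_df, chi2_sf_even_df(delta_chi2, delta_df)

# Hypothetical fit statistics (not from the study): second-order vs. bifactor.
d_chi2, d_df, p = chi2_difference_test(215.0, 98, 180.0, 94)
```

Here a very small p would indicate that the proportionality constraints of the second-order model significantly worsen fit, which is the logic behind preferring the bifactor model.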
Contributors: Kuriakose, Anju Susan (Author) / Macswan, Jeff (Thesis advisor) / Haladyna, Thomas (Thesis advisor) / Thompson, Marilyn (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The purpose of this study was to investigate the effect of complex structure on dimensionality assessment in compensatory and noncompensatory multidimensional item response theory (MIRT) models of assessment data, using dimensionality assessment procedures based on conditional covariances (i.e., DETECT) and a factor-analytic approach (i.e., NOHARM). The DETECT-based methods typically outperformed the NOHARM-based methods in both two-dimensional (2D) and three-dimensional (3D) compensatory MIRT conditions. The DETECT-based methods yielded a high proportion correct, especially when correlations were .60 or smaller, data exhibited 30% or less complexity, and sample sizes were larger. As the complexity increased and the sample size decreased, performance typically diminished. As the complexity increased, it also became more difficult to label the resulting sets of items from DETECT in terms of the dimensions. DETECT was consistent in the classification of simple items, but less consistent in the classification of complex items. Of the three NOHARM-based methods, χ2G/D and ALR generally outperformed RMSR. χ2G/D was more accurate when N = 500 and complexity levels were 30% or lower. As the number of items increased, ALR performance improved at a correlation of .60 and 30% or less complexity. When the data followed a noncompensatory MIRT model, the NOHARM-based methods, specifically χ2G/D and ALR, were the most accurate of all five methods. The marginal proportions for labeling sets of items as dimension-like were typically low, suggesting that the methods generally failed to label two (three) sets of items as dimension-like in 2D (3D) noncompensatory situations. The DETECT-based methods were more consistent in classifying simple items across complexity levels, sample sizes, and correlations. However, as complexity and correlation levels increased, the classification rates for all methods decreased.
In most conditions, the DETECT-based methods classified complex items as consistently as, or more consistently than, the NOHARM-based methods. In particular, as the complexity, the number of items, and the true dimensionality increased, the DETECT-based methods were notably more consistent than any NOHARM-based method. Despite DETECT's consistency, when data follow a noncompensatory MIRT model, the NOHARM-based methods should be preferred over the DETECT-based methods for assessing dimensionality, due to DETECT's poor performance in identifying the true dimensionality.
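The conditional-covariance idea behind DETECT can be illustrated with a deliberately simplified toy: condition item pairs on the rest score, then sum the conditional covariances with a positive sign for within-cluster pairs and a negative sign for between-cluster pairs. This is only a sketch of the principle, not the DETECT procedure itself (which estimates the covariances differently and searches over partitions); the simulated two-dimensional data below are hypothetical:

```python
import math
import random
from itertools import combinations

def conditional_covariances(responses):
    """Estimate Cov(X_i, X_j | rest score) for every item pair, where the
    rest score is the number-correct total over the remaining items."""
    n_items = len(responses[0])
    covs = {}
    for i, j in combinations(range(n_items), 2):
        groups = {}
        for resp in responses:
            rest = sum(resp) - resp[i] - resp[j]
            groups.setdefault(rest, []).append((resp[i], resp[j]))
        num, den = 0.0, 0
        for pairs in groups.values():
            n = len(pairs)
            if n < 2:
                continue
            mi = sum(a for a, _ in pairs) / n
            mj = sum(b for _, b in pairs) / n
            num += sum((a - mi) * (b - mj) for a, b in pairs) * n / (n - 1)
            den += n
        covs[(i, j)] = num / den if den else 0.0
    return covs

def detect_index(covs, partition):
    """Toy DETECT-style index: conditional covariances count positively for
    within-cluster pairs and negatively for between-cluster pairs
    (scaled by 100, as DETECT reports its index)."""
    total, count = 0.0, 0
    for (i, j), cov in covs.items():
        total += cov if partition[i] == partition[j] else -cov
        count += 1
    return 100.0 * total / count

# Simulated 2D simple-structure data (hypothetical, not from the study):
# items 0-2 load on dimension 1, items 3-5 on dimension 2.
random.seed(42)
responses = []
for _ in range(3000):
    theta = [random.gauss(0, 1), random.gauss(0, 1)]
    resp = []
    for item in range(6):
        t = theta[0] if item < 3 else theta[1]
        prob = 1.0 / (1.0 + math.exp(-1.7 * 1.5 * t))
        resp.append(1 if random.random() < prob else 0)
    responses.append(resp)

covs = conditional_covariances(responses)
d_true = detect_index(covs, [0, 0, 0, 1, 1, 1])   # correct partition
d_wrong = detect_index(covs, [0, 1, 0, 1, 0, 1])  # scrambled partition
```

With simple structure, the correct partition yields a clearly larger index than a scrambled one, which is the signal DETECT exploits; the complexity manipulated in the study (items loading on several dimensions) is exactly what blurs this contrast.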
Contributors: Svetina, Dubravka (Author) / Levy, Roy (Thesis advisor) / Gorin, Joanna S. (Committee member) / Millsap, Roger (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Few measurement tools provide reliable, valid data on both children's emotional and behavioral engagement in school. The School Liking and Avoidance Questionnaire (SLAQ) is one such self-report measure developed to evaluate a child's degree of engagement in the school setting as it is manifest in a child's school liking and school avoidance. This study evaluated the SLAQ's dimensionality, reliability, and validity. Data were gathered on children from kindergarten through 6th grade (n=396). Participants reported on their school liking and avoidance in the spring of each school year. Scores consistently represented two distinct, yet related subscales (i.e., school liking and school avoidance) that were reliable and stable over time. Validation analyses provided some corroboration of the construct validity of the SLAQ subscales, but evidence of predictive validity was inconsistent with the hypothesized relations (i.e., early report of school liking and school avoidance did not predict later achievement outcomes). In sum, the findings from this study provide some support for the dimensionality, reliability, and validity of the SLAQ and suggest that it can be used for the assessment of young children's behavioral and emotional engagement in school.
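Subscale reliability of the kind reported for the SLAQ is commonly estimated with an internal-consistency coefficient such as Cronbach's alpha (the abstract does not specify which coefficient was used). A minimal stdlib sketch with hypothetical 1-5 ratings on three school-liking items:

```python
def cronbach_alpha(item_columns):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances /
    variance of total scores), where k is the number of items."""
    k = len(item_columns)
    n = len(item_columns[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in item_columns) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(col) for col in item_columns) / var(totals))

# Hypothetical ratings from five children on three school-liking items
# (columns are items, entries are children).
liking_items = [
    [5, 4, 2, 3, 1],
    [4, 4, 1, 3, 2],
    [5, 3, 2, 4, 1],
]
alpha = cronbach_alpha(liking_items)
```

Values near 1 indicate that the items rank children consistently, which is one meaning of "reliable" for a subscale like school liking.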
Contributors: Smith, Jillian (Author) / Ladd, Gary W. (Thesis advisor) / Ladd, Becky (Committee member) / Thompson, Marilyn (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This study is a discourse analysis and deconstruction of public documents published electronically in connection with the evaluation of the Advanced Placement Language and Composition Examination, found on the educational website apcentral.collegeboard.com. The subject of this dissertation is how the characteristic of writing identified as Voice functions covertly in the calibration of raters' evaluation of student writing in two sets of electronic commentaries: the Scoring Commentaries and the Student Performance Q&As published between the years 2000-2010. The study is intended to contribute to both sociolinguistic and sociological research in education on the influence of inherited forms of cultural capital in educational attainment, with particular emphasis upon performance on high-stakes examinations. Modeled after Pierre Bourdieu's inquiry into the latent bias revealed in the "euphemized" language of teacher commentary found in The State Nobility, lists of recurrent descriptors and binary oppositions in the texts are deconstructed. The result of the deconstruction is the manifestation of latent class bias in the commentaries. Conclusions: discourse analysis reveals that a particular Voice, one expressive of a preferred social class identity that is initiated into and particularly adept at such academic performances, is rewarded by the test evaluators. Similarly, findings reveal that a low-scoring essay is negatively critiqued for reflecting unfamiliarity with the form(s) of knowledge and style of writing required by the test situation. In summation, a high score on the AP Language Examination, rather than a certification of writerly competence, is actually a testament to the performance of cultural capital.
Following an analysis of the language of classification and assessment in the electronic documents, the author provides several "tactics" (after de Certeau), or recommendations for writing the AP Language and Composition Examination, that are conducive to the stylistic performances privileged by the rating system.
Contributors: Graber, Stacy (Author) / Blasingame, James (Thesis advisor) / Tobin, Joseph (Committee member) / Nilsen, Alleen (Committee member) / Adams, Karen (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This study investigated the possibility of item parameter drift (IPD) in a calculus placement examination administered to approximately 3,000 students at a large university in the United States. A single form of the exam was administered continuously for a period of two years, possibly allowing later examinees to have prior knowledge of specific items on the exam. An analysis of IPD was conducted to explore evidence of possible item exposure. Two assumptions concerning item exposure were made: 1) item recall and item exposure are positively correlated, and 2) item exposure results in items becoming easier over time. Special consideration was given to two contextual item characteristics: 1) item location within the test, specifically items at the beginning and end of the exam, and 2) the use of an associated diagram. The hypotheses stated that these item characteristics would make the items easier to recall and, therefore, more likely to be exposed, resulting in item drift. BILOG-MG 3 was used to calibrate the items and assess IPD. No evidence was found to support the hypotheses that items located at the beginning of the test or items with an associated diagram drifted as a result of item exposure. Three items among the last ten on the exam drifted significantly and became easier, consistent with item exposure. However, in this study, the possible effects of item exposure could not be separated from the effects of other potential factors such as speededness, curriculum changes, better test preparation on the part of subsequent examinees, or guessing.
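The study's drift analysis was carried out with IRT calibration in BILOG-MG 3. As a rough classical-test-theory analogue of the same question, whether an item became easier between an early and a late administration window, one can screen with a two-proportion z-test; all counts below are hypothetical and the sketch is not a substitute for the IRT-based IPD analysis:

```python
import math

def drift_screen(early_correct, early_n, late_correct, late_n):
    """Two-proportion z-test comparing an item's proportion correct in an
    early vs. late administration window. A large positive z means the
    item became easier over time, the pattern expected under exposure."""
    p1 = early_correct / early_n
    p2 = late_correct / late_n
    pooled = (early_correct + late_correct) / (early_n + late_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / early_n + 1 / late_n))
    z = (p2 - p1) / se
    p_two_sided = math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))
    return z, p_two_sided

# Hypothetical item: 60% correct among the first 1,000 examinees,
# 70% correct among the last 1,000.
z, p = drift_screen(600, 1000, 700, 1000)
```

A significant positive z for items late in the form, but not early ones, would mirror the pattern the study found; as the abstract notes, such a screen cannot by itself separate exposure from speededness or curriculum changes.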
Contributors: Krause, Janet (Author) / Levy, Roy (Thesis advisor) / Thompson, Marilyn (Thesis advisor) / Gorin, Joanna (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Given the political and public demands for accountability, this study used the voices of students on the frontlines to investigate their perceptions of whether New Mexico's high-stakes testing program was taking public schools in the right direction. Did the students perceive the program as having an impact on retention, dropouts, or graduation requirements? What were the perceptions of Navajo students in Navajo reservation schools as to the impact of high-stakes testing on their emotional, physical, social, and academic well-being? The specific tests examined were the New Mexico High School Competency Exam (NMHSCE) and the New Mexico Standards Based Assessment (SBA/High School Graduation Assessment) as administered to Native American students. Based on interviews published by the Daily Times of Farmington, New Mexico, our local newspaper, some of the students reported that the testing program was not taking schools in the right direction, that the test was used improperly, and that the one-time test scores were not an accurate assessment of students' learning. In addition, they cited negative and positive effects on the curriculum, teaching and learning, and student and teacher motivation. Based on the survey results, the students' positive and negative concerns about, and praise of, high-stakes testing were categorized into themes. The positive effects cited included the fact that the testing held students, educators, and parents accountable for their actions. The students were not opposed to accountability, but rather to the manner in which it was currently implemented. Several implications of these findings were examined: (a) requirements to pass the New Mexico High School Competency Exam; (b) what high-stakes testing meant for the emotional well-being of the students; (c) the impact of sanctions under New Mexico's high-stakes testing proficiency requirements; and (d) the effects of high-stakes tests on students' perceptions, experiences, and attitudes.
Student voices are not commonly heard in meetings and discussions about K-12 education policy. Yet, the adults who control policy could learn much from listening to what students have to say about their experiences.
Contributors: Tracy, Gladys Yazzie (Author) / Tracy, Gladys Y (Thesis advisor) / Spencer, Dr. Dee (Committee member) / Appleton, Dr. Nicholas (Committee member) / Slowman-Chee, Dr. Janet (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The current study analyzed existing data, collected under a previous U.S. Department of Education Reading First grant, to investigate the strength of the relationship between scores on the first- through third-grade Dynamic Indicators of Basic Early Literacy Skills - Oral Reading Fluency (DIBELS-ORF) test and scores on a reading comprehension test (TerraNova-Reading) administered at the conclusion of second and third grade. Participants were sixty-five English Language Learners (ELLs) learning to read in a school district adjacent to the U.S.-Mexico border. DIBELS-ORF and TerraNova-Reading scores were provided by the school district, which administers the assessments in accordance with state and federal mandates to monitor early literacy skill development. Bivariate correlation results indicate moderate-to-strong positive correlations between DIBELS-ORF scores and TerraNova-Reading performance that strengthened between grades one and three. Results suggest that the concurrent relationship between oral reading fluency scores and performance on standardized, high-stakes measures of reading comprehension may be different among ELLs as compared to non-ELLs during first and second grade. However, by third grade the correlations approximate those reported in previous non-ELL studies. This study also examined whether the Peabody Picture Vocabulary Test (PPVT), a receptive vocabulary measure, could explain additional variance in second- and third-grade TerraNova-Reading performance beyond that explained by the DIBELS-ORF. The PPVT was individually administered by researchers collecting data under a Reading First research grant prior to the current study. Receptive vocabulary was found to be a strong predictor of reading comprehension among ELLs and largely overshadowed the predictive ability of the DIBELS-ORF during first grade.
Results suggest that receptive vocabulary scores, used in conjunction with the DIBELS-ORF, may be useful for identifying beginning ELL readers who are at risk for third-grade reading failure as early as first grade.
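The incremental-variance question above (does receptive vocabulary add predictive power beyond oral reading fluency?) is typically answered with hierarchical regression: compare R² before and after adding the second predictor. A self-contained sketch with hypothetical ORF, PPVT, and comprehension scores (none of the study's data are reproduced here):

```python
def solve_normal_equations(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination
    with partial pivoting (no external dependencies)."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         for i in range(k)]
    v = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (v[r] - sum(A[r][c] * b[c] for c in range(r + 1, k))) / A[r][r]
    return b

def r_squared(predictor_columns, y):
    """R^2 from an ordinary least squares regression of y on the given
    predictor columns (an intercept is added automatically)."""
    X = [[1.0] + [col[i] for col in predictor_columns] for i in range(len(y))]
    b = solve_normal_equations(X, y)
    yhat = [sum(bi * xi for bi, xi in zip(b, row)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical scores: step 1 enters oral reading fluency (ORF);
# step 2 adds receptive vocabulary (PPVT).
orf = [20.0, 35.0, 50.0, 41.0, 60.0, 72.0, 55.0, 30.0]
ppvt = [80.0, 95.0, 90.0, 110.0, 105.0, 120.0, 98.0, 85.0]
comp = [0.5 * o + 0.8 * v + 10.0 for o, v in zip(orf, ppvt)]  # noise-free toy
r2_step1 = r_squared([orf], comp)
r2_step2 = r_squared([orf, ppvt], comp)
delta_r2 = r2_step2 - r2_step1
```

A positive delta_r2 is the pattern the abstract reports: vocabulary explains comprehension variance that fluency alone does not, especially in first grade.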
Contributors: Millett, Joseph Ridge (Author) / Atkinson, Robert (Thesis advisor) / Blanchard, Jay (Committee member) / Thompson, Marilyn (Committee member) / Christie, James (Committee member) / Arizona State University (Publisher)
Created: 2011