Matching Items (6)
Description
This study examined the intended and unintended consequences associated with the Education Value-Added Assessment System (EVAAS) as perceived and experienced by teachers in the Houston Independent School District (HISD). To evaluate teacher effectiveness, HISD uses EVAAS for high-stakes consequences more than any other district or state in the country. A large-scale electronic survey was used to investigate the model's reliability and validity; to determine whether teachers used the EVAAS data in formative ways as intended; to gather teachers' opinions on EVAAS's claimed benefits and statements; and to understand the unintended consequences that occurred as a result of EVAAS use in HISD. Mixed-methods data collection and analyses were used to present the findings in user-friendly ways, particularly when using the words and experiences of the teachers themselves. Results revealed that the EVAAS model produced split and inconsistent reliability results among teacher participants, and teachers indicated that students biased the EVAAS results. The majority of teachers did not report similar EVAAS and principal observation scores, reducing the criterion-related validity of both measures of teacher quality. Teachers revealed discrepancies in the distribution of EVAAS reports, in awareness of the trainings offered, and in principals' understanding of EVAAS across the district. This resulted in an underwhelming number of teachers who reportedly used EVAAS data for formative purposes. Teachers disagreed with EVAAS marketing claims, implying that the majority did not believe EVAAS worked as intended and promoted. Additionally, many unintended consequences associated with the high-stakes use of EVAAS emerged through teachers' responses, which revealed, among other things, that teachers felt heightened pressure and competition, which reduced morale and collaboration and encouraged cheating or teaching to the test in an attempt to raise EVAAS scores.
This study is one of the first to investigate how the EVAAS model works in practice and provides a glimpse of whether value-added models might produce desired outcomes and encourage best teacher practices. This is information of which policymakers, researchers, and districts should be aware and consider when implementing the EVAAS, or any value-added model for teacher evaluation, as many of the reported issues are not specific to the EVAAS model.
Contributors: Collins, Clarin (Author) / Amrein-Beardsley, Audrey (Thesis advisor) / Berliner, David C. (Committee member) / Fischman, Gustavo E. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The purpose of this survey study was to collect data from pre-K-12 educators in the U.S. regarding their perceptions of the purpose, conceptions, use, impact, and results of educational research. The survey tool was based on existing questionnaires and case studies in the literature, as well as newly developed items. A recruiting email was sent to 3,908 educators in a database developed over more than ten years at the world's largest education company; the 400 elementary and secondary teachers in the final sample completed the online survey, containing 48 questions, over a three-week deployment period in the spring of 2013. Results indicated that teachers overall believe educational research is important and that the most important purpose of research is to increase the effectiveness of classroom practice, yet research is not frequently sought out during the course of practice. Teachers perceive results in research journals as the most trustworthy, yet they also perceive research journals as the most difficult to access (relying second-most often on in-service trainings for research). These findings have implications for teachers, administrators, policymakers, and researchers. Educational researchers should seek to address both the theoretical and the applied aspects of learning. Professional development must make explicit links between research findings and classroom strategies and tactics, and research must be made more readily available to those who are not currently seeking additional credentialing and therefore do not individually have access to scholarly literature. Further research is needed to expand the survey sample and refine the survey instrument. Similar research with administrators in pre-K-20 settings, as well as in-depth interviews, would serve to investigate the "why" of many findings.
Contributors: Mahoney, Shawn (Author) / Savenye, Wilhelmina (Thesis advisor) / Nelson, Brian (Committee member) / Atkinson, Robert (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Increasing public criticism of traditional teacher evaluation systems based largely on classroom observations has spurred an unprecedented shift in the debate surrounding educational accountability policies, specifically about the purposes for and measures used to evaluate teachers. In response to growing public demand and associated federal mandates, states have been prompted to design and implement teacher evaluation systems that use increasingly available, statistically complex models (i.e., value-added) intended to isolate and measure the effects of individual teachers on student academic growth over time. The purpose of this study was to examine the perceptions of school administrators and teachers within one of the largest school districts in the state of Arizona with regard to the design and implementation of a federally supported, state policy-directed teacher evaluation system based on professional practice and value-added measures. While much research has been conducted on teacher evaluation, few studies have examined teacher evaluation systems in context to better understand the standards of effectiveness used by school administrators and teachers to measure system effectiveness. The perceptions of school administrators and teachers, considering their lived experiences as the subjects of the nation's new and improved teacher evaluation systems in context, must be better understood if state and federal policymakers are also to recognize and understand the consequences (intended and unintended) associated with the design and implementation of these systems in practice.
Contributors: Paufler, Noelle A. (Author) / Amrein-Beardsley, Audrey L. (Thesis advisor) / Berliner, David C. (Committee member) / Fischman, Gustavo E. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
With the advent of Massive Open Online Courses (MOOCs), educators have the opportunity to collect data from students and use it to derive insightful information about them. Specifically, for programming-based courses, the ability to identify the specific areas or topics that need more attention from students can be of immense help. But the majority of traditional, non-virtual classes lack the ability to uncover such information, which can serve as feedback on the effectiveness of teaching. In the majority of schools, paper exams and assignments provide the only form of assessment to measure the success of students in achieving the course objectives. The overall grade obtained on paper exams and assignments need not present a complete picture of a student's strengths and weaknesses. In part, this can be addressed by incorporating research-based technology into classrooms to obtain real-time updates on students' progress. But introducing technology to provide real-time, class-wide engagement involves a considerable investment, both academically and financially. This prevents the adoption of such technology and thereby the ideal, technology-enabled classroom. With increasing class sizes, it is becoming impossible for teachers to keep persistent track of their students' progress and to provide personalized feedback. What if we could provide technology support without adding more burden to the existing pedagogical approach? How can we enable semantic enrichment of exams that translates to students' understanding of the topics taught in class? Can we provide feedback to students that goes beyond numbers alone and reveals the areas that need their focus? In this research I focus on bringing the capability of conducting insightful analysis to paper exams with a less intrusive learning analytics approach that taps into generic classrooms with minimal technology introduction.
Specifically, the work focuses on automatic indexing of programming exam questions with ontological semantics. The thesis also focuses on designing and evaluating a novel semantic visual analytics suite for in-depth course monitoring. By visualizing the semantic information to illustrate the areas that need a student's focus and enabling teachers to visualize class-level progress, the system provides richer feedback to both sides for improvement.
Contributors: Pandhalkudi Govindarajan, Sesha Kumar (Author) / Hsiao, I-Han (Thesis advisor) / Nelson, Brian (Committee member) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Paper assessment remains an essential formal assessment method in today's classes. However, it is difficult to track student learning behavior on physical papers. This thesis presents a new educational technology, the Web Programming Grading Assistant (WPGA). WPGA serves not only as a grading system but also as a feedback delivery tool that connects paper-based assessments to digital space. I designed a classroom study and collected data from ASU computer science classes. I tracked and modeled students' reviewing and reflecting behaviors based on their use of WPGA. I analyzed students' reviewing efforts in terms of frequency, timing, and associations with their academic performance. Results showed that students placed extra emphasis on reviewing prior to exams, and their efforts demonstrated a desire to review formal assessments regardless of whether they were graded for academic performance or for attendance. In addition, all students paid more attention to reviewing quizzes and exams toward the end of the semester.
Contributors: Huang, Po-Kai (Author) / Hsiao, I-Han (Thesis advisor) / Nelson, Brian (Committee member) / VanLehn, Kurt (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Although an integral part of the pedagogical process is the evaluation of students, questions remain about the purpose and characteristics of effective assessments. Assessments should benefit both the instructor and the student, but this can be a challenge in large classes, such as the English service courses offered at the University of Guyana (UG), which are compulsory courses offered to over 2,000 first-year students annually. However, the transition to online delivery of these courses because of the Coronavirus (COVID-19) pandemic has offered new opportunities for innovation in relation to course assessments. Consequently, this Action Research study was undertaken with the intention of improving the methods of assessment in the course Introduction to the Use of English (ENG 1105), one of the three English service courses currently offered at UG.

Multiple methods of data collection, including surveys, semi-structured interviews, observations, and analyses, were used to determine how the assessment strategies used in the course helped develop academic self-efficacy in students and prepare them for other courses in their programs of study. The findings from the first two cycles of this study suggest that while the current assessment methods used in the course are beneficial to both lecturers and students, there is a need to adjust aspects of the assessments so that students benefit from assessments that better align with other courses in their programs, as well as sharpen their English language skills. The third cycle captures the impact that the use of an innovation, an ungraded portfolio, had on student learning and suggests it should become a regular feature in the English service courses.
Contributors: Mc Gowan, Mark Alastair (Author) / Thompson, Nicole L. (Thesis advisor) / Fischman, Gustavo E. (Committee member) / Wolf, Leigh G. (Committee member) / Arizona State University (Publisher)
Created: 2023