Matching Items (4)
Description
This study examined the intended and unintended consequences associated with the Education Value-Added Assessment System (EVAAS) as perceived and experienced by teachers in the Houston Independent School District (HISD). To evaluate teacher effectiveness, HISD is using EVAAS for high-stakes consequences more than any other district or state in the country. A large-scale electronic survey was used to investigate the model's reliability and validity; to determine whether teachers used the EVAAS data in formative ways as intended; to gather teachers' opinions on EVAAS's claimed benefits and statements; and to understand the unintended consequences that occurred as a result of EVAAS use in HISD. Mixed-methods data collection and analyses were used to present the findings in user-friendly ways, particularly when using the words and experiences of the teachers themselves. Results revealed that the EVAAS model produced split and inconsistent reliability results among teacher participants, and teachers indicated that students biased the EVAAS results. The majority of teachers did not report similar EVAAS and principal observation scores, reducing the criterion-related validity of both measures of teacher quality. Teachers reported discrepancies in the distribution of EVAAS reports, in awareness of the trainings offered, and in principals' understanding of EVAAS across the district. This resulted in an underwhelming number of teachers who reportedly used EVAAS data for formative purposes. Teachers disagreed with EVAAS marketing claims, implying the majority did not believe EVAAS worked as intended and promoted. Additionally, many unintended consequences associated with the high-stakes use of EVAAS emerged through teachers' responses, which revealed, among other things, that teachers felt heightened pressure and competition, which reduced morale and collaboration and encouraged cheating or teaching to the test in an attempt to raise EVAAS scores.
This study is one of the first to investigate how the EVAAS model works in practice and provides a glimpse of whether value-added models might produce desired outcomes and encourage best teacher practices. This is information of which policymakers, researchers, and districts should be aware and consider when implementing the EVAAS, or any value-added model for teacher evaluation, as many of the reported issues are not specific to the EVAAS model.
Contributors: Collins, Clarin (Author) / Amrein-Beardsley, Audrey (Thesis advisor) / Berliner, David C. (Committee member) / Fischman, Gustavo E. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
ABSTRACT

This study examines validity evidence of a state policy-directed teacher evaluation system implemented in Arizona during school year 2012-2013. The purpose was to evaluate the warrant for making high-stakes, consequential judgments of teacher competence based on value-added model (VAM) estimates of instructional impact and observations of professional practice (PP). The research also explores educator influence (voice) in evaluation design and the role information brokers play in local decision making. Findings are situated in an evidentiary and policy context at both the LEA and state policy levels.

The study employs a single-phase, concurrent, mixed-methods research design triangulating multiple sources of qualitative and quantitative evidence onto a single (unified) validation construct: Teacher Instructional Quality. It focuses on assessing the characteristics of metrics used to construct quantitative ratings of instructional competence and the alignment of stakeholder perspectives to facets implicit in the evaluation framework. Validity examinations include the assembly of criterion, content, reliability, consequential, and construct-articulation evidence. Perceptual perspectives were obtained from teachers, principals, district leadership, and state policy decision makers. Data for this study came from a large suburban public school district in metropolitan Phoenix, Arizona.

Study findings suggest that the evaluation framework is insufficient for supporting high-stakes, consequential inferences of teacher instructional quality. This is based, in part, on the following: (1) Weak associations between VAM and PP metrics; (2) Unstable VAM measures across time and between tested content areas; (3) Less than adequate scale reliabilities; (4) Lack of coherence between theorized and empirical PP factor structures; (5) Omission/underrepresentation of important instructional attributes/effects; (6) Stakeholder concerns over rater consistency, bias, and the inability of test scores to adequately represent instructional competence; (7) Negative sentiments regarding the system's ability to improve instructional competence and/or student learning; (8) Concerns regarding unintended consequences, including increased stress, lower morale, harm to professional identity, and restricted learning opportunities; and (9) The general lack of empowerment and educator exclusion from the decision-making process. Study findings also highlight the value of information brokers in policy decision making and the importance of having access to unbiased empirical information during the design and implementation phases of important change initiatives.
Contributors: Sloat, Edward F. (Author) / Wetzel, Keith (Thesis advisor) / Amrein-Beardsley, Audrey (Thesis advisor) / Ewbank, Ann (Committee member) / Shough, Lori (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
There is a serious need for early childhood intervention practices for children who are living at or below the poverty line. Since 1965, Head Start has provided a federally funded, free preschool program for children in this population. The City of Phoenix Head Start program consists of nine delegate agencies, seven of which reside in school districts. These agencies are not currently conducting local longitudinal evaluations of their preschool graduates. The purpose of this study was to recommend initial steps the City of Phoenix grantee and the delegate agencies can take to begin a longitudinal evaluation process of their Head Start programs. Seven City of Phoenix Head Start agency directors were interviewed. These interviews provided information about the directors' attitudes toward longitudinal evaluations and about how Head Start already evaluates its programs through internal assessments. The researcher also took notes on the Third Grade Follow-Up to the Head Start Executive Summary in order to make recommendations to the City of Phoenix Head Start programs about best practices for longitudinal student evaluations.
Created: 2014-05
Description
Teacher evaluation policies have recently shifted in the United States. For the first time in history, many states, districts, and administrators are now required to evaluate teachers by methods that are up to 50% based on their "value-added," as demonstrated at the classroom level by growth in student achievement data over time. Other related instruments and methods, such as classroom observations and rubrics, have also become common practices in teacher evaluation systems. Such methods are consistent with the neoliberal discourse that has dominated the social and political sphere for the past three decades. Employing a discourse-analytic approach that called upon a governmentality framework, the author used a complementary approach to understand how contemporary teacher evaluation policies, practices, and instruments work to discursively (re)define teachers and teacher quality in terms of their market value.

For the first part of the analysis, the author collected and analyzed documents and field notes related to the teacher evaluation system at one urban middle school. The analysis included official policy documents, official White House speeches and press releases, evaluation system promotional materials, evaluator training materials, and the like. For the second part of the analysis, she interviewed teachers and their evaluators at the local middle school in order to understand how the participants had embodied the market-based discourse to define themselves as teachers and qualify their practice, quality, and worth accordingly.

The findings of the study suggest that teacher evaluation policies, practices, and instruments make possible a variety of techniques, such as numericization, hierarchical surveillance, normalizing judgments, and audit, in order to first make teachers objects of knowledge and then act upon that knowledge to manage teachers' conduct. The author also found that teachers and their evaluators have taken up this discourse in order to think about and act upon themselves as responsibilized subjects. Ultimately, the author argues that while much of the attention related to teacher evaluations has focused on the instruments used to measure the construct of teacher quality, those instruments work in mutually constitutive ways to discursively shape the construct of teacher quality itself.
Contributors: Holloway-Libell, Jessica (Author) / Amrein-Beardsley, Audrey (Thesis advisor) / Anderson, Kate T. (Thesis advisor) / Berliner, David C. (Committee member) / Arizona State University (Publisher)
Created: 2014