Matching Items (434)
Contributors: Ward, Geoffrey Harris (Performer) / ASU Library. Music Library (Publisher)
Created: 2018-03-18
Description
The growing popularity of alternative teacher certification has created challenges for teacher preparation programs. Many non-traditional routes into the classroom include no full-time mentor teacher, and the absence of a mentor teacher in the classroom leaves teachers with a deficit. This study follows ten teachers on the intern certificate enrolled in both an alternative certification teacher preparation program and the Teach for America organization as they pursue a master's degree in education and state teaching certification from a large southwestern university. Five teachers were randomly chosen for the treatment group; both the treatment and control groups contained one male and four female teachers, some of whom taught at public schools and others at charter schools. All were secondary education language arts teachers ranging in age from 22 to 29. The treatment used in this study was MyiLOGS, a job-embedded professional development software tool designed to help teachers track their classroom practices. The purpose of this action research project was to study the effect that using MyiLOGS had on six of the nine areas evaluated by a modified version of the Teacher Advancement Program (TAP) evaluation rubric, on alignment with Opportunity to Learn constructs, and on the efficacy of these first-year teachers. The data generated from this study indicate that the MyiLOGS tool had a positive effect on the teachers' TAP evaluation performances. The MyiLOGS tool also had a large impact on the teachers' instruction, as measured by the constructs of Opportunity to Learn, and on their teaching self-efficacy. Implications suggest the tool was an asset to these teachers because they tracked their data and became more reflective and self-sufficient.
Contributors: Roggeman, Pamela (Author) / Puckett, Kathleen (Thesis advisor) / Kurz, Alexander (Committee member) / Mathur, Sarup (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This mixed-methods research study explores the experiences of Board Certified music therapists who completed a university-affiliated (UA) internship as part of their education and clinical training in music therapy. The majority of music therapy students complete a national roster (NR) internship as the final stage of clinical training, and limited data and research are available on the UA internship model. This research seeks to uncover themes identified by former university-affiliated interns regarding: (1) on-site internship supervision; (2) university support and supervision during internship; and (3) self-identified perceptions of professional preparedness following internship completion. The quantitative data were useful in creating a profile of the interns interviewed, and the qualitative data provided a context for understanding their responses and experiences. Fourteen Board Certified music therapists were interviewed (N=14) and asked to reflect on their experiences during their university-affiliated internships. Commonalities discovered among former university-affiliated interns included: (1) the desire for peer supervision opportunities in internship; (2) an overall perception of being professionally prepared to sit for the Board Certification exam following internship; (3) a sense of readiness to enter the professional world after internship; and (4) a current or future desire to supervise university-affiliated interns.
Contributors: Eubanks, Kymla (Author) / Rio, Robin (Thesis advisor) / Crowe, Barbara (Committee member) / Sullivan, Jill (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This action research project engages questions about the relationship of teacher evaluation and teacher learning, joining the national conversation on accountability and teacher quality. It provides a solid philosophical foundation for changes in teacher evaluation and staff development, and analyzes past and current methods and trends in teacher evaluation. Set in the context of a suburban elementary charter school, the problems of traditional evaluation methods are confronted. The innovation proposed and implemented is Teacher Evaluation for Learning, Accountability, and Recognition (TELAR), a teacher evaluation system designed to support learning and accountability. TELAR includes multiple data points and perspectives, ongoing feedback and support, an evaluation instrument centered on collective values and a shared vision for professional work, and an emphasis on teacher reflection and self-assessment. This mixed-methods study employs both qualitative and quantitative measures to provide an enriched understanding of the current problem and the impact of the change effort. Results suggest that TELAR (1) helps teachers redefine their role as professionals in their own evaluation, positively increasing perceptions of value; (2) promotes a culture of learning through a focus on shared values for professional work, a spirit of support and teamwork, and continuous improvement; and (3) empowers teachers to assess their own practice, self-diagnose areas for growth, and generate goals through a continuous process of feedback, reflection, conversation, and support. Implications for practice and future studies are presented.
Contributors: Musser, Stephanie (Author) / Zambo, Ronald (Thesis advisor) / Jiménez, Rosa (Committee member) / Harrington, Timothy (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
A review of studies selected from the Education Resources Information Center (ERIC) covering the years 1985 through 2011 revealed three key evaluation components to analyze within a comprehensive teacher evaluation program: (a) designing, planning, and implementing instruction; (b) learning environments; and (c) parent and peer surveys. In this dissertation, these three components are investigated in the context of two research questions: 1. What is the relationship, if any, between comprehensive teacher evaluation scores and student standardized test scores? 2. How do teachers and administrators experience the comprehensive evaluation process, and how do they use their experiences to inform instruction? The methodology was a mixed-methods case study at a charter school located in a middle-class neighborhood within a large metropolitan area of the southwestern United States. The quantitative data comprised a comparison of teachers' average evaluation scores in the areas of instruction and environment, peer survey scores, parent survey scores, and students' standardized test (SST) benchmark scores over a two-year period. I also completed in-depth interviews with classroom teachers, mentor teachers, the master teacher, and the school principal; these interviews formed the qualitative portion of the study. All three teachers had similar evaluation scores; however, when comparing student scores among the teachers, differences were evident. While no direct correlations between student achievement data and teacher evaluation scores are possible, the qualitative data suggest that there were variations in how the teachers and administrators experienced or "bought into" the comprehensive teacher evaluation, yet they all used evaluation information to inform their instruction. This dissertation contributes to current research by suggesting that comprehensive teacher evaluation has the potential to change teachers' and principals' perceptions of teacher evaluation from an inefficient and unproductive exercise to a system that can enhance instruction and ultimately improve student achievement.
Contributors: Bullock, Donna (Author) / McCarty, Teresa (Thesis advisor) / Powers, Jeanne (Thesis advisor) / Stafford, Catherine (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The purpose of this study was to investigate whether an alignment exists between the mission of Puente de Hozho Magnet School and how current Navajo students visualize their education at the school. Qualitative research was used as an opportunity to explore the significance of, and to gain an in-depth understanding of, how Navajo students view their education in the context of their personal experiences. The population consisted of six Navajo fifth-grade students who lived outside the boundaries of their Indian reservation and attended Puente de Hozho Magnet School. The six student participants were asked to respond to the question, "What does your education look like at Puente de Hozho Magnet School?" through pictures they took with a camera in and around the school. After the pictures were developed, students were individually interviewed; selected pictures were used to prompt their memories and to elicit descriptions and meanings of the images they captured. The students' responses generated a data set for coding and analysis, which yielded prominent themes regarding their education at Puente de Hozho Magnet School. Analysis of this research concluded that the students' visualization of their education at Puente de Hozho is aligned with the original mission and vision of the school. The student voices represent a relationship of natural connections to their cultural heritage as experienced in their school, disregarding stereotypes and rising above the expected.
Contributors: Yazzie, Lamont L. (Author) / Spencer, Dee Ann (Thesis advisor) / Appleton, Nicholas A. (Committee member) / Gilmore, Treva C. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This study examined the intended and unintended consequences associated with the Education Value-Added Assessment System (EVAAS) as perceived and experienced by teachers in the Houston Independent School District (HISD). To evaluate teacher effectiveness, HISD is using EVAAS for high-stakes consequences more than any other district or state in the country. A large-scale electronic survey was used to investigate the model's reliability and validity; to determine whether teachers used the EVAAS data in formative ways as intended; to gather teachers' opinions on EVAAS's claimed benefits and statements; and to understand the unintended consequences that occurred as a result of EVAAS use in HISD. Mixed-methods data collection and analyses were used to present the findings in user-friendly ways, particularly by using the words and experiences of the teachers themselves. Results revealed that the reliability of the EVAAS model produced split and inconsistent results among teacher participants, and teachers indicated that students biased the EVAAS results. The majority of teachers did not report similar EVAAS and principal observation scores, reducing the criterion-related validity of both measures of teacher quality. Teachers revealed discrepancies in the distribution of EVAAS reports, in awareness of the trainings offered, and in principals' understanding of EVAAS across the district. This resulted in an underwhelming number of teachers who reportedly used EVAAS data for formative purposes. Teachers disagreed with EVAAS marketing claims, implying the majority did not believe EVAAS worked as intended and promoted. Additionally, many unintended consequences associated with the high-stakes use of EVAAS emerged through teachers' responses, which revealed, among other things, that teachers felt heightened pressure and competition, which reduced morale and collaboration and encouraged cheating or teaching to the test in an attempt to raise EVAAS scores. This study is one of the first to investigate how the EVAAS model works in practice, and it provides a glimpse of whether value-added models might produce desired outcomes and encourage best teacher practices. This is information of which policymakers, researchers, and districts should be aware, and which they should consider when implementing the EVAAS or any value-added model for teacher evaluation, as many of the reported issues are not specific to the EVAAS model.
Contributors: Collins, Clarin (Author) / Amrein-Beardsley, Audrey (Thesis advisor) / Berliner, David C. (Committee member) / Fischman, Gustavo E. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The purpose of this survey study was to collect data from pre-K-12 educators in the U.S. regarding their perceptions of the purpose, conceptions, use, impact, and results of educational research. The survey tool was based on existing questionnaires and case studies in the literature, as well as newly developed items. A recruiting email was sent to 3,908 educators in a database developed over more than 10 years at the world's largest education company; 400 elementary and secondary teachers in the final sample completed the online survey containing 48 questions over a three-week deployment period in the spring of 2013. Results indicated that, overall, teachers believe educational research is important and that the most important purpose of research is to increase the effectiveness of classroom practice, yet research is not frequently sought out during the course of practice. Teachers perceive results in research journals as the most trustworthy, yet they also perceive research journals as the most difficult source to access (they rely second-most often on in-service trainings for research). These findings have implications for teachers, administrators, policy-makers, and researchers. Educational researchers should seek to address both the theoretical and the applied aspects of learning. Professional development must make explicit links between research findings and classroom strategies and tactics, and research must be made more readily available to those who are not currently seeking additional credentialing and therefore do not individually have access to scholarly literature. Further research is needed to expand the survey sample and refine the survey instrument. Similar research with administrators in pre-K-20 settings, as well as in-depth interviews, would serve to investigate the "why" behind many of these findings.
Contributors: Mahoney, Shawn (Author) / Savenye, Wilhelmina (Thesis advisor) / Nelson, Brian (Committee member) / Atkinson, Robert (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Education policymakers at the national level have initiated reforms in K-12 education for the past several years that have focused on teacher quality and teacher evaluation. More recently, reforms have included legislation that focuses on administrator quality as well. Included in far-reaching recent legislation in Arizona is a requirement that administrators be evaluated on a standards-based evaluation system that is linked to student outcomes. The end result is an annual summative measure of administrator effectiveness that impacts job retention. Because of this, Arizona administrators have become concerned about rapidly becoming proficient in the new evaluation systems. Administrators rarely have the explicit professional development opportunities they need to collaborate on a shared understanding of these new evaluation systems. This action research study focused on a group of eight administrators in a small urban district grappling with a new, complex, and high-stakes administrator evaluation that is a component of an all-encompassing Teacher Incentive Fund Grant. An existing professional learning time was used to assist administrators in lessening their concerns and increasing their understanding and use of the evaluation instrument. Activities were designed to engage the administrators in dynamic, contextualized learning. Participants interacted as a group to interpret the meaning of the evaluation instrument, share practical knowledge, and support each other's acquisition of understanding. Data were gathered with mixed methods. Administrators were given pre- and post-surveys prior to and immediately after this six-week innovation. Formal and informal interviews were conducted throughout the innovation. Additionally, detailed records in the form of meeting records and a researcher journal were kept. Qualitative and quantitative data were triangulated to validate findings. Results identified the concerns and understanding of administrators as they attempted to come to a shared understanding of the new evaluation instrument. As a result of learning together, their concerns about the use of the instrument lessened; other concerns, however, remained or increased. Administrators found the process of the Administrator Learning Community valuable and felt their understanding and use of the instrument had increased. Intense concerns about competing priorities and initiatives led the administrators to consider a reevaluation of those initiatives. Implications from this study can be used to help other administrators and professional development facilitators grappling with common concerns.
Contributors: Esmont, Leah W. (Author) / Wetzel, Keith (Thesis advisor) / Ewbank, Ann (Thesis advisor) / McNeil, David (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This study examines validity evidence of a state policy-directed teacher evaluation system implemented in Arizona during school year 2012-2013. The purpose was to evaluate the warrant for making high stakes, consequential judgments of teacher competence based on value-added (VAM) estimates of instructional impact and observations of professional practice (PP). The research also explores educator influence (voice) in evaluation design and the role information brokers have in local decision making. Findings are situated in an evidentiary and policy context at both the LEA and state policy levels.

The study employs a single-phase, concurrent, mixed-methods research design triangulating multiple sources of qualitative and quantitative evidence onto a single (unified) validation construct: Teacher Instructional Quality. It focuses on assessing the characteristics of metrics used to construct quantitative ratings of instructional competence and the alignment of stakeholder perspectives to facets implicit in the evaluation framework. Validity examinations include assembly of criterion, content, reliability, consequential and construct articulation evidences. Perceptual perspectives were obtained from teachers, principals, district leadership, and state policy decision makers. Data for this study came from a large suburban public school district in metropolitan Phoenix, Arizona.

Study findings suggest that the evaluation framework is insufficient for supporting high stakes, consequential inferences of teacher instructional quality. This is based, in part, on the following: (1) weak associations between VAM and PP metrics; (2) unstable VAM measures across time and between tested content areas; (3) less than adequate scale reliabilities; (4) lack of coherence between theorized and empirical PP factor structures; (5) omission or underrepresentation of important instructional attributes and effects; (6) stakeholder concerns over rater consistency, bias, and the inability of test scores to adequately represent instructional competence; (7) negative sentiments regarding the system's ability to improve instructional competence and/or student learning; (8) concerns regarding unintended consequences, including increased stress, lower morale, harm to professional identity, and restricted learning opportunities; and (9) a general lack of empowerment and educator exclusion from the decision-making process. Study findings also highlight the value of information brokers in policy decision making and the importance of having access to unbiased empirical information during the design and implementation phases of important change initiatives.
Contributors: Sloat, Edward F. (Author) / Wetzel, Keith (Thesis advisor) / Amrein-Beardsley, Audrey (Thesis advisor) / Ewbank, Ann (Committee member) / Shough, Lori (Committee member) / Arizona State University (Publisher)
Created: 2015