
Description
The use of educational technologies as a tool to improve academic achievement continues to increase as more technologies become available to students. However, teachers are entering the classroom not fully prepared to integrate technology into their daily classroom teaching because they have not been adequately prepared to do so. Teacher preparation programs are falling short in this area because educational technology and the role of technology in the classroom are seen as an extra component of daily teaching rather than a central one. Many teacher preparation programs consist of one stand-alone educational technology course that is expected to prepare teachers to integrate technology in their future classrooms. Throughout the remainder of the program, the teachers do not see educational technologies modeled in their other core courses, nor do they get the hands-on interaction necessary to become more confident in using these technologies with their future students. The purpose of this study was to examine the views of educational technology in the classroom held by teachers enrolled in a graduate program. The study consisted of 74 first- and second-year teachers who were enrolled in an alternative teacher preparation program. Thirty-four of the teachers received the Integrating Curriculum and Technology (iCAT) intervention, and the remaining 40 teachers were part of the control group. Each teacher completed a pre- and post-intervention questionnaire, and 23 of the 74 teachers participated in one of three focus group interviews. Additional data from the teachers' course instructors were gathered and analyzed to complement the focus group and quantitative data. Results showed that iCAT participants' scores for confidence in using technology and efficacy for using educational technology increased at a faster rate than the control group participants' scores.
Similarly, confidence in using technology, perceptions about integrating technology in the classroom, and efficacy for using educational technology could be predicted by the amount of hands-on interaction with technology that the teachers received during their graduate course. The discussion focuses on recommendations for infusing technology throughout teacher preparation programs so that teachers have the tools to prepare their students to use a variety of technologies and compete in today's workforce.
Contributors: Kisicki, Todd (Author) / Wetzel, Keith (Thesis advisor) / Bitter, Gary (Thesis advisor) / Buss, Ray (Committee member) / Savenye, Wilhelmina (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Virtual Patient Simulations (VPS) are web-based exercises involving simulated patients in virtual environments. This study investigates the utility of VPS for increasing medical students' clinical reasoning skills, collaboration, and engagement. Many studies indicate that VPS provide medical students with essential practice in clinical decision making before they encounter real-life patients. The utility of a recursive, inductive VPS for increasing clinical decision-making skills, collaboration, or engagement is unknown. Following a design-based methodology, VPS were implemented in two phases with two different cohorts of first-year medical students, in the spring and fall of 2013. Participants were 108 medical students and six of their clinical faculty tutors. Students collaborated in teams of three to complete a series of virtual patient cases, submitting a ballpark diagnosis at the conclusion of each session. Student participants subsequently completed an electronic, 28-item Exit Survey. Finally, students participated in a randomized controlled trial comparing traditional (tutor-led) and VPS case instruction methods. This sequence of activities rendered quantitative and qualitative data that were triangulated during data analysis to increase the validity of findings. After practicing through four VPS cases, student triad teams selected an accurate ballpark diagnosis 92 percent of the time. Pre-post test results revealed that traditional tutor-led (PPT) instruction was significantly more effective than VPS after 20 minutes of instruction, resulting in significantly higher learning gains, although both modalities supported significant learning gains in clinical reasoning. Students collaborated well and held rich clinical discussions; the central phenomenon that emerged was "synthesizing evidence inductively to make clinical decisions."
Using an inductive process, student teams collaborated to analyze patient data and, in nearly all instances, successfully solved the case while remaining cognitively engaged. This is the first design-based study of virtual patient simulation to report iterative phases of implementation and design improvement, culminating in local theories (petite generalizations) about VPS design. A thick, rich description of the environment, process, and findings may benefit other researchers and institutions in designing and implementing effective VPS.
Contributors: McCoy, Lise (Author) / Wetzel, Keith (Thesis advisor) / Ewbank, Ann (Thesis advisor) / Simon, Harvey (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The purpose of this mixed methods research study was to assess the impact and influence of a pre-international experience course on Arizona State University (ASU) students before they study or intern abroad. Currently, the study abroad pre-departure orientation for ASU participants consists of online modules and a two-hour face-to-face orientation. In this action research study, the practitioner-researcher re-designed an ASU School of Politics and Global Studies (SPGS) one-credit course that focused exclusively on cross-cultural awareness and sensitivity. A needs assessment was distributed to a sample of 800 returning study abroad participants and was used to inform the study, along with an extensive literature review and two cycles of action research. The dissertation study was conducted during the ASU fall 2013 semester. Quantitative and qualitative data were collected using eight different measures. To better understand the impact of a pre-international experience curriculum on ASU study abroad and international internship participants before they go abroad, this research study investigated the following research questions: (1) What cultural impact does a pre-international experience course have on students who complete the course before studying or interning abroad? (2) What specific cultural competencies are gained by the participants after participating in the pre-international experience course? (3) How has developing the curriculum, teaching the curriculum, and implementing the innovation influenced and informed my practice as an international educator and the Assistant Director of the Arizona State Study Abroad Office?
The following five assertions were identified within the quantitative and qualitative analysis of the collected data to answer the three research questions: (1) Students are more confident in their abilities to cross cultures after successfully completing the new course; (2) Students are more aware of other cultures and their own culture after successfully completing the new course; (3) Students gained important knowledge about understanding others' worldviews after successfully completing the new course; (4) Students gained general openness toward intercultural learning and toward people from cultures different from their own after successfully completing the new course; (5) Developing and implementing a pre-international experience course changed me as a leader, instructor, and researcher. Implications for future implementation and research are discussed.
Contributors: Henry, Adam (Author) / Wetzel, Keith (Thesis advisor) / Ewbank, Ann (Committee member) / LePore, Paul (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Education policymakers at the national level have initiated reforms in K-12 education for the past several years that have focused on teacher quality and teacher evaluation. More recently, reforms have included legislation that focuses on administrator quality as well. Included in far-reaching recent legislation in Arizona is a requirement that administrators be evaluated on a standards-based evaluation system that is linked to student outcomes. The end result is an annual summative measure of administrator effectiveness that impacts job retention. Because of this, Arizona administrators have become concerned about rapidly becoming proficient in the new evaluation systems. Administrators rarely have the explicit professional development opportunities they need to collaborate on a shared understanding of these new evaluation systems. This action research study focused on a group of eight administrators in a small urban district grappling with a new, complex, and high-stakes administrator evaluation that is a component of an all-encompassing Teacher Incentive Fund Grant. An existing professional learning time was used to help administrators lessen their concerns and increase their understanding and use of the evaluation instrument. Activities were designed to engage the administrators in dynamic, contextualized learning. Participants interacted in a group to interpret the meaning of the evaluation instrument, share practical knowledge, and support each other's acquisition of understanding. Data were gathered with mixed methods. Administrators were given pre- and post-surveys prior to and immediately after this six-week innovation. Formal and informal interviews were conducted throughout the innovation. Additionally, detailed records in the form of meeting notes and a researcher journal were kept. Qualitative and quantitative data were triangulated to validate findings.
Results identified the concerns and understanding of administrators as they attempted to come to a shared understanding of the new evaluation instrument. As a result of learning together, their concerns about the use of the instrument lessened. Other concerns, however, remained or increased. Administrators found the process of the Administrator Learning Community valuable and felt their understanding and use of the instrument had increased. Intense concerns about competing priorities and initiatives led the administrators to consider a reevaluation of those initiatives. Implications from this study can be used to help other administrators and professional development facilitators grappling with common concerns.
Contributors: Esmont, Leah W. (Author) / Wetzel, Keith (Thesis advisor) / Ewbank, Ann (Thesis advisor) / McNeil, David (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This study examines validity evidence of a state policy-directed teacher evaluation system implemented in Arizona during the 2012-2013 school year. The purpose was to evaluate the warrant for making high-stakes, consequential judgments of teacher competence based on value-added (VAM) estimates of instructional impact and observations of professional practice (PP). The research also explores educator influence (voice) in evaluation design and the role information brokers have in local decision making. Findings are situated in an evidentiary and policy context at both the LEA and state policy levels.

The study employs a single-phase, concurrent, mixed-methods research design triangulating multiple sources of qualitative and quantitative evidence onto a single (unified) validation construct: Teacher Instructional Quality. It focuses on assessing the characteristics of metrics used to construct quantitative ratings of instructional competence and the alignment of stakeholder perspectives to facets implicit in the evaluation framework. Validity examinations include the assembly of criterion, content, reliability, consequential, and construct-articulation evidence. Perceptual perspectives were obtained from teachers, principals, district leadership, and state policy decision makers. Data for this study came from a large suburban public school district in metropolitan Phoenix, Arizona.

Study findings suggest that the evaluation framework is insufficient for supporting high-stakes, consequential inferences of teacher instructional quality. This is based, in part, on the following: (1) Weak associations between VAM and PP metrics; (2) Unstable VAM measures across time and between tested content areas; (3) Less than adequate scale reliabilities; (4) Lack of coherence between theorized and empirical PP factor structures; (5) Omission/underrepresentation of important instructional attributes/effects; (6) Stakeholder concerns over rater consistency, bias, and the inability of test scores to adequately represent instructional competence; (7) Negative sentiments regarding the system's ability to improve instructional competence and/or student learning; (8) Concerns regarding unintended consequences, including increased stress, lower morale, harm to professional identity, and restricted learning opportunities; and (9) The general lack of empowerment and educator exclusion from the decision-making process. Study findings also highlight the value of information brokers in policy decision making and the importance of having access to unbiased empirical information during the design and implementation phases of important change initiatives.
Contributors: Sloat, Edward F. (Author) / Wetzel, Keith (Thesis advisor) / Amrein-Beardsley, Audrey (Thesis advisor) / Ewbank, Ann (Committee member) / Shough, Lori (Committee member) / Arizona State University (Publisher)
Created: 2015