Matching Items (9)
Description
ABSTRACT A review of studies selected from the Educational Resource Information Center (ERIC) covering the years 1985 through 2011 revealed three key evaluation components to analyze within a comprehensive teacher evaluation program: (a) designing, planning, and implementing instruction; (b) learning environments; and (c) parent and peer surveys. In this dissertation, these three components are investigated in the context of two research questions: 1. What is the relationship, if any, between comprehensive teacher evaluation scores and student standardized test scores? 2. How do teachers and administrators experience the comprehensive evaluation process, and how do they use their experiences to inform instruction? The study used a mixed-method case-study design at a charter school located in a middle-class neighborhood within a large metropolitan area of the southwestern United States. The quantitative data comprised a comparison of teachers' average evaluation scores in the areas of instruction and environment, peer survey scores, parent survey scores, and students' standardized test (SST) benchmark scores over a two-year period. For the qualitative portion of the study, I completed in-depth interviews with classroom teachers, mentor teachers, the master teacher, and the school principal. All three teachers had similar evaluation scores; however, comparing student scores among the teachers revealed clear differences. While no direct correlations between student achievement data and teacher evaluation scores are possible, the qualitative data suggest that the teachers and administrators varied in how they experienced or "bought into" the comprehensive teacher evaluation, but all of them used evaluation information to inform their instruction. 
This dissertation contributes to current research by suggesting that comprehensive teacher evaluation has the potential to change teachers' and principals' perceptions of teacher evaluation as inefficient and unproductive to a system that can enhance instruction and ultimately improve student achievement.  
Contributors
Bullock, Donna (Author) / Mccarty, Teresa (Thesis advisor) / Powers, Jeanne (Thesis advisor) / Stafford, Catherine (Committee member) / Arizona State University (Publisher)
Created
2013
Description
With changes in federal legislation and the proposed reauthorization of The Elementary and Secondary Education Act, school administrators are held to high standards in an attempt to improve achievement for all students. They no longer just manage their schools but must now be instructional leaders charged with observing and conferencing with teachers, leading professional development aligned to data, and measuring results. Classroom walkthroughs have become a way of assisting with these tasks while supporting the mission of each school. The purpose of this research was to describe how walkthroughs operate in practice and how they were experienced by school administration, teacher leaders, and teachers at two schools within the same suburban district. Interviews illustrated that experiences with the classroom walkthrough protocol varied and that continued professional development was needed for administrators and teachers. Participants shared their thoughts on implementation and usage, and made recommendations to schools and districts considering implementing classroom walkthroughs. Results also indicated a great deal of attention paid to the collection of data within the schools, but less consensus on the analysis and use of the collected data. Teachers were also confused about the vision, purpose, and goals of using classroom walkthroughs. Changes in leadership during the five years since implementation, along with administrators who were relatively new in their positions, helped shape school experiences. Recommendations to schools and districts considering implementation focused on support from the district office, a need for help with data collection and analysis, and a clear vision for the use of the protocol. Interviewees mentioned it would benefit districts and schools to develop a shared vocabulary for instructional engagement, alignment, and rigor, as well as a focus for professional development. 
They also shared the view that calibration conferences and conversations, centered on instruction, provided a focus for teaching and learning within a school and/or district.
Contributors
Cunningham, Alexa Renee (Author) / Danzig, Arnold (Thesis advisor) / Harris, Connie (Committee member) / Hurley, Beverly (Committee member) / Arizona State University (Publisher)
Created
2012
Description

Choropleth maps are a common form of online cartographic visualization. They reveal patterns in spatial distributions of a variable by associating colors with data values measured at areal units. Although this capability of pattern revelation has popularized the use of choropleth maps, existing methods for their online delivery are limited in supporting dynamic map generation from large areal data. This limitation has become increasingly problematic in online choropleth mapping as access to small area statistics, such as high-resolution census data and real-time aggregates of geospatial data streams, has never been easier due to advances in geospatial web technologies. The current literature shows that the challenge of large areal data can be mitigated through tiled maps where pre-processed map data are hierarchically partitioned into tiny rectangular images or map chunks for efficient data transmission. Various approaches have emerged lately to enable this tile-based choropleth mapping, yet little empirical evidence exists on their ability to handle spatial data with large numbers of areal units, thus complicating technical decision making in the development of online choropleth mapping applications. To fill this knowledge gap, this dissertation study conducts a scalability evaluation of three tile-based methods discussed in the literature: raster, scalable vector graphics (SVG), and HTML5 Canvas. For the evaluation, the study develops two test applications, generates map tiles from five different boundaries of the United States, and measures the response times of the applications under multiple test operations. While specific to the experimental setups of the study, the evaluation results show that the raster method scales better across various types of user interaction than the other methods. Empirical evidence also points to the superior scalability of Canvas to SVG in dynamic rendering of vector tiles, but not necessarily for partial updates of the tiles. 
These findings indicate that the raster method is better suited for dynamic choropleth rendering from large areal data, while Canvas would be more suitable than SVG when such rendering frequently involves complete updates of vector shapes.
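The core operation a choropleth map performs, assigning each areal unit a color class based on its data value, can be sketched as below. The equal-interval scheme, function names, toy data, and palette are illustrative assumptions, not the dissertation's implementation:

```python
# Sketch of choropleth classification: bin areal-unit values into classes
# and associate each class with a color. Equal-interval breaks are one
# common scheme; all names and data here are hypothetical.

def equal_interval_breaks(values, n_classes):
    """Upper class breaks that split the data range into equal widths."""
    lo, hi = min(values), max(values)
    step = (hi - lo) / n_classes
    return [lo + step * i for i in range(1, n_classes + 1)]

def classify(value, breaks):
    """Index of the first class whose upper break covers the value."""
    for i, upper in enumerate(breaks):
        if value <= upper:
            return i
    return len(breaks) - 1  # guard against floating-point boundary cases

# Map county-level values to one of four sequential colors.
county_values = {"A": 3.0, "B": 42.5, "C": 87.1, "D": 60.0}
breaks = equal_interval_breaks(list(county_values.values()), 4)
palette = ["#fee5d9", "#fcae91", "#fb6a4a", "#cb181d"]
colors = {unit: palette[classify(v, breaks)]
          for unit, v in county_values.items()}
```

In a tiled setting, this classification is applied during tile pre-processing (raster) or at render time (SVG/Canvas), which is precisely where the scalability trade-offs the study measures arise.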

Contributors
Hwang, Myunghwa (Author) / Anselin, Luc (Thesis advisor) / Rey, Sergio J. (Committee member) / Wentz, Elizabeth (Committee member) / Arizona State University (Publisher)
Created
2013
Description
ABSTRACT

This study examines validity evidence of a state policy-directed teacher evaluation system implemented in Arizona during school year 2012-2013. The purpose was to evaluate the warrant for making high stakes, consequential judgments of teacher competence based on value-added (VAM) estimates of instructional impact and observations of professional practice (PP). The research also explores educator influence (voice) in evaluation design and the role information brokers have in local decision making. Findings are situated in an evidentiary and policy context at both the LEA and state policy levels.

The study employs a single-phase, concurrent, mixed-methods research design triangulating multiple sources of qualitative and quantitative evidence onto a single (unified) validation construct: Teacher Instructional Quality. It focuses on assessing the characteristics of metrics used to construct quantitative ratings of instructional competence and the alignment of stakeholder perspectives to facets implicit in the evaluation framework. Validity examinations include assembly of criterion, content, reliability, consequential and construct articulation evidences. Perceptual perspectives were obtained from teachers, principals, district leadership, and state policy decision makers. Data for this study came from a large suburban public school district in metropolitan Phoenix, Arizona.

Study findings suggest that the evaluation framework is insufficient for supporting high stakes, consequential inferences of teacher instructional quality. This is based, in part, on the following: (1) Weak associations between VAM and PP metrics; (2) Unstable VAM measures across time and between tested content areas; (3) Less than adequate scale reliabilities; (4) Lack of coherence between theorized and empirical PP factor structures; (5) Omission/underrepresentation of important instructional attributes/effects; (6) Stakeholder concerns over rater consistency, bias, and the inability of test scores to adequately represent instructional competence; (7) Negative sentiments regarding the system's ability to improve instructional competence and/or student learning; (8) Concerns regarding unintended consequences including increased stress, lower morale, harm to professional identity, and restricted learning opportunities; and (9) The general lack of empowerment and educator exclusion from the decision making process. Study findings also highlight the value of information brokers in policy decision making and the importance of having access to unbiased empirical information during the design and implementation phases of important change initiatives.
Contributors
Sloat, Edward F. (Author) / Wetzel, Keith (Thesis advisor) / Amrein-Beardsley, Audrey (Thesis advisor) / Ewbank, Ann (Committee member) / Shough, Lori (Committee member) / Arizona State University (Publisher)
Created
2015
Description
In the First Innovations Initiative at Arizona State University, students are exposed to the culture of innovation and the entrepreneurial process through two courses situated intentionally within an American Indian sustainability context. In this action research dissertation, a summer field practicum was designed and implemented to complement the two in-classroom course offerings. The first implementation of the new summer field practicum was documented for the two participating students. A survey and focus group were conducted to evaluate the spring 2011 classroom course and, separately, to evaluate the summer field practicum. Students in the spring 2011 course and summer field practicum reported that they were stimulated to think more innovatively, gained interest in the subject area and entrepreneurial/innovation processes, and improved their skills related to public speaking, networking, problem solving and research. The summer practicum participants reported larger increases in confidence in creating, planning and implementing a sustainable entrepreneurship venture, compared with the reports of the spring in-classroom participants. Additionally, differences favoring the summer practicum students were found in reported sense of community and individualism in support of entrepreneurship and innovation. The study results are being used to revamp both the in-classroom and field practicum experience for the benefit of future participants. Specifically, the American Indian perspective will be more fully embedded in each class session, contemporary and timely articles and issues will be sought out and discussed in class, and the practicum experience will be further developed, with additional student participants and site organizations sought. Additionally, the trans-disciplinary team approach will continue, with additional professional development opportunities provided for current team members and the addition of new instructional team members.
Contributors
Walters, Fonda (Author) / Clark, Christopher M. (Thesis advisor) / Jarratt-Snider, Karen (Committee member) / Kelley, Michael (Committee member) / Arizona State University (Publisher)
Created
2012
Description
Science instructors need questions for use in exams, homework assignments, class discussions, reviews, and other instructional activities. Textbooks never have enough questions, so instructors must find them from other sources or generate their own. To supply instructors with biology questions, a semantic network approach was developed for generating open-response biology questions. The generated questions were compared to professionally authored questions.

To boost students' learning experience, adaptive selection was built on top of the generated questions. Bayesian Knowledge Tracing was used as an embedded assessment of the student's current competence so that a suitable question could be selected based on the student's previous performance. A between-subjects experiment with 42 participants was performed, in which half of the participants studied with adaptively selected questions and the rest studied with a mal-adaptive ordering of questions. Both groups significantly improved their test scores, and participants in the adaptive group registered larger learning gains than participants in the control group.
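A single Bayesian Knowledge Tracing update of the kind used for embedded assessment can be sketched as follows; the slip, guess, and learn parameter values are illustrative defaults, not the study's fitted values:

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One Bayesian Knowledge Tracing step: revise the probability that
    the student knows the skill after observing a single response."""
    if correct:
        evidence = p_know * (1 - p_slip)                  # knew it, no slip
        marginal = evidence + (1 - p_know) * p_guess      # or lucky guess
    else:
        evidence = p_know * p_slip                        # knew it, slipped
        marginal = evidence + (1 - p_know) * (1 - p_guess)
    p_posterior = evidence / marginal          # Bayes' rule on the response
    return p_posterior + (1 - p_posterior) * p_learn  # chance of learning now

# Mastery estimate rises after a correct answer, falls after an error.
p = 0.5
p_after_correct = bkt_update(p, True)
p_after_error = bkt_update(p, False, p_learn=0.0)
```

A selector built on this estimate can then choose the next question whose difficulty best matches the student's current mastery, which is the adaptive-selection idea described above.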

To explore the possibility of generating rich instructional feedback for machine-generated questions, a question-paragraph mapping task was identified. Given a set of questions and a list of paragraphs from a textbook, the goal of the task was to map the related paragraphs to each question. An algorithm was developed whose performance was comparable to that of human annotators.
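The abstract does not detail the mapping algorithm, so the sketch below uses a simple lexical-overlap baseline to illustrate the task itself; the tokenization, Jaccard scoring, and example texts are illustrative assumptions, not the dissertation's method:

```python
def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def map_question(question, paragraphs):
    """Return the index of the paragraph most similar to the question.
    Plain lexical overlap is only a baseline for this mapping task."""
    q_tokens = set(question.lower().split())
    scores = [jaccard(q_tokens, set(p.lower().split())) for p in paragraphs]
    return max(range(len(paragraphs)), key=lambda i: scores[i])

# Hypothetical textbook paragraphs and a question to map onto them.
paragraphs = [
    "mitochondria produce ATP through cellular respiration",
    "chloroplasts capture light energy for photosynthesis",
]
best = map_question("where does photosynthesis occur", paragraphs)
```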

A multiple-choice question with high-quality distractors (incorrect answers) can be pedagogically valuable as well as much easier to grade than an open-response question. Thus, an algorithm was developed to generate good distractors for multiple-choice questions. The machine-generated multiple-choice questions were compared to human-generated questions on three measures: question difficulty, question discrimination, and distractor usefulness. In a study with 200 participants recruited from Amazon Mechanical Turk, the two types of questions performed very similarly on all three measures.
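Difficulty and discrimination are conventionally computed in classical test theory; the sketch below, on hypothetical toy data, shows one common formulation (proportion correct and point-biserial correlation with the total score), not necessarily the exact procedure used in the study:

```python
def difficulty(item_responses):
    """Proportion of examinees answering correctly (higher = easier item)."""
    return sum(item_responses) / len(item_responses)

def discrimination(item_responses, total_scores):
    """Point-biserial correlation between item score and total score:
    how well the item separates strong from weak examinees."""
    n = len(item_responses)
    mean_i = sum(item_responses) / n
    mean_t = sum(total_scores) / n
    cov = sum((i - mean_i) * (t - mean_t)
              for i, t in zip(item_responses, total_scores)) / n
    var_i = sum((i - mean_i) ** 2 for i in item_responses) / n
    var_t = sum((t - mean_t) ** 2 for t in total_scores) / n
    return cov / (var_i * var_t) ** 0.5

# Toy data: 1 = correct, 0 = incorrect, for five examinees on one item.
item = [1, 1, 1, 0, 0]
totals = [9, 8, 7, 3, 2]
p = difficulty(item)               # 0.6: a moderately easy item
r = discrimination(item, totals)   # near 1: item tracks overall ability
```

Distractor usefulness can be assessed analogously, e.g. by checking that each incorrect option attracts some responses and that choosing it correlates negatively with total score.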
Contributors
Zhang, Lishang (Author) / VanLehn, Kurt (Thesis advisor) / Baral, Chitta (Committee member) / Hsiao, Ihan (Committee member) / Wright, Christian (Committee member) / Arizona State University (Publisher)
Created
2015
Description
In natural language processing, language models have achieved remarkable success over the last few years, with Transformers at the core of most of these models. Their success can be mainly attributed to the enormous amount of curated data they are trained on. Yet even though such language models are trained on massive curated data, they often need specific extracted knowledge to understand and reason better, because relevant knowledge may be implicit or missing, which hampers machine reasoning. Moreover, manual knowledge curation is time-consuming and error-prone. Hence, finding fast and effective methods to extract such knowledge from data is important for improving language models, as is finding ideal ways to utilize that knowledge by incorporating it into them. Successful knowledge extraction and integration lead to the important question of evaluating the knowledge of such models, by developing tools or introducing challenging test suites that reveal their limitations and help improve them further. To improve transformer-based models, then, understanding the role of knowledge becomes important. In pursuit of improving language models with knowledge, in this dissertation I study three broad research directions spanning the natural language, biomedical, and cybersecurity domains: (1) Knowledge Extraction (KX) - How can transformer-based language models be leveraged to extract knowledge from data? (2) Knowledge Integration (KI) - How can such specific knowledge be used to improve such models? (3) Knowledge Evaluation (KE) - How can language models be evaluated for specific skills so as to understand their limitations? I propose methods to extract explicit textual, implicit structural, missing textual, and missing structural knowledge from natural language and binary programs using transformer-based language models. 
I develop ways to improve the language model’s multi-step and commonsense reasoning abilities using external knowledge. Finally, I develop challenging datasets which assess their numerical reasoning skills in both in-domain and out-of-domain settings.
Contributors
Pal, Kuntal Kumar (Author) / Baral, Chitta (Thesis advisor) / Wang, Ruoyu (Committee member) / Blanco, Eduardo (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created
2023
Description
The purpose of this study was to increase microlearning training module usage and completions by 10–15% over a 30-day period by including evaluation in the design and development of a new microlearning training module in the golf equipment industry. Evaluation was conducted using a bespoke evaluation tool, which was designed and developed using design thinking methodology. The evaluation tool was applied to two previously designed microlearning modules, Driver Distance B and Driver Distance C, both of which served as comparisons for the new module's completion data. Evaluation reports were generated that informed the development of the new module, named Golf Software. This action research study was grounded in constructivist learning theory, design thinking, and dashboards research. A nested case study-mixed methods (CS-MM) design and a sequential qualitative-to-quantitative design were used. Research was conducted with the Knowledge Management Department at Ping, a golf original equipment manufacturer (OEM) in Phoenix, Arizona. Participants included three eLearning Designers, among them the researcher as a participant observer. Qualitative data included interviews, a reflective researcher journal, and artifacts such as the new microlearning training module and the evaluation reports. Quantitative data included completion numbers collected from the organization's learning management system (LMS) and email campaign service. Findings from this study were mixed: the new module's completion numbers were 20.27% greater than Driver Distance C's but 7.46% lower than Driver Distance B's. The objective of the study was not met, but the outcomes provided valuable information about incorporating evaluation into the Knowledge Management Department's instructional design process.
Contributors
Regan, Elizabeth (Author) / Marsh, Josephine P (Thesis advisor) / Leahy, Sean (Committee member) / Gretter, Sarah (Committee member) / Arizona State University (Publisher)
Created
2020
Description
Making significant progress on the U.N. Sustainable Development Goals (SDGs) requires change agents equipped with key competencies in sustainability. While thousands of sustainability programs have emerged at various educational levels over the past decade, there is, as of yet, no reliable way to assess whether these programs successfully convey key competencies in sustainability. This dissertation contributes to addressing this gap in three ways. First, it reviews the body of work on key competencies in sustainability. Based on broad agreement around five key competencies, as well as an emerging set of three, an extended framework is outlined that can be used as a unified set of learning objectives across sustainability programs. The next chapter reviews the scholarly work on assessing sustainability competencies. Based on this review, a typology of assessment tools is proposed, offering guidance to both educators and researchers. Finally, drawing on the experience of the four-year "Educating Future Change Agents" project, the last chapter explores the results from a diverse set of competency assessments in numerous courses. The study appraises assessment practices and results to demonstrate opportunities and challenges in the current state of assessing key competencies in sustainability. The results of this doctoral thesis are expected to make a practical and scholarly contribution to teaching and learning in sustainability programs, in particular with regard to reliably assessing key competencies in sustainability.
Contributors
Redman, Aaron (Author) / Wiek, Arnim (Thesis advisor) / Barth, Matthias (Committee member) / Basile, George (Committee member) / Fischer, Daniel (Committee member) / Mochizuki, Yoko (Committee member) / Arizona State University (Publisher)
Created
2020