Matching Items (5)

Description
The purpose of this survey study was to collect data from pre-K-12 educators in the U.S. regarding their perceptions of the purpose, conceptions, use, impact, and results of educational research. The survey tool was based on existing questionnaires and case studies in the literature, as well as newly developed items. A recruiting email was sent to 3,908 educators in a database developed over more than 10 years at the world's largest education company; the 400 elementary and secondary teachers in the final sample completed the 48-question online survey over a three-week deployment period in the spring of 2013. Results indicated that, overall, teachers believe educational research is important and that its most important purpose is to increase the effectiveness of classroom practice, yet they do not frequently seek out research in the course of their practice. Teachers perceive results published in research journals as the most trustworthy, yet also perceive those journals as the most difficult to access; in-service trainings are their second most frequent source of research.

These findings have implications for teachers, administrators, policy-makers, and researchers. Educational researchers should seek to address both the theoretical and the applied aspects of learning. Professional development must make explicit links between research findings and classroom strategies and tactics, and research must be made more readily available to those who are not currently seeking additional credentialing and therefore do not individually have access to scholarly literature. Further research is needed to expand the survey sample and refine the survey instrument. Similar research with administrators in pre-K-20 settings, as well as in-depth interviews, would serve to investigate the "why" behind many of the findings.
Contributors: Mahoney, Shawn (Author) / Savenye, Wilhelmina (Thesis advisor) / Nelson, Brian (Committee member) / Atkinson, Robert (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
To foster both external and internal accountability, universities seek more effective models for student learning outcomes assessment (SLOA). Meaningful and authentic measurement of program-level student learning outcomes requires engagement with an institution’s faculty members, especially to gather student performance assessment data using common scoring instruments, or rubrics, across a university’s many colleges and programs. Too often, however, institutions rely on faculty engagement for SLOA initiatives like this without providing necessary support, communication, and training. The resulting data may lack sufficient reliability and reflect deficiencies in an institution’s culture of assessment.

This mixed methods action research study gauged how well one form of SLOA training – a rubric-norming workshop – could affect both inter-rater reliability for faculty scorers and faculty perceptions of SLOA while exploring the nature of faculty collaboration toward a shared understanding of student learning outcomes. The study participants, ten part-time faculty members at the institution, each held primary careers in the health care industry, apart from their secondary role teaching university courses. Accordingly, each contributed expertise and experience to the rubric-norming discussions, surveys of assessment-related perceptions, and individual scoring of student performance with a common rubric. Drawing on sociocultural learning principles and the specific lens of activity theory, influences on faculty SLOA were arranged and analyzed within the heuristic framework of an activity system to discern effects of collaboration and perceptions toward SLOA on consistent rubric-scoring by faculty participants.

Findings suggest participation in the study did not correlate with increased inter-rater reliability for faculty scorers using the common rubric. Constraints within the assessment tools and unclear institutional leadership prevented more reliable use of common rubrics. Instead, faculty participants resorted to individual assessment approaches to meaningfully guide students toward classroom achievement and preparation for careers in the health care field. Despite this, faculty participants valued SLOA, collaborated readily with colleagues on shared assessment goals, and worked hard to teach and assess students meaningfully.
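As an illustration of the inter-rater reliability analysis this abstract centers on, here is a minimal Python sketch. The thesis does not specify which agreement statistic was used; Fleiss' kappa is shown purely as one common choice for several raters scoring on a shared categorical rubric, and the rubric scale and score data below are hypothetical.

```python
# Minimal illustration of quantifying rubric-scoring agreement across raters.
# Fleiss' kappa is one common statistic for this; the thesis's actual method
# is not specified, and the rubric levels and scores here are hypothetical.
from collections import Counter

RUBRIC_LEVELS = [1, 2, 3, 4]  # hypothetical 4-point rubric scale

def fleiss_kappa(scores_per_artifact):
    """Fleiss' kappa for artifacts each scored by the same number of raters."""
    n_raters = len(scores_per_artifact[0])
    n_items = len(scores_per_artifact)
    counts = [Counter(scores) for scores in scores_per_artifact]

    # Observed agreement, averaged over artifacts
    p_bar = sum(
        (sum(c[level] ** 2 for level in RUBRIC_LEVELS) - n_raters)
        / (n_raters * (n_raters - 1))
        for c in counts
    ) / n_items

    # Chance agreement from the marginal category proportions
    p_e = sum(
        (sum(c[level] for c in counts) / (n_items * n_raters)) ** 2
        for level in RUBRIC_LEVELS
    )
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical data: 4 student artifacts, each scored by 3 faculty raters
scores = [[3, 3, 4], [2, 2, 2], [4, 3, 3], [1, 2, 1]]
print(f"Fleiss' kappa: {fleiss_kappa(scores):.2f}")
```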
Contributors: Williams, Nicholas (Author) / Liou, Daniel D (Thesis advisor) / Rotheram-Fuller, Erin (Committee member) / Turbow, David (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
“If you treat an individual as he is, he will stay as he is,

but if you treat him as if he were what he ought to be

and could be, he will become what he ought and could be.”

Johann Wolfgang von Goethe (1749-1832)

Teacher leaders in public education have a great amount of responsibility on their shoulders in today's political climate. They are responsible for evaluating instruction, improving the teaching force, and raising student achievement. These responsibilities, coupled with the day-to-day demands of effectively running a school, have caused many teacher leaders to disengage from the true purpose of their work and have led to retention rates that are less than desirable. This mixed methods action research study was conducted to investigate how participation in L.E.A.D. (Learn. Engage. Act. Discuss.) groups influenced the self-perceptions teacher leaders have of their ability to engage in the change process at their schools. The innovation was a series of three action-driven sessions aimed at providing the participating teacher leaders with a space to discuss their roles in the change process at their school, their perceived engagement in those processes, and their perceived ability to navigate the technical, normative, and political dimensions of change. The greater purpose behind the design of this innovation was to provide teacher leaders with tools they could utilize to support them in realizing that their level of engagement was not totally dependent on those around them. Through the L.E.A.D. groups, it became evident that the participating teacher leaders were resilient and optimistic individuals who, despite factors outside their control demanding their time and energy, remained dedicated to the change process at their schools.
Contributors: Saltmarsh, Sarah Schmaltz (Author) / Liou, Daniel D (Thesis advisor) / Rotheram-Fuller, Erin (Committee member) / Shaw, Ann (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
With the advent of Massive Open Online Courses (MOOCs), educators have the opportunity to collect data from students and use it to derive insightful information about them. Specifically, for programming-based courses, the ability to identify the specific areas or topics that need more attention from students can be of immense help. But the majority of traditional, non-virtual classes lack the ability to uncover such information, which could serve as feedback on the effectiveness of teaching. In the majority of schools, paper exams and assignments provide the only form of assessment for measuring students' success in achieving the course objectives. The overall grade obtained on paper exams and assignments does not necessarily present a complete picture of a student's strengths and weaknesses. In part, this can be addressed by incorporating research-based technology into classrooms to obtain real-time updates on students' progress. But introducing technology to provide real-time, class-wide engagement involves a considerable investment, both academically and financially, which prevents its adoption and thereby the ideal, technology-enabled classroom. With increasing class sizes, it is becoming impossible for teachers to keep persistent track of their students' progress and to provide personalized feedback. What if we could provide technology support without adding more burden to the existing pedagogical approach? How can we enable semantic enrichment of exams that translates to students' understanding of the topics taught in class? Can we provide feedback to students that goes beyond numbers alone and reveals the areas that need their focus?

In this research I focus on bringing the capability of conducting insightful analysis to paper exams through a less intrusive learning analytics approach that fits into typical classrooms with minimal technology introduction. Specifically, the work focuses on automatic indexing of programming exam questions with ontological semantics. The thesis also focuses on designing and evaluating a novel semantic visual analytics suite for in-depth course monitoring. By visualizing the semantic information to illustrate the areas that need a student's focus and enabling teachers to visualize class-level progress, the system provides richer feedback to both sides for improvement.
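As a rough illustration of the automatic indexing idea described above, the sketch below tags exam questions with concepts from a toy ontology and aggregates scores per concept to surface topics needing attention. The ontology, keywords, questions, and scores are hypothetical assumptions and do not reproduce the thesis's actual indexing pipeline or visual analytics suite.

```python
# Toy sketch: tag programming exam questions with ontology concepts via
# keyword matching, then average scores per concept to flag weak topics.
# Concepts, keywords, questions, and scores are all hypothetical.
from collections import defaultdict

ONTOLOGY = {  # hypothetical mini-ontology: concept -> indicative keywords
    "loops": ["for", "while", "iterate"],
    "recursion": ["recursive", "base case", "recursion"],
    "arrays": ["array", "index", "subscript"],
}

def index_question(question_text):
    """Return the ontology concepts whose keywords appear in the question."""
    text = question_text.lower()
    return [c for c, keywords in ONTOLOGY.items()
            if any(k in text for k in keywords)]

def concept_report(questions, scores):
    """Average normalized score per concept across a set of graded questions."""
    totals, counts = defaultdict(float), defaultdict(int)
    for question, score in zip(questions, scores):
        for concept in index_question(question):
            totals[concept] += score
            counts[concept] += 1
    return {concept: totals[concept] / counts[concept] for concept in totals}

questions = [
    "Write a for loop that sums the elements of an array.",
    "Explain the base case in this recursive function.",
]
scores = [0.6, 0.9]  # hypothetical normalized exam scores
print(concept_report(questions, scores))
```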
Contributors: Pandhalkudi Govindarajan, Sesha Kumar (Author) / Hsiao, I-Han (Thesis advisor) / Nelson, Brian (Committee member) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Paper assessment remains an essential formal assessment method in today's classes. However, it is difficult to track student learning behavior on physical papers. This thesis presents a new educational technology, the Web Programming Grading Assistant (WPGA). WPGA not only serves as a grading system but also as a feedback-delivery tool that connects paper-based assessments to the digital space. I designed a classroom study and collected data from ASU computer science classes. I tracked and modeled students' reviewing and reflecting behaviors based on their use of WPGA. I analyzed students' reviewing efforts in terms of frequency, timing, and the associations with their academic performance. Results showed that students put extra emphasis on reviewing prior to the exams, and their efforts demonstrated a desire to review formal assessments regardless of whether they were graded for academic performance or for attendance. In addition, all students paid more attention to reviewing quizzes and exams toward the end of the semester.
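For context on the reviewing-behavior analysis described here, the following is a minimal hypothetical sketch that summarizes review frequency and timing relative to an exam date from an interaction log. The event fields, dates, and data layout are illustrative assumptions, not WPGA's actual schema.

```python
# Hypothetical sketch: summarize per-student review frequency and how close
# each review falls to an exam date. Fields and dates are illustrative only.
from datetime import date

# Hypothetical review-event log: (student_id, date_of_review)
events = [
    ("s1", date(2017, 4, 28)), ("s1", date(2017, 5, 1)),
    ("s2", date(2017, 5, 1)),  ("s2", date(2017, 5, 2)),
]
EXAM_DATE = date(2017, 5, 3)  # hypothetical exam date

def review_summary(events, exam_date):
    """Per-student review count and days between each review and the exam."""
    summary = {}
    for student, day in events:
        stats = summary.setdefault(student, {"reviews": 0, "days_before": []})
        stats["reviews"] += 1
        stats["days_before"].append((exam_date - day).days)
    return summary

print(review_summary(events, EXAM_DATE))
```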
Contributors: Huang, Po-Kai (Author) / Hsiao, I-Han (Thesis advisor) / Nelson, Brian (Committee member) / VanLehn, Kurt (Committee member) / Arizona State University (Publisher)
Created: 2017