Matching Items (12)
Description
Teacher observations are an integral part of teacher development. Many teachers are observed once or twice a year to evaluate their performance and hold them accountable for meeting standards. Instructional coaches, however, observe and work with teachers to help them reflect on their performance, with the goal of improving their practice. Video-based evidence has long been used in connection with teacher reflection, and as the technology necessary to record video has become more readily available, video recordings have found an increasing presence in teacher observations. In addition, more and more schools are turning to mobile technology to help record evidence during teacher observations. Several mobile applications have been developed that are designed to help instructional coaches, administrators, and teachers make the most of teacher observations. This study looked at the use of the DataCapture mobile application to record video-based evidence in teacher observations as part of an instructional coaching program in a large public school district in the Southwestern United States. Six instructional coaches and two teachers participated in interviews at the end of the study period. Additional data were collected from the DataCapture mobile application and from a survey of instructional coaches conducted by the school district in connection with its Title I programs. Results show that instructional coaches feel that using video-based evidence for teacher reflection is effective in a number of ways. Teachers who had experienced seeing themselves on video also felt that video-based evidence is effective at improving teacher reflection, while teachers who had not yet seen themselves on video displayed extreme apprehensiveness about being video recorded in the classroom. Instructional coaches felt the DataCapture mobile application was beneficial in teacher evaluation, but several issues affected the use of the mobile application and video-based evidence, including logistics, time requirements, and administrative support. The discussion focuses on recommendations for successfully using video-based evidence in an instructional coaching context, as well as suggestions for other researchers attempting to study how video-based evidence affects teachers' ability to reflect on their own teaching.
ContributorsShewell, Justin Reed (Author) / Bitter, Gary (Thesis advisor) / Dawson, Edwin (Committee member) / Blair, Heidi (Committee member) / Arizona State University (Publisher)
Created2013
Description
The use of educational technologies as a tool to improve academic achievement continues to increase as more technologies become available to students. However, teachers are entering the classroom not fully prepared to integrate technology into their daily classroom teaching because they have not been adequately prepared to do so. Teacher preparation programs are falling short in this area because educational technology and the role of technology in the classroom are seen as an extra component of daily teaching rather than a central one. Many teacher preparation programs consist of one stand-alone educational technology course that is expected to prepare teachers to integrate technology in their future classrooms. Throughout the remainder of the program, the teachers do not see educational technologies modeled in their other core courses, nor do they get the hands-on interaction necessary to become more confident in using these technologies with their future students. The purpose of this study was to examine teachers' views of educational technology in the classroom among those enrolled in a graduate program. The study consisted of 74 first- and second-year teachers who were enrolled in an alternative teacher preparation program. Thirty-four of the teachers received the Integrating Curriculum and Technology (iCAT) intervention and the remaining 40 teachers were part of the control group. Each teacher completed a pre- and post-intervention questionnaire, and 23 of the 74 teachers participated in one of three focus group interviews. Additional data from the teachers' course instructors were gathered and analyzed to complement the focus group and quantitative data. Results showed that iCAT participants' scores for confidence in using technology and efficacy for using educational technology increased at a faster rate than the control group participants' scores. Similarly, confidence in using technology, perceptions about integrating technology in the classroom, and efficacy for using educational technology could be predicted by the amount of hands-on interaction with technology that the teachers received during their graduate course. The discussion focuses on recommendations for infusing technology throughout teacher preparation programs so that teachers have the tools to prepare their students to use a variety of technologies and to be better prepared to compete in today's workforce.
ContributorsKisicki, Todd (Author) / Wetzel, Keith (Thesis advisor) / Bitter, Gary (Thesis advisor) / Buss, Ray (Committee member) / Savenye, Wilhelmina (Committee member) / Arizona State University (Publisher)
Created2012
Description
The purpose of this study was to investigate the effects of instructor response prompts and rubrics on students' performance in an asynchronous discussion-board assignment, their learning achievement on an objective-type posttest, and their reported satisfaction levels. Researchers who have studied asynchronous computer-mediated student discussion transcripts have found evidence of mostly mid-level critical thinking skills, with fewer examples of lower- or higher-order thinking skills. Some researchers suggest that instructors may facilitate increased demonstration of higher-order critical thinking skills within asynchronous discussion-board activities. However, there is little empirical evidence available to compare the use of different external supports to facilitate students' critical thinking skills performance and learning achievement in blended learning environments. Results of the present study indicate that response prompts and rubrics can affect students' discussion performance, learning, and satisfaction ratings. The results, however, are complex, perhaps mirroring the complexity of instructor-led online learning environments. Regarding discussion-board performance, presenting students with a rubric tended to yield higher scores on most aspects, that is, on overall performance as well as on depth and breadth of performance, though these differences were not significant. In contrast, instructor prompts tended to yield lower scores on aspects of discussion-board performance; on breadth, in fact, this main-effect difference was significant. Interactions also indicated significant differences on several aspects of discussion-board performance, in most cases indicating that the combination of rubric and prompt was detrimental to scores. Learning performance on the quiz again showed the effectiveness of rubrics, with students who received the rubric earning significantly higher scores, and with no main effects or interactions for instructor prompts. Regarding student satisfaction, again, the picture is complicated. Results indicated that, in some instances, the integration of prompts resulted in lower satisfaction ratings, particularly in the areas of students' perceptions of the amount of work required, learning in the partially online format, and student-to-student interaction. Based on these results, design considerations to support rubric use and explicit feedback in asynchronous discussions to support student learning are proposed.
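The factorial structure implied by this abstract (rubric presence crossed with instructor prompts, with main effects and interactions) could be analyzed as a 2 x 2 between-subjects ANOVA. The sketch below is only an illustration of that kind of analysis; the Python/statsmodels tooling, the file name, and the column names (rubric, prompt, breadth) are assumptions, not details taken from the study.

# Hypothetical sketch of a 2 x 2 between-subjects ANOVA (rubric x prompt).
# File and column names are illustrative only.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("discussion_scores.csv")  # assumed columns: rubric, prompt, breadth

# Fit a linear model with both factors and their interaction,
# then summarize it as a Type II ANOVA table.
model = smf.ols("breadth ~ C(rubric) * C(prompt)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))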
ContributorsGiacumo, Lisa (Author) / Savenye, Wilhelmina (Thesis advisor) / Nelson, Brian (Committee member) / Legacy, Jane (Committee member) / Bitter, Gary (Committee member) / Arizona State University (Publisher)
Created2012
Description
This study collected and examined information on K-12 teachers currently involved in online education in the United States. The purposes of this study included defining the demographics of these teachers, determining the extent to which they were formally educated and/or trained to teach online, and comparing these findings to those from a similar study conducted six years earlier. A web-based survey, including questions in both open and closed form, was used to gather data from 325 participants currently teaching at least one online class at publicly funded K-12 online schools nationwide. Survey questions covered the following domains: a) personal demographics, b) educational background and experience, c) pre-service training, d) in-service training, and e) current online teaching assignments. The results of this study indicate that those currently teaching online to K-12 students have demographic characteristics similar to those of face-to-face teachers, particularly in terms of gender, age, and ethnicity/race; however, the online teachers generally had higher levels of educational attainment, had more years of teaching experience, and were significantly more likely to teach on a part-time basis. It was found that teachers working with K-12 students online are self-motivated, place a high value on learning and education, and enjoy the challenge and process of using technology for this purpose. Based on the findings, only a limited number of university-based teacher preparation programs address any aspect of the methods and techniques required for teaching online, and even fewer offer online field placement opportunities for pre-service teachers. For the most part, current online teachers were found to have received training after graduation, while working in the field. Further research is needed to specifically define and empirically validate the methods and techniques required for effective online teaching at the K-12 level so that formal educational and training programs can be further developed to effectively prepare future K-12 online teachers.
ContributorsLarson, Jean Sutton (Author) / Archambault, Leanna (Thesis advisor) / Savenye, Wilhelmina (Thesis advisor) / Bitter, Gary (Committee member) / Arizona State University (Publisher)
Created2014
Description
The purpose of this instructional design and development study was to describe, evaluate, and improve the instructional design process and the work of interdisciplinary design teams. A National Science Foundation (NSF) funded Transforming Undergraduate Education in Science (TUES) project was the foundation for this study. The project developed new curriculum materials to teach learning content in unsaturated soils in undergraduate geotechnical engineering classes, a subset of civil engineering. The study describes the instructional design (ID) processes employed by the team members as they assessed the need, developed the materials, disseminated the learning unit, and evaluated its effectiveness, along with the role the instructional design process played in the success of the learning materials with regard to student achievement and faculty and student attitudes. Learning data were collected from undergraduate geotechnical engineering classes at eight partner universities across the country and Puerto Rico over three phases of implementation. Data collected from students and faculty included pretest/posttest scores and attitudinal survey questions. The findings indicated significant learning growth among the students of the faculty who were provided all of the learning materials. The findings also indicated overall faculty and student satisfaction with the instructional materials. Observational and anecdotal data were also collected in the form of team meeting notes, personal observations, interviews, and design logs. Findings from these data indicated a preference for working on an interdisciplinary instructional design team. All of these data assisted in the analysis of the ID process, providing a basis for the descriptive and inferential findings used to offer suggestions for improving the ID process and the work of interdisciplinary instructional design teams.
ContributorsOrnelas, Arthur (Author) / Savenye, Wilhelmina C. (Thesis advisor) / Atkinson, Robert (Committee member) / Bitter, Gary (Committee member) / Houston, Sandra (Committee member) / Arizona State University (Publisher)
Created2015
Description
This study was conducted to assess the performance of 176 students who received algebra instruction through an online platform presented in one of two experimental conditions, in order to explore the effect of personalized learning paths by comparing them with linearly flowing instruction. The study was designed around eight research questions investigating the effect of personalized learning paths on students' learning, intrinsic motivation, and satisfaction with their experience. Quantitative results were analyzed using Analysis of Variance (ANOVA), Analysis of Covariance (ANCOVA), and split-plot ANOVA methods. Additionally, qualitative feedback data were gathered from students and teachers on their experience to better explain the quantitative findings as well as to improve understanding of how to effectively design an adaptive personalized learning platform. Quantitative results showed no statistically significant differences between students assigned to the linear and the adaptive personalized instructional flows.
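As a rough illustration of the split-plot (mixed) ANOVA named above, one between-subjects factor (linear vs. adaptive flow) crossed with a repeated-measures factor (e.g., pre/post testing time) might be analyzed as sketched below. The use of Python with the pingouin package, along with all file, column, and variable names, is an assumption for illustration; the dissertation does not specify its analysis software.

# Hypothetical sketch of a split-plot (mixed) ANOVA: instructional flow
# (between subjects) crossed with testing time (within subjects).
# All names below are assumed for illustration.
import pandas as pd
import pingouin as pg

df = pd.read_csv("algebra_scores_long.csv")  # assumed columns: student, condition, time, score

aov = pg.mixed_anova(
    data=df,
    dv="score",           # dependent variable (test score)
    within="time",        # repeated-measures factor (e.g., pre vs. post)
    between="condition",  # linear vs. adaptive personalized flow
    subject="student",    # identifier for each learner
)
print(aov)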

The lack of significant differences was explained by two main factors: (a) low usage and (b) platform- and content-related issues. Low usage may have prevented students from being exposed to the platforms long enough to create the potential for differences between the groups. The reasons for low usage may in part be explained by the qualitative findings, which indicated that unmotivated and tired teachers and students were not very enthusiastic about the study because it occurred near the end of the school year. Further, computer access was a challenging issue at the school throughout the study. In turn, platform- and content-related issues worked to inhibit the potential beneficial effects of the platforms. The three most prominent issues were: (a) the majority of the students found the content boring or difficult, (b) repeated recommendations from the adaptive platform created frustration, and (c) a barely moving progress bar caused disappointment among participants.
ContributorsBicer, Alpay (Author) / Bitter, Gary G. (Thesis advisor) / Buss, Ray R (Committee member) / Legacy, Jane M. (Committee member) / Arizona State University (Publisher)
Created2015
Description
Public mathematics education in the United States is not at its best, and technology is often seen as part of the solution to this issue. With high-speed Internet, mobile technologies, ever-improving computer programming and graphing, and the concepts of learning management systems (LMSs) and online learning environments (OLEs), technology-based learning has risen to a whole new level. The new generation of online learning enables multimodal utilization and interactivity with instant feedback, among other valuable characteristics identified in this study. Studies that have evaluated the effects of online learning often measure only the immediate impact on student achievement; very few have investigated longer-term effects in addition to short-term ones.

In this study, the effects of the new-generation Online Learning Activity Based (OLAB) Curriculum on middle school students' mathematics achievement, as measured by the statewide high-stakes testing system, were examined. The results indicated that the treatment group performed better than the control group in the short term (immediately after the intervention), medium term (one year after the intervention), and long term (two years after the intervention), and that the differences were statistically significant in the short and long terms.

Within the context of this study, the researcher also examined some of the factors affecting student achievement while using the OLE as a supplemental resource, namely, time and frequency of usage, professional development of the facilitators, modes of instruction, and fidelity of implementation. While the researcher detected positive correlations between all of these variables and student achievement, he observed that school culture was indeed a major factor behind the differences attributed to the treatment group teachers.

The researcher discovered that, among the treatment group teachers, the ones who spent more time on professional development used the OLE with greater fidelity and attained greater gains in student achievement; interestingly, they came from the same schools. This verified the importance of school culture in teachers' attitudes toward making the most of the resources available to them so as to achieve better results in terms of student success on high-stakes tests.
ContributorsMeylani, Rusen (Author) / Bitter, Gary G. (Thesis advisor) / Legacy, Jane (Committee member) / Buss, Ray (Committee member) / Arizona State University (Publisher)
Created2016
Description
Paper assessment remains an essential formal assessment method in today's classes. However, it is difficult to track student learning behavior on physical papers. This thesis presents a new educational technology: the Web Programming Grading Assistant (WPGA). WPGA serves not only as a grading system but also as a feedback delivery tool that connects paper-based assessments to digital space. I designed a classroom study and collected data from ASU computer science classes. I tracked and modeled students' reviewing and reflecting behaviors based on their use of WPGA. I analyzed students' reviewing efforts in terms of frequency, timing, and their associations with academic performance. Results showed that students put extra emphasis on reviewing prior to exams, and their efforts demonstrated a desire to review formal assessments regardless of whether they were graded for academic performance or for attendance. In addition, all students paid more attention to reviewing quizzes and exams toward the end of the semester.
ContributorsHuang, Po-Kai (Author) / Hsiao, I-Han (Thesis advisor) / Nelson, Brian (Committee member) / VanLehn, Kurt (Committee member) / Arizona State University (Publisher)
Created2017
Description
While predicting completion in Massive Open Online Courses (MOOCs) has been an active area of research in recent years, predicting completion in self-paced MOOCs, the fastest-growing segment of open online courses, has largely been ignored. Using learning analytics and educational data mining techniques, this study examined data generated by over 4,600 individuals working in a self-paced, open-enrollment college algebra MOOC over a period of eight months.

Although just 4% of these students completed the course, models were developed that could correctly predict, nearly 80% of the time, which students would complete the course and which would not, based on each student's first day of work in the online course. Logistic regression was used as the primary tool to predict completion, focusing on variables associated with self-regulated learning (SRL) and on demographic variables available from the survey information gathered as students begin edX courses (the MOOC platform employed).

The strongest SRL predictor was the amount of time students spent in the course on their first day. The number of math skills obtained the first day and the pace at which these skills were gained were also predictors, although pace was negatively correlated with completion. Prediction models using only SRL data obtained on the first day in the course correctly predicted course completion 70% of the time, whereas models based on first-day SRL and demographic data made correct predictions 79% of the time.
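A hedged sketch of how a first-day completion model of this kind might be fit follows. The scikit-learn workflow, the feature names, and the data file are illustrative assumptions; only the general approach (logistic regression on first-day SRL and demographic variables, with a very low completion rate) comes from the abstract.

# Hypothetical sketch of a first-day completion model: logistic regression
# on assumed self-regulated-learning (SRL) and demographic features.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("mooc_first_day.csv")  # assumed data file and columns
features = ["first_day_minutes", "skills_gained_day1", "skills_per_hour",
            "age", "prior_education_level"]
X, y = df[features], df["completed"]

# A stratified split preserves the roughly 4% completion rate in both sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# class_weight="balanced" offsets the heavy class imbalance.
clf = LogisticRegression(max_iter=1000, class_weight="balanced")
clf.fit(X_train, y_train)
print("holdout accuracy:", clf.score(X_test, y_test))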
ContributorsCunningham, James Allan (Author) / Bitter, Gary (Thesis advisor) / Barber, Rebecca (Committee member) / Douglas, Ian (Committee member) / Arizona State University (Publisher)
Created2017
Description
This study investigated the effects of distributed-presentation microlearning and the testing effect on mobile devices, as well as student attitudes about the use of mobile devices for learning in higher education. For this study, a mobile device was defined as a smartphone. All communication, content, and testing were completed remotely through participants' mobile devices.

The study consisted of four components: (a) an attitudinal and demographic pre-survey, (b) five mobile instructional modules, (c) mobile quizzes, and (d) an attitudinal post-survey. A total of 311 participants in higher education were enrolled in the study; 137 participants completed all four components. Participants were randomly assigned to experimental conditions in a 2 x 2 factorial design. The levels of the first factor, distribution of instructional content, were once per day and once per week. The levels of the second factor, testing, were a quiz after each module plus a comprehensive quiz, and a single comprehensive quiz after all instruction. The dependent variable was learning outcomes in the form of quiz-score results. Attitudinal survey results were analyzed using principal axis factoring, which revealed three components: (a) student perceptions about the use of mobile devices in education, (b) student perceptions about instructors' beliefs regarding mobile devices for learning, and (c) student perceptions about the use of mobile devices post-instruction.

The results revealed several findings. There was no significant effect for type of delivery of instruction in a one-way ANOVA. There was a significant effect for testing in a one-way ANOVA. There were no main effects of delivery or testing in a 2 x 2 factorial design, and there was no interaction effect; however, there was a significant effect of testing on final quiz scores when controlling for technical beliefs in a 2 x 2 ANCOVA. The significant difference for testing was contradictory to some of the literature.
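To make the 2 x 2 ANCOVA concrete, a minimal sketch is given below: delivery schedule crossed with testing condition, with the technical-beliefs measure entered as a covariate. The statsmodels code, file name, and column names are assumptions for illustration, not the study's actual analysis script.

# Hypothetical sketch of a 2 x 2 ANCOVA: delivery schedule x testing
# condition, with technical beliefs as a covariate. Names are assumed.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("microlearning_quiz.csv")  # assumed columns: delivery, testing, tech_beliefs, final_quiz

model = smf.ols(
    "final_quiz ~ C(delivery) * C(testing) + tech_beliefs", data=df
).fit()
print(sm.stats.anova_lm(model, typ=2))  # Type II table including the covariate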

Ownership of personal mobile devices among persons aged 18–29 is nearly universal. Thus, future research on student attitudes and on the implementation of personal smartphones for microlearning and testing is still needed to develop and integrate mobile-ready content for higher education.
ContributorsRettger, Elaine (Author) / Bitter, Gary (Thesis advisor) / Legacy, Jane (Committee member) / Savenye, Wilhelmina (Committee member) / Arizona State University (Publisher)
Created2017