Description
The purpose of this study was to investigate the effects of instructor response prompts and rubrics on students' performance in an asynchronous discussion-board assignment, their learning achievement on an objective-type posttest, and their reported satisfaction levels. Researchers who have studied asynchronous computer-mediated student discussion transcripts have found evidence of mostly mid-level critical thinking skills, with fewer examples of lower- or higher-order thinking skills. Some researchers suggest that instructors may facilitate increased demonstration of higher-order critical thinking skills within asynchronous discussion-board activities. However, little empirical evidence is available to compare different external supports for facilitating students' critical thinking performance and learning achievement in blended learning environments. Results of the present study indicate that response prompts and rubrics can affect students' discussion performance, learning, and satisfaction ratings. The results, however, are complex, perhaps mirroring the complexity of instructor-led online learning environments. Regarding discussion-board performance, presenting students with a rubric tended to yield higher scores on most aspects, that is, on overall performance as well as on depth and breadth of performance, though these differences were not significant. In contrast, instructor prompts tended to yield lower scores on aspects of discussion-board performance; on breadth, this main-effect difference was significant. Interactions also indicated significant differences on several aspects of discussion-board performance, in most cases indicating that the combination of rubric and prompt was detrimental to scores.
Learning performance on the quiz again showed the effectiveness of rubrics: students who received the rubric earned significantly higher scores, with no main effects or interactions for instructor prompts. Regarding student satisfaction, the picture is again complicated. Results indicated that, in some instances, the integration of prompts resulted in lower satisfaction ratings, particularly in students' perceptions of the amount of work required, of learning in the partially online format, and of student-to-student interaction. Based on these results, design considerations for rubric use and explicit feedback in asynchronous discussions to support student learning are proposed.
Contributors: Giacumo, Lisa (Author) / Savenye, Wilhelmina (Thesis advisor) / Nelson, Brian (Committee member) / Legacy, Jane (Committee member) / Bitter, Gary (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
While predicting completion in Massive Open Online Courses (MOOCs) has been an active area of research in recent years, predicting completion in self-paced MOOCs, the fastest growing segment of open online courses, has largely been ignored. Using learning analytics and educational data mining techniques, this study examined data generated by over 4,600 individuals working in a self-paced, open-enrollment college algebra MOOC over a period of eight months.

Although just 4% of these students completed the course, models were developed that could predict correctly nearly 80% of the time which students would complete the course and which would not, based on each student’s first day of work in the online course. Logistic regression was used as the primary tool to predict completion and focused on variables associated with self-regulated learning (SRL) and demographic variables available from survey information gathered as students begin edX courses (the MOOC platform employed).

The strongest SRL predictor was the amount of time students spent in the course on their first day. The number of math skills obtained the first day and the pace at which these skills were gained were also predictors, although pace was negatively correlated with completion. Prediction models using only SRL data obtained on the first day in the course correctly predicted course completion 70% of the time, whereas models based on first-day SRL and demographic data made correct predictions 79% of the time.
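The kind of model the abstract describes, logistic regression predicting completion from first-day behavioral features, can be sketched as follows. This is a minimal illustrative implementation in plain Python, not the study's actual analysis: the feature values are invented, the gradient-descent fitting and 0.5 decision threshold are assumptions for demonstration, and the real models also drew on demographic survey variables and a pace feature that was negatively correlated with completion.

```python
import math

def sigmoid(z):
    """Logistic function mapping a real-valued score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=3000):
    """Fit logistic-regression weights by plain gradient descent.

    X: list of feature vectors (one per student); y: 0/1 completion labels.
    """
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss with respect to the score
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """1 = predicted to complete the course, 0 = predicted not to."""
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Hypothetical first-day features: [hours spent in course, math skills gained].
# These numbers are made up for illustration, not drawn from the study's edX data.
X = [[0.2, 1], [0.4, 2], [0.3, 1], [2.5, 6], [3.0, 8], [2.0, 5]]
y = [0, 0, 0, 1, 1, 1]  # 1 = student eventually completed the course

w, b = train_logistic(X, y)
```

In practice a statistics package would be used rather than hand-rolled gradient descent; the sketch only shows the mechanics of scoring a student's first-day activity and thresholding the resulting probability.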
Contributors: Cunningham, James Allan (Author) / Bitter, Gary (Thesis advisor) / Barber, Rebecca (Committee member) / Douglas, Ian (Committee member) / Arizona State University (Publisher)
Created: 2017