This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Description
The purpose of this study was to investigate the effects of instructor response prompts and rubrics on students' performance in an asynchronous discussion-board assignment, their learning achievement on an objective-type posttest, and their reported satisfaction levels. Researchers who have studied asynchronous computer-mediated student discussion transcripts have found evidence of mostly mid-level critical thinking skills, with fewer examples of lower- or higher-order thinking skills. Some researchers suggest that instructors may facilitate increased demonstration of higher-order critical thinking skills within asynchronous discussion-board activities. However, there is little empirical evidence available to compare the use of different external supports to facilitate students' critical thinking skills performance and learning achievement in blended learning environments. Results of the present study indicate that response prompts and rubrics can affect students' discussion performance, learning, and satisfaction ratings. The results, however, are complex, perhaps mirroring the complexity of instructor-led online learning environments. Regarding discussion-board performance, presenting students with a rubric tended to yield higher scores on most aspects, that is, on overall performance as well as depth and breadth of performance, though these differences were not significant. In contrast, instructor prompts tended to yield lower scores on aspects of discussion-board performance; on breadth, in fact, this main-effect difference was significant. Interactions also indicated significant differences on several aspects of discussion-board performance, in most cases indicating that the combination of rubric and prompt was detrimental to scores.
The learning performance on the quiz again showed the effectiveness of rubrics, with students who received the rubric earning significantly higher scores, and with no main effects or interactions for instructor prompts. Regarding student satisfaction, the picture is again complicated. Results indicated that, in some instances, the integration of prompts resulted in lower satisfaction ratings, particularly in the areas of students' perceptions of the amount of work required, learning in the partially online format, and student-to-student interaction. Based on these results, design considerations for rubric use and explicit feedback to support student learning in asynchronous discussions are proposed.
Contributors: Giacumo, Lisa (Author) / Savenye, Wilhelmina (Thesis advisor) / Nelson, Brian (Committee member) / Legacy, Jane (Committee member) / Bitter, Gary (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The purpose of this survey study was to collect data from pre-K-12 educators in the U.S. regarding their perceptions of the purpose, conceptions, use, impact, and results of educational research. The survey tool was based on existing questionnaires and case studies in the literature, as well as newly developed items. A recruiting email was sent to 3,908 educators in a database developed over 10+ years at the world's largest education company; 400 elementary and secondary teachers in the final sample completed the online survey containing 48 questions over a three-week deployment period in the spring of 2013. Results indicated that, overall, teachers believe educational research is important and that the most important purpose of research is to increase the effectiveness of classroom practice, yet research is not frequently sought out during the course of practice. Teachers perceive results in research journals as the most trustworthy, yet also perceive research journals as the most difficult to access (relying second-most often on in-service trainings for research). These findings have implications for teachers, administrators, policy-makers, and researchers. Educational researchers should seek to address both the theoretical and the applied aspects of learning. Professional development must make explicit links between research findings and classroom strategies and tactics, and research must be made more readily available to those who are not currently seeking additional credentialing and therefore do not individually have access to scholarly literature. Further research is needed to expand the survey sample and refine the survey instrument. Similar research with administrators in pre-K-20 settings, as well as in-depth interviews, would serve to investigate the "why" of many findings.
Contributors: Mahoney, Shawn (Author) / Savenye, Wilhelmina (Thesis advisor) / Nelson, Brian (Committee member) / Atkinson, Robert (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This study explored three methods to measure cognitive load in a learning environment using four logic puzzles that systematically varied in level of intrinsic cognitive load. Participants' perceived intrinsic load was simultaneously measured with a self-report measure (a traditional subjective measure) and two objective, physiological measures based on eye-tracking and EEG technology. In addition to gathering self-report, eye-tracking, and EEG data, this study also captured data on individual difference variables and puzzle performance. Specifically, this study addressed the following research questions: (1) Are self-report ratings of cognitive load sensitive to tasks that increase in level of intrinsic load? (2) Are physiological measures sensitive to tasks that increase in level of intrinsic load? (3) To what extent do objective physiological measures and individual difference variables predict self-report ratings of intrinsic cognitive load? (4) Do the number of errors and the amount of time spent on each puzzle increase as puzzle difficulty increases? Participants were 56 undergraduate students. Results from analyses with inferential statistics and data-mining techniques indicated that features from the physiological data were sensitive to the puzzle tasks that varied in level of intrinsic load. The self-report measures performed similarly when the difference in intrinsic load of the puzzles was most varied. Implications of these results and future directions for this line of research are discussed.
Contributors: Joseph, Stacey (Author) / Atkinson, Robert K (Thesis advisor) / Johnson-Glenberg, Mina (Committee member) / Nelson, Brian (Committee member) / Klein, James (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This study aims to uncover whether English Central, an online English as a Second Language tool, improves speaking proficiency for undergraduate students with developing English skills. Eighty-three advanced English language learners from the American English and Culture Program at Arizona State University were randomly assigned to one of three conditions: use of English Central with learner control, use of English Central with shared control, or a no-treatment condition. The two treatment groups were assigned approximately 14.7 hours of online instruction. The relative impact of each of the three conditions was assessed using two measures. First, the Pearson Versant Test (www.versanttest.com), a well-established English-as-a-second-language speaking test, was administered to all of the participants as a pre- and post-test measure. Second, students were given a post-treatment questionnaire that measured their motivation in using online instruction in general and English Central specifically. Since a significant teacher effect was found, the teachers involved in this study were also interviewed in order to ascertain their attitudes toward English Central as a homework tool. Learner outcomes were significantly different between the shared and learner conditions, and student motivation was predictive of learning outcomes. Subjects in the shared condition outperformed those in the learner condition. Furthermore, those in the shared condition scored higher than those in the control condition; however, this result did not reach statistical significance. Results of the follow-up teacher survey revealed that while a teacher's view of the tool (positive or negative) was not a predictor of student success, teacher presentation of the tool may have a significant impact on student learning outcomes.
Contributors: Dixon, Shane Y. (Shane Yahlu) (Author) / Atkinson, Robert (Thesis advisor) / Savenye, Wilhelmina (Committee member) / Nelson, Brian (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The overall purpose of this study was to explore the dynamics of teaching and learning in the context of an informal, online discussion forum. This investigation utilized the Community of Inquiry (CoI) elements of Teaching Presence and Social Presence, along with the construct of Learning Presence, to examine the Photoshop® for Beginners Forum (PfBF) on Adobe® Forums, an internet discussion forum designed to provide support for beginning users of Adobe Photoshop. The researcher collected four days of discussion post data comprising 62 discussion threads, for a total of 202 discussion posts. The discussion threads were divided into posts created by members who were deemed to be acting as teachers and posts written by members acting as learners. Three analyses were conducted. First, in a pilot analysis, the researcher divided the data in half and coded 31 discussion threads, a total of 142 discussion posts, with the Teaching Presence, Social Presence, and Learning Presence coding schemes. Second, a reliability analysis was conducted to determine the interrater reliability of the coding schemes; for this analysis, two additional coders were recruited, trained, and coded a small subsample of data (4 discussion threads, for a total of 29 discussion posts) using the same three coding schemes. Third, in a final analysis, the researcher coded and analyzed 134 discussion posts created by 24 teachers using the Teaching Presence coding scheme. At the conclusion of the final analysis, it was determined that eighteen percent (18%) of the data could not be coded using the Teaching Presence coding scheme; however, these data were observed to contain behavioral indicators of Social Presence. Consequently, the Social Presence coding scheme was used to code and analyze the remaining data. The results of this study revealed that forum members who interact on PfBF do indeed exhibit Teaching Presence behaviors. Direct Instruction was the largest category of Teaching Presence behaviors exhibited, over and above Facilitating Discussion and Design and Organization. It was also observed that forum members serving in the role of teachers exhibit behaviors of Social Presence alongside Teaching Presence behaviors.
Contributors: Williams, Indi Marie (Author) / Gee, Elizabeth (Thesis advisor) / Olaniran, Bolanle (Committee member) / Nelson, Brian (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The goal of this research was to understand the different kinds of learning that take place in Mod The Sims (MTS), an online Sims gaming community. The study aimed to explore users' experiences and to understand learning practices that are not commonly observed in formal educational settings. To achieve this goal, the researcher conducted a four-year virtual ethnographic study that followed guidelines set forth in Hine (2000). Following Hine, the study focused on understanding the complexity of the relationships between technology and social interactions among people, with a particular emphasis on investigating how participants shaped both the culture and structure of the affinity space. The format of the dissertation consists of an introduction, three core chapters that present different sets of findings, and a concluding chapter. Each of the core chapters, which can stand alone as separate studies, applies different theoretical lenses and analytic methods and uses a separate data set. The data corpus includes hundreds of thread posts, member profiles, online interview data obtained through email and personal messaging (PM), numerous screenshots, field notes, and additional artifacts, such as college coursework shared by a participant. Chapter 2 examines thread posts to understand the social support system in MTS and the language learning practices of one member who was a non-English speaker. Chapter 3 analyzes thread posts from administrative staff and users in MTS to identify patterns of interactions, with the goal of ascertaining how users contribute to the ongoing design and redesign of the site. Chapter 4 investigates user-generated tutorials to understand the nature of these instructional texts and how they are adapted to an online context. The final chapter (Chapter 5) presents conclusions about how the analyses overall represent examples of participatory learning practices that expand our understanding of 21st century learning. Finally, the chapter offers theoretical and practical implications, reflections on lessons learned, and suggestions for future research.
Contributors: Lee, Yoonhee Naseef (Author) / Hayes, Elisabeth (Thesis advisor) / Gee, James (Committee member) / Nelson, Brian (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The purpose of this study was to investigate the effects of static pedagogical agents (included and excluded) and gamification practice (included and excluded) on vocabulary acquisition and perceptions of cognitive load by junior high students who studied the Navajo language via a computer-based instructional program. A total of 153 students attending a junior high school in the southwestern United States participated in this study. Prior to the beginning of the study, students were randomly assigned to one of four treatment groups, each of which used a Navajo language computer-based program that contained a combination of static pedagogical agent (included or excluded) and gamification practice (included or excluded). There were two criterion measures in this study: a vocabulary acquisition posttest and a survey designed both to measure students' attitudes toward the program and to measure cognitive load. Anecdotal observations of students' interactions were also examined.

Results indicated that there were no significant differences in posttest scores among treatment conditions; students were, however, generally successful in learning the Navajo vocabulary terms. Participants also reported positive attitudes toward the Navajo language content and gamification practice and expressed a desire to see additional content and games during activities of this type. These findings provide evidence of the impact that computer-based training may have in teaching students an indigenous second language. Furthermore, students seemed to enjoy this type of language learning program; although the static agent was not mentioned, many indicated that gamification practice may enhance students' attitudes in such instruction, which is an area for future research. Language learning programs could include a variety of gamification practice activities to assist students in learning new vocabulary. Further research is needed to study motivation and cognitive load in Navajo language computer-based training.
Contributors: Shurley, Kenneth Alessandro (Author) / Savenye, Wilhelmina C (Thesis advisor) / Atkinson, Robert (Committee member) / Nelson, Brian (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
A recorded tutorial dialogue can produce positive learning gains when observed and used to promote discussion between a pair of learners; however, this same effect does not typically occur when a learner observes a tutorial dialogue by himself or herself. One potential approach to enhancing learning in the latter situation is incorporating self-explanation prompts, a proven technique for encouraging students to engage in active learning and attend to the material in a meaningful way. This study examined whether learning from observing recorded tutorial dialogues could be made more effective by adding self-explanation prompts in a computer-based learning environment. The research questions in this two-experiment study were (a) Do self-explanation prompts help support student learning while watching a recorded dialogue? and (b) Does collaboratively observing (in dyads) a tutorial dialogue with self-explanation prompts help support student learning while watching a recorded dialogue? In Experiment 1, 66 participants were randomly assigned as individuals to a physics lesson (a) with self-explanation prompts (Condition 1) or (b) without self-explanation prompts (Condition 2). In Experiment 2, 20 participants were randomly assigned in 10 pairs to the same physics lesson (a) with self-explanation prompts (Condition 1) or (b) without self-explanation prompts (Condition 2). Pretests and posttests were administered, as well as other surveys that measured motivation and system usability. Although supplemental analyses showed some significant differences among individual scale items or factors, the primary results for neither Experiment 1 nor Experiment 2 were significant for changes from pretest to posttest scores for learning, motivation, or system usability assessments.
Contributors: Wright, Kyle Matthew (Author) / Atkinson, Robert K (Thesis advisor) / Savenye, Wilhelmina (Committee member) / Nelson, Brian (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
With the advent of Massive Open Online Courses (MOOCs), educators have the opportunity to collect data from students and use it to derive insightful information about them. Specifically, for programming-based courses, the ability to identify the specific areas or topics that need more attention from students can be of immense help. But the majority of traditional, non-virtual classes lack the ability to uncover such information, which could serve as feedback on the effectiveness of teaching. In the majority of schools, paper exams and assignments provide the only form of assessment to measure the success of students in achieving the course objectives. The overall grade obtained on paper exams and assignments need not present a complete picture of a student's strengths and weaknesses. In part, this can be addressed by incorporating research-based technology into classrooms to obtain real-time updates on students' progress. But introducing technology to provide real-time, class-wide engagement involves a considerable investment, both academically and financially, which prevents its adoption and thereby the ideal, technology-enabled classroom. With increasing class sizes, it is becoming impossible for teachers to keep persistent track of their students' progress and to provide personalized feedback. What if we could provide technology support without adding more burden to the existing pedagogical approach? How can we enable semantic enrichment of exams that translates to students' understanding of the topics taught in class? Can we provide feedback to students that goes beyond numbers and reveals the areas that need their focus? In this research, I focus on bringing the capability of conducting insightful analysis to paper exams with a less intrusive learning analytics approach that taps into generic classrooms with minimal technology introduction.

Specifically, the work focuses on automatic indexing of programming exam questions with ontological semantics. The thesis also focuses on designing and evaluating a novel semantic visual analytics suite for in-depth course monitoring. By visualizing the semantic information to illustrate the areas that need a student's focus and enabling teachers to visualize class-level progress, the system provides richer feedback to both sides for improvement.
Contributors: Pandhalkudi Govindarajan, Sesha Kumar (Author) / Hsiao, I-Han (Thesis advisor) / Nelson, Brian (Committee member) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Paper assessment remains an essential formal assessment method in today's classes. However, it is difficult to track student learning behavior on physical papers. This thesis presents a new educational technology, the Web Programming Grading Assistant (WPGA). WPGA serves not only as a grading system but also as a feedback delivery tool that connects paper-based assessments to digital space. I designed a classroom study and collected data from ASU computer science classes. I tracked and modeled students' reviewing and reflecting behaviors based on their use of WPGA, and analyzed students' reviewing efforts in terms of frequency, timing, and the associations with their academic performance. Results showed that students put extra emphasis on reviewing prior to exams, and their efforts demonstrated a desire to review formal assessments regardless of whether they were graded for academic performance or for attendance. In addition, all students paid more attention to reviewing quizzes and exams toward the end of the semester.
Contributors: Huang, Po-Kai (Author) / Hsiao, I-Han (Thesis advisor) / Nelson, Brian (Committee member) / VanLehn, Kurt (Committee member) / Arizona State University (Publisher)
Created: 2017