Matching Items (6)

Description
Online learning communities have changed the way users learn, thanks to the technological affordances of Web 2.0. This shift has produced different kinds of learning communities, such as massive open online courses (MOOCs), learning management systems (LMSs), and question-and-answer based learning communities. Question-and-answer communities are an important part of social information seeking. Thousands of users participate in question-and-answer communities on the web, such as Stack Overflow, Yahoo Answers, and Wiki Answers. Research on user participation in different online communities identifies a universal phenomenon: a few users are responsible for answering a high percentage of questions and thus sustain the learning community. This principle implies two major categories of user participation: people who ask questions and those who answer them. In this research, I look beyond this traditional view and identify multiple, subtler categories of user participation. Identifying multiple categories of users makes it possible to provide targeted support by treating each group separately, in order to sustain the community.

In this thesis, the participation behavior of users in an open, learning-based question-and-answer community called OpenStudy has been analyzed. Initially, users were grouped into categories based on the number of questions they answered: non-participators, sample participators, and low, medium, and high participators. In further steps, users were compared across several features reflecting temporal, content, and question/thread-specific dimensions of user participation, including those suggestive of learning in OpenStudy.
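
As a concrete illustration of this grouping step, the Python sketch below buckets users by answer count. The category thresholds here are illustrative assumptions, not the boundaries derived from the OpenStudy data.

```python
# A minimal sketch (not the thesis code) of bucketing users into coarse
# participation categories by the number of questions they answered.
# The threshold values are illustrative assumptions, not OpenStudy's.

def participation_category(answers_given: int) -> str:
    """Map a user's answer count to a coarse participation category."""
    if answers_given == 0:
        return "non-participator"
    if answers_given <= 2:
        return "sample participator"
    if answers_given <= 10:
        return "low participator"
    if answers_given <= 50:
        return "medium participator"
    return "high participator"

# Toy answer counts per user id.
answer_counts = {"u1": 0, "u2": 1, "u3": 7, "u4": 34, "u5": 250}
print({uid: participation_category(n) for uid, n in answer_counts.items()})
```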

The goal of this thesis is to analyze user participation in three steps:

a. Inter-group participation analysis: compare pre-assumed user groups across the participation features extracted from the OpenStudy data.

b. Intra-group participation analysis: identify subgroups within each category and examine how participation differs within each group with the help of unsupervised learning techniques (see the clustering sketch after this list).

c. With these grouping insights, suggest interventions that might support each category of users, for the benefit of both the users and the community.
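
For step (b), a minimal clustering sketch is given below, assuming k-means over standardized participation features; the feature names, toy values, and k = 3 are illustrative, not taken from the thesis.

```python
# A hedged sketch of the intra-group step: cluster one participation group's
# feature vectors with k-means to surface subgroups. Features and k are
# illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy feature matrix: rows are users in one group; columns might be
# answers per week, median reply length, and distinct threads touched.
X = np.array([
    [5.0, 120.0, 3],
    [4.5, 110.0, 2],
    [0.5, 400.0, 9],
    [0.7, 380.0, 8],
    [2.0, 60.0, 1],
])

X_scaled = StandardScaler().fit_transform(X)  # put features on one scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)
print(labels)  # subgroup assignment for each user in the group
```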

This thesis presents new insights into participation because of the broad range of features extracted and their significance in understanding the behavior of users in this learning community.
Contributors: Samala, Ritesh Reddy (Author) / Walker, Erin (Thesis advisor) / VanLehn, Kurt (Committee member) / Hsieh, Gary (Committee member) / Wetzel, Jon (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
For this master's thesis, an open learner model is integrated with Quinn, a teachable robotic agent developed at Arizona State University. This system is presented as a feedback system that aims to improve a student's understanding of a subject. It also helps in understanding the effect of the learner model when it is represented by the performance of the teachable agent. The feedback system represents the performance of the teachable agent, not that of the student. Data in the feedback system are thus updated according to a student's understanding of the subject. This gives students an opportunity to enhance their understanding of a subject by analyzing that performance. To test the effectiveness of the feedback system, student understanding is analyzed under two conditions: in the first, no feedback report is provided to the students; in the second, the feedback report is provided in the form of the agent's performance.
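
A minimal sketch of this feedback idea follows, assuming a toy per-concept mastery score and a simple update rule; the concept names, scores, and deltas are all assumptions, not the thesis's model.

```python
# The open learner model displays the teachable agent's performance, which
# reflects what the student has taught it. Concept names, the 0-1 mastery
# scores, and the update deltas below are illustrative assumptions.

agent_knowledge = {"fractions": 0.2, "ratios": 0.2}  # agent's mastery estimates

def student_teaches(concept: str, correct: bool) -> None:
    """Nudge the agent's mastery up or down based on the student's teaching."""
    delta = 0.15 if correct else -0.10
    agent_knowledge[concept] = min(1.0, max(0.0, agent_knowledge[concept] + delta))

def feedback_report() -> str:
    """Report the agent's performance (not the student's) back to the student."""
    return "\n".join(f"Quinn's mastery of {c}: {p:.0%}" for c, p in agent_knowledge.items())

student_teaches("fractions", correct=True)
student_teaches("fractions", correct=True)
student_teaches("ratios", correct=False)
print(feedback_report())
```
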
Contributors: Upadhyay, Abha (Author) / Walker, Erin (Thesis advisor) / Nelson, Brian (Committee member) / Amresh, Ashish (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
With the advent of massive open online courses (MOOCs), educators have the opportunity to collect data from students and use it to derive insightful information about them. Specifically, for programming-based courses, the ability to identify the specific areas or topics that need more attention from students can be of immense help. But the majority of traditional, non-virtual classes lack the ability to uncover such information, which could serve as feedback on the effectiveness of teaching. In most schools, paper exams and assignments provide the only form of assessment for measuring students' success in achieving the course objectives. The overall grade obtained on paper exams and assignments does not necessarily present a complete picture of a student's strengths and weaknesses. In part, this can be addressed by incorporating research-based technology into classrooms to obtain real-time updates on students' progress. But introducing technology to provide real-time, class-wide engagement involves a considerable investment, both academically and financially, which prevents its adoption and thus the ideal, technology-enabled classroom. With increasing class sizes, it is becoming impossible for teachers to keep persistent track of their students' progress and to provide personalized feedback. What if we could provide technology support without adding more burden to the existing pedagogical approach? How can we enable semantic enrichment of exams that translates to students' understanding of the topics taught in class? Can we provide feedback to students that goes beyond numbers and reveals the areas that need their focus? In this research I focus on bringing the capability of insightful analysis to paper exams through a less intrusive learning analytics approach that taps into generic classrooms with minimal introduction of technology. Specifically, the work focuses on automatic indexing of programming exam questions with ontological semantics. The thesis also focuses on designing and evaluating a novel semantic visual analytics suite for in-depth course monitoring. By visualizing the semantic information to illustrate the areas that need a student's focus, and by enabling teachers to visualize class-level progress, the system provides richer feedback to both sides for improvement.
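
The automatic-indexing idea can be sketched as follows, with a toy keyword matcher standing in for the thesis's richer ontological machinery; the concept names and keyword lists are assumptions for demonstration only.

```python
# An illustrative sketch of semantic indexing: tag a programming exam question
# with ontology concepts via keyword matching. The ontology below is a toy
# stand-in, not the one used in the thesis.

ONTOLOGY = {
    "Loops": {"for", "while", "iterate", "loop"},
    "Recursion": {"recursive", "recursion", "base case"},
    "Arrays": {"array", "index", "element"},
}

def index_question(text: str) -> list[str]:
    """Return the ontology concepts whose keywords appear in the question."""
    lowered = text.lower()
    return [concept for concept, keywords in ONTOLOGY.items()
            if any(kw in lowered for kw in keywords)]

question = "Write a for loop that prints every element of an array."
print(index_question(question))  # ['Loops', 'Arrays']
```
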
Contributors: Pandhalkudi Govindarajan, Sesha Kumar (Author) / Hsiao, I-Han (Thesis advisor) / Nelson, Brian (Committee member) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
This thesis is an initial test of the hypothesis that superficial measures suffice for measuring collaboration among pairs of students solving complex math problems, where the degree of collaboration is categorized at a high level. Data were collected in the form of logs from the students' tablets and the vocal interaction between pairs of students. Thousands of different features were defined and then extracted computationally from the audio and log data. Human coders used richer data (several video streams) and a thorough understanding of the tasks to code episodes as collaborative, cooperative, or asymmetric contribution. Machine learning was used to induce a detector, based on random forests, that outputs one of these three codes for an episode given only a characterization of the episode in terms of superficial features. An overall accuracy of 92.00% (kappa = 0.82) was obtained when comparing the detector's codes to the humans' codes. However, due to irregularities in running the study (e.g., the tablet software kept crashing), these results should be viewed as preliminary.
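
A minimal sketch of this detector-induction step, assuming scikit-learn's random forest and synthetic stand-in data (the real features, labels, and validation protocol are the thesis's own):

```python
# Train a random forest on superficial episode features and compare its codes
# to human codes with accuracy and Cohen's kappa. All data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((300, 20))        # 20 superficial features per episode
y = rng.integers(0, 3, 300)      # 0=collaborative, 1=cooperative, 2=asymmetric

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
detector = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = detector.predict(X_te)

print(f"accuracy = {accuracy_score(y_te, pred):.2%}")
print(f"kappa    = {cohen_kappa_score(y_te, pred):.2f}")
```
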
Contributors: Viswanathan, Sree Aurovindh (Author) / VanLehn, Kurt (Thesis advisor) / Chi, Michelene T.H. (Committee member) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Research in the learning sciences suggests that students learn better by collaborating with their peers than by learning individually. Students working together as a group tend to generate new ideas more frequently and exhibit a higher level of reasoning. In this Internet age, with the advent of massive open online courses (MOOCs), students across the world are able to access and learn material remotely. This creates a need for tools that support distant or remote collaboration. In order to build such tools, we need to understand the basic elements of remote collaboration and how it differs from traditional face-to-face collaboration.

The main goal of this thesis is to explore how spoken dialogue varies between face-to-face and remote collaborative learning settings. Speech data were collected from student participants solving mathematical problems collaboratively on a tablet. Spoken dialogue was analyzed based on conversational and acoustic features in both settings. Looking for collaborative differences in transactivity and dialogue initiative, the two settings were compared in detail using machine learning classification techniques based on acoustic and prosodic features of speech. Transactivity is defined as the joint construction of knowledge by peers. The main contributions of this thesis are a speech corpus for analyzing spoken dialogue in face-to-face and remote settings, and an empirical analysis of conversation, collaboration, and speech prosody in both settings. The results from the experiments show that the amount of overlap is lower in remote dialogue than in the face-to-face setting, and that there is a significant difference in transactivity among strangers. My research benefits the computer-supported collaborative learning community by providing an analysis that can be used to build more efficient tools for supporting remote collaborative learning.
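
One of the conversational features mentioned above, the amount of overlap, can be sketched as a pairwise interval computation over timestamped speaker turns; the turn values below are toy data, not from the corpus.

```python
# A hedged sketch of computing total speech overlap from (start_s, end_s)
# speaker turns. Toy values only.

def overlap_seconds(turns_a, turns_b):
    """Total time during which the two speakers' turns overlap."""
    total = 0.0
    for a_start, a_end in turns_a:
        for b_start, b_end in turns_b:
            total += max(0.0, min(a_end, b_end) - max(a_start, b_start))
    return total

speaker_a = [(0.0, 4.2), (6.0, 9.5)]
speaker_b = [(3.8, 6.5), (9.0, 12.0)]
print(overlap_seconds(speaker_a, speaker_b))  # 1.4 = 0.4 + 0.5 + 0.5
```
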
Contributors: Nelakurthi, Arun Reddy (Author) / Pon-Barry, Heather (Thesis advisor) / VanLehn, Kurt (Committee member) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Students seldom collaborate spontaneously with each other. A system that can measure collaboration in real time could be useful, for example, by helping the teacher locate a group requiring guidance. To address this challenge, the research presented here focuses on building and comparing collaboration detectors for different types of classroom problem-solving activities, such as card sorting and handwriting.

Transfer learning across different representations was also studied, with the goal that a collaboration detector built for one task can be used with a new task. Data for building such detectors were collected in the form of verbal interaction and user action logs from students' tablets. Three qualitative levels of interactivity were distinguished: collaboration, cooperation, and asymmetric contribution. Machine learning was used to induce a classifier that assigns a code to every episode based on the set of features. The results indicate that the machine-learned classifiers were reliable and can transfer.
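
A minimal sketch of the transfer test, assuming a random forest and synthetic stand-in data: a detector is induced on one activity's episodes and applied, unchanged, to episodes from a different activity that share the same feature representation.

```python
# Train on one task (e.g., card sorting), then measure agreement with human
# codes on another task (e.g., handwriting). All data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
X_cardsort, y_cardsort = rng.random((200, 15)), rng.integers(0, 3, 200)
X_handwriting, y_handwriting = rng.random((80, 15)), rng.integers(0, 3, 80)

detector = RandomForestClassifier(n_estimators=200, random_state=0)
detector.fit(X_cardsort, y_cardsort)                 # induce on task A only

kappa = cohen_kappa_score(y_handwriting, detector.predict(X_handwriting))
print(f"cross-task kappa = {kappa:.2f}")             # apply unchanged to task B
```
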
Contributors: Viswanathan, Sree Aurovindh (Author) / VanLehn, Kurt (Thesis advisor) / Hsiao, I-Han (Committee member) / Walker, Erin (Committee member) / D'Angelo, Cynthia (Committee member) / Arizona State University (Publisher)
Created: 2020