Matching Items (3)
Description

Online learning communities have changed the way users learn, thanks to the technological affordances of Web 2.0. This shift has produced different kinds of learning communities, such as massive open online courses (MOOCs), learning management systems (LMSs), and question-and-answer learning communities. Question-and-answer communities are an important part of social information seeking. Thousands of users participate in question-and-answer communities on the web, such as Stack Overflow, Yahoo Answers, and WikiAnswers. Research on user participation in online communities identifies a near-universal phenomenon: a few users answer a high percentage of the questions and thereby sustain the learning community. This principle implies two major categories of user participation: people who ask questions and people who answer them. In this research, I look beyond this traditional view and identify multiple, subtler categories of user participation. Identifying multiple categories of users makes it possible to support each group separately and thus sustain the community.

In this thesis, the participation behavior of users in OpenStudy, an open, learning-oriented question-and-answer community, has been analyzed. Initially, users were grouped into categories based on the number of questions they answered: non-participators, sample participators, and low, medium, and high participators. Users were then compared across several features reflecting the temporal, content, and question/thread-specific dimensions of participation, including features suggestive of learning in OpenStudy.

The goal of this thesis is to analyze user participation in three steps:

a. Inter-group participation analysis: compare the pre-assumed user groups across the participation features extracted from the OpenStudy data.

b. Intra-group participation analysis: identify sub-groups within each category and, with the help of unsupervised learning techniques, examine how participation differs within each group.

c. Using these grouping insights, suggest interventions that might support each category of users, for the benefit of both the users and the community.
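Step (b) can be illustrated with a minimal clustering sketch. This is a hedged example, not the thesis's actual pipeline: the feature names (answers per week, mean reply length, threads joined) and values are invented for illustration, and a simple k-means stands in for whatever unsupervised technique the thesis applied.

```python
# Minimal k-means over user participation feature vectors.
# All names and numbers here are hypothetical, not OpenStudy data.
import random

def kmeans(points, k, iters=50, seed=0):
    """Cluster feature vectors into k groups with plain k-means."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [
            [sum(dim) / len(cl) for dim in zip(*cl)] if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Each vector: (answers per week, mean reply length, threads joined)
users = [(1, 20, 2), (2, 25, 3), (30, 200, 40), (28, 180, 35)]
centers, clusters = kmeans(users, k=2)
```

Examining the resulting centers is one way to characterize sub-groups, e.g. occasional short-answer users versus prolific long-answer users.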

This thesis presents new insights into participation because of the broad range of features extracted and their significance in understanding the behavior of users in this learning community.
Contributors: Samala, Ritesh Reddy (Author) / Walker, Erin (Thesis advisor) / VanLehn, Kurt (Committee member) / Hsieh, Gary (Committee member) / Wetzel, Jon (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

With the advent of Massive Open Online Courses (MOOCs), educators have the opportunity to collect data from students and derive insightful information from it. For programming courses in particular, the ability to identify the specific areas or topics that need more attention from students can be of immense help. Yet the majority of traditional, non-virtual classes lack the means to uncover such information, which could serve as feedback on the effectiveness of teaching. In most schools, paper exams and assignments are the only form of assessment used to measure how well students achieve the course objectives. The overall grade obtained on paper exams and assignments does not necessarily present a complete picture of a student's strengths and weaknesses. In part, this can be addressed by incorporating research-based technology into classrooms to obtain real-time updates on students' progress. But introducing technology that provides real-time, class-wide engagement requires a considerable investment, both academic and financial. This cost hinders adoption and thus stands in the way of the ideal, technology-enabled classroom. With increasing class sizes, it is becoming impossible for teachers to keep persistent track of their students' progress and to provide personalized feedback. Can we provide technology support without adding more burden to the existing pedagogical approach? Can we enable a semantic enrichment of exams that translates into students' understanding of the topics taught in class? Can we provide feedback to students that goes beyond mere numbers and reveals the areas that need their focus? In this research, I focus on bringing insightful analysis to paper exams through a less intrusive learning-analytics approach that fits generic classrooms with minimal introduction of technology.
Specifically, the work focuses on automatically indexing programming exam questions with ontological semantics. The thesis also focuses on designing and evaluating a novel semantic visual analytics suite for in-depth course monitoring. By visualizing the semantic information to highlight the areas that need a student's focus, and by enabling teachers to visualize class-level progress, the system provides richer feedback to both sides for improvement.
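The idea of indexing exam questions with ontological semantics can be sketched in miniature as keyword-to-concept matching. This is a hedged illustration only: the ontology terms, the question text, and the matching rule below are invented, and the thesis's actual indexing pipeline is certainly richer than simple substring lookup.

```python
# Hypothetical course ontology mapping concepts to indicative keywords.
ONTOLOGY = {
    "loops": {"for", "while", "iterate", "loop"},
    "recursion": {"recursive", "recursion", "base case"},
    "arrays": {"array", "index", "element"},
}

def index_question(text):
    """Return the set of ontology concepts whose keywords appear in the text."""
    lowered = text.lower()
    return {concept for concept, terms in ONTOLOGY.items()
            if any(term in lowered for term in terms)}

question = "Write a for loop that prints every element of an array."
print(index_question(question))
```

Aggregating such concept tags over a whole exam, per student, is what would let a dashboard show which topics a student struggles with rather than a single overall grade.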
Contributors: Pandhalkudi Govindarajan, Sesha Kumar (Author) / Hsiao, I-Han (Thesis advisor) / Nelson, Brian (Committee member) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

Computational thinking, the fundamental way of thinking in computer science, encompassing the information sourcing and problem solving behind programming, is considered vital for children who live in a digital era. Most current educational games designed to teach children about coding either rely on external curricular materials or are too complicated to work well with young children. In this thesis project, Guardy, an iOS tower-defense game, was developed to help children over 8 years old learn and practice basic programming concepts. The game is built with SpriteKit, the graphics rendering and animation framework available in Apple's integrated development environment, Xcode, which simplifies switching among game scenes and animating game sprites during development. In a typical game, players arrange a sequence of operations to destroy incoming enemy minions. Basic coding concepts like looping, sequencing, conditionals, and classification are integrated into different levels. In later levels, players are required to type in commands and order them correctly to keep playing. To reduce the difficulty of the usability testing, a method combining questionnaires and observation was used with two groups of college students, one with no programming experience and one familiar with coding. The results show that Guardy has the potential to help children learn programming and practice computational thinking.
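The core mechanic, executing a player-arranged sequence of commands against a minion, can be sketched abstractly. This is a hypothetical illustration, not code from Guardy (which is written in Swift with SpriteKit): the command names, damage values, and the `repeat` rule are all invented to show how sequencing and looping might combine.

```python
# Hypothetical damage table; not taken from Guardy itself.
DAMAGE = {"arrow": 3, "fireball": 7}

def run_sequence(commands, minion_hp):
    """Execute commands in order; 'repeat n' runs the next command n times."""
    i = 0
    while i < len(commands) and minion_hp > 0:
        cmd = commands[i]
        if cmd.startswith("repeat "):
            # Looping: apply the following command n times.
            n = int(cmd.split()[1])
            i += 1
            for _ in range(n):
                minion_hp -= DAMAGE.get(commands[i], 0)
        else:
            # Sequencing: apply a single command.
            minion_hp -= DAMAGE.get(cmd, 0)
        i += 1
    return minion_hp

player_program = ["repeat 2", "arrow", "fireball"]  # looping + sequencing
print(run_sequence(player_program, minion_hp=12))  # prints -1: minion defeated
```

The ordering matters: a program whose total damage falls short of the minion's hit points leaves it alive, which is how such a game can make sequencing and looping consequential to the player.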
Contributors: Wang, Xiaoxiao (Author) / Nelson, Brian C. (Thesis advisor) / Turaga, Pavan (Committee member) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created: 2017