Matching Items (7)
Description
This thesis is an initial test of the hypothesis that superficial measures suffice for measuring collaboration among pairs of students solving complex math problems, where the degree of collaboration is categorized at a high level. Data were collected in the form of logs from students' tablets and the vocal interaction between pairs of students. Thousands of different features were defined and then extracted computationally from the audio and log data. Human coders used richer data (several video streams) and a thorough understanding of the tasks to code episodes as collaborative, cooperative, or asymmetric contribution. Machine learning was used to induce a detector, based on random forests, that outputs one of these three codes for an episode given only a characterization of the episode in terms of superficial features. An overall accuracy of 92.00% (kappa = 0.82) was obtained when comparing the detector's codes to the humans' codes. However, due to irregularities in running the study (e.g., the tablet software kept crashing), these results should be viewed as preliminary.
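To make the classification step concrete, the following is a minimal sketch, not the thesis's actual pipeline, of inducing a random-forest detector from episode-level features and comparing its codes to human codes via accuracy and Cohen's kappa. The file name and column names are hypothetical placeholders.

```python
# Minimal sketch (assumed data layout): one row per episode, superficial
# audio/log features as columns, plus a human-assigned code column.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

data = pd.read_csv("features.csv")            # hypothetical feature file
X = data.drop(columns=["human_code"])         # superficial features
y = data["human_code"]                        # collaborative / cooperative / asymmetric

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
detector = RandomForestClassifier(n_estimators=500, random_state=0)
detector.fit(X_train, y_train)

predicted = detector.predict(X_test)
print("accuracy:", accuracy_score(y_test, predicted))
print("kappa:   ", cohen_kappa_score(y_test, predicted))
```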
Contributors: Viswanathan, Sree Aurovindh (Author) / VanLehn, Kurt (Thesis advisor) / Chi, Michelene T.H. (Committee member) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The quality of user interface designs largely depends on the aptitude of the designer. The ability to generate abstract mental models and characterize a target user audience helps greatly when conceiving a design. The dry cleaning point-of-sale industry lacks quality user interface designs. These impaired interfaces were compared with textbook design techniques to discover how applicable published interface design concepts are in practice. Four variations of a software package, each containing different design techniques, were deployed to end users. Surveyed users responded positively to interface design practices that were consistent and easy to learn, which followed textbook expectations. Users, however, responded poorly to customization options, an important feature according to textbook material. The study made conservative changes to the four interface variations provided to end users; a more liberal approach may have yielded additional results.
Contributors: Smith, Andrew David (Author) / Nakamura, Mutsumi (Thesis director) / Gottesman, Aaron (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created: 2014-05
Description
Over the course of computing history there have been many ways for humans to pass information to computers. These different input types, at first, tended to be used one or two at a time by the users interfacing with computers. As time has progressed towards the present, however, many devices are beginning to make use of multiple different input types, and will likely continue to do so. With this happening, users need to be able to interact with single applications in a variety of ways without having to change the design or suffer a loss of functionality. This is important because having only one user interface (UI) across all input types makes it easier for the user to learn and keeps all interactions consistent across the application. Some of the main input types in use today are touch screens, mice, microphones, and keyboards, all seen in Figure 1 below. Current design methods tend to focus on how well the users are able to learn and use a computing system. It is good to focus on those aspects, but it is important to address the issues that come along with using different input types, or in this case, multiple input types. UI design for touch screens, mice, microphones, and keyboards each requires satisfying a different set of needs. Due to this trend of single devices being used in many different input configurations, a "fully functional" UI design will need to address the needs of multiple input configurations. In this work, clashing concerns among the primary input sources for computers are described, and methodologies and techniques are suggested for designing a single UI that is reasonable for all of the input configurations.
Contributors: Johnson, David Bradley (Author) / Calliss, Debra (Thesis director) / Wilkerson, Kelly (Committee member) / Walker, Erin (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2013-05
Description
With the advent of Massive Open Online Courses (MOOCs), educators have the opportunity to collect data from students and use it to derive insightful information about the students. Specifically, for programming-based courses the ability to identify the specific areas or topics that need more attention from the students can be of immense help. But the majority of traditional, non-virtual classes lack the ability to uncover such information, which could serve as feedback on the effectiveness of teaching. In the majority of schools, paper exams and assignments provide the only form of assessment to measure the success of the students in achieving the course objectives. The overall grade obtained in paper exams and assignments need not present a complete picture of a student's strengths and weaknesses. In part, this can be addressed by incorporating research-based technology into the classrooms to obtain real-time updates on students' progress. But introducing technology to provide real-time, class-wide engagement involves a considerable investment, both academically and financially. This prevents the adoption of such technology and thereby prevents the ideal, technology-enabled classroom. With increasing class sizes, it is becoming impossible for teachers to keep persistent track of their students' progress and to provide personalized feedback. Can we provide technology support without adding more burden to the existing pedagogical approach? How can we enable semantic enrichment of exams that translates to students' understanding of the topics taught in the class? Can we provide feedback to students that goes beyond numbers and reveals the areas that need their focus? In this research I focus on bringing the capability of conducting insightful analysis to paper exams with a less intrusive learning analytics approach that taps into generic classrooms with minimal technology introduction. Specifically, the work focuses on automatic indexing of programming exam questions with ontological semantics. The thesis also focuses on designing and evaluating a novel semantic visual analytics suite for in-depth course monitoring. By visualizing the semantic information to illustrate the areas that need a student's focus and enabling teachers to see class-level progress, the system provides richer feedback to both sides for improvement.
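As a rough illustration of what indexing exam questions with ontological semantics could look like, here is a deliberately simplified, hypothetical sketch based on keyword matching; the concept map, keywords, and example question are illustrative assumptions, not the thesis's actual ontology or data.

```python
# Hypothetical ontology fragment: concept -> indicative keywords.
ONTOLOGY = {
    "loops": ["for loop", "while", "iterate"],
    "recursion": ["recursive", "base case", "recursion"],
    "arrays": ["array", "index", "element"],
}

def index_question(text: str) -> list[str]:
    """Return the ontology concepts whose keywords appear in the question text."""
    lowered = text.lower()
    return [concept for concept, keywords in ONTOLOGY.items()
            if any(keyword in lowered for keyword in keywords)]

print(index_question("Write a recursive function that sums an array."))
# ['recursion', 'arrays']
```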
Contributors: Pandhalkudi Govindarajan, Sesha Kumar (Author) / Hsiao, I-Han (Thesis advisor) / Nelson, Brian (Committee member) / Walker, Erin (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Virtual Reality (hereafter VR) and Mixed Reality (hereafter MR) have opened a new line of applications and possibilities. Amidst a vast network of potential applications, little research has been done to provide real-time collaboration capability between users of VR and MR. The idea of this thesis study is to develop and test a real-time collaboration system between VR and MR. The system works much like a Google document, where two or more users can see what the others are doing, i.e., writing, modifying, viewing, etc. Similarly, the system developed during this study enables users in VR and MR to collaborate in real time.

The study of developing a real-time cross-platform collaboration system between VR and MR takes into consideration a scenario in which multiple device users are connected to a multiplayer network where they are guided to perform various tasks concurrently.
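To suggest what keeping such clients consistent might involve, the following is a hypothetical sketch of a state-synchronization message that a system of this kind might broadcast over the multiplayer network; the message fields, names, and transport are assumptions, not the implementation described in the thesis.

```python
# Hypothetical shared-object update: each client broadcasts changes so that
# VR and MR participants see the same scene state in real time.
import json
import time

def make_update(user_id: str, object_id: str, position, rotation) -> str:
    """Serialize one shared-object update for broadcast to all connected clients."""
    return json.dumps({
        "user": user_id,           # which VR or MR client moved the object
        "object": object_id,       # shared scene object, e.g. a part being assembled
        "position": position,      # [x, y, z]
        "rotation": rotation,      # [x, y, z, w] quaternion
        "timestamp": time.time(),  # used to order concurrent updates
    })

message = make_update("vr_user_1", "part_03", [0.4, 0.0, 1.2], [0, 0, 0, 1])
print(message)
```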

Usability testing was conducted to evaluate participant perceptions of the system. Users were required to assemble a chair in alternating turns; afterward, they filled out a survey and gave an audio interview. Results collected from the participants showed positive feedback towards using VR and MR for collaboration. However, there are several limitations with the current generation of devices that hinder mass adoption. Devices with better performance factors will lead to wider adoption.
Contributors: Seth, Nayan Sateesh (Author) / Nelson, Brian (Thesis advisor) / Walker, Erin (Committee member) / Atkinson, Robert (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Online learning communities have changed the way users learn, thanks to the technological affordances Web 2.0 has offered. This shift has produced different kinds of learning communities, such as massive open online courses (MOOCs), learning management systems (LMS), and question-and-answer based learning communities. Question-and-answer based communities are an important part of social information seeking. Thousands of users participate in question-and-answer based communities on the web, such as Stack Overflow, Yahoo Answers, and Wiki Answers. Research on user participation in different online communities identifies a universal phenomenon: a few users are responsible for answering a high percentage of questions and thus sustaining the learning community. This principle implies two major categories of user participation: people who ask questions and those who answer them. In this research, I look beyond this traditional view and identify multiple, subtler categories of user participation. Identifying multiple categories of users makes it possible to provide targeted support by treating each group separately, in order to maintain the sustenance of the community.

In this thesis, the participation behavior of users in an open, learning-based question-and-answer community called OpenStudy has been analyzed. Initially, users were grouped into categories based on the number of questions they had answered: non-participators, sample participators, and low, medium, and high participators. Users were then compared across several features reflecting temporal, content, and question/thread-specific dimensions of participation, including features suggestive of learning in OpenStudy.

The goal of this thesis is to analyze user participation in three steps:

a. Inter-group participation analysis: Compare the pre-assumed user groups across the participation features extracted from OpenStudy data.

b. Intra-group participation analysis: Identify subgroups within each category and examine how participation differs within each group with the help of unsupervised learning techniques, as sketched below.

c. With these grouping insights, suggest interventions that might support each category of users, for the benefit of both the users and the community.

This thesis presents new insights into participation because of the broad range of features extracted and their significance in understanding the behavior of users in this learning community.
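As an illustration of the intra-group analysis mentioned in step (b), here is a minimal sketch, assumed rather than taken from the thesis, of clustering the users of one participation category by their activity features; the file name, feature columns, and number of clusters are hypothetical placeholders.

```python
# Minimal clustering sketch: group users of one category into subgroups
# based on a few (hypothetical) participation features.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

users = pd.read_csv("high_participators.csv")    # one row per user in a category
feature_cols = ["answers_per_day", "threads_joined", "median_reply_delay"]

scaled = StandardScaler().fit_transform(users[feature_cols])  # common scale
kmeans = KMeans(n_clusters=3, random_state=0, n_init=10).fit(scaled)

users["subgroup"] = kmeans.labels_
print(users.groupby("subgroup")[feature_cols].mean())  # compare subgroup profiles
```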
Contributors: Samala, Ritesh Reddy (Author) / Walker, Erin (Thesis advisor) / VanLehn, Kurt (Committee member) / Hsieh, Gary (Committee member) / Wetzel, Jon (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Students seldom spontaneously collaborate with each other. A system that can measure collaboration in real time could be useful, for example, by helping the teacher locate a group requiring guidance. To address this challenge, the research presented here focuses on building and comparing collaboration detectors for different types of classroom problem solving activities, such as card sorting and handwriting.

Transfer learning using different representations was also studied, with the goal that collaboration detectors built for one task can be used with a new task. Data for building such detectors were collected in the form of verbal interaction and user action logs from students' tablets. Three qualitative levels of interactivity were distinguished: Collaboration, Cooperation, and Asymmetric Contribution. Machine learning was used to induce a classifier that assigns a code to every episode based on the set of features. The results indicate that the machine-learned classifiers were reliable and can transfer across tasks.
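To show how cross-task transfer could be measured in principle, the following is a hypothetical sketch: fit a detector on episodes from one activity (e.g., card sorting) and score it on episodes from another (e.g., handwriting). The file names, shared feature representation, and label column are placeholders, not the thesis's actual data or pipeline.

```python
# Hypothetical cross-task evaluation: train on one activity, test on another,
# assuming both tasks share the same feature representation and code labels.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score

FEATURES = ["speech_overlap", "turn_taking_rate", "actions_per_minute"]

source = pd.read_csv("card_sorting_episodes.csv")   # training task
target = pd.read_csv("handwriting_episodes.csv")    # new task

detector = RandomForestClassifier(n_estimators=300, random_state=0)
detector.fit(source[FEATURES], source["code"])      # code: collaboration / cooperation / asymmetric

predicted = detector.predict(target[FEATURES])
print("cross-task kappa:", cohen_kappa_score(target["code"], predicted))
```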
Contributors: Viswanathan, Sree Aurovindh (Author) / VanLehn, Kurt (Thesis advisor) / Hsiao, Ihan (Committee member) / Walker, Erin (Committee member) / D'Angelo, Cynthia (Committee member) / Arizona State University (Publisher)
Created: 2020