Matching Items (26)

Description
Experience, whether personal or vicarious, plays an influential role in shaping human knowledge. Through these experiences, one develops an understanding of the world, which leads to learning. Gaining knowledge in higher education extends beyond the passive transmission of knowledge from an expert to a novice; instead, students are encouraged to engage actively in every learning opportunity to achieve mastery in their chosen field. Such mastery is typically evaluated with educational assessments that provide objective measures of whether the student has mastered what is required of them. With the proliferation of educational technology in the modern classroom, information about students is being collected at an unprecedented rate, covering demographic, performance, and behavioral data. In the absence of analytics expertise, stakeholders may miss valuable insights that could guide future instructional interventions, especially in helping students understand their strengths and weaknesses.

This dissertation presents the Web-Programming Grading Assistant (WebPGA), a homegrown educational technology designed around learning-sciences principles that has been used by more than 6,000 students. In addition to streamlining and improving the grading process, it encourages students to reflect on their performance. WebPGA integrates learning analytics into educational assessments using students' physical and digital footprints. A series of classroom studies is presented demonstrating the use of learning analytics and assessment data to make students aware of their misconceptions, with the aim of developing ways for students to learn from previous mistakes, whether their own or others'.

The key findings of this dissertation include the identification of effective strategies of better-performing students, the demonstration of the importance of individualized guidance during the reviewing process, and the likely impact of validating one's understanding of another's experiences. Moreover, the Personalized Recommender of Items to Master and Evaluate (PRIME) framework is introduced: a novel, intelligent approach for diagnosing one's domain mastery and providing tailored learning opportunities by allowing students to observe others' mistakes. This dissertation thus lays the groundwork for further improvement and inspires better use of available data to improve the quality of educational assessments, to the benefit of both students and teachers.
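The abstract does not detail PRIME's internals, so the following is only a hypothetical sketch of its central idea: pairing a diagnosis of a student's weak concepts with items on which peers made instructive mistakes. All names and values here are invented, not taken from the dissertation.

from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    concept: str
    peer_error_rate: float  # fraction of peers who erred on this item

def recommend(mastery: dict, items: list, k: int = 3) -> list:
    """Rank items so weak concepts with common peer mistakes come first."""
    def score(item: Item) -> float:
        # Low mastery (1 - mastery) and frequent peer errors both raise priority.
        return (1.0 - mastery.get(item.concept, 0.0)) * item.peer_error_rate
    return sorted(items, key=score, reverse=True)[:k]

items = [Item("q1", "loops", 0.40), Item("q2", "recursion", 0.65), Item("q3", "arrays", 0.10)]
print([i.item_id for i in recommend({"loops": 0.8, "recursion": 0.3, "arrays": 0.9}, items)])
# -> ['q2', 'q1', 'q3']: recursion is weakest and peers err on it most often.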
Contributors: Paredes, Yancy Vance (Author) / Hsiao, I-Han (Thesis advisor) / VanLehn, Kurt (Thesis advisor) / Craig, Scotty D (Committee member) / Bansal, Srividya (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2023

Description
Question Answering has been under active research for decades, but it has recently taken the spotlight following IBM Watson's success in Jeopardy! and the spread of digital assistants such as Apple's Siri, Google Now, and Microsoft Cortana to virtually every smartphone and browser. However, most research in Question Answering targets factual questions rather than deep ones such as "How" and "Why" questions.

In this dissertation, I suggest a different approach to tackling this problem: the answers to deep questions need to be formally defined before they can be found.

Because these answers must be defined with respect to something more structured than natural language text, I define Knowledge Description Graphs (KDGs), graphical structures containing information about events, entities, and classes. I then propose formulations and algorithms to construct KDGs from a frame-based knowledge base, define the answers to various "How" and "Why" questions with respect to KDGs, and show how to obtain the answers from KDGs using Answer Set Programming. Moreover, I discuss how to derive missing information when constructing KDGs from an under-specified knowledge base, and how to answer many factual question types with respect to the knowledge base.
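The abstract leaves the KDG formalism itself abstract; as a toy illustration only (the real construction works from a frame-based knowledge base and the answering uses Answer Set Programming), a "How" question can be read as a walk over sub-event edges:

# Hypothetical KDG-style structure; the dissertation's formalism is richer.
kdg = {
    # event -> ordered list of sub-events that realize it
    "bake_bread": ["mix_ingredients", "knead_dough", "bake_in_oven"],
    "mix_ingredients": [],
    "knead_dough": [],
    "bake_in_oven": [],
}

def answer_how(event: str) -> list:
    """Answer 'How does <event> happen?' as the sequence of its sub-events."""
    steps = []
    for sub in kdg.get(event, []):
        steps.append(sub)
        steps.extend(answer_how(sub))  # recurse into nested sub-events
    return steps

print(answer_how("bake_bread"))
# -> ['mix_ingredients', 'knead_dough', 'bake_in_oven']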

Having defined the answers to various questions with respect to a knowledge base, I extend the research to specify deep questions and the knowledge base in natural language text, and to generate natural language text from those specifications. Toward these goals, I developed NL2KR, a system that helps translate natural language into formal language. I show NL2KR's use in translating "How" and "Why" questions, and in generating simple natural language sentences from natural language KDG specifications. Finally, I discuss applications of the components I developed in Natural Language Understanding.
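NL2KR's actual pipeline involves learned word meanings and syntactic parsing; the hardcoded two-word lexicon below is purely a toy illustration of the underlying idea of composing word-level meanings into a formal representation:

# Illustrative only: not NL2KR's API or lexicon format.
lexicon = {
    "John": "john",
    "walks": lambda subject: f"walks({subject})",
}

def translate(sentence: str) -> str:
    """Translate a two-word subject-verb sentence into a logical atom."""
    subject, verb = sentence.split()
    return lexicon[verb](lexicon[subject])

print(translate("John walks"))  # -> walks(john)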
Contributors: Vo, Nguyen Ha (Author) / Baral, Chitta (Thesis advisor) / Lee, Joohyung (Committee member) / VanLehn, Kurt (Committee member) / Tran, Son Cao (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
Embedded assessment constantly updates a model of the student as the student works on instructional tasks. Accurate embedded assessment allows students, instructors, and instructional systems to make informed decisions without requiring the student to stop instruction and take a test. This thesis describes the development and comparison of several student models for Dragoon, an intelligent tutoring system. All the models were instances of Bayesian Knowledge Tracing, a standard method. Several methods of parameterization and calibration were explored using two recently developed toolkits: BNT-SM, and FAST, which replaces constant-valued parameters with logistic regressions. The evaluation was done by calculating the fit of the models to data from human subjects and by assessing the accuracy of their assessment of simulated students. The student models created using node properties as subskills were superior to coarse-grained, skill-only models. Adding this extra level of representation to the emission parameters was superior to adding it to the transition parameters. Adding difficulty parameters did not improve fit, contrary to standard practice in psychometrics.
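Bayesian Knowledge Tracing itself is well documented, so a minimal sketch (with made-up parameter values) can show where the emission parameters (guess and slip) and the transition parameter (learning rate) sit, the two places the thesis compares for adding subskill-level representation:

def bkt_update(p_know: float, correct: bool,
               guess: float = 0.2, slip: float = 0.1,
               transit: float = 0.15) -> float:
    """Return P(skill known) after observing one answer."""
    if correct:
        # Emission step: correct answers come from knowledge or a lucky guess.
        posterior = (p_know * (1 - slip)) / (
            p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        # Incorrect answers come from a slip or from not knowing.
        posterior = (p_know * slip) / (
            p_know * slip + (1 - p_know) * (1 - guess))
    # Transition step: the student may learn between opportunities.
    return posterior + (1 - posterior) * transit

p = 0.3
for obs in [True, True, False, True]:
    p = bkt_update(p, obs)
    print(f"P(known) = {p:.3f}")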
Contributors: Grover, Sachin (Author) / VanLehn, Kurt (Thesis advisor) / Walker, Erin (Committee member) / Shiao, Ihan (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
Online learning communities have changed the way users learn, thanks to the technological affordances Web 2.0 has offered. This shift has produced different kinds of learning communities, such as massive open online courses (MOOCs), learning management systems (LMSs), and question-and-answer learning communities. Question-and-answer communities are an important part of social information seeking: thousands of users participate in them on the web, on sites like Stack Overflow, Yahoo Answers, and Wiki Answers. Research on user participation across online communities identifies a recurring phenomenon: a few users answer a high percentage of questions and thus sustain the community. This principle implies two major categories of user participation, people who ask questions and people who answer them. In this research, I look beyond this traditional view and identify multiple, subtler categories of user participation. Identifying multiple categories of users makes it possible to support each group specifically, helping to sustain the community.

In this thesis, the participation behavior of users in OpenStudy, an open, learning-oriented question-and-answer community, is analyzed. Users were first grouped into categories based on the number of questions they had answered: non-participators, sample participators, and low, medium, and high participators. Users were then compared across several features reflecting temporal, content, and question/thread-specific dimensions of participation, including features suggestive of learning in OpenStudy.

The goal of this thesis is to analyze user participation in three steps:

a. Inter-group participation analysis: compare the pre-assumed user groups across the participation features extracted from OpenStudy data.

b. Intra-group participation analysis: identify subgroups within each category and examine how participation differs within each group, using unsupervised learning techniques (a sketch follows this list).

c. With these grouping insights, suggest interventions that might support each category of users, for the benefit of both the users and the community.
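The thesis does not name the clustering algorithm or the exact features in this abstract, so the following is only a sketch of the intra-group idea under those assumptions; k-means and the feature names are stand-ins:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows: users within one participation category. Columns (invented):
# answers/week, median reply delay (hours), mean answer length (words).
features = np.array([
    [12.0,  1.5, 80.0],
    [11.0,  2.0, 75.0],
    [ 3.0, 20.0, 15.0],
    [ 2.5, 24.0, 12.0],
])

scaled = StandardScaler().fit_transform(features)  # put features on one scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(labels)  # e.g. [0 0 1 1]: fast, prolific answerers vs. occasional ones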

This thesis presents new insights into participation because of the broad range of features extracted and their significance in understanding the behavior of users in this learning community.
Contributors: Samala, Ritesh Reddy (Author) / Walker, Erin (Thesis advisor) / VanLehn, Kurt (Committee member) / Hsieh, Gary (Committee member) / Wetzel, Jon (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
Learning programming involves a variety of complex cognitive activities, from abstract knowledge construction to structural operations, including program design, modification, debugging, and documentation tasks. The objective of this work was to investigate the barriers and obstacles that novice programming learners encounter and how they overcome them. Several lab and classroom studies were designed and conducted; the results showed that novice students exhibited different behavior patterns than experienced learners, indicating the obstacles they encountered. The studies also showed that proper assistance could help novices find helpful materials to read. However, novices still suffered from a lack of background knowledge and limited cognitive capacity while learning, which made programming-related materials, especially code examples, difficult to understand. I therefore proposed using a natural language generator (NLG) to produce code explanations for educational purposes. The generator is based on Long Short-Term Memory (LSTM), a deep-learning translation model. To build the model, a data set of human experts' explanations of individual lines of programming code was collected via Amazon Mechanical Turk (AMT).
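The abstract specifies only that the generator is an LSTM-based translation model, so the following is a sketch of what such an encoder-decoder might look like; the class name, vocabulary sizes, and dimensions are all assumptions:

import torch
import torch.nn as nn

class CodeExplainer(nn.Module):
    """Hypothetical LSTM encoder-decoder: code tokens in, explanation out."""
    def __init__(self, code_vocab: int, text_vocab: int, hidden: int = 256):
        super().__init__()
        self.code_emb = nn.Embedding(code_vocab, hidden)
        self.text_emb = nn.Embedding(text_vocab, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, text_vocab)

    def forward(self, code_tokens, text_tokens):
        # Encode the code line; reuse its final state to seed the decoder.
        _, state = self.encoder(self.code_emb(code_tokens))
        dec_out, _ = self.decoder(self.text_emb(text_tokens), state)
        return self.out(dec_out)  # logits over the explanation vocabulary

model = CodeExplainer(code_vocab=5000, text_vocab=8000)
code = torch.randint(0, 5000, (2, 12))  # batch of 2 code lines, 12 tokens each
text = torch.randint(0, 8000, (2, 20))  # teacher-forced explanation tokens
print(model(code, text).shape)          # torch.Size([2, 20, 8000])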

To evaluate the model, a pilot study was conducted; it showed that the readability of the machine-generated (MG) explanations was comparable to that of human explanations, although their accuracy was still not ideal, especially for complicated code lines. Furthermore, a code-example-based learning platform was developed to apply the explanation-generating model to programming teaching. To examine the effect of code-example explanations on different learners, two lab-class experiments were conducted separately, one in a class of programming novices and one in a class of advanced students. The results indicated that when learning programming concepts, the MG code explanations significantly improved learning predictability for novices compared to the control group, and the explanations also extended the novices' learning time by giving them more material to read, potentially leading to better learning gains. In addition, a correlation model was constructed from the experimental results to illustrate the connections between the different factors and the learning effect.
Contributors: Lu, Yihan (Author) / Hsiao, I-Han (Thesis advisor) / VanLehn, Kurt (Committee member) / Tong, Hanghang (Committee member) / Yang, Yezhou (Committee member) / Price, Thomas (Committee member) / Arizona State University (Publisher)
Created: 2020

Description
Persistent self-assessment is the key to proficiency in computer programming. The process involves distributed practice of code-tracing and code-writing skills, which requires a large amount of training tailored to the student's learning condition. It requires the instructor to efficiently manage learning resources and diligently generate related programming questions for the student. However, programming question generation (PQG) is not an easy job. The instructor has to organize heterogeneous types of resources, i.e., conceptual programming concepts and procedural programming rules. S/he also has to carefully align the learning goals with the design of questions with regard to topic relevance and complexity. Although numerous educational technologies such as learning management systems (LMSs) have been adopted across levels of programming learning, PQG still relies largely on demanding manual authoring by the instructor, without advanced technological support. To fill this gap, I propose a knowledge-based PQG model that aims to help the instructor generate new programming questions and expand existing assessment items. The PQG model is designed to transform conceptual and procedural programming knowledge from textbooks into a semantic network model by means of a Local Knowledge Graph (LKG) and an Abstract Syntax Tree (AST). For a given question, the model can generate a set of new questions from the associated LKG/AST semantic structures. Using the model, I compared instructor-made questions from 9 undergraduate programming courses with textbook questions, which showed that the instructor-made questions were much less complex than the textbook ones. The analysis also revealed differences in topic distribution between the two question sets. A classification analysis further showed that question complexity was correlated with student performance. To evaluate the performance of PQG, a group of experienced instructors from introductory programming courses was recruited. The results showed that the machine-generated questions were semantically similar to the instructor-generated questions, and the questions received significantly positive feedback regarding topic relevance and extensibility. Overall, this work demonstrates a feasible PQG model and sheds light on AI-assisted PQG for the future development of intelligent authoring tools for programming learning.
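The abstract does not spell out the AST transformations, so the following covers only that half of the idea as a hedged sketch (the LKG side is omitted): parse a seed question's code and rewrite it into a structurally similar variant. The specific rewrite rule here is an invented example.

import ast

class SwapComparison(ast.NodeTransformer):
    """Flip < to > in comparisons to create a structurally similar variant."""
    def visit_Compare(self, node: ast.Compare) -> ast.Compare:
        node.ops = [ast.Gt() if isinstance(op, ast.Lt) else op for op in node.ops]
        return node

seed = "if x < 10:\n    print('small')"
variant = ast.unparse(SwapComparison().visit(ast.parse(seed)))
print(variant)
# if x > 10:
#     print('small')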
Contributors: Chung, Cheng-Yu (Author) / Hsiao, Ihan (Thesis advisor) / VanLehn, Kurt (Committee member) / Sahebi, Shaghayegh (Committee member) / Bansal, Srividya (Committee member) / Arizona State University (Publisher)
Created: 2022