Matching Items (2)
Description
Reading comprehension is a critical aspect of life in America, but many English language learners struggle with this skill. Enhanced Moved by Reading to Accelerate Comprehension in English (EMBRACE) is a tablet-based interactive learning environment designed to improve reading comprehension. While a child uses EMBRACE, all interactions with the system are logged, including correct and incorrect behaviors and help requests. These interactions could potentially be used to predict the child's reading comprehension, providing an online measure of understanding. In addition, time-related features have been used by educational data mining models to predict learning in mathematics and science, and may be relevant in this context as well. This project investigated the predictive value of data mining models based on user actions for reading comprehension, with and without timing information. The investigation produced contradictory results: the KNN and SVM models indicated that elapsed time is an important feature, but the linear regression models indicated that it is not. Finally, a new statistical test performed on the KNN algorithm indicated that the feature selection process may have caused overfitting, with features chosen due to coincidental alignment with the participants' performance. These results provide important insights that will aid the development of a reading comprehension predictor, improving the EMBRACE system's ability to serve ELLs.
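The model comparison the abstract describes can be illustrated with a minimal sketch: fit KNN, SVM, and linear regression models on action features with and without an elapsed-time column, and compare cross-validated scores. This is not the thesis code; the data below is synthetic and the feature names are hypothetical stand-ins for EMBRACE's logged interactions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

# Synthetic stand-in for interaction logs (hypothetical features):
# counts of incorrect behaviors / help requests, plus elapsed time.
rng = np.random.default_rng(0)
n = 120
action_features = rng.poisson(5.0, size=(n, 3)).astype(float)
elapsed_time = rng.exponential(60.0, size=(n, 1))
comprehension = (0.5 * action_features[:, 0]
                 + 0.02 * elapsed_time[:, 0]
                 + rng.normal(0.0, 1.0, n))

X_without_time = action_features
X_with_time = np.hstack([action_features, elapsed_time])

models = {"KNN": KNeighborsRegressor(n_neighbors=5),
          "SVM": SVR(),
          "Linear": LinearRegression()}

# Compare cross-validated R^2 with and without the timing feature.
results = {}
for name, model in models.items():
    r2_without = cross_val_score(model, X_without_time, comprehension, cv=5).mean()
    r2_with = cross_val_score(model, X_with_time, comprehension, cv=5).mean()
    results[name] = (r2_without, r2_with)
    print(f"{name}: R^2 without time = {r2_without:.2f}, with time = {r2_with:.2f}")
```

With real data, a per-model gap between the two scores is one way the importance of elapsed time could diverge across model families, as the abstract reports.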
Contributors: Dexheimer, Matthew Scott (Author) / Walker, Erin (Thesis advisor) / Glenberg, Arthur (Committee member) / VanLehn, Kurt (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Persistent self-assessment is the key to proficiency in computer programming. The process involves distributed practice of code tracing and writing skills, which encompasses a large amount of training tailored to the student's learning condition. It requires the instructor to efficiently manage learning resources and diligently generate related programming questions for the student. However, programming question generation (PQG) is not an easy job. The instructor has to organize heterogeneous types of resources, i.e., conceptual programming concepts and procedural programming rules. S/he also has to carefully align the learning goals with the design of questions with regard to topic relevance and complexity. Although numerous educational technologies like learning management systems (LMS) have been adopted across levels of programming learning, PQG still largely depends on a demanding creation task performed by the instructor without advanced technological support. To fill this gap, I propose a knowledge-based PQG model that aims to help the instructor generate new programming questions and expand existing assessment items. The PQG model is designed to transform conceptual and procedural programming knowledge from textbooks into a semantic network model through the Local Knowledge Graph (LKG) and the Abstract Syntax Tree (AST). For a given question, the model can generate a set of new questions from the associated LKG/AST semantic structures. I used the model to compare instructor-made questions from 9 undergraduate programming courses with textbook questions, which showed that the instructor-made questions were much simpler in complexity than the textbook ones. The analysis also revealed differences in topic distribution between the two question sets. A classification analysis further showed that question complexity was correlated with student performance.
To evaluate the performance of PQG, a group of experienced instructors from introductory programming courses was recruited. The results showed that the machine-generated questions were semantically similar to the instructor-generated questions, and the questions received significantly positive feedback regarding topic relevance and extensibility. Overall, this work demonstrates a feasible PQG model that sheds light on AI-assisted PQG for the future development of intelligent authoring tools for programming learning.
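The AST side of the pipeline can be sketched briefly: parse a question's code snippet and list the syntactic constructs it exercises, which a generator could then map to related concepts. This is a hedged illustration using Python's standard `ast` module, not the dissertation's implementation; the function name and example snippet are hypothetical.

```python
import ast

def constructs_used(code: str) -> set[str]:
    """Return the AST node-type names appearing in a code snippet."""
    tree = ast.parse(code)
    return {type(node).__name__ for node in ast.walk(tree)}

question_code = "for i in range(3):\n    total = total + i"
print(constructs_used(question_code))
# A generator could look up the extracted constructs (For, Assign, BinOp, ...)
# in a knowledge graph to retrieve related concepts and template new questions.
```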
Contributors: Chung, Cheng-Yu (Author) / Hsiao, Ihan (Thesis advisor) / VanLehn, Kurt (Committee member) / Sahebi, Shaghayegh (Committee member) / Bansal, Srividya (Committee member) / Arizona State University (Publisher)
Created: 2022